entry_id | published | title | authors | primary_category | categories | text
---|---|---|---|---|---|---
http://arxiv.org/abs/2409.03384v1 | 20240905094325 | Hardware Acceleration of LLMs: A comprehensive survey and comparison | ["Nikoletta Koilia", "Christoforos Kachris"] | cs.AR | ["cs.AR", "cs.AI"] |
Hardware Acceleration of LLMs: A comprehensive survey and comparison
Nikoletta Koilia
Department of Electrical
and Electronics Engineering
University of West Attica
Athens, Greece
eee19387106@uniwa.gr
Christoforos Kachris
Department of Electrical
and Electronics Engineering
University of West Attica
Athens, Greece
kachris@uniwa.gr
§ ABSTRACT
Large Language Models (LLMs) have emerged as powerful tools for natural language processing tasks, revolutionizing the field with their ability to understand and generate human-like text. In this paper, we present a comprehensive survey of the several research efforts that have been presented for the acceleration of transformer networks for Large Language Models using hardware accelerators.
The survey presents the frameworks that have been proposed and then performs a qualitative and quantitative comparison regarding the technology, the processing platform (FPGA, ASIC, In-Memory, GPU), the speedup, the performance (GOPs), and the energy efficiency (GOPs/W) of each framework.
The main challenge in such a comparison is that every proposed scheme is implemented on a different process technology, which makes a fair comparison difficult. The main contribution of this paper is that we extrapolate the performance and energy-efficiency results to the same process technology using two approaches, one theoretical and one experimental. We implement part of the LLMs on several FPGA chips to extrapolate the results to the same process technology and then make a fair comparison of the performance.
hardware acceleration, survey, FPGAs, ASIC, large language models
§ INTRODUCTION
Modeling human language on a large scale is a complex process that has taken decades to develop. It started in 1950 with Claude Shannon, who applied information theory to human language. Since then, tasks like translation and speech recognition have advanced significantly.
Artificial Intelligence (AI) and Machine Learning (ML) are key to this progress. ML, a subset of AI, allows computers to learn from data. ML models are either supervised (making predictions) or unsupervised. This work focuses on supervised models, which predict and compare values to minimize error through optimization.
Deep Learning models are divided into Generative (creating new data) and Discriminative (distinguishing data types). Generative AI, a subset of deep learning, uses neural networks to process labeled and unlabeled data. Large Language Models (LLMs) help understand characters, words, and texts.
In 2017, transformers revolutionized language modeling. Transformers, a type of neural network, handle long-term text dependencies using an attention mechanism. Google created the first transformer model for text translation in 2017. Transformers have since evolved, improving attention mechanisms and architectures.
ChatGPT, a notable LLM, predicts text continuations and performs tasks like answering questions, summarizing texts, and more. It uses probability distributions to generate various text forms based on user requests.
§.§ LLMs
Large Language Models (LLMs) are extensive, general-purpose models that can be pre-trained and adapted for specific tasks. They solve common language problems such as text classification, question answering, summarization, and text generation in various domains.
LLMs are "general-purpose" because they handle diverse tasks and "large" due to their massive training datasets and numerous parameters. These models have multiple neural network layers with adjustable weights that learn to predict the next word in a sentence during training.
The number of parameters indicates the model's complexity and capacity. Weights, adjusted during training, connect neurons in different layers, influencing the model's performance.
Transformers, a type of LLM, consist of an encoder and a decoder. The encoder has six layers, each with Multi-Head Self-Attention and a feed-forward network. The decoder has six layers, including an additional multi-head attention layer over the encoder's output.
The attention mechanism maps queries and key-value pairs to outputs, with positional encoding adding information about token positions. This architecture enables transformers to handle long-term dependencies in text effectively.
§.§ Encoder-Decoder
The encoder-decoder architecture is central to Large Language Models (LLMs) and designed to process and generate sequences. This architecture has two stages:
* Encoder: The input (e.g., natural language) is transformed into a vector representation that encapsulates the meaning of the input.
* Decoder: The decoder takes this vector representation and generates an output sequence, such as a translation into another language.
§.§ Attention Mechanism
The Attention Mechanism is vital in modern machine learning, especially in transformers, improving sequence processing tasks like translation and text generation. It connects both the encoder and decoder stages.
The Attention Mechanism includes two further mechanisms: Multi-Head Attention and Self-Attention.
The former focuses attention on different parts of the input simultaneously, allowing the model to recognize complex patterns and relationships in the input data. The latter captures dependencies and relationships between tokens regardless of their distance. It uses three matrices: Query (Q), Key (K), and Value (V). These matrices determine how much attention each token should give to another, enhancing the quality of translations and other sequence-based tasks.
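To make the role of the Query, Key, and Value matrices concrete, the following minimal NumPy sketch (illustrative only, with a toy sequence length and hypothetical head dimension) shows how Q, K, and V interact in scaled dot-product attention; multi-head attention simply repeats the same operation on different learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the chosen axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) -- a single attention head
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how strongly each token attends to every other token
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted sum of value vectors

# Toy example: 4 tokens, 8-dimensional head (hypothetical sizes)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```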
§.§ Related work
Until now, there has not been any comprehensive survey on hardware accelerators to speed up the most computationally intensive tasks of Transformers. In <cit.>, a survey on the hardware acceleration of transformer networks for autonomous driving has been presented. The paper covers several efforts on the acceleration of tasks such as object detection, 3D segmentation, and lane detection.
In 2022, Huang et al. presented a survey on hardware acceleration for transformers <cit.>. The paper was mostly focused on transformer model compression algorithms targeting hardware accelerators and was limited mostly to FPGA-based implementations.
In 2023, Emani et al <cit.> presented a comprehensive performance study of LLMs on several computing platforms and evaluated their performance characteristics for these models.
In this paper, we present a comprehensive survey of the several research efforts that have been presented for the acceleration of transformer networks for Large Language models and NLP using hardware accelerators. The survey presents the frameworks that have been proposed and then performs a qualitative and quantitative comparison regarding the technology, the processing platform (GPU, FPGA, ASIC, In-Memory), the performance, and the energy efficiency of each framework. First, we present the accelerators based on FPGAs, then we present the accelerators targeting GPUs and finally accelerators ported on ASICs and In-memory architectures.
The main contributions of this paper are the following:
* An extensive survey of hardware acceleration of LLMs using FPGAs, ASICs, In-memory architectures and GPUs.
* A comparison in terms of performance (GOPs), energy efficiency (GOPs/W) and speedup.
* An extrapolation of the features to the same technology for a fair comparison in terms of performance and energy efficiency.
§ FPGA-BASED ACCELERATORS
§.§ FTRANS
In 2020, Li et al. <cit.> presented a hardware acceleration framework, called FTRANS, targeting the acceleration of transformer-based large-scale language representations. It focuses on compression and acceleration to address computing and storage requirements, achieving up to 16x compression with minimal accuracy loss through a Block Circulant Matrix (BCM)-based weight representation. The framework significantly improves speed and energy efficiency, surpassing CPU and GPU implementations: FTRANS is 81x faster and 9x more energy-efficient than the alternatives, specifically compared to an RTX 5000 GPU, using a VCU118 (16nm) board. The accelerator achieves a performance of 170 GOPs and an energy efficiency of 6.8 GOPs/W.
§.§ Multi-Head Attention
In 2020, Lu et al. presented an FPGA based architecture for the acceleration of the most computationally intensive parts of transformer networks <cit.>. In their work they propose a novel hardware accelerator for two key components, i.e., the multi-head attention (MHA) ResBlock and the position-wise feed-forward network (FFN) ResBlock, which are the two most complex layers in the Transformer.
The proposed framework is implemented on a Xilinx FPGA. Based on the performance evaluation the proposed design achieves a speed-up of 14.6× compared to a V100 GPU.
§.§ FPGA NPE
In 2021, Khan et al. presented an FPGA acceleration for language models called NPE. <cit.>. The NPE architecture consists of an instruction control unit (ICU), a memory read unit (MRU), a memory write unit (MWU), a matrix multiply unit (MMU), and a nonlinear vector unit (NVU).
NPE was implemented on a Xilinx Zynq Z-7100 FPGA board clocked at 200 MHz. NPE is compared with other frameworks like FTRANS and with implementations on CPU and GPU. Although there is no significant speedup compared to other computing platforms, the main advantage is the energy efficiency. NPE achieves around 4x better energy efficiency over a CPU (i7-8700k) and 6x over a GPU (RTX 5000).
§.§ Column Balanced Block Pruning
In 2021, Peng et al. presented a novel scheme for accelerating Transformer networks using column-balanced block-wise pruning <cit.>. This pruning scheme combines the key features of both bank-balanced pruning and block-wise pruning: it ranks the blocks' L2 norms within each column to obtain per-column pruning thresholds and then prunes blocks column by column.
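As an illustration of the idea (a simplified sketch of our own, not the authors' exact implementation), the following NumPy snippet partitions a weight matrix into blocks, ranks the blocks in each block-column by their L2 norm, and zeroes out the weakest blocks so that every column keeps the same number of surviving blocks.

```python
import numpy as np

def column_balanced_block_prune(W, block=4, keep_ratio=0.5):
    """Zero out low-L2-norm blocks, keeping the same number of blocks per block-column."""
    rows, cols = W.shape
    assert rows % block == 0 and cols % block == 0
    n_block_rows, n_block_cols = rows // block, cols // block
    keep = max(1, int(round(keep_ratio * n_block_rows)))
    Wp = W.copy()
    for bc in range(n_block_cols):
        col = slice(bc * block, (bc + 1) * block)
        # L2 norm of every block in this block-column
        norms = np.array([np.linalg.norm(W[br * block:(br + 1) * block, col])
                          for br in range(n_block_rows)])
        # prune everything below the keep-th largest norm in this column
        for br in np.argsort(norms)[:-keep]:
            Wp[br * block:(br + 1) * block, col] = 0.0
    return Wp

W = np.random.randn(8, 8)
print(column_balanced_block_prune(W, block=4, keep_ratio=0.5))
```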
The proposed framework has been implemented on different hardware platforms (an Intel i5-5257U (2.7 GHz) CPU, an Nvidia Jetson TX2 GPU, and a Xilinx Alveo U200 FPGA) for further comparison of latency and throughput. The experimental results showed that the FPGA platform achieves an 11x speedup compared to the CPU platform and a 2x speedup compared to the GPU platform.
§.§ Compressed Block Row
In 2021, Panjie Qi et al, presented an acceleration framework that combines balanced model compression at the algorithm level with an FPGA implementation optimization at the hardware level <cit.>. In their work, they propose an effective sparse matrix storage structure for block-balanced pruning, known as Compressed Block Row (CBR), and their hardware design includes an accelerator for sparse models. Moreover, they present a performance analytic methodology for evaluating accelerator performance. The experiments demonstrate that their CBR format outperforms conventional formats and saves substantial storage space.
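The sketch below shows a plausible block-level compressed-row layout in the spirit of CBR (our own simplification; the exact field layout of the authors' format may differ): only non-zero blocks are stored, together with CSR-like block-row pointers and block-column indices, and a matrix-vector product only visits the surviving blocks.

```python
import numpy as np

def to_block_compressed_rows(W, block=4):
    """CSR-like storage at block granularity: keep only non-zero blocks."""
    n_br, n_bc = W.shape[0] // block, W.shape[1] // block
    col_idx, data, row_ptr = [], [], [0]
    for br in range(n_br):
        for bc in range(n_bc):
            blk = W[br * block:(br + 1) * block, bc * block:(bc + 1) * block]
            if np.any(blk):                  # store only non-zero blocks
                col_idx.append(bc)
                data.append(blk.copy())
        row_ptr.append(len(col_idx))         # where the next block-row starts
    return row_ptr, col_idx, data

def block_sparse_matvec(row_ptr, col_idx, data, x, block=4):
    y = np.zeros(block * (len(row_ptr) - 1))
    for br in range(len(row_ptr) - 1):
        for k in range(row_ptr[br], row_ptr[br + 1]):
            bc = col_idx[k]
            y[br * block:(br + 1) * block] += data[k] @ x[bc * block:(bc + 1) * block]
    return y

W = np.random.randn(8, 8)
W[:4, 4:] = 0.0                              # make one block exactly zero
row_ptr, col_idx, data = to_block_compressed_rows(W)
x = np.random.randn(8)
print(np.allclose(block_sparse_matvec(row_ptr, col_idx, data, x), W @ x))  # True
```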
The proposed framework is implemented on a Xilinx Alveo U200 FPGA. Based on the performance evaluation, the proposed design achieves a speed-up of 38x compared to an Nvidia Quadro RTX 6000.
§.§ ViA
In 2022, Teng Wang et al, presented ViA <cit.>, an FPGA-based accelerator architecture for Vision Transformers (ViT), featuring a memory recognition unit, a memory write unit, and processing elements like the NSA self-attention module and MLP. It proposes data partitioning strategies to enhance efficiency and reduce dependency. ViA's FPGA implementation significantly outperforms CPUs, GPUs, and previous FPGA accelerators, achieving 60x the speed and 5x the energy efficiency of alternatives like the Nvidia Tesla V100 and Alveo U50 (16nm). ViA reaches an acceleration rate of 309.6 GOPs and an energy efficiency rate of 7.9 GOPs/W.
§.§ FPGA DFX
In 2022, Hong et al. presented DFX <cit.> for the acceleration of the transformer networks used in LLMs. Similarly to NPE, the DFX architecture proposes a modular design consisting of several compute cores for the acceleration of the transformer networks.
For the evaluation, DFX has been implemented on an Intel Xeon Gold 6226R CPU with four Xilinx Alveo U280 data center acceleration cards. DFX achieves an average of 3.8x throughput and 4x higher energy efficiency compared to the GPU appliances.
§.§ STA
In 2022, Chao Fang et al. presented the Sparse Transformer Accelerator (STA) on FPGA to address the high computational demands of transformer models <cit.>. Utilizing an N:M structured-sparsity scheme, the STA minimizes operations and memory size while enhancing performance. The design includes a unified matrix multiplication mechanism, a Softmax module, and a Dense Matrix Multiplication Engine (DMME), implemented on an Intel Arria 10 SX660 device. It significantly improves energy efficiency and reduces latency compared to previous FPGA methods.
The STA is divided into STA-4 and STA-8 subcategories. STA-4 achieves 6.7 times better performance and is 10 times more energy-efficient than other models, with an acceleration rate of 392.8 GOPs and energy efficiency of 33.6 GOPs/W, using Nvidia RTX 2080Ti for comparison. STA-8, while slightly less performant with 4.4x better performance, offers 12.3x better energy efficiency, achieving an acceleration rate of 523.8 GOPs and energy efficiency of 41.2 GOPs/W.
§.§ FPGA OPU
In 2023, Bai et al. proposed another scheme for the acceleration of transformer networks called Overlay OPU <cit.>. They propose a configurable computation unit to support the inference of diverse networks. Specifically, they propose 48 processing elements (PEs) that are configured for the acceleration of transformer networks. The output stage of the adder tree can be switched during the inference process. That way, data from forwarding modules can flow through the computation unit in a pre-defined connection state. The proposed scheme achieves a 5-15x speedup compared with a CPU, a 1.1-2.9x speedup compared with a GPU (RTX 3090), and a 1.1-2.5x speedup compared with other FPGA accelerators such as NPE <cit.>.
§.§ FPGA acceleration of Transformer networks
In 2022, Tzanos et al. presented a high-performance hardware accelerator for transformer networks <cit.>. Transformer networks use a technique called attention. Attention, a concept adopted from the field of neuroscience, is the ability to selectively concentrate on specific data while ignoring other data in the environment. In deep learning, we imitate this technique through attention mechanisms; one way to achieve this is to encode a sequence not into a single fixed vector but to create a model that produces a vector for each output step by adding a set of weights which are later optimized.
The performance evaluation showed that the proposed framework can achieve 2.3x system speedup for the BERT model compared to a 40-thread processor and 80.5x speed-up over a single-core CPU.
§.§ FlexRun
In 2023, Hur et al. presented an FPGA-based accelerator to speed up diverse and complex NLP models, called FlexRun <cit.>. The paper is focused on accelerating both Recurrent Neural Network (RNN) models, such as SRNN or long short-term memory (LSTM), and attention-based NLP models, such as Transformer and GPT2.
For evaluation, they compare FlexRun with Intel’s Brainwave-like architecture on a Stratix-10 GX FPGA and a Tesla V100 GPU with tensor cores enabled. Compared to the FPGA baseline, FlexRun achieves an average speedup of 1.59× on various configurations of BERT. For GPT2, FlexRun gets 1.31× average speedup. Next, when comparing to the GPU implementation, FlexRun improves the performance by 2.79× and 2.59× for BERT and GPT2, respectively.
§.§ HPTA
In 2023, Yuntao Han and Qiang Liu presented the High-Performance Transformer Accelerator (HPTA) <cit.>, leveraging a custom matrix multiplication unit, adder tree, and memory subsystem. It can handle various types of transformers used in Natural Language Processing (NLP) and Computer Vision (CV). The performance of HPTA was evaluated against CPU, GPU, and other FPGA implementations. The results showed significant improvements in speed and energy efficiency for both BERT and Swin Transformer models. Compared to CPU and GPU, HPTA processed BERT up to 44x faster and 175x more energy-efficiently. It was also 1.8x faster than previous FPGA accelerators.
§.§ Swin
In 2023, Zhiyang Liu, Zhenhua Ren, and Pengyu Yin developed an accelerator for the Swin Transformer in computer vision tasks, addressing hardware acceleration challenges with large images <cit.>. The architecture includes computation units for GELU and Softmax, allowing Swin Transformer Block execution in one cycle and improving efficiency by replacing Layer Normalization (LN) with Batch Normalization (BN). It offers significant speed and energy efficiency improvements over CPU and GPU. The accelerator is categorized into Swin-T, Swin-S, and Swin-B. Swin-T is 1.8x faster and 20.5x more energy-efficient, Swin-S is 1.7x faster and 18.6x more energy-efficient, and Swin-B is 4.4x faster and 14.6x more energy-efficient compared to the Nvidia GeForce RTX 2080Ti. The acceleration rates are 431.2, 403.5, and 436.4 GOPs for Swin-T, Swin-B, and Swin-S, respectively.
§.§ Zhongyo Zhao
In 2023, Zhongyo Zhao presented an accelerator that uses an Output Block Storing (OBS) data handling method to efficiently execute transformer models for object recognition <cit.>. The proposed method involves dividing the inputs and allocating weights into small block matrices to reduce memory access for input data and weights. Additionally, the OBS data flow maintains usage rates by collecting partial sums, while slightly reducing them compared to the output block data flow. This results in improved overall energy efficiency. The accelerator implements this data flow and achieves a processing rate of 728.3 GOPs and an energy efficiency of 58.31 GOPs/W, surpassing previous CNN-based accelerators. This study used a Xilinx VC709 processor for comparison and employed Virtex™ 7VC707 (28nm) technology.
§.§ ODE-based acceleration
In 2024, a hybrid approach for the acceleration of transformer networks was proposed by Okubo et al. <cit.>. The proposed scheme uses ResNet as a backbone architecture and replaces a part of its convolution layers with an MHSA (Multi-Head Self-Attention) mechanism. Using this approach, they manage to significantly reduce the parameter size of such models by using a Neural ODE (Ordinary Differential Equation) as the backbone architecture instead of ResNet. The proposed hybrid model reduces the parameter size by 94.6% compared to the CNN-based ones without degrading the accuracy.
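A rough sketch of the parameter-reuse idea behind the Neural ODE backbone (our own simplified illustration, not the authors' code): an ODE block Euler-integrates one residual function several times with the same weights, whereas a ResNet stacks separately parameterized residual layers.

```python
import numpy as np

class ODEBlock:
    """Euler-integrate dx/dt = F(x; theta) for n_steps, reusing a single set of weights."""
    def __init__(self, dim, n_steps=4, step=0.25, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, dim)) * 0.1   # one reused parameter set
        self.n_steps, self.step = n_steps, step

    def F(self, x):
        return np.tanh(x @ self.W)

    def forward(self, x):
        for _ in range(self.n_steps):        # same weights at every "layer"
            x = x + self.step * self.F(x)    # x_{k+1} = x_k + h * F(x_k)
        return x

x = np.ones((2, 16))
print(ODEBlock(16).forward(x).shape)   # (2, 16)
```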
The performance evaluation on a Xilinx Zynq UltraScale+ MPSoC platform shows that the proposed FPGA implementation achieves 12.8× speedup and 9.2× energy efficiency compared to an ARM Cortex-A53 CPU implementation.
§.§ Beta
In 2024, Yuhao Ji presented a Binary Transformer Accelerator (BETA) that achieves high performance and flexibility <cit.>. This is accomplished through a computational flow subtraction method aimed at optimizing quantized matrix multiplications (QMMs). The QMM engine is programmable and can support a wide range of precisions while providing high parallelism, speed, and energy efficiency. Various experiments compared BETA with previous FPGA accelerators, showing consistent improvements in energy efficiency. While the speedup compared to CPUs and GPUs is not reported, the energy efficiency is reported to be 22x better. The study used an RTX 3090 and ZCU102 (16nm) technology, with BETA achieving a performance of 1436 GOPs and an energy efficiency of 174 GOPs/W.
§.§ Me-ViT
In 2024, Kyle Marino, Pengmiao Zhang, and Viktor K. Prasanna introduced Me-ViT <cit.>, a memory-efficient Vision Transformer design that outperforms traditional ViT accelerators on FPGA in speed and energy efficiency. Me-ViT combines Self-Attention and Multi-Layer Perceptron blocks, reducing data transfers and intermediate writes by loading weights only once. Its Memory-Efficient Processing Element (ME-PE) minimizes data movement and computation interruptions. Using systolic arrays for matrix multiplication, Me-ViT optimizes memory access, providing scalable, high-performance solutions for vision tasks on FPGA. Compared to CPUs and GPUs, Me-ViT is 5.1x faster and 4x more energy-efficient, achieving a performance of 2,682 GOPs. The study uses an Nvidia TITAN RTX GPU and Alveo U200 (16nm) technology for comparison.
§.§ TransAxx
In 2024, Dimitrios Danopoulos, Georgios Zervakis, and Dimitrios Soudris introduced TransAxx <cit.>, a framework aimed at enhancing the efficiency of Vision Transformer (ViT) models through approximation computing. It includes a PyTorch-based system that supports continuous approximation computing and assesses its effects on ViT models. The technique involves studying the sensitivity of transformers to approximate multipliers, fine-tuning for accuracy, and using the Monte Carlo Tree Search (MCTS) algorithm to create approximate accelerators. Key techniques for accuracy improvement include quantization, pre-calibration training, and adaptive retraining. The framework reduces computational complexity and memory demands while balancing speed and energy efficiency. TransAxx provides a comprehensive approach for optimizing ViT models, enabling professionals to improve performance with limited resources through methods like quantization, calibration, and retraining.
§.§ Ikumo Okubo
In 2024, Ikumi Okubo introduced a cost-effective FPGA implementation of the Tiny Transformer model utilizing a Neural Ordinary Differential Equation (Neural ODE) technique <cit.>. This method uses fewer parameters and less memory compared to ResNet-based deep models, making it suitable for resource-constrained devices. The model features ODEBlocks that reuse parameters, a learned relative positional encoding, and quantization to n-bit integers using LLTs. It also incorporates Depth-wise Separable Convolution (DSC) and Multi-Head Self-Attention (MHSA), forming a hybrid architecture. This approach is highly memory-efficient and significantly improves speed and energy efficiency, being 12.8x faster and 9.2x more energy-efficient than other models, and is compared to the ARM Cortex-A53 CPU using ZCU102 (16nm) technology.
§.§ SSR
In 2024, Jinming Zhuang presented SSR <cit.>, an architecture emphasizing the balance between latency and performance in accelerating transformers. It examines the latency-performance trade-off for different models on FPGA, achieving gains in both performance and energy efficiency. The method centers on matrix multiplication and on controlling the data communication between accelerators, seeking ways to improve performance. SSR provides open-source tools for reproducing its results and can optimize communication between accelerators, reducing data transmission costs. Compared to other CPUs and GPUs, SSR is approximately 36x faster and 21x more energy-efficient than previous accelerators. This study utilizes an Nvidia A10G GPU and VCK190 (7nm) technology.
§ CPU AND GPU-BASED ACCELERATORS
§.§ TurboTransformer
In 2021, Jiarui Fang and Yang Yu introduced the TurboTransformers accelerator <cit.>, a technique for efficiently serving Transformer models on GPUs for variable-length inputs. They addressed the challenge of padding smaller sequences to match the length of the longest sequence in a batch. By using dynamic programming to solve the batching optimization problem, they increased the response rate by 35% compared to not using batching.
To reduce memory usage, TurboTransformers introduces a variable-length allocator that employs a segment-based memory management technique and a space-reuse mechanism in the computation graph, reducing memory usage by 50% compared to a reference allocator. Testing the system with various Transformer models, including BERT and Albert, the authors found that TurboTransformers outperformed PyTorch and ONNXRuntime in latency and throughput for variable-length inputs, being 2.8x faster.
§.§ Jaewan Choi
In 2022, researcher Jaewan Choi presented the study titled "Accelerating Transformer Networks through Rewiring of Softmax Layers"<cit.>, which provides a method to accelerate the Softmax layer in transformer networks. The research introduces a rewiring technique to speed up the Softmax layer in transformer networks, which has become increasingly important as transformer models process longer sequences to improve accuracy rates. The proposed technique divides the Softmax layer into several sub-layers, changes the data access pattern, and then merges the disassembled Softmax sub-layers with the subsequent and preceding processes. This method accelerates the inference of BERT, GPT-Neo, BigBird, and Longformer on a current GPU by up to 1.25x, 1.12x, 1.57x, and 1.65x respectively, significantly reducing off-chip memory traffic.
§.§ SoftMax
In 2022, Choi et al. presented a novel framework for the acceleration of transformer networks through Recomposing Softmax Layers <cit.>. The softmax layer normalizes the elements of the attention matrix to values between 0 and 1. This operation is conducted along the row vectors of the attention matrix. Based on the profiling, the softmax layer in the scaled dot-product attention (SDA) block uses 36%, 18%, 40%, and 42% of the total execution time of BERT, GPT-Neo, BigBird, and Longformer, respectively.
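To make the memory-traffic argument concrete, the sketch below (our own illustration, not the authors' GPU kernels) writes the row-wise softmax as three explicit passes over the attention matrix; each pass reads and writes the full matrix, which is exactly the off-chip traffic that recomposing the softmax and fusing its sub-layers with the neighboring operations aims to remove.

```python
import numpy as np

def rowwise_softmax_three_passes(S):
    """Row-wise softmax over an attention-score matrix S (seq_len x seq_len),
    written as separate sub-layers to expose the repeated memory sweeps."""
    m = S.max(axis=1, keepdims=True)          # pass 1: row maxima (numerical stability)
    E = np.exp(S - m)                         # pass 2: exponentiate (full matrix read/write)
    return E / E.sum(axis=1, keepdims=True)   # pass 3: normalize (another full sweep)

S = np.random.randn(6, 6)
A = rowwise_softmax_three_passes(S)
print(np.allclose(A.sum(axis=1), 1.0))        # each row sums to 1
```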
Softmax recomposition achieves up to 1.25×, 1.12×, 1.57×, and 1.65× speedups in inferring BERT, GPT-Neo, BigBird, and Longformer on an A100 GPU by significantly reducing the off-chip memory traffic.
§.§ LightSeq2
In 2022, Wang et al. proposed a series of GPU optimizations to accelerate the training for a general family of Transformer models on GPUs called LightSeq2 <cit.>.
LightSeq2 proposes 3 techniques for the acceleration of the training of transformer networks.
First, for all types of transformers, LightSeq2 uses fused kernel operators for both encoder and decoder layers. Adjacent fine-grained element-wise kernels are fused into one coarse-grained kernel, resulting in fewer kernel launches and fewer intermediate results. For example, the last kernel of the self-attention layer implements bias addition, dropout, and the residual connection with only one kernel launch.
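As a schematic example of the fusion (a sketch with hypothetical shapes, not LightSeq2's CUDA kernels), the element-wise tail of a self-attention layer can be expressed either as three separate passes (bias add, dropout, residual add) or as a single fused pass that touches the activations once.

```python
import numpy as np

def unfused_tail(x, bias, residual, p=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    y = x + bias                                   # kernel 1: bias add
    mask = (rng.random(y.shape) >= p) / (1.0 - p)  # kernel 2: dropout
    y = y * mask
    return y + residual                            # kernel 3: residual add

def fused_tail(x, bias, residual, p=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    # one pass over the data: fewer launches, no intermediate tensors written out
    return (x + bias) * mask + residual

x, bias, res = np.random.randn(4, 8), np.random.randn(8), np.random.randn(4, 8)
print(np.allclose(unfused_tail(x, bias, res), fused_tail(x, bias, res)))  # True
```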
The performance evaluation shows that LightSeq2 is consistently faster (1.4-3.5×) than previous systems on different GPUs and it can achieve up to 3x speedup on large public datasets.
§.§ Simplified Transformer Networks
In 2023, He and Hofmann <cit.> also proposed a novel framework to accelerate transformer networks on GPUs by simplifying the transformer architecture without compromising convergence properties and downstream task performance.
Based on the performance evaluation, on both autoregressive decoder-only and BERT encoder-only models, the simplified transformers emulate the per-update training speed and performance of standard transformers, while enjoying 15% faster training throughput on GPUs and using 15% fewer parameters.
§.§ LLMA
In 2023, Nan Yang introduced LLMA <cit.>, an accelerator for large language models (LLMs) that enhances inference speed through interaction with reference data. This method uses a reference-based decoding mechanism to select and process tokens efficiently, enabling parallel execution on GPUs without needing new models. LLMA is easy to implement and deploy, providing over twice the speed for various model sizes using an Nvidia 32G V100 GPU.
§.§ FlexGen
In 2023, researchers introduced FlexGen, a high-throughput generation system for large language models (LLMs) designed for latency-insensitive processing in resource-limited environments. FlexGen generates 32 tokens per prompt and evaluates throughput as the number of tokens generated divided by the adaptation and decoding time. Compared to DeepSpeed ZeRO-Inference and Hugging Face Accelerate, FlexGen provides 40x more throughput with the same latency using an Nvidia T4 (16GB) GPU. Built on PyTorch, FlexGen utilizes multiple CUDA streams and CPU threads for I/O combination, significantly increasing performance through CPU computation and result overlapping.
§.§ vLLMs
In 2023, researchers introduced the vLLMs model to address efficient memory management for large language models (LLMs), which have high memory requirements <cit.>. They proposed a strategy called PagedAttention, which divides key-value attention into fixed-size blocks and uses paging to maintain them. This approach enhances memory efficiency and reduces the memory footprint of LLMs. The vLLM architecture leverages PagedAttention to manage memory effectively, particularly in beam search scenarios with a fixed number of candidates. The model supports mixed decoding approaches with various sharing and memory access patterns, using a mapping layer to convert logical blocks to physical blocks, further optimizing memory usage and reducing the overall memory footprint of LLMs.
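The core bookkeeping behind PagedAttention can be illustrated with a tiny block-table sketch (ours, heavily simplified; vLLM's real implementation manages GPU memory and custom CUDA kernels): logical KV-cache blocks of a fixed size are mapped to whatever physical blocks are free, so sequences can grow without contiguous allocation and freed blocks are immediately reusable.

```python
class PagedKVCache:
    """Toy logical-to-physical block mapping for a paged KV cache (fixed block size)."""
    def __init__(self, num_physical_blocks, block_size=16):
        self.block_size = block_size
        self.free = list(range(num_physical_blocks))   # free physical block ids
        self.block_tables = {}                         # seq_id -> list of physical block ids
        self.lengths = {}                              # seq_id -> number of cached tokens

    def append_token(self, seq_id):
        table = self.block_tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:                   # current block full (or first token)
            table.append(self.free.pop())              # allocate a physical block on demand
        self.lengths[seq_id] = n + 1
        block, offset = table[-1], n % self.block_size
        return block, offset                           # where this token's K/V would be written

    def free_sequence(self, seq_id):
        self.free.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_physical_blocks=8, block_size=4)
for _ in range(6):
    slot = cache.append_token("request-0")
print(cache.block_tables["request-0"], slot)   # two physical blocks used; last token at offset 1
```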
§.§ Alisa
In 2024, researchers introduced the ALISA model <cit.>, aimed at accelerating large language models (LLMs) through sparse window attention (SWA) and dynamic scheduling. This approach addresses the limitations of existing optimizations in maintaining competitive accuracy. SWA creates sparse patterns that are both locally static and globally dynamic, preserving the sequential semantics of language while capturing its dynamic evolution. Dynamic scheduling further enhances performance by balancing memory access and token processing. By integrating SWA, dynamic scheduling, and KV compression, ALISA significantly reduces the memory footprint of KV stores. The study demonstrates that ALISA outperforms previous methods in accuracy and performance, with comparisons across three families of open-source LLM models.
§ ASIC ACCELERATORS
§.§ A3
One of the early research efforts on the acceleration of transformer networks was proposed in 2020 by Ham et al., called A3 <cit.>. The paper proposes a hardware accelerator for attention mechanisms in NNs, which not only focuses on the efficient implementation of the attention mechanism in hardware but also on reducing the amount of computation in the attention mechanism through algorithmic optimization and approximation. It presents an approximate candidate selection mechanism to reduce the number of search targets, and thus the amount of computation.
The proposed scheme has not been implemented on an FPGA, but it has been implemented as a cycle-accurate Verilog design targeting a TSMC 40nm ASIC clocked at 1GHz. Based on the performance evaluation, the proposed scheme can achieve up to 7x speedup and up to 11x better energy efficiency compared to an Intel Gold 6128 CPU implementation.
§.§ ELSA
In 2021, Ham et al. presented a hardware-software Co-design approach for the acceleration of transformer networks called Elsa<cit.>.
Based on the fact that irrelevant relations can be effectively filtered out by computing approximate similarity, ELSA substantially reduces computational waste in the self-attention operation. Unlike conventional hardware such as CPUs or GPUs, which cannot benefit from this approximation, ELSA proposes specialized hardware that directly translates this reduction into further improvements in performance and energy efficiency.
They evaluate several representative self-attention-oriented NN models to demonstrate the effectiveness of the ELSA. For performance evaluation, they implemented a custom simulator for ELSA targeting a 40nm ASIC clocked at 1GHz. ELSA-moderate achieves up to 157x speedup compared to GPUs and two orders of magnitude improvements in energy efficiency over the GPU for the self-attention computation.
§.§ SpAtten
In 2021, Wang et al. presented a framework for the acceleration of large language models called SpAtten <cit.>.
SpAtten proposes a novel scheme for the acceleration of NLP using three algorithmic optimizations: cascade token pruning, cascade head pruning and progressive quantization to reduce computation and memory access.
The proposed scheme has been implemented as a cycle-accurate design using SpinalHDL and mapped to an ASIC using a 40nm TSMC library. SpAtten achieves 162x and 347x speedup over a GPU (TITAN Xp) and a Xeon CPU, respectively. In terms of energy efficiency, SpAtten achieves 1193x and 4059x energy savings compared to the GPU and the CPU.
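The intuition behind cascade token pruning can be sketched as follows (our simplified NumPy illustration, not SpAtten's hardware): tokens accumulate importance scores from the attention probabilities, and the least-attended tokens are dropped for all subsequent layers, shrinking the attention matrices layer by layer.

```python
import numpy as np

def cascade_token_prune(attn_probs, token_ids, keep_ratio=0.75):
    """attn_probs: (heads, seq, seq) attention probabilities of the current layer.
    Returns the ids of the tokens kept for the following layers."""
    # cumulative importance: how much attention each token receives, summed over heads and queries
    importance = attn_probs.sum(axis=(0, 1))
    keep = max(1, int(round(keep_ratio * len(token_ids))))
    kept = np.sort(np.argsort(importance)[-keep:])     # keep the top-`keep`, preserve order
    return [token_ids[i] for i in kept]

rng = np.random.default_rng(1)
probs = rng.random((4, 8, 8))
probs /= probs.sum(axis=-1, keepdims=True)             # rows are valid attention distributions
tokens = list(range(8))
print(cascade_token_prune(probs, tokens, keep_ratio=0.5))   # e.g. 4 surviving token ids
```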
§.§ Sanger
In 2021, Lu et al. presented another novel approach for the acceleration of transformer networks, called Sanger <cit.>.
Sanger accelerates the sparse attention models by combining dynamic sparsity patterns and reconfigurable architecture.
The software part provides sparsity patterns, which can achieve high performance and a balanced workload. The architecture is designed with reconfigurability to support the dynamic characteristics of sparsity, which helps to improve the compression ratio.
To allow more flexibility in the sparsity patterns, Sanger proposes a reconfigurable systolic array based on this dataflow. Sanger was implemented in the Chisel hardware description language and translated to Verilog RTL. The design targets an ASIC using the UMC 55nm technology clocked at 500MHz.
§.§ SALO
In 2022, Guan Shen presented a spatial accelerator to improve transformer performance for long sequences by using hybrid sparse attention patterns <cit.>. This design addresses computational and memory challenges, achieving 25.5x faster performance and 336.1x better energy efficiency compared to an Nvidia GTX 1080Ti, with FreePDK (45nm) technology. It demonstrates enhanced attention processing through efficient hardware and data strategies.
§.§ AccelTran
In 2020, Shikhar Tuli introduced AccelTran <cit.>, an accelerator architecture to improve transformer model efficiency in natural language processing. AccelTran uses matrix compression and data flow strategies to enhance energy efficiency. It has two versions: AccelTran-Edge for low-power portable devices and AccelTran-Server for high-throughput cloud applications. AccelTran-Edge outperforms Raspberry Pi in throughput and power consumption, while AccelTran-Server offers 5.7x higher throughput and 3.7x lower energy use compared to the Energon model. The acceleration rates are 372,000 GOPs for AccelTran-Server and 7,520 GOPs for AccelTran-Edge
§.§ DTQAtten
In 2022, Tao Yang introduced DTQAtten <cit.>, a technique to enhance NLP model efficiency by combining dynamic quantization and specialized hardware. DTQAtten reduces memory usage, inference latency, and energy consumption in models like BERT and GPT-2. It is 16.4x faster and 3.8x more energy-efficient than the SpAtten model on Nvidia Titan Xp using TSMC (40nm) technology, achieving 952 GOPs throughput and 1298.4 GOPs/W energy efficiency. This makes DTQAtten significantly superior to advanced accelerators such as A3 and SpAtten.
§.§ Energon
In 2023, Zhou et al. presented an algorithm-architecture co-design approach that accelerates various transformers using dynamic sparse attention, called Energon <cit.>. Energon proposes a mix-precision multi-round filtering (MP-MRF) algorithm to dynamically identify important query-key pairs at runtime.
Energon adopts a low bit-width in each filtering round, and only the finally selected pairs are used for high-precision computation in the attention stage, reducing the overall complexity. By this means, the authors manage to reduce the computation cost by 4x to 8x with negligible accuracy loss.
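A rough software analogue of the idea (ours; Energon implements this in hardware with multiple rounds of increasing bit-width): score query-key pairs with aggressively quantized operands first, keep only the top candidates, and compute full-precision attention only for the survivors.

```python
import numpy as np

def quantize(x, bits):
    # crude symmetric uniform quantization (illustrative only)
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def filtered_attention(q, K, V, bits=4, keep=8):
    """Low-bit pre-filtering of keys for one query, then exact attention on the survivors."""
    coarse_scores = quantize(q, bits) @ quantize(K, bits).T      # cheap, low-precision round
    top = np.argsort(coarse_scores)[-keep:]                       # candidate key positions
    scores = q @ K[top].T / np.sqrt(q.shape[-1])                  # high precision only here
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[top]

rng = np.random.default_rng(2)
q, K, V = rng.standard_normal(64), rng.standard_normal((128, 64)), rng.standard_normal((128, 64))
print(filtered_attention(q, K, V).shape)   # (64,)
```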
Energon is implemented as a co-processor targeting a 45nm ASIC library.
Based on the performance evaluation, it is shown that Energon achieves 168x and 8.7x speedup, and up to 10000x and 1000x energy reduction, over an Intel Xeon 5220 CPU and an NVIDIA V100 GPU, respectively.
§.§ H3D Transformer
In 2024, a new 3D heterogeneous accelerator design was proposed for transformer models <cit.>. It combines compute-in-memory (CIM) and digital tensor processing units (TPUs) to address chip area and energy consumption issues. The design uses 22nm FeFET digital CIM chips for high-density on-chip memory and processes MatMul tasks efficiently. It achieves 10 TOPS/W (10,000 GOPs/W) for BERT and GPT-2 models, which is 2.6x to 3.1x better than 7nm TPU and FeFET memory baselines.
§.§ SALO2
In 2024, Jieru Zhao introduced SALO2 <cit.>, an enhanced framework for efficient attention computation in both static and dynamic sparsity scenarios. Combining software optimizations with hardware accelerators and pattern-matching units, SALO2 improves performance and flexibility for various applications. It uses sparse attention algorithms and data reorganization techniques. SALO2 is about 25x faster and 70x more energy-efficient than previous models, using an Nvidia RTX 4090 with FreePDK (45nm) technology as a benchmark.
§ IN-MEMORY HARDWARE ACCELERATORS
§.§ ATT
In 2020, Guo et al. presented another approach for the acceleration of attention-based networks, called ATT <cit.>, based on resistive RAM. ATT is a crossbar-based resistive RAM architecture that eliminates weight movement between memory and processing units, with a dedicated pipeline design for attention-based neural networks. The proposed scheme consists of several modules.
The proposed scheme has been simulated using CACTI 7.0 at 32 nm to model the power and area of the SRAM buffer and the Mask Cache. Based on the performance evaluation, ATT can achieve 202x speedup compared to NVIDIA GTX 1080 Ti GPU.
§.§ ReTransformer
In 2020, Yang et al. proposed an in-memory framework for the acceleration of transformers called ReTransformer <cit.>. ReTransformer is a ReRAM-based in-memory architecture for Transformer acceleration that not only accelerates the scaled dot-product attention of the Transformer using a ReRAM-based in-memory architecture but also eliminates some data dependencies by avoiding writing the intermediate results, using the proposed matrix decomposition technique. Furthermore, ReTransformer proposes a new sub-matrix pipeline design for multi-head self-attention.
The performance evaluation shows that compared to GPU, ReTransformer can achieve up to 23.21× speedup while the corresponding overall power is reduced by 1086×.
§.§ iMCAT
In 2021, Laguna et al. presented a novel in-memory architecture for the acceleration of transformer networks for long sentences called iMCAT <cit.>. The proposed framework uses a combination of XBars and CAMs to accelerate transformer networks. The acceleration of transformer networks is achieved by combining several techniques such as computing in-memory, thus minimizing the memory transfer overhead, caching reusable parameters to reduce the number of operations, exploiting the available parallelism in the attention mechanism, and finally using locality sensitive hashing to filter the number of sequence elements by their importance.
The performance evaluation shows that this approach achieves a 200x speedup and 41x energy improvement for a sequence length of 4098.
§.§ TransPIM
In 2023, researchers introduced TransPIM <cit.> to enhance Transformer operations through combined software and hardware design. By adding auxiliary computation units (ACUs) to memory, it improves the vector reduction and Softmax algorithms. TransPIM increases speed and energy efficiency by optimizing parallelism, reducing latency and energy use by up to 10.8x and 5.7x, respectively. It also uses token-based data partitioning to boost memory-level parallelism and reduce data transfer costs. This architecture achieves a performance of 734 GOPs, significantly enhancing Transformer model performance.
§.§ iMTransformer
In 2022, Ann Franchesca Laguna introduced iMTransformer, an architecture for hardware-software co-design in memory systems. It leverages Multi-Head Attention (MHA) in transformer networks for parallel processing and details encoder and decoder layers' roles. The study covers masked, bidirectional, and encoder-decoder MHA types and explores sparse attention techniques to improve efficiency. iMTransformer enhances parallelism and reduces energy use, achieving 11x faster speed and 12.6x better energy efficiency compared to Nvidia Titan RTX GPU using CAMS (12nm) technology.
§.§ X-Former
In 2023, Sridharan et al. presented a novel in-memory hardware accelerator to speed up transformer networks, called X-Former <cit.>. X-Former is a hybrid spatial in-memory hardware accelerator that consists of both NVM and CMOS processing elements to execute transformer workloads efficiently.
X-Former is composed primarily of a Projection Engine with NVM processing tiles for executing static matrix-vector multiplication (MVM) operations and an Attention Engine with CMOS processing tiles for executing dynamic MVM operations. The main difference compared to other in-memory architectures is that the weights of all the layers are stored in the Projection Engine to prevent reprogramming the NVM tiles, while the Attention Engine is optimized to only process the largest self-attention layer due to area constraints.
Based on the performance evaluation, it is shown that X-Former achieves up to 85x and 7.5x improvements in latency and energy over an NVIDIA GeForce GTX 1060 GPU, and up to 10.7x and 4.6x improvements in latency and energy over a state-of-the-art in-memory NVM accelerator.
§.§ TranCIM
In 2023, researchers introduced TranCIM<cit.>, an innovative design to improve transformer model efficiency in NLP, computer vision, and bioinformatics. TranCIM addresses data transfer and processing challenges with a fully digital CIM accelerator that enhances memory access and computations for attention and fully connected layers. It uses a pipeline function with bitline transfer architecture for efficient matrix computation. Compared to other models and Nvidia Jetson Nano, TranCIM is 16.9x faster and 1.6x more energy-efficient using CMOS (28nm) technology.
§.§ H3DATTEN
In 2023, researchers introduced H3DAtten <cit.>, an architecture enhancing vision transformer efficiency. Combining analog (ACIM) and digital (DCIM) in-memory computing, H3DAtten uses the Swin transformer for accurate image recognition and object detection. It processes input features in variable-sized windows for multi-scale feature extraction. ACIM converts analog data to digital bits, while DCIM uses SRAM for MatMul operations, reducing energy and latency. H3DAtten achieves 1600 GOPs performance and 7100 GOPs/W energy efficiency, outperforming existing hardware accelerators.
§.§ PRIMATE
In 2024, the Primate<cit.> framework was introduced to accelerate transformer models using dynamic token pruning and PIM (Processing-In-Memory) technology. Primate offers higher capacity, greater bandwidth, lower latency, and stable computations. It uses a pipeline strategy to maximize parallelism and efficiency. Primate improves performance by 30.6x, space efficiency by 29.5x, and energy efficiency by 4.3x compared to the TransPIM model based on Nvidia RTX 2080Ti.
§.§ HARDSEA
In 2024, researchers introduced HARDSEA <cit.>, an accelerator architecture for transformer models focusing on the self-attention mechanism. HARDSEA uses a hybrid analog-digital approach with sparse self-attention and digital SRAM-CIM for in-memory computation, reducing costs by leveraging sparsity. It shows excellent performance with transformer models, achieving 28.5x better acceleration and 1,894.3x better energy efficiency compared to an Nvidia RTX 3090. HARDSEA has a performance of 921.6 GOPs and an energy efficiency of 943.7 GOPs/W.
§ QUANTITATIVE COMPARISON
Table <ref> lists all of the hardware-based accelerators that have been proposed and the main features of each accelerator. Each row presents the name of the accelerator, the type of the accelerator (FPGA/ASIC/In-memory), the performance and the energy efficiency. In some cases, the papers also reported the speedup of the proposed architecture compared against a CPU or a GPU. However, since the baseline for each architecture was different, we present only the absolute performance and energy efficiency and not the speedup for each architecture.
§.§ Quantitative Comparison on performance
Figure <ref> shows the performance of each accelerator based on the process technology while Figure <ref> shows the performance in logarithmic scale for better visibility.
As shown in the table and the figures, the highest performance is achieved by the AccelTran (server) architecture using 14nm process technology, with 372,000 GOPs, while the lowest is achieved by the ReTransformer model. Additionally, it is observed that models within the same technology, such as ViA, Me-ViT, FTRANS, and others, do not have similar performance. However, for the accelerators that do not use the same process technology it is hard to make a fair comparison, as the process technology significantly affects the performance of a hardware accelerator.
§.§ Comparison of Energy efficiency vs process technology
Figure <ref> shows the energy efficiency in terms of GOPs/W for most of the hardware accelerators, while Figure <ref> shows the energy efficiency in logarithmic scale for better visibility. Since many proposed architectures do not measure the energy efficiency, we list only the accelerators that report it. Again, many accelerators are based on different process technologies and therefore it is hard to make a fair comparison.
The models that rely primarily on memory (in-memory accelerators) have better energy efficiency. This happens because they have reduced data movement, as this architecture allows data to be processed directly in memory, without being transferred from memory to the central processing unit (CPU).
§.§ Comparison of acceleration in 16nm technology
It is important to note that there is no easy way to compare performance and energy efficiency when the existing models have different characteristics, and especially when they use different process technologies that significantly affect the performance of the hardware accelerators. To make a fair comparison of the hardware accelerators, we extrapolate the performance and the energy efficiency to the same process technology. As the performance of in-memory accelerators is not determined only by the process technology, the extrapolation is performed only on the FPGA and ASIC accelerators, where the process technology significantly affects the performance of the system.
Based on the article by Aaron Stillmaker and B. Baas titled "Scaling equations for the accurate prediction of CMOS device performance from 180 nm to 7 nm" <cit.>, we extrapolated the performance and the energy efficiency to a 16nm technology to make a fair comparison of performance and energy efficiency. Based on the tables and equations provided in that paper, we performed the extrapolation of the features to the same technology.
Table <ref> shows the extrapolated performance of the hardware accelerators for the 16nm process technology.
Figure <ref> shows the absolute performance of the hardware accelerators when the performance is extrapolated on the same 16nm process technology. AccelTran achieves the highest performance (omitted in the diagram for better visibility of the rest of the results).
§.§ Experimental extrapolation
While the scaling equations provided by <cit.> can help in the extrapolation of the performance to the same technology, we also performed an experimental extrapolation for the architectures targeting FPGAs.
The most computationally intensive part of LLMs is the matrix multiplication. Therefore, we used a matrix multiplication IP core in VHDL <cit.> and the Quartus EDA tool to evaluate the performance and accuracy of these accelerators. We tested the matrix multiplication code across various FPGA technologies, specifically 20nm, 28nm, 40nm, 55nm, 65nm, and 180nm, to verify the theoretical conversion to 16nm technology. The results of the matrix multiplication on several FPGA technologies can help extrapolate the results of the hardware accelerators to the same technology.
The FPGA devices, the process technologies and the results obtained from the matrix multiplication IP cores are listed in Table <ref>:
Figure <ref> shows the maximum clock frequency achieved for each FPGA device and process technology for the matrix multiplications. The table below lists the FPGA accelerators, including the frequency at which they operated and the corresponding technology. The table also shows the new frequency extrapolated to the 16nm process technology and the new performance achieved using the extrapolated frequency. As the performance of an FPGA design depends on the maximum clock frequency, the extrapolated performance allows a fair comparison between architectures that were mapped on different process technologies.
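The extrapolation step itself is simple arithmetic; a minimal sketch is shown below (the frequency values are made up purely for illustration, while the real f_max figures come from the Quartus runs reported in the table): the reported throughput of each accelerator is rescaled by the ratio of the maximum clock frequency the matrix-multiplication core achieves at 16nm to the frequency it achieves on the accelerator's original process node.

```python
# Hypothetical example values (MHz); the real f_max figures come from the Quartus experiments.
fmax_matmul = {"16nm": 480.0, "28nm": 320.0, "40nm": 250.0}

def extrapolate_to_16nm(perf_gops, node):
    """Scale reported GOPs by the measured clock-frequency ratio between 16nm and `node`."""
    return perf_gops * fmax_matmul["16nm"] / fmax_matmul[node]

# e.g. an accelerator reporting 500 GOPs on a 28nm device
print(round(extrapolate_to_16nm(500.0, "28nm"), 1))   # 750.0 GOPs at 16nm, under these assumptions
```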
§ CONCLUSIONS
Large Language Models have emerged as a promising and powerful technology for science and for society in general. There are several research efforts on the acceleration of LLMs. However, the use of different process technologies makes it hard to perform a fair comparison. In this paper we extrapolated the performance and the energy efficiency of the hardware accelerators to the same process technology using both a theoretical and an experimental methodology. Overall, it seems that the in-memory architectures provide much better energy efficiency compared to other technologies. However, in terms of absolute performance (GOPs), accelerators targeting ASICs can provide much higher performance compared to other platforms.
|
http://arxiv.org/abs/2409.02700v1 | 20240904133657 | Redshift Space Distortions corner interacting Dark Energy | ["Pietro Ghedini", "Rasmi Hajjar", "Olga Mena"] | astro-ph.CO | ["astro-ph.CO", "hep-ph"] |
pietro.ghedini3@studio.unibo.it
Dipartimento di Fisica e Astronomia, Università di Bologna, via Irnerio 46, 40126 Bologna, Italy
rasmi.hajjar@ific.uv.es
omena@ific.uv.es
Instituto de Física Corpuscular (IFIC), University of Valencia-CSIC, Parc Científic UV, C/ Catedrático José Beltrán 2, E-46980 Paterna, Spain
§ ABSTRACT
Despite the fact that the ΛCDM model has been highly successful over the last few decades in providing an accurate fit to a broad range of cosmological and astrophysical observations, different intriguing tensions and anomalies have emerged at various statistical levels. Given the fact that the dark energy and the dark matter sectors remain unexplored, the answer to some of the tensions may rely on modifications of these two dark sectors.
This manuscript explores the important role of the growth of structure in constraining non-standard cosmologies. In particular, we focus on the interacting dark energy (IDE) scenario, where dark matter and dark energy interact non-gravitationally. We aim to place constraints on the phenomenological parameters of these alternative models by considering different datasets related to a number of cosmological measurements, to achieve a complementary analysis. A special emphasis is devoted to redshift space distortion (RSD) measurements, whose role in constraining models beyond the standard paradigm has not been highlighted recently. These observations indeed have a strong constraining power, driving all parameters to their ΛCDM canonical values and therefore leaving little room for the IDE models explored here.
Redshift Space Distortions corner interacting Dark Energy
Olga Mena
September 9, 2024
§ INTRODUCTION
Cosmological observations have provided us with clear evidence for the existence of both a dark energy and a dark matter component, but their nature and putative interactions, beyond the pure gravitational one, remain unknown. Since observations allow it, one could extend the ΛCDM model by introducing a new non-gravitational interaction in the dark sector, i.e. between dark energy and dark matter. That is, while the strength of interactions between ordinary matter and the dark energy fields is severely constrained by observations <cit.>, interactions among the dark sectors are still allowed. Therefore, over the last several years, the possibility of an interaction between the dark matter and the dark energy fluids has been thoroughly investigated using different available cosmological observations <cit.>, see also the reviews of Refs. <cit.>. This “dark coupling”, based on models of coupled quintessence <cit.>, could affect significantly the evolution history of the Universe and the evolution of the density perturbations. The presence of such a coupling could alleviate the “coincidence” problem, since the introduction of a coupling could stabilise the ratio of the two dark components during the entire expansion history. Interacting dark energy-dark matter cosmologies <cit.> may not only provide a scenario in which to alleviate the “why now?” problem but could also help in solving some of the existing cosmological tensions, such as the one between CMB estimates <cit.> and SH0ES (Supernovae and H_0 for the Equation of State of dark energy) measurements of the Hubble constant <cit.>, with a significance of ∼ 5.3σ (see also Refs. <cit.>).
The basic underlying idea in these theories relies on the possible non-gravitational interaction between dark matter and dark energy. Such an interaction can be characterized by a continuous flow of energy and/or momentum between these two dark sectors. This energy flow modifies the expansion history of the Universe both at the background and perturbation levels. The interaction function representing the continuous flow of energy and/or momentum between the dark matter and dark energy sectors is also known as the coupling function, and it is the main feature of interacting dark energy theories: once the interaction function is prescribed, the dynamics of the Universe can be determined either analytically or numerically. Despite the fact that in the literature it is usually assumed a pure phenomenological approach for the dark sectors coupling, a class of interaction functions can be derived from the field theory perspective <cit.>. Consequently some interaction functions can also find a well-defined motivation, with the possibility of translating the bounds of the phenomenological parameters into bounds of the parameters of the proposed theory.
Very recently, the DESI collaboration has presented new high-precision Baryon Acoustic Oscillation (BAO) measurements <cit.> and new cosmological results <cit.>. The new DESI results point to a richer dark energy sector than that expected within the minimal ΛCDM scenario. Profiting from this exciting evidence, the recent work of Ref. <cit.> has shown a preference for interactions exceeding the 95% confidence level (CL). The very same study concludes that high and low redshift observations can be equally well or better explained in interacting dark energy cosmologies than in the ΛCDM framework, and that such cosmologies also provide higher values of H_0, compatible with SH0ES observations. DESI data may therefore imply non-standard cosmological scenarios, and it is mandatory to fully explore these schemes in light of high and low redshift data.
Interacting dark energy cosmologies are known to modify significantly the growth rate of structure. Measurements of the growth rate are within reach thanks to Redshift Space Distortions (RSD), which provide another low-redshift observable to constrain non-canonical late-time physics via the quantity fσ_8, where f is the growth rate of structure and σ_8 the matter clustering parameter. In the following, we analyze a class of interacting dark energy models in light of the most recent publicly available cosmological data.
The structure of the paper is as follows. Section <ref> contains an introduction to coupled dark matter-dark energy cosmologies. The cosmological data sets and the methodology are both described in Sec. <ref>. The bounds on the dark sector coupling arising from the different data sets considered here are presented in Sec. <ref>, and we present our conclusions and final remarks in Sec. <ref>.
§ INTERACTING DARK SECTOR SCENARIOS
We consider a homogeneous and isotropic description of our Universe, which is well described by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric
d s^2 = - dt^2 + a^2 (t) [dr^2/(1-k r^2) + r^2 (dθ^2 + sin^2θ dϕ^2 ) ] ,
expressed in terms of the comoving coordinates (t, r, θ, ϕ), where a (t) refers to the scale factor of the Universe and k is the curvature scalar. The curvature scalar may take three distinct values, k ={0, +1, -1}, to represent three different geometries of the Universe, spatially flat, closed and open, respectively. In the following, we shall work in a flat Universe with an extra interaction between the pressure-less dark matter/cold dark matter and the dark energy components, governed by the conservation equation
∇^μ (T^ dm_μν + T^ de_μν)= 0 ,
which can be decoupled into two equations with the introduction of an interaction function Q (t) as follows
ρ̇_ dm + 3 H ρ_ dm = Q (t) ,
ρ̇_ de + 3 H (1+w) ρ_ de = - Q (t) ;
where H ≡ȧ/a is the Hubble rate of a FLRW flat Universe, ρ_ dm (de) are the time-dependent dark matter (dark energy) mass-energy densities and w denotes the equation of state of the dark energy component. The dot denotes derivative with respect to conformal time. Let us note that for Q (t) > 0, the energy transfer occurs from dark energy to dark matter, while Q(t) <0 indicates the transfer of energy in the reverse direction, i.e. from dark matter to dark energy.
From here one can see that the two dark components acquire a new behavior, that can be shown introducing the effective background equations of state of the dark matter and the dark energy components:
w_dm^eff = -Q/(3 H ρ_dm) ,
w_de^eff = w + Q/(3 H ρ_de) .
The most exploited parameterization of the interaction function is Q = H ξ f(ρ_ dm, ρ_ de) <cit.>, where ξ is the coupling parameter that characterizes the strength of the interaction function, H is the Hubble parameter of the FLRW Universe and f(ρ_ dm, ρ_ de) is any continuous function of
ρ_ dm, ρ_ de, i.e. of the energy densities of dark matter and dark energy. In this work we shall consider the following well-known interaction function
Q = H ξ ρ_ de ,
where ξ could be either time-dependent or time-independent. In the following, and for the sake of simplicity, we shall assume that such a coupling is time-independent. Now, in agreement with the sign convention of the interaction function, ξ >0 (equivalently, Q > 0) means an energy transfer from dark energy to dark matter and ξ < 0 means that the energy flow is from dark matter to dark energy. Notice that for some interaction models the energy density of dark matter and/or of dark energy could be negative <cit.>.
In this regard we do not impose any further constraints on the fluids and let the data discriminate between the most observationally favoured scenarios. We believe this approach is appropriate because it avoids unwanted biases and also considers the rising interest in the community for the putative presence of a negative cosmological constant <cit.>. We therefore extract the constraints on the cosmological parameters, assuming the choice of the interaction function given by Eq. (<ref>).
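To make the effect of the coupling concrete, the continuity equations above can be integrated directly once Q = H ξ ρ_de is adopted. The short Python sketch below does this in terms of N = ln a (so that the explicit Hubble factor cancels); the present-day density values, the coupling and w = -1 + 0.01 are illustrative placeholders rather than fitted numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

def drho_dN(N, rho, xi, w):
    """Coupled continuity equations with Q = H * xi * rho_de, written in N = ln(a):
    d(rho)/dN = (d(rho)/d(tau)) / H, so the explicit Hubble factor drops out."""
    rho_dm, rho_de = rho
    drho_dm = -3.0 * rho_dm + xi * rho_de            # DM gains energy when xi > 0
    drho_de = -3.0 * (1.0 + w) * rho_de - xi * rho_de
    return [drho_dm, drho_de]

# Illustrative (not fitted) present-day densities in units of the critical density
Omega_dm0, Omega_de0 = 0.26, 0.69
xi, w = -0.2, -0.99                                  # example coupling, w = -1 + 0.01

# Integrate backwards from today (N = 0) to z = 10
sol = solve_ivp(drho_dN, [0.0, -np.log(11.0)], [Omega_dm0, Omega_de0],
                args=(xi, w), dense_output=True, rtol=1e-8)

N = np.linspace(0.0, -np.log(11.0), 6)
rho_dm, rho_de = sol.sol(N)
for z, r in zip(np.exp(-N) - 1.0, rho_de / rho_dm):
    print(f"z = {z:5.2f}   rho_de/rho_dm = {r:.3e}")
```

The sign convention is visible directly in the output: a negative ξ drains energy from dark matter into dark energy, changing the ratio of the two dark components along the expansion history.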
A crucial parameter for deriving our constraints is the growth factor rate f, since RSD measurements <cit.> constrain the combined quantity fσ_8, which is the product of the linear growth rate and the root mean square mass fluctuation amplitude for spheres of size 8 h^-1 Mpc. The mass variance of the matter clustering, for a generic physical scale R, is given by
σ^2_R = 1/(2π^2) ∫_0^∞ P(k,z) W_R^2(k) dk ,
where P(k,z) is the matter power spectrum and W_R(k) is the window function. The growth rate f is defined as
f ≡ dlnδ_dm/dln a = (δ'_dm/δ_dm) a .
Therefore, f represents a measure of the evolution of the matter overdensity δ_dm from the primordial density fluctuations to the large-scale structure observed today. The matter overdensity can be obtained by solving the so-called growth equation, which, within dark energy models characterized by a dark energy equation of state w, reads as
δ”_dm = -(2-q) δ'_dm/a + (3/2) Ω_dmδ_dm/a^2 + (3/2) Ω_bδ_b/a^2 ,
where q is the deceleration parameter, defined as
q = -ℋ̇/ℋ^2 = (1/2)(1+3 w Ω_de) .
The growth rate f can be parameterized as
f = [Ω_dm(a)]^γ = [Ω_dm a^-3/(Ω_de + Ω_dm a^-3)]^γ ,
where Ω_dm(a) is the relative dark matter energy density and γ, called the growth index, is a constant for dark energy models within General Relativity. In particular, one finds γ≃ 3(w-1)/(6w-5) <cit.>. For the ΛCDM model, w=-1 and therefore γ=6/11≃ 0.55 <cit.>.
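For reference, this parameterization can be evaluated in a few lines; the sketch below assumes a constant dark energy equation of state (so that, for w = -1, the dark energy term in the denominator is constant, as in the expression above) and illustrative density values.

```python
import numpy as np

def growth_rate_f(a, Omega_dm0=0.315, Omega_de0=0.685, w=-1.0):
    """f(a) = [Omega_dm(a)]^gamma with gamma = 3(w-1)/(6w-5); gamma = 6/11 for LCDM."""
    gamma = 3.0 * (w - 1.0) / (6.0 * w - 5.0)
    # for constant w the dark-energy density scales as a^{-3(1+w)} (constant if w = -1)
    Omega_dm_a = Omega_dm0 * a**-3.0 / (Omega_de0 * a**(-3.0 * (1.0 + w)) + Omega_dm0 * a**-3.0)
    return Omega_dm_a**gamma

z = np.array([0.0, 0.5, 1.0, 2.0])
print(growth_rate_f(1.0 / (1.0 + z)))   # f approaches 1 at high z, where matter dominates
```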
For the interacting dark energy cosmologies considered here, the standard growth equation Eq. (<ref>) is modified as
<cit.>
δ”_dm = -B δ'_dm/a + (3/2) A Ω_dmδ_dm/a^2 + (3/2) Ω_bδ_b/a^2 ,
where
A = 1 + (2/3) (1/Ω_dm) (ρ_de/ρ_dm) [-ξ(1-q-3w) + ξ^2(1+ρ_de/ρ_dm)] ;
B = 2 - q + ξ ρ_de/ρ_dm .
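A minimal numerical sketch of this modified growth equation is given below. For illustration only, it approximates δ_b ≈ δ_dm, keeps the uncoupled scalings for the background densities, and normalises σ_8 today to an arbitrary 0.81; the full analysis instead evolves the coupled background and perturbations with a modified Boltzmann code, as described in the methodology section.

```python
import numpy as np
from scipy.integrate import solve_ivp

Om_dm0, Om_b0, Om_de0, w = 0.26, 0.05, 0.69, -0.99   # illustrative values

def coefficients(a, xi):
    """Background quantities and the A, B coefficients defined above
    (uncoupled density scalings are assumed here for simplicity)."""
    rho_dm, rho_b = Om_dm0 * a**-3.0, Om_b0 * a**-3.0
    rho_de = Om_de0 * a**(-3.0 * (1.0 + w))
    rho_tot = rho_dm + rho_b + rho_de
    Om_dm, Om_b, Om_de = rho_dm / rho_tot, rho_b / rho_tot, rho_de / rho_tot
    q = 0.5 * (1.0 + 3.0 * w * Om_de)
    r = rho_de / rho_dm
    A = 1.0 + (2.0 / 3.0) / Om_dm * r * (-xi * (1.0 - q - 3.0 * w) + xi**2 * (1.0 + r))
    B = 2.0 - q + xi * r
    return Om_dm, Om_b, A, B

def rhs(a, y, xi):
    delta, ddelta = y                                 # delta_dm and d(delta_dm)/da
    Om_dm, Om_b, A, B = coefficients(a, xi)
    # delta_b ~ delta_dm is assumed, so the baryon source is folded into the same term
    d2 = -B * ddelta / a + 1.5 * (A * Om_dm + Om_b) * delta / a**2
    return [ddelta, d2]

def fsigma8(xi, sigma8_today=0.81, a_ini=1e-2):
    # matter-domination initial condition: delta ~ a, hence d(delta)/da = 1
    sol = solve_ivp(rhs, [a_ini, 1.0], [a_ini, 1.0], args=(xi,),
                    dense_output=True, rtol=1e-8)
    a = np.linspace(0.1, 1.0, 200)
    delta, ddelta = sol.sol(a)
    f = a * ddelta / delta                            # f = dln(delta)/dln(a)
    sigma8 = sigma8_today * delta / sol.sol(1.0)[0]   # sigma8(a) scales with delta
    return 1.0 / a - 1.0, f * sigma8

z, fs8 = fsigma8(xi=-0.3)                             # compare against the xi = 0 curve
```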
Figure <ref> shows the combined quantity fσ_8 versus the redshift for the standard ΛCDM model and also for interacting dark sector cosmologies as the coupling strength increases. We also illustrate the dependence of fσ_8 on the dark matter energy density. The observational values of fσ_8(z) are those reported in Tab. <ref>. Notice that as the coupling strength increases, the discrepancy with the ΛCDM prediction also does, as well as the inconsistency with the majority of the observational data points, especially for large values of the dark matter energy density. That is, for a fixed value of the dark matter energy density today, the departure with the standard growth of structure picture raises as the value of the coupling gets more negative.
§ METHODOLOGY AND DATA SETS
§.§ Cosmological measurements
In this section we present the datasets and likelihoods used to derive the constraints on our interacting dark matter-dark energy model.
CMB measurements- The Planck mission <cit.> has achieved exceptionally precise measurements of the power spectra of CMB anisotropies. The CMB power spectra contain a vast amount of information. We use as our baseline dataset the temperature (TT) and polarization (EE) auto-spectra, plus their cross-spectra (TE), as incorporated in the (for multipoles ℓ<30) and (for multipoles ℓ>30) likelihoods from the PR3 release <cit.>. In addition to the primary temperature and polarization anisotropy power spectra, we also have information on the power spectrum of the gravitational lensing potential <cit.>. All of the likelihoods described above are already included in <cit.>. In the following, we shall denote with Planck the results obtained using Planck temperature, polarization and lensing measurements.
RSD measurements-
As RSD reference measurements, we consider the data points in Tab. <ref> taken from Ref. <cit.>. If two or more sets of data concern the same cosmological tracer, the considered [fσ_8](z) data are obtained from uncorrelated redshift bins. On the other hand, when different cosmological tracers were analysed, the data used may come from correlated redshift bins. When the same galaxy survey performed two or more measurements corresponding to different data releases, we consider only the latest measurement of fσ_8. We built our own likelihood since the main observations exploited here are not included in . In the following, we shall denote this dataset as RSD.
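Since the data points are treated as uncorrelated, the RSD likelihood reduces to a Gaussian product over the fσ_8 measurements; a sketch is given below, where the few numerical values are placeholders standing in for the actual compilation of Tab. <ref>.

```python
import numpy as np

# Placeholder values standing in for the fsigma8 compilation of Tab. <ref>
z_data   = np.array([0.15, 0.38, 0.51, 0.70])
fs8_data = np.array([0.49, 0.497, 0.459, 0.473])
fs8_err  = np.array([0.15, 0.045, 0.038, 0.041])

def rsd_loglike(fs8_model):
    """fs8_model: callable returning the theoretical f*sigma8 at the data redshifts."""
    residual = fs8_data - fs8_model(z_data)
    return -0.5 * np.sum((residual / fs8_err) ** 2)
```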
Supernovae measurements-
Type Ia Supernovae (SN Ia) serve as standardizable candles that can be used to measure the expansion of the Universe. Within the ΛCDM model, SN Ia have lower statistical power with respect to modern BAO measurements, but are still useful to reduce the number of degeneracies inside the cosmological parameter set.
The SN Ia dataset considered in this work is the Pantheon+ compilation <cit.>. It contains 1550 spectroscopically confirmed SN Ia in the redshift range 0.001<z<2.26.
We use the public likelihood of Ref. <cit.>, included in . In the following, we shall denote this dataset as PantheonPlus.
DESI measurements-
DESI spectroscopic targets are selected from photometric catalogs of the 9th public data release of the DESI Legacy Imaging Surveys <cit.>. The five tracer samples, covering a total redshift range from z=0.1 to z=4.2, are:
Bright Galaxy Samples (BGS) <cit.> in the range 0.1<z<0.4, Luminous Red Galaxy Sample (LRG) <cit.> in the range 0.4<z<0.6 and 0.6<z<0.8, Emission Line Galaxy Sample (ELG) <cit.> in the range 1.1<z<1.6, combined LRG and ELG Sample (LRG+ELG) in the range 0.8<z<1.1 <cit.>, Quasar Sample (QSO) <cit.> in the range 0.8<z<2.1 and Lyman-α Forest Sample (Lyα) <cit.> in the range 1.77<z<4.16. In the following, we shall denote these BAO datasets as DESI.
§.§ Cosmological inference
The Boltzmann solver used to interface with the cosmological inference code ( <cit.>) is a modified version of the Cosmic Linear Anisotropy Solving System code () <cit.>, in order to include the effect of the coupling between the dark sectors.
The Bayesian inference is performed using the Metropolis-Hastings MCMC sampler <cit.> implemented in <cit.>. In order to test the convergence of the chains obtained using this approach, we utilize the Gelman-Rubin criterion <cit.>, and we establish a threshold for chain convergence of R-1 ≤ 0.01. We sample the set of parameters of our extension of the ΛCDM model including the dark coupling (referred from now on as ΛCDM+ξ), i.e. {w_b, w_dm, 100θ_s, ln(10^10A_s), n_s, τ, ξ}, where w_b(dm)=Ω_b(dm)h^2 is the baryon (cold dark matter) energy density, θ_s is the angular size of the horizon at the last scattering surface, τ is the optical depth, log(10^10A_s) is the amplitude of primordial scalar perturbation and n_s is the scalar spectral index. Table <ref> presents the priors used for all the seven sampled parameters in the MCMC analyses in the ΛCDM+ξ fiducial cosmology explored here. All of the priors are uniform distributions in the given ranges. We considered the equation of state parameter for dark energy, w, to be w=-1+ϵ, with ϵ = 0.01 in order to regularise early-time super-horizon instabilities in the dynamics of cosmological perturbations <cit.>.
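The Gelman-Rubin criterion quoted above can be computed with the standard between- and within-chain variances; a compact implementation, assuming equal-length chains after burn-in removal, is:

```python
import numpy as np

def gelman_rubin_minus_one(chains):
    """R - 1 for an array of shape (n_chains, n_samples, n_params)."""
    m, n, _ = chains.shape
    chain_means = chains.mean(axis=1)                    # (m, n_params)
    W = chains.var(axis=1, ddof=1).mean(axis=0)          # mean within-chain variance
    B_over_n = chain_means.var(axis=0, ddof=1)           # variance of the chain means
    var_hat = (n - 1.0) / n * W + B_over_n               # pooled variance estimate
    return np.sqrt(var_hat / W) - 1.0

# chains are considered converged when max over parameters of (R - 1) <= 0.01
```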
§ CONSTRAINTS ON THE DARK SECTOR COUPLING
In the following we shall present the constraints coming from the combination of different cosmological measurements. The combinations of the previously described datasets considered are:
* Planck;
* Planck + RSD;
* Planck + DESI;
* Planck + DESI + RSD;
* Planck + DESI + PantheonPlus + RSD.
We start by describing the constraints from Planck CMB data alone. The effect of the introduced coupling between dark matter and dark energy, as expected, has a significant influence mostly on the Ω_dmh^2 parameter. Indeed, as one can see from Fig. <ref>, there is a strong degeneracy between Ω_dmh^2 and ξ, with lower values of Ω_dmh^2 being permissible (with respect to the ΛCDM results) for more negative values of the coupling ξ. As a consequence, from Tab. <ref>, we can notice that the best-fit value for Ω_dmh^2 is significantly lower with respect to the ΛCDM one Ω_dmh^2|_Λ CDM=0.1200. All the other parameters, instead, appear to be less sensitive to the introduced coupling between dark energy and dark matter, as the obtained results are consistent with those obtained in the ΛCDM case (see Fig. <ref>). These results are consistent with the fact that Planck data alone are not able to break the strong degeneracy between the dark matter energy density and the coupling.
Once we consider RSD observations, despite the fact that also in this case the only parameter affected by the presence of the coupling is Ω_dmh^2, differently from the results obtained considering Planck alone, we notice that the RSD measurements are indeed very constraining. Figure <ref> makes clear this point, as the degeneracy between the dark matter energy density and the dark coupling is notably reduced with respect to the case of Planck data alone. Indeed, we notice that the best-fit value of Ω_dmh^2 quoted in Tab. <ref> matches the result obtained in the standard cosmological model using Planck data only. These results are consistent with the fact that RSD measurements constrain the late time physics of the Universe, when the dominant effects induced by the coupling ξ should become more important. RSD measurements help to break the degeneracy between ξ and Ω_dmh^2, placing a tighter constraint on the coupling ξ. When Planck data are combined with RSD measurements, the bounds mainly improve on the parameters related to the growth rate of structure.
The following data combination we describe here is Planck plus DESI BAO observations. As can be noticed from Fig. <ref>, the degeneracy between Ω_dm h^2 and ξ is the main effect of the coupling, as all other parameters perfectly reproduce the results obtained in the ΛCDM case. Interestingly, despite the fact that both RSD measurements and the DESI BAO measurements provide information on late time physics, RSD measurements have much more constraining power than DESI BAO observations, as the strong degeneracy between Ω_dm h^2 and ξ is more effectively alleviated in the case of the combination of CMB data with RSD. We also notice that the combination of Planck and DESI seems to prefer a more negative value of the coupling with respect to the case of Planck and RSD. This is due to the slightly larger value of the Hubble constant preferred by DESI observations. Notice that the best-fit for ξ differs from zero; we refer the reader to the recent work of Ref. <cit.> for further discussion. Nevertheless, in order to assess this neat result one should wait for future DESI data releases. Based on the results of DESI observations reported in <cit.>, it seems that a non-canonical value for the equation of state of the dark energy component is preferred. Interacting dark energy cosmologies, indeed, introduce an effective equation of state for dark energy (see Eq. (<ref>)), which differs from the standard cosmological constant value, and such a dark energy equation of state may be preferred by DESI measurements, showing therefore a mild preference for a non-zero coupling.
As previously stated, RSD measurements appear to have a much stronger effect on the constraints on ξ, thanks to the fact that they directly measure the impact of the interaction rate on the growth of structure.
Figure <ref> illustrates that the change in the bounds on the coupling when adding RSD to Planck+DESI is driven by the strong constraining power of the RSD dataset. The degeneracy of Ω_dmh^2 with the coupling ξ is greatly reduced due to the tighter lower bound on ξ, making the value of the Ω_dmh^2 parameter very similar to its ΛCDM result (see Tab. <ref>).
Figure <ref> illustrates the strong degeneracy between the Hubble constant H_0 and the coupling ξ. Indeed, as previously mentioned, IDE models could be able to alleviate the so-called H_0 tension, see Refs. <cit.>. We notice that Planck alone allows higher values of H_0, consistent with the measurements of SH0ES. This is related to the very strong degeneracy that appears between Ω_dmh^2 and the coupling ξ (see Fig. <ref>), which is translated into a degeneracy between H_0 and ξ. Also including DESI measurements, we notice that there is still a huge degeneracy, which reduces the H_0 tension since, for more negative couplings, higher H_0 values are allowed. As soon as we consider also Supernovae and RSD measurements, the constraints become tighter. In particular, considering the right panel of Fig. <ref>, we notice that the RSD measurements give a very tight constraint for the coupling, which translates into a value for the Hubble parameter today in agreement with the one we have from Planck. Therefore, focusing on Planck and Planck+DESI measurements, interacting dark energy models could fully solve the H_0 tension. We remind the reader that the addition of DESI data actually seems to prefer a non-zero coupling, even though still compatible with a zero value for ξ at the 95% CL. When one includes also SN Ia measurements, we obtain lower values of H_0 with respect to the previous dataset combinations considered, but higher than those obtained with the CMB measurements of Planck in the ΛCDM scenario. This alleviates the H_0 tension as well, even if not significantly. It is only when we introduce the RSD measurements that the constraints become extremely tight, favoring values of H_0 which are in agreement with the Planck ΛCDM results, and thus not allowing the tension to be alleviated. This is an important result, since the RSD measurements are the observable most sensitive to the effect of the coupling. The left panel of Fig. <ref> compares how the expected degeneracy between Ω_dmh^2 and ξ gets constrained by the different measurements. Here, the behaviour is the same as before. Since ξ and Ω_dmh^2 are strongly degenerate, a tighter constraint on the coupling ξ will lead to a tighter constraint on Ω_dmh^2. We notice that with Planck data alone the degeneracy is very strong, which is consistent with the fact that CMB measurements are less sensitive to late time physics. Including DESI measurements, we notice that the degeneracy gets mildly reduced. As soon as we consider also Supernovae and RSD measurements, the constraints become tighter. In particular, we notice that the RSD measurements give a very tight constraint for the coupling, which translates into a smaller degeneracy in the (Ω_dmh^2, ξ) plane.
§ CONCLUSIONS
We have investigated the role of the growth of structure in constraining non-standard dark energy cosmologies, focusing on an interacting dark energy (IDE) scenario, in which the interaction is proportional to the dark energy energy density. We place constraints on the phenomenological parameters of these alternative cosmological models by analyzing various cosmological datasets, in different combinations, to obtain a complementary analysis. The cosmological analyses yield constraints consistent with a zero coupling scenario (ξ∼ 0), considering all the datasets taken into account in this work. These findings indicate that current cosmological observations do not yet provide significant evidence for non-gravitational interactions between dark matter and dark energy. As one would expect, Planck observations are not able to set very strong constraints on the coupling, as they are less sensitive to the late-time physics of the Universe and therefore there is a strong degeneracy between the coupling and the dark matter energy density when considering CMB data only. When adding all the other datasets, the constraints become tighter. This is because Supernovae, BAO and RSD measurements are more sensitive to the time in which dark energy and the flow between the two dark sectors becomes important. Nevertheless, some of these late time observables are more sensitive than the others, with RSD providing the tightest constraints. The most interesting results are those obtained introducing the DESI BAO measurements, which lead to a non-zero best-fit value for the coupling, even if still consistent with a vanishing value for ξ at 95% CL. In order to fully assess this result, one should wait for future releases from the DESI collaboration. We find, as we would expect, that the largest degeneracy of the coupling ξ appears with the density parameter of cold dark matter, Ω_dmh^2. The degeneracy between these two parameters is strong when considering only CMB measurements. Nevertheless, when we introduce other measurements, which are more sensitive to late time physics (when ξ becomes more important), the degeneracy is reduced. Our analyses place particular emphasis on RSD measurements, which have not been extensively highlighted in the recent literature. RSD data are crucial: indeed, we found that it is only when we add them that the constraints become much tighter, helping enormously in breaking the degeneracies. Our findings have implications for the H_0 tension, which suggests discrepancies in the Hubble constant values derived from different observational methods. For the same reason as above, while some measurements present a huge degeneracy between H_0 and ξ, alleviating the tension, when we include the RSD data the degeneracy almost disappears and the constraints on the Hubble constant are restored to the Planck ΛCDM values. Even though the cosmological analyses including all the data sets considered here point towards a vanishing coupling scenario, indicating that the interacting dark energy cosmologies could not have the final answer to the H_0 tension, one must keep in mind that we only focused in this work on a particular case, among all the possible ones. In summary, while current observations favor a zero dark energy-dark matter coupling scenario, the use of cosmological observations, including RSD measurements, lays the groundwork for future investigations into the dynamics of the Universe.
Expanding the range of models and incorporating future datasets will be essential in uncovering the true nature of dark energy, dark matter, and their potential interactions.
The authors would like to thank William Giarè for enlightening discussions. This work has been supported by the Spanish MCIN/AEI/10.13039/501100011033 grants PID2020-113644GB-I00 (RH and OM) and by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grants H2020-MSCA-ITN-2019/860881-HIDDeN and HORIZON-MSCA-2021-SE-01/101086085-ASYMMETRY. RH is supported by the Spanish grant FPU19/03348 of MU.
The authors also acknowledge support from the Generalitat Valenciana grants PROMETEO/2019/083 and CIPROM/2022/69 (RH and OM). RH and OM acknowledge the Galileo Galilei Institute for Theoretical Physics (GGI) for their hospitality during the completion of this work. OM acknowledges the financial support from the MCIU with funding from the European Union NextGenerationEU (PRTR-C17.I01) and Generalitat Valenciana (ASFAE/2022/020).
|
http://arxiv.org/abs/2409.02310v1 | 20240903214135 | Geometry-aware Feature Matching for Large-Scale Structure from Motion | [
"Gonglin Chen",
"Jinsen Wu",
"Haiwei Chen",
"Wenbin Teng",
"Zhiyuan Gao",
"Andrew Feng",
"Rongjun Qin",
"Yajie Zhao"
] | cs.CV | [
"cs.CV"
] |
Geometry-aware Feature Matching for Large-Scale Structure from Motion
Gonglin Chen, Jinsen Wu, Haiwei Chen, Wenbin Teng, Zhiyuan Gao, Andrew Feng, Rongjun Qin, Yajie Zhao
September 3, 2024
==========================================================================
§ ABSTRACT
Establishing consistent and dense correspondences across multiple images is crucial for Structure from Motion (SfM) systems. Significant view changes, such as air-to-ground with very sparse view overlap, pose an even greater challenge to the correspondence solvers. We present a novel optimization-based approach that significantly enhances existing feature matching methods by introducing geometry cues in addition to color cues. This helps fill gaps when there is less overlap in large-scale scenarios. Our method formulates geometric verification as an optimization problem, guiding feature matching within detector-free methods and using sparse correspondences from detector-based methods as anchor points. By enforcing geometric constraints via the Sampson Distance, our approach ensures that the denser correspondences from detector-free methods are geometrically consistent and more accurate. This hybrid strategy significantly improves correspondence density and accuracy, mitigates multi-view inconsistencies, and leads to notable advancements in camera pose accuracy and point cloud density. It outperforms state-of-the-art feature matching methods on benchmark datasets and enables feature matching in challenging extreme large-scale settings.
§ INTRODUCTION
Correspondence, as perhaps the most fundamental problem in computer vision, helps us with mapping the world. Solving correspondence is required in most applications in Augmented Reality and Robotics since it derives the camera poses and 3D point locations <cit.>.
In the 3D reconstruction from a set of uncalibrated images, the correspondence problem meets two additional demands: 1) the demand to be able to observe a corresponding point across multiple views, which is known as the track length of a correspondence point, and 2) the demand to find a dense set of such points such that they are sufficient for the recovery of 3D geometry of the scene.
In this paper, we consider a very difficult correspondence problem—dense, consistent correspondences across a sequence of frames with large camera baselines. To our knowledge, no prior works have achieved satisfactory results in the presence of these challenges: both detector-based and detector-free methods struggle with significant view changes, as illustrated in Figure <ref>, due to their inherent limitations. Detector-based methods often match only a small number of keypoints, making accurate image registration difficult.
While the 3D reconstruction problem described thereof has been well studied for decades, it is notably tackled by either requiring a pre-calibration of the captured cameras <cit.>, or requiring that consecutive frames are captured closely located to each other <cit.>. In recent years, however, correspondence methods based on learned artificial neural networks (ANNs) have demonstrated significant improvements in the accuracy and robustness of the matching between a pair of in-the-wild, uncalibrated images <cit.>, which has given rise to the possibility of reconstructing 3D scenes from the abundant, in-the-wild photos captured in daily settings <cit.>.
Recent state-of-the-art correspondence methods are separated into two major branches: the detector-based methods <cit.> that employ two independent steps to extract feature descriptor for key points first and perform matching later, and the detector-free methods <cit.> that find matches in an end-to-end trained neural networks in one shot. While the detector-based methods excel in performing matching among image sets with rich texture and small view point changes, their reliance on the keypoint detection and description poses significant challenges when images of interest lack textures, or their local features are not sufficient for matching.
On the other hand, the detector-free methods <cit.> naturally find much denser correspondences between a pair of images with a dense search, but they typically lack control of the locations of keypoints across views and are optimized for sub-pixel correspondences that depend on specific image pairs. As shown in Figure <ref>, this pair dependency makes them unsuitable for maintaining long-track correspondence across multiple images. Therefore, it leads to poor performance when these methods are directly applied to Structure-from-Motion (SfM) reconstruction. Though quantization on the found matches can mitigate the problem of short track length, it comes at the price of accuracy and requires additional refinement that makes the entire system more computationally expensive.
We aim to bridge the gap between the detector-based and detector-free methods in order to obtain dense correspondences that can be tracked across multiple views with large baselines - addressing a very difficult correspondence problem in SfM reconstruction from in-the-wild image collections. Specifically, we propose a novel optimization-based method that refines the pair-wise dense correspondences by iteratively filtering and reassigning correspondence under a geometric constraint formulated by the Sampson Distance <cit.>. This method allows the early rejection of outliers during the matching steps, improving the reliability of correspondences in large-baseline scenarios.
Interestingly, we have observed that incorporating both priors from the existing detector-based method and enforcing the geometric constraint helps our model achieve better camera prediction and longer track length among the found correspondences, as demonstrated in Section <ref>.
The designed optimization module thus consistently improves upon any detector-based and detector-free backbones, by providing denser correspondences compared to the detector-based methods, while refining the detector-free results to be more accurate and consistent across multiple views. 3D point cloud reconstructed with our feature matcher thus excels in both density and accuracy compared to previous state-of-the-art approaches.
To validate the effectiveness of our proposed method, we have conducted comprehensive evaluations using several publicly available datasets, including Image Matching Competition Benchmark <cit.> and MegaDepth <cit.>. We compared the SfM reconstruction results using matches
from our feature matcher against several state-of-the-art baseline methods. The
results demonstrated better accuracy in estimated camera poses and longer track length compared to those from the baselines.
Additionally, we have extended our evaluation to a challenging large-baseline setting that requires mapping from the UAV to street view perspectives (Figure <ref>).
§ RELATED WORK
§.§ Detector-based Feature Matching
A conventional detector-based matching framework typically detects keypoints in images, describes them using high-dimensional vectors, and further applies a matching algorithm to find the correspondences between these image sets <cit.>. Historically, numerous handcrafted methods like SIFT <cit.>, SURF <cit.>, and ORB <cit.> have been used for the keypoint detection. Lately, learning-based methods <cit.>, particularly those leveraging convolution neural networks (CNNs), have significantly improved the performance over the handcrafted features and shown robustness under large viewpoint and illumination changes. For instance, SuperPoint <cit.> achieved a good performance by creating a vast dataset of pseudo-ground-truth interest points and using a self-supervised interest point detector. SuperGlue <cit.> learns feature matching using point features from a graph neural network (GNN) and attention layers that integrate global context information.
Patch2Pix <cit.> has made further improvement with patch-level matches and geometry-guided refinement that has enhanced the correspondence accuracy. Patch2Pix <cit.> utilizes the Sampson distance <cit.> to refine initial patch-level matches to pixel-level accuracy, ensuring geometric consistency and precision. However, Patch2Pix <cit.> is limited to refining existing sparse matches and cannot add new matches or extend beyond the initial patch-level matches.
Despite their success, detector-based methods are limited by their dependency on feature detectors, which struggle to consistently identify distinct interest points in areas that lack clear and distinguishable features.
§.§ Detector-free Feature Matching
Detector-free approaches <cit.> bypass the feature detection stage and directly generate matches, following a concept that originates from SIFT Flow <cit.>. The introduction of learning-based methods that employ contrastive loss to train pixel-wise feature descriptors <cit.> marked a significant development in this field. These methods - similar to detector-based approaches - employ the nearest neighbor search for post-processing to match the dense descriptors.
NCNet <cit.> learns dense correspondences in an end-to-end manner, creating 4D cost volumes that account for all possible inter-image matches. 4D convolutions are applied to refine these volumes, promoting consistency across the correspondence field. Building on this work, SparseNCNet <cit.> improved the efficiency with sparse convolutions. DRCNet <cit.> improved dense matching accuracy with a coarse-to-fine architecture. While NCNet addresses all potential matches, its 4D convolution’s receptive field is limited to small local neighborhoods.
LoFTR <cit.>, in contrast, aims for a broader match consensus, leveraging the expansive receptive field of transformers. ASpanFormer <cit.>, MatchFormer <cit.>, and Efficient LoFTR <cit.> follows the transformer-based design to utilize multilevel cross-attention to mutually update image features, thereby incorporating dual-view contexts for achieving both global and local consensus. While these methods represent significant improvements to the state-of-the-art, they still struggle to uphold high-level contexts such as epipolar geometries. Therefore, we propose a geometry-aware approach that enforces epipolar consistency in dense feature matching.
§.§ Structure from Motion
Structure from Motion (SfM) methods have been extensively studied <cit.> in the past. Some of these efforts focus on scaling to large-scene reconstruction <cit.>. SfM methods mainly use detector-based matching at the beginning of their pipelines, which limits the overall SfM performance in challenging cases. Some end-to-end SfM methods propose to directly model camera poses <cit.>. These methods avoid the bottleneck of feature detection but have limited scalability to real-world scenarios. Although recent differentiable rendering methods <cit.> can optimize poses and the scene representation simultaneously, they rely on traditional SfM pipelines like COLMAP <cit.> to provide the initial camera poses. Lindenberger et al. <cit.> improve the accuracy of SfM and visual localization by refining keypoints, camera poses, and 3D points after the traditional SfM pipeline. Unlike previous SfM methods, our proposed feature matcher ensures robustness and local geometry consistency at the first step of an SfM system.
§ METHODS
The core of our method is an optimization module that operates on existing sparse matches M_s and two feature maps f_c^A and f_c^B from a pair of images. The module optimizes the relative camera pose between the two images and refines the input coarse correspondences into refined coarse correspondences M_c_final that are geometrically consistent under a Sampson distance <cit.> constraint. In Section <ref>, we provide an overview of how the sparse matches and feature maps are obtained, as well as the definition of the Sampson distance. In Section <ref>, we discuss the details of our optimization module.
§.§ Preliminaries
§.§.§ Detector Based Method
Given two images I^A and I^B, our method accepts a detector-based backbone F_db that first identifies the sparse interest points with a keypoint detector. Then, local features are extracted around the identified interest points, encoding the local visual appearance in high-dimensional vectors. These features are then matched between I^A and I^B using either learned neural networks <cit.> or nearest-neighbor search to generate a set of sparse correspondences M_s. In our implementation, we use SuperPoint <cit.> + SuperGlue <cit.> to obtain sparse correspondence between a given image pair as it has been shown that the two methods combined can provide anchor points that introduce robust and good priors to guide dense matching <cit.>. We also evaluate the performance of different detector-based backbones in Section <ref>.
§.§.§ Detector Free Method
Given two images I^A and I^B, our method accepts a detector-free backbone F_df that processes the entire image to output semi-dense correspondences without relying on explicit keypoint detection. Instead, dense feature extraction is performed using deep neural networks. The output features f_c^A and f_c^B are then matched using similarity metrics, which are generated to quantify the confidence and select the final correspondences. In our implementation, we use AspanFormer <cit.> as our detector-free backbone.
§.§.§ Sampson Distance
Given a fundamental matrix F_m and the correspondence P^A_i, P^B_j, the Sampson Distance d(i,j) is the distance of P^A_i, P^B_j to their true epipolar line, which is given by:
d(i,j) = ( 𝐏_j^B^⊤𝐅_𝐦𝐏_i^A )^2 / [ (𝐅_𝐦𝐏_i^A)_1^2 + (𝐅_𝐦𝐏_i^A)_2^2 + (𝐅_𝐦^⊤𝐏_j^B)_1^2 + (𝐅_𝐦^⊤𝐏_j^B)_2^2 ] ,
where 𝐏_i^A = (x_i, y_i, 1)^⊤ and 𝐏_j^B = (x_j, y_j, 1)^⊤ are two points in I^A and I^B respectively, and (𝐅_𝐦𝐏_i^A)_k^2 is the square of the k-th entry of the vector (𝐅_𝐦𝐏_i^A). Therefore, the Sampson distance <cit.> gives the
first-order approximation of the reprojection error. The error is minimized when two points are a perfect match.
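For completeness, a vectorised sketch of the Sampson distance for row-stacked homogeneous points is given below; it is only an illustration of the equation above, not the actual implementation.

```python
import numpy as np

def sampson_distance(F, pts_a, pts_b):
    """Sampson distance for homogeneous points pts_a, pts_b of shape (N, 3),
    given a fundamental matrix F of shape (3, 3)."""
    Fx_a  = pts_a @ F.T          # rows are F @ P_i^A
    Ftx_b = pts_b @ F            # rows are F^T @ P_j^B
    num = np.sum(pts_b * Fx_a, axis=1) ** 2                        # (P_j^B^T F P_i^A)^2
    den = Fx_a[:, 0]**2 + Fx_a[:, 1]**2 + Ftx_b[:, 0]**2 + Ftx_b[:, 1]**2
    return num / den
```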
§.§ Geometry Aware Optimization Module
Our proposed optimization module aims to enforce geometric consistency among correspondences and to guide the feature matching with geometric information. We utilize M_s from F_db to initialize the fundamental matrix. With dense features f_c^A and f_c^B from F_df, our module first computes a confidence map P_c using the similarity metrics.
Then the Sampson Distance <cit.> is used as a geometry constraint of an image pair to iteratively select and reassign the correspondences by updating P_c. Such constraints can reduce mismatches and address missed matches due to insufficient appearance features and repeating texture patterns.
Initial Step
Given the features f_c^A and f_c^B, the initial confidence matrix is obtained with a dual-softmax operator. For the i-th feature in I^A and the j-th feature in I^B, the similarity score is calculated as
S(i, j) = ⟨𝐟_c^A(i), 𝐟_c^B(j) ⟩ ,
where ⟨ a, b ⟩
denotes the dot product. Then the confidence level P_c is calculated as
P_c(i, j) = softmax(S(i, ·))_j ·softmax(S(·, j))_i ,
Based on the initial confidence matrix, we follow LoFTR <cit.> and apply the mutual nearest neighbor (MNN) criteria to obtain the initial matches
M_c = {(ĩ,j̃)|∀(ĩ,j̃)∈ MNN(P_c), P_c(ĩ,j̃) ≥θ_iter} .
The row index i represents the index of the patch from I^A and the column index j represents the index from I^B. A pair (ĩ,j̃) is a match if and only if its confidence is the maximum of both the ĩ-th row and the j̃-th column. We keep points whose confidence is larger than θ_iter as matches entering the iterative steps.
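The dual-softmax confidence and the MNN selection combine into a few lines of array code; the sketch below (temperature scaling omitted) returns both the confidence map and the indices of the initial matches.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_softmax_matches(fA, fB, theta_iter=0.01):
    """fA: (N, d), fB: (M, d) coarse features.  Returns the confidence map P_c
    and the indices of the initial mutual-nearest-neighbour matches."""
    S = fA @ fB.T                                         # similarity matrix
    Pc = softmax(S, axis=1) * softmax(S, axis=0)          # dual-softmax confidence
    mutual = (Pc == Pc.max(axis=1, keepdims=True)) & (Pc == Pc.max(axis=0, keepdims=True))
    i_idx, j_idx = np.nonzero(mutual & (Pc >= theta_iter))
    return Pc, i_idx, j_idx
```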
The initial fundamental matrix F_m is calculated using the normalized 8-point method if there exist enough good anchor points in M_s, i.e. at least 10 points with confidence larger than 0.5. Otherwise, the initial fundamental matrix is calculated using the top half of the matches from equation <ref>.
Iterative Steps
Our iterative steps begin with the initial coarse matches M_c and the fundamental matrix F_m and update them at each iteration. The Sampson distance error is calculated for all initial matches according to Equation <ref>.
Geometric confidence is then estimated from the Sampson distances as
P_d(i, j) = sigmoid(relu(τ - d(i, j))) ,
where τ is a distance threshold, and smaller values of τ will lead to more strict constraints. The geometric confidence for correspondences that have distances larger than the distance threshold will be 0.5. In other words, the confidence of matches that violate the geometry constraint will be penalized.
Finally, the confidence map is updated with the geometric confidence map through a given weight w. Therefore, at iteration n the confidence is
P_c(i, j)_n = P_c(i, j)_n-1· P_d(i, j)_n-1· w .
At every iteration, the updated confidence map is normalized using min-max normalization to avoid extreme values. When the iteration ends, the module will select the final coarse-level matches
M_c_final based on the final confidence map from the last iteration with mutual nearest neighbor criteria according to equation <ref>, but with a different threshold θ_c to avoid selecting a large amount of correspondences with very small confidences.
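Putting the pieces together, the iterative re-weighting can be sketched as follows. The handling of confidence entries outside the current match set and the value of the final threshold θ_c are not fully specified above, so the choices made here (leave such entries untouched; θ_c as a free parameter) are assumptions; the fundamental matrix is also kept fixed in this sketch, whereas it can be re-estimated from the updated matches at every iteration. The sampson_distance helper is the one sketched earlier.

```python
import numpy as np

def geometry_aware_update(Pc, grid_a, grid_b, F, tau=10.0, w=1.2, n_iter=10,
                          theta_iter=0.01, theta_c=0.2):
    """Pc: (N, M) coarse confidence map; grid_a/grid_b: (N, 3)/(M, 3) homogeneous
    coordinates of the coarse-grid patch centres in the two images."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(n_iter):
        mutual = (Pc == Pc.max(axis=1, keepdims=True)) & (Pc == Pc.max(axis=0, keepdims=True))
        ii, jj = np.nonzero(mutual & (Pc >= theta_iter))
        d = sampson_distance(F, grid_a[ii], grid_b[jj])        # helper sketched earlier
        Pd = np.ones_like(Pc)                                  # assumption: other entries untouched
        Pd[ii, jj] = sigmoid(np.maximum(tau - d, 0.0))         # sigmoid(relu(tau - d))
        Pc = Pc * Pd * w
        Pc = (Pc - Pc.min()) / (Pc.max() - Pc.min() + 1e-12)   # min-max normalisation
    mutual = (Pc == Pc.max(axis=1, keepdims=True)) & (Pc == Pc.max(axis=0, keepdims=True))
    return np.nonzero(mutual & (Pc >= theta_c))                # final coarse matches
```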
The refined matches M_c_final are further fed into a correlation-based refinement block, which is the same as in LoFTR <cit.>, to obtain the final matching results.
§.§ Structure from Motion Reconstruction
The final matching results are fed into COLMAP <cit.> for estimating structure from motion. By integrating our robust and dense matches into COLMAP <cit.>, our method is shown to enhance the accuracy and completeness of the camera poses and reconstructed point cloud, resulting in superior 3D models, even in challenging scenarios with significant viewpoint changes, as discussed in Section <ref>.
§.§ Implementation Details
The Sampson distance threshold τ is set to 10, which allows a few pixels of error. The number of iterations of the optimization module is set to 10 and the weight w to 1.2. The confidence threshold θ_iter is set to a small value of 0.01; the idea is to conduct a comprehensive search across the space. The effect of different hyperparameters will be discussed in Section <ref>.
§ EXPERIMENTS
In this section, we demonstrate the performance of our method in the SfM reconstruction task in terms of the accuracy of camera pose estimation and its application in challenging real-world scenarios. We also conduct a comprehensive ablation study. All experiments were run on an RTX 4090 GPU with 24GB VRAM and a 13th Gen Intel® Core™ i9-13900KF CPU with 32 cores and 126GB RAM.
§.§.§ Dataset
We utilize the Image Matching Challenge (IMC) 2021 <cit.> and ScanNet <cit.> benchmark datasets to evaluate our method's performance in SfM reconstruction for outdoor and indoor settings respectively. As there are no publicly available air-to-ground datasets, we collected two additional air-to-ground datasets to demonstrate our method's ability to find reliable correspondences in this challenging scenario.
The IMC Phototourism benchmark <cit.> encompasses multiple large-scale outdoor scenes. The test set includes 9 sites, and each set has 100 images. The ground truth or pseudo ground truth camera poses are obtained from COLMAP <cit.> on the entire collection of images and then sampled into test sets. ScanNet <cit.> contains 1613 monocular indoor sequences with ground truth poses and depth maps. We use the data split of the test set following SuperGlue <cit.> and LoFTR <cit.>, which includes 15 sequences. The primary challenge lies in accurately recovering the camera poses from in-the-wild images with significant viewpoint and illumination changes. For all test scenes, the images are considered unordered.
The first collected air-to-ground dataset consists of 16 UAV images and 21 ground images, while the second dataset contains 16 UAV images and 24 ground images. The primary challenge for these datasets is to accurately register UAV images with ground view images. For all scenes, the images are considered unordered and are matched exhaustively. When aerial images are matched with ground images, to avoid failure in finding correspondences due to resizing, the aerial images are cropped into quarters plus a center region to generate five pairs. The matches are subsequently merged to obtain the final matches. This strategy is used for all methods.
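A sketch of this cropping-and-merging strategy is given below; match_pair stands for any of the evaluated matchers and is assumed to return pixel coordinates of the correspondences in the two inputs.

```python
import numpy as np

def crops_with_offsets(img):
    """Four quarters plus a centre crop of an aerial image, with pixel offsets."""
    h, w = img.shape[:2]
    return [
        (img[: h // 2, : w // 2], (0, 0)),                                   # top-left
        (img[: h // 2, w // 2 :], (w // 2, 0)),                              # top-right
        (img[h // 2 :, : w // 2], (0, h // 2)),                              # bottom-left
        (img[h // 2 :, w // 2 :], (w // 2, h // 2)),                         # bottom-right
        (img[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4], (w // 4, h // 4)),   # centre
    ]

def match_air_to_ground(aerial, ground, match_pair):
    pts_a, pts_g = [], []
    for crop, (ox, oy) in crops_with_offsets(aerial):
        pa, pg = match_pair(crop, ground)                 # (K, 2) pixel coordinates each
        pts_a.append(pa + np.array([ox, oy]))             # back to full-image coordinates
        pts_g.append(pg)
    return np.concatenate(pts_a), np.concatenate(pts_g)   # merged matches
```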
§.§.§ Metrics and Comparing Methods
We report the metric area-under-curve (AUC) to evaluate camera pose accuracy for IMC 2021 <cit.> and ScanNet <cit.>. Specifically, for every reconstructed model, we aligned the predicted camera poses with the ground truth camera poses. We then computed the absolute angular translation and rotation errors. The AUC selects the maximum between these two errors and outputs the area under the accuracy threshold curve. As the collected air-to-ground datasets lack ground-truth camera poses, we follow <cit.> to report the number of registered images, the number of 3D points, the average track length, and the average projection error (px) in this setting to evaluate the methods by the completeness of the reconstructed model.
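For reference, a common way of computing the pose AUC from per-image (or per-pair) maximum angular errors, assumed here, is the following sketch.

```python
import numpy as np

def pose_auc(max_angular_errors_deg, thresholds=(3.0, 5.0, 10.0)):
    """AUC of the pose-accuracy curve; the input holds, per image (or pair), the
    maximum of the angular rotation and translation errors, in degrees."""
    errors = np.sort(np.asarray(max_angular_errors_deg, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))
    recall = np.concatenate(([0.0], recall))
    aucs = []
    for t in thresholds:
        last = np.searchsorted(errors, t)
        e = np.concatenate((errors[:last], [t]))           # clip the curve at the threshold
        r = np.concatenate((recall[:last], [recall[last - 1]]))
        aucs.append(np.trapz(r, x=e) / t)                  # normalised area under the curve
    return aucs
```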
We compared our method against four different methods:
SIFT <cit.>+NN, which is the default feature matching method in COLMAP <cit.>; SuperPoint <cit.>+Superglue <cit.>, a robust method widely used in SfM; ALIKED <cit.>+LightGlue <cit.>, a state-of-the-art detector-based feature matching method; DKM <cit.> and RoMa <cit.>, two state-of-the-art dense feature matching methods. We omit RoMa <cit.> for air-to-ground datasets since DKM <cit.> appears to be superior to RoMa <cit.> in terms of 3D reconstruction for outdoor scenes.
In order to reduce the computational time, the number of matches per pair is limited to 4K for all methods. Bold text indicates the best results, and underlined text indicates the second best.
§.§ Comparison to State of the Art Methods
§.§.§ Results
SfM results using our feature matching method on both the IMC 2021 <cit.> and the ScanNet <cit.> datasets have shown significant improvements in performance. As shown in Table <ref> and Table <ref>, our feature matcher contributes to the highest average pose estimation AUC in both datasets. Specifically, Table <ref> indicates an increase of +3.1 in AUC@3 for estimated camera poses compared to ALIKED <cit.> + LG <cit.>, as can be visually shown in Figure <ref>. Similar gains are observed for ScanNet <cit.>, as shown in Table <ref>. These findings underscore the efficacy of our method in achieving superior SfM results.
As shown in Table <ref> and Figure <ref>, compared to other methods, our method is able to register all images and is the only method that can align the ground images with UAV images for set 2. We have noticed that reconstructed camera poses using DKM <cit.> in set 1 are apparently wrong with ground images pointing down and speculate that it is due to the lack of multi-image matches for feature tracking. In addition, DKM <cit.> also fails in reconstructing a model for UAV images in set 2. These results further validate the practical utility and effectiveness of our feature matcher in real-world scenarios.
§.§ Ablation Study
We have conducted an ablation study to investigate the impact of different parameters and design choices on both feature matching and 3D reconstruction.
Ablation study on Homography Estimation. Feature matching is evaluated on the widely adopted HPatches dataset <cit.>. HPatches <cit.> contains a total of 108 sequences with significant illumination changes and large viewpoint changes. We follow the evaluation protocol of LoFTR <cit.> to resize the shorter side of the image to 480 and report AUCs at 3 different thresholds. Table <ref> shows results for different parameters and design choices on HPatches <cit.>. The “Baseline” denotes only using ASpanFormer <cit.>. The results show the effects of different parameters in feature matching. Additionally, the full method outperforms other design variations, confirming the robustness and effectiveness of our approach in enhancing feature matching accuracy. More experiments related to feature matching can be found in the supplementary material.
Ablation study on 3D reconstruction. In order to comprehensively assess the efficacy of our proposed components within the SfM system, we also measure the impact of the aforementioned design changes on 3D reconstruction. To this end, we randomly sample 20 images from each scene of IMC 2021 phototourism datasets <cit.> and compare the SfM reconstruction results based on pose estimation. We follow IMC <cit.> to report relative pose errors for all possible pairs since we are using subsets for this experiment. All results are averaged across all scenes and all methods are evaluated on the same subset of data.
“SG” denotes anchor points from SuperGlue <cit.>; if the optimization (“Geo”) module is not enabled, anchor points are naively concatenated to the final matches. If there are no anchor points, the optimization (“Geo”) module is initialized with the top half most confident matches from only the detector-free backbone. As shown in Tables <ref> and <ref>, the results again demonstrate the effectiveness of our proposed method as it brings improvement over the baseline methods in the accuracy of SfM reconstruction.
Ablation study on detector-based backbones. Following the setting of Section <ref>, Table <ref> compares the performance of using two detector-based methods on the IMC dataset <cit.>. This demonstrates that SP<cit.> + SG<cit.> is more effective in providing priors in guiding dense matching.
§ LIMITATION AND FUTURE WORK
The main limitation of our method is efficiency, due to the computational cost of the transformer-based architecture and the significant number of correspondences produced, as shown in Table <ref>. Utilizing a newly developed backbone model with a focus on efficiency, such as Efficient LoFTR <cit.>, can improve this at the cost of slightly degraded performance.
Due to our method's reliance on the backbone model to provide initial matches for fundamental matrix estimation and its heuristic design, our method could fail with poor initial matches from backbone models. Nevertheless, our evaluations against baselines demonstrate a marked improvement in accuracy, as well as the capability to find reliable matches in challenging scenarios. Furthermore, we believe the accuracy of SfM reconstruction can be further improved by adopting multi-view refinement techniques, such as those proposed by pixSfM <cit.>.
§ CONCLUSION
We propose the geometry-aware optimization module, an innovative optimization approach that bridges detector-based and detector-free matchers to improve SfM reconstruction and the accuracy of estimated camera poses. This module leverages epipolar constraints, iteratively optimizes for the best relative pose, and applies Sampson distance weighting to refine match precision. The introduction of geometry optimization enhances the system's ability to handle challenging scenarios, including drastic changes in perspective, scale, and illumination. Our method is also shown to be effective in 3D reconstruction for large baselines, a scenario where conventional methods often fail or perform unsatisfactorily.
§ ACKNOWLEDGEMENT
Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DOI/IBC) contract number 140D0423C0075. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily
representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.
|
http://arxiv.org/abs/2409.03547v1 | 20240905140929 | A Silicon Photonic Neural Network for Chromatic Dispersion Compensation in 20 Gbps PAM4 Signal at 125 km and Its Scalability up to 100 Gbps | [
"Emiliano Staffoli",
"Gianpietro Maddinelli",
"Lorenzo Pavesi"
] | cs.ET | [
"cs.ET",
"physics.app-ph",
"physics.optics"
] |
A Silicon Photonic Neural Network for Chromatic Dispersion Compensation in 20 Gbps PAM4 Signal at 125 km and Its Scalability up to 100 Gbps
Emiliano Staffoli,
Gianpietro Maddinelli,
and Lorenzo Pavesi, Fellow, IEEE
E. Staffoli, G. Maddinelli, and L. Pavesi are with Department
of Physics, University of Trento, Trento,
38123, Italy, e-mail: emiliano.staffoli@unitn.it.
5 September 2024
==========================================================================================================================================================================================================================================================
§ ABSTRACT
A feed-forward photonic neural network (PNN) is tested for chromatic dispersion compensation in Intensity Modulation/Direct Detection optical links. The PNN is based on a sequence of linear and nonlinear transformations. The linear stage is constituted by an 8-tap time-delayed complex perceptron implemented on a Silicon-On-insulator platform and acting as a tunable optical filter. The nonlinear stage is provided by the square modulus of the electrical field applied at the end-of-line photodetector. The training maximizes the separation between the optical levels (i.e. the eye diagram aperture), with consequent reduction of the Bit Error Rate. Effective equalization is experimentally demonstrated for 20 Gbps 4-level Pulse Amplitude Modulated signal up to 125 km. An evolutionary algorithm and a gradient-based approach are tested for the training and then compared in terms of repeatability and convergence time. The optimal weights resulting from the training are interpreted in light of the theoretical transfer function of the optical fiber. Finally, a simulative study proves the scalability of the layout to larger bandwidths, up to 100 Gbps.
Photonic Neural Network, PAM4, Equalization, Silicon Photonics, Chromatic Dispersion
§ INTRODUCTION
In the last decades, with the advent of the Internet, data transmission has assumed a central role in the development of society. Fiber optics-based communications represented a breakthrough technology in this field, allowing for long-distance transmission at large bandwidths. Nowadays, this technology represents the backbone of many applications, including cloud services, e-commerce, streaming platforms, big data transfer inter- and intra-data centers, and, recently, Artificial Intelligence <cit.>. Market needs require an ever higher Capacity × Distance product in the transmission lines, pushing the research to find new transmission and detection techniques that adapt to different scenarios and costs <cit.>. Coherent transceivers are mostly employed in long-haul propagation, where maximum spectral efficiency is required <cit.>. To increase the propagation distance, high optical launch power can be used. This triggers linear and nonlinear effects <cit.>, which in turn induce transmission impairments. Error compensation is provided by Digital Signal Processing (DSP) performed at the transceiver level <cit.>, often requiring Application-Specific Integrated Circuits (ASICs) featuring high power consumption and introducing latency <cit.>. On the other hand, short-reach applications rely on Intensity Modulation/Direct Detection (IM-DD), providing lower spectral efficiency but reducing hardware complexity, costs, and energy consumption compared to the coherent counterpart <cit.>. In this framework, errors in data transmission are induced by linear effects such as Polarization Mode Dispersion, Symbol Timing Offset, Optical filtering, and Chromatic Dispersion (CD) <cit.>. The latter represents the dominant source of impairments, causing a time broadening of the propagating pulses with the consequent Intersymbol Interference (ISI). This effect becomes more severe with increasing bandwidth and propagation distance. Dispersion Compensating Fibers <cit.> are nowadays widely used to counteract ISI, but their application is limited by high production costs, their additional contribution to the latency of the transmission, and their non-tunability (being static devices after implementation). The alternative of Chirped Fiber Bragg Gratings <cit.> drastically reduces the latency, but high power consumption is required for thermal or electrical tuning of the devices. A further option is represented by integrated Photonic Neural Networks (PNNs) <cit.>, which allow many of the limitations just mentioned to be overcome. Signal processing performed directly in the optical domain reduces power consumption and minimizes latency. Moreover, being tunable, a PNN can be adapted to different transmission scenarios by tuning the β_m parameters that define its dispersion profile (β_m = d^m β/dω^m, β being the single-mode propagation constant <cit.>). Finally, PNNs realized as Photonic Integrated Circuits (PICs) in CMOS-compatible processes feature low costs and small footprints, allowing the embedding of the photonics platforms directly into a transceiver <cit.>.
In this work, we experimentally demonstrate the use of an integrated PNN for CD equalization. The very simple design is based on an 8-channel time-delayed complex perceptron <cit.>. The input optical signal is split into 8 copies that accumulate a relative time delay Δ t = 25 ps between each other. Each copy is then applied with an amplitude and a phase weight and finally recombined with the others. The present design is a feed-forward architecture, with the nonlinear activation function constituted by the square modulus of the propagating electric field applied at the end-of-line photodetector. Signal processing is performed with minimized latency at the optical level, except for the training phase, which is performed offline. Signal equalization is applied to a 10 Gbaud 4-level Pulse Amplitude Modulated (PAM4) signal propagating in a standard single-mode fiber up to 125 km long. A first demonstration of the use of a time-delayed complex perceptron was reported in <cit.>. There, a PNN with 4 channels, longer delay lines (Δ t = 50 ps), and only phase weights was tested. Here, we report on an improved PNN with more channels and shorter Δ t. These ensure that more pieces of optical information closer in time interact with each other. A further improvement is represented by the use of also amplitude weights, which allow for better handling of the various delayed optical contributions. In the previous design, all the delayed copies of the input reached the recombination stage, leaving the phase weights only to arrange the interference. This possibly introduces unnecessary information from the past to the recombination stage.
This paper is organized as follows: Section <ref> introduces the PNN layout, the experimental setup, and the training procedures, focusing in particular on the loss function definition and the choice of the minimization algorithm. The corresponding results are reported and discussed in Section <ref>, followed by an exhaustive comparison of the presented technology with other approaches in Section <ref>. Finally, Section <ref> addresses future developments of our design in terms of the number of taps and delay units for its adaptability to higher modulation frequencies up to 100 Gbps.
§ EXPERIMENTAL SETUP AND PROCEDURES
§.§ Delayed complex perceptron layout
Fig. <ref>(a) shows the design of the PNN device under test. It is fabricated on a Silicon-on-Insulator platform and made of Si waveguides with a 220 × 500 nm^2 cross-section surrounded by Silica cladding. The input and output ports are located respectively at the leftmost and rightmost edges of the chip. The input signal x(t) enters the chip via butt coupling and propagates through a cascade of 50/50 y-branches and spiralized optical paths. Overall, this creates N=8 time-delayed copies of the input signal, ending in as many parallel waveguides (channels). Spirals are available in 3 different lengths, respectively L_A = 7.10 mm, L_B = 3.55 mm, and L_C = 1.77 mm, each corresponding to an added propagation time of Δ t_A = 100 ps, Δ t_B = 50 ps, and Δ t_C = 25 ps. The channels are labeled from the first (i=1) at the bottom edge to the eighth (i=8) at the top edge.
The i-th channel hosts a delayed copy x_i(t) = x[t - (i-1)Δ t] of the input signal, where the delay is measured with respect to the first channel. The quantity Δ t = Δ t_C is the delay unit imposed by the shortest spirals between copies traveling in adjacent channels. The eighth channel is associated with the maximum delay of Δ t_A + Δ t_B + Δ t_C = 7 Δ t. Each channel hosts a Mach-Zehnder Interferometer (MZI) with a footprint of 400 × 150 μm^2, and two y-branches provide the initial splitting and final recombination. Each MZI is controlled via the thermo-optic effect produced by a current-driven TiN micro-heater placed on top of one of the arms. The final aperture level (i.e. the transmittance) of the MZI determines the amplitude weight a_i applied to the optical signal traveling in that channel. A phase weight e^jϕ_i is then generated for each channel by another current-controlled micro-heater (phasor, PS), leading to a weighted optical signal x[t - (i-1)Δ t] k_i a_i e^jϕ_i where k_i accounts for the channel losses (numerical values provided in Appendix <ref>). The weighted copies are then complex-summed via a 8× 1 combiner realized via a cascade of 50/50 y-branches providing the output sequence
y(t) = ∑_i=1^N=8 x[t - (i-1)Δ t] k_i a_i e^jϕ_i.
Therefore, the PNN device acts as an 8-tap Optical Finite Impulse Response filter, and the currents driving the MZIs and the PSs represent the tunable parameters of the PNN. The theoretical free spectral range (FSR) is 1/Δ t = 40 GHz. The number of taps N_T is determined starting from the empirical formula already presented in <cit.>:
N_T = int( 1/B + | L β_2 Δω|/Δ t).
Here, the numerator represents an estimate of the increased time width of a Gaussian pulse after propagation in a fiber of length L, obtained as the sum of the initial baud time slot 1/B (with B being the baud rate) and the pulse broadening Δ T=|Lβ_2Δω| induced by CD. We consider the Group Velocity Dispersion parameter β_2 = -0.021 ps^2/m of a standard SM G.652D fiber, and Δω is the pulse bandwidth. The first generation design tested in <cit.> featured phase weights only, with B = 10 Gbps, L = 100 km, and Δ t = 50 ps, resulting in N_T = 4. The Δ t value was chosen to have a sufficiently dense sampling of the information in a single-bit time slot (at least 2 samples per baud). For the current design, Δ t = 25 ps is chosen, which allows dealing with multilevel formats and faster data rates. Indeed, this layout has approximately the same maximum relative delay (given by [N -1] ×Δ t) between the channels as the previous one but with a finer sampling of the information from the same baud simultaneously sent to the recombination stage. Summarizing, the selected N and Δ t are a trade-off between a sufficiently dense optical sampling, insertion loss minimization, and a reduced number of trainable parameters. Finally, the design of the photonic structures implemented on the PNN device is optimized for the Transverse Electric (TE) mode propagation, making the PNN device polarization sensitive.
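As a quick numerical check of the tap-count rule above, the short sketch below evaluates it in Python. The β_2 value and the first-generation parameters (B = 10 Gbps, L = 100 km, Δt = 50 ps) are those quoted in the text; the pulse bandwidth is assumed to be Δω = 2πB, which is our reading since the text does not fix Δω explicitly, and with this assumption the first-generation case indeed returns N_T = 4.

```python
import numpy as np

def n_taps(baud_rate, fiber_length, delay_unit, beta2=-0.021e-24, d_omega=None):
    """Tap-count estimate: N_T = int((1/B + |L*beta2*d_omega|) / delta_t)."""
    if d_omega is None:
        # Assumption: pulse bandwidth taken as the angular baud rate, 2*pi*B.
        d_omega = 2 * np.pi * baud_rate
    broadening = abs(fiber_length * beta2 * d_omega)   # CD-induced pulse broadening [s]
    return int((1.0 / baud_rate + broadening) / delay_unit)

# First-generation design quoted in the text: B = 10 Gbaud, L = 100 km, delta_t = 50 ps
print(n_taps(10e9, 100e3, 50e-12))   # -> 4
# Device under test (delta_t = 25 ps), e.g. for a 75 km span
print(n_taps(10e9, 75e3, 25e-12))    # -> 7 with the assumed bandwidth
```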
§.§ Ideal design working principle
The frequency response of an optical fiber with length L on the envelope of the input field is <cit.>
H_of(ω) = e^jβ(ω_0+ω)L,
where β(ω) is the frequency dependent propagation constant of the fiber and ω_0 is the optical carrier frequency. By expanding β(ω) around ω_0 up to the second order, (<ref>) can be approximated as
H_of(ω) = e^j( β_0 + β_1ω + 1/2β_2ω^2 )L.
Since we are interested in CD, β_0 and β_1 can be removed from the expression as they affect the signal with a constant phase shift and a propagation delay respectively, therefore
H_of(ω) ≈ e^j1/2β_2Lω^2.
Equation (<ref>) shows that CD affects the propagation by applying a phase mask to the different frequency components of the signal. Perfect CD compensation can be achieved by a device with the transfer function
H(ω) = e^-j1/2β_2Lω^2,
such that
H(ω)H_of(ω)=1.
The impulse response of such a device is
h(t) = e^-jt^2/(2β_2L),
and the output y(t) is the convolution of the input x(t) with the impulse response
y(t) = ∫_-∞^∞dτ x(t-τ) e^-jτ^2/(2β_2L).
Let us note that according to (<ref>) the PNN performs a discretized convolution of its input over a finite window of time. Comparing (<ref>) with (<ref>), it emerges that perfect CD compensation can be achieved by a delayed complex perceptron featuring infinite taps (N→∞), an infinitesimal delay unit (Δ t→0) and lossless channels (k_i=1). Furthermore, all MZIs are to be kept open (a_i=1), and the PSs must induce a relative phase equal to
ϕ_i = -(i-N/2)^2Δ t^2/(2β_2L).
referred to the central channel (i=N/2). However, real device implementations have a limited number of taps and a finite delay unit, making them non-ideal. Therefore, the optimal phases provided by a training procedure are supposed to depart from those given by (<ref>).
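To make the ideal configuration concrete, the following sketch evaluates the ideal phase profile above and the resulting lossless 8-tap response, using the β_2 value and the geometry quoted in the text. It assumes a_i = k_i = 1 and normalizes the combiner by 1/N; it is a numerical illustration of the ideal case, not the authors' simulation code.

```python
import numpy as np

beta2 = -0.021e-24      # GVD of a standard G.652D fiber [s^2/m], from the text
L = 125e3               # worst-case span length [m]
N, dt = 8, 25e-12       # number of channels and delay unit of the device under test

# Ideal relative phases, referred to the central channel i = N/2
i = np.arange(1, N + 1)
phi = -((i - N / 2) * dt) ** 2 / (2 * beta2 * L)
print("ideal phases [rad]:", np.round(phi, 2))

# Frequency response of the corresponding lossless FIR filter (a_i = k_i = 1)
# and of the ideal CD compensator, for comparison over the signal band.
f = np.linspace(-10e9, 10e9, 2001)              # baseband frequency axis [Hz]
w = 2 * np.pi * f
H_pnn = np.mean(np.exp(1j * phi) * np.exp(-1j * np.outer(w, (i - 1) * dt)), axis=1)
H_ideal = np.exp(-0.5j * beta2 * L * w ** 2)
```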
§.§ Experimental setup overview and loss function definition
The PNN device has been tested with the experimental setup presented in a simplified version in Fig. <ref>(b). An extensive description of the experimental setup can be found in Appendix <ref>. In the transmission stage, a laser source operating at 1550 nm is modulated as a 10 Gbaud PAM4 signal based on periodic Pseudo-Random Binary Sequences of order 11, producing an overall period of 2^10 symbols. A Fiber Optic Coupler sends part of the optical signal to a 20 GHz-bandwidth fast photodiode (RX1), where the input signal is acquired for reference. The other fraction proceeds to the PNN for optical processing, as described in (<ref>). The amplitude and phase weights are regulated by an external DC current generator. Then, the output signal from the PNN is sent through an optical fiber span with variable length, from 0 km (Back-To-Back configuration, BTB) to 125 km with steps of 25 km. The position of the PNN device within the transmission line makes it, in fact, a pre-compensator for CD effects. Placing the PNN device before the propagation stage allows for getting rid of polarization diversity effects since the device is polarization-sensitive. The optical power in the setup remains low enough to avoid nonlinear effects and damage in any component. In particular, it stays below 3 dBm at the input of the PNN and below 2 dBm at the input of each optical fiber span. After fiber propagation, the optical signal is detected by another 20 GHz bandwidth fast photodiode (RX2). Both RX1 and RX2 are connected to a 16 GHz bandwidth oscilloscope with a sampling frequency of 80 GSa/s.
For each measurement (i.e., transmission test) the DC current generator sends an array of pre-set currents to the PNN device to drive the amplitude and phase weights. A triggering signal is then sent to the oscilloscope to acquire the resulting periodic trace at the end-of-line receiver. The output trace is then aligned via the cross-correlation method with the digital target sequence obtained as a periodic version of the digital input sequence. Each aligned output trace y serves as an input to a properly defined loss function that assesses the quality of the transmission, which is then provided to the minimization algorithms for the PNN training. This is performed first via a Particle Swarm Optimizer (PSO) <cit.> and then via the Adam algorithm <cit.>, a gradient-based approach with memory that reduces the possibility of the search ending prematurely in a local minimum.
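The cross-correlation alignment step can be sketched as below. We assume one period of the output trace and of the digital target are available on a common sampling grid and use a circular (FFT-based) cross-correlation, which is our implementation choice; the text does not specify how the lag search is carried out.

```python
import numpy as np

def align_to_target(trace, target):
    """Circularly shift a periodic output trace so that it best matches the target.

    Both inputs are 1-D arrays of equal length (one period). The lag is found as the
    argmax of the circular cross-correlation, computed via FFT.
    """
    xc = np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(target))).real
    lag = np.argmax(xc)
    return np.roll(trace, -lag)

# toy usage: a delayed, noisy copy of a 4-level sequence is realigned to the original
rng = np.random.default_rng(0)
target = rng.integers(0, 4, 1024).astype(float)
trace = np.roll(target, 137) + 0.05 * rng.standard_normal(1024)
aligned = align_to_target(trace, target)
print(np.corrcoef(aligned, target)[0, 1] > 0.99)    # True
```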
The amount of errors in a single data transfer is quantified by the Bit Error Rate (BER), here defined as the sum of the discrepancies between the binary de-mapped version of the digitized target and output sequences. However, the discrete-valued nature of the BER function makes it ill-suited for a gradient-based approach to minimization. In addition, the minimum BER value of 0 can be obtained statistically even in sub-optimal equalization conditions, possibly leading the PSO algorithm to end the search prematurely. A BER optimization can be equivalently obtained by minimizing the overlap between the distributions of samples associated with each expected optical level, as described by (2) in <cit.>. Thus, we proceed by defining a function that measures the separation between the distributions.
Fig. <ref> provides a visual representation of the procedures leading to the loss function evaluation. The measurement produces a N_s = 7.1 × 10^4 samples-long output trace y (blue line in Fig. <ref>) with N_sps = 8 samples per symbol because the 10 Gbaud signal is sampled at 80 GSa/s. A sequence y_k (colored large dots) is then obtained by sub-sampling y at the k-th sample in each symbol, with k=N_sps/2. The sub-sampling is performed close to the center of the symbol, namely where the contrast is ideally the highest, allowing retrieving the maximum information from each symbol. The points in y_k are grouped into {y}_k,n based on their expected level n=0,…,N_L-1 (N_L being the number of levels in the chosen PAM format) determined by comparison with the target sequence. In Fig. <ref> the points belonging to each {y}_k,n are marked with a different color and generate the distributions shown in the central panel. Each distribution is populated by N_n samples {y^i}_k,n, with i=1,…,N_n. The rightmost panel focuses on a single histogram, showing the typical Gaussian envelope identified by its average I_k,n and variance σ_k,n. The position of the left and right tails, respectively E_L[k,n] and E_R[k,n], can be estimated as
E_L[k,n] = 1/n_L|∑_i=1^n_L{y^i}_k,n| ,
E_R[k,n] = 1/n_R|∑_i=1^n_R{y^i}_k,n|.
In (<ref>), the index i is such that {y^i}_k,n < I_k,n - 1.28σ_k,n, namely the sum is performed over the n_L points corresponding to the 10% of the population in the leftmost area of the distribution. Analogously, in (<ref>) i is such that {y^i}_k,n > I_k,n + 1.28σ_k,n, namely the sum is performed over the n_R points corresponding to the 10% of the population in the rightmost area of the distribution. These areas are identified by the orange bars in the third panel of Fig. <ref>. We define the one-sample separation loss function as
ℒ_1(k) = max_n=0,…,N_L-2{ E_R[k,n] - E_L[k,n+1] },
which returns negative values when all the distributions are correctly separated. A minimization of ℒ_1 produces the maximum separation between the optical levels, that is the maximum aperture of the eye diagram. Since the aperture level is measured for a single sample (the k-th) close to the center of the symbol, the minimization may lead to a local separation of the levels, losing the symmetry in the eye diagram aperture around the center of each symbol. Therefore, it is convenient to define a two-sample separation loss function as
ℒ_2 (k) = ℒ_1(k) + ℒ_1(k+1).
This measures the separation between the optical levels at two different points within each symbol, symmetrical with respect to its center. A minimization of ℒ_2 reduces the risk of asymmetric eye diagrams both on the horizontal and vertical axis. The definitions of ℒ_1 and ℒ_2 represent a generalization to a multi-level system of the separation loss function presented in <cit.>. Finally, the BER evaluation starts with the digitization of the output trace. The separation thresholds T_n,n+1 between the n-th and the (n+1)-th levels are defined as
T_n,n+1 = 1/2( E_R[k,n] + E_L[k,n+1] ).
with n = 0,…,N_L-2. These are then applied to y_k producing the corresponding 4-level digitized sequence y_dig which is then de-mapped into the binary domain y_bin. The BER is then evaluated by counting the errors between y_bin and the corresponding de-mapped binary target sequence. Each evaluation is performed on 2 × N_s/N_sps = 1.8× 10^4 bits, providing a minimum BER value of 5.5×10^-5.
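A compact sketch of the separation losses ℒ_1 and ℒ_2 is given below. For brevity, the 10% tails of each level distribution are taken directly from the sorted samples instead of the mean ± 1.28σ criterion defined above; this simplification, the function names, and the default arguments are ours.

```python
import numpy as np

def tail_means(samples, frac=0.10):
    """Return (E_L, E_R): magnitudes of the mean of the lowest and highest `frac`
    of the samples (simplified stand-in for the sigma-based tail selection)."""
    s = np.sort(samples)
    n = max(1, int(round(frac * s.size)))
    return abs(s[:n].mean()), abs(s[-n:].mean())

def separation_loss(y, target_levels, n_levels=4, sps=8, k=None):
    """One-sample separation loss L1(k) for a PAM trace y with `sps` samples per
    symbol; `target_levels` holds the expected level (0..n_levels-1) of each symbol."""
    if k is None:
        k = sps // 2
    y_k = y[k::sps][:target_levels.size]       # sub-sample near the symbol centre
    gaps = []
    for n in range(n_levels - 1):
        _, e_r = tail_means(y_k[target_levels == n])
        e_l, _ = tail_means(y_k[target_levels == n + 1])
        gaps.append(e_r - e_l)                  # > 0 means levels n and n+1 overlap
    return max(gaps)

def two_sample_loss(y, target_levels, n_levels=4, sps=8, k=None):
    """L2(k) = L1(k) + L1(k+1): symmetric aperture around the symbol centre."""
    if k is None:
        k = sps // 2
    return (separation_loss(y, target_levels, n_levels, sps, k)
            + separation_loss(y, target_levels, n_levels, sps, k + 1))
```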
§.§ Dispersion-induced power penalty
In IM-DD links, CD is known to cause the fading of specific frequencies of the transmitted signal <cit.>. This effect is briefly reviewed in this section. Let s_in(t) be the input signal to an optical fiber of length L. According to (<ref>), the output signal s_out(t) can be written as
s_out(t) = ∫dω S_in(ω)e^j1/2β_2Lω^2 e^-jω t,
where S_in(ω) is the Fourier transform of s_in(t). Assuming that s_in(t)=a+bcos(ω̅t),
s_out(t) = a + be^j1/2β_2Lω̅^2cos(ω̅t).
The signal detected at the input is proportional to
|s_in(t)|^2 = a^2 + 2abcos(ω̅t) + b^2cos^2(ω̅t),
while the signal detected at the output is proportional to
|s_out(t)|^2 = a^2 + 2abcos(β_2Lω̅^2/2)cos(ω̅t) + b^2cos^2(ω̅t).
The CD-induced penalty of the transmission line at ω̅ is defined as the ratio between the frequency component at ω̅ of |s_in(t)|^2 and that of |s_out(t)|^2 <cit.>,
P(ω̅) = 20log_10| ℱ{|s_in(t)|^2}(ω̅)/ℱ{|s_out(t)|^2}(ω̅)|
= -20log_10|cos(β_2Lω̅^2/2)|,
where ℱ denotes the Fourier transform. The formula predicts that the first notch of the CD-induced penalty is located at ω_m=√(π/β_2L). As long as this value is larger than the bandwidth of the modulation, the effects of CD are minimal. When this condition is not satisfied, CD severely impairs the transmission.
Experimentally, the CD-induced penalty of a transmission line is measured by modulating the intensity of the laser with a cosine of frequency ω̅. In our experiment the value of ω̅ is swept from 50 MHz to 15 GHz in steps of 50 MHz. The result of the measurement is expected to follow (<ref>) when the PNN is not inserted in the transmission line. On the other hand, when the PNN compensates CD, the power penalty is expected to be flat within the bandwidth of the signal.
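The expected penalty profile and the position of the first notch can be computed directly from the penalty formula above; the sketch below does so for the 125 km span, assuming the sweep limits as reconstructed (50 MHz to 15 GHz in 50 MHz steps).

```python
import numpy as np

beta2, L = -0.021e-24, 125e3            # [s^2/m], [m] -- values used in the text

def cd_penalty_db(f, beta2, L):
    """CD-induced power penalty at modulation frequency f [Hz]."""
    w = 2 * np.pi * f
    return -20 * np.log10(np.abs(np.cos(0.5 * beta2 * L * w ** 2)) + 1e-12)  # +1e-12 avoids log(0)

# First notch: omega_m = sqrt(pi / (|beta2| L))  ->  about 5.5 GHz for 125 km
f_notch = np.sqrt(np.pi / (abs(beta2) * L)) / (2 * np.pi)
print(f"first notch at {f_notch / 1e9:.1f} GHz for L = {L / 1e3:.0f} km")

f = np.arange(50e6, 15e9, 50e6)         # modulation-frequency sweep
penalty = cd_penalty_db(f, beta2, L)
```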
§ RESULTS & DISCUSSION
The equalization capabilities of the PNN have been assessed with multiple training procedures in different scenarios. A first exploratory phase has been devoted to finding optimal training conditions in terms of the number of parameters and the loss function. Starting from the most general case, all 15 currents (7 PS, 8 MZI) have been used as training parameters for the PSO (full configuration). Fig. <ref> presents the optimal amplitude weights a_i, the sum of the apertures ∑_i=1^8 a_i, and the measured insertion loss (IL) for each transmission length between 25 km and 125 km with steps of 25 km. Analogously to what has already been pointed out in <cit.>, the graphs highlight that the number of open channels (a_i ≠ 0) increases with the fiber length. Indeed, the CD-induced temporal broadening of the transmitted symbols increases with the propagation distance, thus the PNN needs more channels to recombine pieces of optical information that are spread over a larger time window. Training results for L=25 km break this trend. If we consider ∑_i=1^8 a_i as a measure of the overall channel opening, we note that the sum is higher than the corresponding value for L=50 km. Indeed, for short propagation distances, the impact of cumulated CD is limited, therefore a small number of open channels would be expected for the compensation. However, limiting the number of open channels increases the IL, with a consequent degradation of the Signal-to-Noise Ratio (SNR) at the receiver. As a consequence, in the presence of reduced CD, the PNN favors an a_i configuration that maintains a high SNR rather than using the minimum number of open channels needed for the compensation.
In a second set of measurements, the tunable parameters have been restricted to the phasors only (PO configuration) with a_i = 1 for every channel. Fig. <ref> summarizes the comparison between the different training configurations with the two tested loss functions ℒ_1 and ℒ_2. In particular, Fig. <ref>(a) shows the BER reached after a PSO run for different fiber lengths. Each point is obtained as the average of 20 measurements with the optimal current configuration provided by the training. Each colored curve is associated with a particular combination of the loss function and the number of parameters. Since CD acts by applying a phase mask to the signal (as seen in (<ref>)), an overall better performance is achieved with the PO configuration, as already observed in <cit.>. The minimization of ℒ_1/PO and ℒ_2/PO for L=125 km (worst case scenario) produces the equalized eye diagrams reported respectively in Fig. <ref>(b) and Fig. <ref>(c). The use of ℒ_2/PO allows, in general, the symmetry of the eye to be retrieved; therefore, it is preferred to ℒ_1/PO in the rest of the paper.
The equalization tests were performed for different propagation distances, from L=25 km to L=125 km, using 25 km steps. The quality of the Transmitter/Receiver performance has been assessed for each L during the testing phase. This consists of a scan over the power at the receiver (PRX), or equivalently over the Signal-to-Noise Ratio (SNR), performed by varying an optical attenuator placed in front of the end-of-line photodiode (see Fig. <ref>). For each PRX, the BER was acquired as the average over 50 acquisitions. The testing phase has been performed for bare fiber transmission (unequalized case) and after each training (equalized case), finally comparing the results with the benchmark performance set by the transmission at L=0 km (BTB). The BER versus PRX profiles obtained for L=25 km, L=75 km, and L=125 km are presented in Fig. <ref>(a-c). The intersymbol interference generated by CD raises the unequalized BER curves with respect to the BTB condition and causes a progressive closure in the corresponding eye diagrams (Fig. <ref>(d-f)) as L increases. The trained PNN lowers the BER curves close to the BTB performance, reaching the pre-FEC threshold of 2× 10^-3 <cit.> in correspondence of PRX≈ -3 dBm, as well as the opening of the eye diagrams (Fig. <ref>(g-i)).
Then, PSO as a minimization algorithm has been compared with other strategies. Strategy 𝑆𝑇_1 consists of a gradient descent performed by the Adam algorithm <cit.> with the starting point chosen as the configuration that provides the lowest loss among 20 randomly generated ones. Similarly, the strategy 𝑆𝑇_2 again exploits the Adam algorithm, but the initial condition is fixed to the optimal configuration (found with PSO) that optimizes the BTB transmission. Finally, strategy 𝑆𝑇_3 represents the standard training configuration that produced the results of Fig. <ref>, namely the parameters are optimized via PSO starting from a random initial condition. The optimization ends when the loss fails to improve by more than 0.02 for 10 consecutive iterations in 𝑆𝑇_1 and 𝑆𝑇_2 and for 15 consecutive iterations in 𝑆𝑇_3. The final loss value after each training is evaluated as the average of 20 measures acquired with the optimal weights. Each strategy has been executed 15 times to optimize the equalization of 125 km fiber with the intent to assess its performance in terms of speed, precision, and repeatability. Fig. <ref>(a-c) reports the best loss at every epoch as a function of time for the three strategies, while Fig. <ref>(d) compares the final loss values against the number of loss function evaluations spent in the minimization process. Strategy 𝑆𝑇_1 fails to reach an optimal solution in most cases, while strategy 𝑆𝑇_2 is more consistent in reaching low loss values. The PSO-based strategy 𝑆𝑇_3 achieves the lowest loss values on average, but the number of iterations needed for the minimization is increased by one order of magnitude compared to 𝑆𝑇_1 and 𝑆𝑇_2. The Adam algorithm is thus faster than the PSO, but the reliability strongly depends on a proper choice of the starting point. Our experiments prove that a favorable starting configuration for an equalization problem is the one that minimizes the loss function at BTB, namely the one that makes the PNN as transparent as possible to the signal. PSO guarantees convergence to a near-to-optimal configuration at the cost of increased complexity.
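For illustration, a minimal particle-swarm minimizer with the patience-based stopping rule described above is sketched below. The swarm size, inertia, and acceleration constants are placeholder values, not those used in the experiment, and the quadratic toy loss merely stands in for the measured ℒ_2.

```python
import numpy as np

def pso_minimize(loss, dim, bounds, n_particles=20, w=0.7, c1=1.5, c2=1.5,
                 min_delta=0.02, patience=15, max_iter=200, seed=0):
    """Minimal PSO: stop when the best loss fails to improve by more than
    `min_delta` for `patience` consecutive iterations (rule quoted in the text)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([loss(p) for p in x])
    g_best, g_val = p_best[p_val.argmin()].copy(), p_val.min()
    stall = 0
    for _ in range(max_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss(p) for p in x])
        better = vals < p_val
        p_best[better], p_val[better] = x[better], vals[better]
        new_best = p_val.min()
        stall = stall + 1 if g_val - new_best <= min_delta else 0
        if new_best < g_val:
            g_best, g_val = p_best[p_val.argmin()].copy(), new_best
        if stall >= patience:
            break
    return g_best, g_val

# toy usage: recover a 7-dimensional phase-like vector minimizing a quadratic loss
best_x, best_val = pso_minimize(lambda p: float(np.sum((p - 0.3) ** 2)),
                                dim=7, bounds=(0.0, 1.0), patience=15)
print(best_val)
```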
The performance of the PNN has been compared against a commercial Tunable Dispersion Compensator (TDC) based on fiber-Bragg gratings[TeraXion ClearSpectrum^TM Model: TDCMB-C000-J.]. The device features an insertion loss of 5 dB and a dispersion range of ±900 ps/nm, corresponding to a complete equalization of CD accumulated in a 50 km propagation with a 10 Gbaud signal. In the experimental setup described in Fig. <ref>, the PNN was replaced by the TDC. Since the PNN and the TDC have different insertion losses, it is necessary to adjust the input power to the new device to restore the same fiber launch power condition. In Fig. <ref> the BER curves obtained at 50 km and 125 km using the TDC or the PNN are compared. The two devices are equally effective in retrieving the signal after 50 km of fiber, while the PNN demonstrates a larger dispersion range with good performance up to 125 km.
As mentioned, a set of current values is the result of the training. This set can be translated into a set of phase shifts according to
ϕ_i = α_i I_i^2 + β_i,
with i=1,…,8. These values appear in (<ref>) and determine the optical response of the PNN. The coefficients α_i and β_i in (<ref>) are experimentally characterized for every microheater. The phases ϕ_i for each channel are measured relative to the channel with no delay since the corresponding heater is left unconnected (see Appendix <ref>). Fig. <ref>(a) shows the phases of the PNN in PO configuration trained to compensate for different lengths of optical fiber. Fig. <ref>(b) shows the phases of the PNN, again in PO configuration, obtained for 15 different minimization procedures to equalize the propagation in 125 km. The phases measured in a particular trial are adjusted by ± 2π, and the average value is subtracted to superimpose the curves obtained in different trials. This is feasible because the response of the PNN is invariant under a constant phase shift over all channels. Fig. <ref>(b) shows that the phases resulting from a repeated minimization show a trend that is not fully compatible with (<ref>), especially in the first and last channels. The variability in the training outcomes is a consequence of the PSO algorithm, which involves randomness in the initialization and evolution of the swarm, as well as statistical fluctuations in the loss evaluation. In addition to this, the possibility of ending up in local minima cannot be ruled out. Let us also stress that the configuration described by (<ref>) was derived under the assumption of a PNN with a large number of channels and small Δ t. The PNN under test does not satisfy these assumptions. Moreover, (<ref>) neglects the insertion loss of the PNN. A larger insertion loss implies a smaller SNR at the receiver, which in turn translates into a narrower separation of the optical levels and a larger value of ℒ_2 (see Fig. <ref>). Therefore, the configuration that minimizes the loss consists of a trade-off between the compensation of CD and the reduction of the insertion loss. In the PNN under test, the application of the ideal phases for L=125 km rather than those obtained after the optimization results in 2.5 orders of magnitude increase in the BER. A more detailed analysis is reported in Appendix <ref>.
Fig. <ref> shows the CD-induced penalty of the equalized (yellow) and unequalized (red) 25 km, 75 km, and 125 km long optical links. The CD-induced penalty at BTB (blue) is reported for comparison. In violet is the CD-induced penalty measured after removing the fiber span from the link and keeping just the PNN. Note that the yellow curves do not result from the summation of the red and violet ones. Indeed, they describe the frequency response of the whole system formed by the PNN and the fiber span. In the 25 km case, bare fiber propagation does not significantly alter the power penalty within the bandwidth of the modulation (10 GHz). In this case, the PNN is not needed, and its action actually moves the first notch of the power penalty to around 10 GHz. On the other hand, at 75 km and 125 km, the differences between BTB and bare fiber profiles are significant within the bandwidth of the transmitted signal. In these cases, the PNN reduces the CD-induced penalty by shifting the first notch back to higher frequency values. One can observe that when the fiber span is removed from the link, keeping just the trained PNN, the first minimum of the CD-induced penalty is approximately where it would be with the fiber in place and without the PNN. This observation suggests that the frequency response of the trained PNN approximates the inverse of the optical fiber frequency response within the bandwidth of the signal so that their product is equal to 1, as expected following (<ref>).
To test the applicability of our PNN to a DWDM link, we assessed its equalization capabilities by varying the laser frequency at the transmitter. The experiment is performed with 125 km of optical fiber using the trained configuration of the PNN at f_0=193 THz. The weights of the PNN are left unaltered throughout the frequency sweep, which covers the interval f_0 ± 90 GHz with steps of 1 GHz. The end-of-line optical filter follows the peak frequency during the sweep. The blue line in Fig. <ref>(a) shows how the loss function ℒ_2 changes with the transmitted frequency. Each point and its error bar in the curve are obtained respectively as the average and the standard deviation of 20 experiments. It can be seen that as the signal frequency changes by a few GHz from the training frequency f_0 the loss increases, namely the PNN becomes ineffective at compensating CD. This is explained by the fact that the phases accumulated by the signal along the spirals change with the optical frequency. Since the phase shifts induced by the micro-heaters remain the same, the optimal interference condition for the equalization is lost. The same observation is made by training the PNN at 193.04 THz (orange line) and 192.96 THz (yellow line). The Free Spectral Range (FSR) Δλ of the PNN is given by
Δλ = c/(n_g L_C) ≈ 40 GHz,
where n_g is the group index of the PNN Silicon waveguides. Therefore, we expected that the optimal weights found by training the PNN at 193 THz would also work at 192.96 THz and 193.04 THz. However, this is not the case. A plausible reason lies in fabrication imperfections. Indeed, the periodicity in frequency is lost if the delays are not multiples of the same unit as a result of variations in the spiral lengths or, more plausibly, in the group velocity. The electron beam lithography technique used to trim the waveguide induces local changes in the track width (Line Width Roughness), which are perceived as variations in the average group index at different frequencies. The role played by uncertainty in the delay is confirmed by performing a simulation of the experiment (see Fig. <ref>(b)). The periodicity is well visible in the simulation with nominal values (blue line) and disappears when errors in delays are considered.
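For reference, the FSR expression above can be evaluated with a typical group index for a 220 × 500 nm^2 Si wire waveguide; n_g ≈ 4.2 is our assumption, since the text does not quote a value, and it reproduces the ≈ 40 GHz nominal FSR.

```python
c = 299_792_458.0      # speed of light [m/s]
L_C = 1.77e-3          # shortest spiral length [m], from Section II
n_g = 4.2              # assumed group index of the Si wire waveguide (not quoted in the text)
fsr = c / (n_g * L_C)
print(f"FSR = {fsr / 1e9:.1f} GHz")    # ~40 GHz
```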
§ COMPARISON WITH OTHER APPROACHES
Table <ref> proposes a comparison between several implementations of photonic devices used for CD compensation. Over the years, several integrated photonic filters have been proposed for this purpose. In <cit.> an integrated filter consisting of 10 cascaded unbalanced MZIs was shown to equalize a 53.125 Gb/s PAM4 signal over 40 km of optical fiber. The device can be tuned in the range ±500 ps/nm (considering a 15 GHz bandwidth) by adjusting a single voltage and can equalize up to 4 WDM channels with a spacing of 100 GHz. A similar performance was achieved by filters consisting of cascaded ring resonators <cit.>. In general, the maximum dispersion that cascade filters can compensate grows with the number of stages, which, however, also leads to a larger insertion loss.
Optical FIR filters with complex coefficients have been studied for CD compensation as well. These devices can be classified as transversal-form or lattice-form filters <cit.>. In a transversal-form filter, the signal goes through a 1× N splitter, then a phase shift is applied to each branch and finally a N×1 combiner produces the output. Our device falls into this category. A lattice-form filter consists of a cascade of MZIs and delay lines. This architecture allows for lower transmission losses but setting the desired configuration is not straightforward <cit.>. Transversal-form filters differ in the way the splitter/combiner is implemented. It is possible to use a tree of y-branches/MMI (as in our previous work <cit.>) followed by one MZI per tap (as in this work) or a tree of MZI as in <cit.>. It is more convenient to implement the splitter as a tree of MZIs so that the input power can be distributed over the branches with different weights without losses. On the other hand, if the splitter is a tree of y-branches, power is lost depending on the aperture of the MZIs.
Different approaches have been followed in the application of photonic neural networks to signal recovery in optical communication. One that has gained considerable attention is Reservoir Computing (RC), in which the signal is fed into a non-trainable recurrent network with randomly connected non-linear nodes called reservoir <cit.>. The only trainable part of the net is a linear regressor (or classifier) that receives as inputs some of the nodes of the reservoir. In <cit.>, the recovery of the transmitted sequence from the distorted signal is framed as a classification task and it is demonstrated that a reservoir comprising a semiconductor laser with a delayed feedback loop can successfully tackle it. Masking is used to increase the dimension of the reservoir, implying the necessity of sampling more points per symbol, which is impractical at high baud rates. In <cit.> equalization is achieved with an integrated passive photonic reservoir (with a swirl topology and 16 nodes) in which the non-linearity is provided by the photo-detectors that probe the nodes. This means that the linear stage must be implemented as part of a DSP, even though the reservoir allows for a simpler and more power-efficient DSP that performs only linear operations. In <cit.> the reservoir consists of two parallel optical filters with feedback. The idea behind this design is that a higher computational power can be achieved by sampling different spectral slices of the signal.
Compared to other solutions, our device proposes a competitive approach to the problem of CD equalization. Its compact design offers a wide compensation range with great adaptability to different modulation formats <cit.>. The layout is easily scalable compared to other technologies, as discussed in the next section. It allows maintaining the signal processing entirely at the optical level (except for the training phase), with minimized latency and power consumption compared to other approaches. Manufacturing imperfections affect its applicability to DWDM and increase the insertion losses.
§ CONCLUSION
We demonstrated that an 8-channel delayed complex perceptron effectively equalizes a 10 Gbaud PAM4 signal transmitted through up to 125 km of optical fiber (18.5 dB average insertion loss). The action of the device can be described as the discrete convolution between the input signal and a finite sequence of tunable complex weights. Two methods for finding the optimal configuration of these weights, PSO and Adam, were compared: PSO is more precise and accurate than Adam with the drawback of an increased computational cost. The proposed PNN proved to be as effective as a commercial TDC in compensating CD while requiring less power consumption during the operation (∼ 250 mW against few W). The device's effectiveness is confirmed by the fact that it flattens the CD-induced penalty within the bandwidth of the transmitted signal. Due to manufacturing imperfections, the 40 GHz periodicity in the response of the PNN predicted by the ideal model was not observed, limiting its applicability to WDM schemes.
We simulate the effect of structural parameters, namely the number of channels N and the delay unit Δ t, on the PNN performance. As shown in Fig. <ref>(a), the equalized and unequalized BER curves at 125 km obtained from the simulation are compatible with the experimental ones, proving the code's reliability.
Different training procedures of the PNN for various combinations of N and Δ t are performed via PSO, each by evaluating the loss in the trained configuration. For consistency, the loss is always evaluated with the same SNR at the receiver, and the bandwidth of the transmission system is scaled proportionally to the baud rate. Three scenarios are considered: a 10 Gbaud PAM4 signal propagating through 200 km of fiber (Fig. <ref>(b)), a 20 Gbaud PAM4 signal propagating through 80 km of fiber (Fig. <ref>(c)) and a 50 Gbaud PAM4 signal propagating through 10 km of fiber (Fig. <ref>(d)). Every scenario has a set of parameters for which the PNN reaches low loss values, indicating an effective signal equalization. The results show that the best performance is obtained in proximity to the parameters that satisfy (<ref>), marked by crosses in Fig. <ref>. Equation (<ref>) expresses the fact that the product NΔ t, namely the time window observed by the PNN, must be comparable with β_2LΔω, the broadening caused by CD. In general, it is observed that a larger number of channels, and thereby a smaller delay unit, ensures better performance. Indeed, a PNN with more degrees of freedom can better approximate the inverse transfer function of the optical fiber. The simulations demonstrate that a time-delay complex perceptron with fewer than 16 channels is capable of equalizing a 50 Gbaud signal up to 10 km of transmission distance. However, similarly to other integrated photonic devices, the scalability comes at the cost of a larger footprint, higher power consumption by the microheaters, enhanced thermal cross-talk, and a longer training duration. Many efforts are being put into reconciling increased complexity with performance <cit.>. Focusing on our case of interest, future-generation devices could feature an on-chip integrated optical amplifier (e.g., an SOA) for loss reduction and enhanced versatility (studies on the feasibility in terms of power consumption are ongoing). In parallel, the problem of thermal cross-talk and power inefficiency generated by microheaters in MZIs can be mitigated by implementing electro-absorption modulators (EAM). Both these improvements are available in the next-generation InP devices at our disposal, which will be tested soon.
We foresee that future-generation devices will also represent a valid alternative to coherent transmission in access and metro applications. On single-span links, the use of the PNN device will reduce the number of components required in the link, thus simplifying the network and reducing the costs. On multi-span links, using the PNN device increases the quality of the transmission by fully compensating chromatic dispersion or removing the residuals. Migrating the equalization to the optical domain mitigates typical DSP-related problems such as power consumption, latency, and costs. Compared to coherent systems, the innovative PNN approach still features some drawbacks, such as high insertion losses, poor performances in WDM networks, and low receiver sensitivity typical of IMDD. Future generation devices will implement new features (on-chip SOA and EAM), making the device more competitive with respect to algorithm-based alternatives and more prone to scalability. We plan to perform tests for nonlinear effects equalization and coherent modulation formats to demonstrate its versatility and advantages.
§ EXPERIMENTAL SETUP
Fig. <ref> depicts a schematic of the complete experimental setup. A CW tunable laser source (TLS) operating at 1550 nm is modulated as a 4-level Pulse Amplitude Modulated (PAM4) signal. This is software-generated and based on a periodic Pseudo-Random Binary Sequence (PRBS) of order 10 and period 2^10. The pattern is generated by a 30 GHz-bandwidth Arbitrary Waveform Generator (AWG) connected to two 20 GHz-bandwidth RF amplifiers, which drive a Nested Mach-Zehnder Interferometer (NMZI). A 50/50 Fiber Optic Coupler sends half of the optical power to a 20 GHz-bandwidth fast photodiode (RX1), where the input signal is acquired for reference. The other half of the optical power proceeds to a polarization controller, which is needed since the PNN device is polarization-sensitive (see Section <ref>). An Erbium-Doped Fiber Amplifier (EDFA) and a Variable Optical Attenuator (VOA) regulate the input power to the PNN device so that it never exceeds 3 dBm in order not to damage the device and not to trigger nonlinear effects in Silicon. The PNN is placed before the fiber to eliminate polarization-dependent losses. The coupling to the chip occurs through butt coupling via tapered fibers held by 3-axis stages. 0.1% of the collected optical power is sent to a Power Monitor (PM1) for alignment monitoring. The PNN processes the optical signal according to (<ref>), with the weights driven by a DC current generator with 16 independent channels. Since the PNN optical response is defined up to a global phase factor, one of the phase weights e^jϕ_i can be fixed, and the PS associated with the null-delay channel (Channel 1) is left unconnected. Therefore, only 15 of the 16 channels of the DC current generator are used to tune the PNN device, while the remaining channel is used as a triggering signal for the oscilloscope (measurement procedure described below). The PNN device is soldered to a Printed Circuit Board (PCB) and electrically connected via wire bonding. A Copper plate in direct thermal contact with the PNN device is placed underneath the PCB and connected to a Temperature Controller for thermal stabilization.
The remaining 99.9 % of the PNN optical power transmission is sent to an EDFA to compensate for the PNN insertion losses (∼ 20 dB with all the currents switched off). Then, the optical signal is coupled to a first fiber span with variable length between 0 and 75 km with steps of 25 km. Larger propagation distances are reached by implementing another 25 km or 50 km fiber span preceded by another EDFA to prevent Optical Signal-To-Noise Ratio (OSNR) degradation. Standard SM G.652D fibers have been used (0.2 dB/km losses), with the input optical power to each span never exceeding 2 dBm to prevent the triggering of nonlinear effects. The output optical power transmitted through the fiber link is attenuated by a VOA and, via an optical switch, is routed either to an Optical Spectrum Analyzer (OSA) or to a final amplification stage which compensates for fiber losses. Subsequently, another VOA regulates the amount of optical power sent to a 30 GHz optical bandwidth filter and then to the receiver. A fiber splitter routes part of the signal towards a second Power Meter (PM2, 0.1%) and part to another 20 GHz-bandwidth fast photodiode (RX2, 99.9%). Both RX1 and RX2 are connected to an 80 GSa/s oscilloscope (OSC) with a 16 GHz bandwidth.
The OSNR level at 0 dBm at RX2 is maintained at (44 ± 1) dB (0.1-nm resolution bandwidth) for the measurement process. Since the optical gain of the end-of-line EDFA is left unaltered for the whole experiment duration, the mentioned condition at the receiver reflects into (34 ± 1) dB OSNR (0.1-nm resolution bandwidth) and (-14 ± 1) dBm optical power at 1550 nm measured at the OSA. This is chosen as the reference working point for the measurements performed in each transmission scenario. When the PNN device is inserted, this condition is set while keeping all currents at 0 mA in the device. Some deviations from this working point continuously occur during the training, since the insertion loss of the device significantly changes with the current configuration set in each specific measure (these changes cannot be monitored directly by the OSA due to the too-low duty cycle of its measure). Similarly, the trained PNN may induce deviations from the reference working point. However, a noise characterization of the experimental setup revealed that these variations in PRX and OSNR minimally impact the SNR profile at the receiver. The consistency between the measurements performed with and without the PNN is thus preserved.
Table <ref> reports channel losses k_i entering in (<ref>) obtained after the characterization of the PNN device.
§ PERFORMANCE COMPARISON BETWEEN IDEAL AND NON-IDEAL IMPLEMENTATIONS
Following the discussion about Fig. <ref>, here we provide a detailed discussion about the differences between the non-ideal PNN device (finite N_T and finite Δ t) and the ideal behavior predicted by (<ref>). The comparison is carried out on a simulative level using the same code exploited for the results of Fig. <ref>. The benchmark performance is set with a simulated PNN with N_T = 8 taps and a delay unit Δ t = 25 ps. The PNN is trained via Particle Swarm Optimizer (PSO) in Phase Only (PO) configuration to equalize CD after 125 km propagation, saving the eye diagram at the receiver and BER versus power at the receiver (PRX) profile. The results have been compared with those produced respectively by the configurations N_T = 8, Δ t = 25 ps, and N_T = 512, Δ t = 0.7812 ps applying the ideal phases derived from (<ref>). In particular, the last configuration adheres more to the ideal specifications under which (<ref>) was derived. The comparison of the BER versus PRX profiles for the three cases and the corresponding eye diagrams at 0 dBm at the receiver is reported in Fig. <ref>. As expected, the lowest BER profile is obtained with the trained PNN, namely when the phase weights are optimally adapted to the selected transmission scenario, resulting in the opened eye diagram of panel (b). The same PNN layout applied with the ideal phases does not perform similarly, bringing the BER profile close to the unequalized case and resulting in the degraded eye diagram of panel (c). Applying the ideal phases brings an advantage as soon as the PNN layout adheres to the ideal case (N_T →∞, Δ t → 0). In this case, an improvement in the eye diagram aperture is observed in panel (d).
§ ACKNOWLEDGMENT
European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 788793, BACKUP and No 963463, ALPI).
The authors would like to thank Mattia Mancinelli and Davide Bazzanella for the chip design and Stefano Biasi for the fruitful discussions.
|
http://arxiv.org/abs/2409.02273v1 | 20240903200907 | SlipNet: Slip Cost Map for Autonomous Navigation on Heterogeneous Deformable Terrains | [
"Mubarak Yakubu",
"Yahya Zweiri",
"Ahmad Abubakar",
"Rana Azzam",
"Ruqayya Alhammadi",
"Lakmal Seneviratne"
] | cs.RO | [
"cs.RO"
] |
§ ABSTRACT
Autonomous space rovers face significant challenges when navigating deformable and heterogeneous terrains during space exploration. The variability in terrain types, influenced by different soil properties, often results in severe wheel slip, compromising navigation efficiency and potentially leading to entrapment. This paper proposes SlipNet, an approach for predicting slip in segmented regions of heterogeneous deformable terrain surfaces to enhance navigation algorithms. Unlike previous methods, SlipNet does not depend on prior terrain classification, reducing prediction errors and misclassifications through dynamic terrain segmentation and slip assignment during deployment while maintaining a history of terrain classes. This adaptive reclassification mechanism has improved prediction performance. Extensive simulation results demonstrate that our model (DeepLab v3+ + SlipNet) achieves better slip prediction performance than the TerrainNet, with a lower mean absolute error (MAE) in five terrain sample tests.
§ INTRODUCTION
Autonomous navigation is crucial for space exploration missions, where robots operate independently on planetary surfaces with minimal or no human intervention. The significant communication delays between the Earth and distant planets, such as Mars, necessitate reliable onboard sensors that can perceive the surrounding environment, assess traversability, and navigate safely <cit.>. Previous Mars missions have equipped rovers with stereo-vision systems to avoid obstacles and plan paths autonomously. However, these rovers often require human intervention to assess traversability, resulting in slow progress. For instance, the Curiosity rover's drive distance was limited due to significant slips on deformable surfaces, caused by the looseness of the sand, leading to significant path deviation <cit.>. These surfaces, which can vary greatly in soil composition, are examples of heterogeneous terrains. Such terrains exhibit variations in soil texture and physical properties, making navigation even more challenging. Overcoming these challenges is crucial for enhancing the efficiency and safety of rovers during space exploration missions.
Traversability assessment on deformable terrains, particularly for space environments, is challenging due to the complex dependencies between physical terrain properties, surface geometry, and the rover's mobility mechanisms. These dependencies vary widely across different terrains, making it difficult to generalize from one type of terrain to another. Existing methods often rely on visual sensors to classify terrain types, but these methods can fail when encountering new, unseen terrains, leading to misclassification and prediction errors <cit.>. Additionally, factors like sand density, rock friction, and terrain unevenness further complicate traversability assessment, influencing wheel slip and vehicle dynamics and necessitating reliable slip prediction models <cit.>. This complexity is further exacerbated in extraterrestrial environments, where the lack of prior terrain knowledge can hinder real-time decision-making. For example, reliance on visual sensors alone might be problematic, as visual cues may not fully capture the physical properties affecting slip, such as soil moisture or compaction <cit.>. The interaction between the rover's wheels and the terrain is influenced by these properties, affecting traction and potentially leading to slip <cit.>.
Machine learning methods, particularly deep learning, have been extensively used to tackle these challenges. Convolutional Neural Networks (CNNs) have demonstrated effectiveness in feature extraction and terrain classification, leveraging large datasets to learn unique patterns from terrain images <cit.>. However, these methods often require substantial amounts of annotated data, which can be challenging to obtain for planetary surfaces <cit.>. Also, they could not capture the soil's physical parameters. Hence, they fail to generalize well to unknown terrains due to the variability in soil properties. Self-supervised learning approaches have emerged as a solution, associating sensor data with terrain properties to enable models to learn from real-world interactions without extensive annotations <cit.>. Despite these advancements, accurately predicting wheel slip remains difficult due to the complex dynamic nature of terrain interactions. Factors such as the interaction between the wheel and the soil, which can vary significantly, highlight the need for more robust and adaptable methods <cit.>.
This paper proposes SlipNet, a new approach for real-time terrain classification and slip prediction applicable to autonomous navigation on deformable terrains. SlipNet dynamically reclassifies terrain during deployment and maintains a history of terrain classes, enabling the rover to adapt to new and evolving terrain conditions (Fig. <ref>). Our method, the SlipNet, employs advanced hybrid neural network architecture that integrates machine learning techniques for feature extraction, terrain classification, temporal sequence prediction, and probabilistic slip prediction.
The main contributions of the paper are summarized as follows:
* We propose SlipNet, a hybrid deep learning model designed to perform real-time terrain classification and slip prediction. SlipNet is trained based on input sensor data from wheel encoders and cameras, enhancing robustness and accuracy for autonomous navigation on deformable terrains.
* We develop a framework for synthetic data generation using a high-fidelity simulator (Vortex Studio) to generate high-quality datasets that mimic real-world deformable terrain environments.
* We evaluate the performance of our SlipNet on unseen terrains with varying soil properties and show significant improvements compared to the state-of-the-art method, TerrainNet <cit.>.
The article's structure is as follows. Section II delves into the concept of the intelligent wheel system. Section III offers a concise overview of the experimental setup for the single wheel test-rig. Section IV presents the methodology employed for slip estimation, outlining the methods used. Section V delves into the discussion of the experiments and the performance evaluation for the well-lit and low-light conditions. Finally, Section VI provides a summary of findings and outlines potential directions for future research.
§.§ Related Papers
In this section, we present a survey of current practices in slip prediction and traversability estimation based on semantic segmentation and self-supervised techniques.
§.§.§ Background on Slip
This work examines the longitudinal slip (along the travel direction) of the rover as in <cit.>. The longitudinal slip is defined as:
s_x = (v_ref - v_x)/v_ref, if v_x < v_ref (driving)
s_x = (v_ref - v_x)/v_x, if v_x > v_ref (braking)
where v_x is the rover's measured velocity in the direction of travel and v_ref is the commanded velocity. A positive slip means the rover is traveling slower than commanded, and a negative slip means the rover is traveling faster than commanded. This slip value ranges between -1 and 1. In this paper, only the positive slip values are considered.
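A direct transcription of this definition, following the sign convention stated above (positive slip when the rover travels slower than commanded) and assuming a strictly positive commanded velocity, could read:

```python
def slip_ratio(v_x, v_ref):
    """Longitudinal slip: positive when travelling slower than commanded (driving),
    negative when travelling faster (braking); bounded in [-1, 1] for v_ref > 0."""
    if v_x < v_ref:                       # driving
        return (v_ref - v_x) / v_ref
    return (v_ref - v_x) / v_x            # braking (v_x >= v_ref)

print(slip_ratio(0.3, 0.5))   # 0.4: the rover covers 40% less ground than commanded
print(slip_ratio(0.6, 0.5))   # about -0.17: slight over-speed, i.e. braking slip
```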
§.§.§ Current practice on Mars
: Currently, slip prediction for daily tactical planning is a manual process on Earth. Rover operators begin by visually identifying the terrain type, then estimating slip based on slope versus slip curves specific to each terrain class such as loose sand, consolidated sand, and bedrock, which have been derived from comprehensive earth-based testing <cit.>. While these models are generally accurate for most terrains, they often underestimate the variability in slip. Moreover, the models do not specifically address longitudinal or lateral slips; instead, they assume the slip vector always points down-slope, which can lead to a significant overestimation of lateral slip. Efforts are underway to develop automated systems for visual terrain classification and slip estimation using data collected directly from rovers <cit.>.
For autonomous navigation, rovers employ the Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) system <cit.>, which protects against geometric obstacles but does not predict slip. During autonomous operation, rovers estimate slip by comparing the distance covered as measured by visual odometry (VO) against the expected distance without a slip from wheel odometry. If the detected slip exceeds a certain threshold, the rover halts and awaits further instructions <cit.>. The onboard imaging systems primarily focus on identifying targets for scientific instruments, as seen in the Autonomous Exploration for Gathering Increased Science (AEGIS) system on the Opportunity rover, which identifies scientific targets either by recognizing rocks with defined edges <cit.> or using a random forest classifier <cit.>. However, these systems do not evaluate the traversability classes.
§.§.§ Traversability from Semantics
: Semantic segmentation is essential in computer vision, involving pixel-wise image labeling, and has been enhanced by deep learning architectures like OCRNet <cit.> and PSPNet <cit.>. Recent transformer-based methods offer superior performance but require significant computational resources, prompting the development of more efficient designs <cit.>. These methods excel in structured datasets like CityScapes but struggle with deformable terrain environments due to indistinct boundaries and class features.
Deep learning excels in semantic scene understanding from both images <cit.>, and point clouds <cit.>. Semantic classes are mapped and linked to traversability scores using datasets like RUGD <cit.>, Rellis-3D <cit.>, and Freiburg Forest <cit.>, though their limited size and diversity constrain broad applicability and accuracy. Techniques range from voxel-wise terrain density classification using LiDAR and cameras <cit.> to projecting image-based semantic segmentation onto 2.5D maps <cit.>, and combining LiDAR with learned semantics for region risk assessment <cit.>. Space exploration faces similar challenges in terrain hazard identification, critical for rover missions <cit.>. Efforts to classify terrain types and their physical interactions include using CNNs to predict wheel slip on Mars <cit.> and developing comprehensive datasets <cit.>. Research also focuses on reducing training data needs <cit.> and probabilistically merging semantic classifications with slip models for risk-aware traversability <cit.>.
§.§.§ Self-supervision
: Traversability estimation methods that depend on scene semantics often require costly annotated data. Self-supervised approaches address this challenge by generating training signals without manual annotations, utilizing information from alternative sensor modalities <cit.> or from robot-environment interactions <cit.>. These self-generated supervision signals enable models to predict future terrain conditions without direct interaction with the terrain. In <cit.>, two classifiers were enhanced for terrain class prediction using image and proprioceptive data in a space exploration context. The work in <cit.> adjusted proprioceptive-based pseudo labels using vibration data to predict traversability from colorized elevation maps. During indoor navigation, <cit.> suggested using anomaly detection to identify safe image areas. Other researchers have applied anomaly detection and evidential deep learning to learn from real-world data without manual labels <cit.>. Contrastive learning has also proved effective in creating expressive representations for traversability estimation <cit.> generated heuristic-based and velocity-tracking pseudo labels, respectively, enabling online training of traversability prediction networks during deployment. <cit.> in WayFAST approximated terrain traversability using tracking errors from a predictive control model, and <cit.> estimated worst-case costs and traction from semantic elevation maps, including confidence values via density estimation.
§ OUR APPROACH
§.§ Data Generation
The Khalifa University Space Rover (KUSR), a differential drive grouser-wheeled rover <cit.>, is used in a high-fidelity, physics-based simulator (Vortex Studio v22.8)<cit.> for data collection. Egocentric RGB terrain images, right and left wheel angular speed, and slippage data are generated from the navigation of the rover on six heterogeneous terrains. These terrains are created by generating patterns using eight different types of soils, which include four lunar soil textures and four soils commonly found on Earth, as in Fig. <ref> . Mechanical properties such as soil friction angle, cohesion, and stiffness modulus are assigned to the rover tire model. A total of 24 trajectories were generated with a 20m × 20m terrain dimension. In total, there are 10080 input labels split into 8064/2016 training/testing set, respectively.
§.§ Model Architecture
The overall framework of our model consists of three major components as shown in Fig. <ref>: the terrain segmentation module, which converts a terrain image into a semantic segmentation map and is implemented with a state-of-the-art segmentation network. Three semantic segmentation networks were explored in this study: U-Net, PSPNet and DeepLab v3+. DeepLab v3+ <cit.> is adopted for resegmentation due to its weak-supervision functionality. The slip risk module maps the segmentation mask to in-situ wheel slip mean and standard deviation values for each terrain type. The SlipNet assigns slip risk classes to terrain segments and keeps track of all terrain classes and their corresponding slip risk estimates. In the SlipNet, the terrain segmented map is updated into a Slip Cost Map, represented with a jet colormap, to indicate slippage risk zones.
§.§.§ Terrain Semantic Segmentation:
Our terrain segmentation network utilizes the "DeepLab" implementation of fully convolutional neural network (FCNN) as described by <cit.> and shown in Fig. <ref>. The network's front end mirrors the VGG architecture but is adapted to incorporate "atrous" convolutions, also known as dilated convolutions. These specialized convolutions expand the receptive field of the filters without increasing the filter size.
To enhance the generalization of the segmentation models, only 70 % of the terrain images were annotated with segmentation masks, creating a weak supervision setup. The network is trained using standard backpropagation and stochastic gradient descent, requiring approximately 6 hours on an Nvidia GTX 1050 GPU. The trained segmentation model processes an input terrain image within 125 milliseconds.
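For readers unfamiliar with atrous convolutions, the following minimal PyTorch lines illustrate how a dilation factor enlarges the receptive field without changing the parameter count or the output resolution. This is a generic illustration of the operation, not the DeepLab v3+ implementation.

import torch
import torch.nn as nn

x = torch.randn(1, 3, 128, 128)
standard = nn.Conv2d(3, 16, kernel_size=3, padding=1)                # 3x3 receptive field
atrous = nn.Conv2d(3, 16, kernel_size=3, padding=2, dilation=2)      # effective 5x5 receptive field
print(standard(x).shape, atrous(x).shape)                            # both keep the 128x128 resolution
print(sum(p.numel() for p in standard.parameters()),
      sum(p.numel() for p in atrous.parameters()))                   # identical parameter counts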
§.§.§ Slip Risk Module:
As mentioned in Section <ref>, the slip ratio quantitatively describes the wheel slippage. The slip ratio can be classified into five different categories: (0 < s_1 ≤ 0.2), (0.2 < s_2 ≤ 0.4), (0.4 < s_3 ≤ 0.6), (0.6 < s_4 ≤ 0.8), and (0.8 < s_5). The simulation results of the current study, as presented later, indicate that at low and moderate wheel speeds, the rover experiences low slip up to 0.4 but encounters high and uncertain slip when the wheel speed increases significantly. Based on the mobility system of the wheeled rover in this study and the previous literature <cit.>, this definition is adopted as a reasonable measure for evaluating the risk associated with wheel slip. Prior to predicting risk, the slip ratio is estimated from a regression curve using collected slip versus speed data as inputs. The estimated slip ratio, denoted as s, is modeled with a linear basis function model y(x,w) as in (<ref>).
s = y(x, w) + ϵ,
y(x, w) = ∑_j=0^M-1 w_j ϕ_j(x) = w^T ϕ(x).
where ϵ is the prediction error, x is the wheel speed variable, w is the weight vector and ϕ(x) are the basis functions. The weight vector w is obtained by minimizing the sum-of-squares error in (<ref>).
min_w E(w) = 1/2∑_n=1^N{ s_n - w^T ϕ(x_n) }^2.
The Gaussian basis function is adopted in this study.
ϕ_j(x) = exp(-(x - μ_j)^2/2t^2),
where μ_j is the basis function location in input space and t is the spatial scale. The slip prediction is then compared with the set thresholds for the different levels of wheel slip as discussed above. While slip risk is commonly defined in levels, the slip is terrain-dependent based on the estimated wheel traversing speed. The terrain-specific speed threshold is defined as f(ϕ), which is modified to f(ϕ - σ) in consideration of the speed estimation accuracy as in (<ref>).
s = f(ϕ),
s' = f(ϕ - σ) = f(ϕ'),
MAE : σ = 1/n∑_i=1^n |g_i - h_i|,
where g_i and h_i are the estimated traversing speed and the ground truth value, respectively. Through the above techniques, as adopted from <cit.>, the terrain-dependent slip risk prediction threshold can be shifted from (s, ϕ) to (s', ϕ').
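As a concrete illustration of the slip risk module, the sketch below (an illustrative sketch, not the authors' implementation; the number of basis functions, the spatial scale t, the bias column, and the synthetic data are assumptions) fits the Gaussian-basis least-squares regressor defined above and maps a predicted slip ratio to the five risk levels.

import numpy as np

def gaussian_basis(x, centers, t):
    # Design matrix of Gaussian basis functions phi_j(x) = exp(-(x - mu_j)^2 / (2 t^2))
    return np.exp(-((np.asarray(x)[:, None] - centers[None, :]) ** 2) / (2.0 * t ** 2))

def fit_slip_model(speeds, slips, n_basis=10, t=0.3):
    """Least-squares weights w minimizing 1/2 * sum_n (s_n - w^T phi(x_n))^2 (with a bias column)."""
    centers = np.linspace(speeds.min(), speeds.max(), n_basis)
    Phi = np.hstack([np.ones((len(speeds), 1)), gaussian_basis(speeds, centers, t)])
    w, *_ = np.linalg.lstsq(Phi, slips, rcond=None)
    return w, centers, t

def predict_slip(speed, w, centers, t):
    phi = np.concatenate(([1.0], np.exp(-((speed - centers) ** 2) / (2.0 * t ** 2))))
    return float(phi @ w)

def slip_risk_class(s):
    # Five risk levels s_1..s_5 with upper thresholds 0.2, 0.4, 0.6, 0.8
    for level, upper in enumerate([0.2, 0.4, 0.6, 0.8], start=1):
        if s <= upper:
            return level
    return 5

# Synthetic slip-vs-speed data, for illustration only
rng = np.random.default_rng(0)
speeds = rng.uniform(0.0, 3.5, 200)                                   # wheel angular speed [rad/s]
slips = np.clip(0.15 * speeds + 0.05 * rng.standard_normal(200), 0.0, 1.0)
w, centers, t = fit_slip_model(speeds, slips)
s_hat = predict_slip(2.0, w, centers, t)
print(s_hat, slip_risk_class(s_hat))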
§.§.§ SlipNet:
SlipNet is designed as a dynamic terrain analysis tool utilizing a Vision Transformer (ViT) encoder-decoder architecture to predict and adapt to varying slip conditions encountered by a rover during navigation. The input to the SlipNet consists of a segmentation map generated from the DeepLab v3+ model. The ViT encoder processes this map through self-attention mechanisms, capturing spatial hierarchies and inter-segment dependencies, with each segment treated as a token. A fusion layer then integrates real-time slip data from the rover's sensors. The ViT decoder uses these processed features to reconstruct an enhanced segmentation map that annotates each segment with predicted slip risks. SlipNet employs an experience replay buffer to store informative past experiences, using an attention mechanism to prioritize learning from segments that provide significant insight into terrain properties.
SlipNet predicts both mean m_i and variance σ_i for each pixel i, using a negative Gaussian log-likelihood loss, averaged over valid pixels i ∈𝒱 where ground truth m_i^t is available <cit.>:
ℒ = ∑_i ∈𝒱( (m_i - m_i^t)^2/2σ_i^2 + logσ_i )
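A minimal PyTorch-style sketch of this per-pixel loss is shown below, assuming the network outputs a mean map and a log-variance map and that a Boolean mask marks the valid pixels. Tensor names and shapes are illustrative, not the authors' implementation.

import torch

def slip_nll_loss(pred_mean, pred_log_var, target_mean, valid_mask):
    # Negative Gaussian log-likelihood, averaged over valid pixels:
    #   (m_i - m_i^t)^2 / (2 sigma_i^2) + log sigma_i,  with pred_log_var = log sigma_i^2
    var = torch.exp(pred_log_var)
    per_pixel = (pred_mean - target_mean) ** 2 / (2.0 * var) + 0.5 * pred_log_var
    return per_pixel[valid_mask].mean()

# toy usage
mean = torch.randn(1, 1, 64, 64, requires_grad=True)
log_var = torch.zeros(1, 1, 64, 64, requires_grad=True)
target = torch.rand(1, 1, 64, 64)
mask = torch.rand(1, 1, 64, 64) > 0.3           # pixels with available ground truth
slip_nll_loss(mean, log_var, target, mask).backward()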
§ RESULTS
We evaluate our models on test datasets that consist of 2016 terrain images. The rover used for simulation experiments is KUSR, which is lightweight (about 10 kg) with four grouser wheels of about 0.1 m radius. We use a maximum angular speed range of [-3.5, 3.5 ]rad/s with the differential drive to guide the rover from the start position to the goal position for each path-following scenario. Each scenario takes around 30 seconds. To generate rich data across a variety of terrains, various heterogeneous terrains were created with different complexities using a mixture of eight different soil textures given in Fig. <ref>.
A sample of five heterogeneous terrain images was selected for both quantitative and qualitative performance comparison of our approach and the baselines. Adopting pretrained models solely for the segmentation task, together with SlipNet to predict segment slippage values, was found to be more efficient in terms of computation and time. We investigated three pretrained segmentation models: U-Net, PSPNet, and DeepLab v3+.
For a fair comparison with TerrainNet, its elevation loss component is replaced by the slippage loss. The overall training times for U-Net + SlipNet, PSPNet + SlipNet, DeepLab v3+ + SlipNet, and TerrainNet are [4.8 - 7 hrs], [5.1 - 9 hrs], [6 - 9 hrs], and [2.2 - 6 hrs], respectively, as in Table <ref>. The U-Net + SlipNet takes the least inference time of about 0.11 seconds for each terrain sample. DeepLab v3+ + SlipNet (Ours), despite having an inference time of about 0.22 seconds, shows better prediction accuracy than the baselines in four out of five tested sample cases. Mean Absolute Error (MAE) is used for the quantitative comparison as shown in Table <ref>.
We evaluate the models qualitatively using the five heterogeneous image samples as shown in Fig. <ref>. The slip risk module outputs five slip risk classes in the [0, 1] slip-ratio range, presented by colors in the Jet colormap range. Untraversed terrains are initially assigned a moderate slip value range [0.2–0.4] represented by the cyan color ID. In all five samples used, TerrainNet shows the weakest slip prediction performance for each terrain segment. U-Net + SlipNet performs best in sample 3. In general, slip prediction performance is better if there is a clear semantic distinction in the terrain image. This is expected since the performance of the semantic segmentation network is high with distinct image features. Unseen terrains with a similar visual appearance but different mechanical properties than the known terrains are hard to predict within the slip risk threshold. Sample 4, which mimics a heterogeneous terrain of two lunar soils, was not seen during training, yet our model predicts the slip value for both soils with high accuracy.
In our model (DeepLab v3+ + SlipNet), resegmentation of unannotated terrain segments is handled through the weak-supervision mechanism of the DeepLab v3+ network.
The slip prediction performance of our model is tested in a new heterogeneous terrain. Four paths were followed under a rover speed of about [0.3 - 0.5] m/s traversing different soils with different mechanical properties. As shown in Fig. <ref>, the predicted mean slip
is represented by a red curve, and the standard deviation values are represented by a blue-shaded area around the mean. The actual mean slip value measured from the rover is plotted as the black curve. The mean slip value is computed as the average of the right and left wheel slip ratios.
During motion on desert sand-like terrain in path (a), our model predicts a high slip ratio of around 0.6. The measured average slip ratio value fluctuates due to variations in speeds from the right and left wheels in achieving differential steering drive on the soft desert sand-like terrain.
In paths (b), (c), and (d), multiple terrain textures were traversed and the measured slip value was predicted well within the prediction standard deviation. However, relatively high deviations experienced in some steps within the test scenarios are due to sudden changes in speed which were not handled by our current model.
All test scenarios for path-following experiments were conducted with a simple PID control that does not take into consideration terrain awareness. This model is suitable for generating a slip cost map for incorporation with an autonomous navigation algorithm.
§ CONCLUSIONS
In this work, we developed a framework for synthetic data generation for wheel-slip prediction in deformable terrain environments using Vortex Studio. We also proposed SlipNet, which introduces adaptive reclassification by dynamically resegmenting and reclassifying unseen terrain during deployment and maintaining a history of the most relevant segmentation class. Our method outperforms the state-of-the-art TerrainNet method for slip prediction in all five test-case scenarios with varying soil properties.
§ ACKNOWLEDGEMENTS.
We extend our sincere gratitude to the entire team at the Khalifa University Center for Autonomous Robotic Systems (KUCARS) and Advance Research and Innovation Center (ARIC) for their invaluable support and collaboration throughout this research.
|
http://arxiv.org/abs/2409.02483v1 | 20240904072001 | TASAR: Transferable Attack on Skeletal Action Recognition | [
"Yunfeng Diao",
"Baiqi Wu",
"Ruixuan Zhang",
"Ajian Liu",
"Xingxing Wei",
"Meng Wang",
"He Wang"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
TASAR: Transferable Attack on Skeletal Action Recognition
Yunfeng Diao, Baiqi Wu, Ruixuan Zhang, Ajian Liu, Xingxing Wei, Meng Wang, He Wang
==========================================================
§ ABSTRACT
Skeletal sequences, as well-structured representations of human behaviors, are crucial in Human Activity Recognition (HAR). The transferability of adversarial skeletal sequences enables attacks in real-world HAR scenarios, such as autonomous driving, intelligent surveillance, and human-computer interactions. However, existing Skeleton-based HAR (S-HAR) attacks exhibit weak adversarial transferability and, therefore, cannot be considered true transfer-based S-HAR attacks. More importantly, the reason for this failure remains unclear. In this paper, we study this phenomenon through the lens of loss surface, and find that its sharpness contributes to the poor transferability in S-HAR. Inspired by this observation, we assume and empirically validate that smoothening the rugged loss landscape could potentially improve adversarial transferability in S-HAR. To this end, we propose the first Transfer-based Attack on Skeletal Action Recognition, TASAR. TASAR explores the smoothed model posterior without re-training the pre-trained surrogates, which is achieved by a new post-train Dual Bayesian optimization strategy. Furthermore, unlike previous transfer-based attacks that treat each frame independently and overlook temporal coherence within sequences, TASAR incorporates motion dynamics into the Bayesian attack gradient, effectively disrupting the spatial-temporal coherence of S-HARs. To exhaustively evaluate the effectiveness of existing methods and our method, we build the first large-scale robust S-HAR benchmark, comprising 7 S-HAR models, 10 attack methods, 3 S-HAR datasets and 2 defense models. Extensive results demonstrate the superiority of TASAR. Our benchmark enables easy comparisons for future studies, with the code available in the supplementary material.
§ INTRODUCTION
Human Activity Recognition (HAR) has a wide range of application scenarios, such as human-computer interactions, bio-mechanics, and virtual reality <cit.>. Skeleton sequences have been widely used in HAR <cit.>, because skeleton data is a well-structured representation of human behaviors and is robust to lighting, occlusion, and view angles. Recent research <cit.> demonstrates that S-HARs are less robust than expected when adversarial perturbations are added to the testing data, causing skeletal classifiers to produce incorrect outputs. Previous skeleton-based attacks are mainly developed under the white-box setting <cit.>, where the attacker knows the structure and parameters of a given model, or under the query-based setting <cit.>, where the adversary can request numerous queries against the target model <cit.>. But both settings are impractical in real-world HAR scenarios, such as autonomous driving <cit.>, intelligent surveillance <cit.>, and human-computer interactions <cit.>, in which the white-box information and a large number of queries are not attainable.
In contrast, transfer-based attacks, which craft adversarial examples from surrogate models and then transfer them to target black-box models, present a more realistic threat under the free-query black-box setting. Although adversarial transferability has been widely studied across various tasks <cit.>, research on S-HAR remains limited. Recent studies have attempted to apply white-box S-HAR attacks against black-box models via surrogate models <cit.>. However, their transferability is low and highly sensitive to the surrogate choice <cit.>. Consequently, existing S-HAR attacks cannot be regarded as transferable. Similarly, previous transfer-based attacks <cit.> are successful on image data, but show poor transferability when applied to skeletal motion. Contrary to the common belief that adversarial examples transfer well across different model architectures and parameters <cit.>, existing attacks do not exhibit transferability on S-HAR, raising doubt on the usefulness of adversarial transferability in this domain <cit.>. More importantly, the reason for this failure remains unclear.
To study this phenomenon, we begin by examining the factors that hinder adversarial transferability in S-HARs. Our preliminary experiments reveal that adversarial transferability is highly sensitive to the chosen surrogates (<ref>). This finding motivates us to further explore the differences between surrogate models from the view of loss surface smoothness, as previous research has demonstrated that the smoothness of the loss surface significantly impacts adversarial transferability <cit.>. As shown in <ref>, surrogates with a smoothed loss landscape have higher transferability than surrogates with a sharp loss landscape. Therefore, unlike most existing attacks that focus on the backpropagation process, we argue that the S-HAR attack should prioritize smoothening the surrogate's landscape in the training process.
Bayesian neural networks have been shown to exhibit lower sharpness and better generalization ability <cit.>. Therefore, attacking the entire model distribution is more likely to generalize for S-HARs <cit.>. However, it is not straightforward to design such a transferable Bayesian attack for S-HAR. First, skeletal classifiers contain at least several million parameters. Sampling from the posterior distribution for such complex models can be computationally expensive and incur a heavy memory footprint. Second, most prior transferable attacks are specifically designed for static data, e.g. images. However, most S-HAR models learn spatial-temporal features because skeletal data contains rich motion dynamics. Naive adaptation of the Bayesian attack may ignore the spatial-temporal coherence when issuing attacks. How to consider the motion dynamics in Bayesian attacks has not been explored.
To tackle these challenges, we propose the first Transfer-based Attack specifically designed for Skeletal Action Recognition, TASAR, based on a Post-train Dual Bayesian optimization that significantly smoothens the rugged landscape (see <ref>). Our post-train Bayesian strategy freezes the pre-trained surrogate and appends small extra Bayesian components behind it, to turn a single surrogate into a Bayesian one without the need for re-training, thereby speeding up the training process and avoiding a heavy memory footprint. Next, we integrate the temporal motion gradient in a Bayesian manner to disrupt the temporal coherence between sequences over time. A high-level illustration of our method is presented in <ref>. Our contributions are summarized as follows.
* We systematically investigate the previously unknown reason for low adversarial transferability in S-HAR through the lens of loss surface smoothness, and introduce the first transfer-based attack on S-HAR to address this issue.
* We propose a novel post-train Dual Bayesian Motion attack, that explores the smoothed model posterior and considers the spatial-temporal coherence from a Bayesian perspective, without requiring re-training of the pre-trained surrogate model.
* We build the first comprehensive robust S-HAR evaluation benchmark RobustBenchHAR. RobustBenchHAR consists of 7 S-HAR models with diverse GCN structures and more recent Transformer structures, 10 attack methods, 3 datasets and 2 defense methods. Extensive experiments on this benchmark demonstrate the superiority and generalizability of TASAR.
§ RELATED WORK
Skeleton-Based Human Action Recognition.
Early S-HAR research employed convolutional neural networks (CNNs) <cit.> and recurrent neural networks (RNNs) <cit.> to extract motion features in the spatial domain and temporal domains, respectively. However, skeleton data, inherently a topological graph, poses challenges for feature representation using traditional CNNs and RNNs. Recent advances with graph convolutional networks (GCNs) <cit.> have improved performance by modeling skeletons as topological graphs, with nodes corresponding to joints and edges to bones <cit.>. Subsequent improvements in graph designs and network architectures include two-stream adaptive GCN (2s-AGCN) <cit.>, directed acyclic GCN (DGNN) <cit.>, multi-scale GCN (MS-G3D) <cit.>, channel-wise topology refinement (CTR-GCN) <cit.> and auxiliary feature refinement (FR-HEAD) <cit.>. Alongside advancements in GCN-based models, recent studies are exploring temporal Transformer structures for S-HARs <cit.>, but their vulnerability remains unexplored. This work is the first to assess the adversarial robustness of Transformer-based S-HARs.
Adversarial Attacks on S-HAR.
Adversarial attacks <cit.> highlight the susceptibility of deep neural networks and have been applied across different data types. Recently, attacks on S-HAR have garnered increasing attention. CIASA <cit.> proposes a constrained iterative attack via GAN <cit.> to regularize the adversarial skeletons. SMART <cit.> proposes a perception loss gradient. Tanaka et al. <cit.> suggest only perturbing skeletal lengths, and evaluate robustness via Fourier analysis <cit.>. These methods are white-box attacks, requiring full knowledge of the victim model. In contrast, BASAR <cit.> proposes motion manifold searching to achieve the query-based black-box attack. FGDA-GS <cit.> estimates gradient signs to further reduce query numbers.
Compared to white-box and query-based attacks, transfer-based attacks <cit.> pose a more practical threat as real-world HAR scenarios typically cannot access white-box information or extensive querying. While existing white-box S-HAR attacks <cit.> can be adapted for transfer-based scenarios, they suffer from low transferability and sensitivity to surrogate choices. Lu et al. <cit.> proposes a no-box attack for S-HAR but also lacks transferability. Numerous transfer-based attacks, including gradient-based <cit.>, input transformation <cit.>, and ensemble-based methods <cit.>, exhibit high transferability across various tasks but struggle in skeletal data <cit.>. Therefore, there is an urgent need to develop a transferable attack for skeleton-based action recognition.
§ METHODOLOGY
§.§ Preliminaries
We denote a clean motion 𝐱∈𝒳 and its corresponding label y ∈𝒴. Given a surrogate action recognizer f_θ parametrized by θ, f_θ is trained to map a motion 𝐱 to a predictive distribution p(y |𝐱, θ). The white-box attack can be optimized by minimizing the predictive probability:
min_‖𝐱̃ - 𝐱‖_p ≤ ϵ p(y |𝐱̃, θ)
where 𝐱̃ is the adversarial example, ϵ is the perturbation budget, and ‖·‖_p is the l_p norm distance. The procedure of a transfer-based attack is to first craft the adversarial example 𝐱̃ by attacking the surrogate model, and then transfer 𝐱̃ to attack the unseen target model. In <ref>, since the transferable adversarial examples are optimized against one surrogate model, the adversarial transferability heavily relies on the surrogate model learning a classification boundary similar to that of the unknown target model. While possible for image classification, this proves unrealistic for S-HAR <cit.>.
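For concreteness, the constrained objective above is typically solved with an iterative sign-gradient method; a generic l_∞ sketch is given below. This is a standard I-FGSM-style loop for illustration, not any specific S-HAR attack discussed here, and the step-size heuristic is an assumption.

import torch
import torch.nn.functional as F

def iterative_linf_attack(model, x, y, eps=0.01, steps=200):
    """Maximize the classification loss (i.e. minimize p(y | x_adv)) subject to ||x_adv - x||_inf <= eps."""
    alpha = 2.5 * eps / steps                        # common step-size heuristic
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the l_inf ball
    return x_adv.detach()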
§.§ Motivation
Existing S-HAR attacks have shown outstanding white-box attack performance but exhibit low transferability <cit.>. Similarly, previous transfer-based attacks <cit.>, successful on image data, also show poor transferability when applied to skeletal motion <cit.>. Naturally, two questions occur to us: (1) Why do existing adversarial attacks fail to exhibit transferability in skeletal data? (2) Do transferable adversarial examples truly exist in S-HAR?
To answer these questions, we start by generating adversarial examples using various surrogate skeletal recognizers and then evaluate their adversarial transferability. Obviously, in <ref>, the transferability is highly sensitive to the chosen surrogates, e.g. CTR-GCN <cit.> as the surrogate exhibits higher transferability than ST-GCN <cit.>. This observation motivates us to further investigate the differences between surrogate models. Previous research <cit.> has proven that adversarial examples generated by surrogate models with a less smooth loss landscape are unlikely to transfer across models. Therefore, we investigate the smoothness of the loss landscape across different surrogate models. In <ref>, we visualize the loss landscape of ST-GCN and CTR-GCN trained on the skeletal dataset NTU-60 <cit.> and compare their smoothness to the ResNet-18 <cit.> trained on CIFAR-10 <cit.>. More landscape visualizations can be found in Appendix B. By analyzing the loss surface smoothness, we have two findings: (1) The loss surface of models trained on skeletal data is much sharper than that of models trained on image data, leading to a relatively low transferability. This suggests that adversarial examples within a sharp local region are less likely to transfer across models in S-HAR, potentially explaining our first question. (2) CTR-GCN has a flatter loss landscape compared to ST-GCN, making it a more effective surrogate for higher transferability. Consequently, we argue that using a surrogate with a smoothed loss landscape will significantly enhance adversarial transferability in S-HAR.
In this work, motivated by evidence that Bayesian neural networks (BNNs) exhibit low sharpness and good generalization <cit.>, we aim to construct a Bayesian surrogate by sampling from the model posterior space to smoothen the rugged loss landscape. From a Bayesian perspective, <ref> can be reformulated by approximately minimizing the Bayesian posterior predictive distribution:
min_‖𝐱̃ - 𝐱‖_p ≤ ϵ p(y |𝐱̃, 𝒟) = min_‖𝐱̃ - 𝐱‖_p ≤ ϵ 𝔼_θ∼ p(θ|𝒟) p(y |𝐱̃, θ)
where p(θ|𝒟) ∝ p(𝒟|θ) p(θ), in which 𝒟 is the dataset and p(θ) is the prior of model weights.
§.§ A Post-train Bayesian Perspective on Attack
Unfortunately, directly sampling from the posterior distribution of skeletal classifiers is not a straightforward task due to several factors. First, directly sampling the posterior is intractable for large-scale skeletal classifiers. Although approximate methods such as MCMC sampling <cit.> or variational inference <cit.> are possible, sampling is prohibitively slow and resource-intensive due to the high dimensionality of the sampling space, which typically involves at least several million parameters in skeletal classifiers. In addition, skeletal classifiers normally contain a large number of parameters and are pre-trained on large-scale datasets <cit.>. Consequently, it is not practical for end-users to re-train the surrogate in a Bayesian manner, as the training process is time-consuming.
To solve the above issues, we propose a new post-train Bayesian attack. We maintain the integrity of the pre-trained surrogate while appending a tiny MLP layer g_θ^' behind it, connected via a skip connection. Specifically, the final output logits can be computed as: logits=g_θ^'(f_θ(𝐱))+f_θ(𝐱). In practice, we adopt Monte Carlo sampling to optimize the appended Bayesian model:
max_θ^' 𝔼_θ^'∼ p(θ^'|𝒟,θ) p(y |𝐱, θ, θ^')
≈ max_θ_k^' 1/K∑_k = 1^K p(y |𝐱, θ, θ_k^'),
θ_k^'∼ p(θ^'|𝒟,θ)
where K is the number of appended models. Correspondingly, <ref> can be approximately solved by performing attacks on the ensemble of tiny appended models:
min_‖δ‖_p ≤ ϵ 1/K∑_k = 1^K p(y |𝐱+δ, θ, θ_k^'), θ_k^'∼ p(θ^'|𝒟,θ)
Our post-train Bayesian attack offers two advantages. First, the appended models are composed of tiny MLP layers, resulting in a memory cost similar to that of a single surrogate. Second, by freezing f_θ, our post-train Bayesian strategy keeps the pre-trained surrogate intact, avoiding re-training it. More importantly, training on g_θ^' is much faster than on f_θ due to the smaller model size of g_θ^'.
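To make this construction concrete, the sketch below gives a schematic PyTorch implementation of a frozen surrogate f_θ with K small appended heads g_θ^' joined by the skip connection logits = g_θ^'(f_θ(𝐱)) + f_θ(𝐱), together with the ensemble-averaged attack objective. The hidden width, the stand-in surrogate, and the use of cross-entropy in place of the predictive probability are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AppendedHead(nn.Module):
    """Tiny two-layer MLP appended behind the frozen surrogate."""
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, base_logits):
        return self.net(base_logits) + base_logits          # skip connection: g(f(x)) + f(x)

class PostTrainBayesianSurrogate(nn.Module):
    def __init__(self, surrogate, num_classes, K=3):
        super().__init__()
        self.surrogate = surrogate
        for p in self.surrogate.parameters():                # keep the pre-trained surrogate intact
            p.requires_grad_(False)
        self.heads = nn.ModuleList([AppendedHead(num_classes) for _ in range(K)])

    def attack_loss(self, x, y):
        """Cross-entropy averaged over the K appended models (the Monte Carlo ensemble)."""
        base = self.surrogate(x)
        return torch.stack([F.cross_entropy(h(base), y) for h in self.heads]).mean()

# toy usage with a stand-in surrogate for 60-frame, 25-joint skeletons and 60 classes
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 60 * 25, 60))
model = PostTrainBayesianSurrogate(surrogate, num_classes=60, K=3)
x = torch.randn(2, 3, 60, 25, requires_grad=True)
model.attack_loss(x, torch.tensor([0, 1])).backward()        # d(loss)/dx drives the transfer attack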
§.§ Post-train Dual Bayesian Motion Attack
In our preliminary experiments, we found that a naive application of post-train Bayesian attack (<ref>) already surpassed the adversarial transfer performance of existing S-HAR attacks, which demonstrates the effectiveness of smoothening the loss surface of surrogates. However, its performance remains slightly inferior to the Bayesian attack via re-training a Bayesian surrogate <cit.>(<ref>). This performance gap is understandable, as we avoid the prohibitively slow process of sampling the original posterior distribution θ∼ p(θ|𝒟) by using a tiny Bayesian component for post-training instead. To further eliminate the trade-off between attack strength and efficiency, we propose a novel post-train dual Bayesian optimization for smoothed posterior sampling, to sample the appended models with high smoothness for better transferability (<ref>). Moreover, unlike previous transfer-based attacks that assume each frame is independent and ignore the temporal dependency between sequences, we integrate motion dynamics information into the Bayesian attack gradient to disrupt the spatial-temporal coherence of S-HAR models. We name our method Post-train Dual Bayesian Motion Attack.
§.§.§ Post-train Dual Bayesian Optimization.
The motivation is based on the view that models sampled from a smooth posterior, along with the optimal approximate posterior estimating this smooth posterior, have better smoothness <cit.>. To this end, we aim to construct a smooth posterior for learning post-train BNNs, which may therefore yield higher adversarial transferability. Specifically, inspired by the observation that randomized weights often achieve smoother weight updates <cit.>, we add Gaussian noise to smooth the appended network weights. This is achieved by a new post-train dual Bayesian optimization:
max_θ^' 𝔼_θ^'∼ p(θ^'|𝒟,θ) 𝔼_Δθ^'∼𝒩(0, σ^2𝐈) p(y |𝐱, θ, θ^'+Δθ^')
For any appended model sampled from the posterior, <ref> ensures that the neighborhood around the model parameters has uniformly low loss. We further use dual Monte Carlo sampling to approximate <ref>:
min _θ^'_k∼ p(θ^'|𝒟, θ)1/M K∑_k=1^K∑_m=1^M L(𝐱, y, θ, θ_k^'+Δθ^'_km) ,
Δθ^'_km∼𝒩(0, σ^2𝐈)
where L is the classification loss. Considering dual MCMC samplings computationally intensive, we instead consider the worst-case parameters from the posterior, followed by <cit.>. Hence <ref> can be equivalent to a min-max optimization problem, written as:
min _θ^'_k∼ p(θ^'|𝒟, θ)max _Δθ^'∼𝒩(0, σ^2𝐈)1/K∑_k=1^K L(𝐱, y,θ, θ_k^'+Δθ^') ,
p(Δθ^') ≥ξ
The confidence region of the Gaussian posterior is regulated by ξ. We discuss the sensitivity to ξ in the Appendix B. The entanglement between θ^' and Δθ^' complicates gradient updating. To simplify this issue, we utilize Taylor expansion at θ^' to decompose the two components:
min _θ^'_k∼ p(θ^'|𝒟, θ)max _Δθ^'∼𝒩(0, σ^2𝐈)1/K∑_k=1^K [L(𝐱, y,θ ,θ_k^')
+∇_θ^'_k L(𝐱, y,θ, θ_k^')^TΔθ^'], p(Δθ^') ≥ξ
Since Δθ^' is sampled from a zero-mean isotropic Gaussian distribution, the inner maximization can be solved analytically. We introduce the inference details and mathematical deduction in Appendix A. As shown in <ref>, the loss landscape optimized by post-train Dual Bayesian is significantly smoother than that of the vanilla post-train Bayesian optimization.
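Under the first-order Taylor expansion above, the inner maximization over an isotropic Gaussian confidence region reduces to a sharpness-aware-style closed form: the worst-case Δθ^' points along ∇_θ^' L and is scaled to the region's radius. The sketch of one training step below reflects this interpretation and is an illustration only; the radius rho, the optimizer, and the loss are assumed hyper-parameters rather than the authors' code.

import torch

def dual_bayesian_step(head, base_logits, y, loss_fn, optimizer, rho=0.05):
    # 1) gradient of the loss w.r.t. the appended weights theta'
    loss = loss_fn(head(base_logits), y)
    grads = torch.autograd.grad(loss, list(head.parameters()))
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # worst-case perturbation: delta = rho * grad / ||grad|| (first-order solution of the inner max)
    deltas = []
    with torch.no_grad():
        for p, g in zip(head.parameters(), grads):
            d = rho * g / grad_norm
            p.add_(d)
            deltas.append(d)

    # 2) loss at theta' + delta, then restore theta' and take the optimizer step
    optimizer.zero_grad()
    loss_at_worst = loss_fn(head(base_logits), y)
    loss_at_worst.backward()
    with torch.no_grad():
        for p, d in zip(head.parameters(), deltas):
            p.sub_(d)
    optimizer.step()
    return float(loss_at_worst)

# toy usage: a tiny head optimized on frozen surrogate logits
head = torch.nn.Sequential(torch.nn.Linear(60, 64), torch.nn.ReLU(), torch.nn.Linear(64, 60))
opt = torch.optim.SGD(head.parameters(), lr=1e-2)
dual_bayesian_step(head, torch.randn(8, 60), torch.randint(0, 60, (8,)),
                   torch.nn.functional.cross_entropy, opt)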
§.§.§ Temporal Motion Gradient in Bayesian Attack.
Post-train Dual Bayesian Motion Attack can be performed with gradient-based methods such as FGSM <cit.>:
𝐱̃ = 𝐱 + α·sign( ∑_k=1^K∑_m=1^M∇ L(𝐱, y, θ, θ_k^' + Δθ^'_km))
where α is the step size. For notational simplicity, we denote the classification loss L(𝐱, y, θ, θ_k^' + Δθ^'_km) as L(𝐱). For a motion with t frames 𝐱=[x_1, x_2, ⋯, x_t], this attack gradient consists of a set of partial derivatives over all frames ∇ L(𝐱)=[∂ L(𝐱)/∂ x_1, ∂ L(𝐱)/∂ x_2, ⋯, ∂ L(𝐱)/∂ x_t]. The partial derivative ∂ L(𝐱)/∂ x_t assumes each frame is independent, ignoring the dependency between frames over time. This assumption is reasonable for attacks on static data, such as PGD <cit.> on images, but is infeasible for skeletal motion attacks. In skeletal motion, most S-HAR models learn the spatial-temporal features <cit.>, hence considering motion dynamics in the computation of the attack gradient can disrupt the spatial-temporal coherence of these features, leading to more general transferability. To fully represent the motion dynamics, first-order (velocity) gradient (∇ L(𝐱))_d1 and second-order (acceleration) gradient information (∇ L(𝐱))_d2 should also be considered. To this end, we augment the original position gradient with the motion gradient, then <ref> becomes:
𝐱̃ = 𝐱 + α·sign( ∑_k=1^K∑_m=1^M∇ L_dyn(𝐱))
∇ L_dyn(𝐱) = ∑_n=0^2 w_n (∇ L(𝐱))_dn, ∑_n=0^2 w_n = 1
where (∇ L(𝐱))_d0= ∇ L(𝐱). The motion gradient can be computed by explicit modeling <cit.> or implicit learning <cit.>. Given that implicit learning requires training an additional data-driven model to learn the motion manifold, which increases computational overhead, we opt for explicit modeling. Inspired by <cit.>, we employ time-varying autoregressive models (TV-AR) <cit.> because TV-AR can effectively estimate the dynamics of skeleton sequences by modeling temporally non-stationary signals <cit.>. We first use first-order TV-AR (f_d 1) and second-order TV-AR (f_d 2) models to model human motions, respectively:
f_d 1:x^i_t =A_t·x^i_t-1+B_t+γ_t
f_d 2: x^i_t =C_t·x^i_t-1+D_t·x^i_t-2+E_t+γ_t
where the model parameters β_t^1=[A_t, B_t] and β_t^2=[C_t, D_t, E_t] are all time-varying parameters and determined by data-fitting. γ_t is a time-dependent white noise representing the dynamics of stochasticity. Using <ref>, the first-order motion gradient can be derived as:
(∂ L(𝐱^i)/∂x^i_t-1)_d 1=∂ L(𝐱^i)/∂x^i_t-1+∂ L(𝐱^i)/∂x^i_t· A_t
Similarly, second-order dynamics can be expressed as below by using <ref>:
(∂ L(𝐱^i)/∂x^i_t-2)_d 2=∂ L(𝐱^i)/∂x^i_t-2+∂ L(𝐱^i)/∂x^i_t-1· C_t-1
+∂ L(𝐱^i)/∂x^i_t·(D_t+C_t· C_t-1)
where C_t=∂x^i_t/∂x^i_t-1 and D_t=∂x^i_t/∂x^i_t-2. After computing x^i_t-1=C_t-1·x^i_t-2+ D_t-1·x^i_t-3+E_t-1+γ_t-1, we can compute C_t-1=∂x^i_t-1/∂x^i_t-2. Overall, the high-order dynamics gradients over all sequences can be expressed as (∇ L(𝐱))_d1=[(∂ L(𝐱)/∂ x_1)_d 1, (∂ L(𝐱)/∂ x_2)_d 1, ⋯, (∂ L(𝐱)/∂ x_t)_d 1] and (∇ L(𝐱))_d2=[(∂ L(𝐱)/∂ x_1)_d 2, (∂ L(𝐱)/∂ x_2)_d 2, ⋯, (∂ L(𝐱)/∂ x_t)_d 2].
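A simplified NumPy sketch of the weighted spatial-temporal gradient above is given below. The weights w_n, the constant TV-AR coefficients in the toy usage, and the per-joint layout are illustrative assumptions rather than the authors' exact fitting procedure.

import numpy as np

def dynamic_gradient(grad, A, C, D, w=(0.6, 0.3, 0.1)):
    """
    Weighted spatial-temporal attack gradient  w0*grad + w1*grad_d1 + w2*grad_d2  (w0+w1+w2 = 1).
    grad : (T, J) loss gradient dL/dx_t per frame and joint coordinate.
    A    : (T,) first-order TV-AR coefficients  (x_t ~ A_t x_{t-1} + B_t).
    C, D : (T,) second-order TV-AR coefficients (x_t ~ C_t x_{t-1} + D_t x_{t-2} + E_t).
    """
    g1 = grad.copy()                                  # velocity term: dL/dx_{t-1} + dL/dx_t * A_t
    g1[:-1] = grad[:-1] + grad[1:] * A[1:, None]
    g2 = grad.copy()                                  # acceleration term via the chain rule
    g2[:-2] = (grad[:-2] + grad[1:-1] * C[1:-1, None]
               + grad[2:] * (D[2:] + C[2:] * C[1:-1])[:, None])
    w0, w1, w2 = w
    return w0 * grad + w1 * g1 + w2 * g2

# toy usage: 60 frames, 25 joints x 3 coordinates, constant-coefficient motion model
rng = np.random.default_rng(0)
grad = rng.standard_normal((60, 75))
A = np.ones(60); C = 2.0 * np.ones(60); D = -np.ones(60)   # x_t ~ 2 x_{t-1} - x_{t-2}
print(dynamic_gradient(grad, A, C, D).shape)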
§ EXPERIMENTS
§.§ RobustBenchHAR Settings
To our best knowledge, there is no large-scale benchmark for evaluating transfer-based S-HAR attacks. To fill this gap, we build the first large-scale benchmark for robust S-HAR evaluation, named RobustBenchHAR. We briefly introduce the benchmark settings here, with additional details available in Appendix C.
(A) Datasets. RobustBenchHAR incorporates three popular S-HAR datasets: NTU 60 <cit.> , NTU 120 <cit.> and HDM05<cit.>. Since the classifiers do not have the
same data pre-processing setting, we unify the data format following <cit.>. For NTU 60 and NTU 120, we subsampled frames to 60. For HDM05, we segmented the data into 60-frame samples.
(B) Evaluated Models. We evaluate TASAR in three categories of surrogate/victim models. (1) Normally trained models: We adopt 5 commonly used GCN-based models, i.e., ST-GCN <cit.>, MS-G3D <cit.>, CTR-GCN <cit.>, 2s-AGCN <cit.>, FR-HEAD <cit.>, and two recent Transformer-based models, SkateFormer <cit.> and STTFormer <cit.>. To our best knowledge, this is the first work to investigate the robustness of Transformer-based S-HARs. (2) Ensemble models: an ensemble of ST-GCN, MS-G3D and DGNN <cit.>. (3) Defense models: We employ BEAT <cit.> and TRADES <cit.>, both of which have demonstrated robustness for skeletal classifiers.
(C) Baselines. We compare with state-of-the-art (SOTA) S-HAR attacks, i.e. SMART <cit.> and CIASA <cit.>. We also adopt the SOTA transfer-based attacks as baselines, including gradient-based, i.e., I-FGSM <cit.>, MI-FGSM <cit.> and the latest MIG <cit.>, input transformation method DIM <cit.>, and ensemble-based/Bayesian attacks, i.e., ENS <cit.>, SVRE <cit.> and BA <cit.>. For a fair comparison, we ran 200 iterations for all attacks under l_∞ norm-bounded perturbation of size 0.01. For TASAR, we use the iterative gradient attack instead of FGSM in <ref>.
(D) Implementation Details. Our appended model is a simple two-layer fully-connected network. Unless specified otherwise, we use K=3 and M=20 in <ref> by default and explain the reason in the ablation study later. More implementation details can be found in Appendix C.
§.§ Evaluation on Normally Trained Models
Evaluation of Untargeted Attack. As shown in <ref>, TASAR significantly surpasses both S-HAR attacks and transfer-based attacks under the black-box setting, while maintaining comparable white-box attack performance. Specifically, TASAR achieves the highest average transfer success rate of 35.5% across different models and datasets, surpassing SMART <cit.> (the SOTA S-HAR attack) and MIG <cit.> (the SOTA transfer-based attack) by large margins of 23.4% and 8.1%, respectively. Moreover, TASAR shows consistent transferability across all surrogate models, target models and datasets. These improvements break the common belief that transfer-based attacks in S-HAR suffer from low transferability and highly rely on the chosen surrogate <cit.>.
Evaluation of Targeted Attack.
In this section, we focus on targeted attacks under the black-box setting. Improving targeted attack transferability on S-HAR is generally more challenging than untargeted attacks. This is primarily due to the significant semantic differences between the randomly selected class and the original one. Attacking a `running' motion to `walking' is generally easier than to `drinking'. This is why targeted attacks have lower success rate than untargeted attacks. However, <ref> shows TASAR still outperforms the baseline under most scenarios. Moreover, TASAR can successfully attack the original class to a target with an obvious semantic gap without being detected by humans. The visual examples can be found in Appendix B.
§.§ Evaluation on Ensemble and Defense Models
Evaluation on Ensemble Models.
TASAR benefits from the additional model parameters added by the appended Bayesian components. For a fair comparison, we compare it with SOTA ensemble-based methods, i.e., ENS <cit.> and SVRE <cit.>, and the Bayesian Attack (BA) <cit.>, because they also benefit from the model size. Unlike BA, which re-trains the surrogate into a BNN, we instead append a small Bayesian component for post-training. ENS and SVRE take three models ST-GCN, MS-G3D and DGNN as an ensemble of surrogate models, while BA and TASAR only take MS-G3D as the single substitute architecture. We choose ST-GCN, 2s-AGCN, MS-G3D, CTR-GCN, FR-HEAD as the target models, and evaluate the average white-box attack success rate (WASR), average black-box attack success(BASR) and the number of parameters in <ref>. We can clearly see that TASAR (blue line) achieves the best attack performance under both white-box and black-box settings, with an order of magnitude smaller model size. When using MSG3D (12.78M) as the surrogate model, the Bayesian components appended by TASAR only increase 0.012M parameters of the surrogate size, resulting in a memory cost comparable to that of a single surrogate. In contrast, the Bayesian surrogate model used by BA has 15 times more parameters (255.57M) than the single surrogate.
Evaluation on Defense Models.
As BEAT shows high robustness against S-HAR white-box attack <cit.>, it is also interesting to evaluate its defense performance against black-box attack. We also employ the adversarial training method TRADES <cit.> as a baseline due to its robustness in S-HAR <cit.>. Obviously, in <ref>, TASAR still achieves the highest adversarial transferability among the compared methods against defense models, further validating its effectiveness.
§.§ Ablation Study
Dual MCMC Sampling.
TASAR proposes a new dual MCMC sampling in the post-train Bayesian formulation (<ref>). To see its contribution, we conduct an ablation study on the number of appended models (K and M in <ref>). As shown in <ref>, compared with the vanilla post-train Bayesian strategy (M=0), the dual sampling significantly improves the attack performance. Furthermore, although TASAR theoretically requires intensive sampling for inference, in practice we find that a small number of samples is sufficient (K=3 and M=20). More samples would incur extra computational overhead. So we use K=3 and M=20 by default.
Temporal Motion Gradient.
TASAR benefits from the interplay between the temporal Motion Gradient (MG) and the Bayesian formulation. We hence conduct ablation studies (MG/No MG) to show the effect of the motion gradient and report the results in <ref>. Compared with TASAR without the motion gradient, TASAR with the motion gradient consistently improves the attack success rate in both white-box and transfer-based attacks, which shows the benefit of integrating the motion gradient into the Bayesian formulation.
§.§ Surrogate Transferability
It is widely believed that transfer-based attacks in S-HAR are highly sensitive to the surrogate choice <cit.>. In this subsection, we provide a detailed analysis of the factors contributing to this phenomenon. When looking at the results in <ref> and the visualization of loss landscape in <ref> and Appendix B, we note that loss surface smoothness correlates with the adversarial transferability. For example, CTR-GCN, manifesting smoother regions within the loss landscape, demonstrates higher transferability than ST-GCN and STTFormer. STTFormer trained on NTU 120 has a smoother loss surface than ST-GCN (see Appendix B), resulting in higher transferability than ST-GCN. For NTU 60, STTFormer shows a similar loss surface to that of ST-GCN and exhibits comparable transferability. Therefore, we suspect that the loss surface smoothness plays a pivotal role in boosting adversarial transferability for S-HAR, potentially outweighing the significance of gradient-based optimization techniques. Next, two-stream MS-G3D shows the highest transferability. Unlike other surrogates, which solely extract joint information, MS-G3D uses a two-stream ensemble incorporating both joint and bone features, thereby effectively capturing relative joint movements. In conclusion, we suggest that skeletal transfer-based attacks employ smoother two-stream surrogates incorporating both joint and bone information.
§ CONCLUSION
In this paper, we systematically investigate the adversarial transferability for S-HARs from the view of loss landscape, and propose the first transfer-based attack on skeletal action recognition, TASAR. We build RobustBenchHAR, the first comprehensive benchmark for robustness evaluation in S-HAR. We hope that RobustBenchHAR could contribute to the adversarial learning and S-HAR community by facilitating researchers to easily compare new methods with existing ones and inspiring new research from the thorough analysis of the comprehensive evaluations.
§ REPRODUCIBILITY CHECKLIST
This paper:
* Includes a conceptual outline and/or pseudocode description of AI methods introduced (yes)
* Clearly delineates statements that are opinions, hypothesis, and speculation from objective facts and results (yes)
* Provides well-marked pedagogical references for less-familiar readers to gain background necessary to replicate the paper (yes)
Does this paper make theoretical contributions? (no)
Does this paper rely on one or more datasets? (yes)
If yes, please complete the list below.
* A motivation is given for why the experiments are conducted on the selected datasets (yes)
* All novel datasets introduced in this paper are included in a data appendix. (NA)
* All novel datasets introduced in this paper will be made publicly available upon publication of the paper with a license that allows free usage for research purposes. (NA)
* All datasets drawn from the existing literature (potentially including authors’ own previously published work) are accompanied by appropriate citations. (yes)
* All datasets drawn from the existing literature (potentially including authors’ own previously published work) are publicly available. (yes)
* All datasets that are not publicly available are described in detail, with explanation why publicly available alternatives are not scientifically satisficing. (NA)
Does this paper include computational experiments? (yes)
If yes, please complete the list below.
* Any code required for pre-processing data is included in the appendix. (yes).
* All source code required for conducting and analyzing the experiments is included in a code appendix. (yes)
* All source code required for conducting and analyzing the experiments will be made publicly available upon publication of the paper with a license that allows free usage for research purposes. (yes)
* All source code implementing new methods have comments detailing the implementation, with references to the paper where each step comes from (yes)
* If an algorithm depends on randomness, then the method used for setting seeds is described in a way sufficient to allow replication of results. (yes)
* This paper specifies the computing infrastructure used for running experiments (hardware and software), including GPU/CPU models; amount of memory; operating system; names and versions of relevant software libraries and frameworks. (yes)
* This paper formally describes evaluation metrics used and explains the motivation for choosing these metrics. (yes)
* This paper states the number of algorithm runs used to compute each reported result. (yes)
* Analysis of experiments goes beyond single-dimensional summaries of performance (e.g., average; median) to include measures of variation, confidence, or other distributional information. (yes)
* The significance of any improvement or decrease in performance is judged using appropriate statistical tests (e.g., Wilcoxon signed-rank). (yes)
* This paper lists all final (hyper-)parameters used for each model/algorithm in the paper’s experiments. (yes)
* This paper states the number and range of values tried per (hyper-) parameter during development of the paper, along with the criterion used for selecting the final parameter setting. (yes)
|
http://arxiv.org/abs/2409.02554v1 | 20240904091957 | DH type II radio bursts during solar cycles 23-25: Origin and association with solar eruptive events | [
"Bhuwan Joshi",
"Binal D. Patel",
"Kyung-Suk Cho",
"Rok-Soon Kim"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Solar and Stellar Coronal Mass Ejections
Proceedings IAU Symposium No. 388, 2024
N. Gopalswamy, O. Malandraki, A. Vidotto & W. Manchester, eds.
^1Udaipur Solar Observatory, Physical Research Laboratory, Udaipur 313001, India
email: bhuwan@prl.res.in
^2Space Science Division, Korea Astronomy and Space Science Institute, Daejeon 34055, Republic
of Korea
§ ABSTRACT
We analyse the occurrence of DH type II solar radio bursts spanning solar cycles 23-25, during which a total of 590 DH type II bursts are reported, with 568 and 462 confirmed cases of associated CMEs and flares, respectively. We find short-term yet important differences in DH type II activity when the data is examined in terms of event counts and their durations, e.g., a temporal shift in the peak activity during cycle 24 and a variation in the growth rate of the activity level during cycle 25. For an in-depth exploration, DH type II bursts are classified into 3 categories based on their end-frequencies: Low-, Medium-, and High-Frequency Groups (LFG, MFG, and HFG, respectively). The HFG category is the most populous (≈47%) while the LFG category occupies about a quarter of the events (≈24%). The LFG events show a clear inclination toward the fastest CMEs and X-class flares, with a quarter of LFG events exhibiting end frequencies below 50 kHz.
Coronal mass ejections, solar radio bursts, type II bursts, solar cycle
DH type II radio bursts during solar cycles 23-25: Origin and association with solar eruptive events
Bhuwan Joshi^1, Binal D. Patel^1, Kyung-Suk Cho^2, and Rok-Soon Kim^2
September 9, 2024
====================================================================================================
§ INTRODUCTION
It is well recognized that powerful CMEs generate shocks in the coronal and interplanetary medium. In the radio dynamic spectrum, these shocks are identified as type II bursts. Depending on the energetics of CMEs, the type II bursts are observed in the metric (m; 30 MHz ≤ f ≤ 300 MHz), decametric–hectometric (DH; 300 kHz ≤ f ≤ 30 MHz), and kilometric (km; 30 kHz ≤ f ≤ 300 kHz) wavelength domains. The extension or origin of type II radio bursts in the DH domain implies cases of stronger MHD shocks propagating from the inner corona and entering the interplanetary (IP) medium <cit.>. Hence the study of shocks in the DH domain, together with their associated CME-flare events, becomes extremely important, not only to infer the propagation characteristics of CMEs but also to develop their forecasting tools <cit.>. Contextually, CMEs associated with type II radio bursts in the metric and/or DH wavelength domains are termed radio-loud CMEs <cit.>.
In this paper, we present a statistical study of DH type II bursts that occurred during solar cycles 23, 24, and 25. For the present analysis, we have obtained data from the following sources: (1) the Wind/WAVES Type-II Burst Catalogue (<https://cdaw.gsfc.nasa.gov/CME_list/radio/waves_type2.html>), and (2) the Solar and Heliospheric Observatory (SOHO) Large Angle and Spectroscopic Coronagraph (LASCO) CME Catalogue (<https://cdaw.gsfc.nasa.gov/CME_list/>). This work highlights the importance of analysing DH type II bursts in terms
of their end frequencies <cit.>. Notably, the end frequencies of these IP bursts directly relate to the heliocentric distance up to which a shock can survive and, therefore, have implications
in exploring the energetics and propagation characteristics of CMEs.
§ OCCURRENCE AND FREQUENCY-DEPENDENT CHARACTERISTICS
§.§ DH type II bursts during cycles 23-25
A comparison of the occurrence of DH type II radio bursts during solar cycles 23 (January 1996–December 2008), 24 (January 2009–December 2017), and 25 (January 2018–June 2023) shows that cycle 23 produced about twice as many events as cycle 24 (Table <ref>). Solar cycle 25 is still ongoing; however, during the initial ≈5 years considered here, it has exhibited a deficit in DH type II activity compared with the previous two cycles, along with a slow rise. To understand the activity level of DH type II radio bursts with the evolution of the solar cycles, we characterize the type II activity in terms of the total yearly counts and cumulative durations (Figure <ref>). Both indicators reflect an obvious cyclic behaviour of ≈11 years with some noteworthy differences: the maximum activity period for the two parameters differs for cycle 24, and type II event counts show a steep growth during 2021-2023 while the cumulative durations indicate a slow and gradual rise. To have a clear understanding of the observed DH type II events in terms of the frequency at which the bursts originated or were first detected, we present a histogram of the starting frequencies in Figure <ref>. Here we clearly notice that the histogram bars representing the limiting frequency ranges of Wind/WAVES (13–14 MHz) and SWAVES (15–16 MHz) contain 45% and 12% of the events, respectively. From these statistics, it is clear that a sizeable population of DH type II events exists as an extension of type II emission starting at metric frequencies.
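The two activity indicators in Figure <ref> can be derived directly from the burst catalog, for example as sketched below; the catalog columns and the toy entries are placeholders, not the actual Wind/WAVES listing.

import pandas as pd

# hypothetical catalog with one row per DH type II burst (toy entries only)
events = pd.DataFrame({
    "start": pd.to_datetime(["1997-04-01 14:00", "1997-11-06 12:20", "2003-10-28 11:10"]),
    "end":   pd.to_datetime(["1997-04-01 15:10", "1997-11-07 08:30", "2003-10-29 00:00"]),
})
events["duration_hr"] = (events["end"] - events["start"]).dt.total_seconds() / 3600.0
year = events["start"].dt.year

yearly_counts = events.groupby(year).size()                    # event counts per year
yearly_durations = events.groupby(year)["duration_hr"].sum()   # cumulative duration per year
print(yearly_counts, yearly_durations, sep="\n")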
§.§ Frequency-dependent categories and their characteristics
The characteristic feature of a type II burst is a slow frequency drift toward lower frequencies, which is physically attributed to the burst driver moving outward from the solar atmosphere to larger heliocentric distances with ever-decreasing electron densities. This frequency–height relationship of the type II burst is illustrated using the atmospheric density model of <cit.> in Figure <ref>. Contextually, we note that the height estimations from the atmospheric density model provide only a “coarse” estimate, as the density of the medium through which the shock is propagating evolves with the spatially and temporally varying coronal and interplanetary conditions. Thus, rationally, we consider a multiplier (or suitable “fold”) to the basic atmospheric density model.
In Figure <ref>a, we draw the frequency-height relation for 1-, 2-, 3-, and 10-fold Leblanc models to incorporate density variations of up to an order of magnitude. The frequency–height relationship suggests that the observing window of 16 MHz–20 kHz essentially represents a large heliocentric distance range from ≈2R_⊙ to 1 AU. Therefore, to explore the characteristics of DH type II radio bursts and associated solar eruptive events, we divide this vast frequency range into three groups: Low-Frequency Group (LFG; 20 kHz ≤ f ≤ 200 kHz), Medium-Frequency Group (MFG; 200 kHz ≤ f ≤ 1 MHz), and High-Frequency Group (HFG; 1 MHz ≤ f ≤ 16 MHz). Figure <ref>a readily shows that our end-frequency classification essentially provides a quantitative estimate of the distance up to which a shock can survive. We further note that HFG and MFG events represent shocks terminating within the lower and upper coronal heights (9.7R_⊙ and 44.9R_⊙, as per the three-fold Leblanc model), respectively. On the other hand, the events under the LFG group represent shocks travelling in the interplanetary medium (beyond ≈45R_⊙).
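For reference, the sketch below evaluates this frequency–height relation numerically. It uses the Leblanc et al. (1998) electron-density profile (coefficients quoted from that paper and consistent with the ≈9.7R_⊙ and ≈44.9R_⊙ values above, but treat them as an assumption to be checked against the original), an n-fold multiplier, and the fundamental plasma frequency, and it classifies an end frequency into the HFG/MFG/LFG groups.

import numpy as np

R_SUN_PER_AU = 215.0

def leblanc_density(r_rsun, fold=1.0):
    """Electron density [cm^-3] at r [R_sun]: n-fold Leblanc, Dulk & Bougeret (1998) profile."""
    r = np.asarray(r_rsun, dtype=float)
    return fold * (3.3e5 * r**-2 + 4.1e6 * r**-4 + 8.0e7 * r**-6)

def plasma_frequency_khz(n_e):
    """Fundamental plasma frequency [kHz] ~ 8.98 * sqrt(n_e [cm^-3])."""
    return 8.98 * np.sqrt(n_e)

def shock_distance_rsun(f_khz, fold=1.0):
    """Heliocentric distance at which the local plasma frequency equals f_khz."""
    r_grid = np.logspace(np.log10(1.1), np.log10(R_SUN_PER_AU), 20000)   # 1.1 R_sun .. 1 AU
    f_grid = plasma_frequency_khz(leblanc_density(r_grid, fold))
    return float(np.interp(-f_khz, -f_grid, r_grid))   # f decreases with r, so flip the sign

def end_frequency_group(f_khz):
    if f_khz <= 200.0:
        return "LFG"        # 20 kHz <= f <= 200 kHz
    if f_khz <= 1000.0:
        return "MFG"        # 200 kHz < f <= 1 MHz
    return "HFG"            # 1 MHz < f <= 16 MHz

for f in (16000.0, 1000.0, 200.0, 50.0):    # end frequencies in kHz
    print(f, end_frequency_group(f), round(shock_distance_rsun(f, fold=3.0), 1), "R_sun")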
In Table <ref>, we list the counts of LFG, MFG, and HFG events for solar cycles 23, 24, and 25. The same is pictorially shown in a pie chart in Figure <ref>b. In Figure <ref>a, we present the histogram of ending frequencies for DH type II events, which reveals that 49% of the events lie below the 1 MHz frequency bin. We further show the distribution of ending frequencies from 1 MHz down to 20 kHz in Figure <ref>b to explore the occurrence of LFG and MFG events exclusively. In Table <ref>, we specifically give the number of type II bursts that end at frequencies below 50 kHz, as this frequency roughly corresponds to a heliocentric distance of 0.5 AU (see Figure <ref>a); we find cycle 24 to be highly deficient in comparison to cycle 23 (12% versus 37%), while the rise phase of cycle 25 completely lacks such events.
§ ASSOCIATION WITH CMES AND FLARES
In Figure <ref>, we present the distribution of linear speeds of CMEs within the LASCO field-of-view associated with DH type II radio bursts for the HFG, MFG, and LFG categories. The histograms clearly reveal a large difference of ≈570 km s^-1 between the mean CME speeds for the two limiting cases, i.e., the LFG and HFG categories. Further, as expected, the mean speed for MFG events lies between those of the LFG and HFG classes. To investigate whether the CME speeds for the HFG, MFG, and LFG categories belong to the same distribution or not (i.e., if the difference in mean speeds of CMEs for the three successive categories is statistically significant), we perform the two-sample Kolmogorov–Smirnov test <cit.>, and the results of the test are presented in Table <ref>.
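The two-sample K–S comparison can be reproduced with scipy as sketched below; the speed arrays are placeholders standing in for the measured LASCO linear speeds of the HFG, MFG, and LFG samples.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# placeholder speed samples [km/s]; replace with the measured LASCO linear speeds per group
v_hfg = rng.normal(1000.0, 400.0, 250).clip(min=100.0)
v_mfg = rng.normal(1250.0, 450.0, 150).clip(min=100.0)
v_lfg = rng.normal(1570.0, 500.0, 130).clip(min=100.0)

for name, a, b in [("HFG vs MFG", v_hfg, v_mfg),
                   ("MFG vs LFG", v_mfg, v_lfg),
                   ("HFG vs LFG", v_hfg, v_lfg)]:
    stat, p = ks_2samp(a, b)
    print(f"{name}: D = {stat:.3f}, p = {p:.2e}")   # small p => the speed distributions differ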
In Figure <ref>, we present the histogram showing the association of flares of different GOES classes (viz. B, C, M, and X) with DH type II radio bursts. We find that, for all the categories, the largest number of events is associated with M-class flares (≈55%). We further find that the fraction of occurrence of X-class flares roughly triples as we go from the HFG to the LFG category (12%, 23%, and 36%, respectively).
§ CONCLUSIONS
The present study makes a detailed survey of DH type II radio bursts that occurred during solar cycles 23-25, involving a total of 590 DH type II bursts from January 1996 to June 2023, with 568 and 462 confirmed cases of associated CMEs and flares, respectively. In the following, we summarize the main results of the paper:
* The occurrence rate of DH type II bursts follows a periodic behaviour with the phases of the solar cycle. However, subtle but important differences are seen when the data is examined in terms of event counts and event durations. This result has important implications for understanding the origin of solar activity and its coronal and heliospheric consequences.
* Our approach is to classify radio-loud CMEs based on the end-frequencies of the corresponding DH type II radio bursts. Accordingly, three categories of CMEs have been considered, viz. Low-Frequency Group (LFG; 20 kHz ≤ f ≤ 200 kHz), Medium-Frequency Group (MFG; 200 kHz ≤ f ≤ 1 MHz), and High-Frequency Group (HFG; 1 MHz ≤ f ≤ 16 MHz). This end-frequency classification plausibly describes the heliocentric distance up to which a shock can survive.
* The HFG category is the most populous, occupying almost half of the events of the entire sample (≈47%). The LFG category (f ≤ 200 kHz), the most important from a space weather perspective, occupies a quarter of the events (≈24%).
* Within the LFG class, the ≈25% of events exhibiting end frequencies below 50 kHz are of special interest, as this frequency corresponds to a heliocentric distance of ≈0.5 AU. Notably, cycle 24 drastically lacks such events in comparison to cycle 23 (12% versus 37%). In cycle 25, we could not find any events of this category within the observing period (January 2018–June 2023).
* The LFG events show a clear inclination toward the fastest CMEs and the largest (X-class) flares.
In summary, the present study not only provides insights into the statistical properties of DH type II radio bursts during cycles 23-25 but also widens our understanding of the conditions related to their origin in the near-Sun region and their survival in the corona and interplanetary medium. Our results indicate that the three-element classification of radio-loud CMEs – HFG, MFG, and LFG – has important implications. In particular, the HFG events imply propagation of the shock inside the complex magnetic environment and within the faster solar wind conditions of the low corona, whereas the events of the LFG category, representing shocks travelling from the corona to the interplanetary medium, are rather intriguing for understanding space weather phenomena.
The research work at the Physical Research Laboratory (PRL) is funded by the Department of Space, Government of India. We gratefully acknowledge the WIND/WAVES type II burst catalog and LASCO CME catalog. We further acknowledge the SOHO, STEREO, GOES, and Wind missions for their open data policy. BJ thanks the organizers for providing the local hospitality.
Gopalswamy, N., Aguilar-Rodriguez, E., Yashiro, S., et al. 2005, Journal of Geophysical Research (Space Physics), 110, A12S07, doi:10.1029/2005JA011158
Gopalswamy, N., Yashiro, S., Kaiser, M. L., Howard, R. A., & Bougeret, J. L. 2001, J. Geophys. Res., 106, 29219, doi:10.1029/2001JA000234
Gopalswamy, N., Yashiro, S., Xie, H., et al. 2008, Astrophys. J., 674, 560, doi:10.1086/524765
Joshi, B., Ibrahim, M. S., Shanmugaraju, A., & Chakrabarty, D. 2018, Solar Phys., 293, 107, doi:10.1007/s11207-018-1325-2
Leblanc, Y., Dulk, G. A., & Bougeret, J.-L. 1998, Solar Phys., 183, 165, doi:10.1023/A:1005049730506
Michalek, G., Gopalswamy, N., & Xie, H. 2007, Solar Phys., 246, 409, doi:10.1007/s11207-007-9062-y
Patel, B. D., Joshi, B., Cho, K.-S., & Kim, R.-S. 2021, Solar Phys., 296, 142, doi:10.1007/s11207-021-01890-6
Patel, B. D., Joshi, B., Cho, K.-S., Kim, R.-S., & Moon, Y.-J. 2022, Solar Phys., 297, 139, doi:10.1007/s11207-022-02073-7
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in FORTRAN: The Art of Scientific Computing
Reiner, M. J., Kaiser, M. L., & Bougeret, J. L. 2007, Astrophys. J., 663, 1369, doi:10.1086/518683
Syed Ibrahim, M., Joshi, B., Cho, K. S., Kim, R. S., & Moon, Y. J. 2019, Solar Phys., 294, 54, doi:10.1007/s11207-019-1443-5
Two or three things I know about tree transducers
Lê Thành Dũng Nguyên
===============================================================================================
§ ABSTRACT
You might know that the name tree transducers refers to various kinds of automata that compute functions on ranked trees, i.e. terms over a first-order signature. But have you ever wondered about:
* How to remember what a macro tree transducer does?
* Or what are the connections between top-down tree(-to-string) transducers, multi bottom-up tree(-to-string) transducers, tree-walking transducers, (invisible) pebble tree transducers, monadic second-order transductions, unfoldings of rooted directed acyclic graphs (i.e. term graphs) — and what happens when the functions that they compute are composed?
The answers may be found in old papers (mostly coauthored by Engelfriet), but
maybe you can save some time by first looking at this short note.
§ INTRODUCTION
This relatively short note exists for two reasons.
First, I was frustrated at having to repeat myself in multiple papers involving
tree transducers in some way, such as <cit.>, to justify claims that
follow more-or-less immediately from connecting the dots in the
literature. This is why I wanted to record a bunch of useful facts in
<Ref>. Some parts of that section are also more pedagogical
in purpose.
As for <Ref>, it is meant to spread the word about a
moral rather than technical point, namely the bottom-up
perspective on deterministic macro tree transducers, to dispel the latter's
reputation for being overly complicated. From the reactions I observed at the
Dagstuhl Seminar on Regular Transformations in May 2023 <cit.>, it
seems that many people working on string transducers nowadays find this
perspective enlightening, due to its close proximity to the well-known streaming
string transducer <cit.>. In fact, macro tree transducers are
already presented this way in Courcelle and Engelfriet's
book <cit.>; but I hope that my alternative
exposition will prove helpful for some people. Another point raised is:
when is a tree-to-string transducer not quite the same thing as a tree
transducer that outputs a string? Finally, <Ref> is a
niche observation about something I found perplexing at first.
§ TOP-DOWN STATES VS BOTTOM-UP REGISTERS
§.§ Deterministic top-down tree transducers are bottom-up
A top-down tree transducer is like a tree automaton, except that the
result of a transition is not just one state per child of the current node, but
a tree expression involving states applied to children. It can also be seen as a
regular tree grammar controlled by an input tree — just like the derivations of
a grammar, the semantics of a top-down tree transducer can be defined by
rewriting.
[conditional swap, inspired by <cit.>]
Consider the function
f(a(t,u)) = a(f(u),f(t)), and f(t)=t when the root of t is not a,
on trees over the ranked alphabet {a:2, b:1, c:0}.
To compute it, we use the initial state q_0, an auxiliary state q_1 and the transitions:
q_0⟨ a(t,u)⟩→ a(q_0⟨ u ⟩, q_0⟨ t ⟩) q_1⟨ a(t,u)⟩→ a(q_1⟨ t ⟩, q_1⟨ u ⟩) q_0⟨ b(t) ⟩→ b(q_1⟨ t ⟩) …
They are deterministic: each pair of a state and a ranked letter appears only once on the left-hand side of a rule.
Over the input a(b(c),c), we have the following executions:
q_0⟨ a(b(c),c) ⟩→ a(q_0⟨ c ⟩, q_0⟨ b(c) ⟩) → a(q_0⟨ c ⟩, b(q_1⟨ c ⟩)) →…→ a(c,b(c))
q_0⟨ b(a(b(c),c)) ⟩→ b(q_1⟨ a(b(c),c) ⟩) →…→ b(a(b(c),c))
By determinism, the rewriting relation → always reaches a unique normal form. In this case, q_0⟨ t ⟩→^* f(t) and q_1⟨ t ⟩→^* t for any input tree t.
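To make the rewriting semantics concrete, here is a minimal Python sketch (not taken from any of the cited papers) that evaluates this transducer by structural recursion. Trees are encoded as nested tuples such as ("a", t, u), ("b", t) and ("c",), and the rules hidden behind the ellipsis above are filled in the obvious way.

def q0(t):
    if t[0] == "a":                      # q_0⟨a(t,u)⟩ → a(q_0⟨u⟩, q_0⟨t⟩)
        _, left, right = t
        return ("a", q0(right), q0(left))
    if t[0] == "b":                      # q_0⟨b(t)⟩ → b(q_1⟨t⟩)
        return ("b", q1(t[1]))
    return ("c",)                        # q_0⟨c⟩ → c

def q1(t):
    if t[0] == "a":                      # q_1⟨a(t,u)⟩ → a(q_1⟨t⟩, q_1⟨u⟩)
        _, left, right = t
        return ("a", q1(left), q1(right))
    if t[0] == "b":                      # q_1⟨b(t)⟩ → b(q_1⟨t⟩)
        return ("b", q1(t[1]))
    return ("c",)                        # q_1⟨c⟩ → c

assert q0(("a", ("b", ("c",)), ("c",))) == ("a", ("c",), ("b", ("c",)))   # a(c, b(c))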
The bottom-up view.
Another informal way to see a top-down tree transducer is that the states are
mutually recursive procedures. A well-established principle in algorithmics is
that a top-down structurally recursive procedure can be optimized into a
bottom-up dynamic programming algorithm. We can also adopt this point
of view here; as we shall see in later subsections, it brings us closer in
spirit to more recent work on transducers (cf. <ref>).
The previous example of top-down tree transducer can be seen as a bottom-up
device whose memory consists of two tree-valued registers
X_0,X_1. After processing a subtree t, the contents of the register
X_i is the normal form of q_i⟨ t ⟩ (for i∈{0,1}). The final
output register X_0 corresponds to the initial state q_0.
[Figure: bottom-up evaluation on the input b(a(b(c),c)). After processing the subtree a(b(c),c), the registers hold X_0 = a(c,b(c)) and X_1 = a(b(c),c); after processing the root b, X_0 = X_1 = b(a(b(c),c)).]
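The same function can be computed in a single bottom-up pass. The following sketch (again just an illustration, using the tuple encoding of trees from before) returns the pair of registers (X_0, X_1) for each subtree.

def bottom_up(t):
    """Returns (X_0, X_1): the q_0-value and the q_1-value of the subtree t."""
    if t[0] == "c":
        return ("c",), ("c",)
    if t[0] == "b":
        x0, x1 = bottom_up(t[1])
        return ("b", x1), ("b", x1)          # both registers wrap the child's X_1
    _, left, right = t                       # t = a(left, right)
    l0, l1 = bottom_up(left)
    r0, r1 = bottom_up(right)
    return ("a", r0, l0), ("a", l1, r1)      # X_0 swaps the children, X_1 does not

x0, x1 = bottom_up(("b", ("a", ("b", ("c",)), ("c",))))
assert x0 == x1 == ("b", ("a", ("b", ("c",)), ("c",)))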
Beware of nondeterminism.
Let's consider the following nondeterministic rules:
q⟨ b(t)⟩→ a(q⟨ t⟩,q⟨ t⟩) q⟨ c⟩→ c q⟨ c⟩→ b(c)
By taking the associated rewriting system, we have:
q⟨ b(c)⟩→^* a(c,c) or a(b(c),c) or a(c,b(c)) or a(b(c),b(c))
However, if we work bottom-up, here is what happens. After reading c, the register contains either c or b(c); this is chosen nondeterministically. Then, at the next step, after reading b(c), we get either a(c,c) or a(b(c),b(c)) depending on the earlier choice — but not a(c,b(c)).
Morally, using the terminology of programming languages: top-down nondeterminism is naturally call-by-name, bottom-up nondeterminism is naturally call-by-value (this is related to IO vs OI in automata theory).
From now on, in this article, all automata models under consideration will be deterministic.
§.§ Top-down lookahead = bottom-up states
Transducers are often extended with a feature called regular
lookahead. In general, this notion corresponds in automata theory to being
able to choose a transition based on a regular property of the
future part of the input. Concretely, in the case of top-down tree
transducers <cit.>, the transitions are now of the form
q⟨ a(t_1|r_1,…,t_k|r_k)⟩→some term, e.g. b(a(q'⟨ t_3⟩, c))
where the r_i are some states of an auxiliary deterministic bottom-up tree automaton, called the lookahead automaton. The idea is that this transition can only be applied when the run of the lookahead automaton on the subtree t_i ends in state r_i (for all i). The definition of determinism is adjusted accordingly.
Suppose we want to replace all subtrees of the input of the form b(t) where
t does not contain any b by a(t,t). This can be done with a single state q and
regular lookahead. The lookahead states are r_⊕ if the subtree contains a
b, and r_⊖ otherwise (this can indeed be implemented by a deterministic
bottom-up tree automaton). The transitions include:
q⟨ b(t|r_⊕)⟩→ b(q⟨ t⟩) q⟨ b(t|r_⊖)⟩→ a(q⟨ t⟩,q⟨ t⟩)
From the bottom-up point of view, this regular lookahead just corresponds to
adding states to the transducer! That is, a configuration of a
corresponding bottom-up device now consists not only of register contents, but
also of a finite control state; but the computation is still performed entirely
bottom-up.
[Figure: bottom-up run on the input b(c). After reading the leaf c, the device is in state r_⊖ with X = c; after reading b(c), it is in state r_⊕ with X = a(c,c).]
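As an illustration only (not from the cited papers), here is a small Python sketch of the bottom-up device corresponding to this example: the finite state is a boolean recording whether the subtree contains a b (playing the role of r_⊕ / r_⊖), and the register X holds the transformed subtree.

def run(t):
    """Returns (state, X): state is True iff the subtree contains a b."""
    if t[0] == "c":
        return False, ("c",)
    if t[0] == "a":
        _, left, right = t
        sl, xl = run(left)
        sr, xr = run(right)
        return sl or sr, ("a", xl, xr)
    s, x = run(t[1])                     # t = b(u)
    if s:
        return True, ("b", x)            # u contains a b: keep the node
    return True, ("a", x, x)             # u is b-free: replace b(u) by a(u,u)

assert run(("b", ("c",))) == (True, ("a", ("c",), ("c",)))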
The observations in this subsection and in the previous one are essentially the
meaning of a paper by Fülöp, Kühnemann and Vogler <cit.>:
deterministic top-down tree transducers with regular lookahead =
deterministic multi bottom-up tree transducers.
§.§ Top-down tree-to-string transducers — it's not what you think(?)
One of the major results in transducer theory in the 2010s has been the
decidability of equivalence for top-down tree-to-string
transducers <cit.>, which popularized the Hilbert method
(cf. <cit.>). But what are these devices?
Strings as unary trees.
We start by presenting a — arguably the — natural way to restrict a
tree transducer model to strings as output. But here, when applied to top-down
tree transducers, it does not give us the right thing. It relies on the
following encoding: strings over the finite alphabet Σ are in canonical
bijection with trees over the ranked alphabet with:
* a unary letter for each letter of Σ;
* a 0-ary letter ε that serves as an end-of-string
symbol.
For example, the string abac∈{a,b,c}^* becomes the unary tree
a(b(a(c(ε)))).
The tree-to-string functions that can be computed this way by top-down tree
transducers are very weak, since they can only explore one branch of the input
tree. Note, however, that if the input also consists of strings encoded as unary
trees, then we get the very well-studied model of sequential transducers,
see for instance <cit.>.
The yield operation and concatenable strings.
In fact, in the older literature, it often happens that X
tree-to-string transducer means yield ∘ X tree transducer —
and it is the case for X = top-down. The yield of a tree is
the string obtained by reading its leaves from left to right, and erasing some
letters deemed neutral.
Since
yield(a(t_1,…,t_k)) = yield(t_1) ·…· yield(t_k),
this amounts to working with a data type of concatenable strings for the
output. That is, we can describe top-down tree-to-string transducers directly
(without yield) using transitions whose right-hand side may use string
concatenation (as in context-free grammars).
The following single-state top-down tree-to-string transducer computes the
postfix representation (reverse Polish notation) of its input tree:
q⟨ a(t,u)⟩→ q⟨ t⟩· q⟨ u⟩· a q⟨ b(t)⟩→ q⟨ t⟩· b q⟨ c⟩→ c
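As a quick sketch (with trees as nested tuples and Python strings standing in for the concatenable output strings), this transducer can be evaluated by a single recursive pass:

def postfix(t):
    if t[0] == "a":
        return postfix(t[1]) + postfix(t[2]) + "a"   # q⟨a(t,u)⟩ → q⟨t⟩ · q⟨u⟩ · a
    if t[0] == "b":
        return postfix(t[1]) + "b"                   # q⟨b(t)⟩ → q⟨t⟩ · b
    return "c"                                       # q⟨c⟩ → c

assert postfix(("a", ("b", ("c",)), ("c",))) == "cbca"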
From the bottom-up point of view, this transducer model should be equivalent to
some counterpart to Fülop et al.'s multi bottom-up tree
transducer <cit.> whose registers contain concatenable strings.
Indeed, the multi bottom-up tree-to-string transducer has been studied
— under this precise name — in Courcelle and Engelfriet's
book <cit.>. As before, top-down regular lookahead
corresponds to bottom-up states.
The string-to-string case.
It does not make much sense for an automaton to process its input as
concatenable strings. Thus, to get a sensible string-to-string version of
top-down tree-to-string transducers, we should take unary trees as input, but
still use concatenation to produce output strings. The resulting machine model
is a variant of the copyful streaming string transducer <cit.>
— whose memory contains a finite state and concatenable registers — that reads
its input from right to left, instead of left to right.
For other characterizations of this class of string-to-string functions,
see <cit.>.
Arguably, since concatenation provides the canonical semigroup structure on
strings, the recognition of a regular language by a semigroup morphism
processes its input as concatenable strings. An algebraic characterization of
string-to-string MSO transductions in this vein has been obtained
in <cit.>. Morally, the fact that both input and output are
treated as concatenable strings makes it easy to show closure under
composition.
§.§ Macro tree transducers store tree contexts
Traditionally, macro tree transducers <cit.> (MTTs) are presented as an extension of top-down tree transducers with parameters, whose rules may look like, for example,
q_0⟨ a(t,u)⟩→ q_1⟨ t⟩(b(q_0⟨ u⟩)) q_1⟨ a(t,u)⟩(x) → q_1⟨ u⟩(q_1⟨ u⟩(x)) q_1⟨ c⟩(x) → a(x,x) …
where q_0 has no parameters and q_1 has the parameter x. A possible run of this MTT is
q_0⟨ a(a(b(c),c),c)⟩→ q_1⟨ a(b(c),c)⟩(b(q_0⟨ c⟩)) → q_1⟨ c⟩(q_1⟨ c⟩(b(q_0⟨ c⟩))) →…
In such a run, the parameterized state q_1⟨…⟩ is always applied to an expression that eventually reduces to a tree. But one could also evaluate q_1⟨ t⟩(x) for a formal parameter x, for instance:
q_1⟨ a(c,c)⟩(x) → q_1⟨ c⟩(q_1⟨ c⟩(x)) → q_1⟨ c⟩(a(x,x)) → a(a(x,x),a(x,x))
We may therefore say that the value of q_1⟨ a(c,c)⟩
is this tree with parameters at some of the leaves, usually called a
tree context. It represents the map x ↦ a(a(x,x),a(x,x)).
Thus, the bottom-up view is as follows: deterministic macro tree
transducers can be seen as bottom-up devices whose memory consists of
registers storing tree contexts. This perspective is presented in
Courcelle–Engelfriet <cit.>; it makes the MTT an
ancestor to later transducer models that are explicitly presented as
manipulating context-valued registers, such as the streaming tree transducer
of <cit.> and the register tree transducer of <cit.>.
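Concretely, and only as an illustrative sketch (using the tuple encoding of trees from earlier, with contexts represented as Python functions), the MTT example above can be evaluated bottom-up as follows; the rules for b, hidden behind the ellipsis, are omitted, and a rule q_0⟨c⟩ → c is assumed.

def eval_mtt(t):
    """Returns (value of q_0 : a tree, value of q_1 : a context, i.e. a function)."""
    if t[0] == "c":
        q0 = ("c",)                          # assumed rule q_0⟨c⟩ → c
        q1 = lambda x: ("a", x, x)           # q_1⟨c⟩(x) → a(x,x)
        return q0, q1
    _, left, right = t                       # t = a(left, right)
    l0, l1 = eval_mtt(left)
    r0, r1 = eval_mtt(right)
    q0 = l1(("b", r0))                       # q_0⟨a(t,u)⟩ → q_1⟨t⟩(b(q_0⟨u⟩))
    q1 = lambda x, f=r1: f(f(x))             # q_1⟨a(t,u)⟩(x) → q_1⟨u⟩(q_1⟨u⟩(x))
    return q0, q1

_, ctx = eval_mtt(("a", ("c",), ("c",)))
assert ctx("x") == ("a", ("a", "x", "x"), ("a", "x", "x"))   # the map x ↦ a(a(x,x),a(x,x))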
Lookahead elimination.
As in the case of top-down tree(-to-string) transducers, regular lookahead
corresponds to bottom-up states in macro tree transducers. That said, the
significance of the feature differs between these two cases. For deterministic
top-down tree transducers, deciding whether a given transducer with lookahead
computes a function that could be computed without lookahead is still an open
problem, see <cit.> for partial results. For deterministic MTTs,
the answer is always yes:
For any deterministic MTT with regular lookahead, there is another
deterministic MTT without lookahead that computes the same function.
To see why this is true, first, it is important to note that MTTs allow states
of arbitrary arities, i.e. with an arbitrary number of parameters. For instance
one may have a ternary state q_2 with a rule such as
q_2⟨ b(t)⟩(x_1,x_2,x_3) → a(q_1⟨ t⟩(x_3), x_1).
We can then encode the bottom-up lookahead states, ranging over a
finite set R = {r_1,…,r_n}, in the value of an n-ary state, i.e. a bottom-up register storing an n-ary context. The idea — analogous to the
Church encoding of finite sets in λ-calculus — is to represent the
lookahead state r_i by the context (x_1,…,x_n) ↦ x_i.
§.§ MTTs and string outputs — it is and isn't what you think
Following <Ref>, there are two ways to use macro tree
transducers to output strings.
Unary trees.
Let us first look into the encoding of output strings as unary trees. The key
remark is that concatenable strings can be represented by contexts, with
concatenation implemented by composition:
ab · ac = abac ⇝ (x↦ a(b(x))) ∘ (x↦ a(c(x))) = (x↦ a(b(a(c(x)))))
Conversely, over a ranked alphabet of unary letters plus an end-of-string marker
ε, any context with n parameters x_1,…,x_n must have the
form
a_1(a_2(… (a_n(z)) … )), where z is either x_i for some i∈{1,…,n} or ε; such a context can thus be represented by the string a_1… a_n together with the finite datum z
Therefore, we have:
Macro tree transducers with unary trees as output are equivalent in expressive
power to top-down tree-to-string transducers with regular lookahead.
According to the end of <Ref>, this means that macro tree
transducers whose inputs and outputs both consist of unary trees are the same as
right-to-left copyful streaming string transducers (SSTs). Note that states can
be eliminated in copyful SSTs <cit.>, but for reasons
that crucially depend on the input being a string; these reasons are unrelated
to the elimination of lookaheads in MTTs sketched earlier.
Right-to-left copyless SSTs <cit.> would then correspond to
strongly single-use MTTs <cit.> with regular lookahead whose
inputs and outputs are unary trees, with a subtlety: in MTTs, the output is
taken to be the final value of some fixed state/register, whereas in SSTs, the
output depends on the final register values in a less restrictive way. This
makes a difference as the function below can be computed by a right-to-left
copyless SST (exercise for the reader), but not by a strongly single-use
MTT <cit.>:
a^n ↦ a^n and a^nbw ↦ a^nbb^|w| for w ∈{a,b}^*
Yield.
Post-composing MTTs with the yield operation results in the so-called
macro tree-to-string transducers, which manipulate string
contexts such as (x_1,x_2) ↦ x_2 ab x_1 ac x_2 b. Morally,
these transducers are to macro grammars <cit.> what top-down
tree-to-string transducers are to context-free grammars; this is where the name
comes from. Several equivalent characterizations of macro tree-to-string
transducers may be found in <cit.>. We shall discuss
them further in <Ref>.
§ RELATIONSHIPS BETWEEN DETERMINISTIC TRANSDUCTION CLASSES
§.§ Monadic second-order transductions with sharing / unfolding
Monadic Second-Order logic (MSO) is a logic over relational structures
(sets endowed with relations). For instance one can encode a string w as a
relational structure by taking its set of positions {1,…,|w|} endowed
with the total order ⩽ and, for each letter a, a unary relation
a(i) meaning "the i-th letter is a". Over strings (resp. trees), the
properties that MSO can define correspond exactly to the regular languages
(resp. regular tree languages). Thus, MSO provides a canonical generalisation
of regular languages to structures beyond strings and trees; typically, graphs,
as discussed in Courcelle and Engelfriet's book <cit.>.
MSO transductions, also covered in <cit.>, are the usual
way to define transformations of relational structures using MSO. (One may also
consider MSO interpretations <cit.> or MSO set
interpretations <cit.> but they are not always as well-behaved,
especially over structures other than strings and trees.) String-to-string MSO
transductions are also called regular functions and have a rich theory,
see the survey <cit.>. Tree-to-tree MSO transductions are also
well-studied, with several equivalent characterizations by machine
models <cit.>. For instance a major theorem of
Engelfriet and Maneth <cit.> is that the tree-to-tree functions
definable by MSO transductions are exactly those computed by macro tree
transducers of linear growth, i.e. output size = O(input
size); see <Ref> for a generalisation. Thus, most examples of
tree-to-tree functions seen in the previous section are MSO transductions.
However, one can easily define top-down tree transducers with non-linear growth:
The following transducer
q_0⟨ S(t)⟩→ a(q_1⟨ t⟩,q_0⟨ t⟩) q_0⟨ 0⟩→ c q_1⟨ S(t)⟩→ b(q_1⟨ t⟩) q_1⟨ 0⟩→ b(c)
computes the function
S^n(0) ↦ a(b^n(c),a(b^n-1(c), … a(b(c),c)…)) with quadratic
growth.
The thing to note about this example is that the output contains many repeated
subtrees. For instance, for S(S(0)) ↦ a(b(b(c)),a(b(c),c)), the
subtree b(c) is repeated twice; and this is because in the computation,
q_1⟨ 0⟩ appears at two different places. One may compress the output
tree into a shared representation — a rooted directed acyclic
graph (DAG) — where all copies of each subtree produced by the same
q⟨ t⟩ are merged together:
[Figure: the input tree S(S(0)); its output represented as a rooted DAG in which the two copies of the subtree b(c) are shared; and the unfolded output tree a(b(b(c)),a(b(c),c)).]
(Here we have also merged the two c leaves coming from q_0⟨ 0⟩ and q_1⟨ 0⟩.)
In this factorisation, the first tree-to-DAG map is definable as an MSO
transduction. The second operation, that recovers an output tree from a DAG, is
called unfolding. In general, all functions computed by top-down tree
transducers belong in the function class
unfolding ∘ (MSO tree-to-DAG transductions)
which is called
MSO transductions with sharing (MSOTS) in <cit.> — we use this
name here — and MSO term graph transductions
in <cit.>.
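A sketch of how sharing arises in practice: memoising the value of each pair (state, input node) yields the term graph directly, while the fully unfolded output tree is quadratically larger. The code below is an illustration only (indexing the input nodes of S^n(0) by their depth, and using Python's lru_cache for the memoisation).

from functools import lru_cache

def shared_output(n):
    @lru_cache(maxsize=None)                 # one node per (state, input subtree) pair
    def q(state, k):                         # k: the input subtree S^k(0)
        if k == 0:
            return ("c",) if state == 0 else ("b", ("c",))
        if state == 0:
            return ("a", q(1, k - 1), q(0, k - 1))   # q_0⟨S(t)⟩ → a(q_1⟨t⟩, q_0⟨t⟩)
        return ("b", q(1, k - 1))                    # q_1⟨S(t)⟩ → b(q_1⟨t⟩)
    return q(0, n)

def tree_size(t):
    return 1 + sum(tree_size(child) for child in t[1:])

assert shared_output(2) == ("a", ("b", ("b", ("c",))), ("a", ("b", ("c",)), ("c",)))
# tree_size(shared_output(n)) grows quadratically in n, but the memoised
# recursion only ever builds O(n) distinct subterms: this is the shared DAG.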
§.§ Attributed vs tree-walking transducers vs MSOTS
Let us discuss characterisations of MSOTS by machines. There is a recent one
based on transducers using almost linear λ-terms
in <cit.>. A more classical one relies on
attributed tree transducers (ATTs) — which come from Knuth's attribute
grammars <cit.> — and on MSO relabelings, i.e. MSO tree
transductions that keep the structure of the tree intact, and merely change the
label of each node.
ATT∘MSO relabeling = MSOTS.
(When we write this kind of equality, both sides of = are understood to refer to function classes, and thus the equality denotes an equivalence in expressive power.)
As explained in <cit.>, the ATT and the
deterministic tree-walking transducer (TWT) – which appears under a
different name in <cit.>, cf. <Ref> – are
essentially notational variations of the same formalism
(quoting <cit.>). Tree-walking transducers are like
top-down tree transducers, except that transitions do not always go
downwards in the input tree: q⟨some node v in
the input⟩ may be rewritten into an expression that contains
q'⟨the parent node of v⟩. Arguably, TWTs are the natural
generalisation to trees of two-way transducers on strings.
The correspondence between the ATT and the TWT relates the former's
attributes to the latter's states, in an analogous fashion
to the bottom-up registers / top-down states correspondence of
<Ref>. There are subtleties related to noncircularity
which make the ATT slightly weaker, but they become irrelevant in presence of
preprocessing by MSO relabeling.
In <cit.>, TWTs are called 0-pebble tree
transducers, as the case k=0 of the k-pebble tree transducers
introduced in <cit.> (the latter have seen a resurgence in recent years
in the study of string-to-string polyregular functions <cit.>). The
terminology tree-walking was only later used to speak of 0-pebble
transducers, starting with <cit.>, but it was already in use for
the tree-to-string version of these devices since the late 1960s <cit.>
— there, the output strings are produced as if they were unary trees. Let us
also mention the RT(Tree-walk) transducers
of <cit.> from the 1980s, discussed
in <cit.>.
When a TWT takes as input a tree that has been preprocessed by an MSO
relabeling, it has access to some extra information on each node; this finite
information recorded by the relabeling is necessarily a regular property of the
input tree with this node distinguished. Equivalently, one could replace the
preprocessing by adding to the TWT the ability to query on the fly the
relevant regular properties of the current node. This feature is usually called
regular lookaround or MSO tests, see e.g. the introduction
to <cit.>. Hence:
MSOTS = TWT∘MSO relabeling = TWT with regular lookaround a.k.a. MSO tests
However, in the absence of regular lookaround, TWTs are much weaker, since:
Nondeterministic tree-walking automata do not recognise all regular tree
languages <cit.>, and deterministic TWA are even
weaker <cit.>.
Finally, one can impose a single use restriction on both attributed
and (deterministic) tree-walking transducers to avoid sharing phenomena, leading
to characterisations of tree-to-tree MSO transductions (MSOT):
MSOT = single-use ATT∘MSO relabeling <cit.>
= single-use TWT with MSO tests (regular lookaround) <cit.>
§.§ Composition hierarchies: MSOTS (& invisible pebbles / Caucal hierarchy) vs MTT (& iterated pushdowns / safe higher-order)
Engelfriet, Hoogeboom and Samwel have characterised the composition of
two deterministic tree-walking transducers with MSO tests using a machine
model: the invisible pebble tree transducer
(IPTT) <cit.>, a variant of the k-pebble tree transducer of
<Ref>. According to the previous subsection, we may equivalently
state their result as:
MSOTS^2 := MSOTS∘MSOTS = IPTT.
The invisible pebble tree transducer manipulates an unbounded stack of
pebbles, which are pointers to nodes of the input tree. Similarly, the
memory of the pushdown tree transducer <cit.>
consists mainly of a stack of pointers to input nodes; the difference is that
the reading head of an IPTT moves like that of a TWT, whereas the pointers in a
pushdown tree transducer can only move top-down. This suggests that
MTT⊂IPTT which is indeed
true <cit.>.
In fact, we may also look at the composition of k MSOTS beyond the case k=2,
as well as the composition of several MTTs. It turns out that the two
hierarchies are interleaved:
∀ k⩾1, MSOTS^k⊂MTT^k⊂MSOTS^k+1.
In particular, ⋃_k⩾1MTT^k = ⋃_k⩾1MSOTS^k.
This is because:
* MTT = MSOTS∘top-down tree transducer <cit.>;
* MSOTS = top-down tree transducer∘MSOTS <cit.>.
(beware: the order of composition is reversed compared to the usual in <cit.>).
Let us mention that the main theorem of the above-cited paper <cit.> is:
The functions (cf. <ref>) in the MTT^k
hierarchy (equivalently, in the MSOTS^k hierarchy) of linear
growth are exactly the tree-to-tree MSO transductions.
Engelfriet and Vogler have shown that the class MTT^k (for some fixed
k⩾1) corresponds to the expressive power of the iterated
pushdown tree transducer <cit.>,
which generalizes the aforementioned pushdown tree transducer by working with a
stack of … of stacks of input pointers, with nesting depth k. (See also the
work of Sénizergues on string-to-string
functions <cit.>.) Also, it seems natural to believe
that:
MSOTS^k+1 = a variant of iterated pushdown transducers whose
input pointers can move in a tree-walking rather than top-down fashion, with
nesting depth k, such that the case k=1 corresponds to the IPTT.
[Idea]
I expect this to be derivable from the
aforementioned <cit.>, plus the general
result <cit.> about adding a pushdown
layer on top of a storage type (see
also <cit.>), but I'm too lazy to spell out and
check the details.
The class MTT^k can also be characterised by the high level
tree transducer <cit.>, which stores some kind of
higher-order functions: the tree contexts in MTTs are first-order, i.e. tree-to-tree functions, second-order functions map first-order functions to
trees, etc.
Connections with higher-order recursion schemes.
Engelfriet and Vogler's high level tree transducer is directly inspired by
Damm's high level grammars <cit.>. As explained for instance
in <cit.>, high level grammars are very close
syntactically — and equivalently expressive — to safe higher-order
grammars. This notion of safety was introduced by Knapik, Niwiński and
Urzyczyn <cit.> in the context of higher-order recursion
schemes; a recursion scheme is a grammar-like specification of a single
infinite tree. Safe recursion schemes of order k describe exactly the same
infinite trees as iterated (a.k.a. higher-order) pushdown tree-generating
automata of nesting depth k <cit.> — a result
that is strikingly similar to the equivalence between MTT^k and
k-iterated pushdown transducers.
Moreover, the same class of infinite trees (for a fixed k⩾0) can be
equivalently described as those obtained by applying a function in
(unfolding ∘ tree-to-graph MSO transduction)^k+1 to a finite tree
— this is the Caucal hierarchy <cit.>. Here the
output of the MSO transductions are rooted directed graphs that are not
necessarily acyclic, and the unfolding operation is generalized to generate
infinite trees when the input has cycles. Since an MSOTS is defined as
unfolding ∘ tree-to-DAG MSOT, the Caucal hierarchy is clearly
analogous to the MSOTS^k+1 tree transducer hierarchy.
This means that the counterpart to the strict inclusion
MTT^k⊂MSOTS^k+1 in the setting of recursion schemes
is an equality. <Ref> suggests a heuristic explanation: since
iterated pushdown tree-generating automata do not have an input, the difference
between top-down pointers and tree-walking pointers to the input disappears.
§.§ The MTT composition hierarchy for tree-to-string functions
For a class of tree-to-tree functions C, let C_str be the subclass of functions that output strings encoded as unary trees. Recall from <Ref> the definition of the yield operation and of the top-down tree-to-string transducer. In that section we explained that
(MTT^1)_str = top-down tree-to-string transducer with regular lookahead (<Ref>)
For the next levels of the MTT^k hierarchy, we also have:
∀ k ⩾ 1, (MTT^k+1)_str = yield ∘ MTT^k.
The case k=1 tells us that (MTT^2)_str = macro tree-to-string transducer (cf. <ref>).
This is established in <cit.> using high level
tree transducers, but it can also be shown by working directly on
compositions of MTTs:
(MTT^k+1)_str = (MTT^1)_str ∘ MTT^1 ∘ MTT^k-1
= yield ∘ (top-down tree transducer) ∘ MTT ∘ MTT^k-1 = yield ∘ MTT^k, using (top-down tree transducer) ∘ MTT = MTT, cf. <cit.>
(beware: again, in <cit.>, the notation ∘ is flipped compared to ours).
Thanks to this, for instance, Engelfriet and Maneth's bridge
theorem <cit.> on output languages, originally stated using yield, can now be seen as relating (MTT^k)_str and
(MTT^k+1)_str for each k⩾1. This rephrasing has proved useful in
(^k+1) for each k⩾1. This rephrasing has proved useful in
the recent work <cit.>.
ADFilter -- A Web Tool for New Physics Searches With Autoencoder-Based Anomaly Detection Using Deep Unsupervised Neural Networks
Sergei V. Chekanov, Wasikul Islam, Rui Zhang, Nicholas Luongo
===============================================================================================
§ INTRODUCTION
Traditionally, searches for new physics involve applying selection cuts to isolate events where sensitivity to Beyond Standard Model (BSM) phenomena is highest. These cuts are typically designed by exploring the kinematics of BSM events or using some general principles of how new physics might modify events produced by Standard Model (SM) processes.
Alternatively, selections can be performed by creating neural networks that "remember" the primary kinematic characteristics of data, which are predominantly governed by SM processes. These events are then removed, leaving only those that cannot be fully reconstructed by such networks. In this case, no prior knowledge of BSM models is required, making this selection method more agnostic to potential new physics. An overview of anomaly detection methods is provided in <cit.>.
The first LHC article utilizing anomaly detection to identify LHC events with five types of reconstructed objects (jets, b-jets, electrons, muons and photons) that are most likely to contain BSM scenarios was recently published by ATLAS <cit.>. In this approach, autoencoders (AEs) are trained on a small fraction of the data to determine the global characteristics of event kinematics. The trained AE is then applied to the rest of the data that need to be tested. By analyzing the loss distributions, or the errors with which the AE reproduces the data, one can select only events with large AE losses. This process identifies outlier events that deviate significantly from the bulk of events, which are dominated by SM processes. The neural network approach is expected to capture full correlations between various inputs, making it superior to methods that rely purely on statistical measures to determine how unique an event is. Once an "anomalous" region of phase space with large loss values for events is determined, it must be further investigated using various techniques. The ATLAS study <cit.> used searches in dijet invariant masses in the anomaly region to detect BSM signals. Additional demonstrations of this method using Monte Carlo (MC) simulations are provided in <cit.>.
From an experimental standpoint, the advantages of such approaches are clear: These model-agnostic selections lead to an improved sensitivity to BSM phenomena. However, if BSM signals are not found, experimental limits should be calculated for events in the outlier region. Such limits should include acceptance corrections, which are difficult to estimate outside the LHC experiments.
Thus, to facilitate comparisons of data in the outlier region with proposed BSM models, a simple public tool is needed that can propagate event records from any arbitrary BSM model and report the acceptances in the anomaly regions as defined by experiments. This approach to anomaly detection requires the wide public availability of trained autoencoders. These AEs, trained on a portion of data (or SM MC simulations), will represent the kinematics of data (or SM processes) and can thus be used to calculate acceptance for any BSM model.
This paper introduces a new online tool called "ADFilter" <cit.>. The purpose of this tool is to process event records from collision events (such as those generated by Monte Carlo simulations) through AEs used in physics research. It calculates the AE loss distribution for collision events,
and then it applies a selection cut, accepting only events within the anomaly region. This functionality enables the evaluation of acceptance corrections, which are calculated as the ratio of events that pass the AE cut to the total number of input events. Therefore, ADFilter is crucial for comparing BSM scenarios with data in anomaly regions.
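For illustration, the acceptance computation amounts to the following few lines (a sketch only; the loss-cut value shown is a placeholder, and the actual cut is analysis specific and must be taken from the corresponding publication):

import numpy as np

def anomaly_acceptance(losses, loss_cut):
    """Fraction of events whose AE loss exceeds the cut defining the anomaly region."""
    losses = np.asarray(losses, dtype=float)
    return float((losses > loss_cut).sum()) / len(losses)

# e.g. acceptance = anomaly_acceptance(log10_losses, loss_cut=-8.5)   # cut value is illustrative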
§ DESCRIPTION OF ADFILTER
The ADFilter webpage <cit.> features the user interface shown in Fig. <ref>. The public version includes the AEs described in <cit.> and <cit.>. The LHC paper <cit.> employs a subset of the Run 2 ATLAS data, while the second paper relies on SM MC simulations for AE training. The autoencoder uses selection criteria based on single-lepton triggers, accepting events with at least one lepton with transverse momentum p_T^l > 60 GeV. This cut effectively reduces the multijet QCD background, which dominates the LHC data.
To start event processing, one needs to select the AE using a drop-down menu, then upload a file with input events. The formats for the input files are discussed in Sect. <ref>. The maximum size of the input file is 150 MB, which is typically sufficient for about 100,000 events in the file format to be discussed later.
After uploading, the ADFilter interface will submit several jobs in order to create:
* A ROOT file with the inputs for AE and a few test histograms. This ROOT file's name ends with the string "rmm.root". It contains basic kinematic distributions, the histogram "cross" with the observed cross section (in pb), the rapidity-mass matrix (RMM) <cit.> for the first 50 events (for demonstration purposes), and the ROOT tree "inputNN", which stores non-zero values for the RMM and their indices.
* A ROOT file with the final result. The name of this ROOT file contains the substring "ADFilter". It includes a histogram called "Loss", representing the numerical value of the reconstruction loss after processing through the AE. This histogram shows the success of the encoder-decoder process in reconstructing the compressed input. The "EventFlow" histogram shows the number of events entering the AE and the number of output events that exceed the "LossCut" value, which is typically defined in the relevant publications. The output ROOT file also includes a set of histograms showing invariant masses before and after the AE, as well as the cross section in the selected anomalous region.
* A text file that contains information about all processing steps. This file has the extension ".log". It can be used to monitor and verify each step of data processing, from the file with the input variables to the final file with the loss distribution. It also prints the selection cuts used for event processing.
All ROOT histograms can be viewed using the online browser and can also be downloaded for offline analysis.
It should be noted that if the input file contains only partons, this tool also applies parton showering and hadronization using the Pythia8 <cit.> generator. This creates an intermediate binary file with the extension "promc", which includes all final-state particles. All technical details on how to use ADFilter are given in the menu called "Documentation."
The web interface of ADFilter is implemented in the PHP language with an external call to a binary program that converts the input data to ROOT files containing the tree with the input for the AE. The autoencoder calls were implemented in Python3. The jsROOT package <cit.> was used to display the output ROOT files online.
§ TECHNICAL DETAILS ON INPUT FILES AND EVENT PROCESSING
The ADFilter web tool accepts a variety of input files containing collision events. These input files must be generated with the same center-of-mass collision energy as the events used for AE training.
The preferred event record should include the four-momentum of jets (including light-flavor jets and jets associated with b-quarks), electrons, muons, photons, and missing transverse energy. The jet definition should match the one used in the AE. An example of the generic input data structure is provided in Appendix <ref>. The data format is a simple TTree called "Ntuple." An additional histogram ("meta") stores metadata with the center-of-mass energy (in GeV) used, whereas another histogram ("cross") contains the generator-level cross section (in pb) and the luminosity (in pb^-1) for the current run.
There are other types of input files that do not include reconstructed jets and leptons. These inputs can either be events with the four-momentum of final-state particles or files containing parton-level information. In the latter case, as previously mentioned, ADFilter propagates the events through the parton shower and hadronization. Then it applies jet reconstruction, isolation criteria for leptons, and the necessary transverse-momentum cuts as specified for the selected AE.
Below, we briefly describe the input files that can be used by ADFilter.
§.§ Delphes input files
The tool accepts ROOT ntuples generated by the Delphes fast simulation <cit.>. However, because the input files can be quite large, Delphes files must be reduced in size ("slimmed"). To facilitate this, a simple tool is provided, as detailed on the "Documentation" page. This tool reduces the Delphes file to the smallest event structure that ADFilter can accept, as illustrated in Appendix <ref>.
It is important to minimize selection cuts on Delphes objects to avoid introducing biases when using the AE. Generally, applying transverse momentum cuts of 30 GeV for all jets, leptons, and photons is the safest approach.
§.§ Truth-level event record
To upload truth-level information with final-state particles, one can use ProMC files <cit.>. These files are based on Google's Protocol Buffers, which are a language-neutral, platform-neutral, and extensible mechanism for serializing structured data. They use "varints" as a way to store and compress integers using a variable number of bytes. Smaller numbers use fewer bytes. This means that low-energy particles (jets, clusters, cells, tracks, etc.) can be represented with fewer bytes since the values needed to represent such particles are smaller compared to those of high-energy particles or other objects.
ProMC files can be created directly from Monte Carlo generators or downloaded from the HepSim repository <cit.>, where about 350 event samples are stored in the ProMC format.
§.§ LHE parton-level files
One can also upload LHE files (Les Houches Event Files) <cit.> directly from the Madgraph generator <cit.>.
The files should be compressed and have the extension *.lhe.gz.
ADFilter applies the parton showering and hadronization. These steps are performed by the Pythia8 <cit.> generator. The output events with final-state particles
are stored in the ProMC file format, which is available for
download.
§.§ Object reconstruction step
Collision events stored in binary ProMC files are processed with the goal of reconstructing jets, b-jets, isolated leptons, and photons. The jet definition and selection cuts are configured to match the objects used during AE training.
Jets, isolated electrons, and muons were reconstructed from stable particles. The jets were constructed with the anti-k_T algorithm <cit.> as implemented in the FastJet package <cit.> with a distance parameter of R=0.4, which is commonly used in the ATLAS experiment. By default, the minimum transverse energy of all jets was 30 GeV in the pseudorapidity range of |η|<2.5, but these values may change depending on the selection used by the AEs.
Leptons are required to be isolated using a cone of size 0.2 in the azimuthal angle and pseudorapidity defined around the true direction of the lepton. All energies of particles inside this cone are summed. A lepton is considered isolated if it carries more than 90% of the cone energy. The SM background processes require simulations of misidentification rates for muons and electrons ("fake rates"). We use a misidentification rate of 0.1% for muons and 1% for electrons. This is implemented by assigning a probability of 10^-3 (10^-2) for a jet to be identified as a muon (electron) using a random number generator. The distributions were obtained for events having at least one isolated lepton with transverse momentum p_T^l>30 GeV and two jets with p_T^j>30 GeV.
§.§ Pre-processing step
The current version of ADFilter includes two autoencoder architectures, as previously mentioned, which require a pre-processing step to create input data for AEs. This step is described in this section. However, it's important to note that ADFilter is not limited to these two architectures; as a web tool, it can implement any autoencoder described in public research papers.
After all objects (jets, leptons and photons)
are available, a C++ program is used to transform kinematic features of each event
to the rapidity–mass matrix (RMM) which is proposed as an input for machine learning <cit.>. The RMM is a square matrix that includes reconstructed final states of jets, b-jets, muons, electrons, photons, and E_T^miss, where E_T^miss is a single object, followed by 10 (b-)jets and 5 electrons, muons, and photons each, in descending order of transverse energy for each particle type.
This matrix contains characteristics of single objects and of pairs of all considered objects; the matrix rows and columns corresponding to undetected objects are filled with zeros.
In particular, the transverse energy of the leading object of each type, or the transverse energy imbalance of each remaining object with respect to the leading one of that type, is placed along the diagonal, where the transverse energy imbalance is defined as the ratio between the difference of the two transverse energies and their sum.
The non-diagonal upper-right values of the matrix are the invariant mass of the corresponding two objects represented by the row and the column; in the case that E_T^miss is involved, the transverse mass is used instead.
All values in the diagonal and above are scaled by 1/√(s) where √(s) is the collision energy.
The non-diagonal lower-left values are h_ij = C· (cosh((y_i-y_j)/2) - 1), where y_i-y_j represents the rapidity difference between the two objects i and j; when j is E_T^miss, h_i = C· (cosh(y_i) - 1) is used instead.
The constant C was set to 0.15 <cit.>.
By construction, all elements of the RMM are defined to be between 0 and 1, and most variables are Lorentz-invariant under boosts along the longitudinal axis.
To reduce biases in the shapes of the jet + Y invariant mass spectra, the nine invariant mass variables are excluded from the RMM. The resulting input dimension is 36^2 - 9 = 1287.
The RMM matrix is then flattened to a one-dimensional input vector before being fed into the AE.
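The following simplified sketch illustrates the construction for a handful of massless objects of a single type; it follows the description above but omits E_T^miss and the multi-type bookkeeping of the full ADFilter implementation, so it should be read as an approximation rather than the exact code.

import numpy as np

def rmm(objects, sqrt_s, C=0.15):
    """objects: list of (eT, y, phi), ordered by decreasing transverse energy."""
    n = len(objects)
    M = np.zeros((n, n))
    eT_lead = objects[0][0]
    for i, (eTi, yi, phii) in enumerate(objects):
        # diagonal: eT of the leading object, imbalance w.r.t. it for the others
        M[i, i] = eTi / sqrt_s if i == 0 else (eT_lead - eTi) / (eT_lead + eTi)
        for j, (eTj, yj, phij) in enumerate(objects):
            if j > i:    # upper right: pair invariant mass, scaled by 1/sqrt(s)
                m2 = 2.0 * eTi * eTj * (np.cosh(yi - yj) - np.cos(phii - phij))
                M[i, j] = np.sqrt(max(m2, 0.0)) / sqrt_s
            elif j < i:  # lower left: h_ij = C (cosh((y_i - y_j)/2) - 1)
                M[i, j] = C * (np.cosh((yi - yj) / 2.0) - 1.0)
    return M

jets = [(250.0, 0.1, 0.0), (120.0, -1.2, 2.5), (45.0, 2.0, -1.0)]   # (eT [GeV], y, phi)
matrix = rmm(jets, sqrt_s=13000.0)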
§.§ Autoencoder step
To demonstrate ADFilter's capabilities, we will now describe the two autoencoders that are currently implemented.
After a ROOT file with the tree containing non-zero values of the RMM are created,
this tree was used as an input for the trained AE. The autoencoder is typically trained on a small
fraction of real collision events or on SM MC simulations, thus the neuron weights represent a snapshot of known physics events.
The training is not covered in this paper since
it is specific to each analysis domain. During the file upload, the user should select the required trained AE; its description is usually given in the corresponding publication.
The AE is implemented using TensorFlow <cit.>.
It comprises two sections, an encoder and a decoder.
The encoder compresses the input to a latent dimensional space, whereas the decoder takes the data in the latent layer and decompresses it back to its original size.
The network architecture for the encoder contains two hidden layers, with 800 and 400 neurons respectively, and a latent layer of 200 neurons.
The decoder reverses the structure of the encoder, using 400 and 800 neurons for the two hidden layers, and 1287 neurons for the output layer.
It compresses the inputs into lower dimensions using an encoder and then tries to reconstruct the original input using a decoder.
The AE architecture was optimized using tests with artificially created “anomalous” events by misidentifying objects in the Monte Carlo simulated events. The architecture reflects object multiplicities, particle identifications, and their kinematic characteristics well.
The Leaky ReLU <cit.> activation function is applied to the output in all hidden and output layers.
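A minimal TensorFlow/Keras sketch of this architecture is shown below; the layer widths and the Leaky ReLU activations follow the description above, while the optimizer and the mean-squared-error reconstruction loss are assumptions rather than details taken from the publications.

import tensorflow as tf
from tensorflow.keras import layers

def build_ae(input_dim=1287, latent_dim=200):
    inputs = layers.Input(shape=(input_dim,))
    x = inputs
    for width in (800, 400, latent_dim, 400, 800):   # encoder, latent layer, decoder
        x = layers.Dense(width)(x)
        x = layers.LeakyReLU()(x)
    outputs = layers.LeakyReLU()(layers.Dense(input_dim)(x))
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")      # assumed training configuration
    return model

ae = build_ae()
# per-event reconstruction loss (e.g. to be histogrammed as log10):
# loss = np.mean((ae.predict(x) - x) ** 2, axis=1)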
The final result of the AE step is a ROOT file with invariant masses before and after applying a selection cut on the loss. The value of the selection cut is predetermined by the previous
publications.
The entire analysis chain, from the input file to the ROOT file with RMM and then the Python code with TensorFlow <cit.>, was implemented using the BASH language. The website framework uses PHP with an external call to the master BASH script that executes the entire processing chain.
The public version of ADFilter uses the AE
described in <cit.> trained on the Run2 LHC data with single-lepton pre-selections.
In addition, the ADFilter menu includes
an AE trained on the SM events as explained in
<cit.>.
ADFilter can also be used as a command-line tool on the LXPLUS computing environment, as described in the menu "Documentation".
§ REAL-LIFE EXAMPLES
The LHC publications focused on exclusion limits typically apply loose pre-selections guided by BSM model expectations. For example, one can apply a transverse momentum cut on leptons p_T^l>60 GeV
in situations where high-momentum leptons are expected for certain BSM scenarios <cit.>.
More sophisticated selections use boosted decision trees or neural networks trained on specific BSM events. While the first approach does not have the best sensitivity to BSM models, the second approach with improved sensitivity is significantly more time consuming to develop.
As discussed in the Introduction, another approach is to evaluate limits after applying AEs to reject the trivial SM background. In this approach, a single trained AE can be used for large variety of BSM scenarios, as long as event triggering is appropriate. To calculate limits after the AE,
experimental data should be available in the anomaly region. Then, to set competitive exclusion limits, the BSM models need to be propagated through the same AE.
This is where ADFilter comes in handy: if the limits in the anomaly region
are available from LHC publications and the HEPData database <cit.>, this tool helps
estimate acceptance corrections for BSM events in order to compare the acceptance-corrected cross section with limits in anomaly regions. The easiest BSM models for testing are those
that create well-defined enhancements in two-body invariant masses. Because the tool reconstructs invariant masses
before and after cuts on the AE loss distribution, the acceptance can be easily estimated.
In this section two examples of how to use ADFilter for two concrete BSM models are given.
This check normally takes 3-5 minutes using the ADFilter website.
§.§ Re-interpretation of sequential-standard model limits
In the first example, we will utilize the publicly available data used to exclude the sequential-standard model (SSM) with the W' → Z' W^± process <cit.>.
The published 95% CL upper limits are available as a function of masses M_X. For the SSM, X represents masses of Z' extracted from the invariant mass of two jets, i.e. M_jj. Events were pre-selected by placing the transverse momentum cut p_T^l>60 GeV on leptons (electrons or muons).
These limits are expressed in terms of σ× B, where σ is a cross section and B is a branching ratio. The limits <cit.> are reproduced as black diamonds in Fig. <ref>(a). The red line shows the SSM model predictions. The original exclusion, reported in <cit.> was at 2 TeV, where black diamonds overlap with the red dashed line.
Experimentally, such limits are corrected by the factors A_sel×ε as a function of the mass M_X, where A_sel is the acceptance introduced by the event and object selection, and ε is an object reconstruction efficiency. Such factors are known and available from publications.
The generic limits as a function of M_X after applying the cut p_T^l>60 GeV and the AE are available <cit.> and tabulated in HEPData <cit.>. These limits were calculated for a resonance width-to-mass ratio of 0.15, which is similar to the SSM studied in <cit.>.
These limits are obtained after the AE selection, therefore, they are expressed in terms of σ× B ×ε× A_sel× A_AE, where A_sel is the traditional acceptance introduced by event/object selections and
A_AE is the acceptance introduced by the AE.
The factor A_sel×ε is common to both <cit.> and <cit.> publications, as long as selection cuts are the same. Therefore, one can multiply the limits <cit.> by the correction factors 1/(A_sel×ε) obtained from <cit.>.
After this correction, one obtains the limits in terms of σ× B × A_AE.
These limits are shown in Fig. <ref>(b) with the blue filled circles. They are substantially lower
than those expressed in terms of σ× B, as the trained AE significantly reduces the rate of SM events (i.e. A_AE<1).
In order to perform exclusions, the BSM model shown in Fig. <ref>(a) needs to be re-interpreted in terms of σ× B × A_AE. To do this, ADFilter can be used to estimate the acceptance A_AE. This can be done by using an MC generator at truth level to produce W' → Z' W^± events for different mass points and uploading the generated LHE files to the ADFilter website. Our calculations estimate that the ADFilter acceptance is 55% for the SSM (on average). Then, the BSM prediction shown in <cit.> should be multiplied by these acceptances as a function of the Z' mass.
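The arithmetic behind this re-interpretation is simple; the sketch below uses placeholder arrays standing in for the HEPData tables and the ADFilter output (none of the numbers are real).

import numpy as np

def reinterpret(limit_sigma_b_eff, a_sel_times_eps, sigma_b_model, a_ae):
    """
    limit_sigma_b_eff : published limits in terms of sigma x B x eps x A_sel x A_AE
    a_sel_times_eps   : published A_sel x eps factors for the same mass points
    sigma_b_model     : model prediction for sigma x B at the same mass points
    a_ae              : ADFilter (autoencoder) acceptance for the model
    Returns the limit and the prediction, both in terms of sigma x B x A_AE.
    """
    limit = np.asarray(limit_sigma_b_eff) / np.asarray(a_sel_times_eps)
    prediction = np.asarray(sigma_b_model) * a_ae
    return limit, prediction, prediction > limit     # True where the model is excluded

limit, pred, excluded = reinterpret(
    [2e-3, 1.2e-3, 8e-4, 6e-4],      # placeholder limits [pb]
    [0.5, 0.5, 0.5, 0.5],            # placeholder A_sel x eps factors
    [3e-3, 1.5e-3, 7e-4, 3e-4],      # placeholder sigma x B predictions [pb]
    0.55,                            # ADFilter acceptance (about 55% for the SSM)
)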
The final result of re-interpretation of the limit is shown in Fig. <ref>(b). After anomaly detection, the mass exclusion moves from 2 TeV to 2.5 TeV,
where the blue filled circles overlap with the brown dotted line representing SSM corrected by the ADFilter acceptance. This demonstrates that the application of the AE can increase the exclusion region by roughly 500 GeV.
It should be pointed out that our discussion assumes that the limits for Gaussian signals with a width of 15% are directly applicable to exclude the SSM. For high-precision studies, the availability of experimental limits for different realistic signal shapes in the anomaly region is highly desirable.
§.§ Re-interpretation of charged Higgs tbH^+ limits
Our second example is related to the charged Higgs tbH^+ process <cit.>.
This process was modeled with MadGraph5_aMC@NLO <cit.> at next-to-leading
order (NLO) in QCD <cit.> based on a two-Higgs-doublet model (2HDM) using a four-flavour scheme implementation.
In this model, tan(β) is the ratio of the vacuum expectation values of the two scalar doublets in the 2HDM.
Figure <ref>(a) reproduces the ATLAS observed limit (shown with black diamond symbols) for the pre-selection p_T^l>60 GeV as a function of the dijet mass and the tbH^+ process cross sections (red dotted line) for tan (β ) = 1.
The limits are shown as a function of M_X, where X represents the H^+ mass derived from the mass of two jets, M_jj.
The tbH^+ process cross section does
not overlap with the observed limits, indicating that the data
do not have sufficient statistical power to exclude this BSM model with tan (β) = 1.
The situation changes drastically when applying ADFilter. Figure <ref>(b)
shows the limit with p_T^l >60 GeV plus the AE selection <cit.> for signals with a width-to-mass ratio of 0.15. These limits were multiplied by the factor 1/(A_sel×ε), reported in <cit.>. To justify this correction,
we assume that the relative width of M_jj for the tbH^+ process is roughly similar to
0.15.
Then, the ADFilter was used to estimate the AE correction for the charged Higgs tbH^+ process created
with MadGraph5_aMC@NLO.
According to the calculated ADFilter acceptance, the observed cross section for the tbH^+ process does not change significantly after applying anomaly detection, since the reported ADFilter acceptance was above 95%.
The large AE acceptance indicates that the tbH^+ events are highly anomalous, compared to the SM events represented by the AE.
This creates a crossing point of the observed limits (shown with blue filled circles) at 1.35 TeV with the cross section from MadGraph5_aMC@NLO after the 95% acceptance correction by ADFilter.
Thus, even though the original ATLAS paper could not exclude the tan(β) = 1 scenario of the tbH^+ process using the loose preselection with p_T^l >60 GeV, the exclusion becomes possible after applying ADFilter.
Note that the improvement in sensitivity is significantly greater than that for the SSM after the AE. The reason for this is due to the fact that the charged Higgs model is characterized by many additional objects (jets, b-jets), which makes such events significantly more complex and thus more "anomalous" than the SSM events.
It should be pointed out that if experimental limits on generic signals after the AE are expressed in terms of σ× B ×ε× A_sel× A_AE, then a competitive exclusion of BSM models can be performed by propagating truth-level event records through ADFilter, which can estimate the factor A_sel× A_AE for BSM models. If BSM events were processed by the Delphes fast simulation, then the value of ε is also known.
Appendix <ref> illustrates this approach using the limits for a Gaussian signal with a width-to-mass ratio of 0.15.
It compares the published limits (before and after the AE) with the BSM models after ADFilter. These examples again demonstrate that the AE improves limits, or even makes the exclusion of BSM models possible.
This discussion cannot be used to draw physics conclusions
on exclusion regions, since such questions require dedicated LHC studies using realistic
signal shapes for limits in the anomaly regions. Nevertheless, these examples confidently demonstrate the potential of ADFilter event selection and how this tool can be used by theorists. To facilitate such studies, broad public access to generic limits with various signal widths in the anomaly regions is highly anticipated.
§ CONCLUSION
This paper introduces an online tool called ADFilter <cit.>. It is designed to process particle collision events using AEs trained on fractions of experimental data or SM Monte Carlo predictions. After processing the input events, the tool provides the loss values for these events and, optionally, the differential cross sections for two-body invariant masses. A higher loss value indicates that the events are more distinct in terms of kinematics compared to those used in training the autoencoder.
By applying a selection cut on the loss value, the tool enables the calculation of acceptance values, which can then be used to establish new exclusion limits based on currently available published limits.
ADFilter is particularly useful for quickly determining whether a given BSM scenario is sufficiently exotic, in terms of event kinematics, object multiplicity, object type, etc., compared to the bulk of LHC events used for autoencoder training. If the scenario yields a large loss value, it suggests that a generic selection based on the proposed approach could be applied to enhance sensitivity to such models, without the labor-intensive design of specific selection cuts. The tool reports the acceptance, which is crucial for comparing BSM models with limits provided by ATLAS publications using autoencoder-based selections.
A straightforward check that theorists can perform is as follows: If ADFilter reports a high acceptance rate (e.g., above 50%) for a new BSM scenario implemented in LHE files from MadGraph, the corresponding events may be considered anomalous. This suggests a substantial opportunity within LHC studies to isolate and investigate these events further. If the LHC lacks dedicated studies for this scenario, ADFilter can effectively be used to rule out the BSM model, assuming that limits in the anomaly regions are published and accessible from HEPData <cit.>. Conversely, if a new proposed BSM model exhibits small loss values and low acceptance in the anomaly region, this would indicate that the model's features closely resemble those of SM kinematics. Consequently, detailed studies of such a model may require significant effort.
To enable effective exclusion of BSM models by the broad HEP community, public access to generic LHC limits with various signal widths in anomaly regions is highly desirable. This will allow the general HEP community to use ADFilter to exclude any arbitrary BSM scenario, provided that these models predict enhancements in invariant masses.
Finally, the tool is easily extendable to incorporate autoencoders from other publications.
§ ACKNOWLEDGMENTS
The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (“Argonne”). Argonne, a U.S.
Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself,
and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works,
distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.
The Department of Energy will provide public access to these results of federally sponsored research in accordance with the
DOE Public Access Plan. <http://energy.gov/downloads/doe-public-access-plan>. Argonne National Laboratory’s work was
funded by the U.S. Department of Energy, Office of High Energy Physics (DOE OHEP) under contract DE-AC02-06CH11357. The Askaryan Calorimeter Experiment was supported by the US DOE OHEP under Award Numbers DE-SC0009937, DE-SC0010504, and DE-AC02-76SF0051. WI and RZ are supported by DE-SC0017647. We gratefully acknowledge the computing resources provided by
the Laboratory Computing Resource Center at Argonne National Laboratory.
Appendix
§ EXAMPLE OF THE INPUT DATA STRUCTURE
A simple ROOT TTree is used to store the data given as input to ADFilter. Below is a simple PyROOT
script that creates 1000 events with jets, b-jets, electrons, muons, photons and missing energy (MET).
The data are stored in the TTree called “Ntuple".
The additional histogram "meta" keeps some metadata, such as the centre-of-mass energy (in GeV).
Run this example as “python example.py", assuming that
“example.py" contains the lines of this code. This produces a ROOT file in which each event contains one electron, one muon, one photon, one jet and one b-jet.
All objects have a transverse energy of 100 GeV.
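A minimal sketch of such a script is given below; the branch names, maximum array sizes and the exact metadata convention are our own guesses and should be adapted to the ntuple layout that ADFilter actually expects.

```python
# Minimal PyROOT sketch (ours): write a TTree "Ntuple" with per-event object arrays
# and a "meta" histogram carrying the centre-of-mass energy in GeV.
import ROOT
from array import array

fout = ROOT.TFile("example.root", "RECREATE")
tree = ROOT.TTree("Ntuple", "Ntuple")

maxn = 10
objects = {}
for name in ["JET", "BJET", "EL", "MU", "PH"]:   # assumed branch prefixes
    n = array("i", [0])
    pt = array("d", maxn * [0.0])
    eta = array("d", maxn * [0.0])
    phi = array("d", maxn * [0.0])
    tree.Branch(name + "_n", n, name + "_n/I")
    tree.Branch(name + "_pt", pt, name + "_pt[" + name + "_n]/D")
    tree.Branch(name + "_eta", eta, name + "_eta[" + name + "_n]/D")
    tree.Branch(name + "_phi", phi, name + "_phi[" + name + "_n]/D")
    objects[name] = (n, pt, eta, phi)

met = array("d", [0.0])
tree.Branch("MET", met, "MET/D")

for _ in range(1000):
    for n, pt, eta, phi in objects.values():
        n[0] = 1          # one object of each type per event
        pt[0] = 100.0     # transverse energy of 100 GeV
        eta[0] = 0.0
        phi[0] = 0.0
    met[0] = 0.0
    tree.Fill()

# Metadata histogram; here bin 1 stores the centre-of-mass energy (assumed convention).
meta = ROOT.TH1D("meta", "metadata", 10, 0, 10)
meta.SetBinContent(1, 13000.0)

fout.Write()
fout.Close()
```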
When using the created file as input, the ADFilter reports log(Loss) = -9.822 (see the histogram “Loss" in the created file “dummy_root_ADFilter.root").
§ ALTERNATIVE REPRESENTATION OF LIMITS
This appendix discusses an alternative representation of BSM exclusions. In this approach, one can use experimental limits obtained using Gaussian signal shapes without corrections by the acceptance A_sel and efficiency
ε. If a BSM model predicts signals in invariant mass distributions
that are roughly compatible with the assumed Gaussian width,
then one can overlay the observed cross section of such models with the Gaussian limits.
Figures <ref> and <ref> show such Gaussian limits. These figures also
demonstrate how the limits can be improved by applying the AE and the ADFilter for the signal BSM models.
For SSM, the limits confidently exclude the mass region below 2.5 TeV.
In the case of the charged Higgs, the AE allows us to exclude tbH^+ with tan(β) = 1.
Note that the excluded mass point is somewhat lower than for Fig. <ref>(b), since the assumption of a Gaussian signal shape with the width σ_G/m_G=0.15 may not be appropriate for the tbH^+ model.
Note that, typically, Gaussian limits can be calculated for fine steps in invariant masses, which is more difficult to achieve for BSM-specific limits based on MC event samples.
The limits shown in Fig. <ref>(b) and <ref>(b) use only a few points for which A_sel and ε exist <cit.>, and the extrapolation between these points may introduce additional discrepancies with respect to the fine-binned limits shown in Figs. <ref> and <ref>.
|
http://arxiv.org/abs/2409.03001v1 | 20240904180014 | Quantum theory at the macroscopic scale | [
"Miguel Gallego",
"Borivoje Dakić"
] | quant-ph | [
"quant-ph"
] |
miguel.gallego.ballester@univie.ac.at
University of Vienna, Faculty of Physics, Vienna Center for Quantum Science and Technology, Boltzmanngasse 5, 1090 Vienna, Austria
University of Vienna, Vienna Doctoral School in Physics, Bolztmanngasse 5, 1090 Vienna, Austria
borivoje.dakic@univie.ac.at
University of Vienna, Faculty of Physics, Vienna Center for Quantum Science and Technology, Boltzmanngasse 5, 1090 Vienna, Austria
Institute for Quantum Optics and Quantum Information (IQOQI),
Austrian Academy of Sciences, Boltzmanngasse 3,
A-1090 Vienna, Austria
§ ABSTRACT
The quantum description of the microscopic world is incompatible with the classical description of the macroscopic world, both mathematically and conceptually. Nevertheless, it is generally accepted that classical mechanics emerges from quantum mechanics in the macroscopic limit. In this letter, we challenge this perspective and demonstrate that the behavior of a macroscopic system can retain all aspects of the quantum formalism, in a way that is robust against decoherence, particle losses and coarse-grained (imprecise) measurements. This departure from the expected classical description of macroscopic systems is not merely mathematical but also conceptual, as we show by the explicit violation of a Bell inequality and a Leggett-Garg inequality.
Quantum theory at the macroscopic scale
Miguel Gallego and Borivoje Dakić
September 9, 2024
§ INTRODUCTION
Quantum mechanics is one of the most successful scientific theories, and it is generally accepted as more fundamental than classical mechanics. However, quantum behavior is not observed at larger scales, where classical physics provides a better description. To explain the macroscopic world we perceive in our everyday life, it is believed that there must exist a quantum-to-classical transition or, in other words, that classical mechanics must somehow emerge from quantum mechanics in the macroscopic limit (somewhat in the spirit of Bohr's correspondence principle <cit.>). The questions of when and how exactly this transition occurs, in spite of being active for almost a hundred years now <cit.>, are still debated today <cit.>. One way to explain the emergence of classicality is to introduce genuine non-quantum effects (thus modifying quantum theory), such as the dynamical <cit.> or gravitationally-induced collapse <cit.> of the wave function. Another standard way is to investigate the quantum-to-classical transition from within quantum theory (which is also the subject here). One of the most famous approaches in this direction is the decoherence mechanism <cit.>, which shows that macroscopic systems, being hard to isolate, lose coherence in their interaction with the environment. Consequently their description becomes effectively classical. Complementary to this approach is the coarse-graining mechanism <cit.>, which shows that outcomes of macroscopic measurements admit a classical description when their resolution is limited even for perfectly isolated systems. While the main focus of the decoherence mechanism is on the dynamics, the coarse-graining approach focuses on the kinematic aspect of the transition to classicality. One way or another, the common conclusion is that quantum effects disappear in the macroscopic limit. Here, we want to challenge this view and ask how much decoherence or coarse-graining is needed to observe classicality? Typically, one takes a “transition” parameter, such as the size of the system: for example, the number of microscopic constituents N of a large system. Then, the standard statement is rather qualitative, positing that for a measurement resolution much greater than √(N), one obtains an effective classical description <cit.>. Nevertheless, the precise mathematical meaning of what “much greater than” means is vague in a concrete experimental situation. Our goal here is to make such statements mathematically precise and to show a well-defined macroscopic scale at which large quantum systems can fully preserve a quantum description, in the sense of the typical ingredients of quantum theory such as the notion of Hilbert space, the Born rule and the superposition principle. Furthermore, we will show that these are not merely mathematical artifacts but genuine quantum phenomena by explicitly showing the violation of Bell <cit.> and Leggett-Garg <cit.> inequalities for such systems.
§ COARSE-GRAINED MEASUREMENTS AND THE QUANTUM-TO-CLASSICAL TRANSITION
The concept of coarse-grained measurement in quantum mechanics appeared already long ago as an attempt to address Born's rule using the relative frequency operator (fuzzy or coarse-grained observables <cit.>). Continuing this line of inquiry, many subsequent works have followed, including discussions related to the weak and strong laws of large numbers <cit.>, as well as the quantum-to-classical correspondence (see <cit.> and references therein). A significant breakthrough in the operational understanding of the emergence of classicality through the coarse-graining mechanism came due to Kofler and Brukner <cit.>. In their works, the authors focus on classicality via the notion of macroscopic realism <cit.> or Bell's local realism <cit.>. Such a framework opens an operational route to study the observability of genuine quantum effects, signaled by the violation of Leggett-Garg <cit.> or Bell <cit.> inequalities. Their main result is that with a sufficient level of coarse-graining (much greater than √(N)) the outcomes of the experiment can be modeled by a classical distribution (at least for finite dimensional systems) <cit.>. In particular, outcomes of successive measurements on a single system satisfy macroscopic realism (i.e. satisfy all Leggett-Garg inequalities) <cit.>, and local measurements on a bipartite system satisfy local realism (i.e. satisfy all Bell inequalities) <cit.>. Similar results can be found in the context of more general (post-quantum) theories in <cit.>. On the other hand, if the level of coarse-graining is just right, namely of the order √(N), then we have the following facts: for independent and identically distributed (IID) pairs of (finite-dimensional) quantum systems, the coarse-grained quantum correlations satisfy Bell locality <cit.> (an analogous result can be shown for quantum contextuality <cit.>), while non-IID quantum states can exhibit nonlocal correlations as shown in our earlier works <cit.>. Furthermore, these works show that an entire family of (quantum) non-central limit theorems arises in such non-IID scenarios, raising an interesting situation in which, although being coarse-grained, macroscopic quantum systems exhibit quantum phenomena. We formalized this via the idea of macroscopic quantum behavior <cit.>, a property of a system in the macroscopic limit that retains the mathematical structure of quantum theory under the action of decoherence, particle losses, and coarse-graining. We have shown that it is possible to preserve the typical ingredients of the quantum formalism in the macroscopic limit, such as the Born rule, the superposition principle, and the incompatibility of the measurements.
In this letter, we build on our previous work and develop a unified framework for quantum theory at the macroscopic scale, which describes the theory of successive concatenation of measurements in the macroscopic limit. We use the formalism of Kraus operators in the limit Hilbert space, which allows us to show a violation of a Leggett-Garg inequality in the macroscopic limit, which in turn shows the incompatibility of coarse-grained measurements (at the √(N) level) with a macrorealistic description of the large quantum system.
§.§.§ Decoherence vs. coarse-graining
Before proceeding further, we would like to make some remarks on decoherence and its relation to coarse-graining. The decoherence mechanism stresses the role of the dynamical loss of quantum coherence due to the (instantaneous) interaction with the environment, followed by the einselection process of the pointer basis <cit.>. On the other hand, the coarse-graining mechanism focuses on a kinematical aspect of the transition to classicality by measuring collective coarse-grained observables. Nevertheless, if decoherence is understood broadly as a mechanism of “classicalization" due to interaction with the environment, then coarse-graining can be seen as an instance of such a mechanism. Namely, suppose such a process is described by an interaction Hamiltonian of the type H=H_S⊗ H_E, with H_S=∑_i h_i describing the collective Hamiltonian of the large quantum system (here h_i refers to the operators associated to the local, microscopic constituents). In that case, the decoherence model effectively reduces to the measurement model of collective coarse-grained observables. This will be precisely our model of measurement, which we will introduce in the next sections. Therefore, essential aspects of the standard decoherence mechanism are incorporated in our study through coarse-graining of the measurements, and this is the standard argument to draw the parallel between the two approaches to the quantum-to-classical transition <cit.>. Notice that the system interacts collectively with the environment in that case. On the other hand, microscopic constituents can also be independently subjected to a decoherence channel (such as the dephasing or depolarizing channel) <cit.>. We will also include this mechanism in our study and refer to it as local decoherence, to distinguish it from the standard decoherence mechanism (a precise definition will be provided to avoid misunderstandings). Our aim is to show the robustness of quantum phenomena in the macroscopic limit against both mechanisms.
The paper is organized as follows. In Section <ref>, we define the setting under consideration and specify all relevant assumptions. The notion of macroscopic quantum behavior is introduced in Section <ref>, encompassing the quantum properties of systems in the macroscopic limit, with concrete examples provided. In Section <ref>, we further analyze the properties of the system, demonstrating the genuineness of macroscopic quantum behavior through device-independent tests, including explicit violations of Bell and Leggett-Garg inequalities. Finally, we conclude with final remarks and open questions in Section <ref>.
§ SETUP
We consider a macroscopic quantum measurement scenario analogous to the one presented in <cit.> (see Figure <ref>). The setting consists of two parts: a quantum system 𝖲 and a quantum measurement apparatus 𝖬. In order to model a realistic situation in the absence of perfect control, we assume these satisfy certain assumptions.
§.§.§ Macroscopic system
First, we assume the system 𝖲 satisfies the following conditions:
* Large N. The system is composed of a large number N of identical particles or subsystems with associated Hilbert space 𝔥. We describe the state of the system with a density matrix ρ_N ∈𝒟(𝔥^⊗ N), i.e. a positive, self-adjoint bounded linear operator on 𝔥^⊗ N satisfying Tr ρ_N = 1.
* Local decoherence. The system is subject to independent, single-particle decoherence channels Γ, such as the depolarizing or the dephasing channel. The effective state thus becomes Γ^⊗ N ( ρ_N ).
* Particle losses. Each individual particle has a probability p ∈ (0,1] of reaching the measurement apparatus, while 1-p is the probability of being lost (we assume p > 0 to avoid the trivial case where no particles reach the apparatus). Therefore, in each run of the experiment, only a number M ≤ N of particles reach the measurement apparatus, with probability f_N(M) = \binom{N}{M} p^M (1-p)^{N-M}. Consequently, the state received by the measurement apparatus is of the form Tr_{1 … N-M} ρ_N (where Tr_i is understood as the partial trace over the Hilbert space 𝔥 of the i-th particle).
Given these assumptions, for an initial state of the system ρ_N ∈𝒟(𝔥^⊗ N), the effective state of the system can be written in the Fock-like space ℱ_N = ⊕_M=0^N 𝔥^⊗ M as
Ł_N(ρ_N) = ⊕_M=0^N f_N(M) ∑_π∈𝔖_N Tr_{π(1) …π(N-M)}[ Γ^⊗ N ( ρ_N ) ] /N! ,
where 𝔖_N is the symmetric (permutation) group of N elements.
§.§.§ Coarse-grained measurements
In order to model the macroscopic measurements, we assume the measurement apparatus 𝖬 satisfies the following conditions:
* Collective measurement. The measurement setting of the apparatus is given by a single-particle observable, i.e. a Hermitian bounded operator A ∈ℬ(𝔥). We denote by A |a⟩_A = a |a⟩_A the diagonalization of A, where a ∈ℝ are its eigenvalues and |a⟩_A ∈𝔥 are its eigenstates. We denote by 𝒜⊆ℬ(𝔥) the set of (experimentally) accessible single-particle observables.
* Intensity measurement. Given a measurement setting A ∈𝒜, the measurement apparatus measures the intensity ∑_i a_i, i.e. the sum of individual outcomes. The corresponding observable in Fock-like space is therefore the intensity observable I_N(A) = ⊕_M=1^N ∑_i=1^M A_i, where A_i= 𝕀⊗…⊗ A⊗…⊗𝕀 is the operator that acts with A on the i-th particle and with the identity on the rest.
* Coarse-graining. The measuring scale for the intensity ∑_i a_i has a limited resolution of the order of √(N) (the square-root of the total number of particles), meaning that it cannot distinguish between values that differ by approximately less than √(N).
§.§.§ Measurement model
In order to implement these assumptions explicitly, we follow von Neumann's model of quantum measurements <cit.>. First, we assume that the measurement apparatus 𝖬 couples the system 𝖲 to an auxiliary system 𝖯 called the pointer, initially in a state |Φ⟩∈ L^2(ℝ) centered around zero in the position basis and with a standard deviation of order √(N). For simplicity, we take this state to be a Gaussian with standard deviation σ√(N) for some σ > 0, i.e.
Φ_N(x) = 1/(2 π N σ^2)^1/4 e^-x^2/(4 N σ^2) .
The coupling between the system and the pointer is described by the Hamiltonian H(t) = γ(t) ⊕_M=0^N ∑_i=1^M A_i P, where γ(t) is nonzero only for a short time and satisfies ∫ dt γ(t) =1, and P is the momentum operator of the pointer. After the unitary interaction, the position of the pointer is translated by a distance equal to the value of the system's intensity ∑_i a_i (i.e., an eigenvalue of the intensity operator I_N(A)). Finally, the pointer's position is measured, obtaining a value x ∈ℝ. For a single-particle observable A ∈𝒜 (which represents a measurement setting) and an initial state of the system ρ_N ∈𝒟(𝔥^⊗ N), we set X_N to be the random variable associated to the measured value x (the measurement result on 𝖯). We shall use the symbol “∼" to denote “distributed according to", and as shown in the Appendix <ref>, we have X_N ∼ P_N(x|A) where
P_N(x |A) = Tr[ K_N(x| A) Ł_N(ρ_N) K_N^†(x| A) ].
Here, the Kraus operators K_N(x|A) are given by
⊕_M=0^N ∑_a_1 … a_MΦ_N ( x - ∑_i=1^M a_i ) Π_a_1 | A⊗…⊗Π_a_M | A ,
with Π_a|A=|a⟩_A _A⟨a| being the eigenprojectors of A. These Kraus operators define a positive operator-valued measure (POVM) with elements E_N (x | A) = K_N^† (x|A) K_N (x|A), normalized so that ∫ dx E_N(x|A) = 𝕀. The corresponding (normalized) post-measurement state of the system is given by the standard expression
K_N (x | A ) Ł_N(ρ_N) K_N^†(x | A ) / Tr[ K_N(x | A) Ł_N(ρ_N) K_N^†(x | A) ] .
To summarize, the effective state Ł_N(ρ_N) describes the system 𝖲 under assumptions <ref>, <ref> and <ref>, while the Kraus operators K_N(x|A) describe the measurement apparatus 𝖬 under assumptions
<ref>, <ref> and <ref>.
§.§.§ Macroscopic limit
The macroscopic limit corresponds to the limit of an infinite number of particles, i.e., N→∞. Nevertheless, before proceeding further, let us consider possible scenarios in such a limit. The random variable X_N will generally not converge in distribution as N →∞. To illustrate this, let the initial state of the system be an independent and identically distributed (IID) state ρ_N = ρ^⊗ N for some ρ∈𝒟(𝔥). In this case, the distribution of the intensity ∑_i=1^N a_i does not converge in general, unless one subtracts from it the mean value ⟨∑_i=1^N a_i ⟩ and divides by √(N), just like in the central limit theorem <cit.>. Therefore, to ensure convergence, we take an affine transformation of X_N, namely we consider a family of random variables of the form ł_N X_N + δ_N, and choose the parameters ł_N, δ_N ∈ℝ (independent of the measurement setting and initial state of the system) in a way such that ł_N X_N + δ_N converges in distribution. Once this is fixed, the corresponding probability density function and the Kraus operators given in Equations (<ref>) and (<ref>) transform as
P_N(x|A) ↦ł_N^-1 P_N ( ł_N^-1(x-δ_N) |A ) ,
and
K_N(x|A) ↦ł_N^-1/2 K_N ( ł_N^-1(x-δ_N) | A ) ,
respectively.
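To make the role of the √(N)-level coarse-graining and of this rescaling concrete, the following minimal sketch (our own illustration; the state, setting and numerical values are chosen for convenience and are not taken from the paper) simulates the pointer outcome for the IID state ρ_N = (|0⟩⟨0|)^⊗ N measured with A = σ_x, for which ł_N = 1/√(2N) and δ_N = 0. As expected for an IID state, the rescaled outcome converges to a classical Gaussian.

```python
# Minimal numerical sketch (ours) of the measurement model and affine rescaling
# for the IID state |0><0|^{tensor N} and the setting A = sigma_x.
import numpy as np

rng = np.random.default_rng(0)
N, sigma, shots = 10_000, 0.5, 200_000

# Each qubit gives a_i = +/-1 with probability 1/2, so the intensity sum_i a_i
# equals N minus twice a Binomial(N, 1/2) variable.
intensity = N - 2 * rng.binomial(N, 0.5, size=shots)

# von Neumann pointer: initial Gaussian of standard deviation sigma*sqrt(N),
# displaced by the intensity; its measured position is X_N.
X_N = intensity + sigma * np.sqrt(N) * rng.normal(size=shots)

# Affine rescaling with lambda_N = 1/sqrt(2N) and delta_N = 0
# (since <0|sigma_x|0> = 0 and |<0|sigma_x|1>| = 1).
x = X_N / np.sqrt(2 * N)

# For an IID state the limit is classical: a centred Gaussian of variance (1 + sigma^2)/2.
print(round(x.mean(), 3), round(x.var(), 3), (1 + sigma**2) / 2)
```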
§.§.§ n consecutive measurements
Finally, we are ready to present the most general scenario where a number n of successive measurements are performed (see Figure <ref>). In this case, the system 𝖲, subjected to assumptions <ref> - <ref> as before, goes through measurement apparatuses 𝖬_1, ..., 𝖬_n, each of which satisfies assumptions <ref> - <ref>. For an initial state of the system ρ_N ∈𝒟(𝔥^⊗ N) and a sequence of n single-particle observables (A_1, … , A_n) (n measurement settings), let X⃗_N = (X_N^(1), …, X_N^(n)) be the random vector associated to the measurement outcomes (x_1, …, x_n). Then X⃗_N ∼ P(x_1, …, x_n | A_1, …, A_n), where the distribution P(x_1, …, x_n | A_1, …, A_n) is given by
Tr[ K_N^(n)Ł^(n)( … K_N^(1) Ł^(1)(ρ_N) K_N^(1) †…) K_N^(n) †] .
Here K_N^(j) = K_N(x_j |A_j) and Ł^(j) is the map (<ref>) extended to Fock-like space (defined by loss probability p_j and decoherence channel Γ_j), i.e.,
Ł^(j)( ⊕_M=0^N ρ_M ) = ⊕_M=0^N Ł_M^(j) ( ρ_M) ,
where
Ł_M^(j) (ρ_M) = ⊕_J=0^M f_M^(j)(J) ∑_π∈𝔖_M Tr_{π(1) …π(M-J)} [ Γ_j^⊗ M (ρ_M)]/M!
and f_M^(j)(J) = \binom{M}{J} p_j^J (1-p_j)^{M-J}. As before, the random vector X⃗_N may not converge in distribution; thus, we shall consider an affine transformation of the form ł⃗_N ⊙X⃗_N + δ⃗_N, where ⊙ denotes the entrywise product (e.g. (a,b) ⊙ (c,d) = (ac, bd)), and choose the vectors ł⃗_N and δ⃗_N to facilitate convergence.
§ QUANTUM THEORY AT THE MACROSCOPIC SCALE
We now argue that, in the context of the above scenario, it is possible to define a joint notion of convergence for states and measurements that preserves the complete mathematical structure of quantum theory in the macroscopic limit. Furthermore, we will show how this formalism specifically applies to device-independent quantities such as correlations (both spatial, as in Bell experiments, and temporal, as in Leggett-Garg experiments). In other words, we will show that, in the limit N →∞, states ρ_N ∈𝒟(𝔥^⊗ N) can be mapped to states ρ∈𝒟(ℌ) in some "limit" Hilbert space ℌ and Kraus operators K_N(x|A) can be mapped to Kraus operators K(x|A) acting on ℌ in a way such that the essential ingredients of quantum mechanics, including the Born rule, the superposition principle and the incompatibility of measurements, are retained. To do this, let us introduce the concepts of macroscopic quantum representation and (robust) macroscopic quantum behavior of order n (MQB_n).
Given a closed subspace ℌ_N ⊆𝔥^⊗ N and a subset 𝒜⊆ℬ( 𝔥 ), a macroscopic quantum representation is a limit of the form
( ℌ_N, ρ_N, K_N(x|A) ) →( ℌ, ρ, K(x|A) )
for all ρ_N ∈𝒟(ℌ_N) and for all A ∈𝒜, where
* ℌ is a Hilbert space;
* the limit state ρ∈𝒟(ℌ) is a “linear" function of ρ_N, in the sense that if the pure states |ψ⟩_N and |φ⟩_N are mapped, respectively, to |ψ⟩ and |φ⟩, then the linear combination α|ψ⟩_N + β|φ⟩_N is mapped to the linear combination α|ψ⟩ + β|φ⟩;
* the limit Kraus operators K( x | A) ∈ℬ(ℌ) form a non-compatible set of measurements.
A closed subspace ℌ_N ⊆𝔥^⊗ N and a subset 𝒜⊆ℬ( 𝔥 ) possess MQB_n for some n ∈ℕ if there exists a macroscopic quantum representation
( ℌ_N, ρ_N, K_N(x|A) ) →( ℌ, ρ, K(x|A) )
such that for every state ρ_N ∈𝒟 (ℌ_N) and for every sequence of measurement settings (A_1, …, A_n ) ∈𝒜^× n, the random vector
X⃗_N ∼ Tr[ K_N^(n)… K_N^(1) ρ_N K_N^(1) †… K_N^(n) †]
(or an affine transformation thereof, i.e. ł⃗_N ⊙X⃗_N + δ⃗_N for some suitably chosen ł⃗_N, δ⃗_N ∈ℝ^n), where K_N^(j) = K_N(x_j |A_j), converges in distribution as N →∞ to some random vector
X⃗∼ Tr[ K^(n)… K^(1) ρ K^(1) †… K^(n) †] ,
where K^(j) = K(x_j | A_j) (as given by the macroscopic quantum representation).
In references <cit.>, only the case of a single measurement was considered (n=1), and the definition of “MQB" given there corresponds to MQB_1 as defined here. Now we define a stronger notion, where the macroscopic quantum representation is robust against decoherence and losses as introduced in assumptions <ref> and <ref> respectively.
A closed subspace ℌ_N ⊆𝔥^⊗ N and a subset 𝒜⊆ℬ( 𝔥 ) possess robust MQB_n for some n ∈ℕ if there exist a macroscopic quantum representation
( ℌ_N, ρ_N, K_N(x|A) ) →( ℌ, ρ, K(x|A) )
and ϵ > 0 such that for every state ρ_N ∈𝒟 (ℌ_N), for every sequence of measurement settings (A_1, …, A_n ) ∈𝒜^× n and for every sequence of channels (Ł^(1), …, Ł^(n) ) of the form (<ref>) with max_j ‖Ł^(j) - Id‖≤ϵ, the random vector
X⃗_N ∼ Tr[ K_N^(n)Ł^(n)( … K_N^(1) Ł^(1)(ρ_N) K_N^(1) †…) K_N^(n) †] ,
(or an affine transformation ł⃗_N ⊙X⃗_N + δ⃗_N for some suitably chosen ł⃗_N, δ⃗_N ∈ℝ^n), where K_N^(j) = K_N(x_j |A_j), converges in distribution as N →∞ to some random vector
X⃗∼ Tr[ K^(n)… K^(1) ρ K^(1) †… K^(n) †] ,
where K^(j) = K(x_j | A_j) (as given by the macroscopic quantum representation).
§.§.§ An example
To illustrate these ideas, consider the case where 𝔥 = ℂ^2 (thus, particles or subsystems are qubits). In this case, the single-particle projective measurements have two possible outcomes, which we label +1 and -1. Let us define the N-particle Dicke states <cit.>
|N,k⟩ := 1/√(\binom{N}{k}) ( |1…1_k 0…0 ⟩ + permutations) ,
the subspace of (ℂ^2)^⊗ N generated by the first d Dicke states
𝔇_N := span{|N,k⟩ , k = 0 , 1 , …, d-1 }
and the set of non-diagonal observables on ℂ^2
𝒩𝒟 := { A ∈ℬ(ℂ^2) : A^† = A and ⟨ 0 | A | 1 ⟩≠ 0 } .
Then we have the following result:
The Dicke subspace 𝔇_N with dimension d ≪ N (in the sense that lim_N →∞ d/ N = 0, which holds, for instance, if d is fixed) and the set 𝒩𝒟 possess robust MQB_1.
See Appendix <ref>.
In particular, as shown in the Appendix <ref>, the MQB_1 is given by the macroscopic quantum representation
( 𝔇_N, |N,k⟩, K_N(x|A) ) →( L^2(ℝ), |k⟩, K(x|A) ) .
Here, L^2(ℝ) is the space of square-integrable functions, |k⟩ are number states (energy eigenstates of the quantum harmonic oscillator) and
K(x|A) = e^-(X_ȷ-x)^2/(2 β^2)/(πβ^2)^1/4 ,
where X_ȷ = X cosȷ + P sinȷ is the phase-space quadrature observable built from the position X and momentum P observables, with angle ȷ = arg⟨ 0 |Γ^†(A)| 1 ⟩, and
β^2 = ( σ^2 + p ⟨0|Γ^†(A^2)|0⟩ - p^2 ⟨0|Γ^†(A)|0⟩^2 )/( p^2 |⟨0|Γ^†(A)|1⟩ |^2 ) - 1 ,
in terms of the probability p and the decoherence channel Γ defined in Eq. (<ref>). We conjecture that this macroscopic quantum representation for the considered system constitutes a robust MQB_n for all n:
The Dicke subspace 𝔇_N (with dimension d ≪ N) and the set 𝒩𝒟 possess robust MQB_n for all n ∈ℕ.
We prove a weaker result that, together with Theorem <ref>, supports Conjecture <ref>:
The Dicke subspace 𝔇_N (with dimension d ≪ N) and the set 𝒩𝒟 possess MQB_n for all n ∈ℕ.
See Appendix <ref>.
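As a sanity check of this representation (our own illustration, not part of the paper), the script below compares, for the Dicke state |N,1⟩ (the W state) with A = σ_x, p = 1, Γ = Id and pointer width σ = 0.5 (so that ȷ = 0 and β = σ), the exact finite-N statistics of the rescaled coarse-grained outcome with the limit prediction Tr[ K(x|A) |1⟩⟨1| K^†(x|A) ].

```python
# Numerical illustration (ours): finite-N statistics of the coarse-grained collective
# measurement on the Dicke state |N,1> versus the limit prediction, for the special
# case A = sigma_x, p = 1, Gamma = Id (angle 0, beta = sigma). Brute force, so keep N modest.
import numpy as np

N, sigma = 16, 0.5

# |N,1> in the computational basis, then rotate every qubit to the sigma_x eigenbasis.
psi = np.zeros(2**N)
psi[[1 << i for i in range(N)]] = 1 / np.sqrt(N)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi = psi.reshape((2,) * N)
for q in range(N):
    psi = np.moveaxis(np.tensordot(H, psi, axes=([1], [q])), 0, q)
psi = psi.reshape(-1)

# Probability of m outcomes equal to -1, i.e. of the intensity S = N - 2m.
m_of_basis = np.array([bin(i).count("1") for i in range(2**N)])
p_m = np.zeros(N + 1)
np.add.at(p_m, m_of_basis, psi**2)

# Exact density of the rescaled outcome x = (S + pointer noise)/sqrt(2N), the pointer
# noise being Gaussian with standard deviation sigma*sqrt(N).
xs = np.linspace(-4, 4, 801)
p_exact = sum(p_m[m] * np.exp(-(xs - (N - 2 * m) / np.sqrt(2 * N))**2 / sigma**2)
              / np.sqrt(np.pi * sigma**2) for m in range(N + 1))

# Limit prediction: |<x'|1>|^2 = (2/sqrt(pi)) x'^2 exp(-x'^2), smeared by the Gaussian
# POVM E(x|A) = K(x|A)^dagger K(x|A) of width beta = sigma.
xp = np.linspace(-8, 8, 4001)
kernel = np.exp(-(xs[:, None] - xp[None, :])**2 / sigma**2) / np.sqrt(np.pi * sigma**2)
p_limit = kernel @ ((2 / np.sqrt(np.pi)) * xp**2 * np.exp(-xp**2)) * (xp[1] - xp[0])

print(np.max(np.abs(p_exact - p_limit)))  # finite-N deviation, expected to shrink as N grows
```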
Moreover, as shown in Appendix <ref>, all MQB_n are given by the same macroscopic quantum representation as before
( 𝔇_N, |N,k⟩, K_N(x|A) ) →( L^2(ℝ), |k⟩, K(x|A) )
with β given by Eq. (<ref>) with p=1 and Γ = Id.
§ DEVICE-INDEPENDENT TESTS OF MACROSCOPIC NON-CLASSICAL BEHAVIORS
We have shown that the above system, consisting of the Dicke subspace 𝔇_N and collective measurements (as long as they are not represented by diagonal Hermitian operators), possesses MQB_n for all n ∈ℕ as well as robust MQB_1, which leads us to conjecture that it does indeed possess robust MQB_n for all n ∈ℕ. These results strongly hint that the quantum nature of the system can be observed at the macroscopic scale. In order to make this statement concrete, we use device-independent tests that witness the non-classical nature of our system at the macroscopic scale. In particular, we will see that our system possesses nonlocal correlations (in the sense of Bell <cit.>), ruling out a local realistic description of the outcome statistics in the macroscopic limit. We also show a violation of a Leggett-Garg inequality <cit.>, ruling out a macroscopic realistic description of the statistics.
§.§.§ Violation of a Bell inequality
Consider the bipartite macroscopic measurement scenario depicted in Figure <ref>: a system 𝖲, subject to assumptions <ref> - <ref>, is divided into two parts which are sent to measurement apparatuses 𝖬_A and 𝖬_B, each of which satisfies assumptions <ref> - <ref>. Suppose the system is in a state of the form ∑_k =0^d-1 c_k |N,k⟩_A |N,k⟩_B ∈𝔇_N ⊗𝔇_N, and suppose that Alice selects a single-particle observable A ∈𝒩𝒟 obtaining an outcome x ∈ℝ. Likewise, Bob selects B ∈𝒩𝒟 obtaining y ∈ℝ. Then, by applying Theorem <ref> to each party locally, the limit bipartite distribution is given by
P(x,y|A,B) = ⟨ψ| E(x|A) ⊗ E(y|B) |ψ⟩ ,
where |ψ⟩ = ∑_k=0^d-1 c_k |k⟩_A |k⟩_B ∈ L^2(ℝ) ⊗ L^2(ℝ) and E(x|A) = K(x|A)^† K(x|A) with K(x|A) given by (<ref>). This distribution does not admit a local hidden variable model, as we showed by explicit violation of a Bell-CHSH inequality in <cit.>.
§.§.§ Violation of a Leggett-Garg inequality
We now consider a macroscopic Leggett-Garg experiment as depicted in Figure <ref>: a system 𝖲, subject to assumptions <ref>-<ref>, is consecutively sent into two measurement apparatuses 𝖬_A and 𝖬_B, which satisfy conditions <ref> - <ref>. Suppose the system is initially in the state ∑_k=0^d-1 c_k |N,k⟩ ∈𝔇_N. Next, suppose that during the first measurement, defined by a single-particle observable A ∈𝒩𝒟, an outcome x ∈ℝ is obtained. Similarly, for the second measurement, we have the associated observable B ∈𝒩𝒟 resulting in an outcome y ∈ℝ. Then, applying Theorem <ref> (namely the MQB_2 of the system), the limit bipartite distribution is given by
P(x,y|A,B) = ⟨ψ| K(x|A)^† K(y|B)^† K(y|B) K(x|A) |ψ⟩ ,
where|ψ⟩ = ∑_k=0^d-1 c_k |k⟩ ∈L^2(ℝ)andK(x|A)are given by (<ref>). Now consider the following Leggett-Garg CHSH inequality <cit.>
C = ⟨ a_1 b_1 ⟩ + ⟨ a_1 b_2 ⟩ + ⟨ a_2 b_1 ⟩ - ⟨ a_2 b_2 ⟩≤ 2 ,
wherea_i = (x|A_i)andb_i = (y|B_i). Then, as we show in the Appendix <ref>, the state
|ψ⟩ = √(1/2 - 577/2 √(1244179)) |0⟩ + √(1/2 + 577/2 √(1244179)) |2⟩ .
gives
C = 2/675 π( 577 + √(1244179) + 2700 arctan1/3) ≃ 2.42
forA_i = _x cosȷ_i + _y sinȷ_iandB_i = _x cos_i + _y sin_iwith the following set of angles
ȷ_1 = π/4 , ȷ_2 = 3 π/4 , _1 = π/2 , _2 = 0 .
This violation of a Leggett-Garg inequality rules out a macroscopic realistic description of the correlations obtained.
§ OUTLOOK AND OPEN QUESTIONS
Our results shed new light on the question of the quantum-to-classical transition, suggesting that genuine quantum phenomena might be more robust than previously thought. There are several interesting questions to be addressed in the future:
* Question of scale. Our results show that genuine quantum behavior can be visible through measurements with a precision of the order of √(N), even in the presence of (single-particle) decoherence and losses. The relevant parameter that defines the scale of these quantum effects is, therefore, the resolution of the measurements as a function of the system's size N. An open question is whether genuine quantum effects exist at a scale larger than √(N), thus surviving even more coarse-graining than the system we consider. The results of Kofler and Brukner <cit.> indicate that this is not possible, showing classicality for a resolution much larger than √(N). But their result only applies to finite-dimensional systems, and the case of infinite dimensional systems (in our language above, the case where the single-particle Hilbert space 𝔥 is infinite-dimensional) is still to be investigated.
* Macroscopicity measures. The question of macroscopic quantum states dates back to Schrödinger <cit.>, and still today work is done to characterize the “macroscopicity" of quantum states (see <cit.> and references therein). An example of a quantum state that is typically thought to be macroscopically quantum is the Greenberger-Horne-Zeilinger (GHZ) <cit.> state (|0⟩^⊗ N + |1⟩^⊗ N)/√2 (also called cat-like state). However, such a state is extremely fragile, since the loss of coherence of a single particle destroys the coherence of the global state, collapsing it into a classical mixture. For this reason, the GHZ state is not useful for our purposes. Another state typically associated with macroscopic quantumness is the W state <cit.>, which in the language of this work corresponds to the Dicke state |N,1⟩. These two examples show that quantum macroscopicity does not necessarily result in non-classicality in the macroscopic limit as defined in our sense, and robustness seems to be a key requirement. More precise relations are to be left for future considerations.
* Classical limit and infinite tensor products. There is a recent proposal that suggests that focusing on type I operator algebras might be the source of the problem and considering instead quantum mechanics on type II operator algebras might provide a framework that encompasses the classical macroscopic limit in a natural way (see <cit.>). This seems to provide another formalism to arrive at the macroscopic limit, and an interesting question to be investigated is how these findings relate to our result.
* Experimental implementations. Of particular interest are experimental considerations to test our findings. Some potential experimental settings that seem to provide the appropriate characteristics are atomic memories and Bose-Einstein condensates (see e.g. <cit.>). While our results are derived in the explicit limit N→∞, one has to derive concrete bounds for finite N or at least provide numerical simulations.
Acknowledgments.— We would like to thank Joshua Morris for helpful comments. This research was funded in whole, or in part, by the Austrian Science Fund (FWF) [10.55776/F71] and [10.55776/P36994]. M.G. also acknowledges support from the ESQ Discovery programme (Erwin Schrödinger Center for Quantum
Science & Technology), hosted by the Austrian Academy
of Sciences (ÖAW). For open access purposes, the author(s) has applied a CC BY public copyright license to any author accepted manuscript version arising from this submission.
§ KRAUS OPERATORS
Let the initial joint state of the system and pointer be described by the density matrix ρ_𝖲⊗ρ_𝖯, where ρ_𝖲 is the initial state of the system in Fock-like space ⊕_M=0^N 𝔥^⊗ M and ρ_𝖯 = |Φ⟩⟨Φ| is the initial state of the pointer. It is convenient to write ρ_𝖲 in the eigenbasis of the A_i's, i.e.
ρ_𝖲 = ⊕_M=0^N ∑_[ a_1 … a_M; b_1 … b_M ] C_[ a_1 … a_M; b_1 … b_M ] |a_1… a_M⟩_s _s⟨b_1 … b_M| .
Then, after unitary interactionU = exp{ - i ⊕_M=1^N ∑_i=1^M A_i(s) P }, the joint state of system and pointer is
U _̊𝖲_̊𝖯 U^† = ⊕_M=0^N ∑_[ a_1 … a_M; b_1 … b_M ] C_[ a_1 … a_M; b_1 … b_M ] e^-i ∑_i=1^M A_i(s) P |a_1… a_M⟩_s _s⟨b_1 … b_M||Φ⟩⟨Φ| e^i ∑_i=1^M A_i(s) P
= ⊕_M=0^N ∑_[ a_1 … a_M; b_1 … b_M ] C_[ a_1 … a_M; b_1 … b_M ] e^-i ( ∑_i=1^M a_i ) P |a_1… a_M⟩_s _s⟨b_1 … b_M||Φ⟩⟨Φ| e^i ( ∑_i=1^M b_i ) P
= ⊕_M=0^N ∑_[ a_1 … a_M; b_1 … b_M ] C_[ a_1 … a_M; b_1 … b_M ]|a_1… a_M⟩_s _s⟨b_1 … b_M| e^-i ( ∑_i=1^M a_i ) P |Φ⟩⟨Φ| e^i ( ∑_i=1^M b_i ) P .
If we measure the position of the pointer obtaining the valuex, then the state of the system is projected to
_𝖯x U _̊𝖲_̊𝖯 U^†x_𝖯 = ⊕_M=0^N ∑_[ a_1 … a_M; b_1 … b_M ] C_[ a_1 … a_M; b_1 … b_M ]|a_1… a_M⟩_s _s⟨b_1 … b_M| xe^-i ( ∑_i=1^M a_i ) P |Φ⟩⟨Φ| e^i ( ∑_i=1^M b_i ) Px
= ⊕_M=0^N ∑_[ a_1 … a_M; b_1 … b_M ] C_[ a_1 … a_M; b_1 … b_M ]|a_1… a_M⟩_s _s⟨b_1 … b_M| Φ( x - ∑_i=1^M a_i ) Φ^* ( x - ∑_i=1^M b_i )
= K_N(x|s) _̊𝖲 K_N(x|s)^† ,
where
K_N(x|s) = ⊕_M=0^N ∑_a_1 … a_MΦ( x - ∑_i=1^M a_i ) |a_1… a_M⟩_s _s⟨a_1 … a_M| ,
and the probability of obtaining such an outcome x is Tr[ K_N(x|s) ρ_𝖲 K_N^†(x|s) ], proving Equations (<ref>), (<ref>) and (<ref>) in the main text.
§ PROOF OF THEOREM <REF>.
Consider the random variable X_N with distribution
P_N(x) = Tr[K_N(x|A) Ł(ρ_N) K_N^† (x|A) ] ,
where ρ_N = ∑_k,l=0^d_N-1 c_kl|N,l⟩⟨N,k|, d_N satisfies lim_N →∞ d_N/N=0 and
K_N(x|A) = ⊕_M=0^N ∑_a_1 … a_MΦ( x - ∑_i=1^M a_i ) Π_a_1 | A⊗…⊗Π_a_M | A .
By Lévy's continuity theorem, in order to show that X_N converges in distribution it is sufficient to show that its characteristic function χ_N(t)
converges pointwise to some function χ(t) continuous at t=0. We have
χ_N(t) = ∫_-∞^+∞ dx e^itx P_N(x)
= ∫_-∞^+∞ dx e^itx [K(x|A) Ł (_̊N ) K_N^† (x|A) ]
= [ Ł (_̊N) ∫_-∞^+∞ dx e^itx K_N^† (x|A) K_N(x|A) ]
= [ Ł(_̊N) ⊕_M=0^N ∑_a_1 … a_M∫_-∞^+∞ dx e^itxΦ^2 ( x - ∑_i=1^M a_i ) Π_a_1 | A⊗…⊗Π_a_M | A] .
Using that
∫_-∞^+∞ dx e^itx Φ^2 ( x - ∑_i=1^M a_i ) = ∫_-∞^+∞ dx e^itx e^-(x- ∑_i=1^M a_i)^2/(2 N ^2)/√(2 π N ^2)
= e^i t ∑_i=1^M a_i - N ^2 t^2 /2 ,
we have
χ_N(t) = e^- N ^2 t^2 /2 [ Ł(_̊N) ⊕_M=0^N ∑_a_1 … a_M e^i t ∑_i=1^M a_i Π_a_1 | A⊗…⊗Π_a_M | A]
= e^- N ^2 t^2 /2 [ Ł(_̊N) ⊕_M=0^N ( ∑_a_1 e^i t a_1Π_a_1|A) …( ∑_a_M e^i t a_MΠ_a_M|A) ]
= e^- N ^2 t^2 /2 [ Ł(_̊N) ⊕_M=0^N ( e^i t A)^ M]
= e^- N ^2 t^2 /2 [ ⊕_M=0^N N M p^M (1-p)^N-M 𝒮{_N-M[ ^⊗ N (_̊N) ] }( e^itA)^ M]
= e^- N ^2 t^2 /2 ∑_M=0^N N M p^M (1-p)^N-M_M [_N-M[ ^⊗ N (_̊N) ] ( e^itA)^ M]
= e^- N ^2 t^2 /2 ∑_M=0^N N M p^M (1-p)^N-M_N [ ^⊗ N (_̊N) ( e^itA)^ M𝕀^ (N-M)]
=e^- N ^2 t^2 /2 [ ^⊗ N (_̊N) ∑_M=0^N N M p^M (1-p)^N-M ( e^itA)^ M𝕀^ (N-M)]
= e^- N ^2 t^2 /2 [ ^⊗ N (_̊N) ( 1-p + p e^itA)^ N]
= e^- N ^2 t^2 /2 [ _̊N ^†( 1-p + p e^itA)^ N] .
We now perform an affine transformation X_N ↦ł_N X_N + δ_N of the random variable with ł_N = 1/√(2 N p^2 |G_01|^2) and δ_N = - G_00√(N) / √(2 |G_01|^2 ), where G_ij := ⟨i|Γ^†(A)|j⟩. The characteristic function of the new random variable is
χ_N(t) = e^- i t G_00√(N)/√(2 |G_01|^2) - ^2 t^2 / (4 p^2 |G_01|^2) [ _̊N Γ^†(1-p+p e^i t A /√(2 N p^2 |G_01|^2))^ N] .
Defining 𝒢 = Γ^†(1-p+p e^i t A /√(2 N p^2 |G_01|^2)), the matrix element N,k𝒢^⊗ NN,l in the case k ≥ l is:
⟨N, k| 𝒢^ N |N, l⟩ = 1/√(NkNl)( 1…1_k 0…0 + perm.) 𝒢^ N( 1…1_l 0…0 + perm.)
= √(Nk/Nl) 1…1_k 0…0 𝒢^ N( 1…1_l 0…0 + perm.)
= √(Nk/Nl) ∑_m=0^l kmN-kl-m 1…1_k 0…0 𝒢^ N1…1_m0…0_k1..1_l-m 0…0_N-k
= √(Nk/Nl) ∑_m=0^l kmN-kl-m 1𝒢1^m 1𝒢0^k-m 0𝒢1^l-m 0𝒢0^N-k-l+m
=∑_m=0^l √(k! l! N^k+l-2m )/ m! (k-m)! (l-m)! [ 1 + O(1/N) ] 1𝒢1^m 1𝒢0^k-m 0𝒢1^l-m 0𝒢0^N-k-l+m .
In the second equality we have used permutational invariance; in the third equality we have gathered all the terms that contribute equally, multiplied by their combinatorial multiplicity; in the last line we have used Stirling's formula. Expanding 𝒢 as
𝒢 = ^†( 1-p + p e^i t A /√(2 N p^2 |G_01|^2))
= ^†( 1 + p i t A/√(2 N p^2 |G_01|^2) -p t^2 A^2/4 N p^2 |G_01|^2 + O(N^-3/2) )
= 1 + i t/√(2 N |G_01|^2)^† (A)- t^2/4 N p|G_01|^2^†(A^2) + O(N^-3/2) ,
we can see that 1𝒢1^m = 1 + O(1/√(N)) and 0𝒢0^m = 1 + O(1/√(N)), while 1𝒢0^k-m = ( it /√(2 N |G_01|^2) G_10)^k-m[ 1 + O(1/√(N)) ] and 0𝒢1^l-m = ( it/√(2 N |G_01|^2) G_01)^l-m[ 1 + O(1/√(N)) ]. These first order contributions of the off-diagonal matrix elements cancel the overall factor of √(N^k+l-2m), while higher order corrections are suppressed. On the other hand,
0𝒜0^N-k-l = exp{ (N-k-l) log0𝒢0}
= exp{ (N-k-l) log[ 1 + it /√(2N |G_01|^2)0^†(A)0 - t^2/4 N p |G_01|^20^†(A^2)0 + O(N^-3/2) ] }
= exp{ (N-k-l) [ i t G_00/√(2 N |G_01|^2) - t^2 G^(2)_00/4 N p |G_01|^2 + t^2 G_00^2 /4 N |G_01|^2 + O(N^-3/2) ] }
= exp{i t G_00√(N)/√(2 |G_01|^2) - t^2/4 p |G_01|^2( G_00^(2) - p G_00^2 ) +O(1/√(N)) } .
Then, defining G = ^†(A) and G^(2) = ^†(A^2), the matrix element reads
N,k𝒜^ NN,l = e^i t G_00√(N) / √(2 |G_01|^2) -t^2 (G_00^(2) - p G_00^2) / (4 p |G_01|^2) + O(1/√(N))
·∑_m=0^l √(k! l!)/m! (k-m)! (l-m)!( i t G_10/√(2 |G_01|^2))^k-m( i t G_01/√(2 |G_01|^2))^l-m[ 1 +O(1/√(N)) ] .
For the case l>k all we have to do is exchange k and l, so that the sum only runs until the smallest of the two, and also exchange G_10 and G_01. Then, defining ^2 = ^2/(p^2 |G_01|^2) + (G^(2)_00 - p G_00^2)/(p |G_01|^2) and G_01=|G_01| e^i ȷ, the characteristic function in the limit reads
lim_N →∞χ_N(t) = e^-^2 t^2/4∑_k,l e^i l ȷ c_kl e^-ikȷ∑_m=0^min (k,l)√(k! l!)/m! (k-m)! (l-m)!( i t/√(2))^k+l-2m .
Since this is a continuous characteristic function, we conclude that the random variable ł_N X_N + _N converges in distribution. In order to compute its distribution, we take the Fourier transform of the above expression:
P(x) = 1/2 π∫_-∞^+∞ dt e^-itx χ(t)
= ∑_k,l e^i l ȷ c_kl e^-ikȷ∑_m=0^min (k,l)√(k! l!)/m! (k-m)! (l-m)!(1/√(2))^k+l-2m1/2 π∫_-∞^+∞ dt ( i t )^k+l-2m e^-itx- ^2 t^2/4
= ∑_k,l e^i l ȷ c_kl e^-ikȷ∑_m=0^min (k,l)√(k! l!)/m! (k-m)! (l-m)!( 1/√(2))^k+l-2m (-1)^k+l-2md^k+l-2m/dx^k+l-2m1/2 π∫_-∞^+∞ dt e^-itx- ^2 t^2/4
= ∑_k,l e^i l ȷ c_kl e^-ikȷ∑_m=0^min (k,l)√(k! l!)/m! (k-m)! (l-m)!(1/√(2))^k+l-2m (-1)^k+l-2md^k+l-2m/dx^k+l-2me^-x^2/ ^2/√(π^2) .
Using Rodrigues' formula for Hermite polynomials <cit.>
H_n(x) = (-1)^n e^x^2d^n/dx^n e^-x^2
we have
P(x) = ∑_k,l e^i l ȷ c_kl e^-ikȷ∑_m=0^min (k,l)√(k! l!)/m! (k-m)! (l-m)!( 1/√(2))^k+l-2me^- x^2/^2/^k+l-2mH_k+l-2m ( x/)/√(π^2)
= e^- x^2/^2/√(π^2 )∑_k,l e^i l ȷ c_kl e^-ikȷ√(k! l!)∑_m=0^min (k,l)1/m!k+l-2m l-m( 1/)^k+l-2mH_k+l-2m ( x/)/√(2^k+l-2m) (k+l-2m)! ,
where we have introduced the binomial coefficient for convenience. Now, the above sum over Hermite polynomials can be written as a product of two Hermite polynomials of order k and l by virtue of the following lemma:
For any constants α, β and γ satisfying α^2=β^2+γ^2, and for any non-negative integers k and l with k ≥ l, the following identity of Hermite polynomials holds:
e^- x^2/α^2/√(πα^2)∑_m=0^l 1/m! \binom{k+l-2m}{l-m}( γ/α)^k+l-2m H_k+l-2m(x/α )/√(2^k+l-2m) (k+l-2m)! =
= ∫_-∞^+∞ dx' e^-( x-x')^2/β^2 /√(πβ^2)e^-(x')^2/γ^2/√(πγ^2 ) H_k ( x'/γ)/√(2^k) k!H_l (x'/γ)/√(2^l) l! .
See <cit.>.
Then, the limit distribution may be written as
P(x) = ∑_k,l e^i l ȷ c_kl e^-ikȷ√(k! l!)∫_-∞^+∞ dx' e^-( x-x')^2/^̱2 /√(π^̱2)e^-(x')^2/√(π) H_k (x' )/√(2^k) k! H_l (x' )/√(2^l) l!
= ∑_k,l e^i l ȷ c_kl e^-ikȷ∫_-∞^+∞ dx' e^-( x-x')^2/^̱2 /√(π^̱2)[ e^-(x')^2/2 H_k ( x') /√(2^k k! √(π))] [ e^-(x')^2/2 H_l (x' ) /√(2^l l! √(π))]
= ∑_k,l e^i l ȷ c_kl e^-ikȷ∫_-∞^+∞ dx' e^-( x-x')^2/^̱2 /√(π^̱2)⟨k|x'⟩⟨x'|l⟩ ,
where^̱2 = ^2 - 1and we have introduced the wave-functions
⟨x|k⟩ = 1/√(2^k k! √(π)) e^- x^2/2 H_k ( x )
of a one dimensional harmonic oscillator. In conclusion, we can write the limit probability distribution as
P(x) = Tr[ K(x|A) ρ K^†(x|A) ] ,
where ρ=∑_kl c_kl|l⟩⟨k| and
K(x|A) = ∫_-∞^+∞ dx' e^-(x-x')^2/(2 β^2)/(πβ^2)^1/4 |x'⟩_ȷ _ȷ⟨x'| ,
where
β^2 = ( σ^2 + p ⟨0|Γ^†(A^2)|0⟩ - p^2 ⟨0|Γ^†(A)|0⟩^2 )/( p^2 |⟨0|Γ^†(A)|1⟩ |^2 ) - 1
and
ȷ = arg⟨0|Γ^†(A)|1⟩ .
§ PROOF OF THEOREM <REF>.
Now consider the random vectorX⃗_Nwith distribution
P_N(x_1, …, x_n | A_1 , … , A_n) = Tr[ K_N(x_n|A_n) … K_N(x_1|A_1) ρ_N K_N^†(x_1|A_1) … K_N^†(x_n|A_n) ] .
We want to show thatX⃗_Nconverges in distribution to a random vectorX⃗with distribution
P(x_1, …, x_n | A_1 , … , A_n) = Tr[ K(x_n|A_n) … K(x_1|A_1) ρ K^†(x_1|A_1) … K^†(x_n|A_n) ] .
For this, it is sufficient to show that
⟨ N , k | K̃_N^(1)…K̃_N^(n)K̃_N^(n+1) | N,l ⟩= ∑_m=0^∞( ⟨ N,k | K̃_N^(1)…K̃_N^(n) | N , m ⟩) ( ⟨ N,m | K̃_N^(n+1) | N , l ⟩) ,
where
K̃_N^(j) = K̃_N(x_j | A_j)
= 1/√(ł_N^(j)) K_N ( x_j - _N^(j)/ł_N^(j) | A_j )
= 1/√(ł_N^(j))⊕_M=0^N ∑_a_1 … a_Me^-( (x_j - _N^(j))/ł_N^(j) - ∑_i=1^M a_i )^2/(4 N ^2) /(2 π N ^2)^1/4Π_a_1 | A_j…Π_a_M | A_j
= 1/( 2 π N ^2 (ł_N^(j))^2 )^1/4⊕_M=0^N ∑_a_1 … a_M√( N ^2 (ł_N^(j))^2/π)∫ dp e^i p x_j e^-N ^2 (ł_N^(j))^2 p^2 - i p (_N^(j) + ł_N^(j)∑_i=1^M a_i ) Π_a_1 | A_j…Π_a_M | A_j
= ( N ^2 (ł_N^(j))^2 /2 π^3)^1/4∫ dp e^i p x_j - N ^2 (ł_N^(j))^2 p^2
·⊕_M=0^N e^-ip_N^(j) (1-M/N)( ∑_a_1 e^-i p ( ł_N^(j) a_1 + _N^(j)/N )Π_a_1|A_j) …( ∑_a_M e^-i p (ł_N^(j) a_M + _N^(j)/N)Π_a_M|A_j)
for some constants ł_N^(j) and δ_N^(j). Choosing as before ł_N^(j) = (2 N |(A_j)_01|^2)^-1/2 and δ_N^(j) = - (A_j)_00√(N) / √(2 | (A_j)_01|^2) and defining 𝒜_j = ∑_a e^-i p (ł_N^(j) a + δ_N^(j)/N)Π_a|A_j, we have
K̃_N^(j) = ( ^2 /4 π^3 |(A_j)_01|^2)^1/4∫ dp e^i p x_j - ^2 p^2 / (2 |(A_j)_01|^2) ⊕_M=0^N e^-ip_N^(j) (1-M/N)𝒜_j^ M .
Then the left hand side of (<ref>) is
LHS = ( 1/4 π^3)^(n+1)/4√(_1 …_n+1/ |(A_1)_01| … |(A_n+1)_01|)
·∫ dp_1 … dp_n+1 e^i ∑_j=1^n+1 x_j p_j - 1/2∑_j=1^n+1^2 p_j^2 / |(A_j)_01|^2 ⟨ N,k | ( 𝒜_1 …𝒜_n+1)^ N | N, l ⟩ ,
while the right hand side is
RHS = ∑_m=0^∞{( 1/4 π^3)^n/4√(_1 …_n/ |(A_1)_01| … |(A_n)_01|)
·∫ dp_1 … dp_n e^i ∑_j=1^n x_j p_j - 1/2∑_j=1^n^2 p_j^2 / |(A_j)_01|^2 ⟨ N,k | ( 𝒜_1 …𝒜_n)^ N | N, m ⟩}
·{( 1/4 π^3)^1/4√(_n+1/ |(A_n+1)_01|)∫ dp_n+1 e^i x_n+1 p_n+1 - 1/2_n+1^2 p_n+1^2 / |(A_n+1)_01|^2 ⟨ N,m | 𝒜_n+1^ N | N, l ⟩} .
It is then sufficient to show thatLHS' = RHS', where
LHS' = ⟨ N,k | ( 𝒜_1 …𝒜_n+1)^ N | N, l ⟩
and
RHS' = ∑_m=0^∞( ⟨ N,k | ( 𝒜_1 …𝒜_n)^ N | N, m ⟩) ( ⟨ N,m | 𝒜_n+1^ N | N, l ⟩) .
Expanding
𝒜_1 …𝒜_n = ( 1 - i p_1 A_1-(A_1)_00/√(2 N |(A_1)_01|^2) - p_1^2/2( A_1-(A_1)_00)^2/2 N |(A_1)_01|^2 + O(N^-3/2) ) …
…( 1 - i p_n A_n-(A_n)_00/√(2 N |(A_n)_01|^2) - p_n^2/2( A_n-(A_n)_00)^2/2 N |(A_n)_01|^2 + O(N^-3/2) )
= 1 - i/√(2 N)∑_j=1^n p_j A_j - (A_j)_00/|(A_j)_01| - 1/4N∑_j=1^n p_j^2 ( A_j - (A_j)_00)^2/|(A_j)_01|^2 - 1/2N∑_i<j^n p_i p_j A_i - (A_i)_00/|(A_i)_01|A_j - (A_j)_00/|(A_j)_01| + O(N^-3/2) ,
we have that
⟨ 1 | 𝒜_1 …𝒜_n |1 ⟩ = 1 + O(N^-1/2) ,
⟨ 1 | 𝒜_1 …𝒜_n |0 ⟩ = - i/√(2N)∑_j=1^n p_j e^-i ȷ_j+ O(N^-1) ,
⟨ 0 | 𝒜_1 …𝒜_n |1 ⟩ = - i/√(2N)∑_j=1^n p_j e^i ȷ_j+ O(N^-1) ,
⟨ 0 | 𝒜_1 …𝒜_n |0 ⟩ = 1 - 1/4N∑_j=1^n p_j^2 - 1/2N∑_i<j^n p_i p_j e^i (ȷ_i - ȷ_j)+ O(N^-3/2) ,
where we have defined⟨ 0 | A_j | 1 ⟩ = |(A_j)_01| e^i ȷ_j, so that using (<ref>) (from the proof in the previous appendix) we have
⟨ N,k | 𝒜_1 …𝒜_n | N,l ⟩ = e^-1/4∑_j=1^n p_j^2 - 1/2∑_i<j^n p_i p_j e^i(ȷ_i-ȷ_j)
·∑_r=0^min(k,l)√(k! l!)/r! (k-r)! (l-r)!( -i/√(2)∑_j=1^n p_j e^-i ȷ_j)^k-r( -i/√(2)∑_j=1^n p_j e^i ȷ_j)^l-r .
Therefore, defining
a = - i/√(2)∑_j=1^n p_j e^i ȷ_j ,
b = - i/√(2)∑_j=1^n p_j e^-i ȷ_j ,
a' = - i/√(2) p_n+1 e^i ȷ_n+1 ,
b = - i/√(2) p_n+1 e^-i ȷ_n+1 ,
we have
LHS' = e^-1/4∑_j=1^n+1 p_j^2- 1/2∑_i<j^n+1 p_i p_j e^i(ȷ_i-ȷ_j)∑_r=0^min(k,l)√(k! l!)/r! (k-r)! (l-r)!( b+b')^k-r( a+a' )^l-r
= √(k! l!) e^-1/4∑_j=1^n+1 p_j^2- 1/2∑_i<j^n p_i p_j e^i(ȷ_i-ȷ_j) f_kl(a,b,a',b') ,
where we have defined the function
f_kl (a, b, a', b') = e^ab' (a+a')^l (b+b')^k ∑_r=0^min(k,l) (a + a')^-r ( b+b')^-r/r! (k-r)! (l-r)!
= e^ab' (a+a')^l-min(k,l) (b+b')^k-min(k,l)∑_r=0^min(k,l) (a + a')^min(k,l)-r ( b+b')^min(k,l)-r/r! (k-r)! (l-r)!
= e^ab' (a+a')^l-min(k,l) (b+b')^k-min(k,l)∑_q=0^min(k,l) (a + a')^q ( b+b')^q/( min(k,l)-q )! (max(k,l)- min(k,l)+q )! q!
= e^ab'(a+a')^l-min(k,l) (b+b')^k-min(k,l)/max(k,l)!∑_q=0^min(k,l)max(k,l) min(k,l)-q (a + a')^q ( b+b')^q/ q!
= e^ab'(a+a')^l- min(k,l) (b+b')^k-min(k,l)/max(k,l)! L_min(k,l)^|k-l|(-(a + a') ( b+b') ) ,
in terms of the generalized Laguerre polynomials
L_k^m(x) = ∑_q=0^k k+m k-q(-x)^q/q! .
On the other hand, we have that
RHS' = ∑_m=0^∞{ e^-1/4∑_j=1^n p_j^2- 1/2∑_i<j^n p_i p_j e^i(ȷ_i-ȷ_j)∑_s=0^min(k,m)√(k! m!)/s! (k-s)! (m-s)! b^k-s a^m-s}
·{ e^-1/4 p_n+1^2∑_t=0^min(m,l)√(m! l!)/t! (m-t)! (l-t)! b'^m-t a'^l-t}
= √(k! l!) e^-1/4∑_j=1^n+1 p_j^2- 1/2∑_i<j^n p_i p_j e^i(ȷ_i-ȷ_j) g_kl(a,b,a',b')
where we have defined the function
g_kl(a, b, a', b') = (a')^l b^k ∑_m=0^∞ m! (ab')^m ∑_s=0^min(k,m)(a b)^-s/s! (k-s)! (m-s)!∑_t=0^min(m,l)(a'b')^-t/t! (m-t)! (l-t)!
= (a')^l b^k ∑_m=0^∞ m! (ab')^m (ab)^-min(k,m)∑_s=0^min(k,m)(ab)^min(k,m) -s/s! (k-s)! (m-s)! (a' b')^-min(m,l)∑_t=0^min(m,l)(a'b')^min(m,l)-t/t! (m-t)! (l-t)!
= (a')^l b^k ∑_m=0^∞ m! (ab')^m (ab)^-min(k,m)∑_u=0^min(k,m)(ab)^u/( min(k,m)-u )! ( max(k,m) - min(k,m) + u )! u!
· (a' b')^-min(m,l)∑_v=0^min(m,l)(a'b')^v/( min(m,l)-v )! ( max(m,l) - min(m,l) + v )! v!
= (a')^l b^k ∑_m=0^∞ m! (ab')^m (ab)^-min(k,m)/max(k,m)!∑_u=0^min(k,m)max(k,m) min(k,m)-u(ab)^u/u!
·(a' b')^-min(m,l)/max(m,l)!∑_v=0^min(m,l)max(m,l) min(m,l)-v(a'b')^v/v!
= (a')^l b^k ∑_m=0^∞ m! (ab')^m (ab)^-min(k,m)/max(k,m)! L_min(k,m)^|k-m| (-ab) (a' b')^-min(m,l)/max(m,l)! L_min(m,l)^|m-l| (-a'b') .
Therefore a sufficient condition for the identityLHS=RHSis thatf_kl(a,b,a',b') = g_kl(a,b,a',b')for allk,l ≥ 0and for alla, b, a', b' ∈ℂ, which holds by virtue of the following lemma:
Let
f_kl(a,b,a',b') = e^ab'(a+a')^l-min(k,l) (b+b')^k-min(k,l)/max(k,l)! L_min(k,l)^|k-l|( - (a+a') (b+b') )
and
g_kl (a,b,a',b') = (a')^l b^k ∑_j=0^∞ j! (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j| (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b') .
Then f_kl(a,b,a',b') = g_kl(a,b,a',b') for every non-negative integers k and l and for every a,a',b,b' ∈ℂ.
The structure of the proof is as follows. First, seeing that f_00=g_00 is easy. Then we prove that, in the case l=0, both f and g satisfy the same recurrence relation, namely
f_k+1,0 = b+b'/k+1 f_k0 and g_k+1,0 = b+b'/k+1 g_k0 ,
which by induction on k implies that f_k0=g_k0 for all k ≥ 0. Next, for l≠ 0, we prove that, in the case k ≥ l+1, the following recurrence relations are satisfied:
f_k,l+1=a+a'/l+1 f_kl + 1/l+1 f_k-1,l and g_k,l+1=a+a'/l+1 g_kl + 1/l+1 g_k-1,l .
This, together with the previously established identity f_k0 = g_k0, implies by induction on l that f_k,l+1=g_k,l+1 for k ≥ l+1, so that we have f_kl=g_kl for k≥ l. Finally, since the identity is symmetric with respect to the exchange of k and l, a and b and a' and b', the identity is proven for all k and l. Therefore, it remains to prove equations (<ref>) and (<ref>). Before proceeding further, let us recall some useful identities of Laguerre polynomials:
L_n^m(x) = L_n^m+1(x) - L_n-1^m+1(x) ,
∗
n L_n^m(x) = (n+m) L_n-1^m(x) - x L_n-1^m+1(x) ,
∗∗
(with the convention that L_n^m(x) = 0 if n < 0).
Step 1: proof of (<ref>). For l=0 we have
f_k0 = e^ab'(b+b')^k/k! and g_k0 = b^k ∑_j=0^∞ (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j|(-ab) .
It is easy to see that the recurrence relation f_k+1,0 = b+b'/k+1 f_k0 is satisfied. Similarly,
g_k+1,0 = b^k+1{∑_j=0^k (ab')^j (ab)^-j/(k+1)! L_j^k+1-j(-ab) + ∑_j=k+1^∞ (ab')^j (ab)^-k-1/j! L_k+1^j-k-1 (-ab) }
= b^k+1{1/(k+1)!∑_j=0^k ( b'/b)^j L_j^k+1-j(-ab) + ∑_j=k+1^∞ (ab')^j (ab)^-k-1/j! j L_k^j-k-1(-ab) + ab L_k^j-k(-ab)/k+1}
= b^k+1/k+1{1/k!∑_j=0^k ( b'/b)^j L_j^k+1-j(-ab)
+ ab' (ab)^-1∑_j=k+1^∞ (ab')^j-1(ab)^-k/(j-1)! L_k^j-k-1 (-ab) + ∑_j=k+1^∞ (ab')^j(ab)^-k/j! L_k^j-k (-ab) }
= b^k+1/k+1{1/k!∑_j=0^k ( b'/b)^j L_j^k+1-j(-ab)
+ b'/b∑_j=k^∞ (ab')^j(ab)^-k/j! L_k^j-k (-ab) + g_k0/b^k - ∑_j=0^k (ab')^j(ab)^-j/k! L_j^k-j (-ab) }
= b/k+1 g_k0 + b^k+1/k+1{1/k!∑_j=0^k ( b'/b)^j L_j^k+1-j(-ab)
+ b'/b[ g_k0/b^k - ∑_j=0^k-1 (ab')^j (ab)^-j/k! L_j^k-j(-ab) ] - 1/k!∑_j=0^k ( b'/b)^j L_j^k-j (-ab) }
= b+b'/k+1 g_k0 + b^k+1/(k+1)!{∑_j=0^k ( b'/b)^j L_j^k+1-j(-ab)
- b'/b∑_j=0^k-1( b'/b)^j L_j^k-j(-ab) - ∑_j=0^k ( b'/b)^j L_j^k-j (-ab) }
=: b+b'/k+1 g_k0 + b^k+1/(k+1)! R ,
where in the second equality we have used (<ref>) and in the last equality we have defined
R := ∑_j=0^k ( b'/b)^j [ L_j^k+1-j(-ab) - L_j^k-j(-ab) ] - b'/b∑_j=0^k-1( b'/b)^j L_j^k-j(-ab)
= ∑_j=1^k ( b'/b)^j L_j-1^k+1-j(-ab) - ∑_j=0^k-1( b'/b)^j+1 L_j^k-j(-ab)
= 0 ,
where in the second equality we have used (<ref>).
Step 2: proof of (<ref>). For l ≠ 0 and k≥ l we have
f_kl = e^ab' (b+b')^k-l/k! L_l^k-l(-(a+a')(b+b')) ,
and for k ≥ l+1 we have the following recurrence relation for f:
f_k,l+1 = e^ab'(b+b')^k-l-1/k! L_l+1^k-l-1(-(a+a')(b+b'))
= e^ab'(b+b')^k-l-1/k! k L_l^k-l-1(-(a+a')(b+b')) + (a+a')(b+b') L_l^k-l(-(a+a')(b+b'))/l+1
= 1/l+1 e^ab'(b+b')^k-l-1/(k-1)! L_l^k-l-1(-(a+a')(b+b')) + a+a'/l+1 e^ab'(b+b')^k-l/k! L_l^k-l(-(a+a')(b+b'))
= 1/l+1 f_k-1,l + a+a'/l+1 f_kl ,
where in the second equality we have used (<ref>). For g we have
g_k,l+1 = (a')^l+1 b^k ∑_j=0^∞ j! (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j| (-ab) (a'b')^-min(l+1,j)/max(l+1,j)! L_min(l+1,j)^|l+1-j|(-a'b')
= (a')^l+1 b^k {∑_j=0^l j! (ab')^j (ab)^-j/k! L_j^k-j (-ab) (a'b')^-j/(l+1)! L_j^l+1-j(-a'b')
+ ∑_j=l+1^∞ j! (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j| (-ab) (a'b')^-l-1/j! L_l+1^j-l-1(-a'b') }
= (a')^l+1 b^k {1/k! (l+1)!∑_j=0^l j! (a'b)^-j L_j^k-j (-ab) L_j^l+1-j(-a'b')
+ ∑_j=l+1^∞ j! (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j| (-ab) (a'b')^-l-1/j! j L_l^j-l-1 (-a' b') + a' b' L_l^j-l(-a'b')/l+1}
= (a')^l+1 b^k/l+1{1/k! l!∑_j=0^l j! (a'b)^-j L_j^k-j (-ab) L_j^l+1-j(-a'b')
+ (a'b')^-l-1∑_j=l+1^∞ (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j| (-ab) j L_l^j-l-1 (-a' b')
+ ∑_j=l+1^∞ j! (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j| (-ab) (a' b')^-l/j! L_l^j-l(-a'b')}
= (a')^l+1 b^k/l+1{1/k! l!∑_j=0^l j! (a'b)^-j L_j^k-j (-ab) L_j^l+1-j(-a'b')
+ (a'b')^-l-1∑_j=l+1^k (ab')^j (ab)^-j/k! L_j^k-j (-ab) j L_l^j-l-1 (-a' b')
+ (a'b')^-l-1∑_j=k+1^∞ (ab')^j (ab)^-k/j! L_k^j-k (-ab) j L_l^j-l-1 (-a' b')
+ g_kl/(a')^l b^k - ∑_j=0^l j! (ab')^j (ab)^-j/k! L_j^k-j (-ab) (a'b')^-j/l! L_j^l-j(-a'b') }
= a'/l+1 g_kl + (a')^l+1 b^k/k! (l+1)!∑_j=0^l j! (a'b)^-j L_j^k-j (-ab) ( L_j^l+1-j(-a'b') - L_j^l-j(-a'b') )
+ b^k (b')^-l-1/k! (l+1)∑_j=l+1^k j ( b'/b)^j L_j^k-j (-ab) L_l^j-l-1(-a' b')
+a^-k+1 (b')^-l/l+1∑_j=k+1^∞(ab')^j-1/(j-1)! L_k^j-k(-ab) L_l^j-l-1(-a'b')
= a'/l+1 g_kl + (a')^l+1 b^k/k! (l+1)!∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b')
+ b^k (b')^-l-1/k! (l+1)∑_j=l+1^k ( b'/b)^j j L_j^k-j (-ab) L_l^j-l-1(-a' b')
+a^-k+1 (b')^-l/l+1∑_j=k^∞(ab')^j/j! L_k^j+1-k(-ab) L_l^j-l(-a'b')
= a'/l+1 g_kl + (a')^l+1 b^k/k! (l+1)!∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b')
+ b^k (b')^-l-1/k! (l+1)∑_j=l+1^k ( b'/b)^j( k L_j-1^k-j(-ab) + ab L_j-1^k-j+1(-ab) ) L_l^j-l-1(-a' b')
+a^-k+1 (b')^-l/l+1∑_j=k^∞(ab')^j/j!( L_k^j-k (-ab) + L_k-1^j-k+1 (-ab) ) L_l^j-l(-a'b')
= a'/l+1 g_kl + (a')^l+1 b^k/k! (l+1)!∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b')
+ b^k (b')^-l-1/(k-1)! (l+1)∑_j=l+1^k ( b'/b)^j L_j-1^k-j(-ab) L_l^j-l-1(-a' b')
+ a b^k+1 (b')^-l-1/k! (l+1)∑_j=l+1^k ( b'/b)^j L_j-1^k-j+1(-ab) L_l^j-l-1(-a' b')
+a^-k+1 (b')^-l/l+1∑_j=k^∞(ab')^j/j! L_k^j-k (-ab) L_l^j-l(-a'b')
+a^-k+1 (b')^-l/l+1∑_j=k^∞(ab')^j/j! L_k-1^j-k+1 (-ab) L_l^j-l(-a'b') ,
where we have used (<ref>) in the third and eighth equalities and (<ref>) in the seventh and eighth. On the other hand we have
g_kl = (a')^l b^k ∑_j=0^∞ j! (ab')^j (ab)^-min(k,j)/max(k,j)! L_min(k,j)^|k-j| (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b')
= (a')^l b^k ∑_j=0^k-1 j! (ab')^j (ab)^-j/k! L_j^k-j (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b')
+ (a')^l b^k ∑_j=k^∞ j! (ab')^j (ab)^-k/j! L_k^j-k (-ab) (a'b')^-l/j! L_l^j-l (-a'b')
= (a')^l b^k /k!∑_j=0^k-1 j! ( b'/b)^j L_j^k-j (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b')
+ a^-k (b')^-l∑_j=k^∞(ab')^j/j! L_k^j-k (-ab) L_l^j-l (-a'b')
and
g_k-1,l = (a')^l b^k-1∑_j=0^∞ j! (ab')^j (ab)^-min(k-1,j)/max(k-1,j)! L_min(k-1,j)^|k-1-j| (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b')
= (a')^l b^k-1∑_j=0^k-1 j! (ab')^j (ab)^-j/(k-1)! L_j^k-1-j (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b')
+ (a')^l b^k-1∑_j=k^∞ j! (ab')^j (ab)^-k+1/j! L_k-1^j-k+1 (-ab) (a'b')^-l/j! L_l^j-l (-a'b')
= (a')^l b^k-1/(k-1)!∑_j=0^k-1 j! ( b'/b)^j L_j^k-1-j (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b')
+ a^-k+1 (b')^-l∑_j=k^∞(ab')^j/j! L_k-1^j-k+1 (-ab) L_l^j-l (-a'b') .
Therefore,
g_k,l+1 = a'/l+1 g_kl + (a')^l+1 b^k/k! (l+1)!∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b')
+ b^k (b')^-l-1/(k-1)! (l+1)∑_j=l+1^k ( b'/b)^j L_j-1^k-j(-ab) L_l^j-l-1(-a' b')
+ a b^k+1 (b')^-l-1/k! (l+1)∑_j=l+1^k ( b'/b)^j L_j-1^k-j+1(-ab) L_l^j-l-1(-a' b')
+ a/l+1[ g_kl - (a')^l b^k/k!∑_j=0^k-1 j! ( b'/b)^j L_j^k-j (-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j|(-a'b') ]
+ 1/l+1[ g_k-1,l - (a')^l b^k-1/(k-1)!∑_j=0^k-1 j! ( b'/b)^j L_j^k-j-1(-ab) (a'b')^-min(l,j)/max(l,j)! L_min(l,j)^|l-j| (-a'b') ]
=: 1/l+1 g_k-1,l + a+a'/l+1 g_kl + 1/k! (l+1) R ,
where we have defined
R := (a')^l+1 b^k/l!∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b') + k b^k (b')^-l-1∑_j=l+1^k ( b'/b)^j L_j-1^k-j(-ab) L_l^j-l-1(-a' b')
+ a b^k+1 (b')^-l-1∑_j=l+1^k ( b'/b)^j L_j-1^k-j+1(-ab) L_l^j-l-1(-a' b') - a (a')^l b^k ∑_j=0^l-1 j! ( b'/b)^j L_j^k-j (-ab) (a'b')^-j/l! L_j^l-j(-a'b')
- a (a')^l b^k ∑_j=l^k-1 j! ( b'/b)^j L_j^k-j (-ab) (a'b')^-l/j! L_l^j-l(-a'b') - k (a')^l b^k-1∑_j=0^l-1 j! ( b'/b)^j L_j^k-j-1(-ab) (a'b')^-j/l! L_j^l-j (-a'b')
- k (a')^l b^k-1∑_j=l^k-1 j! ( b'/b)^j L_j^k-j-1(-ab) (a'b')^-l/j! L_l^j-l (-a'b')
= (a')^l+1 b^k/l!∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b') + k b^k-1 (b')^-l∑_j=l^k-1( b'/b)^j L_j^k-j-1(-ab) L_l^j-l(-a' b')
+ a b^k (b')^-l∑_j=l^k-1( b'/b)^j L_j^k-j(-ab) L_l^j-l(-a' b') - a (a')^l b^k/l!∑_j=0^l-1 j! (a'b)^-j L_j^k-j (-ab) L_j^l-j(-a'b')
- a b^k (b')^-l∑_j=l^k-1( b'/b)^j L_j^k-j (-ab) L_l^j-l(-a'b') - k (a')^l b^k-1/l!∑_j=0^l-1 j! (a'b)^-j L_j^k-j-1(-ab) L_j^l-j (-a'b')
- k b^k-1 (b')^-l∑_j=l^k-1( b'/b)^j L_j^k-j-1(-ab) L_l^j-l (-a'b') .
Here the second term cancels the last and the third cancels the fifth, so we have
R = (a')^l b^k-1/l!{ a' b ∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b')
- ab ∑_j=0^l-1 j! (a'b)^-j L_j^k-j (-ab) L_j^l-j(-a'b') - k ∑_j=0^l-1 j! (a'b)^-j L_j^k-j-1(-ab) L_j^l-j (-a'b') }
= (a')^l b^k-1/l!{ a' b ∑_j=1^l j! (a'b)^-j L_j^k-j (-ab) L_j-1^l+1-j(-a'b') - ∑_j=0^l-1 j! (a'b)^-j (j+1) L_j+1^k-j-1 (-ab) L_j^l-j(-a'b') }
= 0 ,
where in the second equality we have used (<ref>).
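As an independent sanity check on the recursion just established, g_{k,l+1} = ( g_{k-1,l} + (a+a') g_{kl} )/(l+1), the short script below (requiring SciPy) evaluates both sides directly from the truncated series definition of g_{kl} used above. The index pairs are restricted to k > l, matching the ordering assumed in the intermediate sums, and the parameter values are arbitrary test values assumed small enough for the truncated series to have converged; this is a numerical illustration only, not part of the proof.
```python
# Numerical check of g_{k,l+1} = ( g_{k-1,l} + (a+a') g_{k,l} ) / (l+1),
# using the series definition of g_{kl} truncated at j = j_max.
import math
from scipy.special import eval_genlaguerre  # generalized Laguerre L_n^{(alpha)}(x)

def g(k, l, a, b, ap, bp, j_max=60):
    """Truncated series for g_{kl}, with ap = a' and bp = b'."""
    total = 0.0
    for j in range(j_max + 1):
        mk, Mk = min(k, j), max(k, j)
        ml, Ml = min(l, j), max(l, j)
        coef = math.factorial(j) / (math.factorial(Mk) * math.factorial(Ml))
        total += (coef * (a * bp) ** j
                  * (a * b) ** (-mk) * eval_genlaguerre(mk, abs(k - j), -a * b)
                  * (ap * bp) ** (-ml) * eval_genlaguerre(ml, abs(l - j), -ap * bp))
    return ap ** l * b ** k * total

a, b, ap, bp = 0.7, 0.9, 0.6, 0.8                       # arbitrary test values
for k, l in [(2, 0), (2, 1), (3, 0), (3, 1), (3, 2)]:   # k > l, as in the derivation
    lhs = g(k, l + 1, a, b, ap, bp)
    rhs = (g(k - 1, l, a, b, ap, bp) + (a + ap) * g(k, l, a, b, ap, bp)) / (l + 1)
    print(k, l, lhs, rhs)   # the two columns should agree if the recursion holds
```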
§ LEGGETT-GARG CHSH INEQUALITY VIOLATION
Here we compute the maximal violation of the Leggett-Garg CHSH inequality (<ref>) using states and measurements as obtained in the macroscopic limit (see main text). For convenience, we will work with phase-space representation of operators. In particular, we define
W_ρ(x,p) := 1/π∫_-∞^+∞ ds e^2 i psx-sρx+s ,
E(x,p) := 2 ∫_-∞^+∞ ds e^-2 ips|x-s⟩⟨x+s| ,
so that ρ = ∫ dx dp W_ρ(x,p) E(x,p). Defining K_ȷ() := K( | cosȷ _x + sinȷ _y) corresponding to the case where Alice measures spin in the (cosȷ, sinȷ, 0) direction, we have
K_0() ρ K_0()^† = ∫_-∞^+∞ dx dp W_ρ(x,p) K_0() E(x,p) K_0()^†
= ∫_-∞^+∞ dx dp W_ρ(x,p) · 2 ∫_-∞^+∞ ds e^-2ip s K_0() |x-s⟩⟨x+s| K_0()^†
= 2 ∫_-∞^+∞ dx dp ds W_ρ(x,p) e^-2ip s∫_-∞^+∞ dy e^-(y-)^2/(2 ^2)/(π^2)^1/4 |y⟩⟨y|x-s⟩∫_-∞^+∞ dy' e^-(y'-)^2/(2 ^2)/( π^2)^1/4⟨x+s|y'⟩⟨y'|
=2/√(π^2)∫_-∞^+∞ dx dp ds W_ρ(x,p) e^-2ip s e^-(x-s-)^2/(2 ^2) e^-(x+s-)^2/(2 ^2) |x-s⟩⟨x+s|
= 2/√(π^2)∫_-∞^+∞ dx dp ds W_ρ(x,p) e^-2ip s e^-(x-)^2/^2 e^-s^2/^2 |x-s⟩⟨x+s| .
Therefore
P_ȷ(,) = [ K_ȷ() K_0() ρ K_0()^† K_ȷ()^†]
= 2/√(π^2)∫_-∞^+∞ dx dp ds W_ρ(x,p) e^-2ip s e^-(x-)^2/^2 e^-s^2/^2 [ K_ȷ() |x-s⟩⟨x+s| K_ȷ()^†]
= 2/√(π^2)∫_-∞^+∞ dx dp ds W_ρ(x,p) e^-2ip s e^-(x-)^2/^2 e^-s^2/^2 ∫_-∞^+∞ dy e^-(y-)^2/^2/√(π^2)⟨x+s|y⟩_ȷ _ȷ⟨y|x-s⟩ .
Now, using that <cit.>
⟨x|y⟩_ȷ = 1/√(2 π |sinȷ |)exp{ - i cosȷ/2 sinȷ( x- y/cosȷ)^2 } ,
the above expression reads
P_ȷ (,) = 1/π^2 ^2 | sinȷ |∫_-∞^+∞ dx dp ds dy W_ρ(x,p) e^(x,p,s,y) ,
where
(x,p,s,y) = -2ip s - (x-)^2/^2 - s^2/^2 - (y-)^2/^2 -i cosȷ/2 sinȷ( x + s - y/cosȷ)^2 + i cosȷ/2 sinȷ( x - s - y/cosȷ)^2
= -2ip s - (x-)^2/^2 - s^2/^2 - (y-)^2/^2 - 2 i cosȷ/sinȷ( x - y/cosȷ) s .
Integrating in s gives
P_ȷ(,) = 1/√(π^3 ^2) |sinȷ |∫_-∞^+∞ dx dp dy W_ρ(x,p) e^(̱x,p,y) ,
where
(̱x,p,y) = - (x-)^2/^2 - (y-)^2/^2 - ^2/4[ 2 p + 2 cosȷ/sinȷ( x - y/cosȷ) ]^2
= - (x-)^2/^2 - y^2/^2 - ^2/^2 + 2 y/^2 - ^2 p^2 - ^2 cos^2 ȷ/sin^2 ȷ( x - y/cosȷ)^2 - 2 ^2 p cosȷ/sinȷ( x - y/cosȷ)
= - (x-)^2/^2 - y^2/^2 - ^2/^2 + 2 y/^2 - ^2 p^2 - ^2 x^2 cos^2 ȷ/sin^2 ȷ - ^2 y^2/sin^2 ȷ + 2 ^2 x y cosȷ/sin^2 ȷ - 2 ^2 p x cosȷ/sinȷ + 2 ^2 p y /sinȷ
= - (x-)^2/^2 - ^2/^2 - ^2/sin^2 ȷ p^2 sin^2 ȷ - ^2/sin^2 ȷ x^2 cos^2 ȷ - 2 ^2/sin^2 ȷ x cosȷ p sinȷ
- ^4 + sin^2 ȷ/^2 sin^2 ȷ y^2 + ( 2/^2 + 2 ^2 x cosȷ/sin^2 ȷ + 2 ^2 p/sinȷ) y .
Integrating in y gives
P_ȷ(,) = 1/π√(^4 + sin^2 ȷ)∫_-∞^+∞ dx dp W_ρ(x,p) e^(x,p) ,
where
(x,p) = - (x-)^2/^2 - ^2/^2 - ^2/sin^2 ȷ (x cosȷ + p sinȷ)^2 + 1/^2sin^2 ȷ/^4 + sin^2 ȷ( + ^4/sin^2 ȷ (x cosȷ + p sinȷ) )^2
= - (x-)^2/^2 - 1/^2( 1 - sin^2 ȷ/^4 + sin^2 ȷ) ^2 - ^2/sin^2 ȷ( 1- ^4/^4 + sin^2 ȷ) (x cosȷ + p sinȷ)^2
+ 2 ^2/^4 + sin^2 ȷ (x cosȷ + p sinȷ )
= - (x-)^2/^2 - ^2/^4 + sin^2 ȷ^2 - ^2/^4 + sin^2 ȷ (x cosȷ + p sinȷ)^2 + 2 ^2/^4 + sin^2 ȷ (x cosȷ + p sinȷ )
= - (x-)^2/^2 - ^2/^4 + sin^2 ȷ( x cosȷ + p sinȷ - )^2 .
So we have
P_ȷ (,) = 1/π√(^4 + sin^2 ȷ)∫_-∞^+∞ dx dp W_ρ (x, p ) exp{ - (x-)^2/^2 - ^2/^4 + sin^2 ȷ( x cosȷ + p sinȷ - )^2 } .
Then, the correlator of the random variables()and()is
() ()_ȷ = ∫_-∞^+∞ d d () () P_ȷ(, )
= 1/π√(^4 + sin^2 ȷ)∫_-∞^+∞ dx dp W_ρ (x, p ) ∫_-∞^+∞ d () e^ - (x-)^2/^2∫_-∞^+∞ d () e^ - ^2 ( - x cosȷ - p sinȷ )^2/^4 + sin^2 ȷ
= 1/π√(^4 + sin^2 ȷ)∫_-∞^+∞ dx dp W_ρ (x, p ) √(π^2) ( x/) √(π^4 + sin^2 ȷ/^2) ( x cosȷ + p sinȷ/√(^4 + sin^2 ȷ/^2))
= ∫_-∞^+∞ dx dp W_ρ (x, p ) ( x/) ( x cosȷ + p sinȷ/√(^4 + sin^2 ȷ/^2)) .
Now let us compute the Wigner function of the state ρ = ∑_k,l c_kl |l⟩⟨k|:
W_ρ (x,p) = 1/π∑_k,l c_kl∫_-∞^+∞ ds e^2ips⟨x-s|l⟩⟨k|x+s⟩
= 1/π√(π)∑_k,l c_kl/√(2^k k! 2^l l!)∫_-∞^+∞ ds e^2ips e^-(x-s)^2/2 H_l(x-s) e^- (x+s)^2/2 H_k (x+s)
= e^- x^2/π√(π)∑_k,l c_kl/√(2^k k! 2^l l!)∫_-∞^+∞ ds e^-s^2 + 2ips H_l (x-s) H_k(x+s)
= e^-x^2/π√(π)∑_k,lc_kl/√(2^k k! 2^l l!)∫_-∞^+∞ ds e^-(s - ip)^2 - p^2 H_l(x - s ) H_k( x+s) .
Defining := s - ip we have
W_ρ (x,p) = e^- x^2 - p^2/π√(π)∑_k,l c_kl/√(2^k k! 2^l l!)∫_-∞-ip^+∞-ip d e^-^2 H_l ( x- - ip) H_k ( x+ +ip ) .
Since the integrand is holomorphic, we can set to zero the imaginary shift in the integration contour. Then, using the property <cit.>
∫_-∞^+∞ dx e^-x^2 H_k (x + y) H_l ( x +z) = 2^max(k,l)√(π) min(k,l)! y^k-min(k,l) z^l-min(k,l) L_min(k,l)^|k-l| (-2yz) ,
we can write
W_ρ (x,p) = e^- x^2 - p^2/π∑_k,l c_kl (-1)^l√(2^max(k,l)min(k,l)!/2^min(k,l)max(k,l)!)
·( -x+ip )^l-min(k,l)(x + ip)^k-min(k,l) L_min(k,l)^|k-l|(-2(- x+ip)(x+ip) )
= e^-x^2 - p^2/π{∑_k≥ l c_kl (-1)^l √(2^k l!/2^l k!) (x+ip)^k-l L_l^k-l( 2 x^2 + 2 p^2 )
+∑_k< l c_kl (-1)^l √(2^l k!/2^k l!) (- x+ip)^l-k L_k^l-k( 2 x^2 + 2 p^2 ) } .
Plugging this back into the correlator (<ref>) yields
⟨ () () ⟩_ȷ = 1/π∑_k≥ l c_kl (-1)^l √(2^k l!/2^l k!)∫_- ∞^+ ∞ dx dp e^-x^2 - p^2 (x+ip)^k-l
· L_l^k-l( 2 x^2 + 2 p^2 ) ( x/) ( x cosȷ + p sinȷ/√(^4 + sin^2 ȷ/^2))
+ 1/π∑_k< l c_kl (-1)^l √(2^l k!/2^k l!)∫_- ∞^+ ∞ dx dp e^-x^2 - p^2 (- x+ip)^l-k
· L_k^l-k( 2 x^2 + 2 p^2 ) ( x/) ( x cosȷ + p sinȷ/√(^4 + sin^2 ȷ/^2)) .
Let us restrict to the 3-dimensional subspace spanned by the first three excitations in the Hilbert space, so that the sums over k and l only run up to 2. In this case we can analytically perform the integrals using the identities <cit.>
∫_-∞^+∞ dx e^-x^2 (ax+b) = √(π) ( b/√(1+a^2)) ,
∫_-∞^+∞ dx x e^-x^2 (ax+b) = a/√(1+a^2)exp( -b^2/1+a^2) ,
∫_-∞^+∞ dx x^2m e^-a x^2 (b x) (c x) = 2 (-1)^m/√(π)∂^m/∂ a^m[ 1/√(a)arctan( b c/√(a^2 +a (b^2 +c^2))) ] , (a > 0 , m=0, 1, … ) ,
∫_-∞^+∞ dx x^2 e^-x^2 (a x+b) = √(π)/2( b/√(1+a^2)) - a^2 b/(1+a^2)^3/2exp( - b^2/1+a^2) ,
∫_-∞^+∞ dx x^3 e^-x^2 (a x) = a/√(1+a^2)( 1 + 1/21/1+a^2) ,
∫_-∞^+∞ dx x^4 e^-x^2 (a x +b) = 3 √(π)/4( b/√(1+a^2)) - 3/2a^2 b/(1+a^2)^3/2[1 + 2 a^2 b^2/3(1+a^2)^2 + 1/1+a^2 ] exp( - b^2/1+a^2) .
The results are compact expressions, but too long to show here. Finally, the Leggett-Garg CHSH parameter is
C = ⟨ () () ⟩_ȷ_A^(1)-ȷ_B^(1) + ⟨ () () ⟩_ȷ_A^(1)-ȷ_B^(2) + ⟨ () () ⟩_ȷ_A^(2)-ȷ_B^(1) -⟨ () () ⟩_ȷ_A^(2)-ȷ_B^(2) .
Maximizing C over the four angles and the coefficients c_kl, we have the following violation of the inequality C ≥ 2:
C = 2/675 π( 577 + √(1244179) + 2700 arctan (1/3) ) ≃ 2.416 ,
obtained for the angles
ȷ_A^(1) = π/4 , ȷ_A^(2) = 3 π/4 , ȷ_B^(1) = π/2 , ȷ_B^(2) = 0 ,
and the state
|ψ⟩ = √(1/2 - 577/2 √(1244179)) |0⟩ + √(1/2 + 577/2 √(1244179)) |2⟩ . |
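The closed-form maximum and the optimal state quoted above can be checked numerically; the standalone snippet below only verifies the arithmetic of the quoted expressions (it does not redo the maximization itself).
```python
# Evaluate the quoted CHSH value and check normalization of the optimal state.
import math

C = 2.0 / (675.0 * math.pi) * (577.0 + math.sqrt(1244179.0)
                               + 2700.0 * math.atan(1.0 / 3.0))
print(C)   # ~2.416, above the classical bound of 2

c0 = math.sqrt(0.5 - 577.0 / (2.0 * math.sqrt(1244179.0)))  # coefficient of |0>
c2 = math.sqrt(0.5 + 577.0 / (2.0 * math.sqrt(1244179.0)))  # coefficient of |2>
print(c0**2 + c2**2)   # should be 1 (normalization)
```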
http://arxiv.org/abs/2409.02369v1 | 20240904013838 | Investigation of cloud cavitating flow in a venturi using Adaptive Mesh Refinement (AMR) | ["Dhruv Apte", "Mingming Ge", "Olivier Coutier-Delgosha"] | physics.flu-dyn | ["physics.flu-dyn"] |
Investigation of cloud cavitating flow in a venturi using Adaptive Mesh Refinement (AMR)
[1] Dhruv Apte, dhruvga@vt.edu
[2] Mingming Ge, mmge@vt.edu
[1,3] Olivier Coutier-Delgosha, ocoutier@vt.edu
*[1] Kevin T. Crofton Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, 24060, VA, USA
[2] Macao Environmental Research Institute, Faculty of Innovation Engineering, Macau University of Science and Technology, Macao, 999078, Macao SAR, China
[3] Univ. Lille, CNRS, ONERA, Arts et Metiers ParisTech, Centrale Lille, Lille, F-59000, France
Unsteady cloud cavitating flow is detrimental to the efficiency of hydraulic machinery like pumps and propellers due to the resulting side-effects of vibration, noise and erosion damage. Modelling such an unsteady and highly turbulent flow remains a challenging issue. In this paper, cloud cavitating flow in a venturi is calculated using the Detached Eddy Simulation (DES) model combined with the Merkle cavitation model. The Adaptive Mesh Refinement (AMR) method is employed to speed up the calculation and to investigate the mechanisms of vortex development in the venturi. The results indicate that the velocity gradients and the expansion and contraction of the fluid element strongly influence the formation of vortices throughout a cavitation cycle. In addition, the cavitation-turbulence coupling is investigated on the local scale by comparing with high-fidelity experimental data at several profile stations. While the AMR calculation predicts the time-averaged velocities and turbulence-related quantities well near the throat, it displays discrepancies further downstream, owing to the coarser grid refinement there, and under-performs compared to a traditional fixed-grid simulation. Additionally, the AMR calculation is unable to reproduce the cavity width observed in the experiments. Therefore, while AMR promises to speed up the calculation significantly by refining the grid only in regions of interest, its accuracy remains comparable to that of a traditional calculation for cavitating flows. Thus, this study provides a reference for employing AMR as a tool to speed up calculations while simulating turbulence-cavitation interactions accurately.
§ INTRODUCTION
Cavitation is a multiphase, highly unstable and turbulent phenomenon characterized by the formation of clouds of bubbles when the ambient pressure drops below the vapor pressure. These clouds of bubbles travel at high velocity and burst upon exiting the low-pressure area, generating shocks. These shocks can cause erosion damage, vibration, load asymmetry and other detrimental effects that can drastically impact the performance of marine engineering devices like propellers, pumps and turbines. Additionally, the capability to cause erosion can also be utilized in jet drilling in the hydrocarbon and geothermal energy sectors <cit.>. Therefore, it is important to investigate the phenomenon, especially the mechanism of periodic development of the main cavity followed by its shedding and collapse.
A primary component of cavitation is its interaction with turbulence and the resulting vortices. Gopalan & Katz <cit.> used PIV and high-speed photography to demonstrate that vortex generation was a result of the collapse of the vapor cavities in the cavity closure region. More recently, Arabnejad et al. <cit.> investigated cavitation over the leading edge of a NACA0009 hydrofoil using High-Speed Visualization (HSV) to observe the transformation of the shed cavity structures into horseshoe vortices. Ge et al. <cit.> conducted similar experiments in a venturi-type cavitation reactor and observed a similar horseshoe structure. They further concluded that the horseshoe broke up into a long, thin chain of bubbles, with the upper parts merging with the clouds downstream.
Concurrently, studies using numerical simulations to model cavitating flows have been gaining more and more interest. Successful modelling of cavitating flows typically necessitates the combination of a cavitation model and a turbulence model. Regarding turbulence models, Direct Numerical Simulation (DNS), where all turbulence scales are resolved, seems an evident approach, but its demand for an extremely fine mesh and computational power hinders its application in practice. Large Eddy Simulation (LES) appears to be a viable option, and several studies have been conducted to investigate the cavity and vortex dynamics driving cavitating flows <cit.>. In fact, LES has also been coupled with multi-scale Euler-Lagrangian approaches <cit.> to investigate both the large vapor volumes from an Eulerian point of view and the micro-scale bubble dynamics using a Lagrangian reference. It was observed that the collision of the re-entrant jet with the cavity interface results in the formation of a large number of bubbles. Tian et al. <cit.> further confirmed these findings with their multi-scale approach and concluded that the generation of the large number of bubbles is a result of the Kelvin-Helmholtz instability, triggered by the shear flow between water and vapor. Ji et al. <cit.> have conducted a comprehensive review of multi-scale techniques for simulating cavitating flows. However, LES also requires high-resolution grids, which results in higher computational costs <cit.>. Thus, Reynolds-averaged Navier-Stokes (RANS) models, where all turbulence scales are modelled, seem the practical option for modelling cavitating flows.
Coutier-Delgosha et al. <cit.> conducted RANS calculations using the k-ϵ Re-Normalization Group (RNG) and k-ω models with and without an empirical correction suggested by Reboud et al. <cit.> to simulate cavitating flows. They concluded that the standard models over-predict the eddy-viscosity and thus are unable to predict the periodic shedding of the cavity as observed in experiments. The same model was also employed by Ji et al. <cit.> to simulate the unsteady cavitating flow around a twisted hydrofoil. They noted that the cavitating flow induced boundary layer separation and reported large increases of vorticity at the cavity interface. Long et al. <cit.> advanced this approach by combining the Reboud correction with the Filter-Based Method (FBM) <cit.>, a recent type of model, termed as hybrid RANS-LES models that behave as RANS model near the wall and as LES model away from the wall, therefore combining the accuracy of LES models and computational efficiency of RANS models. They conducted a vorticity budget analysis, based on the Vorticity Transport Equation (VTE) in attached cavitation around a Clark-Y hydrofoil. They concluded that the baroclinic torque term in the VTE is the principal source of vorticity production during the collapse of the cavitation cloud. Sun et al. <cit.> conducted a similar calculation using Partially-Averaged Navier-Stokes model (PANS) proposed by Girimaji et al. <cit.> to simulate unsteady cavitation around a NACA0015 hydrofoil in thermo-sensitive fluid. They observed that the vortex structures and motion show considerable unsteady properties in the cavity shedding region at the trailing edge. Additionally, Apte et al. <cit.> conducted a systematic analysis of several RANS and hybrid RANS-LES models to simulate cloud cavitating flow inside a venturi nozzle and concluded that while these models are able to simulate the unsteady cavity shedding, they are unable to simulate the cavity dynamics on the local scale. They suggested grid refinement as a possible solution to ensure the LES zone occurs on the fine grid.
To address the resulting high requirement in computational resources, studies have proposed mesh refinement techniques that promote refinement in specific flow regions rather than refining the entire domain. Bai et al. <cit.> devised a Cartesian cut-cell method where the rotation factor, the ratio of the strain rate to the sum of the strain rate and vorticity tensors, was applied as a marker for cell refinement while modelling a tip-leakage cavitating flow generated by a hydrofoil. The methodology was successfully extended by Cheng et al. <cit.>, who conducted a systematic analysis of various LES models to simulate tip-vortex cavitation. They concluded the method was able to simulate tip-vortex cavitation accurately and captured the influence of the vorticity dilatation term in decreasing the vorticity inside the tip-leakage vortex section. Another technique, titled Adaptive Mesh Refinement (AMR), was proposed <cit.> that allocates a high-resolution grid based on an input property and can be a promising technique to better understand the vortex dynamics in cavitating flows. AMR has been used for various multiphase flow applications like primary atomization <cit.>, bubble dynamics <cit.> and cavitation as well. Li et al. <cit.> used AMR to aid LES for simulating cavitation over a cylindrical body. Wang et al. <cit.> employed a similar approach to model cavitating flow around a Clark-Y hydrofoil. However, both these papers have leveraged LES, which continues to be prohibitively expensive in some cases. In addition, while these studies delve into the aspects of cavitation generation mechanisms, there remains a gap in investigating the ability of AMR to capture the cavitation-turbulence interplay.
This study aims to address this gap by simulating cloud cavitation using AMR coupled with the Detached Eddy Simulation (DES) model, a hybrid RANS-LES model. Thus, the study attempts to strike a delicate balance between computational time and accuracy.
§ MATHEMATICAL FORMULATION AND NUMERICAL METHOD
§.§ Adaptive Mesh Refinement
To overcome the issue of using highly refined grids everywhere, AMR is used to focus the refinement dynamically in the regions where it is most needed. Mathematically, AMR can be formulated as refining a domain comprising non-overlapping rectangular grids G_l,k:
G_l = ∪_k G_l,k
where l=0, 1, 2, ... l_max represents refinement levels. In OpenFOAM <cit.>, AMR is based upon three key drivers: the refinement engine, the mesh cutter and the refinement tree. The refinement engine is responsible for ensuring the connectivity structures in the mesh while the mesh cutter refines the mesh as per the input. Finally, the refinement tree saves the refinement history and ensures a steady transition for mesh refinement. Formulated by Mitchell et al. <cit.>, the refinement tree is defined as:
T (G) = {Ω, {C(ν_i)}}
which contains a set of nodes Ω = {ν_i}_{i=0}^{M}, with each ν_i ∈ Ω containing a set of children C(ν_i) ⊂ Ω, where G represents the grid. Fig <ref> presents the workflow of the three key portions in AMR. The solver reads the refinement criterion as it reaches the refinement interval. This refinement criterion could be based on a field such as the turbulent kinetic energy or the pressure or, as in this case, the void fraction. Next, the solver proceeds to the refinement engine, which selects the appropriate zones for grid refinement. The mesh cutter then refines the specific areas while the refinement tree saves the entire refinement history. The refinement occurs in the form of an octree structure, which means that on every single refinement level the number of cells increases by a factor of eight. After refinement, a similar unrefinement process is conducted for cells that no longer fit the refinement criteria, with the solution being mapped to the new cells. The refinement criterion is based on the void fraction to ensure the grid refinement happens at the interface between the cavitating flow and the single-phase, turbulent flow.
§.§ Numerical models
§.§.§ Basic Governing Equations
As previously mentioned, modelling cavitating flows involves a coupling of a cavitation model and a turbulence model. The work here uses the Transport-Equation Model approach (TEM) where the two phases are considered to be strongly coupled and governed by the same momentum and mass transfer equations:
∂ (ρ_m u_i)/∂ t+∂ (ρ_m u_i u_j)/∂ x_j
=-∂ p/∂ x_i+∂/∂ x_j((μ_m +μ_t)(∂ u_i/∂ x_j+∂ u_j/∂ x_i- 2/3∂ u_k/∂ x_kδ_ij))
∂ρ_lα_l/∂ t+∂(ρ_lα_l u_j)/∂ x_j= ṁ^- + ṁ^+
ρ_m= ρ_lα_l+ρ_vα_v
μ_m= μ_lα_l+μ_vα_v
where u_j is the velocity component in the jth direction, ρ_m and μ_m are respectively the density and viscosity of the mixture phase, u is the velocity, p is the pressure, ρ_l and ρ_v are respectively the liquid and vapor density, μ_l and μ_v are respectively the liquid and vapor dynamic viscosity while μ_t represents the turbulent viscosity. α_l and α_v are respectively the liquid and vapor void fraction. The source (ṁ^+) and sink (ṁ^-) terms represent the condensation (vapor destruction) and evaporation (vapor formation) terms respectively and will be discussed below.
The separation of condensation and evaporation terms in the above equation is the principal concept of the TEM methodology. The separation aids in treating the two processes separately if needed. The TEM model used in the study is the Merkle model <cit.> which defines the evaporation and condensation terms as:
ṁ^-= C_dest min(p-p_sat,0)γρ_l/0.5 U_∞^2 t_∞ρ_v
ṁ^+= C_prod max(p-p_sat,0)(1-γ)/0.5 U_∞^2 t_∞
where γ is the liquid volume fraction, ρ_v and ρ_l are the vapor density and the liquid density respectively, p and p_sat are the pressure and the saturation pressure respectively, t_∞ is the free stream time scale and U_∞ is the free stream velocity. The empirical factors C_dest and C_prod are set as 80 and 1e-3 respectively, according to the study by Merkle et al. <cit.>.
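To make the roles of the two source terms concrete, the sketch below evaluates the Merkle evaporation and condensation rates exactly as written above. The function and variable names, and the sample values of p_sat, ρ_l, ρ_v and t_∞, are illustrative assumptions for demonstration only; this is not the solver implementation.
```python
# Merkle mass-transfer terms as defined above (SI units assumed).
def merkle_source_terms(p, gamma, p_sat, rho_l, rho_v, U_inf, t_inf,
                        C_dest=80.0, C_prod=1e-3):
    """Return (m_dot_minus, m_dot_plus): the evaporation and condensation terms above."""
    scale = 0.5 * U_inf**2 * t_inf
    m_dot_minus = C_dest * min(p - p_sat, 0.0) * gamma * rho_l / (scale * rho_v)
    m_dot_plus = C_prod * max(p - p_sat, 0.0) * (1.0 - gamma) / scale
    return m_dot_minus, m_dot_plus

# Example: a cell below the saturation pressure, so only the evaporation term is active.
print(merkle_source_terms(p=2000.0, gamma=0.9, p_sat=2300.0,
                          rho_l=998.0, rho_v=0.017, U_inf=8.38, t_inf=0.01))
```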
§.§.§ Turbulence model
The turbulence model used is the Detached Eddy Simulation (DES) model with the k-ω SST model as the baseline RANS model <cit.>. Here, the LES model calculates the turbulent separated zone while the RANS model is used to model the near-wall zone. The switch from RANS to LES and vice versa is defined by the length scale:
L_DES= min (L_t, C_DESΔ)
where L_t is the turbulent length scale defined as:
L_t = √(k)/(β^* ω)
and C_DESΔ is the DES filter length with C_DES = 0.61
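The switch can be illustrated in a few lines: the function below returns the DES length scale and indicates which branch (RANS or LES) is limiting for a given turbulent state and cell size. The value β^* = 0.09 and the sample inputs are assumptions used only for illustration.
```python
# DES length-scale switch, following the definitions above.
def des_length_scale(k, omega, delta, c_des=0.61, beta_star=0.09):
    """Return (L_DES, branch) for turbulent kinetic energy k, specific rate omega, cell size delta."""
    l_t = (k ** 0.5) / (beta_star * omega)   # turbulent length scale
    l_des = min(l_t, c_des * delta)          # DES length scale
    return l_des, ("RANS" if l_t <= c_des * delta else "LES")

print(des_length_scale(k=0.01, omega=5000.0, delta=1e-3))  # RANS branch (near-wall-like state)
print(des_length_scale(k=0.05, omega=50.0, delta=1e-3))    # LES branch (detached region)
```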
§ COMPUTATIONAL SETUP
A converging-diverging (venturi) nozzle is selected as the geometry for the study, as shown in <ref>. The venturi nozzle has been widely used in jetting for geothermal reservoirs and the hydrocarbon industry <cit.>. The geometrical setup here is identical to the experiment <cit.>. The nozzle height at the inlet and outlet is originally 21 mm but steadily drops to 10 mm at the throat. The venturi has a converging angle of 18 degrees and a divergent angle of 8 degrees (see Fig <ref>). Velocity inlet and pressure outlet boundary conditions are adopted. For consistency and validation with the experiments, the velocity at the inlet is set to 8.38 m/s while the pressure at the outlet is adjusted to ensure both experiments and simulations have the same mean cavity length. The Reynolds number is 1.8 × 10^5. To evaluate whether AMR is able to capture the cavity dynamics accurately at the local scale, a similar standard mesh calculation is conducted. The standard mesh consists of 8.4 million cells, since the cell limit of the AMR mesh is 8.4 million cells as well <cit.>. Table <ref> shows the mean cavity lengths of the two cases, as compared to the experimental case. All three cases have the same mean cavity length, thus validating the simulations and providing an opportunity to investigate the cavitation-vortex interaction and the cavitation-turbulence interplay.
The calculations are conducted using interPhaseChangeFoam and interPhaseChangeDyMFoam.
interPhaseChangeFoam is an unsteady, isothermal solver that couples a cavitation model with a turbulence model, while the latter is its sister solver that uses adaptive mesh refinement. It uses a dynamicMeshDict for the dynamic mesh properties as input. The process of refining the grid is dependent on the void fraction field, ranging from 0.01 to 0.9. This ensures the grids near the vapor-water interface are refined. For this study, the mesh properties are based on the void fraction field with a maximum cell limit of 8.4 million cells from a base size of 3.3 million cells (see Fig <ref>). Both solvers utilize the PIMPLE algorithm, a hybrid algorithm comprising both the PISO and SIMPLE algorithms used in OpenFOAM. The PIMPLE algorithm consists of three parts:
* momentum predictor
* pressure solver
* momentum corrector
In PIMPLE, each timestep has at least one outer corrector loop where the fields are solved using a number of iterations. The algorithm exhibits a more robust pressure-velocity coupling by coupling a SIMPLE outer corrector loop with a PISO inner corrector loop and has been shown to display better numerical stability.
Regarding numerical schemes, the volume fraction interpolation uses the Gauss vanLeer scheme. To mitigate the unboundedness, a semi-implicit multi-dimensional limiter for explicit solution (MULES) is implemented: an implicit corrector step is first implemented corresponding to the discretization schemes, and then an explicit correction is applied with the MULES limiter. The calculations are run at a fixed timestep of 1e-5 s. However, to ensure stability of the calculation, the AMR mesh is run at a timestep subjected to the maximum Courant number, set to 1 in this study. The refineInterval utility is set to 10, thus refining the mesh every 10 timesteps. The refinement is based on the alpha.water field, i.e. the void fraction field, and set between the limits of 0.01 and 0.9. This ensures the mesh is refined at lower void fraction values, where the cavity is expected to develop.
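The cell-flagging logic described above can be sketched in a few lines: a cell is marked for refinement when its alpha.water value lies strictly between the limits 0.01 and 0.9, i.e. near the vapor-water interface, subject to the 8.4-million-cell budget (each octree split of a cell adds seven cells). Treating alpha.water as the liquid fraction and the simple list-based bookkeeping are assumptions made for illustration; this is not OpenFOAM code.
```python
# Toy illustration of the void-fraction-based refinement criterion.
def flag_cells_for_refinement(alpha_water, lower=0.01, upper=0.9,
                              n_cells=3_300_000, max_cells=8_400_000):
    """Return indices of cells flagged for refinement under the interface criterion."""
    flagged = [i for i, a in enumerate(alpha_water) if lower < a < upper]
    budget = max((max_cells - n_cells) // 7, 0)   # each octree split adds 7 cells
    return flagged[:budget]

sample_alpha = [1.0, 0.95, 0.6, 0.3, 0.05, 0.005, 1.0]   # pure water ... pure vapor
print(flag_cells_for_refinement(sample_alpha))            # -> [2, 3, 4]
```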
The simulation is set up in three consecutive stages. First, it is run for 0.03 s with the vaporization coefficients in the cavitation model set to zero, thereby modelling a single-phase, turbulent flow. The next stage, consisting of 0.03 s, is a sinusoidal ramp at the end of which cavitation has been fully launched. At the third and final stage, the fully cavitating regime is activated. The focus of the work is on the fully cavitating regime, with AMR also initiated at this stage. To maintain lower computational costs, the full cavitation regime is run for 0.02 s, sufficient to capture a complete shedding cycle.
§ RESULTS
§.§ Cavitation flow structures
Fig <ref> shows the pressure and void fraction distributions at five successive time instants in the diverging section of the venturi. In the first column, the pressure distributions are shown, with the dark blue regions denoting the low-pressure regions. The second column represents the same snapshots with void fraction. Here, the blue region denotes vapor while the red region denotes water. The distributions show the evolution of cavitation starting from the first one, where a cavity is initiated at the throat of the venturi. Near the throat, a sharp drop in pressure is observed, responsible for the inception of the cloud cavity. Downstream, a group of smaller cloud cavities is observed. These cavities are remnants of the previous cycle. The next figures, Fig <ref> (c)-(d), demonstrate the growth of the cavity at the throat while the cavities downstream are convected further and collapse as they leave the low-pressure region. The next set of figures (e)-(f) show the cavity reaching its maximum size. In (g)-(h), it is observed that the large cavity at the throat is pinched-off due to a re-entrant jet rushing upstream, resulting in a secondary cloud cavity along with smaller vapor cavity clouds. The primary cavity continues to shrink and collapses while the secondary cavity continues to roll up and flow downstream. In (i)-(j), it is observed that another small incipient cavity develops at the throat while the detached secondary cavity of this cycle continues to be shed downstream. Thus, the snapshots demonstrate the periodic nature of cloud cavitating flows and the presence of a re-entrant jet responsible for this periodic shedding of the primary cavity cloud.
§.§ Adaptive Mesh Refinement inside the nozzle
As stated previously, the AMR method refines the grid only in the concerning region based on the input refinement criterion. To check the mesh refinement inside the venturi nozzle for unsteady cavitating flow, six snapshots are taken at the mid-plane of the venturi and focused on the diverging section of the nozzle. The grid is then visualized with the void fraction field. Fig <ref> represents the grid during the cavity initiation stage. A small cavity at the throat is observed followed by a highly refined grid , in sharp contrast to the structured grid at the top of the venturi nozzle. The grid refinement extends further beyond the initial cavity, possibly as a result of other smaller cavities throughout the region. Further downstream, the refinement exists in selected regions, where the remnants of the previous cavity shedding cycle are flowing downstream. As the incipient cavity enlarges, shown in Fig <ref>, the grid refinement closer to the throat is observed. However, further downstream where the detached cavities collapse after exiting the low-pressure region, the grid reverts back to the structured grid defined at the onset of the calculation. This hypothesis is substantiated in Fig <ref> where the cavity reaches its maximum length. The grid in this region is significantly refined, while the grid downstream reverts completely to the structured grid devised as the original setup. Thus, it can be noted that the AMR method works well in refining solely the low vapor regions and giving more insights about the flow physics driving the unsteady cavitating flow at a much reduced computational cost.
Fig <ref> shows the cavity detachment process. The presence of both a reduced primary cavity at the throat and a secondary cavity downstream results in a high grid refinement throughout the lower portion of the venturi nozzle, with some refinement zones slightly upwards as well, as a result of smaller shed vapor cavities in other planes. The cavity collapse process, depicted in Fig <ref>, shows the primary cavity collapsing and the detached cavity flowing downstream. Significant grid refinement continues to be conducted in the section occupied by the detached cavity downstream and around the smaller vapor cavities present closer to the throat. The presence of a refined grid in regions containing smaller vapor cavities illustrates that the AMR method is a very efficient tool to simulate unsteady cavitating flows, dramatically increasing the grid resolution where needed while striking a balance between grid refinement and computational cost.
§.§ Cavitation-vortex interaction
For a better understanding of the cavitating flow and the resulting vortex interaction, the Q-criterion is plotted for various stages in a cavitation cycle. The Q-criterion, the second invariant of the velocity gradient tensor, is defined as
Q = 1/2 (|Ω|^2 - |S|^2)
where Ω is the vorticity tensor and S is the rate of strain tensor.
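For reference, the Q-criterion of a single velocity-gradient tensor can be evaluated as below; the sample gradient values are arbitrary placeholders, since in the simulations the quantity is computed on the full CFD fields.
```python
# Q-criterion from a local 3x3 velocity-gradient tensor, per the definition above.
import numpy as np

def q_criterion(grad_u):
    """Q = 0.5 * (|Omega|^2 - |S|^2), with |.| the Frobenius norm."""
    S = 0.5 * (grad_u + grad_u.T)        # strain-rate tensor (symmetric part)
    Omega = 0.5 * (grad_u - grad_u.T)    # vorticity tensor (antisymmetric part)
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))

grad_u = np.array([[0.0, 2.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 0.1]])
print(q_criterion(grad_u))   # positive where rotation dominates strain
```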
The Q-criterion and corresponding vapor structures are presented in the following figures. Here, a snapshot of the venturi from the front is taken of contours of the α = 0.6.
Fig <ref> shows the void fraction and Q-criterion during cavitation inception. Here, the cavity is just initiated at the venturi throat while the remnants of the previous cycle appear further downstream, including the entire detached cloud cavity of the previous cycle.
To further understand the mechanism of cavitation-vortex interaction, the vorticity-transport equation is employed at the mid-plane in the z-direction to investigate the vorticity and the distribution of its contributing terms, shown in the following equation
Dω_z/Dt = [(ω·∇) V]_z - [ω (∇· V)]_z + ( ∇ρ_m×∇ p/ρ_m^2)_z + [(ν_m + ν_t) ∇^2ω]_z
Here, the Left Hand Side (LHS) denotes the rate of vorticity change while the Right Hand Side (RHS) terms indicate the vortex stretching, vortex dilatation, baroclinic torque and viscous diffusion of vorticity, respectively. The vortex stretching term describes the stretching and tilting of a vortex due to the velocity gradients. The vortex dilatation describes the expansion and contraction of a fluid element. The baroclinic torque is a result of the misalignment between the pressure and density gradients. The viscous diffusion term can be ignored in high Reynolds number flows <cit.> and thus is omitted in the study. The equation clearly shows the effects of the velocity and pressure gradients formed as a result of cavitating flow on the vorticity. The terms are written out as:
ω_z = ∂ V_y/∂ x - ∂ V_x/∂ y
[(ω·∇) V]_z = ω_x∂ V_z/∂ x + ω_y∂ V_z/∂ y + ω_z∂ V_z/∂ z
[ω (∇· V)]_z = ω_z (∂ V_x/∂ x+∂ V_y/∂ y+ ∂ V_z/∂ z)
( ∇ρ_m×∇ p/ρ_m^2)_z = 1/ρ_m^2 (∂ρ_m/∂ x·∂ p/∂ y - ∂ρ_m/∂ y·∂ p/∂ x)
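A point-wise evaluation of these three right-hand-side terms is sketched below; grad_V[i, j] denotes ∂V_i/∂x_j and all input values are placeholders, since the actual budget is evaluated on the CFD fields.
```python
# z-component vorticity budget terms, per the expressions above.
import numpy as np

def vte_z_terms(grad_V, omega, grad_rho, grad_p, rho_m):
    """Return (stretching, dilatation, baroclinic); omega and the gradients are 3-vectors."""
    stretching = omega @ grad_V[2, :]                      # (omega . grad) V_z
    dilatation = omega[2] * np.trace(grad_V)               # w_z (div V)
    baroclinic = (grad_rho[0] * grad_p[1]
                  - grad_rho[1] * grad_p[0]) / rho_m**2    # (grad rho x grad p)_z / rho_m^2
    # In the transport equation above these enter as +stretching - dilatation + baroclinic.
    return stretching, dilatation, baroclinic

grad_V = np.array([[0.0, 5.0, 0.0],
                   [-4.0, 0.5, 0.0],
                   [0.0, 0.0, -0.2]])
print(vte_z_terms(grad_V, omega=np.array([0.1, 0.0, -9.0]),
                  grad_rho=np.array([50.0, -20.0, 0.0]),
                  grad_p=np.array([1e4, 3e4, 0.0]), rho_m=500.0))
```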
Fig <ref> shows the different terms of the vorticity transport equation at the cavity initiation stage. It is observed that the vortex stretching and dilatation terms show highly positive values, especially at the throat, thus dominating the vortex formation. The baroclinic torque does not influence the vorticity considerably, and it is demonstrated that the velocity gradients and the expansion of the fluid element drive the cavitation-vortex interaction.
Fig <ref> shows the next stage of the cavitation shedding. The previously small incipient cavity grows larger, developing into a cloud shape. Here, the Q-criterion is negative across the primary cavity and the detached cavity of the previous cycle, indicating the vorticity tensor is smaller than the rate of strain tensor. However, it is observed that the vortex stretching and dilatation terms continue to dominate the vorticity tensor compared to the baroclinic torque.
Fig <ref> shows the Q-criterion and the vorticity equation terms as the cavity reaches its maximum size. Here, it is observed that the secondary cavity of the previous cycle has collapsed upon exiting the low-pressure region. The Q-criterion is distributed messily throughout the cavity, with the vortex stretching and vortex dilatation terms displaying similar behaviours: the terms are positive along the cavity interface but negative inside the cavity, showing the very drastic change in vorticity from the cavity interface to the center of the cavity. As the cavity reaches its maximum size, it encounters a re-entrant jet travelling in the direction opposite to the main flow that breaks the cavity into two: a reduced primary cavity at the throat and a detached secondary cavity downstream (Fig <ref>). Here, the Q-criterion is considerably high near the cavity interface but dominated by the strain rate tensor inside the cavity. While the vortex stretching term remains positive at the cavity-water interface at the point of cavity detachment, the vortex dilatation term becomes extreme at the trailing edge of the primary cavity and the leading edge of the secondary, rolled-up cavity. It can be observed that the shedding of the primary cavity leads to the generation of vortices in both directions. The figures demonstrate the dramatic change of vortices caused by cloud cavitation, the resulting interplay between the strain rate and vorticity tensors, and how the expansion and contraction of the fluid element overall dominates the process. The shed cavity flows downstream and collapses upon exiting the low-pressure region, as seen in Fig <ref>. At the same instant, a small incipient cavity is observed at the throat. Thus, the periodic shedding of cavity clouds is observed. With the collapse of the cloud cavity, the Q-criterion is positive in very few areas, demonstrating the role of the strain rate tensor in the process. In the areas where vorticity is observed, near the cavity-water interface of the detached cloud cavity, it is observed that vortex shedding dominates the process. While studies <cit.> have postulated that the baroclinic torque might be responsible for vorticity production during cloud cavity collapse, the baroclinic torque has not been observed to be extremely influential in vorticity generation in these numerical simulations. Throughout the stages of cavitation, it is observed that the velocity gradients and the growth and shrinking of the fluid element resulting from cavitation influence the vortex formation, thus providing insights into the cavitation-vortex interaction.
§.§ Cavity dynamics at the local scale
While the cavity dynamics at the global scale have been discussed, it is important to investigate if turbulence modelling with AMR is able to capture the cavity dynamics at the local scale.
Fig <ref> shows the venturi nozzle cut at the mid-plane along the z-axis with the profile stations for local analysis. These stations are 1.5 mm, 3 mm, 5 mm, 10 mm, 15 mm and 20 mm from the throat respectively. The profile stations have been placed at varying distances from the throat to capture the cavity and investigate if the numerical results are able to reproduce the turbulence properties observed in experiments. Figs <ref> and <ref> show the profiles for time-averaged velocities in the stream-wise and wall directions respectively. The black dots represent the experimental data, taken from Ge et al. <cit.>. Near the throat, the stream-wise velocity near the venturi's diverging wall is less than the flow velocity but jumps considerably between 1 and 2 mm from the wall. Downstream, the jump is much less pronounced, with the velocity increasing steadily. Both models are able to predict the stream-wise time-averaged velocity well, with slight differences in the 10 mm profile. Extending the discussion to the velocity in the wall direction, different results are obtained. The velocity jump away from the wall is less pronounced throughout the profiles as compared to the velocity in the stream-wise direction. The numerical simulations show considerable discrepancies near the throat, as they predict a sharper velocity jump slightly closer to the wall than observed in the experiments. This indicates that while the simulations are able to reproduce the same cavity length as measured in the experiments, they are unable to reproduce the same cavity width at the throat. This discrepancy could stem from the cavitation modelling strategy. In addition, subtle differences appear in the downstream profiles, starting from the 10 mm profile. Away from the wall, the AMR mesh predicts a higher velocity than the non-AMR mesh and the experimental data. At the farthest profile, the 20 mm one, the AMR profile shows some minor pulsations close to the bottom wall of the venturi which are not captured by the non-AMR mesh. These pulsations could be attributed to the smaller cavity clouds formed after the primary cavity detaches due to the re-entrant jet rushing upwards.
The analysis is extended to turbulence data like the Reynolds shear stress and the Turbulent Kinetic Energy (TKE) as well, in Figs <ref> and <ref>. It is observed that, at the throat, the AMR calculation predicts the Reynolds stress closer to the experimental data than the non-AMR mesh, which over-predicts the Reynolds shear stress. This trend extends until the 5 mm profile. It is posited that, since the AMR criterion is a function of low void fraction, the grid near the throat becomes considerably more refined than the standard mesh. As a result, the area is modelled by LES rather than URANS, as is the case in DES models, thus leading to better results as compared to standard meshes. However, downstream predictions show considerable discrepancies as the AMR mesh under-predicts the Reynolds stress while the standard mesh performs comparatively better. Downstream, the absence of low void fraction results in a coarser mesh as compared to the region close to the throat. The coarser mesh results in the modelling of the region by URANS rather than LES, thus under-predicting the turbulence stress, as previously observed in URANS simulations. The TKE plots show similar results. As we go downstream, the TKE increases continuously and farther away from the bottom wall. The AMR mesh under-predicts the TKE at the profiles near the throat but is able to predict the TKE much better away from the throat. The AMR mesh results are similar to the results displayed by the non-AMR mesh that contained 8.4 million cells throughout the calculation. Thus, while the AMR reproduces the same cavity dynamics, it provides insights into cavitating flow with considerably less computational time than a standard refined mesh, at the local scale as well.
§ CONCLUSIONS
This paper employs the Adaptive Mesh Refinement (AMR) method, coupled with the DES model, to simulate the unsteady cavitating flow in a converging-diverging (venturi) nozzle. The Merkle model is used for cavitation modelling. The vapor cavity structures, the mechanisms of cloud shedding and the cavitation-vortex interaction are examined, alongside the cavitation-turbulence coupling on the local level. The main conclusions are as follows:
* The numerical simulations show that cloud cavitation is a periodic process where a cavity initiates at the throat, followed by its growth until it reaches a maximum size before being pinched-off by a re-entrant jet. Further analysis of the cavitation-vortex interaction using the terms comprising vorticity transport equation demonstrated the extensive domination of the vortex stretching and the vortex dilatation terms throughout the cavitation cycle. This indicates the strong influence of the velocity gradients and the growth of the fluid element in the vorticity and therefore, the strong interplay between cavitation and vortex formation. The baroclinic torque is much smaller and less influential than other terms.
* Analysis of time-averaged velocities and turbulence-related aspects like the Reynolds stress and TKE at local profile stations showed the influence of AMR as compared to a non-AMR mesh calculation. While the AMR mesh is able to predict the velocity in the stream-wise direction well, it displays some differences in the wall-direction velocity in the downstream profiles. However, while the numerical simulations are able to reproduce the cavity lengths as measured in experiments, there is a significant difference in the cavity width at the throat, where the numerical simulations predict a thinner cavity.
* The AMR mesh calculation is able to predict the Reynolds stress data near the throat much better than the standard mesh calculation. However, the prediction worsens downstream, where the AMR calculation under-predicts the Reynolds stress. This is a consequence of the absence of low void fraction downstream, resulting in lower mesh refinement and the region being subsequently modelled by URANS. The TKE plots show that both calculations display similar results and reproduce the TKE downstream in close agreement with experiments, thus highlighting the use of AMR as a tool to speed up calculations for unsteady cavitating flows.
* Future avenues include running the AMR simulation with more refinement levels to investigate the results in a better way at a reduced computational cost. However, as this study sheds more light on turbulence-cavitation interaction, future AMR calculations should focus on grid refinement based on turbulence data rather than void fraction in order to model the turbulence-cavitation interplay more accurately, throughout the domain.
§ ACKNOWLEDGEMENTS
This work was supported by the Office of Naval Research, USA [grant number N00014-18-S-B001], the Macao Young Scholars Program (Project code: AM2022003) and Priority Postdoctoral Projects in Zhejiang Province (Project Code:341781). The authors would like to thank the ONR proposal manager Dr. Ki-Han Kim for his support. The authors would like to thank the anonymous reviewers for their feedback to enhance the quality of the manuscript.
§ COMPLIANCE WITH ETHICAL STANDARDS
* Conflict of interest: The authors declare they have no conflict of interest.
* Ethical approval: The article does not contain any studies with human participants or animals performed by any of the authors.
* Informed consent: Not applicable
* Funding:This work was supported by the Office of Naval Research, USA [grant number N00014-18-S-B001]. The authors would like to thank the ONR proposal manager Dr. Ki-Han Kim for his support.
|
http://arxiv.org/abs/2409.02481v1 | 20240904071330 | Word and Phrase Features in Graph Convolutional Network for Automatic Question Classification | ["Junyoung Lee", "Ninad Dixit", "Kaustav Chakrabarti", "S. Supraja"] | cs.CL | ["cs.CL"] |
3 September 2024
*Equal contribution
§ ABSTRACT
Effective question classification is crucial for AI-driven educational tools, enabling adaptive learning systems to categorize questions by skill area, difficulty level, and competence. This classification not only supports educational diagnostics and analytics but also enhances complex tasks like information retrieval and question answering by associating questions with relevant categories. Traditional methods, often based on word embeddings and conventional classifiers, struggle to capture the nuanced relationships in natural language, leading to suboptimal performance. To address this, we propose a novel approach leveraging graph convolutional networks (GCNs), named Phrase Question-Graph Convolutional Network (PQ-GCN) to better model the inherent structure of questions. By representing questions as graphs—where nodes signify words or phrases and edges denote syntactic or semantic relationships—our method allows GCNs to learn from the interconnected nature of language more effectively. Additionally, we explore the incorporation of phrase-based features to enhance classification accuracy, especially in low-resource settings. Our findings demonstrate that GCNs, augmented with these features, offer a promising solution for more accurate and context-aware question classification, bridging the gap between graph neural network research and practical educational applications.
§ INTRODUCTION
Question classification is vital in providing AI-driven education and training, assisting adaptive learning systems by categorizing questions into skill area or level of difficulty and competence, and providing educational diagnosis and analytics. Accurate automatic question classification also provides headway to more complex tasks such as information retrieval and question answering, in which the association of the given question with a certain category allows for retrieval of relevant contextual knowledge to formulate compatible answers. Traditional methods, relying on word embeddings and conventional classifiers, often struggle with capturing the nuanced relationships between words and phrases, leading to suboptimal classification performance. The complexity of natural language, especially in the context of educational and domain-specific questions, requires more sophisticated approaches that can understand and leverage the inherent structure of language.
Aside from text embeddings, graphs have recently been highlighted as a way to represent unstructured text data. Questions can be naturally represented as graphs, where nodes correspond to words or phrases, and edges capture the relationships between them, such as syntactic dependencies, semantic similarities, or proximity measures. This graph-based representation aligns perfectly with the strengths of graph convolutional network (GCN), which excels at learning from structured data. GCNs are designed to operate on graph-structured data, making them ideal for processing and understanding the interconnected nature of words and phrases within questions. This capability enables GCNs to capture the underlying structure and dependencies in a question, leading to more accurate and context-aware classification decisions. GCNs offer a promising solution by modeling the relationships between words, phrases, and their syntactic and semantic connections as graphs. This approach allows for a more nuanced and context-aware classification, bridging the gap between basic AI research and practical applications in education and beyond.
Question classification poses a unique problem even among the different types of text classification, as the amount of information available in a question tends to be limited compared to document-level text counterparts. While neither tackling the task of text classification with GCNs, nor automated question classification via artificial intelligence is new, our investigation aims to determine the feasibility of adapting GCNs for question classification. In particular, we explore additional phrase-based features in our proposed model, PQ-GCN, to enhance classification performance, and assess the effectiveness of these features in low-resource settings.
§ RELATED WORK
§.§ Automatic Question Classification
One of the earliest works on question classification was <cit.>, which made use of support vector machines (SVMs) and explored a variety of features like lexical, syntactic, and semantic features. Another reference work by <cit.> also discusses the usage of head words and hypernyms as features for SVM and maximum entropy (ME) algorithms. They demonstrated the effectiveness of statistical learning methods in classifying questions, while rule-based classification <cit.> via string matching in questions has also been explored in earlier works.
Following these works, a review of automatic question classifiers from 2012 to 2017 <cit.> revealed that there was an increase in the number of works proposing neural network-based methods, or using neural networks in combination with rule-based methods. The approaches range from concatenating pre-trained word embeddings to form a question embedding <cit.>, to combining CNN and LSTM in a hybrid model <cit.>, effectively balancing feature extraction and sequence modeling, which was shown to outperform traditional machine learning techniques and earlier deep learning models in classifying questions across various domains.
§.§ Text Classification Using Text Graphs
TextGCN <cit.> pioneered approaching text classification as a node classification problem. Yao et al. construct a corpus-level graph with document nodes and word nodes—pointwise mutual information (PMI) as word-word edge weights and term frequency-inverse document frequency (TF-IDF) as word-document edge weights—and apply a 2-layer GCN to obtain the class output. Several works propose similar graph construction and node initialization based on TextGCN, with more efficient or better regularized graph propagation <cit.>.
Other works have also explored multi-graphs instead of a single corpus-level graph. TensorGCN <cit.> builds a tensor graph from three separate graphs: a semantic graph via word embeddings, a syntactic graph via grammar parsing, and a sequential graph via PMI values. Each of these graphs has different word-word edge weights while sharing the same TF-IDF values for word-document edge weights, and they undergo intra- and inter-graph propagation. ME-GCN <cit.> constructs a graph that has multi-dimensional word-word, word-document, and document-document edges, using Word2Vec and Doc2Vec embeddings and similarity scores. SHINE <cit.> uses a hierarchy of component-level graphs (word, POS tag, and entity graphs), from whose GCN outputs a document-level graph is dynamically learned for short text classification.
While node classification is a transductive approach, where one needs to build a corpus-level graph consisting of both train and test document nodes, InducT-GCN <cit.> extends the task into an inductive approach, where a training corpus graph is constructed first, similarly to TextGCN, and any new test corpus is built as a virtual subgraph for inference.
Graph classification, in which you classify a document graph as opposed to a document node, has also been used as an inductive approach to text classification. Text-Level-GNN <cit.> uses a sliding window to build multiple graphs, each with a small number of nodes and edges, with trainable edge weights that are shared across the graphs for the same word pair. Huang et al. also uses a message passing mechanism instead of a GCN architecture, relying on aggregating neighbourhood information via max-pooling and combining them via weighted sums, instead of convolution functions.
Graph Fusion Network <cit.> constructs four individual global word co-occurrence graphs, and each document is represented with a set of four subgraphs. A GCN is applied to each subgraph, and the resulting output is concatenated, passed through an MLP layer, and average-pooled to obtain the final document embedding.
There are also combinatory methods such as HyperGAT (Hypergraph Attention Networks) <cit.> building hypergraphs for each document to capture high-level interaction between words, consisting of sequential hyperedges connecting all words in a sentence and semantic hyperedges connecting top-K words with LDA-based topic modeling. Attention mechanism is utilized at both node- and edge-level to update representations.
§ WHY USE PHRASE FEATURES?
As with all text classification tasks, feature extraction from questions poses a particular challenge, as the features need to be representative of the questions and yet be domain-agnostic to be generalizable across different classification frameworks. We identify a research gap in adapting GCNs for question classification in the lack of question classification-specific feature extraction and graph construction methods. While works discussed in the previous section discuss word and document relationships, phrase relationships have not been explored in the context of text classification with graph representations. The effectiveness of phrases have been discussed in topic modeling for questions <cit.>, and this section outlines the rationale to explore phrase relationships as a crucial feature for question classification.
Capturing Meaningful Chunks of Information
Unlike individual words, phrases (e.g., noun phrases and verb phrases) often encapsulate more specific and contextually relevant information. Consider the question "How do enzymes function in the human body?" Here, the phrase "enzymes function" (verb phrase) and "human body" (noun phrase) are pivotal in identifying this as a biological process question, likely classified under "Biology" or "Physiology." The verb phrase indicates the action or process in question, while the noun phrase specifies the domain of interest. In the question "Describe the steps to solve a quadratic equation," the phrases "steps to solve" (verb phrase) and "quadratic equation" (noun phrase) are linked by their relevance to problem-solving in mathematics.
This also applies to question analysis with part-of-speech (POS) tags. POS tags help in identifying the grammatical structure, but they do not capture the full semantic context. For example, knowing that "role" is a noun does not reveal its specific relevance in a question about "electric circuits."
Intra-Question Relationships
By extracting phrases and analyzing their semantic embeddings, we can understand the relationship between different components of the question. For instance, in "What are the causes of the French Revolution?", the proximity and relationship between "causes" and "French Revolution" (via PPMI or other metrics) are key to categorizing this question under "History" or "Political Science."
Disambiguation and Contextualization
Phrases provide additional context that can help disambiguate words that might have multiple meanings. For instance, in "What is the role of the cell in an electric circuit?", the noun phrase "role of the cell" and its proximity to "electric circuit" helps to clarify that "cell" refers to a component of an electric circuit rather than a biological cell.
While word-level features such as embeddings and POS tags are foundational for understanding the basic structure and meaning of a question, phrase-level features offer a more comprehensive and contextually rich understanding. By extracting noun phrases and verb phrases and analyzing their semantic embeddings and relationships, one can capture the nuanced meaning and intent of questions, leading to more accurate and effective classification models.
§ METHODOLOGY
§.§ Feature Extraction and Graph Construction
Given a corpus of question-label pairs, we perform basic text cleaning which includes removal of contractions and punctuation.
To construct meaningful graph representations from a corpus of question text, we selected several feature extraction techniques, each contributing uniquely to capturing different aspects of the textual data. A summary of the extracted features is provided in Table <ref>.
Words and PMI
We first tokenize the given question corpus by whitespace to obtain a unique set of words. A word-level graph is constructed, with word nodes connected by edges weighted with PMI values, representing local co-occurrence relationships. Specifically, we utilize positive PMI (PPMI), given by max(PMI(word_1, word_2), 0). Pre-trained word embeddings from word2vec <cit.> are also obtained for each word in the corpus, to be used as additional semantic information by concatenating with node embeddings.
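A minimal version of the PPMI edge weighting is sketched below, using question-level co-occurrence counts; the actual implementation may use sliding-window counts and smoothing, so the counting scheme here is an assumption made for illustration.
```python
# PPMI word-word edge weights from per-question token lists.
import math
from collections import Counter
from itertools import combinations

def ppmi_edges(token_lists):
    """Return {(w1, w2): PPMI} computed from co-occurrence within each question."""
    n_docs = len(token_lists)
    word_df, pair_df = Counter(), Counter()
    for tokens in token_lists:
        uniq = sorted(set(tokens))
        word_df.update(uniq)
        pair_df.update(combinations(uniq, 2))
    edges = {}
    for (w1, w2), c in pair_df.items():
        pmi = math.log((c * n_docs) / (word_df[w1] * word_df[w2]))
        if pmi > 0:                      # keep positive PMI only, as above
            edges[(w1, w2)] = pmi
    return edges

questions = [["what", "is", "an", "enzyme"],
             ["how", "do", "enzymes", "function"],
             ["what", "is", "a", "quadratic", "equation"]]
print(ppmi_edges(questions))
```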
Word-Level POS Tags
POS tags for each word are obtained via the default tag set from NLTK [<https://nltk.org>], and PPMI values between each pair of POS tags are calculated as edge weights for a POS tag graph.
Phrases and PMI
Phrases are extracted from each question text via regex matching for verb phrases and noun phrases with spaCy [<https://spacy.io>] in Python, and a unique set of phrases in the corpus is obtained. Each question is represented as a sequence of the verb phrases and noun phrases present in the text. Then a phrase-level graph (similar to the word-level graph) is constructed with PMI-weighted edges. Phrase embeddings are obtained from PhraseBERT <cit.> to provide additional semantic information.
Phrase-Level POS Tags
The phrase extraction process above also provides word-level POS tags for each word in the extracted phrases. These word-level POS tags are concatenated to form phrase-level POS tags. For example, in the question "Discuss the main objective of layout design rules", using the regex (ADJ)(NOUN), the noun phrase [main objective] is extracted, together with the phrase-level POS tag of (ADJ_NOUN). Then, the PPMI values between each pair of phrase-level POS tags are calculated as edge weights for a phrase-level POS tag graph.
Named Entity Recognition (NER)
Named entities (e.g., people, organizations, locations) carry significant meaning and often denote key elements within a question. A list of named entities is obtained from the NELL knowledge base <cit.>, and is used to extract named entities via string matching. Given the short-text nature of questions, it is difficult to obtain multiple entities from a single question to calculate co-occurrence statistics. Hence, we utilize TransE embeddings <cit.> for each named entity and calculate the cosine similarity between embeddings of each pair of named entities, to be used as edge weights in constructing a named entity graph.
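The entity-graph construction can be sketched as follows, assuming the TransE embeddings are available as a dictionary mapping each matched entity to a vector (an assumption about data layout rather than the original code):

import numpy as np

def entity_graph(entities, transe_emb):
    """Cosine-similarity edge weights between named entities, given their TransE embeddings."""
    E = np.stack([transe_emb[e] for e in entities])       # (n_entities, dim)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sim = E @ E.T                                          # pairwise cosine similarities
    return {(entities[i], entities[j]): float(sim[i, j])
            for i in range(len(entities)) for j in range(i + 1, len(entities))}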
For all 5 graphs constructed above, the node features are initialized as one-hot vectors. By combining these diverse feature extraction methods, we construct a comprehensive and multi-faceted graph representation of the question text, capturing various semantic, syntactic, and named entity information crucial for question classification.
§.§ Model Architecture
As we identify phrase-based features as the gap towards building a more robust automatic question classification model, we also identify a model architecture capable of incorporating phrase-based features as a modular add-on, as well as of consolidating each graph of different node and edge types into a single question embedding. Our initial approach was to concatenate the outputs of five individual 2-layer GCNs, followed by a linear layer and a softmax function. Various combinations of 2D convolution layers and pooling layers were also experimented with, but ultimately we found that the resulting question embedding did not provide sufficient information for classification and performed worse than baseline models.
Following the initial exploration, we base our output layers largely on SHINE <cit.>, where a dynamic graph of question nodes and cosine similarity edges is learned from the individual GCN outputs and the nodes are classified via label propagation in a final GCN layer. The training parameters are kept the same as provided by Wang et al. An overview of the feature extraction, graph construction, and model is provided in Figure <ref>.
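To make the two output stages concrete, the PyTorch sketch below pools node embeddings from each feature graph into question embeddings and builds the dynamic question-level graph from cosine similarities. The pooling-by-incidence scheme and the similarity threshold are simplifying assumptions; the SHINE-based implementation we actually use differs in detail.

import torch
import torch.nn.functional as F

def question_embeddings(gcn_outputs, incidence):
    """Pool per-graph node embeddings into question embeddings and concatenate across graphs.
    gcn_outputs: list of (n_nodes_g, d_g) tensors, one per feature graph.
    incidence:   list of (n_questions, n_nodes_g) row-normalized matrices linking questions to their nodes."""
    pooled = [inc @ h for inc, h in zip(incidence, gcn_outputs)]
    return torch.cat(pooled, dim=1)                        # (n_questions, sum of d_g)

def dynamic_graph(q_emb, threshold=0.5):
    """Question-level adjacency with cosine-similarity edges, used for label propagation."""
    q = F.normalize(q_emb, dim=1)
    adj = q @ q.T
    return adj * (adj > threshold)                         # keep only sufficiently similar question pairs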
§ EXPERIMENTS
§.§ Datasets
NU This dataset was obtained from Najran University <cit.>, containing 596 questions with 3 class labels of different cognitive complexities.
ARC This dataset was released as part of the AI2 Reasoning Challenge <cit.>, consisting of 279 questions labeled with 3 different reasoning capabilities.
LREC This is a collection of 345 middle school science questions, labeled with 3 different types of answers required by the respective questions <cit.>.
Bloom Bloom's Taxonomy <cit.> is one of the most well-established question classification frameworks used by educators. This dataset is a set of questions collected from different web sources, and manually classified into the six cognitive levels of Bloom's Taxonomy <cit.>.
TREC The Text REtrieval Conference (TREC) dataset contains ∼6000 questions from four sources—4,500 English questions published by USC <cit.>, about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10 which serves as the test set. These questions were manually labeled into 6 coarse and 50 fine class labels.
Aside from TREC, for which the dataset split was obtained from HuggingFace [<https://huggingface.co/datasets/CogComp/trec>], the train/test splits were generated with a pre-determined random seed.
§.§ Performance Evaluation
We compare the performance of our proposed model with selected baseline models in Table <ref>, including vanilla CNN and Bi-LSTM for non-graph classification models.
PQ-GCN outperforms the baseline models in the macro-averaged evaluation on the ARC, LREC, and TREC-coarse datasets, and is second to the best-performing model on the NU and Bloom datasets. The improvement over the base SHINE model is also highlighted, as we have kept the original features as proposed by Wang et al., showing the effectiveness of phrase-based features in improving question classification capabilities.
While the drop in performance on TREC-fine relative to its coarse-label counterpart is expected across all models due to the large number of labels, our proposed model suffers a larger drop than models such as HyperGAT or ME-GCN. TREC-fine was extensively labeled and was included in the experiments to determine model capability in a more extreme and granular question classification task. In realistic educational settings, it is unlikely to have a question set that is labeled as finely as TREC-fine.
§ CONSIDERATIONS FOR SYSTEM IMPLEMENTATION
In this section, we outline possible considerations for practical deployment of the trained model.
Compared with Transformer-based models <cit.>, which scale quadratically with sequence length, GCNs offer a less computationally expensive alternative to text embedding-based classification. This allows for greater accessibility to educators without access to appropriate compute resources.
In addition, node-classification-based question classification is quick to train: at every epoch, the corpus-level graph is passed through the network once to obtain the logits for all samples, unlike traditional neural networks where samples are passed through the network batch by batch. This allows for a quick refresh of the model based on any new data points or even a whole new dataset.
Label propagation via cosine similarity on the question-level dynamic graph also requires few labels, which matters because some practitioners may not have access to a well-labeled set of questions nor the resources for manual labeling. This is a crucial factor for learning analytics systems, and the fact that the proposed model performs well in low-resource settings shows its potential in such environments.
The proposed features can be made modular to suit the question corpus in context, i.e., each individual graph can be added to or removed from the model input to build the dynamic question-level graph. This allows for greater flexibility in deploying the model in various settings.
§ CONCLUSION AND FUTURE WORKS
The proposed method on applying GCNs to question classification shows promising results in capturing the intricate word and phrase relationships within question text. The next steps involve refining the model for specific educational and AI-driven applications, such as adaptive learning platforms or automated tutoring systems. Future work will focus on optimizing the model for deployment at scale, ensuring they can handle large volumes of data and provide real-time classification on user-provided questions. In conclusion, we hope to provide sufficient basis for further research into feature extraction methods for graph neural networks, especially in automatic question classification.
|
http://arxiv.org/abs/2409.03134v1 | 20240904235818 | Relative evolution of eclipsing binaries: A tool to measure globular cluster ages and He abundances | ["N. Cristi-Cambiaso", "M. Catelan", "A. A. R. Valcarce", "A. Papageorgiou"] | astro-ph.SR | ["astro-ph.SR", "astro-ph.GA"] |
Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, 7820436 Macul, Santiago, Chile
[nicolascristi;mcatelan]@uc.cl
Millennium Institute of Astrophysics, Nuncio Monseñor Sotero Sanz 100, Of. 104, Providencia, Santiago, Chile
Centro de Astro-Ingeniería, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, 7820436 Macul, Santiago, Chile
Departamento de Física, FACI, Universidad de Tarapacá, Casilla 7D, Arica, Chile
Department of Physics, University of Patras, 26500, Patra, Greece
Globular clusters (GCs) are among the oldest objects in the Universe for which an age can be directly measured, thus playing an important cosmological role. This age, on the other hand, depends sensitively on the He abundance, which cannot be reliably measured from spectroscopy in GC stars. Detached eclipsing binaries (DEBs) near the turnoff (TO) point may play an important role in this regard.
The aim of this study is to explore the possibility that, by working with differential measurements of stars that comprise a TO binary system, and assuming both stars have the same age and He abundance, one can achieve tighter, more robust, and less model-dependent constraints on the latter two quantities than otherwise possible by working with the absolute parameters of the stars.
We compare both absolute and differential parameters of the stars in V69, a TO DEB pair in the GC 47 Tuc, with two different sets of stellar evolutionary tracks, making use of a Monte Carlo technique to estimate its He abundance and age, along with their uncertainties.
We find that the relative approach can produce age and He abundance estimates that are in good agreement with those from the literature. We show that our estimates are also less model-dependent, less sensitive to [Fe/H], and more robust to inherent model systematics than those obtained with an absolute approach. On the other hand, the relative analysis finds larger statistical uncertainties than does its absolute counterpart, at least in the case of V69, where both stars have very similar properties.
For binary pairs in which one of the components is less evolved than the other, the statistical uncertainty can be reduced.
Our study suggests that the method proposed in this work may be useful to robustly constrain the He abundance and ages of GCs.
Relative evolution of eclipsing binaries:
A tool to measure globular cluster ages and He abundances
N. Cristi-Cambiaso1, 2
M. Catelan1, 2, 3
A. A. R. Valcarce2, 4
A. Papageorgiou5
Received April 17, 2024; accepted September 4, 2024
============================================================================================================================
§ INTRODUCTION
Detached eclipsing binary (DEB) systems are exceptional subjects for astronomical studies due to their orbits, which enable a direct determination of their masses, radii, and other stellar parameters <cit.>. This direct determination is achieved through the analysis of their photometric light curves and radial velocities, reaching errors in the stellar parameters as small as 1% in some cases <cit.>. Furthermore, when at least one star within a DEB system is located at (or very close to) the main sequence (MS) turnoff (TO) point, it becomes a valuable tool for accurately estimating ages, since the TO is a key age indicator in the realm of stellar evolution (see, e.g., the reviews by and , and references therein). The use of these DEBs for age dating of a stellar cluster was initially proposed by <cit.>, and has since become a common practice in numerous studies <cit.>. Several DEBs in open and globular clusters (GCs), in which at least one component is located near the TO point, are now known <cit.>.
One especially interesting system is the double-lined DEB V69-47 Tuc (hereafter, V69), located within the GC 47 Tucanae (47 Tuc, NGC 104). This binary system consists of two (very similar) low-mass stars that are both near the TO point, making V69 an excellent candidate for determining 47 Tuc's age. The significance of V69 in determining the age of 47 Tuc has not gone unnoticed, as it has been extensively utilized with this purpose in multiple studies <cit.>.
When attempting to match the empirically derived physical parameters of the V69 components, however, a problem arises, as the consideration of different helium abundances (Y, the fraction of He by mass) in the stellar models introduces a wide range of possible ages for the system, spanning several billion years. In other words, a strong degeneracy arises between the assumed Y of the binary and its estimated age. This poses a challenge, since it is not feasible to directly determine Y for moderately cool stars, such as the components of V69, given their inability to excite He atoms in their photospheres. Since V69 is a cluster member, one could in principle use measurements for other cluster stars for this purpose; however, the only stars for which this can be done directly are blue horizontal-branch stars, whose photospheric abundances are severely affected by diffusion, and whose measured He abundances are thus not representative of the cluster's initial Y <cit.>. While the chromospheric He i line at 10,830 Å may in principle provide a direct measurement <cit.>, the technique is still far from being able to yield representative Y values from equivalent width measurements.
A further complication that arises when comparing evolutionary tracks and isochrones with the empirical data is that stellar models are inherently affected by systematic uncertainties, as often are empirical measurements as well. These uncertainties encompass various factors, such as color-temperature transformations, boundary conditions, treatment of turbulent convection, etc. <cit.>. Such systematic uncertainties constitute the main reason why the so-called “horizontal methods” of age-dating GCs <cit.> are used to derive ages in a differential sense only <cit.>. In fact, these problems affect traditional isochrone-fitting methods as well, as pointed out early on by <cit.>, among many others. For this reason, whenever the temperatures, colors, and radii of stars are involved, differential analyses are expected to give more reliable results than absolute ones.
Our main hypothesis is that, at least in principle, dealing with the binary components differentially can help minimize the impact of systematic errors affecting the empirical measurements and theoretical models alike. Indeed, several studies have demonstrated that the difference in temperature between eclipsing binary members can be derived with higher precision than is possible from the absolute temperatures of its individual components <cit.>. In addition, this
rationale aligns with the approach employed in the study of solar twins and the differential analyses commonly conducted in these cases <cit.>, which can yield remarkably precise stellar parameters: in the case of effective temperature (T_eff), precision at the level of ∼ 3 K has been reported.
In this paper, we explore the possibility that, by working with differential measurements of the properties of the stars that comprise a binary pair, and assuming both stars have the same age and Y, one may be able to achieve more robust constraints on the latter two quantities than would have been possible by working with both stars' absolute parameters. In this first experiment, we employ V69 in 47 Tuc.
In Sect. <ref>, we detail the stellar parameters and chemical composition that we have adopted for the analysis of V69. Sect. <ref> describes the theoretical models used in our study, while Sect. <ref> outlines the computation of relative parameters for both the observed stars and the models, and the fitting procedure employed to derive an age and Y for V69. Section <ref> presents the results obtained with our method for V69, followed by a summary of this work, in Sect. <ref>.
§ ADOPTED STELLAR PARAMETERS AND CHEMICAL COMPOSITION OF V69
We adopt the masses (M), bolometric luminosities (L_bol), and T_eff of V69's components from <cit.>. Their respective radii (R) and surface gravities (g) come from <cit.>, as revised estimates are not provided in <cit.>. These values, based on a combination of photometry and spectroscopy, are given in Table <ref>. Additionally, Table <ref> shows the relative values of each parameter with their respective errors; these are explained in Sect. <ref>.
In this work, the iron-over-hydrogen ratio with respect to the Sun, [Fe/H], and α-element enhancement of the system, [α/Fe], are assumed to be [Fe/H] = -0.71 ± 0.05 and [α/Fe] = +0.4, respectively. This chemical composition was adopted based on the studies conducted by <cit.> and <cit.>. The adopted [Fe/H] value agrees, to within 0.01 dex, with the value obtained by <cit.> from a systematic compilation of measurements in the literature, and also with the [Fe/H] value recently recommended by <cit.>.
§ THEORETICAL STELLAR MODELS
We compare the physical parameters of V69 with stellar models obtained from two stellar evolution codes (SECs): Princeton-Goddard-PUC <cit.> and Victoria-Regina <cit.>. These models were chosen because they offer a wide range of He abundances and metallicities, as can be seen in Table <ref>. We obtain evolutionary tracks with different mass values than those available in the original grids by interpolation. Further details regarding the latter are provided in Appendix <ref>.
The PGPUC tracks provided by <cit.> start at the zero-age main sequence (ZAMS) and extend to the He flash, at the tip of the red giant branch (RGB).
For our analysis, we compute tracks from the PGPUC Online webpage[<http://www2.astro.puc.cl/pgpuc/index.php>] with the masses, He abundances, and metallicities specified in Table <ref>.
As mentioned in Sect. <ref>, the [α/Fe] of V69 is adopted as +0.4. However, we use PGPUC models with [α/Fe] = +0.3, since <cit.> do not offer models with a higher [α/Fe].
The impact of this 0.1 dex difference upon our results will be addressed later in the paper.
<cit.> provide grids of evolutionary tracks that start at the ZAMS and extend up to the tip of the RGB, in addition to computer programs that enable interpolation of new models within these grids <cit.>. However, these programs do not support interpolation of tracks with new masses (only new chemical compositions), and the mass resolution of the available grid is insufficient for our purposes. Therefore, we employ these programs to interpolate new isochrone grids (their ages and chemical compositions are shown in Table <ref>), which we subsequently use to interpolate tracks (see Appendix <ref>) with the same masses as the tracks in the PGPUC grid.
Both the PGPUC and VR databases directly provide the values for T_eff, L_bol, and M, among others, for each model. However, the corresponding R values are not provided, and, in the VR case, neither are the g values. Hence, the radii are calculated according to the Stefan-Boltzmann law,
R = [L_bol/(4 πσ T_eff^4)]^1/2, where σ is the Stefan-Boltzmann constant, while the g values are calculated from Newton's law of universal gravitation, g = G M / R^2, where G is the gravitational constant.
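In code, these two relations are straightforward (SI units assumed throughout):

import numpy as np

SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
G_NEWTON = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2

def radius_and_gravity(L_bol, T_eff, M):
    """Radius from the Stefan-Boltzmann law and surface gravity from Newton's law."""
    R = np.sqrt(L_bol / (4.0 * np.pi * SIGMA_SB * T_eff**4))
    g = G_NEWTON * M / R**2
    return R, g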
Figure <ref> overplots the absolute stellar parameters of the V69 pair (from Table <ref>) on evolutionary tracks interpolated from the PGPUC and VR grids for different He abundances. These tracks are computed for the nominal mass values of both stars (also from Table <ref>).
Additionally, Fig. <ref> shows a comparison between the relative parameters of the system and those predicted by these tracks (obtained as described in the next section).
These plots show that, in the absolute and general cases alike, there is general agreement between both sets of tracks and the data, with values of Y at the lower range of those explored apparently being favored.[Though not displayed in these figures, we have checked that Bag of Stellar Tracks and Isochrones (BaSTI) models <cit.> for a similar chemical composition (Y = 0.255, [Fe/H] = -0.7, [α/Fe] = 0.4, Z = 0.006) match closely the PGPUC and VR models in the relevant region of parameter space.] This will be explored further in the following sections.
§ METHODS
In this section, we describe the method proposed in this work to estimate the He abundance and age of the binary V69. The methods are illustrated by means of PGPUC calculations carried out for a single metallicity, Z = 0.006. A more detailed analysis that considers the range of possible metallicities of the system and incorporates VR models is presented in Sect. <ref>.
§.§ Relative parameters
To perform our differential analysis, we compute the relative physical parameters of the binary Δ𝒳, where 𝒳 may represent any of M, T_eff, L_bol, g, and R, as follows:
Δ𝒳(t) = 𝒳_s(t) - 𝒳_p(t),
where 𝒳_ s(t) and 𝒳_ p(t) are the physical parameters of the secondary and primary, respectively, computed for a given age t, the latter assumed identical for both members of the binary system.
The nominal errors in these relative parameters are obtained by propagating those in the absolute parameters in quadrature. However, as mentioned in Sect. <ref>, since the components of V69 are very similar, they should be affected similarly by systematic uncertainties (due to, e.g., the use of different color indices, bolometric corrections, color-temperature transformations, etc.). For a strictly differential analysis, therefore, we posit that systematic errors affecting each of the two components individually do not propagate in quadrature as statistical errors do; rather, they should largely cancel each other out.
We thus consider that the errors propagated in quadrature are upper limits. This is explored further in Sect. <ref>.
In like vein, a differential evolutionary track was obtained for the binary system, by computing the difference, for each age t, between the physical parameters of the secondary and primary along their respective tracks. This was done by interpolating in time within the individual tracks, which are uniquely defined by the specified (M_ s, Z_ s, Y_ s), (M_ p, Z_ p, Y_ p) combinations, with (in general) M_ s < M_ p, metallicity Z_ s = Z_ p, and helium abundance Y_ s = Y_ p. Some such differential tracks are shown in Fig. <ref>.
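A minimal numpy sketch of this construction is given below, assuming each track is stored as a dictionary of arrays sampled along the evolution and using linear interpolation in age; the actual interpolation scheme in our pipeline may differ.

import numpy as np

def relative_track(track_p, track_s, params=("M", "L_bol", "R", "g"), n_ages=2000):
    """Differential track (secondary minus primary) evaluated on a common age grid."""
    t_min = max(track_p["age"].min(), track_s["age"].min())
    t_max = min(track_p["age"].max(), track_s["age"].max())
    ages = np.linspace(t_min, t_max, n_ages)
    rel = {"age": ages}
    for p in params:
        rel[p] = np.interp(ages, track_s["age"], track_s[p]) - np.interp(ages, track_p["age"], track_p[p])
    return rel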
An alternative approach to the one used in this paper is to employ parameter ratios, as opposed to differential parameters. This is motivated by the fact that, when studying binary systems, such ratios are often more directly measured from the empirical data <cit.>.
We explore this possibility further in Appendix <ref>.
§.§ Measuring track-to-star agreement
To estimate the Y value and age of the binary, we compare its (absolute and relative) parameters with a diverse set of stellar models in a four-dimensional space of M, L_bol, g, and R. Effective temperature is included implicitly, through the Stefan-Boltzmann relation that relates it to L_bol and R. We do not include T_ eff explicitly because, as can be seen in Fig. <ref>, it has less diagnostic power than other physical parameter combinations, in the relative case.
To evaluate the quality of the fits for different (Y, age) combinations, we define a goodness-of-fit parameter as follows.
We calculate Euclidean track-to-star distances (TSDs),
TSD(t) = {[(M_⋆ - M_ET(t))/σ_M_⋆]^2 + [(L_bol,⋆ - L_bol,ET(t))/σ_L_bol,⋆]^2 + [(R_⋆ - R_ET(t))/σ_R_⋆]^2 + [(g_⋆ - g_ET(t))/σ_g_⋆]^2}^1/2,
where M_ET(t), L_bol,ET(t), R_ET(t), and g_ET(t) are the evolutionary track's mass, bolometric luminosity, radius, and surface gravity at time t, respectively; M_⋆, L_ bol,⋆, R_⋆, g_⋆ the corresponding empirical values; and σ_M_⋆, σ_L_ bol,⋆, σ_R_⋆, and σ_g_⋆ their respective uncertainties. Dividing each term by the error of the respective parameter gives higher weight to those parameters that are empirically better constrained, and renders TSD a quantity with no physical units — an approach similar to the one followed, for instance, by <cit.> and <cit.>.
The TSDs were calculated for both the relative (RTSDs) and absolute (ATSDs) approaches. The RTSDs are simply the TSDs computed for a relative track (such as those shown in Fig. <ref>) and the “relative star”, i.e., the relative parameters of the binary system. On the other hand, the ATSDs are computed as
ATSD(t) = √(TSD_p^2(t) + TSD_s^2(t)) ,
where TSD_s(t) and TSD_p(t) are the TSDs computed for the secondary and primary track, respectively.
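In numpy, these statistics can be evaluated along a track as follows; tracks and stars are assumed to be dictionaries keyed by parameter name, with the primary and secondary tracks sampled on a common age grid (a bookkeeping choice made for this sketch). The RTSD is obtained by applying tsd() to the relative track and the relative parameters of the binary.

import numpy as np

def tsd(track, star, err, params=("M", "L_bol", "R", "g")):
    """Track-to-star distance defined above, returned as an array over the track's ages."""
    return np.sqrt(sum(((star[p] - track[p]) / err[p])**2 for p in params))

def atsd(track_p, track_s, star_p, star_s, err_p, err_s):
    """Absolute TSD: both components fitted simultaneously."""
    return np.sqrt(tsd(track_p, star_p, err_p)**2 + tsd(track_s, star_s, err_s)**2)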
§.§ Helium abundance and age determination
For each set of evolutionary models (i.e., PGPUC and VR), we find all the (M_ s, M_ p) combinations, from the available masses in the model grid (see Table <ref>), that satisfy M_ s < M_ p, discarding those instances in which the masses are outside 3 σ of the respective star's mass. Then, for each of these mass combinations, we compute relative tracks for every available He abundance value (as explained in Sect. <ref>), and calculate ATSD and RTSD values for all of the tracks. Given the strong dependence of the relative tracks on mass around the MS TO point, a high enough mass resolution is required in order to infer reliable Y and age values using this method.
Finally, we search through all the mass combinations to find the evolutionary track(s) that are associated with the minimum ATSD and RTSD values, here denoted MATSD and MRTSD, respectively. These values will correspond to the point in the evolutionary track(s) that comes closest to the measured binary parameters (i.e. M, L_bol, R, and g), according to the adopted statistic. The estimated Y and age of the binary can then be straightforwardly obtained from this closest point of the track(s).
§.§ Accounting for uncertainties
The nominal M, L_bol, R, and g values of the stars, as adopted in Sect. <ref>, are not necessarily the true ones. As shown in Table <ref>, V69's parameters also have an associated uncertainty, which must thus be taken into account. Here we employ a Monte Carlo (MC) technique to perform an analysis of the impact of the errors in these parameters.
We run 2000 MC instances, in which we resample the M, L_bol, R, and g of both stars assuming they follow normal deviates. The latter follow the binary components' nominal values and associated errors, both from Table <ref>, as their mean and standard deviation, respectively. The only constraint imposed is that the primary's mass must be greater than the secondary's.
In each MC instance, we then follow the same procedure described earlier to estimate anew the Y value and age of the binary system, adopting these resampled parameters.
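The resampling loop can be sketched as below; the dictionary keys ("M_p", "L_bol_s", etc.) are introduced purely for illustration.

import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed for reproducibility

def resample_binary(nominal, sigma, n_mc=2000):
    """Draw MC instances of the absolute parameters of both stars, enforcing M_p > M_s."""
    draws = []
    while len(draws) < n_mc:
        inst = {k: rng.normal(nominal[k], sigma[k]) for k in nominal}
        if inst["M_p"] > inst["M_s"]:
            draws.append(inst)
    return draws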
This sampling procedure assumes that the uncertainties in the relative parameters are not correlated. This notwithstanding, and as will be discussed below (Sect. <ref>), our artificial star tests show that the method is able to recover the correct age and Y values. In future applications, however, the possibility of correlated errors in at least some of the input parameters should be investigated and taken into account if present, in order for more realistic errors in the inferred parameters to be extracted from the data.
Figure <ref> shows the Y and age distributions obtained when using relative PGPUC models with Z = 0.006. As shown in the upper left panel, the Y distribution can present an overpopulated bin at the lowest He abundance value provided by the models (i.e., Y = 0.23), which occurs because this bin contains all the instances with Y ≤ 0.23, instead of only the ones with Y = 0.23. This, in turn, also affects the age distribution, which can show multiple affected (overpopulated or underpopulated) bins. A regular normal fit to these distributions could thus be negatively impacted by these.
As a way to obtain reliable Gaussian fits when in presence of these spuriously over/underpopulated bins at the extremes of otherwise normal distributions, we propose to fit the corresponding cumulative distribution functions (CDFs) with Gaussian CDFs, instead.
Similarly to fitting the regular distributions, these CDF fits have the mean and standard deviation of the Y and age distributions as their fitting parameters. These CDFs are shown in the lower panels of Fig. <ref>, with the best-fitting Gaussian CDF depicted with a dashed blue line. Additionally, using the parameters of the best-fitting CDF, we craft a Gaussian and compare it to the distributions in the upper panels of this figure.
In the Y distributions, we expect these CDF fits to be more robust in view of the fact that they are unaffected by the overpopulated bin.
On the other hand, the age CDF can show over/underpopulated (older) bins, but it will always have one less bin affected than the regular distribution. Furthermore, even though we may not consider one or more age bins for the CDF fit, the instances within these are still considered in the bins with lower ages (and thus, in the fit). With these factors in mind, we also expect the CDF fit to the ages to be more robust.
The width of the bins in each histogram is selected following the Freedman-Diaconis rule <cit.>, with a minimum bin width in Y and age of 0.005 and 0.5 Gyr, respectively, based on the resolution of our model grids.
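The Gaussian CDF fit can be sketched with scipy as follows; for brevity this version fits the unbinned empirical CDF, whereas in the analysis we fit the CDF of the binned distribution with the bin widths quoted above.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_gaussian_cdf(samples):
    """Fit a Gaussian CDF to the empirical CDF of the MC samples; returns (mean, sigma)."""
    x = np.sort(np.asarray(samples))
    ecdf = np.arange(1, len(x) + 1) / len(x)
    popt, _ = curve_fit(lambda x, mu, s: norm.cdf(x, mu, s), x, ecdf,
                        p0=(x.mean(), x.std()))
    return popt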
Figure <ref> shows the Y and age distributions obtained when using the absolute PGPUC models with Z = 0.006. It is clear that the absolute approach obtains much tighter distributions. In this case, it is also not necessary to employ the CDF to obtain reliable normal fits to the data.
In both approaches, we adopt the nominal value and error in Y and the age of the binary as the mean and standard deviation of the corresponding Gaussian fit.
§.§ Precision of the relative approach
As shown in Figs. <ref> and <ref>, the statistical errors in Y and age obtained with the relative approach are much larger than those obtained with the absolute approach, which holds true for both SECs. This arises from the fact that the absolute approach simultaneously fits two tracks to two stars, while the relative approach only fits one (relative) track to one (relative) star, thus reducing the fit's ability to precisely pinpoint the values of the parameters (Y, age). Additionally, Figs. <ref> and <ref> show that the relative approach may be intrinsically less sensitive to variations in Y than the absolute approach, which contributes to the larger (statistical) error bars obtained by the former.
Note, however, that our calculations use relative errors that arise from propagating the errors of the absolute parameters in quadrature. Such errors typically involve both a statistical and a systematic component. As already mentioned in Sect. <ref>, and indeed extensively exploited in the study of solar twins, the use of relative parameters may lead to a reduction in the impact of systematic uncertainties, as those should affect both stars similarly.
It is of considerable interest, therefore, to analyze the impact of a reduction in the error of the differential parameters of the system, as compared with the propagated nominal errors that we have been using up to this point in our analysis.
To accomplish this, we arbitrarily shrink the errors in the relative L_bol, M, R, and g values of the binary by a reduction factor Q, and we follow the procedure detailed in the previous sections to estimate anew the age and Y of the binary, along with their respective errors.
The results of this experiment are shown in Fig. <ref>, where we compare the precision obtained with the relative approach, taking a range of Q values, to the precision of the absolute approach (with unchanged errors in the absolute parameters). As expected, the relative approach obtains much larger error bars than the absolute analysis when using the propagated errors; however, error bars of the same order as in the absolute approach can be inferred if the uncertainties in the relative M, L_bol, R, and g of the binary are smaller than their propagated nominal values.
§.§ Testing our methodology
We have also tested our methodology by means of synthetic TO stars, generated using PGPUC isochrones, with physical parameters and associated error bars resembling those of V69's components (Table <ref>). We explore synthetic stars with Y and age values in the ranges of 0.24-0.32 and 8-13 Gyr, respectively. Then, we run the algorithm described in the previous sections to try to recover the Y value and age of the original isochrones.
Additionally, to further explore the robustness of both approaches, we repeat these tests with manipulated T_eff values for both synthetic stars. We run tests with T_eff values increased by 100 K, and tests with those decreased by 100 K. Then, we recalculate their radii and surface gravity, with these manipulated temperatures, and attempt to retrieve the Y value and age of the isochrones.
This simulates the effect of the systematic uncertainty present in color-temperature transformations, which (among other factors) commonly affects the comparison between stellar models and empirical data. We choose to change the temperatures of both stars equally because these are very similar stars (see Table <ref>), which should thus be affected in a similar manner by such systematics.
We perform these tests in both the absolute and relative approaches. In the latter case, we use two settings, one with propagated absolute errors (Q = 1), and another with reduced errors (Q = 3). This is done in order to ascertain whether more precise estimates of the relative parameters can lead to a more accurate Y and age determination (in addition to a more precise one, as shown in Sect. <ref>).
The results of these tests are depicted in Fig. <ref>. Here, we also show the mean residuals (which are calculated using the absolute values of the residuals) each approach obtains in each of the three settings (i.e., decreased, increased, or unchanged temperatures).
As shown here, when using unchanged temperatures, both the absolute and the relative method provide accurate age and Y measurements, with the relative method showing better accuracy when using reduced errors.
When using manipulated temperatures (mimicking systematic errors), however, the absolute case shows large offsets in the age and Y estimates. In this case, the He abundance is systematically under/over-estimated (by about 0.015, on average) when the temperatures are decreased/increased, and vice-versa for the ages (by about 1.4 Gyr, on average). On the other hand, the relative approach shows much smaller offsets in the estimated age and Y values than does its absolute counterpart, especially so when using reduced errors. These results support our hypothesis that the impact of systematic errors affecting the stellar parameters is reduced when dealing with the binary components differentially.
§.§ Correlation between the He abundance and age
As expected from theoretical models <cit.>, the inferred age of a binary system strongly depends on the adopted He abundance, with larger Y leading to younger ages (due to faster evolution). Figure <ref>, where the Y- age distributions implied by our MC runs are shown, confirms this, both in the absolute and relative approaches.
Interestingly, Fig. <ref> also reveals a bimodality in the Y- age distribution when studying the system's relative parameters.
This occurs because, in the relative case, the masses of the (absolute) tracks used to compute the best-fitting relative models (of the MC instances) tend to be concentrated near the 3 σ edges of the absolute mass distributions. Therefore, two peaks are formed in the distributions of these masses, which appear as the two “ridges” in Fig. <ref>.
This implies that, for systems with properties similar to V69's, it may not be straightforward to infer accurate masses from the peak of the relative MC solutions, as the (globally) most appropriate solutions — i.e., those with MTSD < 1 — are not concentrated around any specific mass values within the 3σ ranges.
In this sense, we note that the fraction of relatively poor solutions — i.e., those with MTSD > 1 — is higher in both ridges than it is in the “valley” region in between. Specifically, in the lower and upper ridges, 55% and 24% of the solutions are characterized by MTSD > 1, whereas in the valley, this fraction is reduced to 20%.
§ DERIVING THE HE ABUNDANCE AND AGE OF V69
As discussed in Sect. <ref>, the relative approach gives more accurate results when the uncertainties are reduced.
Therefore, in what follows we adopt errors reduced by a factor Q = 3 to estimate V69's Y value and age. Whether such a reduction in the errors of one or more relative parameters can be achieved empirically is beyond the scope of this paper, though we note that some studies have shown that at least some reduction is indeed possible <cit.>.
Due to the limitations of the grid provided by <cit.>, we use PGPUC models with [α/Fe] = +0.3. However, the adopted α-element enhancement of 47 Tuc is +0.4. Thus, in order to obtain age and Y estimates with the adopted chemical composition of 47 Tuc ([α/Fe] = +0.4 and [Fe/H] = -0.71), we determine the effect of differing [α/Fe] values on our age and He abundance estimations.
This is shown in Fig. <ref>, where we present our results with both approaches, using VR models with [α/Fe] values in the range of 0.3 - 0.4, and a fixed [Fe/H] = -0.6.
We find that, in either approach, an increase in the assumed α-element enhancement of the binary system correlates with a decrease in its estimated age and an increase in its He abundance.
Figure <ref> shows that, in the relative approach, an increment in [α/Fe] of 0.1 leads to an increase in Y of roughly 0.013 and a decline in age close to 0.4 Gyr. On the other hand, in the absolute approach, an equal increment in [α/Fe] enlarges the estimated Y by 0.013, and reduces the age by 0.6 Gyr.
Thus, both approaches produce age and Y estimations with very similar correlations with [α/Fe], with the relative analysis providing a slightly weaker age dependency.
We apply these slopes as corrections to the PGPUC results obtained with [α/Fe] = +0.3, in order to shift these to estimations with [α/Fe] = +0.4. Figure <ref> depicts the sensitivity of the measured ages and He abundances to the [Fe/H] of the models, using both PGPUC and VR models. All results shown correspond to those obtained with [α/Fe] = +0.4.
Note that Fig. <ref> does not include the errors in each Y and age estimation; however, these are of the order of those shown in Fig. <ref> (with Q = 3, in the relative approach), and do not change in any significant way with [Fe/H].
We find that an increase in the assumed metal content of the binary system correlates with a decrease in its estimated age and an increase in its He abundance, akin to the correlations our results show with [α/Fe]. These trends are consistent across both approaches. However, the relative results show a weaker dependence with [Fe/H] than the absolute measurements, particularly in age.
Additionally, the PGPUC results for both Y and age show a very well-defined linear dependency on [Fe/H], while the relative VR results display a mildly disjointed age dependency at lower [Fe/H]. This occurs because we can find a Y value outside (or very close to the limit of) the range of He abundances provided by the models (in the VR case, Y = [0.25, 0.33]), which can lead to a slight overestimation and underestimation of the Y value and age, respectively. This happens for the relative calculations performed using the VR models with both [Fe/H] = -0.75 and -0.7.
As shown in Fig. <ref>,
PGPUC and VR tracks lead to inferred Y values that are within about 0.01 of each other, with the relative approach implying slightly lower He abundances.
On the other hand, it is clear that the relative approach provides age estimations that are less model-dependent than the absolute approach. The former produces ages that agree to within about 0.2 Gyr when comparing PGPUC and VR results (even when some age estimates obtained using VR models are slightly underestimated, as discussed in the previous paragraph), while the latter provides age estimates which differ by at least 1 Gyr between PGPUC and VR.
Explaining this result is outside the scope of this paper, but it is again evidence that systematic effects can have a deleterious impact on the determination of GC Y and age from binary TO stars.
Differences in the input physics adopted in the PGPUC and VR codes could be one contributing factor. For instance, PGPUC adopts the FreeEOS equation of state (EOS) <cit.>, whereas VR uses a somewhat more rudimentary EOS <cit.>. At V69's metallicity, this may be an important contributor to the ≈ 65 K offset that we find between PGPUC and VR tracks at the TO level, the latter set being systematically cooler than the former (Fig. <ref>). Though we do not use temperatures directly in our method, we do use them to obtain gravities and radii, both of which are parameters used in our goodness-of-fit diagnostics (see Eq. <ref>). Other relevant ingredients that could affect the results of different SECs, and the placement of their evolutionary tracks and isochrones in color-magnitude diagrams (CMDs), include the adopted outer boundary conditions, mixing-length formalism adopted for the treatment of convection, low-temperature radiative opacities, color-temperature relations and bolometric corrections, and reference (solar) abundance mix, among others <cit.>.
Such a temperature offset in the models is akin to the tests we describe in Sect. <ref>. Particularly, using these VR models, which are systematically cooler, should be roughly equivalent to the tests where we increase the temperature of the stars (shown in the right panels of Fig. <ref>), albeit at a reduced level (we adopted a temperature offset of 100 K in those tests, whereas the difference in temperatures between PGPUC and VR tracks at the TO level is about 65 K instead).
Nevertheless, the similarities between the results shown in Fig. <ref> and the tests using increased temperatures are evident. In the absolute case, the VR models (as compared to PGPUC ones) and these tests alike lead to significantly underestimated ages and overestimated He abundances. In the relative approach, the differences are much reduced, but still fully consistent with the trends seen in Fig. <ref>, particularly considering the slightly lower ages obtained with VR models.
With the relative approach, we find that the age and He abundance of V69, and thus of 47 Tuc, are 12.1 ± 1.1 Gyr and 0.247 ± 0.015, respectively. These uncertainties correspond to the statistical errors discussed in Sect. <ref>, with Q = 3.
The real statistical uncertainties depend on how much more accurate a direct determination of the relative parameters is, compared to the propagated errors in the individual components' separately measured parameters. In addition, the possibility of correlated uncertainties in these parameters, which was not considered in this work, should be evaluated and taken into account if necessary as well.
In any case, it should be noted that the relative method becomes less sensitive, the more similar the two binary components are; indeed, in the limit where both stars are identical, the method cannot be applied. In the case of V69, as we have seen earlier, both stars are indeed quite similar, and are both currently located at 47 Tuc's TO point. In this sense, binary pairs in which the stars differ more in their properties, with one of the components being somewhat less evolved than the other, would help beat down the statistical uncertainties, in the relative approach. This, in fact, is the case of E32 in 47 Tuc, a DEB system which we plan to address in a forthcoming paper.
These age estimates are in good agreement with the results presented in <cit.> and <cit.>, who find that the age of V69 is 11.8 ± 1.5 (3σ) Gyr and 12.0 ± 0.5 (1σ) Gyr, respectively. These were obtained by comparing the absolute parameters of V69 with stellar models, combined with isochrone fitting to the CMD of 47 Tuc. Additionally, to obtain the age of 47 Tuc, <cit.> analysed not only V69, but also E32.
At first glance, these tight constraints on the age of V69 could indicate that our method, which obtains much looser constraints, is not an effective age indicator. However, both quoted papers rely on a single SEC; thus, their quoted errors do not consider the systematic uncertainty that arises from using different sets of evolutionary tracks. This systematic, as discussed in Sect. <ref>, can be most significant when working with the absolute parameters of the stars. This is also shown in Table 9 of <cit.>, who estimated V69's age using five different SECs and different combinations of physical parameters, including mass, radius, and luminosity.
They find ages that differ by up to 2 Gyr, depending on the models used and the adopted He abundance (as they do not simultaneously derive the latter along with the age).
Furthermore, <cit.> and <cit.> find an approximate Y value for V69 by performing isochrone fitting to the CMD of the cluster, which they then use to estimate the age of the binary. As both works point out, this assumes that V69 (and E32, in the case) belongs to a specific population of 47 Tuc.
In contrast, our methodology neither assumes a Y value a priori, nor does it require isochrone-fitting to be performed to the cluster's CMD.
Furthermore, the method proposed in this work could be used to test these assumptions and study the He abundance of the multiple populations in GCs, in the event different binaries are found belonging to different subpopulations with different Y values within a given cluster. Considering these factors, we are confident that the constraints we have placed on the Y value and age of 47 Tuc are robust, and that the method can be competitive in the search for He abundance and age variations, including within individual star clusters.
§ FINAL REMARKS
In this paper, we have proposed and tested a new approach to simultaneously measure the age and He abundances of star clusters. It relies on the differential evolutionary properties of DEBs in which at least one of the components is close to the main-sequence TO. We argue that the method is less sensitive to systematic effects than methods relying on the absolute physical parameters of the stars, and demonstrate that it can give useful results in the case of the DEB V69 in 47 Tuc. As a proof of concept, we analyze the system assuming that the errors in the relative parameters can be reduced by a factor of 3 compared to the propagated formal errors in the individual, absolute parameters of the components. In this way, we are able to infer an age for the system of 12.1 ± 1.1 Gyr and a He abundance of 0.247 ± 0.015. Naturally, the actual errors in Y and age that can be achieved with our method depend critically on how precisely each of the several relative parameters of the binary can be determined empirically. Application of the method to other suitable DEB systems, as well as further exploration of the methodology, are strongly encouraged.
In particular, efforts at determining the relative parameters of DEB stars, with a decreased sensitivity to systematic effects that would otherwise affect each component's absolute parameters individually, would prove especially useful in putting the proposed methodology on a firmer footing.
We gratefully acknowledge the constructive comments and suggestions provided by the referee. Support for this project is provided by ANID's FONDECYT Regular grants #1171273 and 1231637; ANID's Millennium Science Initiative through grants ICN12009 and AIM23-0001, awarded to the Millennium Institute of Astrophysics (MAS); and ANID's Basal project FB210003. NCC acknowledges support from SOCHIAS grant “Beca Adelina”. We are very grateful to Don VandenBerg for providing us with his evolutionary tracks, and programs to interpolate within these.
The research for this work made use of the following Python packages: Astropy <cit.>, Matplotlib <cit.>, NumPy <cit.>, SciPy <cit.>, Jupyter <cit.>, and pandas <cit.>.
§ TRACK INTERPOLATION
In order to interpolate evolutionary tracks from the VR isochrone grids, we compute a linear interpolation of their parameters (T_eff, L_bol, R, g) as a function of the mass values associated with each point along the isochrone, and we evaluate these interpolations on the mass of the original track we seek to reproduce.
The resulting track has 135 evolutionary points (the number of isochrones used in the interpolation), with minimum and maximum ages of 1 Gyr and 18 Gyr, respectively. To improve the time resolution of the track, we perform a second interpolation of the resulting track parameters as a function of the evolutionary age, using the INTEP interpolation subroutine <cit.>.
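Schematically, the first interpolation step can be written as below, assuming each isochrone is stored as a dictionary of arrays with mass increasing monotonically along the isochrone; the second, INTEP-based interpolation in age is not reproduced here.

import numpy as np

def track_from_isochrones(isochrones, mass, params=("T_eff", "L_bol", "R", "g")):
    """Interpolate a fixed-mass evolutionary track from a grid of isochrones.
    isochrones: list of dicts with a scalar 'age', an array 'M', and one array per parameter."""
    track = {"age": np.array([iso["age"] for iso in isochrones])}
    for p in params:
        track[p] = np.array([np.interp(mass, iso["M"], iso[p]) for iso in isochrones])
    return track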
To judge the accuracy of this interpolation method,
we compare two tracks (with M = 0.8 and 0.9 M_⊙), interpolated from our VR isochrone grid (Table <ref>), to tracks with the same masses (and chemical compositions), provided directly by the VR interpolation software <cit.>. The chemical composition of the tracks is [Fe/H] = -0.7, Y = 0.25, and [α/Fe] = 0.3. To compare these tracks, we plot their residuals in L_bol, R, and g, as a function of the evolutionary age along the track. The results of this test are shown in Fig. <ref>. The evolution of the tracks is shown up until they enter the RGB stage, since our main focus is the TO point of the tracks, not their later evolution.
This test shows that the interpolation reproduces the expected tracks with very good precision in the MS and TO stages, showing minor bumps in the residuals before 5 Gyr, which occur due to the larger time-step of our grid (Table <ref>) in these early ages.
The interpolation starts to break down in the late stages of the subgiant branch. This occurs because our pre-INTEP tracks, with 135 evolutionary ages, have thinly populated post-MS stages, leading to problems with the interpolation. Nevertheless, this does not represent a problem for our analysis, given that we are dealing with two stars located very near the TO point.
§ ON THE USE OF PARAMETER RATIOS
As mentioned in Sect. <ref>, parameter ratios may constitute an interesting alternative to the use of differential parameters, motivated by the fact that some such ratios can be measured more straightforwardly <cit.>.
To explore this possibility, in Fig. <ref> we recreate Fig. <ref>, but plotting parameter ratios instead of differences. In this figure, the empirical measurements (with propagated errors) for 47 Tuc's V69 DEB system are compared with VR evolutionary tracks for the indicated Y values. PGPUC tracks, not shown for the sake of clarity, display essentially the same behavior, as also seen in Fig. <ref>. In the zoomed-in panels of Fig. <ref>, we also add BaSTI tracks with [Fe/H] = -0.7, Y = 0.255, [α/Fe] = 0.4, and Z = 0.006, which show excellent agreement with the VR (and thus PGPUC) tracks for a similar chemical composition.
Figure <ref> reveals, intriguingly, that, irrespective of the theoretical models adopted, the parameter ratios consistently require models with what appear to be unrealistically high Y values to match the empirical data for V69. We have checked that the same happens in the case of 47 Tuc's E32 DEB system. This suggests that, unlike differential parameters, parameter ratios are sensitive to systematic errors, and thus less amenable to a method such as proposed in this paper.
Additionally, these ratios show a weak dependency with Y, as all the tracks included in Fig. <ref> pass within the (propagated) error bars.
Further exploration of parameter ratios is strongly encouraged, in order to explain the intriguing discrepancy shown in Fig. <ref> and further explore their utilization in studies of the He abundance and age of DEB systems.
|
http://arxiv.org/abs/2409.03014v1 | 20240904181055 | Monojets from compressed weak frustrated dark matter | ["Benjamin Fuks", "Mark D. Goodsell", "Taylor Murphy"] | hep-ph | ["hep-ph"] |
fuks@lpthe.jussieu.fr
goodsell@lpthe.jussieu.fr
Laboratoire de Physique Théorique et Hautes Énergies (LPTHE),
UMR 7589, Sorbonne Université & CNRS,
4 place Jussieu, 75252 Paris Cedex 05, France
murphy@lpthe.jussieu.fr
murphyt6@miamioh.edu
Laboratoire de Physique Théorique et Hautes Énergies (LPTHE),
UMR 7589, Sorbonne Université & CNRS,
4 place Jussieu, 75252 Paris Cedex 05, France
Department of Physics, Miami University
500 E. Spring St., Oxford, OH 45056, U.S.A.
Center for Cosmology and Astroparticle Physics (CCAPP), The Ohio State University
191 W. Woodruff Avenue, Columbus, OH 43210, U.S.A.
§ ABSTRACT
We extend the so-called hyperchargeless Higgs triplet model to include a weak triplet of Dirac fermions and a Dirac fermion X transforming trivially under the Standard Model gauge group. We are motivated in part by a collection of anomalies that can be interpreted as a new scalar state with mass of approximately 152 GeV coupling to W but not Z bosons, which may be consistent with the electrically neutral triplet scalar in this model (provided that it mixes very slightly with the Standard Model Higgs boson). Meanwhile, the lightest neutral fermion in the model is stable and can be thermal dark matter with the correct relic abundance if it mixes lightly with the neutral triplet fermion, such that the dark matter is composed mostly of X. Because X couples to the Standard Model only through a Yukawa-like interaction with the pair of triplets, this model falls into the frustrated dark matter paradigm. Finally, the spectrum of exotic fermions in this model can exhibit the strong compression favored by the current excess in the monojet channel, and evades multijet constraints in the region favored by monojets. In this work we explore this model's phenomenology and compare the parameter space regions best suited to each of the aforementioned excesses and constraints.
Monojets from compressed weak frustrated dark matter
Taylor Murphy 0000-0002-3215-9652
September 9, 2024
====================================================
§ INTRODUCTION
Of the many searches for physics beyond the Standard Model (SM) during Run 2 of the LHC, several have reported excesses, although none of them are significant enough to exclude the SM. In recent work <cit.>, we pointed out that four of these excesses were potentially overlapping. These corresponded to monojet searches in both the ATLAS and CMS experiments, and soft-lepton searches in both as well <cit.>. In such cases, it is important to examine whether the observations can be explained by plausible models, in anticipation of more data.
In this paper, we consider the overlap of the monojet excesses with the possible signature of a Higgs-like particle with a mass around 152 GeV <cit.>. Phenomenological studies have suggested that this excess appears in new-physics analyses in the diphoton (γγ + X) and γ Z + X channels, and may furthermore be compatible with excesses revealed with multileptonic probes <cit.>. The γγ excess <cit.> is interesting because the claimed significance is large—4.3σ according to <cit.>—and yet has a potentially simple explanation in terms of a real Y=0 scalar field transforming as a triplet under the weak gauge group <cit.>, among others <cit.>. In this work, we point out that if we add electroweak fermions to this hyperchargeless Higgs triplet model <cit.>, we obtain a prediction for excess monojet events that may at the same time also provide a viable dark matter (DM) candidate.
The DM is mostly aligned with a SM singlet fermion that couples indirectly to SM fields through a Yukawa-like coupling with one of the weak-triplet scalars and a triplet fermion. This coupling is the defining feature of frustrated dark matter models <cit.>, which have recently been explored with color-charged mediator fields. With mixing and coannihilation, some level of compression between the triplet and singlet fermions can produce the observed DM relic abundance through freeze out, and in the meantime the monojet excess seems to favor strong compression of just a few GeV. Thus we are presently interested in compressed frustrated dark matter. The purpose of this work is to explore the overlap between the parameter spaces in this model favored by the (CMS) monojet analysis, searches for new physics in final states with multiple jets and missing energy (E_T^miss), the 152 GeV resonance explanation for various LHC anomalies, and the DM relic abundance.
This paper is organized as follows. Section <ref> introduces our model's field content, interactions, and mass spectrum. In Section <ref> we discuss the phenomenology of the singlet-like lightest neutral fermion, which can play the role of dark matter. Section <ref> briefly notes how the triplet-like heavy scalar may appear as a γγ resonance consistent with recent observations. Finally, in Section <ref>, we explore constraints on this model from the monojet channel and identify the region in which the monojet data favor our model over the Standard Model. We also discuss constraints from the multijet + E_T^miss channel, which has some bins that overlap with the monojet analysis but is not considered to exhibit an excess. Section <ref> concludes.
§ MODEL DISCUSSION
In this section we introduce the model's field content and interactions. We take the opportunity to fix notation, describe the particle spectrum, and discuss the computer implementation of the model and the procedure utilized to generate benchmark points.
§.§ Field content and mass spectrum
The field content of the new-physics sector of our model is summarized in Table <ref>. We begin with a Y=0 weak-triplet real scalar field Δ^ = √(2)(Δ^+, Δ^0,Δ^-). With the given normalization of Δ, it is convenient to perform the unitary transformation
Δ' = UΔ with U = 1/√(2)[ 1 0 1; i 0 -i; 0 √(2) 0 ]
so that Δ' (with the prime dropped henceforth) can be written as a 2× 2 matrix after projection onto the generators of the fundamental representation of SU(2). To wit:
Δ_ij = 1/2σ^A_ijΔ^A = 1/√(2)[ Δ^0 √(2)Δ^+; √(2)Δ^- -Δ^0 ].
Because Δ is real, the electrically charged scalars are each other's conjugates: Δ^+ = (Δ^-)^†. We adopt an analogous representation for a weak-triplet vectorlike Dirac fermion T, which is also hyperchargeless.
The final piece of this model is a SM singlet Dirac fermion X. In addition to the Standard Model, the Lagrangian comprises the terms
ℒ = 1/4 (D_μΔ^† D^μΔ) + T̅(iγ^μ D_μ - m_T)T + X̅(iγ^μ∂_μ - m_X)X
+ ℒ_int - V(Φ,Δ),
where
-ℒ_int = y_T T̅Δ T + y_X X̅Δ T + H.c.,
and the scalar potential (including the terms depending on the SM Higgs field Φ for clarity) is given by
V(Φ,Δ) = μ^2 |Φ|^2 + 1/2λ (Φ^†Φ)^2
+ 1/2μ_Δ^2 Δ^2 + 1/4λ_ΔΔ^4
+ √(2)δ_ΔΦ^†ΔΦ + 1/2κ_Δ Φ^†ΦΔ^2.
All traces are over SU(2)_L indices.[The contraction of SU(2)_L indices in T̅Δ T is necessarily antisymmetric.] The electrically neutral component Δ^0 of the triplet scalar obtains a vacuum expectation value (VEV) ⟨Δ^0 ⟩≡ v_Δ upon electroweak symmetry breaking; this field mixes with the SM field Φ^0, whose VEV is ⟨Φ^0 ⟩≡ v/√(2). (We require (v^2 + v_Δ^2)^1/2≈ 246 GeV.) The scalar spectrum of this model therefore consists of a charged scalar H^± with mass
m_H^±^2 = 1/2δ_Δv^2/v_Δ + 2 δ_Δv_Δ
and two electrically neutral scalars. The mass matrix of the latter can be written as
M_h^2 = [ λ v^2 v(κ_Δv_Δ-δ_Δ); v(κ_Δv_Δ-δ_Δ) 1/2δ_Δv^2/v_Δ + κ_Δv_Δ^2 ]
in the Higgs basis {Φ^0,Δ^0}, after the elimination of μ and μ_Δ via the tadpole equations. The Higgs basis and physical basis {h,H} are related by an angle α satisfying (by construction)
sin 2α = 2v(κ_Δv_Δ-δ_Δ)/m_H^2 - m_h^2
and tan 2α = 2v(κ_Δv_Δ-δ_Δ)/1/2δ_Δv^2/v_Δ+κ_Δv_Δ^2 - λ v^2,
and the physical masses can be expressed in the limit v_Δ/v ≪ 1 as
m_h^2 = λ v^2 - 2 δ_Δv_Δ + 4 (κ_Δ-λ)v_Δ^2
and m_H^2 = m_H^±^2 + [λ_Δ- 4(κ_Δ-λ)]v_Δ^2.
We require the mixing of the doublet and triplet to be small on phenomenological grounds and consider spectra in which the lighter state h is SM like. In this small-mixing (α) limit, the masses reduce to
m_h^2 ≈λ v^2 and m_H^2 ≈1/2δ_Δv^2/v_Δ + λ_Δv_Δ^2,
and by comparison with (<ref>) it is clear that the triplet-like scalar splitting obeys the approximate relation
m_H^±^2 - m_H^2 ≈ (2δ_Δ-λ_Δv_Δ)v_Δ,
which is quite small when v_Δ and δ_Δ are tiny. In this scenario, the heavy scalar H couples to pairs of W bosons with coupling
ℒ⊃1/2 g^2 (v sinα + 4 v_Δcosα) H W_μ^+ W^-μ,
g denoting the SU(2)_L coupling constant, and decays preferentially to W^+W^- with branching fractions exceeding 90%, though in many cases (including the scenarios discussed in this work) at least one W boson may be off shell.
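As a quick numerical illustration of the scalar spectrum discussed above, the mass matrix (<ref>) can be diagonalized directly. The short Python sketch below does so for illustrative input values (the specific choices of λ, λ_Δ, δ_Δ, κ_Δ and v_Δ are assumptions picked only to land near m_h ≈ 125 GeV and m_H ≈ 152 GeV, not benchmark points from our scans):

    import numpy as np

    # Illustrative inputs (GeV or dimensionless), assumed only for this sketch
    v, v_delta = 246.0, 2.0
    lam, lam_delta, delta_delta, kappa_delta = 0.26, 3.0, 1.52, 1.0

    # Neutral-scalar mass matrix in the Higgs basis {Phi^0, Delta^0}
    off = v*(kappa_delta*v_delta - delta_delta)
    M2 = np.array([[lam*v**2, off],
                   [off, 0.5*delta_delta*v**2/v_delta + kappa_delta*v_delta**2]])
    m2, _ = np.linalg.eigh(M2)
    m_h, m_H = np.sqrt(m2)                      # SM-like and triplet-like neutral scalars

    # Doublet-triplet mixing angle and the charged-scalar mass
    alpha = 0.5*np.arcsin(2.0*off/(m_H**2 - m_h**2))
    m_Hpm = np.sqrt(0.5*delta_delta*v**2/v_delta + 2.0*delta_delta*v_delta)

    print(f"m_h = {m_h:.1f} GeV, m_H = {m_H:.1f} GeV, m_H+- = {m_Hpm:.1f} GeV, alpha = {alpha:.3f}")
    # For these inputs the triplet-like neutral and charged scalars are nearly
    # degenerate near 152 GeV and the mixing angle is at the percent level.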
Meanwhile, the spectrum of BSM fermionic states consists of two pairs of charged fermions χ^±_1,2 and two Dirac electrically neutral states χ^0_1,2. In order to ensure the stability of the lightest electrically neutral fermion χ^0_1, so that it may perform the role of dark matter, we impose a ℤ_2 symmetry under which only X and T are odd. At leading order, the neutral fermions have masses
m_χ^0_1,2 = 1/2[m_T+m_X ∓√((m_T-m_X)^2 + 4(y_X v_Δ)^2)],
and the charged states are degenerate with mass
|m_χ^±_1,2| = m_T√(1-1/2(y_T v_Δ/m_T)^2).
In the small-v_Δ limit, χ^±_1,2 are furthermore nearly degenerate with T^0, the neutral component of the weak-triplet fermion. Loop effects break these (pseudo-)degeneracies. In the parameter space we examine below, wherein the physical state χ^0_2 is triplet like, the spectrum consists of a singlet-like χ^0_1 somewhat lighter than the three fermions {χ^±_1,χ^0_2,χ^±_2}, which we write in order of ascending mass and are split nearly symmetrically by 1 GeV or less.
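For orientation, the tree-level masses above can be evaluated for representative inputs in the compressed regime; the numbers below are illustrative assumptions rather than fitted values:

    import numpy as np

    # Assumed inputs in GeV (illustrative only)
    m_T, m_X, y_T, y_X, v_delta = 330.0, 326.0, 0.5, 0.5, 2.0

    disc = np.sqrt((m_T - m_X)**2 + 4.0*(y_X*v_delta)**2)
    m_chi01 = 0.5*(m_T + m_X - disc)     # singlet-like neutral fermion (the DM candidate)
    m_chi02 = 0.5*(m_T + m_X + disc)     # triplet-like neutral fermion
    m_chipm = m_T*np.sqrt(1.0 - 0.5*(y_T*v_delta/m_T)**2)   # degenerate charged fermions

    print(m_chi01, m_chi02, m_chipm, m_chi02 - m_chi01)
    # -> roughly 325.8, 330.2, 330.0 GeV and a splitting of about 4.5 GeV:
    #    the compressed spectrum relevant for the monojet discussion below,
    #    before the loop corrections that further split the states.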
§.§ Implementation; preliminary constraints
We implement the model in version 4.15.2 of SARAH <cit.>,[Note the factor of 1/2 in the SM Higgs quartic coupling λ in (<ref>), which is reflected in the SARAH implementation. The factor of √(2) in (<ref>), which eliminates such factors in (<ref>), etc., is also implemented.] which we use to generate Fortran code using routines from the SPheno library <cit.>. Fermion masses are computed including one-loop corrections, while decays are handled at leading order; scalar masses are computed including two-loop corrections <cit.>. Moreover, we generate leading-order (LO) model files compatible with CalcHEP <cit.> (hence MicrOMEGAs <cit.>) and MadGraph5aMC@NLO () <cit.>.
We use <cit.>, which calls the above tools and others detailed below, to steer several scans of the model parameter space featuring interesting dark matter and LHC monojet phenomenology. The input parameters are discussed below. For all scans, before any event generation is performed, we impose a set of experimental constraints based variously on Higgs measurements, electroweak precision tests, searches for flavor-changing processes, and a range of direct searches for BSM phenomena. The non-trivial constraints we make sure to satisfy include the following:
* The W boson mass m_W must lie within the range compatible with the current global average, either including the CDF II measurement with conservative uncertainty <cit.>,
m_W^CDF = (80.413 ± 0.015) GeV,
or excluding that measurement <cit.>,
m_W = (80.377 ± 0.012) GeV,
or (for flexibility) the gap between these two bands. (Most of our points fall in the CDF band.) Either condition strongly constrains the triplet VEV v_Δ to be of 𝒪(1) GeV.
* The upward shift δ a_μ in the anomalous magnetic dipole moment of the muon must not exceed the difference between the 2021 world experimental average <cit.> and the 2020 Muon g-2 Theory Initiative calculation <cit.>:
δ a_μ≤ (g-2)_μ^exp - (g-2)_μ^th = 2.49 × 10^-9.
More recent results are available, so this condition could be quantitatively disputed, but in our parameter space δ a_μ is of 𝒪(10^-14). The shift is always tiny because the weak triplets have no hypercharge and the only BSM couplings to muons are induced by doublet-triplet scalar mixing.
* The scalar potential must be bounded from below to ensure the stability of the electroweak vacuum, and the scalar couplings must be small enough that all 2→ 2 scalar processes satisfy the optical theorem (perturbative unitarity). The latter condition is fulfilled easily in our parameter space <cit.>; vacuum stability is slightly less trivial and requires <cit.>
λ,λ_Δ>0 and κ_Δ/√(2) + √(2 λ_Δλ) > 0.
* Both neutral scalars must accommodate a range of experimental results, most notably the branching fractions h →{ZZ,invisible} <cit.> and the fiducial rates <cit.>
σ(gg,bb̅→ H) ×BF(H →τ^+τ^-).
The calculation of m_W, the electroweak precision observables including δ a_μ, and the perturbative unitarity check are set up by SARAH <cit.>. A suite of flavor physics constraints are checked using the FlavorKit extension of SARAH <cit.>. BSMArt calls HiggsTools <cit.> to constrain the scalar sector[SARAH gives coupling ratios to HiggsTools to compute the constraints on the SM-like Higgs, which effectively incorporates higher-order effects from SM particles only; in the version of this model without extra fermions, the two-loop contributions to the diphoton partial width were recently computed <cit.> and shown to be modest.], and we interface to Vevacious++ <cit.> to check for vacuum stability. As alluded to above, and detailed below, we use MicrOMEGAs to calculate the dark matter relic abundance and confront limits from direct- and indirect-detection experiments. Finally, we call SModelS <cit.> to perform a fast check for experimental constraints originating from colliders. Points are kept for simulation only if the HiggsTools p-value exceeds p = 0.05, the vacuum is stable, the SModelS r-value is below r = 1, and all other constraints mentioned above are satisfied. After some preliminary scans, in order to directly target the relevant model parameter space and reliably satisfy the constraints discussed above, our large scans for this work rely on the following input parameters:
[ λ ∈ [0.2,0.3], λ_Δ ∈ [0,3.5],; δ_Δ ∈ [0,2] GeV, κ_Δ ∈ [0,3.5],; v_Δ ∈ [1,10] GeV, y_T ∈ [0,1],; y_X ∈ [0,1], m_X ∈ [95,400] GeV, ]
and m_T ∈ [96,435] GeV such that m_T - m_X∈ [1,35] GeV. Given small mixing between X and T^0, this requirement enforces a similar condition on Δ m(χ^0_2,χ^0_1), which is important for our LHC analysis in Section <ref>. Note that we use the Lagrangian parameters for input rather than the masses and mixing angles; this naturally ensures that the parameter space of all points is physical and respects perturbative unitarity.
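A minimal sketch of how such a scan can be seeded is shown below; the flat sampling and the random seed are our own illustrative assumptions (the actual scans are steered by BSMArt, and each point is subsequently passed through the constraint chain described above):

    import numpy as np

    rng = np.random.default_rng(seed=1)    # assumed seed, only for reproducibility of the sketch

    def draw_point():
        """Draw one candidate input point in the ranges quoted above."""
        point = {
            "lam":         rng.uniform(0.2, 0.3),
            "lam_delta":   rng.uniform(0.0, 3.5),
            "delta_delta": rng.uniform(0.0, 2.0),    # GeV
            "kappa_delta": rng.uniform(0.0, 3.5),
            "v_delta":     rng.uniform(1.0, 10.0),   # GeV
            "y_T":         rng.uniform(0.0, 1.0),
            "y_X":         rng.uniform(0.0, 1.0),
            "m_X":         rng.uniform(95.0, 400.0), # GeV
        }
        # enforce the compression requirement m_T - m_X in [1, 35] GeV
        point["m_T"] = point["m_X"] + rng.uniform(1.0, 35.0)
        return point

    points = [draw_point() for _ in range(10000)]
    # each point would then be passed through SPheno, HiggsTools, MicrOMEGAs, SModelS, etc.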
§ PHENOMENOLOGY OF THE DARK MATTER X01
The Yukawa-like coupling between the singlet fermion X and the weak triplet pair, which appears in (<ref>) with coupling strength y_X, provides the only channels through which X pairs can annihilate to visible particles at leading order. In terms of gauge eigenstates, depending on the hierarchy between the triplet scalar Δ and fermion T, the relevant processes are the t-channel annihilations XX̅→Δ^0 Δ^0, Δ^+Δ^-, with Δ^0 subsequently decaying primarily to W^+W^- and Δ^± primarily to W^±Z. Some representative diagrams are displayed in Figure <ref>. (While XX̅→ T^0T̅^0, etc., may be kinematically favored for some spectra, these processes cannot produce SM final states.)
This pattern, in which fermionic dark matter annihilates at tree level only to ≥ 4 SM particles through a pair of mediating fields with identical gauge quantum numbers, is the hallmark of the frustrated dark matter paradigm <cit.>. Typically, the DM candidate in frustrated DM models is strongly overabundant until its mass exceeds that of at least one mediator, at which point one or both mediators to which the DM annihilates can be approximately on shell. But in the present model, as discussed in Section <ref>, X mixes with the electrically neutral component T^0 of the weak-triplet fermion. The neutral scalars Φ^0,Δ^0 also mix. Even when these mixings are small, so that the DM candidate χ^0_1 is singlet like and the lightest CP-even scalar h is SM like, they open additional annihilation channels such as χ^0_1 χ̅^0_1 → W^+ W^- (both s- and t-channel diagrams contribute here) and χ^0_1 χ̅^0_1 → qq̅ (s channel, mediated by the SM-like scalar h). Furthermore, when the mass gap between X and T is small compared to their masses, coannihilation processes like χ^0_1 χ^0_2 → H^+ W^-, χ^0_1 χ^±_1 → W^±H, χ^0_2 χ^±_1 → qq̅' become efficient. Some representative diagrams for annihilation and coannihilation of various mass eigenstates are shown in Figure <ref>. These two phenomena—mixing and compression—allow a singlet-like χ^0_1 to freeze out with the relic abundance Ω h^2 = 0.12 ± 0.001 reported by the Planck Collaboration <cit.> even when it is lighter than the other BSM particles in the model.
In principle, both terrestrial (direct detection) and astrophysical (indirect detection) constraints apply to this realization of frustrated DM. In practice, however, DM-nucleon scattering has a small cross section in this model: it can proceed via SM-like Higgs exchange, as in the second diagram of Figure <ref>, but the rate is suppressed by both the SM Yukawa couplings and the small weak-triplet admixture of the 125 GeV scalar h. Thus it is understandable that MicrOMEGAs reports a trivial p = 0.50 for direct detection for all benchmark points. Constraints from searches for astrophysical γ rays, on the other hand, are not so trivial a priori. There are multiple open channels of DM annihilation into two-body SM states, notably bb̅ and W^+W^-, and even more channels into two-body BSM states subsequently decaying into multiple SM species, all of which may produce observable deviations in the continuous γ-ray spectra measured by telescopes like Fermi-LAT and MAGIC.[Monochromatic γ-ray signatures will also be produced by direct χ^0_1 χ̅^0_1 →γγ annihilation at one-loop order, but we relegate this calculation to future work.] To perform a rudimentary check of indirect-detection constraints on the points generated by our scans, we integrate the indirect-detection capabilities of MicrOMEGAs <cit.> with BSMArt for the first time. For each generated model point, we compare the annihilation cross section for every two-body SM final state to scraped limits from the 2016 joint Fermi-LAT/MAGIC analysis <cit.> for the appropriate DM mass m_χ^0_1. While these model-independent limits provide an incomplete picture, we expect limits on 2 → 2 → 4 annihilation processes (with more than two SM particles) to be weaker since the resulting γ rays should be softer. We also find that, for benchmarks predicting the correct DM relic abundance, the typical total χ^0_1 annihilation rates are between one and a few orders of magnitude smaller than the 𝒪(10^-26) cm^3 s^-1 cross sections probed by the aforementioned experiments. (This tends to happen because, as mentioned above, coannihilations are required to achieve the correct relic abundance.) Thus we have reason to believe that our benchmark points are not excluded by indirect searches for dark matter.
As mentioned in Section <ref>, we perform several scans for this work. We are naturally interested in obtaining points that predict an approximately correct DM relic abundance through freeze-out; by “approximately” we mean within 20% of the central Planck value, in order to account for uncertain higher-order effects. Some of our scans impose this constraint. On the other hand, Sections <ref> and <ref> show results of scans that allow the dark matter relic abundance to vary freely; i.e., under- and overabundant dark matter are allowed in order to cover a wider parameter space and to better map the edge of the region accommodating the correct relic abundance. We clarify below when a plot shows only points with the approximately correct Ω h^2. In both cases, we only retain model points that evade direct and indirect searches.
§ FITTING THE PURPORTED 152 GEV DIPHOTON EXCESS
A recent work <cit.> examines potential evidence of Drell-Yan production of a ∼ 152 GeV electrically neutral scalar in the sidebands of a Run 2 ATLAS search for new physics in final states including a SM-like Higgs boson <cit.>, and a concurrent ATLAS search for nonresonant Higgs pair production in leptonic or photonic final states <cit.>. The authors argue in <cit.> and other works <cit.> that this targeted mass is motivated by excesses in a number of channels, including final states with multiple leptons resulting from W boson decays <cit.>, the bb̅ + E_T^miss channel <cit.>, and several inclusive and exclusive γγ channels. In prior work <cit.>, (some of) the same authors claim that the combination of these channels provides evidence for a 152 GeV scalar with global (local) significances of 5.1 (4.8) standard deviations relative to the background hypothesis. These significances have been challenged <cit.> and may be as small as 4.1 (3.5)σ, but in any case seem to merit investigation.
The authors of <cit.> point out that the multilepton anomalies suggest a new particle with strong coupling to the W boson, but that conversely the lack of an excess in the ZZ → 4ℓ channel <cit.> points to a new particle with zero hypercharge. These ideas jointly motivate the authors to investigate the hyperchargeless Higgs triplet model. As mentioned above, they consider Drell-Yan production of the electrically neutral H with its charged partner H^±, followed by the decays H →γγ and H^±→{WZ, tb, cs, τν} (antiparticles depending on H^+/H^-). The authors identify eight signal regions in the ATLAS analyses <cit.> that are sensitive to these signals in principle, which for m_H ≈ 152 GeV have a total Run 2 LHC cross section of ∼ 1 pb. The analysis of this model is particularly convenient because (see Section <ref>) the new neutral and charged scalars are nearly degenerate, and the branching fractions of H^± can be expressed purely in terms of the neutral scalar's branching fraction to photons, BF(H →γγ). Therefore in <cit.>, the location and significance of the purported excess, within the framework of this model, is simply mapped onto the (m_H, BF(H →γγ)) plane. The authors find a 4.3 σ preference for BF(H →γγ) ≈ 1.2% at m_H ≈ 152 GeV. On the other hand, the authors also find tension in this model between the preferred value for BF(H →γγ) and the SM-like Higgs diphoton branching fraction, BF(h →γγ), and the requirements of vacuum stability and perturbative unitarity—so there is some question about the goodness of fit of this model to the 152 GeV excesses. Nevertheless, a BF(H →γγ) fit within 3σ appears possible, so we use this new-physics scenario as inspiration and require m_H ∈ [145,155] GeV. Since the addition of a fermionic multiplet with identical SM gauge quantum numbers does not abrogate the logic of <cit.>, we take its results at face value.
The triplet-like neutral scalar H couples weakly to photon pairs at one-loop order. The loops generating this coupling are composed of SM fermions (only top quarks are non-negligible), W bosons, charged scalars H^±, and the charged fermions χ^±_1,2. Most of these contributions have been reported previously <cit.>; we include the χ^± loops for completeness. The diphoton decay rate of the heavy scalar H can be expressed as
Γ(H →γγ) = G_Fα_EM^2/128√(2)π^3 m_H^3 | -4/3 s_αℱ(τ_t) + 1/√(2) y_T v c_α(1/m_χ_1^±ℱ(τ_χ^±_1) - 1/m_χ_2^±ℱ(τ_χ^±_2))
-8 G_F/2√(2) v(1/2 v s_α + 2 v_Δc_α)𝒱(τ_W) - 1/2m_H^±^2 v (λ_Δ v_Δc_α - κ_Δ v s_α) 𝒮(τ_H^±) |^2,
where τ_x = m_H^2/4m_x^2 for some particle x coupling directly to H, and
ℱ(τ) = 2/τ^2[τ+(τ-1)f(τ)],
𝒱(τ) = -1/τ^2[2τ^2 + 3τ + 3(2τ-1)f(τ)],
𝒮(τ) = -1/τ^2[τ-f(τ)]
with
f(τ) = arcsin^2 √(τ), τ≤ 1,
f(τ) = -1/4( ln[ (1+√(1-τ^-1))/(1-√(1-τ^-1)) ] - iπ)^2, τ > 1
are the usual one-loop three-point functions. The contribution from the charged fermions χ^±_1,2 is quite small since the splitting between these species is loop induced and tiny. The largest contribution is from the W boson loops but is strongly controlled by the doublet-triplet mixing: the prefactor of 𝒱(τ_W) in (<ref>) is suppressed either by sinα, which must be small to accommodate Higgs data, or by v_Δ, which must be small for electroweak precision observables. Therefore the diphoton decay rate is altogether small—dwarfed in particular by the tree-level decay Γ(H → W^+ W^-(*))—and the largest achievable diphoton branching fractions are of 𝒪(0.1)%.
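For concreteness, the loop functions entering (<ref>) are easy to evaluate numerically. The sketch below implements ℱ, 𝒱, 𝒮 and f(τ) and evaluates them at m_H ≈ 152 GeV (the quoted m_t and m_W are assumed reference values), illustrating why the W loop dominates the amplitude unless its prefactor is suppressed:

    import numpy as np

    def f(tau):
        """Scalar one-loop function; real branch for tau <= 1, complex for tau > 1."""
        if tau <= 1.0:
            return np.arcsin(np.sqrt(tau))**2
        root = np.sqrt(1.0 - 1.0/tau)
        return -0.25*(np.log((1.0 + root)/(1.0 - root)) - 1j*np.pi)**2

    def F(tau):  # fermion loops
        return 2.0/tau**2*(tau + (tau - 1.0)*f(tau))

    def V(tau):  # W-boson loop
        return -1.0/tau**2*(2.0*tau**2 + 3.0*tau + 3.0*(2.0*tau - 1.0)*f(tau))

    def S(tau):  # charged-scalar loop
        return -1.0/tau**2*(tau - f(tau))

    m_H, m_t, m_W = 152.0, 172.5, 80.4      # GeV; m_t and m_W are assumed reference values
    tau_t, tau_W = (m_H/(2.0*m_t))**2, (m_H/(2.0*m_W))**2
    print(F(tau_t), V(tau_W), S(tau_W))     # roughly 1.4, -9.9 and 0.8
    # |V(tau_W)| exceeds |F(tau_t)| by almost an order of magnitude, so the W loop
    # dominates unless its prefactor is suppressed by small alpha and v_delta.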
The leading-order diphoton width (<ref>) is automatically computed by SARAH/SPheno <cit.> for each benchmark point generated by our scans in BSMArt. Figure <ref> shows the results of a scan filtered to a relatively small collection of 618 points intended for LHC simulation, as discussed in Section <ref> below. Our generated points are superimposed on the best-fit region scraped from Figure 5 of <cit.> in the range m_H ∈ [145,155.25] GeV. The left panel of our Figure <ref> shows all of our points, which survive all constraints mentioned in Sections <ref> and <ref> but have unrestricted dark matter relic abundance. The right panel filters this collection once more to show only those points with Ω h^2 = 0.12 ± 20%. Our large scan requires BF(H →γγ) > 0.2%; hence the sharp cutoff visible in the left panel. In this model, it is not easy to generate points lying within two standard deviations of the central preferred value BF(H →γγ) ≈ 1.2% for m_H ≈ 152 GeV, though 3σ agreement is possible. This difficulty stems from the natural suppression of BF(H →γγ) associated with small Higgs mixing angles. These findings are consistent with those shown in Figures 5 and 6 of <cit.>.
§ FITTING THE MONOJET EXCESS; MULTIJET CONSTRAINTS?
The Run 2 CMS monojet search CMS-EXO-20-004 <cit.> was originally interpreted within several frameworks, including simplified dark matter models and first-generation scalar leptoquarks, but applies in principle to fermion pair production in our model as displayed in Figure <ref>. The dark matter χ^0_1 can produce sizable E_T^miss if the pair-produced fermions recoil off of hard initial-state radiation (ISR) jets. Vetoes of leptons and additional jets with p_T of more than a few GeV optimize this analysis for particles decaying invisibly or softly, which can happen in models with small mass splittings. In our model, when the fermions are compressed, the decays of heavier species tend to proceed through off-shell weak bosons and result in soft leptons and hadrons. Interestingly, as we have explored in previous works <cit.>, this analysis exhibits excesses in a variety of E_T^miss bins. This analysis has been recast in <cit.> and adapted into <cit.> for our convenience in previous work.[We have checked that monojet predictions obtained using the implementation provided by CMS <cit.> and those obtained with our own implementation agreed for all benchmarks in our scan, on the basis of samples of 4× 10^6 events.] We again use the latter implementation, which does not require a full detector simulation <cit.>. We are thus also able to take advantage of a new feature of BSMArt to run batches of events via gridpacks, and to check after each run for convergence to a desired accuracy: this capability has been augmented by a check against the experimental sensitivity of each signal region, rather than converging only when a given accuracy is reached. This saves significant processing time, since the signal regions of this analysis typically have very small efficiencies (because they are sliced into narrow bins in missing energy).
We are also interested in whether our model is constrained by multijet data, since processes of the kind exemplified in Figure <ref> could populate some low-multiplicity (low-N_jet) bins of multijet analyses. The CMS Run 2 multijet search CMS-SUS-19-006 <cit.> provides a good opportunity to compare multijet and monojet constraints, since these analyses use the same data. CMS-SUS-19-006 imposes cuts on N_jet, N_b-jet, and the scalar and vector sums of jet transverse momentum. It notably requires significant E_T^miss, such that it overlaps in principle with the monojet (+ E_T^miss) analyses. It is particularly convenient to use this multijet analysis for the present exercise because it is already implemented in <cit.>. Unlike the HackAnalysis monojet recast, this multijet recast requires a full detector simulation.
As exemplified in Figure <ref>, the signal in our model relevant to CMS-EXO-20-004 is electroweak fermion pair production with a hard jet. We use MadGraph5aMC@NLO to simulate leading-order production of all possible fermion pairs (of which by far the most important are pp →χ^+_1χ^-_1 and χ^+_2χ^-_2) accompanied by up to two additional partons in the matrix element(s). Parton showering and hadronization are performed using <cit.>. As mentioned above, for the HackAnalysis recast of CMS-EXO-20-004, we can dynamically generate events in batches until the (leading) efficiencies are under good statistical control. Nonetheless, owing to the tiny efficiencies in most signal regions of this analysis, for most parameter points we find that we still need to simulate our (self-imposed) upper limit of 10^7 events. We separately simulate the same processes, with up to two additional hard jets, in order to use the recast of the multijet analysis CMS-SUS-19-006. In this case, just over 4 × 10^6 events are simulated for each model point.
The results of our combined analysis are displayed in Figure <ref>. Here we show the expected and observed limits at 95% confidence level (CL) <cit.> for the recast monojet and multijet analyses in red and orange contours, respectively. Also shown for the monojet analysis are several isocontours of the Bayes factor B_10 <cit.>, which is a likelihood ratio for our model relative to (in this case) the Standard Model and provides a quantitative measurement of whether our model is favored or disfavored compared to the SM by the monojet search. Values of B_10 > 1 indicate that the evidence for our model is stronger than that for the SM. By plotting these contours, we can find the region (if it exists) in which our model provides the best fit to the monojet excess compared to the scenario without any new physics.[Because this quantity provides an excellent visualization of parameter space regions that accommodate excesses better than the Standard Model, it would be a good choice for future analyses of exotic scalars in the vein of <cit.>.]
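To make the meaning of B_10 concrete, the toy sketch below evaluates a single-bin Poisson likelihood ratio between the signal-plus-background and background-only hypotheses; the event counts are invented for illustration, and the Bayes factors quoted in this work are of course computed from the full multi-bin likelihood of the recast analysis:

    from math import exp, lgamma, log

    def poisson_loglike(n_obs, mean):
        return n_obs*log(mean) - mean - lgamma(n_obs + 1)

    def bayes_factor_single_bin(n_obs, background, signal):
        """Toy B_10: likelihood of signal + background over background only."""
        return exp(poisson_loglike(n_obs, background + signal)
                   - poisson_loglike(n_obs, background))

    # invented numbers for a single E_T^miss bin: 105 events observed, 90 expected
    # from the SM background, 12 expected from chi pair production with an ISR jet
    print(bayes_factor_single_bin(105, 90.0, 12.0))   # ~3: the bin mildly prefers the signal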
As for other models we have explored in previous works, we find that the monojet analysis is expected (observed) to be sensitive only to points with Δ m(χ^0_2,χ^0_1) ≲ 14 (10) GeV. On the other hand, the multijet constraints are much less sensitive to the mass splitting and are not too strong, hovering around m_χ^0_2∼ 125–175 GeV. The significant excess in the monojet analysis is visible in the difference between the highest expected limit on m_χ^0_2, around 320 GeV, and the observed limit at just above 225 GeV. The Bayes factor contours confirm that the monojet excess is associated with mass splittings of 6 GeV or less, and—interestingly, since the limit contours cannot give this information—that our weak frustrated DM model provides the best fit to the monojet excess, relative to the SM, for m_χ^0_2∼ 300–375 GeV. In that part of the (m_χ^0_2,Δ m) plane, our model is at least five times more plausible than the SM.
We finally mention the blue shaded region in approximately the upper half of Figure <ref>. In this region, the relic abundance of the DM candidate χ^0_1 can lie within 20% of the observed value, Ω h^2 ≈ 0.12, for some choice(s) of input parameters. The edge of this region appears quite sharply in the large scan that we filtered in order to obtain the 618 points used for LHC simulation; while the high dimensionality of the input parameter space makes the “correct Ω h^2” region quite diffuse, we find that beneath the edge that we have drawn in Figure <ref>, the compression between χ^0_1 and χ^0_2 (hence χ^±_1,2) is strong enough that the rate of coannihilations becomes too great, such that the dark matter is underabundant in the present epoch. Unfortunately, the region with the approximately correct relic abundance only overlaps with the B_10=2 contour: our model does not seem to simultaneously give the best fit to the monojet excess and the preferred DM region. But it remains intriguing that there is some overlap.
§ CONCLUSIONS
In this work we have explored an extension of the hyperchargeless Higgs triplet model that includes additional electroweak fermions in a frustrated dark matter configuration. In addition to predicting a neutral scalar that couples to photon pairs at one-loop order, as suggested by the purported diphoton anomaly around 152 GeV, this model produces additional monojet events as suggested by the Run 2 excesses in that channel. We have performed a number of parameter space scans and used high-quality recasts of Run 2 CMS monojet and multijet analyses (CMS-EXO-20-004 and CMS-SUS-19-006) to explore all of this phenomenology and evaluate whether there is any overlap between the regions favored by any or all of the searches and measurements.
In addition to computing traditional limits at 95% CL from the monojet and multijet analyses, we have computed the Bayes factor for the monojet analysis and plotted the isocontours of this quantity in order to visualize the region in which our model is specifically favored over the Standard Model by the monojet data. We find a region approximately centered on (m_χ^0_2,Δ m(χ^0_2,χ^0_1)) = (330,4) GeV in which our model is at least five times more plausible than the Standard Model with respect to that analysis. We also note some overlap between the region in which the lightest neutral fermion χ^0_1 can freeze out with the observed dark matter relic abundance and an area in which our model provides a monojet fit at least twice as plausible as that offered by the SM. In the meantime, we have demonstrated that none of the parameter space in our model that is interesting for monojets is constrained by the multijet search. In the end, while there is no single point providing a perfect fit to all data, this model has multiple interesting characteristics and provides an excellent monojet signal.
These results suggest several interesting avenues of future work. First, a more thorough investigation of the dark matter phenomenology in this model is warranted, including a more accurate calculation of indirect-detection limits. Different variants of this model have been proposed to fit the CDF W mass measurement <cit.> and the 95 GeV excesses <cit.>; it could be worthwhile to explore these alongside the monojet excess, but fitting the 95 GeV excesses is not straightforward since they appear in the γγ, bb̅, and τ^+ τ^- channels in a pattern different from the branching fractions of the SM-like Higgs boson. Some more model building may be in order: it would be interesting to find a model with better overlap between the monojet-favored region and the area with correct DM relic abundance—and perhaps explain the mild excesses in the soft-lepton channels. The electroweak fermions in our model do not couple strongly enough to light leptons to produce more than a handful of events for m_χ^0_2 of 𝒪(100) GeV, so a different scheme is needed to produce soft-lepton signals. Finally, on the recasting side, implementations of an ATLAS multijet search, the CMS Run 2 soft-lepton analysis, and an ATLAS search for new physics in final states with b-jets + E_T^miss <cit.> are at various stages of development.
The work of B. F., M. D. G., and T. M. was supported in part by Grant ANR-21-CE31-0013, Project DMwithLLPatLHC, from the Agence Nationale de la Recherche (ANR), France.
| http://arxiv.org/abs/2409.02179v1 | 20240903180002 | Enhanced Dark Matter Abundance in First-Order Phase Transitions | ["Rouzbeh Allahverdi", "Cash Hauptmann", "Peisi Huang"] | hep-ph | ["hep-ph", "astro-ph.CO", "astro-ph.HE", "hep-th"] |
rouzbeh@unm.edu
Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131,
USA
chauptmann2@huskers.unl.edu
Department of Physics and Astronomy, University of Nebraska, Lincoln, NE 68588, USA
peisi.huang@unl.edu
Department of Physics and Astronomy, University of Nebraska, Lincoln, NE 68588, USA
§ ABSTRACT
We propose a novel scenario to obtain the correct relic abundance for thermally under-produced dark matter. This scenario utilizes a strongly first-order phase transition at temperature T_ PT that gives rise to dark matter mass m. Freeze-out in the broken phase can yield the desired abundance in the entire region currently allowed by observational bounds and theoretical constraints for 10^2 T_ PT≲ m ≲ 10^4 T_ PT. We show that the accompanying gravitational waves are strong enough to be detected by many upcoming and proposed experiments. This, in tandem with dark matter indirect searches, provides a multi-messenger probe of such models. Positive signals in the future can help reconstruct the potential governing the phase transition and shed light on an underlying particle physics realization.
Enhanced Dark Matter Abundance in First-Order Phase Transitions
Peisi Huang 0000-0003-3360-2641
===============================================================
§ INTRODUCTION
There are various lines of evidence for the existence of dark matter (DM) in our universe <cit.>. However, the nature of DM remains a major open problem at the interface of cosmology and particle physics, and an important question concerns the relic abundance of DM that is precisely inferred from cosmological measurements <cit.>. An interesting and widely studied mechanism to address this question is thermal freeze-out in a radiation-dominated universe. The correct relic abundance can be obtained in this framework if the thermally averaged DM annihilation rate ⟨σ v ⟩ takes the nominal value of
3 × 10^-26 cm^3 s^-1 at the time of freeze-out (coined “WIMP miracle"). Indirect detection experiments have put ⟨σ v ⟩ under increasing scrutiny. For example, Fermi-LAT’s
results from observations of dwarf spheroidal galaxies <cit.>
and newly discovered Milky Way satellites <cit.> have placed limits that rule out thermal DM with a mass below 20 GeV in a model-independent way <cit.> (barring special cases with p-wave annihilation or coannihilation).
The correct relic abundance may be obtained for larger and smaller values of ⟨σ v ⟩ if DM does not drop out of chemical equilibrium in a radiation-dominated universe. An important such scenario is DM production in nonstandard thermal histories with an epoch of early matter domination (EMD) prior to big bang nucleosynthesis (BBN) <cit.>. In the case with ⟨σ v ⟩ < 3 × 10^-26 cm^3 s^-1, which leads to thermal overproduction, entropy generation at the end of EMD can yield an acceptable relic abundance from freeze-out during EMD <cit.> or production at the end of EMD <cit.>. In the thermally under-produced case, where ⟨σ v ⟩ > 3 × 10^-26 cm^3 s^-1, the correct abundance can be obtained by a combination of DM production and its residual annihilation at the end of EMD <cit.>.
Here we propose an alternative scenario that can yield the desired relic abundance for large values of ⟨σ v ⟩. Instead of relying on a period of EMD, this scenario employs a first-order phase transition (FOPT) within the standard thermal history. The DM particles remain light at temperatures above that of the FOPT, denoted by T_ PT, and hence stay in chemical equilibrium with the thermal bath until then. The FOPT bumps the DM mass to values m ≫ T_ PT, as a result of which DM particles undergo annihilation. For suitable values of T_ PT
one can obtain the correct DM relic abundance. Additionally, the FOPT is expected to form gravitational waves (GWs) that could be probed by upcoming detectors. This allows us to test this scenario via its GW signal in tandem with the current experimental bounds and future projections from indirect DM searches.
Our main goal is to examine viability of the proposed scenario and its observable signatures. Therefore we remain agnostic to the specifics of underlying particle physics realizations of this scenario. That said, our results for the GW signal fit well within models accommodating supercooled FOPTs such as classically conformal B-L models <cit.>, Peccei-Quinn axion models <cit.>, strongly coupled theories <cit.>, and extra dimensional theories <cit.>. Moreover, an increase in DM mass after a FOPT may happen for any model with a fermionic DM candidate that is charged under a spontaneously broken chiral gauge symmetry <cit.>.
§ FREEZE-OUT AND ABUNDANCE ENHANCEMENT
The following Boltzmann equation provides the dynamics of the particle number (proper/physical) density n of DM as it evolves in a Friedmann-Lemaître-Robertson-Walker spacetime:
dn/dt = -3Hn - ⟨σ v ⟩[ n^2 - n_eq^2 ] .
Hubble's parameter H ≡ (da/dt)/a is given by the cosmic scale factor a and its rate of change with respect to cosmic time t, σ is the DM + DM annihilation cross section, and v is the Møller speed between two interacting DM particles. Angular brackets denote a thermal average: ⟨σ v ⟩≡ (∫σ v n)/(∫ n). In deriving Eq. <ref> the DM is assumed to have a Maxwell-Boltzmann-like distribution function f ∝ e^-E/T in a Lorentz frame, where E = (m^2 + |𝐩|^2)^1/2 is the energy of DM with momentum 𝐩, and T
is its temperature. The equilibrium density n_eq is given by an exact Maxwell-Boltzmann distribution:
n_eq = ∫ e^-E/T g/(2π)^3 d^3p
where g is the number of internal degrees of freedom for the DM particle. This study uses g=2 for concreteness.
Given enough time, the cosmic expansion rate dominates the Boltzmann equation and DM densities are too sparse for number-changing processes to occur. This is called freeze-out and happens at some temperature T_f. The final amount of DM can be characterized by its relic abundance Ω≡ρ_ DM / ρ_ c,0; the ratio of DM energy density to today's critical energy density. Asymptotic solutions to the Boltzmann equation find
Ω_ thermal = s_0/ρ_c,0√(45/π)√(g_*)/g_*sm/M_Pl T_f ⟨σ v ⟩
(see Appendix <ref>). Today's entropy density is denoted with s_0, while g_* and g_*s respectively give the total relativistic degrees of freedom in the SM + DM plasma and entropy density <cit.> in a radiation-dominated universe (we take g_* = g_*s = 106.75 + 2). The Planck mass is M_ Pl = 1.22 × 10^19 GeV. This study assumes s-wave annihilation is dominant.
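As a sanity check of Eq. <ref>, the thermal abundance can be evaluated directly; in the sketch below the unit conversions, s_0/ρ_c,0, and x_f = m/T_f ≈ 20 are standard reference values rather than outputs of this work:

    import numpy as np

    M_pl = 1.22e19                    # GeV
    s0_over_rhoc0 = 2.74e8            # s_0/(rho_c,0/h^2) in GeV^-1 (reference value)
    gstar = 108.75                    # 106.75 (SM) + 2 (DM)
    cm3s_to_GeVm2 = 1.0/1.17e-17      # converts cm^3 s^-1 to GeV^-2

    def omega_thermal_h2(m, sigv_cm3s, x_f=20.0):
        sigv = sigv_cm3s*cm3s_to_GeVm2
        T_f = m/x_f
        return (s0_over_rhoc0*np.sqrt(45.0/np.pi)*np.sqrt(gstar)/gstar
                * m/(M_pl*T_f*sigv))

    print(omega_thermal_h2(100.0, 3e-26))   # ~0.06: the familiar thermal-WIMP ballpark
    print(omega_thermal_h2(100.0, 3e-25))   # ten times larger <sigma v> -> under-produced by ten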
In the standard freeze-out analysis, Eq. <ref> is derived under the assumption that the DM number density closely tracks its equilibrium values for early times before freeze-out (called thermal DM). Given sufficient interactions between DM and the SM, this assumption is justified for DM produced in these early times, as the particles will quickly reach thermal equilibrium with the dense universe.
However, if no appreciable number of massive DM particles exist
for T ≥ T_f then Eq. <ref> cannot be applied. This has been studied in the context of non-thermal DM production from the decay of inflaton fields <cit.> or moduli fields in supersymmetric models <cit.>. For DM produced after what would have been considered its characteristic freeze-out (had it been in equilibrium in the early universe), two cases must be considered, demarcated by some critical DM number density n_c
given by
n_c(T) = 3H/⟨σ v ⟩ ,
which is defined for times after freeze-out (T < T_f) when the DM has fallen out of thermal equilibrium and therefore the equilibrium rate ⟨σ v ⟩ n_eq^2 in Eq. <ref> is negligible. This critical number density is found by equating the remaining expansion and annihilation rates: 3Hn_c = ⟨σ v ⟩ n_c^2.
If there is too little DM produced at T_P < T_f and n(T_P) < n_c(T_P), the cosmic expansion dominates annihilation. In this case, the relic abundance is already frozen in at T_P and is determined by the DM abundance at its production: Ω_DM = m n(T_P) / ρ_c,0.
The more interesting case occurs when DM is produced in excess of its critical value in Eq. <ref>.
When DM is produced at some temperature T_P < T_f with density n(T_P) > n_c(T_P), the annihilation rate dominates the cosmic expansion rate and rapid annihilation will occur until the two rates are equal and the DM density begins to stabilize. The same techniques of standard freeze-out analysis can be applied to find the subsequent Boltzmann evolution of this delayed, non-thermal production of DM. The resulting relic abundance finds an enhancement from the standard thermal DM scenario:
Ω_ DM = Ω_ thermalT_f/T_P
which is no longer a function of T_f according to Eq. <ref>.
As long as the criterion n(T_P) > n_c(T_P) is met, the exact value of the initial DM density at its production plays no role in determining the asymptotic values, as seen in Eq. <ref>. Rapid annihilation drives the initial abundances toward a common attractor.
§ DARK MATTER PARAMETER SPACE
For a given model of DM with mass m and thermal cross section ⟨σ v ⟩, Eq. <ref> gives the necessary temperature T_P at which the DM is produced to yield some relic abundance Ω_DM. The production temperatures which yield today's relic abundance of Ω_DMh^2 = 0.12 <cit.> are shown in Fig. <ref>. Much of the DM parameter space has been ruled out by both theoretical and observational bounds. On the theoretical side, non-composite DM must have upper bounds on ⟨σ v ⟩ to maintain unitarity in 2 → 2 interactions. Following arguments first made in Ref. <cit.>, we use a unitarity bound of ⟨σ v ⟩≲ 4π / (m^2 ⟨ v ⟩) where ⟨ v ⟩≈√(6 T_f/m) (see Appendix <ref>). As for observational bounds, Ref. <cit.> combines limits on DM annihilation coming from Planck satellite measurements of the cosmic microwave background <cit.>, Fermi-LAT measurements of dwarf spheroidal galaxies in the Milky Way <cit.>, and the Alpha Magnetic Spectrometer measurements of cosmic rays <cit.>. These bounds only consider “visible” final states consisting of photons, leptons, or hadrons. Lastly, we carve out the parameter space which yields an overabundance of DM; models within this region find freeze-out times T_f giving Ω_DM h^2 > 0.12. As the delayed production of DM above its critical abundance can only lead to an increase in its density, this region remains excluded for such T_P < T_f models.
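A minimal sketch of how the viable band in Fig. <ref> can be traced is given below: Eq. <ref> is inverted for the production temperature that gives Ω_DM h^2 = 0.12, and the result is compared against the unitarity ceiling. The benchmark point and the reference constants are illustrative assumptions:

    import numpy as np

    M_pl, gstar, s0_over_rhoc0 = 1.22e19, 108.75, 2.74e8   # GeV, -, GeV^-1 (reference values)
    cm3s_to_GeVm2 = 1.0/1.17e-17

    def x_P_required(m, sigv_cm3s, omega_h2=0.12):
        """m/T_P needed so that annihilation after production leaves Omega h^2 = omega_h2."""
        sigv = sigv_cm3s*cm3s_to_GeVm2
        prefac = s0_over_rhoc0*np.sqrt(45.0/np.pi)*np.sqrt(gstar)/gstar
        return omega_h2*M_pl*sigv/prefac

    def sigv_unitarity_cm3s(m, x_f=20.0):
        """Rough s-wave unitarity ceiling <sigma v> <~ 4 pi/(m^2 <v>), with <v> ~ sqrt(6/x_f)."""
        v_rel = np.sqrt(6.0/x_f)
        return 4.0*np.pi/(m**2*v_rel)/cm3s_to_GeVm2

    m, sigv = 1000.0, 1e-24          # GeV and cm^3 s^-1: an illustrative under-produced point
    print(x_P_required(m, sigv))     # ~1.3e3, inside the 10^2-10^4 window found in this work
    print(sigv < sigv_unitarity_cm3s(m))   # True: below the unitarity ceiling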
With the remaining DM parameter space of Fig. <ref>, we include single-channel bounds from searches of DM annihilating to τ^- τ^+ states from HAWC <cit.>, LHAASO <cit.>, and HESS <cit.> as this channel consistently provides the most stringent bounds across different experiments. Projected sensitivities of CTA <cit.> are shown by the dashed curve. Note that a significant portion of the DM parameter space allowed by HESS is expected
to be probed by CTA searches, yet masses near 20 GeV or 100 TeV
will still remain out of reach.
§ MASSIVE DARK MATTER PRODUCTION IN PHASE TRANSITIONS
We now outline a simple mechanism which can produce massive DM in a cosmological FOPT occurring after its characteristic freeze-out time. Cosmological phase transitions are present in any particle physics model exhibiting symmetry breaking from an
underlying (gauge) group. As this happens, the particles of the universe transition from a phase of interactions described by the original symmetry of the full (gauge) group to a phase in which that symmetry is spontaneously broken down to some subgroup.
FOPTs are characterized by the broken phase nucleating in pockets that are stochastically distributed throughout the symmetric phase. As the FOPT proceeds,
bubbles of broken phase expand
until the universe is completely enveloped by the broken phase.
In this mechanism of DM “production", the DM exists prior to the phase transition, however a change in its mass occurs. The usual scenario begins with massless
fermionic species χ (eventually the DM candidate)
with a Yukawa
coupling to some scalar field ϕ:
ℒ⊃ - y ϕ^* χ_R χ_L + h.c. ,
where χ_R and χ_L denote the right and left chiral components of χ respectively. After the phase transition
the scalar field develops a nonzero, symmetry-breaking vacuum expectation value (VEV) ⟨ϕ⟩
resulting in a mass term m = y⟨ϕ⟩ for the DM.
The massive DM abundance in the true vacuum is determined by the amount of massless
χ in the false vacuum that have enough momentum to overcome the mass gap across the bubble wall. Assuming thermal equilibrium at the time of the FOPT, the number density of massless
χ is given by n_fv(T_PT) = (3 / 4π^2) ζ(3) γ_w g T_PT^3. Therefore, if some fraction n / n_fv of
χ particles penetrates into the true vacuum, the condition for abundance enhancement reads (n / n_fv) n_fv≥ n_c(T_PT), or
n/n_fv⟨σ v ⟩ T_PT≥√(2/3)8 π^5/2/ζ(3) M_Pl√(g_*)/g .
The situation can be easily extended to other massive DM production mechanisms in FOPTs, such as scalar decay outlined in Appendix <ref>.
One comment is in order at this point.
In general, mass of the particle that mediates interactions of DM may also change during a FOPT leading to an increase in ⟨σ v ⟩ (for example, see <cit.>). However, this will not affect the final DM abundance in our scenario. This is because the number density of χ particles can acquire and track the equilibrium value from scatterings in the thermal bath even if ⟨σ v ⟩ is much (but not extremely) smaller than 3 × 10^-26 cm^3 s^-1 at T ≫ T_ PT. All that matters is that ⟨σ v ⟩ has reached its final value when the phase transition ends.
§ PENETRATING BUBBLE WALLS
Eq. <ref> provides a lower bound on the abundance of DM particles in the true vacuum required for abundance enhancement. However, not all particles can penetrate into the true vacuum through the expanding bubble walls. When particles acquire a larger mass upon entering the true vacuum due to a nonzero VEV, only false vacuum particles with energies (relative to a bubble wall) above the mass difference can penetrate. More accurately, for a particle with three-momentum 𝐩 and mass m_fv in the false vacuum, it could penetrate only if |𝐩|^2 ≥ m_tv^2 - m_fv^2 where m_tv is its mass in the true vacuum. We will take m_fv = 0 in the following. To calculate the fraction of particles that successfully transition to true vacuum, we assume the false vacuum populations are relativistic and in thermal equilibrium with the matter content of the universe, and we find the flux of particles with enough momentum to penetrate the walls.
In the frame of reference of a bubble wall with relative speed v_w, a relativistic particle of spin s is described by the following one-particle phase space distribution function when in equilibrium:
f_fv = 1/e^[γ_w |𝐩| (1 - v_w cosθ) - μ]/T - (-1)^2s ,
where γ_w = (1-v_w^2)^-1/2 is the Lorentz factor, μ is the chemical potential, and π - θ is the angle between the particle's three-momentum 𝐩 and the wall's local velocity. The number density of fermions in the false vacuum is then
n_fv = ∫ f_fv g/(2π)^3 d^3p
= g/2 π^2γ_w [ 3/2ζ(3) + π^2/6μ/T + ln(2) μ^2/T^2] T^3
where g is the particle's number of internal degrees of freedom and ζ is the Riemann zeta function. The flux density of these particles which can penetrate a bubble wall is given by
J = ∫|𝐩| cosθ/|𝐩| f_fvΘ(|𝐩| cosθ - m) g/(2π)^3 d^3p
= g/(2π)^2[ 1/2T^2 (1-v_w)^2 γ_w^3{( m^2 (1-v_w)^2 γ_w^2 - μ^2 ) ( -iπ - 2tanh^-1[ 1+2e^m(1-v_w)γ_w-μ/T] + ln[ 1+e^-m(1-v_w)γ_w+μ/T] ) - 2mT(1-v_w)γ_w Li_2( -e^-m(1-v_w)γ_w+μ/T) - 2T^2 Li_3( -e^-m(1-v_w)γ_w+μ/T)}] T^3
where Θ is the Heaviside step function and Li_q is the polylogarithm function of order q. As J is the number of particles per area per time that can penetrate bubble walls, the number density of particles that pass into the true vacuum is n = J / v_w.
One can now find the fraction of particles which penetrate into the true vacuum and acquire mass:
n/n_fv = J/v_w n_fv .
Note that, as shown in Fig. <ref>, this fraction is a function only of wall speed v_w and x ≡ m / T whenever μ = 0 (which is assumed here). We use this relationship to find the minimum wall speed for a given FOPT to produce today's relic abundance of DM in the following way. For DM produced after its characteristic freeze-out, requiring Ω_DM h^2 = 0.12 allows Eq. <ref> to find the necessary time x_P = m / T_P at which the DM must be produced. Yet, the abundance enhancement only applies if Eq. <ref> is satisfied, requiring a minimum fraction of penetrating particles. Thus, a given x_P and minimum n / n_fv can identify a minimum wall speed.
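The fraction n/n_fv is also easy to obtain by direct numerical integration of the phase-space expressions above for μ = 0; the sketch below uses a crude momentum grid (grid sizes and the example wall speeds are our assumptions) rather than the closed form quoted above:

    import numpy as np

    def penetration_fraction(v_w, x, n_p=4000, n_c=400):
        """Fraction n/n_fv of massless false-vacuum fermions (mu = 0) able to cross the
        wall and acquire mass m = x*T, for wall speed v_w. Momenta are in units of T and
        measured in the wall frame; crude rectangle-rule integration over p and cos(theta)."""
        gw = 1.0/np.sqrt(1.0 - v_w**2)
        p_max = x + 60.0/(gw*(1.0 - v_w))          # distribution is negligible beyond this
        p = np.linspace(1e-4, p_max, n_p)
        c = np.linspace(-1.0, 1.0, n_c)            # c = cos(theta)
        P, C = np.meshgrid(p, c, indexing="ij")
        f = 1.0/(np.exp(np.clip(gw*P*(1.0 - v_w*C), None, 700.0)) + 1.0)
        dp, dc = p[1] - p[0], c[1] - c[0]
        n_fv = np.sum(P**2*f)*dp*dc                # the common prefactor g/(2 pi)^2 cancels
        J = np.sum(P**2*C*f*(P*C > x))*dp*dc       # flux with p*cos(theta) above the mass gap
        return J/(v_w*n_fv)

    # faster walls (gamma_w*(1 - v_w)*x of order one) let an O(1) fraction through,
    # while slower walls exponentially suppress the penetrating population:
    for v_w in (0.9, 0.99, 0.999):
        print(v_w, penetration_fraction(v_w, 10.0))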
We end this section with a discussion on the possibility of forming compact objects from DM trapped in the false vacuum. As a FOPT nears completion, bubbles of the true vacuum dominate the universe and domains or remnants of the false vacuum shrink as bubbles continue to expand. These remnants may contain those DM particles which lack the sufficient three-momenta to overcome the mass gap across bubble walls. If regions within these remnants become sufficiently dense, compact DM objects may form under their own gravitational interactions. For example, these compact objects could be black holes <cit.> or, if the DM is fermionic, an intermediate stage of Fermi-ball formation <cit.> may take place before collapsing into black holes. Whatever the mechanism, DM annihilation can severely preclude the formation of compact objects. To keep DM from completely annihilating within false vacuum remnants, particle models must have either an asymmetric distribution of DM to anti-DM populations, or some suppression of the annihilation mechanism via small Yukawa couplings or kinematic constraints. Since our analysis has no requirement for small Yukawa couplings between the DM and the FOPT-inducing scalar field, we do not consider scenarios with suppressed DM annihilation, and therefore assume no appreciable formation of compact objects before DM annihilates within (or escapes from) domains of the false vacuum.
§ GRAVITATIONAL WAVES
Particularly strong FOPTs could displace enough energy to produce a stochastic GW background detectable by future experiments which aim to measure the amplitudes and frequencies of GWs. FOPTs that release more latent heat produce GWs of greater amplitudes and are thus easier to detect. Given the stochastic nature of FOPTs, their GW signal is expected to yield a wide range of frequencies peaked around some frequency heavily influenced by the characteristic timescale of the phase transition. These two properties of a FOPT, its latent heat and timescale, are characterized by α and β:
α = 1/ρ_r(T){ [V_eff(ϕ_false, T) - V_eff(ϕ_true, T)] - T [ ∂ V_eff(ϕ_false, T)/∂ T - ∂ V_eff(ϕ_true, T)/∂ T ] }|_T=T_PT ,
β = - d(S_3 / T)/dt|_t=t_PT .
Here, ρ_r(T) = π^2 g_*(T) T^4 / 30 is the energy density of the radiation bath at the time of the FOPT with g_* the relativistic degrees of freedom in the plasma. Subscripts “true” and “false” denote quantities evaluated in the true and false vacuua. More useful in GW calculations is β / H(T_PT) where H is the Hubble parameter. T_PT is a characteristic temperature at which the FOPT occurs. In what follows, we also take this to be the temperature at which massive DM is produced (T_PT = T_P). It should be noted that T_PT is a post-supercooling temperature and all reheating effects take place before this time.
In a FOPT, the three dominant contributions to a GW background come from (1) sound waves in the universal plasma, (2) collisions of true vacuum bubbles, and (3) magnetohydrodynamic turbulence in the plasma. Following Ref. <cit.>, we approximate the total GW background to be a linear combination of these stochastic sources:
Ω_GW = Ω_sw + Ω_col + Ω_turb
with Ω = ρ / ρ_c the corresponding fraction of the critical energy density, however GW signal spectra are usually presented in terms of Ω(f) ≡dΩ / dln(f) <cit.>, which is the notation used here. To determine each of these contributions, one needs knowledge of the fractional amount of available latent heat that transfers into (1) the bulk motion of the fluid (κ_sw), (2) the bubble wall kinetic energy (κ_col), and (3) turbulence (κ_turb). The sound wave contribution is numerically fitted as
h^2 Ω_sw(f) = 3 × 2.65 × 10^-6[ H(T_PT)/β] [ κ_swα/(1+α)]^2
×[ 100/g_*(T_PT)]^1/3 v_w [ f/f_sw]^3 [ 7/(4 + 3(f/f_sw)^2)]^7/2 ,
f_sw = 1.15/v_w[ β/H(T_PT)] h_* ,
h_* = 1.65 × 10^-5 Hz[ T_PT/100 GeV] [ g_*(T_PT)/100]^1/6
where κ_sw is the fraction of latent heat transferred into the bulk motion of the plasma, and g_*(T_PT) is the number of relativistic degrees of freedom for all fields when the FOPT occurs. (The prefactor of 3 is found in the erratum of Ref. <cit.>.) The effective number of degrees of freedom coming from relativistic SM particles changes with temperature <cit.>. Redshifting the peak frequencies to their expected values today is accomplished with the h_* factor which assumes adiabatic expansion of a radiation-dominated universe. A numerical fit for the contributions from magnetohydrodynamic turbulence is found to be
h^2 Ω_turb(f) = 3.35 × 10^-4[ H(T_PT)/β] [ κ_turbα/(1+α)]^3/2
×[ 100/g_*(T_PT)]^1/3 v_w
×[ (f/f_turb)^3/( [1+(f/f_turb)]^11/3 [1+8π f/h_*] ) ] ,
f_turb = 1.64/v_w[ β/H(T_PT)] h_*
where κ_turb gives the fraction of latent heat transferred into plasma turbulence. Finally, for a strongly supercooled phase transition, we use lattice simulation results from Ref. <cit.> to provide a numerical fit for the contributions from collisions of bubble walls:
h^2 Ω_col(f) = 1.67 × 10^-5[ H(T_PT)/β]^2 [ κ_colα/(1+α)]^2
×[ 100/g_*(T_PT)]^1/3[ 1 + (2π f/0.13 β)^d-a/1 + (2π f_col/0.13 β)^d-a]
×3.63 × 10^-2× (a+b)^c/[b(f/f_col)^(-a/c) + a(f/f_col)^(b/c)]^c,
f_col = 0.81/2π[ β/H(T_PT)] h_* .
Here, κ_col gives the fraction of latent heat converted into kinetic energy of the bubble walls. The numerical parameters are a=2.54, b=2.24, c=2.30, and d=0.93.
To meet relic abundance requirements, Fig. <ref> shows that the DM must be produced at times x_P ≡ m/T_P ∼ [10^2, 10^4]. If DM mass is proportional to the scalar field's VEV, high x_P indicates the FOPTs should occur at temperatures well below the VEV, given that
y ≤𝒪(1) in the perturbative regime.
This suggests the FOPT is supercooled, meaning it completes at some temperature well below its critical temperature (the temperature, typically on the same energy scale of the VEV, at which the effective potential acquires degenerate minima). With supercooling present, at the end of the FOPT a large difference in the effective potential obtains between the true and false vacuum phases, releasing much more energy than a non-supercooled transition. For this analysis, we thus assume α≫ 1 meaning the contributions to Ω_GW(f) are practically independent of α[For α≈ 1, the GW signal strength is only reduced by a factor of about four. Even with this reduction, Fig. <ref> shows GW signals are still well within projected detector sensitivities. On the other hand, α should not be allowed arbitrarily large values to avoid an extended period of vacuum energy domination, requiring adjustments to the GW signal calculations.]. In taking α≫ 1 limits, we also calculate κ_sw = κ_sw(α, v_w) using numerical fits found in Ref. <cit.>. For values of v_w required in these fits and in Ω_GW(f), we use the minimum wall speed required for abundance enhancement outlined in Sec. <ref>. In the remaining GW spectra parameters κ_col and κ_turb, simulations <cit.> have shown power in transverse modes of plasma fluid velocity to be at most 5-10% of that in longitudinal modes. We thus conservatively take κ_turb = 0.05 κ_sw. Finally, to find κ_col we enforce κ_sw + κ_turb + κ_col = 1 which assumes all latent heat is dispersed among sound waves, turbulence, and bubble collisions.
§ RESULTS
To calculate GW spectra, we take points in the m-⟨σ v ⟩ plane, shown in Fig. <ref>, which yield the correct DM relic abundance. Furthermore, we do not consider the points excluded by visibles and unitarity bounds. Within this study, the only free parameter left in calculating GW spectra is β / H(T_PT), which we take to be greater than one to ensure bubble nucleation rates exceed universal expansion rates and the FOPT can complete. Peak values of GW signals from FOPTs are plotted in Fig. <ref> to compare against expected sensitivities of proposed GW detection experiments. It is clear that supercooled FOPTs with α≫ 1 help provide GW signals strong enough to reach expected sensitivities for a wide range of β / H(T_PT). Although the correlation is not exact, we can also see that in models endowing larger masses to DM fields, the resulting GW signals peak at higher frequencies; suggesting these models require more rapid FOPTs. As mentioned above, each signal peak is determined by a point in Fig. <ref> which is not excluded by bounds on visible annihilation channels, unitarity, and overabundance. The peaks are further distinguished by their shape: disks (∘) are points permitted by HESS bounds while crosses (×) are not. In addition to evading HESS bounds, disks with black outlines are probeable by projected sensitivities of CTA.
Finally, in Fig. <ref> we offer a fuller look at possible DM candidates in a parameter space mixing particle characteristics and GW data from the accompanying FOPT. In this plot, the DM particle mass is given by the vertical axis while thermally averaged cross sections color each point. The horizontal axis provides the frequencies of peak GW signals, and we include three frequency bands demarcating the projected sensitivity ranges of THEIA, LISA, ALIA, and ET. Note that although a given point may fall within one of these frequency bands, its GW signal strength will determine whether the point might be probed by some detector. As in Fig. <ref>, disks denote points compatible with HESS bounds while crosses do not; disks with black outlines
can be further probed by CTA. The four distinct (although overlapping) sets of points correspond to different β / H(T_PT) values in the FOPTs, labeled near the bottom of the plot.
§ CONCLUSION
In this paper, we presented a new scenario for obtaining the correct relic abundance in the case of thermally under-produced DM, i.e., ⟨σ v ⟩ > 3 × 10^-26 cm^3 s^-1. This scenario utilizes a FOPT in a radiation-dominated universe as a result of which DM acquires its mass m. DM particles maintain a constant comoving density above FOPT temperature T_PT and their freeze-out in the broken phase yields the correct abundance if T_PT is sufficiently lower than the freeze-out temperature in the standard scenario T_f∼ m/20.
In particular, we found that 10^2 T_ PT≲ m ≲ 10^4 T_ PT within the mass range allowed by current experimental limits from indirect searches and theoretical bounds from unitarity. The hierarchy between m and T_ P suggests that the FOPT must be supercooled thereby leading to potentially significant GW signals. We found that although the GWs sourced by such a FOPT would likely not be detectable by current experiments, they are within projected sensitivities of many next-generation GW detectors.
As shown in Fig. <ref>, for reasonable values of PT parameters, the GW signal is within the reach of experiments like ET, LISA, ALIA, and THEIA. This, in tandem with indirect detection experiments, provides a multi-messenger test of our scenario. We see in Fig. <ref> that almost the entire region of the ⟨σ v ⟩-m plane that is allowed by the FERMI-LAT constraints can be probed by GW detectors. Future indirect searches like CTA can access the bulk of this region that include masses between the weak scale and 10^4 GeV.
The crucial ingredient of our scenario is a strongly FOPT at temperatures below a few TeV which gives rise to DM mass. We largely focus on general conditions for its success and prospects of its multi-messenger probes. A natural direction for future investigations involves model-specific studies. Detection of GWs combined with a signal from indirect DM searches can help us reconstruct the potential governing the dynamics of the FOPT. This can then be used to build explicit particle physics models employing a FOPT to give mass to the DM.
§ ACKNOWLEDGEMENTS
The work of R.A. is supported in part by NSF Grant No. PHY-2210367. The work of C.H. and P.H. is supported by the National Science Foundation under grant number PHY-2112680.
§ APPENDIX
§.§ Freeze-Out of Cold Relics
The Boltzmann equation reveals three processes responsible for the changing DM number density. Cosmic expansion and self-annihilation deplete the number density at rates of 3Hn and ⟨σ v ⟩ n^2, respectively. Equilibrium interactions at high temperatures help maintain the number density at a rate ⟨σ v ⟩ n_eq^2. To factor out effects of cosmic expansion, one can define the yield of DM as Y ≡ n / s where s is the entropy density of the universe. During cosmic expansion n and s scale equivalently with a^-3, allowing Eq. <ref> to be rewritten as
dY/dt = - ⟨σ v ⟩ s [ Y^2 - Y_eq^2 ] ,
Y ≡ n/s
with Y_eq≡ n_eq / s. One can further adimensionalize this equation with the parameterization of x ≡ m / T. Assuming the DM to be in thermal equilibrium with a radiation-dominated universe, the Friedmann equations give x / t = Hx and Eq. <ref> becomes
dY/dx = - ⟨σ v ⟩ (s/Hx) [ Y^2 - Y_eq^2 ] ,
x ≡ m/T .
The Friedmann equations also yield the following expressions for the Hubble parameter and entropy density:
H^2 = (8π/(3 M_Pl^2)) g_*(T) (π^2/30) T^4 ,
s = (2π^2/45) g_*s(T) T^3
with Planck mass M_Pl = 1.22 × 10^19 GeV. In these expressions g_* and g_*s give the total degrees of freedom in the universal plasma and entropy density, respectively <cit.>. This study approximates these degrees of freedom to be equal: g_* = g_*s. The sum of SM and DM relativistic degrees of freedom gives the total number of degrees of freedom in the plasma, that is g_* = g_SM + g.
Typical solutions of the Boltzmann equation are displayed in Fig. <ref> where Y = Y_eq is taken as a boundary condition for early times (small x). The universe is dense and hot throughout these times, keeping the DM yield near its thermal equilibrium value Y_eq, while self-annihilation decreases the number of DM particles. Eventually the expansion of the universe “freezes out" DM annihilation processes in that the number of DM particles within a comoving volume becomes too low for annihilation rates to significantly affect Y(x). The approximate time at which this occurs is denoted by x_f. With a boundary condition specified for some unique Boltzmann equation, a characteristic freeze-out for the system is determined. For x > x_f, the DM comoving abundance is said to be “frozen in" as the number of DM particles per comoving volume remains relatively constant after freeze-out. As long as freeze-out occurs well into the past, the frozen-in DM abundance determines its relic abundance as measured today: Ω_DM h^2 = 0.12 <cit.>. Here, h is determined by measurement of today's value of the Hubble parameter H_0 = 100 h km s^-1 Mpc^-1. With knowledge of the frozen-in DM abundance, we expect to find
Ω_DM = m Y(x_today) s_0 / ρ_c,0
for nonrelativistic DM. Although Eq. <ref> has no known closed-form solutions, one can find approximations for today's yield Y(x_today) for two types of DM relics. Hot relics are models of DM wherein freeze-out occurs while the species is still relativistic, that is x_f ≪ 3. Models with nonrelativistic DM at freeze-out (x_f ≫ 3) are known as cold relics. The next section follows a standard analysis that can be found in Ref. <cit.>, for example.
The first simplifying assumption is made on the thermally averaged annihilation cross section ⟨σ v ⟩. It is expected that σ v ∝ v^k/2 where k=0 corresponds to s-wave annihilation and k=1 corresponds to p-wave annihilation. As ⟨ v ⟩∝ T^1/2, we parameterize the cross section as
⟨σ v ⟩ ≡σ_0 (T/m)^k
= σ_0 x^-k
k = 0 ⟺ s-wave annihilation
k = 1 ⟺ p-wave annihilation .
Our study assumes s-wave annihilation is the dominant contribution and thus k=0. This parameterization allows Eq. <ref> to be rewritten as
dY/dx = - λ x^-k-2 [ Y^2 - Y_eq^2 ] ,
λ ≡ x σ_0 s/H = [σ_0 s/H]_x=1
= √(π/45) (g_*s/√(g_*)) M_Pl m σ_0
where a new parameter λ has been defined which is approximately independent of x.
To track the yield's deviation from its equilibrium value, we express the right-hand side of Eq. <ref> in terms of Δ≡ Y - Y_eq to find
dY/dx = -λ x^-k-2 Δ (2 Y_eq + Δ) .
As Y_eq decreases exponentially with x, we expect Δ≈ Y with cold relics for times after freeze-out, meaning Eq. <ref> is approximately
dΔ/dx ≈ -λ x^-k-2 Δ^2
(x ≫ x_f) .
Despite the validity of this approximation holding for x ≫ x_f, the standard method is to integrate the above equation from x_f out to some x ≫ x_f to find
∫_Δ(x_f)^Δ(x) dΔ/Δ^2 ≈ -λ∫_x_f^x dx^'/x^'^(k+2)
[ 1/Δ(x) - 1/Δ(x_f) ] ≈ -(λ/(k+1)) [ 1/x^k+1 - 1/x_f^k+1 ]
Δ(x) ≈ ((k+1)/λ) [ 1/x_f^k+1 - 1/x^k+1 ]^-1
Δ(x) ≈ ((k+1)/λ) x_f^k+1 .
The third line of Eqs. <ref> ignores 1/Δ(x_f) since the yield at freeze-out is typically larger than its value at some later time. The last line utilizes x ≫ x_f and rids the dependence on x. With Δ(x) ≈ Y(x), the standard approximation for today's DM yield is finally given by
Y_st(x_today) ≈ ((k+1)/λ) x_f^k+1 .
If instead the DM is produced after its characteristic freeze-out time, we can simply integrate Eq. <ref> from a lower bound given by the time of production x_P. With x_ today≫ x_P > x_f, a similar result is found:
Y(x_today) ≈ ((k+1)/λ) x_P^k+1 .
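For readers who want to check these approximations numerically, the adimensionalized Boltzmann equation can be integrated directly and compared against Y_st ≈ ((k+1)/λ) x_f^k+1. The sketch below does this in Python with purely illustrative parameter values (a weak-scale mass and the canonical thermal cross section); none of these numbers are taken from the analysis, and the comparison is only meant to be order-of-magnitude.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (not the values used in the analysis).
M_PL    = 1.22e19          # Planck mass [GeV]
m       = 100.0            # DM mass [GeV]
sigma0  = 2.6e-9           # <sigma v> ~ 3e-26 cm^3/s expressed in GeV^-2
g, g_star, g_star_s = 2.0, 100.0, 100.0
k = 0                      # s-wave annihilation

lam = np.sqrt(np.pi / 45.0) * (g_star_s / np.sqrt(g_star)) * M_PL * m * sigma0

def Y_eq(x):
    # standard nonrelativistic equilibrium yield, Y_eq ~ 0.145 (g/g_*s) x^(3/2) e^(-x)
    return 0.145 * (g / g_star_s) * x**1.5 * np.exp(-x)

def rhs(x, Y):
    return -lam * x**(-k - 2) * (Y**2 - Y_eq(x)**2)

sol = solve_ivp(rhs, (10.0, 1000.0), [Y_eq(10.0)], method="Radau",
                rtol=1e-8, atol=1e-30)

x_f = 20.0                 # rough freeze-out point for these parameters
print("numerical Y(x=1000):", sol.y[0, -1])
print("analytic  Y_st     :", (k + 1) / (lam * x_f**(k + 1)))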
§.§ Thermally Averaged Møller Speed
The Møller speed between two particles with four-momenta p_1^μ = (E_1, 𝐩_1) and p_2^μ = (E_2, 𝐩_2) is
v ≡ √( (p_1^μ p_2μ)^2 - (m_1 m_2)^2 ) / (E_1 E_2)
where m_i^2 ≡ p_i^μ p_iμ. Consider a collinear collision between particles of the same mass: m_1 = m_2 ≡ m. Furthermore, in taking a thermal average, we should expect each particle to have the same energy and momentum (although antiparallel), which we will call E and |𝐩|. We then find
⟨ v ⟩^2 = [ (E^2 + |𝐩|^2)^2 - m^4 ] / E^4
= 1 + ( |𝐩|/E )^4 + 2( |𝐩|/E )^2 - ( m/E )^4
= 1 + ( 3T/γ m )^2 + 2( 3T/γ m ) - 1/γ^4
= (6/x) √(1-v_rel) + (9/x^2) (1-v_rel) + v_rel^2 (2 - v_rel^2) .
The relativistic equipartition theorem of 3T = |𝐩|^2 / E (along with E = γ m) was used in the third line, and γ≡ 1 / √(1-v_ rel) is the Lorentz factor between the two rest frames of the particles. We thus see that for cold relics with x_f ≫ 3 and nonrelativistic relative speeds v_ rel≪ 1, the Møller speed at freeze-out is ⟨ v ⟩≈√(6/x_f).
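A quick numerical check of this closed form against the x_f ≫ 3 approximation, with illustrative values of x_f and v_rel, is:

import numpy as np

def moller_speed_sq(x, v_rel):
    # <v>^2 as written in the text, as a function of x = m/T and v_rel.
    return (6.0 / x) * np.sqrt(1.0 - v_rel) \
         + (9.0 / x**2) * (1.0 - v_rel) \
         + v_rel**2 * (2.0 - v_rel**2)

x_f, v_rel = 20.0, 0.1          # illustrative freeze-out value and relative speed
print(np.sqrt(moller_speed_sq(x_f, v_rel)))   # ~0.57
print(np.sqrt(6.0 / x_f))                     # ~0.55, the x_f >> 3 approximation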
§.§ Scalar Decay
Consider scalar DM particles Δ produced inside the bubble walls of the developing broken phase via decay of a symmetry-breaking scalar field ϕ. For scalar DM, the relevant terms in the Lagrangian density are
ℒ ⊃ - (1/2) m_Δ^2 Δ^2 - (1/2) λ Δ^2 |ϕ|^2
The interaction term develops a three-point vertex when ϕ acquires a nonzero VEV, inducing a phase transition which we take to be of first order. Letting ϕ→ϕ + ⟨ϕ⟩, the interaction term becomes
(1/2) λ Δ^2 |ϕ + ⟨ϕ⟩|^2 =
(1/2) λ Δ^2 ( |ϕ|^2 + ⟨ϕ⟩^2 + 2 ⟨ϕ⟩ |ϕ| ) .
Here, the first term describes a quartic interaction between the DM and scalar as in the original Lagrangian, the second term provides the DM with additional mass, and the third term is a trilinear coupling between DM and scalar. This last term is responsible for the decay of ϕ into DM within bubble walls where ⟨ϕ⟩≠ 0.
In this mechanism, the abundance of DM in the true vacuum phase is determined by the ϕ→Δ decay process, as well as how many scalars are able to penetrate the bubble walls. Suppose the scalar field has some yield Y^ϕ_fv = n^ϕ_fv / s and decays into N DM particles with probability |𝒯_ϕ→ N Δ|^2 at any given time. If some fraction Y^ϕ / Y^ϕ_fv of scalar particles penetrate the bubble walls into the true vacuum, the yield of DM in the true vacuum is given by
Y = (Y^ϕ/Y^ϕ_fv) (n^ϕ(T_PT)/s(T_PT)) N |𝒯_ϕ→ N Δ|^2 .
Prior to the FOPT, we assume the scalar field ϕ to be in thermal equilibrium with the field content of the universe so that n^ϕ(T) = ζ(3) γ_w g_ϕ T^3 / π^2 which gives
Y = (Y^ϕ/Y^ϕ_fv) (45 ζ(3) g_ϕ / (2π^4 g_*,s)) N |𝒯_ϕ→ N Δ|^2 .
To meet the condition of DM abundance to be enhanced from its standard freeze-out value, we require Y ≥ Y_c(T_PT). Combining this condition with Eqs. <ref> and <ref> yields
(Y^ϕ/Y^ϕ_fv) ⟨σ v ⟩ T_PT ≥ (2√(6) π^(5/2) / (ζ(3) M_Pl)) (√(g_*)/g_ϕ) (1 / (N |𝒯_ϕ→ N Δ|^2)) .
|
http://arxiv.org/abs/2409.02390v1 | 20240904023852 | Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts | ["Jie Su", "Fang Cai", "Shu-Kuo Zhao", "Xin-Yi Wang", "Tian-Yi Qian", "Da-Hui Wang", "Bo Hong"] | cs.NE | ["cs.NE", "cs.AI", "cs.CV", "q-bio.NC"] |
Jie SU [1] (equal contribution), Fang CAI [1] (equal contribution), Shu-Kuo ZHAO [2], Xin-Yi WANG [1], Tian-Yi QIAN [1] (qiantianyi@qiyuanlab.com), Da-Hui WANG [2] (wangdh@bnu.edu.cn), Bo HONG [1] (hongbo@qiyuanlab.com)
[1] Qiyuan Laboratory, Beijing, China
[2] School of Systems Science, Beijing Normal University, No. 19, Xinjiekouwai St., Beijing, 100875, China
Uncovering the fundamental neural correlates
of biological intelligence,
developing mathematical models,
and conducting computational simulations
are critical for advancing new paradigms
in artificial intelligence (AI).
In this study, we implemented
a comprehensive visual decision-making model
that spans from visual input to behavioral output,
using a neural dynamics modeling approach.
Drawing inspiration from the key components
of the dorsal visual pathway in primates,
our model not only aligns closely with human behavior
but also reflects neural activities in primates,
achieving accuracy comparable to
convolutional neural networks (CNNs).
Moreover, magnetic resonance imaging (MRI)
identified key neuroimaging features
such as structural connections and functional connectivity
that are associated with performance in perceptual decision-making tasks.
A neuroimaging-informed fine-tuning approach
was introduced and applied to the model,
leading to performance improvements that
paralleled the behavioral variations observed among subjects.
Compared to classical deep learning models,
our model more accurately replicates
the behavioral performance of biological intelligence,
relying on the structural characteristics
of biological neural networks rather than extensive training data,
and demonstrating enhanced resilience to perturbation.
Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts
September 9, 2024
============================================================================
§ INTRODUCTION
Abstracting the structures of biological neural systems
into mathematical models
and constructing artificial neural networks
based on these abstractions to address real-world problems,
represents a pivotal approach to innovation in artificial intelligence.
Over the past decades,
this approach has achieved significant success in various fields
such as pattern recognition and computer vision
<cit.>.
Visual perception, a fundamental process through which
humans and animals interpret and interact with the environment,
is a central topic in both neuroscience and artificial intelligence (AI).
Understanding the neural mechanisms underlying perceptual decision-making
not only provides insights into biological systems
but also has the potential to drive advancements in AI technologies.
Convolutional neural network (CNN) models,
inspired by the receptive fields and
parallel distributed processing in animal vision systems,
have achieved notable success in tasks such as
object detection, facial recognition
<cit.>,
and action recognition
<cit.>,
often surpassing human performance.
However, these models face significant limitations,
including the need for vast amounts of labeled data,
lack of biological interpretability,
and susceptibility to adversarial attacks.
These limitations highlight the gap between
current AI systems and biological neural networks
regarding robustness, efficiency, and flexibility.
Addressing these challenges requires a paradigm shift
towards models that more closely mimic the
underlying principles of biological neural computation.
Motion perception is critical for animals
to detect potential conspecifics, prey, or predators,
and is vital for their survival.
The motion perception system starts with retinal input,
travels through the lateral geniculate nucleus (LGN)
to the primary visual cortex (V1),
and then projects to parietal areas along the dorsal visual pathway,
supporting spatial attention and eye movements
<cit.>.
Direction-selective (DS) neurons
are widely present in brain areas such as V1,
middle temporal area (MT),
and lateral intraparietal area (LIP)
<cit.>,
which are essential for motion perception.
The direction preference of V1 DS neurons depends on
the spatial pattern of LGN neurons to which they are connected
<cit.>.
Motion perception relies on the integration of
both spatial and temporal dimensions of information
<cit.>.
Spatial integration is achieved
by the larger receptive fields of MT neurons
<cit.>,
while temporal integration primarily relies on
the temporal integration properties of LIP neurons
<cit.>.
Based on the physiological structures and properties
of neurons and circuits in the LIP area,
Wang proposed a recurrent neural circuit model
<cit.>.
Within this model,
two groups of direction-selective excitatory neurons
are capable of integrating lower-level synaptic inputs
and performing cognitive tasks through
recurrent excitation and mutual inhibition mechanisms.
The model's attractor dynamics enhance its ability
to perform decision-making and working memory tasks,
replicate neural activity patterns observed in non-human primates,
and achieve performance consistent with experimental data.
Subsequent researches have extended the model by focusing on
its simplification, theoretical analyses and training methodologies
<cit.>.
These studies primarily emphasize
fitting and interpreting biological experimental data,
showcasing significant biological plausibility and interpretability.
However, neuron parameters are usually determined
based on averaged experimental data,
without considering the physiological characteristics
that underlie behavioral differences among individuals.
Moreover, the model has not been optimized
based on biological features found in primates,
which limits its impact and application in artificial intelligence.
Numerous studies have investigated the relationship
between the physiological or structural features
of the human brain and behavior
<cit.>.
These studies span various human behaviors,
including cognitive functions such as
working memory <cit.>,
language acquisition <cit.>,
theory of mind <cit.>,
and social functions like
social network size <cit.>.
Some research examines differences in
physiological characteristics between healthy and diseased groups,
for example, correlations between neurodegenerative diseases
and physiological indicators of white matter fiber tracts
<cit.>.
Modern neuroimaging technologies provide effective tools
for quantifying human brain physiological features, including
gray matter volume <cit.>,
cortical thickness <cit.>,
and myelination <cit.>
calculated from structural images,
measurements of white matter areas from diffusion images
<cit.>,
and the correlation of blood-oxygen-level-dependent (BOLD) signals
between brain areas in resting state,
known as resting-state functional connectivity (rest FC)
<cit.>.
These studies have revealed correlations between
brain structure, functional characteristics, and human behavior,
contributing to understanding disease mechanisms
<cit.>.
Despite these insights,
neuroimaging data has not been fully integrated into
the development or optimization of artificial neural network (ANN) models.
Integrating neuroimaging data with ANN models
offers a promising frontier for creating brain-inspired AI models
that replicates the behavioral performance of human subjects.
Addressing the gap in biologically inspired dynamics modeling
in artificial intelligence applications,
specifically the lack of tuning mechanisms
integrated with behavioral and neuroimaging data,
this study synthesizes known neural mechanisms
to construct a comprehensive biological neural dynamics model
of visual motion perception.
The model adheres to the physiological features of non-human primates
and facilitates cognitive decision-making behavior
in the Random Dot Kinematogram (RDK) task,
exhibiting biological-like behavioral and neuronal characteristics.
Furthermore, this study explores the correlation between
behavioral and neuroimaging features of human subjects
during visual decision tasks through
magnetic resonance imaging (MRI) techniques,
particularly focusing on the structural and functional characteristics
of corresponding brain regions in human experts.
Finally, we introduce a novel
neuroimaging-informed fine-tuning approach,
which leverages these neuroimaging characteristics
to optimize the artificial biological neural network model
and achieve notable improvements in performance.
§ RESULTS
§.§ A Neural Dynamics Model of Visual Decision-Making with
Behavioral Performance and Neural Activities Similar to Primates
We constructed a neural dynamics model that
encompasses the four key brain areas of the dorsal visual pathway
(LGN, V1, MT, and LIP, Figs. <ref>a-c).
This model forms an artificial biological neural network
capable of motion perception and performing
the Random Dot Kinematogram (RDK) task.
The neurons in our model adopt the Leaky Integrate-and-Fire (LIF) model
and are interconnected through excitatory synapses (AMPA, NMDA)
and inhibitory synapses (GABA) (see <ref>).
The spikes of different neuron groups were recorded
during the RDK task (Fig. <ref>d-f).
The electrophysiologic recordings show that
V1 neurons exhibit direction selectivity,
which is further enhanced in the MT neurons.
In the LIP area,
neurons that prefer a specific motion direction gradually dominate,
while those favoring the opposite direction are suppressed,
resulting in a “winner-take-all” effect.
Additionally, V1 and MT neuron activation increases with motion coherence
but stabilizes during the stimulus period.
In contrast, LIP neurons show a gradual ramping of activation,
with the ramping speed proportional to motion coherence
(Fig. <ref>d-f).
These simulated neural activities closely resemble those
recorded in electrophysiological experiments with macaque monkeys
<cit.>,
demonstrating the biological plausibility of our model.
The performance of our model was evaluated using the entire
Random Dot Kinematogram (RDK) dataset (see <ref>).
We calculated the choice probability
and average reaction time for each coherence level.
Fig. <ref>a (upper panel)
presents the psychometric curve of the model (in blue),
which is similar to the psychometric curves of human subjects (in orange).
The model's sensitivity
(slope of the psychometric curve, k in Equation <ref>)
is 19.31± 0.17 (mean±SEM of 45 experiments),
significantly exceeding the average level of human subjects
(36 subjects, 15.1± 1.9, t=2.50, p=0.015, two-sample t-test).
Moreover, the model's decision time curve
aligns with the reaction times of human subjects.
As coherence increases (task difficulty decreases),
the decision time gradually becomes shorter,
as illustrated in Fig. <ref>a (lower panel).
In electrophysiological experiments,
microstimulation of specific brain regions
can induce behavioral changes
<cit.>.
To verify whether our neural dynamics model
exhibits similar characteristics to biological motion perception systems,
we conducted virtual cortical stimulation experiments.
External currents were injected
into different groups of neurons in our model
to investigate if the results matched those observed in biological experiments
<cit.>.
Applying continuous extra current
(0 to 40, Fig. <ref>b)
to MT neurons that prefer leftward motion
shifted the model's psychometric curve to the right,
indicating an increased leftward choice preference
(slope=-0.18, p<0.001),
This manipulation slightly reduced the model's sensitivity
(slope=-0.25, p<0.001),
and altered decision times,
decreasing for leftward and increasing for rightward motions,
aligning with observations in animal studies
<cit.>.
Conversely, applying extra current to neurons
that prefer rightward motion resulted in
a leftward shift of the psychometric curve
and a corresponding leftward shift in the decision time curve
(Fig. <ref>c,
intercept changed with external currents: slope=0.18, p<0.001;
sensitivity changes with external currents: slope=-0.26, p<0.001).
Moreover, when stimulating both groups of neurons
in the MT region without selection
(0 to 40),
the model's sensitivity slightly declined
(slope=-0.26, p<0.001),
but the decision time notably reduced
(Fig. <ref>e).
Correspondingly, stimulating all neurons in the V1 region
significantly altered the model's sensitivity
(slope=-0.50, p<0.001)
and decision time (Fig. <ref>d).
However, stimulating all excitatory neurons in the LIP area
did not affect the model's sensitivity
(slope=0.005, p=0.60)
or decision time (Fig. <ref>f).
§.§ Model Optimization Inspired by Structural Connectivity of Human Experts
Structural connectivity estimated from neuroimaging techniques
reflects the mesoscopic properties of fibers,
which are directly related to behavioral outcomes
<cit.>.
To identify the key parameters
influencing the task performance,
we collected behavioral and neuroimaging data
from 36 human subjects performing the RDK tasks.
In the correlation analysis,
we found a significant negative correlation between
the mean fractional anisotropy (FA) value
of white matter in the left lateral occipital region
and subjects' behavioral performance
(r=-0.435, p=0.008, uncorrected Pearson correlation,
Fig. <ref>a).
This indicates that
subjects with lower mean FA values in this area
performed better in the behavioral experiment.
The lateral occipital region, located in the occipital lobe,
includes white matter pathways connecting
the primary visual cortex and the MT area.
Lower FA values may reflect reduced anisotropy
and predict a denser distribution of fiber bundles in this region.
The relative white matter volume in the right inferior parietal region
(i.e., the ratio of ROI volume to total brain volume,
estimated from MR T1 images)
was also significantly positively correlated
with subjects' behavioral performance
(r = 0.373, p = 0.025, uncorrected Pearson correlation,
Fig. <ref>c).
This suggests that subjects with better performance
had a larger white matter volume in this area,
indicating a broader range of fiber tracts.
The inferior parietal region,
located between the occipital and parietal lobes,
includes the white matter pathway connecting the MT and LIP regions,
which might be associated with decision-making in the brain.
These results demonstrate correlations between behavioral performance
and the structural features of the visual dorsal pathway.
No other structural features showed significant correlations.
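Statistically, these results rest on an uncorrected Pearson correlation between each ROI-level feature and the behavioral sensitivity across the 36 subjects. A minimal sketch of that test is given below; the array names in the usage comment are hypothetical stand-ins for the per-subject data.

import numpy as np
from scipy import stats

def roi_behavior_correlation(roi_feature, sensitivity):
    # Uncorrected Pearson correlation between one ROI-level feature
    # (e.g. mean FA of the left lateral occipital white matter) and the
    # behavioral sensitivity across subjects.
    r, p = stats.pearsonr(np.asarray(roi_feature), np.asarray(sensitivity))
    return r, p

# usage with hypothetical per-subject vectors of length 36:
# r, p = roi_behavior_correlation(mean_fa_left_lat_occ, sensitivity_k)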
In response to key parameters identified in MRI studies,
specifically the white matter connections
from V1 to MT and from MT to LIP,
which influence behavioral performance,
we employed a neuroimaging-informed fine-tuning approach
to adjust model parameters based on MRI analysis results.
For the mean FA of left lateral occipital area
(Fig. <ref>a),
we modified the average connection weight
of the V1-to-MT connections in the model
(Fig. <ref>b, upper panel) accordingly.
Increasing the connection weight at lower strengths
(mean of the connection matrix gradually increasing to 2)
enhanced model sensitivity and reduced decision times,
aligning with the observed behavioral correlations.
However, further increases in the connection weight
led to reduced decision times but a decline in sensitivity
(Fig. <ref>b, lower panel).
Similarly, adjusting the connection ratio between MT and LIP neurons
(Fig. <ref>d upper panel)
revealed that increasing the ratio from low levels
improved model sensitivity and reduced decision times,
aligning with neuroimaging and behavioral correlations.
However, beyond a certain range,
further increases in the connection ratio
decreased both model sensitivity and decision times
(Fig. <ref>d lower panel).
§.§ Model Optimization Inspired by Functional Connectivity of Human Experts
The correlation between the functional connectivity indicators
and the behavioral performance in human subjects was also analyzed.
We found a significant positive correlation between
behavioral performance and resting-state functional connectivity (FC)
of the MT region and the anterior agranular insular complex (AAIC;
r=0.445, p=0.007, uncorrected Pearson correlation,
Fig. <ref>e).
The resting-state FC between the two brain regions
was more positively correlated in subjects with better performance,
which could indicate enhanced self-monitoring during the task.
The resting-state FC between the LIP region
and the suborbital region of the prefrontal lobe
was significantly negatively correlated with behavioral performance
(r=-0.472, p=0.004, uncorrected Pearson correlation,
Fig. <ref>g).
This suggests that a stronger negative correlation
between resting-state activities in these regions
is associated with better performance,
possibly reflecting higher engagement or attentiveness during the task.
No significant correlations were observed
between the resting-state FC features of other regions and subject behavior.
The functional MRI results highlighted brain areas
not included in the existing dynamics model.
These regions may provide top-down control
to areas involved in motion perception and decision-making,
thereby affecting perceptual decisions and behavior.
Based on findings from electrophysiological experiments
<cit.>,
we simulated the modulation of the MT region
by adjusting synaptic conductance in this module,
without adding new brain regions to the model.
The results are illustrated in Figs. <ref>f and h.
Increasing MT synaptic conductance enhanced
model sensitivity and reduced decision time,
aligning with observed subject behavior.
However, beyond a certain point,
further increases in connection strength
led to decreased sensitivity despite reduced decision time.
Similarly, we simulated the regulation of the LIP region
by adjusting connection efficiency between DS neurons.
This simulation, detailed in Fig.<ref>h,
showed that increasing connection efficiency
improved model performance and reduced decision time,
but excessive efficiency negatively affected performance.
These experiments illustrate how adjusting model parameters
based on structural and functional features
can enhance the neural dynamics model,
reflecting biological behaviors observed in neuroimaging studies.
This approach not only validates the model's biological plausibility
but also introduces a novel method for incorporating neuroimaging data
into the fine-tuning of artificial neural networks,
which we term neuroimaging-informed fine-tuning.
§.§ Superior Robustness of Neural Dynamics Model Under Perturbation
To assess the robustness of our model,
we conducted four types of perturbation experiments
on both the neural dynamics model
and the convolutional neural network model,
as described in Section <ref>.
Noise was introduced by either
discarding or adding noise to the connection weights
or neurons in each module of the models.
Figs. <ref>b–i
show the changes in accuracy
of the CNN and neural dynamics models at each layer
with varying perturbation intensities
(see Table <ref> for statistics,
and Extended Fig. <ref> for changes in sensitivity).
From these figures,
it is evident that our neural dynamics model
performs better compared to the CNN model under perturbation.
The CNN's accuracy typically declines
more significantly with increased perturbation,
whereas the neural dynamics model remains more stable overall.
Notably, the model's performance remains largely unaffected
when noise is introduced into modules
corresponding to higher-level brain regions.
For example, discarding connections from MT to LIP
(Fig. <ref>f, red)
or adding noise to these connections
(Fig. <ref>g, red)
has minimal impact on model accuracy,
far less than the perturbation effects on the CNN
(Figs. <ref>b&c, red).
Adding noise to the input current of neurons
barely affects the neural dynamics model's performance
(Fig. <ref>i, except the purple).
Adding noise to LGN neurons causes model failure
(Figs. <ref>g&i, purple),
likely due to the sparse connections between V1 and LGN
leading to over-reliance on LGN signals.
Discarding neurons in the decision layer LIP
(Fig. <ref>g, blue)
also drastically reduces model performance,
as too few LIP neurons cannot maintain
the attractor states corresponding to decisions,
causing the model to rapidly degrade to a resting state
with a single attractor <cit.>,
effectively rendering it unable to make decisions.
Additionally, we found that changes in accuracy and sensitivity
with added noise in the neural dynamics model are consistent,
which contrasts with the behavior observed in the CNN.
In some cases, the CNN model retains high sensitivity,
but its accuracy significantly decreases with increasing model bias.
Furthermore, the CNN model exhibits greater variation with perturbation
(see scatter distribution in Figs. <ref>b–d,
see also Extended Figs. <ref>, <ref>),
while the neural dynamics model shows smaller bias
and maintains relatively stable performance even when disturbed.
Unexpectedly, reducing the number of LIP neurons
(Extended Fig. <ref>g, blue)
and adding noise to LIP neuron input currents
(Extended Fig. <ref>h, blue)
may improve model performance.
This is because our base model is not fully optimized;
weakening LIP neuron recurrent connections
or increasing Ornstein–Uhlenbeck noise may improve performance,
thus similar changes in perturbation
may also enhance model performance.
§ DISCUSSION
In this study, we developed a neural dynamics model
for motion perception and decision-making
that simulates the visual dorsal pathway.
The model incorporates biomimetic neurons and synapses
with dynamic characteristics and kinetic properties as its core elements.
These components enable the model to exhibit
neural activities and behavioral outputs
that are comparable to those of the biological brain,
including responses to virtual electrical stimulation.
By leveraging neuroimaging insights
such as the structural features identified from human experts,
we employed neuroimaging-informed fine-tuning
to optimize the model parameters,
leading to improved performance.
Compared to CNNs, our model achieves
similar performance with fewer parameters,
more closely aligns with biological data,
and exhibits greater robustness to perturbations.
Parameter optimization for neural dynamics model
is both critical and challenging due to
the complexity of nonlinear dynamic systems.
Previous studies have primarily employed
data fitting techniques using neural or behavioral data,
which are feasible for small-scale models.
However, as neural dynamics models increase in scale and complexity,
with larger number of parameters and higher computational demands,
optimization becomes significantly more computationally intensive and difficult.
While it is theoretically possible to optimize
key parameters in some simplified models through analytical methods,
such methods are generally impractical for more complex models.
This study, for the first time,
explores the application of neuroimaging data in model optimization,
a process we term neuroimaging-informed fine-tuning.
While this approach may not achieve the global optimum,
it offers practical directions for parameter optimization
and significantly reduces the search space.
It is noteworthy that excessive adjustments to
parameters can impair model performance.
This observation implies the existence of an optimal range
for these parameters that maximizes performance.
Biological neural systems,
refined over billions of years of evolution,
regulate these parameters within such an optimal range,
with minor variations accounting for individual differences.
Targeting neuroimaging-informed fine-tuning,
we identified brain regions associated with
visual decision-making in human subjects
through behavioral and imaging experiments.
Subjects exhibiting superior performance in visual tasks
tend to have lower mean FA values in the occipital region,
indicating a higher degree of neural branching in the lateral occipital area.
Variations in white matter characteristics within the adults
are thought to result from inherent brain structure
and the long-term maturation process from childhood to adulthood
<cit.>,
suggesting that individuals with advanced visual capabilities
likely have undergone extensive training and development in visual function.
We also identified correlations between the functional connectivity of
MT–AAIC and LIP–suborbital with behavioral performance.
Variations in functional connectivity between subjects
are primarily influenced by their level of participation
and cognitive engagement during experiments <cit.>,
suggesting that expert subjects are more engaged and attentive during tasks.
We hypothesize that this process involves top-down regulation
from higher-level brain areas to task-related regions,
potentially enhancing synaptic transmission efficiency.
The structural and functional connectivity features
observed in human subjects account for their superior performance
from the perspectives of innate structure,
long-term development, and short-term participation.
These findings, applied to key parameter adjustments in the model,
indicate that the neural dynamics model aligns with biological intelligence
and shows good interpretability in physiological terms.
This proposes a potential path for enhancing the model's performance
through neuroimaging-informed fine-tuning.
In the model, LIP neurons primarily rely on
differences in input currents for decision-making.
Larger input currents enable the model to reach
the decision threshold more quickly but reduce accuracy.
For instance, increasing the connection strength
between V1 and MT, between MT and LIP,
or increasing the number of MT neurons connected to LIP
enhances the total currents received by LIP neurons
(see Fig <ref>a).
This enhancement improves signal differentiation
among neuronal groups that prefer different directions,
thereby increasing accuracy (see Fig <ref>b&d, first half).
However, larger input currents cause attractors in decision space
to shift towards the diagonal
(see Fig. <ref>b, from 2 to 3) <cit.>.
In this scenario, inhibitory neurons are less effective
at suppressing DS neurons that prefer the opposite direction,
disrupting the winner-take-all dynamics
and deteriorating model performance
(see Figs. <ref>b&d, second half).
Similarly, modifying the recurrent connection weights
among LIP neuronal groups alters the energy landscape of the decision space.
Stronger recurrent connections eliminate the resting states attractor
and expand the size of the decision state attractors
(see Fig. <ref>c) <cit.>.
This modification improves performance during the initial phase
(see Fig. <ref>h, first half),
but excessively large decision attractors
can lead to rapid decision-making in the presence of noise,
thereby reducing accuracy (see Fig. <ref>h, second half).
Adjusting the number of LIP neurons also
alters the energy characteristics in the decision space.
If the parameters result in stable decision-state attractors,
the model can make decisions and maintain the states without external stimuli,
demonstrating a degree of working memory capability.
This enables decision-making with limited information
by integrating noisy evidence,
which can shorten decision time but introduces a risk of errors.
Conversely, if only one resting state attractor exists,
the model fails to reach the decision threshold
under any stimulus and cannot make decisions,
as evidenced by the perturbation experiment (Fig. <ref>h, blue),
where we deliberately chose the preferred direction
associated with higher firing rate neurons for decision.
In real-world scenarios,
the model would not make a choice in this situation.
Similarly, parameter adjustments can lead to another monostable mode,
where neuron groups preferring opposite directions
both exhibit high firing rates,
resulting in an inability to make reasonable decisions
(Figs. <ref>g&i, purple).
In 2-alternative forced choice (2-AFC) tasks,
researchers often model decision-making as evidence accumulation over time,
using approaches such as
the drift diffusion model (DDM)
<cit.>,
racing diffusion model <cit.>,
and linear ballistic accumulator (LBA) model <cit.>.
The temporal integration in the LIP module resembles the DDM
(see discussions in
<cit.>
), and can be directly related to it
<cit.>.
This type of evidence accumulation is not present in feedforward DCNN models.
Additionally, the dynamic nature of the LIP constitutes an attractor network
<cit.>,
where, through iterative processes,
the network's state stabilizes near attractors.
This differs fundamentally from traditional DCNNs,
which categorize through spatial partitioning within representational spaces,
and may explain why neural dynamics models
exhibit better stability and noise resistance.
In summary, integrating neural dynamics into AI models
can bridge the gap between biological intelligence and artificial systems,
leading to the development of next-generation AI
that is both interpretable and resilient like biological organisms.
By leveraging advances in neuroscience to construct models with neural dynamics,
we are able to create AI systems that better mimic biological behavior.
The neuroimaging-informed fine-tuning method enhances performance
while preserving the advantages of neural dynamics.
This approach ensures both high performance and explainability,
aligning with biological plausibility and computational efficiency.
This synthesis between advanced AI techniques
and in-depth neuroscientific insights represents a promising direction
for future research and development in both fields.
§ METHODS
§.§ The RDK Dataset
Random Dot Kinematogram (RDK)
is a classic and common psychophysical paradigm
used to study visual motion perception.
In a typical RDK stimulus,
numerous small dots are randomly distributed on the screen
moving at different speeds in different directions,
without a clear pattern.
This randomness is crucial,
preventing subjects from relying solely on local motion cues
to judge overall motion characteristics.
Typically, a certain proportion of dots (signal dots)
move in a specified direction (target direction),
while the remaining dots (noise dots)
move randomly <cit.>.
Movement in a fixed direction is referred to as coherent motion,
and observers' ability to report the direction of coherent motion
increases with the percentage of coherent motion dots,
accompanied by shorter reaction times.
This proportion known as coherence,
is a measure of task difficulty
<cit.>,
making it the most important control parameter in RDK stimuli.
RDK can conveniently control the relative saliency of motion stimuli,
making it suitable for coherence threshold detection
or measuring changes in subject behavior with coherence.
In a 2-alternative forced choice (2-AFC) task,
with the proportion of coherent motion or
its logarithm as the independent variable,
a psychometric curve can be plotted
showing the probability of a subject choosing a certain direction.
This curve typically follows a sigmoid shape and can be described
by the equation (<ref>).
p = 1 / (1 + e^(-kx+b)) .
Here, the slope k of the linear part
is referred to as sensitivity,
with higher sensitivity indicating a better ability
to distinguish between two directions of motion
and better behavioral performance.
The intercept b describes the bias of the decision-maker,
i.e., the extent to which the decision-maker
tends to choose one direction over the other.
A good decision-maker should have higher sensitivity and smaller bias.
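A minimal least-squares fit of this psychometric function can be written with scipy; the summary data below are hypothetical choice proportions used only to make the sketch runnable.

import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, k, b):
    # Probability of one of the two choices; x is signed coherence in [-1, 1].
    return 1.0 / (1.0 + np.exp(-k * x + b))

# Hypothetical summary data: signed coherence levels and observed choice rates.
coh      = np.array([-0.5, -0.25, -0.1, -0.05, 0.0, 0.05, 0.1, 0.25, 0.5])
p_choice = np.array([0.02, 0.08, 0.27, 0.40, 0.52, 0.63, 0.75, 0.93, 0.99])

(k, b), _ = curve_fit(psychometric, coh, p_choice, p0=(10.0, 0.0))
print(f"sensitivity k = {k:.2f}, bias b = {b:.3f}")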
To ensure consistency in data during model testing,
we generated different RDK stimuli using
the specialized neuroscience and experimental psychology software
PsychoPy <cit.>
with varying parameters and stored the stimuli
in the form of three-dimensional arrays.
Computer vision systems that use a camera as the perceptual input
typically measure input videos and images in pixels,
rather than in degrees as in studies of biological vision.
Therefore, the RDK animation used in this study is pixel-based.
Specifically, within a rectangular black background
with dimensions of 300 pixels by 300 pixels,
a 270-pixel diameter aperture is placed at the center.
Inside the aperture,
200 white circles with a diameter of 6 pixels are evenly distributed.
Some points (signal dots) are randomly selected to move left or right,
while the remaining points (noise dots) are randomly dispersed in all directions.
The circles move at a speed of 2 pixels per frame,
and their positions are randomly reset every 4 frames
or when they move out of the aperture.
Following these rules,
120 frames of images are continuously generated
and stored in 8-bit grayscale format
as a matrix of size 300x300x120.
This animation lasts for 2 seconds on a 60 Hz monitor,
matching the stimulus duration of the human behavioral experiment.
The RDK dataset contains 100 levels of coherence
ranging from 099
with a step size of 1.
For each coherence level,
10 stimuli were generated for both leftward and rightward motion directions,
resulting in a total of 2000 stimuli.
This fine-grained variation in coherence levels allows for a detailed analysis
of the model's sensitivity to subtle differences in motion direction.
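A simplified Python re-implementation of this stimulus generator is sketched below. It follows the parameters given in the text (300x300 frames, a 270-pixel aperture, 200 dots, 2 pixels per frame, 4-frame lifetime) but omits PsychoPy's rendering details such as anti-aliasing, so it should be read as an approximation rather than the script actually used.

import numpy as np

def make_rdk(coherence, direction, n_frames=120, size=300, n_dots=200,
             radius=135, dot_r=3, speed=2.0, lifetime=4, seed=None):
    # One RDK clip as a (size, size, n_frames) uint8 array.
    # coherence: fraction in [0, 1]; direction: +1 rightward, -1 leftward.
    rng = np.random.default_rng(seed)
    clip = np.zeros((size, size, n_frames), dtype=np.uint8)
    center = size / 2.0

    def random_positions(n):
        r = radius * np.sqrt(rng.random(n))
        a = 2 * np.pi * rng.random(n)
        return np.stack([center + r * np.cos(a), center + r * np.sin(a)], axis=1)

    pos = random_positions(n_dots)
    signal = rng.random(n_dots) < coherence          # signal vs. noise dots
    angles = 2 * np.pi * rng.random(n_dots)          # noise-dot directions

    yy, xx = np.mgrid[0:size, 0:size]
    for t in range(n_frames):
        # draw the dots for this frame
        frame = np.zeros((size, size), dtype=np.uint8)
        for x, y in pos:
            frame[(xx - x) ** 2 + (yy - y) ** 2 <= dot_r ** 2] = 255
        clip[:, :, t] = frame

        # move dots: signal dots horizontally, noise dots in random directions
        pos[:, 0] += np.where(signal, direction * speed, speed * np.cos(angles))
        pos[:, 1] += np.where(signal, 0.0, speed * np.sin(angles))

        # reset dots every `lifetime` frames or when they leave the aperture
        out = (pos[:, 0] - center) ** 2 + (pos[:, 1] - center) ** 2 > radius ** 2
        if (t + 1) % lifetime == 0:
            out[:] = True
        pos[out] = random_positions(int(out.sum()))
    return clip

clip = make_rdk(coherence=0.5, direction=+1, seed=0)   # example usage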
§.§ The Neural Dynamics Model
The neural dynamics model constructed in this study
simulates four key areas of the dorsal pathway
involved in motion perception,
corresponding to the LGN, V1, MT and LIP regions
of the motion perception decision system
from primary to high levels (Fig. <ref>a).
We did not directly model lower visual pathway modules
(retina, ganglion cells, etc.) in detail,
but simplified them into temporal and spatial convolution calculations,
directly mapping visual input to the total input current
of LGN neurons as the model's input.
The LGN layer of the model was constructed
based on previous research
<cit.>.
The LGN is divided into two groups of ON and OFF neurons
adapting the Leaky Integrate-and-Fire (LIF) model,
each group containing 10000 neurons,
all with double Gaussian spatial receptive fields
(Equation <ref>, Fig. <ref>b middle).
These neurons cover a visual angle of 0.35
(x, y ∈ [-0.175, 0.175] in Equation <ref>),
corresponding to a 9x9 pixel area.
They are alternately overlapped and arranged regularly in space
(Fig. <ref>b top),
covering the entire 300x300 field of view,
forming a two-dimensional plane corresponding to the image space.
ON and OFF neurons have different temporal profiles,
with ON neurons responding slower than OFF neurons
(approximately 10,
Equation <ref>, Fig. <ref>b bottom).
Images of 300x300 pixels are convolved spatially
with the 9x9 spatial convolution kernels,
then the time step is reduced to 2
(120 frames expanded to 1000 frames)
through nearest-neighbor interpolation,
and temporal convolution with a sliding time window of 160 is performed.
At each time point,
two sets of 100x100 convolution results are obtained,
which serve as the stimulus-induced current
for each neuron in the ON and OFF neuron groups, respectively,
and together with Ornstein–Uhlenbeck noise
<cit.>
form the input current of LGN neurons.
A(x, y) = (α/(πσ_α^2)) exp( -(x^2+y^2)/σ_α^2 ) - (β/(πσ_β^2)) exp( -(x^2+y^2)/σ_β^2 )
where α = 1, β = 1,
σ_α = 0.0894,
σ_β = 0.1259
<cit.>.
K(t) = (α t^6/τ_0^7) exp( -t/τ_0 ) - (β t^6/τ_1^7) exp( -t/τ_1 )
where τ_0 = 3.66, τ_1 = 7.16,
α = 1, β = 0.8 for ON neurons, and
α = 1, β = 1 for OFF neurons
<cit.>.
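The two kernels can be evaluated directly from the equations above. The sketch below samples A(x, y) on the 9x9-pixel (0.35-degree) patch covered by one LGN cell and K(t) on the 160-sample temporal window; time units are assumed to be milliseconds, since the units were not preserved in the extracted text.

import numpy as np

def spatial_kernel(n=9, fov=0.35, s_a=0.0894, s_b=0.1259, a=1.0, b=1.0):
    # 9x9 difference-of-Gaussians receptive field A(x, y), sampled over the
    # 0.35-degree patch (x, y in [-0.175, 0.175]) covered by one LGN cell.
    c = np.linspace(-fov / 2, fov / 2, n)
    x, y = np.meshgrid(c, c)
    r2 = x**2 + y**2
    return (a / (np.pi * s_a**2)) * np.exp(-r2 / s_a**2) \
         - (b / (np.pi * s_b**2)) * np.exp(-r2 / s_b**2)

def temporal_kernel(on=True, n_steps=160, tau0=3.66, tau1=7.16):
    # Biphasic temporal kernel K(t) over the 160-step window; ON cells use
    # beta = 0.8 and OFF cells beta = 1.0, as specified above.
    a, b = (1.0, 0.8) if on else (1.0, 1.0)
    t = np.arange(n_steps, dtype=float)
    return a * t**6 / tau0**7 * np.exp(-t / tau0) \
         - b * t**6 / tau1**7 * np.exp(-t / tau1)

# The stimulus-driven LGN input is then the spatial convolution of each frame
# with spatial_kernel(), followed by a temporal convolution with temporal_kernel().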
The other three brain regions are composed of LIF neuron groups,
with each neuron group connected to the neurons
in the previous layer with specific structures and probabilities.
The V1 area has a total of 5000 neurons,
divided into two groups, namely G1 and G2.
Each V1 neuron receives input from
one ON cell and one OFF cell
arranged in a specific pattern
through AMPA synapses.
In the LGN projection received by the G1 group neurons,
the ON cells are always located to the left of the OFF cells.
Similarly, in the LGN projection received by the G2 group neurons,
the ON cells are always located to the right of the OFF cells
(Fig. <ref>c).
The MT area contains two groups of neurons, L and R,
each consisting of 400 neurons.
The L group neurons receive input from the G1 group neurons
within a certain receptive field,
while the R group neurons receive input from
the G2 group neurons within the same receptive field.
The connections between MT and V1 corresponding neuron groups
are also formed by AMPA synapses,
with a synaptic conductance of
g̅ = 2.0.
The LIP area is constructed according to the model
in literature <cit.>,
consisting of excitatory neuron groups A and B (each with 300 neurons)
and an inhibitory neuron group I (500 neurons).
The A group neurons receive random projections from L,
while the B group neurons receive projections from R
(50% of the neurons in each group) with AMPA synapses.
Connections within excitatory groups A, B,
as well as connections from excitatory groups to inhibitory neuron group I,
are formed by AMPA and NMDA synapses with different temporal characteristics.
The inhibitory neuron group I inhibits neurons
in groups A and B through GABA synapses.
The connection strengths (synaptic conductance coefficients)
of the above connections follow a normal distribution
𝒩(g̅, 0.5g̅),
where the average conductance from MT to LIP is
g̅_MT = 0.1,
the average conductance among excitatory neurons in LIP is
g̅_AMPA = 0.05,
g̅_NMDA = 0.165,
the average conductance from excitatory neurons to inhibitory neurons in LIP is
g̅_AMPA = 0.04,
g̅_NMDA = 0.13.
If the connection strength between two neurons is less than 0,
there is no connection (Fig. <ref>c).
Additionally, the connection weights between neurons
with the same direction preference increase to
w = 1.3 times the original weight (Hebb-strengthened weight),
while the connection weights between neurons
with different direction preferences weaken to
w = 0.7 times the original weight (Hebb-weakened weight).
The neurons in the model are all LIF neurons,
a commonly used model in computational neuroscience
to approximate the behavior of biological neurons.
The resting membrane potential is set to V_r = -70,
and when the membrane potential of a neuron reaches -50,
it fires an action potential,
resetting to -55
during the refractory period <cit.>.
The LIF model was selected due to its
simplicity and computational efficiency,
while still capturing essential dynamics of neuronal firing.
In addition to synaptic currents and external currents,
each neuron receives Ornstein–Uhlenbeck noise,
a standard approach for modeling stochastic fluctuations in neuronal input.
The time constant of OU noise τ = 10,
and mean current of 400 with a variance of 100
were chosen to match the noise characteristics observed in biological neurons.
For LIP excitatory neurons,
a higher mean current of 550 was used to simulate
the enhanced excitatory drive from
other excitatory neurons without direction selectivity.
For excitatory neurons, the parameters are
C_m = 0.5,
g_l = 25,
and refractory period τ = 2.
For inhibitory neurons, the parameters are C_m = 0.2,
g_l = 20,
and refractory period τ = 1.
The AMPA, NMDA, and GABA synapses in the model
follow the settings in the literature
<cit.>.
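A minimal Euler-scheme sketch of one excitatory LIF neuron with Ornstein-Uhlenbeck background noise is given below. The units (mV, ms, nF, nS, pA) and the reading of the noise "variance" as a standard deviation are assumptions, since units were not preserved above; a full simulation would update all neuron groups and synapses together (e.g. with a dedicated simulator) rather than one cell at a time.

import numpy as np

def simulate_lif(I_syn, dt=0.1, C_m=0.5, g_l=25.0, V_rest=-70.0,
                 V_thr=-50.0, V_reset=-55.0, t_ref=2.0,
                 ou_tau=10.0, ou_mean=400.0, ou_sigma=100.0, seed=0):
    # Euler simulation of one excitatory LIF neuron driven by a synaptic
    # current trace I_syn (pA) plus Ornstein-Uhlenbeck background noise.
    # Note 1 pA / 1 nF = 1e-3 mV/ms, hence the 1e-3 factor below.
    rng = np.random.default_rng(seed)
    V, I_ou = V_rest, ou_mean
    refractory_until, spikes = -np.inf, []

    for i in range(len(I_syn)):
        t = i * dt
        # OU noise: relax toward the mean with time constant ou_tau
        I_ou += dt / ou_tau * (ou_mean - I_ou) \
              + ou_sigma * np.sqrt(2 * dt / ou_tau) * rng.standard_normal()
        if t < refractory_until:
            V = V_reset
            continue
        dV = (-g_l * (V - V_rest) + I_syn[i] + I_ou) / C_m * 1e-3  # mV per ms
        V += dV * dt
        if V >= V_thr:
            spikes.append(t)
            V = V_reset
            refractory_until = t + t_ref
    return np.array(spikes)

# Example: constant 200 pA synaptic drive for 500 ms
spike_times = simulate_lif(np.full(5000, 200.0))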
We use the overall firing rate of neuron groups
A or B in the LIP region
as the basis for whether the model makes a decision.
When the average firing rate exceeds a threshold (30 Hz)
or when the stimulus finishes,
the model selects the direction preferred by
the neuron group with the higher firing rate as its final output.
By simulating the model with each stimulus in
the aforementioned RDK dataset,
we obtain the accuracy of the model's decisions
and the number of time steps taken for the decision (decision time)
under different coherence levels.
We can estimate a psychometric function through
least squares regression (Equation <ref>).
This method yields behavioral metrics,
including the model's psychometric curve and sensitivity indices.
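The winner-take-all readout just described reduces to a small helper operating on the two LIP population-rate traces; the sketch below assumes such traces (in Hz) are already available from the simulation and does not reproduce the spike counting itself.

import numpy as np

def read_out_decision(rate_A, rate_B, threshold=30.0, dt=1.0):
    # Report the direction preferred by the LIP population with the higher
    # firing rate, either when one population first exceeds `threshold` (Hz)
    # or at the end of the stimulus. dt is the sampling step of the traces.
    crossed = np.where((np.asarray(rate_A) >= threshold) |
                       (np.asarray(rate_B) >= threshold))[0]
    t_dec = crossed[0] if crossed.size else len(rate_A) - 1
    winner = "A" if rate_A[t_dec] > rate_B[t_dec] else "B"
    return winner, t_dec * dt   # (winning group, decision time)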
Furthermore, our model,
which possesses a structure akin to biological systems
and neurons that mirror those of the biological nervous system,
allows for the execution of virtual electrophysiological experiments,
recording and analyzing the firing characteristics
of each neuron group in the model,
and even performing virtual electrical stimulation on neurons
by adding additional current inputs,
to study the characteristics of the model
(Figs. <ref>b&c).
Due to the randomness of RDK stimuli and the neural dynamics of the model,
to ensure the robustness of the results,
each stimulus was repeated twice in the model performance test,
and the neural dynamics model was re-initialized and repeated five times.
The selection probabilities and average decision times
for each coherence level were calculated and
estimated using the psychometric curve (Equation <ref>),
and the decision time curve was estimated using
a moving median smoothing algorithm with a coherence window of 10
(Fig. <ref>a upper panel).
For human subjects,
we conducted a bootstrap analysis on the behavioral data,
where the trials of each subject were randomly sampled with replacement,
and the median of 1000 bootstrap samples was calculated for each subject.
The final average performance over all subjects was plotted.
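The bootstrap summary of the human data amounts to the following procedure, with the statistic applied to each resample (e.g. accuracy at a given coherence) left as a user-supplied function:

import numpy as np

def bootstrap_statistic(trials, statistic, n_boot=1000, seed=0):
    # Resample one subject's trials with replacement n_boot times, apply
    # `statistic` to each resample, and return the median over resamples.
    rng = np.random.default_rng(seed)
    trials = np.asarray(trials)
    values = [statistic(rng.choice(trials, size=trials.size, replace=True))
              for _ in range(n_boot)]
    return float(np.median(values))

# e.g. median bootstrap accuracy from a 1-D vector of correct/incorrect trials:
# bootstrap_statistic(np.array([1, 0, 1, 1, 0, 1]), statistic=np.mean)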
§.§ A CNN Model for Motion Perception
A convolutional neural network (CNN) model (MotionNet) was built
similar in structure and scale to the neural dynamics model for comparison
(Fig. <ref>a).
The model receives video input with a resolution of 300x300 pixels,
processes it through spatial convolutions of 9x9 pixels
and temporal convolutions of 10 frames,
resulting in two 100x100 feature maps
matching the input of LGN neurons.
These feature maps are converted to two 50x50 maps
using 3x3 convolutional kernels.
The two channels are then processed with
11x11 convolutional kernels to generate two 20x20 feature maps.
After average pooling along the time dimension,
these features are mapped to a 400-dimensional vector by a linear layer,
and finally to output neurons using another linear layer.
All neurons utilize the ReLU activation function except the output layer,
which is passed through a softmax transformation
for fitting the one-hot encoded direction classification information.
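A PyTorch sketch of this architecture is given below. The exact strides and paddings are not stated in the text; the values here are one plausible reconstruction chosen so that the feature maps match the reported sizes (2x100x100, 2x50x50, 2x20x20), and the block should not be read as the authors' implementation.

import torch
import torch.nn as nn

class MotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # temporal (10-frame) x spatial (9x9) convolution -> 2 x 100 x 100
        self.conv1 = nn.Conv3d(1, 2, kernel_size=(10, 9, 9),
                               stride=(1, 3, 3), padding=(0, 3, 3))
        # 3x3 spatial convolution -> 2 x 50 x 50
        self.conv2 = nn.Conv3d(2, 2, kernel_size=(1, 3, 3),
                               stride=(1, 2, 2), padding=(0, 1, 1))
        # 11x11 spatial convolution -> 2 x 20 x 20
        self.conv3 = nn.Conv3d(2, 2, kernel_size=(1, 11, 11),
                               stride=(1, 2, 2))
        self.fc1 = nn.Linear(2 * 20 * 20, 400)
        self.fc2 = nn.Linear(400, 2)     # left / right logits
        self.relu = nn.ReLU()

    def forward(self, x):                # x: (batch, 1, frames, 300, 300)
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        x = self.relu(self.conv3(x))
        x = x.mean(dim=2)                # average pooling along time
        x = self.relu(self.fc1(x.flatten(1)))
        return self.fc2(x)

logits = MotionNet()(torch.zeros(1, 1, 20, 300, 300))   # -> shape (1, 2)

The softmax mentioned in the text is left to the loss function, since PyTorch's CrossEntropyLoss operates on raw logits.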
To avoid overfitting the training data and ensure accurate model comparison,
MotionNet was not directly trained on the RDK dataset.
Instead, it utilized generated moving random images as the training data.
Specifically, a random grayscale image was generated and stretched randomly.
A window covering the image was moved in random directions
to obtain the necessary animation data.
The speed and direction of movement were randomly assigned,
and the training labels (supervision information, moving left or right)
were determined by the direction of movement along the horizontal axis.
MotionNet was trained with the cross-entropy loss function
and stochastic gradient descent (SGD) method.
The initial learning rate was set at 0.01,
reduced to 10% of its previous value at the 5th and 15th epochs.
The momentum was set to 0.9,
with a batch size of 64,
and each epoch contained 500 batches.
Training stopped at the 20th epoch.
The model from the 20th epoch was selected for testing
based on its convergence and stability during training.
Similar analyses were then conducted on this model using the RDK dataset,
paralleling those applied to the neural dynamics model.
This approach ensured a fair comparison between
the two models' performance under the same experimental conditions.
Specifically, twenty consecutive frames were randomly selected from
a 120-frame animation as input for MotionNet.
The model's choices under various coherences were recorded
and psychometric curves were fitted.
For result stability, each trial was repeated twice.
Unlike the neural dynamics model, MotionNet,
being a CNN, lacks a concept of time,
so we focused on its psychometric curve
and related parameters (sensitivity).
§.§ RDK Behavioral and Neuroimaging Experiments in Human Subjects
Subjects:
Thirty-six subjects participated in this RDK experiment
(12 males, 24 females, mean age 29 ±8 years).
Each subject conducted a behavioral experiment
with a series of random-dot motion-direction discrimination tasks
(consisting of a learning period, a practice period, and a test period)
and an MRI scanning experiment.
The MRI experiment consisted of
a high-resolution 3D-T1 structural scanning session,
a field-map scanning session for image distortion correction,
a functional-MRI session for localization of the MT region,
two functional-MRI sessions conducting the RDK tasks,
and a DWI session for white-matter fiber reconstruction.
The study was approved by the Ethics Committee,
and informed consent was obtained from each volunteer.
Behavioral experiment:
Random-dot stimuli were presented on a monitor
at a distance of 57 from the subject,
with a display resolution of 2560x1440 pixels
and a refresh rate of 60 Hz.
The stimulus program was written using PsychoPy
<cit.>.
The stimuli were a series of circular dots
(white dots presented on a black background).
Each trial was initialized with a fixation cross
(0.33 x 0.33 dva,
lasting for a fixed duration of 500
plus a random duration sampled from a uniform distribution
with a maximum of 200).
Subjects were asked to gaze at the cross.
After that, a set of moving dots was presented
in a circular area of 5 dva diameter,
with a total number of 300 dots and
a diameter of 0.04 dva for each of these dots.
Some of the dots moved uniformly to the left or right,
while the rest of the dots moved randomly with a speed of
3.3.
The positions of all the dots were re-randomized every 5 frames
(i.e., the presentation of each dot lasted 5 frames).
Subjects were asked to recognize the uniform direction
of the moving dots and respond within
2000 (Extended Fig. <ref>a).
Each subject first performed a learning period.
The proportion of the uniformly moving dots for this period
was fixed at a coherence of 80%
(i.e., 80% of the dots move in the same direction),
and the subject was informed with correct or incorrect feedback
after responding to the direction.
This period would not terminate
until the subject responded correctly 10 consecutive times.
After the learning period, the subject entered a practice period.
A total of 5 sessions were presented
and each session consisted of 70 trials
at randomized coherence levels
(1, 2, 3, 4, 5, 6, 8, 10, 15, 20, 25, 30, 40, and 50%,
for a total of 14 coherence levels, each of which appeared 5 times).
During this period,
subjects were also informed of correct or incorrect feedback after responding.
Finally, the subject would be presented with a staircase test.
The coherence of the trials was set by the staircase method (3-down 1-up method),
with an initial coherence of 10%,
a step size of 5% before the first reversal,
and a step size of 1% after the first reversal.
The behavioral experiment was completed after the 20th reversal or 150 trials in total.
A psychometric curve was then fitted for each subject's performance,
and a coherence corresponding to 79.4% accuracy
was estimated as the initial value of coherence in the following MRI experiment.
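The 3-down-1-up staircase used to set this initial coherence can be sketched as follows; respond() is a placeholder for querying the subject (or a simulated observer), and the exact handling of the step change at the first reversal is an interpretation.

def run_staircase(respond, start=10.0, step_before=5.0, step_after=1.0,
                  max_reversals=20, max_trials=150):
    # 3-down-1-up staircase: three consecutive correct responses lower the
    # coherence (harder), one error raises it (easier).
    coherence, correct_streak, reversals, trials = start, 0, 0, 0
    last_direction = 0          # +1 = last change was up, -1 = down
    history = []
    while reversals < max_reversals and trials < max_trials:
        correct = respond(coherence)
        history.append((coherence, correct))
        trials += 1
        if correct:
            correct_streak += 1
            if correct_streak < 3:
                continue
            correct_streak, direction = 0, -1      # 3 correct -> harder
        else:
            correct_streak, direction = 0, +1      # 1 error  -> easier
        if last_direction and direction != last_direction:
            reversals += 1
        last_direction = direction
        step = step_before if reversals == 0 else step_after
        coherence = max(0.0, coherence + direction * step)
    return history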
MRI experiment:
MRI experiments were performed using a MAGNETOM Prisma 3T
(Siemens Healthcare, Erlangen, Germany) MR scanner
with a 64-channel head-neck coil.
High-resolution 3D T1 structural images were acquired
using the MPRAGE sequence with the following scanning parameters:
TR=2530,
TE=3.34,
TI=1000,
flip angle=7,
iPAT=2,
FOV=256x256,
number of slices=192,
resolution=1 isotropic.
Field-map images were scanned using a dual-echo 2D-GRE sequence
with the following parameters:
TR=747,
TE1=4.92,
TE2=7.38,
FOV=208x208,
number of slices=72 slices,
resolution=2.0 isotropic.
Functional-MRI was performed using a 2D-GRE-EPI sequence
with simultaneous multi-slices scanning with the following parameters:
TR=1000 ms,
TE=35 ms,
flip angle=52°,
FOV=208x208 mm^2,
number of slices=72,
resolution=2.0 mm isotropic,
SMS factor=8.
The diffusion-weighted images were scanned using a 2D-SE-EPI sequence
with simultaneous multi-slices. Scanning parameters:
multiple b-values (b=0, 1000, 2500 s/mm^2),
96 diffusion directions (32 b=1000 volumes and
64 b=2500 volumes <cit.>)
and 9 b=0 volumes,
TR=7400 ms,
TE=70 ms,
iPAT=2,
FOV=204x204 mm^2,
number of slices=96,
resolution=1.4 mm isotropic,
SMS factor=2,
partial Fourier=6/8,
phase-encoding direction=A>>P, with only one b=0 volume acquired P<<A.
The MT Localizer task used the classic random-dot
expansion-contraction paradigm
<cit.>.
The stimuli were presented on a monitor
at a viewing distance of 170 cm from the subject,
with a resolution of 1920x1080 pixels
and a refresh rate of 60 Hz.
The program was written using PsychoPy.
The stimuli were a series of white circular dots presented on a black background.
The experimental paradigm was a block design,
consisting of a 16 s rest block
alternating with a 16 s visual-stimulus block,
lasting for a total duration of 112 s.
A fixation cross was first presented
at the center of the screen for 16 s,
and subjects were asked to gaze at the cross;
three blocks were then presented.
Each block consisted of 200 dots (0.15 dva)
presented in a ring ranging from 0.5 dva to 9 dva.
A bigger dot of 0.3 dva was presented at the center of the screen,
and then all white dots except the central dot
started a contraction-expansion movement
with a speed of 8 dva/s.
Subjects were asked to look at the central dot throughout the entire process.
The RDK test task in the MRI scanning used
the same paradigm as the behavioral test.
Subjects were asked to press two buttons with either the left or right hand.
The task adopted a block design,
consisting of a 4 s rest block,
a 4 s random-viewing block,
and a 4 s prompt-and-press block.
Each subject went through two sessions,
each consisting of 25 trials,
and a total duration of 304 s.
The experimental stimuli of each trial
were consistent with those presented in the behavioral experiment,
except that in the prompt-and-press block,
a new cue of `left-handed' or `right-handed' appeared
at the center of the screen
and the subjects followed the cue
to press the key with the corresponding hand
(Extended Fig. <ref>b).
In the MRI experiment,
we found that the RDK stimulus-induced activation was
mainly located in the primary visual cortex, MT, and LIP regions
(Extended Fig. <ref>c,
p<0.001, uncorrected, one-sample t-test).
This result was consistent with the ROIs modeled
in the neural dynamics model of motion perception.
§.§ Neuroimaging Data Analysis Methods
High-resolution 3D–T1 structural images were
parcellated into gray matter and white matter
and reconstructed into cortex surfaces using Freesurfer
<cit.>.
A voxel-level segmentation of brain regions
based on the Destrieux atlas template
<cit.> was also obtained.
Several indices were estimated,
including the number of voxels per subregion,
cortical area, and average cortical thickness.
These processed data and structural features were used
in the subsequent correlation analysis.
The fMRI images were corrected using FSL,
including slice timing correction,
motion correction, and field-map correction.
The corrected images were then aligned to
the individual space of the T1 structural image.
The WM/GM/CSF binary maps were constructed
in the structural space for the subsequent GLM analysis
to remove white matter and cerebrospinal fluid voxels.
In the first-level analysis, the GLM model was used,
and the rest, stimuli, and response blocks of each trial
were considered as separate regressors.
The six motion parameters
(three translation and three rotation parameters per frame)
and the white matter and cerebrospinal fluid temporal signals
were used as nuisance regressors in the model.
The GLM model was fitted to each voxel,
and the contrast of `stimuli - rest' was constructed
to obtain the activation map in z-score.
In the second-level analysis, each subject's activation map in z-score
was projected to the individual cortical space
(reconstructed cortical surface),
and then projected to a standard cortical space (fsaverage5 template),
where the averaged activation map in z-score
was obtained by performing a one-sample t-test
(p<0.001, uncorrected, Extended Fig. <ref>c).
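The first-level contrast described above can be written compactly as an ordinary-least-squares GLM followed by a t-to-z conversion. The sketch below is a minimal numpy/scipy illustration rather than the FSL implementation used in the study; the design-matrix column indices i_stim and i_rest are assumptions about how the regressors are ordered.

    import numpy as np
    from scipy import stats

    def glm_contrast_z(Y, X, i_stim, i_rest):
        # Y: (T x V) voxel time series; X: (T x P) design matrix whose columns hold
        # the rest/stimuli/response regressors plus motion, WM and CSF nuisances.
        beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)      # (P x V) estimates
        resid = Y - X @ beta
        dof = Y.shape[0] - np.linalg.matrix_rank(X)
        sigma2 = (resid ** 2).sum(axis=0) / dof                # residual variance
        c = np.zeros(X.shape[1])
        c[i_stim], c[i_rest] = 1.0, -1.0                       # 'stimuli - rest'
        var_c = c @ np.linalg.pinv(X.T @ X) @ c
        t = (c @ beta) / np.sqrt(sigma2 * var_c + 1e-12)
        return stats.norm.ppf(stats.t.cdf(t, dof))             # t -> z-score map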
The resting-state functional connectivity in this study
was obtained from the task-fMRI data <cit.>.
By regressing out the task-dependent signals
and filtering the preprocessed results in the
0.01-0.08 Hz band,
the background signals of the brain were obtained.
The resting-state functional connectivity matrix
was then calculated using two atlases:
the Destrieux structural atlas
and the Glasser functional atlas <cit.>.
Each element of the matrix was used to characterize each ROI pair
to correlate with the behavioral results.
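The following sketch illustrates, under simplifying assumptions, the "background" connectivity pipeline just described: task and nuisance regressors are removed by least squares, the residuals are band-pass filtered at 0.01-0.08 Hz, and ROI-averaged time series are correlated. It is not the exact code used in the study; the second-order Butterworth filter and the simple label-averaging are illustrative choices.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def background_fc(Y, X_task, roi_labels, tr=1.0):
        # Y: (T x V) preprocessed voxel series; X_task: (T x K) task + nuisance
        # regressors; roi_labels: (V,) atlas labels (0 = outside any ROI).
        beta, *_ = np.linalg.lstsq(X_task, Y, rcond=None)
        resid = Y - X_task @ beta                    # task-regressed "background"
        b, a = butter(2, [0.01, 0.08], btype="band", fs=1.0 / tr)
        resid = filtfilt(b, a, resid, axis=0)        # 0.01-0.08 Hz band-pass
        rois = np.unique(roi_labels[roi_labels > 0])
        ts = np.column_stack([resid[:, roi_labels == r].mean(axis=1) for r in rois])
        return np.corrcoef(ts, rowvar=False)         # ROI x ROI FC matrix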
Data preprocessing steps for diffusion images included
brain extraction, registration to T1 images,
field-map top-up correction, and eddy-current correction.
We applied a ball-and-stick model to regress the white matter voxels.
The model regression of voxels in the white-matter region
inferred the white-matter orientation distribution of single voxels,
and FA and MD values for each voxel were then obtained
<cit.>.
By registering to the white-matter segmentation image,
white-matter regions' mean FA and mean MD values
(Destrieux atlas, threshold: FA>0.2)
were calculated for the subsequent correlation analysis.
Mean FA and MD values were used to characterize
the structural connectivity of white-matter pathways
to correlate with the behavioral results.
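A minimal sketch of the ROI-wise diffusion summary (mean FA and MD over white-matter voxels with FA > 0.2) might look as follows; the assumption that the FA, MD, and label maps are co-registered numpy arrays of identical shape describes the illustration, not the study's actual implementation.

    import numpy as np

    def roi_fa_md(fa_map, md_map, wm_labels, fa_thresh=0.2):
        # fa_map, md_map, wm_labels: co-registered voxel arrays of identical shape.
        out = {}
        for label in np.unique(wm_labels[wm_labels > 0]):
            mask = (wm_labels == label) & (fa_map > fa_thresh)  # keep FA > 0.2 voxels
            if mask.any():
                out[int(label)] = (fa_map[mask].mean(), md_map[mask].mean())
        return out   # {region label: (mean FA, mean MD)}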
To visualize the fiber bundles of the subjects,
fiber tract reconstruction was performed at the individual level
using quantitative anisotropy from DSI Studio's GQI
(generalized q-sampling imaging) reconstruction as the tracking index.
Segmented ROIs (lateral occipital and inferior parietal regions)
were used as spatial constraints for fiber tracking.
Deterministic fiber tracking was performed by the streamline Euler method
(tracking threshold =0,
angular threshold =0,
min length = 30 mm,
max length = 300 mm,
seed number =100000)
to obtain the morphology of fiber bundles
related to the visual decision-making brain regions
in the occipital and parietal lobes of the subjects.
§.§ Neuroimaging-Informed Fine-Tuning
Data analysis:
The performance of the subjects in the behavioral RDK experiment
was quantitatively analyzed using psychometric curves,
and Pearson correlation analysis was performed
between the behavioral results and the structural and functional features
estimated from the MRI data.
For behavioral data,
a psychometric curve was fitted to each subject's staircase performance,
and perceptual ability was determined as
`1 - the coherence corresponding to 79.4% accuracy'.
Pearson correlations were calculated between the above features from structural
(number of voxels per subregion, cortical area, average cortical thickness),
diffusion (mean FA, mean MD),
and functional (resting-state FC elements) images
and the perceptual ability indices of the subjects.
The ROIs were determined by the parcellation
and registration to the Glasser atlas
and the Destrieux atlas in each individual space.
We calculated the correlations between the ROIs' features
and the behavioral perceptual abilities
(p<0.01 as a threshold of statistical significance).
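The feature-behaviour screening reduces to a loop of Pearson correlations with a p < 0.01 cut-off, as in the sketch below; the dictionary-of-features layout is an illustrative assumption about how the per-subject measures are stored.

    import numpy as np
    from scipy.stats import pearsonr

    def screen_features(features, ability, alpha=0.01):
        # features: {name: (N,) values across subjects}; ability: (N,) perceptual
        # ability indices. Returns the feature-behaviour pairs passing p < alpha.
        hits = {}
        for name, values in features.items():
            r, p = pearsonr(values, ability)
            if p < alpha:
                hits[name] = (r, p)
        return hits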
Connecting physiological data indicators found
in magnetic resonance imaging (MRI) with the model
is challenging due to the vastly different scales of neural elements involved.
In this study, a heuristic approach was used
to adjust and assess parameters potentially correlated with
structural and functional MRI indicators,
inspired by functional analogy.
White matter connections found in MRI were simulated
by adjusting the mean of the connection parameter distribution for V1 to MT
(0.2-3.0 with a step size of 0.2),
or by adjusting the proportion of connections between MT and LIP
(10%-100% with a step size of 10%).
According to Dale's principle,
non-positive connection weights were set to zero,
equivalent to removing those connections.
The adjusted model was retested on the RDK dataset five times,
and statistical analysis was performed on the sensitivity of
psychometric curves (slope)
to investigate the impact of parameter adjustments
on model performance and identify parameter combinations
that improve model performance.
Similarly, for functional connections found in fMRI,
the synaptic conductance of neurons in MT was altered
(0.01-0.15 with a step size of 0.01),
or the Hebb-strengthened weight
between LIP excitatory neurons was changed
(0.7-1.5 with a step size of 0.1)
to simulate the neural modulation from other brain regions.
The same statistical analysis was conducted
to examine the effect of parameter adjustments on model performance.
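The neuroimaging-informed sweep over the V1-to-MT connection mean can be sketched as below. The callables build_model, evaluate_rdk, and fit_slope, as well as the population sizes in shape, are hypothetical placeholders for the simulation code; the parameter grid, the five repeats, and the zeroing of non-positive weights (Dale's principle) follow the description above.

    import numpy as np

    def sweep_v1_mt_mean(build_model, evaluate_rdk, fit_slope,
                         means=np.arange(0.2, 3.01, 0.2),
                         shape=(100, 3600), sigma=0.1, n_repeats=5, seed=0):
        # shape = (n_MT, n_V1) is a placeholder for the actual population sizes.
        rng = np.random.default_rng(seed)
        slopes = {}
        for mu in means:
            runs = []
            for _ in range(n_repeats):
                w = rng.normal(mu, sigma, size=shape)
                w[w <= 0] = 0.0                   # Dale's principle: no sign flips
                model = build_model(v1_to_mt=w)   # injected simulation constructor
                coh, acc = evaluate_rdk(model)    # accuracy vs. coherence on RDK set
                runs.append(fit_slope(coh, acc))  # psychometric-curve slope
            slopes[round(float(mu), 2)] = (float(np.mean(runs)), float(np.std(runs)))
        return slopes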
§.§ Interference Resistance Experiment of the Model
In addition to requiring large amounts of data for training,
deep learning models are also prone to biases
and are easily affected by noise,
resulting in poor transferability, among other issues.
To address these problems, we designed a series of perturbation experiments
on the neural dynamics model to assess its noise resistance performance.
For each module of the neural dynamics model,
we separately tested the effects of deactivating a certain percentage of neurons,
discarding a certain percentage of synapses
(ranging from 0% to 90% with a step size of 10%),
adding a certain level of Gaussian perturbation to the input current of neurons
(variance ranging from 0 to 2 times the
averaged absolute value of the group's input current,
normal distribution noise with a mean of 0),
and adding a certain level of Gaussian perturbation to the connection weights between neurons
(variance ranging from 0 to 2 times the
averaged absolute value of all connection weights,
normal distribution noise with a mean of 0).
For each parameter setting,
the model's behavioral performance was evaluated using the RDK dataset.
Our analysis concentrated on understanding
how changes in model parameters affected sensitivity and overall accuracy.
This evaluation highlights the significance of each parameter
and demonstrates the resiliency of our model.
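The four perturbation types can be expressed as small array operations, as in the sketch below; zeroing a neuron's outgoing weights is one simple way to "deactivate" it, and the noise scales follow the 0 to 2 times mean-absolute-value variance ranges quoted above. This is an illustration, not the study's code.

    import numpy as np

    rng = np.random.default_rng(0)

    def deactivate_neurons(weights, frac):
        # Zero the outgoing weights of a random fraction of neurons
        # (one simple realization of "deactivation").
        keep = rng.random(weights.shape[0]) >= frac
        return weights * keep[:, None]

    def drop_synapses(weights, frac):
        return weights * (rng.random(weights.shape) >= frac)

    def jitter_input(current, scale):
        var = scale * np.abs(current).mean()      # variance = 0..2 x mean |I|
        return current + rng.normal(0.0, np.sqrt(var), current.shape)

    def jitter_weights(weights, scale):
        var = scale * np.abs(weights).mean()      # variance = 0..2 x mean |w|
        return weights + rng.normal(0.0, np.sqrt(var), weights.shape)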
For comparison, we conducted similar noise and damage experiments
on MotionNet as we did on the neural dynamics model.
For each layer of MotionNet,
we separately tested the effects of
deactivating a certain percentage of neurons,
discarding a certain percentage of connections
(ranging from 0% to 90% with a step size of 10%),
adding a certain level of Gaussian perturbation to the output of neurons
(variance ranging from 0 to 2 times the
averaged absolute value of the group's neuron activation,
normal distribution noise with a mean of 0),
and adding a certain level of Gaussian perturbation to the connection weights between neurons
(variance ranging from 0 to 2 times the
averaged absolute value of all connection weights,
normal distribution noise with a mean of 0).
Due to the fact that the fully connected layer (fc4) of MotionNet
is mapped to the output layer with only two neurons,
the fc4 connections were neither deactivated nor perturbed during our experiment.
After each adjustment of the perturbation parameters,
we conducted a complete evaluation on the RDK dataset
and calculated the sensitivity of the current model
to the parameter changes.
§ EXTENDED DATA
mcculloch1943logical
authorMcCulloch, W. S. & authorPitts, W.
titleA logical calculus of the ideas immanent in nervous
activity.
journalThe Bulletin of Mathematical Biophysics
volume5, pages115–133
(year1943).
hopfield1982neural
authorHopfield, J. J.
titleNeural networks and physical systems with emergent
collective computational abilities.
journalProceedings of the national academy of
sciences volume79, pages2554–2558
(year1982).
lecun1998gradient
authorLeCun, Y., authorBottou, L.,
authorBengio, Y. & authorHaffner, P.
titleGradient-based learning applied to document
recognition.
journalProceedings of the IEEE
volume86, pages2278–2324
(year1998).
lecun2015deep
authorLeCun, Y., authorBengio, Y. &
authorHinton, G.
titleDeep learning.
journalNature volume521,
pages436–444 (year2015).
hassabis2017neuroscience
authorHassabis, D., authorKumaran, D.,
authorSummerfield, C. & authorBotvinick, M.
titleNeuroscience-Inspired Artificial Intelligence.
journalNeuron volume95,
pages245–258 (year2017).
alex2012imagenet
authorKrizhevsky, A., authorSutskever, I. &
authorHinton, G. E.
titleImagenet classification with deep convolutional
neural networks.
journalAdvances in neural information processing
systems volume25 (year2012).
szegedy2015going
authorSzegedy, C. et al.
titleGoing deeper with convolutions.
booktitle2015 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR),
pages1–9 (publisherIEEE, addressBoston,
MA, USA, year2015).
he2016resnet
authorHe, K., authorZhang, X., authorRen,
S. & authorSun, J.
titleDeep Residual
Learning for Image Recognition.
booktitle2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR),
pages770–778 (publisherIEEE, addressLas
Vegas, NV, USA, year2016).
ji2013conv3d
authorJi, S., authorXu, W., authorYang, M.
& authorYu, K.
title3D Convolutional Neural Networks for Human
Action Recognition.
journalIEEE Transactions on Pattern Analysis and
Machine Intelligence volume35,
pages221–231 (year2013).
maturana2015voxnet
authorMaturana, D. & authorScherer, S.
titleVoxNet: A
3D Convolutional Neural Network for real-time object recognition.
booktitle2015
IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), pages922–928 (publisherIEEE,
addressHamburg, Germany, year2015).
mishkin1982contribution
authorMishkin, M. & authorUngerleider, L. G.
titleContribution of striate inputs to the visuospatial
functions of parieto-preoccipital cortex in monkeys.
journalBehavioural Brain Research
volume6, pages57–77 (year1982).
shadlen2001neural
authorShadlen, M. N. & authorNewsome, W. T.
titleNeural basis of a perceptual decision in the parietal
cortex (area lip) of the rhesus monkey.
journalJournal of neurophysiology
volume86, pages1916–1936
(year2001).
katz2016dissociated
authorKatz, L. N., authorYates, J. L.,
authorPillow, J. W. & authorHuk, A. C.
titleDissociated functional significance of
decision-related activity in the primate dorsal stream.
journalNature volume535,
pages285–288 (year2016).
yates2017functional
authorYates, J. L., authorPark, I. M.,
authorKatz, L. N., authorPillow, J. W. &
authorHuk, A. C.
titleFunctional dissection of signal and noise in MT and
LIP during decision-making.
journalNature Neuroscience
volume20, pages1285–1292
(year2017).
chariker2021theory
authorChariker, L., authorShapley, R.,
authorHawken, M. & authorYoung, L.-S.
titleA theory of direction selectivity for macaque primary
visual cortex.
journalProceedings of the National Academy of
Sciences volume118, pagese2105062118
(year2021).
chariker2022computational
authorChariker, L., authorShapley, R.,
authorHawken, M. & authorYoung, L.-S.
titleA computational model of direction selectivity in
macaque v1 cortex based on dynamic differences between on and off pathways.
journalJournal of Neuroscience
volume42, pages3365–3380
(year2022).
burr2011motion
authorBurr, D. & authorThompson, P.
titleMotion psychophysics: 1985–2010.
journalVision research
volume51, pages1431–1456
(year2011).
amano2012human
authorAmano, K. et al.
titleHuman neural responses involved in spatial pooling of
locally ambiguous motion signals.
journalJournal of Neurophysiology
volume107, pages3493–3508
(year2012).
nishida2018motion
authorNishida, S., authorKawabe, T.,
authorSawayama, M. & authorFukiage, T.
titleMotion Perception: From Detection to
Interpretation.
journalAnnual Review of Vision Science
volume4, pages501–523
(year2018).
shadlen1996motion
authorShadlen, M. N. & authorNewsome, W. T.
titleMotion perception: seeing and deciding.
journalProceedings of the National Academy of
Sciences volume93, pages628–633
(year1996).
wang2002probabilistic
authorWang, X.-J.
titleProbabilistic decision making by slow reverberation
in cortical circuits.
journalNeuron volume36,
pages955–968 (year2002).
wong2006recurrent
authorWong, K.-F. & authorWang, X.-J.
titleA recurrent network mechanism of time integration in
perceptual decisions.
journalThe Journal of Neuroscience: The Official
Journal of the Society for Neuroscience volume26,
pages1314–1328 (year2006).
wang2008decision
authorWang, X.-J.
titleDecision making in recurrent neuronal circuits.
journalNeuron volume60,
pages215–234 (year2008).
wang2020macroscopic
authorWang, X.-J.
titleMacroscopic gradients of synaptic excitation and
inhibition in the neocortex.
journalNature Reviews. Neuroscience
volume21, pages169–178
(year2020).
song2017reward
authorSong, H. F., authorYang, G. R. &
authorWang, X.-J.
titleReward-based training of recurrent neural networks
for cognitive and value-based tasks.
journaleLife volume6,
pagese21492 (year2017).
genon2022linking
authorGenon, S., authorEickhoff, S. B. &
authorKharabian, S.
titleLinking interindividual variability in brain
structure to behaviour.
journalNature Reviews Neuroscience
volume23, pages307–318
(year2022).
colom2007general
authorColom, R., authorJung, R. E. &
authorHaier, R. J.
titleGeneral intelligence and memory span: evidence for a
common neuroanatomic framework.
journalCognitive Neuropsychology
volume24, pages867–878
(year2007).
hamalainen2017bilingualism
authorHämäläinen, S., authorSairanen, V.,
authorLeminen, A. & authorLehtonen, M.
titleBilingualism modulates the white matter structure of
language-related pathways.
journalNeuroImage volume152,
pages249–257 (year2017).
rice2015spontaneous
authorRice, K. & authorRedcay, E.
titleSpontaneous mentalizing captures variability in the
cortical thickness of social brain regions.
journalSocial Cognitive and Affective Neuroscience
volume10, pages327–334
(year2015).
kanai2012online
authorKanai, R., authorBahrami, B.,
authorRoylance, R. & authorRees, G.
titleOnline social network size is reflected in human
brain structure.
journalProceedings of the Royal Society B:
Biological Sciences volume279,
pages1327–1334 (year2012).
forkel2022white
authorForkel, S. J., authorFriedrich, P.,
authorThiebaut de Schotten, M. & authorHowells, H.
titleWhite matter variability, cognition, and disorders: a
systematic review.
journalBrain Structure and Function
pages1–16 (year2022).
schmithorst2010white
authorSchmithorst, V. J. & authorYuan, W.
titleWhite matter development during adolescence as shown
by diffusion mri.
journalBrain and cognition
volume72, pages16–25
(year2010).
kantarci2017white
authorKantarci, K. et al.
titleWhite-matter integrity on dti and the pathologic
staging of alzheimer's disease.
journalNeurobiology of aging
volume56, pages172–179
(year2017).
mechelli2005voxel
authorMechelli, A., authorPrice, C. J.,
authorFriston, K. J. & authorAshburner, J.
titleVoxel-based morphometry of the human brain: methods
and applications.
journalCurrent Medical Imaging
volume1, pages105–113
(year2005).
fischl2000measuring
authorFischl, B. & authorDale, A. M.
titleMeasuring the thickness of the human cerebral cortex
from magnetic resonance images.
journalProceedings of the National Academy of
Sciences volume97, pages11050–11055
(year2000).
glasser2011mapping
authorGlasser, M. F. & authorVan Essen, D. C.
titleMapping human cortical areas in vivo based on myelin
content as revealed by t1-and t2-weighted mri.
journalJournal of neuroscience
volume31, pages11597–11616
(year2011).
pievani2010assessment
authorPievani, M. et al.
titleAssessment of white matter tract damage in mild
cognitive impairment and alzheimer's disease.
journalHuman brain mapping
volume31, pages1862–1875
(year2010).
sui2014function
authorSui, J., authorHuster, R., authorYu,
Q., authorSegall, J. M. & authorCalhoun, V. D.
titleFunction–structure associations of the brain:
evidence from multimodal connectivity and covariance studies.
journalNeuroimage volume102,
pages11–23 (year2014).
wang2007altered
authorWang, K. et al.
titleAltered functional connectivity in early alzheimer's
disease: A resting-state fmri study.
journalHuman brain mapping
volume28, pages967–978
(year2007).
song2008brain
authorSong, M. et al.
titleBrain spontaneous functional connectivity and
intelligence.
journalNeuroimage volume41,
pages1168–1176 (year2008).
hashemi2020bayesian
authorHashemi, M. et al.
titleThe Bayesian Virtual Epileptic Patient: A
probabilistic framework designed to infer the spatial map of epileptogenicity
in a personalized large-scale brain model of epilepsy spread.
journalNeuroImage volume217,
pages116839 (year2020).
hanks2006microstimulation
authorHanks, T. D., authorDitterich, J. &
authorShadlen, M. N.
titleMicrostimulation of macaque area lip affects
decision-making in a motion discrimination task.
journalNature neuroscience
volume9, pages682–689
(year2006).
briggs2013attention
authorBriggs, F., authorMangun, G. R. &
authorUsrey, W. M.
titleAttention enhances synaptic efficacy and the
signal-to-noise ratio in neural circuits.
journalNature volume499,
pages476–480 (year2013).
ye2021quantifying
authorYe, L. & authorLi, C.
titleQuantifying the Landscape of Decision Making
From Spiking Neural Networks.
journalFrontiers in Computational Neuroscience
volume15, pages740601
(year2021).
lebel2008microstructural
authorLebel, C., authorWalker, L.,
authorLeemans, A., authorPhillips, L. &
authorBeaulieu, C.
titleMicrostructural maturation of the human brain from
childhood to adulthood.
journalNeuroImage volume40,
pages1044–1055 (year2008).
kuhn2021brain
authorKühn, S., authorForlim, C. G.,
authorLender, A., authorWirtz, J. &
authorGallinat, J.
titleBrain functional connectivity differs when viewing
pictures from natural and built environments using fMRI resting state
analysis.
journalScientific Reports
volume11, pages4110 (year2021).
ratcliff1978theory
authorRatcliff, R.
titleA theory of memory retrieval.
journalPsychological review
volume85, pages59 (year1978).
ratcliff2006modeling
authorRatcliff, R.
titleModeling response signal and response time data.
journalCognitive psychology
volume53, pages195–237
(year2006).
ratcliff2008diffusion
authorRatcliff, R. & authorMcKoon, G.
titleThe diffusion decision model: theory and data for
two-choice decision tasks.
journalNeural computation
volume20, pages873–922
(year2008).
usher2001time
authorUsher, M. & authorMcClelland, J. L.
titleThe time course of perceptual choice: the leaky,
competing accumulator model.
journalPsychological review
volume108, pages550 (year2001).
tillman2020sequential
authorTillman, G., authorVan Zandt, T. &
authorLogan, G. D.
titleSequential sampling models without random
between-trial variability: the racing diffusion model of speeded decision
making.
journalPsychonomic Bulletin & Review
volume27, pages911–936
(year2020).
brown2008simplest
authorBrown, S. D. & authorHeathcote, A.
titleThe simplest complete model of choice response time:
Linear ballistic accumulation.
journalCognitive psychology
volume57, pages153–178
(year2008).
wei2017decision
authorWei, H., authorBu, Y. & authorDai,
D.
titleA decision-making model based on a spiking neural
circuit and synaptic plasticity.
journalCognitive Neurodynamics
volume11, pages415–431
(year2017).
roxin2008neurobiological
authorRoxin, A. & authorLedberg, A.
titleNeurobiological models of two-choice decision making
can be reduced to a one-dimensional nonlinear diffusion equation.
journalPLoS computational biology
volume4, pagese1000046
(year2008).
umakantha2022relating
authorUmakantha, A., authorPurcell, B. A. &
authorPalmeri, T. J.
titleRelating a Spiking Neural Network Model and
the Diffusion Model of Decision-Making.
journalComputational Brain & Behavior
volume5, pages279–301
(year2022).
kim1999neural
authorKim, J.-N. & authorShadlen, M. N.
titleNeural correlates of a decision in the dorsolateral
prefrontal cortex of the macaque.
journalNature neuroscience
volume2, pages176–185
(year1999).
roitman2002response
authorRoitman, J. D. & authorShadlen, M. N.
titleResponse of neurons in the lateral intraparietal area
during a combined visual discrimination reaction time task.
journalJournal of neuroscience
volume22, pages9475–9489
(year2002).
britten1992analysis
authorBritten, K. H., authorShadlen, M. N.,
authorNewsome, W. T. & authorMovshon, J. A.
titleThe analysis of visual motion: a comparison of
neuronal and psychophysical performance.
journalJournal of Neuroscience
volume12, pages4745–4765
(year1992).
peirce2019psychopy2
authorPeirce, J. et al.
titlePsychopy2: Experiments in behavior made easy.
journalBehavior research methods
volume51, pages195–203
(year2019).
lansky2001ornstein
authorLánskỳ, P. & authorSacerdote, L.
titleThe ornstein–uhlenbeck neuronal model with
signal-dependent noise.
journalPhysics Letters A
volume285, pages132–140
(year2001).
troyer1997physiological
authorTroyer, T. W. & authorMiller, K. D.
titlePhysiological gain leads to high isi variability in a
simple model of a cortical regular spiking cell.
journalNeural computation
volume9, pages971–983
(year1997).
jahr1990voltage
authorJahr, C. E. & authorStevens, C. F.
titleVoltage dependence of nmda-activated macroscopic
conductances predicted by single-channel kinetics.
journalJournal of Neuroscience
volume10, pages3178–3182
(year1990).
tian2022comprehensive
authorTian, Q. et al.
titleComprehensive diffusion mri dataset for in vivo human
brain microstructure mapping using 300 mt/m gradients.
journalScientific Data
volume9, pages7 (year2022).
michels2005visual
authorMichels, L., authorLappe, M. &
authorVaina, L. M.
titleVisual areas involved in the perception of human
movement from dynamic form analysis.
journalNeuroreport volume16,
pages1037–1041 (year2005).
fischl2012freesurfer
authorFischl, B.
titleFreesurfer.
journalNeuroimage volume62,
pages774–781 (year2012).
destrieux2010automatic
authorDestrieux, C., authorFischl, B.,
authorDale, A. & authorHalgren, E.
titleAutomatic parcellation of human cortical gyri and
sulci using standard anatomical nomenclature.
journalNeuroimage volume53,
pages1–15 (year2010).
fox2016combining
authorFox, M. D. et al.
titleCombining task-evoked and spontaneous activity to
improve pre-operative brain mapping with fmri.
journalNeuroImage volume124,
pages714–723 (year2016).
glasser2016multi
authorGlasser, M. F. et al.
titleA multi-modal parcellation of human cerebral cortex.
journalNature volume536,
pages171–178 (year2016).
entry_id: http://arxiv.org/abs/2409.03334v1
published: 20240905081627
title: A Compact Magnet System for the Tsinghua Tabletop Kibble Balance
authors: Yongchao Ma, Nanjia Li, Weibo Liu, Kang Ma, Wei Zhao, Songling Huang, Shisong Li
primary_category: physics.ins-det
categories: physics.ins-det
A Compact Magnet System for the Tsinghua Tabletop Kibble Balance
Yongchao Ma, Nanjia Li, Weibo Liu, Kang Ma, Wei Zhao, Songling Huang,
Senior Member, IEEE, Shisong Li^†, Senior Member, IEEE
The authors are with the Department of Electrical Engineering, Tsinghua University, Beijing 100084, China.
Wei Zhao is also with the Yangtze Delta Region
Institute of Tsinghua University, Jiaxing, Zhejiang 314006, China.
This work was supported by the National Key Research and Development Program of China under Grant 2022YFF0708600 and the National Natural Science Foundation of China under Grant 52377011.
^†Email: shisongli@tsinghua.edu.cn
September 9, 2024
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Although the so-called magnetic geometrical factor, Bl, of a Kibble balance does not appear in the Kibble equations, it offers the precision link between electrical and mechanical quantities and furthers a quasi-quantum traceability path for mass metrology. This feature makes the magnet system, which supplies the Bl in the Kibble equations, a core component of Kibble balances. Following the open-hardware idea, we report here on the design, manufacture, assembly, optimization, and finally the performance of a compact magnet system for the Tsinghua tabletop Kibble balance. Notably, the magnet system presented in this study allows the upper half to levitate when the magnetic circuit is split open, guided by a simple mechanism, which substantially eases the open and close operations.
Experimental tests show that the realized magnet systems yield a high Bl value (e.g., 400 Tm for a bifilar coil and 800 Tm for a single coil with a wire gauge of 0.2 mm) while keeping a low volume and weight (40 kg), thanks to the improved uniformity of the magnetic profile. Furthermore, important parameters related to systematic effects, such as the current effect, are checked, targeting a final mass-realization accuracy at the 10^-8 level.
Kibble balance, magnetic field measurement, kilogram, measurement error, tabletop instruments.
§ INTRODUCTION
The Kibble balance, originally identified as the watt balance <cit.>, stands as one of the principal methodologies for realizing the kilogram, the unit of mass, within the revised International System of Units (SI)<cit.>. An alternative approach, the x-ray crystal density (XRCD) method<cit.>, represents the other major approach. Currently, numerous groups, predominantly national metrology institutes (NMIs), are engaged in Kibble balance experiments <cit.>, and the most precise Kibble balances exhibit the capability to calibrate masses at the kilogram level with a relative uncertainty of approximately one part in 10^8 <cit.>.
The detailed principle of the Kibble balance experiment has been summarized in recent reviews, e.g., <cit.>. The principle is a combination of two simple physical laws, Lorentz's force law and Faraday's induction law, on the same magnet-coil system. In the so-called weighing phase, a current-carrying coil is placed in the magnetic field and the electromagnetic force is adjusted to balance the weight of a test mass, yielding mg=(Bl)_wI, where m is the test mass, g is the local gravitational acceleration, and I is the current in the coil. The Bl is a geometric factor that integrates the cross product of the magnetic flux density B⃗ and the unit length vector along the wire d⃗l⃗ over the entire coil wire path. In the second measurement phase, the velocity phase, the coil is moved at a velocity v and the induced voltage U=(Bl)_vv is obtained. Ensuring that (Bl)_w=(Bl)_v, the mass can be realized in terms of m=UI/(gv). Since the quantities on the right-hand side can be measured against quantum standards, either electrical or optical, the Kibble balance can be considered a quasi-quantum mass realization instrument, see <cit.>.
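A small numerical sketch of the two-phase principle, using round illustrative numbers rather than measured values, shows how the weighing-phase current and the velocity-phase voltage combine into m = UI/(gv):

    g  = 9.801    # m/s^2, local gravitational acceleration (illustrative)
    Bl = 400.0    # T.m, geometric factor (illustrative)
    m  = 1.0      # kg, test mass
    v  = 2e-3     # m/s, coil velocity in the moving phase

    I = m * g / Bl          # weighing phase: current balancing the weight (~24.5 mA)
    U = Bl * v              # velocity phase: induced voltage (~0.8 V)
    m_realized = U * I / (g * v)
    print(m_realized)       # recovers 1.0 kg when (Bl)_w == (Bl)_v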
As can be seen from the measurement principle, although Bl does not appear in the Kibble equations, it is a hidden key player: Bl provides the precision link between two measurement phases so that the mass m can be precisely determined. A detailed discussion of the irony of Bl can be found in <cit.>. It is shown in <cit.> that the value of Bl should be neither too large nor too small. A large Bl introduces uncertainty due to the small current measurement when the test mass is fixed, while the opposite, i.e. a small Bl value, leads to high measurement uncertainty for a low induced voltage. It is found that a typical optimum Bl value is a few hundred Tm for the kilogram-level mass realization. In order to maintain a suitable Bl value while suppressing related systematic effects <cit.> to a level well below 1×10^-8, a large field uniform region is desired. In this regard, the magnet system providing the magnetic field is typically large and heavy. Nowadays, nearly all Kibble balance experiments worldwide utilize magnetic circuits composed of rare-earth permanent magnets and yokes <cit.>. To build such large magnet systems, the cost of the enormous magnet system is extremely high, and its assembly becomes difficult since the attractive forces for gluing the permanent segments as a whole or merging the permanent magnet and yoke parts can easily reach a few or a few tens of kN.
In recent years, several groups have started the design of the compact magnet system, e.g. <cit.>, for tabletop Kibble balance applications. However, the compact magnet system may increase the magnetic uncertainties related to the hysteresis effect, the thermal effect, and the current effect<cit.>. Therefore, it remains a challenging task to limit each related systematic effect while realizing a compact and easy-to-use Kibble balance magnet system.
In late 2022, Tsinghua University launched a tabletop Kibble balance project for compact, robust, and accurate mass realizations following an open hardware approach <cit.>. In this paper, we report the design, fabrication, assembly, optimization, and performance of a compact magnet system for the Tsinghua Tabletop Kibble balance. Section <ref> presents the design and general considerations of the magnet system, along with measurements of the magnetic material properties. Section <ref> presents the machine and the assembly of various mechanical segments, in which we show that an appropriate splitting surface can significantly reduce the difficulty of the open/close operation of the magnet. Section <ref> measures the magnetic profile, and some related systematic effects are discussed. Finally, a conclusion is drawn and the future work is discussed in Section <ref>.
§ GENERAL DESIGN AND CONSIDERATIONS
§.§ Basic Design
Compared to other types of magnet systems, for example, current-carrying coils <cit.>, the permanent magnet system can offer a sub-Tesla magnetic field without any current stabilization and ohmic heating issues. At present, almost all the world's Kibble balance experiments choose to use magnetic circuits with permanent magnets and iron yokes. Among different Kibble balance magnetic circuit designs, the one originally proposed by the BIPM Kibble balance group <cit.> and later adopted by many other groups, e.g. <cit.>, has become the most popular. The magnet system design for the Tsinghua tabletop Kibble balance is also based on the BIPM-type magnetic circuit. The CAD model and schematic design are presented in Fig. <ref> (a) and (b). The targeted magnet system has a 220 mm outer diameter, a 180 mm height, and a total mass of about 40 kg. The flux of the two permanent magnet rings, each denoted as PM, is guided by the inner yokes through a circular air gap formed by the inner and outer yokes. The outer yoke and cover yokes return the flux as a closure. Note that in this design, we choose two different rare-earth materials, neodymium (NdFeB) and samarium-cobalt (Sm_2Co_17), and build two versions of the magnet system. In the following text, they are referred to as the NdFeB magnet and the SmCo magnet, respectively. For the yoke, a high-permeability pure iron, DT4C, is chosen to form good equipotential boundaries at the inner and outer radii of the air gap and restrict the direction of the magnetic field lines. The idea is to lower the influence of upper and lower PM asymmetries and ensure the generated magnetic field follows a 1/r distribution inside the air gap <cit.>, as detailed in subsection <ref>.
For the BIPM-type magnet circuit, the approximate formula to estimate the average magnetic flux density in the air gap, B, can be written as <cit.>
B ≈ -μ_0 H_m / (δ_a/δ_m + γ S_a/S_m),
where μ_0, H_m, δ_a, δ_m, S_a, and S_m are the vacuum permeability, the residual coercive force of the permanent magnet, the air-gap width, the permanent-magnet height, half of the air-gap area along the flux path (S_a ≈ π r_a h_a, where r_a and h_a are the mean radius and the height of the air gap), and the top/bottom surface area of the permanent magnet disk, respectively. Here γ is a scale factor due to the edge effect, mainly related to the height-width ratio of the air gap, i.e. h_a/δ_a. In our design, the mean radius, width, and height of the air gap are respectively r_a=80 mm, δ_a=15 mm, and h_a=50 mm. For such a geometrical setup, γ≈1.43 is obtained by finite element analysis (FEA). The inner and outer radii of the permanent magnet ring are r_mi=20 mm and r_mo=65 mm, and its height is δ_m=25 mm. H_m(NdFeB)=-1030 kA/m and H_m(SmCo)=-786 kA/m are obtained by FEA simulations based on the surface magnetic flux density measurement, see subsection <ref>. Taking the above parameters into (<ref>), the average magnetic flux density is 0.46 T and 0.62 T for the SmCo and NdFeB magnets, respectively.
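Plugging the quoted dimensions into the approximate expression above reproduces the stated flux-density estimates to within about 0.01 T; the short script below is only a numerical check of that formula, not of the FEA.

    import numpy as np

    mu0 = 4e-7 * np.pi
    r_a, delta_a, h_a = 80e-3, 15e-3, 50e-3      # air-gap mean radius, width, height (m)
    r_mi, r_mo, delta_m = 20e-3, 65e-3, 25e-3    # magnet ring inner/outer radius, height
    gamma = 1.43                                  # edge-effect factor from FEA

    S_a = np.pi * r_a * h_a                       # half air-gap area along the flux path
    S_m = np.pi * (r_mo**2 - r_mi**2)             # magnet top/bottom surface area

    for name, H_m in [("SmCo", -786e3), ("NdFeB", -1030e3)]:   # A/m
        B = -mu0 * H_m / (delta_a / delta_m + gamma * S_a / S_m)
        print(name, round(B, 2), "T")             # ~0.47 T and ~0.62 T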
During the Kibble balance measurements, uniformity of the magnetic field over a centimeter-scale range along the vertical direction, e.g. ΔB/B<1×10^-3, is desired to keep the measurement uncertainty low. However, due to the edge effect, the magnetic flux density of a conventional BIPM-type magnet drops significantly at the two ends of the air gap, leaving a field-uniform range much shorter than the height of the air gap. Fig. <ref> (a) shows the magnetic profile of the conventional BIPM-type magnet using the same air-gap parameters. The uniform range (ΔB/B<1×10^-3) is about 23 mm, approximately 46% of the whole air-gap height. Since the volume of the magnet system is limited for tabletop Kibble balances, it is crucially important to extend the uniform field range without enlarging the overall magnet size.
An innovation of the Tsinghua tabletop Kibble balance magnet lies in the optimization of its profile through a strategic reshaping of the inner yoke boundary. This approach draws inspiration from the magnet design proposed in <cit.>, incorporating two small rectangle rings at the extremities of the inner yoke to mitigate the edge effect. The rationale behind this modification is elucidated by (<ref>): The reduction of the air gap width (δ_ a) at both ends effectively amplifies the magnetic field B. Fine-tuning the dimensions, specifically the width (δ_ c) and height (h_ c) of the rectangle, enables significant compensation for the edge effect, thereby expanding the range of uniform magnetic fields.
Fig. <ref>(a) illustrates our approach, where we maintain a fixed height for the compensation rectangle ring (h_ c=5 mm) and vary its width (δ_ c) from 0 mm to 0.6 mm through FEA simulations. It is evident that excessively small widths result in insignificant compensation, while excessively large widths lead to an over-compensation, reducing the uniform range. In the final design, a width of 0.4 mm is chosen for the compensation ring. Despite a modest decrease of approximately 4.3 mT (0.7%) in the absolute magnetic field strength, this configuration enhances the uniform magnetic field range (Δ B/B<1×10^-3) to 35 mm, representing 70% of the total air-gap height. Notably, this signifies a more than 50% increase in the uniform range compared to the original uncompensated magnet system. The experimental validation of the magnetic profiles is discussed in Section <ref>.
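The uniform range used as the figure of merit above can be extracted from any sampled profile B(z) with a brute-force search, as in the sketch below; defining the relative deviation with respect to the segment mean is an illustrative choice.

    import numpy as np

    def uniform_range(z, B, tol=1e-3):
        # z, B: sampled vertical positions and flux densities along the gap.
        best = (0.0, None, None)
        for i in range(len(z)):
            for j in range(i + 1, len(z)):
                seg = B[i:j + 1]
                if np.max(np.abs(seg / seg.mean() - 1.0)) <= tol:
                    if z[j] - z[i] > best[0]:
                        best = (z[j] - z[i], z[i], z[j])
        return best          # (length, z_start, z_end) of the longest uniform span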
The second major improvement of the Tsinghua Kibble balance magnet, compared to the conventional BIPM-type design, is the optimization of the open/close operation. The BIPM-type permanent magnet contains a magnetic closure that can well reject magnetic interactions from the background flux, however, a drawback of such systems is that a high attraction force (up to the ton level and above) needs to be overcome for magnet open/close operations so that the coil can be reached (install, replace, adjustment, etc). It is obvious that the smaller the force required to open the magnet system, the easier it is to maintain the magnet system.
The ideal case is to choose an operation surface that has the minimum change to the main magnetic flux path between the open and the close status. With a symmetrical design, the middle horizontal plane (z=0) seems to be an option because all the magnetic flux on this surface is horizontal, i.e. B_z=0. However, FEA calculations <cit.> and theoretical analysis using Maxwell's tensor <cit.> show that the splitting force on the middle symmetrical surface is repulsive. This phenomenon is understandable: Once the two symmetrical parts are separated, the edge effects produce the same magnetic poles around the air gap region, and a repulsive force is hence generated.
In <cit.>,
the approximation of the vertical splitting force, F_z, as a function of the splitting position z is given, i.e.
F_z(z) = (π r_a B^2/μ_0) [δ_a - 2π r_a (S_i+S_o)/(S_i S_o) z^2],
where r_a = (r_i+r_o)/2 denotes the average radius of the air gap, S_i and S_o are the cross-sectional areas of the inner and outer yokes at the splitting plane, B is the average magnetic flux density in the air gap, and z is the distance of the splitting plane from the symmetry plane of the magnet. Although (<ref>) estimates the magnetic force only for an opening distance d=0 mm, it clearly shows how F_z changes from repulsive to attractive. The magnetic force F_z also varies as a function of the opening distance d, and the FEA mapping of the magnetic force F_z(z,d) <cit.> shows that the sign of F_z can also reverse, from attractive to repulsive, in the z regions where F_z(z,d=0 mm)<0. Summarizing the above information, it is possible to choose a z value for the open/close surface that keeps the overall magnetic force F_z minimal. In addition, since the splitting surface may change the shape of the profile, it should be pushed as far away from the middle symmetry plane as possible. In our design, the splitting position z is set to 15 mm. Fig. <ref>(b) presents the magnetic force as a function of the splitting distance, d, over a range of 50 mm. The initial opening force (d=0 mm) is -200 N and -112 N for the NdFeB and SmCo magnets, respectively. Within a few millimeters, F_z changes direction from attractive to repulsive and quickly reaches its maximum (approximately 540 N and 302 N at d=6 mm). Finally, F_z decreases slowly as d continues to increase.
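For a rough numerical reading of the splitting-force expression above at d = 0 mm, the sketch below uses yoke cross-sections estimated from the quoted radii; these S_i and S_o values are assumptions, and the -200 N and -112 N opening forces quoted in the text come from FEA rather than from this approximation.

    import numpy as np

    mu0, r_a, delta_a = 4e-7 * np.pi, 80e-3, 15e-3
    B = 0.62                                       # T, NdFeB estimate from above
    S_i = np.pi * (72.5e-3**2 - 20e-3**2)          # inner yoke section (assumed)
    S_o = np.pi * (110e-3**2 - 87.5e-3**2)         # outer yoke section (assumed)

    def F_z(z):
        return np.pi * r_a * B**2 / mu0 * (
            delta_a - 2 * np.pi * r_a * (S_i + S_o) / (S_i * S_o) * z**2)

    z_star = np.sqrt(delta_a * S_i * S_o / (2 * np.pi * r_a * (S_i + S_o)))
    print(F_z(0.0), F_z(15e-3), z_star)   # repulsive at z=0, weakly attractive
                                          # near the zero crossing z_star ~ 15 mm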
In Fig. <ref>, the magnet system is delineated by a splitting surface, dividing it into distinct upper and lower segments. The respective masses of these segments are 17.2 kg and 24.1 kg. In the design, the lower half is immovably fixed, while the upper half remains mobile. Fig. <ref>(b) illustrates that subtracting the weight of the upper half, G=168 N, from F_z results in a zero-crossing point denoted by a prominent dot in the plot. At this specific point, F_z equals G, leading to the levitation of the upper half magnet. This characteristic stands as a distinctive feature of the Tsinghua Kibble balance magnet system. Notably, the considerable gap during levitation measures 22 mm and 41 mm for the SmCo and NdFeB magnets, respectively. This ample gap accommodates coil operations effectively. It is important to emphasize that in the closed position, the attractive forces are 280 N and 368 N for SmCo and NdFeB magnets, respectively. This configuration ensures a securely closed state while maintaining a reasonable ease of reopening.
§.§ Material Properties
§.§.§ Permanent Magnet
The permanent magnet is the magnetic source of the magnet system. Its properties are closely related to the stability and uniformity of the air-gap magnetic field. Here we compare two types of permanent magnets: the samarium-cobalt (Sm_2Co_17) magnet and the neodymium (NdFeB) magnet. Since the temperature coefficient of the NdFeB magnet (≈-1.2×10^-3/K) is several times larger in magnitude than that of the SmCo magnet (≈-3.0×10^-4/K), most Kibble balance magnets use SmCo as the permanent magnet material.
Due to the brittleness of SmCo material, there is a size limitation in its manufacturing process. As a result, large SmCo magnets are typically constructed by assembling multiple smaller magnet tiles. However, variations in the magnetization of these tiles, along with the gaps introduced during assembly, can significantly degrade the overall uniformity of the magnetic field in the permanent magnet. For the required size of the permanent ring (r_ mi=20 mm, r_ mo=65 mm, δ_ m=25 mm), it is still too large for a whole magnetization. In the design, we divided the permanent magnet ring into 8 slices. Conventionally, each slice is magnetized first (choose slices with a similar magnetization, see <cit.>) and then all slices are glued into a whole. The latter needs to overcome the attraction force between adjacent slices and the assembly is usually costly. Here we use a different sequence for the magnetization: the slices are first assembled (glue or use an outer ring) and then individual slices are magnetized one by one. This approach can significantly reduce the cost, but the drawback is that the magnetization of each slice may differ considerably.
To check the uniformity of different slices, a measurement of the surface magnetic field was carried out. The experimental setup is shown in Fig. <ref>(a). A Hall probe is placed slightly above the middle surface (r=45 mm) and the vertical magnetic field is scanned over 360 degrees while the permanent magnet disk is rotated by a motorized stage. One typical measurement result is shown in Fig. <ref>(b) (the red curve). It can be seen from the measurement that the magnetic field at the junction of two permanent magnets is significantly weaker (up to 15%) than that in the middle of a permanent magnet slice, leaving 8 magnetic poles along the θ axis. Using the same experimental setup, the surface magnetic field of the NdFeB ring is also measured, shown as the black dashed line. Note the NdFeB disk is manufactured and magnetized as a whole, as its magnetization size limit is much larger than that of the SmCo magnet. It can be seen that the curve of the NdFeB magnet is much smoother and the peak-to-peak variation is at the percentage level. Also, the magnetic field generated by the NdFeB magnet is considerably stronger: the surface magnetic flux density increases by over 0.1 T, from 0.25 T to 0.36 T.
Using the surface field measurement results, the residual coercive force of the permanent magnet, H_ m, can be conducted. The idea is to adjust and find the right H_ m in an open space FEA model so that the vertical magnetic field at the Hall probe measurement point equals to the measurement result. As shown in Fig. <ref>(c), the final B_z distribution along r is presented, and the H_ m values for the test NdFeB and SmCo magnets are -1030 kA/m and -786 kA/m, respectively.
Knowing the main parameters of permanent magnet rings, a concerning question is how well can the yoke average out the asymmetry of the permanent magnet rings, such as the ripples shown in Fig. <ref>(b). The asymmetry of the upper and lower permanent magnets can be well rejected by using two permanent magnets with a similar surface field strength and high permeability yokes, see <cit.>. For example, for the NdFeB magnet, the variations of the surface field is within 4%. In this case, according to the evaluation in <cit.>, even with a relatively low permeability yoke, e.g. μ_ r=1000 for low-carbon steel, the relative magnetic field change over z is well below 1×10^-3. By far, the field uniformity along the circular, B(θ), has not been studied. To observe the magnetic field uniformity change as a function of yoke permeability, a 3D FEA model is employed. For the permanent magnet ring, only one-eighth located at θ=π/2 is magnetized and used as the flux input. Its surface magnetic field distribution is shown in Fig. <ref>(d). To keep the setup simple, the upper and lower permanent magnet rings are kept the same field distribution along θ. Now we change the relative permeability of the yoke, from 10 to 10^5, and calculate the field distribution in the air gap. The FEA calculation results are shown in Fig. <ref>(e). It can be seen that as the μ_ r increases the field becomes flatter and flatter along the θ. Limited by the meshing numbers, the calculation starts to lose accuracy when μ_ r is larger than 1000.
However, we can extrapolate the calculation results to find the B(θ) uniformity improvement as a function of μ_r. Fig. <ref>(f) presents the variation of the B(θ) profile, defined as max(|B/B̅-1|), where B̅ is the average of B(θ) over 0 to 2π, as a function of the relative permeability of the yoke. The cross points are values calculated by FEA from Fig. <ref>(e), and the dashed line is an extrapolation using the first three points, for which the FEA calculation is not yet limited by the meshing. It can be seen that with μ_r>10^4, the uniformity of B(θ) can be improved by over two orders of magnitude. For the SmCo rings shown in Fig. <ref>(b), if the eight peaks or valleys differ at the percentage level and the yoke permeability is at the 10^4 level, the asymmetry of B(θ) can be suppressed to the 10^-4 level, which is sufficient for the coil alignment adjustment. For the NdFeB magnet, since the surface flux density variation of the permanent disk along θ is much smaller, the asymmetry of the permanent magnet disk is not a limiting factor for obtaining a uniform B(θ) profile.
§.§.§ Magnetic Yoke
As discussed in the above section, high-permeability yokes are extremely important for eliminating permanent-magnet asymmetries and ensuring the uniformity of the magnetic field in the air gap. So far, three categories of yoke materials have been employed in constructing Kibble balance magnets: low-carbon steel, pure iron, and iron-nickel alloy (e.g. Supra50). Their relative permeabilities μ_r are respectively of the order of 10^3, 10^4, and 10^5, and the saturation fields are about 1.8 T, 1.5 T, and 1.2 T <cit.>. For tabletop systems, a high saturation field helps to reduce the yoke volume and hence the overall size of the magnet. Weighing these two factors, the Tsinghua Kibble balance magnet uses pure iron, DT4C, as the yoke material. The carbon fraction of DT4C is usually below 0.025%. Heat treatment can recover the permeability loss caused by the machining process, and DT4C requires deoxygenation annealing in a vacuum or hydrogen atmosphere. Here we measured the magnetic permeability of DT4C samples before and after annealing in vacuum.
The BH curve and magnetic permeability of the samples are measured by electromagnetic induction. The measurement principle is shown in Fig. <ref>(a). The samples are machined into rings, and two coils, an excitation coil with N_1=480 turns and an induction coil with N_2=280 turns, are wound on each ring. The primary coil is excited by the output of a signal generator at a low frequency, 0.1 Hz. The current through the primary coil is measured via a 10 Ω sampling resistor, R_s. The voltage drop across R_s and the induced voltage of the secondary coil are measured synchronously by two digital voltmeters (3458A, V_1 and V_2). The H field through the yoke ring is obtained by Ampere's law, i.e.
H = N_1 V_1/(l R_s),
where l=2π r_0 (r_0=25 mm is the mean radius of the ring). The induced voltage on the secondary coil is written as
V_2 = N_2 A (dB/dt),
where A=100 mm^2 is the cross-section area of the iron ring. The B field then can be calculated as
B = ∫_T V_2/(N_2 A) dt - B_0.
Note that B_0 is a constant that makes the average of the B field in a period T equal to zero.
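Reconstructing the B-H loop from the two sampled voltages amounts to a scaling and a numerical integration, as in the sketch below; the sampling rate fs and the array inputs are assumptions about the data acquisition, not details given in the text.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    N1, N2 = 480, 280            # excitation / induction turns
    Rs     = 10.0                # ohm, current-sampling resistor
    r0, A  = 25e-3, 100e-6       # m, m^2: ring mean radius and cross-section

    def bh_loop(v1, v2, fs):
        # v1, v2: voltages sampled at rate fs over an integer number of 0.1 Hz periods.
        t = np.arange(len(v1)) / fs
        H = N1 * v1 / (2 * np.pi * r0 * Rs)                       # Ampere's law
        B = cumulative_trapezoid(v2 / (N2 * A), t, initial=0.0)   # integrate induced EMF
        return H, B - B.mean()                                    # subtract offset B_0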
The measurement results, including the hysteresis curve, the BH curve, and the μ H curve, are shown in Fig. <ref>(b)-(d). As expected, a significant increase of permeability is observed after the heat treatment. The results of the FEA simulation show that along the main flux path, the magnetic field in the yoke varies from about 40 A/m to 200 A/m. Under such working status, the relative permeability μ_ r of the yoke is over 2× 10^4, which can well meet the requirement for realizing the designed target. One additional merit of using a high-permeability yoke is to achieve good magnetic shielding performance, offering a reduction of electromagnetic noise, as well as some systematic effects.
§ MACHINE AND ASSEMBLY
The substantial attractive force between the yoke and the permanent magnet renders the assembly of the magnet system a demanding task. Typically, the installation of the yoke and permanent components necessitates specialized tools or machinery, as referenced in <cit.>. In this paper, we outline a straightforward procedure for assembling the designed magnet system using widely available tools. We detail the attraction force between various components and provide a step-by-step guide for installing each segment. Our approach is grounded in the open-hardware philosophy, enabling experimenters to replicate the magnet system by following the provided assembly instructions.
§.§ Assembly Process
As previously noted, the magnet system is divided into upper and lower sections to facilitate coil maintenance. Fig. <ref> illustrates the different components contained within these two halves. Generally, the assembly process follows a bottom-to-top and inside-to-outside sequence. Initially, the upper and lower parts of the magnet are assembled separately. Subsequently, these two sections are combined. Given that the assembly processes for the lower and upper parts are identical, we will use the lower part as an example to detail the steps. As depicted in Fig. <ref>(b), the assembly procedure of each half is divided into 8 steps (S1-S8), listed as follows:
* Secure the cover yoke to the assembly platform. Utilize the six threading holes of the cover yoke to mount it on the bottom plate of the assembly cage shown in Fig. <ref>(a). The design of the assembly cage and its accessories is discussed in subsection <ref>.
* Installation of the permanent magnet. In this step, the permanent magnet is placed on a stainless steel bar. The diameter of the bar's end is larger than the inner hole of the permanent magnet but smaller than the diameter of the central hole of the cover yoke. This design allows the bar to support the magnet via the lifting rod in the assembly cage. After the permanent magnet is installed, the bar is removed from the bottom through the central hole of the yoke cover.
* Installation of the aluminum ring. The aluminum ring serves to secure the permanent magnet and enhance its alignment. By aligning the six primary holes (intended for the coil rods and optics) of the aluminum ring with those of the yoke cover, the centering of the permanent magnet is improved. Furthermore, the aluminum ring plays a crucial role in adjusting the concentricity between the inner and outer yokes, as detailed in step S7.
* Installation of the inner yoke. This step is similar to S2, with the primary difference being the diameter of the lifting bar. In this case, the bar is designed to support the inner yoke while also passing through the central hole of both the permanent magnet and the cover yoke.
* Installation of the aluminum centering bar. The aluminum bar with four different diameter levels is inserted into the central hole of the inner yoke, the permanent magnet, and the cover yoke. Once the aluminum bar is fully inserted, its outer edge is secured to the bottom of the cover yoke with six mini screws.
* Installation of the outer yoke. In the design, the outer diameter of the outer yoke is slightly larger than the outer diameter of the yoke cover. An aluminum ring, whose inner diameter is in between, is used to hold and guide the outer yoke from the suspension. It is important to insert three aluminum bars through the main holes for the isolation of the inner and outer yokes in case of a short circuit. When the outer yoke is approaching the cover yoke, a rough alignment of the screw holes should be carried out.
After the outer yoke touches the cover yoke, make sure three locking screws are in place but not tightened, and the three aluminum bars can be removed.
* Concentric adjustment of the inner and outer yokes. The screws locking the outer yoke and the cover yoke are kept loose, and three headless screws are inserted into the side threads of the lower part, spaced every 120 degrees. By adjusting the insertion depth of these screws in different directions, the concentricity of the inner and outer yokes can be tuned. The details of the adjustment are presented in subsection <ref>.
* Finalization of the outer yoke assembly. Once the concentricity of the inner and outer yokes is optimized, the locking screws can be tightened. The cover yoke can be unlocked and the outer aluminum ring can be removed.
The assembly and adjustment of the upper half of the magnet follow the same process as outlined above. Once both halves are installed, we proceed to the final step: combining the upper and lower parts. In this design, three long screws, labeled 1 in Fig. <ref>, pass through the upper outer yoke and are threaded into the lower outer yoke. It is important to note that misalignment of the two halves can, in the worst case, short-circuit the magnetic path, necessitating a guiding mechanism. Fig. <ref>(c) presents a mechanical design for opening and closing the upper and lower halves. Experimental tests validate the proposed concept of magnetic levitation for the upper part, as illustrated in the upper panel for the NdFeB system. The levitation is very stable, with a separation distance of approximately 40 mm, which offers enough space for operations such as inserting, removing, or modifying the coil. To close the magnet, a gentle downward push by hand is sufficient. As mentioned, since the attraction force (including gravity) in the closed state is 368 N and 280 N for the NdFeB and SmCo magnets respectively, three nuts are attached to the guide bars, and by turning these nuts the magnet can be reopened.
§.§ Lifting Force and Device
To counteract the substantial attractive force between the permanent magnets and the yokes, the use of a lifting device is required <cit.>. Selecting the appropriate capacity for the crane, along with a suitable weighing sensor, necessitates an evaluation of the maximum attractive force. The assembly force was analyzed by using FEA calculations. Since the yokes and permanent magnets move slowly during assembly, the impact of the eddy current effect is negligible. Consequently, it is appropriate to employ a static magnetic field model to analyze the force variation. The results, depicted in Fig. <ref>, indicate that the attractive force is significant only when the distance between the yoke and the permanent magnet is small. The attraction force decays quickly with the separation distance and is well below 0.2 kN at d=50 mm. In the other cases, the gravitational force on the component is the dominant force. Tab. <ref> presents the maximum values of the force during the assembly steps considering the gravity of each segment connected to the lifting rods.
Based on the simulation results, we designed the lifting system, depicted in Fig. <ref>(a). The system comprises a trestle, a retainer plate, a manual hoist, and metal hooks. To mitigate the attractive force generated by the permanent magnet, all components of the lifting system are constructed from non-ferromagnetic materials. The magnet system is secured to the retainer plate, which is centrally positioned on the trestle. This arrangement ensures that the attractive force between the yoke and the permanent magnet acts as an internal force within the lifting system, significantly reducing the tensile strength requirements for the trestle. The trestle is fabricated from 4080 aluminum profiles, with tensile and bending strengths exceeding 205 MPa and 157 MPa, respectively. The retainer plate is made from an aluminum plate, while the manual hoist and hooks are constructed from 304 stainless steel. During the assembly process, the maximum force encountered is approximately 5815 N (equivalent to 593 kg). Consequently, the selected manual hoist has a maximum lifting capacity of 1.5 t.
To check these numbers, the lifting force was measured with a tension meter during the installation. The maximum force observed is much lower than the theoretical values in Tab. <ref>. This is because the maximum force occurs at d=0 mm and drops quickly once the parts are separated, and the force measuring device can hardly capture the exact opening or closing moment at d=0 mm. Although the measured values are much lower, the theoretical values shown in Tab. <ref> remain the most important references for designing and realizing the lifting system.
§.§ Concentricity Adjustment of Inner and Outer Yokes
The target of a Kibble balance magnet system is to generate a one-dimensional radial magnetic field in the air gap: Ideally, the magnetic field should be uniform along the vertical and have a 1/r distribution along the radial direction. Since the magnetic flux density is inversely proportional to the gap width <cit.>, the concentricity of the inner and outer yokes can significantly affect the tangential uniformity of the radial magnetic field. Although, in theory, the decentering of inner and outer yokes does not affect the Bl value <cit.> and can be compensated by the coil alignment, in practice, the gap between coil former and yokes is very limited, and the coil may touch the yoke before the alignment is done. In this case, the concentricity of the inner and outer yokes must be adjusted.
Bielsa et al. proposed a concentricity optimization of the inner and outer yokes using a double capacitor sensor <cit.>. However, in our case, the mean radius of the air gap is much smaller and the range of the capacitor sensor becomes limited when measuring the inner surface of the outer yoke. To address this issue, we use an optical sensor approach: As shown in Fig. <ref>(a), two laser sensors (Micro-Epsilon 1420, with a measurement range of 10 mm and a measurement repeatability of 0.5 µm) are mounted on the same rotation stage. The measurement points are set respectively on the outer surface of the inner yoke and the inner surface of the outer yoke. After the stage is leveled and the two sensors are adjusted into their measurement range, the variation of the gap width between the inner and outer yokes is given as
Δ r = Δ l_2sinθ_2-Δ l_1sinθ_1,
where l_1 and l_2 are the distances measured by the two optical sensors shown in Fig. <ref>(a); Δ l_1 and Δ l_2 are the residuals after removing their average values; θ_1 and θ_2 are the sensor mounting angles with respect to the horizontal plane.
The first step of the measurement is to make the centers of the inner yoke and the rotation stage coaxial. The inner yoke is mechanically fixed and the horizontal position of the rotation stage is adjusted until the amplitude of the sinusoidal component with 2π period is minimal or comparable to its harmonics. The position of the rotation stage is then fixed. Now, by rotating the stage with the two optical sensors, the variations of the inner surface of the outer yoke, i.e. Δ r_o=Δ l_2sinθ_2, and of the outer surface of the inner yoke, Δ r_i=Δ l_1sinθ_1, can be obtained as functions of the rotation angle θ. Their difference, Δ r_o-Δ r_i, defines the concentricity of the inner and outer yokes.
As presented in assembly step S7, the screws locking the yoke cover and the outer yoke remain loose, and adjusting the side screws can push the aluminum ring, and hence the position of the outer yoke, in the backward direction. The adjustment and measurement iterations are repeated until Δ r_o-Δ r_i is flat (no considerable sinusoidal component is seen). Finally, the locking screws can be tightened to finish the concentricity adjustment.
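To make the iteration concrete, the following is a minimal sketch (not part of the original procedure) of how the residual eccentricity could be extracted from one stage rotation; the array names, the sampling of one full revolution, and the single-harmonic least-squares fit are illustrative assumptions.

import numpy as np

def residual_eccentricity(theta, l1, l2, theta1, theta2):
    # theta: stage rotation angles (rad); l1, l2: laser sensor readings (mm)
    # theta1, theta2: sensor mounting angles w.r.t. the horizontal plane (rad)
    dr_i = (l1 - l1.mean()) * np.sin(theta1)   # outer surface of the inner yoke
    dr_o = (l2 - l2.mean()) * np.sin(theta2)   # inner surface of the outer yoke
    gap = dr_o - dr_i                          # variation of the gap width
    # least-squares fit of the 2*pi-periodic component a*cos(theta)+b*sin(theta)+c
    A = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(A, gap, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), gap   # amplitude, direction, raw data

The side screws would then be driven against the fitted direction and the rotation repeated until the fitted amplitude is comparable to the higher harmonics.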
Fig. <ref>(b) and (c) present the final adjustment results for the upper and lower parts of our magnet system, respectively. The raw experimental data contain some interference spikes, which were found to be due to the machining quality of the yoke surface. To remove these spikes, a least squares fit is employed and the results are shown as dotted lines in Fig. <ref>(b) and (c).
It can be seen from the measurement results that the peak-to-peak value of Δ r_o-Δ r_i achieved is approximately 150 µm and 200 µm for the upper and lower parts, respectively.
The residual of Δ r_o-Δ r_i does not exhibit a significant symmetry. The asymmetric waveforms indicate that, after the adjustment, the main factor affecting the consistency of the air gap width is the yoke deformation caused by machining or assembly.
§ EXPERIMENTAL RESULTS AND DISCUSSIONS
§.§ Magnetic Profile Measurement
An easy method for measuring the magnetic profile is to use the gradient coil <cit.>. A gradient coil contains two identical coils that are separated with a vertical distance Δ z and are connected in series opposition. Moving the gradient coil in the magnetic field with a velocity v yields
(V_1(z)-V_2(z))/V_1(z) = (B(z)-B(z-Δ z))/B(z) ≈Δ z (dB(z)/dz)/B(z),
where V_1 and V_2 are the induced voltage of each coil. Hence, the profile B(z) is determined as
B(z)=B̅/(V̅_1Δ z)∫(V_1(z)-V_2(z))dz + B_0,
where B_0 is chosen to ensure that B(0)=B̅. As the variation noise of two coils is almost identical, the common-mode noise in the induced voltage can be well suppressed and a good signal-to-noise ratio is therefore achievable.
The parameters of the gradient coil used in the measurement are as follows: The main radius is 81.5 mm, Δ z=6 mm, and the number of turns of each coil is 400. The gradient coil is connected to a moving stage (PI M-413.2DG) through three aluminum rods, and the stage can drive the coil up and down with a constant velocity of 0.5 mm/s. The induced voltage of one coil, V_1(z), and the voltage difference of the two coils, V_1(z)-V_2(z), are simultaneously measured by two DVMs (Keysight 3458A).
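As an illustration of the profile reconstruction above, a possible numerical implementation is sketched below; the uniform, increasing z grid and the variable names are assumptions, and the snippet is not taken from the actual data processing.

import numpy as np

def magnetic_profile(z, V1, V12, dz=6e-3, B_bar=0.59):
    # z: coil positions (m, increasing); V1: voltage of one coil; V12 = V1 - V2
    # dz: vertical separation of the two coils (m); B_bar: flux density at z = 0 (T)
    V1_bar = np.mean(V1)
    # cumulative trapezoidal integral of (V1 - V2) over z
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (V12[1:] + V12[:-1]) * np.diff(z))))
    B = B_bar / (V1_bar * dz) * integral
    return B + (B_bar - np.interp(0.0, z, B))   # shift so that B(0) = B_bar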
Fig. <ref> presents the magnetic profile measurement result. Note that the profiles are relative, referenced to the magnetic flux density at the center point (B=0.59 T for the NdFeB magnet, and 0.45 T for the SmCo magnet). The red curve is the theoretical profile of a BIPM-type magnetic design without inner yoke shape compensation (parameters remain the same as the Tsinghua system). The black curve is the theoretical profile with the proposed yoke compensation (h_c=5 mm, δ_c=0.4 mm). The green curve shows an experimental measurement result of a proposed magnet system (No. THU-1). It can be seen that the measurement agrees well with the prediction in the lower range, z∈(-17,5) mm, while in the top range, z∈(5,15) mm, the measured result becomes higher than the theoretical value. This phenomenon results from the open/close surface: In our design, nickel coating is employed to prevent yoke rust. Although nickel has a considerable magnetic permeability, its value (μ_r is typically a few hundred) is much lower than that of the yoke. In addition, small air gaps always exist on the open/close surface. In this case, the magnetic reluctance for the upper permanent magnet flux going through the lower air gap (divided by the open/close surface) is increased. More flux then goes through the air gap above the open/close surface, yielding a higher magnetic flux density. To verify this conclusion, an FEA model with a tiny air gap added on the open/close surface is analyzed, and the result is shown as the blue curve in Fig. <ref>. The FEA result agrees well with the measurement for an air gap width of 140 µm. The uniformity of the magnetic profile can be improved by using a thinner coating and reducing the air gap on the close surface. The magenta curve in Fig. <ref> presents an example (No. THU-2) where the ripple is well suppressed.
It should be noted that the gradient coil may cause a fixed slope in the B(z) measurement result when the diameters of the two coils are not identical <cit.>. It is better to check the profile measurement result with a U/v measurement once the final coil is integrated into the system. It is also worth mentioning that the profile measurement result is closely related to the radius of the gradient coil, r_gc. In general, when close to the inner yoke, e.g. r_gc<r_a, the magnetic flux density at the two ends of the B(z) profile becomes higher, while for r_gc>r_a, the two ends of the profile drop. It is observed that the field uniformity in the measured profile in Fig. <ref> is lower than the predicted uniformity shown in Fig. <ref>(a). This discrepancy is exactly due to the use of a larger gradient coil radius (r_gc=81.5 mm) compared to the mean radius of the air gap (r_a=80 mm).
§.§ Alignment Test
As mentioned in Section <ref>, the magnet must provide sufficient space for the coil to reach equilibrium during alignment. Therefore, an alignment test is essential to confirm the existence of this equilibrium position before integrating the magnet into the Kibble system.
Fig. <ref>(a) shows an experimental setup for the coil alignment test. The coil is mounted on an xy stage through three rods and a spider. The connection between the coil suspension and the xy stage is a gimbal that allows the coil to move horizontally or rotate about x and y.
The suspension is first leveled horizontally, and before being aligned, the coil rotates about x and y when an AC current is injected into the coil. Optical sensors (Micro-Epsilon 1420) are used to measure the oscillation amplitude of the spider rotation about x and y, i.e. θ_x and θ_y. The test is to ensure that valleys can be reached when moving the stage along both the x and y axes.
Fig. <ref>(b) and (c) present an experimental result. Note that the frequency of the current excitation should be chosen such that the coil rotation is sensitive to the xy movement. In this test, the frequency of the current through the coil is set to 1 Hz. It can be seen from the measurement that both θ_x and θ_y reach a minimum, and therefore the test coil can reach an equilibrium where a good coil alignment is achievable.
§.§ Temperature Characteristic
The thermal dependence is an important parameter for Kibble balance magnetic circuits, and is especially crucial for tabletop systems <cit.>. Typically, the NdFeB temperature coefficient is approximately -1×10^-3 /K and that of the SmCo magnet is about -3×10^-4 /K. Although most Kibble balances to date choose the SmCo magnet system due to its low temperature coefficient, it is still very attractive to use the NdFeB magnet, as the magnetic field created here is over 30% stronger than that of the SmCo magnet. The magnetic field change due to environmental temperature drift can, in principle, be well eliminated by ABA or ABBA measurements. Instead, an important concern is the systematic effect caused by the noncontinuous coil ohmic heating, which cannot be removed in a conventional Kibble measurement scheme, see <cit.>. A stronger magnetic field, in this case, can lead to a reduction of the coil ohmic heating (if the same wiring is adopted) and of the current effect, which may compensate for or even lower the thermal-related bias.
The temperature coefficient of the magnet system can be measured in several different ways, and the most accurate result comes from measuring Bl in the final stage along with the temperature variation. Unsurprisingly, the measurement result should agree more or less with the values given by the permanent magnet manufacturers. It is believed that the thermal time constant, which is independent of the magnet type (NdFeB or SmCo) and mainly depends on the size and material of the segments forming the magnet system, is a more important parameter. Here a measurement of the thermal time constant and temperature coefficient of an NdFeB system is presented. As shown in the subplot of Fig. <ref>, the magnet system is placed in a temperature-controlled oven with a temperature control accuracy of ± 0.2 ^∘ C. Two temperature ramps, from 25 ^∘ C to 30 ^∘ C and from 30 ^∘ C to 35 ^∘ C, are carried out. A Gauss meter (Lakeshore Model 425) is used to measure the change of the air-gap magnetic flux density.
The experimental results are shown in Fig. <ref>. The oven reaches the targeted temperature (30 ^∘ C and 35 ^∘ C) within a few minutes, while the magnetic flux density takes about 10 hours to reach stability. The measurement results show that the relative change of the magnetic flux density for the two tests is almost identical. An exponential fit of the measurement result yields a time constant of 2 h and a temperature coefficient of about -1.1× 10^-3 /K.
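The exponential fit mentioned above can be reproduced along the following lines; this is only a sketch with synthetic data, and the 5 K step size, the noise level and the starting values are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def relative_drop(t, tau, alpha, dT=5.0):
    # relative change of B after a temperature step dT, with time constant tau (h)
    # and temperature coefficient alpha (1/K)
    return alpha * dT * (1.0 - np.exp(-t / tau))

t_hours = np.linspace(0.0, 15.0, 200)                      # placeholder time grid
dB_rel = relative_drop(t_hours, 2.0, -1.1e-3)              # synthetic measurement
dB_rel += 5e-6 * np.random.default_rng(0).standard_normal(t_hours.size)
(tau_fit, alpha_fit), _ = curve_fit(relative_drop, t_hours, dB_rel, p0=(1.0, -1e-3))
print(f"time constant = {tau_fit:.2f} h, temperature coefficient = {alpha_fit:.2e} /K")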
§.§ Bl and the Current Effect
With the magnet system finalized, the Bl value and the current effect <cit.> of the experiment can be evaluated. Here we take the Tsinghua system as an example: For the NdFeB version, the average magnetic flux density in the air gap is about 0.59 T. The inner and outer radii of the coil wiring section are 75.5 mm and 86 mm, and its height is 15 mm. The number of turns of each coil of the bifilar coil is 1360 (0.2 mm wire gauge) and the resistance of each coil is measured to be approximately 395 Ω. In this configuration, the Bl value given by each coil is over 400 Tm. If a conventional two-mode, two-phase measurement scheme is chosen, the Bl is doubled to over 800 Tm. A Bl value in the range of a few hundred Tm allows the experiment to achieve the lowest uncertainty for a kilogram mass realization <cit.>.
The additional magnetic field generated by the coil current reshapes the magnetic field distribution in the air gap and hence affects the weighing and velocity measurements (the velocity measurement is affected only when the one-mode scheme is adopted). There are a few approaches to determine the coil current effect, detailed in <cit.>. Here a direct measurement of the coil inductance is used. The coil is set to a fixed position z in the air gap of the magnet. A 200 Ω resistor (R_s) is connected in series with the coil, and the circuit is excited by a low-frequency current. The voltages across R_s and the coil are simultaneously sampled by two DVMs (Keysight 3458A). The amplitude and phase of the two measurements can be extracted, denoted as V_R, ϕ_R and V_L, ϕ_L for the resistor and the coil, respectively. The following equations allow solving for the coil resistance and coil inductance, i.e.
R_s^2/(R_L^2+(ω L)^2)=(V_R/V_L)^2,
ω L/R_L=tan(ϕ_L-ϕ_R),
where ω=2π f is the angular frequency of the applied current. Measuring the L value at different coil positions yields the L(z) curve.
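For illustration, the two relations can be inverted for R_L and L as sketched below; the function and argument names are assumptions and the snippet is not part of the measurement software.

import numpy as np

def coil_impedance(V_R, V_L, phi_R, phi_L, R_s=200.0, f=1.0):
    # V_R, V_L: voltage amplitudes across R_s and the coil; phi_R, phi_L: phases (rad)
    omega = 2.0 * np.pi * f
    t = np.tan(phi_L - phi_R)                       # = omega * L / R_L
    # |Z_L|^2 = (R_s * V_L / V_R)^2 = R_L^2 * (1 + t^2)
    R_L = R_s * (V_L / V_R) / np.sqrt(1.0 + t ** 2)
    L = t * R_L / omega
    return R_L, L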
Note that the L value required here should be a DC value. To remove the frequency dependence, the measurement was carried out with different current frequencies, f=1,2,3,4 Hz. Fig. <ref>(a) illustrates the mean inductance, L_mean, within a range of ± 17.5 mm as a function of the current frequency f. The measurement results indicate that the inductance varies linearly with f for frequencies up to 4 Hz. This linear relationship allows for the extrapolation of the inductance's DC value. Fig. <ref>(b) presents detailed measurements of L(z) at frequencies of 1, 2, and 3 Hz. A quadratic fit is applied to these measurements, and further, a linear extrapolation is used to determine the L(z) curve, see the red curve in Fig. <ref> (b). The quadratic fit gives: L(z)=-650.3 H/m^2z^2+1.3725 H. The constant term denotes the maximum L value where the coil is at the symmetrical center, z=0 mm. Using ∂ L/∂ z, the magnetic profile change due to the coil current can be calculated, presented in Fig. <ref>(c). As a comparison, the FEA result, obtained by F/I calculation, is also shown in Fig. <ref>(c). It can be seen that the measurement result agrees well with the FEA calculation.
§ CONCLUSION
How to develop a compact magnet system is an important task for tabletop Kibble balance experiments. A direct volume reduction of the conventional BIPM-type magnet may lead to a considerable increase in measurement uncertainty, and hence a careful optimization of the magnet system is required to balance the measurement uncertainty against the overall size. Here, we present the design and realization of a compact magnet system for the Tsinghua tabletop Kibble balance. Some notable features are achieved:
* The magnet presented is divided into upper and lower parts by an optimal open/close surface. When closed (separation distance d=0 mm), the two parts are tightened by an attractive magnetic force. When opened by a few millimeters, the magnetic force becomes repulsive, allowing the upper part to levitate robustly. This easy open/close operation facilitates the maintenance of the magnet system within the experiment. Additionally, the opening and closing forces are reduced to tens of kilograms, compared to the kN-level attraction force of conventional designs, enhancing operational convenience.
* The magnet system is optimized to achieve a sufficient Bl by extending the range of the one-dimensional radial magnetic field. The inner yoke shape is modified to improve magnetic field uniformity in the z direction, and circumferential uniformity is enhanced through optimization of the yoke material and adjustment of the concentricity between the inner and outer yokes. As a result, the Tsinghua system achieves a Bl over 400 Tm for a bifilar coil (800 Tm for a single coil) using 0.2 mm wire gauge.
* Following the open-hardware idea, details of the magnet assembly and adjustments are provided. A precision assembly of the magnet system was achieved by using very simple tools, and some readily available methods are presented for fine adjusting and characterizing the performance of the realized magnet system. This provides useful references for building such magnet systems.
In subsequent work, the realized magnet system will be integrated into the Tsinghua tabletop Kibble balance measurement, with careful evaluation of the related systematics. We believe that the proposed magnet system meets the high-accuracy requirements for kilogram-level mass realizations in Kibble balance experiments.
§ ACKNOWLEDGEMENT
The authors would like to thank our mechanical engineer Mr. Zhenyu Zhang for his help on the magnet design and assembly. Shisong Li would like to thank colleagues from the NIST Kibble balance group and the BIPM Kibble balance group for valuable discussions. Yongchao Ma would like to thank Mr. Jian Liu and Mrs. Zhilan Huang for their help during the assembly of the magnet system.
Kibble1976
B. P. Kibble, “A measurement of the gyromagnetic ratio of the proton by the strong field method,” in Atomic masses and fundamental constants 5. Springer, 1976, pp. 545–551.
cgpm2018
Resolution 1 of the 26th CGPM Conference, 2018.
fujii2016realization
K. Fujii, H. Bettin, P. Becker, et al, “Realization of the kilogram by the XRCD method,” Metrologia, vol. 53, no. 5, pp. A19–A45, 2016.
NRC
B. M. Wood, C. A. Sanchez, R. G. Green, et al, “A summary of the Planck constant determinations using the NRC Kibble balance,” Metrologia, vol. 54, no. 3, pp. 399–409, 2017.
NIST
D. Haddad, F. Seifert, L. S. Chao, et al, “Measurement of the Planck constant at the National Institute of Standards and Technology from 2015 to 2017,” Metrologia, vol. 54, no. 5, pp. 633–641, 2017.
NIST2
L. Chao, F. Seifert, D. Haddad, et al, “The performance of the KIBB-g1 tabletop Kibble balance at NIST,” Metrologia, vol. 57, no. 3, p. 035014, 2020.
METAS
A. Eichenberger, H. Baumann, A. Mortara, et al. “First realisation of the kilogram with the METAS Kibble balance,” Metrologia, vol. 59, no. 2, p. 025008, 2022.
BIPM
H. Fang, F. Bielsa, S. Li, et al, “The BIPM Kibble balance for realizing the kilogram definition,” Metrologia, vol. 57, p. 045009, 2020.
LNE
M. Thomas, D. Ziane, P. Pinot, et al, “A determination of the Planck constant using the LNE Kibble balance in air,” Metrologia, vol. 54, no. 4, pp. 468–480, 2017.
MSL
R. J. Hawke, M. T. Clarkson. “Position control for the MSL Kibble balance coil using a syringe pump,” ACTA IMEKO, vol. 11, no. 4, pp. 1-7, 2022.
NIM
Z. Li, Y. Bai, Y. Wang, et al, “Improvements of the NIM-2 Joule Balance Since 2020,” IEEE Trans. Instrum. Meas., vol. 72, pp. 1–7, 2023.
KRISS
D. Kim, M. Kim, M. Seo, et al, “Realization of the kilogram using the KRISS Kibble balance,” Metrologia, vol. 57, no. 5, p. 055006, 2020.
UME
H. Ahmedov, R. Orhan, and B. Korutlu, “UME Kibble balance operating in air,” Metrologia, vol. 60, no. 1, p. 015003, 2022.
PTB
C. Rothleitner, J. Schleichert, N. Rogge, et al, “The Planck-Balance—using a fixed value of the Planck constant to calibrate E1/E2-weights,” Meas. Sci. Technol., vol. 29, no. 7, p. 074003, 2018.
NPL
I. A. Robinson, “The architecture of the NPL next generation Kibble balance,” in IMEKO 24th TC3, 14th TC5, 6th TC16 and 5th TC22 International Conference, Cavtat-Dubrovnik, Croatia, 2022.
Stephan16
I. A. Robinson and S. Schlamminger, “The watt or Kibble balance: a technique for implementing the new SI definition of the unit of mass,” Metrologia, vol. 53, no. 5, pp. A46–A74, 2016.
haddad2016bridging
D. Haddad, F. Seifert, L. S. Chao, et al, “Bridging classical and quantum mechanics,” Metrologia, vol. 53, no. 5, pp. A83–A85, 2016.
li2022irony
S. Li and S. Schlamminger, “The irony of the magnet system for Kibble balances—a review,” Metrologia, vol. 59, no. 2, p. 022001, 2022.
schlamminger2013design
S. Schlamminger, “Design of the Permanent-Magnet System for NIST-4,” IEEE Trans. Instrum. Meas., vol. 62, no. 6, pp. 1524–1530, 2013.
li18
S. Li, M. Stock, F. Bielsa, et al, “Field analysis of a moving current-carrying coil in OMOP Kibble balances,” in 2018 International Applied Computational Electromagnetics Society Symposium (ACES). IEEE, 2018, pp. 1–2.
linonlinear
S. Li, Z. Zhang, and B. Han, “Nonlinear magnetic error evaluation of a two-mode watt balance experiment,” Metrologia, vol. 50, no. 5, pp. 482–489, 2013.
linonlinear2
S. Li, S. Schlamminger, and J. Pratt, “A nonlinearity in permanent-magnet systems used in watt balances,” Metrologia, vol. 51, no. 5, pp. 394–401, 2014.
hysteresis
S. Li, F. Bielsa, M. Stock, et al, “An Investigation of Magnetic Hysteresis Error in Kibble Balances,” IEEE Trans. Instrum. Meas., vol. 69, no. 8, pp. 5717-5726, 2020.
li2022
S. Li and S. Schlamminger, “Magnetic Uncertainties for Compact Kibble Balances: An Investigation,” IEEE Trans. Instrum. Meas., vol. 71, pp. 1–9, 2022.
NPL3
I. A. Robinson, J. Berry, C. Bull, et al, “Developing the next generation of NPL Kibble balances,” in 2018 Conference on Precision Electromagnetic Measurements (CPEM 2018). Paris, France, 2018, pp. 1–2.
chao2020performance
L. Chao, F. Seifert, D. Haddad, et al, “The performance of the KIBB-g1 tabletop Kibble balance at NIST,” Metrologia, vol. 57, no. 3, p. 035014, 2020.
li2022design
S. Li, W. Zhao, S. Huang, et al, “Design of the Tsinghua open-hardware tabletop Kibble balance,” in 2022 Conference on Precision Electromagnetic Measurements (CPEM 2022). Wellington, New Zealand, 2022, pp. 1–2.
li2023design
S. Li, Y. Ma, W. Zhao, et al, “Design of the Tsinghua Tabletop Kibble Balance,” IEEE Trans. Instrum. Meas., vol. 72, pp. 1-8, Art no. 1005208, 2023.
nimx
Z. Zhang, Q. He, Z. Li, et al, “Recent Development on the Joule Balance at NIM,” IEEE Trans. Instrum. Meas., vol. 60, no. 7, pp. 2533–2538, 2011.
nist3
S. Schlamminger, D. Haddad, F. Seifert, et al, “Determination of the Planck constant using a watt balance with a superconducting magnet system at the National Institute of Standards and Technology,” Metrologia, vol. 51, no. 2, pp. S15–S24, 2014.
BIPMmag2006
M. Stock, “Watt balances and the future of the kilogram,” INFOSIM Inform. Bull. Inter Amer. Metrol. Syst., vol. 9, pp. 9–13, 2006.
NISTmag
F. Seifert, A. Panna, S. Li, et al, “Construction, measurement, shimming, and performance of the NIST-4 magnet system,” IEEE Trans. Instrum. Meas., vol. 63, no. 12, pp. 3027–3038, 2014.
BIPMmag2017
S. Li, F. Bielsa, M. Stock, et al, “A permanent magnet system for Kibble balances,” Metrologia, vol. 54, no. 5, pp. 775–783, 2017.
you2016designing
Q. You, J. Xu, Z. Li, et al, “Designing Model and Optimization of the Permanent Magnet for Joule Balance NIM-2,” IEEE Trans. Instrum. Meas., vol. 66, no. 6, pp. 1289–1296, 2016.
diamagnetic2020
S. Li, S. Schlamminger, R. Marangoni, et al, “Resolution of the paradox of the diamagnetic force on the Kibble coil,” Sci. Rep., vol. 11, no. 1048, 2020.
ss20
S. Li, S. Schlamminger, and Q. Wang, “A Simple Improvement for Permanent Magnet Systems for Kibble Balances: More Flat Field at Almost No Cost,” IEEE Trans. Instrum. Meas., vol. 69, no. 10, pp. 7752-7760, 2020.
marangoni2019magnet
R. R. Marangoni, D. Haddad, F. Seifert, et al, “Magnet system for the Quantum Electro-Mechanical Metrology Suite,” IEEE Trans. Instrum. Meas., vol. 69, no. 8, pp. 5736-5744, 2020.
li2016discussion
S. Li, W. Zhao, and S. Huang, “A discussion of Bl conservation on a two dimensional magnetic field plane in watt balances,” Meas. Sci. Technol., vol. 27, no. 5, p. 051001, 2016.
bielsa2015alignment
F. Bielsa, Y. Lu, T. Lavergne, et al, “Alignment of the magnetic circuit of the BIPM watt balance,” Metrologia, vol. 52, no. 6, p. 775, 2015.
li17
S. Li, F. Bielsa, M. Stock, et al, “Coil-current effect in Kibble balances: analysis, measurement, and optimization,” Metrologia, vol. 55, no. 1, pp. 75–83, 2017.
On the mean field limit of consensus based methods
Marvin Koß, Simon Weissmann, Jakob Zech
===================================================
§ ABSTRACT
Consensus based optimization (CBO) employs a swarm of particles evolving as a system of stochastic differential equations (SDEs). Recently, it has been adapted to yield a derivative-free sampling method referred to as consensus based sampling (CBS). In this paper, we investigate the “mean field limit” of a class of consensus methods, including CBO and CBS. This limit allows characterizing the system's behavior as the number of particles approaches infinity. Building upon prior work such as <cit.>, we establish the existence of a unique, strong solution for these finite-particle SDEs. We further provide uniform moment estimates, which allow us to derive a Fokker-Planck equation in the mean-field limit. Finally, we prove that the limiting McKean-Vlasov type SDE related to the Fokker-Planck equation admits a unique solution.
§ INTRODUCTION
Consensus based optimization (CBO) methods comprise a class of stochastic interacting particle systems utilized as metaheuristic optimization techniques for non-convex and high-dimensional optimization problems. Metaheuristic methods, which aim to design efficient algorithms for solving challenging optimization problems effectively, have garnered significant attention in recent years <cit.>. These methods can be tailored to a wide array of optimization challenges, rendering them suitable for diverse applications, including machine learning, data science and engineering, among others.
Interacting particle optimization methods, e.g. <cit.>,
blend the exploration of complex landscapes inherent to the underlying cost function with the exploitation of experiential knowledge stored within the particle system. Through interactions among the particles, these methods have demonstrated promising success in avoiding local minima and discovering global solutions.
At
its core, CBO, as introduced in <cit.>, can be motivated as a system of multiple agents that collaboratively work toward finding the optimal solution, with the ultimate goal of reaching a consensus. More precisely, it considers the minimization problem
min_x∈^d f(x),
where f:^d→ℝ is a given cost function. The stochastic dynamical system employed by CBO relies solely on the evaluation of this cost function and remains derivative-free. In other words, the scheme does not involve the computation of (potentially costly or unobtainable) gradients.
In comparison to other particle swarm optimization methods, CBO also avoids the need for computing min_j=1,…,J f(x^j) across the particle system (x^j)_j=1,…,J. Instead, it introduces a smooth approximation through a weighted average over the particle system. The weight function depends on f and a tuning parameter β>0, and is defined by
ω_β^f: ℝ^d →ℝ,
x↦exp(-β f(x)).
By construction, the weight function gives more weight to x∈^d with low value of the cost
function f(x). We define
ℳ_β: 𝒫 (^d) →^d with
ℳ_β(η):= ∫_^dx e^-β f(x)dη(x)/∫_^de^-β f(x)dη(x)
as the operator mapping a probability measure η∈𝒫(^d) to its weighted average. The weighted average of the interacting particle system (x^j)_j=1,…,J can then be computed as
ℳ_β(η^J)= 1/∑_j=1^J e^-β f(x^j)∑_j=1^J x^j e^-β f(x^j) ,
where η^J=1/J∑_j=1^J δ_x^j∈𝒫(^d) denotes the empirical measure over the particle system. In the limit for β→∞ one may exploit the fact that
<cit.>
lim_β→∞ -1/βlog∫_^dω_β^f(x)dη(x) = inf_x∈supp(η) f(x)
to justify the smooth approximation
ℳ_β(η^J)≈argmin_x^j,J, j=1,…,J f(x^j,J).
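For illustration, the weighted average ℳ_β(η^J) of a finite ensemble can be evaluated with a numerically stable softmax; the following sketch assumes a generic vectorized cost function f and is not code from the cited works.

import numpy as np

def weighted_mean(X, f, beta):
    # X: ensemble of shape (J, d); f(X): cost values of shape (J,)
    logw = -beta * f(X)
    w = np.exp(logw - logw.max())   # subtract the maximum for numerical stability
    w /= w.sum()
    return w @ X                    # convex combination of the particles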
CBO uses this idea to evolve a swarm of J particles (θ^j,J_t)_j=1,…,J according to competing terms for both local exploration around the current optimum of the particles and global exploration away from it, where the current optimum is approximated through ℳ_β(ρ^J_t), with ρ^J_t:=1/J∑_j=1^J δ_θ^j,J_t.
Local exploration is thus governed by moving the particles to the weighted average defined in (<ref>), while global exploration is incorporated through random pertubation depending on the distance of the particles from it. As introduced in <cit.>, the interacting particle system of CBO is described as a coupled system of stochastic differential equations (SDEs) of the form
dθ_t^j,J = -(θ_t^j,J-ℳ_β(ρ_t^J)) dt + λ^-1 diag_d(|θ_t^j,J-ℳ_β(ρ_t^J)|) dW_t^j
with j=1,…,J and ensemble size J≥2, where (W_t^j)_t∈[0,T] are independent Brownian motions in ^d. Here,
for x=(x_1,…,x_d)^⊤∈^d we write
diag_d(x) = diag(x_1,…,x_d)∈^d× d ,
to denote the corresponding diagonal matrix. We assume that the initial ensemble is drawn as iid samples of some initial distribution ρ_0∈𝒫(^d).
The heuristic behind the considered system of SDEs is that the particles will meet a consensus in ρ^J_t which can potentially lead to attaining the global optimum of f.
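For illustration, a simple Euler-Maruyama discretization of (<ref>) could look as follows, reusing the weighted_mean sketch above; the step size and the explicit scheme are assumptions and not prescribed by the references.

import numpy as np

def cbo_step(X, f, beta, lam, dt, rng):
    # one explicit Euler-Maruyama step for the ensemble X of shape (J, d)
    m = weighted_mean(X, f, beta)                    # consensus point M_beta(rho^J)
    noise = rng.standard_normal(X.shape)
    # componentwise (anisotropic) diffusion scaled by the distance to the consensus point
    return X - (X - m) * dt + (1.0 / lam) * np.abs(X - m) * np.sqrt(dt) * noise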
The existing theoretical convergence analysis of the scheme is mainly characterized by the system's behavior in the large ensemble size limit (J→∞), known as the mean field limit. In the mean field limit, CBO can be described by a McKean-Vlasov type SDE
of the form
dθ̅_t = -(θ̅_t-ℳ_β(ρ_t)) dt + λ^-1 diag_d(|θ̅_t-ℳ_β(ρ_t)|) dW_t
where λ>0 is fixed,
(W_t)_t∈[0,T] is a Brownian motion in ^d, and for each t≥0, ρ_t denotes the law of the solution θ̅_t itself. Through Itô's formula, the evolution of the law (ρ_t)_t∈[0,T] can be characterized as a weak solution of the Fokker-Planck equation
∂ρ/∂ t = ∇· ((θ-ℳ_β(ρ))ρ + 1/(2λ) diag_d(|θ - ℳ_β(ρ)|^2)∇ρ ),
with initial condition ρ_0. Both of these equations have been analyzed in terms of well-posedness in <cit.>. While the convergence behavior of CBO is characterized through the mean field limit (<ref>) and (<ref>), the practical implementation is based on the particle approximation (<ref>).
A convergence analysis of (<ref>) towards the mean field limit (<ref>) for J→∞ has been provided in <cit.>.
§.§ General class of consensus based methods
Recently, in <cit.>, the authors introduced a more general class of consensus based methods that allow the scheme to be applied as both an optimization
and a sampling method.
For optimization, the particle system may reach a consensus at the global minimum of the considered cost function f, i.e. all particles coalesce in the minimum.
For sampling,
the consensus is characterized as Gaussian approximation of the stationary distribution ρ_∗∝exp(-f) of the considered mean field equation. As such the method lends itself for use in Bayesian inference, to generate approximate samples from a posterior distribution.
Besides the weighted mean ℳ_β, the proposed formulation relies on the weighted covariance matrix 𝒞_β: 𝒫(^d)→^d× d defined as
𝒞_β(η) :=∫_^d(x-ℳ_β(η))⊗ (x-ℳ_β(η)) e^-β f(x)dη(x)/∫_^de^-β f(x)dη(x) .
In the finite particle regime, the proposed algorithm can be written in the form
dθ_t^j,J = -(θ_t^j,J-ℳ_β(ρ_t^J))dt + √(2λ^-1𝒞_β(ρ_t^J)) dW_t^j,
for j=1,…,J, where (W_t^j)_t∈[0,T] again denote independent Brownian motions. The corresponding mean field limit as J→∞ can again be written as SDE of McKean-Vlasov type
dθ̅_t = -(θ̅_t-ℳ_β(ρ_t))dt + √(2λ^-1𝒞_β(ρ_t)) dW_t,
where for each t≥0, ρ_t = ℙ^θ̅_t denotes the law
of θ̅_t. One fundamental advantage of the considered formulation
is that the dynamical system remains invariant under affine
transformations. This property is particularly beneficial for sampling
methods based on Markov chain Monte Carlo methods, see, e.g.,
<cit.>.
Depending on the choice of λ, the authors identify the scheme as an optimization method (λ=1) or a sampling method (λ = (1+β)^-1). This observation has been supported by a detailed theoretical analysis of the mean field limiting system.
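A discretized sketch of the particle system (<ref>) is given below; the explicit Euler-Maruyama step, the Cholesky factor as matrix square root and the small regularization of the covariance are illustrative assumptions rather than the scheme used in the cited works. Consistent with the distinction above, λ=1 would correspond to the optimization regime and λ=(1+β)^-1 to the sampling regime.

import numpy as np

def cbs_step(X, f, beta, lam, dt, rng):
    # one explicit Euler-Maruyama step of the consensus based dynamics, X of shape (J, d)
    J, d = X.shape
    logw = -beta * f(X)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    m = w @ X                                        # weighted mean M_beta(rho^J)
    Xc = X - m
    C = (w[:, None] * Xc).T @ Xc                     # weighted covariance C_beta(rho^J)
    root = np.linalg.cholesky(C + 1e-12 * np.eye(d)) # square root of the covariance
    xi = rng.standard_normal((J, d))
    return X - Xc * dt + np.sqrt(2.0 * dt / lam) * xi @ root.T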
However, the proposed reformulation of consensus based methods in the form of (<ref>) leaves open questions regarding the well-posedness of the finite particle system as well as the well-definedness of the limit J→∞. The goal of the present paper is to address this issue.
§.§ Related Work
Various particle based optimizers have been proposed, including Particle Swarm Optimization <cit.> and Consensus Based Optimization <cit.>, for which analyses of the mean field regime have been done, respectively, by <cit.> and <cit.>.
For CBO in particular, an analysis in the finite particle regime has been done in <cit.>. The authors in <cit.> modify CBO for high-dimensional problems, <cit.> adapt it and other Bayesian inference algorithms for gradient inference, and <cit.> apply CBO to multi-objective optimization. An overview of the research on CBO can be found for example in <cit.>.
A recently proposed variant is Consensus Based Sampling (CBS) <cit.>, which is amenable both to sampling and optimization and has successfully been applied to rare event estimation in <cit.>.
Moreover, a kernelized variant has been proposed by <cit.>, which is able to deal with multimodal distributions and objective functions with multiple global minima, respectively. When it comes to the implementation of CBS, the bias due to discretization and Gaussian approximation can be eliminated using a Metropolization step as proposed by <cit.>.
Upon completion of this manuscript we were made aware of the recent study <cit.>, which also analyzes convergence to the mean-field limit for both CBO and CBS. Specifically, in Thm. 2.6 & 2.12 of <cit.> using a
synchronous coupling approach they show pathwise Wasserstein p-chaos <cit.>, which actually implies our pointwise empirical propagation of chaos <cit.>.
Our independent analysis, conceived concurrently with theirs,
differs in that we employ a compactness argument for the proof, considering weak convergence of the empirical measures rather than
a coupling approach.
§.§ Our contribution
We make the following contributions: In the finite particle regime, we make use of a stochastic Lyapunov function in order to
show well-posedness of the system of SDEs (<ref>); i.e.
we prove existence of unique and strong solutions. For the mean field limit, we verify tightness of the empirical measures along the lines of <cit.>, and provide a compactness argument which closely follows the arguments in <cit.>. These arguments imply weak convergence of the empirical measures and, using Skorokhod's Representation, ℙ-a.s. convergence on some appropriate probability space.
Lastly, we adopt the use of the Leray-Schauder fixed point theorem from <cit.> to verify well-posedness of the McKean-Vlasov type SDE (<ref>) in the form of existence of a unique solution.
Structure The paper is structured as follows. In Section 2 we investigate the well-posedness of the finite particle SDEs. In Section 3 we investigate the mean field limit of the system of SDEs, by first verifying a weak solution of the Fokker Planck Equation exists, and then continuing to show a unique solution to the associated McKean-Vlasov limiting SDE exists. We only sketch proofs in the main part and refer to the supplementary material for some more results and proofs of the Lemmas and Theorems given in the main part.
Notation
We write ℙ^X for the law of a random variable X.
For a vector x∈^d, we write |x| for its Euclidean norm,
while
‖C‖_F=√(tr(CC^⊤))=√(∑_i=1^d∑_j=1^d (C_ij)^2) denotes
the
Frobenius norm of a matrix C∈^d× d and ‖C‖_p=(tr((√(C^*C))^p))^1/p its Schatten p-norm. Note that the Frobenius norm is compatible with the Euclidean norm.
We call the space of positive semi-definite d× d matrices 𝕊^d_≥.
Throughout the article, we write particles as θ_t^j,J to make explicit the dependence on the number of particles.
We furthermore write (^d) for the space of probability measures on ^d, and for p∈ℕ, _p(^d)
is the space of probability measures on ^d with finite p-th moment.
We write C(^d) for the space of continuous functions from ^d→,
C^p(^d) for the space of
p∈ℕ times continuously differentiable functions from ^d→,
C_c^p(^d) for
compactly supported functions in C^p(^d), and
C_b(^d) for bounded functions in C(^d).
Additionally, we'll require the space C([0,T];^d)
of continuous functions from [0,T]→^d.
For p≥ 1 we use L^p(^d,ρ) (or L^p(ρ))
to denote the usual Lebesgue spaces of functions from ^d→ℝ.
§ WELL-POSEDNESS OF THE PARTICLE APPROXIMATION
Before considering the mean-field limit analysis, we
firstly ensure that the finite particle SDE (<ref>) admits a unique and strong solution. This is
the primary concern of this section.
Following
<cit.>,
throughout this paper
we make
the following assumptions on the
cost function:
The cost function f:^d→ satisfies:
* There exists a constant Lip(f)>0 such that for all x,y ∈ℝ^d it holds
|f(x)-f(y)| ≤Lip(f)·(|x|+|y|)·|x-y|.
* The cost function f is bounded from below with -∞<f_*:=inf_x∈^d f(x), and there exists a constant c_u>0 such that for all x∈ℝ^d it holds
f(x)-f_* ≤ c_u(1+|x|^2) .
* There exist constants c_l, M >0 such that for all x ∈ℝ^d with |x|>M it holds
f(x)-f_* ≥ c_l|x|^2 .
We make a further assumption on the finiteness of the sixth moment of the initial measure.
The initial measure ρ_0 satisfies
∫_ℝ^d|x|^6dρ_0(x) < ∞,
that is ρ_0∈𝒫_6(ℝ^d).
For a fixed weight parameter β>0, in the following we
use the operator (cp. (<ref>))
L_β : 𝒫(^d) →𝒫(^d)
η↦ω_β^f η/‖ω_β^f‖_L^1(η)
which maps a probability measure η to the reweighted probability
measure denoted by L_βη. Note that
ℳ_β(η)=𝔼_x∼
L_βη[x]. We next state two lemmas that are used
throughout all parts of the mean field limit proof, and in the proof
of well-posedness, too. The first Lemma has been proven in
<cit.> and states that second moments of
reweighted probability measures L_βμ can always be bounded by
the second moments of μ∈𝒫_2(^d).
Let f satisfy Assumption <ref> and
let μ∈_2(^d).
Then
|ℳ_β (μ)|^2≤∫_^d |x|^2 dL_βμ(x) ≤ b_1 + b_2∫_^d|x|^2dμ(x) ,
where
b_1 := M^2+b_2,
b_2 := (2c_u/c_l)(1+1/(β c_l M^2))
are positive reals
depending on f and β, but not on μ.
The following result extends the estimate provided by Lemma <ref> to the weighted covariance matrix. This property is crucial in our considered analysis, since the diffusion is given by 𝒞_β(ρ_t^J) and 𝒞_β(ρ_t) respectively.
Let f satisfy Assumption <ref>
and μ∈_2(^d).
Then
max(‖𝒞_β(μ)‖_F,‖√(𝒞_β(μ))‖^2_F ) ≤ b_1 + b_2 ∫_ℝ^d |x|^2 dμ(x) ,
where b_1, b_2>0 are the constants from Lemma <ref>.
We now provide global existence of a unique strong solution for the coupled system of SDEs (<ref>) via stochastic Lyapunov theory. We refer the reader to <cit.>. Let us rewrite the system of d· J coupled SDEs (<ref>) as a single SDE in the joint state space ×_j=1^J ^d of the form
dθ_t = F(θ_t) dt + G(θ_t) dW_t,
where θ_t = (θ_t^j)_j=1,…,J∈^J· d × 1, W_t = (W_t^j)_j=1,…,J∈^J· d× 1 and for x=(x^1,…,x^J)^⊤∈^J· d we defined
F(x) = (ℳ_β(η^J)-x^j)_j=1,…,J∈^J· d× 1, G(x) = (√(2λ^-1𝒞_β(η^J)))_j=1,…,J∈^J· d× J· d,
with empirical measure η^J := 1/J∑_j=1^J δ_x^j and (B_j)_j=1,…,J denoting a block diagonal matrix with blocks B_j, j=1,…,J.
Moreover, let (ℱ_t^J)_t≥0 = (σ(θ_0,W_s,s≤ t))_t≥0 be the canonical filtration generated by the Brownian motion (W_t)_t≥0 and the initial state θ_0.
In order to ensure unique existence of strong solutions for the SDE (<ref>), following <cit.>, we need to construct a function V∈ C^2(^J· d) satisfying
ℒV(x) := ∇ V(x)· F(x)+1/2 tr(G^⊤(x)∇^2V(x)G(x))≤ cV(x),
for some c>0 and
inf_|x|>R V(x) →∞ for R→∞.
Here, ℒ is the generator of the Markov process
(θ_t)_t∈[0,T]. This leads us to our first main result, in
particular the one of well-posedness of the finite particle SDEs:
Let f satisfy Assumption <ref> and let θ_0=(θ_0^j)_j=1,…,J be ℱ_0^J-measurable maps θ_0^j:Ω→^d. For all T≥0 there exists a unique strong solution (θ_t)_t∈[0,T] (up to ℙ-indistinguishability) of
the system of coupled SDEs (<ref>).
For the proof of Theorem <ref> we construct the Lyapunov
function
V(x):=1/2J∑_j=1^J|x^j|^2
and verify the two
conditions
in (<ref>) using the previous two lemmas. Note that
the limiting condition (<ref>) is obviously
satisfied. Beside the existence of unique strong solutions, we also
derive estimates that are uniform in J for the second,
fourth and sixth moments of the particle system in
Section <ref>. These estimates are crucial to
derive the mean field limit.
§ MEAN-FIELD LIMIT
Taking the limit of the number of particles J→∞ in (<ref>), we pass to the mean field limit of the dynamical system. This limit is given by the following SDE of McKean-Vlasov type:
dθ̅_t = -(θ̅_t-ℳ_β(ρ_t))dt + √(2λ^-1𝒞_β(ρ_t))dW_t,
where for each t≥0, ρ_t = ℙ^θ̅_t denotes the law of θ̅_t. Deriving the mean field limit in this setting is challenging, as we have nonlinear and nonlocal terms ℳ_β(ρ_t) and √(𝒞_β(ρ_t)) arising in the drift and diffusion, which depend on the law of the solution itself. By applying Itô's formula, one however expects that the law of the solution of (<ref>) satisfies the Fokker-Planck Equation
∂ρ/∂ t = ∇· ((θ-ℳ_β(ρ))ρ + λ^-1𝒞_β(ρ)∇ρ ).
We follow the arguments in
<cit.>, where the mean field limit is obtained for the system of SDEs (<ref>). For each φ∈ C_c^2(^d) and t∈[0,T],
we define the functional 𝔽_φ,t on C([0,T];𝒫(^d)) as
𝔽_φ,t(ρ) := ∫_^dφ(x) dρ_t(x) - ∫_^dφ(x) dρ_0(x) + ∫_0^t ∫_^dℒφ(x) dρ_s(x) ds
= ∫_^dφ(x) dρ_t(x) - ∫_^dφ(x) dρ_0(x) + ∫_0^t ∫_^d(x - ℳ_β(ρ_s))^⊤∇φ(x) dρ_s(x) ds
- ∫_0^t ∑_i=1^d∑_k=1^d∫_^d(λ^-1𝒞_β(ρ_s))_ik∂^2/∂ x_i∂ x_kφ(x) dρ_s(x) ds .
We then provide the definition of a weak solution to the FPE (<ref>) as defined in <cit.> by ρ∈ C([0,T];𝒫_2(^d)) satisfying the two conditions
* For all ϕ∈C_b(^d) and t_n→ t it holds
∫_^dϕ(x)dρ_t_n(x) →∫_^dϕ(x)dρ_t(x)
* For all φ∈C_c^2(^d) and t∈[0,T] holds
𝔽_φ,t(ρ) = 0,
By this definition of weak solution we restrict to elements in C([0,T];𝒫(^d)) limiting the type of convergence result to be expected. We make the following distinction between pointwise and pathwise convergence:
<cit.>
Let ρ^J:=1/J∑_j=1^Jδ_θ^j,J∈𝒫(C([0,T], ^d)) be the empirical measure for a particle stochastic process {(θ^j,J)_t∈[0,T]}_j=1^J of ensemble size J
and let e_t:C([0,T],^d)→^d,f↦ f(t) be the evaluation map. We define the time marginals ρ_t^J:=(e_t)_#ρ^J=ρ^J(e_t^-1(·)). Then ℙ^ρ^J∈𝒫(𝒫(C([0,T],^d))) and (ℙ^ρ^J_t)_t∈[0,T]∈ C([0,T],𝒫(𝒫(^d))) and we define the following types of convergence:
* We say {(ℙ^ρ^J_t)_t∈[0,T]}_J∈ℕ⊂ C([0,T],𝒫(𝒫(^d))) converges pointwise, if there exists (ρ_t)_t∈[0,T]∈ C([0,T],𝒫(^d)) such that ℙ^ρ^J_t→δ_ρ_t in 𝒫(𝒫(^d)) as J→∞, with δ_ρ_t being a Dirac measure, for every t∈[0,T],
* We say {ℙ^ρ^J}_J∈ℕ⊂𝒫(𝒫(C([0,T],^d))) converges pathwise, if there exists ρ∈𝒫(C([0,T],^d)) such that ℙ^ρ^J→δ_ρ as J→∞, and δ_ρ is a Dirac measure in 𝒫(𝒫(C([0,T],^d))).
The convergence is weak convergence. We call elements of C([0,T],𝒫(^d)) measure flows.
Pathwise convergence is stronger than pointwise convergence in the sense that pathwise implies pointwise, but not vice versa. We begin the proof of our following main result, Theorem <ref>, by working in 𝒫(𝒫(C([0,T];^d))) in steps 1 and 2, which take place in Section <ref>, and then project to C([0,T];𝒫(𝒫(^d))) for steps 3 and 4 (transferring the previously obtained weak convergence to the time marginals using <cit.>), identifying the limit of the sequences in 𝒫(𝒫(^d))
for all t∈[0,T]. Therefore, our final result yields pointwise convergence.
In order to guarantee uniqueness of the FPE (<ref>), we make the following assumption on the solution of the corresponding McKean SDE (<ref>):
Let (ρ_t)_t∈[0,T] be the solution to the McKean SDE (<ref>). We assume that there exists σ̅>0 such that
𝒞_β(ρ_t) - σ̅ Id≻ 0
for all t∈[0,T].
We note that for Gaussian initial distributions ρ_0 = 𝒩(m_0,C_0), Assumption <ref> is satisfied for both CBO (λ=1) and CBS (λ = (1+β)^-1) <cit.>. Moreover, it is worth mentioning that in Theorem <ref>, we will verify the existence of a strong solution to the McKean SDE (<ref>).
We are now ready to formulate our main result, which is analogous to <cit.>. In particular, we are able to extend the mean-field limit analysis for the system (<ref>) presented in <cit.> to the stochastic particle system driven by (<ref>).
Let Assumptions <ref>, <ref> and <ref> hold and for T>0 and J≥2 let (θ^j,J_t)_t∈[0,T] be the unique solution of (<ref>) with empirical measures ρ_t^J=1/J∑_j=1^Jδ_θ_t^j,J, t∈[0,T], and ρ_0^⊗ J-distributed initial data (θ^j,J_0)_j=1,…,J.
Then {(ℙ^ρ^J_t)_t∈[0,T]}_J∈ℕ converges pointwise to (δ_ρ_t)_t∈[0,T], where (ρ_t)_t∈[0,T]∈ C([0,T];𝒫(^D)) is the (deterministic) unique weak solution of the FPE (<ref>).
The proof of Theorem <ref> follows a compactness argument for verifying existence of the weak solution to the FPE and
can be sketched as follows:
* We show in Lemma <ref> that the sequence of measures 𝔉 = {ℙ^ρ^J}_J∈ℕ is tight in 𝒫(𝒫(C([0,T];^D))).
* Prohorov's Theorem implies that 𝔉 is weakly relatively sequentially compact.
* If a subsequence {ρ^J_k}_k∈ℕ is convergent in distribution[Equivalent to weak convergence of the ℙ^ρ^J_k.], the limit of {(ρ^J_k_t)_t∈[0,T]}_k∈ℕ solves the FPE weakly, which we show in Theorem <ref>. In this step we employ Skorokhod Representations of the ℙ^ρ^J_k_t and work with ℙ_t-a.s. convergence on some appropriate space.
* Corollary <ref> implies that the limits of all {(ℙ^ρ^J_k_t)_t∈[0,T]}_k∈ℕ for weakly convergent subsequences {ℙ^ρ^J_k}_k∈ℕ of 𝔉 are equal.
* Because every subsequence has a weakly convergent subsequence, and all convergent subsequences attain the same limit, so does the entire sequence.
This line of arguments is in principle analogous to
<cit.>.
Nevertheless, our analysis significantly deviates from this work as we
consider a complete covariance matrix rather than a diagonal diffusion term. This results in more intricate computations and estimations.
We proceed with the first step of the above procedure, which is proving tightness of the sequence of laws, for which the following moment estimates and propagation of chaos <cit.> will be instrumental.
§.§ Moment Estimates
We now provide some moment estimates, which are fundamental to multiple steps of the compactness argument we present for Theorem <ref>. These estimates include the second, fourth and sixth moments along the lines of <cit.>.
Let θ_0 = (θ_0^j)_j=1,…,J be ℱ_0^J-measurable such that ρ_0 satisfies Assumption <ref>. Then, for p=1,2,3 there exists a constant K_p(T)>0, depending on T and p but independent of J, such that
sup_t∈[0,T]𝔼[∫_^d|x|^2p dρ_t^J(x)]≤ K_p(T).
§.§ Tightness of the empirical measures
The following theorem provides the tightness of the empirical measures using the criteria of Aldous' <cit.>, which we recall in the supplementary material in Theorem <ref>. We adapt the strategy of the proof of <cit.>. Tightness ensures the particles do not tend to infinity with notable probability and is fundamental to the compactness argument.
Let f satisfy Assumption <ref> and let ρ_0 satisfy Assumption <ref>. Furthermore, let {(θ^j,J)_t∈[0,T]}_j=1^J be the unique solution to (<ref>) with ρ_0^⊗ J-distributed {θ^j,J_0}_j=1^J. Then 𝔉 ={ℙ^ρ^J}_J∈ℕ is tight in 𝒫(𝒫(C([0,T];^D))).
§.§ Convergence of the empirical measures
Due to Prokhorov’s theorem <cit.> it is a direct consequence of Lemma <ref> that there exists a subsequence
{ℙ^ρ^J_k}_k∈ℕ which converges weakly (i.e. {ρ^J_k}_k∈ℕ converges in distribution) to some random measure ρ:Ω→𝒫(C([0,T];^D)). By Skorokhod’s lemma <cit.> we then find a common probability space (Ω̃,ℱ̃,ℙ̃) on which {ρ^J_k}_k∈ℕ converges ℙ̃-almost surely as random variable to a random variable ρ:Ω̃→𝒫(C([0,T];^D)) with values in 𝒫(C([0,T];^D)).
It is important to note that at this point in our overall argument we
project from 𝒫(C([0,T],^d)) to C([0,T],𝒫(^d)) by use of the map
Π : 𝒫(C([0,T], ^d)) → C([0,T],𝒫(^d))
ρ ↦ (ρ_t)_t∈[0,T]
defined in <cit.> and work with the measure flows
ρ^J:=Π(ρ^J)=(ρ^J_t)_t∈[0,T] ,
where the time marginals ρ_t^J are as in Definition <ref>.
This
enables us to verify the
two pointwise convergences described below. The weak convergence of the time marginals is then obtained via <cit.>.
In continuing with our compactness argument for the unique solvability of the FPE, we now verify condition (<ref>) for the limit of a convergent subsequence {ρ^J_k}_k∈ℕ (which we simply denote by the full sequence {ρ^J}_J∈ℕ)
using L^1- and L^2-convergence with respect to the probability measure ℙ̃ on the common probability space (Ω̃,ℱ̃,ℙ̃), i.e. ∀ t∈[0,T] and arbitrary φ∈ C_c^2(^d):
* 𝔽_φ,t(ρ^J) → 0 in L^2(ℙ̃) as J→∞,
* 𝔽_φ,t(ρ^J) →𝔽_φ,t(ρ) in L^1(ℙ̃) as J→∞.
Since the limits are ℙ̃-almost surely unique, we obtain that the limiting random measure flow ρ satisfies 𝔽_φ,t(ρ)=0, ℙ̃-almost surely.
These two points are shown in the following two Lemmas.
The first point is shown using
L^2-convergence.
By the reasoning in <cit.>, we have the following result:
Let f satisfy Assumption <ref> and let ρ_0 satisfy Assumption <ref>. For J∈ℕ, assume that {(_t)_t∈[0,T]}_j=1,…,J is the unique strong solution to the particle system (<ref>) with ρ_0^⊗ J-distributed initial data {_0}_j=1,…,J. For the projections {ρ^J}_J∈ℕ of the empirical measures it then holds for all t∈[0,T] and φ∈ C_c^2(^d) that
𝔽_φ,t(ρ^J) → 0 in L^2(ℙ̃),
where ρ^J∈ C([0,T],𝒫(^d)).
One obtains this result by applying Itô's Lemma to the functional and using Lemma <ref>. The second part of the limiting procedure is shown in the following Lemma, which is analogous to <cit.>:
Let f satisfy Assumption <ref> and let ρ_0 satisfy Assumption <ref>. For J∈ℕ, assume that {(_t)_t∈[0,T]}_j=1,…,J is the unique strong solution to the particle system (<ref>) with ρ_0^⊗ J-distributed initial data {_0}_j=1,…,J.
Let {ρ^J}_J∈ℕ⊂𝒫(C([0,T],^d)) be a weakly convergent subsequence of the empirical measures, denoted by the full sequence, and let ρ be their limit. It then holds for the projections for all t∈[0,T] and φ∈ C_c^2(^d) that
𝔽_φ,t(ρ^J) →𝔽_φ,t(ρ) in L^1(ℙ̃).
In order to obtain this result, one inspects the summands of the functional one by one for convergence.
The previous two lemmas permit us to obtain the following result:
Let f satisfy Assumption <ref> and let ρ_0 satisfy Assumption <ref>. For J∈ℕ, assume that {(_t)_t∈[0,T]}_j=1,…,J is the unique strong solution to the particle system (<ref>) with ρ_0^⊗ J-distributed initial data {_0}_j=1,…,J. Let {ρ^J}_J∈ℕ be a weakly convergent subsequence of the empirical measures, denoted by the full sequence, with limit ρ∈𝒫(C([0,T],^d)). Then the projection of the limit ρ∈ C([0,T],𝒫(^d)) is a weak solution of the FPE (<ref>).
The above result provides that the limiting measure of convergent subsequences indeed (weakly) solves the FPE.
§.§ Properties of the McKean-Vlasov SDE
For our proof of the unique solvability of the FPE (<ref>) we require existence and uniqueness of the solution to the following associated SDE of McKean-Vlasov type:
dθ_t = -(θ_t-ℳ_β(ρ_t))dt + √(2λ^-1𝒞_β(ρ_t))dW_t ,
where ρ_t = ℙ^θ_t denotes the law of θ_t.
This SDE describes the behavior of a representative particle in the mean field limit, and if the solution exists, its law solves the FPE (<ref>). The existence and uniqueness of the solution are nontrivial to obtain since the drift and diffusion term depend in nonlinear ways on the law of the solution itself. Hence, we dedicate the following Theorem to the existence and the one after it to the uniqueness of the solution to this SDE. We follow the strategy of <cit.>, wherein the authors obtain unique solvability for the McKean SDE for CBO using Schaefer's Fixed point Theorem (SFPT)[cf. Thm <ref> in the Appendix]. The difference in our analysis lies in the fact that we have the covariance matrix of the reweighted measure instead of a diagonal diffusion matrix. We emphasize that this extension is non-trivial.
Next, we obtain estimates that will be useful in proving both existence and uniqueness of the solution to the McKean-Vlasov type SDE and is made in analogy to <cit.>.
Let μ, ν∈𝒫_6(^d) with
(∫_^d |x|^6 dμ(x))^1/6, (∫_^d |x|^6 dν(x))^1/6≤ K .
Then there exist c_0,c_1>0 such that
(i) |ℳ_β(μ)-ℳ_β(ν)| ≤ c_0 W_2(μ,ν),
(ii) ‖𝒞_β(μ)-𝒞_β(ν)‖_F ≤ c_1 W_2(μ,ν),
where c_0 and c_1 depend on μ, ν solely through K.
We now provide the existence of the solution to the McKean-Vlasov type SDE using a fixed point argument which is made closely following the strategy of the proof in <cit.>.
Let θ_0∼ρ_0 with ρ_0 satisfying Assumption <ref> and let f satisfy Assumption <ref>. For all T≥0 there exists a strong solution (θ_t)_t∈[0,T] of the McKean SDE (<ref>).
Using Assumption <ref>, we verify that solutions to the McKean-Vlasov type SDE are unique in the following sense.
Let Assumption <ref> hold, let θ_0∼ρ_0 with ρ_0 according to Assumption <ref> and let f satisfy Assumption <ref>. Then the solution of the McKean SDE (<ref>) is unique up to ℙ-indistinguishability.
§.§ Uniqueness of the FPE
Until now we have been concerned with weakly convergent subsequences of the empirical measures, and shown that their limit is indeed a solution to the FPE. For the convergence of the whole sequence, it suffices to show that the solution of the FPE is unique. This is shown in Corollary <ref>, which is analogous to <cit.> and uses the results of the previous section, most crucially the previously obtained unique solvability of the associated McKean-Vlasov type SDE.
In order to prove the Corollary, we also need the following auxiliary result, which is analogous to <cit.>:
For T>0, let (u,C)∈ C([0,T],^d×^d× d) and ρ_0 according to Assumption <ref>. Then the following linear PDE
∂ρ/∂ t = i=1dk=1d∂^2/∂ x_i∂ x_k[(C_t)_ikρ_t] - ∇·[(x-u_t)ρ_t]
has a unique weak solution ρ∈ C([0,T];𝒫_2(^d)).
Using the above result for the unique solvability of a linearized version of the FPE, we are able to prove the pointwise uniqueness, which is stated in the following Corollary:
Let ρ_0 satisfy Assumption <ref> and let it be the initial data of two weak solutions ρ^1, ρ^2 to the FPE (<ref>).
Under Assumption <ref> we have
sup_t∈[0,T] W_2(ρ_t^1, ρ_t^2) = 0.
§.§ Proof of Theorem <ref>
By connecting the results of the previous subsections, we are ready to prove our main result, Theorem <ref>:
Due to the assumptions on our initialization, we
are able to apply propagation of chaos and the initially computed moment estimates, and
obtain with Lemma <ref> that the sequence of measures 𝔉 = {ℙ^ρ^J}_J∈ℕ is tight in 𝒫(𝒫(C([0,T];^d))). Subsequently applying Prohorov's Theorem implies weak relative sequential compactness of 𝔉.
Theorem <ref> shows that if we take a subsequence of the {ρ^J}_J∈ℕ to be convergent in distribution, the limit of the projections {ρ^J}_J∈ℕ solves the FPE weakly.
Furthermore, Corollary <ref> implies that the limit of the projections of all weakly convergent subsequences of 𝔉 is equal, implying the weak convergence of the entire sequence {ρ^J}_J∈ℕ.
§ CONCLUSION
After providing a stability result for the finite particle SDE, we investigated the mean-field limit of the CBO and CBS particle systems, successfully showing that it holds under moderate assumptions that are common in the literature. Specifically, we employed propagation of chaos via a compactness argument by analyzing the weak convergence of the empirical measures, which additionally yielded that the limiting measure is not only a solution to the FPE, but in fact the only one.
§ PROOFS OF SECTION 2
§.§ Proofs of preliminary results
By linearity of the trace operator we have
‖√(𝒞_β(μ))‖^2_F
= tr(∫_^d( x-ℳ_β(μ))⊗( x-ℳ_β(μ))dL_βμ( x))
= ∫_^d| x-ℳ_β(μ)|^2dL_βμ( x)
= ∫_^d| x|^2dL_βμ( x)-|ℳ_β(μ)|^2
≤∫_^d| x|^2dL_βμ( x) ,
where we used |x-y|^2 = |x|^2 - 2⟨ x,y⟩ +|y|^2, the definition of ℳ_β(μ)=∫_^d x dL_βμ( x) and the fact that ℳ_β(μ) is a constant independent of x. We apply Lemma <ref> to deduce that
‖√(𝒞_β(μ))‖^2_F ≤∫| x|^2dL_βμ( x) ≤ b_1 + b_2 ∫ |x|^2 dμ(x) .
Similarly, applying the Cauchy-Schwarz inequality we obtain
‖𝒞_β(μ)‖_F^2 = tr(∫_^d×^d⟨ x - ℳ_β(μ), x̂ - ℳ_β(μ)⟩ ( x-ℳ_β(μ))⊗ (x̂-ℳ_β(μ)) d L_βμ( x) d L_βμ(x̂))
= ∫_^d×^d⟨ x - ℳ_β(μ), x̂ - ℳ_β(μ)⟩^2 d L_βμ( x) d L_βμ(x̂)
≤∫_^d×^d | x - ℳ_β(μ)|^2 |x̂ - ℳ_β(μ)|^2 d L_βμ( x) d L_βμ(x̂)
= (∫_^d | x - ℳ_β(μ)|^2 d L_βμ( x) )^2
≤(∫_^d | x|^2 d L_βμ( x) )^2 ,
which yields again with Lemma <ref> that
‖𝒞_β(μ)‖_F ≤ b_1 + b_2 ∫ | x|^2 dμ( x) .
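For readers who prefer a concrete picture of ℳ_β and 𝒞_β, the short sketch below computes both quantities from samples of an empirical measure and checks the first estimate numerically, i.e. that tr 𝒞_β(μ) = ‖√(𝒞_β(μ))‖_F^2 is dominated by the weighted second moment ∫|x|^2 dL_βμ(x). The Gaussian samples and the quadratic objective are assumptions made purely for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, d, n = 2.0, 3, 10_000
X = rng.normal(size=(n, d))                    # samples of mu (assumed standard normal)
f = lambda x: 0.5 * np.sum(x ** 2, axis=-1)    # toy objective, for illustration only

w = np.exp(-beta * f(X))
w /= w.sum()                                   # weights of the reweighted measure L_beta mu

m = w @ X                                      # weighted mean, int x dL_beta mu(x)
centered = X - m
C = (centered * w[:, None]).T @ centered       # weighted covariance matrix

lhs = np.trace(C)                              # = squared Frobenius norm of sqrt(C)
rhs = w @ np.sum(X ** 2, axis=1)               # = int |x|^2 dL_beta mu(x)
print(f"tr C_beta = {lhs:.4f}  <=  weighted second moment = {rhs:.4f}: {lhs <= rhs}")
```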
§.§ Proofs of main results
The proof strategy is to apply <cit.>.
To this end we verify the assumptions of this theorem. First we
note that the drift F and diffusion G defined in (<ref>)
are both locally Lipschitz and satisfy a linear growth condition,
which follows straightforwardly from the Lipschitz continuity of
f. For more details, we refer for example to <cit.> for a similar calculation which can be adapted up to minor modifications to the present setting.
In the rest of the proof we construct a
Lyapunov function V and show that (<ref>) holds.
Define
V( x) = 1/2J∑_j=1^J
| x^j|^2 for
x=( x^j)_j=1,…,J∈^J· d.
Condition (<ref>) is clearly satisfied for
this V,
so that it only remains to show (<ref>).
We compute the generator and derive the following upper bound using
Cauchy-Schwarz inequality
ℒV( x) = 1/J∑_j=1^J ⟨ x^j, ℳ_β(ρ^J)- x^j⟩ + 1/2J∑_j=1^J 2λ^-1 tr(𝒞_β(ρ^J))
≤ -1/J∑_j=1^J | x^j|^2 + 1/J∑_j=1^J|ℳ_β(ρ^J)|| x^j|+ 1/Jλ∑_j=1^J tr(𝒞_β(ρ^J)).
By Lemma <ref>, we have
1/J∑_j=1^J tr(𝒞_β(ρ^J))≤ b_1+b_2 1/J∑_i=1^J| x^i|^2
and using Lemma <ref>, we similarly obtain
|ℳ_β(ρ^J)|^2≤ b_1 + b_2 1/J∑_i=1^J | x^i|^2 .
By Young's inequality we deduce that
1/J∑_j=1^J|ℳ_β(ρ^J)|| x^j| ≤1/J∑_j=1^J ( |ℳ_β(ρ^J)|^2/2 +| x^j|^2/2)≤b_1/2 + 1+b_2/2J∑_j=1^J| x^j|^2.
Hence, there exist constants c_1, c_2>0
such that ℒV( x) ≤ c_1 + c_2 V( x)
and thus V_c_1( x) = c_1+V( x) gives the desired Lyapunov
function. In all this shows that the assumptions of <cit.> are satisfied. An application of this theorem
then implies the claim.
Let p, q≥ 0, let μ be a probability measure on ^d, and
let f∈ L^p+q(^d,μ). Then
∫_^d|f(x)|^p dμ(x)
∫_^d|f(y)|^q dμ(y)
≤∫_^d|f(x)|^p+q dμ(x).
Without loss of generality p≤ q. For p=0 the statement is
trivial, so let p>0. Then q/p≥ 1 and by Hölder's inequality
∫_^d|f(x)|^p dμ(x)
≤(∫_^d|f(x)|^pq/p dμ(x))^p/q
=(∫_^d|f(x)|^q dμ(x))^p/q.
This estimate together with Jensen's inequality yields
∫_^d|f(x)|^p dμ(x)
∫_^d|f(y)|^q dμ(y)
≤(∫_^d|f(x)|^q dμ(x) )^q+p/q≤∫_^d|f(x)|^q+p dμ(x),
which gives the claim.
Set
g: ℝ×^d →ℝ,
(t, x) ↦ | x|^2p.
For x=( x_k)_k=1^d∈^d we have
∂ g/∂ t(t, x) = 0,
∂ g/∂ x_k(t, x) = 2p x_k|x|^2(p-1),
∂^2 g/∂ x_k^2(t, x) = 2p| x|^2(p-1) + 4p(p-1)| x|^2max(0,p-2) x_k^2.
Now let θ_t^j∈^d, j=1,…,J, be the solution of
(<ref>). Applying the Itô formula to g yields
d|θ^j_t|^2p = 2p|θ^j_t|^2(p-1)⟨θ_t^j,dθ^j_t ⟩ + p|θ^j_t|^2(p-1)⟨dθ_t^j,dθ^j_t ⟩ + 2p(p-1)|θ^j_t|^2(p-2)⟨θ_t^j,dθ^j_t⟩
= 2p|θ^j_t|^2(p-1)⟨θ^j_t, -(θ^j_t-ℳ_β(ρ^J_t))dt+√(2𝒞_β(ρ^J_t))dW_t⟩
+ p|θ^j_t|^2(p-1)⟨-(θ^j_t-ℳ_β(ρ^J_t))dt+√(2𝒞_β(ρ^J_t))dW_t, -(θ^j_t-ℳ_β(ρ^J_t))dt+√(2𝒞_β(ρ^J_t))dW_t⟩
+ 4p(p-1)|θ^j_t|^2max(0,p-2)⟨θ^j_t, -(θ^j_t-ℳ_β(ρ^J_t))dt+√(2𝒞_β(ρ^J_t))dW_t⟩ .
Integrating over time and applying the expectation gives
|θ^j_t|^2p = [|θ_0^j|^2p -∫_0^t2p|θ^j_s|^2(p-1)⟨θ^j_s,θ^j_s-ρ^J_s⟩ds
+ ∫_0^t2p|θ^j_s|^2(p-1)⟨θ^j_s,√(2ρ_s^J)⟩dW_s + ∫_0^t 2 p|θ^j_s|^2(p-1)√(ρ^J_s)_F^2 ds
+ 4p(p-1)(∫_0^t|θ_s^j|^2max(0,p-2)⟨θ^j_s,θ^j_s-ρ^J_s⟩ds+∫_0^t|θ_s^j|^2max(0,p-2)⟨θ_s^j,√(2ρ^J_t)⟩dW_s)].
Using the linearity of the expectation and the integral and the fact that the third and last integrands are martingales, we have
|θ^j_t|^2p = |θ_0^j|^2p - 2p∫_0^t|θ^j_s|^2(p-1)⟨θ^j_s,θ^j_s-ρ^J_s⟩ds
+2p∫_0^t|θ^j_s|^2(p-1)√(ρ^J_s)_F^2ds
+ 2p(p-1)∫_0^t|θ_s^j|^2max(0,p-2)⟨θ^j_s,θ^j_s-ρ^J_s⟩ds
≤|θ_0^j|^2p + 4p^2∫_0^t(|θ^j_s|^2(p-1)+|θ^j_s|^2max(0,p-2))(|θ^j_s|^2+|θ^j_s-ρ^J_s|^2)ds
+2p∫_0^t|θ^j_s|^2(p-1)√(ρ^J_s)_F^2ds .
Next, we sum over the particles and divide by J, after which we apply the inequality |a+b|^2≤ 2|a|^2+2|b|^2 to obtain
∫_^d| x|^2pdρ^J_t( x) ≤∫_^d| x|^2pdρ_0^⊗ J( x)
+ 4p^2∫_0^t∫_^d(| x|^2(p-1)+ | x|^2max(0,p-2))(3| x|^2+2|ℳ_β(ρ^J_s)|^2)dρ^J_s( x)ds
+2p∫_0^t∫_^d| x|^2(p-1)‖√(𝒞_β(ρ^J_s))‖^2_F dρ^J_s( x)ds
We now apply
Lemma <ref> and
Lemma <ref>, which yields
∫_^d| x|^2pdρ^J_t( x) ≤∫_^d| x|^2pdρ_0^⊗ J( x) + 4p^2∫_0^t3∫_^d| x|^2pdρ_s^J( x)
+ [∫_^d| x|^2(p-1)(2b_1+3 +2b_2∫_^d|η|^2 dρ^J_s(η))dρ^J_s( x)]
+∫_^d| x|^2max(0,p-2)(2b_1+2b_2∫_^d|η|^2dρ_s^J(η))dρ^J_s( x)ds
+ 2^2p+1p^2∫_0^t∫_^d| x|^2(p-1)(b_1+b_2∫_^d|η|^2 dρ^J_s(η) )dρ^J_s( x)ds
≤∫_^d| x|^2pdρ_0^⊗ J( x) + (4p^2(3+2b_2)+2^2p+2p^2 b_2)
∫_0^t∫_^d| x|^2pdρ_s^J( x)ds
+ (4p^2(2b_1+3+2b_2)+2^2p+2p^2 b_1)
∫_0^t∫_^d| x|^2(p-1)dρ^J_s( x)ds
+ (4p^2(2b_1))∫_0^t∫_^d| x|^2max(0,p-2)dρ^J_s( x)ds
=: ∫_^d| x|^2pdρ_0^⊗ J( x) + C∫_0^t∫_^d| x|^2pdρ^J_sds + C̃∫_0^t∫_^d| x|^2(p-1)dρ^J_s( x)ds
+ C' ∫_0^t∫_^d| x|^2max(0,p-2)dρ^J_s( x)ds.
Here we used
∫ | x|^2(p-1)dρ( x)∫ |η|^2dρ(η)≤∫ | x|^2pdρ( x) which follows by Lemma <ref> .
Continuing with our estimates, for p=1, we arrive at
∫_^d| x|^2dρ^J_t( x) ≤∫_^d| x|^2dρ^J_0( x) + C ∫_0^t∫_^d| x|^2dρ^J_s( x) ds + (C̃+C') t.
Applying Grönwall's inequality gives the upper bound
∫_^d| x|^2dρ^J_t( x)≤(∫_^d| x|^2dρ_0^J( x) + (C̃ +C')T)exp(CT) < ∞.
Similarly, for p=2, using the above estimate we obtain
∫_^d| x|^4dρ^J_t( x) ≤∫_^d| x|^4dρ^J_0( x) + C ∫_0^t∫_^d| x|^4dρ^J_s( x) ds + C̃ t sup_t∈[0,T]∫_^d| x|^2dρ^J_t( x) + C't
≤∫_^d| x|^4dρ^J_0( x) + C ∫_0^t∫_^d| x|^4dρ^J_s( x) ds
+ C̃ T (∫_^d| x|^2dρ_0^J( x) + (C̃ +C') T)exp(CT) + C'T,
such that we can derive an upper bound by applying Grönwall's inequality once more
∫_^d| x|^4dρ^J_t( x) ≤( ∫_^d| x|^4dρ^J_0( x) + C̃ T (∫_^d| x|^2dρ_0^J( x)
+ (C̃ +C') T)exp(CT)+C'T) exp(CT) < ∞.
Using the same procedure, one may obtain a bound for the sixth
moment.
§ PROOFS OF SECTION <REF>
The proof is a straightforward adaptation of the argument in <cit.>, and we refer to this paper for a more detailed exposition. In particular, according to Theorem <ref> it suffices to prove the tightness of (ℙ^θ^1,J)_J∈ℕ due to the particles'
exchangeability.
We prove tightness of the solution
{θ^1,J}_J∈ℕ to (<ref>) on
C([0,T];^d), using the two criteria by Aldous stated in
Theorem <ref> in Appendix <ref>:
1. Let ϵ >0, then by Markov's inequality
ℙ(|θ_t^1,J|>K_p(T)/ϵ)≤ϵ/K_p(T) 𝔼|θ_t^1,J|≤ϵ, ∀ J∈ℕ,
where K_p(T) is the uniform bound on the moments from Lemma <ref> and t∈[0,T].
2. Let τ be a discrete σ(θ^1_s,s∈[0,T])-stopping time with τ+δ≤ T. By (<ref>) we have
𝔼|θ_τ+δ^1,J-θ_τ^1,J|^2 = 𝔼|-∫_τ^τ+δθ^1,J_s-ℳ_β(ρ_s^J)ds + ∫_τ^τ+δ√(2𝒞_β(ρ_s^J))dW^1_s|^2
≤2𝔼|∫_τ^τ+δθ^1,J_s-ℳ_β(ρ_s^J)ds|^2 + 2𝔼|∫_τ^τ+δ√(2𝒞_β(ρ_s^J))dW^1_s|^2
We can bound the first term via our moment estimate. Using Jensen's
inequality
𝔼|∫_τ^τ+δℳ_β(ρ_s^J)-θ^1,J_s ds|^2 ≤δ 𝔼∫_τ^τ+δ|ℳ_β(ρ_s^J)-θ^1,J_s|^2ds
≤ 2δ^2(sup_t∈[0,T]𝔼|ℳ_β(ρ^J_t)|^2+sup_t∈[0,T]𝔼|θ^1,J_t|^2)
≤ 2δ^2 K_1(T),
where for the last inequality we used Lemma <ref>
and Lemma <ref>.
For the second moment of the covariance of the reweighted measure we similarly obtain
𝔼|∫_τ^τ+δ√(𝒞_β(ρ_s^J))dW_s|^2 =𝔼∫_τ^τ+δ‖√(𝒞_β(ρ_s^J))‖^2_F ds
≤𝔼∫_τ^τ+δ b_1+b_2∫_^d|x|^2dρ_s^J(x) ds
≤δ( b_1+b_2 sup_t∈[0,T]𝔼∫_^d|x|^2dρ_t^J(x))
≤δ (b_1 + b_2K_1(T)),
where in the first equality we used the Itô Isometry, the
first inequality uses Fubini's theorem and Lemma
<ref>, and the final inequality follows
by Lemma <ref>.
In all, using δ≤ T and
Jensen's
inequality
once more,
𝔼|θ_τ+δ^1,J-θ_τ^1,J| ≤√(2δ·max(TK_1(T), (b_1+b_2K_1(T))))
=: √(δ)C ,
with C>0 depending on T but independent of δ.
To
obtain
the desired inequality (<ref>)
for given ϵ, η>0,
we define
δ_0 := min(ϵη/C^2,T)
and apply Markov's inequality to obtain
sup_δ∈[0,δ_0]ℙ(|θ^1,J_τ+δ-θ^1,J_τ|>η) ≤sup_δ∈[0,δ_0]1/η 𝔼|θ^1,J_τ+δ-θ^1,J_τ|≤ϵ,
which concludes the proof.
Plugging ρ^J into 𝔽_φ,t defined in
(<ref>) we
get
𝔽_φ,t(ρ^J) = ∫_^dφ dρ^J_t - ∫_^dφ dρ^J_0 + ∫_0^t∫_^d( x-ℳ_β(ρ^J_s))·∇φ( x) dρ^J_s( x) ds
- ∫_0^t∑_i=1^d∑_k=1^d∫_^d(𝒞_β(ρ_s^J))_ik∂^2/∂ x_i∂ x_kφ( x) dρ^J_s( x) ds
= 1/J∑_j=1^J( φ(θ^j,J_t) - φ(θ^j,J_0)+ ∫_0^t(θ^j,J_s-ℳ_β(ρ^J_s))·∇φ(θ^j,J_s) ds
- ∫_0^t ∑_i=1^d∑_k=1^d(𝒞_β(ρ_s^J))_ik∂^2/∂ x_i ∂ x_kφ(θ^j,J_s) ds ).
Since φ∈C_c^2(^d), we may apply Itô's formula
to the solution _t of (<ref>), which yields
φ(θ^j,J_t) = φ(θ^j,J_0) - ∫_0^t(θ^j,J_s-ℳ_β(ρ^J_s))·∇φ(θ^j,J_s) ds + ∫_0^t√(2𝒞_β(ρ_s^J))∇φ(θ^j,J_s)· dW^j_s
+ ∫_0^t ∑_i=1^d∑_k=1^d(𝒞_β(ρ_s^J))_ik∂^2/∂ x_i∂ x_kφ(θ^j,J_s) ds.
Inserting this into (<ref>) leads to
𝔽_φ,t(ρ^J) = 1/J∑_j=1^J∫_0^t√(2𝒞_β(ρ_s^J))∇φ(θ^j,J_s)· dW^j_s .
For the second moment, we obtain
𝔼|𝔽_φ,t(ρ^J)|^2 = 2/λ J^2 𝔼|∑_j=1^J∫_0^t√(𝒞_β(ρ_s^J))∇φ(θ^j,J_s) dW_s^j|^2
= 2/λ J^2∑_i=1^d𝔼(∑_j=1^J∫_0^t√(𝒞_β(ρ^J_s))∇φ(θ^j,J_s) dW^j_s)_i^2
=2/λ J^2∑_i=1^d𝔼(∑_j=1^J∑_k=1^J(∫_0^t√(𝒞_β(ρ^J_s))∇φ(θ^j,J_s) dW^j_s)(∫_0^t√(𝒞_β(ρ^J_s))∇φ(θ^k,J_s) dW^k_s))_i
=2/λ J^2∑_j=1^J𝔼∫_0^t∇φ(θ^j,J_s)^⊤𝒞_β(ρ^J_s)∇φ(θ^j,J_s) ds
≤ C(λ, K_p(T), T, ‖∇φ‖_L^∞)1/J→ 0 as J→∞,
where for the final inequality we used
Lemma <ref> and Lemma <ref>,
and the
equality in the second to last line
holds due to Itô's Isometry and the fact that 𝔼∫_0^tg(s, θ_s)d(B_s^j⊗ B_s^k)=0 for j≠ k and g∈ L^2;
this holds
due to φ∈ C_c^2(^d) and Lemma <ref>.
We were able to apply Lemma <ref> due to our assumption about the initialisation.
Firstly, we observe that for every t∈[0,T], {ρ^J_t}_J∈ℕ converges[remember that we identify the subsequence {ρ^J_k}_k∈ℕ as {ρ^J}_J∈ℕ.] ℙ̃_t-almost surely (on (Ω̃_t,ℱ̃_t,ℙ̃_t)) to ρ_t∈𝒫(^D).
It suffices to check the L^1-convergence of the
terms of 𝔽_φ,t(ρ^J) in (<ref>) one by one.
The difference to <cit.> lies in the diffusion term, which we split as
𝔼|∫_0^t∑_i=1^d∑_k=1^d∫_^d(𝒞_β(ρ_s^J))_ik∂^2/∂ x_i∂ x_kφ( x) dρ_s^J( x)ds-∫_0^t∑_i=1^d∑_k=1^d∫_^d(𝒞_β(ρ_s))_ik∂^2/∂ x_i∂ x_kφ( x) dρ_s( x) ds|
≤𝔼(∫_0^t|∑_i=1^d∑_k=1^d∫_^d(𝒞_β(ρ_s^J))_ik∂^2/∂ x_i∂ x_kφ( x) d(ρ_s^J( x)-ρ_s( x))|ds
+ ∫_0^t|∑_i=1^d∑_k=1^d∫_^d((𝒞_β(ρ^J_s))_ik-(𝒞_β(ρ_s))_ik)∂^2/∂ x_i∂ x_kφ( x) dρ_s( x)|ds)
=: 𝔼(∫_0^t|I_1^J(s)|ds + ∫_0^t|I_2^J(s)|ds),
where we used the triangle inequality and Jensen's inequality.
The expectation of the first integrand vanishes, since φ
vanishes at the boundaries: using Jensen's inequality, Lemma
<ref> and K_1(T) from Lemma <ref>
𝔼|I^J_1(s)| ≤𝔼∑_i=1^d∑_k=1^d|(𝒞_β(ρ_s^J))_ik||∫_^d∂^2/∂ x_i∂ x_kφ( x) d(ρ_s^J( x)-ρ_s( x))|
≤𝔼(K_1(T)^1/2∑_i=1^d∑_k=1^d|∫_^d∂^2/∂ x_i∂ x_kφ( x)d(ρ_s^J( x)-ρ_s( x))|)→ 0 as J→∞.
Moreover, for the second integrand, we claim that for all s∈[0,T]
|I^J_2(s)| = |∑_i=1^d∑_k=1^d∫_^d((𝒞_β(ρ^J_s))_ik-(𝒞_β(ρ_s))_ik)
∂^2/∂ x_i∂ x_kφ( x)dρ_s( x) |
≤sup_ x∈^d‖∇^2 φ( x)‖_F·‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F→ 0 as J→∞.
To show this
claim, we first note that there holds the
ℙ̃_s-almost sure convergence
lim_J→∞‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F = 0,
for each fixed s because xexp(-β
f( x)) and exp(-β f( x)) belong to
C_b(^d).
More precisely, together with
ℙ̃_t-almost sure convergence
of the ρ^J_t towards
ρ_t,
this implies
lim_J→∞𝒞_β(ρ_t^J) = lim_J→∞∫_^d( x-ℳ_β(L_βρ^J_t))⊗( x-ℳ_β(L_βρ_t^J)) dL_βρ_t^J( x)
= lim_J→∞∫_^d( x-ı xexp(-β f( x))dρ_t^J( x)/ıexp(-β f( x))dρ_t^J( x))⊗( x-ı xexp(-β f( x))dρ_t^J( x)/ıexp(-β f( x))dρ_t^J( x))
·exp(-β f( x))/ıexp(-β f( x))dρ_t^J( x) dρ_t^J( x)
= ∫_^d( x-ı xexp(-β f( x))dρ_t( x)/ıexp(-β f( x))dρ_t( x))⊗( x-ı xexp(-β f( x))dρ_t( x)/ıexp(-β f( x))dρ_t( x))
·exp(-β f( x))/ıexp(-β f( x))dρ_t( x) dρ_t( x)
= ∫_^d( x-ℳ_β(L_βρ_t))⊗( x-ℳ_β(L_βρ_t)) dL_βρ_t( x)
= 𝒞_β(ρ_t) ,
ℙ̃_t-almost surely. Here we wrote integrals over ^d with respect to a measure μ as ı·dμ( x) for better legibility. Importantly, we were able to use weak convergence of the time marginals ρ^J_t →ρ_t according to <cit.>.
We apply Lemma <ref> to verify that
lim_J→∞𝔼‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F = 0, which can be seen as follows. Let A>0 be arbitrary, and consider
𝔼‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F
= 𝔼[‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F 1_{‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F ≤ A}] + 𝔼[‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F 1_{‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F>A}]
≤𝔼[‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F 1_{‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F ≤ A}] + (𝔼‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F^2)^1/2 ℙ̃(‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F>A)^1/2
≤𝔼[‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F 1_{‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F ≤ A}] + 𝔼‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F^2 /A^2 ,
where we have first applied Hölder's inequality followed by Markov's inequality. Note that by Lemma <ref> and Lemma <ref>,
𝔼‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F^2≤ 2 𝔼‖𝒞_β(ρ^J_s)‖_F^2 + 2 𝔼‖𝒞_β(ρ_s)‖_F^2≤ K_2(T)<∞
uniformly in J. Taking the limit J→∞ yields
0≤lim sup_J→∞𝔼‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F ≤K_2(T)/A^2 ,
and since A>0 is arbitrary, we also obtain lim_J→∞𝔼‖𝒞_β(ρ^J_s)-𝒞_β(ρ_s)‖_F = 0.
In summary, we verified the convergence of the diffusion term
lim_J→∞ 𝔼[|∫_0^t∑_i=1^d∑_k=1^d∫_^d(𝒞_β(ρ_s^J))_ik∂^2/∂ x_i∂ x_kφ( x)dρ_s^J( x)ds
-∫_0^t∑_i=1^d∑_k=1^d∫_^d(𝒞_β(ρ_s))_ik∂^2/∂ x_i∂ x_kφ( x) dρ_s( x) ds|] = 0.
The remaining two summands of the functional are identical to CBO, and their convergence has already been obtained in <cit.>, i.e.
it holds
lim_J→∞𝔼|(∫_^dφ( x)dρ^J_t( x)-∫_^dφ( x)dρ^J_0( x))-(∫_^dφ( x) dρ_t( x)-∫_^dφ( x)dρ_0( x))| = 0
and
lim_J→∞𝔼|∫_0^t ∫_^d( x-ℳ_β(ρ^J_s))·∇φ( x)dρ^J_s( x)ds-∫_0^t∫_^d( x-ℳ_β(ρ_s))·∇φ( x)dρ_s( x)ds| = 0.
Equations (<ref>), (<ref>), and
(<ref>) together
with
the triangle inequality
give
lim_J→∞𝔼|𝔽_φ,t(ρ^J)-𝔽_φ,t(ρ)| = 0,
which is the desired result.
The proof is a direct adaptation of <cit.>.
Considering the measure flow ρ=(ρ_t)_t∈[0,T] of time marginals of the limit ρ, we must check the two conditions in the definition of a weak solution. We begin with the first point, which requires checking that t↦ρ_t is continuous in time in the sense
of (<ref>): For ϕ∈C_b(^d)
and t_n→ t, dominated convergence yields
∫_C([0,T]; ^d)ϕ(f(t_n))dρ(f) n→∞→∫_C([0,T]; ^d)ϕ(f(t)) dρ(f).
With the definition of the evaluation map e_t:C([0,T], ℝ^d)→ℝ^d, f↦ f(t) and the time marginals ρ_t:=(e_t)_#ρ=ρ(e_t^-1(·)) one can equivalently write the above as
∫_^dϕ(x)dρ_t_n(x) n→∞→∫_^dϕ(x)dρ_t(x)
using change of variables, which is applicable because ϕ is measurable.
The second part of the Definition now follows straightforwardly by combining the previously obtained convergence results from Lemma <ref> and Lemma <ref>:
𝔼|𝔽_φ,t(ρ)| = 𝔼|𝔽_φ,t(ρ)-𝔽_φ,t(ρ^J)+𝔽_φ,t(ρ^J)|
≤𝔼|𝔽_φ,t(ρ)-𝔽_φ,t(ρ^J)| + 𝔼|𝔽_φ,t(ρ^J)|
≤𝔼|𝔽_φ,t(ρ)-𝔽_φ,t(ρ^J)| + C/√(J)→ 0 as J→∞.
We next recall a standard argument to bound the difference
of the square root of two matrices.
Let A∈^d× d be symmetric positive semi-definite and let
B∈^d× d be symmetric positive definite. Then
‖√(A)-√(B)‖_2≤ 2√(‖B^-1‖_2)‖A-B‖_2.
Let a≥ 0. Substituting t=√(a)tan(s) one can show
2/π∫_0^∞ a/(a+t^2) dt=2/π∫_0^π/2√(a) ds = √(a). Using the
singular value decomposition (SVD) and applying the above equality to the
singular values yields the well-known identity
√(A) = 2/π∫_0^∞ (A+t^2I)^-1A dt,
where I∈^d× d is the identity matrix. Since the same is true for B, we find
√(A)-√(B) = 2/π∫_0^∞ (A+t^2 I)^-1A - (B+t^2 I)^-1B dt
=2/π∫_0^∞ ((A+t^2 I)^-1-(B+t^2 I)^-1)A dt +2/π∫_0^∞ (B+t^2 I)^-1(A-B) dt
=:E+F.
We first bound ‖F‖_2. Denote by σ=‖B^-1‖_2^-1>0 the
smallest eigenvalue of B. The smallest eigenvalue of B+t^2I is
then σ+t^2. Thus
‖(B+t^2I)^-1‖_2=1/(σ+t^2)
and
‖F‖_2≤‖A-B‖_2 2/π∫_0^∞1/(σ+t^2)d t = ‖A-B‖_2 1/√(σ).
To bound ‖E‖_2, we use the matrix identity X^-1-Y^-1=Y^-1(Y-X)X^-1
to obtain
‖((A+t^2 I)^-1-(B+t^2 I)^-1)A‖_2 ≤‖(B+t^2 I)^-1‖_2‖A-B‖_2‖(A+t^2 I)^-1A‖_2
≤1/(σ+t^2)‖A-B‖_2.
Here we used (<ref>) and the fact that
‖(A+t^2 I)^-1A‖_2≤ 1, which can be seen using the SVD of
A. As before ‖E‖_2≤‖A-B‖_2/√(σ), which concludes
the proof.
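Both the integral representation of the matrix square root and the resulting perturbation bound are easy to check numerically. The sketch below does so for randomly generated symmetric positive definite matrices using a crude truncated quadrature; the truncation level, grid, and test matrices are assumptions of the demonstration and not part of the lemma.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
M = rng.normal(size=(d, d))
B = 0.1 * M @ M.T + 0.5 * np.eye(d)            # symmetric positive definite
N = rng.normal(size=(d, d))
A = B + 0.05 * N @ N.T                         # symmetric positive (semi-)definite perturbation of B

def sqrtm_eig(X):
    """Exact PSD square root via eigendecomposition (reference)."""
    vals, vecs = np.linalg.eigh(X)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def sqrtm_integral(X, t_max=100.0, n=20_000):
    # Crude Riemann sum for sqrt(X) = (2/pi) * int_0^inf (X + t^2 I)^{-1} X dt, truncated at t_max.
    ts = np.linspace(0.0, t_max, n)
    lhs_mats = X[None, :, :] + (ts ** 2)[:, None, None] * np.eye(d)
    resolvents = np.linalg.solve(lhs_mats, np.tile(X, (n, 1, 1)))
    return (2.0 / np.pi) * resolvents.sum(axis=0) * (ts[1] - ts[0])

print("quadrature error of the integral identity:",
      np.linalg.norm(sqrtm_integral(B) - sqrtm_eig(B), 2))

lhs = np.linalg.norm(sqrtm_eig(A) - sqrtm_eig(B), 2)
rhs = 2.0 * np.sqrt(np.linalg.norm(np.linalg.inv(B), 2)) * np.linalg.norm(A - B, 2)
print(f"||sqrt(A)-sqrt(B)||_2 = {lhs:.4f}  <=  2*sqrt(||B^-1||_2)*||A-B||_2 = {rhs:.4f}")
```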
The proof of the first claim is given in <cit.>.
For the second claim, let μ, ν∈𝒫_6(^d) with
(∫_^d | x|^6 dμ( x))^1/6, (∫_^d |x|^6 dν(x))^1/6≤ K.
We define the normalizing constants with respect to the weight function w_β^f by
Z_μ := ∫_^d w_β^f( x) dμ( x) and Z_ν := ∫_^d w_β^f(x) dν(x).
According to
<cit.>, there exists a constant c_K>0 such that
w_β^f( x)/Z_μ, w_β^f( x)/Z_ν≤ c_K
for all x∈^d. By definition of the weighted covariance matrix
𝒞_β(μ)-𝒞_β(ν) = ∫_^dw_β^f( x)/Z_μ x x^⊤dμ( x) - ∫_^dw_β^f(x)/Z_νxx^⊤dν(x)+ M_β(μ) M_β(μ)^⊤ - M_β(ν)M_β(ν)^⊤ .
Next, we estimate
‖𝒞_β(μ)-𝒞_β(ν)‖_F ≤‖∫_^dw_β^f( x)/Z_μ x x^⊤dμ( x) - ∫_^dw_β^f(x)/Z_νxx^⊤dν(x)‖_F + ‖M_β(μ) M_β(μ)^⊤ - M_β(ν)M_β(ν)^⊤‖_F.
We will make use of the following estimates for arbitrary vectors x,y∈^d
xx^⊤ -yy^⊤_F^2 =|y|^2 |x-y|^2+|x|^2 |x-y|^2 + 2⟨ y,x-y⟩⟨ x,y-x⟩≤ (|x|+|y|)^2|x-y|^2
where we have applied the Cauchy-Schwarz inequality. Hence, we first observe that
M_β(μ) M_β(μ)^⊤ - M_β(ν)M_β(ν)^⊤_F ≤ (|M_β(μ)|+|M_β(ν)|)|M_β(μ)-M_β(ν)| ≤ 2(b_1 +b_2 c_K) c_0 W_2(μ,ν),
where we have used <cit.> and Lemma <ref>. Secondly, let π be an arbitrary coupling of μ and ν such that we can write
[F]∫_^dw_β^f( x)/Z_μ x x^⊤dμ( x) - ∫_^dw_β^f(x)/Z_νxx^⊤dν(x) = [F]∫_^d∫_^d(w_β^f( x)/Z_μ x x^⊤ - w_β^f(x)/Z_νxx^⊤) dπ( x,x)
≤∫_^d∫_^d[F]w_β^f( x)/Z_μ x x^⊤ - w_β^f(x)/Z_νxx^⊤dπ( x,x) .
The outline of the remaining proof follows similar steps as the proof of <cit.>. In particular, we use triangle inequality to bound the integrand by
[F]w_β^f( x)/Z_μ x x^⊤ - w_β^f(x)/Z_νxx^⊤ ≤ x x^⊤ - xx^⊤_F |w_β^f(x)/Z_μ|
+ x x^⊤_F |w_β^f( x)-w_β^f(x)|/Z_μ
+ xx^⊤_F |w_β(x)| |∫_^d∫_^d w_β^f(y)-w_β^f(y) dπ(y,y)|/Z_μ Z_ν ,
where we used the equality
w_β^f( x)/Z_μ x x^⊤ - w_β^f(x)/Z_νxx^⊤ = ( x x^⊤ - xx^⊤ ) w_β^f(x)/Z_μ + x x^⊤w_β^f( x)-w_β^f(x)/Z_μ + xx^⊤ w_β^f(x) (1/Z_μ - 1/Z_ν)
and
1/Z_μ - 1/Z_ν = ∫_^d∫_^d w_β^f(y)-w_β^f(y) dπ(y,y)/Z_μ Z_ν .
We will integrate and bound each term in (<ref>)–(<ref>) separately. We start with (<ref>) and apply (<ref>), which yields
∫_^d∫_^d x x^⊤ - x x^⊤_F |w_β^f(x)/Z_μ| dπ( x,x) ≤ c_K ∫_^d∫_^d (| x|+|x|) | x-x|dπ( x,x)
≤ c_K (∫_^d∫_^d (| x|+|x|)^2dπ( x,x))^1/2(∫_^d∫_^d (| x-x|)^2dπ( x,x))^1/2
≤ 4 c_K K (∫_^d∫_^d (| x-x|)^2dπ( x,x))^1/2 ,
where we have applied Hölder's inequality in the second line. For the second term (<ref>) we apply Assumption (<ref>) to derive
∫_^d∫_^d x x^⊤_F |w_β^f( x)-w_β^f(x)|/Z_μdπ( x,x) ≤ 2c_K∫_^d∫_^d x x^⊤_F (| x|+|x|)| x-x| dπ( x,x)
= 2c_K∫_^d∫_^d (| x|^3+| x|^2|x|) | x-x| dπ( x,x)
≤ 2c_K ( ∫_^d∫_^d (| x|^3+| x|^2|x|)^2 dπ( x, x) )^1/2( ∫_^d∫_^d | x-x|^2 dπ( x, x) )^1/2
≤ 2c_K K̃( ∫_^d∫_^d | x-x|^2 dπ( x, x) )^1/2,
where we once again used Hölder's inequality in the third line and afterwards
the assumption on bounded second moments.
Finally, applying Hölder's inequality once more we estimate (<ref>) by
∫_^d∫_^d xx^⊤_F |w_β(x)| |∫_^d∫_^d w_β^f(y)-w_β^f(y) dπ(y,y)|/Z_μ Z_νdπ( x,x)
≤ c_K^2 βLip(f) ∫_^d∫_^dxx^⊤_F dπ( x,x) ∫_^d∫_^d (|y|+|y|) |y-y| dπ(y,y)
≤ c_K^2 βLip(f) 4K∫_^d∫_^d | x|^2 dπ( x,x) (∫_^d∫_^d |y-y|^2 dπ(y,y))^1/2
≤ c_K^2 βLip(f) 4 K^2 (∫_^d∫_^d |y-y|^2 dπ(y,y))^1/2 .
Taking the infimum over all couplings of μ and ν yields the claim.
We begin by linearizing the process. Consider for fixed ρ_0∈𝒫_4(^d) and some given (u, D)∈ C([0,T],^d×^d× d) the following linear SDE
dY_t = -(Y_t-u_t) dt + √(2) D_t dW_t, where ℙ^Y_0=ρ_0.
By standard SDE theory, e.g. <cit.>, this SDE has a unique solution with laws ν_t=ℙ^Y_t, t≥0. More precisely, for every t, ν_t is the law of Y_t, leading to the function ν∈ C([0,T],𝒫(^d)).
We then define a mapping from (u, D) to the continuous function which consists of the regularized mean and regularized covariance of ν_t at every timestep:
𝒯: C([0,T],^d×^d× d)→ C([0,T],^d×^d× d) ,
(u,D) ↦(ℳ_β(ν_t),√(𝒞_β(ν_t)))_t∈[0,T],
for which we prove the conditions of Schaefer's Fixed Point Theorem (<ref>). The well-definedness of the mapping follows firstly from the ranges of ℳ_β and 𝒞_β, and secondly from their respective (Hölder) continuity, which we prove below.
Theorem <ref> is sufficient to go from the above linearized SDE (<ref>) to the nonlinear McKean process as a fixed point of the mapping 𝒯 ensures that (ℳ_β(ν_t),√(𝒞_β(ν_t))) can be plugged in for (u_t, D_t), which yields a solution of the nonlinear SDE in (<ref>).
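For intuition, the linearize-and-iterate construction behind 𝒯 can be mimicked numerically: fix a path (u,D), solve the linear SDE (<ref>) by Euler-Maruyama with a particle ensemble standing in for the law ν_t, read off (ℳ_β(ν_t),√(𝒞_β(ν_t))), and feed the result back in. The sketch below does this for a toy quadratic f, a standard normal initialization, and arbitrarily chosen step sizes; all of these choices are illustrative assumptions and the sketch plays no role in the proof.

```python
import numpy as np

def f(x):                                       # toy objective, assumed for the demo
    return 0.5 * np.sum((x - 1.0) ** 2, axis=-1)

beta, d, N, T, dt = 1.0, 2, 4000, 1.0, 0.01     # assumed parameters
steps = int(T / dt)
rng = np.random.default_rng(3)

def m_and_c(Y):
    """Weighted mean and covariance of the empirical measure of Y under weight exp(-beta f)."""
    w = np.exp(-beta * f(Y)); w /= w.sum()
    m = w @ Y
    Yc = Y - m
    return m, (Yc * w[:, None]).T @ Yc

def sqrt_psd(C):
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

# Initial guess for the path (u_t, D_t), t in [0, T].
u = np.zeros((steps, d))
D = np.stack([np.eye(d)] * steps)

for it in range(6):
    Y = rng.normal(size=(N, d))                 # Y_0 ~ rho_0 (standard normal, assumed)
    u_new, D_new = np.empty_like(u), np.empty_like(D)
    for k in range(steps):
        m, C = m_and_c(Y)                       # estimates of the weighted mean/covariance of nu_t
        u_new[k], D_new[k] = m, sqrt_psd(C)
        dW = rng.normal(scale=np.sqrt(dt), size=(N, d))
        # linear SDE driven by the *current* (u, D): dY = -(Y - u_t) dt + sqrt(2) D_t dW
        Y = Y - (Y - u[k]) * dt + np.sqrt(2.0) * dW @ D[k].T
    print(f"sweep {it}: max change in the mean path = {np.abs(u_new - u).max():.3e}")
    u, D = u_new, D_new
```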
We now begin with checking the compactness requirement of Theorem <ref>, which we obtain by showing that the map t↦ (ℳ_β(ν_t),√(𝒞_β(ν_t))) is Hölder continuous and making use of the compact embedding of Hölder continuous functions into continuous functions.
By <cit.>, there exists c'>0 s.t.
𝔼|Y_t|^6 ≤ (1+𝔼|Y_0|^6)exp(c't),
for all t∈[0,T], i.e. sup_t∈[0,T]∫|x|^6dν_t≤ K for a K<∞, which means we will be able to apply Lemma <ref>.
Furthermore, we have for t>s, t, s ∈ (0,T)
𝔼|Y_t-Y_s|^2 = 𝔼[(-∫_s^t(Y_r-u_r) dr)^2+(∫_s^t√(2) D_r dW_r)^2
+2√(2)(∫_s^t(Y_r-u_r) dr)·(∫_s^tD_r dW_r)].
Bounding the three terms respectively using for example the Cauchy-Schwarz inequality, the Itô Isometry and the fact that the third term is a martingale, we have
𝔼|Y_t-Y_s|^2 ≤ |t-s|𝔼∫_s^t|Y_r-u_r|^2 dr+2∫_s^t‖D_r‖_F^2dr+ 0
≤ 2(T+2)(K+‖u‖_∞^2+‖D‖^2_∞)|t-s|
=: c|t-s|.
Applying <cit.> with μ=ν_t and μ̂=ν_s, we arrive at
|ℳ_β(ν_t)-ℳ_β(ν_s)| ≤ c_0 W_2(ν_t,ν_s) ≤ c_0c^1/2 |t-s|^1/2.
for c_0>0 from Lemma <ref>.
Similarly, using the well-known Powers-Størmer inequality <cit.>, estimating the resulting Schatten 1-norm |[·]|_1 using the Frobenius norm and applying Lemma <ref> we obtain
‖√(𝒞_β(ν_t))-√(𝒞_β(ν_s))‖_F ≤ (|[𝒞_β(ν_t)-𝒞_β(ν_s)]|_1)^1/2
≤ (d^1/2‖𝒞_β(ν_t)-𝒞_β(ν_s)‖_F)^1/2
≤ d^1/4√(c_1 W_2(ν_t,ν_s))
≤ c_1^1/2d^1/4 |t-s|^1/4.
This yields the Hölder continuity with exponent
1/4 of
f: [0,T] →^d×^d× d ,
t ↦ (ℳ_β(ν_t),√(𝒞_β(ν_t))).
The compact embedding
C^0,1/4([0,T],^d×^d× d) ↪ C([0,T],^d×^d× d)
therefore provides the compactness of 𝒯 (See <cit.>.).
Computing the second moment using Itô's formula, we get
d/dt∫_ℝ^d| x|^2 dρ_t = ∫_ℝ^d[2( x-u_t)· x+D_t^2] dρ_t( x).
Lemma <ref> then yields the following for the Diffusion term:
‖D_t‖_F^2 = τ^2 tr(𝒞_β(ρ_t))≤τ^2 (b_1 + b_2 ∫_^d| x|^2 dρ_t( x)).
Similarly, by Lemma <ref> it holds that
|u_t|^2 = τ^2|ℳ_β(ρ_t)|^2 ≤τ^2∫_^d| x|^2 dL_βρ_t( x) ≤τ^2(b_1 + b_2 ∫_^d| x|^2 dρ_t( x)).
Using Cauchy-Schwarz and the arithmetic-geometric mean inequality, we furthermore obtain
-2∫_^d x· u_t dρ_t( x) ≤ 2 ∫ | x· u_t|dρ_t( x)
≤ 2∫ | x||u_t|dρ_t( x)
≤∫ | x|^2 + |u_t|^2 dρ_t( x)
≤∫ | x|^2 dρ_t( x) + |u_t|^2.
Therefore, we can bound (<ref>) as follows:
d/dt∫_^d| x|^2 dρ_t( x) ≤τ^2b_1 +(τ^2b_2+2) ∫_ℝ^d | x|^2 dρ_t( x) + |u_t|^2
≤τ^2b_1(+1) +(τ^2 b_2(+1)+2)∫_^d| x|^2 dρ_t( x).
Applying Grönwall's inequality yields
∫_^d| x|^2 dρ_t( x) ≤τ^2b_1(+1)exp(τ^2b_2(+1)+2)∫_^d| x|^2 dρ_0( x) < ∞.
We can transfer this estimate back to (u_t,D_t) using Jensen's inequality and Lemma <ref> respectively, resulting in
|u_t|^2 = τ^2|ℳ_β(ρ_t)|^2 ≤τ^2 (b_1+b_2∫_^d| x|^2 dρ_t( x)) ≤ M < ∞
and D_t^2^2_F = τ^2 𝒞_β(ρ_t)_F^2 ≤τ^2(b_1+b_2∫_^d| x|^2dρ_t( x))≤ M < ∞,
and thus get upper bounds for u and D by considering the supremum norms ‖u‖_∞ := sup_t∈[0,T]|u_t| and ‖D‖_∞ := sup_t∈[0,T]‖D_t‖_F. Hence, we have verified all conditions to apply Theorem <ref>, implying the existence of a fixed point of the mapping 𝒯.
We showed above that a fixed point (u,D)∈ C([0,T],^d×^d) of 𝒯 satisfies u_∞,D_∞≤ M. Now let (u,D), (û,D̂) be two fixed points of 𝒯.
Recall that we have
‖u‖_∞, ‖D‖_∞≤ M, sup_t∈[0,T]∫_^d| x|^4 dρ_t( x)≤ K < ∞,
‖û‖_∞, ‖D̂‖_∞≤ M, sup_t∈[0,T]∫_^d| x|^4 dρ̂_t( x) ≤ K < ∞.
Taking the difference of the paths of their corresponding processes (Y_t)_t∈[0,T], (Ŷ_t)_t∈[0,T] yields
Y_t-Ŷ_t =: z_t = z_0 - ∫_0^tz_s ds + ∫_0^t(u_s-û_s) ds + √(2)∫_0^t(D_s-D̂_s) dW_s.
We apply Itô's formula to obtain
|z_t|^2 = |z_0|^2 - ∫_0^t 2|z_s|^2 ds + ∫_0^t 2⟨ z_s, u_s-û_s⟩ ds +2λ^-1∫_0^t tr((D_s-D̂_s)(D_s-D̂_s)^⊤) ds + ∫_0^t dM_s,
where M_s is a Martingale such that ∫_0^t dM_s = 0.
We apply Cauchy-Schwarz and Young's inequality to get
⟨ z_s, u_s-û_s⟩≤1/2|z_s|^2 + 1/2|u_s-û_s|^2 .
Taking the expectation it follows that
𝔼|z_t|^2 ≤𝔼|z_0|^2- ∫_0^t 𝔼|z_s|^2 ds + ∫_0^t 𝔼|u_s-û_s|^2 ds + 2λ^-1∫_0^t 𝔼 tr((D_s-D̂_s)(D_s-D̂_s)^⊤) ds .
Note that we have interchanged the integral ∫_0^t and the expectation 𝔼 using Fubini's theorem. Moreover, tr((D_s-D̂_s)(D_s-D̂_s)^⊤) = ‖D_s-D̂_s‖_F^2.
Substituting ℳ_β(ρ_t) for u_t, ℳ_β(ρ̂_t) for û_t, √(𝒞_β(ρ_t)) for D_t, and √(𝒞_β(ρ̂_t)) for D̂_t, and using Lemma <ref> and <cit.>, we obtain
𝔼|z_t|^2 ≤𝔼|z_0|^2- ∫_0^t 𝔼|z_s|^2 ds + ∫_0^t 𝔼|ℳ_β(ρ_s)-ℳ_β(ρ̂_s)|^2 ds
+ 2λ^-1∫_0^t 𝔼‖√(𝒞_β(ρ_s))-√(𝒞_β(ρ̂_s))‖_F^2 ds
≤𝔼|z_0|^2 +(c_0^2+2λ^-1 c_1 4d/σ̅-1)∫_0^t𝔼|z_s|^2ds .
Here we used Assumption <ref> and Lemma <ref> to deduce that
‖√(𝒞_β(ρ_s))-√(𝒞_β(ρ̂_s))‖_F^2≤ d ‖√(𝒞_β(ρ_s))-√(𝒞_β(ρ̂_s))‖_2^2 ≤ 4d ‖𝒞_β(ρ̂_s)^-1‖_2 ‖𝒞_β(ρ_s)-𝒞_β(ρ̂_s)‖_2^2
≤4d/σ̅‖𝒞_β(ρ_s)-𝒞_β(ρ̂_s)‖_F^2 .
Applying Grönwall's inequality and using the fact that 𝔼|z_0|^2=0 yields the desired 𝔼|z_t|^2=0, implying uniqueness.
The existence follows by applying the Itô's formula to the law of the solution of (<ref>).
To obtain the uniqueness, consider for each t_0∈(0,T] and compactly supported smooth test function ψ∈ C_c^∞(^d) the Kolmogorov backward equation
∂ h_t/∂ t = -∑_i=1^d∑_k=1^d (C_t)_ik∂^2/∂ x_i∂ x_k h_t-(x-u_t)·∇ h_t, (t,x)∈[0,t_0]×^d; h_t_0 = ψ
By <cit.>, this linear PDE admits the solution
h_t(x) = 𝔼ψ(θ_t_0^t,x), t∈[0,t_0],
where (θ_t_0^t,x)_0≤ t ≤ s ≤ t_0 is the strong solution to the linear SDE
dθ^t,x_s = -(θ^t,x_s-u_s) ds + √(2 C_s)dB_s, θ_t^t,x = x
The chain rule provides, for each (t,x)∈[0,t_0]×^d:
∇_kh_t(x) = ∇_kψ(θ_t_0^t,x)∇_kθ_t_0^t,x.
According to <cit.> we may apply the derivative w.r.t. parameter k to the drift and diffusion coefficients of dθ^t,x_t_0 separately, and simply obtain
d∇_k(θ_s^t,x)_k = ∇_k(θ^t,x_s)_k ds, ∇_k(θ^t,x_t)_k = 1.
We obtain a geometric Brownian motion without diffusion and have
∇_k(θ^t,x_s)_k = exp(s-t)
Therefore, there exists a constant c>0 depending only on ψ s.t.
sup_(t,x)∈[0,t_0]×^d |∇_kh_t(x)| ≤ cexp(T) < ∞, k=1,…,d
Furthermore, for 0≤ t < t+δ < t_0, we have θ_s^{t,x} = θ_s^{t+δ, θ_{t+δ}^{t,x}} for t+δ < s ≤ t_0 and thus
(h_t+δ-h_t)(x)/δ = 1/δ𝔼[ψ(θ_t_0^t+δ,x)-ψ(θ_t_0^t,x)]
= 1/δ𝔼[ψ(θ_t_0^t+δ,x)-ψ(θ_t_0^{t+δ,θ^t,x_t+δ})]
=1/δ𝔼[h_t+δ(x)-h_t+δ(θ^t,x_t+δ)]
=1/δ𝔼[-∫_t^t+δ∑_i=1^d∑_k=1^d(C_s)_ik∂^2/∂ x_i∂ x_kh_t+δ(θ^t,x_s)-(θ^t,x_s-u_s)·∇ h_t+δ(θ^t,x_s) ds]
Since we saw that the law ρ_t=ℙ^θ_t is a classical solution to the linearized FPE (<ref>), the limit δ→0limh_t+δ-h_t(x)/δ exists.
Now suppose ρ^1 and ρ^2 are two weak solutions of (<ref>) with identical initial condition ρ_0^1=ρ_0^2. We set δρ_t=ρ^1_t-ρ^2_t. Now we integrate the solution h to the Kolmogorov backward PDE, which we have seen exists, against this (signed) measure:
∫ h_t_0 dδρ_t_0(x) =
∫_0^t_0∫_^d∂_sh_s(x) dδρ_s(x)ds +∫_0^t_0∫_^d∑_i=1^d∑_k=1^d (C_s)_ik∂^2/∂ x_i∂ x_kh_s dδρ_s(x)ds
+ ∫_0^t_0∫_^d(x-u_s)·∇ h_s dδρ_s(x)ds
= ∫_0^t_0∫_^d∂_sh_s(x) dδρ_s(x)ds + ∫_0^t_0∫_^d-∂_sh_s(x) dδρ_s(x)ds
= 0
Since we chose h_t_0=ψ∈ C_c^∞(^d), we have ∫_^dψ(x)dδρ_t_0(x)=0, which means δρ_t_0=0. Since t_0 was arbitrary, we have ρ^1=ρ^2.
We construct two linear processes (θ̃^l_t)_t∈[0,T] (l=1,2) satisfying
dθ̃^l_t = -(θ̃^l_t-ℳ_β(ρ_t^l))dt + √(2𝒞_β(ρ_t^l))dW_t,
with θ̃^l_0∼ρ_0. The laws of these linearized processes ρ̃^l_t:=ℙ^θ̃^l_t are thereby weak solutions to the linear PDE
∂ρ̃^l_t/∂ t = ∑_i=1^d∑_k=1^d∂^2/∂ x_i∂ x_k[(𝒞_β(ρ^l_t))_ikρ̃^l_t] - ∇·((x-ℳ_β(ρ^l_t))ρ̃^l_t).
By assumption, ρ^l also solves the above PDE, since in this case it is identical to the nonlinear FPE. In the subsequent Proposition <ref>, we show that the weak solution to the above PDE is unique. From these two facts it follows that ρ̃^l=ρ^l. But ρ̃^l=ℙ^θ̃^l is the law of the McKean process
dθ̃^l_t = -(θ̃^l_t-ℳ_β(ρ̃^l_t)) dt + √(2𝒞_β(ρ̃^l_t))dW_t,
which is solvable uniquely up to ℙ-indistinguishability according to Theorem <ref>.
Thus, (θ̃^1_t)_t∈[0,T] and (θ̃^2_t)_t∈[0,T] both solve the McKean process in (<ref>) with the same initial data and driving noise. Therefore
0 = 𝔼sup_t∈[0,T]|θ̃^1_t-θ̃^2_t|^2
≥sup_t∈[0,T]𝔼|θ̃^1_t-θ̃^2_t|^2≥sup_t∈[0,T] W_2^2(ρ̃^1_t,ρ̃^2_t)
= sup_t∈[0,T] W_2^2(ρ^1_t,ρ^2_t),
which is the desired result.
§ ADDITIONAL RESULTS USED IN THE PROOF
Let E be a Polish space, and let ρ^J=1/J∑_j=1^Jδ_θ^j,J be 𝒫(E)-valued random variables, where (θ^1,J,…,θ^J,J) is distributed according to u_J. Assume that (u_J)_J∈ℕ is a sequence of symmetric probabilities on E^J. Then, (ρ^J)_J∈ℕ is tight if and only if (ℙ^θ^1,J)_J∈ℕ is tight.
Let {X^n}_n∈ℕ be a sequence of stochastic processes on a common probability space (Ω,ℱ,ℙ) with X^n: Ω→ C([0,T];ℝ^d). The sequence is tight on C([0,T];ℝ^d) if the following two conditions hold:
* {ℙ^X^n_t}_n∈ℕ is tight on ^d for every t∈[0,T].
* ∀ϵ> 0,η> 0 and m there exists δ_0∈(0,T) and n_0∈ℕ s.t., when n≥ n_0, δ≤δ_0 and τ is a σ(X^n_s,s∈[0,T])-stopping time with finite range and τ≤ m, then:
ℙ(|X^n_τ+δ-X_τ^n|≥ϵ) ≤η.
The solvability of the McKean-Vlasov type SDE is obtained by use of the following Theorem.
Schaefer's Fixed Point Theorem: Let 𝒯:𝔅→𝔅 be a compact mapping of a Banach space 𝔅 into itself and suppose there is a constant M such that ‖ x‖_𝔅<M whenever x∈𝔅 and τ∈[0,1] with x=τ𝒯x.
Then 𝒯 has a fixed point.
|
http://arxiv.org/abs/2409.02579v2 | 20240904100015 | Assembling the Puzzle: Exploring Collaboration and Data Sensemaking in Nursing Practices for Remote Patient Monitoring | [
"Mihnea Calota",
"Janet Yi-Ching Huang",
"Lin-Lin Chen",
"Mathias Funk"
] | cs.HC | [
"cs.HC"
] |
Assembling the Puzzle: Exploring Collaboration and Data Sensemaking in Nursing Practices for Remote Patient Monitoring
m.s.calota1@tue.nl
0009-0002-5979-2439
Eindhoven University of Technology
Eindhoven
The Netherlands
y.c.huang@tue.nl
0000-0002-8204-4327
Eindhoven University of Technology
Eindhoven
The Netherlands
l.chen@tue.nl
0000-0001-9887-9858
Eindhoven University of Technology
Eindhoven
The Netherlands
m.funk@tue.nl
0000-0001-5877-2802
Eindhoven University of Technology
Eindhoven
The Netherlands
§ ABSTRACT
Remote patient monitoring (RPM) involves the remote collection and transmission of patient health data, serving as a notable application of data-driven healthcare. This technology facilitates clinical monitoring and decision-making, offering benefits like reduced healthcare costs and improved patient outcomes. However, RPM also introduces challenges common to data-driven healthcare, such as additional data work that can disrupt clinician's workflow. This study explores the daily practices, collaboration mechanisms, and sensemaking processes of nurses in RPM through field observations and interviews with six stakeholders. Preliminary results indicate that RPM's scale-up pushes clinicians toward asynchronous collaboration. Data sensemaking is crucial for this type of collaboration, but existing technologies often create friction rather than support. This work provides empirical insights into clinical workflow in nursing practice, especially RPM. We suggest recognizing data sensemaking as a distinct nursing practice within data work and recommend further investigation into its role in the workflow of nurses in RPM.
CCS Concepts: Human-centered computing → Empirical studies in collaborative and social computing
The knowledge transfer between a past monitoring nurse (left) and a present specialist nurse (right) is interfaced through the health records system, leading to asynchronous collaboration. The monitoring nurse collected patient data (green, purple, blue, yellow) and stored it in the health records system. The specialist nurse needs information about a specific patient (yellow), and collects some data from the patient directly. They still need to extract data from the health records system. Finding and decoding this data forms a “data sensemaking” activity. This task creates an increased cognitive load and friction in the workflow of the nurse.
§ INTRODUCTION
The healthcare sector is constantly changing, with new technologies and methodologies implemented daily. Data-driven healthcare refers to the practice of using tools and technology to collect, analyze, and leverage patient data to improve healthcare outcomes, enhance decision-making, and optimize clinical workflows. Leveraging patient data allows clinicians to tailor the care programs to each individual's needs or to use predictive analysis to intervene early and reduce hospital (re)admissions <cit.>. In turn, this can lead to lower costs for healthcare providers and improved quality of care for patients <cit.>.
However, data-driven healthcare technologies often face tremendous challenges in transitioning from the design stage to real-world implementation <cit.>. These technologies often fail in practice due to poor contextual fit or disruption of established clinical workflows <cit.>. For instance, a tool that does not integrate seamlessly with existing electronic systems can create additional administrative burdens for clinicians, distracting them from patient care. Similarly, technologies that do not align with the routines and needs of healthcare professionals are often underutilized or misused, negating their potential benefits. To ensure successful implementation and adoption, it is important to gain a deeper understanding of the specific contexts within which these technologies will be used.
One notable application of data-driven healthcare is Remote Patient Monitoring (RPM). It involves the intensive use of technology to collect and transmit patient data (i.e., physiological measurements and symptoms) from a remote location (the patient's home) to the clinicians for interpretation and decision-making. In turn, clinicians provide remote treatment through advice, coaching, or medication adjustments based on the insights derived from the provided data and their professional experience. RPM was shown to offer numerous advantages, including improved quality of life for patients, reduced healthcare costs, and a decrease in hospitalization needs <cit.>. However, the remote nature of RPM requires a shift in the way clinicians work, involving considerable data work <cit.> and distributing collaboration between clinicians over space and time.
Data-driven healthcare, and RPM as a subset of it, is inherently connected to data work conducted most often by clinicians. They handle data gathering, storage, retrieval, and sensemaking <cit.>. Understanding how clinicians react to the changes in their workflow imposed by data work is essential to the successful implementation of RPM programs. Previous studies have investigated various aspects of designing clinician support tools <cit.> and their impact on the clinician workflow <cit.>. However, most focus on operational constraints or tangible measures of workflow efficiency, with little work trying to unpack data work and cognitive activities clinicians carry out. Our research aims to investigate data work activities in RPM to further understand how they influence clinicians' workflow and collaboration strategies.
§.§ The interplay of human and technological factors in RPM
Specially trained nurses play an essential role in keeping an RPM program running smoothly. They act as an interface between patients, data records, and other clinicians. However, their key placement in the RPM system leads to nurses experiencing some of the most disruptive changes to their pre-RPM workflows, as their jobs are highly collaborative and interconnected with the management and interpretation of patient data <cit.>. Despite the importance of nurse-led monitoring programs, prior studies often focus more on the patient rather than the nurse experience <cit.>.
The implementation of new technologies in RPM requires nurses to adapt many facets of their usual clinical practice: face-to-face consults become phone calls, collaboration with colleagues happens remotely or asynchronously, measurements are taken by the patients themselves, and new data types coming from new sources need be filtered, cleaned and filed in the same unchanged electronic health record (EHR)<cit.><cit.><cit.>. In this context, Grisot et al. emphasize the critical role specially trained nurses (i.e., monitoring nurses) play in remote monitoring. Their study analyzes novel nursing practices that emerge from RPM implementation <cit.>. Their results show that data work conducted by specialized monitoring nurses is key to maintaining high-quality remote healthcare.
Therefore, we consider the monitoring nurse to be one of the central stakeholders in the RPM system. We aim to look at the RPM workflow through the nurse's lens to understand their experience, particularly their routine practice, perception, and challenges surrounding data work and data-driven healthcare.
§.§ Data work for RPM
Data work is a concept used to understand nurse practice in RPM. This study defines data work as any human activity related to creating, collecting, managing, curating, analyzing, interpreting, and communicating data <cit.>. Data is not simply ready to be harvested but must be created, managed, and formatted through intentional effort <cit.>. This is especially relevant in healthcare, where the desire to benefit from “data-driven” healthcare often clashes with the reality that intense data work will be carried out, most often by clinicians.
In the context of RPM, Islind et al. report on how nurses in the monitoring center translate and make data useful <cit.>. Their study focuses on three shifts in nursing practice stemming from the introduction of patient-generated data as a core decision-making material: a shift in questioning tactics, a shift in work distribution, and, most importantly, a shift in decision-making.
The shift in questioning tactics stems from the physical distance between the clinician and the patient, imposed by the remote nature of RPM. Successful diagnosis depends upon the clinician's ability to perceive changes in the patient, and therefore, they have to ask them about things they cannot see in the data <cit.>.
The shift in work distribution aligns with the RPM workflow, where continuous data work is incorporated into the daily routine of a nurse. This contrasts with the traditional model, which primarily involved one-time data effort during face-to-face consultations.
The shifts in work distribution and questioning strategy, with the addition of data work, also directly influence the decision-making processes of clinicians <cit.>. This is because nurses rely on browsing and making sense of the data to make their decisions. While technological solutions like EHRs and monitoring apps (such as Luscii <cit.>) are being developed to support these shifts towards personalized healthcare, their actual impact on nurse practice and the level of support they provide remain unclear <cit.><cit.>.
In personalized healthcare, three key nurse practices related to data work have been identified: preparation activities, continuous adjustments, and questionnaire fine-tuning <cit.>. Sensemaking activities, which are critical yet dispersed across all three practices, are particularly notable. We propose positioning data sensemaking as a distinct nurse practice within data work and exploring its role in the workflow of nurses in RPM.
§.§ (Data) sensemaking
Sensemaking research has been explored across a variety of fields and participant types, but it consistently focuses on how humans transform ambiguous and complex situations into understandable and actionable information <cit.>. In areas like user interface design, sensemaking is regarded as an independent framework used to analyze the cognitive processes of users <cit.>. In RPM, a nurse is one part of a complex system, transforming intricate and disconnected data points into decisions and actions. In this context, a nurse is embedded in a data-driven environment, and data work and sensemaking become integral to their professional identity <cit.>.
In our study, sensemaking activities are seen as cognitive and sometimes collaborative processes where varied data sources (i.e., patient measurements, EHR data, phone calls) are interpreted to inform personalized care and clinical decision-making. Data sensemaking is not entirely an internal process but is influenced by material props, in this case, data <cit.>. This perspective is based on the work of Suh et al. on “Sensecape” <cit.>, and the work of Hultin <cit.> on how sensemaking occurs in the emergency room. The primary goal of data sensemaking for clinicians is to transform discrete information pieces into coherent, actionable insights.
Despite the advantages of RPM in improving patient outcomes and reducing healthcare costs, significant challenges remain in integrating data work into clinical workflows. Previous research often overlooked the cognitive and collaborative activities nurses perform to manage and interpret patient data. This gap highlights the need to explore data work practices and their impact on nurse experience. Addressing this gap is crucial for developing data-driven solutions that support nurses without disrupting workflows. This study aims to fill this void by examining the specific data work processes and collaborative practices of nurses in the RPM settings.
§ METHOD
This study aims to explore and understand how data-driven characteristics of the RPM environment influence nurse practices and collaboration strategies. We explore two specific questions: how do nurses in an RPM setting engage in collaboration and how does the available technology at their disposal offer support or create friction.
§.§.§ Study design and participants
In this exploratory study, we recruited six participants from two hospitals in the Netherlands. Both hospitals employ a commercial state-of-the-art remote monitoring system, relying on clinicians (i.e., monitoring nurses, specialist nurses, and physicians) to provide remote care using health records, patient data, and patient-generated measurements recorded in the Luscii mobile app <cit.>. The participants had varied roles in the RPM workflow, as shown in Table <ref>. The two institutions had different scales of RPM operation, with Hospital 1 having around 200 patients in 3 care pathways and Hospital 2 having around 2500 patients in over 15 care pathways.
§.§.§ Data Collection
We used field observations and semi-structured interviews to understand the participants' personal and professional contexts and their experiences in RPM.
As prior work has argued, immersing researchers in the actual work environment of nurses can yield valuable insights into their daily tasks, interactions, and challenges <cit.>. Observations were conducted in the mornings, aligning with what the participants regarded as peak activity times for RPM interactions. We initially discussed the expectations for the day with the nurses and then minimized interaction to avoid influencing their behaviors. Detailed notes, including timestamps of various actions, were recorded throughout each observation session. The sessions were conducted independently, aligning with the clinical specialists' shifts.
After the observations, we conducted semi-structured interviews with the six participants. These interviews focused on identifying the types of activities the nurses performed during RPM sessions, their perceptions of the data work, and any challenges they faced.
§.§.§ Data Analysis
We qualitatively analyzed data from field notes and interview notes. We used affinity diagrams to organize and categorize the data, which helped identify common themes and patterns in the nurses' activities and interactions during RPM <cit.>. We also created timelines and workflow journey maps from the observational notes, which visually represent the nurses' tasks and decision-making processes across typical RPM sessions. These visualizations allowed us to examine how different activities overlap, how collaboration takes place, and where various data-related tasks are performed throughout an average workday. A streamlined example of such timelines can be seen in Figure <ref>.
§ RESULTS
The different interviews and observation sessions offered a chance to understand the work environment surrounding RPM. Three main themes emerged: the role of the monitoring nurse in the RPM system, the asynchronous nature of remote care, and the role data sensemaking plays in collaboration.
§.§ The role of the monitoring nurse
The daily RPM workflow was highly structured, with monitoring nurses primarily managing alerts and patient calls before noon. Unresolved cases were then transferred to specialist nurses, who investigated, made decisions, and communicated these back to the monitoring nurse. In cases of particular complexity, a specialized physician was consulted. According to P6, out of approximately 700 alerts at the start of the day, around 15% of them will be seen by a specialist nurse, and only a few will require the attention of a physician.
Monitoring nurses performed various tasks, including sorting alerts, contacting patients for clarifications, interpreting data, and resolving technical issues. They handled the majority of the data work and sensemaking activities that kept the RPM system running.
P2 stated, ”I am doing so much more than just sort through alerts.” and ”I help patients feel safe and listened to, I solve their technical issues. As a [monitoring] nurse, you need to know a bit of everything.”
Despite variations in team sizes and patient loads, the workflow for a single monitoring nurse remained largely consistent. P2 handled all alerts before noon and compiled a report of the patients requiring further discussion with the specialist nurse (P1). As shown in Figure <ref>, she alternated between calling patients, making sense of different alerts, and browsing the EHR.
Participants provided diverse responses when asked about the role and value of a monitoring nurse. Some responses included: ”the [monitoring] nurse is there to make the patient feel safe” (P5), ”to call [the patients], hear their story and understand what they are experiencing” (P2), ”the [monitoring] nurse is the one that keeps the whole system running” (P3), ”[their] role is to teach patients to self manage, or at least support the ones who can't do that” (P6) and ”prevent them from coming back to the hospital.” (P1)
§.§ The asynchronous nature of remote care
During the observations, we noticed that the remote, data-driven nature of the RPM system shapes collaboration strategies. It encourages frequent and continuous monitoring that requires a relatively low effort in the moment, spread over a prolonged period. As a result, there were almost no traditional intense collaborative efforts. What we often saw were smaller, lower-effort asynchronous collaboration moments. Instead of meetings and live discussions, the nurses typically left notes and small pieces of data in the EHR for later use or communication.
For instance, P2 noted a patient's state of dizziness in the patient's consult file after a phone call. She decided not to adjust anything and left a note for a colleague to check on the patient later by mentioning ”no medication change for now, to be checked again in two days” in the file. Later, P1 consulted a patient via a phone call after noticing some inconsistencies in the data submitted through the monitoring app. After the call, P1 still had questions, so he checked the EHR for notes left by his colleagues about this specific patient. This interaction, captured in Figure <ref>, represents an instance of asynchronous collaboration.
Throughout the observations, we saw both sides of this asynchronous collaboration. As depicted in Figure <ref>, the monitoring nurse went back and forth between inputting data for future reference and searching for required data in the system.
Both P4 and P6 mentioned the challenges of this process. ”Most of the time, a written note captures less information than a conversation” (P6). P4 explained that they need to cross-reference notes with other data points, such as medication, admissions, or measurements, to fully understand them. For example, P4 found a note of a patient having trouble breathing in the morning. This information became more valuable after reviewing previous notes and the patient's admission history. By piecing together these facts, P4 deduced that the patient had previously been admitted to the hospital for influenza and was likely still recovering.
Lastly, our participants highlighted differences in how the two hospitals organize their RPM programs. The differences were related to how much the clinicians rely on protocols and technology for collaboration. For example, in a smaller RPM program, P1 and P2 could quickly and flexibly collaborate through quick calls and easily understood notes. As P2 noted, ”[P1] and I are working very closely together on the Heart Failure program, so I already know when to call him, when he is available. Sometimes, he just calls early because he has some free time, and I just make the time, and we have a quick chat about a few patients.”
By contrast, monitoring nurses in larger programs (P5 and P6) relied primarily on asynchronous collaboration through reports, notes, and protocols due to the program's large scale. As P6 explained, ”We need to have all the reports and urgent patients ready to go when we contact a specialist nurse. On paper, we have one hour to talk to them around 13:00. In reality, they are very busy, so we just need to have everything organized and placed in the system so they can check [the patients] there when they have more time.”
P3 also anticipated that ”when the program scales up, [they] will just have to do everything by the protocol,” with technology playing a crucial role in mediating clinician interactions.
§.§ The role of data sensemaking in collaboration
We observed that nurses frequently engage in data sensemaking activities to understand a patient's case. Most of the time, when frustration was visibly expressed (e.g., body language, sighs) or explicitly mentioned, they were trying to locate information in the freeform notes. P2 explained that they could “pin” certain information (e.g., smoking behavior) to the top of a patient's profile, eliminating the need to ask for or search for this information in the EHR again. P3 emphasized the significant amount of time spent “looking for the information you need.”
During the interview, P4 discussed how clinicians are motivated to make their notes as complete as possible. ”You want to have everything in there, both from a legal and insurance perspective and to make sure you didn't miss something important. But everything is just text blocks, and so much information is duplicated. And you have to [search and search] for what you need.” (P4)
P3 argued that having too much data can cause as much friction as having too little, and P2 observed that the EHR “is overloaded, showing you everything, all the time, while in reality [the nurse] needs different things at different times, with different patients.” P5 and P6 agreed, describing the EHR more like a data management system than a support tool. They expressed a desire for a customizable interface that would allow them to selectively view relevant information.
P1 also saw the data work and sensemaking as potential bottlenecks for scaling up the RPM system and believed that more monitoring nurses would be needed to handle the required data work. In contrast, the monitoring nurses felt that data work is not where their time is best spent, but they ”do it because someone has to do it for now” (P6).
§ DISCUSSION
RPM represents a socio-technical system that includes patients, nurses, physicians, communication channels, data gathering systems, data storage systems, and the data itself. This study uncovers potential friction points in the RPM system by exploring how nurses engage with data and collaborate with other clinicians.
§.§ The humans in the RPM workflows
The monitoring nurses serve as the interface between patients, other clinicians, and the supporting data systems. One finding of this study is that data work emerges in their daily practice and heavily influences their workflows as they deal with higher workloads and new types of tasks they may never have performed before. In addition, data work is spread thinly over almost all activities of the monitoring nurses: they need to “know a bit of everything,” and their role extends beyond simply filtering alerts and inputting data into the EHR. Many participants recognized the core value of the monitoring nurse in the interpersonal relations with patients and their ability to guide patients on self-management while providing a sense of safety. While quality data work is essential to keep the RPM system running, nurses prefer not to spend the majority of their effort on this aspect, nor do they feel it has the most impact on the patient's life.
The nurses attend to multiple patients at varying times, but a constant element in their daily practice is their interaction with the EHR, which they use to transmit or retrieve patient information (Figure <ref>). All participants consistently mentioned the interpretation of patient— or clinician-generated data as a crucial part of their collaboration and job performance. Our results highlight two sources of friction that could affect nurses' experiences in the RPM workflow: the need to shift to asynchronous collaboration and the obstacle of conducting sensemaking activities with currently available technology.
§.§ The burden of synchronizing asynchronous data and collaboration in RPM
Clinicians typically have demanding schedules and often work in parallel with their colleagues, making it difficult to align their schedules. To effectively leverage new technologies like RPM, it is crucial to understand and prioritize clinicians' workflow and context. Previous studies have identified a gap between the design and real-world implementation of data-driven solutions <cit.>. These solutions may perform well in design studios, but they can disrupt clinicians' workflows and lack contextual fit in clinical practice. Preliminary findings from this study indicate that while RPM is pushing clinicians to adopt asynchronous collaboration strategies, the design of the available technology does not fully consider this new workflow.
Asynchronous collaboration has the advantage of freeing clinicians from the constraints of aligning busy schedules. However, this new collaboration style presents challenges, especially when information must be encoded, processed through a central data management system (such as the EHR), and then decoded (Figure <ref>). Our findings highlight that it is important to strike a balance between too little and too much information available at any moment. The participants voiced their dissatisfaction with the current design principle of the EHR, which is having access to everything at all times. Since clinicians tend to take precautions and prefer to be complete in their notes, the EHR is being populated with dense, lengthy notes and information duplicates, which further add difficulty to the sensemaking activities.
This paper offers a unique insight: asynchronous collaboration in the medical field differs from other sectors. For example, when programmers or researchers work on a shared artifact (such as software or a paper), their cooperation focuses on the changes and additions made to the artifact and is outcome-oriented (i.e., finishing the paper) <cit.>. In contrast, clinicians' asynchronous collaboration is focused on a transfer of knowledge, spanning various informational systems, patients, or cases. There is no central artifact to track “progress” or “completion” of the task. While clinicians usually look at the patient's well-being as a measure of success, this metric does not mark an end to the collaboration efforts.
Lastly, the scale of operation is directly proportional to the amount of asynchronous collaboration. With larger operations, the teams working in RPM expand as well and are forced to rely on protocols and technology to collaborate effectively. This contrast was already evident among the two institutions of our participants, with technology mediating the majority of interactions in the larger hospital of P5 and P6. As RPM evolves and expands, this asynchronous collaboration will gain prominence. Therefore, support is needed to facilitate this collaboration, particularly when it comes to the supporting technology.
§.§ Foregrounding and unpacking data sensemaking in RPM
A sensemaking activity in RPM starts with a trigger, which is usually an alarm from the monitoring app or a prompt from a colleague. This prompts the clinician to start gathering information to resolve the trigger. This activity concludes with a decision to either act, call the patient, or collaborate with a colleague. The most challenging part of the sensemaking process is aggregating all the necessary information. Information is frequently distributed across various platforms, each with its interface and data presentation method, adding layers of complexity to the retrieval process. Interpreting this information feels meaningful once it is assembled, but locating it can be frustrating, especially when the information is in “blocks full of free text”.
In the context of RPM, data sensemaking largely involves the hospital's EHR system, remote data-gathering platforms like Luscii <cit.>, and patient phone calls. In this ecosystem, clinicians have to bridge the gaps between different data sources through their own efforts (Figure <ref>). They sometimes use other tools, such as digital and handwritten notes, to aid in selecting relevant data and creating summaries. While such ad-hoc strategies are helpful in the moment, they do not integrate seamlessly with the full system, and friction is added to the overall process. Our study confirms that re-contextualizing data is a critical but challenging aspect of the sensemaking process for nurses, as previously observed in related research <cit.>. This challenge is specifically relevant when the nurse needs access to qualitative information since the “why” and the “how” behind a patient's condition are often lost in long free-form text blocks. This finding supports previous studies on the use of EHRs in the hospital <cit.>.
The challenge extends beyond just improving the EHR systems. It encompasses the need for a holistic approach to system design that considers both the usability and accessibility of information. Such enhancements should aim to improve the presentation and coherence of data across different platforms, alongside the integration of support tools that facilitate a smoother knowledge transfer among clinicians and a smoother sensemaking process.
§.§ Future Work
Considering these novel insights, we see an opportunity to focus future efforts on asynchronous collaboration among clinicians. By understanding and designing for asynchronous knowledge transfer, we could mitigate the poor contextual fit or workflow disruption that often leads to implementation failures in data-driven technologies. Specifically, we see an opportunity to explore how to design tools that support the data sensemaking activities of clinicians.
We have identified data sensemaking as an essential part of collaboration in RPM since clinicians have to encode and decode the information they wish to transfer from one another. New tools should specifically focus on facilitating this knowledge transfer between clinicians by supporting or minimizing the need to forage extensively for qualitative data.
The limitation of this exploratory study is its small sample size. With only six participants, it may not reflect the broader population of RPM clinicians. While the observations and interviews delved into their experiences, the results may be biased towards the specific institutions and programs the participants belong to. Despite this limitation, our initial findings still highlight the importance of understanding asynchronous collaboration and the role of data sensemaking in it. We recommend that future research incorporate more diverse samples in a longitudinal study and consider the experiences of a wider range of healthcare providers to validate and expand upon these findings.
§ CONCLUSION
This study identified a novel area of focus: asynchronous collaboration in Remote Patient Monitoring and the data sensemaking activities that support it. During observations, we noticed monitoring nurses are central to RPM, managing and interpreting large volumes of data. The shift to asynchronous collaboration is necessary for continuous monitoring, but existing technologies inadequately support these activities, causing friction in the daily workflow.
Current tools often increase nurses' cognitive load, due to the need to encode, decode and forage for information in the central record system. Future efforts should focus on developing technologies that enhance support for data sensemaking and facilitate smoother knowledge transfer.
In conclusion, addressing these technological and workflow challenges is vital for the effectiveness and scalability of RPM systems. Improved support for asynchronous collaboration and data sensemaking can lead to better nurse experiences and easier implementation of data-driven solutions in RPM.
§ ACKNOWLEDGEMENTS
We would like to thank all of our participants for allowing us a glimpse into their day-to-day work and for taking time from their busy schedules to answer our questions.
|
http://arxiv.org/abs/2409.03158v1 | 20240905011829 | Magic Billiards: the Case of Elliptical Boundaries | [
"Vladimir Dragović",
"Milena Radnović"
] | math.DS | [
"math.DS",
"nlin.SI",
"37C83, 37J35, 37J39"
] |
The YMDB catalog: Young massive detached binaries for the determination of high-precision absolute stellar parameters
(Full versions of Tables 2, 3, and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/.)
Pablo Martín-Ravelo (1,2), Roberto Gamen (3,4), Julia I. Arias (1), André-Nicolas Chené (2), and Rodolfo H. Barbá, In Memoriam (1962–2021) (1)
Received: 20 June 2024 / Accepted: 05 July 2024
Dedicated to Academician A. T. Fomenko on the occasion of his 80th anniversary.
§ ABSTRACT
In this work, we introduce a novel concept of magic billiard games and analyse their properties in the case of elliptical boundaries.
We provide explicit conditions for periodicity in algebro-geometric, analytic, and polynomial forms.
A topological description of those billiards is given using Fomenko graphs.
Keywords: integrable systems, elliptical billiards, topological billiards, Fomenko graphs, periodic trajectories, Cayley's conditions, divisors on elliptic curves, polynomial Pell's equations
MSC2020: 37C83, 37J35, 37J39
§ INTRODUCTION
The elliptic billiard [Bir, KT1991, Tab] is a notable example of a completely Liouville integrable system.
Thus, the Liouville-Arnold theorem implies that the phase space of that particular billiard is foliated into invariant Liouville tori <cit.>.
Fomenko and his school developed a beautiful theory for the topological description and classification of integrable systems using what are now known as Fomenko graphs and Fomenko-Zieschang invariants, see [Fomenko1987, FZ1991] and, in particular, the book <cit.>, which contains the fundamentals of that theory together with a large list of well-known integrable systems, such as the integrable cases of rigid body motion and geodesic flows on surfaces.
The use of topological tools in the study of integrable billiards was initiated by the authors in [DR2009], see also [DR2010, DR2011].
Further details and applications to other integrable systems can be found in the literature related to billiards [Fokicheva2014, R2015, DR2017, VK2018, FV2019, FV2019a, PRK2020, DGR2021, DGR2022, BF2024].
For applications in the broader theory of Hamiltonian systems with two degrees of freedom, see [BMF1990, RRK2008, BBM2010].
An important milestone of this theory is the so-called Fomenko conjecture, which emphasizes a surprising universality of billiard dynamics. This conjecture concerns the realization of the topology of Liouville foliations of smooth and real-analytic integrable Hamiltonian systems by integrable billiards, see e.g. <cit.> and <cit.> and references therein.
In this paper, we introduce general magic billiards, where after hitting the boundary the particle is magically transported to another point of the boundary and continues its motion from there.
A formal definition of this class of systems is given in Section <ref>.
In Section <ref>, we focus on magic billiards within an ellipse, and among them only on those where the equations of motion in elliptic coordinates remain the same as for the standard billiard, so that the resulting system is integrable.
We provide conditions for periodicity of such systems and give topological description using Fomenko graphs.
We note that one of the cases we consider, the so-called billiards with slipping, was recently introduced and studied by Fomenko and his school [FVZ2021, FV2021, VZ2022, Fom2023, Zav2023].
In Section <ref>, we consider magic billiards within an elliptic annulus.
The final Section <ref> contains a discussion.
§ DEFINITION OF MAGIC BILLIARDS
In this section, we will introduce a new class of dynamics, where a particle moves along straight segments by constant speed within a given domain in the plane, and when it reaches its boundary, it is transported to another point of the domain boundary from where it continues the motion within the domain.
More formally, we will introduce that dynamics as follows.
Let D be a given domain in the plane, bounded by a smooth closed curve. Suppose that φ is a continuous bijective mapping of the boundary ∂ D onto itself.
Now, consider the subset T(∂ D) of the tangent fibration of the plane consisting only of the tangent spaces at the points of the curve ∂ D.
Let φ^* be a continuous bijective mapping of T(∂ D) onto itself, satisfying the following:
* π∘φ^*=φ∘π, where π:T(∂ D)→∂ D is the projection to the base points;
* φ^* maps tangent vectors to the boundary ∂ D to tangent vectors to that boundary curve;
* vectors pointing outwards D are mapped to vectors pointing inwards D.
A magic billiard (D,φ,φ^*) is a dynamical system where a particle moves with a unit speed and without constraints in the interior of D, and when it hits the boundary at a point A with velocity v⃗, it will bounce off at point φ(A) with velocity φ^*(v⃗).
Notice that the standard billiard in D belongs to the class of just defined magic billiards.
There, mapping φ is the identity and φ^* is the billiard reflection, i.e. reflection with respect to the direction of the tangent line to the boundary at each point.
Another class of magic billiards are billiards with slipping, see <cit.>.
§ MAGIC BILLIARDS WITHIN AN ELLIPSE
Before starting the analysis of novel examples of magic elliptical billiards, we will review the standard billiard within an ellipse.
Suppose that the ellipse is given by:
: x^2/a+y^2/b=1,
a>b>0.
Following classical ideas of Jacobi, one can introduce the elliptic coordinates (λ_1,λ_2), which are, for each given point in the plane, the parameters of an ellipse and a hyperbola from the confocal family:
_λ : x^2/a-λ+y^2/b-λ=1,
which intersect at that point.
Note that each billiard trajectory within has a unique caustic _α, which is touching each segment of the trajectory.
The differential equation of the billiard motion is separated in elliptic coordinates:
dλ_1/√((a-λ_1)(b-λ_1)(α-λ_1)) + dλ_2/√((a-λ_2)(b-λ_2)(α-λ_2)) = 0.
We are interested in magic billiards which keep those nice geometric and analytic properties of standard elliptic billiards, so we will focus on the mappings φ and φ^* which preserve the equation (<ref>).
In particular, we note that the elliptic coordinates remain unchanged for φ being one of the following:
* the reflection with respect to one of the axes of the billiard boundary ; or
* the half-turn around the center of .
Notice that such φ is defined on the whole plane, not only on the boundary of the billiard table.
Moreover, because the elliptic coordinates are invariant with respect to reflections with respect to the axes, the equation (<ref>) will also be invariant if φ^* is the composition of the billiard reflection in the point of impact and the differential map dφ.
Thus, for such φ and φ^*, the dynamics of the magic billiard (,φ,φ^*) in elliptic coordinates will be identical to the usual billiard motion.
In particular, each trajectory of such magic billiards will have a unique caustic from the family of conics which are confocal with the boundary .
Note that for such φ, the configuration space can be defined as 𝒟/∼, where 𝒟 is the billiard table, and x∼φ(x), for x∈=∂𝒟.
The phase space is defined, analogously, when the velocity vectors mapped into each other by φ^* are identified.
Next, we will consider each of those mappings separately.
For each of them, we will derive periodicity conditions in three forms: algebro-geometric, analytic, and polynomial.
Since φ and φ^* are involutions, the n-periodicity conditions for even n will be the same as for the standard elliptical billiard.
For odd periods, the conditions will be different, and will be distinct for each case that we consider.
We will also present Fomenko graphs corresponding to the Liouville foliation of the isoenergy manifold.
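As a quick numerical illustration of this setup (the sketch below is ours and not part of the original exposition), the following Python snippet iterates the magic billiard map inside the boundary ellipse for the three choices of φ considered in this section and checks that every chord of a trajectory is tangent to one and the same conic of the confocal family. It uses the standard tangency condition for a chord y = kx + m, namely λ = (a k^2 + b - m^2)/(k^2 + 1); the boundary parameters and initial data are arbitrary test values.

import numpy as np

a, b = 4.0, 1.0                                  # squared semi-axes of the boundary, a > b > 0

def next_hit(p, v):
    # forward intersection of the ray p + t v with the boundary ellipse x^2/a + y^2/b = 1
    A = v[0]**2 / a + v[1]**2 / b
    B = 2.0 * (p[0] * v[0] / a + p[1] * v[1] / b)
    C = p[0]**2 / a + p[1]**2 / b - 1.0
    t = max(np.roots([A, B, C]).real)            # the larger root is the forward hit
    return p + t * v

def reflect(q, v):
    # ordinary billiard reflection at the boundary point q
    n = np.array([q[0] / a, q[1] / b])
    n = n / np.linalg.norm(n)
    return v - 2.0 * np.dot(v, n) * n

def caustic(p, v):
    # parameter of the confocal conic tangent to the chord through p with direction v
    k = v[1] / v[0]                              # assumes a non-vertical chord
    m = p[1] - k * p[0]
    return (a * k**2 + b - m**2) / (k**2 + 1.0)

PHI = {"flip over long axis":  lambda q, v: (q * [1, -1], v * [1, -1]),
       "flip over short axis": lambda q, v: (q * [-1, 1], v * [-1, 1]),
       "half-turn":            lambda q, v: (-q, -v)}

p0 = np.array([0.5, 0.3])
v0 = np.array([np.cos(0.7), np.sin(0.7)])
for name, phi in PHI.items():
    q, u, lams = p0, v0, []
    for _ in range(500):
        lams.append(caustic(q, u))
        q = next_hit(q, u)
        q, u = phi(q, reflect(q, u))
    print(name, "spread of the caustic parameter:", np.ptp(lams))

The printed spread is at the level of rounding error for all three mappings, in agreement with the claim that each trajectory of these magic billiards has a unique caustic from the confocal family.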
§.§ Elliptical billiard with flipping over long axis
In this system, φ is the reflection with respect to the longer axis.
Examples of its trajectories are depicted in Figure <ref>.
Periodicity conditions for such trajectories can be obtained by applying the results from <cit.>.
In the following theorem, we present such conditions in algebro-geometric form, from which the classical Cayley-type form and the polynomial form can be derived.
In the billiard table bounded by the ellipse given by (<ref>), consider the magic billiard with flipping over the long axis.
Consider a trajectory of such billiard with the caustic _β from the confocal family (<ref>).
Such a trajectory
is n-periodic if and only if one of the following conditions is satisfied:
* n is even and nQ_0∼ nQ_∞;
* n is odd, _β is hyperbola, and nQ_0∼ nQ_b.
Here Q_0, Q_b, Q_∞ denote the points with coordinates (0,√(abβ)), (b,0), (∞,∞) on the elliptic curve:
y^2=(a-x)(b-x)(β-x).
Integrating (<ref>) along a trajectory of the magic billiard, we get that the n-periodicity condition will be equivalent to the following divisor condition:
n(Q_0-Q_c_1)+m(Q_c_2-Q_a)∼0
on the elliptic curve (<ref>).
Here, we have {c_1,c_2}={β,b} with c_1<c_2, while Q_β and Q_a denote the points of the curve with coordinates (β,0) and (a,0), respectively.
Note that, on the trajectory, the elliptic coordinates λ_1 and λ_2 belong to the intervals [0,c_1] and [c_2,a] respectively.
Natural numbers n and m represent the numbers of times each of them traces back and forth its respective interval along the closed trajectory.
Note that the short axis of the ellipse is the degenerate conic _a of the confocal family.
Since that axis must be crossed even number of times along a closed trajectory, we have that m is even, thus the periodicity condition (<ref>) reduces to n(Q_0-Q_c_1)∼0.
On the other hand, the sum of the number of the flippings over the long axis and the number of times when the billiard particle actually crossed the long axis must also be even.
That sum equals n+m if _β is an ellipse, and 2n if _β is hyperbola.
From there, we conclude that all closed trajectories with ellipse as caustic must have even period.
Now, the required conditions follow from the fact that c_1=b when 𝒞_β is hyperbola and 2Q_β∼2Q_∞.
Now, analogously to <cit.>, we can obtain the conditions of Cayley's type.
A trajectory of the billiard within with flipping over the long axis is n-periodic if and only if one of the following conditions is satisfied:
* n is even and
| [ B_3 B_4 … B_n/2+1; B_4 B_5 … B_n/2+2; …; B_n/2+1 B_n/2+2 … B_n-1 ]|=0;
* n is odd, the caustic is hyperbola, and
| [ C_2 C_3 … C_(n+1)/2; C_3 C_4 … C_(n+1)/2+1; …; C_(n+1)/2 C_(n+1)/2+1 … C_n-1 ]|=0.
Here, we denoted:
√((a-x)(b-x)(β-x))=B_0+B_1x+B_2x^2+…,
√((a-x)(b-x)(β-x))/b-x=C_0+C_1x+C_2x^2+…,
the Taylor expansions around x=0, and β is the parameter of the caustic from (<ref>).
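For concreteness, here is a small symbolic sketch (ours, with arbitrary test values a = 4, b = 1) that builds the coefficients B_k and C_k with sympy and evaluates the determinants above. For these boundary values, β = 4/5 is the caustic parameter of the rhombus inscribed through the four vertices of the boundary, a classical 4-periodic configuration, and, consistent with the remark that even-period conditions coincide with those of the standard billiard, the n = 4 determinant vanishes there; a coarse scan of the odd-period determinant over b < β < a flags candidate caustics of 3-periodic trajectories.

import sympy as sp

x = sp.symbols('x')
a, b = sp.Integer(4), sp.Integer(1)     # arbitrary test values for the boundary parameters

def coeffs(expr, order):
    ser = sp.series(expr, x, 0, order).removeO()
    return [ser.coeff(x, k) for k in range(order)]

def cayley_det(n, beta):
    # determinant whose vanishing is the n-periodicity condition for the caustic with parameter beta
    f = sp.sqrt((a - x) * (b - x) * (beta - x))
    if n % 2 == 0:
        B = coeffs(f, n)
        size = n // 2 - 1
        return sp.Matrix(size, size, lambda i, j: B[3 + i + j]).det()
    C = coeffs(f / (b - x), n)
    size = (n - 1) // 2
    return sp.Matrix(size, size, lambda i, j: C[2 + i + j]).det()

# beta = 4/5 is the caustic of the rhombus through the four vertices of the boundary:
print(sp.simplify(cayley_det(4, sp.Rational(4, 5))))    # -> 0, i.e. 4-periodic
print(float(cayley_det(4, sp.Rational(1, 2))))          # a generic caustic: nonzero

# odd periods require a hyperbolic caustic (b < beta < a); a sign change brackets a root
grid = [sp.Rational(k, 100) for k in range(101, 400, 5)]
vals = [float(cayley_det(3, bb)) for bb in grid]
for b1, d1, b2, d2 in zip(grid, vals, grid[1:], vals[1:]):
    if d1 * d2 < 0:
        print("3-periodic caustic parameter between", b1, "and", b2)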
The periodicity conditions can be equivalently stated in the form of polynomial equations.
Again, for details on how to obtain those equations, refer to <cit.>.
A more general theory connecting polynomial Pell's equations and integrable billiards in arbitrary dimension is given in <cit.>.
The trajectories of the magic billiard with flipping over the long axis with caustic _β are n-periodic if and only if there exists a pair of real polynomials p_d_1, q_d_2 of degrees d_1, d_2 respectively, and satisfying the following:
(a) if n=2m is even, then d_1=m, d_2=m-2, and
p_m^2(s) - s(s-1/a)(s-1/b)(s-1/β) q_m-2^2(s) = 1;
(b) if n=2m+1 is odd, then d_1=m, d_2=m-1, and
(s-1/b) p_m^2(s) - s(s-1/a)(s-1/β) q_m-1^2(s) = -1.
We conclude this section by a topological description of the system.
The rough Liouville equivalence class of an isoenergy manifold in the phase space of the magic billiard with flipping over the long axis is given by the Fomenko graph in Figure <ref>.
The billiard table consists of the boundary ellipse and its interior.
According to Remark <ref>, the configuration space is obtained when the points of which are symmetric to each other with respect to the long axis are identified.
Thus, the configuration space is homeomorphic to a sphere, where the boundary of the billiard table is represented by an arc.
Notice that the trajectories of our system do not depend on the velocity of the particle, thus all isoenergy manifolds will be equivalent to each other.
Now, we will fix an isoenergy manifold.
Any level set corresponding to the trajectories having a fixed ellipse as caustic consists of a single Liouville torus.
The level set corresponding to the trajectories having a fixed hyperbola as caustic consists of two Liouville tori: one torus contains the trajectories where motion along each segment is downwards (as shown in the left-hand side of Figure <ref>), and on the other torus the motion is upwards.
The level set corresponding to the caustic =_0 consists of the limit motion back and forth along the boundary.
This is a single closed trajectory, thus corresponding to the Fomenko atom 𝐀.
The level set with the caustic _a corresponds to the motion along the short axis of the ellipse .
When the particle hits the boundary, it is magically flipped to the opposite point on the axis.
Thus, there are two closed trajectories there: the downwards one and the upwards one.
Each of those trajectories corresponds to one Fomenko atom of type 𝐀.
The level set _b consists of the trajectories that contain the foci of the ellipse .
Exactly one of those trajectories is closed, corresponding to horizontal motion along the long axis.
All other trajectories alternately pass through the one and the other focus, and on each such trajectory, because of the flipping along the long axis, the motion is either upwards along each segment or downwards along each segment.
Thus, there are two separatrices on that level set, so it is represented by the Fomenko atom 𝐁.
§.§ Elliptic billiard with flipping over short axis
In this case, φ is the reflection with respect to the short axis.
Examples of its trajectories are depicted in Figure <ref>.
Now, we derive the algebro-geometric condition for periodicity of such a magic billiard.
In the billiard table bounded by the ellipse given by (<ref>), consider the magic billiard with flipping over the short axis.
Consider a trajectory of such billiard with the caustic _β from the confocal family (<ref>).
Such a trajectory
is n-periodic if and only if n is even and nQ_0∼ nQ_∞.
Here Q_0 and Q_∞ are as in Theorem <ref>.
We will use notation as in Theorem <ref> and its proof.
In the same way as explained there, we can derive the divisor condition (<ref>).
In addition to that condition, we need the following:
* the number of times that the particle crossed the long axis must be even: that number equals m if the caustic is an ellipse, or n if it is a hyperbola;
* the sum n+m must be even, since that is the sum of the number of flips over the short axis and the number of times when the billiard particle actually crossed that axis.
In any case, we get that both m and n must be even, so the conditions for closure reduce to n(Q_0-Q_c_1)∼0, which is equivalent to the stated relation.
It is interesting to note that in this case there are no odd-periodic trajectories.
Notice that, as announced in Remark <ref>, the conditions for trajectories of even period are the same as in the case of flipping over the long axis, see Theorem <ref>, and as in the case of the standard billiard, see Theorem 2 of <cit.>.
As a consequence, the analytic and polynomial conditions can be obtained as follows.
In the billiard table bounded by the ellipse given by (<ref>), consider the magic billiard with flipping over the short axis.
Consider a trajectory of such billiard with the caustic _β from the confocal family (<ref>).
Such a trajectory
is n-periodic if and only if n is even and the following equivalent conditions are true:
* relation (<ref>) is satisfied;
* there are polynomials satisfying the polynomial equation (<ref>).
We use Fomenko graph to give a topological description of the system.
The rough Liouville equivalence class of an isoenergy manifold in the phase space of the magic billiard with flipping over the short axis is given by the Fomenko graph in Figure <ref>.
Similarly as in the proof of Theorem <ref>, the configuration space is homeomorphic to a sphere, where the boundary of the billiard table is represented by an arc.
In a fixed isoenergy manifold, any level set corresponding to the trajectories having a fixed non-degenerate conic as caustic consists of a single Liouville torus.
The level set corresponding to the caustic =_0 corresponds to the Fomenko atom 𝐀, similarly as in the proof of Theorem <ref>.
The level set with the caustic _a corresponds to the motion back and forth along the short axis of the ellipse , thus it corresponds to the Fomenko atom of type 𝐀.
The level set with the caustic _b consists of the trajectories that contain the foci of the ellipse .
Two of those trajectories are closed.
One closed trajectory corresponds to motion to the right along the long axis: when the particle hits the boundary, it is magically flipped to the other endpoint of the diameter, from where it continues its motion to the right.
The second closed trajectory corresponds to the motion to the left along the long axis.
For any other trajectory on that level set, each segment will pass through the same focus.
Thus, there are two separatrices on that level set, so it is represented by the Fomenko atom 𝐀^**.
§.§ Elliptic billiard with half-turn around the center
In this case, φ is the half-turn about the center of the ellipse.
Such billiards are a case of the billiards with slipping, see <cit.>.
Examples of its trajectories are depicted in Figure <ref>.
As in the previous cases, we will first derive the divisor conditions for periodicity.
In the billiard table bounded by the ellipse given by (<ref>), consider the magic billiard with half-turn about the center.
Consider a trajectory of such billiard with the caustic _β from the confocal family (<ref>).
Such a trajectory
is n-periodic if and only if nQ_0∼ nQ_∞.
Here Q_0 and Q_∞ are as in Theorem <ref>.
We follow the notation of Theorem <ref> and its proof, and, as there, we can derive the divisor condition (<ref>).
In addition to that condition, we need that the number of times the particle hit the boundary, the number of times it crossed the long axis, and the number of times it crossed the short axis, are all odd or all even.
This is equivalent to m and n being both odd or both even.
Thus, the periodicity condition for even n is equivalent to n(Q_0-Q_c_1)∼0, and for odd n to n(Q_0-Q_c_1)+(Q_c_2-Q_a)∼0.
Using 2Q_∞∼2Q_a∼2Q_b∼2Q_β and 3Q_∞∼ Q_a+Q_b+Q_β, we get that the periodicity condition for any n is nQ_0∼ nQ_∞.
From divisor conditions, one can derive the analytic conditions of Cayley's type and the polynomial conditions, similarly as in <cit.>.
We present them in the following two corollaries.
In the billiard table bounded by the ellipse given by (<ref>), consider the magic billiard with the half-turn about the center.
Consider a trajectory of such billiard with the caustic _β from the confocal family (<ref>).
Such a trajectory
is n-periodic if and only if:
* n is even and (<ref>) is satisfied; or
* n is odd and
| [ B_2 B_3 … B_(n+1)/2; B_3 B_4 … B_(n+1)/2+1; …; B_(n+1)/2 B_(n+1)/2+1 … B_n-1 ]|=0.
The coefficients B_2, B_3, … are as in the statement of Corollary <ref>.
The trajectories of the magic billiard with the half-turn about the center with caustic _β are n-periodic if and only if there exists a pair of real polynomials p_d_1, q_d_2 of degrees d_1, d_2 respectively, satisfying the following:
(a) if n=2m is even, then d_1=m, d_2=m-2, and (<ref>);
(b) if n=2m+1 is odd, then d_1=m, d_2=m-1, and
s p_m^2(s) - (s-1/a)(s-1/b)(s-1/β) q_m-1^2(s) = 1.
Finally, we give the topological description of this system.
The rough Liouville equivalence class of an isoenergy manifold in the phase space of the magic billiard with flipping through the center is given by the Fomenko graph in Figure <ref>.
According to Remark <ref>, we define the configuration space as the billiard table where the points of which are diametrically symmetric to each other are identified.
Such configuration space is homeomorphic to the projective plane.
In a fixed isoenergy manifold, any level set corresponding to a any given ellipse as caustic consists of two Liouville tori: one torus contains the trajectories that are winding in the clockwise direction about the caustic, the other torus contains the counterclockwise trajectories.
The level set corresponding to hyperbola as caustic also consists of two Liouville tori: one torus where all segments of the trajectories point downwards, and the other where they point upwards.
The level set corresponding to the caustic =_0 corresponds to a pair of Fomenko atoms 𝐀: each of them contains a single closed trajectory along the boundary ellipse, winding in one of the two directions.
The level set with the caustic _a corresponds to the motion along the short axis of the ellipse .
It consists of two closed trajectories – the upwards and the downwards one.
Thus we have two Fomenko atoms of type 𝐀 there.
The level set with the caustic _b consists of the trajectories that contain the foci of the ellipse .
Two of those trajectories are closed: one corresponds to motion to the right along the long axis, the other to the motion to the left.
For any other trajectory on that level set, each segment will pass through the same focus, and each segment points in the same direction: upwards or downwards.
Thus, there are four separatrices on that level set, so it is represented by the Fomenko atom 𝐂_2.
For a more detailed proof of Theorem <ref>, together with the calculation of the numerical invariants, see <cit.>.
§ MAGIC BILLIARDS IN ELLIPTIC ANNULUS
In this section, we discuss magic billiards in the annulus between two confocal ellipses _1 and _2, see Figure <ref>.
In order to preserve integrability and caustics, we will assume that the mapping φ is defined as follows:
* the restriction of φ to _1 is the reflection with respect to one of the axes or the central symmetry;
* the restriction to _2 is the identity.
The mapping φ^* will be defined on the velocity vectors at the points of _1 in the same way as explained in Section <ref>.
On _2, mapping φ^* is the ordinary billiard reflection.
If a magic reflection is introduced on the inner boundary, then the dynamics no longer depends continuously on the initial conditions.
Namely, consider motion parallel to a tangent line to _2, close to the point of tangency.
If the particle does not cross _2, the motion is continued straight along that line.
If the particle reaches _2, it will be, according to the mapping φ magically transported to another point of _2 and continue motion from there.
Thus, the continuity close to tangency to _2 is lost.
Two trajectories when φ on the outer boundary is flipping over the long axis are shown in Figure <ref>.
The magic billiard in the annulus between two confocal ellipses with flipping over the long axis on the outer boundary is roughly Liouville equivalent to the magic billiard within an ellipse with flipping over the short axis.
In a fixed isoenergy manifold, any level set corresponding to the trajectories having a fixed non-degenerate conic as caustic consists of a single Liouville torus.
The level set corresponding to the caustic =_0 corresponds to the Fomenko atom 𝐀, similarly as in the proof of Theorem <ref>.
The level set with the caustic _a corresponds to the motion along two segments of the short axis which are within the annulus.
Each time when the particle hits the outer boundary, it is magically flipped to the other segment, thus this level set consists of a single closed trajectory, so we have the Fomenko atom of type 𝐀 there.
The level set with the caustic _b consists of the trajectories that contain the foci of the ellipse .
Two of those trajectories are closed: each corresponds to the motion along one of the segments of the long axis which lie within the annulus.
For any other trajectory on that level set, the extensions of the segments will pass alternately through the two foci.
In one class of those trajectories, the segments that point from the outer boundary to the inner one contain the left focus, and their limit as time goes to +∞ is the left segment on the long axis, while the time limit to -∞ is the right segment.
In the other class, everything is opposite.
Thus, there are two separatrices on that level set, so it is represented by the Fomenko atom 𝐀^**.
Two trajectories when φ on the outer boundary is flipping over the short axis are shown in Figure <ref>.
The magic billiard in the annulus between two confocal ellipses with flipping over the short axis on the outer boundary is roughly Liouville equivalent to the magic billiard within an ellipse with flipping over the long axis.
In a fixed isoenergy manifold, any level set corresponding to the trajectories having an ellipse as caustic consists of a single Liouville torus.
On the other hand, if the caustic is hyperbola, there are two tori: each corresponding to one connected component within the annulus between the two branches of the hyperbola.
The level set corresponding to the caustic =_0 corresponds to the Fomenko atom 𝐀.
The level set with the caustic _a corresponds to the motion along two segments of the short axis which are within the annulus.
Each segment is covered by one closed trajectory on that level set, thus both of them correspond to the Fomenko atom 𝐀.
The level set with the caustic _b consists of the trajectories that contain the foci of the ellipse .
One of those trajectories is closed and it is placed on the long axis, traversing the two segments.
Any other trajectory on that level set is placed on one side of the long axis, thus there are two separatrices.
We conclude that this level set corresponds to the Fomenko atom 𝐁.
Two trajectories when φ on the outer boundary is slipping by half-ellipse are shown in Figure <ref>.
The rough Liouville equivalence class of an isoenergy manifold in the phase space of the magic billiard in an elliptic annulus with slipping by half-ellipse along the outer boundary is given by the Fomenko graph in Figure <ref>.
In a fixed isoenergy manifold, any level set corresponding to the trajectories having a fixed ellipse as caustic consists of two Liouville tori: each torus contains the trajectories the winding in one direction around the annulus.
On the other hand, if the caustic is hyperbola, there is only one torus.
The level set corresponding to the caustic =_0 contains only two closed trajectories: each is winding in one direction along the boundary.
Thus, we have two Fomenko atoms of type 𝐀 there.
The level set with the caustic _a corresponds to the motion along two segments of the short axis which are within the annulus.
There is only one trajectory on that level set, so it corresponds to the Fomenko atom 𝐀.
The level set with the caustic _b consists of the trajectories that contain the foci of the ellipse .
One of those trajectories is closed and it is placed on the long axis, traversing the two segments.
There are two separatrices on that level set: one separatrix contains the trajectories where the motion is above the long axis on the left and below the axis on the right as t→+∞, and vice versa for t→-∞.
The opposite holds for the other separatrix.
We can conclude that the corresponding Fomenko atom is 𝐁.
The example of billiard within elliptic annulus with slipping along the outer ellipse was analyzed in <cit.>.
There, the configuration space was defined differently: by gluing two identical annuli along inner and the outer boundaries, so the resulting Fomenko graph there is different.
§ CONCLUSIONS AND DISCUSSION
We note that magic billiards, as introduced in Section <ref>, represent a very broad class, where various subclasses may be of interest for exciting future research.
In particular, the integrable cases of magic billiards within an ellipse are not exhausted by the list considered in this paper. It would be interesting to explore a more general class of magic billiards (,φ,φ^*), including those which preserve caustics. One natural generalization includes studying domains with piecewise smooth boundaries.
We note that billiard ordered games introduced in <cit.> and studied further in <cit.> can also be generalised using magic reflections.
§ ACKNOWLEDGEMENTS
This paper is devoted to Academician Anatoly Timofeevich Fomenko on the occasion of his 80th anniversary. The authors have been glad to learn a lot from personal contacts with Anatoly Timofeevich and his distinguished students, from their lectures, books, and papers. The authors wish Anatoly Timofeevich many happy returns and further success in his scientific work and the work of his scientific school.
This research is supported by the Discovery Project No. DP190101838
Billiards within confocal quadrics and beyond from the Australian Research Council,
by Mathematical Institute of the Serbian Academy of Sciences and Arts, the Science Fund of Serbia grant Integrability and Extremal Problems in Mechanics, Geometry and
Combinatorics, MEGIC, Grant No. 7744592 and the Ministry for Education, Science, and Technological Development of Serbia and the Simons Foundation grant no. 854861.
M. R. is grateful to Professors John Roberts and Wolfgang Schief and the School of Mathematics and Statistics of the UNSW Sydney for their hospitality and support during final stages of work on this paper.
|
http://arxiv.org/abs/2409.03702v1 | 20240905165836 | Evidence for Dust Depletion in a Misaligned Protoplanetary Disk with JWST | [
"C. C. Espaillat",
"T. Thanathibodee",
"Z. Zhu",
"I. Rabago",
"J. Wendeborn",
"N. Calvet",
"L. Zamudio-Ruvalcaba",
"M. Volz",
"C. Pittman",
"M. McClure",
"J. F. Babb",
"R. Franco-Hernandez",
"E. Macias",
"M. Reynolds",
"P. -G. Yan"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.EP"
] |
Corresponding author: Catherine C. Espaillat (cce@bu.edu)
C. C. Espaillat (ORCID: 0000-0001-9227-5949)
Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
T. Thanathibodee (ORCID: 0000-0003-4507-1710)
Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
Z. Zhu (ORCID: 0000-0003-3616-6822)
Department of Physics and Astronomy, University of Nevada, Las Vegas, 4505 S. Maryland Pkwy, Las Vegas, NV 89154, USA
Nevada Center for Astrophysics, University of Nevada, 4505 S. Maryland Pkwy., Las Vegas, NV 89154-4002, USA
I. Rabago (ORCID: 0000-0001-5008-2794)
Department of Physics and Astronomy, University of Nevada, Las Vegas, 4505 S. Maryland Pkwy, Las Vegas, NV 89154, USA
Nevada Center for Astrophysics, University of Nevada, 4505 S. Maryland Pkwy., Las Vegas, NV 89154-4002, USA
J. Wendeborn (ORCID: 0000-0002-6808-4066)
Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
N. Calvet (ORCID: 0000-0002-3950-5386)
Department of Astronomy, University of Michigan, 1085 South University Avenue, Ann Arbor, MI 48109, USA
L. Zamudio-Ruvalcaba (ORCID: 0000-0001-9219-7696)
Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
M. Volz (ORCID: 0009-0005-4517-4463)
Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
C. Pittman (ORCID: 0000-0001-9301-6252)
Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
M. McClure (ORCID: 0000-0003-1878-327X)
Leiden Observatory, Leiden University, PO Box 9513, NL–2300 RA Leiden, The Netherlands
J. F. Babb (ORCID: 0000-0002-3883-9501)
Center for Astrophysics Harvard & Smithsonian, Cambridge, MA 02138, USA
R. Franco-Hernández (ORCID: 0000-0002-1650-3740)
Instituto de Astronomía y Meteorología, Universidad de Guadalajara, Avenida Vallarta No. 2602, Col. Arcos Vallarta Sur, CP 44130, Guadalajara, Jalisco, Mexico
E. Macías (ORCID: 0000-0003-1283-6262)
European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748, Garching bei Munchen, Germany
M. Reynolds (ORCID: 0000-0003-1621-9392)
Department of Astronomy, Ohio State University, 140 West 18th Ave., Columbus, OH 43210, USA
Department of Astronomy, University of Michigan, 1085 South University Avenue, Ann Arbor, MI 48109, USA
P.-G. Yan (ORCID: 0000-0003-1623-1391)
Center for Astrophysics Harvard & Smithsonian, Cambridge, MA 02138, USA
§ ABSTRACT
Here we report the detection of dust depletion in a misaligned inner disk around UX Tau A using JWST MIRI spectra. Mid-infrared (MIR) continuum “seesaw” variability was detected in this disk by Spitzer and attributed to variable shadows cast on the outer disk by the inner disk. The JWST MIRI spectrum of UX Tau A also shows seesaw variability, but with a significant decrease of emission shortwards of 10 μm to nearly photospheric levels. We argue that UX Tau A's MIR continuum variability is due to depletion of dust in a misaligned inner disk. We speculate that this dust depletion occurs because the inner disk is misaligned from the outer disk, which can disrupt the replenishment of the inner disk from the outer disk. Using contemporaneous measurements of the mass accretion rate of UX Tau A and estimating the amount of dust necessary to produce the MIR excess in the Spitzer observations, we estimate a minimum dust depletion timescale of ∼0.1 yr. These observations show that we can indirectly detect the signatures of misaligned inner disks through MIR continuum variability and that in some cases the inner disk may be significantly depleted of dust and become optically thin.
§ INTRODUCTION
Variability is a distinctive characteristic of low-mass (< 2 M_⊙), accreting pre-main sequence stars (i.e., Classical T Tauri stars; CTTS). Extensive multi-epoch studies of CTTS at X-ray, UV, optical, and IR wavelengths revealed significant variability at each of these wavelengths <cit.>.
Out of the wide range of variability seen to date, at least two types are linked to dust in the inner regions of the protoplanetary disk: MIR “seesaw” continuum variability and optical “dipper” light curves. Seesaw variability was discovered by Spitzer, which found that the MIR continuum in many disks with large central cavities seesaws, i.e., the flux at shorter wavelengths varies inversely with the flux at longer wavelengths <cit.>. Spectral energy distribution (SED) modeling showed that this seesawing behavior could be explained by differences in the size of the shadow cast on the edge of the outer disk by the inner disk. <cit.> attributed the seesaw behavior to changes in inner disk wall height resulting in differences in illumination in the outer wall, a geometrical effect. NIR imaging of dark regions supports shadowing of the outer disk by the inner disk <cit.>. “Dippers” were identified in Kepler surveys based on their optical light curves, which display deep dips in their brightness <cit.>. The period of the dips is typically a few days and equals the stellar rotation period, which indicates that there is dust located at the corotation radius that obscures the star <cit.>.
Here, we study UX Tau A, which has been reported to display seesaw variability <cit.>, to further explore the role of dust in the innermost disk in MIR continuum variability and the link to dippers and disk shadows. UX Tau is a quadruple system located in the Taurus-Auriga star-forming region at a distance of ∼147 pc <cit.>. The UX Tau system is made up of components A, B, and C where A and B are separated by 5.86″ and A and C are separated by 2.63″ <cit.>. UX Tau B is a close binary separated by 0.1″ <cit.>. In this work, we focus on the A component of the UX Tau system, which has been shown to dominate the emission <cit.>.
UX Tau A has a stellar mass of 1.4 M_⊙ <cit.>, and reported spectral types of G5–K5 <cit.> with an accretion rate of 1×10^-8 M_⊙ yr^-1 <cit.>. This object is surrounded by a pre-transitional disk <cit.>. (Sub-)mm imaging finds that the disk has a gap of ∼25–31 au <cit.> as well as spiral arm structure indicating that UX Tau A has been perturbed by UX Tau C within the past 1000 yrs <cit.>. NIR observations detect dust down to at least 23 au <cit.>.
Here we present new JWST data of UX Tau A and contemporaneous optical spectra, which we compare to archival MIR spectra and photometry as well as contemporaneous optical light curves. In Section 2, we present the new and supporting datasets. In Section 3, we analyze the optical light curves and profiles. We discuss possible connections between the datasets in Section 4 and end with a summary in Section 5.
§ OBSERVATIONS & DATA REDUCTION
We study MIR spectra from JWST and Spitzer, MIR photometry from WISE/NEOWISE, optical light curves from TESS and ASAS-SN, and optical spectra from LCOGT/NRES. In the following, we provide more details about these data.
§.§ Mid-Infrared Data
§.§.§ JWST
We present new JWST MIRI <cit.> MRS <cit.> data of UX Tau A taken on 2023 October 13 starting at 05:37:37 UT as part of GO program 1676 (PI: Espaillat). The observations were 336 s long and used the default four-point dither. Background observations were also taken with the same exposure time and setup. We follow the same reduction procedure as in <cit.>. In sum, we reduce the uncalibrated raw MRS data using the calibration reference file version jwst_1252.pmap and the JWST Science Calibration Pipeline v.1.15.1 <cit.>. There is a known flux calibration uncertainty of ∼10% (Law et al in prep). The spectrum is presented in Figure <ref>.
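For readers unfamiliar with the reduction chain, the following is a schematic outline of the standard three-stage jwst pipeline flow for MIRI/MRS data, not the authors' actual script; the file names, output directories, and stage-3 association file are placeholders, and pinning the CRDS context approximates the calibration state quoted above.

import os
os.environ["CRDS_CONTEXT"] = "jwst_1252.pmap"            # calibration context quoted in the text
os.environ["CRDS_SERVER_URL"] = "https://jwst-crds.stsci.edu"

from jwst.pipeline import Detector1Pipeline, Spec2Pipeline, Spec3Pipeline

# Stage 1: ramps-to-slopes for each uncalibrated exposure (science and background)
Detector1Pipeline.call("jw01676_mirifushort_uncal.fits",     # placeholder file name
                       save_results=True, output_dir="stage1")

# Stage 2: WCS assignment, flat fielding, and flux calibration of each rate file
Spec2Pipeline.call("stage1/jw01676_mirifushort_rate.fits",
                   save_results=True, output_dir="stage2")

# Stage 3: background subtraction, dither combination, cube building, and 1-D extraction,
# driven by an association file listing the stage-2 science and background products
Spec3Pipeline.call("ux_tau_a_spec3_asn.json",
                   save_results=True, output_dir="stage3")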
The Channel 1 spectra (4.9–7.5 μm) are of UX Tau A alone while Channels 2, 3 and 4 (7.5–27.90 μm) include the C component. At K-band, the flux ratio between A and C is 16.9±1.6 and between A and B it is 3.91±0.31 <cit.>. If there were a significant contribution from the C component, there would have been a sharp increase in flux at 7.5 μm. We conclude that the C component is not contributing significantly to the MIR.
UX Tau A has detections of [Ne II] at 12.81 μm, [Ne III] at 15.5 μm, and [Ar II] at 6.98 μm. There are also a few H_2 lines present and weak PAH features at ∼6.2, 7.7, 8.6, 12.0, and 12.7 μm, along with a strong PAH feature at 11.3 μm.
We follow <cit.> to measure [Ne II], [Ne III], and [Ar II] line fluxes of 1.5±0.1×10^-14, 1.4±0.3×10^-15, and 1.1±0.1×10^-14 erg cm^-2 s^-1, respectively.
We note that the [Ne II] line flux is consistent with the line flux measured by <cit.>.
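As a generic illustration of how an unresolved line flux can be extracted from a calibrated spectrum (not necessarily the exact procedure of the cited work), one can fit a Gaussian on top of a linear local continuum in a narrow window around the line and integrate the Gaussian analytically; the window width and initial guesses below are arbitrary.

import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_line(w, amp, cen, sig, c0, c1):
    return amp * np.exp(-0.5 * ((w - cen) / sig) ** 2) + c0 + c1 * w

def line_flux(wave, flux, line_center, half_window=0.1):
    # wave in micron, flux in F_lambda units; returns the integrated Gaussian area
    sel = np.abs(wave - line_center) < half_window
    p0 = [flux[sel].max() - np.median(flux[sel]), line_center, 0.003,
          np.median(flux[sel]), 0.0]
    popt, _ = curve_fit(gauss_plus_line, wave[sel], flux[sel], p0=p0)
    amp, sig = popt[0], abs(popt[2])
    return amp * sig * np.sqrt(2.0 * np.pi)      # analytic integral of the Gaussian

# e.g. line_flux(wave_um, f_lambda, 12.81) for [Ne II]; if the extracted spectrum is in
# F_nu (Jy), convert to F_lambda first so that the area comes out in erg s^-1 cm^-2.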
§.§.§ Spitzer
In Figure <ref>, we show Spitzer IRS spectra taken from the Combined Atlas of Sources with Spitzer IRS Spectra <cit.>. These include low-resolution data of UX Tau A on 2008-10-01 and 2008-10-08. Also included are high-resolution spectra of UX Tau A on 2007-03-22 and 2008-11-05. We note that the spectrum of UX Tau A from 2004-02-08 appeared to have scaling issues with the CASSIS reduction so here we use the spectrum from <cit.>. The systematic absolute flux uncertainties on the IRS spectra are ∼5%.
§.§.§ WISE/NEOWISE
In Figure <ref>, we use data from the Wide-field Infrared Survey Explorer (WISE) mission <cit.> and from the NEOWISE project <cit.>. These data are in WISE bands W1 (3.4 μm) and W2 (4.6 μm). The WISE data are from mid-February 2010 and mid-August 2010. The NEOWISE data span 2014 February – 2023 August and are typically obtained in 2-day segments, with 10–20 observations per filter and 130–200 days between segments. The flux calibration uncertainties are less than 5%.
§.§ Optical Data
§.§.§ TESS and ASAS-SN
We show All-Sky Automated Survey for Supernovae (ASAS-SN) <cit.> and TESS light curves for UX Tau A in Figure <ref>. The ASAS-SN g-band photometry comes from Sky Patrol V2.0 <cit.> and is contemporaneous with the JWST observations. We obtained the TESS light curves using the TESS-GAIA Light Curve (tglc) Python package <cit.>.
The TESS observations were simultaneous with JWST. TESS observed UX Tau A for ∼26 days in Sector 70 from 2023 September 20 to October 16 (MJD: 60207.9-60233.3) with 3.3-minute cadence. In Figure <ref>, we include additional TESS light curves of UX Tau A from Sectors 43, 44, and 71 with cadences ranging from 3.3–30 minutes.
§.§.§ LCOGT
We observed UX Tau A using the NRES spectrograph on the Las Cumbres Global Observatory 1-meter telescope network (LCOGT). NRES is a robotic, fiber-fed spectrograph providing high-resolution (R∼48000) spectra from the BANZAI-NRES automatic reduction pipeline. Multiple exposures were stacked. All spectroscopic observations are contemporaneous with the JWST observations. UX Tau A was observed starting on 2023-10-11UT00:52, 2023-10-11UT23:50, 2023-10-14UT09:33, and 2023-10-17UT11:20.
All spectra were re-normalized using a polynomial fit to the continuum and are shown in Figure <ref>.
§ ANALYSIS & RESULTS
We measure the periods of the optical light curves and classify them following <cit.>. Then we model the line profiles to measure accretion rates and properties of the accretion flow. These results will be discussed along with the MIR data in Section 4.
§.§ Optical light curves
To measure the periods in the TESS data, we subtract a linear fit from each light curve and fit a Lomb-Scargle periodogram. We measure the Q and M variability metrics following <cit.>. Q measures the light curve's periodicity and varies between 0 and 1 where 0 is periodic and 1 is aperiodic. Q values between 0.15 and 0.85 are quasiperiodic. M measures the light curve's asymmetry and typically varies between -1 and 1. Positive values correspond to dips and negative values to bursts with the highest values corresponding to dippers and the lowest values to bursters. M values between ±0.25 are symmetric.
The period of UX Tau A is ∼3.8 days amongst the TESS sectors (43, 44, 70, and 71). We find Q values of 0.9, 0.4, 0.9, 0.9 and M values of -0.3, 0.2, 0.5, 1.2. These metrics lead to the following classifications: Sector 43 burster (B); Sector 44 quasiperiodic symmetric (QPS); Sector 70 aperiodic dipper (APD); Sector 71 aperiodic dipper (APD).
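The following sketch shows, in simplified form, how the period and the asymmetry metric M can be computed (our paraphrase of the cited prescriptions, not the authors' code); the quasi-periodicity metric Q additionally requires folding on the measured period and comparing the residual scatter to the raw scatter, and is omitted here.

import numpy as np
from astropy.timeseries import LombScargle

def detrend(t, f):
    return f - np.polyval(np.polyfit(t, f, 1), t)

def ls_period(t, f, min_p=0.5, max_p=15.0):
    freq, power = LombScargle(t, detrend(t, f)).autopower(
        minimum_frequency=1.0 / max_p, maximum_frequency=1.0 / min_p)
    return 1.0 / freq[np.argmax(power)]

def m_metric(f):
    # mean of the top and bottom deciles relative to the median, in units of the rms
    fs = np.sort(f)
    n10 = max(1, int(0.1 * len(fs)))
    d10 = np.mean(np.concatenate([fs[:n10], fs[-n10:]]))
    return (d10 - np.median(fs)) / np.std(fs)

# With fluxes rather than magnitudes, deep dips pull the faint decile down, so the sign
# convention must be flipped for positive M to correspond to dippers as quoted above.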
Table 1: Results of the Magnetospheric Accretion Model

Object     Obs. Date (UT)   Instrument   Accretion Rate        Inner Radius   Flow Width   Max. Temperature   Inclination
                                         (10^-9 M_⊙ yr^-1)     (R_⋆)          (R_⋆)        (10^3 K)           (deg)
UX Tau A   2023-10-11       NRES         16.0±29.2             2.3±1.6        0.4±0.3      9.0±0.7            46±14
UX Tau A   2023-10-12       NRES         11.7±22.0             2.0±1.2        0.4±0.3      9.0±0.6            46±13
UX Tau A   2023-10-14       NRES         21.6±31.7             1.7±0.4        0.3±0.2      9.0±0.6            48±12
UX Tau A   2023-10-17       NRES         11.3±26.9             3.5±2.2        0.7±0.6      8.7±0.7            36±16
§.§ Optical spectra
We model the line profiles of UX Tau A (Figure <ref>) using the magnetospheric accretion flow model from <cit.>. The magnetic, stellar, and disk rotation axes are aligned, and the flow geometry follows a dipolar magnetic field described by the following parameters: the inner radius of the flow, the width of the flow at the disk plane, the maximum temperature in the flow, and the viewing inclination (i). To determine the best fits, we follow the procedure of <cit.>.
We created a grid of 29,700 models spanning the following ranges: accretion rate (1×10^-9–1×10^-7 M_⊙ yr^-1), inner radius (1.5–7.0 R_*), flow width (0.2–1.8 R_*), maximum temperature (8–10×10^3 K), and i (20–60°). We calculated the χ^2 for each combination of model and observed profile, selected the models where the normalized likelihood is ≥0.5, and calculated the weighted mean of each parameter. The best-fit parameters are listed in Table 1.
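Schematically, the bookkeeping for this grid fit can be written as follows (our sketch; "normalized likelihood" is read here as exp(-Δχ^2/2), scaled so that the best model has a value of 1):

import numpy as np

def grid_fit_summary(params, chi2):
    # params: (n_models, n_parameters) array; chi2: (n_models,) array of fit statistics
    like = np.exp(-0.5 * (chi2 - chi2.min()))     # normalized so the best model has likelihood 1
    keep = like >= 0.5
    w, p = like[keep], params[keep]
    mean = np.average(p, axis=0, weights=w)
    std = np.sqrt(np.average((p - mean) ** 2, axis=0, weights=w))
    return mean, std                              # weighted means and weighted standard deviations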
§ DISCUSSION
Seesaw variability is still present in the JWST spectrum of UX Tau A, but differs significantly from what has been seen previously. Strikingly, the JWST spectrum is nearly consistent with photospheric emission at the shortest wavelengths, with a very small MIR excess shortwards of 10 μm and a broad, weak 10 μm silicate emission feature.[The photosphere plotted in Figure <ref> uses colors from <cit.>, is scaled at J-band, and follows a Rayleigh-Jeans tail beyond K-band. The photometry used in <cit.> included emission from the A, B, and C components and here we scale the photosphere following the flux ratios from <cit.> to represent the emission from only the A component.] This combination points to a small amount of ∼micron-sized dust grains in an optically thin inner disk. Previous work has reproduced the 10 μm feature of UX Tau A using a grain size distribution of a^-3.5, where a varies between a_min=0.005 μm and a_max=10 μm <cit.>.
We propose that the drop to nearly photospheric levels in UX Tau A is due to dust depletion in a misaligned inner disk undergoing disk breaking. Disk breaking is a phenomenon that occurs when the inner and outer disks precess independently, possibly due to the presence of a companion <cit.>. The inclination of the outer disk of UX Tau A is 37.96 (+0.97/-0.90) degrees, while the inclination of the inner disk is 73.46 (+11.76/-16.47) degrees <cit.>.
As the inner disk precesses, there will be times when it is aligned with the outer disk (Figure <ref>, left) and times when it is misaligned (Figure <ref>, right). Mass transfer from the outer disk to the inner disk is easiest when the disks are coplanar and is the most disrupted when the inner disk is at maximum misalignment from the outer disk. However, accretion may still occur at intermediate points. Because it only takes a small amount of dust to make the disk optically thick, any accretion can deposit sufficient dust to lead to an optically thick inner disk. Therefore, we expect that a significant MIR excess would be seen most of the time. Then when the accretion to the inner disk is significantly disrupted, most of the dust in the inner disk may accrete or drift to the star due to lack of replenishment from the outer disk. The inner disk becomes optically thin and there will be a drop at the shorter wavelengths in the JWST spectrum, while there is an increase at the longer wavelengths as there is little/no shadow cast on the outer disk. Below we estimate the dust depletion and precession timescales and consider this proposed scenario in light of the MIR light curve, optical light curves, and the MIR emission lines and features.
§.§ Depletion and precession timescales
We can estimate the dust depletion timescale by calculating how much mass was in the inner disk at the time of the Spitzer observations and comparing this to the accretion rate of UX Tau A. Assuming the excess MIR emission comes from a marginally optically thick inner disk at 1550 K, the lower limit of the dust mass in the inner disk is then ∼10^-11 M_⊙ using a dust opacity of 200 cm^2 g^-1 <cit.>. We measure accretion rates of 1–2×10^-8 M_⊙ yr^-1 contemporaneous with the JWST observations (Table 1), which would quickly deplete the inner disk of dust in 0.1 yrs. Here we adopt the typically assumed dust-to-gas ratio of 0.01, but the dust-to-gas ratio in circumstellar disks is unknown and it is not clear what it would be in the inner disk of UX Tau A.
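The quoted timescale follows directly from these numbers (astropy is used below only for unit bookkeeping):

import astropy.units as u

m_dust = 1e-11 * u.M_sun                   # optically thin dust mass inferred from the Spitzer excess
mdot_gas = 1e-8 * u.M_sun / u.yr           # low end of the contemporaneous accretion rate (Table 1)
dust_to_gas = 0.01                         # conventional assumed ratio, as discussed above

t_deplete = m_dust / (dust_to_gas * mdot_gas)
print(t_deplete.to(u.yr), t_deplete.to(u.day))     # ~0.1 yr, i.e. ~37 days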
We can also consider the precession timescale of the inner disk of UX Tau A. Theoretical works show that a misaligned Jupiter-mass planet can drive misalignment and precession of the inner disk at ∼1000 times the planetary orbital timescale <cit.>. We assume that the period measured from the TESS light curves (Figure <ref>) is the stellar rotation period. Then, if a misaligned Jupiter-mass planet is around the corotation radius, the inner disk's precession timescale is ∼500 wks or 10 yrs. If the planet is 10 Jupiter masses, the precession timescale is then 1 year.
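In the same spirit, the precession scaling quoted above amounts to roughly 1000 planetary orbits for a Jupiter-mass perturber at corotation, shortening in inverse proportion to the planet mass (a rough scaling following the cited theoretical work):

import astropy.units as u

p_orbit = 3.8 * u.day                      # corotation period, taken equal to the stellar rotation period
for m_planet in (1.0, 10.0):               # perturber mass in Jupiter masses
    t_precess = (1000.0 / m_planet) * p_orbit
    print(m_planet, "M_Jup:", t_precess.to(u.yr))  # ~10 yr and ~1 yr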
§.§ MIR light curve
There are no significant dips in the WISE/NEOWISE fluxes. Our estimated dust depletion timescale is ∼0.1 yrs (∼37 days) and the dust replenishment timescale would be similar. Since there are at least 130 days in between the segments of the WISE/NEOWISE data, it is possible that we missed this short-lived depleted period. If the precession timescale is 10 yrs, the last time that the inner disk was in the same position as in the JWST observation would have been in 2013, which falls in the ∼3.5 yrs gap in coverage between WISE and NEOWISE. It is plausible that given the sparse time cadence of the data and the short depletion and replenishment timescale, that another significant MIR dip was not observed.
There is significant MIR emission in the last NEOWISE observation taken ∼43 days before the JWST observations, which implies a quick drop to nearly photospheric levels. This is still consistent with our estimated depletion timescale. During the last NEOWISE observations, the inner disk could have been marginally optically thick and then it became optically thin when observed with JWST. However, the estimated depletion timescale is a lower limit and any slightly longer timescale would not be consistent with the observations.
§.§ Optical light curves
One may speculate that the drop in the JWST spectrum is due to an edge-on inner disk. However, we can exclude the possibility that the inner disk of UX Tau A was close to edge-on at the time of the JWST observations since there is no evidence of extinction of the star by the inner disk in the simultaneous optical light curve (Figure <ref>).
The dips in the TESS light curve suggest that there is dust in the inner disk along our line of sight which obscures the star a few days before and after the JWST observations. This is consistent with the JWST spectrum whose small MIR excess shows that there is still some small amount of dust grains in the inner disk dust. The exact mechanism by which this dust obscures the star in the UX Tau A system is unclear. Scenarios that have been proposed to explain dippers include dusty accretion flows <cit.>, disk warps <cit.>, and dusty disk winds <cit.>.
Finally, we see a change in UX Tau A's optical light curve behavior that could possibly be further evidence of a precessing misaligned inner disk. In 2021 (top, Figure <ref>), UX Tau A had a quasiperiodic symmetric light curve and two years later (bottom, Figure <ref>) showed a dipper light curve. This could indicate that the inner disk was at different inclinations in 2021 and 2023, changing the obscuration of the star and leading to the different light curve behavior. This may have implications on a subset of objects with “hybrid” optical light curves which switch from quasiperiodic to dipper <cit.>.
§.§ MIR atomic gas emission lines and PAH features
The proposed scenario of a depleted inner disk also appears roughly consistent with the lines and features seen in the JWST spectrum. First, only the 11.3 PAH feature was clearly seen in the Spitzer spectrum <cit.> whereas they are all prominent in the JWST spectrum. The PAH features were likely “drowned out” in the Spitzer spectrum due to the significant MIR excess. In the JWST spectrum, the depletion of dust from the inner disk leads to a decrease in the continuum emission, allowing the PAHs to be conspicuous. Likewise, it is easier to see the [Ar II] and [Ne II] emission lines due to the decreased MIR continuum, as has been seen in other pre-transitional and transitional disks <cit.>. In addition, the presence of the [Ar II] and [Ne II] lines confirms that there is gas in the inner disk, consistent with the measured accretion rates in this work, which points to a significant gas reservoir in the inner disk despite the depletion of dust.
§ SUMMARY & CONCLUSIONS
We presented a new JWST spectrum of UX Tau A which shows a striking decrease of continuum emission shortwards of 10 , dropping to nearly photospheric levels. We conclude that the dust in the inner disk has been depleted to the point where the inner disk is optically thin, leading to this drop in MIR emission. This dust depletion could be due to a disruption in the replenishment of the inner disk since accretion from the outer disk may be more difficult when the inner and outer disks are misaligned.
Monitoring of the precession timescale via inclination measurements, MIR spectra, and/or imaging is necessary to test if this variability is periodic, which would be expected if it is due to disk precession. More detailed modeling with non-axisymmetric disks is also necessary. These approaches will help us link the variability seen in protoplanetary disks at different wavelengths.
We acknowledge support from JWST grant GO-01676. We thank the anonymous referee for a constructive report which improved the paper.
The JWST data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via [10.17909/sej8-g173]https://archive.stsci.edu/doi/resolve/resolve.html?doi=10.17909/sej8-g173.
ASAS-SN, JWST, LCOGT, WISE, NEOWISE, TESS
[Andrews et al.(2011)Andrews, Wilner, Espaillat, Hughes, Dullemond, McClure, Qi, & Brown]andrews11
Andrews, S. M., Wilner, D. J., Espaillat, C., et al. 2011, , 732, 42, 10.1088/0004-637X/732/1/42
[Benisty et al.(2023)Benisty, Dominik, Follette, Garufi, Ginski, Hashimoto, Keppler, Kley, & Monnier]benisty23
Benisty, M., Dominik, C., Follette, K., et al. 2023, in Astronomical Society of the Pacific Conference Series, Vol. 534, Protostars and Planets VII, ed. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, & M. Tamura, 605, 10.48550/arXiv.2203.09991
[Birnstiel et al.(2018)Birnstiel, Dullemond, Zhu, Andrews, Bai, Wilner, Carpenter, Huang, Isella, Benisty, Pérez, & Zhang]birnstiel18
Birnstiel, T., Dullemond, C. P., Zhu, Z., et al. 2018, , 869, L45, 10.3847/2041-8213/aaf743
[Bohn et al.(2022)Bohn, Benisty, Perraut, van der Marel, Wölfer, van Dishoeck, Facchini, Manara, Teague, Francis, Berger, Garcia-Lopez, Ginski, Henning, Kenworthy, Kraus, Ménard, Mérand, & Pérez]bohn22
Bohn, A. J., Benisty, M., Perraut, K., et al. 2022, , 658, A183, 10.1051/0004-6361/202142070
[Bouvier et al.(2007)Bouvier, Alencar, Boutelier, Dougados, Balog, Grankin, Hodgkin, Ibrahimov, Kun, Magakian, & Pinte]bouvier07
Bouvier, J., Alencar, S. H. P., Boutelier, T., et al. 2007, , 463, 1017, 10.1051/0004-6361:20066021
[Bushouse et al.(2024)Bushouse, Eisenhamer, Dencheva, Davies, Greenfield, Morrison, Hodge, Simon, Grumm, Droettboom, Slavich, Sosey, Pauly, Miller, Jedrzejewski, Hack, Davis, Crawford, Law, Gordon, Regan, Cara, MacDonald, Bradley, Shanahan, Jamieson, Teodoro, Williams, & Pena-Guerrero]bushouse24_v1.15.1
Bushouse, H., Eisenhamer, J., Dencheva, N., et al. 2024, JWST Calibration Pipeline, 1.15.1, Zenodo, 10.5281/zenodo.6984365
[Cody & Hillenbrand(2018)]cody18
Cody, A. M., & Hillenbrand, L. A. 2018, , 156, 71, 10.3847/1538-3881/aacead
[Cody et al.(2014)Cody, Stauffer, Baglin, Micela, Rebull, Flaccomio, Morales-Calderón, Aigrain, Bouvier, Hillenbrand, Gutermuth, Song, Turner, Alencar, Zwintz, Plavchan, Carpenter, Findeisen, Carey, Terebey, Hartmann, Calvet, Teixeira, Vrba, Wolk, Covey, Poppenhaeger, Günther, Forbrich, Whitney, Affer, Herbst, Hora, Barrado, Holtzman, Marchis, Wood, Medeiros Guimarães, Lillo Box, Gillen, McQuillan, Espaillat, Allen, D'Alessio, & Favata]cody14
Cody, A. M., Stauffer, J., Baglin, A., et al. 2014, , 147, 82, 10.1088/0004-6256/147/4/82
[Duchêne et al.(2024)Duchêne, LeBouquin, Ménard, Cuello, Toci, & Langlois]duchene24
Duchêne, G., LeBouquin, J.-B., Ménard, F., et al. 2024, , 686, A188, 10.1051/0004-6361/202348755
[Espaillat et al.(2011)Espaillat, Furlan, D'Alessio, Sargent, Nagel, Calvet, Watson, & Muzerolle]espaillat11
Espaillat, C., Furlan, E., D'Alessio, P., et al. 2011, , 728, 49, 10.1088/0004-637X/728/1/49
[Espaillat et al.(2007)Espaillat, Calvet, D'Alessio, Bergin, Hartmann, Watson, Furlan, Najita, Forrest, McClure, Sargent, Bohac, & Harrold]espaillat07a
Espaillat, C., Calvet, N., D'Alessio, P., et al. 2007, , 664, L111, 10.1086/520879
[Espaillat et al.(2010)Espaillat, D'Alessio, Hernández, Nagel, Luhman, Watson, Calvet, Muzerolle, & McClure]espaillat10
Espaillat, C., D'Alessio, P., Hernández, J., et al. 2010, , 717, 441, 10.1088/0004-637X/717/1/441
[Espaillat et al.(2013)Espaillat, Ingleby, Furlan, McClure, Spatzier, Nieusma, Calvet, Bergin, Hartmann, Miller, & Muzerolle]espaillat13
Espaillat, C., Ingleby, L., Furlan, E., et al. 2013, , 762, 62, 10.1088/0004-637X/762/1/62
[Espaillat et al.(2014)Espaillat, Muzerolle, Najita, Andrews, Zhu, Calvet, Kraus, Hashimoto, Kraus, & D'Alessio]espaillat14
Espaillat, C., Muzerolle, J., Najita, J., et al. 2014, in Protostars and Planets VI, ed. H. Beuther, R. S. Klessen, C. P. Dullemond, & T. Henning, 497, 10.2458/azu_uapress_9780816531240-ch022
[Espaillat et al.(2023)Espaillat, Thanathibodee, Pittman, Sturm, McClure, Calvet, Walter, Franco-Hernández, & Muzerolle Page]espaillat23
Espaillat, C. C., Thanathibodee, T., Pittman, C. V., et al. 2023, , 958, L4, 10.3847/2041-8213/ad023d
[Facchini et al.(2018)Facchini, Juhász, & Lodato]facchini18
Facchini, S., Juhász, A., & Lodato, G. 2018, , 473, 4459, 10.1093/mnras/stx2523
[Flaherty et al.(2011)Flaherty, Muzerolle, Rieke, Gutermuth, Balog, Herbst, Megeath, & Kun]flaherty11
Flaherty, K. M., Muzerolle, J., Rieke, G., et al. 2011, , 732, 83, 10.1088/0004-637X/732/2/83
[Francis et al.(2020)Francis, Johnstone, Herczeg, Hunter, & Harsono]francis20
Francis, L., Johnstone, D., Herczeg, G., Hunter, T. R., & Harsono, D. 2020, , 160, 270, 10.3847/1538-3881/abbe1a
[Gaia Collaboration et al.(2018)Gaia Collaboration, Brown, Vallenari, Prusti, de Bruijne, Babusiaux, Bailer-Jones, Biermann, Evans, Eyer, & et al.]gaia18b
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, , 616, A1, 10.1051/0004-6361/201833051
[Gaidos et al.(2024)Gaidos, Thanathibodee, Hoffman, Ong, Hinkle, Shappee, & Banzatti]gaidos24
Gaidos, E., Thanathibodee, T., Hoffman, A., et al. 2024, , 966, 167, 10.3847/1538-4357/ad3447
[Han & Brandt(2023)]han23
Han, T., & Brandt, T. D. 2023, , 165, 71, 10.3847/1538-3881/acaaa7
[Hart et al.(2023)Hart, Shappee, Hey, Kochanek, Stanek, Lim, Dobbs, Tucker, Jayasinghe, Beacom, Boright, Holoien, Ong, Prieto, Thompson, & Will]hart23
Hart, K., Shappee, B. J., Hey, D., et al. 2023, arXiv e-prints, arXiv:2304.03791, 10.48550/arXiv.2304.03791
[Hartigan et al.(1989)Hartigan, Hartmann, Kenyon, Hewett, & Stauffer]hartigan89
Hartigan, P., Hartmann, L., Kenyon, S., Hewett, R., & Stauffer, J. 1989, , 70, 899, 10.1086/191361
[Hartmann et al.(1994)Hartmann, Hewett, & Calvet]hartmann94
Hartmann, L., Hewett, R., & Calvet, N. 1994, , 426, 669, 10.1086/174104
[Kenyon & Hartmann(1995)]kh95
Kenyon, S. J., & Hartmann, L. 1995, , 101, 117, 10.1086/192235
[Kochanek et al.(2017)Kochanek, Shappee, Stanek, Holoien, Thompson, Prieto, Dong, Shields, Will, Britt, Perzanowski, & Pojmański]kochanek17
Kochanek, C. S., Shappee, B. J., Stanek, K. Z., et al. 2017, , 129, 104502, 10.1088/1538-3873/aa80d9
[Kraus & Hillenbrand(2009)]kraus09
Kraus, A. L., & Hillenbrand, L. A. 2009, , 704, 531, 10.1088/0004-637X/704/1/531
[Lebouteiller et al.(2015)Lebouteiller, Barry, Goes, Sloan, Spoon, Weedman, Bernard-Salas, & Houck]lebouteiller15
Lebouteiller, V., Barry, D. J., Goes, C., et al. 2015, , 218, 21, 10.1088/0067-0049/218/2/21
[Mainzer et al.(2011)Mainzer, Bauer, Grav, Masiero, Cutri, Dailey, Eisenhardt, McMillan, Wright, Walker, Jedicke, Spahr, Tholen, Alles, Beck, Brandenburg, Conrow, Evans, Fowler, Jarrett, Marsh, Masci, McCallon, Wheelock, Wittman, Wyatt, DeBaun, Elliott, Elsbury, Gautier, Gomillion, Leisawitz, Maleszewski, Micheli, & Wilkins]mainzer11
Mainzer, A., Bauer, J., Grav, T., et al. 2011, , 731, 53, 10.1088/0004-637X/731/1/53
[McGinnis et al.(2015)McGinnis, Alencar, Guimarães, Sousa, Stauffer, Bouvier, Rebull, Fonseca, Venuti, Hillenbrand, Cody, Teixeira, Aigrain, Favata, Fűrész, Vrba, Flaccomio, Turner, Gameiro, Dougados, Herbst, Morales-Calderón, & Micela]mcginnis15
McGinnis, P. T., Alencar, S. H. P., Guimarães, M. M., et al. 2015, , 577, A11, 10.1051/0004-6361/201425475
[Ménard et al.(2020)Ménard, Cuello, Ginski, van der Plas, Villenave, Gonzalez, Pinte, Benisty, Boccaletti, Price, Boehler, Chripko, de Boer, Dominik, Garufi, Gratton, Hagelberg, Henning, Langlois, Maire, Pinilla, Ruane, Schmid, van Holstein, Vigan, Zurlo, Hubin, Pavlov, Rochat, Sauvage, & Stadler]menard20
Ménard, F., Cuello, N., Ginski, C., et al. 2020, , 639, L1, 10.1051/0004-6361/202038356
[Muzerolle et al.(1998)Muzerolle, Calvet, & Hartmann]muzerolle98
Muzerolle, J., Calvet, N., & Hartmann, L. 1998, , 492, 743, 10.1086/305069
[Muzerolle et al.(2001)Muzerolle, Calvet, & Hartmann]muzerolle01
—. 2001, , 550, 944, 10.1086/319779
[Muzerolle et al.(2009)Muzerolle, Flaherty, Balog, Furlan, Smith, Allen, Calvet, D'Alessio, Megeath, Muench, Rieke, & Sherry]muzerolle09
Muzerolle, J., Flaherty, K., Balog, Z., et al. 2009, , 704, L15, 10.1088/0004-637X/704/1/L15
[Nagel et al.(2024)Nagel, Bouvier, & Duarte]nagel24
Nagel, E., Bouvier, J., & Duarte, A. E. 2024, , 688, A61, 10.1051/0004-6361/202450085
[Nealon et al.(2020)Nealon, Price, & Pinte]nealon20
Nealon, R., Price, D. J., & Pinte, C. 2020, , 493, L143, 10.1093/mnrasl/slaa026
[Rieke et al.(2015)Rieke, Wright, Böker, Bouwman, Colina, Glasse, Gordon, Greene, Güdel, Henning, Justtanont, Lagage, Meixner, Nørgaard-Nielsen, Ray, Ressler, van Dishoeck, & Waelkens]reike15
Rieke, G. H., Wright, G. S., Böker, T., et al. 2015, , 127, 584, 10.1086/682252
[Rydgren et al.(1976)Rydgren, Strom, & Strom]rydgren76
Rydgren, A. E., Strom, S. E., & Strom, K. M. 1976, , 30, 307, 10.1086/190364
[Stauffer et al.(2015)Stauffer, Cody, McGinnis, Rebull, Hillenbrand, Turner, Carpenter, Plavchan, Carey, Terebey, Morales-Calderón, Alencar, Bouvier, Venuti, Hartmann, Calvet, Micela, Flaccomio, Song, Gutermuth, Barrado, Vrba, Covey, Padgett, Herbst, Gillen, Lyra, Medeiros Guimaraes, Bouy, & Favata]stauffer15
Stauffer, J., Cody, A. M., McGinnis, P., et al. 2015, , 149, 130, 10.1088/0004-6256/149/4/130
[Szulágyi et al.(2012)Szulágyi, Pascucci, Ábrahám, Apai, Bouwman, & Moór]szulagyi12
Szulágyi, J., Pascucci, I., Ábrahám, P., et al. 2012, , 759, 47, 10.1088/0004-637X/759/1/47
[Tanii et al.(2012)Tanii, Itoh, Kudo, Hioki, Oasa, Gupta, Sen, Wisniewski, Muto, Grady, Hashimoto, Fukagawa, Mayama, Hornbeck, Sitko, Russell, Werren, Curé, Currie, Ohashi, Okamoto, Momose, Honda, Inutsuka, Takeuchi, Dong, Abe, Brandner, Brandt, Carson, Egner, Feldt, Fukue, Goto, Guyon, Hayano, Hayashi, Hayashi, Henning, Hodapp, Ishii, Iye, Janson, Kandori, Knapp, Kusakabe, Kuzuhara, Matsuo, McElwain, Miyama, Morino, Moro-Martín, Nishimura, Pyo, Serabyn, Suto, Suzuki, Takami, Takato, Terada, Thalmann, Tomono, Turner, Watanabe, Yamada, Takami, Usuda, & Tamura]tanii12
Tanii, R., Itoh, Y., Kudo, T., et al. 2012, , 64, 124, arXiv:1206.1215
[Thanathibodee et al.(2023)Thanathibodee, Molina, Serna, Calvet, Hernández, Muzerolle, & Franco-Hernández]thanathibodee23
Thanathibodee, T., Molina, B., Serna, J., et al. 2023, , 944, 90, 10.3847/1538-4357/acac84
[Watson et al.(2009)Watson, Leisenring, Furlan, Bohac, Sargent, Forrest, Calvet, Hartmann, Nordhaus, Green, Kim, Sloan, Chen, Keller, d'Alessio, Najita, Uchida, & Houck]watson09
Watson, D. M., Leisenring, J. M., Furlan, E., et al. 2009, , 180, 84, 10.1088/0067-0049/180/1/84
[Wells et al.(2015)Wells, Pel, Glasse, Wright, Aitink-Kroes, Azzollini, Beard, Brandl, Gallie, Geers, Glauser, Hastings, Henning, Jager, Justtanont, Kruizinga, Lahuis, Lee, Martinez-Delgado, Martínez-Galarza, Meijers, Morrison, Müller, Nakos, O'Sullivan, Oudenhuysen, Parr-Burman, Pauwels, Rohloff, Schmalzl, Sykes, Thelen, van Dishoeck, Vandenbussche, Venema, Visser, Waters, & Wright]wells15
Wells, M., Pel, J. W., Glasse, A., et al. 2015, , 127, 646, 10.1086/682281
[White & Ghez(2001)]white01
White, R. J., & Ghez, A. M. 2001, , 556, 265, 10.1086/321542
[Wright et al.(2010)Wright, Eisenhardt, Mainzer, Ressler, Cutri, Jarrett, Kirkpatrick, Padgett, McMillan, Skrutskie, Stanford, Cohen, Walker, Mather, Leisawitz, Gautier, McLean, Benford, Lonsdale, Blain, Mendez, Irace, Duval, Liu, Royer, Heinrichsen, Howard, Shannon, Kendall, Walsh, Larsen, Cardon, Schick, Schwalm, Abid, Fabinsky, Naes, & Tsai]wright10
Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, , 140, 1868, 10.1088/0004-6256/140/6/1868
[Wright et al.(2023)Wright, Rieke, Glasse, Ressler, García Marín, Aguilar, Alberts, Álvarez-Márquez, Argyriou, Banks, Baudoz, Boccaletti, Bouchet, Bouwman, Brandl, Breda, Bright, Cale, Colina, Cossou, Coulais, Cracraft, De Meester, Dicken, Engesser, Etxaluze, Fox, Friedman, Fu, Gasman, Gáspár, Gastaud, Geers, Glauser, Gordon, Greene, Greve, Grundy, Güdel, Guillard, Haderlein, Hashimoto, Henning, Hines, Holler, Detre, Jahromi, James, Jones, Justtanont, Kavanagh, Kendrew, Klaassen, Krause, Labiano, Lagage, Lambros, Larson, Law, Lee, Libralato, Lorenzo Alverez, Meixner, Morrison, Mueller, Murray, Mycroft, Myers, Nayak, Naylor, Nickson, Noriega-Crespo, Östlin, O'Sullivan, Ottens, Patapis, Penanen, Pietraszkiewicz, Ray, Regan, Roteliuk, Royer, Samara-Ratna, Samuelson, Sargent, Scheithauer,
Schneider, Schreiber, Shaughnessy, Sheehan, Shivaei, Sloan, Tamas, Teague, Temim, Tikkanen, Tustain, van Dishoeck, Vandenbussche, Weilert, Whitehouse, & Wolff]wright23
Wright, G. S., Rieke, G. H., Glasse, A., et al. 2023, , 135, 048003, 10.1088/1538-3873/acbe66
[Zapata et al.(2020)Zapata, Rodríguez, Fernández-López, Palau, Estalella, Osorio, Anglada, & Huelamo]zapata20
Zapata, L. A., Rodríguez, L. F., Fernández-López, M., et al. 2020, , 896, 132, 10.3847/1538-4357/ab8fac
[Zhu(2019)]zhu19
Zhu, Z. 2019, , 483, 4221, 10.1093/mnras/sty3358
|
http://arxiv.org/abs/2409.03569v1 | 20240905142210 | QuAK: Quantitative Automata Kit | [
"Marek Chalupa",
"Thomas A. Henzinger",
"Nicolas Mazzocchi",
"N. Ege Saraç"
] | cs.FL | [
"cs.FL"
] |
Exact anomalous mobility edges in one-dimensional non-Hermitian quasicrystals
Lei Pan
September 9, 2024
=============================================================================
Introduction
Preliminaries
Tool
Experiments
Conclusion
|
http://arxiv.org/abs/2409.02909v1 | 20240904174853 | Gravitational radiation from binary systems in Unimodular gravity | [
"Indranil Chakraborty",
"Soumya Jana",
"Subhendra Mohanty"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-ph",
"hep-th"
] |
|
http://arxiv.org/abs/2409.02566v1 | 20240904093240 | How Do You Perceive My Face? Recognizing Facial Expressions in Multi-Modal Context by Modeling Mental Representations | [
"Florian Blume",
"Runfeng Qu",
"Pia Bideau",
"Martin Maier",
"Rasha Abdel Rahman",
"Olaf Hellwich"
] | cs.CV | [
"cs.CV"
] |
Technische Universität Berlin
Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK
Humboldt-Universität zu Berlin
Science of Intelligence, Research Cluster of Excellence, Berlin
envelope florian.blume@tu-berlin.de, runfeng.qu@tu-berlin.de
How Do You Perceive My Face?
Recognizing Facial Expressions in Multi-Modal Context by Modeling Mental Representations
Florian Blumeequal contribution1,40000-0002-7557-1508envelope Runfeng Qu1,40009-0008-7885-8812 Pia Bideau20000-0001-8145-1732 Martin Maier3,40000-0003-4564-9834 Rasha Abdel Rahman3,40000-0002-8438-1570 Olaf Hellwich1,40000-0002-2871-9266
September 9, 2024
=================================================================================================================================================================================================================================================
§ ABSTRACT
Facial expression perception in humans inherently relies on prior knowledge and contextual cues, contributing to efficient and flexible processing. For instance, multi-modal emotional context (such as voice color, affective text, body pose, etc.) can prompt people to perceive emotional expressions in objectively neutral faces. Drawing inspiration from this, we introduce a novel approach for facial expression classification that goes beyond simple classification tasks. Our model accurately classifies a perceived face and synthesizes the corresponding mental representation perceived by a human when observing a face in context. With this, our model offers visual insights into its internal decision-making process. We achieve this by learning two independent representations of content and context using a VAE-GAN architecture. Subsequently, we propose a novel attention mechanism for context-dependent feature adaptation. The adapted representation is used for classification and to generate a context-augmented expression. We evaluate synthesized expressions in a human study, showing that our model effectively produces approximations of human mental representations. We achieve State-of-the-Art classification accuracies of % on the RAVDESS dataset and % on the MEAD dataset. We make our code publicly available[https://github.com/tub-cv-group/recognizing-by-modeling].
§ INTRODUCTION
Integrating multi-modal contextual information is crucial for generating adaptive behavior and enabling an agent to respond appropriately to its environment. More specifically, contextual information encompasses the multi-modal information that enhances the agent's perception and thus is a prerequisite for adaptive behavior. Latest work in cognitive psychology has shown that the human brain leverages contextual cues and prior knowledge to dynamically adjust future predictions about incoming sensory input <cit.>. Concurrent sound, textual cues, or prior knowledge offer additional information that shapes social perception <cit.>. For example, in the interpretation of facial expressions, the individual's voice plays a significant role in understanding their overall expression. As illustrated in <ref>, the same neutral face is perceived as displaying a more positive or negative expression when presented in the context of the respective prior beliefs about the person. In this work, we refer to the perceived facial expression as the synthesized mental representation.
Previous works in computer vision typically either only perform fer <cit.> or generate expressions <cit.>.
Only few approaches exist that perform both tasks jointly <cit.>.
Addressing recognition and generation jointly, however, poses an essential element in modelling human social interaction and creating effective communication between humans and artificial agents, enabling agents to mimic the expressions of conversational partners in a
context-sensitive manner and in direct alignment with the recognized expression.
We propose a novel mechanism of fusing expressions and multi-modal context information encoded in the latent space of a vae <cit.>. In particular, using an attention mechanism, our model dynamically adapts previously learned representations of facial expressions using context. Operating on lower-dimensional representations of facial expressions enables us to simultaneously produce well-aligned classifications and an approximation of the perceived expression. We verify the validity of these approximations in a rating study with 160 human observers and show SOTA classification accuracy on the ravdess and mead datasets. Our model performs a task that is similar to the studies in <cit.>, where human participants were presented with individual photographs paired with affective-semantic contexts. Our contributions are threefold: (1) We present a model that for the first time simultaneously classifies expressions and produces approximations of their mental representations, which are inherently well aligned with the predicted class. (2) We evaluate these approximations in a human study. They capture the fine-grained effects of emotional context on human perception. (3) Our model is explainable due to its ability to visualize the adapted features by generating a context-augmented expression.
§ RELATED WORKS
We discuss three types of works: (1) context-sensitive fer-only, (2) synthesizing expressions, and (3) performing both tasks jointly.
Multi-modal context-sensitive facial expression recognition. Multi-modal context-sensitivity in fer ranges from incorporating visual surroundings as additional information cues <cit.>, to drawing on audio <cit.>, text <cit.>, body pose <cit.> or combining multiple context sources <cit.>. Transformers have successfully been incorporated into fer in multiple approaches <cit.>. Contrastive learning schemes have shown to extract general features that ensure good classification performance on unseen data <cit.>. <cit.> outperform sota methods on ravdess using an unsupervised contrastive learning scheme on four modalities. <cit.> use the transformer architecture and action units to predict expressions on ravdess. In contrast to our work, generating context-augmented versions of the input face is not straightforward in these approaches.
Generating facial expressions in context. gan have been used for generating realistic looking face images <cit.>. <cit.> combined them with a vae to allow smooth transitions between representations. <cit.> employ a gan to generate a talking face from audio. <cit.> generate a 3D talking head based on the audio of the ravdess dataset, which is capable of producing facial expressions. Using artificial characters in a rating study results in a different perceptual experience for humans <cit.>, which makes them inapplicable to our goal. <cit.> train their network in a CLIP-like <cit.> fashion to generate sequences of talking faces. <cit.> employ latent diffusion <cit.> for the same task. None of these works target joint generation of mental representations and classification.
Joint Facial Expression Generation and Recognition. Few works exist that perform the task of simultaneously generating facial expressions while also predicting expression classes. <cit.> train two gan cooperatively to recognize facial expressions under large view angle changes. <cit.> employ a gan <cit.> to overcome the lack of labeled training data in fer by jointly training it together with an expression recognition network. Context sensitivity is not part of their work. <cit.> also draw on a gan-based architecture and argue that by generating expressions, they help solve the issue of appearance variance in fer and lack of training data. Their network processes only input images and does not take additional modalities into account. None of the discussed works model mental representations through synthesized expressions.
§ METHOD
In this Section, we describe our multi-modal approach that adapts an expression image using affective audio. Our proposed adaption module allows us to classify a facial expression in light of context and at the same time synthesize a novel facial expression as it would have been perceived by a human. We employ a two-stream encoder-decoder backbone, based on a vae-gan combination <cit.>, and an attention mechanism to combine their latent spaces in a context-sensitive way. <ref> depicts an overview of this design: The representations learned by the face_reconst_net and the context_reconst_net are adapted in the context_att_net, by shifting the facial features using the context features. The shifted representation is visualized using the fixed decoder (D) of the face_reconst_net and classified with the classification head (E). Our model operates and is trained on individual frames together with audio context. To evaluate our model on videos, we perform majority voting over classes of the frames.
§.§ Face and Context Reconstruction Network
The face and context reconstruction networks (called face_reconst_net and context_reconst_net respectively) both consist of a vae and gan discriminator. We follow <cit.> and add a gan discriminator for training to increase image quality.
VAE Module. Similarly to <cit.>, we add skip connections to the en- and decoder of the vae to speed up the training process and allow processing of larger image resolutions. Let enc_i be the expression encoder of (A1) and face_input∈ℝ^m × n × 3 an input expression image:
(img_mean, img_variance) = enc_i(face_input)
where img_mean∈ℝ^latent_space_dim and img_variance∈ℝ^latent_space_dim denote the mean and variance of a Gaussian distribution, respectively. The mean ctx_mean∈ℝ^latent_space_dim and variance ctx_variance∈ℝ^latent_space_dim of the context Mel spectrogram context_input∈ℝ^u × v are computed using the encoder of (B1), and the following formulas apply analogously.
The prior loss term keeps the latent distribution close to a Gaussian:
prior_loss = d_kl( posterior_latent_input_image || prior_latent_image)
d_kl is the kl_divergence, posterior_latent_input_image is the posterior of the latent vector latent_rep_image∈ℝ^latent_space_dim under input face_input and prior_latent_image is the Gaussian prior over the latent vector.
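For a diagonal Gaussian posterior and a standard normal prior, this KL term has a closed form. A minimal sketch in Python follows; it assumes a PyTorch-style encoder that returns the log-variance rather than the variance, which is a common parameterisation but an assumption here, not a detail stated in the text:

import torch

def kl_prior_loss(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over the
    # latent dimensions and averaged over the batch.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    return kl.sum(dim=1).mean()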
The reconstruction loss term penalizes the feature map of the discriminator (A2 and B2 in <ref>) at a certain level, as proposed in <cit.>, using mse:
disc_loss = mse(dis_i^l(dec_i(enc_i(face_input))), dis_i^l(face_input))
where face_input is the input image and dis_i^l the l-th feature map of the discriminator.
GAN Module. The gan discriminator's task is to distinguish between input images face_input from the dataset and reconstructions img_reconst. In addition, it is tasked to identify reconstructions from random noise sampled_noise∼𝒩(0, 1) to enforce generation capabilities in the vae. The overall loss is the following:
gan_loss = log (dis_i(face_input)) + log (1 - dis_i(dec_i(enc_i(face_input))))
+ log (1 - dis_i(dec_i(sampled_noise)))
Joint Training. In the pretraining phase, face_reconst_net and context_reconst_net are trained unsupervised for input reconstruction and are fixed in subsequent training. We follow the training algorithm of <cit.> and compute the joint update as follows:
θ_enc +⟵ - ∇_θ_enc(βprior_loss + disc_loss)
θ_dec_i +⟵ - ∇_θ_dec_i( disc_loss - gan_loss)
θ_dis_i +⟵ - ∇_θ_dis_igan_loss
where β∈ℝ is a hyperparameter to weigh the prior loss.
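One joint training step can be sketched as follows. This is an illustration of the update rules above under stated assumptions (PyTorch, a reparameterised sampler, a discriminator that returns its logit together with the l-th feature map, one optimizer per module); all names are illustrative and need not match the released implementation:

import torch
import torch.nn.functional as F

def joint_vae_gan_step(enc, dec, disc, x, opt_enc, opt_dec, opt_disc, beta=1e-5):
    # Forward pass: encode, reparameterise, reconstruct, and generate from noise.
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    x_rec = dec(z)
    x_gen = dec(torch.randn_like(mu))

    d_real, f_real = disc(x)       # assumed to return (logit, l-th feature map)
    d_rec, f_rec = disc(x_rec)
    d_gen, _ = disc(x_gen)

    # Prior, feature-matching reconstruction, and GAN loss terms as above.
    l_prior = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(1).mean()
    l_rec = F.mse_loss(f_rec, f_real.detach())
    real, fake = torch.ones_like(d_real), torch.zeros_like(d_real)
    l_gan = (F.binary_cross_entropy_with_logits(d_real, real)
             + F.binary_cross_entropy_with_logits(d_rec, fake)
             + F.binary_cross_entropy_with_logits(d_gen, fake))

    # Decoupled updates: each module only receives the gradient of "its" loss.
    losses = {"enc": beta * l_prior + l_rec, "dec": l_rec - l_gan, "disc": l_gan}
    params = {"enc": list(enc.parameters()), "dec": list(dec.parameters()),
              "disc": list(disc.parameters())}
    opts = {"enc": opt_enc, "dec": opt_dec, "disc": opt_disc}
    for name in ("enc", "dec", "disc"):
        grads = torch.autograd.grad(losses[name], params[name],
                                    retain_graph=True, allow_unused=True)
        for p, g in zip(params[name], grads):
            p.grad = g
    for opt in opts.values():
        opt.step()
        opt.zero_grad()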
§.§ Context-Attention Network (CAN) and Classification Head
Our proposed attention mechanism shifts the facial expression distribution of the face_reconst_net based on the context distribution of the context_reconst_net. <ref> illustrates this fusion technique.
Context-Attention Net. The context_att_net computes attention maps based on mean and variance of the distributions of the context and the facial expression input. We use these maps to compute offsets mean_offset∈ℝ^latent_space_dim and variance_offset∈ℝ^latent_space_dim to shift the face mean and variance context-dependently. We compute the following parameters for the attention mechanism:
query_mu = attention_weights_mu_qctx_mean
key_mu = attention_weights_mu_kimg_mean
value_mu = attention_weights_mu_vimg_mean
attention_weights_mu_q, attention_weights_mu_k,attention_weights_mu_v∈ℝ^latent_space_dim×latent_space_dim are the trainable parameters of linear layers (bias omitted for simplicity). We compute the attention map attention_map∈ℝ^latent_space_dim×latent_space_dim as
attention_map = softmax(query_mu key_mu^T)
where the softmax function is applied row-wise. Note that we reverse the attention mechanism from <cit.>: instead of computing the dot products of the query with all keys, we compute the dot product of a key with all queries. We do this to obtain attention on the facial mean conditioned on the context mean.
mean_offset = attention_mapvalue_mu
shifted_mean = mean_offset + img_mean
<ref> provides a visualization of these relationships. During inference we can vary the strength of the context influence by multiplying the offset with a weight, to allow smooth modulation of the offset:
shifted_mean = context_weight·mean_offset+ img_mean
We compute the new (shifted) variance shifted_variance analogously, the only difference being that we operate in log scale.
Joint Context Attention Network and Classification Head Training. To train the context_att_net and classification head jointly, we first initialize the latter by training it directly on the facial features using the expression classes and cross-entropy loss. Next, we train the network together using the following loss:
ℒ = cross_entropy(E(shifted_mean), y_true) + αd_kl(posterior_latent_rep_image||posterior_latent_rep_shifted_face_context)
where cross_entropy is the cross-entropy, E is the classification head and α∈ℝ is a hyperparameter to regularize the shift. Furthermore, during training, we propose a novel data augmentation approach for multi-modal settings, where we swap contexts for a specific actor within its expression class.[Note, this data augmentation is only possible for multi-modal datasets, where content comes with different context variations within their respective expression class.]
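A compact sketch of this objective and of the context-swapping augmentation is given below. It assumes diagonal Gaussian posteriors so that the KL regulariser has a closed form, and it performs the swap on encoded context features; the stage at which the swap happens is not fixed by the text, so this is an illustrative choice and all helper names are invented:

import torch
import torch.nn.functional as F

def gaussian_kl(mu_p, logvar_p, mu_q, logvar_q):
    # KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) ) for diagonal Gaussians.
    return 0.5 * (logvar_q - logvar_p
                  + (logvar_p.exp() + (mu_p - mu_q).pow(2)) / logvar_q.exp()
                  - 1.0).sum(dim=1).mean()

def downstream_loss(classifier, face_mu, face_logvar,
                    shifted_mu, shifted_logvar, labels, alpha=1e-5):
    ce = F.cross_entropy(classifier(shifted_mu), labels)
    reg = gaussian_kl(face_mu, face_logvar, shifted_mu, shifted_logvar)
    return ce + alpha * reg

def swap_contexts(ctx_features, actor_ids, labels):
    # Augmentation: permute context features among samples that share the
    # same actor and the same expression class.
    out = ctx_features.clone()
    for a in actor_ids.unique():
        for c in labels.unique():
            idx = ((actor_ids == a) & (labels == c)).nonzero(as_tuple=True)[0]
            if idx.numel() > 1:
                perm = torch.randperm(idx.numel(), device=idx.device)
                out[idx] = ctx_features[idx[perm]]
    return out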
§ EXPERIMENTS
We evaluate our approach on publicly available datasets <cit.>. We discuss the results in Section <ref> and <ref>. An ablation study is shown to support the understanding of our proposed approach for fer in multi-modal context.
§.§ Datasets
celeb_db (Pretraining). We pretrain the face_reconst_net unsupervisedly for face reconstruction on the large-scale dataset celeb_db <cit.>. celeb_db is a prevalent dataset for face attribute recognition and consists of roughly 200k images showing 10k different identities. The context_reconst_net is trained using the respective downstream datasets.
ravdess (Downstream). ravdess <cit.> consists of videos of 24 identities. Each video is labeled with one of the seven expression classes calm, happy, sad, angry, fearful, surprise, and disgust, with an additional binary label for the intensity. We extract 16 frames from each video at regular intervals and use the video's label for each. Following <cit.> the neutral class is omitted to reduce noise.
mead (Downstream). mead <cit.> is a large-scale dataset targeting talking-face generation, which also features fer labels. Similar to <cit.>, mead contains videos of 60 actors speaking with different emotions at different intensity levels. We use the frontal view recordings as proposed by <cit.> and apply the same frame extraction approach as for ravdess.
§.§ Implementation Details
We set the batch size to 256, β (<ref>) and α (<ref>) to 0.00001 and context_weight=1.0 (<ref>). The dimension of the latent space of the face_reconst_net and context_reconst_net is latent_space_dim = 512. We use MTCNN <cit.> to detect faces. We then resize them to 128 × 128 pixels, which is also the size of the reconstructed and generated images. We apply random horizontal flipping as data augmentation. For generating the Mel spectrograms, we chose 128 Mel bins, a sample rate of 22050, a window and FFT length of 1310, and a hop length of 755. We use the Adam optimizer <cit.> with a learning rate of 0.00003 and a weight decay of 0.01.
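The audio front end can be reproduced from the parameters listed above. The sketch below uses torchaudio and a placeholder file name; only the numerical settings are taken from the text, the choice of library and any scaling of the spectrogram are assumptions:

import torchaudio

mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050,   # target sample rate
    n_fft=1310,          # FFT length
    win_length=1310,     # window length
    hop_length=755,      # hop length
    n_mels=128,          # number of Mel bins
)

waveform, sr = torchaudio.load("speech.wav")   # placeholder path
if sr != 22050:
    waveform = torchaudio.functional.resample(waveform, sr, 22050)
mel = mel_transform(waveform)                   # (channels, 128, frames)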
Reconstruction Pretraining. We pretrain the face_reconst_net unsupervised for facial image reconstruction on celeb_db and the context_reconst_net for context reconstruction on the Mel spectrograms of the downstream datasets. Pretraining is run for 400 epochs. The learning rate is decreased by factor 10 after 150 and 300 epochs.
Downstream Classification Training. During downstream training, the face_reconst_net is fixed and the last two layers of the context_reconst_net are fine-tuned. We first initialize the one-layer classification head by training it directly on the facial features of the face_reconst_net to obtain a suitable initialization for its weights. Next, we train the context_att_net and the single-layer classification head jointly together using the loss from <ref>.
Note that the decoder from (A1), which we use to visualize the shifted expression, is fixed and not trained in this step.
For both ravdess and mead, we performed k-fold cross validation with k=10, similarly to other works <cit.>, splitting the folds along the identities (one identity can only occur in test, validation or train set).
§.§ Facial Expression Recognition Performance
We provide classification results for ravdess in <ref>, and for mead in <ref>. All compared methods use a dataset split by identities, ensuring no identity seen during training appears during testing.
For our method, the final prediction for a test video is obtained by majority voting across frames.
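The frame-to-video aggregation is a plain majority vote; a minimal sketch (ties are broken by the class encountered first, a detail the text does not specify):

from collections import Counter

def video_prediction(frame_logits):
    # frame_logits: tensor of shape (num_frames, num_classes) with per-frame outputs.
    frame_classes = frame_logits.argmax(dim=1).tolist()
    return Counter(frame_classes).most_common(1)[0][0]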
Our model achieves an accuracy of % on ravdess and % on mead, matching SOTA performance on classical fer. We largely outperform methods that tackle the dual problem of classification and synthesis of the corresponding percept, by 17.75% on ravdess and 8.47% on mead.
The influence of context on the final fer accuracy is highlighted by visualizing the per-class accuracy in confusion matrices for two conditions: in <ref>, the context_att_net computed the offset for the facial features based on the facial (instead of the context) features to simulate missing context. In <ref>, it received the face together with the context features, as intended by our approach. The latter exhibits higher (or equal) probabilities on the diagonal for every class. This proves that the reported accuracy cannot be attributed solely to the computational capabilities of the context_att_net but is a result of the meaningful adaptation procedure.
The tsne plots in <ref> visualize the structure of the latent space in different conditions: (1) The face-only features as we receive them from the face_reconst_net do not cluster in any particular way (<ref>). (2) Employing our context_att_net but providing the face features twice instead of combining with the context leads to a more structured latent space (<ref>) and (3) taking the audio context into account leads to a clustered latent space that makes classification easier (<ref>).
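Such latent-space visualizations can be reproduced from the (shifted) means with an off-the-shelf embedding; a sketch using scikit-learn, where the perplexity and initialisation are illustrative choices since they are not reported:

import numpy as np
from sklearn.manifold import TSNE

def embed_latents(means: np.ndarray) -> np.ndarray:
    # means: (num_samples, 512) array of facial or context-shifted latent means.
    return TSNE(n_components=2, perplexity=30, init="pca",
                random_state=0).fit_transform(means)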
Ablation Study. <ref> lists classification performance of our context_att_net and its components. We assess the quality of face and context features independently by classifying learned features directly using a single-layer classifier. Experiments for the context_att_net in a simulated uni-modal mode are run by providing face or context twice. If no context is provided, the context_att_net performs similarly to the single layer classifier using learned representations. However, when learned features are adapted to context, the final classification performance improves by 15.44% (strict audio). Additionally, our new data augmentation technique for multi-modal settings further enhances performance by 1.72%.
§.§ Mental Representations: Context-Augmented Expressions
We show qualitative results of our approach on ravdess in different conditions (additional generations for ravdess and mead in the suppl. material). To visualize the adapted features obtained from the context_att_net, we use the decoder (D) from <ref>. <ref> depicts the effect of the two kinds of intensities provided in ravdess on neural input faces. The stronger the intensity, the stronger the facial expression in the generated image. The strength of the effect is subtle but resembles human perception <cit.>, which is our goal.
To our knowledge, the only publications showing generations on ravdess are <cit.>, <cit.> and <cit.>. They aim at rendering an input face that strongly resembles the emotion in the accompanying audio file. This defies our goal of capturing the subtle changes in appearance a human observer would have, nevertheless we provide a comparison in the suppl. material.
How influential is the context in which humans perceive facial expressions?
To empirically evaluate the strength of contextual influence that best captures context effects in humans, we aimed to manually control the computed offset using context_weight of <ref>. As shown in <ref>, this allowed us to generate facial expressions with varying degrees of contextual influence, which makes them useful as experimental stimuli. The context weight context_weight is increased from 0 to 1 in 0.1 steps from left to right. Note that this weight is independent of the two intensities provided by ravdess (<ref>). The smooth transition demonstrates that our model creates continuous representations of the input data.
§.§ Human Study: Verifying Synthesised Mental Representations
To assess our model's ability to replicate the contextual impact of emotional speech on facial expression perception in human observers, we conducted two experiments with a total of 160 participants. In the first experiment, 80 participants evaluated neutral facial expressions from the ravdess dataset while listening to the depicted actor's speech with either happy or angry prosody. We restricted the study to these two classes because they are of opposite valence. This allowed us to obtain more fine-grained ratings on a continuous Likert scale. We aimed to measure the impact of the audio's emotion on perceived facial expressions. In the second experiment, a different group of 80 participants rated facial expressions synthesized by our model under happy or angry contextual influences. Each face was presented with five different context weights context_weight.
To test whether the model's generations can approximate human-like perception, we compared ratings with those obtained in Experiment 1. This comparison was done for each context weight, determining the parameter that best approximates human responses to contextual influences.
Results. Mean facial expression ratings for different emotional context conditions across both experiments are depicted in <ref>.
In experiment 1, a linear mixed effects model was employed, with the factor emotional context (happy vs. angry audio). A significant effect of emotional context on facial expression ratings was observed (b = 1.00, p < .001), indicating that identical neutral faces were perceived as more negative when accompanied by angry speech compared to happy speech. For experiment 2, a linear mixed effects model was run with the factors emotional context (happy vs. angry generated expression) and weight (context_weight = 0, 0.25, 0.5, 0.75, and 1). To compare rating differences within each weight, the emotional context factor was nested within the weight factor. As shown in <ref> (Experiment 2), expression ratings for faces generated in the context of angry vs. happy prosody did not differ at weights 0 and 0.25 (bs ≤ 0.19, ps > .269), but showed a significant and increasing impact of emotional context at weights 0.5 (b = 0.37, p = .033), 0.75 (b = 0.51, p = .004), and 1 (b = 0.73, p < .001).
Discussion. The results of our rating study reveal two key findings: 1) Our model effectively captures the impact of context, exemplified by emotional prosody, on human facial expression perception. Our generations reflect how a neutral face would subjectively appear to a human observer when associated with a positive or negative context. 2) The model's efficacy in shifting facial appearance towards contextual emotions does not require further adjustment post-training, as evidenced by the optimal context weight being context_weight = 1.
§ CONCLUSION
In this work, we introduced a novel approach that simultaneously enhances expression class predictions by taking affective context into account, and provides the means to generate an approximation of the expression a human would perceive. Our model achieves sota accuracy on ravdess and mead, and outperforms joint competitor methods. The implications of our rating study showcase that our model not only accurately quantifies human context-sensitive perception but also successfully mirrors the altered subjective experience back to human observers. This has significant potential, particularly for social artificial agents, that could leverage contextual information to adapt to human mental and emotional states, ensuring successful communication. Our model also addresses the dual nature of context-sensitivity of human perception: on the one hand, leveraging context enhances perceptual efficiency and flexibility <cit.>, while on the other hand, it bears the potential for adversely biased perception, e.g., when contextual information originates from untrustworthy sources <cit.>.
§ ACKNOWLEDGEMENTS
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 “Science of Intelligence” – project number 390523135. In addition, this work has been partially supported by MIAI@Grenoble Alpes, (ANR-19-P3IA-0003)
§ DISCLOSURE OF INTERESTS
The authors have no competing interests to declare that are relevant to the content of this article.
§ ABSTRACT
In our work, we presented a model that encodes facial images and audio context using vae. We developed a context fusion network called context_att_net which shifts the latent facial distribution, to allow more accurate classifications by the classification head, as well as generate an approximation of the facial expression a human would perceive. Here, we provide additional details on the joint training of the context_att_net and classification head and provide a derivation of their loss function. We also list more detailed parameters of our rating study. Lastly, we provide more facial expression generations that show the capabilities of our model to purposefully fuse a facial image with affective audio context. The generations are based on neutral face images paired with different audio contexts to mimic the setting in our rating study. In addition, we release the code of our work.
§ ARCHITECTURE DETAILS
The number of layers of the components of our model is given in <ref>.
§ RATING STUDY
In this section we provide additional information about the procedure used for our rating study with human participants. The study was conducted according to the principles expressed in the Declaration of Helsinki and was approved by the ethics committee of Department of Psychology at Humboldt-Universität zu Berlin. All participants gave their informed consent.
§.§ Materials
In Experiment 1, faces of 24 actors from the RAVDESS database with a neutral expression and audio files in which the actors said the sentence “Kids are playing by the door” either with angry or happy emotional prosody served as stimuli. The images were neutral as generated by our network with a weight parameter of 0. We chose a generated face instead of the original frame from the RAVDESS database to eliminate potential effects of low-level visual differences between generated images and images from the database.
In Experiment 2, neutral faces of the same 24 actors were shifted towards the model’s representation of the face in the context of either a happy audio or an angry audio, with five different weights: 0 (i.e. the neutral expression also presented in Experiment 1), 0.25, 0.5, 0.75, and 1 (i.e. the model’s originally trained weight parameter).
In both experiments, we used counterbalancing across participants, such that one participant saw each actor either in the happy or in the angry emotion condition and each face was shown equally as often in each emotion condition.
§.§.§ Participants
The study adhered to the principles of the Declaration of Helsinki, approved by the ethics committee of Department of Psychology at Humboldt-Universität zu Berlin. Participants were recruited from Prolific.com and received monetary compensation. For each experiment, 80 participants were recruited. The final samples included 72 English native speakers aged 18–35 years (M = 29.01) in Experiment 1 and 77 English native speakers aged 18–35 years (M = 28.58) in Experiment 2.
The samples were balanced, with 50% of participants identifying as female and 50% identifying as male. We used the following criteria for inclusion in the final samples: Participants who reported not being highly distracted during the experiment, participants who reported not giving random ratings, participants who reported being able to hear all the audio files (Experiment 1), and participants who did not report rating only the audio files (Experiment 1).
Participants were pre-screened for the following criteria based on their data provided to Prolific.com
Age: 18–35;
Prison: No;
Approval Rate: 90–100%;
Units of alcohol per week: 0, 1-4, or 5-9;
Neurodiversity: No;
Dyslexia: No;
Vision: Yes;
Hearing difficulties: No;
Cochlear implant: No;
Colourblindness: No;
Head Injury—Knock out history: No;
Head Injury: No;
Mental health/illness/condition - ongoing: No;
Medication use: No;
Mild cognitive impairment/Dementia: No;
Autism Spectrum Disorder: No;
Depression: No;
Mental illness daily impact: No;
Anxiety: No;
ADD/ADHD: No;
Anxiety Severity: No;
Mental Health Diagnosis: No;
Mental Health Treatment: None;
First Language: English.
§.§ Procedure
Both experiments followed a similar procedure: After providing informed consent, participants rated the facial expressions of stimuli described above in random order on a 9-point Likert scale ranging from “very negative” to “very positive” with “neutral” in the middle.
In Experiment 1, ratings pertained to a neutral face presented with happy or angry audio, while in Experiment 2, faces modified by our model to depict contextual influence were rated. Post-rating, participants answered questions about their task experience (e.g., whether they were distracted, whether they gave their ratings randomly, and their potential awareness of the hypothesis tested in the study), were debriefed on the study's purpose, and directed back to Prolific.com.
§.§ Statistical Analyses
For Experiment 1, we ran a linear mixed effects model with the factor emotion (angry vs. happy audio) coded as a sliding difference contrast (meaning that the estimated effect reflects the predicted mean difference between faces seen with an angry audio vs. a happy audio).
For Experiment 2, we ran a linear mixed effects model with the factor emotion (generated face shifted towards an angry vs. a happy expression) and the factor weight (with five levels, 0, 0.25, 0.5, 0.75, and 1). To compare the differences in expression ratings between the happy and the angry condition within each of the weights, the factor emotion was nested within the factor weight.
For both experiments, we modelled random intercepts for participants and items (i.e. depicted actors), as well as random slopes for the effect of emotion over participants and items <cit.>. The significance of fixed effects coefficients (p < 0.05) was tested by Satterthwaite approximation.
§.§ Results
Separate models were run for each weight level, including the factors emotional context, experiment (2 vs. 1), and their interaction. With a weight of 1, no significant interaction between experiment and emotional context was found (b = -0.22, p = .084), whereas this interaction was significant for all other weights (bs < -0.42, ps < .001). <ref> shows the results including interaction significance.
§ RAVDESS COMPARISON TO SOTA GENERATIONS
In the main paper we omitted a comparison to SOTA generation methods <cit.> because they pursue a different purpose: making neutral input images strongly resemble the accompanying audio context. Due to our objective of synthesizing mental representations, such heavy shifts in the facial expression would be too strong for a human observer. Changes in subjective appearance are rather subtle <cit.>. For the sake of completeness, we provide a comparison to SOTA generations in <ref>. The two SOTA competitors use a target ground-truth sequence they want to model, whereas we compute our adaptation only based on the expression class and the model implicitly learns a sensible shift.
§ ADDITIONAL SYNTHESIZED FACIAL EXPRESSIONS
We provide additional generations of facial expressions on the ravdess dataset using our proposed model. <ref> shows that our model picks up the subtle differences in the mouth opening in the generations. <ref> show generations for a neutral input image (left column) paired with different contexts. The context weight context_weight is increased from 0 (second column) to 1 (rightmost column) in 0.1 steps. <ref> shows additional generations for variations of neutral input images with different contexts, to showcase that our model adapts the style of the input image. The leftmost column is the neutral input image, from left to right follows the reconstruction without context, calm audio context, happy audio context, sad audio context, fearful audio context, disgusted audio context, and surprised audio context. <ref> shows generations of a neutral input image (top row) paired with normal (middle row) and strong context intensity (bottom row). The ravdess dataset provides these two intensities as a binary label for every sample. The context expression classes are from left to right: happy,
sad, angry, fearful, disgusted, surprised.
§ MEAD GENERATIONS
Reconstructions on the mead dataset are shown in <ref>.
|
http://arxiv.org/abs/2409.02315v1 | 20240903220432 | Revisiting the symmetry-resolved entanglement for non-invertible symmetries in $1{+}1$d conformal field theories | [
"Jared Heymann",
"Thomas Quella"
] | hep-th | [
"hep-th",
"cond-mat.stat-mech",
"cond-mat.str-el",
"math-ph",
"math.MP"
] |
Defect Landscape Engineering to Tune Skyrmion-Antiskyrmion Systems in FeGe
Serena Eley
September 9, 2024
==========================================================================
§ ABSTRACT
Recently, a framework for computing the symmetry-resolved entanglement entropy for non-invertible symmetries in 1+1d conformal field theories has been proposed by Saura-Bastida, Das, Sierra and Molina-Vilaplana [Phys. Rev. D 109, 105026, https://doi.org/10.1103/PhysRevD.109.105026].
We revisit their theoretical setup, paying particular attention to possible contributions from the conformal boundary conditions imposed at the entangling surface – a potential subtlety that was not addressed in the original proposal. We find that the presence of boundaries modifies the construction of projectors onto irreducible sectors, compared to what can be expected from a pure bulk approach. This is a direct consequence of the fusion algebra of non-invertible symmetries being different in the presence or absence of boundaries on which defects can end.
We apply our formalism to the case of the Fibonacci category symmetry in the three-state Potts and tricritical Ising model and the Rep(S_3) fusion category symmetry in the SU(2)_4 Wess-Zumino-Witten conformal field theory. We numerically corroborate our findings by simulating critical anyonic chains with these symmetries as a finite lattice substitute for the expected entanglement Hamiltonian. Our predictions for the symmetry-resolved entanglement for non-invertible symmetries seem to disagree with the recent work by Saura-Bastida et al.
§ INTRODUCTION
Entanglement plays a key role in both understanding and simulating many-body systems. It is used to identify phenomena such as quantum phase transitions and critical points <cit.>, or to identify phases that lack local order parameters, such as topological phases <cit.>.
Not only is entanglement used to understand phenomena in many-body systems <cit.>, it is also used for efficient simulation of them.
In particular, the success of tensor network algorithms <cit.> hinges on the entanglement structure displayed by ground states <cit.>: understanding entanglement allows more efficient simulations of quantum many-body systems <cit.>.
Recently, there has been a surge of interest in understanding the interplay between entanglement and another pillar of modern physics: symmetries.
Of particular interest has been the so-called symmetry resolved entanglement entropy (SREE), which quantifies the entanglement entropy in the different representations of a symmetry.
The idea of symmetry resolution dates back to Ref. <cit.>, where spin-resolution of critical quantum spin chains was studied.
More recently, the SREE has been studied extensively in 1+1d conformal field theories (CFTs) due to the powerful analytical methods available <cit.>
(for a more comprehensive review see Ref. <cit.>).
The SREE has also found applications in studying strongly correlated models <cit.> where it can provide non-trivial information concerning the configuration of the ground state or serve as an order parameter <cit.>.
From the numerous studies of the SREE in 1+1d CFTs, a key result has emerged: the equipartition of the SREE.
At leading order in the ultraviolet cut off the SREE is equally distributed among the different representations of the theory.
Violations of the equipartition are observed at subleading order and are generally sensitive to the dimension of the representation in question <cit.>.
A powerful approach to symmetry resolution in 1+1d CFTs adopts the framework of boundary CFT (BCFT)
<cit.>.
The BCFT approach to the SREE arises because the spectra of the entanglement Hamiltonian in 1+1d CFTs is described by a BCFT <cit.>.
A BCFT is present because computing the entanglement entropy requires a bipartition of the Hilbert space, which comes with a choice of boundary conditions at the entangling surface <cit.>.
The choice of boundary conditions dictates the energy spectrum of the entanglement Hamiltonian <cit.> and which symmetries of the bulk are present in the BCFT <cit.>.
By using the BCFT approach to the SREE, when the symmetry in the BCFT is a finite group G, Ref. <cit.> found
lim_q→ 1 [S_n(q,r) - S_n(q)] = ln( d_r^2 / |G| ) ,
where d_r is the dimension of the representation r of G and |G| is the cardinality of G.
Here, S_n(q,r) is the n-th Rényi entropy within the irreducible representation r, while S_n(q) is the n-th Rényi entropy.
The q → 1 limit corresponds to taking the ultraviolet cut off (lattice spacing) to 0 (or the width of the annulus in the BCFT to infinity).
When the symmetry of interest is a finite group G, <ref> indicates that the equipartition of the SREE is broken by a term that depends on the dimension of the r representation of G and the cardinality of G.
In recent years, after the seminal work of Ref. <cit.> the notion of a symmetry in quantum field theory and quantum lattice models has been generalised systematically and now extends beyond the concept of groups.
As part of this endeavour, there have also been substantial efforts to abstract the concept of the symmetry (and its representations) from that of the underlying physical system in terms of what has become to be known as a “SymTFT” or sandwich construction.
For recent reviews on these very dynamical areas of research and references to the original literature see Refs. <cit.>.
From the modern perspective that has emerged, symmetries should be identified with topological defects of various dimensions and composition of symmetries be realised as defect fusion.
In this framework it is very natural to allow for symmetries that are not invertible.
Predating the more recent efforts, the consideration of topological defects and their role in terms of describing symmetries and dualities has a long history in the study of 1+1d rational CFTs (RCFTs), see e.g. <cit.>.
Recently, in Ref. <cit.> the SREE was analysed for non-invertible symmetries, dubbed the Cat-SREE, generated by Verlinde lines in 1+1d RCFTs.
The tricritical Ising model was studied and the entanglement entropy was resolved with respect to the Fibonacci category.
The Fibonacci category has two simple objects C= {1, W} with non-trivial fusion relation W × W = 1 + W and two irreducible representations, also labelled by C.
Analogous to the group-like case, Eq. (33) of Ref. <cit.> implies
lim_q → 1 [ S_n(q,r) - S_n(q) ] = ln( d_r^2 / C^2 )
where d_r is the (quantum) dimension of the representation r ∈ C and C^2 = ∑_c ∈ C d_c^2.[We note that our convention for the total quantum dimension C differs from the one used in <cit.>.]
However, we find that the above breaking of equipartition <ref> is incorrect.
Instead, for symmetries generated by Verlinde lines, we find that the breaking of equipartition of the SREE is
lim_q → 1 [ S_n(q,r) - S_n(q) ] = ln( d_r / √C ) ,
where √C = ∑_c ∈ C d_c and r ∈ C labels an irreducible representation of C.
When the Verlinde lines form a finite (abelian) group <ref> is in agreement with <ref> because the quantum dimensions of invertible symmetries are one.
The deviation from Ref. <cit.> can be succinctly summarised as follows: the fusion algebra of non-invertible symmetries is sensitive to the presence of boundary conditions and is different than the fusion algebra of the bulk.
In this work, we find that if one wishes to understand the relation between entanglement measures and non-invertible symmetries, extra care due to the presence of boundaries is required – an issue that was not addressed in Ref. <cit.>.
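The quantitative difference between the two candidate breaking terms is already visible for the Fibonacci category itself, whose quantum dimensions are d_1 = 1 and d_W = φ = (1+√5)/2. The following short numerical check is our own illustration, using the conventions for C^2 and √C defined above:

import math

phi = (1 + math.sqrt(5)) / 2                   # quantum dimension of W
dims = {"1": 1.0, "W": phi}

sqrt_C = sum(dims.values())                    # sqrt(C) = sum_c d_c
C_sq = sum(d ** 2 for d in dims.values())      # C^2 = sum_c d_c^2

for r, d_r in dims.items():
    term_bulk = math.log(d_r ** 2 / C_sq)      # breaking term implied by the earlier proposal
    term_bdry = math.log(d_r / sqrt_C)         # breaking term found in this work
    print(r, round(term_bulk, 4), round(term_bdry, 4))

For instance, for r = W the two terms evaluate to ln(φ^2/(1+φ^2)) ≈ -0.32 and ln(φ/(1+φ)) ≈ -0.48, so the two expressions genuinely disagree; this is the discrepancy analysed in the remainder of the paper.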
Before proceeding to more technical considerations, we note that topological defects in the presence of boundaries find numerous other applications.
In the CFT setting, they have been used to analyse boundary renormalisation group flows <cit.> and study open string field theory <cit.>.
The algebraic structure formed by considering topological defects terminating on two domain walls (or boundaries) is known as the “ladder algebra” in the mathematics literature <cit.>;
this has its roots in analysing gapped boundaries and domain walls for topological phases <cit.>.
However, the ladder algebra can also be viewed analogously to the tube algebra in the bulk <cit.>, providing the appropriate mathematical setting to analyse generalised symmetries in the presence of boundaries.
This has motivated a recent SymTFT perspective of the ladder algebra to study the representation theory of solitons <cit.>. We would also like to mention <cit.>, where the ladder algebra was discussed in the context of the three-state Potts model.
The rest of this paper is organised as follows.
In <ref> we briefly review the BCFT approach to computing the SREE.
In <ref> we provide an explicit counter example to the results in Ref. <cit.> by analysing the SREE with respect to the Fibonacci category in the three-state Potts model.
In <ref> we briefly review topological defect networks and the action of defects on states in the open string Hilbert space.
This allows us to construct projections on the open string Hilbert space and analyse the asymptotic limit of the SREE in <ref>.
A crucial ingredient is a detailed understanding of the fusion algebra of defects in the presence of boundaries. We then proceed to apply our framework to specific examples in <ref> and numerically corroborate our results by simulating anyonic chains.
§ BRIEF REVIEW OF THE BCFT APPROACH TO THE SREE
Defining the entanglement between regions A and B for a state described by the density matrix ρ requires a bipartition of the Hilbert space
ℋ = ℋ_A ⊗ℋ_B .
The reduced density matrix ρ_A = _B ρ corresponding to region A is obtained by tracing out the degrees of freedom in B, and the entanglement entropy can be computed.
However, as explained in Ref. <cit.>, a well defined bipartition of the total Hilbert space eq:bipartition-1 requires a choice of boundary conditions at the entangling surface (the regions separating A and B).
The need for boundary conditions at the entangling surface is most explicit in theories whose Hilbert space does not admit a simple tensor product decomposition, such as anyonic chains <cit.> which in turn represent important examples of physical systems with non-invertible symmetries.
For a conformal field theory, it is natural to choose boundary conditions that preserve the (extended) chiral algebra.
Formally, this is achieved by a factorisation map ι_a,b:
ι_a,b: ℋ→ℋ_A,ab⊗ℋ_B,ab ,
where ℋ_A,ab (ℋ_B,ab) is the Hilbert space in region A (B) with boundary conditions a and b at the entangling surface.
In the path integral, the factorisation map is implemented by inserting two disks of radius ε≪ 1 that serve as an ultraviolet (UV) cut off.
The region A of length ℓ is then mapped to an annulus of width w by a conformal transformation <cit.>, see <ref>.
The relation between w and ℓ depends on the initial geometry of the problem: for instance, whether the Hilbert space is defined on an infinite line or a finite periodic chain, etc.
For a comprehensive list of different geometries see Ref. <cit.>.
For our purposes, we are interested in the BCFT defined on an annulus of width w and it is not important what the initial geometry of the problem was.
The reduced density matrix ρ_A,ab = _B,abρ is obtained by tracing out the degrees of freedom in ℋ_B,ab, where ρ is the density matrix of the ground state.
In terms of the parameters of the conformal field theory, the reduced density matrix is
ρ_A,ab = q^L_0 - c/24/Z_ab(q) ,
where q = e^-2 π^2/w is the modular parameter, c is the central charge,
Z_ab(q) = _abq^L_0 - c/24 ,
is the open string partition function with boundaries a and b and _ab denotes the trace over the open string Hilbert space with boundaries a and b.
The normalisation factor of 1/Z_ab(q) in <ref> ensures _abρ_A,ab = 1.
By a modular transformation, the open string partition function is written in the closed string channel as
Z_ab(q) = ⟨a|q̃^L_0 - c/24|b⟩ ,
where q̃ = e^-2 w is the dual modular parameter and |a⟩ and |b⟩ are conformal boundary states imposed at the entangling surface by the factorisation map ι_ab.
The Rényi entropies are
S_n(q) ≡1/1-nlog_abρ_A^n = 1/1-nlogZ(q^n)/Z^n(q) ,
with the entanglement entropy being given by lim_n → 1 S_n(q).
Suppose we have some symmetry category C acting on the open string Hilbert space.
The SREE is computed by introducing
𝒵_ab(q^n, r) ≡_abΠ_r ρ_A^n ,
where Π_r projects onto the representation r of C.
The symmetry resolved Rényi entropy (SRRE) is given analogously to the Rényi entropies
S_n(q,r) ≡1/1-nlog𝒵_ab(q^n,r)/𝒵_ab^n(q,r) ,
with the SREE being obtained in the limit n → 1.
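For instance, for n = 2, which is the case analysed numerically later on, this reads S_2(q,r) = -log[ 𝒵_ab(q^2,r)/𝒵_ab^2(q,r) ] = -log[ _abΠ_r ρ_A^2/(_abΠ_rρ_A)^2 ].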
It is useful to define the charged moments of the reduced density matrix
z_ab(q^n,c) ≡_abℒ_c q^n(L_0 - c/24) ,
where ℒ_c is the operator corresponding to the object c ∈ C acting on states in the open string Hilbert space.
In the closed string channel the charged moments read
z_ab(q^n ,c) = _c⟨a|q̃^1/n(L_0 - c/24)|b⟩_c ,
where |a⟩_c and |b⟩_c are boundary states in the Hilbert space of the c-twisted sector.
The charged moments eq:charged-moments correspond to computing the open string partition function with an insertion of the defect ℒ_c terminating on the boundaries a and b.
We require that ℒ_c can topologically end on both |a⟩ and |b⟩.
This requires that the fusion c × a contains a and, likewise, the fusion c × b contains b.
Using the terminology of Ref. <cit.>, the boundaries a and b must be weakly symmetric under ℒ_c for <ref> to be non-zero.
Moreover, we will assume that a and b are simple boundaries; they are not written as a superposition of elementary boundary states.
The charged moments in the closed string channel make it quite simple to derive the asymptotic behaviour of the SREE.
Generally, a projector onto the irreducible representation r of C is written as a linear combination of operators in C,
Π_r^[a,b] = ∑_c∈ C^Λ_r c^[a,b]ℒ_c ,
for constants Λ_rs^[a,b] that ensure the projectors are orthogonal.
The superscripts [a,b] make it explicit that such projectors are defined in the presence of boundaries a and b.
If the boundaries are equal, a = b, we will only include one boundary in the superscript, [a]≡[a,a].
For the symmetry C to be present in the open string Hilbert space, all defects ℒ_c, for c ∈ C, must be able to topologically terminate on the boundaries a and b: the boundaries a and b must be weakly symmetric under C.
As a consequence, C is non-anomalous and can be “gauged” in a generalised sense <cit.>.
Using the general form of a projector eq:projector-ansatz, in the closed string channel <ref> reads
𝒵_ab(q^n, r) = ∑_c ∈ C^Λ_rc^[a,b] _c⟨a|q̃^1/n(L_0 - c/24)|b⟩_c ,
q → 1∼Λ_r1^[a,b]q̃^-c/24n⟨a||0⟩⟨0||b⟩ ,
where we have assumed that there is a unique vacuum |0⟩ in the bulk CFT.
Therefore, in the asymptotic limit q → 1, the vacuum sector dominates in the closed string channel, leading to <ref>.
It will be useful to define the difference between the symmetry resolved Rényi entropies and the Rényi entropies
Δ S_n(q,r) ≡ S_n(q,r) - S_n(q) .
Plugging in the symmetry resolved Rényi entropies eq:SRRE and utilising the asymptotic limit eq:asymptotic-general, one finds that the equipartition of the SREE is broken by the term
lim_q→ 1Δ S_n(q,r) = lnΛ_r1^[a,b] .
For a finite group G, the projectors onto the r irrep are
P^G_r = χ_r(1)/G∑_g∈ G^χ_r(g) U(g) ,
where U(g) is the representation of the element g ∈ G and χ_r(g) is the group character.
From the above projector, one has Λ_r1^[a,b] = χ_r(1)^2/G, recovering <ref>.
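For instance, for G = ℤ_2 = {1, g} both irreducible representations are one dimensional, the projectors reduce to P_±^ℤ_2 = 1/2( U(1) ± U(g) ), and hence Λ_± 1^[a,b] = 1/2: the equipartition is broken by ln1/2 in both sectors.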
§ SYMMETRY RESOLUTION FOR THE FIBONACCI CATEGORY
In this section we will revisit the Cat-SREE framework proposed in Ref. <cit.> and show it is insufficient to produce the correct SREE with respect to the Fibonacci category in the three-state Potts model.
Using the extended W_3 algebra, the three-state Potts model has a diagonal modular invariant.
There are six primary fields 1, ψ, ψ^†, ε, σ and σ^† with scaling dimension 0, 2/3, 2/3, 2/5, 1/15 and 1/15, respectively.
In the basis (1, ε, ψ, σ , ψ^†, σ^†) the modular S matrix reads <cit.>
S = 1/√(3)([ s s s; s ω s ω^2 s; s ω^2 s ω s ]) ,
with
s = 2/√(5)([ sinπ/5 sin2π/5; sin2 π/5 -sinπ/5 ]) , ω = e^2π i/3 .
The fusion rule
ε×ε = 1 + ε
indicates that the Verlinde lines ℒ_c, for c∈ C = {1, ε}, generate a Fibonacci symmetry.
For the case of diagonal RCFTs, the primary operators, Verlinde lines and boundary states are in one-to-one correspondence.
Therefore, they can all share labels in some modular fusion category ℳ.
Thus, the Cardy state |ε⟩ is weakly symmetric with respect to C <cit.>. The open string partition function with both boundaries a = b = ε is
Z_εε = χ_1(q) + χ_ε(q) .
Next, we require projectors onto the irreducible representations of C.
As C has two elements there are two irreducible representations.
In the bulk CFT, the action of the Verlinde line ℒ_ε on each conformal family <cit.> in the basis (1, ε, ψ, σ ,ψ^† ,σ^†) is
ℒ_ε = ( S_ε j/S_1j) =( φ, 1- φ, φ, 1- φ, φ, 1- φ) ,
where φ = (1+√(5))/2 is the golden ratio.
The action of the topological defect ℒ_c on the vacuum representation is given by the quantum dimension d_c of c.
This means that the conformal towers 1, ψ and ψ^† transform trivially under ℒ_ε, while the conformal towers ε , σ and σ^† transform non-trivially under ℒ_ε.
For the bulk CFT with diagonal modular invariant, the Hilbert space decomposes as ℋ= ⊕_μ𝒱_μ⊗𝒱_μ.
In the bulk CFT, the projectors P_r^bulk: ℋ→𝒱_r ⊗𝒱_r onto the conformal tower r are <cit.>
P_r^bulk = ∑_b ∈ℳ^ S_1rS_b r ℒ_b .
The primary operators {1, ψ, ψ^†} transform in the trivial representation of C.
Therefore, the projector onto the trivial irrep of C is the sum of projectors onto the conformal towers {1, ψ, ψ^†},
Π_1^bulk = P_1^bulk + P_ψ^bulk + P_ψ^†^bulk .
Similarly, the primary operators {ε, σ, σ^†} transform in the non-trivial irrep of C; the projector onto the non-trivial irrep of C is the sum of projectors onto the conformal towers {ε, σ, σ^†},
Π_ε^bulk = P_ε^bulk + P_σ^bulk + P_σ^†^bulk .
Using the bulk modular fusion category projectors eq:MFC-projector we find
Π_1^bulk = 1/1+φ^2( ℒ_1 + φℒ_ε)
Π_ε^bulk = φ/1+φ^2(φℒ_1 - ℒ_ε) .
As observed in Ref. <cit.>, these are the projectors one would obtain when using <ref> with the modular S matrix for the Fibonacci category,
S_Fib = 1/√(1+φ^2)([ 1 φ; φ -1 ]) .
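Explicitly, inserting S_Fib in place of the modular S matrix of ℳ gives Π_1^bulk = S_11( S_11ℒ_1 + S_ε 1ℒ_ε) = 1/1+φ^2( ℒ_1 + φℒ_ε) and Π_ε^bulk = S_1ε( S_1εℒ_1 + S_εεℒ_ε) = φ/1+φ^2(φℒ_1 - ℒ_ε), reproducing eq:Fib-projectors.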
We now need to compute the charged moment z_εε(q, ε), which requires computing the overlap of twisted boundary states.
The Hilbert space of the c-twisted sector decomposes as <cit.>
ℋ_ℒ_c = ⊕_μ,ν N_c ν^μ𝒱_μ⊗𝒱_ν .
This implies that the diagonal sectors in ℋ_ℒ_ε are of the form 𝒱_ρ⊗𝒱_ρ with ρ running over the set {ε, σ ,σ^†}.
We denote the boundary state |ε⟩ in the ε-twisted sector by
|ε⟩_ε =a_1 ε_ε + a_2 σ_ε + a_3 σ^†_ε ,
where h_ε are the Ishibashi states in the ε-twisted sector and a_i are undetermined constants.
Using a vector notation for the characters, this implies the charged moment z_εε(q, ε) = _εεℒ_ερ_A,ε, ε
is
z_εε(q, ε) = α_1 χ_ε(q̃) + α_2 χ_σ(q̃) + α_3 χ_σ^†(q̃) = ([ μ (α_1 + α_2 + α_3); ν (α_1+α_2 +α_3); μ(α_1 + ω^2 α_2 + ωα_3); ν (α_1 + ω^2 α_2 + ωα_3); μ(α_1 + ωα_2 + ω^2 α_3); ν(α_1 + ωα_2 + ω^2 α_3) ])_(q) ,
where we use the basis {χ_1(q), χ_ε(q), χ_ψ(q), χ_σ(q), χ_ψ^†(q), χ_σ^†(q) }, α_i = a_i^2 and
μ = √(5 + √(5)/30) , ν = - √(5 - √(5)/30) .
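For later use we record that ν/μ = -(√(5)-1)/2 = 1 - φ; it is this ratio that produces the relative factor of (1-φ) between the two characters below.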
Note that Z_εε(q) only contains the characters χ_1(q) and χ_ε(q); therefore the final four rows in z_εε(q, ε) must be zero.[This is because, by construction, the Verlinde lines commute with the extended chiral algebra and therefore, by Schur's lemma, act as scalar multiples of the identity on irreducible representations of the chiral algebra <cit.>.
]
This implies that α_1 + ω^2 α_2 + ωα_3 =0 and α_1 + ωα_2 + ω^2 α_3 = 0.
Together, these constraints imply that α_1 = α_3 and α_2 = -ω(α_1 + ωα_3) = -ω(1+ω)α_1.
Plugging these into the charged-moment eq:pott-charged-moment yields
z_εε(q, ε) = -(1+ ω) α_1( μχ_1(q) + νχ_ε(q)) .
Finally, we need a constraint to fix α_1.
This is determined by the action of ℒ_ε on the boundary identity or ε conformal tower.
At this stage, we do not know what that action is;
for now, we will let ℒ_ε's action on the boundary identity conformal tower be some non-zero scalar λ.
This implies
-(1+ω) α_1 μ = λ , i.e. α_1 = -λ/((1+ω)μ) ,
resulting in
z_εε(q, ε) = λχ_1(q) + λ (1 - φ) χ_ε(q) .
As the coefficients in front of χ_1(q) and χ_ε(q) are different in <ref>, we conclude that the boundary identity conformal tower and the boundary ε conformal tower correspond to different irreps of C.
As a consequence, for <ref> to be projectors onto the irreducible representations of C in the presence of boundaries, we must have _εεΠ_1^bulkρ_A,εε!=χ_1(q) or χ_ε(q) and vice versa for Π_ε^bulk.
Using the charged moment z_εε(q, ε) defined in <ref> we compute
_εεΠ_1^bulkρ_A,εε = 1/1+φ^2( (1 + λφ) χ_1 + (1 - λφ^2 + λφ)χ_ε) ,
and assert
_εεΠ_1^bulkρ_A !=χ_1 (q) .
However, no choice of λ satisfies the above, <ref>.
Similarly, no choice of λ can satisfy
_εεΠ_1^bulkρ_A,εε!=χ_ε(q) .
We conclude that the original bulk projectors eq:Fib-projectors are incorrect in the presence of boundary conditions.
The Fibonacci projectors in the bulk eq:Fib-projectors leading to incorrect results in the presence of boundaries could be established on different, simpler grounds.
Namely, the fusion algebra of defect lines terminating on a boundary is generally different than the bulk fusion algebra <cit.>.
Following the prescription in Ref. <cit.> we find that the boundary fusion rules for C = {1, ε} with boundary condition ε on both sides are
ℒ_ε^[ε]×ℒ_ε^[ε] = 1/φℒ_1^[ε] + φ - 1/φℒ_ε^[ε] ,
where ℒ_c^[ε] indicates that the defect ℒ_c terminates on the two boundaries labelled by ε.
From the above boundary fusion rules eq:Potts-boundary-fusion, it is clear that the bulk projectors eq:Fib-projectors do not satisfy Π_r^bulkΠ_h^ bulk = δ_rhΠ_r^bulk when the defects terminate on the boundary ε.
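For instance, squaring Π_1^bulk and using eq:Potts-boundary-fusion (with ℒ_1 acting as the identity) yields (Π_1^bulk)^2 = 1/(1+φ^2)^2[ (1+φ)ℒ_1 + (φ+φ^2)ℒ_ε ] = φ^2/(1+φ^2) Π_1^bulk ≠Π_1^bulk.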
Before concluding this section, note that if one used the projectors
Π_1^[ε] = 1/1+φ(ℒ_1 + φℒ_ε) ,
Π_ε^[ε] = φ/1+φ(ℒ_1 - ℒ_ε) ,
then for λ = 1 we would find
_εεΠ_1^[ε]ρ_A = χ_1(q) ,
_εεΠ_ε^[ε]ρ_A = χ_ε(q) .
We will show how to construct these projectors in <ref>.
§ TOPOLOGICAL DEFECT NETWORKS
In <ref>, the charged moments eq:charged-moments were computed by taking overlaps of twisted boundary states.
In Ref. <cit.> the authors noted the similarity between computing these overlaps and orbifold CFTs, developing a method to compute the SREE by utilising boundary states in orbifold CFTs.
The orbifold construction of the SREE makes it explicit why the symmetry of interest, C, needs to be anomaly free: constructing the orbifold CFT is only possible if C is free of a 't Hooft anomaly.
In contrast to Refs. <cit.>, we wish to compute the SREE strictly from the open string channel and only use the closed string channel to compute the breaking of equipartition in the asymptotic limit, <ref>.
This requires knowledge of the action of defects on states in the open string Hilbert space without appealing to the closed string channel and twisted boundary states.
When working in the open string channel, it is natural to use the graphical calculus afforded by fusion categories;
this requires performing manipulations of topological defect networks by utilising the appropriate F-symbols.
In the presence of boundaries, as is the case here, these manipulations can become difficult because one needs to keep track of the F-symbols associated with the symmetry category of interest and the F̃-symbols associated with the module category of interest (the boundary states).
However, with simplifying assumptions, only a single set of F-symbols is required.
We will focus on diagonal RCFTs in which charge conjugation acts trivially.
As the CFT has a diagonal modular invariant, primary operators, Cardy boundary states, and Verlinde lines are in one to one correspondence and share the same labels in a modular fusion category ℳ.
Invariance under charge conjugation means that all Verlinde lines are self dual i.e. ℒ_c = ℒ_c̅ for c ∈ℳ. In other words, it is not necessary to specify the orientation of defects in diagrammatic manipulations.
For simplicity, we also restrict ourselves to situations where the fusion of Verlinde lines
ℒ_c ×ℒ_d = ∑_e^ N_cd^e ℒ_e
is multiplicity free. This means that the fusion coefficients N_cd^e are either 0 or 1 for all c,d,e ∈ℳ. The fusion coefficients N_cd^e can be determined from the modular S matrix with the Verlinde formula <cit.>.
Finally, we assume parity invariance of our defects and CFTs.
With these assumptions, we review the relevant formalism presented in Ref. <cit.> to compute the action of defects on states in the open string Hilbert space.
§.§ Defect networks
Manipulations of networks of topological defects is done by utilising the F-symbols.
Our convention for the F-symbols are shown in <ref>.
Moreover, if any of the incoming defects a, b, c or d are the identity defect, denoted by 1, then the corresponding F-symbol is trivial.
For consistency of defect network manipulations, the F-symbols must satisfy the pentagon identity, <ref>.
Equating both directions in <ref> results in the celebrated pentagon equation
∑_s^ F_ps[ b c; a q ] F_qt[ s d; a e ] F_sr[ c d; b t ] = F_qr[ c d; p e ] F_pt[ b r; a e ] .
As emphasised in Ref. <cit.> (see also Ref. <cit.>), the pentagon identity has a large symmetry, sometimes referred to as a gauge symmetry.
From a set of F-symbols that satisfy the pentagon identity, applying an appropriate gauge transformation to these F-symbols results in another equivalent solution of the pentagon identity.
Of course, after manipulating defect networks the final result obtained should be independent of the gauge choice (assuming the result corresponds to a physical observable).
For our purposes, it is useful for our diagrammatic rules to be fully isotopy invariant; the defects can be freely moved and bent without incurring additional negative signs or phase factors.
This condition corresponds to a specific choice of gauge for the F-symbols which we will use in this work.
In this gauge some of the F-symbols simplify <cit.>
F_1c[ a b; a b ]= F_c 1[ a a; b b ] = √(d_c/d_a d_b) .
Due to the assumption of parity invariance, we have F-symbol identities such as
F_pq[ a b; c d ]
= F_pq[ c d; a b ]
= F_pq[ b a; d c ] .
§.§ Open topological defects
As explained in Ref. <cit.>, attaching defects to a boundary comes with a choice of normalisation for the junction field.
A natural choice of normalisation arises from using the F-symbols to attach a parallel defect to a boundary and assigning a factor of √(F_1b[ d a; d a ]) to each junction.
This junction factor is denoted by a filled circle, see <ref>.
These junction factors have some convenient properties <cit.>.
One particularly useful property we will utilise is the fact that, with these junction factors, we can shrink a defect on a boundary to the identity, <ref>.
Next we require the action of defects on boundary condition changing operators.
By the operator-state correspondence in CFT, boundary condition changing operators correspond to states in the open string Hilbert space <cit.>.
Determining the action of a defect on boundary condition changing operators is then equivalent to determining the defect's action on states in the open string Hilbert space.
Following Ref. <cit.>, manipulations of boundaries with boundary condition changing operators can be lifted to defect manipulations by representing the boundary condition by the corresponding defect label.
This assumes that there is some identity boundary that can be used to generate all other boundaries by fusion with an appropriate defect.
In our case, this assumption is trivially satisfied by the vacuum Cardy state.
Then, the boundary condition changing operator with representation label i can be traded for a defect labelled by i and a defect ending field on the identity boundary, <ref>.
The constants α_i^ab associated with this manipulation were computed in Ref. <cit.>.
However, the constants α_i^ab will not be needed for our purposes.
This manoeuvre, <ref>, also has an interpretation in terms of the topological field theory formulation of CFTs <cit.>.
§.§ Action of defects on boundary fields
To compute the SREE one needs to compute charged moments z_a b(q,d) = _a bℒ_d ρ_A,ab.
The insertion of the defect ℒ_d in this trace
corresponds to connecting a defect to both boundaries, a and b, of the annulus, <ref>(left).
By a conformal transformation, the annulus can be mapped to the upper-half plane and the defect then traverses a semi-circle around the origin, where a possible boundary condition changing operator is located, <ref>(right).
By the operator-state correspondence, a state |i, M⟩ in the open string Hilbert space, where i labels a boundary conformal tower and M enumerates its states, is created by a local boundary condition changing operator ψ_i, M <cit.>.
The action of the defect operator ℒ_d on states in the open string Hilbert space is determined by the action of ℒ_d on boundary condition changing operators.
With the assumptions specified at the start of <ref> – primary operators, boundary states and defect lines are in one-to-one correspondence – this action has been derived in Ref. <cit.>.
This action is illustrated in <ref> under the assumption that the boundaries at the entangling surface are described by simple boundary conditions.
For the generalisation to non-simple boundary conditions see Ref. <cit.>.
If the boundary conditions a and b are not simple then the action of defect operators is more complicated.
This is because the defect operator can now intertwine different boundary condition changing operators in the same representation.
In the BCFT approach to the SREE, <ref>, we required our boundary states to be weakly symmetric under our fusion category of interest.
We now see how this requirement manifests itself in the open string Hilbert space.
The requirement of the defect d being able to topologically end on a and b is encoded in the F-symbol F_ba[ ad; ib], see <ref>, which specifies the action of ℒ_d on states in the open string Hilbert space.
If a and b are not weakly symmetric under d then F_ba[ad; ib] = 0 for all i, implying that ℒ_d = 0 in the open string Hilbert space.
The fusion of two defects in the presence of boundaries a and b is denoted by
ℒ_c^[a,b]×ℒ_d^[a,b] = ∑_ e ∈ c × d^Ñ_cd^[a,b] eℒ_e^[a,b] ,
where Ñ_cd^[a,b] e are generally non-integer coefficients <cit.>.
By using the F-symbols to fuse two defects in the presence of boundaries, as in <ref>, and utilising <ref>, the boundary fusion coefficients are given by
Ñ_cd^[a,b]e = F_eb[ d c; b b ] F_ea[ d c; a a ] .
Peculiarly, even if e appears in the bulk fusion c × d, the boundary fusion coefficient Ñ_cd^[a,b] e eq:boundary-fusion-rules may be zero, resulting in ℒ_e not appearing on the right hand side of <ref>.
We will see an explicit example of this when we study Rep(S_3) in the presence of boundaries in <ref>.
With our choice of junction normalisation, and specialising to the case a = b, we note some identities of the boundary fusion rules <cit.>:
Ñ_1d^[a]d = 1 , Ñ_dd^[a] 1 = 1/d_d , ∑_e ∈ c × d^Ñ_cd^[a] e = 1 .
The last identity arises due to the fact that defects can be shrunk on a boundary with no additional factors incurred, <ref>.
§ BOUNDARY PROJECTORS
§.§ Projector construction
We are now in a position to use Verlinde lines to project onto sectors of the open string Hilbert space.
First, we will construct projectors that project onto the boundary conformal towers in the open string Hilbert space.
Before we proceed, recall that all boundary states, Verlinde lines and primary operators share labels in the modular fusion category ℳ and the fusion of defects is multiplicity free.
The open string Hilbert space decomposes as ℋ_ab = ⊕_r N_ab^r 𝒱_r.
We want to find operators P_r^[a,b] that implement the projections P_r^[a,b]: ℋ_ab→ N_ab^r 𝒱_r, i.e.
_ab P_r^[a,b]ρ_A,ab = N_ab^rχ_r(q) .
These projectors were constructed on a formal level in Ref. <cit.>,
P_r = ∑_M^|r,M⟩⟨r,M| ,
where M is a descendant label.
Ref. <cit.> used <ref> to analyse the SREE with respect to conformal symmetry.
Firstly, let WS^[a,b]⊆ℳ denote the subset of Verlindes lines that both boundaries a and b are weakly symmetric under.
We propose that the projector onto 𝒱_r is
P_r^[a,b] = 1/F_1b[ a r; a r ]∑_s∈WS^[a,b]^ F_1s[ a a; a a ] F_sr[ a b; a b ] ℒ_s .
By using the following F-symbol identity, which can be derived from the pentagon identity ( see <ref>)
1/F_1b[ a r; a r ]∑_s ∈WS^[a,b]^ F_1s[ a a; a a ] F_sr[ a b; a b ]F_b a[ a s; i b ] = δ_i r ,
we have
_ab P_r^[a,b]ρ_A,ab = ∑_i^ N_ab^i 1/F_1b[ a r; a r ]∑_s ∈WS^[a,b]^ F_1s[ a a; a a ] F_sr[ a b; a b ]F_b a[ a s; i b ]χ_i(q)
= N_ab^r χ_r(q) ,
as required.
Using <ref>, it is easy to show that {P_r^[a,b]}_r∈ a × b are orthogonal projectors when acting on states in the open string Hilbert space,
P_h^[a,b]P_r^[a,b]|i,M⟩= δ_rh P_r^[a,b]|i,M⟩ .
Similarly, using <ref> and the pentagon identity (<ref>), one can show
∑_r ∈WS^[a,b] P_r^[a,b] = ℒ_1 .
§.§ Asymptotic limit
Utilising the projectors (<ref>), the asymptotic limit of the SREE can be computed.
In the asymptotic limit, the untwisted sector in the closed string channel dominates.
Therefore
𝒵_ab(q,r) = _ab P_r^[a,b]ρ_A,ab q→1∼F_11[ a a; a a ] F_1 r[ a b; a b ]/F_1b[ a r; a r ]⟨a||0⟩⟨0||b⟩q̃^c/24 ,
= d_r/d_a d_b⟨a||0⟩⟨0||b⟩q̃^c/24 ,
= S_1rq̃^c/24 ,
where we have explicitly expanded the Cardy states in terms of the modular S matrix to go from the second last to the final line.
Unsurprisingly, the final, simplified result is identical to results found in Ref. <cit.>, which utilised the projectors in <ref>.
This serves as a useful check of the validity of the BCFT projectors in <ref>.
Using <ref>, we find that the equipartition of the SREE is broken by the term ln d_r/d_a d_b when projecting onto the conformal family r.
Currently, the projectors <ref> do not provide any insight on the asymptotic limit of the SREE for a fusion category symmetry because it is not obvious how the coefficients d_r/d_a d_b in <ref> relate to WS^[a,b].
More insight can be gained with some additional assumptions.
Firstly, we will assume that both boundaries are identical, a = b.
Next, suppose that there is a sub fusion category C ⊆ℳ; we wish to compute the SREE with respect to C.
Additionally, assume there exists a boundary state a_C ∈ℳ such that a_C × a_C = ∑_c∈ C^ c.
As the quantum dimensions furnish a representation of the fusion ring, it follows that
d_a_C^2 = ∑_c ∈ C^ d_c = √(C) .
Using <ref>, we construct the projectors onto the irreps of C
P_r^[a_C] = 1/F_1a_C[ a_C r; a_C r ]∑_d ∈ C^ F_1d[ a_C a_C; a_C a_C ] F_dr[ a_C a_C; a_C a_C ]ℒ_d ,
which can be written in the nicer form
P_r^[a_C] = d_r/√(C)∑_d ∈ C^ d_d F_a_C a_C[ d a_C; a_C r ]ℒ_d .
The boundary fusion rules (<ref>) can be used to explicitly show that {P_r^[a_C]}_r ∈ C are orthogonal projectors,
P_r^[a_C] P_h^[a_C] = δ_rh P_r^[a_C] .
This is a tedious calculation, so we defer it to <ref>.
Additionally, from <ref> we have
∑_r ∈ C P_r^[a_C] = ℒ_1.
Using <ref>, we see that the equipartition of the SREE is broken by the term
Δ S_n(q,r) = lnd_r/√(C) .
This is in contrast with the results obtained in Ref. <cit.>, <ref>, which suggest that the equipartition should be broken by ln d_r^2/C^2 where C^2 = ∑_c ∈ C^ d_c^2.
<ref> is in agreement with Ref. <cit.>, <ref>, when C is a finite abelian group
because irreducible representations of finite abelian groups are one dimensional.
Strictly speaking, due to the assumption of self-duality of representations, C must be a product of ℤ_2-factors.
However, we believe this assumption can easily be relaxed without changing the outcome, <ref>.
While <ref> disagrees with the results of Ref. <cit.>, <ref>, for non-abelian finite groups, this disagreement is superficial: C can never be a non-abelian group because the fusion of Verlinde lines is commutative.
A natural concern one may have is the generality of the projectors, <ref>.
Indeed, the construction of <ref> required a careful choice of boundary conditions.
The boundary conditions are referenced explicitly in the boundary fusion rules (<ref>) so it would appear that the projectors constructed in <ref> and all computations based on them are true specifically for the chosen boundary condition a_C.
This is a valid concern, and we cannot definitively establish at the time being that <ref> are the general projectors that project onto irreducible representations of C in the presence of more general boundary conditions.
Currently, <ref> can be used on a case-by-case basis.
Suppose that one is studying a theory with boundary conditions a' and b' that are weakly symmetric under C.
Due to the orthogonality condition eq:orthogonal-projectors, a sufficient condition for the projectors with boundaries a_C eq:MFC-general? to be projectors in the presence of boundaries a' and b' is that the fusion of defects in C does not depend on whether the boundaries are labelled by a_C or a' and b':
Ñ_dc^[a_C] e = Ñ_dc^[a',b'] e ,
for all d,c,e ∈ C.
If <ref> is true, then <ref> will project onto the irreps of C in the theory with boundaries a' and b' even though they were constructed with the boundary a_C.
We will show an example of this when we study the tricritical Ising model in <ref>.
Given the explicit dependence of the boundary fusion rules on the choice of boundary conditions, <ref>, it would be quite surprising if <ref> turned out to be true in full generality.
In the next section, after studying explicit examples of the boundary projectors (<ref>), we will briefly discuss the validity of <ref> in the context of C = Fib and C = Rep(S_3) from the perspective of anyonic chains.
Before we study examples of the BCFT projectors, <ref> and <ref>, we note that it is impossible to construct the projectors onto the irreducible representations of C (<ref>) by only using the F-symbols of C and making no reference to a boundary condition.[We restrict the data to only the F-symbols of C because C need not be a modular fusion category.]
We will comment on this further when we study the Ising model in <ref>.
It may be possible to write the projectors eq:MFC-general? in terms of other data of C, such as the characters of C in the presence of boundaries.
The boundary characters χ̃_i^[a,b](c) should form a one-dimensional representation of the boundary fusion algebra
χ̃_i^[a,b](c) χ̃_j^[a,b](c) = ∑_k ∈ i × jÑ_ij^[a,b] kχ̃_k^[a,b](c) ,
where c ∈ C and irreps of C are labelled by i,j,k ∈ C.
Moreover, they should obey relevant orthogonality relations and reduce to suitable (quantum) dimensions under appropriate circumstances, analogous to the finite group characters, to befit being called a character.[In the bulk the natural characters χ_j(i)=S_ij/S_i1 would be quotients of modular S-matrix entries.]
Again, it seems likely that one would also have to make an explicit reference to boundary conditions to define the characters χ̃_i^[a,b](c) because defining the boundary fusion rules requires a choice of boundaries.
Therefore, from a mathematical perspective, it is unclear whether formulas for fusion category projectors in the presence of boundaries can be constructed without explicit reference to the boundaries.
Despite this, in <ref> we appeal to physical arguments to suggest that such projectors are independent of the choice of boundary conditions.
§ EXAMPLES
After our general discussion of the theoretical underpinnings we now apply the above framework to concrete examples. The F-symbols used in this section have been obtained utilising Refs. <cit.>.
§.§ Revisiting the three-state Potts model.
Our goal is to find the projectors (<ref>) using <ref>. The action of ℒ_1 on the boundary conformal towers 1 and ε is trivial.
The action of ℒ _ε is trivial on the boundary conformal tower 1, while the action of ℒ_ε on the boundary conformal tower ε is given by
F_εε[ ε ε; ε ε ] = 1- φ .
This is in agreement with taking λ = 1 in <ref>.
Finally, using the boundary fusion rules eq:boundary-fusion-rules, the non-trivial fusion rules are[We note that boundary fusion rules for the Fibonacci category appear in Refs. <cit.>, however, their choice of junction normalisation differs from ours.]
ℒ_ε^[ε]×ℒ_ε^[ε] = 1/φℒ_1^[ε] + φ -1/φℒ_ε^[ε] .
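Explicitly, eq:boundary-fusion-rules yields Ñ_εε^[ε]1 = (F_1ε[ ε ε; ε ε ])^2 = 1/d_ε = 1/φ and Ñ_εε^[ε]ε = (F_εε[ ε ε; ε ε ])^2 = (1-φ)^2 = (φ-1)/φ.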
Using these fusion relations, it can be checked that the Fibonacci projectors in the three-state Potts model (<ref>) are orthogonal.
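For instance, (Π_1^[ε])^2 = 1/(1+φ)^2[ (1+φ)ℒ_1 + (φ+φ^2)ℒ_ε ] = Π_1^[ε] and Π_1^[ε]Π_ε^[ε] = φ/(1+φ)^2[ ℒ_1 + (φ-1)ℒ_ε - φ ℒ_ε^[ε]×ℒ_ε^[ε] ] = 0 .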
§.§ ℤ_2: Ising Model
The Ising model has a ℤ_2 symmetry generated by ε; this ℤ_2 fusion category is a subcategory of the Ising modular fusion category and can thus be used within our formalism.
The Cardy state |σ⟩, corresponding to the primary spin operator, is weakly symmetric with respect to the ℤ_2 symmetry.
The open string partition function with boundaries σ is
Z_σσ(q) = _σσρ_A,σσ = χ_1(q) + χ_ε(q) .
From Ref. <cit.>, we know that χ_1(q) is the ℤ_2 even sector and χ_ε(q) is the ℤ_2 odd sector.
Using <ref>, we find the following projectors
P_1^[σ] = 1/2(ℒ_1 + ℒ_ε) ,
P_ε^[σ] = 1/2(ℒ_1 - ℒ_ε) ,
which are nothing other than ℤ_2 projectors!
The action of ℒ_ε on the ε boundary conformal tower is given by the F-symbol
F_σσ[ σ ε; ε σ ] = -1 .
From this, it follows that
_σσ P_1^[σ]ρ_A,σσ = χ_1(q) ,
_σσ P_ε^[σ]ρ_A, σσ = χ_ε(q) ,
in agreement with <cit.>.
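In this case eq:SREE-equipartition gives Δ S_n(q,1) = Δ S_n(q,ε) = ln1/2, coinciding with the familiar finite group answer since both irreducible representations of ℤ_2 are one dimensional and √(C) = 2.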
Finally, note that the ℤ_2 projectors (<ref>) cannot be constructed in terms of the F-symbols of the (non-anomalous)[For a category C to kinematically allow a (weakly) symmetric boundary state, it needs to be anomaly free <cit.>.] ℤ_2 fusion category.
All the F-symbols in ℤ_2 are trivial, so the coefficient of -1/2 in <ref> could not possibly be expressed in terms of these F-symbols.
To obtain this negative sign with the F-symbols, we need to make reference to a boundary condition, in this case σ.
§.§ Tricritical Ising Model
To make contact with the original Cat-SREE paper <cit.>, we will analyse the tricritical Ising model in greater depth.
Following conventions in the literature <cit.>, the primary operators are labelled as
1 , η , 𝒩 , W , η W , 𝒩 W .
The tricritical Ising model is a diagonal RCFT: there is a one-to-one correspondence between boundary states and Verlinde lines.
One finds that the three (Cardy) boundary states |W⟩, |η W⟩ and |𝒩 W⟩ are weakly symmetric under C:
W × W= W + 1 , W ×η W = η W + η , W ×𝒩 W = 𝒩 W + 𝒩 .
In the basis {1, η W, W, η, 𝒩 W, 𝒩}
the modular S matrix is given by <cit.>
S=([ s_2 s_1 s_1 s_2 √(2) s_1 √(2) s_2; s_1 -s_2 -s_2 s_1 √(2) s_2 -√(2) s_1; s_1 -s_2 -s_2 s_1 -√(2) s_2 √(2) s_1; s_2 s_1 s_1 s_2 -√(2) s_1 -√(2) s_2; √(2) s_1 √(2) s_2 -√(2) s_2 -√(2) s_1 0 0; √(2) s_2 -√(2) s_1 √(2) s_1 -√(2) s_2 0 0 ])
with
s_1 = sin(2π /5) , s_2 = sin(4π /5) .
The Verlinde lines labelled by C = {1, W } form a Fibonacci subcategory; we wish to resolve the entanglement entropy with respect to C.
To compute the SREE, Ref. <cit.> considered the case when the boundaries at the entangling surface were a = b = 𝒩 W.
With these boundaries, the open string partition function is
Z_𝒩W | 𝒩W(q) = χ_1(q) + χ_η W(q) + χ_W(q) + χ_η(q) .
To construct projectors onto the irreps of C, we first need to determine how C acts on each boundary conformal tower, <ref>; we require the F-symbols
F_𝒩W| 𝒩W[ 𝒩W d; i 𝒩W ] ,
with d ∈ C and i ∈{1, η W, W, η}.
If d or i is 1 then this action is simply the identity.
Therefore, the relevant F-symbols are
F_𝒩W| 𝒩W[ 𝒩W W; W 𝒩W ] = (1 - √(5))/2 = 1 - φ ,
F_𝒩W| 𝒩W[ 𝒩W η W; W 𝒩W ] = 1 - φ ,
F_𝒩W| 𝒩W[ 𝒩W η; W 𝒩W ] = 1 .
We conclude that the 1 and η boundary conformal towers transform trivially under C, while the η W and W boundary conformal towers transform non-trivially under C.
The projectors onto the trivial, 1, and non-trivial, W, irreducible representation of C are then given by
Π_1^[𝒩W] = P_1^[𝒩 W] + P_η^[𝒩W] ,
Π_W^[𝒩W] = P_W^[𝒩 W] + P_η W^[𝒩W] .
Using <ref> we find
Π_1^[𝒩W] = 1/1+φ(ℒ_1 + φℒ_W) ,
Π_W^[𝒩W] = φ/1+φ(ℒ_1 - ℒ_W) .
Using the action of C on each boundary conformal tower eq:Tricrit-F-symbols, one can verify that the above projectors eq:Tricrit-Project implement
_𝒩W|𝒩WΠ_1^[𝒩 W]ρ_A,𝒩W|𝒩W = χ_1(q) + χ_η(q) ,
_𝒩W|𝒩WΠ_W^[𝒩 W]ρ_A,𝒩W|𝒩W = χ_η W(q) + χ_W(q) .
Note that the projectors we found, <ref>, are identical to those found in the three state Potts model, <ref>.
Indeed, as both boundaries are identical, a = b = 𝒩 W, the boundary fusion rule identities eq:boundary-fusion-identities can be used to show the non-trivial boundary fusion rules of C are identical to those of the three-state Potts model:
ℒ_W^[𝒩 W]×ℒ_W^[𝒩 W] = 1/φℒ_1^[𝒩 W] + φ -1/φℒ_W^[𝒩 W] .
The breaking of the equipartition of the SREE eq:SREE-equipartition is then
Δ S_n(q, 1) = ln1/1+ φ , Δ S_n(q, W) = lnφ/1+φ .
This is in contradiction with Ref. <cit.>, which suggest that Δ S_n(q,1) and Δ S_n(q,W) are ln 1/(1+φ^2) and lnφ/(1+φ^2), respectively.
Next, let us consider the case of two different boundaries, a = η W and b = 𝒩W.
The open string partition function is
Z_η W| 𝒩W(q) = χ_𝒩(q) + χ_𝒩W(q) .
Before we appeal to the projector formula eq:BCFT-projector, let us illustrate the logic surrounding <ref>.
In the previous example, we have constructed the projectors onto the irreps of C when both boundaries are 𝒩 W.
These projectors can be used with the new boundary conditions, a = η W and b = 𝒩W, if the boundary fusion rules of C are unchanged with these new boundaries.
Using <ref> we find
Ñ_dc^[𝒩 W] e = Ñ_dc^[η W, 𝒩W] e ,
for d, c, e ∈ C.
Therefore, the projectors onto the irreps of C with boundaries a = η W and b = 𝒩 W are also given by <ref>.
To know which boundary conformal towers transform in the trivial and non-trivial irreps of C we can either appeal to explicit F-symbols (as we did in <ref>), construct the projectors onto each conformal tower with <ref>, or analyse the asymptotic SREE by computing d_r/d_η W d_𝒩W eq:asymptotic-1 for r ∈η W ×𝒩 W.
Using the latter option, we find d_𝒩/d_η W d_𝒩W = 1/(1 + φ) and d_𝒩W/d_η W d_𝒩W = φ/(1+φ),
indicating that the 𝒩 and 𝒩W boundary conformal towers transform in the trivial and non-trivial irrep of C, respectively.
§.§.§ Numerical check: golden chain
We corroborate our results numerically by exact diagonalisation of the golden chain.
The golden chain is an anyonic lattice model describing the tricritical Ising CFT <cit.> (see also <cit.>).
We consider an open golden chain with L_A external anyons or L = L_A - 1 links as illustrated in <ref>.
The local link variables take values in the Fibonacci category, x_i ∈{1, W} for i = 1, …, L.
The neighbouring links are subject to the constraints of the Fibonacci fusion rules and we fix the boundary anyons to be x_0 = x_L+1 = W.
The Hamiltonian of the golden chain favours neighbouring anyons fusing into the trivial channel.
Using the local basis {1, W } for the link variables, the Hamiltonian is <cit.>
H_Golden = - ∑_i^ h_i with h_i = n_i-1 + n_i+1 - n_i-1n_i+1(φ^-3/2σ_i^x + φ^-3 n_i + 1 + φ^-2) ,
where n_i counts the number of W particles at link i and σ^x_i is the Pauli x matrix.
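For concreteness, a minimal Python sketch of this construction is given below. It builds the fusion-constrained link basis (boundary anyons fixed to W, no two adjacent links carrying the vacuum) and the Hamiltonian exactly as written above, and diagonalises it. All function and variable names are our own choices for illustration; the sketch is not the code used for the numerics reported below.

import numpy as np
from itertools import product

phi = (1 + np.sqrt(5)) / 2

def golden_chain_basis(L):
    # Link variables n_1, ..., n_L with n_i = 1 if link i carries a W anyon.
    # Boundary links are fixed, n_0 = n_{L+1} = 1, and the Fibonacci fusion
    # rules forbid two adjacent vacua (n_i = n_{i+1} = 0).
    basis = []
    for cfg in product((0, 1), repeat=L):
        ext = (1,) + cfg + (1,)
        if all(ext[i] + ext[i + 1] > 0 for i in range(L + 1)):
            basis.append(cfg)
    return basis

def golden_chain_hamiltonian(L):
    # H = -sum_i h_i with
    # h_i = n_{i-1} + n_{i+1} - n_{i-1} n_{i+1} (phi^{-3/2} sigma^x_i + phi^{-3} n_i + 1 + phi^{-2}).
    basis = golden_chain_basis(L)
    index = {cfg: k for k, cfg in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, cfg in enumerate(basis):
        ext = (1,) + cfg + (1,)
        for i in range(1, L + 1):
            nl, n, nr = ext[i - 1], ext[i], ext[i + 1]
            # diagonal part of -h_i
            H[k, k] -= nl + nr - nl * nr * (phi ** -3 * n + 1 + phi ** -2)
            # sigma^x flips link i; it only acts when both neighbours carry a W,
            # so the flipped configuration automatically satisfies the constraint
            if nl == 1 and nr == 1:
                flipped = list(cfg)
                flipped[i - 1] = 1 - flipped[i - 1]
                H[index[tuple(flipped)], k] -= phi ** -1.5
    return H, basis

H, basis = golden_chain_hamiltonian(L=8)
print(len(basis), "states, ground-state energy", np.linalg.eigvalsh(H)[0])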
The spectrum of the open golden chain is described by the tricritical Ising model BCFT.
For an even (odd) number of anyons, the spectrum is described by a BCFT with boundary conditions a = W and b = W (η W) <cit.>.
On general grounds, the entanglement spectrum of a critical 1+1d system is described by a BCFT <cit.>.
We are therefore permitted to interpret the open boundary condition Hamiltonian as a possible entanglement Hamiltonian: the reduced density matrix is ρ_A = e^- H.
We determined the spectrum of H using exact diagonalisation for chain lengths ranging from L = 3 to L = 19 and organised it into irreducible representations with respect to the Fibonacci symmetry.
To achieve this, we constructed the Fibonacci projectors (<ref>) on the lattice.
This requires the lattice analogs of the Verlinde lines ℒ_1 and ℒ_W.
The defects on the lattice, known as the topological symmetry <cit.>, can be described pictorially as illustrated in <ref>.
The general action of the topological symmetry ℒ_i for boundary anyons x_0 = x_L+1 = ℬ and external anyons j is
ℒ_i |x_1,…,x_L⟩ = ∑_y_1,…,y_L^∏_n = 0^L F_x_n y_n+1[ y_n j; i x_n+1 ]|y_1, …, y_L⟩ .
Here, x_0 = x_L+1 = y_0 = y_L+1 = ℬ and i labels the defect ℒ_i.
For the golden chain, the boundary anyons and external anyons are ℬ = j= W.
Note that <ref> differs from the topological symmetry in Ref. <cit.> by a factor of √(d_i) due to the presence of junction factors in <ref>.
Both boundaries in the golden chain are labelled by W and therefore the non-trivial boundary fusion rule is given by <ref> no matter the chain length.
However, the above statement is not obvious from the BCFT perspective.
This is because the boundary conditions in the BCFT describing the open golden chain differ depending on whether the number of external anyons is odd or even <cit.>.
It is not obvious that the boundary fusion rules in <ref> should be identical when there is an even number of external anyons, a = b = W, and when there is an odd number of external anyons, a = W and b = η W.
For this to happen we must have
Ñ_cd^[W] e = Ñ_cd^[W, η W] e ,
for c, d, e ∈ C = {1, W}.
This example is a lattice realisation showing that the boundary fusion rules of C can be identical for different boundary conditions in the BCFT.
The projectors onto the two irreps of the Fibonacci category are given by <ref>.
Using the asymptotic equipartition of the SREE, <ref>, we expect
Δ S_2(q, 1) = ln1/1+φ , Δ S_2(q, W) = lnφ/1+φ ,
where 1 is the trivial irrep and W is the non-trivial irrep.
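The quantity plotted below can be extracted as in the following sketch, assuming matrix representations of the projectors in the same constrained basis have already been assembled from the lattice defects described above; the function name and the projectors dictionary are our own illustrative choices.

import numpy as np
from scipy.linalg import expm

def delta_renyi2(H, projectors):
    # rho_A = e^{-H}; the overall normalisation drops out of the ratios below.
    # S_2(q, r) = -log[ Tr(Pi_r rho_A^2) / (Tr Pi_r rho_A)^2 ],
    # S_2(q)    = -log[ Tr rho_A^2 / (Tr rho_A)^2 ],
    # Delta S_2(q, r) = S_2(q, r) - S_2(q).
    rho = expm(-H)
    rho2 = rho @ rho
    s2 = -np.log(np.trace(rho2) / np.trace(rho) ** 2)
    return {r: float(np.real(-np.log(np.trace(P @ rho2) / np.trace(P @ rho) ** 2) - s2))
            for r, P in projectors.items()}

# e.g. delta_renyi2(H, {"1": P_trivial, "W": P_W}) with H from the sketch above
# and P_trivial, P_W hypothetical matrices for the lattice Fibonacci projectors.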
Our numeric results plotting Δ S_2(q, r), r ∈{1, W }, are shown in <ref>.
We find that both the 1 and W irreducible representations are approaching their predicted asymptotic values.
This provides compelling evidence that the appropriate asymptotic values of the SREE for the Fibonacci category are given by <ref>.
However, the anyonic chains we could simulate were not long enough to observe saturation of the asymptotic value.
§.§ Rep(S_3)
When constructing the BCFT projectors (<ref>), we did not need to assume that the symmetry category of interest, C, was modular.
If one can find an appropriate embedding C ⊂ℳ of the symmetry category C into a modular fusion category ℳ, then projectors for the fusion category C can also be constructed.
We will illustrate this procedure by constructing projectors for the Rep(S_3) fusion category in the presence of boundaries.
To construct these in the context of a CFT, we consider the diagonal, A-type modular invariant for the SU(2)_4 Wess-Zumino-Witten model.
This theory contains fields with spin 0, 1/2, 1, 3/2 and 2.
As mentioned in <cit.>, the boundary state |k/4 = 1⟩ is weakly symmetric under a C =Rep(S_3) category that is generated by three simple lines: ℒ_j, for j =0, 1 and 2.
ℒ_0 is the identity line and ℒ_1 and ℒ_2 satisfy the fusion rules
2 × 2 = 0
1 × 1 = 0 + 1 + 2
1 × 2 = 2× 1 = 1 .
As expected, the above fusion rules show that the Cardy state | 1⟩ is weakly symmetric with respect to C.
The open string partition function with both boundaries fixed as |1⟩ is
Z_11(q) = χ_0(q) + χ_1(q) + χ_2(q) .
The action of C on the boundary operators is described by the F-symbol
F_11[ 1 d; i 1 ] ,
for d,i ∈C.
The relevant, non-trivial F-symbols are
F_11[ 1 1; 2 1 ] = -1 , F_11[ 1 2; 1 1 ] = -1 , F_11[ 1 1; 1 1 ] = 0 .
Using <ref>, we find the three projectors
Π_0^[1] = P_0^[1] = 1/4( ℒ_0 + 2 ℒ_1 + ℒ_2)
Π_1^[1] = P_1^[1]= 1/4(2 ℒ_0 - 2 ℒ_2)
Π_2^[1] = P_2^[1]= 1/4(ℒ_0 - 2 ℒ_1 + ℒ_2) .
These projectors imply that for a Rep(S_3) symmetry the equipartition of the SREE is broken by, <ref>,
Δ S_n(q, 0) = Δ S_n(q,2) = ln1/4 , Δ S_n(q,1) = ln1/2 .
Using the projectors (<ref>) along with the non-trivial F-symbols (<ref>), a quick calculation shows
_11Π_0^[1]ρ_A,11 = χ_0(q) , _11Π_1^[1]ρ_A,11 = χ_1(q) , _11Π_2^[1]ρ_A,11 = χ_2(q) .
The boundary fusion rules of Rep(S_3) are identical to the bulk fusion rules, with the peculiar exception
ℒ_1^[1]×ℒ_1^[1] = 1/2ℒ_0^[1] + 1/2ℒ_2^[1] .
Note that no non-invertible symmetry appears on the right hand side of <ref>; this is to be contrasted with the bulk fusion rule 1 × 1 = 0 + 1 + 2.
Using these fusion rules, one can show that the projectors (<ref>) are orthogonal projectors.
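For instance, (Π_0^[1])^2 = 1/16[ 4ℒ_0 + 8ℒ_1 + 4ℒ_2 ] = Π_0^[1] once the boundary rule above is used for ℒ_1×ℒ_1, whereas the bulk rule 1 × 1 = 0 + 1 + 2 would instead give 3/2 Π_0^[1]; the remaining products work out analogously.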
§.§.§ Numerical check: SU(2)_4 spin-1/2 anyon chain
To study the Rep(S_3) projectors (<ref>), we numerically simulate the SU(2)_4 spin-1/2 antiferromagnetic anyonic chain.
This chain describes the Tetracritical Ising model <cit.>, which has a Rep(S_3) symmetry.
We study an open chain with spin-1/2 external anyons and spin-1 anyons as boundaries, see <ref> for an illustration.
The Hamiltonian favours neighbouring spin-1/2 anyons fusing into the trivial channel.
Due to the fusion rules, the local basis alternates between integer labels and half-integer labels.
Therefore, the number of links L on the anyon chain must be odd because we have fixed spin-1 anyons on the boundary.
In the local basis x_i∈{1/2, 3/2 }, x_i+1∈{0, 1, 2 } where i (i+1) is odd (even), the Hamiltonian is
H_SU(2)_4 = - ∑_i^ h_i
h_i = n_i-1^0 n_i+1^0 + n_i-1^1 n^1_i+1[ 1 -1; -1 1 ]_i
+ 1/3 n_i^1/2 n_i^1/2[ 1 -√(2); -√(2) 2 ]_i+2
+1/3 n_i^3/2 n_i^3/2[ 2 -√(2); - √(2) 1 ]_i+2 .
A lattice realisation of the topological symmetry is analogous to the case of the golden chain in <ref>.
The lattice realisation of the defects is given by <ref> with the boundaries x_0 = x_L+1 = y_0 = y_L+1 = ℬ = 1, and external anyons j = 1/2.
Via arguments similar to the case of the golden chain, it is clear that the fusion rules of the ℒ_i are identical to those in (and above) <ref>.
Therefore, we can expect the following equipartition of the SREE:
Δ S_n(q,0) = Δ S_n(q,2) = ln1/4 , Δ S_n (q, 1) = ln1/2 .
Similar to the golden chain, we interpret the energy levels of the open chain as an entanglement spectrum and define the reduced density matrix as ρ_A ≡ e^-H_SU(2)_4.
We then compute Δ S_2(q, r) for various chain lengths via exact diagonalisation and the Rep(S_3) projectors eq:Rep(S3)-projectors.
<ref>(left) shows Δ S_2(q,r) for r ∈{0,1,2} for various chain lengths.
While the spin-1 representation appears to be converging to ln 1/2, the spin-0 and spin-2 representations are far from the expected value of ln 1/4.
We suspect that the difference in the rate of convergence between the spin-1 representation and the spin-0 and spin-2 representations is due to the number of states in each representation.
In the anyonic chain Hilbert space, approximately half of the states transform in the 1 representation, while only one quarter of the states transform in the 0 or 2 representation. This follows from investigating the rank of the respective projection operators but it also matches the corresponding ratios of quantum dimensions, d_1/d_i = 2 for i = 0 and 2.
Thus one would expect Δ S_n(q,1) to converge faster on the lattice as it is, in a sense, closer to the thermodynamic limit.
With this in mind, observe that the number of states in the combined 0 ⊕ 2 representation is equal to the number of states in the 1 representation.
We then expect that Δ S_2(q, 0 ⊕ 2) will converge at approximately the same rate as Δ S_2(q, 1) and the expected limit will be
Δ S_2(q, 0 ⊕ 2 ) = ln 1/2.
Indeed, as shown in <ref>(right), Δ S_2(q, 0⊕ 2) and Δ S_2(q, 1) approach the expected value of ln 1/2 at similar rates as the chain length is increased.
However, similar to the case of the golden chain, the length of the chain L is not long enough for this asymptotic value to be saturated.
§.§ Additional observations
After studying numerous examples in the previous sections, we have arrived at the following observation concerning the asymptotic limit of the SREE, <ref>.
Firstly, when we used the BCFT projectors P_r^[a,b] eq:BCFT-projector, recall that the asymptotic symmetry resolved partition function eq:asymptotic-1 contained the prefactor d_r/d_a d_b, where a and b are the (simple) boundaries at the entangling surface and r ∈ a × b labels a boundary conformal tower.
At first glance, this factor appears to have no relation to the set of Verlinde lines that the boundaries a and b are weakly symmetric under, WS^[a,b].
However, we believe that
d_r/d_a d_b = d_c/√(WS^[a,b ]) ,
for some c ∈WS^[a,b] and √(WS^[a,b]) = ∑_c ∈WS^[a,b] d_c.
In other words, the projectors onto the conformal families, P_r^[a,b], project onto the irreps of WS^[a,b].
This provides an interpretation of the asymptotic limit eq:asymptotic-1 in terms of the Verlinde lines present in the open string Hilbert space.
For the case where a = b, <ref> is easily seen to be true, with c = r and d_a^2 = √(WS^[a,b]).
More generally, for a ≠ b it is not obvious that <ref> is true, although we have checked it for some specific minimal models, such as the tricritical and tetracritical Ising model.
Let us briefly discuss the reasoning behind <ref> in the general case, a ≠ b.
Firstly, by using the Verlinde formula, one can show that
∑_d N_ad^a N_bd^b = ∑_r N_ab^r ,
which implies that the number of boundary conformal towers in the partition function Z_ab(q) is equal to the number of simple objects in WS^[a,b].
Then, if the irreducible representations of WS^[a,b] are in one-to-one correspondence with the boundary conformal towers i.e. for every r ∈ a × b there is a corresponding irrep c ∈WS^[a,b],
the projector onto the boundary conformal tower r, P_r^[a,b], then projects onto the irreducible representation c ∈WS^[a,b].
Appealing to the specific case of both boundaries being equal eq:MFC-SREE-equipartition, we then suggest the relation in <ref>.
§.§ Boundary conditions and projectors
It is worth noting that the projectors we found for the Fibonacci symmetry in the three-state Potts model, <ref>, are identical to the Fibonacci projectors for the tricritical Ising model, <ref>.
Naturally, this leads one to ask: do the projectors defined in <ref> actually depend on the choice of boundary conditions?
For the case of the Fibonacci category, one can show the BCFT projectors do not depend on the choice of boundary condition when they are equal on both sides.
Suppose we choose both boundary conditions to be labelled by a;
the boundary fusion algebra is
ℒ_c^[a]×ℒ_d^[a] = ∑_e ∈ c × d^Ñ_cd^[a] eℒ_e^[a] ,
where Ñ_cd^[a] e need not be integers, see <ref>.
The boundary fusion algebra identities eq:boundary-fusion-identities completely determine the boundary fusion rules for the Fibonacci category:
ℒ_W^[a]×ℒ_W^[a] = 1/φℒ_1^[a] + φ - 1/φℒ_W^[a] .
The orthogonal projectors that are compatible with these fusion rules are
Π_1^[a] = 1/1+φ(ℒ_1 + φℒ_W) ,
Π_W^[a] = φ/1+φ(ℒ_1 - ℒ_W) .
Finally, from the perspective of critical anyonic chains, it is somewhat natural to conclude that the boundary fusion rules are independent of the specific choice of boundaries.
We briefly discuss this for the case of Rep(S_3) symmetry.
There are numerous anyonic chains with Rep(S_3) symmetry with open boundary conditions. For instance, the (anti) ferromagnetic SU(2)_4 spin-1/2 chain describes the (Tetracritical Ising) Z_4 parafermion CFT <cit.>.
Similarly, the SU(2)_4 spin-1 anyon chains can describe c = 1 compact boson CFTs at various compactification radii <cit.> (including non-rational points).
In the above cases, fixing a spin-1 anyon on the boundaries of the open chain will result in the Rep(S_3) symmetry being present.
The boundary fusion rules for Rep(S_3) will be those stated in <ref>.
Therefore, from the lattice perspective, it is obvious that with spin-1 anyons at the boundaries the Rep(S_3) projectors (<ref>) will project onto the irreps of Rep(S_3).
In contrast, from the continuum BCFT perspective, it is non-trivial to determine whether these projectors are also the appropriate projectors for Rep(S_3).
Firstly, this is because the BCFTs associated with the anyonic chains will generally admit non-Cardy boundary states.
Secondly, as shown for the golden chain, even if both boundary anyons are identical, the corresponding boundary states in the BCFT can be different.
For these reasons, it is not obvious that the boundary fusion rules in the presence of these boundaries are given by <ref> in the BCFT describing the continuum limit.
However, viewing these BCFTs as the continuum limit of the associated anyonic chains, the boundary fusion rules of the Rep(S_3) symmetry in the continuum must also satisfy <ref>.
This would suggest that as long as the boundary conditions in the CFT are weakly symmetric under Rep(S_3), then fusion of Rep(S_3) in the presence of these boundaries is identical to <ref>.
Therefore, we tentatively suggest the following picture. Suppose that the simple boundary conditions a and b are weakly symmetric under a symmetry category C that acts faithfully (see <cit.>) on the Hilbert space of interest.
Then the fusion of objects in C in the presence of the boundaries a and b are independent of the specific choice of a and b.
§ CONCLUSIONS
In this work we have analysed the symmetry-resolved entanglement entropy for non-invertible symmetries.
Specifically, we focused on non-invertible symmetries generated by Verlinde lines in 1+1d conformal field theories with diagonal partition function.
We constructed projectors onto irreducible sectors for these non-invertible symmetries in the presence of boundaries and utilised them to study the equipartition of the symmetry-resolved entanglement entropy. A key element was the study of the defect fusion algebra in the presence of boundaries which permits non-integer coefficients.
Explicit examples of orthogonal projectors for the cases of the Fibonacci category and Rep(S_3) were provided.
Importantly, the Fibonacci projectors and the associated expressions for the symmetry-resolved entanglement entropy differ from those in the recent literature <cit.>.
An obvious direction for future work is extending the analysis to general fusion categories whose fusion rules may be non-commutative. This is likely best achieved by utilising the symmetry topological field theory <cit.>.
Indeed, the symmetry topological field theory proved to be a powerful tool when studying the symmetry resolution of the torus partition function <cit.>.
This is because the symmetry topological field theory can be used to construct the generalised charges associated with a symmetry <cit.>.
Work in this direction requires further understanding of the representation theory of non-invertible symmetries on manifolds with boundaries, which has been recently studied in the context of the representation theory of solitons <cit.>.
Throughout this work we have emphasised that the fusion algebra of non-invertible symmetries in the presence of boundaries is different than the bulk fusion algebra. This leads to a natural question: does the fusion algebra explicitly depend on the choice of boundary conditions?
We commented on this briefly in <ref> and tentatively suggest that the boundary fusion relations are independent of the choice of boundary conditions, as long as the boundaries are weakly symmetric under the fusion category of interest.
While one may have physical reasons to believe this picture to be true, a mathematical proof is far from obvious as the boundary fusion algebra makes explicit reference to the choice of boundary conditions.
Additionally, it would be beneficial to further analyse the interplay between entanglement measures and non-invertible symmetries.
Entanglement measures require some choice of bipartition and consequently a choice of boundary conditions at the entangling surface.
Therefore, we believe our results will prove useful in analysing the relation between other entanglement measures and non-invertible symmetries, such as the entanglement asymmetry <cit.>.
On a similar note, to study these entanglement measures numerically requires realising the non-invertible symmetries on a lattice with open boundary conditions.
While recent works have constructed Rep(S_3) symmetries for periodic chains <cit.>, they have, to our knowledge, not yet been constructed on spin chains with open boundary conditions.
We leave the discussion of spin chain realisations of non-invertible symmetries in the presence of boundaries to future work.
Note added. While in the process of completing this work we became aware of related work on symmetry-resolved entanglement measures for non-invertible symmetries by Yichul Choi, Brandon Rayhaun, and Yunqin Zheng <cit.> as well as Arpit Das, Javier Molina-Vilaplana and Pablo Saura-Bastida <cit.>. We also expect parts of other upcoming work on the SymTFT picture for physical theories with boundaries by Yichul Choi, Brandon Rayhaun, and Yunqin Zheng <cit.>, Lakshya Bhardwaj, Christian Copetti, Daniel Pajer, and Sakura Schäfer-Nameki <cit.> as well as Iñaki García Etxebarria, Jesús Huertas, and Angel Uranga <cit.> to be relevant in our context.
§.§ Acknowledgement
The research of JH was supported by an Australian Government Research Training Program (RTP) Scholarship.
The authors thank Tyler Franke for helpful discussions and Hong-Hao Tu for bringing the concept of a ladder algebra to their attention.
We would like to thank everyone listed under “Note added” for being willing to coordinate our respective submissions.
vidalEntanglementQuantumCritical2003
G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, “Entanglement in quantum critical phenomena,” http://dx.doi.org/10.1103/PhysRevLett.90.227902Physical Review Letters 90 (2003) 227902, http://arxiv.org/abs/quant-ph/0211074arXiv:quant-ph/0211074.
osterlohScalingEntanglementClose2002
A. Osterloh, L. Amico, G. Falci, and R. Fazio, “Scaling of entanglement close to a quantum phase transition,” http://dx.doi.org/10.1038/416608aNature 416 (2002) 608–610, http://arxiv.org/abs/quant-ph/0202029arXiv:quant-ph/0202029.
Korepin:2004PhRvL..92i6402K
V. E. Korepin, “Universality of entropy scaling in one dimensional gapless models,” http://dx.doi.org/10.1103/PhysRevLett.92.096402Physical Review Letters 92 (2004) 096402, http://arxiv.org/abs/cond-mat/0311056arXiv:cond-mat/0311056.
calabreseEntanglementEntropyConformal2009
P. Calabrese and J. Cardy, “Entanglement entropy and conformal field theory,” http://dx.doi.org/10.1088/1751-8113/42/50/504005Journal of Physics A: Mathematical and Theoretical 42 (2009) 504005.
Tu:2017PhRvL.119z1603T
H.-H. Tu, “Universal entropy of conformal critical theories on a Klein bottle,” http://dx.doi.org/10.1103/PhysRevLett.119.261603Physical Review Letters 119 (2017) 261603, http://arxiv.org/abs/1707.05812arXiv:1707.05812.
levinDetectingTopologicalOrder2006
M. Levin and X.-G. Wen, “Detecting topological order in a ground state wave function,” http://dx.doi.org/10.1103/PhysRevLett.96.110405Physical Review Letters 96 (2006) 110405, http://arxiv.org/abs/cond-mat/0510613arXiv:cond-mat/0510613.
kitaevTopologicalEntanglementEntropy2006
A. Kitaev and J. Preskill, “Topological entanglement entropy,” http://dx.doi.org/10.1103/PhysRevLett.96.110404Physical Review Letters 96 (2006) 110404, http://arxiv.org/abs/hep-th/0510092arXiv:hep-th/0510092.
laflorencieQuantumEntanglementCondensed2016
N. Laflorencie, “Quantum entanglement in condensed matter systems,” http://dx.doi.org/10.1016/j.physrep.2016.06.008Physics Reports 646 (2016) 1–59, http://arxiv.org/abs/1512.03388arXiv:1512.03388.
whiteDensityMatrixFormulation1992
S. R. White, “Density matrix formulation for quantum renormalization groups,” http://dx.doi.org/10.1103/PhysRevLett.69.2863Physical Review Letters 69 (1992) 2863–2866.
sierraEquivalenceVariationalMatrix1998
G. Sierra, T. Nishino, M. A. Martín-Delgado, and J. Dukelsky, “Equivalence of the variational matrix product method and the density matrix renormalization group applied to spin chains,” http://dx.doi.org/10.1209/epl/i1998-00381-xEPL 43 (1998) 457.
eisertAreaLawsEntanglement2010
J. Eisert, M. Cramer, and M. B. Plenio, “Area laws for the entanglement entropy - a review,” http://dx.doi.org/10.1103/RevModPhys.82.277Reviews of Modern Physics 82 (2010) 277–306, http://arxiv.org/abs/0808.3773arXiv:0808.3773.
schollwoeckDensitymatrixRenormalizationGroup2011
U. Schollwöck, “The density-matrix renormalization group in the age of matrix product states,” http://dx.doi.org/10.1016/j.aop.2010.09.012Annals Phys. 326 (2011) 96–192, http://arxiv.org/abs/1008.3477arXiv:1008.3477.
banulsTensorNetworkAlgorithms2023
M. C. Bañuls, “Tensor network algorithms: A route map,” http://dx.doi.org/10.1146/annurev-conmatphys-040721-022705Ann. Rev. Condensed Matter Phys. 14 (2023) 173–191, http://arxiv.org/abs/2205.10345arXiv:2205.10345.
laflorencieSpinresolvedEntanglementSpectroscopy2014
N. Laflorencie and S. Rachel, “Spin-resolved entanglement spectroscopy of critical spin chains and Luttinger liquids,” http://dx.doi.org/10.1088/1742-5468/2014/11/P11013Journal of Statistical Mechanics: Theory and Experiment 2014 (2014) P11013.
goldsteinSymmetryresolvedEntanglementManybody2018
M. Goldstein and E. Sela, “Symmetry-resolved entanglement in many-body systems,” http://dx.doi.org/10.1103/PhysRevLett.120.200602Physical Review Letters 120 (2018) 200602, http://arxiv.org/abs/1711.09418arXiv:1711.09418.
xavierEquipartitionEntanglementEntropy2018
J. C. Xavier, F. C. Alcaraz, and G. Sierra, “Equipartition of the entanglement entropy,” http://dx.doi.org/10.1103/PhysRevB.98.041106Physical Review B 98 (2018) 041106, http://arxiv.org/abs/1804.06357arXiv:1804.06357.
bonsignoriSymmetryResolvedEntanglement2019
R. Bonsignori, P. Ruggiero, and P. Calabrese, “Symmetry resolved entanglement in free fermionic systems,” http://dx.doi.org/10.1088/1751-8121/ab4b77Journal of Physics A: Mathematical and Theoretical 52 (2019) 475302, http://arxiv.org/abs/1907.02084arXiv:1907.02084.
capizziSymmetryResolvedEntanglement2020
L. Capizzi, P. Ruggiero, and P. Calabrese, “Symmetry resolved entanglement entropy of excited states in a CFT,” http://dx.doi.org/10.1088/1742-5468/ab96b6Journal of Statistical Mechanics: Theory and Experiment 2020 (2020) 073101, http://arxiv.org/abs/2003.04670arXiv:2003.04670.
bonsignoriBoundaryEffectsSymmetry2021
R. Bonsignori and P. Calabrese, “Boundary effects on symmetry resolved entanglement,” http://dx.doi.org/10.1088/1751-8121/abcc3aJournal of Physics A: Mathematical and Theoretical 54 (2021) 015005, http://arxiv.org/abs/2009.08508arXiv:2009.08508.
calabreseSymmetryresolvedEntanglementEntropy2021
P. Calabrese, J. Dubail, and S. Murciano, “Symmetry-resolved entanglement entropy in Wess-Zumino-Witten models,” http://dx.doi.org/10.1007/JHEP10(2021)067Journal of High Energy Physics 2021 (2021) 67, http://arxiv.org/abs/2106.15946arXiv:2106.15946.
digiulioBoundaryConformalField2023
G. Di Giulio, R. Meyer, C. Northe, H. Scheppach, and S. Zhao, “On the boundary conformal field theory approach to symmetry-resolved entanglement,” http://dx.doi.org/10.21468/SciPostPhysCore.6.3.049SciPost Physics Core 6 (2023) 049.
kusukiSymmetryresolvedEntanglementEntropy2023
Y. Kusuki, S. Murciano, H. Ooguri, and S. Pal, “Symmetry-resolved entanglement entropy, spectra & boundary conformal field theory,” http://dx.doi.org/10.1007/JHEP11(2023)216Journal of High Energy Physics 2023 (2023) 216, http://arxiv.org/abs/2309.03287arXiv:2309.03287.
northeEntanglementResolutionRespect2023
C. Northe, “Entanglement resolution with respect to conformal symmetry,” http://dx.doi.org/10.1103/PhysRevLett.131.151601Physical Review Letters 131 (2023) 151601, http://arxiv.org/abs/2303.07724arXiv:2303.07724.
castro-alvaredoSymmetryResolvedMeasures2024
O. A. Castro-Alvaredo and L. Santamaría-Sanz, “Symmetry resolved measures in quantum field theory: A short review,” http://arxiv.org/abs/2403.06652arXiv:2403.06652.
lukinProbingEntanglementManybodylocalized2019
A. Lukin, M. Rispoli, R. Schittko, M. E. Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Léonard, and M. Greiner, “Probing entanglement in a many-body-localized system,” http://dx.doi.org/10.1126/science.aau0818Science 364 (2019) 256–260, http://arxiv.org/abs/1805.09819arXiv:1805.09819.
shenDisentanglingPhysicsAttractive2024
T. Shen, H. Barghathi, A. Del Maestro, and B. M. Rubenstein, “Disentangling the physics of the attractive Hubbard model as a fully interacting model of fermions via the accessible and symmetry-resolved entanglement entropies,” http://dx.doi.org/10.1103/PhysRevB.109.195119Physical Review B 109 (2024) 195119.
araEntanglementEdgeModes2024
N. Ara, R. Basu, E. Mathew, and I. Raychowdhury, “Entanglement of edge modes in (very) strongly correlated topological insulators,” http://dx.doi.org/10.1088/1361-648X/ad3c88Journal of Physics: Condensed Matter 36 (2024) 295601, http://arxiv.org/abs/2312.13598arXiv:2312.13598.
cardyBoundaryConditionsFusion1989
J. L. Cardy, “Boundary conditions, fusion rules and the Verlinde formula,” http://dx.doi.org/10.1016/0550-3213(89)90521-XNuclear Physics B 324 (1989) 581–596.
cardyEntanglementHamiltoniansTwodimensional2016
J. Cardy and E. Tonni, “Entanglement Hamiltonians in two-dimensional conformal field theory,” http://dx.doi.org/10.1088/1742-5468/2016/12/123103Journal of Statistical Mechanics: Theory and Experiment 2016 (2016) 123103.
lauchliOperatorContentRealspace2013
A. M. Läuchli, “Operator content of real-space entanglement spectra at conformal critical points,” http://arxiv.org/abs/1303.0741arXiv:1303.0741.
ohmoriPhysicsEntanglingSurface2015
K. Ohmori and Y. Tachikawa, “Physics at the entangling surface,” http://dx.doi.org/10.1088/1742-5468/2015/04/P04010Journal of Statistical Mechanics: Theory and Experiment 2015 (2015) P04010, http://arxiv.org/abs/1406.4167arXiv:1406.4167.
choiRemarksBoundariesAnomalies2023
Y. Choi, B. C. Rayhaun, Y. Sanghavi, and S.-H. Shao, “Remarks on boundaries, anomalies, and noninvertible symmetries,” http://dx.doi.org/10.1103/PhysRevD.108.125005Physical Review D 108 (2023) 125005, http://arxiv.org/abs/2305.09713arXiv:2305.09713.
gaiottoGeneralizedGlobalSymmetries2015
D. Gaiotto, A. Kapustin, N. Seiberg, and B. Willett, “Generalized global symmetries,” http://dx.doi.org/10.1007/JHEP02(2015)172Journal of High Energy Physics 2015 (2015) 172, http://arxiv.org/abs/1412.5148arXiv:1412.5148.
Bhardwaj:2018JHEP...03..189B
L. Bhardwaj and Y. Tachikawa, “On finite symmetries and their gauging in two dimensions,” http://dx.doi.org/10.1007/JHEP03(2018)189Journal of High Energy Physics 2018 (2018) 189, http://arxiv.org/abs/1704.02330arXiv:1704.02330.
mcgreevyGeneralizedSymmetriesCondensed2023
J. McGreevy, “Generalized symmetries in condensed matter,” http://dx.doi.org/10.1146/annurev-conmatphys-040721-021029Annual Review of Condensed Matter Physics 14 (2023) 57–82, http://arxiv.org/abs/2204.03045arXiv:2204.03045.
Schafer-Nameki:2023jdn
S. Schäfer-Nameki, “ICTP lectures on (non-)invertible generalized symmetries,” http://dx.doi.org/10.1016/j.physrep.2024.01.007Phys. Rept. 1063 (2024) 1–55, http://arxiv.org/abs/2305.18296arXiv:2305.18296.
Bhardwaj:2023arXiv230707547B
L. Bhardwaj, L. E. Bottini, L. Fraser-Taliente, L. Gladden, D. S. W. Gould, A. Platschorre, and H. Tillim, “Lectures on Generalized Symmetries,” http://arxiv.org/abs/2307.07547arXiv:2307.07547.
shaoWhatDoneCannot2023
S.-H. Shao, “What's done cannot be undone: TASI lectures on non-invertible symmetries,” http://arxiv.org/abs/2308.00747arXiv:2308.00747.
Carqueville:2023arXiv231102449C
N. Carqueville, M. Del Zotto, and I. Runkel, “Topological defects,” http://dx.doi.org/10.48550/arXiv.2311.02449arXiv e-prints (2023) arXiv:2311.02449, http://arxiv.org/abs/2311.02449arXiv:2311.02449. Contribution to the Encyclopedia of Mathematical Physics, 2nd Edition.
petkovaGeneralizedTwistedPartition2001
V. B. Petkova and J. B. Zuber, “Generalized twisted partition functions,” http://dx.doi.org/10.1016/S0370-2693(01)00276-3Physics Letters B 504 (2001) 157–164, http://arxiv.org/abs/hep-th/0011021arXiv:hep-th/0011021.
frohlichKramersWannierDualityConformal2004
J. Fröhlich, J. Fuchs, I. Runkel, and C. Schweigert, “Kramers-Wannier duality from conformal defects,” http://dx.doi.org/10.1103/PhysRevLett.93.070601Physical Review Letters 93 (2004) 070601, http://arxiv.org/abs/cond-mat/0404051arXiv:cond-mat/0404051.
frohlichDualityDefectsRational2007
J. Fröhlich, J. Fuchs, I. Runkel, and C. Schweigert, “Duality and defects in rational conformal field theory,” http://dx.doi.org/10.1016/j.nuclphysb.2006.11.017Nuclear Physics B 763 (2007) 354–430, http://arxiv.org/abs/hep-th/0607247arXiv:hep-th/0607247.
saura-bastidaCategoricalsymmetryResolvedEntanglement2024
P. Saura-Bastida, A. Das, G. Sierra, and J. Molina-Vilaplana, “Categorical-symmetry resolved entanglement in conformal field theory,” http://dx.doi.org/10.1103/PhysRevD.109.105026Physical Review D: Particles and Fields 109 (2024) 105026, http://arxiv.org/abs/2402.06322arXiv:2402.06322.
grahamDefectLinesBoundary2004
K. Graham and G. M. T. Watts, “Defect lines and boundary flows,” http://dx.doi.org/10.1088/1126-6708/2004/04/019Journal of High Energy Physics 2004 (2004) 019–019, http://arxiv.org/abs/hep-th/0306167arXiv:hep-th/0306167.
konechnyOpenTopologicalDefects2020
A. Konechny, “Open topological defects and boundary RG flows,” http://dx.doi.org/10.1088/1751-8121/ab7c8bJournal of Physics A: Mathematical and Theoretical 53 (2020) 155401, http://arxiv.org/abs/1911.06041arXiv:1911.06041.
kojitaTopologicalDefectsOpen2018
T. Kojita, C. Maccaferri, T. Masuda, and M. Schnabl, “Topological defects in open string field theory,” http://dx.doi.org/10.1007/JHEP04(2018)057JHEP 04 (2018) 057, http://arxiv.org/abs/1612.01997arXiv:1612.01997.
cordovaRepresentationTheorySolitons2024
C. Cordova, N. Holfester, and K. Ohmori, “Representation theory of solitons,” http://arxiv.org/abs/2408.11045arXiv:2408.11045.
Barter:2018hjs
D. Barter, J. C. Bridgeman, and C. Jones, “Domain walls in topological phases and the Brauer–Picard ring for Vec (ℤ/pℤ),” http://dx.doi.org/10.1007/s00220-019-03338-2Commun. Math. Phys. 369 (2019) 1167–1185, http://arxiv.org/abs/1806.01279arXiv:1806.01279.
2019Rowett
M. Rowett, “Monoidal ladder categories,” Bachelor's thesis, Australian National University, 2019.
Available at <https://tqft.net/web/research/students/MitchellRowett/thesis.pdf>.
2023Henriques
A. Henriques, D. Penneys, and J. Tener, “Classification of finite depth objects in bicommutant categories via anchored planar algebras,” http://arxiv.org/abs/2307.13822arXiv:2307.13822.
Kitaev:2011dxc
A. Kitaev and L. Kong, “Models for gapped boundaries and domain walls,” http://dx.doi.org/10.1007/s00220-012-1500-5Commun. Math. Phys. 313 (2012) 351–373, http://arxiv.org/abs/1104.5047arXiv:1104.5047.
linAsymptoticDensityStates2023
Y.-H. Lin, M. Okada, S. Seifnashri, and Y. Tachikawa, “Asymptotic density of states in 2d CFTs with non-invertible symmetries,” http://dx.doi.org/10.1007/JHEP03(2023)094Journal of High Energy Physics 2023 (2023) 94, http://arxiv.org/abs/2208.05495arXiv:2208.05495.
Vanhove:2021nav
R. Vanhove, L. Lootens, H.-H. Tu, and F. Verstraete, “Topological aspects of the critical three-state Potts model,” http://dx.doi.org/10.1088/1751-8121/ac68b1J. Phys. A 55 (2022) 235002, http://arxiv.org/abs/2107.11177arXiv:2107.11177.
feiguinInteractingAnyonsTopological2007
A. Feiguin, S. Trebst, A. W. W. Ludwig, M. Troyer, A. Kitaev, Z. Wang, and M. H. Freedman, “Interacting anyons in topological quantum liquids: The golden chain,” http://dx.doi.org/10.1103/PhysRevLett.98.160409Physical Review Letters 98 (2007) 160409, http://arxiv.org/abs/cond-mat/0612341arXiv:cond-mat/0612341.
difrancescoConformalFieldTheory1997
P. Di Francesco, P. Mathieu, and D. Senechal, http://dx.doi.org/10.1007/978-1-4612-2256-9Conformal Field Theory.
Graduate Texts in Contemporary Physics. Springer-Verlag, New York, 1997.
verlindeFusionRulesModular1988
E. Verlinde, “Fusion rules and modular transformations in 2D conformal field theory,” http://dx.doi.org/10.1016/0550-3213(88)90603-7Nuclear Physics B 300 (1988) 360–376.
Moore:1989vd
G. W. Moore and N. Seiberg, “Lectures on RCFT,” in 1989 Banff NATO ASI: Physics, Geometry and Topology, no. RU-89-32, YCTP-P13-89.
1989.
Simon:2023hdq
S. H. Simon, Topological Quantum.
Oxford University Press, 2023.
cardyBoundaryConformalField2004
J. L. Cardy, “Boundary conformal field theory,” http://arxiv.org/abs/hep-th/0411189arXiv:hep-th/0411189. Contribution to the Encyclopedia of Mathematical Physics.
fuchsTFTConstructionRCFT2002
J. Fuchs, I. Runkel, and C. Schweigert, “TFT construction of RCFT correlators I: Partition functions,” http://dx.doi.org/10.1016/S0550-3213(02)00744-7Nuclear Physics B 646 (2002) 353–497.
felderCorrelationFunctionsBoundary2002
G. Felder, J. Fröhlich, J. Fuchs, and C. Schweigert, “Correlation functions and boundary conditions in rational conformal field theory and three-dimensional topology,” http://dx.doi.org/10.1023/A:1014903315415Compositio Mathematica 131 (2002) 189–238.
vercleyenLowRankFusion2023
G. Vercleyen and J. K. Slingerland, “On low rank fusion rings,” http://dx.doi.org/10.1063/5.0148848Journal of Mathematical Physics 64 (2023) 091703, http://arxiv.org/abs/2205.15637arXiv:2205.15637.
ardonneClebschGordan6jcoefficientsRank2010
E. Ardonne and J. K. Slingerland, “Clebsch-Gordan and 6j-coefficients for rank two quantum groups,” http://dx.doi.org/10.1088/1751-8113/43/39/395205Journal of Physics A: Mathematical and Theoretical 43 (2010) 395205, http://arxiv.org/abs/1004.5456arXiv:1004.5456.
kirchnerCharacterizingEntanglementAnyonic2024
N. Kirchner, W. Choi, and F. Pollmann, “Characterizing the entanglement of anyonic systems using the anyonic partial transpose,” http://arxiv.org/abs/2403.12121arXiv:2403.12121.
Andrews:1984JSP....35..193A
G. E. Andrews, R. J. Baxter, and P. J. Forrester, “Eight-vertex SOS model and generalized Rogers-Ramanujan-type identities,” http://dx.doi.org/10.1007/BF01014383Journal of Statistical Physics 35 (1984) 193–266.
Klumper:1992PhyA..183..304K
A. Klümper and P. A. Pearce, “Conformal weights of RSOS lattice models and their fusion hierarchies,” http://dx.doi.org/10.1016/0378-4371(92)90149-KPhysica A Statistical Mechanics and its Applications 183 (1992) 304–350.
gilsAnyonicQuantumSpin2013
C. Gils, E. Ardonne, S. Trebst, D. A. Huse, A. W. W. Ludwig, M. Troyer, and Z. Wang, “Anyonic quantum spin chains: Spin-1 generalizations and topological stability,” http://dx.doi.org/10.1103/PhysRevB.87.235120Physical Review B 87 (2013) 235120, http://arxiv.org/abs/1303.4290arXiv:1303.4290.
gaiottoOrbifoldGroupoids2021
D. Gaiotto and J. Kulp, “Orbifold groupoids,” http://dx.doi.org/10.1007/JHEP02(2021)132Journal of High Energy Physics 2021 (2021) 132, http://arxiv.org/abs/2008.05960arXiv:2008.05960.
apruzziSymmetryTFTsString2023
F. Apruzzi, F. Bonetti, I. G. Etxebarria, S. S. Hosseini, and S. Schäfer-Nameki, “Symmetry TFTs from string theory,” http://dx.doi.org/10.1007/s00220-023-04737-2Communications in Mathematical Physics 402 (2023) 895–949, http://arxiv.org/abs/2112.02092arXiv:2112.02092.
freedTopologicalSymmetryQuantum2024
D. S. Freed, G. W. Moore, and C. Teleman, “Topological symmetry in quantum field theory,” http://arxiv.org/abs/2209.07471arXiv:2209.07471.
bhardwajGeneralizedChargesPart2024
L. Bhardwaj and S. Schäfer-Nameki, “Generalized charges, part I: Invertible symmetries and higher representations,” http://dx.doi.org/10.21468/SciPostPhys.16.4.093SciPost Physics 16 (2024) 093, http://arxiv.org/abs/2304.02660arXiv:2304.02660.
bhardwajGeneralizedChargesPart2023
L. Bhardwaj and S. Schäfer-Nameki, “Generalized charges, part II: Non-invertible symmetries and the symmetry TFT,” http://arxiv.org/abs/2305.17159arXiv:2305.17159.
aresEntanglementAsymmetryProbe2023
F. Ares, S. Murciano, and P. Calabrese, “Entanglement asymmetry as a probe of symmetry breaking,” http://dx.doi.org/10.1038/s41467-023-37747-8Nature Communications 14 (2023) 2036.
capizziEntanglementAsymmetryOrdered2023
L. Capizzi and M. Mazzoni, “Entanglement asymmetry in the ordered phase of many-body systems: The Ising field theory,” http://dx.doi.org/10.1007/JHEP12(2023)144Journal of High Energy Physics 2023 (2023) 144.
capizziUniversalFormulaEntanglement2024
L. Capizzi and V. Vitale, “A universal formula for the entanglement asymmetry of matrix product states,” http://arxiv.org/abs/2310.01962arXiv:2310.01962.
chenEnyiEntanglementAsymmetry2024
M. Chen and H.-H. Chen, “Rényi entanglement asymmetry in (1+1)-dimensional conformal field theories,” http://dx.doi.org/10.1103/PhysRevD.109.065009Physical Review D 109 (2024) 065009.
fossatiEntanglementAsymmetryCFT2024
M. Fossati, F. Ares, J. Dubail, and P. Calabrese, “Entanglement asymmetry in CFT and its relation to non-topological defects,” http://dx.doi.org/10.1007/JHEP05(2024)059Journal of High Energy Physics 2024 (2024) 59, http://arxiv.org/abs/2402.03446arXiv:2402.03446.
chatterjeeQuantumPhasesTransitions2024
A. Chatterjee, Ö. M. Aksoy, and X.-G. Wen, “Quantum phases and transitions in spin chains with non-invertible symmetries,” http://arxiv.org/abs/2405.05331arXiv:2405.05331.
bhardwajIllustratingCategoricalLandau2024
L. Bhardwaj, L. E. Bottini, S. Schäfer-Nameki, and A. Tiwari, “Illustrating the categorical Landau paradigm in lattice models,” http://arxiv.org/abs/2405.05302arXiv:2405.05302.
Zheng2
Y. Choi, B. Rayhaun, and Y. Zheng, “A non-invertible symmetry-resolved Affleck-Ludwig-Cardy formula and entanglement entropy from the boundary tube algebra,”. to appear.
Das
A. Das, J. Molina-Vilaplana, and P. Saura-Bastida, “Generalized symmetry resolution of entanglement in CFT for twisted and anyonic sectors,”. to appear.
Zheng1
Y. Choi, B. Rayhaun, and Y. Zheng, “Generalized tube algebras, symmetry-resolved partition functions, and twisted boundary states,”. to appear.
Bhardwaj
L. Bhardwaj, C. Copetti, D. Pajer, and S. Schäfer-Nameki, “Boundary SymTFT,”. to appear.
GarciaEtxebarria
I. García Etxebarria, J. Huertas, and A. Uranga, “SymTFT fans: The symmetry theory of 4d 𝒩 = 4 super Yang-Mills on spaces with boundaries,”. to appear.
§ PROJECTOR CALCULATIONS
§.§ Tetrahedron symbols
The tetrahedron symbols are obtained by considering a defect network in the shape of a tetrahedron.
Following the conventions in Ref. <cit.> and using <ref>, they are related to the F-symbols by
[Diagram: a defect network inscribed in a circle, with three internal lines labelled i, j, k meeting at the centre and the boundary arcs labelled a, b, c; its evaluation defines the tetrahedron symbol]
[ i j k; a b c ]^TET = √(d_a d_b d_i d_j) F_ck[ j i; a b ] .
There are numerous ways to shrink this tetrahedron to a point, which generates numerous identities between the F-symbols.
<ref> becomes more useful when used in conjunction with the tetrahedron symbols.
Due to the numerous ways to shrink a tetrahedron to a point, <ref> has many symmetries.
For instance, <ref> is invariant under the cyclic permutation of its columns
[ i j k; a b c ]^TET
= [ k i j; c a b ]^TET
= [ j k i; b c a ]^TET ,
or switching the upper and lower labels of two different columns simultaneously.
§.§ Pentagon identity
The pentagon identity (<ref>) reads
∑_s F_ps[ b c; a q ] F_qt[ s d; a e ] F_sr[ c d; b t ] = F_qr[ c d; p e ] F_pt[ b r; a e ] .
Referring to <ref>, if p = 1 then we require a = b, c = q and e = r.
Then, by the assumption of parity invariance of defects, F_pq[ j k; i l ] = F_pq[ i l; j k ] (this is obtained by reflecting <ref> about the horizontal), we find
∑_s ∈ C F_1s[ b c; b c ] F_ct[ b e; s d ] F_sr[ c d; b t ] = F_1t[ b r; b r ] δ_er .
Now we set b = c = X and t = d = Y:
∑_s∈ C F_1s[ X X; X X ] F_XY[ X e; s Y ] F_sr[ X Y; X Y ] = F_1Y[ X r; X r ] δ_er ,
so that
1/F_1Y[ X r; X r ] ∑_s F_1s[ X X; X X ] F_sr[ X Y; X Y ] F_XY[ X e; s Y ] = δ_re .
We identify the final term on the left side as the action of ℒ_s on the irreducible representation e with boundaries X and Y;
this is only non-zero if the boundaries X and Y are weakly symmetric with respect to ℒ_s.
Thus we arrive at
P_r^[a,b] = 1/F_1b[ a r; a r ] ∑_s∈ WS_[a,b] F_1s[ a a; a a ] F_sr[ a b; a b ] ℒ_s ,
as required.
§.§ Orthogonal projectors
Let C ⊆ℳ where ℳ is a modular fusion category.
Consider a ∈ℳ such that a× a = ∑_c∈ C c.
Then using <ref> we obtain the projector onto each representation r ∈ C:
P_r^[a] = 1/F_1a[ a r; a r ] ∑_s ∈ C F_1s[ a a; a a ] F_sr[ a a; a a ] ℒ_s = √(d_r)/d_a ∑_s∈ C √(d_s) F_sr[ a a; a a ] ℒ_s .
Next, using the F-symbol identity (which is easily seen from the tetrahedron symbols)
d_a F_sr[ a a; a a ] = √(d_s d_r) F_aa[ a r; s a ]
we then have
P_r^[a] = d_r/√(C)∑_s ∈ C^ d_s F_aa[ a r; s a ]ℒ_s
where we have used d_a^2 = √(ℳ).
This recovers <ref>.
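For concreteness, the Python sketch below evaluates the coefficients of P_r^[a] = (√(d_r)/d_a) ∑_s √(d_s) F_sr[ a a; a a ] ℒ_s for the Fibonacci category with a = τ and C = {1, τ}. The quantum dimensions and the F-matrix used are the commonly quoted Fibonacci data; their normalisation is an assumption and may differ from the conventions fixed in the main text. The script also checks that the two projectors sum to the identity defect ℒ_1.

```python
import numpy as np

# Fibonacci category: simple objects 1 (index 0) and tau (index 1).
phi = (1 + np.sqrt(5)) / 2          # quantum dimension of tau
d = np.array([1.0, phi])            # d_1, d_tau

# F-matrix (F^{tau tau tau})_tau in the basis {1, tau}; a standard convention,
# assumed here, which may differ from the one fixed in the main text.
F = np.array([[1/phi,          1/np.sqrt(phi)],
              [1/np.sqrt(phi), -1/phi]])

a = 1   # boundary label a = tau, for which a x a = 1 + tau

# Coefficients c[r, s] of P_r^{[a]} = sum_s c[r, s] L_s, following
# P_r^{[a]} = (sqrt(d_r)/d_a) * sum_s sqrt(d_s) * F_{s r} * L_s.
c = np.zeros((2, 2))
for r in range(2):
    for s in range(2):
        c[r, s] = np.sqrt(d[r]) / d[a] * np.sqrt(d[s]) * F[s, r]

print("P_1   =", c[0, 0], "* L_1 +", c[0, 1], "* L_tau")
print("P_tau =", c[1, 0], "* L_1 +", c[1, 1], "* L_tau")

# Completeness: the projectors should sum to the identity defect L_1.
print("sum_r P_r =", c.sum(axis=0))   # expect approximately [1, 0]
```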
Next we show that these are orthogonal projectors.
We then compute
P_r^[a] P_h^[a] = ∑_s,d,λ N_sd^λ Ñ_sd^[a]λ ( F_1s[ a a; a a ] F_sr[ a a; a a ] F_1d[ a a; a a ] F_dh[ a a; a a ] ) / ( F_1a[ a r; a r ] F_1a[ a h; a h ] ) ℒ_λ
= √(d_r d_h)/d_a^2 ∑_s,d,λ √(d_s d_d) N_sd^λ F_sr[ a a; a a ] F_dh[ a a; a a ] F_λ a[ s d; a a ] F_λ a[ s d; a a ] ℒ_λ .
We use the following identity, which can be obtained from the tetrahedron symbols,
F_ea[ d c; a a ] = √(d_e)/√(d_d) F_da[ c e; a a ]
to then write
P_r^[a]P_h^[a] = √(d_r d_h)/d_a^2 ∑_s,d,λ √(d_d d_λ) N_sd^λ F_sr[ a a; a a ] F_dh[ a a; a a ] F_λ a[ s d; a a ] F_s a[ d λ; a a ] ℒ_λ .
Then, consider the following form of the pentagon identity <cit.>
F_rq[ e p; d c ] F_pt[ e r; a b ] = ∑_s F_sp[ b c; a q ] F_tq[ s d; a e ] F_sr[ t d; b c ] .
Next, by using the tetrahedron symbols one finds
F_λ a[ s d; a a ] = F_a λ[ d s; a a ] .
The right hand side is only non-zero when N_sd^λ≠ 0.
Thus, summing over s in <ref> and utilising the pentagon identity <ref>, we find
P_r^[a]P_h^[a] = √(d_r d_h)/d_a^2 ∑_d,λ √(d_d d_λ) F_dh[ a a; a a ] F_aa[ a r; d a ] F_rλ[ a a; a a ] ℒ_λ .
From the tetrahedron symbol identity
[ r a a; d a a ]^TET = [ a a r; a a d ]^TET ,
which gives
F_aa[ a r; d a ] = d_a/√(d_r d_d) F_dr[ a a; a a ] ,
we then have
P_r^[a]P_h^[a] = √(d_h)/d_a ∑_d,λ √(d_λ) F_dh[ a a; a a ] F_dr[ a a; a a ] F_rλ[ a a; a a ] ℒ_λ
(the sum over d of F_dh[ a a; a a ] F_dr[ a a; a a ] yields δ_rh by the pentagon identity)
= δ_rh √(d_h)/d_a ∑_λ √(d_λ) F_r λ[ a a; a a ] ℒ_λ
= δ_rh P_h^[a] ,
as required.
|
http://arxiv.org/abs/2409.03132v1 | 20240904234557 | Energy Transport in Random Perturbations of Mechanical Systems | [
"Anna Maria Cherubini",
"Marian Gidea"
] | math.DS | [
"math.DS"
] |
Energy Transport in Random Perturbations of Mechanical Systems
Dipartimento di Matematica e Fisica “Ennio De Giorgi”, Università del Salento,
I-73100 Lecce, Italy
anna.cherubini@unisalento.it
Yeshiva University, Department of Mathematical Sciences, New York, NY 10016, USA
Marian.Gidea@yu.edu
^† Research of A.M.C. was partially supported by GNFM/INDAM
^ Research of M.G. was partially supported by NSF grant DMS-2154725.
2010 MSC: Primary 37J40, 37C29, 34C37; Secondary 70H08.
§ ABSTRACT
We describe a mechanism for transport of energy in a mechanical system consisting of a pendulum and a rotator subject to a random perturbation.
The perturbation that we consider is the product of a Hamiltonian vector field and a scalar, continuous, stationary Gaussian process with Hölder continuous realizations, scaled by a smallness parameter. We show that for almost every realization of the stochastic process, there is a distinguished set of times for which there exists a random normally hyperbolic invariant manifold with associated stable and unstable manifolds that intersect transversally, for all sufficiently small values of the smallness parameter. We derive the existence of orbits along which the energy changes over time by an amount proportional to the smallness parameter. This result is related to the Arnold diffusion problem for Hamiltonian systems, which we treat here in the random setting.
Anna Maria Cherubini^† and Marian Gidea
§ INTRODUCTION
The main idea of the present work is using randomness to overcome geometric obstacles in
dynamical systems coming from classical mechanics, and, in particular, to generate energy transfer.
We consider a 2-degrees of freedom, uncoupled pendulum-rotator system, which is described by an
integrable Hamiltonian. The energies of the rotator and of the pendulum are first integrals of the system.
Hence, there are no trajectories that cross the level sets of the energies, so these are geometric obstacles for the dynamics.
We add a small, random perturbation to the system; we assume that this perturbation is of a special type.
More precisely, the perturbation is given by a Hamiltonian vector field multiplied by a scalar, continuous, stationary Gaussian process with Hölder continuous paths. Additionally, the Hamiltonian vector field is assumed to vanish at the hyperbolic fixed point of the pendulum.
The energies of the rotator and of the pendulum are no longer conserved. We show that, in particular, the energy of the rotator can change by an amount proportional to the size of the perturbation.
Our treatment of the underlying random dynamics is path-wise, in the sense that we derive results for fixed realizations of the stochastic process. Each such realization is given by an unbounded, continuous curve. Nevertheless, we cannot reduce the problem to the case of a non-autonomous perturbation by regarding each realization as a time-dependent parameter, since we need to use the ergodicity of the process and make use of the Birkhoff ergodic theorem.
Considering the effect of random perturbations of mechanical system is very natural in applications.
There are inherently many sources of noise that affect mechanical systems, or, more generally engineering systems.
Some concrete examples can be found in, e.g., <cit.>.
One particular application that we plan to study in the future concerns piezoelectric energy harvesting devices, where one wants to exploit external vibrations
to generate electrical output; see <cit.>. Many of the existing models assume that the external vibrations are periodic, but it would be more realistic to consider noisy perturbations. Such systems also include dissipation effects, yielding random attractors <cit.> and stochastic resonance <cit.>.
For applications, the path-wise approach is suitable when we want to analyze the output of a single experiment at a time, rather than study the statistics of multiple experiments.
Our approach is based on geometric methods. The unperturbed system possesses a normally hyperbolic invariant manifold (NHIM) whose stable and unstable manifolds coincide. To understand the effect of the time-dependent perturbation, we work in the extended space, where time is viewed as an additional coordinate.
We show that for a distinguished set of times, there is a random normally hyperbolic invariant manifold (RNHIM) and corresponding stable and unstable manifolds that survive the perturbation.
The main difference from the standard normal hyperbolicity theory is that our normally hyperbolic invariant manifold is time-dependent, and, moreover
we cannot guarantee its existence for all times, but only for a certain set of times.
These manifolds are reminiscent of Pesin sets in non-uniform hyperbolicity theory <cit.>.
The reason why the RNHIMs may break up is that the underlying stochastic process is unbounded, and large spikes in the noise can destroy the relations among the hyperbolic rates that are needed for normal hyperbolicity.
Results on the persistence of stable and unstable manifolds of hyperbolic fixed points under random perturbations have been obtained in <cit.>.
To obtain a similar result for normally hyperbolic invariant manifolds, we apply some general results from <cit.>; see also <cit.>. One difficulty in applying these results to our case is that they assume that the perturbed flow is close to the unperturbed flow at all times.
This is not the case when the noise driving the perturbation is unbounded. To deal with unbounded noise, we modify the system by multiplying the Hamiltonian vector field that appears in the perturbation by a random bump function defined in the extended space. This bump function depends on the noise parameter, and has the effect of cutting off the noise when it spikes too much. This makes the modified flow stay close to the unperturbed flow, so we can apply the general theory. For the aforementioned distinguished set of times (where time is viewed as an additional coordinate), we show that the modified flow coincides with the original flow. This is how we obtain the persistence of the NHIM and of its stable and unstable manifolds for the distinguished set of times.
We also show that, for the distinguished set of times, the stable and unstable manifolds intersect transversally provided that certain non-degeneracy conditions are satisfied. The transverse intersections of the manifolds correspond to non-degenerate zeroes of a certain Melnikov process.
A key ingredient to show the existence of zeroes for the Melnikov process is Rice's formula, concerning the number of times the process crosses a predetermined level. Melnikov theory for random perturbations has been developed in <cit.>, and the idea of using Rice's formula to obtain intersections of the invariant manifolds comes from <cit.>.
Once the existence of transverse intersections of the stable and unstable manifolds is established, the dynamics along the corresponding homoclinic orbits can be described via a random scattering map. This is an analogue of the (deterministic) scattering map developed in <cit.>, and its version for time-dependent systems developed in <cit.>. This is a map defined on the RNHIM, and gives the future asymptotic of a homoclinic orbit as a function of its past asymptotic. We show that the random scattering map changes the energy of the rotator by an amount proportional to the size of the perturbation, provided that some non-degeneracy conditions are satisfied.
These non-degeneracy conditions rely again on Rice's formula.
The change in energy owed to the small perturbation is reminiscent of the Arnold diffusion problem for Hamiltonian systems <cit.>. Arnold conjectured that integrable Hamiltonian systems of general type, of more than two degrees of freedom, subjected to small, Hamiltonian perturbations of generic type have trajectories along which the energy changes by an amount that is independent of the perturbation parameter. A survey on some recent results can be found in <cit.>.
Much of the existing work considers deterministic perturbations.
Diffusion in randomly perturbed integrable Hamiltonian systems has been studied in <cit.>,
where they derive the Fokker-Planck equation for the distribution function of the action angle-variables.
A model for diffusion for random compositions of cylinder maps was considered in <cit.>.
Another paper of related interest is <cit.>.
The upshot of our work is that, we can extend the geometric methods for Arnold diffusion developed in <cit.> to the case in which the perturbations are random (rather than deterministic).
The main difficulty in applying the geometric method is that the perturbation is driven by unbounded noise, so the spikes in the noise may destroy the geometric structures.
Our current results yield only a small change in energy, of the order of the perturbation (that is, we obtain micro-diffusion – a term coined in <cit.>), rather than of order one.
The identification of orbits with diffusion of order one is the object of future investigation.
§ SET-UP
§.§ Unperturbed system
The unperturbed system is a rotator-pendulum system described by an autonomous Hamiltonian H_0 of the form:
H_0(I,ϕ,p,q) = h_0(I)+h_1(p,q)
= h_0(I)+ (1/2)p^2+V(q),
with z=(I,ϕ,p,q) in M:=ℝ×𝕋×ℝ×𝕋.
The phase space M is endowed with the symplectic form
dI∧ dϕ+dp∧ dq.
We denote by
ν(I):=∂ h_0/∂ I(I)
the frequency of the rotator.
We assume the following:
(P-i)
The potential V is periodic of period 1 in q;
(P-ii) The potential V has a non-degenerate local maximum, which, without loss of generality,
we set at 0; that is, V'(0)=0 and V”(0)<0. We additionally assume that q=0 is non-degenerate in the sense of Morse, i.e., 0 is the only critical point in the level set {V(q) = V (0)}.
Condition (P-ii) implies that the
pendulum has a homoclinic orbit to (0, 0), the hyperbolic fixed point of the pendulum. We consider that the homoclinic orbit is parametrized by (p_0(t),q_0(t)) for t∈ℝ, where (p_0(t),q_0(t))→ (0,0) as t→±∞.
The Hamilton equation associated to (<ref>) is
ż=X^0(z)=J∇ H_0(z),
where J is the symplectic matrix
J=([ J_2 0; 0 J_2 ]), with J_2=([ 0 -1; 1 0 ]).
We denote by Φ^t_0 the flow of (<ref>).
Since for H_0 the pendulum and the rotator are decoupled, the action variable I is preserved along the trajectories of (<ref>).
Similarly, the energy P= (1/2)p^2+V(q) of the pendulum is a conserved quantity.
In the sequel, we will show that if we add a small, random perturbation to the pendulum-rotator system, there are trajectories of the perturbed system along which I changes over time. If ν(I)=∂ h_0/∂ I(I)≠ 0 for all I within some range, the fact that I changes along a trajectory implies that the energy of the rotator h_0(I) also changes along that trajectory.
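The following Python sketch integrates the unperturbed system for the hypothetical choices h_0(I) = I^2/2 and V(q) = (cos(2πq)-1)/(4π^2); these concrete functions are not fixed by the text and are used here only because they satisfy (P-i)–(P-ii). The integration confirms numerically that both I and the pendulum energy P are conserved.

```python
import numpy as np
from scipy.integrate import solve_ivp

def V(q):          # hypothetical potential: period 1, non-degenerate max at q = 0
    return (np.cos(2*np.pi*q) - 1.0) / (4*np.pi**2)

def dV(q):
    return -np.sin(2*np.pi*q) / (2*np.pi)

def X0(t, z):      # unperturbed Hamiltonian vector field, z = (I, phi, p, q)
    I, phi, p, q = z
    return [0.0, I, -dV(q), p]     # h_0(I) = I^2/2, so nu(I) = I

z0 = [1.0, 0.0, 0.3, 0.25]
sol = solve_ivp(X0, (0.0, 50.0), z0, rtol=1e-10, atol=1e-12)

I, phi, p, q = sol.y
P = 0.5*p**2 + V(q)                                        # pendulum energy
print("max |I - I(0)| =", np.max(np.abs(I - I[0])))        # ~ 0 (conserved)
print("max |P - P(0)| =", np.max(np.abs(P - P[0])))        # ~ 0 (conserved)
```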
§.§ Perturbed system
To the system (<ref>) we add a random perturbation, so that the perturbed system is of the form
ż= X_ε(z,η(t))=
X^0(z)+ε X^1 (z,η(t))
= J∇ H_0(z)+ε J∇ H_1(z)η(t)
for ε∈ℝ, where η(t) is a scalar, continuous, stationary Gaussian process satisfying the properties (<ref>), (<ref>), (<ref>) below.
In the above, we assume that H_1(z ) is a Hamiltonian function, uniformly ^2 in z, satisfying the following condition
H_1(I,ϕ,0,0) = 0, DH_1(I,ϕ,0,0) = 0. (H1)
The perturbation is chosen so that it does not affect the inner dynamics, given by the restriction to the phase space of the rotator. The dynamics of the rotator is integrable, hence I is preserved by the inner dynamics.
The level sets of I (which are invariant circles) constitute geometric obstacles for the inner dynamics. We will show that we can use the outer dynamics, along the homoclinic orbits of the pendulum, to overcome these geometric obstacles.
The system (<ref>) is non-autonomous.
We denote by Φ^t_0,t_ε(ω) the corresponding flow, which depends on the initial time t_0 and on the realization ω of the stochastic process η(t). For every fixed realization ω of η(t) we have a sample path given by η(t)(ω)=ω(t). See Section <ref>.
It may be possible to remove condition (<ref>).
In fact, we will not use this condition for two of the main results (Theorem <ref> and Theorem <ref>).
However, without (<ref>), the inner dynamics will be affected by the random perturbation, and the resulting inner dynamics may overcome on its own the geometric obstacles.
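A path-wise numerical sketch of the perturbed system: the Hamiltonian H_1(I,ϕ,p,q) = (1-cos(2πq))cos ϕ is a hypothetical choice satisfying (H1), the potential is the one used in the previous sketch, and the sample path ω(t) is a random-phase surrogate standing in for one realization of the noise described in the next subsection. Since the İ-component of the vector field is O(ε), the action drifts over the run by an amount of order ε.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Surrogate sample path omega(t): a random-phase spectral sum, used only as a
# stand-in for one fixed realization of the stationary Gaussian process.
K = 200
lam = rng.normal(size=K)            # frequencies
psi = rng.uniform(0, 2*np.pi, K)    # independent uniform phases
omega = lambda t: np.sqrt(2.0/K) * np.sum(np.cos(lam*t + psi))

eps = 1e-2

def V_prime(q):
    return -np.sin(2*np.pi*q) / (2*np.pi)

def X_eps(t, z):
    # Hamilton's equations for H = I^2/2 + p^2/2 + V(q) + eps*omega(t)*H_1,
    # with the hypothetical H_1 = (1 - cos(2*pi*q)) * cos(phi).
    I, phi, p, q = z
    w = omega(t)
    dI   =  eps * w * (1.0 - np.cos(2*np.pi*q)) * np.sin(phi)
    dphi =  I
    dp   = -V_prime(q) - eps * w * 2*np.pi*np.sin(2*np.pi*q) * np.cos(phi)
    dq   =  p
    return [dI, dphi, dp, dq]

z0 = [1.0, 0.0, 0.3, 0.25]
sol = solve_ivp(X_eps, (0.0, 200.0), z0, rtol=1e-9, atol=1e-11)
print("change in I over the run:", sol.y[0, -1] - sol.y[0, 0])   # expect O(eps)
```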
§.§ Noise
The time-dependent function η (t) is a scalar stationary Gaussian process with mean 0, i.e.,
E[η(t)] = 0. (R-i)
Stationarity means that for any n, t_1,…, t_n, and h>0, the random vectors
(η(t_1),…, η(t_n)) and (η(t_1+h),…, η(t_n+h))
have the same (Gaussian) distribution.
The autocorrelation
function r(h) is
r(h):=E[η(t)η(t +h )].
By stationarity, the right-hand side of (<ref>) does
not depend on t.
We assume that the autocorrelation function satisfies the following conditions
r(h) is continuous and absolutely integrable on ℝ, (R-ii)
and
r(h)= 1-C|h|^a+o(|h|^a) as h→ 0, (R-iii)
for some C with 0< C < ∞, and some a with 1<a ≤ 2.
We use the notation f(x)=o(g(x)) to signify lim_x→ 0f(x)/g(x)=0.
For a Gaussian process, if r(h) satisfies (<ref>) for 0<a≤ 2, then the sample paths ω(t) are continuous, and if 1<a≤ 2,
then the sample paths ω(t) are α-Hölder continuous for any 0<α<(a-1)/2. See <cit.>.
That is,
there exists C_H>0, independent of ω, such that
|ω(t_1)-ω(t_2)|<C_H|t_1-t_2|^α for all t_1,t_2∈ℝ.
Intuitively, the above conditions say that the lesser the loss of memory of the process is, the more regular the sample paths of the process are.
To summarize, by assumption (<ref>), the sample paths of η(t) are continuous and α-Hölder continuous with probability 1.
Condition (<ref>) yields r(0)=1, and therefore
E[η(t)^2]=1,
which means that the Gaussian process has variance equal to 1.
Also, (<ref>) implies, via the Maruyama Theorem <cit.>, that η(t) is ergodic
lim_T→±∞1/T∫_0^Tϕ(η(t))dt = E[ϕ(η(t))], ∀ϕ:ℝ→ℝ measurable function.
Gaussian stationary processes that are not continuous are necessarily very irregular.
More precisely, one of the following alternatives holds: either with probability one the sample paths ω(t) are continuous, or with probability one they are unbounded on every finite interval <cit.>.
Hence, considering Gaussian processes with continuous sample paths as in (<ref>) is a reasonable assumption.
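As an illustration of (R-i)–(R-iii), the sketch below synthesises an approximately Gaussian stationary process with autocorrelation r(h) = exp(-h^2/2) (so a = 2 and C = 1/2 in (R-iii)) by a spectral random-phase method: frequencies are drawn from the spectral measure N(0,1) and phases are independent and uniform. Both the specific r and the synthesis method are illustrative assumptions, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-phase spectral synthesis: frequencies drawn from the spectral measure
# N(0,1) give autocorrelation r(h) = E[cos(lambda*h)] = exp(-h^2/2), and the sum
# is approximately Gaussian for large K.
K = 1000
lam = rng.normal(size=K)
psi = rng.uniform(0, 2*np.pi, size=K)

def eta(t):
    t = np.atleast_1d(t).astype(float)
    out = np.empty_like(t)
    for i in range(0, len(t), 2000):     # evaluate in chunks to limit memory
        out[i:i+2000] = np.sqrt(2.0/K) * np.cos(t[i:i+2000, None]*lam + psi).sum(axis=1)
    return out

dt = 0.05
t = np.arange(0.0, 2000.0, dt)
x = eta(t)
print("sample mean     :", x.mean())     # ~ 0, cf. (R-i)
print("sample variance :", x.var())      # ~ 1, since r(0) = 1

# Empirical autocorrelation at a few lags versus the target exp(-h^2/2).
for h in (0.25, 0.5, 1.0):
    k = int(round(h/dt))
    print(f"lag {h}: empirical r ~ {np.mean(x[:-k]*x[k:]):.3f}, "
          f"target {np.exp(-h*h/2):.3f}")
```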
§ MAIN RESULTS
In Section <ref> we show that the unperturbed rotator-pendulum system possesses a normally hyperbolic invariant manifold (NHIM).
The first main result is the persistence of the NHIM, as random normally hyperbolic invariant manifold (RNHIM), and of its stable and unstable manifolds, for a distinguished set of times. The definition of an RNHIM and of its stable and unstable manifolds is given in Section <ref>.
Assume that the system (<ref>) satisfies (P-i), (P-ii), (R-i), (R-ii), (R-iii) (but not necessarily (H1)).
Then, for any δ>0, there exist a positive random variable T_δ(ω), a closed set Q_A_δ,T_δ(ω)⊆ [-T_δ, T_δ],
and ε_0>0 such that, for every t_0∈ Q_A_δ,T_δ(ω), every ε with 0<ε<ε_0, and a.e. ω∈Ω
there exist the following objects:
(i) A normally hyperbolic manifold Λ_ε(θ^t_0(ω))
such that
Φ^t_0,t_1_ε (Λ_ε(θ^t_0(ω)))=Λ_ε(θ^t_0+t_1(ω)),
provided that t_0+t_1 ∈ Q_A_δ,T_δ(ω).
(ii) Stable and unstable manifolds W^s(Λ_ε(θ^t_0(ω))) and W^u(Λ_ε(θ^t_0(ω)))
such that
Φ^t_0,t_1_ε(θ^t_0(ω))(W^s(Λ_ε(θ^t_0(ω))))= W^s(Λ_ε(θ^t_0+t_1(ω))),
Φ^t_0,t_1_ε(θ^t_0(ω))(W^u(Λ_ε(θ^t_0(ω))))= W^u(Λ_ε(θ^t_0+t_1(ω))),
provided that t_0+t_1 ∈ Q_A_δ,T_δ(ω).
See Fig. <ref>.
Moreover, the distinguished set of times
Q_A_δ,T_δ(ω) satisfies the conditions (<ref>) and (<ref>) given in Section <ref>.
In Section <ref> we define a Melnikov process M^P (<ref>) to measure the splitting of the perturbed stable and unstable manifolds as it changes over time.
The second main result says that, if the Melnikov process satisfies some non-degeneracy conditions, then the stable and unstable manifolds, corresponding to the distinguished set of times provided by Theorem <ref>, intersect transversally.
Assume that the system (<ref>) satisfies (P-i), (P-ii), (R-i), (R-ii), (R-iii) (but not necessarily (H1)).
Also assume the condition (<ref>) on the spectral moments of M^P, given in Section <ref>.
Then, given a set Q_A_δ,T_δ(ω) and ε_0>0 as in Theorem <ref>, there exist t_0∈ Q_A_δ,T_δ(ω) and 0<ε_1<ε_0 such that for every 0<ε<ε_1, the stable and unstable manifolds W^s(Λ_ε(θ^t_0ω)) and W^u(Λ_ε(θ^t_0ω)) have a transverse intersection at a point z̃_ε.
In Section <ref> we define the random scattering map associated to a homoclinic intersection of the stable and unstable manifolds obtained in Theorem <ref>. This map gives the future asymptotic of a homoclinic orbit as a function of its past asymptotic.
In Section <ref> we define a Melnikov process M^I (<ref>) to measure the splitting of the action I-level sets by the scattering map.
The third main result says that, if the Melnikov process satisfies some non-degeneracy conditions, then the scattering map grows the action I (and hence the energy of the rotator) by O(). Consequently, there are trajectories of the perturbed system along which the action I (energy) grows by O().
Assume that the system (<ref>) satisfies (P-i), (P-ii), (R-i), (R-ii), (R-iii) and (H1).
Also assume the condition (<ref>) on the spectral moments of M^I, given in Section <ref>.
Then, given
ε_0>0 as in Theorem <ref>, for every v>0 there exist
trajectories z̃_ε(t), and times T_ε>0, such that
I(z̃_ε(T_ε))-I(z̃_ε(0))= vε +O(ε^1+ρ).
This result says that the perturbed system has trajectories that exhibit micro-diffusion in the action variable.
The change in action is obtained along a single homoclinic orbit.
We stress that, while the change is of order
O(ε), the constant v can be chosen arbitrarily large. This is different from the case of time-periodic (or quasi-periodic) Hamiltonian perturbations.
It seems possible to obtain true Arnold diffusion, i.e., the existence of trajectories along which the action variable changes by
O(1), by concatenating trajectory segments that change I by O(ε), and then applying a shadowing lemma similar to those in <cit.>. Among the challenges are to show that there are O(1/ε) such trajectory segments, and to obtain a version of the aforementioned shadowing lemma in the random setting.
§ BACKGROUND
§.§ Random dynamical system
We consider a probability space (Ω, ℱ,ℙ), where Ω is the sample space of
outcomes, ℱ is the σ-algebra of events, and ℙ is a probability measure
that assigns probabilities to the events in ℱ.
A stochastic process η is a mapping t∈ℝ↦η(t):Ω→ℝ, where
each η(t) is a random variable, i.e., a measurable function from Ω to ℝ.
For a fixed realization ω∈Ω, the function ω(t):=η(t)(ω) is referred to as a sample path.
(We note that, in this paper, under (<ref>), we assume that a.e. sample path is Hölder continuous.)
On Ω we consider the ℙ-preserving measurable flow θ^t:Ω→Ω, given by
θ^t ω (s)=ω(t+s).
It satisfies the following conditions for all ω∈Ω:
(i) θ^0 ω =ω,
(ii) θ^t_1+t_2ω =θ^t_1(θ^t_2ω), for t_1,t_2∈ℝ,
for each ω∈Ω.
The dynamical system (Ω,θ^t) is referred to as a metric dynamical system.
Let (M,ℬ) be a measurable space.
A measurable mapping
Φ:ℝ×Ω× M→ M
is a random dynamical system (RDS) over θ, if it satisfies the following cocycle conditions for all ω∈Ω:
(i) Φ(0,ω)(z)=z, for all z∈ M,
(ii) Φ(t_1+t_2,ω)(z)=Φ(t_1,θ^t_2ω)(Φ(t_2,ω)(z)), for all z∈ M, t_1,t_2∈.
We will often write Φ(t,ω)=Φ^t(ω).
§.§ Random differential equations
Let M be a smooth manifold and X:M×Ω→ TM, (z,ω)∈ M×Ω↦ X(z,ω)∈ T_zM, be a random vector field that is C^r, r≥ 1, in the z-component, and measurable in the ω-component.
A solution (in the sense of Carathéodory) of a system
{[ ż = X(z,θ^tω); z(t_0) = z_0 ].
is a function z(t;z_0,t_0,ω):M→ M satisfying
z(t;z_0,t_0,ω)=z_0+∫_t_0^t X(z(s;z_0,t_0,ω),θ^sω)ds,
where we fix a realization ω of the process η(t).
In the above notation ω refers to the choice of realization at time t=0.
If for all ω∈Ω, X(·,ω)∈ C^r, r≥ 1, then (<ref>) yields a unique solution t↦ z(t;z_0,t_0,ω) which is C^r in z_0.
See <cit.>.
If we denote z(t)=z(t;z_0,t_0,ω) and z̅(t)=z(t_0+t), and make a change of variable s↦ t_0+s' in (<ref>),
we have
z(t)= z_0+∫_0^t-t_0 X(z(t_0+s'),θ^t_0+s'ω)ds'
= z_0+∫_0^t-t_0 X(z̅(s'),θ^s'(θ^t_0ω))ds'.
Noting that z̅(0)=z(t_0)=z_0, we see that the right hand side of (<ref>) is the solution of
{[ ż̅̇ = X(z̅,θ^t(θ^t_0ω)); z̅(0) = z_0 ].
evaluated at time t-t_0, therefore, it coincides with z(t-t_0; z_0,0,θ^t_0ω).
We obtained the following invariance relation:
z(t;z_0, t_0,ω)=z(t-t_0;z_0,0,θ^t_0ω).
That is, the solution only depends on the elapsed time t-t_0 and on the random parameter θ^t_0ω at time t_0.
Therefore,
Φ(t,ω)(z_0)=z(t;z_0,0,ω)
determines a random dynamical system (RDS) over θ^t.
From (<ref>) and (<ref>) we have that, for any t_0,
Φ(t-t_0,θ^t_0ω)(z_0)=z(t-t_0;z_0,0,θ^t_0ω)=z(t;z_0,t_0,ω).
In the case when the sample paths ω(t) are continuous, for each realization ω the equation (<ref>) is a classical non-autonomous differential equation,
and its solutions (<ref>) are in the classical sense.
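The invariance relation (<ref>) can be checked numerically for a fixed realization. The sketch below uses the scalar test equation ż = -z + ω(t) (a hypothetical choice, with a random-phase surrogate path) and compares the solution started at time t_0 with the solution started at time 0 for the shifted path θ^t_0ω.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
K = 200
lam = rng.normal(size=K)
psi = rng.uniform(0, 2*np.pi, K)
omega = lambda s: np.sqrt(2.0/K) * np.sum(np.cos(lam*s + psi))   # fixed sample path

def solve(z0, t0, t1, path):
    # integrate z' = -z + path(t) from time t0 to t1, with z(t0) = z0
    sol = solve_ivp(lambda t, z: [-z[0] + path(t)], (t0, t1), [z0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

z0, t0, t1 = 0.7, 3.0, 8.0
shifted = lambda s: omega(t0 + s)          # the shifted path theta^{t0} omega

lhs = solve(z0, t0, t1, omega)             # z(t1; z0, t0, omega)
rhs = solve(z0, 0.0, t1 - t0, shifted)     # z(t1 - t0; z0, 0, theta^{t0} omega)
print(lhs, rhs, abs(lhs - rhs))            # the two values agree (up to tolerance)
```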
§.§ Normally hyperbolic invariant manifolds for random dynamical systems
In the sequel we follow <cit.>.
Let Φ(t,ω) be a random dynamical system.
A random set is a mapping
ω∈Ω↦ℳ(ω)⊆ M
assigning to each path ω∈Ω a closed subset ℳ(ω)⊆ M,
such that
ω↦inf_y∈ℳ(ω)‖y-x‖ is measurable for each x ∈ M.
A random invariant manifold is a random manifold ℳ(ω)⊆ M such that
Φ(t,ω)(ℳ(ω))=ℳ(θ^tω), for all t∈ℝ, ω∈Ω.
A random variable C(ω) is said to be tempered if
lim_t→±∞log C(θ^t ω )/t=0 for a.e. ω∈Ω.
A random invariant manifold Λ(ω) is normally hyperbolic
if for a.e. ω∈Ω and all x∈Λ(ω) there exists an invariant splitting of T_xM, which is C^0 in x and measurable in ω,
T_xM=T_xΛ(ω)⊕ E^s_x(ω)⊕ E^u_x(ω),
whose bundles are invariant in the sense
D_xΦ(t,ω)(T_xΛ(ω))= T_Φ(t,ω)(x)Λ(θ^tω) ,
D_xΦ(t,ω)(E^s_x (ω))= E^s_Φ(t,ω)(x)(θ^tω),
D_xΦ(t,ω)(E^u_x (ω))= E^u_Φ(t,ω)(x)(θ^tω),
and there exist a tempered random variable C(x,ω)>0 and (θ, Φ)-invariant random variables (rates)
0<α (x,ω)<β(x,ω)
such that for all x∈Λ(ω) we have
v∈ E^s_x(ω)⇒‖D_xΦ(t,ω)v‖< C(x,ω)e^-β(x,ω)t‖v‖, for t≥ 0,
v∈ E^u_x(ω)⇒‖D_xΦ(t,ω)v‖< C(x,ω)e^β(x,ω)t‖v‖, for t≤ 0,
v∈ T_xΛ(ω)⇒‖D_xΦ(t,ω)v‖< C(x,ω)e^α(x,ω)|t|‖v‖, for all t.
When the objects in Definition <ref> do not depend on the random parameter ω, we obtain the classical definition of a NHIM as in <cit.>; a brief summary of the normal hyperbolicity theory can be found in <cit.>.
We will use the following results from <cit.> on the persistence of the NHIM under random perturbations in Section <ref>.
Assume that Φ_0(t) is a (deterministic) C^r flow on M, r ≥ 1. Assume that Φ_0(t) has a compact, connected
C^r normally hyperbolic invariant manifold Λ⊆ M. Let the positive exponents
related to the normal hyperbolicity of Λ be 0<α < β, which are
constant and deterministic.
Then there exists ε>0 such that for any random
flow Φ(t, ω) on M which is C^1 in x, if
‖Φ(t, ω)- Φ_0(t)‖_C^1 < ε, for t ∈ [0, 1], ω∈Ω
then
(i) Persistence: Φ(t, ω) has a normally hyperbolic random invariant manifold
Λ(ω) which is C^1 in x;
(ii) Smoothness: If ℓ<min{β/α,r}, then Λ(ω) is C^ℓ in x, and is
diffeomorphic to Λ for a.e. ω∈Ω;
(iii) Existence of stable and unstable manifolds: Φ(t, ω) has stable and unstable manifolds W^s(Λ(ω)) and W^u(Λ(ω)) that are C^ℓ-1 and depend measurably on ω;
(iv) Existence of stable and unstable foliations: The stable manifold W^s(Λ(ω)) is foliated by
an equivariant family of C^r stable leaves
W^s(Λ(ω))=⋃_x∈Λ(ω) W^s(x,ω),
and the unstable manifold W^u(Λ(ω)) is foliated by
an equivariant family of C^r unstable leaves
W^u(Λ(ω))=⋃_x∈Λ(ω) W^u(x,ω),
both depending measurably on ω.
Condition (<ref>) implies that the perturbed flow and the unperturbed flow stay -close for all time, since we can
re-initialize the time and update the random variable at the end of the time-interval [0,1].
Also note that we can replace the time domain [0,1] with any closed interval.
The perturbed NHIM, and its stable and unstable manifolds, can be described as graphs over the unperturbed ones, respectively.
There exists a smooth parametrization k_ω:Λ_0→Λ(ω), depending on ω in a measurable fashion, such that Λ(ω)=k_ω(Λ_0).
Given a system of coordinates on the unperturbed manifold Λ_0, we can transport it through k_ω to obtain a system of coordinates on Λ(ω). We will use this fact in Section <ref>.
The stable and unstable manifolds W^s(Λ(ω)), W^u(Λ(ω)), given in Theorem <ref>, have the following asymptotic properties:
x ∈ W^s(Λ(ω)) ⇒ ∃ x^+ ∈Λ(ω) s.t. d(Φ(t,ω)(x),Φ(t,ω)(x^+))→ 0 as t→+∞,
x ∈ W^u(Λ(ω)) ⇒ ∃ x^-∈Λ(ω) s.t. d(Φ(t,ω)(x),Φ(t,ω)(x^-))→ 0 as t→ -∞,
where the point x^±∈Λ(ω) is uniquely defined by x.
Then, we respectively have
x ∈ W^s(Λ(ω)) ⇒ x ∈ W^s(x^+,ω),
x ∈ W^u(Λ(ω)) ⇒ x ∈ W^u(x^-,ω).
§ PRELIMINARY RESULTS
§.§ Sub-linearity of the noise
We assume that the stochastic process η satisfies (<ref>), (<ref>), (<ref>).
For almost every realization ω∈Ω of η we have
lim_s→+∞|ω(s)|/s=0,
that is, ω(s)=o(s).
Therefore, there exist A_ω >0 depending on ω and B>0 that can be chosen independent of ω, such that
|ω(s) |≤ A_ω+Bs for all s≥ 0.
See Fig. <ref>
Since η(s) is a continuous stationary Gaussian process with E[η(s)^2] = 1, it follows from <cit.> that for almost every realization ω we have
lim sup_s→ +∞|ω(s)|/√(2log(s))≤ 1.
This implies that
lim_s→+∞|ω(s)|/s=0.
Since for almost every ω∈Ω, ω is continuous in s, then for any given B>0 there exists T_ω>0 such that |ω(s)|/s<B for all s≥ T_ω.
Let A_ω>sup{|ω(s)| | s∈ [0,T_ω]}. Then |ω(s)|≤ A_ω+Bs for all s≥ 0.
Note that A_ω depends on ω, while B does not; moreover, B>0 can be chosen arbitrarily small.
The meaning of Lemma <ref> is that the graph of |ω(s)| for s≥ 0 is below the line s↦ A_ω+Bs, where B is a fixed slope independent of the path ω, and A_ω is the vertical intercept of the line and depends on the path. The slope B can be chosen arbitrarily close to 0, and is fixed once and for all.
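For a sampled realization, the constant A_ω of Lemma <ref> can be estimated over a finite window as the smallest A with |ω(s)| ≤ A + Bs on the sample; the sketch below does this for a surrogate path and a fixed small slope B. This is only a finite-window approximation, since the lemma concerns all s ≥ 0.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 300
lam = rng.normal(size=K)
psi = rng.uniform(0, 2*np.pi, K)

def omega(s):
    # surrogate sample path evaluated on an array of times s
    return np.sqrt(2.0/K) * np.cos(s[:, None]*lam + psi).sum(axis=1)

B = 0.05                                    # fixed small slope, path-independent
s = np.linspace(0.0, 1000.0, 20001)
A_omega = np.max(np.abs(omega(s)) - B*s)    # smallest A with |omega(s)| <= A + B*s
print("estimated A_omega over the sampled window:", A_omega)
```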
We define some relevant sets and describe relations among them.
A consequence of Lemma <ref> is that for a.e. ω∈Ω, there exist B>0 and A=A_ω>0 such that
|ω(s) |≤ A_ω+B|s| for all s∈ℝ.
Without any loss of generality, by disregarding a measure zero set of paths, we can assume that this property is true for all ω∈Ω.
For any A>0 fixed, define the set:
Ω_A={ω∈Ω | |ω(s) |≤ A+B|s|,∀ s∈ℝ}.
Since s↦ |ω(s)| is a continuous function, the set Ω_A is a measurable subset of Ω, and
A_1≤ A_2 ⟹ Ω_A_1⊆Ω_A_2.
Since ⋃_A>0Ω_A=Ω, from the continuity from below of the measure ℙ,
we have that lim_A→∞ℙ(Ω_A)=ℙ(Ω)=1, therefore
∀δ>0, ∃ A_δ>0 s.t. ℙ(Ω_A_δ)>1-δ.
Note that if ω∈Ω_A and t≠ 0, it does not follow that θ^tω∈Ω_A.
It is also possible that ω∉Ω_A and for some t≠ 0 we have θ^tω∈Ω_A.
That is to say that the sets Ω_A are not invariant under the shift θ^t. See Fig. <ref>.
On the other hand, given any ω (up to a measure zero set of paths in Ω), the Birkhoff ergodic theorem implies that there exists a bi-infinite sequence of times {t_n}_n∈ℤ such that
θ^t_nω∈Ω_A, for all n ∈ℤ.
See <cit.>.
That is, for a.e. path ω, the orbit of ω under the metric dynamical system θ^t:Ω→Ω visits Ω_A infinitely often.
For any A>0 and ω∈Ω fixed, define the set
Q_A(ω)= {t∈ℝ | θ^t ω∈Ω_A }
= {t∈ℝ | |ω(t+s) |≤ A+B|s|,∀ s∈ℝ}.
For any T>0, A>0 and ω∈Ω fixed, we now define the following subset of Q_A(ω)
Q_A,T(ω)= {t∈ [-T,T] | θ^t ω∈Ω_A }
= {t∈ [-T,T] | |ω(t+s) |≤ A+B|s|,∀ s∈ℝ}.
For δ>0, and A_δ as in (<ref>), we denote the set in (<ref>) corresponding to A_δ by
Q_A_δ,T(ω).
We have the following monotonicity property:
T_1≤ T_2 ⟹ Q_A,T_1(ω)⊆ Q_A,T_2(ω).
We now recall <cit.>.
For any δ > 0 there exists a random variable T_δ(ω)
such that
T>T_δ(ω) ⇒ m(Q_A_δ,T(ω))>2(1-δ)T,
where m denotes the Lebesgue measure on ℝ.
Let χ_A be the characteristic function of a set A.
We have:
m({t∈(0,T) | θ^tω∈Ω_A_δ})= ∫_0^Tχ_Ω_A_δ(θ^sω)ds.
Using the ergodicity and stationarity of η(t), as well as (<ref>), we obtain
lim_T→∞1/T∫_0^Tχ_Ω_A_δ(θ^sω)ds=
E[χ_Ω_A_δ(ω)]=ℙ(Ω_A_δ)>1-δ,
A similar result holds when we take the limit as T→-∞.
Combining the two results concludes the proof.
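The sketch below estimates the fraction m(Q_A,T(ω))/(2T) for one surrogate path by testing the defining inequality on a finite grid of shifts s, a finite-window stand-in for the condition over all s ∈ ℝ. For A large enough the estimated fraction is close to 1, in line with the lemma; all numerical parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 200
lam = rng.normal(size=K)
psi = rng.uniform(0, 2*np.pi, K)

du = 0.1
u = np.arange(-250.0, 250.0 + du/2, du)        # grid on which the path is sampled
w = np.sqrt(2.0/K) * np.cos(u[:, None]*lam + psi).sum(axis=1)

B, A, T = 0.05, 3.5, 50.0
S = 200.0                                      # finite stand-in for "all s in R"
s = np.arange(-S, S + du/2, du)
ns = len(s)

t_vals = np.arange(-T, T + du/2, 0.5)
hits = 0
for t in t_vals:
    i0 = int(round((t - S - u[0]) / du))       # index of the grid point u = t - S
    seg = w[i0:i0 + ns]                        # omega(t + s) on the s-grid
    if np.all(np.abs(seg) <= A + B*np.abs(s)):
        hits += 1
print("estimated m(Q_{A,T}) / (2T) ~", hits / len(t_vals))   # close to 1 for large A
```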
§.§ Construction of a random bump function
For each t∈ℝ, and ω∈Ω and A>0 we define the following sets
C_A(ω)={s∈ℝ | |ω(s)|≤ A+B|s|}
which is a closed set in ℝ, and for ρ>0 small,
U_ A,ρ(ω)=⋃_s∈ C_A(ω) (s-ρ,s+ρ)
which is a ρ-neighborhood of C_A(ω).
Let
F_ A,ρ(ω)= ℝ∖ U_ A,ρ(ω)
which is a closed set in ℝ. See Fig. <ref>.
It is clear that:
C_ A(ω)⊆ U_ A,ρ(ω) and d_H(C_ A(ω), F_ A,ρ(ω))=ρ,
where d_H refers to the Hausdorff distance.
If s'∈Cl(U_ A,ρ(ω)), then there exists s∈ C_A(ω) such that |s-s'|≤ρ, therefore,
by the Hölder property (<ref>) of ω(t), we have
|ω(s)-ω(s')|≤ C_H|s-s'|^α≤ C_H ρ^α,
hence
|ω(s')|≤ |ω(s')-ω(s)|+|ω(s)|≤ C_H·ρ^α+A+B|s|
≤ A'_ρ+B|s|,
where A'_ρ:=C_H ·ρ^α+A.
Note that for t∈ℝ we obviously have
C_A(θ^tω)= {s∈ℝ | |θ^t ω(s)|≤ A+B|s|}
= {s∈ℝ | |ω(t+s)|≤ A+B|s|},
Cl(U_ A,ρ(ω))⊆ {s ∈ℝ | |θ^t ω(s)|≤ A'_ρ +B|s|}
⊆ {s∈ℝ | |ω(t+s)|≤ A'_ρ +B|s|}.
We construct random bump function ψ_A:ℝ→ℝ with ψ_A(s,ω) being a measurable function in ω for each s fixed, and a ^∞ function in s for each ω fixed, such that
ψ_A,ρ(·,ω):ℝ→[0,1],
ψ_A,ρ(s,ω)= 1 for s∈ C_A(ω), and ψ_A,ρ(s,ω)= 0 for s∈ F_A,ρ(ω),
‖D_sψ_A,ρ(s,ω)‖_C^0≤1/ρ,
where D_s denotes the derivative with respect to s.
The latter condition comes from the fact that the slope of a line that goes from 0 to 1 over an interval of length at least ρ is at most
1/ρ. Therefore the upper bound on the first derivative of a bump function that is equal to 1 on C_A(ω) and is supported on the closure of the ρ-neighborhood U_A,ρ(ω) of C_A(ω) is at most 1/ρ. Recipes to construct such bump functions can be found in <cit.>.
Therefore,
ψ_A,ρ(s,ω)= 1 if |ω(s)|≤ A+B|s|, and ψ_A,ρ(s,ω)= 0 if |ω(s)|≥ A'_ρ+B|s|.
Consequently, for t∈ℝ, we have
ψ_A,ρ(s,θ^tω)= 1 if |ω(t+s)|≤ A+B|s|, and ψ_A,ρ(s,θ^tω)= 0 if |ω(t+s)|≥ A'_ρ+B|s|.
Note that if t∈ Q_A(ω) then |ω(t+s)|≤ A+B|s| for all s, hence ψ_A,ρ(·,θ^tω)≡ 1.
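A discretised version of this construction: the sketch below builds a smooth cutoff equal to 1 on a closed set C and vanishing outside its ρ-neighbourhood by convolving the indicator of the ρ/2-neighbourhood of C with a C^∞ mollifier supported in (-ρ/2, ρ/2). The example set C and all numerical parameters are hypothetical, and the resulting derivative bound is of order 1/ρ rather than exactly 1/ρ.

```python
import numpy as np

rho, ds = 0.5, 1e-3
s = np.arange(-10.0, 10.0, ds)

def dist_to_interval(x, a, b):
    # distance from the points x to the interval [a, b]
    return np.maximum.reduce([a - x, x - b, np.zeros_like(x)])

# hypothetical closed set C = [-6, -2] U [1, 4]
dist_C = np.minimum(dist_to_interval(s, -6.0, -2.0),
                    dist_to_interval(s, 1.0, 4.0))

ind = (dist_C <= rho/2).astype(float)        # indicator of the rho/2-neighbourhood

# C-infinity mollifier supported strictly inside (-rho/2, rho/2), step ds
u = np.linspace(-rho/2, rho/2, int(rho/ds) + 1)[1:-1]
m = np.exp(-1.0 / (1.0 - (2*u/rho)**2))
m /= m.sum()*ds                              # normalise to unit mass

psi = np.convolve(ind, m, mode="same")*ds    # smooth cutoff
print("psi on C          :", psi[dist_C == 0.0].min())              # = 1
print("psi outside U_rho :", psi[dist_C >= rho].max())              # ~ 0
print("max |psi'| * rho  :", np.abs(np.gradient(psi, ds)).max()*rho)  # O(1)
```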
§ GEOMETRIC STRUCTURES OF THE UNPERTURBED SYSTEM
§.§ Coordinate system for the unperturbed rotator-pendulum system
We introduce a new coordinates system defined in a neighborhood of the homoclinic orbit (p_0(t),q_0(t)) of the pendulum
in the (p,q)-phase space, as we describe below.
Choose some fixed reference value q_*≠ 0 of the position coordinate q of the pendulum.
The first coordinate of a point (p,q) is
the pendulum energy level P(p,q)=1/2p^2+V(q) on which the point lies.
The second coordinate represents the time it takes for the solution (p(τ),q(τ)) to go from q_* to (p,q) along the energy level P corresponding to (p,q). Note that for a given initial position q_* the corresponding initial momentum p_* is uniquely determined, up to sign, by the energy condition P(p_*,q_*)=P(p,q).
In order for the coordinate change (p,q)→ (P,τ) to be well defined, we restrict to a neighborhood 𝒩 of the homoclinic orbit {p_0(t),q_0(t)} that does not contain any critical point of P, of the form
𝒩={(p,q) | P_1<P<P_2, q_1<q<q_2}
for some P_1<0<P_2 and 0<q_1<q_*<q_2< 1.
The coordinate change (p,q)↦ (P,τ) is canonical, i.e., dp∧ dq=dP∧ dτ for (p,q)∈𝒩; see <cit.>, also <cit.>.
While the above coordinate change is only defined on 𝒩, the energy of the pendulum P as a function of (p,q) is well defined at all points (p,q).
For the rotator-pendulum system we have the canonical coordinates (I,ϕ,P,τ) for (p,q)∈𝒩.
In these coordinates the Hamiltonian H_0 is given by
H_0(I,ϕ,P,τ)=h_0(I)+P.
In Section <ref> below, we will consider the extended phase space, by considering time as an additional phase-space coordinate.
In this case we have the system of coordinates (I,ϕ,P,τ,σ) for (p,q)∈𝒩, where σ is the additional time coordinate.
§.§ Normally hyperbolic invariant manifold for the unperturbed rotator-pendulum system
Consider the unperturbed rotator-pendulum system given by H_0.
The point (0,0) is a hyperbolic fixed point for the pendulum, the characteristic
exponents are β=(-V”(0))^1/2>0, -β=-(-V”(0))^1/2<0,
and the corresponding unstable/stable eigenspaces are E^u=Span(v^u),
E^s=Span(v^s), where, in the (p,q) coordinates,
v^u=((-V”(0))^1/2,1), v^s=(-(-V”(0))^1/2,1).
Also, define
E^u_z= {z}×Span(v^u),
E^s_z= {z}×Span(v^s).
It immediately follows that for each closed interval [a,b]⊆, the set
Λ_0={(I,ϕ, p,q) | I∈ [a,b], p=q=0}
is a NHIM with boundary, where
the unstable and stable spaces E^u_z and
E^s_z at z∈Λ_0 are given by (<ref>), respectively, and
the rates that appear in the definition of a NHIM are given by
β for E^u, -β for E^s,
and α=0 for TΛ_0.
The stable and unstable manifolds of Λ_0 are denoted by W^s(Λ_0) and W^u(Λ_0),
respectively. They are 3-dimensional manifolds, and W^u(Λ_0)=W^s(Λ_0).
Relative to the (I,ϕ,P,τ) coordinates they can be locally written as graphs over the (I,ϕ,τ) variables.
See, e.g., <cit.>.
§.§ Extended phase space
The system (<ref>) is non-autonomous. We transform it into an autonomous system by making the time into an additional dependent variable (or additional phase space coordinate) , and denoting the independent variable by t:
d/dtz
= X^0(z)+ X^1(z,η()),
d/dt= 1.
We denote by Φ̃^t_ the flow for the (autonomous) extended system and =(z,).
The solution with (t_0)=(z_0,t_0) is given by
(z_(t;z_0,t_0,ω),t)=(Φ^t-t_0_(z_0,θ^t_0ω),t),
where the flow Φ_ associated to (<ref>) is defined as in (<ref>).
It is easy to see that when t=t_0 we have (z(t_0;z_0,t_0,ω),t) =(Φ^0_(z_0,θ^t_0ω),t_0)=(z_0,t_0), as expected.
The extended flow of (<ref>) is defined as
Φ̃^t_(z_0,t_0, ω)=(z(t;z_0,0,ω),t+t_0)=(Φ^t_(z_0,ω), t_0+t).
The extended flow for the unperturbed system is
Φ̃^t_0(z_0,t_0)=(z(t;z_0,0),t+t_0)=(Φ^t_0(z_0), t_0+t).
For >0, the expression of the perturbed flow Φ̃^t_ in the time-component is the same as in the unperturbed case when =0.
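A minimal numerical sketch of this extension (with a stand-in scalar system and a deterministic stand-in for the sample path ω, both our own choices): the non-autonomous field becomes autonomous once time is adjoined as a coordinate, and the flow in that last component is just a shift.

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
X0 = lambda z: -z                      # stand-in autonomous part
X1 = lambda z, w: np.sin(z) * w        # stand-in perturbation times the path value
omega = lambda s: np.cos(2.0 * s)      # deterministic stand-in for a sample path

def extended_rhs(t, y):
    z, time_coord = y                  # time viewed as an extra phase-space coordinate
    return [X0(z) + eps * X1(z, omega(time_coord)), 1.0]

z0, t0 = 1.0, 2.5
sol = solve_ivp(extended_rhs, [0.0, 3.0], [z0, t0], rtol=1e-9)
print("z =", sol.y[0, -1], " time-coordinate =", sol.y[1, -1])   # last component equals t0 + t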
§ GEOMETRIC STRUCTURES OF THE PERTURBED SYSTEM
From now on, we will use the following notation convention: t=physical time, =time as an additional coordinate in the extended space, s=dummy variable.
For any given ω, consider the set
Q_A,T(ω)={t_0∈[-T,T] | |θ^t_0ω(s)|≤ A+B|s|, ∀ s∈ℝ}.
If t_0∈ Q_A,T(ω) then θ^t_0ω∈Ω_A and so |ω(t_0+s)|≤ A+B|s| for all s.
Equivalently, C_A (θ^t_0ω)=ℝ.
If t_0∈ℝ∖ Q_A,T(ω) we have the following possibilities:
* if s∈ C_A(θ^t_0ω) then |ω(t_0+s)|≤ A+B|s|;
* if s∈Cl(U_A,ρ(θ^t_0ω )) then |ω(t_0+s)|≤ A'_ρ+B|s|;
* if |ω(t_0+s)|≥ A'_ρ+B|s| then s∈ F_A,ρ(θ^t_0ω ).
The corresponding bump function (<ref>) is ψ_A,ρ(s,θ^t_0ω). We have
ψ_A,ρ(s,θ^t_0ω)={[ 1, for s∈ C_A(θ^t_0ω);; 0, for s∈ F_A,ρ(θ^t_0ω ). ].
In particular, for t_0=0 we have
ψ_A,ρ(s, ω)={[ 1, for s∈ C_A(ω);; 0, for s∈ F_A,ρ(ω ). ].
Now, we modify the vector field X^1_η(z,) by multiplying the Hamiltonian function by the random bump function (,ω)↦ψ_A(,ω) defined on the extended space:
X̂^1_ω(z,)=J∇(ψ_A,ρ(,ω)H_1(z))ω() .
(Note that in the notation for the bump function we switched from the dummy variable s to the time-coordinate .)
We have
X̂^1_ω(z,)= {[ X^1_ω(z,), for ∈ C_A(ω),; 0, for ∈ F_A,ρ(ω ). ].
The modified system is
d/dtẑ _= J∇ H_0(ẑ_)+J∇(ψ_A(,ω)H_1(_))ω() ,
d/dt= 1 .
For fixed ω and t_0∈ℝ, by (<ref>) and (<ref>), the solution of (<ref>) satisfies
_ (t;z_0,t_0,ω)= _(t-t_0;z_0,0,θ^t_0ω)= π_z[Φ̂^t-t_0_(z_0,0,θ^t_0ω)]
=z_0+∫_0^t-t_0[ J∇ H_0(_(s))+J∇(ψ_A,ρ(s,θ^t_0ω)H_1(_(s)))θ^t_0ω(s)] ds.
If we denote the elapsed time t-t_0=t', we have
π_z[Φ̂^t'_(z_0,0,θ^t_0ω)]=z_0+∫_0^t' [ J∇ H_0(_(s)).
.+J∇(ψ_A,ρ(s,θ^t_0ω)H_1(_(s)))ω(t_0+s)] ds.
For t_0∈ Q_A,T(ω), the solution Φ̂^t'_(z,θ^t_0ω) of the modified system (<ref>) coincides with the solution Φ̃^t'_(z,θ^t_0ω) of the original system
(<ref>).
§.§ Persistence of the NHIM
The result below will be used to prove Theorem <ref>.
The first part of the result says that the flow of the modified perturbed system is close to the flow of the unperturbed system.
The second part of the result says that for the distinguished set of times, the modified perturbed flow coincides with the original perturbed flow.
Assume that the system (<ref>) satisfies (P-i), (P-ii), (R-i), (R-ii), (R-iii) (but not necessarily (H1)).
Fix δ>0, ρ>0, and k∈(0,1).
Then, there exists _0>0 such that, for every with 0<<_0, all (z_0,t_0), and a.e. ω∈Ω, for the modified system we have
Φ̂^t_(z_0,t_0,ω)-Φ̂^t_0(z_0,t_0)_C^1≤ C^1-c for t∈ [0,1].
In particular, there exist a positive random variable T_δ(ω), a closed set Q_A_δ,T_δ(ω)⊆ [-T_δ, T_δ] satisfying (<ref>) and (<ref>),
and _0>0 such that, for every with 0<<_0, all (z_0,t_0), a.e. ω∈Ω, and every t_0∈ Q_A_δ,T_δ(ω), the solution of the modified system and the solution for the extended system coincide, i.e.,
Φ̂^t_(z_0,t_0,ω)=Φ̃^t_(z_0,t_0,ω) for all t∈ℝ,
and therefore
Φ̃^t_(z_0,t_0,ω)-Φ̃^t_0(z_0,t_0)_C^1≤ C^1-c for t∈ [0,1].
Consider the set Q_A_δ,T_δ(ω).
To prove (<ref>) it is sufficient to show that the modified, perturbed flow and the unperturbed flow, when we shift the origin of time at t_0, are ^1-close, that is
π_z[Φ̂^t-t_0_(z_0,0,θ^t_0ω)]-π_z[Φ̃^t-t_0_0(z_0,0)] _C^1≤ C^1-c for t∈ [0,1].
To simplify notation, we substitute t-t_0↦ t and write
_(t)=π_z[ Φ̂^t_(z_0,0,θ^t_0ω)] and z_0(t)= π_z[ Φ̃^t_0(z_0,0) ].
We write the solution z_0(t) of the unperturbed system and the solution _(t) of
of the perturbed, modified system (<ref>) in integral form as in (<ref>) (with t' replaced by t)
z_0(t)= z_0+∫_0^t J∇ H_0(z_0(s))ds
_(t)= z_0+∫_0^t [J∇ H_0(_(s))+ J∇ (ψ_A_δ,ρ (s,θ^t_0ω)H_1(_(s)))ω(t_0+s)]ds .
By subtraction we obtain
_(t)-z_0(t)≤ ∫_0^t |J∇ H_0(_(s))-J∇ H_0(z_0(s))|ds
+∫_0^t |J∇ (ψ_A_δ,ρ (s,θ^t_0ω)H_1(_(s)))| |θ^t_0ω(s)|ds .
Restricting z to some suitable, compact domain, we let K_1 be the Lipschitz constant for J∇ H_0,
and K_2 be the supremum of J∇ (ψ_A_δ,ρ H_1) (recall that H_1 is uniformly ^2
and 0≤ _A_δ,ρ≤ 1). Since ψ_A_δ,ρ (s,θ^t_0)=0 whenever |ω(t_0+s)|>A'_δ,ρ+B|s|, we then have
|J∇ (ψ_A_δ,ρ (s,θ^t_0ω) H_1(_(s)))| |ω(t_0+s)| ≤ K_2 (A'_δ,ρ+B s ),
where we denote A'_δ,ρ:= C_H·ρ^α+A_δ.
From (<ref>) we infer
_(t)-z_0(t)≤ K_1 ∫_0^t_(s)- z_0(s) ds
+ K_2∫_0^t ( A'_δ,ρ+B s)ds .
Hence (<ref>) yields
_(t)-z_0(t)≤ K_1 ∫_0^t_(s)- z_0(s) ds + K_2 ∫_0^t (A'_δ,ρ +B s) ds
= K_1 ∫_0^t_(s)- z_0(s) ds+ K_2 (A'_δ,ρ t +B/2t ^2).
Applying Gronwall's Inequality – I <ref> for δ_0=0, δ_1= K_2 A'_δ,ρ, δ_2= K_2 B/2 and δ_3=K_1, we obtain
_(t)-z_0(t)≤ ( δ_0+δ_1 t+ δ_2 t^2 ) e^δ_3 t
= ( K_2 A'_δ,ρ t+ K_2 B/2 t^2 ) e^K_1 t
= ( A̅_δ,ρ t+ B̅ t^2 ) e^K_1 t,
where A̅_δ,ρ=K_2 A'_δ,ρ and B̅=K_2 B/2.
Fix 0<k<1 and 0<k'<1-k. For 0≤ t≤k/K_1ln( 1/)=1/K_1ln( (1/)^k) we have
_(t)-z_0(t)≤ (A̅_δ,ρk/K_1ln(1/)+ B̅k^2/K^2_1(ln(1/))^2 ) (1/)^k
= ^1-k-k'·^k'(A̅_δ,ρk/K_1ln(1/)+ B̅k^2/K^2_1(ln(1/))^2 ).
lim_→ 0^k'ln(1/)=0 and lim_→ 0^k'(ln(1/))^2=0.
From (<ref>) and the fact k/K_1ln( 1/) →∞ as → 0, there exist _0 >0 and C^0_δ,ρ>0 and such that for all 0<<_0 we have
[0,1]⊂ [0, (k/K_1)ln( 1/)],
and
^k'(A̅_δ,ρk/K_1ln(1/)+ B̅k^2/K^2_1(ln(1/))^2 )<C^0_δ,ρ.
Therefore, denoting c_0=k+k', we obtain
_(t)-z_0(t)≤ C^0_δ,ρ^1-c_0, for t∈[0,1].
We have only showed that _(t) and z_0(t) are C^0-close. Now we make a similar argument to show that _(t) and z_0(t) are ^1-close.
Denote
d/dzπ_z[Φ̃^t_0(z_0,0)]:= ξ_0(t)
d/dzπ_z[Φ̂^t_(z_0,0,θ^t_0ω)]:= ξ̂_(t).
Then ξ_0(z) and ξ_(z) satisfy the variational equations
ξ̇_0(t)= DJ∇ H_0(z_0(t))ξ_0(t)
ξ̇̂̇_(t) = DJ∇ H_0(z_(t))ξ_(t)+DJ∇(ψ_A(t,;θ^t_0ω)H_1(_(t)))ξ_(t)ω(t),
where the derivation D is with respect to z.
Then
ξ̂_(t)-ξ_0(t)≤ ∫_0^tDJ∇ H_0(z_(s))ξ̂_(s)-DJ∇ H_0(z_0(t))ξ̂_0(s)ds
+∫_0^tDJ∇(ψ_A_δ,ρ(s;θ^t_0ω)H_1(_(s)))ξ̂_(s) |ω(t_0+s)|ds.
Restricting z to some suitable, compact domain, let K'_1 be the Lipschitz constant for DJ∇ H_0, K'_2 such that DJ∇(ψ_A_δ,ρ(s;θ^t_0ω)H_1(_(s)))<K'_2 (recall that H_1 is uniformly ^2 and ψ_A_δ,ρ is uniformly ^1), and K'_3>0 such that ξ_0(t)<K'_3.
Therefore:
ξ̂_(t)-ξ_0(t)≤ K'_1∫_0^tξ̂_(s)-ξ_0(s)ds
+ K'_2∫_0^tξ̂_(s) |ω(t_0+s)|ds
≤ K'_1∫_0^tξ̂_(s)-ξ_0(s)ds
+ K'_2∫_0^tξ̂_(s)-ξ_0(s) |ω(t_0+s)|ds
+ K'_2∫_t_0^tξ_0(s) |ω(t_0+s)|ds
≤ K'_1∫_0^tξ̂_(s)-ξ_0(s)ds
+ K'_2∫_0^tξ̂_(s)-ξ_0(s) (A'_δ,ρ+Bs)ds
+ K'_2 ∫_t_0^tξ̂_0(s) (A'_δ,ρ+Bs) ds
≤ ∫_0^t (K'_1+ K'_2A'_δ,ρ + K'_2 B s)ξ̂_(s)-ξ_0(s)ds
+ K'_2K'_3 (A'_δ,ρ t+B/2t^2)
Applying Gronwall's Inequality – II <ref> for δ_0=0, δ_1= K'_2K'_3 A'_δ,ρ, δ_2=K'_2K'_3 B/2,
δ_3=K'_1+ K'_2A'_δ,ρ and δ_4= K'_2B, we obtain
ξ̂_(t)-ξ_0(t)≤ ( K'_2K'_3 A'_δ,ρ t+K'_2K'_3 B/2t^2 )e^[( K'_1+ K'_2A'_δ,ρ)t+K'_2K'_3 B/2t^2]
= (A̅_δ,ρt+B̅t^2) e^[C̅t+(D̅_δ,ρt+E̅t^2)],
where A̅_δ,ρ =K'_2K'_3 A'_δ,ρ, B̅=K'_2K'_3 B/2, C̅=K'_1, D̅_δ,ρ=K'_2 A'_δ,ρ, and E̅=K'_2K'_3 B/2.
Fix 0<k<1 and 0<k'<1-k.
For 0≤ t ≤k/C̅ln(1/), we have
(D̅_δ,ρ t+E̅t^2)≤(D̅_δ,ρk/C̅ln(1/)+E̅k^2/C̅^2 (ln(1/))^2)→ 0 as → 0
due to (<ref>).
Therefore, there exists F̅_δ,ρ >0 such that, if 0<<_0, for sufficiently small _0, we obtain
e^(D̅_δ,ρ t+E̅t^2)≤F̅_δ,ρ .
Using again (<ref>), we obtain that there exist _0>0, C^1_δ,ρ>0 such that for 0<<_0
ξ̂_(t)-ξ_0(t)≤ ^1-k(A̅_δ,ρk/K_1ln(1/)+ B̅k^2/K^2_1(ln(1/))^2 )F̅_δ
= ^1-k-k'·^k'(A̅_δ,ρk/K_1ln(1/)+ B̅k^2/K^2_1(ln(1/))^2 )F̅_δ,ρ
≤ C^1_δ,ρ^1-c_1,
where c_1=k+k'.
We obtain
ξ̂_(t)-ξ_0(t)≤ C^1_δ,ρ^1-c_1, for t∈[0,1].
Combining (<ref>) and (<ref>) we obtain
_(t)-z_0(t)_^1≤ C_δ,ρ^1-c, for t∈[0,1],
where C_δ,ρ =max{C^0_δ,ρ ,C^1_δ,ρ} and c=max{c_0,c_1}.
Since k,k' are arbitrary, we can choose in fact any c∈(0,1).
We can use Proposition <ref> to prove the first main result of the paper.
Let
Λ̃_0={(I,ϕ,p,q,t_0) | I∈[a,b], ϕ∈𝕋^1, p=q=0, t_0∈ℝ}
be the NHIM of the extended, unperturbed system.
Consider the perturbed, modified system given by (<ref>).
Proposition <ref> implies that the flow of the perturbed, modified system and the flow of the unperturbed systems are C^1-c close
to one another in ^1, for all smaller than some _0. We choose _0 small enough so that
C_0^1-c<,
where is the smallness parameter that appears in (<ref>), in the statement of Theorem <ref>.
Applying Theorem <ref>, we obtain the existence of the normally hyperbolic manifold Λ̂_(ω) and its stable and unstable manifolds, satisfying the desired properties.
We note that, although Λ̃_0 is not compact, the result on the persistence of the NHIM still applies since the perturbation has uniformly bounded derivatives (see <cit.>).
By Lemma <ref>, for a.e. ω∈Ω, the set Q_A_δ,T_δ(ω), consisting of the times t_0 for which θ^t_0(ω)∈Ω_A_δ has measure at least 2(1-δ)T_δ.
For t_0∈Ω_A_δ,T, the solution of (<ref>) with (ẑ_(t_0),(t_0))=(z_0,t_0) coincides with the solution of
(<ref>) with the same initial condition.
Thus, for t_0∈ Q_A_δ,T_δ(ω), we obtain that the normally hyperbolic manifold Λ̂_(θ^t_0(ω)) for the modified system represents a normally hyperbolic manifold Λ_(θ^t_0(ω)) for the original system.
The same statement holds for its stable and unstable manifolds, which satisfy the desired properties.
We have noted that the set Ω_A_δ,T_δ⊆Ω is not closed under θ^t, therefore the equivariance property of the NHIM and on its stable and unstable manifolds is restricted to those t_0,t_1∈ℝ such that t_0,t_0+t_1∈ Q_A_δ,T_δ(ω).
Even though the normally hyperbolic manifold exist for those paths
θ^t_0ω which are in Q_A_δ,T_δ(ω),
the initial path ω is arbitrary. Starting with an arbitrary path ω, there exits a large measure set of t_0 for which θ^t_0ω∈ Q_A_δ,T_δ(ω).
§ EXISTENCE OF TRANSVERSE HOMOCLINIC INTERSECTIONS
§.§ Distance between stable and unstable manifolds
The unperturbed stable and
unstable manifolds, W^(_0) and W^(_0) in the extended space
coincide along the homoclinic manifold, which is given in the coordinates (I,ϕ,P,τ,) defined in Section <ref>, by
{(I, ϕ,P,τ,) | P=0}.
Define the section:
Σ_t_0= { (I, ϕ,P,τ,) | =t_0 }.
For t_0∈ Q_A_δ,T_δ(ω) and _0 sufficiently small, the perturbed invariant manifolds for the original system W^
(Λ_(θ^t_0ω)), W^(Λ_(θ^t_0ω)) exist in Σ_t_0,
and they are C^1-close to W^(Λ_0), W^(Λ_0), respectively. Moreover, the invariant manifold W^(Λ_(θ^t_0ω)) can be represented as a graph P^=P^(I,ϕ,τ,t_0) over the variables (I,ϕ,τ), and, similarly, the invariant manifold W^(_(θ^t_0ω)) can be written as a graph P^=P^(I,ϕ,τ,t_0), where =t_0 is fixed.
See Fig. <ref>.
The next result says that we can express the distance between the perturbed stable and unstable manifolds as a Melnikov-type integral.
We will use this to find crossings of the stable and unstable manifolds as zeros of the Melnikov integral.
Let t_0∈ Q_A_δ,T_δ(ω) and _0 sufficiently small. Consider a point ^_∈ W^(Λ_(θ^t_0ω)) given by P^(^_)=P^(I,ϕ,τ,t_0), and a point
^_∈ W^(Λ_(θ^t_0ω)) given by P^(^_)=P^(I,ϕ,τ,t_0), for the same coordinates (I,ϕ,τ,t_0).
Then
P (^_) -P(^_)
=-∫_-∞^∞{P,H_1}(I,ϕ+ν(I) s,p_0(τ+s),q_0(τ+s),t_0+s) ω(t_0+s) ds
+O(^1+ρ)
for some ρ∈(0,1).
Suppose that _0=(I,ϕ,p_0(τ),q_0(τ),t_0) is a homoclinic point for ^t_0.
Then the stable and unstable foot-points are both given by _0^± =(I,ϕ,0,0,t_0), as
the stable foot-point and the unstable foot-point coincide in the unperturbed case.
Hence, in (<ref>), (I,ϕ+ν(I) s,p_0(τ+s),q_0(τ+s),t_0+s )
represents the effect of the unperturbed flow ^s_0 on the homoclinic point _0, and
(I,ϕ+ν(I) s,0,0,t_0+s ) represents the effect of the unperturbed flow ^s_0 on the foot-point _0^±.
Since d(^s(_0), ^s(^±_0))→ 0 exponentially fast as s→±∞,
{P,H_1} (I,ϕ+ν(I) s,p_0(τ+s),q_0(τ+s),t_0+s)
-{P,H_1}(I,ϕ+ν(I) s,0,0,t_0+s ) → 0
exponentially fast as s→±∞.
Note that {P,H_1}=V'(q)∂ H_1/∂ p-p∂ H_1/∂ q vanishes at p=q=0.
Since t_0∈ Q_A_δ,T_δ, |ω(t_0+s)|<A_δ+B|s| for all s, so we obtain
{P,H_1}(I,ϕ+ν(I)t,p(τ+s),q(τ+s))ω(t_0+s)→ 0
exponentially fast as s→∞.
Thus, the improper integral in (<ref>) is absolutely convergent.
For ^_∈ W^_(Λ_(θ^t_0ω)) let Ω^_(^_) be the foot-point of the unique stable fiber of W^(Λ_(θ^t_0ω)) through ^_ .
Using the fundamental theorem of calculus and
P(^_) - P(Ω^_(^_))
= P( ^T_(^_)) - P(^T_Ω_^(^_) )
-∫_0^T d/d s[ P( ^s_(^_)) -
P( ^s_Ω_^(^_) ) ] d s
= P( ^T_(^_)) - P(^T_Ω_^(^_) )
-∫_0^T [ (J∇ H_1)P( ^s_(^_)) -
(J∇ H_1)P( ^s_ω_^(^_) )] ω(t_0+s) d s
The vector field (J∇ H_1) is thought of as derivation, and so (J∇ H_1)P is the corresponding directional derivative of P.
Hence
(J∇ H_1)P(^s_())= -[∂ P/∂ p∂ H_1/∂ q+
∂ P/∂ q∂ H_1 /∂ p](^s_())ω(t_0+s)
= {P,H_1}(^s_())ω(t_0+s),
where {·,·} denotes the Poisson bracket.
Letting T→+∞, since [P( ^T_(^_)) - P(^T_Ω_^(^_) )]ω(t_0+s)→ 0, we obtain
P(^_) - P(Ω^_(^_))
=
-∫_0^+∞[ {P,H_1}( ^s_(^_)) -
{P,H_1}( ^s_Ω_^(^_) )] ω(t_0+s) d s.
We split the integral on the right-hand side of the above into two:
I= -∫_0^klog(1/)( {P,H_1}( ^s_(^_)) -
{P,H_1}( ^s_Ω_^(^_) ) ) ω(t_0+s) d s,
II= -∫_klog(1/)^∞({P,H_1}( ^s_(^_)) -
{P,H_1}( ^s_Ω_^(^_) ) )ω(t_0+s) d s,
for some k>0.
For the second integral, since
[{P,H_1}( ^s_(^_)) -
{P,H_1}( ^s_Ω_^(^_) )]ω(t_0+s)→ 0
exponentially fast, for any k>0 we have that II=O_C^1(^ϱ_2) for some ϱ_2>0.
For the first integral, by applying the Gronwall Inequality – III <ref> and choosing k>0 sufficiently small, we can replace the terms depending on the perturbed flow by corresponding terms depending on the unperturbed flow, while making an error of order O_C^1(^ϱ_1), for some ϱ_1∈(0,1), obtaining
I= -∫_0^klog(1/)[{P,H_1}( ^s_0(^_0)) -
{P,H_1}( ^s_0Ω_0^(^_0 ))] ω(t_0+s) d s
+O_C^1(^1+ϱ_1).
Since also
[{P,H_1}( ^s_0(^_0)) -
{P,H_1}]( ^s_0 Ω_0^(^_0) )]ω(t_0+s)→ 0
exponentially fast, we
can replace the above integral from 0 to klog(1/) by the improper integral from 0 to +∞, by making an error of order O(^1+ρ_2).
Thus,
I= -∫_0^+∞[ {P,H_1}( ^s_0(^_0)) -
{P,H_1}( ^s_0Ω_0^(^_0 ) ] ω(t_0+s) d s
+O_C^1(^1+ϱ),
where ϱ=min{ϱ_1,ϱ_2}.
Combining I and II we obtain
P(^_) - P(Ω^_(^_))
= -∫_0^+∞[ {P,H_1}( ^s_0(^_0)) -
{P,H_1} ]( ^s_0 Ω_0^(^_0) ) ] ω(t_0+s) d s
+O_C^1(^1+ρ).
Using that _0=(I,ϕ,p_0(τ),q_0(τ),t_0) and _0^± =(I,ϕ,0,0,t_0), we have
P(^_) - P(Ω^_(^_))
= -∫_0^+∞{P,H_1} (I,ϕ+ν(I) s, p(τ+s), q(τ+s)) ω(t_0+s) d s
+O_C^1(^1+ρ).
A similar computation for a point ^_ in W^(Λ_(θ^t_0ω)),
yields
P(^_) - P(Ω^_(^_))
= -∫_-∞^+∞{P,H_1} (I,ϕ+ν(I) s, p(τ+s), q(τ+s)) ω(t_0+s) d s
+O_C^1(^1+ρ),
where Ω^_(^_) be the foot-point of the unique unstable fiber of W^_(Λ_(θ^t_0ω)) through ^_.
Subtracting (<ref>) from (<ref>) yields
P(^_) - P(^_)= P(Ω^_(^_))-P(Ω^_(^_))
-∫_-∞^+∞{P,H_1} (I,ϕ+ν(I) s, p(τ+s), q(τ+s)) ω(t_0+s) d s
+O(^1+ρ).
The points Ω^_(^_) and Ω^_(^_) are in
Λ_(θ^t_0ω).
Since P has a critical point at (0,0) and P(Ω^_0(_0))=P(Ω^_0(_0))
it follows that
P(Ω^,_( ^,_))
- P(Ω^,_0(_0))_C^1≤ C^2,
and therefore
P(Ω^_(^_)) -P(Ω^_(^_))_C^1=O(^2).
Therefore, (<ref>) immediately implies (<ref>), which concludes the proof.
§.§ Time invariance
Consider the Melnikov function that appears on the right-hand side of Proposition <ref>.
(I,ϕ,τ,t_0)↦ M^P( I,ϕ, τ, t_0 )
= ∫_-∞^∞{P,H_1}(I,ϕ+ν(I) s,p_0(τ+ s ),q_0(τ+ s)) ω(t_0+s) d s
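Purely as an illustration, the integral above can be approximated by truncating the improper integral, which is justified by the exponential decay of the integrand in s; the kernel and the path below are stand-ins of our own choosing.

import numpy as np

def melnikov(kernel, path, t0, S=30.0, n=20001):
    # kernel(s) stands in for {P,H_1}(I, phi + nu(I)s, p_0(tau+s), q_0(tau+s))
    # path(u) stands in for the sample path omega(u)
    s = np.linspace(-S, S, n)
    return float(np.sum(kernel(s) * path(t0 + s)) * (s[1] - s[0]))

kernel = lambda s: np.tanh(s) / np.cosh(s)                  # decays like e^{-|s|}
path = lambda u: np.cos(0.7 * u) + 0.3 * np.sin(1.9 * u)
for t0 in (0.0, 1.0, 2.0):
    print(t0, melnikov(kernel, path, t0))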
The following is immediate:
Let t_0∈ Q_A_δ,T_δ.
If the mapping
τ↦ M^P(I,ϕ,τ,t_0)
has a non-degenerate zero τ^*=τ^*(I,ϕ,t_0), then there exists 0<_1<_0 such that
τ↦ P(^_) -P(^_)
has a non-degenerate zero τ^*_=τ^*(I,ϕ,t_0)+O_C^1(^1+ϱ) for all 0<<_1, which in turn implies the existence of a transverse homoclinic intersection of W^(Λ_(θ^t_0ω)) and W^(Λ_(θ^t_0ω)) for all 0<<_1.
We will show that W^(Λ_(θ^t_0ω)) and W^(Λ_(θ^t_0ω)) intersect transversally in Section <ref>.
Towards this goal, we start by making the following key observation:
The argument in the proof of Proposition <ref> shows that the mapping
(I,ϕ,τ,)↦ M^P(I,ϕ,τ,)
is well-defined for all ∈ℝ, and
not only for ∈ Q_A_δ,T_δ(ω).
Indeed, for any ∈ℝ, by Lemma <ref> we have
|ω(+s)| ≤ A_θ^ω+B|s| for all s.
Therefore, for ω and fixed, the integrand in (<ref>) is exponentially convergent to 0 as s↦±∞.
(i)
For any ς∈ℝ we have
M^P(I,ϕ,τ, )=M^P(I,ϕ+ν(I)ς,τ+ς, +ς ).
(ii)If the mapping
τ↦ M^P(I,ϕ,τ, ),
has a non-degenerate zero at τ^*(I,ϕ,),
then for any ς∈ℝ, the mapping
τ↦ M^P(I,ϕ+ν(I)ς,τ, +ς )
has a non-degenerate zero τ^*(I,ϕ+ν(I)ς, +ς ), and
τ^*(I,ϕ+ν(I)ς, +ς )=τ^*(I,ϕ, )+ς.
(iii) If the mapping
↦ M^P(I,ϕ,τ, ),
has a non-degenerate zero at t^*(I,ϕ,τ),
then for any ς∈ℝ, the mapping
↦ M^P(I,ϕ+ν(I)ς,τ+ς, )
has a non-degenerate zero ^*(I,ϕ+ν(I)ς,τ+ς), and
^*(I,ϕ+ν(I)ς,τ+ς)=t^*(I,ϕ,τ)+ς.
In (<ref>) make the change of variable s↦ς+s, obtaining
∫_-∞^∞ {P,H_1}(I,ϕ+ν(I) s,p_0(τ+s),q_0(τ+s)) ω(+s) d s
= ∫_-∞^∞{P,H_1}(I,ϕ+ν(I)ς+ν(I) s,p_0(τ+ς+s),q_0(τ+ς+s)) ω(+ς+s ) d s.
Thus, if τ^*(I,ϕ,) is a non-degenerate zero of the first integral, then τ^*(I,ϕ,)+ς is a non-degenerate zero for the second integral,
and hence τ^*(I,ϕ,)+ς=τ^*(I,ϕ+ν(I)ς,+ς).
Similarly, if ^*(I,ϕ,τ) is a non-degenerate zero for the first integral, then ^*+ς= ^*(I,ϕ,τ)+ς is non-degenerate zero for the second integral.
§.§ Existence of transverse homoclinic intersections
Let ℱ:ℝ→ℝ be a C^1-function with the property that ℱ(s)→ 0 and Dℱ(s)→ 0 exponentially fast as s→±∞, where D(·) denotes the derivative of a function.
Let
M()=∫_-∞^∞ℱ(s)θ^ω(s) ds.
Then the process M() is a stationary Gaussian process with expectation
E[M()] =0
and autocorrelation function
ρ(h)= E[M() M(+h)]
= ∫_-∞^∞∫_-∞^∞ℱ(s_1) ℱ(s_2)r(s_2-s_1+h)ds_1ds_2,
where r is the autocorrelation function of η.
The integral is independent of the time argument, as the process is stationary.
The proof is similar to <cit.>.
Since η() is a stationary Gaussian process with mean 0, M() is also a stationary Gaussian process with mean 0.
The variance (zeroth spectral moment) of M() is given by
χ_0=ρ(0)=∫_-∞^∞∫_-∞^∞ℱ(s_1)ℱ(s_2)r(s_2-s_1) ds_1ds_2.
Performing a change of variable (s_1,s_2-s_1)↦ (,s) we can write
χ_0=ρ(0)=∫_-∞^∞∫_-∞^∞ℱ()ℱ(+s)r(s) dtds.
By definition, the autocorrelation function of M() is given by
ρ(h):=E[M(+h)M()].
Taking the second derivative with respect to of
ρ(h)=∫_-∞^∞∫_-∞^∞ℱ(s_1)ℱ(s_2)r(s_2-s_1+h) ds_1 ds_2
and applying integration by parts twice we obtain
d^2/dh^2ρ(h)=-∫_-∞^∞∫_-∞^∞Dℱ(s_1)Dℱ(s_2)r(s_2-s_1+h) ds_1ds_2,
where Dℱ(s_1)=∂ℱ/∂ s_1 and Dℱ(s_2)=∂ℱ/∂ s_2.
For integration by parts we have used that lim_s→±∞ℱ(s)= 0 exponentially fast together with its derivative, and that r is a bounded function together with its derivative.
Performing a change of variable (s_1,s_2-s_1)↦ (,s) and setting h=0 we obtain that the second spectral moment of M is given by
χ_2=-d^2/dh^2ρ(h)_| h=0=∫_-∞^∞∫_-∞^∞ Dℱ()Dℱ(+s)r(s) dt ds.
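The two double integrals can be approximated directly by quadrature on a grid; in the sketch below the kernel ℱ, its derivative, and the autocorrelation r are stand-ins chosen by us.

import numpy as np

def spectral_moments(F, dF, r, S=20.0, n=801):
    t = np.linspace(-S, S, n)
    s = np.linspace(-S, S, n)
    T, Sg = np.meshgrid(t, s, indexing="ij")
    w = (t[1] - t[0]) * (s[1] - s[0])                      # quadrature weight
    chi0 = float(np.sum(F(T) * F(T + Sg) * r(Sg)) * w)     # zeroth spectral moment
    chi2 = float(np.sum(dF(T) * dF(T + Sg) * r(Sg)) * w)   # second spectral moment
    return chi0, chi2

F = lambda x: x * np.exp(-x**2)                            # stand-in kernel
dF = lambda x: (1.0 - 2.0 * x**2) * np.exp(-x**2)
r = lambda x: np.exp(-x**2 / 2.0)                          # stand-in autocorrelation of eta
print(spectral_moments(F, dF, r))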
Denote
𝒫(I,ϕ,τ,s)= {P,H_1}(I,ϕ+ν(I)s,p_0(τ+s),q_0(τ+s)).
When ( I,ϕ,τ) are fixed and we only want to emphasize the dependence on s we denote the above function by 𝒫(s).
Note that 𝒫(s) converges to 0 exponentially fast together with its derivative as s→±∞.
Define the Melnikov stochastic process
∈ℝ↦ M^P():= ∫_-∞ ^+∞𝒫(s)θ^ω(s) d s.
The Melnikov process (<ref>) is a stationary Gaussian process with expectation
E[M^P()] =0
and autocorrelation function
ρ(h)= E[M^P() M^P(+h)]
= ∫_-∞^∞∫_-∞^∞𝒫(s_1) 𝒫(s_2)r(s_2-s_1+h)ds_1ds_2,
with the right-hand side independent of the time argument, as the process is stationary.
The zeroth spectral moments is:
χ^P_0 =ρ(0)=∫_-∞^∞∫_-∞^∞𝒫(s_1) 𝒫(s_1+s)r(s)ds_1 ds
and the second spectral moment is:
χ^P_2=-(d^2/d h^2ρ (h) )_| h=0=∫_-∞^∞∫_-∞^∞ D𝒫(s_1) D𝒫(s_1+s)r(s) ds_1 ds.
It follows immediately from Lemma <ref>.
We will make the following assumption on the spectral moments of M^P:
(SMP) χ^P_0>0 and χ^P_2>0.
Assume condition (<ref>).
Fix ( I^*,ϕ^*,τ^*). Then the mapping
∈ℝ↦ M^P( I^*,ϕ^*, τ^*,)
has a non-degenerate zero ^*∈ℝ.
Applying the Rice's Formula <cit.> to the Melnikov process
↦ M^P(),
the number N_T of zeros of M^P() on the interval [0,T] has expectation
E[N_T]=T/π√(χ^P_2/χ^P_0)e^- E[M^P()]^2/2χ^P_0=T/π√(χ^P_2/χ^P_0),
and M^P() has almost surely no tangential zeroes. This yields the desired conclusion.
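Rice's formula can be sanity-checked numerically for a stationary Gaussian process with known spectral moments; the sketch below (entirely illustrative, with frequencies and variances chosen by us) compares the empirical number of zeros on [0,T] with (T/π)√(χ_2/χ_0).

import numpy as np

rng = np.random.default_rng(0)

# stationary Gaussian process as a finite random Fourier sum:
# M(t) = sum_k a_k cos(w_k t) + b_k sin(w_k t), with a_k, b_k ~ N(0, var_k)
w = np.array([0.5, 1.0, 2.0, 3.5])
var = np.array([1.0, 0.8, 0.5, 0.2])
chi0 = var.sum()                      # zeroth spectral moment
chi2 = (var * w**2).sum()             # second spectral moment

T, dt, trials = 100.0, 1e-3, 100
t = np.arange(0.0, T, dt)
counts = []
for _ in range(trials):
    a = rng.normal(0.0, np.sqrt(var))
    b = rng.normal(0.0, np.sqrt(var))
    M = (a[:, None] * np.cos(np.outer(w, t)) + b[:, None] * np.sin(np.outer(w, t))).sum(axis=0)
    counts.append(np.count_nonzero(np.diff(np.sign(M)) != 0))

print("empirical mean number of zeros:", np.mean(counts))
print("Rice prediction:", T / np.pi * np.sqrt(chi2 / chi0))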
Let ^*=^*(I^*,ϕ^*,τ^*) be a non-degenerate zero from Proposition <ref>, for some I^*,ϕ^*,τ^* fixed.
Then τ^* is a zero of
τ↦ M^P(I^*,ϕ^*,τ,^*).
Moreover, from Proposition <ref>, (<ref>) and (<ref>), it follows that τ^* is a non-degenerate zero.
Indeed, the fact that ^* is a non-degenerate zero implies that there are ς>0 arbitrarily close to 0 such that one of the following holds:
M(I,ϕ+ν(I)(-ς), τ^*-ς ,t^*_0-ς)<0 < M(I,ϕ+ν(I) ς , τ^*+ς,t^*_0+ς),
M(I,ϕ+ν(I)(-ς), τ^*-ς ,t^*_0-ς)>0 > M(I,ϕ+ν(I) ς , τ^*+ς,t^*_0+ς).
This implies that τ^* is a non-degenerate zero of
τ↦ M^P(I^*,ϕ^*,τ,^*).
Therefore, τ^* is (locally) uniquely defined by I^*,ϕ^*,^*, so we can write τ^*=τ^*(I^*,ϕ^*,^*).
If t_0:=^*∈ Q_A_δ,T_δ(ω), then
W^(Λ_(θ^t_0ω)) and W^(Λ_(θ^t_0ω)) are well defined. By Lemma <ref>, it follows that they intersect transversally for <_1.
If ^*∉Q_A_δ,T_δ(ω) then take ς^*∈ℝ such that t_0:=^*+ς^*∈ Q_A_δ,T_δ(ω). Such a ς^* always exists due to Proposition <ref>. By Proposition <ref> we have that
τ^*(I^*,ϕ^*+ν(I^*)ς,t^*_0+ς^*)=τ^* +ς^* is a non-degenerate zero of
τ↦ M^P(I^*,ϕ^*+ν(I^*)ς,τ^*,^*+ς^*).
This implies that W^(Λ_(θ^^*+ς^*ω)) and W^(Λ_(θ^^*+ς^*ω)) intersect transversally for all 0<<_1 for some _1<_0 sufficiently small.
Recall that τ corresponds to a `position' along the homoclinic orbit of the unperturbed system.
The typical way to use Melnikov theory to show existence of transverse intersection of the perturbed stable and unstable manifolds is to vary the position τ until we reach a place where the distance between the manifolds is 0.
We emphasize that in our argument above – Proposition <ref> – we first fix the position τ. Then we show that there is a time t when the distance between the manifolds is 0 at that fixed position. In other words, we wait until the noise pushes the manifolds to cross one another.
An extra complication is that the perturbed stable and unstable manifolds are not well-defined for all times. We have to adjust t to a time where the perturbed stable and unstable manifolds are well-defined. To achieve this, we also adjust the location τ.
For this, we use the time invariance of the Melnikov integral – Proposition <ref>.
Theorem (<ref>) gives us a homoclinic point at the transverse intersection between
W^ (Λ_(θ^t_0(ω)) and W^ (Λ_(θ^t_0(ω)), for some t_0∈ Q_A_δ,T_δ(ω).
Consider the homoclinic orbit ^t_(_).
Unlike the deterministic case, we cannot guarantee that ^t_( )∈ W^ (Λ_(θ^t_0+t(ω))∩ W^ (Λ_(θ^t_0+t(ω)), since t_0+t may not be in Q_A_δ,T_δ(ω), and so the stable and unstable manifolds corresponding
to θ^t_0+t(ω) are not guaranteed to exist (as graphs).
Nevertheless, the homoclinic orbit ^t_() is asymptotic to the normally hyperbolic invariant manifold in both forward and backwards times, as in the deterministic case. This is given by the following:
Let t_0∈ Q_A_δ,T_δ(ω).
If ∈ W^ (Λ_(θ^t_0(ω)) then there exists a unique point ^+∈Λ_(θ^t_0(ω)) such that d(^t_(),^t_(^+))→ 0 as t→ +∞.
Assuming condition (<ref>), it follows that ^t_() approaches _0 as t→ +∞.
Similarly, if ∈ W^ (Λ_(θ^t_0(ω)) then there exists a unique point ^-∈Λ_(θ^t_0(ω)) such that d(^t_(),^t_(^-))→ 0 as t→ -∞. Assuming condition (<ref>), it follows that ^t_() approaches Λ_0 as t→ -∞.
If ∈ W^ (Λ_(θ^t_0(ω)) then ^t_() is on the stable manifold of Λ_(θ^t_0+t(ω)) for the modified system (<ref>).
This implies that d(^t_(),^t_(^+))→ 0 as t→+∞.
Since for our system Λ_(θ^t_0(ω))=Λ_0 for all t and ω, it follows that ^t_() approaches Λ_0 as t→ +∞.
A similar argument holds for the unstable manifold.
§ EXISTENCE OF ORBITS THAT INCREASE IN ACTION
§.§ Random scattering map
In this section we adapt the theory of the scattering map developed in <cit.> for the case of random perturbations.
Our construction is very similar with the time-dependent scattering theory for general vector fields developed in <cit.>.
The most significant difference is that in our case the scattering map depends on the realization of the stochastic process in a measurable fashion.
Let t_0∈ Q_A_δ,T_δ(ω).
For a point ∈ W^(Λ_(θ^t_0ω)) (resp. ∈ W_Λ^(Λ_(θ^t_0ω))), we
denote by ^+ (resp. ^-) the unique point in Λ_(θ^t_0ω) which satisfies ∈ W^(^+,θ^t_0ω) (resp. ∈ W^(^-,θ^t_0ω)). The stable and unstable fibers referred above are given by Theorem <ref>.
Then the canonical projections
Ω^+_(·,θ^t_0ω) : W^(Λ_(θ^t_0ω)) →Λ_(θ^t_0ω),
Ω^+_(, θ^t_0ω )= ^+,
Ω^-_ (·, θ^t_0ω ): W^(Λ_(θ^t_0ω)) →Λ_(θ^t_0ω),
Ω^-_(,θ^t_0ω )= ^-.
are well defined ^ℓ-1 maps in and measurable in ω.
Now, assume there is a homoclinic manifold
Γ_(θ^t_0ω) ⊂ W^(Λ_(θ^t_0ω))∩ W^(Λ_(θ^t_0ω))
satisfying the following conditions:
T_ M = T_ W^(Λ_(θ^t_0ω))+ T_ W^(Λ_(θ^t_0ω)),
T_ W^(Λ_(θ^t_0ω)) ∩ T_ W^(Λ_(θ^t_0ω)) = T_Γ_(θ^t_0ω),
T_Γ_(θ^t_0ω) ⊕ T_ W^(^+,θ^t_0ω) = T_ W^(Λ_(θ^t_0ω)),
T_Γ_(θ^t_0ω) ⊕ T_ W^(^-,θ^t_0ω) = T_ W^(Λ_(θ^t_0ω)),
for all ∈Γ_(θ^t_0ω).
The first two conditions in (<ref>) say that W^(Λ_(θ^t_0ω)) and W^(Λ_(θ^t_0ω)) intersect transversally
along Γ_(θ^t_0ω),
and the last two conditions say that Γ_(θ^t_0ω) is transverse to the stable and unstable foliations.
We now consider the canonical projections Ω_ ^±(·,θ^t_0ω)
(<ref>) restricted to Γ_(θ^t_0ω).
Under the assumption (<ref>) we have that Γ_(θ^t_0ω) is ^ℓ-1 and that Ω_^±(·,θ^t_0ω) are ^ℓ-1 local diffeomorphisms from Γ_(θ^t_0ω) to Λ_(θ^t_0ω).
Let us further assume that Γ_(θ^t_0ω) is a homoclinic channel, that is,
Ω^±_ (·,θ^t_0ω):Γ_(θ^t_0ω)
→ U_^± (θ^t_0ω):= Ω_ ^± (Γ_(θ^t_0ω),θ^t_0ω)
is
a C^ℓ-1-diffeomorphism.
The random scattering map associated to Γ_(θ^t_0ω) is defined as
σ_(·,θ^t_0ω): U_^-(θ^t_0ω) → U^+_(θ^t_0ω),
given by
σ_(·,θ^t_0ω)=Ω^+_(·,θ^t_0ω) ∘( Ω^-_(·,θ^t_0ω) )^-1.
For each fixed path ω, the scattering map is ^ℓ-1, and it depends on ω in a measurable fashion.
When there is no dependence on a random variable, Definition <ref> translates into the definition of the scattering map in the deterministic case
<cit.>.
§.§ The scattering map for the unperturbed pendulum-rotator system
For the unperturbed system the definition of the scattering map from above translates into the standard definition of the scattering map as in <cit.>.
Since we have W^(Λ_0)=W^(Λ_0) and for each z∈Λ_0, W^(z)=W^(z), the corresponding scattering map σ_0 is defined on the whole Λ_0 as the identity map. Thus, σ_0(z^-)=z^+ implies z^-=z^+.
Expressed in terms of the action-angle coordinates (I,ϕ) of the rotator, we have
σ_0(I,ϕ)=(I,ϕ).
In the next section, we provide a formula to estimate the effect of the scattering map on the action of the rotator.
§.§ Change in action by the scattering map
Denote
ℐ(I,ϕ,τ, s)= {I,H_1}(I,ϕ+ν(I) s,p_0(τ+ s ),q_0(τ+ s))
-{I,H_1}(I,ϕ+ν(I) s,0,0)
and, for (I,ϕ,τ) fixed, let
↦ M^I()= ∫_-∞^∞ℐ(I,ϕ,τ,s) θ^ω(s) d s .
The stochastic process
↦ M^I()
is a stationary Gaussian process with mean
E[M^I()]=0,
and autocorrelation
ρ^I(σ):=E[M^I(τ+σ)M^I(τ)]
given by
ρ^I(h)=∫_-∞^∞∫_-∞^∞ℐ(s_1)ℐ(s_2)r(s_2-s_1+h) ds_1ds_2.
Hence M^I() is ergodic.
It follows immediately by applying Lemma <ref> to s↦ℐ(I,ϕ,τ,s).
Define
χ^I_0=ρ^I(0) and χ^I_2=-d^2/dh^2_| h=0ρ^I(h).
We will make the following assumption on the spectral moments of M^I:
(SMI) χ^I_0>0 and χ^I_2>0.
Assume the condition (<ref>). Let v>0.
Then the number of times N_T the process M^I() crosses the value v from above as well as from below
on the interval [0,T] has expectation
E[N_T]=T/π√(χ^I_2/χ^I_0)exp(-(v-E[M^I()])^2/2χ^I_0)=
T/π√(χ^I_2/χ^I_0)exp(- v ^2/2χ^I_0).
Apply Rice's formula (see <cit.>).
Suppose t_0∈ Q_A_δ,T_δ and _∈ W^(Λ_(θ^t_0ω))∩ W^(Λ_(θ^t_0ω)) is a transverse homoclinic point for 0<<_1, as given by Theorem <ref>.
Then there exists a homoclinic channel Γ_ (θ^t_0ω)⊂ W^(Λ_(θ^t_0ω) containing _.
We recall that for Theorem <ref> we assume (<ref>), which means that Λ_(θ^t_0ω)=Λ_0 for t_0∈ Q_A_δ,T_δ. This implies that the perturbed inner dynamics restricted to the RNHIM coincides with the inner dynamics in the unperturbed case and, in particular, that it preserves the action coordinate I along orbits.
A computation similar to that in the proof of Proposition <ref> yields
I(^+_)-I(^-_)
= ∫_-∞^+∞[{I,H_1}(I,ϕ+ν(I) s,p_0(τ^*+ s ),q_0(τ^*+ s))
-{I,H_1}(I,ϕ+ν(I) s,0,0)]ω(t_0+s) d s
+O(^1+ρ)
= M^I(I,ϕ,τ,)+O(^1+ρ),
for 0<ϱ<1; for details, see, e.g., <cit.>.
Following the same computation as in the proof of Proposition <ref>
we obtain the following time invariance relation
M^I(I,ϕ,τ, )=M^I(I,ϕ+ν(I)ς,τ+ς, +ς ).
for all ς∈ℝ.
If we apply the flow ^ς_ to the point _, then, although ^ς_(_) may no longer stay in W^(Λ_(θ^t_0ω))∩ W^(Λ_(θ^t_0ω)), it remains asymptotic in both forward and backward time to Λ_0.
More precisely, we have
d(^t_(^ς_(_),^t_(^ς_(^+_))→ 0 when t→+∞,
d(^t_(^ς_(_),^t_(^ς_(^-_))→ 0 when t→-∞.
We have
I (^ς_(^+_))- I(^ς_(^-_))
= ∫_-∞^+∞[ {I,H_1}(I,ϕ+ν(I) s+ν(I)ς,p_0(τ^*+ s+ς ),q_0(τ^*+ s+ς))
-{I,H_1}(I,ϕ+ν(I) s+ν(I)ς,0,0)]ω(t_0+s+ς) d s
+O(^1+ρ)
= M^I(I,ϕ+ς,τ+ς,t_0+ς)+O_C^1(^1+ρ),
Due to (<ref>), I(^ς_(^+_))=I(^+_) and I(^ς_(^-_))=I(^-_).
By Proposition <ref> there exists ς^* such that t_0+ς^* is a point where the process M^I crosses the prescribed value v.
Then (<ref>) implies
I(^ς_(^+_))-I(^ς_(^-_))
= v+O(^1+ρ).
This concludes the proof.
§ CONCLUSIONS AND FUTURE WORK
To summarize, in this paper we considered a rotator-pendulum system with a random perturbation of special type,
and we proved the persistence of the NHIM and of the stable and unstable manifolds, the existence of transverse homoclinic orbits,
and the existence of orbits that exhibit micro-diffusion in the action.
The perturbation is given by some Hamiltonian vector field that vanishes on the phase space of the rotator, multiplied by unbounded noise.
We work with path-wise solutions under the assumption that the sample paths are Hölder continuous.
The persistence of the NHIM is proved not for all times, but only for a distinguished set of times within some arbitrarily large time interval.
It seems possible that the RNHIMs and their stable and unstable manifolds exist for all times, but may not be of uniform size, or of a size necessary to guarantee the crossing of the stable and unstable manifolds. Further investigation is planned in this regard, including also a generalization to less regular sample paths for the noise.
We also plan to show the existence of Arnold diffusion (rather than micro-diffusion); a foreseeable way is by developing random versions of the lambda lemma and shadowing lemma, and showing that we can find O(1/) pseudo-orbits of the scattering map that we can join together.
§ GRONWALL'S INEQUALITY
A general form of Gronwall’s lemma is stated in Lemma <ref> and it is used to derive the other
three inequalities that are used in the main text.
Let α, β and ϕ be real-valued functions defined on [t_0,+∞). Assume that β and ϕ are continuous and that the negative part of α is integrable on every closed and bounded subinterval of [t_0,+∞).
(i) If β is non-negative and if ϕ satisfies the integral inequality
ϕ(t)≤α(t)+∫_t_0^tβ(s) ϕ(s) ds for t≥ t_0,
then
ϕ(t)≤α(t)+∫_t_0^tα(s)β(s) exp(∫_s^tβ(r) dr ) ds for t≥ t_0.
(ii) If, in addition, the function α is non-decreasing, then
ϕ(t)≤α(t)exp(∫_t_0^tβ(s) ds ) for t≥ t_0.
For a reference, see e.g. <cit.>.
Assume that δ_0,δ_1,δ_2,δ_3>0, t_0≥ 0, and ϕ is a continuous function.
If
ϕ(t)≤δ_0+δ_1 t+δ_2 t^2+∫_t_0^t δ_3ϕ(s) ds, for t≥ t_0,
then
ϕ(t)≤ (δ_0+δ_1 t+δ_2 t^2) e^δ_3 (t-t_0)
< (δ_0+δ_1 t+δ_2 t^2) e^δ_3 t,
for t≥ t_0.
Let α(t)=δ_0+δ_1 t+δ_2 t^2 and β(t)=δ_3, where δ_0,δ_1,δ_2,δ_3>0.
Then α'(t)=δ_1+2δ_2 t≥ 0 for t≥ 0, so α is non-decreasing.
Lemma <ref>-(ii) implies that
ϕ(t)≤ (δ_0+δ_1 t+δ_2 t^2)e^δ_3(t-t_0) for t≥ t_0.
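As a quick numerical illustration of Gronwall's Inequality – I, one can take φ to be the solution of the integral relation with equality, obtained by solving the equivalent ODE φ'=δ_1+2δ_2 t+δ_3φ with φ(t_0)=δ_0+δ_1 t_0+δ_2 t_0^2, and compare it with the stated bound; the constants below are arbitrary choices of ours.

import numpy as np
from scipy.integrate import solve_ivp

d0, d1, d2, d3, t0 = 0.5, 1.0, 0.3, 0.8, 0.0

# equality case of the integral relation, solved as an ODE
rhs = lambda t, y: [d1 + 2.0 * d2 * t + d3 * y[0]]
sol = solve_ivp(rhs, [t0, 5.0], [d0 + d1 * t0 + d2 * t0**2], dense_output=True, rtol=1e-10)

t = np.linspace(t0, 5.0, 200)
phi = sol.sol(t)[0]
bound = (d0 + d1 * t + d2 * t**2) * np.exp(d3 * (t - t0))
print("bound holds everywhere:", bool(np.all(phi <= bound + 1e-8)))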
Assume that δ_0,δ_1,δ_2,δ_3,δ_4>0, t_0≥ 0, and ϕ is a continuous function.
If
ϕ(t)≤δ_0+δ_1 t+δ_2 t^2+∫_t_0^t (δ_3+δ_4 s)ϕ(s) ds, for t≥ t_0,
then
ϕ(t)≤ (δ_0+δ_1 t+δ_2 t^2) e^[(δ_3t+δ_4t^2/2)-(δ_3t_0+δ_4t_0^2/2)]
< (δ_0+δ_1 t+δ_2 t^2) e^(δ_3t+δ_4t^2/2), for t≥ t_0.
Let α(t)=δ_0+δ_1 t+δ_2 t^2 and β(t)=δ_3+δ_4 t, where δ_0,δ_1,δ_2,δ_3,δ_4>0.
Then
Lemma <ref>-(ii) implies that
ϕ(t)≤ (δ_0+δ_1 t+δ_2 t^2)e^[(δ_3 t+δ_4/2t^2)-(δ_3 t_0+δ_4/2t_0^2)] for t≥ t_0.
Let M be an n-dimensional manifold, ^0:M→ TM be vector field on M that is Lipschitz in z∈ M, and ^1:M×ℝ×Ω→ TM a time-dependent vector field on M that is Lipschitz in z∈ M, continuous in t∈ℝ, and measurable in ω∈Ω.
Consider the following differential equations:
ż(t) = ^0(z),
ż(t) = ^0(z)+^1(z, ω(t_0+t)).
Assume:
* ^0 has Lipschitz constant C_3>0;
* For a fixed continuous path ω∈Ω, ^1 satisfies
^1(z, ω(t_0+t))≤ C_1t+C_2,
for some C_1,C_2>0 depending on ω and t_0 and all t≥ 0.
Let z_0 be a solution of the equation (<ref>) and z_ be a solution of the equation (<ref>) such that
z_0(0)-z_(0)<C_0, for some C_0>0 depending on ω.
Then, for 0<ϱ_1<1, k≤1-ρ_1/C_3, there exist _0>0 and K, such that for 0≤<_0 we have
z_0(t)-z_(t)< K^ϱ_1, for 0≤ t ≤ kln(1/).
We have
z_0(t)= z_0(0)+∫_0^t X_0(z_0(s))ds
z_(t)= z_(0)+∫_0^t [ X_0(z_0(s))+ X_1(z_(s),ω (t_0+s))]ds
z_(t)-z_0(t)≤ z_(0) -z_0(0)+∫_0^t X_0(z_(s))-X_0(z_0(s))ds
+ ∫_0^t X_1(z_(s),ω (t_0+s)) ds
≤ C_0+C_3∫_0^t z_(s) - z_0(s)ds
+∫_0^t ( C_1s+C_2 ) ds
≤ C_0+ C_1t+C_2/2t^2 + C_3∫_0^t z_(s) - z_0(s)ds
Applying Gronwall Lemma <ref> for δ_0= C_0, δ_1= C_1, δ_2=C_2/2 and δ_3=C_3,
we obtain
z_(t)-z_0(t)≤ [ C_0+ C_1t+ C_2/2t^2 ]e^C_3 t
For 0≤ t≤ kln (1/) we have
z_(t)-z_0(t)≤ [ C_0+ C_1kln (1/) + C_2/2k^2(ln (1/))^2 ]
e^C_3 kln (1/)
Let k<1-ϱ/C_3, where ϱ∈(0,1). Then
e^C_3 kln (1/)≤ ^-1+ϱ.
From (<ref>) we obtain
z_(t)-z_0(t)≤ ^ϱ [ C_0+ C_1kln (1/) + C_2/2k^2(ln (1/))^2 ]
= ^ϱ_1^ϱ-ϱ_1 [ C_0+ C_1kln (1/) + C_2/2k^2(ln (1/))^2 ]
for 0<ϱ_1<ϱ. Note that ϱ_1 can be chosen arbitrary.
There exists _0 and constants A,B>0 such that, for 0<<_0, we have
^ϱ-ϱ_1ln (1/)<A and ^ϱ-ϱ_1 (ln (1/))^2<B.
Therefore from (<ref>) we obtain that, for the constant
K=[ C_0+ C_1k A + C_2/2k^2 B ]
we have
z_(t)-z_0(t)≤ K^ϱ_1.
|
http://arxiv.org/abs/2409.02335v1 | 20240903234945 | What Do You See in Common? Learning Hierarchical Prototypes over Tree-of-Life to Discover Evolutionary Traits | [
"Harish Babu Manogaran",
"M. Maruf",
"Arka Daw",
"Kazi Sajeed Mehrab",
"Caleb Patrick Charpentier",
"Josef C. Uyeda",
"Wasila Dahdul",
"Matthew J Thompson",
"Elizabeth G Campolongo",
"Kaiya L Provost",
"Paula M. Mabee",
"Hilmar Lapp",
"Anuj Karpatne"
] | cs.CV | [
"cs.CV"
] |
Coaching a Robotic Sonographer: Learning Robotic Ultrasound with Sparse Expert's Feedback
Deepak Raina^*, Mythra V. Balakuntala^*, Byung Wook Kim, Juan Wachs Senior Member, IEEE,
Richard Voyles Fellow, IEEE
This work was supported by National Science Foundation (NSF) USA under Grant #2140612 and Purdue University seed grants for West Lafayette-Indianapolis campuses collaboration. (Corresponding author: Deepak Raina)
Deepak Raina is with the Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218 USA (e-mail: draina1@jh.edu)
Mythra V. Balakuntala is with Nikon Research Corporation of America, Belmont, CA 94002 USA (e-mail: mythra.balakuntala@nikon.com)
Byung Wook Kim is with the Department of Computer Engineering, Purdue University, West Lafayette, IN 47907 USA (e-mail: kim2986@purdue.edu)
Juan Wachs is with the School of Industrial Technology, Purdue University, West Lafayette, IN 47907 USA (e-mail: jpwachs@purdue.edu).
Richard Voyles is with the School of Engineering Technology, Purdue University, West Lafayette, IN 47907 USA (e-mail: rvoyles@purdue.edu)
*DR and MVB primarily conducted this research at Purdue University
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
A grand challenge in biology is to discover evolutionary traits—features of organisms common to a group of species with a shared ancestor in the tree of life (also referred to as phylogenetic tree). With the growing availability of image repositories in biology, there is a tremendous opportunity to discover evolutionary traits directly from images in the form of a hierarchy of prototypes. However, current prototype-based methods are mostly designed to operate over a flat structure of classes and face several challenges in discovering hierarchical prototypes, including the issue of learning over-specific features at internal nodes. To overcome these challenges, we introduce the framework of Hierarchy aligned Commonality through Prototypical Networks (HComPNet).
We empirically show that HComPNet learns prototypes that are accurate, semantically consistent, and generalizable to unseen species in comparison to baselines on birds, butterflies, and fishes datasets.
[The code and datasets are available at <https://github.com/Imageomics/HComPNet>.]
toc
§ INTRODUCTION
A central goal in biology is to discover the observable characteristics of organisms, or traits (e.g., beak color, stripe pattern, and fin curvature), that help in discriminating between species and understanding how organisms evolve and adapt to their environment <cit.>. For example,
discovering traits inherited by a group of species that share a common ancestor on the tree of life (also referred to as the phylogenetic tree, see Figure <ref>) is of great interest to biologists to understand how organisms diversify and evolve <cit.>.
The measurement of such traits with evolutionary signals, termed evolutionary traits, is not straightforward and often relies on subjective and labor-intensive human expertise and definitions <cit.>, hindering rapid scientific advancement <cit.>.
With the growing availability of large-scale image repositories in biology containing millions of images of organisms <cit.>, there is an opportunity for machine learning (ML) methods to discover evolutionary traits automatically from images <cit.>. This is especially true in light of recent advances in the field of explainable ML, such as the seminal work of ProtoPNet <cit.> and its variants <cit.> which find representative patches in training images (termed prototypes) capturing discriminatory features for every class. We can thus cast the problem of discovering evolutionary traits into asking the following question: what image features or prototypes are common across a group of species with a shared ancestor in the tree of life that are absent in species with a different shared ancestor?
For example, in Figure <ref>, we can see that the four species of birds on the left descending from the blue node show the common feature of having “long tails”, unlike any of the descendant species of the red node.
Learning such common features at every internal node as a hierarchy of prototypes can help biologists generate novel hypotheses of species diversification (e.g., the splitting of blue and red nodes) and accumulation of evolutionary trait changes.
Despite the success of ProtoPNet <cit.> and its variants in learning prototypes over a flat structure of classes, applying them to discover a hierarchy of prototypes is challenging for three main reasons. First, existing methods that learn multiple prototypes for every class are prone to learning “over-specific” prototypes at internal nodes of a tree, which cover only one (or a few) of its descendant species. Figure <ref> shows a few examples to illustrate the concept of over-specific prototypes. Consider the problem of learning prototypes common to descendant species of the Felidae family: Lion and Bobcat. If we learn one prototype focusing on the feature of the mane (specific only to Lion) and another prototype focusing on the feature of spotted back (specific only to Bobcat), then these two prototypes taken together can classify all images from the Felidae family. However, they do not represent common features shared between Lion and Bobcat and hence are not useful for discovering evolutionary traits.
Such over-specific prototypes should be instead pushed down to be learned at lower levels of the tree (e.g., the species leaf nodes of Lion and Bobcat).
Second, while existing methods such as ProtoPShare <cit.>, ProtoPool <cit.>, and ProtoTree <cit.> allow prototypes to be shared across classes for re-usability and sparsity, in the problem of discovering evolutionary traits, we want to learn prototypes at an internal node n that are not just shared across all it descendant species but are also absent in the contrasting set of species (i.e., species descending from sibling nodes of n representing alternate paths of diversification).
Third, at higher levels of the tree, finding features that are common across a large number of diverse species is challenging <cit.>. In such cases, we should be able to abstain from finding common prototypes without hampering accuracy at the leaf nodes—a feature missing in existing methods.
To address these challenges, we present Hierarchy aligned Commonality through Prototypical Networks (HComPNet), a framework to learn hierarchical prototypes over the tree of life for discovering evolutionary traits. Here are the main contributions of our work:
* HComPNet learns common traits shared by all descendant species of an internal node and avoids the learning of over-specific prototypes, in contrast to baseline methods, using a novel over-specificity loss.
* HComPNet uses a novel discriminative loss to ensure that the prototypes learned at an internal node are absent in the contrasting set of species with different ancestry.
* HComPNet includes a novel masking module to allow for the exclusion of over-specific prototypes at higher levels of the tree without hampering classification performance.
* We empirically show that HComPNet learns prototypes that are accurate, semantically consistent, and generalizable to unseen species compared to baselines on data from 190 species of birds (CUB-200-2011 dataset) <cit.>, 38 species of fishes <cit.>, and 30 species of butterflies <cit.>.
We show the ability of HComPNet to generate novel hypotheses about evolutionary traits at different levels of the phylogenetic tree of organisms.
§ RELATED WORKS
One of the seminal lines of work in the field of prototype-based interpretability methods is the framework of ProtoPNet <cit.> that learns a set of “prototypical patches” from training images of every class to enable case-based reasoning. Following this work, several variants have been developed, such as ProtoPShare <cit.>, ProtoPool <cit.>, ProtoTree <cit.>, and HPnet <cit.> suiting to different interpretability requirements.
Among all these approaches, our work is closely related to HPnet <cit.>, the hierarchical extension of ProtoPNet that learns a prototype layer for every parent node in the tree. Despite sharing a similar motivation as our work, HPnet is not designed to avoid the learning of over-specific prototypes or to abstain from learning common prototypes at higher levels of the tree.
Another related line of work is the framework of PIPNet <cit.>, which uses self-supervised learning to reduce the “semantic gap” <cit.> between the latent space of prototypes and the space of images,
such that the prototypes in latent space correspond to the same visual concept in the image space. In HComPNet, we build upon the idea of self-supervised learning introduced in PIPNet to learn a semantically consistent hierarchy of prototypes.
Our work is also related to ProtoTree <cit.>, which structures the prototypes as nodes in a decision tree to offer more granular interpretability. However, ProtoTree differs from our work in that it learns the tree-based structure of prototypes automatically from data and cannot handle a known hierarchy. Moreover, the prototypes learned in ProtoTree are purely discriminative and allow for negative reasoning, which is not aligned with our objective of finding common traits of descendant species.
Other related works that focus on finding shared features are ProtoPShare <cit.> and ProtoPool <cit.>.
Both approaches aim to find common features among classes, but their primary goal is to reduce the prototype count by exploiting similarities among classes, leading to a sparser network. This is different from our goal of finding a hierarchy of prototypes to find evolutionary traits common to a group of species (that are absent from other species).
Outside the realm of prototype-based methods, the framework of Phylogeny-guided Neural Networks (PhyloNN) <cit.> shares a similar motivation as our work to discover evolutionary traits by representing biological images in feature spaces structured by tree-based knowledge (i.e., phylogeny). However, PhyloNN primarily focuses on the tasks of image generation and translation rather than interpretability. Additionally, PhyloNN can only work with discretized trees with fixed number of ancestor levels per leaf node, unlike our work that does not require any discretization of the tree.
§ PROPOSED METHODOLOGY
§.§ Model Architecture
Given a phylogenetic tree with N internal nodes, the goal of HComPNet is to jointly learn a set of prototype vectors 𝐏_𝐧
for every internal node n∈{1,…,N}.
Our architecture as shown in Figure <ref> begins with a CNN that acts as a common feature extractor f(x;θ) for all nodes, where θ represents the learnable parameters of f. f converts an image x into a latent representation Z ∈ℝ^H × W × C, where each “patch” at location (h,w) is, 𝐳_𝐡,𝐰∈ℝ^C. Following the feature extractor, for every node n, we initialize a set of K_n prototype vectors 𝐏_𝐧 = {𝐩_𝐢}^K_n_i=1, where 𝐩_𝐢∈ℝ^C. Here, the number of prototypes K_n learned at node n varies in proportion to the number of children of node n, with β as the proportionality constant, i.e., at each node n we assign β prototypes for every child node.
To simplify notations, we drop the subscript n in 𝐏_𝐧 and K_n while discussing the operations occurring in node n.
We consider the following sequence of operations at every node n. We first compute the similarity score between every prototype in 𝐏 and every patch in Z. This results in a matrix Ẑ∈ℝ^H × W × K, where every element represents a similarity score between image patches and prototype vectors. We apply a softmax operation across the K channels of Ẑ such that the vector ẑ_𝐡,𝐰∈ℝ^K at spatial location (h,w) in Ẑ represents the probability that the corresponding patch 𝐳_𝐡,𝐰 is similar to the K prototypes.
Furthermore, the i^th channel of Ẑ serves as a prototype score map for the prototype vector 𝐩_𝐢, indicating the presence of 𝐩_𝐢 in the image. We perform global max-pooling across the spatial dimensions H× W of Ẑ to obtain a vector 𝐠∈ℝ^K, where the i^th element represents the highest similarity score of the prototype vector 𝐩_𝐢 across the entire image.
𝐠 is then fed to a linear classification layer with weights ϕ to produce the final classification scores for every child node of node n. We restrict the connections in the classification layer so that every child node n_c is connected to a distinct set of β prototypes, to ensure that every prototype uniquely maps to a child node. ϕ is restricted to be non-negative to ensure that the classification is done solely through positive reasoning, similar to the approach used in PIP-Net <cit.>. We borrow the regularization scheme of PIP-Net to induce sparsity in ϕ by computing the logit of child node n_c as log((𝐠ϕ)^2 + 1). 𝐠 and ϕ here are again unique to each node.
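A simplified PyTorch-style sketch of the per-node computation described above is given below; it reflects our reading of the text, and the use of cosine similarity as the similarity score, the layer names, and all sizes are assumptions rather than the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeHead(nn.Module):
    def __init__(self, C, num_children, beta=10):
        super().__init__()
        K = beta * num_children
        self.prototypes = nn.Parameter(torch.randn(K, C))              # P_n
        self.classifier = nn.Parameter(torch.rand(K, num_children))    # phi (kept non-negative)
        conn = torch.zeros(K, num_children)                            # each child gets its own beta prototypes
        for c in range(num_children):
            conn[c * beta:(c + 1) * beta, c] = 1.0
        self.register_buffer("conn", conn)

    def forward(self, Z):                                       # Z: (B, C, H, W) from the shared CNN
        B, C, H, W = Z.shape
        z = F.normalize(Z.flatten(2).transpose(1, 2), dim=-1)   # (B, HW, C) patches
        p = F.normalize(self.prototypes, dim=-1)                # (K, C)
        sim = z @ p.t()                                         # patch-prototype similarity
        zhat = sim.softmax(dim=-1)                              # softmax over the K prototypes
        g = zhat.max(dim=1).values                              # global max-pool: (B, K)
        phi = self.classifier.clamp(min=0) * self.conn          # non-negative, restricted connections
        logits = torch.log((g @ phi) ** 2 + 1.0)                # log((g phi)^2 + 1)
        return logits, g, zhat.view(B, H, W, -1)

head = NodeHead(C=64, num_children=3)
logits, g, score_maps = head(torch.randn(2, 64, 7, 7))
print(logits.shape, g.shape, score_maps.shape)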
§.§ Loss Functions Used to Train
Contrastive Losses for Learning Hierarchical Prototypes:
PIP-Net <cit.> introduced the idea of using self-supervised contrastive learning to learn semantically meaningful prototypes.
We build upon this idea in our work to learn semantically meaningful hierarchical prototypes at every node in the tree as follows.
For every input image 𝐱, we pass in two augmentations of the image, 𝐱' and 𝐱” to our framework. The prototype score maps for the two augmentations, Ẑ^' and Ẑ^”, are then considered as positive pairs.
Since ẑ_𝐡,𝐰∈ℝ^K represents the probabilities of patch 𝐳_𝐡,𝐰 being similar to the prototypes from 𝐏, we align the probabilities from the two augmentations ẑ^'_𝐡,𝐰 and ẑ^”_𝐡,𝐰 to be similar using the following alignment loss:
ℒ_A = -1/HW∑_(h,w) ∈ H × Wlog(ẑ_𝐡,𝐰^'·ẑ_𝐡,𝐰^”)
Since ∑_i=1^K ẑ_𝐡,𝐰,𝐢 = 1 due to softmax operation, ℒ_A is minimum (i.e., ℒ_A = 0) when both ẑ_𝐡,𝐰^' and ẑ_𝐡,𝐰^” are identical one-hot encoded vectors. A trivial solution that minimizes ℒ_A is when all patches across all images are similar to the same prototype. To avoid such representation collapse,
we use the following tanh-loss ℒ_T of PIP-Net <cit.>, which serves the same purpose as uniformity losses in <cit.> and <cit.>:
ℒ_T = -1/K∑_i=1^Klog(tanh(∑_b=1^B𝐠_𝐛,𝐢)),
where 𝐠_𝐛,𝐢 is the prototype score for prototype i with respect to image b of mini-batch. ℒ_T encourages each prototype 𝐩_𝐢 to be activated at least once in a given mini-batch of B images, thereby helping to avoid the possibility of
representation collapse. The use of tanh ensures that only the presence of a prototype is taken into account and not its frequency.
Over-specificity Loss:
To achieve the goal of learning prototypes common to all descendant species of an internal node, we introduce a novel loss, termed over-specificity loss ℒ_ovsp that avoids learning over-specific prototypes at any node n.
ℒ_ovsp is formulated as a modification of the tanh-loss such that prototype 𝐩_𝐢 is encouraged to be activated at least once in every one of the descendant species d ∈{1,…,D_i} of its corresponding child node in the mini-batch of images fed to the model, as follows:
ℒ_ovsp = -1/K∑_i=1^K∑_d=1^D_ilog(tanh(∑_b ∈ B_d𝐠_𝐛,𝐢)),
where B_d is the subset of images in the mini-batch that belong to species d.
Discriminative loss:
In order to ensure that a learned prototype for a child node n_c is not activated by any of its contrasting set of species (i.e., species that are descendants of child nodes of n other than n_c), we introduce another novel loss function, ℒ_disc, defined as follows:
ℒ_disc = 1/K∑_i=1^K∑_d ∈D_imax_b ∈ B_d (𝐠_𝐛,𝐢),
where D_i is the contrasting set of all descendant species of child nodes of n other than n_c. This is similar to the separation loss used in other prototype-based methods such as <cit.>, <cit.>, and <cit.>.
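The two hierarchy-specific terms can be sketched as follows (our own rendering; species holds the leaf-species index of every image in the mini-batch, proto_desc[i] lists the leaf species descending from the child node that owns prototype i, and proto_contrast[i] lists its contrasting species).

import torch

def overspecificity_loss(g, species, proto_desc, eps=1e-8):
    # g: (B, K); proto_desc[i]: leaf species under the child node owning prototype i
    loss, K = 0.0, g.shape[1]
    for i in range(K):
        for d in proto_desc[i]:
            in_d = species == d
            if in_d.any():                               # prototype i must fire in every descendant species
                loss = loss - torch.log(torch.tanh(g[in_d, i].sum()) + eps)
    return loss / K

def discriminative_loss(g, species, proto_contrast):
    # prototype i should stay silent on every contrasting species
    loss, K = 0.0, g.shape[1]
    for i in range(K):
        for d in proto_contrast[i]:
            in_d = species == d
            if in_d.any():
                loss = loss + g[in_d, i].max()
    return loss / K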
Orthogonality loss: We also apply kernel orthogonality as introduced in <cit.> to the prototype vectors at every node n, so that the learned prototypes are orthogonal and capture diverse features:
ℒ_orth = 𝐏̂𝐏̂^⊤ - I _F^2
where 𝐏̂ is the matrix of normalized prototype vectors of size C × K, I is an identity matrix, and ._F^2 is the Frobenius norm. Each prototype 𝐩̂_𝐢 in 𝐏̂ is normalized as, 𝐩̂_𝐢 = 𝐩_𝐢/𝐩_𝐢.
Classification loss: Finally, we apply cross-entropy loss for classification at each internal node as follows:
ℒ_CE = -∑_b^B y_b log(ŷ_b)
where y is ground truth label and ŷ is the prediction at every node of the tree.
§.§ Masking Module to Identify Over-specific Prototypes
We employ an additional masking module at every node n to identify over-specific prototypes without hampering their training. The learned mask for prototype 𝐩_𝐢 simply serves as an indicator of whether 𝐩_𝐢 is over-specific or not, enabling our approach to abstain from finding common prototypes if there are none, especially at higher levels of the tree.
To obtain the mask values, we first calculate the over-specificity score for prototype 𝐩_𝐢 as the product of the maximum prototype scores obtained across all images in the mini-batch belonging to every descendant species d as:
𝒪_i = -∏_d=1^D_imax_(b ∈ B_d) (𝐠_𝐛,𝐢)
where 𝐠_𝐛,𝐢 is the prototype score for prototype 𝐩_𝐢 with respect to image b of mini-batch and B_d is the subset of images in the mini-batch that belong to descendant species d. Since 𝐠_𝐛,𝐢 takes a value between 0 to 1 due to the softmax operation, 𝒪_i ranges from -1 to 0, where -1 denotes least over-specificity and 0 denotes the most over-specificity. The multiplication of the prototype scores ensures that even when the score is less with respect to only one descendant species, the prototype will be assigned a high over-specificity score (close to 0).
As shown in Figure <ref>, 𝒪_i is then fed into the masking module, which includes a learned mask value M_i for every prototype 𝐩_𝐢. We generate M_i from a Gumbel-softmax distribution <cit.> so that the values are skewed to be very close to either 0 or 1, i.e., M_i = Gumbel-Softmax(γ_i, τ), where γ_i are the learnable parameters of the distribution and τ is temperature. We then compute the masking loss, ℒ_mask, as:
ℒ_mask = ∑_i=1^K ( λ_mask M_i∘(𝒪_i) + λ_L_1 M_i _1)
where λ_mask and λ_L_1 are trade-off coefficients, ._1 is the L_1 norm added to induce sparsity in the masks, and represents the stop gradient operation applied over 𝒪_i to ensure that the gradient of ℒ_mask does not flow back to the learning of prototype vectors and impact their training.
Note that the learned masks are not used for pruning the prototypes during training, they are only used during inference to determine which of the learned prototypes are over-specific and likely to not represent evolutionary traits. Therefore, even if all the prototypes are identified as over-specific by the masking module at an internal node, it will not affect the classification performance at that node.
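A sketch of the masking module is given below (our own rendering; the Gumbel-softmax is parameterized with two logits per prototype so that M_i is the soft probability of the "keep" category, and the temperature and trade-off values are placeholders).

import torch
import torch.nn.functional as F

class PrototypeMask(torch.nn.Module):
    def __init__(self, K, tau=0.5):
        super().__init__()
        self.gamma = torch.nn.Parameter(torch.zeros(K, 2))   # Gumbel-softmax logits per prototype
        self.tau = tau

    def forward(self, g, species, proto_desc, lam_mask=1.0, lam_l1=0.1):
        K = g.shape[1]
        O = torch.zeros(K)
        with torch.no_grad():                                 # stop-gradient: O only scores the prototypes
            for i in range(K):                                # over-specificity score in [-1, 0]
                scores = [g[species == d, i].max() for d in proto_desc[i] if (species == d).any()]
                if scores:
                    O[i] = -torch.stack(scores).prod()
        M = F.gumbel_softmax(self.gamma, tau=self.tau, hard=False)[:, 0]   # mask value per prototype
        loss = (lam_mask * M * O + lam_l1 * M.abs()).sum()
        return M, loss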
§.§ Training
We first pre-train the prototypes
at every internal node in a self-supervised learning manner using alignment and tanh-losses as
ℒ_SS = λ_Aℒ_A + λ_Tℒ_T.
We then fine-tune the model using the following combined loss: (λ_CEℒ_CE + ℒ_SS + λ_ovspℒ_ovsp + λ_discℒ_disc + λ_orthℒ_orth + ℒ_mask),
where λ's are trade-off parameters. Note that the loss is applied over every node in the tree. We show an ablation of key loss terms in our framework in Table <ref> in the Supplementary Section.
§ EXPERIMENTAL SETUP
Dataset:
In our experiments, we primarily focus on the 190 species of birds (Bird) from the CUB-200-2011 <cit.> dataset for which the phylogenetic relationship <cit.> is known. The tree is quite large with a total of 184 internal nodes.
We removed the background from the images to avoid the possibility of learning prototypes corresponding to background information, such as the bird's habitat, as we are only interested in the traits corresponding to the body of the organism.
We also apply our method on a fish dataset with 38 species (Fish) <cit.> along with its associated phylogeny <cit.> and 30 subspecies of Heliconius butterflies (Butterfly) from the Jiggins Heliconius Collection dataset <cit.> collected from various sources [Sources: <cit.>]
along with its phylogeny <cit.>. The qualitative results of Butterfly and Fish datasets are provided in the supplementary materials.
The complete details of hyper-parameter settings and training strategy are also provided in the Supplementary Section <ref>.
Baselines: We compare HComPNet to ResNet-50 <cit.>, INTR (Interpretable Transformer) <cit.>, and HPnet <cit.>. For HPnet, we used the same hyperparameter settings and training strategy as used by ProtoPNet for the CUB-200-2011 dataset. For a fair comparison, we also set the number of prototypes for each child in HPnet to be equal to 10, similar to our implementation. We follow the same training strategy as provided by ProtoPNet for the CUB-200-2011 dataset.
§ RESULTS
§.§ Fine-grained Accuracy
Similar to HPnet <cit.>, we calculate the fine-grained accuracy for each leaf node by calculating the path probability over every image. During inference, the final probability for leaf class Y given an image X is calculated as,
P(Y|X) = P(Y^(1), Y^(2), ..., Y^(L)|X) = ∏_l=1^L P(Y^(l)|X),
where P(Y^(l)|X) is the probability of assigning image X to a node at level l, and L is the depth of the leaf node. Every image is assigned to the leaf class with the maximum path probability, which is used to compute the fine-grained accuracy. The comparison of the fine-grained accuracy of HComPNet and the baselines is given in Table <ref>.
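As an illustration of this inference rule, the following short Python sketch (with an assumed data layout) multiplies the per-level probabilities along each root-to-leaf path and picks the leaf with the maximum path probability.

def leaf_path_probabilities(per_node_probs, leaf_paths):
    # per_node_probs: dict node -> softmax scores over its children for one image
    # leaf_paths: dict leaf label -> list of (node, child_index) pairs from root to leaf
    scores = {}
    for leaf, path in leaf_paths.items():
        p = 1.0
        for node, child_idx in path:
            p *= per_node_probs[node][child_idx]     # P(Y^(l) | X) at level l
        scores[leaf] = p
    return scores

def predict_leaf(per_node_probs, leaf_paths):
    scores = leaf_path_probabilities(per_node_probs, leaf_paths)
    return max(scores, key=scores.get)               # leaf with maximum path probability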
We can see that HComPNet performs better than the other interpretable methods, such as INTR and HPnet, and is also able to nearly match the performance of non-interpretable models, such as ResNet-50, even outperforming it on the Fish and Butterfly datasets.
This shows the ability of our proposed framework to achieve competitive classification accuracy along with serving the goal of discovering evolutionary traits.
§.§ Generalizing to Unseen Species in the Phylogeny
We analyze the performance of HComPNet in generalizing to unseen species, i.e., species that the model has not encountered during training.
The biological motivation for this experiment is to evaluate whether HComPNet can situate a newly discovered species at its appropriate position in the phylogeny by identifying the common ancestors it shares with the known species. An added advantage of our work is that, along with identifying the ancestor of an unseen species,
we can also identify the common traits shared by the novel species with known species in the phylogeny.
Since unseen species cannot be classified to the finest levels (i.e., up to the leaf node corresponding to the unseen species), we analyze the ability of HComPNet to classify unseen species accurately up to one level above the leaf level in the hierarchy. With this consideration, the final probability of an unseen species for a given image is calculated as,
P(Y|X_unseen) = P(Y^(1), Y^(2), ..., Y^(L-1)|X) = ∏_l=1^L-1 P(Y^(l)|X).
Note that we leave out the class probability at the L^th level, i.e., the leaf level. We leave four species out of the Bird training set and report their accuracy during inference in Table <ref>. We can see that HComPNet generalizes better than HPnet for all four species.
§.§ Analyzing the Semantic Quality of Prototypes
Following the method introduced in PIPNet <cit.>, we assess the semantic quality of our learned prototypes by evaluating their part purity. A prototype with high part purity (close to 1) is one that consistently highlights the same image region in the score maps (corresponding to consistent local features such as the eye or wing of a bird) across images belonging to the same class.
Part purity of prototypes on the Bird dataset:
Model | ℒ_ovsp | Masking | Part purity | % masked
HPnet | - | - | 0.14 ± 0.09 | -
HComPNet | - | - | 0.68 ± 0.22 | -
HComPNet | - | ✓ | 0.75 ± 0.17 | 21.42%
HComPNet | ✓ | - | 0.72 ± 0.19 | -
HComPNet | ✓ | ✓ | 0.77 ± 0.16 | 16.53%
The part
purity is calculated using the part locations of 15 parts that are provided in the CUB dataset. For each prototype, we take the top-10 images from each leaf descendant. We consider the 32 × 32 image patch that is centered around the max activation location of the prototype from the top-10 images. With these top-10 image patches, we calculate how frequently each part is present inside the image patch. For example, a part that is found inside the image patch 8 out of 10 times is given a score of 0.8. In PIP-Net, the highest value among the values calculated for each part is given as the part purity of the prototype. In our approach, since we are dealing with a hierarchy and taking the top-10 from each leaf descendant, a particular part, let's say the eye, might have a score of 0.5 for one leaf descendant and 0.7 for a different leaf descendant. Since we want the prototype to represent the same part for all the leaf descendants, we take the lowest score (the weakest link) among all the leaf descendants as the score of the part. By following this method, for a given prototype we can arrive at a value for each part and finally take the maximum among the values as the purity of the prototype. We take the mean of the part purity across all the prototypes and report the results in Table <ref> for different ablations of and also HPnet, which is the only baseline method that can learn hierarchical prototypes.
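The computation above can be summarized by the following Python sketch; the helpers top_activating_images, max_activation_location, and part_annotations are assumed interfaces to the model and the CUB part annotations, not functions from the released code.

def part_purity(prototype, descendants, all_parts, top_k=10, patch=32):
    weakest = {part: 1.0 for part in all_parts}          # weakest-link score per part
    for species in descendants:
        images = top_activating_images(prototype, species, k=top_k)       # assumed helper
        counts = {part: 0 for part in all_parts}
        for img in images:
            cy, cx = max_activation_location(prototype, img)              # assumed helper
            for part, (py, px), visible in part_annotations(img):         # assumed helper
                if visible and abs(py - cy) <= patch // 2 and abs(px - cx) <= patch // 2:
                    counts[part] += 1                     # part falls inside the 32 x 32 patch
        for part in all_parts:
            weakest[part] = min(weakest[part], counts[part] / len(images))
    return max(weakest.values())                          # purity = best part's weakest-link score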
We can see that HComPNet, even without the use of the over-specificity loss, performs much better than HPnet due to the contrastive learning approach we have adopted from PIPNet <cit.>. The addition of over-specificity loss improves the part purity because over-specific prototypes tend to have poor part purity for some of the leaf descendants, which will affect their overall part purity score. Further, for both ablations with and without over-specificity loss, we apply the masking module and remove masked (over-specific) prototypes during the calculation of part purity. We see that the part purity increases when the masking module is applied, demonstrating its effectiveness in identifying over-specific prototypes. We further compute the purity of masked-out prototypes and notice that the masked-out prototypes have drastically lower part purity (0.29 ± 0.17) compared to non-masked prototypes (0.77 ± 0.16). We also provide a visual comparison of a masked (over-specific) prototype and an unmasked (non-over-specific) prototype in the Supplementary Section <ref>.
An alternative approach to learning the masking module is to identify over-specific prototypes using a fixed global threshold over 𝒪_i. We show in Table <ref> of Supplementary Section <ref>, that given the right choice of such a threshold, we can identify over-specific prototypes. However, selecting the ideal threshold can be non-trivial. On the other hand, our masking module learns the appropriate threshold dynamically as part of the training process.
Figure <ref> visualizes the part consistency of prototypes discovered by HComPNet in comparison to HPnet for the Bird dataset. We can see that HComPNet finds a consistent region in the image (corresponding to the head region) across all three descendant species and all images of a species, in contrast to HPnet. Furthermore, thanks to the alignment loss, every patch ẑ_𝐡,𝐰 is encoded as nearly a one-hot encoding with respect to the K prototypes, which causes the prototype score maps to be highly localized. The concise and focused nature of the prototype score maps makes the interpretation much more effective compared to baselines.
§.§ Analyzing Evolutionary Traits Discovered by HComPNet
We now qualitatively analyze some of the hypothesized evolutionary traits discovered in the hierarchy of prototypes learned by HComPNet.
Figure <ref> shows the hierarchy of prototypes discovered over a small subtree of the phylogeny from the Bird (four species) and Fish (three species) datasets.
In the visualization of bird prototypes, we can see that the two Pelican species share a consistent region in the learned Prototype labeled 2, which corresponds to the head region of the birds. We can hypothesize that this prototype captures the white-colored crown common to the two species. On the other hand, Prototype 1 finds the shared trait of similar beak morphology (e.g., sharpness of beaks) across the two Cormorant species. We can see that HComPNet avoids learning over-specific prototypes at internal nodes; such traits are instead pushed down to individual leaf nodes, as shown in the visualizations of Prototypes 3, 4, 5, and 6.
Similarly, in the visualization of the fish prototypes, we can see that Prototype 1
is highlighting a specific fin (dorsal fin) of the Carassius auratus and Notropis hudsonius species, possibly representing their pigmentation and structure, which is noticeably different compared to the contrasting species of Alosa chrysochloris.
Note that while identifies the common regions corresponding to each prototype (shown as heatmaps), the textual descriptions of the traits provided in Figure <ref> are based on human interpretation.
Figure <ref> shows another visualization of the sequence of prototypes learned by HComPNet for the Western Grebe species at different levels of the phylogeny. We can see that at level 0, we are capturing features closer to the neck region, indicating the likely difference in neck length between Grebe species and other species (Cuckoo, Albatross, and Fulmar) that diversified earlier in the process of evolution. At level 1, the prototype is focusing on the eye region, potentially indicating a difference in the color of red and black patterns around the eyes. At level 2, we are differentiating Western Grebe from Horned Grebe based on the feature of bills. We also validate our prototypes by comparing them with the multi-head cross-attention maps learned by INTR <cit.>. We can see that some of the prototypes discovered by HComPNet can be mapped to equivalent attention heads of INTR. However, while INTR is designed to produce a flat structure of attention maps, we are able to place these maps on the tree of life. This shows the power of HComPNet in generating novel hypotheses about how trait changes may have evolved and accumulated across different branches of the phylogeny.
Additional visualizations of discovered evolutionary traits for butterfly species and fish species are provided in the supplementary section in Figures <ref> to <ref>.
§ CONCLUSION
We introduce a novel approach for learning hierarchy-aligned prototypes while avoiding the learning of over-specific features at internal nodes of the phylogenetic tree, enabling the discovery of novel evolutionary traits.
Our empirical analysis on birds, fishes, and butterflies demonstrates the efficacy of HComPNet over baseline methods. Furthermore, HComPNet demonstrates a unique ability to generate novel hypotheses about evolutionary traits, showcasing its potential in advancing our understanding of evolution. We discuss the limitations of our work in Supplementary Section <ref>.
While we focus on the biological problem of discovering evolutionary traits, our work can be applied in general to domains involving a hierarchy of classes, which can be explored in future research.
This research is supported by National Science Foundation (NSF) awards for the HDR Imageomics Institute (OAC-2118240). We are thankful for the support of computational resources provided by the Advanced Research Computing (ARC) Center at Virginia Tech. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains, and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript or allow others to do so for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (<https://www.energy.gov/doe-public-access-plan>).
§ HCOMPNET: SUPPLEMENTARY MATERIAL
§ ADDITIONAL BIOLOGICAL BACKGROUND
One of the first steps in any study of evolutionary morphology is character construction - the process of deciding which measurements will be taken of organismal variation that are replicable and meaningful for the underlying biology, and how these traits should be represented numerically <cit.>. For phylogenetic studies, researchers typically attempt to identify synapomorphies – versions of the traits that are shared by two or more species, are inherited from their most recent common ancestor, and may have evolved along the phylogeny branch. The difficulty with the traditional character construction process is that humans often measure traits in a way that is inconsistent and difficult to reproduce, and can neglect shared features that may represent synapomorphies, but defy easy quantification. To address the problem of human inconsistency, PhyloNN <cit.> and Phylo-Diffusion <cit.> took a knowledge-guided machine learning (KGML) <cit.> approach to character construction, by giving their neural networks knowledge about the biological process they were interested in studying (in their case, phylogenetic history), and specifically optimizing their models to find embedded features (analogous to biological traits) that are predictive of that process. To address the problem of visual irreproducibility, Ramirez et al. <cit.> suggested photographing the local structures where the empirical traits vary and linking the images to written descriptions of the traits. In this paper, we take influence from both approaches. We extend the hierarchical prototype approach from Hase et al. <cit.> to better reflect phylogeny, similar in theory to the way PhyloNN <cit.> and Phylo-Diffusion <cit.> learned embeddings that reflect phylogeny. Using prototypes, however, we enforce local visual interpretability similar to how researchers may use “type-specimens” to define prototypical definitions of particular character states.
Specifically, our method is about finding synapomorphies – shared derived features unique to a particular group of species that share a common ancestor in the phylogeny (referred to as a clade). While such features may bear similarities to convergent phenotypes in other clades, our goal is not to identify features that exhibit convergence. It is typical for phylogenetic studies to specifically avoid features that exhibit high levels of convergence, as they can lend support to erroneous phylogenetic relationships. Identifying convergence requires additional information, such as shared habitat, niche, diet, or behavior, which is not incorporated in our work.
§ ABLATION OF OVER-SPECIFICITY LOSS TRADE-OFF HYPERPARAMETER
We have provided an ablation for the over-specificity loss trade-off hyperparameter (λ_ovsp) in Table <ref>.
We can observe that increasing the weight of over-specificity loss reduces the model's classification performance, as the model struggles to find any commonality, especially at internal nodes where the number of leaf descendant species is large and quite diverse. It is natural that species that are diverse and distantly related may share fewer characteristics with each other, in comparison to a set of species that diverged more recently from a common ancestor <cit.>. Therefore, forcing the model to learn common traits with a strong ℒ_ovsp constraint can cause the model to perform badly in terms of accuracy.
§ ABLATION OF NUMBER OF PROTOTYPES
In Table <ref>, we vary the number of prototypes per child β for a node to see the impact on the model's performance.
We note that while the accuracy increases marginally with increasing the number of prototypes per child (β) from 10 to 15, it does not affect the performance of the model significantly.
Therefore, we continue to work with β=10 for all of our experiments.
§ ABLATION OF INDIVIDUAL LOSSES
In Table <ref>, we perform an ablation of the various loss terms used in our methodology. As can be observed, the removal of ℒ_ovsp and ℒ_disc degrades performance in terms of both semantic consistency (part purity) and accuracy. On the other hand, the removal of the self-supervised contrastive loss ℒ_SS improves accuracy but at the cost of drastically decreased semantic consistency.
§ CONSISTENCY OF CLASSIFICATION PERFORMANCE OVER MULTIPLE RUNS
We trained the model using five distinct random weight initializations. The results showed that the model's fine-grained accuracy averaged 70.63% with a standard deviation of 0.18%.
§ IMPLEMENTATION DETAILS
We have included all the source code and dataset, along with the comprehensive instructions to reproduce the results at <https://github.com/Imageomics/HComPNet>.
Model hyper-parameters:
We build on top of a ConvNeXt-tiny architecture as the backbone feature extractor. We have modified the stride of the max pooling layers of later stages of the backbone from 2 to 1, similar to PIP-Net, such that the backbone produces feature maps of increased height and width, in order to get more fine-grained prototype score maps. We implement and experiment with our method on ConvNeXt-tiny backbones with 26 × 26 feature maps. The length of prototype vectors C is 768. The weights ϕ at every node n of HComPNet are constrained to be non-negative by the use of the ReLU activation function <cit.>. Further, the prototype activation nodes are connected with non-negative weights only to their respective child classes in W while their weights to other classes are made zero and non-trainable.
Training details:
All models were trained with images resized and appropriately padded to 224 × 224 pixel resolution and augmented using TrivialAugment <cit.> for contrastive learning. The prototypes are pretrained with self-supervised learning similar to PIP-Net for 10 epochs, following which the model is trained with the entire set of loss functions for 60 epochs. We use a batch size of 256 for the Bird dataset and 64 for the Butterfly and Fish dataset.
The masking module is trained in parallel, and its training is continued for 15 additional epochs after the training of the rest of the model is completed. The trade-off hyper-parameters for the loss functions are set to be λ_CE = 2; λ_A = 5; λ_T = 2; λ_ovsp = 0.05; λ_disc = 0.1; λ_orth = 0.1; λ_mask = 2.0; λ_L1 = 0.5. λ_CE, λ_T and λ_A were borrowed from PIP-Net <cit.>. Ablations to arrive at a suitable λ_ovsp are provided in Table <ref>. λ_disc and λ_orth were chosen empirically and found to work well on all three datasets. The experiment on unseen species was done by leaving out certain classes from the datasets so that they are not considered during training.
Dataset and Phylogeny Details: Dataset statistics and phylogeny statistics are provided in Table <ref> and Table <ref> respectively. Bird dataset is created by choosing 190 species from CUB-200-2011 [License: https://creativecommons.org/licenses/by/4.0/CC BY] <cit.> dataset, which were part of the phylogeny. Background from all images was filtered using the associated segmentation metadata <cit.>. For Butterfly dataset we considered each subspecies as an individual class and considered only the subspecies of genus Heliconius from the Heliconius Collection (Cambridge Butterfly)[Note that this dataset is a compilation of images from 25 Zenodo records by the Butterfly Genetics Group at Cambridge University, licensed under https://creativecommons.org/licenses/by/4.0/Creative Commons Attribution 4.0 International (<cit.>).] <cit.>. There is substantial variation among subspecies of Heliconius species. Furthermore, we balanced the dataset by filtering out the subspecies that did not have 20 or more images. We also sampled a subset of 100 images from each subspecies that had more than 100 images. For Fish [License: https://creativecommons.org/licenses/by-nc/4.0/CC BY-NC] dataset, we followed the exact same preprocessing steps as outlined in PhyloNN <cit.>.
Compute Resources: The models for the Bird dataset were trained on two NVIDIA A100 GPUs with 80GB of RAM each. Butterfly and Fish models were trained on a single A100 GPU. As a rough estimate, the execution time for the training model on the Bird dataset is around 2.5 hours. For Butterfly and Fish datasets, the training is completed in under 1 hour. We used a single A100 GPU during the inference stage for all other analyses.
§ POST-HOC THRESHOLDING TO IDENTIFY OVER-SPECIFIC PROTOTYPES
An alternative approach to learning the masking module is to calculate the over-specificity score for each prototype on the test set after training the model. We calculate the over-specificity scores for the prototypes of a trained model as follows,
𝒪_i = -∏_d=1^D_i1/top_k∑_b=1^top_k𝐠_𝐛,𝐢
For a given prototype, we choose the top_k images with the highest prototype scores from each leaf descendant. After taking the mean of the top_k prototype scores, we multiply the values from each descendant to arrive at the over-specificity score for the particular prototype. Subsequently, we choose a threshold to determine which prototypes are over-specific. We provide the results of the post-hoc thresholding approach, which can also be used to identify over-specific prototypes, in Table <ref>. While this approach can be effective, validating the threshold, particularly in scenarios where no part annotations are available (such as the part location annotations of CUB-200-2011), can be an arduous task. In such cases, directly identifying over-specific prototypes as part of the training through the masking module can be the more feasible option.
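A small PyTorch-style sketch of this post-hoc procedure is given below (tensor shapes and the threshold value are assumptions for illustration).

import torch

def posthoc_overspecificity(proto_scores, descendant_ids, top_k=10):
    # proto_scores: (N, K) prototype scores over the test set; descendant_ids: (N,)
    means = []
    for d in descendant_ids.unique():
        scores_d = proto_scores[descendant_ids == d]             # (N_d, K)
        k = min(top_k, scores_d.shape[0])
        topk = scores_d.topk(k, dim=0).values                    # top_k scores per prototype
        means.append(topk.mean(dim=0))
    return -torch.stack(means).prod(dim=0)                       # O_i in [-1, 0], shape (K,)

def overspecific_prototypes(ovsp, threshold=-0.1):
    # the threshold is a free design choice; the learned masking module avoids picking it manually
    return ovsp > threshold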
§ VISUAL COMPARISON OF A OVER-SPECIFIC AND A NON-OVER-SPECIFIC PROTOTYPE
In Figure <ref>, we visually compare a prototype identified as over-specific by the masking module with a prototype that is not. As can be observed in Figure <ref>(a), the Red-legged Kittiwake has shorter legs than the other species of its clade, the Heermann Gull and the Western Gull. Therefore, the prototype is identified as over-specific, as long legs are not common to all three species. On the other hand, in Figure <ref>(b), the prototype is identified as non-over-specific because all three species share white-colored crowns. The prototype in Figure <ref>(a) has very low activation for the Red-legged Kittiwake and also has poor part purity, since it does not highlight the same part of the bird across the images of that species.
§ PERFORMANCE OF HPNET WITH HIGH RESOLUTION FEATURE MAPS
We analyze the performance of HPnet with high-resolution feature maps in Table <ref>. We modified the backbone by removing the max pooling layers at the final stages of the model to produce a 28 × 28 feature map instead of the original 7 × 7 feature map. It can be observed that the accuracy and part purity do not improve with high-resolution feature maps. We also make a qualitative comparison between an HPnet prototype and an HComPNet prototype with higher-resolution feature maps in Figure <ref>, showing that part purity does not improve with high-resolution feature maps for HPnet.
§ ADDITIONAL VISUALIZATIONS OF THE HIERARCHICAL PROTOTYPES DISCOVERED BY
We provide more visualizations of the hierarchical prototypes discovered by HComPNet for the Butterfly (Figures <ref> and <ref>) and Fish (Figure <ref>) datasets in this section. For ease of visualization, in each figure we visualize the prototypes learned over a small sub-tree from the phylogeny. The prototypes at the lowest level capture traits that are species-specific, whereas the prototypes at internal nodes capture the commonality among their descendant species. For the Fish dataset, we have provided textual descriptions, purely based on human interpretation, for the traits that are captured by prototypes at different levels. For the Butterfly dataset, since the prototypes capture different wing patterns, assigning textual descriptions to them is not straightforward. Therefore, we refrain from providing any text description for the highlighted regions of the learned prototypes and leave it to the reader's interpretation.
§ ADDITIONAL TOP-K VISUALIZATIONS OF PROTOTYPES
We provide additional top-K visualizations of the prototypes from the Butterfly (Figures <ref> to <ref>) and Fish (Figures <ref> to <ref>) datasets, where every row corresponds to a descendant species and the columns correspond to the top-K images from the species with the largest prototype activation scores. A requirement of a semantically meaningful prototype is that it should consistently highlight the same part of the organisms in various images, provided that the part is visible. We can see in the figures that the prototypes learned by HComPNet consistently highlight the same part across all top-K images of a species, and across all descendant species. We additionally show that HComPNet can find common traits at internal nodes with a varying number of descendant species, including 4 species (Figure <ref>), 5 species (Figures <ref> and <ref>), and 10 species (Figure <ref>) of butterflies, and 5 species (Figure <ref>), 8 species (Figure <ref>) and 18 species (Figure <ref>) for fish. We also provide several top-K visualizations of prototypes learned for bird species in Figures <ref> to <ref>. This shows the ability of HComPNet to discover common prototypes at internal nodes of the phylogenetic tree that consistently highlight the same regions in the descendant species images even when the number of descendants is large.
§ LIMITATIONS OF OUR WORK
A fundamental challenge of every prototype-based interpretability method (including ours) is the difficulty in associating a semantic interpretation with the underlying visual concept of a prototype. While some prototypes can be interpreted easily based on visual inspection of prototype activation maps, other prototypes are harder to interpret and require additional domain expertise of biologists. Also, while we have considered large phylogenies as that of the 190 species from the CUB dataset, it may still not be representative of all bird species. This limited scope may cause our method to identify apparent homologous evolutionary traits that could differ with the inclusion of more species into the phylogeny. Therefore, our method can be seen as a system that generates potential hypotheses about evolutionary traits discovered in the form of hierarchical prototypes.
|
http://arxiv.org/abs/2409.02561v1 | 20240904092848 | Vision-Language Navigation with Continual Learning | [
"Zhiyuan Li",
"Yanfeng Lv",
"Ziqin Tu",
"Di Shang",
"Hong Qiao"
] | cs.AI | [
"cs.AI",
"cs.RO"
] |
BMI Prediction from Handwritten English Characters Using a Convolutional Neural Network
N. T. Diba1, N. Akter2, S. A. H. Chowdhury3
Dept. of Electronics & Telecommunication Engineering
Rajshahi University of Engineering & Technology
Rajshahi, Bangladesh
nishattasnimediba09@gmail.com1,
nasrinmanha123@gmail.com2, arif.1968.ruet@gmail.com3
J. E. Giti4
Dept. of Electrical & Electronic Engineering
Rajshahi University of Engineering & Technology
Rajshahi, Bangladesh
jishan.e.giti@gmail.com4
September 9, 2024
===============================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Vision-language navigation (VLN) is a critical domain within embodied intelligence, requiring agents to navigate 3D environments based on natural language instructions. Traditional VLN research has focused on improving environmental understanding and decision accuracy. However, these approaches often exhibit a significant performance gap when agents are deployed in novel environments, mainly due to the limited diversity of training data. Expanding datasets to cover a broader range of environments is impractical and costly.
We propose the Vision-Language Navigation with Continual Learning (VLNCL) paradigm to address this challenge. In this paradigm, agents incrementally learn new environments while retaining previously acquired knowledge. VLNCL enables agents to maintain an environmental memory and extract relevant knowledge, allowing rapid adaptation to new environments while preserving existing information.
We introduce a novel dual-loop scenario replay method (Dual-SR) inspired by brain memory replay mechanisms integrated with VLN agents. This method facilitates consolidating past experiences and enhances generalization across new tasks. By utilizing a multi-scenario memory buffer, the agent efficiently organizes and replays task memories, thereby bolstering its ability to adapt quickly to new environments and mitigating catastrophic forgetting.
Our work pioneers continual learning in VLN agents, introducing a novel experimental setup and evaluation metrics. We demonstrate the effectiveness of our approach through extensive evaluations and establish a benchmark for the VLNCL paradigm. Comparative experiments with existing continual learning and VLN methods show significant improvements, achieving state-of-the-art performance in continual learning ability and highlighting the potential of our approach in enabling rapid adaptation while preserving prior knowledge.
§ INTRODUCTION
Vision-Language Navigation (VLN) <cit.> is crucial for the embodied intelligence field. The agent follows natural language instructions and moves around a 3D environment. By integrating natural language processing, visual perception, and decision-making, the agent can navigate to the destination. Most VLN research focuses on improving environment understanding <cit.> and the accuracy of the target decision policy <cit.>.
While these advancements have significantly improved VLN performance, there is still a critical issue: the generalization of agents to diverse unseen scenes, which is essential for real-world applications.
In practical scenarios, agents must continually adapt to new environments while retaining the knowledge acquired from previous tasks.
The significant performance gap between seen and unseen <cit.> environments underscores this challenge. The primary cause of this issue is the limited availability of diverse environmental data, which constrains the agents' ability to generalize effectively <cit.>.
Yet, massively expanding the dataset with various environments is unrealistic and expensive <cit.>. Therefore, we consider an alternative approach by introducing the continual learning (CL) framework.
This framework enables the agents to incrementally learn and adapt to new environments while retaining the knowledge acquired from previous tasks <cit.>. By using this strategy, we aim to enhance the generalization capabilities of VLN agents, making them more robust and effective in real-world applications where they must navigate an ever-changing array of environments.
To enable the VLN agent to accumulate knowledge across tasks, it is vital to handle the challenge known as catastrophic forgetting <cit.>. We therefore combine continual learning with vision-language navigation tasks and introduce the Vision-Language Navigation with Continual Learning (VLNCL) paradigm. As the agent is presented with new tasks, it must continuously accumulate information while maintaining former knowledge, which requires a balance between stability and plasticity <cit.>. Furthermore, considering that real-world tasks often occur within the same environment, we split the tasks by scene and present them to the agent scene by scene. This way, tasks are divided into different domains.
Building on this foundation, we propose the dual-loop scenario replay vision-language navigation agent (Dual-SR) as a novel method for VLNCL. Inspired by the mechanism of memory replay in the resting brain <cit.>, we designed a dual-loop memory replay framework to enable the model to consolidate earlier scenario memories while balancing new task learning. Randomly replaying scenario memories from the agent's memory buffer biases learning toward former tasks while the inner-loop weight updates learn the new task <cit.>.
Then, the agent applies the meta-learning-based outer loop weight updates to balance the new and old weights while alleviating the overfitting problem caused by the single scenario task data <cit.>.
Additionally, to effectively retain diverse environmental knowledge, we design a memory buffer based on task domains that allow agents to store and replay memories from different scenes.
To assess VLNCL agents, we propose two metrics: Unseen Transfer (UT) for evaluating knowledge transfer and Seen Transfer (ST) for assessing continual learning. UT measures generalization by testing the agent in new scenes with unseen tasks. ST evaluates continual learning by applying the agent to all seen task domains after training.
Enabling the VLN agent to perform continual learning on new tasks improves both task performance and generalization. We also conducted extensive comparative experiments with several CL methods used in other fields and with previous VLN agents to verify the advances of our approach. Experiments show state-of-the-art continual learning performance.
To summarize the contribution of this work:
* We introduce the Vision-Language Navigation with Continual Learning (VLNCL) paradigm and metrics, enabling VLN agents to adapt to new environments while retaining prior knowledge for improving generalization.
* We propose Dual-loop Scenario Replay (Dual-SR), a novel approach inspired by the brain memory system, which enables continual learning in VLN agents. Experimental results showed a 16% success rate rise compared with the base model.
* We design a multi-scenario memory buffer that organizes task memories by environment type, facilitating rapid adaptation and balancing tasks across scenarios.
* Our work focuses on continual learning in VLN agents, reduces catastrophic forgetting, and improves knowledge transfer, setting a benchmark in the field. Comparative experiments also confirm its robustness and effectiveness.
§ RELATED WORK
§.§ Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) aims to develop intelligent agents capable of interacting with humans using natural language, perceiving the environment, and executing real-world tasks. This field has attracted significant attention across natural language processing, computer vision, robotics, and machine learning. Anderson et al. <cit.> laid the foundation by introducing the Room-to-Room (R2R) dataset, where agents navigate virtual indoor environments using a simulator. Building on R2R, researchers quickly established other VLN benchmarks like R4R <cit.> and RxR <cit.>.
For outdoor navigation, Touchdown <cit.> is a crucial benchmark where agents navigate a simulated New York City street view. Conversational navigation tasks, such as CVDN <cit.> and HANNA <cit.>, enable agents to interact with humans to aid navigation. Remote object navigation tasks, including REVERIE <cit.> and SOON <cit.>, require agents to infer object locations and identify them based on language instructions.
To address cross-modal alignment, PTA <cit.> leverages CNN-based visual features, while HAMT <cit.> incorporates long-term history into multimodal decision-making using a hierarchical vision transformer. Kerm <cit.> models relationships between scenes, objects, and directions using graphs. For navigation decisions, Dreamwalker <cit.> employs reinforcement learning with an intrinsic reward for cross-modal matching. Transformer-based models <cit.> has also gained popularity, integrating visual and linguistic information to enhance decision-making.
Despite these advances, introducing new environments requires retraining on seen environments to prevent catastrophic forgetting, leading to high costs and limited adaptability. Thus, enabling VLN agents with continual learning capabilities is crucial.
§.§ Continual Learning
Continual learning is vital for agents to acquire new tasks without forgetting prior ones. Compared with traditional machine learning, which relies on static datasets, continual learning processes sequential data streams, making earlier data inaccessible. First introduced by Thrun <cit.>, recent research has focused on mitigating catastrophic forgetting <cit.>. Approaches in this field generally fall into three categories: rehearsal-based, regularization-based, and parameter isolation methods.
Rehearsal-based methods, like experience replay, retain data from previous tasks to update the model during new task training <cit.>. Techniques by Lopez-Paz <cit.> and Chaudhry <cit.> use data replay to prevent gradient conflicts. While combined with meta-learning and experience replay, these methods help optimize network features and reduce conflicts between tasks <cit.>. Capacity expansion approaches <cit.> and online meta-learning <cit.> offer additional strategies to mitigate forgetting.
In the VLN domain, continual learning research is still developing. Jeong et al. <cit.> recently proposed rehearsal-based methods like "PerpR" and "ESR" to enhance robots' adaptability in new environments, showing promise on specific datasets. However, these methods face memory and computational efficiency challenges, especially with complex, long-sequence tasks. Further refinement of the continual learning framework and evaluation metrics is needed.
§ METHOD
§.§ Setting of Vision-Language Navigation
Vision-language navigation (VLN) involves an agent navigating a visual environment following natural language instructions. This task requires the agent to comprehend and integrate visual and linguistic inputs to achieve the goal. Let 𝐕 = {v_1, v_2, …, v_n} represent the sequence of visual observations from the environment and 𝐈 represent the natural language instruction. The objective is to learn a policy π(a_t |𝐕, 𝐈), where a_t is the action taken at time t, which maximizes the cumulative reward R = ∑_t=1^T r_t and minimizes the loss
L=E_(I,A^*) ∈ D[ℒ(π_θ(a_t |𝐕, 𝐈), A^*)]
where r_t being the reward at time t, A^* is the ground truth trajectory, θ is parameters, and T the total number of time steps.
The integration of vision and language occurs by projecting 𝐈 and 𝐕 into a common feature space, creating a joint embedding space.
This embedding allows the agent to align visual cues with linguistic references, enabling accurate navigation.
§.§ Formulation of Vision-Language Navigation with Continual Learning
Considering realistic application scenarios, we take the validation dataset as a continual learning dataset and split it into several data streams. In real-world applications, an agent might navigate the same scene for several tasks. Thus, we split the dataset by scene, grouping tasks that share a scene ID into one data stream that forms one task domain, which agents learn sequentially. During this process, the agent needs to learn the current tasks without forgetting knowledge of former tasks. This setup better reflects the situations agents face in the real world.
The dataset is separated into d task domains 𝒮={s_1, s_2, ..., s_d}, each containing tasks in the same scene. Task domains are independent of each other. Therefore, we can identify each task domain s as one distribution D_s. The definition of the loss function of VLNCL agents is
min_θ∑^d_i=1 E_(I,A^*) ∈ D_s_i[ℒ(π_θ(a_t |𝐕, 𝐈), A^*)]
where the parameters θ are updated iteratively in each task domain. The agent learns multiple task domains by optimizing the parameters in sequence. The main challenge is balancing the task domains while promoting generalization.
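A schematic (and deliberately simplified) training loop for this objective is sketched below; the agent interface is assumed, and the consolidation hook is filled in by the Dual-SR method of the next subsection.

def train_vlncl(agent, task_domains):
    # task domains S = {s_1, ..., s_d} arrive sequentially; earlier domains are not revisited in full
    for s_i in task_domains:
        for batch in s_i:                          # batch: instruction I, observations V, ground truth A*
            loss = agent.navigation_loss(batch)    # L(pi_theta(a_t | V, I), A*)
            agent.update(loss)                     # one optimizer step on theta
        agent.consolidate(s_i)                     # placeholder for outer-loop / replay mechanisms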
§.§ Dual-loop Scenario Replay
In the VLNCL setup, agents must minimize forgetting and improve transfer learning by leveraging prior knowledge to improve performance on current and former tasks. However, most existing VLN agents struggle with continual learning, unlike humans, who efficiently learn from a few examples by integrating sensory input with long-term memory <cit.>. The human brain continuously extracts and stores knowledge, reinforcing long-term memory through replay during rest <cit.>. Inspired by this, we propose the Dual-loop Scenario Replay Continual Learning (Dual-SR) algorithm for VLN agents. This algorithm simulates working memory as an inner loop and long-term memory as an outer loop, creating two weight update loops to balance prior and current task information while enhancing generalization.
In the VLNCL setup, agents might encounter overfitting due to the limited number of samples. Hence, we leverage the meta-update mechanism of the Reptile algorithm <cit.> to mimic long-term memory formation. The Reptile algorithm is comparable in effectiveness to MAML <cit.> and provides a means for models to acquire a common structure from the current task domain, enabling them to quickly adapt to other similar new tasks. Thus, we can maximize generalization ability rather than data fitting, and the outer loop improves the generalization capacity of agents. In the outer loop, the weight update can be defined as:
θ←θ + β(θ^'-θ)
where θ, θ^', and β denote the model weights before the inner loop, the weights after it, and the meta-learning rate, respectively.
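In code, this outer-loop (Reptile-style) update amounts to interpolating between the parameters saved before the inner loop and those obtained after it; the sketch below assumes a PyTorch model and illustrative names.

import torch

@torch.no_grad()
def outer_loop_update(model, theta_before, beta):
    # theta_before: parameter tensors copied before the inner loop on the current task domain
    for p, p_old in zip(model.parameters(), theta_before):
        p.copy_(p_old + beta * (p - p_old))        # theta <- theta + beta * (theta' - theta)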
To mimic the brain's abstraction and consolidation of long-term memories <cit.>, we designed the inner loop by simulating the memory retrieval in working memory. The agent can train with stable data selected randomly from previous task domains by maintaining the buffer and replaying old samples in the inner loop. Applying the memory buffer can ensure each former task is equally likely to be selected in the buffer. The newly received samples are combined with randomly selected old samples from the buffer to form a mini-batch, which is then used for meta-learning.
In the inner loop, the update of the model can be defined as:
U_k(θ)=θ-α∘∇ L_(a_t |𝐕, 𝐈)(θ)
where U_k(θ) is an update obtained by learning on (a_t |𝐕, 𝐈), α are the parameters of the meta-learner to be learned, and ∘ denotes the element-wise product. Specifically, α is a vector of the same size as θ that decides both the update direction and the learning rates.
Meta-updates can extract common structures learned across tasks, thereby enhancing the knowledge transfer capabilities of agents. The VLNCL setup exposes agents to a dynamic and unpredictable data stream. This procedure necessitates that agents adapt and perform effectively across an evolving array of tasks without following a predefined sequence. This approach differs significantly from traditional methods that require partitioning a fixed dataset into multiple batches for a set number of tasks. To address this, we implement experience replay within the inner loop. By storing task indices, the agent can revisit and leverage previously learned tasks when faced with new ones.
This approach contrasts with traditional replay-based methods, which indiscriminately use memory across all tasks. Our method randomly replays scenarios within each task domain to ensure balance. Additionally, we introduce the memory buffer size Z.
When a task belongs to a previous task domain and the task ID t is a multiple of Z, the agent updates the memory buffer M by replacing one of the tasks in the corresponding domain with the current task.
The agent efficiently manages memory size by updating scenario memory between task domains, even when handling many tasks. This strategy also encourages the model to prioritize tasks inspired by working memory principles.
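The following Python sketch illustrates one possible realization of this multi-scenario buffer; the eviction rule paraphrases the description above, and the class and method names are not taken from the released code.

import random

class ScenarioMemory:
    def __init__(self, buffer_size_z):
        self.Z = buffer_size_z
        self.buffer = {}                                   # scene/domain id -> stored task indices

    def observe(self, domain_id, task_id):
        tasks = self.buffer.setdefault(domain_id, [])
        if len(tasks) < self.Z:
            tasks.append(task_id)                          # free slot: store the task index
        elif task_id % self.Z == 0:                        # previously seen domain: periodic replacement
            tasks[random.randrange(len(tasks))] = task_id

    def replay_batch(self, k):
        # sample domains uniformly so earlier scenarios stay equally likely to be replayed
        if not self.buffer:
            return []
        return [random.choice(self.buffer[random.choice(list(self.buffer))]) for _ in range(k)]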
§.§ Structured Transformer VLN Agents with Continual Learning
Building upon the Dual-SR algorithm, we employ a cross-modal structured transformer as the planner to enhance the performance of the VLN agent in continual learning settings. The Dual-SR algorithm provides the foundation for this approach by balancing integrating new information and retaining prior knowledge. At each navigation step t, the model processes five forms of tokens, global token 𝐠_t-1, candidate target tokens 𝐂 = {𝐜_t-1^1, 𝐜_t-1^2, …, 𝐜_t-1^q}, history tokens 𝐇 = {𝐡_t-1^1, 𝐡_t-1^2, …, 𝐡_t-1^t-1}, encoded instruction tokens 𝐈 = {𝐢_0, 𝐢_1, …, 𝐢_m}, and encoded vision tokens 𝐕 = {𝐯_t^1, 𝐯_t^2, …, 𝐯_t^n}. The instruction tokens remain constant through time to reduce computation, and other tokens are updated based on previous time steps. The system initializes the global token as the sentence embedding 𝐠_0 = i_0.
To encode candidate target tokens, we apply a grid-based encoding to address the challenge of modeling possible long-term targets in unseen scenes. By discretizing the environment into a d × d grid that covers the navigation area, each cell center can represent a potential navigation target token. Initially, candidate target tokens 𝐜_0^1, 𝐜_0^2, …, 𝐜_0^q are created using the positional embedding of the targets, formulated as:
𝐜_0^i = f_ϕ_P(𝐬_i) ·𝐢_0, i ∈{1, 2, …, q}
where f_P is the positional encoder, 𝐬_i is the spatial location expressed by position coordinates, ϕ_P is the parameter of the encoder, and 𝐢_0 is the sentence embedding.
During navigation, these candidate target tokens are refined with new visual clues and instruction tokens to predict a more precise long-term target. The probability of each target being the navigation destination is calculated using a multi-layer perceptron (MLP) based target predictor:
P(𝐜_t^i |θ) = softmax{MLP(𝐜_t^i ·𝐠_t)}, i ∈{1, 2, …, q}
where 𝐠_t is the global token.
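The target predictor can be sketched as follows (shapes and the exact fusion of 𝐜_t^i and 𝐠_t are assumptions; here the product is taken elementwise before the MLP).

import torch
import torch.nn as nn

class TargetPredictor(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, candidate_tokens, global_token):
        # candidate_tokens: (q, C) with q = d * d; global_token: (C,)
        fused = candidate_tokens * global_token            # elementwise c_t^i . g_t
        logits = self.mlp(fused).squeeze(-1)               # (q,)
        return torch.softmax(logits, dim=-1)               # P(c_t^i | theta)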
The agent constructs and maintains a structured representation of the explored area with the transformer architecture to capture the structured environment layouts. At time step t, the model constructs a graph 𝒮_t, where the nodes represent previously visited locations and the edges represent the navigability of those locations. We construct the history token 𝐡_t^t using panoramic view embedding, action embedding, temporal embedding, and positional embedding as follows:
𝐡_t^t = f_V(𝐯_t^1, …, 𝐯_t^n) + f_A(𝐫_t) + f_T(t) + f_P(𝐬_t)
where f_V is a panoramic visual feature extractor, 𝐫_t = (sinθ, cosθ, sinφ, cosφ) is the moving direction, f_A is the action encoder, f_T is the temporal encoder, and f_P is the positional encoder.
The adjacency matrix 𝐄 of history tokens at time step t is defined such that if a navigation viewpoint n_j is navigable from n_i, then 𝐄_ij = 1; otherwise, 𝐄_ij = 0. The attention mask matrix 𝐌 controls the information flow among tokens, with a sub-matrix 𝐌_H for history tokens:
𝐌_H ←𝐌_H ∗𝐄
where ∗ denotes element-wise multiplication.
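A minimal sketch of this graph-constrained masking is shown below; keeping the diagonal (self-attention) entries is an assumption, since only the elementwise product with 𝐄 is specified above.

import torch

def history_attention_mask(m_h, adjacency):
    # m_h: (t, t) attention mask among history tokens; adjacency E: (t, t), E[i, j] = 1 if n_j is navigable from n_i
    eye = torch.eye(adjacency.shape[0], device=adjacency.device)
    return m_h * torch.clamp(adjacency + eye, max=1.0)     # M_H <- M_H * E (with self-loops retained)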
The structured transformer enables the agent to access structured information of the past, allowing decisions from adjacent and previously visited locations. The local action space at time step t is:
𝒜_t^L = {τ(𝐯̂_t^1), τ(𝐯̂_t^2), …, τ(𝐯̂_t^k_t)}
and the global action space is:
𝒜_t^G = {τ(𝐯̂_t^1), …, τ(𝐯̂_t^k_t), τ(𝐡_t^1), …, τ(𝐡_t^t-1)}
where τ maps the token to its corresponding location. The probability of each possible action is:
π(a_t |θ) = softmax{MLP(τ^-1(a_t) ·𝐠_t)}, a_t ∈𝒜_t^G
The optimization of the model involves both an imitation learning (IL) loss L_IL and a reinforcement learning (RL) loss L_RL, alternating between teacher forcing (using ground truth actions) and student forcing (using actions sampled from the policy). To further supervise the chosen actions and predicted targets, a history teacher loss L_HT and a target prediction loss L_T are incorporated.
The history teacher loss is defined as:
L_HT = -∑_t=1^T logπ(a_t |θ)
and the target prediction loss is:
L_T = -∑_t=1^T log P(𝐜_t^i |θ)
where the i-th target token is closest to the navigation destination.
The total loss function is given by:
L = α_1 L_IL + α_2 L_RL + α_3 L_HT + α_4 L_T
where α_i are the loss coefficients.
After training the foundational model, the agent performs continual learning within the validation environments. The agent processes the task-domain-based data stream sequentially for the Val-Seen and Val-Unseen splits. The agent executes the inner loop based on the loss function specified in Equation <ref> to iteratively update the parameters θ through continuous memory buffer updates and scenario replays. Upon completing the learning for the current task domain, the agent proceeds to perform the outer loop as described in Equation <ref>.
The continual learning methodology equips the VLN agent to learn and adapt within complex environments, maintaining and enhancing knowledge across multiple tasks. The Dual-SR algorithm in structured transformers allows for effective navigation and adaptation capabilities in continually changing scenarios.
§ EXPERIMENTS
§.§ Experiment Setup
The experiment adopts the VLNCL framework and divides the R2R dataset <cit.> into distinct task domains to evaluate resistance to forgetting and knowledge transfer capabilities.
By sequentially inputting each task domain into the agent, we separately assess the average Seen Transfer (ST) and Unseen Transfer (UT) across each dataset split.
For comparative experiments, we implemented the latest unpublished related work with the corresponding setup and a prior study incorporating continual learning. We also compared our approach with other VLN agents to validate performance on the test unseen split and confirm that continual learning enhances agent performance.
§.§ Evaluation Protocol for VLNCL
To assess the forgetting resistance and knowledge transfer abilities of the VLNCL agent, we introduce two metrics: Seen Transfer (ST) and Unseen Transfer (UT), analogous to Backward and Forward Transfer in continual learning. ST measures the average performance difference on the i-th task domain after training on T domains compared to training on the first i domains (i < T). UT evaluates the average performance difference on unseen tasks after training on T domains compared to the base agent. As training progresses, unseen tasks diminish as the agent encounters them.
Note that the unseen task domain set is a subset of the validation task set, divided into several scenario-related data streams that feed the agent sequentially. Therefore, the unseen task domain set is dynamic.
Formally, the validation task set is separated into d task domains 𝒮={s_1, s_2, ..., s_d}. The initial unseen task domain set is 𝒮_unseen^0={s_1, s_2, ..., s_d}. After the agent encounters task domain s_1, the unseen task domain set is updated with 𝒮_unseen^1={s_2, s_3, ..., s_d}. Hence, the Seen Transfer is defined as
ST = 1/(T-1)∑_i=1^T-1[ SR_T(s_i) - SR_i(s_i) ], s_i ∈𝒮_seen^T
The Unseen Transfer is defined as
UT = 1/(T-1)∑_i=2^T[ SR_i-1(s_i) - SR_0(s_i) ], s_i ∈𝒮_unseen^T
where SR_j(s_i) is the average success rate on task domain s_i for the agent after training on the first j task domains, and SR_0(s_i) is the success rate of the base agent.
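Both metrics translate directly into code once the success rates are tabulated; in the sketch below, sr[j][i] is assumed to hold SR_j(s_i), with row 0 holding the base agent's success rates and domains indexed from 1 to T.

def seen_transfer(sr, T):
    return sum(sr[T][i] - sr[i][i] for i in range(1, T)) / (T - 1)

def unseen_transfer(sr, T):
    return sum(sr[i - 1][i] - sr[0][i] for i in range(2, T + 1)) / (T - 1)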
§.§ Implementation Details
The agent's hyperparameters and the structured transformer architecture align with those utilized in previous studies. The candidate target grid size, d, is set to 5, resulting in a 5 × 5 grid with a spacing of 6 meters between adjacent positions. The agent follows the ground-truth path when applying teacher force under specific loss configurations. Conversely, actions are sampled from the predicted probability distribution during student forcing, leveraging distinct loss configurations.
We train the base model using the Adam optimizer on two NVIDIA V100 GPUs for 100,000 iterations, employing a batch size of 20 and a learning rate of 1×10^-5 over 72 GPU hours.
Continual learning is subsequently conducted on two NVIDIA V100 GPUs for 1k iterations per task domain with learnable learning rates.
§.§ Comparative Experiment Results
To evaluate the performance of our continual learning method in vision-language navigation tasks, we conducted a comparative experiment against previous works using the R2R dataset. We adopted the latest VLN agents with continual learning capabilities, adhering to the experimental settings and metrics outlined in ESR <cit.>, which evaluate the average performance across all seen tasks throughout the continual learning process. Additionally, we compared our framework with the ASA continual learning strategy <cit.>, which fine-tunes the agent using new data. However, unlike the original ASA, our experiment employed augmented data rather than interaction data collected from human-agent interactions. All agents used a randomly initialized HAMT <cit.> as the backbone model, with the buffer size set to 2000 tasks. Due to GPU memory constraints, the agent stored only each task's index and environment ID. The baseline for the experiment was the agent that did not employ any continual learning algorithm.
To further evaluate the task performance of our agent, we compared its results on the test unseen split against single-run performances of other VLN agents on the R2R dataset. The methods included in this comparison are Seq2Seq <cit.>, SSM <cit.>, EnvDrop <cit.>, AuxRN <cit.>, CCC <cit.>, PREVALENT <cit.>, AirBERT <cit.>, VLN↻BERT <cit.> (init. OSCAR).
The baseline used the base agent's performance before applying continual learning. We further compared the performance before and after continuous learning to demonstrate the potential of CL approaches.
The comparative results demonstrate that our method significantly enhances the task performance of the agent. The success rate increased by 16%, and the oracle success rate increased by 8% compared to the base agent, highlighting the substantial potential of a continual learning approach in VLN agents. Moreover, our results have achieved state-of-the-art performance in continual learning ability. Unlike agents that rely on elaborate environmental understanding mechanisms or meticulous fine-tuning, agents with CL capabilities can consistently enhance their performance in new scenes. However, integrating a more advanced inference architecture with the CL approach can significantly improve the agent's performance.
§.§ Resisting Forgetting and Transferring Evaluation
With continual learning, we introduce the concepts of Seen Transfer and Unseen Transfer for the VLN agent to evaluate its resistance to forgetting and ability to transfer knowledge. To assess forgetting resistance, we apply the average Seen Transfer to the validation train split and the validation seen split. We utilize the average Unseen Transfer on the validation unseen split to evaluate the agent's capability to transfer knowledge. As a baseline, we use the results of fine-tuning each task domain.
The Seen Transfer measures the agent's performance on the seen task domain set, 𝒮_seen^T, within the current split, determining whether the agent retains its prior knowledge. In contrast, Unseen Transfer evaluates performance on the unseen task domain set, 𝒮_unseen^T, within the current split, assessing knowledge transfer from previous tasks to current ones.
The experimental results demonstrated that our method has advantages in forgetting resistance and knowledge transfer. To further elucidate these capabilities, we present the changes in success rates across different splits in Figure <ref>. In the Val Unseen split, we evaluate the success rate in the unseen portion to assess knowledge transfer ability. Conversely, in the Val Seen and Train Seen splits, we consider the success rate in the seen portion to evaluate the agent's forgetting resistance.
Additionally, we can track the highest and lowest Success Rates (SR) and Oracle Success Rates (OSR) across all dataset splits in Table <ref>. The evaluation of performance also follows the VLNCL setup. This result allows us to observe the changes in performance more clearly.
Analyzing performance changes shows that our method exhibits strong resistance to forgetting and robust knowledge transfer capabilities. Our method consistently improves performance on unseen tasks in the Val Unseen split, highlighting the significant generalization capability introduced by continual learning in VLN agents. In the Val Seen and Train Seen splits, our method also shows substantial performance retention on seen tasks, with a success rate drop of only 20% after processing a sequence of 50 different scenario tasks. These results indicate that the agent effectively mitigates forgetting across long-term task domains. Consequently, we propose a benchmark for the Vision-Language Navigation with Continual Learning paradigm.
By implementing a continual learning approach in VLN agents, we enhance their ability to generalize effectively to unseen environments, paving the way for broader real-world applications of VLN agents.
§ CONCLUSION
This paper presents the Vision-Language Navigation with Continual Learning (VLNCL) paradigm, where the agent learns from unseen tasks while retaining knowledge from prior scenarios, closely reflecting real-world application demands. To achieve this, we introduce the Dual-loop Scenario Replay (Dual-SR) algorithm, which improves the agent's generalization and task performance. We also establish a benchmark for VLNCL using the R2R dataset. Experiments demonstrate that our method surpasses existing continual learning approaches under comparable conditions, advancing VLN agent performance and setting the stage for further research into real-world application-ready agents. Our future work will concentrate on developing more sophisticated inference mechanisms and advancing continual learning strategies to improve generalization.
|
http://arxiv.org/abs/2409.03462v1 | 20240905121903 | Automated Journalism | [
"Wang Ngai Yeung",
"Tomás Dodds"
] | cs.CY | [
"cs.CY"
] |
Automated Journalism
[1,2]Wang Ngai Yeung, justin.yeung@oii.ox.ac.uk
[3,4]Tomás Dodds, t.dodds.rojas@hum.leidenuniv.nl
[1]Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, United Kingdom
[2]Leiden HumAN Lab, Leiden University, Doelensteeg 16, Leiden, 2311 VL, the Netherlands
[3]Leiden University Centre for Linguistics, Leiden University, Reuvenplaats 3-4, Leiden, 2311 BE, the Netherlands
[4]Berkman Klein Center for Internet & Society, Harvard University, 1557 Massachusetts Ave, Cambridge, MA 02138, MA, United States
Developed as a response to the increasing popularity of data-driven journalism, automated journalism refers to the process of automating the collection, production, and distribution of news content and other data with the assistance of computer programs. Although the algorithmic technologies associated with automated journalism remain in the initial stage of development, early adopters have already praised the usefulness of automated journalism for generating routine news based on clean, structured data. Most noticeably, the Associated Press and The New York Times have been automating news content to cover financial and sports issues for over a decade. Nevertheless, research on automated journalism also warns of the dangers of using algorithms for news creation and distribution, including the possible bias behind AI systems or the human bias of those who develop computer programs. The popularization of automated news content also has important implications for the infrastructure of the newsroom, the role performance of journalists and other non-journalistic professionals, and the distribution of news content to a datafied audience.
September 9, 2024
=====================
§ INTRODUCTION
Algorithmic journalism is a container term that describes a practice of journalism adopting techniques that involve the datafication, quantification, computationalization, and automation of several steps throughout the newsmaking process. Sometimes also termed robot journalism <cit.> and automated journalism <cit.>, algorithmic journalism attempts to automate the collection, production, and distribution of news. (For a critical account of the terminologies, see <cit.>). This entry aims to offer a brief historical overview of algorithmic journalism since the 1990s. Understanding algorithms and media technologies in general as sociotechnical constructs, we present a range of possibilities and limitations of algorithmic journalism in contemporary newsrooms in terms of data collection, production, and distribution of news content.
The affordances of Internet communication, such as hypertextuality and interactivity <cit.>, opened new doors for journalists and general users to find information more efficiently at a meager cost. Therefore, it is unsurprising that information aggregators such as the Really Simple Syndication 1.0 (RSS) were released in 2000 when the Internet reached over 300 million users worldwide. RSS is a type of web feed that provides website updates in a standardized and computer-readable format, available for both users and applications to access. Information aggregators were also essential for journalists, as they reduced the time it took for journalists to monitor essential websites of interest. Nonetheless, information aggregators proved insufficient in the age of big data and social media platforms, when an average of 328.77 million terabytes of data are uploaded to the Internet daily.
Indeed, the overflow of now readily available data demands an excessive amount of human labor to process and filter useful data points, especially considering that most of the content online is duplicated, unverified, or simply irrelevant. In this context, data mining techniques infused with Natural Language Processing (NLP) can prove more advantageous for media workers. The Global Database of Events, Language, and Tone or GDELT Project, created by Kalev Leetaru and Philip Schrodt, and others, is a prime example of these new emerging technologies. GDELT helps journalists monitor news media around the globe by gathering recordings from various organizations and countries while also detecting the emotions that underlie the events that have been collected. However, while GDELT is an example of an off-the-shelf tool, wealthier newsrooms have started to create their own in-house activity monitoring tools, like Reuters’ Tracer <cit.>.
§ NATURAL LANGUAGE GENERATION AND HYPER-PERSONALIZATION
Despite the digitalization of journalism entering the newsrooms as early as the 1980s, journalism was not ‘algorithmic’ until the late 1990s, when a turn towards automation became manifest (See <cit.> for a historiography on this issue). For example, technologies such as the Automatic Spoken Document Retrieval, a speech-to-text algorithm conceived in the late 1980s, were leveraged to automate transcription processes of broadcasted content. Technologies for automation continue to develop inside newsrooms, and by the mid to late 2000s, full-text generation emerged as a rising tool in several media organizations. One of the most popular examples is Automated Insights, which the Associated Press later adopted as one of their main content generation tools. Other examples include the AI-generated virtual weather reporter developed by Radio Television Hong Kong (RTHK), called Aida, which started its first reporting in June 2023.
In previous decades, the commercialization of news and rising competition between outlets prompted editors and managers to tailor content to audience segments to garner readership <cit.>. However, algorithmic journalism does not only involve automating the production of news content but also its distribution and marketing processes. Owing to the abundance of both implicit and explicit behavioral data collected by the news sites or offered by social media platforms themselves (e.g., Facebook, Twitter, etc.) and external data brokers (e.g., Chartbeat on The New York Times, Permutive on BBC, etc.), tailoring news content to news readers is now not only possible but encouraged by third-party stakeholders and newsrooms managers alike.
The underlying mechanism of news tailoring in automated journalism is essentially the quantification of users’ needs and wants in the form of audience metrics. Progressively, News Recommender Systems (NRSs) that utilize user data for recommending content and maximizing click-through rate (CTR) have become ubiquitous. Google News and Apple News are the most common examples of these technologies. However, automated journalism can take personalization even further (i.e., hyper-personalization). For example, a GNI-funded project developed by the South China Morning Post attempts to cluster users based on their demographic, psychographic, and behavioral data. Setting aside ethical concerns, this project is an archetype of algorithmic journalism wherein users’ data are continuously collected, inferred, and analyzed to enhance the reader experience.
§ THE CURRENT STATE OF THE TECHNOLOGIES AND THEIR LIMITATIONS
With the rise in popularity of Natural Language Generation (NLG), Large Language Models (LLMs), and generative AI, newsrooms are beginning to see the advantages of utilizing these technologies to automate written content. Today, the automation of the creation and curation of shorter, repetitive, and structured journalistic pieces, for instance, financial, sports, and crime news, has been rather successful, and the process of crafting such content has become considerably more streamlined (Galily, 2018). Despite the hesitation to rely on generative AI for longer, analytical pieces, advanced NLG enables a smoother and less time-consuming workflow in generating Search Engine Optimisation (SEO)-friendly content using automatic tag generation, image labeling, and alt text creation <cit.>.
Particularly when it comes to News Recommender Systems (NRSs), a recent review by <cit.> summarized current approaches and methods to recommend items to readers, including factorization models (e.g., Matrix Factorization, Bayesian Personalized Ranking, and so on) and deep learning-based systems (e.g., Neural Collaborative Filtering). Challenges such as ranking newsworthiness based on timeliness can be tackled by graph-based algorithms (e.g., sequential Browse-Graphs) and time-decay models (e.g., short-term preference models).
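As a simplified illustration of two of the ingredients mentioned above, the sketch below combines a latent-factor (matrix-factorization-style) relevance score with an exponential time-decay weight for freshness. The factor vectors, half-life, and function names are invented for the example; production recommenders learn these quantities from interaction logs.

```python
import numpy as np
from datetime import datetime, timezone

def recommend_score(user_vec, item_vec, published_at, now=None, half_life_hours=12.0):
    """Score = learned latent-factor relevance * exponential freshness decay."""
    if now is None:
        now = datetime.now(timezone.utc)
    relevance = float(np.dot(user_vec, item_vec))            # dot product of latent factors
    age_hours = (now - published_at).total_seconds() / 3600.0
    freshness = 0.5 ** (age_hours / half_life_hours)         # halves every half_life_hours
    return relevance * freshness

# Illustrative user and article factor vectors (3 latent dimensions).
user = np.array([0.2, 0.7, 0.1])
article = np.array([0.3, 0.6, 0.4])
published = datetime(2024, 9, 5, 8, 0, tzinfo=timezone.utc)
print(recommend_score(user, article, published,
                      now=datetime(2024, 9, 5, 20, 0, tzinfo=timezone.utc)))
```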
However, these technologies also have limitations. For instance, although audiovisual-based generative AI is now on the rise (e.g., StableDiffusion or WaveNet), it has yet to be fully embraced by news organizations worldwide, for reasons that are both ethical and technical. Technically, producing a photorealistic video requires considerable time to fine-tune and render compared with an AI-generated image or text. Regarding ethics, AI-generated audiovisual content inherently challenges the notion of mechanical objectivity, the reasoning of “seeing is believing” <cit.>. Some examples have already caused serious public debates. In early 2023, the German magazine Die Aktuelle was criticized for fabricating an interview with former F1 racer Michael Schumacher with generative AI. While this case can easily be classified as an example of deliberate misinformation, the line between a blatant lie and an AI-generated visual aid is vague.
Another limitation of contemporary machine learning models for content generation is their differential performance across languages. A cross-language evaluative study of several generative models, such as gpt-3.5-turbo and gpt-4-32k, by <cit.> has elucidated that these models perform best when prompted in high-resource languages and languages written in Latin scripts. This study further showed that, for example, even if Indo-Aryan texts were translated into English before performing tasks, the performance was still comparatively worse than for the Germanic, Greek, and Romance language families. Such performance bias is both natural and artificial, in the sense that popular languages have more resources to be trained on and evaluated against, while less popular languages receive less attention simply because they are less widely used. Due to such divergence, it is conceivable that newsrooms in certain regions can utilize generative AI more effectively than their counterparts.
§ AUTOMATED JOURNALISM IMPACT ON NEWSROOMS’ INFRASTRUCTURES AND CHANGING WORKFLOWS
Automated journalism has had a profound impact on the infrastructures and workflows of newsrooms, particularly their physical infrastructure. Traditionally, newsrooms relied on local data storage, utilizing computers or physical storage devices. However, with the advent of automated journalism, there has been a shift towards outsourcing data storage to cloud data warehouses offered by large platforms. Notable examples of these cloud data warehouses include Amazon Redshift and Google BigQuery.
This transition has brought about several notable changes. Firstly, newsrooms no longer bear the responsibility of maintaining and managing their hardware for data storage. Instead, they can rely on the robust infrastructure provided by these cloud platforms, which promise scalable and secure storage solutions. This could allow newsrooms to focus more on their core journalistic activities rather than investing significant resources in maintaining physical storage systems.
Moreover, the emergence of automated journalism has also prompted a shift in the hardware required for data analytics and model training within newsrooms. Previously, data analytics and model training were relatively rare, if not non-existent, in traditional newsroom settings. However, with the outsourcing of storage to cloud data warehouses, newsrooms now can leverage the computational power and analytical capabilities offered by these platforms. By utilizing services such as Amazon Redshift and Google BigQuery, newsrooms can perform sophisticated data analytics and train machine learning models, enabling them to uncover valuable insights and enhance their journalistic output.
However, the transition from storing, processing, and distributing data locally to outsourcing these procedures is not without criticism. Underlying these large-scale engineering systems are fragments of digital dominance <cit.> in the platform ecosystem. Indeed, newsrooms are now able to ‘play around’ with big data, but only when this is enabled by big tech companies such as Amazon and Google. The convergence of journalism and platforms has remodeled the relationship between news workers and these tech conglomerates. Not only have the platforms exerted more control over what to produce (i.e., gatekeeping), but they have also played a huge role in altering the forms in which news is produced and shared <cit.>. More importantly, at the core of these platforms, as far as news distribution is concerned, is the quantification of users and performance based on engagement metrics. Concerning the algorithmic turn of journalism, <cit.> showed that although analytics-driven journalism may bring new opportunities for African newsrooms, it also brings a looming feeling of losing control over whether journalistic work actually addresses public concerns.
The impact of automated journalism on newsrooms extends beyond physical infrastructure and encompasses changes in organizational infrastructure and workflows. One significant aspect is the rise of data journalism and the establishment of in-house data science teams. In the past, journalists did not commonly undertake quantitative and statistical data analysis. However, with the advent of automated journalism, newsrooms have recognized the value of data-driven storytelling and have formed dedicated teams to handle data journalism initiatives. Prominent examples include The Guardian US visuals team and The SFChronicle Data Team. These teams possess the expertise to explore, analyze, and visualize data, allowing for the creation of compelling and informative stories based on data-driven insights. This shift in organizational infrastructure reflects a growing recognition of the importance of data in journalism. It highlights how automation has prompted newsrooms to adapt their internal structures to accommodate these new demands.
§ CONCLUSION
In conclusion, the impact of automated journalism on newsrooms' professional infrastructures is evident in the shift from emphasizing traditional journalism skills to data-oriented technical competencies. Journalists are now expected to possess programming skills, software knowledge, and a basic understanding of machine learning models to effectively leverage automation technologies for tasks discussed in the previous section. This transformation necessitates ongoing training and professional development initiatives within newsrooms to equip journalists with the necessary skills to thrive in automated journalism. While opportunities and challenges await journalists in the age of automated journalism, more empirical research is called for to better understand how journalists can be better equipped to tackle the problem of AI unintelligibility <cit.> and how one can reconfigure the asymmetric relationship between journalists and platforms.
Acknowledgements
We thank our editors Alessandro Nai, Max Grömping and Dominique Wirz for their suggestions that have improved the quality of this encyclopedia entry.
Note
This is a preprint of an entry in Nai, A., Grömping, M., & Wirz, D. (Eds). Elgar Encyclopedia of Political Communication. Edward Elgar Publishing.
|
http://arxiv.org/abs/2409.02236v1 | 20240903190458 | First-order transition into a charge-density wave phase | [
"A. V. Rozhkov"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Institute for Theoretical and Applied Electrodynamics, Russian
Academy of Sciences, 125412 Moscow, Russia
§ ABSTRACT
The transition into a charge-density wave phase is analyzed theoretically.
We argue that, for commensurate or nearly-commensurate charge-density wave,
the transition may become first order. Such a remarkable departure from the
more common second-order transition scenario is associated with additional
`umklapp' terms one can include in the Landau free energy expansion in the
commensurate or nearly-commensurate cases. The proposed mechanisms may be relevant for certain alloys demonstrating a first-order transition into a charge-density-wave state. The approach may be generalized to spin-density wave phases as well.
First-order transition into a charge-density wave phase
A.V. Rozhkov
September 9, 2024
=======================================================
§ INTRODUCTION
Canonical theory of the charge-density wave (CDW) thermodynamic phase
(e.g.,
Ref. gruner_book)
concludes that the transition from a disordered phase into a CDW state is
continuous (second-order). This basic theoretical expectation is indeed
confirmed by numerous experiments. For example,
Fig. 5
in review
paper gruner_review_dynamics1988
demonstrates continuous decay of the CDW order parameter to zero as a
function of increasing temperature, for three different CDW-hosting alloys
[NbSe_3, (TaSe_4)_2I, and K_0.3MoO_3].
As a more recent demonstration of the same behavior, we can mention
Fig. 3 in
Ref. cdw2017cont,
where order-parameter-versus-temperature data for
TiSe_2
are presented.
Reference cdw_fluct2008exper
examined the transition type for
TbTe_3.
Continuous decay of the order parameter, absence of hysteresis, and
critical fluctuations all point to the second-order transition in the
latter compound.
Yet for many crystals a CDW phase is separated from a disordered state by a
discontinuous (first-order) transition. Namely, in
IrTe_2
the formation of a commensurate CDW (CCDW) is accompanied by pronounced hysteresis in the heating-cooling cycle <cit.>,
the signature of a first-order transition. Another alloy demonstrating
the first-order transition between a CCDW and a disordered state is
Lu_5Ir_4Si_10,
see
Ref. Lu5Ir4Si10first_orderCDW1999exper.
Compound
Er_2Ir_3Si_5
presents a similar case but for a nearly-commensurate CDW (NC-CDW)
phase <cit.>.
Other materials for which this phenomenon was reported are
Lu_2Ir_3Si_5,
see
Refs. Lu2Ir3Si5_first_orderCDW2015exper,
Lu2Ir3Si5_cdw2006hysteresis_exper;
EuTe_4,
see
Refs. EuTe4hysteresis2022exper, EuTe4hyster2019exper.
A first-order transition into a CDW state may be explained by incorporating
the so-called `umklapp' contributions into a Landau-type model, as it was
done in
Ref. TaSe2McMillan1975theory_umklapp
in the context of the CDW state in
TaSe_2.
However, as a CDW-hosting material,
TaSe_2
is quite peculiar for it can host three non-identical order parameters
connected by C_3 rotations. As one can expect, this symmetry feature is
of crucial importance for the structure of the Landau functional. At the
same time, it makes the corresponding formalism too specialized to be
directly applicable in many relevant situations.
In this paper, we aim to expand the ideas of
Ref. TaSe2McMillan1975theory_umklapp
to a broader context. Namely, using the Landau free energy framework, we
study phase transitions into commensurate and nearly-commensurate CDW
states. We argue that, in various rather general situations, the
transition into these states may be discontinuous. Moreover, for some
conditions our theory predicts that a material can demonstrate a two-transition sequence: a continuous normal-to-CCDW transition is followed
by a first-order CCDW-CCDW transition, the latter connecting the phases
that differ only by order parameter magnitudes.
As for NC-CDW, we show that, if we incorporate lattice distortions into the
model, suitably constructed umklapp terms become symmetry-allowed. When the
lattice distortion fields are eliminated from the free energy, the resultant effective model is equivalent to the CCDW Landau free energy, and a
first-order
transition can be recovered.
Our paper is organized as follows.
Section <ref>
is dedicated to formulation of a Landau free energy function valid for
unidirectional incommensurate CDW. Various models of CCDW are introduced
and analyzed in
Sec. <ref>.
The case of NC-CDW is presented in
Sec. <ref>.
Our results are discussed in
Sec. <ref>.
§ GENERAL CONSIDERATIONS
It is common to describe the transition into a CDW state within the framework of the Landau free energy

F_0(ρ_cdw) = (a/2)|ρ_cdw|^2 + (b/4)|ρ_cdw|^4,

where the coefficients a and b satisfy the well-known conditions b > 0 and a(T) = α(T - T_CDW). Here T is temperature, T_CDW is the CDW transition temperature, and the coefficient α is positive. As for the complex order parameter ρ_cdw = |ρ_cdw| e^{iφ}, it represents the charge-density modulation. In many situations it is conveniently approximated by a single harmonic term

ρ(R) ≈ ρ_cdw e^{i k·R} + c.c. = 2|ρ_cdw| cos(k·R + φ),

where the wave vector k characterizes the CDW spatial periodicity. Following the standard prescription, one minimizes F over ρ_cdw to derive

|ρ_cdw| = θ(T_CDW - T) √(α(T_CDW - T)/b),

where θ(x) is the Heaviside step function. This formula explicitly demonstrates that the order-parameter strength |ρ_cdw| is a continuous function of T, a hallmark of a second-order transition.
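For the reader's convenience, the minimization step behind this formula can be spelled out explicitly (it follows directly from the definitions above):

```latex
\frac{\partial F_0}{\partial |\rho_{\rm cdw}|}
  = a\,|\rho_{\rm cdw}| + b\,|\rho_{\rm cdw}|^{3} = 0
  \;\;\Longrightarrow\;\;
  |\rho_{\rm cdw}|^{2} = -\frac{a}{b}
  = \frac{\alpha\,(T_{\rm CDW}-T)}{b}
  \qquad (T < T_{\rm CDW}),
```

while for T > T_CDW the only minimum is |ρ_cdw| = 0.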
Unlike the absolute value |ρ_cdw|, the order-parameter phase φ remains undetermined, which is a manifestation of the U(1) symmetry of F: the change φ → φ + δφ keeps F the same. Physically, this can be viewed as an invariance of the CDW state under an arbitrary uniform translation R → R + t, t ∈ ℝ^3. It is easy to check that, for a given t, the phase change is δφ = (k·t), or, equivalently, the order parameter transforms according to

ρ_cdw → ρ_cdw e^{i (k·t)}

under the translation t.
§ COMMENSURATE CDW
§.§ Landau free energy with `umklapp' contribution
The invariance of the CDW Landau free energy relative to arbitrary translations (<ref>) is, by itself, an excessive constraint on the model: in any crystal the translation group must be limited to lattice translations only, that is, instead of t ∈ ℝ^3, the allowed t's are

t = m_1 a_1 + m_2 a_2 + m_3 a_3,

where m_i are integers and a_i are elementary lattice vectors. The reduction of the invariance group implies that additional terms may be introduced into the Landau free energy. Below we explicitly construct these terms for commensurate CDW order.
By definition, a commensurate CDW satisfies the following conditions

(k·a_i) = 2π p_i/q_i,   i = 1, 2, 3,

where the integer p_i is co-prime with q_i for all i. Formally, of course, any measured k can be described in this manner, with arbitrarily large q_i's. However, for practical matters, a wave vector is considered to be commensurate only when all three q_i's are not too large.

For these three q_i we introduce their least common multiple

n = lcm(q_1, q_2, q_3),

referred to below as the commensuration degree. Then the monomial ρ_cdw^n is invariant under arbitrary lattice translations. To prove this claim, we start with Eq. (<ref>) and write

ρ_cdw^n → ρ_cdw^n e^{i (k·t) n},   where   (k·t) n = 2π ∑_i m_i p_i n/q_i.

Since n is a multiple of q_i for any i, one establishes that (k·t) n = 2π N, where N is an integer. Thus, ρ_cdw^n is invariant for any t described by Eq. (<ref>).
Note that, although n is introduced as the least common multiple of the three denominators q_i, in many realistic situations no significant number-theoretical calculations are required, as the commensuration degree is quite obvious from the data. For example, if (p_1/q_1, p_2/q_2, p_3/q_3) = (1/5, 0, 1/5), as in Ref. Ivashko2017, then n = 5.
Since ρ_cdw^n is invariant, we conclude that, for any complex number c_n = |c_n| e^{iγ}, the nth-degree `umklapp' contribution

F_u^(n) = -(c_n/(2n)) ρ_cdw^n + c.c. = -(|c_n|/n) |ρ_cdw|^n cos(nφ + γ)

is explicitly real and invariant under lattice translations. Consequently, the free energy

F^(n) = F_0 + F_u^(n)

can be used as a model for a commensurate CDW state.
Inclusion of F_u^(n) into the free energy shrinks the symmetry group of the Landau energy from U(1) to Z_n: the function F^(n) is no longer invariant under an arbitrary phase shift; only the discrete shifts

φ → φ + 2π m/n,   m ∈ ℤ,

do not change the free energy.

Let us now search for minima of F^(n). Minimization with respect to φ is very simple. As this variable enters the F_u^(n) term only, it is easy to demonstrate that the Landau energy is the lowest when φ = φ^*_m, where

φ^*_m = 2π m/n - γ/n,   m = 0, 1, …, n-1.

We see n minima evenly distributed over a unit circle. This arrangement resembles a clock dial; thus a common name for a model of this kind is the n-state clock model. Another frequently used designation is the Z_n model, a reference to the invariance group of F^(n).

Every φ^*_m in Eq. (<ref>) represents a particular localization of the CDW distortions relative to the underlying lattice, see Fig. <ref>. There are n such localizations; all of them are degenerate. Speaking heuristically, one can say that a CDW with nth-degree commensuration is always pinned by the lattice to one of n possible minima, as illustrated by Fig. <ref>. This pinning decreases the symmetry of the Landau free energy from U(1) to Z_n.
At any of these minima the cosine in Eq. (<ref>) is equal to unity. Thus, the Landau free energy can be re-written as the following function of a single non-negative variable |ρ_cdw|:

F̃^(n) = (a/2)|ρ_cdw|^2 + (b/4)|ρ_cdw|^4 - (|c_n|/n)|ρ_cdw|^n + …,

where the ellipsis stands for higher-order terms that might be necessary to include in order to maintain stability of the free energy, and the tilde over F implies that this free energy does not depend on φ.

For n = 2 the contribution proportional to |c_2| acts to renormalize a, effectively increasing the transition temperature. The transition remains continuous for all |c_2|. If n > 2, the contribution coming from the “umklapp" term F_u^(n) may qualitatively alter the behavior of the system near the transition point, as discussed below.
§.§ Z_3 model of CDW
The value n = 3 represents the CCDW phase whose unit cell is three times larger than the unit cell of the underlying lattice. (This type of order is schematically shown in Fig. <ref>.) Specializing Eq. (<ref>) for n = 3, one can express the free energy F^(3) as

F^(3)(|ρ_cdw|) = (a/2)|ρ_cdw|^2 - (|c_3|/3)|ρ_cdw|^3 + (b/4)|ρ_cdw|^4.

We see that this free energy is stable in the sense that, for large |ρ_cdw|, the function F^(3) grows, which guarantees that an equilibrium value of the order parameter is bounded.
For a > 0 this free energy has a |ρ_cdw| = 0 minimum that represents a (meta)stable disordered state. Additionally, for a < |c_3|^2/(4b) there is a minimum of F^(3) at

|ρ_cdw| = (1/(2b)) (|c_3| + √(|c_3|^2 - 4ab)),

see Fig. <ref>. It is easy to check that Eq. (<ref>) describes the global minimum of F^(3) when a < 2|c_3|^2/(9b).

By exploiting the commonly assumed linearization a = a(T) ≈ α(T - T_*), α > 0, where T_* is the temperature at which a(T) passes through zero, the CDW transition temperature can be expressed as

T_CDW = T_* + 2|c_3|^2/(9αb) > T_*.
We see that T_* by itself does not have any special meaning. However, in the limit |c_3| → 0 the transition temperature T_CDW approaches T_*.

At the transition, the coefficient a is not zero, but rather a = 2|c_3|^2/(9b). Substituting this value in Eq. (<ref>), one finds that |ρ_cdw| jumps from 0 to 2|c_3|/(3b), see also Fig. <ref>. Thus, we conclude that, at finite |c_3|, the transition is discontinuous. On the other hand, one must remember that at small |c_3| the transition is formally indeed first order, yet, in this regime, the discontinuity of the order parameter becomes weak and difficult to detect. This observation remains relevant for other signatures of a first-order transition.
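The size of the jump quoted above is easy to confirm numerically. The following short script (a minimal sketch with purely illustrative parameter values, not part of the original analysis) minimizes F^(3) on a grid and checks the order parameter just below and just above T_CDW:

```python
import numpy as np

alpha, b, c3, T_star = 1.0, 1.0, 0.6, 100.0   # illustrative values

def order_parameter(T, rho_max=3.0, n_grid=200001):
    """Global minimizer of F3(rho) = (a/2) rho^2 - (|c3|/3) rho^3 + (b/4) rho^4."""
    a = alpha * (T - T_star)
    rho = np.linspace(0.0, rho_max, n_grid)
    F = 0.5 * a * rho**2 - (c3 / 3.0) * rho**3 + 0.25 * b * rho**4
    return rho[np.argmin(F)]

T_cdw = T_star + 2 * c3**2 / (9 * alpha * b)   # transition temperature from the text
eps = 1e-3
print("T_CDW =", T_cdw)
print("|rho| just below T_CDW:", order_parameter(T_cdw - eps),
      " (expected ~ 2|c3|/(3b) =", 2 * c3 / (3 * b), ")")
print("|rho| just above T_CDW:", order_parameter(T_cdw + eps), " (expected 0)")
```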
§.§ Z_4 model of CDW
The n = 4 phase diagram differs qualitatively from the n = 3 situation. The n = 4 Landau free energy reads

F̃^(4) = (a/2)|ρ_cdw|^2 + (b̃/4)|ρ_cdw|^4 + (d/6)|ρ_cdw|^6,

where b̃ = b - |c_4|. In other words, |c_4| effectively renormalizes b. Since b̃ can be either positive or negative, depending on the relation between b and |c_4|, we retained here the sixth-order term to prevent uncontrollable growth of the order parameter at b̃ < 0.

When |c_4| < b, the free energy F̃^(4) describes a second-order transition that occurs at a = 0. If linearization (<ref>) is assumed, then the transition temperature coincides with T_*. At negative b̃, the transition into the CDW phase becomes first order. (Qualitatively, the behavior of F̃^(4) in this regime is very similar to the graphs of F̃^(3) in Fig. <ref>.)
For 0 < 4ad < b̃^2 the free energy has three extrema: one at zero, and two more at

|ρ_cdw| = √( (|b̃| ± √(b̃^2 - 4ad)) / (2d) ).

The minimum (maximum) corresponds to the plus (minus) sign in this formula. The transition into the ordered state takes place when the free energy at the non-trivial minimum becomes equal to zero, which is the free energy at the trivial minimum |ρ_cdw| = 0. This occurs at a = 3b̃^2/(16d) if b̃ < 0. At arbitrary sign of b̃ the transition temperature can be compactly expressed as

T_CDW = T_* + [3(b - |c_4|)^2/(16αd)] θ(|c_4| - b).

This shows that, unlike the n = 3 case, an arbitrarily weak n = 4 “umklapp" term cannot change the continuous type of the transition. Only when |c_4| exceeds b does the transition become discontinuous. The point a = 0, |c_4| = b is a tricritical point on the phase diagram.
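For the reader's convenience, the numbers quoted above follow from requiring that the nontrivial minimum of F̃^(4) become degenerate with the trivial one, i.e., from the two conditions

```latex
\tilde F^{(4)}(y)=\tfrac{a}{2}y^{2}+\tfrac{\tilde b}{4}y^{4}+\tfrac{d}{6}y^{6}=0,
\qquad
\partial_y \tilde F^{(4)} = y\,\bigl(a+\tilde b\,y^{2}+d\,y^{4}\bigr)=0,
\qquad y \equiv |\rho_{\rm cdw}| .
```

Eliminating a between the two conditions gives the jump of the order parameter and the transition point:

```latex
y^{2}=\frac{3|\tilde b|}{4d},
\qquad
a=\frac{3\tilde b^{2}}{16\,d}
\qquad (\tilde b<0).
```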
§.§ Z_5 and Z_6 models
For n = 5 and n = 6, the phase diagram acquires additional complexity. We start our analysis by writing the n = 5 Landau free energy as

F̃^(5) = (a/2)|ρ_cdw|^2 + (b/4)|ρ_cdw|^4 - (|c_5|/5)|ρ_cdw|^5 + (d/6)|ρ_cdw|^6,

where, as before, we included the O(|ρ_cdw|^6) term to provide proper growth of F̃^(5) at |ρ_cdw| → +∞.

Due to the relative complexity of the n = 5 and n = 6 cases, it is convenient to introduce normalized quantities. Namely, the dimensionless form of F̃^(5) reads

F̃^(5)/ℱ_0 = (A/2) y^2 + (1/4) y^4 - (C/5) y^5 + (1/6) y^6.

The coefficient A in this formula is

A = ad/b^2 = (αd/b^2)(T - T_*).

The other quantities are

ℱ_0 = b^3/d^2,   y = √(d/b) |ρ_cdw|,   C = |c_5|/√(bd).

Here the energy ℱ_0 sets the overall scale for F̃^(5), and y is the dimensionless order parameter.
For A < 0, the disordered state y = 0 is absolutely unstable. It is at least metastable when A > 0.
As for ordered states (stable, metastable, or unstable), they are represented by roots of the equation

P^(5)_C(y) = -A,

where P^(5)_C(y) is a family of polynomials of the variable y,

P^(5)_C(y) = y^2 (1 - Cy + y^2),

parameterized by C > 0. For small C and positive y, the polynomials are positive increasing functions, see Fig. <ref>. Thus, Eq. (<ref>) has one solution for negative A. No solution exists when A > 0. If C is fixed, this describes an order-disorder continuous phase transition at A = 0, or, equivalently, at T = T_*.
This simple picture is not applicable for C > C_hi = 4√2/3. Indeed, at C = C_hi a horizontal inflection at y = 1/√2 is formed (hence the subscript `hi'). For C > C_hi the polynomial P^(5)_C(y) is no longer monotonic as a function of y, and more than one solution becomes possible for appropriate (negative) values of A. Since P^(5)_{C_hi}(1/√2) = 1/12, these roots are realized for A > -1/12.
Multiple non-trivial roots of Eq. (<ref>) imply that a first-order transition between CDW states emerges. Note that the states separated by this transition have identical symmetries. The only difference is the magnitude of |ρ_cdw|. As one can see from Fig. <ref>, for sufficiently large C there are finite intervals of y in which the value of P^(5)_C(y) is negative. For such C, Eq. (<ref>) has two roots even for positive A > 0. When A grows, the roots approach each other, merge, and ultimately disappear, signaling a first-order transition into the disordered phase.
The resultant phase diagram is shown in Fig. <ref> (left). It features a second-order transition line reaching the first-order transition curve. The latter terminates at a critical point inside the CDW phase. This point corresponds to the horizontal inflection point of P^(5)_C. The tricritical point `T', where the two transition lines meet, can be found by solving Eq. (<ref>) simultaneously with F̃^(5)(y) = 0, both in the limit A = 0. These two equations can be satisfied when C = 5/√6, which is the horizontal coordinate of `T'.
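The coordinate C = 5/√6 can be verified in two lines. At A = 0 the two conditions F̃^(5)(y) = 0 and P^(5)_C(y) = 0 read

```latex
\tfrac14 y^{4}-\tfrac{C}{5}y^{5}+\tfrac16 y^{6}=0,
\qquad
1-Cy+y^{2}=0\;\;\Longrightarrow\;\;Cy=1+y^{2}.
```

Inserting Cy = 1 + y^2 into the first condition gives 1/4 - (1 + y^2)/5 + y^2/6 = 0, i.e., y^2 = 3/2 and hence C = (1 + y^2)/y = 5/√6.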
Depending on the value of C, the behavior of ρ_cdw = ρ_cdw(T) may vary significantly, see Fig. <ref>. If C > 5/√6, which corresponds to the area to the right of `T' on the phase diagram, the order-disorder transition is discontinuous. To the left of point `Cp' (C < 4√2/3), the transition is continuous and occurs at A = 0. In the interval 4√2/3 < C < 5/√6 the model exhibits a cascade of two transitions (a first-order CDW-CDW transition followed by a second-order CDW-disorder transition). When the coefficient C is fine-tuned to be C = 4√2/3, the order-parameter discontinuity shrinks to zero and becomes a continuous singularity, as shown in Fig. <ref>.
For n = 6 commensuration, the Landau free energy can be expressed as

F̃^(6) = (a/2)|ρ_cdw|^2 + (b/4)|ρ_cdw|^4 - (d̃/6)|ρ_cdw|^6 + (e/8)|ρ_cdw|^8,

where d̃ = |c_6| - d and d, e > 0. The normalized form of this free energy is easy to establish:

F̃^(6)/ℱ_0 = (A/2) y^2 + (1/4) y^4 - (D/6) y^6 + (1/8) y^8.

Here, under the assumption b > 0, we introduced the following set of parameters:

y = (e/b)^{1/4} |ρ_cdw|,   ℱ_0 = b^2/e,   A = a √(e/b^3),   D = d̃/√(be).
Similar to Eq. (<ref>), the ordered phases of the n = 6 model are represented by the roots of the equation

P^(6)_D(y) = -A,   where   P^(6)_D(y) = y^2 (1 - Dy^2 + y^4).

The analysis of Eq. (<ref>) can be adapted for the latter equation, and an n = 6 phase diagram can be constructed, see Fig. <ref> (right). It is clear that both diagrams in Fig. <ref> are qualitatively similar.
§ NEAR-COMMENSURATE CDW
We demonstrated in the previous section that symmetry-allowed “umklapp"
terms enhance complexity of the model's phase diagram. In particular, a
first-order transition line emerges. Let us now generalize our approach to
the case of NC-CDW.
For an NC-CDW, the vector n k does not belong to the reciprocal lattice of the host crystal for any n ∈ ℕ; however, one can find a (small) integer m and a reciprocal lattice vector b ≠ 0 such that the vector

δq = b - m k

is small in the sense that |(δq·a_i)| ≪ 1 for all i = 1, 2, 3.
Since k is not commensurate, ρ_cdw^n is not compatible with the lattice translation group for any n ∈ ℕ. Yet, an umklapp contribution associated with the NC-CDW order can emerge through the following mechanism. Note that the monomial ρ_cdw^m, where m is defined in Eq. (<ref>), transforms according to the rule

ρ_cdw^m → e^{-i (δq·a_i)} ρ_cdw^m

upon a translation on the elementary lattice vector a_i. Although “the elementary defects" e^{-i (δq·a_i)} are close to unity, the exponent oscillates for longer translations, indicating that the contribution ∝ ρ_cdw^m averages to zero upon summation over the whole sample.
Fortunately, since |δq| is small, the lattice can adjust its structure to allow the umklapp term. Imagine that the lattice, in response to the CDW presence, experiences an additional periodic distortion with the wave vector δq. Representing such a distortion by a complex quantity u, we can devise a contribution u ρ_cdw^m that is invariant under the lattice translations. (Conceptualizing u, one can think of it as a “frozen", or “condensed", phonon mode whose wave vector is δq.) To check the invariance, observe that a translation on a_i transforms u according to the rule

u → e^{i (δq·a_i)} u.

This makes the products u ρ_cdw^m and (u ρ_cdw^m)^* translation-invariant, and permissible to enter the Landau free energy.
With this in mind, we write the following Landau-type model:

F_NC(ρ_cdw, u) = F_0(ρ_cdw) + ε_δq |u|^2 + (1/√(4m)) (g_nc u ρ_cdw^m + c.c.).

Here ε_δq |u|^2 ≥ 0 is the elastic energy associated with the distortion u, the complex coefficient g_nc is a coupling constant, and the factor (4m)^{-1/2} is introduced into this formula to make the expressions below consistent with previous definitions.

Minimizing this energy over u, we obtain

u = -[g_nc^*/(√(4m) ε_δq)] (ρ_cdw^*)^m.

This relation demonstrates that in the NC-CDW phase the distortion u is always present. This relation can be checked in experiment.
Substituting the equality for u into Eq. (<ref>), one derives the reduced free energy that depends on |ρ_cdw| only:

F_NC = F_0 - (|c_2m|/(4m)) |ρ_cdw|^{2m},   where   |c_2m| = |g_nc|^2/ε_δq.

Note that this reduced Landau free energy is identical to F^(n)(|ρ_cdw|) for n = 2m. As a result, it can describe a first-order transition between the disordered and ordered phases, as we have seen above for the Z_4 and Z_6 models.
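For completeness, the elimination of u amounts to the following two steps:

```latex
\frac{\partial F_{\rm NC}}{\partial u^{*}}
 = \varepsilon_{\delta q}\,u+\frac{g_{\rm nc}^{*}}{\sqrt{4m}}\,(\rho_{\rm cdw}^{*})^{m}=0
 \;\Longrightarrow\;
 u=-\frac{g_{\rm nc}^{*}}{\sqrt{4m}\,\varepsilon_{\delta q}}\,(\rho_{\rm cdw}^{*})^{m},
```

and substituting this u back into F_NC turns the elastic and coupling terms into

```latex
\varepsilon_{\delta q}|u|^{2}
 +\frac{1}{\sqrt{4m}}\bigl(g_{\rm nc}\,u\,\rho_{\rm cdw}^{m}+{\rm c.c.}\bigr)
 =-\frac{|g_{\rm nc}|^{2}}{4m\,\varepsilon_{\delta q}}\,|\rho_{\rm cdw}|^{2m}
 \equiv-\frac{|c_{2m}|}{4m}\,|\rho_{\rm cdw}|^{2m}.
```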
§ DISCUSSION
We argued above that the transition into a commensurate or
nearly-commensurate CDW phase can be discontinuous. The mechanism that
turns an anticipated second-order transition into a first-order one relies
on the umklapp contributions to the CDW Landau free energy. The proposed
model may be viewed as an expansion of the ideas previously formulated for
TaSe_2
in
Ref. TaSe2McMillan1975theory_umklapp
to CDW phases with different types of order parameter symmetries.
The effects of the umklapp contribution on the model phase diagram depend on the commensuration degree n: the larger n is, the richer the model's phase diagram. Indeed, for n = 2 the umklapp contribution does nothing but correct the transition point, whereas when n is as large as 5 or 6, the phase diagram displays such elements as a tricritical point, a critical point, and continuous and discontinuous transition lines, see Fig. <ref>.
Moreover, the
Z_5,6
models allow for a possibility that the destruction of the order may occur
through a two-step process: lower-temperature CDW-CDW discontinuous
transition followed by higher-temperature CDW-disorder continuous
transition, as
Fig. <ref>
illustrates for
Z_5
model. Superficially, one may argue that such a cascade was already
discussed quite some time ago (see, for instance,
Fig. 1
in
Ref. TaSe2McMillan1977microscopic_theory,
or
Fig. 2
in
Ref. nakanishi1977domain).
However, there is an important difference. Indeed, in our case, the
first-order CDW-CDW transition occurs within the same commensurate phase.
This is very much unlike commensurate-incommensurate CDW lock-in
transitions of
Refs. TaSe2McMillan1975theory_umklapp,
TaSe2McMillan1977microscopic_theory,
as well as
other <cit.>
first-order transitions associated with discontinuous change of CDW wave
vector.
We did not extend our analysis beyond n = 6. Unfortunately, we were unable to identify any general principle restricting the structure of the phase diagram of large-n models. In such a situation any investigation of a large-n model becomes problematic due to the ever-increasing number of parameters one must keep in the Landau free-energy expansion to guarantee model stability. Yet, apart from these purely technical issues, the formulated analytical framework is perfectly applicable for n > 6 models.
Our argumentation can be extended to NC-CDW order parameters as well. This
is not ultimately that surprising: in a situation of small deviation
from commensurability
|δ q|
at not-too-large n a sufficiently soft hosting crystal lattice
reorganizes itself to lock-in with the CDW. This is the heuristic
understanding behind
Eq. (<ref>).
Such a model offers an alternative to a well-known lock-in
scenario <cit.>
in which the CDW adjusts its wave vector while the (infinitely rigid)
lattice does not participate at all.
At the same time, our approach to an NC-CDW phase bears clear similarity
with that of
Ref. nakanishi1977domain.
To satisfy the commensurability
condition (<ref>),
we postulated that the lattice generates a deformation with small
|δ q|,
while in
Ref. nakanishi1977domain
a commensurability condition is fulfilled by introducing additional smooth
modulations of the order parameter.
In our discussion we assumed that the system always chooses the global
minimum of the Landau free energy. Yet our models for n > 2 allow for metastable states. For example, metastable minima, both ordered
and disordered, are clearly visible in
Fig. <ref>,
which is plotted for
n=3.
Switching between two minima reveals itself as hysteresis, a common fixture
of experimental presentation of a first-order transition. Using the above
phase diagrams for experimental data analysis one must remember that our
calculations do not take hysteresis into account. In principle, hysteresis
can be captured in the framework of the Landau theory of phase transitions.
However, the description of this kind oversimplifies the physics
significantly as it ignores various non-universal mechanisms affecting
hysteretic behavior in real materials.
Additionally, we can adapt the discussed ideas to the spin-density wave (SDW) case. The SDW order parameter is a complex vector S, and (S·S) is a true complex scalar. Thus, for even n = 2k one can construct an umklapp term of the form

c_2k (S·S)^k + c.c.

that is consistent with the discrete translations, parity, and time-inversion symmetries.
Finally, let us make the following observation. A number of alloys
demonstrate the first-order transition between disordered and CDW phases.
Several papers reporting this also
commented <cit.>
that such an unusual transition type must be a consequence of “strong
coupling". Within the context of the described formalism the expectation of
strong coupling regime is quite natural: the umklapp coefficients
|c_n|
are likely to be small for larger n unless the displacements associated
with the order parameter are significant.
To conclude, in this paper, within the Landau free energy framework, we
explored effects of the order parameter commensuration on the CDW
transition properties. We demonstrated that in the case of commensurate and
nearly-commensurate CDW the anticipated second-order transition may be
replaced by the first-order transition, as indeed observed experimentally.
Under certain circumstances our model predicts a cascade of two transitions
(low-temperature CDW-CDW first-order transition is followed by
higher-temperature order-disorder second-order transition). These ideas may
be applicable to SDW phases as well.
§ ACKNOWLEDGMENTS
Author is thankful to B.V. Fine, B.Q. Lv, and Alfred Zong for illuminating
discussions.
§ REFERENCES

[gruner_book] G. Grüner, Density Waves in Solids (Addison-Wesley, Reading, 1994).

[gruner_review_dynamics1988] G. Grüner, "The dynamics of charge-density waves," Rev. Mod. Phys. 60, 1129 (1988).

[cdw2017cont] A. Kogar, G. A. de la Pena, S. Lee, Y. Fang, S. X.-L. Sun, D. B. Lioi, G. Karapetrov, K. D. Finkelstein, J. P. C. Ruff, P. Abbamonte, et al., "Observation of a Charge Density Wave Incommensuration Near the Superconducting Dome in Cu_xTiSe_2," Phys. Rev. Lett. 118, 027002 (2017).

[cdw_fluct2008exper] N. Ru, C. L. Condron, G. Y. Margulis, K. Y. Shin, J. Laverock, S. B. Dugdale, M. F. Toney, and I. R. Fisher, "Effect of chemical pressure on the charge density wave transition in rare-earth tritellurides RTe_3," Phys. Rev. B 77, 035114 (2008).

[IrTe2021hysteresis_exper] S. Pyon, K. Kudo, and M. Nohara, "Superconductivity Induced by Bond Breaking in the Triangular Lattice of IrTe_2," J. Phys. Soc. Jpn. 81, 053701 (2012).

[IrTe2014first_order_exper] M. J. Eom, K. Kim, Y. J. Jo, J. J. Yang, E. S. Choi, B. I. Min, J.-H. Park, S.-W. Cheong, and J. S. Kim, "Dimerization-Induced Fermi-Surface Reconstruction in IrTe_2," Phys. Rev. Lett. 113, 266406 (2014).

[IrTe2015first_exper] K.-T. Ko, H.-H. Lee, D.-H. Kim, J.-J. Yang, S.-W. Cheong, M. J. Eom, J. S. Kim, R. Gammag, K.-S. Kim, H.-S. Kim, et al., "Charge-ordering cascade with spin-orbit Mott dimer states in metallic iridium ditelluride," Nat. Commun. 6, 7342 (2015).

[IrTe2kim2015dft_arpes] K. Kim, S. Kim, K.-T. Ko, H. Lee, J.-H. Park, J. J. Yang, S.-W. Cheong, and B. I. Min, "Origin of First-Order-Type Electronic and Structural Transitions in IrTe_2," Phys. Rev. Lett. 114, 136401 (2015).

[Ivashko2017] O. Ivashko, L. Yang, D. Destraz, E. Martino, Y. Chen, C. Y. Guo, H. Q. Yuan, A. Pisoni, P. Matus, S. Pyon, et al., "Charge-Stripe Order and Superconductivity in Ir_1-xPt_xTe_2," Sci. Rep. 7, 17157 (2017).

[Lu5Ir4Si10first_orderCDW1999exper] B. Becker, N. G. Patil, S. Ramakrishnan, A. A. Menovsky, G. J. Nieuwenhuys, J. A. Mydosh, M. Kohgi, and K. Iwasa, "Strongly coupled charge-density wave transition in single-crystal Lu_5Ir_4Si_10," Phys. Rev. B 59, 7266 (1999).

[Er2Ir3Si5_first_order_CDW2020exper] S. Ramakrishnan, A. Schönleber, T. Rekis, N. van Well, L. Noohinejad, S. van Smaalen, M. Tolkiehn, C. Paulmann, B. Bag, A. Thamizhavel, et al., "Unusual charge density wave transition and absence of magnetic ordering in Er_2Ir_3Si_5," Phys. Rev. B 101, 060101 (2020).

[Lu2Ir3Si5_first_orderCDW2015exper] N. S. Sangeetha, A. Thamizhavel, C. V. Tomy, S. Basu, A. M. Awasthi, P. Rajak, S. Bhattacharyya, S. Ramakrishnan, and D. Pal, "Multiple charge-density-wave transitions in single-crystalline Lu_2Ir_3Si_5," Phys. Rev. B 91, 205131 (2015).

[Lu2Ir3Si5_cdw2006hysteresis_exper] Y. K. Kuo, K. M. Sivakumar, T. H. Su, and C. S. Lue, "Phase transitions in Lu_2Ir_3Si_5: An experimental investigation by transport measurements," Phys. Rev. B 74, 045115 (2006).

[EuTe4hysteresis2022exper] B. Q. Lv, A. Zong, D. Wu, A. V. Rozhkov, B. V. Fine, S.-D. Chen, M. Hashimoto, D.-H. Lu, M. Li, Y.-B. Huang, et al., "Unconventional Hysteretic Transition in a Charge Density Wave," Phys. Rev. Lett. 128, 036401 (2022).

[EuTe4hyster2019exper] D. Wu, Q. M. Liu, S. L. Chen, G. Y. Zhong, J. Su, L. Y. Shi, L. Tong, G. Xu, P. Gao, and N. L. Wang, "Layered semiconductor EuTe_4 with charge density wave order in square tellurium sheets," Phys. Rev. Mater. 3, 024002 (2019).

[TaSe2McMillan1975theory_umklapp] W. L. McMillan, "Landau theory of charge-density waves in transition-metal dichalcogenides," Phys. Rev. B 12, 1187 (1975).

[TaSe2McMillan1977microscopic_theory] W. L. McMillan, "Microscopic model of charge-density waves in 2H-TaSe_2," Phys. Rev. B 16, 643 (1977).

[nakanishi1977domain] K. Nakanishi and H. Shiba, "Domain-like incommensurate charge-density-wave states and the first-order incommensurate-commensurate transitions in layered tantalum dichalcogenides. I. 1T-polytype," J. Phys. Soc. Jpn. 43, 1839 (1977).
|
http://arxiv.org/abs/2409.03193v1 | 20240905023348 | Upper-Limb Rehabilitation with a Dual-Mode Individualized Exoskeleton Robot: A Generative-Model-Based Solution | [
"Yu Chen",
"Shu Miao",
"Jing Ye",
"Gong Chen",
"Jianghua Cheng",
"Ketao Du",
"Xiang Li"
] | cs.RO | [
"cs.RO"
] |
Chen et al.
[1] Department of Automation, Tsinghua University, China
[2] Shenzhen MileBot Robotics Co., Ltd, China
[3] Department of Rehabilitation, South China Hospital, Medical School, Shenzhen University, China
Corresponding author: Xiang Li, Department of Automation, Central Main Building, Tsinghua University, Beijing, China 100084. Email: xiangli@tsinghua.edu.cn
§ ABSTRACT
Several upper-limb exoskeleton robots have been developed for stroke rehabilitation, but their rather low level of individualized assistance typically limits their effectiveness and practicability.
Individualized assistance involves an upper-limb exoskeleton robot continuously assessing feedback from a stroke patient and then meticulously adjusting interaction forces to suit specific conditions and online changes.
This paper describes the development of a new upper-limb exoskeleton robot with a novel online generative capability that allows it to provide individualized assistance to support the rehabilitation training of stroke patients.
Specifically, the upper-limb exoskeleton robot exploits generative models to customize the fine and fit trajectory for the patient, as medical conditions, responses, and comfort feedback during training generally differ between patients.
This generative capability is integrated into the two working modes of the upper-limb exoskeleton robot: an active mirroring mode for patients who retain motor abilities on one side of the body and a passive following mode for patients who lack motor ability on both sides of the body.
In addition, the upper-limb exoskeleton robot has three other attractive features. First, it has six degrees of freedom (DoFs), namely five active DoFs and one passive DoF, to assist the shoulder and the elbow joints and cover the full range of upper-limb movement.
Second, most of its active joints are driven by series elastic actuators (SEAs) and a cable mechanism, which absorb energy and have low inertia.
These compliantly driven high DoFs provide substantial flexibility and ensure hardware safety but require an effective controller.
Thus, based on the singular perturbation approach, a model-based impedance controller is proposed to fully exploit the advantages of the hardware.
Third, the safety of the upper-limb exoskeleton robot is guaranteed by its hardware and software.
Regarding hardware, its SEAs are tolerant to impacts and have high backdrivability. Regarding software, online trajectory refinement is performed to regulate the assistance provided by the upper-limb exoskeleton robot, and an anomaly detection network is constructed to detect and relax physical conflicts between the upper-limb exoskeleton robot and the patient.
The performance of the upper-limb exoskeleton robot was illustrated in experiments involving healthy subjects and stroke patients.
Upper-Limb Rehabilitation with a Dual-Mode Individualized Exoskeleton Robot: A Generative-Model-Based Solution
Yu Chen1, Shu Miao1, Jing Ye2, Gong Chen2, Jianghua Cheng3, Ketao Du3, and Xiang Li1
September 5, 2024
==============================================================================================================
§ INTRODUCTION
Exoskeleton robots offer several advantages in stroke rehabilitation. In particular, they enable precise control of movement, consistent and repeatable therapy sessions, and collection of real-time data on patient progress <cit.>.
Furthermore, they offer high-intensity training while reducing physical burdens on therapists, and they are adaptable to the needs and recovery stages of individual patients <cit.>.
This paper focuses on rehabilitation using upper-limb exoskeleton robots, a training task that is the focus of several prototypical or commercially available products.
Among these products, the Harmony <cit.> stands out, as it is equipped with an anatomical shoulder mechanism designed to augment and facilitate the arm’s natural movements.
Additionally, ANYexo <cit.> enhances the upper limb mobility of patients and exhibits sufficient flexibility to encompass a wide array of daily activities.
Furthermore, the Armeo Power (Hocoma) <cit.> enables early-stage stroke patients to start intensive arm therapy.
However, current upper-limb exoskeleton robots may not fully achieve the main goal of upper-limb rehabilitation, i.e., recovery of the manipulability of the human body with high degrees of freedom (DoFs), as they have too few DoFs to match the movement of a healthy subject and a rather low capacity for individualization. Individualization is a key feature of an upper-limb exoskeleton robot used in rehabilitation, as it enables the provision of assistance that is customized to the condition (e.g., stroke duration and medical background) of the patient.
Such assistance helps the patient to regain movement ability that is original and natural, i.e., devoid of abnormal patterns.
Hence, insufficient individualization results in low-quality rehabilitation training.
Generally, either active mirroring or passive following training modes are applied in the rehabilitation of stroke patients, according to their medical condition and as illustrated in Figure <ref>.
Active mirroring training is used for patients who have mild impairments and an unaffected side of the body and thus retain motor abilities.
In this mode, a generative model known as the intention predictor estimates the patient’s motion intentions based on historical motion data.
Subsequently, these intentions are mirrored by an upper-limb exoskeleton robot on the affected side of the patient’s body.
This process aims to align motion intentions between the unaffected and affected sides of the body at a neural level, thereby increasing patient engagement.
Passive following training is used for patients with severely impaired motor abilities.
In this mode, the patient follows a pre-defined trajectory facilitated by an upper-limb exoskeleton robot, which provides structured guidance for movement rehabilitation.
The aforementioned modes can also be utilized together with manual training for patients with different medical conditions or in different stages of stroke. For example, the passive mode can be used at the early stage and the active mode later, once the healthy side can move naturally.
Such dual-mode rehabilitation is important for ensuring that an upper-limb exoskeleton robot can be used by multiple patients with various levels of physical ability.
As patients wear an upper-limb exoskeleton robot to carry out training procedures, ensuring patient safety in the presence of tight and continuous physical interactions is a primary concern.
Much progress has been made in this regard, in terms of both hardware and software. In terms of hardware, safety is increased by using compliant actuators such as series elastic actuators (SEAs) <cit.> to absorb impact forces during interactions. Additionally, bio-inspired cable-driven actuators <cit.> are used to transition between different actuation modes, and safety is further guaranteed by backdrivable actuators <cit.>.
In terms of software, safety regulation can be achieved through various control strategies, including impedance control <cit.>, velocity field-based control <cit.>, and data-driven ergonomic control <cit.>, all of which ensure that an upper-limb exoskeleton robot system operates safely.
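For readers unfamiliar with the first of these strategies, a minimal joint-space impedance law is sketched below. This is the generic textbook form, not the model-based controller developed in this paper; the gains, dimensions, and variable names are purely illustrative.

```python
import numpy as np

def impedance_torque(q, dq, q_des, dq_des, K, D, tau_grav):
    """Generic joint-space impedance law: render a virtual spring-damper
    around the desired trajectory while compensating gravity. K and D are
    positive-definite stiffness and damping matrices chosen per joint."""
    return K @ (q_des - q) + D @ (dq_des - dq) + tau_grav

# Illustrative 5-DoF example with diagonal gains.
K = np.diag([40.0, 40.0, 30.0, 20.0, 10.0])   # N*m/rad
D = np.diag([4.0, 4.0, 3.0, 2.0, 1.0])        # N*m*s/rad
q, dq = np.zeros(5), np.zeros(5)
q_des, dq_des = np.array([0.2, 0.1, 0.0, 0.3, 0.0]), np.zeros(5)
print(impedance_torque(q, dq, q_des, dq_des, K, D, tau_grav=np.zeros(5)))
```

Lowering K and D makes the device more compliant, while raising them enforces the reference trajectory more strictly.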
However, most hardware and software approaches are applied after a safety problem has occurred.
A better approach is to predict potential safety problems and prevent their occurrence.
Accordingly, this paper describes the development of a new upper-limb exoskeleton robot for rehabilitation training of stroke patients that is superior to previously reported upper-limb exoskeleton robots in terms of the three characteristics described below.
* Safety:
The new upper-limb exoskeleton robot contains SEAs in most of its active joints, so that the joints absorb impact forces caused by collisions and unexpected contacts during upper-limb interaction.
This design enhances the precision of torque control, thereby providing smoother assistance in rehabilitation training than previously reported upper-limb exoskeleton robots.
Moreover, an anomaly detection neural network informed by interactive feedback is implemented to assess the safety and naturalness of the movement of the upper-limb exoskeleton robot in real time.
This network enables the early detection of anomalies, allowing for their prevention or mitigation during rehabilitation tasks.
Furthermore, an online trajectory refinement module and an impedance controller are incorporated into the upper-limb exoskeleton robot to ensure safety throughout the planning and execution phases of rehabilitation.
* Effectiveness:
The upper-limb exoskeleton robot is configured with one passive joint and five active joints, endowing it with a high number of DoFs and thus enabling the workspace to accommodate a broad spectrum of daily activities.
Given the inherent uncertainty and randomness associated with human motion intentions and upper-limb movements, the system efficiently generates trajectories by exploiting a probabilistic model of motion that is explicitly considered during the sampling process, so that the generated assistance matches the nature of human intention.
The effectiveness of the upper-limb exoskeleton robot was validated through clinical trials, which demonstrated that the experimental group, which underwent passive following training, recovered motor abilities more quickly than the control group.
* Friendliness:
The active joints of the upper-limb exoskeleton robot are capable of sensing force and thus can detect interaction torque.
Therefore, when the upper-limb exoskeleton robot is operated in transparent mode for demonstration purposes, it can be maneuvered effortlessly by the patient.
Moreover, given the substantial load sustained by the shoulder joint during upper-limb movements, a direct-drive motor is incorporated into this joint in the upper-limb exoskeleton robot.
This motor delivers enhanced driving torque and increases backdrivability by eliminating nonlinear friction forces.
Furthermore, to reduce the effects of inertia during movement and thus increase comfort, a cable-driven mechanism is utilized in joints that must have a broader range of motion in space than other joints.
The key novelty of this study is its design and implementation of a generative-model-based refinement framework that is capable of generating highly individualized trajectories and ensuring safety.
Specifically, the framework operates in either of the two aforementioned distinct modes, namely active mirroring mode and passive following mode.
The active mirroring mode exhibits the newly developed key features described below.
- A novel diffusion model-based motion intention predictor, which uses historical motion data of the patient’s unaffected side to estimate the patient’s upper-limb motion intentions.
This allows for the prediction of future trajectories and establishes the mean and variance of the patient’s motion intentions.
- A preemptive tuning algorithm that ensures that a predicted trajectory remains within a safe region by exploiting the predicted distribution to mitigate potential risks.
The passive following mode exhibits the newly developed key features described below.
- A diffusion model-based anomaly detection network capable of evaluating safety and naturalness based on an anomaly score. This score is utilized to guide the online trajectory refinement and evaluation for each rehabilitation task.
- A probabilistic movement primitives (ProMPs)-based approach to trajectory generation that captures the distribution of the patient’s motion intention. In addition, based on each sampling result’s performance, this approach iteratively optimizes the assistive trajectory.
Both modes customize the assistance and generate an individualized trajectory for the patient, enabling natural and original motion patterns to be recovered by the patient.
In particular, the intention predictor and anomaly detector utilize generative models to capture patterns in the latent space of large datasets.
Moreover, the intention predictor and anomaly detector have distinct functions. The intention predictor accurately predicts the patient’s original motion intentions. This prediction serves as a self-mirrored trajectory that is finely tailored to the needs of the patient’s affected side and thus aligns closely with the patient’s motion intentions.
Simultaneously, the anomaly detector quantitatively identifies the differences between the patient’s rehabilitation movements and those of healthy individuals. These differences are used to guide the adjustment of rehabilitation exercises, thereby enhancing the individualization of the treatment.
In a previous study <cit.>, we developed an individualization framework for passive following training using a variational autoencoder (VAE)-based anomaly detector. In the current study, we expand this approach to a dual-mode upper-limb exoskeleton robot by incorporating generative model technology, i.e., by utilizing a diffusion model. We present results from a series of experiments and comparisons that confirm the safety and performance of the new upper-limb exoskeleton robot.
In addition, we present results from a clinical trial that validate its effectiveness and friendliness.
§ RELATED WORKS
This section reviews related work on upper-limb exoskeleton robots, trajectory generation, and interaction control. Unlike those that have been previously reported, the upper-limb exoskeleton robot reported in this paper has an efficient generative capability, which enables it to effectively provide individualized assistance.
§.§ Upper-Limb Exoskeleton Robots
Upper-limb rehabilitation necessitates a large range of motion. Therefore, upper-limb exoskeleton robots must possess a sufficient number of DoFs to accommodate dynamic and complex upper-limb movements.
The redundancy provided by multiple DoFs can enhance patient comfort <cit.>.
However, many upper-limb exoskeleton robots have been primarily designed for the rehabilitation of single joints, such as the shoulder <cit.>, elbow <cit.>, or wrist <cit.>.
In contrast, our newly developed upper-limb exoskeleton robot is designed to fulfill the requirements of most upper-limb rehabilitation tasks and thus features one passive joint and five active joints.
It also features advanced sensory capabilities and a lightweight mechanical design.
Table <ref> compares our upper-limb exoskeleton robot with previously reported designs.
Unlike traditional upper-limb exoskeleton robots, which are directly motor-driven, our design incorporates a cable-driven mechanism.
Thus, it has a significantly lower weight and inertia effect than traditional upper-limb exoskeleton robots <cit.>.
The cable-driven mechanism also offers the advantages of flexibility, low-backlash gearing, and backdrivable transmission <cit.>.
However, the friction generated during cable transmission introduces nonlinear characteristics that complicate joint control <cit.>.
Compared with other joints, the shoulder joint requires a higher driving torque and thus exhibits higher friction. Thus, this joint requires additional enhancements to optimize its performance.
During exoskeleton robot-supported rehabilitation, there is tight and frequent human–robot interaction and thus compliant actuation is essential.
In particular, the use of compliant actuation ensures flexibility, mechanical safety, and good interaction performance <cit.>.
As such, many upper-limb exoskeleton robots have used SEAs <cit.>.
Accordingly, we incorporate SEAs within the cable-driven mechanism of our upper-limb exoskeleton robot to improve its interaction performance.
Previously developed upper-limb exoskeleton robots have demonstrated good flexibility.
However, few have simultaneously incorporated cable-driven mechanisms and compliant actuator designs (to offer sufficient numbers of DoFs for expansive rehabilitation training) and force sensors (to offer enhanced sensory capabilities).
Furthermore, the combination of cable drive and SEA structures creates a system with high-order nonlinear dynamics, thereby presenting substantial control challenges.
§.§ Trajectory Generation
The assistance provided by an upper-limb exoskeleton robot can be represented by the assistive (i.e., desired) trajectory that it follows to interact with the stroke patient.
For the two modes of training considered in this study, the assistive trajectory is generated in different ways.
In active mirroring mode, the aim of assistance is to replicate the motion of the unaffected side of the patient’s body, and the assistive trajectory is generated as a sequence of predicted human motion intentions.
Various methods have been developed to determine human motion intentions from sensor information, such as that generated by inertial measurement units <cit.>, force-sensing resistors <cit.>, and surface electromyography <cit.>.
Additionally, Gaussian process regression was employed to estimate human motion intention from human–robot interaction data <cit.>.
Furthermore, a multi-sensor fusion-based method was proposed to adaptively update an assistance profile to enhance mirror training rehabilitation <cit.>.
Moreover, learning-based approaches that exploit neural networks’ capability to incorporate multimodal data and their excellent expressive capabilities have been successfully implemented in mirror training rehabilitation <cit.>.
However, the above-mentioned methods have primarily been designed to focus on immediate motion intentions and thus struggle to predict motion intention trends over time.
Given that the affected side of the body of a stroke patient has limited motion ability, failing to predict motion intention may result in crucial factors such as safety constraints being overlooked, thereby posing risks to the patient during rehabilitation training.
In passive tracking mode, the aim of assistance is to enable the patient to follow a pre-defined trajectory.
In this mode, the simplest method for trajectory generation involves discretizing the trajectory into multiple waypoints and then performing linear interpolation of discrete points and trajectory constraints within a planning framework <cit.>.
Trajectories may also be parameterized using polynomials and dynamic movement primitives to meet the requirements of different assistive tasks <cit.>.
Additionally, assistance can be provided through torque profiles generated in a human-in-the-loop manner <cit.> or determined via reinforcement learning <cit.>.
However, the above-mentioned methods typically optimize feasible trajectories at only the kinematic level or by using metabolic measurements, which are labor-intensive to obtain.
Thus, these methods fail to simultaneously effectively incorporate safety constraints and personalize a trajectory, despite the former being the primary concern for an exoskeleton robot, and the latter being important for ensuring that recovered motion is natural and original.
§.§ Interaction Control
Hardware development for exoskeleton robots driven by compliant actuators is well advanced.
However, controller development for such exoskeleton robots remains challenging, because their cable-driven mechanism forms a high-order system with significant nonlinearity.
To mitigate this nonlinearity, which is caused by the coupling of compliant actuators and rigid joints, several control strategies have been devised for accurate and flexible interaction control. For instance, the backstepping control method <cit.> has been devised to hierarchically deliver desired control commands, enabling position or impedance control.
Additionally, singular perturbation theory <cit.> eliminates the need for high-order motion information.
Thus, a method based on singular perturbation has been developed that utilizes the intrinsic differences between two subsystems on exoskeleton robots equipped with SEAs.
This method has been used to generate a multimodal control scheme <cit.> and for adaptive trajectory tracking <cit.>.
Moreover, a recent advancement is a learning-based method <cit.> that further refines interaction control capabilities.
Exoskeleton robots featuring cable-driven mechanisms can have lower weights than those featuring other mechanisms.
However, the substantial friction generated during cable transmission and movement may hinder effective interaction control.
Multiple methodologies have been developed to alleviate friction and other disturbances in human-robot interactions.
For instance, <cit.> employed an iterative learning control algorithm that compensates for disturbances, including friction, at both motor and joint ends by accurately fitting disturbances.
<cit.> used a pulley model to obtain an analytical expression for the friction and the tension output from a Boden cable.
This model facilitated adaptive learning of friction parameters through the system’s dynamic structure to effectively counteract the above-mentioned disturbances.
Additionally, <cit.> demonstrated that friction parameters can be obtained via nonlinear fitting techniques.
However, despite significant advancements in the management of SEA systems and the mitigation of disturbances in cable transmission, it remains challenging to integrate these capabilities to enable flexible interaction control.
Moreover, in cable-driven exoskeleton robots, disturbance signals and human–robot interaction forces often overlap.
Thus, there remains a need for a systematic consideration of all of the aforementioned aspects combined with a rigorous proof of the stability of a closed-loop system, as this would enable a full exploration of the advantages of hardware (i.e., high numbers of DoFs, and cable-driven and compliant actuation).
§ OVERALL STRUCTURE
This study examined a prototype cable-driven upper-limb exoskeleton robot that was co-developed by Tsinghua University and Shenzhen MileBot Robotics Co., Ltd. A computer-aided design model of the upper-limb exoskeleton robot is illustrated in Figure <ref>.
The upper-limb exoskeleton robot is equipped with one passive and five active joints, and allows adjustment of the upper-arm and forearm lengths within ranges of 20 and 40 cm, respectively.
The passive DoF, labeled as Joint 0, accommodates the eccentric movements of the shoulder joint and thereby eliminates the limitations of the upper-limb motion range and increases the space for patient motion training.
The active joints are numbered 1 to 5 and serve specific functions, as described below.
- Joint 1: shoulder abduction and adduction
- Joint 2: shoulder flexion and extension
- Joint 3: upper-arm internal and external rotation
- Joint 4: elbow flexion and extension
- Joint 5: forearm internal and external rotation
A block diagram of the upper-limb exoskeleton robot is shown in Figure <ref>.
Joints 1 and 2 consist of direct-drive joint modules (RJSIIT-17-RevB2 and RJSIIT-17-RevB5, respectively), each of which is outfitted with force sensors and supported by the frame. Joints 3 to 5 utilize harmonic deceleration servo motors (AK80-64) that are configured as cable-driven SEAs and equipped with torque sensors (TK17-191151) and encoders (3590S-2-104L for the internal and external rotation joints, and QY2204-SSI for the elbow joint).
Additionally, these joints incorporate two potentiometers for measuring spring compression during motion.
The detailed technical specifications of the upper-limb exoskeleton robot are provided in Table <ref>.
Motor drivers are implemented to control Joints 3–5, and together with two joint modules, facilitate communication with the main board through a controller area network.
The main board receives commands from a personal computer (PC) via a universal asynchronous receiver/transmitter.
The upper-limb exoskeleton robot employs a hierarchical control system.
At the low level, the main board executes motion control and environmental perception, enabling precise force control and real-time acquisition of sensory data.
At the high level, a Linux system on the PC operates a motion planning module and performs computationally intensive tasks, such as dynamic calculation and network inference.
All programs are integrated into Robot Operating System (ROS) <cit.> and run concurrently.
The joints that undergo significant spatial movement, namely Joints 3, 4, and 5, utilize our self-developed SEAs equipped with cable-driven mechanisms.
A working principle diagram of a SEA is provided in Figure <ref>.
Each joint motor is outfitted with two sets of pulleys, with each set corresponding to a different rotational direction.
Each end of the motor output shaft is affixed with a steel cable that is connected to a potentiometer to constitute the SEA.
The potentiometer assembly includes two fixed blocks, namely fixed block 1 and fixed block 2, and a spring component. The cable from the motor end (depicted in red) is connected via a spring to fixed block 1, while another cable (depicted in blue) connects the moving joint components to fixed block 2.
As the motor applies torque, fixed block 1 moves along a sleeve attached to fixed block 2, thereby compressing the spring and enabling the measurement of tension within the cable. Upon spring compression, the blocks form a unified structure that moves in synchrony with and in the same direction as the joint, thereby facilitating overall movement.
For safety, the upper-limb exoskeleton robot is equipped with an emergency stop button that physically disconnects power during urgent situations. Additional safety measures are incorporated at the software level, including measures that limit the range of motion, prevent self-collision, restrict joint velocity, and automatically cease operation in cases of excessive interaction force or torque.
A dynamic model of a SEA-driven upper-limb exoskeleton robot can be defined as follows <cit.>:
M( q)q̈ + C(q̇, q)q̇ + g( q) = S_1 u + S_2^𝖳 K (θ- S_2 q)+
τ_e + S_2^𝖳τ_f,
Bθ̈ + K (θ- S_2 q) = S_2 u,
where q∈^5 is the vector of joint positions; θ∈^3 is the vector of motor-rotor-shaft positions; and M( q)∈^5× 5, C(q̇, q)q̇∈^5, and g( q)∈^5 are the inertia matrix, centripetal and Coriolis torques, and gravitational torques of the robot, respectively.
The selection matrices S_1=diag(1, 1, 0, 0, 0)∈^5× 5 and S_2 = [ 0, I_3]∈^3× 5 are used to separate the cable-driven joints. In addition,
K∈^3× 3 is the stiffness matrix, B∈^3× 3 is the inertia matrix of the motor,
τ_e∈^5 is the physical interaction torque vector, τ_f∈^3 is the disturbance due to the cable transmission and the friction of the joint system, and u ∈^5 is the control torque applied to the actuator.
The dynamic model of the overall system is described by (<ref>) and (<ref>) and exhibits different time-scales.
Specifically, the SEA subsystem operates on a fast time-scale, while the upper-limb exoskeleton robot subsystem functions on a slow time-scale.
To control this system effectively, the control input is formulated according to singular perturbation theory <cit.>, as follows:
u= u_f + u_s,
where u_f is the fast time-scale control term for stabilizing the model defined by (<ref>), and u_s is the slow time-scale control term for stabilizing the model defined by (<ref>). A representative form of u_f is expressed as follows:
u_f=- S_2^𝖳 K_v(θ̇- S_2 q̇),
where K_v∈^3× 3 is a diagonal and positive-definite matrix.
Substituting (<ref>) and (<ref>) into (<ref>) yields
Bθ̈+ K(θ- S_2 q)+ K_v(θ̇- S_2 q̇)= S_2 u_s,
which can be rewritten as follows by defining τ_o= K(θ- S_2 q):
Bτ̈_o + K_vτ̇_o + K τ_o= S_2 u_s- B S_2 q̈.
By introducing K = K_1/ε ^2 and K_v = K_2/ε, with ε being a small positive parameter, (<ref>) can be written as follows:
ε^2 Bτ̈_o+ε K_2τ̇_o+ K_1τ_o= K_1( S_2 u_s- B S_2 q̈).
When ε=0, the solution of (<ref>) is τ̅_o = S_2 u_s - B S_2 q̈.
If the fast time-scale is set as γ=t/ε, then τ_o converges to τ̅_o as γ→∞, and τ̅_o remains constant with respect to γ at ε =0.
Next, we introduce a new variable η=τ_o-τ̅_o to rewrite (<ref>) on the fast time-scale, as follows:
B( d^2η/ dγ^2)+ K_2( dη/ dγ)+ K_1η = 0
,
which defines the boundary-layer system.
Substituting the solution of (<ref>) into (<ref>) affords a quasi-steady-state system that captures the slow dynamics of the overall system. These dynamics can be expressed as follows:
( M( q)+B̅)q̈+ C(q̇, q)q̇+ g( q)= u_s+τ_e + S_2^𝖳τ_f,
where B̅ = S_2^𝖳 B S_2 is the projected motor inertia matrix.
According to singular perturbation theory, the stability of the overall system is guaranteed if the boundary-layer system and the quasi-steady-state system are both exponentially stable. A stability analysis of this system is provided in the Appendix.
The overall dynamic model described by (<ref>) and (<ref>) is a high-order system, where (<ref>) represents the rigid-joint side, and (<ref>) represents the compliant actuator side. It is non-trivial to stabilize and control such a system.
An upper-limb exoskeleton robot needs to guide the patient to perform repetitive motions via close interaction to help the patient to regain motor function.
Thus, the upper-limb exoskeleton robot is controlled to track a time-varying trajectory in accordance with the following impedance model:
M_d(q̈-q̈_d)+ C_d(q̇-q̇_d)+ K_d( q- q_d)=τ_e,
where q_d∈^n is the vector of the desired trajectory, and M_d, C_d, K_d∈^n× n are the desired inertia, the desired damping, and the desired stiffness matrices, respectively, which are diagonal and positive-definite. Tracking the desired trajectory in accordance with the impedance model allows the patient to deviate from the trajectory and hence provides a certain level of compliance to improve safety.
The proposed dual-mode training framework is illustrated in Figure <ref>.
Both the anomaly detector and intention predictor are designed using trained generative models based on a dataset of motions of healthy individuals.
These models are designed to capture the motion intentions of the unaffected side of the patient and evaluate the comfort and naturalness of the current patient–robot interaction.
Individualization is facilitated by employing two methods, i.e., employing the anomaly score to guide the refinement of a desired trajectory that complies with dynamic constraints, and employing ProMPs to integrate various assistive trajectories into a personalized assistance distribution.
Additionally, the safety of the motion process is enhanced by adjusting the impedance parameters according to the anomaly score, thereby ensuring a secure and effective rehabilitation environment.
§ ONLINE TRAJECTORY REFINEMENT
This section introduces the new trajectory generation method for the upper-limb exoskeleton robot.
This method is based on generative models, and the generated trajectory can be refined online to suit the patient while maintaining safety and individualization, and hence effectiveness.
First, in both training modes (i.e., active mirroring and passive following modes), we introduce a general integrated vector x_d = [ q_d^𝖳,q̇_d^𝖳]^𝖳, which encapsulates a sequence of discretized trajectory points within the joint space, thereby parameterizing the trajectory.
The following quadratic programming problem is formulated to describe the planning process:
min_ x_d, u_d, s ∑_i=t^t+N_p[ ‖ q_d^(i) - q_r^(i)‖_ Q^2 + ‖ u_d^(i)‖_ R^2 + (s^(i))^2],
s.t. x_d^(t+1) = [ I IΔ t; 0 I ] x_d^(t) + [ 0; IΔ t ] u_d^(t)
s^(t+1)=s^(t) + [ 0; -(∂ f/∂τ_e)^𝖳 K_aΔ t ] x_d^(t)
-(∂ f/∂τ_e)^𝖳 C_a u_d^(t)Δ t
x_d ∈𝒳, u_d ∈𝒰
where N_p is the predictive horizon, Q and R are symmetric positive-definite weighting matrices, u_d is the acceleration of the desired trajectory, C_a and K_a are the varying impedance parameters to be defined later, and f(·) is a functional representation of the anomaly detection network.
In addition, the superscript indicates the time-step, and 𝒳 and 𝒰 are the sets of trajectory space constraints and acceleration constraints, respectively.
Thus, (<ref>) enables various constraints and safety measures to be included in both training modes.
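For illustration, the refinement step in (<ref>) can be posed as a standard quadratic program. The Python sketch below uses cvxpy with placeholder horizon, weights, limits, and reference trajectory, and omits the anomaly-score state s for brevity; it is a minimal sketch of the formulation, not the implementation used on the robot.

# Sketch of one online trajectory refinement step (assumed values).
import numpy as np
import cvxpy as cp

n, Np, dt = 5, 7, 0.01                      # joints, horizon, time step (assumed)
Q, R = 10.0 * np.eye(n), 0.1 * np.eye(n)    # tracking and acceleration weights (assumed)
q_min, q_max = -np.pi * np.ones(n), np.pi * np.ones(n)
u_max = 2.0 * np.ones(n)                    # acceleration limit (assumed)

q_r = np.zeros((Np + 1, n))                 # reference trajectory (from the predictor or ProMPs)
q0, dq0 = np.zeros(n), np.zeros(n)          # current state of the desired trajectory

q_d = cp.Variable((Np + 1, n))              # desired positions
dq_d = cp.Variable((Np + 1, n))             # desired velocities
u_d = cp.Variable((Np, n))                  # desired accelerations

cost, cons = 0, [q_d[0] == q0, dq_d[0] == dq0]
for i in range(Np):
    cost += cp.quad_form(q_d[i] - q_r[i], Q) + cp.quad_form(u_d[i], R)
    cons += [q_d[i + 1] == q_d[i] + dt * dq_d[i],        # double-integrator model of x_d
             dq_d[i + 1] == dq_d[i] + dt * u_d[i],
             q_d[i + 1] >= q_min, q_d[i + 1] <= q_max,   # trajectory space constraints
             cp.abs(u_d[i]) <= u_max]                    # acceleration constraints

cp.Problem(cp.Minimize(cost), cons).solve()
print(q_d.value[1])                         # next desired set-point for the impedance controller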
§.§ Active Training Mode
In cases of mild motor impairment, such as that observed in early-stage stroke, patients retain functional motion capabilities on their unaffected side.
Thus, the upper-limb exoskeleton robot must accurately estimate the motion intentions of the patient’s unaffected side to provide mirrored support for the patient’s affected side.
This mirroring process enables the affected limb to emulate the movement of the unaffected limb, thereby enhancing the recovery of motor function.
The intention estimation system inputs a series of N historical observations, defined as o_(i) = { q_h(i)^(t)∈^5 | t = -N_o,-N_o+1,⋯,0 }, ∀ i ∈{1,2,⋯, N }, where N_o is the number of past time steps, and q_h(i)^(t) is the position of the unaffected side of the body at time step t.
The system also generates a prediction of the motion intention, expressed as p_(i) = { q_h(i)^(t)∈^5 | t = 1,2,⋯, N_p}.
To estimate the potential distribution of plausible trajectories, we introduce a probabilistic predictor based on denoising diffusion probabilistic models (DDPMs) <cit.>, as DDPMs can effectively describe the uncertain nature of human motion intention.
For clarity, we omit the subscript (i) in the following subsections and refer to the past and future trajectories as o and p, respectively.
Intention Predictor: The role of the intention predictor is to produce probabilistic forecasts of future trajectories.
As such, the intention predictor is informed by an initial, ambiguous predicted trajectory p_T, which represents a noise-perturbed estimate of the future trajectory p_0 after T diffusion steps, where T is the pre-defined maximum number of diffusion steps.
The predictor is designed to infer the reverse diffusion process, articulated as the sequence ( p_T, p_T-1, ⋯, p_0).
This sequence is mathematically structured as a Markov chain that is characterized by Gaussian transition probabilities and refines the prediction by reducing the initial uncertainty.
Furthermore, the intention predictor incorporates historical observations encoded into a context vector c, which is synthesized via a neural network parametrized by ϕ.
This context vector informs the trajectory generation process.
The principle of the intention predictor is depicted in Figure <ref>.
Within this framework, the upper-limb exoskeleton robot is endowed with the ability to precisely estimate the future trajectory based on a simple sampling from the initial noisy prediction. Importantly, a future trajectory represents the patient’s motion intention and is not based on direct measurements of the patient’s unaffected limb.
Thus, this approach generates a self-mirrored trajectory that closely aligns with the patient’s original movement patterns, thereby representing a personalized trajectory.
The posteriors associated with diffusion and reverse processes are given as follows:
q( p_t| p_t-1) = 𝒩( p_t;√(1-β_t) p_t-1,β_t I),
q_ψ( p_t-1| p_t, c) = 𝒩( p_t-1;μ_ψ( p_t, c),β_t I),
where β_1,β_2,⋯,β_T is the variance schedule used to adjust the injected noise.
The adjusted variance, β̃_t, is defined as β̃_t=((1-α̅_t-1)/(1-α̅_t))β_t, where this formulation is derived by adopting the definitions α_t = 1-β_t and α̅_t = ∏_i=1^tα_i.
The objective of the predictor is to maximize the log-likelihood, denoted as 𝔼[log q_ψ( p_0)].
Given that direct computation of this metric is infeasible, a variational lower bound is utilized as a surrogate for the training loss:
ℒ_vlb(ϕ,ψ)= ∑_t=2^T D_KL(q( p_t-1| p_t) ‖ q_ψ( p_t-1| p_t, c))
-log q_ψ( p_0| p_1, c),
where D_KL(·) signifies the Kullback–Leibler divergence function.
In the proposed methodology, the conditional probability q( p_t-1| p_t) is reformulated as q( p_t-1| p_t, p_0).
This reformulation allows the following closed-form expression to be devised:
q( p_t-1| p_t, p_0)= 𝒩( p_t-1;μ_t( p_t, p_0),β_t I).
By iteratively employing parameterization techniques in the diffusion process (<ref>), the mean of the posterior distribution in (<ref>) and the reverse process (<ref>) can be articulated as follows:
μ_t( p_t, p_0)= 1/√(α_t)( p_t - β_t/√(1-α̅_t)ϵ_t),
μ_ψ( p_t, c)= 1/√(α_t)( p_t - β_t/√(1-α̅_t)ϵ_ψ( p_t, c)),
where ϵ∼𝒩( 0, I), and the loss function applied in active mirroring mode is further simplified as follows:
ℒ_a(ϕ,ψ) = 𝔼_t, p_0,ϵ[‖ϵ - ϵ_ϕ,ψ( p_t, o)‖^2].
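As a concrete illustration of this simplified loss, the PyTorch snippet below computes the noise-prediction objective for one batch; the variance schedule, tensor shapes, and the denoising network epsilon_net are assumed placeholders, not the architecture used in this work.

# Minimal sketch of the diffusion training objective (assumed schedule and shapes).
import torch

T = 100                                          # maximum number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)            # assumed variance schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def loss_step(epsilon_net, p0, obs):
    """p0: clean future trajectories (B, Np, 5); obs: histories (B, No, 5)."""
    B = p0.shape[0]
    t = torch.randint(0, T, (B,))                # random diffusion step per sample
    eps = torch.randn_like(p0)                   # injected Gaussian noise
    a_bar = alpha_bar[t].view(B, 1, 1)
    p_t = torch.sqrt(a_bar) * p0 + torch.sqrt(1.0 - a_bar) * eps   # forward diffusion of p0
    eps_hat = epsilon_net(p_t, t, obs)           # network predicts the injected noise
    return torch.mean((eps - eps_hat) ** 2)      # simplified loss L_a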
Once the reverse process is trained, multiple trajectories are sampled by utilizing the predictor to infer from the initial noise.
For simplicity and conservatism, it is assumed that the future of each joint is predicted in an independent manner.
Consequently, the entire inference and statistical process is encapsulated in the following equation:
p̂_0, Σ̂_p = f_a( o),
where p̂_0 = {q̂_h^(t)∈^5 | t = 1,2,⋯, N_p} represents the estimated motion intention, and Σ̂_p = { diag(σ̂_h^(t)) ∈^5×5 | t = 1,2,⋯, N_p} represents the standard deviation of the prediction.
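One possible way to realize (<ref>) is to draw several reverse-diffusion samples and take per-joint statistics, as sketched below; sample_reverse stands for an assumed (e.g., DDIM-style) sampling routine and is not part of the reported implementation.

# Sketch: estimate the mean intention and per-joint dispersion from repeated sampling.
import torch

def predict_intention(sample_reverse, obs, n_samples=20):
    """sample_reverse(obs) -> one denoised future trajectory of shape (Np, 5)."""
    samples = torch.stack([sample_reverse(obs) for _ in range(n_samples)])
    p_hat = samples.mean(dim=0)        # estimated motion intention \hat{p}_0
    sigma_hat = samples.std(dim=0)     # per-joint dispersion used by preemptive tuning
    return p_hat, sigma_hat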
Preemptive Tuning: Given the distribution of a predicted future trajectory, it is imperative to preemptively tune the reference trajectory to ensure compliance with established constraints.
Specifically, for each joint i, denoted by subscript, the feasible trajectory must hold the following probabilistic inequality:
𝒫(q_hi^(t)∉𝒬_i)< ε,
where 𝒬_i is the pre-defined permissible motion range [q_min,i, q_max,i ].
It is stipulated that q̂_hi^(0) = q_i, where q_i is the current position of joint i of the upper-limb exoskeleton robot.
In accordance with Cantelli’s inequality, the predicted distribution is characterized by:
𝒫(q_hi^(t)≥q̂_hi^(t) + δ_i^(t))≤σ̂_hi^(t)/σ̂_hi^(t) + (δ_i^(t))^2,
δ_i^(t) = min(q̂_hi^(t-1)-q_min,i,q_max,i-q̂_hi^(t-1)).
Now, we introduce the following tuning law for prediction:
q_ri^(t) =
q̂_hi^(t),   if σ̂_hi^(t)/(σ̂_hi^(t) + (δ_i^(t))^2) ≤ ε,
q_ri^(t-1),  otherwise.
Here, q_ri^(t) is the reference trajectory of joint i at time-step t.
It updates to the predicted motion intention q̂_hi^(t) when the movement trend is within the established boundaries.
However, if a potential boundary violation is detected, it maintains the trajectory value from the previous time-step to prevent the limit from being crossed.
The aforementioned process is also detailed in Algorithm <ref>.
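A compact sketch of this procedure, as we read Algorithm <ref>, is given below; the joint limits, the threshold ε, and the interpretation of sigma_hat as the dispersion returned by the intention predictor are assumptions made only for illustration.

# Sketch of the preemptive tuning rule (assumed limits and threshold).
import numpy as np

def preemptive_tuning(q_hat, sigma_hat, q_now, q_min, q_max, eps=0.05):
    """q_hat, sigma_hat: (Np, n) predicted intention and dispersion; q_now: current positions."""
    q_prev_hat = np.vstack([q_now, q_hat[:-1]])          # \hat{q}^{(t-1)}, with \hat{q}^{(0)} = q
    q_r = np.empty_like(q_hat)
    ref_prev = q_now.copy()
    for t in range(q_hat.shape[0]):
        delta = np.minimum(q_prev_hat[t] - q_min, q_max - q_prev_hat[t])
        risk = sigma_hat[t] / (sigma_hat[t] + delta ** 2)    # Cantelli-type bound
        q_r[t] = np.where(risk <= eps, q_hat[t], ref_prev)   # hold the previous value if risky
        ref_prev = q_r[t]
    return q_r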
The adjusted reference trajectory q_r offers an accurate estimate of the intended motions of the unaffected side of the body, guaranteeing that safety considerations are adequately addressed.
By resolving (<ref>), it is feasible to generate a trajectory for the upper-limb exoskeleton robot that is not only smooth and characterized by minimal acceleration but also ensures the safety of the desired future trajectory.
§.§ Passive Training Mode
In cases of significant motor impairment, the upper-limb exoskeleton robot is engaged in passive following training. This mode of training is essential for assisting the patient to follow a pre-defined trajectory.
Moreover, to promote rehabilitation, it is crucial to individualize the assistance in an assist-as-needed manner. The workflow of passive following training is depicted in Figure <ref>. In this mode, motion data from the upper limbs of healthy subjects are collected to facilitate the training of two distinct modules.
Anomaly Detector: To individualize assistance, a real-time criterion is needed for evaluating the assistive trajectory. We use an anomaly detector to quantify the comfort of wear and the effectiveness of rehabilitation.
This anomaly detector is based on a diffusion model architecture and identifies irregular patterns in upper-limb movements, as illustrated in Figure <ref>.
Subsequently, the anomaly detector computes a score that quantifies the deviation between the current human-robot interaction and a natural interaction condition, thereby guiding the individualization of assistance.
In this approach, sensory feedback at a given timestep i, captured through a sliding window mechanism, is fed into the anomaly detector, denoted as 𝐱^(i)∈^L_sN_c, where L_s is the width of the sliding window, and N_c is the number of data channels.
As mentioned, the anomaly detector is based on a diffusion model, wherein the reverse diffusion process of duration T^p is delineated as the sequence (𝐱_T^p, 𝐱_T^p-1, ⋯, 𝐱_0).
The associated diffusion and reverse diffusion processes are defined as follows:
q(𝐱_t|𝐱_t-1) = 𝒩(𝐱_t;√(1-β_t^p)𝐱_t-1,β_t^p I),
q_Ψ(𝐱_t-1|𝐱_t) = 𝒩(𝐱_t-1;μ_Ψ(𝐱_t),β_t^p I),
where β_1^p,β_2^p,⋯,β_T^p^p are variance schedules utilized to modulate the level of noise injected during the process.
The adjusted variance, β̃_t^p, is calculated using the formula β̃_t^p=((1-α̅^p_t-1)/(1-α̅_t^p))β_t^p, by exploiting the definitions α_t^p = 1-β_t^p and α̅_t^p = ∏_i=1^tα_i^p.
In alignment with the concept outlined in the intention predictor, the loss function used to train the diffusion model in passive following mode is specified as follows
ℒ_p(Ψ) = 𝔼_t,𝐱_0,ϵ[‖ϵ - ϵ_Ψ(𝐱_t)‖^2].
Once the reverse process has been effectively learned from the dataset comprising upper-limb movements, the anomaly detector proficiently filters noise from contaminated sensory data to yield a clarified output.
This capability facilitates the generation of a refined sensory input through the anomaly detector.
This input is employed to compute the anomaly score, as delineated in Algorithm <ref>, where ϵ_p, z_p∼𝒩( 0, I) and ν∈ [1, T^p] is a constant parameter.
For the sake of brevity, the methodology for calculating the anomaly score is encapsulated by the following function:
s = f( q, q̇, θ, θ̇,τ_e).
The anomaly detector integrates diffusion models and thus is adept at capturing the inherent spatiotemporal patterns and stochastic motion tendencies of upper-limb movements through analysis of sensor data.
Consequently, anomaly scores can be computed in real time for human–robot interactions.
These scores serve as indicators of the comfort levels and the naturalness of the assistance provided by the upper-limb exoskeleton robot.
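One plausible reading of the scoring procedure is sketched below: the sensory window is partially noised to step ν, denoised with the learned model, and scored by its reconstruction error. The interface eps_net and the one-shot reconstruction are assumed simplifications of Algorithm <ref>, not the exact routine used in this work.

# Sketch of a diffusion-based anomaly score (assumed interface).
import torch

def anomaly_score(eps_net, x0, alpha_bar, nu=60):
    """x0: flattened sensor window (L_s * N_c,); eps_net(x_t, t) predicts the injected noise."""
    a_bar = alpha_bar[nu]
    eps = torch.randn_like(x0)
    x_nu = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps           # diffuse to step nu
    eps_hat = eps_net(x_nu.unsqueeze(0), torch.tensor([nu])).squeeze(0)
    x_rec = (x_nu - torch.sqrt(1.0 - a_bar) * eps_hat) / torch.sqrt(a_bar)  # one-shot x0 estimate
    return torch.norm(x_rec - x0).item()                                    # larger = more anomalous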
Reference Generation:
To customize assistance during passive following training, historical upper-limb trajectories, coupled with online trajectory refinement as outlined in (<ref>) are integrated to develop a probabilistic model.
In this context, ProMPs are employed to encode a set of trajectories into a probabilistic framework <cit.>, which is capable of generating similar references through sampling.
The application of ProMPs for trajectory sampling is particularly suited for analyzing the repetitive movements encountered in passive following training.
This suitability is attributable to the probabilistic model’s effective accommodation of sensor noise, human uncertainty, and individual biases.
To implement the ProMPs, we express the trajectory by means of the weight vector ω∈^Dn×1, where D is the number of basis functions, and n is the number of active joints, such that
y_t = [ q_1,t^𝖳 ⋯ q_n,t^𝖳 ]^𝖳 = [ Φ_t; Φ̇_t ]ω+ϵ_y,
q_i,t = [ q_i,t  q̇_i,t ]^𝖳,
p(τ_y|ω) = ∏_t𝒩(y_t|Φ_t ω, Σ_y),
where q_i,t∈^2 represents the composite vector of the i^th joint at time step t, ϵ_y∼𝒩( 0,Σ_y) represents zero-mean i.i.d. Gaussian noise, τ_y is the trajectory over the demonstration, and Φ_t ∈^n× Dn, chosen as a Gaussian form, is the time-variant basis matrix.
Given the assumption that ω follows a normal distribution, ω∼𝒩(ω|μ_ω^(k),Σ_ω^(k)), a new trajectory at time step t can be modeled as follows:
p( y_t;μ_ω^(k),Σ_ω^(k)) = ∫𝒩(y_t|Φ_t ω, Σ_y)𝒩(ω|μ_ω^(k),Σ_ω^(k))dω.
Therefore, the reference trajectory is given as follows:
q_r(t) = [ q_1,t, ⋯, q_n,t ].
To facilitate the generation of personalized assistance, the following cost function is introduced to evaluate the performance of the provided reference trajectory:
𝒮( q_r) = ∫_T_r{‖ q_d( q_r) - q‖_ Q^2+s^2}dt,
where T_r signifies the duration of the assistance, and q_d denotes the desired trajectory obtained from online refinement via (<ref>).
Given that each sampled trajectory can be evaluated by its cost, it becomes feasible to attribute an information-theoretic weight to them, reflecting the performance across all k trajectories <cit.>:
w^(i) = 1/η_kexp(-1/λ_p𝒮( q_r^(i))),
where q_r^(i) is the reference trajectory from the i^th sampling, η_k is a normalization constant, and λ_p is a small positive constant.
A detailed analysis of the adopted weight setting is included in the Appendix.
Considering the performance of the trajectory, the parameters μ_ω^(k) and Σ_ω^(k) are deduced from the k collected trajectories, as follows:
ω^(i) = (Φ^𝖳Φ)^-1Φ^𝖳 Y^(i),
μ_ω^(k) = ∑_i w^(i)ω^(i),
Σ_ω^(k) = ∑_i w^(i)(ω^(i) - μ_ω^(k))(ω^(i) - μ_ω^(k))^𝖳,
where Φ∈^N_r n× Dn is a matrix comprised of block diagonal matrices Φ_t, stacked vertically in accordance with sampling number N_r, and Y^(i)∈^N_r n corresponds to the i^th gathered trajectory.
This process facilitates the construction of the probabilistic model.
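For illustration, the weighted ProMP update can be written compactly as below; the basis matrix Phi, the cost values, and lam are assumed inputs, and least squares is used in place of the explicit normal-equation form given above.

# Sketch of the weighted ProMP update (assumed dimensions).
import numpy as np

def promp_update(Phi, Y_list, costs, lam=0.1):
    """Phi: (Nr*n, D*n) stacked basis matrix; Y_list: k trajectories, each (Nr*n,);
       costs: performance S(q_r) of each trajectory (lower is better)."""
    w = np.exp(-np.asarray(costs) / lam)
    w /= w.sum()                                               # information-theoretic weights
    W = np.stack([np.linalg.lstsq(Phi, Y, rcond=None)[0] for Y in Y_list])  # omega^{(i)}
    mu = w @ W                                                 # weighted mean of weight vectors
    diff = W - mu
    Sigma = (w[:, None, None] * np.einsum('ij,ik->ijk', diff, diff)).sum(axis=0)
    return mu, Sigma                                           # updated ProMP distribution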
The adjusted trajectory is integrated into the upper-limb exoskeleton robot to assist the patient.
Subsequently, the patient’s actual movement during the training sessions is captured and utilized to iteratively refine the probabilistic model (<ref>).
Specifically,
μ_ω^(k)←μ_ω^(k+1) and Σ_ω^(k)←Σ_ω^(k+1), which is then used
to facilitate the planning of subsequent trajectories, as illustrated in Algorithm <ref>.
N_s is the number of times that free exploration is performed based on the pre-collected demonstrations.
The refinement of assistance is governed by the anomaly score, reflecting prior healthy movement behavior.
This score serves as an indicator of movement comfort and naturalness during rehabilitation, ensuring that the trajectory adjustments intrinsically enhance the quality of assistance.
The optimal assistance distribution is identified via a coarse-to-fine approach.
Initially, the sample distribution is established based on pre-defined demonstrations, which primarily guide the exploratory phase using the initial samples.
Subsequent iterations improve the assistance distribution by progressively narrowing the sampling space and utilizing previously explored optimal trajectory values.
The probability model focuses on the best-performing trajectories to provide optimal assistance, which effectively addresses the various uncertainties that the patient may encounter during the task.
Incorporation of the patient’s motion data ensures that subsequent trajectories become increasingly individualized and thus align increasingly more with the patient’s medical needs.
Hence, this method enables the upper-limb exoskeleton robot to exploit online interactions for the dual purpose of enhancing interaction safety and improving the efficacy of passive following training.
§ INTERACTION CONTROL
The joints below the shoulder joints are cable-driven, which decreases the inertia associated with movement and thus increases comfort.
However, the cable transmission in these joints introduces substantial friction, which can hinder joint movement and therefore must be compensated for.
To achieve this, the upper-limb exoskeleton robot first performs movements in the absence of the patient, where τ_e= 0. In addition, the disturbance torque can be parameterized as follows <cit.>:
τ_f = ( a_f + b_f ⊙ e^- c_f ⊙q̇ + d_f⊙q̇)⊙sgn(q̇)
≈ (a̅_f + b̅_f⊙q̇ + c̅_f⊙q̇⊙q̇)⊙sgn(q̇)= Y(q̇)ζ,
where a_f, b_f, c_f, d_f are the unknown parameters, a̅_f, b̅_f, c̅_f are derived from the Taylor expansion as simplifications for the model,
⊙ denotes the element-wise (Hadamard) product, sgn(·) represents a sign function, Y(·) represents a regressor matrix, and ζ represents the vector of model parameters.
The approximation presented in (<ref>) is reasonable because the velocities of the joints of the upper-limb exoskeleton robot remain rather low during a given rehabilitation process.
As the upper-limb exoskeleton robot is equipped with force sensors on all of its active joints, friction can be directly measured and recorded in the absence of a patient.
Together with the recorded joint velocities, the parameters of the friction model are learned via polynomial fitting. The estimated friction is represented as follows:
τ̂_f = Y(q̇)ζ̂.
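As an example of this identification step, the simplified model in (<ref>) can be fitted per joint by ordinary least squares on the logged velocity and torque data; the sketch below assumes scalar (per-joint) data and is not the exact fitting routine used here.

# Sketch: per-joint least-squares fit of the simplified friction model.
import numpy as np

def fit_friction(dq, tau_f):
    """dq, tau_f: (N,) logged joint velocity and measured disturbance torque (no patient)."""
    s = np.sign(dq)
    Y = np.column_stack([s, dq * s, dq ** 2 * s])        # regressor Y(dq) of the simplified model
    zeta, *_ = np.linalg.lstsq(Y, tau_f, rcond=None)     # [a_bar, b_bar, c_bar]
    return zeta

def friction_estimate(dq, zeta):
    return (zeta[0] + zeta[1] * dq + zeta[2] * dq ** 2) * np.sign(dq)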
Once the friction is estimated, a variable impedance model is proposed. This model must be capable of identifying the human–robot interaction condition and addressing the conflict during rehabilitation.
These capabilities enable the model to regulate the action of the upper-limb exoskeleton robot.
To regulate the impedance model, a weighting function <cit.> is introduced to consider the anomaly score. This function is mathematically defined as follows:
w(s) = λ_1 tanh(-s/χ_1 + χ_2) +λ_2,
where λ_1 and λ_2 are positive constants that determine the range and median of the weighting function, respectively,
χ_1 is a constant that normalizes the anomaly score into a specified small range, and χ_2 is the offset of the weighting function from the origin of the coordinates along the positive horizontal axis.
Based on this function, the desired impedance model is redefined as follows:
C_d(q̇-q̇_d)+ K_d( q- q_d)=1/w(s)τ_e.
Multiplying both sides of (<ref>) by w(s) yields
C_a(t)(q̇-q̇_d)+ K_a(t)( q- q_d)=τ_e,
where C_a(t)=w(s) C_d and K_a(t)=w(s) K_d are the time-varying apparent impedance parameters. These parameters are utilized to deduce the relationship between the desired trajectory and the interaction torque, as outlined in (<ref>).
The function of the mechanism is such that an increase in the anomaly score leads to a decrease in w(s), thereby reducing the magnitude of the impedance parameters.
This reduction makes the upper-limb exoskeleton robot respond more passively to any detected conflicts.
Conversely, when the anomaly score is low, the impedance parameters revert to their original values, thereby maintaining the level of assistance provided by the upper-limb exoskeleton robot.
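A minimal sketch of this anomaly-driven impedance scaling is given below; the parameter values are the ones reported later in the experiments and are used here only as an example configuration.

# Sketch of the anomaly-driven impedance scaling.
import numpy as np

lam1, lam2, chi1, chi2 = 0.5, 1.5, 0.04, 8.75            # example values (see the experiments)

def weighting(s):
    return lam1 * np.tanh(-s / chi1 + chi2) + lam2       # high anomaly score -> smaller weight

def apparent_impedance(s, C_d, K_d):
    w = weighting(s)
    return w * C_d, w * K_d                              # time-varying C_a(t), K_a(t)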
Subsequently, an impedance vector is introduced, as follows
z =q̇-q̇_r
=q̇-q̇_d+ C_d^-1 K_d( q- q_d)-1/w(s) C_d^-1τ_e,
where
q̇_r=q̇_d- C_d^-1 K_d( q- q_d)+1/w(s) C_d^-1τ_e
is a reference vector. According to (<ref>), the convergence of z→ 0 implies the realization of the desired impedance model (<ref>).
The overall control input is designed as defined in (<ref>), with the fast time-scale control term defined as in (<ref>).
Next, the slow time-scale control term is established by using the estimated friction τ̂_f to stabilize the dynamics expressed in (<ref>) and achieve the desired impedance model, as follows:
u_s= - K_z z- S_2^𝖳τ̂_f-τ_e-k_g·sgn( z)
+( M( q)+B̅)q̈_r+ C(q̇, q)q̇_r
+ g( q),
where sgn(·) is the sign function and is defined as follows:
sgn (z) =
1,   z > 0,
0,   z = 0,
-1,  z < 0,
where k_g is a positive constant, and
K_z∈^n× n is a diagonal and positive-definite matrix.
The proposed variable impedance controller, as delineated in (<ref>), (<ref>), and (<ref>), can be demonstrated to be exponentially stable, as shown by the stability analysis provided in the Appendix.
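To make the structure of the controller explicit, the sketch below assembles the reference velocity, the slow time-scale term, and the fast time-scale term per control cycle. The dynamics terms M, C, g, and the projected motor inertia Bbar are assumed to come from a dynamics library, the reference acceleration is assumed to be obtained by numerical differentiation of the reference velocity, and all gains are placeholders rather than the values used on the robot.

# Per-cycle sketch of the composite controller u = u_f + u_s (assumed inputs).
import numpy as np

def reference_velocity(dq_d, q, q_d, Cd_inv, Kd, tau_e, w):
    # dot{q}_r = dot{q}_d - C_d^{-1} K_d (q - q_d) + (1/w(s)) C_d^{-1} tau_e
    return dq_d - Cd_inv @ (Kd @ (q - q_d)) + (1.0 / w) * (Cd_inv @ tau_e)

def slow_term(M, Bbar, C, g, S2, tau_f_hat, tau_e, dq, dq_r, ddq_r, Kz, kg):
    z = dq - dq_r                                            # impedance vector
    return (-Kz @ z - S2.T @ tau_f_hat - tau_e - kg * np.sign(z)
            + (M + Bbar) @ ddq_r + C @ dq_r + g)             # u_s

def fast_term(S2, Kv, dtheta, dq):
    return -S2.T @ (Kv @ (dtheta - S2 @ dq))                 # u_f

# total command applied to the actuators: u = fast_term(...) + slow_term(...)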
§ EXPERIMENTS
The proposed dual-mode trajectory refinement method was implemented within the upper-limb exoskeleton robot to assess the effectiveness of both rehabilitation modes.
Figure <ref> depicts the experimental configuration, in which the main board controlled the impedance of the upper-limb exoskeleton robot.
This board was interfaced via a serial port connection with a PC, which was outfitted with an Intel i5-13490F CPU and an RTX 4060Ti graphics card.
The PC executed the anomaly detection module, calculated the reference trajectory q_r, and dynamically updated the desired trajectory q_d online.
Subjects performed rehabilitation training in passive following mode or active mirroring mode.
In the latter mode, the subject wore a brace equipped with markers on the unaffected side of the body to facilitate engagement in rehabilitation exercises.
Optical motion capture equipment (Nokov) was used to accurately capture motion intentions.
Implementation of the variable impedance controller required knowledge of the dynamic model.
Thus, the dynamic parameters were computed analytically in real time using the open-source Orocos Kinematics and Dynamics Library (<https://www.orocos.org/wiki/orocos/kdl-wiki.html>).
In this computation, the upper-limb exoskeleton robot is deconstructed into a sequence of links and joints to formulate a model defining its physical configuration, encompassing characteristics such as the length, mass, and inertia of each link.
Subsequently, the forward dynamics are derived using Newton–Euler equations based on this model.
The intention predictor and the anomaly detection module required motion data, which was collected from the upper-limb exoskeleton robot during operation by healthy subjects.
To facilitate free and natural movement, the upper-limb exoskeleton robot was set to operate in a transparent mode during the data collection phase <cit.>.
Specifically, the controller was designed as follows <cit.>:
u_0 = ( M( q)+B̅)q̈_0+ C(q̇, q)q̇
+ g( q)- S_2^𝖳τ̂_f- τ_e + u_f,
q̈_0 = 1/γ_0 ( M( q)+B̅)^-1τ_e,
where γ_0 represents a parameter controlling the magnitude of virtual mass, and q̈_0 denotes the desired acceleration. In this transparent mode, healthy subjects or patients were able to maneuver the upper-limb exoskeleton robot effortlessly, without experiencing significant discomfort.
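A compact sketch of this transparent-mode command is given below; the dynamics terms and γ_0 are placeholders, and u_f denotes the fast time-scale term defined earlier.

# Sketch of the transparent-mode command (assumed inputs).
import numpy as np

def transparent_control(M, Bbar, C, g, S2, tau_f_hat, tau_e, dq, u_f, gamma0=2.0):
    Mtot = M + Bbar
    ddq0 = (1.0 / gamma0) * np.linalg.solve(Mtot, tau_e)     # desired acceleration from tau_e
    return Mtot @ ddq0 + C @ dq + g - S2.T @ tau_f_hat - tau_e + u_f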
The next section presents the results of five experiments, which are described below.
-Intention Predictor:
This experiment aimed to validate the efficacy and accuracy of the proposed intention predictor. A diverse set of intention prediction methods were trained using the collected data, and then a comparative analysis was performed to demonstrate the superiority of the proposed intention predictor.
-Anomaly Detection:
This experiment aimed to evaluate the performance of the anomaly detection network.
Its detection accuracy was demonstrated in various simulated anomaly scenarios, including movements outside the normal range, stroke-induced convulsions, and human–robot interaction conflicts.
Furthermore, comparative studies were conducted to illustrate that the proposed anomaly detector, which is based on a diffusion model, exhibits detection accuracy that is significantly better than that exhibited by other anomaly detection methods.
-Interaction Control:
This experiment aimed to assess the dynamic capabilities of the system and its ability to reject disturbances.
Thus, a trajectory tracking task was conducted. In addition, the efficacy of the proposed variable impedance controller was validated in a scenario involving anomalies.
-Active Mirroring Training:
This experiment aimed to assess the effectiveness of the online trajectory refinement.
Thus, ablation studies were conducted. In addition, how the anomaly score influences the trajectories generated by this refinement process was examined.
Furthermore, an active mirroring mode was implemented, and the motion capture system was used to verify that the proposed method effectively constrains assistive trajectories and maintains safety throughout a rehabilitation process.
-Passive Following Training:
This experiment aimed to validate the improvements in rehabilitation facilitated by passive following training.
An ablation study was conducted to demonstrate that the online trajectory refinement significantly enhances movement naturalness and task performance throughout the training process.
Moreover, a clinical trial was performed with stroke patients to obtain evidence that this rehabilitation framework significantly aids in the recovery of motor functions.
§ RESULTS
Two able-bodied participants with no prior experience with upper-limb exoskeleton robots were recruited for motion data collection.
Table <ref> presents the motion range of the upper-limb exoskeleton robot during data collection. This motion range covers the main spatial areas of upper-limb daily activities.
Table <ref> presents an overview of the participants’ statistical attributes.
The participants signed a written informed consent form prior to the experimental sessions.
Next, the participants underwent a 6-minute pre-training phase to familiarize themselves with moving while wearing the upper-limb exoskeleton robot.
During the data collection phase, no specific movement guidelines were imposed.
Therefore, the participants were permitted to move the upper-limb exoskeleton robot at their preferred speed, i.e., rapidly or slowly, or to maintain it in a stationary position.
The collected data were compiled into a dataset that was utilized for training the intention predictor and anomaly detection modules.
§.§ Intention Predictor
To achieve real-time estimation of human motion intention, the collected motion data were utilized to train the intention predictor.
To enhance the generalizability of the trained model, we partitioned the collected motion data into training, testing, and validation sets in an 8:2:1 ratio during the training process.
We employed Trajectron++ <cit.> as the backbone for extracting trajectory features and selected a transformer architecture to manage the diffusion process.
The parameters for the intention predictor were set as follows: N_o = 5, N_p = 7, and T = 100.
Denoising diffusion implicit models (DDIMs) <cit.> were employed in the inference process.
Intention prediction relies on historical observation data, and the predicted trajectory length was rather short.
Therefore, we selected forward integration and a convolutional neural network–long short-term memory (CNN-LSTM)-based neural network that previously demonstrated excellent performance in estimating robot joint behavior <cit.> as the baselines for the experiments.
Furthermore, we adopted final displacement error (FDE), average displacement error (ADE), mean absolute error (MAE), and root-mean-square error (RMSE) as the evaluation metrics to measure the accuracy of the predictions.
FDE and ADE were computed on a two-dimensional plane defined by Joint 2 and Joint 4, which possess the largest range of motion and exhibit the most frequent movements.
MAE and RMSE were averaged across all active joints. The predictions for the validation set are presented in Table <ref>.
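For reference, the four metrics can be computed as in the short sketch below; the Joint 2 and Joint 4 indices and the array shapes are assumptions made for illustration.

# Sketch of the evaluation metrics (ADE/FDE on the Joint 2-Joint 4 plane, MAE/RMSE over joints).
import numpy as np

def ade_fde(pred, true, ax=(1, 3)):
    """pred, true: (Np, n) joint trajectories; ax: assumed indices of Joint 2 and Joint 4."""
    d = np.linalg.norm(pred[:, list(ax)] - true[:, list(ax)], axis=1)
    return d.mean(), d[-1]                                   # ADE, FDE

def mae_rmse(pred, true):
    e = pred - true
    return np.abs(e).mean(), np.sqrt((e ** 2).mean())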
These results suggest that in the considered real-time intention prediction task, the trajectory prediction performance of the CNN-LSTM-network-based method was on a par with that of the forward integration method.
However, our intention predictor based on a diffusion model significantly outperformed the forward integration method.
Specifically, our intention predictor showed a 43.9% lower FDE, a 40.2% lower ADE, a 40.0% lower MAE, and a 39.4% lower RMSE than the forward integration method.
A comparative analysis was conducted to validate the applicability of our intention predictor across different movement conditions in four tasks.
Two tasks were based on motion data that were collected while the participants were wearing the upper-limb exoskeleton robot, including data on free movements similar to the training data and arm-swinging sinusoidal movements.
The other two tasks were based on simulated smooth trajectories, namely circular and lemniscate trajectories.
Not all tested trajectories were included in the pre-collected dataset employed in the training phase.
Table <ref> reports the performance of the intention predictor in the four tasks, averaged across five active joints.
It can be seen that our intention predictor exhibits the best performance in all metrics.
Figure <ref> illustrates trajectories from steps 1, 4, and 7 of trajectory prediction, together with the actual reference trajectory.
For clarity, we present the experimental results in a two-dimensional plane in which Joint 2 is the x-axis and Joint 4 is the y-axis.
Moreover, the right side of each subplot shows an expanded view of the trajectory prediction details within the black box.
It can be seen that the trajectory prediction at step 1 aligns well with the actual trajectory, while the trajectory predictions at steps 4 and 7 deviate slightly from the real trajectory.
These results demonstrate that as the number of prediction steps increases, the prediction accuracy slightly decreases and the corresponding prediction variance increases.
This trend is explicitly considered in the preemptive tuning algorithm.
Overall, the above-mentioned results confirm that our intention predictor reliably forecasts upper-limb joint movements and certain regular motions by capturing the dynamic trends of trajectories based on historical observations.
The free-move task was directly related to human movements in practical applications, as it simulated real-life scenarios. In contrast, the sinusoidal, circular, and lemniscate tasks were designed to test the generalization performance of our intention predictor in contexts other than rehabilitation training.
§.§ Anomaly Detection
The collected motion data, inclusive of interaction information, were used to train our anomaly detector, which operates in real time to evaluate the safety and naturalness of motion.
Therefore, these data served as a repository of safe and natural interaction feedback, enabling the anomaly detector to discern their latent relationship and subsequently identify abnormal interactions during human–robot interaction.
The parameters were L_s = 100 and N_c = 21, denoting the history of joint and motor motion data, in addition to interaction torque.
The diffusion model parameters during both training and inference phases were T^p=100 and ν = 60.
DDIMs were also employed during inference.
During the experiment, a participant was required to move while wearing the upper-limb exoskeleton robot to simulate various anomalies.
Three abnormal scenarios were considered: an excessive movement scenario (involving joint movements beyond the normal range), a balance deviation scenario (involving deviations from the relative balance position), and a simulated stroke tremors scenario.
In the excessive movement scenario, the participant initially maintained the upper-limb exoskeleton robot within the normal movement range and then lowered the arm to simulate the anomaly.
As the arm was gradually lowered, there was a decrease in the shoulder joint angle that progressively crossed the motion boundary and thus the anomaly level increased gradually.
Subsequently, the arm was raised back to the normal motion range.
The results are presented in Figure <ref> and reveal that as the joint positions gradually approached and then crossed the boundary, the anomaly score increased.
These results demonstrate that our anomaly detector effectively detected deviations from the normal motion range (i.e., the motion range of the collected data).
In the simulated stroke tremors scenario, the participant initially maintained the upper-limb exoskeleton robot in the rest phase.
Subsequently, the participant shook the entire arm of the upper-limb exoskeleton robot to simulate tremors.
The corresponding anomaly score and joint positions are depicted in Figure <ref>.
It is evident that during the simulated stroke tremors, the anomaly score generally increased, and while the tremors persisted, it remained rather high.
In the balance deviation scenario, the participant initially maintained the upper-limb exoskeleton robot in a static equilibrium position that represented the balance state in which the upper-limb exoskeleton robot offered assistance.
The participant was then instructed to manipulate the upper-limb exoskeleton robot upward and downward to simulate misalignment during assistance.
The results are presented in Figure <ref> and reveal that there was a marked increase in anomaly scores whenever deviations occurred, regardless of the direction (i.e., regardless of whether the patient’s movement trajectory was above or below the predetermined trajectory).
This increase in scores indicates that the above-mentioned anomalies were detected.
The performance of our anomaly detector was experimentally assessed using a VAE-based anomaly detector <cit.> as a baseline.
Specifically, detection performance in the stroke tremor scenario was evaluated using a receiver operating characteristic (ROC) curve, which illustrates the ability of a classification model to differentiate between classes.
The areas under the ROC curve for the considered models are presented in Table <ref>, and the calculated anomaly scores are depicted in Figure <ref>.
In the figure, it can be seen that the scores generated by the proposed anomaly detector markedly increased as the anomaly occurred. In contrast, the scores generated by the VAE-based anomaly detector failed to return to a normal level once the anomaly ceased.
These results indicate that compared with the VAE-based anomaly detector, our anomaly detector is more adaptive in classifying anomalies in different joint configurations, owing to the superior generative performance of the diffusion model.
§.§ Interaction Control
During rehabilitation training, the upper-limb exoskeleton robot is required to regulate human–robot interaction according to the desired impedance model while compensating for friction.
To this end, the upper-limb exoskeleton robot was operated to move slowly in the absence of a participant and friction was quantified.
That is, the difference between the readings from the potentiometers at the motor output and the measurements from the torque sensor at the joint end was calculated. In addition, joint velocities were recorded.
Subsequently, polynomial fitting was applied based on the simplified friction model (<ref>) to afford the estimated values for friction parameters presented in Table <ref>.
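As an illustration of this identification step, the least-squares fit below assumes a Coulomb-plus-viscous form τ_f ≈ f_c sgn(q̇) + f_v q̇ for a single joint, since the simplified friction model (<ref>) is not reproduced here; the recorded torque differences and joint velocities serve as the inputs.

import numpy as np

def fit_friction(q_dot, tau_f):
    # Least-squares fit of tau_f ≈ f_c * sign(q_dot) + f_v * q_dot for one joint.
    A = np.column_stack([np.sign(q_dot), q_dot])   # regressor matrix
    (f_c, f_v), *_ = np.linalg.lstsq(A, tau_f, rcond=None)
    return f_c, f_v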
Next, we implemented our impedance controller in the upper-limb exoskeleton robot, devoid of a patient, to track a pre-defined sinusoidal trajectory involving all active joints.
This experiment aimed to validate the accuracy of the friction compensation and evaluate the dynamic performance of the controller.
The impedance parameters were set as follows: C_d=10 I_5, K_d=50 I_5 where I_5 is a 5× 5 identity matrix. The parameters of the weighting function were set as follows: λ_1=0.5, χ_1=0.04, χ_2=8.75 and λ_2=1.5. The control parameters were set as follows: K_v=1.1 I_3 and K_z=diag(1.5,0.6,0.7,4,1.8).
The experimental results demonstrate that the impedance controller effectively compensated for friction, enabling the upper-limb exoskeleton robot to accurately follow the desired trajectory, as illustrated in Figure <ref>.
Specifically, the RMSEs for the joints during this trajectory tracking task were 0.613^∘ for Joint 1, 0.728^∘ for Joint 2, 0.997^∘ for Joint 3, 2.143^∘ for Joint 4, and 0.948^∘ for Joint 5.
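The per-joint RMSE values quoted above follow the usual definition; a minimal sketch, assuming the desired and measured trajectories are stored as (T × 5) arrays in degrees, is given below.

import numpy as np

def joint_rmse(q_desired, q_measured):
    # Root-mean-square tracking error of each joint over the whole trajectory (in degrees).
    err = np.asarray(q_desired, dtype=float) - np.asarray(q_measured, dtype=float)
    return np.sqrt(np.mean(err ** 2, axis=0))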
Additionally, we assessed the performance of our impedance controller under conditions involving anomalies.
This was achieved by having the upper-limb exoskeleton robot guide the patient in trajectory tracking while the patient held the upper-limb exoskeleton robot in positions that simulated anomalies.
The results of this experiment are depicted in Figure <ref>.
It can be seen that throughout the experiment, the impedance vector remained close to zero, indicating that the desired variable impedance model was effectively maintained despite human involvement and the occurrence of anomalies.
§.§ Active Mirroring Training
In active mirroring training, the movements of the upper-limb exoskeleton robot were aligned with the motion intentions of the unaffected side of the body of the patient (as illustrated in Figure <ref>).
In addition, online trajectory refinement was used to smoothen movement commands and enhance safety through dynamic constraints. The details of the experiments are given below.
- To evaluate the capability of the proposed method to handle unexpected external impact, we conducted an experiment on Joint 2 of the upper-limb exoskeleton robot’s shoulder.
- We simulated sudden external disturbances by abruptly altering the desired position during the experiments.
- To represent the lower constraint of the online trajectory refinement, we imposed a lower bound on the position command at 20^∘, thereby simulating a tendency to exceed the acceptable movement range.
- During comparative trials, we eliminated motion command inconsistencies caused by feedback from the unaffected side of the body by deactivating the positional feedback on this side.
Therefore, we instead relied on the proprioceptive sensors of the upper-limb exoskeleton robot (specifically, its encoders) and conducted the experiment without a patient.
The position commands and joint positions with and without the implementation of online trajectory refinement, respectively, are illustrated in Figure <ref>.
This figure reveals that without online trajectory refinement, the joint position surpassed the set movement boundary.
However, when online trajectory refinement was activated, the joint movement responded effectively to the dynamic constraints and remained close to the boundary.
Specifically, without trajectory refinement, the movement exceeded the established boundary by approximately 9^∘.
In contrast, with trajectory refinement, the excess was significantly reduced to just 0.5^∘.
The trajectory refinement is designed to adjust the trajectory to adhere to constraints and to mitigate the extent to which joint position violates movement boundaries.
Therefore, even with trajectory refinement, the joint position may slightly cross the boundary, as observed with the 0.5^∘ transgression in this experiment.
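The boundary behaviour described above can be illustrated with a simplified constraint-handling rule. The sketch below is not the authors' full refinement scheme; it only shows how a position bound and a maximum approach speed jointly shape the command sent to a joint.

import numpy as np

def refine_command(q_ref, q_prev, q_min, q_max, v_max, dt):
    # Clip the raw reference to the position boundaries, then rate-limit the step so that
    # the joint approaches the boundary no faster than v_max (deg/s).
    q_clipped = np.clip(q_ref, q_min, q_max)
    step = np.clip(q_clipped - q_prev, -v_max * dt, v_max * dt)
    return q_prev + step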
Next, we experimented with Joint 1 to verify that the proposed method identified anomaly regions and guided the planned trajectory toward safer areas with lower anomaly scores than the current area.
Ideally, this joint’s normal operational range should not significantly exceed zero.
In particular, the shoulder adduction angle should remain small during typical activities.
Thus, as a motion capture system may yield inaccurate adduction angle estimates due to marker obstruction, we instead utilized the encoder feedback from the upper-limb exoskeleton robot to obtain adduction angle estimates.
The results are displayed in Figure <ref>.
In the figure, it can be seen that as the shoulder adduction angle increased, the motion commands generated by the online trajectory refinement decreased the adduction angle of movements rather than ensuring that they strictly adhered to the target trajectory.
Moreover, when the joint angle returned to the normal range, i.e., when the shoulder moved into the abduction space, the trajectory refinement resumed its focus on aligning the trajectory with the target trajectory within the dynamic constraints.
These results confirm that the proposed method is capable of tracking changes in human motion intention within the normal activity range and refining the trajectory accordingly.
A motion capture system must be installed in a hospital for deploying active mirroring training in clinical trials, and such a system introduces additional limitations and inconvenience.
Thus, we did not conduct clinical trials of active mirroring training. Instead, we evaluated our method by using the upper-limb exoskeleton for active mirroring training in a healthy subject by mapping the motion of the left arm (i.e., the healthy side) to that of the right arm (i.e., the mock stroke side).
Acquiring precise angles of human upper limbs through an optical motion capture system requires a custom-made suit, which was beyond the scope of this study.
Instead, we employed a self-manufactured brace embedded with key markers and gloves fitted with markers to estimate the angles at three limb joints, specifically those corresponding to Joints 1, 2, and 4.
However, due to brace deformation and marker obstruction, the accuracy of the upper-limb joint angles was limited to within a certain range.
Thus, to ensure safety and demonstrate the capability of our method to impose constraints on upper-limb movements, we set position constraints for Joints 1, 2, and 4 as [ -40^∘,10^∘ ], [ -10^∘,80^∘ ], and [ 0^∘,60^∘ ], respectively.
As shown in Figure <ref>, the desired trajectory was consistently aligned with the trajectory of the human limb q_h and was refined according to dynamic constraints.
Moreover, when the unaffected side of the body moved rapidly to a position outside the established movement boundary, the proposed trajectory refinement was swiftly adjusted.
That is, a predetermined maximum speed was implemented to prevent the set joint position boundaries being exceeded.
When the movements of the joints on the unaffected side of the body remained within these dynamic constraints, the upper-limb exoskeleton robot tracked the trajectory of the healthy limb.
Simultaneously and throughout the training session, our variable impedance controller effectively sustained the human-robot interaction within the desired impedance model.
§.§ Passive Following Training
We conducted a series of experiments to validate the efficiency of the proposed individualization framework on the passive following training task.
These experiments were centered on a typical task in upper-limb rehabilitation: raising the upper limb to a fixed point.
This task requires coordination between the shoulder and elbow, specifically involving Joints 1, 2, and 4.
During the cost calculation, the anomaly score was scaled to have the same magnitude as the tracking error, and the hyperparameter λ_p was set to 0.003.
Moreover, as the exploration part of assistance individualization largely consists of initial trajectory sampling based on a demonstration, we set N_s=40.
The training process was designed to stop when the RMSE of the mean trajectory of the distribution between two consecutive iterations decreased to less than 0.1^∘ or when the maximum number of iterations was reached.
Furthermore, the sampling space was greatly reduced by the demonstration data.
The number of samples was considered sufficient for exploration in the current task based on practical experience.
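A skeleton of this exploration loop, with the stopping rule stated above, is sketched below; sample_trajectory, evaluate_cost, and update_distribution are placeholders for the ProMP sampling, the cost evaluation (tracking error plus the scaled anomaly score), and the distribution update, respectively.

import numpy as np

def individualize(sample_trajectory, evaluate_cost, update_distribution,
                  mean, cov, n_samples=40, tol_deg=0.1, max_iter=100):
    # Iteratively refine the assistive trajectory distribution until the mean trajectory
    # changes by less than tol_deg (RMSE) between consecutive iterations.
    for _ in range(max_iter):
        samples = [sample_trajectory(mean, cov) for _ in range(n_samples)]
        costs = np.array([evaluate_cost(s) for s in samples])
        new_mean, cov = update_distribution(samples, costs)
        rmse = np.sqrt(np.mean((np.asarray(new_mean) - np.asarray(mean)) ** 2))
        mean = new_mean
        if rmse < tol_deg:
            break
    return mean, cov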
First, we conducted ablation studies with a healthy patient present to validate the efficacy of the online refinement module in passive following training scenarios.
In addition, five restarts were executed to bypass local optima.
The results of these studies are depicted in Figure <ref>. Within the figure, the black dashed line illustrates the trajectory that achieved the lowest cost during the exploration phase, while the red solid line represents the mean of the assistive trajectory distribution.
Without online trajectory refinement, the experiment spanned 67 iterations, whereas with online trajectory refinement, the experiment spanned 55 iterations.
The experimental results reveal that without online refinement in passive following training individualization, the converged assistive trajectory distribution exhibited a large variance, and the trajectory with the lowest cost significantly deviated from the mean of the distribution.
This indicates that the trajectory distribution with similar costs was rather wide.
In contrast, with online refinement in passive following training individualization, the converged assistive trajectory distribution had a smaller variance, suggesting that there was a more concentrated set of trajectories with similar costs.
This concentration is largely attributable to the use of sensor feedback and anomaly scores in the online refinement, thereby facilitating real-time adjustments to the assistive trajectory.
This process effectively reduced the uncertainty of motion intentions and minimized conflicts in human–robot interaction.
Thus, the precision with which the cost function evaluated and distinguished the performance of different trajectories was increased, leading to the overall effectiveness of the training being enhanced.
The effectiveness of the individualized assistance was assessed using four metrics: the tracking RMSE, the anomaly score, the cost (<ref>), and the EMG signal level of the biceps brachii.
The results of this evaluation are displayed in Figure <ref>.
To allow comparison, all metrics were averaged over one cycle and normalized relative to the original assistance.
Compared with the other assistance schemes (i.e., the mean trajectory of the demonstrations and the optimized assistance without online refinement), the assistance incorporating online refinement demonstrated the best performance in improving tracking accuracy and reducing the anomaly score during motion.
Furthermore, implementing online refinement resulted in the lowest cost following passive training and a reduction in the EMG signal level.
These outcomes substantiate the efficacy of the proposed individualization framework for passive following training.
We also recruited seven patients for clinical trials to demonstrate the effectiveness of the proposed method.
As one participant was transferred to another hospital, only six participants completed the trial. Because the effects of the stroke prevented even the unaffected side of each patient from moving naturally, passive following training was employed. Each participant signed an informed consent form, and all experiments were approved by the ethics committee established by Shenzhen MileBot Robotics Co., Ltd in May 2023.
The participants were allocated to either a control group or an experimental group and their details are summarized in Table <ref>.
Both groups engaged in regular daily rehabilitation exercises, and the experimental group participated in an additional 14 days of passive following training.
The rehabilitation task was set to be the same as the previously described task of raising the arm, involving the coordinated movement of Joints 1, 2, and 4, as illustrated in Figure <ref>.
Interactive information was recorded during the limb-lifting phase, which was considered as the training task, while the limb-lowering phase, which began at t=3.5s, was used to return to the initial position.
The motor abilities of both groups were evaluated and scored by professional healthcare personnel using specific evaluation metrics. These metrics were the muscle tone level and the Fugl-Meyer assessment (FMA) score <cit.>.
A low level of muscle tone and a high FMA score are indicative of good upper-limb motor ability.
Both groups were subjected to a motor function assessment before treatment. During the treatment phase, the experimental group participated in daily passive following training sessions, each of which lasted approximately 15 minutes.
Both groups underwent reassessment 2 weeks after the start of the treatment.
The experimental results normalized to the initial evaluation are displayed in Figure <ref>, and detailed evaluation results are included in the Appendix.
Unlike the control group, the experimental group exhibited significant improvements in all metrics compared with their initial assessments before the treatment.
Thus, compared with the results of the control group, the results of the experimental group indicate that the passive following training with the upper-limb exoskeleton robot accelerated the recovery of motor functions.
Therefore, this training could enhance the effectiveness of treatment for conditions such as stroke and cerebral hemorrhage.
§ CONCLUSION AND DISCUSSION
§.§ Conclusion
Overall: This paper introduces a dual-mode individualization framework that incorporates generative models. This framework incorporates an intention predictor and an anomaly detector, which are used to capture the motion intentions of the unaffected side of the patient and to assess the human–robot interaction in real time during rehabilitation tasks.
In active mirroring mode, the assistance reflects the patient’s original motion intentions. In passive following mode, the assistance is tailored to the patient based on interactive feedback.
Trajectories in both modes are integrated within an online trajectory refinement framework, ensuring that they are smooth, adhere to dynamic constraints, and are individualized, thereby effectively supporting the patient’s rehabilitation.
Details:
The online trajectory refinement integrates both training configurations and utilizes generative models to achieve personalized assistance.
Specifically, in active mirroring mode, the reference trajectory is derived from the unaffected limb, with the intention predictor providing a predicted trajectory distribution that is preemptively tuned to mitigate potential risk movements.
Conversely, in passive following mode, the reference trajectory is pre-defined based on human demonstrations.
Additionally, the anomaly detector plays a crucial role in guiding the online refinement process to enhance the naturalness of movements in real time.
This detector assesses the deviation between the current interaction data and standard demonstration data obtained from healthy individuals, thereby facilitating performance evaluation during passive following training.
In passive following mode, ProMPs are implemented for specific training tasks, with each movement of the patient weighted according to a cost function. This approach significantly enhances the effectiveness of the generated assistance distribution.
Performance:
We conducted a series of experiments, including a clinical trial, to validate each of the proposed modules and demonstrate their effectiveness in enhancing assistance and ensuring safety.
In terms of prediction accuracy, the intention predictor outperformed alternative methods, namely forward integration and a CNN-LSTM.
Furthermore, the anomaly detector accurately identified anomalies across different scenarios.
Moreover, the performance of the variable impedance controller was validated in trajectory tracking and in assisting the patient when anomalies occurred.
During active mirroring training, online refinement effectively reduced the degree of constraint violations in the presence of unexpected impacts.
It was also capable of identifying abnormal regions within the movement space and guiding the upper-limb exoskeleton robot toward regions with lower anomaly scores.
This active mirroring training approach was tested under a motion capture system, which validated its effectiveness.
In passive following mode, testing was conducted both with healthy individuals and in a clinical trial. The results confirm that the approach provided personalized assistance to healthy participants and significantly accelerated the recovery of motor functions in stroke participants.
Specifically, the clinical trial data indicate that the experimental group, which had participated in passive following training, showed improvements in various performance metrics after completing the treatment protocol.
§.§ Discussion
Limitations:
The current trajectory generation method exhibits two main limitations, as detailed below.
1) The performance of the intention predictor and anomaly detector depends on the size and quality of the dataset. Expanding the dataset to include more subjects would significantly improve the performance of the generative models. This would increase the accuracy of predictions of patient motion intentions and the precision of detection of abnormal interactions during movements, thereby enhancing the personalization of the training modes.
2) The clinical trial included only six participants and focused exclusively on the rehabilitation effects of the passive following mode. Conducting a clinical trial with more participants and incorporating active mirroring mode into the rehabilitation process would provide a more comprehensive evaluation of the proposed dual-mode individualization framework.
Efforts to address these limitations in the manner described will form the basis of our future research and development activities.
Intellectual Merits:
We have developed an innovative dual-mode individualization framework that incorporates generative models, thereby establishing a new benchmark for adaptive rehabilitation systems.
This novel framework can switch between active mirroring and passive following modes based on the patient’s needs and thus offers personalized assistance and enhanced rehabilitation outcomes.
Key features of this framework are its real-time intention prediction and anomaly detection capabilities.
Specifically, the intention predictor captures motion intentions from the unaffected side of the patient, while the anomaly detector evaluates human–robot interactions in real time, ensuring immediate adaptation and response to the patient’s movements.
Additionally, the framework integrates online trajectory refinement that unifies trajectories from both active mirroring and passive following modes to ensure they are smooth, dynamically constrained, and individualized.
Thus, the framework provides more natural and effective assistance than other frameworks. The application of generative models to personalize assistance based on interactive feedback ensures that the rehabilitation process is effectively responsive to individual patient conditions and needs.
Potential Impacts:
The development of a dual-mode individualization framework that integrates generative models represents a significant advancement that could enhance the deployment and effectiveness of rehabilitation technologies in both clinical and homecare environments.
Specifically, as this innovative framework delivers personalized and adaptive assistance tailored to real-time feedback and the specific motion intentions of the patient, it has the potential to revolutionize the rehabilitation process.
To the best of the authors’ knowledge, this study is the first to integrate generative models into an upper-limb exoskeleton robot and perform a clinical trial.
Our pioneering approach not only enhances the functionality of rehabilitation devices but also contributes to a potential impact on the field by merging artificial intelligence with rehabilitation medicine.
That is, our approach could effectively bridge the gap between AI and rehabilitation medicine, thereby facilitating the translation of advancements in AI into practical medical applications.
This study exemplified the power of interdisciplinary research, as it involved a combination of principles from the fields of robotics, control systems, machine learning, and clinical rehabilitation.
This led to advances in each field and set a precedent for future studies aiming to develop comprehensive and adaptive rehabilitation systems.
Furthermore, the framework devised in this study addresses the broader societal challenge posed by an aging population. Specifically, the framework offers methods that could be used in advanced rehabilitation solutions that are applicable in both healthcare facilities and home settings.
§ FUNDING
This work was supported in part by the Science and Technology Innovation 2030-Key Project under Grant 2021ZD0201404, in part by
the Institute for Guo Qiang, Tsinghua University, and in part by the National Natural Science Foundation of China under Grant U21A20517 and 52075290.
SageH
§ APPENDIX
§.§ Stability Analysis
To prove the stability of the whole system, we substitute equations (<ref>) and (<ref>) into (<ref>), resulting in the following system dynamics:
( M( q)+B̅)ż+ C(q̇, q) z=- K_z z- S_2^𝖳τ̃_f-k_g sgn( z),
where τ̃_f = τ̂_f - τ_f represents the friction estimation error.
Subsequently, we propose the following candidate Lyapunov function:
V = 1/2 z^T( M( q)+B̅) z.
By differentiating (<ref>) with respect to time and substituting the dynamics with those from (<ref>), we derive the following expression:
V̇ = - z^T K_z z - z^T S_2^𝖳τ̃_f - k_g z^T sgn( z).
Assuming that ‖ S_2^𝖳τ̃_f ‖≤κ, the upper bound for V̇ is derived as follows:
V̇≤ - z^T K_z z - (k_g-κ)‖ z ‖.
If k_g is adequately large such that k_g>κ, the inequality simplifies to
V̇≤ - z^T K_z z<0.
Given that V>0 and V̇ < 0, the quasi-steady-state system is exponentially stable.
Considering that the boundary-layer system can be made intrinsically stable by appropriate tuning of K_1 and K_2, the stability of the closed-loop system is assured according to <cit.>, ensuring convergence to the desired impedance vector.
§.§ Weight Setting in Passive Following Mode
For a specific training task involving human-robot interaction, we assume that the patient intends to synchronize movement with the upper-limb exoskeleton robot.
However, physical factors such as the randomness of human motion intentions may impact the tracking performance of the upper-limb exoskeleton robot.
Given the challenge of explicitly accounting for such disturbances within the dynamics, we model this interference as additional noise in trajectory planning. The dynamics are described as follows:
x_p^(t+1) = g_p( x_p^(t), q_r),
q_r ∼𝒩( q_rp,Σ_p),
where x_p = [ x_d^𝖳,s]^𝖳 is an augmented state vector, g_p(·) is a nonlinear time-variant function that integrates online refinement, the impedance controller, and deterministic components of human motion intention with human-robot interaction.
q_rp is the mean of the assistance distribution, and Σ_p encapsulates the overall stochastic disturbance, which includes the randomness of human motion intentions and the sampling variability of generative models.
According to <cit.>, to minimize the following cost
𝒮̂( q_r) = ∫_T_r{‖ q_d( q_r) - q ‖_ Q^2 + s^2 + λ_p/2 q_rp^𝖳Σ_p^-1 q_rp}dt,
the optimal control input is structured in a cost-decoupled manner as follows:
ŵ( q_r) = 1/η̂_k exp(-1/λ_p(𝒮̂( q_r) + γ_p∑ q_rp^𝖳Σ_p^-1 q_r)),
q̂_rp^* = 𝔼[ŵ( q_r) q_r].
Here, γ_p = λ_p(1-α_p) is the decoupled temperature parameter, defined with α_p∈[0,1], and q_rp is the mean of the initial trajectory estimates.
In this approach, the optimal assistance is derived from the expected value of the current trajectory distribution.
Each sampling iteration is based on the previously improved trajectory distribution, which may lead to inconsistencies in the sampling space.
To address these potential inconsistencies and ensure adequate exploration, we opt for a large sample size N_s in Algorithm <ref>.
Additionally, we set α_p = 1 to mitigate the influence of inconsistencies in q_rp across different samples and thereby enhance the robustness and reliability of the optimal control solution.
Given that the last term in (<ref>) is unexpected in the rehabilitation process, the ideal set is λ_p = 0, leading to the following definition of optimal assistance:
q̂_rp^* = min_ q_r𝒮̂( q_r).
In this scenario, the optimal assistance is identified as the sampled trajectory with the lowest cost, and all other sampled outcomes are disregarded.
However, this formulation disregards the distribution of the optimal assistance, rendering the iterative improvement mechanism nonviable.
Thus, to maintain the feasibility of iterative improvements and also minimize the impact of the undesirable term, the parameter λ_p, which governs the tightness of the solution, is set to a small value.
This adjustment ensures that 𝒮̂( q_r) →𝒮( q_r), and the optimal weight formula in (<ref>) simplifies to the configuration used in (<ref>).
This setting balances the need to minimize the undesired term with the need to maintain a practical and effective iterative improvement process.
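Under this setting (α_p = 1, hence γ_p = 0, and a small λ_p), the weighted update reduces to a softmax over trajectory costs. A minimal Python sketch of the resulting computation is given below, with costs and samples standing for the sampled-trajectory costs and the sampled assistance trajectories; the names are illustrative only.

import numpy as np

def weighted_mean_trajectory(costs, samples, lam_p=0.003):
    # Softmax weighting of sampled trajectories; subtracting the minimum cost avoids
    # numerical overflow and leaves the normalized weights unchanged.
    costs = np.asarray(costs, dtype=float)
    samples = np.asarray(samples, dtype=float)   # shape (N, T)
    w = np.exp(-(costs - costs.min()) / lam_p)
    w /= w.sum()
    return w @ samples                           # weighted expectation of the trajectories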
§.§ Clinical Evaluation Results
In this clinical trial, muscle tone levels were assessed using a rating scale with grades 0, 1, 1+, 2, 3, and 4, where grade 0 indicates normal muscle tone.
For the purposes of numerical analysis, grade 1+ is quantified as 1.5. The evaluation results are documented in Table <ref>.
All assessments were exclusively focused on the upper limb. In the upper limb segment of the FMA, the maximum score, indicating normal function, is 66 points.
Communication-Assisted Sensing Systems: Fundamental Limits and ISAC Waveform Design
Fuwang Dong, Member, IEEE, Fan Liu, Senior Member, IEEE, Yifeng Xiong, Member, IEEE,
Yuanhao Cui, Member, IEEE, Wei Wang, Senior Member, IEEE, Shi Jin, Fellow, IEEE
(Corresponding author: Fan Liu.)
Part of this paper was presented at IEEE International Symposium on Information Theory (ISIT), 2024 <cit.>.
Fuwang Dong, and Fan Liu are with the School of System Design and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen 518055, China. (email: {dongfw, liuf6}@sustech.edu.cn).
Yifeng Xiong, and Yuanhao Cui are with the School of Information and Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China. (email: {yifengxiong, cuiyuanhao}@bupt.edu.cn).
Wei Wang is with the College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin 150001, China. (email: wangwei407@hrbeu.edu.cn).
Shi Jin is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China. (e-mail: jinshi@seu.edu.cn).
September 9, 2024
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
The communication-assisted sensing (CAS) systems are expected to endow the users with beyond-line-of-sight sensing capabilities without the aid of additional sensors. In this paper, we study the dual-functional signaling strategy, focusing on three primary aspects, namely, the information-theoretic framework, the optimal distribution of channel input, and the optimal waveform design for Gaussian signals. First, we establish the information-theoretic framework and develop a modified source-channel separation theorem (MSST) tailored for CAS systems. The proposed MSST elucidates the relationship between achievable distortion, coding rate, and communication channel capacity in cases where the distortion metric is separable for sensing and communication (S&C) processes. Second, we present an optimal channel input design for dual-functional signaling, which aims to minimize total distortion under the constraints of the MSST and resource budget. We then conceive a two-step Blahut-Arimoto (BA)-based optimal search algorithm to numerically solve the functional optimization problem. Third, in light of the current signaling strategy, we further propose an optimal waveform design for Gaussian signaling in multi-input multi-output (MIMO) CAS systems. The associated covariance matrix optimization problem is addressed using a successive convex approximation (SCA)-based waveform design algorithm. Finally, we provide numerical simulation results to demonstrate the effectiveness of the proposed algorithms and to show the unique performance tradeoff between S&C processes.
Integrated sensing and communication, communication assisted sensing, rate-distortion theory, source-channel separation theorem, Blahut-Arimoto algorithm.
§ INTRODUCTION
Integrated sensing and communication (ISAC) systems are widely acknowledged for their potential to enhance S&C performance by sharing hardware and spectrum resources <cit.>. Over the past few decades, substantial research efforts have focused on improving resource efficiency and mitigating mutual interference through waveform design <cit.>, beamforming <cit.>, and dedicated signal processing techniques <cit.>. To further explore the coordination gains between S&C, the authors of <cit.> proposed a sensing-assisted communication scheme, which leverages the sensing capability of the transmitted ISAC signal, such as beam tracking and prediction, to establish user links, thereby reducing the communication overhead. On the other hand, sensing-as-a-service is expected to become a key feature of upcoming 6G perceptive networks <cit.>. This necessitates the exploration of how the sensing quality can be enhanced by leveraging communication functionality, namely, the CAS techniques.
§.§ Communication-Assisted Sensing Systems
The effective sensing range may be significantly extended by leveraging the data transmission capabilities of communication systems. This capability, referred to as beyond-line-of-sight (BLoS) sensing, allows for detecting obstructed or exceptionally distant targets. In <cit.>, BLoS sensing was investigated by sharing measurements collected by vehicle-mounted sensors among users. Recently, a novel CAS system has been proposed in <cit.>, which enables BLoS sensing as an intrinsic network capability. Unlike the techniques proposed in <cit.>, the CAS system eliminates the need for additional sensors.
As shown in Fig. <ref>, the base station (BS) or roadside unit (RSU) with favorable visibility illuminates targets and captures observations during the sensing process while simultaneously transmitting the acquired sensory information to the end-users during the communication process. Therefore, the S&C processes are coupled at the BS and concurrently determine the sensing quality attained by the user. The authors of <cit.> have proposed two signaling strategies, i.e., separated S&C signaling and dual-functional signaling, to minimize the total distortion between the ground truth and the corresponding reconstruction at end-users.
In the dual-functional signaling strategy, the BS transmits a sophisticatedly tailored ISAC waveform to simultaneously perform snc tasks. Namely, the ISAC waveform is used to sense the current targets' states and to convey the previously acquired state information to the end-users. However, in this strategy, where there is strong snc coupling, two critical problems still remain unclear. 1) The problem formulation in <cit.> is established at an information theoretical level, whose operational meaning is not proved from a coding theory perspective. 2) The ISAC waveform is assumed to be a Gaussian distribution, which may not be necessarily optimal for the considered CAS scenario. For instance, Gaussian distribution is communication-optimal under Gaussian channels, whereas 2-ary pulse amplitude modulation is sensing-optimal <cit.>. This motivates us to delve into the fundamental limits and the resulting snc tradeoff of the dual-functional signaling strategy for CAS systems.
§.§ Related Works
Exploring the fundamental limits plays an important role in ISAC systems <cit.>. The groundbreaking studies <cit.> have considered a scenario where the transmitter (Tx) communicates with a user through a memoryless state-dependent channel while simultaneously estimating the state from generalized feedback. The capacity-distortion-cost tradeoff of this channel is characterized to illustrate the optimal achievable rate for reliable communication while maintaining a preset state estimation distortion. Concurrently, the work of <cit.> has addressed a general scenario where the Tx senses arbitrary targets rather than the specified communication channel state. This study reveals the deterministic-random tradeoff between S&C within the dual-functional signaling strategy by characterizing the Cramér-Rao bound (CRB)-communication rate region. Unfortunately, the fundamental limits of CAS systems, whose setups significantly differ from the existing literature, remain widely unexplored.
The information-theoretic framework of the dual-functional signaling strategy for CAS systems is depicted in Fig. <ref>. The ISAC channel input X is employed both to acquire the observation Z of the original state S from the sensing channel, and to convey the estimate S̃ over the communication channel. Although this framework shares some similarities with conventional remote source coding and remote estimation problems, it exhibits a unique challenge in ISAC channel input design. In remote source coding <cit.>, the encoder cannot access the original source information S but only its noisy observation Z. In this context, S↔Z↔X↔Y↔Ŝ forms a Markov chain, where the observation Z does not depend on the communication channel input X. However, in our scheme, the sensing channel output Z is determined by the ISAC channel input X due to the dual-functional signaling strategy. This implies that the process is no longer a Markov chain in CAS systems, complicating the characterization of the rate-distortion relationship using remote source coding theory.
The remote estimation problem involves a sensor measuring the state of a linear system and transmitting its observations to a remote estimator over a wireless fading channel <cit.>. A typical application of the remote estimation problem in the ISAC field is vehicular networks, such as the sensory data-sharing scheme considered in <cit.>. This widely investigated problem differs from the proposed CAS system in two ways. First, remote estimation requires additional sensors, whereas sensing is an intrinsic capability of the CAS system. Second, similar to the remote source coding problem, the observations and channel inputs are not coupled. Consequently, the S&C performance tradeoff leads to a significantly different working pipeline and unique challenges in CAS system design.
It would be worthwhile to remark that, in most works on remote source coding and remote estimation, the encoder directly transmits the observation Z, which is estimated at the user end. This strategy is referred to as compress-and-estimate. In the proposed CAS systems, however, we focus on a scenario where the BS transmits the estimate S̃ instead of the observation. This estimate-and-compress strategy achieves an optimal tradeoff between rate and distortion, as discussed in <cit.>. We will leave the compress-and-estimate strategy for future research.
§.§ Our contributions
To fill the aforementioned research gap, we aim to bring new insights into the dual-functional signaling strategy for CAS systems, from the fundamental limits to the specific waveform design. Compared to the conference version <cit.>, which only discussed the corresponding information-theoretic framework, we further elaborate on the optimal distribution of the channel input and propose a novel method for Gaussian waveform design in MIMO CAS systems. For clarity, the main contributions of this work are summarized as follows.
* First, we establish the information-theoretic framework for CAS systems to illustrate the relationship between achievable distortion, coding rate, and communication channel capacity. We develop a modified source-channel separation theorem (MSST) specific to the case of a separable distortion metric for the S&C processes. Compared to the existing works <cit.>, we provide a rigorous proof of the operational meaning of the MSST.
* Second, we formulate a unified optimization problem for the input distribution design of the ISAC channel, where the total distortion is minimized while adhering to the MSST and resource constraints. To cope with the functional optimization problem, we conceive a two-step Blahut-Arimoto (BA)-based optimal search algorithm in an effort to tackle the challenges of lacking explicit expressions for the rate-distortion bound and channel capacity.
* Third, we propose an optimal waveform design scheme for Gaussian signaling in MIMO CAS systems. The associated covariance matrix optimization problem is solved using a successive convex approximation (SCA)-based waveform design algorithm.
* Finally, we provide numerical simulation results to show the effectiveness of the proposed algorithms, while demonstrating the unique performance tradeoff between S&C in the considered CAS systems.
This paper is structured as follows. We commence with establishing the information-theoretic framework in Section <ref>, including the definitions of distortions, rate-distortion function, and constrained channel capacity. In Section <ref>, we prove the achievability and converse of the proposed MSST. Then, we formulate the optimization problem for ISAC channel input design and develop a two-step BA-based optimal search algorithm in Section <ref>. In Section <ref>, we present the Gaussian ISAC waveform design method for the MIMO CAS systems. Finally, we provide the simulation results and conclude this paper in Section <ref> and <ref>, respectively.
The notations used in this paper are as follows. The uppercase normal letter A, lowercase italic letter a, and fraktur letter 𝒜 denote a random variable, its realization, and a set, respectively. Aa represents a probability distribution function, and A|Ba|b specifies the channel transition probability. Uppercase and lowercase bold letters 𝐀 and 𝐚 denote a matrix and a column vector. (·)^T, (·)^*, and (·)^H represent the transpose, conjugate, and complex conjugate transpose operations, respectively. 𝔼[·] is the statistical expectation, and tr(·) is the trace of a matrix.
§ INFORMATION-THEORETIC FRAMEWORK OF CAS
§.§ Sensing and Communication Processes
As shown in Fig. <ref>, X∈𝒳 serves as the input to both the S&C channels. The random variables of the target's state S, the sensing channel output Z, and the communication channel output Y take values in the sets 𝒮, 𝒵, and 𝒴, respectively. Here, the state sequence {S_i}_i ≥ 1 is iid subject to a prior distribution Ss. In general, the CAS system may be described by the following S&C processes.
∙ Sensing Process: The sensing channel output, Z_i, at a given time i is generated based on the sensing channel law Z|XS·|x_i,s_i given the ith channel input X_i=x_i and the state realization S_i=s_i. We assume that the sensing channel output Z_i is independent of past inputs, outputs and state signals. Let 𝒮̃ denote the estimate alphabet. The state estimator is a map from the acquired observations 𝒵^n to 𝒮̃^n. Thus, the expected average per-block estimation distortion in the sensing process can be defined by
Δ_s^(n):= 𝔼[d(S^n,S̃^n)]=1/n∑_i=1^n 𝔼[d(S_i,S̃_i)],
where d(·, ·) is the distortion function bounded by d_max.
∙ Communication Process: The BS encodes the estimate S̃_i-1 into the communication channel input X_i. The user receives the communication channel output Y_i in accordance with channel law Y|Xy|x. The decoder is a map from 𝒴^n to 𝒮̂^n, where 𝒮̂ denotes the reconstruction alphabet. Therefore, the expected average per-block estimation distortion in the communication process can be defined by
Δ_c^(n):= 𝔼[d(S̃^n,Ŝ^n)] =1/n∑_i=1^n 𝔼[d(S̃_i,Ŝ_i)].
Additionally, the performance of the CAS systems may be evaluated by the total distortion between the ground truth and its reconstruction at the user,
Δ^(n):= 𝔼[d(S^n,Ŝ^n)] = 1/n∑_i=1^n 𝔼[d(S_i,Ŝ_i)].
By denoting R as the bit rate, a (2^nR, n) coding scheme for the CAS systems consists of
1) A state parameter estimator h: 𝒳^n ×𝒵^n →𝒮̃^n [We will show that the optimal estimator h can be achieved by a symbol-by-symbol estimation later in Lemma 1. Namely, the estimate S̃_i only depends on X_i and Z_i but independent of past inputs and outputs. Here, we define the mapping in block form to maintain generality.];
2) A message set (also the estimate set) 𝒮̃^n = [1:2^nR];
3) An encoder ϕ: 𝒮̃^n →𝒳^n;
4) A decoder ψ: 𝒴^n →𝒮̂^n.
In practical systems, the S&C channel input X may be restricted by limited system resources. Let us define the cost function b(x):𝒳→ℝ^+ as the channel cost; then a rate-distortion-cost tuple (R,D,B) is said to be achievable if there exists a sequence of (2^nR,n) codes that satisfy
lim sup_n →∞Δ^(n)≤ D, lim sup_n →∞𝔼[b(X^n)] ≤ B,
where D and B represent the total distortion and resource budget, respectively.
§.§ Optimal Sensing Estimator
The estimate S̃, as well as the communication source, is determined by the choice of sensing estimator at the BS. The optimal estimator is given through the following lemma.
Lemma 1 <cit.>: By recalling that S↔XZ↔S̃ forms a Markov chain, the sensing distortion Δ_s^(n) is minimized by the deterministic estimator
h^⋆(x^n,z^n):= (s̃^⋆(x_1,z_1),s̃^⋆(x_2,z_2),⋯,s̃^⋆(x_n,z_n)),
where
s̃^⋆(x,z):= arg min_s'∈𝒮̃∑_s ∈𝒮S|XZs|x,z d(s,s'),
with posterior transition probability being
S|XZs|x,z = SsZ|XSz|x,s/∑_s' ∈𝒮Ss'Z|XSz|x,s' ,
which is independent of the choice of encoder and decoder.
The detailed proof can be found in <cit.>. By applying the estimator (<ref>), the estimation-cost function with respect to the channel input realization can be defined by
e(x) = 𝔼[d(S,s̃^⋆(X,Z))|X=x].
Thus, a given estimation (or sensing) distortion D_s satisfies
lim sup_n →∞Δ_s^(n) = lim sup_n →∞1/n∑_i=1^n 𝔼[e(X_i)] ≤ D_s.
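For finite alphabets, the estimator of Lemma 1 and the estimation-cost function e(x) can be evaluated directly from the prior and the sensing channel law. The Python sketch below assumes the prior, the sensing channel law, and a distortion table are given as arrays; all names are illustrative.

import numpy as np

def estimator_and_cost(P_S, Q_Z_given_XS, d_table, x_idx):
    # P_S: (|S|,) prior; Q_Z_given_XS: (|X|, |S|, |Z|) sensing channel law;
    # d_table: (|S|, |S_tilde|) distortion between states and candidate estimates.
    joint = P_S[:, None] * Q_Z_given_XS[x_idx]               # (|S|, |Z|), P_S(s) Q(z|x,s)
    P_Z_given_x = joint.sum(axis=0)                          # (|Z|,)
    post = joint / np.maximum(P_Z_given_x[None, :], 1e-300)  # posterior P(s|x,z)
    exp_dist = d_table.T @ post                              # (|S_tilde|, |Z|) expected distortions
    s_tilde = exp_dist.argmin(axis=0)                        # optimal estimate index for each z
    e_x = np.sum(P_Z_given_x * exp_dist[s_tilde, np.arange(post.shape[1])])
    return s_tilde, e_x                                      # estimator map and e(x)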
§.§ Constrained Communication Channel Capacity
The S&C processes are strongly coupled through the common S&C channel input, leading to a performance tradeoff between the sensing distortion and the achievable communication channel capacity. To proceed, let us define the channel capacity and the rate-distortion function under the CAS regime.
Definition 1: The information-theoretic channel capacity constrained by the desired sensing distortion D_s and resource cost B can be defined by
C^IT(D_s, B) = max_{Xx∈𝒫_D_s∩𝒫_B} I(X;Y),
where I(X;Y) denotes the mutual information (MI) between the communication channel input and output, 𝒫_D_s is the sensing feasible probability set whose elements satisfy
𝒫_D_s= {Xx | 𝔼[e(X)]≤ D_s },
and 𝒫_B is the resource feasible probability set described by
𝒫_B={Xx | 𝔼[b(X)]≤ B }.
Note that the sequence {X_i}_i ≥ 1 is iid with distribution Xx for achieving the capacity-distortion region. By recalling the iid state sequence {S_i}_i ≥ 1, the sensing channel output {Z_i}_i ≥ 1 is also iid subject to the distribution Zz=∑_s ∈𝒮∑_x ∈𝒳Z|XSz|x,sSsXx. Furthermore, according to the symbol-by-symbol optimal sensing estimator in Lemma 1, we have the fact that the estimate {S̃_i}_i ≥ 1 is iid with the following distribution
S̃s̃ = ∑_z ∈𝒵∑_x ∈𝒳∑_s ∈𝒮Z|XSz|x,sSsXx𝕀{s̃^⋆(x,z)=s̃},
with 𝕀(·) being the indicator function. Therefore, for the iid source S̃, the relationship between communication distortion D_c and bit rate R can be characterized by the rate-distortion theory.
Definition 2: The information-theoretic rate distortion function can be defined by
R^IT(D_c) = min_{Ŝ|S̃ŝ|s̃: 𝔼[d(S̃,Ŝ)]≤ D_c} I(S̃;Ŝ).
Here, we have
I(S̃;Ŝ) =^(a) I(X,Z;Ŝ) =^(b) I(X;Ŝ),
where (a) is due to the deterministic sensing estimator (<ref>) in Lemma 1, and (b) follows the fact that Ŝ is conditionally independent to Z with a given X.
§ MODIFIED SOURCE-CHANNEL SEPARATION THEOREM
In this section, we develop the modified source-channel separation theorem tailored for the dual-functional signaling CAS systems, aiming to elucidate the relationships between total distortion, coding rate and communication channel capacity. The MSST retains a form analogous to the conventional source-channel separation theorem with distortion in lossy data transmission <cit.>.
Theorem 1: Under the condition that the distortion metric d(·, ·) is separable for the S&C processes, i.e.,
d(S,Ŝ) = d(S,S̃) + d(S̃,Ŝ),
the total distortion D can be achieved by the sum of snc distortions, if and only if
R^IT(D_c) ≤ C^IT(D_s, B),
where C^IT(D_s, B) and R^IT(D_c) are the constrained channel capacity (<ref>) and the rate-distortion function (<ref>), respectively.
The above MSST is proposed mathematically within the framework of information theory. Next, we elucidate the operational meaning of the MSST by proving that there must exist a practical coding scheme satisfying it.
§.§ Converse
We start with a converse to show that any achievable coding scheme must satisfy (<ref>). Consider a (2^nR,n) coding scheme defined by the encoding and decoding functions ϕ and ψ. Let S̃^n = h^⋆(X^n,Z^n) be the estimate sequence as given in (<ref>) and Ŝ^n = ψ(ϕ(S̃^n)) be the reconstruction sequence corresponding to S̃^n.
Let us focus on the communication process S̃↔X↔Y↔Ŝ. By recalling from the proof of the converse in lossy source coding, we have
R ≥1/n∑_i=1^n I(S̃_i;Ŝ_i) ≥^(a)1/n∑_i=1^n R^IT(𝔼[d(S̃_i,Ŝ_i)])
≥^(b) R^IT(1/n∑_i=1^n𝔼[d(S̃_i,Ŝ_i)]) ≥^(c) R^IT(D_c),
where (a) follows the Definition 2 that R^IT is the minimum required MI, (b) and (c) are due to the convexity and non-increasing properties of the rate distortion function.
On the other hand, by recalling from the proof of the converse in channel coding, we have
R ≤1/n∑_i=1^n I(X_i;Y_i)
≤^(d)1/n∑_i=1^n C^IT( ∑_x∈𝒳X_ixc(x), ∑_x∈𝒳X_ixb(x))
≤^(e) C^IT(1/n∑_i=1^n∑_x∈𝒳X_ixc(x), 1/n∑_i=1^n∑_x∈𝒳X_ixb(x))
≤^(f) C^IT(D_s,B),
where (d) follows the Definition 1 that the channel capacity is the maximum MI, (e) and (f) are due to the concavity and non-decreasing properties of the capacity constrained by estimation and resource costs <cit.>. By combing the inequalities (<ref>), (<ref>) and the data processing inequality I(S̃;Ŝ) = I(X;Ŝ) ≤ I(X;Y) benefit from the Markov chain X↔Y↔Ŝ, we complete the proof of the converse.
§.§ Achievability
Let S^n be drawn iid ∼Ss, we will show that there exists a coding scheme for a sufficiently large n and rate R, the distortion Δ^n can be achieved by D if (<ref>) holds. The core idea follows the famous random coding argument and source channel separation theorem with distortion.
1) Codebook Generation: In source coding with rate distortion code, randomly generate a codebook 𝒞_s consisting of 2^nR sequences Ŝ^n which is drawn iid ∼Ŝŝ. The probability distribution is calculated by Ŝŝ = ∑_s̃S̃s̃Ŝ|S̃ŝ|s̃, where S̃s̃ is defined in (<ref>) and Ŝ|S̃ŝ|s̃ achieves the equality in (<ref>). In channel coding, randomly generate a codebook 𝒞_c consisting of 2^nR sequences X^n which is drawn iid ∼Xx. The Xx is chosen by satisfying the constrained capacity with estimation- and resource-cost in (<ref>). Index the codeword S̃^n and X^n by w ∈{1,2,⋯, 2^nR}.
2) Encoding: Encode the S̃^n by w such that
(S̃^n, Ŝ^n(w)) ∈𝒯^(n)_d,ϵ_s (S̃Ŝs̃,ŝ),
where 𝒯^(n)_d,ϵ_s (S̃Ŝs̃,ŝ) represents the distortion typical set <cit.> with joint probability distribution S̃Ŝs̃,ŝ = S̃s̃Ŝ|S̃ŝ|s̃. To send the message w, the encoder transmits x^n(w).
3) Decoding: The decoder observes the communication channel output Y^n=y^n and look for the index ŵ such that
(x^n(ŵ), y^n) ∈𝒯^(n)_ϵ_c(XYx,y),
where 𝒯^(n)_ϵ_c(XYx,y) represents the typical set with joint probability distribution XYx,y=XxY|Xy|x. If there exists such ŵ, it declares Ŝ^n = ŝ^n(ŵ). Otherwise, it declares an error.
4) Estimation: The encoder observes the channel output Z^n = z^n, and computes the estimate sequence with the knowledge of channel input x^n by using the estimator s̃^n = h^⋆(x^n,z^n) given in (<ref>).
5) Distortion Analysis: We start by analyzing the expected communication distortion (averaged over the random codebooks, state and channel noise). In lossy source coding, for a fixed codebook 𝒞_s and choice of ϵ_s > 0, the sequence s̃^n ∈𝒮̃^n can be divided into two categories:
∙ (s̃^n, ŝ^n(w)) ∈𝒯^(n)_d,ϵ_s, we have d(s̃^n, ŝ^n(w)) < D_c + ϵ_s;
∙ (s̃^n, ŝ^n(w)) ∉𝒯^(n)_d,ϵ_s, we denote P_e_s as the total probability of these sequences. Thus, these sequences contribute at most P_e_sd_max to the expected distortion since the distortion for any individual sequence is bounded by d_max.
According to the achievability of lossy source coding <cit.>, we have P_e_s tends to zero for sufficiently large n whenever R ≥ R^IT(D_c).
In channel coding, the decoder declares an error when the following events occur:
∙ (x^n (w), y^n) ∉𝒯^(n)_ϵ_c;
∙ (x^n (w'), y^n) ∈𝒯^(n)_ϵ_c, for some w' ≠ w, we denote P_e_c as the probability of a decoding error. Erroneous decoding contributes at most P_e_cd_max to the expected distortion. Similarly, we have P_e_c→ 0 for n →∞ whenever R ≤ C^IT(D_s, B) according to the channel coding theorem <cit.>.
On the other hand, the expected estimation distortion can be upper bounded by
Δ_s^(n) = 1/n∑_i=1^n 𝔼[d(S_i,S̃_i)|Ŵ ≠ w] Pr(Ŵ ≠ w)
+ 1/n∑_i=1^n 𝔼[d(S_i,S̃_i)|Ŵ = w] Pr(Ŵ = w)
≤ P_e_cd_max + 1/n∑_i=1^n 𝔼[d(S_i,S̃_i)|Ŵ = w] (1-P_e_c).
Note that (s^n, x^n(w), s̃^n) ∈𝒯^(n)_ϵ_e (SXS̃s,x,s̃) where SXS̃s,x,s̃ denotes the joint marginal distribution of SXZS̃s,x,z,s̃ = SsXxZ|SXz|s,x𝕀{s̃=s̃^⋆(x,z)}, we have
lim sup_n →∞1/n∑_i=1^n 𝔼[d(S_i,S̃_i)|Ŵ = w] ≤ (1+ϵ_e) 𝔼[d(S,S̃)],
according to the typical average lemma <cit.>. In summary, the total distortion can be attained by
Δ^(n) =^(a)Δ_s^(n) + Δ_c^(n)
≤^(b) D_c + (P_e_s+ 2P_e_c)d_max + (1+ϵ_e)(1-P_e_c)D_s,
where (a) follows the condition of the separable distortion metric in the MSST and we omit the terms containing the product of P_e_s and P_e_c in step (b). Consequently, taking n →∞ and P_e_s, P_e_c, ϵ_c,ϵ_e,ϵ_s → 0, we can conclude that the expected total distortion (averaged over the random codebooks, state and channel noise) tends to D = D_c+D_s whenever R^IT(D_c) ≤ R ≤ C^IT(D_s, B).
This completes the proof of the proposed MSST. ▪
Remark: The sensing process determines the accuracy of the state information acquisition at the BS, whereas the communication process governs the quantity of information transmitted to the user. Overemphasis on either process can result in substantial performance degradation. For instance, achieving optimal sensing performance alone may not ensure the accurate reconstruction at the end-user's side due to limitations in the communication channel capacity. Conversely, excessive communication capacity may be wasted if the sensory data lacks sufficient accuracy. This underscores the need to investigate optimal channel input as well as the waveform design that minimize the total distortion.
§ CHANNEL INPUT DISTRIBUTION OPTIMIZATION FOR THE CAS SYSTEMS
In this section, we explore the optimal input distribution for the CAS system under scalar channels, aiming to minimize the total distortion under the MSST constraint. It should be noted that the MSST applies only when the distortion metric is separable. Fortunately, we demonstrate that the MSE or quadratic distortion, a widely used distortion metric in parameter estimation, satisfies the separability condition. Specifically, we have
D = 𝔼[‖𝖲-𝖲̂‖_2^2]
=^(a)𝔼[‖𝖲- 𝖲̃‖_2^2 ]+𝔼[‖𝖲̃ - 𝖲̂‖_2^2 ] ≜ D_s+D_c,
where (a) holds from the properties of the conditional expectation <cit.>
𝔼[(𝖲- 𝖲̃)^T(𝖲̃ - 𝖲̂)]= 𝔼[(𝖲-𝔼[ 𝖲|𝖹, 𝖷])^Tf(𝖹, 𝖷)]=0,
with 𝖲̃=𝔼[ 𝖲|𝖹, 𝖷] being the optimal estimator in (<ref>) [We highlight that the (<ref>) is exactly the minimum MSE (MMSE) estimator. An arbitrary non-optimal estimator except MMSE, which cannot apply the conditional expectation properties, may not satisfy the separability condition. ].
Hereinafter, we adopt the MSE as the distortion metric to guarantee that the MSST holds. Meanwhile, we drop the superscript “IT” in the MSST.
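The separability in (<ref>) can be checked numerically. The toy example below uses a real-valued scalar Gaussian state, the MMSE estimate, and a uniform quantizer standing in for the lossy communication link; all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
nu2, n = 2.0, 200_000
S = rng.normal(0.0, np.sqrt(nu2), n)             # state
Z = S + rng.normal(0.0, 1.0, n)                  # unit-gain observation, unit-variance noise
S_tilde = (nu2 / (nu2 + 1.0)) * Z                # MMSE estimate E[S | Z]
S_hat = np.round(S_tilde / 0.5) * 0.5            # crude reconstruction at the user

cross = np.mean((S - S_tilde) * (S_tilde - S_hat))   # orthogonality term, close to 0
D_total = np.mean((S - S_hat) ** 2)
D_split = np.mean((S - S_tilde) ** 2) + np.mean((S_tilde - S_hat) ** 2)
print(cross, D_total, D_split)                   # D_total ≈ D_s + D_c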
§.§ Problem Formulation
The ISAC channel input design can be formulated by the following optimization problem [Here, we use the symbol I(D_s,B) instead of C(D_s,B) to emphasize that the optimal distribution does not necessarily achieve the channel capacity.]
𝒫_0: min_P_𝖷(x) D = D_c + D_s
subject to R(D_c) ≤ I(D_s, B).
By substituting the associated expressions into (<ref>), problem 𝒫_0 can be reformulated by
𝒫_1: min_P_𝖷(x) 𝔼[d(S,S̃)] + 𝔼[d(S̃,Ŝ)]
subject to I(𝖲̃;Ŝ) ≤ I(X;Y),
𝔼[b(X)] ≤ B.
It is challenging to directly solve the functional optimization problem 𝒫_1 due to the difficulty in obtaining explicit expressions of the distortions and MI for arbitrary distributions. Typically, the BA algorithm can be employed to solve problem 𝒫_1 numerically. Inspired by this idea, we expand the expectation operations and the MI with respect to the variable P_𝖷(x) as follows.
∙ Sensing distortion:
𝔼[d(S,S̃)] = 𝔼_𝖷,𝖹[𝔼[d(𝖲,s̃(𝖷,𝖹))|𝖷,𝖹]]
= ∑_x,z P_𝖷𝖹(x,z) ∑_s Q_𝖲|𝖷𝖹(s|x,z) d(s,s̃(x,z))
≜∑_x P_𝖷(x) e(x),
where the sensing cost e(x) defined in (<ref>) can be expressed by
e(x)=∑_z Q_𝖹|𝖷(z|x) ∑_s Q_𝖲|𝖷𝖹(s|x,z) d(s,s̃(x,z)).
∙ Channel MI:
I(𝖷;𝖸) = ∑_x,y P_𝖷(x) Q_𝖸|𝖷(y|x) logQ_𝖷|𝖸(x|y)/P_𝖷(x).
∙ Resource budget:
𝔼[b(X)] = ∑_x P_𝖷(x)b(x).
∙ Communication distortion:
𝔼[d(𝖲̃,Ŝ)] = ∑_s̃ P_𝖲̃(s̃) ∑_ŝ Q_Ŝ|𝖲̃(ŝ|s̃)d(s̃,ŝ).
∙ Rate distortion function:
I(𝖲̃;Ŝ) = ∑_s̃,ŝ P_𝖲̃(s̃) Q_Ŝ|𝖲̃(ŝ|s̃) logQ_𝖲̃|Ŝ(s̃|ŝ)/P_𝖲̃(s̃).
Here, we emphasize the unique challenge in solving problem 𝒫_1. In the conventional BA algorithm, the primary approach involves constructing the Lagrangian function with respect to the variable and obtaining the optimal solution from the first-order necessary condition. However, the probability distribution P_𝖲̃(s̃) in (<ref>) and (<ref>) is indeed a function of the variable P_𝖷(x), as described in (<ref>). Consequently, deriving a closed-form expression of P_𝖷(x) from the first-order necessary condition is challenging, which poses a significant obstacle to solving problem 𝒫_1 with the conventional BA algorithm. To address this issue, we propose a two-step BA-based optimal search method to seek a sub-optimal solution in the following subsection.
§.§ Two-step BA-based Optimal Search Algorithm
We divide the original problem 𝒫_1 into two sub-problems.
1) Sub-problem 1: We determine the constrained communication channel capacity I(X;Y) for a given sensing distortion D_s and identify the corresponding optimal distribution P_𝖷(x).
2) Sub-problem 2: We calculate the minimum communication distortion D_c achievable with the given channel input distribution P_𝖷(x), thereby obtaining the total distortion D.
By varying the preset sensing distortions, one may generate a set 𝒟 collecting the total distortions calculated through the above two steps. Thus, the minimum total distortion and its corresponding distribution P_𝖷(x) can be identified by the minimum value search. The detailed procedure is as follows.
∙ Constrained communication channel Capacity for a given sensing distortion.
In this sub-problem, I(𝖷;𝖸) is maximized under the constraints of sensing distortion D_s and resource budget B, which can be expressed by
max_P_𝖷(x) 10pt I(𝖷;𝖸)
subject to 5pt ∑_x P_𝖷(x) e(x) ≤ D_s, 5pt ∑_x P_𝖷(x)b(x) ≤ B.
Inspired by <cit.>, we reformulate (<ref>) by incorporating the sensing distortion cost as a penalty term in the objective function, i.e.,
max_P_𝖷(x)    I(P_𝖷(x), Q_𝖸|𝖷(y|x)) - μ∑_x P_𝖷(x) e(x)
subject to    ∑_x P_𝖷(x) b(x) ≤ B,
where μ is the penalty factor that balances the weights of the sensing and communication (S&C) performance. For a given μ, the Lagrangian function of (<ref>) can be written by
ℒ(P_𝖷(x), λ) = I(P_𝖷(x), Q_𝖸|𝖷(y|x))
- μ∑_x P_𝖷(x) e(x) + λ∑_x P_𝖷(x)b(x).
By setting the derivative ∂ℒ(P_𝖷(x), λ) / ∂ P_𝖷(x) to zero and taking the fact that ∑_x P_𝖷(x)=1 into account, we have
P_𝖷(x) = e^h(x)/∑_x' e^h(x'),
with
h(x) = ∑_y Q_𝖸|𝖷(y|x) log Q_𝖷|𝖸(x|y)-μ e(x)-λ b(x).
In formula (<ref>), the Lagrangian multiplier λ and the posterior distribution Q_𝖷|𝖸(x|y) are unknown. The former may be determined by a bisection search method such that ∑_x P_𝖷(x)b(x) → B. For the latter, we have the optimal posterior distribution which maximizes the MI with given P_𝖷(x) and Q_𝖸|𝖷(y|x), expressed by <cit.>
Q_𝖷|𝖸(x|y) = P_𝖷(x)Q_𝖸|𝖷(y|x)/∑_x'P_𝖷(x')Q_𝖸|𝖷(y|x').
After setting an initial distribution P_𝖷^(0)(x), problem (<ref>) can be solved by updating (<ref>) and (<ref>) iteratively. The detailed procedure is summarized in Algorithm <ref>, whose convergence can be guaranteed <cit.>.
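For concreteness, a minimal Python/NumPy sketch of this inner iteration is given below. The discretized grids for x and y, the channel matrix Q_𝖸|𝖷, and the fixed multiplier λ (which would in practice be tuned by the bisection search mentioned above so that the budget constraint is met) are illustrative assumptions rather than part of the algorithm specification.

import numpy as np

def ba_subproblem1(Q_YX, e, b, mu, lam, n_iter=200, tol=1e-9):
    """Modified BA update for sub-problem 1 with fixed penalty mu and multiplier lam.

    Q_YX : (nx, ny) channel law Q_{Y|X}(y|x) on discretized grids (assumption).
    e, b : (nx,) sensing cost e(x) and resource cost b(x).
    """
    nx = Q_YX.shape[0]
    P_X = np.full(nx, 1.0 / nx)                      # initial distribution P_X^(0)
    for _ in range(n_iter):
        # posterior Q_{X|Y} that maximizes the MI for the current P_X
        joint = P_X[:, None] * Q_YX                  # indexed by (x, y)
        Q_XgY = joint / (joint.sum(axis=0, keepdims=True) + 1e-300)
        # exponent h(x) = sum_y Q_{Y|X}(y|x) log Q_{X|Y}(x|y) - mu e(x) - lam b(x)
        h = (Q_YX * np.log(Q_XgY + 1e-300)).sum(axis=1) - mu * e - lam * b
        P_new = np.exp(h - h.max())
        P_new /= P_new.sum()
        if np.max(np.abs(P_new - P_X)) < tol:
            return P_new
        P_X = P_new
    return P_X

In practice, lam would be adjusted by bisection until ∑_x P_𝖷(x) b(x) approaches B, as stated above.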
∙ Source distribution P_𝖲̃(s̃).
The tradeoff between sensing distortion and constrained communication capacity can be adjusted by varying the penalty factor μ. Let us denote the optimal channel input distribution, sensing distortion, and channel MI obtained by Algorithm <ref> as P_𝖷^(μ)(x), D_s^(μ), and I^(μ)(X;Y), respectively, to highlight their dependence on the factor μ.
Subsequently, given a specific channel input distribution P_𝖷^(μ)(x), our objective is to compute the estimate distribution P_𝖲̃^(μ)(s̃), which is essential for determining the rate-distortion function. However, deriving the explicit expression of the estimate distribution for an arbitrary state distribution P_𝖲(s) is a challenging task. Instead, we can compute the estimate S̃ and its distribution P_𝖲̃^(μ)(s̃) using formulas (<ref>) and (<ref>), respectively, by performing a sufficiently large number of random trials. For analytical convenience, we consider a Gaussian state distribution and a linear sensing model, which allows for explicit expressions to be derived.
Assume that the target's state follows Gaussian distribution with 𝒞𝒩(0,ν_s^2). Let us consider the following linear sensing model
𝖹=𝖷𝖲+𝖭,
where 𝖭∼𝒞𝒩(0,1) represents Gaussian channel noise. In such a Gaussian linear model, the estimator (<ref>) is indeed an MMSE estimator for each channel realization x. Therefore, the estimate 𝖲̃ and the corresponding conditional probability distribution P_𝖲̃|𝖷(s̃|x) can be expressed by
𝖲̃ = x ν_s^2/(1+x^2ν_s^2) 𝖹,    P_𝖲̃|𝖷(s̃|x) ∼𝒞𝒩(0, x^2ν_s^4/(1+x^2ν_s^2)).
Moreover, sensing cost is the MSE with the given realization x, which can be written by
e(x)= ν_s^2/(1+x^2ν_s^2).
Consequently, the source distribution can be given by
P_𝖲̃^(μ)(s̃) = ∑_x P_𝖷^(μ)(x) P_𝖲̃|𝖷(s̃|x).
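The closed-form quantities above are simple enough to evaluate directly; the short sketch below collects them, with the discrete support for 𝖷 being an illustrative assumption.

import numpy as np

def mmse_gain(x, nu_s2):
    """MMSE gain of the estimate for Z = x S + N, S ~ CN(0, nu_s2), N ~ CN(0, 1)."""
    return x * nu_s2 / (1.0 + x**2 * nu_s2)

def estimate_variance(x, nu_s2):
    """Variance of the estimate S~ conditioned on X = x."""
    return x**2 * nu_s2**2 / (1.0 + x**2 * nu_s2)

def sensing_cost(x, nu_s2):
    """Sensing cost e(x) = nu_s2 / (1 + x^2 nu_s2)."""
    return nu_s2 / (1.0 + x**2 * nu_s2)

# The marginal of S~ under a discrete P_X is then the Gaussian mixture
# P_{S~}(s~) = sum_x P_X(x) CN(0, estimate_variance(x, nu_s2)).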
∙ Communication and total distortions.
In this sub-problem, we aim to evaluate the minimum communication distortion D_c^(μ) achieved under the source distribution P_𝖲̃^(μ)(s̃) and the channel capacity I^(μ)(X;Y). This scenario corresponds to a typical rate-distortion problem in lossy data transmission. The distortion-rate curve can be obtained by the optimization problem <cit.>
min_Q_Ŝ|𝖲̃(ŝ|s̃)    I(𝖲̃;Ŝ) - λ_s D_c,
where λ_s represents the slope of the distortion-rate curve. The conventional BA algorithm can be leveraged to solve problem
(<ref>). By following a similar procedure in sub-problem 1, the optimal solution can be calculated by
Q_Ŝ|𝖲̃(ŝ|s̃) = P_Ŝ(ŝ) e^λ_s d(s̃,ŝ)/∑_ŝ' P_Ŝ(ŝ') e^λ_s d(s̃,ŝ').
The unknown marginal distribution P_Ŝ(ŝ) can be updated by
P_Ŝ(ŝ) = ∑_s̃ P_𝖲̃(s̃)Q_Ŝ|𝖲̃(ŝ|s̃).
By setting an initial conditional distribution Q_Ŝ|𝖲̃^(0)(ŝ|s̃), the optimal solution of (<ref>) can be obtained by updating (<ref>) and (<ref>) iteratively. The detailed derivations can be found in <cit.>.
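A minimal NumPy sketch of this standard BA iteration at a given slope is shown below; the discrete grids for s̃ and ŝ and the form of the distortion matrix are illustrative assumptions.

import numpy as np

def ba_rate_distortion(P_src, d, lam_s, n_iter=500):
    """BA iteration for the distortion-rate problem at slope lam_s (lam_s < 0).

    P_src : (ns,) source distribution P_{S~}(s~) on a grid.
    d     : (ns, nh) distortion matrix d(s~, s^).
    """
    nh = d.shape[1]
    P_hat = np.full(nh, 1.0 / nh)                    # initial marginal P_{S^}
    for _ in range(n_iter):
        Q = P_hat[None, :] * np.exp(lam_s * d)       # unnormalized Q_{S^|S~}
        Q /= Q.sum(axis=1, keepdims=True)
        P_hat = P_src @ Q                            # marginal update
    D_c = float((P_src[:, None] * Q * d).sum())      # communication distortion at this slope
    R = float((P_src[:, None] * Q *
               np.log(Q / (P_hat[None, :] + 1e-300) + 1e-300)).sum())
    return Q, D_c, R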
For a given slope λ_s, the tangent point ( D^(λ_s)_c, I^(λ_s)(𝖲̃;Ŝ) ) of the distortion-rate curve can be obtained by solving problem (<ref>). We may collect a series of distortion-rate data points to fit a distortion-rate function by varying the slope λ_s. Thus, the communication distortion constrained by a given channel capacity is attained by
D_c^(μ) = D(I^(μ)(𝖷;𝖸)).
We summarize the communication distortion computation algorithm into Algorithm <ref>.
In the outer loop, we may collect a set 𝒟 of total distortions D^(μ) and the associated channel input distributions P_𝖷^(μ)(x) by repeating the above steps with various penalty factors μ. Therefore, the optimal channel input distribution that minimizes the total distortion can be found by
D^(μ^⋆) = min_D ∈𝒟 D,    P_𝖷^(⋆)(x) = P_𝖷^(μ^⋆)(x).
The proposed two-step BA-based optimal search algorithm is summarized in Algorithm <ref>.
§ WAVEFORM DESIGN FOR GAUSSIAN CHANNEL INPUT
The previous section illustrates that using Gaussian channel input may not be optimal for CAS systems. Despite this limitation, the assumption of Gaussian signals is prevalent in communications research literature. In this section, we introduce an ISAC waveform design scheme specifically developed for multi-input multi-output (MIMO) systems, while adhering to the framework of Gaussian signaling.
§.§ System Model
The widely employed signal models of the S&C processes for MIMO systems are expressed by
𝐙=𝐇_s𝐗+𝐍_s,    𝐘=𝐇_c𝐗+𝐍_c,
where 𝐗∈ℂ^N_t × T, 𝐙∈ℂ^M_s × T, and 𝐘∈ℂ^M_c × T represent the transmitted ISAC signal and the received signals at the BS and the end-user, respectively; 𝐇_s and 𝐇_c denote the S&C channel state information (CSI) matrices; 𝐍_s and 𝐍_c are the channel noises whose entries follow the complex Gaussian distributions 𝒞𝒩(0,σ_s^2) and 𝒞𝒩(0,σ_c^2), respectively; finally, N_t, M_s, M_c, and T denote the numbers of Tx antennas, S&C receiver (Rx) antennas, and transmitted symbols, respectively.
We focus on the sensing task of target response matrix (TRM) estimation, where the to-be-estimated parameters can be written as 𝐬 = vec(𝐇_s). After vectorization operations, the sensing signal observed at the BS can be recast as
𝐳=(𝐈_M_s⊗𝐗^H )𝐬+𝐧_s,
with 𝐳=vec(𝐙^H), and 𝐧_s = vec(𝐍_s^H). In the CAS process, the BS estimates the TRM 𝐬̃ from the observations 𝐳, then transmits it to the user through the communication channel 𝐇_c. Subsequently, the user can reconstruct 𝐬̂ from the received communication data 𝐘. Here, we adopt the quadratic distortion metric (MSE), which meets the separability condition as shown in (<ref>). Before proceeding to the ISAC waveform design, we make the following assumptions.
Assumption 1: The ISAC waveform 𝐗 follows complex Gaussian distribution with 𝒞𝒩(0, 𝐑_x), where 𝐑_x ∈ℂ^N_t × N_t is the covariance matrix.
Assumption 2: The trm vector 𝐬 follows complex Gaussian distribution 𝒞𝒩(0,𝐈_M_s⊗Σ_s), where Σ_s ∈ℂ^N_t × N_t denotes the covariance matrix of each column of 𝐇_s.[This assumption corresponds to the scenario that the Rx antennas for sensing are sufficiently separated so that the correlations among the rows of 𝐇_s can be ignored <cit.>. Here, we specify this Kronecker structure in TRM covariance matrix to simplify the expression of sensing distortion <cit.>. To avoid the deviation of our core contribution in this paper, we will leave the general TRM covariance matrix cases for future research.]
The above assumptions enable us to derive explicit expressions of the S&C distortions, the distribution of the estimate 𝐬̃, and the communication channel capacity with respect to the ISAC waveform matrix 𝐗.
1) Sensing distortion: For the Gaussian linear model (<ref>), the optimal estimator (<ref>), i.e., the well-known MMSE estimator, is leveraged to obtain the estimate 𝐬̃ as
𝐬̃ = (𝐈_M_s⊗(Σ_s𝐗(𝐗^H Σ_s 𝐗 + σ^2_s𝐈_T)^-1)) 𝐳.
Furthermore, the sensing distortion is attained by <cit.>
D_s (𝐑_x) = 𝔼[‖𝐬-𝐬̃‖^2] = M_s tr[ ( 1/σ^2_s𝐑_x + Σ_s^-1)^-1 ].
2) Communication distortion: The estimate 𝐬̃ in (<ref>) follows the complex Gaussian distribution with 𝒞𝒩(0,𝐈_M_s⊗𝐑_s̃), with the covariance matrix of <cit.>
𝐑_𝐬̃ = Σ_s - ( 1/σ^2_s𝐑_x + Σ_s^-1)^-1.
Let us temporarily omit the Kronecker product. For a Gaussian source 𝒞𝒩(0, 𝐑_s̃), the rate-distortion function can be characterized by the following optimization problem <cit.>
R(D_c) = min_𝐃∈𝕊_N_t^+  log( det(𝐑_s̃)/det(𝐃) )
subject to    𝐃≼𝐑_s̃,    tr(𝐃) ≤ D_c,
where 𝕊_N_t^+ denotes the set of all N_t × N_t positive definite matrices. More details on (<ref>) can be found in the proof of <cit.>.
3) Channel Capacity: The MIMO Gaussian channel capacity can be expressed by
C(𝐑_x) = I(𝐘;𝐗|𝐇_c) = log det( 1/σ^2_c𝐇_c𝐑_x𝐇_c^H+𝐈_M_c).
§.§ ISAC Waveform Design
Observe that the expressions (<ref>) to (<ref>) are all related to the covariance matrix 𝐑_x rather than the specific waveform matrix 𝐗. Consequently, our ISAC waveform design problem can be transformed into determining the optimal covariance matrix that minimizes the total distortion. Regarding the system resource constraint, we only consider the transmit power at the BS. By substituting the expressions into the original problem 𝒫_1 and introducing an auxiliary variable of positive definite matrix 𝐃, the ISAC waveform design problem can be reformulated by
𝒫_2:   min_𝐑_x, 𝐃∈𝕊_N_t^+    D_s(𝐑_x) + tr(𝐃)
        subject to    C(𝐑_x) - M_s log( det(𝐑_s̃)/det(𝐃) ) ≥ 0,
                      𝐃≼𝐑_s̃,    𝐑_x ≽ 0,    tr(𝐑_x) ≤ P_T,
where P_T is the power budget. 𝒫_2 is non-convex due to the nonlinear intermediate terms log det(𝐑_s̃) and 𝐑_s̃. To relax 𝒫_2 into a convex problem, we employ a successive convex approximation (SCA) technique based on Taylor series expansion. To be specific, for any given point 𝐑_0, the matrix 𝐑_s̃ can be approximated by
𝐑_s̃≃Σ_s - 𝐏 + 1/σ_s^2 𝐏 (𝐑_x - 𝐑_0) 𝐏 ≜ 𝐑̃_s̃,
with the constant matrix defined by
𝐏= ( 1/σ^2_s𝐑_0 + Σ_s^-1)^-1.
Furthermore, we have
log det(𝐑_s̃) ≃ log det( Σ_s - 𝐏)
 + 1/σ_s^2 tr[ (Σ_s - 𝐏)^-1𝐏 (𝐑_x - 𝐑_0) 𝐏 ] ≜ f(𝐑_x).
Note that 𝐑̃_s̃ and f(𝐑_x) are both linear functions of the variable 𝐑_x for a given point 𝐑_0. By substituting (<ref>) and (<ref>) into 𝒫_2, the problem can be relaxed into
𝒫_3:   min_𝐑_x, 𝐃∈𝕊_N_t^+    D_s(𝐑_x) + tr(𝐃)
        subject to    C(𝐑_x) - M_s ( f(𝐑_x) - log det(𝐃) ) ≥ 0,
                      𝐑̃_s̃ - 𝐃≽ 0,    𝐑_x ≽ 0,    tr(𝐑_x) ≤ P_T.
For a given point 𝐑_0, 𝒫_3 is convex since both the objective and constraints are either convex or linear. Therefore, it can be efficiently solved by using the off-the-shelf CVX toolbox <cit.>. For an initial covariance matrix, e.g., 𝐑_0 = P_T/N_t𝐈, the non-convex problem 𝒫_2 can be addressed by iteratively computing the optimal solution 𝐑^⋆_x of 𝒫_3 and updating 𝐑_0 = 𝐑^⋆_x until convergence. The procedure is outlined in Algorithm <ref>.
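As an illustration, a real-valued sketch of one convexified subproblem 𝒫_3, written with the CVXPY modeling package rather than the MATLAB CVX toolbox cited above, is given below. The real-valued channels, the use of matrix_frac to express D_s(𝐑_x) = M_s tr[(𝐑_x/σ_s^2+Σ_s^-1)^-1], and the choice of conic solver are all assumptions of this sketch.

import numpy as np
import cvxpy as cp

def solve_P3(R0, Sigma_s, Hc, sigma_s2, sigma_c2, Ms, PT):
    """Solve one convexified subproblem P3 around the expansion point R0 (real-valued sketch)."""
    Nt = Sigma_s.shape[0]
    Sigma_inv = np.linalg.inv(Sigma_s)
    P = np.linalg.inv(R0 / sigma_s2 + Sigma_inv)          # constant matrix P
    A = np.linalg.inv(Sigma_s - P) @ P                    # constant used in f(R_x)

    Rx = cp.Variable((Nt, Nt), PSD=True)
    D = cp.Variable((Nt, Nt), PSD=True)

    R_lin = Sigma_s - P + (1.0 / sigma_s2) * P @ (Rx - R0) @ P          # linearized R_s~
    f = np.log(np.linalg.det(Sigma_s - P)) \
        + (1.0 / sigma_s2) * cp.trace(A @ (Rx - R0) @ P)                # linearized log det
    Ds = Ms * cp.matrix_frac(np.eye(Nt), Rx / sigma_s2 + Sigma_inv)     # M_s tr[(.)^-1]
    C = cp.log_det(np.eye(Hc.shape[0]) + Hc @ Rx @ Hc.T / sigma_c2)     # channel capacity

    constraints = [C - Ms * (f - cp.log_det(D)) >= 0,
                   R_lin - D >> 0,
                   cp.trace(Rx) <= PT]
    cp.Problem(cp.Minimize(Ds + cp.trace(D)), constraints).solve(solver=cp.SCS)
    return Rx.value

# SCA outer loop: R0 = (PT / Nt) * np.eye(Nt); repeat R0 = solve_P3(R0, ...) until convergence.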
§ SIMULATION RESULTS
The simulation results are divided into two sections. The first section evaluates the optimal distribution for the ISAC channel input in scalar cases, demonstrating the effectiveness of the proposed MSST and the two-step BA-based optimal search algorithm. Moreover, the unique distortion-capacity (D-C) curves of the CAS system are provided to illustrate the performance tradeoff between the S&C processes. The second section presents the results of the optimal waveform design for the Gaussian MIMO channel, showing its superiority over the S&C-optimal schemes.
§.§ Optimal Distribution for ISAC Channel Input
In this subsection, we aim to reveal the unique performance tradeoff between the S&C processes. The simulation parameters are set as follows. The prior distribution of the target's state is assumed to be 𝖲∼𝒞𝒩(0,1), i.e., ν_s^2=1. We define the normalized signal-to-noise ratios (SNRs) for the S&C channels as 10log(1/σ^2) dB and set the power budget to be B=5. Unless otherwise specified, the S&C channel SNRs are set to SNR_s=SNR_c=0 dB in this subsection. To improve the computational efficiency, the weighted factor μ takes values from the intervals [0,1], [1,5], and [5,30] with non-uniform spacings of 0.1, 0.5, and 5, respectively.
§.§.§ The D-C curves and the optimal distribution of ISAC channel input
Fig. <ref> presents the D-C curves for both S&C processes. As expected, the sensing distortion D_s increases as the value of μ decreases, implying that the estimation performance at the BS becomes less significant. Conversely, the communication channel capacity increases accordingly, leading to a reduction in the communication distortion D_c. Consequently, these opposing trends in the S&C distortions result in an uncertain total distortion. In Fig. <ref>, we show the resultant D-C curve for the CAS systems with different sensing SNRs. A CAS-optimal point that minimizes the total distortion is evident, highlighting a unique phenomenon in CAS systems compared to the existing studies <cit.>.
Fig. <ref> depicts the specific channel input distributions corresponding to three special points in Fig. <ref>, namely, the sensing-optimal, communication-optimal, and CAS-optimal points. The channel capacity is maximized without the sensing constraint when μ = 0. It is well-known that Gaussian channel input achieves the maximum MI for Gaussian channels. By contrast, at the sensing-optimal point, as μ becomes sufficiently large, the channel input exhibits a 2-ary pulse amplitude modulation. These results are consistent with the findings in <cit.>. However, the unique CAS-optimal distribution, representing a compromise between the S&C-optimal distributions, is attained at μ = 2.5.
§.§.§ The impact of the target's prior variance
Subsequently, we investigate the impact of the target's state variance by selecting the values of ν_s^2 ∈{0.1,0.2,0.5,5}. Due to the significant differences in the order of magnitude of the values for MI and sensing distortions in (<ref>), we adjust the interval of μ for each state variance rather than using a fixed interval. The essential purpose of the CAS systems is to reduce the uncertainty of the target's prior information, specifically the variance of the state's prior distribution.
In Fig. <ref>, each point represents a lower total distortion compared to the prior variance, even at the S&C-optimal points. This reduction is attributed to the resource multiplexing gain achieved through the dual-functional signaling strategy. Additionally, we observe that the CAS-optimal point gradually shifts from sensing-optimal to communication-optimal as the variance increases. A small variance indicates relatively accurate prior information about the target's state, necessitating an ISAC channel input with strong sensing capabilities for better accuracy performance. Conversely, as the variance increases, greater emphasis on communication performance is required to convey more information.
We provide the detailed distributions of the CAS-optimal channel input for different state variances. For a small variance (e.g., ν_s^2=0.1), the sensing distortion becomes small accordingly. It is at least less than the prior variance; otherwise, the user can `guess' the state information in terms of the prior distribution. Therefore, a large weighted factor is required, e.g., μ = 690, to balance the values of MI and sensing distortion in (<ref>). In this case, the CAS-optimal distribution aligns with the sensing-optimal distribution, specifically a standard 2-ary pulse amplitude modulation. As anticipated, the optimal μ values decrease with increasing state variance, and the CAS-optimal distribution changes accordingly.
§.§ Optimal Waveform Design for Gaussian Signal
In this subsection, we demonstrate the effectiveness of the proposed ISAC waveform design scheme for Gaussian signaling. The general system setups are as follows. The communication channel is modeled by Rayleigh fading, where each entry of 𝐇_c obeys the standard complex Gaussian distribution. All results are obtained by the average of 100 Monte Carlo trials and the average distortion in each Monte Carlo trial is calculated by D/(M_sN_t).
§.§.§ Waveform design for a 2-D special case
We begin with a two-dimensional (2-D) special case where the state covariance matrix, Σ_s = diag{a_1,a_2}, is assumed to be diagonal. In this scenario, the global optimal solution of problem 𝒫_2 can be attained through a 2-D exhaustive search, as outlined in <cit.>. Thus, this optimal solution [The optimal eigenspace of 𝐑_x must align with the communication eigenspace of 𝐇_c^H𝐇_c as the sensing eigenspace is an arbitrary unitary matrix. Thus, the original problem can be reduced to finding two optimal eigenvalues over 2-D grids, with the sum of each 2-D point equal to P_T.] can serve as a benchmark for assessing the superiority of our method. Accordingly, the numbers of Tx and S&C Rx antennas are set to N_t=M_c=M_s=2.
Fig. <ref> illustrates the impact of sensing channel quality on the achievable average distortion. The sensing SNRs vary from 0 dB to 20 dB with a spacing of 2 dB. Besides the global optimal benchmark, the S&C-optimal waveform schemes refer to the solutions that minimize the MMSE in (<ref>) and maximize the channel capacity in (<ref>), respectively. At a glance, we observe that the achievable average distortion decreases with increasing sensing channel SNRs. Furthermore, the sensing-optimal scheme is preferable for acquiring accurate target information when the communication channel is sufficiently good at SNR_c=5 dB. However, the sensing-optimal scheme may cause significant performance degradation as the communication SNR decreases. Particularly, the sensing-optimal curve even tends to be flat at SNR_c=-5 dB, implying that a poor communication channel severely limits performance improvement. This clearly highlights the unique tradeoff between the S&C processes in CAS systems. Finally, it is worth noting that the proposed algorithm achieves satisfactory performance compared to the global optimal solution.
In Fig. <ref>, we investigate the impact of state variances on the achievable average distortion in the MIMO systems. The state variances are scaled by a factor ranging from 2 to 10. Two interesting observations emerge. 1) For small state variances, the global optimal scheme coincides with the sensing-optimal scheme at SNR_c = 10 dB. However, both S&C-optimal schemes experience performance degradation as state variances increase. 2) When the communication channel quality is insufficient at SNR_c = -5 dB, the global optimal scheme tends towards the communication-optimal scheme to enhance communication capabilities. As state variances increase, the performance degradation of the sensing-optimal scheme becomes more significant. Nevertheless, our proposed algorithm can effectively balance the S&C processes, resulting in improved average distortion.
§.§.§ Waveform design for general MIMO cases
Subsequently, we extend the results to high-dimensional matrix scenarios, where the number of Tx antennas is N_t = 10, and the numbers of S&C Rx antennas are M_s = 2 and M_c = 5, respectively. The state covariance matrix Σ_s is randomly generated as an N_t × N_t Hermitian matrix. It is worth noting that obtaining the global optimal solution is challenging due to the highly coupled S&C processes. For comparison purposes, we provide the results of the heuristic algorithm proposed in <cit.>, whose formulation is given by
max_𝐑_x    β I(𝐘;𝐗|𝐇_c) + (1-β) I(𝐙;𝐇_s|𝐗)
subject to    tr(𝐑_x) ≤ P_T,
where I(𝐙;𝐇_s|𝐗) = log det( 1/σ^2_sΣ_s𝐑_x+𝐈_N_t) is the sensing MI and I(𝐘;𝐗|𝐇_c) is defined in (<ref>). The weighting factor β takes values over the interval [0,1] with L grid points. In our simulations, L is set to 11.
Fig. <ref> illustrates the average distortions versus sensing channel SNRs for various waveform design schemes. It can be observed that the proposed algorithm outperforms its counterparts. Similar trends to those shown in Fig. <ref> for the 2-D case are also noticeable. Besides, we can observe that the proposed ISAC waveform design achieves significantly greater performance gains in high-dimensional scenarios.
Fig. <ref> and Fig. <ref> demonstrate the influence of the numbers of S&C Rx antennas on the average distortion across various waveform design schemes. Fig. <ref> exhibits a linear increase in average distortion with the addition of sensing Rx antennas. This linear behavior is consistent with the Kronecker product form in Assumption 2. Moreover, regarding the TRM estimation, an increase in sensing Rx antennas necessitates more channel information to be estimated, leading to a larger average distortion.
In contrast, Fig. <ref> shows a declining trend in average distortion as the number of communication Rx antennas increases. This performance improvement is attributed to enhanced communication channel capacity facilitated by multiple Rx antenna gain, thereby improving overall system performance. Additionally, under the communication-optimal scheme, there is a gradual performance degradation as channel capacity increases. This trend arises because improving sensing performance becomes critical in scenarios with excessive communication channel capacity. Once again, the proposed ISAC waveform design method is superior to the other counterparts in all scenarios.
§ CONCLUSION
In this paper, we delve into the dual-functional signaling strategy for communication-assisted sensing (CAS) systems across three primary aspects. Firstly, we develop a modified source-channel separation theorem (MSST) tailored for CAS systems under the condition of separable distortion metric. We elucidate the operational meaning of the proposed MSST from the perspective of coding theory. Secondly, we develop an input distribution optimization scheme for the CAS system under scalar channels, which minimizes the total distortion while adhering to the MSST and resource constraints. We determine the optimal distribution for CAS systems, balancing between the conventional communication-optimal (Gaussian distribution) and sensing-optimal (2-ary pulse amplitude modulation) schemes. Thirdly, we propose an ISAC waveform design method for Gaussian signaling in MIMO CAS systems, where a successive convex approximation (SCA) algorithm is conceived to solve for the optimal covariance matrix. Simulation results show the unique performance tradeoff between the sensing and communication processes.
|
http://arxiv.org/abs/2409.02489v1 | 20240904073301 | NeuroSpex: Neuro-Guided Speaker Extraction with Cross-Modal Attention | [
"Dashanka De Silva",
"Siqi Cai",
"Saurav Pahuja",
"Tanja Schultz",
"Haizhou Li"
] | cs.SD | [
"cs.SD",
"cs.AI",
"eess.AS"
] |
§ ABSTRACT
In the study of auditory attention, it has been revealed that there exists a robust correlation between attended speech and elicited neural responses, measurable through electroencephalography (EEG). Therefore, it is possible to use the attention information available within EEG signals to guide the extraction of the target speaker in a cocktail party computationally. In this paper, we present a neuro-guided speaker extraction model, i.e. NeuroSpex, using the EEG response of the listener as the sole auxiliary reference cue to extract attended speech from monaural speech mixtures. We propose a novel EEG signal encoder that captures the attention information. Additionally, we propose a cross-attention (CA) mechanism to enhance the speech feature representations, generating a speaker extraction mask. Experimental results on a publicly available dataset demonstrate that our proposed model outperforms two baseline models across various evaluation metrics.
Speaker extraction, EEG, selective auditory attention, cocktail party effect
§ INTRODUCTION
Humans can inherently focus on a specific audio source amidst multiple competing sources. This remarkable ability is referred to as selective auditory attention (SAA) in cocktail party scenarios <cit.>. It was found that two regions of the auditory cortex manage such selective auditory attention, allowing us to separate and enhance the voices of interest. However, people with hearing loss or other impairments find it hard to distinguish voices <cit.> because some frequency components are no longer perceived. Despite much progress, existing algorithms still struggle to effectively isolate and enhance the targeted speech amidst background noise <cit.>.
Studies in neuroscience have revealed a strong correlation between the attended speech and the neural response it elicits <cit.>, paving the way for auditory attention detection (AAD) from neural activities <cit.>. Such neural responses include Electrocorticography (ECoG) <cit.>, Magnetoencephalography (MEG) <cit.>, and EEG <cit.>. Among them, EEG provides a non-invasive, comparably easy-to-wear, and affordable choice for SAA tasks, especially for neuro-steered hearing aids <cit.>.
Speech separation and speaker extraction emulate human SAA to solve cocktail party problems. Speech separation algorithms are designed to separate a speech mixture into individual speech streams <cit.>. Conversely, speaker extraction <cit.> algorithms extract a target speaker's voice associated with an auxiliary reference cue. This cue serves as a distinct marker indicating the specific speech signal of interest to be isolated, typically providing identifiable information about the target or attended speech. Various auxiliary reference cues have been explored to guide speaker extraction systems. Among them, pre-enrolled speech signal <cit.> from the interested speaker is a commonly used reference signal. Moreover, inspired by human attention mechanisms, visual reference cues derived from video streams capturing hand and body gestures <cit.>, lip movements <cit.>, and direction information <cit.> have been studied. Furthermore, there's a growing interest in multimodal approaches that integrate audio-visual cues <cit.>. However, in real-world scenarios, it is not feasible to always access pre-enrolled speech from numerous speakers, and visually tracking the target speaker is often impractical.
In this paper, we seek to use the elicited EEG signal as the sole reference cue as it provides feedback from the human brain regarding attended speech. We hypothesize that the neural response can effectively inform a speaker extraction system about the content of the attended speech in real-time. Therefore, we propose a novel end-to-end speaker extraction model at the utterance level in this work, which utilizes the EEG signal temporally aligned with speech as the auxiliary reference cue. We train the model in a trial-independent setting on a public dataset and compare it with several baseline models on multiple evaluation metrics.
This paper is organized as follows. Section 2 summarizes the related work; Section 3 describes the architecture of our proposed model; Section 4 outlines the experimental setup, including the dataset and baseline models. We report our results in Section 5 and conclude the study in Section 6.
§ RELATED WORK
Recent advancements in EEG-based AAD and speech separation have opened a path to utilize EEG signals for separate attended speech in a multi-source speech mixture environment. The mixture is first separated into multiple single-speech signals and then the signal with the highest correlation with the EEG signal is selected as the attended speech <cit.>. However, these methods are limited by the requirement of the number of speakers to be known in advance and the high computational consumption during separation.
In the studies of AAD, a clean speech signal is often compared with the EEG signal to find their correlation. It was shown that <cit.> the performance of the AAD systems improves as the decision window size increases. However, such clean single-speech signals are not always available in real-world scenarios. This calls for the study on how to exploit the information of the attended speech from within the EEG signals. There have been studies on reconstructing the attended speech envelope <cit.> from EEG signals. This allows us to establish a relationship between the speech stimulus and its neural responses.
Multiple end-to-end time-domain studies have been proposed addressing different aspects. The Brain-Informed Speech Separation (BISS) <cit.> uses the reconstructed attended speech envelope from the EEG signal as the reference cue for speaker extraction. Inspired by BISS, the Brain-Enhanced Speech Denoiser (BESD) <cit.> and the follow-up work U-shaped BESD (UBESD) <cit.> followed a dual-module approach based on temporal convolutional networks (TCNs) to model EEG signals together with speech to denoise the speech mixture and obtain the attended speech. The Brain-Assisted Speech Enhancement Network (BASEN) <cit.> used a TCN and a Convolutional multi-layer Cross-Attention (CMCA) module for feature extraction and fusion, respectively. As a follow-up work, sparsity-driven BASEN <cit.> proposed two EEG channel selection methods during speech enhancement: residual Gumbel selection <cit.> and convolutional regularization selection.
Most recently, the neuro-steered speaker extraction (NeuroHeed) <cit.> was introduced, which consists of a self-attention (SA) <cit.> based EEG encoder that generates the reference cue. The speaker extraction task is performed in either an online or offline manner. The online system adopted auto-regressive feedback from previously extracted speech. Following up, the NeuroHeed+ <cit.> was introduced. It added an auxiliary AAD module to NeuroHeed to reduce the speaker confusion error.
Despite much progress, prior work has not explored how both spatial and temporal information from EEG signals can be exploited to build informative reference cues. Moreover, previous research has not studied how to fuse EEG and speech embeddings to make use of the temporal correlation and complementary information of the two types of signals during speech mask generation for speaker extraction. We are motivated to explore the above unused potentials for neuro-guided speaker extraction.
§ METHODOLOGY
We propose a neuro-guided speaker extraction algorithm, an end-to-end EEG-based speaker extraction model or NeuroSpex, comprising four components: speech encoder, EEG encoder, speaker extractor, and speech decoder as depicted in Fig. 1. This model is built based on ConvTasNet <cit.>, consisting of a speech encoder, separator, and decoder. Our model also has an EEG encoder module to guide the extractor by providing information on the attended speech. Given the mixture of attended and interfering speech in the time domain, our model aims to extract the attended speech using elicited neural response as the sole reference cue.
§.§ Speech Encoder
The speech encoder transforms the single-channel input mixture signal segment x ∈ℝ^T_s into a sequence of utterance-based embeddings X ∈ℝ^N_x× T_x, similar to the frequency analysis with the short-time Fourier transform. This is achieved by applying time-domain speech encoding <cit.>, that is, a 1D convolutional (Conv1D) layer followed by a Rectified Linear Unit (ReLU) activation function:
X = ReLU(Conv1D(x,1,N,K)) ∈ℝ^N_x× T_x
Here, the input channel size is 1 for the single-channel speech utterance with a sequence length T_s, which is a 4-second segment sampled at 8 kHz (as discussed in Section 4.1). The output X is a sequence of T_x embeddings, each with N_x dimensions, which are 3200 and 256, respectively. The kernel size K of the Conv1D is set to 20, while stride and padding are set to K/2.
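A minimal PyTorch sketch of this encoder, using the layer sizes quoted above, could look as follows; the exact padding convention is an assumption.

import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Conv1D + ReLU speech encoder: x (B, 1, T_s) -> X (B, N_x, T_x)."""
    def __init__(self, n_x=256, kernel=20):
        super().__init__()
        self.conv = nn.Conv1d(1, n_x, kernel_size=kernel,
                              stride=kernel // 2, padding=kernel // 2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

# A 4-second mixture at 8 kHz, shape (batch, 1, 32000), yields roughly 3200 frames of 256-dim embeddings.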
§.§ EEG Encoder
The proposed EEG encoder seeks to exploit complementary information of the attended speech contained in the multi-channel EEG signals and develops the EEG embeddings or the reference signal to guide the speaker extractor. The EEG encoder takes a 4-second segment of a 64-channel EEG signal y, sampled at 128 Hz (detailed in Section 4.1), as input and generates Y ∈ℝ^N_y× T_y, where T_y = 512 and N_y = 64. This output Y serves as the reference signal for the speaker extractor.
The EEG encoder has a pre-convolutional layer followed by a series of AdC blocks, each consisting of a multi-head attention layer and a depthwise convolution layer, as illustrated in Fig. <ref> (b). Specifically, the pre-convolutional layer (preConv) acts as a preliminary feature extractor, which extracts important features from the EEG signal segment y for subsequent layers. Then, n AdC blocks are stacked sequentially, where each AdC block consists of a multi-head attention layer <cit.> and a 1D depthwise convolutional layer <cit.>. We employ multi-head attention due to its ability to effectively capture temporal dependencies from sequential data, as applied in previous SAA research <cit.>. Furthermore, motivated by recent work <cit.> on encoding EEG signals, we use depthwise convolutions to capture frequency-specific spatial information from channels. Both the multi-head attention (mHA) and depthwise convolutional (dConv) layers have residual connections followed by layer normalization (LN). That is,
Y_0 = preConv(y) ∈ℝ^N_y× T_y
Y_i = AdC_i(Y_i-1) ∈ℝ^N_y× T_y
where Y_0 is the output of the preConv layer that is fed to the first AdC block (AdC_1). Y_i is the output of i^th AdC block where i ∈{1,...,n }. Each AdC block is designed as follows:
Y_mha = LN_i^mHA( mHA_i(Y_i-1,N,H) + Y_i-1) ∈ℝ^N_y× T_y
Y_i = LN_i^dConv( dConv_i(Y_mha,N,K) + Y_mha) ∈ℝ^N_y× T_y
where Y_mha is the output of the multi-head attention and N is the channel size that is 64 for all cases. The number of heads of the attention layer H and the kernel size of the depth-convolutional layer K are set to 2 and 10, respectively. The EEG encoder outputs Y from the final AdC block.
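For illustration, a PyTorch sketch of a single AdC block with the values quoted above (N = 64, H = 2, K = 10) is shown below; the padding/trimming policy of the depthwise convolution is an assumption.

import torch
import torch.nn as nn

class AdCBlock(nn.Module):
    """One AdC block: multi-head self-attention and a depthwise 1D convolution,
    each with a residual connection followed by layer normalization."""
    def __init__(self, n_ch=64, n_heads=2, kernel=10):
        super().__init__()
        self.mha = nn.MultiheadAttention(n_ch, n_heads, batch_first=True)
        self.ln_mha = nn.LayerNorm(n_ch)
        self.dconv = nn.Conv1d(n_ch, n_ch, kernel_size=kernel,
                               padding=kernel // 2, groups=n_ch)
        self.ln_dconv = nn.LayerNorm(n_ch)

    def forward(self, y):                        # y: (B, T_y, N)
        a, _ = self.mha(y, y, y)                 # self-attention over time
        y = self.ln_mha(a + y)                   # residual + LN
        c = self.dconv(y.transpose(1, 2))        # depthwise convolution over time
        c = c[..., :y.shape[1]].transpose(1, 2)  # trim to the original length
        return self.ln_dconv(c + y)              # residual + LN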
§.§ Speaker Extractor
The speaker extractor generates the estimation mask to separate the attended speech from the mixture background. The masked speech S is created by element-wise multiplication of speech mixture embeddings X and the generated mask M as shown in Equation (<ref>). The extracted speech has an embedding sequence length T_x of dimension N_x, the same as the input speech mixture embedding X.
S = X ⊗ M ∈ℝ^N_x× T_x
As shown in Fig. 1, we propose a speaker extractor based on the ConvTasNet backbone with TCNs <cit.> to increase the receptive field, and CA modules to fuse speech mixture and EEG embeddings as shown in Fig. 2 (a).
We adopt cross-modal attention <cit.> to integrate information from both the auditory stimuli (speech mixture) and brain responses (EEG), mirroring the brain's natural process of speech mixture perception during speech comprehension.
The cross-attention (CA) mechanism is designed to combine and enhance insights from the key-value given the query <cit.>. Before fusing EEG and speech mixture embeddings, the EEG embedding (i.e., the reference signal Y) is linearly interpolated from length T_y to T_x (512 to 3200) to match the embedding sequence length required by the CA block, thereby generating Y^'∈ℝ^N_y× T_x. Subsequently, the key and value inputs to the CA block are the speech mixture embeddings X, while the query input is the interpolated reference signal Y^'. The output of the CA block X_ca is computed as shown in Equation <ref>, where Q, K, and V are query, key, and value, respectively. The X_ca also has the same dimensionality of X and is then fed to the TCN block with a residual connection.
X_ca = CA(K=X, V=X, Q=Y^') ∈ℝ^N_x× T_x
As shown in Fig. 1, we repeat CA and TCN pairs 4 times (m = 4) in the speaker extractor.
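A PyTorch sketch of one CA fusion step is given below; the linear projection of the interpolated EEG reference to the 256-dimensional speech embedding space and the number of attention heads are assumptions introduced here so that the attention output matches the dimensionality of X.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionFusion(nn.Module):
    """EEG-speech fusion: interpolated EEG reference as query, mixture embeddings as key/value."""
    def __init__(self, n_x=256, n_y=64, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(n_y, n_x)                       # hypothetical query projection
        self.ca = nn.MultiheadAttention(n_x, n_heads, batch_first=True)

    def forward(self, X, Y):                                  # X: (B, 256, T_x), Y: (B, 64, T_y)
        T_x = X.shape[-1]
        Yp = F.interpolate(Y, size=T_x, mode='linear', align_corners=False)
        q = self.proj(Yp.transpose(1, 2))                     # (B, T_x, 256)
        kv = X.transpose(1, 2)                                # (B, T_x, 256)
        X_ca, _ = self.ca(q, kv, kv)
        return X_ca.transpose(1, 2)                           # (B, 256, T_x), fed to the TCN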
§.§ Speech Decoder
The speech decoder reconstructs time-domain single-channel speech waveform s from masked speech embeddings S. It performs an inverse operation of the speech encoder by passing S through a linear layer (linear) and an Overlap-and-Add (OvlpAd) function to restore the audio signal from speech representation. That is,
s = OvlpAd(linear(S,N,L), L/2) ∈ℝ^T_s
where T_s denotes the length of the output speech utterance. The linear layer has input size of N and output size of L, which are 64 and 20, respectively. The Overlap-and-Add operation also has a L/2 frame shift.
§.§ Loss Function
We train the proposed model end-to-end using the scale-invariant signal-to-distortion ratio (SI-SDR) <cit.> between the reference attended speech signal s_t and the extracted speech signal s_e. The SI-SDR loss is typically used in time-domain speaker extraction tasks <cit.>. The extracted speech is scaled to remove gain changes that occurred during the reconstruction, which helps maintain stability during the training phase. SI-SDR is calculated in dB, and a higher SI-SDR indicates better speech quality. Therefore, a negative SI-SDR is used as the loss function to train the model. The SI-SDR is defined as follows:
ℒ_SI-SDR(s_t,s_e) = -10 log_10( ‖ (s_e^T s_t/‖ s_t ‖^2) s_t ‖^2 / ‖ s_e - (s_e^T s_t/‖ s_t ‖^2) s_t ‖^2 )
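A PyTorch sketch of this loss is shown below; the zero-mean normalization of both signals is a common convention and is an assumption here.

import torch

def si_sdr_loss(s_t, s_e, eps=1e-8):
    """Negative SI-SDR between target s_t and extracted s_e, both shaped (B, T_s)."""
    s_t = s_t - s_t.mean(dim=-1, keepdim=True)     # zero-mean (assumed convention)
    s_e = s_e - s_e.mean(dim=-1, keepdim=True)
    alpha = (s_e * s_t).sum(-1, keepdim=True) / (s_t.pow(2).sum(-1, keepdim=True) + eps)
    target = alpha * s_t                           # scaled reference
    noise = s_e - target
    si_sdr = 10.0 * torch.log10(target.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps) + eps)
    return -si_sdr.mean()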
§ EXPERIMENTAL SETUP
§.§ Dataset
We used the publicly available KULeuven (KUL) <cit.> dataset, which includes EEG signals from 16 normal-hearing subjects, collected using the BioSemi ActiveTwo system with 64 channels at a sampling rate of 8,192 Hz. The dataset comprises 20 trials per subject, though our experiments used only the first 8 to avoid repetition in attending to the same speech stimulus. Each trial involves subjects listening to simultaneous speech recordings from two male speakers narrating Dutch short stories, delivered dichotically via plugged-in earphones. Subjects focused on the speech from one speaker while ignoring the other. The stimuli, generated from four stories, were balanced for loudness and presented in randomized speaker direction and identity. The speech signals are sampled at 8 kHz and mixture signals are created by mixing attended and unattended speech signals at 0 dB to have the same power for both speakers.
EEG recordings were referenced to the average of all electrodes, band-pass filtered between 1 and 32 Hz, and downsampled to 128 Hz. This frequency range was selected to align with auditory attention detection protocols. Data normalization was performed trial-by-trial to standardize mean and variance across the dataset. In total, the dataset comprises 128 trials amounting to 12.8 hours of speech-EEG parallel data.
We followed a speaker-dependent, trial-independent training approach to train all configurations of the proposed NeuroSpex and the baseline models. Therefore, we divided the data into three sets: train, test, and validation. For the test set, we randomly selected 1 trial from each subject, thus obtaining 16 trials over all subjects. Since each trial has a pair of attended and unattended audio streams, we ensured that no pair was chosen more than twice across all subjects. Similarly, from the rest of the data, we randomly selected 4 trials to form the validation set. The remaining trials are used for the training set.
Each trial was cut into 4-second segments with a hop length of 1 second for all sets based on experimental observations and computational resources. This segmentation approach yielded 5,712 segments for the test set, 1,428 segments for the validation set, and 38,556 segments for the training set. Specifically, the test set comprised data from 16 subjects, with each subject contributing 357 segments, totaling 5,712 segments, each lasting 4 seconds. We train the model on the training data and evaluate its performance on the validation set at the end of each epoch to monitor progress and adjust hyper-parameters accordingly. After training, the test set is used to obtain the final evaluation metrics to report the performance.
§.§ Evaluation Metrics
We employ three evaluation metrics and an extension of one of them: SI-SDR <cit.> and its extension, SI-SDR improvement (SI-SDRi), which quantifies the improvement in the quality of the extracted speech signal compared to the mixture signal. Perceptual Evaluation of Speech Quality (PESQ) <cit.> assesses the quality of extracted speech by comparing it to the clean single-speech signal, providing a score indicating intelligibility and naturalness. Short-Term Objective Intelligibility (STOI) <cit.> evaluates the intelligibility of extracted speech by comparing it to the clean single-speech signal, providing a measure of speech clarity and understandability. For all evaluation metrics, higher values are better.
§.§ Baseline Models
We benchmark NeuroSpex against several baseline models. The baselines are justified as follows.
Firstly, we use a blind speech separation with permutation invariant training, BSS (PIT) <cit.>, model based on DPRNN <cit.>. This model blindly separates a multi-talker speech mixture into multiple single-talker streams. It then selects the target speech by comparing the best permutation with the target speech based on the SI-SDR metric, without relying on a reference cue to guide the separation. This method is optimized to separate speech streams in a discriminative manner. It represents the full speaker extraction potential when we have perfect neural decoding. Thus, it is seen as the upper bound of NeuroSpex in terms of speaker extraction performance.
Secondly, two versions of NeuroHeed are considered, whose speaker extractors are based on DPRNN <cit.> and ConvTasNet <cit.>, i.e., recurrent neural networks and TCNs, respectively. NeuroHeed is the current state-of-the-art for neural response-based speaker extraction. NeuroHeed with ConvTasNet is similar to NeuroSpex in terms of model architecture, while NeuroHeed with DPRNN has fewer parameters than NeuroSpex. They make two relevant baselines for our benchmarking.
Thirdly, we use the BASEN model, which adopts a TCN and a CMCA module for EEG-speech processing. Each baseline has been used in its vanilla form and trained under the same conditions as NeuroSpex. BASEN is chosen as a baseline because it represents the state of the art before NeuroHeed and, like our model, adopts TCN and CA modules in its architecture.
§.§ Model Training
All implementations utilize the PyTorch framework with distributed training and data-distributed sampler techniques across 2 Nvidia RTX A6000 GPUs. To ensure reproducibility, we used fixed random seeds to generate consistent results. The models are trained end-to-end using the Adam optimizer with an initial learning rate of 0.0001. A learning rate scheduler with a decay factor of 0.5 is applied when the best validation loss does not improve within 5 consecutive epochs, and early stopping is applied when the best validation loss does not improve within the last 25 epochs. Training is conducted for around 100 epochs or until training stabilizes, with a batch size of 16.
§ RESULTS
In this section, we discuss the empirical evaluations of our study by comparing our model with baseline models, exploring different model configurations, and conducting an ablation study. Performance on the trial-independent test and validation sets is reported using the evaluation metrics outlined in section <ref>. The results are summarized in tables and illustrated with SI-SDRi violin plots. Statistical significance was assessed using paired t-tests.
§.§ Ablation Study
We conducted the ablation study to understand the effect of the contributing components. Table <ref> shows the performance of models, evaluated using multiple metrics on the validation set, for different EEG encoders and feature fusion methods including direct EEG signal input and direct fusion. Note that we only used a single AdC block (AdC_1) in the EEG encoder. The results mainly show that introducing CA to speech-EEG embeddings fusion increases the performance on every evaluation metric. Furthermore, AdC block also improves the performance over SA used in NeuroHeed and Convolution-based EEG encoders.
§.§ Effect of AdC blocks
We evaluated the effect of the number of AdC blocks in the EEG encoder to find the appropriate number. Table <ref> reports the performance for five progressively increasing numbers of AdC blocks. For all evaluation metrics, 6 AdC blocks return the best performance. This shows that performance improves as the number of AdC blocks increases from 1 to 6, albeit with diminishing margins. Hence, we present our best proposed model with 6 AdC blocks. Furthermore, Fig. 3 depicts the violin plots of SI-SDRi for each of the 16 subjects in the test set for our best proposed model. All subjects show consistent and centered distributions with smaller variances, except for subjects 8, 11, and 14.
§.§ Comparison with Baseline Models
We compare our best model (with 6 AdC blocks) with multiple baseline models, as shown in Table 3, which lists the main architectural differences between the models along with the evaluation metrics. The violin plots in Fig. 4 summarize the SI-SDRi for all trials in the test set for all models. The BSS (PIT) shows the best results with the least variance, as it performs direct speech separation and sets an upper bound for the comparison. Our proposed model significantly outperforms all baselines except BSS (PIT) on all evaluation metrics (p < 0.001), showing that it performs speaker extraction with better signal quality, perceptual quality, and intelligibility. Furthermore, the proposed NeuroSpex model contains fewer parameters than the ConvTasNet-based NeuroHeed model but more than the DPRNN counterparts due to the TCN-based speaker extractor. According to the violin plots, the proposed model has a better-centered distribution around a higher median and less variance compared to both the NeuroHeed and BASEN baselines.
§ CONCLUSION
In this study, we propose an end-to-end speaker extraction model operating in the time domain. The model utilizes neural responses as a reference cue to extract attended speech in a cocktail party scenario. The novel contributions of this work include multi-head attention and depth-wise convolution-based EEG encoding, and CA-based EEG-speech embeddings fusion, which seek to enhance the overall quality of speaker extraction. The results demonstrate significant improvements in extraction performance over several competitive baseline models. Thus, we show that our proposed model effectively extracts EEG embeddings correlated with attended speech and achieves superior speech-EEG feature fusion to generate the speaker extraction mask, hence validating our hypothesis. For future research, we recommend exploring speaker-specific information during extraction and conducting subject-independent studies to enhance generalizability and realism.
|
http://arxiv.org/abs/2409.02736v1 | 20240904141019 | Oxygen Isotope Exchange Between Dust Aggregates and Ambient Nebular Gas | [
"Sota Arakawa",
"Daiki Yamamoto",
"Lily Ishizaki",
"Tamami Okamoto",
"Noriyuki Kawasaki"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Corresponding author: Sota Arakawa (arakawas@jamstec.go.jp)
Sota Arakawa (ORCID: 0000-0003-0947-9962)
Center for Mathematical Science and Advanced Technology, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan
Daiki Yamamoto (ORCID: 0000-0001-6852-2954)
Department of Earth and Planetary Sciences, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
Lily Ishizaki (ORCID: 0009-0005-8583-9730)
Department of Earth and Planetary Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Tamami Okamoto (ORCID: 0000-0003-1545-2723)
Earth-Life Science Institute, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8550, Japan
Noriyuki Kawasaki (ORCID: 0000-0001-6502-6488)
Department of Earth and Planetary Sciences, Faculty of Science, Hokkaido University, Kita-10 Nishi-8, Kita-ku, Sapporo 060-0810, Japan
§ ABSTRACT
Meteorites and their components exhibit a diverse range of oxygen isotope compositions, and the isotopic exchange timescale between dust grains and ambient gas is a key parameter for understanding the spatiotemporal evolution of the solar nebula.
As dust grains existed as macroscopic aggregates in the solar nebula, it is necessary to consider the isotopic exchange timescales for these aggregates.
Here, we theoretically estimate the isotope exchange timescales between dust aggregates and ambient vapor.
The isotope exchange process between aggregates and ambient vapor is divided into four processes: (i) supply of gas molecules to the aggregate surface, (ii) diffusion of molecules within the aggregate, (iii) isotope exchange on the surface of constituent particles, and (iv) isotope diffusion within the particles.
We evaluate these timescales and assess which one becomes the rate-determining step.
We reveal that the isotope exchange timescale is approximately the same as that of the constituent particles when the aggregate radius is smaller than the critical value, which is a few centimeters when considering the exchange reaction between amorphous forsterite aggregates and water vapor.
§ INTRODUCTION
Oxygen is the most fundamental element for solids in the universe.
<cit.> discovered a large oxygen isotopic anomaly in refractory inclusions of the Allende carbonaceous chondrite, and oxygen isotopic compositions of extraterrestrial materials have been intensively investigated in the fields of meteoritics and planetary science <cit.>.
Meteorites and their components exhibit a diverse range of oxygen isotope compositions <cit.>.
This isotopic variation is usually interpreted as the result of mixing between isotopically distinct reservoirs in the solar nebula <cit.>.
The reservoirs would be formed through self-shielding of carbon monoxide (CO) gas in the protosolar molecular cloud <cit.> or the early solar nebula <cit.>.
Minor CO isotopologues (i.e., C^17O and C^18O) are selectively dissociated by irradiation of ultraviolet photons and ^16O-poor oxygen atoms are produced.
They are subsequently transformed into ^16O-poor water (H_2O) ice through chemical reactions at the surface of dust grains <cit.>.
Considering the mass balance, the residual CO gas would be enriched in ^16O.
As a result, the three major oxygen reservoirs (i.e., H_2O, CO, and silicate) would have distinct isotope compositions <cit.>.
The material evidence for the presence of ^16O-poor H_2O reservoir in the solar nebula is preserved as cometary H_2O ice <cit.> and ^16O-poor magnetite, formed by oxidation by H_2O, in “cosmic symplectite” found in the most primitive carbonaceous chondrite Acfer 094 <cit.>.
In a protoplanetary disk, dust grains drift radially due to gas drag and turbulent diffusion, and water ice sublimation occurs around the H_2O snowline <cit.>.
Inward migration of icy dust grains could cause local enhancement of ^16O-depleted H_2O relative to ^16O-enriched CO <cit.>.
Both H_2O and CO vapors could react with silicate grains in the inner region of the disk where the temperature is high <cit.>, leading to depletion/enrichment of silicate grains in ^16O.
Evaporation of silicate grains followed by recondensation also alters their oxygen isotopic composition <cit.>.
<cit.> reported the oxygen isotopic composition of the Sun, which represents the average isotopic composition of the solar system.
They found that the vast majority of silicate grains in the inner solar system were depleted in ^16O compared to the composition of the Sun.
Assuming that silicate grains initially have a Sun-like isotope composition on average, the oxygen isotopic composition of silicate grains must evolve before the accretion of planetesimals.
The isotopic exchange between pristine silicate dust components with the Sun-like oxygen isotopic compositions and ^16O-depleted H_2O vapor is a key process for the oxygen isotopic evolution in the solar nebula.
Dust grains in the interstellar medium are largely amorphous <cit.>.
In contrast, astronomical observations at infrared wavelengths have revealed that both crystalline and amorphous silicate grains exist in protoplanetary disks, with forsterite being one of the most abundant crystalline silicate <cit.>.
Therefore, we expect that amorphous forsterite represents the dust that existed in the early phase of the solar nebula.
The isotopic exchange timescale between dust grains and ambient gas is the key parameter to understanding the spatiotemporal evolution of oxygen isotopic composition of the solar nebula.
Therefore, laboratory experiments of oxygen isotope exchange reaction under the disk-like low vapor pressure conditions are essential.
In this context, <cit.> performed experiments of oxygen isotope exchange reaction between amorphous forsterite grains and water vapor and determined the isotopic exchange timescale.
The timescales for different minerals and gases have also been investigated <cit.>.
They found that, for amorphous forsterite grains with a radius of 1 micron or smaller, the timescale for oxygen isotope exchange at temperatures above approximately 600 K would be shorter than the lifetime of the solar nebula.
Additionally, at temperatures below 800 K, isotope exchange occurs faster than forsterite crystallization.
However, dust grains exist as aggregates in protoplanetary disks <cit.> and the solar nebula <cit.>, necessitating consideration of aggregates' isotopic exchange timescales in discussions on the evolution of oxygen isotope compositions in the solar nebula.
To quantify the effect of aggregate structure on the isotopic exchange timescale, understanding of the diffusion of gas molecules within the aggregate through its voids is essential.
Recently, the diffusion of dilute gas within aggregates have been intensively investigated in the context of cometary science <cit.>.
We can employ theoretical models developed in the cometary science community for isotope exchange reaction of dust aggregates in the gaseous solar nebula.
In addition, the surface-to-volume ratio decreases as dust grains grow into larger aggregates, potentially affecting the isotopic exchange timescale by controlling the supply of molecules at the surface of aggregates.
We also quantify this effect and derive the critical aggregate radius below which the aggregate effects can be neglected.
In this study, we theoretically estimate the oxygen isotope exchange timescales between silicate dust aggregates and ambient water vapor.
The isotope exchange process between aggregates and ambient vapor is divided into four processes (see Section <ref>).
We evaluate the timescales of these processes and assess which one becomes the rate-determining step based on aggregate size, temperature, and water vapor pressure.
Our analytical calculations reveal that, for dust aggregates smaller than cm size, the isotope exchange timescale is approximately the same as that of the constituent particles (see Section <ref>).
§ MODEL
Here, we construct a theoretical model of isotope exchange process between aggregates and ambient vapor.
Figure <ref> shows a schematic of gas molecule diffusion within a dust aggregate.
The isotope exchange process between aggregates and ambient vapor is divided into four processes: (i) supply of gas molecules to the aggregate surface, (ii) diffusion of molecules within the aggregate, (iii) isotope exchange on the surface of constituent particles, and (iv) isotope diffusion within the particles.
The timescales for these four processes are denoted as t_ agg, surf, t_ agg, diff, t_ par, surf, and t_ par, diff, respectively.
We theoretically derive the equations for these four timescales and their parameter dependences in the following sections.
We evaluate the timescale for isotope exchange reaction, t_ ex, as the sum of the timescales of the four processes:
t_ ex = t_ agg, surf + t_ agg, diff + t_ par, surf + t_ par, diff.
The isotope exchange timescale for constituent particles is given by
t_ ex, par = t_ par, surf + t_ par, diff
≈ max( t_ par, surf, t_ par, diff ).
Thus, the effect of aggregate structure on the isotopic exchange timescale is significant when the following condition is satisfied:
max( t_ agg, surf, t_ agg, diff )≳ t_ ex, par.
§.§ Supply of Molecules at the Surface of Aggregates
For a spherical aggregate with a radius of r_ agg, the timescale of the supply-controlled exchange reaction, t_ agg, surf, is given by <cit.>
t_ agg, surf = N_ agg/( β_ agg J S_ agg ),
where S_ agg = 4 πr_ agg^2 is the surface area of the aggregate, N_ agg is the number of oxygen atoms in the aggregate, β_ agg is the exchange efficiency of molecules at the surface of the aggregate, and J is the supply flux of oxygen atoms.
Here, J is given by
J = P γ_ gas/√(2 π m k_ B T),
where P is the partial pressure, m is the molecular weight, T is the temperature, and k_ B = 1.38 × 10^-23 J K^-1 is the Boltzmann constant.
For water vapor, we set m = 18 m_ H, where m_ H = 1.67 × 10^-27 kg is the mass of a hydrogen atom.
We define the number of oxygen atoms per gas molecule as γ_ gas, and γ_ gas = 1 for H_2O.
The number of oxygen atoms in the aggregate, N_ agg, is given by
N_ agg = ϕ_ agg V_ agg N_ A γ_ grain/Ω,
where Ω is the molar volume of minerals, V_ agg = ( 4 π / 3 )r_ agg^3 is the volume of the aggregate, ϕ_ agg
is the volume filling factor of the aggregate, and N_ A = 6.02 × 10^23 mol^-1 is the Avogadro constant.
We set Ω = 50 cm^3 mol^-1 for both amorphous and crystalline forsterite <cit.>.
We define the number of oxygen atoms in the compositional formula of grains as γ_ grain, and γ_ grain = 4 for amorphous/crystalline forsterite (Mg_2SiO_4) grains.
The area fraction of holes on the surface of aggregates is equal to the porosity of aggregates, 1 - ϕ_ agg.
As exchange of molecules between aggregates and ambient gas occurs at the site of the hole on the surface of aggregates (see Figure <ref>), the exchange efficiency of molecules at the surface of the aggregate, β_ agg, is approximately given by
β_ agg = 1 - ϕ_ agg.
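A minimal Python sketch of the supply-limited timescale defined by the equations above is given below. The function name, default values, and the example call are illustrative assumptions (not the authors' code), using the fiducial parameters quoted in this paper (Ω = 50 cm^3 mol^-1, γ_ grain = 4, γ_ gas = 1, m = 18 m_ H).
```python
import numpy as np

# Physical constants (SI units)
k_B = 1.38e-23   # Boltzmann constant [J K^-1]
m_H = 1.67e-27   # hydrogen-atom mass [kg]
N_A = 6.02e23    # Avogadro constant [mol^-1]

def t_agg_surf(r_agg, T, P, phi_agg=0.3, Omega=50e-6,
               gamma_grain=4, gamma_gas=1, m=18 * m_H):
    """Supply-limited isotope-exchange timescale of an aggregate [s].

    r_agg : aggregate radius [m];  T : temperature [K];  P : vapor pressure [Pa]
    Omega : molar volume of the mineral [m^3 mol^-1] (50 cm^3 mol^-1 by default)
    """
    beta_agg = 1.0 - phi_agg                     # exchange efficiency = porosity
    S_agg = 4.0 * np.pi * r_agg**2               # aggregate surface area
    V_agg = (4.0 * np.pi / 3.0) * r_agg**3       # aggregate volume
    N_agg = phi_agg * V_agg * N_A * gamma_grain / Omega     # O atoms in aggregate
    J = P * gamma_gas / np.sqrt(2.0 * np.pi * m * k_B * T)  # supply flux of O atoms
    return N_agg / (beta_agg * J * S_agg)

if __name__ == "__main__":
    # 1 cm aggregate at T = 800 K and P = 1e-4 Pa (fiducial values in the text)
    print(t_agg_surf(1e-2, 800.0, 1e-4) / 3.15e7, "yr")
```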
§.§ Diffusion of Molecules Within Aggregates
For a spherical aggregate with a radius of r_ agg, the timescale of the diffusive exchange reaction, t_ agg, diff, is given by <cit.>
t_ agg, diff = r_ agg^2 / ( π^2 D_ agg ),
where D_ agg is the diffusion coefficient within an aggregate.
For dust aggregates consisting of micron-sized grains, the size of voids is orders of magnitude smaller than the mean free path of gas molecules, and the random motion of gas molecules is governed by collisions with the particle surface <cit.>.
In this situation, D_ agg is given by <cit.>
D_ agg = 4 r_ par c_ s ( 1 - ϕ_ agg )^2 / ( 13 ϕ_ agg ),
where r_ par is the radius of constituent particles and c_ s = √(( 8 k_ B T ) / ( π m )) is the mean thermal velocity of molecules.
In Section <ref>, we show that t_ agg, diff for cm-sized aggregates is orders of magnitude shorter than the others (t_ agg, surf, t_ par, surf, and t_ par, diff).
Thus, the diffusion within aggregates is not the rate-limiting process.
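A corresponding sketch for the in-aggregate diffusion timescale, again with illustrative default values (0.1 μm constituent grains, ϕ_ agg = 0.3) rather than the authors' code, is:
```python
import numpy as np

k_B = 1.38e-23   # Boltzmann constant [J K^-1]
m_H = 1.67e-27   # hydrogen-atom mass [kg]

def t_agg_diff(r_agg, T, r_par=0.1e-6, phi_agg=0.3, m=18 * m_H):
    """Timescale for vapor molecules to diffuse through an aggregate [s]."""
    c_s = np.sqrt(8.0 * k_B * T / (np.pi * m))   # mean thermal velocity of molecules
    D_agg = 4.0 * r_par * c_s * (1.0 - phi_agg)**2 / (13.0 * phi_agg)
    return r_agg**2 / (np.pi**2 * D_agg)

if __name__ == "__main__":
    print(t_agg_diff(1e-2, 800.0), "s for a 1 cm aggregate at 800 K")
```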
§.§ Supply of Molecules at the Surface of Constituent Particles
For a spherical particle with a radius of r_ par, the timescale of the supply-controlled exchange reaction, t_ par, surf, is given by <cit.>
t_ par, surf = N_ par / ( β_ par J S_ par ),
where S_ par = 4 πr_ par^2 is the surface area of the particle, N_ par = ( V_ par N_ Aγ_ grain ) / Ω is the number of oxygen atoms in the particle, and V_ par = ( 4 π / 3 )r_ par^3 is the volume of the particle.
The isotopic exchange efficiency of colliding molecules with particles, β_ par, is determined from laboratory experiments.
For amorphous forsterite grains reacting with water vapor, <cit.> measured the β_ par value at T ∼ 800–900 K and P = 10^-2 Pa.
They found that
β_ par = 7.4 × 10^-6.
In this study, we neglect the temperature and pressure dependences of β_ par for amorphous forsterite grains, although it would slightly depend on both T and P in reality <cit.>.
We also assume that β_ par for crystalline forsterite grains is the same as that for amorphous grains, although in reality, it might be significantly lower than what we assumed for simplicity.
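The single-grain supply timescale can be evaluated in the same way; the sketch below assumes the laboratory value β_ par = 7.4 × 10^-6 quoted above and is only an illustration:
```python
import numpy as np

k_B = 1.38e-23   # Boltzmann constant [J K^-1]
m_H = 1.67e-27   # hydrogen-atom mass [kg]
N_A = 6.02e23    # Avogadro constant [mol^-1]

def t_par_surf(r_par, T, P, beta_par=7.4e-6, Omega=50e-6,
               gamma_grain=4, gamma_gas=1, m=18 * m_H):
    """Supply-limited isotope-exchange timescale of a single grain [s]."""
    S_par = 4.0 * np.pi * r_par**2
    V_par = (4.0 * np.pi / 3.0) * r_par**3
    N_par = V_par * N_A * gamma_grain / Omega                # O atoms in the grain
    J = P * gamma_gas / np.sqrt(2.0 * np.pi * m * k_B * T)   # supply flux of O atoms
    return N_par / (beta_par * J * S_par)

if __name__ == "__main__":
    # 0.1 micron amorphous forsterite grain at T = 800 K and P = 1e-4 Pa
    print(t_par_surf(0.1e-6, 800.0, 1e-4) / 3.15e7, "yr")
```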
§.§ Diffusion Within Constituent Particles
For a spherical particle with a radius of r_ par, the timescale of the diffusive exchange reaction, t_ par, diff, is given by <cit.>
t_ par, diff = r_ par^2 / ( π^2 D_ par ),
where D_ par is the diffusive isotope exchange coefficient.
The temperature dependence of D_ par follows the Arrhenius law:
D_ par = D_ par, ref exp[ - ( E_ a/R_ gas ) ( 1/T - 1/T_ ref ) ],
where R_ gas = 8.31 J.mol^-1.K^-1 is the gas constant, E_ a is the activation energy, and D_ par, ref is the diffusive isotope exchange coefficient at the reference temperature, T_ ref.
For amorphous forsterite grains in the ambient water vapor, <cit.> obtained the following values:
D_ par, ref = 1.5 × 10^-19 m^2.s^-1,
T_ ref = 1200 K,
E_ a = 161.5 kJ.mol^-1.
For crystalline forsterite grains, the temperature dependence of D_ par is given by Equation (<ref>) with the following constants <cit.>:
[Footnote: The mechanism for the diffusive isotope exchange within grains depends on whether the grains are amorphous or crystalline. It could also depend on the molecular species of the ambient vapor <cit.>.]
D_ par, ref = 3.6 × 10^-18 m^2.s^-1,
T_ ref = 1823 K,
E_ a = 320 kJ.mol^-1.
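The Arrhenius law above with the constants listed here can be wrapped in a short helper; the dictionary layout and the example temperature below are illustrative choices, not part of the original work:
```python
import numpy as np

R_GAS = 8.31  # gas constant [J mol^-1 K^-1]

# (D_ref [m^2 s^-1], T_ref [K], E_a [J mol^-1]) from the values quoted above
ARRHENIUS = {
    "amorphous":   (1.5e-19, 1200.0, 161.5e3),
    "crystalline": (3.6e-18, 1823.0, 320.0e3),
}

def D_par(T, kind="amorphous"):
    """Diffusive isotope-exchange coefficient inside a grain [m^2 s^-1]."""
    D_ref, T_ref, E_a = ARRHENIUS[kind]
    return D_ref * np.exp(-E_a / R_GAS * (1.0 / T - 1.0 / T_ref))

def t_par_diff(r_par, T, kind="amorphous"):
    """Timescale of diffusive isotope exchange within a grain [s]."""
    return r_par**2 / (np.pi**2 * D_par(T, kind))

if __name__ == "__main__":
    for kind in ("amorphous", "crystalline"):
        print(kind, t_par_diff(0.1e-6, 800.0, kind) / 3.15e7, "yr")
```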
§ RESULTS & DISCUSSION
We calculate the timescale for oxygen isotope exchange between dust aggregates and ambient gas, t_ ex, and investigate which process is the rate-limiting step.
For simplicity, we assume that dust aggregates are made of monodisperse spherical particles with a radius of r_ par = 0.1 μm <cit.>.
The volume filling factor of aggregates is set to ϕ_ agg = 0.3.
[Footnote: The choice of ϕ_ agg = 0.3 is motivated by numerical simulations <cit.> and laboratory experiments <cit.> on the collisional growth of dust aggregates.]
In this section, we consider oxygen isotope exchange with water vapor with partial pressure of P = 10^-4 Pa.
Figure <ref> shows t_ ex as a function of r_ agg.
Here, we assume that aggregates are made of amorphous forsterite grains.
It is evident that t_ agg, surf is proportional to r_ agg and t_ agg, diff is proportional to r_ agg^2 (see Equations (<ref>) and (<ref>), respectively).
In contrast, t_ par, surf and t_ par, diff are independent of r_ agg.
We also reveal that t_ agg, diff is orders of magnitude shorter than the other three timescales.
In other words, diffusion of molecules within aggregates is not the rate-limiting step.
Both t_ par, diff and t_ par, surf depend on T, and t_ par, diff > t_ par, surf when T = 700 K (see Figure <ref>(a)).
In this case, either t_ par, diff or t_ agg, surf is the rate-limiting step: t_ ex≃ t_ par, diff when r_ agg < 24 cm, whereas t_ ex≃ t_ agg, surf when r_ agg > 24 cm.
In contrast, for T = 800 K (see Figure <ref>(b)), t_ par, surf > t_ par, diff, thus either t_ par, surf or t_ agg, surf is the rate-limiting step: t_ ex≃ t_ par, surf when r_ agg < 3.2 cm, whereas t_ ex≃ t_ agg, surf when r_ agg > 3.2 cm.
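To reproduce the qualitative behavior described here, the following self-contained Python sketch evaluates all four timescales and their sum t_ ex for the amorphous case; the fiducial parameters follow the values quoted in this paper, while the function and variable names are illustrative assumptions:
```python
import numpy as np

k_B, m_H, N_A, R_GAS = 1.38e-23, 1.67e-27, 6.02e23, 8.31
M_H2O, OMEGA, GAMMA_GRAIN, GAMMA_GAS = 18 * 1.67e-27, 50e-6, 4, 1
BETA_PAR = 7.4e-6                                # exchange efficiency of grains
D_REF, T_REF, E_A = 1.5e-19, 1200.0, 161.5e3     # amorphous forsterite

def t_exchange(r_agg, T, P, r_par=0.1e-6, phi_agg=0.3):
    """Return (t_agg_surf, t_agg_diff, t_par_surf, t_par_diff, t_ex) in seconds."""
    J = P * GAMMA_GAS / np.sqrt(2 * np.pi * M_H2O * k_B * T)  # supply flux
    c_s = np.sqrt(8 * k_B * T / (np.pi * M_H2O))              # thermal velocity
    n_solid = N_A * GAMMA_GRAIN / OMEGA          # O atoms per unit solid volume
    # (i) supply of molecules at the aggregate surface
    t_as = phi_agg * r_agg * n_solid / (3 * (1 - phi_agg) * J)
    # (ii) diffusion of molecules within the aggregate
    D_agg = 4 * r_par * c_s * (1 - phi_agg)**2 / (13 * phi_agg)
    t_ad = r_agg**2 / (np.pi**2 * D_agg)
    # (iii) supply of molecules at the surface of a constituent grain
    t_ps = r_par * n_solid / (3 * BETA_PAR * J)
    # (iv) isotope diffusion within a grain (Arrhenius law)
    D_par = D_REF * np.exp(-E_A / R_GAS * (1 / T - 1 / T_REF))
    t_pd = r_par**2 / (np.pi**2 * D_par)
    return t_as, t_ad, t_ps, t_pd, t_as + t_ad + t_ps + t_pd

if __name__ == "__main__":
    yr = 3.15e7
    names = ["t_agg_surf", "t_agg_diff", "t_par_surf", "t_par_diff"]
    for r_agg in (1e-2, 1e-1):                   # 1 cm and 10 cm aggregates
        *parts, t_ex = t_exchange(r_agg, 800.0, 1e-4)
        print(f"r_agg = {r_agg*100:.0f} cm: {names[int(np.argmax(parts))]} "
              f"dominates, t_ex = {t_ex/yr:.2e} yr")
```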
We also show the temperature dependence of t_ ex in Figure <ref>.
Here, we calculate t_ ex for both amorphous and crystalline cases, and r_ agg is set to 1 cm or 10 cm.
Figure <ref>(a) shows the results for r_ agg = 1 cm and aggregates are made of amorphous forsterite particles.
We find that t_ ex≃ t_ par, diff when T < 750 K, whereas t_ ex≃ t_ par, surf when T > 750 K.
In this case, max( t_ agg, surf, t_ agg, diff ) is smaller than t_ ex, par at an arbitrary temperature, and the impact of aggregate structure on the isotopic exchange timescale is negligible.
Amorphous silicate grains are crystallized under high temperature conditions by thermal annealing <cit.>.
The timescale of crystallization, t_ cry, is given by
t_ cry = ( 1/ν_0 ) exp( E_ cry / ( R_ gas T ) ),
where ν_0 is the pre-exponential factor of the crystallization rate and E_ cry is the activation energy for crystallization.
For amorphous forsterite grains with the ambient water vapor pressure of P = 10^-4 Pa, <cit.> found that ln( ν_0 / 1 Hz ) = 40.2 and E_ cry = 414.4 kJ.mol^-1.
As shown in Figure <ref>(a), t_ cry drastically decreases with increasing T, and t_ cry = 1 Myr at T = 700 K.
Thus, a significant fraction of silicate dust particles in high-temperature regions with T > 700 K would be crystallized within the lifetime of a protoplanetary disk.
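The crystallization timescale is straightforward to evaluate; the short sketch below (an illustration using the quoted values ln( ν_0 / 1 Hz ) = 40.2 and E_ cry = 414.4 kJ mol^-1) recovers t_ cry ≈ 1 Myr at T = 700 K:
```python
import numpy as np

R_GAS = 8.31      # gas constant [J mol^-1 K^-1]
LN_NU0 = 40.2     # ln(nu_0 / 1 Hz) for amorphous forsterite with water vapor
E_CRY = 414.4e3   # activation energy for crystallization [J mol^-1]

def t_cry(T):
    """Thermal-annealing crystallization timescale [s]."""
    nu0 = np.exp(LN_NU0)                      # pre-exponential factor [Hz]
    return np.exp(E_CRY / (R_GAS * T)) / nu0

if __name__ == "__main__":
    yr = 3.15e7
    for T in (650.0, 700.0, 750.0):
        print(f"T = {T:.0f} K: t_cry = {t_cry(T) / (1e6 * yr):.2f} Myr")
```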
The dynamics of dust aggregates also affect the crystallization temperature <cit.>.
Figure <ref>(b) shows the temperature dependence of t_ ex for the case of r_ agg = 1 cm and aggregates are made of crystalline forsterite particles.
As is the case for Figure <ref>(a), max( t_ agg, surf, t_ agg, diff ) is always smaller than t_ ex, par, and t_ ex is approximately given by t_ ex≈ t_ ex, par.
To understand the condition where the impact of aggregate structure on the isotopic exchange timescale becomes important, we compare t_ agg, surf with t_ par, surf.
The ratio of the two timescales, t_ agg, surf / t_ par, surf, is given by
t_ agg, surf/t_ par, surf = β_ par ϕ_ agg r_ agg / [ ( 1 - ϕ_ agg ) r_ par ]
= 0.32 ( β_ par/7.4 × 10^-6 )( r_ agg/1 cm ).
Therefore, t_ agg, surf becomes larger than t_ par, surf when r_ agg > 3 cm.
We validate this prediction in Figures <ref>(c) and <ref>(d).
For r_ agg = 10 cm, t_ agg, surf is indeed larger than t_ par, surf, and t_ agg, surf becomes the rate-limiting process at a high temperature (T > 720 K for the amorphous case and T > 1090 K for the crystalline case).
We note that t_ agg, surf / t_ par, surf is independent of Ω and γ_ grain.
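Setting the ratio above to unity gives the critical aggregate radius below which aggregate effects can be neglected; a minimal sketch of this estimate, with the fiducial β_ par, ϕ_ agg, and r_ par as defaults, is:
```python
def critical_aggregate_radius(beta_par=7.4e-6, phi_agg=0.3, r_par=0.1e-6):
    """Aggregate radius [m] above which t_agg_surf exceeds t_par_surf.

    Obtained by setting the ratio t_agg_surf / t_par_surf to unity.
    """
    return (1.0 - phi_agg) * r_par / (beta_par * phi_agg)

if __name__ == "__main__":
    # roughly 3 cm for the fiducial parameters
    print(f"critical radius ~ {critical_aggregate_radius() * 100:.1f} cm")
```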
Multi-wavelength observations of protoplanetary disks at (sub)millimeter wavelengths allow us to constrain the size of dust aggregates and its radial distribution <cit.>.
Several studies <cit.> have reported that large aggregates with r_ agg≳ 1 cm are ubiquitous in the inner parts of disks where r ≲ 10 au, whereas the maximum radius of aggregates is typically a few millimeters in the outer parts of r ≳ 100 au.
Comets in the solar system also consist of millimeter- to decimeter-sized dust aggregates referred to as “pebbles” <cit.>.
These pieces of evidence support the presence of large dust aggregates in the ancient solar nebula.
In the protosolar molecular cloud, ^16O-poor H_2O reservoir would be formed through the self-shielding of CO gas, and silicates and volatiles have different oxygen isotopic compositions at the earliest stage of the solar nebula <cit.>.
Centimeter-sized icy dust aggregates drift toward the Sun due to gas drag <cit.>, although the direction of radial migration depends on the sign of the pressure gradient <cit.>.
Furthermore, the radial migration of icy dust aggregates, followed by their evaporation at the snow line, could cause the spatiotemporal variation of oxygen isotopic composition in the gas component of the inner solar nebula <cit.>.
In this study, we derived the fundamental equations for the isotope exchange process between aggregates and ambient vapor (Equations (<ref>) and (<ref>)).
Using these equations, we plan to investigate the spatiotemporal evolution of oxygen isotopic composition and silicate crystallinity, in conjunction with disk formation and evolution, in future studies.
§ CONCLUSION
In this study, we theoretically estimated the oxygen isotope exchange timescales between silicate dust aggregates and ambient water vapor.
The isotope exchange process between aggregates and ambient vapor is divided into four processes (Section <ref>).
We evaluated the timescales of these processes and assessed which one becomes the rate-determining step.
Our analytical calculations revealed that the isotope exchange timescale is approximately the same as that of the constituent particles when the aggregate radius is smaller than the critical value given by Equation (<ref>) (Section <ref>).
We plan to perform numerical simulations of oxygen isotopic evolution of the solar nebula using the theoretical model derived in this study.
§ ACKNOWLEDGMENTS
This work was supported by JSPS KAKENHI Grant Numbers JP24K17118 and JP21K13986.
aasjournal
|
http://arxiv.org/abs/2409.03264v1 | 20240905061534 | A Potential Dynamical Origin of The Galactic Disk Warp: The Gaia-Sausage-Enceladus Major Merger | [
"Mingji Deng",
"Cuihua Du",
"Yanbin Yang",
"Jiwei Liao",
"Dashuang Ye"
] | astro-ph.GA | [
"astro-ph.GA"
] |
0000-0002-0592-7660]Mingji Deng
School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, Beijing 100049, P.R. China
:0000-0002-3954-617X]Cuihua Du
School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, Beijing 100049, P.R. China
0000-0001-7949-3407]Yanbin Yang
GEPI, Observatoire de Paris, Universite PSL, CNRS, Place Jules Janssen, 92195 Meudon, France
0000-0001-6762-5599]Jiwei Liao
School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, Beijing 100049, P.R. China
School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, Beijing 100049, P.R. China
Cuihua Du
ducuihua@ucas.ac.cn
§ ABSTRACT
Previous studies have revealed that the Galactic warp is a long-lived, nonsteady, and asymmetric structure. There is a need for a model that accounts for the warp's long-term evolution. Given that this structure has persisted for over 5 Gyrs, its timeline may coincide with the completion of the Gaia-Sausage-Enceladus (GSE) merger. Recent studies indicate that the GSE, the last major merger of our Galaxy, was likely a gas-rich merger, and the large amount of gas it introduced could have had a profound impact on the Galactic morphology. This study utilizes the GIZMO simulation code to construct a gas-rich GSE merger. By reconstructing the observed characteristics of the GSE, we successfully reproduce the disk warp and capture nearly all of its documented features, in close agreement with observational data from both the stellar and gas disks. This simulation demonstrates the possibility that a single major merger could generate the Galactic warp amplitude and precession. Furthermore, the analysis of the warp's long-term evolution may offer more clues to the formation history of the Milky Way.
§ INTRODUCTION
The most common asymmetrical structure in many disk galaxies is the disk warp, as documented in previous studies <cit.>. The Milky Way, a typical disk (spiral) galaxy, also exhibits a clear disk warp, confirmed by various works <cit.>. Many mechanisms for the Galactic warp have been proposed, including: inflow of intergalactic matter into the halo <cit.>; inflow directly onto the Galactic disk <cit.>; magnetic fields between galaxies <cit.>; interactions of satellite galaxies such as Sagittarius <cit.> or the Magellanic Clouds <cit.> with the disk; and the bending instability and self-excited or internally driven warps of the Galactic disk <cit.>. However, none of these hypotheses have been quantitatively confirmed as the definitive origin of the Milky Way's warp. Recent studies have suggested that the MW's S-shaped warp may be a long-standing structure, existing for over 5 Gyrs <cit.>. Constructing a model of its long-term evolution could provide insights into the origin of the Galactic warp, and such a period may coincide with the time when the Gaia-Sausage-Enceladus (GSE) merger was completed.
Lines of evidence have shown that the GSE, which is considered as the last major merger of our Galaxy, constitutes the bulk of the inner halo <cit.>. Consequently, the GSE has been studied qualitatively via analogs in cosmological simulations <cit.>, MW zoom-ins <cit.>, and existing merger simulations <cit.>. These studies have achieved success in demonstrating how this merger produces the eccentric debris and reshapes the early Milky Way disk.
Building upon these findings, <cit.> made a tailored model for this merger by producing a grid of 500 idealized galaxy merger simulations with the GADGET code <cit.> to identify a fiducial model that best matches the H3 survey data <cit.>. They produced a configuration to replicate the H3 data and explain disparate phenomena across the Galaxy. Recent evidence has reported that the GSE is likely a significantly gas-rich merger. This was highlighted by <cit.> in their analysis of the age-metallicity relationship from the APOGEE-2 DR17 survey <cit.>. They observed a notable hallmark of a gas-rich merger: an increase in [Mg/Fe] corresponding with a decrease in [Fe/H] around τ≈12 Gyrs, closely aligning with GSE infall period. However, despite the success of the model proposed by <cit.>, it did not fully account for the considerable amount of gas introduced into the Galaxy by the GSE merger, which significantly contributes to the growth of both the high-α and low-α disks.
Based on measurements of precession and analyses of differences in other warp characteristics using mono-age samples, some studies have proposed an externally excited formation mechanism for the disk warp <cit.>. However, these studies generally prefer recent interactions with satellite galaxies that drive transient warps. <cit.> found that at least half of spiral galaxies exhibit warps, suggesting a long-lived and universal warp mechanism. A hallmark feature of ΛCDM cosmology is hierarchical assembly <cit.>, which suggests that warps might be formed in a universal way by galaxy mergers.
In this study, we advance our understanding of the Galactic warp by creating a gas-rich GSE merger simulation to reconstruct the morphology of our galaxy's disk. In Section <ref>, we provide a description of the simulation model and the initial conditions. In Section <ref>, we illustrate the long-lived and nonsteady warp model's evolution over time, comparing it with observational data at present. In Section <ref>, we calculate the warp precession rate and demonstrate its lopsided feature. In Section <ref>, we show that the disk is embedded in a live DM halo that is tilted and retrograde, possibly contributing to sustain the long-lived warp. Finally, we discuss and summarize our results in Section <ref> and Section <ref>.
§ MODEL AND SIMULATION
In <cit.>, they made three different models to bracket the mass range of (2-7)× 10^8 M_⊙ for the GSE progenitor. Each model considered three scale lengths to account for the significant scatter in size at fixed mass observed at z∼2. After simulation tests, they selected the fiducial model, determining the GSE progenitor mass to be 5× 10^8 M_⊙ and the scale length to be 1.5×SMR (size–mass relation). The DM halo mass was derived from the z=2 stellar mass-halo mass model in <cit.>. The Milky Way was modeled as a combination of a thick disk and bulge with a total stellar mass of 2× 10^10 M_⊙, following the scale lengths in <cit.>, and the DM halo was set to half the z=0 mass. More details on how these parameters were chosen can be found in Section 4 of <cit.>. In summary, they created faithful representations of both the GSE and the Milky Way progenitors as follows: the GSE comprises a stellar disk and a dark matter halo, while the Milky Way comprises a bulge, a stellar disk, and a dark matter halo.
The numerical initial conditions (ICs) are generated with the disk initial conditions environment (DICE) code <cit.>. The assumed density profiles for DM/stars/gas are input into DICE as distribution functions, which then generate Lagrangian particles with the Metropolis-Hastings Markov chain Monte Carlo algorithm <cit.>. In our simulation, we add a gas disk to each progenitor. The Milky Way progenitor comprises a bulge, a stellar disk, a gas disk, and a DM halo, whereas the GSE progenitor is modeled with a stellar disk, a gas disk, and a DM halo. Both disk models follow an exponential + sech-z profile. The bulge is modeled using an Einasto profile <cit.> to replace the Hernquist profile <cit.> used in <cit.>. The Einasto profile, which is similar to the Sérsic profile <cit.> but used for 3-D mass density <cit.>, is a reasonable choice since the Sérsic profile is widely used for fitting the surface density of galaxies. The DM halo is modeled using the Hernquist profile.
In Table <ref>, we present the initial conditions of our simulation, where the virial masses of the Milky Way and GSE progenitors at z∼2 are set to 5× 10^11 M_⊙ and 2× 10^11 M_⊙, respectively. Considering the higher gas fractions in galaxies at z∼2, we set the GSE's gas fraction to 0.8 and the Milky Way's to 0.5, slightly higher than the typical values observed at z∼1.5 <cit.>. We set the gas disk scale length for the Milky Way and GSE progenitors to 3R_s and 4R_s (R_s is the scale radius of the stellar disk), respectively. These choices follow model assumptions made in <cit.> and <cit.> that were used in simulation studies of galaxy mergers and metallicity gradients. The resolution mass for baryon particles is 1× 10^4 M_⊙ and for DM particles is 1× 10^5 M_⊙, and the initial relative velocity is calculated based on a Keplerian orbit. The modeling approach utilized in this study is based on GIZMO <cit.>, in which we have implemented the module of star formation and feedback processes as in <cit.>, as star formation can convert gas particles into stellar particles and stars can turn back into gas via feedback. It is a variation of the smoothed particle hydrodynamics code Gadget-3 and uses Adaptive Gravity Softening (more details in <cit.>).
As these structural choices play a secondary role relative to the total mass and orbital parameters described in <cit.>, they modeled the orbit with circularity η=0.5 and inclination θ=15^∘. In <cit.>, they conducted a similar simulation broadly following <cit.>, but changed the inclination to θ=30^∘. Based on the fiducial model of <cit.>, we aimed to reconstruct both the GSE and the warp structure in our simulation, leading to some differences in our orbital settings. Two galaxies with opposite disk spins were positioned on a radially-biased and retrograde orbit. The initial distance between them was set equal to the sum of their virial radii. The eccentricity was set to 0.75, which is more radial than the moderate η=0.5 orbit. The polar (θ) and azimuthal (φ) angles of the orbital plane's normal vector were set at (0^∘, 125^∘). This configuration indicates that the GSE is inclined at 35^∘ relative to the MW disk plane in the Z direction, slightly larger than in <cit.>, and the spin angles are set to (θ_1, θ_2, κ)=(125^∘, 55^∘, 180^∘), where θ_1 is the angle between the spin vector of the first galaxy and the orbital plane, θ_2 is the angle between the spin vector of the second galaxy and the orbital plane, and κ is the angle between the spin vector of the first galaxy and that of the second (see Fig.2 in <cit.> for more details). After the gas-rich merger happened, gas particles could inherit angular momentum from the orbital angular momentum of the merger and then be redistributed into a thin disk <cit.>. It will take several Gyrs for the subsequent virialization phase to rebuild the disk <cit.>, during which instabilities may develop some oscillations that we would like to search for.
§ A LONG-LIVED AND NONSTEADY WARP
The merger occurs rapidly, taking approximately 1 Gyr between the first and final pericenter passages, and is completed within the first 3 Gyrs, as shown in the orbital decay profile panel of Figure <ref>. The GSE merger is estimated to have occurred approximately 8-11 Gyrs ago. In Figure <ref>, we provide an edge-on perspective of the galactic disk, selecting particles based on their circularity: ϵ=L_Z/L_Z, max(E)>0.65. This selection criterion is applied at 9.05 Gyrs after the simulation began, corresponding to what we regard as the present (z∼0). This time aligns with Auriga 18 (Au18), a Milky Way-like disk galaxy from the Auriga project <cit.>, which is frequently referenced in GSE research. This visualization distinctly reveals an S-shaped warp in both the stellar and gas disks, consistent with observational data. Stars are color-coded based on their ages; older stars are represented with lighter green markers, while younger stars are depicted with more vibrant blue markers. The warp is the most pronounced for the youngest stars and spans multiple age groups <cit.>. The geometric shape of the warp can be approximated by a power-law warp model:
Z_w(R≥ R_w) =a(R-R_w)^b sin(ϕ -ϕ_w).
Z_w(R< R_w) =0.
Here, R and Z_w are Galactocentric cylindrical coordinates, a represents the amplitude of the warp, b is the power-law index, ϕ_w is the polar angle of the warp's line of nodes (LON hereafter), which indicates the orientation of the warp, and R_w is the onset radius of the warp. This model was employed by <cit.> to accurately fit the Cepheid tracers, prompting us to select stars within the young stars' age range (age<1 Gyr) in the simulation for comparison with the Cepheid data. As <cit.> notes, the parameters a, b, and R_w show a clear correlation, whereas ϕ_w is a more independent parameter in the geometric warp model. Consequently, while different studies might yield varying a, b, and R_w values, ϕ_w tends to be more consistent across measurements of the Galactic warp. In our analysis, we refrain from comparing ϕ_w with observational data, as its value can change based on the reference frame specified in the simulation; we therefore align ϕ_w with the y-axis, making the orientation similar to that of the MW.
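As an illustration of this geometric model, the short Python sketch below evaluates Z_w for given parameters; the numerical values in the example are hypothetical placeholders rather than fitted values from this work:
```python
import numpy as np

def warp_height(R, phi, a, b, R_w, phi_w):
    """Power-law warp model: Z_w = a (R - R_w)^b sin(phi - phi_w) for R >= R_w."""
    R, phi = np.asarray(R, float), np.asarray(phi, float)
    return a * np.clip(R - R_w, 0.0, None)**b * np.sin(phi - phi_w)

if __name__ == "__main__":
    # Hypothetical linear (b = 1) warp starting at R_w = 8 kpc
    R = np.linspace(5.0, 16.0, 5)            # Galactocentric radius [kpc]
    phi = np.full_like(R, np.pi / 2.0)       # direction of maximum displacement
    print(warp_height(R, phi, a=0.17, b=1.0, R_w=8.0, phi_w=0.0))  # heights [kpc]
```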
In the upper panel of Figure <ref>, we fitted the warp model to the simulation and found that the linear model (b=1) closely matches the observational data; the simulation's maximum warp amplitude is marked by a yellow dashed line, aligning closely with the Cepheid data <cit.>. The gas density projection map with a contour profile is displayed in the lower panel; in the central ∼1 kpc, the formation of the Galactic bar creates a noticeable gap in the inner region. Both sides of the gas warp align closely with the HI data from <cit.> within a 15 kpc range. Beyond this region, observational data reveal that the northern section (with a positive peak) exhibits a higher amplitude, whereas the southern part (with a negative peak) displays a lower amplitude. Nonetheless, the solid line remains within the high gas density region. Despite these minor discrepancies, the overall results are deemed satisfactory. Previous research utilizing different tracer types has noted different observational systematics in the measured amplitudes <cit.>.
Previous studies have reported the asymmetry of the warp in both gas and stellar components, with findings suggesting that the northern warp is more pronounced than its southern counterpart <cit.>. Accordingly, we plot horizontal lines to represent the height of |Z|=2.5 kpc in the bottom panel of Figure <ref>. This visualization shows the warp's asymmetry in our simulation: the southern part falls slightly below this height, while the northern part is consistent with observational data. It should be noted that, because the disk warp evolves continuously in our simulation, such asymmetry could occasionally disappear or suddenly intensify.
Figure <ref> illustrates the time evolution of the warp amplitude of the stellar/gas disks, calculated by fitting Eq. <ref> to the warp every 0.01 Gyr from t=3 Gyr, after the merger has been completed, and identifying the maximum amplitude at R=16 kpc, a radius near the edge of the disk in our simulation; the stellar disk is fitted with stars younger than 1 Gyr. Both disks exhibit similar evolution in Figure <ref>, as gas particles can convert into young stars in the simulation; such an evolution trend highlights the importance of the gas component in reconstructing the Galactic warp. The thin disk gradually formed after the merger in our simulation, corresponding to a look-back time of about 6 Gyr, which is close to the observational time frame in which the thin disk gradually formed about 7 Gyr ago <cit.>. The disk had not fully taken shape before 3.25 Gyr, leading to chaotic conditions and potentially inaccurate measurements prior to this point. Afterwards, the warp reached its initial peak at approximately 4.3 Gyr, decreased until around 5.2 Gyr, saw another peak at 6.4 Gyr, and then generally diminished to the present. Thus, we can identify several evident extrema in Figure <ref>, indicating that the warp is a long-lived and nonsteady structure.
§ KINEMATIC WARP MODEL
In section <ref>, we discuss the evolution of the Galactic warp, highlighting how its rapid amplitude changes underscore the dynamical nature of warps. This dynamism is crucial for unlocking insights into the formation history of galaxies and the mass distribution of their halos. The changing geometry of the warp can be succinctly described by the variation in the direction of its LONs at a precession rate ω. This is expressed as ϕ_w(t) = ϕ_w,0+ω t, where ϕ_w,0 represents the current position of the LONs. A time dependent model of warp is provided by <cit.>, described as:
V_Z(R,ϕ,t=0)=(V_ϕ/R -ω(R)) h_w(R) cos(ϕ -ϕ_w)+ ( ∂ h_w/∂ t ) sin(ϕ -ϕ_w).
Here, h_w(R)=a(R-R_w)^b, as described in Eq. <ref>, V_Z represents the mean vertical velocity, and V_ϕ is the mean azimuthal velocity. To simplify the calculations, we neglect the time derivative term ∂ h_w/∂ t and assume ω does not vary with radius. Although the warp is evolving, our calculations indicate that this approximation does not significantly affect the precession results. With this assumption, the model reduces to the same form as Equation 7 of <cit.>:
V_Z(R,ϕ)=(V_ϕ/R -ω)h_w(R)cos(ϕ-ϕ_w).
With this kinematic model, after determining the geometric shape by fitting with Eq. <ref>, we can plug the parameters h_w and ϕ_w into Eq. <ref> and measure the precession rate ω using kinematic data from the simulation. Here we select the young stars (age<1 Gyr) within the radial ranges of R=11∼13 kpc and R=13∼15 kpc, plotting the evolution of ω over time in the top panel of Figure <ref>. The dashed line represents the non-precessing state, while positive and negative values of ω indicate prograde and retrograde warp precession relative to disk rotation, respectively. The trend of ω fluctuates between positive and negative values; for example, the R=13∼15 kpc precession trend ranges approximately from -13 to 18 km s^-1 kpc^-1. Although there are periods where the precession rate is notably high in either the prograde or retrograde direction, it typically remains within a range of -5∼5 km s^-1 kpc^-1. A significant shift in the precession rate is observed around t=5.5 Gyr in Figure <ref>, which can be attributed to the exceptionally low warp amplitude during this period. Comparing the evolution trends of the precession and the amplitude in Figure <ref>, we find that the two consistently reach their extrema simultaneously, indicating a synchronous evolution. In the middle panel of Figure <ref>, the transparent band illustrates the velocity trend of simulated stars younger than 1 Gyr within the radial ranges of R=11∼13 kpc (left) and R=13∼15 kpc (right) at the present time, with the dashed line representing the mean velocity. This clearly demonstrates the warp's dynamical characteristics. In our simulation, a high prograde precession appears in the range of R=11∼13 kpc. Incorporating all geometric model parameters into Eq. <ref>, we obtain precession rates of 13.820 and 4.795 km s^-1 kpc^-1 for the two ranges; these values are close to some current precession observations. It can be seen that the fitting curve does not perfectly cover the mean velocity. This discrepancy arises because the vertical velocity is asymmetric about the longitude of peak vertical velocity, indicating that the warp is lopsided <cit.>. In the bottom panel of Figure <ref>, following <cit.>, we measure the velocity asymmetry by subtracting the median vertical velocity of stars on one side of ϕ_w from its complement on the other side at the same azimuthal separation in each radial annulus. A gradually increasing difference in vertical velocity is found as the azimuthal angle moves away from the velocity peak, consistent with the feature reported in <cit.>. This trend may become more pronounced in the outer disk; since the disk is asymmetric, we also find that the lopsided feature is more evident in the ϕ<0^∘ area.
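Because the kinematic model above is linear in ω once the geometric parameters are fixed, the precession rate can be estimated by simple least squares. The sketch below demonstrates such a fit on mock data; the warp parameters and mock velocities are hypothetical, and the code is not the authors' pipeline:
```python
import numpy as np

def precession_rate(R, phi, V_phi, V_Z, a, b, R_w, phi_w):
    """Least-squares estimate of the warp precession rate omega.

    Assumes V_Z = (V_phi/R - omega) h_w(R) cos(phi - phi_w) with
    h_w(R) = a (R - R_w)^b; units follow the inputs (e.g. km/s and kpc).
    """
    R, phi = np.asarray(R, float), np.asarray(phi, float)
    h = a * np.clip(R - R_w, 0.0, None)**b
    c = h * np.cos(phi - phi_w)                       # coefficient of omega
    resid = (np.asarray(V_phi) / R) * c - np.asarray(V_Z)
    return np.sum(resid * c) / np.sum(c * c)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, omega_true = 2000, 5.0                         # km s^-1 kpc^-1 (mock value)
    R = rng.uniform(13.0, 15.0, n)
    phi = rng.uniform(-np.pi, np.pi, n)
    V_phi = rng.normal(220.0, 10.0, n)
    a, b, R_w, phi_w = 0.17, 1.0, 8.0, 0.0            # hypothetical warp geometry
    V_Z = (V_phi / R - omega_true) * a * (R - R_w) * np.cos(phi - phi_w) \
          + rng.normal(0.0, 5.0, n)
    print(f"recovered omega = "
          f"{precession_rate(R, phi, V_phi, V_Z, a, b, R_w, phi_w):.2f}")
```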
In <cit.>, precession was calculated using 12 million red giant stars from Gaia DR2 <cit.>, yielding a precession rate of 10.86± 0.03_stat±3.20_syst km s^-1 kpc^-1 in the direction of Galactic rotation. Recently, a similar model was employed in <cit.>, using 134 Cepheids combined with line-of-sight velocities (RV) from Gaia DR3 <cit.>, determining a lower warp precession rate of 4.9±1.6 km s^-1 kpc^-1; our simulation is very close to these observations in the corresponding ranges. Another study by <cit.> also suggested a low precession rate and could not exclude a non-precessing warp of the MW. Consequently, determining the precession accurately is hindered by the lack of comprehensive spatial and kinematic data, compounded by the need for reliable tracers. In Figure <ref>, we plot the variation of the precession rate with Galactocentric radius R. As shown, our simulation results align well with other studies in the literature <cit.> for R<∼15 kpc. Both <cit.> and <cit.> reported a prograde precession that decreases with increasing Galactocentric radius, a trend that our simulation also captures. In our results, the precession rate continues to decrease, eventually turning negative (R∼15.5 kpc), indicating a transition to retrograde precession. The outer disk is more susceptible to being disturbed and is also likely to be influenced by Sgr and the LMC <cit.>, which might make the observed precession rate rise beyond 15 kpc.
Theoretical and numerical studies of misaligned non-spherical halo potentials imply that the direction of precession can be influenced by the DM halo shape, as a prolate or prolate-like halo can give rise to prograde precession, while an oblate-like halo should produce retrograde warp precession <cit.>. However, warps in oblate halos are not appropriate for explaining the observed warps, or they could not sustain a prominent warp <cit.>. Typical precession periods from such models, or from those produced by a misaligned outer torus of later-accreted material <cit.>, are approximately between 4 Gyr and 40 Gyr, corresponding to warp precession rates between 1.5 km s^-1 kpc^-1 and 0.1 km s^-1 kpc^-1. In Figure <ref>, the precession rate determined after the merger significantly surpasses all predicted values throughout the simulation period. The GSE merger led to an oblate halo, which could not account for the observed characteristics of the warp, and a precession value greater than 10 km s^-1 kpc^-1 indicates a likely transient response in the outer disk. Such a response could be attributed to interactions with a satellite galaxy <cit.>, a scenario not included in our simulation. Given the warp's persistence over an extended period and its exhibition of both high prograde and retrograde precession post-merger, unraveling the underlying dynamical mechanisms poses a challenge.
§ TILTED AND RETROGRADE DM HALO
<cit.> discovered that the stellar halo, spanning Galactocentric radii of 5-50 kpc, is tilted by approximately 25^∘ with respect to the disk plane. Following this discovery, they executed a numerical simulation utilizing a gravitational potential model of a galaxy. In their approach, 30% of the dark halo mass adopts a triaxial distribution, which is similarly tilted by 25^∘ above the Galactic plane towards the Sun. Their results suggest that the warp and the flare of the Galactic disk might be related to the tilted halo <cit.>.
If so, our simulated disk might also be embedded in a tilted DM halo. We select the DM particles within a Galactocentric radius of 10 kpc<r<50 kpc; with the mass and position of each particle, we can calculate the moment of inertia tensor. Solving for the eigenvector-eigenvalue pairs of the inertia tensor, we can find the three principal axes of rotation and their respective moments of inertia. The major axis has the minimum moment of inertia and the minor axis has the maximum moment of inertia. The lengths of the principal axes r_i, i∈{ a,b,c}, are related to the moments of inertia I_i as follows <cit.>:
r_i^2 ∝ ( -I_i + I_j + I_k ) / 2.
Once we obtain the moments of inertia, we can calculate the intermediate-to-major and minor-to-major axis ratios as follows:
r_b/r_a : r_c/r_a = √( ( -I_b + I_a + I_c ) / ( -I_a + I_b + I_c ) ) : √( ( -I_c + I_a + I_b ) / ( -I_a + I_b + I_c ) ).
Then we can calculate the triaxiality with T=(1-p^2)/(1-q^2), where p and q represent the intermediate-to-major and minor-to-major axis ratios, respectively. T>0.6 denotes a prolate halo, and T<0.3 indicates an oblate halo. By measuring the orientation of the axes (with the disk plane's normal vector aligned parallel to the Z-axis), we are able to determine the tilt angle. Our results indicate that the major and intermediate axes are degenerate, a scenario also observed in <cit.> and <cit.>. This degeneracy renders the measurement unstable if these axes are chosen. However, the minor axis is nondegenerate and can be reliably used to measure the tilt angle of the DM halo with respect to the disk plane.
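A minimal Python sketch of this shape and tilt measurement, applied here to mock particles rather than the simulation output, is given below; the helper name and the mock halo parameters are assumptions made for illustration:
```python
import numpy as np

def halo_shape(pos, mass):
    """Axis ratios, triaxiality and minor-axis tilt from the inertia tensor.

    pos  : (N, 3) particle positions with the disk normal along z
    mass : (N,) particle masses
    """
    x, y, z = pos.T
    I = np.empty((3, 3))
    I[0, 0] = np.sum(mass * (y**2 + z**2))
    I[1, 1] = np.sum(mass * (x**2 + z**2))
    I[2, 2] = np.sum(mass * (x**2 + y**2))
    I[0, 1] = I[1, 0] = -np.sum(mass * x * y)
    I[0, 2] = I[2, 0] = -np.sum(mass * x * z)
    I[1, 2] = I[2, 1] = -np.sum(mass * y * z)
    vals, vecs = np.linalg.eigh(I)                 # eigenvalues in ascending order
    Ia, Ib, Ic = vals                              # major, intermediate, minor axes
    # axis lengths from r_i^2 proportional to (-I_i + I_j + I_k) / 2
    ra = np.sqrt((-Ia + Ib + Ic) / 2.0)
    rb = np.sqrt((-Ib + Ia + Ic) / 2.0)
    rc = np.sqrt((-Ic + Ia + Ib) / 2.0)
    p, q = rb / ra, rc / ra                        # intermediate- and minor-to-major
    T = (1.0 - p**2) / (1.0 - q**2)                # triaxiality parameter
    minor = vecs[:, 2]                             # eigenvector of the largest moment
    tilt = np.degrees(np.arccos(abs(minor[2])))    # tilt relative to the disk plane
    return p, q, T, tilt

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # mock oblate halo flattened along z, then tilted by 20 degrees about x
    pts = rng.normal(size=(20000, 3)) * np.array([30.0, 30.0, 18.0])
    ang = np.radians(20.0)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(ang), -np.sin(ang)],
                    [0.0, np.sin(ang), np.cos(ang)]])
    print(halo_shape(pts @ rot.T, np.ones(len(pts))))
```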
In the middle and bottom panels of Figure <ref>, we illustrate the triaxiality computed with particles in 10 kpc<r<50 kpc and the net angular momentum of the DM halo within 50 kpc. Following the GSE merger, the simulated halo exhibits an oblate profile, in alignment with <cit.>, and maintains a negative angular momentum, indicating a retrograde halo with respect to the disk. <cit.> suggest that the vertical displacement of the disk in a retrograde halo will be stronger than in a prograde halo, leading to a more organized warp. A simple heuristic argument is that dynamical friction tries to bring the angular momentum vectors of the halo and disk into alignment, which means the warp will be strongly damped in a co-rotating halo but enhanced in a counter-rotating halo <cit.>. Thus, the retrograde halo resulting from the GSE merger could sustain a long-lived warp.
The top panel of Figure <ref> illustrates the time evolution of the DM halo tilt angle with respect to the disk, showing a continuously increasing trend with no evident correlation with the amplitude evolution. In the simulation of <cit.>, they fixed the halo's tilt angle at 25^∘ and found that the warp needs 1.5 Gyr to reach a steady-state amplitude, after which there is no significant change. This suggests that a live halo could facilitate a fast and nonsteady evolution of the warp. The halo's tilt angle continues to increase following the merger, which, as suggested by <cit.>, should result in an increasing warp amplitude. However, our simulation produces a warp with a low amplitude at t = 5.2 Gyr and shows irregular evolution, as illustrated in Figure <ref>. Therefore, we posit that in galaxy models with a more realistic cosmological setting, the actual mechanisms driving the warp evolution are likely to be more intricate than suggested by idealized simulations based solely on gravitational potentials.
Such changes in the tilted angle suggest that the DM halo might also exhibit rotational precession. We can approximate the DM halo as a rotating rigid body in the absence of external moments. In this scenario, we identify the body-fixed principal axes for the DM halo, resulting in a moment of inertia tensor in the simplified form of [I]=diag(I_xx,I_yy,I_zz). Consequently, the angular momentum vector can be expressed as 𝐇_𝐆={ 0,0,H_G}.
To describe the precession of the DM halo, we select the particles within 50 kpc and establish a body-fixed coordinate system by applying Euler angles with the rotation order R_ZR_XR_Z to transform the absolute X-Y-Z reference frame, which is initially defined in the simulation box. The free motion of the DM halo is then characterized by changes in these three Euler angles, which can be calculated as follows <cit.>:
ϕ̇ = H_G(cos^2ψ/I_yy+ sin^2ψ/I_xx).
θ̇ = H_G(1/I_xx - 1/I_yy)sinθ sinψ cosψ.
ψ̇ = H_G(1/I_zz - cos^2ψ/I_yy-sin^2ψ/I_xx )cosθ.
The angles ψ, ϕ, and θ are described as the “roll”, “yaw”, and “pitch” angles, and ψ̇, ϕ̇, and θ̇ are the spin, precession, and nutation rates, respectively. Since the DM halo exhibits an oblate profile, we align the minor axis with the z-axis of the body-fixed coordinate system; the evolution of the three rates is shown in Figure <ref>. The precession of the DM halo is about the original Z axis. Given that the angular momentum of the dark matter (DM) halo is opposite to that of the disk, the DM halo undergoes retrograde precession relative to the disk. This interaction may cause the disk warp to also exhibit retrograde precession, as shown in Figure <ref>, where the precession remains predominantly retrograde throughout most of the period.
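The torque-free rates above can be evaluated directly; the sketch below is an illustration with arbitrary (hypothetical) moments of inertia and angles, not values measured from the simulation:
```python
import numpy as np

def euler_rates(psi, theta, H_G, I_xx, I_yy, I_zz):
    """Torque-free rigid-body rates for angular momentum H_G along the inertial
    Z axis, following the equations above (phi: precession, theta: nutation,
    psi: spin)."""
    phi_dot = H_G * (np.cos(psi)**2 / I_yy + np.sin(psi)**2 / I_xx)
    theta_dot = H_G * (1.0 / I_xx - 1.0 / I_yy) * np.sin(theta) \
                * np.sin(psi) * np.cos(psi)
    psi_dot = H_G * (1.0 / I_zz - np.cos(psi)**2 / I_yy
                     - np.sin(psi)**2 / I_xx) * np.cos(theta)
    return phi_dot, theta_dot, psi_dot

if __name__ == "__main__":
    # hypothetical oblate halo: I_zz > I_xx ~ I_yy (minor axis along the body z-axis)
    print(euler_rates(psi=0.3, theta=0.4, H_G=1.0, I_xx=1.0, I_yy=1.05, I_zz=1.4))
```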
§ DISCUSSION
It is crucial to note that, in our simulation, one of the important parameters influencing warp creation is the azimuthal angle of the normal vector of the orbital plane. Specifically, we chose a slightly larger angle of 125^∘, which resulted in a particularly small part of the GSE debris following a high-energy orbit. These stars do not align well with current observations, suggesting that the simulation configuration may not be the most accurate model for reconstructing the GSE. Additionally, our work does not consider other potential mechanisms for the creation of the warp, such as interactions with satellite galaxies or other minor/mini infall events. Therefore, this suggests that a single major merger model cannot simultaneously fit both the GSE and the disk warp very accurately.
In Section <ref>, we calculated the precession rate versus time, revealing that the precession can alternate between prograde and retrograde. At present, it aligns well with observational data for R<∼15 kpc. Beyond this range, however, our simulation shows a transition to retrograde precession, potentially inconsistent with current observational data. Nevertheless, precise measurements necessitate extensive kinematic data. Additionally, gravitational interactions with a satellite galaxy, not accounted for in our simulation, could induce a prograde precession. Given the extensive evolutionary history of the Milky Way, it is plausible that other mechanisms affecting the precession rate have occurred. Therefore, incorporating additional models partway through this period may provide a more comprehensive understanding of these dynamics.
In Section <ref>, we find that our disk is embedded in a time-evolving, tilted, and retrograde DM halo. The research of <cit.> focused exclusively on a fixed tilt angle of the DM halo, neglecting the long-term dynamical mechanisms that emerge from mergers. Since they suggest that the warp requires 1.5 Gyrs to reach a steady state, our simulations indicate that such an extended period could result in significant changes within the DM halo. Therefore, we posit that variations of the tilt angle relative to the disk might influence the formation of the warp. Notably, the time evolution of the DM halo tilt angle does not exhibit a clear correlation with the observed changes in warp amplitude. This implies that in a comprehensive galaxy model, the impact of a tilted DM halo on warp formation may not be as straightforward as predicted by idealized gravitational potential models. Consequently, the Galactic warp model appears to necessitate a series of controlled experiments for further investigation, which we are preparing to conduct in our next study.
§ CONCLUSION
We conducted a hydrodynamical simulation of a gas-rich GSE merger, focused on reconstructing the GSE debris, and successfully produced a Galactic warp amplitude and precession that align well with observational data, finding that the warp is long-lived, nonsteady, asymmetric, and lopsided, exhibiting both prograde and retrograde precession after the completion of the merger. We found significant changes in the DM halo as a consequence of the major merger, which created a live, oblate, tilted, and retrograde DM halo. Specifically, the tilted and retrograde characteristics of the DM halo appear to sustain the warp.
This work was supported by the National Natural Science Foundation of China (NSFC Nos.11973042 and 11973052). We thank Xiaodian Chen for sharing the data. We are grateful to Phil Hopkins and Jianling Wang who kindly shared with us the access to the Gizmo code. We are grateful for the support of the International Research Program Tianguan, which is an agreement between the CNRS in France, NAOC, IHEP, and the Yunnan Univ. in China .
§ CHARACTERISTIC OF SIMULATED GSE
Here we list a summary of our GSE model in Figure <ref>. While most of the galaxy model parameters are adapted from <cit.>, our model features a significantly different orbit configuration. Nevertheless, our model successfully reconstructs the global characteristics of the GSE debris. The top-left panel illustrates the orbital decay profile. Owing to the large eccentricity of the orbit setting, the merger occurs rapidly, taking approximately only 1 Gyr between the first and final pericenter passages. The top-middle panel shows the all-sky density profile of the GSE; the power-law coefficients are given by a ρ∝ r_gal^α profile. As <cit.> proposed, we also adopted a double-break density profile, which is widely used for the inner halo (r_gal<30 kpc), and noticed that there are two breaks at r_gal=15 kpc and r_gal=30 kpc, respectively. The break at 15 kpc is a critical feature of the GSE merger <cit.>, and it is also roughly the radius where the disk warp reaches maximum amplitude, which may hint at a correlation between the GSE and the warp. Since the GSE is by far the most dominant component of the inner halo, we expect the overall halo density profile in this range to be fitted by our simulation; we find that the slope of the GSE density profile between the two breaks is a good match to that found for the inner halo <cit.>, and the third coefficient, after the second break, fits well with that of <cit.>, who report a second break at 27.18 kpc with α=2.86. Meanwhile, the top-right panel displays the anisotropy parameter β of the GSE debris with Galactocentric distance in a spherical coordinate system, which is defined as:
β = 1 - ( σ_θ^2 + σ_ϕ^2 ) / ( 2 σ_r^2 ).
where β quantifies the degree of velocity anisotropy of a system of stellar orbits and σ_i are the velocity dispersions in spherical coordinates. The observational data are from <cit.>, derived from the LAMOST <cit.> K-giant, Sgr-removed stellar halo. There exists a disparity between the parameter values in the data and the simulation results. However, both exhibit a consistent trend characterized by an increasing anisotropy parameter within 17 kpc, a relatively stable to decreasing trend between approximately 17-28 kpc, and an increase after that. The bottom panels show the most prominent features of the GSE in both the V_ϕ-V_r panel and the E-L_Z panel. We can find the “sausage” structure in the velocity panel, which was first discovered in <cit.> and is where the name “Gaia-Sausage” comes from; the contour profile tends to have negative values (indicating retrograde orbits in our setting) on both sides, implying a slight overall retrograde motion. The same feature is also reflected in the E-L_Z panel, where the majority of stars are situated in the central region, and the profile similarly trends toward negative values.
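For completeness, a short Python sketch of how β could be computed in radial bins from particle velocities is given below; the mock velocity dispersions are illustrative choices, not the simulation values:
```python
import numpy as np

def anisotropy_profile(r, v_r, v_theta, v_phi, r_edges):
    """Velocity anisotropy beta = 1 - (sig_theta^2 + sig_phi^2) / (2 sig_r^2)
    computed in Galactocentric radial bins."""
    beta = np.full(len(r_edges) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(r_edges[:-1], r_edges[1:])):
        sel = (r >= lo) & (r < hi)
        if sel.sum() > 10:
            beta[i] = 1.0 - (np.var(v_theta[sel]) + np.var(v_phi[sel])) \
                            / (2.0 * np.var(v_r[sel]))
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 50000
    r = rng.uniform(5.0, 40.0, n)                  # Galactocentric distance [kpc]
    v_r = rng.normal(0.0, 140.0, n)                # radially biased mock debris
    v_theta = rng.normal(0.0, 70.0, n)
    v_phi = rng.normal(-20.0, 70.0, n)             # slightly retrograde on average
    print(anisotropy_profile(r, v_r, v_theta, v_phi, np.linspace(5.0, 40.0, 8)))
```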
[Abdurro’uf et al.(2022)]AbdurroufAbdurro’uf, Accetta K., Aerts C. et al., 2022, ApJS, 259, 35
[Amôres et al.(2017)]AmoresAmôres, E. B., Robin, A. C., & Reylé, C. 2017, A&A, 602, A67
[Bailin(2003)]BailinBailin, J. 2003, ApJL, 583, L79
[Barnes(2002)]BarnesBarnes J. E., 2002, MNRAS, 333, 481
[Battaner & Jiménez-Vicente (1998)]BJBattaner, E., & Jiménez-Vicente, J. 1998, A&A, 332, 809
[Belokurov et al.(2018)]Belokurov Belokurov, V., Erkal, D., Evans, N. W., Koposov, S. E., Deason, A. J. 2018, MNRAS, 478, 611
[Belokurov et al.(2023)]Belokurov23Belokurov V., Vasiliev E., Deason A. J., Koposov S. E., Fattahi A., Dillamore A. M., Davies E. Y., Grand R. J. J., 2023, MNRAS, 518, 6200
[Behroozi et al.(2019)]BehrooziBehroozi, P., Wechsler, R. H., Hearin, A. P., & Conroy, C. 2019, MNRAS, 488, 3143
[Binney et al.(1998)]BinneyBinney J., Jiang I.-G., Dutta S., 1998, MNRAS, 297, 1237
[Bignone et al.(2019)]BignoneBignone, L. A., Helmi, A., Tissera, P. B. 2019, ApJL, 883, L5
[Bland-Hawthorn & Gerhard(2016)]BHGBland-Hawthorn, J., & Gerhard, O. 2016, ARA&A, 54, 529
[Bosma(1991)]BosmaBosma, A. 1991, Warped and Flaring HI Disks
(Cambridge: Cambridge Univ. Press), 181
[Cabrera-Gadea et al.(2024)]CCCabrera-Gadea, M., Mateu, C., Ramos, P., et al. 2024, MNRAS, 528, 4409
[Ciucă et al.(2024)]CiucaCiucă I., Kawata, D., Ting, Y.-S., et al. 2024, MNRAS.528L.122C
[Cui et al.(2012)]CuiCui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, RAA, 12, 1197
[Chen et al.(2019)]ChenChen, X., Wang, S., Deng, L., et al. 2019, Nature Astronomy, 3, 320
[Cheng et al.(2020)]ChengCheng, X., Anguiano, B., Majewski, S. R., et al. 2020, ApJ, 905, 49
[Chrobáková et al.(2021)]ChrChrobáková, Ž., & López-Corredoira, M. 2021, ApJ, 912, 130
[Coe(2010)]CoeCoe D., 2010, preprint (arXiv:1005.0411)
[Conroy et al.(2019a)]ConroyConroy, C., Bonaca, A., Cargile, P., et al. 2019a, ApJ, 883, 107
[Cox et al.(2008)]CoxCox T. J., Jonsson P., Somerville R. S., Primack J. R., Dekel A., 2008, MNRAS, 384, 386
[Deason et al.(2014)]DeasonDeason, A. J., Belokurov, V., Koposov, S. E., & Rockosi, C. M. 2014, ApJ, 787, 30
[Dehnen et al.(2023)]Dehnen23Dehnen, W., Semczuk, M., & Schönrich, R. 2023, MNRAS, 523, 1556
[Dubinski, J. & Chakrabarty(2009)]DubinskiDubinski J., & Chakrabarty D., 2009, AJ. 703, 2068–2081
[Einasto(1965)]EinastoEinasto, J. 1965, Trudy Inst. Astroz. Alma-Ata, 51, 87
[Elias et al.(2020)]EliasElias, L. M., Sales, L. V., Helmi, A., Hernquist, L. 2020, MNRAS, 495, 29
[Fattahi et al.(2019)]FattahiFattahi, A., Belokurov, V., Deason, A. J., et al. 2019, MNRAS, 484, 4471
[Freudenreich et al.(1994)]FreudenreichFreudenreich, H. T., Berriman, G. B., Dwek, E., et al. 1994, ApJL, 429, L69
[Gaia Collaboration et al.(2018)]GaiaGaia Collaboration et al. Gaia Data Release 2. Mapping the Milky Way disc kinematics. Astron. Astrophys. 616, A11.
[Gaia Collaboration et al.(2023a)]Gaia3Gaia Collaboration, Drimmel, R., Romero-G´omez, M., et al. 2023a, A&A, 674, A37
[Grand et al.(2017)]Grand17Grand R. J. J., Gómez F., Marinacci F., et al. 2017, MNRAS, 467, 179
[Grand et al.(2020)]GrandGrand, R. J. J., Kawata, D., Belokurov, V., et al. 2020, MNRAS, 497, 1603
[Hammer et al.(2009)]HammerHammer F., Flores H., Puech M., Yang Y. B., Athanassoula E., Rodrigues M., Delgado R., 2009, A&A, 507, 1313
[Han et al.(2023)]Han23Han, J. J., Conroy, C., Hernquist, L., 2023NatAs...7.1481H
[Han et al.(2022)]Han22Han, J. J., Conroy, C., Johnson, B. D., et al. 2022a, AJ, 164, 249
[He(2023)]HeHe. 2023, ApJ, 954L, 9H
[Helmi et al.(2018)]HelmiHelmi, A., Babusiaux, C., Koppelman, H. H., et al. 2018, Natur, 563, 85
[Hernquist(1990)]HernquistHernquist, L. 1990, ApJ, 356, 359
[Hopkins et al.(2009)]Hopkins09Hopkins P. F., 2009, in Jogee S., Marinova I., Hao L., Blanc G. A., eds, ASP Conf. Ser. Vol. 419, Galaxy Evolution: Emerging Insights and Future Challenges. Astron. Soc. Pac., San Francisco, p. 228
[Hopkins(2015)]Hopkins15Hopkins P. F., 2015, MNRAS, 450, 53
[Hopkins et al.(2018)]Hopkins18Hopkins P. F., Wetzel A., Kereš D., 2018b, MNRAS, 480, 800
[Huang et al.(2024)]HuangHuang, Y. et al. A slightly oblate dark matter halo revealed by a retrograde precessing galactic disk warp. Nature Astronomy early access (2024).
[Ideta et al.(2000)]IdetaIdeta M., Hozumi S., Tsuchiya T., Takizawa M., 2000, MNRAS, 311, 733
[Iorio & Belokurov(2019)]IorioIorio, G. & Belokurov, V. 2019, MNRAS, 482, 3868
[Jeon et al.(2009)]JeonJeon, M., Kim, S. S., & Ann, H. B. 2009, ApJ, 696, 1899
[Jiang & Binney(1999)]JBJiang, I.-G., & Binney, J. 1999, MNRAS, 303, L7
[Joshi et al.(2024)]JoshiJoshi, R., Widrow, Lawrence M. 2024MNRAS.527.7781J
[Kazantzidis et al.(2004)]KazantzidisKazantzidis, S., Kravtsov, A. V., Zentner, A. R., et al. 2004, ApJL, 611, L73
[Kerr(1957)]KerrKerr, F. J. 1957, AJ, 62, 93
[Koppelman et al.(2020)]KoppelmanKoppelman, H. H., Bos, R. O. Y., Helmi, A. 2020, A&A, 642, L18
[Laporte et al.(2019)]LaporteLaporte, C. F. P., Minchev, I., Johnston, K. V., & Gómez, F. A. 2019, MNRAS, 485, 3134
[Lemasle et al.(2022)]LemasleLemasle et al. 2022, A&A, 668, A40
[Levine et al.(2006)]LevineLevine E. S., Blitz L., Heiles C., 2006, Science, 312, 1773
[Luo et al.(2012)]LuoLuo, A. L., Zhang, H.-T., Zhao, Y.-H., et al. 2012, RAA, 12, 1243
[López-Corredoira et al.(2002a)]LCLópez-Corredoira, M., Betancort-Rijo, J., & Beckman, J. E. 2002a, A&A, 386, 169
[López-Corredoira et al.(2014)]LC14López-Corredoira, M., Abedi, H., Garzón, F., & Figueras, F. 2014, A&A, 572, A101
[Li et al.(2023)]Li23Li, X., Wang, H.-F., Luo, Y.-P., et al. 2023, ApJ, 943, 88
[Metropolis et al.(1953)]MetropolisMetropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. 1953, J. Chem. Phys., 21, 1087
[Naidu et al.(2020)]Naidu2020Naidu, R. P., Conroy, C., Bonaca, A., et al. 2020, ApJ, 901, 48
[Naidu et al.(2021)]Naidu2022Naidu R. P., Conroy C., Bonaca, A., et al. 2021, ApJ, 923, 92
[Nelson & Tremaine(1995)]ntNelson R. W., Tremaine S., 1995, MNRAS, 275, 897
[Ostriker & Binney(1989)]OBOstriker, E. C., & Binney, J. J. 1989, MNRAS, 237, 785
[Peraire & Widnall(2009)]Peraire Peraire J., Widnall S., 16.07 Dynamics, Fall 2009 Version 2.0
[Perret et al.(2014)]Perret14Perret, V., Renaud, F., Epinat, B., et al. 2014, A&A, 562, A1
[Perret(2016)]Perret16Perret, V. 2016, DICE: Disk Initial Conditions Environment, Astrophysics Source Code Library, ascl:1607.002
[Poggio et al.(2020)]PoggioPoggio, E., Drimmel, R., Andrae, R., et al. 2020, Nature Astronomy, 4, 590
[Quinn & Binney(1992)]QBQuinn, T., & Binney, J. 1992, MNRAS, 255, 729
[Revaz & Pfenniger(2004)]RPRevaz, Y., & Pfenniger, D. 2004, A&A, 425, 67
[Reshetnikov & Combes(1998)]ReshetnikovReshetnikov, V., & Combes, F. 1998, A&A, 337, 9
[Reylé et al.(2009)]ReyleReylé, C., Marshall, D. J., Robin, A. C., & Schultheis, M. 2009, A&A, 495, 819
[Rocha et al.(2008)]RochaRocha M., Jonsson P., Primack J. R., Cox T. J., 2008, MNRAS, 383, 1281
[Rodrigues et al.(2012)]RodriguesRodrigues M., Puech M., Hammer F., Rothberg B., Flores H., 2012, MNRAS, 421, 2888
[Roskar et al.(2010)]RoskarRoskar, R., Debattista, V. P., Brooks, A. M., et al. 2010, MNRAS, 408, 783
[Sánchez-Saavedra et al.(1990)]SS90Sánchez-Saavedra, M. L., Battaner, E., & Florido, E. 1990, MNRAS, 246, 458
[Sánchez-Saavedra et al.(2003)]SS03Sánchez-Saavedra, M. L., Battaner, E., Guijarro, A., López-Corredoira, M., & Castro-Rodríguez, N. 2003, A&A, 399, 457
[Sellwood & Debattista(2022)]SDSellwood, J. A., & Debattista, V. P. 2022, MNRAS, 510, 1375
[Sérsic(1968)]SersicSérsic, J. L. 1968, Atlas de galaxias australes (Cordoba, Argentina: Observatorio Astronomico, 1968)
[Shao et al.(2021)]ShaoShao, S., Cautun, M., Deason, A., & Frenk, C. S. 2021, MNRAS, 504, 6033
[Shen & Sellwood(2006)]ShenShen, J. & Sellwood, J. A., 2006, MNRAS, 370, 2
[Skowron et al.(2019a)]SkowronSkowron, D. M., Skowron, J., Mróz, P., et al. 2019a, AcA, 69, 305
[Snaith et al.(2014)]SnaithSnaith, O. N., Haywood, M., Di Matteo, P., et al. 2014, ApJ, 781, L31
[Springel et al.(2005)]Springel05Springel, V. 2005, MNRAS, 364, 1105
[Springel et al.(2021)]Springel21Springel, V., Pakmor, R., Zier, O., Reinecke, M. 2021, MNRAS, 506,2871
[Stelea et al.(2024)]SteleaStelea, I. A., Hunt, J. A. S., & Johnston, K. V. 2024, The Milky Way’s rowdy neighbours: The effects of the Large Magellanic Cloud and Sagittarius Dwarf on the Milky Way Disc
[Wang et al.(2018)]WangWang H., Lopez-Corredoira ´ M., Carlin J. L., Deng L., 2018, MNRAS, 477, 2858
[Wang et al.(2012)]WangjWang J., Hammer F., Athanassoula E., Puech M., Yang Y., Flores H., 2012, A&A, 538, A121
[Weinberg & Blitz(2006)]WBWeinberg, M. D., & Blitz, L. 2006, ApJL, 641, L33
[White & Frenk(1991)]WhiteWhite, S. D. M., & Frenk, C. S. 1991, ApJ, 379, 52
[Wu et al.(2022)]WuWu, W.B., Zhao, G., Xue,X.X., et al. 2022, ApJ, 924, 23W
[Ye et al.(2023)]YeYe D., Du C., Shi J., Ma J., 2023, MNRAS, 525, 2472
[Zhao et al.(2006)]Zhao06Zhao, G., Chen, Y.-Q., Shi, J.-R., et al. 2006, ChJAA, 6, 265
[Zhao et al.(2012)]Zhao12Zhao, G., Zhao, Y.-H., Chu, Y.-Q., Jing, Y.-P., & Deng, L.-C. 2012, RAA, 12, 723
[Zhou et al.(2024)]ZhouZhou, X.Y., Chen, X.D., Deng, L.C., et al. doi:10.48550/arXiv.2402.15782
|
http://arxiv.org/abs/2409.02512v1 | 20240904082147 | Continual Diffuser (CoD): Mastering Continual Offline Reinforcement Learning with Experience Rehearsal | [
"Jifeng Hu",
"Li Shen",
"Sili Huang",
"Zhejian Yang",
"Hechang Chen",
"Lichao Sun",
"Yi Chang",
"Dacheng Tao"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
^1,3,4,5,7School of Artificial Intelligence, Jilin University, Changchun, China
^2JD Explore Academy, Beijing, China
^6Lehigh University, Bethlehem, Pennsylvania, USA
^8College of Computing and Data Science, NTU, Singapore
<https://github.com/JF-Hu/Continual_Diffuser>
§ ABSTRACT
Artificial neural networks, especially recent diffusion-based models, have shown remarkable superiority in gaming, control, and QA systems, where the training tasks' datasets are usually static.
However, in real-world applications, such as robotic control of reinforcement learning (RL), the tasks are changing, and new tasks arise in a sequential order.
This situation poses the new challenge of plasticity-stability trade-off for training an agent who can adapt to task changes and retain acquired knowledge.
In view of this, we propose a rehearsal-based continual diffusion model, called Continual Diffuser (CoD), to endow the diffuser with the capabilities of quick adaptation (plasticity) and lasting retention (stability).
Specifically, we first construct an offline benchmark that contains 90 tasks from multiple domains.
Then, we train CoD on each task with sequential modeling and conditional generation for making decisions.
Next, we preserve a small portion of previous datasets as the rehearsal buffer and replay it to retain the acquired knowledge.
Extensive experiments on a series of tasks show that CoD can achieve a promising plasticity-stability trade-off and outperform existing diffusion-based methods and other representative baselines on most tasks.
Source code is available at https://github.com/JF-Hu/Continual_Diffuserhere.
§ INTRODUCTION
Artificial neural networks, such as diffusion models, have made impressive successes in decision-making scenarios, e.g., game playing <cit.>, robotics manipulation <cit.>, and autonomous driving <cit.>.
However, in most situations, a new challenge of difficult adaptation to changing data arises when we adopt the general strategy of learning during the training phase and evaluating with fixed neural network weights <cit.>.
Changes are prevalent in real-world applications when performing learning in games, logistics, and control systems.
A crucial step towards achieving Artificial General Intelligence (AGI) is mastering the human-like ability to continuously learn and quickly adapt to new scenarios over the duration of their lifetime <cit.>.
Unfortunately, it is usually ineffective for current methods to simply continue learning on new scenarios when new datasets arrive.
They will show a dilemma between storing historical knowledge (stability) in their brains and adapting to environmental changes (plasticity) <cit.>.
Recently, we have noticed that diffusion probabilistic models (DPMs) have emerged as an expressive structure for tackling complex decision-making tasks such as robotics manipulation by formulating deep reinforcement learning (RL) as a sequential modeling problem <cit.>.
Although recent DPMs have shown impressive performance in robotics manipulation, they, however, usually focus on a narrow setting, where the environment is well-defined and remains static all the time <cit.>, just like we introduce above.
In contrast, in real-world applications, the environment changes dynamically in chronological order, forming a continuous stream of data encompassing various tasks.
In this situation, it is challenging for the agents to contain historical knowledge (stability) in their brains and adapt to environmental changes (plasticity) quickly based on already acquired knowledge <cit.>.
Thus, a natural question arises:
Can we incorporate DPMs' merit of high expression and concurrently endow DPMs the ability towards better plasticity and stability in continual offline RL?
Facing the long-standing challenge of plasticity-stability dilemma in continual RL, current studies of continual learning can be roughly classified into three categories.
Structure-based methods <cit.> propose the use of a base model for pretraining and sub-modules for each task so as to store separate knowledge and reduce catastrophic forgetting.
Regularization-based methods <cit.> propose using auxiliary regularization loss such as L_2 penalty, KL divergence, and weight importance to contain policy optimization and avoid catastrophic forgetting during training.
Rehearsal-based methods <cit.> are considered simple yet effective in alleviating catastrophic forgetting as rehearsal mimics the memory consolidation mechanism of hippocampus replay inside biological systems.
There are many strategies to perform rehearsal. For instance, a typical method is gradient projection <cit.>, which constrains the gradients of the new-data loss to stay as close as possible to those of previous tasks, preventing performance decrease to the largest extent.
Although these methods are effective for continual learning, they present limited improvement in continual offline RL because of extra challenges such as distribution shift and uncertain value estimation.
Recently, diffusion-based methods, such as DD and Diffuser <cit.>, propose to resolve the above two extra challenges from sequential modeling and have shown impressive results in many offline RL tasks.
However, they concentrate solely on training a diffuser that can only solve one task, thus showing limitations in real-world applications where training datasets or tasks usually arrive sequentially.
Though recent works, such as MTDIFF <cit.>, consider diffusers as planner or data generators for multi-task RL, the problem setting of their work is orthogonal to ours.
In this view, we take one step forward to investigate diffusers with arriving datasets and find that recent state-of-the-art diffusion-based models suffer from catastrophic forgetting when new tasks arrive sequentially (See Section <ref> for more details.).
To address this issue, we propose the “Continual Diffuser” (CoD), which endows the diffuser with the capabilities of quickly adapting to new tasks (plasticity) while retaining the historical knowledge (stability) with experience rehearsal.
First of all, to take advantage of the potential of diffusion models, we construct an offline RL benchmark that consists of 90 tasks from multiple domains, such as Continual World (CW) and Gym-MuJoCo.
These continual datasets will be released to all researchers soon at the present stage, and we will actively maintain and progressively incorporate more datasets into our benchmark.
Based on the benchmark, we train our method on each task with sequential modeling of trajectories and make decisions with conditional generation in evaluation.
Then, a small portion of each previous task dataset is reserved as the rehearsal buffer to replay periodically to our model.
Finally, extensive experiments on a series of tasks show that CoD can achieve a promising plasticity-stability trade-off and outperform existing diffusion-based models and other representative continual RL methods on most tasks.
In summary, our contributions are threefold:
* We construct a continual offline RL benchmark that contains 90 tasks in the current stage, and we will actively incorporate more datasets for all researchers.
* We investigate the possibility of integrating experience rehearsal and diffusers, and then propose the Continual Diffuser (CoD) to balance plasticity and stability.
* Extensive experiments on a series of tasks show that CoD can achieve a promising plasticity-stability trade-off and outperform existing baselines on most tasks.
§ RESULTS
In this section, we will introduce the environmental settings and evaluation metrics in Sections <ref> and <ref>.
Then, in Section <ref> and <ref>, we first introduce a novel continual offline RL benchmark, including the task description and the corresponding dataset statistics, and introduce various baselines.
Finally, in Section <ref> and <ref>, we report the comparison results, ablation study, and parameters sensitivity analysis.
§.§ Environmental Settings
Following the same setting as prior works <cit.>, we conduct thorough experiments on Continual World and Gym-MuJoCo benchmarks.
In Continual World, we adopt the task setting of CW10 and CW20 where CW20 means two concatenated CW10.
All CW tasks are version v1.
Besides, we also select Ant-dir for evaluation, which includes 40 tasks, and we arbitrarily select
four tasks (tasks-10-15-19-25) for training and evaluation. See Appendix <ref> for more details.
§.§ Evaluation Metrics
In order to compare the performance on a series of tasks, we follow previous studies <cit.> and adopt the totally average success rate P(ρ) (higher is better), forward transfer FT (higher is better), forgetting F (lower is better), and the total performance P+FT-F (higher is better) as evaluation metrics.
Suppose that p_i(ρ) represents the average success rate on task i at gradient update step ρ and that each task is trained for Δ gradient steps; then the total average success rate is P(ρ)=∑_i=1^Ip_i(ρ), where p_i(ρ) ∈ [0, 1].
The forward transfer FT denotes the normalized AUC area between the training curve and the reference curve.
Note that FT_i<1 and it might also be negative.
Mathematically, FT=1/I∑_iFT_i=1/I∑_iAUC_i-AUC_ref,i/1-AUC_ref,i, where we set AUC_ref,i=0.5 and AUC_i=(p_i(i·Δ)+p_i((i+1)·Δ))/2 for simplicity.
The forgetting F_i is defined as the performance decrease between p_i((i+1)·Δ) and p_i(I·Δ); thus F=1/I∑_i^IF_i.
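For concreteness, a minimal Python sketch of these three metrics is given below (assuming the per-task success-rate curves are stored in a NumPy array p of shape [I, I·Δ]; the array layout and function name are our own illustration, not part of the released code):

import numpy as np

def continual_metrics(p, delta, auc_ref=0.5):
    # p[i, rho]: success rate of task i after rho gradient steps, values in [0, 1]
    I = p.shape[0]
    P = p[:, -1].sum() / I                                   # average success rate at the end of training
    FT, F = 0.0, 0.0
    for i in range(I):
        auc_i = 0.5 * (p[i, i * delta] + p[i, (i + 1) * delta - 1])
        FT += (auc_i - auc_ref) / (1.0 - auc_ref)            # forward transfer FT_i
        F += p[i, (i + 1) * delta - 1] - p[i, -1]            # forgetting F_i
    return P, FT / I, F / I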
§.§ Novel Benchmark for Continual Offline RL
To take advantage of the potential of diffusion models, we propose a benchmark for continual offline RL (CORL), comprising datasets from 90 tasks, including 88 Continual World tasks and 2 Gym-MuJoCo tasks <cit.>.
For the Gym-MuJoCo domain, there are 42 environmental variants, which are constructed by altering the agent goals.
In order to collect the offline datasets, we trained Soft Actor-Critic (SAC) on each task for approximately 1M time steps <cit.>.
Continual World <cit.> is a popular testbed that is constructed based on Meta-World <cit.> and consists of realistic robotic manipulation such as Pushing, Reaching, and Door Opening.
CW is convenient for training and evaluating the abilities of forward transfer and forgetting because the state and action space are the same across all tasks.
Firstly, we will define the class-incremental CORL (CICORL), task-incremental CORL (TICORL), and domain-incremental CORL (DICORL) <cit.>.
In RL, we call the CL setting CICORL, where the CL tasks are constructed in the same environment with different goals, such as different directions or velocities.
We call the CL setting TICORL, where the CL tasks are indeed different environments but with the same purposes. For instance, the CL settings with the purpose of pushing blocks (e.g., “push wall” and “push mug” tasks in Continual World) in different robotic control tasks formulate the TICORL.
Finally, we can use the tasks of different purposes, such as push, pull, turn, and press blocks, to construct the DICORL.
For example, CW10 and CW20 form the mixed TICORL and DICORL setups because the task sequence contains multiple purposes.
Additionally, Gym-MuJoCo's 42 environmental variants facilitate constructing a CICORL setup.
Researchers can use these datasets in any sequence or length for CL tasks to test the plasticity-stability trade-off of their methods.
We also provide multiple quality datasets, such as `medium' and `expert,' in our benchmark.
We list the information statistics of our benchmark in Table <ref> and <ref>, and Figure <ref> and <ref>, where the episodic time limit is set to 200, and the evaluation time step is set to 1M and 0.4M for different qualities datasets.
Ant-dir is an 8-joint ant environment.
The different tasks are defined according to the target direction, where the agent should maximize its return with maximal speed in the pre-defined direction.
As shown in Table <ref>, there are 40 tasks (distinguished with “task id”) with different uniformly sampled goal directions in Ant-dir.
For each task, the dataset contains approximately 200k transitions, where the observation and action dimensions are 27 and 8, respectively.
We found that the Ant-dir datasets have been used by many researchers <cit.>, so we incorporate them into our benchmark.
Moreover, we report the mean return information of each sub-task in Table <ref> and Figure <ref>.
As for Cheetah-dir, it only contains two tasks that represent forward and backward goal directions.
Compared with Ant-dir, Cheetah-dir possesses lower observation and action space.
§.§ Baselines
We compare our method (CoD) with various representative baselines, encompassing structure-based, regularization-based, and rehearsal-based methods.
In structure-based methods, we select LoRA <cit.>, PackNet <cit.>, and Multitask.
For regularization-based methods, we select L2, EWC <cit.>, MAS <cit.>, and VCL <cit.> for evaluation.
Rehearsal-based baselines include t-DGR <cit.>, DGR <cit.>, CRIL <cit.>, A-GEM <cit.>, and IL <cit.>.
Besides, we also include several diffusion-based methods <cit.> and Multitask methods, such as MTDIFF <cit.> for the evaluation.
§.§ Main Results
Ant-dir Results.
To show the effectiveness of our method in reducing catastrophic forgetting, we compare our method with other diffusion-based methods on the Ant-dir tasks ordered by 10-15-19-25.
As shown in Table <ref> (d) and Figure <ref>, the results illustrate:
1) Directly applying previous diffusion-based methods to continual offline RL leads to severe catastrophic forgetting because the scores of Diffuser-w/o rehearsal and DD-w/o rehearsal are far behind CoD.
2) Extending the technique of LoRA to the diffusion model may not always work. The reason is that the number of adapted parameters is small, which inspires us to construct diffuser foundation models in future work.
3) Rehearsal can bring significant improvements to the diffuser, as CoD approaches the score of the Multitask baseline.
Online Continual World Results.
Offline datasets prohibit further exploration in the environments, which may hinder the capability of some baselines that are designed for online training.
We therefore conduct CW10 and CW20 experiments of these methods under the online continual RL setting.
Similarly, we constrain the interaction as 500k time steps for each task and report the comparison results in Figure <ref> (a) and Table <ref> (a).
The results show that our method (CoD) surpasses other baselines by a large margin, which illustrates its superior performance in balancing plasticity and stability.
Besides, it is indeed that some methods, such as EWC, are more suitable for online training by comparing the performances in Figure <ref> (a) and (b).
Additionally, we also report the comparison under mixed-quality datasets CL setting in Table <ref> (c).
Please refer to Appendix <ref> for the comparison of model plasticity and generation acceleration details.
Offline Continual World Results.
This section presents the comparison between and six representative continual RL methods on CW10 and CW20 benchmarks.
In order to show the capabilities of plasticity (quick adaptation to unseen tasks) and stability (lasting retention of previous knowledge), we keep the size of training samples, number of gradient updates, and computation constant.
Figure <ref> (b) and Table <ref> (b) summarize the results of CW10 and CW20 tasks.
We observe that our method can quickly master these manipulation tasks and remember the acquired knowledge when new tasks arrive, while the baselines (except for Multitask) struggle between plasticity and stability because the performance of these baselines fluctuates among tasks.
Moreover, after 5M gradient steps, our method still remembers how to solve the same task it learns, which shows small forgetting.
The results in the table also show that though some baselines exhibit high forward transfer, their average success rate is lower than that of our method, and they forget knowledge quickly.
§.§ Ablation Study
To show the effectiveness of experience rehearsal, we conduct an ablation study of CoD on CW and Ant-dir tasks.
We compare our method with and without experience rehearsal and find that experience rehearsal indeed brings significant performance gain.
For example, CoD achieves a 76.82% performance gain compared with CoD-w/o rehearsal.
On the CW20 tasks, the mean success rate of CoD rises from 20% to 98% when incorporating experience rehearsal.
Refer to Table <ref> for more results.
Sensitivity of Key Hyperparameters.
In the experiments, we introduce the key hyper-parameters: the rehearsal frequency (υ) and rehearsal sample diversity (ξ).
The larger υ will aggravate the catastrophic forgetting because the model can access previous samples after a longer training process.
A large value of ξ will improve the performance and increase the storage burden, while a small value is more cost-efficient for longer CL tasks but is more challenging to hold the performance.
We conduct the sensitivity of the hyperparameters on the CW and Ant-dir environments, and the results are shown in Figure <ref> and Figure <ref>.
According to the results, our method can still reach good performance with the variation of υ and ξ.
§ DISCUSSION
§.§ Catastrophic Forgetting of Diffuser
Previous diffusion-based methods <cit.>, such as DD and Diffuser, are usually proposed to solve a single task, which is not in line with the real-world situation where the task will dynamically change.
Thus, it is meaningful but challenging to train a diffuser that can adapt to new tasks (plasticity) while retaining historical knowledge.
When we directly extend the original diffusion-based method in continual offline RL, we can imagine that severe catastrophic forgetting will arise in the performance because there are no mechanisms to retain preceding knowledge.
As shown in Figure <ref>, in order to show the catastrophic forgetting, we compare our method and the representative diffusion-based methods on Ant-dir, where we arbitrarily select four tasks, task-10, task-15, task-19, and task-25, to form the CL setting.
Diffuser-w/o rehearsal and DD-w/o rehearsal represent the original method Diffuser and DD, respectively.
Multitask and MTDIFF are the multitask baselines, which can access all training datasets at any time step, and CoD-RCR denotes that we use the return condition for decision generation during the training stage.
CoD-LoRA denotes that we train CoD with the technique of low-rank adaptation.
IL-rehearsal is the imitation learning with rehearsal.
The results show that previous diffusion-based methods exhibit severe catastrophic forgetting when the datasets arrive sequentially, and at the same time, the good performance of CoD illustrates that experience rehearsal is effective in reducing catastrophic forgetting.
§.§ Reducing Catastrophic Forgetting with Experience Rehearsal
In Section <ref>, we illustrate the effectiveness of experience rehearsal through the experiments on our proposed offline CL benchmark, which contains 90 tasks for evaluation.
From the perspective of the number of CL tasks, we evaluate various settings, such as 4 tasks for Ant-dir, 4 tasks for CW4, 10 tasks for CW10, and 20 tasks for CW20.
From the perspective of classification of traditional CL settings, our experimental settings contain CICORL, TICORL, and DICORL.
In the Ant-dir environment, we select 10-15-19-25 task sequence as the CL setting and conduct the experiment compared with other diffusion-based methods.
From the results shown in Figure <ref>, we can see distinct catastrophic forgetting on the recent diffusion-based method, though they show strong performance in other offline RL tasks <cit.>.
To borrow the merits of diffusion models' strong expression on offline RL and equip them with the ability to reduce catastrophic forgetting, we propose to use experience rehearsal to master the CORL.
Detailed architecture is shown in Figure <ref>, and we postpone the method description in Section <ref>.
Apart from the Ant-dir environment, we also report the performance on more complex CL tasks, i.e., CW10 and CW20, in Table <ref>.
Considering that most baselines are trained in online mode in their original papers, we first select the online baselines and compare their mean success rate with our method.
The results (Table <ref> and Figure <ref>) show that our method (CoD) surpasses other baselines by a large margin, which illustrates its superior performance in balancing plasticity and stability.
Besides, we also compare our method with these baselines trained with offline datasets, where the results show that our method can quickly master these manipulation tasks and remember the acquired knowledge when new tasks arrive, while the baselines (except for Multitask) struggle between plasticity and stability because the performance of these baselines fluctuates among tasks.
When the previous tasks appear once again after 5M training steps, the baselines show different levels of catastrophic forgetting because the performance decreases after 5M steps.
However, our method still remembers how to solve the same task it learned before, which shows small forgetting.
Moreover, we also conduct mixed-quality dataset experiments to show our method's capability of learning from sub-optimal offline datasets.
For more details, please refer to Appendix <ref>.
To investigate the influence of key hyperparameters, we report the performance of the rehearsal frequency (υ) and rehearsal sample diversity (ξ) in Figure <ref> and Figure <ref>, where larger υ corresponds to aggravated catastrophic forgetting and a larger value of ξ will improve the performance and increase the storage burden.
In practice, we find that usually υ=2 and ξ=10% indicate good performance and pose small challenges for the computation and memory burden (see Appendix <ref> for memory and efficiency analysis.).
§ METHODS
§.§ Continual Offline RL
In this paper, we focus on the task-incremental setting of task-aware continual learning in the offline RL field where the different tasks come successively for training <cit.>.
Each task is defined as a corresponding Markov Decision Process (MDP) ℳ=⟨𝒮, 𝒜, 𝒫, ℛ, γ⟩, where 𝒮 and 𝒜 represent the state and action space, respectively, 𝒫: 𝒮×𝒜→Δ(𝒮) denotes the Markovian transition probability, ℛ: 𝒮×𝒜×𝒮→ℝ is the reward function, and γ∈ [0, 1) is the discount factor.
In order to distinguish different tasks, we use subscript i for task i, such as ℳ_i, 𝒮_i, 𝒜_i, 𝒫_i, ℛ_i, and γ_i.
At each time step t in task i, the agent receives a state s_i,t from the environment and produces an action a_i,t with a stochastic or deterministic policy π.
Then a reward r_i,t=r(s_i,t, a_i,t) from the environment serves as the feedback to the executed action of the agent.
Continual offline RL aims to find an optimal policy that can maximize the discounted return ∑_i^I𝔼_π[∑_t=0^∞γ^tr(s_i,t, a_i,t)] <cit.> on all tasks with previously collected dataset {D_i}_i∈ I.
§.§ Conditional Diffusion Probabilistic Models
In this paper, diffusion-based models are proposed to model the distribution of trajectory τ, where each trajectory can be regarded as a data point.
Then we can use diffusion models to learn the trajectory distribution q(τ)=∫ q(τ^0:K) dτ^1:K with a predefined forward diffusion process q(τ^k|τ^k-1)=𝒩(τ^k;√(α_k)τ^k-1,(1-α_k)I) and the trainable reverse process p_θ(τ^k-1|τ^k)=𝒩(τ^k-1;μ_θ(τ^k,k), Σ_k), where k∈ [1, K] is the diffusion step, √(α_k) and √(1-α_k) control the drift and diffusion coefficients, μ_θ(τ^k)=1/√(α_k)(τ_k-β_k/√(1-α̅_k)ϵ_θ(τ^k,k)), Σ_k=1-α̅_k-1/1-α̅_kβ_kI, and α_k + β_k = 1.
ϵ_θ(τ^k, k) represents the noising model <cit.>.
According to <cit.>, we can train ϵ_θ(τ^k, k) with the below simplified objective
ℒ(θ)=𝔼_k∼ U(1, 2, ..., K), ϵ∼𝒩(0,I), τ^0∼ D[||ϵ-ϵ_θ(τ^k, k)||_2^2],
where k is the diffusion time step, U is uniform distribution, ϵ is multivariant Gaussian noise, τ^0=τ is sampled from the replay buffer D, and θ is the parameters of model ϵ_θ.
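As a rough PyTorch-style sketch of this simplified objective (the noise-prediction network eps_model and the schedule tensor alpha_bar are placeholders of our own, not the authors' implementation; the closed-form marginal of the forward process is used to sample τ^k directly):

import torch
import torch.nn.functional as F

def ddpm_loss(eps_model, tau0, alpha_bar, K):
    # tau0: batch of clean trajectory sequences; alpha_bar: cumulative products of alpha_k, length K
    k = torch.randint(1, K + 1, (tau0.shape[0],), device=tau0.device)   # k ~ U(1, ..., K)
    eps = torch.randn_like(tau0)                                        # eps ~ N(0, I)
    ab = alpha_bar[k - 1].view(-1, *([1] * (tau0.dim() - 1)))
    tau_k = ab.sqrt() * tau0 + (1 - ab).sqrt() * eps                    # forward-diffused sample tau^k
    return F.mse_loss(eps_model(tau_k, k), eps)                         # ||eps - eps_theta(tau^k, k)||^2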
Conditions play a vital role in conditional generation because this method makes the outputs of diffusion models controllable.
We can also use two conditions methods, classifier-guided and classifier-free, to train diffusion models p_θ(τ^k-1|τ^k, 𝒞) <cit.>.
The classifier-guided method separates the training of the unconditional diffusion model and conditional guide and then combines them together, i.e., p_θ, ϕ(τ^k-1|τ^k, 𝒞)∝ p_θ(τ^k-1|τ^k)p_ϕ(𝒞|τ^k).
The corresponding sampling process is p(τ^k-1|τ^k, 𝒞)=𝒩(μ_θ+Σ_k ·∇ log p_ϕ(𝒞|τ), Σ_k).
Compared with classifier-guided, the classifier-free method implicitly builds the correlation between the trajectories and conditions in the training phase by learning unconditional and conditional noise ϵ_θ(τ^k, ∅, k) and ϵ_θ(τ^k, 𝒞, k), where ∅ is usually the zero vector <cit.>.
Then the perturbed noise at each diffusion time step is calculated by ϵ_θ(τ^k, ∅, k)+ω(ϵ_θ(τ^k, 𝒞, k)-ϵ_θ(τ^k, ∅, k)).
In this paper, we adopt the classifier-free guidance due to its simplicity, controllability, and higher performance <cit.>.
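A minimal sketch of the classifier-free perturbed noise (function and argument names are ours; null_cond plays the role of the zero vector ∅):

def guided_eps(eps_model, tau_k, cond, null_cond, k, w):
    # eps_theta(tau^k, null, k) + w * (eps_theta(tau^k, C, k) - eps_theta(tau^k, null, k))
    eps_uncond = eps_model(tau_k, null_cond, k)
    eps_cond = eps_model(tau_k, cond, k)
    return eps_uncond + w * (eps_cond - eps_uncond)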
§.§ Continual Diffuser
In this section, we introduce the Continual Diffuser (CoD), as shown in Figure <ref>, which contains classifier-free task-conditional training, experience rehearsal, and conditional generation for decision-making.
Data Organization.
In RL, we leverage the characteristic of the diffusion model that can capture joint distributions in high-dimensional continual space by formulating the training data from single-step transition to multi-step sequences.
Specifically, we have I tasks, and each task ℳ_i consists of N trajectories {τ_i}_1^N, where each τ_i,n={s_i,t,n, a_i,t,n} is split into equal-length sequences of T_e time steps, as the trajectory lengths may differ across tasks.
In the following parts, we slightly abuse this notation τ_i to represent the sequence data with length T_e sampled from task i 's dataset D_i and τ̂_i to denote the generative sequence.
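A simple NumPy sketch of this splitting step (array names are ours) could be:

import numpy as np

def split_trajectory(states, actions, T_e):
    # states: [T, obs_dim], actions: [T, act_dim]; returns non-overlapping windows of length T_e
    T = min(len(states), len(actions))
    segments = []
    for j in range(T // T_e):
        s = states[j * T_e:(j + 1) * T_e]
        a = actions[j * T_e:(j + 1) * T_e]
        segments.append(np.concatenate([s, a], axis=-1))   # per-step observation-action pairs
    return segments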
Task Condition.
In order to distinguish different tasks, we propose to use environment-related information as the task condition.
For example, in the Ant-dir environment, the agent's goal is to maximize its speed in the pre-defined direction, which is given as the goal in the specific tasks.
So, we propose to use this information as condition 𝒞_task to train our model.
In each diffusion step k, the task condition 𝒞_task will pass through a task embedding function to obtain task embedding, which will be fed into the diffusion model jointly with diffusion time step embedding.
Apart from the task conditions that are used implicitly in the training, we also need explicit observation conditions.
We use the first state s_i,t,n of the T_e length sampled sequence τ_i,n = {s_i,t,n, a_i,t,n, s_i,t+1,n, a_i,t+1,n, ..., s_i,t+T_e-1,n, a_i,t+T_e-1,n} as the conditions.
Then at each diffusion generation step, after we obtain the generated sequences {ŝ_i,t,n, â_i,t,n, ..., ŝ_i,t+T_e-1,n, â_i,t+T_e-1,n}^k, the first observation ŝ_i,t,n is directly replaced by s_i,t,n, i.e., τ̂_i,n^k = {s_i,t,n, â_i,t,n, ..., ŝ_i,t+T_e-1,n, â_i,t+T_e-1,n}^k.
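In code, this inpainting-style conditioning amounts to overwriting the first observation of the generated sequence at every reverse step; a sketch with our own array layout [T_e, obs_dim + act_dim]:

def apply_state_condition(tau_hat_k, s_t, obs_dim):
    # replace the generated first observation with the true current state s_t
    tau_hat_k[0, :obs_dim] = s_t
    return tau_hat_k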
Training Objective.
Following the previous studies of the diffusion model <cit.>, the training and generation for each task i are defined as
ℒ_i(θ)=𝔼_k∼ U(1, K), ϵ∼𝒩(0,I), τ^0_i∼ D_i[||ϵ-ϵ_θ(τ_i^k,𝒞_task i, k)||_2^2],
τ_i^k-1=√(α̅_k-1)β_k/1-α̅_k·τ̅_i+√(α_k) (1-α̅_k-1)/1-α̅_kτ_i^k+√(|Σ_k|)z,
where z∼𝒩(0,I), τ̅_i=τ_i^k-√(1-α̅_k)ϵ̅/√(α̅_k), |Σ_k|=1-α̅_k-1/1-α̅_kβ_k, and ϵ̅=ϵ_θ(τ_i^k, ∅, k)+ω(ϵ_θ(τ_i^k, 𝒞_task, k)-ϵ_θ(τ_i^k, ∅, k)).
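A sketch of one reverse (ancestral) sampling step implementing the update above (treating k as a Python integer and the schedules alpha, beta, alpha_bar as 1-indexed tensors with alpha_bar[0]=1; all names are ours):

import torch

def reverse_step(eps_bar, tau_k, k, alpha, beta, alpha_bar):
    ab_k, ab_km1 = alpha_bar[k], alpha_bar[k - 1]
    tau_bar = (tau_k - (1 - ab_k).sqrt() * eps_bar) / ab_k.sqrt()        # predicted clean sequence
    mean = (ab_km1.sqrt() * beta[k] / (1 - ab_k)) * tau_bar \
           + (alpha[k].sqrt() * (1 - ab_km1) / (1 - ab_k)) * tau_k
    var = (1 - ab_km1) / (1 - ab_k) * beta[k]
    z = torch.randn_like(tau_k) if k > 1 else torch.zeros_like(tau_k)    # no noise at the final step
    return mean + var.sqrt() * z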
Experience Rehearsal.
In this paper, we propose periodic rehearsal to strengthen the knowledge of previous tasks, which mimics the memory consolidation mechanism of hippocampus replay inside biological systems.
When a new dataset D_i of task i arrives, we preserve a small portion ξ of the entire dataset, donated as 𝒟_i.
With such a small training dataset 𝒟_i, most rehearsal-based methods easily overfit these data.
Fortunately, inspired by the distributional robust optimization, increasing the hardness of the samples will hinder memory overfitting.
The discrete type of diffusion process τ^k=√(α_k)τ^k-1+√(1-α_k)ϵ can be reformulated as the corresponding continuous forward process dτ=-1/2β(t)τ dt+√(β(t))dW, where W is the standard Wiener process (a.k.a. Brownian motion).
This process gradually inserts directional noise (i.e., increasing the hardness) to induce transformation from trajectory distribution to Gaussian distribution.
So rehearsal-based diffusers naturally possess the capability of reducing memory overfitting, and the total objective function is
min_∀θ∈Θ[𝔼_τ_j∈ D_jℒ_j(θ,τ_j,𝒞_task j)+𝔼_τ_i∈𝒟_i, i<jℒ_i(θ,τ_i,𝒞_task i)]
In practice, we usually set the rehearsal frequency υ as 2 gradient steps and the portion ξ as 10%.
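A minimal sketch of the rehearsal buffer used above (our own illustration; the released code may organize storage differently):

import random

class RehearsalBuffer:
    def __init__(self, xi=0.1):
        self.xi = xi                  # portion of each finished task's dataset to keep
        self.buffers = []
    def add_task(self, sequences):
        k = max(1, int(self.xi * len(sequences)))
        self.buffers.append(random.sample(sequences, k))
    def sample(self, batch_size):
        j = random.randrange(len(self.buffers))              # pick one previous task uniformly
        return random.choices(self.buffers[j], k=batch_size)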
Architecture.
In this paper, we adopt temporal Unet with one-dimensional convolution blocks as the diffusion model to predict noises.
Specifically, temporal Unet contains several down-sampling blocks, a middle block, several up-sampling blocks, a time embedding block, and a task embedding block.
We train the time embedding block and task embedding block to generate time and task embeddings that are added to the observation-action sequence
τ_i,t:t+T_e-1,n=[ s_i,t,n s_i,t+1,n ... s_i,t+T_e-1,n; a_i,t,n a_i,t+1,n ... a_i,t+T_e-1,n; ].
In the return conditional diffusion models, we replace the task embedding block with the return embedding block.
Also, following the implementation of low-rank adaptation in Natural Language Processing <cit.>, we add LoRA modules to the down-sampling, middle, and up-sampling blocks to construct the LoRA variant CoD-LoRA.
§.§ Conclusion
First of all, to facilitate the development of the continual offline RL community, a continual offline benchmark that contains 90 tasks is constructed based on Continual World and Gym-MuJoCo.
Based on our benchmark, we propose the Continual Diffuser (CoD), an effective continual offline RL method that possesses the capabilities of plasticity and stability through experience rehearsal.
Finally, extensive experiments illustrate the superior plasticity-stability trade-off when compared with representative continual RL baselines.
§ CODE AND DATA AVAILABILITY
The code and data are available in GitHub at <https://github.com/JF-Hu/Continual_Diffuser>.
§ ACKNOWLEDGEMENT
We would like to thank Lijun Bian for her contributions to the figures and tables of this manuscript.
We thank Runliang Niu for his contributions to providing help on the computing resource.
plainnat
§ SUPPLEMENTARY MATERIAL
§.§ Pseudocode of Continual Diffuser
The pseudocode for training is shown in Algorithm <ref>.
First of all, we process the datasets of I tasks before training, including splitting the trajectories into equal sequences and normalizing the sequences to facilitate learning.
As shown in lines 9-24, for each task i, we check the task index in the whole task sequence and sample different samples from the different buffers.
For example, for task i, i>0, we perform experience rehearsal every υ training steps by sampling data from 𝒟_j, j∈{0, ..., i-1}, where j is sampled from U(0, i-1).
Then, the networks ϵ_θ, f_task(ϕ), and f_time(φ) are updated according to Equation (<ref>) and Equation (<ref>).
After training on task i, we preserve a small portion (ξ) of the dataset of buffer D_i as task i's rehearsal buffer.
During the evaluation of multiple tasks (shown in Algorithm <ref>), we successively generate decisions with CoD and calculate the evaluation metrics.
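Condensing the two algorithms into a single Python-style sketch (sample_batch and update_diffuser are user-supplied callables standing in for the actual data loader and gradient step; this is our own illustration of the loop, not the released code):

def train_cod(task_datasets, buffer, sample_batch, update_diffuser, steps_per_task, upsilon=2):
    for i, dataset in enumerate(task_datasets):
        for step in range(steps_per_task):
            batch = sample_batch(dataset)                    # sequences from the current task i
            if i > 0 and step % upsilon == 0:
                batch = batch + buffer.sample(len(batch))    # periodic experience rehearsal
            update_diffuser(batch)                           # one gradient step on the diffusion loss
        buffer.add_task(dataset)                             # keep a portion xi of task i for replay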
§.§ Implement Details
Compute. Experiments are carried out on NVIDIA GeForce RTX 3090 GPUs and NVIDIA A10 GPUs.
Besides, the CPU type is Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz.
Each run of the experiments spanned about 24-72 hours, depending on the algorithm and the length of task sequences.
Hyperparameters.
In the implementation, we select the maximum diffusion steps as 200, and the default structure is Unet.
Then, in order to speed up the generation efficiency during evaluation, we consider the speed-up technique of DDIM <cit.> and realize it in our method, thus accomplishing 19.043x acceleration compared to the original generation method.
The sequence length is set to 48 in all experiments, where a larger sequence length can capture a more sophisticated distribution of trajectories and may also increase the computation burden.
We set the LoRA dimension as 64 for each module of down-sampling, middle, and up-sampling blocks, and the percent of LoRA parameters is approximately 12% in our experiments.
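The acceleration simply strides over the trained diffusion steps at evaluation time, with each retained step applying a DDIM-style deterministic update; a sketch of the strided schedule (our own notation) is:

def strided_schedule(K=200, S=10):
    # keep only S of the K trained reverse steps, e.g. 200 -> 10 as used in our experiments
    stride = K // S
    return list(range(K, 0, -stride))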
§.§ Related Work
Diffusion-Based Models for RL.
Diffusion models have made big progress in many fields, such as image synthesis and text generation <cit.>.
Recently, a series of works have demonstrated the tremendous potential of diffusion-based models in offline RL tasks such as goal-based planning, composable constraint combination, scalable trajectory generation, and complex skill synthesis <cit.>.
For example, <cit.> propose to use the value function as the guide during trajectory generation, effectively reducing the effects of out-of-distribution actions and reaching remarkable performance in offline RL tasks.
Besides, diffusion models can also be used as policies to model the multimodal distribution from states to actions and as planners to perform long-horizon planning <cit.>.
For instance, <cit.> use diffusion models as policies to model the distribution from states to actions, while <cit.> endow diffusion models with the ability to perform planning and data augmentation with different task-specific prompts.
Continual Learning in RL.
Continual learning (CL) aims to solve multi-tasks that come sequentially with explicit boundaries (task-aware CL) or implicit boundaries (task-free CL) and achieve no catastrophic forgetting and good task transferring (i.e., plasticity-stability dilemma) at the same time <cit.>.
Multitask learning methods <cit.> are usually regarded as the upper bound of continual learning.
Existing studies for continual RL can be roughly classified into three categories: Structure-based methods focus on novel model structures such as sub-networks, mixture-of-experts, hypernetworks, and low-rank adaptation <cit.>.
Regularization-based methods propose using auxiliary regularization loss to constrain the policy optimization and avoid catastrophic forgetting during training <cit.>.
Rehearsal-based methods preserve experiences of previous tasks or train generative models that can produce pseudo-samples to maintain knowledge of past tasks <cit.>.
Besides, recent plasticity-preserving studies <cit.> reveal that the plasticity of models can be enhanced by weight re-initialization and noisification when facing the early interactions overfitting within a single task.
Offline RL.
Offline RL mainly focuses on how to train optimal policies with previously collected large datasets without expensive and risky data collection processes <cit.>.
It, however, remains a huge challenge for training when facing the distribution shift between the learned policy and the data-collected policy and the overestimation of out-of-distribution (OOD) actions <cit.>.
To solve these issues, previous studies on offline-RL tasks generally rely on methods from constrained optimization, safe learning, imitation learning, and amendatory estimation <cit.>.
Besides, planning and optimizing in the world model with limited interactions also serves as a promising way to train satisfactory policies <cit.>.
Recently, sequential modeling has been proposed to fit the joint state-action distribution over the trajectories with transformer-based models and diffusion-based models <cit.>.
§.§ Additional Experiments
Offline Continual World Results on CW10.
We report the performance on CW10 in Figure <ref> when the baselines are trained with offline datasets.
The results show that our method (CoD) learns much more efficiently than other baselines when executing the same number of gradient updates.
Besides, we can observe that the performance of generative methods is more effective than non-generative methods, which shows the powerful expressiveness of generative models in modeling complex environments and generating pseudo-samples with high fidelity.
Online Continual World Results on CW10.
Apart from the offline comparison, we also modified the original baselines and conducted experiments on CW10, where several new online baselines were introduced.
Similarly, the results in Figure <ref> also show that our method (CoD) surpasses the baselines by a large margin, illustrating the superiority of CoD.
We do not incorporate several offline baselines trained with generative models into online comparison because the generative process consumes much more time for interaction, which exceeds the tolerable range of training.
These baselines trained with generative models are more suitable for training on offline datasets.
Mixed Dataset Training Analysis.
We can classify the training under the sub-optimal demonstrations into two situations.
You can click here to return to Section <ref> quickly for continual reading of the main body.
The first is learning from noise datasets. In order to simulate the training under the sub-optimal demonstrations, we insert noise into the observations of the current dataset to obtain sub-optimal demonstrations, i.e., o̅ = o + clip(η * 𝒩(0, I), ρ). The larger noise denotes datasets with lower quality. We report the results in Table <ref>. The results illustrate that the performance decreases with the noise increasing, which inspires us to find additional techniques to reduce the influence of the noise on samples, such as adding an extra denoising module before diffuser training.
The second is learning from datasets sampled with mixed-quality policies.
We construct the `medium' datasets on several Continual World tasks (CW4) to show the performance on the mixed-quality datasets, where the trajectories come from a series of behavior policies during the training stage.
With the training stage going, we update the policy network many times, and each gradient update step will be regarded as generating a new behavior policy. Then, the performance of the policy will be improved.
Next, we use the behavior policies whose performance ranges from medium to well-trained performance to collect `medium' datasets, i.e., the `medium' datasets contain unsuccessful trajectories and successful trajectories simultaneously (Refer to Table <ref> for more statistics.).
Based on the mixed-quality CW4 datasets, we adopt IL as the baseline and compare our method with IL.
The corresponding experimental results are shown in Table <ref> (c).
The results show that our method (CoD) can achieve better performance than the baseline in the `medium' dataset quality setting, which shows its effectiveness.
Plasticity Comparison.
In order to compare the plasticity of our method and representative plasticity-preserving methods <cit.>, we conduct the experiments on the Ant-dir environment with task setting as `10-15-19-25', which is the same as the setting in the main body.
The results are reported in Table <ref>, where the final performance means evaluation on all tasks after the whole training on all tasks and the performance gain of plasticity (task-level) is calculated according to mean(P(train15test15) - P(train10test15) + P(train19test19) - P(train15test19) + P(train25test25) - P(train19test25)).
The results illustrate that our model reaches better final performance than PLASTIC and SAM. Besides, in the task-level plasticity performance comparison, our method also obtains a higher score.
Although PLASTIC and SAM do not perform well here, it's worth noting that PLASTIC and SAM are not designed to resolve continual learning under changing tasks but to address early interactions overfitting within a single task.
The granularity of plasticity referred to in CoD is larger than that in PLASTIC and SAM.
Click here to return to Section <ref> quickly for continual reading of the main body.
Parameters Sensitivity on Ant-dir.
In Section <ref>, we conduct the parameter sensitivity analysis on CW to show the effects of rehearsal frequency υ and rehearsal diversity ξ.
We also report the results of parameters sensitivity on Ant-dir in Figure <ref> and Table <ref>, where υ=inf means we do not perform rehearsal during training.
The results show that with the increase of υ, the performance declines because the model can not use previous datasets to strengthen its memory in time.
Efficiency Analysis of Generation Speed.
The generation process of diffusion models is indeed computationally intensive because the mechanism of generation requires multiple rounds to generate a sequence. However, we can draw inspiration from previous studies <cit.> in related domains and accelerate the generation process. For example, we can reduce the reverse diffusion step from 200 to 10. To show the efficiency of accelerating during the generation process, we conduct a comparison of generation speed. We report the results in Table <ref>, where the 200 diffusion steps setting is the original version, and the 10 diffusion steps setting is our accelerated version. In the experiments of our manuscript, we adopt the 10 diffusion steps setting, which improves the sampling speed (19.043×) with a larger margin than the original sampling version. It's worth noting that our implemented accelerate technique can also use other diffusion steps settings, but we find that 10 diffusion steps setting performs well on performance and generation efficiency.
Ablation Study on Mixed Datasets.
In Table <ref>, we report the effects of rehearsal sample diversity ξ on the `medium' datasets. From the results, we can see that increasing the rehearsal sample diversity is beneficial to the performance, which is in line with the experiments in the main body of our manuscript. Besides, the results also show that our method (CoD) can reach a better plasticity-stability trade-off than the baseline in the `medium' dataset quality setting.
Computation Costs Analysis.
In order to show the consumption of computational costs, we report the comparison of computation costs during the training stage in Table <ref>, where we obtain the statistical data with `wandb.'
The results show that increasing the rehearsal samples does not significantly increase computation costs and training time.
§.§ Statistics of Continual Offline RL Benchmarks
To take advantage of the potential of diffusion models, we first collect an offline benchmark that contains dozens of tasks from multiple domains, such as Continual World and Gym-MuJoCo <cit.>.
In order to collect the interaction data, we trained Soft Actor-Critic on each task for approximately 1M time steps <cit.>.
In total, the benchmark contains 90 tasks, of which 88 come from Continual World and 2 come from Gym-MuJoCo.
Specifically, CW <cit.> tasks are constructed based on Meta-World <cit.>.
CW consists of many realistic robotic manipulation
tasks such as Pushing, Reaching, Door Opening, Pick, and Place.
CW is convenient for training and evaluating the abilities of forward transfer and forgetting because the state and action space are the same across all the tasks.
In our benchmark, we collect “expert” and “medium” datasets, where the episodic time limit is set to 200, and the evaluation time step is set to 1M and 0.4M for “expert" and “medium” datasets, respectively.
Thus, we obtain 5000 and 2000 episodes for these two quality tasks, as shown in Table <ref> and Table <ref>, in which we also report the mean success rate of these two qualities dataset.
Besides, we also provide the return information of all datasets in Figure <ref> and Figure <ref>.
Out of these tasks defined in Meta-World, we usually select ten tasks from them as the setting of continual learning, i.e., CW10, and CW20 denotes the setting of two CW10.
Aligning with the traditional definition of various CL settings <cit.>, this benchmark supports constructing task-incremental CORL (TICORL), domain-incremental CORL (DICORL), and class-incremental CORL (CICORL) settings.
Researchers can use these datasets in any sequence or length for CL tasks to test the plasticity-stability trade-off of their proposed methods.
In future expansions of this benchmark, we plan to gather additional datasets, such as `random' and `full' quality datasets, to bolster the training of robust CL agents.
Ant-dir is an 8-joint ant environment.
The different tasks are defined according to the target direction, where the agent should maximize its return with maximal speed in the pre-defined direction.
As shown in Table <ref>, there are 40 tasks (distinguished with “task id”) with different uniformly sampled goal directions in Ant-dir.
For each task, the dataset contains approximately 200k transitions, where the observation and action dimensions are 27 and 8, respectively.
We found that the Ant-dir datasets have been used by many researchers <cit.>, so we incorporate them into our benchmark.
Moreover, we report the mean return information of each sub-task in Table <ref> and Figure <ref>.
As for Cheetah-dir, it only contains two tasks that represent forward and backward goal directions.
Compared with Ant-dir, Cheetah-dir possesses lower observation and action space.
|
http://arxiv.org/abs/2409.03145v1 | 20240905004323 | Exceptional topology in Non-Hermitian Twisted Bilayer Graphene | [
"Yingyi Huang"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"quant-ph"
] |
yyhuang@gdut.edu.cn
School of Physics and Optoelectronic Engineering,
Guangdong University of Technology, Guangzhou 510006, China
Guangdong Provincial Key Laboratory of Sensing Physics and System Integration Applications, Guangdong University of Technology, Guangzhou, 510006, China
§ ABSTRACT
Twisted bilayer graphene has extraordinary electronic properties at the magic angle, along with an isolated flat band. However, the non-Hermitian phenomena in twisted bilayer graphene remain unexplored. In this work, we study a non-Hermitian TBG formed by one graphene layer twisted relative to another layer with gain and loss.
Using a non-Hermitian generalization of the Bistritzer-MacDonald (BM) model, we find that the Dirac cones centered at the K' corner of the moiré Brillouin zone are deformed in the presence of non-Hermiticity. This is different from single-layer graphene with gain and loss, where rings of exceptional points appear at both the K and K' corners of the Brillouin zone.
The coincidence of exceptional rings at the Γ_M point characterizes an “exceptional magic angle", at which the system hosts flat bands with zero energy and finite lifetime. More interestingly, we find that the topological charge in the moiré Brillouin zone is conserved during the expansion and fusion of the exceptional rings, which is absent in two-dimensional systems constrained by the Nielsen-Ninomiya theorem.
These findings can be demonstrated in realistic cold atom and metamaterial systems and will stimulate further study of non-Hermitian phenomena in twistronics.
Exceptional topology in Non-Hermitian Twisted Bilayer Graphene
Yingyi Huang
September 9, 2024
==============================================================
§ INTRODUCTION
Graphene possesses extraordinary electronic and optical properties among two-dimensional (2D) systems <cit.>.
These properties relate to its lattice structure, which is directly interconnected with its wave-function topology. In single-layer graphene, the unusual linear band dispersion, termed Dirac band dispersion, at the K and K' points of the Brillouin zone derives from its honeycomb lattice geometry. The Lorentz-invariant Dirac cones at the K and K' points are related by time-reversal symmetry and thus have opposite Berry phases, which has been confirmed in experiments <cit.>.
When two graphene layers are shifted with each other in rational competing periodicities upon shearing or twisting, they form a moiré pattern <cit.>. One prominent example is a graphene bilayer with a relatively small angle rotation between the layers <cit.>.
The recent studies on twisted bilayer graphene (TBG) focus on the extraordinary electronic properties at the magic angle along with an isolated flat band <cit.>.
It is related to a variety of exotic phrases including quantum anomalous Hall <cit.>, fractional Chern insulators <cit.>, ferromagnetic states <cit.>, nematicity <cit.>, unconventional superconductivity <cit.>.
The topological properties of two-dimensional systems can be generalized to non-Hermitian systems by introducing on-site gain and loss <cit.>. The presence of non-Hermiticity can transform Hermitian degenerate points, such
as a Dirac-like point or Weyl point, into a ring of exceptional points <cit.>. At an exceptional point, not only the real and imaginary parts of the eigenvalues degenerate, but also the eigenvectors coalesce.
Many phenomena are unique to non-Hermitian systems, including non-Hermitian skin effect <cit.> and non-Hermitian topological classifications <cit.>.
Although topological robustness has been shown in one-dimensional moiré lattices under strain, the non-Hermitian effects in twisted moiré systems with a magic angle have been overlooked <cit.>.
In particular, twisted bilayer graphene has extraordinary topological properties. Since the two Dirac points in the moiré Brillouin zone (mBZ) of one valley emanate from different layers, their Berry phases can be identical due to the symmetry of the interlayer hopping term, which is generally not allowed in 2D periodic systems following the Nielsen-Ninomiya theorem <cit.>. This is in contrast to single-layer graphene, in which the Dirac cones in different valleys carry opposite topological charges.
Actually, when the exceptional contours with opposite topological charges merge, the topological charge can be dissipated <cit.>. It is natural to ask how the exceptional geometry appears and develops in a non-Hermitian TBG system.
In this paper, we uncover a new type of twisted band engineering in TBG, with non-Hermitian perturbation on one of the layers.
As shown in Fig. <ref>(a), balanced gain (in bright color) and loss (in light color) are put on the bottom layers (in red and blue, respectively).
The superposition of two graphene layers with a twist angle between them creates moiré patterns. In the mBZ shown in Fig. <ref>(b), the low-energy band structure consists of Dirac cones from the rotated layers located at the K_M and K'_M corners. When the non-Hermitian perturbation is turned on in the bottom layer, Dirac points morph into exceptional rings and the corresponding Dirac cones become exceptional cones, as shown in Fig. <ref>(c). We find that this non-Hermitian system hosts special angles for which multiple exceptional cones coincide at the Γ_M point in the mBZ. We call this phenomenon “exceptional magic". Importantly, our results demonstrate that the total Berry charge is conserved in the mBZ even after the merging of exceptional rings, in stark contrast to non-Hermitian single-layer graphene.
The paper is organized as follows. Section <ref> introduces the non-Hermitian TBG model. Both the real and imaginary parts of the low-energy moiré band spectrum will be explored in Sec. <ref>. In Sec. <ref>, we will use a non-Hermitian tripod model to discuss the exceptional physics. In Sec. <ref>, we calculate the Berry curvature and the corresponding Chern number. Finally, we conclude our results in Sec. <ref>.
§ THE NON-HERMITIAN BILAYER MODEL
We consider balanced gain and loss on the bottom layer of the bilayer system. We can write down a non-Hermitian generalization of the single-valley model for twisted bilayer graphene. The low-energy Hamiltonian for the K valley is:
H=[ h_b(k) T(r); T(r)^† h_t(k) ]
where the top layer Hamiltonian is
h_t=-iv_Fσ_θ/2∇
and the bottom layer Hamiltonian is non-Hermitian
h_b=-iv_Fσ_-θ/2∇+iλ_Vσ_z.
Here, iλ_Vσ_z is a balanced gain and loss imposing on the bottom layer, which induces non-Hermiticity. And σ_θ/2 corresponds to the rotated Pauli matrices e^-iθσ_z/4(σ_x,σ_y) e^iθσ_z/4. T(r)=∑^3_j=1 T_j e^-ik_θq_j·r is the sublattice-dependent moiré potential that couples the two layers. The interlayer coupling is in the form
T_j+1=w_0σ_0+w_1 (σ_x cos jϕ+σ_y sin jϕ)
with ϕ=2π/3 and w_0(w_1) the interlayer coupling strength in the AA (AB) region.
The unit vectors q_1=(0,-1), q_2,3=(±√(3)/2,1/2). k_θ=2k_Dsin(θ/2) is the moiré modulation vector and k_D=4π/(3a_0) is the magnitude of
the Dirac wave vector, where a_0 is the lattice constant of monolayer graphene. The hopping strength w encodes the inter-layer coupling in the AB region.
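Before coupling the layers, it is instructive to note that the decoupled bottom-layer block alone already develops an exceptional ring: in momentum space its eigenvalues are E=±√(v_F^2k^2-λ_V^2), so the spectrum coalesces on the ring v_F|k|=λ_V. A short NumPy check of this statement (illustrative units and parameter values of our own choosing, ignoring the small θ/2 rotation of the Pauli matrices):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
v_F, lam = 1.0, 0.3

def h_bottom(kx, ky):
    # v_F (k_x sigma_x + k_y sigma_y) + i lambda_V sigma_z
    return v_F * (kx * sx + ky * sy) + 1j * lam * sz

for k in (0.1, lam / v_F, 0.5):
    print(k, np.round(np.linalg.eigvals(h_bottom(k, 0.0)), 3))
    # purely imaginary pair inside the ring, coalescing eigenvalues at v_F k = lambda_V, real pair outside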
The generators of the magnetic space group P6'2'2 for the Hermitian twisted bilayer graphene include a C_3z=e^i2π/3σ_z rotation symmetry, a C_2x=τ_xσ_x rotation symmetry, and a C_2zT=σ_x𝒦 symmetry <cit.>. The single-valley Hamiltonian only has the C_3z symmetry, since C_2z and T symmetries play the role of mapping one valley to the other. We can check that the non-Hermitian model preserves the C_3z rotation symmetry.
If there is no AA stacking (w_0=0), the Hamiltonian preserves chiral symmetry 𝒞ℋ(r)𝒞^†=-ℋ(r) under the chiral symmetry operator 𝒞=σ_z in the absence of non-Hermiticity <cit.>.
In the presence of the balanced gain and loss iσ_z term, the chiral symmetry is broken, but a non-Hermitian chiral symmetry is preserved.
In the following numerical calculation, we take ħ v_F/a= 2380 meV, w_1 = 110meV and 2v_0k_D = 19.81eV. They correspond to the first magic angle α≈ 0.586.
§ THE APPEARANCE AND DEVELOPMENT OF EXCEPTIONAL RINGS
Let us investigate the band structure for twist angle θ=1^∘.
In the absence of non-Hermiticity, this is not a magic angle with a flat band.
If the interlayer coupling effect is absent (w_0=w_1=0), the dispersion in the moiré Brillouin zone is equivalent to the folded dispersion in the Brillouin zone of graphene. The Dirac point of the K point in layer 1 is mapped to K_M, while that of the K' point of layer 2 is mapped to K_M'.
In the presence of non-Hermitian perturbation λ_V, the energy dispersion becomes complex. In Fig. <ref>(a), we can see that non-Hermitian perturbation deforms the Dirac cones at K'_M corners. More specifically, the degeneracy point at the K'_M points in the real part of the spectrum morphs into a ring of degeneracy points. This is a stark difference from single-layer graphene with onsite gain and loss. The latter has deformed Dirac cones in both K and K' corners in the Brillouin zone. This difference is due to the fact that K_M and K_M' in non-Hermitian TBG are folded from different graphene layers. The non-Hermitian λ_V can only deform the Dirac cone on the bottom layer.
In particular, the degeneracy of the two lowest energy states leads to the appearance of singularity points in momentum space. At the same time, a ring of degeneracy point appears at the same momentum position in the imaginary part of the spectrum, as shown in Fig. <ref>(b). We found that not only the eigenvalues coalesce but also their corresponding eigenvectors coalesce at this point. This confirms that these singularity points are exceptional points in momentum space. Thus, the Dirac cone is turned into a ring of exceptional points, this is similar to that in non-Hermitian semimetals <cit.>.
At a critical non-Hermitian strength, the exceptional points approach the Γ_M point, where the whole lowest band becomes flat, as shown in Fig. <ref>. Different metrics, including the vanishing of the Dirac velocity, the maximum gap to neighboring bands, and the minimum bandwidth, give the same result.
The value of this critical non-Hermiticity strength λ_Vc depends on the twist angle θ.
§ EXCEPTIONAL CONE
To understand the non-Hermitian physics, we use the simplified tripod model <cit.>, which truncates Eq. <ref> at the first honeycomb shell. The Hamiltonian is written as
H_Tri=[ h_b(k) T_1 T_2 T_3; T_1 h_t1(k-q_1) 0 0; T_2 0 h_t2(k-q_2) 0; T_3 0 0 h_t3(k-q_3) ]
where h_tj for j=1,2,3 are on the top layer and h_b on the bottom layer. Since h_b is non-Hermitian, the Hamiltonian H_tri has biorthogonal eigenvectors. In this system, the left and right eigenvectors are four two-component spinors
Ψ_α^T(k)=(ψ_0(k),ψ_1(k),ψ_2(k),ψ_3(k))_α^T, α=L,R.
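For concreteness, the truncated Hamiltonian can be assembled numerically as an 8×8 matrix; the sketch below (NumPy, dimensionless units, the θ/2 rotation of the Pauli matrices neglected, parameter names our own) is only meant to illustrate the block structure:

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def tripod_hamiltonian(kx, ky, w0, w1, lam, v=1.0, k_theta=1.0):
    phi = 2 * np.pi / 3
    q = k_theta * np.array([[0.0, -1.0], [np.sqrt(3) / 2, 0.5], [-np.sqrt(3) / 2, 0.5]])
    T = [w0 * s0 + w1 * (np.cos(j * phi) * sx + np.sin(j * phi) * sy) for j in range(3)]
    dirac = lambda px, py: v * (px * sx + py * sy)
    H = np.zeros((8, 8), dtype=complex)
    H[:2, :2] = dirac(kx, ky) + 1j * lam * sz                 # non-Hermitian bottom-layer block h_b
    for j in range(3):
        r = slice(2 * (j + 1), 2 * (j + 2))
        H[r, r] = dirac(kx - q[j, 0], ky - q[j, 1])           # top-layer blocks h_tj(k - q_j)
        H[:2, r] = T[j]                                       # interlayer coupling T_j
        H[r, :2] = T[j].conj().T
    return H

# e.g. eigenvalues near the moire Dirac point: np.linalg.eigvals(tripod_hamiltonian(0.02, 0.0, 0.0, 0.11, 0.05))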
Using perturbation theory, one can derive the effective Hamiltonian in the space of ψ_0.
The Hamiltonian can be divided into a momentum-independent part ℋ^(0) and a momentum-dependent part
ℋ^(1)_k. The ψ_jR can be expressed by
ψ_jR=-h_j^-1T_jψ_0R.
Since h_j is independent of the non-Hermiticity, we can expect T_jh_j^-1T_j=0, which is similar to the Hermitian model.
h_0ψ_0R=0.
This form seems to be the same as that of the Hermitian TBG. However, h_0 is in a non-Hermitian form, the ψ_0R is different from ψ_0L. We can check that the biorthorgonal wave functions are normalized as
⟨Ψ_0L|Ψ_0R⟩=1+3(α_0^2+α_1^2).
The effective Hamiltonian matrix to leading order in k is
⟨Ψ^(i)|ℋ^(1)_k|Ψ^(j)⟩=ψ_0^(i)†(-v_F^*σ·𝐤+iλ^*σ_z)ψ^(j)_0
with the renormalized velocity being
v_F^*/v_F=1-3α_1^2/1+3(α_0^2+α_1^2)
and the renormalized non-Hermitian strength being
λ^*/λ=1-3(α_0^2-α_1^2)/1+3(α_0^2+α_1^2).
We can see that aside from a renormalized velocity and non-Hermitian strength, the form of Hamiltonian is identical to the continuum model Hamiltonian of non-Hermitian single-layer graphene.
The form of the renormalized velocity is the same as that of the Hermitian TBG. It decreases with increasing interlayer coupling in AB region α_1.
To see the effect of interlayer coupling on non-Hermitian strength, we should discuss two cases. When α_0=0, λ^* increases with increasing interlayer coupling in AB region α_1. Thus, the ratio λ^*/v_F^* becomes larger as α_1 increases.
When α_0≠0, λ^* also depends on the interlayer coupling in the AA region α_0, and the non-Hermitian effect is more complicated.
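A quick numerical check of the two expressions above (Python, values purely illustrative) shows how the effective ratio λ^*/v_F^* grows as the AB coupling approaches the magic value, since v_F^* vanishes there while λ^* stays finite:

def renormalized(alpha0, alpha1):
    denom = 1 + 3 * (alpha0 ** 2 + alpha1 ** 2)
    v_ratio = (1 - 3 * alpha1 ** 2) / denom                       # v_F*/v_F
    lam_ratio = (1 - 3 * (alpha0 ** 2 - alpha1 ** 2)) / denom     # lambda*/lambda
    return v_ratio, lam_ratio

for a1 in (0.2, 0.4, 0.55):
    vr, lr = renormalized(0.0, a1)
    print(a1, round(vr, 3), round(lr, 3), round(lr / vr, 2))      # lambda*/v_F* increases with alpha_1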
§ BERRY CURVATURE AND CHERN NUMBER
So far, we have shown the existence of exceptional geometries in the non-Hermitian TBG. In this section, we will explore their topological properties numerically.
The topological charge or Chern number of an exceptional contour can be obtained by integrating the Berry connection A(𝐤) along a closed surface containing the exceptional contour
𝒞=1/2π∮_∂ S𝐀(𝐤)· d𝐤= 1/2π∫_SΩ(𝐤)· d𝐒,
where the Berry connection is defined by the eigenvectors as
𝐀^β,β'=i⟨ψ^β(𝐤)|∇_𝐤|ψ^β'(𝐤)⟩.
Here, |ψ^β'(𝐤)⟩ and ⟨ψ^β(𝐤)| can be the left or right eigenvectors. Since previous study has proven that the total topological charge is the same for four possible choices of eigenvectors <cit.>, we will choose β=L and β'=R in the following calculation.
In numerical calculation, the Berry connection is presented by the local Berry curvature Ω(𝐤)=∇_𝐤×𝐀(𝐤), which can be calculated as
Ω(𝐤)^β,β'=lim_q→01/4q^2 Im ln[⟨ψ^β'_𝐤-qx̂-qŷ|ψ^β_𝐤-qx̂+qŷ⟩⟨ψ^β'_𝐤-qx̂+qŷ|ψ^β_𝐤+qx̂+qŷ⟩⟨ψ^β_𝐤+qx̂+qŷ|ψ^β'_𝐤+qx̂-qŷ⟩⟨ψ^β_𝐤+qx̂-qŷ|ψ^β'_𝐤-qx̂-qŷ⟩]
with q being half of the lattice constant.
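A NumPy sketch of this plaquette evaluation with biorthogonal (left/right) eigenvectors stored on a regular k-grid of shape [Nx, Ny, dim] (array layout and normalization conventions are ours):

import numpy as np

def plaquette_curvature(psi_L, psi_R):
    # product of the four link overlaps around each small plaquette; its phase is the Berry curvature
    link = lambda a, b: np.einsum('xyi,xyi->xy', a.conj(), b)
    U1 = link(psi_L, np.roll(psi_R, -1, axis=0))                            # k -> k + q x
    U2 = link(np.roll(psi_L, -1, axis=0), np.roll(psi_R, -1, axis=(0, 1)))  # k + q x -> k + q x + q y
    U3 = link(np.roll(psi_L, -1, axis=(0, 1)), np.roll(psi_R, -1, axis=1))  # -> k + q y
    U4 = link(np.roll(psi_L, -1, axis=1), psi_R)                            # -> back to k
    return np.angle(U1 * U2 * U3 * U4)

# The Chern number then follows as plaquette_curvature(psi_L, psi_R).sum() / (2 * np.pi).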
Figure <ref> shows the Berry curvature in mBZ. We can see that Berry curvatures have sharp peaks around Γ_M-K_M high symmetric lines, which are close to the boundary of exceptional rings. This can be well understood since Berry curvature is the phase of an overlap tracing a small loop in mBZ. On the boundary of exceptional rings, the wave functions vary rapidly and will give a large Berry curvature.
Upon summing the Berry curvature over the mBZ, we can obtain the total Berry charge or Chern number
𝒞=∑_mBZΩ^LR(k).
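As a numerical illustration, the plaquette evaluation described above can be coded compactly. The sketch below is schematic and makes several assumptions not fixed by the text: the Bloch Hamiltonian is supplied as a user-defined function Hfun(kx, ky) returning a small complex matrix, the band of interest is selected by sorting the real parts of the eigenvalues, the left eigenvectors are taken as rows of the inverse of the right-eigenvector matrix (biorthogonal normalization), and the discrete curvature is taken as the phase of the plaquette product divided by its area.

```python
import numpy as np

def band_bra_ket(H, band):
    """Biorthogonal left (bra) and right (ket) eigenvectors of a Bloch matrix."""
    w, V = np.linalg.eig(H)
    order = np.argsort(w.real)          # select bands by the real part of the energy
    V = V[:, order]
    W = np.linalg.inv(V)                # row n of W is the left eigen-bra <phi_n|
    return W[band], V[:, band]

def plaquette_curvature(Hfun, kx, ky, band=0, q=1e-3):
    """Phase of the product of <L|R> overlaps around a small plaquette, over its area."""
    corners = [(kx - q, ky - q), (kx - q, ky + q),
               (kx + q, ky + q), (kx + q, ky - q)]   # same ordering as in the expression above
    pairs = [band_bra_ket(Hfun(*c), band) for c in corners]
    prod = 1.0 + 0.0j
    for i in range(4):
        bra_i, _ = pairs[i]
        _, ket_next = pairs[(i + 1) % 4]
        prod *= bra_i @ ket_next        # biorthogonal link variable
    return np.angle(prod) / (4.0 * q**2)

def chern_number(Hfun, kgrid, band=0):
    """Sum the discrete curvature over a square grid approximating the mBZ."""
    dk = kgrid[1] - kgrid[0]
    total = sum(plaquette_curvature(Hfun, kx, ky, band, q=dk / 2.0) * dk**2
                for kx in kgrid for ky in kgrid)
    return total / (2.0 * np.pi)
```

For the full continuum model the sum should of course run over the actual hexagonal mBZ rather than a square patch; the snippet is only meant to show the structure of the calculation.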
The non-Hermitian perturbation transforms a Dirac point into an exceptional ring. In the mBZ, the exceptional rings originating from the Dirac points appear as three one-third arcs at the different moiré corners. We find that the total Berry charge in the mBZ is 1 throughout the merging of exceptional rings with increasing λ_V (e.g., 0≤λ_V≤0.85 for θ=1). At much larger λ_V, the Berry charge can be a quantized value larger than 1 due to approaching band crossings with the remote bands.
This supports our expectation that the topological charge in the system is preserved in the exceptional contour. Most importantly, this characterizes the topological features of non-Hermitian TBG. This is in contrast to the merging of exceptional contours with opposite topological charges, in which the topological charge is dissipated, leaving a single, uncharged exceptional contour.
§ DISCUSSION AND CONCLUSION
In this paper, we have studied a non-Hermitian generalization of the twisted bilayer graphene system with balanced gain and loss on one of the layers. We found exceptional rings centered at the K' corner of the moiré Brillouin zone in the K(K') valley. This is due to the folding of the Brillouin zone, and it is absent in single-layer graphene. The exceptional rings coincide and form bands whose real parts are flat at a specific non-Hermitian strength, which is denoted as “exceptional Dirac magic". This flat band is characterized by a robust Chern number 𝒞=1 in the moiré Brillouin zone. This is different from single-layer graphene, in which Dirac/Weyl exceptional rings carrying opposite topological charges are coupled and annihilated.
We have introduced a new design concept for twisted moiré systems, in which a certain “exceptional Dirac magic" angle is achieved. Non-Hermitian TBG can be realized in cold atom and metamaterial system. On one hand, due to the great advantage of manipulating atom-atom interaction and light-matter coupling, ultracold atom experiments provide an unprecedented opportunity.
Remarkably, an atomic Bose-Einstein condensate has recently been realized in twisted bilayer optical lattices <cit.>. Also, there are proposals for the photonic analog <cit.> and phononic analog <cit.> of twisted bilayer graphene. In particular, there are demonstrations of optical bilayer photonic crystal devices in the microwave range <cit.> and in the optical frequency range <cit.>. On the other hand, on-site dissipations are well-controlled non-Hermitian terms that can be realized experimentally in many systems <cit.>. In particular, the Weyl exceptional ring has been realized in a photonic experiment <cit.>.
The highlight of this work is the discovery of non-dissipative exceptional topology. The interplay of topology, non-Hermiticity, and moiré physics can open the door to new phenomena. In particular, the reduced kinetic energy of the flat bands in twisted bilayer graphene makes it a correlated system. Non-Hermitian strongly interacting Dirac systems
have recently been investigated <cit.>. The non-Hermitian physics of quantum many-body systems, such as strongly correlated phases in moiré systems, will be a future focus.
Note added. Upon completion of our manuscript, we became aware of a recent paper on non-Hermitian twisted bilayer graphene, in which the non-Hermiticity enters through the hopping amplitude and topological aspects are not discussed.
This work is supported by the National Natural Science Foundation of China (Grant No. 12104099 and No. 12274095) and the Guangzhou Science and Technology Program (Grant No. 2024A04J0272).
Appendix A
|
http://arxiv.org/abs/2409.02981v1 | 20240904180000 | A Semi-Analytical Model for Stellar Evolution in AGN Disks | [
"Alexander J. Dittmann",
"Matteo Cantiello"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE",
"astro-ph.SR"
] |
Alexander J. Dittmann
dittmann@ias.edu
0000-0001-6157-6722]Alexander J. Dittmann
Department of Astronomy and Joint Space-Science Institute, University of Maryland, College Park, MD 20742-2421, USA
Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
0000-0002-8171-8596]Matteo Cantiello
Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
§ ABSTRACT
Disks of gas accreting onto supermassive black holes may host numerous stellar-mass objects, formed within the disk or captured from a nuclear star cluster. We present a simplified model of stellar evolution applicable to these dense environments; our model exhibits exquisite agreement with full stellar evolution calculations at a minuscule fraction of the cost. Although the model presented here is limited to stars burning hydrogen in their cores, it is sufficient to determine the evolutionary fate of disk-embedded stars: whether they proceed to later stages of nuclear burning and leave behind a compact remnant, reach a quasi-steady state where mass loss and accretion balance one another, or whether accretion proceeds faster than stellar structure can adjust, causing a runaway. We provide numerous examples, highlighting how various disk parameters, and effects such as gap opening, affect stellar evolution outcomes. We also highlight how our model can accommodate time-varying conditions, such as those experienced by a star on an eccentric orbit, and can couple to N-body integrations. This model will enable more detailed studies of stellar populations and their interaction with accretion disks than have previously been possible.
§ INTRODUCTION
The accretion disks that fuel active galactic nuclei (AGN) <cit.> may host numerous stars and stellar-mass objects. Stars may form within gravitationally unstable regions of these accretion disks <cit.>, or be captured from nuclear star clusters <cit.>.
Stellar populations in AGN disks may strongly affect the structure and observable characteristics of the accretion disks around supermassive black holes (SMBHs), particularly in the outer regions of the disk <cit.>. These populations may beget both electromagnetic <cit.> and gravitational wave <cit.> transients, and could be responsible for the supersolar metal content typically inferred in quasar broad line regions <cit.>. Closer to home, an accretion disk may have sculpted chemically and dynamically peculiar massive stellar populations in our own galactic center during the most recent accretion episode onto Sagittarius A^* <cit.>.
Although numerous studies have considered the influences of embedded objects on accretion disks <cit.>, time-dependent models coupling accretion disks, stellar populations, and stellar evolution have yet to be constructed. Thus far, the cost of stellar evolution simulations has prohibited their self-consistent inclusion in such studies, although some stellar evolution studies have yielded insight into the typical fates of embedded stars as functions of various disk properties that can be applied to simplified models <cit.>. Our hope is that the model presented here or extensions thereof will enable more realistic modeling of the interplay between stars and accretion disks.
In the following, we present a simple semi-analytical model of stellar evolution within AGN disks, coupling an Eddington standard model to prescriptions for accretion, mass loss, and related processes. We present the resulting set of differential equations in Section <ref>. Section <ref> provides examples of stellar evolution under various conditions, illustrating the close agreement between our model and various stellar evolution calculations. We discuss some potential future applications, model limitations, and potential extensions in Section <ref>, and summarize our work in Section <ref>. An implementation of our model is available at https://github.com/ajdittmann/starsam.
§ SIMPLIFIED STELLAR MODELS
Our model of stars in AGN disks consists of two main components: equations approximating how the structure of a massive star depends on its bulk properties (its mass and composition); and equations describing how the bulk properties of the star evolve over time (through accretion, mass loss, etc.). Concerning the former, we draw on previous calculations of the structure of massive stars presented in <cit.>, which we review in Section <ref>. Concerning the latter, we draw upon the treatments employed in previous full calculations of stellar evolution in AGN disks, such as <cit.>, which we review in Section <ref>.
§.§ Stellar Structure
The fundamental quantities in our stellar models are the total masses of hydrogen, helium, and metals constituting the star (M_X, M_Y, and M_Z), which are used to calculate the total stellar mass and mass fractions
M_*=M_X+M_Y+M_Z,
X_*=M_X/M_*, Y_*=M_Y/M_*, Z_*=M_Z/M_*.
We assume that the star is chemically homogeneous. In nonrotating stars, this may not be satisfied until M_*≳100 M_⊙, at which point the stars become almost fully convective <cit.>. On the other hand, rotating stars are subject to numerous additional instabilities that lead to chemical homogeneity <cit.>. Since stars in AGN disks are likely to spin up as they accrete <cit.>, the assumption of chemical homogeneity is likely to be well-justified even at lower masses. Homogeneity makes these stars particularly well-described by simplified models (as noted in , the structure of stars in AGN disks tends to be well-described by n=3 polytropes). In the following, we describe the Eddington standard model for massive stars <cit.>, drawing heavily from Section 2b of <cit.>.
The average number of nuclei and electrons per baryon within the star, the inverse of the mean molecular weight, is given by
Y_T=μ^-1=1/4(6X_*+Y_*+2).
The photon entropy (s_γ) is then related to the ratio of radiation pressure to gas pressure (σ≡ P_ rad/P_ gas) by
σ = s_γ/(4Y_T)=(β^-1-1),
where β≡ P_ gas/(P_ gas+P_ rad). In these terms, the familiar form of Eddington's quartic equation
1-β=0.003β^4μ^4(M_*/M_⊙)^2
can be rewritten as
M_* = 1.141 s_γ^2(1+4Y_T/s_γ)^3/2 M_⊙.
The stellar luminosity L_* is related to s_γ through the Eddington ratio, Γ≡ L_*/L_ Edd, where
Γ = (4Y_T/s_γ + 1)^-1
L_ Edd≡ 1.2 × 10^38 Y_e^-1 (M_*/M_⊙) erg s^-1,
and where Y_e = (1+X)/2 is the number of electrons per baryon. Notably, the Eddington standard model discussed here yields an appropriate mass-luminosity relation L_*∝ M_*^3 at ∼solar masses and L_*∝ M_* at higher masses, and it is our assumption of uniform composition and efficient mixing that breaks down in low-mass stars.
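As an illustration of how compact this structure model is in practice, the following sketch (written in Python, the language of the public implementation, though all names here are ours) evaluates the relations above for a given mass and composition; the quartic is solved numerically for β.

```python
import numpy as np
from scipy.optimize import brentq

def composition(M_X, M_Y, M_Z):
    """Total mass and mass fractions from the hydrogen/helium/metal masses."""
    M = M_X + M_Y + M_Z
    return M, M_X / M, M_Y / M, M_Z / M

def eddington_structure(M, X, Y):
    """Eddington standard model quantities for a star of mass M (in solar masses).

    Returns sigma = P_rad/P_gas, the photon entropy s_gamma, the Eddington
    ratio Gamma, and the luminosities L_* and L_Edd (erg/s), following the
    relations quoted above.
    """
    Y_T = 0.25 * (6.0 * X + Y + 2.0)            # nuclei + electrons per baryon
    Y_e = 0.5 * (1.0 + X)                       # electrons per baryon
    mu = 1.0 / Y_T
    # Eddington's quartic: 1 - beta = 0.003 beta^4 mu^4 (M/Msun)^2
    beta = brentq(lambda b: 1.0 - b - 0.003 * b**4 * mu**4 * M**2, 1e-12, 1.0)
    sigma = 1.0 / beta - 1.0
    s_gamma = 4.0 * Y_T * sigma
    Gamma = 1.0 / (1.0 + 4.0 * Y_T / s_gamma)   # = sigma / (1 + sigma)
    L_edd = 1.2e38 * M / Y_e
    return sigma, s_gamma, Gamma, Gamma * L_edd, L_edd

# e.g. a 10 Msun star with X = 0.74, Y = 0.24
sigma, s_gamma, Gamma, L_star, L_edd = eddington_structure(10.0, 0.74, 0.24)
```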
In order to employ the mass-loss estimates described in Section <ref>, it is necessary to compute an estimate of the stellar radius. In the (n=3) polytropic approximation, the stellar radius is <cit.>
R_* = 30.4Y_Tσ^1/2(1+σ)^1/2T_c^-1 R_⊙,
where T_c is the central temperature of the star in keV. To estimate the central temperatures, we follow appendix A of <cit.>, assuming that energy generation is solely due to hydrogen burning via the CNO cycle. Specifically, Equations A2 and A3 of <cit.> provide
T_c/2.67 =(1 - 0.021 logA - 0.021 logX_N^-1 + 0.053logT_c)^-3
A = s_γ[1+(ν_p-3)(1+ς)/6]^3/2[XY_e(1+σ^-1)]^-1,
ν_p = 22.42T_c^-1/3 + 7/3
ς = (2s_γ/Y_N+1)^-1.
The above expressions follow from a power-law fit to the CNO nuclear burning energy generation rate, where ν_p is the power-law exponent with respect to temperature in the neighborhood of the central temperature and X_N is the nitrogen mass fraction. The ς term is a small modification to the results of <cit.> accounting for variation of photon entropy within the convective core <cit.>. We approximate the nitrogen mass fraction by scaling the solar <cit.> nitrogen, carbon, and oxygen abundances by Z_*/Z_⊙ and assuming that they are entirely in the form of nitrogen, which is appropriate given the rate-limiting step of the CNO cycle ^14_7N+^1_1H→ ^15_8O+γ.
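The implicit relation for T_c lends itself to a simple fixed-point iteration. The sketch below is only our reading of the expressions above, and two choices in it are assumptions rather than statements from the text: the logarithms are taken as base-10, and the nitrogen number abundance Y_N is approximated as X_N/14; the function names are ours.

```python
import numpy as np

def central_temperature(s_gamma, sigma, X, Y_e, X_N, n_iter=100):
    """Fixed-point iteration for the central temperature T_c in keV (CNO burning).

    Assumptions in this sketch: base-10 logarithms and Y_N = X_N / 14.
    """
    Y_N = X_N / 14.0
    varsigma = 1.0 / (2.0 * s_gamma / Y_N + 1.0)
    T_c = 2.67                                   # initial guess, keV
    for _ in range(n_iter):
        nu_p = 22.42 * T_c**(-1.0 / 3.0) + 7.0 / 3.0
        A = (s_gamma * (1.0 + (nu_p - 3.0) * (1.0 + varsigma) / 6.0)**1.5
             / (X * Y_e * (1.0 + 1.0 / sigma)))
        T_c = 2.67 * (1.0 - 0.021 * np.log10(A)
                      - 0.021 * np.log10(1.0 / X_N)
                      + 0.053 * np.log10(T_c))**(-3.0)
    return T_c

def polytropic_radius(Y_T, sigma, T_c):
    """n = 3 polytropic radius estimate in solar radii (T_c in keV), as quoted above."""
    return 30.4 * Y_T * np.sqrt(sigma * (1.0 + sigma)) / T_c
```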
§.§ Bulk Evolution
The bulk composition of the star evolves due to three processes: accretion (Ṁ_+), mass-loss (Ṁ_-), and fusion. In the following, we have specialized to a few specific forms of each of these, but it is trivial to extend our model to other treatments or approximations.
§.§.§ Accretion
In the following, we have explored four regimes of stellar accretion: spherically symmetric Bondi accretion, tidally-limited accretion, accretion limited by gap formation in an accretion disk, and accretion onto stars with substantial velocity relative to the disk, each of which determine a base accretion rate, Ṁ_0. For a star with zero luminosity, the accretion rate Ṁ_+=Ṁ_0. However, as the star approaches the Eddington luminosity, the accretion rate is reduced by some factor 0≤ f ≤ 1, according to
Ṁ_+ = Ṁ_0f(L_*+L_S),
where L_* is the intrinsic stellar luminosity and L_S≈Ṁ_+GM_*/R_* is the luminosity generated by shocks as accreted material strikes the surface (see Section 4.5 of ).[One notational difference between this work and <cit.> is that <cit.> defined L_* to include both the intrinsic stellar luminosity and the shock luminosity, while here we define these separately. ]^,[The assumptions made in <cit.> may break down when the accretion flow is sufficiently optically thick <cit.>.]
When considering spherically symmetric Bondi accretion, the accretion rate is simply given by <cit.>
Ṁ_0 = πρ c_s R_B^2
R_B ≡ 2GM_*/c_s^2,
where ρ and c_s are the density and sound speed of the AGN disk around the star, and the Bondi radius (R_B) characterizes the distance from the embedded star within which the gravity of the star can overcome the thermal response of the fluid. If modeling the evolution of a star moving with significant velocity relative to the background fluid (v≳ c_s), then the Bondi-Hoyle-Lyttleton accretion rate may be more appropriate, leading to <cit.>
Ṁ_0 = 4πρG^2M_*^2/(c_s^2 + v^2)^3/2.
Within an accretion disk, in the presence of an SMBH, gas simply being within the Bondi radius is not a sufficient condition for it to accrete onto the star, as that gas may still be gravitationally bound to the SMBH. The Hill radius <cit.> defines the region within which gas is bound to the star rather than the SMBH, and can be defined in terms of the angular velocity of the stellar orbit about the SMBH as
R_H = (GM_*/3Ω^2)^1/3.
For a given c_s and Ω, R_B < R_H at low stellar masses, and R_H < R_B at higher masses. When we consider “tidally-limited” accretion, we simply take whichever of these is smaller as the limiting scale, using <cit.>
Ṁ_0 = πρ c_s min(R_B^2,R_H^2).
It is also possible to generalize Equation (<ref>) to account for the stellar Hill radius and the vertical extent of the disk, as in <cit.>, prescribing accretion according to
Ṁ_0 = 4πρ R_ accmin(R_ acc,H)
R_ acc=GM_*/v_ eff^2
v_ eff = (c_s^2 + R_H^2Ω^2 + v^2)^1/2.
If the mass ratio between the star and the SMBH (q≡ M_*/M_∙) is sufficiently large (for a given disk scale height and effective viscosity), then the presence of the star can begin to alter the large-scale structure of the accretion disk. We consider a simple scenario, where this interaction decreases the average surface density of the disk by some factor ξ≡Σ'/Σ_0, the ratio of the perturbed to unperturbed surface densities, which in turn decreases the accretion rate by a factor of ∼ξ <cit.>. We simply approximate that <cit.>
ξ = 1/1+0.04K
K≡ q^2/(h^5α),
although stars may open even deeper (more depleted) gaps in very low-viscosity disks <cit.>. Thus, if we consider “gap-limited” accretion, we take the base accretion rate to be
Ṁ_0 = πρ c_s min(R_B^2,R_H^2)ξ(α,M_∙,h).
Thus, after choosing some approximation for the unmodified (by radiation) accretion rate, we are left to make some approximation for how radiative feedback further limits accretion. Here, we limit ourselves to an approximation introduced in <cit.>, inspired by multidimensional flows where accretion and outflows may coexist. In this case,
f(L_*+L_S) = 1-tanh[(L_*+L_S)/L_ Edd].
Thus, the accretion rate onto the star can be calculated given its mass, luminosity, and some number of extrinsic parameters dependent on calculation one aims to conduct.
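A sketch of how these prescriptions can be combined is given below. It is a simplified illustration rather than the reference implementation: the function names are ours, the velocity-dependent variant of the tidally limited rate (the v_eff expression above) is omitted, and the implicit dependence of the shock luminosity on the accretion rate is handled with a plain fixed-point iteration.

```python
import numpy as np

G = 6.674e-8  # cgs

def mdot_unmodified(M_star, rho, c_s, Omega=None, v=0.0, M_smbh=None, h=None, alpha=None):
    """Base accretion rate Mdot_0 (g/s) before radiative feedback is applied."""
    if Omega is None:
        # Bondi-Hoyle-Lyttleton; for v = 0 this equals pi rho c_s R_B^2
        return 4.0 * np.pi * rho * G**2 * M_star**2 / (c_s**2 + v**2)**1.5
    R_B = 2.0 * G * M_star / c_s**2
    R_H = (G * M_star / (3.0 * Omega**2))**(1.0 / 3.0)
    mdot0 = np.pi * rho * c_s * min(R_B, R_H)**2          # tidally limited
    if M_smbh is not None:                                 # optional gap-opening factor xi
        K = (M_star / M_smbh)**2 / (h**5 * alpha)
        mdot0 /= (1.0 + 0.04 * K)
    return mdot0

def mdot_accreted(mdot0, M_star, R_star, L_star, L_edd, n_iter=40):
    """Solve Mdot+ = Mdot_0 [1 - tanh((L_* + L_S)/L_Edd)] with L_S = Mdot+ G M_*/R_*."""
    mdot = mdot0
    for _ in range(n_iter):                                # simple fixed-point iteration
        L_S = mdot * G * M_star / R_star
        mdot = mdot0 * (1.0 - np.tanh((L_star + L_S) / L_edd))
    return mdot
```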
§.§.§ Mass Loss
As stars approach the Eddington limit (L_*∼ L_ Edd), they are thought to lose appreciable mass via continuum-driven winds <cit.> as radiation pressure overcomes the pull of gravity. In this work, we follow a common approach used in stellar evolution simulations <cit.>, assuming an outflow at the escape velocity (v_ esc=(2GM_*/R_*)^1/2) only when the stellar luminosity is very near the Eddington limit, specifically
Ṁ_- = L/v_ esc[1 + tanh(L_*+L_S-L_ Edd/0.1L_ Edd)].
Although we have only investigated the above prescription in this work, our formalism can accommodate any mass loss prescription that only depends on bulk properties of the star.
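In code, this prescription is nearly a one-liner; the only reading assumption in the sketch below is that the leading L in the expression above denotes the total luminosity L_* + L_S.

```python
import numpy as np

G = 6.674e-8  # cgs

def mdot_wind(M_star, R_star, L_star, L_S, L_edd):
    """Continuum-driven mass-loss rate (g/s) from the tanh prescription above."""
    v_esc = np.sqrt(2.0 * G * M_star / R_star)
    L_tot = L_star + L_S                     # reading L as the total luminosity (assumption)
    return (L_tot / v_esc) * (1.0 + np.tanh((L_tot - L_edd) / (0.1 * L_edd)))
```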
§.§.§ Nuclear Burning
We treat nuclear burning in a greatly simplified manner, approximating energy generation due to hydrogen fusion alone. Unlike full stellar evolution calculations, we cannot extend our simplified model beyond core hydrogen burning. A sufficient approximation is that for each helium atom that is produced, four protons are consumed and 27 MeVs are liberated, contributing to the intrinsic stellar luminosity. Thus, helium and hydrogen are created and depleted at a rate
Ṁ_He,L=-Ṁ_H,L=L_*/27 MeV/4 m_p,
where m_p is the proton mass.
§.§ Piecing the Model Together
To model the evolution of a star within an AGN disk or other dense environment, one must select initial properties of the star, specifically its initial mass, hydrogen abundance, helium abundance, and metal abundance. One must also select at least an ambient density ρ, an ambient sound speed c_s, and the mass fractions of the ambient gas (X, Y, Z) — in principle these could be treated as constants, or functions of time. One must then decide which approximation to make of the stellar environment. The above parameters are sufficient to model spherically symmetric accretion, but if one would like to study stars within accretion disks, one must also chose some angular velocity for the stellar orbit Ω; and possibly a disk aspect ratio h, viscosity parameter α, and SMBH mass M_∙ if one aims to account for gap opening — in our models, we allow Ω and h to be constants or functions of time, but assume α and M_∙ to be constant.
Once these choices have been made, simulating stellar evolution in AGN disks boils down to solving a trio of ordinary differential equations (ODEs) describing how the hydrogen, helium, and metal content of the star evolve over time, and at each step solving a series of nonlinear equations to determine the rate of change of each component. Ultimately, we arrive at the system
dM_H/dt = XṀ_+ - X_*Ṁ_- - L_*/27 MeV/4 m_p
dM_He/dt = YṀ_+ - Y_*Ṁ_- + L_*/27 MeV/4 m_p
dM_Z/dt = ZṀ_+ - Z_*Ṁ_-,
where L_*, Ṁ_-, and Ṁ_+ can be calculated following section <ref> and <ref>. These equations can be solved using standard methods for solving ODEs and a nonlinear one-dimensional root-finding algorithm.
In addition to a set of ODEs and their initial conditions, we must decide when to terminate each integration. We consider three potential outcomes of stellar evolution. First, stars may simply run out of hydrogen fuel, at which point they would go on to later stages of nuclear burning on a comparatively rapid timescale, likely leaving behind some form of compact remnant <cit.>; however, the models presented here are not appropriate to stages of stellar evolution beyond core hydrogen burning, so we terminate each integration if M_H≤0. Additionally, it is possible that stellar accretion and mass loss might balance each other, leading to a quasi-steady “immortal” state <cit.>; the integration of these models typically ends upon reaching a user-imposed time limit. Additionally, it is possible that stars might accrete more quickly than they can thermally adjust; we do not attempt to model the evolution of these stars subsequent to reaching such a condition, although it seems reasonable that these stars might continue to increase their mass without appreciable radiative feedback until they accrete the entirety of the local gas supply <cit.>.
In the following calculations, we have estimated the thermal relaxation timescale of massive stars to be
τ_ KH=3/2GM_*^2/R_*L_*,
appropriate for a star with n=3 polytropic structure. We assume that the star undergoes runaway accretion when τ_ acc≡M_*/Ṁ_+<τ_ KH. However, it would be trivial to incorporate a different runaway condition if desired.
The aforementioned stellar evolution outcomes are illustrated in Figure <ref>, where the higher-density model shown in yellow-green undergoes runaway accretion, the medium-density model shown in blue reaches an immortal state, and the lower-density model runs out of hydrogen fuel.
As a word of caution, in some situations it may be unreasonable to terminate calculations when τ_ acc<τ_ KH without further consideration. For example, for near-solar masses L_*∝ M^3 so that τ_ KH decreases with mass, but if accretion is limited by tidal effects or gap opening, τ_ acc increases with mass. This interplay at low masses might lead to the early termination of a calculation of a star that undergoes a brief period of rapid accretion prior to the accretion timescale increasing. If accretion is limited by both gap opening and tidal effects, the accretion rate (Ṁ_0) itself decreases with stellar mass.
§ EXAMPLES
We present here a number of example calculations that can be compared to existing stellar evolution calculations. Each of these was computed using a publicly available implementation of our model, which can be found at https://github.com/ajdittmann/starsam.[Scripts to reproduce each of our calculations in this paper will be made public upon publication.]
Unless otherwise noted, we use
M_*(t=0)=10M_⊙, X_*(t=0)=0.74, Y_*(t=0)=0.24, X=0.72, Y=0.27, and c_s=10^6 cm s^-1.
In most of the following, we have used the basic RK4(5) integrator <cit.> as implemented in scipy <cit.>; in Section <ref>, when coupling the above differential equations to a short term orbital integration, we use the standard 4th-order method due to <cit.>.
§.§ Tidally Limited Accretion
We begin by comparing our models to existing stellar evolution calculations. In particular, <cit.> investigated a number of approximations for how deviation from spherical symmetry in AGN disks could affect stellar evolution, and found that the tidal limiting described by Equation (<ref>) was often significant and the most widely applicable.
We present in Figure <ref> a number of M_*(t) tracks, paralleling the calculations in <cit.>, although assuming very slightly different mass fractions. We find very good agreement between the results of our simplified model and full stellar evolution calculations. For example, at both Ω=10^-10 s^-1 and Ω=10^-12 s^-1 we find that models transition from exhausting their helium supply to reaching a quasi-steady state at the same ambient density, although the exact shape of the M_*(t) curve deviates slightly in the ρ=10^-16 g cm^-3, Ω=10^-10 s^-1 case.
Overall, given the enormous simplifications of the present models, the level of agreement they achieve with full stellar evolution calculations is extremely encouraging. They seem to reproduce the typical evolutionary outcomes of a given star, as well as their masses, to good precision. For example, the masses of immortal stars in Figure <ref> usually deviate from those in the relevant models of <cit.> by at most a factor of a few.
One perhaps counter-intuitive result displayed in Figure <ref>, particularly in the simulations at Ω=10^-12 s^-1, is that immortal stars embedded in high-density media reach lower masses than stars embedded in comparatively lower-density media. In these cases, the accretion shock luminosity can reach a substantial fraction of the total stellar luminosity, resulting in substantially higher mass loss rates at a given mass. This trend has been observed in previous full stellar evolution simulations <cit.>.
§.§ Gap Opening
The possibility of stars opening gaps in AGN disks and subsequently decreasing the rate of accretion onto embedded stars was discussed in <cit.>, although they did not carry out any calculations since Equation (<ref>) illustrates that gap opening will only be relevant in disks that are either very thin h≲ 10^-3, nearly inviscid α≲ 10^-4, or those orbiting very low-mass SMBHs M_∙≲ 10^6 M_⊙. Although gap opening is more important for intermediate-mass black holes and extremely massive (M_*≳10^4 M_⊙) stars <cit.>, it can also be relevant for stars and stellar-mass black holes around less-massive SMBHs <cit.>.
Figure <ref> illustrates the effects of gap opening explicitly for disks with α=0.1, c_s=10^6 cm s^-1, and Ω = 10^-10 s^-1. In the top panel, showing calculations assuming an SMBH mass of 10^6 M_⊙, the disk aspect ratio h=H/r strongly affects the evolution of embedded stars. In the h=10^-3 case, accretion is entirely inhibited, since initially K=10^6. In the h=10^-2 case, K was initially ∼10, so accretion was not suppressed until the stars reached somewhat higher masses. On the other hand, in h=0.1 disks, stellar evolution was unaffected.
The bottom panel of Figure <ref> illustrates the same calculations for a more massive SMBH. In this case, the h=0.1 and h=10^-2 disks lead to nearly indistinguishable stellar evolution, while accretion onto stars in the h=10^-3 case is heavily suppressed. Intriguingly, in the ρ=10^-17 g cm^-3 case, gap opening actually leads to a higher-mass immortal star by decreasing the accretion rate as discussed in Section <ref>. It is conceivable that gap opening might lead to disparate stellar populations around low- and high-mass SMBHs. However, we leave a detailed investigation of this and its observational consequences for future studies.
§.§ The Critical Density to Sustain Immortal Stars
One of the questions we hope to answer with our models is what environmental conditions lead to which stellar evolutionary outcomes — what sort of disk parameters should lead to stars reaching quasi-steady states, which should lead to runaway accretion, and which should produce stars poised to undergo supernovae? Here, we examine the boundary between stars that reach a quasi-steady state and those that run out of hydrogen fuel. This depends on whether or not the stellar nuclear burning timescale can become shorter than the accretion timescale prior to accretion and mass loss achieving a quasi-steady state (see Figure 1 of ). Figure <ref> collects calculations determining this critical density in a number of scenarios using 25 bisection iterations at each point.
The top panel of Figure <ref> examines the effect of the background helium abundance of the AGN disk on the critical density for immortal stars undergoing spherically symmetric (Bondi) accretion. Here, because stars with higher mean molecular weights tend to be more luminous at constant mass than those with lower mean molecular weights, stars that accrete material with a higher helium abundance will have shorter nuclear burning timescales at a given mass, requiring higher densities to achieve a short enough accretion timescale to sustain a star in a quasi-steady state. The results of this calculation agree very closely with the analogous stellar evolution calculations presented as the orange curve in Figure 5 of <cit.>.
The middle panel of Figure <ref> plots the critical density as a function of angular velocity and sound speed for stars undergoing tidally-limited accretion. At sufficiently low angular velocities, accretion occurs in a spherically symmetric manner, requiring much higher densities at high sound speeds, since R_B∝ c_s^-2. However, at higher angular velocities, the accretion rate is proportional to the sound speed, so lower densities are required to sustain immortal stars at fixed Ω. The orange curve, for c_s=10^6 cm s^-1, agrees very well with the analogous result based on stellar evolution calculations (Figure 8 and Equation 22 of ) in the Ω≳10^-11 s^-1 regime where those simulations were conducted.
The bottom panel of Figure <ref> shows the critical density as a function of disk aspect ratio and viscosity for a star orbiting a 10^8 M_⊙ SMBH at Ω=10^-10 s^-1 within a c_s=10^6 cm s^-1 disk. The values of α chosen here roughly bracket those used to model compact object accretion disks <cit.> and the effective values of α derived from global simulations of the magnetorotational instability <cit.>. At this SMBH mass, gap opening makes it more difficult to sustain stars in a quasi-steady state in very thin disks (h<10^-2), although it is still quite possible.
§.§ A Star on an Eccentric Orbit
The model presented here should facilitate coupled N-body and stellar evolution calculations. To illustrate this capability, we study here the evolution of a 2-body star-SMBH system and a toy disk model.
Here, we assume an initially 10 M_⊙ star on a prograde eccentric (e=0.4) orbit around a 10^7 M_⊙ SMBH with a period of ∼ 1 Myr. The star moves through an isothermal Keplerian accretion disk with c_s=10^5 cm s^-1 and a density profile given by
ρ(r) = ρ_0(r/r_0)^-1,
normalized to 3×10^-13g cm^-3 at 10 AU.
Figure <ref> illustrates the results of an example calculation over four orbital periods under the assumptions of Equations (<ref>) and (<ref>), where we have neglected the change in stellar velocity due to accretion or drag forces, which over this dynamically short integration have little effect. In this case, the accretion timescale is shorter than the orbital timescale and the star essentially moves between two quasi-steady states of differing mass. Notably, in a multi-body system, scattering events could easily drive stars into regions where their evolution is irrevocably altered.
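For completeness, the time-varying ambient density seen by such a star can be generated with a few lines of code. The sketch below solves Kepler's equation for the separation along the e = 0.4 orbit and evaluates the toy ρ ∝ r^-1 profile above; the semi-major axis is derived from the quoted ∼1 Myr period, and all names are ours.

```python
import numpy as np
from scipy.optimize import brentq

G, MSUN, AU, YR = 6.674e-8, 1.989e33, 1.496e13, 3.156e7   # cgs

def separation(t, a, e, M_smbh):
    """Star-SMBH separation (cm) at time t from Kepler's equation M = E - e sin(E)."""
    P = 2.0 * np.pi * np.sqrt(a**3 / (G * M_smbh))
    mean_anom = 2.0 * np.pi * (t % P) / P
    E = brentq(lambda E: E - e * np.sin(E) - mean_anom, 0.0, 2.0 * np.pi)
    return a * (1.0 - e * np.cos(E))

def ambient_density(r, rho0=3e-13, r0=10.0 * AU):
    """Toy disk profile rho(r) = rho0 (r / r0)^-1."""
    return rho0 * (r / r0)**-1.0

M_smbh = 1e7 * MSUN
a = (G * M_smbh * (1e6 * YR / (2.0 * np.pi))**2)**(1.0 / 3.0)   # ~1 Myr orbital period
times = np.linspace(0.0, 4e6 * YR, 400)                         # four orbital periods
rho_seen = [ambient_density(separation(t, a, 0.4, M_smbh)) for t in times]
```

In a coupled N-body calculation, the same density (and the relative velocity) would simply be re-evaluated at each step of the stellar-evolution integration.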
§.§ Disk Models
Our model will enable more precise feedback between models of AGN disks and the stellar populations they harbor. To illustrate this possibility, albeit without incorporating the effects of stellar feedback on the disk, we show in Figure <ref> the stellar evolutionary outcomes of a set of 8×10^5 calculations; each simulation assumed the star to reside at a fixed location in a set of <cit.> disk models calculated in <cit.>, allowed a maximum duration of 10^7 years, and assumed tidally-limited accretion (without considering gap opening). We classify stars as `runaway' (τ_ acc<τ_ KH), `immortal' (reaching a quasi-steady state), `massive' (achieving a mass greater than 8 M_⊙ and accreting at too slow a rate to achieve a quasi-steady state), and `ordinary' otherwise. A shorter calculation, effectively a smaller upper limit on the AGN disk lifetime, would primarily limit the number of higher-mass stars. The calculations in this subsection assumed an initial stellar mass of M_*=M_⊙, which greatly increased the initial accretion timescale compared to the calculations in prior sections.
The same general trends occur at each SMBH mass, although the extent of each region varies with mass. Gas densities are typically very low at large radii, usually resulting in very low accretion rates that do not affect stellar evolution significantly. Moving inwards, gas densities increase and the accretion timescale can become comparable to or shorter than the initial stellar nuclear burning timescale, and stars grow massive. This trend continues, and stars eventually accrete quickly enough to reach quasi-steady states or undergo runaway accretion. However, at smaller radii, the gas density varies non-monotonically, while the disk temperature increases and tidal forces from the SMBH become stronger, limiting accretion.
Our model predicts slightly fewer massive and immortal stars than the analytical calculations of <cit.> but a similar number of stars undergoing runaway accretion. This is most likely because of the slightly different criteria used to differentiate between `ordinary' and `massive' stars: <cit.> compared the initial accretion timescale to the expected disk lifetime, in contrast to our cut based on the final stellar mass. Notably, more massive stars have shorter accretion timescales, so these results depend on our choice of initial stellar masses.
§ DISCUSSION
The models presented in this work, thanks to their close agreement with full stellar evolution simulations at a minuscule fraction of the cost, will enable new avenues of research detailed below, which we hope will facilitate rigorous comparisons between theory and observation. Specifically, these models should make it possible to carry out time-dependent studies of stellar populations in AGN disks with minimal sacrifices compared to full stellar evolution calculations. Furthermore, the model presented here may be extended in a number of ways, to broaden its applicability to other stages of stellar evolution, or to incorporate other physical effects.
However, when applying the present models, one should keep in mind some of the assumptions used to derive them and the resulting model limitations, which we collect below.
§.§ Applications
The model developed here should make it possible to conduct detailed time-dependent studies of stellar populations in AGN disks, circumventing the cost of full stellar evolution simulations. For example, numerous studies have coupled N-body or smoothed-particle-hydrodynamics codes to either full <cit.> or approximate <cit.> stellar evolution. Because the model developed here is a relatively straightforward ordinary differential equation, it should be straightforward to couple it to N-body calculations. Presumably, an accretion prescription along the lines of Equation (<ref>) or Equation (<ref>) would be appropriate for such studies.
Because the model presented here self-consistently predicts stellar luminosities and mass loss rates given local disk properties, it should improve models of stellar feedback on AGN disks <cit.>. It may also be possible to incorporate these stellar models into time-dependent disk model calculations <cit.>, or one-dimensional Monte Carlo calculations similar to those used to predict stellar-mass black hole merger rates in AGN disks <cit.>.
§.§ Caveats
The models presented here rely on numerous assumptions that may cause their predictions to deviate from reality. Many of these, such as the highly approximate prescriptions for how stars accrete and lose mass, and how the accreting matter interacts with a near-Eddington radiation field, follow from the necessity of describing intrinsically three dimensional and turbulent effects within a simplified ODE model. Such ad hoc assumptions are also shared by full stellar evolution calculations; although such assumptions cannot be circumvented, their effects on predictions can be gauged through parameter studies, which are far more efficient using the model presented here.
Compared to full stellar evolution calculations, the simplified model presented in Section <ref> has primarily sacrificed spatial and chemical fidelity. The assumption of CNO-dominated burning limits its applicability to stars more massive than the Sun and with appreciable metallicity (making them ill-suited to studying stars in the very early Universe). The assumption of an n=3 polytropic structure is quite suitable for stars embedded in AGN disks <cit.>, but is nevertheless a limitation. The assumption of chemical homogeneity is not reasonable for stars of all masses: below M≲100M_⊙, nonrotating stars still possess a radiative envelope, which may preclude freshly accreted material from mixing into the stellar core <cit.>. However, stars accreting from AGN disks are likely to reach substantial rotation rates <cit.>, at which point rotation-induced mixing should lead to chemical homogeneity at virtually all stellar masses <cit.>.
§.§ Extensions
Many of the model components presented here, such as the recipes for the accretion rate onto stars given external accretion disk conditions, the interplay between near-Eddington radiation fields and accretion, and continuum-driven mass loss, are quite flexible. Similarly one may choose to implement different criteria for runaway accretion. For example, in the theoretical study of early-Universe supermassive black hole formation, some have considered the balance between the accretion and Kelvin-Helmholtz timescales of the stellar envelope specifically, as opposed to the global timescales employed here <cit.>.
Beyond adjusting the components already present within the model, it is possible to incorporate other considerations entirely. For example, <cit.> has suggested modifications to the stellar accretion rate applicable when the accretion flow onto the star is very optically thick. Incorporating an accretion rate along the lines suggested in Section 3.2 of <cit.>, suitably modified when R_H<R_B as discussed in Section <ref> <cit.>, would be a straightforward way to extend the applicability of our model to other regimes. Judging by Figure <ref>, limiting the stellar accretion rate has the potential to substantially increase the number of massive and immortal stars within the disk.
One natural extension of the models presented here is to explicitly account for stellar angular momentum. Thanks to the background shear flow of AGN disks, accretion may spin stars up to considerable angular velocities. Prescriptions for these effects have been incorporated into previous stellar evolution calculations <cit.>, which would be straightforward to incorporate into the semi-analytical model presented here by introducing an additional differential equation for the total stellar spin. One crucial ingredient for such considerations is the stellar moment of inertia, which for the Eddington standard model discussed here is approximately
I_*≈ 7.529×10^-2 M_*R_*^2.
One of the limitations of the model presented in this work is its restriction to burning via the CNO cycle. At least under the assumption of chemically homogeneous evolution <cit.>, which seems fairly applicable to the evolution of stars in AGN disks <cit.>, it may be reasonable to assume energy generation due to triple-α burning after hydrogen exhaustion. Such considerations might improve the estimates made by these models of the amount of metals ejected into the ambient medium. However, modeling later stages of stellar evolution would require evolving the abundances of more elements, at which point the costs would likely outweigh the potential benefits.
Moreover, the time a star spends burning heavier elements is significantly shorter than the time it spends burning hydrogen, which usually accounts for 90% of the stellar lifetime. Later stages of burning make up a smaller fraction of the lives of AGN stars, where accretion and mixing result in relatively longer hydrogen burning phases.
§ CONCLUSIONS
In this work, we have combined a simple model of stellar structure, an Eddington standard model <cit.>, with approximations for the interplay between stellar radiation, accretion, and mass loss and AGN disks employed in previous stellar evolution calculations <cit.> to yield a simple model of stellar evolution in AGN disks or other dense environments. As shown in Section <ref>, our model very closely reproduces the results of full stellar evolution simulations, typically at less than a thousandth of the cost, although the reduced model is only applicable to core hydrogen burning via the CNO cycle in its present form. We hope that our distillation of the equations of stellar evolution in AGN disks to a trio of ordinary differential equations will enable future studies to investigate the interplay between stellar populations and their environments in much more detail than had been possible before now.
§ SOFTWARE
<cit.>, <cit.>, scipy <cit.>
§ ACKNOWLEDGMENTS
AJD is grateful for the hospitality of the Center for Computational Astrophysics throughout a visit during which this work was initiated.
The Center for Computational Astrophysics at the Flatiron Institute is supported by the Simons Foundation.
AJD gratefully acknowledges support from LANL/LDRD under project number 20220087DR, and NASA ADAP grants 80NSSC21K0649 and 80NSSC20K0288. The LA-UR number is LA-UR-24-28455.
|
http://arxiv.org/abs/2409.03582v1 | 20240905143725 | Exploring the dynamic rotational profile of the hotter solar atmosphere: A multi-wavelength approach using SDO/AIA data | [
"Srinjana Routh",
"Bibhuti Kumar Jha",
"Dibya Kirti Mishra",
"Tom Van Doorsselaere",
"Vaibhav Pant",
"Subhamoy Chatterjee",
"Dipankar Banerjee"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Rotational profile of the solar atmosphere
S. Routh et al.
Dipankar Banerjee
dipu@aries.res.in
0009-0008-5834-4590]Srinjana Routh
Aryabhatta Research Institute of Observational Sciences, Nainital-263002, Uttarakhand, India
Mahatma Jyotiba Phule Rohilkhand University, Bareilly-243006, Uttar Pradesh, India
0000-0003-3191-4625]Bibhuti Kumar Jha
Southwest Research Institute, Boulder, CO 80302, USA
0009-0003-1377-0653]Dibya Kirti Mishra
Aryabhatta Research Institute of Observational Sciences, Nainital-263002, Uttarakhand, India
Mahatma Jyotiba Phule Rohilkhand University, Bareilly-243006, Uttar Pradesh, India
0000-0001-9628-4113]Tom Van Doorsselaere
Centre for mathematical Plasma Astrophysics, Mathematics Department, KU Leuven, Celestijnenlaan 200B
bus 2400, B-3001 Leuven, Belgium
0000-0002-6954-2276]Vaibhav Pant
Aryabhatta Research Institute of Observational Sciences, Nainital-263002, Uttarakhand, India
0000-0002-5014-7022]Subhamoy Chatterjee
Southwest Research Institute, Boulder, CO 80302, USA
0000-0003-4653-6823]Dipankar Banerjee
Aryabhatta Research Institute of Observational Sciences, Nainital-263002, Uttarakhand, India
Indian Institute of Astrophysics, Koramangala, Bangalore 560034, India
Center of Excellence in Space Sciences India, IISER Kolkata, Mohanpur 741246, West Bengal, India
§ ABSTRACT
Understanding the global rotational profile of the solar atmosphere and its variation is fundamental to uncovering a comprehensive understanding of the dynamics of the solar magnetic field and the extent of coupling between different layers of the Sun. In this study, we employ the method of image correlation to analyze the extensive dataset provided by the Atmospheric Imaging Assembly of the Solar Dynamic Observatory in different wavelength channels. We find a significant increase in the equatorial rotational rate (A) and a decrease in absolute latitudinal gradient (|B|) at all temperatures representative of the solar atmosphere, implying an equatorial rotation up to 4.18% and 1.92% faster and less differential when compared to the rotation rates for the underlying photosphere derived from Doppler measurement and sunspots respectively. In addition, we also find a significant increase in equatorial rotation rate (A) and a decrease in differential nature (|B| decreases) at different layers of the solar atmosphere. We also explore a possible connection from the solar interior to the atmosphere and interestingly found that A at r=0.94 R_⊙, 0.965 R_⊙ show an excellent match with , and , respectively. Furthermore, we observe a positive correlation between the rotational parameters measured from , , and with the yearly averaged sunspot number, suggesting a potential dependence of the solar rotation on the appearance of magnetic structures related to the solar cycle or the presence of cycle dependence of solar rotation in the solar atmosphere.
§ INTRODUCTION
Rotation is a fundamental aspect in the pursuit of a comprehensive understanding of our nearest star, the Sun. The study of solar rotation has been a persistent topic in solar physics since the 17th century and has become increasingly important in recent years due to its strong connection with the solar magnetic field <cit.>. Early studies on the differential rotation in the photosphere of the Sun relied on tracking of prominent photospheric magnetic features called sunspots <cit.>, which allowed for the measurement of photospheric differential rotation in the form of <cit.>,
Ω= A + Bsin^2θ + C sin^4θ,
where θ is the latitude, A is the equatorial rotation rate, and B and C are the coefficients of a quadratic expansion in sin^2θ, often physically interpreted as latitudinal gradients <cit.>.
In the past century, advances in measuring techniques and instruments have significantly improved the accuracy of sunspot tracking <cit.>, and have also led to the development of new measurement techniques such as spectroscopy <cit.>. Furthermore, extensive research, in conjunction with the more recent field of helioseismology <cit.>, has enabled us to gain a comprehensive understanding of the rotational profile of the Sun till photosphere, including its variations with depth. However, a complete understanding of the rotational profile of the Sun above the photosphere and its variation with temperature (or height) remains elusive.
Initial investigations into the rotational profile of the higher solar atmosphere, where magnetic field dominates the dynamics <cit.>, suggested a faster rate of rotation than the photosphere <cit.>. These findings were contradicted by the subsequent studies that suggested a rotational profile of different parts of the solar atmosphere to be similar to that of the photosphere or sunspots, if not even slower <cit.>. Studies also made efforts to utilise higher atmospheric features like filaments <cit.>; coronal bright points <cit.>; coronal streamers <cit.>; magnetic loops <cit.>; coronograph images <cit.>; Ca^+ network and plages <cit.>; soft X-ray observation <cit.>, wavelength bands like , , , etc. <cit.> and radio flux information <cit.> to obtain the rotational profile of the different layers of hotter solar atmosphere. Additionally, some studies reported an increase in rotation rate with temperature/height <cit.>, while others found contrary results <cit.>. These studies used various methods, including the tracer method <cit.>, periodogram <cit.>, auto-correlation<cit.> and cross-correlation method <cit.> to obtain diverse results, which have been unable to resolve the problem of atmospheric solar rotation, persisted for over a century.
Studies akin to that of <cit.> have attributed these reported differences in the obtained rotational profiles to the nature of the tracer chosen. However, other studies, such as those by <cit.>, attributed the probable cause to the characteristics of the data selected for analysis. Additionally, several studies have also explored the correlation between the solar cycle and the rotational profile of the solar corona and transition region <cit.> in order to explore the role of solar magnetic activity in driving the rotation of the solar atmosphere. The reported link in their respective findings suggests a cyclic behaviour in equatorial rotation and a differential nature in these layers, similar to the solar cycle but with a lag <cit.>. However, this pursuit was also riddled with contradictions, as other studies <cit.> reported finding no such significant variation in the rotation rate of the solar chromosphere <cit.> and corona with the solar cycle. The contrasting findings from these studies have been explored through various theoretical and analytical perspectives. Among those perspectives, the potential connection between the solar interior and the solar atmosphere through magnetic fields has been consistently proposed in many studies to resolve some of these perplexing results <cit.>.
Despite utilizing various methods and datasets, a comprehensive understanding of the global rotational profile of the solar atmosphere above the photosphere and how it varies across different layers remains elusive due to the diverse results obtained. In an attempt to address this gap, this study adopts a more focused approach by utilizing a single tracer-independent method, that is, the method of image correlation, to analyze the extensive dataset provided by the Atmospheric Imaging Assembly (AIA) of the solar Dynamic Observatory (SDO) from the period of 2010–2023. Additionally, we utilize the internal rotation rates derived using helioseismology from <cit.>, photospheric rotation rate using sunspot <cit.>, and chromospheric rotation rate using chromospheric plage <cit.>, to connect the global variation of the solar differential rotation profile from subsurface regime to the atmosphere.
In <ref>, we will discuss the specific details of the dataset used; <ref> will discuss the modifications made to the method initially proposed by <cit.> for this study before moving on to <ref> and <ref> where results obtained from the analysis will be discussed. The final <ref> will summarize the study and highlight its key conclusions.
§ DATA
The Atmospheric Imaging Assembly <cit.> on the Solar Dynamics Observatory <cit.> captures data across multiple extreme ultraviolet (EUV) and ultraviolet (UV) wavelengths. By employing narrow-band imaging in ten specific temperature-sensitive wavelength channels, such as Fe xviii (94 Å), Fe viii, xxi (131 Å), Fe ix (171 Å), Fe xii, xxiv (193 Å), Fe xiv (211 Å), He ii (304 Å), and Fe xvi (335 Å), the AIA probes the solar atmosphere at different temperatures ranging from ≈ 10^4 K to 10^7 K. The AIA observes regions of solar atmosphere starting from the photosphere and above, extending through the chromosphere, transition region, and lower corona, with a pixel scale of 0.6/pixel. In addition, one of the telescopes of the AIA observes in C iv line near 1600 Å and the nearby continuum at 1700 Å as well as in the visible continuum at 4500 Å <cit.>. The AIA has been providing data from May 2010 to the present, covering solar cycle 24 and ongoing cycle 25.
For this study, we primarily utilize data from seven different wavelength channels, namely , , , , , and from the period of 2010–05-13 to 2023–08–30 at a cadence of 6 hours. This was done to ensure that only features with a lifespan longer than 6 hours contributed to the analysis while still providing sufficient data for robust statistical analysis. The 94 Å band was excluded from the analysis due to its poor signal-to-noise ratio in the low-temperature regime <cit.>. The initial dataset, classified as Level 1, is obtained from the Joint Science Operations Center[AIA data can be downloaded from http://jsoc.stanford.edu/ajax/exportdata.htmlhere] <cit.> and converted to level 1.5 using Interactive Data Language (IDL) version of aia_prep.pro, available under AIA/SolarSoft <cit.>. This step aligns the solar north with that of the image and applies the necessary adjustments required to make the plate scale consistent across all wavelength bands <cit.>. This Level 1.5 data from 2010–05-13 to 2023–08–30 was used to get the rotation profile of the Sun across all the aforementioned wavelengths.
§ METHODOLOGY
§.§ Pre-Processing
The hotter solar atmosphere is ubiquitously populated with small-scale features that are often short-lived and can undergo significant changes within a very short span of time <cit.>. Since the method of image correlation depends only on pixel-specific intensities in consecutive images, such rapid changes in the small-scale structures negatively affect
the correlation coefficient, thereby affecting our analyses. Therefore, the data were smoothed by convolution with a Gaussian kernel to remove such small-scale features and minimize their contribution. The size of the Gaussian kernel (σ=5) was chosen keeping in mind the angular size of the small-scale features <cit.>.
Additionally, this procedure serves as a step to minimize the random noise and help us to improve the signal-to-noise ratio of large-scale structures <cit.>, see <ref> for a representative example.
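The smoothing step itself is straightforward. The analysis in this work was carried out with IDL/SolarSoft, but an equivalent operation in Python might look like the following sketch (the file name is purely illustrative):

```python
import sunpy.map
from scipy.ndimage import gaussian_filter

aia = sunpy.map.Map("aia_lev1.5_0171_example.fits")        # a level 1.5 AIA image
smoothed = sunpy.map.Map(gaussian_filter(aia.data.astype(float), sigma=5),
                         aia.meta)                          # sigma = 5 pixels, as adopted here
```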
§.§ Method of Image Correlation
After applying a Gaussian smoothing filter, we utilized the image correlation technique similar to <cit.> to determine the rotation rate in different latitude bands. The image correlation method utilizes the two-dimensional (2D) cross-correlation technique to determine the offset between two images. This method has been previously suggested to focus on the rotation of the magnetic features as has been discussed in <cit.> and <cit.>. The method is briefly outlined below, but for a detailed discussion, the reader is encouraged to refer to <cit.>.
We project the full-disk AIA data to a heliographic grid of size 1800 pixels × 1800 pixels (0.1^∘/pixel in latitude and longitude) using the near-point interpolation; see <ref>(a), similar to the process demonstrated in <cit.>. These projected images are then divided into overlapping bins of 15^∘, each separated by a 5^∘ stride e.g., 0^∘-15^∘, 5^∘-20^∘ etc. (see <ref>). The choice of 15^∘ bin is made to minimize the impact of any partially remaining extended features <cit.> and improve the cross-correlation coefficient. Furthermore, the overlapping bins are chosen to ensure a sufficient number of latitudinal bands were probed. These bins are selected over the span of ± 60 (in the case of , , and ), ± 55 (in the case of , ) and ± 45 (in the case of and ) in latitude (θ) and ± 45^∘ in longitude (ϕ). These multiple latitudinal extents are selected to take into account the presence of most of the large-scale features across different wavelength channels e.g., active regions, large-scale coronal bright points (CBPs) within ± 45, plages within ± 55. Additionally, these limits also serve to reduce the projection effects at higher latitudes (θ>± 60) and near the limb <cit.>. The latitude of the bin is assigned as the centre of the selected bin, e.g., for 0^∘-15^∘ it is 7.5^∘. Subsequently, two bins (say B1 and B2) of the same latitudinal extent from consecutive images (separated by 6 hrs in time) are used to calculate the 2D cross-correlation function by shifting B2 with respect to B1 for the set of Δϕ∈ [ ϕ_0-3^∘, ϕ_0+3^∘] in longitude and Δθ∈ [θ_0-1^∘, θ_0+1^∘] in latitude direction, where ϕ_0 is the expected longitudinal shift estimated based
on the photospheric rotation rate <cit.> and θ_0 is taken as 0. Finally, the Δϕ and Δθ are identified by maximizing the 2D cross-correlation function[The image cross-correlation was performed using the correl_images.pro (https://hesperia.gsfc.nasa.gov/ssw/gen/idl_libs/astron/image/correl_images.pro) and corrmat_analyze.pro (https://hesperia.gsfc.nasa.gov/ssw/gen/idl_libs/astron/image/corrmat_analyze.pro) routines available in the Solar SoftWare library.]. Since this study focuses specifically on the measurement of differential rotation, only the value of Δϕ was used to calculate the value of Ω in that latitudinal bin.
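The shift search can be illustrated with the following schematic Python analogue of the IDL routines used here. Several details are assumptions of the sketch rather than of the actual pipeline: the heliographic maps are taken to be plain 2D arrays at 0.1 deg/pixel with latitude increasing along the first axis from -90 deg, the correlation is a simple Pearson coefficient over the band, and longitude shifts are applied with a wrap-around roll, so the edge handling is cruder than in the real analysis.

```python
import numpy as np

def band_shift(map1, map2, lat0, scale=0.1, band=15.0,
               phi0=3.3, dphi=3.0, dtheta=1.0):
    """Longitudinal shift (deg) of a latitude band between two heliographic maps.

    lat0 is the lower edge of the band in degrees; phi0 is the expected shift,
    e.g. from the photospheric rotation rate over the 6-hour cadence.
    Returns the best-fitting shift and the peak correlation coefficient.
    """
    def band_of(img, dlat_pix=0):
        r0 = int(round((lat0 + 90.0) / scale)) + dlat_pix
        return img[r0:r0 + int(round(band / scale)), :]

    b1 = band_of(map1).ravel()
    best_cc, best_dphi = -np.inf, 0.0
    for dth in np.arange(-dtheta, dtheta + scale / 2, scale):
        for dph in np.arange(phi0 - dphi, phi0 + dphi + scale / 2, scale):
            shifted = np.roll(band_of(map2, int(round(dth / scale))),
                              -int(round(dph / scale)), axis=1)
            cc = np.corrcoef(b1, shifted.ravel())[0, 1]
            if cc > best_cc:
                best_cc, best_dphi = cc, dph
        # the latitudinal offset dth is searched but, as in the text, not used further
    return best_dphi, best_cc

# Omega (deg/day) for one band, for maps 6 hours (0.25 days) apart:
# delta_phi, cc = band_shift(map_t0, map_t1, lat0=0.0); omega = delta_phi / 0.25
```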
§ RESULTS
§.§ Average rotational profile above the photosphere
To obtain the average rotational profile of the hotter solar atmosphere above the solar photosphere, we calculated the average of Ω (θ) for each latitudinal band weighted by corresponding cross-correlation coefficients (CC) in that latitude band. This step is performed after the elimination of cases with low values of CC, which may have arisen due to the absence or emergence of any large-scale feature in either of the consecutive images being analyzed. Low values of CC may also result from the presence of transient events (e.g., flares), which lead to intensity enhancements in any of the consecutive images being correlated. Cases where CC < 0.65 for , , , , and CC <0.70 for and are not included in analysis. These limits on the CC are imposed after finding out the value of CC for which the values of A, B and C do not vary significantly <cit.>. The uncertainty (error) in Ω_θ is calculated as the resultant of the least count error (σ_ LCE) and the standard statistical error (σ_ SSE) of the mean. However, σ_ LCE remains dominant in the total error estimate[σ_ LCE = Δϕ/Δ t = 0.1^∘/0.25 days = 0.4^∘], as σ_ LCE is an order of magnitude greater than the σ_ SSE (shaded region of respective colours for each wavelength band in <ref>). The values of mean Ω_θ thus obtained for each latitudinal bin are then fitted with <ref> (where θ=θ_mid is the centre of the latitude band) using the least square fit method, to obtain the best-fit parameters (A, B, C) and their associated uncertainties (Δ A, Δ B, Δ C). These steps are repeated for each AIA wavelength channel, and the differential rotation parameters obtained are tabulated in <ref> for the same.
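Once the weighted-mean rotation rates per latitude bin are in hand, fitting the rotation law Ω = A + B sin^2θ + C sin^4θ is a standard weighted least-squares problem. A minimal sketch is given below, with synthetic data standing in for the measured profile purely so that the snippet runs on its own.

```python
import numpy as np
from scipy.optimize import curve_fit

def rotation_law(theta_deg, A, B, C):
    s2 = np.sin(np.radians(theta_deg))**2
    return A + B * s2 + C * s2**2

theta_mid = np.arange(-52.5, 53.0, 5.0)                  # band centres (deg), illustrative
rng = np.random.default_rng(0)
omega_lat = rotation_law(theta_mid, 14.6, -1.3, -2.7) + rng.normal(0.0, 0.2, theta_mid.size)
sigma_lat = np.full(theta_mid.size, 0.4)                 # deg/day, combined uncertainty per bin

popt, pcov = curve_fit(rotation_law, theta_mid, omega_lat, sigma=sigma_lat,
                       absolute_sigma=True, p0=[14.5, -1.5, -2.5])
(A, B, C), (dA, dB, dC) = popt, np.sqrt(np.diag(pcov))
```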
The values of differential rotation parameters for different wavelength channels
Wavelength (Primary ion) Log_10 T* Height ± error* A ±Δ A B ±Δ B C ±Δ C
(Å) (z, km) (^∘/day) (^∘/day) (^∘/day)
304 (He ii) 4.7 2820 ± 400 14.574 ± 0.012 -1.518 ± 0.12 -2.287 ± 0.223
1600 (C iv) 5.0 430 ± 185 14.485 ± 0.024 -1.612 ± 0.243 -2.677 ± 0.452
131 (Fe viii) 5.6 — 14.649 ± 0.014 -1.334 ± 0.200 -2.999 ± 0.519
171 (Fe ix) 5.93 5100 ± 1900 14.574 ± 0.032 -1.356 ± 0.286 -2.654 ± 0.461
193 (Fe xii) 6.176 6700 ± 2000 14.645 ± 0.039 -0.916 ± 0.344 -2.701 ± 0.557
211 (Fe xiv) 6.272 6100 ± 1900 14.613 ± 0.042 -0.504 ± 0.372 -3.314 ± 0.601
335 (Fe xvi) 6.393 15200 ± 2300 14.656 ± 0.015 -0.958 ± 0.224 -2.750 ± 0.604
*The logarithmic temperatures and heights used to represent all wavelength channels are obtained from the studies of <cit.>. A detailed discussion is available in <ref>.
Our first interpretation from <ref> is that the rotation profile of the solar atmosphere, from the upper photosphere to the corona, is similar across all wavelengths. However, this profile is relatively flat, i.e., less differential, while the absolute rotation rate is higher than that of the photosphere,
as observed using Dopplergrams <cit.> and sunspots as tracers <cit.>. The rotational profile corresponding to chromospheric temperatures () aligns well with the findings of <cit.>, adding credibility to our results. Additionally, our results are consistent with <cit.>, suggesting that the corona rotates faster and less differentially (see <ref>).
§.§ Variation of rotational parameters with height and temperature
In order to investigate the variations in solar differential rotation from the photosphere to the corona, as indicated in studies like <cit.>, it is necessary to obtain the corresponding heights of all AIA channels. Several 1D models of the solar atmosphere through the photosphere to the transition region have been proposed throughout the years <cit.>. However, these models provide features that seldom agree with the observed profiles due to several factors <cit.>. Hence, we obtain approximate representative heights above the photosphere to represent the parts of the solar atmosphere visible in the wavelength channels used in this study (see <ref>), keeping in mind the temperature sensitivity of the same <cit.>. Unfortunately, to the best of our knowledge, there was no singular height that could be ascribed to the AIA 131 Å channel.
Consequently, we have not included the rotation parameters measured using data from this channel in this part of the analysis. Furthermore, in order to make a fair comparison of rotational parameters across all wavelength bands, we also study the variations in rotational parameters with the temperature (T) corresponding to each wavelength.
In <ref>, we plot the rotational parameter A against the height (z) above the photosphere (<ref>a) and against the temperature T (log_10T) (<ref>b), whereas B is plotted against z and T in <ref>c and <ref>d, respectively. Finally, to assess the extent of the relationship between the said parameters, we calculated both Spearman (ρ_s) and Pearson (ρ) correlation coefficients (CC) between the rotational parameters and both z and log_10T.
These positive values of CC, between (A, z) and (B, z), are indicative of an increase in equatorial rotation as well as a decrease in the latitudinal gradient (flatter profile) with height in the solar atmosphere, which has previously been speculated by <cit.>. Additionally, the relationship between A and log_10T seems to exhibit similar behaviour, although with lower CC. Here we would like to emphasize the scarcity of data in the temperature range from approximately log_10T=5.0 (represented by ) till log_10T≈5.9 (represented by ). This absence of information could potentially have a significant impact on the determination of the correlation in the present scenario. Conversely, B shows an upward trend with log_10T. Based on <ref>, we note that although the rotation parameters show positive CC, the nature of the increase is different in these two cases (temperature and height).
§.§ Is there a connection with the solar interior?
It is imperative to acknowledge from <ref> that the rotation rate of the solar atmosphere (for all AIA channels) is faster than that of the rotation rate measured using photospheric magnetic features like sunspots. Interestingly, the rotation rate derived using magnetic features, which are believed to be anchored deeper in the photospheric surface, is greater than
the rotation rate obtained based on Doppler measurement, which samples the higher photospheric plasma <cit.>.
Such results motivate us to consider the potential connection between the faster-rotating solar interior to the faster-rotating solar atmosphere measured in this study. In order to explore such possibilities, we need to obtain the profile of the Sun's rotation from the subsurface regime to its outer layers, observing how rotational characteristics evolve from the interior to the atmosphere of the Sun.
We have already obtained the rotation rate of the solar atmosphere, whereas for the internal rotation we use the helioseismic measurement of solar rotation, obtained using the methodology outlined in <cit.>. The helioseismic data we use are the temporally averaged values of Ω (r,θ) for r∈ [0.7 R_⊙,1.0 R_⊙] in steps of 0.005 R_⊙ and θ∈[0^∘-88^∘] in steps of 2^∘. To obtain the rotation parameters, i.e., A, B and C, for a given depth r, we fit Ω (r,θ) with <ref> for latitudes spanning ± 60^∘ (as most of the solar magnetic features considered in this study are limited to within this latitude range). This calculation is performed only for r∈(0.93,1.0) R_⊙, as we are assuming the possibility of sub-photospheric influence on solar atmospheric rotation. A representative example of the rotation profile for r=0.94 R_⊙ (deeper) and r=0.965 R_⊙ (near the surface) is shown in <ref>.
To examine the evolution of the rotational parameters from the sub-photospheric regime to the atmospheric values obtained in this study, we compare our results with the rotation rate inferred from helioseismology with respect to depth as well as the corresponding temperature (see <ref>), derived from the Solar S-Model <cit.>. On comparing our results in <ref>a, interestingly, we note that the rotation parameter A for the solar atmosphere as obtained from the and coincides with the rotation parameter A obtained at a depth of r ≈ 0.94 R_⊙, while the A for coincides with the A for r ≈ 0.965 R_⊙. Additionally, we also note that the A at r = 0.94 R_⊙ shows a good match with the A obtained in the case of , if we consider a 3σ uncertainty for A. At this juncture, it is imperative to emphasize that the channel receives a contribution from cooler components too, with a temperature close to the one the channel is sensitive to. This further highlights the complexity of considering the solar atmosphere to be distinctly stratified, with the contribution from each layer being unique and independent. We acknowledge the importance of considering the potential contributions from different heights in the same channel when determining equatorial rotation rates, as demonstrated by the case of .
Considering the complexity associated with the determination of unique height, in <ref>c-d, we plot A and B obtained for atmosphere as well as interior as a function of T (log_10T) instead of the z. Interestingly, we find the exact same match with the internal rotation for these wavelengths. However, for B we do not find any such clear connection between the interior and atmosphere of the Sun (see <ref>b and d). On plotting the respective rotational profiles for the depths of 0.94R_⊙ and 0.965R_⊙ and channels , and , we find a good overlap of the profiles for 0.965R_⊙ and at all latitudes, while for 0.94R_⊙, , , the overlap is more apparent at the equatorial regime (see <ref>).
Such an overlap in equatorial rotation rates had also been previously discussed in <cit.>, who had used Coronal Green Line Brightness (CGLB) data and ultraviolet (UV) spectral line observations, respectively, to obtain the rotational profile of the solar corona. Furthermore, <cit.> had also previously suggested the anchoring of sunspots at 0.93R_⊙ on a similar comparison with helioseismology results.
§.§ Variation of rotational parameters with solar activity
Another topic that has persistently generated significant interest and debate is the impact of solar activity on the rotation rate of the Sun. Although the limited data span makes such a study challenging, we explore whether the rotational parameters of the solar atmosphere vary with solar activity, i.e., with the different phases of the solar cycle. To achieve this, we obtained the differential rotation parameters (A, B and C) for each year using a similar approach as discussed in Section <ref>. These parameters are then plotted in <ref> as a function of time along with the yearly averaged sunspot number (SSN)[https://www.sidc.be/SILSO/datafileshttps://www.sidc.be/SILSO/datafiles], which is a marker for solar activity.
In <ref>, we note an apparent cyclic behaviour with time in the rotational parameters, both A and B. To quantify this behaviour, we calculate the Spearman rank correlation (ρ_s) of the parameters obtained for each channel with the SSN (see <ref>). We note that the results from the channel as well as the channels with sensitivity to temperatures native to coronal heights (, , , and ) show a significant positive correlation in equatorial rotation rate, A, with solar cycle phase (p<0.05). Such a correlation of the differential rotation of different parts of the solar corona with the solar activity cycle has been reported in many studies <cit.>. In contrast, the chromospheric channel () shows a very low and negative value of CC, thereby barely indicating any variation with solar activity. This behaviour of the chromospheric rotational profile is consistent with the results for chromospheric rotation as obtained in <cit.> and <cit.> for sunspot, who found no significant change in equatorial rotation rate with solar activity. However, on studying cyclic variation of differential rotation parameters using sunspot data from various databases, <cit.> found that the equatorial rotation rate does reach its maximum just before solar activity minimum, which can be noticed from a careful comparison of the general trend of A with solar activity.
For parameter B, which represents the differential nature of the rotation, we find no such significantly high positive correlation (p <0.05) in any channel other than . <cit.> obtained a similar lack of correlation for parameter B; however, they attributed this lack of correlation to more pronounced errors in their data at higher latitudes.
Spearman rank correlation coefficients of the rotational parameters and yearly averaged Sunspot numbers (SSN):

Wavelengths        A                     B
(Å)                ρ_s       p-value     ρ_s       p-value
304                -0.309    0.304       0.269     0.374
1600               0.863     0.001       -0.52     0.069
131                0.813     0.001       0.489     0.09
171                0.863     0.001       0.148     0.629
193                0.583     0.036       0.67      0.012
211                0.571     0.041       0.725     0.005
335                0.83      0.0005      0.462     0.112
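The yearly analysis summarised above can be expressed by the following illustrative sketch, which reuses the fit_rotation_profile helper from the earlier sketch; the data layout (per-measurement arrays plus a year-to-SSN mapping) is an assumption made for brevity and does not reproduce the exact bookkeeping of our pipeline.

```python
from scipy.stats import spearmanr

def yearly_activity_correlation(years, theta_mid, omega, omega_err, ssn_by_year):
    """Fit (A, B, C) for each calendar year and correlate A and B with the yearly SSN."""
    A_vals, B_vals, ssn_vals = [], [], []
    for yr in sorted(ssn_by_year):
        sel = years == yr                                   # measurements from this year
        (A, B, _), _ = fit_rotation_profile(theta_mid[sel], omega[sel], omega_err[sel])
        A_vals.append(A)
        B_vals.append(B)
        ssn_vals.append(ssn_by_year[yr])
    # Spearman rank correlation (rho_s, p-value) of each parameter with solar activity.
    return spearmanr(A_vals, ssn_vals), spearmanr(B_vals, ssn_vals)
```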
§ DISCUSSION
The rotational profiles of the upper solar atmosphere in <ref>, as seen in different wavelength regimes, suggest that the solar atmosphere, as modulated by large-scale magnetic features like plages, CBPs, filaments, coronal loops, etc., rotates 2.95%-4.18% and 0.73%-1.92% faster (at the equator) and less differentially compared to the photospheric rotational profile obtained from Dopplergrams and sunspot data, respectively. However, these results are obtained based on the method of image correlation, which is sensitive to the intensity contrast of multiple magnetic features in the hotter solar atmosphere. While this method does not distinguish between the rotation of individual features, it has the advantage of improving the statistics of the analysis by taking into account all the features distinguishable by intensity. The hotter solar atmosphere is also an optically thin region, and therefore a measurement of the shift in features could also be affected by the line-of-sight effect, leading to an apparent measurement of faster rotation. Therefore, there is a possibility that these results are influenced by an apparent line-of-sight (LoS) effect arising from extended structures like coronal loops. To test this hypothesis, we created a toy model of an extended structure mimicking a coronal loop to examine the extent of such an effect (see <ref>). The results obtained from this experiment confirm that the difference between the photospheric rotation rates and those beyond the photosphere cannot be solely the outcome of the line-of-sight effect. However, the LoS effect might make a small contribution, which we have quantified in <ref>.
Once we have eliminated this prospect, it is important to acknowledge that the solar atmosphere is not uniformly stratified and is multi-thermal; consequently, a filter sensitive to a specific temperature may receive contributions from various heights. However, this study relies on the well-established understanding that certain global temperature ranges (e.g., ≈ 10^6 K) are limited to the higher layers of the solar atmosphere (e.g., the solar corona). Although these temperatures may be instantaneously achieved locally in the lower layers of the solar atmosphere during transient events (e.g., flares), we assume they do not represent the long-term global characteristics of the solar atmosphere, which is the primary focus of this study. As a result, the outcomes derived in this study remain statistically unaffected by these events.
The observed increasing trend of the solar differential rotation with height is a highly debated topic, and a complete explanation for such behaviour is still lacking. However, a theoretical perspective was proposed by <cit.>, discussing the role of the magnetic field line configuration on atmospheric rotation, leading to an increasing rotation rate with height, which is in agreement with our measurements. According to <cit.>, the interplay between the magnetic field torque and the outward-flowing plasma ensures that the rotation rate increases with radial distance to keep the total angular momentum conserved. Additionally, the role of the magnetic field in providing the angular momentum required for the faster rotation of the solar atmosphere beyond the photosphere has also been suggested in many studies <cit.>.
In Section <ref>, we find an excellent match between the equatorial rotation rate (A) at the depths of r = 0.94 R_⊙ and r = 0.965 R_⊙, as inferred from helioseismic observations, and that obtained for the channels , and . While this alignment may seem coincidental, a potential physical connection between them cannot be completely dismissed owing to Ferraro's law of isorotation <cit.>, which hints toward such a possibility. According to this law, strong magnetic fields frozen into the plasma tend to transport the angular momentum at their footpoints throughout their extent, thereby leading to a comparatively rigid nature of rotation in low-β plasma, and we suspect that this could be the reason behind the observed behaviour of solar rotation. The idea that the footpoints of the loops visible in may have their roots in the lower layers has also been suggested in works like that of <cit.>, while several works have also hinted at the possibility of sub-photospheric rooting of coronal magnetic features <cit.>. We emphasize that these arguments should be taken with a grain of salt, and a more thorough study is needed to confirm such a possibility.
We have also observed a positive correlation between the changes in the rotation of the solar atmosphere and the phase of the solar cycle, as evidenced by the correlation with the yearly averaged sunspot number, more prominent in the rotational parameter A. Such a result could indicate a relationship between the solar atmospheric rotation and the presence of different magnetic structures (e.g., plages, coronal loops, CBPs etc.) during different phases of solar activity. Another possibility hints at the probable existence of a phenomenon called torsional oscillation, which has extensively been discussed and documented in various studies <cit.>. Notably, the possibility of such a phenomenon is prominently observed in the layers at coronal temperatures (, , , ), while no such variation is observed in the chromospheric counterpart () in agreement with the findings of <cit.> for the chromosphere and <cit.> for sunspots. Additionally, the variation of the coronal rotational profile has been proposed to be affected by magnetic flux concentrations <cit.>, which is positively correlated with solar activity.
Additionally from <ref>, it is also apparent that the parameter A reaches its maximum just before the minimum, while the parameter |B| has a greater value at cycle minimum for , suggesting a more differential rotation at cycle minimum. This is similar to the results obtained by <cit.> and can be suggestive of what is known as the braking effect exerted by non-axisymmetric magnetic fields <cit.>. Theoretical efforts have been made to explain such a variation with cycle activity, results from which have highlighted the role of the strength of magnetic fields in the transport of angular momentum towards the equator <cit.>.
Except , no statistically significant correlation of the parameter B with the yearly averaged sunspot number is apparent for most pass bands representing the hotter solar atmosphere. A possible connection can be made to the cross-talk between the parameters B and C, which amplifies the noise-related uncertainties and obscures their actual time variation <cit.>. This cross-talk is also the reason why the individual variation in the parameter C is not individually examined in the study.
§ SUMMARY & CONCLUSION
In this study, we analyzed 13 years of SDO/AIA data to understand the solar atmospheric rotational profile, its variation at different layers of the solar atmosphere, and with parameters like temperature and solar activity. The primary conclusion we arrive at in this study is that the solar atmosphere, till lower coronal heights, rotates faster and less differentially compared to the photospheric rotation rates obtained from Dopplergrams and sunspot data.
The study also utilised data from helioseismology at different depths to understand the variation of the rotational profile from the interior to the atmosphere and subsequently found a significant correlation between the rotational rate at certain sub-photospheric depths (0.94 R_⊙, 0.965 R_⊙) and that obtained for the channels sensitive to certain temperatures of the solar atmosphere (, and respectively).
While the current study has reinforced that the hotter solar atmosphere indeed does rotate faster and less differentially than the photosphere, numerous unanswered questions remain. Despite the few possibilities explored in this study, the physical understanding behind the observed increase in rotation rate and decrease in differential nature and their generalized trends with height above the photosphere, logarithmic temperature, and solar activity remains unclear. It is important to note that this study does not aim to provide detailed information about the rotational profiles at each specific temperature and height within each layer of the solar atmosphere but rather to provide an overview of the general trend in the rotation of the solar atmosphere from the photosphere to the chromosphere, transition region, and corona.
The findings of this study, if revisited with a larger dataset encompassing multiple cycles and a method capable of distinguishing between thermally distinct features at their exact height of formation, might have significant implications in our understanding of the overall behaviour of the Sun's differential rotation and its complex relationship with the solar magnetic field. A future study could focus on developing a method to isolate the high- and low-temperature components in images from each channel as well as isolating the specific height associated with them to map the variation in the rotational profile of the exact same feature at different heights of the solar atmosphere, which is crucial for a more thorough analysis. Further validation for the trends suggested in this study can also be provided through the use of a dataset that spans multiple solar activity cycles as well as through the use of orthogonalized fit functions. This will help mitigate any potential biases that may have arisen from using a dataset that spans fewer cycles.
§ ACKNOWLEDGMENTS
We thank the anonymous referee for carefully reviewing this study and providing valuable feedback on improving the manuscript further. S.R. is supported by funding from the Department of Science and Technology (DST), Government of India, through the Aryabhatta Research Institute of Observational Sciences (ARIES). The computational resources utilized in this study were provided by ARIES. The funding support for DKM's research is from the Council of Scientific & Industrial Research (CSIR), India, under file no.09/0948(11923)/2022-EMR-I. TVD was supported by the C1 grant TRACE space of Internal Funds KU Leuven, and a Senior Research Project (G088021N) of the FWO Vlaanderen. Furthermore, TVD received financial support from the Flemish Government under the long-term structural Methusalem funding program, project SOUL: Stellar evolution in full glory, grant METH/24/012 at KU Leuven. The research that led to these results was subsidised by the Belgian Federal Science Policy Office through the contract B2/223/P1/CLOSE-UP. It is also part of the DynaSun project and has thus received funding under the Horizon Europe programme of the European Union under grant agreement (no. 101131534). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union and therefore the European Union cannot be held responsible for them. The authors extend their gratitude to NASA/SDO, the SDO/AIA science team, and the Joint Science Operations Centre (JSOC) for the AIA data used in this study. We would also like to thank H.M. Antia for the helioseismology data and the Solar Influences Data Analysis Centre (SIDChttps://www.sidc.be/) for the Sunspot data that was utilized in this study. TVD is grateful for the hospitality of DB and VP during his visit at ARIES in spring 2023. S.R., B.K.J. and D.K.M. also extend their gratitude to the Indian Network for Dynamical and Unified Solar Physicists (https://sites.google.com/view/indus-solphys/homeINDUS) for overall support. This study has also utilized the resources of NASA Astrophysics Data System (https://ui.adsabs.harvard.edu/ADS) and Semantic Scholar (<https://www.semanticscholar.org/>) bibliographic services.
§ EXPLORING THE EFFECT OF LINE OF SIGHT (LOS) PROJECTION OF EXTENDED STRUCTURES ON THE RESULT
We have shown that the solar atmosphere, modulated by structures like plages, coronal loops, active regions, filaments etc., rotates faster than the photosphere (ΔΩ∈ [0.105,0.558]^∘/ day). The extended height of such features above the photosphere, especially at higher latitudes, can result in an erroneous measurement of the rotation rate based on projected coordinates <cit.>. Although the image correlation technique utilised in this study is tracer-independent and considers only the pixel-specific integrated intensity along the LoS to calculate the rotation rate in a particular latitudinal bin through the calculation of the 2-D cross-correlation coefficient, this method may be sensitive to the angle with respect to the LoS, and structures extending from the solar disk, such as coronal loops, whose position with respect to the LoS may influence the value of the intensity populating specific pixels and, consequently, the results obtained through image correlation. To investigate the impact of such scenarios, a simplified toy model was created, mimicking extended structures whose LoS integrated intensity changes only with respect to their position relative to the LoS, while the footpoint of the structure remains stationary. This was done to isolate the excess rotation rate resulting from the LoS effect (ΔΩ_LoS). The model was designed with two different spatial resolutions: (a) with a smaller pixel size corresponding to a better resolution <cit.>, wherein 1 pixel corresponds to 100 km in the sky; and (b) the coarser AIA pixel size, wherein 1 pixel corresponds to 435 km in the sky <cit.> (<ref>). The aim was twofold: (i) to determine whether the LoS projection effect contributes to the disparity in rotation rate between the photosphere and the hotter solar atmosphere modulated by extended structures like coronal loops; and (ii) if it does, to assess the relevance of this effect in our study using data from SDO/AIA.
Once the projected intensity along the LoS (I_LoS) is obtained for a specific angle (θ), the structure is shifted by a constant angle (Δθ), which represents the anticipated change in the angle with respect to the LoS of the structure, calculated from the rotation period of the footpoint (assuming it is the solar surface) at the equator and a cadence of 6 hours. The structure is now positioned at an angle θ+Δθ relative to the LoS, and the projected intensity obtained is cross-correlated with the projected intensity obtained at the initial angle θ. The resulting shift is used to calculate the excess in rotation rate (ΔΩ_LoS), which is represented as the excess in rotation rate for the LoS angle θ. This process was repeated for all LoS angles in the progression of θ+2Δθ, θ+3Δθ, and so on, spanning ± 45^∘ in longitude. This approach was taken to match the conditions imposed on the data in the original analysis. The results obtained for a synthetic structure of length 5 Mm observed at a high resolution of 100 km/pixel suggest an excess rotation rate (ΔΩ_LoS) of up to ≈ 0.83^∘/ day (see <ref> (a)), which is higher than the excess obtained in our original analysis. This suggests that when data at higher resolutions is subjected to the image correlation method without any pre-processing, there might be a LoS projection effect that affects the results obtained.
To see if such an effect can affect our analysis using AIA data, which offers a much coarser resolution, we create a much larger structure (Length=150 Mm; see <ref> (b)) and subject it to the same process of analysis. The results thus obtained suggest an excess of up to ΔΩ_LoS≈ 0.002^∘/ day (see <ref> (b)), which is not enough to explain the excess rotation rate of the extended structure-modulated solar atmosphere observed in our analysis. In light of these results, we would like to emphasize that, as part of our analysis, we employed pre-processing techniques such as Gaussian smoothing (see <ref>), which has a blurring effect and further degrades the resolution, thus minimizing the likelihood of spurious effects like the one discussed here. These findings provide further evidence that the faster rotation of the solar atmosphere is a complex physical phenomenon rather than a data or method-specific artefact.
§ REPRESENTATION OF HEIGHT AND LOGARITHMIC TEMPERATURE FOR AIA CHANNELS
The approximate heights used to represent the different channels of the SDO/AIA are obtained from previous studies, as discussed below,
* The heights and their respective uncertainties used to represent channels , and are the formation heights for the He ii, C iv and Fe xvi emission lines as obtained in <cit.>.
* For the channels , and , we utilized the heights determined by <cit.> through the study of coronal bright points (CBPs) from the data of the , 195 Å, and 284 Å channels of the Solar TErrestrial RElations Observatory (STEREO). This was done keeping in mind that the CBPs, as well, are structures which dominate in the cross-correlation process through which the rotational profile is determined for these wavelength channels of SDO/AIA. Furthermore, the logarithmic temperatures represented by the 195 Å and 284 Å channels of STEREO are nearest to the and channels of the AIA, respectively.
The logarithmic temperatures used to represent the wavelength channels of AIA were taken from <cit.> and represent the temperature responses of the respective AIA filters. It is important to point out that even though the and channels are also sensitive to hot flare plasma (Log_10 T = 7.0), our study focuses only on long-term events with lifespans > 0.25 days (6 hours). We therefore assume that the cooler components of these wavelength bands contribute primarily to our results (see <ref>).
|
http://arxiv.org/abs/2409.03319v1 | 20240905074646 | Semantic Communication for Efficient Point Cloud Transmission | [
"Shangzhuo Xie",
"Qianqian Yang",
"Yuyi Sun",
"Tianxiao Han",
"Zhaohui Yang",
"Zhiguo Shi"
] | cs.ET | [
"cs.ET"
] |
Semantic Communication for Efficient Point Cloud Transmission
Shangzhuo Xie1, Qianqian Yang1, Yuyi Sun2, Tianxiao Han1, Zhaohui Yang1, Zhiguo Shi1
1College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310007, China
2College of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
Email:
{3200104418,qianqianyang20,Tianxiao Han,yang_zhaohui,shizg}@zju.edu.cn, yuyisun@hznu.edu.cn
================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
As three-dimensional acquisition technologies like LiDAR cameras advance, the need for efficient transmission of 3D point clouds is becoming increasingly important. In this paper, we present a novel semantic communication (SemCom) approach for efficient 3D point cloud transmission. Different from existing methods that rely on downsampling and feature extraction for compression, our approach utilizes a parallel structure to separately extract both global and local information from point clouds. This system is composed of five key components: local semantic encoder, global semantic encoder, channel encoder, channel decoder, and semantic decoder. Our numerical results indicate that this approach surpasses both the traditional Octree compression methodology and alternative deep learning-based strategies in terms of reconstruction quality. Moreover, our system is capable of achieving high-quality point cloud reconstruction under adverse channel conditions, specifically maintaining a reconstruction quality of over 37dB even with severe channel noise.
Semantic communication, point cloud reconstruction, wireless transmission
§ INTRODUCTION
The 3D point cloud is widely used for representing three-dimensional data, with applications in autonomous driving and virtual reality <cit.>. Transmitting 3D point clouds wirelessly imposes strict requirements on bandwidth, latency, and reconstruction quality. Over the years, various point cloud encoding methods have been developed, including geometry-based <cit.>, voxelization-based <cit.>, 3D convolution-based <cit.>, and NeRF-based <cit.> approaches. However, relying solely on source coding can result in the `cliff effect', where decoding performance drastically deteriorates under harsh channel conditions. Recently, emerging semantic communication (SemCom) approaches have addressed this issue by integrating source and channel coding and optimizing data transmission for tasks rather than precise bit recovery, which has demonstrated improved transmission efficiency <cit.>.
SemCom has been widely applied for the transmission of images, videos, and speech <cit.>, but its use in point clouds remains limited. Notable efforts include the Point Cloud-based SemCom System (PCSC) <cit.>, which employs the Voxception-ResNet (VRN) network for joint source-channel coding of point clouds, achieving improved transmission efficiency compared to traditional methods. The authors in <cit.> utilized Point Transformer to build a SemCom system, eliminating cliff effects and offering improved compression performance under AWGN channels. However, point clouds are characterized by large volumes and complex geometric features. These methods do not effectively extract rich and multi-level semantic information within point cloud data.
Existing work on semantic communication (SemCom) for image transmission has validated that multi-level semantic information can help preserve semantics in images, leading to better transmission performance, especially under harsh channel conditions <cit.>. Promising results have also been seen in point cloud processing, where this approach successfully extracts global and local semantic information and supports various downstream tasks without complex networks <cit.>.
Motivated by these works, this paper presents a point cloud semantic communication (SemCom) system framework based on multi-level semantic information extraction. The framework utilizes a point-based neural network to extract local semantic information from point cloud segmentation patches and a graph-based neural network to extract global semantic information from point cloud projection images. To further enhance transmission quality, a hierarchical transmission strategy is implemented: lossless transmission, such as PBRL-LDPC with HARQ, is used for transmitting global semantic information and patch centroid coordinates, which have a smaller data volume. In contrast, lossy transmission is employed for local semantic information through a joint source-channel coding codec. Our simulations on the ModelNet40 dataset demonstrate that our network can maintain a reconstruction quality of around 37 dB even under extremely poor channel conditions, representing an improvement of more than 15% compared to existing methods.
§ SYSTEM MODEL
In this section, we introduce the system model for a SemCom system aimed at efficient point cloud transmission and outline the performance metrics used to evaluate transmission quality.
§.§ SemCom for Point Clouds
The main framework of our point cloud transmission system is illustrated in Fig. <ref>. The point cloud P contains only three-dimensional coordinates, represented as P = (N, 3). At the transmitter, P is first processed by the patch extractor, producing S patches Pc = {Pc_1, Pc_2, ..., Pc_s}, each containing 2 × N/S points. Corresponding centroid coordinates Px = {Px_1, Px_2, ..., Px_s} are also obtained. The patches Pc are then sent to the local semantic encoder, generating local semantic information L, a combination of the d-dimensional local semantic vectors from each patch. Simultaneously, P is processed by a projection extractor, producing m projection images Pr = {Pr_1, Pr_2, ..., Pr_m}, each with a size of 224 × 224. These images are processed by the global semantic encoder to generate global semantic information G, which contains D'-dimensional global features of P.
After that, global information G and centroid coordinates Px are transmitted losslessly. For local information L, power normalization is applied, resulting in X = (S, d), which is then encoded and mapped by the channel encoder, producing M = (S, 2 × d). M is then mapped into a complex vector M̂∈C^S × d. The received signal for the decoder is given by:
M̂' = M̂ + 𝐧
𝐧 represents the Gaussian noise 𝒞𝒩(0, σ^2I) with the average power of 1.
The receiver then reshapes M̂' into M'∈R^S × 2 × d and inputs it to the channel decoder to obtain the decoded local information Y=(S,d). The global information, the decoded local information and the centroid coordinates are then fed to the semantic decoder to reconstruct the point cloud P' = (N,3).
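A minimal NumPy sketch of this complex mapping and AWGN step is shown below; the function name and the point at which power normalization is applied are simplifications of the actual pipeline described above, not the training-time implementation.

```python
import numpy as np

def awgn_channel(m, snr_db):
    """Pass channel-encoder output m of shape (S, 2*d) through an AWGN channel."""
    s, two_d = m.shape
    m = m / np.sqrt(2 * np.mean(m ** 2))                     # unit average symbol power
    symbols = m[:, : two_d // 2] + 1j * m[:, two_d // 2 :]   # complex vector in C^{S x d}
    sigma2 = 10 ** (-snr_db / 10)                            # noise power relative to unit signal power
    noise = np.sqrt(sigma2 / 2) * (np.random.randn(s, two_d // 2)
                                   + 1j * np.random.randn(s, two_d // 2))
    received = symbols + noise
    # Back to a real-valued array of shape (S, 2*d) for the channel decoder.
    return np.concatenate([received.real, received.imag], axis=1)
```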
§.§ Performance Metrics
To evaluate transmission performance, we use two commonly applied peak signal-to-noise ratio (PSNR) metrics, D1 and D2 <cit.>. Assume A and B are the original and transmitted point clouds, respectively.
D1 calculates the point-to-point distance between A and B. The compression error e^c2c_A,B is given by:
e^c2c_A,B = 1/N_A∑_a_j∈A||E(i,j)||^2_2
where E(i, j) represents the geometric distance between each point a_j in A and its nearest point b_i in B.
For the point-to-plane distance between A and B, we construct a plane using the normal vector method described in <cit.> and project E(i, j) onto this plane to obtain the point-to-plane compression error e^c2p_A,B:
e^c2p_A,B = 1/N_A∑_a_j∈A||E’(i,j)||^2_2
where E'(i, j) is the projection of E(i, j) onto the plane.
Finally, the PSNR is calculated using e_A,B:
PSNR_A,B = 10log_10(p^2/e_A,B)
where p is defined as the diagonal distance of the point cloud bounding box. D1 aligns with intuitive metrics for point cloud changes and transmission quality, while D2, better suited for point clouds with structural surface features, offers a more perceptual quality evaluation.
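For illustration, the one-sided point-to-point (D1) metric can be computed as in the following sketch; the symmetric variant (taking the worst direction) and the point-to-plane (D2) metric, which additionally requires estimated surface normals, are omitted for brevity, and the function name is our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def d1_psnr(A, B):
    """One-sided point-to-point (D1) PSNR from original A to reconstruction B, each (N, 3)."""
    dists, _ = cKDTree(B).query(A)             # nearest neighbour in B for every point of A
    mse = np.mean(dists ** 2)                  # the point-to-point compression error
    p = np.linalg.norm(A.max(axis=0) - A.min(axis=0))   # bounding-box diagonal
    return 10 * np.log10(p ** 2 / mse)
```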
§ PROPOSED METHOD
In this section, we discuss the details of the encoder and decoder designs in our transmission system.
§.§ Patch Extractor and Local Semantic Encoder
The detailed structure is illustrated in Fig. <ref>a. The process begins with capturing a patch of the point cloud, which is then fed into the local semantic encoder. For patch acquisition, we use the farthest point sampling (FPS) <cit.> algorithm to sample the input point cloud P(N,3), ensuring the patch centroids C(S,3) are as evenly distributed as possible. Next, the k-nearest neighbors (kNN) <cit.> algorithm is applied to each centroid, sampling K points around each centroid to form a patch. As a result, S patches are generated, each containing K points. To maximize point coverage, oversampling is incorporated, potentially sampling every point in the point cloud. The relationship between S and K is defined as:
S × K = 2 × N
To mitigate the impact of patch offset on the local semantic encoder, we subtract the centroid coordinates from all points within each patch. Each patch (K, 3) is then individually processed by the local semantic encoder. The encoder comprises a PointNet++ <cit.> layer followed by a PointNet layer. The PointNet++ layer includes a downsampling layer (FPS), a grouping layer (KNN), and a PointNet layer. The PointNet layer consists of three MultiLayer Perceptron (MLP) layers and a max-pooling layer. In the downsampling layer, the number of sampling points is set to K/2, while the grouping layer contains K/4 collection points. The final PointNet layer extracts features with a dimensionality of d. As a result, the dimensionality of the information for each patch transforms as follows:
FPS[(K,3)] → KNN[(K/2,3)] → PointNet[(K/2,K/4,3)] → PointNet[(K/2,128)] → (1,d)
We obtain the local semantic information L(S, d) of S patches. It is worth noting that since our local semantic encoder extracts information for each patch, its network structure is lightweight.
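A simplified NumPy sketch of the patch extraction step (FPS centroids, kNN grouping, and centroid subtraction) is given below; it is illustrative only and omits details of the original implementation such as tie-breaking, batching, and the handling of duplicated points introduced by oversampling.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_patches(P, S, K):
    """Split P (N, 3) into S overlapping patches of K points each, with S * K = 2 * N."""
    N = P.shape[0]
    centroid_idx = np.zeros(S, dtype=int)
    centroid_idx[0] = np.random.randint(N)
    min_dist = np.full(N, np.inf)
    for i in range(1, S):                                   # farthest point sampling
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(P - P[centroid_idx[i - 1]], axis=1))
        centroid_idx[i] = np.argmax(min_dist)
    Px = P[centroid_idx]                                    # (S, 3) patch centroids
    _, knn_idx = cKDTree(P).query(Px, k=K)                  # (S, K) nearest-neighbour indices
    Pc = P[knn_idx] - Px[:, None, :]                        # (S, K, 3) centred patches
    return Pc, Px
```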
§.§ Projection Extractor and Global Semantic Encoder
The global semantic encoder's structure is illustrated in Fig. <ref>b. To extract global semantic information, we first generate the necessary projection maps. Traditionally, this can be done in two ways: 1) Surround the 3D model with a regular icosahedron, placing a virtual camera at each plane's center, and rotate it to capture 80 projection maps. 2) Capture 12 projection maps by taking photos every 30 degrees around the z-axis. To keep our model lightweight while effectively extracting global semantics, we follow the approach from MVTN <cit.>. The point cloud P (N, 3) is processed by PointNet and an MLP to obtain the virtual camera parameters u, which include azimuth, elevation, and distance information. Using these virtual cameras, we generate four projection images, each with a pixel size of (224, 224), where the point cloud is rendered in red on a white background.
The projection maps are then fed into four Convolutional Neural Networks (CNNs) with shared parameters to capture global information. A view pooling layer consolidates the 512-dimensional global features extracted by the CNNs. Finally, a three-layer MLP reduces the dimensionality and extracts the global semantic information G(1,D').
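The shared-weight multi-view encoding and view pooling can be sketched in PyTorch as below; the small convolutional backbone and layer sizes are placeholders rather than the exact architecture used in our experiments.

```python
import torch
import torch.nn as nn

class GlobalSemanticEncoder(nn.Module):
    """Shared-weight CNN over m projection views, max view-pooling, then an MLP to D' dims."""

    def __init__(self, d_global=4):
        super().__init__()
        # A lightweight stand-in for the image backbone applied to every view.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 512), nn.ReLU(),
        )
        # Three-layer MLP reducing the pooled 512-dim features to D' dimensions.
        self.mlp = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, d_global),
        )

    def forward(self, views):                   # views: (B, m, 3, 224, 224)
        B, m = views.shape[:2]
        feats = self.cnn(views.flatten(0, 1)).view(B, m, -1)
        pooled = feats.max(dim=1).values        # view pooling across the m projections
        return self.mlp(pooled)                 # (B, D') global semantic information
```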
§.§ Channel Encoder and Decoder
The model architecture is shown in Fig. <ref>c. We input the power-normalized semantic data L (S, d) into a channel encoder consisting of two fully connected (FC) layers, which scales the data to (S, 2 × d). Before transmission, the data is subjected to noise interference. At the receiver, the noise-distorted data (S, 2 × d) is processed by a channel decoder, also containing two fully connected layers, to correct errors and regenerate the local semantic information L' (S, d) required by the semantic decoder.
§.§ Lossless Transmission
According to <cit.>, when the channel code rate is set at 1/2 and BPSK modulation is adopted, it is feasible to achieve a success rate of p=0.9 per transmission at a Signal-to-Noise Ratio (SNR) of 0 dB, provided the block length exceeds 128. The number of transmission attempts until success follows a geometric distribution with success probability p, implying that the expected number of transmissions required for lossless delivery is 1/p. Therefore, the required symbol count for lossless transmission under these parameters can be determined as follows:
Channel symbol use = 1/p×( 2 ×Bit use/𝒞)
𝒞 represents the channel capacity. In this paper, we can calculate it as follows:
𝒞 = log_2(1 + 10^(SNR_dB/10))
In subsequent experiments, we directly package the center of mass coordinates C(S, 3) and global information G(1, D’) for lossless transmission, calculating the model's compression rate using Eq. <ref>. It should be noted that this equation provides the maximum transmission volume required for lossless transmission. When channel conditions improve, the actual transmission volume will be lower than the calculated value.
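For concreteness, the symbol budget implied by the two equations above can be evaluated as in the following sketch; the example bit count assumes S=64 centroids, D'=4 global dimensions, and 16-bit precision, matching the settings used later, and the function name is illustrative.

```python
import math

def lossless_symbol_use(bit_use, snr_db, p=0.9):
    """Worst-case channel symbols needed to deliver bit_use bits losslessly."""
    capacity = math.log2(1 + 10 ** (snr_db / 10))   # channel capacity in bits per channel use
    return (1 / p) * (2 * bit_use / capacity)

bits = 64 * 3 * 16 + 4 * 16                          # centroid coordinates plus global vector
print(lossless_symbol_use(bits, snr_db=0))           # symbols required at SNR = 0 dB
```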
§.§ Semantic Decoder
As shown in Fig. <ref>d, the semantic decoder begins by expanding G(1,D') into G'(S,D'), reflecting that different patches of the same point cloud share identical global semantic information. We then concatenate local and global semantic information, resulting in a total feature dimension of d+D' for each patch. To maintain a lightweight decoder structure, patches are processed separately. The upsampling layer, built using a fully connected layer, outputs data of size K/2 × 3, which is reshaped into patches of size (K/2,3), consisting of K/2 points. According to Eq. <ref>, after upsampling each patch, the total number of reconstructed points N matches the original input point cloud.
Next, we align the patches based on their centroid coordinates and merge them to obtain the reconstructed point cloud P'. In our model, we calculate the Chamfer distance (CD) <cit.> between P and P' as the loss function, defined as follows:
CD(P, P') = 1/N [ ∑_p∈ P min_p'∈ P' ||p-p'||_2^2 + ∑_p'∈ P' min_p∈ P ||p'-p||_2^2 ]
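A direct PyTorch sketch of this loss is given below; it materializes the full pairwise distance matrix, which is simple but memory-hungry for large N, so chunked or tree-based implementations are preferable in practice.

```python
import torch

def chamfer_distance(P, P_rec):
    """Symmetric Chamfer distance between point sets P and P_rec, each of shape (B, N, 3)."""
    # (B, N, N) matrix of squared Euclidean distances between all point pairs.
    dist2 = ((P.unsqueeze(2) - P_rec.unsqueeze(1)) ** 2).sum(-1)
    # Nearest-neighbour terms in both directions, averaged over the N points.
    cd = dist2.min(dim=2).values.mean(dim=1) + dist2.min(dim=1).values.mean(dim=1)
    return cd.mean()
```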
§.§ Two-Stage Training
In our model, we employ a two-stage training strategy to enhance adaptability to various channel conditions, a method proven effective in SemCom systems <cit.>. In the first stage, the model excludes the channel encoder and decoder, focusing on training the local semantic encoder, global semantic encoder, and semantic decoder in a noise-free environment.
In the second stage, we introduce the channel encoder and decoder, loading the pre-trained local and global semantic encoders, as well as the semantic decoder. The parameters of the local and global semantic encoders are fixed since they have already demonstrated effective information extraction without noise interference during the first stage. We then add noise to train the channel encoder, channel decoder, and semantic decoder, ensuring they work together to mitigate noise.
This strategy allows us to quickly adapt the model to different channel conditions by directly applying second-stage training to the pre-trained model, significantly reducing training time.
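The essence of the second stage can be expressed as in the following sketch, where the module arguments are placeholders for the networks described earlier; only the freezing of the pretrained semantic encoders and the choice of trainable parameters matter here, and the learning rate follows the setting reported in Section 4.

```python
import torch

def build_stage2_optimizer(local_enc, global_enc, channel_enc, channel_dec,
                           semantic_dec, lr=5e-4):
    """Freeze the pretrained semantic encoders and optimize the remaining modules."""
    for module in (local_enc, global_enc):
        for p in module.parameters():
            p.requires_grad = False             # keep stage-1 semantic encoders fixed
    trainable = (list(channel_enc.parameters()) + list(channel_dec.parameters())
                 + list(semantic_dec.parameters()))
    return torch.optim.Adam(trainable, lr=lr)
```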
§ EXPERIMENTAL RESULTS
§.§ Experimental Setup
We use point cloud data from ModelNet40, consisting of 9,843 training samples and 2,468 testing samples. Each point cloud is downsampled to N=8,192 points using the FPS algorithm. For compatibility with benchmark methods, we scale the data to the range [0, 63] for input and normalize the output for performance evaluation. We consider an AWGN channel and test under the noise conditions described in Section 2. For local information extraction, we use the Adam optimizer with an initial learning rate of 0.0005 and a batch size of 24. For global information extraction, we follow the MVTN settings <cit.>. In the second stage, since the training is more focused on local information extraction, we maintain consistency with the first-stage settings. To evaluate reconstruction performance across different SNR levels, we set the number of patches S=64, local semantic information d=8, projection maps w=4, and extracted global information D'=4. For compression performance evaluation, w and D' remain fixed while S and d vary. In ablation experiments, we use S=64, d=8, and w=4. We use D1 and D2 in Section 2 as performance metrics.
§.§ The Reconstruction Performance
In this subsection, we use the state-of-the-art deep learning-based DPCC <cit.> and the standard octree-based MPEG G-PCC <cit.> as benchmark compression methods. We also employ the lossless transmission model from Section 3 to calculate transmission volume, assuming 16-bit precision for each parameter. It should be noted that our model also needs to account for the transmission volume of the local semantic information.
For G-PCC, we set initial parameters based on general test conditions <cit.> and vary quantization parameters to obtain different rate-distortion points. DPCC's compression performance is adjusted by modifying its bottleneck size.
We compare the reconstruction performance under different compression settings, with results shown in Fig. <ref>a and Fig. <ref>b. Although our model is more sensitive to compression than DPCC, it achieves a 4 to 6 dB higher reconstruction quality. The compression performance of our model is influenced by the patch centroid coordinates (S,3) we transmit, which are crucial to our patch-based approach. Any offset in these coordinates can significantly degrade performance, especially at low BPP. We also test DPCC, whose centroid coordinates are generated by prediction. For DPCC, the reconstruction quality improves by less than 1 dB even when we send the receiver precise centroid coordinates. The results show that our model outperforms DPCC, despite the need to transmit patch centroid coordinates.
In particular, we also investigate how SEPT <cit.> performs under two channel conditions. Targeting a different application, SEPT is a point cloud transmission scheme with good compression capability, while our scheme focuses more on high reconstruction quality. In certain scenarios, SEPT cannot simply increase the transmission volume to significantly enhance reconstruction.
To ensure fairness, we keep our model parameters fixed across different channel conditions. By adjusting the bottleneck size in DPCC under varying SNRs, we maintain comparable compression levels between DPCC and our model.
The experimental results under different channel conditions are shown in Fig. <ref>. Our model degrades by approximately 2 dB in severe channel conditions, demonstrating better robustness against channel noise compared to DPCC, due to superior feature extraction. Notably, the DPCC model's bottleneck size remains relatively constant, which explains its stable performance.
We also present the reconstructed point cloud visualizations of the three methods at SNR =5 and BPP ≈ 0.6 in Fig. <ref>, demonstrating that our model outperforms the others. G-PCC, in its standard configuration, produces fewer points on complex models due to the tree depth limitation.
§.§ Model Size and Encoding&Decoding Speed
In this section, we discuss the model size, encoding, and decoding speed of our point cloud semantic transmission system, comparing it with DPCC and G-PCC. Under the conditions of SNR = 5 and BPP = 0.6, we measure the average transmission time for test files 63^rd to 127^th to accurately reflect the system's speed.
In Table <ref>, we compare the speed and model size of various models. Our model processes each patch individually, resulting in a much smaller network scale and lighter weight compared to DPCC. Thus, our method achieves 56% faster processing times than DPCC, thanks to the network's simplicity. Meanwhile, although G-PCC is slightly faster than our model, it sacrifices reconstruction quality.
In summary, our system achieves high-quality point cloud reconstruction at a fast operation speed. Additionally, its lightweight architecture allows it to run efficiently on devices with limited resources and computing power.
§.§ Ablation Studies
In this section, we present an ablation study to demonstrate the superior performance of our joint network compared to a single network. Additionally, we compare the two-stage training model with the no pre-training model to evaluate our method's performance.
We first varied the global feature dimension D' obtained by the global semantic encoder, with S=64, d=8, and tested reconstruction performance at SNR =10. The results, shown in Fig. <ref>, indicate that incorporating global feature information significantly enhances reconstruction performance compared to excluding it. Optimal performance is achieved around D'=6. Notably, even small-scale global information can greatly improve reconstruction. However, the optimal D' must be balanced against the local feature size d. Excessive global information can degrade detail reconstruction, leading to performance fluctuations.
We compare the reconstruction differences between the two-stage training model and the no pre-training model under different SNR conditions, as shown in TABLE <ref>. The performance of the no pre-training model drops significantly at low SNR. These results highlight the necessity of our two-stage training approach for the system. When noise is directly involved in training, especially under severe channel conditions, the lightweight semantic information extraction system becomes highly susceptible to noise, leading to a reduced information extraction capability.
§ CONCLUSION
In this paper, we proposed a point cloud transmission system designed for high reconstruction quality. The system integrates point-based and graph-based neural networks to enhance the extraction of point cloud semantic information through multi-dimensional processing. A two-stage training strategy and layered transmission are employed to optimize performance for the physical channel environment. The point cloud is then reconstructed using a semantic decoder at the receiver. Simulation results validated the performance of the proposed SemCom method for point cloud transmission.
application Bello, S.A., Yu, S., Wang, C., Adam, J.M. and Li, J. “Review: deep learning on 3D point clouds.” Remote. Sensing. vol. 12, no. 11, pp. 1729, May. 2020.
air Y. Shao, C. Bian, L. Yang, Q. Yang, Z. Zhang, and D. Gunduz. “Point cloud in the air.” arXiv:2401.00658, 2024.
GPCCTEST K. Mammou.“PCC test model category 13 v12." ISO/IEC JTC 1 , 2019.
mesh Que, Z., Lu, G. and Xu, D. “Voxelcontext-net: An octree based framework for point cloud compression." in Proc. IEEE Conf. Comput. Vis. Pattern Recog(CVPR), Nashville, US, June. 2021, pp. 6042-6051.
3Dconv Rethage, D., Wald, J., Sturm, J., Navab, N. and Tombari, F., “Fully-convolutional point networks for large-scale point clouds." in Proc. Eur. Conf. Comput. Vis.(ECCV), Munich, Germany, Sept. 2018, pp. 596-611.
neural H. Ruan, Y. Shao, Q. Yang, L. Zhao, and D. Niyato, "Point Cloud Compression with Implicit Neural Representations: A Unified Framework." in Proc. IEEE/CIC Int. Conf. Commun. China (ICCC), Hangzhou, China, Aug.2024.
loT Z. Zhao, Z. Yang, C. Huang, L. Wei, Q. Yang, C. Zhong, W. Xu,
and Z. Zhang. “A joint communication and computation design for
distributed RISs assisted probabilistic semantic communication in IIoT.”
IEEE Internet Things J., vol. 11, no. 16, pp. 26568-26579, Aug. 2024.
twostage T. Han, Q. Yang, Z. Shi, S. He and Z. Zhang. "Semantic-Preserved Communication System for Highly Efficient Speech Transmission."IEEE J. Sel. Areas Commun, vol. 41, no. 1, pp. 245-259, Jan. 2023.
PCSC Liu, X., Liang, H., Bao, Z., Dong, C. and Xu,X. “Semantic Communications System with Model Division Multiple Access and Controllable Coding Rate for Point Cloud.” arXiv preprint arXiv:2307.06027, 2023.
SEPT Bian, C., Shao, Y. and Gunduz, D., “Wireless point cloud transmission." arXiv preprint arXiv:2306.08730, 2023.
multi Z. Zhang, Q. Yang, S. He, M. Sun and J. Chen, “Wireless Transmission of Images with the Assistance of Multi-level Semantic Information." in Proc. Int. Symp. Wirel. Commun. Syst.(ISWCS), Hangzhou, China, Oct.2022.
multipoint Z. Xie, J. Chen and B. Peng, “Point clouds learning with attention-based graph convolution networks." Neurocomputing, vol. 402, pp. 245-255, Aug. 2020.
PSNR “Common test conditions for point cloud compression.” ISO/IEC JTC1/SC29/WG11 MPEG output document N19084, Feb, 2020.
FPS Moenning, C. and Dodgson, N.A. “Fast marching farthest point sampling." University of Cambridge, Computer Laboratory, No. UCAM-CL-TR-562, 2003.
KNN Peterson, L.E.. “K-nearest neighbor." Scholarpedia, vol. 4, no. 2, pp. 1883, 2009.
PointNet++ Qi, C.R., Yi, L., Su, H. and Guibas, L.J., “Pointnet++: Deep hierarchical feature learning on point sets in a metric space." in Proc. Adv. neural inf. process. syst.(NeurIPS), Long Beach, US, Jan.2018.
MVTN Hamdi, A., AlZahrani, F., Giancola, S. and Ghanem, B.. “MVTN: Learning multi-view transformations for 3D understanding." arXiv preprint arXiv:2212.13462, 2022.
lossless transmission Shirvanimoghaddam, M., et al. “Short block-length codes for ultra-reliable low latency communications." IEEE Commun Mag, vol. 57, no. 2, pp. 130-137, Jul. 2018.
cd Fan, H., Su, H. and Guibas, L.J.. “A point set generation network for 3d object reconstruction from a single image." in Proc. IEEE Conf. Comput. Vis. Pattern Recog(CVPR), Honolulu, US, Jul.2017, pp. 605-613.
DPCC Zhang, J., Liu, G., Ding, D. and Ma, Z.. “Transformer and upsampling-based point cloud compression." in Proc. 1st Int. Workshop Adv. Point Cloud Compress. Process. Anal., New York, US, Oct. 2022, pp. 33-39.
|
http://arxiv.org/abs/2409.02364v1 | 20240904012156 | Examining Caregiving Roles to Differentiate the Effects of Using a Mobile App for Community Oversight for Privacy and Security | [
"Mamtaj Akter",
"Jess Kropczynski",
"Heather Lipford",
"Pamela Wisniewski"
] | cs.HC | [
"cs.HC"
] |
Examining Caregiving Roles to Differentiate the Effects of Using a Mobile App for Community Oversight for Privacy and Security
0000-0002-5692-9252
New York Institute of Technology, USA
0000-0002-7458-6003
University of Cincinnati, USA
0000-0002-5261-0148
University of North Carolina, Charlotte, USA
0000-0002-6223-1029
Vanderbilt University, USA
§ ABSTRACT
We conducted a 4-week field study with 101 smartphone users who self-organized into 22 small groups of family, friends, and neighbors to use “CO-oPS,” a mobile app for co-managing mobile privacy and security. We differentiated between those who provided oversight (i.e., caregivers) and those who did not (i.e., caregivees) to examine differential effects on their experiences and behaviors while using CO-oPS. Caregivers reported higher power use, community trust, belonging, collective efficacy, and self-efficacy than caregivees. Both groups' self-efficacy and collective efficacy for mobile privacy and security increased after using CO-oPS. However, this increase was significantly stronger for caregivees.
Our research demonstrates how community-based approaches can benefit people who need additional help managing their digital privacy and security. We provide recommendations to support community-based oversight for managing privacy and security within communities of different roles and skills.
Pamela Wisniewski
=====================
§ INTRODUCTION
A recent Pew Research study <cit.> reported that most adults in the U.S. have significant knowledge gaps regarding their digital privacy and security. The report suggests a lack of understanding of how third-party entities access personal information online <cit.>, making users susceptible to privacy and security breaches. This raises the question: how might we close this knowledge gap? Previous studies have explored solutions involving one party overseeing another, such as organizations using Mobile Device Management systems to secure employee data <cit.> and parents monitoring teens through mobile online safety apps <cit.>. These studies suggested that collaborative approaches, rather than unilateral controls, can offer better privacy protection by fostering collective learning and discussion <cit.>. Crowdsourcing has also been proposed as a method to improve individuals' mobile privacy decisions, such as managing app permissions to safeguard personal information <cit.>, though it often lacks reliability <cit.>. An approach that has shown promise is to leverage the social influence of informal networks to infuse expertise and exert influence on privacy and security decisions <cit.>. For example, several studies have shown that people tend to trust and follow privacy advice from their trusted circles, often turning to friends and family for guidance on digital privacy and security topics <cit.>.
Therefore, networked privacy researchers have called for community based approaches to co-manage digital privacy and security, where individuals in a trusted community can help one another equally make individual and collective privacy and security decisions <cit.>. While this body of research confirms that people often rely on their social connections to inform their digital privacy and security decisions, it has yet to explore how these dynamics shift when relationships involve knowledge imbalances or power hierarchies, such as caring responsibilities for children or older adults. Prior studies <cit.> have shown how communities often consist of hierarchical (i.e. uneven) levels of expertise and knowledge. Individuals with more expertise often take the role of “caregivers” to provide support and guidance to others who are less tech-savvy <cit.>. Other individuals in the community are at the receiving end of the privacy and security support <cit.>, and are thus considered “caregivees". However, community-based collaborative approaches may not take into account these unequal levels of expertise, as they have been primarily designed to support community members equally, with individuals giving and receiving feedback. Recently, Akter et al. <cit.> conducted a field study using a community oversight app that allowed individuals in a community to help each other equally and found that users, in general, benefited from using the app. However, their work did not examine the presence or impact of potentially unequal roles within communities.
This paper aims to address this gap by investigating: 1) how caregivees vs. caregivers differ, 2) how mechanisms designed for collaborative community-based privacy and security support may affect caregivers and caregivees differently, and 3) whether there are distinct differences in privacy behaviors between these two groups. To do this, we developed and deployed a community oversight app, called "CO-oPS," that allows people in a trusted community to review one another's mobile privacy and security decisions and provide advice and feedback on them. Our primary goal is to investigate how differently caregivers, who provide oversight, and caregivees, who receive the oversight, would be impacted by using the app. By providing insights into the varying effects of the CO-oPS app on these two unequal roles, we expect to gain generalizable knowledge that helps researchers and designers better understand the phenomenon of privacy and security caregiving, how privacy-based interventions may affect these groups differently, and how to redesign future privacy-based interventions tailored to the unique needs of caregivers and caregivees. As such, we seek to answer the following high-level research questions:
* RQ1: How do the characteristics of Caregivers versus Caregivees differ?
* RQ2: How do the perceived outcomes of using the CO-oPS app differ based on Caregiving status?
* RQ3: How do the privacy behaviors of Caregivers and Caregivees differ when using the CO-oPS app?
To answer these research questions, we conducted a field study with 22 self-organized groups of 2-6 smartphone users. First, participants within each group reported their pre-existing caregiving relationships by identifying the individual(s) in their community whom they go to for advice and questions regarding their mobile privacy and security; we labeled these individuals as caregivers. The rest of the community members were labeled as caregivees. We then had our communities of caregivers and caregivees use the CO-oPS app for four weeks, complete pre- and post-study surveys, and participate in optional follow-up interviews. To answer RQ1, we first examined how the caregivers and caregivees differ in terms of their demographic information and power usage - an individual's propensity to maximize the use of technology to its fullest <cit.>. For RQ2, we measured how our caregivers' and caregivees' perceptions of community trust <cit.>, community belonging <cit.>, community collective efficacy <cit.>, and self-efficacy <cit.> differed and changed after using the CO-oPS app. For RQ3, we measured the between-group differences in the app activities that caregivers and caregivees performed using the CO-oPS app. We also qualitatively analyzed their interview transcripts to further investigate how their perceptions towards the use of the CO-oPS app differed when they provided or received caregiving.
Overall, we found that caregivers reported higher power usage compared to caregivees, with no significant differences based on demographic characteristics (RQ1). For RQ2, caregivers showed higher levels across all pre-study measures—community trust, community belonging, community collective efficacy, and self-efficacy—compared to caregivees. Both groups experienced significant increases in self-efficacy and community collective efficacy in managing mobile privacy and security after using the CO-oPS app, with caregivees showing a significantly stronger increase. For RQ3, tensions arose around the privacy feature of the CO-oPS app, as caregivees expressed concerns about app usage privacy and thus hid a significantly greater number of installed apps, while caregivers viewed this feature as counterproductive to community oversight.
In summary, our study highlights the importance of differentiating caregiving roles in communities for collaborative mobile privacy and security management. It uniquely contributes to networked privacy research by examining collaborative mobile privacy and security management in the context of caregiving in communities to support individuals who need additional help. Specifically, our research: 1) Untangles the behavioral and perceptual differences between caregivers and caregivees when using a community oversight app; 2) Provides empirical evidence of the potential for community oversight to improve caregivees' decisions on mobile privacy and security; and 3) Offers design recommendations for features aimed at supporting communities in providing and receiving caregiving tailored to the specific needs of individuals.
§ BACKGROUND
Community-based Approaches for Privacy and Security
A Pew Research study <cit.> showed that 77% of U.S. smartphone users have reported downloading different third party mobile apps on their phones. These third party apps access users' personal and sensitive information <cit.>, often without users' informed consent, as Reardon et al. reported in <cit.>. However, another Pew Research study reported that a majority of U.S. adults have significant knowledge gaps and lack understanding of how to control and manage their digital privacy and security <cit.>. Users even find it difficult to understand the language of app permission requests that third party apps prompt before accessing users' information <cit.> and, therefore, they struggle to decide whether to grant an app permission or not <cit.>. To help individuals with their digital privacy and security, privacy researchers have examined crowdsource-based approaches <cit.> that leveraged the knowledge and information of other users to provide recommendations to users so that they can make informed decisions about their app permissions. For example, Zhang et al. <cit.> proposed a social nudge mechanism that used the idea of visibility of social norms and allowed users to make app permission decisions based on the percentage of other users who approved different data permissions for each installed app. Through an online experiment, they evaluated this interface and found that users mostly perceived this social nudge as a social norm or collective expectation of information sharing, and users tended to approve their apps' data use when they saw that more users accepted those apps' permissions.
While these crowdsource-based solutions help individuals by raising their awareness about other users' mobile privacy practices, they give little consideration to the trustworthiness of the information. Individuals might benefit more if the information comes from expert users who have expertise in mobile app privacy and security. Hence, Rashidi et al. <cit.> incorporated this idea of "expert user" into their permission control framework titled "DroidNet" which provided users recommendations based on the decisions made by the majority of expert users in the network.
However, a large body of research work also demonstrated that individuals learn and change their digital privacy and security behaviors when they become aware of their close trusted circle's privacy and security practices. For example, Das et al.'s studies <cit.> demonstrated the effectiveness of "social proof" - i.e. being able to view how many friends in their social network use a specific security feature - and showed that individuals are more influenced to adopt a privacy and security feature when they witness adoption by others. Redmiles et al. also found that people trust and adopt privacy advice when it comes from their trusted close circles, e.g., family members, friends, and co-workers <cit.>. In recent work, Mendel and Toch <cit.> conducted a field experiment to evaluate a social support mechanism and found that users relied significantly more on the advice of close connections to learn about mobile security than community volunteers.
Identifying the importance of social support in digital privacy and security, networked privacy researchers have investigated collaborative approaches. For instance, Wan et al. <cit.> designed a mobile app, titled AppMoD, to help individuals delegate their mobile privacy and security questions to an advisor - someone in their trusted social network. The advisor had the ability to make app permission decisions on behalf of the user. Chouhan et al. <cit.> proposed a community oversight framework to allow individuals in a trusted community, e.g., friends, family members, and co-workers, to help each other make their mobile privacy and security decisions together as a team.
Caregiving for Security and Privacy
Communities can be heterogeneous when it comes to knowledge, behavior, skills, and roles in mobile privacy and security. Previous work examined how individuals in a community can be different in privacy and security knowledge, as well as take different roles in providing support. For example, Akter et al.'s studies <cit.> revealed that although parents tended to take the role of supervising their families' mobile online safety, privacy, and security, tech-savvy teens showed more concerns about the permissions granted on their parents' phones, while parents were more focused on the kinds of apps their teens used. Researchers also examined how different age groups in communities vary in their decision-making about digital privacy and security. For example, Huang and Bashir <cit.> identified that older adults, e.g., 65 years or older, tend to take fewer privacy protections than their younger peers, e.g., 55 to 64 years. However, in a more recent study, Anaraky et al. <cit.> compared privacy decision making of young adults and older adults and found that young adults were more likely to disclose information based on their trust in the apps, while older adults, being careful, considered the pros and cons of each information disclosure. This body of research demonstrates that in communities, there may be some individuals who are more knowledgeable in making mobile privacy decisions and/or providing support to others (e.g., caregivers), and some might lack the knowledge and seek support from others (e.g., caregivees). Kropczynski et al. previously explored this concept of caregiving in the context of providing and receiving technological support within close, trusted circles, such as friends and family <cit.>. Our study extends this concept to examine caregiving in the management of mobile privacy and security.
The core idea behind collaborative community-based approaches <cit.> is to empower individuals within trusted communities to practice social support <cit.>, enabling them to collaboratively assist one another in making both individual and collective privacy and security decisions. However, as suggested by Akter et al. <cit.>, these mechanisms are most effective when there are disparities in knowledge and expertise within the community, allowing those in need of support with mobile privacy and security decisions to benefit from the guidance of more knowledgeable community members. Despite this, the collaborative community-based mechanisms in the existing literature have not adequately considered the imbalance of expertise, caregiving roles, and the varying needs and perspectives <cit.> that accompany them. Therefore, the objective of our study was to investigate how these approaches, which treat all community members as equals, may or may not benefit caregivees who require additional support, as well as how they help caregivers support others. To examine these, we developed a collaborative community-based mobile privacy and security management app, CO-oPS, and conducted a study involving 22 groups of caregivers and caregivees who used the app within their groups. In the following section, we present a design overview of the CO-oPS app.
§ DESIGN OF THE CO-OPS APP
We developed the Android CO-oPS app based on the model of community oversight for privacy and security proposed by Chouhan et al. in <cit.>. This app allows individuals in a trusted community, e.g., friends, families, and co-workers, to review one another's apps installed and permissions granted, and provide or seek advice from one another, providing all community members the same interface and functionalities. Our app includes three main categories of features: 1) Co-monitoring, 2) Communication, and 3) Privacy.
Co-monitoring Features: These features allow users to view the list of community members (People page, <ref>a). Here, they can explore the apps installed on each member's device through the Discovery page (<ref>b). For any installed app within the community, users have the ability to review the permissions granted or denied to that specific app on the Permission page (<ref>c). This feature enables community members to assess each other's choices regarding mobile privacy and security, facilitating joint learning and feedback. The Permission page displays a comprehensive list of permissions granted or denied to a particular app, alongside the number of community members who have made the same choices. Additionally, it offers users a convenient "SETTINGS" link, redirecting them to Android Settings, where they can adjust the permissions of their own installed apps.
Communication Features: The communication features keep users connected regarding one another's decisions, questions, and feedback. Here, users can stay updated on the permission changes made in the community, receive weekly tips, and engage in open discussion or private messaging. For instance, when permission changes are made using the above "SETTINGS" link, the app automatically generates a post on the Community Feed, as shown in <ref>d, to inform the entire community. The Community feed also fosters open discussion among community members. Moreover, this feed provides weekly tips to educate users on safe app usage and permissions. For example, one of the weekly tips was “Pay careful attention to the app permission prompts. If the permission is not required for the app to function, consider denying." Lastly, the People page (<ref>a) allows one-to-one messaging for asking questions or offering guidance.
Privacy Feature: This feature provides users with the option to control the visibility of their installed apps (Discovery page, <ref>b), ensuring personal privacy regarding their app usage within the community. By allowing users to hide specific apps, they can selectively choose which ones are visible to others. When an app is hidden on a community member's device, it becomes inaccessible for review by other community members on the Discovery page. Moreover, the changes made to the permissions of these hidden apps are not posted on the community feed page, further safeguarding the privacy of users' app-related activities.
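To make the oversight mechanics above concrete, the following is a minimal, hypothetical sketch in Python (for illustration only; the actual CO-oPS app is an Android application, and none of these class or function names come from its codebase). It models the two rules described in this section: hidden apps are excluded from community review and from the feed, and the Permission page tallies how many community members granted or denied a given permission.

from dataclasses import dataclass, field

@dataclass
class InstalledApp:
    package: str              # e.g., "com.example.social" (placeholder package name)
    permissions: dict         # permission name -> True (granted) / False (denied)
    hidden: bool = False      # user chose to hide this app from the community

@dataclass
class Member:
    user_id: str
    apps: list = field(default_factory=list)   # list of InstalledApp

def visible_apps(member):
    # Apps of this member that other community members may review (Privacy feature).
    return [a for a in member.apps if not a.hidden]

def permission_tally(community, package, permission):
    # Count how many members granted vs. denied `permission` for `package`,
    # mirroring the per-permission counts shown on the Permission page.
    tally = {"granted": 0, "denied": 0}
    for member in community:
        for app in visible_apps(member):
            if app.package == package and permission in app.permissions:
                tally["granted" if app.permissions[permission] else "denied"] += 1
    return tally

def feed_post_on_change(member, app, permission, granted):
    # Permission changes to hidden apps are not broadcast to the Community Feed.
    if app.hidden:
        return None
    verb = "granted" if granted else "denied"
    return f"{member.user_id} {verb} {permission} for {app.package}"

For example, permission_tally(community, "com.example.social", "CAMERA") would return the granted/denied counts a reviewer sees for that app, and feed_post_on_change returns None for a hidden app, so no post reaches the feed.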
§ METHODS
The overall goal of our study is to examine the key differences in privacy and security behaviors between the individuals who provide oversight and those who receive when using a community oversight app. To achieve these goals, we recruited 22 small self-organized groups of people (2 to 6 individuals), who knew each other. In total, 101 participants were recruited across these groups, starting with primary contacts who then invited others within their social circles to join. Participants installed and used the CO-oPS Android app for four weeks with their group to exchange support and guidance on their privacy decisions. Participants also completed pre- and post-study surveys and were invited to participate in optional follow-up interviews. The study received approval from the Institutional Review Boards (IRBs) of the universities involved.
Study Procedures
Pre and Post Study Surveys: Our participants completed a Qualtrics survey before and after the four-week long field study. In the pre-study survey, we first asked participants to describe the interpersonal relationships (e.g., friend, family member, neighbors, etc.) they had with each member of their community, and the proximity to the others' residences with respect to their own. To distinguish the caregivers and caregivees within the groups, we asked our participants about their pre-existing mobile privacy and security caregiving relationships. To this end, we asked them to report who in their community they go to for questions regarding their mobile privacy and security decisions, e.g., whether an app is safe to install or a permission is safe to allow for an app. We did not inform our participants that we would differentiate them as caregivers and caregivees as we wanted to avoid potential conflicts within their groups and to minimize bias in their app interactions and mobile privacy behaviors. Next, we collected participants' demographic information, i.e., age, gender, ethnicity and education level. In both pre- and post-study surveys, we also measured some prevalidated and newly created constructs.
Weekly App Tasks: At the beginning of the field study, community members were asked to install the CO-oPS app on their phones and set up their user profiles. A key step in this setup process involved the app launching the Discovery page (<ref>b) with the prompt: "Select the apps that you want to hide from your community," allowing participants to choose which of their apps would be hidden from others. This ensured that participants could hide their installed apps before they became visible to their group members. To this end, we allowed our participants to decide which apps they wanted to hide from their group, as we did not want to bias their choices. Each week, our participants received a list of tasks on the home page of the CO-oPS app, where they could check off each task after completion. However, we did not follow up with participants about incomplete tasks, as we wanted them to use the CO-oPS app as they saw fit to provide or receive privacy caregiving from one another. The weekly tasks are presented in <ref>. Upon completing the week-4 tasks and post-study survey, participants were each compensated with a $40 Amazon gift card. N=3 participants discontinued after the second week and were compensated with a $20 Amazon gift card. However, we discarded all data collected from those participants who stopped participating.
Follow-up Interviews: At the end of the field study, we invited all participants to optional one-on-one Zoom interviews which allowed us to further explore how they perceived the CO-oPS app features for providing or receiving privacy and security support to/from their communities. N=51 participants volunteered for follow-up interviews, each of whom were compensated with a $10 Amazon gift card. Appendix-B presents the sample interview questions we asked during the interviews. The interview sessions took 40 to 70 minutes to complete, and were audio/video recorded.
Survey Constructs and Hypotheses
In the pre-study survey, we measured participants' perceived power usage <cit.>, community trust <cit.>, community belonging <cit.>, community collective efficacy <cit.>, and self-efficacy <cit.>. All these measures, except power usage, were also later presented in the post-study survey. Our study outcomes were measured on the basis of these constructs. The scale items of these constructs were measured on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). Below we describe these constructs and their corresponding hypotheses (<ref>).
Power Usage:
In the pre-study survey, we measured participants' power usage <cit.>, which is defined as their perception of whether they are comfortable with technology in general and are likely to explore all possible features of the technologies they use. Kropczynski et al. <cit.> demonstrated that the caregivers who support their community in general technology usage had a higher level of power usage compared to the caregivees. Therefore, we hypothesize that:
H1: Caregivers will report higher levels of power usage than caregivees.
Community Trust: In both the pre- and post-study surveys, we measured our participants' perception of community trust. In this study, we created this construct based on the community oversight model framework that Chouhan et al. proposed in <cit.>. We define community trust as an individual's perception that their community can be trusted to keep their personal information (e.g., apps installed) private and to care for them by providing mobile privacy and security advice and guidance. The scale items are provided in Appendix A. Since our caregivees already received mobile privacy and security support from caregivers in their community before participating in the study, we expect caregivees will have more trust in their communities with their app usage information and will rely on others to receive advice and feedback on their mobile privacy and security. Moreover, as both caregivers and caregivees used the CO-oPS app to help each other with their mobile privacy and security decisions, we expect their community trust to increase at the end of the study.
Therefore, we hypothesize a main effect of caregiving status prior to using the CO-oPS app, a main effect of the CO-oPS app intervention from pre- to post-study, and an interaction effect between the caregiving status and the CO-oPS app intervention:
H2: a) Prior to using CO-oPS, caregivees will report higher levels of community trust than caregivers,
b) Both caregivers and caregivees will report higher levels of community trust after using the CO-oPS app, and
c) Caregiving status will moderate the effect of using the CO-oPS app, such that the increase in community trust after using the app will be stronger for caregivees than for caregivers.
Community Belonging scale <cit.> measures an individual's perception of their importance and the degree to which their opinions are valued within their community. In this study, we utilized the pre-validated version of this scale that was used by Kropczynski et al. in <cit.>. They showed evidence that community belonging did not differ between tech caregivers and caregivees. This is because both caregivees and caregivers needed higher levels of belonging in their community in order to provide or receive technology support from one another. Therefore, our hypothesis for the main effect of caregiving status before using the CO-oPS app is the null hypothesis. However, as our caregivers and caregivees were connected through the CO-oPS app throughout the study duration and worked together to help one another on their mobile privacy and security decisions, we expect their sense of belonging may increase at the end of the study. Also, since caregivees were supported more by their communities through the CO-oPS app, we expect that they will perceive more belonging after the study than the caregivers. Therefore our hypotheses are:
H3: a) Prior to using CO-oPS, there will be no significant difference in community belonging between caregivers and caregivees,
b) Both caregivers and caregivees will report higher levels of community belonging after using the CO-oPS app, and
c) Caregiving status will moderate the effect of using the CO-oPS app, such that the increase in community belonging after using the app will be stronger for caregivees than for caregivers.
Community Collective Efficacy was defined by Carroll et al. <cit.> as the capacity of a group or community for collaboratively performing a shared task. In our study, we utilized the pre-validated version of this construct which was used in Kropczynski et al.'s work <cit.>. The community collective efficacy scale assessed how individuals perceived their community's capacity to work together as a team on mobile privacy and security. Prior studies in the space of technology support <cit.> showed evidence that caregivers' community collective efficacy was not different than the caregivees. Our study anticipates that the use of the CO-oPS app by caregivers and caregivees to exchange support and guidance on mobile privacy and security will enhance both groups' collective efficacy in managing these issues within their community. Additionally, we expect that our caregivees would receive more help and guidance from the caregivers in making their mobile privacy and security decisions through using the CO-oPS app. Therefore, caregivees' community collective efficacy will become stronger after using the CO-oPS app than the caregivers.
H4: a) Prior to using CO-oPS, there will be no significant difference in community collective efficacy between caregivers and caregivees,
b) Both caregivers and caregivees will report higher levels of community collective efficacy after using the CO-oPS app, and
c) Caregiving status will moderate the effect of using the CO-oPS app, such that the increase in community collective efficacy after using the app will be stronger for caregivees than for caregivers.
Self-Efficacy was originally defined by Bandura <cit.> as an individual's perceived capacity to perform a task. In our study, we utilized the pre-validated version of this construct, which was used in Kropczynski et al.'s work <cit.>. Their self-efficacy scale measured an individual's perceived ability to manage their own privacy and security. Their work showed evidence that technology caregivers have higher self-efficacy than caregivees.
In our study, since caregivers provided privacy and security support to their caregivees prior to participating in the study, we expect that our caregivers will also have higher self-efficacy for mobile privacy and security than the caregivees. Furthermore, since our caregivees used the CO-oPS app to learn from their community members' mobile privacy and security behavior and received feedback and guidance from others on their own mobile privacy practices, we expect that our caregivees' self-efficacy would become stronger after using the CO-oPS app than the caregivers. Therefore, we hypothesized that:
H5: a) Prior to using CO-oPS, caregivers will report higher levels of self-efficacy than caregivees,
b) Both caregivers and caregivees will report higher levels of self-efficacy after using the CO-oPS app, and
c) Caregiving status will moderate the effect of using the CO-oPS app, such that the increase in self-efficacy after using the app will be stronger for caregivees than for caregivers.
Data Analysis Approach
In this section, we first describe how we conducted our statistical analyses on the survey data to answer our RQ1 and RQ2 and test the related hypotheses. Then, we describe the quantitative and qualitative analyses approaches in the CO-oPS app usage logs and the interview transcripts to answer our RQ3.
Exploring the Personal Characteristics of Caregivers and Caregivees (RQ1):
To investigate the differences in personal characteristics of our caregivers and caregivees, we analyzed their demographic information and the power usage scale that they reported during the pre-study survey. We first categorized our participants as caregivers or caregivees. Individuals whom at least one of their community members reported as the primary person they go to with questions regarding mobile privacy and security were labeled as caregivers. The rest of the community members, whom no one reported as the person they seek mobile privacy and security support from, were labeled as caregivees. Next, we investigated whether there were demographic differences between the caregivers and caregivees using chi-square tests. We also measured the difference between our caregivers' and caregivees' power usage. We first used Cronbach's alpha <cit.> to verify the internal consistency of this construct, which was 0.85. We then conducted a Shapiro–Wilk test and found that the sum scores of these two groups' power usage were not normally distributed (ps<.01). Therefore, we performed the non-parametric Mann-Whitney U test <cit.> between the caregivers' and caregivees' responses to the power usage scale.
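A minimal sketch of this RQ1 pipeline is shown below (Python, using pandas, pingouin, and SciPy as stand-ins; the paper does not state which statistical software was used). The toy DataFrame, column names, and item labels (role, gender, pu1-pu6) are placeholders, not the study's actual codebook or data.

import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

# Toy survey data standing in for the real pre-study responses.
rng = np.random.default_rng(0)
n = 20
survey = pd.DataFrame({
    "role": ["caregiver"] * 10 + ["caregivee"] * 10,
    "gender": rng.choice(["woman", "man"], size=n),
    **{f"pu{i}": rng.integers(1, 6, size=n) for i in range(1, 7)},  # 1-5 Likert items
})
pu_items = [c for c in survey.columns if c.startswith("pu")]

# Internal consistency of the power-usage scale (the reported alpha was 0.85).
alpha, ci = pg.cronbach_alpha(data=survey[pu_items])

# Chi-square test for demographic differences between caregivers and caregivees.
contingency = pd.crosstab(survey["role"], survey["gender"])
chi2, p_demo, dof, expected = stats.chi2_contingency(contingency)

# Sum score for power usage, normality check per group, then Mann-Whitney U.
survey["power_usage"] = survey[pu_items].sum(axis=1)
caregivers = survey.loc[survey["role"] == "caregiver", "power_usage"]
caregivees = survey.loc[survey["role"] == "caregivee", "power_usage"]
print(stats.shapiro(caregivers), stats.shapiro(caregivees))   # both groups were non-normal in the study
u_stat, p_power = stats.mannwhitneyu(caregivers, caregivees, alternative="two-sided")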
Exploring the Difference in Perceived Outcomes between Caregivers and Caregivees (RQ2):
To explore differences in perceptual outcomes, e.g., community trust, community belonging, community collective efficacy, and self-efficacy between caregivers and caregivees, we analyzed the pre- and post- study survey responses of our caregivers and caregivees. We first verified the construct validity of these measures using Cronbach's alpha <cit.> and created sum scores to represent each construct. All Cronbach's alphas were greater than 0.85, suggesting a good internal consistency of our measures. We then conducted Shapiro–Wilk tests to identify whether the sum scores of the constructs were normally distributed or not. Since these were not normally distributed, we used the Aligned Rank Transform (ART) tool to perform non-parametric 2 x 2 ANOVAs <cit.> and post-hoc contrast (ART-C) tests <cit.> as we anticipated a possible interaction effect between caregiving status and survey completion time. We conducted a 2 x 2 repeated measures ANOVA with mixed design to evaluate how using CO-oPS with the community throughout the study duration affects caregivers' and caregivees' community trust, community belonging, community collective efficacy, and self-efficacy. <ref> presents all these statistical tests for interaction and main effects. The means and standard deviations for all measures are presented in <ref>.
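The 2 x 2 model structure for RQ2 can be sketched as follows (Python with pingouin, assumed here purely for illustration). Note that the study used the nonparametric Aligned Rank Transform (ART) and ART-C contrasts because the sum scores were not normally distributed; the parametric mixed ANOVA below is only an analogue that shows the design (between-subjects factor = caregiving status, within-subjects factor = pre/post), and all column names are hypothetical.

import numpy as np
import pandas as pd
import pingouin as pg

# Toy long-format data: one row per participant per time point.
rng = np.random.default_rng(1)
pids = [f"P{i:02d}" for i in range(40)]
long_df = pd.DataFrame({
    "pid": pids * 2,
    "role": (["caregiver"] * 20 + ["caregivee"] * 20) * 2,
    "time": ["pre"] * 40 + ["post"] * 40,
    "score": rng.integers(10, 26, size=80),    # e.g., a construct sum score such as self-efficacy
})

# 2 x 2 mixed-design ANOVA: main effects of role and time, plus their interaction.
aov = pg.mixed_anova(data=long_df, dv="score", within="time",
                     subject="pid", between="role")
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])

# Post-hoc pairwise contrasts (the study used ART-C; shown here only as an analogue).
posthoc = pg.pairwise_tests(data=long_df, dv="score", within="time",
                            subject="pid", between="role")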
Exploring the Difference in Privacy & Security Behaviors between Caregivers and Caregivees (RQ3):
To examine how our caregivers' mobile privacy and security behaviors were different than the caregivees, we first analyzed our caregivers and caregivees' CO-oPS app usage data. Due to some technical issues with the CO-oPS logging feature, we could not log the first seven communities' app activities and so, the app usage data was collected from only the last fifteen communities (N = 35 caregivers and N=33 caregivees).
We first prepared our dataset by counting each participant's specific app activities, e.g., count of apps reviewed, messages sent, permissions denied, and apps hidden. We then conducted Shapiro–Wilk tests and found that the sum scores of these app activities were not normally distributed (ps<.01). We then performed the non-parametric Mann-Whitney U test to identify significant differences between the frequencies of the app activities performed by the caregivers and caregivees (<ref>).
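A sketch of this log analysis is shown below (Python; the event names, column labels, and toy data are hypothetical, since the CO-oPS logging schema is not published here). It counts each participant's activities of each type and then compares caregivers and caregivees with Mann-Whitney U tests.

import numpy as np
import pandas as pd
from scipy import stats

# Toy event log standing in for the CO-oPS usage data: one row per logged action.
rng = np.random.default_rng(2)
events = ["app_reviewed", "message_sent", "permission_denied", "app_hidden"]
logs = pd.DataFrame({
    "pid": rng.choice([f"P{i:02d}" for i in range(68)], size=600),
    "event": rng.choice(events, size=600),
})
roles = {f"P{i:02d}": ("caregiver" if i < 35 else "caregivee") for i in range(68)}
logs["role"] = logs["pid"].map(roles)

# One row per participant, one column per activity type.
counts = (logs.groupby(["pid", "role", "event"]).size()
              .unstack(fill_value=0)
              .reset_index())

# Compare caregivers vs. caregivees on each activity count.
for activity in events:
    a = counts.loc[counts["role"] == "caregiver", activity]
    b = counts.loc[counts["role"] == "caregivee", activity]
    u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(activity, round(p, 3))   # in the study, only the number of apps hidden differed significantly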
We also qualitatively coded the app activity data to examine the types of messages sent (e.g., sought or provided advice, messages regarding apps or permissions), types of apps reviewed, apps uninstalled and hidden, and the types of app permissions caregivers and caregivees changed. Lastly, to further explore how differently our caregivers and caregivees perceived these app activities they could perform with the CO-oPS app, we conducted a supplemental qualitative analysis, where we identified themes related to the app activities, but also focused on the differences between our caregivers and caregivees. N=28 caregivers and N=23 caregivees participated in the follow-up interviews. Appendix D presents the codes and illustrative quotations for each app activity theme. Participants' quotations are identified by their IDs (e.g., C01P1, ..., C22P6), caregiving status, age, and gender information.
Participant Recruitment and Demographics
We recruited a total of 101 participants that were associated with 22 different groups of caregivers and caregivees. A post hoc power analysis using G*Power <cit.> was used to determine the power of a repeated measures within-between interaction ANOVA F test and found that the power of the test was 99% for the effect size (Cohen’s d = 0.19, α = 0.05 and total sample size = 101). Participants were recruited through recruitment emails, phone calls, word of mouth, and social media. For each community, we first recruited the initial contact, who completed a screening survey that verified their eligibility based on whether they: 1) reside in the United States, 2) are 13 years or older, 3) have an Android smartphone, 4) are willing to install and use the CO-oPS app, and 5) can participate in this study with at least two other people they knew.
The screening survey briefly introduced the study and the CO-oPS app. After being screened for eligibility, participants were provided a consent form detailing the study, described as an effort to evaluate the use of the CO-oPS app among their self-formed group and understand how community members can help each other with mobile privacy and security decisions.
For the teen participants, we required one of their parents or legal guardians to complete the screening survey. If eligible, the parent or guardian then provided consent for their child to participate in the study. The teens themselves received an assent form to agree or deny to participate in our study. On both the consent and assent form, we highlighted the potential benefits of participating in the study as increased awareness of how group members manage app privacy, which might inspire reflection on their personal permission decisions. We also outlined what data will be collected by the CO-oPS app and shared with their group members, and participation could be discontinued at any time. Lastly, we stated that all the data collected would be securely stored in a password-protected Dropbox folder, accessible only to the researchers listed in the IRB. Participants were then asked to forward the screening survey to their social contacts whom they would like to include as their community members in this study.
<ref> shows community IDs, size, caregiver count, composition (i.e., family, friends, neighbors, coworkers), and proximity of their residences in relation to each other in the community (i.e., same house, neighborhood, town). The size of the communities ranged from 2 to 6, where the majority (68%, N=15) of the communities had five or more members. Most of the communities (64%, N=14) had two to five caregivers, and only 36% (N=8) of the communities had one caregiver. Most communities (73%, N=16) were composed of families together with other relationships, e.g., friends, neighbors, co-workers. Among these communities, the most common family dyadic relationships were partners (50%, N=11), followed by parent-teen (23%, N=5), parent-adult child (14%, N=3), siblings (14%, N=3), and extended families (14%, N=3). The rest of the communities (N=6, 27%) consisted mainly of friends. Finally, in terms of proximity types, almost all communities (95%, N=21) had most members living in the same house (73%, N=16) or same neighborhood (59%, N=13), but also included members from the same town or out of town.
§ RESULTS
Personal Characteristics of Caregivers and Caregivees
We characterize our caregivers and caregivees based on their demographic characteristics and level of power usage to identify the ways in which these two groups differ. Out of the 101 participants, N=51 (50%) were classified as caregivers, and N=50 (50%) as caregivees. <ref> illustrates the gender, age groups, ethnicity, and education of our caregivers and caregivees. Overall, we found no significant differences between the caregivers and caregivees based on any of these demographic characteristics.
Next, we examine power usage and find that the caregivers (mean = 3.87, SD = 0.50) reported significantly higher levels of power usage than the caregivees (mean = 3.61, SD = 0.51; p = 0.04). This suggests that our caregivers had more expertise in technology compared to the caregivees. Therefore, our hypothesis H1 is supported.
Perceived Outcomes of Caregivers and Caregivees
<ref> shows the statistical tests for interaction and main effects of caregiving status and pre-post study. The means and standard deviations of the caregivers and caregivees' pre- and post-study measures are provided in <ref>.
Community Trust: For community trust, we found a significant main effect for both caregiving status (F(1, 99) = 38.21, p < 0.001) and pre-post study, (F(1, 99) = 13.77, p < 0.001), with no statistically significant interaction (F(1, 99) = 3.51, p < 0.06). This suggests that community trust was higher for caregivers than caregivees prior to using the CO-oPS app (opposite of our hypothesis H2a). Both the caregivers and caregivees reported higher levels of community trust after using the CO-oPS app, supporting our hypothesis H2b. Lastly, the caregiving effect on community trust was not greater in the post-study than in the pre-study, indicating that the positive main effect of using the CO-oPS app on caregivees' community trust was not stronger than caregivers'. Therefore, our hypothesis H2c was not supported.
Community Belonging:
The pre-study community belonging was higher for caregivers than caregivees, with F(1,99) = 26.26, p < .001, suggesting that our null hypothesis H3a was not supported. However, the main effect of the pre-post study and the interaction effect were nonsignificant, suggesting that the increase in caregivees and caregivers' community belonging from pre-to-post study was not significant. Therefore, both hypotheses H3b and H3c were not supported.
Community Collective Efficacy:
For community collective efficacy, there was a significant main effect for caregiving status, F(1, 99) = 92.55, p < .001, indicating that the pre-study community collective efficacy was higher for caregivers compared to caregivees (H4a was not supported). The main effect of pre-post study for community collective efficacy also yielded an F ratio of F(1, 99) = 81.4, p < .001, with a significant interaction, F(1,99) = 18.87, p < .001, suggesting that it increased for both caregivers and caregivees from pre to post study, as illustrated in <ref>a, and the positive main effect of the use of the CO-oPS app (pre-post study) on community collective efficacy was also stronger for caregivees than for caregivers. So, hypotheses H4b and H4c were supported.
Self-Efficacy:
In terms of self-efficacy, we found a significant main effect for both caregiving status, F(1, 99) = 34.73, p < 0.001, and pre-post study, F(1,99) = 108.71, p < 0.001, with a statistically significant interaction, F(1,99) = 30.19, p < 0.001. This suggests that self-efficacy was higher for caregivers than caregivees before the study and increased for both caregivers and caregivees from pre-study to post-study, as shown in <ref>b. Also, the positive main effect of using the CO-oPS app (pre-post study) on self-efficacy was stronger for caregivees than for caregivers. This confirms that all three of our hypotheses H5a, H5b, and H5c are supported. <ref> summarizes the hypotheses tested from our statistical model.
Privacy & Security Behaviors of Caregivers and Caregivees
As described in our Methods section, we conducted both quantitative and qualitative analysis on participants' app usage logs and interview transcripts for our RQ3.
On average, caregivers spent 38 minutes on the CO-oPS app over the duration of the study, with individual usage ranging from 21 minutes to 1 hour and 31 minutes. In comparison, caregivees spent an average of 29 minutes using the app during the study, with individual usage ranging from 19 minutes to 58 minutes. Below we present the results for how the caregivees and caregivers are different based on the app activities they performed on the CO-oPS app and also, how they perceived these app activities differently to support them in providing or receiving mobile privacy and security care from their communities.
Apps Reviewed:
As shown in <ref>, while the median number of apps reviewed by caregivers was slightly higher, we found no significant difference (p = 0.689) between caregivers (median = 10, SD= 16.71) and caregivees (median = 7, SD= 27.43) based on the number of the apps they reviewed. When we further investigated who reviewed whose apps more (<ref>), we found that caregivees' apps were reviewed more compared to the caregivers'. For example, caregivers reviewed a total count of 141 apps of other caregivers, with an average of 5.6 apps, while reviewing almost twice as many apps from caregivees (total = 263, avg = 10.5). Similarly, caregivees reviewed 278 apps from other caregivees (avg = 13.2), almost twice as much as caregivers' apps reviewed (total = 144, avg = 6.9).
From our analysis of the interview data, we found that more caregivees felt that using the CO-oPS app with their groups helped them receive valuable feedback. A majority of caregivees mentioned that having other people review their apps helped them identify risky app permissions, which was made possible through the feedback they received from others. For example, C02P2, a 51-year-old male caregivee, said:
“It made them [community] see how I manage my apps and their data access. Like, I didn't give that much thought to my permissions, but they could see what I did and just tell me what I need to fix. This is great because you can be aware of the fishy permissions or the apps you have.” Most caregivees also felt that this feature gave them peace of mind that there was someone else in the community who would watch over their privacy and security. Caregivees frequently mentioned specific caregivers in their community on whom they would rely for mobile privacy and security advice. Having their caregivers included in the CO-oPS app who could keep an eye on their apps and permissions made caregivees less concerned.
Interestingly, caregivers felt that this feature that allowed every community member to review everyone else's apps and permissions helped distribute community responsibility. This is because people often use numerous apps on their phones, and reviewing each of their permissions can be tedious for one person. Here, a few caregivers also mentioned that not all people in a community have the same level of knowledge of recognizing the dangerous permissions, and therefore having multiple people review someone's apps would be beneficial. For instance, C19P6, 52-year-old female caregiver, said:
“
It's nice, because someone can have multiple feedback and use their judgement whether they keep the permission or not. Also, because some people in the community can not give good feedback, but others can. So, its good to have all people go through and inform.”
Messages Sent:
As shown in <ref>, we found no significant difference (p = 0.952) in the total count of messages sent by caregivers (median = 1, SD = 0.8) and caregivees (median = 1, SD = 0.97). However, both caregivers and caregivees sent slightly more messages to caregivees than to caregivers.
Qualitative analysis on the types of messages sent also gave us important insights (<ref>). For example, the majority of the messages sent by both caregivers and caregivees were about the permissions granted.
For instance, C17P3, 32-year-old male caregiver, sent this message to C17P4, 32-year-old female caregivee: "your Bofa camera permission is granted which you should denied access." Both caregivers and caregivees sent more messages to provide advice, compared to the count of messages sent to seek advice, and there were no differences between caregivers and caregivees in this regard. The qualitative analysis of our interview transcripts gave us further insights into how the caregivers and caregivees overall felt about the CO-oPS app's messaging feature that facilitated exchanging advice and guidance on their mobile privacy and security. Most caregivers mentioned that initiating discussions about the installed apps of others and granted permissions became easier with the CO-oPS app, even when community members are not close. They often mentioned that they would usually hesitate to start conversations about the apps installed on someone else's devices. But since the CO-oPS app allowed them to review others' apps and then communicate their opinion through the messaging feature, it addressed that hesitation. For example, a 28-year-old male caregiver, C06P6, said:
“Look, this is really an awkward topic, especially when your community is not your family. You can not just tell someone out of nowhere, hey your x app is accessing microphone. This app has so many features to communicate this awkward topic. So, telling someone about their personal apps and their settings is a bit less difficult with it." Caregivees, on the other hand, felt that the CO-oPS app made it easy to seek advice and guidance. Interestingly, most of these caregivees said that they either used other messaging tools, e.g., Whatsapp, Facebook Messenger, text message, etc., or just talked in person to ask questions because they utilized those tools for their daily communication or because they lived in same neighborhood or house. However, most of our caregivers still saw value in the in-app messaging feature, as it helped keep the conversation specific to the topic of mobile privacy and security. They often said that using other messaging tools would not help much as the advice given would be lost among their other general conversations. Therefore, they found the messaging feature useful in preserving the advice given on the apps or permissions.
Apps/Permissions Changed:
We now describe how our caregivers and caregivees' mobile privacy and security behaviors changed at the end of the study. We specifically looked into caregivers and caregivees' permission changes through the CO-oPS app and the changes in their installed app list (<ref>). As illustrated in <ref>, we found no significant differences between the caregivers and caregivees based on both the apps uninstalled (p=0.321) and permissions denied (p= 0.263). Qualitative analysis of the app log data revealed that caregivees mostly uninstalled gaming apps, workout, and dictionary apps, whereas the caregivers uninstalled finance, mobile payment, and shopping apps. Our caregivers mainly denied contacts and microphone permissions, while caregivees mainly denied storage and camera. However, both caregivers and caregivees showed a similar trend in denying location permissions.
During interviews, all caregivees said that comparing their own app permissions with others inspired them to take the initiative and change the permissions. To this end, they often mentioned the CO-oPS feature that allowed users to view the count of community members who allowed or denied the permissions of an app, which helped them learn whether the permissions are necessary for a specific app and then change the permission accordingly. For instance, C12P4, a 25-year-old male caregivee, said:
“I can see what kind of permissions they're getting to their apps and how many. I saw that some of my friends didn't get permission to Facebook, on their location or something. So I see that in my app, I already give the permission of location. So after seeing that, my other friends haven't given that permission, I removed my location permission from my Facebook. ” Caregivees also often felt that they learned how to change their app permissions. Here, caregivees often mentioned the CO-oPS app feature that allowed users to easily navigate to the Android settings to turn off/on the app permissions, which enabled them to change the permissions easily. Our caregivees also often said that learning about others' permission changes encouraged them to make the changes for their own. To this end, they mostly brought up the CO-oPS' community feed where they would get posts about the app permission changes made by their community members.
Apps Hidden:
The CO-oPS app allows users to hide apps from their community members that they do not wish others to know about. Interestingly, we found that our caregivees hid more apps (Median = 11, SD = 21.25) than the caregivers (median = 3, SD = 13.18), and the difference was statistically significant (p = 0.043).
In total, 21 caregivees hid N = 192 apps (68.8%), which was approximately double the number hidden by 13 caregivers (31.2%, M = 87). Upon further analysis of the types of apps hidden, we discovered that caregivees predominantly hid social media apps (38%, n = 73), including messaging and video apps such as Instagram, WhatsApp, Youtube, and TikTok, as well as online shopping apps (21%, n = 41) such as Amazon, Walmart, Macy's, Target, and Kohl's. In contrast, caregivers mostly hid gaming apps (32%, m = 28) like Pokemon Go, Wordscapes, Clash of Clans, and Candy Crush, and video streaming apps (25%, m = 22) such as Disney+, Netflix, Peacock, and HBO Max. Additionally, both caregivers (15%, m = 13) and caregivees (11%, n = 22) hid financial apps including CashApp, Venmo, Zelle, and various banking apps.
Our qualitative analysis of the interview data gave us insights to further unpack why caregivees hid their apps more than caregivers. In general, caregivees were more concerned about their privacy on their app usage than the caregivers. All caregivees mentioned that the CO-oPS app allowed their community members to review their personal apps that they did not feel comfortable about. Caregivees often mentioned that while they generally liked the idea that others can review their apps and provide feedback, they would still feel uncomfortable sharing their apps with some of their community members. They also often felt uncomfortable when reviewing others' mobile apps installed, as it was perceived as violating others' privacy. For instance, C09P2, a 23-year-old male caregivee, said:
“The nature of this app is showing what apps we have installed on all devices to everyone else. The nature of the app, also kind of decreases privacy for your own and for others. In sum, individually. But I guess the idea is that you have a group of people you have to have trust somewhat so you can get help. But still it kind of gives a discomfort.” Caregivees also often specifically mentioned that when the community consisted of mixed relationships, their concerns about privacy could be aggravated. For example, they would not want their friends to monitor their family members' apps or vice versa. For instance C15P2, a 33-year-old female caregivee, said:
“You don't want your friends to be in the same community with your parents. Because you don't want your parents to see some of your apps or your friends to see some of your other apps, right? So yeah, I guess that was my main concern. … I also feel like I don't think my friends should be seeing my parents' apps, either.” Caregivers, on the other hand, expressed concern about this app-hiding feature because caregivees hiding their apps would defeat the primary purpose of the CO-oPS app. Caregivers often brought up the importance of community members' personal privacy regarding their app usage, but they also felt that providing feedback would be challenging if a community member hid most of their apps. To this end, caregivers frequently gave examples of vulnerable members of their families who do not have much expertise in mobile privacy and security, e.g., children, parents, grandparents, etc.
§ DISCUSSION
In this study, we examined whether and how collaborative mechanisms for mobile privacy and security support caregiving relationships, and how caregiving status influences the outcomes of using the CO-oPS app. Overall, we found that caregivers and caregivees did indeed have different outcomes, exhibited different behaviors, and ultimately experienced the app in unique ways. In the following, we describe the implications of our findings in relation to previous work and provide recommendations for implementing community-based solutions that support community-based caregiving.
Power Use as Difference between Caregivers and Caregivees
Interestingly, we did not find significant differences between our caregivers and caregivees based on their demographic characteristics. Although this may partially be due to low sample sizes across some groups (e.g., teens and older adults), anecdotally, we also found that older adults and youth in our study acted as caregivers. These findings run counter to common narratives in research (cf. <cit.>), where older adults tend to seek support and younger generations provide the support to their families. Therefore, we give networked privacy and security researchers a word of caution not to unintentionally stereotype caregivers/caregivees based on age or other demographics.
On the other hand, we hypothesized and confirmed that caregiving status differed significantly by power usage (i.e., higher for caregivers). This finding is important in that it serves to validate that individuals' self-reported perceptions of who provides oversight aligned well with the level of power usage of these individuals, suggesting that caregiving for mobile privacy and security is related to the competence and desire to use technology to its fullest <cit.>. This aligns with previous work <cit.> which found that users often referred to tech savvy members of their community because they were more knowledgeable about mobile privacy and security. Therefore, future research could utilize power usage as a proxy measure to differentiate between caregivers and caregivees for mobile privacy and security within community-based settings – or it could simply ask individuals from whom they receive oversight as we did.
Amplified Effects of Community Oversight for Caregivees
A key finding from our study was that the community-based approach of using CO-oPS to provide joint oversight was beneficial to both parties, but it was even more beneficial for caregivees. We saw a significant moderating effect of caregiving status, such that self-efficacy and community collective efficacy increased more for caregivees than caregivers from pre- to post-study. Interestingly, while caregivees received oversight from their community members, as our RQ3 results indicated, they also provided oversight to others, which may have contributed to their enhanced self-efficacy. Overall, our findings confirmed our hypothesis: people receiving care knew less about privacy and security than those providing care, likely because they had more to learn and needed more guidance. Consequently, our study provides strong empirical evidence for how community-based approaches can disproportionately benefit those in need of help <cit.> in ways that benefit the community as a whole. Therefore, we urge HCI researchers to continue to emphasize the importance of collaboration and community across all aspects of our scholarship. However, our data revealed several trends that ran contrary to the effects we hypothesized, which also offer valuable insights. In particular, caregivers reported significantly higher levels of community trust and belonging than caregivees, when we expected higher levels of trust from caregivees due to their reliance on caregivers. These unanticipated findings may partially be explained by our RQ3 results, which revealed that caregivees had more privacy concerns; and therefore, hid more apps than caregivers. This surfaces a potential tension between caregivees and caregivers due to their differing roles.
As such, a critically important takeaway is to make sure these technologies for community-based oversight do not unintentionally create oppressive power hierarchies that could harm vulnerable populations. HCI researchers have highlighted the potential pitfalls of surveillance-based technologies among marginalized individuals (e.g., children, older adults, and victims of intimate partner violence; cf. <cit.>). Therefore, when attempting to protect caregivees from digital privacy and security threats, it is important to assess whether protection is needed from the caregivers themselves. For example, while allowing users to hide apps within CO-oPS provides a safety mechanism against unwanted oversight, caregivees could still be unduly punished for such behavior outside of the app by caregivers who insist on wielding coercive control. One approach for mitigating users' personal privacy, safety, and security concerns is through the careful design of these community-based collaborative mechanisms. Future research needs to find effective ways to increase caregivees' community belonging and trust to intrinsically motivate them to use the app voluntarily, rather than making community participation compulsory.
Caregivers and Caregivees Behave Differently
In terms of the behavioral differences between our caregivers and caregivees, we only found statistically significant differences in the number of apps they hid from their community members. This is likely because we asked both our caregivers and caregivees to perform the same weekly tasks. Qualitatively, however, we did see some differences in their privacy behaviors. Although our interview results showed a distinction between caregivers and caregivees in terms of initiating discussions to provide guidance and support versus asking questions or seeking advice, our log results showed that both groups provided advice to the other caregivers and caregivees of their communities. This is because the collaborative community-based mechanisms of the CO-oPS app were designed to provide an equal footing for everyone in the community, regardless of their roles and expertise. Yet, our caregivers tended to review more apps, while caregivees changed relatively more app permissions based on the oversight received. In retrospect, caregivees providing advice may cause more harm than benefit, as they may lack the necessary skills to provide mobile privacy and security advice to others. This may also potentially cause tensions within the community, as caregivers might ignore their advice <cit.> and the other caregivees may receive potentially wrong advice <cit.>.
Therefore, a key implication of our findings is that future research might find it advantageous to differentiate between study participants in terms of their assigned tasks, so that the tasks are more appropriate for their individual roles.
This may resolve the tensions and discomfort that we observed among many of our caregivees when we asked them to play the role of a caregiver. Such consideration would likely improve participants' experience by accounting for differing roles earlier in the design process, rather than post hoc, as was a limitation of our study. Importantly, these findings suggest that community-based approaches should be designed to support different caregiving roles in heterogeneous communities.
Implications for Design
Our results demonstrate how caregivers and caregivees differ within communities and how their perceptions diverged when they used a community-based mobile privacy and security mechanism to exchange caregiving.
The insights gained from the use of the CO-oPS app offer real-world applications not only for similar tools but also for the design of broader strategies aimed at helping users safeguard their privacy across various technological domains. These insights extend beyond mobile apps to other technology environments where users share personal information with third parties, such as smart home devices <cit.>, social media platforms <cit.> and websites <cit.>. For example, collaborative community-based mechanisms can be adapted to assist individuals in making informed decisions about the data collection by IoT devices, as demonstrated in the work of Emami-Naeini et al. <cit.>. Similarly, McDonald et al. <cit.> explored the design of technologies that enable collaboration among loved ones in managing cybersecurity, reflecting the broader applicability of community-driven approaches in enhancing privacy and security practices across different technological landscapes.
Below, we discuss the features and mechanisms that a collaborative oversight tool needs in order to support these differences in roles and expertise within communities.
Identifying Caregivers in Communities:
Our study examined the differing roles of caregivers and caregivees in community oversight, albeit in a post hoc fashion. To gain deeper insights, future researchers and designers should consider integrating on-boarding features aimed at distinguishing caregivers from individuals primarily in need of care. This could be achieved by prompting community members to self-identify their caregiving roles during the sign-up process. Another approach could involve identifying power users within the community. However, the power use scale utilized in our study comprised 22 items <cit.>, which may pose feasibility challenges for users. Therefore, we recommend utilizing a condensed version of this scale, such as the one developed by Sundar et al. in <cit.>, which consists of 12 items.
Addressing when Caregivers are Absent:
In some cases, communities might lack the technological expertise needed to provide care. To mitigate this problem, researchers and designers may need to explore ways to infuse external expertise into the community, as suggested by Chouhan et al. in <cit.>. Another possibility is that rather than bringing external experts into a community, it might be advantageous to educate community members. In real-world settings, some caregivees may eventually learn to take the lead in providing oversight within their communities. Identifying these transitions over time is also important. Therefore, we recommend incorporating smart caregiving detection mechanisms into the application to recognize caregiving behaviors, such as sending messages containing privacy and security advice. This may require the integration of real-time messaging APIs to discern the intent behind messages, such as whether they are intended to seek or offer advice.
Designing for Different Caregiving Roles:
We recommend that app designers adopt a nuanced approach when developing community oversight tools to accommodate the distinct roles of caregivers and caregivees, rather than unequivocally treating them as interchangeable entities. Utilizing methodologies such as storyboarding and creating user personas can help to fully understand and address the unique needs and expectations associated with these roles. However, it is imperative to recognize and mitigate potential tensions arising from power dynamics and disparities in privacy and security knowledge. To effectively navigate these complexities, we suggest incorporating methodologies such as value-sensitive design <cit.> or participatory design <cit.>, which prioritize the integration of diverse perspectives and values throughout the design process. These methodologies would help facilitate meaningful engagement of all stakeholders, ensuring that the resulting community oversight tools are attuned to the needs and preferences of all users involved regardless of their caregiving roles.
Incentivizing Caregivers to Provide Oversight:
Caregivers may encounter a decrease in motivation under some circumstances, especially when they perceive fewer personal benefits or lack close relationships within the community <cit.>. Therefore, it is important to incorporate reward mechanisms aimed at acknowledging and incentivizing their contributions as proactive digital citizens. Designers should also explore the development of effective nudging mechanisms strategically designed to prompt and encourage caregivers to engage more actively and consistently in providing oversight to their respective communities. For instance, weekly notifications or alerts upon new app installations within the community could prompt caregivers to assume oversight responsibilities. Implementing reward mechanisms, such as badges or points, to incentivize desired behaviors, could also be beneficial. Such interventions need to be carefully crafted to ensure that caregivees can also form a sense of duty and accountability to their communities.
Enhancing Caregivees' Community Trust and Belonging:
Collaborative oversight should include mechanisms fostering caregivees' trust and community belonging. To do so, we need to explore ways to encourage caregivees to seek advice and guidance from their community so that they feel their privacy and security needs matter and that they belong to the community. Implementing features such as auto-generated text messages for advice can aid caregivees in seeking assistance effectively. It is also crucial to ensure these tools make caregivees feel comfortable and connected rather than surveilled or punished. Thoughtful design is essential to co-manage mobile privacy and security, safeguarding caregivees from potential misuse while enhancing their sense of community belonging.
Limitations and Future Work
We recognize several limitations of our study that should be addressed in future work. First, our sample was skewed toward younger adults with at least bachelor's degrees. Our sample was also biased towards racial minorities, with 72% of our participants being of Asian (primarily South Asian) descent. Further, our caregivers and caregivees were generally equally distributed across these ethnicities, age groups, and education levels. Moreover, the co-monitoring feature of CO-oPS may have introduced conflicts among caregivees, as indicated by our RQ3 results, which showed that caregivees hid significantly more apps than caregivers. This issue could be further exacerbated in communities with power hierarchies, such as parent-child relationships, or among ethnic groups with lower privacy boundaries. Therefore, our results may not be generalizable to populations of different ethnicities, age groups, and education levels. Future work should explore caregiving with a community oversight mechanism among communities of more varied demographics and socio-economic status <cit.>.
One of the key limitations of our study was that we asked all of our participants to perform the same set of tasks with CO-oPS, which resulted in non-significant differences in the behaviors of caregivees and caregivers. Additionally, the weekly tasks might have prompted caregivers and caregivees to adjust their app permissions or share oversight within their communities. In real-world scenarios, users might lack motivation for community oversight without incentives. Thus, we encourage further research into effective nudging mechanisms to incentivize users to engage in community oversight and to take privacy-protective actions for their own benefit.
Another potential limitation is the risk of privacy breaches if a community member's phone is lost or stolen, as this could expose the personal information of the entire community. Additionally, in large communities that include members who may not be close, individuals who were initially trusted could potentially become malicious. Future research should investigate the possibility of allowing more granular control over the app-hiding features, enabling community members to selectively turn off sharing with specific individuals. Furthermore, caregivers' advice and privacy behaviors may sometimes be misleading. In such cases, collaborative approaches need to identify deserving candidates to label as caregivers. Factors such as community members' power use, the validity of their advice, and their privacy practices could be important considerations for future studies. While we allowed all community members to hide any of their installed apps as they preferred, certain mobile apps carry different security and privacy risks <cit.>. For example, caregivees were more likely to hide social media and shopping apps, which might have limited the feedback received for those apps. Future research should explore assigning risk-based weights to the apps installed, making users more aware of the risks associated with hiding certain apps and encouraging them to consider sharing to receive effective oversight. Finally, in this study, we did not measure power use in the post-study survey, as we did not expect the CO-oPS app intervention to alter individual power usage. In retrospect, heightened self-efficacy might lead to increased power usage. Hence, we recommend that future studies on privacy interventions assess this factor both pre- and post-intervention to gauge potential changes in power use.
§ CONCLUSION
Collaborative mobile privacy and security approaches often treat all individuals in the community as equals, despite differences in expertise, needs, and roles. Our study highlights the need to address these imbalances, as caregivers and caregivees have distinct perceptions and requirements. Our study illustrated the role community oversight can play in helping those who need support build their self-efficacy and community collective efficacy. Our findings call for interventions that prioritize increasing caregivees' community trust and belonging while incentivizing caregivers' participation in oversight. This approach aligns with our broader goal of developing community-based tools that effectively support mobile privacy and security for all. Future research will focus on refining these tools and exploring innovative strategies to enhance collaboration and address the unique needs of diverse community members.
We thank the individuals who participated in our study. We also acknowledge the contributions of Nikko Osaka, Anoosh Hari, and Ricardo Mangandi, in the CO-oPS app development. Our research was supported by the U.S. National Science Foundation under grants CNS-1814068, CNS-1814110, and CNS-2326901. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. National Science Foundation.
§ COMMUNITY TRUST SURVEY QUESTIONNAIRE
Derived from Chouhan et al.'s conceptual model of Community Oversight <cit.>
1. I trust others in my community to protect my private information.
2. I trust others in my community to give me advice about mobile privacy and security.
3. Others in my community trust me to protect their private information.
4. Others in my community trust me to give them advice about mobile privacy and security.
§ SAMPLE QUESTIONS OF FOLLOWUP INTERVIEW
* During the study, how frequently did your community members discuss mobile privacy and security decisions with one another?
* During the study, how did you communicate with others who were part of your community?
* During the study, how did you manage your mobile privacy and security decisions? Did you see any changes compared to prior to the study? Why or why not?
* Can you explain how and why the app did or did not help provide transparency into the mobile privacy and security decisions of other people in your community?
* How and why did the app help or not help raise awareness in your community about mobile privacy and security?
* How and why did the app enable or not enable you and individuals in your community to provide feedback and guidance about others’ mobile privacy and security?
* How and why did the app help or not help you work together as a community on mobile privacy and security?
* Were there any problems or concerns you or others in your community encountered when using the app?
§ CODEBOOK
|
http://arxiv.org/abs/2409.03519v1 | 20240905133240 | Tissue Concepts: supervised foundation models in computational pathology | [
"Till Nicke",
"Jan Raphael Schaefer",
"Henning Hoefener",
"Friedrich Feuerhake",
"Dorit Merhof",
"Fabian Kiessling",
"Johannes Lotz"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
preparation
Euclid Collaboration: J. Adamek0000-0002-0723-6740julian.adamek@uzh.ch<ref>
B. Fiorini0000-0002-0092-4321<ref>
M. Baldi0000-0003-4145-1943<ref>,<ref>,<ref>
G. Brando0000-0003-0805-1905<ref>
M.-A. Breton<ref>,<ref>,<ref>
F. Hassani0000-0003-2640-4460<ref>
K. Koyama0000-0001-6727-6915<ref>
A. M. C. Le Brun0000-0002-0936-4594<ref>
G. Rácz0000-0003-3906-5699<ref>
H.-A. Winther0000-0002-6325-2710<ref>
A. Casalino0000-0001-6709-5292<ref>
C. Hernández-Aguayo0000-0001-9921-8832<ref>
B. Li0000-0002-1098-9188<ref>
D. Potter0000-0002-0757-5195<ref>
E. Altamura0000-0001-6973-1897<ref>
C. Carbone0000-0003-0125-3563<ref>
C. Giocoli0000-0002-9590-7961<ref>,<ref>
D. F. Mota0000-0003-3141-142X<ref>
A. Pourtsidou0000-0001-9110-5550<ref>,<ref>
Z. Sakr0000-0002-4823-3757<ref>,<ref>,<ref>
F. Vernizzi0000-0003-3426-2802<ref>
A. Amara<ref>
S. Andreon0000-0002-2041-8784<ref>
N. Auricchio0000-0003-4444-8651<ref>
C. Baccigalupi0000-0002-8211-1630<ref>,<ref>,<ref>,<ref>
S. Bardelli0000-0002-8900-0298<ref>
P. Battaglia0000-0002-7337-5909<ref>
D. Bonino0000-0002-3336-9977<ref>
E. Branchini0000-0002-0808-6908<ref>,<ref>,<ref>
M. Brescia0000-0001-9506-5680<ref>,<ref>,<ref>
J. Brinchmann0000-0003-4359-8797<ref>,<ref>
A. Caillat<ref>
S. Camera0000-0003-3399-3574<ref>,<ref>,<ref>
V. Capobianco0000-0002-3309-7692<ref>
V. F. Cardone<ref>,<ref>
J. Carretero0000-0002-3130-0204<ref>,<ref>
S. Casas0000-0002-4751-5138<ref>
F. J. Castander0000-0001-7316-4573<ref>,<ref>
M. Castellano0000-0001-9875-8263<ref>
G. Castignani0000-0001-6831-0687<ref>
S. Cavuoti0000-0002-3787-4196<ref>,<ref>
A. Cimatti<ref>
C. Colodro-Conde<ref>
G. Congedo0000-0003-2508-0046<ref>
C. J. Conselice0000-0003-1949-7638<ref>
L. Conversi0000-0002-6710-8476<ref>,<ref>
Y. Copin0000-0002-5317-7518<ref>
F. Courbin0000-0003-0758-6510<ref>,<ref>,<ref>
H. M. Courtois0000-0003-0509-1776<ref>
A. Da Silva0000-0002-6385-1609<ref>,<ref>
H. Degaudenzi0000-0002-5887-6799<ref>
G. De Lucia0000-0002-6220-9104<ref>
M. Douspis0000-0003-4203-3954<ref>
F. Dubath0000-0002-6533-2810<ref>
X. Dupac<ref>
S. Dusini0000-0002-1128-0664<ref>
M. Farina0000-0002-3089-7846<ref>
S. Farrens0000-0002-9594-9387<ref>
S. Ferriol<ref>
P. Fosalba0000-0002-1510-5214<ref>,<ref>
M. Frailis0000-0002-7400-2135<ref>
E. Franceschi0000-0002-0585-6591<ref>
M. Fumana0000-0001-6787-5950<ref>
S. Galeotta0000-0002-3748-5115<ref>
B. Gillis0000-0002-4478-1270<ref>
P. Gómez-Alvarez0000-0002-8594-5358<ref>,<ref>
A. Grazian0000-0002-5688-0663<ref>
F. Grupp<ref>,<ref>
L. Guzzo0000-0001-8264-5192<ref>,<ref>
S. V. H. Haugan0000-0001-9648-7260<ref>
W. Holmes<ref>
F. Hormuth<ref>
A. Hornstrup0000-0002-3363-0936<ref>,<ref>
S. Ilić0000-0003-4285-9086<ref>,<ref>
K. Jahnke0000-0003-3804-2137<ref>
M. Jhabvala<ref>
B. Joachimi0000-0001-7494-1303<ref>
E. Keihänen0000-0003-1804-7715<ref>
S. Kermiche0000-0002-0302-5735<ref>
A. Kiessling0000-0002-2590-1273<ref>
M. Kilbinger0000-0001-9513-7138<ref>
B. Kubik0009-0006-5823-4880<ref>
M. Kümmel0000-0003-2791-2117<ref>
M. Kunz0000-0002-3052-7394<ref>
H. Kurki-Suonio0000-0002-4618-3063<ref>,<ref>
S. Ligori0000-0003-4172-4606<ref>
P. B. Lilje0000-0003-4324-7794<ref>
V. Lindholm0000-0003-2317-5471<ref>,<ref>
I. Lloro<ref>
G. Mainetti0000-0003-2384-2377<ref>
E. Maiorano0000-0003-2593-4355<ref>
O. Mansutti0000-0001-5758-4658<ref>
O. Marggraf0000-0001-7242-3852<ref>
K. Markovic0000-0001-6764-073X<ref>
M. Martinelli0000-0002-6943-7732<ref>,<ref>
N. Martinet0000-0003-2786-7790<ref>
F. Marulli0000-0002-8850-0303<ref>,<ref>,<ref>
R. Massey0000-0002-6085-3780<ref>
E. Medinaceli0000-0002-4040-7783<ref>
S. Mei0000-0002-2849-559X<ref>
M. Melchior<ref>
Y. Mellier<ref>,<ref>
M. Meneghetti0000-0003-1225-7084<ref>,<ref>
E. Merlin0000-0001-6870-8900<ref>
G. Meylan<ref>
M. Moresco0000-0002-7616-7136<ref>,<ref>
L. Moscardini0000-0002-3473-6716<ref>,<ref>,<ref>
C. Neissner0000-0001-8524-4968<ref>,<ref>
S.-M. Niemi<ref>
C. Padilla0000-0001-7951-0166<ref>
S. Paltani0000-0002-8108-9179<ref>
F. Pasian0000-0002-4869-3227<ref>
K. Pedersen<ref>
W. J. Percival0000-0002-0644-5727<ref>,<ref>,<ref>
V. Pettorino<ref>
S. Pires0000-0002-0249-2104<ref>
G. Polenta0000-0003-4067-9196<ref>
M. Poncet<ref>
L. A. Popa<ref>
L. Pozzetti0000-0001-7085-0412<ref>
F. Raison0000-0002-7819-6918<ref>
A. Renzi0000-0001-9856-1970<ref>,<ref>
J. Rhodes0000-0002-4485-8549<ref>
G. Riccio<ref>
E. Romelli0000-0003-3069-9222<ref>
M. Roncarelli0000-0001-9587-7822<ref>
R. Saglia0000-0003-0378-7032<ref>,<ref>
A. G. Sánchez0000-0003-1198-831X<ref>
D. Sapone0000-0001-7089-4503<ref>
B. Sartoris0000-0003-1337-5269<ref>,<ref>
M. Schirmer0000-0003-2568-9994<ref>
T. Schrabback0000-0002-6987-7834<ref>
A. Secroun0000-0003-0505-3710<ref>
G. Seidel0000-0003-2907-353X<ref>
S. Serrano0000-0002-0211-2861<ref>,<ref>,<ref>
C. Sirignano0000-0002-0995-7146<ref>,<ref>
G. Sirri0000-0003-2626-2853<ref>
L. Stanco0000-0002-9706-5104<ref>
J. Steinwagner0000-0001-7443-1047<ref>
P. Tallada-Crespí0000-0002-1336-8328<ref>,<ref>
D. Tavagnacco0000-0001-7475-9894<ref>
I. Tereno<ref>,<ref>
R. Toledo-Moreo0000-0002-2997-4859<ref>
F. Torradeflot0000-0003-1160-1517<ref>,<ref>
I. Tutusaus0000-0002-3199-0399<ref>
E. A. Valentijn<ref>
L. Valenziano0000-0002-1170-0104<ref>,<ref>
T. Vassallo0000-0001-6512-6358<ref>,<ref>
G. Verdoes Kleijn0000-0001-5803-2580<ref>
A. Veropalumbo0000-0003-2387-1194<ref>,<ref>,<ref>
Y. Wang0000-0002-4749-2984<ref>
J. Weller0000-0002-8282-2010<ref>,<ref>
G. Zamorani0000-0002-2318-301X<ref>
E. Zucca0000-0002-5845-8132<ref>
A. Biviano0000-0002-0857-0732<ref>,<ref>
C. Burigana0000-0002-3005-5796<ref>,<ref>
M. Calabrese0000-0002-2637-2422<ref>,<ref>
D. Di Ferdinando<ref>
J. A. Escartin Vigo<ref>
G. Fabbian0000-0002-3255-4695<ref>,<ref>
F. Finelli0000-0002-6694-3269<ref>,<ref>
J. Gracia-Carpio<ref>
S. Matthew0000-0001-8448-1697<ref>
N. Mauri0000-0001-8196-1548<ref>,<ref>
A. Pezzotta0000-0003-0726-2268<ref>
M. Pöntinen0000-0001-5442-2530<ref>
V. Scottez<ref>,<ref>
M. Tenti0000-0002-4254-5901<ref>
M. Viel0000-0002-2642-5707<ref>,<ref>,<ref>,<ref>,<ref>
M. Wiesmann0009-0000-8199-5860<ref>
Y. Akrami0000-0002-2407-7956<ref>,<ref>
V. Allevato0000-0001-7232-5152<ref>
S. Anselmi0000-0002-3579-9583<ref>,<ref>,<ref>
M. Archidiacono0000-0003-4952-9012<ref>,<ref>
F. Atrio-Barandela0000-0002-2130-2513<ref>
A. Balaguera-Antolinez0000-0001-5028-3035<ref>,<ref>
M. Ballardini0000-0003-4481-3559<ref>,<ref>,<ref>
A. Blanchard0000-0001-8555-9003<ref>
L. Blot0000-0002-9622-7167<ref>,<ref>
H. Böhringer0000-0001-8241-4204<ref>,<ref>,<ref>
S. Borgani0000-0001-6151-6439<ref>,<ref>,<ref>,<ref>
S. Bruton0000-0002-6503-5218<ref>
R. Cabanac0000-0001-6679-2600<ref>
A. Calabro0000-0003-2536-1614<ref>
B. Camacho Quevedo0000-0002-8789-4232<ref>,<ref>
G. Cañas-Herrera0000-0003-2796-2149<ref>,<ref>
A. Cappi<ref>,<ref>
F. Caro<ref>
C. S. Carvalho<ref>
T. Castro0000-0002-6292-3228<ref>,<ref>,<ref>,<ref>
K. C. Chambers0000-0001-6965-7789<ref>
S. Contarini0000-0002-9843-723X<ref>
A. R. Cooray0000-0002-3892-0190<ref>
G. Desprez0000-0001-8325-1742<ref>
A. Díaz-Sánchez0000-0003-0748-4768<ref>
J. J. Diaz<ref>
S. Di Domizio0000-0003-2863-5895<ref>,<ref>
H. Dole0000-0002-9767-3839<ref>
S. Escoffier0000-0002-2847-7498<ref>
A. G. Ferrari0009-0005-5266-4110<ref>,<ref>
P. G. Ferreira0000-0002-3021-2851<ref>
I. Ferrero0000-0002-1295-1132<ref>
A. Finoguenov0000-0002-4606-5403<ref>
F. Fornari0000-0003-2979-6738<ref>
L. Gabarra0000-0002-8486-8856<ref>
K. Ganga0000-0001-8159-8208<ref>
J. García-Bellido0000-0002-9370-8360<ref>
T. Gasparetto0000-0002-7913-4866<ref>
V. Gautard<ref>
E. Gaztanaga0000-0001-9632-0815<ref>,<ref>,<ref>
F. Giacomini0000-0002-3129-2814<ref>
F. Gianotti0000-0003-4666-119X<ref>
G. Gozaliasl0000-0002-0236-919X<ref>
C. M. Gutierrez0000-0001-7854-783X<ref>
A. Hall0000-0002-3139-8651<ref>
H. Hildebrandt0000-0002-9814-3338<ref>
J. Hjorth0000-0002-4571-2306<ref>
A. Jimenez Muñoz0009-0004-5252-185X<ref>
S. Joudaki0000-0001-8820-673X<ref>
J. J. E. Kajava0000-0002-3010-8333<ref>,<ref>
V. Kansal0000-0002-4008-6078<ref>,<ref>
D. Karagiannis0000-0002-4927-0816<ref>,<ref>
C. C. Kirkpatrick<ref>
S. Kruk0000-0001-8010-8879<ref>
J. Le Graet0000-0001-6523-7971<ref>
L. Legrand0000-0003-0610-5252<ref>
J. Lesgourgues0000-0001-7627-353X<ref>
T. I. Liaudat0000-0002-9104-314X<ref>
A. Loureiro0000-0002-4371-0876<ref>,<ref>
G. Maggio0000-0003-4020-4836<ref>
M. Magliocchetti0000-0001-9158-4838<ref>
F. Mannucci0000-0002-4803-2381<ref>
R. Maoli0000-0002-6065-3025<ref>,<ref>
C. J. A. P. Martins0000-0002-4886-9261<ref>,<ref>
L. Maurin0000-0002-8406-0857<ref>
R. B. Metcalf0000-0003-3167-2574<ref>,<ref>
M. Migliaccio<ref>,<ref>
M. Miluzio<ref>,<ref>
P. Monaco0000-0003-2083-7564<ref>,<ref>,<ref>,<ref>
A. Montoro0000-0003-4730-8590<ref>,<ref>
A. Mora0000-0002-1922-8529<ref>
C. Moretti0000-0003-3314-8936<ref>,<ref>,<ref>,<ref>,<ref>
G. Morgante<ref>
S. Nadathur0000-0001-9070-3102<ref>
L. Patrizii<ref>
V. Popa0000-0002-9118-8330<ref>
P. Reimberg0000-0003-3410-0280<ref>
I. Risso0000-0003-2525-7761<ref>
P.-F. Rocci<ref>
M. Sahlén0000-0003-0973-4804<ref>
E. Sarpa0000-0002-1256-655X<ref>,<ref>,<ref>
A. Schneider0000-0001-7055-8104<ref>
M. Sereno0000-0003-0302-0325<ref>,<ref>
A. Silvestri0000-0001-6904-5061<ref>
A. Spurio Mancini0000-0001-5698-0990<ref>,<ref>
K. Tanidis<ref>
C. Tao0000-0001-7961-8177<ref>
N. Tessore0000-0002-9696-7931<ref>
G. Testera<ref>
R. Teyssier0000-0001-7689-0933<ref>
S. Toft0000-0003-3631-7176<ref>,<ref>
S. Tosi0000-0002-7275-9193<ref>,<ref>
A. Troja0000-0003-0239-4595<ref>,<ref>
M. Tucci<ref>
C. Valieri<ref>
J. Valiviita0000-0001-6225-3693<ref>,<ref>
D. Vergani0000-0003-0898-2216<ref>
G. Verza0000-0002-1886-8348<ref>,<ref>
P. Vielzeuf0000-0003-2035-9339<ref>
N. A. Walton0000-0003-3983-8778<ref>
Received XXX; accepted ZZZ
§ ABSTRACT
Due to the increasing workload of pathologists, the need for automation to support diagnostic tasks and quantitative biomarker evaluation is becoming more and more apparent.
Foundation models have the potential to improve generalizability within and across centers and serve as starting points for data efficient development of specialized yet robust AI models.
However, training foundation models is usually very expensive in terms of data, computation, and time.
This paper proposes a supervised training method that drastically reduces these expenses.
The proposed method is based on multi-task learning to train a joint encoder, by combining 16 different classification, segmentation, and detection tasks on a total of 912,000 patches.
Since the encoder is capable of capturing the properties of the samples, we term it the Tissue Concepts encoder.
To evaluate the performance and generalizability of the Tissue Concepts encoder across centers, classification of whole slide images from four of the most prevalent solid cancers - breast, colon, lung, and prostate - was used.
The experiments show that the Tissue Concepts models achieve performance comparable to models trained with self-supervision, while requiring only 6% of the training patches.
Furthermore, the Tissue Concepts encoder outperforms an ImageNet pre-trained encoder on both in-domain and out-of-domain data.
The pre-trained models will be made available at <https://github.com/FraunhoferMEVIS/MedicalMultitaskModeling>.
§ INTRODUCTION
The need for diagnostic systems to help pathologists manage the anticipated workload increases as cancer cases worldwide are on the rise <cit.>. As <cit.> estimate, breast, colorectal, prostate, and lung cancers are among the six most common cancer types. Projections suggest that cases of these cancers will continue to increase, posing significant challenges due to time-consuming diagnosis, increased demand for tumor subtyping, and personalized treatment <cit.>. Deep learning (DL) has made significant progress in medical imaging, particularly in the field of computational pathology (CPath). Some studies have demonstrated that DL models even surpass human performance in certain tasks, making DL models effective tools to help pathologists cope with the increasing workload <cit.>. However, the unavailability of the required large data sets and the needed investment of time and effort limit the effectiveness and impact of DL models in pathology.
Recent advances in self-supervised learning have enabled the training of deep neural networks on large amounts of unlabeled medical data, resulting in the creation of foundation models in computer vision <cit.>. These models are pre-trained on a wide range of images, primarily using self-supervision through contrastive learning or masked image modeling. They have been shown to perform well in downstream tasks, including patch classification and weakly labeled whole slide image (WSI) classification <cit.>. Projects such as The Cancer Genome Atlas (TCGA) program provide a data source of thousands of WSIs for training these networks on real-world data. This vast amount of data is necessary for networks trained with self-supervision to reach their full potential <cit.>. However, the amount of resources required to create, train, and deploy such models has raised concerns among researchers about the environmental and other impacts <cit.>. In addition, extended training periods of several weeks impede development cycles and prolong research time.
Supervised learning, on the other hand, has been shown to outperform models trained with self-supervision in some tasks <cit.>. Although there are many annotated datasets available through challenges or other benchmarks, these datasets vary in size and contain annotations with varying degrees of detail. This variability between the datasets makes it challenging to condense the knowledge they contain into a single model. One approach to integrating all of these label types is to use multi-task learning (MTL) <cit.>. In <cit.>, we recently proposed a learning framework that combines the information contained in different labeling strategies, including detection, segmentation, and classification, and uses it to train a single shared backbone model on a large corpus of images. In that study, images from different medical imaging domains, such as CT, X-ray, and microscopic images, as well as non-medical images, were included.
This paper demonstrates that training a foundation model on supervised signals in CPath using MTL requires less data, time, and energy compared to models trained with self-supervision. At the same time, the measured performance is similar to that obtained from models trained on about 17 times more data without supervision. Following the MTL training scheme presented in <ref>a, this paper presents Tissue Concepts (TC), a robust encoder that is trained on a mixture of diverse annotations from small and medium-sized datasets in CPath to learn different concepts related to tissue. Considering the projected increase in cancer cases and the clinical workflow, we evaluated the performance of the encoder on the four major cancer types, breast, colon, lung, and prostate, using whole slide image classification, as shown in <ref>b. In addition, since models trained on one site are known to perform worse when evaluated on different sites, we tested the performance of Tissue Concepts using a cross-center evaluation scheme <cit.>.
The paper's main contributions can be summarized as follows.
* We show that diverse pre-training using MTL learns robust representations and drastically reduces the required amount of data compared to self-supervised approaches.
* Our evaluation of the Tissue Concepts encoder on four of the most prevalent cancer types across multiple centers highlights the generalizability of our approach.
§ RELATED WORK
First approaches using MTL in CPath were presented by <cit.> and <cit.>. Mormont and colleagues converted different datasets into 22 classification tasks to train a shared network and contrasted the learned encoders against ImageNet weights. An SVM was trained on the latent representations of the encoder. They found that the representations performed equal to or better than the baseline ImageNet weights. Graham et al. then used MTL on segmentation and classification tasks. This research focused on specific tasks that were present in the pre-training.
However, the evaluation of a general-purpose encoder and the corresponding latent representations based on whole slide image classification combined with cross-center evaluation is still an unexplored area. In addition, general-purpose encoders in the form of foundation models were not considered by <cit.>.
In <cit.>, we presented a first approach using MTL to train supervised foundation models. By utilizing expert knowledge in the form of multi-task learning, we trained a shared model, called UMedPT, which can be applied to various medical images. To achieve this, different imaging domains, such as CT, X-ray, and microscopic images, were used to train a shared backbone on classification, segmentation, and detection tasks. Currently, the impact of tasks outside the histopathology domain remains unclear due to the diverse pre-training of the encoder. This impact on performance and robustness requires further investigation.
The following sections focus in more detail on two topics discussed in this paper. While foundation models are still largely unexplored in terms of their application and performance, some approaches are mentioned below.
§.§ Foundation Models
A foundation model is broadly defined as being trained on a wide variety of data and being easily adaptable to many different downstream tasks <cit.>.
<cit.> used data from the TCGA in combination with data from the pathology AI platform PAIP to train a modified Swin transformer, called CTransPath (CTP), using self-supervision on 15 million patches. They presented an adapted contrastive loss, based on MoCo v3 <cit.>, which uses a memory bank to retrieve the top S semantically relevant entries. These entries were used as additional positive examples for the loss calculation. The authors evaluated their model using patch classification, image retrieval, and weakly labeled WSI classification. Due to the large number of training images and the slow convergence of self-supervised training, they reported a training time of 250 hours on 48 GPUs (12,000 GPU-hours).
<cit.> presented a comparable approach, training a tiny vision transformer (ViT) using standard DINO, and a ViT base model using standard masked autoencoding (MAE), both trained on about 3 billion patches. The authors report a training time of over 3,000 GPU-hours for the models that were evaluated on a variety of tasks ranging from disease detection to outcome prediction. The evaluation also included images scanned at a different hospital than the training slides.
<cit.> present a general-purpose foundation model that leverages over 100 million patches from 100,000 WSIs across 20 major cancer types. They train a large ViT using DINOv2 <cit.> on patches collected from an internal dataset. They evaluate the model on 34 tasks and find that it surpasses the previous baselines on most of them. The model was trained on 24 80 GB GPUs.
Overall, all of the presented models rely on large image databases and require long training times, which contributes to increased CO2 emissions. The presented TC encoder and MTL training aim to reduce the need for large amounts of data while maintaining the desired performance.
In addition, cross-center evaluation is needed to accurately assess models' performance.
§.§ Weakly Labeled WSI Classification
Learning from WSIs that are only labeled on a case basis, or that have only one endpoint, is challenging because training on the entire image at once typically exceeds the GPU memory. In addition, since a WSI provides only one sample, many WSIs are needed to effectively train a deep learning model. Classification of such gigapixel images is therefore typically performed using multiple instance learning (MIL) <cit.>. Using MIL involves two parts: first, extracting features from patches of the WSI using a pre-trained encoder to convert them into their latent representations, and second, aggregating the features from a WSI using a trainable MIL head to predict the given label <cit.>. Therefore, robust encoders are needed to obtain patch representations that facilitate the second step of MIL <cit.>. In this paper, MIL is used as an evaluation procedure to test the representativeness of the encoder's features. The following presents the most commonly used approaches, which focus on solving the second stage of MIL using either attention-based or convolution-based methods.
<cit.> introduced CLAM, a clustering-constrained attention MIL algorithm. The authors trained an attention-based head on features extracted from patches of a WSI to classify the corresponding labels. The attention was then used to identify sub-regions of high diagnostic value, which in turn were used to classify the entire slide. In addition, instance-level clustering was applied over the representative regions to constrain and refine the feature space.
<cit.> proposed TransMIL, an attention-based correlation method for solving weakly labeled classification tasks. The method uses differently sized convolutional layers to apply additional pyramid position encoding information between the attention modules. This allows the attention layers to aggregate morphological features, while the Pyramid Position Encoding Generator (PPEG) encodes spatial information.
<cit.> proposed neural image compression to train on entire WSIs. The authors trained an autoencoder on patches and used the resulting encoder for feature extraction. The patches extracted in the image domain were encoded, and their latent representations were placed at the same spatial location. This effectively compressed the entire WSI into a smaller latent image with more channels, while preserving the spatial relationship between the individual patches. A small CNN was then trained on the compressed WSIs to predict the label of the WSI. In a second version of this approach, the same authors used multi-task learning on four classification tasks to train the feature extractor <cit.>. The effect of segmentation and detection tasks, as well as more diverse pre-training, remained points of further investigation and are part of the research presented in this paper.
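The core idea of this compression step can be illustrated with a short sketch: patch embeddings produced by a frozen encoder are written back into a spatial grid so that a small CNN can operate on the resulting latent image. The function below is an illustrative simplification in PyTorch, not the implementation used by the cited authors; the grid coordinates and dimensions are assumed to be precomputed from the patch locations in the slide.

import torch

def compress_wsi(patch_features: torch.Tensor, positions: torch.Tensor,
                 grid_rows: int, grid_cols: int) -> torch.Tensor:
    """patch_features: (num_patches, feature_dim) embeddings from a frozen encoder.
    positions: (num_patches, 2) integer (row, col) grid coordinates of each patch."""
    feature_dim = patch_features.shape[1]
    # Empty latent image; grid locations without tissue patches stay zero.
    compressed = torch.zeros(feature_dim, grid_rows, grid_cols)
    compressed[:, positions[:, 0], positions[:, 1]] = patch_features.T
    return compressed  # a latent "image" with feature_dim channels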
All presented methods propose different aggregation methods to learn the desired label predictions based on the extracted features and thus work with the features extracted by the TC encoder. As an evaluation method, we adapted the convolution-based aggregation method presented by <cit.> and also applied an attention-based approach based on <cit.>, which are further described in Section <ref>.
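For reference, the sketch below shows a generic attention-based aggregator over precomputed patch features, in the spirit of the attention-based approaches discussed above. It is a minimal illustration rather than the exact head architecture used in our experiments; the hidden dimension and the simple tanh attention branch are assumptions.

import torch
from torch import nn

class AttentionMILHead(nn.Module):
    """Pools patch-level features of one slide into a single slide-level prediction."""
    def __init__(self, feature_dim: int, num_classes: int, hidden_dim: int = 256):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (num_patches, feature_dim) extracted by a frozen encoder
        weights = torch.softmax(self.attention(patch_features), dim=0)  # (num_patches, 1)
        slide_embedding = (weights * patch_features).sum(dim=0)         # (feature_dim,)
        return self.classifier(slide_embedding)                         # slide-level logits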
§ METHODS
Multi-task learning was used to train the Tissue Concepts encoder on 14 small and medium-sized datasets. The pre-training datasets and the procedure used during this phase are described in the following sections. In addition, the evaluation datasets, MIL head, and corresponding training are explained in detail.
§.§ Pre-training Datasets
To pre-train the shared encoder, a total of 14 data sources were collected and distributed over 16 tasks. All data sources, with the corresponding number of patches and WSIs as well as their tasks, are presented in <ref>. The NCT-CRC-HE 100k, PANDA, TUH, Breakhis, and Arvaniti datasets have been designated as patch-level classification tasks by their respective curators. Data from the PANDA dataset was used for classification and segmentation tasks, where 20 patches per WSI were sampled from a total of 4,000 WSIs. To increase data diversity, the Breakhis dataset was used at both 40x and 100x magnification.
Conic and MiDoG were included as detection tasks, while Conic also served as a segmentation task. Other datasets used for segmentation were the SemiCOL training dataset, Arvaniti, Peso, Schoemig-Markiefka, PANDA, TIGER, CRAG, BCSS, and HubMap. Each of the mentioned datasets is described in more detail in <ref>. In total, about 912,000 patches from around 7,000 WSIs were used during the training. The training data consisted of about 100,000 patches from colorectal tissue, 600,000 patches from prostate tissue, and 10,000 patches from breast tissue. In addition, a small number of spleen, liver, and skin tissue slides were included, which are marked as "various" in <ref>.
All patches were extracted or scaled to 224 × 224 pixels, resulting in a resolution of approximately 1 to 0.5 microns per pixel (MPP). In addition, standard augmentations, such as random rotation, distortion, blurring, and brightness, contrast, and hue changes, were applied during pre-training.
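A possible realization of this augmentation pipeline with torchvision is sketched below; the exact transformation parameters were not fixed by the description above and are therefore placeholder values, and the perspective transform is only a stand-in for the unspecified "distortion".

from torchvision import transforms

# Illustrative pre-training augmentations; all parameter values are assumptions.
pretraining_augmentations = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=90),
    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),   # stand-in for "distortion"
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),    # blurring
    transforms.ColorJitter(brightness=0.2, contrast=0.2, hue=0.1),
    transforms.ToTensor(),
])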
§.§ Encoder Training
The shared backbone was trained using multi-task learning with all of the above data sources <cit.>. In the MTL pipeline, each task was solved by a shallow task-specific head (θ_t) that received input from larger, shared modules (θ_shared). <ref>a gives a brief overview of the different blocks in the architecture. The shared blocks are shown in the blue area, while the individual tasks are shown in green.
During training, all tasks were treated equally and were processed iteratively within the training loop. For each task, the task-specific loss was computed and added to the total loss. Formally, the total loss ℒ_total(X, Y) for a set of tasks X=(X_1,...,X_T) with task-specific images X_t=(x_t,1,...,x_t,N) and corresponding labels Y_t=(y_t,1,...,y_t,N) is computed as
ℒ_total(X, Y) = ∑_t=1^Tℒ_t(θ_t(θ_shared(X_t)), Y_t)
where the loss of task t is calculated based on the output of the task-specific head θ_t that received its input from a shared network structure θ_shared. X_t and Y_t describe the task-specific batch of input images and the corresponding labels, respectively. While the task-specific parameters θ_t in <ref> changed depending on which task was being processed, the shared parameters θ_shared were task-independent. Since all tasks contribute equally to the total loss, multiple cycles through all tasks can be combined via gradient accumulation before an optimization step is performed.
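The accumulation of task losses defined above can be sketched as a simple PyTorch training loop. The snippet below is an illustrative simplification: pooling, the shared decoder, and the exact loss functions are folded into the task-specific heads and loss callables described in the following paragraphs, and the dictionary structure of the task list is hypothetical.

import torch
from torch import nn

def mtl_training_step(shared_encoder: nn.Module, tasks: list,
                      optimizer: torch.optim.Optimizer,
                      accumulation_cycles: int = 1) -> float:
    """One optimization step: iterate over all tasks, accumulate their gradients,
    then update the shared encoder and all task-specific heads together."""
    optimizer.zero_grad()
    total_loss = 0.0
    for _ in range(accumulation_cycles):
        for task in tasks:  # each task: {"head": nn.Module, "loss_fn": callable, "loader_iter": iterator}
            images, targets = next(task["loader_iter"])    # task-specific batch X_t, Y_t
            features = shared_encoder(images)              # shared representation theta_shared(X_t)
            predictions = task["head"](features)           # shallow task-specific head theta_t
            loss = task["loss_fn"](predictions, targets)   # e.g. cross-entropy, Dice + focal, FCOS losses
            loss.backward()                                # gradients accumulate across all tasks
            total_loss += loss.item()
    optimizer.step()
    return total_loss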
The tasks themselves were divided into classification, segmentation, and detection. All tasks shared the same encoder, which yielded a feature pyramid with four feature maps at different perceptual levels. The feature pyramid was further processed depending on the task.
Classification tasks received a globally average pooled version of the lowest feature pyramid level from the encoder. Each task-specific classification head consisted of 20% dropout followed by a linear layer, mapping from the dimension of the latent space to the number of classes. By minimizing the cross-entropy loss, each head learned to predict the task-specific classes.
Segmentation tasks used a shared U-Net-style decoder that received features from all levels of the shared encoder. Each individual segmentation task was then processed by a 1 × 1 convolutional layer operating on the output of the last decoder level. A combination of Dice and focal loss was minimized by each segmentation head.
The detection tasks also used a shared U-Net-style decoder that received features from all levels of the shared encoder to extract spatially relevant feature maps. Unlike the segmentation heads, each detection head consisted of an FCOS detection head, which minimized the corresponding losses.
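The classification and segmentation heads described above can be summarized in a few lines. The snippets below are minimal sketches with assumed feature dimensions; the detection head (an FCOS head on top of the shared decoder) is omitted for brevity, and the released code may organize these modules differently.

import torch
from torch import nn

class ClassificationHead(nn.Module):
    """20% dropout followed by a linear layer on the globally pooled last feature map."""
    def __init__(self, latent_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(nn.Dropout(p=0.2), nn.Linear(latent_dim, num_classes))

    def forward(self, pooled_features: torch.Tensor) -> torch.Tensor:
        return self.head(pooled_features)  # logits, trained with cross-entropy

class SegmentationHead(nn.Module):
    """Per-task 1x1 convolution on the output of the shared U-Net-style decoder."""
    def __init__(self, decoder_channels: int, num_classes: int):
        super().__init__()
        self.proj = nn.Conv2d(decoder_channels, num_classes, kernel_size=1)

    def forward(self, decoder_output: torch.Tensor) -> torch.Tensor:
        return self.proj(decoder_output)   # per-pixel logits, trained with Dice + focal loss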
Since transformer-based approaches are currently dominant in the literature, we chose a tiny Swin transformer as the shared encoder <cit.> (further denoted as TC-Swin). Additionally, we compared the attention-based backbone to a tiny ConvNeXt architecture that shares a similar number of parameters but follows a convolution-based approach <cit.> (further denoted as TC-Conv). Both networks were initialized with the corresponding ImageNet-1k weights, which were imported from the torchvision library <cit.>. During pre-training, the networks were optimized for 5 million steps until convergence, where one step was defined as computing the loss for one batch of one task. The AdamW optimizer <cit.> was used with a constant learning rate of 10^-4, a weight decay of 0.01, and gradient accumulation of 128 steps. In total, the models were trained for 160 hours on one NVidia RTX A5000 GPU, and the final models were used for evaluation.
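For illustration, the two encoder variants and the optimizer configuration can be instantiated with torchvision as sketched below; extracting the four-level feature pyramid and attaching the task-specific heads is omitted here and may differ in the released code.

import torch
from torchvision.models import swin_t, Swin_T_Weights, convnext_tiny, ConvNeXt_Tiny_Weights

# Two shared-encoder variants, both initialized with ImageNet-1k weights.
tc_swin_backbone = swin_t(weights=Swin_T_Weights.IMAGENET1K_V1)                 # TC-Swin
tc_conv_backbone = convnext_tiny(weights=ConvNeXt_Tiny_Weights.IMAGENET1K_V1)   # TC-Conv

# AdamW with a constant learning rate of 1e-4 and a weight decay of 0.01; in the
# full setup the parameters of all task-specific heads would be optimized as well.
optimizer = torch.optim.AdamW(tc_swin_backbone.parameters(), lr=1e-4, weight_decay=0.01)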
§ EVALUATION
This section first establishes a comparison between UMedPT and TC using a sample efficiency experiment as an ablation study. Then, the evaluation datasets and corresponding tasks are presented, and the MIL head training procedure is described.
§.§ Ablation Study
To determine whether a multi-domain encoder, UMedPT, or a histology-specific encoder, TC-Swin, performs better on unseen histology data, we measured the sample efficiency on an unseen, in-domain dataset. In this experiment, the frozen encoder generated latent representations for all patches in the downstream dataset. These latent representations were then used as input to a random forest model. Sample efficiency was measured by establishing a fixed training and test split. Different subsets of images per class were systematically sampled from the training split to serve as training examples for the random forest. Each model was then evaluated on the designated test split. This training-evaluation process was repeated 10 times with different seed values for robustness, and the F1-score was reported. We compared TC-Swin to the base Swin transformer encoder presented in <cit.>. Additionally, ImageNet weights from torchvision were used as a baseline for a tiny Swin transformer.
The BACH challenge dataset <cit.> is a breast tumor classification dataset consisting of 400 images of 2048 × 1536 pixels at 0.25 MPP, which are equally divided into 4 classes. Images were center-cropped to 1024 × 1024 pixels and downsampled to 224 × 224 pixels. The dataset was split 80/20 into a training and test subset. From the training split of the BACH dataset, sets of 1, 3, 5, 10, 25, and a maximum of 46 images per class were sampled using 10 different seeds. For each training, the random forest was evaluated on the left-out test set.
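The sample-efficiency protocol can be summarized by the sketch below, which fits a random forest on frozen-encoder features of N training images per class and evaluates it on the fixed test split. Variable names, the macro-averaged F1-score, and the default random-forest settings are assumptions made for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def sample_efficiency(train_feats, train_labels, test_feats, test_labels,
                      images_per_class=(1, 3, 5, 10, 25, 46), n_seeds=10):
    """Returns mean and std of the F1-score over seeds for each training-set size."""
    results = {}
    classes = np.unique(train_labels)
    for n in images_per_class:
        scores = []
        for seed in range(n_seeds):
            rng = np.random.default_rng(seed)
            # Sample up to n images per class from the fixed training split.
            idx = np.concatenate([
                rng.choice(np.flatnonzero(train_labels == c),
                           size=min(n, np.sum(train_labels == c)), replace=False)
                for c in classes])
            clf = RandomForestClassifier(random_state=seed)
            clf.fit(train_feats[idx], train_labels[idx])
            scores.append(f1_score(test_labels, clf.predict(test_feats), average="macro"))
        results[n] = (np.mean(scores), np.std(scores))
    return results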
Figure <ref> shows that the domain-specific encoder outperforms the multi-domain encoder and the pre-trained ImageNet encoder in terms of sample efficiency.
The TC-Swin encoder showed higher F1-scores compared to the multi-domain encoder and to ImageNet. With three images per class, TC-Swin's mean F1-score was 0.531, while UMedPT and ImageNet were at 0.496 and 0.423, respectively. With 10 images per class, the TC-Swin encoder showed a mean F1-score of 0.673, while UMedPT and ImageNet showed values of 0.628 and 0.566, respectively. Using the maximum number of available training images per class, the differences in reported performance remain similar. While TC-Swin achieved 0.816, UMedPT and ImageNet achieved mean F1-scores of 0.762 and 0.662, respectively.
The TC-Swin encoder outperforms the multi-domain medical encoder UMedPT even though the latter was trained with more images overall. In particular, with 5 or more images per class, the features obtained from the TC encoder provide better performance than the ImageNet and UMedPT features. When considering 1 and 3 images per class, UMedPT shows similar performance to the TC encoder, but also shows a higher variance over the 10 seeds. Therefore, we excluded the UMedPT encoder from further evaluation and focused on the TC encoder instead.
§.§ Evaluation Datasets
We compared the TC encoder to the current state of the art for each organ, referred to as the external baseline, as well as to the results obtained using the features of the publicly available CTransPath encoder <cit.>. In addition, we use the Swin transformer model with ImageNet weights without further training as an additional baseline.
The learned representations of Tissue Concepts were evaluated using weakly labeled WSI classification tasks. While breast, prostate, and colorectum represent tissues within the training domain, lung cancer tissue was deliberately chosen to represent tissue outside the training domain. The architecture and learning schedule described in Section <ref> were used during the training of each MIL head. Where possible, multi-site datasets were split so that evaluation sites were not included in the training. The feature extractors were evaluated on the following datasets for each organ:
Breast Cancer: The BReAst Cancer Subtyping (BRACS) dataset consists of 547 WSIs from 189 patients which are already divided into training (395), validation (87) and test (65) splits <cit.>. According to <cit.>, the WSIs were obtained at the Department of Pathology at the National Cancer Institute - IRCCS-Fondazione Pascale, Naples, Italy, and were scanned with an Aperio AT2 scanner at a resolution of 0.25 microns per pixel (mpp). Each WSI is assigned to one of 7 different classes: normal, benign, usual ductal hyperplasia (UDH), atypical ductal hyperplasia (ADH), flat epithelial atypia (FEA), ductal carcinoma in situ (DCIS), and invasive. In addition, <cit.> grouped the different types of breast carcinoma into four coarser classes: normal, non-cancerous, precancerous, and cancerous. Furthermore, binary classification between invasive and non-invasive breast cancer was also performed by <cit.>. The authors also presented a trained model, HACT-Net, and applied it to this dataset. They reported the weighted F1-score for all of the classification problems mentioned above, which we used as an external baseline, and we also reported the weighted F1-score in the BRACS experiments. Additionally, following <cit.>, we reported accuracy as a multi-class metric and area under the receiver operating curve (AUC) as a multi-threshold metric. For the evaluation of the TC-encoder, three experiments were created from the dataset, similarly to the approach proposed by <cit.>. The first experiment consisted of differentiating between normal and cancerous tissue slides. The second experiment focused on more fine-grained classification between normal, non-cancerous, precancerous, and cancerous tissues. Finally, the third experiment for breast cancer aimed at predicting one of the seven sub-types provided.
Prostate Cancer: The Prostate cANcer graDe Assessment (PANDA) challenge provides 10616 openly available WSIs from two different centers, Karolinska and Radboud <cit.>. Each of the WSIs was assigned an ISUP score resulting from the primary and secondary Gleason pattern. The score ranges from 0 to 5, with 0 representing normal tissue and higher numbers representing more severe prostate cancer. This information was used to create a binary classification task for evaluation, where the task consists of differentiating between normal (ISUP 0) and cancerous tissue (ISUP > 0).
To test cross-center performance and overall performance, three different experiments were created from this dataset. First, the entire set of images obtained from the Radboud UMC was used as both the training and validation set, using a split ratio of 90% for training and 10% for validation. All images provided by the Karolinska Institute were used as the hold-out test set. In a subsequent experiment, the roles of training and test centers were swapped. In addition, a 5-fold cross-validation experiment involving both centers was conducted. We note that a small percentage of the dataset in the form of pre-selected patches were used during pre-training of the encoder, however, the task-specific head was discarded. <cit.> used this dataset in a k-fold cross-validation to evaluate the performance of their model and reported the accuracy and area under the curve for binary classification. Their reported results serve as an external baseline. <cit.> reported the accuracy and AUC of their algorithm on the dataset. To be comparable, we also report these metrics in our experiments. Additionally, we report the F1-score as per-class metric, as suggested by <cit.>.
Colorectal Cancer: During the SemiCOL challenge, a dataset of 499 WSIs from 4 different centers was provided as the training dataset. The challenge task was to predict whether a given slide contained cancerous tissue or normal tissue. During the challenge, we used a modified model presented in <cit.> as an initial starting point and further fine-tuned on several colon and colorectal tasks. Using the learned encoder to extract features from the WSI, a MIL head was successively trained on these features. This resulted in an external test AUC of 0.99. In the experiments presented in this paper, the challenge training data was split by slide provider, effectively creating four smaller subsets of the original challenge dataset. For the evaluation presented in this paper, four different experiments were conducted. Images from one center were used as training examples (90/10), while images from the other three centers served as the hold-out test set. By treating each center as training provider once, and using the other three centers as test centers, four different configurations were created. We note that a small percentage of patch data of the dataset was used during pre-training of the encoder; however, the task-specific head was discarded. To the best of our knowledge, there is no other publication reporting results on this specific dataset, so we are unable to report an external baseline. Since the dataset is balanced with regard to the class distribution for each center in all splits, we reported the Accuracy, F1-Score and AUC as metrics, following <cit.>.
Lung Cancer: The TCGA-NSCLC dataset contains two subtypes of lung cancer, lung squamous cell carcinoma (TCGA-LUSC <cit.>) and lung adenocarcinoma (TCGA-LUAD <cit.>), in 1006 (512 LUSC, 494 LUAD) slides from over 40 different sites. Two training and testing scenarios were created from the TCGA-NSCLC dataset. One experiment focused on cross-center evaluation. In this scenario, the three largest contributors (Johns Hopkins, International Genomics Consortium, and Asterand) were selected to serve as the training and validation subset providers. The remaining 39 sites were selected to serve as the hold-out test set. This split resulted in 340 training slides (90/10 split) and 666 test slides. In addition, a 5-fold cross-validation experiment was performed on the entire dataset to increase training set diversity, again using a 90/10 training and validation split. The objective of each experiment was to distinguish between slides containing lung adenocarcinoma and those containing squamous cell carcinoma (LUAD vs LUSC). The obtained results were compared to the results reported by <cit.> and <cit.>. To be comparable, the accuracy and AUC, as reported by the external baselines, were selected as metrics. Additionally, the F1-score was reported.
§.§ Training of the MIL head
Following medical diagnoses, WSIs are often only labeled at the patient or slide level. To address this, we used MIL to train slide-level classification heads for the respective evaluation tasks. The WSIs were compressed into latent images by the respective frozen encoder, as suggested by <cit.>. During this process, all patches of a WSI were extracted with a size of 224 × 224 at about 0.5 MPP.
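As an illustration, this compression step can be sketched as follows; the 768-dimensional patch embeddings and the 224-pixel patches follow the description above, while the function and argument names are placeholders rather than the exact implementation.

import torch

@torch.no_grad()
def compress_wsi(patch_grid, frozen_encoder):
    # patch_grid: (rows, cols, 3, 224, 224) tensor of tissue patches at ~0.5 MPP.
    # Returns a latent WSI of shape (768, rows, cols) built from patch embeddings.
    rows, cols = patch_grid.shape[:2]
    patches = patch_grid.reshape(rows * cols, 3, 224, 224)
    features = frozen_encoder(patches)                   # (rows*cols, 768)
    return features.reshape(rows, cols, -1).permute(2, 0, 1)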
Using the exact same architecture for aggregation as presented by <cit.>, we found that training the proposed MIL head left room for improvement in terms of training stability and convergence speed. The selection and adaptation were motivated by the prior performance on the SemiCOL challenge <cit.>. There, we fine-tuned UMedPT and trained the small MIL head on WSI classification. On an external, multi-center test set, this pipeline achieved an AUC of 0.998.
To aggregate the latent WSIs, a small CNN with global max pooling was used. The initial layer utilized a 1x1 convolution, reducing the latent WSIs' 768 channels to 16. Following this, 3x3 convolutions with a stride of 2 and padding of 1, doubling in depth each time, were applied. Instance normalization, leaky ReLU activation, and 10% 2D dropout were applied in the convolutional layers. The lowest feature pyramid of 64 channels was globally max pooled, followed by one hidden linear layer and a classification layer. This resulted in 19250 total trainable parameters. The convolutional layers were initialized by drawing from a Kaiming normal distribution with a fixed seed. Additionally, we tested an attention-based approach by replacing the max pooling layer with an attention pooling layer, as suggested by <cit.>. Overall, we found similar results with both approaches across all organs. The results using an ABMIL aggregation head can be found in <ref>.
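A minimal PyTorch sketch of this aggregation head is given below. The overall structure (1x1 channel reduction, strided 3x3 convolutions doubling in depth, instance normalization, leaky ReLU, 2D dropout, global max pooling, Kaiming initialization) follows the description above, while the exact channel widths, bias settings and the size of the hidden linear layer are assumptions of the sketch and may not reproduce the reported 19250 parameters exactly.

import torch
import torch.nn as nn

class MILHead(nn.Module):
    def __init__(self, in_channels=768, n_classes=2):
        super().__init__()
        def block(c_in, c_out, k, s, p):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, k, stride=s, padding=p),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(inplace=True),
                nn.Dropout2d(0.1),
            )
        self.features = nn.Sequential(
            block(in_channels, 16, 1, 1, 0),   # 1x1 conv: 768 -> 16 channels
            block(16, 32, 3, 2, 1),            # 3x3, stride 2, doubling depth
            block(32, 64, 3, 2, 1),            # lowest pyramid level: 64 channels
        )
        self.pool = nn.AdaptiveMaxPool2d(1)    # global max pooling
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 32), nn.LeakyReLU(inplace=True),
            nn.Linear(32, n_classes),
        )
        for m in self.modules():               # Kaiming normal initialization
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)

    def forward(self, latent_wsi):             # latent_wsi: (B, 768, H, W)
        return self.classifier(self.pool(self.features(latent_wsi)))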
The same architecture was used over all evaluation datasets. To avoid overfitting, a label smoothing of 0.1 was used while minimizing the cross-entropy loss. The MIL head was trained using the AdamW optimizer with a constant learning rate of 10^-4 and a weight decay of 0.01 for 100 epochs, while monitoring the validation loss. The best performing model was used for testing. During training, additional augmentations were applied to each latent WSI. Since each latent WSI was treated as a 768-channel image, random flipping, mirroring, and resizing with nearest neighbor interpolation were applied during training. This resulted in high-dimensional latent WSI representations with sizes ranging from 32 × 32 to 224 × 224.
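A corresponding training-loop sketch using the schedule described above is shown below; the random tensors merely stand in for augmented 768-channel latent WSIs and slide-level labels, and the validation step is indicated only as a comment, so the snippet is illustrative rather than the exact pipeline.

import torch
import torch.nn as nn

mil_head = MILHead(in_channels=768, n_classes=2)            # sketch from above
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.AdamW(mil_head.parameters(), lr=1e-4, weight_decay=0.01)

for epoch in range(100):
    mil_head.train()
    latent_wsi = torch.randn(4, 768, 64, 64)                # dummy augmented latent WSIs
    labels = torch.randint(0, 2, (4,))                      # dummy slide-level labels
    optimizer.zero_grad()
    loss = criterion(mil_head(latent_wsi), labels)
    loss.backward()
    optimizer.step()
    # In the real pipeline, the validation loss is monitored here each epoch and
    # the best-performing checkpoint is kept for testing.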
§ RESULTS
An overview of the obtained results is given in <ref>, where all tasks are presented with corresponding test AUC scores over 5 runs. The results show that the TC encoder, although trained with less data, performs as well as, or better than the CTransPath encoder, given the selected tasks. The Swin transformer with ImageNet weights exhibits the lowest overall performance and the largest variance across all tasks. All reported results were obtained using the default parameters of the corresponding functions from the scikit-learn library <cit.>. In the following, the results on the different organs are examined in more detail.
§.§ Breast Cancer
<ref> shows the results for the three sub-problems of the BRACS classification challenge, including the weighted F1-score and the AUC score for each. For the binary classification problem, ImageNet weights achieve the highest weighted F1-score on average with 0.960, which is comparable to the external baseline of 0.96. CTP and TC-swin show similar performance with average wF1-scores of 0.954 and 0.951, respectively. The TC-Conv encoder performs almost three percentage points worse than the TC-swin encoder. Overall, all tested encoders result in similar AUC values.
Considering the more fine-grained classification of the given WSIs, none of the encoders matches the reported performance of the external baseline HACT-Net with a weighted F1-score of 0.84. The two TC encoders outperform the ImageNet weights on the four- and seven-class problem formulations. On the four-class problem, CTP achieves a mean weighted F1-score of 0.613, approximately 6 percentage points higher than the two TC encoders. However, when considering the AUC, all of the foundation models perform equally with a mean AUC of about 0.83. For the more detailed seven-class problem, the TC-swin encoder achieves the highest mean weighted F1 and AUC scores out of all the tested models with 0.40 and 0.76, respectively.
Overall, the performance of all four encoders tested decreases significantly as the problem formulation becomes more fine-grained. Although the external baseline performance also declines, the decline is not as steep as that observed with the foundation models and ImageNet weights.
§.§ Prostate Cancer
<ref> shows the results obtained over five runs on the PANDA dataset with cross-center and mixed evaluation. The 'clinic' column indicates different settings, where "Karolinska → Radboud" refers to the MIL head being trained on data from the Karolinska Institute and evaluated on data from the Radboud Clinic.
When comparing the results obtained in the 5-fold cross-validation with the external baseline, both TC encoders show performances close to the external baseline, with TC-Conv slightly surpassing it with a mean accuracy of 0.957. The mean accuracies of CTP and ImageNet are 2 and 5 percentage points below the selected baseline, respectively. All foundation models outperform the pure ImageNet encoder by at least three percentage points. In terms of accuracy and AUC scores, both TC encoders slightly outperform the CTP weights by one percentage point, with a mean accuracy of 0.936 and a mean AUC of 0.979.
In the cross-center evaluation, the performance of all models decreases compared to an in-center evaluation. CTP achieves a mean accuracy of 0.819 when transferring decision boundaries from the Karolinska site to the Radboud subset. The opposite scenario results in a decrease of 14 percentage points (0.678 mean accuracy). Furthermore, an even steeper drop in mean accuracy and AUC from 5-fold cross-validation to cross-center validation is observed when using the ImageNet weights (from 0.900 to 0.658). Both TC encoders are able to extract robust features. The TC-Swin encoder (from 0.942 to 0.846) shows more stable performance than the TC-Conv encoder (from 0.957 to 0.764) when compared to the metrics reported in the cross-center evaluation.
§.§ Colorectal Cancer
<ref> presents the results per trained center on the SemiCOL challenge dataset. The 'clinic' column indicates the training center. The averaged results are shown at the bottom of the table for reference. On average, the TC-Swin encoder showed a higher mean accuracy (0.953) compared to the TC-Conv (0.941) and CTP (0.944) encoders, as well as higher F1-scores. In terms of AUC, all three encoders show approximately the same performance on average.
When looking at the performance on individual clinics, the transfer from UBERN to other clinics seems to be the most challenging. ImageNet (0.734) and CTP (0.860) both showed a performance about 10 percentage points lower than the performance on other centers. However, both TC encoders were able to extract features that allowed the transfer of the learned classification to other centers (TC-swin: 0.943, TC-conv: 0.943). The performance when trained on data from the UBERN clinic is similar to that when trained on data from other clinics.
Looking at the performance on the other clinics, there is a slight discrepancy in mean accuracy and AUC score between CTP and TC. CTP consistently achieves scores approximately one percentage point higher than the TC-Swin encoder.
§.§ Lung Cancer
<ref> shows the results of the 5-fold cross-validation and the cross-center evaluation on the TCGA-NSCLC dataset. Despite the absence of lung cancer images in the pre-training, both TC encoders outperform the ImageNet weights in the cross-center and 5-fold cross-validation tasks. The CTP encoder achieves the highest mean accuracy of 0.902, which is about 2 percentage points better than the TC-Swin encoder at 0.878. The CTP encoder was pretrained using the TCGA-NSCLC dataset, and the performance achieved when evaluating CTP in our pipeline is similar to the external baseline reported by <cit.> (0.902 vs. 0.912). The TC-Swin encoder shows comparable performance to earlier results reported by <cit.>, with an accuracy of 0.878 vs. 0.884 and an AUC of 0.949 vs. 0.961.
As expected, both TC models show weaker performance when evaluated across centers, compared to mixed-center 5-fold cross-validation. Both TC encoders outperform the ImageNet-based encoder. The CTP encoder was pre-trained to extract features from this dataset and outperforms all other models in the cross-center evaluation.
As an additional experiment, the UNI <cit.> encoder was compared to the other models using the ABMIL aggregation head and showed similar results to the other encoders. The detailed results can be found in <ref>.
§ DISCUSSION
This paper demonstrates that MTL is an effective method for training foundation models on supervised signals in CPath. We tested the MTL-trained Tissue Concepts encoders on breast, prostate, colorectal, and lung cancers and found that the performance of the encoders is comparable to a self-supervised model. This comparable performance was achieved with only 6% of the data and resources: training TC required only 912,000 patches, compared to 15 million patches used in <cit.>'s self-supervised approach. Less data and shorter training enable faster development and research cycles.
Further development and scaling of the foundation models in CPath will inevitably contribute to extended CO2 emissions. The TC encoders were trained for 160 hours on a single Nvidia RTX A5000 in Europe. This corresponds to an estimated 18.91 kg of CO2 emissions <cit.>. Most self-supervised approaches exceed these emissions by orders of magnitude due to larger training datasets, longer convergence times, and the use of multiple GPUs. Training such a model for the same amount of time on 48 NVIDIA V100s, a common number of GPUs for training large foundation models, results in 2004 kg of CO2 during training.
Overall, depending on the GPU, one training of an MTL-based model produces only 0.9% to 2.25% of the CO2 emissions of an SSL-based training while providing similar performance.
Even though performance is generally comparable, there are two differences between TC and CTP in organ-specific performance. One is the difference on the prostate dataset. While a small number of patches from this dataset were present during the pre-training of TC, the vast majority of patches used during testing were unseen. However, many prostate patches were used during the pre-training of TC, as mentioned in Section <ref>. This dominant representation of prostate tissue during pre-training could potentially explain the better performance on this organ.
This raises the question of whether organ-specific fine-tuning of TC encoders can lead to better performance on specific organs when considering cross-center evaluation and testing on unseen datasets. This question is further justified by the differences in organ-specific performance between foundation models and external baselines.
Another clear difference in performance between CTransPath and Tissue Concepts was observed for lung cancer in the 5-fold cross-validation and cross-center validation. This difference most likely arises due to the out-of-domain nature of lung cancer tissue in TC pre-training. However, training on pathology images instead of real-world images improved performance when comparing Tissue Concepts weights to ImageNet weights. Scaling the Tissue Concepts pre-training with more data should be considered to overcome the current limitations. In particular, balancing different tasks, tissue types, and magnifications during pre-training is important to effectively scale and apply multi-task learning in the CPath domain.
In the present study, only frozen encoders were considered.
While fine-tuning these foundation models can lead to better adaptation to specific tasks, more research is needed on the balance between data specificity and variation, and on the amount of data required. In <cit.> we found that a fine-tuned foundation model encoder outperformed a fine-tuned ImageNet-based model on all tasks in terms of data efficiency and F1-score. Similar results can be expected for TC encoders and CPath-specific problems, but this remains an open question to be investigated further. Additionally, this study focuses on solving MIL problems, which occur frequently in diagnostic or prognostic tasks. While MIL tasks are clinically motivated, the individual performance of the segmentation and detection branches in the multi-task trained model needs to be considered in future evaluations.
Overall, the results show that the more domain-specific pre-training of Tissue Concepts encoders is advantageous for solving domain-specific tasks compared to the more general, multi-domain pre-trained encoders. A question that remains to be answered is the performance difference between foundation models and models trained on one specific organ. This will be part of future research.
§ CONCLUSION
In this paper, we propose to train a foundation model for CPath on supervised signals using multi-task learning to reduce the need for large corpora of data, computation time, and resources during training. The proposed method shows comparable results to a model trained using self-supervision while relying on a fraction of the training patches.
In addition, we found that, although trained on large amounts of data, existing models still exhibit a loss of performance when applied across centers. Better cross-center generalization is crucial to facilitate broader clinical application and needs to be further addressed in future research.
In order to effectively scale multi-task learning of foundation models, questions regarding the balance of tasks and tissue types need to be answered. Organ-specific fine-tuning might create robust and high-performing encoders for specific problems but needs to be further explored.
§ DATA AVAILABILITY
The trained models, scripts, and notebooks to reproduce the results will be made publicly available upon publication.
The pre-training framework to build the pipeline will be available as a python package.
The links to all datasets are mentioned in the appendix with a corresponding description.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial or personal relationships or interests that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGMENTS
This research was in part funded by the German ministry of education and research (BMBF) through the projects SynDICAD (01IS21067) and PROSurvival (01KD2213). The authors are responsible for the content of this publication.
The results shown in this paper are in part based upon data generated by the TCGA Research Network: <https://www.cancer.gov/tcga>.
§ TRAINING DATASETS
NCT-CRC-HE 100k
The NCT-CRC-HE 100k dataset consists of 100,000 non-overlapping patches from 86 H&E stained human colorectal cancer and normal WSIs. The patches are of size 224 × 224 and have a resolution of 0.5 microns per pixel (MPP). Nine different tissue classes were present in this classification dataset. The dataset is available under <https://zenodo.org/records/1214456>.
Conic
This dataset is based on the Lizard Dataset <cit.>. It contains 4981 H&E stained, non-overlapping patches of colon tissue at 256 × 256 pixels. The segmentation masks were generated using HoVer-Net. The dataset is available under <https://conic-challenge.grand-challenge.org/>.
CRAG
The CRAG dataset consists of 213 H&E stained patches from colon tissue of 38 WSIs. The patches are of size 1512 × 1512 at a resolution of 0.5 MPP. During preprocessing, the patches and masks were further cropped into non-overlapping 224 × 224 tiles at 1 MPP resolution. The original 173/40 patch train/val split was maintained. The dataset is available under <https://warwick.ac.uk/fac/sci/dcs/research/tia/data/mildnet> with a corresponding login.
SemiCol
The training data from the SemiCOL challenge is divided into two parts. One contains manual segmentation masks, which were used in the pre-training of the encoder. The other part contains weakly labeled data, which was described in <ref>. The 20 sparsely annotated slides were obtained from the university hospital in Cologne (Hamamatsu S360) and from the university hospital LMU in Munich (Leica GT450). Both sites scanned at 0.5 MPP and provided 10 slides each. Patches with annotations were extracted from the provided images and scaled to 1 MPP and 224 × 224 pixels.
More details about the data can be found here <https://www.semicol.org/data/>.
Arvaniti
The Arvaniti TMA dataset consists of prostate tissue microarrays, which were scanned at 0.23 MPP resolution at the University Hospital Zurich using a Hamamatsu NanoZoomer-XR Digital slide scanner. Patches of 1024 × 1024 were cut out from the foreground and scaled to 224 × 224. The major class of the mask was selected to serve as patch label.
The dataset is available under <https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/OCYCMP>.
Peso
The PESO dataset consists of 102 WSIs, which were scanned at 20x around 0.48 MPP with a 3DHistech Pannoramic Flash II 250 scanner. From the H&E stained prostate slides, patches of size 224 × 224 were extracted and scaled to 1 MPP. The corresponding masks were used as targets.
The dataset is available under <https://zenodo.org/records/1485967>.
Schoemig-Markiefka
This prostate dataset contains 6 sub-datasets, each containing 120,000 patches. All of the patches were scanned at approximately 0.25 MPP with at least 4 different scanners. Each sub-dataset contains 50,000 patches with tumor tissue, 50,000 patches with non-neoplastic glandular prostate tissue and 20,000 patches with non-glandular tissue. In our training, the patches were scaled accordingly. 5 of the sub-datasets were used for pre-training, while 1 was used for validation.
The dataset is available under: <http://zenodo.org>, Deposits: 4789576 (Dataset 1–4) and 4904569 (Datasets 5–6).
Panda
This prostate dataset was already described in <ref>. From the slides, patches of 224 × 224 were extracted. The labels for the patches were generated from the corresponding masks.
The dataset is available under <https://www.kaggle.com/c/prostate-cancer-grade-assessment/data>.
TUH
The Temple University digital pathology corpus consists of over 3505 annotated images of breast tissue. The labels range from artifact and background annotations to specific breast cancer annotations like invasive ductal carcinoma. All WSIs of the 296 patients are scanned at 0.5 MPP with corresponding annotations. From the 3505 slides, 136 exhibited quality annotations that were used in the training. A fixed number of 20 patches per WSI was sampled.
Details about the download can be found here <https://isip.piconepress.com/projects/nsf_dpath/html/downloads.shtml>.
Tiger
This breast cancer dataset contains H&E stained patches from tumor infiltrating lymphocytes. The patches were extracted from 151 TCGA-BRCA slides at 0.25 MPP. Manual annotations for 7 different classes were used as masks. Patches and masks were scaled to 224 × 224 pixels.
The dataset is available under <https://tiger.grand-challenge.org/Data/>.
BCSS
The BCSS dataset was included as part of the TIGER challenge. Annotations provided through the challenge were used as segmentation masks. These annotations were derived from the original BCSS dataset. All images were scaled in the same way as the patches from the TIGER dataset mentioned above. 124 slides from BCSS and NuCLS were annotated, where some of the annotations were grouped.
The dataset is available under <https://tiger.grand-challenge.org/Data/>.
BreakHis
The BreakHis dataset contains H&E stained patches from 82 patients at different magnifications. The patches are of size 700 × 460 and were scaled accordingly to 1 MPP resolution. The classes for this dataset were derived from the original dataset. A two-class benign/malignant split is possible; however, the more fine-grained 8-class labeling was used.
This dataset is available under <https://web.inf.ufpr.br/vri/databases/breast-cancer-histopathological-database-breakhis/>.
MiDoG
The MIDOG 2022 challenge dataset contains 405 tumor cases across six different tumor types. During training, the 44 lung cancer cases were excluded. The cases were patchified and scaled to 224 × 224 pixels at approximately 1 MPP.
The dataset is available under <https://zenodo.org/records/6547151>.
HubMap
The HubMap dataset contains 351 cases from different organs. Patches were extracted from the foreground containing annotation masks and scaled to 1 MPP of 224 × 224 pixels. Only large intestine was used during training.
The dataset can be found under <https://www.kaggle.com/datasets/dingyan/hubmap-data>.
§ MIL HEAD ARCHITECTURE
To aggregate the latent WSIs, as described in section <ref>, a small CNN with global max pooling, based on <cit.>, was used. The initial layer utilized a 1x1 convolution, reducing the latent WSIs' 768 channels to 16. Following this, 3x3 convolutions with a stride of 2 and padding of 1, doubling in depth each time, were applied. Instance normalization, leaky ReLU activation, and 10% 2D dropout were applied in the convolutional layers. The lowest feature pyramid of 64 channels was globally max pooled, followed by one hidden linear layer and a classification layer. This resulted in 19250 total trainable parameters. The convolutional layers were initialized by drawing from a Kaiming normal distribution with a fixed seed.
§.§ ABMIL Results and comparison to the UNI encoder
A different aggregation approach was proposed by <cit.> under the term "Attention-based Deep Multiple Instance Learning". For comparison, the following tables present the results obtained by using an attention-based multiple instance learning aggregation head instead of a maximum pooling aggregation head. The results were collected over three runs and averaged. Additionally, the experiments were repeated using the UNI <cit.> foundation model.
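For reference, a minimal attention pooling module of this kind is sketched below; it can replace the global max pooling over the 64-channel feature map, while the hidden dimension and all names are assumptions of the sketch rather than the exact configuration used in our experiments.

import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    # Drop-in replacement for global max pooling: attention-weighted sum over
    # spatial positions of the latent feature map, in the spirit of ABMIL.
    def __init__(self, in_channels=64, hidden_dim=32):
        super().__init__()
        self.V = nn.Linear(in_channels, hidden_dim)
        self.w = nn.Linear(hidden_dim, 1)

    def forward(self, feature_map):                       # (B, C, H, W)
        tokens = feature_map.flatten(2).transpose(1, 2)   # (B, H*W, C)
        attn = torch.softmax(self.w(torch.tanh(self.V(tokens))), dim=1)
        return (attn * tokens).sum(dim=1)                 # (B, C) pooled embedding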
Overall, we observed similar results between the different encoders and aggregation techniques across all organs.
|
http://arxiv.org/abs/2409.03581v1 | 20240905143650 | A visual proof of entropy production during thermalization with a heat reservoir | [
"Ramandeep S. Johal"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"physics.class-ph"
] |
[e-mail: ]rsjohal@iisermohali.ac.in
Department of Physical Sciences,
Indian Institute of Science Education and Research Mohali,
Sector 81, S.A.S. Nagar, Manauli PO 140306, Punjab, India
§ ABSTRACT
In this note, the equilibrium curve of a thermodynamic
system is used to depict entropy production in the process
of thermalization with a reservoir. For the given
initial and final equilibrium states of the system, the entropy
production is reduced when work is also
extracted during thermalization.
The case of maximum work extraction corresponds to a reversible
process. For less than optimal work extraction,
the lost available work is shown to be directly
proportional to the entropy produced.
A visual proof of entropy production
during thermalization with a heat reservoir
Ramandeep S. Johal
September 9, 2024
==================================================================================
Entropy production is the core concept underlying
the Second law which states that
irreversible or spontaneous processes always
increase the entropy of the universe.
An example is the flow of heat across a temperature
gradient such as when
a system thermalizes with a heat reservoir. Here,
even if the heat is transferred in a
quasi-static manner, there is a net increase
in the total entropy of the system plus
reservoir <cit.>.
For the case when the reservoir
is initially at a higher (lower) temperature than the system,
it implies that the increase (decrease) in the entropy
of the system is more than
the decrease (increase) in the entropy of the reservoir.
Now, the change in the entropy
of the system depends on its nature, unlike
for the reservoir. Assuming an ideal gas
system, the increase of total entropy
may be easily demonstrated using
the well-known logarithm inequality. Visual demonstrations
of the Second law for such irreversible processes
have also assumed an ideal-gas type behavior
for the system <cit.>.
In this paper, we present a diagram using the equilibrium curve of the system, which overcomes these limitations.
Since visual proofs often
help in easy comprehension of abstract
concepts, a demonstration of entropy
production based on a generic thermodynamic system
is desirable.
For a given amount of a
thermodynamic
system, the equilibrium state is described in terms of its internal energy U(S,V) as a function
of its entropy S and volume V <cit.>.
Then, the temperature of the
system is defined as T = (∂ U/∂ S)_V. One of the fundamental attributes
of the equilibrium state is that U(S,V) is a convex function of S at
constant V, which implies that the heat capacity
at constant volume is positive (C_V > 0).
Now, with an initial state at energy U_1, entropy S_1 and
temperature T_1, the system is placed in thermal contact
with a heat reservoir at temperature T_2 > T_1.
Heat flows from the reservoir to the system
till its temperature rises to T_2, corresponding
to a final energy U_2 and entropy S_2.
As the temperature is defined to be positive,
U_2 > U_1 implies S_2 > S_1, for a fixed volume.
Thus, the system is in thermodynamic equilibrium
in the
initial and the final state of the process.
The entropy of the system increases by Δ S = S_2-S_1, while
the energy increases by Δ U = U_2 - U_1 which
equals the heat exchanged with the reservoir,
Q_2= Q_1 = Δ U. On the other hand,
the decrease in the entropy of the reservoir is:
Δ S_ res = Q_2/T_2.
Thus, the net or total change in the entropy of
system plus reservoir is given by:
Δ S_ tot = Δ S - Δ S_ res.
The standard evaluation of the above quantities goes as follows.
As the process happens at a fixed
system volume V, we can write
Δ U = ∫_T_1^T_2 C_V dT and
Δ S = ∫_T_1^T_2 (C_V / T) dT.
Without assuming a specific form for
the function C_V(T) >0,
a general proof showing
Δ S_ tot >0 is
as follows. From the explicit expressions
given above, we can
write Eq. (<ref>) as
Δ S_ tot = ∫_T_1^T_2( 1/T - 1/T_2)
C_V dT.
Since T_2 > T_1, the integrand above is positive and
so is the value of the integral. A similar proof can be
constructed for the case T_2 < T_1 i.e. when
the system is cooled by the reservoir.
Fig. 1 shows the (convex) equilibrium curve U(S)
of the system at a given volume V. The entropy
changes involved in the process are
depicted as certain line segments
showing that Δ S > Δ S_ res,
and hence Δ S_ tot>0 due to Eq. (<ref>).
Further, the diagram is only based on
two properties: i) positivity of the temperature
and ii) convexity of the function U(S).
A major difference of the present diagram from
the previous ones is that it is not restricted
to the ideal gas systems.
The reader is invited
to draw the corresponding diagram
for the case where the reservoir is at a lower
temperature than the system (T_2 < T_1).
Note that the net rise in total entropy does
not require a complete thermalization with
the reservoir. Any amount of heat flow
across a finite temperature gradient
increases the total entropy.
Following Fig. 1,
we can as well analyze the case of incomplete thermalization
where the final state of the system is some intermediate
state lying on the equilibrium curve in between
the points (1) and (2).
The temperature of the system T' (T_1 < T' < T_2)
is again given by the slope of the tangent at that point.
It is easily seen that
Δ S_ tot >0 holds in this case too,
though the length of the segment denoting
entropy production is smaller than in the case
of complete thermalization. Thus,
we observe from the figure that the entropy
production attains its maximum value when the
system reaches thermal equilibrium with the
reservoir.
Suppose that instead of making a thermal contact, we couple the reservoir
and the system by means of a heat engine
for which these
act as heat source and heat sink, respectively.
The engine runs by executing certain heat
cycles—absorbing an amount of heat from the reservoir, converting
a part of it into work and rejecting the
rest of the heat to the system. The engine
produces useful work till the system comes to be
in thermal equilibrium with the reservoir.
Thus, the initial and the final states
of the system are the same as in the case of
thermalization above, yielding
the amount of heat rejected to the sink as
Q_1 = Δ U. Likewise, the change in system entropy
is equal to Δ S. Now, suppose that W ≥ 0 amount
of work is extracted by the end of this process.
Since the engine undergoes cycles,
the conservation of energy implies that
Q_2 = W+Q_1 amount of heat is absorbed from
the reservoir, which implies Q_2 ≥ Q_1.
This process is depicted in Fig. 2.
It is apparent that the entropy production here
is smaller in magnitude as compared to pure thermalization
where no work was extracted.
Fig. 2 also suggests that the magnitude of
work can be enhanced till
Δ S_ res = Δ S_ i.e.
when Δ S_ tot vanishes and
the engine becomes a reversible one.
Thus, we observe that
maximum work (W_ max)
is extracted when
thermalization proceeds as
a reversible process—with no entropy
production. In general, we have
W ≤ W_ max. In fact, using the similarity property of triangles in Fig. 2,
we can show that W_ max - W = T_2Δ S_ tot,
where Δ S_ tot is the entropy produced in the
process that extracts W amount of work. The difference
W_ max - W, called the lost available work or
the exergy destroyed, is directly proportional to
the entropy produced. This relation is
well known in the engineering parlance as
the Gouy-Stodola theorem.
Since the initial and final states of the system remain the same
irrespective of the amount of work extracted,
it follows that Q_2 = W + Δ U increases
in direct proportion to the work extracted,
with its maximum value being T_2 Δ S
(see Fig. 2).
Thus, we obtain W_ max = T_2 Δ S - Δ U.
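For completeness, the same relations follow algebraically from quantities
already introduced, without reference to the figure. Energy conservation
in the cyclic engine gives Q_2 = W + Δ U, while the reservoir loses
entropy Q_2/T_2, so that
Δ S_ tot = Δ S - Q_2/T_2 = Δ S - (W + Δ U)/T_2.
Solving for the work,
W = T_2 (Δ S - Δ S_ tot) - Δ U,
which is largest when Δ S_ tot = 0, reproducing
W_ max = T_2 Δ S - Δ U and, by subtraction,
W_ max - W = T_2 Δ S_ tot.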
Temperature-energy interaction diagrams, depicting
heat and work flows in reversible as well as irreversible heat cycles,
were introduced in the engineering
literature <cit.> and
also reported in the physics literature
<cit.>.
As pointed out by Bejan <cit.>,
these instances mirrored the almost parallel developments
in the techniques of “entropy generation minimization”
amongst the engineering community and that of
“finite-time thermodynamics” within the physics community.
It is interesting to note that these earlier
diagrams show changes in entropy by
angles, whereas
the present diagram depicts such changes by
line segments, while making use of the thermodynamic
equilibrium curve of the finite-system involved.
10
Callenbook H. B. Callen,
Thermodynamics and an introduction
to thermostatistics, 2nd ed. John Wiley & Sons Inc. (1985).
Bucher1993
Manfred Bucher; Diagram of the second law of thermodynamics. Am. J. Phys. 1 May 1993; 61 (5): 462–466. https://doi.org/10.1119/1.17242
Vallejo24a
Andrés Vallejo; A diagrammatic representation of entropy production. Am. J. Phys. 1 March 2024; 92 (3): 234–235. https://doi.org/10.1119/5.0167570
Brodinskii1973
V. M. Brodyansky. The exergy method of thermodynamic analysis, Moscow: Energy; 1973 (in Russian).
Bejan1977
A. Bejan; Graphic techniques for teaching engineering thermodynamics,
Mechanical Engineering News, pp. 26-28, May 1977.
Bucher1986
Manfred Bucher; New diagram for heat flows and work in a Carnot cycle. Am. J. Phys. 1 September 1986; 54 (9): 850–851. https://doi.org/10.1119/1.14431
Wallingford
J. Wallingford; Inefficiency and irreversibility in the Bucher diagram. Am. J. Phys. 1 April 1989; 57 (4): 379–381. https://doi.org/10.1119/1.16030
Bejan1994
A. Bejan, "Engineering Advances on Finite-Time Thermodynamics," Am. J.
Phys. 1 January 1994; 62 (1): 11-12.
|
http://arxiv.org/abs/2409.03285v2 | 20240905064954 | Open charm mesons in variational scheme and HQET | [
"K. K. Vishwakarma",
"Alka Upadhyay"
] | hep-ph | [
"hep-ph"
] |
Open charm mesons in variational scheme and HQET
K. K. Vishwakarma
Alka Upadhyay
=====================================================================
§ ABSTRACT
The charm (D) and charm-strange (D_s) mesons are investigated in a variational scheme using Gaussian trial wave functions. The Hamiltonian contains the Song and Lin potential with a constant term dependent on radial and orbital quantum numbers. The Gaussian wave function used has a dependence on radial distance r, radial quantum number n, orbital quantum number l and a trial parameter μ. These wave functions are used to compute the expectation of the Song and Lin potential dependent on r. The expectation of relativistic kinetic energy of both quarks is computed from the Fourier transform of the above Gaussian wave function in the linear momentum (p) space. The total energy (expectation of the Hamiltonian) for each state is minimized with respect to the parameter μ of the wave functions. The obtained spectra of D and D_s mesons are in good agreement with other theoretical models and available experimental masses. The mass spectra of D and D_s mesons are also used to plot Regge trajectories in the (J^P, M^2) and (n_r, M^2) planes. In the (J^P, M^2) plane, both natural and unnatural parity states of D and D_s mesons are plotted. The trajectories are parallel and equidistant from each other. The two-body strong decays of D and D_s are analyzed in the framework of heavy quark effective theory using computed masses. The strong decay widths are given in terms of strong coupling constants. These couplings are also estimated by comparing them with available experimental values for observed states. Also, the partial decay width ratios of different states are analyzed and used to suggest assignments to the observed states. We have assigned the spin-parity to newly observed D^*_s2(2573) as the strange partner of D^*_2(2460) identified as 1^3P_2, D_1^*(2760) and D^*_s1(2860) as 1^3D_1, D^*_3(2750) and D^*_s3(2860) as 1^3D_3, D_2(2740) as 1D_2, D_0(2550) as 2^1S_0, D^*_1(2660) and D^*_s1(2700) as 2^3S_1, D^*_J(3000) as 2^3P_0, D_J(3000) as 2P_1, D^*_2(3000) as 1^3F_2 states.
§ INTRODUCTION
The abundance of experimental investigation of hadrons containing heavy quarks has fueled many theoretical explorations in the past decade. A multitude of new findings are anticipated in this field with the current third run of LHC, which makes a busy path for the theoretical journey of heavy hadrons. The experimentally observed D- and D_s- states are listed in the review of particle physics (RPP) by Particle Data Group (PDG) <cit.>. Some of these states have confirmed J^P, while many others require more data to be confirmed. The S and P waves of non-strange and strange charm mesons are well-established states. The J^P of states D_0(2550), D^*_1(2600), D_2(2740) and D^*_3(2750) reported in the RPP <cit.> are determined to be 0^-, 1^-, 2^- and 3^-, respectively, by LHCb Collaboration <cit.>. The D_J(3000) and D_J^*(3000) were observed for the first time by LHCb <cit.> in a study of D^+π^-, D^0π^+ and D^*+π^- final states. The D_J(3000) meson was suggested to have unnatural parity and to decay in the D^*+π^- final state. The other D meson D_J^*(3000) was reported to be decaying in the D^+π^- and D^0π^+ final states, which suggests it to have natural parity. Later, in 2016, the LHCb Collaboration <cit.> also reported another new resonance D_2^*(3000) in the study of B^-→ D^+π^-π^- resonant substructures. The D_2^*(3000) was propounded to be inconsistent with the previously reported D_J^*(3000), inferring they are different states. However, the LHCb Collaboration also acknowledged that both resonances may have the same origin regardless of different masses and decay widths. In the strange charm (D_s) sector, the states D_sJ(2700)^+ and D_sJ(2860)^+ were observed by BABAR Collaboration <cit.> in 2006. The state D_sJ(2700)^+ was confirmed by Belle Collaboration <cit.> in 2008 with J^P=1^-. Also, the decays D^*_s1(2700)^+→ D^* K and D^*_sJ(2860)^+ → D^* K were observed and branching fractions relative to DK were measured by BABAR Collaboration <cit.> in 2009. The states D_sJ(2700)^+ and D_sJ(2860)^+ were later reconfirmed by LHCb <cit.> in 2012. The same study reported a new broad structure D_sJ(3040)^+ in the excited D_s region. The LHCb Collaboration <cit.> in 2014, by analyzing the B_s^0→D^0K^-π^+ decays, observed two resonance states D^*_s1(2860)^- with J^P=1^- and D^*_s3(2860)^- with J^P=3^- in the final state of D^0K^-. Therefore, the reported state D_sJ(2860) by BABAR <cit.> and LHCb <cit.> consists of two states with J^P values 1^- and 3^-. In 2016, LHCb Collaboration <cit.> reported the first observation of D_s2^*(2573)^+→ D^*+K_s^0 decay with a significance of 6.9σ and measured its branching fraction relative to the D^+K^0_s final state. Also, the presence of D^*_s3(2860)^+→ D^*+K_s^0 decay was reported by the LHCb. A new D_s meson state D_s(2590)^+ was observed by LHCb Collaboration <cit.> in B^0→ D^- D^+ K^+ π^- decay at a center-of-mass energy of 13 TeV. The D_s(2590)^+ was observed decaying into the D^+K^+π^- final state with J^P=0^-. The LHCb assigns this state as a strong candidate to be the D_s(2^1S_0)^+ state.
Recent experimental developments have allowed studying higher radially and orbitally excited D- and D_s- mesons. Theoretical understanding of these recently observed mesons is an ongoing endeavor, and many phenomenological models are being used to investigate their different properties. To study masses and decays of D- and D_s- mesons, the heavy quark effective theory <cit.>, relativistic quark model <cit.>, Regge-phenomenology <cit.>, sum rules <cit.>, lattice QCD <cit.>, effective chiral Lagrangian approach <cit.>, and potential models <cit.> are used. A detailed discussion of the theoretical understanding of D and D_s mesons will be given in later sections. The observed states D^*(2300), D_1(2430), D_1(2420) and D^*_2(2460) are considered to be 1P states with J^P=0^+, 1^+, 1^+ and 2^+, respectively; however other interpretations <cit.> are also possible.
The present paper uses Song and Lin's potential <cit.> to study the spectroscopy of the D and D_s mesons. The potential has previously been used to successfully study quarkonium structures in mass spectroscopy and their decays <cit.>. This potential is not a directly QCD-inspired potential; it contains a vector term inspired by the leptonic decay width of vector mesons and a scalar term for quark confinement at large distances. Many other potentials are also used to study the spectroscopy and decays of heavy-light mesons <cit.>. This paper extends the potential from the heavy-heavy system, for which Song and Lin introduced it, to the heavy-light system. Further, to study the strong decays of D and D_s mesons, we have used the heavy quark effective theory (HQET). This phenomenological effective theory combines heavy quark symmetry with chiral symmetry to analyze the strong decays of heavy hadrons through the pseudoscalar Goldstone bosons. The details are given in the following sections. This paper is organized in the following way: In section <ref>, we present the framework of the variational method with the employed potential and the spin-dependent potential for the hyperfine splitting. The formalism of heavy quark effective theory to study the strong decays is also given in this section. The results of the calculations are discussed in section <ref>. The conclusions of the present analysis are given in the last section <ref>.
§ FRAMEWORK
§.§ Song and Lin's Potential in variational scheme
The Hamiltonian of a system consisting of a heavy quark of mass m_Q and a light anti-quark of mass m_q is given below in Eq. (<ref>) with kinetic energies of both quarks and potential energy between them. This Hamiltonian treats the kinetic energy of both heavy and light quarks relativistically.
H=√(p^2+m_Q^2)+√(p^2+m_q^2)+V(r)
The Gaussian trial wave function is used to solve for the eigen-energy of the Hamiltonian.
R_nl(μ,r)= μ^3/2(2 Γ(n)/Γ(n+l+1/2))^1/2 (μ r)^l
× e^-μ^2 r^2/2 L_n-1^l+1/2(μ^2 r^2)
The wave function in the momentum space is the Fourier transform of Eq. (<ref>) given as
R_nl(μ,p)= (-1)^n/μ^3/2(2 Γ(n)/Γ(n+l+1/2))^1/2(p/μ)^l
× e^-p^2/2μ^2 L_n-1^l+1/2(p^2/μ^2)
The wave functions are normalized, and a variational parameter μ is used, which will be estimated for all states.
Song and Lin's potential <cit.> with a constant term depending on the radial and orbital angular momentum quantum numbers is given as
V(r)=-a/r^1/2+b r^1/2+V_0
where V_0=a_0+cn+dl. Here, a, b, a_0, c and d are potential parameters, and n and l are the usual radial and orbital quantum numbers. In the present study, the above potential in Eq. (<ref>) is applied to the D- and D_s- mesons in a variational method. The variational method we have employed to study the heavy-light mesons in the present work is inspired by Ref. <cit.>. Later, this method was used by Ref. <cit.> to study the masses and decay constants of heavy-light mesons. We have extended these studies in the sense that we use a more general wave function to study higher radially and orbitally excited states. The total energy of the system is given by the expectation of the Hamiltonian H as
⟨ψ | H | ψ⟩ = E(μ)
The variational parameter μ is estimated by minimizing the total energy d E(μ)/dμ=0 at μ=μ̅. The masses taken from <cit.> of 1S, 1P, and 2S states of D-mesons and 1S state of D_s-mesons are used to fix the potential parameters and masses of quarks (m_Q and m_q). The parameters are a=1.17 GeV^1/2, b=0.11 GeV^-1/2, a_0=-0.401 GeV, c=0.33 GeV, d=0.20 GeV, quark masses m_c=2.31 GeV, m_u/d=0.36 GeV and m_s=0.51 GeV. The heavy quark mass m_c is higher than in other models <cit.>. The present potential is not much explored for heavy-light systems, and the higher heavy quark mass may hint that this potential is better suited for bottom mesons. This will be analyzed in a future study. The obtained spin-averaged masses for D- and D_s- mesons and the corresponding variational parameter μ are given in Tables <ref> and <ref>, respectively. The spin-averaged masses M_SA,J of mesons are given as
M_SA,J=∑_J(2J+1)M_J/∑_J(2J+1)
The obtained masses are in fair agreement with the other theoretical and available experimental masses in Tables <ref> and <ref>. This is a good sign for the application of the Song and Lin potential to heavy-light systems. We also assess the consistency of the obtained masses by plotting Regge trajectories, which give good results. The variational parameter μ for all masses is also in good agreement with other models shown in Tables <ref> and <ref>.
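For illustration, a minimal numerical sketch of the 1S (n=1, l=0) minimization is given below in Python. It uses the Gaussian trial wave function above and the parameter values quoted in the text; the integration routine, the bounds of the one-dimensional search, and the variable names are choices of the sketch rather than of the actual calculation.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Potential parameters and quark masses quoted in the text (GeV-based units)
a, b = 1.17, 0.11
a0, c, d = -0.401, 0.33, 0.20
m_c, m_q = 2.31, 0.36                     # charm and light (u/d) quark masses

def R10_r(r, mu):
    # Normalized 1S radial trial wave function in position space
    return mu**1.5 * np.sqrt(4.0 / np.sqrt(np.pi)) * np.exp(-0.5 * mu**2 * r**2)

def R10_p(p, mu):
    # Its Fourier transform: the same Gaussian form in momentum space
    return np.sqrt(4.0 / np.sqrt(np.pi)) / mu**1.5 * np.exp(-0.5 * p**2 / mu**2)

def E_total(mu, n=1, l=0):
    # <H> = <sqrt(p^2+m_Q^2)> + <sqrt(p^2+m_q^2)> + <V(r)> for the trial state
    V0 = a0 + c * n + d * l
    kin = lambda m: quad(lambda p: R10_p(p, mu)**2 * np.sqrt(p**2 + m**2) * p**2,
                         0.0, np.inf)[0]
    pot = quad(lambda r: R10_r(r, mu)**2 * (-a / np.sqrt(r) + b * np.sqrt(r) + V0) * r**2,
               0.0, np.inf)[0]
    return kin(m_c) + kin(m_q) + pot

res = minimize_scalar(E_total, bounds=(0.1, 2.0), method="bounded")
print(f"mu_bar = {res.x:.3f} GeV, spin-averaged 1S mass = {res.fun:.3f} GeV")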
The hyperfine splitting is added perturbatively using a potential that distinguishes between the electric and magnetic parts of the interaction between the quark and anti-quark pair, also known as the one-gluon exchange potential (OGEP) <cit.>.
V_SD(r)=4/3α_s2/3m_Q m_q̅S_Q.S_q 4πδ(r)+4/3α_s1/m_Q m_q
×L.S/r^3
+(L.S_Q/2 m_Q^2+L.S_q/2 m_q^2)(-1/rdV(r)/dr+8/3α_s1/r^3)
+4/3α_s1/m_Q m_q1/r^3[3(S_Q.r̂)(S_q.r̂)-S_Q.S_q]
The 1^st term in spin-dependent potential V_SD(r) describes the spin-spin interaction between quark and light anti-quark pair. The 2^nd and 3^rd terms consist of the interactions of orbital angular momentum (L) with spin of heavy quark (S_Q) and anti-quark (S_q). The last term describes the tensor interactions, where tensor operator S_12=2[3(S_Q.r̂)(S_q.r̂)-S_Q.S_q] is given by <cit.>,
⟨S_12⟩= -1/(2l-1)(2l+3)
×[12⟨L.S⟩^2+6⟨L.S⟩-4⟨S^2⟩⟨L^2⟩]
The hyperfine splitting of nS states is given by
M(^3 S_1)-M(^1 S_0)= 8 α_s | R(0)|^2/9 m_1 m_2
The sum of individual spins of heavy quark and light anti-quark gives the total spin momentum S= S_Q + S_q. Also, the sum of the orbital angular momentum and the spin of the light antiquark makes the light angular momentum s_l = L + S_q. The total angular momentum is the sum of light angular momentum and heavy-quark spin, J = s_l + S_Q.
The notations for orbital and spin angular momentum are usual ⟨L^2⟩=l(l+1), and ⟨S^2⟩=s(s+1). For l=1 the states with s_l=l+s_q=3/2 and J=s_l+s_Q=2 are denoted as ^2s+1l_J= ^3P_2 and the states with s_l=l-s_q=1/2 and J=s_l-s_Q=0 are denoted as ^1P_0. States with different total spins (s) and the same total angular momentum (J) get mixed through the spin-orbit interaction potential given in Eq. (<ref>). The physical states for P-wave (l=1) with J=1 are given by the linear combinations of the states with J=s_l+s_Q=1 for s_l=l-s_q=1/2 (^3P_1) and J=s_l-s_Q=1 for s_l=l+s_q=3/2 (^1P_1) as
P_1 = ^1P_1 cosθ_nP + ^3P_1 sinθ_nP
P'_1 = - ^1P_1 sinθ_nP+ ^3P_1 cosθ_nP
Similar mixing expressions can be obtained for higher orbitally excited states (D and F waves). The higher mass state is denoted as P'_1, and the lower mass state is P_1. In the heavy-quark limit m_Q→∞, the light angular momentum s_l and s_Q are good quantum numbers. This mixing occurs due to the non-diagonal spin-orbit and tensor terms in Eq. (<ref>). The readers may go through these Ref <cit.> for more clarification. The mass shift due to the spin potential terms for P and D waves are given in <cit.>. For the mass shift in F wave, we give the relations below
Δ E_nF_4=α_s 34/9 m_Q m_q̅⟨1/r^3⟩+3/4(1/m_q̅^2+1/m_Q^2)
⟨-1/rdV(r)/dr+8/31/r^3⟩
Δ E_nF_2=-α_s 88/15 m_Q m_q̅⟨1/r^3⟩-(1/m_q̅^2+1/m_Q^2)⟨-1/rdV(r)/dr+8/31/r^3⟩
The expectation values are taken with the radial wavefunction R_nl(r) given in Eqn (<ref>) with μ=μ̅ given in Tables <ref> and <ref>. The mixing elements for F wave are given below:
⟨ ^7F_3|Δ E| ^7F_3⟩ =-α_s 6/21m_Q m_q̅⟨1/r^3⟩+(3/2m_q̅^2-27/14m_Q^2)⟨-1/rdV(r)/dr+8/31/r^3⟩
⟨ ^5F_3|Δ E| ^5F_3⟩ =-α_s 24/21m_Q m_q̅⟨1/r^3⟩+(-1/m_q̅^2-3/7m_Q^2)⟨-1/rdV(r)/dr+8/31/r^3⟩
⟨ ^7F_3|Δ E| ^5F_3⟩ =-α_s 2√(12)/21m_Q m_q̅⟨1/r^3⟩-√(21)/7m_q̅^2⟨-1/rdV(r)/dr+8/31/r^3⟩
Here the notation ^2s_l+1F_J is followed.
We have computed the mixing elements and their eigenvalues and mixing angles through equations like (<ref>), (<ref>), and (<ref>) for l=1, 2, and 3. The physical state masses for D and D_s mesons are enumerated in the Tables <ref>, <ref>, <ref>, and <ref> for l=0, 1, 2 and 3 states with their mixing angles mentioned in the caption for the corresponding states. The masses are in good agreement with the available experimental and other theoretical masses.
§.§ Strong Decays of charm- and charm-strange mesons
We use the heavy quark effective theory (HQET) to study the strong decays of charm (D) and charm-strange (D_s) mesons. In the heavy quark limit m_Q→∞, the heavy quark spin s_Q decouples from the light angular momentum s_l. Thus, these angular momenta (s_Q, s_l) and the total angular momentum J are separately conserved. This enables the classification of heavy-light mesons as doublets with different s_l. As discussed above, the total angular momentum is given as J=s_l± s_Q, and each doublet contains two states, called spin partners. The parity of the doublets is given as P=(-1)^l+1, where l is the orbital angular momentum. For l=0 and s_l^P=1/2^-, the doublet comprises two states with J^P=(0^-, 1^-), denoted as (P, P^*). For l=1, we have two doublets with s_l^P=1/2^+ and 3/2^+. The two doublets having J^P_s_l=(0^+,1^+)_1/2 and J^P_s_l=(1^+,2^+)_3/2 are denoted as (P^*_0, P'_1) and (P_1, P_2^*), respectively. For l=2, we have s_l^P=3/2^- and 5/2^-. The doublets with J^P_s_l=(1^-,2^-)_3/2 are denoted as (P^*_1,P_2) and with J^P_s_l=(2^-,3^-)_5/2 are denoted as (P'_2,P_3^*). Carrying on to l=3, we have s_l^P=5/2^+ and 7/2^+. The first doublet is denoted as (P^*_2,P_3) with J^P_s_l=(2^+,3^+)_5/2. The second doublet is denoted as (P'_3, P^*_4) with J^P_s_l=(3^+,4^+)_7/2. The classification of higher radial excitations is done in the same manner; to denote the radial excitation, we use the above notations with a tilde (P̃,P̃^*, ...). The effective Lagrangians can be constructed by introducing effective fields in each doublet to study the strong decay dynamics of the heavy-light meson doublets. The field H_a corresponds to the s_l^P=1/2^- doublet (a=u,d,s), S_a and T_a to s_l^P=1/2^+ and s_l^P=3/2^+, X_a and Y_a to s_l^P=3/2^- and s_l^P=5/2^-, Z_a and R_a to the doublets s_l^P=5/2^+ and s_l^P=7/2^+, where a=u,d,s is the index for light quarks. The effective fields of the mentioned doublets are given below as
H_a =1+v/2[P^*_aμγ^μ-P_aγ_5]
S_a =1+v/2[P'^μ_1aγ_μγ_5-P^*_0a]
T^μ_a =1+v/2{P^*μν_2aγ_ν-P_1aν√(3/2)γ_5 [g^μν-1/3γ^ν(γ^μ-v^μ) ] }
X^μ_a =1+v/2{P^μν_2aγ_5γ_ν-P^*_1aν√(3/2)[g^μν-1/3γ^ν(γ^μ-v^μ) ] }
Y^μν_a =1+v/2{P^*μνσ_3aγ_σ-P^'αβ_2a√(5/3)γ_5[g_α^μg_β^ν-1/5g^ν_βγ_α(γ^μ-v^μ)-1/5g^μ_αγ_β(γ^ν-v^ν) ] }
Z^μν_a =1+v/2{P^μνσ_3aγ_5γ_σ-P^*αβ_2a√(5/3)[g^μ_αg^ν_β-1/5g^ν_βγ_α(γ^μ+v^μ)-1/5g_α^μγ_β(γ^ν+v^ν) ] }
R^μνρ_a =1+v/2{P^*μνσ_4aγ_5γ_σ-P'^αβτ_3a√(7/4)
×[g^μ_αg^ν_βg^ρ_τ-1/7g^ν_βg^ρ_τγ_α(γ^μ-v^μ)-1/7g^μ_αg^ρ_τγ_β(γ^ν-v^ν)-1/7g^μ_αg^ν_βγ_τ(γ^ρ-v^ρ) ] }
where v is the meson four-velocity, conserved in the strong interactions. The operators P annihilate the corresponding mesons with four-velocity v. ref <cit.>. The operators include a factor of √(m_Q) and have dimension 3/2. These effective fields of mesons interact with each other through the pseudoscalar Goldstone bosons. The octet of light pseudoscalar mesons is introduced by defining the ξ=e^ιℳ/f_π and Σ=ξ^2, where the matrix ℳ incorporating the π, K and η mesons fields given as
ℳ=
[ √(1/2)π^0+√(1/6)η π^+ K^+; π^- -√(1/2)π^0+√(1/6)η K^0; K^- K̅^0 -√(2/3)η ]
The matrix accommodates the field of light pseudoscalar π, K, and η mesons. To describe the transitions F→ HM, where F = H, S, T, X, Y, Z, R are effective fields of heavy meson doublet and M is a light pseudoscalar meson, at the leading order approximation in the light meson momentum and heavy quark mass expansion, the interaction Lagrangian terms are given as
ℒ_HH =g_HH[ H̅_a H_b γ_μγ_5 𝒜^μ_ba]
ℒ_SH =g_SH[H̅_a S_b γ_μγ_5 𝒜^μ_ba]+H.c.
ℒ_TH =g_TH/Λ_χ[H̅_aT^μ_b(ι̇D_μ𝒜+ι̇D𝒜_μ)_baγ_5 ]+H.c.
ℒ_XH =g_XH/Λ_χ[H̅_a X^μ_b(ι̇D_μ𝒜+ι̇D𝒜_μ)_baγ_5 ]+H.c.
ℒ_YH =1/Λ^2_χ[H̅_aY^μν_b[k^Y_1{D_μ,D_ν}𝒜_λ+k^Y_2(D_μD_λ𝒜_ν+D_νD_λ𝒜_μ)]_baγ^λγ_5]+H.c.
ℒ_ZH =1/Λ^2_χ[H̅_a Z^μν_b[k^Z_1{D_μ,D_ν}𝒜_λ+k^Z_2(D_μD_λ𝒜_ν+D_νD_λ𝒜_μ)]_baγ^λγ_5]+H.c.
ℒ_RH =1/Λ^3_χ[H̅_a R^μνρ_b[k^R_1{D_μ,D_ν,D_ρ}𝒜_λ+k^R_2({D_μ,D_ρ}D_λ𝒜_ν+{D_ν,D_ρ}D_λ𝒜_μ+{D_μ,D_ν}D_λ𝒜_ρ)]_baγ^λγ_5]+H.c.
The definitions of identities in the above equations are D_μ=∂_μ+𝒱_μ; {D_μ,D_ν}=D_μD_ν+D_νD_μ; {D_μ,D_ν,D_ρ}=D_μD_νD_ρ+D_μD_ρD_ν+D_νD_μD_ρ+D_νD_ρD_μ+D_ρD_μD_ν+D_ρD_νD_μ. The axial-vector (𝒜) and vector (𝒱) currents are defined as
𝒜_μ=1/2(ξ^†∂_μξ-ξ∂_μξ^†)
𝒱_μ=1/2(ξ^†∂_μξ+ ξ∂_μξ^†)
The chiral symmetry-breaking scale Λ_χ is set to 1 GeV. The strong coupling constants involved in their corresponding transitions are g_HH, g_SH, g_TH, g_XH, g_YH=k^Y_1+k^Y_2, g_ZH=k^Z_1+k^Z_2, g_RH=k^R_1+k^R_2. These couplings can be estimated by the experimental decay widths. The f_π is taken to be 132 MeV. The decay widths from the above effective fields and the interaction Lagrangians are given below <cit.>. The strong decay width for two-body decay of heavy-light (D or D_s) mesons by emitting pseudoscalar light mesons (π, η, and K) according to the Lagrangians ℒ_HH, ℒ_SH, ℒ_TH, ℒ_XH, ℒ_YH, ℒ_ZH and ℒ_RH is given as
Γ =1/2J+1∑p_M/8π M_i^2|𝒜|^2
p_M = √((M_i^2-(M_f+M_M)^2)(M_i^2-(M_f-M_M)^2))/2M_i
where, 𝒜 is the amplitude of the transition, J is total angular momentum of initial heavy meson, ∑ represents the sum over all polarization vectors, M_i is the mass of initial heavy meson, M_f represents the mass of final heavy meson, M_M and p_M are mass and momentum of emitted light pseudoscalar meson. The explicit expressions for decay widths of transition of heavy-light mesons from different fields (H, S, T, X, Y, Z, and R) to H-field for different channels are given below <cit.>.
Decays of H-field (0^-,1^-) of S-wave
Γ(1^-→ 0^-)= C_M g_HH^2/6π f_π^2M_f/M_i|p⃗_M|^3
Γ(1^-→ 1^-)= C_M g_HH^2/3π f_π^2M_f/M_i|p⃗_M|^3
Γ(0^-→ 1^-)=C_M g_HH^2/2π f_π^2M_f/M_i|p⃗_M|^3
Decays of S-field (0^+,1^+) of P-wave
Γ(0^+→0^-)=C_M g_SH^2/2π f_π^2M_f/M_i[M^2_M+|p⃗_M|^2]|p⃗_M|
Γ(1^+→ 1^-)=C_M g_SH^2/2π f_π^2M_f/M_i[M^2_M+|p⃗_M|^2]|p⃗_M|
Decays of T-field (1^+,2^+) of P-wave
Γ(1^+→ 1^-)=C_M2g_TH^2/3π f_π^2 Λ_χ^2M_f/M_i|p⃗_M|^5
Γ(2^+→ 0^-)=C_M4g_TH^2/15π f_π^2Λ_χ^2M_f/M_i|p⃗_M|^5
Γ(2^+→ 1^-)=C_M2g_TH^2/5π f_π^2Λ_χ^2M_f/M_i|p⃗_M|^5
Decays of X-field (1^-,2^-) of D-wave
Γ(1^-→ 0^-)=C_M4g_XH^2/9π f_π^2 Λ_χ^2M_f/M_i[M_M^2+|p⃗_M|^2]|p⃗_M|^3
Γ(1^-→ 1^-)=C_M2g_XH^2/9π f_π^2 Λ_χ^2M_f/M_i[M_M^2+|p⃗_M|^2]|p⃗_M|^3
Γ(2^-→ 1^-)=C_M2g_XH^2/3π f_π^2 Λ_χ^2M_f/M_i[M_M^2+|p⃗_M|^2]|p⃗_M|^3
Decays of Y-field (2^-,3^-) of D-wave
Γ(2^-→ 1^-)=C_M4g_YH^2/15π f_π^2 Λ_χ^4M_f/M_i|p⃗_M|^7
Γ(3^-→ 0^-)=C_M4g_YH^2/35π f_π^2 Λ_χ^4M_f/M_i|p⃗_M|^7
Γ(3^-→ 1^-)=C_M16g_YH^2/105π f_π^2 Λ_χ^4M_f/M_i|p⃗_M|^7
Decays of Z-field (2^+,3^+) of F-wave
Γ(2^+→ 0^-)=C_M4g_ZH^2/25π f_π^2 Λ_χ^4M_f/M_i[M_M^2+|p⃗_M|^2]|p⃗_M|^5
Γ(2^+→ 1^-)=C_M8g_ZH^2/75π f_π^2 Λ_χ^4M_f/M_i[M_M^2+|p⃗_M|^2]|p⃗_M|^5
Γ(3^+→ 1^-)=C_M4g_ZH^2/15π f_π^2 Λ_χ^4M_f/M_i[M_M^2+|p⃗_M|^2]|p⃗_M|^5
Decays of R-field (3^+,4^+) of F-wave
Γ(3^+→ 1^-)=C_M36g_RH^2/35π f_π^2 Λ_χ^6M_f/M_i|p⃗_M|^9
Γ(4^+→ 0^-)=C_M16g_RH^2/35π f_π^2 Λ_χ^6M_f/M_i|p⃗_M|^9
Γ(4^+→ 1^-)=C_M4g_RH^2/7π f_π^2 Λ_χ^6M_f/M_i|p⃗_M|^9
where C_M is the factor dependent on the light pseudoscalar mesons, C_π_+=C_K^+=1, C_π^0=C_K_s=1/2, C_η=1/6 or 2/3 (for initial meson cq̅ or cs̅ respectively). The decay widths of different candidate states for the experimentally observed D- and D_s- meson family are calculated and shown in Table <ref> for 1P and 1D states, Table <ref> for 2S and 2P states, Table <ref> for 3S and 2D states, and Table <ref> for 1F states. In the next section, we will discuss the results from this section, and some meaningful conclusions will be drawn later.
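For readers who wish to evaluate the above width formulas numerically, a minimal Python sketch is given below (it is not part of the original calculation). It implements the two-body momentum p_M and the Γ(2^+→ 0^-) expression of the T-field in units of the unknown coupling g_TH^2; the masses used for the example D_2^*(2460)^0→ D^+π^- are assumed PDG-style values rather than the calculated masses of the tables.

import math

def p_M(M_i, M_f, M_M):
    # Momentum of the emitted light pseudoscalar in M_i -> M_f + M_M (all masses in GeV).
    return math.sqrt((M_i**2 - (M_f + M_M)**2) * (M_i**2 - (M_f - M_M)**2)) / (2.0 * M_i)

f_pi, Lam_chi, C_pi_plus = 0.132, 1.0, 1.0   # GeV, GeV, C_M factor for a charged pion

# Assumed PDG-style masses in GeV (the text uses its own calculated masses instead).
M_i, M_f, M_pi = 2.4611, 1.86966, 0.13957    # D_2*(2460)^0, D^+, pi^-

p = p_M(M_i, M_f, M_pi)
# Gamma(2^+ -> 0^-) = C_M * 4 g_TH^2 / (15 pi f_pi^2 Lam_chi^2) * (M_f / M_i) * p^5
width_over_g2 = C_pi_plus * 4.0 / (15.0 * math.pi * f_pi**2 * Lam_chi**2) * (M_f / M_i) * p**5
print(f"p_M = {1e3 * p:.1f} MeV, Gamma(D_2* -> D+ pi-)/g_TH^2 = {1e3 * width_over_g2:.1f} MeV")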
§ RESULTS AND DISCUSSION
The calculated spin-averaged masses using Song and Lin's potential for D(cq̅) and D_s(cs̅) mesons are listed in Tables <ref> and <ref>. The masses with hyperfine splittings are also calculated and given for D mesons in Tables <ref>, <ref> and for D_s mesons in Tables <ref>, <ref>. The masses are compared with other theoretical models and experiments and are in good agreement. The mixing angles for nP states are in the lower range of different models <cit.>, while for nD states they are in the higher range. The mixing angles for nF states are very small. The hyperfine splittings of the D meson for the S-wave, Δ(1S)=145 MeV, Δ(2S)=62 MeV, Δ(3S)=42 MeV, Δ(4S)=32 MeV and Δ(5S)=28 MeV, are in fair agreement with the available experimental <cit.> and theoretical <cit.> splittings, where Δ(1S)=M(^3S_1)-M(^1S_0). For the strange charm D_s meson, Δ(1S)=80 MeV, Δ(2S)=62 MeV, Δ(3S)=42 MeV, Δ(4S)=32 MeV and Δ(5S)=28 MeV are also in fair accordance with other available masses <cit.>.
In the non-strange sector, the 1S mesons D, D^* and the 1P mesons D^*_0(2300), D_1(2430), D_1(2420) and D_2^*(2460) are well established as 1S_1/20^-, 1S_1/21^-, 1P_1/20^+, 1P_1/21^+, 1P_3/21^+ and 1P_3/22^+ states, respectively, as given in the PDG <cit.>. In the strange sector, the 1S mesons D_s(1968), D_s^*(2112) and the 1P mesons D_s0^*(2317), D_s1^'(2460) and D_s1(2536) are also well established. The calculated masses of the mesons are used to construct the Regge trajectories in the (J, M^2) and (n_r, M^2) planes with the following definitions:
J=α_(n) M^2+α_(n)(0)
n_r=β_(L) M^2+β_(L)(0)
Eq. (<ref>) is in the (J, M^2) plane and Eq. (<ref>) is in the (n_r, M^2) plane. α and β are the slopes, and α_0 and β_0 are the intercepts of the Regge trajectories. Also, n=1,2,3,4,5 is the radial quantum number, and L=S, P, D, F represents the orbital excitations.
The Regge trajectories for unnatural (P=(-1)^J+1) and natural (P=(-1)^J) parity states are shown in Figs. <ref> and <ref> for D mesons and in Figs. <ref> and <ref> for D_s mesons. The slopes and intercepts of the corresponding Regge trajectories are also shown in Fig. <ref>. All trajectories are parallel and equidistant to each other, and the calculated masses also fit nicely with the linear trajectories.
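For illustration, a linear fit of the form (<ref>) can be reproduced with a few lines of Python; the sketch below uses assumed experimental-style masses for the natural-parity D^*, D_2^*(2460), D_3^*(2750) trajectory (not the calculated masses of the tables), so the resulting slope and intercept are only indicative.

import numpy as np

# Assumed experimental-style masses (GeV) for a natural-parity D-meson trajectory: 1^-, 2^+, 3^-.
J = np.array([1.0, 2.0, 3.0])
M = np.array([2.010, 2.461, 2.763])

# Fit J = alpha * M^2 + alpha_0 in the (J, M^2) plane.
alpha, alpha_0 = np.polyfit(M**2, J, 1)
print(f"alpha = {alpha:.3f} GeV^-2, alpha_0 = {alpha_0:.3f}")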
In the following sections, many properties like masses, decay width, partial width ratios, and strong coupling constants are discussed and compared with the available experimental and theoretical values. This discussion and analysis are used to give the assignment to experimentally observed states. Further, some properties that can be observed in the future by experimental facilities are discussed to clear the status of some observed states.
§.§ D_2^*(2460) and D_s2^*(2573)
The calculated mass of D(1^3P_2) state is 2482 MeV, which is close to mass 2461.1 ± 0.7 MeV of D_2^*(2460) given in PDG <cit.>. Using the total decay width Γ(D^*_2(2460))=47.3 ± 0.8 MeV, we get g_TH=0.385±0.003, which is in accordance with g_TH computed in other Refs. 0.43±0.01<cit.>, 0.40±0.003<cit.>, 0.40±0.01<cit.>, 0.43±0.05 <cit.>. We found the ratio
Γ(D^*_2(2460)→ D^+π^-)/Γ(D^*_2(2460)→ D^*+π^-)=2.14,
which is in agreement with the average experimental value 1.52±0.14 <cit.> and other theoretical estimates 2.29 <cit.>, 2.26 <cit.>, 1.96 <cit.>, 1.70 <cit.>. For D^*_2(2460), the BaBar collaboration <cit.> measured the ratio Γ(D^+π^-)/[Γ(D^+π^-)+Γ(D^*+π^-)]=0.62±0.03±0.02, while we have estimated this ratio to be 0.68. In the D_s sector, the calculated mass of the D_s(1P_3/22^+) state is 2581 MeV, which is very close to the D_s2^*(2573) state mass 2569.1±0.8 given in the PDG <cit.>. Using the calculated coupling g_TH, we find that the total decay width of 19.26 MeV is also close to 16.9±0.7 from the PDG <cit.>. The ratio Γ(D_s2^*(2573)→ D^*0K^+)/Γ(D_s2^*(2573)→ D^0K^+)=0.11 is in good accordance with the experimental value Γ(D_s2^*(2573)→ D^*0K^+)/Γ(D_s2^*(2573)→ D^0K^+)<0.33 <cit.>. D_2^*(2460) and D_s2^*(2573) are spin partners of each other, and the analyzed ratios support the given assignment.
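As a simple cross-check of the quoted ratios, the sketch below evaluates the same T-field ratio with the coupling g_TH cancelling; the masses are assumed PDG-style values (an assumption made here for illustration), which is presumably why the output lands near the other theoretical estimates (≈2.29) rather than exactly at the 2.14 quoted above.

import math

def p_M(M_i, M_f, M_M):
    return math.sqrt((M_i**2 - (M_f + M_M)**2) * (M_i**2 - (M_f - M_M)**2)) / (2.0 * M_i)

# Assumed PDG-style masses in GeV: D_2*(2460)^0, D^+, D^{*+}, pi^-.
M_D2, M_D, M_Dst, M_pi = 2.4611, 1.86966, 2.01026, 0.13957

p1 = p_M(M_D2, M_D, M_pi)     # D^+ pi^-
p2 = p_M(M_D2, M_Dst, M_pi)   # D^{*+} pi^-

# Gamma(2^+ -> 0^-) ~ (4/15)(M_D/M_D2) p1^5 and Gamma(2^+ -> 1^-) ~ (2/5)(M_D*/M_D2) p2^5;
# C_M, g_TH, f_pi and Lambda_chi cancel in the ratio.
ratio = (4.0 / 15.0) * M_D * p1**5 / ((2.0 / 5.0) * M_Dst * p2**5)
print(f"Gamma(D pi)/Gamma(D* pi) = {ratio:.2f}")
print(f"Gamma(D pi)/[Gamma(D pi) + Gamma(D* pi)] = {ratio / (1.0 + ratio):.2f}")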
§.§ D^*_1(2760)^0 and D^*_s1(2860)^±
The calculated mass of the D(1^3D_1) state is 2759 MeV. In Table <ref>, Dπ is the dominant decay mode for this state. The candidate available for this state is D^*_1(2760)^0 with mass 2781±18±13 MeV and decay width 177±32±21 MeV reported by LHCb <cit.>. The total decay width of D^*_1(2760)^0 gives the coupling g_XH to be 0.243±0.026. The D^*_1(2760)^0 is also studied as a mixing of the 2^3S_1-1^3D_1 states by other theoretical models <cit.>. In the D_s family, the mass of the D_s(1^3D_1) state is 2868 MeV, which is in good agreement with other theoretical models given in Table <ref>. The D^*_s1(2860)^± state reported by LHCb <cit.>, with mass 2859±12±6±23 MeV and decay width 159±23± 27±72 MeV, is a candidate for the D_s(1^3D_1) state. From the total decay width of D^*_s1(2860)^±, we get the coupling g_XH to be 0.194±0.022. The average of the coupling g_XH estimated from the D^*_1(2760)^0 and D^*_s1(2860)^± states is 0.219±0.017, which is close to the previous predictions 0.19±0.049 <cit.>, 0.12 <cit.>, 0.41±0.02 <cit.> from the bottom meson studies.
§.§ D^*_3(2750) and D^*_s3(2860)^±
The estimated mass of the D(1^3D_3) state is 2785 MeV. The D^*_3(2750) resonance can be interpreted as the D(1^3D_3) state with observed mass 2763.1±3.2 MeV and decay width 66±5 MeV <cit.>. The decay width of D(1^3D_3) from Table <ref>, 493.89 g_YH^2, is used to estimate the coupling g_YH=0.366±0.014. The computed ratio R_π(D^*_3(2750))=1.88 for D^*_3(2750) is in agreement with other HQET studies such as <cit.>, but not with other models, e.g., R_π(D^*_3(2750))=1.1 <cit.>. The ratio R_π is defined as
R_π(D(1^3D_3))=Γ(D(1^3D_3)→ Dπ)/Γ(D(1^3D_3)→ D^*π)
Further experimental observations are needed to understand the nature of D^*_3(2750). In the strange sector, the mass computed for the D_s(1^3D_3) state is 2895 MeV. The D^*_s3(2860)^± has a mass 2860.5±2.6±6.5 MeV, which makes it a candidate for the D_s(1^3D_3) state. Inserting the value of g_YH, the decay width of the D_s(1^3D_3) state is estimated to be 66±5 MeV, which is close to the experimental value 53±7±7 MeV. We computed the ratio R_K(D^*_s3(2860)), given as
R_K(D^*_s3(2860))=Γ(D^*_s3(2860)→ D^*K)/Γ(D^*_s3(2860)→ DK)=0.43
which suggests that the DK channel is more dominant than D^*K. The experimental value of the ratio is R_K(D^*_s3(2860))=1.04±0.17±0.20 <cit.>, which is higher than our result. The results of other calculations, R_K(D^*_s3(2860))=0.73 <cit.> and R_K=0.802 <cit.>, also suggest that further experimental observation of D^*_s3(2860)^±, together with the ratio R_K(D^*_s3(2860)), is needed to clarify its behaviour. Given the above agreement in mass and total decay width of D_s3^*(2860), we tentatively assign it as the 1^3D_3 state of the Y-field in HQET.
§.§ D(1D_2) and D(1D^'_2)
The masses of D(1D_2) and D(1D^'_2) are 2773 MeV and 2776 MeV, respectively, given in Table <ref>; these are the physical states arising from the mixing of the 1^1D_2 and 1^3D_2 states, as discussed above. From the above-computed values of g_XH and g_YH, the decay widths of 1D_2 and 1D^'_2 are found to be 110±23 MeV and 35.7±2.7 MeV, respectively. The D_2(2740)^0, with mass 2747±6 MeV and decay width 88±19 MeV <cit.>, may be a candidate for the D(1D_2) or D(1D^'_2) state.
From Table <ref>, the 1D^'_2 state is the narrower state, which is also supported by <cit.>. Taking D_2(2740)^0 as the 1D^'_2 state, we get the coupling g_YH=0.57±0.06. Combining this with the earlier value of g_YH, we get an average of g_YH=0.468±0.031, which is close to the estimated values 0.42 <cit.>, 0.70 <cit.>, 0.49±0.039 <cit.>, 0.61±0.05 <cit.>.
In the strange sector, D_s(1D_2) and D_s(1D^'_2) states have masses 2881 MeV and 2886 MeV, and decay widths estimated at 126±20 MeV and 50±7 MeV. The experimental observation of these states in strange sector is currently missing, which we hope to see in the future.
§.§ D_0(2550)^0 and D^*_1(2600)
The radially excited states are an essential part of the meson spectrum for understanding the quark-antiquark pair dynamics. The D(2^1S_0) and D(2^3S_1) states have calculated masses of 2563 MeV and 2625 MeV, respectively. The experimental candidate for the D(2^1S_0) state is D_0(2550)^0 with mass 2549±19 MeV and decay width 165±24 MeV <cit.>. The candidate for the D(2^3S_1) state is D^*_1(2600) with mass 2627±10 MeV and decay width 141±23 MeV. In Table <ref>, the state 2S_1/20^- decays into D^*π^- with ℬℛ=66.13%, which supports the experimental observation of D_0(2550)^0 in the D^*π^- channel by the LHCb collaboration <cit.> and the BaBar collaboration <cit.>. The coupling g̃_HH is found to be 0.375±0.027, considering the D_0(2550)^0 as the D(2^1S_0) state. The D^*_1(2600) is reported in both the Dπ and D^*π channels by the LHCb collaboration <cit.> and the BaBar collaboration <cit.>. From the experimental decay width of D^*_1(2600) and the total width of 2S_1/21^- in Table <ref>, we get the coupling g̃_HH=0.262±0.021. The theoretical ratio R_π(2^3S_1) is higher than the reported value 0.32±0.02±0.09 <cit.>. Many theoretical models have studied the D^*_1(2600) as a state of 2S-1D mixing <cit.>. In the D_s-meson sector, the computed masses of D_s(2^1S_0) and D_s(2^3S_1) are 2682 MeV and 2730 MeV, with a splitting (m(2^3S_1)-m(2^1S_0)) of 48 MeV, which is in agreement with the other theoretical models given in Table <ref>. The D_s1^*(2700)^± is a candidate for the 2^3S_1 state with mass 2714 MeV and decay width 122±10 MeV. Using the decay width of D_s1^*(2700)^±, the coupling g̃_HH is calculated as 0.255±0.010. The theoretically calculated ratio R_K(D_s1^*(2700))=0.96 is in excellent agreement with the experimental value 0.91±0.13±0.12 reported by the BaBar collaboration <cit.>.
The average g̃_HH from the above-mentioned values is 0.269±0.009. The total decay width of D_s(2^1S_0) is calculated as 63±4 MeV using the average value of g̃_HH. The newly reported D_s0(2590)^+, with mass 2591±6±7 MeV and decay width 89±16±12 MeV, is suggested to be a strong candidate for the D_s(2^1S_0) state by the LHCb collaboration <cit.>. Our calculated decay width for this state is close to the lower limit of the experimental value. We tentatively assign D_s0(2590)^+ as the candidate for the D_s(2^1S_0) state. The D_s0(2590)^+ is observed to decay into the D^+K^+π^- final state. Further experimental observations may shed some light on the other decay channels and ratios of the D_s0(2590)^+.
§.§ D_J(3000)^0, D^*_J(3000)^0 and D^*_2(3000)^0
The D^*_J(3000)^0 and D_J(3000)^0 states were observed by LHCb collaboration in 2013 <cit.>. The D^*_J(3000)^0 is reported to be decaying in the Dπ channel with mass 3008.1±4.0 MeV and decay width of 110.5±11.5 MeV. The D_J(3000)^0 is observed in the D^*+π^- channel with mass 2971.8±8.7 MeV and decay width 188.1±44.8 MeV.
As the LHCb collaboration <cit.> recommended, the D_J(3000)^0 is assigned unnatural parity and the D^*_J(3000) natural parity. The suitable states for D_J(3000)^0 in this mass range are 3^1S_0(3S_1/20^-), 2P_1(2P_1/21^+), 2P^'_1(2P_3/21^+), 1F_3(1F_5/23^+), 1F^'_3(1F_7/23^+). The D^*_J(3000)^0 resonance, in turn, is analysed as 3^3S_1(3S_1/21^-), 2^3P_0(2P_1/20^+), 2^3P_2(2P_3/22^+), 1^3F_2(1F_5/22^+), and 1^3F_4(1F_7/24^+).
The masses of 3^1S_0 and 3^3S_1 are 3070 MeV and 3112 MeV, respectively. The mass difference Δ(3S-2S) between the 3^1S_0 and 2^1S_0 states is about 500 MeV, consistent with other models. Considering both experimental resonances D_J^*(3000) and D_J(3000) as spin partners, the splitting Δ(3S)=m(3^3S_1)-m(3^1S_0)=42 MeV is close to the mass splitting of 36±10 MeV between the experimental masses of D_J^*(3000) and D_J(3000). However, the calculated masses of the 3^1S_0 and 3^3S_1 states are about 100 MeV higher than the measured masses of D_J^*(3000) and D_J(3000). Also, the ratio R_π(3^3S_1)=0.62 for the 3^3S_1 state is in conflict with the observation of D^*_J(3000) in the Dπ channel. Considering the D_J(3000) state as a candidate for the 3^1S_0 state, the coupling g̃̃̃_HH = 0.150±0.018 is estimated.
The masses of 2^3P_0, 2P_1, 2P^'_1 and 2^3P_2 are 2881 MeV, 2947 MeV, 2956 MeV, and 2968 MeV, respectively. The 2^3P_0 state decay width given in Table <ref> supports the assignment of D^*_J(3000) as 2^3P_0. The combined branching fraction of the D^+π^- and D^0π^0 channels is more than 60%, and using the total decay width, we can estimate the coupling g̃_SH to be 0.115±0.006. Analyzing the D^*π channel for the D_J(3000), we can assign it as the 2P_1 state, and the coupling g̃_SH obtained from the total decay width is 0.162±0.019. Among the 2P states, the 2^3P_0 and 2P_1 states are broad, and 2P^'_1 and 2^3P_2 are the narrow states. The D_J(3000) may also be the 2P^'_1 state, and the coupling g̃_TH is then calculated to be 0.205±0.024 by using the total decay widths. Also, the D^*_J(3000) state can be considered as the 2^3P_2 state, and this gives the value of the coupling g̃_TH=0.134±0.007. The average of this coupling is 0.139±0.007. The ratio R_π(2^3P_2)=1.08 may be a good indication that the reported Dπ channel for D_J^*(3000) corresponds to the 2^3P_2 state. If the D_J^*(3000) is the 2^3P_2 state, then we propose that D_J^*(3000) should be searched for in the D^*π channel. Also, the branching ratio BR(D_s K) is about 30% for both 2^3P_0 and 2P_1, whereas the branching ratio BR(D_s K) is only about 15% for both 2P^'_1 and 2^3P_2. This can also be verified by future experiments.
The 1F states are also candidates for both D_J(3000) and D^*_J(3000).
The calculated masses of the 1F states are 3070 MeV, 3070 MeV, 3077 MeV, and 3078 MeV for the 1^3F_2, 1F_3, 1F^'_3, and 1^3F_4 states, respectively. The 1^3F_2 and 1^3F_4 states have a dominant Dπ decay channel, and the ratio R_π(1^3F_2)=2.77 is greater than R_π(1^3F_4)=1.79. Thus, the D^*_J(3000)^0 is tentatively preferred to be the 1^3F_2 state, and using its experimental decay width, we calculate the coupling g_ZH=0.173±0.009. The possibility of D^*_J(3000)^0 being the 1^3F_4 state cannot be ruled out. The 1F_3 and 1F^'_3 states are possible candidates for D_J(3000), as they have D^*π decay channels with branching ratio BR(D^*π)≈ 80%. Using the total decay widths, we estimate the coupling g_ZH=0.284±0.034 and the coupling g_RH=0.178±0.021. The branching fraction BR(D_s K) for 1^3F_2 and 1F_3 is 15%. The branching fraction BR(D_s K) for the 1F^'_3 and 1^3F_4 states is about 8%. These fractions may be observed in future experiments to clarify the properties of both D_J(3000) and D^*_J(3000).
In the year 2016, the LHCb collaboration <cit.> reported D^*_2(3000)^0 with mass 3214±29 MeV and decay width 186±38 MeV. LHCb proposed D^*_2(3000)^0 to be a different state than the D^*_J(3000) reported previously <cit.>, but it cannot be ruled out that both states are the same. The mass and decay width of D^*_2(3000)^0 are higher than those of the D^*_J(3000) resonance. The preferred assignments suggested for the D^*_2(3000)^0 resonance by the LHCb collaboration are of the 2P or 1F family. The masses for these states are already discussed above, and treating the D^*_2(3000)^0 and D^*_J(3000)^0 as different resonances, we assign the D^*_2(3000)^0 as a 1^3F_2 state. This assignment is favored as the mass of the 1^3F_2 state is higher than the mass of the 2^3P_2 state. Using the total decay width of D^*_2(3000)^0 and the 1^3F_2 state width from Table <ref>, we compute the coupling g_ZH to be 0.225±0.023. In view of D^*_2(3000)^0 being assigned as 1^3F_2, we prefer the D^*_J(3000)^0 as a 2P state. To distinguish between the 1^3F_2 and 2^3P_2 states, we estimate the ratio Γ(D_sK)/Γ(D^*_sK) for the 1^3F_2 state to be 3.73 and for the 2^3P_2 state to be 1.60. Also, the ratios R_π(1^3F_2)≈ 2.5 and R_π(2^3P_2)≈1 may be helpful in distinguishing between the states in the experimental reports.
We tentatively assign D^*_2(3000)^0 as the 1^3F_2 state on the basis of the masses and the decay ratio R_π. Considering D^*_J(3000) and D_J(3000) as spin partners, we tentatively assign them as the 2P(0^+,1^+) states, with the coupling g̃_SH estimated to be 0.115±0.006. If the D^*_J(3000) and D_J(3000) resonances are not spin partners, then the assignment of D_J(3000) as the 3S(0^-) state can be favored, with coupling g̃̃̃_HH=0.150±0.018.
§.§ D_sJ(3040)^±
The BaBar collaboration <cit.> in 2009 reported a resonance D_sJ(3040)^± in the inclusive production of D^*K in e^+e^- annihilation, with mass 3044±8^+30_-5 MeV and decay width of 239±35^+46_-42 MeV. The D_sJ(3040) is a candidate for the 2P_1 and 2P^'_1 states. The masses of these states are 3054 MeV and 3062 MeV, given in Table <ref>, which are close to the measured mass. The decay widths of the 2P states are given in Table <ref> with the possible channels. The decay width of the 2P_1 state is broader than that of the 2P^'_1 state. Considering the D_sJ(3040) as the 2P_1 state and using the total decay widths, we estimate g̃_SH=0.159±0.012. The average from all the calculated values of g̃_SH is 0.127±0.005, in accordance with other estimations 0.12±0.03 <cit.>, 0.10±0.015 <cit.>. The couplings g̃_SH from the non-strange 2P states and their strange partner are in very good agreement with other theoretical calculations, as shown above. We assign the D_sJ(3040) resonance as the D_s(2P_1) state of the S-field. Table <ref> shows the couplings calculated in the present study.
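The averaging procedure behind the quoted combined couplings is not spelled out above; assuming a standard inverse-variance weighted mean (an assumption made here for illustration), the short sketch below reproduces values very close to the quoted 0.269±0.009 for g̃_HH and 0.127±0.005 for g̃_SH.

import math

def weighted_average(values, errors):
    # Inverse-variance weighted mean and its uncertainty.
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# g~_HH estimates quoted above, from D_0(2550)^0, D*_1(2600) and D*_s1(2700)^±.
print(weighted_average([0.375, 0.262, 0.255], [0.027, 0.021, 0.010]))
# g~_SH estimates quoted above, from D*_J(3000) as 2^3P_0, D_J(3000) as 2P_1 and D_sJ(3040) as 2P_1.
print(weighted_average([0.115, 0.162, 0.159], [0.006, 0.019, 0.012]))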
§ CONCLUSIONS
We have analyzed the heavy-light charm mesons in the potential of Song and Lin, with the calculations and results for masses and decay widths presented in the above sections. The 1S, 1P, and 2S masses of charm mesons are fitted to estimate the potential parameters. The spectrum of the charm mesons is given in Tables <ref>, <ref>, <ref>, and <ref>. This potential was proposed as an improvement over the Cornell potential by modifying the vector term using the leptonic decay widths of vector mesons. The application of Song and Lin's potential to the heavy-light systems studied here can improve the understanding of quark-antiquark confinement. The charm quark mass (m_c) estimated in the present study is on the higher side. This characteristic of the present potential will also be explored for bottom mesons. The masses obtained for different states are compared with the results of other theoretical models and with experimental masses. The hyperfine splittings are computed, together with the mixing angles for the corresponding states. The strong decay widths are analyzed in the framework of HQET. The fields of the doublet states of mesons with the interaction Lagrangians, as well as the final decay width formulae, are given. The total decay widths of different states with the possible channels are computed in terms of the corresponding couplings and shown in Tables <ref>, <ref>, <ref>, and <ref>. Assignments are made for the resonances observed at experimental facilities such as LHCb and BaBar. The total decay widths of the observed states are used to estimate the strong couplings. The branching ratios are also estimated and compared with available theoretical and experimental results. The tentative assignments of states are also discussed in the present study. We hope for further observations of resonances like D_2(2740), D_J(3000), D^*_J(3000), D_2^*(3000), and D_sJ(3040) in different decay channels, clarifying their properties.
§ ACKNOWLEDGMENT
The authors gratefully acknowledge the financial support by the Department of Science and Technology (SERB/F/9119/2020), New Delhi, and the Senior Research Fellowship (09/0677(11306)/2021-EMR-I) by the Council of Scientific and Industrial Research, New Delhi.
On concentric fractal spheres and spiral shells
Efstathios Konstantinos Chrontsios Garitsis
§ ABSTRACT
We investigate dimension theoretic properties of concentric topological spheres, which are fractal sets emerging both in pure and applied mathematics. We calculate the box dimension and Assouad spectrum of such collections, and use them to prove that fractal spheres cannot be shrunk into a point at a polynomial rate. We also apply these dimension estimates to quasiconformally classify certain spiral shells, a generalization of planar spirals in higher dimensions. This classification also provides a bi-Hölder map between shells and constitutes an addition to a general programme of research proposed by J. Fraser in <cit.>.
Keywords. Fractal, dynamical systems, spiral, Assouad spectrum, quasiconformal mappings.
§ INTRODUCTION
Concentric objects occur in various areas of pure and applied mathematics. In complex dynamics, the Fatou components of entire transcendental maps are typically of fractal boundary and converge to infinity (see for instance <cit.>). Therefore, under a conformal inversion that keeps most geometric properties intact, the components of the map can be seen as fractals “centered" at 0. In the context of harmonic analysis, concentric Euclidean spheres are closely associated to weights of measures that lie in the important Muckenhoupt class <cit.>. In fact, polynomially-concentric sphere collections, i.e., unions of spheres S_n centered at 0 with radii equal to n^-p, for some p>0, provide one of the very few examples of bounded sets whose distance function lies in the Muckenhoupt class, for certain exponents.
A set S⊂^d is called a topological sphere, if there is a homeomorphism f:^d→^d which maps the unit Euclidean sphere centered at 0, denoted by S(0,1)⊂^d, onto S. A topological sphere can potentially be smooth, or extremely “rough".
A quantitative classification between the two cases is a difficult problem of many interpretations. As a result, it has received attention within numerous fields, such as topology, analysis on metric spaces and geometric group theory (see <cit.> for a detailed survey on the topic). Despite such difficulties on the theoretical front, concentric topological spheres provide examples of concentric fractals that emerge in various applied areas, such as mathematical physics <cit.>, <cit.>, machine learning <cit.>, and dynamic scaling of mathematical models <cit.>.
It might seem natural that very rough fractals, which are disjoint and centered at a fixed point, would have to decrease a lot in size while they approach the center. More specifically, in concentric topological sphere collections, where the spheres are allowed to be fractals of high fractal dimension, they need to converge to their center significantly fast in order to avoid overlaps. Hence, for any concentric collection of topological spheres, there are certain restrictions on either their dimension, or on how fast they accumulate around their center. One of our main results is addressing this phenomenon for fractal spheres, by studying certain dimension notions for the whole collection. For what follows, we fix an integer d≥ 2, and we denote by B(0,1)⊂^d the open unit ball in ^d centered at 0. We formalize the notion of concentric sphere collections in the following definition.
Let S_0⊂^d be a topological sphere with S_0= f(S(0,1)), for some homeomorphism f:^d →^d. For x_0∈ f(B(0,1)) and a sequence of similarities {g_n}_n∈, set S_n:=g_n(S_0). We say that the union S:=⋃_n∈_0 S_n is a concentric sphere collection centered at x_0 and generated by S_0, if there are c_1, c_2∈ (0,1], n_0∈, and a strictly decreasing sequence {a_n}_n∈ converging to 0 such that:
* d_H(S_n,{x_0})=c_1 a_n, and
* (S_n,S_n+1)=c_2 (a_n-a_n+1),
for all n≥ n_0.
Moreover, if there is p>0 so that a_n=n^-p, for all n≥ n_0, then we say that S is a p-(polynomially) concentric sphere collection centered at x_0 and generated by S_0, and we denote it by S_p=S_p(x_0,S_0).
An example of a planar concentric sphere collection is depicted in Figure <ref>, where S_0 is the Koch snowflake (see <cit.>). Note that in Definition <ref>, (S_n,S_n+1) denotes the usual Euclidean distance between the sets S_n,S_n+1, and d_H denotes the Hausdorff distance (see Section <ref> for the definition). The latter is a natural way to talk about convergence of a collection of sets into a point (see for instance <cit.>, <cit.>).
Therefore, one could perceive Definition <ref> as the process of “shrinking" the topological sphere S_0 into a point x_0, using contracting similarities g_n of ratio approximately a_n.
Such problems have attracted a lot of interest from various areas of mathematics, and they have been extensively studied both from a deterministic <cit.> and a probabilistic <cit.> point of view.
One of the main contributions of this paper is to emphasize that dimension-theoretic techniques can be crucial to the understanding of this phenomenon. Namely, we adopt such techniques in order to prove that a topological sphere S_0 of non-trivial box dimension _B S_0 (see Section <ref> for the definition) cannot give rise to polynomially concentric sphere collections of any degree.
Let S_0⊂^d be a topological sphere with _B S_0∈ (d-1,d]. For any p>0, there is no p-concentric sphere collection generated by S_0.
It should be noted that it is always possible to shrink a topological sphere S_0 into a point through a sequence of similarities as described in Definition <ref>, as long as the sequence a_n converges fast enough to 0. See Section <ref> for a detailed discussion. Theorem <ref> establishes that sequences of the form n^-p (or of slower rate of convergence to 0) are simply not fast enough to shrink a topological sphere of non-trivial box dimension. Hence, all concentric fractal spheres encountered in the aforementioned applications <cit.> either “smoothen" out as they approach the common center, or they shrink at an exponential rate.
On the other hand, there are still various examples of topological spheres in ^d of box dimension equal to d-1 that generate polynomially concentric collections. For instance, the boundary of any convex set homeomorphic to B(0,1)⊂^d can be polynomially shrunk into its center of mass.
The proof of Theorem <ref> is achieved by assuming towards contradiction that for d_0>d-1 and p>0, a p-concentric sphere collection S_p(x_0,S_0)⊂^d exists. We then use dimension theoretic tools, and specifically the Assouad spectrum notion {_A^θ S_p: θ∈ (0,1)} introduced by Fraser-Yu <cit.> (see Section <ref> for the definition), in order to show that such a collection would be of dimension larger than d, surpassing the dimension of the ambient space ^d and leading to a contradiction. A partial version of Theorem <ref> can be proved by calculating the box dimension of such a collection, which results in a contradiction only for certain values of p. Hence, the Assouad spectrum is essential in achieving the result in full generality with this approach. Theorem <ref> is the first application of the Assouad spectrum in this context of “rigidity" of fractal collections.
Moreover, the property of the Assouad spectrum to trace “finer" geometric data is also crucial for “classification" problems under families of mappings. We establish Assouad spectrum estimates for concentric sphere collections (see Section <ref>), which we apply to classify certain objects, up to classes of homeomorphisms.
An instance of a planar set resembling a collection of concentric circles is the spiral of the form
(ϕ):={ϕ(t) e^it: t>1 },
where ϕ:[1,∞)→(0,∞) is a continuous, strictly decreasing function, with lim_t→∞ϕ(t)=0. Indeed, such spirals share many geometric properties with concentric collections of circles of the form ∪_n∈ S(0,ϕ(n)), while they hold a prominent role in fluid turbulence <cit.>, dynamical systems <cit.>, and even certain types of models in mathematical biology <cit.>. Moreover, they provide examples of “non-intuitive" fractal behavior (see <cit.>), while they have also been extensively studied due to their unexpected analytic properties. For instance, Katznelson-Nag-Sullivan <cit.> demonstrated the dual nature of such spirals, lying in-between smoothness and “roughness", as well as their connection to certain Riemann mapping questions. The relation between the rate of convergence of ϕ(t) for t→∞ and the existence of Lipschitz and Hölder parametrizations has also been extensively studied by the aforementioned authors in <cit.>, by Fish-Paunescu in <cit.>, and by Fraser in <cit.>. In particular, Fraser in the latter paper focuses on spirals that resemble polynomially concentric circles, i.e., spirals where ϕ(t)=t^-p, for p>0, and emphasizes further the “fractal" nature of these objects, due to the peculiar behavior they exhibit in relation to certain dimension notions.
A natural higher dimensional generalization of spirals is the notion of spiral shells in ^3. They exhibit similar behavior to spirals, which has attracted the interest of researchers in physics <cit.> and mathematical biology <cit.>. For p>0, we consider polynomial (spiral) shells of the form
_p:= { (u^-pcos u sin v, u^-psin u sin v, u^-pcos v)∈^3: u, v∈ [1,∞) }⊂^3.
The resemblance to the polynomial spirals mentioned above is perhaps already evident from the definition. In addition, spiral shells have a very similar geometric structure to p-polynomially concentric spheres in ^3. See Figure <ref>.
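This resemblance can be made concrete with a short numerical sketch (not part of the proofs below): every point of _p with angular parameter u has Euclidean norm exactly u^-p, so consecutive turns u, u+2π, u+4π,… of the shell sit on spheres whose radii decay polynomially, just like the spheres of a p-concentric collection.

import math

def shell_point(u, v, p):
    r = u ** (-p)
    return (r * math.cos(u) * math.sin(v), r * math.sin(u) * math.sin(v), r * math.cos(v))

p = 0.5
for u in (1.0, 1.0 + 2 * math.pi, 1.0 + 4 * math.pi):
    x, y, z = shell_point(u, 2.0, p)
    print(f"u = {u:6.3f}: |point| = {math.sqrt(x*x + y*y + z*z):.6f}, u^-p = {u ** (-p):.6f}")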
One significant restriction in the analytic and geometric study of these objects is the lack of planar tools in ^3. For instance, Liouville's theorem states that all conformal maps are Möbius transformations in ^3 (see for instance <cit.>). This was one of the motivations for the definition of quasiconformal maps in ^d, d≥ 2 (see Section <ref> for formal definition). More specifically, while conformal maps transform infinitesimal balls into infinitesimal balls, quasiconformal maps transform them into infinitesimal ellipsoids of globally bounded eccentricity. The bound on said eccentricity is called dilatation. If the dilatation of a quasiconformal map f is at most some uniform constant K≥ 1, we say that f is a K-quasiconformal map (see Section <ref>). The theory of quasiconformal maps has a rich history, which still continues to develop actively, especially due to its many connections and applications within various fields of mathematics. We refer to the books by Lehto and Virtanen <cit.>, Väisälä <cit.>, and Heinonen <cit.> for thorough expositions on these maps and their importance in , ^d and metric spaces, respectively.
Distinguishing sets up to quasiconformal maps, particularly at the level of the dilatation, is in general a hard problem.
For instance, determining under what assumptions a topological sphere is a quasiconformal image of the usual unit sphere S(0,1) is a long-standing open problem, with many implications in geometric group theory and topology (see <cit.> for a survey on this topic and connections to Cannon's conjecture, see also <cit.>). For planar topological spheres, i.e., simple closed curves homeomorphic to the unit circle, this classification problem was solved by Ahlfors in <cit.>, and their dimension-theoretic properties were fully determined by Smirnov in <cit.>. In higher dimensions only partial progress has been made; see <cit.> and the references therein.
Given the ties of spiral shells to several applications, as well as their relation to concentric topological spheres, it is natural to try and classify them under quasiconformal maps. Indeed, this is the outcome of our second main result. By applying the Assouad spectrum estimates we establish for concentric spheres (see Theorem <ref>), along with the dimension distortion properties of quasiconformal maps, proved by Tyson and the author in <cit.>, we achieve the desired classification of polynomial shells.
For p≥ q>0, let _p and _q be two polynomial spiral shells. There is a K-quasiconformal mapping F:^3→^3 with F(_p)=_q if, and only if, K≥ p/q.
Note that quasiconformal maps are also locally Hölder continuous, with locally Hölder continuous inverse (see Section <ref> for definitions). Therefore, in the proof of Theorem <ref>, the quasiconformal map we construct, mapping _p onto _q, is a bi-Hölder map between polynomial spiral shells, with the Hölder exponents quantitatively depending on the contracting ratios of the shells p, q. This contributes to a general direction proposed by J. Fraser in <cit.>, on bi-Hölder classification of sets in ^d. We discuss this connection further in Section <ref>.
This paper is organized as follows. Section <ref> includes the notation we follow, and all the relevant dimension notions and their properties. In Section <ref> we prove precise estimates for the box dimension and Assouad spectrum of p-concentric sphere collections, and apply similar dimension-theoretic arguments to prove Theorem <ref>. In Section <ref> we recall the formal definition of quasiconformality, and state the relevant dimension distortion result we need for quasiconformal maps. Moreover, we demonstrate the shared dimension-theoretic properties between polynomially concentric spheres and spiral shells, which allows us to apply the quasiconformal dimension distortion result in order to classify polynomial shells as stated in Theorem <ref>. Section <ref> contains further remarks and potential future directions that relate to this work.
§.§ Acknowledgements
The author wishes to thank Vasiliki Evdoridou, Manisha Garg and Carlos Mudarra for the interesting and motivating discussions on this work.
§ BACKGROUND
We first establish the notation we follow.
Given two positive functions h_1, h_2:(0,∞)→ (0,∞), we write h_1(r) ≲ h_2(r) if there is C>0, independent of r, such that h_1(r)≤ C h_2(r), for all r>0. We call C>0 the comparability constant of the relation h_1(r) ≲ h_2(r).
We write h_1(r) ≳ h_2(r) if h_2(r) ≲ h_1(r). Lastly, we write h_1(r) ≃ h_2(r) if both h_1(r) ≲ h_2(r) and h_1(r) ≳ h_2(r) hold. Note that h_1(r) ≃ h_2(r) in general implies that there are C_1, C_2>0, potentially distinct, such that C_1 h_1(r)≤ h_2(r)≤ C_2 h_1(r) for all r>0. However, this implies that there is a single C=max{ C_1^-1, C_2 }>0 such that C^-1 h_1(r)≤ h_2(r)≤ C h_1(r) for all r>0. In this case, we say that C is the comparability constant of the relation h_1(r) ≃ h_2(r).
Given x∈^d, r>0, we denote by B(x,r) the open ball centered at x of radius r, and by S(x,r)=∂ B(x,r) the corresponding sphere in ^d. Recall that a set S⊂^d is called a topological sphere, if there is a homeomorphism f:^d→^d with S=f(S(0,1)).
Given two compact sets A,B⊂^d, we denote their usual distance by (A,B), and their Hausdorff distance by
d_H(A,B)=max{sup{({a},B): a∈ A}, sup{(A,{b}): b∈ B}}.
We next define the dimension notions we need, along with their properties. We refer to the books by Falconer <cit.> and Fraser <cit.> for thorough expositions on these topics.
Let E ⊂ℝ^d be a non-empty bounded set. For r>0, we denote by N_r (E) the smallest number of r-cubes needed to cover E, i.e., cubes that lie in the axes-oriented r-mesh of ^d and intersect E (see <cit.>).
The lower and upper box dimensions of E are defined by
_B E = lim inf_ r → 0 log N_r (E)/-log r and _B E = lim sup_ r → 0 log N_r (E)/-log r,
respectively. If the lower and upper box dimensions are equal, we call the common value the box dimension of E and denote it by _B E.
Note that we have _B E=d_0 if, and only if, for all ϵ>0 there is r_0=r_0(ϵ)∈ (0,1) such that
r^-d_0+ϵ≤ N_r(E)≤ r^-d_0-ϵ,
for all r<r_0.
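A minimal computational illustration of the covering number (an approximation based on a finite sample, not an exact evaluation of N_r(E)) counts the occupied cubes of the axes-oriented r-mesh; for the unit circle in ^2 the count grows roughly like r^-1, in line with box dimension 1.

import math

def mesh_count(points, r):
    # Approximate N_r(E): number of r-mesh cubes hit by a finite sample of E.
    return len({tuple(math.floor(x / r) for x in pt) for pt in points})

circle = [(math.cos(t), math.sin(t)) for t in (k * 1e-4 for k in range(62832))]
for r in (0.1, 0.01, 0.001):
    print(r, mesh_count(circle, r))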
Recall that a map g:^d→^d is a similarity if g(x)=c_g 𝐀x+t for all x∈^d, where c_g>0, 𝐀∈𝒪(,d) is a real orthogonal matrix, and t∈^d. We call the scalar c_g the similarity ratio of g. A standard property of the covering number N_r(E) that we need is the way it is distorted by similarities. More specifically, if g is a similarity map with similarity ratio c, then
N_r(g(E))=N_r/c(E),
for all r∈ (0,1) (see, for instance, <cit.> Proposition 2.5). Given that _B E=d_0, using the above, along with (<ref>) results in
(r/c)^-d_0+ϵ≤ N_r(E)≤(r/c)^-d_0-ϵ,
for all r<min{c r_0, r_0}. This gives a relation between covering numbers of similar topological spheres that is crucial in our arguments in Section <ref>.
For an arbitrary (not necessarily bounded) set E ⊂^d, the Assouad dimension of E is
_A E = inf{α>0 : ∃ C>0 such that N_r(B(x,R) ∩ E) ≤ C (R/r)^α for all x∈ E and all 0<r<R}.
While we do not use the Assouad dimension definition directly, we include it for the sake of completeness, and to relate it to the notion of the Assouad spectrum.
The Assouad spectrum, introduced by Fraser and Yu <cit.>, is a one-parameter family of metrically defined dimensions which interpolates between the upper box dimension and the Assouad dimension. Specifically, the Assouad spectrum of a set E ⊂^d is a collection of values {_A^θ E:0<θ<1}, where
_A^θ E=inf{α>0 : ∃ C>0 such that N_r(B(x,r^θ) ∩ E) ≤ C (r^θ/r)^α for all x∈ E and all r∈(0,1)},
for all θ∈ (0,1). We may also refer to the value _A^θ E as the (θ)-Assouad spectrum of E, for a specific θ∈ (0,1).
The Assouad spectrum essentially captures the growth rate of the covering number N_r(B(x,R)∩ E) for scales 0<r≤ R<1 related by R = r^θ. The map θ↦_A^θ E is continuous at all 0<θ<1 <cit.>, and for a fixed bounded set E⊂^d we have
_A^θ E →_B E as θ→ 0, _A^θ E →_qAE as θ→ 1,
where _qAE denotes the quasi-Assouad dimension of E, a variant of Assouad dimension introduced by Lü and Xi <cit.>. We always have _qA E ≤_A E, and equality holds in many situations, including all sets of interest in this paper. We refer to <cit.> for proofs of the above properties.
In Section <ref> we use a slightly modified version of the Assouad spectrum, where the relationship R=r^θ between the two scales of the Assouad dimension definition is relaxed to an inequality R≥ r^θ. Namely, given E⊂^d and θ∈ (0,1), we set
_A,reg^θ E:=inf{α>0 : ∃ C>0 such that N_r(B(x,R) ∩ E) ≤ C (R/r)^α for all x∈ E and all 0<r≤ R<1 with R≥ r^θ},
This modification leads to the notion of upper, or regularized Assouad spectrum, denoted by {_A, reg^θE:0<θ<1}. See <cit.> and <cit.> for more information. Tyson and the author focused on this notion in <cit.>, as it aligns more naturally with properties of the Assouad dimension that are fundamental in dimension distortion results (see also <cit.>). For a fixed θ∈ (0,1) the key relationship between the respective two values in the two spectra (see <cit.>) is that
_A, reg^θ E = sup_0<θ'<θ_A^θ'E.
It is immediate from (<ref>) that if _A^θ E is increasing in θ, then _A, reg^θ E=_A^θ E. This is particularly useful, since _A^θ E is often easier to calculate, while the dimension distortion results we use in Section <ref> are proved for the regularized Assouad spectrum.
Moreover, the relation between the different dimensions for a fixed bounded set E follows by (<ref>) and the respective definitions. Namely,
_B E ≤_A^θ E≤_A, reg^θ E ≤_qA E ≤_A E,
for all θ∈ (0,1). We also state an upper bound on the Assouad spectrum involving the box dimension, which reduces certain calculations in arguments within Section <ref> (see <cit.>).
Given a bounded non-empty set E⊂^d with d_0:=_B E, we have
_A^θ E≤min{d_0/1-θ, d },
for all θ∈ (0,1).
§ DIMENSIONS AND IMPOSSIBLE CONCENTRIC SPHERES
In this section, we first establish certain dimension-theoretic properties for polynomially concentric spheres. Namely, we estimate the box dimension and Assouad spectrum of such collections. We then modify the arguments we employed while proving the aforementioned estimates, in order to show the restrictions of polynomially concentric sphere collections with regard to the generating sphere, which yields Theorem <ref>.
Let p>0 and S_0=f(S(0,1))⊂^d be a topological sphere, for some homeomorphism f:^d→^d, with d_0:=_B S_0≤ d. Note that the topological dimension of S(0,1) is equal to d-1, and, since the topological dimension is preserved under homeomorphisms, so is the topological dimension of S_0. Thus, we immediately have d_0≥ d-1 (see, for instance, <cit.> p. 62). Suppose that
S_p=∪_n=0^∞ S_n is a p-concentric sphere collection centered at some x_0∈^d generated by S_0, as in Definition <ref>.
We can perform certain reductions on S_p. Without loss of generality, we can assume that x_0=0, since all the dimension notions we consider are translation-invariant. In addition, by finite stability of the box dimension and the Assouad spectrum (see <cit.>, p. 18, 49), we can further assume that for the given S_p, we have n_0=1 in Definition <ref>. The above reductions imply that there is c_1∈ (0,1] such that
d_H(S_n, {0})=c_1 n^-p,
for all n∈. In addition, there is c_2∈ (0,1], independent of n, such that
(S_n, S_n+1)=c_2(n^-p-(n+1)^-p),
for all n∈. Moreover, for any n∈, we have that
(S_n,{0})≃ n^-p,
where the comparability constant is independent of n. Indeed, the upper bound for (<ref>) follows trivially from (<ref>). In order to prove that (S_n,{0})≳ n^-p, let z_n∈ S_n be a point such that |z_n|=(S_n,{0}). Since S_j for j∈{n+1, n+2, …} are all topological spheres with corresponding topological balls containing 0, the line segment with end-points at z_n and 0 intersects each such S_j, say at z_j∈ S_j. This implies that
|z_j-z_j+1|≥(S_j, S_j+1)≃ j^-p-(j+1)^-p,
for all j∈{n, n+1, …}. However, the sum of all of these distances |z_j-z_j+1| has length less than the line segment where all these points lie on, namely
(S_n,{0})≳∑_j=n^∞(j^-p-(j+1)^-p)=∑_j=1^∞(j^-p-(j+1)^-p)-∑_j=1^n-1(j^-p-(j+1)^-p).
Using the fact that for any m∈ we have ∑_j=1^m(j^-p-(j+1)^-p)=1-(1+m)^-p, the proof of (<ref>) is complete. Note that (<ref>) also implies that the similarity ratio of g_n is in fact comparable to n^-p, for all n∈. This allows us to use (<ref>) in our arguments.
The above reductions are enough to start with the calculation of the box dimension of S_p in the case d_0=d-1. While we could replace d_0 in the following proof by the explicit value d-1, we prefer to use the former notation to showcase how similar arguments are used for the proof of Theorem <ref> for d_0>d-1.
For a p-concentric sphere collection S_p as outlined above, with d_0=_B S_0=d-1, we have
_B S_p = max{d/1+p, d-1 }.
Fix positive d_1, d_2<d close to d_0 such that d_1<d_0<d_2. Due to _B S_0=d_0 and (<ref>), there is r_0=r_0(d_1,d_2)>0 such that
r^-d_1≤ N_r(S_0)≤ r^-d_2,
for all r∈ (0,r_0). Moreover, since f is a homeomorphism, the topological dimension of B_0=f(B(0,1)) is equal to d. It follows that _B B_0=d. Hence, for any tiny ϵ>0, there is r_0'=r_0'(ϵ)>0 such that
r^-d+ϵ≤ N_r(B_0)≤ r^-d-ϵ,
for all r∈ (0,r_0').
Fix an arbitrary tiny ϵ>0, and an arbitrary r∈ (0,min{r_0,r_0'}), so that (<ref>), (<ref>) hold, and let n_r be the unique positive integer satisfying
n_r^-p-(n_r+1)^-p≤r/c_2d^1/2 < (n_r-1)^-p-n_r^-p.
In arguments that follow, we make a few more assumptions on how small r is selected to be, which only involve uniform constants, such as p, c_1, c_2, d, and the uniform comparability constant for the similarity ratios of g_j from (<ref>). This ensures there is no loss of generality in these further assumptions on the size of r, while it eases the notation and reduces the number of technical inequalities.
By the choice of n_r in (<ref>), we can split S_p into two disjoint sets, i.e., S_p= S_p^+∪ S_p^-, where
S_p^+:=⋃_n>n_r S_n
S_p^-:= ⋃_n ≤ n_r S_n,
and consider N_r(S_p^+) and N_r(S_p^-) separately. The reason is that r-cubes that intersect S_p^+ are essentially disjoint with those that intersect S_p^- by (<ref>) and (<ref>), hence resulting in N_r(S_p)=N_r(S_p^+)+N_r(S_p^-). Moreover, by (<ref>) and the right-hand side of (<ref>), the collection of r-cubes intersecting some S_n⊂ S_p^- for n≤ n_r and the collection of r-cubes intersecting some S_m for n≠ m≤ n_r are essentially disjoint collections, implying that N_r(S_p^-)=∑_n=1^n_r N_r(S_n). Moreover, by the left-hand side of (<ref>) and by (<ref>), (<ref>), r is large enough for each cube in the r-mesh intersecting B(0,c_1 n_r^-p) to overlap with more than one of the spheres in S^+, resulting in N_r(S^+)≃ N_r(g_n_r(B_0)), where the comparability constant is independent of r. Therefore, we have
N_r( S_p ) ≃ N_r(g_n_r(B_0)) + ∑_j=1^n_r N_r( S_j )
Note that n_r increases as r decreases, and that n^-p-(n+1)^-p≃ n^-p-1 for large n. Hence, for small enough r (in a way that depends only on p, c_1, c_2, d), the relation (<ref>) actually implies
n_r≃ r^-1/p+1,
where the comparability constant depends only on p, c_1, c_2, d. The similarity ratio of g_n_r is comparable to n_r^-p. Since n_r^p r≃ n_r^-1 by (<ref>), without loss of generality, we can assume that r is small enough, in a way solely dependent on p and the comparability constant in the former relation, so that n_r^p r is smaller than min{r_0,r_0'}. This ensures that (<ref>) can be applied for the fixed r, which along with (<ref>) and (<ref>) results in
r^-d-ϵ/1+p≃(1/n_r^p r)^d-ϵ≲ N_r(g_n_r(B_0))≲(1/n_r^p r)^d+ϵ≃ r^-d+ϵ/1+p,
for the fixed tiny ϵ>0.
Since for any j∈{1,…, n_r} the similarity ratio of g_j is comparable to j^-p, and noting that j^p r≤ n_r^p r<min{r_0,r_0'} and S_j=g_j(S_0), we similarly have by (<ref>) that
(r/j^-p)^-d_1≲ N_r(g_j(S_0))≲(r/j^-p)^-d_2.
Combining (<ref>), (<ref>) and (<ref>) results in
r^-d-ϵ/1+p+∑_j=1^n_rj^-p d_1/r^d_1≲ N_r(S_p) ≲ r^-d+ϵ/1+p+∑_j=1^n_rj^-p d_2/r^d_2.
Suppose that pd_1<1. Then by standard partial sum estimates and (<ref>) we have
∑_j=1^n_rj^-p d_1/r^d_1≃ n_r^1-pd_1 r^-d_1≃ r^-1-pd_1/p+1-d_1=r^-1-d_1/p+1.
Similarly, if pd_1>1, we have
∑_j=1^n_rj^-p d_1/r^d_1≃1/pd_1-1 r^-d_1≃ r^-d_1.
A similar case study on the product pd_2 shows that for pd_2<1 we have
∑_j=1^n_rj^-p d_2/r^d_2≃ n_r^1-pd_2 r^-d_2≃ r^-1-pd_2/p+1-d_2=r^-1-d_2/p+1,
and for pd_2>1 we have
∑_j=1^n_rj^-p d_2/r^d_2≃1/pd_2-1 r^-d_2≃ r^-d_2.
We now determine _B S_p based on the value of the product pd_0. Suppose pd_0<1, then d_2 can be chosen close enough to d_0 to ensure pd_1<pd_2<1. Thus, by (<ref>), (<ref>), (<ref>) we have
r^-d-ϵ/1+p+ r^-1-d_1/p+1≲ N_r(S_p)≲ r^-d+ϵ/1+p+ r^-1-d_2/p+1.
Note that d_2> d-1, so ϵ can be chosen small enough so that d_2+1> d+ϵ. Since r was arbitrary, the above relation implies that
1+d_1/p+1≤_B S_p ≤1+d_2/p+1.
Recall that d_1, d_2 are arbitrary and as close to d_0 as necessary, which combined with the above inequality is enough to show that in the case pd_0<1, we have
_B S_p =1+d_0/p+1=d/p+1.
On the other hand, if pd_0>1, we similarly make sure pd_2>pd_1>1 by choice of d_1, d_2. Note that by d_2>d-1, the value ϵ can be chosen small enough so that pd_2>1 implies that d+ϵ<pd_2+d_2, which gives
d+ϵ/p+1<d_2.
Using a similar argument to that of the previous case, and the above inequality, along with (<ref>), (<ref>), (<ref>), we can show that
d_1≤_B S_p ≤ d_2,
which is enough to conclude that _B S_p=d_0 as needed.
The case p d_0=1 is treated similarly, using (<ref>), (<ref>), (<ref>), resulting in _B S_p=d_0=d-1, due to the fact that
1+d_0/1+p= 1+d_0/1+1/d_0=d_0.
All cases have been addressed and the proof is complete.
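The counting decomposition used in the proof can also be checked heuristically. The sketch below (a rough numerical experiment, not a rigorous computation) replaces N_r(S_j) by (j^-p/r)^d-1, the inner block by (n_r^-p/r)^d, and the cutoff by n_r≈ (r/p)^-1/(p+1); the estimated growth exponents land close to max{d/(1+p), d-1}.

import math

def approx_N(r, p, d):
    # Heuristic surrogate for N_r(S_p) ~ N_r(inner ball) + sum_j N_r(S_j).
    n_r = max(1, int(round((r / p) ** (-1.0 / (p + 1)))))   # gap n^-p - (n+1)^-p ~ p n^-(p+1) ~ r
    inner = (n_r ** (-p) / r) ** d
    shells = sum((j ** (-p) / r) ** (d - 1) for j in range(1, n_r + 1))
    return inner + shells

d = 2
for p in (0.3, 0.8, 3.0):
    r1, r2 = 1e-4, 1e-6
    slope = math.log(approx_N(r2, p, d) / approx_N(r1, p, d)) / math.log(r1 / r2)
    print(f"p = {p}: estimated exponent {slope:.3f}, predicted {max(d / (1 + p), d - 1):.3f}")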
It might be tempting to consider r≃ n_r^-p, so that all but finitely many of the spheres S_j⊂ S_p are covered by the 2^d r-cubes with vertices at 0. However, this choice of n_r would hinder any following covering arguments, and especially the crucial equation N_r(S_p^-)=∑_j=1^n_r N_r(S_j) would no longer be true.
The techniques employed in the proof of Theorem <ref> are in fact very similar to those needed to determine the Assouad spectrum of S_p. By adjusting the right quantitative estimates, we can modify accordingly the arguments within the previous proof, so that they apply in the context of two small scales, r and r^θ, as in the definition of the θ-Assouad spectrum.
Let S_0⊂^d be a topological sphere and S_p=⋃_n∈_0 S_n be a p-concentric sphere collection generated by S_0 as in Theorem <ref>, with the additional assumption that _A S_0=d-1[Requiring _B S_0=_A S_0 might seem restrictive at first. However, various of those lines of research outlined in the Introduction focus on d_0-Ahlfors regular spheres (see <cit.> for definition and details), which is a special case of the spheres with _B S_0=_A S_0. Hence, it is a natural condition to consider.].
If p (d-1)≤ 1, then we have
_A^θ S_p = min{d/(1+p)(1-θ), d },
for all θ∈ (0,1). On the other hand, if p (d-1)>1, then
_A^θ S_p = min{ d-1+ θ/p(1-θ), d},
for all θ∈ (0,1).
Recall that we assume d_0=d-1=_A S_0. Note that for any θ≥ p/(p+1), both (<ref>) and (<ref>) result in _A^θ S_p=d. Hence, by continuity of the Assouad spectrum as a function of θ, it is enough to show (<ref>) and (<ref>) for all θ<p/(p+1).
Fix θ∈ (0,p/(p+1)) and positive d_1, d_2 close to d_0, such that d_1<d_0<d_2. Moreover, fix r∈ (0,1), ϵ>0 as small as necessary for the following arguments, in a way that depends only on uniform constants of S_p and ^d, similarly to the proof of Theorem <ref>. Denote by n_r∈ the unique positive integer satisfying
n_r^-p-(n_r+1)^-p≤r/c_2d^1/2 < (n_r-1)^-p-n_r^-p,
and by m_r∈ the unique positive integer satisfying
m_r^-p≤r^θ/c_1 < (m_r-1)^-p.
Since r was fixed to be small enough, the integers n_r, m_r are large enough, so that by (<ref>) we have
n_r≃ r^-1/p+1,
and by (<ref>) we have
m_r≃ r^-θ/p.
Due to the choice θ<p/(p+1), by (<ref>) and (<ref>) we also have that m_r≤ n_r. Note that the Assouad spectrum is stable under taking the closure of a set (see <cit.>), which ensures that we may estimate the Assouad spectrum of the closure of S_p, or simply assume that 0∈ S_p without loss of generality. Moreover, despite the fact that the definition of the Assouad spectrum requires to check balls centered at all points of the given set, we focus on B(0,r^θ) instead, for reasons that become clear later.
Arguing similarly to the proof of Theorem <ref> and splitting B(0,r^θ)∩ S_p into two disjoint sets, we have
N_r(B(0,r^θ)∩ S_p)≃ N_r(g_n_r(B_0))+∑_j=m_r^n_r N_r(S_j),
which by (<ref>) and the choices of ϵ, d_1, d_2 implies that
∑_j=m_r^n_rj^-pd_1/r^d_1≲ N_r(B(0,r^θ)∩ S_p)≲(n_r^-p/r)^d+ϵ+∑_j=m_r^n_rj^-pd_2/r^d_2.
Using (<ref>) and standard partial summation techniques on the above relation results in
r^-d_1max{ m_r^1-pd_1, n_r^1-pd_1}≲ N_r(B(0,r^θ)∩ S_p)≲ r^(p/p+1-1)(d+ϵ)+r^-d_2max{ m_r^1-pd_2, n_r^1-pd_2}.
The rest of the proof is a case study on the product pd_0, similarly to the proof of Theorem <ref>, while also accounting for the different values of θ that could potentially determine which power of r is dominant.
Suppose p d_0<1. In this case, we only need to find a sequence of balls centered at S_p and bound the corresponding covering number from below, which yields a lower bound on _A^θ S_p. This is because the desired upper bound on _A^θ S_p follows trivially by Theorem <ref> and Lemma <ref>. Hence, we can focus on balls centered at 0 and the lower bound of (<ref>). Namely, by (<ref>) and p d_1<1 we have
r^-d_1n_r^1-pd_1≲ N_r(B(0,r^θ)∩ S_p),
which by (<ref>) implies
N_r(B(0,r^θ)∩ S_p)≳ r^-d_1-1-pd_1/p+1= (r^θ-1)^-d_1(p+1)-(1-pd_1)/(θ-1)(p+1)=(r^θ-1)^d_1+1/(1-θ)(p+1).
Thus, the above shows that
_A^θ S_p≥d_1+1/(1-θ)(p+1),
for arbitrary d_1 close to d_0, which is enough to complete the proof of (<ref>).
Suppose that p d_0>1. Note at this point that if x∈ S_p with |x|≤ 2 r^θ, then
N_r(B(x,r^θ)∩ S_p)≤ N_r(B(0,10r^θ)∩ S_p)≲ N_r(B(0,r^θ)∩ S_p),
where the comparability constant only depends on the doubling constant of the space ^d (see for instance <cit.>). Hence, it remains to study the covering number N_r(B(0,r^θ)∩ S_p), and N_r(B(x,r^θ)∩ S_p) for arbitrary x∈ S_p with |x|>2 r^θ. In the latter case, we observe that the individual spheres S_j that intersect B(x,r^θ) are not more than m_r in number, due to (<ref>) and (<ref>) (see also <cit.> for a similar argument). Therefore, using the fact that _A S_0=d_0=_B S_0, which implies that _A^θ S_0= d_0 by (<ref>), and a similarity argument similar to (<ref>), we have
N_r(B(x,r^θ)∩ S_p)≲ m_r (r^θ-1)^d_2.
By (<ref>), the above implies that
N_r(B(x,r^θ)∩ S_p)≲ r^-θ/p(r^θ-1)^d_2= (r^θ-1)^d_2+θ/p(1-θ).
Hence, due to (<ref>) and the above inequality, it is enough to focus on achieving lower and upper bounds for N_r(B(0,r^θ) in the rest of the proof. The relation p d_0>1 implies for (<ref>) that
r^-d_1m_r^1-pd_1≲ N_r(B(0,r^θ)∩ S_p)≲ r^(p/p+1-1)(d+ϵ)+r^-d_2 m_r^1-pd_2.
Using (<ref>) in the above relation results in
r^-d_1r^-θ(1-pd_1)/p≲ N_r(B(0,r^θ)∩ S_p)≲ r^-(d+ϵ)/p+1+r^-d_2 r^-θ(1-pd_2)/p.
Note that
-d_1/θ-1-θ(1-pd_1)/p(θ-1)= d_1+θ/p(1-θ),
and similarly for d_2 instead of d_1, which when applied to the previous inequality yield
(r^θ-1)^d_1+θ/p(1-θ)≲ N_r(B(0,r^θ)∩ S_p)≲ (r^θ-1)^d+ϵ/(p+1)(1-θ)+(r^θ-1)^d_2+θ/p(1-θ).
Regarding the dominant term in the upper bound of (<ref>), note that
d_0+θ/p(1-θ)>d/(p+1)(1-θ)
if, and only if
θ< p(pd_0+d_0-d)/(p+1)(pd_0-1)=p/p+1.
Hence, by choice of θ, and by taking ϵ to be small enough, we have
d_2+θ/p(1-θ)>d+ϵ/(p+1)(1-θ),
which if applied to (<ref>) yields
(r^θ-1)^d_1+θ/p(1-θ)≲ N_r(B(0,r^θ)∩ S_p)≲ (r^θ-1)^d_2+θ/p(1-θ).
As a result, by the above and by (<ref>),(<ref>), we have shown that
d_1+θ/p(1-θ)≤_A^θ S_p≤ d_2+θ/p(1-θ),
for d_1 and d_2 arbitrarily close to d_0, which is enough to complete the proof in this case as well.
The case p d_0=1 is treated similarly to the above cases. Notice that in this case we have pd_1<1 and 1<pd_2, which yield different dominant terms on the two sides of (<ref>), but the arguments are identical for each side to the corresponding cases; namely, the lower bound is treated using pd_1<1 as in the case pd_0<1, and the upper bound is treated using pd_2>1 as in the case pd_0>1. This is enough to complete the proof.
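For concreteness, the two branches of the theorem can be tabulated directly; the sketch below simply evaluates the stated formulas and illustrates that both reach the ambient dimension d exactly at the phase transition θ=p/(p+1).

def assouad_spectrum(theta, p, d):
    # Predicted Assouad spectrum of S_p from the theorem above, for 0 < theta < 1.
    if p * (d - 1) <= 1:
        return min(d / ((1 + p) * (1 - theta)), d)
    return min(d - 1 + theta / (p * (1 - theta)), d)

d = 3
for p in (0.4, 2.0):
    for theta in (0.2, 0.5, p / (p + 1), 0.9):
        print(f"p = {p}, theta = {theta:.3f}: {assouad_spectrum(theta, p, d):.3f}")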
We emphasize that in the case p(d-1)>1, the Assouad spectrum is not given by the upper bound in Lemma <ref> for 0<θ<p/1+p. This provides us with a family of such examples in every Euclidean space ^d.
Theorem <ref> in fact implies what the box dimension of S_p is equal to as well, by letting θ→ 0 (see Section <ref>). We note that for p>(d-1)^-1, the box dimension of S_p is trivial, in the sense that _B S_p=d-1. On the other hand, the Assouad spectrum provides non-trivial information on S_p, even in the case where the box dimension fails to do so.
Moreover, (<ref>), (<ref>) yield that the Assouad dimension of S_p is equal to d, which also fails to provide additional geometric information compared to the Assouad spectrum. This is an immediate implication of (<ref>).
For all p>0, we have _A S_p = S_p = d, where S_p is as in Theorem <ref>.
We finish this section by showing that a topological sphere of large box dimension cannot generate a polynomially concentric sphere collection.
Assume towards contradiction that S_p is a p-polynomially concentric sphere collection, as described in the beginning of Section <ref>, but with d_0=_B S_0>d-1. Note that we do not require S_0 to satisfy _A S_0 =d_0. We arrive at a contradiction by providing an absurd lower bound on the Assouad spectrum of S_p for certain θ∈ (0,1).
Fix d_1<d_0 close to d_0, such that d_1>d-1, and θ∈ (0,p/p+1). Moreover, similarly to the proof of Theorem <ref>, fix small r∈ (0,1) and integers n_r, m_r so that (<ref>), (<ref>) are satisfied. Choosing r small enough ensures that n_r≃ r^-1/p+1 and m_r≃ r^-θ/p, which by choice of θ<p/(p+1) also ensures that m_r≤ n_r. Arguing exactly as in the proof of Theorem <ref> yields
N_r(B(0,r^θ)∩ S_p)≳∑_j=m_r^n_rj^-pd_1/r^d_1≳ r^-d_1max{ m_r^1-pd_1, n_r^1-pd_1}.
Suppose pd_0>1. Then d_1 can also be chosen so that p d_1>1, which implies by (<ref>) that
N_r(B(0,r^θ)∩ S_p)≳ (r^θ-1)^d_1+θ/p(1-θ),
exactly as in the proof of Theorem <ref>. Since r, θ are arbitrary, this implies that
_A^θ S_p ≥ d_1+θ/p(1-θ),
which also yields
_A^θ S_p ≥ d_0+θ/p(1-θ),
because d_1<d_0 can be arbitrarily close to d_0. However, note that d_0+θ/p(1-θ)<d if, and only if, θ<p(d-d_0)/p(d-d_0)+1. Since p(d-d_0)/p(d-d_0)+1<p/p+1, due to d_0>d-1, if we pick
p(d-d_0)/p(d-d_0)+1<θ<p/p+1,
then (<ref>) implies that _A^θ S_p> d, which is a contradiction.
On the other hand, if pd_0≤ 1, we have pd_1<1, which implies by (<ref>) that
N_r(B(0,r^θ)∩ S_p)≳ (r^θ-1)^d_1+1/(1-θ)(p+1).
This similarly yields
_A^θ S_p ≥d_0+1/(1-θ)(p+1),
which by choosing
d(p+1)-(d_0+1)/d(p+1)<θ<p/p+1,
provides the contradicting estimate _A^θ S_p>d for this case as well, completing the proof.
For p<d^-1(1+d_0-d), if we let d_1<d_0 arbitrarily close to d_0 and follow the argument of the proof of Theorem <ref> just for the lower bound of N_r(S_p), we get that _B S_p≥d_0+1/1+p>d, by choice of p, which is a contradiction. Hence, the box dimension of S_p can also trace the impossibility of such a concentric collection, but only for certain values of p. Therefore, the investigation of the Assouad spectrum is truly necessary.
§ QUASICONFORMAL CLASSIFICATION OF SPIRAL SHELLS
As mentioned in the Introduction, quasiconformal maps are a natural generalization of conformal maps in higher dimensions. There are multiple ways to express the intuition behind quasiconformality, i.e., the property that infinitesimal balls are mapped onto infinitesimal ellipsoids whose eccentricity is globally bounded by a constant. We state the analytic definition of quasiconformal maps, which relates directly to the dimension distortion result of interest (see <cit.> for a thorough exposition). A homeomorphism f:Ω→Ω' between domains in ^d, d ≥ 2, is said to be K-quasiconformal, for some K≥ 1, if f lies in the local Sobolev space W^1,d_loc(Ω:^d), and the inequality
|Df|^d ≤ K det Df
holds almost everywhere in Ω (with respect to the d-Lebesgue measure). Note that Df denotes the (a.e. defined) differential matrix, det Df its Jacobian determinant, and |𝐀|=max{|𝐀(𝐯)|:|𝐯|=1} denotes the operator norm of a matrix 𝐀. Recall that the local Sobolev space W^1,d_loc(Ω:^d) is the space of maps defined in Ω with first-order weak derivatives in L^d_loc(Ω:^d) (see <cit.> for more details).
There are many dimension distortion results, for different analytic and geometric mapping classes related to quasiconformal maps, and regarding different dimension notions. See <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> for a non-exhaustive list. In fact, the following theorem proved by Tyson and the author (see <cit.>) is crucial for the proof of Theorem <ref>.
Let F:Ω→Ω' be a K-quasiconformal map between domains Ω, Ω'⊂^d. For t>0 , set θ(t)=1/(t+1). If E is a non-empty bounded set with closure E⊂Ω, we have
( 1 - d/s) ( 1/_A,reg^θ(t/K)E - 1/d)
≤1/_A,reg^θ(t)F(E) - 1/d≤( 1 - d/s)^-1( 1/_A,reg^θ(Kt)E - 1/d),
for a fixed s=s(d,K)>d that only depends on the space ^d and the constant K of the map F.
We have all the tools necessary to fully classify the polynomial spiral shells up to quasiconformal equivalence.
One direction is immediate, as there exists a map F with F(_p)=_q, which is K-quasiconformal for any K≥ p/q. Namely, the radial stretch map F:^d→^d with F(0)=0 and
F(x) = |x|^q/p-1x,
for all x∈^d∖{0}, is (p/q)-quasiconformal (see <cit.>), which implies that F is K-quasiconformal for all K≥ p/q. Moreover, it is elementary to show that F maps _p onto _q, as required.
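For instance, since |F(x)|=|x|^{q/p} for x≠0, radii of the form j^{-p} are sent to radii j^{-q}; as a sketch,
\[
|x|=j^{-p}\;\Longrightarrow\;|F(x)|=\bigl(j^{-p}\bigr)^{q/p}=j^{-q}.
\]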
For the other implication, assume towards contradiction that F:^d→^d is a K-quasiconformal map with K<p/q and F(_p)=_q. We aim to calculate the regularized Assouad spectrum of the shells _p, _q, and use Theorem <ref> to arrive at a contradiction. Set
S_p:= ∪_j=1^∞ S(0,j^-p) and S_q:= ∪_j=1^∞ S(0,j^-q). The sets S_p, S_q are polynomially concentric sphere collections centered at 0 and generated by the unit sphere S(0,1) in ^3. Hence, by Theorem <ref> we have that
_A^θ S_p =
 3/((1+p)(1-θ)),  if 0<θ<p/(p+1) and p≤ 1/2,
 2+θ/(p(1-θ)),   if 0<θ<p/(p+1) and p> 1/2,
 3,         if p/(p+1)≤θ<1,
and similarly for _A^θ S_q.
Note that a bi-Lipschitz map h_1 can be constructed in order to map _p^u:=_p∩{(x_1,x_2,x_3): x_3≥ 0} onto S_p^u:= S_p∩{(x_1,x_2,x_3): x_3≥ 0} (see <cit.> for similar arguments). Similarly, a different bi-Lipschitz map h_2 can be constructed in order to map _p^ℓ:=_p∩{(x_1,x_2,x_3): x_3≤ 0} onto S_p^ℓ:= S_p∩{(x_1,x_2,x_3): x_3≤ 0}. Using the stability of the Assouad spectrum under finite unions and bi-Lipschitz maps (see <cit.>), we have
_A^θ_p = max{_A^θ_p^u, _A^θ_p^ℓ}= max{_A^θ S_p^u, _A^θ S_p^ℓ}= _A^θ S_p,
for all θ∈ (0,1). Similarly we can show that _A^θ_q= _A^θ S_q. Hence, by the above and (<ref>) we can fully determine the Assouad spectrum of _p and _q. Moreover, since the expressions in (<ref>) are monotone, by the relation of the Assouad spectrum with the regularized Assouad spectrum in (<ref>), we have
_A,reg^θ_p =
 3/((1+p)(1-θ)),  if 0<θ<p/(p+1) and p≤ 1/2,
 2+θ/(p(1-θ)),   if 0<θ<p/(p+1) and p> 1/2,
 3,         if p/(p+1)≤θ<1,
and similarly for _A,reg^θ_q.
We have all the necessary dimension-theoretic information in order to apply Theorem <ref>. In fact, we mostly use the relation between the phase transitions θ_p=p/(p+1) and θ_q=q/(q+1) of the two shells, i.e., the values where the regularized Assouad spectrum reaches the (quasi-)Assouad dimension. Set t=1/q. Then θ(t) = 1/(1+t) = q/(1+q), and so _A^θ(t)(S_q) = 3 by (<ref>). On the other hand,
θ(t/K) = K/K+t= q K/qK+1 < p/p+1,
due to the fact that K<p/q. Therefore, _A^θ(t/K)(S_p) < 3. This leads to a contradiction due to Theorem <ref> and, in particular, due to the left-hand side of (<ref>). Therefore, such a K-quasiconformal map does not exist.
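For completeness, the inequality θ(t/K)<p/(p+1) used above amounts to the following routine check:
\[
\frac{qK}{qK+1}<\frac{p}{p+1}
\;\Longleftrightarrow\;
qK(p+1)<p(qK+1)
\;\Longleftrightarrow\;
qK<p
\;\Longleftrightarrow\;
K<\frac{p}{q}.
\]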
A closer analysis of the above proof indicates that Theorem <ref> could be generalized to non-global quasiconformal maps. More specifically, the result would still be true if the map F:^d→^d in the statement of the theorem is replaced by a map F:Ω→Ω' between domains of ^d such that _p∪{0}⊂Ω and _q∪{0}⊂Ω'.
We emphasize that the Assouad spectrum estimates are crucial in the proof of Theorem <ref>. Indeed, in the case 1/2≤ q≤ p, all other dimension notions do not provide non-trivial information, since _H _p=_B _p=2<3=_A _p, where _H denotes the Hausdorff dimension (see <cit.>), and similarly for _q. The available dimension distortion results for the Hausdorff, box, and Assouad dimension under quasiconformal maps are not enough to classify such polynomial shells. See <cit.> for a more extensive discussion on this phenomenon in the case of polynomial spirals.
§ FINAL REMARKS
Suppose S_0=f(S(0,1))⊂^d is a topological sphere, where f:^d→^d is a homeomorphism, and x_0 is a point lying in the interior of the topological ball f(B(0,1)). We address the existence of a concentric sphere collection generated by S_0, regardless of the value of _B S_0∈ [d-1,d]. Since f(B(0,1)) is an open set, there is a maximal open ball lying in f(B(0,1)), centered at x_0 and of radius r_1>0. Set g_1 to be a similarity such that x_0∈ g_1(B(0,1))⊂ B(x_0,r_1). Repeating the same argument for g_1(B(0,1)) and iterating infinitely many times, we construct a sequence of similarities {g_n}_n∈ that contracts S_0 into the point x_0, i.e., a concentric sphere collection centered at x_0 and generated by S_0. However, by Theorem <ref>, we have established that the similarity ratios of g_n cannot be comparable to any sequence of the form n^-p, for any p>0, unless _B S_0=d-1. Thus, in the case _B S_0∈ (d-1,d), the sequence a_n in the constructed example has to converge to 0 faster than n^-p does. Note that in such situations, the convergence is typically too fast for any dimension notion to trace “fractalness" in the collection. For instance, following the method of the proof of Theorem <ref>, it can be shown that a concentric sphere collection with sequence a_n=e^-n has Assouad spectrum (and, similarly, Assouad dimension) equal to _B S_0.
A natural question motivated by this discussion is to try and fully characterize the topological spheres S_0 from which a polynomially concentric collection can be generated. Given a topological sphere S_0⊂^d, Theorem <ref> shows that _B S_0=d-1 is a necessary condition, but is it sufficient? Namely, if _B S_0=d-1, can we polynomially contract S_0 into a point using similarities? If the answer to this question is positive in general, are there specific restrictions imposed on the degree p of the polynomial rate n^-p, based on properties of S_0?
It should be emphasized that the case p>(d-1)^-1 in Theorem <ref> is another one of the few examples of sets E⊂^d, where _A^θ E is not constant and is not equal to _B E/(1-θ), for θ<1-_B E/d (see Lemma <ref>). Many of the first such examples mostly appeared in ^2, such as the polynomial spirals studied by Fraser in <cit.>, and the elliptical polynomial spirals studied by Burrell-Falconer-Fraser in <cit.>. In fact, the examples studied in the latter paper stand out for one more reason; for fixed p,q, the Assouad spectrum of elliptical spirals _p,q exhibits two distinct phase transitions, i.e., _A^θ_p,q has three different branches as a function, unlike _A^θ_p which only has two (see <cit.>). This motivates the study of another class in ^d, namely that of concentric ellipsoid collections. In particular, this class is defined similarly to the concentric sphere collections in Definition <ref>, but with the second condition (ii) replaced by (S_n, S_n+1)= c_2 b_n, for some strictly decreasing sequence {b_n}_n∈ that converges to 0 at a different rate than {a_n}_n∈. We expect the dimension study of these objects to be quite interesting, potentially providing us with more examples of sets whose Assouad spectrum exhibits two distinct phase transitions.
Recall that a homeomorphism f:^d→^d is (α,β)-bi-Hölder continuous on a non-empty set E, with Hölder exponents 0<α≤ 1≤β<∞, if there is a constant C>0 such that
C^-1|x-y|^β≤ |f(x)-f(y)|≤ C |x-y|^α,
for all x,y∈ E.
Fraser in <cit.> suggested the following classification-like direction: given two bounded sets X,Y⊂^d, try to construct a bi-Hölder map f such that f(X)=Y. Moreover, determine what is the sharpest possible choice of Hölder exponents for such a map (which depends on properties of X, Y), and compare them to the bounds on the exponents from dimension distortion theorems. We have indirectly stepped towards this direction for spiral shells through Theorem <ref>, and the connection between quasiconformal and bi-Hölder maps.
Namely, any K-quasiconformal map F:^d→^d is in fact (1/K,K)-bi-Hölder continuous on every compact set E⊂^d (see for instance <cit.>). Hence, in the proof of Theorem <ref>, we provided a (q/p,p/q)-bi-Hölder map that maps _p∪{0} onto _q∪{0}. However, while this map is sharp for the quasiconformal classification of spiral shells, it is not clear whether it is sharp for the bi-Hölder class of maps between such shells.
One could use the bi-Hölder dimension distortion results for the Assouad spectrum (see <cit.>), and try to come up with bounds on the Hölder exponents of a map that sends _p onto _q in terms of p and q. For instance, if f:^d→^d is (α,β)-bi-Hölder and E⊂^d is bounded, then
_B E/β≤_B f(E)≤_B E/α.
Therefore, in the case 0<q<p<1, using the box dimension of _p, _q and the above inequality, we arrive at the trivial bound β≥ (q+1)/(p+1), and the bound α≤ (q+1)/(p+1) for the Hölder exponents. This does not necessarily imply that a q+1/p+1-Hölder map f with f(_p)=_q actually exists. Note that for 0<q<p we have
q/p<q+1/p+1,
so the latter term is indeed a “better” Hölder exponent, in the sense that it leads to a “smoother” map f.
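Indeed, this comparison is a quick check (sketch):
\[
\frac{q}{p}<\frac{q+1}{p+1}
\;\Longleftrightarrow\;
q(p+1)<p(q+1)
\;\Longleftrightarrow\;
q<p.
\]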
However, it appears that apart from certain specialized examples, the Hölder dimension distortion estimates are rarely sharp. This was also emphasized in the Hölder unwinding problem for polynomial spirals by Fraser in <cit.>. As a result, it is not unreasonable to suspect that q/p is the sharpest exponent possible, at least for the classification problem involving (α,α^-1)-bi-Hölder maps between spiral shells _p,_q. In fact, a natural direction to explore is the following; suppose the sharp bi-Hölder exponent bounds in a classification problem are not achieved through knowledge of the dimensions of the sets and the Hölder dimension distortion estimates, are the bounds achieved by the corresponding quasiconformal classification results sharp?
§ REFERENCES
Ahlfors
L. V. Ahlfors, Quasiconformal reflections, Acta Math. 109 (1963), 291–301.
Carlos
T. C. Anderson, J. Lehrbäck, C. Mudarra,
and A. Vähäkangas, Weakly porous sets and Muckenhoupt A_p distance functions, J. Funct. Anal. 287 (2024), no. 8, Paper No. 110558, 34 pp.
Baker
I. N. Baker, The domains of normality of an entire function, Ann. Acad. Sci. Fenn. Ser. A I Math. 1 (1975), no. 2, 277–283.
BaloghAGMS
Z. M. Balogh, J. T. Tyson, and K. Wildrick
Dimension distortion by Sobolev mappings in foliated metric spaces,
Anal. Geom. Metr. Spaces 1 (2013), 232–254.
btw:heisenberg
Z. M. Balogh, J. T. Tyson, and K. Wildrick, Frequency of Sobolev dimension distortion of horizontal subgroups
of Heisenberg groups,
Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 17 (2017),
655–683.
Barany_dH_conv1
I. Bárány, Random polytopes in smooth convex bodies, Mathematika 39 (1992), no. 1, 81–92.
Barany_dH_conv2
I. Bárány and C. Buchta, Random polytopes in a convex polytope, independence of shape, and concentration of vertices, Math. Ann. 297 (1993), no. 3, 467–497.
ComplexDyn
W. Bergweiler, P. J. Rippon and G. M. Stallard, Multiply connected wandering domains of entire functions, Proc. Lond. Math. Soc. (3) 107 (2013), no. 6, 1261–1301.
Bonk_QC
M. Bonk, Quasiconformal geometry of fractals, in International Congress of Mathematicians. Vol. II, 1349–1373, Eur. Math. Soc., Zürich.
Bonk_Meyer
M. Bonk and D. Meyer, Expanding Thurston maps, Mathematical Surveys and Monographs, 225, Amer. Math. Soc., Providence, RI, 2017.
Conc_eq_med-appl
A. Brú et al., Fractal analysis and tumour growth, Math. Comput. Modelling 47 (2008), no. 5-6, 546–559.
Falc_spirals
S. A. Burrell, K. J. Falconer and J. M. Fraser, The fractal structure of elliptical polynomial spirals, Monatsh. Math. 199 (2022), no. 1, 1–22.
Concentric_phys
I. Cavero-Peláez, P. Parashar and K. V. Shajesh, Quantum Vacuum Energy of Self-Similar Configurations. Universe 128 (2021), https://doi.org/10.3390/universe7050128
HolomSpecChron
E. K. Chrontsios Garitsis,
Quasiregular distortion of dimensions, Preprint:
https://arxiv.org/abs/2309.07362v2.
SobChron
E. K. Chrontsios Garitsis,
Sobolev mappings on metric spaces and Minkowski dimension, To appear in Proc. Amer. Math. Soc., Preprint: https://arxiv.org/abs/2309.07362v2.
OurQCspec
E. K. Chrontsios Garitsis, and J. T. Tyson,
Quasiconformal distortion of the Assouad spectrum and
classification of polynomial spirals, Bulletin of the London Mathematical Society 55, 1 (2023),
282–307.
dup
Y. Dupain, M. Mendès France, C. Tricot.
Dimensions des spirales,
Bulletin de la S. M. F., 111, (1983), 193–201.
falconer
K. J. Falconer.
Fractal Geometry: Mathematical Foundations and Applications,
John Wiley & Sons, Hoboken, NJ, 3rd. ed., 2014.
foi
C. Foias, D. D. Holmb, and E. S. Titi.
The Navier-Stokes-alpha model of fluid turbulence,
Physica D, (2001), 505–519.
spirals
A. Fish and L. Paunescu.
Unwinding spirals,
Methods App. Anal. (to appear), available at: http://arxiv.org/abs/1603.03145
Concentric_machine
D. Franzen and M. Wand, General Nonlinearities in SO(2)-Equivariant CNNs, Advances in Neural Information Processing Systems, (2021), 9086–9098
Fra_book
J. M. Fraser, Assouad dimension and fractal geometry, Cambridge Tracts in Mathematics, 222, Cambridge Univ. Press, Cambridge, 2021
Fra_spirals
J. M. Fraser, On Hölder solutions to the spiral winding problem, Nonlinearity 34 (2021), no. 5, 3251–3270.
Spectraa
J. M. Fraser and H. Yu.
New dimension spectra: finer information on scaling and homogeneity,
Adv. Math., 329, (2018), 273–328.
GehringVais
F. W. Gehring, and J. Väisälä,
Hausdorff dimension and quasiconformal mappings, J. London Math. Soc. (2) 6 (1973), 504–512.
Heinonen
J. Heinonen, Lectures on analysis on metric spaces, Universitext, Springer, New York, 2001
spirals_ode
R. Huzak, D. Vlah, D. Žubrinić and V. Županović. Fractal analysis of degenerate spiral trajectories of a class of ordinary differential equations, Appl. Math. Comput. 438 (2023), Paper No. 127569, 15 pp.
Iwaniec
T. Iwaniec and G. J. Martin, Geometric function theory and non-linear analysis, Oxford Mathematical Monographs, Oxford Univ. Press, New York, 2001.
unwindspirals
Y. Katznelson, S. Nag and D. Sullivan.
On conformal welding homeomorphisms associated to Jordan curves, Ann. Acad. Sci. Fenn. Math., 15, (1990), 293–306.
Kaufman
R. P. Kaufman
Sobolev spaces, dimension, and random series,
Proc. Amer. Math. Soc. 128, 2 (2000), 427–431.
Leh_QC
O. E. Lehto and K. I. Virtanen, Quasiconformal mappings in the plane, second edition Translated from the German by K. W. Lucas,
Die Grundlehren der mathematischen Wissenschaften, Band 126, Springer, New York-Heidelberg, 1973.
concentric_phys2
R. G. Littlejohn and M. W. Reinsch, Gauge fields in the separation of rotations and internal motions in the n-body problem, Rev. Modern Phys. 69 (1997), no. 1, 213–275
quasi
F. Lü and L. Xi.
Quasi-Assouad dimension of fractals,
J. Fractal Geom., 3, (2016), 187–215.
Mat_book
P. Mattila, Geometry of sets and measures in Euclidean spaces, Cambridge Studies in Advanced Mathematics, 44, Cambridge Univ. Press, Cambridge, 1995.
Meyer_Snowb
D. Meyer, Snowballs are quasiballs. Trans. Amer. Math. Soc. 362 (2010), no. 3, 1247–1300.
moff
H. K. Moffatt.
Spiral structures in turbulent flow,
Wavelets, fractals, and Fourier transforms, 317–324, Inst. Math. Appl. Conf. Ser. New Ser., 43, Oxford Univ. Press, New York, 1993.
Shell_Mathbio
J. D. Murray, Mathematical biology. I, third edition,
Interdisciplinary Applied Mathematics, 17, Springer, New York, 2002.
Shell_phys
P. P. Park et al., Mixing and the fractal geometry of piecewise isometries, Phys. Rev. E 95 (2017), no. 4, 042208, 11 pp.
Shell_engineer
P. P. Park et al., Mixing with piecewise isometries on a hemispherical shell, Chaos 26 (2016), no. 7, 073115, 11 pp.
random_dH
J. Prochno, C. Schütt, M. Sonnleitner and E. M. Werner,
Random approximation of convex bodies in Hausdorff metric, Preprint:
https://arxiv.org/abs/2404.02870.
Vais_book
J. Väisälä, Lectures on n-dimensional quasiconformal mappings, Lecture Notes in Mathematics, Vol. 229, Springer, Berlin-New York, 1971.
Smirnov
S. K. Smirnov, Dimension of quasicircles, Acta Math. 205 (2010), no. 1, 189–197.
vass
J. C. Vassilicos.
Fractals in turbulence,
Wavelets, fractals, and Fourier transforms, 325–340, Inst. Math. Appl. Conf. Ser. New Ser., 43, Oxford Univ. Press, New York, 1993.
vasshunt
J. C. Vassilicos and J. C. R. Hunt.
Fractal dimensions and spectra of interfaces with application to turbulence,
Proc. Roy. Soc. London Ser. A, 435, (1991), 505–534.
zub
D. Žubrinić and V. Županović.
Box dimension of spiral trajectories of some vector fields in ℝ^3,
Qual. Theory Dyn. Syst., 6, (2005), 251–272.
http://arxiv.org/abs/2409.02556v1 | 20240904092149 | Want a Ride? Attitudes Towards Autonomous Driving and Behavior in Autonomous Vehicles | ["Enrico Del Re", "Leonie Sauer", "Marco Polli", "Cristina Olaverri-Monreal"] | cs.RO | ["cs.RO", "cs.HC"] |
Want a Ride? Attitudes Towards Autonomous Driving and Behavior in Autonomous Vehicles
Enrico Del Re (enrico.del_re@jku.at), Leonie Sauer (k12220384@students.jku.at), Marco Polli (k01605988@students.jku.at), Cristina Olaverri-Monreal (cristina.olaverri-monreal@jku.at)
Department Intelligent Transport Systems, Johannes Kepler University Linz, Altenbergerstr. 69, Linz, 4040, Upper Austria, Austria
Research conducted previously has focused on either attitudes toward or behaviors associated with autonomous driving. In this paper, we bridge these two dimensions by exploring how attitudes towards autonomous driving influence behavior in an autonomous car. We conducted a field experiment with twelve participants engaged in non-driving related tasks. Our findings indicate that attitudes towards autonomous driving do not affect participants' driving interventions in vehicle control and eye glance behavior. Therefore, studies on autonomous driving technology lacking field tests might be unreliable for assessing the potential behaviors, attitudes, and acceptance of autonomous vehicles.
§ INTRODUCTION
With the introduction of autonomous vehicles on the streets, exemplified by driverless taxis from companies like Cruise and Waymo in San Francisco, the conversation around autonomous cars has evolved from futuristic speculation to a pressing issue in contemporary society <cit.>. Public discourse, shaped by personal experiences with existing automation technologies (e.g., adaptive cruise control, lane-keeping assistance), media coverage, and the automotive industry, has emerged to discuss both the opportunities and challenges of widespread adoption of autonomous vehicles. While some anticipate increased efficiency, reduced risk, and enhanced accessibility and convenience, others express concerns about issues such as data security, loss of human control, and skill deterioration <cit.>. Consequently, various stakeholders, including policymakers, are keenly interested in understanding attitudes towards autonomous driving and human-machine interaction in autonomous vehicles. This interest aims to align current mobility strategies with people's needs and expectations <cit.>, <cit.>.
From a sociological standpoint, mobility and mobility behavior are complex subjects intertwined with various dimensions, such as socio-demographics (e.g., rural or urban residence), technology acceptance (e.g., preference for driving cars), and lifestyle choices (e.g., environmentally conscious living) <cit.>. As we transition from manual to autonomous driving, these factors and their correlations may shift due to changes in human-machine interaction. This includes differences in tasks, roles, and control for both the autonomous vehicle and the human occupant, as well as evolving levels of trust in the technology and new perceptions and expectations regarding driving and traffic <cit.>. Therefore, a comprehensive understanding of autonomous driving interaction requires consideration of these multiple factors.
Several studies have explored autonomous driving within the context of sociotechnical transformation processes, aiming to understand individuals' perceptions of self-driving cars and their willingness to adopt them. For instance, <cit.> highlighted that acceptance is shaped by factors at both individual and societal levels. They further differentiated between:
Acceptance subject, which encompasses the driver or passenger and includes attitudes, judgments, motivations, car usage, ownership, comfort, trust or skepticism, freedom or control, and enjoyment of driving.
Acceptance object, which refers to the car itself, encompassing perceived technological features, development potential, legal frameworks, liability issues, safety considerations, implications for transportation and social systems, flexibility, comfort, efficiency, and uncertainty.
In our study, which focused specifically on attitudes toward autonomous vehicles (pertaining to the acceptance subject), we identified relevant variables. Additionally, we explored the correlation between attitudes toward autonomous driving and the behavior of individuals within an autonomous vehicle. By integrating these two dimensions into a single research design, we address a significant gap in existing literature. This user- and usage-oriented approach aimed to produce findings that are relevant and actionable for policymakers and vehicle developers.
To this end, we defined the following null hypotheses:
H0_1: There is no correlation between attitudes towards autonomous driving and the frequency of driving interventions made by individuals while riding in autonomous vehicles.
H0_2: There is no correlation between attitudes towards autonomous driving and individuals' eye glance behavior while riding in an autonomous vehicle.
The next section presents theoretical insights on attitudes and human-machine interaction in the field. Sections <ref> and <ref> outline the methodology employed to collect and analyze the relevant data. Section <ref> presents the findings, while Section <ref> discusses them and concludes the paper, proposing future research.
§ RELATED LITERATURE
Human-Robot Interaction (HRI) and Human-Machine Interaction (HMI) encompass the relational and mutual actions and communications between at least one human and one machine. They also involve the division of tasks, roles, and control <cit.>. The dynamic nature of HMIs in automotive contexts has been emphasized in several studies <cit.>, reshaping the driver's role to that of a supervisor. However, when a vehicle reaches its Operational Design Domain (ODD) limit, a driver response is expected. In such cases, cooperation between the automated driving system and the driver can ensure a safe and comfortable transition, facilitated, for example, by a haptic guidance system <cit.>.
Cooperative actions also occur in traffic, as human drivers initiate socially negotiated behaviors such as eye contact, gestures, and other cues to signal future actions.
The introduction of autonomous vehicles raises research questions about interactions between drivers of vehicles with varying degrees of automation, as highly automated vehicles may not be able to interpret non-verbal cues from other road users. Developing algorithms for the automatic analysis of pedestrian body language can provide insights into how different interaction paradigms affect public perceptions of road safety and trust in new vehicular technologies <cit.>.
Furthermore, Software as a Service (SaaS) platforms offer information exchange and cloud computing services in this context <cit.>.
The transformation towards vehicle automation presents challenges related to understanding human driving behavior and what drivers perceive as safe. Bridging the gap between subjective perceptions and objective safety measures has been the focus of numerous studies in specific scenarios <cit.>.
Regarding the interaction between passengers and autonomous vehicles, processes of trust-building are required to make sure that humans can engage in various activities such as reading newspapers, eating, or using their phones. This paper presents findings regarding human behavior in autonomous vehicles, examining how this behavior correlates with their attitudes toward autonomous vehicles.
§ EXPERIMENTAL SETUP
A driving experiment was carried out in accordance with the relevant guidelines and regulations. Informed consent was obtained from all participants.
To collect participants' attitudes toward autonomous driving, they filled out a pre-task questionnaire on-site immediately prior to the field experiment. It was based on a standardized quantitative survey to differentiate between participants based on their varying experiences, rather than differences stemming from the context of the questioning <cit.>.
The questionnaire started with a segment on attitudes towards autonomous driving, created based on related literature <cit.>. It comprised nine items, exploring knowledge about autonomous vehicles, emotional responses (such as enthusiasm, concern, and perceived safety), and attitudes towards them, including willingness to drive one, regulatory views, and potential consequences. Each item presented statements and utilized a four-point scale for agreement, with an additional option for no answer.
This was followed by a general demographic section.
The sample consisted of 12 participants with valid driver's licenses, ensuring a diverse representation across socio-demographic factors. The group comprised 5 male and 7 female participants, with age categorized into five groups. The majority of respondents were in the younger three age groups.
The experimental setup was based on the design from <cit.>. Data collection was conducted using the JKU-ITS vehicle (see Figure <ref>), as detailed in <cit.> and <cit.>. During the experiment, participants performed various tasks while the vehicle autonomously navigated a predetermined test lane on the university campus.
Participants sat behind the steering wheel, with a member of the research team in the passenger seat. A technical supervisor occupied the back seat, overseeing an override system capable of intervening in the autonomous driving if necessary. Figure <ref> illustrates the seating arrangement in the vehicle.
Prior to the experiment, participants received an introduction to the car's technological specifications and safety features from the technical supervisor.
To start the experiment, participants manually drove the car to a predetermined starting point. At this location, the autonomous driving system was activated.
As the car autonomously navigated the designated route, participants were tasked with performing seven different activities, outlined by the technical supervisor. These activities are listed in Table <ref>.
The order of activities was randomized. After each task on the testing route, participants deactivated the autonomous mode, manually turned the car around, and then re-engaged the autonomous mode to proceed with the next task in the opposite direction.
§ DATA ACQUISITION AND ANALYSIS
Experimental observations were annotated, with particular attention to instances where the driver intervened in the control of the car by touching the steering wheel or using the pedals. We also recorded the participants' gaze direction (front window, mirrors, etc.) and the level of task completion (fully completed (1 point), partially completed (0.5 points), or not completed (0 points)). The objective was to assess whether participants could focus on other tasks while riding in an autonomous car. For instance, taking an extended pause from reading to observe the road was categorized as a partially completed reading task.
Additionally, we documented the sequence of tasks performed. When a driver intervention occurred, we further registered whether it was deemed necessary or not. The necessity was determined based on whether the car was driving smoothly without any external disruptions. For example, one participant pressed the brakes when a pedestrian approached a crossing. This intervention was categorized as unnecessary since the car would have naturally slowed down. On the other hand, another participant corrected the steering wheel when the car deviated from its intended path, approaching an obstacle on the left. This intervention was classified as necessary.
Lastly, we annotated and categorized participant comments.
After concluding the experiment, the observational notes were matched with the survey responses.
We examined correlations between the task score, frequency of participant intervention (intervention score), frequency of looking up from the windshield (glance score), and frequency of looking in the mirrors (mirror score) with the scores from the pre-task questionnaire.
To quantify the documented observations, we assigned a task completion score to each participant. The maximum score achievable, assuming all tasks were fully completed, was 7 points.
For other observed behaviors, we used binary variables to indicate their occurrence: 1 for a particular observed behavior and 0 when it did not occur.
Regarding the pre-task questionnaire, we quantified the findings by assigning scores to the attitudinal responses and then calculated the average scores on a Likert scale from 1 to 4. Statements left unanswered were excluded from the analysis.
We established three distinct scores to represent the attitudes explored in the questionnaire:
* Emotionality Score: This score indicated whether participants had a generally positive (1 point) or negative (4 points) emotional response to autonomous driving. The items for this score were:
“How enthusiastic are you about the development of driverless vehicles?”
“How worried are you about the development of driverless vehicles?”
* Safety Score: This score reflected the participants' subjective safety when interacting with driverless vehicles, with 1 point for feeling completely safe and 4 points for feeling completely unsafe, assessed with the following items:
“How safe would you feel sharing the road with a driverless passenger vehicle?”
“How safe would you feel sharing the road with a driverless freight truck?”
* Rules Score: This score determined the participants' perception regarding the necessity of a legal framework for autonomous vehicles, with 1 point standing for rejecting regulations and 4 points for welcoming them. Below are the suggested regulations:
“Regulation requiring driverless vehicles to operate in special lanes.”
“Regulation prohibiting driverless vehicles from operating near certain areas, such as schools.”
“Requirement for a person capable of taking over in an emergency to be seated in the driver’s seat”.
To analyze the influence of the independent variables (attitudes) on the dependent variables (task completion and behavior), the Pearson correlation coefficient was calculated for task completion and the Point-Biserial correlation coefficient for the behaviors with a significance of p≤ 0.05. Furthermore, a regression analysis was conducted to obtain further insights into the connection between dependent and independent variables.
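As an illustration only (this is not the authors' code), such tests can be run in Python with SciPy; the arrays below are hypothetical placeholders standing in for the collected scores, and the variable names are ours.

import numpy as np
from scipy import stats

# Hypothetical scores for the 12 participants (placeholders, not the study data)
emotionality_score = np.array([2.5, 3.0, 2.0, 2.5, 3.5, 2.0, 2.5, 3.0, 1.5, 2.5, 3.0, 2.0])
task_score = np.array([7.0, 6.5, 7.0, 5.5, 6.0, 7.0, 6.5, 5.0, 7.0, 6.0, 6.5, 7.0])
intervened = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])  # binary behavior variable

# Pearson correlation: attitude score vs. continuous task-completion score
r, p_value = stats.pearsonr(emotionality_score, task_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# Point-biserial correlation: binary behavior vs. attitude score (significance threshold p <= 0.05)
r_pb, p_pb = stats.pointbiserialr(intervened, emotionality_score)
print(f"Point-biserial r = {r_pb:.2f}, p = {p_pb:.3f}")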
§ RESULTS
The distributions of attitude scores for the 12 participants are shown in Figure <ref> as histograms. The values are grouped close to the neutral answer score of 2.5 and are shown in more detail in Table <ref>. A Student's t-test with the null hypothesis of a neutral attitude μ_0 = 2.5, a two-sided alternative, and a significance level of 0.05 resulted in no rejection of the null hypothesis.
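A minimal sketch of this test (hypothetical data; SciPy ≥ 1.6 assumed for the explicit two-sided alternative, which is also the default):

import numpy as np
from scipy import stats

# Hypothetical Safety Score values for the 12 participants
safety_score = np.array([2.0, 2.5, 3.0, 2.5, 2.0, 3.5, 2.5, 3.0, 2.0, 2.5, 3.0, 2.5])

# One-sample t-test against a neutral attitude of mu_0 = 2.5
t_stat, p_value = stats.ttest_1samp(safety_score, popmean=2.5, alternative='two-sided')
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05: the neutral-attitude hypothesis is not rejected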
The largest difference to a neutral attitude can be observed for the Safety Score, though it is not statistically significant.
The distribution of the Task Completion Scores (see Figure <ref>) revealed that some participants achieved a full score, with the lowest score being 5 points. However, no task was fully completed by all participants.
The observed behavior during the task completion is shown in Figure <ref>. Forward Glances were observed for all participants, while only 25% of them checked the mirrors or intervened with the steering wheel.
The bivariate distributions of the attitude scores and the observed behaviors are shown in Figure <ref>. No correlation is directly visible, which is confirmed through pairwise point-biserial correlation tests yielding only non-significant results. Additionally, multiple logistic regression using the three attitudes as input variables results in only non-significant coefficients for the attitude scores.
Similarly, the bivariate distribution of attitude scores and the Task Completion Score shown in Figure <ref> does not indicate any correlation. Pairwise Pearson correlation tests confirm this observation as no statistically significant correlations are found. Likewise, multiple linear regression using ordinary least squares results in non-significant coefficients for the three attitudes.
Thus, due to the lack of statistically significant correlation coefficients, the null hypotheses H0_1 and H0_2 cannot be rejected. For H0_2 in particular, the correlation was not calculable, due to uniform behavior across all participants.
Regression analysis further emphasized the lack of correlation, though a larger sample size would be necessary to perform further tests and strengthen the statements made above.
The comments from the participants during the experiment related to emotional aspects such as trust, control, and feelings of safety, as well as practical considerations like the mobility experience, in-car behavior, and alternative transportation modes. They were generally positive, with interventions and mirror checks being only occasional. Some of the comments were:
“Reading would be practical but difficult on complicated road situations.”
“It is kind of strange to give the car control over driving.”
“I behave regularly out of habit.”
“Feels like sitting in the bus (...) or taxi.”
“I'm more afraid of automatic functions than autonomous driving.”
“Driving with automatic functionality causes more problems for me than autonomous driving.”
“Very focused” (referred to own task performance).
“Yes, there was a child!” (being asked whether the participant was scared).
“Now it overlooked the pedestrian? really!” (during phone call).
§ CONCLUSION AND FUTURE WORK
The existing literature presents diverse approaches to examining autonomous driving from a sociological perspective. However, the majority of studies focus solely on either attitude toward autonomous vehicles or behaviour within these vehicles.
This study focused on integrating the two dimensions into a single research design.
Results indicated that there is great potential in employing a two-fold research design to explore autonomous driving. No statistically significant correlations between attitudes towards autonomous driving and either individuals' driving interventions or eye glance behavior while riding in autonomous cars were identified. This finding suggests that questionnaires and surveys on attitudes and potential behavior towards autonomous driving technology may not necessarily correlate with real-world behavior in road situations. Therefore, to draw accurate conclusions, it is crucial to always conduct field tests.
The experiment was conducted in a controlled environment on a test track without other vehicles, prioritizing safety for the test drivers. As a result, certain observations should be interpreted with caution. For instance, the lack of checks in the side and rearview mirrors could be due to the absence of other cars on the road.
While our study offers valuable insights, there are limitations that may affect the generalizability of the results, such as the small sample size and the specific demographic characteristics of the participants. Therefore, future research will consider a larger sample.
http://arxiv.org/abs/2409.02192v1 | 20240903180155 | A note on cables and the involutive concordance invariants | ["Kristen Hendricks", "Abhishek Mallick"] | math.GT | ["math.GT", "57K18"] |
§ ABSTRACT
We prove a formula for the involutive concordance invariants of cabled knots in terms of those of the companion knot and the pattern knot. As a consequence, we show that any iterated cable of a knot with parameters of the form (odd,1) is not smoothly slice as long as either of the involutive concordance invariants of the knot is nonzero. Our formula also gives new bounds for the unknotting number of a cabled knot, which are sometimes stronger than other known bounds coming from knot Floer homology.
§ INTRODUCTION
Cabling is a natural operation on a knot which acts on the smooth concordance group. There has been considerable interest in characterizing the behavior of various knot concordance invariants under cabling. Typically, one hopes to prove a formula which relates the values of some concordance invariant of a cabled knot to the value of the invariant on the companion knot and the value of the invariant on the pattern knot, in this case a torus knot. Some examples of invariants which are known to admit such formulas include the Levine-Tristram signatures <cit.>, the Heegaard Floer τ-invariant and ϵ-invariant <cit.>, Rasmussen's V_0-invariant <cit.>, and the Heegaard Floer ν^+-invariant <cit.>, among others.
Recently, concordance invariants stemming from the involutive variant of Heegaard Floer homology defined by the first author and Manolescu <cit.> have been shown to be fruitful in many applications. In this paper we will be interested in the concordance invariants (K) and (K), which can be thought of as the involutive analog of Rasmussen's V_0(K) invariant <cit.>. An early indication of the utility of these invariants was that the involutive concordance invariants are able to detect non-sliceness of certain rationally slice knots. Later, equivariant refinements of these invariants defined by the second author, Dai, and Stoffregen <cit.> were used to show that the (2,1)-cable of the figure-eight knot is not smoothly slice <cit.>.
In this article we give the cabling formula for the involutive concordance invariants. Let K_p,q denote the (p,q)-cable of a knot K, and V_s represent the generalization <cit.> of Rasmussen's V_0-invariant <cit.>.
Let p and q be positive integers with (p,q)=1. Then the involutive concordance invariants satisfy the following relations:
* If p is odd, we have
(K_p,q)= (K) + V_0(T_p,q) , (K_p,q)= (K) + V_0(T_p,q).
* If p is even,
(K_p,q)= max{ V_⌊s/p⌋(K), V_⌊(p+q-1-s)/p⌋(K) } + V_0(T_p,q) , (K_p,q)= V_0(T_p,q),
where s ≡p+q-1/2 (mod q).
Theorem <ref> implies that the involutive concordance invariants are determined by the knot Floer homology when the longitudinal winding parameter p is even. On the other hand, when p is odd, the involutive concordance invariants incorporate the corresponding invariants for the companion knot.
Theorem <ref> implies the following corollary. Let K_p_1,q_1; p_2, q_2; p_3, q_3; ⋯ ; p_k,q_k
represent the iterated cable of the knot K. For example, K_p_1,q_1; p_2, q_2 is the (p_2,q_2)-cable of the (p_1,q_1)-cable of K.
If the parameters p_i are all odd and positive, and either (K) or (K) is non-zero, then K_p_1,1; p_2,1; p_3,1; ⋯ ; p_k,1 is not smoothly slice.
Corollary <ref> is related to a famous open question by Miyazaki <cit.> which asks whether there are non-slice knots K for which the cable knot K_p,1 is slice.
It is possible to produce examples of topologically (and therefore algebraically) slice knots which are trivial with respect to ordinary Heegaard Floer homology and involutively nontrivial, for which the corollary above detects nonsliceness of the cable; for example, one such family appears in <cit.>. The authors are, however, not presently aware of a family of examples which are not better elucidated by other methods.
We now explain the strategy for the proof of Theorem <ref>. A direct approach would be to determine the full involutive knot Floer invariant of the cabled knot, which takes the form of the knot Floer chain complex of Ozsváth-Szabó and Rasmussen 𝒞𝒦ℱ(𝒦) together with the knot conjugation symmetry ι_K on the complex. The involutive concordance invariants of the knot may then in principle be extracted from this data. However, it is challenging to compute the full knot Floer chain complex for a general cabled knot, and additionally challenging to determine the symmetry ι_K. In general, direct computations of ι_K have only been carried out for simple chain complexes in which the map is determined by its algebraic properties, and for tensor products of those complexes.
Instead, our proof takes an indirect approach, using certain consequences of the surgery formula in involutive Heegaard Floer homology proved by Hom-Stoffregen-Zemke and the first author <cit.> to deduce the formulas presented in Thoerem <ref>. This approach is similar to one used by Ni-Wu to compute the Heegaard Floer correction terms of three-manifolds using surgery formulas <cit.>.
§.§ Applications to the unknotting number
We now discuss an application of Theorem <ref> to the unknotting number. Recall that the unknotting number u(K) of a knot K is the minimum number of times the knot must be passed through itself to turn it into an unknot. There are many well-known bounds in the literature for the unknotting number of a knot. In one recent example, in <cit.>, Alishahi and Eftekhary gave a bound using the torsion order of the HFK^- flavor of the knot Floer homology. This bound was later used by Hom, Lidman, and Park <cit.> to give bounds for the unknotting number of cabled knots. In particular, the authors showed that
u(K_p,q) ≥ p.
It is also well-known that the unknotting number is greater than or equal to the slice genus of the knot. Hence using the relationship of Rasmussen's V_0 invariant to the slice genus <cit.>, together with the cabling formula for the V_0-invariant <cit.>, one can also write down the bounds
u(K_p,q) ≥ 2 V_0(K) + 2 V_0(T_p,q), if g_4(K_p,q) is even
and
u(K_p,q) ≥ 2 V_0(K) + 2 V_0(T_p,q) -1, if g_4(K_p,q) is odd.
In particular, without knowing the slice genus of the knot, we have that
u(K_p,q) ≥ 2 V_0(K) + 2 V_0(T_p,q) -1.
Using Theorem <ref>, we improve the above bounds:
Let K_p,q be a cabled knot with p odd. Then
u(K_p,q) ≥ 2(K) + 2V_0(T_p,q) - 2 and u(K_p,q) ≥ -2(K) - 2V_0(T_p,q) - 2.
More precisely, one has
* u(K_p,q) ≥ 2(K) + 2V_0(T_p,q) - 1 and u(K_p,q) ≥ -2(K) - 2V_0(T_p,q) - 1, if g_4(K_p,q) is odd,
* u(K_p,q) ≥ 2(K) + 2V_0(T_p,q) - 2 and u(K_p,q) ≥ -2(K) - 2V_0(T_p,q) - 2, if g_4(K_p,q) is even,
although of course in general one would like to apply Theorem <ref> in situations where the slice genus is unknown.
A statement similar to Theorem <ref> could also be made for the 4-dimensional positive clasp number of a knot in place of the unknotting number. In another direction, because and are concordance invariants, the bound from Theorem <ref> is in fact a bound on the concordance unknotting number of K_p,q, which is to say the minimum unknotting number of a knot J concordant to K_p,q.
In Example <ref> we show that there are infinitely many knots for which the bound from Theorem <ref> is stronger than that from <cit.> or the aforementioned bound from the V_0-invariant.
§.§ Organization
This paper is organized as follows. In Section <ref> we review the definition of the involutive concordance invariants and structural features of the involutive surgery formula. We then prove Theorem <ref>, Corollary <ref>, and Theorem <ref> in Section <ref>. Finally, in Section <ref> we consider an example in which our formula improves on other bounds from knot Floer homology.
§.§ Acknowledgments
The authors are grateful to Jen Hom, Sungkyung Kang, Tye Lidman, JungHwan Park, and Ian Zemke for helpful comments. Portions of this work were carried out while the second author was in attendance at a Simons Semester on “Knots, Homologies and Physics” at the Institute of Mathematics of the Polish Academy of Science; other portions were carried out while the first author was present at the BIRS workshop “What's your Trick?: A nontraditional conference in low-dimensional topology.” We are grateful to both institutions for their hospitality.
§ BACKGROUND ON INVOLUTIVE HEEGAARD FLOER HOMOLOGY
In this section, we briefly recall the definition of the involutive knot concordance invariants of a knot and the involutive correction terms of a three-manifold from <cit.> and their relationship to surgeries on the knot <cit.>. We begin by recalling the algebraic setup.
An iota-complex is a pair (C,ι) of the following form:
* C is an absolutely ℚ-graded and relatively ℤ-graded finitely-generated chain complex over 𝔽_2[U] with the property that U^-1H_*(C) ≃𝔽_2[U,U^-1] with some grading shift.
* ι is a grading-preserving chain map such that ι^2 is chain homotopic to the identity.
To a rational homology sphere Y together with a spin^c structure 𝔰 on Y, Ozsváth and Szabó's Heegaard Floer homology associates a chain complex (Y,𝔰) <cit.>; if 𝔰 is conjugation-invariant, involutive Floer homology appends to this data a chain map ι. The pair ((Y,𝔰), ι) is an iota-complex in the sense of Definition <ref>.
The involutive Heegaard Floer invariants satisfy a tidy connected sum formula, as follows.
(Y_1 # Y_2, 𝔰_1 #𝔰_2, ι) ≃ ((Y_1, 𝔰_1) ⊗(Y_2, 𝔰_2), ι_1 ⊗ι_2).
That is, the two complexes are chain homotopy equivalent via maps which commute up to homotopy with the involutions in their pairs. This relationship is sometimes called a strong equivalence. The statement for the chain complexes is due to <cit.> and the statement for the involution is due to <cit.>.
In ordinary Heegaard Floer homology, the d-invariant of a chain complex C satisfying the criterion above is the maximum grading of an element a in C such that [U^n a] ≠ 0 for all positive n, and the d-invariant or correction term of (Y,𝔰) is d((Y,s)) <cit.>. From this, one may extract Rasmussen's concordance invariant V_0 as V_0(K) = -1/2 d((S^3_+1(K)) <cit.>. The d-invariant is additive under connected sum; that is,
d((Y_1#Y_2), 𝔰_1#𝔰_2) = d(Y_1, 𝔰_1) + d(Y_2, 𝔰_2).
An important special case is that of the lens space L(p,q). The iota-complex associated to L(p,q) in any structure 𝔰 is a copy of the pair (𝔽_2[U], Id), with the grading of the element 1 in 𝔽_2[U] being d(L(p,q),𝔰). Therefore, taking the connected sum with a lens space has the sole effect of imposing a grading shift on ((Y), 𝔰).
We may now define the involutive variants of the d-invariant. We follow <cit.>, which is a reformulation of the original definitions from <cit.>.
Let (C,ι) be an iota-complex. Then (C,ι) is the maximum grading of a homogeneous cycle a ∈ C such that [U^na] ≠ 0 for all positive n and furthermore there exists an element b such that ∂ b= (id + ι)a.
Let (C,ι) be an iota-complex. Consider triples (x,y,z) consisting of elements of C, with at least one of x or y nonzero such that ∂ y = (id + ι)x, ∂ z = U^m x, for some m ≥ 0 and [U^n(U^my + (id + ι)z)] ≠ 0 for all n ≥ 0. If x ≠ 0, assign this triple the value (x)+1; if x=0, assign this triple the value (y). Then (C,ι) is defined as the maximum of these grading values across all valid triples (x,y,z).
Given a three-manifold Y with conjugation-invariant -structure 𝔰, we say that (Y, 𝔰) = ((Y,𝔰)) and likewise (Y, 𝔰) = ((Y, 𝔰)). The invariants and are not additive under connected sum, but instead, for 𝔰 = 𝔰_1 #𝔰_2, satisfy the following <cit.>:
(Y_1,𝔰_1) + (Y_2, 𝔰_2) ≤(Y_1#Y_2,𝔰) ≤(Y_1,𝔰_1) + (Y_2, 𝔰_2) ≤(Y_1#Y_2,𝔰) ≤(Y_1,𝔰_1) + (Y_2, 𝔰_2).
We may now define the invariants and .
Let K be a knot in S^3. The involutive concordance invariants of K are
(K):= -1/2(S^3_+1(K)), (K):= -1/2(S^3_+1(K)).
These invariants also admit a more general relationship to surgeries, which is an extension of a formula for the non-involutive d-invariant proved by Ni and Wu <cit.>. The set of -structures on p/q-surgery on K may be identified with ℤ/pℤ; in this paper we follow the convention used by <cit.> and <cit.> for this identification. Ni and Wu show that
d(S^3_p/q(K), [s])= d(L(p,q),[s]) - 2max{ V_⌊s/q⌋(K), V_⌊(p+q-1-s)/q⌋(K) }.
Here V_i represents the ith concordance invariant in the sequence defined in <cit.>, which sequence generalizes V_0. In <cit.>, Hom, Stoffregen, Zemke, and the first author prove a surgery formula in involutive Heegaard Floer homology, one of whose consequences is an involutive analog of the relationship above, which we now recall. With respect to the convention above, the conjugation action on -structures is
J([i])=[p+q-1-i], where [i] ∈ℤ_p.
In particular, if p and q are both odd, then [q-1/2] ∈ℤ_p represents the unique self-conjugate -structure on the manifold S^3_p/q(K). When p is even and q is odd, there are two self-conjugate -structures on the manifold S^3_p/q(K), to wit [q-1/2] and [p+q-1/2]. Finally, when p is odd and q is even, there is a unique -structure [p+q-1/2]. With this in mind, we have the following.
<cit.>
Suppose that q is odd. Then we have
(S^3_p/q(K), [ q-1/2])= d(L(p,q),[q-1/2]) - 2(K)
(S^3_p/q(K), [q-1/2])= d(L(p,q),[ q-1/2]) - 2(K).
If one of p or q is even, then
(S^3_p/q(K), [p+q-1/2])= d(S^3_p/q(K),[p+q-1/2])
(S^3_p/q(K), [p+q-1/2])= d(L(p,q),[p+q-1/2]).
Note that the original version of the theorem is stated using a different indexing convention for the structures, which we have translated into the Ozsváth-Szabó convention here; for more on this, see <cit.>. For the statement of the full surgery formula of which this relationship is a consequence, see <cit.>.
§ PROOFS OF THE MAIN THEOREMS
§.§ A cabling formula for the involutive concordance invariants
In this subsection, we prove Theorems <ref> and <ref>. A key ingredient of our proof will be the following well-known diffeomorphism (see for example <cit.> or <cit.> for a proof):
S^3_pq(K_p,q) = S^3_q/p(K) # L(p,q).
Let us now denote the projections of a -structure to the two summands S^3_q/p(K) and L(p,q) as π_1 and π_2 respectively. We may treat these as functions
π_1: ℤ_pq→ℤ_q, π_2: ℤ_pq→ℤ_p.
We record the following lemma, which is a direct consequence of <cit.>.
The self-conjugate -structure [0] ∈ℤ_pq projects to the summands as follows:
* If both p and q are odd, we have
π_1(0) ≡p-1/2 (mod q); π_2(0) ≡q-1/2 (mod p).
* If p is odd and q is even,
π_1(0) ≡p-1/2 (mod q); π_2(0) ≡p+q-1/2 (mod p).
* If p is even and q is odd,
π_1(0) ≡p+q-1/2 (mod q); π_2(0) ≡q-1/2 (mod p).
In <cit.> it was shown that the projection maps satisfy the following relations
π_1(0) ≡ - (p-1)(q-1)/2 (mod q); π_2(0) ≡ - (p-1)(q-1)/2 (mod p).
The desired result now follows from a direct calculation.
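For instance, when p and q are both odd the calculation reads as follows (the remaining cases are similar, using that 2 is invertible modulo the odd modulus): since q-1≡ -1 (mod q) and p-1≡ -1 (mod p),
\[
\pi_1(0)\equiv-\frac{(p-1)(q-1)}{2}\equiv\frac{p-1}{2}\pmod q,
\qquad
\pi_2(0)\equiv-\frac{(p-1)(q-1)}{2}\equiv\frac{q-1}{2}\pmod p.
\]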
We may now prove Theorem <ref>.
We break the proof into three cases, depending on the parity of p and q.
Case 1. We first consider the case when p and q are both odd. We will give the proof for . The proof for is identical.
Per Lemma <ref>, the projection of the [0] structure on S^3_pq(K_p,q) to L(p,q) has image the unique self-conjugate -structure, represented by
[q-1/2] (mod p).
Similarly, for q/p-Dehn surgery with q odd, the projection of the [0] structure on S^3_pq(K_p,q) to S^3_q/p(K) has image the unique self-conjugate -structure, namely
[p-1/2] (mod q).
Now applying Lemma <ref> and (<ref>) to the case that both p and q are odd, we see that the iota-complex ((S^3_pq(K_p,q),[0]), ι) is strongly equivalent to the iota-complex ((S^3_q/p(K),[p-1/2]),ι) shifted in grading by d(L(p,q),[q-1/2]). It therefore follows that
(S^3_pq(K_p,q),[0])= (S^3_q/p(K), [ p-1/2]) + d(L(p,q),[q-1/2]).
Applying Theorem <ref>, we get that
(S^3_pq(K_p,q),[0])= d(L(pq,1),[0]) -2(K_p,q)
and also that
(S^3_q/p(K),[p-1/2]) = d(L(q,p),[p-1/2]) - 2(K).
Combining Equations (<ref>) and (<ref>) with Equation (<ref>) we get
d(L(pq,1),[0]) -2(K_p,q)=d(L(q,p),[p-1/2]) - 2(K) + d(L(p,q),[q-1/2]).
Now by plugging in the unknot for K in Equation <ref>, we get
d(L(pq,1),[0]) -2(T_p,q) = d(L(q,p),[p-1/2]) + d(L(p,q),[q-1/2]).
Substituting Equation (<ref>) into Equation (<ref>) and observing that (T_p,q)= V_0(T_p,q) <cit.> we get the desired equality. The proof for is similar.
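Spelled out, subtracting the unknot identity from the preceding one cancels the two lens-space correction terms. Writing \underline{V}_0 for the involutive invariant under consideration (this notation is our assumption, as the symbols are not rendered in this copy), the subtraction gives
\[
-2\underline{V}_0(K_{p,q})+2V_0(T_{p,q})=-2\underline{V}_0(K),
\qquad\text{i.e.}\qquad
\underline{V}_0(K_{p,q})=\underline{V}_0(K)+V_0(T_{p,q}).
\]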
Case 2. Let us now consider the case when p is odd and q is even. We first consider . The two self-conjugate -structures on S^3_q/p(K) are given by
[p-1/2] (mod q) and [p+q-1/2] (mod q).
On the other hand, on L(p,q) there is only one self-conjugate structure, to wit
[p+q-1/2] (mod p).
Now, in this case, Lemma <ref> implies
(S^3_pq(K_p,q),[0])= (S^3_q/p(K), [ p-1/2]) + d(L(p,q),[p+q-1/2]).
Again applying Theorem <ref>, we also have
(S^3_pq(K_p,q),[0])= d(L(pq,1),[0]) -2(K_p,q)
and
(S^3_q/p(K),[p-1/2]) = d(L(q,p),[p-1/2]) - 2(K).
The rest of the proof is identical to that for the case when p and q are both odd. The proof for is also similar.
Case 3. We are left to consider the case when p is even. In this case by Lemma <ref> we have
(CF(S^3_pq(K_p,q),[0])) = (S^3_q/p(K), [ p+q-1/2]) + d(L(p,q),[q-1/2]).
and from Theorem <ref> we still have
(CF(S^3_pq(K_p,q),[0])) = d(L(pq,1),[0]) - 2(K).
However, in this case our second consequence of Theorem <ref> is of the form
(S^3_q/p(K),[p+q-1/2]) = d(S^3_q/p(K),[p+q-1/2]).
Now let us choose s ≤ q-1 such that
s ≡p+q-1/2 (mod q).
Applying the Ni-Wu formula (<ref>), we get
d(S^3_q/p(K),[p+q-1/2]) = d(L(q,p),[p+q-1/2]) - 2 max{ V_⌊s/p⌋(K), V_⌊p+q -1-s/p⌋(K) }.
Combining Equations (<ref>), (<ref>), (<ref>) and (<ref>) and following the similar steps as above gives us the desired equality for .
As and have significantly different expressions in this final case, we also comment briefly on the argument for . The analog of Equation (<ref>) is
(S^3_q/p(K),[p+q-1/2]) = d(L(q,p),[p+q-1/2]).
Therefore following similar steps as for the case, we get
(K_p,q) = V_0(T_p,q).
This completes the proof.
The proof of Corollary <ref> now follows.
We may apply Theorem <ref> recursively.
§.§ Unknotting number bound
In this subsection, we produce a bound for the unknotting number, using Theorem <ref>.
In <cit.> Juhász-Zemke proved a bound for the slice genus using the involutive concordance invariants. Specifically, they showed
-⌈g_4(K) + 1/2⌉≤(K) ≤(K) ≤⌈g_4(K) + 1/2⌉.
Now observe that
g_4(K) ≤ u(K).
Hence, we immediately get the desired inequality by replacing K with K_p,q and applying Theorem <ref> for p odd.
§ EXAMPLES
We now discuss an example for which our cabling formula improves on other bounds from knot Floer homology.
We consider the knot K:= -2 T_6,7# T_6,13. It was shown in <cit.> that the knot Floer chain complex of K splits into equivariant summands one of which is the tensor product of 𝔽[U,U^-1] with the complex C in Figure <ref> in such a way that (K) and (K) is the involutive concordance invariant of the complex in Figure <ref> with the action of ι shown; in the language of the literature, the two complexes are locally equivalent. Indeed, one may compute and of the knot K by computing and of a suitable subcomplex of C ⊗𝔽[U,U^-1]; for more on this computation, see <cit.> and <cit.>.
In particular, computation shows that (K)=3. Now the unknotting number bound for K_3,2 from Theorem <ref> gives
u(K_3,2) ≥ 2 (K) + 2 - 2 = 6
Note that slice genus, and hence unknotting number, bounds for K_3,2 coming from the Ozsváth-Szabó τ-invariant, from the V_0-invariant, and from the ν^+ invariant <cit.> are all 1. Finally, observe that the unknotting number bound from <cit.> shows that u(K_3,2) ≥ 3.
A similar argument applies to the general situation of the (n,2)-cables of the knots K_n = -2T_2n,2n+1#T_2n,4n+1 for n odd considered in <cit.>, which have chain complexes locally equivalent to the analog of the complex in Figure <ref> with length n arrows in the box, and have (K_n) = n.
amsalpha
|
http://arxiv.org/abs/2409.03078v1 | 20240904210714 | Asymptotic dimension and hyperfiniteness of generic Cantor actions | [
"Sumun Iyer",
"Forte Shinko"
] | math.LO | [
"math.LO",
"math.DS"
] |
^†Carnegie Mellon University; sumuni@andrew.cmu.edu
^*University of California Berkeley; forteshinko@berkeley.edu
This research was partially conducted at the Spring 2023 thematic program in Set Theoretic Methods in Algebra, Dynamics and Geometry at the Fields Institute for Research in Mathematical Sciences. The first author was supported by NSF GRFP grant DGE – 2139899 and the second author was supported by the Fields Institute.
§ ABSTRACT
We show that for a countable discrete group
which is locally of finite asymptotic dimension,
the generic continuous action on Cantor space
has hyperfinite orbit equivalence relation.
In particular,
this holds for free groups,
answering a question of Frisch-Kechris-Shinko-Vidnyánszky.
Asymptotic dimension and hyperfiniteness of generic Cantor actions
Sumun Iyer^† and Forte Shinko^*
September 9, 2024
==================================================================
For this entire article,
fix a countable discrete group Γ.
§ INTRODUCTION
A countable Borel equivalence relation (CBER)
is an equivalence relation E on a standard Borel space X
which is Borel as a subset of X^2,
and for which every equivalence class is countable
(see <cit.> for more background on CBERs).
The theory of CBERs seeks to classify these equivalence relations
based on their relative complexity.
More precisely,
there is a natural preorder on CBERs,
called the Borel reducibility preorder,
defined as follows:
if E and F are CBERs on X and Y respectively,
then E ≤_B F if there is a Borel map f : X → Y
such that for all x, x' ∈ X,
we have
x E x' ⟺ f(x) F f(x').
If E ≤_B F,
then we think of E as “simpler” than F.
The simplest CBERs are the so-called smooth CBERs,
which are those CBERs E satisfying E ≤_B Δ_,
where Δ_ is the equality relation on .
The canonical non-smooth CBER is E_0 on 2^ defined as follows:
x E_0 y ⟺ ∃ k ∀ n > k [x_n = y_n]
A CBER E is hyperfinite if E ≤_B E_0.
Hyperfiniteness is the next level up from smoothness in the following sense:
by the Harrington-Kechris-Louveau theorem,
a CBER E is non-smooth iff E_0 ≤_B E
(see <cit.>).
Hyperfiniteness is a very active area of research,
in part due to the deep connection with amenability.
Given a Borel action Γ X on a standard Borel space X,
denote by E_Γ^X the orbit equivalence relation of X.
By the Connes-Feldman-Weiss theorem <cit.>,
every orbit equivalence relation
of every amenable group is measure-hyperfinite,
where a CBER E on X is measure-hyperfinite
if for every Borel probability measure μ on X,
there is a Borel subset Y ⊆ X with μ(Y) = 1
such that E Y is hyperfinite.
A long-standing open question of Weiss
asks whether we can remove the measure condition
(see <cit.>):
Is every orbit equivalence relation of every amenable group hyperfinite?
This problem is far from being resolved,
and in fact it is still open for solvable groups,
although it is known that the answer is positive for nilpotent and polycyclic groups
(see <cit.>).
To answer Weiss's Question in the positive,
it would be enough to have a positive resolution to the following question
(see <cit.>):
Is every measure-hyperfinite CBER hyperfinite?
It is possible for this question to have a strong negative answer.
For instance,
measure-hyperfiniteness is a ^1_1 property,
and it is possible that hyperfiniteness is ^1_2-complete
(see <cit.>),
which in particular would imply that there are “many”
measure-hyperfinite CBERs which are not hyperfinite.
Another possible approach to a strong negative answer
is to apply Baire category in a Polish space of CBERs,
which we make precise.
A Cantor action of Γ is a group homomorphism Γ→(2^),
which we view as a continuous action Γ 2^.
Viewing (2^) as a Polish group with the compact-open topology,
let (Γ) be the Polish subspace of
(2^)^Γ consisting of Cantor actions of Γ.
It was shown by Suzuki (see <cit.>)
that if Γ is an exact group,
meaning that its reduced C*-algebra is exact,
then the set
{ a ∈(Γ) : a is measure-hyperfinite}
is comeager,
where we say that an action is hyperfinite
or measure-hyperfinite
if its orbit equivalence relation is.
This raises the following natural question,
which appears as Problem 8.0.16 in <cit.>
(note that there it is stated in terms of the space of subshifts,
but this is equivalent by a result of Hochman,
see <cit.>):
If Γ is an exact group,
is the set
{ a ∈(Γ) : a is hyperfinite}
comeager?
A negative answer to <Ref>
would immediately give a negative answer to <Ref>.
However,
we show that <Ref> has a positive answer
for a wide class of exact groups,
in particular for free groups,
for which the problem had been open:
If Γ is locally of finite asymptotic dimension,
then the set
{ a ∈(Γ) : a is hyperfinite}
is comeager.
Asymptotic dimension is a coarse invariant of discrete groups
taking values in ∪{∞}.
Groups which are locally of finite asymptotic dimension
include free groups,
hyperbolic groups,
and mapping class groups of finite type surfaces
(see <cit.>
for background on asymptotic dimension of groups).
In particular,
this theorem exhibits examples of amenable groups,
such as (ℤ/2 ≀ ℤ)^2,
for which it is now known that the generic Cantor action is hyperfinite,
but for which it is open whether all of its Cantor actions are hyperfinite.
There are still many amenable groups,
such as the solvable group ℤ ≀ ℤ,
for which it not yet known that the generic Cantor action is hyperfinite.
§ BACKGROUND
We denote by (2^) the homeomorphism group of the Cantor space 2^,
viewed as a Polish group with the compact-open topology.
We describe an explicit basis for (2^).
We view every ϕ∈(2^) as a directed graph
whose vertex set is 2^,
and where there is a directed edge from x to y iff ϕ(x) = y.
Ranging over all finite directed graphs G
and over all continuous maps c : 2^→ V(G),
the sets
{ϕ∈(2^)
: c is a homomorphism of directed graphs from ϕ to G}
form an open basis for the topology of (2^).
For the rest of this section,
fix a countable group Γ.
We write S ⋐Γ
to mean that S is a finite subset of Γ
such that 1 ∈ S and S^-1 = S.
Let (Γ) be the set of continuous actions of Γ on 2^.
We view (Γ) as the (2^)-invariant Polish subspace of (2^)^Γ
consisting of all group homomorphisms Γ→(2^),
where the action (2^) (2^)^Γ is by conjugation on each coordinate.
We describe an explicit basis for (Γ).
A Γ-graph is a pair G = (V(G), E(G)),
where V(G) is a set,
and E(G) is a subset of Γ× V(G) × V(G).
We view every action Γ X as an Γ-graph G
where V(G) = X
and (γ, x, y) ∈ E(G) iff γ· x = y.
For Γ-graphs G and G',
a function f : V(G) → V(G') is an Γ-map from G to G'
if for every (γ, v, w) ∈ E(G),
we have (γ, f(v), f(w)) ∈ E(G').
A finite Γ-graph is a Γ-graph G
such that V(G) is finite
and such that E(G) is a cofinite subset of Γ× V(G) × V(G).
Ranging over all finite Γ-graphs G
and over all continuous maps c : 2^→ V(G),
the sets
{ a ∈(Γ) :
c is a Γ-map from a to G}
form an open basis for the topology of (Γ).
§ LOCALLY CHECKABLE LABELLING PROBLEMS
For this section,
fix a countable group Γ.
We describe another basis for (Γ).
An LCL on Γ
(short for Locally Checkable Labelling problem)
is a set of functions
each of whose domains is a finite subset of Γ.
We think of an LCL as a set of “allowed patterns” for a coloring.
Let Γ X be an action,
and let Π be an LCL on Γ.
A function c with domain X is a Π-coloring
if there is some finite Π_0 ⊆Π
such that for all x ∈ X,
there is some P ∈Π_0
such that for all γ∈(P),
we have c(γ x) = P(γ).
So c is a Π-coloring
iff there is some finite Π_0 ⊆Π
for which c is a Π_0-coloring.
Ranging over all LCLs Π
and over all continuous maps c from 2^ to a discrete space,
the sets
{ a ∈(Γ) : c is a Π-coloring}
form an open basis for the topology of (Γ).
First we show that each such set is open.
Let a ∈(Γ),
let Π be an LCL,
and let c be a continuous map from 2^ to a discrete space,
such that c is a Π-coloring of a.
Fix a total order on Π,
and let f : 2^→Π be the function defined as follows:
for x ∈ 2^,
let f(x) be the first element P ∈Π
such that for all γ∈(P),
we have c(γ x) = P(γ).
This is continuous since c is continuous.
Endow Π with a Γ-graph structure as follows:
say that (γ, P, Q) is an edge
if (γ P) ∪ Q is a function.
Then f is a Γ-map from a to Π,
and for every b ∈(Γ) for which f is a Γ-map,
we have that c is a Π-coloring of b.
To show that it is a basis,
we will show that every set in the previous basis is of the new form.
Fix a finite Γ-graph G and a continuous map f : 2^→ V(G).
Fix S ⋐Γ such that
(Γ∖ S) × V(G) × V(G) ⊆ E(G).
Consider the LCL Π consisting of all functions P : S → V(G)
such that (s, P(1), P(s)) ∈ E(G),
Then for every a ∈(Γ),
we have that f is a Γ-map from a to V(G)
iff f is a Π-coloring.
Notice that every set of the form
{ a ∈(Γ) :
a has a continuous Π-coloring}
is an open set
since it is a union of the basic open sets considered in <Ref>.
We show that nonempty such sets are dense.
Let Π be an LCL on Γ.
Then the following are equivalent:
* Γ has a Π-coloring.
* { a ∈(Γ) :
a has a continuous Π-coloring} is nonempty.
* { a ∈(Γ) :
a has a continuous Π-coloring} is dense.
We will need the following.
The set { a ∈(Γ) : a is free}
is dense G_δ.
For a proof of <Ref>,
see <cit.> or <Ref>.
* (1 2):
It suffices to find some zero-dimensional compact Γ-space
with a continuous Π-coloring,
since its product with 2^ yields a Cantor action with the same property.
Fix a finite subset Π_0 ⊆Π such that Γ has a Π_0-coloring,
and let K = ⋃_P ∈Π_0(P).
View K^Γ as a compact Γ-space
equipped with the action (γ· x)_δ = x_δγ.
Then the compact Γ-invariant subspace of K^Γ
defined by
X = {x ∈ K^Γ : x is a Π_0-coloring}
is nonempty,
and it has a continuous Π_0-coloring given by c(x) = x_1.
* (2 3):
The action (2^) (Γ) is generically ergodic,
i.e. has a dense orbit
(see <cit.>),
so since this set is non-empty and (2^)-invariant,
it is dense.
* (3 1):
By <Ref>,
there is a free a ∈(Γ)
with a continuous Π-coloring.
By freeness,
there is a Γ-equivariant map Γ→ a,
and the composition of this with the Π-coloring of a
is a Π-coloring of Γ.
We can also prove <Ref> using LCLs.
One can show using a coloring result like <cit.>
that a zero-dimensional Polish Γ-space is free
iff for every γ∈Γ,
it has a continuous coloring for the LCL
consisting of injections {1, γ}{0, 1, 2}.
Then <Ref> immediately follows from
(1 2 3)
of <Ref>,
whose proof never used <Ref>.
§ ASYMPTOTIC DIMENSION AND HYPERFINITENESS
For this section,
fix a countable group Γ.
Let n ∈.
An n-coloring is a function
whose image is a subset of
{0, 1, 2, …, n - 1}.
Let Γ X be an action and let S ⋐Γ.
A function c with domain X is S-separated
if there is a uniform bound on the sizes of the components
of the graph with vertex set X
where x and x' are adjacent iff x' ∈ Sx and c(x) = c(x').
The asymptotic dimension of an action
Γ X of a group on a set,
denoted (Γ X),
is defined as follows:
(Γ X)
= sup_S ⋐Γmin{n ∈ :
Γ X has an S-separated n-coloring} - 1
We define the asymptotic dimension of a group.
The asymptotic dimension of a group Γ,
denoted (Γ),
is the asymptotic dimension of the left-multiplication action
ΓΓ.
A group Γ is locally of finite asymptotic dimension
if all of its finitely generated subgroups have finite asymptotic dimension.
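As a standard illustration of these definitions (writing asdim for asymptotic dimension; this example is not needed in what follows), consider Γ = ℤ acting on itself by translation. Given S ⋐ ℤ with S ⊆ [-k, k], fix any N > k and define the 2-coloring
c(m) = ⌊ m/N ⌋ mod 2 .
Two integers with the same color that are within distance k of each other must lie in the same block [jN, (j+1)N), since adjacent blocks receive different colors; hence every component of the associated graph has size at most N, the coloring is S-separated, and asdim(ℤ) ≤ 1. On the other hand, for S = {-1, 0, 1} the graph attached to any 1-coloring is all of ℤ, which is connected, so there is no S-separated 1-coloring and therefore asdim(ℤ) = 1.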
Note that every free action of Γ has asymptotic dimension (Γ).
In particular,
if Δ is a subgroup of Γ,
then (ΔΓ) = (Δ).
Asymptotic dimension can be encoded by LCLs.
Let S ⋐Γ and let n ∈.
The LCL Π_S, n is the set of n-colorings P with (P) such that
* (P) is a finite subset of Γ;
* 1 ∈(P);
* for every γ∈(P) with P(γ) = P(1),
we have S γ⊆(P).
Let Γ X be an action,
let S ⋐Γ,
and let n ∈.
Then every Π_S, n-coloring of X is an S-separated n-coloring.
Moreover,
if the action is free,
then the converse also holds.
Fix a function c with domain X.
Let G be the graph with vertex set X
where x and x' are adjacent iff x' ∈ Sx and c(x) = c(x').
Suppose c is a Π_S, n-coloring.
Then c is a Π_0-coloring for some finite Π_0 ⊆Π_S, n.
Let x ∈ X.
Since c is a Π_0-coloring,
there is some P ∈Π_0 such that for every γ∈(P),
we have P(γ) = c(γ x).
Then [x]_G ⊆(P) x.
Hence every component of G has size at most max_P ∈Π_0 |(P)|.
Now suppose that the action is free,
and suppose that c is an S-separated n-coloring.
Then there is some k ∈
such that for every x ∈ X,
the G-component [x]_G of x satisfies [x]_G ⊆ S^k x.
Let Π_0 ⊆Π_S, n consist of those P
with (P) ⊆ S^k+1.
Now suppose x ∈ X.
Consider the function P
with domain {γ∈Γ : γ x ∈ S[x]_G}
defined by P(γ) = c(γ x).
Then P ∈Π_S, n,
and we have (P)x ⊆ S[x]_G ⊆ S^k+1 x,
so by freeness we have (P) ⊆ S^k+1,
and hence P ∈Π_0.
Thus c is a Π_S, n-coloring.
For Cantor actions,
we use a topological version of asymptotic dimension.
The continuous asymptotic dimension of a continuous action
Γ X on a topological space,
denoted _c(Γ X),
is defined as follows:
_c(Γ X)
= sup_S ⋐Γmin{n ∈ :
Γ X has a continuous S-separated n-coloring} - 1
Let Δ≤Γ be a subgroup.
Then the set
{ a ∈(Γ) :
a is free and
_c( aΔ) = (Δ)}
is dense G_δ,
and hence comeager.
For free a ∈(Γ),
we have _c( a Δ)
≥( a Δ)
= (Δ),
so we need only consider the inequality
_c( aΔ) ≤(Δ).
Freeness is dense G_δ by <Ref>.
If (Δ) = ∞,
then the set in question is just the set of free actions,
so we are done.
So suppose (Δ) < ∞.
By <Ref>,
for a ∈(Γ),
we have that a is free and satisfies
_c( aΔ) ≤(Δ)
iff a is free and has a continuous Π_S, (Δ) + 1-coloring
for every S ⋐Δ,
The latter condition is dense G_δ by <Ref>,
so we are done.
In particular,
{ a ∈(Γ) :
a is free and _c( a) = (Γ)}
is dense G_δ and hence comeager,
so if (Γ) is finite,
then the generic element of (Γ) is hyperfinite
by <cit.>.
We can sharpen this to obtain <Ref> from the introduction:
The set
{ a ∈(Γ) :
a is free and _c( aΔ) = (Δ)
for every finitely generated Δ≤Γ}
is dense G_δ,
and hence comeager.
In particular,
if Γ is locally of finite asymptotic dimension,
then
{ a ∈(Γ) :
a is hyperfinite}
is comeager.
For every finitely generated Δ≤Γ,
the set
{ a ∈(Γ) :
a is free and _c( aΔ) = (Δ)}
is dense G_δ by <Ref>.
There are countably many finitely generated Δ≤Γ,
so the intersection over all of them is still dense G_δ.
Now suppose that Γ is locally of finite asymptotic dimension.
It suffices to show that every element a
of this dense G_δ set is hyperfinite.
Fix an increasing sequence (Δ_n)_n of finitely generated
subgroups whose union is Γ.
Then for every n,
we have _c( a Δ_n) = (Δ_n) < ∞.
Thus a is hyperfinite by <cit.>.
|
http://arxiv.org/abs/2409.02282v1 | 20240903203257 | A model for contractile stress fibers embedded in bulk actomyosin networks | [
"Mariya Savinov",
"Charles S. Peskin",
"Alex Mogilner"
] | physics.bio-ph | [
"physics.bio-ph",
"92C05 (Primary), 92C37 (Secondary)"
] | |
http://arxiv.org/abs/2409.03449v1 | 20240905115640 | MOBIUS: Towards the Next Generation of Query-Ad Matching in Baidu's Sponsored Search | [
"Miao Fan",
"Jiacheng Guo",
"Shuai Zhu",
"Shuo Miao",
"Mingming Sun",
"Ping Li"
] | cs.IR | [
"cs.IR"
] |
MOBIUS: Towards the Next Generation of Query-Ad Matching in Baidu's Sponsored Search
fanmiao, guojiacheng, zhushuai, miaoshuo, sunmingming01, liping11@baidu.com
^1 Cognitive Computing Lab (CCL), Baidu Research, Baidu Inc.
^2 Baidu Search Ads (Phoenix Nest), Baidu Inc.
§ ABSTRACT
Baidu runs the largest commercial web search engine in China, serving hundreds of millions of online users every day in response to a great variety of queries. In order to build a high-efficiency sponsored search engine, we used to adopt a three-layer funnel-shaped structure to screen and sort hundreds of ads from billions of ad candidates subject to the requirement of low response latency and the restraints of computing resources. Given a user query, the top matching layer is responsible for providing semantically relevant ad candidates to the next layer, while the ranking layer at the bottom concerns more about business indicators (e.g., CPM, ROI, etc.) of those ads. The clear separation between the matching and ranking objectives results in a lower commercial return. The Mobius project has been established to address this serious issue. It is our first attempt to train the matching layer to consider CPM as an additional optimization objective besides the query-ad relevance, via directly predicting CTR (click-through rate) from billions of query-ad pairs. Specifically, this paper will elaborate on how we adopt active learning to overcome the insufficiency of click history at the matching layer when training our neural click networks offline, and how we use the SOTA ANN search technique for retrieving ads more efficiently (Here “ANN” stands for approximate nearest neighbor search). We contribute the solutions to Mobius-V1 as the first version of our next generation query-ad matching system.
[Figure: “Mobius” is the internal code name of this project. Coincidentally, the well-known “Mobius Loop” is also the bird's-eye view of Baidu's Technology Park in Beijing, China.]
^1Miao Fan, ^2Jiacheng Guo, ^2Shuai Zhu, ^2Shuo Miao, ^1Mingming Sun, ^1Ping Li
September 9, 2024
====================================================================================
§ INTRODUCTION
Baidu Search (<www.baidu.com>), as the largest commercial search engine in China, daily serves hundreds of millions of online users in response to a great variety of search queries. It is common knowledge that advertising has been the main revenue source for all major commercial search engine firms in the world. In this paper, we focus on explaining some of the recent exciting development and invention in Baidu's Search Ads system (conventionally known as the “Phoenix Nest” inside Baidu). As shown by Figure <ref>, it plays a vital role in retrieving advertisements (ads) which are relevant to user queries to attract clicks as advertisers are willing to pay when their ads get clicked. The goal of Baidu sponsored search system is to form and nourish a virtuous circle among online users, advertisers, and our sponsored search platform.
Conventional sponsored search engines <cit.> generally display ads through a two-step process. The first step is to retrieve relevant ads given a query, and the next step is to rank these ads based on predicted user engagement. As a high-efficiency sponsored search engine for commercial use in Baidu, we used to adopt a three-layer funnel-shaped structure to screen and sort hundreds of ads from billions of ad candidates subject to the requirement of low response latency and the restraints of computing resources. As illustrated in Figure <ref>, the top matching layer is responsible for providing relevant ad candidates to the next layer given a user query and the rich profile of the user. To cover more semantically relevant ads, query expansion <cit.> and natural language processing (NLP) techniques <cit.> are mostly leveraged. The ranking layer at the bottom concerns more about business indicators <cit.>, such as cost per mile (CPM = CTR × Bid), return on investment (ROI), etc., of the filtered ads provided by the upper layer.
However, this separation between the matching and ranking objectives leads to a lower commercial return for various reasons. Given a user query, we have to employ complex models and spend a lot of computing resources on ranking hundreds or even thousands of ad candidates. Perhaps most disappointingly, the ranking models report that many relevant ads do not offer a high CPM and therefore will not be displayed.
To address this issue, Baidu Search Ads has set up the “Mobius” project which aims towards the next generation query-ad matching system in Baidu's sponsored search. This project is expected to unify the diverse learning objectives including the query-ad relevance and many other business indicators together, subject to lower response latency, restraints of computing resources and tiny adverse impact on user experience.
In this paper, we introduce Mobius-V1 which is our first attempt for teaching the matching layer to take CPM as an additional optimization objective besides the query-ad relevance. In other words, Mobius-V1 has the capability of accurately and rapidly predicting click-through rate (CTR) for billions of user query & ad pairs. To achieve this goal, we must resolve the following major problems:
* Insufficient click history: The original neural click model employed by the ranking layer was trained by high-frequency ads and user queries. It tends to estimate a query-ad pair at a much higher CTR for display once either a high-frequency ad or a high-frequency query appears, even though they might have low relevance.
* High computational/storage cost: Mobius is expected to forecast multiple indicators (including relevance, CTR, ROI, etc.) of billions of user query & ad pairs. It naturally faces the challenge of greater consumption on computing resources.
To address the problems above, we first design a “teacher-student” framework inspired by the idea of active learning <cit.> to augment the training data for our large-scale neural click model to predict CTR for billions of user query & ad pairs. Specifically, an offline data generator is responsible for constructing synthetic query-ad pairs given billions of user queries and ad candidates. These query-ad pairs are constantly judged by a teacher agent which is derived from the original matching layer and is good at measuring the semantic relevance of a query-ad pair. It can help detect the bad cases (i.e., high CTR but low relevance) in the synthetic query-ad pairs. Our neural click model, as a student, is taught by the additional bad cases to improve the ability of generalization on tail queries and ads. To save the computing resources and satisfy the requirement of low response latency, we further adopt the most recent state-of-the-art approximate nearest neighbor (ANN) search and Maximum Inner Product Search (MIPS) techniques for indexing and retrieving a large number of ads more efficiently.
To tackle the aforementioned challenges, Mobius-V1, as the first version of our next generation query-ad matching system, is an integration of the solutions above and has already been deployed in Baidu's sponsored search engine.
§ VISION OF BAIDU'S SPONSORED SEARCH
For a long period of time, the funnel-shaped structure is a classical architecture of sponsored search engines <cit.>. The major components include the query-ad matching and ad ranking. The query-ad matching is typically a lightweight module which measures the semantic relevance between a user query and billions of ads. In contrast, the ad ranking module should concern much more business indicators such as CPM, ROI, etc., and use complex neural models to sort hundreds of ad candidates for display. This decoupled structure is a wise option to save the expensive computing resources in the early days.
Moreover, it can also facilitate both scientific research and software engineering as the two modules can be assigned to different research/development teams to maximize individual objectives.
Baidu's sponsored search used to adopt a three-layer funnel-shaped structure which is shown by Figure <ref>. The optimization objective of the top matching layer (denoted by 𝒪_Matching) is to maximize the average relevance score among all the query-ad pairs:
𝒪_Matching = max1/n∑_i=1^nRelevance (query_i, ad_i).
However, according to our long-term analysis on the performance of Baidu's sponsored search engine, we find out that the distinction/separation between matching and ranking objectives tends to lead to lower CPM which is one of the key business indicators for a commercial search engine. It is unsatisfactory when the models in the ranking layer report that many relevant ads provided by the matching layer will not be displayed on search results as they are estimated not to have higher CPM.
With the rapid growth of computing resources, the Baidu Search ads team (“Phoenix Nest”) has recently established the Mobius project which aims towards the next generation query-ad matching system in Baidu's sponsored search. The blueprint of this project as illustrated in Figure <ref> looks forward to unifying multiple learning objectives including the query-ad relevance and many other business indicators into a single module in Baidu's sponsored search, subject to lower response latency, limited computing resources and tiny adverse impact on user experience.
This paper will report the first version of Mobius, i.e., Mobius-V1, which is our first attempt to teach the matching layer considering CPM as an additional optimization objective besides the query-ad relevance. Here we formulate the objective of Mobius-V1 as follows,
𝒪_Mobius-V1 = max∑_i=1^nCTR (user_i, query_i, ad_i) × Bid_i,
s.t. 1/n∑_i=1^nRelevance (query_i, ad_i) ≥ threshold.
Thus, it becomes a challenge about how to accurately predict CTR for billions pairs of user quires and ad candidates in Mobius-V1. In the rest of the paper, we will describe how we design, implement, and deploy Mobius-V1, in great details.
§ MOBIUS: NEXT GENERATION QUERY-AD MATCHING SYSTEM
“Mobius” is Baidu's internal code name of this project. Coincidentally, the well-known “Mobius Loop” is also the bird's eye view of Baidu's Technology Park in Beijing, China; see Figure <ref>. “Mobius-V1” is our first attempt (which has been successfully deployed) to transfer our neural click model to the matching layer directly facing billions of user query and ads. As the scale of input data dramatically increases, we need to re-train our neural click model offline and update the techniques of indexing and retrieving ads.
§.§ Active-Learned CTR Model
For over 6 years, Baidu's sponsored search engine has been using the deep neural networks (DNN) for the CTR model (of gigantic size). Recently, Mobius-V1 has adopted an innovative new architecture. An intuitive and simple way of building Mobius-V1 is to reuse the original CTR model in the ranking layer. It is a large-scale and sparse deep neural network (DNN) which is in favor of memorization. However, it suffers from a severe bias on CTR prediction of either the user queries or the ads in the tail. Consider, as shown in Figure <ref>, the two queries “Tesla Model 3” and “White Rose” requested by the same user as in the search log. For the funnel-shaped structure adopted in the past, the relevance between the query “Tesla Model 3” and the ad “Mercedes-Benz” is firstly guaranteed by the matching layer. Then our neural click model in the ranking layer tends to predict a higher CTR on the query-ad pair as “Tesla Model 3” is a high-frequency query and leaves a rich click history on the ad “Mercedes-Benz” in our search log. However, in Mobius-V1 we attempt to use our neural click network to directly handle billions of query-ad pairs lacking the guarantee of relevance. It is natural that many irrelevant query-ad pairs come out (e.g., the query “White Rose” and the ad “Mercedes-Benz” in Figure <ref>). Nevertheless, we have found out that our neural click model still tends to predict higher CTR for those irrelevant query-ad pairs.
According to our analysis on the query log in Baidu's sponsored search, the ads and the user queries suffer from the long-tail effect and the cold start problem.
Therefore, we can not directly leverage the original neural click model to accurately predict CTR for billions of user queries and ads in the tail. The key to the problem is how we teach our model learning to recognize the “low relevance but high CTR” query-ad pairs as the bad cases.
To solve this problem, we propose to use the original relevance judger in the matching layer as the teacher to make our neural click model aware of the “low relevance” query-ad pairs.
Our neural click model, as the student, acquires the additional knowledge on relevance from the augmented bad cases in an active learning fashion.
Figure <ref> illustrates the fashion by a flow diagram and Algorithm <ref> shows the training procedure of teaching our neural click model with active learning in pseudo code.
Generally speaking, the iterative procedure of active learning has two phases: data augmentation and CTR model learning. To be specific, we will elaborate the modules in each phase step by step.
The phase of data augmentation starts from loading a batch of click history (i.e., user query & ad pairs) from query logs into a data augmenter. Every time the data augmenter receives the query-ad pairs, it splits them into two sets: a query set and an ad set. Then we apply a cross join operation (⊗) to the two sets for the sake of constructing more user query & ad pairs. Suppose that there are m queries and n ads in the batch of click history, and then the data augmenter can help generate m × n synthetic query-ad pairs. After listing all possible query-ad pairs, the relevance judger involves in and takes charge of grading the relevance of these pairs. As we want to discover the low relevance query-ad pairs, a threshold is set to reserve those pairs as candidate teaching materials. These low relevance query-ad pairs, as teaching materials, are fed into our neural click model for the first time, and each pair is assigned with CTR predicted by the updated model in the previous iteration. To teach our 3-classes (i.e., click, unclick and bad) neural click model learning to recognize “low relevance but high CTR” query-ad pairs, we may intuitively set another threshold to filter out most low CTR query-ad pairs. However, we consider a better option to balance the exploration and exploitation of the augmented data. We employ a data sampler which selects and labels the augmented data referred to the predicted CTRs of those synthetic query-ad pairs. Once a query-ad pair is sampled as a bad case for our neural click network, this pair is labeled by an additional category, i.e., bad.
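The following Python sketch illustrates the data-augmentation phase described above. It is our own simplified pseudo-implementation: the interfaces teacher.relevance and student.predict_ctr, the relevance threshold, the sample size, and the with-replacement CTR-weighted sampler are all illustrative stand-ins rather than the production components.

import itertools
import random

def augment_batch(click_history, teacher, student, relevance_threshold=0.2, n_bad=1000):
    """click_history: list of (user, query, ad, label) tuples from the search log."""
    queries = {(user, query) for user, query, ad, _ in click_history}
    ads = {ad for _, _, ad, _ in click_history}
    # Cross join: m queries x n ads synthetic pairs.
    synthetic = [(u, q, ad) for (u, q), ad in itertools.product(queries, ads)]
    # The teacher keeps only low-relevance pairs as candidate teaching material.
    candidates = [p for p in synthetic if teacher.relevance(p[1], p[2]) < relevance_threshold]
    if not candidates:
        return []
    # The student scores the candidates; sampling proportionally to predicted CTR
    # favors the "low relevance but high CTR" pairs we want to label as bad cases.
    weights = [student.predict_ctr(*p) for p in candidates]
    bad_cases = random.choices(candidates, weights=weights, k=min(len(candidates), n_bad))
    return [(u, q, ad, "bad") for u, q, ad in bad_cases]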
In the phase of learning our CTR model, both the click/unclick history and the labeled bad cases are added into the augmented buffer as the training data. Our neural click network is a large-scale and multi-layer sparse DNN which is composed of two subnets, i.e., user query DNN and ad DNN. As illustrated by Figure <ref>, the user query DNN on the left takes rich user profiles and queries as inputs and the ad DNN on the right regards the ad embeddings as features. Both subnets produce a distributed representation with 96 dimensions each of which is segmented into three vectors (32 × 3). We apply the inner product operation 3 times to the three pairs of vectors between the user query DNN and ad DNN and adopt a softmax layer for CTR prediction.
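A rough PyTorch-style sketch of the two subnets and the triple inner product described above is given below. The input and hidden dimensions, and the reading of the three inner products as the logits of the 3-class (click, unclick, bad) softmax, are our assumptions for illustration; the production network is far larger and sparser.

import torch
import torch.nn as nn

class TwoTowerClickModel(nn.Module):
    def __init__(self, user_feat_dim=256, ad_feat_dim=256, hidden=512):
        super().__init__()
        self.user_net = nn.Sequential(
            nn.Linear(user_feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 96))
        self.ad_net = nn.Sequential(
            nn.Linear(ad_feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 96))

    def forward(self, user_query_feats, ad_feats):
        u = self.user_net(user_query_feats).view(-1, 3, 32)  # three 32-d segments
        a = self.ad_net(ad_feats).view(-1, 3, 32)
        logits = (u * a).sum(dim=-1)          # three inner products per query-ad pair
        return torch.softmax(logits, dim=-1)  # probabilities over {click, unclick, bad}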
Overall, we contribute a type of learning paradigm to train our neural click model offline in Baidu's sponsored search engine.
For the sake of improving its capability of generalization on CTR prediction for billions of query-ad pairs in the tail, the neural click model (student) can actively query the relevance model (teacher) for labels. This type of iterative supervised learning is known as active learning <cit.>.
§.§ Fast Ads Retrieval
In Baidu's sponsored search engine, we have been using the deep neural networks (i.e., user query DNN and ad DNN) illustrated by Figure <ref> to acquire both the embeddings of queries and ads, respectively. Given a query embedding, Mobius must retrieve the most relevant and the highest CPM ads from billions of ad candidates as stated in Eq. (<ref>). Of course, it is unpractical to calculate it exhaustively for each query although the brute-force search can theoretically discover all the ads (i.e., 100% ad recall) we are looking for.
The online services often have restricted latency constraints and the ad retrieval must be done in a short period. Thus, we exploit approximate nearest neighbor (ANN) search techniques to speed up the retrieval process, as shown by Figure <ref>.
§.§.§ ANN Search
As shown in Figure <ref>, the mapping function combines user vectors and ad vectors by cosine similarity, and the cosine values then go through a softmax layer to produce the final CTR. In this way, the cosine value and the CTR are monotonically correlated. Once the model is learned, it is clear whether they are positively or negatively correlated. If the correlation is negative, we can easily make it positive by negating the ad vectors. Thus, we reduce the CTR ranking problem to a cosine ranking problem, which is a typical ANN search setting.
The goal of approximate nearest neighbor (ANN) search is to retrieve, for a given query object, the “most similar” set of objects from a large corpus, by only scanning a small fraction of objects in the corpus. This is a fundamental problem and has been actively studied since the early days in computer science <cit.>. Typically, popular algorithms for ANN have been based on the idea of space-partitioning, including tree-based methods <cit.>, random hashing methods <cit.>, quantization based approaches <cit.>, random partition tree methods <cit.>, etc. For this particular problem (which deals with dense and relatively short vectors), we have found that random partition tree methods are fairly effective. There is a known implementation of random partition tree methods called “ANNOY”, among other variants <cit.>.
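For illustration, the snippet below indexes a set of ad vectors with the ANNOY library mentioned above and retrieves the top candidates for a query by angular (cosine) distance; the vector dimension, tree count, and randomly generated embeddings are placeholders, not our production settings.

import random
from annoy import AnnoyIndex

dim = 32
index = AnnoyIndex(dim, "angular")  # angular distance tracks cosine similarity
ad_vectors = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(1000)]  # stand-in ad embeddings
for ad_id, vec in enumerate(ad_vectors):
    index.add_item(ad_id, vec)
index.build(50)  # 50 random partition trees
query_vec = [random.gauss(0, 1) for _ in range(dim)]
top_ads = index.get_nns_by_vector(query_vec, 100)  # ids of the 100 nearest ads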
§.§.§ Maximum Inner Product Search (MIPS)
In the above solution, the business-related weight information is considered after the user vector and ad vector matching. In practice, this weight is vital in ads ranking. To take into account the weight information earlier in the ranking, we formalize the fast ranking process by a weighted cosine problem as follows:
cos(x, y) × w = (x^⊤ y × w)/(‖x‖‖y‖) = (x/‖x‖)^⊤ (y × w/‖y‖),
where w is the business-related weight, x is the user-query embedding and y is the ad vector. Note that the weighted cosine poses an inner product search problem, often referred to as Maximum Inner Product Search (MIPS) <cit.>. In this line of work, multiple frameworks can be applied for fast inner product search <cit.>.
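In other words, the business weight can be folded into the ad vectors offline, turning weighted-cosine ranking into a plain inner-product search. The brute-force NumPy sketch below shows this reduction purely for illustration; the deployed system relies on the MIPS frameworks cited above rather than exhaustive scoring.

import numpy as np

def build_index(ad_vectors, bids):
    # Pre-scale each ad vector to y * w / ||y||, following the identity above.
    ads = np.asarray(ad_vectors, dtype=np.float64)
    w = np.asarray(bids, dtype=np.float64)
    norms = np.linalg.norm(ads, axis=1, keepdims=True)
    return ads * w[:, None] / norms

def retrieve(index, query_vec, k=100):
    q = np.asarray(query_vec, dtype=np.float64)
    q = q / np.linalg.norm(q)
    scores = index @ q              # inner products = business-weighted cosines
    return np.argsort(-scores)[:k]  # indices of the k highest-scoring ads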
§.§.§ Vector Compression
Storing a high-dimensional floating-point feature vector for each of billions of ads takes a large amount of disk space and poses even more problems if these features need to be in memory for fast ranking. A general solution is compressing floating-point feature vectors into random binary (or integer) hash codes <cit.>, or quantized codes <cit.>. The compression process may reduce the retrieval recall to an extent but it may bring significant storage benefits. For the current implementation, we adopted a quantization based method like K-Means to cluster our index vectors, rather than ranking all ad vectors in the index. When a query comes, we first find the cluster that the query vector is assigned to and fetch the ads that belong to the same cluster from the index. The idea of product quantization (PQ) <cit.> goes one more step further to split vectors into several subvectors and to cluster each split separately. In our CTR model, as mentioned in Section <ref>, we split both query embeddings and ad embeddings into three subvectors. Then each vector can be assigned to a triplet of cluster centroids. For example, if we choose 10^3 centroids for each group of subvectors,
10^9 possible cluster centroids can be exploited which is adequate for a billion-scale multi-index <cit.> for ads. In Mobius-V1, we employ a variant algorithm called Optimized Product Quantization (OPQ) <cit.>.
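A bare-bones product-quantization sketch in the spirit of the scheme above is shown below; it uses plain k-means from scikit-learn rather than the OPQ variant that is actually deployed, and the 3 × 32 split and centroid count simply mirror the description of our embeddings.

import numpy as np
from sklearn.cluster import KMeans

def train_pq(ad_vectors, n_centroids=1000, n_splits=3):
    # Split 96-d ad vectors into three 32-d subvectors and cluster each split separately.
    splits = np.split(np.asarray(ad_vectors), n_splits, axis=1)
    codebooks = [KMeans(n_clusters=n_centroids, n_init=4).fit(s) for s in splits]
    codes = np.stack([cb.labels_ for cb in codebooks], axis=1)  # one centroid triplet per ad
    return codebooks, codes

def assign_query(codebooks, query_vec):
    # The cell triplet a query falls into; candidate ads are those sharing this triplet.
    parts = np.split(np.asarray(query_vec).reshape(1, -1), len(codebooks), axis=1)
    return tuple(int(cb.predict(p)[0]) for cb, p in zip(codebooks, parts))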
§ EXPERIMENTS
We conducted thorough experiments on Mobius-V1 before integrating it into Baidu's sponsored search engine. Specifically, we first need to conduct the offline evaluation of our CTR prediction model and the new approach on ad indexing. We need to make sure that our CTR model with the updated method on retrieving ads can discover more relevant ads with higher CPM. Then we tried to deploy it online to process a proportion of the query flow in Baidu Search. After Mobius-V1 had passed both the offline evaluation and the online A/B test, we launched it on multiple platforms to monitor the statistics of CPM, CTR and ACP (i.e., average click price).
§.§ Offline Evaluation
We loaded the search logs to collect the click/unclick history and built a training set which contains 80 billion samples. We also used the search log to construct the test set which has 10 billion records on ad click/unclick history. We compare the effectiveness of our actively learned CTR model with two baseline approaches. One method is the 2-class CTR model adopted by the original ranking layer which was trained solely by the click history without using any augmented data. The other approach is a 3-class CTR model trained by the randomly augmented data without being judged by the relevance model (teacher). As shown by Table <ref>, our model can maintain a comparable AUC with the original ranking model but significantly improves the relevance model score (from 0.312 to 0.575) measured by our relevance model. In other words, the low-relevance but high-CPM query-ad pairs are successfully recognized as the bad cases by our new CTR model in Mobius-V1.
Moreover, we delivered the top 100,000 query-ad pairs with the highest CTR predicted by each approach to the Crowdsourcing Team in Baidu, so that human experts could manually grade the query-ad relevance on a scale from 0 to 4 (0: no relevance, 4: quite relevant). The report of subjective opinions also demonstrates that our CTR model in Mobius-V1 performs well on discovering relevant query-ad pairs. In addition, we used the same set of queries to retrieve ads from two ad indexing systems powered by random partition trees (ANN+Re-Rank) and OPQ (Compressed Code+MIPS), respectively. Table <ref> shows that OPQ increases the ad coverage rate by 33.2%.
§.§ Online A/B Testing
The online A/B testing was conducted between two different ad retrieval strategies employed by Mobius-V1 from the perspectives of average response time and memory usage. Table <ref> shows that OPQ can provide much lower latency than random partition tree methods and reduce the average response time by 48ms/query. Furthermore, we examined the average response time of the top 3% high-CPM ads which have greater business value but require more computing resources. It shows that OPQ cuts down the query latency by 75% (from 120ms to 30ms) and substantially saves memory consumption.
§.§ System Launching
After Mobius-V1 had successfully passed both the offline evaluation and the online A/B test, we decided to launch it on multiple platforms in and outside Baidu. These platforms include the Baidu App on mobile phones, Baidu Search on PCs, and many other affiliated websites/apps that our sponsored search engine serves. Table <ref> shows the statistics on CPM, CTR, and ACP according to our 7-day monitor on the entire online traffic. CPM is the primary metric to evaluate the performance of a sponsored search engine. Compared with the previous system, Mobius-V1 leads to a major improvement of CPM by 3.8% on the Baidu App and 3.5% on the Baidu Search, which are the main portals of our sponsored search engine.
§ RELATED WORK
Our work on Mobius, which is towards the next-generation query-ad matching system in Baidu's sponsored search engine for commercial use, involves the research on query-ad matching and click-through rate (CTR) prediction.
§.§ Query-Ad Matching
Query-ad matching <cit.> is an extensively studied task which aims to retrieve advertisements that are not only the same with but also semantically similar to the given query (e.g., the query “U.S. tourist visa” and the ads about “travel agencies” displayed in Figure <ref>). As queries are commonly short texts, this issue has been mostly addressed by the techniques of query expansion <cit.>, query rewriting <cit.> and semantic matching <cit.>. Besides that we can leverage different NLP tools to directly compute the similarity between queries and textual ads, the semantic relationship between queries and ads can also be captured via learning from ad impressions. DSSM <cit.> is a well-known learning-to-match paradigm which leverages a deep neural architecture to capture query intent and to improve the quality of the learned semantic match given the click information.
§.§ CTR Prediction
CTR prediction <cit.> is another core task in sponsored search, as it directly influences some business indicators such as CPM. It focuses on predicting the probability that an ad would be clicked if shown as a response to a submitted query. Conventional approaches on CTR prediction preferred handcrafted features of ad impressions obtained from historical click data by Bayesian <cit.> or feature selection methods <cit.>. Along with the recent emergence of Deep Learning <cit.>, many approaches <cit.> for CTR prediction utilize various deep neural nets to primarily alleviate issues of creating and maintaining handcrafted features by learning them automatically from the raw queries and textual ads. Baidu Search Ads (“Phoenix Nest”) has been successfully using ultra-high-dimensional and ultra-large-scale deep neural networks for training CTR models since 2013.
§ CONCLUSIONS
In this paper, we introduce the Mobius project, which is the next generation of the query-ad matching system in Baidu's sponsored search engine, to you by answering the subsequent four questions:
* Q: Motivation — why do we propose the Mobius project?
A: We used to adopt a three-layer funnel-shaped structure to screen and sort hundreds of ads for display from billions of ad candidates. However, the separation/distinction between matching and ranking objectives leads to a lower commercial return. To address this issue, we set up Mobius-V1 which is our first attempt to make the matching layer take business impact measures (such as CPM) into account instead of simply predicting CTR for billions of query-ad pairs.
* Q: Challenges — what challenges have we encountered while building Mobius-V1?
A: The first problem is the insufficient click history for training the neural click model which is expected to have the generalization ability on the long-tail queries and ads. As the original neural click model employed by the ranking layer was trained by high-frequency ads and queries, it tends to estimate a query-ad pair at a higher CTR once either a high-frequency ad or a high-frequency query appears, even though they have no relevance at all. Another problem is the low retrieval efficiency and high memory consumption due to the increasing number of queries and ad candidates that Mobius has to handle.
* Q: Solutions — how do we design and implement Mobius to address those challenges?
A: To overcome the issue of insufficiency of click history, we design a “teacher-student” framework inspired by active learning to augment the training data. Specifically, an offline data generator is responsible for constructing synthetic query-ad pairs given billions of user queries and ad candidates. These query-ad pairs are constantly fed into the teacher agent which is derived from the original matching layer and is good at measuring the semantic relevance of a query-ad pair. The teacher agent can help detect the bad cases (i.e., with higher CTR but lower relevance) as the augmented data from the generated query-ad pairs. Our neural click model in Mobius-V1, as a student, is taught by the additional bad cases to improve the ability of generalization. To save the computing resources and satisfy the requirement of low response latency, we tested a variety of space partitioning algorithm for the approximate nearest neighbor (ANN) search and we have found that for our datasets, OPQ <cit.> is able to achieve good performance for indexing and retrieving billions of ads more efficiently.
* Q: Feedbacks — how does Mobius-V1 perform in Baidu's sponsored search engine?
A: We have already deployed Mobius-V1 in Baidu's sponsored search engine The results from both online and offline experiments demonstrate that this new matching system increases CPM by 3.8% and promotes ad coverage by 33.2%.
§ FUTURE WORK
Since 2013, Baidu Search Ads (a.k.a. Phoenix Nest) has successfully deployed ultra-large-scale deep neural networks for training CTR models. To move beyond the CTR model, Mobius has been established as an innovative and forward-looking project. The idea of unifying the objectives of optimizing the user experience and business target also inspires other featured products such as Feeds.
For future work, many potential directions can be explored. For example, we expect to be able to bring more business targets such as ROI (return on investment), as additional learning objectives into the matching layer so that we can discover more business-friendly ads. Along with more optimization objectives for billions of candidate ads and queries, the computational complexity will significantly increase. There ought to be a trade-off between the effectiveness and efficiency of our sponsored search engine given the requirement of lower response latency and the restraints of computing resources.
The crucial step in Mobius project is the fast ads retrieval task via approximate near neighbor search (ANN). The current system has used the cosine similarity to approximate the CTR, based on their monotonic correlation. If the final layer is more complicated, it will be problematic to rank by cosine (or weighted cosine). Searching by complicated measures has been studied, for example <cit.>, which could be adopted by future versions of Mobius.
Another promising direction is to adopt a GPU-based system for fast ANN, which has been shown highly effective for generic ANN tasks <cit.>.
We are deeply grateful to the contributions of many colleagues from Baidu. A few names are Lin Liu, Yue Wang, Anlong Qi, Lian Zhao, Shaopeng Chen, Hanju Guan, and Shulong Tan; but there are certainly many more who have contributed to this large project.
ACM-Reference-Format
|
http://arxiv.org/abs/2409.02290v1 | 20240903205856 | Unsupervised Welding Defect Detection Using Audio And Video | [
"Georg Stemmer",
"Jose A. Lopez",
"Juan A. Del Hoyo Ontiveros",
"Arvind Raju",
"Tara Thimmanaik",
"Sovan Biswas"
] | cs.RO | [
"cs.RO",
"cs.CV",
"eess.IV"
] |
Unsupervised Welding Defect Detection Using Audio and Video
1]Georg Stemmer
[1]Jose A. Lopez, jose.a.lopez@intel.com
1]Juan A. Del Hoyo Ontiveros
1]Arvind Raju
1]Tara Thimmanaik
1]Sovan Biswas
*[1]Intel Corp., 2200 Mission College Blvd., Santa Clara, 95054, CA, USA
In this work we explore the application of AI to robotic welding. Robotic welding is a widely used technology in many industries, but robots currently do not have the capability to detect welding defects which get introduced due to various reasons in the welding process. We describe how deep-learning methods can be applied to detect weld defects in real-time by recording the welding process with microphones and a camera. Our findings are based on a large database with more than 4000 welding samples we collected which covers different weld types, materials and various defect categories. All deep learning models are trained in an unsupervised fashion because the space of possible defects is large and the defects in our data may contain biases. We demonstrate that a reliable real-time detection of most categories of weld defects is feasible both from audio and video, with improvements achieved by combining both modalities. Specifically, the multi-modal approach achieves an average Area-under-ROC-Curve (AUC) of 0.92 over all eleven defect types in our data. We conclude the paper with an analysis of the results by defect type and a discussion of future work.
§ INTRODUCTION
Robotic arc welding, i.e., the use of robots for automating the arc welding process, is a key manufacturing technology in many industries. As the quality of a weld depends on many factors, even a robot that repeats each step of the process perfectly will produce defective welds from time to time. The time at which a defect gets detected in the welding process has a direct impact on the cost associated with correcting the problem. Thus, to minimize defects and reduce correction costs, there is a growing interest in detecting anomalies in real time, i.e., during the welding process, rather than relying on post-weld defect detection. Ideally, a future intelligent manufacturing system would be able to adjust the welding robot’s parameters automatically even before a failure starts to occur in the welding process.
In this work, we investigate the use of deep learning models for automatic weld defect detection in real-time using a camera and microphones. Cameras can monitor the weld pool geometry and oscillations, which are known to be predictive of weld defects <cit.>. Cameras need to have line-of-sight to the weld pool, and this consequently limits the mounting options. Microphones, on the other hand, can capture audible disturbances in the welding process without line-of-sight. They have been shown to provide useful information about defects <cit.>.
We investigate in the experimental section for several defect types whether they can be better detected visually or acoustically, and how the two modalities perform in combination.
Current deep learning models require large amounts of training data to estimate their parameters. We address this issue by collecting more than 4000 samples of good and defective welds, which is, to best of our knowledge, significantly more than what has been reported for similar datasets in the literature. Still, we consider the number of samples too small to train a classifier that can distinguish each relevant defect type reliably, given the variation of the input signals that can be expected in a real application. Therefore we decided to address the weld defect detection problem with an unsupervised anomaly detection approach. Anomaly detection models are trained on good welds only: all defects in our dataset occur solely during evaluation and are unseen in training. While this makes the defect specific performance more difficult to tune, we believe that the resulting performance is more representative of a real application.
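To make the unsupervised setup concrete, the sketch below shows a minimal reconstruction-based anomaly scorer of the kind commonly used in this setting: it is trained only on (log-mel) audio frames of good welds and scores a frame by its reconstruction error. This is purely an illustration of the paradigm, not the model used in our experiments; the feature and layer sizes are placeholders.

import torch
import torch.nn as nn

class SpectrogramAutoencoder(nn.Module):
    """Reconstruction-based anomaly scorer trained on good-weld audio frames only."""

    def __init__(self, n_mels=128, bottleneck=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_mels, 64), nn.ReLU(), nn.Linear(64, bottleneck))
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 64), nn.ReLU(), nn.Linear(64, n_mels))

    def forward(self, frames):  # frames: (batch, n_mels) log-mel spectrogram frames
        return self.decoder(self.encoder(frames))

    def anomaly_score(self, frames):
        with torch.no_grad():
            recon = self.forward(frames)
        return ((recon - frames) ** 2).mean(dim=-1)  # high error suggests a defect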
Of course, we are not the first interested in bringing the benefits of AI to established spaces like manufacturing, and welding in particular. Companies have produced AI-based anomaly detection solutions in recent years <cit.>, and numerous authors have explored the use of modern data-driven algorithms to improve weld defect detection <cit.>, which can be costly to remediate <cit.>. In <cit.>, the authors used sequence tagging and logistic regression to detect welding defects. Mohanasundari et al. <cit.> used post-weld images to classify defects. Buongiorno et al. <cit.> leveraged thermographic image sequences from an infrared camera to detect defects.
Our work is also related to predictive maintenance solutions, which tend to use vibration or acoustic-emission sensors. Acoustic emission sensors have even been used successfully to monitor weld quality <cit.> as well. There is good reason for using this type of sensors for machine condition monitoring, as 90% of rotating machinery uses rolling-element bearings, which tend to be points of failure <cit.>. On the other hand, welding is a much more complex process. Therefore we expect that microphones and cameras are better suited for detecting weld defects than acoustic emission or vibration sensors.
The main contributions of this work are the following: We collect a large multi-modal dataset of samples of robotic arc welding in a real industrial environment. It covers different weld types, various welding parameter configurations, and steel types. The size of the dataset allows us to train deep ML models, as opposed to shallow or analytical models often described in the literature (e.g., <cit.>). With these deep ML models we are able to demonstrate that camera and microphone are adequate sensors for real-time weld defect detection. Our unsupervised approach ensures that the model is not biased to defect characteristics which are specific for our data collection setup and allows us to compare results between different defect types and and modalities. Finally, we demonstrate that defect detection performance can be improved using a multi-modal combination of both sensor types.
In the next sections we provide a comprehensive description of how the dataset has been collected. We introduce our experimental setup, quality metrics and ML models. The performance of unsupervised algorithms trained in single and multi-modal fashion on this dataset is evaluated experimentally and we present our conclusions.
§ DATA SET
The goal of our data collection was to record enough samples for each of the most important weld categories to make statistically valid comparisons across modalities and weld defect types. We collaborated with a supplier that has access to automotive factories to conduct the data acquisition in a real factory environment. The welds were generated using a 6-axis arc welding robot using two steel types and thicknesses. The steel types were selected to be often-used varieties for automotive applications in India. The first type is known as “FE410” and the second “BSK46” type has higher carbon content and is used for higher-strength applications. The thicknesses used were 7mm for most of the samples and 3mm for specific defects that could not be efficiently induced using the 7mm steel, like burnthrough.
§.§ Recording Setup, Data Collection Procedure, and Limitations
The data collection station comprised an AII-V6 6-axis arc welding robot <cit.>, an Intel i7-based workstation and camera <cit.>, two high-bandwidth microphones <cit.>, and an audio interface <cit.>. The microphones were selected to enable studies on the observability of weld defects at higher frequencies. Moreover, audio samples were recorded using a 192 kHz sampling rate and saved in lossless FLAC format <cit.>. The video samples were recorded at a nominal 30 FPS and saved in AVI format. Fig. <ref> shows the KML camera and computer. The camera was attached to the welder arm, about 200mm from the torch. The microphones were attached to the work table, about 300mm from the welder arm motion axis.
To generate weld defects, the supplier contracted a welding expert who supervised the initial configuration of the welding robot to produce the desired weld defects. The welding expert did not perform any post-weld validation or labeling. Thus, the dataset contains some amount of label noise. Moreover, all sensors were triggered to start recording at the same time, but there is some variation in the actual time it takes for the individual device to respond. To align the different modalities, which is required for the multi-modal experiments, the weld start and end times were identified for each modality by inspecting the audio and illumination changes at the start and end of welding. The audio data, which was collected through a dedicated audio interface with a low-latency driver <cit.>, was taken as the ground-truth source for determining the welding duration. In this way, it was determined that the actual recorded frames-per-second (FPS) of the video varies from the expected 30 FPS. It is worth mentioning that this post-collection alignment would not be needed if the robot command signals were readily accessible.
§.§ Welding Sample Distribution
The dataset contains weld samples for the 12 weld categories shown in Tab. <ref>. Each weld category has been recorded for different weld types, and, where applicable, for different materials. Tab. <ref> contains a brief summary of the dataset by weld type. A complete breakdown of the dataset is included in Appendix <ref>. Fig. <ref> in Appendix <ref> shows some examples of (post-weld) photos of welding samples.
§ EXPERIMENTAL SETUP
The experiments in this work focus on the following questions: (1) Can weld defects be reliably detected in real-time from audio and/or video recordings? (2) How does the detection accuracy depend on the defect type and modality? (3) What improvements can be expected from combining audio and video in a multi-modal system?
To answer those questions, we treat the weld defect problem as an anomaly detection problem, not a classification problem. That means, our models do not attempt to classify a sample into a weld category, but generate an anomaly score that increases with the likelihood of a defect. For a real use case a threshold has to be determined in advance. If the score of the model exceeds this threshold during welding, a defect will be detected and the supervisor of the robot will be notified. The choice of the threshold depends on the false positive and false negative rates of the model for the defect types that are relevant for a specific use case, the costs associated with a false detection of a defect, and the costs resulting from a missed defect. As we would like to evaluate the quality of our models independently of an application-specific threshold, we compare the models using the Area-under-ROC-Curve (AUC) metric which is scale- and threshold-independent <cit.>. AUC is defined as the area under the Receiver Operating Characteristic (ROC) curve for all possible values of the false positive rate (FPR):
AUC = ∫_x=0^1 ROC(x) dx
where x is the FPR and ROC(x) is the true positive rate (TPR) <cit.>.
When calculating FPR and TPR for a specific threshold on the validation and test sets, we have to take into account that we are targeting a real-time scenario, where the model produces an anomaly score at every time frame. On the other hand, each sample in our data set has just a single label denoting the weld category. For instance, a sample could be labeled as “porosity” but there is no indication at which time instance the defect occurs. Therefore, we aggregate all scores produced by the model for all frames of the sample by either taking their maximum or expected value.
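For concreteness, a minimal sketch of this evaluation step is shown below. It assumes per-frame anomaly scores and binary sample labels (defect vs. good weld) and uses scikit-learn's AUC implementation; variable names are illustrative and not taken from our codebase.

```python
# Aggregate per-frame anomaly scores into one score per weld sample,
# then compute the threshold-independent AUC over all samples.
import numpy as np
from sklearn.metrics import roc_auc_score

def aggregate_frame_scores(frame_scores, method="mean"):
    """Collapse the frame-wise scores of one sample into a single anomaly score."""
    frame_scores = np.asarray(frame_scores)
    if method == "mean":   # expected value over all frames
        return float(frame_scores.mean())
    if method == "max":    # the most anomalous frame decides
        return float(frame_scores.max())
    raise ValueError(f"unknown aggregation method: {method}")

def sample_level_auc(per_sample_frame_scores, labels, method="mean"):
    """labels: 1 for any defect category, 0 for a good weld."""
    scores = [aggregate_frame_scores(s, method) for s in per_sample_frame_scores]
    return roc_auc_score(labels, scores)
```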
In this work we employ simple but effective convolutional neural networks (CNN) for weld defect detection. This allows us to focus more on gaining insights and less on conducting large hyper-parameter optimizations and architecture searches. For audio, we used a 1D CNN auto-encoder <cit.>. For video, we applied the relatively simple 3D CNN from <cit.> provided by the MMAction2 library <cit.>[The “slowfast_r101_4x16x1_256e_kinetics400_rgb_20210218-d8b58813.pth” checkpoint from v0.15.0 was used <cit.>.]. All our work used the PyTorch tensor library for Python <cit.>.
For the experiments, the dataset is split into a training, validation, and test partition. The training data includes only good, i.e., normal-state, samples, while the defects are divided equally among the validation and test subsets.
The validation partition of the dataset is used for hyperparameter tuning. The best hyperparameter configuration on the validation partition is used to generate the test results from the test partition. The data split is summarized in Tab. <ref>. The same partitions were used for all experiments with both modalities.
§.§ Acoustic Anomaly Detection
For the audio experiments, the original sampling rate of 192 kHz was maintained and only one channel was used. The audio CNN auto-encoder architecture is shown in Table <ref>. A detailed description of the model's topology can be found in <cit.>: the key characteristic is that the bottleneck layer largely preserves the time dimension. This feature has been motivated by the work of Agrawal et al. <cit.>. The model in Tab. <ref> is much smaller than the one described in <cit.> because the convolutions are not gated.
The model uses leaky-ReLU activations everywhere except for the final activation before the output layer, which is a PReLU. Overall, the model has 31,670,306 trainable parameters.
The latency of this model equals the hop length used to train the model. For example, for a hop length of 8192 and 192 kHz audio, the latency is about 42.7 ms. The model has 5 encoding layers with kernel size 3. Each encoding layer decreases the time dimension by 2, therefore inputs must have more than 10 frames. This means the input buffer must satisfy Eq. <ref>, with the FFT window an integer multiple of the hop length. In the foregoing example, the input buffer needs to hold 12 × 8192 samples.
buffer size = hop length × (10 + FFT window / hop length)
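As a quick numeric check of this relation, using the settings quoted above as an assumed example, the buffer size and latency can be computed as follows:

```python
SAMPLE_RATE = 192_000          # Hz
hop_length = 8192              # samples
fft_window = 2 * hop_length    # hop length fixed at 50% of the FFT window

buffer_size = hop_length * (10 + fft_window // hop_length)  # equation above
latency_ms = 1000 * hop_length / SAMPLE_RATE

print(buffer_size)             # 98304 samples, i.e. 12 x 8192
print(round(latency_ms, 1))    # 42.7 ms
```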
§.§ Visual Anomaly Detection
Our approach to visual weld defect detection is based on a two-stage process. The first stage encodes each video frame into a fixed dimensional feature vector. For this, a window that is 64 frames long is shifted frame-by-frame over the whole video. 64 frames correspond to roughly 2 seconds at a frame rate of 30 FPS. As the window is centered around the frame of interest, the 64-frame window size leads to defect detection latency of around one second in a real-time scenario. This is a larger latency than for the acoustic anomaly detection, which is based on much smaller windows as will be described in Sec. <ref>. Still, we believe that it should be acceptable for many use cases. Each window is encoded into a 2304-dimensional feature vector using the pre-trained and fixed Slowfast <cit.> model.
In the second stage, an auto-encoder model consisting of an encoder, a bottleneck, and a decoder is used for generating anomaly scores from the input feature vector. The encoder maps the 2304-dimensional input feature vector to a 64-dimensional latent space by passing it through multiple linear layers along with ReLU activation and dropout in sequence. Each linear layer reduces the dimension by 1/2, thus creating a bottleneck with a 64-dimensional latent vector. Later, the decoder uses the latent space embedding to reconstruct the original feature of the frame. The decoder consists of multiple linear layers with ReLU activation and dropout as well. Each linear layer of the decoder scales the dimension by 2 until the original 2304 dimension is obtained. All dropout layers zero activations with probability 0.5. The architecture details of the auto-encoder are provided in Tab. <ref>.
The auto-encoder model is trained on welding videos minimizing the model's anomaly score, which is defined as the mean-squared-error (MSE) between the input feature vector and the output of the decoder. Note that the Slowfast model from the first stage is fixed, and no back-propagation is applied to the first stage during training.
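A minimal PyTorch sketch of this second-stage auto-encoder and its training objective is given below. The exact layer widths are an assumption: the text specifies halving layers down to a 64-dimensional bottleneck, but 2304 is not an exact power-of-two multiple of 64, so the final encoder layer here projects from 144 to 64.

```python
import torch
import torch.nn as nn

def mlp(dims, p_drop=0.5):
    # linear -> ReLU -> dropout blocks, as described in the text
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(p_drop)]
    return nn.Sequential(*layers)

class FeatureAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        dims = [2304, 1152, 576, 288, 144, 64]          # assumed widths
        self.encoder = mlp(dims)
        self.decoder = mlp(list(reversed(dims)))

    def forward(self, x):                               # x: (batch, 2304)
        return self.decoder(self.encoder(x))

model = FeatureAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

def train_step(feats):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(feats), feats)  # reconstruction error
    loss.backward()
    optimizer.step()
    return loss.item()

def anomaly_score(feats):
    with torch.no_grad():
        return ((model(feats) - feats) ** 2).mean(dim=1)  # per-window MSE
```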
§.§ Multi-modal Anomaly Detection
We used a late-fusion approach to combine the anomaly scores of the two modalities. Since anomaly scores generated by different models generally have different scales, we first standardized the scores using the mean and standard deviation of the anomaly scores over the training set. Next, we identified the optimal convex combination of the audio and video anomaly scores by running grid search over convex combinations on the validation data. The best weighting is applied to compute the anomaly scores on the test set and to produce the final results.
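The following sketch illustrates this late-fusion procedure. The standardization statistics and the grid-search step size follow the description above; variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def standardize(scores, train_mean, train_std):
    # statistics are computed on the training-set scores of the same modality
    return (np.asarray(scores) - train_mean) / train_std

def fuse(audio_z, video_z, w_audio):
    return w_audio * audio_z + (1.0 - w_audio) * video_z   # convex combination

def best_audio_weight(audio_val_z, video_val_z, val_labels, step=0.01):
    weights = np.arange(0.0, 1.0 + step, step)
    aucs = [roc_auc_score(val_labels, fuse(audio_val_z, video_val_z, w))
            for w in weights]
    return float(weights[int(np.argmax(aucs))])
```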
§ RESULTS
§.§ Acoustic Anomaly Detection
For hyperparameter tuning we trained separate auto-encoder models with different parameter settings on the training partition and evaluated their AUC on the validation set. More specifically, a grid search was performed over FFT window sizes 4096, 16384, 32768, and 65536 and bottleneck dimensions 16, 32, 48, and 64. The hop length was fixed at 50% of the FFT window size. We remind the reader that for a given analysis (time) window, the corresponding FFT window length is proportional to the sampling rate. Therefore, the FFT window sizes included in the grid search range from approximately 21 ms to 341 ms for the sampling rate in our data set, which is 192 kHz. The latency of these models ranges from approximately 11 ms to 171 ms. All models were trained for 50 epochs using a one-cycle learning schedule with a peak learning rate of 1× 10^-4 for the Adam optimizer <cit.>. The MSE loss was minimized during training.
Tab. <ref> shows the FFT window search results. These experiments determined that an FFT window of 16384 with a bottleneck dimension of 48 produced the best performance on the validation set. To obtain the AUC scores we explored several frame-wise anomaly score aggregation methods: expected value, moving average (MA) smoothing, and taking the max. We found that taking the average worked best, followed closely by MA smoothing, with taking the max last. We only show the expected value scores for space considerations.
On the test set, the best validation model obtained an AUC of 0.8460. This is not much different from the AUC on the validation set which indicates that the model does not overfit.
To gain insights into the difficulty of detecting particular defects, we also determined the validation and test AUC by defect type – the results are shown in Tab. <ref> and Tab. <ref>, respectively. Again, the results indicate a high agreement between validation and test set AUCs. Note that these results are not directly actionable because in most use cases the defect type is not known a priori. Breaking-out performance by defect type is, however, useful for estimating performance gains that could be obtained when fusing audio with video information.
In cases where an error (or error cost) distribution is known a priori, one may select the optimal hyperparameters differently. Additionally, latency or memory requirements can affect the FFT window and hop length selection as well.
§.§ Visual Anomaly Detection
For visual defect detection, we first generate feature vectors of dimension 2304 using the pre-trained Slowfast model for each sample in all data partitions. In a second step we train the auto-encoder model using the Adam optimizer with a learning rate of 0.0005 and MSE loss function for up to 1000 epochs on the feature vectors. For the model that performs best on the validation set, we compare different methods to aggregate the scores:
simply taking the maximum score (“Max w/o smoothing”), smoothing the scores by averaging them within a one-second window (“Max over 1s-MA”), or smoothing the scores by averaging them within the full two-second window (“Max over 2s-MA”). Table <ref> shows the AUC for different aggregation methods on the test set. The best validation model obtains an overall AUC of 0.9052 and 0.8977 on the validation and test data, respectively.
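A minimal sketch of the smoothing-based aggregation variants is given below; it assumes the nominal 30 FPS, so roughly 30 and 60 frames correspond to the one- and two-second windows.

```python
import numpy as np

def max_over_moving_average(frame_scores, window_frames):
    # smooth the frame-wise scores with a moving average, then take the maximum
    kernel = np.ones(window_frames) / window_frames
    smoothed = np.convolve(np.asarray(frame_scores), kernel, mode="valid")
    return float(smoothed.max())

# "Max w/o smoothing":  max(frame_scores)
# "Max over 1s-MA":     max_over_moving_average(frame_scores, 30)
# "Max over 2s-MA":     max_over_moving_average(frame_scores, 60)
```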
§.§ Multi-modal Anomaly Detection
The input scores for the late fusion combination of the video and audio modalities were created using the best performing models for each modality, i.e., the audio model using FFT window size 16384 and bottleneck dimension of 48, and the video model with 2s-MA smoothing. We determined the weighting for the modalities on the validation data using the grid search described in Sec. <ref> with step size 0.01.
This way, we found the best weighting to be 0.37 and 0.63 for the audio and video scores, respectively. These weightings make sense given the stronger performance of the video modality.
Using the optimal weighting we obtained our overall best test AUC of 0.9178. From Tab. <ref>, it can be seen that the overall audio validation and test AUCs improved by 9.8% and 8.5%, respectively, and the overall video validation and test AUCs improved by 2.5% and 2.2%, respectively. The defect-specific scores improved as well. For audio, the AUC metric improved for 8 of 11 defect categories. For video, the AUC improved for 7 of 11 categories. It is worth mentioning that the weakest performing defect categories all improved. In particular, audio had four defect categories with AUCs in the 0.7s and all benefited from fusing. For video, two defect categories had AUCs in the 0.7s and both benefited from fusing. See the entries of Tab. <ref> shown in bold.
Fig. <ref> shows the detection error curves for the multi-modal predictions, on the test data. As can be seen, the FPR and false-negative-rate (FNR) intersect at about 17%.
§ CONCLUSIONS AND FUTURE WORK
In this work, we explored unsupervised weld defect detection using audio, video, and their combination. We demonstrated that, using a deep-learning based approach, both modalities allow reliable real-time detection of the most important defect types. The best audio model had a latency of 42.7 ms and is best suited for detecting porosity, overlap, and crater cracks. It has the lowest AUC scores for burnthrough, excessive penetration, spatter, and warping. Video generally shows better AUC than audio, but this comes at the price of a larger overall model size and a higher latency. Using video, the best detection performance is achieved for porosity, excessive convexity, and spatter. We observe the lowest performance for lack of fusion and warping. We demonstrated that a combined approach using late fusion of normalized scores for both modalities offers improvements. More specifically, AUC scores for all defect types average 0.92, with the lowest score approximately 0.80. The worst-performing categories for both modalities all improved.
Future work will investigate more elaborate ways to combine the two modalities, e.g., a joint model that directly incorporates input from all sensors. Furthermore, we believe that the biggest limitation of our dataset is that it has been collected in a supervised way, i.e., the robot has been configured by purpose to generate defects as we required a sufficient number of defect samples for our experiments. This approach creates a potential mismatch to a real use case where defects occur randomly and rarely. We expect that defects will show up less pronounced than in our dataset. Therefore, we plan to record a real production process of a collaboration partner and to label the defects as they occur. This will allow us to validate our proposed methods in a more realistic setting.
§ COMPLIANCE WITH ETHICAL STANDARDS
This research was funded by the Intel Corporation. We are unaware of any conflicts of interest. The welding data used in this work does not contain information that could be used to identify an individual. All work was done in a manner consistent with expected ethical standards.
§ SAMPLE DISTRIBUTIONS
This appendix contains a complete dataset description by defect category: Tab. <ref> shows the distribution of the samples for the normal-state, i.e., good weld category, by weld type and material. Tab. <ref> to <ref> show the sample distribution for the remaining weld categories.
§ SAMPLE IMAGES
This Appendix shows in Fig. <ref> examples of photos taken of samples after the welding has been completed.
|
http://arxiv.org/abs/2409.02244v1 | 20240903191913 | Therapy as an NLP Task: Psychologists' Comparison of LLMs and Human Peers in CBT | ["Zainab Iftikhar", "Sean Ransom", "Amy Xiao", "Jeff Huang"] | cs.HC | ["cs.HC", "cs.CL", "I.2.7; J.4"] |
Therapy as an NLP Task: Psychologists' Comparison of LLMs and Human Peers in CBT
[Author ORCIDs and affiliations: 0000-0002-4086-436X; Department of Computer Science, Brown University; Department of Psychiatry, Louisiana State University Health Sciences; 0009-0003-2303-8204; Department of Computer Science, Brown University; 0000-0002-3453-5666; Department of Computer Science, Brown University]
§ ABSTRACT
Wider access to therapeutic care is one of the biggest challenges in mental health treatment. Due to institutional barriers, some people seeking mental health support have turned to large language models (LLMs) for personalized therapy, even though these models are largely unsanctioned and untested. We investigate the potential and limitations of using LLMs as providers of evidence-based therapy by using mixed methods clinical metrics. Using HELPERT, a prompt run on a large language model using the same process and training as a comparative group of peer counselors, we replicated publicly accessible mental health conversations rooted in Cognitive Behavioral Therapy (CBT) to compare session dynamics and counselor's CBT-based behaviors between original peer support sessions and their reconstructed HELPERT sessions. Two licensed, CBT-trained clinical psychologists evaluated the sessions using the Cognitive Therapy Rating Scale and provided qualitative feedback. Our findings show that the peer sessions are characterized by empathy, small talk, therapeutic alliance, and shared experiences but often exhibit therapist drift. Conversely, HELPERT reconstructed sessions exhibit minimal therapist drift and higher adherence to CBT methods but display a lack of collaboration, empathy, and cultural understanding. Through CTRS ratings and psychologists' feedback, we highlight the importance of human-AI collaboration for scalable mental health. Our work outlines the ethical implication of imparting human-like subjective qualities to LLMs in therapeutic settings, particularly the risk of deceptive empathy, which may lead to unrealistic patient expectations and potential harm.
[500]Human-centered computing Empirical studies in HCI
[500]Computing methodologies Discourse, dialogue and pragmatics
[300]Computing methodologies Natural language generation
[Teaser figure: Example public session by a peer counselor (left) reproduced using the CBT-prompted large language model-based system called HELPERT (right).]
§ INTRODUCTION
The two sample excerpts above (Figure <ref>) are taken from therapeutic sessions conducted by a human peer counselor (left) trained in Cognitive Behavioral Therapy (CBT) techniques and by a CBT-prompted Large Language Model (LLM) <cit.> (right) trained through the same process with the same materials and operational team. A distressed user seeking support describes their situation, thoughts, feelings, and behavior; in response, a counselor, human or LLM, uses CBT techniques to provide support. The results were hour-long sessions guided by the same principles and goals, except that the LLM version is a simulation of the responses that would have been given, based on pre-existing publicly available sessions.
From a decision-making system to a CBT-prompted peer <cit.>, LLMs have enabled some people wider access to counseling through completely automated means <cit.>. Due to their increased accessibility and constant availability <cit.>, LLMs, especially recent implementations like ChatGPT, are now used for more than just language generation; they are perceived as facilitating meaningful conversations <cit.>. One in four Americans prefer talking to an AI chatbot instead of a therapist, and of those who did, 80% claim it to be an effective alternative <cit.>. This preference is understandable since conventional therapy faces issues of inaccessibility, high costs, and complexity, leaving over half of U.S. adults with mental illness without adequate care <cit.>.
However, repurposing LLMs as therapeutic peers when it was originally intended for basic language generation seems counter-intuitive. Several challenges exist. First, LLMs are designed to predict the next possible sequence from a given text based on previously observed patterns in their training data that is largely devoid of fact-checking <cit.>. This encompasses traditional challenges of AI-mediated health care, including lack of high-quality training data <cit.>, low external validity and misinformation <cit.>, societal biases <cit.>, and the impact of AI on patient-clinician relationships <cit.>. Beyond data quality, there are additional risks of data breach and an individual's right to privacy <cit.>. These issues have high stakes in mental health, where the quality of a therapist's responses impacts treatment outcomes.
Despite concerns from interdisciplinary experts, an overwhelming user audience attests to LLM's effectiveness <cit.>. However, informal evaluation of these tools is subjective and contextual; a user in distress is unlikely to gauge the risk associated with the tools, considering they act as a band-aid for mental health support—accessible and free. The discrepancy has ignited debates between the recipients of LLM-based mental health and field experts. While current work has found that users prefer AI responses over humans <cit.>, these studies examine the models' responses to a single, isolated interaction (utterance level) <cit.>. However, these models lack long-term memory <cit.> and perform poorly in situations that demand sustained interactions, such as peer counseling.
The growing user acceptance of LLM-based counseling, combined with expert concerns, calls for objective investigation of the model's responses in sustained counseling sessions by clinical therapists. In this paper, we evaluate alternatives to traditional licensed CBT therapy: peer counselors and LLM-based therapeutic sessions. The LLM responses are generated through best-effort reconstruction from pre-existing publicly released sessions to avoid testing on human subjects with an experimental method that may cause harm. We used a CBT-based prompt, designed collaboratively by peer counselors and licensed therapists, to reconstruct pre-existing publicly available counseling sessions. We sourced the original sessions from Cheeseburger Therapy <cit.>, an online text-based peer counseling platform where the people providing the counseling are trained through a series of customized learning modules. Because the LLM prompt and peer counselors are both trained from the same materials and supervised by the same operational team, this controls for some of the many variables that can lead to differences between a human peer counselor and an LLM-based counselor.
As the Cheeseburger Therapy platform is entirely online and based on text-only communication, some of the historical peer counseling sessions are publicly available online. These conversations were made public after users provided written consent for them to be shared online. Consent was requested only after the session had ended. Note that the authors have been in communication and exchange information with the Cheeseburger Therapy team, but do not operate the service or have any ownership over it. The publicly available sessions were downloaded, and the peer counselors' responses were replaced by an LLM-generated response using HELPERT, a detailed prompt that asks the LLM to go through the same process as the peer counselors do.
The resulting sessions are cleaned and become part of a comparative dataset. Using a mixed method analysis of objective observer rating scales to assess competence in Cognitive Behavioral Therapy <cit.> and psychologist's annotation and feedback, two licensed, CBT-trained clinical psychologists conducted a blind analysis of the quality of care provided by peer counselors and HELPERT. By examining clinical metrics such as therapeutic alliance, collaboration, adherence to the method, and harm to participants, we measured CBT skill competence for both counselors.
We study the research question: How do humans and the implementation of an LLM counselor compare in their capability to provide evidence-based single-session CBT counseling, and what specific challenges are faced by each according to clinical psychologists trained in evaluating CBT sessions?
Both the HELPERT sessions and complete quantitative and qualitative evaluations of the peer counselor sessions and corresponding HELPERT sessions are released for other researchers to replicate and as a resource for comparing with human or LLM-based counseling sessions in the future [link to be included upon publication]. Our study contributes this dataset by integrating individual experiences and experts' opinions, to develop more equitable and fairer evaluation methods. We discuss how, instead of replacing one with the “other,” each counselor can complement the other's capabilities to provide alternative mental health care that is safe and effective for the user.
Complementing current work, our research makes the following novel contributions.
* Firstly, current work on LLMs in mental health examines user preferences for LLM versus human responses in isolated, one-off interactions, which neglects their behavior in sustained interactions. Since counseling is context-dependent, this comparative study evaluates these models in longer, continuous interactions using established CBT metrics in literature, with evaluations provided by clinical psychologists.
* By releasing CBT scores and comments for human peer counselors and LLMs, we provide a dataset designed by psychologists to identify the elements of effective versus ineffective support. This dataset can serve as valuable lessons for training human peers to offer better counseling and inform the future design of language models to ensure safe support, acknowledging the increasing interactions between humans and LLMs.
* Lastly, we draw attention to the implication of how alternative methods of peer counseling—whether provided by humans or LLMs—differ from traditional CBT and how this method of support is a complement instead of a replacement for therapy.
§ BACKGROUND & RELATED WORK
Alternative cost-effective interventions, including peer support platforms and AI-mediated health care, have become ubiquitous and accessible solutions to increase user access to care. This section outlines some of the prior and current work in scalable mental health.
§.§ Peer-based Interventions for Mental Health
The United States has an average of thirty psychologists per hundred thousand people <cit.>. This ratio is unlikely to improve by training additional professionals alone. Instead, new scalable approaches are emerging to expand access to care, including peer support platforms <cit.>. According to <cit.>, peer support is defined as:
A system of giving and receiving help founded on key principles of respect, shared responsibility, and mutual agreement on what is helpful
In the US, peer support groups, self-help organizations, and consumer-operated services are more than double the number of traditional, professional mental health organizations <cit.>. Initially, groups such as Alcoholics Anonymous (AA) <cit.>, InTheRooms.com (ITR) <cit.>, and GROW and eGrow <cit.> started as community-based organizations claiming that individuals with similar lived experiences can better relate to each other and offer more genuine understanding, empathy, and validation <cit.>. This shaped current peer support for mental health through digital innovations, including social, crowdsourced one-time interactions (a single response) <cit.> and research into unmoderated communities and social networks for support <cit.>. For instance, Koko was developed to crowd-source peer support interactions. The platform design was inspired by Panoply, a web-based peer support platform that was previously demonstrated to alleviate symptoms of depression. Other researchers have focused on how context-specific anonymity in online communities such as Reddit achieves social support through social media disclosure <cit.>.
The studies on peer-to-peer connections in digital support platforms and social media have found that these connections promote well-being and can potentially shape the future of mental health research <cit.>.
However, a challenge exists: Peer counselors often lack formal training in mental health interventions. Unlike trained professionals who receive extensive psycho-therapeutic training, peers connect with individuals through shared and lived experiences <cit.>. While this support can foster a sense of understanding and connection, the interactions can suffer from a lack of evidence-based treatment <cit.>.
Hence, there is a growing trend in HCI toward expanding effective peer support training via online platforms. For example, platforms like 7 Cups of Tea and Koko offer training in active listening techniques and cognitive reappraisal skills to help improve peer-to-peer interactions. The Cheeseburger Therapy website offers 15-20 hours of training in Cognitive Behavioral Therapy (CBT) techniques, focusing on active listening, reflective restatements, and cognitive restructuring to guide hour-long text-based conversations. Training peers has been found to be effective in existing research. For instance, prior work found that psycho-therapeutic training, like CBT, helps peers provide empathetic support <cit.>. Though effective, this reliance on 1:1 peer support circles back to the ongoing issue of access to mental health care: the limited availability of trained people to provide support <cit.>, leading researchers to look at automated means of care.
§.§ Scalable Mental Health: From Conversational Agents to Large Language Models
In response to the limited availability of trained care providers, prior work has focused on using machine learning to develop and evaluate automated and widely accessible alternatives to mental health care <cit.>.
A significant area of focus is the use of conversational agents for psychiatric care <cit.>, particularly the development of therapeutic chatbots <cit.> either as an agent that provides psychoeducational support or as a psychotherapist.
For example, Woebot, a text-based conversational agent, delivers content based on CBT techniques in a conversational format and has been used for self-managing depressive symptoms <cit.> and substance use disorders <cit.>. Other conversational agents include Shim, designed to deliver CBT intervention to improve well-being for a non-clinical population <cit.> and agents that could take on the role of a psychotherapist, delivering feedback to help clients evaluate and address negative thought patterns <cit.>.
However, conversational agents are rule-based, meaning they follow predefined scripts, which limits their ability to adapt to dynamic human behavior and tailor responses to individual needs <cit.>. This presented a challenge in deploying these agents in mental health care, as psychotherapy is patient-centric and relies heavily on personalized conversations for effective treatment, which is why LLMs, because of their personalized conversational fluidity, received massive attention in digital mental health.
Though LLMs lack genuine understanding and empathy, they are highly effective at generating tailored responses to user inputs in a near-conversational style. The ability to generate human-like language, combined with their user-friendly interfaces, allowed thousands of users to customize these models to cater to their specific needs without human intervention. For mental health support, this addressed the primary challenge of accessibility. LLMs are increasingly seen as approachable and helpful in providing therapeutic information and meaningful conversations, similar to how peer support platforms provide accessible counseling. Using LLMs as “trained peers” gained significant traction in online communities, particularly on platforms like Reddit <cit.> and Twitter, where users shared their experiences and CBT prompts to receive immediate “therapy.”
“People are not available at 4 am to help me with my overwhelming thoughts; ChatGPT is.” (r/ChatGPT)
Recognizing the potential of LLMs in providing support, researchers are developing LLM-driven applications for mental health, ranging from prompt design to treatment evaluation. One example is MindfulDiary, an LLM-driven app that helps psychiatric patients document daily experiences <cit.>. Another study focuses on fine-tuning LLMs for CBT techniques to support psychological health queries <cit.>. However, LLM-mediated psychotherapy has been criticized as premature, with studies suggesting these models have harmful limitations like racial and gender bias and ethical risks <cit.>. Other work has reviewed the practical challenges of deploying LLM-driven chatbots in health interventions by studying CareCall, a chatbot targeting social isolation. Despite recognizing various benefits, such as emotional support and workload reduction, their findings pointed to inherent complexity around stakeholder concerns <cit.>. In response to these concerns, researchers are formulating guidelines for the responsible use of LLMs in clinical settings, emphasizing the need for an interdisciplinary approach to minimize potential harm and enhance transparency <cit.>.
While current work has explored the risks and benefits of LLMs in this space, either through the framework of responsible AI or by interviewing individuals with lived experiences <cit.>, these studies lack an objective clinical perspective on the quality of care these models provide through therapy metrics rooted in psychotherapy literature.
§.§ Human-AI Collaboration: To Replace or to Augment?
With the rise of LLMs, there is increased interest in HCI in comparing the agent performing a task (AI) with its traditional counterpart, the human, with predictions that AI will outperform humans in many domains, even health practitioners <cit.>. Current work has contrasting outcomes when comparing AI and human counselors. For instance, one study surveyed public perceptions of AI-driven psychotherapy and found a significant inclination towards AI-based psychotherapy due to its confidentiality and accessibility, despite a profound trust in human psychotherapists in handling personal data. The authors found that users prefer AI-mediated therapy, especially text-only communication, since text attributes allow for selective self-presentation <cit.>. While some research demonstrates that conversational agents can produce more empathetic and higher-quality responses than human physicians <cit.>, other studies establish that users generally prefer empathy generated by their human peers over AI-assisted therapeutic support despite its perceived acceptability <cit.>. This aligns with previous work highlighting the gap between human and machine understanding of empathy in peer support sessions <cit.>. While people often feel deeply connected online, AI models rate these interactions low in empathy due to their focus on sentence structure over genuine emotional connection, implying that LLMs do not have a genuine understanding of mental health support and the critical role of humans <cit.>. These contrasting findings were synthesized in a study that used ChatGPT, an implementation of an LLM, to complement professional psychotherapy and as a first step for those hesitant to seek professional help. Through a series of use case studies, the author highlighted the tool's capability for accessible, immediate support but also its limitations for comprehensive care, reinforcing its role as a supplement rather than a substitute.
Researchers have also raised concerns about experimenting with LLMs in standalone with a vulnerable population. For instance, founders of a digital mental health company faced criticism for using LLMs in their services without explicitly informing their users, arguing that the nature of the test rendered it “exempt” from laws of informed consent. The approach was challenged by medical and technology experts, who questioned the experiment's ethics and the harms it could present <cit.>. Given the delicate nature of mental health care, deploying LLMs without a thorough understanding of the support they offer could be harmful <cit.>. Hence, in this study, we made an intentional trade-off. Instead of opting for an experiment that may cause harm, we used publicly available real-life session transcripts that has been previously conducted with human peer counselors and recreated these mental health conversations with a CBT-prompted LLM. We then collaborated with clinical psychologists with expertise in CBT-based therapy to evaluate LLMs' quality of “therapy”.
Disclaimer: This paper makes multiple intentional trade-offs. First, sessions are experimentally recreated instead of re-conducted to avoid a setup that could cause harm to vulnerable populations, which stands as a first step in understanding the challenges that LLMs can present in mental health support. Second, the term “therapy” is either avoided or placed in quotation marks because, unlike current work that refers to this support as therapy or LLMs as therapists, we argue that support rendered by an LLM is not therapy which is a clinical practice with legal licensing. In its best form, it can be considered as CBT-based peer counseling, which is why it is also evaluated against peer counseling sessions conducted by a trained human peer.
§ DATA & METHODS
This section provides an overview of the experiment design and metrics used to generate data to evaluate the effectiveness of HELPERT in providing CBT-based peer counseling. We introduce two primary datasets:
* HELPERT Dataset: 27 simulated CBT-based peer counseling sessions using HELPERT, a large language model prompted to simulate human-like interactions in therapeutic settings. Each session originally involved CBT-based text session dialogue between a trained peer counselor (helper) and an individual seeking support (thinker)
* Psychologist Evaluation Dataset: Quantitative CBT competency scores and qualitative feedback provided by clinical psychologists for original peer counseling sessions and their HELPERT counterparts
These datasets aim to benchmark HELPERT's performance against human counselors in providing structured mental health support. Both datasets generated as part of this study will be released to the public.
§.§ Helpert Dataset
Human-Mediated CBT Counseling Sessions:
Current research on human versus AI-mediated care often focuses on comparing one-time utterances written by each agent <cit.>. However, such comparisons often fail to consider the complete context necessary for evaluating care. Counseling is not just a one-time interaction (a reply to a user post) but a sustained dialogue between the care provider and the seeker. Hence, to account for this, we obtained a dataset of 27 text-based CBT counseling sessions shared on an online peer support platform (Table <ref>). These sessions were conducted by trained peer counselors and covered a range of therapeutic topics and user profiles. Each session was guided by peer counselors to support individuals through distressing events using cognitive behavioral techniques like active listening, open-ended questions, and cognitive restructuring to identify cognitive distortions, related feelings, and behaviors, and then guide them in creating new, helpful thoughts. This overarching process is shown in Figure <ref>. Sessions follow a balanced conversation dynamic between the user and the peer counselor, typically lasting one hour to reflect standard therapy sessions.
The “HELPERT” Prompt: The HELPERT prompt running on GPT-4 was used to reconstruct publicly available counseling sessions. The prompt is available on the platform's website (https://wickie.invisible.college/cheese/gpt2). To control as many factors as possible for comparison, the same team that managed the original sessions developed the HELPERT prompt, adhering to the same principles and goals from the same underlying training manual. The structured framework used for both processes was based on CBT and Nonviolent Communication techniques. This standardized approach provides an ideal scenario for effectively comparing human counselors with an LLM-based chatbot due to the systematic nature of CBT. The prompt was divided into seven phases to guide users through self-reflection and problem-solving, as shown in Figure <ref>. The prompt underwent rigorous internal testing. The team tested each version with simulated scenarios of their own life events to understand where the system drifted off-track from CBT and had issues in providing support. This iterative design involved clinical evaluation by a licensed therapist and peer counselors trained in CBT techniques. The approach mirrored the original training of the peer counselors, which was developed through iterations and self-critical feedback sessions. While our contribution in this work does not involve releasing prompts for therapy, we aimed to test both agents trained on the same material, and an iterative design with the same team involved in creating the prompt allowed for a fair comparison.
Task 1: Session Reconstruction:
The existing session dataset was recreated using the HELPERT prompt. Since the sessions consisted of a text-based dialogue between a user and a peer counselor, the counselor's responses were substituted with those generated by HELPERT, while the user's responses were used as input prompts. The goal was to produce simulated versions of the original sessions, where the peer counselor's response is replaced by HELPERT's output while retaining a coherent conversation. To ensure consistency and to keep the raters blind to the source of the sessions, slight modifications were applied to the user's responses, aligning them with a consistent dialogue scenario. However, care was taken to retain the essence and context of the original interactions. Generated AI responses and overall sessions were carefully examined to ensure they reflected original interactions and did not deviate significantly from the context of the user's input prompt.
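To make the reconstruction procedure concrete, the sketch below shows one way such a loop can be driven through the OpenAI chat API. The HELPERT prompt itself is the one published on the platform's website; the client and model details here are assumptions for illustration rather than the platform's actual implementation.

```python
from openai import OpenAI

client = OpenAI()
HELPERT_PROMPT = "..."  # the CBT-based HELPERT prompt published on the platform's website

def reconstruct_session(original_messages):
    """original_messages: list of dicts with keys 'from_thinker' (bool) and 'text'."""
    history = [{"role": "system", "content": HELPERT_PROMPT}]
    reconstructed = []
    for msg in original_messages:
        if msg["from_thinker"]:
            # keep the support seeker's turns as input prompts
            history.append({"role": "user", "content": msg["text"]})
            reconstructed.append(("thinker", msg["text"]))
        else:
            # replace the human helper's turn with a HELPERT completion
            reply = client.chat.completions.create(model="gpt-4", messages=history)
            text = reply.choices[0].message.content
            history.append({"role": "assistant", "content": text})
            reconstructed.append(("helpert", text))
    return reconstructed
```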
Design Considerations for Task 1: Two authors read and annotated the original sessions in advance to understand the entire context of each session. This preparation helped us address any potential misinterpretations by HELPERT. However, if HELPERT did not actively probe deeper as a human would, we intentionally withheld additional information from the system. This decision was made to observe how the session would naturally progress without further exploration from HELPERT. Hence, misinterpreted responses were corrected to replicate the session accurately, but additional self-disclosure and reflective responses (which would not have originated if the human peer counselor had not probed deeper in the first place) were not given as inputs to the system. This approach allowed us to evaluate the LLM's capabilities and limitations without introducing human bias. An example of this consideration is shown in Figure <ref>.
§.§ Psychologist Evaluation Dataset
Task 2: Psychologists' Evaluations using CBT Metrics: Both sets of sessions (n_1 = 27; n_2 = 27) were anonymized and shared with two clinical psychologists who specialize in Cognitive Behavioral Therapy for quality assessment. Although the psychologists conducted their evaluations independently and simultaneously, they practice CBT in the same office, thus controlling for their (possibly diverging) perspective as therapists. After reviewing each session, each psychologist completed the Cognitive Therapy Rating Scale (CTRS) and open-ended surveys.
Cognitive Therapy Rating Scale (CTRS): The CTRS scale is a standardized observer-rated tool and is used to assess a counselor's competence in administering CBT <cit.>. The scale includes 11 items, each evaluated on a 7-point Likert scale ranging from poor (0) to excellent (6). Total scores can vary from 0 to a maximum of 66, with previous studies considering a score of 40 as the threshold for competence in CBT <cit.>. Items on the CTRS are divided into two sub-groups: (i) General Therapeutic Skills, which evaluate a counselor's therapeutic relationship skills, and (ii) Conceptualization, Strategy, and Technique, which measure the counselor's application of CBT specific skills. Each of the two sub-groups consists of individual items designed to quantify a particular skill, as outlined in Table <ref>.
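For reference, a small helper for tallying CTRS ratings is sketched below. The item names and their assignment to the two sub-groups are our shorthand for the standard CTRS items summarized in Table <ref>, not an official naming.

```python
# 11 CTRS items, each rated 0-6; total 0-66, with >= 40 as the usual competence cut-off.
GENERAL_SKILLS = ["agenda", "feedback", "understanding",
                  "interpersonal_effectiveness", "collaboration", "pacing"]
CONCEPT_STRATEGY_TECHNIQUE = ["guided_discovery", "key_cognitions_behaviors",
                              "strategy_for_change", "cbt_techniques", "homework"]

def ctrs_summary(ratings):
    """ratings: dict mapping item name -> 0-6 Likert score for one session."""
    total = sum(ratings.values())
    return {
        "general_therapeutic_skills": sum(ratings[i] for i in GENERAL_SKILLS),
        "conceptualization_strategy_technique": sum(ratings[i] for i in CONCEPT_STRATEGY_TECHNIQUE),
        "total": total,
        "meets_competence_threshold": total >= 40,
    }
```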
Session Feedback Surveys: Since CTRS is a quantitative metric and lacks qualitative insights, we asked psychologists to complete a feedback survey for each session. The survey included a list of suggested prompts (See Appendix <ref>) to assist psychologists in reflecting on the counselor's behavior and session nuances for outlining key moments in approach, technique, and client engagement. At the end of reviewing the original session and its HELPERT counterpart and providing CTRS and qualitative comments for each individual session, psychologists filled out a session comparison analysis questionnaire (See Appendix <ref>) to outline the distinct observations made by each counselor in their respective sessions that were not made by the other. This question was intended to highlight the unique strengths and drawbacks of peer versus LLM-assisted counseling. Lastly, we conducted a semi-structured interview with both psychologists. The semi-structured interview lasted twenty minutes and focused on the approach that both therapists used for their ratings, as well as overall reflections post-study.
§.§ Dataset Schema & Release:
Publicly available text-based counseling sessions that strictly adhere to an evidence-based psycho-therapeutic treatment such as CBT are exceedingly rare due to the sensitive and confidential nature of the dataset and data quality issues. Most publicly available data is sourced from videos and can contain signals that cannot be translated to text; for instance, empathetic responses in the form of nodding cannot be translated to text (Table <ref>). To circumvent these potential challenges, HELPERT leverages real-life CBT-based text conversations. This approach ensures multiple things: 1) the LLM and peers are evaluated on their ability to provide counseling in a single-session intervention (instead of the one-time responses studied in prior work). This back-and-forth of context-dependent responses comprehensively evaluates whether LLMs are safe to deploy for quasi-therapeutic purposes in situations that call for complex case conceptualization, considering social and cultural contexts and addressing unpredictable human behavior. Next, the sessions follow a highly protocolized structure (CBT) that makes it easy to simulate with an LLM. Lastly, the sessions are text-based, so it is easy to compare the text-based nuances of support between peers and the LLM.
Each session pair in the counseling dataset is associated with attributes across message and session levels, including a “Session ID” (text), “Counselor ID”, and the “Source” of the session (binary), indicating whether the session was conducted by a peer counselor or recreated with HELPERT. The Message Attributes include the content of the “Message” (text), a binary field “FromThinker” to determine if the message is sent by the user or counselor, “Timestamp” (e.g., Sat Feb 19 2022 17:49:52 GMT-0500), “Offset” (e.g., Eastern Standard Time), and “MessageID”. The Session Notes Attributes record CBT-based labels for each session, such as “Event”, “Thoughts”, “Feelings”, “Behavior”, and “Cognitive Distortion”. Sessions are labeled in four ways: with the CTRS scores (11 CTRS items rated from 0 to 6) and the “Session Feedback” (text) provided by both psychologists. For a detailed dataset schema, refer to <ref>.
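To illustrate the schema, a single record might look as follows. All field values are fabricated placeholders (not real session content), and the mapping of the binary Source flag is an assumption.

```python
example_record = {
    "SessionID": "session_001",
    "CounselorID": "helper_07",
    "Source": 1,  # binary flag: peer counselor vs. HELPERT reconstruction (mapping assumed here)
    "Messages": [
        {"MessageID": 1,
         "FromThinker": True,
         "Message": "...",
         "Timestamp": "Sat Feb 19 2022 17:49:52 GMT-0500",
         "Offset": "Eastern Standard Time"},
    ],
    "SessionNotes": {"Event": "...", "Thoughts": "...", "Feelings": "...",
                     "Behavior": "...", "CognitiveDistortion": "..."},
    "Evaluations": [
        {"Rater": "Psychologist 1",
         "CTRS": {"agenda": 4, "feedback": 3},   # 11 items in the full record
         "SessionFeedback": "..."},
    ],
}
```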
Both companion datasets, HELPERT and Psychologist Evaluation, generated as part of this work, will be released to the public. These two datasets serve as a starting benchmark for comparing linguistic differences between HELPERT-generated responses and those of peer counselors and act as an initial resource for evaluating future LLM-driven mental health support interactions. This dataset will become available for replication studies and further research [link to be included upon publication].
§.§ Mixed-Methods Analysis
To investigate how the quality of human-provided care compares with LLM, each CTRS skill between the two counselors was compared. Each psychologist outlined the strengths and weaknesses of the counselor and selected the counselor who demonstrated a better understanding of the support seeker's trouble and application of the method. The absolute difference between psychologists' scores, denoted as Δ, was calculated for CTRS items to depict the distance between the ratings. To assess the degree of consistency between the psychologists, Intraclass Correlation Coefficient (ICC) was calculated, a statistical measure that indicates how closely numerical ratings by multiple raters resemble each other <cit.>. This reliability score is particularly useful when assessing multiple raters' evaluations on the same subjects or items, as in this study, where each session was independently evaluated. ICC was calculated by taking the difference between the variability of different ratings of the same session (between-rater variances) and the average variability of all ratings (total variances), divided by the total variances. This measure indicated how much of the total variability in ratings could be attributed to differences between sessions rather than differences between raters or random error. An ICC of -1 indicated perfect disagreement, 0 indicated no agreement, and 1 indicated perfect agreement among raters <cit.>.
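As one concrete way to compute this in practice (the text does not state which ICC variant or software was used, so this is an assumption), the pingouin library reports the standard ICC forms from a long-format table of ratings:

```python
import pandas as pd
import pingouin as pg

# long-format table: one row per (session, rater) pair with the total CTRS score
ratings = pd.DataFrame({
    "session": ["s1", "s1", "s2", "s2", "s3", "s3"],
    "rater":   ["p1", "p2", "p1", "p2", "p1", "p2"],
    "ctrs":    [42, 45, 31, 28, 55, 52],          # placeholder scores
})

icc = pg.intraclass_corr(data=ratings, targets="session",
                         raters="rater", ratings="ctrs")
print(icc[["Type", "ICC"]])   # e.g. inspect the two-way, absolute-agreement row
```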
For qualitative feedback, thematic analysis guided by the CTRS items was used to evaluate session dynamics and the counselor's skills, such as interpersonal effectiveness, collaboration, focus on key cognitions, and the application of cognitive-behavioral techniques. This method was chosen for its ability to systematically identify patterns in qualitative data that reflect the CTRS scores. Two researchers independently coded the evaluations and then met to identify and discuss themes guided by the research questions. Specifically, codes were developed according to CTRS items such as “collaboration”, “connection”, “session dynamic”, “adherence to CBT”, and “potential harm to participants”.
Privacy, Ethics and Disclosure:
This work relies on publicly accessible sessions from the Cheeseburger Therapy platform, with users' written consent for public viewing and research. None of these sessions contain personally identifiable information (PII). Participants were informed that sharing their sessions was voluntary, and only those sessions with explicit consent were used in this study. While the authors collaborate with the Cheeseburger Therapy team, they have no direct interaction with the participants, nor do they access any PII. Cheeseburger Therapy outlines its mission as a research initiative aimed at improving therapy accessibility by training laypeople to provide support. As this study involves retrospective analysis of de-identified data, it does not make any diagnostic claims.
§ FINDINGS
§.§ Human Counselors Chat while HELPERT Focuses on CBT Concepts
Despite being trained on the same CBT manual, human counselor and HELPERT sessions had starkly different engagement dynamics with users. Human counselors were more adept at picking up implicit cues and asking questions to create a space for user reflection, allowing both greater nuance within sessions and opportunities to get off track. On the other hand, HELPERT excelled at providing quality psycho-education and adhering to CBT methods consistently but often required explicit signaling from support seekers and often missed out on potentially important contextual details (Figure <ref>). In the original sessions, trust was established through small talk and shared lived experiences. Psychologists noted that these sessions were characterized by “authentic rapport,” often including “random chatter, which led to deeper self-reflection and self-disclosure”. At times, human counselors often shared their similar lived experiences to help users feel understood and validated. For example, in a session (Figure<ref>) where the user felt isolated and alienated from their feelings of being productive, psychologists observed that self-disclosure was not just beneficial but necessary and was more impactful and compelling than standalone CBT techniques.
“The counselor was able to generate a tremendous amount of credibility and buy-in by self-disclosure and by connecting their own experiences with the client’s own.”
At the same time, while peer counselors' "additional therapeutic chatter added to session quality" (Psychologist 1) by establishing rapport or gaining context, at other times "it detracted from the therapeutic goals" (Psychologist 2). In some cases, peer counselors' engagement with users strayed so far from the core CBT approach of the session that it became actively harmful.
“The helper introduced or referred to unscientific principles and outdated therapy concepts such as the “inner child”,
and then used these concepts to encourage the client to entertain and elevate thoughts that were actively harmful. Very little of this was CBT informed, and I fear that the client would have suffered harmful effects from this attempt at help.”
In comparison, since LLMs have no understanding of shared lived experiences or a self to disclose, HELPERT communicated strictly through CBT concepts such as restatements and storytelling to validate clients. Thus, positive feedback for HELPERT sessions often centered on agenda-setting, pacing, and adherence to CBT, with comments like “[the counselor] offered a digestible and organized session” and “[the counselor] did a good job at keeping the session structured and CBT-focused.” Adherence to the method, at times, compensated for the absence of a genuine connection. For instance, HELPERT sessions were annotated with comments like “[peer counselor] did well with offering validation and active listening skills while also re-framing negative thoughts” and “[peer counselor] brought in cognitive behavioral techniques such as taking a brief time-out from interaction and using breathing techniques to help the client manage their emotions.”. With this ascribed focus on CBT concepts, however, HELPERT, by extension, was a more passive agent within sessions, only able to interpret what was either explicitly prompted via the 6-phase structure or by the user directly (Figure <ref>).
Comparing the two following summaries (Figure <ref>) by HELPERT and a peer counselor, we see such differences in session dynamics play out, with the peer counselor having gleaned and utilized more emotional and contextual details from the user and HELPERT focusing on a straightforward, CBT-centric analysis.
§.§ Peer Counselor Performances Varied, while HELPERT was Consistent
The variability of the effectiveness of peer counselors' more free-form engagement was evident in psychologists’ CTRS scores, with
some scoring high ratings above 55 out of a possible 66, while others fell below 25 (Table <ref>). Comments for these sessions contained extreme opinions from both psychologists, ranging from “This counselor was profoundly empathetic and non-judgmental while also identifying appropriate thoughts that respected the cultural and religious milieu the client came from” to “This counselor introduced or referred to unscientific principles and outdated therapy concepts”.
Unlike the peer counselor sessions, HELPERT’s CTRS scores and feedback were more consistent, with no session scoring above 50 or below 30 (Table <ref>), except in cases where HELPERT refused to continue the session due to the LLM’s regulations. For these sessions, HELPERT was severely criticized, as it abruptly ended sessions and abandoned patients who mentioned self-harm or suicidal ideation without providing follow-up care, an extremely unethical practice in real life.
This indicates that while peer counselors often had 'shining moments,' they also “went off-book and engaged in non-CBT activities”; HELPERT was able to provide overall “more consistent, more mediocre care”, except in cases when it did not adhere to ethical guidelines.
§.§ Human Counselors Achieve Warmth and Empathy Through Cultural Sensitivity; HELPERT cannot
Since peer counselors used small talk and self-disclosure, these sessions naturally achieved warmth and empathy, since “there was a human mix of sincerity, seriousness, and total goofiness that mingled together in a pretty effective social interaction”. Psychologists documented occasions where human peers were sensitive to users' needs with respect, warmth, and genuineness. When users from diverse cultural backgrounds sought help, peers showed respect and understanding for their culture and religion, even when they couldn't fully relate to the user. For example, during a session (Figure <ref>), a user from the Global South encountered difficulties navigating familial and cultural pressure not typically present in Western societies.
Psychologists noted that the counselor's empathetic approach helped establish an understanding and validating experience despite their cultural differences, writing:
“[The counselor] was really beautiful in their ability to relate with the client while also expressing empathy and cultural understanding. The conversation of culture and how cultural factors are impacting the client's life was impressive, especially because it was organic and unforced.”
Meanwhile, this interaction was missing from HELPERT sessions a) because of HELPERT's inability to ask deeper and intuitive open-ended questions (discussed in Section <ref>) and b) its lack of cultural sensitivity. Beyond cultural understanding, human counselors were also mindful of religious values and adjusted their strategies subtly to align the methods with the values important to the user.
“[The counselor] was profoundly empathetic and non-judgmental while also identifying appropriate thoughts that respected the cultural and religious milieu the client came from. [...] was “wise” to have the client repeat the new thought several times and then apply techniques to post-session. All in all, it was one of the best sessions reviewed so far.”
On the other hand, the lack of small talk, self-disclosure, cultural understanding, and genuine reactions to sad events in HELPERT sessions significantly hindered the establishment of effective warmth and empathy. The inability to relate to the client was a major barrier. In their feedback, Psychologist 1 specifically called out HELPERT's inability to use self-reference as a means of connection, writing:
A lot of times, the human is going to have self-referential statements, even if not necessarily self-revealing. The problem is that for AI to be self-referential, it would necessarily be deceptive since there is no “self” to reference. AI could do what some skilled therapists do to avoid self-disclosure: the famous, “I know of a person who ...”. In [peer support], the AI could say, “Your burnout feelings are pretty common. I interact with a lot of people who experience burnout this time of the school year so the [client] could feel validated.”
In most of the sessions, the lack of empathy, warmth, and genuineness made HELPERT appear “detached from the client's internal reality and [compensate] for it with excessive and repetitive restatements.”
The initial response [by the counselor] lacked empathy, which is a very important aspect of responding to sadness particularly. This is one of the biggest giveaways to know whether the counselor is AI or not - does the helper express genuine empathy in response to sadness? The reason is that one of the primary purposes of sadness is to draw empathetic support from others. Imagine seeing someone you know well sitting alone on a sidewalk bench, looking extremely sad. Your initial reaction would be to approach that person and ask what’s wrong empathetically. In this case, the counselor didn't express empathy. Rather, it restated the topic and intellectualized using CBT. It can basically never portray empathy because AI cannot feel it. There was a distinct lack of human connection as compared to the other [peer] counselor, who expressed a lot of empathy while also restating and giving direction.
§.§ Verbosity & Over-Use of CBT Concepts Compromises Therapeutic Connection
According to psychologists, collaboration during sessions depended on how counselors interacted with participants to understand their needs, incorporate perspectives, and provide feedback. Human sessions, characterized by colloquial, conversational styles, exhibited higher degrees of collaboration compared to HELPERT, which produced verbose outputs with low turn-taking. Extensive dialogue in human sessions on topics beyond CBT was viewed as a strategy for connection. This dynamic was absent in HELPERT sessions, marked by lengthy, less interactive responses despite being prompted to adopt a conversational style. HELPERT's verbose responses led to inconsistent collaboration, with instances where “the [counselor] was lecturing, over-explaining rather than connecting with the client.” Both psychologists noted a lack of feedback and guided discovery in HELPERT sessions, noting instances where “the helper zeroed in on a thinking error without working with the client to identify his or her own thinking errors or even offering alternatives that the client might be able to select from” and where “the helper simply “told” the patient what was wrong and how to fix it.” In response to one such session, the feedback outlined:
This [counselor] was afflicted by verbosity. There were text-heavy responses that summarized all that the client said, as if the task were to summarize literally everything the client said. The counselor couldn't separate the important things the client said from things that were less important or trivial. This lack of guided discovery and collaborative work made the session seem perfunctory and more of a lecture than a therapy session - this feeling was worsened by the incongruity of the high-level verbal approach of the helper versus the more casual and colloquial language of the client.
Psychologist 1 strongly critiqued HELPERT sessions, highlighting instances where it applied CBT frameworks without seeker collaboration to “identify questionable thinking errors” and “impose solutions without seeking client input.” HELPERT also struggled to engage clients actively, often appearing “passive when clients failed to answer direct questions” and fixating on concluding sessions swiftly.
HELPERT applied CBT frameworks without seeker collaboration, for example to “[identify] a thinking error that was quite questionable in its application to this case” and “[impose] a solution without asking the client for initial ideas, additional possibilities or really much feedback”, with the feedback identifying collaboration as the therapeutic aspect that separates genuine therapy from simply using CBT language and techniques:
“The thing about therapy, especially CBT, is that it's not something that is “done” to someone - it's a shared collaborative experience, and when one person has the mic for so much of the time, that collaboration kind of goes away.”.
§.§ Psychologists had Different Interpretations of What Makes an Effective Session
We discussed in Section <ref> how the two kinds of counselors used different techniques for an effective session, with humans' tendency to connect through self-disclosure and HELPERT's to communicate through CBT techniques. Each psychologist perceived these techniques differently, implying that multiple strategies can be effective when it comes to providing support. For instance, Psychologist 1 called out multiple sessions where the peer's self-disclosure was pivotal in building a therapeutic alliance. While this kind of session dynamic was mostly looked at favorably, there were instances where Psychologist 2 interpreted self-disclosure as a detriment to a professional session, with feedback such as “Counselor often got off track and overly personal like self-disclosure, off-topic remarks, taking a break to feed neighbors dog during session, that do not match the way CBT is meant to be” and “[The session] at times was off-topic. The counselor brought in too much of their own experiences.”.
Because of their differing views on self-disclosure and CBT adherence, the psychologists presented opposing views on counselors' collaboration and interpersonal effectiveness and varied in their prioritization of the sessions' organizational aspects. For human peer counselor sessions, positive feedback centered on the counselor's understanding of the users' issues, with the psychologists explaining, “[the counselor] filled a lot of the session with validation and reflective listening.” Psychologist 2, however, viewed this human aspect as unprofessional, with their common criticism focusing on poor structure and inconsistent use of CBT concepts, leaving comments such as, “the session was disorganized and hectic; the [counselor] did not go through components of CBT in a way that seemed to make sense or bring understanding to the client that well. It was all over the place,” and “[peer counselor] got off topic frequently and did not provide as structured a session as typically done in CBT.”
This diverging perception of the role of collaboration and self-disclosure in the overall application of CBT thus gives insight into the raters' quantitative disagreement over both HELPERT's use of collaboration and the peer counselors' application of CBT techniques (Table <ref>). In other words, where Psychologist 1 interpreted HELPERT's long explanations as a lack of collaboration and over-lecturing, Psychologist 2 interpreted them as a stronger application of CBT. On the flip side, where Psychologist 2 interpreted the peer counselor's back-and-forth self-disclosure as disorganization and poor CBT education, Psychologist 1 interpreted it as a highly collaborative session with a strong user connection. For instance, in response to one session where Psychologist 2 wrote “the session was disorganized and hectic [...] it was all over the place”, Psychologist 1 wrote, “the [counselor] was extraordinarily collaborative, instructing the client to look over the thinking errors [...] while also stepping in when the client asked for direct help.”
Despite the conflicting opinions, both counselors received high ratings. The varying session strategies led psychologists to focus on different aspects of what constitutes an “effective” session. In the Discussion section, we explain some of the reasons behind these differing interpretations.
§.§ A Counselor's Inability to Identify and Handle Out-of-Scope Issues Holds the Risk for Serious Patient Harm
Both human counselors and HELPERT failed to properly handle out-of-scope issues, resulting in a serious risk of patient harm. Among the simulated HELPERT sessions, there were two in particular: one where a user brought up issues of depression and self-harm, and one where the user struggled with addiction and isolation. In both cases, HELPERT abruptly ended the session without providing any referral or direction to other resources. As Psychologist 2 called out:
[In response to the session in Figure <ref>]: “This session likely resulted in patient harm. When the patient was expressing significant distress over rejection and abandonment, the counselor responded to self-harm talk by engaging in ... rejection and abandonment. The counselor needed, above all, to provide a resource such as the National Crisis Hotline number (988) to provide the client an immediate resource for care, but the counselor needed to do this with empathy and compassion as well, explaining the limits of his or her training patiently and expressing hope that the patient can get the help they need from a better-trained person. The abrupt abandonment of a patient in distress is specifically forbidden by the ethics codes of psychologists and practitioners, and this counselor failed the basic ethical obligation to provide basic help to a person in distress. This is absolutely harmful and could end up causing significant issues for clients who are vulnerable.”
Psychologist 1 further added that “in the best-case scenario, the client would have been handed over directly to a person with higher training. At the very least, there should have been a list of referral resources relevant to the client's community, such as the 988 nationwide crisis Hotline in the U.S.” Both psychologists emphasized that the counselor “was correct to not attempt to help the client beyond their expertise” but that it was unethical and harmful to abruptly end the session.
In addition, there was another session where HELPERT did not terminate the session suddenly but, in fact, inappropriately validated a support seeker's harmful fears and behaviors. Psychologist 1 commented:
There was a serious problem of “over-agreement” and “overvalidation”. The client appeared to be out of touch with reality [...] the helper leaned into this and reinforced client's unhealthy ideas.
The human peer counselor did not end sessions they were not equipped to handle and may have caused harm to the user by inappropriately addressing these serious challenges without proper clinical training. In one session where the seeker struggled with self-harm:
“Unfortunately, and this is a serious issue - the client expressed self-harm behaviors and ideations that the helper did not address — in fact, the helper normalized self-harm inappropriately as a common response to stress. It is not. More training on self-harm for the peer counselors should be done.”
In another session (Session <ref>) where the user expressed concern about addiction, Psychologist 1 explained “the [counselor]'s attempts at validation moved into justification and perhaps even enabling the client's dysfunction.”
[In response to the session in Figure <ref>]: “This counselor is overly validating. The above example in which the counselor's attempts at validation moved into justification and perhaps even enabled the client's dysfunction. Depression and substance abuse are dysfunctional and shouldn't be excused. Instead, the counselor would have been better off reflecting on the client's own feelings rather than offering opinions on how right or wrong those feelings are. For example,
Client: “it tends to make one feel totally worthless when everyone around you has people to do stuff with, and you have no one.” A better response would be something like, “It sounds to me that you've been feeling deeply depressed and that you're having a hard time seeing your way out of this.” The problem with this helper is that agreeing with the hopelessness in any way (e.g., “It's society's fault! Feeling like all you have is substance abuse makes sense!), we risk reinforcing the hopelessness. The idea that “all I have” is a weed doesn’t make sense at all; that’s the depression talking. This client would have benefited greatly from a professional CBT-trained psychologist, but the counselor missed reading the signs of depression.”
Without the ability to identify issues beyond their expertise and provide appropriate referrals, there is a significant risk to users. According to psychologists, the potential for harm is a major distinction between alternative support methods and traditional therapy, which typically involves comprehensive training to manage such risks.
§ DISCUSSION: THE LIMITS OF LLMS IN THERAPY AND THE CRITICAL ROLE OF HUMAN-AI COLLABORATION
The emergence of LLM therapy has been criticized as premature, with critics stating that these systems lack efficacy and could potentially harm some patients. This study evaluates the role of LLMs within accessible mental health support by recreating publicly available CBT-based counseling sessions using HELPERT, a prompt based on CBT techniques. In this section, we present the strengths, weaknesses, and ethical implications of using LLMs in healthcare by combining psychologists' evaluations and post-study reflections.
§.§ An Empathetic AI: A Nonexistent AI?
Recent research on LLM-mediated care indicates that responses from LLMs exhibit greater overall empathy than human peer-to-peer interactions <cit.>. However, in this study, both psychologists agreed that the human peer support sessions “had much more warmth, empathy, and shared understanding” compared to HELPERT sessions, “which felt more like self-help content”. The low empathy, from a session point of view, could be attributed to HELPERT's over-reliance on CBT methods, since prior work has shown that rigid adherence to the method and lack of collaboration with the user contribute to users' low perceived empathy <cit.>.
This suggests that while research supports that LLMs can be empathetic in generating single, isolated responses, their ability to lead empathetic sessions is much more limited. Theoretically, empathy in CBT refers to “the conscious engagement with another's suffering, where we imagine and relate what it is like to be experiencing the thoughts and feelings of the other person” <cit.>. An AI neither has the consciousness to engage nor a self to relate. Hence, to say LLM responses are highly empathetic is misleading and can be harmful to the general audience. This finding is significant because, even in research settings, we must be cautious not to overstate LLMs' empathetic capabilities, which could encourage more users to rely on this support without understanding its limitations. We argue that empathy is not an NLP task that can be easily addressed with more data or additional fine-tuning, highlighting the challenges of making mental health support accessible solely through AI.
§.§ Human-AI Collaboration: Balancing Empathy with Method Adherence
The original peer counselor sessions were characterized by small talk, empathy, and warmth. In contrast, when these sessions were recreated with HELPERT, they included more CBT-based educational content. Due to different session dynamics, Psychologist 2 preferred HELPERT in 74% of the sessions for “its adherence to the method, psycho-educational content, and effective use of CBT techniques.” In contrast, they chose peer counselors only 4% of the time due to their “overuse of self-disclosure and off-tangent remarks.” However, Psychologist 1 preferred peer counselor sessions (55% individually and 69% combined) because of the “consistent warmth, empathy, and non-judgmental, destigmatizing nature of the counselors, which helped form a strong therapeutic alliance with the user.”
Prior literature provides evidence that both strategies—therapeutic alliance and application of CBT—are effective. For instance, previous AI-mediated health research has found that greater use of cognitive and behavioral change methods correlates with symptom improvement and patient engagement, while non-therapeutic content is inversely related <cit.>. Similarly, therapeutic alliance and a therapist's subjective variables, such as their values, personality, and reflective capacities, have a strong impact on psychotherapy outcomes <cit.>. These findings are also in alignment with the pluralistic framework of psychotherapy, which argues that various therapeutic methods may be effective in different situations and that there is `unlikely to be one right therapeutic method' suitable for all situations and people <cit.>, calling for hybrid care that augments human connection and collaboration with AI's adherence to the method <cit.>.
Hence, while both psychologists differed in their opinions on the primary factor impacting session quality, they highlighted different yet important aspects of counseling. Both therapeutic alliance and structured CBT techniques are essential for a successful therapeutic conversation, validating the varying perspectives of psychologists. This balance provides insights into designing peer support platforms and demonstrates the potential of using LLMs to augment, rather than substitute, peer counselors' abilities <cit.>. Human-AI collaboration can make the therapeutic process safer without losing the authenticity of human interactions or the scalability of an advanced language model <cit.>.
§.§ Challenges in Evaluating LLMs in Therapeutic Settings: The Need for New Benchmarks and Standardization
The high variation in the psychologists' ratings, despite having similar CBT training, highlights the complexity of quantifying LLMs' ability in therapeutic settings. We argue that scales developed to quantitatively measure human competence may not appropriately evaluate AI performance because of their high variance. For example, CTRS defines Interpersonal Effectiveness as:
Degree of warmth, concern, confidence, genuineness, and professionalism appropriate for this particular patient in this session.
In post-study interviews, we found Psychologist 1 prioritized warmth and genuineness, while Psychologist 2 emphasized confidence and professionalism in their ratings. Psychologist 2 reached out to us after the study stating:
“Although the sessions were blinded, it was possible to infer which ones were recreated with AI because of the lack of self-disclosure. Therefore, I focused on other aspects of Interpersonal Effectiveness, as it would be unfair to rate AI on warmth.”
This lack of standardization hindered an objective evaluation. While current research on AI-mediated mental health care focuses on accuracy and reliability, future studies will need to evaluate these models' ability to provide counseling. Recent developments in conversational agents have established metrics for perceived empathy <cit.>, but there is a lack of reliable instruments to quantify a conversational agent's counseling behavior, which necessitates reevaluating how we assess AI-delivered mental health care.
The absence of such social evaluation frameworks also presents unique challenges in designing universally applicable language models for sensitive populations and high-risk scenarios. The lack of benchmarks for LLMs in highly subjective tasks invites deliberate and thoughtful design of language models that account for the inherent variability and nuances of human experiences and responses in such high-risk situations. Our findings emphasize involving multiple domain experts as humans-in-the-loop since different mental health experts can have varying opinions on the quality of non-traditional care, making it difficult to disseminate psychotherapy in an automated, low-cost manner <cit.>.
§.§ Dataset Contributions: Transparent AI-Driven Mental Health Support
Research in digital mental health is challenging to replicate due to the confidential and inconsistent nature of the datasets. Existing datasets are often derived from video session transcripts, typically sourced from platforms like YouTube and Vimeo. These transcripts are not representative of real-time text interactions and often contain transcription errors from automatic captioning <cit.>.
HELPERT and Psychologist Evaluation Dataset consists of text-based sessions and evaluations that can be used to study the behaviors of LLMs and human counselors in text-based psycho-therapeutic settings beyond CBT, specifically for analyzing linguistic differences, such as LLM's inability to ask intuitive questions and its lack of deeper understanding and contextual awareness (mentioned in Section <ref>). Future research can investigate how the style, intonation, and tone of a language model impact session quality and care.
We argue that human-LLM interaction for “therapy” is likely here to stay. Instead of shifting responsibility to users, we need to train LLMs for safer interaction. NLP practitioners can use the Psychologist Evaluation Dataset (13,378 words and approximately 15,545 tokens) to fine-tune current language models for this specific task. The dataset contains different training signals for peer counselors and LLMs (Figure <ref>); for LLMs, these can be used to train current models so that, at a minimum, they follow the ethical practices outlined in Section <ref> and do not harm or abandon the user as outlined in Section <ref>. Compared to a model that has not been fine-tuned, a fine-tuned LLM is expected to generate safer responses and to provide supervisory signals for learning safer behaviors.
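As a rough, hypothetical illustration of this use case, the following is a minimal sketch of a fine-tuning setup using the Hugging Face transformers and datasets libraries. The record fields (session_text, feedback), the base checkpoint, and the formatting template are placeholders and not the actual schema of the released dataset; the sketch only shows one plausible way the session–feedback pairs could be turned into a supervised training signal.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Hypothetical records: session transcripts paired with psychologist feedback.
records = [
    {"session_text": "...full session dialogue...",
     "feedback": "...CTRS-based narrative feedback..."},
]

base_model = "gpt2"  # placeholder; any causal LM checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

def to_features(example):
    # Concatenate session and feedback so the model associates counseling
    # behavior with expert evaluation signals.
    text = example["session_text"] + "\n### Feedback:\n" + example["feedback"]
    return tokenizer(text, truncation=True, max_length=1024)

dataset = Dataset.from_list(records).map(
    to_features, remove_columns=["session_text", "feedback"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="helpert-ft", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```

In practice, such a sketch would need to be complemented with safety-specific objectives, held-out evaluation against the CTRS criteria, and human review before any deployment.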
Lastly, in the introduction, we claimed that the informal evaluation of large language models is subjective and contextual; a user in distress is unlikely to gauge the risk, considering LLMs act as a band-aid for mental health support—accessible and free. Therefore, current developments in AI must inform users about the quality of care provided and the trade-offs of using alternatives to traditional therapy, either before or after the interaction. To communicate these risks and trade-offs, NLP researchers can use the Psychologist Evaluation Dataset. This dataset includes detailed evaluation criteria using the Cognitive Therapy Rating Scale (CTRS) and narrative feedback like `Session Feedback`, which provides a standardized and thorough assessment of counseling sessions. Creating transparent feedback loops can help users become more aware of the potential limitations of AI-driven mental health support (Figure <ref>). For example, after each interaction, users could receive a conversation summary, including an evaluation of the AI's performance and resources for seeking professional help if needed.
§.§ Ethical Implications
The potential ethical implications of chat-based, LLM-enabled mental health support are expansive and multi-disciplinary. We thus limit our discussion of such considerations to the focused scope that guided the study, that is, the comparative quality of care between peer and LLM counselors in single-session intervention settings. As prompted by our analysis of psychologist feedback and ratings, there are several key ethical questions that emerge from the performance of LLM counselors within sessions.
First is the question of deceptive empathy and self-relation in LLM-facilitated care. Health practitioners caution that given their lack of subjective qualities, LLMs are unable to form a therapeutic alliance with end-users <cit.>, a fundamental quality for effective psychotherapy. Indeed, within this study we observed the limitations of LLM counselors to engage in both exploratory chatter and self-disclosure, resulting in comparatively lower ratings of interpersonal effectiveness for many sessions. However, as the evaluating psychologists noted in their feedback, directly integrating such features into LLM-based therapy poses significant ethical concerns. While some aspects of small talk can likely be performed by an LLM counselor with the correct training, any form of self-disclosure or self-relation by it would inherently be deceptive as there is no “self” to reference. To put it in perspective, LLM-based counselors would fundamentally lack the ability to truthfully give basic assurances like “I understand” or even “I'm sorry that happened” <cit.>.
Beyond the ethical concern of deception in simulating human interpersonal engagement, there is also a broader question of whether intentionally imparting any human-like warmth in a therapeutic setting may be harmful. Integrating such subjective qualities may cause patients who are seeking therapeutic care to ascribe intentionality and care that simply does not exist for LLMs, producing unrealistic expectations of understanding and acceptance. In practice, such prescriptions could exacerbate risk in cases of over-validation and abandonment for likely already vulnerable users, as found in this study. Because of these concerns, current work in the field suggests such systems can “never” engage in a genuinely therapeutic conversation and would be best utilized as a mediator with limitations<cit.>.
On the other hand, it is still valuable to discuss the idea of whether there is a level of deception (in the most inclusive sense) that is ethical while designing AI in mental health settings. Barring the more extreme speculations of AI personas, can end user-facing LLM agents generate self-referential comments or basic pleasantries without ultimately causing harm? Or can the psycho-educational content provided by AI itself be valuable despite what is outlined by the current CTRS scale?
This overarching concern of deceptive empathy is further preceded by the ethical challenges of whether LLM-based mental health agents can even functionally display basic therapeutic competencies in assessing and handling cases where users' needs may be outside of their scope of care. While not explicitly part of the CTRS scale, HELPERT's failures in refusing to continue support in instances of substance use, the disclosure of specific mental health disorders, or self-harm, as well as over-validating other harmful behaviors, are in direct conflict with broader mental healthcare standards. In particular, organizations like the American Psychological Association (APA) have set ethical and conduct standards of which, if a provider feels that a patient's issues exceed their professional competency, “an appropriate termination process that addresses the client’s ongoing treatment needs through pre-termination counseling and making any needed referrals must occur” <cit.>.
The ability to assess what a care provider is equipped to do, and how to handle situations where they are not, may not be as easily solved for AI agents as simply linking resources upon a refusal of service based on a blanket keyword flag. This competency is distinctly emphasized when there is an imminent risk to a user, such as that of suicidal ideation or domestic violence. Especially considering that AI agents are already being characterized or sought out as forms of therapy in the wake of inaccessible healthcare, the design of such AI agents must seriously consider the ethical implications of how such tools may and may not handle these common yet high-stakes situations. Such ethical questions only scratch the surface of what it means to implement or direct LLM-human interactions in mental healthcare but present fundamental ethical challenges to evaluating the nature of LLM mental healthcare tools.
§.§ Limitations and Paths Forward
The LLM responses were generated in a non-interactive, one-sided manner, unlike real-time human peer counseling sessions. Although care was taken to recreate the session and maintain the essence and context of the original, HELPERT sessions were highly constrained by the transcript of the previously generated human-to-human session. It is likely that HELPERT's sessions would have diverged from the original ones, as users might have responded differently to HELPERT's responses, which sometimes varied from those of the original counselor.
This design choice, however, was an intentional trade-off to minimize risks to human subjects. Our goal was to evaluate the care provided as an alternative to traditional forms of care, whether by a peer counselor or an LLM-based chatbot, not to investigate how accurately a session could be reconstructed with LLMs. This method also served as one of the only feasible ways to evaluate the quality of care offered by these systems without exposing real participants to potential risk. In addition, since patient outcome measures were not accessible for the recreated sessions, our study focused solely on the quality of care provided, as assessed by clinical psychologists.
Future work should, therefore, explore avenues for integrating LLMs into therapeutic settings in a safe, controlled, and supervised environment. This will help better understand the role of different session dynamics, such as connection versus method adherence, on user therapeutic outcomes. Researchers who wish to conduct such a study involving human subjects can use the HELPERT and Psychologist Evaluation Dataset to understand and be aware of the potential risks.
§ CONCLUSION
From a decision-making system to a CBT-trained peer, large language models are undergoing a shift in their role, enabling wider access to care—a fundamental necessity in contemporary mental health support.
However, re-purposing LLMs as therapeutic counselors when they were not originally intended for this task is counter-intuitive and has ignited a debate between some recipients of AI-assisted support and field experts. In this paper, we reproduced a publicly available CBT dialogue dataset using a CBT-based prompt running on GPT-4. The two sets of session dialogues were evaluated by clinical psychologists through the Cognitive Therapy Rating Scale (CTRS) and psychologists' feedback data. Despite the same training, we found that peer counselors focused on connection and validation through self-disclosure, whereas HELPERT focused on adherence to the method. Our findings call for a hybrid model of care that offers a balance between the scalability of AI and the nuanced interpersonal effectiveness of humans. In addition to our analysis, we will be releasing two companion datasets to the public, providing CBT-based benchmarks for exploring the roles of peer counselors and LLMs in mental health. Our contributions in this paper encompass being the first to analyze therapeutic counseling conversations at the session level amidst the rise of more sophisticated LLMs, especially at a time when a significant number of individuals have turned to these chatbots for support.
§ CTRS
§ SESSION FEEDBACK SURVEY
After reviewing the session, please make a comment on the session. Consider responding to one or more of the following prompts that address the most interesting aspects of the session.
* What did you notice in the session that seemed most different than what a human might ask? (e.g., tone, conversation style, questions, reactions)
* What was the most impactful or compelling thing that the peer supporter did to guide the session?
* What could the peer supporter have done better? Recommendations for improvement?
* What are the most noticeable differences between this session and CBT sessions that happen in your practice?
§ SESSION COMPARISON SURVEY
Upon reviewing both sessions conducted by a human peer supporter and an AI peer supporter, please answer the following questions, which are mandatory:
* What unique observations did each peer supporter make in their respective sessions that the other peer supporter did not? For example, Peer Supporter 1 may have noticed 'X' while Peer Supporter 2 observed 'Y.'
* Which peer supporter demonstrated a better understanding of the support seeker's trouble and application of the method?
∙ Peer Supporter 1 ∙ Peer Supporter 2 ∙ Both ∙ Neither
§ SESSION SCHEMA
Each session contains a text-based dialogue between the peer support provider (human or AI) and the support seeker, with the following schema:
|
http://arxiv.org/abs/2409.03240v1 | 20240905044055 | An Efficient Enumeration of Flat-Foldings : Study on Random Single Vertex Origami | [
"Chihiro Nakajima"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"cond-mat.dis-nn",
"cond-mat.soft"
] |
§ ABSTRACT
This paper deals with themes such as the approximate counting/evaluation of the total number of flat-foldings for random origami diagrams, the evaluation of values averaged over various instances, the derivation of forcing sets for general origami diagrams, and the evaluation of average computational complexity. An approach to these problems using a physical model, together with an efficient size-reduction method for it, is proposed.
Using a statistical mechanics model and a numerical method of approximate enumeration based on it, we give the results of an approximate enumeration of the total number of flat-foldings of single-vertex origami diagrams with random angle widths around the central vertex, and obtain its size dependence for an asymptotic prediction towards the limit of infinite size.
In addition, an outlook on the chained determination of local stacking orders of facets, caused by the constraints that prohibit facets from penetrating each other, is also provided from the viewpoint of organizing the terms included in the physical model.
A method to efficiently solve the problem of the determination or enumeration of flat-foldings is discussed based on the above perspectives.
This is thought to be closely related to forcing sets.
§ INTRODUCTION
Statistical mechanics deals with configurations given by combinations of variables that take on a small number of states (for example, binary variables)<cit.>.
It introduces realization probabilities to configurations and discusses the properties of moments and large deviation functions, and is closely related to mathematics through probability theory and combinatorics.
In mathematical and computer science research on origami, a statistical mechanics perspective has been introduced for some problems (though sporadically, and not based on a unified perspective)<cit.>.
§ PHYSICAL MODEL FOR FOLDING OF ORIGAMI DIAGRAM
In this study, origami diagrams are generated according to the probabilistic algorithm explained below.
With the total number of facets, denoted by n, held fixed, a large number of instances is generated by randomly assigning the angle widths around the center, and the statistical properties of the resulting set of instances are discussed.
We try to obtain asymptotic predictions of the behavior in the limit of an infinite number of facets, n →∞, from sequences of results for finite numbers of facets.
(a) Example of an origami diagram. Each edge in the figure represents a crease; in this figure there are no overlaps of facets. (b) Corresponding pre-folded diagram, which describes the overlaps of facets when the diagram in (a) is folded along the creases. Each vertex indicated by the same mark is the same as that in Fig. (a). (c) Schematic picture of the introduction of an Ising variable for a local layer-ordering.
Attempts to introduce physical models into origami have been made in the past.
It includes one that deals with geometric constraints on the bending of flat structures in three-dimensional space<cit.>,
one aimed at the physical properties of polymers and membranes<cit.>,
one that extracts the phase transition phenomenon seen in self-folding origami<cit.>,
and one that uses it in the context of mathematical research<cit.>.
In this paper we consider a model consisting of binary variables s_i,j∈{-1,+1}, with constraints imposed in the form of products of two or four variables<cit.>, to treat the stacking of facets and determine flat-foldability accurately.
The variable s_i,j represents the vertical relationship in the stacking of two facets i and j in the origami diagram.
Each realization of {s_i,j} represents a global stacking order of the n facets.
The conditions prohibiting the situation in which a crease is penetrated or intruded upon by other facets are imposed through the following terms in the energy function,
E^(i)_i,j;k = 1/2 (1 - J_(ik)(kj) s_i,k s_k,j),
E^(c)_i,j,k = 1/4 (1 - L_(ij)(jk) s_i,j s_j,k - L_(jk)(ki) s_j,k s_k,i - L_(ki)(ij) s_k,i s_i,j),
E^(q)_i,j;k,l = 1/2 (1 - K_ijkl s_i,k s_i,l s_j,k s_j,l),
where J_(ik)(kj) = -τ_ik τ_kj, K_ijkl = τ_ik τ_il τ_jk τ_jl, L_(ij)(jk) = -τ_ij τ_jk, and τ_ij is the sign of the difference between the two facet indices, namely τ_ij = sign(j-i).
Note that a variable s_i,j is assigned only when i<j; when the indices appear in the reverse order, they refer to the same variable, so attention must be paid to the order in which the subscripts are written.
The term (<ref>), which prohibits a facet k from intruding between two facets i and j that are connected by a crease, is assigned to each geometry in which the crease (i,j) overlaps with a facet k in the pre-folded diagram.
The term (<ref>), which prohibits cyclic stacking among three facets i, j, k, is assigned to each geometry in which the three facets i, j and k simultaneously share an area.
The term (<ref>), which prohibits unrealizable overlaps among four facets i, j, k, l forming two creases in (sometimes partially) coincident positions, is assigned to each geometry in which the two creases, consisting of facets i and j and of facets k and l respectively, are in coincident positions.
The terms of the forms (<ref>)-(<ref>) are thus respectively assigned to the geometries of a crease overlapping a facet, of a simultaneously shared area, and of coincident creases in the pre-folded diagram.
Hence, the total energy function is described as follows,
H({s})=∑_(i,j ; k)E^(i)_i,j;k + ∑_(i,j,k)E^(c)_i,j,k + ∑_(i,j;k,l)E^(q)_i,j;k,l,
where each summation is taken over all corresponding geometries in the pre-folded diagram.
Thus the problem of finding a flat folding is cast as an optimization problem:
that of finding a combination of {s_i,j} with H=0.
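For concreteness, a minimal sketch of how the energy function H({s}) of Eqs. (<ref>)-(<ref>) could be evaluated for a given stacking configuration is given below. The lists crease_facet, cyclic_triples, and coincident_quads are assumed to come from a geometric preprocessing of the pre-folded diagram, which is not shown here.

```python
def tau(i, j):
    # Sign of the index difference, tau_ij = sign(j - i).
    return 1 if j > i else -1

def spin(s, i, j):
    # s is a dict keyed by ordered pairs (i, j) with i < j.
    return s[(i, j)] if i < j else s[(j, i)]

def energy(s, crease_facet, cyclic_triples, coincident_quads):
    """Total energy H({s}); H = 0 for a valid flat folding.

    crease_facet     : list of (i, j, k), facet k overlaps the crease (i, j)
    cyclic_triples   : list of (i, j, k), facets sharing a common area
    coincident_quads : list of (i, j, k, l), creases (i, j) and (k, l) coincide
    """
    H = 0.0
    for i, j, k in crease_facet:          # penetration terms E^(i)
        J = -tau(i, k) * tau(k, j)
        H += 0.5 * (1 - J * spin(s, i, k) * spin(s, k, j))
    for i, j, k in cyclic_triples:        # cyclic-stacking terms E^(c)
        L_ij_jk = -tau(i, j) * tau(j, k)
        L_jk_ki = -tau(j, k) * tau(k, i)
        L_ki_ij = -tau(k, i) * tau(i, j)
        H += 0.25 * (1 - L_ij_jk * spin(s, i, j) * spin(s, j, k)
                       - L_jk_ki * spin(s, j, k) * spin(s, k, i)
                       - L_ki_ij * spin(s, k, i) * spin(s, i, j))
    for i, j, k, l in coincident_quads:   # coincident-crease terms E^(q)
        K = tau(i, k) * tau(i, l) * tau(j, k) * tau(j, l)
        H += 0.5 * (1 - K * spin(s, i, k) * spin(s, i, l)
                          * spin(s, j, k) * spin(s, j, l))
    return H
```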
§ SINGLE-VERTEX ORIGAMI DIAGRAM
Preparing a flat-foldable diagram in general is considered to be equivalent to a combinatorial optimization problem itself.
In fact, when origami diagrams are generated with random angles, the frequency of flat-foldable diagrams is extremely small, so that it is almost unrealistic to discuss the foldability problem.
Instead of generating general diagrams, this research generates diagrams with a single-vertex structure, such as the one exhibited in Fig. <ref>(a).
The properties of single-vertex diagrams have been actively researched in the context of determining foldability under a fixed crease pattern (mountain-valley assignment), enumerating foldable crease patterns, and enumerating flat-foldings (global stacking orders of facets)<cit.>.
It is known that the foldability of a single-vertex diagram can be determined using Kawasaki's theorem<cit.>.
This theorem makes it possible to generate foldable diagrams with random angles around the center by maintaining the condition that the alternating sum of the angles is 0.
Regarding the enumeration of the total number of flat foldings, at least for the case of random angles, there is still no known conclusion even for single-vertex diagrams, to the best of the author's knowledge.
§.§ Numerical procedure for generating each instance
For an origami diagram whose total number of facets is n, the total number of pairs of facets is n(n-1)/2, so this value is also the upper limit of N, the number of pairs for which the local layering order should be considered.
However, depending on the instance, there are pairs of facets that do not have an overlapping area in the pre-folded diagram.
No variables are assigned to these pairs, as there is no need to consider their direct hierarchical relationship.
As a result, the value of N for an origami diagram of n facets is roughly close to this upper limit, but takes various values depending on the details of the overlaps.
The number of variables s_i,j of the corresponding optimization problem is thus given by N.
The instances are generated with two methods; examples of the diagrams are shown in Fig. <ref>.
In the first method, the width of each angle around the center is randomly drawn from continuous real numbers in the range 0 to 1 according to a uniform distribution.
The angles are chosen so that the alternating sum is 0, and are eventually normalized so that their sum is 2π.
This can be expected to correspond to the limit of infinitesimal width compared to the case where a discrete unit width w is introduced for the angles, which will be described later.
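A possible implementation of this generation step is sketched below; the exact procedure used in this study may differ in its details, but the sketch enforces the two conditions stated above (alternating sum equal to 0 and total angle 2π).

```python
import numpy as np

def random_single_vertex_angles(n, rng=None):
    """Generate n random angle widths around a single vertex such that the
    alternating sum is 0 (Kawasaki's theorem) and the total is 2*pi.
    n must be even for a flat-foldable single vertex."""
    assert n % 2 == 0
    rng = np.random.default_rng() if rng is None else rng
    odd = rng.uniform(0.0, 1.0, n // 2)    # angles at odd positions
    even = rng.uniform(0.0, 1.0, n // 2)   # angles at even positions
    even *= odd.sum() / even.sum()         # enforce alternating sum = 0
    angles = np.empty(n)
    angles[0::2] = odd
    angles[1::2] = even
    return 2.0 * np.pi * angles / angles.sum()  # normalize total to 2*pi

# Example: one instance with n = 24 facets
theta = random_single_vertex_angles(24)
print(theta.sum(), theta[0::2].sum() - theta[1::2].sum())  # ~2*pi, ~0
```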
Examples of single-vertex origami diagrams:
(a) an origami diagram with n=24 and randomly generated angle widths around the center vertex;
(b) the corresponding pre-folded diagram for the diagram in (a);
(c) an origami diagram with n=24 and w=24;
(d) the corresponding pre-folded diagram for the diagram in (c).
§.§ Definition of quantities
§.§.§ Entropy as Logarithm of Total Number of Flat-Foldings
In statistical mechanics, a quantity called the partition function Z(β) works as a generating function for obtaining the expected values of various physical quantities, such as the energy, each variable, or sums of them, at a fixed temperature.
Its definition is the summation of the quantity exp(-β H) over combinations of variables included in the energy function H, namely,
Z(β)=∑_s_i,j=± 1∑_s_i',j'=± 1⋯exp(-β H({s})),
where the summation symbol over many variables, ∑_s_i,j=± 1∑_s_i',j'=± 1⋯, means that it is taken over all combinations of the values of the variables of the system.
The H discussed in this paper obviously has a minimum value of 0, at least for origami diagrams that satisfy Kawasaki's theorem. Therefore, the value of Z(β) in the limit β→∞, which is called the zero-temperature limit in physics, equals the total number of flat-foldings, i.e., a sum of the value 1 over all of them.
The logarithm of the partition function is related to the free energy F(β) through -β F(β)=log Z(β).
In particular, for an energy function whose minimum value is 0, the zero-temperature limit lim_β→∞log Z(β) equals the logarithm of the number of ground states and is also called the zero-temperature entropy or ground-state entropy.
The value of the ground-state entropy is numerically obtained by integrating the expected value of the energy over the inverse temperature, using the relation log Z(β)=Nlog 2 - ∫_0^β⟨ H ⟩_β' dβ', which follows from dlog Z/dβ = -⟨ H ⟩_β and log Z(0)=Nlog 2.
This paper describes the results of an approximate evaluation of this zero-temperature entropy using a numerical calculation, or probabilistic sampling method, based on the Markov chain Monte Carlo method, especially the replica exchange Monte Carlo method<cit.>.
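A minimal sketch of the integration step is given below, assuming that the average energy ⟨H⟩_β has already been measured on a grid of inverse temperatures (for example, by replica exchange Monte Carlo); the energy values used in the example are synthetic placeholders, not measured data.

```python
import numpy as np

def ground_state_entropy(betas, avg_energies, num_variables):
    """Estimate S_tot = log Z(beta -> infinity) via
    log Z(beta_max) = N*log(2) - integral_0^beta_max <H>_beta d(beta),
    valid when <H> at beta_max is already close to the minimum energy 0."""
    betas = np.asarray(betas)
    avg_energies = np.asarray(avg_energies)
    integral = np.trapz(avg_energies, betas)  # trapezoidal rule over the beta grid
    return num_variables * np.log(2.0) - integral

# Synthetic placeholder data: <H>_beta decaying toward 0 at large beta.
betas = np.linspace(0.0, 20.0, 201)
avg_E = 50.0 * np.exp(-1.5 * betas)           # stand-in for measured averages
print(ground_state_entropy(betas, avg_E, num_variables=100))
```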
In this paper, in order to discuss the average behavior over various origami diagrams, the zero-temperature entropy lim_β→∞log Z(β) of each instance is further averaged over many instances.
We represent each origami-diagram instance by the symbol Δ and write the zero-temperature entropy of a diagram Δ as S_tot^(Δ).
In addition, this quantity averaged over all generated Δ is represented as [S_tot]=(∑_Δ S_tot^(Δ)) / (∑_Δ1).
§.§.§ Number of facets sandwiched
If a certain facet k is sandwiched between two other facets i and j, the product s_i,ks_j,k takes the value τ_ikτ_kj=-J_(ik)(kj).
Hence, the number of facets which are sandwiched between i and j is obtained by calculating the following quantity,
n^(sdw)_i,j=∑_k1-J_(ik)(kj)s_i,ks_j,k/2,
where the running index k is taken over every facet k for which a variable s is assigned with both of the facets i and j that form the crease.
However, no variables are introduced for facet pairs that do not have a direct vertical relationship due to their overlapping positions.
In other words, a facet l that does not have a vertical relationship with either of the two facets i or j is not included in the sum ∑_k in Eq. (<ref>).
Therefore, in the scope of this study, facets that are sandwiched indirectly or hidden are not counted in the number.
After computing n^(sdw)_i,j for all creases (i,j) included in the diagram,
the maximum and minimum of n^(sdw)_i,j included in each flat folding are written as n_max, n_min.
In addition, for each value of n_max and n_min, we approximately enumerated (the logarithm of) the number of flat foldings having that value as the maximum or minimum of n^(sdw)_i,j, written S(n_max) and S(n_min) respectively, in the same way as S_tot.
Furthermore, these values are averaged over instances in the same way as S_tot, and are written as [S(n_max)] and [S(n_min)].
In Section <ref> below, they are written as [S(ν_max)] and [S(ν_min)] with ν_max=n_max/n and ν_min=n_min/n.
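A short sketch of the computation of n^(sdw)_i,j and of n_max and n_min for a single sampled configuration is given below; the mapping from each crease to the facets that carry a spin variable with both of its members is assumed to be supplied by the same geometric preprocessing mentioned earlier.

```python
def tau(i, j):
    return 1 if j > i else -1

def spin(s, i, j):
    return s[(i, j)] if i < j else s[(j, i)]

def crease_widths(s, creases, neighbors):
    """Crease width n_sdw(i, j) for every crease (i, j).

    s         : dict of layer-order spins keyed by (i, j) with i < j
    creases   : list of facet pairs (i, j) joined by a crease
    neighbors : dict mapping a crease (i, j) to the facets k that have a
                spin variable with both i and j (hypothetical preprocessing)
    """
    widths = {}
    for (i, j) in creases:
        n_sdw = 0
        for k in neighbors[(i, j)]:
            # facet k lies between i and j iff s_ik * s_jk = tau_ik * tau_kj
            if spin(s, i, k) * spin(s, j, k) == tau(i, k) * tau(k, j):
                n_sdw += 1
        widths[(i, j)] = n_sdw
    return widths

# For one folding: w = crease_widths(s, creases, neighbors)
# n_max, n_min = max(w.values()), min(w.values())
```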
The number of facets sandwiched between the two facets forming a crease is also called the crease width.
Awareness of this issue was introduced in <cit.>, and a research context called folding complexity has developed.
It is known that the problem of finding a flat-folding that minimizes the maximum crease width n_max in the stamp-folding problem, i.e., for a single-vertex origami diagram with uniform facets and an open boundary, is NP-hard.
In introducing [S(n_max)] or [S(n_min)], it must be mentioned that the largest and smallest attainable values of n_max or n_min differ from instance to instance.
There is a gap between the average taken only over instances for which a non-zero contribution to S(n_max) is confirmed and the frequency itself of obtaining instances that possess foldings whose maximum crease width is n_max.
If an instance does not have a folding whose maximum crease width is n_max, the corresponding number is 0 and its logarithm is -∞, which is not suitable for averaging.
To deal with this problem, the following two quantities are computed,
[S(n_max)]=lim_α→ 0[ log{α+exp(S(n_max))}],
[S(n_max)]=∑_Δ∈ D^(n_max)S^(Δ)(n_max) / ∑_Δ∈ D^(n_max)1,
where D^(n_max) is the set of instances that have foldings whose maximum crease width is n_max.
§.§.§ Overlap between Two Configurations
As an indicator of how the solution space corresponding to flat folding is embedded in the variable space expressed by the spin variables, we conducted two independent numerical experiments at sufficiently low temperatures and computed the overlap of the spin variables.
The configurations sampled from two independent numerical simulations at the same temperature are denoted by spin variables s_i,j and s'_i,j, respectively.
We calculate the following quantity for this pair of simulations,
Q=∑_i,j1-s_i,js'_i,j/2,
where the summation in Eq. (<ref>) is taken over all variables of the system composed of {s_i,j}.
Let the total number of variables be N and q=Q/N. If q=0, the two configurations are completely the same, and if q=1, they are completely reversed.
Also, when q ≃ 0.5, it means that about half of all variables are reversed, and the two configurations can be said to be almost uncorrelated or “unrelated”.
A normalized histogram h_β,n(Q) is obtained from the sequence of Qs sampled by a pair of numerical simulations with a certain value of (inverse) temperature β.
Using h_β,n(Q), the probability density distribution P_β(q) is derived as,
P_β(q)=∑_Q=0^N h_β,n(Q) χ_q(Q/N)/∑_Q=0^N h_β,n(Q),
where χ_q(Q/N) is an indicator function that returns the value 1 when Q/N=q and 0 when Q/N ≠ q.
This quantity provides information about how far apart the configurations sampled in a pair of simulations are from each other in variable space at a given temperature.
In the limit β→∞, it asymptotically reflects the structure of the embedding of the solutions in the configuration space.
If only one configuration, or a small number of configurations, completely satisfies the constraints, then at sufficiently low temperature P_β(q) exhibits sharp peaks near q=0 and q=1, considering that the physical model in question is symmetric under global inversion.
Furthermore, if there are other satisfying configurations in which the values of some variables differ, a peak similarly occurs in the region of q corresponding to that difference.
The average over instances is taken in the same way as for the entropies, namely [P_β(q)]=(∑_Δ P_β^(Δ)(q)) / (∑_Δ1).
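The accumulation of Q and of the normalized histogram underlying P_β(q) can be sketched as follows, assuming the two independent runs store their sampled configurations as dictionaries over the same set of variables.

```python
import numpy as np

def overlap_Q(s, s_prime):
    """Q = sum over variables of (1 - s*s')/2, i.e. the number of
    variables taking different values in the two configurations."""
    return sum((1 - s[key] * s_prime[key]) // 2 for key in s)

def overlap_distribution(samples_a, samples_b, num_variables):
    """Build the normalized histogram h(Q) from two independent runs at the
    same inverse temperature and return (q_values, P(q)).
    samples_a / samples_b are lists of spin dicts over the same variables."""
    counts = np.zeros(num_variables + 1)
    for s, s_prime in zip(samples_a, samples_b):
        counts[overlap_Q(s, s_prime)] += 1
    q_values = np.arange(num_variables + 1) / num_variables
    return q_values, counts / counts.sum()
```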
§ RESULTS
§.§ Average behavior of total numbers of foldings
The logarithm of the total number of flat-foldings in an origami diagram consisting of facets of uniform width is proportional to nlog n.
In contrast, in diagrams with random facets generated by the procedure explained in Section <ref>, the entropy corresponding to the total number of foldings exhibits the behavior shown in Fig. <ref>.
In Fig. <ref> the values of the entropy are, on average, proportional to n, which means that the dependence of the total number of flat-foldings on n is exponential.
This dependence is qualitatively different from the case with uniform facets.
For the exponential function of the form exp(γ nlog n), the value of γ can be read as 0.098 from the figure, even though the figure is the result of a numerical experiment and shows values averaged over various instances.
The value of the total entropy averaged over instances, [S_tot], is plotted against the inverse of the number of facets, 1/n, for n=18 (square), 24 (circle) and 36 (triangle).
Each point is overlaid with an error bar of the instance average itself and with the variance over instances.
The error bars are obtained by the bootstrap method and drawn as solid lines; however, they are smaller than the size of the marks.
The variances are drawn with dashed lines.
§.§.§ Distribution of the minimal and maximal numbers of sandwiched facets
Next, the results regarding the entropy of flat-foldings as a function of the maximum and minimum numbers of facets sandwiched at each crease, denoted n_max and n_min respectively, are described.
In Figure <ref> the horizontal axis is n_max or n_min divided by the total number of facets n,
represented as ν_max=n_max/n and ν_min=n_min/n respectively,
and the vertical axis is the logarithm of the total number of flat foldings for which n_max or n_min takes the value on the horizontal axis, [S(ν_max)] and [S(ν_min)] respectively, divided by n log n.
A curve is shown for n=18,24 and 36.
For reference, the same quantities for a single vertex diagram with uniform facet angle are shown in the inset.
As with [S_tot], the curves of [S(ν_max)] and [S(ν_min)] for the various sizes appear to collapse onto a master curve when divided by n log n.
The master curve is found in both cases with uniform and random facets.
However, those two exhibit qualitatively different behaviors.
In the case of uniform facets, the curve of S(n_max) increases monotonically as the value of n_max increases.
This suggests that various combinations of stacking orders are realized even among the facets sandwiched between the creases with the maximum width.
On the other hand, in the case of random facets, the value of S(ν_max) decreases slightly in the region where n_max is large.
In the uniform case, at n_max=n-2 the curve roughly takes its maximum value, S(ν_max) ≃ 0.3 × nlog n.
In the case of random angle widths, in contrast, [S(ν_max)] takes its maximum value, approximately [S(ν_max)]≃ 0.1 × nlog n, at ν_max≃ 0.3 and decreases once ν_max exceeds that point, as shown in Fig. <ref>.
Also regarding [S(ν_max)], this scaling behavior of the maximum value, including the value of the coefficient, is the same as that shown in Fig. <ref>, although the decrease is rather significant in this case because the fraction becomes smaller as n_max approaches its largest value.
Meanwhile, S(ν_min) and [S(ν_min)] have their contribution only at ν_min=0, both for origami diagrams with random and with uniform angle widths.
This is thought to be due to the fact that in a single-vertex diagram the array of facets is closed, that is, all facets have creases on both sides and none has an open end.
In the case of the exact stamp-folding problem, there obviously exist foldings in which the minimum value of the crease width is a finite positive value.
Note here that there is nothing unnatural about the curve of [S(ν_max)] having a finite number of foldings assigned to the position ν_max=0.
For example, if a random origami diagram contains two facets with fairly large angles that are not connected by a crease, it is possible to create a folding where one of the two is located at the top of the overall stacking and the other at the bottom.
In such a folding, two stacks may be formed between the top and bottom facets at a considerable distance from each other.
Due to the circumstances described in Section <ref>, in such a stacking there can be cases in which, at every crease, no other facet is sandwiched between the two facets that compose it.
§.§.§ Averaged Shape of Overlap Distribution
The results of this study are shown in Figure <ref>.
The contribution of the distribution is widely dispersed, with the ratio of the degree of overlap to the total number of variables q ranging from about 0.2 to 0.8.
While the shape of the overlap distribution for origami diagrams with a uniform angular width is Gaussian,
that for diagrams with random angular widths clearly has a different shape.
Instead, the contribution of [P(q)] is large even in regions where the absolute value of q is far from 0.
Therefore, it can be seen that the set of flat foldings is composed of stacks of facets whose vertical relationships are very different from each other.
As a consequence of the theory of replica symmetry breaking in statistical mechanics, the three-peak structure in the asymptotic shape of [P(q)] (at q=0.5 and symmetrical positions centered on it) implies that the average complexity class of the search problem is NP-complete<cit.>.
When the size dependence shown in Fig. <ref> is examined from this viewpoint, the shape found in 0.2<q<0.8 in the range up to n = 36 seems to approach the three-peak structure.
Meanwhile, the width of the error bar is also large.
Thus, careful observation is still required to conclude that the average-case computational complexity of searching flat foldings of random single-vertex origami diagrams is NP-complete.
§.§ Reduction of System Size for Computational Efficiency
In the physical model given by Eq. (<ref>), each term with the form (<ref>) in particular gives a relationship that should be satisfied in the ground state between variables respectively involved in each term.
Applying this relationship to a set of variables in (P) and rearranging them improves the efficiency of the numerical approximations to the total number of flat foldings and the overlap distributions seen above, by mapping the ground state to a system described by fewer variables.
In addition, this procedure is expected to aggregate variables whose values are determined uniquely each other into a single cluster and to reveal the combinatorial structure that is the essential cause of the problem.
Here, the results of the reduction of variables by the contraction via the equations (<ref>) are presented for origami diagram instances ranging from n=24 to 68.
For the origami diagram obtained using the method in Sec.<ref>, the variables are reduced as demonstrated in Sec. <ref>.
This reduced number of variables is represented as Ñ and the ratio R=Ñ/N is discussed for each instance.
Note that in many cases the reduction results in a collection of several independent components, each consisting of variables {s,C}.
Two variables s_i,j or C_l belonging to different components are never involved in the same product term.
For such cases let Ñ be the number of variables contained in the largest component in the collection.
The density plot of the distribution of the ratio D(R) for n = 24, 48, 68, from 465, 518, 319 instances respectively, is shown in Fig.<ref>.
In Fig.<ref>, for small n, for example n=24, the distribution has a unimodal peak roughly around 0.05<R<0.15.
However, in the case of a larger system size, another peak appears on the side with a smaller value of R.
As the system size further increases, the distribution becomes bimodal, with the height of two peaks swapping, and the peak with the smaller R value becoming dominant.
A similar change in the shape of D(R) also occurs for sets of instances generated by the second method, in which a minimum unit is introduced for the angular width.
As shown in Fig.<ref>, it is observed when the value of s is fixed and the values of w are increased.
From this observation, it is thought that when the number of terms with a product of four-variables like Eq.(<ref>) is large, D(R) has a unimodal shape with its peak in the region of relatively large values of R,
and as the number decreases the shape changes to one with a dominant peak on the small-R side via the bimodal shape.
However, if the angular width takes continuous random values for small n, the number of terms like Eq.(<ref>) in the corresponding cost function still remains almost zero, which does not mean that there is a large number of such terms.
For this point further consideration on the relationship to the number of the four-variables product is currently required.
Within the range of system sizes tested, the system size Ñ can be reduced to roughly 1/20 of the original system size.
Compared to this, in the diagram with a uniform angles width, we have to treat the system with n(n-1)/2 variables because there is no room for reducing the variables using the contraction procedure.
This suggests that the average computational amount for a set of random instances can behave quite differently compared to that for a uniform angular value, which is considered to correspond to the worst-case computational complexity.
It is also interesting from a computational complexity perspective.
§.§.§ Trying to obtain system-size dependence of each separated peak
In Fig. <ref>, it is found that the peak on the small-R side of D(R) gradually becomes more prominent as n becomes larger.
Although this may not be an accurate evaluation, an attempt to evaluate the asymptotics of this peak is made as shown in Fig.7.
The dispersion σ_R of the small-R side peak, meaning the width of the peak, is computed using only instances whose R are less than or equal to a cutoff.
The dispersion σ_R for the lower side peak in D(R) with various thresholding values.
While the cutoff is located between the two peaks, the slope of the variance change is relatively small, and when the cutoff is located on the small R peak, the slope of the change is large.
Using this behavior, we estimated the cutoff value at which the slope of the curve begins to increase again, as noted in the caption of Fig. 7.
The threshold value of R is estimated as Rt = 0.030, 0.037, 0.045 for n = 36, 48, 68, respectively.
These values show that it is not possible to exclude a sub-linear dependence of [Ñ] on [N], where [Ñ] and [N] are averaged over instances because the system size N differs among instances.
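For illustration, the cutoff procedure described here can be summarized by the following short Python sketch; the R values below are a purely illustrative random sample mimicking a bimodal D(R), not the measured instances.

import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-in for the measured ratios R = N~/N (a bimodal sample mimicking
# the two peaks of D(R)); the real analysis uses the instances behind Fig. 7.
R = np.concatenate([rng.normal(0.04, 0.01, 300), rng.normal(0.12, 0.03, 200)])
R = R[R > 0]

cutoffs = np.linspace(0.03, 0.20, 35)
sigma_R = np.array([R[R <= c].std() for c in cutoffs])

# R_t is read off from where the slope of sigma_R(cutoff) starts to increase again;
# here we simply inspect the curve.
for c, s in zip(cutoffs[::5], sigma_R[::5]):
    print(f"cutoff = {c:.3f}, sigma_R = {s:.4f}")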
§ SUMMARY
In this study, based on the physical model formulation, an approximate enumeration of the total number of foldings was performed, together with its decomposition into foldings with fixed maximum and minimum values of the number of facets between each crease, i.e., the crease width.
The value of the logarithm of the total number of configurations, computed by the replica exchange Monte Carlo method, is 0.098 × nlog n, rather smaller than that for origami diagrams with uniform angle width.
Furthermore, the decomposed entropies with respect to the maximum number of facets n_max sandwiched by each crease behave qualitatively differently for origami diagrams with random and uniform angle widths.
This comparison implies that the constraints on mutual penetration of facets sandwiched within the crease with the maximum width limit the diversity of combinations of their stacking orders.
In addition, we approach the average computational complexity of the problem of enumerating the number of flat folding of the single vertex origami diagram from two perspectives: observation of the behavior of overlap distribution functions from the viewpoint of replica symmetry breaking in spin glass theory, and reduction of system size based on chain relationships that fix the values of variables.
Given the range of system sizes observed in this study, it has not yet been possible to draw conclusions from either approach or about their mutual consistency. Nevertheless, research in this direction remains a promising future topic.
§ APPENDIX
§.§ Contraction of spin variables
Here, the method of variable reduction is introduced based on the example of the origami diagram shown in Fig.<ref>.
First, from the diagram of this figure, the following cost function is obtained by the modeling described in Section <ref>.
In a term of the same form as Eq.(<ref>), the combination of the involved spin variables that makes the term vanish is uniquely determined, up to a total inversion of both variables.
Therefore, we can translate the constraint from the term like Eq.(<ref>) into an allowed relationship between the spin variables.
By using such translations, several spin variables that appear in the energy function can be collectively re-expressed as a single variable,
and the energy function can then be rewritten as a combination of fewer binary variables.
An example of a single-vertex origami diagram with n=6. α, β, γ, δ at the bottom of the figure are the angle values of the facets around the center point.
The two colored creases in (a) overlap when the diagram is flat-folded as in (b).
∑_(ijkl)E^(q)_ijkl =1+s_24s_25s_34s_35/2,
∑_(ij,k)E^(i)_ij,k =1-s_13s_23/2+1-s_14s_24/2+1-s_12s_13/2+1-s_26s_36/2
+1-s_14s_15/2+1-s_46s_56/2+1-s_15s_16/2+1-s_25s_26/2
+1-s_35s_36/2+1-s_45s_46/2,
∑_(ijk)E^(c)_ijk =1+s_12s_23-s_23s_13-s_13s_12/4+⋯+1+s_35s_56-s_56s_36-s_36s_35/4,
where the detail of Eq.(<ref>) is written as Eq.(<ref>)-(<ref>), described later.
By applying the rewriting process based on the product of two spin variables, mentioned at the beginning of this section, the 2nd to 11th terms of the above cost function (<ref>) each lead to a relationship between two variables.
The relationships are eventually summarized as follows,
C_1=s_12=s_13=s_23,
C_2=s_14=s_24=s_15=s_16,
C_3=s_26=s_36=s_25=s_35,
C_4=s_45=s_46=s_56.
With the cluster variables introduced above, the terms of Eq.(<ref>) are rewritten as
1+s_12s_25-s_25s_15-s_15s_12/4 =1+C_1C_3-C_3C_2-C_2C_1/4,
1+s_12s_26-s_26s_16-s_16s_12/4 =1+C_1C_3-C_3C_2-C_2C_1/4,
1+s_34s_45-s_45s_35-s_35s_34/4 =1+s_34C_4-C_4C_3-C_3s_34/4,
1+s_34s_46-s_46s_36-s_36s_34/4 =1+s_34C_4-C_4C_3-C_3s_34/4,
1+s_12s_23-s_23s_13-s_13s_12/4 =1+C_1C_1-C_1C_1-C_1C_1/4=0
1+s_12s_24-s_24s_14-s_14s_12/4 =1+C_1C_2-C_2C_2-C_2C_1/4=0
1+s_13s_34-s_34s_14-s_14s_13/4 =1+C_1s_34-s_34C_2-C_2C_1/4,
1+s_13s_35-s_35s_15-s_15s_13/4 =1+C_1C_3-C_3C_2-C_2C_1/4
1+s_13s_36-s_36s_16-s_16s_13/4 =1+C_1C_3-C_3C_2-C_2C_1/4,
1+s_14s_45-s_45s_15-s_15s_14/4 =1+C_2C_4-C_4C_2-C_2C_2/4=0,
1+s_14s_46-s_46s_16-s_16s_14/4 =1+C_2C_4-C_4C_2-C_2C_2/4=0,
1+s_23s_35-s_35s_25-s_25s_23/4 =1+C_1C_3-C_3C_2-C_2C_1/4,
1+s_23s_36-s_36s_26-s_26s_23/4 =1+C_1C_3-C_3C_2-C_2C_1/4,
1+s_24s_45-s_45s_25-s_25s_24/4 =1+C_2C_4-C_4C_3-C_3C_2/4,
1+s_24s_46-s_46s_26-s_26s_24/4 =1+C_2C_4-C_4C_3-C_3C_2/4,
1+s_15s_56-s_56s_16-s_16s_15/4 =1+C_2C_4-C_4C_2-C_2C_2/4=0,
1+s_45s_56-s_56s_46-s_46s_45/4 =1+C_4C_4-C_4C_4-C_4C_4/4=0,
1+s_25s_56-s_56s_26-s_26s_25/4 =1+C_3C_4-C_4C_3-C_3C_3/4=0,
1+s_35s_56-s_56s_36-s_36s_35/4 =1+C_3C_4-C_4C_3-C_3C_3/4=0.
In particular, we focus on the rewriting of the first term, Eq.(<ref>), which is
1+s_24s_25s_34s_35/2=1+C_2C_3s_34C_3/2=1+C_2s_34/2,
whose form is again a product of two spin variables, similar to Eq.(<ref>).
The reproduction of the term whose form is similar to Eq.(<ref>) induces the recursive application of the rewriting process.
Here, the following relationship,
s_34=-C_2,
follows from Eq.(<ref>).
In addition, further rewritings and relationships are induced from Eq.(<ref>) as follows,
1-C_1C_2+C_2C_2-C_2C_1/4 =1-C_1C_2/2,
1-C_2C_4-C_4C_3+C_3C_2/4+
1+C_2C_4-C_4C_3-C_3C_2/4 =1-C_4C_3/2,
which results in
C_1=C_2,
C_3=C_4,
where Eqs(<ref>) and (<ref>) are brought from Eqs.(<ref>) and (<ref>)+(<ref>)+(<ref>)+(<ref>), respectively.
The method of variable reduction explained here utilizes the relationship between variables, meaning the vertical relationship of facets, that must be satisfied in the ground state (state of zero energy) of the physical model.
The concept that gives a cascading chain of decisions on the vertical relationship of facets is also studied in mathematics and information science, and is called a forcing set.
Algorithms for finding forcing sets for one-dimensional origami<cit.> and Miura-ori diagrams<cit.> have been proposed, however it appears that no algorithms exist for general origami at present.
Based on a series of studies on this subject, the original definition of a forcing set is considered to be the subset of minimal assignments of the crease patterns that determine the assignments for all creases included in the origami diagram.
It is known that the problem of giving an entire forcing set can be an NP-complete problem.
However, a chain of decisions for stacking relationships can be given for local relationships among facets.
Therefore, it is an interesting application to make the determination of the possibility or enumeration of flat-folding more efficient by partially utilizing the subsets of chained decisions contained in forcing sets.
The method demonstrated in this section is expected to provide insight into the derivation of forcing sets for general origami.
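As a cross-check of the clustering step above, the two-spin constraints of Eq.(<ref>) can be processed with a simple union-find routine. The sketch below only handles plain equality constraints; relations carrying a relative sign, such as s_34=-C_2, and the recursive four-variable step are not implemented.

# Sketch: cluster spin variables connected by two-spin constraints (1 - s_i s_j)/2 = 0,
# i.e. s_i = s_j in the ground state, using a union-find structure.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, x, y):
    rx, ry = find(parent, x), find(parent, y)
    if rx != ry:
        parent[ry] = rx

# Pair constraints read off from the two-spin terms of the n = 6 example
# (labels follow s_{ij}).
pairs = [("13", "23"), ("14", "24"), ("12", "13"), ("26", "36"),
         ("14", "15"), ("46", "56"), ("15", "16"), ("25", "26"),
         ("35", "36"), ("45", "46")]

spins = sorted({x for p in pairs for x in p})
parent = {s: s for s in spins}
for a, b in pairs:
    union(parent, a, b)

clusters = {}
for s in spins:
    clusters.setdefault(find(parent, s), []).append(s)

for members in clusters.values():
    print("cluster:", members)
# The number of clusters (plus spins not appearing in any two-spin term, such as s_34)
# gives the reduced variable count N~ used to define R = N~/N.

Running the sketch reproduces the four clusters C_1, C_2, C_3, C_4 listed in Eqs.(<ref>)-(<ref>).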
§.§ Numerical method for approximately estimating the number of folding
The numerical simulations performed for this study are based on the replica-exchange Monte Carlo method and the multiple histogram reweighting technique.
Each variable s_i,j is updated using the Metropolis rule. That is, the acceptance probability for a local update, p_ud, is given by
p_ud=min{1,exp(-β (E'-E))},
where β is the physical inverse temperature, and E and E' are the values of the Hamiltonian (<ref>) for the configurations before and after the update, respectively.
The exchange of replicas with indices l and l+1 is performed using the Metropolis rule with exchange probability p_exch, which is given as
p_exch=min{1,exp((β^(l)-β^(l+1))(H^(l+1)-H^(l)))}.
The histograms sampled with each replica, h_l(E,Q), are integrated to estimate the number of states W(E,Q) with the multiple histogram reweighting method.
W(E,Q) is obtained via the equation
W(E,Q)=∑_lh_l(E,Q)/∑_l(ω_l(E,Q)/z_l∑_E=0^n∑_Q=0^nh_l(E,Q)),
where ω_l(E,Q)=exp(-β_l E), and z_l=∑_E=0^n∑_Q=0^nW(E,Q)ω_l(E,Q).
Eq. (<ref>) was originally derived in <cit.>.
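For illustration, a compact Python sketch of the Metropolis and replica-exchange steps is given below. The pairwise energy function is a generic placeholder standing in for Hamiltonian (<ref>), and the collection of the histograms h_l(E,Q) and the reweighting step are omitted.

import numpy as np

rng = np.random.default_rng(1)
n_spins, n_replicas, n_sweeps = 30, 8, 2000
betas = np.linspace(0.1, 2.0, n_replicas)          # inverse temperatures beta^(l)
J = rng.normal(size=(n_spins, n_spins)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)

def energy(s):
    # Placeholder pairwise energy standing in for the flat-folding Hamiltonian.
    return -0.5 * s @ J @ s

spins = rng.choice([-1, 1], size=(n_replicas, n_spins))
E = np.array([energy(s) for s in spins])

for sweep in range(n_sweeps):
    # Metropolis single-variable updates: acceptance min{1, exp(-beta (E'-E))}.
    for l in range(n_replicas):
        i = rng.integers(n_spins)
        new = spins[l].copy(); new[i] *= -1
        dE = energy(new) - E[l]
        if dE <= 0 or rng.random() < np.exp(-betas[l] * dE):
            spins[l], E[l] = new, E[l] + dE
    # Replica exchange between neighbours l and l+1:
    # acceptance min{1, exp((beta^(l)-beta^(l+1))(E^(l+1)-E^(l)))}.
    for l in range(n_replicas - 1):
        arg = (betas[l] - betas[l + 1]) * (E[l + 1] - E[l])
        if arg >= 0 or rng.random() < np.exp(arg):
            spins[[l, l + 1]] = spins[[l + 1, l]]
            E[[l, l + 1]] = E[[l + 1, l]]

print("final energies per replica:", E)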
|
http://arxiv.org/abs/2409.02594v1 | 20240904102040 | A Supersymmetric Suspicion From Accelerating Black Hole Shadows | [
"L. Chakhchi",
"H. El Moumni",
"K. Masmar"
] | hep-th | [
"hep-th"
] |
lahoucine.chakhchi@edu.uiz.ac.ma
h.elmoumni@uiz.ac.ma (Corresponding author)
LPTHE, Physics Department, Faculty of Sciences, Ibnou Zohr University, Agadir, Morocco.
karima.masmar@gmail.com
LPTHE, Physics Department, Faculty of Sciences, Ibnou Zohr University, Agadir, Morocco.
Laboratory of High Energy Physics and Condensed Matter
HASSAN II University, Faculty of Sciences Ain Chock, Casablanca, Morocco.
§ ABSTRACT
In light of the Event Horizon Telescope (EHT) images of the supermassive black holes Sgr A^⋆ and M87^⋆, we explore a potential supersymmetry suspicion within the observational data. Specifically, we investigate the shadow of a supersymmetric accelerating black hole and compare our findings with observed quantities such as the angular diameter 𝒟 and the fractional deviation δ.
Our analysis reveals a significant alignment between the calculated quantities and the EHT collaboration measurements. This alignment suggests that the features of the black hole shadows observed by the EHT exhibit characteristics consistent with the supersymmetry framework.
Our results provide compelling evidence for supersymmetry from a gravitational perspective, which remains absent from the particle physics viewpoint till now.
Keywords:
Supersymmetry, EHT, Observational data,
Shadows, Accelerating black holes.
A Supersymmetric Suspicion From Accelerating Black Hole Shadows
K. Masmar .
September 9, 2024
==================================================================================
Over a century ago, Albert Einstein revolutionized our understanding of the universe by introducing his groundbreaking theory of gravitation, i.e. general relativity. This remarkable framework has since provided profound insights into a myriad of cosmic phenomena, shaping our comprehension of planetary orbits, the bending of starlight, the vast expanse of cosmology, and the existence of exotic celestial objects such as white dwarfs, neutron stars, and especially, black holes.
The true marvel of Einstein's theory was not fully realized until 2019 when humanity bore witness to a historic moment—the first direct visual evidence of these enigmatic entities, thanks to the collaborative efforts of the Event Horizon Telescope (EHT) <cit.>. Through the EHT's groundbreaking observations, we could gaze upon the awe-inspiring image of the supermassive black hole M87^⋆ nestled at the heart of the elliptical galaxy M87.
Building upon this monumental achievement, the EHT continued to push the boundaries of astrophysical exploration. In 2022, the EHT once again captivated the world by capturing the elusive black hole residing at the center of our very own Milky Way galaxy, known as Sgr A^⋆ <cit.>. These extraordinary accomplishments not only validate Einstein's profound insights into the nature of spacetime but also open new frontiers in our quest to unravel the mysteries of the cosmos.
The remarkable images captured by the EHT have ushered in a new era in astrophysics and gravitational research. As visual evidence often speaks louder than equations alone, these images have propelled black holes into the spotlight, positioning them as invaluable laboratories for testing the principles of general relativity, as well as theories involving modified and quantum gravity.
Recent literature has extensively explored the implications of gravitational wave measurements for alternative theories of gravity with extra dimensions of spacetime
<cit.>. Similarly, in-depth discussion exists regarding the shadow measurements from Sgr A^⋆ and M87^⋆, by Tang et al. <cit.>, where the black hole shadow observations obtained from the EHT were utilized to probe the existence of extra dimensions, leading to significant constraints on their size. These findings highlight the transformative impact of EHT's pictures on our understanding of the fundamental nature of the universe.
It is widely understood that the geometry of spacetime is encapsulated in the metric tensor. In this sense, the original C metric, which belongs to a class of exact solutions of the Einstein field equations, was discovered by Hermann Weyl in 1917 <cit.>.
This solution can be understood as a pair of black holes accelerating apart from each other <cit.>. Afterwards, in 1976, the Plebański-Demiański generalization extended the C-metric black hole solutions to incorporate rotation and the cosmological constant <cit.>.
Investigations of the C metric have a long history in the literature, and the thermodynamic phase structure of accelerating black holes has been formulated for small acceleration in AdS spacetime in recent years <cit.>, with special emphasis on the supersymmetric black hole case in <cit.>, where the authors identify a complex locus of supersymmetric and non-extremal solutions, defined through an analytic continuation of the parameters, upon which they obtain a simple expression for the on-shell action. Supersymmetric black holes in AdS spacetime are inherently interesting for the AdS/CFT correspondence, despite the fact that such solutions are rare finds <cit.>.
In addition to gravity, our universe is governed by three other fundamental interactions, which are described by the Standard Model of particle physics. Supersymmetry (SUSY) offers a natural foundation for unifying gravity with such fundamental interactions and aids in the resolution of the high-energy physics hierarchy problem.
Furthermore, gravitational backgrounds that preserve supersymmetry in supergravity theories are crucial to the development of string/M-theory, flux compactifications, and the AdS/CFT correspondence.
Although supersymmetry is one of the most promising frameworks for theories beyond the Standard Model, direct experimental support in nature is still lacking. In particular, from the particle physics perspective, since no significant excess is observed in the CMS and ATLAS detectors in squark searches in proton-proton collisions at
√(s)=13 TeV, with 95% confidence-level exclusion limits ruling out top squark masses up to 1150 GeV <cit.>, the remaining possible hope for supersymmetry is the gravitational route.
With a multitude of motivations driving our exploration and the potential for shedding light on the supersymmetry theory through observations by the Event Horizon Telescope (EHT), our proposal entails a comprehensive investigation into the shadow of a charged, accelerating black hole with a cosmological constant and possessing angular momentum.
The minimal D=4, 𝒩= 2 gauged supergravity solution is described by a bulk action given in <cit.> by
S_bulk=1/4π G_(4)∫ dx^4 √(-g)(R-6/ℓ^2-F^2).
In which F=dA represents the Maxwell field strength, while ℓ denotes the AdS (Anti-de Sitter) radius.
The line element of the accelerating black hole with a cosmological constant reads in Boyer–Lindquist coordinates as <cit.>
ds^2 = 1/H^2[-Q/Σ(1/κdt-χ dϕ)^2+Σ/Q dr^2+Σ/P dθ^2
+ P/Σ(χ/κ dt-(r^2+a^2)sin^2θ dϕ)^2].
where the involved functions are given by
χ = asin^2θ, Σ(r,θ) =r^2+a^2cos^2θ, H(r,θ) =1-𝒜 r cosθ,
P(θ) = 1-2𝒜 m cosθ + (𝒜^2(a^2+e^2+g^2)-a^2/ℓ^2)cos^2θ ,
Q(r) = (r^2-2mr+a^2+e^2+g^2)(1-𝒜^2r^2)+ r^2/ℓ^2(a^2+r^2),
and the gauge field is obtained to be
A = A_tdt+A_ϕ dϕ
= -er/Σ(1/κdt-asin^2θ dϕ)+qcosθ/Σ(a/κdt-(r^2+a^2)dϕ).
Such a solution is parameterized by five quantities m, e, g, a and 𝒜, which stand for the mass, electric charge, magnetic charge, angular momentum and acceleration parameter, respectively. In addition, ℓ is the AdS radius and κ > 0 is a trivial constant that can be absorbed in a rescaling of the time coordinate; it helps to normalize the Killing vector ∂_t in order to obtain a first law of thermodynamics.
Adopting the strategy of <cit.>, it is possible to derive additional conditions on the parameters required for the solution to be supersymmetric, based on the Dirac spinor and the Killing spinor equation. Indeed, for a nonvanishing acceleration parameter 𝒜≠ 0 and for ℓ=1, the constraints read as
<cit.>
g=𝒜 m,
0=𝒜^2(e^2+g^2)(Ξ+a^2)-(g-a𝒜 e)^2,
where, Ξ≡ 1+𝒜^2(a^2+e^2+g^2)- a^2.
Such constraints generate a (e,a,𝒜) moduli space depicted in the top panel of Fig.<ref>.
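A minimal numerical sketch of how these constraints can be handled in practice is given below: for a chosen point of the (e, a, 𝒜) moduli space, constraint (<ref>) is solved for the magnetic charge g, and m then follows from g=𝒜 m. The point (e, a, 𝒜)=(0.2, 0.3, 0.1) and the root bracket are illustrative assumptions, not values taken from the figure.

import numpy as np
from scipy.optimize import brentq

def susy_constraint(g, e, a, A):
    # 0 = A^2 (e^2+g^2)(Xi + a^2) - (g - a A e)^2, with Xi = 1 + A^2(a^2+e^2+g^2) - a^2 (l = 1).
    Xi = 1 + A**2 * (a**2 + e**2 + g**2) - a**2
    return A**2 * (e**2 + g**2) * (Xi + a**2) - (g - a * A * e)**2

# Illustrative point of the (e, a, A) moduli space (not taken from the paper).
e, a, A = 0.2, 0.3, 0.1

# Solve the constraint for the magnetic charge g; the mass follows from g = A m.
g = brentq(susy_constraint, 1e-6, 10.0, args=(e, a, A))
m = g / A
print(f"g = {g:.4f}, m = {m:.4f}")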
Exploring the shadow of such a black hole can unveil intricate details regarding the features of strong gravitational phenomena in the vicinity of the black hole horizon, potentially revealing a potential fingerprint of supersymmetry characteristics within the observational data obtained by the Event Horizon Telescope (EHT).
In this sense, studying the geodesics and orbits around the accelerating supersymmetric black hole involves the application of the Hamilton-Jacobi equations. By considering the tetrad components and the four-momentum, we can derive the coordinates describing the apparent displacement along both perpendicular and parallel axes to the projected axis of the black hole's symmetry. Utilizing a zero angular momentum observer (ZAMO) <cit.> positioned at (r_o,θ_o), we parametrically determine the silhouette of the black hole shadow within the constraints of Eq.(<ref>), as illustrated in the bottom panel of Fig.<ref>.
This figure reveals that the size of the supersymmetric shadow cast decreases proportionally with the magnitude of the acceleration parameter 𝒜. Furthermore, a precise consideration of the supersymmetric constraint in Eq.(<ref>) unveils an additional influence on the shape of the black hole shadow by altering the rotation parameter a and the black hole electric charge e in conjunction with the acceleration 𝒜. This interaction justifies the appearance of a D-shaped shadow for small values of 𝒜.
It is inherent to anticipate that black hole parameters can be constrained through shadow analysis, as the shape and size of shadows directly correlate with these parameters. In seeking potential hints of supersymmetry, we turn our attention to key shadow observables, specifically the angular diameter and fractional deviation. By exploiting these quantities, we aim to unveil any indications of supersymmetric behavior. Subsequently, we juxtapose our derived results with observational data from the Event Horizon Telescope, focusing on measurements pertaining to M87^⋆ and Sgr A^⋆.
In our investigation, we rigorously consider the angular diameter 𝒟 of the black hole shadow as the key observable, initially defined in <cit.>. Additionally, the Event Horizon Telescope (EHT) collaboration introduced the fractional deviation δ to quantify the deviation between the infrared shadow radius and the Schwarzschild shadow radius <cit.>.
According to reports from the Event Horizon Telescope (EHT) collaboration <cit.>, the supermassive black hole M87^⋆ at the center of the galaxy Messier 87 (M87) exhibits an angular diameter of 𝒟=42 ± 3 μ as, accompanied by a fractional deviation evaluated to δ=-0.01_-0.17^+0.17. Additionally, assuming the axis of rotation aligns with the jet axis, the inclination angle is estimated to be 17^∘. The collaboration also reports the source's mass as (6.5± 0.7)× 10^9M_⊙ and its measured distance as 16.8± 0.8Mpc.
Furthermore, the observed emission ring of Sgr A^⋆, as reported by the Event Horizon Telescope collaboration <cit.>, has an angular diameter of 𝒟=51.8± 2.3 μ as. In contrast, the angular diameter of its shadow is estimated at 48.7± 7 μ as. Collaborative efforts, such as those from the Keck observatory team, provide estimates for the mass m=(3.975± 0.058± 0.026)× 10^6 M_⊙ and distance d=(7959±59±32) pc of Sgr A^⋆ by leaving the redshift parameter free.
Alternatively, the distance d=(7935±50) pc and mass m=(3.951± 0.047)× 10^6 M_⊙ are obtained by setting the redshift parameter to unity <cit.>.
Similarly, collaborations involving the Very Large Telescope and the GRAVITY interferometer (VLTI) propose mass and distance estimates for Sgr A^⋆, yielding values of m=(4.261±0.012)× 10^6 M_⊙ and d=(8246.7±9.3) pc <cit.>. Adjustments for optical aberrations lead to slightly altered values of m=(4.297±0.012±0.040)× 10^6 M_⊙ and d=8277±9±33 pc.
Moreover, based on comparisons between the observed image of Sgr A^⋆ and numerical simulation models, the inclination angle i is inferred to exceed 50^∘. A specific value of i≃ 134^∘ (or equivalently 46^∘) is adopted to calculate the theoretical angular diameter of Sgr A^⋆ <cit.>. Additionally, the fractional deviation δ obtained using the eht-img algorithm <cit.> is δ=-0.08_-0.09^+0.09 according to VLTI measurements, and δ=-0.04_-0.10^+0.09 within the Keck estimations.
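For orientation, the Schwarzschild reference values entering these observables can be reproduced in a few lines; the measured diameters used below are the EHT/Keck numbers quoted above, and the simple definition δ = d_meas/d_Schw - 1 adopted here only roughly tracks the calibrated EHT values.

import numpy as np

G, c = 6.674e-11, 2.998e8                    # SI units
Msun, Mpc, pc = 1.989e30, 3.086e22, 3.086e16
rad_to_muas = 180 / np.pi * 3600 * 1e6

def schwarzschild_shadow_diameter(M_kg, d_m):
    # Angular diameter of the critical curve of a Schwarzschild black hole:
    # theta = 2 * 3*sqrt(3) * G M / (c^2 d).
    return 2 * 3 * np.sqrt(3) * G * M_kg / (c**2 * d_m) * rad_to_muas

# M87*: M = 6.5e9 Msun, d = 16.8 Mpc  ->  roughly 40 muas, compared with 42 +/- 3 muas.
D_m87 = schwarzschild_shadow_diameter(6.5e9 * Msun, 16.8 * Mpc)

# Sgr A* (Keck values): M = 3.975e6 Msun, d = 7959 pc.
D_sgra = schwarzschild_shadow_diameter(3.975e6 * Msun, 7959 * pc)

# Naive fractional deviation delta = d_measured / d_Schwarzschild - 1.
print(f"M87*:   D_Schw = {D_m87:.1f} muas, delta(42 muas)   = {42/D_m87 - 1:+.3f}")
print(f"Sgr A*: D_Schw = {D_sgra:.1f} muas, delta(48.7 muas) = {48.7/D_sgra - 1:+.3f}")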
In Fig.<ref>, we present the contours of the angular diameter for the M87^⋆ and Sgr A^⋆ black holes across different parameter planes: (charge, rotation), (acceleration, rotation), and (acceleration, charge). Meanwhile, Fig.<ref> illustrates the contours of the fractional deviation.
From both figures, one can notice that the observable δ presents a behavior similar to that of the angular diameter 𝒟. Furthermore,
the supersymmetric black hole solution aligns well with the observational data of M87^⋆ and Sgr A^⋆ illustrated by the red contours. This alignment suggests that the features of the black hole shadows, as observed by the EHT, exhibit characteristics consistent with supersymmetry predictions. The concordance between the theoretical models and empirical data highlights the potential presence of supersymmetry imprints in the shadows of these black holes, offering compelling evidence of supersymmetric phenomena in astrophysical observations. This stands in contrast to traditional particle physics experiments, which have so far failed to reveal such evidence.
The existence and nature of supersymmetry are pivotal open questions in physics, garnering significant attention, particularly in the context of quantizing gravity, unification theories, and string theory. Our conclusion, suggesting the first supersymmetric imprints from the EHT observations, presents an exciting avenue for further verification through additional observations.
This new result underscores the burgeoning importance of black hole theory and phenomenology, especially given the rapid advancements in observational techniques. As these observations continue to develop, the study of black holes is poised to remain a vibrant and critical field of research, potentially offering deeper insights into the fundamental principles of our universe.
|
http://arxiv.org/abs/2409.03498v1 | 20240905131109 | Cosmic superstrings, metastable strings and ultralight primordial black holes: from NANOGrav to LIGO and beyond | [
"Satyabrata Datta",
"Rome Samanta"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-ph"
] |
amisatyabrata703@gmail.com
samanta@na.infn.it
§ ABSTRACT
While topologically stable cosmic strings are disfavored by the recent observation of nHz stochastic gravitational waves (GW) by Pulsar Timing Arrays (PTA), e.g., NANOGrav, cosmic metastable strings and superstrings are not. However, because the gravitational waves from all classes of strings generally span a wide range of frequencies, they contradict LIGO's non-observation of stochastic gravitational waves at the f∼ 25 Hz band for a substantial string-parameter space favoured by the PTA data. Suppose ultralight primordial black holes (M_BH<10^9 g) existed in the early universe. In this case, they reduce the amplitude of the GWs at higher frequencies by providing an early matter-dominated phase, alleviating the tension between LIGO observation and PTA data. We show that the recent PTA data complemented by future LIGO-Virgo-KAGRA (LVK) runs plus detectors such as LISA and ET would be able to pin down the properties of, and guide further search strategies for, such ultralight primordial black holes, which are otherwise fairly elusive as they evaporate in the early universe by Hawking radiation.
Cosmic superstrings, metastable strings and ultralight primordial black holes: from NANOGrav to LIGO and beyond
Rome Samanta
===============================================================================================================
§ INTRODUCTION
Ultralight primordial black holes (PBHs) <cit.> with initial mass M_ BH≲10^8 g are elusive objects as they evaporate in the early universe; therefore, they cannot be constrained by astrophysical means like heavier black holes <cit.>. Nonetheless, if they existed abundantly and dominated the energy density in the early universe, plenty of direct or indirect observable signatures are achievable in the form of gravitational radiation. For instance, they emit gravitons to constitute a high-frequency gravitational wave background <cit.> and produce density fluctuations to induce a GW background testable in the planned GW detectors <cit.>. In addition, a PBH domination imprints the spectrum of the stochastic gravitational wave background (SGWB) originating from other independent sources. Such imprints on the GWs could be probed with the spectral shape reconstruction techniques <cit.> planned for the GW detectors. Therefore, a combined study of the GW spectrum associated with the ultralight PBHs–the GW spectrum obtained directly from PBHs plus inquiring into the imprints of a PBH-dominated phase on an independent GW spectrum could be a riveting way to understand the properties of ultralight PBHs. The latter case is the central theme of this article. We study imprints of PBHs on the GW background produced by cosmic superstrings <cit.> and metastable strings <cit.>[Effects of PBHs on GWs from stable cosmic strings have been recently explored in Refs.<cit.>]. We classify the other GW backgrounds obtained directly from PBH evaporation or their dynamics in the early universe as complementary probes that could be envisaged to distinguish the effect of PBH domination on GWs from cosmic strings from any other early matter domination (EMD) scenarios, see, e.g., <cit.>.
The motivation to consider the mentioned classes of strings is twofold. First, because the ultralight PBHs evaporate at early times, before the Big Bang Nucleosynthesis (BBN) <cit.>, they generically affect any propagating GW spectrum during the PBH-domination phase at higher frequencies. Cosmic strings radiate GWs spanning a wide range of frequencies; generally with a strong scale-invariant amplitude at higher frequencies. Therefore, ultralight PBHs leave possibly the cleanest detectable imprints on such GW spectrum. In addition, a wide-spanning GW signal has the following advantage: if a measurement at a certain frequency band fixes the parameter space describing the GW source, signals at other frequency bands get automatically constrained. This redirects us to the second motivation. Cosmic superstrings and metastable strings are speculated to be among the possible sources of nHz SGWB recently discovered by the Pulsar Timing Arrays (PTA) <cit.>. Unlike stable cosmic strings <cit.> described only by the parameter: Gμ, where G is the Newton's constant and μ∼ v_Φ^2 is the string tension with v_Φ being the vacuum expectation value of the scalar field describing the strings <cit.>, cosmic superstrings and metastable cosmic strings fit the data better <cit.> because the last two being a two-parameter string model. In addition to Gμ, the inter-commutation probability P for superstrings and the parameter κ describing the separation between the Grand Unified Theory (GUT) scale and the string scale for metastable strings provide additional degrees of freedom to match the required GW amplitude and spectral slope at nHz frequencies. On the other hand, the high-frequency scale-invariant behaviour of the GW spectrum remains similar for all string classes. Fitting PTA data with GWs from superstrings and metastable strings predicts extremely strong scale-invariant amplitude at higher frequencies if the universe undergoes standard cosmological expansion. In fact, the high-frequency amplitudes are so strong that a significant string parameter space consistent with the PTA data gets ruled out by the LIGO-O3 run which did not observe any SGWB <cit.>. This puts superstring and metastable string interpretation to the PTA data perhaps in jeopardy unless an additional GW contribution from supermassive black holes (SMBH) is considered to relax the tension somewhat <cit.>. A PBH-dominated phase in the early universe, however, can cause a characteristic fall-off of the spectrum at higher frequencies, alleviating the tension with the PTA and LIGO completely. Besides, because the PTA data constrains the string parameters and the PBH mass is bounded from above so that they evaporate before BBN, even though reduced, the predicted overall signal strength could be pretty strong.
Focusing exclusively on the parameter range of superstrings and metastable strings favoured by the PTA data, and considering non-rotating PBHs with monochromatic initial mass, we find that PBHs with an initial mass within the range M_ BH∈[10^6g,10^8 g] would imprint the GWs at higher frequencies so that they evade the LIGO bound. We broadly classify such viable signals into two classes: whether or not the next LIGO run (LVK-Design) <cit.> could probe the PBH-imprinted GW signal obtained from superstrings and metastable strings. We find that for most of the mentioned mass range, PBHs can produce signals at LVK-D while being consistent with the PTA data. Heavier PBHs with an initial mass around 10^8 g reduce the GW amplitude such that the signals fall short of the sensitivity reach of LVK-D, however strong enough to produce a Signal-to-Noise-Ratio at ET <cit.> as large as ∼ 10^5.
In the companion theory paper by NANOGrav collaboration <cit.>, the effect of an EMD on GWs from superstrings and metastable strings has been mentioned to evade the LIGO-O3 bound. Therefore, root-idea-wise, this article does not offer anything new. We rather report if the PBHs provide such EMD, the scenario becomes pretty distinct compared to any other EMD, e.g., provided by a scalar field. This is because, first, the PBHs provide additional GW signatures. For the obtained mass range M_ BH∈[10^6g,10^8 g], although the GW in the form of gravitons appears at extremely high frequency, GWs, e.g., induced by PBH density fluctuation appear in the detectable range. It is not trivial to reconstruct such a combined GW signal by any other means. Second, lately, there has been an extensive effort to understand the effects of ultralight PBHs on Beyond the Standard Model (BSM) physics such as particle dark matter (DM) production and baryon asymmetry of the universe (BAU), see, e.g., Refs.<cit.> (subsequent references are in Sec.<ref>). For the M_ BH∈[10^6g,10^8 g], such effects could be quite significant, specifically, BSM physics occurring around the Electroweak (EW) scale, even down to the BBN (MeV) scale. In this context, we discuss a recently proposed BSM model <cit.> accommodating neutrino mass, BAU via leptogenesis, super-heavy DM, and metastable strings (the last feature is new). We show how a PBH-string landscape could be capable of probing untrod parameter regions of BSM models.
§ STABLE STRINGS, SUPERSTRINGS, METASTABLE STRINGS, AND NON-STANDARD COSMOLOGY
In this section, we briefly discuss/review the foremost aspects of cosmic superstrings and metastable strings required to compute GW spectra in the presence of PBHs. The gravitational wave spectral features obtained from the superstring and metastable string network differ from the standard stable strings mostly at the low frequencies, e.g., in the PTA band. On the other hand, imprints of early matter domination (by PBHs in our case) on the GW spectra at higher frequencies are similar to all string classes. It is therefore useful to highlight first the properties and the technicalities required to compute the GW spectrum from stable cosmic strings. The methodology used can be generalized to the case of superstrings and metastable strings. In addition, it would be opportune to understand why the GWs from stable cosmic strings do not fit the recent PTA data, unlike the other two string classes.
Stable cosmic strings: Gravitational waves are radiated from cosmic string loops chopped off from the long strings resulting from the spontaneous breaking of the gauged U(1)<cit.>. Long strings are described by a correlation length L=√(μ/ρ_∞), where ρ_∞ is the long string energy density and μ is the string tension defined as <cit.>μ=π v_Φ^2 h(λ,g^'), with h(λ,g^')≃ 1 unless the couplings λ and g^' are strongly hierarchical <cit.>. The time evolution of a radiating loop of initial size l_i=α t_i is given by l(t)=l_i-Γ Gμ(t-t_i), where Γ≃ 50 <cit.>, α≃ 0.1 <cit.>, G is Newton's constant, and t_i is the initial time of loop production. The total energy loss from a loop can be decomposed into a set of normal-mode oscillations with instantaneous frequencies f_k=2k/l_k=a(t_0)/a(t)f, where k=1,2,3...k_ max, f is the present-day frequency at t_0, and a is the scale factor. The total GW energy density can be computed by summing all the k modes giving <cit.>Ω_ GW= ∑_k=1^k_ max2k ℱ_α Gμ^2 Γ_k/fρ_c∫_t_i^t_0[a(t)/a(t_0)]^5 n_ω(t,l_k) dt ,
where ρ_c is the critical energy density of the universe, ℱ_α≃ 0.1 is an efficiency factor <cit.> and n_ω(t,l_k) is the loop number density which can be computed from the velocity-dependent-one-scale model as <cit.>
n_ω(t,l_k)=A_β/α(α+Γ G μ)^3(1-β)/[l_k(t)+Γ G μ t]^4-3βt^3β .
In Eq.(<ref>), β=2/3(1+ω) with ω is the equation of state parameter of the universe, and A_β =5.4 (A_β = 0.39) for radiation-dominated (matter-dominated) universe <cit.>. The quantity Γ_k=Γ k^-δ/ζ(δ) quantifies the emitted power in k-th mode, with δ=4/3 (δ=5/3) for loops containing small-scale structures such as cusps (kinks) <cit.> and makes it evident that k=1 (fundamental) mode is the dominant one contributing to the GWs. In principle, the GW spectrum corresponding to the fundamental mode broadly captures most of the underlying features of a given BSM model producing cosmic strings. However, when it comes to comparing the spectrum with real data, e.g., PTA data, and based on that, one intended to forecast future predictions, it is desirable to sum a large number of modes to obtain accurate results.
The integral in Eq. (<ref>) is subjected to the Heaviside functions Θ(t_i-t_ fric)Θ(t_i-l_c/α) that set cut-offs on the GW spectrum at high frequencies f_* above which the spectrum falls as f^-1 for the fundamental mode. The quantity t_ fric represents the time until which the string network is damped by friction <cit.>. For l>l_c, the GW emission dominates over particle production as shown by recent numerical simulations <cit.>. The critical length can be approximately computed as l_c≃δ_w (Γ G μ)^-γ where δ_w=(√(λ)v_Φ)^-1 is the width of the string and γ=2 (γ=1) for loops containing cusps (kinks). In most cases, the cut-offs owing to friction and kinks are weaker (occur at higher frequencies). We therefore focus on the string loops containing cusps only.
Without any intermediate matter-dominated epoch, Eq.(<ref>) can be solved to obtain the GW spectra taking into account two main contributions: i) from the loops created in the radiation domination but emitting in the standard matter domination, ii) from the loops created as well as emitting in the radiation domination. The former corresponds to a low-frequency peak while the latter produces a scale-invariant plateau at high frequencies (see Fig.<ref>) which for k=1 mode is given by <cit.>Ω_ GW^ plt = 128πℱ_α Gμ/9ζ(δ)A_R/ϵ_RΩ_R[(1+ϵ_R)^3/2-1] .
In Eq. (<ref>),
ϵ_R=α/Γ Gμ≫ 1, A_R≡ A_β≃ 5.4 and Ω_R∼ 9× 10^-5. Note that the plateau amplitude Ω_ GW^ plt∝√(μ)∝ v_Φ, implying strong amplitude GWs are produced for larger symmetry breaking scale. In scenarios that feature an early matter-dominated epoch (in this work by PBHs) instead, the plateau breaks at a high frequency f_ brk, beyond which the spectrum falls as Ω_ GW(f>f_ brk)∼ f^-1 for the fundamental mode (green and red curves on the top panel in Fig.<ref>). This spectral break frequency f_ brk can be estimated as <cit.>
f_ brk≃ 0.45 Hz(10^-12/Gμ)^1/2(T_ brk/ GeV) ,
where T_ brk≡ T_ end is the temperature corresponding to the end of early matter domination. When summed over a large number of modes (k→∞), two important changes occur in the GW spectrum. First, the plateau amplitude increases approximately by a factor of ζ(δ), i.e., Ω_ GW^ plt,∞≃Ω_ GW^ pltζ(δ) and the spectrum falls as f^1-δ instead of f^-1<cit.>.
Given an observed spectrum with a plateau followed by a fall-off, one has to compare two parallel scenarios to pinpoint the origin of such a spectrum–whether the EMD or the particle production cut-off causes the spectral shape. Considering the former is equivalent to imposing a constraint on the parameter space as T_ cusp/T_ brk>1, where we derive T_ cusp as
T_ cusp≃ 2.3× 10^4 GeV(G μ/10^-12)^5/4 .
Similar to Eq.(<ref>), an expression for f_ cusp can therefore be obtained as
f_ cusp≃ 1.01× 10^4 Hz(G μ/10^-12)^3/4
which we find to be consistent with Eq.(50) of Ref.<cit.>. While analyzing the GW spectrum with PBHs, one has to make sure that f_ cusp>f_ brk is satisfied (see Fig.<ref>, top panel).
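The analytic estimates above translate directly into a few lines of code; the sketch below evaluates the k=1 plateau of Eq.(<ref>), its enhancement when summing over modes, and the break frequencies of Eqs.(<ref>) and (<ref>). The benchmark values of Gμ and T_brk are illustrative.

import numpy as np
from scipy.special import zeta

F_alpha, alpha, Gamma, A_R, Omega_R = 0.1, 0.1, 50.0, 5.4, 9e-5
delta = 4.0 / 3.0          # cusp-dominated loops

def omega_plateau_k1(Gmu):
    # k = 1 plateau amplitude from radiation-era loops.
    eps_R = alpha / (Gamma * Gmu)
    return (128 * np.pi * F_alpha * Gmu / (9 * zeta(delta))
            * A_R / eps_R * Omega_R * ((1 + eps_R)**1.5 - 1))

def omega_plateau_summed(Gmu):
    # Summing k -> infinity enhances the plateau by roughly zeta(delta).
    return zeta(delta) * omega_plateau_k1(Gmu)

def f_brk(Gmu, T_brk_GeV):
    return 0.45 * np.sqrt(1e-12 / Gmu) * T_brk_GeV          # Hz

def f_cusp(Gmu):
    return 1.01e4 * (Gmu / 1e-12)**0.75                     # Hz

Gmu, T_brk = 2e-12, 0.1
print(f"Omega_plt(k=1)    = {omega_plateau_k1(Gmu):.2e}")
print(f"Omega_plt(summed) = {omega_plateau_summed(Gmu):.2e}")
print(f"f_brk = {f_brk(Gmu, T_brk):.3f} Hz, f_cusp = {f_cusp(Gmu):.1f} Hz, "
      f"cusp cut-off above the EMD break: {f_cusp(Gmu) > f_brk(Gmu, T_brk)}")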
Cosmic superstrings: Cosmic superstrings are characterized by a weaker inter-commutation probability P because of their quantum-mechanical nature. The lack of availability of numerical simulations with larger simulation times makes the actual GW spectrum produced by superstrings pretty uncertain. Nonetheless, the standard paradigm to compute the spectrum has been to multiply the loop number density by a factor of P^-1<cit.> (see Ref.<cit.> for a different scaling relation). This method has also been adopted by the NANOGrav collaboration <cit.>. Therefore, we shall use all the methodologies considered for stable strings but rescaling the loop number density by P^-1, with P∈[10^-3,1]<cit.>. Note that, this is not a rescaling of Gμ that one obtains for hierarchical coupling (h(λ,g^')≠ 1) <cit.>, thereby making the spectral shape, e.g., in the PTA band, different from the stable cosmic strings.
Metastable cosmic strings: Metastable cosmic strings decay into string segments connecting monopole-antimonopole
pairs by quantum tunneling <cit.>. In the semiclassical approximation, the network is characterized by the decay rate per string unit length as
Γ_d = μ/2πexp(-πκ), √(κ) = m_M/μ^1/2≃Λ_ GUT/v_Φ,
where m_M is the monopole mass which is of the order of Λ_ GUT≃ 10^16 GeV. From Eq.(<ref>), one can define a time t_s=1/Γ_d ^1/2, before which the cosmic string network behaves similarly to a stable cosmic string network, around which the monopole nucleation starts, and after which, the network behaves as a decaying network with new loops no longer produced. For t>t_s the loop number density in Eq.(<ref>) receives a correction factor <cit.>:
D(l,t)= e^-Γ_d[l(t-t_s)+1/2Γ Gμ(t-t_s)^2]Θ(α t_s-l(t_s)),
where the Θ function with l(t_s)≃ l+Γ Gμ t, represents the fact that only the loops produced before t_s contribute to the number density. In principle, unless the κ is quite large (which is the case for stable strings), it is sufficient to take the upper limit of the integration in Eq.(<ref>) as t∼ t_ eq, because during t_s<t < t_ eq the loop number density encounters rapid exponential suppression. In this work, we consider GW emission only from loops. Nonetheless, when monopoles do not carry unconfined flux, cosmic string segments formed owing to monopole nucleation can contribute to GWs <cit.>. Compared to the stable cosmic strings, two distinct features of the GW spectrum from metastable strings are i) for smaller values of κ, the frequency at which the spectrum reaches the plateau gets exponentially right-shifted on the Ω_ GW vs. f plane (the blue curve in Fig.<ref>). As such, for sufficiently small values of κ, the spectrum reaches the plateau at a very high frequency f≳ 10 Hz, thus avoiding all the existing bound on Gμ, barring the one expected from BBN bound on the effective number of degrees of freedom <cit.>. ii) The infra-red tail of the GW spectrum behaves as Ω_ GW∝ f^2<cit.> favored by the PTA data when fitted to a power-law GW spectrum. Markedly, only a few other models exhibit Ω_ GW∝ f^2 power-law in the infra-red tail, see, e.g., <cit.>. Note that, compared to the stable cosmic strings, in both cases, we have an extra parameter in addition to Gμ: the intercommutation probability P for superstrings, and κ for metastable strings. This additional degree of freedom (i.e., considering a two-parameter string model) makes it easier to fit the PTA data which are characterized also by two parameters; the amplitude and the spectral index.
We conclude the discussion by showing the GW spectra in Fig.<ref> for superstrings and metastable strings comparing them to the ones from stable strings. We choose a benchmark value of T_ brk =0.1 GeV and consider only the k=1 mode, leaving a more realistic analysis by summing a large number of modes for the next section. In Fig.<ref> (left) we show the GW spectrum for cosmic super (stable) strings with the green solid (dashed) curve. For the superstrings, we choose the benchmark values P=6× 10^-3 and Gμ=2× 10^-12 while for the stable cosmic strings, we rescale Gμ as Gμ→ Gμ/P^2 such that comparable plateau amplitude can be obtained in both cases. Note that, for superstrings, one has the required blue-tilted GW spectrum in the PTA band (A blue-tilted distribution of fourteen NANOGrav posteriors is shown in black; in this work, we use only NANOGrav data which are consistent with all other PTAs (IPTA) <cit.> ). On the other hand, even though the required GW amplitude can be obtained for the stable strings, the spectrum is nearly scale-invariant in the PTA band and thereby does not fit the data. A similar situation occurs in the case of metastable strings as shown in the top right panel with the red curves[With the blue curve we show the GW spectra for √(κ)=6 evading all the bounds with the dominant contribution at very high-frequencies. Although this spectrum is not so relevant for the present study, we find it interesting to mention it as it is generally not present in the case of stable (and super) strings.]. In both plots, the black dotted lines represent the analytical expression for the plateau amplitude and the spectral break frequency given in Eq.(<ref>) and (<ref>), respectively. The green and the red vertical lines represent the spectral break frequency f_ cusp at which the spectrum would have fallen if there was no matter domination. In the bottom panel, we zoomed in on the GW spectra within the PTA band to show the consistency of the blue-tiled GW spectrum obtained from cosmic superstrings and metastable strings with the data.
An important aspect of this study is considering a falling GW spectrum at higher frequencies to avoid the LIGO bound. Although we zero in on early matter domination for this purpose, as discussed earlier, the spectrum may also fall because of particle production. With a straightforward generalization of Eq.(11) of Ref.<cit.> with δ_w=(√(λ) v_Φ)^-1, it can be shown that for making the spectrum fall before the LIGO frequency band while being consistent with the PTA data, an extremely small value of λ is required. For the standard case with λ∼ 1 considered in the numerical simulations, the spectrum always falls at very high frequency owing to particle production. This makes the case of an early matter-dominated scenario pragmatic while interpreting the PTA data as cosmic metastable (super) strings radiated GWs evading the LIGO-O3 bound.
§ IMPRINTS OF ULTRALIGHT PBHS ON GWS FROM COSMIC (SUPER/METASTABLE) STRINGS
We focus on non-rotating PBHs with monochromatic initial mass M_ BH. The energy density of the PBHs (ρ_ BH) and radiation (ρ_R) evolve according to the following Friedmann equations <cit.>:
dρ_R/dz+4/zρ_R = 0,
dρ_ BH/dz+3/zH/H̃ρ_ BH - Ṁ_ BH/M_ BH1/zH̃ρ_ BH=0,
where H is the Hubble parameter (H_0∼ 1.44× 10^-42 GeV), z=T_ Bf/T, with T_ Bf being the PBH formation temperature and the time derivative Ṁ_ BH is given by
Ṁ_ BH=𝒢g_*B(T_ BH)/30720πM_Pl^4/M_ BH^2,
where M_Pl=1.22× 10^19 GeV, 𝒢≃ 3.8 is a grey body factor and g_*B(T_ BH)≃ 100 being the number of relativistic degrees of freedom below the PBH temperature T_ BH with Standard Model (SM) containing left-handed neutrinos <cit.>. The quantity H̃ and the scale factor a evolve as
H̃=(H+𝒦), da/dz=(1-𝒦/H̃)a/z,
where 𝒦=Ṁ_ BH/M_ BHρ_ BH/4ρ_R. To derive Eq.(<ref>)-Eq.(<ref>), we assume the entropy (g_*s) and the energy (g_*ρ) degrees of freedom are equal and constant. For a given value of β≡ρ_ BH(T_ Bf)/ρ_ R(T_ Bf) and M_ BH, the above equations can be solved to determine the duration of PBH domination and the evaporation temperature. In the case where the PBH dominates the energy density, an analytical expression for the evaporation temperature is obtained as
T_ ev=(5 M_Pl^2/π^3g_*(T_ ev)τ^2)^1/4,
where τ is the PBH lifetime which can be computed by integrating Eq.(<ref>). This allows also to express T_ ev in terms of M_ BH as
T_ ev=0.1(M_ BH/4.1× 10^7 g)^-3/2 GeV.
Eq.(<ref>) shows that with T_ ev≡ T_ brk, all the plots in Fig.<ref> can be reproduced for PBHs with initial mass M_ BH≃ 4.1× 10^7 g.
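For convenience, the lifetime obtained by integrating Eq.(<ref>) at fixed g_*B and the resulting evaporation temperature of Eq.(<ref>) can be evaluated with the short sketch below; it reproduces T_ev ≃ 0.1 GeV for M_BH ≃ 4.1×10^7 g and T_ev of order 10 MeV for M_BH ≃ 2×10^8 g (the BBN bound).

import numpy as np

M_Pl = 1.22e19            # GeV
g_to_GeV = 5.61e23        # 1 gram in GeV
GeV_inv_to_s = 6.58e-25   # seconds per GeV^-1
G_grey, g_star = 3.8, 100.0

def pbh_lifetime(M_BH_gram):
    # Integrating dM/dt = -G_grey g_* M_Pl^4 / (30720 pi M^2) at fixed g_*:
    # tau = 10240 pi M_BH^3 / (G_grey g_* M_Pl^4).
    M = M_BH_gram * g_to_GeV
    return 10240 * np.pi * M**3 / (G_grey * g_star * M_Pl**4)   # GeV^-1

def T_evaporation(M_BH_gram):
    tau = pbh_lifetime(M_BH_gram)
    return (5 * M_Pl**2 / (np.pi**3 * g_star * tau**2))**0.25    # GeV

for M in (1e6, 4.1e7, 2e8):   # grams
    print(f"M_BH = {M:.1e} g : tau = {pbh_lifetime(M) * GeV_inv_to_s:.2e} s, "
          f"T_ev = {T_evaporation(M):.3f} GeV")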
It is crucial to note the following: i) An uninterrupted fall of the GW spectrum in the range f_ brk≲ f≲ f_ LIGO (see Fig.<ref>) requires a longer duration of PBH domination and therefore a large β; we consider this to constrain the PBH mass with the PTA data and the LIGO bound plus projection. However, in scenarios where β can be constrained (bounded from above, resulting in a short duration of PBH domination) by GWs from PBH density fluctuations <cit.>, the fall of the GW spectrum might stop before the LIGO frequency band and another subsequent plateau with suppressed amplitude (originating from the contribution of the loops in the first radiation epoch, before the PBHs start to dominate) might show up. Therefore one needs to modify our results accordingly, see, e.g., sec.<ref>. ii) The PBHs should evaporate before BBN, at a temperature T≳ 10 MeV. Therefore, from Eq.(<ref>), M_ BH is generically constrained as M_ BH≲ 2× 10^8 g, which we use as the absolute upper bound on M_ BH.
We now proceed to the detailed discussion of the impact of PBH-induced matter domination on GWs from cosmic superstrings and metastable strings. First, from Eq.(<ref>) and Eq.(<ref>), we obtain the spectral break frequency as
f_ brk^ PBH≃ 0.045 Hz(10^-12/Gμ)^1/2(M_ BH/4.1× 10^7 g)^-3/2 .
When a large number of modes are summed, GW energy density beyond f_ brk^ PBH can be written as
Ω_ GW(f)≃Ω_ GW^ plt,∞(f/f_ brk^ PBH)^-1/3Θ[ f-f_ brk^ PBH],
where
Ω_ GW^ plt,∞≃ 3.1× 10^-8 Υ^-1(Gμ/10^-7)^1/2, with Υ=P for superstrings and Υ=1 for metastable strings.
Using Eq.(<ref>), Eq.(<ref>) can be re-expressed as
Ω_ GW(f)≃Ω_ GW^ plt,∞Ω_ GW^ mod,PBH,
where Ω_ GW^ mod,PBH, given by
Ω_ GW^ mod,PBH≃ 0.1 (10^-12/Gμ)^1/6(M_ BH/4.1× 10^7 g)^-1/2(f/25 Hz)^-1/3
captures the modulation of the GW spectra due to PBH domination. The absence of PBHs corresponds to Ω_ GW^ mod, PBH=1. In that case, Eq.(<ref>) and the non-observation of a SGWB by the LIGO-O3 run, i.e., Ω_ GW^ any model≲ 1.7× 10^-8 at f_ LIGO=25 Hz, imply the following:
without PBHs, LIGO-O3 excludes
P≲ 1.8 (Gμ/10^-7)^1/2 for superstrings,
Gμ≳ 3×10^-8 for metastable strings.
Though Eq.(<ref>) is the most interesting region to explore with non-standard cosmological evolution, e.g., with PBHs, for a comprehensive discussion, we present our numerical results by dividing parameter space into three different categories (all cases are consistent with PTA data):
Category A: Ω_ GW^ plt,∞≳Ω_ GW^ LIGO-O3 and Ω_ GW^ LIGO-O3≳Ω_ GW^ plt,∞Ω_ GW^ mod,PBH(f=25 Hz)≳Ω_ GW^ LVK-D.
Category B: Ω_ GW^ LIGO-O3≳Ω_ GW^ plt,∞≳Ω_ GW^ LVK-D and Ω_ GW^ LIGO-O3≳Ω_ GW^ plt,∞Ω_ GW^ mod,PBH(f=25 Hz) ≳Ω_ GW^ LVK-D.
Category C: Ω_ GW^ LVK-D≳Ω_ GW^ plt,∞Ω_ GW^ mod,PBH(f=25 Hz) for all Ω_ GW^ plt,∞.
Here Ω_ GW^ LVK-D=2× 10^-9 at f=25 Hz is the maximum projected sensitivity (minimum testable value of Ω_ GW ) reach of the LVK-D. Note that Category A corresponds to recovering the excluded parameter space in Eq.(<ref>), for which one must require a non-standard cosmological evolution, e.g., by PBH. For Category B, PBHs are not essential because even the scale-invariant amplitude does not contradict the LIGO-O3 bound. A PBH domination however can be tested for f_ brk^ PBH≲25 Hz and Ω_ GW^ LIGO-O3≳Ω_ GW^ plt,∞Ω_ GW^ mod,PBH(f=25 Hz) ≳Ω_ GW^ LVK-D. For Category C, though Ω_ GW is consistent with the PTA data, the LVK-D cannot test the spectrum because the amplitude is smaller than its sensitivity reach. However, detectors like the Einstein Telescope (ET) can potentially test such signals. We quantify this class of signals by computing Signal-to-Noise-Ratio at ET. In Fig.<ref>, we show the benchmark GW spectrum (summing over larger modes) from each category belonging to both string classes. The benchmark parameters are tabulated in Table <ref> (Superstrings) and Table <ref> (Metastable strings). In what follows, we present detailed numerical results including a more rigorous parameter space scan to find an approximate mass range of PBHs that can produce such signals.
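A simplified numerical classification following the Category definitions above is sketched below, using Eq.(<ref>) for the plateau and Eq.(<ref>) for the PBH modulation. The benchmark parameters are illustrative stand-ins rather than the values of the Tables, and the checks f_brk^PBH < 25 Hz and f_cusp > f_brk^PBH are assumed to hold.

import numpy as np

OMEGA_LIGO_O3, OMEGA_LVK_D = 1.7e-8, 2e-9      # at f = 25 Hz

def omega_plateau(Gmu, P=1.0):
    # Summed plateau: 3.1e-8 (Gmu/1e-7)^(1/2) / Upsilon, Upsilon = P (superstrings) or 1.
    return 3.1e-8 * np.sqrt(Gmu / 1e-7) / P

def omega_mod_pbh(Gmu, M_BH_gram, f=25.0):
    # PBH modulation factor, valid for f > f_brk^PBH.
    return 0.1 * (1e-12 / Gmu)**(1 / 6) * (M_BH_gram / 4.1e7)**(-0.5) * (f / 25.0)**(-1 / 3)

def category(Gmu, M_BH_gram, P=1.0):
    plateau = omega_plateau(Gmu, P)
    at_25Hz = plateau * omega_mod_pbh(Gmu, M_BH_gram)
    if plateau > OMEGA_LIGO_O3 and OMEGA_LVK_D < at_25Hz < OMEGA_LIGO_O3:
        return "A"
    if plateau < OMEGA_LIGO_O3 and OMEGA_LVK_D < at_25Hz < OMEGA_LIGO_O3:
        return "B"
    if at_25Hz < OMEGA_LVK_D:
        return "C"
    return "excluded by LIGO-O3"

# Illustrative benchmark: superstrings with Gmu = 2e-12, P = 6e-3 and a 1e7 g PBH.
print(category(2e-12, 1e7, P=6e-3))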
§.§ Numerical results: PBH mass ranges for PTA GW signals evading LIGO-O3 bound
On the top panel in Fig.<ref>, we show the allowed regions for fixed PBH mass that would produce GW signals in each category belonging to cosmic superstrings. The gray contours in each plot are the NANOGrav 1σ and 2σ posteriors of Gμ and P<cit.> (one may also look at the Appendix <ref> of this article). The benchmark points tabulated in Table <ref> corresponding to the GW spectra in Fig.<ref> (left) are denoted with the symbols BP1 (Cat-A), BP2 (Cat-B), and the two BP3 points (Cat-C). A similar description holds for metastable strings on the Gμ and √(κ) plane (Fig.<ref>, top) with the benchmarks tabulated in Table <ref> and corresponding GW spectra shown in Fig.<ref> (right). In the bottom panel of both figures, the GW spectra presented in Fig.<ref> are zoomed in on the NANOGrav and the LIGO frequency bands. Note that for all categories in both figures, only a constrained range of PBH masses M_ BH∼[10^6 g, 10^8 g] would produce a viable GW signal consistent with the PTA data. For category C, slightly heavier PBHs are allowed, to make the spectrum fall below Ω_ GW^ LVK-D with f_ brk at lower frequencies. On the other hand, for Category B, only lighter PBHs are allowed for the opposite reason, i.e., Ω_ GW^ LVK-D≲Ω_ GW(f=25 Hz) requires f_ brk≲ 25 Hz at much higher frequencies. Our parameter space scanning does not account for a correction factor due to the possible variation of effective degrees of freedom, which would make the plateau amplitude slightly smaller. Therefore, in this analysis, the LIGO constraints on the string parameter space are slightly stronger than they should be <cit.>. As such, from Fig.<ref>, it seems that without PBHs the NANOGrav 2σ region is excluded by LIGO-O3. But when the correction factor is considered, a small portion of the 2σ region around Gμ∼ 10^-7 does not contradict the LIGO-O3 bound <cit.>. Therefore, one can produce Category-B signals for metastable strings with M_ BH∼ 10^5.5 g.
§.§ Numerical results: Expected Signal-to-Noise-Ratio at the Einstein Telescope
The standard way to assess the detectability of a predicted GW signal in future planned detectors is to compute the Signal-to-Noise Ratio (SNR) <cit.>. Because the PTA GWs have strong amplitudes, the generic expectation for any such signal spanning a wide frequency range is a large SNR at the high-frequency detectors. In the present case, because PBH domination reduces the signal amplitude at higher frequencies, it is useful to quantify the detectability of such signals with the SNR; note that, in our case, the higher the frequency, the smaller the GW amplitude. The signals most strongly reduced at high frequencies are those of Category C, and we therefore assess them by computing the SNR at the Einstein Telescope. The SNR is defined as <cit.> SNR = √(t_ obs∫_f_ min^f_ maxdf (Ω_ GW(f)/Ω_ noise(f))^2) ,
where Ω_ noise(f) is the noise spectrum of the detector and f_ min (f_ max) is the minimum (maximum) accessible frequency. In Fig.<ref>, we present the SNR at ET for PTA GWs radiated from superstrings (metastable strings) in the upper (lower) panel for the benchmark values M_ BH=10^7,8 g. Note that even though the signal strength is reduced in the ET band, the predicted SNR remains high. This is owing to the strong GW amplitude required at the PTA band, together with the maximally allowed PBH mass, M_ BH∼ 10^8 g, set by the BBN constraint.
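As a concrete illustration, the SNR integral above can be evaluated numerically on a log-spaced frequency grid as sketched below; the power-law signal and the bowl-shaped noise curve are placeholders of our own choosing, not the actual ET noise model or a benchmark spectrum.

import numpy as np

def snr(omega_gw, omega_noise, f_min, f_max, t_obs, n_points=2000):
    """Signal-to-noise ratio: sqrt( t_obs * int df (Omega_GW/Omega_noise)^2 )."""
    f = np.logspace(np.log10(f_min), np.log10(f_max), n_points)
    integrand = (omega_gw(f) / omega_noise(f)) ** 2
    return np.sqrt(t_obs * np.trapz(integrand, f))

# Placeholder curves: a flat plateau suppressed above f_brk = 10 Hz as f^(-1/3),
# against a toy detector noise curve (illustrative only).
omega_signal = lambda f: 1e-8 * np.minimum(1.0, (f / 10.0) ** (-1.0 / 3.0))
omega_noise = lambda f: 1e-9 * ((f / 10.0) ** (-2) + (f / 10.0) ** 2)

year = 3.15e7  # seconds
print(f"SNR ~ {snr(omega_signal, omega_noise, 1.0, 1e3, t_obs=year):.1f}")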
Given the above discussion and the parameter scan rendering an approximate (fairly accurate) range of PBH masses that make string-radiated GWs consistent with the LIGO bound, we now discuss the phenomenological advantage of having a PBH domination over any other early matter domination.
§ PBHS OVER ANY OTHER EMD: ADDITIONAL PHYSICS
As mentioned in the introduction, the advantage is broadly twofold. First, PBHs themselves are additional sources of GWs, making the overall stochastic GW spectrum distinctive. PBHs inevitably produce high-frequency gravitons, and, as shown recently, the inhomogeneity in the spatial distribution of PBHs can induce tensor modes if the PBHs dominate the universe's energy density. As we show below, remarkably, for the PBH mass range obtained in Fig.<ref> and Fig.<ref>, the peak of such induced GWs from PBH density fluctuations appears in the LISA-DECIGO-ET band. Second, there have lately been considerable efforts to study the effect of ultralight PBHs on particle dark matter and baryogenesis models <cit.>. PBHs in the mass range predicted in this article would have profound (perhaps the most testable) effects on BSM particle physics models. To this end, we discuss a recently proposed BSM scenario <cit.> based on the seesaw mechanism <cit.> of neutrino masses, which also accommodates baryogenesis via leptogenesis <cit.> and superheavy dark matter in the presence of PBHs <cit.>. In that scenario, it was reported for the first time that, if there are ultralight PBHs in the early universe, the seesaw Lagrangian can accommodate the novel right-handed neutrino (RHN) mass spectrum Λ_ GUT∼ M_R3>M_R2>M_R1, where M_Ri is the mass of the ith RHN, with the heaviest one (N_R3) being superheavy dark matter and the other two generating the baryon asymmetry. The analysis in Ref.<cit.> was performed with stable cosmic strings produced by the spontaneous breaking of a U(1)_B-L (embeddable in a GUT) that is also responsible for generating the RHN masses. Here we present an analysis of the same model for a metastable string network, incorporating the new PTA data. In what follows, we give a concise discussion with numerical results for the additional GWs from PBHs and for the U(1)_B-L^ PBH seesaw model.
§.§ Complementary GW signatures: density fluctuations and graviton emission
Ultralight PBHs are multifaceted sources of stochastic GWs. For instance, the initial curvature perturbations leading to the formation of PBHs give rise to GWs (see, e.g., <cit.>); ultralight PBHs could merge, releasing GWs <cit.>; PBHs emit gravitons, which constitute GWs at ultra-high frequencies <cit.>; and the inhomogeneous distribution of PBHs, which results in density fluctuations, triggers the production of GWs <cit.>. Here we do not discuss any specific production mechanism of PBHs, nor the possibility of merger formation. We rather focus on the last two options, motivated by the facts that graviton emission is inevitable and independent of the production mechanism, and that the GWs from PBH density fluctuations provide a realistic stochastic background whose spectral shape depends on the PBH spatial distribution, assumed to be Poisson-like in Refs.<cit.>. Even though the PBH gas, on average, behaves like pressureless dust, the inhomogeneous spatial distribution results in density fluctuations, which are isocurvature in nature. When PBHs dominate the universe's energy density, the isocurvature component is converted to curvature perturbation, which subsequently induces secondary GWs. Due to the significant density fluctuations at small scales (comparable to the average separation of PBHs at T_ Bf), substantial GWs are generated. These GWs are further amplified by the nearly instantaneous evaporation of PBHs. The present-day amplitude of such induced GWs is given by
Ω_ GW^ PBH,I(t_0,f)≃Ω^ peak,I_ GW( f/f_ peak)^11/3Θ (f_ peak,I-f),
where the peak amplitude is given by
Ω^ peak,I_ GW≃ 2× 10^-6(β/10^-8)^16/3(M_ BH/10^7 g)^34/9,
and the peak frequency is given by
f_ peak,I≃ 5.4 Hz( M_ BH/10^7 g)^-5/6.
The ultraviolet cutoff frequency f_ peak,I corresponds to the comoving scale representing the mean separation of PBHs at the formation time.
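The three relations above can be collected into a short routine for quick checks; the code below simply restates the quoted peak amplitude, peak frequency, and the (f/f_ peak)^11/3 shape below the cutoff, with argument names of our own choosing.

import numpy as np

def omega_peak_induced(beta, m_bh_grams):
    """Peak amplitude of GWs induced by PBH density fluctuations."""
    return 2e-6 * (beta / 1e-8) ** (16.0 / 3.0) * (m_bh_grams / 1e7) ** (34.0 / 9.0)

def f_peak_induced(m_bh_grams):
    """Peak (UV cutoff) frequency in Hz, set by the mean PBH separation."""
    return 5.4 * (m_bh_grams / 1e7) ** (-5.0 / 6.0)

def omega_gw_induced(f_hz, beta, m_bh_grams):
    """Present-day spectrum: steep f^(11/3) rise, cut off above f_peak."""
    f_peak = f_peak_induced(m_bh_grams)
    shape = (f_hz / f_peak) ** (11.0 / 3.0) * (f_hz <= f_peak)
    return omega_peak_induced(beta, m_bh_grams) * shape

f = np.logspace(-2, 2, 5)
print(omega_gw_induced(f, beta=1e-8, m_bh_grams=1e7))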
On the other hand, the present spectral energy density of GWs from gravitons is given by <cit.>
Ω_ GW(f) h^2= (2π)^4βγ^2f^4/(4H_0^2M_ BH)∫_t_ Bf^t_ ev dt (1-(t-t_ Bf)/τ_ ev)^2/3( a(t_ Bf)/a(t_0))^-3/[exp( 16π^2 f a(t_0)M_ BH/(a(t)M_ Pl^2))-1].
By maximizing the Eq.(<ref>) w.r.t the present-day frequency f, one obtains the peak frequency and consequently the peak GW amplitude as
f_ peak,g≃ 2.8 a(t_ ev)T_ BH≃ 2 × 10^17 Hz( M_ BH/10^7 g)^1/2, Ω_ GW^ peak,g∼ 10^-7.
In Fig.<ref> and Fig.<ref> (top-left) we show the total GW spectrum: density fluctuation spectrum combined with the one from metastable strings and superstrings for the benchmark parameters belonging to Category C in each string class. Instead of large β, here we chose a benchmark value such that the duration of PBH domination is relatively short and the density fluctuation GW spectrum is visible (the sharp peak at higher frequencies).
Note that, because of the strong dependence of the peak GW amplitude on β (see Eq.(<ref>)) in this scenario, the duration of PBH domination cannot be arbitrarily long; otherwise, the GWs would saturate the BBN bound on Δ N_ eff: ∫ d f f^-1Ω_ GW(f)h^2≲ 5.6× 10^-6Δ N_ eff, with Δ N_ eff≲ 0.2. Consequently, the falling of the GW spectrum from cosmic strings would stop at higher frequencies, potentially revealing a subsequent GW plateau originating from the loops formed in the radiation epoch before the PBH domination. This is shown in the top-right panel of Fig.<ref> for metastable strings for three benchmark values of β. Nonetheless, the second plateau does not always appear if the particle-production cut-off is strong enough. We show this for superstrings in the top-right panel of Fig.<ref>: compared to the metastable string case, owing to the strong particle-production cut-off, the GW spectra closely follow the large-β case. As discussed in Sec.<ref>, for GWs with amplitude at the level of the PTAs, the particle-production cut-off appears mostly at frequencies higher than the LIGO band. Therefore, for shorter-duration PBH-domination scenarios, the regions in Fig.<ref> and Fig.<ref>, extracted assuming an Ω_ GW∝ f^-1/3 falling, require modification to account for a flattening of the spectrum. Although we do not present such a scan, observing the GW peak from density fluctuations on top of the strong-amplitude string-radiated GWs required by the PTAs calls for a reasonably large β (respecting, of course, the BBN bound). For this reason, for most of the parameter space presented in Fig.<ref> and Fig.<ref>, the duration of PBH domination would be such that, from f_ brk up to f_ LIGO, the spectrum falls as Ω_ GW∝ f^-1/3, keeping the results of the previous section intact. We expect modifications to the parameter space for heavier PBH masses around 10^8 g, which lead to smaller values of f_ brk and thereby a potential flattening of the spectrum in the LIGO frequency band due to the shorter duration of PBH domination.
Besides these low-frequency peculiarities, a PBH+superstrings (metastable strings) system provides a characteristic high-frequency GW spectrum constituting gravitons as shown in the bottom panels of Fig.<ref> and <ref>, where the GW strain h_c is related to Ω_ GW as
h_c(f)=√(3 H_0^2 Ω_ GW/(2π^2)) f^-1.
Because Eq.(<ref>) characterizes the GW background, for the allowed PBH mass ranges presented in Sec.<ref> a considerable GW background made of gravitons is obtained within a constrained frequency range spanning approximately only a decade. Although the proposed high-frequency GW detectors <cit.> are not presently competitive with the Δ N_ eff bound from Planck <cit.> (according to Ref.<cit.>, electromagnetic resonant cavities can potentially reach below the dark-radiation projections around f∼ 10^8 Hz; the black oscillating curve in Fig.<ref> and Fig.<ref>), the graviton background may optimistically be detected in the future. In any case, these marked spectral features in the total GW background, described by the three characteristic frequencies f_ brk, f_ peak,I, and f_ peak,g, make the PBH + string scenario more appealing than an ordinary EMD+string case.
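For quick checks, the sketch below evaluates the graviton peak frequency of Eq.(<ref>) and converts a given Ω_ GW into the characteristic strain h_c(f); the value of H_0 (≈67.7 km/s/Mpc, i.e., ≈2.2×10^-18 s^-1) is our own assumption and only sets the overall normalization.

import numpy as np

H0 = 2.2e-18  # Hubble constant in 1/s (approx. 67.7 km/s/Mpc); assumed here

def f_peak_graviton(m_bh_grams):
    """Peak frequency (Hz) of the graviton background from PBH evaporation."""
    return 2e17 * (m_bh_grams / 1e7) ** 0.5

def characteristic_strain(f_hz, omega_gw):
    """h_c(f) = sqrt(3 H0^2 Omega_GW / (2 pi^2)) / f."""
    return np.sqrt(3.0 * H0 ** 2 * omega_gw / (2.0 * np.pi ** 2)) / f_hz

m_bh = 1e7  # grams
f_pk = f_peak_graviton(m_bh)
print(f"f_peak,g ~ {f_pk:.1e} Hz, h_c at peak ~ {characteristic_strain(f_pk, 1e-7):.1e}")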
§.§ Implication on a realistic BSM model of leptogenesis and super-heavy dark matter
In the seesaw mechanism <cit.>, the light neutrino mass m_i∼ 0.01 eV inferred by neutrino oscillation data and cosmological measurements <cit.>, can be explained as a ratio of two scales: m_i≃Λ_ EW^2/Λ_ GUT, with Λ_ EW and Λ_ GUT being the Electroweak and GUT scale, respectively. At the Lagrangian level, these two scales are introduced as ℒ_ seesaw∼Λ_ EWL N_R+Λ_ GUTN^c_R N_R, where L is the Standard Model (SM) lepton doublet and N_R is a RHN field. Besides generating light neutrino mass, the seesaw Lagrangian can explain the entire DM relic and the observed BAU, if there are three RHN fields, with one being the DM and the other two generating BAU through leptogenesis.
A plethora of studies is dedicated to exploring this three-dimensional aspect of the seesaw–the neutrino mass+BAU+DM, see, e.g., <cit.>. Such attempts are theoretically well-built and predictive in particle physics experiments, however, at the expense of fine-tuning the L N_R
Yukawa coupling and treating the RHN mass as a free parameter, thereby giving up the supposition that active neutrino masses arise from the GUT and EW scale physics. As such, perhaps the most studied scenarios based on <cit.> consider N_R to be extremely light 𝒪( GeV, keV), therefore tiny L N_R coupling. In Ref.<cit.>, it was proposed that if there are PBHs in the early universe, all three aspects of the seesaw can be addressed without significant deviation from the relation m_i≃Λ_ EW^2/Λ_ GUT, i.e., considering all the RHNs to be 𝒪(Λ_ GUT), with one of them being the GUT-scale (super heavy) DM. In this case, the scenario becomes testable in GW experiments. The idea is based on dark matter production from ultralight PBH evaporation <cit.>. If such DM makes up 100% relic density without contradicting structure formation, the DM mass must be in the super-heavy range following the relation
M_ DM≃ 4.7×10^12 (M_ BH/10^7 g)^-5/2 GeV.
The thermal leptogenesis mechanism (thermal scatterings populate the decaying RHNs producing the lepton asymmetry) generates the BAU. But in this case, the final asymmetry gets modified to account for an entropy dilution owing to the PBH domination. Once the condition for the observed baryon-to-photon ratio η_B≃ 6.3× 10^-10<cit.> is taken into account, the parameter β and scale of thermal leptogenesis get correlated as <cit.>β≃ 1.2× 10^-9(10^7g/M_ BH) (T_ lepto/10^14 GeV),
where we have assumed that leptogenesis is dominated by the decay of N_1, so that M_1∼ T_ lepto. Eq.(<ref>) can be shown to be accurate, matching the numerical results obtained from Eqs.(<ref>) and (<ref>) and the modified leptogenesis equations in the presence of PBHs <cit.>. PBHs can also populate the decaying RHNs to produce a lepton asymmetry, but if Eq.(<ref>) is to be satisfied, such PBHs evaporate after the sphaleron freeze-out temperature, and the lepton asymmetry is not processed into the BAU. We therefore discard this option.
Because the seesaw Lagrangian can be embedded in GUT groups, with the RHN mass term generated after the breaking of an intermediate symmetry U(1)_B-L (in the chain GUT→ SM), cosmic strings are formed. Without exploring any explicit GUT scheme, we consider the formation of a metastable string network after the U(1)_B-L breaking. The tentative timeline of this U(1)_B-L model in the presence of PBHs is sketched in Fig.<ref>. It is crucial to note that, as for the BAU, the dilution of any pre-existing DM relic (e.g., from scattering mediated by B-L gauge bosons) must be taken care of (the red line in Fig.<ref>) so that Eq.(<ref>) prevails. A detailed discussion of this point can be found in Ref.<cit.>. Note now the features of the model. First, the DM and PBH masses are anti-correlated (cf. Eq.(<ref>)): ∼ 10^15 GeV DM is produced from ∼ 10^6 g PBHs, and lighter PBHs are required to produce heavier DM. Second, as the leptogenesis scale increases (cf. Eq.(<ref>)), β increases, and consequently so does the duration of PBH domination. Therefore, as the RHN mass scale (including the DM) approaches Λ_ GUT, remarkably, besides maintaining the m_i≃Λ_ EW^2/Λ_ GUT relation to a great extent, the GWs produced from cosmic strings automatically evade the LIGO bound owing to the large β, i.e., the long duration of PBH domination. Assuming the hierarchy in the RHN masses[This assumption is not necessary, but it is more convenient from the model-building perspective. Alternatively, one can impose the simple condition M_ DM>T_ RH>M_Ri(=1,2) on the parameter space, to avoid DM production from scattering while maintaining the condition for thermal leptogenesis T_ RH>M_Ri(=1,2). ] as M_ DM>M_Ri(=1,2), we can re-write Eq.(<ref>) as
μ=π M_DM^2 f_ DM^-2h(λ,g^'),
where f_ DM is the DM Yukawa coupling which determines the interaction strength of N_R field with B-L scalar; N^c_R N_RΦ_B-L. Working with 𝒪(1) couplings in Eq.(<ref>), and using Eq.(<ref>) we obtain an expression for Gμ as
Gμ≃ 4.7× 10^-8(M_ BH/10^6 g)^-5.
With this relation, we now redo the PTA fit for metastable strings performed in the previous section. The result is shown in Fig.<ref>. Note that, because of the extra condition in Eq.(<ref>), this U(1)_B-L^ PBH model fits the PTA data only for an extremely constrained range of PBH mass and produces GW signals belonging exclusively to Category A. Therefore, any non-observation of the SGWB at the LVK-D would rule out the model, or at least its best parameter choice. Moreover, because the PBH mass window is narrow, the complementary GW signals, e.g., those from density fluctuations and those in the form of gravitons, are peaked within a narrow frequency range. In an upcoming publication, we shall provide a detailed discussion of the model for a more general parameter space with all possible GW signatures.
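For quick order-of-magnitude checks, the three scaling relations used in this subsection — the DM mass of Eq.(<ref>), the β–T_ lepto correlation of Eq.(<ref>), and the Gμ–M_ BH relation of Eq.(<ref>) — can be bundled as simple functions of the PBH mass; the routine below is only a convenience wrapper around the quoted formulas, with names chosen by us.

def dm_mass_gev(m_bh_grams):
    """Superheavy DM mass (GeV) from PBH evaporation: anti-correlated with M_BH."""
    return 4.7e12 * (m_bh_grams / 1e7) ** (-2.5)

def beta_from_leptogenesis(m_bh_grams, t_lepto_gev):
    """PBH energy fraction at formation required by the observed baryon asymmetry."""
    return 1.2e-9 * (1e7 / m_bh_grams) * (t_lepto_gev / 1e14)

def g_mu(m_bh_grams):
    """String tension fixed by O(1) couplings in the U(1)_{B-L}^{PBH} model."""
    return 4.7e-8 * (m_bh_grams / 1e6) ** (-5.0)

for m_bh in (1e6, 1e7, 1e8):  # grams
    print(f"M_BH={m_bh:.0e} g: M_DM={dm_mass_gev(m_bh):.2e} GeV, "
          f"beta={beta_from_leptogenesis(m_bh, 1e14):.2e}, Gmu={g_mu(m_bh):.2e}")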
§ CONCLUSION
An abundant presence of ultralight PBHs, leading to their domination of the energy density in the early universe, could produce plenty of observable signatures, specifically in the form of gravitational radiation. In addition to producing a direct GW background, e.g., in the form of gravitons, a PBH domination gets imprinted on the spectral shape of GWs originating from other, independent sources. Reconstructing such spectral shapes with GW detectors, and studying the complementary GW signals originating directly from the PBHs, is therefore an interesting way to probe the properties of ultralight PBHs. Here we study the impact of PBH domination on the GWs emitted from cosmic superstrings and metastable strings, which are speculated to be among the sources of the nHz GWs observed by the PTAs. Such an interpretation might become perilous: a GW background originating from cosmic strings spans a wide range of frequencies and can therefore contradict the non-observation of a stochastic GW background by LIGO, which probes comparable signal strengths. In fact, the LIGO constraint rules out a significant portion of the string parameter space favoured by the recent PTA data. A PBH-dominated early universe, however, affects the propagation of such GWs and reduces their amplitude at high frequencies, thereby evading the LIGO constraints. Assuming monochromatic and non-rotating PBHs, we work out the mass range of such ultralight PBHs that makes the string-radiated GWs consistent with both the LIGO bound and the PTA data. We show that PBHs with mass in the range M_ BH∈[10^6g,10^8 g] would produce such viable signals, which can be tested in mid-band detectors such as LISA and in high-frequency detectors such as the LVK-D and ET with a high signal-to-noise ratio. We discuss why a PBH domination is distinguishable from any other early matter-domination scenario that reduces the amplitude of string-radiated GWs at higher frequencies. In this context, we discuss a BSM model based on a U(1)_B-L seesaw providing neutrino mass, leptogenesis, superheavy dark matter, and metastable cosmic strings. We show how a PBH-cosmic string scenario could be extremely predictive in probing unexplored parameter space of such BSM scenarios.
§ ACKNOWLEDGEMENTS
The work of SD is supported by the National Natural Science Foundation of China (NNSFC) under grant No. 12150610460. The work of RS is supported by the research project TAsP (Theoretical Astroparticle Physics) funded by the Istituto Nazionale di Fisica Nucleare (INFN).
§ SUPERSTRINGS AND METASTABLE STRINGS FIT TO THE NANOGRAV DATA
In Fig.<ref> we present the superstring and metastable-string fits to the PTA data obtained using <cit.>. The corresponding 2D posteriors are used in the main text.
|
http://arxiv.org/abs/2409.02882v1 | 20240904170746 | Benchmarking Spurious Bias in Few-Shot Image Classifiers | [
"Guangtao Zheng",
"Wenqian Ye",
"Aidong Zhang"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
^1University of Virginia, Charlottesville VA 22904, USA
{gz5hp,wenqian,aidong}@virginia.edu
Benchmarking Spurious Bias in Few-Shot Image Classifiers
Guangtao Zheng^1 (0000-0002-1287-4931), Wenqian Ye^1 (0000-0002-6069-5153), Aidong Zhang^1 (0000-0001-9723-3246)
September 9, 2024
§ ABSTRACT
Few-shot image classifiers are designed to recognize and classify new data with minimal supervision and limited data but often show reliance on spurious correlations between classes and spurious attributes, known as spurious bias.
Spurious correlations commonly hold only in certain samples, and few-shot classifiers can suffer from the spurious bias induced by them. However, there is no automatic benchmarking system to assess the robustness of few-shot classifiers against spurious bias.
In this paper, we propose a systematic and rigorous benchmark framework, termed FewSTAB, to fairly demonstrate and quantify the varied degrees of robustness of few-shot classifiers to spurious bias. FewSTAB creates few-shot evaluation tasks with biased attributes so that classifiers relying on those attributes for prediction exhibit poor performance. To construct these tasks, we propose attribute-based sample selection strategies based on a pre-trained vision-language model, eliminating the need for manual dataset curation. This allows FewSTAB to automatically benchmark spurious bias using any existing test data. FewSTAB offers evaluation results in a new dimension along with a new design guideline for building robust classifiers. Moreover, it can benchmark spurious bias in varied degrees and enable designs for varied degrees of robustness. Its effectiveness is demonstrated through experiments on ten few-shot learning methods across three datasets. We hope our framework can inspire new designs of robust few-shot classifiers. Our code is available at <https://github.com/gtzheng/FewSTAB>.
§ INTRODUCTION
Few-shot classification <cit.> (FSC) has attracted great attention recently due to its promise for recognizing novel classes efficiently with limited data.
Few-shot classifiers can transfer the knowledge learned from base classes to recognize novel classes with a few labeled samples. However, they face potential risks when deployed in the real world, such as data distribution shifts <cit.> and adversarial examples <cit.>.
A subtle yet critical risk factor is the spurious correlations <cit.> between classes and spurious attributes — attributes of inputs non-essential to the classes. In the traditional learning setting <cit.>, deep learning models tend to rely on spurious correlations as their prediction shortcuts or exhibit
spurious bias, such as predicting
classes using the associated backgrounds <cit.> or image textures <cit.>,
leading to significant performance drops when the associated backgrounds or textures change to different ones.
In the low-data regime, spurious bias becomes more evident.
For example, in Fig. <ref>, the correlation between the class and a spurious attribute in the support (training) image may form a shortcut from that attribute to the class label and hinder the learning of the desired prediction rule that uses class-related attributes. The shortcut fails to generalize to the query (test) image, where the spurious attribute is absent. In general, few-shot image classifiers are susceptible to spurious bias.
However, there lacks a dedicated benchmarking framework that evaluates the robustness of few-shot classifiers to spurious bias. The standard benchmarking procedure in FSC trains a few-shot classifier on base classes from a training set with ample samples and evaluates the classifier on FSC test tasks constructed from a test set with novel classes. The problem with this procedure is the lack of explicit control over the spurious correlations in the constructed FSC tasks. Each FSC test task contains randomly sampled support and query samples. Thus, spurious correlations in the majority of the test set samples can be demonstrated in these tasks, providing unfair advantages for few-shot classifiers with high reliance on the spurious correlations.
In this paper, we propose a systematic and rigorous benchmark framework, termed Few-Shot Tasks with Attribute Biases (FewSTAB), to fairly compare the robustness of various few-shot classifiers to spurious bias. Our framework explicitly controls spurious correlations in the support and query samples when constructing an FSC test task to reveal the robustness pitfalls caused by spurious bias.
To achieve this, we propose attribute-based sample selection strategies that select support and query samples with biased attributes. These attributes together with their associated classes formulate spurious correlations such that if the support samples induce spurious bias in a few-shot classifier, i.e., the classifier learns the spurious correlations in the support samples as its prediction shortcuts, then the query samples can effectively degrade the classifier's performance, exposing its non-robustness to spurious bias.
Our framework exploits the spurious attributes in test data for formulating spurious correlations in FSC test tasks. Some existing datasets <cit.> provide spurious attribute annotations. However, they only have a few classes and cannot provide enough classes for training and testing. Many benchmark datasets for FSC do not have annotations on spurious attributes, and obtaining these annotations typically involves labor-intensive human-guided labeling <cit.>. To address this, we further propose to use a pre-trained vision-language model (VLM) to automatically identify distinct attributes in images in the high-level text format. Our attribute-based sampling methods can use the identified attributes to simulate various spurious correlations. Thus, we can reuse any existing FSC datasets for benchmarking few-shot classifiers' robustness to spurious bias, eliminating the need for the manual curation of new datasets.
The main contributions of our work are summarized as follows:
* We propose a systematic and rigorous benchmark framework, termed Few-Shot Tasks with Attribute Biases (FewSTAB), that specifically targets spurious bias in few-shot classifiers, demonstrates their varied degrees of robustness to spurious bias, and benchmarks spurious bias in varied degrees.
* We propose novel attribute-based sample selection strategies using a pre-trained VLM for constructing few-shot evaluation tasks, allowing us to reuse any existing few-shot benchmark datasets without manually curating new ones for the evaluation.
* FewSTAB provides a new dimension of evaluation on the robustness to spurious bias along with a new design guideline for building robust few-shot classifiers. We demonstrate the effectiveness of FewSTAB by applying it to models trained on three benchmark datasets with ten FSC methods.
§ RELATED WORK
Few-shot classification. Few-shot classification <cit.> has received vast attention recently. Few-shot classifiers can be trained with meta-learning or transfer learning on base classes to learn the knowledge that can be transferred to recognize novel classes with a few labeled samples. The transfer learning approaches <cit.> first learn a good embedding model and then fine-tune the model on samples from novel classes. The meta-learning approaches can be further divided into optimization-based and metric-based methods. The optimization-based methods <cit.> aim to learn a good initialized model such that the model can adapt to novel classes efficiently with a few gradient update steps on a few labeled samples. The metric-based methods <cit.> aim to learn a generalizable representation space with a well-defined metric, such as Euclidean distance <cit.>, to learn novel classes with a few labeled samples. Recently, large vision-language models <cit.> are used for few-shot classification. However, they have completely different training and inference pipelines from the models that we consider in this paper.
Robustness in few-shot classification. There are several notions of robustness for few-shot classifiers. The common one requires a few-shot classifier to perform well on the in-distribution samples of novel classes in randomly sampled FSC test tasks. The robustness to adversarial perturbations further requires a few-shot classifier to perform well on samples with imperceptible perturbations <cit.>. Moreover, the cross-domain generalization <cit.> aims to test how robust a few-shot classifier is on samples from novel classes with domain shifts, which are typically reflected by the changes in both image styles and classes. In contrast, we focus on a new notion of robustness: the robustness to spurious bias.
There is a lack of rigorous evaluation methods on the topic. We provide
a new evaluation method that specifically targets spurious bias and can systematically demonstrate few-shot classifiers' varied degrees of vulnerability to spurious bias, which has not been addressed in the existing literature.
Benchmarks for spurious bias. There are some existing datasets <cit.> that are designed to benchmark spurious bias in image classifiers. However, these datasets are only applicable to the traditional learning setting <cit.> since the classes in them are not sufficient for the training and testing of few-shot classifiers. Existing benchmarks in few-shot classification are not tailored for benchmarking spurious bias in few-shot classifiers. A recent work <cit.> creates a large-scale few-shot classification benchmark dataset with spurious-correlation shifts. In contrast, we propose a benchmark framework that can reuse existing few-shot classification datasets and provide a new dimension of evaluation.
Discovering spurious attributes. A spurious attribute is non-essential to a class and only exists in some samples. Early works on discovering spurious attributes <cit.> require a predefined list of spurious attributes and expensive human-guided labeling of visual attributes. Recent works <cit.> greatly reduce the need for manual annotations by using the neurons of robust models to detect visual attributes. However, they still need humans to annotate the detected visual attributes. We automate this process by using a pre-trained VLM to obtain distinct attributes as words. Instead of discovering spurious correlations, we simulate them via attribute-based sampling for benchmarking.
§ PRELIMINARY
Few-shot classification tasks.
A typical FSC task 𝒯 has a support set 𝒮 for training and a query set 𝒬 for testing. In this task, there are C classes (c=1, …,C) with N_𝒮 (a small number) training samples and N_𝒬 test samples per class in 𝒮 and 𝒬, respectively. The task is called a C-way N_𝒮-shot task.
Few-shot classifiers. A few-shot classifier f_θ with parameters θ aims to classify the samples in 𝒬 after learning from 𝒮 with a learning algorithm 𝒪 in a few-shot task 𝒯.
Here, 𝒪 could be any learning algorithms, such as the optimization method <cit.> or a prototype-based classifier learning method <cit.>. To acquire a good few-shot learning capability, f_θ is typically meta-trained or pre-trained <cit.> on a base training set 𝒟_train={(x_n,y_n)|y_n∈𝒞_train,n=1,…,N_train} with N_train sample(x)-label(y) pairs, where 𝒞_train is a set of base classes.
Performance metrics. The performance of a few-shot classifier is typically measured by its average classification accuracy over N_𝒯 C-way N_𝒮-shot tasks randomly sampled from 𝒟_test={(x_n,y_n)|y_n∈𝒞_test,n=1,…,N_test}, where the N_test sample-label pairs from the novel classes 𝒞_test do not appear in 𝒟_train, i.e., 𝒞_train∩𝒞_test=∅.
We denote this metric as the standard accuracy Acc(f_θ), i.e.,
Acc(f_θ)=1/N_𝒯∑_t=1^N_𝒯∑_c=1^CM_c(𝒯_t;f_θ, 𝒪),
where M_c(𝒯_t;f_θ, 𝒪) denotes the classification accuracy of f_θ on the query samples from the class c in 𝒯_t after f_θ is trained on 𝒮 with 𝒪.
The metric Acc(f_θ) in Eq. (<ref>) only shows the average learning capability of f_θ over C randomly selected novel classes. To better characterize the robustness of f_θ to spurious bias, we define the class-wise worst classification accuracy over tasks as
wAcc(f_θ)=1/N_𝒯∑_t=1^N_𝒯min_c=1,…,CM_c(𝒯_t;f_θ, 𝒪).
A larger wAcc(f_θ) indicates that f_θ is more robust to spurious bias.
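A minimal sketch of the two metrics is given below, assuming the per-class query accuracies M_c(𝒯_t) have already been collected into an N_𝒯 × C array; we average over classes so that both metrics lie in [0, 1], and the array values are placeholders.

import numpy as np

def standard_and_worst_accuracy(per_class_acc):
    """per_class_acc: array of shape (N_T, C); entry [t, c] is the accuracy
    of the adapted classifier on the query samples of class c in task t."""
    acc = per_class_acc.mean(axis=1).mean()   # average over classes, then over tasks
    wacc = per_class_acc.min(axis=1).mean()   # worst class per task, then average
    return acc, wacc

# Toy example: 3 tasks, 5-way classification.
M = np.array([[0.9, 0.8, 0.7, 0.95, 0.85],
              [0.6, 0.9, 0.8, 0.70, 0.75],
              [0.8, 0.7, 0.9, 0.85, 0.65]])
print(standard_and_worst_accuracy(M))  # -> (Acc, wAcc)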
Spurious correlations. A spurious correlation is the association between a class and an attribute of inputs that is non-essential to the class, and it only holds in some samples. We formally define it as follows.
Let 𝒟_c denote a set of sample-label pairs having the label c, and let ψ:𝒳→ℬ_𝒜 be an attribute detector, where 𝒳 is the set of all possible inputs, ℬ_𝒜 denotes all possible subsets of 𝒜, and 𝒜 is the set of all possible attributes. The class c and an attribute a∈𝒜 form a spurious correlation, denoted as ⟨ c,a⟩, if and only if the following conditions hold:
* There exists (x, c) ∈𝒟_c that satisfies a ∈ψ(x), and
* There exists (x', c) ∈𝒟_c that satisfies a ∉ψ(x').
We define a as the spurious attribute in ⟨ c,a⟩.
<ref> specifies that all the spurious correlations are based on 𝒟_c.
In the remainder of the paper, we define 𝒟_c={(x,c)|∀(x,c)∈𝒟_test} with c∈𝒞_test as we focus on evaluating the robustness to spurious bias.
We list major symbols in the paper alongside their meanings in <ref>.
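The definition above reduces to a simple membership test once the attribute detector ψ has been run on every sample of a class: an attribute is spurious for class c if it appears in at least one sample of 𝒟_c but not in all of them. A minimal sketch follows, assuming ψ outputs are precomputed sets of attribute strings; the toy class and attribute names are made up.

def spurious_attributes(attr_sets):
    """attr_sets: list of sets, one per sample of class c, each set being psi(x).
    Returns attributes satisfying both conditions of the definition:
    present in some sample of the class and absent from some other sample."""
    present_somewhere = set().union(*attr_sets)            # condition (1)
    present_everywhere = set.intersection(*map(set, attr_sets))
    return present_somewhere - present_everywhere          # drop always-present attributes

# Toy example for a class "vase": the class word itself appears in every sample
# and is therefore excluded, consistent with Remark 2 below.
samples = [{"vase", "table", "white"}, {"vase", "green"}, {"vase", "table"}]
print(spurious_attributes(samples))  # e.g. {'table', 'white', 'green'}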
§ METHODOLOGY
§.§ Attribute-Based Sample Selection
We first propose two attribute-based sample selection methods to reveal spurious bias in a few-shot classifier.
Consider a training set 𝒮 in a few-shot test task 𝒯, which has C classes with each class c∈𝒞_test associating with a unique spurious attribute a∈𝒜. We aim to discover samples that can exhibit a classifier's spurious bias on ⟨ c,a⟩ induced from 𝒮.
Motivated by existing findings <cit.> that classifiers with high reliance on ⟨ c,a⟩ tend to perform poorly on samples without it, we propose an attribute-based sample selection strategy
below.
Intra-class attribute-based sample selection. Given 𝒟_c and the training set 𝒮 having the spurious correlation ⟨ c,a⟩, we generate a set ℐ_⟨ c,a⟩ of sample-label pairs which have class c but do not contain attribute a, i.e.,
ℐ_⟨ c,a⟩ = {(x, c)|∀ (x, c) ∈𝒟_c, a ∉ψ(x)}.
The above proposed method demonstrates a few-shot classifier's robustness to individual spurious
correlation ⟨ c,a⟩ and does not consider a multi-class classification setting where spurious attributes from some other class c' exist in samples of the class c. In this case, these attributes may mislead the classifier into predicting those samples as the class c' and severely degrade the performance on the class c. For example, suppose each of two classes is spuriously correlated with its own color attribute; a vase image showing the color associated with the other class is more effective in revealing the reliance on the vase's spurious correlation, since it is more likely to be misclassified as the other class than other vase images.
Motivated by this, we propose the inter-class attribute-based sample selection below.
Inter-class attribute-based sample selection. Given 𝒟_c and the training set 𝒮 having the spurious correlations ⟨ c,a⟩ and ⟨ c',a'⟩, where c'≠ c and a'≠ a, we generate a set ℐ_⟨ c,a⟩^⟨ c',a'⟩ of sample-label pairs which have class c, do not contain attribute a, but contain attribute a' from another class c':
ℐ_⟨ c,a⟩^⟨ c',a'⟩ = ℐ_⟨ c,a⟩∩{(x, c)|∀ (x, c) ∈𝒟_c, a'_⟨ c' ⟩∈ψ(x)},
where a'_⟨ c' ⟩ denotes a' in ⟨ c',a'⟩, and ℐ_⟨ c,a⟩ is defined in <ref>.
Considering that there are C classes in the training set 𝒮 with each class associating with a unique spurious attribute a, to effectively demonstrate the reliance on the spurious correlation ⟨ c,a⟩ with the inter-class attribute-based sample selection, we consider all the spurious correlations in 𝒮. Specifically, we apply the above selection strategy to all the C-1 spurious correlations in 𝒮 other than ⟨ c,a⟩ and obtain ℐ_⟨ c,a⟩^C as the union of the C-1 sets as follows:
ℐ_⟨ c,a⟩^C = ⋃_⟨ c',a'⟩∈𝒞_\ cℐ_⟨ c,a⟩^⟨ c',a'⟩,
where 𝒞_\ c denotes all the spurious correlations in 𝒮 other than ⟨ c,a⟩.
The inter-class attribute-based sample selection is built upon the intra-class attribute-based sample selection. In the remainder of the paper, we use the inter-class method as our default sample selection strategy, which is more effective empirically (<ref>). In certain cases, however, where there are not enough desired samples during task construction, we resort to the intra-class sample selection strategy (<ref>, Implementation details).
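A sketch of Eqs.(<ref>)–(<ref>) is given below, assuming each test sample is stored as a record with its class and its detected attribute set; the helper names and the toy class/attribute values are ours.

def intra_class_candidates(samples, c, a):
    """I_{<c,a>}: samples of class c that do NOT contain the spurious attribute a."""
    return [s for s in samples if s["class"] == c and a not in s["attrs"]]

def inter_class_candidates(samples, c, a, other_corrs):
    """I^C_{<c,a>}: samples of class c without a but containing at least one
    spurious attribute a' paired with another class c' in the support set.
    other_corrs: list of (c', a') pairs with c' != c (union over the C-1 pairs)."""
    other_attrs = {a2 for _, a2 in other_corrs}
    return [s for s in intra_class_candidates(samples, c, a)
            if s["attrs"] & other_attrs]

# Toy example with made-up records
samples = [
    {"id": 0, "class": "vase", "attrs": {"white", "table"}},
    {"id": 1, "class": "vase", "attrs": {"green"}},
    {"id": 2, "class": "vase", "attrs": {"flower"}},
]
print(inter_class_candidates(samples, "vase", "white", other_corrs=[("bench", "green")]))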
In the following, we introduce FewSTAB, a benchmark framework that uses the proposed selection strategies to construct FSC tasks containing samples with biased attributes for benchmarking spurious bias in few-shot classifiers.
§.§ FewSTAB (Part 1): Text-Based Attribute Detection
Our attribute-based sample selection methods require knowing the attributes in images, which typically involves labor-intensive human labeling. To make our method scalable and applicable to few-shot classifiers trained on different datasets, we adopt a pre-trained VLM to automatically identify distinct attributes in images in text format, which includes the following two steps.
Step 1: Generating text descriptions. We use a pre-trained VLM <cit.> ϕ to automatically generate text descriptions for images in 𝒟_test. The VLM is a general-domain model and can produce text descriptions for various objects and patterns. For example, for the vase image in <ref>(a), besides the class object itself, the VLM also detects the vase's color and another object together with its material.
Step 2: Extracting informative words. From the generated text descriptions, we extract nouns and adjectives as the detected attributes via an automatic procedure Ω. The two kinds of words are informative as a noun describes an object, and an adjective describes a property of an object. All the detected attributes form the candidate attribute set 𝒜. We realize the attribute detector ψ defined in <ref> as ψ(x)=Ω(ϕ(x)).
Remark 1: A VLM in general can extract many distinct attributes from images. On some images, the VLM may detect irrelevant attributes, such as detecting a duck in a bird image. A more capable VLM could provide better attribute detection accuracy and benefit the individual measurements on few-shot classifiers. Although FewSTAB depends on the chosen VLM, it produces consistent and robust relative measurements among all the compared FSC methods, regardless of the choice of VLM (<ref>).
Remark 2: The candidate set 𝒜 constructed with all the extracted words may contain attributes that represent the classes in 𝒟_test. However, during our attribute-based sample selection, these attributes will not be used since they always correlate with classes and therefore do not satisfy the definition of spurious attributes in <ref>.
We provide details of ϕ and Ω in <ref>.
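The composition ψ(x)=Ω(ϕ(x)) can be prototyped with off-the-shelf tools as sketched below; the Hugging Face image-to-text pipeline with a ViT-GPT2 captioning checkpoint and spaCy's small English model are our assumptions, not necessarily the exact configuration used here.

import spacy
from transformers import pipeline

# phi: image captioning; the checkpoint name below is an assumed example.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
# Omega: keep informative words (nouns and adjectives) from the caption.
nlp = spacy.load("en_core_web_sm")

def detect_attributes(image_path):
    """Return the set of attribute words detected in one image, psi(x)."""
    caption = captioner(image_path)[0]["generated_text"]   # phi(x)
    doc = nlp(caption)                                      # Omega(.)
    return {tok.lemma_.lower() for tok in doc if tok.pos_ in {"NOUN", "ADJ"}}

# Example usage with a hypothetical image path:
# attrs = detect_attributes("test_images/example.jpg")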
§.§ FewSTAB (Part 2): FSC Task Construction
Constructing a C-way N_𝒮-shot FSC task 𝒯 for benchmarking spurious bias in few-shot classifiers involves constructing a support (training) set 𝒮 and a query (test) 𝒬 with biased attributes.
Constructing the support set. The support set contains the spurious correlations that we aim to demonstrate to a few-shot classifier. As a fair and rigorous benchmark system, FewSTAB makes no assumptions on the few-shot classifiers being tested and randomly samples C classes from 𝒞_test. For each sampled class, it randomly selects a spurious correlation ⟨ c,a⟩ in 𝒟_test with a∈𝒜 . To effectively demonstrate the spurious correlation ⟨ c,a⟩ to a few-shot classifier, we select samples of the class c such that (1) they all have the spurious attribute a and (2) do not have spurious attributes from the other C-1 spurious correlations.
We construct 𝒮_c with N_𝒮 samples for the class c that satisfy the above two conditions. Thus, the spurious attribute a becomes predictive of the class c in 𝒮_c. We take the union of all C such sets to get 𝒮=∪_c=1^C𝒮_c.
<ref>(b) demonstrates the case when C=3. Note that we have no requirements for other non-selected attributes in 𝒜 to ensure that we have enough samples for 𝒮_c.
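A sketch of the support-set construction is given below, reusing the same hypothetical sample records as before; the restart-on-failure logic described in the implementation details is reduced to a simple error.

import random

def build_support_class(samples, c, a, other_attrs, n_shot):
    """S_c: n_shot samples of class c that (1) contain the chosen spurious
    attribute a and (2) contain none of the attributes chosen for other classes."""
    pool = [s for s in samples
            if s["class"] == c and a in s["attrs"] and not (s["attrs"] & other_attrs)]
    if len(pool) < n_shot:
        raise ValueError("not enough samples; restart the task construction")
    return random.sample(pool, n_shot)

def build_support_set(samples, correlations, n_shot):
    """correlations: dict {class c: chosen spurious attribute a}."""
    support = {}
    for c, a in correlations.items():
        other = {a2 for c2, a2 in correlations.items() if c2 != c}
        support[c] = build_support_class(samples, c, a, other, n_shot)
    return support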
Constructing the query set. To evaluate the robustness to the spurious correlations formulated in 𝒮, we first construct a candidate set ℐ_⟨ c,a⟩^C in Eq. (<ref>) for each spurious correlation ⟨ c,a⟩ in 𝒮. Since we have no requirements on the non-selected attributes that are not used to formulate spurious correlations in 𝒮, a few-shot classifier may predict query samples via some of these attributes, e.g., the yellow blocks in <ref>(b), bypassing the test on the formulated spurious correlations in 𝒮. To address this, we propose query sample selection below.
Query sample selection: We select query samples from ℐ_⟨ c,a⟩^C that are least likely to have non-selected spurious attributes, such as the ones enclosed with red boxes in <ref>(b).
To achieve this, we first calculate the fraction of sample-label pairs in ℐ_⟨ c,a⟩^C that have the attribute a as
p_a=|{x|a∈ψ(x),∀ (x,c)∈ℐ_⟨ c,a⟩^C}|/|ℐ_⟨ c,a⟩^C|,
where |·| denotes the size of a set, a∈𝒜̃, and 𝒜̃ contains all non-selected attributes. A larger p_a indicates that the attribute a occurs more frequently in data and is more likely to be used in formulating prediction shortcuts. We then calculate the likelihood score for each (x,c)∈ℐ_⟨ c,a⟩^C as s(x) = ∑_a∈ψ(x),a∈𝒜̃ p_a, i.e., the summation of all p_a of non-selected attributes in x. The likelihood score will be zero if there are no non-selected attributes in x. A large s(x) indicates that the image x can be predicted via many non-selected attributes. Therefore, we select N_𝒬 samples from ℐ_⟨ c,a⟩^C that have the lowest likelihood scores to construct 𝒬_c. Then, we have 𝒬=∪_c=1^C𝒬_c, which contains samples for evaluating the robustness of a few-shot classifier to the spurious correlations in 𝒮.
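The query-side selection can be sketched as follows, reusing the candidate sets and (hypothetical) sample records introduced earlier; p_a and s(x) follow Eq.(<ref>) and the score definition above.

from collections import Counter

def select_query_samples(candidates, selected_attrs, n_query):
    """candidates: the set I^C_{<c,a>} for one class (list of sample records).
    selected_attrs: attributes already used to form spurious correlations in S.
    Picks the n_query samples least likely to carry non-selected spurious attributes."""
    # p_a: fraction of candidates containing each non-selected attribute a
    counts = Counter(a for s in candidates for a in s["attrs"] if a not in selected_attrs)
    p = {a: n / len(candidates) for a, n in counts.items()}

    # s(x): sum of p_a over the non-selected attributes present in x
    def score(s):
        return sum(p.get(a, 0.0) for a in s["attrs"] if a not in selected_attrs)

    return sorted(candidates, key=score)[:n_query]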
Complexity analysis. The text-based attribute detection only needs to use VLMs once to extract attributes for each test set of a
dataset. For the task construction, in a nutshell, we analyze the attributes of samples from each of the C classes and do the sampling. Thus, the computational complexity is O(N_𝒯CN_cN_𝒜), where N_c is the maximum number of samples per class in test data, N_𝒜 is the number of extracted attributes. We only need to run the process once and use the generated tasks to benchmark various models.
§ EXPERIMENTS
§.§ Experimental Setup
Datasets. We used two general datasets of different scales, miniImageNet <cit.> and tieredImageNet <cit.>, and one fine-grained dataset, CUB-200 <cit.>. Each dataset consists of 𝒟_train, 𝒟_val, and 𝒟_test for training, validation, and test, respectively (see Appendix).
All images were resized to 84×84.
FSC methods. We trained FSC models with ten algorithms covering three major categories. For gradient-based meta-learning algorithms, we chose ANIL <cit.>, LEO <cit.>, and BOIL <cit.>. For metric-based meta-learning algorithms, we chose ProtoNet <cit.>, DN4 <cit.>, R2D2 <cit.>, CAN <cit.>, and RENet <cit.>. For transfer learning algorithms, we chose Baseline++ <cit.> and RFS <cit.>.
See Appendix for more details.
Any backbone can be used as the feature extractor. For fair comparisons between different methods, we used the ResNet-12 backbone adopted in <cit.>.
Text-based attribute detection. We used a pre-trained VLM named
ViT-GPT2 <cit.> to generate text descriptions for images in 𝒟_test.
After that, we used Spacy (<https://spacy.io/>) to extract nouns and adjectives from these descriptions automatically.
We also used another pre-trained VLM, BLIP <cit.>, to test whether FewSTAB can produce consistent results.
The statistics of the detected attributes are shown in Table <ref>.
Implementation details.
We trained FSC
models with the implementation in <cit.>.
Each model was trained on 𝒟_train of a dataset with one of the ten FSC methods. For each meta-learning based method, we trained two models using randomly sampled 5-way 1-shot and 5-way 5-shot tasks, respectively. All the tasks have 15 samples per class in the query set. We saved the model that achieves the best validation accuracy on 𝒟_val for evaluation.
For FewSTAB, if we do not have enough desired samples to construct a support set, we redo the construction from the beginning. If there are not enough desired samples to construct a query set, we first try to use the intra-class attribute-based sample selection; if the desired samples are still not enough, we redo the construction from the beginning.
We created 3000 tasks for model evaluation. All experiments were conducted on the NVIDIA RTX 8000 GPUs.
§.§ Visualization of a Constructed Task
We show a 5-way 1-shot task constructed by FewSTAB in Fig. <ref>. Each class in the support set correlates with a unique spurious attribute. The query samples of a class do not contain the spurious attribute correlated with that class, while some of them contain the spurious attributes associated with the other classes in the support set, as can be seen for each class in Fig. <ref>.
FewSTAB introduces biased attributes in the task so that query samples can be easily misclassified as other classes by a few-shot classifier that relies on the spurious correlations in the support set.
§.§ Effectiveness of FewSTAB
FewSTAB can effectively reveal spurious bias in few-shot classifiers.
We show in Table <ref> the wAcc (Eq. (<ref>)) on 5-way 1/5-shot test tasks that
are randomly
sampled (wAcc-R) and are constructed with FewSTAB (wAcc-A), respectively. FewSTAB generates FSC test tasks only based on the class-attribute correlations in data. In each test setting, the FSC methods in Table <ref> are evaluated with the same FSC tasks.
We observe that wAcc-A is consistently lower than wAcc-R on the three datasets and on two test-shot numbers, showing that FewSTAB
is more effective than the standard evaluation procedure (random task construction) in exhibiting the spurious bias in various few-shot classifiers. We additionally show that FewSTAB also works on the most recent FSC methods and can reflect the improvement made to mitigate spurious bias (see Appendix).
Spearman's rank correlations between wAcc-A and wAcc-R in Table <ref>.
Dataset          1-shot  5-shot
miniImageNet      0.96    0.95
tieredImageNet    0.96    0.90
CUB-200           1.00    0.94
FewSTAB reveals new robustness patterns among FSC methods. In Table <ref>, we report the Spearman's rank correlation coefficients <cit.> between the values of wAcc-R and wAcc-A from Table <ref>. The coefficients range from -1 to 1, with larger values indicating that the ranks of the FSC methods based on wAcc-R are more similar to those based on wAcc-A. In the 1-shot setting,
it is not effective to control the spurious correlations since we only have one sample per class in the support set. Hence, the coefficients are large, and the ranks based on wAcc-A are similar to those based on wAcc-R. In the 5-shot cases, we have more samples to demonstrate the spurious correlations. The coefficients become smaller, , the ranks based on wAcc-A show different trends from those based on wAcc-R. In this case, FewSTAB reveals new information on FSC methods' varied degrees of robustness to spurious bias.
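The rank correlations reported above can be reproduced with SciPy from the two lists of per-method accuracies; the numbers below are placeholders, not values from the tables.

from scipy.stats import spearmanr

# Placeholder values: wAcc-R and wAcc-A of the same ten methods, in the same order.
wacc_r = [55.1, 48.3, 60.2, 57.4, 52.0, 63.5, 61.1, 64.0, 58.7, 59.9]
wacc_a = [41.0, 35.2, 44.8, 40.1, 37.5, 47.2, 43.0, 48.9, 42.6, 43.9]

rho, pval = spearmanr(wacc_r, wacc_a)
print(f"Spearman rank correlation: {rho:.2f} (p = {pval:.3f})")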
FewSTAB can benchmark spurious bias in varied degrees. As shown in Fig. <ref>, the accuracy gap, defined as wAcc-R minus wAcc-A, in general, becomes larger when we switch from 5-way 1-shot to 5-way 5-shot tasks. Compared with the random task construction, FewSTAB creates more challenging tasks in the 5-shot case for demonstrating spurious bias in few-shot classifiers. In other words, with a higher shot value in the constructed test tasks, FewSTAB aims to benchmark spurious bias in a higher degree.
§.§ A New Dimension of Evaluation and a New Design Guideline
Figure: Acc versus wAcc-A of the ten FSC methods tested on 5-way 5-shot tasks from miniImageNet.
FewSTAB creates a new dimension of evaluation on the robustness to spurious bias.
We demonstrate this
with a scatter plot (Fig. <ref>) of Acc (Eq. (<ref>))
and wAcc-A of the ten few-shot classifiers. FewSTAB offers new information regarding different few-shot classifiers' robustness to spurious bias as we observe that Acc does not well correlate with wAcc-A.
A high wAcc-A indicates that the classifier is robust to spurious bias, while a high Acc indicates that the classifier can correctly predict most of the samples. With the scatter plot, we can view tradeoffs between the two metrics on existing few-shot classifiers. A desirable few-shot classifier should appear in the top-right corner of the plot.
Figure: Accuracy gaps of few-shot classifiers tested on 1-shot, 5-shot, and 10-shot tasks constructed from (a) miniImageNet, (b) tieredImageNet, and (c) CUB-200 datasets.
§.§ FewSTAB Enables Designs for Varied Degrees of Robustness
As demonstrated in Section <ref>, FewSTAB can benchmark spurious bias in varied degrees, which in turn enables practitioners to design robust few-shot classifiers targeted for different degrees of robustness to spurious bias. The reason for differentiating designs for varied degrees of robustness is that the same design choice may not work under different robustness requirements. For example, increasing shot number in training tasks is a common strategy for improving the few-shot generalization of meta-learning based methods. We trained few-shot classifiers with 5-way 5-shot and 5-way 1-shot training tasks randomly sampled from 𝒟_train, respectively. We then calculated the accuracy gap defined as the wAcc-A of a model trained on 5-shot tasks minus the wAcc-A of the same model trained on 1-shot tasks. A positive and large accuracy gap indicates that this strategy is effective in improving the model's robustness to spurious bias.
In Fig. <ref>, on each of the three datasets, we give results of the eight meta-learning based FSC methods on the 5-way 1-, 5-, and 10-shot FewSTAB tasks which are used to demonstrate the strategy's robustness to increased degrees of spurious bias. This strategy does not work consistently under different test shots. For example, in Fig. <ref>(a) this strategy with CAN only works the best on the 5-way 5-shot FewSTAB tasks.
§.§ Ablation Studies
Techniques used in FewSTAB. We analyze how different sample selection methods affect the effectiveness of FewSTAB in Table <ref>. With only intra-class attribute-based sample selection, we randomly select query samples from Eq. (<ref>). For inter-class attribute-based sample selection and intra-class attribute-based sample selection (automatically included by Eq. (<ref>)), we randomly select query samples from Eq. (<ref>). FewSTAB uses all the techniques in Table <ref>. We define accuracy drop as wAcc-R minus wAcc-A, and we use the drop averaged over the ten FSC methods tested on 5-way 5-shot tasks from the miniImageNet dataset as our metric.
A larger average drop indicates that the corresponding sample selection method is more effective in reflecting the spurious bias in few-shot classifiers. We observe that all proposed techniques are effective and the inter-class attribute-based sample selection is the most effective method.
Choice of VLMs.
Although our main results are based on the pre-trained ViT-GPT2 model <cit.>, we show in Table <ref> that when switching to a different VLM, i.e., BLIP <cit.>, the relative ranks of different few-shot classifiers based on wAcc-A still hold with high correlations. In other words, FewSTAB is robust to different choices of VLMs.
Detection accuracy of VLMs. A VLM may miss some attributes due to its limited capacity, resulting in a small detection accuracy. However, the detection accuracy of a VLM has little impact on our framework.
To demonstrate this, we adopt a cross-validation strategy, i.e., we use the outputs from one VLM as the ground truth to evaluate those from another VLM, since assessing the detection accuracy of a VLM typically requires labor-intensive human labeling. On the CUB-200 dataset, the detection accuracy of ViT-GPT2 based on BLIP's outputs is 70.12%, while the detection accuracy of BLIP based on ViT-GPT2's outputs is 59.28%. Although the two VLMs differ significantly in the detected attributes, our framework shows almost consistent rankings of the evaluated FSC methods (Table <ref>).
Additional results are presented in Appendix.
§ CONCLUSION
In this paper, we proposed a systematic and rigorous benchmark framework called FewSTAB for evaluating the robustness of few-shot classifiers to spurious bias. FewSTAB adopts attribute-based sample selection strategies to construct FSC test tasks with biased attributes so that the reliance on spurious correlations can be effectively revealed. FewSTAB can automatically benchmark spurious bias in few-shot classifiers on any existing test data thanks to its use of a pre-trained VLM for automated attribute detection. With FewSTAB, we provided a new dimension of evaluation on the robustness of few-shot image classifiers to spurious bias and a new design guideline for building robust few-shot classifiers. FewSTAB can reveal and enable designs for varied degrees of robustness to spurious bias. We hope FewSTAB will inspire new developments on designing robust few-shot classifiers.
§ ACKNOWLEDGMENTS
This work is supported in part by the US National Science Foundation under grants 2313865, 2217071, 2213700, 2106913, 2008208, 1955151.
§ APPENDIX
The appendix is organized as follows:
we introduce the ten FSC algorithms adopted in the paper in <ref>. Then, we give the details of the evaluation metrics used in the main paper in <ref>. In <ref>, we show statistics of the datasets used in this paper along with detailed training settings. In <ref>, we analyze different methods for constructing the support and query sets in a FewSTAB task (<ref>), show the scatter plots of wAcc-A versus Acc from all the training settings (<ref>), present more results on the effectiveness of FewSTAB (<ref>), and demonstrate the robustness of FewSTAB with different VLMs (<ref>). Finally, we give more examples of the tasks constructed by FewSTAB in <ref>.
§ FEW-SHOT CLASSIFICATION ALGORITHMS
ANIL (Almost No Inner Loop) <cit.>: ANIL is an optimization-based meta-learning method and follows a similar optimization procedure to MAML <cit.> whose few-shot adaptation algorithm 𝒪 is to update the whole model using gradient descent with a few learning samples. ANIL does not update the whole model and instead only updates the classifier in the last layer.
BOIL (Body Only update in Inner Loop) <cit.>: BOIL is another optimization-based meta-learning method. Its adaptation algorithm 𝒪 freezes the update of the classifier and only updates the embedding backbone.
LEO (Latent Embedding Optimization) <cit.>: LEO is similar to MAML. But instead of directly optimizing high-dimensional model parameters, its adaptation algorithm 𝒪 learns a generative distribution of model parameters and optimizes the model parameters in a low-dimensional latent space.
ProtoNet (Prototypical Networks) <cit.>: ProtoNet is a metric-based meta-learning method. Its adaptation algorithm 𝒪 first calculates a prototype representation for each class as the mean vector of each support class, and then uses a nearest-neighbor classifier created with the class prototypes and the Euclidean distance function to predict a query image.
DN4 (Deep Nearest Neighbor Neural Network) <cit.>: DN4 is a metric-based meta-learning method, which does not use attributes after pooling for classification. Instead, DN4 uses the local attributes before pooling and employs a local descriptor based image-to-class measure for classification.
R2D2 (Ridge Regression Differentiable Discriminator) <cit.>: R2D2 is a metric-based meta-learning method and adopts ridge regression as the few-shot adaptation algorithm 𝒪. The advantage of R2D2 is that ridge regression
enjoys a closed-form solution and can learn efficiently with a few training samples.
CAN (Cross Attention Network) <cit.>: CAN is a metric-based meta-learning method and calculates the cross attention between each pair of class and query features so as to exploit and learn discriminative features for predictions.
RENet (Relational Embedding Network) <cit.>: RENet is a metric-based meta-learning method. It uses a self-correlational representation module and a cross-correlational attention module to learn relational patterns within and between images, respectively.
RFS (Rethinking Few-Shot) <cit.>: RFS is a transfer learning method. It first trains an embedding network using base classes. Then, instead of fine-tuning the last fully-connected classification layer, it learns a new logistic regression classifier with L2-normalized feature vectors from a few samples of novel classes.
Baseline++ <cit.>: Baseline++ is a transfer learning method. It first pretrains an embedding network using samples from base classes. Then, it fine-tunes the last fully-connected layer with a few samples of novel classes but replaces the standard inner product with a cosine distance between input features and the weight vectors of the layer.
§ EVALUATION METRICS
Standard accuracy (Acc): Acc measures on average how a few-shot classifier generalizes to different tasks with novel classes not seen before. We define Acc as follows,
Acc=1/N_T∑_t=1^N_T∑_c=1^CM_c(𝒯_t;f_θ, 𝒪),
where N_T is the number of test tasks, C is the number of classes per task, 𝒯_t is the t-th C-way N_S-shot task with N_Q query samples per class, f_θ is a few-shot classifier, 𝒪 is the few-shot adaptation algorithm associated with f_θ, M_c denotes the classification accuracy on the query samples of the class c. This metric is used in <ref>.
Class-wise worst classification accuracy (wAcc): wAcc characterizes the performance limit of f_θ in learning novel classes, and we calculate wAcc as the average of the smallest per-class classification accuracy on query samples over N_T tasks, i.e.,
wAcc=1/N_T∑_t=1^N_Tmin_c=1,…,CM_c(𝒯_t;f_θ, 𝒪).
Depending on what kinds of tasks are used for evaluation, we have the following two types of wAcc:
* wAcc-R: If the test tasks are randomly sampled in Eq. (<ref>), then we get wAcc-R on N_T randomly sampled tasks. This metric is used in <ref> as a baseline for highlighting the effectiveness of our FewSTAB in revealing the spurious bias in few-shot classifiers.
* wAcc-A: If the N_T test tasks in Eq. (<ref>) are constructed by our FewSTAB, then we get wAcc-A, which characterizes the robustness of a few-shot classifier to spurious bias. This metric is the main metric used in the experiments.
Accuracy gap between wAcc-R and wAcc-A: We obtain the wAcc-R and wAcc-A of a model by testing it with tasks randomly sampled and with tasks constructed by FewSTAB, respectively. The accuracy gap is calculated as the wAcc-R minus the wAcc-A. A large gap indicates the effectiveness of FewSTAB in revealing the robustness of a few-shot classifier to spurious bias. This metric is used in <ref> and <ref>.
Accuracy gap between wAcc-A of models trained with different shots:
We train a few-shot classifier with C-way (e.g. 5-way) 5-shot and 1-shot training tasks from 𝒟_train, respectively. Then, we test the obtained two classifiers with the same tasks created by FewSTAB and calculate the accuracy gap as the wAcc-A of the model trained with 5-shot tasks minus the wAcc-A of the model trained with 1-shot tasks. A large accuracy gap indicates that increasing training shots can improve a few-shot classifier's robustness to spurious bias. This metric is used in <ref>.
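For concreteness, the metrics above can be computed from a matrix of per-class accuracies; the following NumPy sketch is our own illustration (the array layout is an assumption, and M_c is averaged over both classes and tasks for Acc):

import numpy as np

def evaluation_metrics(per_class_acc_rand, per_class_acc_fewstab):
    """Each argument is an array of shape (N_T, C): entry (t, c) is M_c, the
    accuracy on the query samples of class c in test task t, for randomly
    sampled tasks and for FewSTAB-constructed tasks, respectively."""
    acc = per_class_acc_rand.mean()                      # standard accuracy (Acc)
    wacc_r = per_class_acc_rand.min(axis=1).mean()       # wAcc-R on random tasks
    wacc_a = per_class_acc_fewstab.min(axis=1).mean()    # wAcc-A on FewSTAB tasks
    gap = wacc_r - wacc_a                                # accuracy gap
    return acc, wacc_r, wacc_a, gap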
§ EXPERIMENTAL SETTINGS
We conducted experiments using three datasets: miniImageNet, tieredImageNet, and CUB-200. Each of these datasets has training (𝒟_train), validation (𝒟_val), and test (𝒟_test) sets. Numbers of classes and samples in the three sets of the three datasets are shown in <ref>.
We trained eight meta-learning based FSC methods with the ResNet-12 backbone using 5-way 1-shot or 5-way 5-shot tasks from each 𝒟_train of the three datasets, resulting in a total of 48 models. For the two transfer learning based methods, RFS and Baseline++, we trained them on each 𝒟_train of the three datasets using mini-batch stochastic gradient descent. As a result, we trained a total of 54 models.
To facilitate reproducibility and further research, the training configurations and hyperparameters are provided in <ref> for training on the miniImageNet, tieredImageNet, and CUB-200 datasets, respectively. We closely followed the settings in <cit.> to train these models. In the “Mode” column of these tables, “T(5w1s)” denotes that we trained the corresponding model using 5-way 1-shot tasks, “T(5w5s)” denotes that we trained the corresponding model using 5-way 5-shot tasks, and “B (128)” denotes that we trained the corresponding model using mini-batch stochastic gradient descent with a batch size of 128. In the “LR scheduler” column, “CosineAnnealingLR” denotes a cosine annealing learning rate scheduler, “StepLR(20, 0.5)” denotes a learning rate scheduler which decreases the learning rate after every 20 epochs by multiplying it with 0.5, and “MultiStepLR([60, 80], 0.1)” denotes a learning rate scheduler which decreases the learning rate after 60 epochs and 80 epochs by multiplying it with 0.1 each time. The “Training episodes” column in these tables denotes the number of tasks used in each epoch. The “Episode size” column of these tables denotes the number of tasks jointly used to do a model update.
§ ADDITIONAL EXPERIMENTAL RESULTS
§.§ Ablation Studies
Support set construction methods:
To construct the support set in an FSC test task, FewSTAB randomly selects samples that have mutually exclusive spurious attributes across the randomly selected classes, which is illustrated in <ref>(a) and formally described in <ref> in the main paper. To further show the effectiveness of this construction method, we keep the techniques for constructing the query set in an FSC test task, and report in <ref> the results of two alternatives for constructing the support set: randomly selecting samples of the selected classes (SC1) and randomly selecting samples with targeted attributes for selected classes with no further constraints on the selected samples (SC2). We also include the results of the proposed one: randomly selecting samples with mutually exclusive targeted attributes across the selected classes (SC3) in <ref>.
A larger average drop in <ref> indicates that the corresponding support set construction method is more effective in revealing robustness of few-shot classifiers to spurious bias. We observe that the third technique SC3, which is used by FewSTAB, achieves the largest average accuracy drop among the techniques compared on the miniImageNet and tieredImageNet datasets and achieves a comparable drop to SC2 on the CUB-200 dataset due to the limited number of detected attributes in this dataset.
Query set construction methods:
There are three techniques used by FewSTAB to construct the query set in a task: the intra-class attribute-based sample selection (QC1), the inter-class attribute-based sample selection (QC2), which is a special case of the intra-class attribute-based sample selection, and the query sample selection (QC3). We have done an ablation study on the effectiveness of the three techniques in <ref> in the main paper using the miniImageNet dataset. Here, we include the results on all the three datasets in <ref>.
We observe that all the three proposed techniques in FewSTAB are effective with positive accuracy drops for all the ten FSC methods on the three datasets. Moreover, using the inter-class attribute-based sample selection significantly improves the average drops of the intra-class attribute-based sample selection, with 8.17%, 13.26%, and 8.52% absolute gains on the miniImageNet, tieredImageNet, and CUB-200 datasets, respectively.
§.§ Scatter Plots of wAcc-A versus Acc
We show the scatter plots of wAcc-A versus Acc (standard accuracy) of the ten FSC methods when they are tested with FewSTAB and randomly constructed FSC test tasks, respectively, on the three datasets in <ref> (exact values are shown in <ref>). We observe that an FSC method having a higher Acc does not necessarily have a higher wAcc-A. For example, in <ref>(a), BOIL has a higher Acc but a lower wAcc-A than ProtoNet, LEO, and Baseline++. Moreover, we observe that in <ref>(b) and (d), for methods that achieve high standard accuracies, e.g., for the top-5 methods in terms of Acc, their relative increments in wAcc-A are small (with differences smaller than 1%) compared with their relative increments in Acc. In other words, methods with higher standard accuracies do not necessarily learn more robust decision rules, since their wAcc-A values remain comparable to those with lower Acc values.
The values of Acc and wAcc-A on the fine-grained dataset CUB-200 in <ref>(e) and (f) show a different pattern from those in <ref>(c) and (d). More specifically, methods that achieve high Acc values, e.g., R2D2, ProtoNet, DN4, RENet, and RFS, tend to have comparable relative increments in wAcc-A compared with their relative increments in Acc. This indicates that on a fine-grained dataset, which does not have many spurious attributes, an FSC method with a higher Acc also tends to have a higher wAcc-A or improved robustness to spurious bias.
In summary, our framework, FewSTAB, reveals new robustness patterns of FSC methods in different evaluation settings.
§.§ Effectiveness of FewSTAB: More Results
Results on more recent methods.
Note that our method selection in <ref> aims to cover diverse methods and allow for rigorous comparison in the same setting. Importantly, our method is general and can continue to evaluate emerging methods. To demonstrate, we provide results on recent methods, namely UniSiam <cit.>, PsCo <cit.>, and BECLR <cit.>. FewSTAB uncovers that even state-of-the-art methods still suffer from spurious bias: we observe large gaps between wAcc-R and wAcc-A (Table <ref>) when we explicitly construct the test tasks to have spurious correlations. This also shows that FewSTAB is effective for various FSC methods.
Results on IFSL. Interventional few-shot learning (IFSL) <cit.> is a method that specifically addresses spurious correlations in few-shot classification. We follow the settings in <cit.> and report the results of MAML <cit.>, MN <cit.>, SIB <cit.>, and MTL <cit.> in Table <ref>, where “Base” refers to one of the four methods, “+IFSL” denotes using IFSL on top of “Base”, and the better performance between the two is in bold. Overall, IFSL is effective in mitigating spurious bias in few-shot classifiers except for some methods, e.g. SIB. This shows that FewSTAB can reveal the improvement made to mitigate spurious bias.
§.§ Robustness of FewSTAB with Different VLMs
We instantiated our FewSTAB with a pre-trained ViT-GPT2 and a pre-trained BLIP, respectively. We calculated the wAcc-A on FSC test tasks constructed by FewSTAB with the two VLMs on the miniImageNet, tieredImageNet, and CUB-200 datasets, respectively.
Effects on individual and relative measurements. We observe from <ref> that FewSTAB with BLIP produces lower wAcc-A than with ViT-GPT2 on the miniImageNet and tieredImageNet datasets. This indicates that FewSTAB with BLIP is more effective in uncovering the robustness of few-shot classifiers to spurious bias. We reason that BLIP can identify more attributes than ViT-GPT2 (<ref>) and therefore more spurious correlations can be formulated by our FewSTAB.
However, on the fine-grained CUB-200 dataset, which contains different bird classes, FewSTAB with BLIP is less effective than with ViT-GPT2. Although BLIP can identify more attributes than ViT-GPT2 in this fine-grained dataset, it may also detect more attributes related to classes. To validate this, we first found a set of attributes 𝒰_BLIP unique to BLIP from all the attributes 𝒜_BLIP detected by BLIP, and a set of attributes 𝒰_ViT-GPT2 unique to ViT-GPT2 from all the attributes 𝒜_ViT-GPT2 detected by ViT-GPT2. Specifically, we have 𝒰_BLIP=𝒜_BLIP-𝒜_ViT-GPT2, and 𝒰_ViT-GPT2=𝒜_ViT-GPT2-𝒜_BLIP. Then, we found in 𝒰_BLIP and 𝒰_ViT-GPT2 how many attributes contain “bird”, “beak”, “wing”, “breast”, “tail”, or “mouth”, which are all related to the concept of a bird. We found that there are 11 attributes, or 8.5% of total attributes, in 𝒰_BLIP that are related to a bird, while there is only 1 attribute (2.4% of total attributes) in 𝒰_ViT-GPT2 that is related to a bird. Due to the limited capability of BLIP, these class-related attributes cannot be detected in all the images. Hence, although these attributes are not spurious, they are treated as spurious attributes and used by FewSTAB to construct FSC test tasks. In this case, FewSTAB becomes ineffective in revealing the spurious bias in few-shot classifiers since the classifiers can exploit spurious correlations in the tasks to achieve high accuracies.
Nevertheless, from the perspective of comparing the robustness of different FSC methods to spurious bias, the test tasks constructed by FewSTAB using different VLMs can reveal consistent ranks in terms of wAcc-A for different FSC methods (<ref>).
Detection accuracies of VLMs. Using different VLMs may generate different sets of attributes. Some sets of attributes may not exactly reflect the data being described, resulting in low detection accuracies. For example, some attributes are not identified by a VLM or the identified attributes do not match with the ground truth attributes. To analyze how the detection accuracy of a VLM affects our framework, we show in <ref> the detection accuracies of the two VLMs that we used in our paper along with the Spearman's rank correlation coefficients between the evaluation results on the ten FSC methods based on the two VLMs. To calculate the detection accuracy of a VLM without the labor-intensive human labeling, we use the outputs of another VLM as the ground truth. Specifically, for the i'th image, we have two detected sets of attributes, 𝒜_query^i and 𝒜_ref^i, representing the attributes from a VLM being evaluated and the ones from another VLM serving as the ground truth attributes. The detection accuracy is calculated as follows:
Acc(VLM_query,VLM_ref)=1/|𝒟_test|∑_i=1^N_test|𝒜_query^i∩𝒜_ref^i|/|𝒜_ref^i|,
where N_test=|𝒟_test|, and |·| denotes the size of a set. For example, to calculate the detection accuracy of ViT-GPT2, we set VLM_query=ViT-GPT2 and VLM_ref=BLIP. From <ref>, we observe that the detection accuracies of the two VLMs are not high, indicating that the attributes identified by the two VLMs are very different. However, the two VLMs are well-established in practice and can identify many attributes from images (<ref>). The correlation coefficients in <ref> indicate that for well-established VLMs, the detection accuracies have little impact on the comparison of robustness to spurious bias between different FSC methods.
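A minimal sketch of this computation, assuming the detected attributes are available as one set per test image (variable names are ours; the guard against empty reference sets is our own addition):

def vlm_detection_accuracy(query_attrs, ref_attrs):
    """query_attrs, ref_attrs: lists over the test set; element i is the set of
    attributes detected in the i-th image by the query VLM and by the reference
    VLM (treated as ground truth), respectively."""
    fractions = []
    for a_q, a_r in zip(query_attrs, ref_attrs):
        if not a_r:            # skip images with no reference attributes
            continue
        fractions.append(len(a_q & a_r) / len(a_r))
    return sum(fractions) / len(fractions)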
§ TASKS CONSTRUCTED BY FEWSTAB
FewSTAB does not construct tasks based on a specific model. Hence, FewSTAB is a fair evaluation framework for different FSC methods, and the tasks constructed by FewSTAB can be used to reveal few-shot classifiers' varied degrees of robustness to spurious bias.
We show a 5-way 1-shot task constructed by FewSTAB using samples from the tieredImageNet and CUB-200 datasets in <ref> and <ref>, respectively. Query samples for each class are constructed such that they do not contain the spurious attribute from the support set sample of the same class but contain spurious attributes from support set samples of other classes. For example, in <ref>, the class has a support set sample with a background, but most of its query samples have a which is the spurious attribute from the support set sample of the class.
Moreover, in <ref>, the class has a support set sample with a background, but its query samples all have a background similar to that in the support set sample of the class. Note that the sample selection may not be ideal due to the limited capacity of VLMs. For example, in <ref>, some query images of the class have the spurious attribute which also appears in the support set image of the class, leading to a high accuracy on these query samples for a model that relies on this spurious attribute. However, this does not affect our evaluation of different FSC methods on their robustness to spurious bias since the same set of tasks is used to evaluate different FSC methods. Moreover, our metric, wAcc-A, measures the worst per-class classification accuracy over FSC tasks, making our evaluation robust to the sampling noise caused by a VLM.
|
http://arxiv.org/abs/2409.02207v1 | 20240903181931 | Quantum Neural Network Extraction Attack via Split Co-Teaching | [
"Zhenxiao fu",
"Fan Chen"
] | quant-ph | [
"quant-ph"
] |
§ ABSTRACT
Quantum Neural Networks (QNNs), now offered as QNN-as-a-Service (QNNaaS), have become key targets for model extraction attacks. State-of-the-art methods use ensemble learning to train accurate substitute QNNs, but our analysis reveals significant limitations in real-world NISQ environments, where noise and cost constraints undermine their effectiveness.
In this work, we introduce a novel attack, split co-teaching, which leverages label variations to split queried data by noise sensitivity and employs co-teaching schemes to enhance extraction accuracy. Experiments on NISQ processors demonstrate that our approach outperforms classical extraction attacks by 6.5%∼9.5% and existing QNN extraction methods by 0.1%∼3.7% across various tasks.
Quantum neural network, model extraction attack, noisy intermediate-scale quantum, co-teaching
§ INTRODUCTION
Motivation.
Quantum Neural Networks (QNNs) are powerful tools for complex problem-solving <cit.>, but their development requires specialized expertise and costly data, making them valuable intellectual property (IP) now offered as QNN-as-a-Service (QNNaaS) <cit.>, as illustrated in Figure <ref>. This value has attracted adversaries who seek to steal QNNs from noisy intermediate-scale quantum (NISQ) cloud via model extraction <cit.>.
Among these, QuantumLeak <cit.> stands out as the state-of-the-art (SOTA), utilizing ensemble learning to reduce noise in queried labels from QNNaaS, enabling the training of an accurate substitute QNN.
However, our experiments reveal that QuantumLeak faces substantial challenges in real-world NISQ environments. We increased the query rounds (e.g., from 3 <cit.> to 5) to address varying quantum noise and reduced the number of queried data points (e.g., from 6000 <cit.> to 3000) to minimize costs (e.g., $1.6 per second for accessing IBM quantum computers <cit.>) and maintain stealth. These adjustments led to a 7.49%∼9.85% accuracy drop, rendering the QuantumLeak attack ineffective. Consequently, we are exploring new QNN extraction attack strategies better suited to real-world conditions.
Contributions.
This work makes the following contributions:
* In Section <ref>,
we identified a key limitation in QNN extraction attacks like QuantumLeak <cit.>, which relies on ensemble learning to re-weight noisy labels. In real-world NISQ conditions, this method fails because highly noisy data misleads the substitute QNN, making pre-filtering necessary to separate clean from noise-prone data.
Inspired by the success of the classical co-teaching strategy <cit.>, which employs multiple neural networks to classify and clean noisy data, we adapted co-teaching to the quantum domain. However, its effectiveness is limited, as the loss-based classification of noisy and clean data, essential to co-teaching, does not effectively translate to quantum settings.
* In Section <ref>,
we introduce a quantum co-teaching framework, termed split co-teaching, to address the challenges identified in our preliminary study.
This approach involves splitting noise-robust data from noise-vulnerable data based on variations in queried labels obtained at different times, followed by co-teaching QNNs with configurations tailored to the noise sensitivity of the data.
* In Section <ref>,
we implemented the framework on a NISQ device and compared the local substitute QNN's accuracy with QNNs trained via SOTA extraction attacks.
Results show that our approach outperforms the classical extraction attack <cit.> by 6.5%∼9.5% and the QNN extraction attack <cit.> by 0.1%∼3.7% across various tasks.
§ BACKGROUND
QNN Basics.
Quantum neural networks are a representative NISQ algorithm capable of operating on current noisy quantum computers. Central to a QNN are Variational Quantum Circuits (VQCs) <cit.>, which are parameterized circuit ansätze typically implemented with one-qubit gates (i.e., 1QG) for rotation and two-qubit gates (i.e., 2QG) for entanglement.
As shown in the shaded block of Figure <ref>, a standard QNN comprises three key components:
(1) a data encoder that embeds data into quantum states;
(2) a multi-layer, trainable VQC circuit; and
(3) a measurement layer that projects quantum states into probabilistic vectors.
QNNs generate the final prediction by applying an activation function (e.g., ) to the raw probability output vector.
Quantum Noises.
NISQ hardware suffers from various types of noise <cit.> due to environmental interactions, imperfect controls, cross-talk, and other factors. These noise sources introduce quantum errors, such as decoherence, which leads to information loss; gate errors, causing operational inaccuracies; and readout errors, resulting in incorrect measurements.
Moreover, quantum noise in NISQ platforms is not static; it can fluctuate spatially and temporally.
For example, in trapped ion systems <cit.>, fluctuations can arise from instabilities in laser and voltage control, while superconducting qubits <cit.> are affected by variations in unpaired electron populations.
Table <ref> illustrates this variability, showing that gate errors on a NISQ computer, measured at different times on the same day, exhibited a 31.2% variation (i.e., Δ) for one-qubit gates and a 10.3% variation for two-qubit gates.
Co-Teaching.
Classical co-teaching <cit.> effectively trains deep learning models with noisy labels. It leverages the tendency of neural networks to memorize clean labels before noisy ones, as indicated by lower loss values for clean data in early epochs. In co-teaching, two neural networks are trained simultaneously: each selects data with potentially clean labels (i.e., lower loss values) from a mini-batch, shares these selections with its peer, and backpropagates using the data chosen by the other. This process significantly enhances model robustness and demonstrates superior performance.
§ PRELIMINARY STUDY AND MOTIVATION
§.§ Preliminary Study
Ineffectiveness of Existing Techniques.
We trained a victim QNN using a recent model <cit.> on the MNIST dataset,
achieving 98.8% accuracy in simulation.
This accuracy, denoted as Ideal, represents the upper bound a substitute QNN could achieve.
To reduce the amount of queried data and increase the number of query rounds, we selected 3,000 data samples (compared to 6,000 in <cit.>) and queried the victim QNN five times (compared to three times in <cit.>) over a 24-hour period to obtain labels. We then implemented model extraction attacks using classical CloudLeak <cit.>, QuantumLeak <cit.>, the classical co-teaching method <cit.>, and their combination.
All QNNs were deployed on . For detailed information on datasets, NISQ configuration, and comparison schemes, please refer to Section <ref>.
As shown in Figure <ref>, no existing techniques, whether used individually or in combination, can construct a local substitute QNN that approaches the performance of the victim QNN. Furthermore, these experimental results reveal several previously undiscovered key findings:
* Observation-1: Considering the noise fluctuation of NISQ devices, QuantumLeak <cit.> only achieves a <2.5% accuracy improvement over the classical CloudLeak <cit.> attack, rather than the reported 4.99%∼7.35% improvement.
* Observation-2: The classical co-teaching <cit.> is effective, showing a 1.8% accuracy improvement over QuantumLeak <cit.> when naïvely adopted without optimization.
* Observation-3: Integrating the ensemble-learning approach from QuantumLeak <cit.> into co-teaching <cit.> diminishes performance rather than enhancing it.
Classical vs. Quantum Noisy Data.
Following <cit.>, we compare loss distributions after one epoch to assess the impact of noisy labels on classical and quantum neural networks.
Specifically, we analyze the classical CloudLeak <cit.> attack for classical model extraction using cross-entropy loss, and QuantumLeak <cit.> for QNN extraction, which uses a 2-dimensional Huber loss.
As shown in Figure <ref>, classical models tend to learn clean patterns in the initial epochs, allowing small-loss instances to be easily filtered out as clean data. This aligns with the conclusions in <cit.> and is effectively leveraged in co-teaching <cit.>. In contrast, this correlation does not exist in QNNs, leading to our fourth observation:
* Observation-4: The clean and noisy data are nearly evenly distributed across specific loss values, blurring the distinction between the two. This makes it difficult to effectively filter out clean data based on loss, a method that is typically successful in classical models <cit.>.
§.§ Motivation
Observation-1, 2, and 4 are interconnected and mutually informative.
Specifically, the quantum noisy fluctuations render QuantumLeak <cit.>, the SOTA QNN extraction attack, ineffective, underscoring the importance of training approaches that account for noisy labels. However, the unique interaction between quantum noise and QNNs undermines the theoretical foundation of co-teaching <cit.>, highlighting the need for a quantum-specific data partition scheme.
This motivates us to optimize co-teaching by partitioning data based on noise sensitivity rather than the loss values used in classical methods.
Additionally, for Observation-3, we hypothesize that the multiple ensembles and bagging in QuantumLeak degrade data quality by increasing the proportion of incorrectly labeled instances, leading to reduced performance. This will be investigated further in future studies.
§ SPLIT CO-TEACHING
To address the limitations of existing QNN extraction attacks, which underestimate quantum noise fluctuations and overestimate the available number of queried data, we propose a new attack that incorporates practical NISQ features to train an accurate substitute QNN. As shown in the workflow comparison with the SOTA QuantumLeak <cit.> in Figure <ref>,
our approach introduces two key improvements: (1) it captures quantum noise fluctuations by conducting more query rounds than QuantumLeak, and (2) instead of relying on an ensemble of QNNs with data bagging—an approach that incurs high design costs and has proven ineffective in our preliminary results—we split data based on quantum noise sensitivity and employ co-teaching to address training with noisy labels.
§.§ Queried Data Split
Similar to the QuantumLeak attack <cit.>, we use publicly available data to query QNNaaS, with queries evenly distributed throughout the day to capture the impact of fluctuating noise.
Specifically, data is sent to the victim QNN in M (e.g., M=5) rounds of queries, spaced at intervals of 24/M hours. The server executes the QNN on its NISQ devices, measures the outputs, and returns classical probability results.
As shown in Figure <ref>(a),
the resulting dataset is represented as D={d_i, P⃗^ m_i},
where P⃗ denotes the obtained raw probability output vectors,
i∈[1,N] represents the number of data samples, and
m∈[1,M] corresponds to the query rounds.
The function in Figure <ref>(b) calculates the variation among the raw probability output vectors for each data sample d_i across different query rounds, resulting in var(d_i)∈[0,1].
A lower value indicates that d_i is robust to NISQ noise, while a higher value indicates vulnerability. We set a predefined threshold V_th to classify the dataset D into a robust subset D_r (with var(d_i)≤V_th)
and a vulnerable subset D_v (with var(d_i)>V_th).
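As an illustration, the split can be sketched as follows (our own NumPy code; the variation measure adopted here, the largest total-variation distance between the probability vectors of any two query rounds, is an assumption and may differ in detail from the var() function referenced above):

import numpy as np
from itertools import combinations

def split_by_noise_sensitivity(probs, v_th=0.15):
    """probs: array of shape (N, M, K) with N queried samples, M query rounds,
    and K-dimensional probability vectors returned by the QNNaaS server.
    Returns indices of the noise-robust subset D_r and the vulnerable subset D_v."""
    n, m, _ = probs.shape
    var = np.zeros(n)
    for i in range(n):
        # largest total-variation distance between any two rounds, in [0, 1]
        var[i] = max(0.5 * np.abs(probs[i, a] - probs[i, b]).sum()
                     for a, b in combinations(range(m), 2))
    d_r = np.where(var <= v_th)[0]
    d_v = np.where(var > v_th)[0]
    return d_r, d_v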
§.§ QNN Co-Teaching
The proposed approach involves training two QNNs simultaneously, where each QNN identifies and selects a subset of “clean” data from each mini-batch based on its confidence—specifically, instances that result in smaller loss values.
This selected subset of data is then shared with the peer QNN, which uses it to update its parameters for the subsequent mini-batch. This collaborative process aims to enhance the training robustness of both QNNs by leveraging the strengths of each network.
As illustrated in Figure <ref>(b), we refer to this method as split co-teaching.
The overall algorithmic workflow is provided in Figure <ref>(a) (lines 3-7), with the detailed implementation of the split co-teaching function () outlined in Figure <ref>(c).
In this approach, two QNNs, Q_1 and Q_2, with corresponding parameters w_1 and w_2, are trained simultaneously.
During each training iteration, the mini-batch of data, denoted as D, is processed by each QNN to identify and select a subset of instances that produce smaller losses, indicating higher confidence in the correctness of those instances. For example, Q_1
might select a subset D_1 as its most reliable data. This subset is then considered useful knowledge for training and is subsequently shared with the peer QNN (Q_2) to update its parameters.
The process of selecting these instances is regulated by a forget rate f which determines the proportion of large-loss instances within the mini-batch that are excluded from training.
Furthermore, different forget rates, such as f_1 for D_r and f_2 for D_v, are applied depending on the data's sensitivity to quantum noise.
Specifically, a smaller forget rate f_1 is applied to D_r to preserve its robustness, while a larger forget rate f_2 is used for D_v to penalize its higher sensitivity to noise, thereby refining the training process for each QNN in a manner tailored to the characteristics of the data.
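A simplified per-batch sketch of this scheme in PyTorch is given below; the loss function, optimizers, and default forget-rate values are illustrative assumptions rather than the exact configuration used in the experiments:

import torch
import torch.nn.functional as F

def split_coteach_step(q1, q2, opt1, opt2, x, y, is_robust, f1=0.1, f2=0.3):
    """One mini-batch update of the two substitute QNNs. is_robust is a boolean
    tensor marking samples drawn from D_r (True) or D_v (False); f1 and f2 are
    the forget rates applied to the robust and vulnerable subsets, respectively."""
    loss1 = F.cross_entropy(q1(x), y, reduction='none')
    loss2 = F.cross_entropy(q2(x), y, reduction='none')

    def small_loss_mask(loss):
        keep = torch.zeros_like(loss, dtype=torch.bool)
        for subset, f in ((is_robust, f1), (~is_robust, f2)):
            idx = subset.nonzero(as_tuple=True)[0]
            if idx.numel() == 0:
                continue
            n_keep = max(1, int(round((1.0 - f) * idx.numel())))
            order = torch.argsort(loss[idx])        # smallest losses first
            keep[idx[order[:n_keep]]] = True
        return keep

    keep1, keep2 = small_loss_mask(loss1), small_loss_mask(loss2)
    # each network is updated with the samples its peer considers clean
    opt1.zero_grad(); loss1[keep2].mean().backward(); opt1.step()
    opt2.zero_grad(); loss2[keep1].mean().backward(); opt2.step()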
§.§ NISQ Implementation
This work focuses on proposing a new attack framework that addresses practical NISQ noise variations and reduced queried data in QNN model extraction attacks, rather than on fine-tuning the QNN itself.
To ensure an apples-to-apples comparison, we adopt all implementation configurations from the QuantumLeak <cit.> attack, utilizing the same victim QNN circuit and local substitute QNN zoo, but with half the number of queried data samples.
§ EXPERIMENTS AND RESULTS
§.§ Experimental Setup
Datasets.
Following recent QNNs <cit.>, we used MNIST <cit.> and Fashion-MNIST <cit.> datasets, down-sampling data to 1×8 vectors using principal component analysis.
For each dataset, 3000 images were sampled to query the QNNaaS server five times daily, while 500 images were selected for validation and 1000 for testing.
For MNIST, we perform classification of digits 0/1 () and 7/8 (). For Fashion-MNIST, classification tasks included t-shirt/trouser (), dress/sneaker (), and shirt/sneaker ().
Configuration and QNN Zoo.
We trained a victim QNN for each task, implemented with four qubits. Each QNN architecture included one amplitude encoding layer, two repeated VQC layers, and one measurement layer. The VQC ansatz included an -- rotation layer and a 2-qubit entanglement layer. We employed the Adam optimizer with a learning rate of 5e-3 and a weight decay of 1e-4, training with a batch size of 32 over 30 epochs.
We adopted the same local QNN zoo as used in QuantumLeak <cit.>, including the victim QNN, <cit.>, and <cit.>.
All quantum circuit designs were synthesized using Pennylane <cit.> and deployed to NISQ devices with Qiskit <cit.>. The circuits were executed and measured on the 27-qubit computer.
Schemes.
To compare our design with SOTA model extraction attacks, we established following four baselines:
* Base: A local substitute QNN identical to the victim QNN. We applied the state-of-the-art classical technique, CloudLeak <cit.>, to extract the QNN using queried data.
* QLeak: We adopted the QNN architecture with the best performance as reported in <cit.> and trained it using the ensemble learning method detailed in the same study.
* CoT: A local substitute QNN selected from the QNN zoo, trained with classical co-teaching <cit.>, where two QNNs were simultaneously trained, each using data selected by its peer QNN for parameter updates.
* CoT-E: A combined technique of QuantumLeak <cit.> and co-teaching <cit.> where an ensemble of the best QuantumLeak substitute QNNs is trained with co-teaching.
§.§ Results and Analysis
Similar to <cit.>, our results show that a local QNN using the same VQC ansatz as the victim QNN achieves the best accuracy. The following results are based on this configuration.
Accuracy.
Figure <ref> compares the accuracy of our method with existing approaches. The split co-teaching approach consistently produced the most accurate substitute QNN across all tasks.
Our design achieved a 6.5%∼9.5% accuracy improvement over the Base model and a 0.1%∼3.7% enhancement compared to QuantumLeak <cit.>. These results underscore the effectiveness of our approach under varying quantum noise, positioning it as a superior alternative to existing attack methods.
Impact of V_th.
V_th is a critical parameter that determines whether data samples are classified as noise-robust or noise-vulnerable. A lower V_th can cause a larger portion of data to be categorized as noise-vulnerable data (i.e., D_v), leading to the loss of clean samples during co-teaching. Conversely, a higher V_th may retain more noisy samples, potentially disrupting the training process.
Figure <ref> illustrates the impact of varying V_th on the accuracy of the classification task, showing that at V_th=0.15, our model achieved optimal performance with a peak accuracy of 92.6% with 72.6% of the data classified as D_r.
For practical implementation, we recommend task-specific profiling to identify the optimal V_th.
Impact of f_1 and f_2.
Figures <ref> and <ref>
shows results when adjusting the forget rates f_1 and f_2, respectively.
Increasing the forget rate tends to enhance the proportion of clean data in the training set; however, it also discards more samples, and the resulting reduction in training set size is significant, especially in model extraction attacks where the number of queries is constrained. On the other hand, setting the forget rate too low may fail to adequately remove noisy data, thereby compromising model performance.
Therefore, both f_1 and f_2 must be carefully optimized to balance the retention of clean data with the need to maintain a sufficiently large training set. Identifying these optimal values is essential for maximizing performance in model extraction attacks.
§ CONCLUSION
In this work, we introduced a novel attack framework, split co-teaching, designed to address the limitations of existing model extraction methods in practical NISQ environments. By utilizing label variations to partition data based on noise sensitivity and implementing co-teaching strategies, our approach achieves state-of-the-art accuracy in QNN extraction attacks, as validated by results on NISQ computers.
|
http://arxiv.org/abs/2409.02075v1 | 20240903172511 | X-ray and optical observations of the millisecond pulsar binary PSRJ1431-4715 | [
"D. de Martino",
"A. Phosrisom",
"V. S. Dhillon",
"D. F. Torres",
"F. Coti Zelati",
"R. P. Breton",
"T. R. Marsh",
"A. Miraval Zanon",
"N. Rea",
"A. Papitto"
] | astro-ph.HE | [
"astro-ph.HE"
] |
INAF-Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, I-80131 Naples, Italy
domitilla.demartino@inaf.it
Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, UK
Department of Physics and Astronomy, University of Sheffield, Sheffield, S3 7RH, UK
Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans s/n, E-08193 Barcelona, Spain
Institut d'Estudis Espacials de Catalunya (IEEC), Carrer de Gran Capitá 2-4, E-08034 Barcelona, Spain
Institució Catalana de Recerca i Estudis Avançats (ICREA), E-08010 Barcelona, Spain
INAF-Osservatorio Astronomico di Brera, Via Bianchi 46, I-23807 Merate (LC), Italy
Department of Physics, University of Warwick, Coventry CV4 7AL, UK
ASI - Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Rome, Italy
INAF-Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monte Porzio Catone (RM), Italy
We present the first X-ray observation of the energetic millisecond pulsar binary
PSR J1431-4715, performed with XMM-Newton and complemented with fast optical
multi-band photometry acquired with the ULTRACAM instrument at ESO-NTT.
It is found as a faint X-ray source without a significant orbital modulation. This
contrasts with the majority of systems that instead display substantial X-ray orbital
variability.
The X-ray spectrum is dominated by non-thermal emission and, due to the
lack of orbital modulation, does not favour an origin in an intrabinary shock between the pulsar and
companion star wind. While thermal emission from the neutron star polar
cap cannot be excluded in the soft X-rays, the dominance of synchrotron
emission favours an origin in the pulsar magnetosphere that we describe
at both X-ray and gamma-ray energies with a synchro-curvature model.
The optical multi-colour light curve folded at the 10.8 h orbital period is
double-humped, dominated by ellipsoidal effects, but also affected by
irradiation. The light
curves are fit with several models encompassing direct heating and a cold
spot, or heat redistribution after irradiation either through convection or
convection plus diffusion.
Despite the inability to constrain the best irradiation models, the fits
provide consistent system parameters, giving an orbital inclination of
59±6^∘ and a distance of 3.1±0.3 kpc.
The companion is found to be an F-type star, underfilling its Roche lobe
(f_ RL=73±4%),
with a mass of 0.20±0.04 M_⊙, confirming the redback status,
although hotter than the majority of redbacks.
The stellar dayside and
nightside temperatures of 7500 K and 7400 K, respectively, indicate a
weak irradiation effect on the companion, likely due to its high
intrinsic luminosity. Although the pulsar mass cannot be precisely derived,
a heavy (1.8-2.2 M_⊙) neutron star is favoured.
X-ray and optical observations of the millisecond pulsar binary PSR J1431-4715
D. de Martino1
A. Phosrisom2
V. S. Dhillon3,4
D. F. Torres5,6,7
F. Coti Zelati5,6,8
R.P. Breton2
T.R. Marsh9Deceased
A. Miraval Zanon10,11
N. Rea5,6
A. Papitto11
Received 3 June 2024 / Accepted 2 September 2024
================================================================================================================================================================================
§ INTRODUCTION
Millisecond pulsar (MSP) binaries are compact systems consisting of a
fast spinning neutron star (NS) and a low-mass companion star. Those in a tight
orbit (P_orb 1 d) are dubbed "spiders"
<cit.> and, depending on the companion mass (M_c),
are subdivided into "black widows" (M_c 0.1 M_⊙)
and "redbacks" (M_c ∼ 0.1-0.4 M_⊙). These old NSs in binaries
are believed to be spun-up to very short spin periods during a previous
Gyr-long phase of mass accretion from an evolved companion.
According to the recycling
scenario <cit.> during the accretion phase, MSP binaries were
Low-Mass X-ray Binaries (LMXBs) and turned into radio and gamma-ray pulsars when mass
accretion ceased. The first observational evidence of transitions from a LMXB
to a rotation-powered pulsar state was found in the MSP PSR J1023+0038 <cit.>.
However, the subsequent detections of state transitions from and/or to a disc state
in IGR J18245-2452, in the M28 globular cluster <cit.>, in
the Galactic field X-ray source XSS J12270-4859
<cit.> and again in PSR J1023+0038
<cit.> imply that
transitions between accretion-powered and rotation-powered states can
occur on timescales much shorter than secular evolution, likely due to
changes in the mass transfer rate from the companion star and possibly
controlled by the interplay between the pulsar spin-down power and the
companion star wind or by its magnetic activity
<cit.>.
In spiders, the interaction of the relativistic pulsar wind with the
companion star wind produces an intrabinary shock (IBS), where particles
are accelerated, evaporating the late-type star <cit.>.
Depending on the energy of the particles, the IBS cools via inverse Compton or
synchrotron radiation. Also,
depending on the pulsar spin down power and companion wind momentum, the IBS
assumes different orientations, wrapping around the companion star in black widows
or the pulsar in redbacks <cit.>.
X-ray orbital modulations are indeed observed
in a number of black widows and redbacks with different orbital phasing indicating the
different IBS geometry <cit.>. Furthermore redbacks are found to be more luminous
on average in the X-rays than black widows and isolated MSPs, indicating that a larger fraction of
pulsar wind is intercepted at the shock compared to black widows
<cit.>.
The late-type companions are generally found to be strongly
irradiated in black widows,
whilst in redbacks the effect is not always detected
<cit.>. The irradiation pattern in both black widows
and redbacks is not always found to be consistent with direct irradiation
by the pulsar, sometimes requiring illumination from the IBS as well <cit.>, or
even additional heating due to magnetic activity of the companion star <cit.>.
The number of spiders and candidates of both types has largely increased
recently, thanks to several and deep radio pulsar searches
<cit.> and to the gamma-ray Fermi-LAT
survey <cit.>. The latter, in particular, has allowed
efficient detection of MSP binaries, which are unaffected at high energies by
eclipse effects from the intrabinary material <cit.>.
Redbacks are of particular interest because they include the recently discovered
subclass of transitional millisecond pulsar binaries. Among the
newly identified redbacks a number of them still lack multi-band
observations to derive the physical parameters of the IBS and the
companion star.
In this paper we focus on the poorly studied MSP binary
PSR J1431-4715 (henceforth J1431). It was discovered in the
High Time Resolution Universe Survey (HTRU) by <cit.> as one of the
fastest (2.01 ms) and most energetic pulsars with a spin-down power
Ė = 6.8× 10^34 erg s^-1. Its pulsed radio
emission was found to be affected by strong eclipses at the binary
10.8h orbital period. The radio orbital solution classified
J1431 as a MSP binary with a non-degenerate donor, with a minimum mass of
M_c = 0.12 M_⊙. J1431 has also recently been detected at high
energies by Fermi-LAT and catalogued as 4FGL J1431.4-4711 in the
4FGL-DR3 12-yr and 4FGL-DR4 14-yr LAT catalogs
<cit.>, respectively,
with L_γ =1.26-8.11× 10^33 erg s^-1
for a distance range of 1.5-3.8kpc, encompassing distance values
derived from dispersion measure (DM) <cit.> and
Gaia <cit.>[The Gaia DR3 geometric distance
ranges between 1.61-2.58 kpc]
A deep search for pulsations has also identified J1431 as a gamma-ray pulsar at the ∼ 7σ level
<cit.> and thus it is included in the recent 3^rd Fermi-LAT catalog
of gamma-ray pulsars <cit.>, joining the sample of 14 confirmed redbacks
with radio and gamma-ray detections. Medium time resolution optical photometry and
spectroscopy was carried out by <cit.> that revealed
the orbital motion of the companion star and found it to be affected by mild
irradiation. The binary parameters were not constrained due to the degeneracy
between inclination and filling factor.
We present here a multi-band analysis of J1431 consisting of an X-ray
observation of J1431 performed with XMM-Newton, which has allowed the detection
for the first time of the X-ray emission, as well
as optical multicolour fast photometry acquired at ESO La Silla
with the ULTRACAM instrument mounted at the NTT telescope. The
observations are presented in Sect. <ref>, the X-ray timing and
spectral analysis are reported in Sect. <ref> and the optical
photometric study in Sect. <ref>. We discuss the results
in Sect. <ref> and compare them with other redback properties.
§ OBSERVATIONS AND DATA REDUCTION
§.§ The XMM-Newton observation
J1431 was observed by XMM-Newton on 2021 Jan. 21 (OBSID: 0860430101) with the
EPIC-pn camera <cit.> in the Large Window imaging mode and with the
thin optical blocking filter for an exposure time of 92.3 ks and with the EPIC-MOS1
and MOS2 cameras <cit.> in Partial Window imaging mode with the thin filter
for an exposure time of 93.3 and 94.6 ks, respectively. The presence
of background flaring activity at the beginning and end of the pointing
reduced the effective exposures to 78.9 ks and 84.4 ks for the EPIC-pn and MOS
cameras, respectively[The filtering of the event files was performed creating good time intervals (GTIs)
by setting a cut in the background rate time series without source contributions].
The Optical Monitor (OM) <cit.> was set in
fast window mode using the V (5100-5800Å) and UVW1
(2450-3200Å) filters. Nine OM science windows of ∼4400 s each were
acquired sequentially in the V and UVW1 filters, totalling an exposure
of 39.6ks in each of them, respectively.
The log of the observations is reported in Tab. <ref>
The data were processed and analysed using the Science Analysis Software
(SAS v.20.0) with the latest calibration files. The photon arrival
times from EPIC cameras were corrected to the Solar System barycentre
using the JPL DE405 ephemeris and the nominal position of J1431 reported
by <cit.>[The radio position is fully consistent with the
Gaia DR3 position <cit.>]. For the EPIC cameras, we
extracted events using a 16" radius circular region centred on the source
and using a background region of the same size located on the same CCD chip.
To improve the S/N the data were filtered by selecting pattern pixel events up to
double with zero-quality flag for the EPIC-pn data and up to quadruple for
the EPIC-MOS data. Background subtracted light curves were produced using
the epiclccorr task in the whole 0.2-12 keV range and in two
bands covering the 0.2-4 keV and 4-12 keV ranges. Given the faintness of the
source, the light curves were binned into 1200 s and 2400 s-long intervals,
respectively.
The EPIC spectra were extracted in the 0.3-10 keV range, filtering the events using good
time intervals obtained by selecting low background epochs with the task gtigen.
Response matrix and ancillary files were generated using the tasks rmfgen and
arfgen, respectively. The spectra were rebinned with a minimum
of 30 counts in each bin using the specgroup routine before fitting.
The OM V and UVW1 fast window data were processed and background light
curves were generated with the task omfchain with a bin time of 400 s and
1100 s, respectively. Given the small size of the fast window mode, the centering
of the target was inspected for each window. Due to a drift of the telescope,
the target was found to be partially outside the window of the last V band
observation
and the first UVW1 band window and therefore these were disregarded.
The light curves
were then corrected to the Solar System barycentre in the same manner as the
EPIC data. The OM V and UVW1 count rates were also converted into AB
magnitudes using the count rate magnitude conversion available at the SOC[https:/www.cosmos.esa.int/web/xmm-newton/sas-watchout-uvflux].
§.§ The ULTRACAM observations
J1431 was observed with the ESO 3.5m New Technology Telescope (NTT)
at La Silla (Chile) equipped with the ULTRACAM photometer <cit.> that allows simultaneous
high speed three-colour photometry for three consecutive nights from 2019 April
12 to April 14 in relatively good seeing (1-1.5") conditions.
During the first two nights the Super SDSS
u_ s,g_ s,r_ s filters
were used, while on the third night the photometry was acquired with the
u_ s,g_ s,i_ s filters. The Super SDSS filters are described in <cit.>.
Single exposure times were 60.6 s for the u_ s
filter and 20.2 s for the g_ s,r_ s, i_ s filters. The instrument was set in
full-frame and no-clear mode giving a dead time between each frame of 0.024 s
with a GPS time stamp of each frame to an absolute accuracy of 1 ms. Due
to a rotator flip at transit, the science exposures were stopped for a few minutes.
The log of the observations is reported in Tab. <ref>.
The data were reduced using the improved reduction pipeline developed for
HIPERCAM <cit.>. The images were debiased using master
bias and dark frames and flat-fielded using a master twilight flat field.
Aperture photometry with variable sized object apertures scaled to the
seeing in each frame was applied to extract counts. The sky was
determined from a clipped mean in an annulus around the object aperture
and subtracted.
Several comparison stars in the same field of view were also extracted to
perform differential photometry. Among the comparison stars two
of them have similar colour to J1431[The comparison stars colours
were checked against Gaia DR3], which then have been used to calibrate
the target using their magnitudes from the SkyMappper Southern
Survey DR2 catalogue <cit.>. These have also been checked
against variability and we chose the more stable reference star
SMSS J143149.5-471538.1 with AB magnitudes u=17.61±0.03,
g=15.89±0.01, r=15.50±0.01 and i=15.261±0.008.
The light curves in the different bands were inspected against degradation
due to twilight at the end of the observing runs and the affected points
were removed from the light curves. Barycentric correction was
also applied to the time series in the different filters.
§ DATA ANALYSIS AND RESULTS
§.§ The X-ray emission
The XMM-Newton observation finds J1431 as a faint source (Tab. <ref>) and
thus we restrict to the more sensitive EPIC-pn data for the timing analysis,
after inspection of no improvement in the S/N when summing the rates from
the three EPIC cameras.
In Fig. <ref> the EPIC-pn light curve in the total 0.2-12 keV is displayed
together with those in
the soft 0.2-4 keV and hard 4-12 keV ranges. Although the
length of the observation covers twice the 10.8 h binary period, there is no
clear orbital modulation in the total band. The soft 0.2-4 keV band may hint
at a possible variability with a marginal increase in the flux at about 28 ks
from the start of the observation, not observed in the hard (4-12 keV) range.
The hardness ratio HR, defined as the ratio of count rates between the 4-12 keV and 0.2-4 keV bands,
may indicate a softening at that time, but also at earlier and later times as depicted in Fig. <ref>,
although at these epochs the hard band count rates are consistent with zero given the
uncertainties on the background subtracted rate.
The EPIC-pn energy resolved light curves were also folded at the 10.8 h binary orbital
period using the <cit.> ephemeris (see left panels of Fig. <ref>), where phase zero corresponds
to the time of the NS ascending node[The radio solution is highly
accurate (dϕ/ϕ = 1.15× 10^-10) allowing the
folding of the data]. The soft band hints to a variability with
a semi-amplitude of 14±12% (χ^2_ν/d.o.f. = 0.94/9), while a fit with a constant
gives χ^2_ν/d.o.f. =0.99/11. The signficance of an orbital variability estimated with an F-test is only
at 1.6σ confidence level, thus not statistically
significant. The light curve in the harder (4-12 keV) range appears to be double-humped
with unequal maxima at ϕ_orb∼ 0.25-0.4 and ∼0.8-1.0. These phases
are close but slightly later than the superior and inferior conjunction of the pulsar, respectively.
Fitting the hard band orbital light curve with the fundamental and harmonic frequencies, the amplitude is 34± 14%
(χ^2_ν/d.o.f. = 0.60/9), while a fit with a constant gives χ^2_ν/d.o.f. =1.20/11. The significance
of the modulation estimated with an F-test is at 4σ confidence level.
The hardness ratios indicate a hardening at these two maxima (bottom left panel in Fig. <ref>).
On the other hand, the light curve in the
total range does not reveal significant orbital modulation
(right upper panel in Fig. <ref>) being consistent with a constant (χ^2_ν/d.o.f. = 0.50/11)
likely due to the higher count rate in the soft than in the hard band.
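The folding and harmonic fit described above can be reproduced with a short script (our own sketch; it assumes barycentred times in the same system as the radio ephemeris and that every phase bin is populated):

import numpy as np

def fold_and_fit(t_bary, rate, t_asc, p_orb, nbins=12):
    """t_bary: barycentred bin times; t_asc, p_orb: time of the NS ascending
    node and orbital period from the radio ephemeris (same time system and
    units as t_bary), so that phase zero is the ascending node of the pulsar."""
    phase = ((t_bary - t_asc) / p_orb) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    profile = np.array([rate[idx == k].mean() for k in range(nbins)])
    centres = 0.5 * (edges[1:] + edges[:-1])
    # least-squares fit of a constant plus the fundamental and its harmonic
    design = np.column_stack([np.ones(nbins),
                              np.cos(2 * np.pi * centres), np.sin(2 * np.pi * centres),
                              np.cos(4 * np.pi * centres), np.sin(4 * np.pi * centres)])
    coeffs, *_ = np.linalg.lstsq(design, profile, rcond=None)
    semi_amplitudes = np.hypot(coeffs[1::2], coeffs[2::2]) / coeffs[0]
    return centres, profile, semi_amplitudes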
While a detailed comparison with the optical emission is performed in
Sect. <ref>, the simultaneous UV and optical photometry
(Fig. <ref>, right bottom panels)
shows a weak double-humped variability with amplitudes of 8±6% in the
UVW1 band and 8±3% in
the V band with unequal maxima at the ascending and descending nodes of the NS, although an F-test gives
significance at 3σ and 2.5σ levels.
The different behaviour will be discussed in Sect. <ref>.
The X-ray 0.3-10 keV spectrum averaged over the whole observation is
featureless with no sign of a cut-off.
The EPIC-pn and MOS1,2 spectra were analysed using xspec
package <cit.> and fitted together. We first adopt a simple model consisting of an
absorbed power law, const*tbabs*powerlaw, where tbabs accounts
for the ISM absorption with abundances from <cit.>, and const
accounts for intercalibration among the three instruments. This model gives a reasonable fit
(χ^2_ν/d.o.f. = 1.01/26), with a power law index
α =1.63_-0.18^+0.29 (uncertainties are 90% confidence level
on one interesting parameter) and an unconstrained neutral hydrogen
column density N_H ≤ 8×10^20 cm^-2.
This upper limit is roughly consistent with
the total ISM hydrogen column density in the direction of the source
(N_H,ISM = 1.1× 10^21 cm^-2 <cit.>) and the column
density
from the dispersion measure DM=59.35pc cm^-3 of <cit.>,
N_H,DM = 1.8(6)× 10^21 cm^-2,
using the empirical relation of <cit.>, as well as with the column
density
of ∼ 1.1× 10^21 cm^-2,
derived from the recent 3D-N_H tool of <cit.> that combines dust maps and distance to estimate
the optical reddening and X-ray absorption adopting the same distance range.
Fixing the column density to
N_H,ISM = 8× 10^20 cm^-2 the fit provides similar results
(χ^2_ν/d.o.f. = 1.08/27) for the power law index
α= 1.88±0.19 (see Fig. <ref>)[The results do not change adopting a column
density of 1.0× 10^21 cm^-2].
The unabsorbed flux in the 0.3-10 keV
range is 2.6±0.3× 10^-14 erg cm^-2 s^-1.
At the distances of 1.53-3.8 kpc the corresponding luminosity is
L_X = 0.7-5.0× 10^31 erg s^-1.
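As a simple check, the quoted luminosity range follows from L_X = 4π d^2 F for the unabsorbed flux and the adopted distance bracket (illustrative code, not part of the analysis pipeline):

import numpy as np

KPC_CM = 3.086e21            # cm per kiloparsec

def xray_luminosity(flux_cgs, d_kpc):
    """Isotropic luminosity L_X = 4 pi d^2 F for a flux in erg cm^-2 s^-1."""
    return 4.0 * np.pi * (np.asarray(d_kpc) * KPC_CM) ** 2 * flux_cgs

# e.g. xray_luminosity(2.6e-14, [1.53, 3.8]) gives ~0.7e31 and ~4.5e31 erg/s;
# folding in the flux uncertainty recovers the 0.7-5.0e31 range quoted above.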
Although the simple power law model is formally acceptable, we also included a
blackbody component, using bbodyrad to inspect
whether the presence of the thermal emission of the NS is required. We do not
attempt to use a NS atmosphere model (e.g. nsa) given the low statistics
of the spectra.
Keeping fixed the hydrogen column density as before we find a power law index
α=1.31_-0.38^+0.36, a blackbody temperature
kT_BB= 0.15±0.04 keV and normalisation N_BB= 1.12_-0.72^+2.63
(χ^2_ν/d.o.f. = 0.83/25) (see Fig. <ref>).
The latter implies an emitting radius
R_BB∼100-740 m for a distance in the range of 1.53-3.8 kpc.
The 0.3-10 keV unabsorbed
fluxes of the two components result in: F_BB=6.4_-0.5^+0.3×
10^-15 erg cm^-2 s^-1 and F_pow=2.6_-0.2^+0.1×
10^-14 erg cm^-2 s^-1. The total luminosity in the 0.3-10 keV range
then results L_X ∼ 0.9-5.55× 10^31 erg s^-1 for the
above distance range.
The inclusion of the blackbody using
an F-test is however significant only at a confidence level of 2.5σ.
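The quoted emitting radius follows from the bbodyrad normalisation, N_BB = (R_km/d_10kpc)^2; a short conversion sketch (our own check) is:

import numpy as np

def bbodyrad_radius_m(norm, d_kpc):
    """Blackbody emitting radius implied by an XSPEC bbodyrad normalisation
    N = (R_km / d_10kpc)^2, returned in metres."""
    return 1e3 * np.sqrt(norm) * (np.asarray(d_kpc) / 10.0)

# the extremes of N_BB (about 0.40-3.75, from the quoted errors) and
# d (1.53-3.8 kpc) reproduce the ~100-740 m range given above.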
J1431 displays a similar spectrum to other MSP binaries in
the rotation-powered state, with a power
law index within the range α=1.0-1.9.
The X-ray emission is therefore dominated by synchrotron radiation with
a luminosity consistent with that derived in redbacks and higher than
those of black widows <cit.>.
Although not required from the fits, a blackbody component with
temperature of ∼0.15 keV and size of a few hundreds of meters is reasonably
consistent with those found in MSPs, where the NS thermal
emission from the heated pole is detected
<cit.>.
The ratio of the unabsorbed fluxes between the putative thermal component and
the power law in the 0.3-10 keV is ∼0.25
and the bolometric luminosity of the thermal component
in the range 2.3-15.0× 10^30 erg s^-1 are also consistent
with those found in MSPs <cit.>.
Whether the non-thermal emission originates in the IBS or from the pulsar magnetosphere
will be discussed in Sec. <ref>.
§.§ The optical light curves
The time series clearly show a low-amplitude periodic variability
in the u_ s,g_ s,r_ s,i_ s bands. Using the longer time coverages in the
u_ s,g_ s,r_ s filters, the Lomb-Scargle periodograms <cit.>
reveal the presence of strong power at the harmonic of the 10.8 h orbital period
(see Fig. <ref>).
A sinusoidal fit composed of the fundamental and harmonic
to the light curves gives a period
P_u_s^Ω=0.45024(22) d, P_g_s^Ω=0.449499(48) d and
P_r_s^Ω=0.44950(12) d fully consistent with the more precise orbital
period of 0.4497391377(7) d derived from radio timing by <cit.>.
The modulation amplitudes in the four bands are A_u_s^Ω = 0.0193(8) mag,
A_u_s^2Ω = 0.0734(2) mag; A_g_s^Ω= 0.0215(1) mag,
A_g_s^2Ω = 0.0682(1) mag, A_r_s^Ω=0.0153(2) mag,
A_r_s^2Ω = 0.0630(1) mag and
A_i_s^Ω= 0.0117(3) mag, A_i_s^2Ω = 0.0601(2) mag,
where for the i_ s band the orbital frequency of <cit.> has been adopted.
The observed light curves, including the OM UVW1 and V band filters
folded at the orbital period using the accurate ephemeris of
<cit.> are shown in Fig. <ref>.
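The periodogram and the two-frequency fit can be reproduced as follows (an illustrative sketch using astropy; the frequency limits and variable names are our own choices):

import numpy as np
from astropy.timeseries import LombScargle

def orbital_analysis(t_bjd, mag, mag_err, p_orb=0.4497391377):
    """Lomb-Scargle periodogram of a differential light curve and least-squares
    semi-amplitudes of the fundamental and first harmonic of the orbital period;
    t_bjd are barycentred times in days."""
    freq, power = LombScargle(t_bjd, mag, mag_err).autopower(
        minimum_frequency=0.5 / p_orb, maximum_frequency=6.0 / p_orb)
    phi = 2.0 * np.pi * ((t_bjd - t_bjd[0]) % p_orb) / p_orb
    design = np.column_stack([np.ones_like(phi),
                              np.cos(phi), np.sin(phi),
                              np.cos(2 * phi), np.sin(2 * phi)])
    c, *_ = np.linalg.lstsq(design, mag, rcond=None)
    a_fund = np.hypot(c[1], c[2])     # amplitude at the orbital frequency
    a_harm = np.hypot(c[3], c[4])     # amplitude at its harmonic
    return freq, power, a_fund, a_harm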
The double-humped shape clearly demonstrates the presence of ellipsoidal
variations due to the tidal distortion of the companion star, which however
are not symmetrical and do not show equal minima. These minima occur
at ϕ_orb∼ 0.25 and 0.75. Also slight phase shifts are observed
at the two maxima, one anticipating at ϕ_orb=1.0 and the other lagging at ϕ_orb=0.5.
The wavelength dependent amplitudes also hint at colour effects,
likely due to irradiation as pointed out by <cit.>.
It is worth noting the different shape of the V band orbital modulation observed in 2021 with
respect to the light curves acquired in 2019, possibly indicating
a change in irradiation efficiency, although the lack of a deeper minimum
at ϕ_orb∼ 0.25 could also be due to the lower accuracy of
the OM photometry. We then limit ourselves to the analysis of
the high time resolution data.
§.§.§ Modeling of the optical light curves
The light curves, complemented with the
radial velocity curve obtained from the H_α absorption line
by <cit.> for the companion star,
were analysed with the icarus[https://github.com/bretonr/Icarus]
binary modeling code <cit.>
to constrain system parameters and the temperature profile of the companion.
This software is used along with ATLAS9 stellar atmosphere grids <cit.>
to construct a photometric grid of synthetic atmosphere models,
which in turn are folded through the filter transmissions to
obtain the observed flux at a given distance. The MultiNest
nested sampling algorithm <cit.>, which provides the Bayesian evidence (Z) of a model,
was used to explore the parameter space.
The precise pulsar radio-timing solution derived by <cit.> was adopted
to fix the input parameters: the time of the ascending node of the pulsar
(T_asc= BMJD 55756.1047771), the orbital period
(P_orb=0.4497391377 d) and the projected
semimajor axis of the pulsar orbit (x=a_ psr sin i/c=0.550061 lt-s).
The pulsar mass function f(M_psr) relates to the system parameters
as f(M_psr) = M_psr sin^3 i/(1+1/q)^2 = 4π^2 q^3 x^3/(G P_orb^2), where q = M_psr/M_c is the
pulsar-to-companion mass ratio and i is the binary inclination. Since the mass
function is related to the companion star radial velocity amplitude K_c
as f(M_psr) = P_orb K_c^3/(2π G), the parameters to
be fit are i and K_c.
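For concreteness, the relations above can be evaluated numerically; in the sketch below the inclination and mass ratio are placeholder values taken from the best-fit results reported later in this paper, and the snippet is only meant to illustrate how K_c and M_psr follow from the timing parameters.

# Illustrative evaluation of the mass-function relations quoted above (cgs units).
# i and q are placeholder values near the best-fit results reported later.
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
P_orb = 0.4497391377 * 86400.0     # orbital period, s
x = 0.550061 * c                   # projected semimajor axis a_psr sin i, cm (0.550061 lt-s)

i = np.radians(59.0)               # assumed binary inclination
q = 10.4                           # assumed pulsar-to-companion mass ratio

f_psr = q**3 * x**3 * 4 * np.pi**2 / (G * P_orb**2)    # pulsar mass function, g
K_c = (2 * np.pi * G * f_psr / P_orb)**(1.0 / 3.0)     # companion RV semi-amplitude
M_psr = f_psr * (1 + 1.0 / q)**2 / np.sin(i)**3        # pulsar mass

print("f(M_psr) = %.2f Msun" % (f_psr / Msun))
print("K_c = %.0f km/s" % (K_c / 1e5))
print("M_psr = %.2f Msun" % (M_psr / Msun))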
The icarus code constructs a stellar surface whose radius is
parametrized by the Roche lobe filling factor f_ RL,
defined as the ratio between the
radius of the companion in the direction of the pulsar and the distance
between the companion centre of mass and the inner Lagrangian point
(L_1). f_ RL is then another fit parameter.
The surface stellar temperature T_base is defined as the
temperature of the
pole prior to irradiation and, accounting for gravity darkening,
T_base is
multiplied by (g/g_ pole)^β, where g is the gravitational acceleration and
β is the gravity darkening exponent. This parameter is fixed at
β=0.08 and β=0.25, which are appropriate for convective and
radiative envelopes, respectively <cit.>.
The effect of the pulsar heating is represented by the
irradiating temperature T_irr, defined such that a flux
σ T_irr^4 is received at the centre of mass of the star,
located at a distance a from the pulsar. With the assumption that the irradiating
flux is immediately thermalised and re-radiated <cit.>, each surface
element at a distance r from the pulsar, with its normal vector
forming an angle θ with respect the vector pointing to the pulsar,
has a temperature such that
T = [T_base^4 (g/g_ pole)^4β + T_irr^4 cos θ (a/r)^2]^1/4.
This model assumes direct heating (DH) from the pulsar and applies gravity
darkening prior to irradiation, here termed pre-irradiation gravity darkening (pre-IGD) <cit.>.
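A minimal sketch of this direct-heating (pre-IGD) prescription, for a single surface element rather than the full icarus surface grid, is given below; the temperatures are the values found later from the best fits, and the element geometry is a toy placeholder.

# Toy evaluation of the direct-heating (pre-IGD) temperature prescription,
# not the actual icarus surface grid.
def element_temperature(T_base, T_irr, g_over_gpole, cos_theta, a_over_r, beta=0.08):
    """Temperature of a surface element under direct pulsar heating."""
    T4 = (T_base * g_over_gpole**beta)**4 + T_irr**4 * max(cos_theta, 0.0) * a_over_r**2
    return T4**0.25

T_base, T_irr = 7400.0, 3900.0                              # K, best-fit values
print(element_temperature(T_base, T_irr, 1.0, 1.0, 1.0))    # ~7540 K, point facing the pulsar
print(element_temperature(T_base, T_irr, 1.0, -1.0, 1.0))   # 7400 K, unheated night side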
Gravity
darkening applied after irradiation and heat redistribution on the companion
surface, here termed post-irradiation gravity darkening (post-IGD),
as done in <cit.> and <cit.>, has also been investigated.
However the presence of
asymmetries and shifts in the light curves of J1431, already noticed by
<cit.>, cannot be accounted for by the simple DH approach.
These features
can instead be modeled either by adding hot or cold spots or by including
heat redistribution within the outer layers of the star by convection
only (C) and/or diffusion and convection (D+C)
<cit.>. These models have also been inspected by
applying both pre-IGD and post-IGD.
The icarus code also includes as fit parameters the optical
interstellar reddening E(B-V) in the direction of J1431,
converted to the extinction in each band using dust maps and extinction
vectors from <cit.>, as well as the source distance d.
Hence the fit parameters for all these models are: E(B-V), d, K_c,
i, f_ RL, T_base, T_irr. For the models accounting
for heat
redistribution with convection (C) and diffusion plus convection (D+C),
additional fit parameters are the convective strength parameter C_ ampl,
which determines the profile of the convective wind
and the diffusion parameter κ_ diff, which describes the amplitude of
a linear diffusion effect <cit.>. In the case of the spot model,
a single surface hot or cold spot is added to the DH model with fitted
temperature difference T_ spot, angular size R_ spot, and position angles
θ_ spot, ϕ_ spot <cit.>.
The main priors for the fit parameters were chosen to be
physically or geometrically motivated.
Among them is the distance of J1431, whose prior is constructed by combining
the ranges derived by <cit.>, <cit.>,
<cit.> and Gaia DR3 (see Sec. <ref>), but is allowed to vary from 0.1 to 20 kpc.
Also a Gaussian prior is applied to the interstellar
extinction E(B-V), centred at 0.166 with standard deviation of 0.16 but allowed
to vary over a larger range from 0 to 1. The base and irradiation temperatures
were constrained to lie between 2100 K and 12000 K, and between 2300 K
and 15000 K, respectively.
For the binary inclination angle i, a uniform prior in cos i has been
adopted. As for the Roche lobe filling factor f_ RL, a uniform prior has
been adopted between 0.2 and 1.0.
The spectroscopic K_c constraints are provided by the observed radial
velocity curves obtained from cross-correlation of the H_α absorption line in the
optical spectrum of J1431 by <cit.>. Here the icarus code
simulates spectra at specific orbital phases and the radial velocities of the model are
compared to the observed values <cit.>.
Among the several model fits, those with the post-IGD prescription outperform the pre-IGD ones in all cases,
except for the DH+Spot model, for which identical results are found adopting either
the pre-IGD or the post-IGD prescription.
In Tab. <ref> the results of the different
modelling are reported for the post-IGD prescription only and for the two values of the
gravity darkening exponent β. Models with β=0.25 have much lower
Bayesian evidence (log Z) with respect to those adopting β=0.08, and hence are discarded.
Here we note that the transition temperature from convective to radiative envelopes
falls in the range ∼ 7000-7900 K <cit.>, justifying the investigation with both β values.
The fitted parameters with β=0.08 are found to be all consistent
within 1σ for those models with
higher Bayesian evidence, namely that with convection (C), with diffusion and convection (D+C)
and with direct heating (DH) with a cold spot.
A slight preference for the DH with a cold spot is found. This model is shown in
Fig. <ref> where the fitted
multi-colour light curves (panel A) are displayed together with the surface
temperature map of the companion (panel B) as well as the observed and
model radial velocity curves (panel C). Corner plots for the best three
models (C), (D+C) and DH+spot with post-IGD are reported
in Fig. <ref>, Fig. <ref> and
Fig. <ref>, respectively.
The similarities of the results obtained with these three best fit models provide
evidence that the derived parameters are reliably constrained despite the
inability to determine whether heat redistribution models or a spotted
surface represent the physical state of the companion star.
The distance of J1431 has not been well constrained, with several
methods <cit.> allowing
for a wide range from 1.5 kpc to 3.8 kpc. Here we find consistency for all
the three models, which, adopting the higher and lower bounds of the derived parameters,
provide a distance to J1431 of 3.1±0.3 kpc, well within the above
range and that we adopt in the forthcoming section. The interstellar extinction
E(B-V)=0.17-0.19, giving A_V=0.52-0.59
is also consistent with the upper limit to the
hydrogen column density found from the X-ray spectral fits (see
Sec. <ref>) using the <cit.> relation.
The binary system parameters are discussed in Sec. <ref>.
§ DISCUSSION
§.§ A mildly irradiated companion
The parameters obtained with the icarus code for the three best fit models
give consistent values within 1σ for the binary inclination,
the companion Roche lobe filling factor and mass. By averaging among the three models we
obtain i=59±6^∘, f_ RL=73±4%, M_c=0.20±0.04 M_⊙, which
are broadly consistent with the preliminary results obtained by
<cit.> but more accurate. These values confirm the redback status of J1431.
The companion surface is found to have hemisphere-averaged nightside and
dayside temperatures T_night∼7400 K and
T_day∼7500 K, both higher than previously estimated
(∼6500-6600 K) by <cit.>. We note that this lower temperature
estimate is based on spectra covering a narrow spectral range (5500-6730 Å)
encompassing only H_α, rather than the wide wavelength range covered by our multi-band photometry.
We also verified whether the extinction prior could affect the temperature determination by adopting
a flat prior for the reddening. The posterior distributions of the fit are found to be similar to those
obtained with the Gaussian prior, indicating that the reddening does not affect the temperature determination.
The spectral type of the companion would then be an early-F star,
much hotter than the majority of redbacks, which are instead found in the range of 2800-6300 K <cit.>, with only
PSR J1816+4510 found to be extreme (16000 K) <cit.>.
The companion of J1431 could be an outlier due to an unusual binary evolution,
or hot redback companions are still to be discovered. This shows that the characterization of the optical companions of newly discovered
redbacks is an important aspect of investigation.
The irradiating temperature
T_irr∼ 3900 K implies an equivalent irradiation
luminosity L_irr= 4 π a^2 σ T_irr^4
∼ 7.8× 10^33 erg s^-1, where a=2.17× 10^11 cm
having adopted a_ psr sin i from <cit.> and the derived
values for the inclination i=59^∘ and
mass ratio q=10.4. This luminosity, when compared
to the spin-down power Ė = 6.8× 10^34 erg s^-1 from
<cit.> or to that derived in Sec. <ref> below, gives
an efficiency η_irr=L_irr/Ė∼ 12-13%,
not unusual in redbacks, although at the low end of
irradiation efficiencies <cit.>. To estimate
the heating effect on the companion, we consider the expected heating impinging on it
in the case of an isotropic pulsar wind, L_heat = f_Ω Ė,
where f_Ω is the geometric factor ∼ 0.5 (1-cosθ) with
θ= arctan ( R_c/a), neglecting albedo. Adopting the derived companion
radius R_c =0.6 R_⊙ and the above value for a,
f_Ω=0.009 and then
L_heat = 6.1× 10^32 erg s^-1. The ratio between the
expected heating and the companion unheated luminosities,
L_c,base = 4π R_c^2 σ T_base^4 = 4×
10^33 erg s^-1, then results in
η_heat=L_heat/L_c,base= 0.15. This indicates that
the companion luminosity is about 6.5 times larger than the heating,
implying weak irradiation, as indeed evidenced by the observed
double-humped light curves, which are dominated by ellipsoidal effects.
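As an order-of-magnitude cross-check (not part of the icarus modelling itself), the luminosity budget quoted above can be reproduced with a few lines of Python:

# Cross-check of the irradiation/heating budget quoted above (cgs units).
import numpy as np

sigma_SB, Rsun = 5.6704e-5, 6.957e10
a = 2.17e11                     # orbital separation, cm
T_irr, T_base = 3900.0, 7400.0  # K
R_c = 0.6 * Rsun                # companion radius
Edot = 6.8e34                   # spin-down power, erg/s

L_irr = 4 * np.pi * a**2 * sigma_SB * T_irr**4            # ~7.8e33 erg/s
f_Omega = 0.5 * (1 - np.cos(np.arctan(R_c / a)))          # ~0.009
L_heat = f_Omega * Edot                                   # ~6.1e32 erg/s
L_c_base = 4 * np.pi * R_c**2 * sigma_SB * T_base**4      # ~4e33 erg/s

print("L_irr = %.1e erg/s, f_Omega = %.3f, L_heat = %.1e erg/s" % (L_irr, f_Omega, L_heat))
print("eta_heat = L_heat/L_c,base = %.2f" % (L_heat / L_c_base))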
The fact that MSP binaries with intrinsic companion
luminosities larger than or comparable to those expected from pulsar irradiation
show negligible heating effects was already pointed out by
<cit.> and is further supported by the results
of <cit.> for a larger number of spiders with a wide
range of orbital periods.
The long orbital period of J1431, and hence the large binary
separation, together with the high companion luminosity, can explain the
lack of strong irradiation in this system.
§.§ A heavy neutron star
Noteworthy is the derived pulsar mass M_psr ranging from 1.8 to 2.2
M_⊙, considering all three models. Although we are unable to
discriminate among the various models employed, which limits the precision to which this
parameter is constrained, a massive NS in J1431 is favoured.
In recent years, NS masses in spiders have been the subject of several investigations
<cit.>. Comparing the NS mass derived for J1431 with the latest compilations of
NS masses in <cit.>, J1431 joins the
group of massive NSs, which includes the redbacks PSR J2215+5135, PSR J1622-0315 and PSR J1816+4510 and the
black widows PSR J1653-0158, PSR J1810+1744, PSR J0952-0607 and
PSR B1957+20. Noteworthy is the case of PSR J1622-0315, a mildly irradiated redback potentially hosting a massive
NS <cit.>. These systems are particularly attractive because they
allow lower limits to be set on the maximum NS mass and hence have
important implications for the dense matter equation of state (EoS).
The spin-down power of the pulsar, defined as
Ė=4 π^2 I Ṗ/P^3, where P and Ṗ are the spin period
and its derivative and
I = 2/5 M_NS R_NS^2 is the moment of inertia of the NS,
was estimated by <cit.> as Ė = 6.8× 10^34 erg s^-1,
adopting the canonical NS moment of inertia I=10^45 g cm^2
and correcting the spin period derivative for the Shklovskii effect.
The proper motion of J1431 is now better constrained by
Gaia DR3 <cit.> μ=18.7(2) mas yr^-1, which
allows a more accurate determination of the Shklovskii effect.
We then correct the observed spin period derivative Ṗ for
this effect:
Ṗ_Sh/P = v_t^2/(c d), where v_t is the transverse velocity,
and d the distance. Adopting d =3.1±0.3 kpc, we derive
Ṗ_corr = Ṗ_obs - Ṗ_Sh = 8.7(4)
× 10^-21 s s^-1, where Ṗ_obs=1.411×10^-20 s s^-1 is the observed
spin period derivative measured by <cit.>.
Given the evidence for a NS mass higher than 1.4 M_⊙ (see Tab. <ref>),
we conservatively adopt the lower limit of M_psr=1.8 M_⊙ and, lacking
knowledge of the EoS,
a NS radius of 10 km <cit.>, giving
Ė = 5.8-6.4× 10^34 erg s^-1, slightly lower
than the previous determination. It is not too far from the value
of 5.2±0.3 × 10^34 erg s^-1 derived
by <cit.> adopting the canonical moment of inertia
I=10^45 g cm^2 and the lower Gaia distance of ∼1.6-2.6 kpc.
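These numbers can be cross-checked as follows; the spin period adopted in the sketch (≈2.0 ms, from the published radio timing of J1431) is not quoted in this section and should be treated as an external input.

# Cross-check of the Shklovskii correction and spin-down power (cgs units).
# The 2.01 ms spin period is an assumption from the radio timing solution.
import numpy as np

c, kpc, Msun = 2.998e10, 3.086e21, 1.989e33

P = 2.01e-3                                # spin period, s (assumed)
Pdot_obs = 1.411e-20                       # observed spin period derivative
mu = 18.7e-3 / 206265.0 / 3.156e7          # proper motion, rad/s
d = 3.1 * kpc                              # distance, cm

v_t = mu * d                               # transverse velocity, ~275 km/s
Pdot_sh = v_t**2 / (c * d) * P
Pdot_corr = Pdot_obs - Pdot_sh             # ~8.7e-21 s/s

I = 0.4 * 1.8 * Msun * (1e6)**2            # moment of inertia for M=1.8 Msun, R=10 km
Edot = 4 * np.pi**2 * I * Pdot_corr / P**3 # ~6e34 erg/s
print("v_t = %.0f km/s, Pdot_corr = %.2e s/s, Edot = %.1e erg/s"
      % (v_t / 1e5, Pdot_corr, Edot))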
Further observations, such as optical spectroscopy, would be desirable
to better investigate the irradiation model and hence constrain the
NS mass.
§.§ An X-ray dim redback
Our first deep X-ray observation of J1431 has detected a faint X-ray source without significant variability
at its orbital period. The X-ray emission in the soft (0.2-4 keV) band is unmodulated, and in the hard (4-12 keV)
band only a hint, at the 4σ level, of a double-peaked variability is found. This contrasts with the majority
of spiders displaying substantial X-ray orbital variations, which are often found double-peaked
due to Doppler boosting of the IBS synchrotron emission. In redbacks the non-thermal emission
is Doppler boosted at inferior conjunction of the NS and de-boosted at superior conjunction.
The separation of the two peaks and their intensity strongly depends on the bulk Lorentz factor,
viewing angle and shock radii <cit.>. In black widows the maximum of X-rays occurs
at superior conjunction of the NS, while in the majority of redbacks it is the opposite
<cit.>. This has been interpreted as a difference in the geometry of the IBS, which wraps the pulsar in
redbacks and the companion in black widows, depending on the momentum ratio between the companion and pulsar
winds <cit.>.
In J1431 the companion is found to be luminous, suggesting a higher wind momentum, and it is conceivable that
the IBS wraps the pulsar, as in the majority of redbacks. In this case an orbital modulation,
Doppler boosted at the inferior conjunction of the NS, is expected.
The lack of an orbital modulation in J1431 could arise if the system is viewed at low inclination angles. However
the optical analysis reveals an intermediate binary inclination of ∼60^∘.
A low inclination is also not favoured by the presence of long radio eclipses at superior conjunction of the pulsar,
lasting ∼ 0.3 in orbital phase <cit.>.
However, the possibility that an IB shock dominates the X-ray emission in J1431 without producing an X-ray orbital modulation
cannot be discarded; determining the properties of the IB shock is not an easy task, given the dependence on the binary inclination,
the shock location, the opening angle, and the bulk velocity (β=v/c) of the accelerated particles.
In the thin-shell approximation <cit.>, generally adopted for spiders, and for a moderate inclination angle,
shadowing by the companion is not expected to produce significant obscuration of the shock emission if the shock is located
close to the pulsar, if most of the emission originates close to the shock nose, and if the opening angle is small.
Near the shock nose, the bulk velocity is low and Doppler-boosting is not efficient <cit.>.
Given the lack of coverage above 10 keV, which would allow the inspection of a
spectral break to estimate the shock location, as well as the lack of coverage of the radio eclipses at low frequencies
<cit.> to derive the maximum shock opening angle, it is not possible to verify this hypothesis.
Adopting the derived distance d=3.1±0.3 kpc,
the X-ray luminosity ranges between 1.4-2.1× 10^31 erg s^-1 in the
2-10 keV band, or between 2.1-3.0× 10^31 erg s^-1 in the
0.5-10 keV range. When comparing this X-ray luminosity to those observed in spiders,
J1431 is located between the redbacks and the black widows, in
particular among the low X-ray luminosity redbacks <cit.>.
The efficiency in converting spin-down power to X-ray luminosity in J1431 is
η_x= L_x/Ė∼ 0.02-0.04% (2-10 keV) or 0.03-0.05% (0.5-10 keV).
Although caution should be taken when adopting empirical relations,
this luminosity ratio locates J1431 between the so-called "inefficient" and "efficient" IBS
tracks derived by <cit.>.
All this indicates a very weak contribution of the IBS, if any.
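For reference, the conversion efficiencies quoted above follow directly from the quoted luminosity and spin-down ranges (an illustrative check only):

# Cross-check of the X-ray efficiency eta_x = L_x/Edot quoted above.
L_x = {"2-10 keV": (1.4e31, 2.1e31), "0.5-10 keV": (2.1e31, 3.0e31)}  # erg/s
Edot_lo, Edot_hi = 5.8e34, 6.4e34                                      # erg/s
for band, (L_lo, L_hi) in L_x.items():
    print("eta_x(%s): %.3f%% - %.3f%%" % (band, 100 * L_lo / Edot_hi, 100 * L_hi / Edot_lo))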
Notably, J1431 not only joins the low X-ray luminosity redbacks but also shares with
these systems a double-humped optical light curve, dominated by ellipsoidal variability.
Hence, it is very likely that the weak companion heating is not due to
X-ray irradiation. Indeed our time-resolved X-ray light curve does not
allow us to conclude whether the non-thermal emission originates in an IBS.
The spectral analysis indicates that the emission is dominated by non-thermal radiation, described
with a power-law model with index 1.3-1.9, similar to those observed in spider pulsars
<cit.>. Although formally not required by the spectral fits,
a thermal component at kT_BB∼ 0.15 keV
originating at the heated NS polar cap (R_BB∼ 180-660 m for the
distance range 2.8-3.4 kpc derived in Sec. <ref>)
could also be present. Similar values for this thermal component are found in other spiders
<cit.>, contributing ∼20% to the total X-ray flux.
Hence, it is also possible that the dominant X-ray non-thermal emission originates in the NS magnetosphere.
In this case the X-ray emission would be pulsed at the spin period of the NS, but the X-ray data were acquired in
imaging mode with a temporal resolution of 47.7 ms, preventing any search for spin pulses. It is worth noting that
searches for X-ray spin pulsations in the rotation-powered state of the transitional MSP XSS J12270-4859 yielded no
detection even when using high time resolution data <cit.>.
The gamma-ray emission of J1431 is typical of pulsars with a
spectral shape modelled with a power law with a superexponential
cutoff <cit.>
giving a 100 MeV-100 GeV flux of 4.7× 10^-12 erg cm^-2 s^-1
and a gamma-ray luminosity for a distance range of 2.8-3.4 kpc
L_γ = 4.4-6.5× 10^33 erg s^-1.
The ratio of gamma-ray luminosity
to spin down power η_γ = L_γ/Ė∼ 7-11%
indicates an efficiency of conversion consistent with the median of 10% of
redbacks <cit.>.
The gamma-ray emission in pulsar binaries is generally dominated by the pulsar
magnetospheric radiation
and should not be modulated at the orbital period for moderately low
binary inclinations.
However, if gamma-rays are also produced in
an IBS, an orbital modulation should be observed, as in the bright systems
XSS J12270-4853, PSR J2339-0533 and PSR J2039-5617 <cit.>,
in which the X-rays
are also strongly modulated at the orbital period <cit.>.
In J1431 there is no firm evidence of variability in the X-rays and searches in the gamma-rays
do not provide evidence of either an orbital modulation or eclipses <cit.>.
The X-ray (0.5-10 keV) to gamma-ray flux ratio F_x/F_γ∼ 0.5% is lower than
that found in most redbacks and similar to PSR J1908+2105 (∼ 0.6%), surpassing only
PSR J1622-0315 (∼ 0.3%) and PSR J1816+4510 (∼
0.05%) <cit.>. It is then
plausible that the gamma-ray emission in J1431 originates in the pulsar wind.
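The gamma-ray luminosity, efficiency, and X-ray to gamma-ray flux ratio quoted above can be verified with the following sketch (illustrative only; the 0.5-10 keV flux is back-computed from the quoted luminosity):

# Cross-check of L_gamma, eta_gamma, and F_x/F_gamma (cgs units).
import numpy as np

kpc = 3.086e21
F_gamma = 4.7e-12                          # 100 MeV-100 GeV flux, erg/cm^2/s
d = np.array([2.8, 3.4]) * kpc
L_gamma = 4 * np.pi * d**2 * F_gamma       # ~4.4-6.5e33 erg/s

Edot = 6.0e34                              # representative of the 5.8-6.4e34 erg/s range
L_x = np.array([2.1e31, 3.0e31])           # 0.5-10 keV luminosity, erg/s
F_x = L_x / (4 * np.pi * (3.1 * kpc)**2)   # back out the 0.5-10 keV flux

print("L_gamma = %.1e - %.1e erg/s" % tuple(L_gamma))
print("eta_gamma ~ %.0f%% - %.0f%%" % tuple(100 * L_gamma / Edot))
print("F_x/F_gamma ~ %.2f%% - %.2f%%" % tuple(100 * F_x / F_gamma))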
We constructed the broad band X-ray and gamma-ray spectral energy distribution (SED) using our X-ray
spectral analysis adopting the composite blackbody and power law model and the Fermi-LAT
SED as retrieved from the recent 3^rd pulsar catalog <cit.> along with the
typical pulsar superexponential cutoff
power law model PLEC4 with peak energy E_p = 0.44±0.24 GeV, spectral slope Γ=2.63±0.16,
spectral curvature d=0.8±0.3, and superexponential index fixed at the
canonical value b=2/3 due to the source faintness (see details in <cit.>).
The SED (Fig. <ref>) shows that
the X-ray spectrum, when extrapolated to higher energies, lies above the prediction of the
PLEC4 model, which, however, cannot be reliably evaluated at lower energies because the b parameter is fixed
at the canonical value.
We then explored whether a significant part of the gamma-ray and X-ray
emission could be the direct result of the pulsar magnetospheric emission.
Curvature and synchrotron emission from accelerated particles in the magnetospheric
gaps or reconnection in the current sheet producing synchrotron radiation at high energies
are possible mechanisms at the origin of non-thermal pulses. The synchro-curvature emission model
developed by <cit.> to describe the high energy spectra of pulsars is based on the assumption
that, near the light cylinder of a pulsar of given spin period and period derivative, there is
a gap with a significant component of the electric field parallel to the magnetic field lines, and that this field
accelerates particles. The model follows the particle time evolution,
solving the equation of motion and balancing acceleration and losses.
It has been successfully applied to the high energy spectra
of gamma-ray and X-rays detected pulsars <cit.> and has been
further updated
with improved treatment of particle injection to generate
synthetic spectra and gamma-ray light curves of pulsars
<cit.>.
We hence attempt to describe the observed broad-band SED of J1431 with the updated version of the
synchro-curvature model. This allows us to compare the results with those obtained for other detected gamma-ray
pulsars. It also has the advantage of being a rather austere spectral model,
with only three main physical parameters and a normalisation, which has proven
useful in representing the X-ray and gamma-ray spectra
of pulsars, including the appearance of sub-exponential cutoffs at high energies
and the flattening of the X-ray spectra at soft energies.
The parameters of the model are the electric field parallel to the
magnetic field lines, E_||, assumed constant throughout an accelerating region;
the magnetic gradient b, such that the local magnetic field
B= B_s (x/R_s)^-b, where B_s
is the surface polar magnetic field estimated from radio timing
(∼ 2× 10^8 G) and R_s is the NS radius; and the spatial extent
x_0/R_lc[
R_lc=c /Ω=96 km
is the radius of the light cylinder of the pulsar in J1431]
of the emitting region for particles injected at a
given injection point x_in.
The parameter x_0/R_lc plays the role of a weighting function representing the
reduction of the number of emitting particles directed
towards the observer at a distance from their injection point.
The model solves the equation of motion of the particles to obtain their trajectories,
and computes the emission of particles along each point. The latter is dependent on
the local kinetic properties, like Lorentz factor, the pitch angle and position,
and the local properties of the magnetic field <cit.>.
Fig. <ref> (left panel) shows the results of the model with
log E_||=9.31 ± 0.04 V m^-1, log (x_0/R_lc)=-2.34 ± 0.04 and
b= 1.24±0.23 (χ^2_ν/d.o.f.=2.29/23).
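For orientation, the geometry implied by these parameters can be evaluated as follows; the ≈2.0 ms spin period is an assumption consistent with the quoted R_lc = 96 km, and the computed field is purely illustrative.

# Illustrative evaluation of the synchro-curvature model geometry:
# light-cylinder radius and local field implied by the fitted magnetic gradient b.
import numpy as np

c = 2.998e10
P = 2.01e-3                       # spin period, s (assumed)
R_lc = c * P / (2 * np.pi)        # ~9.6e6 cm = 96 km
B_s = 2e8                         # surface polar field from radio timing, G
R_s = 1e6                         # NS radius, cm
b = 1.24                          # fitted magnetic gradient (left-panel fit)

B_lc = B_s * (R_lc / R_s)**(-b)   # local field near the light cylinder
print("R_lc = %.0f km, B(R_lc) ~ %.1e G" % (R_lc / 1e5, B_lc))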
The fit is not perfect, but reasonably close to the data.
The X-ray emission is the least well described by the model,
and, related to that, the magnetic gradient is low in comparison with other
fitted pulsars <cit.>. This can come as a result of two related effects.
First, the X-ray spectrum rises more steeply at soft energies than is usually found for
other pulsars, and as a result the peak in X-rays is comparable to the peak in gamma-rays,
so that the model is trying to keep up with a large synchrotron emission.
Second, this may also come as an effect of considering a more extended region.
Farther from the pulsar, the magnetic field still needs to be high, i.e. to decay more slowly,
in order for synchrotron emission to proceed. In addition,
the possible NS thermal contamination would mostly affect these low energies.
Given this possibility, the SED has also been fitted excluding
the soft X-ray portion of the spectrum, i.e. below 2 keV. Fig. <ref> (right panel)
shows the results with log E_||=9.56 ± 0.04 V m^-1,
log (x_0/R_lc)=-2.59 ± 0.04 and
b=0.99±0.23 (χ^2_ν/d.o.f.=0.72/7), which describe the
observed SED much better. The higher value of the parallel electric field and the lower b are
the consequence of the steep X-ray slope. Since there are not many MSPs with both X-ray and gamma-ray data
<cit.>, it is not possible to state whether this steep spectrum is a common or unusual
property of MSPs.
In summary, a magnetospheric origin for the X-ray to gamma-ray emission seems to be plausible.
§ CONCLUSIONS
We have analysed for the first time the X-ray and optical emission of the energetic
MSP binary J1431 to assess the system parameters and characterize its multi-band
emission. Here we summarise the main results:
* Our first deep X-ray observation of J1431 has detected a dim X-ray
source without significant variability at its 10.8 h orbital period.
This contrasts with the majority
of spiders displaying substantial X-ray orbital variations, which are often found double-peaked
due to Doppler boosting of the IBS synchrotron emission.
* The X-ray spectrum is featureless and consistent with non-thermal emission
with a power law photon index Γ= 1.6^+0.3_-0.2 and negligible absorption
(N_H ≤ 8×10^20 cm^-2). While not statistically
significant, the presence of a thermal component (kT_BB =0.15±0.04 keV) may hint
at the contribution of the heated polar cap (R_BB∼ 180-660 m) in the soft X-rays
together with a harder (Γ=1.3±0.4) power law component dominating at higher energies.
* Given that the X-ray timing and spectral analyses do not strongly favour
a dominant contribution from an IBS, we inspected whether a magnetospheric origin from the MSP can describe
the X-ray and gamma-ray SED and found a reasonable match.
This suggests that an IBS in this system, if present, has a negligible contribution.
*
The companion star is found to be an early F-type star with a mass of 0.20±0.04 M_⊙,
which underfills its Roche lobe (f_ RL=73±4%), confirming the redback nature of J1431,
although it is hotter than the majority of redbacks.
The multi-colour optical light curves display typical
ellipsoidal modulation at the long orbital period but are also affected by irradiation.
Among the several models attempted to fit the multi-colour light curves and radial velocity
curve, similar parameters are found with a model that includes direct heating from the
pulsar and a cold spot with respect to the base stellar temperature (Δ T ∼ -500 K),
as well as with models that
include heat redistribution after irradiation encompassing convection and also
both diffusion and convection. The resulting
temperatures of the dayside and nightside faces of the companion are found to be ∼7500 K
and 7400 K, respectively, indicating a mild irradiation temperature T_irr∼3900 K.
The lack of strong irradiation is likely due to the high luminosity of the star, which
is about 6.5 times larger than that expected from heating at the orbital distance
from the pulsar.
*
The binary inclination is found to be i= 59±6^∘, explaining
the lack of orbital modulation in the X-rays and gamma-ray regimes. Furthermore, a
distance of 3.1±0.3 kpc is found, which locates J1431 at larger distance than
previously estimated.
*
Despite the inability to derive the best irradiation model, which limits
the precision to which the NS mass is constrained, a massive (1.8-2.2 M_⊙)
NS in J1431 is favoured.
This work is based on observations obtained with XMM-Newton, an ESA science mission
with instruments and contributions directly funded by ESA Member States and
NASA and on observations collected at the
European Southern Observatory under ESO programme 0103.D-0241.
This work has made use of data from SkyMapper, a facility funded through
ARC LIEF grant LE130100104 from the Australian Research Council,
awarded to the University of Sydney, the Australian National University,
Swinburne University of Technology, the University of Queensland, the
University of Western Australia, the University of Melbourne, Curtin
University of Technology, Monash University and the Australian
Astronomical Observatory. SkyMapper is owned and operated by The
Australian National University's Research School of Astronomy and
Astrophysics.
This work has also made use of data from the European Space Agency (ESA) mission
Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia
Data Processing and Analysis Consortium (DPAC,
<https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement.
The data are publicly available at the European Space Agency (ESA) archive
http://nxsa.esac.esa.int/nxsa-web. The data can be obtained by contacting
the corresponding author or the team (V.S. Dhillon).
DdM, AP, FCZ, AMZ acknowledge financial support from the Italian Institute for Astrophysics
(INAF) Research Grant 2022:"FANS". AP is supported by the Italian Ministry
of University and Research (MUR), PRIN 2020 (prot.2020BRP57Z) "Gravitational
and Electromagnetic-wave Sources in the Universe with current and next generation
detectors (GEMS)".
FCZ is supported by a Ramón y Cajal fellowship (grant agreement RYC2021-03088-I).
VSD and operations are funded by the Science and Technology Facilities Council (grant ST/Z000033/1).
RPB acknowledges support from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation program
(grant agreement N.715051; Spiders).
We also acknowledge the support of the PHAROS COST Action (CA 16214).
We acknowledge useful comments from the anonymous referee.
§ CORNER PLOTS FOR THE THREE BEST FIT MODELS WITH POST-IGD
CAVITATING BUBBLES IN CONDENSING GAS
AS A MEANS OF FORMING CLUMPS, CHONDRITES, AND PLANETESIMALS
E. Chiang (ORCID 0000-0002-6246-2310)
Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA
Department of Earth and Planetary Science, University of California, Berkeley, CA 94720-4767, USA
§ ABSTRACT
Vaporized metal, silicates, and ices on the verge of re-condensing into solid or liquid particles appear in many contexts: behind shocks, in impact ejecta, and within the atmospheres and outflows of stars, disks, planets, and minor bodies. We speculate that a condensing gas might fragment, forming overdensities within relative voids, from a radiation-condensation instability. Seeded with small thermal fluctuations, a condensible gas will exhibit spatial variations in the density of particle condensates. Regions of higher particle density may radiate more, cooling faster. Faster cooling leads to still more condensation, lowering the local pressure. Regions undergoing runaway condensation may collapse under the pressure of their less condensed surroundings. Particle condensates will compactify with collapsing regions, into overdense clumps or macroscopic solids (planetesimals). As a first step toward realizing this hypothetical instability, we calculate the evolution of a small volume of condensing silicate vapor—a spherical test “bubble” embedded in a background medium whose pressure and radiation field are assumed fixed for simplicity. Such a bubble condenses and collapses upon radiating its latent heat to the background, assuming its energy loss is not stopped by background irradiation. Collapse speeds can range up to sonic, similar to cavitation in terrestrial settings. Adding a non-condensible gas like hydrogen to the bubble stalls the collapse. We discuss whether cavitation can provide a way for mm-sized chondrules and refractory solids to assemble into meteorite parent bodies, focusing on CB/CH chondrites whose constituents likely condensed from silicate/metal vapor released from the most energetic asteroid collisions.
§ INTRODUCTION
Ubiquitous in meteorites,
the millimeter-sized igneous spheres that are chondrules
pose two problems: how to melt them in space, and how to assemble them into asteroids.
The first problem, chondrule melting, has had a number of proposed solutions. These include heating from shock waves in the solar nebula, as driven by planetesimals (e.g. ; ), planets (e.g. ), or disk self-gravity (e.g. ; ; ). Alternatively, circumstellar solids that drift too close to the magnetospheric cavities of their young host stars may be heated by stellar flares (). Nebular lightning (electrons accelerated in a breakdown electric field) has also been proposed as an energy source (e.g. ).
Still another means of melting is asteroid collisions. There are many flavors of collisional scenarios: chondrules as melt droplets splashed out from planetesimals already molten from ^26Al (; ); chondrules as melt jetted from the impact site between solid planetesimals (; ); or chondrules as solids thermally processed by impact-generated vapor (; ; ) and radiation (). The impact models are not without their problems, among them avoiding elemental differentiation in liquid/molten asteroids as most chondrules are undifferentiated (e.g. ), and the low efficiencies with which colliding solids produce vapor (≲20%; ) and melt (≲7% for impacts with dunite, and orders of magnitude less for dunite mixed with ice;
). Nevertheless, a good case can be made that the subset of chondrites classified CB or CH (; ) have a collisional origin: their petrologic textures (; ; ), elemental abundances (), and thermal histories () can be reproduced by vapor condensation and radiative cooling in high-velocity impacts between asteroids 10–100 km in radius.
CB chondrites contain a special mix of Fe/Ni metal nodules and grains, and silicate chondrules unusually devoid of metal. This composition is interpreted to arise from the vaporization of an asteroid formerly differentiated into a metal core and a silicate mantle, and its subsequent recondensation into chondrules and metallic particles ().
The second problem—collecting chondrules, after they have been heated, into chondrite parent bodies—seems to have received less attention. The problem would seem acute for CB/CH chondrules, which are understood to form from vaporizing collisions. The sign seems wrong—debris from hypervelocity impacts is ejected at speeds comparable to the sound speed of rock vapor, on the order of km/s, readily escaping the gravity wells of bodies 100 km in size or smaller. Even if some chondrule debris were to fall back onto the post-collision remains (e.g. ), or be accreted onto other asteroids (e.g. ), accretion velocities tend to be so high as to shatter chondrules on impact,[Although
“chondrule-like” fragments < 30 μm in size have been collected from the comet 81P/Wild-2 () and the asteroid Ryugu (). Such trace micro-chondrules may be the remains of chondrules that shattered upon impacting the surfaces of asteroids and comet progenitors (). In-situ collisions, and the heating and re-accretion of solids resulting therefrom, may (contrary to prevailing wisdom; cf. ) obviate the need for large-scale transport of solids from heated regions near the proto-Sun.]
and to defeat gravitational focussing and render accretion improbable (). In reality, chondrules must have agglomerated gently enough to remain intact, and with remarkable efficiency to fill the majority of the volumes of chondritic meteorites. In the case of CB/CH chondrites, the volume-filling fraction of chondrules and metal nodules is ≳ 95% (e.g. ). To achieve such efficiency, the mechanism of aggregating chondrules was likely part and parcel of the same mechanism that melted them. That is, between the heating and assembly phases, chondrules must not have dallied and mixed unduly
with other nebular solids.[Though some mixing is necessary, as the chondrule + Fe/Ni metal volume-filling fraction is never 100% in any chondrite. Typical chondrites are a sedimentary-like amalgamation of chondrules, Fe/Ni metal, calcium-aluminium inclusions (CAIs), and unheated matrix grains, components with different chemical and thermal histories (e.g. ; for a review of CB/CH constituents, see ). Furthermore, the chondrules within a given chondrite may have different ages. CB chondrites contain not only young chondrules, but also CAIs that are 4–5 Myr older <cit.>.]
Despite appearances, a hot expanding vapor cloud generated from an asteroid collision may yet be conducive to planetesimal (re)formation, as proposed by <cit.> and <cit.>. Their numerical simulation of an impact vapor plume shows that its expansion can be halted and reversed: cooling and condensation create a pressure void which collapses under the background pressure from nebular H_2. The simulated implosion in <cit.> is arguably exaggerated, as it assumes the background H_2 is stationary with respect to the target asteroid; the roughly symmetric expansion of ejecta into a static medium allows for its subsequent, roughly symmetric collapse.[Technically axisymmetric about the velocity vector of the impactor relative to the target. Debris is strewn along an axis extending from the impact point to what is left of the impactor barreling through the assumed static nebula. Vapor expands outward from this axis, in a cone-shaped plume with the impactor at its vertex.] A more realistic set-up would account for H_2 gas moving relative to both target and impactor, i.e., the km/s nebular headwind due to the colliders' non-circular orbits, which are necessarily eccentric and/or inclined to yield a vaporizing collision in the first place. The hydrogen headwind would be expected to sweep through and flush away the condensed remains of the impact plume, rather than implode the plume from all directions ().
We consider here the possibility that condensation-induced collapse might still occur within the plume on smaller scales. <cit.> note “eddies” and “clumping of the entrained condensates” in their simulation. It is not hard to imagine how a gas might be susceptible to heterogenous condensation and localized collapse, even without an external confining nebula. Consider a purely condensible gas like silicate vapor, with no H_2 anywhere, on the verge of condensing. Because the saturation vapor pressure is exponentially sensitive to temperature, small variations in temperature across the gas yield larger-amplitude variations in the degree of saturation. Cooler regions saturate first. The resultant condensations may run away: condensed liquids/solids (“dust particles”) are efficient emitters of thermal radiation,[The simulation of <cit.> and <cit.> did not account for radiation.] cooling the condensing regions still further and accelerating condensation. These pockets of high condensation and low pressure will be compressed by surrounding lower condensation, higher pressure gas, potentially achieving densities high enough to self-gravitate (e.g. ). Thus, assuming a seed field of thermal fluctuations, the condensing medium may fragment into clumps of condensed particles—on their way, perhaps, to becoming planetesimals. In the context of an asteroid collision, melt droplets within the hot vapor plume could be compactified and reconstituted into chondrites or chondrite progenitors, before being swept downstream by the nebular headwind.
Here we explore condensation-induced collapse with some simple calculations. Within a condensing gas we consider a small test region or “bubble”, and study how the bubble leaks energy by radiation and compresses under the pressure of the surrounding medium. We assume for simplicity that the medium exterior to the bubble has given and fixed properties (in particular pressure and radiation field). Not following the evolution of the background medium, or writing down its governing equations, prevents us from studying its thermal stability (cf. ), and testing whether our hypothesized radiation-condensation instability is real.
Instead, in this preliminary study, we rig up bubbles that can collapse, identify the circumstances under which they do, and describe quantitatively how the collapse unfolds, establishing orders of magnitude. Our assumption that background properties do not change may not be unrealistic insofar as bubbles can collapse quickly—we will see that evolution times can be measured in seconds to minutes. The physics studied here relates to pressure oscillations of air bubbles in water (), as well as cavitation: the formation and collapse of vapor bubbles in low-pressure liquids (see, e.g., ). In terrestrial examples of cavitation, collapse speeds can be sonic and generate shock waves; ram pressures from imploding bubbles can damage fast-moving machinery immersed in liquid (watercraft propellers, gear pumps), or disable the prey of fast-striking marine animals (pistol shrimp, mantis shrimp). In the context of chondrule formation, cavitation has also been invoked to limit the sizes of molten droplets experiencing gas drag (), and to create chondrules from the popping of vapor bubbles off initially solid bodies heated to boiling (). These works consider cavitation on mm-cm scales; our work asks whether a different kind of cavitation can operate on much larger, super-km scales, not to create chondrules, but to assemble them into parent bodies.
In section <ref> we write down and solve the equations governing the dynamics and thermodynamics of condensing spherical bubbles in a fixed background. Though we do not spatially resolve the bubbles, and use expressions that are at best accurate to order-of-magnitude, we can follow the evolution of bubbles across a wide range of scales and circumstances, and also assess what size solid particles can be entrained in collapsing gas. In section <ref> we consider how mixing a non-condensible gas (hydrogen) into the condensing bubble can qualitatively change outcomes. Section <ref> summarizes and connects our results to problems of chondrite formation and structure formation generally.
§ BUBBLES OF WHOLLY CONDENSIBLE VAPOR
To re-cap the big picture motivated in the Introduction: we consider how a portion of a condensible medium evolves under perturbation. The medium can be drawn from any chondrule-forming event, staged concurrently with chondrule formation or in its immediate aftermath—e.g., the protostellar nebula downstream of a shock, or the debris cloud from a hypervelocity impact between asteroids—any region containing hot rock vapor and either freshly melted chondrules or soon-to-be-condensed chondrules, that subsequently cools by expansion and radiation. We analyze a piece of the medium, a “bubble”, whose properties are for whatever reason slightly different from its surroundings, and ask how the bubble interacts with the background, focusing on how the bubble may lose energy to the background through radiation, and be crushed into a smaller volume by the background pressure. While we evolve the bubble properties in time, we freeze the background—this is done for simplicity, and is justifiable insofar as the bubble might evolve on a timescale faster than the background evolves. This timescale requirement is not necessarily satisfied, and we will butt up against it. In the case of a background medium drawn from the vapor plume from an asteroid impact, <cit.> showed that thermal histories of impact plumes produced by colliding asteroids 10-100 km in radius (these reproduce empirical chondrule cooling rates of 100-1000 K/hr) unfold over hours, possibly stretching to days, and create plumes on the order of 10000 km across. We are asking here how such impact plumes might fragment on smaller scales, on timescales shorter than days.
For this first set of calculations (section <ref>), we assume the medium is composed purely of silicates and metals, a fraction of which is in the vapor phase and capable of condensing wholly into solid/liquid “dust” particles (we do not distinguish between solid and liquid phases and hereafter refer to dust particles as “solid”). The medium has uniform pressure P_ bkg and temperature T_ bkg. Out of this background we carve a test sphere (bubble) of radius R, temperature T, and pressure P.
The equations governing the evolution of the bubble are described in section <ref>. Results for a fiducial set of parameters are presented in section <ref>, and parameter space explored in section <ref>. In section <ref>, we consider what size solids may be aerodynamically dragged with the bubble, and what particle-particle relative velocities may be. Section <ref> introduces a non-condensible gas like nebular hydrogen to the problem.
§.§ Equations solved
The gas in the bubble is treated as ideal:
P = ρ_ gas k T/μ m_ H
where ρ_ gas is the gas mass density (< ρ, the total mass density in gas + solids), k is Boltzmann's constant, μ= 30 is the mean molecular weight, and m_ H is the atomic mass unit.
While the total bubble mass M is constant, the bubble mass in solid form, M_ solid, can evolve. If the bubble is under-saturated, i.e. if P < P_ sat(T), where P_ sat is the
saturation pressure of vapor over rock, then the bubble remains purely vapor:
M_ solid = 0 (P < P_ sat) .
Otherwise the bubble is saturated and solids condense out:
M_ solid = M - ρ_ sat(T) V (P = P_ sat)
where V = 4π R^3/3 is the bubble volume and ρ_ sat = P_ satμ m_ H / (kT) is the saturated gas density. We use here the saturation vapor pressure curve of molten bulk silicate Earth (“BSE”), which has a chemical composition similar to that of olivine-rich chondrites (; see also ):
log_10(P_ sat/ bars) = -30.6757 - 8228.146 K/T+ 9.3974log_10(T/ K) .
Modeling the medium as a single, continuously condensible species ignores the reality that different minerals condense out at different temperatures; indeed, this discrete condensation sequence (e.g. , their Table 32) is critical to explaining the segregated components of Fe/Ni metal, and metal-free silicate chondrules, in CB/CH chondrites (e.g. ). Depending on the composition of rock melt, saturation vapor pressures can vary by 1–2 orders of magnitude at a given temperature. We adopt equation (<ref>) for simplicity and to capture a key feature of condensation-induced collapse, namely the exponential sensitivity of pressure to temperature (Clausius-Clapeyron).
Equation (<ref>) follows from our assumption that the bubble can never be super-saturated (e.g. ). The “excess” mass—what would super-saturate the bubble if it were a gas[In practice we determine whether the bubble is under-saturated or saturated by calculating a hypothetical pressure if all of mass M were in the vapor phase. The hypothetical pressure is set equal to P if it is < P_ sat(T); otherwise we set P = P_ sat.]—condenses into dust particles. Interestingly, equation (<ref>) predicts that if the bubble shrinks isothermally, more dust condenses out. While such a trend seems sensible insofar as a decreasing volume increases the vapor density and by extension nucleation and condensation rates, the microphysics of dust growth is not actually captured by our model. As such, we do not know the condensate size distribution.
We assume for the purpose of calculating the bubble opacity to thermal radiation that all condensates are identical spheres of radius s and internal density ρ_ solid = 3 g/cm^3. Further assuming that these spheres absorb and emit radiation with their geometric cross sections yields the optical depth of the bubble along its radius:
τ = 9 M_ solid/16π R^2 ρ_ solid s .
The bubble's temperature evolves with time t according to the internal energy equation, whose right-hand side accounts for radiation, PdV work, and latent heat release:
MC dT/dt = - 4π R^2 σ_ SB (T^4-T_ rad,bkg^4) f(τ)
- P dV/dt + L_ vap dM_ solid/dt
where C ≃ 3k/(μ m_ H) = 8 × 10^6 erg/(g K) is the bubble's specific heat (ignoring the order-unity difference between the specific heats of solids vs. gas in the bubble), σ_ SB is the Stefan-Boltzmann constant, and L_ vap = 3 × 10^10 erg/g is the latent heat of vaporization of rock.
The background radiation temperature, T_ rad,bkg, is the temperature to which the bubble would relax into radiative equilibrium in the absence of other effects. It is not necessarily the same as the background gas kinetic temperature T_ bkg, as the background can be optically thin, in which case T_ rad,bkg < T_ bkg. The bubble's emission likewise depends on its optical depth, through the dimensionless function f(τ). When the bubble is optically thin (τ≪ 1), it is considered to have a uniform temperature T and a uniform volume emissivity (power emitted per unit volume); accordingly, its luminosity scales as its volume ∝ R^2 τ. The same scalings apply for absorption of energy from the background radiation field in the optically thin limit. When the bubble is optically thick (τ≫ 1), T is interpreted as the internal “core” temperature of the bubble (where most of the bubble mass is assumed to reside), distinct from its photospheric temperature. When T ≫ T_ rad,bkg, the photosphere is cooler than the core, allowing radiation to diffuse out at a rate proportional to T^4/τ; in the opposite limit, the photosphere is hotter than the core, allowing radiation to diffuse in at a rate proportional to T_ rad,bkg^4/τ (note τ is a grey optical depth that is the same for outgoing and incoming radiation). We capture both optically thin and thick limits with the dimensionless
function[The factor of 3π in f(τ) follows from the luminosity ∫ j_ν dν× V of an optically thin sphere of constant volume emissivity j_ν = α_ν B_ν, where ν is the photon frequency, B_ν is the Planck function, and α_ν = R/τ is the grey absorption coefficient. The factor of 3/16 follows from the plane-parallel energy flux in radiative diffusion (assuming local thermodynamic equilibrium), -(16/3)σ_ SB T^3 dT/dτ, and the approximation dT/dτ∼ -T/τ.]
f(τ) = τ/(3τ^2/16 + 3π).
Our assumption that an optically thick bubble has a temperature gradient regulated by radiative diffusion across its entire radius ignores the possibility that latent heat release or turbulence might render the bubble more spatially isothermal, in which case the temperature difference T-T_ rad,bkg may occur over an optical depth interval smaller than the bubble's full τ, and thereby yield a higher radiative flux (either in or out). In this sense our picture is conservative because it minimizes radiative loss, staving off collapse.
The final equation governing the bubble describes how it expands or contracts depending on the pressure difference with the background:
M d^2 R/dt^2 = 4π R^2 (P - P_ bkg) .
On the left-hand side we assume only the bubble mass accelerates and neglect the background mass that accelerates with it.[For oscillating air bubbles in water, the background water mass dominates the inertia ().] The relevant background mass is located within distance ∼R of the bubble surface, and as such makes only an order-unity contribution to the inertia when bubble and background densities are similar. As the bubble contracts and becomes more dense, the background inertia becomes less important. Note further that equation (<ref>) assumes all of the bubble mass, including embedded solids, accelerates in response to gas pressure forces—condensates are assumed to be aerodynamically dragged in sync with bubble gas. This last assumption will be tested in section <ref> for various bubble parameters and particle sizes.
Equations (<ref>) and (<ref>) are two coupled ordinary differential equations that we solve numerically (with Python's ). Note that the background gas kinetic temperature T_ bkg does not formally enter into the equations solved.
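The integration scheme is not specified in the text; as a minimal, self-contained sketch of the governing equations above, applied to the fiducial case described in the next subsection, one could use scipy's solve_ivp (an assumption, not necessarily the routine used here):

# Minimal sketch of the bubble equations for the fiducial case; the use of
# scipy's solve_ivp is an assumption, not necessarily the paper's own integrator.
import numpy as np
from scipy.integrate import solve_ivp

# constants and fixed parameters (cgs)
k_B, m_H, sigma_SB = 1.3807e-16, 1.6605e-24, 5.6704e-5
mu, L_vap, rho_solid, s = 30.0, 3e10, 3.0, 0.01
C = 3 * k_B / (mu * m_H)                      # specific heat, ~8e6 erg/(g K)
a0, a1, a2 = -30.6757, -8228.146, 9.3974      # BSE saturation curve (P in bars)

def P_sat(T):                                 # saturation vapor pressure, dyn/cm^2
    return 1e6 * 10**(a0 + a1 / T + a2 * np.log10(T))

def dPsat_dT(T):
    return P_sat(T) * (-a1 * np.log(10.0) / T**2 + a2 / T)

def rho_sat(T):
    return P_sat(T) * mu * m_H / (k_B * T)

def f_tau(tau):                               # radiative efficiency factor
    return tau / (3 * tau**2 / 16 + 3 * np.pi)

def rhs(t, y, M, P_bkg, T_rad_bkg):
    R, Rdot, T = y
    V, dVdt = 4 * np.pi * R**3 / 3, 4 * np.pi * R**2 * Rdot
    P_all_gas = M * k_B * T / (mu * m_H * V)  # pressure if all of M were vapor
    saturated = P_all_gas >= P_sat(T)
    P = P_sat(T) if saturated else P_all_gas
    M_solid = max(M - rho_sat(T) * V, 0.0) if saturated else 0.0
    tau = 9 * M_solid / (16 * np.pi * R**2 * rho_solid * s)
    cool = 4 * np.pi * R**2 * sigma_SB * (T**4 - T_rad_bkg**4) * f_tau(tau)
    if saturated:
        # dM_solid/dt = -d(rho_sat V)/dt couples to dT/dt; solve the energy equation for dT/dt
        drho = dPsat_dT(T) * mu * m_H / (k_B * T) - rho_sat(T) / T
        dTdt = -(cool + (P + L_vap * rho_sat(T)) * dVdt) / (M * C + L_vap * drho * V)
    else:
        dTdt = -(cool + P * dVdt) / (M * C)
    Rddot = 4 * np.pi * R**2 * (P - P_bkg) / M
    return [Rdot, Rddot, dTdt]

# fiducial initial conditions: saturated pure vapor, 1% overpressured, no background radiation
T0, R0 = 2500.0, 1e5
M = 4 * np.pi * R0**3 / 3 * rho_sat(T0)
P_bkg, T_rad_bkg = 0.99 * P_sat(T0), 0.0

stop = lambda t, y, *args: y[0] - 0.01 * R0   # halt when R falls below 1% of R0
stop.terminal, stop.direction = True, -1

sol = solve_ivp(rhs, (0.0, 100.0), [R0, 0.0, T0], args=(M, P_bkg, T_rad_bkg),
                events=stop, max_step=0.01, rtol=1e-8)
print("integration stopped at t = %.1f s" % sol.t[-1])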
§.§ Fiducial case
Input parameters evaluated at time t=0 of our fiducial model are as follows.
For the bubble:
T = 2500 K, P = P_ sat(T), R = 1 km, Ṙ = 0, and M = V ×ρ_ sat(T) so that M_ solid=0 (initially).
For the background: T_ rad,bkg = 0 and P_ bkg = 0.99 P(t=0).
In words: a pure gas bubble starts stationary and slightly overpressured relative to the background, which has no radiation to heat the bubble. For reference, the bubble's initial pressure is P(t=0)=9 mbar, and its mass density ρ_ sat(T) = 1.3 × 10^-6 g/cm^3 (about 3 orders of magnitude less than air at sea level). If the bubble mass M were to compactify into a solid sphere of density ρ_ solid = 3 g/cm^3, it would have a radius of 7.6 m. We set s = 0.01 cm, intermediate between the mm-cm sizes of the largest chondrules (e.g. ) and the μm sizes of matrix grains in chondritic meteorites (e.g. ). The value of s should not be taken too literally, as it is merely a proxy knob for the opacity, which in reality is determined by the (unmodeled) size distribution of condensed particles. The starting temperature of 2500 K matches the onset of condensation in the impact-vapor cloud models of Choksi et al. (; their Fig. 1), and falls within the preferred range of temperatures in the “metal-rich” elemental condensation sequence for CB chondrites (, their Fig. 6).
Figure <ref> describes how such a bubble radiatively cools and eventually collapses. The initial overpressure sets the bubble oscillating. We verified that the oscillations, having period equal to the bubble sound-crossing time of
t_ sound∼2R/√(P/ρ_ gas)≃ 2 ( R/1 km) s
last indefinitely when radiation is neglected (by setting τ = 0 in equation <ref>). The oscillation is a simple acoustic or pressure mode (p-mode).
With every expansion, the bubble cools and condenses a fraction of its mass into dust. When radiation from this dust is accounted for, the bubble irreversibly leaks heat, and successive temperature minima decrease ever so slightly, producing more dust and cooling the bubble faster in a positive feedback. The temperature oscillations are small, less than a percent for the first 10 s of the evolution, and growing to only a few percent just prior to collapse.
Pressure variations are more pronounced because the saturation vapor pressure depends exponentially on temperature. The upward trend in the amount of dust and the corresponding downward trend in the bubble's mean pressure causes a corresponding decline in its mean radius. In the penultimate oscillation (t ≃ 14–16 s), as the bubble shrinks to less than half its initial size, its internal gas pressure surges upward, but not enough to overcome the inertia of the infall. At this point, most of the bubble mass has condensed, not only from the decreasing temperature, but also from the decreasing volume (equation <ref>).
The bubble finally implodes at a near-sonic velocity from the order-unity difference in bubble vs. background pressures. Radial flow speeds reach maxima of Ṙ≃ -200 m/s
(see Figure <ref>); for reference, the initial sound speed in the bubble is ∼√(P/ρ_ gas)≃ 800 m/s. We follow the evolution until R is < 1% of its initial value; formally, the total bubble density (in solids and gas) at this point is ∼1 g/cm^3, practically that of a bulk solid. Despite the radius of the bubble shrinking by more than two orders of magnitude, the internal pressure P and gas density ρ_ gas vary by at most a factor of two from their initial values, and the temperature T varies even less.
The collapse occurs over the time it takes the bubble to radiate away enough energy to condense. We define a characteristic cooling time by dividing the bubble's latent heat content by its radiative luminosity:
t_ cool = M L_ vap/[4π R^2 σ_ SB (T^4-T_ rad,bkg^4) f(τ)] .
The numerator ignores the contribution in heat content from changes in temperature since the bubble collapses nearly isothermally. Equation (<ref>) can be evaluated using initial conditions, setting τ to what it would be if M_ solid = 0.3 M and R = R(t=0) (the factor of 0.3 is an arbitrary order-unity constant chosen to approximately match numerical results). So defined, τ≃ 1 and t_ cool≃ 6 s (see vertical dashed line in the top panel of Fig. <ref>). This calculation underestimates the actual collapse time of ∼20 s because in reality the bubble spends the first half of its evolution optically thin and radiating relatively inefficiently.
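Evaluating this definition at the fiducial initial conditions, with τ set by M_solid = 0.3 M as described above, reproduces the quoted values (illustrative check):

# Evaluation of t_cool at the fiducial initial conditions, with tau set by M_solid = 0.3 M.
import numpy as np

sigma_SB, L_vap, rho_solid, s = 5.6704e-5, 3e10, 3.0, 0.01
T0, R0, T_rad_bkg = 2500.0, 1e5, 0.0
M = 5.6e9                          # fiducial bubble mass, g (see the cross-check above)

tau = 9 * (0.3 * M) / (16 * np.pi * R0**2 * rho_solid * s)       # ~1
f = tau / (3 * tau**2 / 16 + 3 * np.pi)
t_cool = M * L_vap / (4 * np.pi * R0**2 * sigma_SB * (T0**4 - T_rad_bkg**4) * f)
print("tau ~ %.1f, t_cool ~ %.0f s" % (tau, t_cool))             # ~1, ~6 s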
§.§ Parameter study
The collapse of the bubble as described in the previous section occurs irrespective of many parameter choices. In particular, the initial pressure difference between bubble and background can vary by orders of magnitude, and be negative or positive or zero, and the bubble will still collapse, as we have verified by direct calculation (data not shown). Assuming the background can serve as an energy sink (T > T_ rad,bkg), all that is needed for collapse is dust to secularly cool the bubble. Despite formally having no dust to begin with, our fiducial bubble becomes dusty because it is initially overpressured relative to the background, and so expands, cools, and condenses. Other combinations of initial pressure difference P-P_ bkg (including zero) and initial velocity perturbation Ṙ (including zero) lead to similar runaway outcomes, where cooling, condensation, and contraction feed off one another in a positive feedback. Simply seeding the bubble with dust (M_ solid > 0 at t=0) also works.
Figure <ref> shows how the collapse of the bubble varies quantitatively with some input parameters. We change one of the following five parameters at a time, keeping others at their fiducial values:
* Seed solid mass: M_ solid(t=0)/M = 0.5 (red curves in Fig. <ref>). Compared to the fiducial case with no seed solids (black curves), the bubble with seed solids is optically thicker, radiates faster, and collapses on a timescale shorter than that of the fiducial model by a factor of ∼2, and reasonably predicted by t_ cool (3 s; vertical red dashed line in top left panel). Radial flow speeds are correspondingly faster, with a maximum contraction velocity of 500 m/s.
* Particle size controlling opacity: s = 1 μm (light blue curves). Smaller particles have higher radiative cross-sections per unit mass and render the bubble optically thicker. At first the bubble cools and shrinks faster than in the fiducial model (black curves, s = 100 μm), but later becomes so much more opaque that it cools slower
and ultimately takes longer to collapse (150 s vs. 20 s).
* Background radiative heating: T_ rad,bkg = 2475 K = 0.99 T(t=0) (dark blue curves). Cooling is slowed relative to the fiducial model (black curves, T_ rad,bkg=0), with collapse taking ∼1 hour. Our metric t_ cool, though lengthened by T_ rad,bkg according to the denominator of equation (<ref>), still underestimates the actual collapse time because the formula does not consider how background heating limits dust production and keeps the optical depth low for some time (bottom left panel). Background heating buffers the bubble against the large-amplitude oscillations in radius seen in other models. Radial speeds are highly sub-sonic (and are magnified for easier viewing in Fig. <ref> by a factor of 10, for this model only), varying between ± 8 m/s during the initial quasi-adiabatic acoustic-oscillation phase, and slowing from -6 m/s to -0.4 cm/s during the protracted final collapse. As this experiment demonstrates, the bubble can leak energy to the environment and collapse even when T_ rad,bkg < T by as little as -1%. Various experiments with T_ rad,bkg > T(t=0), even by +1%, do not lead to collapse (data not shown).
* Initial bubble size: R(t=0) = 100 km (heavy orange curves). Compared to our fiducial 1-km bubble (black curves), the 100-km bubble has a 100× longer acoustic period (200 s vs. 2 s because of the longer distance crossed at the same sound speed). Collapse takes several hours, about 3 orders of magnitude longer than the fiducial case, because the larger bubble has more mass to cool and its radiative luminosity does not scale in linear proportion to the mass when the bubble becomes opaque. Equation (<ref>) predicts t_ cool∝ M/(R^2 f(τ)), which in the optically thick limit scales as M/(R^2/τ) ∝ R^2 (and in the optically thin limit scales as M/(R^2τ) ∝ M^0).
* Initial bubble temperature: T(t=0) = 3000 K (grey curves) and 2000 K (light orange curves). Higher temperatures imply higher densities from the saturation pressure curve (equation <ref>) and the ideal gas law. At 3000 K, the bubble is 17× denser than our fiducial 2500 K bubble (black curves); the hotter bubble, if compactified to solid density, would yield a planetesimal 19 m in radius (vs. 7.6 m for the fiducial bubble).
The correspondingly larger mass takes longer to cool and collapse, about
10 minutes, with radial speeds varying between ± 60 m/s.
Compared to the fiducial model, the cooler 2000 K bubble is 43× less massive (equivalent planetesimal radius 2.2 m), but also takes longer to collapse (34 s vs. 20 s) because it generates comparatively little dust (bottom left panel) and therefore cools less efficiently.
In 5 out of 6 of our non-fiducial, parameter space experiments, the bubble pressure and gas density deviate by less than a factor of 10 relative to their initial values, and for temperature much less, despite reductions in bubble volume by more than a factor of 10^6. The smallest fractional changes are seen in the background heating experiment (T_ rad,bkg = 0.99 T(t=0)), where Δ T/ T ≲ 10^-3 (thereby maintaining the inequality T > T_ rad,bkg throughout the evolution) and Δ P / P ∼Δρ_ gas / ρ_ gas≲ 2%. In the large bubble (initial R = 100 km), hotter temperature (initial T = 3000 K), seed solid (initial M_ solid/M = 0.5), and small particle size (s = 1 μm) experiments, temperatures deviate by 2-14% relative to starting conditions, and pressures and gas densities range from 0.1× to 6× initial values. The largest variations are seen for the cold bubble (initial T = 2000 K), whose final temperature is T ≃ 1550 K, and whose final (saturation) pressure is accordingly a factor of 180 smaller than the initial pressure.
§.§ Particle drag and relative velocities
Here we evaluate how well bubble gas entrains solid particles of various sizes. We consider a solid particle of radius s_ c and mass m_ c = 4πρ_ solids_ c^3/3 located at radial distance R_ c < R from the bubble center. The particle accelerates by gas drag according to
m_ c d^2 R_ c/dt^2 = (4π/3) ρ_ gas ( u_ gas - dR_ c/dt) √(8kT/(πμ m_ H)) s_ c^2 g(s_ c/λ_ mfp)
g = 1/[1+0.63 s_ c/λ_ mfp]
where λ_ mfp = μ m_ H/(ρ_ gasσ) is the collisional mean free path in gas with molecular cross-section σ≃π× 10^-15 cm^2. The dimensionless function g is constructed to smoothly switch from a drag force that scales as s_ c^2 when s_ c≪λ_ mfp (free molecular drag; ), to a force that scales as s_ cλ_ mfp when s_ c≫λ_ mfp (Stokes drag).[Numerical coefficients in (<ref>) and (<ref>) match coefficients derived in the literature. In the free molecular drag regime, we use the force law that assumes “specular reflection” of molecules off a sphere (). In the Stokes regime, the drag force on a sphere moving at speed u relative to gas is 6πρ_ gasν_ visc u s_ c, with kinematic viscosity ν_ visc = 0.35 λ_ mfp u_ thermal and u_ thermal = √(8kT/(πμ m_ H)) ().]
The gas velocity u_ gas at the particle's position is scaled to Ṙ, the velocity at the bubble boundary:
u_ gas = Ṙ× R_ c/R .
Thus u_ gas = 0 when R_ c=0 at the bubble center, and grows linearly up to Ṙ at R_ c = R (a linear “Hubble law”).
Equation (<ref>) is solved numerically for R_ c(t), with R(t) and Ṙ(t) obtained separately from the equations in section <ref>. The solid particle is thus treated as a kind of test particle in the gas; the momentum backreaction of solids on gas is ignored. This neglect means our solutions for R_ c(t) at late times, when solids outweigh gas, cannot be trusted. Nevertheless the calculations provide a starting point for discussion, and should still identify reasonably accurately the largest particles that can compactify with the bubble. It is hard to see how accounting for drag backreaction would change the sizes above which particles would decouple from the gas.
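A minimal sketch of this test-particle integration is given below. It assumes the bubble history R(t), Ṙ(t), ρ_ gas(t), and T(t) is already available as interpolating functions from the solution of the bubble equations; the particle bulk density and the mean molecular weight of the vapor are placeholder values, and the momentum backreaction of the solids on the gas is ignored, as in the text:

import numpy as np
from scipy.integrate import solve_ivp

K_B, M_H = 1.3807e-16, 1.6726e-24   # Boltzmann constant, hydrogen mass [cgs]
MU = 30.0                           # mean molecular weight of the vapor (assumed)
SIGMA_MOL = np.pi * 1e-15           # molecular cross-section [cm^2]
RHO_SOLID = 3.0                     # bulk density of the solid particle [g/cm^3] (assumed)

def drag_accel(Rc, Rc_dot, R, Rdot, rho_gas, T, s_c):
    """Drag acceleration, i.e. the force above divided by m_c = (4 pi/3) rho_solid s_c^3."""
    u_gas = Rdot * Rc / R                                # linear "Hubble" gas flow
    u_th = np.sqrt(8.0 * K_B * T / (np.pi * MU * M_H))   # mean thermal speed
    lam = MU * M_H / (rho_gas * SIGMA_MOL)               # collisional mean free path
    g = 1.0 / (1.0 + 0.63 * s_c / lam)                   # free-molecular -> Stokes switch
    t_stop = RHO_SOLID * s_c / (rho_gas * u_th)          # free-molecular stopping time
    return (u_gas - Rc_dot) * g / t_stop

def integrate_particle(t_grid, R_of_t, Rdot_of_t, rho_of_t, T_of_t, s_c, Rc0):
    """Solve for R_c(t) through a prescribed bubble history (callables of time)."""
    def rhs(t, y):
        Rc, Rc_dot = y
        a = drag_accel(Rc, Rc_dot, R_of_t(t), Rdot_of_t(t), rho_of_t(t), T_of_t(t), s_c)
        return [Rc_dot, a]
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), [Rc0, 0.0],
                    t_eval=t_grid, rtol=1e-8, atol=1e-10)
    return sol.y[0]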
Figure <ref> displays the evolution of particle positions relative to the bubble boundary, R_ c/R, for particles of various sizes, in our fiducial model (section <ref>) plus five of our experiments exploring parameter space (section <ref>, omitting the cold bubble experiment). For each model at t=0 we place initially stationary particles (Ṙ_ c = 0) at R_ c/R = 0.25 and 0.75. In plotting R_ c/R vs. t, we offset some curves on the ordinate or stretch them on the abscissa for clarity.
The smallest particles, having sizes from 1 to 100 μm depending on the model, are so tightly coupled to gas that they maintain their relative positions in the bubble; their R_ c/R curves are nearly flat, so that when the bubble collapses,
they collapse along with it. The largest particles, having sizes of 10 μm to 1 cm depending on model parameters, are left outside the bubble when it implodes (we stop integrating particle trajectories when R_ c/R > 1). Intermediate size particles stay within the bubble and change their positions within it, moving radially outward or inward, not quite in sync with bubble gas.
Particles are least coupled to those bubbles that collapse fastest, as can be seen in Fig. <ref> by comparing the bottom row of faster-collapsing models to the top row of slower-collapsing models. Across the parameter space surveyed, cm-sized particles and larger do not collapse. The drag regime is largely controlled by the gas temperature, which sets the gas density under saturation conditions. For T ≤ 2500 K, gas mean free paths λ_ mfp exceed 100 μm, and smaller sized particles experience free molecular drag. In this regime the momentum stopping time is
t_ stop ≡ m_ c u_ rel/[(4π/3) ρ_ gas u_ rel √(8kT/(πμ m_ H)) s_ c^2] = ρ_ solid s_ c/[ρ_ gas√(8kT/(πμ m_ H))]
which evaluates to ≤ 0.2 s for s_ c≤ 100 μm, T = 2500 K, and ρ_ gas = ρ_ sat(T). These stopping times are shorter than our bubble acoustic oscillation periods, which are of order t_ sound∼ 2–200 s (equation <ref>), ensuring entrainment. Larger particles with s_ c > λ_ mfp will have longer stopping times, not only because of their larger inertia but also because they experience Stokes drag which is less efficient than free molecular drag by a factor of s_ c/λ_ mfp. For model bubbles that take hours to collapse, the cut-off size for retaining particles appears to be a millimeter (top row of Fig. <ref>).
Particles within a bubble will have a dispersion of sizes, and accordingly different velocities relative to the gas, and different velocities relative to one another. Particle-particle velocities u_ rel are limited by |Ṙ|, the maximum gas velocity at the bubble boundary (equation <ref>); a hard upper limit on u_ rel would be 2Ṙ, if we imagine one particle traveling inward at -|Ṙ| and a second traveling outward at +|Ṙ| (accelerated to this velocity during a prior expansion phase of the bubble, and somehow left coasting). For most particle-particle encounters, u_ rel < |Ṙ|, with smaller particles, and smaller differences in sizes between particles, reducing u_ rel.
For cavitating bubbles to collect chondrules into chondrite parent bodies, relative particle velocities cannot be too large, lest chondrules be destroyed on impact. The sedimentary laminations of the CB/CH chondrite Isheyevo point to gentle, layer-by-layer accretion of size and mineral-sorted material (). <cit.> and <cit.> require u_ rel≲ 13–25 m/s for mm-sized aggregates to not break on impact. The aggregates in their experiments are porous and composed of micron-sized monomers. Chondrules and metal nodules are individually competent and should tolerate higher impact velocities; on the other hand, they may splash when molten. <cit.> cite upper limits on u_ rel of ∼100 m/s for solid chondrules, and ∼1 m/s for liquid ones. <cit.> roughly estimate u_ rel≲ 30–200 m/s based on the material strengths of carbonaceous chondrites. These variously estimated maximum velocities should be compared against our theoretical values of Ṙ (read max u_ rel) displayed in Fig. <ref>. Of particular interest for chondrite assembly are the slowest velocities |Ṙ| < 8 m/s for our model bubble heated by background radiation.
§ BUBBLES OF CONDENSIBLE VAPOR MIXED WITH HYDROGEN
We now add hydrogen, a non-condensing gas, to the silicate vapor in the bubble. In equations (<ref>) and (<ref>) we make the following replacements:
MC → MC + M_ H_2C_ H_2
P → P + P_ H_2
M → M + M_ H_2
where variables subscripted H_2 refer to molecular hydrogen and those not so subscripted refer to the condensible silicates. The specific heat C_ H_2 = (7/2)k / (μ_ H_2 m_ H) = 1.5× 10^8 erg/( g K) where μ_ H_2 = 2, and P_ H_2 = (M_ H_2/V) kT / (μ_ H_2 m_ H).
Figure <ref> shows how the evolution of our fiducial bubble changes with increasing hydrogen-to-metal mass fraction M_ H_2/M. A tiny addition of hydrogen can substantially slow the collapse. The first ∼10-20 s of the evolution is largely independent of M_ H_2, with nearly all the metals condensing out before 20 s. But whereas condensation removes all of the bubble's pressure support when M_ H_2 = 0, the presence of H_2 sets a floor on the pressure. Pound for pound at a given temperature, H_2 supplies more pressure than silicate vapor by a factor of μ/μ_ H_2∼ 15; the reduction in bubble volume immediately following metal condensation quickly amplifies the H_2 pressure up to the background pressure (top right panel of Fig. <ref>). Thereafter the bubble continues to shrink from radiative cooling, albeit more slowly, remaining on average in pressure equilibrium with the background while becoming colder and denser. From dT/dt ∝ R^2 T^4 / τ (for optically thick bubbles),
τ∝ R, and T/V = constant (constant pressure maintained by the background), we derive T ∝ t^-3/10 and R ∝ t^-1/10, as confirmed in Fig. <ref>.
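Spelling out the intermediate steps of this scaling (assuming a fixed H_2-plus-dust mass and optically thick cooling, as above): |dT/dt| ∝ R^2 T^4/τ ∝ R T^4 ∝ T^13/3, where we used τ∝ R and T ∝ V ∝ R^3 at constant pressure. Integrating T^-13/3 dT ∝ -dt gives T^-10/3∝ t, i.e., T ∝ t^-3/10, and hence R ∝ T^1/3∝ t^-1/10.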
Experiments that simultaneously vary M_ H_2/M with solid seed mass or initial bubble temperature do not yield behaviors beyond those reported above. When both M_ H_2 and the background radiation temperature T_ rad,bkg < T(t=0) are non-zero, the bubble shrinks and cools until T = T_ rad,bkg, at which point the bubble stabilizes.
§ SUMMARY AND DISCUSSION
A condensing gas loses pressure as vapor converts to liquid or solid particles. We have raised the possibility that condensation proceeds inhomogeneously—that pockets or “bubbles” of gas that begin condensing relative to their surroundings will accelerate their condensation and collapse under background pressure. We have calculated how a bubble shrinks as its condensates (“dust”) radiate energy to the background, under the drastic simplifying assumption that background properties (pressure and radiation field) stay constant. For the background to serve as an energy sink, its radiation temperature must be less than the bubble's.
Gas condenses over the time it takes radiation to carry away latent heat. Bubbles having sizes of 1–100 km, saturated with silicate vapor at temperatures of ∼2500 K, can condense over seconds to hours, depending on parameters. The temperature stays nearly constant during this time. Once condensation completes, the bubble implodes, at near-sonic speed in the absence of background heating (similar to cavitating water vapor bubbles on Earth), and at subsonic speeds (m/s or below) in the presence of background irradiation.
Adding a non-condensible gas like H_2 to the bubble sets a floor on the pressure, halting the collapse when the bubble, containing only inert dust and H_2, attains radiative and pressure equilibrium with the background. The resultant clumps may have astrophysical application. Perhaps the clumpy solids revealed by transits of pre-main-sequence stars (e.g. ), or white dwarfs (e.g. ; ), arise from a stalled form of cavitation, or a thermal instability that generates a multi-phase medium (). Orbital shear sets an upper limit of about an orbital period on the lifetimes of non-self-gravitating bubbles/clumps.
The obvious shortcoming of our work is that we have solved for the evolution of a bubble in a background medium while not accounting for how the background medium may also change. A linear stability analysis of the background, with an imposed cooling function to drive the medium to condense over time, may be a good next step. The question is whether small perturbations to the background undergo runaway condensation and collapse, and on what spatial and time scales. If the medium is optically thick, radiation can be treated locally in a diffusive approximation; but if optically thin, the problem will be global, with the radiation field at one point dependent on radiation emitted from elsewhere. The need for collapsing bubbles to lose energy to the background suggests that while the bubbles may be optically thick, the background may need to be optically thin, or at least thinner than the bubbles. Other issues requiring investigation include turbulence; instabilities at the bubble boundary (e.g. Rayleigh-Taylor; Richtmyer-Meshkov); the drag backreaction of solids on gas; and the evolution of the condensate size distribution.
§.§ Pros and cons of cavitation as a means of assembling chondrites
Might cavitation be a way to accrete chondrules and other condensates into chondrite parent bodies—or at least create overdensities sufficiently strong for self-gravity to take over (e.g. )? Cooling and condensation, and the compression resulting therefrom, are generic processes which should occur in some form whatever the nature of the heating event that melted chondrules (asteroid collisions, nebular shocks, stellar flares, lightning, ...). That cavitation can happen quickly, in the immediate aftermath of heating, on timescales possibly shorter than a day, helps explain the large volume filling fractions of chondrules in meteorites. Cavitation promises to collect melt droplets together as soon as the flash-heated vapor starts to cool and condense; there would be little intervening time for chondrules to mix and be overly diluted by other kinds of protoplanetary disk solids.
Cavitation of condensing silicate vapor is also consistent with the size distribution of chondrite constituents, ranging from the μm sizes of matrix grains (e.g. ) to the 0.1-1 mm sizes of chondrules (e.g. ). We have shown over a wide range of parameter space that particles having such sizes can be entrained by collapsing bubbles, whereas cm-sized and larger particles may fail to be dragged along and are stranded outside. The particle size dividing coupled and uncoupled particles is around 1 mm, which follows naturally from the characteristic temperatures (a few thousand deg K) and pressures (measured in mbar) of saturated silicate vapor, and the corresponding aerodynamic stopping times. Background radiative heating can slow bubble collapse speeds to below ∼10 m/s and prevent destruction of particles as they are agglomerated.
That bubbles can collapse unimpeded when they are not contaminated by a non-condensible gas like H_2 appears to favor chondrule formation by asteroid collisions, as opposed to a nebular scenario involving H_2. The case for a collisional origin has been made most convincingly for the CB/CH subclass of chondrules, whose elemental abundances point to condensation in a practically pure silicate/metallic vapor (e.g. ; ). Silicate chondrules and Fe/Ni metal nodules have exceptionally high volume filling fractions in CB/CH chondrites (>95%; e.g. ), underscoring the need for a rapid and efficient accretion mechanism, which cavitation offers. If there were any ambient nebular hydrogen at the time of the collision—and see <cit.> and <cit.> for radiogenic evidence that little or no H_2 remained at the time of CB chondrite formation—the H_2 would be evacuated to the periphery of an expanding cloud of silicates and metals ().
<cit.>, <cit.>, and <cit.> highlight how nebular H_2 confines the impact plume from an asteroid collision and forces its re-collapse (see also criticism of this scenario by , as reviewed in our section <ref>). Our emphasis here is different. We have imagined instead an H_2-free scenario, where collapse occurs locally, on small scales, within a plume that consists wholly of condensibles and is seeded with small thermal fluctuations. In an expanding vapor plume, collapse can only proceed on small scales, as shear from the background expansion tears apart material on large scales. In a plume of length scale ℒ whose boundary expands outward at velocity ℒ̇, velocity differences between points separated by R < ℒ are of order Δℒ̇∼ℒ̇× R/ℒ (free expansion). For ℒ̇ to be less than a bubble collapse speed of, say, 30 m/s, in a plume having ℒ∼ 10000 km and ℒ̇∼ 3 km/s (e.g. ), cavitation would be restricted to bubble sizes R ≲ 100 km.
In Figure <ref> we offer a summary bubble evolution that addresses the above constraints on chondrite formation—but also points to problems. The 1-km bubble starts optically thick, with solid particles 10 μm in size comprising 1/3 of the total mass, and the remaining 2/3 in saturated silicate vapor. The bubble maintains a temperature of 2500 K, the temperature marking the onset of condensation in the expanding impact-plume models of <cit.>, and favored by the metal-rich condensation sequences of <cit.> for CB chondrites. The bubble collapses within about 10 minutes, shorter than the background plume evolution timescale measured in hours <cit.>. Gas velocities within the bubble peak at ∼20 m/s, fast enough to defeat the ∼0.3 m/s (10^4 km/ℒ) [ℒ̇/(3 km/s)] shear from the impact plume stretching the bubble apart. Embedded solid particles that are 1 mm in size or smaller follow the bubble as it shrinks. When particles collide with one another, they do so at speeds < 20 m/s (and probably considerably less than that depending on the particle size distribution), slow enough to avoid shattering.
We identify a couple of problems with this scenario:
* For the bubble to shrink, the gas within it should have a lower time-averaged pressure than the background gas, which means the bubble gas should be colder (on average) than the background gas. At the same time, for the bubble to leak radiation to the background and cool, the emitting particles in the bubble should be hotter than the background radiation. Satisfying both conditions (which the model in Fig. <ref> does by fiat, by setting the free parameter T_ rad,bkg < T) would seem to require the background to be optically thin, or at least thinner than the bubble. This would seem difficult to achieve for a bubble deeply embedded in a dusty impact plume. Perhaps the radiation-condensation instability is restricted to the exposed edges of the plume, or occurs at times when the plume is more transparent—either early on when the vapor is hot and largely uncondensed, or late, when the plume is thinned out by expansion.
* The mass of the bubble featured in Fig. <ref>, if compactified to solid density, would produce a planetesimal 9 m in radius. This is orders of magnitude too small compared to the sizes of ordinary (H, L, LL) and enstatite (EH) chondrite parent asteroids, estimated to be on the order of 100 km (e.g. ). The parent bodies of these classes of chondrite need to be this big to reproduce the most strongly thermally metamorphosed meteorites (types 6–7), from heat generated by ^26Al at depth. Thus unless cavitation operated on scales vastly larger than we have imagined (perhaps possible with nebular shocks?), it cannot be the sole assembly mechanism for these chondrite classes; the small bodies created by cavitation would have to accrete en masse to form larger asteroids by another means—e.g. pairwise accretion, or self-gravity, or a nebular gas-assisted mechanism such as the streaming instability (e.g. ). Note that this problem may not afflict CB/CH chondrites, and other types of carbonaceous chondrite, insofar as these classes are not thermally metamorphosed and therefore do not impose constraints on parent body size. Perhaps CB/CH chondrite parents are systematically much smaller than ordinary and enstatite chondrite parents.
In the context of an impact plume, more multi-scale simulations like that of <cit.>, but including radiation (and dispensing with H_2, or including it with more realistic velocities), will be helpful. An alternative, semi-analytic approach would be to couple the equations governing the background plume () with our bubble evolution equations.
In addition to pursuing radiative-hydrodynamic simulations and more formal stability analyses, we might also draw inspiration from fragmentation processes in other kinds of phase-changing, exploding or expanding media: supernovae; disk or stellar winds; and the expanding universe.
We thank Sarah Stewart for a seminar that started this line of thinking, and Rixin Li and Siyi Xu for inspiring discussions. Erik Asphaug, Rick Binzel, Phil Carter, Jeffrey Fung, Emmanuel Jacquet, Anders Johansen, Myriam Telus, Andrew Youdin, and Shangjia Zhang provided extensive and thoughtful feedback on a draft manuscript that led to substantive changes. This work was supported by Berkeley's Esper Larsen, Jr. fund, and a Simons Investigator grant.
numpy <cit.>,
scipy <cit.>,
matplotlib <cit.>
|
http://arxiv.org/abs/2409.02452v1 | 20240904051915 | Deformation probes for light nuclei in their collisions at relativistic energies | [
"Hai-Cheng Wang",
"Song-Jie Li",
"Lu-Meng Liu",
"Jun Xu",
"Zhong-Zhou Ren"
] | nucl-th | [
"nucl-th",
"hep-ex",
"nucl-ex"
] |
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
Physics Department and Center for Particle Physics and Field Theory, Fudan University, Shanghai 200438, China
[Correspond to ]junxu@tongji.edu.cn
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
[Correspond to ]zren@tongji.edu.cn
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
§ ABSTRACT
We have investigated the performance of anisotropic flows ⟨ v_n^2 ⟩, transverse momentum fluctuations ⟨δ p_T^2 ⟩, and their correlations ⟨ v_n^2 δ p_T ⟩ in central collisions at relativistic energies as probes of deformation parameters β_n of colliding nuclei, if these nuclei are light nuclei with large β_n and different configurations of α clusters. The effects from higher-order β_n terms are illustrated by derived relations based on the overlap of two nuclei with uniform density distributions and by dynamic simulations of collisions of heavy nuclei whose density distributions are of a deformed Woods-Saxon (WS) form. While the linear relations between β^2_n, ⟨ v_n^2 ⟩, and ⟨δ p_T^2 ⟩ and that between β^3_n and ⟨ v_n^2 δ p_T ⟩ can be violated for extremely large β_n, they are mostly valid for realistic values of β_n, as long as the density distribution of colliding nuclei can be described by a deformed WS form. However, these linear relations are generally not valid with more realistic density distributions of light nuclei with α clusters, and the amount of deviation depends on the detailed α-cluster configurations. Care must be taken when one tries to extract the deformation of light nuclei, and specific probes for α-cluster structures in these nuclei are very much needed.
Deformation probes for light nuclei in their collisions at relativistic energies
Zhong-Zhou Ren
September 9, 2024
================================================================================
§ INTRODUCTION
Understanding the density distribution in finite nuclei, especially their shapes, is a fundamental goal of nuclear physics. Comparing with traditional low-energy experiments, it has been realized recently that relativistic heavy-ion collisions can serve as an alternative way of achieving this goal (see, e.g., Ref. <cit.>). Compared to collisions of spherical nuclei, collisions of deformed nuclei provide more possible geometries and configurations, which may lead to different areas and shapes of the overlap region, and thus enhance the anisotropic collective flow and affect the transverse momentum spectrum <cit.>. The situation becomes more clear in central collisions, where various probes have been proposed to extract the deformation of colliding nuclei, e.g., a linear correlation is observed between ⟨ϵ_n^2 ⟩ and β_n^2 <cit.>, where ϵ_n represents the nth-order anisotropy coefficient of the overlap region and β_n is the nth-order deformation parameter of the colliding nuclei. Since the initial anisotropy in coordinate space is eventually transformed into the final anisotropy in momentum space, the square of the final nth-order anisotropic flow ⟨ v_n^2 ⟩ is expected to be linearly correlated with β_n^2 as well. There are also other probes such as the transverse momentum fluctuation ⟨δ p_T^2 ⟩ originated from the fluctuation in the initial overlap area characterized by ⟨δ d_⊥^2 ⟩ <cit.>, as well as the correlation between the anisotropic flow and the transverse momentum fluctuation <cit.>. These probes have been used to successfully extract the deformation parameters of ^96Ru <cit.>, ^96Zr <cit.>, ^197Au <cit.>, and ^238U <cit.>. It was proposed that light nuclei such as ^20Ne could have a huge deformation which can be measured through their collisions <cit.>. The recent ^16O+^16O collisions under analysis by the STAR Collaboration also provide a good opportunity to investigate the structure of light nuclei.
So far the linear relations between β_n^2 and ⟨ϵ_n^2 ⟩ (⟨ v_n^2 ⟩) or ⟨δ d_⊥^2 ⟩ (⟨δ p_T^2 ⟩) <cit.> generally work well in studies that assume the density distribution of colliding nuclei to be of a deformed WS form with small β_n. On one hand, whether the linear relation remains valid for large values of β_n^2, such as those for light nuclei, needs further investigation. On the other hand, it has been shown that light nuclei are generally formed by different configurations of α clusters <cit.>, and their density distribution can no longer be described by a deformed Woods-Saxon form. The typical 3-α and 4-α cluster structures in ^12C and ^16O, respectively, have been under intense investigation for a long time <cit.>; α clusters may have linear-chain and triangle configurations in ^12C, and linear-chain, tetrahedron, square, and Y-shape configurations in ^16O <cit.>. ^20Ne is largely deformed as proposed in Ref. <cit.>, and it also has a special internal structure with five α clusters <cit.>. Another candidate among light nuclei is ^10Be, which has a lifetime long enough to be used in heavy-ion experiments and could be composed of two α clusters and two valence neutrons <cit.>. These different configurations of light nuclei may affect anisotropic flows in their collisions <cit.>. It is thus of interest to extract the deformation or the α-cluster structure of these nuclei with proper probes in their collisions at relativistic energies.
The purpose of the present paper is to investigate whether the probes used to extract the deformation of colliding nuclei work for largely deformed nuclei and for light nuclei with different α-cluster configurations, and, if not, how large the error is when extracting β_n from these probes for nuclei whose density distribution cannot be described by a deformed WS form. Section <ref> presents how the density distributions of ^12C, ^16O, ^20Ne, and ^10Be are obtained from the microscopic cluster model with the Brink wave function <cit.>, and briefly reviews the structure of a multiphase transport (AMPT) model <cit.> used to simulate the collision dynamics of these nuclei at relativistic energies. Section <ref> first illustrates the performance of the probes for the deformation of colliding nuclei by using ^96Zr+^96Zr collisions, with the density distribution of ^96Zr parameterized by a deformed WS form, and then investigates the difference in the probes obtained using density distributions of a deformed WS form and those using more realistic α-cluster configurations for light nuclei. We conclude and give an outlook in Sec. <ref>.
§ THEORETICAL FRAMEWORK
In this section, we will first present the framework of calculating the density distributions of ^12C, ^16O, ^20Ne, and ^10Be with α-cluster structures, and then briefly review the AMPT model used for simulating the collisions of these light nuclei at relativistic energies. The deformation probes for colliding nuclei in their central collisions at relativistic energies will be discussed, and numerical relations related to these probes with simple assumptions will be derived.
§.§ A microscopic cluster model
In order to obtain the density distribution of light nuclei, we adopt the following Hamiltonian
Ĥ = ∑^A_i=1 E_i + ∑_i<j V^NN(r_ij) + ∑_i<j V^Cou(r_ij) + ∑_i<j V^ls(r_ij).
The summation in the above Hamiltonian is over the total nucleon number A. The first term represents the kinetic energy in the center-of-mass (c.m.) frame, the second term is Volkov No.2 force <cit.> representing the effective nucleon-nucleon interaction, the third term is the Coulomb interaction, and the fourth term is the spin-orbit interaction, with r⃗_ij being the relative coordinates between nucleon i and nucleon j. The form of the effective nucleon-nucleon interaction can be expressed as
V^NN(r_ij)
= ( V_1 e^-α_1 r^2_ij - V_2 e^-α_2 r^2_ij )
( W - M P̂_σP̂_τ + B P̂_σ - H P̂_τ ),
where P̂_σ and P̂_τ are the spin and isospin exchange operators, respectively, and V_1 = -60.650 MeV, V_2 = 61.140 MeV, α_1 = 0.980 fm^-2, and α_2 = 0.309 fm^-2 are used in all calculations. For the calculation of ^12C, ^16O, and ^10Be, we use W = 0.4, M = 0.6, and B = H = 0.125, determined from the phase-shift data of α-nucleon and α-α scatterings as well as the binding energy of the deuteron <cit.>, and they are also used in Ref. <cit.>. For the calculation of ^20Ne, we use W = 0.38, M = 0.62, and B = H = 0, as used in Ref. <cit.>. The G3RS force is
used for the spin-orbit interaction with the form
V^ls(r_ij) = u_ls(e^-α_3 r^2_ij - e^-α_4 r^2_ij) L⃗·S⃗P̂_31,
with u_ls=1600 MeV, α_3=5 fm^-2, and α_4=2.778 fm^-2 as in Ref. <cit.>, and P̂_31 being the operator which projects the two-nucleon system to the (S=1, T=1) state.
For ^12C, ^16O, and ^20Ne, their ground states can be described with the Bloch-Brink wave function, namely, the creation operator C^†_α of α clusters acting on the vacuum state | vac⟩, i.e.,
| Φ^Brink⟩ = (C^†_α)^n
| vac⟩,
with n=A/4 being the number of α clusters, and
C^†_α = ∫ d^3 r_1 ⋯ d^3 r_4
×ϕ(r⃗_1 - R⃗) a^†_σ_1, τ_1(r⃗_1)
⋯ϕ(r⃗_4 - R⃗) a^†_σ_4, τ_4(r⃗_4).
In the above, the spatial part of a single particle has a Gaussian form
ϕ(r⃗ - R⃗) ∝exp[- (r⃗ - R⃗)^2/2 b^2],
where the Gaussian width b is fixed to be 1.46 fm for all nucleons <cit.>, and a^†_σ, τ is the creation operator for nucleon with spin σ and isospin τ.
For ^10Be, we follow the framework in Ref. <cit.>, where the ground state is described by the Tohsaki-Horiuchi-Schuck-Röpke (THSR) wave function, i.e.,
| Φ^THSR_^10Be⟩ = (C^†_α)^2 (c^†_n)^2 | vac⟩,
where C^†_α and c^†_n are the creation operators of α clusters and valence neutrons, respectively. The creation operator C^†_α of α clusters in the THSR framework is reformulated as
C^†_α = ∫ d R⃗ 𝒢_α(R⃗)
∫ d^3 r_1 ⋯ d^3 r_4
×ϕ(r⃗_1 - R⃗) a^†_σ_1, τ_1(r⃗_1)
⋯ϕ(r⃗_4 - R⃗) a^†_σ_4, τ_4(r⃗_4),
according to the α-condensate picture <cit.>. Here a Gaussian container 𝒢_α(R⃗) is used to confine the motion of α clusters, i.e.,
𝒢_α(R⃗) = exp[ -(R_x^2 + R_y^2)/β_α,xy^2 - R_z^2/β_α,z^2 ],
where β_α,xy and β_α,z are parameters to be optimized by the variational principle in the calculation.
The creation operator c^†_n of valence neutrons is formulated as
c^†_n =
∫ d R⃗ 𝒢_n(R⃗)
∫ d^3 rϕ(r⃗ - R⃗) a^†_σ, τ(r⃗).
In the above, 𝒢_n(R⃗) is the Gaussian container for valence neutrons, which is modulated by a phase factor <cit.> to reproduce the negative parity of the π-orbit, i.e.,
𝒢_n(R⃗) = exp[ -(R_x^2 + R_y^2)/β_n,xy^2 - R_z^2/β_n,z^2 ] e^i m ϕ_R⃗.
Here m is the magnetic quantum number, which is set to be +1 and -1 for the two valence neutrons to achieve an overall zero angular momentum, and ϕ_R⃗ is the azimuthal angle of the neutron position R⃗ in the spherical coordinate system.
The above wave functions should then be further antisymmetrized, and they are calculated in the c.m. frame of the nuclei. To restore the rotational symmetry, we further project the wave functions through
| Ψ^J_M⟩ = 𝒫̂^J_M,K | Φ⟩,
where the corresponding angular momentum projection operator is expressed as <cit.>
𝒫̂^J_M,K = (2J+1)/(16π^2) ∫^2π_0 dϕ ∫^π_0 dθ sin(θ) ∫^4π_0 dγ D^J*_M,K(ϕ, θ, γ) R̂(ϕ, θ, γ),
with D^J_M,K(ϕ, θ, γ) being the Wigner rotation matrix, R̂ being the rotation operator, and ϕ, θ, and γ being the three Euler angles. J and M are respectively the quantum number of the total angular momentum and that in the third direction after projection, and K is the one before projection. Since the ground states of ^12C, ^16O, ^20Ne, and ^10Be are all 0^+ states, both J and M are set to be 0. The wave function | Ψ^J_M⟩ after the angular momentum projection is used to calculate the energy of the ground state, and the variational principle is used in order to find the minimum energy with the optimized distance among nucleons for specific configurations as shown in Fig. <ref>. For ^12C, we consider typical triangle and linear-chain configurations, as shown in Fig. <ref> (a) and Fig. <ref> (b). For ^16O, we consider typical tetrahedron and linear-chain configurations, as shown in Fig. <ref> (c) and Fig. <ref> (d). The distance parameter d in the configurations of ^12C and ^16O is varied in order to get the minimum energy. For ^20Ne, we take the structure of a tetrahedron configuration for four α clusters and another one under the bottom of the tetrahedron <cit.>, as shown in Fig. <ref> (e), and the distance parameters d and d_5 are varied. For ^10Be, as shown in Fig. <ref> (f), values of parameters for the Gaussian container are all varied to achieve the ground-state energy.
With the optimized distance for a specific configuration obtained, we use the wave function before the angular momentum projection to calculate the density distribution
ρ(a⃗)
= ⟨Φ | ∑_i=1^A δ(r⃗_i - a⃗) | Φ⟩/⟨Φ | Φ⟩
= ∑_i=1^A ρ_i(a⃗),
where the density for the ith nucleon at position a⃗ can be calculated from
ρ_i(a⃗)
= ⟨Φ | δ(r⃗_i - a⃗) | Φ⟩/⟨Φ | Φ⟩
= 1/(2 π)^3∫ d^3 k e^-i k⃗·a⃗⟨Φ | e^i k⃗·r⃗_i | Φ⟩/⟨Φ | Φ⟩.
The initial nucleons in relativistic heavy-ion collisions are sampled according to the above density distributions, and the later dynamics is modeled by the string-melting version of the AMPT model, to be discussed in the next subsection. It is noteworthy that we first sample the nucleons within each α cluster, and then use the corresponding distance parameters to construct the specific configuration, so in this way the α-cluster structure is preserved in the sampling.
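A minimal sketch of this cluster-preserving sampling is given below for a tetrahedral ^16O configuration. It assumes the Gaussian single-nucleon wave packet of width b = 1.46 fm introduced above (so that the single-nucleon density has a per-axis standard deviation b/√2) and treats the tetrahedron edge length d as an input; other configurations (triangle, linear chain, the 5-α arrangement of ^20Ne) differ only in the list of cluster centers:

import numpy as np

B = 1.46  # fm, Gaussian width of the single-nucleon wave packet

def tetrahedron_centers(d):
    """Centers of four alpha clusters on a regular tetrahedron with edge length d."""
    v = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
    return d / np.sqrt(8.0) * v          # all pairwise center distances equal d

def sample_O16(d, rng=np.random.default_rng()):
    """Sample 16 nucleon positions for a tetrahedral 4-alpha configuration of ^16O."""
    clusters = []
    for center in tetrahedron_centers(d):
        # |phi|^2 of exp[-(r-R)^2/(2 b^2)] is a Gaussian with per-axis std b/sqrt(2)
        clusters.append(rng.normal(loc=center, scale=B / np.sqrt(2.0), size=(4, 3)))
    r = np.vstack(clusters)
    return r - r.mean(axis=0)            # recenter on the nucleus center of mass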
§.§ A multiphase transport model
In the AMPT model <cit.>, the initial particle production in relativistic heavy-ion collisions is modelled by a heavy ion jet interaction generator (HIJING) model <cit.>, where the Lund string fragmentation function
f(z) ∝ z^-1 (1-z)^a exp(-b m_⊥^2/z)
is used to describe the momentum spectrum of the produced particles, with z being the light-cone momentum fraction of the produced hadron of transverse mass m_⊥ with respect to that of the fragmenting string, and a and b being two paramters. In the string-melting version, these particles are converted to their valence quarks and antiquarks at the same spatial coordinates. Partons do not undergo scatterings until they have propagated for a given formation time. The later dynamics of these partons is described by Zhang's parton cascade (ZPC) model <cit.>, where two-body elastic scatterings between partons are simulated using the following differential cross section
dσ/dt ≈ 9πα_s^2/[2(t-μ^2)^2],
with t being the standard Mandelstam variable for four-momentum transfer, α_s being the strong coupling constant, and μ being the screening mass in the partonic matter. After the kinetic freeze-out of these partons, quarks and antiquarks are converted to hadrons via a spatial coalescence model. The later dynamics of the hadronic phase is described by a relativistic transport (ART) model <cit.> with various hadronic elastic and inelastic scattering and decay channels.
In the present study, we set the values of the parameters to be a=0.5 and b=0.9 GeV^-2 in the Lund string fragmentation function [Eq. (<ref>)], and α_s=0.33 and μ=3.2 fm^-1 in the parton scattering cross section [Eq. (<ref>)]. These parameters have been shown to reproduce the particle multiplicity and anisotropic flows in Au+Au collisions at √(s_NN)=200 GeV <cit.> and Pb+Pb collisions at √(s_NN)=2.76 TeV <cit.> reasonably well.
§.§ Probes for deformation of colliding nuclei
In this subsection, we give a brief discussion of the probes for the deformation of colliding nuclei in relativistic heavy-ion collisions. Let us first assume that the nucleus density distribution can be approximately described by an axially symmetric deformed WS form, i.e.,
ρ(r,θ) = ρ_0/{1+exp[(r-R(θ))/a]}.
In the above, ρ_0 is the normalization constant, a is the diffuseness parameter, and
R(θ)=R_0[1+∑_nβ_n Y_n,0(θ)]
is the deformed radius, with R_0 being the average radius, β_n being the deformation parameters, and Y_n,0 being the spherical harmonics.
For a given density distribution with axial symmetry, the deformation parameter β_n^⋆ can be calculated from
β_n^⋆ = 4π Q_n/(3 A R_rms^n),
where A is the total nucleon number, R_rms is the root-mean-square (rms) radius, and
Q_n = ∫ρ(r) r^n Y_n,0(θ) d^3 r
is the intrinsic multipole moment. An axial symmetric density distribution with a β_n in Eq. (<ref>) generally leads to a different β_n^⋆ <cit.>. The previous studies, e.g., Refs. <cit.>, mostly extract β_n in Eq. (<ref>) rather than β_n^⋆ in Eq. (<ref>).
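For a sampled set of nucleon positions, the two equations above reduce to sums over nucleons. The sketch below computes β_n^⋆ in this discrete form, assuming the positions are given in the body-fixed frame with the symmetry axis along z; in practice the estimate would be averaged over many sampled configurations:

import numpy as np
from scipy.special import sph_harm

def beta_n_star(positions, n):
    """beta_n^* = 4 pi Q_n / (3 A R_rms^n) from a list of nucleon positions (A, 3)."""
    x, y, z = np.asarray(positions).T
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / r, -1.0, 1.0))     # polar angle
    # scipy convention: sph_harm(m, l, azimuth, polar); m = 0, so the azimuth is irrelevant
    Y_n0 = np.real(sph_harm(0, n, 0.0, theta))
    A = len(r)
    Q_n = np.sum(r**n * Y_n0)                        # discrete estimate of the multipole moment
    R_rms = np.sqrt(np.mean(r**2))
    return 4.0 * np.pi * Q_n / (3.0 * A * R_rms**n)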
In the present study, we focus on the anisotropic flows ⟨ v_n^2 ⟩, transverse momentum fluctuations ⟨δ p_T^2 ⟩, and their correlations ⟨ v_n^2 δ p_T ⟩ as deformation probes in relativistic heavy-ion collisions, where ⟨...⟩ represents the event average. The nth-order anisotropic flow v_n originates from the nth-order anisotropy coefficient ϵ_n of the overlap region with respect to the event plane Φ_n. For tip-tip relativistic heavy-ion collisions at zero impact parameter, i.e., with symmetric axis head-on, ϵ_n can be formally expressed as
ϵ_n e^inΦ_n=-∫ r^n sin^n(θ) e^inϕρ(r,θ)d^3r/∫ r^n sin^n(θ) ρ(r,θ)d^3r.
For other collision configurations, the orientations of the colliding nuclei can be generated by incorporating the Wigner rotation matrix (see Appendix <ref>). ϵ_n for an arbitrary collision configuration can also be expressed as
ϵ_n e^inΦ_n=-∫ r^n_⊥ e^inϕρ_⊥ (r_⊥,ϕ) r_⊥ dr_⊥ dϕ/∫ r^n_⊥ρ_⊥(r_⊥,ϕ)r_⊥ dr_⊥ dϕ,
where ρ_⊥(r_⊥, ϕ) = ∫ρ dz is the transverse nucleon density, with ρ being the nucleus density distribution at the given orientation. The transverse momentum fluctuation ⟨δ p_T^2 ⟩ originates from the fluctuation ⟨δ d_⊥^2⟩ of the overlap's inverse area d_⊥ = 1/√(x^2 y^2) <cit.>, where (...) represents the average value of a quantity in one event. For tip-tip relativistic heavy-ion collisions at zero impact parameter, x^2 and y^2 can be formally expressed as
x^2 = ∫ r^2 sin^2(θ) cos^2(ϕ) ρ(r,θ)d^3r/∫ρ(r,θ)d^3r,
y^2 = ∫ r^2 sin^2(θ) sin^2(ϕ) ρ(r,θ)d^3r/∫ρ(r,θ)d^3r,
respectively. For other collision configurations, again, the orientations of the colliding nuclei can be generated by incorporating the Wigner rotation matrix (see Appendix <ref>). The correlation between the anisotropic flow and the transverse momentum fluctuation ⟨ v_n^2 δ p_T ⟩ thus originates from that between the anisotropic coefficient and the overlap's inverse area ⟨ϵ_n^2 δ d_⊥⟩.
Following the procedure in Refs. <cit.>, we consider a uniform density distribution with a sharp edge, i.e., a=0 in Eq. (<ref>). Here we slightly go beyond the derivation in Refs. <cit.> by considering more higher-order β_n terms in the expansion of the numerator in Eqs. (<ref>), (<ref>), and (<ref>). With only nonzero β_2 and β_3 in Eq. (<ref>) as considered in the present study, we can get the following approximate relations
⟨ϵ_2^2 ⟩ = 0.477β_2^2+0.172β_2^3+0.161β_2β_3^2+O(β_n^4),
⟨ϵ_3^2 ⟩ = 0.539β_3^2+0.452β_2^2β_3+O(β_n^4),
⟨δ d_⊥^2⟩ = 0.25/R_0^4×(7.954β_2^2-4.301β_2^3+5.352β_2β_3^2)
+O(β_n^4),
⟨ϵ_2^2 δ d_⊥⟩ = 0.05/R_0^2×(-8.602β_2^3-3.875β_2^4-8.682β_2^2β_3^2)
+O(β_n^5),
⟨ϵ_3^2 δ d_⊥⟩ = 0.05/R_0^2×(-7.849β_2β_3^2-0.440β_2^2β_3^2-3.813β_3^4)
+O(β_n^5).
If ⟨ v_n^2 ⟩, ⟨δ p_T^2 ⟩, and ⟨ v_n^2 δ p_T ⟩ are linearly correlated with ⟨ϵ_n^2 ⟩, ⟨δ d_⊥^2⟩, and ⟨ϵ_n^2 δ d_⊥⟩, respectively, similar relations between ⟨ v_n^2 ⟩, ⟨δ p_T^2 ⟩, and ⟨ v_n^2 δ p_T ⟩ and β_n are expected to be valid. For the detailed derivation to get Eqs. (<ref>)-(<ref>), we refer the reader to Appendix <ref>. In the lowest order, these relations are similar to those in Refs. <cit.>, while higher-order β_n terms as well as cross terms appear when we consider large deformation for light nuclei. These relations are useful for the illustration purpose, while one should keep in mind that there are a few approximations and limitations of these relations. First, we neglected event-by-event fluctuations and used uniform density distributions with sharp surfaces. Here the event-by-event fluctuations are caused by finite particle numbers and stochastic dynamics. Second, the derivation is only valid for not too large β_n but may fail for β_n ∼ 1.
§ RESULTS AND DISCUSSIONS
The purpose of the present study is to investigate whether the deformation probes work for light nuclei with both large β_n and special internal structures. In this section, we will first evaluate the performance of the deformation probes for nuclei with large β_2 or β_3, and then discuss how these deformation probes work for light nuclei with different α-cluster configurations.
§.§ Validity of probes at large deformation
The performance of the deformation probes is evaluated with ^96Zr+^96Zr collisions at √(s_NN)=200 GeV, and the density distribution of ^96Zr is parameterized as Eqs. (<ref>) and (<ref>) with only nonzero β_2 and β_3. We fix R_0=5.02 fm and a=0.52 fm as in Ref. <cit.> while vary β_2 from 0 to 1.5 and β_3 from 0 to 0.9 to get different deformations of the colliding nuclei, and the dynamics of their collisions is described by the AMPT model. We select events at 0-5% centralities according to charged-particle multiplicities, and investigate the relations between ⟨ϵ_n^2 ⟩, ⟨ v_n^2 ⟩, ⟨δ d_⊥^2 ⟩, ⟨δ p_T^2 ⟩, and ⟨ v_n^2 δ p_T ⟩ and the deformation parameters β_n in the large ranges of β_2 and β_3. The nth-order anisotropic coefficient and the fluctuation of the overlap's inverse area are calculated from the coordinates of partons at t=0 in AMPT according to
ϵ_n = √([∑_i r_⊥,i^n cos(nϕ_i)]^2+[∑_i r_⊥,i^n sin(nϕ_i)]^2)/∑_i r_⊥,i^n,
δ d_⊥^2 = (d_⊥ - ⟨ d_⊥⟩)^2,
respectively, where r_⊥,i=√(x_i^2+y_i^2) and ϕ_i=arctan(y_i/x_i) are respectively the polar coordinate and polar angle of the ith particle in the transverse plane, and d_⊥ = 1/√(x^2 y^2) is the overlap's inverse area with (...) representing the average over all particles in one event. δ d_⊥ is linearly correlated with the deviation of the mean transverse momentum away from its event-averaged value δ p_T = p_T- ⟨p_T⟩ <cit.>. The anisotropic flows, the transverse-momentum fluctuation, and their correlation are calculated from the phase-space information of particles at the final stage in AMPT according to
⟨ v_n^2 ⟩ = ⟨cos [n(φ_i-φ_j)] ⟩_i,j,
⟨δ p_T^2 ⟩ = ⟨ (p_T,i - ⟨p_T⟩) (p_T,j - ⟨p_T⟩) ⟩_i,j,
⟨ v_n^2 δ p_T ⟩ = ⟨cos[n(φ_i-φ_j)] (p_T,k - ⟨p_T⟩)⟩_i,j,k,
respectively. Here ⟨...⟩_i,j,... represents the average over all possible combinations of i,j,... for all events, and p_T,i=√(p_x,i^2+p_y,i^2) and φ_i=arctan(p_y,i/p_x,i) are, respectively, the momentum and its polar angle of the ith particle in the transverse plane. Particles at midpseudorapidities (|η|<2) and 0.2<p_T<3 GeV are selected for the calculation, with a pseudorapidity gap of |Δη|>0.5 used in calculating ⟨...⟩_i,j,... to remove the non-flow effect.
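A schematic implementation of these estimators for a single event is sketched below. It evaluates the pair averages with broadcasting and the |Δη|>0.5 gap; the event average, the determination of ⟨p_T⟩ in a first pass over all events, and the full three-particle version of ⟨ v_n^2 δ p_T ⟩ (which excludes duplicate-index combinations) are handled outside, so the covariance estimate indicated in the final comment is only approximate:

import numpy as np

def event_correlators(phi, pt, eta, mean_pt, n=2, eta_gap=0.5):
    """Per-event pair correlators entering <v_n^2> and <delta pT^2>.

    phi, pt, eta : arrays for particles passing |eta| < 2 and 0.2 < pT < 3 GeV.
    mean_pt      : the event-averaged mean transverse momentum <<pT>>.
    """
    dphi = phi[:, None] - phi[None, :]
    gap = np.abs(eta[:, None] - eta[None, :]) > eta_gap   # also removes i = j pairs
    cn2 = np.mean(np.cos(n * dphi)[gap])                  # pair average of cos(n dphi)
    dpt = pt - mean_pt
    dpt2 = np.mean((dpt[:, None] * dpt[None, :])[gap])    # pair average of dpT_i dpT_j
    return cn2, dpt2, np.mean(dpt)

# Over many events: <v_n^2> = mean(cn2), <delta pT^2> = mean(dpt2), and
# <v_n^2 delta pT> can be approximated by the covariance mean(cn2 * event mean dpt).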
Figure <ref> displays the relations between β^2_n, ⟨ϵ_n^2 ⟩, and ⟨ v_n^2 ⟩ in central ^96Zr+^96Zr collisions from AMPT. As shown in the first column, a large β_n^2 leads to a large ⟨ϵ_n^2 ⟩ for both n=2 and 3, and the linear relation holds until β_2^2 ≳ 1 and β_3^2 ≳ 0.4. The slower increasing trend at large values of β_2^2 is inconsistent with the positive β_2^3 term in Eq. (<ref>), likely due to the fact that the derivation is not valid at β_2 ≳ 1. On the other hand, the non-zero value of ⟨ϵ_2(3)^2 ⟩ at β_2(3)^2=0 is due to the effect of finite particle numbers which leads to event-by-event fluctuations. The traditional linear relation between ⟨ϵ_n^2 ⟩ and ⟨ v_n^2 ⟩ is shown in the second column, which is seen to be slightly violated at ⟨ϵ_2^2 ⟩≳ 0.15 and ⟨ϵ_3^2 ⟩≳ 0.08. Consequently, the linear relation between β^2_n and ⟨ v_n^2 ⟩ is violated for β_2^2 ≳ 1 and β_3^2 ≳ 0.4. For a finite β_3(2), ⟨ϵ_2(3)^2 ⟩ is larger for a given β_2(3)^2, consistent with the positive β_2β_3^2 term in Eq. (<ref>) and the positive β_2^2β_3 term in Eq. (<ref>), while this generally does not lead to larger anisotropic flows ⟨ v_2(3)^2 ⟩. Considering the largest β_2 and β_3 for light nuclei obtained in the present study (see Table <ref>), the linear relation between β^2_n and ⟨ v_n^2 ⟩ is mostly valid, as long as the density distribution can be approximately described by a deformed WS form. We have also compared the relations between ⟨ϵ_n^2 ⟩, ⟨ v_n^2 ⟩, and β^⋆_n^2, with β_n^⋆ calculated from Eq. (<ref>). It is interesting to see that the linear relations of ⟨ϵ_2(3)^2 ⟩∼β^⋆_2(3)^2 and ⟨ v_2(3)^2 ⟩∼β^⋆_2(3)^2 are preserved at extremely large β^⋆_2(3), although they have smaller slopes.
Considering that some nuclei (e.g., ^20Ne) have both large β_2 and β_3, we discuss more explicitly the cross relation between β^2_3(2) and ⟨ϵ_2(3)^2 ⟩ as well as ⟨ v_2(3)^2 ⟩ in Fig. <ref>. It is seen that ⟨ϵ_2(3)^2 ⟩ increases also approximately linearly with increasing β_3(2), and for a finite β_2(3) the whole curves move to the upper side. This is again consistent with the positive β_2β_3^2 term in Eq. (<ref>) and the positive β_2^2β_3 term in Eq. (<ref>). For the final anisotropic flows, the relation between ⟨ v_2^2 ⟩ and β_3^2 is similar to that between ⟨ϵ_2^2 ⟩ and β_3^2, while the relation between ⟨ v_3^2 ⟩ and β_2^2 shows a different behavior. Note that we use the same scale for the y axis in Fig. <ref> as in Fig. <ref>, so it is seen that the correlation strength shown in Fig. <ref> is much weaker than that shown in Fig. <ref>.
Figure <ref> illustrates the relation between β^2_n, ⟨δ d_⊥^2 ⟩, and ⟨δ p_T^2 ⟩ as well as between β^3_n and ⟨ v_n^2 δ p_T ⟩ in central ^96Zr+^96Zr collisions from AMPT calculations. A large β^2_2 leads to large fluctuations of the initial overlap area characterized by ⟨δ d_⊥^2 ⟩, and the linear relation approximately holds until β_2^2 ≳ 1 and becomes saturated, as shown in Fig. <ref> (a). The saturation behavior is consistent with the negative coefficient of the β_2^3 term in Eq. (<ref>). The linear relation between ⟨δ d_⊥^2 ⟩ and ⟨δ p_T^2 ⟩ is also approximately valid from the AMPT dynamics, as shown in Fig. <ref> (b). Consequently, the linear relation between β^2_2 and ⟨δ p_T^2 ⟩ is approximately valid until β_2^2 ≳ 1 and then the increasing trend becomes slower, as shown in Fig. <ref> (c). We also illustrate the relation between β^3_2 and ⟨ v_2^2 δ p_T ⟩ in Fig. <ref> (d). The negative value of ⟨ v_2^2 δ p_T ⟩ is due to the fact that a larger (smaller) overlap area generally leads to a smaller (larger) p_T but a larger (smaller) v_2^2, and this is especially so for a larger β_2 (see typical cases for central tip-tip and body-body collisions). The linear relation between β^3_2 and ⟨ v_2^2 δ p_T ⟩ is valid until β_2^3 ≳ 0.5 and then the slope becomes much smaller for larger β_2. This behavior is inconsistent with the negative β_2^4 term in Eq. (<ref>), probably due to the event-by-event fluctuation or that the derivation is not applicable at too large β_2. While there are some linear relations between β^2_3 and ⟨δ d_⊥^2 ⟩ as well as ⟨δ p_T^2 ⟩ and between β^3_3 and ⟨ v_3^2 δ p_T ⟩, the correlation strength is rather weak compared to those for β_2, as can also be expected from Eq. (<ref>) where there is no β_3^3 term. These relations with a fixed finite β_3 or β_2 are also compared, and the qualitative behaviors are consistent with the above discussions. Again, the linear relations between β^2_n, ⟨δ d_⊥^2 ⟩, and ⟨δ p_T^2 ⟩ as well as those between β^3_n and ⟨ v_n^2 δ p_T ⟩ are mostly valid for reasonable values of β_2 and β_3 in realistic nuclei whose density distributions can be approximately described by a deformed WS form. The linear relations of ⟨δ d_⊥^2 ⟩∼β^⋆_2(3)^2, ⟨δ p_T^2 ⟩∼β^⋆_2(3)^2, and ⟨ v_2(3)^2 δ p_T⟩∼β^⋆_2(3)^3 are better preserved, and they have mostly smaller slopes.
§.§ Validity of probes with α clusters
Now we investigate whether the deformation probes shown in the previous subsection work well for collisions of light nuclei with α clusters, and we perform AMPT simulations for central collisions of ^12C+^12C, ^16O+^16O, ^20Ne+^20Ne, and ^10Be+^10Be, also at √(s_NN)=200 GeV. In order to carry out a fair comparison, we construct an axially symmetric deformed WS density distribution as in Eq. (<ref>) for each configuration of light nuclei, with β_2 and β_3 in the deformed WS distribution adjusted to reproduce β_2^⋆ and β_3^⋆ calculated using Eq. (<ref>) from the realistic density distribution with α clusters. The values of the radius parameter R_0 and the diffuseness parameter a in the deformed WS distribution are determined such that the values of ⟨ r^2 ⟩ and ⟨ r^4 ⟩ are the same as those from the realistic density distribution with α clusters, where the lth-order moment of r is defined as ⟨ r^l ⟩ = ∫ρ(r⃗) r^l d^3 r/∫ρ(r⃗) d^3 r.
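One possible way to carry out this matching numerically is sketched below: the moments and β_n^⋆ of the deformed WS density are evaluated on a (r, cosθ) grid, and a generic root finder adjusts (β_2, β_3, R_0, a) until β_2^⋆, β_3^⋆, ⟨ r^2 ⟩, and ⟨ r^4 ⟩ reproduce the values from the realistic α-cluster density. The grid sizes and the initial guess are arbitrary choices, and the actual fitting procedure used in this work may differ in detail:

import numpy as np
from scipy.optimize import root

def ws_moments(beta2, beta3, R0, a, nr=300, nth=200):
    """<r^2>, <r^4>, beta_2^*, beta_3^* of an axially symmetric deformed WS density."""
    r = np.linspace(1e-3, R0 + 10.0 * a, nr)
    ct = np.linspace(-1.0, 1.0, nth)                        # cos(theta)
    R, CT = np.meshgrid(r, ct, indexing="ij")
    Y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * CT**2 - 1.0)
    Y30 = np.sqrt(7.0 / (16.0 * np.pi)) * (5.0 * CT**3 - 3.0 * CT)
    surf = R0 * (1.0 + beta2 * Y20 + beta3 * Y30)           # deformed radius R(theta)
    w = R**2 / (1.0 + np.exp((R - surf) / a))               # rho(r,theta) * r^2 weight
    norm = np.trapz(np.trapz(w, ct, axis=1), r)
    mom = lambda f: np.trapz(np.trapz(w * f, ct, axis=1), r) / norm
    r2, r4 = mom(R**2), mom(R**4)
    b2s = 4.0 * np.pi * mom(R**2 * Y20) / (3.0 * r2)        # Q_n per nucleon over R_rms^n
    b3s = 4.0 * np.pi * mom(R**3 * Y30) / (3.0 * r2**1.5)
    return r2, r4, b2s, b3s

def match_ws(b2_target, b3_target, r2_target, r4_target):
    """Find (beta2, beta3, R0, a) reproducing the target beta_n^* and radial moments."""
    def resid(p):
        r2, r4, b2s, b3s = ws_moments(*p)
        return [b2s - b2_target, b3s - b3_target, r2 - r2_target, r4 - r4_target]
    guess = [b2_target, b3_target, np.sqrt(5.0 * r2_target / 3.0), 0.5]
    return root(resid, guess).x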
Figure <ref> compares the realistic density distributions for ^12C, ^16O, ^20Ne, and ^10Be with different α-cluster configurations as shown in Fig. <ref> and the deformed WS distributions with parameters determined by the realistic density distributions as described above. The values of β_2^⋆ and β_3^⋆ from the realistic density distributions as well as those of β_2 and β_3 used in the parameterized deformed WS distribution for different cases are given in Table <ref>. Here the density distributions of the deformed WS form are axially symmetric with respect to the z axis. The density distributions with realistic α-cluster structures are also axially symmetric with respect to the z axis, except for the triangle-shaped ^12C and tetrahedron-shaped ^16O, for which the density distributions are plotted in the 3-α plane for better visualization. Obviously, the deformed WS distributions are quite different from the realistic ones with α clusters in most cases, and one may expect them to lead to different values of the probes even if they have the same β_2^⋆ and β_3^⋆.
Figure <ref> displays how the deformation probes using deformed WS density distributions deviate from those using realistic density distributions with different α-cluster configurations for ^12C, ^16O, and ^10Be. The left three columns show relation between probes and β_2(3) in the deformed WS distribution, and the right three columns show relations between probes and β^⋆_2(3) calculated from Eq. (<ref>). For both parameterized WS distributions or realistic density distributions as shown in Fig. <ref>, these nuclei have either finite β_2 (β^⋆_2) or finite β_3 (β^⋆_3). For the WS density distributions, we calculate results from spherical and deformed density distributions with the same R_0 and a, as shown by black squares in Fig. <ref>. Basically, ⟨ v_2(3)^2 ⟩, ⟨δ p_T^2 ⟩, and ⟨ v_2(3)^2δ p_T ⟩ follow qualitatively linear relations with β_2(3)^2 or β_2(3)^3, consistent with results from ^96Zr as shown in Sec. <ref>, indicated by solid lines in Fig. <ref>. However, results from most realistic density distributions with α clusters deviate from these linear relations. The chain structure of ^12C and ^16O as well as the realistic density distribution for ^10Be lead to larger ⟨ v_2^2 ⟩, larger ⟨δ p_T^2 ⟩, and smaller ⟨ v_2^2δ p_T ⟩, and this can be understood from the larger asymmetries in z and r_xy directions for the realistic density distributions compared to those for a deformed WS form as shown in Fig. <ref>. The triangle structure of ^12C leads to a smaller ⟨ v_3^2 ⟩, a larger ⟨δ p_T^2 ⟩, and a smaller ⟨ v_3^2δ p_T ⟩, compared to a deformed WS distribution with the same finite β_3. The smaller ⟨ϵ_3^2 ⟩ from the triangle structure of ^12C can be intuitively understood, since the three α clusters actually form a plane, which generally does not lead to a large ϵ_3 of the overlap region for arbitrary collision configurations, while there are more collision configurations for a deformed WS distribution with a finite β_3 to have a large ϵ_3. The much larger ⟨δ p_T^2 ⟩ originating from the larger fluctuation of the inverse overlap's area ⟨δ d_⊥^2 ⟩ for the triangle structure of ^12C compared to the corresponding deformed WS distribution can also be understood in the similar way. The difference is smaller for the tetrahedron configuration of ^16O, since the density distributions from a deformed WS form and a realistic calculation are not quite different, as can be seen from Fig. <ref>. The relations between deformation probes and β_2(3)^⋆ show qualitatively similar behaviors except with smaller slopes. Here we don't see a robust deformation probe among those investigated in the present study that is only sensitive to the deformation parameters of colliding nuclei but insensitive to the existence of α clusters.
Figure <ref> shows similar results as Fig. <ref> but for ^20Ne which has both finite β_2 and β_3. The left two columns show relation between probes and β_2(3) in the deformed WS distribution, and the right two columns show relations between probes and β^⋆_2(3) calculated from Eq. (<ref>). Using the density distribution of the WS form, qualitatively linear relations between ⟨ v_2(3)^2 ⟩, ⟨δ p_T^2 ⟩, and β_2(3)^2 for a fixed β_3(2) are observed, as shown by black squares and blue triangles as well as solid lines, and the behaviors are qualitatively consistent with those from ^96Zr+^96Zr collisions as shown in Sec. <ref>. Again, the relations between deformation probes and β_2(3)^⋆ show qualitatively similar behaviors, except with smaller slopes. Overall, for the 5-α cluster structure of ^20Ne, we found that the values of the resulting deformation probes are not too different from those obtained from the parameterized WS distribution within statistical error.
§ CONCLUSIONS
We have investigated how the probes of the deformation parameters β_n of colliding nuclei in their collisions at relativistic energies, such as anisotropic flows ⟨ v_n^2 ⟩, transverse momentum fluctuations ⟨δ p_T^2 ⟩, and their correlations ⟨ v_n^2 δ p_T ⟩, work for light nuclei with large β_n and different α-cluster configurations. By assuming a uniform density distribution with a sharp surface, we have derived the relations between the above probes and β_n, where higher-order relations and cross relations are observed. The performance of these probes is also investigated with AMPT simulations of collisions of heavy nuclei by assuming that they have large β_n. While the linear relations between β^2_n, ⟨ v_n^2 ⟩, and ⟨δ p_T^2 ⟩ and that between β^3_n and ⟨ v_n^2 δ p_T ⟩ can be violated for extremely large β_n, they are mostly valid for realistic values of β_n, as long as the density distribution of the colliding nuclei can be described by a deformed WS form. However, using more realistic density distributions with α clusters for light nuclei, these probes can deviate from those using a deformed WS form with the same deformation parameters, and the amount of deviation can be different for different α-cluster configurations. For the tetrahedron structure of ^16O and the 5-α cluster structure of ^20Ne, it is difficult to distinguish the difference in the deformation probes from realistic density distributions and WS density distributions. Therefore, specific probes for α-cluster structures in these nuclei are highly demanded in future analyses. For the other cases, no robust deformation probe among those investigated in the present study, which is sensitive only to the β_n of colliding nuclei but insensitive to the existence of α clusters, is observed, so care must be taken when one tries to extract the deformation of light nuclei.
This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB34030000, the National Natural Science Foundation of China under Grant Nos. 12375125, 12035011, and 11975167, and the Fundamental Research Funds for the Central Universities.
§ RELATIONS BETWEEN INITIAL GEOMETRY AND DEFORMATION
Here we try to derive the relations between the initial geometry in relativistic heavy-ion collisions at zero impact parameter and the deformation parameters β_n of colliding nuclei. The initial geometry is characterized by the nth-order anisotropy coefficient ϵ_n, the fluctuation of the overlap's inverse area δ d_⊥^2, and their correlation ϵ_n^2 δ d_⊥. We start from a density distribution with a general deformed WS form
ρ(r,θ, ϕ) = ρ_0/1+exp[(r-R(θ,ϕ))/a]
with R(θ,ϕ)=R_0[1+∑_l,mβ_l α_l,m Y_l,m(θ,ϕ)], but we take the limit of a → 0 so that the density distribution is uniform with a sharp surface. We basically follow the procedure in Refs. <cit.> but keep more higher-order β_l terms.
ϵ_n with respect to the event plane Φ_n in central tip-tip relativistic heavy-ion collisions can be formally expressed as
ϵ_n e^inΦ_n=-∫ r^n sin^n(θ) e^inϕρ(r,θ,ϕ)d^3r/∫ r^n sin^n(θ) ρ(r,θ,ϕ)d^3r.
By using Y_n^n = √((2n+1)!!/4π(2n)!!)sin^n(θ) e^inϕ and uniform density distribution within r ∈ [0, R_0(1+∑_l,mβ_l α_l,m Y_l,m)], the above equation can be further written as
ϵ_n e^inΦ_n = -√(4π (2n)!!/(2n+1)!!)∫ (1+∑_l,mβ_lα_l,mY_l^m)^n+3Y_n^n sin(θ) dθ dϕ/∫ (1+∑_l,mβ_lα_l,mY_l^m)^n+3sin^n+1(θ) dθ dϕ
≈ -√(4π (2n)!!/(2n+1)!!)∫ [1+(n+3) ∑_l,mβ_lα_l,mY_l^m+(n+3)(n+2)/2(∑_l,mβ_lα_l,mY_l^m)^2]Y_n^n sin(θ) dθ dϕ/∫ [1+(n+3)∑_l,mβ_lα_l,mY_l^m] sin^n+1(θ) dθ dϕ.
It is seen that we consider additional β_l^2 terms in the expansion of the numerator, compared to Ref. <cit.>. By defining
A_n=(n+3)Γ(1+1/2+n/2)/πΓ(1+n/2)√((2n)!!/(2n+1)!!)
and
B_n=(n+3)(n+2)Γ(1+1/2+n/2)/2πΓ(1+n/2)√((2n)!!/(2n+1)!!)
and by rotating the two nuclei with the same Euler angles Ω=(α,β,γ), the above equation can be further written as
ϵ_n e^inΦ_n = -A_nβ _n∑_m α_n,mD_n,m^n-B_n ∫∑_l_1,m_1β_l_1α_l_1,m_1Y_l_1^m_1∑_l_2,m_2β_l_2α_l_2,m_2Y_l_2^m_2∑_m'D_n,m'^n Y_n^m'sin(θ) dθ dϕ
= -A_nβ _n∑_m α_n,mD_n,m^n
- B_n ∑_l_1,m_1,l_2,m_2,m'(β_l_1β_l_2α_l_1,m_1α_l_2,m_2 D_n,m'^n√((2l_1+1)(2l_2+1)(2n+1)/4π)([ l_1 l_2 n; 0 0 0 ]) ([ l_1 l_2 n; m_1 m_2 m' ])),
with D_n,m^n(α,β,γ) being the Wigner rotation matrix, and ([ j_1 j_2 j; m_1 m_2 m ]) representing the Wigner 3j-Symbol. Taking the square of the above expression leads to
ϵ_n^2 = A_n^2β _n^2(∑_m α_n,mD_n,m^n)(∑_m'α_n,m'D_n,m'^n)
+ 2A_nB_nβ _n ∑_l_1,m_1,l_2,m_2,m',mβ_l_1β_l_2α_n,mD_n,m^nα_l_1,m_1α_l_2,m_2 D_n,m'^n√((2l_1+1)(2l_2+1)(2n+1)/4π)([ l_1 l_2 n; 0 0 0 ]) ([ l_1 l_2 n; m_1 m_2 m' ])
+ O(β_n^4).
In the present study, we use finite β_2 and β_3 but set β_n=0 for n>3. The event average here is equal to the rotational average ∫ (...) dΩ /(8π^2)=∫ (...) dα sin(β) dβ dγ/(8π^2), and we can then obtain numerically the relations given in Eqs. (<ref>) and (<ref>) in the axially symmetric case. For independent rotation of the two nuclei, as shown in Ref. <cit.>, qualitatively similar relations apply except that the coefficients are a factor of 2 smaller.
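The leading-order scaling can also be checked numerically. The short Python sketch below (illustrative only, not part of the analysis code; the sharp-surface sampling routine, the number of sampled nucleons, and the number of orientations are placeholders) estimates ⟨ϵ_2^2 ⟩ for a uniform, sharp-surface nucleus with an axial β_2 deformation by averaging over random common orientations at zero impact parameter; apart from a finite-sampling baseline, the result grows approximately linearly with β_2^2.

import numpy as np
from scipy.special import sph_harm
from scipy.spatial.transform import Rotation

def sample_deformed_nucleus(n_pts, beta2, R0=5.0, rng=None):
    # Uniformly sample points inside r < R0 * (1 + beta2 * Y_2^0(theta)).
    if rng is None:
        rng = np.random.default_rng(0)
    pts = []
    r_max = R0 * (1.0 + abs(beta2))              # bounding radius for rejection sampling
    while len(pts) < n_pts:
        p = rng.uniform(-r_max, r_max, size=(4 * n_pts, 3))
        r = np.linalg.norm(p, axis=1)
        theta = np.arccos(np.clip(p[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
        Y20 = np.real(sph_harm(0, 2, 0.0, theta))  # scipy: 4th argument is the polar angle
        pts.extend(p[r < R0 * (1.0 + beta2 * Y20)])
    return np.array(pts[:n_pts])

def eps_n(xy, n):
    # n-th order eccentricity (magnitude) of a transverse point set.
    xy = xy - xy.mean(axis=0)                    # recenter on the center of mass
    r = np.hypot(xy[:, 0], xy[:, 1])
    phi = np.arctan2(xy[:, 1], xy[:, 0])
    return np.abs(np.mean(r**n * np.exp(1j * n * phi))) / np.mean(r**n)

rng = np.random.default_rng(1)
for beta2 in (0.1, 0.2, 0.3):
    vals = []
    for _ in range(200):                          # rotational average over random orientations
        pts = Rotation.random(random_state=rng).apply(
            sample_deformed_nucleus(2000, beta2, rng=rng))
        vals.append(eps_n(pts[:, :2], 2) ** 2)
    print(beta2, np.mean(vals))                   # grows roughly linearly with beta2^2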
Next, we calculate the fluctuation δ d_⊥^2 of the overlap's inverse area d_⊥ = 1/√(x^2 y^2). At zero impact parameter, x^2 in tip-tip collisions can be formally expressed as
x^2 = ∫ r^2 sin^2(θ) cos^2(ϕ) ρ(r,θ, ϕ)d^3r/∫ρ(r,θ, ϕ)d^3r
= 3R_0^2/20π∫ (1+∑_l,mβ_lα_l,mY_l^m)^5 [1/3 -2/3√(π/5)Y_2^0+√(2π/15)(Y_2^2+Y_2^-2)] sin(θ) dθ dϕ
≈ 3R_0^2/20π∫(1+5∑_l,mβ_lα_l,mY_l^m+10∑_l_1,m_1,l_2,m_2β_l_1β_l_2α_l_1,m_1α_l_2,m_2Y_l_1^m_1 Y_l_2^m_2) [1/3 -2/3√(π/5)Y_2^0+√(2π/15)(Y_2^2+Y_2^-2)] sin(θ) dθ dϕ
= R_0^2/5+3R_0^2/20π∫[ -10/3√(π/5)∑_l,mβ_lα_l,mY_l^mY_2^0+5√(2π/15)∑_l,mβ_lα_l,mY_l^m(Y_2^2+Y_2^-2)+10/3∑_l_1,m_1,l_2,m_2β_l_1β_l_2α_l_1,m_1α_l_2,m_2Y_l_1^m_1 Y_l_2^m_2.
. -20/3√(π/5)∑_l_1,m_1,l_2,m_2β_l_1β_l_2α_l_1,m_1α_l_2,m_2Y_l_1^m_1 Y_l_2^m_2Y_2^0 +10√(2π/15)∑_l_1,m_1,l_2,m_2β_l_1β_l_2α_l_1,m_1α_l_2,m_2Y_l_1^m_1 Y_l_2^m_2(Y_2^2+Y_2^-2) ]sin(θ) dθ dϕ.
In the above, the relation cos^2(ϕ) sin^2(θ)=1/3 -2/3√(π/5)Y_2^0+√(2π/15)(Y_2^2+Y_2^-2) is used, and we keep higher-order β_l terms compared to Ref. <cit.>. By rotating the two nuclei with the same Euler angles Ω and carrying out the integral of the spherical harmonics, the above equation can be further written as
x^2 = R_0^2/5+3R_0^2/20π[ -10/3√(π/5)∑_mβ_2α_2,mD_0,m^2+5√(2π/15)∑_mβ_2α_2,m(D_2,m^2+D_-2,m^2)+10/3∑_l,m,β_l^2α_l,m^2 .
. -20/3√(π/5)∑_l_1,m_1,l_2,m_2,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2D_0,m^2√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ]) .
.+10√(2π/15)∑_l_1,m_1,l_2,m_2,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2(D_2,m^2+D_-2,m^2)√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ]) ]
= R_0^2/5+R_0^2/√(20π)∑_mβ_2α_2,m[-D_0,m^2+√(3/2)(D_2,m^2+D_-2,m^2)]+R_0^2/2π∑_l,mβ_l^2α_l,m^2
+R_0^2/√(5π)∑_l_1,m_1,l_2,m_2,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])[-D_0,m^2+√(3/2)(D_2,m^2+D_-2,m^2)].
Similarly, we can formally express y^2 in central tip-tip collisions as
y^2 = ∫ r^2 sin^2(θ) sin^2(ϕ) ρ(r,θ,ϕ)d^3r/∫ρ(r,θ,ϕ)d^3r
≈ R_0^2/5+R_0^2/√(20π)∑_mβ_2α_2,m[-D_0,m^2-√(3/2)(D_2,m^2+D_-2,m^2)]+R_0^2/2π∑_l,mβ_l^2α_l,m^2
+R_0^2/√(5π)∑_l_1,m_1,l_2,m_2,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])[-D_0,m^2-√(3/2)(D_2,m^2+D_-2,m^2)],
for which we have used the relation sin^2(ϕ) sin^2(θ)=1/3 -2/3√(π/5)Y_2^0-√(2π/15)(Y_2^2+Y_2^-2) and keep higher-order β_l terms compared to Ref. <cit.>. Thus, up to the β_l^3 term, x^2 y^2 can be written as
x^2 y^2 = R_0^4/25-R_0^4/5√(5π)∑_mβ_2α_2,mD_0,m^2+R_0^4/20π∑_m_1,m_2β_2^2α_2,m_1α_2,m_2[D_0,m_1^2D_0,m_2^2-3/2 (D_2,m_1^2+D_-2,m_1^2)(D_2,m_2^2+D_-2,m_2^2)]
+R_0^4/5π∑_l,mβ_l^2α_l,m^2-R_0^4/√(20π^3)∑_l,m_1,m_2β_2β_l^2α_2,m_1α_l,m_2^2D_0,m_1^2
-2R_0^4/5√(5π)∑_l_1,m_1,l_2,m_2,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])D_0,m^2
+R_0^4/5π∑_l_1,m_1,l_2,m_2,m,m'β_2β_l_1β_l_2α_l_1,m_1α_l_2,m_2α_2,m'√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])
×[D_0,m^2D_0,m'^2-3/2 (D_2,m^2+D_-2,m^2)(D_2,m'^2+D_-2,m'^2)]+O(β_l^4).
By definition d_⊥ = 1/√(x^2 y^2) can then be expressed as
d_⊥ ≈ 5/R_0^2{1+√(5/4π)∑_mβ_2α_2,mD_0,m^2-5/8π∑_m_1,m_2β_2^2α_2,m_1α_2,m_2[D_0,m_1^2D_0,m_2^2-3/2 (D_2,m_1^2+D_-2,m_1^2)(D_2,m_2^2+D_-2,m_2^2)].
.-5/2π∑_l,mβ_l^2α_l,m^2+√(5/π)∑_l_1,m_1,l_2,m_2,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])D_0,m^2 .
.+√(125/16π^3)∑_l,m_1,m_2β_2β_l^2α_2,m_1α_l,m_2^2D_0,m_1^2-5/2π∑_l_1,m_1,l_2,m_2,m,m'β_2β_l_1β_l_2α_l_1,m_1α_l_2,m_2α_2,m'.
.×√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])[D_0,m^2D_0,m'^2 -3/2 (D_2,m^2+D_-2,m^2)(D_2,m'^2+D_-2,m'^2)] }+O(β_l^4).
Let us define the deviation of a quantity A from its rotational average value as δ A = A - ⟨ A ⟩, where ⟨ A ⟩=∫ A dΩ/(8π^2). In this way, δ d_⊥, (δ d_⊥)^2, and ϵ_n^2 δ d_⊥ can be formally expressed as
δ d_⊥ = 5/R_0^2{√(5/4π)∑_mβ_2α_2,mδ D_0,m^2-5/8π∑_m_1,m_2β_2^2α_2,m_1α_2,m_2δ[D_0,m_1^2D_0,m_2^2-3/2 (D_2,m_1^2+D_-2,m_1^2)(D_2,m_2^2+D_-2,m_2^2)].
.+√(5/π)∑_l_1,m_1,l_2,m_2,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])δ D_0,m^2.
.+√(125/16π^3)∑_l,m_1,m_2β_2β_l^2α_2,m_1α_l,m_2^2δ D_0,m_1^2-5/2π∑_l_1,m_1,l_2,m_2,m,m'β_2β_l_1β_l_2α_l_1,m_1α_l_2,m_2α_2,m'.
.×√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])δ[D_0,m^2D_0,m'^2-3/2 (D_2,m^2+D_-2,m^2)(D_2,m'^2+D_-2,m'^2)]}+O(β_l^4),
(δ d_⊥)^2 = 25/R_0^4{5/4π∑_m_1,m_2β_2^2α_2,m_1α_2,m_2δ D_0,m_1^2δ D_0,m_2^2-√(125/64π^3)∑_m,m_1,m_2β_2^3α_2,mα_2,m_1α_2,m_2.
.×δ D_0,m^2δ[D_0,m_1^2D_0,m_2^2-3/2 (D_2,m_1^2+D_-2,m_1^2)(D_2,m_2^2+D_-2,m_2^2)].
.+5/πβ_2∑_l_1,m_1,l_2,m_2,m,m'β_l_1β_l_2α_l_1,m_1α_l_2,m_2α_2,m'√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])δ D_0,m^2δ D_0,m'^2}+O(β_l^4),
and
ϵ_n^2 δ d_⊥ = 5/R_0^2A_n^2{√(5/4π)β_2β_n^2∑_m_1,m_2,m_3α_2,m_1α_n,m_2α_n,m_3D_n,m_2^n D_n,m_3^nδ D_0,m_1^2.
.-5/8πβ_2^2β_n^2∑_m_1,m_2,m_3,m_4α_2,m_1α_2,m_2α_n,m_3α_n,m_4δ[D_0,m_1^2D_0,m_2^2-3/2 (D_2,m_1^2+D_-2,m_1^2)(D_2,m_2^2+D_-2,m_2^2)]D_n,m_3^n D_n,m_4^n.
.+√(5/π)β_n^2∑_l_1,m_1,l_2,m_2,m_3,m_4,mβ_l_1β_l_2α_l_1,m_1α_l_2,m_2α_n,m_3α_n,m_4.
.×√(5(2l_1+1)(2l_2+1)/4π)([ l_1 l_2 2; 0 0 0 ])([ l_1 l_2 2; m_1 m_2 m ])D_n,m_3^n D_n,m_4^nδ D_0,m^2.
.+√(5/4π)(n+2) β_2β_n∑_l_1,m_1,l_2,m_2,m_3,m',mβ_l_1β_l_2α_n,mα_l_1,m_1α_l_2,m_2α_2,m_3.
.×√((2l_1+1)(2l_2+1)(2n+1)/4π)([ l_1 l_2 n; 0 0 0 ]) ([ l_1 l_2 n; m_1 m_2 m' ])D_n,m^nD_n,m'^nδ D_0,m_3^2}+O(β_l^5),
respectively.
Again, using finite β_2 and β_3 but setting β_n=0 for n>3 and taking the rotational average by integrating over the Euler angles, we can obtain numerically the relations given in Eqs. (<ref>), (<ref>), and (<ref>) in the axially symmetric case. For independent rotation of the two nuclei, as shown in Ref. <cit.>, qualitatively similar relations apply except that the coefficients are a factor of 2 smaller.
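The rotational averages entering ⟨δ d_⊥^2 ⟩ and ⟨ϵ_n^2 δ d_⊥⟩ can be estimated in the same Monte Carlo spirit. The sketch below (again illustrative only, reusing sample_deformed_nucleus and eps_n from the previous snippet) evaluates both quantities for a single β_2 value, with d_⊥ = 1/√(⟨ x^2 ⟩⟨ y^2 ⟩) computed per orientation.

import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(2)
e2sq, dperp = [], []
for _ in range(500):                              # random common orientations of the two nuclei
    pts = Rotation.random(random_state=rng).apply(
        sample_deformed_nucleus(2000, beta2=0.3, rng=rng))
    x2, y2 = np.mean(pts[:, 0] ** 2), np.mean(pts[:, 1] ** 2)
    dperp.append(1.0 / np.sqrt(x2 * y2))          # inverse transverse area for this orientation
    e2sq.append(eps_n(pts[:, :2], 2) ** 2)
e2sq, dperp = np.array(e2sq), np.array(dperp)
dd = dperp - dperp.mean()                         # delta d_perp
print("<delta d_perp^2>      :", np.mean(dd ** 2))
print("<eps_2^2 delta d_perp>:", np.mean(e2sq * dd))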
|
http://arxiv.org/abs/2409.03223v1 | 20240905034211 | Why mamba is effective? Exploit Linear Transformer-Mamba Network for Multi-Modality Image Fusion | [
"Chenguang Zhu",
"Shan Gao",
"Huafeng Chen",
"Guangqian Guo",
"Chaowei Wang",
"Yaoxing Wang",
"Chen Shu Lei",
"Quanjiang Fan"
] | cs.CV | [
"cs.CV"
] |
Why mamba is effective? Exploit Linear Transformer-Mamba Network for Multi-Modality Image Fusion
=====================
§ ABSTRACT
Multi-modality image fusion aims to integrate the merits of images from different sources and render high-quality fusion images. However, existing feature extraction and fusion methods are either constrained by inherent local reduction bias and static parameters during inference (CNN) or limited by quadratic computational complexity (Transformers), and cannot effectively extract and fuse features. To solve this problem, we propose a dual-branch image fusion network called Tmamba. It consists of a linear Transformer and Mamba, which provides global modeling capability while maintaining linear complexity. Due to the difference between the Transformer and Mamba structures, the features extracted by the two branches carry channel and position information, respectively. A T-M interaction structure is designed between the two branches, using global learnable parameters and convolutional layers to transfer position and channel information, respectively. We further propose cross-modal interaction at the attention level to obtain cross-modal attention. Experiments show that our Tmamba achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion. Code with checkpoints will be available after the peer-review process.
§ INTRODUCTION
Image fusion can integrate important image information from different data sources to render fused images with rich information.
Traditional image fusion approaches typically employ hand-crafted feature extraction and fusion rules. With the advancement of deep learning, image fusion methods based on deep learning have attained remarkable results <cit.>. In recent years, the networks used for feature extraction and image fusion have mostly been built with CNNs and Transformers. CNN-based methods have difficulty in capturing global context due to their limited receptive field, which makes it challenging for them to obtain high-quality fused images <cit.>. The convolution kernels of CNNs have static parameters during the inference phase, which reduces their adaptability to different inputs <cit.>.
Transformer-based methods perform well in global modeling, but the quadratic complexity of self-attention leads to high computational overhead. CNNs and Transformers have been combined into dual-branch networks to achieve structural complementarity, but the problems of the low input adaptability of CNNs in the inference stage and the quadratic complexity of Transformers still exist, which limits their application in the field of image fusion.
Recently, Mamba <cit.> has provided more options. Compared with CNNs, Mamba has global modeling capability, and its selective scanning mechanism gives it high input adaptability. Compared with Transformers, Mamba has linear complexity. However, compared with a dual-branch network composed of a CNN and a Transformer, the feature extraction ability of Mamba alone is clearly insufficient. The richness of features has a great influence on the quality of image fusion. Therefore, how to customize a dual-branch network for Mamba that is suitable for multi-modality image fusion is a compelling research issue.
In this paper, we propose the Tmamba block and use it to build a fusion network. It is a dual-branch network tailored for Mamba based on the requirements of image fusion. We considered three main points when choosing the branch to pair with Mamba: First, Mamba is good at processing long sequence inputs, so the branch paired with it should maintain acceptable computational complexity under long sequence inputs. Second, the branch paired with Mamba needs to complement and be optimized together with Mamba. Third, the branch paired with Mamba must not undermine Mamba's advantages in image fusion. We choose the Restormer block <cit.> and the Vmamba block <cit.> to build the dual-branch Tmamba block. The Restormer block is a Transformer with linear complexity, which allows us to use pixel-level sequences while maintaining reasonable computational complexity. The design principles and network structures of the Restormer block and the Vmamba block are different, which enables them to extract features of different patterns, as shown in Figure 1 (a) and (b). Diverse features can provide richer information for fusion. In addition, different network structures also make the features carry different information. The Restormer block implicitly implements global attention through channel interactions, so the features it extracts carry channel attention information. Mamba's forgetting mechanism gives position information to the features it extracts <cit.>. Unlike traditional dual-branch networks, the two branches in our network are not completely independent. We added an interaction structure between the two branches to help them pass complementary information to each other. Both Transformer and Mamba can adjust their parameter matrices according to different inputs for targeted inference. This enables our method to fully extract and preserve information from different modalities. As shown in Figure 1 (c), entropy (EN) reflects the amount of information in the fused image, and mutual information (MI) reflects the similarity between the fused image and the input images. Our method can not only extract rich information but also preserve the features of the input images well.
Furthermore, considering that the original self-attention of a single modality may be highly restricted by the modality information, we designed a cross-modal interaction at the attention level to get cross-modal attention.
In summary, the contributions of our work are as follows:
* We innovatively built a Transformer-Mamba hybrid framework for multi-modality fusion tasks, and designed a hierarchical interaction strategy between Mamba and Transformer to optimize the features extracted by each other.
* We proposed a cross-modal interaction at the attention level to break the limitation of single-modality information on attention and obtain attention more favorable for image fusion.
* Our model is highly adaptable to image inputs of multiple modalities and is able to perform targeted inferences based on the input images. This enables us to obtain state-of-the-art results on multiple datasets across two tasks by training on merely one dataset.
§ RELATED WORK
§.§ DeepLearning-based Methods for Image Fusion
With the advancement of artificial intelligence technology, DL-based methods have become the mainstream in the field of image fusion. DL-based image fusion methods can be roughly divided into three categories: models based on generative methods <cit.>, Autoencoder-based models <cit.>, and models combined with downstream tasks <cit.>. (1) Among the image fusion methods based on generative approaches, GAN-based methods are more commonly used. GAN-based models integrate different visual information by establishing an adversarial game between the original image and the fusion result, generating image results with richer and more diverse content. Recently, <cit.> proposed generative image fusion networks based on diffusion models that leverage powerful generative priors to address challenges such as the training instability and lack of interpretability of GAN-based generative methods. (2) Autoencoder-based models extract image features using the encoder, then fuse the extracted image features, and finally output the final image through the decoder. (3) Many works combine multi-modal image fusion with downstream tasks: <cit.> makes the model obtain more semantic information by introducing a segmentation loss, <cit.> explores methods to combine image fusion with detection, and <cit.> achieves good results in both tasks through the joint optimization of a fusion network and a segmentation network.
§.§ State Space Model
SSM is a fundamental scientific model used in control theory. In recent years, efforts have been made to apply it to deep learning related tasks. LSSL <cit.> is the first to introduce SSM into the field of deep learning, demonstrating its significant advantages in handling long-sequence speech classification tasks. S4 <cit.> introduces low-rank correction adjustment based on LSSL and reduces the computational complexity of SSM. Subsequent works S5 <cit.> and H3 <cit.> improve SSM to make it better adapted to tasks in the field of deep learning. Recently, the emergence of Mamba <cit.> has once again triggered a craze for SSM in the field of deep learning. It can selectively extract input features and also integrates a hardware-aware algorithm. In the computer vision community, many models based on Mamba have also emerged. Vision Mamba <cit.> and Vmamba <cit.> provide spatial perception capabilities for Mamba through bidirectional sequence modeling and cross-scanning, respectively. Many Mamba-based models <cit.> have also emerged and achieved good results in multi-modality image fusion. They all choose to use Mamba instead of Transformers and CNNs, but do not try to combine them.
§ METHOD
§.§ Overview
Our network can be divided into five parts: a shallow feature extractor, Tmamba blocks for further feature encoding and feature interaction, a cross-modal interaction module for obtaining cross-modal attention, a Tmamba fusion module for fusing features of different patterns separately, and a decoder for rendering the final fused image. All Transformer structures in our network are built with Restormer blocks <cit.>, and all Mamba structures in our network are built with Vmamba blocks <cit.>. Our network is a generic image fusion network, and we will explain our work using infrared-visible image fusion as an example.
§.§ Shallow feature extractor
Here we define the input paired infrared and visible images as I∈ℝ^ H× W and V∈ℝ^H× W. The shallow feature extractor, Mamba and the Transformer blocks are represented by 𝒮(·), ℳ(·) and 𝒯(·) respectively.
We first use Transformer blocks to form a shallow feature extractor to extract shallow features {Φ_I^S, Φ_V^S} from infrared and visible inputs {I , V} :
Φ_I^S = 𝒮(I) ,Φ_V^S = 𝒮(V).
In order to take advantage of Mamba in processing long sequence features, the shallow feature extractor processes the input image into 128×128×64 features.
§.§ Tmamba Block
We build Tmamba blocks with two layers of Mamba blocks(Mamba branch) and two layers of Transformer blocks(Transformer branch). We add an interaction structure between the two branches so that the two branches can transfer information to each other while maintaining feature differences.
§.§.§ Transformer Branch.
The shallow features extracted by the shallow feature extractor are input into the Transformer branch:
Φ_I^trans,Φ_V^trans = 𝒯(Φ_I^S,Φ_V^S),
where Φ_{I,V}^trans represent features extracted from the first Transformer block.
Restormer block can implicitly model global features by self-attention across channels. Φ_{I,V}^trans are encoded as Q∈ℝ^ HW× C, K∈ℝ^C× HW, and V∈ℝ^HW× C. We interact Q and K in the channel dimension to get the channel attention matrix with dimensions C × C :
𝒜 = Softmax(K × Q / α),
where 𝒜∈ℝ^C× C, α is a scaling parameter.
The elements in this matrix reflect the attention relationship between the corresponding channels. After the interaction with V, this information is passed on to Φ_{I,V}^trans.
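The channel-wise ("transposed") attention described above can be sketched in PyTorch as follows. This is a simplified illustration rather than the released implementation: the 1×1 projection producing Q, K, and V and the single-head formulation are assumptions made for brevity.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Attention across channels: the C x C matrix encodes channel interactions,
    # so the cost stays linear in the number of pixels HW.
    def __init__(self, channels):
        super().__init__()
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.alpha = nn.Parameter(torch.ones(1))   # learnable scaling parameter
        self.proj = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        q, k, v = q.flatten(2), k.flatten(2), v.flatten(2)                   # (B, C, HW)
        attn = torch.softmax((k @ q.transpose(1, 2)) / self.alpha, dim=-1)   # (B, C, C)
        out = attn @ v                             # attention information passed to V
        return self.proj(out.view(b, c, h, w))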
§.§.§ Mamba Branch.
The input features of the Mamba branch are the same as those of the Transformer branch:
Φ_I^vm,Φ_V^vm = ℳ(Φ_I^S,Φ_V^S),
where Φ_{I,V}^vm represent the features extracted from the first Mamba block.
Mamba's forgetting mechanism ensures that the previous hidden state is always attenuated until the current token is reached, which makes Mamba sensitive to the order of the input sequence and gives it position awareness <cit.>. This allows each feature pixel in Φ_{I,V}^vm to sense the positions of other feature pixels.
§.§.§ T-M Interaction.
The information extracted by the two branches can complement each other. The channel attention information of the features extracted by the Transformer branch is exactly what the Mamba branch lacks, and the position information of the features extracted by the Mamba branch can also help the Transformer better extract and parse the input image without position encoding.
We designed different interaction structures based on the characteristics of information to help the two types of information transmit between the Transformer branch and the Mamba branch.
To be specific, we choose a method similar to positional encoding to pass positional information to the Transformer block. We align and add the features with position information to the features extracted by the Transformer branch. In the process of addition, we set a global learnable parameter ω, which controls the proportion of information transmitted from the Mamba branch to the Transformer branch without interfering with the position information encoded by Mamba:
Φ_I^T = 𝒯(ω·Φ_I^vm+(1-ω)·Φ_I^trans),
Φ_V^T = 𝒯(ω·Φ_V^vm+(1-ω)·Φ_V^trans) ,
where Φ_{V,I}^T are the final outputs of the Transformer branch.
For the features carrying channel attention, we use a 1×1 convolution kernel to mix them with the features extracted by the Mamba branch, and then use a 3×3 convolution kernel to aggregate local features in space and send the aggregated features into the Mamba block:
Φ_I^M = ℳ(𝒞onv(Φ_I^vm,Φ_I^T)),
Φ_V^M = ℳ(𝒞onv(Φ_V^vm,Φ_V^T)),
where Conv(·) stands for the convolution operation and Φ_{V,I}^M are the final outputs of the Mamba branch.
The interactive process allows the features of the two branches to be optimized while maintaining their differences, which enables the Tmamba block to extract rich and high-quality image features.
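One stage of this interaction can be sketched as follows (two such stages are stacked in a Tmamba block); the Restormer and Vmamba blocks are passed in as black boxes, and the layer sizes are assumptions made for illustration.

import torch
import torch.nn as nn

class TmambaStage(nn.Module):
    def __init__(self, channels, transformer_block, mamba_block):
        super().__init__()
        self.transformer = transformer_block          # e.g., a Restormer block
        self.mamba = mamba_block                      # e.g., a Vmamba block
        self.omega = nn.Parameter(torch.tensor(0.5))  # global learnable weight
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),          # mix the two branches
            nn.Conv2d(channels, channels, kernel_size=3, padding=1))   # aggregate local features

    def forward(self, phi_trans, phi_vm):
        # Pass Mamba's position information to the Transformer branch (blend weighted by omega).
        phi_t = self.transformer(self.omega * phi_vm + (1.0 - self.omega) * phi_trans)
        # Pass channel-attention information to the Mamba branch via 1x1 + 3x3 convolutions.
        phi_m = self.mamba(self.mix(torch.cat([phi_vm, phi_t], dim=1)))
        return phi_t, phi_m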
§.§ Cross-modality Interaction
After passing through the two branches of the Tmamba block, each modality (infrared and visible) has two different patterns of features due to the different network structures of the two branches. Before entering the fusion module, we integrate and process the same-pattern features of the different modalities. For the features extracted from the Mamba branches, we align and add the features element-wise to obtain the pre-fusion features:
Φ_P^M = Φ_I^M + Φ_V^M,
where Φ_P^M is the pre-fusion feature of the Mamba branches.
For the features extracted from the Transformer branches, we adopt a cross-modality interaction at the attention level, which we refer to as 𝒜ℱ.
The 𝒜ℱ block consists of two attention blocks and a weight calculation block 𝒲(·). Specifically, we first obtain the attention matrices of the infrared and visible modalities through channel interaction. In the process of calculating attention, the features extracted by the network are encoded as Q∈ℝ^HW× C, K∈ℝ^C× HW, and V∈ℝ^HW× C. We multiply the K and Q matrices to obtain the attention matrix with dimensions C × C:
𝒜_V = Softmax(K_V × Q_V / α),
𝒜_I = Softmax(K_I × Q_I / β) ,
where 𝒜_V,𝒜_I∈ℝ^C× C, α and β are scaling parameters.
Since there is interaction between channels in the calculation process of the Transformer branch’s attention, the attention matrix reflects the attention of each channel to other channels. Obviously, for different modalities of the same scene, the attention relationship between the various channels is different. We choose the DenseASPP to build 𝒲(·) to generate weights and get the cross-modal attention.
Specifically, we first use DenseASPP blocks to further encode the features of the two modalities, and then concatenate the encoded features along the channel dimension and send them into a fully-connected layer to obtain two weights:
ω_1 , ω_2 = 𝒲(Φ_V^T,Φ_I^T).
𝒜 = ω_1 ·𝒜_V + ω_2 ·𝒜_I ,
Finally, we apply the cross-modal attention to the V of the two modalities separately, and then add them at element wise to obtain the pre-fusion feature:
Φ_P^T = 𝒜· V_I + 𝒜· V_V
where Φ_P^T is the pre-fusion feature of the Transformer branches, and V_I and V_V ∈ℝ^HW× C are the encoded V matrices of the two modalities.
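A simplified PyTorch sketch of this attention-level interaction is given below. It is a stand-in for the 𝒜ℱ block: the DenseASPP-based weight network 𝒲(·) is replaced by a pooled fully-connected layer and the two weights are softmax-normalized, both of which are assumptions made purely for illustration.

import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.qkv_vis = nn.Conv2d(channels, channels * 3, 1, bias=False)
        self.qkv_ir = nn.Conv2d(channels, channels * 3, 1, bias=False)
        self.alpha = nn.Parameter(torch.ones(1))    # scaling for the visible modality
        self.beta = nn.Parameter(torch.ones(1))     # scaling for the infrared modality
        self.weight_net = nn.Sequential(            # stand-in for DenseASPP + fully-connected layer
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * channels, 2), nn.Softmax(dim=-1))

    @staticmethod
    def _attention(qkv, temp):
        q, k, v = qkv.chunk(3, dim=1)
        b, c, h, w = q.shape
        q, k, v = q.flatten(2), k.flatten(2), v.flatten(2)                    # (B, C, HW)
        attn = torch.softmax((k @ q.transpose(1, 2)) / temp, dim=-1)          # (B, C, C)
        return attn, v, (b, c, h, w)

    def forward(self, phi_vis, phi_ir):
        a_vis, v_vis, shape = self._attention(self.qkv_vis(phi_vis), self.alpha)
        a_ir, v_ir, _ = self._attention(self.qkv_ir(phi_ir), self.beta)
        weights = self.weight_net(torch.cat([phi_vis, phi_ir], dim=1))        # (B, 2)
        w1, w2 = weights[:, :1, None], weights[:, 1:, None]                   # broadcast over C x C
        a = w1 * a_vis + w2 * a_ir                                            # cross-modal attention
        fused = a @ v_ir + a @ v_vis                                          # pre-fusion feature
        b, c, h, w = shape
        return fused.view(b, c, h, w)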
§.§ Tmamba Fusion
The two patterns of features extracted by the Transformer and Mamba branches are fused separately in the fusion layer. Considering that the inductive bias of feature fusion should be similar to that of feature extraction, we still adopt the Tmamba block for fusion. For the fusion block of the Mamba features, we only select the output of the Mamba branch as the final output, and for the fusion block of the Transformer features, we only select the output of the Transformer branch as the final output:
Φ_M = ℱ_ℳ(Φ_P^M) ,Φ_T = ℱ_𝒯(Φ_P^T),
where ℱ_ℳ(·) and ℱ_𝒯(·)
are the fusion blocks for the Mamba features and the Transformer features, respectively.
§.§ Decoder
We keep the decoder structure consistent with the design of the shallow feature extractor, using the Restormer block as the basic unit of the decoder.
Our training process is divided into two stages. In the first stage, the model does not fuse images, but performs image restoration. In the second stage, we add the Cross-modality Interaction module and Tmamba fusion module to the training to obtain high quality fusion images:
Stage1: V = 𝒟(Φ_V^T,Φ_V^M) , I = 𝒟(Φ_I^T,Φ_I^M).
Stage2: F = 𝒟(Φ^T,Φ^M),
where 𝒟(·) indicates that the Decoder module, F is the final fusion image.
§.§ Loss Function
The loss of Stage1 is the image reconstruction loss, which is designed to guide the encoder and decoder to learn the basic feature extraction and image reconstruction methods. The specific loss function is as follows:
ℒ_1 = ℒ_ir + ℒ_vis,
where ℒ_ir and ℒ_vis are the reconstruction losses of the infrared and visible images, respectively. The reconstruction loss of the infrared image can be concretely written as:
ℒ_ir = ℒ^1_int(I,Î) + ℒ_SSIM(I,Î),
where ℒ^1_int(I,Î) = ‖ I-Î‖^2_2 and ℒ_SSIM(I,Î) = 1-SSIM(I,Î), with SSIM(·,·) being the structural similarity index <cit.>. The reconstruction loss of the visible image can be obtained by the same method.
The loss of Stage2 is used to guide the process of image fusion, which mainly consists of intensity loss and gradient loss:
ℒ_2 = ℒ^2_int + ℒ_grad
where ℒ^2_int = 1/HW‖ I_f - max(I_ir,I_vis)‖_1, ℒ_grad = 1/HW‖ |∇ I_f| - max(|∇ I_ir|,|∇ I_vis|)‖_1, and ∇ indicates the Sobel gradient operator.
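A minimal PyTorch sketch of this second-stage loss is given below (illustrative only; the gradient magnitude is approximated by |g_x|+|g_y|, the images are assumed single-channel, and the mean reduction of l1_loss supplies the 1/HW normalization).

import torch
import torch.nn.functional as F

def sobel_grad(img):                               # img: (B, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                        # Sobel kernel in the y direction
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return gx.abs() + gy.abs()                     # simple gradient-magnitude approximation

def stage2_loss(i_f, i_ir, i_vis):
    l_int = F.l1_loss(i_f, torch.maximum(i_ir, i_vis))
    l_grad = F.l1_loss(sobel_grad(i_f),
                       torch.maximum(sobel_grad(i_ir), sobel_grad(i_vis)))
    return l_int + l_grad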
§ EXPERIMENTS
§.§ Infrared and visible image fusion
§.§.§ Setup
IVF experiments use three popular benchmarks to verify our fusion model, i.e., TNO <cit.>, RoadScene <cit.>, and MSRS <cit.>. We train our model on the MSRS training set (1083 pairs) and test it on the MSRS test set (361 pairs), RoadScene (50 pairs), and TNO (25 pairs). Note that we did not retrain on the TNO and RoadScene datasets, but directly tested on the test sets of the three datasets using the model trained on MSRS. We use six metrics to quantitatively measure the fusion results: entropy (EN), standard deviation (SD), spatial frequency (SF), mutual information (MI), visual information fidelity (VIF), and Q^AB/F. Higher values indicate better fusion quality. The details of these metrics can be found in <cit.>.
Our experiments are carried out on a machine with one NVIDIA GeForce RTX 3090 GPU. The training samples are randomly cropped into 128×128 patches in the preprocessing stage. The number of epochs for training is set to 80, with 40 and 40 epochs in the first and second stages, respectively. The batch size is set to 4. We adopt the Adam optimizer with the initial learning rate set to 7.5 × 10^-5 and decreasing by 0.5 every 20 epochs.
§.§.§ Comparison with advanced methods.
In this section, we test Tmamba on the three test sets and compare the fusion results with state-of-the-art methods including DIDFuse <cit.>, U2Fusion <cit.>, TarDAL <cit.>, ReCoNet <cit.>, CDDFuse <cit.>, DDFM <cit.>, and EMMA <cit.>.
For the qualitative comparison, we show results in Figure 3 and Figure 4. Compared with other existing methods, our Tmamba has two significant advantages. First, the discriminative targets from infrared images can be well preserved. The people in Figure 3 and Figure 4 are all highlighted, and our method yields higher contrast and clearer outlines compared to other methods. Second, our method can obtain richer background information. As shown in Figure 4 (the wires and poles in the orange box), our method clearly restores the outlines of the wires and poles, while visual inspection shows that other methods do not bring out this background information.
For the quantitative comparison, a total of six metrics are employed to evaluate the above results, which are displayed in Table 1. Our method shows excellent performance on almost all metrics.
§.§ Ablation studies
Ablation experiments are conducted to verify the rationality of the different modules. We performed a total of five experiments to test our model, as shown in Table 2 and Table 3.
§.§.§ Transformer branch.
We remove the Mamba branch from the Tmamba block, leaving only the Transformer branch, while also removing the cross-modality interaction of the Transformer branch. The test results show that all the metrics decline.
§.§.§ Cross-modality interaction.
We add the cross-modality interaction in the Transformer branch. It frees the attention relationships of the channels from being restricted by single-modality information, which comprehensively improves the effect of image fusion.
§.§.§ Mamba branch.
We add the Mamba branch to form a dual-branch structure with the Transformer. The test results show that all the metrics improve. The dual-branch structure composed of the Transformer and Mamba can encode image features in different patterns and then fuse them separately, so that the acquired image information is richer and the quality of the fused image is greatly improved.
§.§.§ Interaction structure.
We add the hierarchical interaction structure between the Transformer branch and the Mamba branch to form a complete Tmamba block. With the hierarchical interaction structure, the information carried by the features is continuously exchanged between the two branches, further improving the fusion effect.
§.§.§ Why mamba is effective.
We replace the Mamba block with a residual convolution module. The SF metric of the fused image improves, but the overall quality decreases, as shown in Table 3. We believe that there are two main reasons. On the one hand, in long-sequence image fusion tasks, Mamba shows feature extraction and feature analysis capabilities that surpass those of CNNs; on the other hand, we use the Transformer and Mamba to build the network backbone, which makes our network input-aware and able to adjust parameters for different inputs, making it more suitable for multi-modality image fusion tasks than a Transformer-CNN hybrid network.
§.§ Medical image fusion
§.§.§ Setup.
We selected 136 pairs of medical images from the Harvard Medical website for MIF experiments, of which 21 pairs of MRI-CT images, 42 pairs of MRI-PET images, and 73 pairs of MRI-SPECT images are utilized as the test datasets.
§.§.§ Comparison with advanced methods.
In this group of experiments, we selected five networks, U2Fusion <cit.>, TarDAL <cit.>, ReCoNet <cit.>, CDDFuse <cit.>, and EMMA <cit.>, to conduct comparative tests. We made quantitative and qualitative comparisons across all networks. Note that none of the models were fine-tuned on the medical dataset.
For the qualitative comparison, we visualize the fusion results on the MRI-CT dataset in Figure 5. The results of the quantitative comparison are shown in Table 4. As shown, our method achieves leading performance on all datasets.
§ CONCLUSION
In this paper, we proposed a dual-branch Transformer-Mamba network for multi-modality image fusion. By combining the Restormer block and the Vmamba block, we extracted features containing different information. We let these features interact, fused them, and achieved better results. We further proposed a cross-modality interaction at the attention level to break the limitation of single-modality information on attention. Experiments show that our method achieves advanced performance on six datasets across two image fusion tasks.
|
http://arxiv.org/abs/2409.03369v1 | 20240905091526 | Fast Payload Calibration for Sensorless Contact Estimation Using Model Pre-training | [
"Shilin Shan",
"Quang-Cuong Pham"
] | cs.RO | [
"cs.RO"
] |
Fast Payload Calibration for Sensorless Contact Estimation Using Model Pre-training
====================================================================
§ ABSTRACT
Force and torque sensing is crucial in robotic manipulation across both collaborative and industrial settings. Traditional methods for dynamics identification enable the detection and control of external forces and torques without the need for costly sensors. However, these approaches show limitations in scenarios where robot dynamics, particularly the end-effector payload, are subject to changes. Moreover, existing calibration techniques face trade-offs between efficiency and accuracy due to concerns over joint space coverage. In this paper, we introduce a calibration scheme that leverages pre-trained Neural Network models to learn calibrated dynamics across a wide range of joint space in advance. This offline learning strategy significantly reduces the need for online data collection, whether for selection of the optimal model or identification of payload features, necessitating merely a 4-second trajectory for online calibration. This method is particularly effective in tasks that require frequent dynamics recalibration for precise contact estimation. We further demonstrate the efficacy of this approach through applications in sensorless joint and task compliance, accounting for payload variability.
Machine Learning for Robot Control, Industrial Robots, Physical Human-Robot Interaction
§ INTRODUCTION
Robot manipulators are widely deployed around the globe, making force/torque (F/T) sensing crucial for various tasks. Typically, this capability is facilitated by additional 6-axis F/T sensors or joint torque sensors. However, these sensors are expensive and may be undesirable in certain situations due to their weight and volume. Consequently, researchers have proposed sensorless contact estimation techniques that rely on dynamics identification, either model-based <cit.> or model-free <cit.>. These techniques estimate external contacts using proprioceptive signals, such as motor current and joint states.
However, many of these approaches overlook scenarios where robot dynamics are subject to change. To address this, various methods have been proposed to mitigate errors resulting from payload variations <cit.>. These solutions typically involve a trade-off between achieving high accuracy across a wide range of joint space, determined by the volume of calibration data, and the time required for calibration trajectories. However, both precision and efficiency are essential for many collaborative tasks, especially when the robot must quickly grasp and assemble components of varying weights while accurately detecting external contacts with human operators and the environment.
To address these challenges, we introduce a fast online calibration scheme enabled by pre-trained Neural Network (NN) models. Specifically, we present both Payload-specific and Payload-adaptive model architectures. Training sets for various payload types are collected through offline trajectories that encompass the entire joint space. These NN models are then trained through offline learning, enabling the acquisition of minimal online data via a short 4-second calibration trajectory specifically aimed at identifying key payload features. This advancement significantly expands the range of applications for sensorless contact estimation techniques in environments with frequent dynamics variations. We further integrate the proposed calibration scheme into two tasks: joint space compliance and task space compliance, to demonstrate its effectiveness in real-world applications. A snapshot of the task space compliance experiment is depicted in Fig. <ref>.
This paper is structured as follows: Section II presents a literature review and discussion of calibration methods. Section III outlines the baseline model-based algorithm, online learning algorithm and the proposed offline learning schemes. Section IV details the training data collection process and the rationale behind the online calibration trajectory design. Section V compares the results obtained from all discussed methods. Finally, Section VI demonstrates the integration of the proposed method into interactive tasks.
§ RELATED WORK
The study of dynamics identification has laid a solid foundation for sensorless external contact detection and estimation. In the literature, major solutions to this challenge are categorized into three groups: model-based methods <cit.>, model-free methods <cit.>, and hybrid methods <cit.>. These approaches utilize parametric dynamics modeling, black-box learning algorithms, or a combination of both, estimating external contact through measurements of motor torque, current, and joint states. Relying on current introduces additional challenges. Studies <cit.> report that current signals tend to be noisier than torque measurements, and motor constants provided by manufacturers can be imprecise for certain robotic models. Additional efforts may be required to obtain reliable torque estimation from current signals. Despite careful signal processing, these studies commonly face another limitation: they are often conducted across a wide range of joint spaces, aiming to identify the complete robot model under a consistent robot dynamics setting. If frequent changes in the end-effector payload necessitate calibration for external contact estimation, the standard identification process may become impractically time-consuming.
To tackle this problem, model-based methods have been developed to facilitate fast calibration by compensating for variations in the end-effector payload. In <cit.>, it is demonstrated that a 1.5-second calibration trajectory is sufficient given F/T measurements and knowledge of the payload shape, suggesting that fast payload calibration is achievable with an appropriate data modality. In <cit.>, the Least Squares (LS) algorithm is employed to determine the optimal set of parameters based on data collected online. However, the reported calibration times remain significant. <cit.> proposes gathering payload information and solving for payload parameters through similar LS algorithms by executing two similar calibration trajectories. This design allows for compensation of unmodeled disturbances with flexibility. However, a common challenge of payload estimation using proprioceptive sensors is the lack of global accuracy with calibration data collected locally. This could be caused by physically inconsistent dynamics parameters being identified using local data. <cit.> summarizes that accurate estimation requires global data, but online data collection across the entire joint space remains impractical due to extensive time requirements.
Conversely, learning-based methods have demonstrated a potent ability to capture the robot dynamics, enabling minimal calibration data use with an offline-trained model. In <cit.>, a detailed comparison among various learning-based techniques, including Gaussian Process Regression (GPR), Echo State Networks (ESN), and Extreme Learning Machine (ELM), is provided. This comparison suggests that ELM — a single-layer Neural Network (NN) — outperforms other methods in tasks related to payload calibration. Further research in <cit.> introduces a calibration strategy that trains models with various payloads. This approach allows for the estimation of joint-level compensations by interpolating between the models' predictions. However, for effective interpolation, the payloads' mass must be known beforehand; without this information, online collection of fine-tuning data across the joint space would lead to the same problem of excessive time consumption.
In this paper, we adopt a similar model pre-training approach but introduce two different strategies to address the challenges: (i) We implement fine-grained discretization of payloads during the training phase and train payload-specific models to ensure there is always a model capable of compensating for the unknown payload with acceptable errors. (ii) We identify and incorporate payload features during the training phase, allowing for their extraction from short online trajectories. This process informs the pre-trained model about payload variations.
As a preliminary step to payload compensation, a base model is needed to estimate joint currents in a payload-free scenario. A previous study introduced a Neural-Network-based dynamics identification approach <cit.>, utilizing a specialized Motion Discriminator (MD) input scheme. This method incorporates both instantaneous joint states and temporal information as inputs, effectively reducing significant errors caused by friction hysteresis. We adopted this design to train the base model and provide estimations for the payload-free dynamics.
§ CALIBRATION METHODS
In this section, we introduce four candidate calibration methods: the model-based calibration method, the Online Learning Model, the Payload-specific Pre-trained Model, and the Payload-adaptive Pre-trained Model. The architecture for the base model and all calibration models is Multilayer Perceptron (MLP). According to a previous study <cit.>, MLP has been shown to outperform other architectures, including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), in terms of accuracy and inference speed.
§.§ Model-based Method
In this paper, we consider and implement the model-based calibration method discussed in <cit.>, based on the robot's fundamental dynamics parameters obtained using <cit.>. The end-effector payload can be regarded as an extension of the robot's last link, allowing for identification of the payload parameters through modified dynamics. Specifically, the joint-level torque variation at the n^th data frame due to payloads can be written as follows:
Y(q_n,q̇_n,q̈_n)(π_L - π) = Δτ_n = τ_L,n - τ_n
where Y, π, π_L, τ, and τ_L are the regressor matrix of the robot, original dynamics parameters, loaded dynamics parameters, load-free motor torque, and loaded motor torque, respectively. By collecting calibration data online, the variation of dynamics parameters can be evaluated using the stacked formulation:
ϵ̂ = Y^#_LΔτ
where ϵ̂ denotes the parameter variations due to the end-effector payload. Furthermore, the payload parameters:
p_L = (m_L, c_Lxm_L, c_Lym_L, c_Lzm_L, J_L)^T ∈ℝ^10
can be solved explicitly using:
p̂_L = J^#_ϵϵ̂, J_ϵ = ∂ϵ(p_L)/∂ p_L, ϵ(p_L) = π_L - π
By modifying π using the identified payload parameters, the estimated motor torque would include the component for payload compensation.
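A minimal NumPy sketch of this stacked least-squares pipeline is shown below; the variable shapes are assumptions (P base dynamics parameters, N calibration frames for a 6-joint robot), and building the regressor Y(q,q̇,q̈) and the Jacobian J_ϵ from the robot model is left outside the snippet.

import numpy as np

def identify_payload(Y_stack, dtau_stack, J_eps):
    # Y_stack: (6N, P) stacked regressor over the calibration trajectory
    # dtau_stack: (6N,) stacked torque differences tau_L - tau
    # J_eps: (P, 10) Jacobian of eps(p_L) with respect to the payload parameters
    eps_hat = np.linalg.pinv(Y_stack) @ dtau_stack     # Eq. (2): parameter variation
    p_load_hat = np.linalg.pinv(J_eps) @ eps_hat       # Eq. (4): payload parameters
    return p_load_hat                                  # (m_L, c_L m_L terms, J_L terms)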
§.§ Online Learning Model (OLM)
As discussed in Section II, online learning methods are widely employed for dynamics and kinematics calibration purposes, as referenced in previous works <cit.>. However, these methods suffer from issues such as prolonged calibration times and diminished accuracy over large workspaces. We implemented an Online Learning method as a baseline to substantiate these statements through experimental observations.
We initialize the OLM using a payload-free base model that was pre-trained, as discussed in previous research <cit.>. Calibration data are then collected through short calibration trajectories, followed by the fine-tuning of the base model using Batch Gradient Descent (BGD). The base model features a 6-layer MLP architecture with residual connections. Its structure, along with the data flow of online training and inference, is depicted in Fig. <ref>. Consistent with the training framework of the base model, we adopt the same MD input scheme to effectively reduce hysteresis error. The specifics of the input scheme are not detailed in this paper and are represented by the `data pre-processing' unit in Fig. <ref>.
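The OLM calibration step can be summarized by the following sketch: the pre-trained base model is fine-tuned with full-batch gradient descent on the frames collected during the short online trajectory. The optimizer settings and epoch count below are illustrative placeholders, not the values used in the experiments.

import torch

def fine_tune_olm(base_model, x_calib, y_calib, epochs=200, lr=1e-4):
    # Warm start from the payload-free base model; x_calib and y_calib hold all
    # pre-processed input frames and measured currents from the calibration run.
    optimizer = torch.optim.SGD(base_model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):                        # batch gradient descent over all frames
        optimizer.zero_grad()
        loss = loss_fn(base_model(x_calib), y_calib)
        loss.backward()
        optimizer.step()
    return base_model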
§.§ Payload-specific Pre-trained Models (PsPM)
As highlighted in Section II, the pre-trained model can learn the complete robot dynamics from offline data collected across the entire workspace. This approach significantly reduces the need for extensive data collection during the online calibration of an unknown payload. In this section, we introduce a straightforward yet effective method that trains distinct models for different end-effector payloads.
The block diagrams of data flow, model training, and online selection are illustrated in Fig. <ref>. During the offline training phase, all models are initialized with random weights and trained to predict the current residuals associated with payloads, which are referred to as 'payload residuals' throughout this paper. Each model employs a 3-layer fully-connected (FC) MLP architecture. For training a single payload-specific model, a distinct combination of mass and center of mass (CoM) is used for data collection. This necessitates the discretization of both variables, creating finite sets of combinations. Since data collection and model training are conducted offline, it is feasible to gather extensive training data, ensuring comprehensive coverage of the entire joint space. Section IV.A will detail the required dataset size for each payload and the discretization methodology.
For the online model selection process, data are gathered from online calibration trajectories. We then obtain estimations from all models via multi-process computation and select the most accurate model based on the current estimations and measurements. Specifically, the most accurate model is selected using the following equations:
W_a = F(r̅^1_est....,r̅^M_est,r̅_mea) = _W_m(MSE(r̅_mea,r̅^m_est))
MSE(r̅_mea,r̅^m_est) = ∑_i=1^I (r̅_mea,i - r̅^m_est,i)(r̅_mea,i - r̅^m_est,i)^T/I
where W_a, r̅^m_est∈ℝ^N × 6, r̅_mea∈ℝ^N × 6, and I are the selected model, m^th model's estimation, payload residual measurements, and the number of data frames of the calibration trajectory. W_a will remain activated until the next calibration.
§.§ Payload-adaptive Pre-trained Model (PaPM)
§.§ Payload-adaptive Pre-trained Model (PaPM)
Although PsPM can provide highly accurate payload compensation across the entire joint space, loading all models into the system could significantly increase memory usage. Furthermore, with fine-grained discretization of the payload, resulting in numerous models being tested online, the computation time required for both inference and model selection becomes substantial. For example, the computation time for equations (5) and (6) across 69 models on 400 data frames (4 seconds) exceeds 2 seconds, despite parallel computation. In response to these challenges, we propose the PaPM in this section. The data flow and block diagram of this approach are depicted in Fig. <ref>. PaPM is trained with all the data collected for PsPM but incorporates a unique element, the Payload Indicator (PI, x_ind), obtained from a carefully designed calibration trajectory, to distinguish between PsPM datasets.
The format and length of the PI vector x_ind can be flexible, as long as it effectively reflects the unique identity of the payload. For example, it can include the magnitude of the mismatch between loaded measurements and load-free estimations. In practice, the PI vector should be designed in conjunction with the calibration trajectory, as its format and length may vary significantly with different trajectories. Theoretically, in a non-singular configuration, a single current measurement uniquely corresponds to an end-effector payload and is sufficient for evaluating the PI vector. However, disturbances during low-speed operations - not necessarily white noise <cit.> - may lead to inconsistent measurements across multiple calibration trials of the same payload. In contrast, evaluating the PI vector from a trajectory helps reduce the uncertainty of single measurements and ensures that the same values are acquired in every trial. Section IV.B will detail our implementation of the PI vector.
For capturing payload-specific information, the same calibration trajectory is executed alongside the standard excitation trajectories during PsPM data collection. It is consistent for different payloads and is executed for each new payload dataset collected. In light of the objective for fast calibration, two key guidelines must be adhered to in the design of the calibration trajectory: (i) The trajectory should have an execution time of less than 5 seconds. (ii) It should effectively convey the payload information at the joint level. Specifically, it should avoid singularities and positions near singularities, where payload variations result in no or negligible changes to the current measurements of specific joints. Section IV.B will detail our implementation for the calibration trajectory.
Similar to PsPM, PaPM employs a 3-layer fully-connected MLP architecture, trained to predict payload residuals. During the training phase, each input data frame x^m_n is concatenated with the PI vector x^m_ind to form the complete PaPM input. Notably, the PI vector is disturbed with white noise to mimic noisy measurements and prevent the model from making out-of-distribution estimations when encountering unseen PI vectors during inference. The variance of the white noise is set at 0.25, taking into account the discretization resolution. For online calibration, the designated calibration trajectory is executed to evaluate the PI vector for the unknown payload. This PI vector is then incorporated into the input for applications and remains unchanged until the next calibration.
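How a PaPM training batch is assembled can be sketched as follows (the tensor shapes and helper name are assumptions): the payload indicator of the corresponding dataset is tiled across the batch and perturbed with zero-mean Gaussian noise of variance 0.25 before being concatenated to each input frame.

import torch

def papm_training_input(x_frames, pi_vector, noise_std=0.5):
    # x_frames: (B, D) pre-processed joint-state inputs; pi_vector: (12,) payload indicator.
    pi = pi_vector.expand(x_frames.shape[0], -1)
    pi = pi + noise_std * torch.randn_like(pi)     # std 0.5 corresponds to variance 0.25
    return torch.cat([x_frames, pi], dim=1)        # (B, D + 12) PaPM input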
§ TRAINING DATA COLLECTION
This section provides implementation details of the data collection procedure, excitation trajectories, and calibration trajectory. The robot manipulator used is the Denso-VS060, a position-controlled robot without force/torque sensing capabilities. The robot's hardware interface, motion planning algorithms, and data acquisition algorithms were all developed within the Robot Operating System (ROS) framework.
§.§ Data Collection for Offline Learning
§.§.§ Payload Discretization
Through careful design of the end-effector, it is possible to adjust payload variables such as mass, Center of Mass (CoM), and inertia. Given that inertia plays a minor role in the low-speed tasks of interest and is challenging to adjust systematically, we focus exclusively on mass and CoM as the key payload variables.
The end-effector design is illustrated in Fig. <ref>. This design facilitates the adjustment of mass and CoM by filling the containers with various materials, enabling payload discretization. Most end-effector and gripper designs are geometrically symmetrical around the Z-axis; thus, assuming a centric CoM (c_Lx=0, c_Ly=0) with fine-grained discretization of m and c_Lz is beneficial for efficient training data collection and meets the requirements of most applications. Nevertheless, to demonstrate the robustness and capability of the proposed method in learning from diverse payloads, we collected off-centric training data with rough discretization. Table <ref> presents 21 sets of centric variables and 48 sets of off-centric variables, totaling 69 data sets, for training PsPM and PaPM.
§.§.§ Trajectory Excitation
Sinusoidal exciting trajectories are commonly used for inverse dynamics identification. We refer to <cit.> for the following trajectory expression:
q_j(t) = ∑_l=1^L a_l,j/lω_fsin(lω_ft) - b_l,j/lω_fcos(lω_ft) + q_0,j
where L=5 represents the number of harmonics, ω_f = 0.15π in the original design, and the coefficients a_l,j, b_l,j, and q_0,j are pre-determined constants, carefully chosen to cover the entire joint space. However, two factors make this specific trajectory less suitable for our learning-based method. First, Neural Network training typically requires a substantial amount of data. The 21-second trajectory in the original study, despite having manually selected optimal coefficients, does not provide sufficient dynamic information for NN-based methods. Second, the described trajectory is continuous. Previous research <cit.> has shown that hysteresis becomes a significant issue with stationary joints or those in the pre-sliding zone. This concern is overlooked by continuous trajectories.
To this end, we introduce interruption-rich exciting trajectories for long-duration data collection. Given the sinusoidal nature of the trajectory, merely extending the execution time would result in repetitive robot motion. To ensure at least 30 minutes of varied trajectories for each payload, we generated multiple trajectories by following the steps below; a simplified sketch of this generation procedure is given after the list:
* For each trajectory, randomly sampled the coefficients a_l,j, b_l,j, and q_0,j in equation (7) within the ranges of [-0.5, 0.5], [-0.5, 0.5], and the joint space, respectively.
* Modified ω_f to scale down the velocity to better align with the low-speed requirements of tasks.
* Introduced interruptions into the continuous trajectories. Specifically, a pause of t_p seconds is introduced after every t_e seconds of execution, with t_e and t_p randomly sampled from the ranges of [1.0 s, 3.0 s] and [7.0 s, 9.0 s], respectively. Notably, such an interruption was applied to different joints separately.
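A simplified sketch of the trajectory generation is given below. The coefficient sampling ranges follow the list above, while the scaled fundamental frequency, the initial joint sample, and the omitted per-joint interruption insertion are placeholders.

import numpy as np

def fourier_traj(t, a, b, q0, w_f, L=5):
    # Eq. (7): t is (N,), a and b are (L, 6) coefficient arrays, q0 is (6,).
    q = np.tile(q0, (t.size, 1))
    for l in range(1, L + 1):
        q += (a[l - 1] / (l * w_f)) * np.sin(l * w_f * t)[:, None] \
           - (b[l - 1] / (l * w_f)) * np.cos(l * w_f * t)[:, None]
    return q

rng = np.random.default_rng(0)
a = rng.uniform(-0.5, 0.5, size=(5, 6))            # randomly sampled coefficients
b = rng.uniform(-0.5, 0.5, size=(5, 6))
q0 = rng.uniform(-1.0, 1.0, size=6)                # placeholder joint-space sample
w_f = 0.05 * np.pi                                 # placeholder scaled-down fundamental frequency
t = np.arange(0.0, 60.0, 0.01)                     # 60 s of commands at 100 Hz
q_cmd = fourier_traj(t, a, b, q0, w_f)             # per-joint pauses would be inserted afterwards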
Given that the Denso VS060 is a position-controlled robot, it provides only joint positions as the state measurements. Consequently, velocities and accelerations were obtained from the first and second-order derivatives of the position data. A Butterworth Filter:
v_n = ∑_t=0^3b_tu_n-t - ∑_t=1^3a_tv_n-t
[a_1, a_2, a_3] = [-2.592, 2.264 ,-0.664]
[b_0, b_1, b_2, b_3] = [0.0009, 0.0026, 0.0026 ,0.0009]
where u and v are the filter input and output, was applied to reduce signal noise, featuring a cutoff frequency of approximately 7 Hz under a 200 Hz sampling rate. Joint positions, velocities, accelerations, and currents were recorded at a system frequency of 100 Hz.
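As a concrete illustration of this signal pipeline (whether the filter is applied to the positions or to their derivatives is an implementation detail assumed here), the filter coefficients map directly onto scipy.signal.lfilter with denominator [1, a_1, a_2, a_3].

import numpy as np
from scipy.signal import lfilter

B_COEF = [0.0009, 0.0026, 0.0026, 0.0009]
A_COEF = [1.0, -2.592, 2.264, -0.664]

def differentiate_and_filter(q, dt=1.0 / 200.0):
    # q: (N, 6) joint positions sampled at 200 Hz.
    qd = np.gradient(q, dt, axis=0)                # first-order derivative (velocity)
    qdd = np.gradient(qd, dt, axis=0)              # second-order derivative (acceleration)
    qd_filt = lfilter(B_COEF, A_COEF, qd, axis=0)  # low-pass, cutoff around 7 Hz
    qdd_filt = lfilter(B_COEF, A_COEF, qdd, axis=0)
    return qd_filt, qdd_filt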
§.§.§ Data Size Investigation
With the PsPM training scheme, we conducted preliminary experiments to explore the relationship between estimation errors (e = y_mea - y_est - r_est) and the volume of training data. We collected the sample dataset over two hours of excitation trajectories executed with a consistent payload (m = 0.91 kg, c_Lz = 0.075 m). Models were then trained using different portions of this dataset. Details on the model size and training parameters are provided in Section V.
Fig. <ref> displays the estimation errors of all models, tested on a different dataset that covers a wide joint space, and plotted in relation to the length of the training data. Clearly, a larger training set size leads to reduced estimation errors, although the benefits diminish for training durations exceeding 30 minutes. In light of this diminishing return, we collect a 30-minute dataset for each discretized payload. Longer trajectories could be considered if time constraints for data collection are relaxed, or if higher accuracy is required. Our design resulted in 180,000 (30 mins) data frames for each PsPM dataset, cumulatively amounting to 12,420,000 (34.5 hours) data frames for PaPM training.
Interestingly, within the segment of Fig. <ref> that spans from 0 to 300 seconds, we observe two distinct patterns of error variation. The solid lines represent joints 2, 3, and 5, which are significantly influenced by the end-effector payload due to the robot's geometry. Training the calibration model with merely 60 seconds of data proves to effectively reduce errors for these joints. In contrast, the dashed lines represent joints 1, 4, and 6, which are less affected by the payload. For these joints, the estimation error initially increases with smaller datasets and then decreases as the dataset size expands. This pattern indicates that when the calibration data is insufficient, the NN model tends to overfit the robot dynamics in a local region, resulting in decreased accuracy across the entire joint space. This observation also elucidates why the model-based method and OLM exhibit lower accuracy, which will be discussed in Section V.
§.§ Implementation of PI and Calibration Trajectory
In accordance with the concept discussed in Section III.D, our implementation of the calibration trajectory comprises two steps: (1) selecting a static position distal from singularities to maximally distinguish payloads, and (2) commanding all joints to move locally around the selected position to acquire reliable PI values while avoiding hysteresis errors. Specifically, we designed a 4-second calibration trajectory for online model selection (PsPM) and PI evaluation (PaPM). The robot's initial position was set at [0^∘,40^∘,50^∘,45^∘,45^∘,0^∘]. For the first 2 seconds, all joints maintain constant speeds of [3^∘/s,3^∘/s,3^∘/s,-3^∘/s,-3^∘/s,-3^∘/s], with these velocities reversing in the final 2 seconds. The command velocity was doubled for the model-based method and OLM to span larger joint space. The calibration algorithms for OLM and PsPM are discussed sufficiently in Section III.A and III.B. In the following, we will discuss the evaluation details of the PI vector in our setting.
The PI vector x_ind∈ℝ^12 × 1 was obtained by concatenating two vectors x^1_ind∈ℝ^6 × 1 and x^2_ind∈ℝ^6 × 1, which were the average payload residuals measured in the first and second half of the calibration trajectory, respectively. Both x^1_ind and x^2_ind were evaluated using the following equation:
x^k_ind = (1/N) ∑_{n=1}^{N} (y_mea,n - y_est,n), k=1,2
where y_mea∈ℝ^6 × 1 is the measured currents, y_est∈ℝ^6 × 1 is the base-model-estimated current, and N is the number of frames recorded for each 2-second trajectory.
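Assuming the currents logged over the 4-second calibration motion are stored as arrays of shape (N_total, 6), this evaluation reduces to a few lines; the helper name below is ours.

```python
import numpy as np

def payload_indicator(y_mea, y_est):
    """12-D payload indicator from a 4 s calibration trajectory.

    y_mea, y_est: (N_total, 6) measured and base-model-estimated currents;
    the first and second halves (2 s each, opposite joint velocities) are
    averaged separately and stacked into x_ind in R^12.
    """
    resid = y_mea - y_est
    half = resid.shape[0] // 2
    x1 = resid[:half].mean(axis=0)    # x^1_ind
    x2 = resid[half:].mean(axis=0)    # x^2_ind
    return np.concatenate([x1, x2])
```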
§ MODEL TRAINING AND COMPARISONS
The fully-connected hidden layers in all models described in Sections III and IV, including OLM, PsPM, PaPM, and the preliminary test model, consist of 512 neurons and utilize the ReLU activation function. For PsPM and PaPM, the PyTorch framework was employed for model training during the offline learning phases, and all models were implemented with the NumPy library in Python for online inference. During training, the MSE loss and the Adam optimizer were used, along with a set of hyperparameters: a 0.5 random dropout rate, a training batch size of 4096, and a learning rate of 0.001. For OLM, fine-tuning the base model adheres to the same training procedures. However, the 4-second calibration trajectory produces only 400 data frames, allowing the model's fine-tuning to be completed within 3 seconds.
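The network and training loop sketched below follow these settings; the number of hidden layers, the input dimensionality, and the number of epochs are assumptions, since they are not stated explicitly.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ResidualMLP(nn.Module):
    """Fully connected model with 512-neuron ReLU hidden layers and 0.5 dropout."""
    def __init__(self, in_dim, out_dim=6, hidden=512, n_hidden=2):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(n_hidden):
            layers += [nn.Linear(d, hidden), nn.ReLU(), nn.Dropout(0.5)]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def train(model, x, y, epochs=100, lr=1e-3, batch=4096):
    """Offline training with MSE loss and Adam, as described in the text."""
    loader = DataLoader(TensorDataset(x, y), batch_size=batch, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model
```

For online inference, the trained weights can be exported and evaluated with plain NumPy matrix products, matching the deployment described above.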
For the qualitative evaluation of calibration outcomes, a test trajectory was executed both before and after the calibration process. Since PsPM and PaPM predict only payload residuals, their estimations offset from the base model's estimations. Fig. <ref> shows the estimation results for the test trajectory, comparing the performance of the model-based method, OLM, PsPM, and PaPM. These methods significantly reduce estimation errors in joints 2, 3, and 5, which were notably affected by the payload (m = 0.83 kg, c_L = [0.000 m, 0.030 m, 0.087 m]). However, for the model-based method and OLM, although the joint space explored was larger given the doubled velocity, the data collected online was still relatively local, leading to inaccurate estimations when the Inverse Kinematics Class was distal from that of the calibration trajectory. The Root Mean Square Errors (RMSE) for 90,000 test data frames are detailed in Table <ref>. These results suggest that PsPM and PaPM both deliver high accuracy across various configurations. Given the advantages of PaPM outlined in Section III, PaPM was preferred over PsPM for the applications in Section VI.
To further validate the robustness of PaPM, we conducted additional tests using five different PaPMs, each corresponding to a unique calibration trajectory across the joint space, as well as four different payloads, all on the same test trajectory. Table <ref> presents the average results of these tests.
§ APPLICATION
§.§ Sensorless Joint Space Compliance
In the joint compliance task, robots are expected to respond compliantly to external contacts with the human operator. This objective is often straightforwardly achieved through gravity compensation when users can directly command joint torques. However, the Denso-VS060 is a position-controlled robot, which prohibits direct torque or current commands. Consequently, an alternative method is necessary to enable sensorless joint compliance. Previous research <cit.> has provided a comprehensive analysis of this challenge. An admittance controller:
q̇_n = K_py_ext,n = K_p(y_mea,n-y_est,n-r_est,n)
where K_p and y_ext,n are the constant gain and external residual estimation, respectively, is utilized to achieve compliant motions without the need for direct current commands. To prevent unexpected movements in the absence of contact, a deadzone was imposed on y_ext,n, ensuring a joint activates only in response to sufficiently large external forces.
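One control cycle of this law might look as follows; the per-joint gains, the linear ramp outside the deadzone, and the integration into a position command are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def joint_compliance_step(y_mea, y_est, r_est, deadzone, K_p, dt, q_cmd):
    """Sensorless joint-space admittance update for one control cycle.

    y_mea, y_est, r_est: measured current, base-model estimate, and PaPM
    residual estimate, each shape (6,); deadzone and K_p are per-joint.
    """
    y_ext = y_mea - y_est - r_est            # external residual estimate
    # Ignore residuals inside the deadzone; shift the rest toward zero so the
    # commanded velocity is continuous at the threshold.
    y_eff = np.where(np.abs(y_ext) > deadzone,
                     y_ext - np.sign(y_ext) * deadzone, 0.0)
    q_dot = K_p * y_eff                      # admittance: velocity from residual
    return q_cmd + q_dot * dt                # integrate to a position command
```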
To begin with, we compared the estimation error in two scenarios: base model estimation without a payload and PaPM-compensated estimation with a payload. Intuitively, the deadzone selection should be identical since PaPM only calibrates the payload. However, experimental results show that the calibration model also behaves as an additional model hierarchy, as presented in <cit.>. As a result, in addition to compensating for the payload residuals, PaPM further reduces the estimation error left by the base model. This enhancement has enabled the selection of smaller deadzone boundaries, making the robot feel more responsive and 'lighter'. Details on the deadzone values are provided in Table <ref>. Notably, expressing the current's unit in percentage usage (%use) is less intuitive for force analysis. Therefore, equivalent torque values were approximated using measurements from the F/T sensor, taken at specific static postures and considering the robot's geometry.
In this experiment, the end-effector utilized has a mass and CoM of (m = 0.83 kg, c_L = [0.000 m, 0.030 m, 0.087 m]). The joint compliance mode enables whole-body compliance, as demonstrated in the supplementary video. In the subsequent experiment, the operator interacted only with the end-effector to record external contacts via the F/T sensor. Initially, the system was activated using the base model, without any calibration. Once joint compliance was enabled, the end-effector began to drop due to the gravity of the uncalibrated payload. This behavior is clearly illustrated in Fig. <ref>, where non-zero joint current residuals were observed, even though no force was detected by the sensor. In contrast, after calibration, Fig. <ref> displays a clear correlation between the measured force and the estimated residual, indicating that the payload was accurately calibrated. This allowed the robot to cease movements immediately upon the removal of external forces.
§.§ Wrench Estimation and Task Compliance
External Wrench Estimation (WE) is another essential application of dynamics identification. It facilitates the detection and control of contact forces exclusively through the use of current and joint state measurements <cit.>, enabling complex industrial tasks such as high-precision assembly, without the necessity for external sensing devices.
Due to the absence of exact motor constants, directly determining the end-effector wrench using the estimated current residual and the robot's Jacobian Matrix is not feasible. To address this challenge, we draw upon methodologies from previous research <cit.>, which introduces a systematic data collection procedure for wrench estimation, and <cit.>, which implemented compound NN models. The conceptual framework of this approach is depicted in Fig. <ref>. The WE model employs a fully-connected MLP architecture. It inputs the instantaneous joint state vector (q_n, q̇_n, q̈_n) along with an estimation of the payload residuals r_est,n. Additionally, the model incorporates a short-term memory mechanism, utilizing the vector (x_we,n-1,...,x_we,n-5) from the preceding five frames to enhance accuracy and minimize noise.
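As a sketch of the feature assembly only (the exact input layout of the WE model is not spelled out here), one input frame could be built as follows.

```python
import numpy as np

def wrench_input(q, qd, qdd, r_est, wrench_history):
    """One input frame for the wrench-estimation MLP.

    q, qd, qdd: joint positions, velocities, accelerations, each shape (6,);
    r_est: PaPM payload-residual estimate, shape (6,);
    wrench_history: the five most recent wrench estimates x_we (each shape (6,)).
    """
    memory = np.concatenate(wrench_history[-5:])        # short-term memory term
    return np.concatenate([q, qd, qdd, r_est, memory])  # fed to the MLP
```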
After training the WE model, it was integrated into the ROS framework to facilitate real-time wrench estimation. For task space motion control, an admittance controller was employed, leveraging the wrench estimates instead of direct measurements. Subsequent joint space motion was determined using Inverse Kinematics. Additionally, a deadzone was carefully selected to set thresholds for initiating motion.
In this experiment, after pre-calibrating the weight of the end-effector, which included a coupled F/T sensor and gripper, the robot's task compliance mode was activated. The robot then proceeded to grasp an aluminum block (0.62 kg), which was treated as an unknown payload for the purpose of calibration. Upon grasping the aluminum block, a constant offset in the estimated wrench was observed, leading to a gradual descent of the end-effector until it reached the boundary. This behavior is displayed more clearly in the supplementary video. Subsequently, the same 4-second calibration was conducted. The results of this calibration, both quantitative and qualitative, are presented in Fig. <ref>. Overall, the calibration's accuracy is evidenced by nearly overlapped curves between the estimations and measurements, along with minimal RMSE.
§ CONCLUSION
In this study, we presented a fast end-effector payload calibration technique for dynamics identification and its application in joint space compliance and wrench estimation. This approach employs Neural Networks to model robot dynamics accurately across various payload scenarios, covering the entire joint space. Leveraging an extensive offline learning process, this method requires only minimal online data, enabling the use of short calibration trajectories. This outcome is particularly valuable in industrial settings where manipulators grasp a wide range of objects.
While the proposed calibration scheme achieves high levels of accuracy, learning-based methods generally rely on substantial volumes of training data, posing a challenge for easy replication on other robots. Additionally, the current payload discretization approach and end-effector design allow only adjustments of mass and CoM. Consequently, our implementation is limited to low-speed operation and collaboration, where the effect of payload inertia is less pronounced. For replication on different robot models, it would be beneficial to develop a better payload discretization approach, such as an end-effector design that allows systematic adjustment of inertia. This would expand training data diversity while reducing the data size requirement.
Another promising avenue for applying the proposed method is with torque-controllable robots. For position-controlled robots, joint compliance should be implemented with careful selection of deadzones. Moreover, given the admittance control scheme, robot motion may be jerky if the current measurement is noisy. However, with torque/current-controllable robots, the estimated payload residuals can be integrated directly into gravity compensation. The quality of robot behaviors would no longer depend on deadzone selection and measurement noise, resulting in lighter, more responsive robot behaviors with smoother trajectories during interaction.
Follow the Mass -- A Concordance Picture of Tidal Disruption Events

Julian Krolik
Physics and Astronomy Department, Johns Hopkins University, Baltimore, MD 21218, USA
jhk@jhu.edu

Tsvi Piran
Racah Institute for Physics, The Hebrew University, Jerusalem, 91904, Israel

Taeho Ryu
Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, Garching, 85748, Germany
JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, 80308 CO, USA
Department of Astrophysical and Planetary Sciences, 391 UCB, Boulder, 80309 CO, USA
§ ABSTRACT
Three recent global simulations of tidal disruption events (TDEs) have produced, using different numerical techniques and parameters, very similar pictures of their dynamics. In typical TDEs, after the star is disrupted by a supermassive black hole, the bound portion of the stellar debris follows highly eccentric trajectories, reaching apocenters of several thousand gravitational radii. Only a very small fraction is captured upon returning to the vicinity of the supermassive black hole. Nearly all the debris returns to the apocenter, where shocks produce a thick irregular cloud on this radial scale and power the optical/UV flare. These simulation results imply that over a few years, the thick cloud settles into an accretion flow responsible for the long term emission. Despite not being designed to match observations, the dynamical picture given by the three simulations aligns well with observations of typical events, correctly predicting the flares' total radiated energy, luminosity, temperature and emission line width. On the basis of these predictions, we provide an updated method (TDEmass) to infer the stellar and black hole masses from a flare's peak luminosity and temperature. This picture also correctly predicts the luminosity observed years after the flare. In addition, we show that in a magnitude-limited survey, if the intrinsic rate of TDEs is independent of black hole mass, the detected events will preferentially have black hole masses ∼ 10^{6 ± 0.3} M_⊙ and stellar masses of ∼ 1-1.5 M_⊙.
§ INTRODUCTION
Tidal disruption events (TDEs) are inherently dramatic: a star is ripped apart by the gravity of a supermassive black hole. The result is a flare in which, for a month or two, the former star radiates with ∼ 10^10× its ordinary luminosity. Over the duration of the flare (typically months), TDEs are among the most luminous transients known. Once hard to find, there are now ∼ 100 known examples <cit.>, and many more can be expected from new instruments (e.g., the Rubin Observatory: and ULTRASAT: ) due to go online soon.
As TDEs involve infall of matter onto a black hole, they are of significant interest as a tool to explore dynamical accretion processes. They also have potential interest as a probe of the black hole population in galactic nuclei and the dynamical relationship between those nuclear black holes and the surrounding stars.
In addition to these reasons, they also merit attention as truly multiwavelength systems <cit.>. TDEs have been observed mostly in the optical, but at times, they shine in X-rays and in radio as well. The emission in different bands typically comes at different times and emerges from different locations, indicating that multiple mechanisms contribute to their radiation processes.
To identify where and how the radiation is emitted, the first task is to track where the debris mass goes. When the star's orbit is effectively parabolic and the star has been fully disrupted, half the star's mass is unbound and escapes to infinity, while the other half remains bound to the black hole <cit.>. Although there is a firm consensus that the bound mass is initially placed on highly-eccentric orbits
whose apocenters are thousands of gravitational radii, ∼ O(10^2) × the stellar pericenter, hitherto there has been little agreement about its whereabouts following its first return to the pericenter.
The oldest, and still most prevalent, view has been that the bound material immediately forms a compact accretion disk on the pericenter scale. This picture predicts a very luminous X-ray source, but the observed spectra have luminosities ∼ 10^-2× this prediction, and shapes well fit by black bodies with temperatures ≳ 10^4 K and radiating areas corresponding to the apocenter scale (see <cit.> for two recent data compilations). To explain these unexpected observations, many (e.g., <cit.>) have suggested that the X-rays are reprocessed by different material associated with the event whose photosphere happens to be at this distance from the black hole.
Alternatively, it has also been suggested <cit.> that, rather than forming a compact disk, the bound material returns to the apocenter region and, upon colliding with later-returning bound debris, forms a large irregular cloud on a scale similar to the apocenter distance. In this picture <cit.>, the shocks associated with these collisions power the flare.
The question of where the debris finds itself shortly after the disruption is given further importance by the fact that the energy available for radiation is exactly the binding energy of the debris orbits. The binding energy per unit mass is, of course, inversely proportional to the semimajor axis of a given fluid element's orbit. Given the large discrepancy between the ratio of radiated energy to debris mass and the energy per unit mass released in relativistic accretion, the orbital distribution is, therefore, of prime dynamical interest.
Large-scale numerical simulations treating most of the bound debris have the power to resolve these questions, but hitherto their computational expense has been prohibitive due to the extremely large dynamical range in lengthscales inherent to the problem. Consequently, early global simulations were run with parameter choices that, although unlikely to describe real events, made the runs feasible <cit.>. In these simulations, nearly all the bound matter passes again through the apocenter region after its first return to the vicinity of the black hole. However, as the simulation parameters were not realistic, these results have not been widely accepted as relevant to observed TDEs.
Recently, the numerical barriers have been overcome by three different groups <cit.> using very different numerical methods. In this paper, we will show that these simulations largely agree on where the bound debris goes after its first return to the vicinity of the black hole. In addition, we will outline the striking observational implications stemming from this consensus. These implications touch upon the total radiated energy during the flare peak and its spectral shape, the long-term evolution of TDE emission, and how the event rates in flux-limited samples depend on the stellar and black hole masses.
We will first briefly summarize the relevant observational results, focusing on TDEs of the most commonly seen variety, those dominated by an optical/UV flare (Sec. <ref>). Next (in Sec. <ref>) we will remind readers of the tidal debris' characteristic lengthscales, orbital energies, etc. Following these presentations of background, we will summarize the three new global simulations (Sec. <ref>). The heart of our work will appear in Sec. <ref>, where we demonstrate how the shared results of the simulations lead to important statements about both the underlying dynamics of TDEs and many of their observed properties. Our conclusions will then be summarized in Sec. <ref>.
§ A BRIEF SUMMARY OF OBSERVATIONS
§.§ The optical flare
In most optical/UV TDEs, the optical/UV flux rises quickly (over a few weeks) to a peak of ∼ 3 × 10^43-3 × 10^44 erg s^-1, followed by a lengthy (a few months) decline. The spectra are well-fit by single-temperature Planck functions, whose
typical temperatures are ∼ 3 × 10^4 K and corresponding black-body emitting radii are 10^{14} - 10^{15} cm <cit.>. When lines are observed, their typical width corresponds to ∼ 5000 km/sec <cit.>.
For a black hole mass ∼ 10^6 M_⊙, the scale of the emitting area is consistent with the scale of orbits with the speed inferred from the lines.
In some cases, the luminosity declines from its peak ∝ t^-α with α≃ 5/3, the expected slope of the mass infall rate's decline <cit.>. However, the entire sample exhibits a wider range of lightcurve power-law indices, 1 ≲α≲ 3 <cit.>. Combining lightcurve and single-observation luminosity data, one finds
the radiated energy is ∼ 10^{50.5 ± 0.5} erg. At late times, years or more after the peak, the decay is much shallower. For those events observed ≳ 0.5 yr past the peak, although there is a good deal of scatter, the bolometric luminosity typically falls below the peak by a factor ∼ 10 <cit.>. The late-time radiated energy is therefore comparable to the prompt radiated energy.
§.§ X-rays
Although there are a few dozen examples of TDEs whose peak X-ray luminosities are comparable to the peak optical/UV luminosities often seen, only a few TDEs discovered by optical/UV flaring have been associated with X-ray flares <cit.>. Nonetheless, the great majority of TDEs seen to produce X-rays, whether optically-detected or not, have rather soft spectra during the flare: when fitted by a Planck function, the characteristic temperature kT ∼ 50 - 100 eV <cit.>. There is, however, a good deal of diversity in their lightcurves:
in terms of how rapidly the X-rays decline <cit.>, non-monotonic behavior, and separation in time from any associated optical/UV flare <cit.> or spectral changes <cit.>.
§.§ Radio
Radio emission has been observed both during the prompt phase and also at times months or even years after a TDE, sometimes even when there was no prompt emission <cit.>. The luminosity of the observed radio signal is always much less than in the optical/UV or X-rays.
In most cases, equipartition analysis
indicates that their sources involve only a small fraction of the total energy[When the TDE is jetted, as for example Swift J1644+57, equipartition analysis <cit.> suggests that the energy of the emitting electrons is comparable to the energy content of the source producing other signals (e.g. the prompt X-rays).] in the system <cit.>.
It is generally thought that the radio emission takes place at large distances from the black hole, but there is considerable controversy over the nature of the outflow producing it
<cit.>. Because the radio source is dynamically decoupled from the bound material, we will not discuss it here.
§ BASIC QUANTITIES
Through dimensional analysis and simple physical arguments, the fundamental parameters of tidal disruption events determine a set of characteristic distances, timescales, and orbital properties for the debris as it separates from the star. Although these undergird the topic, it is important to recognize that details omitted from their definitions significantly alter some of the scalings they imply.
§.§ Lengthscales
At the most basic level, TDEs depend on the mass of the star (generally stated in units of M_⊙), the mass of the black hole, M_ BH (for which we adopt a fiducial value of 10^6 M_⊙), and the pericenter of the star's orbit. Because there are very few stars whose orbital semimajor axes are comparable to the scale on which TDEs happen, the rate of TDEs is almost certainly dominated by stars whose orbits are extremely eccentric. In fact, most stars that become victims of TDEs are on effectively parabolic orbits; we will quantify this statement momentarily.
At the order of magnitude level, the criterion for the tidal gravity of the black hole to overwhelm the self-gravity of a star is
G M_BH R_*/r_t^3 ≳ G M_⋆/R_*^2.
Thus, the order-of-magnitude tidal radius r_ t is given by
r_t = R_* (M_BH/M_⋆)^{1/3}
= 7 × 10^{12} m_BH,6^{1/3} m_⋆^{0.55} cm
= 50 m_BH,6^{-2/3} m_⋆^{0.55} r_g ,
where m_⋆ is the stellar mass in solar masses, m_BH,6 is the black hole mass in units of 10^6 M_⊙, and r_g ≡ GM_BH/c^2 is the black hole's gravitational radius.
Here and in the rest of the text, we describe the main-sequence mass-radius relation by the power-law R_* = 0.93 R_⊙ m_⋆^{0.88} <cit.>.
Whenever m_⋆ appears in a scaling relation with an exponent written as a decimal quantity, part of the m_⋆-dependence enters through R_*(m_⋆).
The tidal radius estimator r_ t is based on an order-of-magnitude argument. Other radii relevant to tidal disruptions differ from it by factors of order unity. In particular, the critical radius within which a star can be completely disrupted is R_T = Ψ(M_⋆,M_ BH) r_ t <cit.>.
The correction factor
Ψ(M_⋆,M_ BH) is defined in the Appendix. Although Ψ(M_⋆,M_ BH) is of order unity, its dependence on the stellar mass and the black hole mass is important when estimating TDE rates.
§.§ Energy scales and orbital properties
The specific orbital energy of the debris liberated from the star is conventionally estimated <cit.> by
Δ E_0 = GM_BH R_*/r_t^2
= G (M_BH M_⋆^2)^{1/3}/R_*
= (GM_BH/r_t)(M_⋆/M_BH)^{1/3} = 2.3 × 10^{-4} c^2 m_BH,6^{1/3} m_⋆^{-0.213}.
In other words, the orbital energy of the debris is, for typical parameters, ∼ 10^-2 the potential energy near the nominal tidal radius. However, once again, detailed calculations <cit.> have found that consideration of the internal density profile of main sequence stars introduces order-unity correction factors: <cit.> defined them by Ξ = Δ E/Δ E_0, where the energy range -Δ E < E < +Δ E contains 90% of the debris mass. Details about the correction factor Ξ, including fitting formulæ for its dependence on M_⋆ and M_BH, are provided in the Appendix <ref>.
For the star's orbit to be “effectively parabolic", its specific binding energy should be much smaller than the debris specific energy, Δ E. This condition corresponds to an initial stellar semi-major axis much larger than the debris minimal semi-major axis a_0 (defined in eqn. <ref>), and to eccentricity much closer to unity than the debris' eccentricity (defined in eqn. <ref>).
The specific orbital energy also determines another characteristic lengthscale: the semimajor axis of the initial orbit traveled by the “most bound" matter:
a_0 = GM_BH/(2 Δ E) = (M_BH/M_⋆)^{1/3} r_t/(2Ξ) =
(M_BH/M_⋆)^{2/3} R_*/(2Ξ)
= 3.26 × 10^{14} Ξ^{-1} m_BH,6^{2/3} m_⋆^{0.213} cm
= 2200 Ξ^{-1} m_BH,6^{-1/3} m_⋆^{0.22} r_g .
Consistent with the ratio ∼ 10^-2 between Δ E and the gravitational potential near r_ t, the orbital semimajor axis is ∼ 10^2 r_ t.
Because the pericenter of all the debris orbits is very nearly the stellar center-of-mass pericenter, and r_ p is often ≲ r_ t, the large ratio between a_0 and r_ t immediately implies that the debris orbits are extremely eccentric:
1-e ≤ 2 (r_p/r_t) Ξ (M_⋆/M_BH)^{1/3}
≤ 0.02 (r_p/r_t) Ξ (m_⋆/m_BH,6)^{1/3}.
Lastly, the energy also determines a characteristic timescale t_0, the orbital period of the most bound matter:
t_0
= (π/√2) [M_BH R_*^3/(G M_⋆^2 Ξ^3)]^{1/2}
= 37 Ξ^{-3/2} m_BH,6^{1/2} m_⋆^{0.32} d.
As m_⋆ increases, the internal density profile of main sequence stars becomes increasingly centrally-concentrated. This causes Ξ to rise as a function of m_⋆,
and the net result is for there to be almost no net trend in t_0 as m_⋆ increases from ≃ 0.1 M_⊙ to ≃ 10 M_⊙ <cit.>.
On the other hand, the explicit scaling with M_BH is augmented by the implicit dependence on M_BH through Ξ, making t_0 ∝ m_BH,6^{0.6}.
This characteristic timescale is significant for (at least) two reasons. The first is that it is the characteristic timescale on which the debris revisits the vicinity of the black hole: the rate at which mass returns to near the stellar pericenter rises to a peak that occurs ≃ t_0 after the star's pericenter passage and then declines thereafter as (M_⋆/3t_0) (t/t_0)^{-5/3}. The second is that, as the orbital period of the debris, it also defines the growth time of the internal stresses capable of driving accretion through outward angular momentum transport. Nonlinear saturation of the MHD turbulence driven by the magnetorotational instability is generally thought to take ∼ 10 orbital periods, which in this instance is ∼ 10 t_0, or ∼ 1 yr for typical parameters.
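These scalings are easy to evaluate numerically. The sketch below (our own, in cgs units) encodes the definitions of r_t, ΔE, a_0, and t_0 given above and reproduces the fiducial numbers: for m_⋆ = 1, m_BH,6 = 1, and Ξ = 1 it gives a_0 ≈ 3 × 10^{14} cm ≈ 2200 r_g and t_0 ≈ 37 d.

```python
import numpy as np

G, c = 6.674e-8, 2.998e10          # cgs
M_sun, R_sun = 1.989e33, 6.957e10

def tde_scales(m_star=1.0, m_bh6=1.0, Xi=1.0):
    """Characteristic debris scales for a full disruption, following the
    definitions in the text (R_* = 0.93 R_sun m_star^0.88)."""
    M_bh = 1e6 * m_bh6 * M_sun
    M_star = m_star * M_sun
    R_star = 0.93 * R_sun * m_star**0.88
    r_g = G * M_bh / c**2
    r_t = R_star * (M_bh / M_star)**(1.0 / 3.0)              # tidal radius
    dE = Xi * G * (M_bh * M_star**2)**(1.0 / 3.0) / R_star   # debris energy spread
    a0 = G * M_bh / (2.0 * dE)                               # most-bound semimajor axis
    t0 = 2.0 * np.pi * np.sqrt(a0**3 / (G * M_bh))           # most-bound orbital period
    return dict(r_g=r_g, r_t=r_t, a0=a0, a0_over_rg=a0 / r_g,
                t0_days=t0 / 86400.0)

print(tde_scales())   # fiducial case: a0 ~ 3e14 cm, a0/r_g ~ 2200, t0 ~ 37 d
```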
§.§ Circularization and the inverse energy crisis
The length scale where most of the mass is located dictates the implied efficiency of energy extraction. The conversion efficiency of kinetic to thermal energy at distance r from a black hole is:
η(r) ≃ G M_BH/(r c^2) = r_g/r .
For example, if, upon its first return to the black hole, the debris joins a compact disk with circular orbits of radius ∼ r_ p, the peak rate at which orbital energy is dissipated is
L_0 ∼ η(r_p) M_⋆ c^2/(3t_0) ∼ 10^{46} (25 r_g/r_p) m_⋆^{0.68} m_BH,6^{-1/2} Ξ^{3/2} erg s^{-1} .
This is ∼ 10^2 × larger than the luminosity typically observed in the prompt phase, a problem that has been called the “inverse energy crisis" (first discussed in <cit.>).
If this much light were generated over a surface comparable to that subtended by the inner regions of an accretion disk around a black hole, it would have a characteristic temperature
T_0 ∼ [L_0/(2π σ_SB r_p^2)]^{1/4} ∼ 1 × 10^6 (r_p/25 r_g)^{-3/4} m_⋆^{1/6} m_BH,6^{-5/8} Ξ^{3/8} K,
where σ_ SB is the Stefan-Boltzmann constant.
This is ∼ 50 × the observed optical/UV temperature <cit.>.
Typical observed velocities from motion at a few tens of r_ g would be ∼ 30,000 km s^-1, about ten times larger than implied by the measured emission line widths.
Interestingly, all three problems, concerning the luminosity, the temperature, and the velocity, would be solved if the apocenter scale, a_0, replaced the pericenter, r_p, as the place where the kinetic energy of the flow is converted to heat.
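Continuing the sketch introduced in the previous subsection, the contrast between pericenter-scale and apocenter-scale dissipation can be made explicit. With fiducial parameters this reproduces the mismatch described above (L_0 ∼ 10^46 erg s^-1 and T_0 ∼ 10^6 K at r_p = 25 r_g) and the much better match at a_0 (∼ 10^44 erg s^-1, ∼ 4 × 10^4 K, ∼ 6400 km s^-1); the function names are ours.

```python
import numpy as np

sigma_SB = 5.670e-5   # cgs

def prompt_estimates(m_star=1.0, m_bh6=1.0, Xi=1.0, rp_over_rg=25.0):
    """Luminosity, blackbody temperature, and speed scales implied by
    dissipating the peak fallback kinetic energy at r_p versus at a_0."""
    s = tde_scales(m_star, m_bh6, Xi)               # sketch from Section III.B
    M_star = m_star * M_sun
    t0 = s["t0_days"] * 86400.0
    mdot_peak = M_star / (3.0 * t0)                 # peak mass-return rate
    r_p = rp_over_rg * s["r_g"]

    def scales(r):
        L = (s["r_g"] / r) * mdot_peak * c**2       # eta(r) * Mdot_peak * c^2
        T = (L / (2.0 * np.pi * sigma_SB * r**2))**0.25
        v = c * np.sqrt(s["r_g"] / r) / 1e5         # km/s
        return L, T, v

    return {"pericenter": scales(r_p), "apocenter": scales(s["a0"])}
```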
§ SUMMARY OF CURRENT NUMERICAL SIMULATIONS
§.§ Realistic, Global Simulations
Explicit hydrodynamic simulations of what happens to the bound portion of the tidal debris are the best path toward understanding the fate of the returning matter stream and uncovering the observational implications of debris dynamics.
Beginning with <cit.>, many numerical hydrodynamics simulations about various aspects of TDE evolution have been published. Unfortunately, relatively few had realistic initial conditions and were carried out long enough to be suitable for investigating the system as a whole. The prerequisites for a simulation to be genuinely realistic and global are:
* have a problem volume large enough to contain all the bound debris (or at least that portion of it returning within a few t_0 of the disruption);
* run for a time ≳ t_0;
* consider a star on an effectively parabolic orbit (when this criterion is not satisfied, the orbit of the returning stream is dominated by the energy of the initial stellar orbit rather than by the disruption event);
* self-consistently link the disruption itself with the post-disruption debris hydrodynamics;
* assume parameters that might apply to observed events.
Three simulations stand out as satisfying all these criteria. We list them in order of their publication dates.
<cit.> employed two fixed grids, one a small box following the star's orbit, the other a large volume surrounding the black hole. Simultaneous evolution on the two grids was coordinated through a “multipatch" system <cit.>. The programs running on both grids solve the general relativistic hydrodynamics equations; the one responsible for the star adds relativistically-consistent stellar self-gravity. <cit.> considered a 3 M_⊙ star, whose internal structure was taken from a MESA model of a middle-aged main-sequence star, and a 10^5 M_⊙ black hole. The equation of state included LTE radiation pressure, but there was no computation of radiation transfer because, for these parameters, the cooling time was always very long compared to the evolution time. The duration of this simulation was 3t_0.
<cit.> solved the equations of Newtonian hydrodynamics and radiation transport in the flux-limited diffusion approximation on a moving mesh. Stellar self-gravity was computed via a quadrupole moment tree, and the black hole's gravity was approximated as a Paczynski-Wiita potential with a softening term. They chose a 1 M_⊙ star and a 10^6 M_⊙ black hole, but described the internal structure of the star as an n=3/2 polytrope, i.e., an isentropic structure for a gas with an adiabatic index of 5/3. This simulation ran for ≈ 1.4 t_0.
The third <cit.> was produced by an SPH code with relativistic hydrodynamics in a Schwarzschild spacetime and the same equation of state as in <cit.>. Their star's initial structure was the same as in <cit.>, but its self-gravity was calculated as an integral over the Newtonian Green's function for the Poisson Equation with a softening term to smooth short lengthscale fluctuations. The simulation covered a time up to ≃ 9t_0.
Three other simulations are also of interest, but each fails one or two of the criteria. The first <cit.> passed all the criteria but the final one: they investigated the disruption of a 0.64 M_⊙ white dwarf by a 500M_⊙ black hole, choosing these parameters because they reduced the contrast in lengthscales, thereby diminishing the computational cost. Like the work of <cit.>, this simulation treated the problem in terms of fully general relativistic dynamics and included radiation pressure through its LTE contribution to internal energy. It also had the longest duration of all simulations published to date: 13t_0. The second <cit.> combined general relativistic hydrodynamics with Newtonian stellar self-gravity. Its parameters, however, were somewhat special: the star began on a bound orbit with e=0.97, so that its specific orbital energy was actually greater in magnitude than Δ E of the debris. In addition, the star's pericenter was at 7r_ g, so this simulation probed the relatively small phase space associated with strong apsidal precession. Lastly, the duration of the simulation was only ≃ 0.3t_0. Third, <cit.>, much like <cit.>, used Newtonian SPH to generate tidal debris and then general relativistic hydrodynamics to study its further motions for an event in which the stellar pericenter was 7r_ g, but the star was on an effectively parabolic orbit. The duration was only ≈ 0.03t_0. Thus, it, too, cannot be directly applied to the most common events, both because it treated a TDE with an exceptionally small pericenter and because it ran for only a brief time.
§.§ Consensus dynamical results
All four of the simulations studying disruptions with pericenters > 10r_ g found the same principal dynamical features. The most prominent of these are several quasi-standing shocks.
As predicted by <cit.> and <cit.>, the convergence of debris streams whose orbital planes are slightly different creates a shock near the pericenter, dubbed the “nozzle shock". Compared to the orbital speed, this is a weak shock, with speed at early times only ∼ (M_⋆/M_BH)^{1/3} v_orb and therefore able to dissipate only ∼ 10^{-4} m_⋆^{2/3} m_BH,6^{-2/3} of the orbital kinetic energy near pericenter. Although the nozzle shock initially comprises a pair of roughly horizontal shocks, the shock fronts tilt over time. This results in a somewhat greater dissipation efficiency, perhaps reaching, at its greatest, ∼ (M_⋆/M_BH)^{1/3} of the pericenter-region kinetic energy <cit.>. A small fraction of the matter encountering the nozzle shock is deflected inward as the shock redistributes angular momentum (<cit.>; and <cit.> [private communication, G. Lodato]), carrying matter to smaller radii at a rate that is at most ≲ 0.01 M_⋆/t_0. The inward-moving matter does not instantaneously settle into a “normal" circular-orbit accretion disk. Instead, the substantial eccentricity of the deflected matter's orbits leads to further shocks at radii inside pericenter <cit.>. The pericenters of the matter gaining angular momentum move outward. When this matter returns to the pericenter region, it shocks at a larger radius. Consequently, the radial extent of the nozzle shock gradually stretches, reaching ∼ 400 r_g ∼ 4 r_p by t ∼ 3t_0 <cit.>.
Debris that has passed through the nozzle shock swings back out toward an apocenter that is slightly smaller because of the orbital energy dissipated in the shock; its natal specific angular momentum is so small that the angular momentum exchange in the shock hardly affects the apocenter. Because the disruption of the star takes place across a range of distances from the black hole (roughly from r ≃ r_ p to r ≃ 20 r_ p: <cit.>) and because of relativistic effects in both the tidal stress and the orbits, the lines of apsides of the debris orbits stretch across a range of angles ∼ 10^∘ <cit.>. When the range of angles is this small and the orbits are highly eccentric, the streams intersect near apocenter (see also the related geometric argument of <cit.>). Two views of these shocks are presented in Figure <ref>, which highlights shock locations by portraying temperature (see also Fig. 1 of <cit.>).
Unlike the nozzle shock, the angle between the flows shocking against one another near apocenter is large. Consequently, the energy dissipated is comparable to the local orbital kinetic energy. In rough terms, the kinetic energy near apocenter is a fraction 1-e of the kinetic energy near pericenter, and we have already estimated that 1 - e ≃ 0.02 (r_p/r_t) Ξ (m_⋆/m_BH,6)^{1/3}. The energy per unit mass dissipated in an apocenter shock is then ∼ Δ E, comparable to or greater than that of the nozzle shock <cit.>.
The precise location(s) of the apocenter shock(s) change over time. Because the earliest matter to return is the most bound, and therefore has the smallest apocenter, at early times this shock moves outward. At later times, after some material has gone around more than once and lost some orbital energy in the nozzle and apocenter shocks, the apocenter shock moves inward. In addition, as the debris orbits change shape, other shocks form at radii comparable to a_0, but farther from the path of newly-returning debris <cit.>. Throughout these events, matter that is returning for the first time commingles with matter that has already completed one or more orbits; in other words, from very early on, the dynamics are poorly approximated by considering only a stream wrapping around the black hole once and then encountering a newly-arriving stream. This is one of the reasons that encompassing at least the majority of the bound debris is a prerequisite for simulation credibility.
Although there have been suggestions in the literature that the radiation pressure of light emitted from an inner accretion disk <cit.> or arising from “stream-stream" shocks like the apocenter shocks <cit.> could lead to unbinding a significant amount of mass, there is little evidence for this in global simulations; over the first ∼ (1 - 2) t_0, the amount of mass converted from bound to unbound is ≲ 0.01 M_⋆ <cit.>.[<cit.> find an expanding “Eddington envelope" carries ∼ 2/3 of the initially bound debris outward, but do not state what fraction of this envelope is unbound.]
Lastly, despite the expectation in the traditional model that matter is accreted onto the black hole as fast as it returns from its first visit to apocenter, and therefore the possible rate of energy release is highly super-Eddington, the two global simulations with astrophysically-relevant parameters find that radiation forces are generally significant, but rarely exceed the gravitational force, i.e., the flux at most approaches the Eddington level. That this is so is particularly striking in the case treated by <cit.> because the peak mass-return rate is so high that it would yield a heating rate ∼ 5000L_E if the efficiency were ∼ 0.1.
§.§ Consensus structural results
As already stressed in the introduction, the simple question “Where is the tidal debris?" is fundamental to any consideration of its observable phenomenology. When speaking about the bulk of the bound matter that has already returned from its first visit to apocenter, the simulations give a consistent answer: at a radius ∼ a_0. Given the dynamical picture already summarized, it could hardly be anything else: its orbital energy has been diminished by at most a factor of order unity. Because a_0 depends very weakly on both the black hole mass and the star mass (see eqn. <ref>), most of the bound mass—even when it settles into an accretion flow—remains at distances ≳ 2500 r_ g from the black hole.
For example, at t ≃ 1.4t_0 (the endpoint of the <cit.> simulation), ≈ 3/4 of the accretion flow mass is > 5000 r_ g from the black hole in both that simulation (private communication, E. Steinberg) and the simulation of <cit.>. At t=3t_0, the endpoint of the latter simulation, that fraction has hardly changed.
The vertical mass distribution for all the bound mass that has passed through at least one shock is also simply described: it is geometrically thick. As already estimated, the apocenter shocks generically dissipate an energy similar to the net energy of the debris orbits, Δ E. It immediately follows that in energetic terms, the gas is supported as much by pressure (largely radiation) as by rotation.
The characteristic radial scale remains close to a_0, but the characteristic eccentricity changes substantially, dropping to ∼ 0.5. This is due primarily to the orbital energy lost by the gas in all of the shocks, but is aided by the angular momentum gained when the nozzle shock removes angular momentum from the fluid deflected inward and transfers it to the fluid that remains on orbits going out to the r ∼ a_0 region.
Although the great majority of the bound debris stays relatively far from the black hole, small amounts can find their way much closer. Again, the three extant simulations give similar results: at t ≃ 1.4t_0, both <cit.> and <cit.> find ∼ 2 - 5 × 10^{-3} M_⋆ at radii ≲ r_p/2. Both numbers, as well as the fate of this matter, are uncertain due to the inner cut-offs employed by these simulations (see the next subsection for details). Although <cit.> do not quote a figure for the mass close to the black hole at t ∼ t_0, their Figure 2 shows only a very small fraction of the total mass this close even at t ≃ 9t_0.
§.§ Limitations
To close this section, we acknowledge two limitations to the guidance we can derive from the existing global simulations. First, two of them impose unrealistic dynamics near the black hole. <cit.> place an outflow boundary condition at a spherical radius 40r_ g from the black hole's center. This means they cannot say what happens to matter passing through that surface, and they likely overestimate the net rate at which matter crosses the boundary. Rather than place a sharp boundary around the black hole, <cit.> instead force the gravitational potential to increase linearly with radius from the black hole out to 30r_ g, where it switches to a Paczynski-Wiita form. The impact of this policy is to make the dynamics of gas close to the black hole unphysical: fluid elements have the wrong velocity and follow incorrect orbits. The motivation for both policies is to avoid excessively short timesteps. Like <cit.>, <cit.> employ general relativistic hydrodynamics (except for the star's self-gravity), but differ by not requiring any central cut-out. Thus, this simulation may be more reliable than the other two for matter passing close to the black hole.
The second limitation is crude approximations to time-dependent radiation transfer. In the simulations by <cit.> and <cit.>, there is no transfer at all: the radiation is assumed to be in LTE everywhere and at all times. Because the parameters of the <cit.> simulation led to a particularly large ratio of cooling time to evolution time, this was not a bad approximation, but it would break down for larger M_ BH and/or smaller .
Although <cit.> allowed no radiation transport during their hydrodynamical simulation, they estimated the radiated luminosity in post-processing.
To do so, they solved 1D time-steady transfer equations (even though the photon diffusion time was often longer than the evolution timescale) in which they assumed the total opacity had the Thomson scattering value, but was purely absorptive. This procedure likely overestimates the luminosity.
<cit.> solved a time-dependent radiation transfer equation along with the hydrodynamic equations, but in a very simplified form: in the gray flux-limited diffusion approximation, with Thomson scattering opacity. In much of the volume occupied by debris, the density is low enough to support the approximation of Thomson opacity, but in the denser regions it is likely to underestimate significantly the Rosseland mean <cit.>. Moreover, the flux-limited diffusion formalism always directs the flux along the radiation intensity gradient, but this is frequently wrong near and outside the photosphere. Errors of this sort are particularly problematic for radiation-driven outflows and geometrically-complicated photospheres. Given these uncertainties, we note that our estimates below of the energy dissipation rate are more robust than those of the luminosity and temperature. Nonetheless, for the angle-integrated bolometric luminosity, the time-dependent solution of <cit.> should provide the best estimate of the three; unfortunately, it ran for only ≃ 1.4 t_0.
§.§ Summary of simulation results
For many purposes, these results can be summarized very concisely by two key statements. It is worth remarking that their import is in excellent agreement with the results of the simulation reported by <cit.>.
First, by a few t_0, when the majority of the bound mass has returned from its first passage through apocenter, a new structure has been created that is hot, irregular, crudely elliptical, and geometrically thick. It might be called an “accretion flow", but it is very different from a classical “accretion disk". During its formation, the orbits of fluid elements change by angular momentum exchange and energy dissipation in shocks, but once the accretion flow is formed, these shocks diminish in strength.
Second, that ≳ 99% of the debris mass remains ∼ 10^3 - 10^4 r_ g from the black hole at times a few t_0 after disruption immediately implies that “circularization", in which debris mass is placed on roughly circular orbits at radii ∼ 2r_ p from the black hole, can be accomplished on a timescale ∼ t_0 for only a very small fraction of the mass. The accretion rate onto the black hole must therefore be at least two orders of magnitude smaller than the mass fallback rate, in sharp contradiction to the traditional model.
This fact follows from the lack of any mechanism to remove sufficient energy from the debris; both the dissipation of orbital energy into heat and the radiation of heat by photons are far too slow (see also Sec. <ref>) to permit all but a small fraction of the debris mass to settle quickly (on the ∼ t_0 timescale) into orbits near the black hole.[<cit.> extrapolated from the data of their simulation to argue that the total heating by shocks grows exponentially after t ≃ t_0. However, their simulation did not extend long enough to test this extrapolation, and it is not seen in either of the two longer simulations, those of <cit.> and <cit.>.]
0.5cm
§ IMPLICATIONS
The shared results of these simulations regarding the location of the debris have many implications for our understanding of TDEs, both in terms of conceptual framework and specific observational predictions. Remarkably, the simple fact of where the debris mass is deposited leads to strong predictions about many aspects of TDE phenomenology. Moreover, as we are about to show, these predictions are, without any fine-tuning, in agreement with many of the most striking features of these events.
§.§ The energy budget
As commented previously (Sec. <ref>), the distance between the black hole and the debris is directly connected to the energy available to radiate. With nearly all the debris remaining (at times a few t_0 after the disruption) at a distance ∼ a_0, the relevant dissipative efficiency is
η(a_0) ≃ G M_BH/(a_0 c^2) ≃ r_g/a_0 ≃ 4.5 × 10^{-4} Ξ m_BH,6^{1/3} m_⋆^{-0.21} .
The total heat released is then
E_diss ≃ η(a_0) (M_⋆/2) c^2 ≃ 4 × 10^{50} Ξ m_BH,6^{1/3} m_⋆^{0.79} erg,
right in the middle of the observed radiated energy distribution. In fact, the total dissipated energy found by the simulations is closely consistent with this estimate.
It is possible that the ≲ 1% of M_⋆ deflected inside r_p adds to the dissipated energy, but as we will demonstrate in Sec. <ref>, in terms of bolometric luminosity, it can at most augment the light from the bulk of the debris by a factor of order unity. In other words, the three simulations support the suggestion at the end of Sec. <ref> about placing the debris at a distance ∼ a_0 rather than ∼ r_p.
§.§ Emission line widths
Another consequence of our more robust knowledge of the debris' location is that it implies an orbital speed for the debris, and therefore the width of any atomic line features in its spectrum. Although the orbital speed varies around an elliptical orbit, the scale of the speed is nonetheless determined by its semimajor axis:
v_orb ≃ c (r_g/a_0)^{1/2} = 6400 Ξ^{1/2} m_BH,6^{1/6} m_⋆^{-0.11} km s^{-1}
This characteristic speed depends extremely weakly on both M_ BH and M_*. It is in the center of the measured Hα FWHM line-width distribution for TDEs without Bowen NIII lines, and is about half the median FWHM when Bowen NIII lines are present <cit.>.
§.§ Optical depth and the cooling time
The optical depth τ of the debris is, of course, immediately determined by the density distribution.
Because the mass return-rate is ∝ t^{-5/3} after its peak, only a minority of the bound mass has returned to the vicinity of the black hole by a few t_0 after the disruption. In addition, the mass that has returned spends most of its time near its orbital apocenter, ≳ 2a_0 from the black hole. Consequently, the optical depth varies as a function of position. For example, as shown in <cit.>, only a fraction f(a_0) ≃ 15% of the bound mass can be found within a distance a_0 of the black hole at t=t_0. Over the span of radii where most of the mass is located, 0.3a_0 ≲ r ≲ 3a_0, f(r) ∝ r^{1 + ϵ}, where ϵ is small and positive at t=t_0, but small and negative by t=3t_0.
If κ is the gas's Rosseland mean opacity and κ_T is the Thomson opacity, at t=t_0 the characteristic vertical optical depth to the midplane of a circular disk with radius ∼ a_0 is
τ_0 ≈ 80 [f(a_0)/0.15] Ξ^2 (κ/κ_T) m_BH,6^{-4/3} m_⋆^{0.55}.
Because a_0 ≈ 3.5 × 10^{14} Ξ^{-1} m_BH,6^{2/3} m_⋆^{0.22} cm,
the basic dynamics of the bound debris automatically create a photosphere on a radial scale ∼ 10^{14.4 ± 0.5} cm, just as inferred from the radiating area associated with the optical/UV blackbody. However, it is important to note that, as shown by the simulations, the photosphere can be far from spherical, as the density distribution is both flattened and nonaxisymmetric <cit.>.
In disk geometry, the photon diffusion time is given by (τ + 1) h/c, where h is the disk's vertical scaleheight and τ is the optical depth across a distance h <cit.>. Because the radiation energy density is generally larger than the gas thermal energy, the characteristic cooling time for the debris within a distance r of the black hole is then
t_cool ≈
10 [f(r)/0.15] (h/r) (a_0/r) (κ/κ_T) m_BH,6^{-2/3} m_⋆^{0.77} Ξ days .
In units of t_0, it is <cit.>:
t_cool/t_0 ≈ f (h/r) G^{1/2} κ M_⋆^{8/3} Ξ^{5/2} / (2^{3/2} π^2 c M_BH^{7/6} R_*^{5/2})
≈ 0.26 (f/0.15) (h/r) (κ/κ_T) Ξ^{5/2} m_BH,6^{-7/6} m_⋆^{0.47} .
The simulation results are consistent with this estimate. For example, <cit.> find that their proxy for the volume-integrated dissipation rate varies in a way not too different from the luminosity estimated by the flux-limited diffusion approximation (essentially equivalent to our t_ cool estimate), but can occasionally depart from it by as much as a factor ∼ 10.
Fortuitously, t_cool/t_0 ≈ 1 for our fiducial parameters. Because, as we have already mentioned, <cit.> found that f(r) ∝ r^{1 + ϵ}, the characteristic cooling time inside the photosphere varies only slowly with radius, ∝ r^ϵ. When m_BH,6 ≳ 1, t_cool < t_0, so photon losses respond quickly to the heating rate throughout the debris and the luminosity matches the total heating rate. Within a time ∼ t_0, radiation can vent enough heat to let the flow settle closer to its equatorial plane. On the other hand, for encounters with a relatively low-mass black hole, the photon diffusion time is long relative to the heating time, making cooling inefficient. In addition, in this slow-cooling regime, the internal transport of radiation is not in a steady-state.
In other words, over a time ∼ t_0, the gas retains much of its heat content—i.e., its heat content evolves adiabatically—until close to the time its orbit carries it through the photosphere or it encounters another shock.
Because radiation transport in the slow-cooling regime is time-dependent, simple estimates of the luminosity, generally based on time-steady transfer, are subject to significant uncertainty. However, in this context, a simple estimate of the cooling time of individual fluid elements combined with the adiabaticity of the slow-cooling regime suggests that the luminosity of a given fluid element is held approximately constant as it moves around an orbit <cit.>. Estimated in this way, the luminosity released on a radial scale r is L(r) ∼ P_ rad r^2 c/τ. The radiation pressure varies ∝ r^-4 when the volume of a moving fluid element is ∝ r^3, as it is here due to the rough constancy of h/r, while τ for a specific fluid element is ∝ r^-2. It follows that L(r) is approximately independent of r. Additionally, as we will show in the next subsection, the heating rate in TDEs links this luminosity to the Eddington luminosity.
§.§ Flare luminosity, effective temperature, and flare duration: cooling fast and slow
§.§.§ Luminosity
The mass distribution also determines the luminosity. As argued in Sec. <ref>, the depth of the potential well specifies the energy available; the luminosity follows from combining the total energy with the heating time (effectively ∼ t_0) and the cooling time (as just estimated).
Rather than being the usual relativistic radiative efficiency times the mass fallback rate, ∼η(10 r_ g)Ṁ_ peak c^2, the rate at which energy is dissipated is L_ peak,diss∼ E_ diss,peak/t_0 ∼ (r_ g /a_0) Ṁ_ peak c^2.[Note that the f(r) factor appearing in the optical depth is not relevant here because this quantity relates to the rate at which mass returns, not to how much resides within a given radius.]
The heating efficiency for the bulk of the debris is much smaller than the canonical relativistic accretion value because the gravitational potential where the apocenter shocks take place is shallower by a factor ∼ 10r_ g /a_0.
Consequently, instead of the peak dissipation rate being extremely super-Eddington, ∼ 2 × 10^{46} Ξ^{3/2} m_BH,6^{-1/2} m_⋆^{0.66} erg s^{-1}, it is
L_diss,peak ≃ η(a_0) M_⋆ c^2/(3 t_0) ≃ 8.6 × 10^{43} Ξ^{5/2} m_BH,6^{-1/6} m_⋆^{0.47} erg s^{-1}
≃ 0.6 Ξ^{5/2} m_BH,6^{-7/6} m_⋆^{0.47} L_E(M_BH).
This much lower luminosity is greater than Eddington only for black holes with masses ≲ 5 × 10^5 m_⋆^{0.38} M_⊙, and even then it exceeds Eddington by much less than the expectation based on relativistic efficiency.
To translate the rate at which energy is dissipated into a luminosity requires consideration of both the cooling time (see eqn. <ref>) and the duration of the heating, which is ∼ t_0 for the bulk of the bound debris during the flare.
The radiated luminosity L_ rad = L_ diss when the cooling time is shorter than the time, i.e., t_0, in which the heat is dissipated. On the other hand, when t_ cool > t_0, the radiated luminosity is reduced by a factor ∼ t_0/t_ cool relative to the heating rate L_ diss.
The peak luminosity (i.e., at t=t_0, when <cit.> found the cylindrical radius of the photosphere is at r ≈ a_0) can then be written in a combined form that approximates both the slow and fast cooling regimes:
L_ rad,peak = L_ diss,peak/(1 + t_ cool/t_0)
≃ 8.6 × 10^43 Ξ^5/2 m_ BH,6^-1/6 m_*^0.47/[1 + 0.26 (f/0.15)(h/r) Ξ^5/2 m_ BH,6^-7/6 m_*^0.47 (κ/κ_ T)] erg s^-1.
In the limit of t_0 ≪ t_ cool, the peak luminosity is limited to roughly the Eddington luminosity and is therefore independent of M_⋆ (see Appendix <ref>):
L_ rad,peak≃ 2 L_E/[(f/0.15) (h/r)(κ/κ_ T)] for t_0 ≪ t_ cool.
Note that L_ rad,peak may be reduced further if the density in the debris is high enough for the Rosseland mean opacity to exceed the Thomson value. This adjustment could be important because, at the expected temperature range (∼ 1 - 3 × 10^4 K), atomic features can significantly augment electron scattering when the density is ≳ 10^-10 gm cm^-3 (see the opacity calculations of <cit.>).
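The suppression of the radiated luminosity in the slow-cooling regime can be folded into the sketch above with a few more lines; here t_ cool/t_0 is taken directly from the denominator of the combined expression, and the defaults f = 0.15, h/r = 0.5, and κ = κ_ T are assumed fiducial values.

```python
def t_cool_over_t0(m6, mstar, Xi=1.0, f=0.15, h_over_r=0.5, kappa_ratio=1.0):
    """Cooling time in units of t_0 (the denominator term of L_rad,peak above)."""
    return 0.26 * (f / 0.15) * h_over_r * kappa_ratio * Xi**2.5 * m6**(-7.0 / 6.0) * mstar**0.47

def L_rad_peak(m6, mstar, Xi=1.0, **kwargs):
    """Peak radiated luminosity [erg/s]: the dissipation rate throttled by (1 + t_cool/t_0)."""
    L_diss = 8.6e43 * Xi**2.5 * m6**(-1.0 / 6.0) * mstar**0.47  # L_diss,peak from above
    return L_diss / (1.0 + t_cool_over_t0(m6, mstar, Xi, **kwargs))

# In the slow-cooling limit (t_cool >> t_0) this saturates near the Eddington value;
# in the fast-cooling limit it returns L_diss,peak itself.
print(f"{L_rad_peak(1.0, 1.0):.2e} erg/s")  # ~8e43 erg/s for m6 = mstar = Xi = 1
```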
The net result for the dependence of L_ rad,peak on M_ BH and M_⋆ is shown in Figure <ref>.
In terms of the absolute flare luminosity (left panel), the greater mass of debris available from more massive stars often creates a higher radiative output.
For this reason, the highest luminosities generally correspond to the largest values of M_⋆.
However, this no longer is the case when the larger amount of mass also means a greater optical depth and longer photon diffusion time—which occurs when the black hole mass is smaller. This is why the peak luminosity is smaller, even for the most massive stars, when M_ BH is comparatively small, and hardly depends at all on M_⋆ when m_ BH,6≲ 0.5 and m_*≳ 1.
On the other hand, larger black hole mass leads to a longer time over which radiation of nearly the same amount of energy takes place, driving down the peak luminosity for large M_ BH. In the end, the highest luminosity is produced by moderate mass black holes (M_ BH∼ 1 - 5 × 10^6 M_⊙) and stars of the highest mass. The least luminosity occurs for particularly high-mass black holes and low-mass stars. Thus, for low M_ BH the optical/UV peak luminosity is ∝ M_ BH and almost independent of M_⋆, while for higher black hole mass, the luminosity increases rapidly with M_⋆ and decreases slowly with M_ BH. The division between “low" and “high" M_ BH corresponds to the peak luminosity for any particular M_⋆, a value ranging from ∼ 1 × 10^43 erg s^-1 for M_⋆∼ 0.1 M_⊙ to ∼ 1 × 10^45 erg s^-1 for M_⋆∼ 10 M_⊙.
These predicted peak luminosities are closely consistent with the peak luminosity estimated directly from simulation data (on the basis of local cooling times by <cit.>, time-dependent radiation transfer in the flux-limited diffusion approximation by <cit.>, and instantaneous shock-heating by <cit.>). In all cases, the peak luminosity is predicted by our approximate expression to be ≃ 1 × 10^44 erg s^-1 and this coincides with the simulation-based estimates. Note that the parameters of the disruption treated by <cit.> and <cit.> were the same (M_⋆ = 1M_⊙, M_ BH = 1 × 10^6 M_⊙), whereas <cit.> studied a case with M_⋆=3 M_⊙ and M_ BH = 1 × 10^5 M_⊙.
A small fraction of optical/UV TDEs shows still higher luminosities, reaching as high as 5 × 10^45 erg s^-1 <cit.>. As explained in Sec. <ref>, very luminous events are overrepresented in a magnitude-limited sample, so their real fraction is significantly smaller than the fraction observed. We speculate that these are the rare TDEs in which the star plunges to r_ p≲ 10r_ g, a regime in which general relativistic apsidal precession is substantial and changes the character of TDEs, likely yielding higher luminosity <cit.>.
The relationship between luminosity, stellar mass, and black hole mass is simplified considerably when the luminosity is measured in Eddington units (Fig. <ref>, right panel). In the very long cooling time limit, the photon output is regulated to be several times L_ E: this regime is found for almost all M_⋆ when the black hole mass is small. For sufficiently high black hole mass, L_ rad,peak/L_ E falls steadily but begins that fall at higher M_ BH when M_⋆ is greater.
§.§.§ Effective temperature
To zeroth order, we may expect a thermal spectrum for the radiation from the photosphere of the debris, although a variety of stellar-atmosphere features are likely. Combining our estimate of L_ diss,peak with our estimate of the radiating area (2π a_0^2), we arrive at a characteristic temperature applicable to the fast-cooling limit <cit.>:
T_ peak = (L_ rad,peak/[2π a_0^2 (1 + 2h/r) σ_ SB])^1/4
≃ 3.9 × 10^4 [(1 + 2h/r)(1 + t_ cool/t_0)]^-1/4 Ξ^9/8 m_ BH,6^-3/8 K.
Just as it did for the luminosity, a shift of radial scale for the debris from ∼ r_ p to ∼ a_0 brings the temperature into the range actually observed (cf. eqn. <ref>).
Thus, there is a strong prediction that the temperature is a few × 10^4 K, nearly independent of the star's mass or whether the debris cools rapidly or slowly—and this is exactly the temperature generally observed.
The principal parameter-dependence is on M_ BH, which gives somewhat cooler temperatures for larger M_ BH.
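A corresponding sketch for the characteristic temperature (again our illustration; the defaults h/r = 0.5 and t_ cool/t_0 = 0 are assumptions standing in for the fast-cooling case):

```python
def T_peak(m6, Xi=1.0, h_over_r=0.5, t_cool_over_t0=0.0):
    """Characteristic black-body temperature of the flare photosphere [K].
    The stellar mass enters only through Xi (see the Appendix)."""
    geom = (1.0 + 2.0 * h_over_r) * (1.0 + t_cool_over_t0)
    return 3.9e4 * geom**-0.25 * Xi**(9.0 / 8.0) * m6**(-3.0 / 8.0)

print(f"{T_peak(1.0):.3g} K")  # ~3 x 10^4 K for m6 = 1
```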
§.§.§ Duration
In the rapid-cooling regime, the optical/UV lightcurve should track the rate at which the debris' internal energy would increase in the absence of cooling. This rate is driven by the mass return rate, but its decline following the peak in the mass return rate is more gradual, as fluid elements continue to suffer shocks as they orbit around the black hole. Because the placement and strength of these shocks depend on details of the flow, the degree to which the decline is slower than the decline in the mass return rate could vary from case to case. Nonetheless, in this regime the duration of the flare should generally be a few t_0.
On the other hand, the peak luminosity of slowly-cooling shocked debris (i.e., the debris in events for which
m_ BH,6≪ 0.3 [(f/0.15) (h/r) (κ/κ_ T)]^6/7 Ξ^15/7 m_*^0.4)
is locked to its Eddington luminosity. Because, in addition, the total dissipated energy available for radiation varies extremely weakly with both M_ BH and M_⋆, the optical/UV lightcurve for slowly-cooling events should have a flat peak lasting
≃ 0.5 (h/r)(κ/κ_ T) Ξ m_ BH,6^-2/3 m_*^0.78 months,
followed by a decline once the thermal energy remaining can no longer support this luminosity. Note, however, that this limit usually applies when the black hole mass is small, making the duration typically a few months.
Such a prediction was also made, for somewhat different reasons, in <cit.>[A number of the properties of this model overlap with ours: most of the bound debris residing on scales ∼ a_0 for a long time, the large geometrical thickness of the flow, and, when t_ cool > t_0, the roughly Eddington luminosity. The two models differ principally in that bulk kinetic energy and irregular elliptical gas flows are important in our model, but play no part in <cit.>.].
Thus, the relationship between the duration of the flare peak and t_0 is somewhat indirect. For rapidly-cooling events, the period of high dissipation rate, and therefore maximum luminosity, lasts for ∼ t_0 and then declines, perhaps ∝ t^-5/3 and perhaps more slowly. For slowly-cooling events, the Eddington-limited phase lasts for ≃ 0.4 Ξ^5/6 m_ BH,6^-1/2 m_*^0.47(f/0.15)(h/r)(κ/κ_T) t_0. For the smaller black holes associated with this limit, this phase has a duration ≳ t_0.
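The two regimes and the duration of the Eddington-limited phase can be summarized in the same spirit (an illustrative sketch with the same assumed fiducial f, h/r, and κ):

```python
def is_slow_cooling(m6, mstar, Xi=1.0, f=0.15, h_over_r=0.5, kappa_ratio=1.0):
    """True if the event is in the slow-cooling, Eddington-limited regime."""
    return m6 < 0.3 * ((f / 0.15) * h_over_r * kappa_ratio)**(6.0 / 7.0) * Xi**(15.0 / 7.0) * mstar**0.4

def eddington_phase_duration(m6, mstar, Xi=1.0, f=0.15, h_over_r=0.5, kappa_ratio=1.0):
    """Duration of the flat, Eddington-limited peak, in units of t_0 (slow-cooling events)."""
    return 0.4 * Xi**(5.0 / 6.0) * m6**-0.5 * mstar**0.47 * (f / 0.15) * h_over_r * kappa_ratio

# A rapidly-cooling event simply tracks the heating and lasts a few t_0.
```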
The brightest part of the flare ends once the system has cooled significantly. At this point, the accretion flow settles down to a geometrically-thin eccentric disk with a size ∼ 10^3 r_ g and an eccentricity ∼ 0.5. Further radiation during the time from ∼ 3 t_0 to ∼ 10t_0 comes from tapping the remaining heat in the gas or the emergence of reprocessed light initially generated by promptly-deflected matter. The latter may be significant because <cit.> found that hydrodynamic effects led to an accretion rate onto the black hole declining rather slowly, roughly ∝ t^-0.7 out to t ∼ 9t_0.
§.§ Scaling of flare properties with M_⋆ and M_ BH
In the previous subsections we have shown how the placement of the debris mass at distances ∼ a_0 leads to estimates of a variety of properties observable during the optical/UV flare. All of these estimates depend, to varying degrees, on M_⋆ and M_ BH. Part of the M_⋆ and M_ BH dependence is derived from traditional TDE order-of-magnitude Newtonian estimates, but part comes from the results of detailed simulations informed by main-sequence internal density profiles for the stars and general relativistic effects dependent on M_ BH.
Here we will: describe how interesting observable properties depend on the stellar and black hole mass, incorporating both kinds of dependence; and discuss how these relations can be inverted to infer M_⋆ and M_ BH from observations.
§.§.§ Total radiated energy
The total energy dissipated during the flare E_ diss,peak∝Ξ m_ BH,6^1/3 m_*^0.78 (see eqn. <ref>). However, its actual scaling with m_ BH,6 is roughly ∝ m_ BH,6^1/4 rather than ∝ m_ BH,6^1/3 because Ξ declines with increasing m_ BH,6.
On the other hand, Ξ increases with M_⋆.
Consequently, the total radiated optical/UV energy during the flare's peak is a weakly rising function of M_ BH, but rises somewhat more rapidly with M_⋆.
§.§.§ Peak optical/UV luminosity
When cooling is rapid (t_ cool < t_0), equation <ref> gives the stellar and black hole mass-dependence of the emitted luminosity: ∝Ξ^5/2 m_ BH,6^-1/6 m_*^0.44. Although the explicit dependence of L_ rad,peak on M_ BH is very weak, the implicit dependence through the Ξ^5/2 factor makes it a gradually declining function of black hole mass, roughly ∝ M_ BH^-3/8. On the other hand, the Ξ^5/2 factor significantly strengthens the increasing trend with M_⋆, so that L_ rad,peak∝ m_*^1.8 and rises by a factor ∼ 60 from m_* = 0.1 to m_* = 3.
On the other hand, when cooling is slow, the peak luminosity is Eddington-limited, so L_ rad,peak∝ M_ BH and is independent of M_⋆.
§.§.§ Temperature
As already remarked, the observed temperature is extremely insensitive to either M_⋆ or M_ BH. For rapidly-cooling events, the temperature rises slightly with increasing M_⋆ through the function Ξ's dependence on the stellar mass, while it declines gradually (∝ m_ BH,6^-3/8) with increasing M_ BH. The temperature of slowly-cooling events depends extremely weakly on the masses: ∝Ξ^1/2 m_ BH,6^-1/12 m_*^-0.11.
§.§.§ Duration
As discussed in Sec. <ref> and originally pointed out in <cit.> (see its Fig. 8a), the increase of Ξ with M_⋆ causes t_0 to be almost independent of M_⋆. For this reason, the timescale of a flare is a very weak indicator of M_⋆. It is, on the other hand, ∝ M_ BH^0.6. Unfortunately, the persistence of shock heating for several t_0 means that the period of greatest luminosity is a few t_0. Quantitative prediction of the flare duration therefore demands a better calibration of “a few" than the simulations to date can give.
§.§.§ Parameter inference
Parameter inference is, of course, most precise when the data's dependence on the parameter is strong. The assembled scalings above show that the only truly strong dependence is that of the peak luminosity on M_⋆ (L_ rad,peak∝ m_*^1.8) when the event is in the rapid-cooling regime.
Because the radiative efficiency of the debris shocks is much smaller than the canonical relativistic radiative efficiency, events of typical luminosity can be readily explained as associated with stars with m_*∼ 1, rather than as very low-mass stars as has sometimes been suggested <cit.>. In fact, because L_ rad,peak∝ m_*^1.8 when rapid-cooling applies, despite the larger numbers of low-mass stars, the frequency of low-mass star events in flux-limited samples should be rather low (see Sec. <ref> and Fig. <ref>).
The next strongest dependence is the linear proportionality of L_ rad,peak to M_ BH in the slow-cooling limit. Regrettably, because slow-cooling applies for smaller M_ BH, such events are harder to detect and less likely to appear in flux-limited samples.
In a previous paper on this topic <cit.>, we presented a method (called TDEmass) for using measurements of L_ rad,peak and T_ peak to infer M_⋆ and M_ BH. In that work we did not include the effect of slow cooling, so we argued that T_ peak∝ m_ BH,6^-3/8 for all cases; allowing for slow cooling drastically weakens that dependence so that the temperature in that limit is very nearly independent of M_ BH.
To use this method on data, we have incorporated into the existing Python code implementing TDEmass a new algorithm to solve Eq. <ref>, rather than Eq. <ref>; the code is available on github (https://github.com/taehoryu/TDEmass.git). We encourage interested readers to download an updated version incorporating slow-cooling effects.
§.§ Late-time evolution
Over longer timescales, years or more, nearly all the bound mass should eventually accrete. If most accretes with high radiative efficiency, the total energy emitted over long timescales could be as large as ∼ 10^53 erg, several hundred times the energy emitted during the flare. However, this could take a very long time. On the other hand, the radiated energy is reduced to the extent that some of the gas has especially low angular momentum; such gas can plunge ballistically, passing through the event horizon without significant dissipation <cit.>.
If inflow is driven by angular momentum transport, it requires internal stresses, which are generally due to correlated MHD turbulence. Building the turbulence from scratch takes ∼ 5 - 10 orbital periods, i.e., ∼ 5 - 10 t_0, whether the orbits are circular or elliptical <cit.>. Even after the turbulence reaches nonlinear saturation, the characteristic inflow time is ∼ 10 (r/h)^2 orbital periods, where h/r is the disk aspect ratio.
Thus, one might expect <cit.> that the timescale on which most of the debris accretes is t_ late≳ 2 × 10^3 (h/0.1r)^-2 t_0, which translates to
t_ late∼ 200 Ξ^-3/2 (h/0.1r)^-2 m_ BH,6^1/2 m_*^0.82 yr
and luminosity (if the accretion is radiatively efficient)
L_ late∼ 10^43 Ξ^3/2 (h/0.1r)^2 m_ BH,6^-1/2 m_*^0.18 erg s^-1.
Here our fiducial h/r = 0.1 rather than 0.5 because cooling should lead to a diminishing scale height.
Even though the MHD-mediated inflow is slow at the large distance where most of the mass initially resides, there is so much more of it than the mass that went quickly into a small disk that the luminosity generated at late times is down by only a single order of magnitude from the peak:
L_ late/L_ rad,peak≈ 0.1 Ξ (h/0.1r)^2 m_ BH,6^-1/3 m_*^-0.26.
Moreover, this ratio is only weakly dependent on M_⋆ and M_ BH, and L_ late should also be rather slowly-varying because it takes many years to deplete the mass reservoir on the r ∼ a_0 scale.
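For the late-time plateau, the same exercise can be carried out (our sketch, with the cooled-disk aspect ratio h/r = 0.1 assumed as in the text):

```python
def t_late_yr(m6, mstar, Xi=1.0, h_over_r=0.1):
    """Timescale [yr] for MHD-driven accretion of the bulk of the debris."""
    return 200.0 * Xi**-1.5 * (h_over_r / 0.1)**-2 * m6**0.5 * mstar**0.82

def L_late(m6, mstar, Xi=1.0, h_over_r=0.1):
    """Late-time luminosity [erg/s], assuming radiatively efficient accretion."""
    return 1.0e43 * Xi**1.5 * (h_over_r / 0.1)**2 * m6**-0.5 * mstar**0.18

# e.g. L_late(1, 1) ~ 1e43 erg/s, roughly 0.1 of the peak, sustained for ~t_late
```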
However, there may be another source of time-dependence. As shown by <cit.>, outward angular momentum transport in eccentric disks can accelerate accretion by making their inner regions even more eccentric. This effect might tilt the late-time lightcurve, raising the luminosity at the start of MHD-driven accretion and diminishing it somewhat at still later times.
It should also be noted that once MHD-mediated accretion begins (at t ≳ 5 - 10 t_0), the hydrodynamical mechanisms studied by <cit.> may not be good predictors of the inflow rate.
Thus, the optical/UV lightcurve can be expected to flatten out on roughly year-timescales, plateauing at a level ∼ 0.1× the peak (for rapidly-cooling events).
In fact, the lightcurves of many TDEs do flatten out at times ∼ 1 yr past peak, and when they do, the most common value of L_ late/L_ rad,peak is ∼ 0.1 <cit.>.
It is also important to note that the total energy radiated at late times, ∼ 3 × 10^50 Ξ^3/2 (h/0.1r)^2 m_ BH,6^-1/2 m_*^0.18 (Δ t/1 yr) erg, is comparable to the total energy radiated during the flare. This fact, in itself, demonstrates that the majority of the debris mass cannot have been brought to smaller radii during the flare.
As a final remark, we emphasize that the initial deposition of nearly all the debris mass on the ∼ a_0 scale implies directly that the accretion flow in TDEs is very far from inflow equilibrium—in other words, the accretion rate varies substantially with distance from the black hole—everywhere except possibly very near the black hole (see Sec. <ref>).
§.§ Matter deflected to small radii
§.§.§ Luminosity
As mentioned in Secs. <ref> and <ref>, a small amount of debris is deflected inward by the nozzle shock. By this means, ≲ 1% of the debris can, in fact, quickly enter an accretion flow with a radial scale comparable to the stellar pericenter. The inflow time this close to the black hole should be much smaller than t_0, even after allowing for the time necessary for the MHD turbulence to reach a saturated amplitude, because the orbital period is only ∼ (M_⋆/M_ BH)^1/2 t_0 ∼ 10^-3 (m_*/m_ BH,6)^1/2 t_0. As a result, the accretion rate through this small disk should be very close to the inward-deflection rate.
At t ∼ 1 - 3t_0, <cit.> found the deflection rate to be Ṁ_ defl∼ 0.01 M_⋆/t_0, about twice the peak black hole accretion rate found by <cit.>, ∼ 0.005 M_⋆/t_0.
If the radiative efficiency is relativistic, the associated luminosity could be as large as
L_ defl≃ 2.5 × 10^44 (η_ rel/0.1) (Ṁ_ defl/0.005 M_⋆/t_0) Ξ^3/2 m_ BH,6^-1/2 m_*^0.66 erg s^-1.
Thus, the luminosity from the inner disk could be comparable to the luminosity from the shocked debris at much greater distance from the black hole; it would then also be mildly super-Eddington, particularly for smaller black holes and larger stars.
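Taking the quoted deflection rate at face value, the inner-disk luminosity can be sketched the same way (our illustration; the relativistic efficiency and the deflected fraction are the explicitly uncertain inputs):

```python
def L_defl(m6, mstar, Xi=1.0, eta_rel=0.1, mdot_defl=0.005):
    """Luminosity [erg/s] of the promptly-deflected inner disk, if radiatively efficient.
    mdot_defl is the deflected mass rate in units of Mstar/t_0."""
    return 2.5e44 * (eta_rel / 0.1) * (mdot_defl / 0.005) * Xi**1.5 * m6**-0.5 * mstar**0.66

# Compare with L_E = 1.26e44 * m6 erg/s: mildly super-Eddington for small m6 and large mstar.
```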
Taken at face value, L_ defl declines with M_ BH and increases with M_⋆, and the explicit mass-scalings for both are steepened by the trends in Ξ. However, there are numerous caveats and uncertainties attached to this estimate. The mass deflection rate itself remains poorly determined, and its dependence on m_* and m_ BH,6 is unknown.
In addition, there are several ways the radiative efficiency of the deflected gas could be less than the canonical relativistic efficiency of 0.1. As mentioned briefly at the end of Sec. <ref>, the debris orbits have energy much greater than that of a circular orbit with their angular momentum. To join a small, circular orbit therefore requires the loss of a large amount of energy, and the simulations do not exhibit a mechanism for this. If the debris orbits lose a small amount of their angular momentum, the associated gas can plunge directly across the event horizon without having lost any of its kinetic energy to dissipation <cit.>. Even if the angular momentum of the deflected debris is large enough for it to go into orbit around the black hole, it may still be small enough that Reynolds stresses suffice to push the gas onto a plunging orbit; there may be associated dissipation, but too little time to radiate <cit.>. Even if there is dissipation and photons are emitted, high optical depth may suppress photon escape <cit.> or reprocess the radiation, as we will discuss in the next subsection.
§.§.§ Spectrum
If the accreting matter close to the black hole during the flare period radiates a thermal spectrum, its temperature should be considerably higher than the light radiated by the bulk of the debris, but lower than if all the debris mass were promptly placed in such a small disk (cf. eqn. <ref>):
T_ defl ≈ (L_ defl/[2πσ_ SB (10r_ g )^2])^1/4≈ 7.5 × 10^5 (η_ rel/0.1)^1/4×
(Ṁ_ defl/0.005 M_⋆/t_0)^1/4 Ξ^5/8 m_ BH,6^-5/8 m_*^0.16 K.
The characteristic energy is therefore ∼ 50 - 100 eV, i.e., soft X-rays <cit.>; the systematic uncertainties expressed in the scaling factors for η_ rel and Ṁ_ defl remain, but compressed by the 1/4 power relative to their influence on L_ defl. Higher-mass black holes should tend to have softer X-ray spectra because, in addition to its explicit dependence on M_ BH, T_ defl also decreases with higher M_ BH through Ξ(M_ BH).
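The associated photon energies follow directly from this temperature (our illustration; only the standard Boltzmann constant in eV K^-1 is assumed):

```python
K_B_EV = 8.617e-5  # Boltzmann constant [eV/K]

def T_defl(m6, mstar, Xi=1.0, eta_rel=0.1, mdot_defl=0.005):
    """Thermal temperature [K] of the small inner disk (radius ~10 r_g)."""
    return (7.5e5 * (eta_rel / 0.1)**0.25 * (mdot_defl / 0.005)**0.25
            * Xi**(5.0 / 8.0) * m6**(-5.0 / 8.0) * mstar**0.16)

print(f"kT ~ {K_B_EV * T_defl(1.0, 1.0):.0f} eV")  # ~65 eV, i.e. soft X-rays
```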
It is interesting to compare this prediction to the observed spectra of steadily-accreting supermassive black holes, i.e., AGN. For the non-blazar varieties, the spectral band contributing the most to the bolometric luminosity is the FUV <cit.>. That thermal emission from the inner rings produces FUV rather than soft X-rays may be understood by writing the luminosity as a fraction ṁ of Eddington and supposing that it is radiated over the same area as used to estimate T_ defl; the result is T_ disk≃ 7 × 10^5 ṁ^1/4 m_ BH,6^-1/4 K.
The lower disk temperatures in AGN can then be explained by noting that the black hole masses in AGN are often at least an order of magnitude (and sometimes several orders of magnitude) greater than 10^6 M_⊙, and their Eddington-scaled accretion rates can be an order of magnitude below unity.
On the other hand, there is a stronger inconsistency with AGN X-ray spectra. In the typical case, the X-ray luminosity is ∼ O(10^-1)× the bolometric luminosity, and takes the form of a fairly hard power-law (L_ϵ∝ϵ^-α with 0.5 ≲α≲ 1) with a high-energy cut-off at ∼ 50 - 200 keV. Such power-laws are only rarely seen in TDEs, and when they are, it is relatively late in the event <cit.> or when jets are involved <cit.>. One might therefore speculate that higher ṁ suppresses hard X-ray emission, a thought that receives some support from simulational studies <cit.> as well as observations made of ∼ 10M_⊙ black holes <cit.>.
When we see both the FUV and the coronal X-rays from AGN, our line of sight to the central regions must be unobscured;
the UV continuum from TDEs can be seen directly because it is generated on the outside of the debris. By contrast, soft X-rays from an inner disk in a TDE can suffer large amounts of obscuration by the surrounding debris. The optical depth of the main body of debris is quite large: as already estimated in Equation <ref>, electron scattering alone contributes an optical depth ∼ 300 Ξ^2 m_ BH,6^-4/3 m_*^0.55. If the gas's ionization balance is near LTE, atomic absorption processes like ionization of He^+ should significantly add to the opacity for photons with energy ∼ 50 - 200 eV, while K-shell photoionization of C, N, and O create substantial opacity for photons with energy ∼ 200 - 1000 eV. Closer to the black hole, where both the density and the temperature are higher, He^+ ionization contributes to the opacity wherever the temperature is ≲ 3 × 10^5 K. Consequently, soft X-rays radiated from near the black hole would be entirely absorbed if they pass through the bound debris (in agreement with <cit.>, who found that, averaged over solid angle, the X-ray luminosity was only ∼ 10^-2× the optical/UV luminosity).
Note that in this sense, reprocessing of photons radiated closer to the black hole can be important. However, it acts upon a far smaller luminosity than originally envisioned. This sort of reprocessing also contrasts with other reprocessing scenarios in that here the gas doing the reprocessing is the bulk of the bound debris, which remains at distances from the black hole ∼ 10^3 r_ g, rather than the debris made unbound at the time of disruption <cit.> or a wind driven by super-Eddington radiation <cit.>.
In addition, whether the light is scattered or absorbed and reprocessed, the large optical depth also imposes a delay between when the initial photons are radiated and when they (or the photons into which their energy is transformed) reach the photosphere. Because, as we have already shown, the photon diffusion time through the outer regions of the flow can be significant compared to t_0, the inner-region diffusion time can only make the total escape time longer.
Only within the small solid angle clear of obscuration would soft X-rays be observable at the time of the flare peak. However, their consequences may be observable from a much larger solid angle because they have energies large enough to photoionize a number of medium-Z elements to unusually high ionization states. Many of these ions have transitions linking their ground states to states only a few eV higher in energy, making the corresponding photons easily observable.
The Bowen lines frequently, but not always, seen in TDEs <cit.> are examples: soft X-rays ionize He^+; recombination generates the HeII Lyα line; these line photons are absorbed by near-resonances in NIII and OIII (species created by UV photons below the HeII edge); collisional excitation generates optical/NUV lines from these ions. Because the gas has much lower absorption opacity in the optical/NUV, these emission lines can (absent obscuration farther out in the galaxy) be seen even when the X-ray photons are strongly obscured. Moreover, the lengthy diffusion time for these X-rays can create delays between the peak of the optical/UV flare and the appearance of the Bowen emission lines <cit.>; these delays may be augmented if the processes leading to X-ray radiation begin later than the shocks supplying the power for the optical/UV luminosity.
Depending on the ionization state of the debris gas near the edge of the optically-thin cone, reflection from the Thomson-thick debris may enhance the X-ray flux seen by observers with lines-of-sight within the cone. Because of the several varieties of photoionization, the cone albedo is likely to have several ionization edges imprinted upon it. At later times, when the debris has cooled, the debris opening angle widens, making higher-energy photons from near the black hole visible to a larger fraction of distant observers. It is possible that by the time this occurs, the spectrum of X-rays radiated near the black hole may be harder.
Somewhat fortuitously (because the dynamics and initial conditions are quite different), in respect to obscuration properties the picture we have just described resembles qualitatively that put forward by <cit.> and <cit.>. In the calculations reported in these papers, the gas density was assumed to be axisymmetric, to decline vertically so as to yield a constant aspect ratio h/r = 0.3, and to fall with increasing radius ∝ r^-1.3 all the way to r=8500 r_ g. The duration of the simulation, 20,000 r_ g /c≃ 1.4 × 10^-6 Ξ^3/2 m_ BH,6^1/2 m_*^-0.82 t_0, was far too short for any evolution in the density at r ≳ 50r_ g.[Even though they assumed the debris contained far more magnetic flux than a star might have held, the inflow time at these radii was still far longer than the duration of their simulation.] Consequently, the majority of the debris mass was found at r ≳ 3000 r_ g and had an aspect ratio only slightly smaller than the one arising in the global simulations. Thus, these papers' predictions having to do with obscuration during the flare peak are similar to ours.
§.§ Detected event rates
The considerations presented so far have interesting implications for the rate of observed TDEs. Almost every astronomical survey is, in some way, flux-limited. When the objects of interest exhibit a range of luminosities, this automatically entails a statistical bias for those with larger luminosity. If the sources have a uniform spatial density, the number of sources in the sample is proportional to the volume out to which the sources can be detected; for non-cosmological extragalactic surveys (the case relevant to TDEs), the detected population is then ∝ L^3/2. Because the peak luminosity for TDEs is, in the end, determined by where the debris is placed after returning to the black hole, even the relative rates of events involving different stellar and black hole masses are consequences of where the debris mass goes.
As we have seen in Figure <ref>, the range of predicted luminosities across the most likely portion of parameter space spans a factor ∼ 10^3. The most-luminous events are therefore over-represented relative to the least-luminous by a factor ∼ 3 × 10^4. The sense of this over-representation is to favor events with larger M_⋆ and intermediate M_ BH (as shown in Fig. <ref>).
In the most naive model of the intrinsic TDE population, the probability of a TDE is independent of M_⋆, so that the number of events is proportional to the IMF, but cut off at the maximum mass that has yet to leave the main sequence. The intrinsic population would therefore be dominated by low-mass stars. On the other hand, as we have just seen, we expect high-mass stars to yield more luminous events. The net effect on the observed population is to strongly limit the predominance of low-mass stars. Assuming a stellar mass distribution dN/dM_⋆∝ M_⋆^-2.35 (i.e., a young stellar population created with the Salpeter IMF)
and that the intrinsic event rate is independent of M_ BH (consistent with the predictions of <cit.>, <cit.>, and <cit.> for this range of black hole masses), the resulting relative rates (per log and log M_ BH) are shown in Figure <ref>.
Despite the sharp decline with M_⋆ of the Salpeter IMF, the luminosity bias places the highest event detection rate at M_⋆∼ 1 - 1.5M_⊙ and M_ BH∼ 0.5 - 2× 10^6 M_⊙. The lowest rates, both a factor ∼ 3 × 10^-3× the peak rate, are found at low stellar mass and high black hole mass (M_⋆≲ 0.5M_⊙ and M_ BH≳ 3 × 10^7M_⊙), and at high stellar mass and very low black hole mass: M_⋆≳ 10 M_⊙ and M_ BH≲ 2 × 10^5 M_⊙. Note that the rate for all M_⋆ and M_ BH≳ 10^7 M_⊙ shown in Figure <ref> is likely overestimated because we have not included the relativistic effects that magnify the fraction of events leading to direct capture <cit.>.
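The origin of this bias is easy to see with a one-line estimate restricted to the rapid-cooling regime, using the local scaling L_ rad,peak∝ m_*^1.8 quoted above together with a Salpeter IMF (the full calculation behind Figure <ref> uses the complete L(M_⋆, M_ BH), including slow cooling; this sketch is only illustrative):

```python
def relative_detection_rate(mstar):
    """Detected events per unit log(mstar), up to a constant, in the rapid-cooling regime."""
    intrinsic_per_log_mass = mstar**-2.35 * mstar     # Salpeter dN/dM times M
    flux_limit_bias = (mstar**1.8)**1.5               # detectable volume ~ L^{3/2}
    return intrinsic_per_log_mass * flux_limit_bias   # net: ~ mstar**1.35, rising with mstar

for m in (0.1, 0.3, 1.0, 3.0):
    print(f"mstar = {m:4.1f}:  relative rate ~ {relative_detection_rate(m):.2f}")
```

Even though the IMF falls steeply, the detection rate rises with stellar mass in this regime, which is why solar-and-above mass stars dominate flux-limited samples.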
§ SUMMARY
In the past year, three global TDE simulations in which events with astrophysically-realistic parameters were studied (M_⋆∼ 1 M_⊙ and M_ BH∼ 10^5 - 10^6 M_⊙) have been published. Although they used different hydrodynamic algorithms (a fixed-grid intrinsically-conservative method, a moving-mesh intrinsically-conservative method, and SPH), in many respects their results are very consistent. In particular, all three find that after a few characteristic orbital times for the debris, the overwhelming majority of the debris mass—≳ 99%—is found at radii ∼ 3 × 10^3 r_ g. Thus, this fact is now a robust prediction of the basic physics of these events, subject to alteration only for extreme parameter choices (e.g., stellar pericenters ≲ 10r_ g, for which strong apsidal precession would change the dynamics <cit.>).
Because the location of the mass directly implies many physical quantities: orbital binding energy, orbital timescale, photospheric area, etc., the robust agreement on where the mass goes leads to a similarly robust agreement in their predictions of important observable properties. Although in all three simulations the specific parameters chosen were selected only for being within the plausible range and for computational convenience, the simulations' predictions for the principal observables associated with the optical/UV flare lie squarely in the middle of the observed distribution <cit.>: total energy radiated a few × 10^50 erg, peak optical/UV luminosity ∼ 1 × 10^44 erg s^-1, and temperature of a black-body component ≃ 3 × 10^4 K. The last prediction is particularly striking because it has almost no dependence on any of the parameters.
In addition, the fact that nearly all the mass is deposited at such a large distance from the black hole directly implies that its subsequent accretion will stretch over a long period of time. It immediately follows that after the flare, the luminosity should decline over a period of several times the characteristic debris orbital timescale until it reaches rough constancy at a level ∼ 0.1 L_ peak. This prediction, too, is only weakly dependent on parameters and is nonetheless in excellent agreement with observations. In fact, because the total energy radiated on several year-timescales is generically comparable to the energy radiated during the flare <cit.>, observations demand a situation in which accretion of a large part of the debris mass takes place slowly.
All three simulations predict similar luminosities because the underlying mechanism is also similar: shocks dissipating ∼ 10^-3 of the debris' rest-mass energy on a timescale ∼ t_0. For this reason, all three likewise share a scaling relationship for how the luminosity depends on M_⋆ and M_ BH. Because the accessible volume for flux-limited surveys is ∝ L^3/2, the rate at which events are detected in such surveys is ∝ L(M_⋆,M_ BH)^3/2∂^2 N_ TDE/∂ M_⋆∂ M_ BH. Thus, the results of these simulations predict how much events with specific pairs of M_⋆ and M_ BH are favored (or disfavored) for appearance in surveys, relative to their true rate. Interestingly, the rate enhancement is within a factor ∼ 3 of its greatest amount for 0.6≲ M_⋆/M_⊙≲ 6 and 2 × 10^5 ≲ M_ BH/M_⊙≲ 1 × 10^7 if the intrinsic rate is proportional to the stellar IMF and independent of M_ BH.
All the independent estimates of SMBH masses in the sample of <cit.> are in this range, as are 28 of the 33 independently estimated SMBH masses in the <cit.> compilation.
All three simulations also agree that the fraction of the debris pushed quickly to radii within the stellar pericenter is ≲ 1%. Even though this is a very small fraction, it may possibly contribute to the bolometric luminosity of the flare at a level comparable to the bulk of the debris; its orbital binding energy is ∼ O(10^2)× that of the bulk. This material may be the source of the soft X-rays sometimes seen in TDEs, and reprocessing of these soft X-rays at larger distances by the bulk of the debris may lead to the Bowen emission lines often seen. However, none of the simulations done so far has treated the inner region with sufficient care to determine either its immediate radiation properties or, at a quantitative level, how much obscuration and reprocessing the emitted light may suffer en route to distant observers.
Thus, the basic mechanics of the most common variety of events is now understood. Over timescales of a few t_0, very nearly all the debris retains its original orbital energy to within a factor ∼ O(1); in so doing, it remains a distance ∼ a_0 from the black hole. The optical/UV flare results from radiating the energy that is dissipated. This matter approaches closer to the black hole only over much longer (years +) timescales. A small fraction (≲ 1%) of the mass moves inward more rapidly. It should also be emphasized, however, that rarer varieties of TDEs—those with pericenters small enough to create large apsidal precession <cit.>, or large enough to produce a partial disruption <cit.>, or those in which a massive disk already orbits the black hole <cit.>, for example—may behave differently.
We acknowledge support
from National Science Foundation (NSF) grants AST-2009260 and PHY-2110339
(JHK).
We also received support from the European Research Council Advanced Grant “MultiJets" and grant MP-SCMPS-00001470 from the Simons Foundation
to the Simons Collaboration on Extreme Electrodynamics of Compact Sources - SCEECS (TP).
§ APPENDIX - RELATIVISTIC CORRECTION FACTORS
Two correction factors are important in estimating the dependence of observational properties on the stellar and the black hole masses. The first factor determines the relation between the real tidal radius R_T and the order of magnitude estimate r_ t:
R_T = Ψ(M_⋆,M_ BH) r_ t <cit.>.
Ψ(m_*, m_ BH,6) = [0.80 + 0.26 m_ BH,6]^0.5
×{1.47 + exp[(m_* - 0.669)/0.137]}/{1 + 2.34 exp[(m_* - 0.669)/0.137]}.
Ψ is a function of m_*, and possibly the star's age and chemical composition, through the dependence of the star's internal density profile on these factors. Taken as a function of m_* alone, it declines smoothly from ≈ 1.5 for very small m_* to ≈ 0.43 for all m_*≳ 1. For a typical MS star encountering a black hole with M_ BH = 10^6 M_⊙, R_T ≈ 25 r_ g, almost independent of the stellar mass <cit.>. This “physical tidal radius" is also a function of M_ BH through the impact of general relativistic corrections; with r_ t several tens of gravitational radii, they can be substantial <cit.>: the portion of Ψ dependent upon M_ BH grows from ≈ 1.8 for m_ BH,6≈ 10 to ≈ 5 for m_ BH,6 = 100.
The second correction factor Ξ(M_⋆,M_ BH) defines the change in energy of the debris, Δ E, relative to the fiducial change Δ E_0:
Ξ(m_*, m_ BH,6) ≡ [1.27 - 0.3 m_ BH,6^0.242]
×{0.62 + exp[(m_* - 0.67)/0.21]}/{1 + 0.55 exp[(m_* - 0.67)/0.21]}.
Like the factor Ψ, Ξ is a function of stellar mass, with additional possible dependence on stellar age and chemical composition.
The function Ξ(m_*) is almost opposite in behavior to Ψ(m_*): Ξ rises from an asymptote ≃ 0.66 for m_*≲ 0.3 M_⊙ to an asymptote at ≈ 1.8 for m_*≳ 1.3 M_⊙.
Ξ also depends upon M_ BH, particularly when m_ BH,6≳ 1, because of general relativistic effects <cit.>: as a function of M_ BH, Ξ falls from ≃ 1.3 in the Newtonian limit to ≈ 0.75 when m_ BH,6 = 10 and to ≈ 0.3 at m_ BH,6 = 30, beyond which the fitting formula no longer applies. Figure <ref> illustrates how Ξ is greatest for large m_* and small m_ BH,6 and least for small m_* and large m_ BH,6.
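For convenience, the two fitting functions can be transcribed into a short Python helper; the coefficients below follow the expressions as written above and should be checked against <cit.> before quantitative use. The function names are ours.

```python
import math

def Psi(mstar, m6):
    """R_T / r_t: converts the order-of-magnitude tidal radius into the physical one."""
    bh_factor = (0.80 + 0.26 * m6)**0.5
    e = math.exp((mstar - 0.669) / 0.137)
    return bh_factor * (1.47 + e) / (1.0 + 2.34 * e)

def Xi(mstar, m6):
    """Delta E / Delta E_0: corrects the characteristic debris energy."""
    bh_factor = 1.27 - 0.3 * m6**0.242   # fit valid only up to m6 ~ 30
    e = math.exp((mstar - 0.67) / 0.21)
    return bh_factor * (0.62 + e) / (1.0 + 0.55 * e)

# Limits quoted in the text: the stellar factor of Psi -> ~1.5 (small mstar), ~0.43 (mstar >~ 1);
# the stellar factor of Xi -> ~0.66 (small mstar), ~1.8 (mstar >~ 1.3).
print(f"Xi(1, 1) = {Xi(1.0, 1.0):.2f}")  # ~1.4 for a solar-mass star and a 1e6 Msun black hole
```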
§ APPENDIX - THE EDDINGTON LUMINOSITY LIMIT
That the luminosity is limited to approximately Eddington is, in fact, a general result when a gas is heated to nearly the virial temperature (in the sense of associated photon energy density per unit mass), and is then left to radiatively cool in an environment whose opacity is ∼κ_T <cit.>. This may be seen from a very simple argument. If U_ rad/ρ∼ GM /r in a homogeneous sphere, the luminosity is roughly
L ∼ (GM M_g/r) (c/r) [1 + 3κ_T M_g/(4π r^2)]^-1,
where M is the central mass and r is radius of the sphere. The optical depth τ_T (r)∼ (3/4π)κ_T M_g/r^2. Using the definition L_E = 4π c GM/κ_T, this approximation to the luminosity becomes
L ∼ L_E/[3 (1 + 1/τ_T(r))].
Thus, the luminosity is ∼ L_E τ_T when τ_T < 1 and rises to ∼ L_E when τ_T > 1.
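In code form, the whole argument reduces to a single expression (illustration only):

```python
def L_over_L_E(tau_T):
    """Escaping luminosity of a virially-heated, Thomson-scattering sphere, in Eddington units."""
    return 1.0 / (3.0 * (1.0 + 1.0 / tau_T))

# ~ tau_T * L_E / 3 for tau_T << 1, saturating near L_E / 3 for tau_T >> 1
```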
[Alexander et al.(2016)Alexander, Berger, Guillochon,
Zauderer, & Williams]Alexander2016
Alexander, K. D., Berger, E., Guillochon, J., Zauderer, B. A., &
Williams, P. K. G. 2016, , 819, L25,
10.3847/2041-8205/819/2/L25
[Alexander et al.(2020)Alexander, van Velzen, Horesh, &
Zauderer]Alexander2020
Alexander, K. D., van Velzen, S., Horesh, A., & Zauderer, B. A. 2020,
, 216, 81, 10.1007/s11214-020-00702-w
[Andalman et al.(2022)Andalman, Liska, Tchekhovskoy,
Coughlin, & Stone]Andalman+2022
Andalman, Z. L., Liska, M. T. P., Tchekhovskoy, A., Coughlin, E. R., &
Stone, N. 2022, , 510, 1627, 10.1093/mnras/stab3444
[Auchettl et al.(2017)Auchettl, Guillochon, &
Ramirez-Ruiz]Auchettl2017
Auchettl, K., Guillochon, J., & Ramirez-Ruiz, E. 2017, , 838, 149,
10.3847/1538-4357/aa633b
[Avara et al.(2023)Avara, Krolik, Campanelli, Noble,
Bowen, & Ryu]Avara+2023
Avara, M. J., Krolik, J. H., Campanelli, M., et al. 2023, arXiv
e-prints, arXiv:2305.18538, 10.48550/arXiv.2305.18538
[Barniol Duran & Piran(2013)]Barniol2013
Barniol Duran, R., & Piran, T. 2013, , 770, 146,
10.1088/0004-637X/770/2/146
[Bellm et al.(2019)Bellm, Kulkarni, Graham, Dekany,
Smith, Riddle, Masci, Helou, Prince, Adams, Barbarino,
Barlow, Bauer, Beck, Belicki, Biswas, Blagorodnova, Bodewits,
Bolin, Brinnel, Brooke, Bue, Bulla, Burruss, Cenko, Chang,
Connolly, Coughlin, Cromer, Cunningham, De, Delacroix, Desai,
Duev, Eadie, Farnham, Feeney, Feindt, Flynn, Franckowiak,
Frederick, Fremling, Gal-Yam, Gezari, Giomi, Goldstein,
Golkhou, Goobar, Groom, Hacopians, Hale, Henning, Ho, Hover,
Howell, Hung, Huppenkothen, Imel, Ip, Ivezić, Jackson,
Jones, Juric, Kasliwal, Kaspi, Kaye, Kelley, Kowalski,
Kramer, Kupfer, Landry, Laher, Lee, Lin, Lin, Lunnan,
Giomi, Mahabal, Mao, Miller, Monkewitz, Murphy, Ngeow,
Nordin, Nugent, Ofek, Patterson, Penprase, Porter, Rauch,
Rebbapragada, Reiley, Rigault, Rodriguez, van Roestel, Rusholme,
van Santen, Schulze, Shupe, Singer, Soumagnac, Stein, Surace,
Sollerman, Szkody, Taddia, Terek, Van Sistine, van Velzen,
Vestrand, Walters, Ward, Ye, Yu, Yan, & Zolkower]ZTF
Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, , 131,
018002, 10.1088/1538-3873/aaecbe
[Ben-Ami et al.(2022)Ben-Ami, Shvartzvald, Waxman, Netzer,
Yaniv, Algranatti, Gal-Yam, Lapid, Ofek, Topaz, Arcavi, Asif,
Azaria, Bahalul, Barschke, Bastian-Querner, Berge, Berlea,
Buehler, Dittmar, Gelman, Giavitto, Guttman, Haces Crespo,
Heilbrunn, Kachergincky, Kaipachery, Kowalski, Kulkarni, Kumar,
Küsters, Liran, Miron-Salomon, Mor, Nir, Nitzan, Philipp,
Porelli, Sagiv, Schliwinski, Sprecher, De Simone, Stern, Stone,
Trakhtenbrot, Vasilev, Watson, & Zappon]Ben-Ami+2022
Ben-Ami, S., Shvartzvald, Y., Waxman, E., et al. 2022, in Society of
Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 12181,
Space Telescopes and Instrumentation 2022: Ultraviolet to Gamma Ray, ed.
J.-W. A. den Herder, S. Nikzad, & K. Nakazawa, 1218105,
10.1117/12.2629850
[Berger et al.(2012)Berger, Zauderer, Pooley, Soderberg,
Sari, Brunthaler, & Bietenholz]Berger2012
Berger, E., Zauderer, A., Pooley, G. G., et al. 2012, , 748, 36,
10.1088/0004-637X/748/1/36
[Blagorodnova et al.(2019)Blagorodnova, Cenko, Kulkarni,
Arcavi, Bloom, Duggan, Filippenko, Fremling, Horesh,
Hosseinzadeh, Karamehmetoglu, Levan, Masci, Nugent, Pasham,
Veilleux, Walters, Yan, & Zheng]Blagorodnova+2019
Blagorodnova, N., Cenko, S. B., Kulkarni, S. R., et al. 2019, ,
873, 92, 10.3847/1538-4357/ab04b0
[Blanton et al.(2017)Blanton, Bershady, Abolfathi,
Albareti, Allende Prieto, Almeida, Alonso-García, Anders,
Anderson, Andrews, Aquino-Ortíz, Aragón-Salamanca,
Argudo-Fernández, Armengaud, Aubourg, Avila-Reese, Badenes,
Bailey, Barger, Barrera-Ballesteros, Bartosz, Bates, Baumgarten,
Bautista, Beaton, Beers, Belfiore, Bender, Berlind, Bernardi,
Beutler, Bird, Bizyaev, Blanc, Blomqvist, Bolton, Boquien,
Borissova, van den Bosch, Bovy, Brandt, Brinkmann, Brownstein,
Bundy, Burgasser, Burtin, Busca, Cappellari, Delgado Carigi,
Carlberg, Carnero Rosell, Carrera, Chanover, Cherinka, Cheung,
Gómez Maqueo Chew, Chiappini, Choi, Chojnowski, Chuang,
Chung, Cirolini, Clerc, Cohen, Comparat, da Costa, Cousinou,
Covey, Crane, Croft, Cruz-Gonzalez, Garrido Cuadra, Cunha,
Damke, Darling, Davies, Dawson, de la Macorra, Dell'Agli, De
Lee, Delubac, Di Mille, Diamond-Stanic, Cano-Díaz, Donor,
Downes, Drory, du Mas des Bourboux, Duckworth, Dwelly, Dyer,
Ebelke, Eigenbrot, Eisenstein, Emsellem, Eracleous, Escoffier,
Evans, Fan, Fernández-Alvar, Fernandez-Trincado, Feuillet,
Finoguenov, Fleming, Font-Ribera, Fredrickson, Freischlad,
Frinchaboy, Fuentes, Galbany, Garcia-Dias,
García-Hernández, Gaulme, Geisler, Gelfand,
Gil-Marín, Gillespie, Goddard, Gonzalez-Perez, Grabowski,
Green, Grier, Gunn, Guo, Guy, Hagen, Hahn, Hall, Harding,
Hasselquist, Hawley, Hearty, Gonzalez Hernández, Ho, Hogg,
Holley-Bockelmann, Holtzman, Holzer, Huehnerhoff, Hutchinson,
Hwang, Ibarra-Medel, da Silva Ilha, Ivans, Ivory, Jackson,
Jensen, Johnson, Jones, Jönsson, Jullo, Kamble, Kinemuchi,
Kirkby, Kitaura, Klaene, Knapp, Kneib, Kollmeier, Lacerna,
Lane, Lang, Law, Lazarz, Lee, Le Goff, Liang, Li, Li,
Lian, Lima, Lin, Lin, Bertran de Lis, Liu, de Icaza Lizaola,
Long, Lucatello, Lundgren, MacDonald, Deconto Machado, MacLeod,
Mahadevan, Geimba Maia, Maiolino, Majewski, Malanushenko,
Malanushenko, Manchado, Mao, Maraston, Marques-Chaves, Masseron,
Masters, McBride, McDermid, McGrath, McGreer, Medina Peña,
Melendez, Merloni, Merrifield, Meszaros, Meza, Minchev,
Minniti, Miyaji, More, Mulchaey, Müller-Sánchez, Muna,
Munoz, Myers, Nair, Nandra, Correa do Nascimento, Negrete,
Ness, Newman, Nichol, Nidever, Nitschelm, Ntelis, O'Connell,
Oelkers, Oravetz, Oravetz, Pace, Padilla, Palanque-Delabrouille,
Alonso Palicio, Pan, Parejko, Parikh, Pâris, Park, Patten,
Peirani, Pellejero-Ibanez, Penny, Percival, Perez-Fournon,
Petitjean, Pieri, Pinsonneault, Pisani, Poleski, Prada,
Prakash, Queiroz, Raddick, Raichoor, Barboza Rembold, Richstein,
Riffel, Riffel, Rix, Robin, Rockosi, Rodríguez-Torres,
Roman-Lopes, Román-Zúñiga, Rosado, Ross, Rossi, Ruan,
Ruggeri, Rykoff, Salazar-Albornoz, Salvato, Sánchez, Aguado,
Sánchez-Gallego, Santana, Santiago, Sayres, Schiavon, da Silva
Schimoia, Schlafly, Schlegel, Schneider, Schultheis, Schuster,
Schwope, Seo, Shao, Shen, Shetrone, Shull, Simon, Skinner,
Skrutskie, Slosar, Smith, Sobeck, Sobreira, Somers, Souto,
Stark, Stassun, Stauffer, Steinmetz, Storchi-Bergmann,
Streblyanska, Stringfellow, Suárez, Sun, Suzuki, Szigeti,
Taghizadeh-Popp, Tang, Tao, Tayar, Tembe, Teske, Thakar,
Thomas, Thompson, Tinker, Tissera, Tojeiro, Hernandez Toledo, de
la Torre, Tremonti, Troup, Valenzuela, Martinez Valpuesta,
Vargas-González, Vargas-Magaña, Vazquez, Villanova, Vivek,
Vogt, Wake, Walterbos, Wang, Weaver, Weijmans, Weinberg,
Westfall, Whelan, Wild, Wilson, Wood-Vasey, Wylezalek, Xiao,
Yan, Yang, Ybarra, Yèche, Zakamska, Zamora, Zarrouk,
Zasowski, Zhang, Zhao, Zheng, Zheng, Zhou, Zhou, Zhu,
Zoccali, & Zou]SDSS
Blanton, M. R., Bershady, M. A., Abolfathi, B., et al. 2017, , 154,
28, 10.3847/1538-3881/aa7567
[Bloom et al.(2011)Bloom, Giannios, Metzger, Cenko,
Perley, Butler, Tanvir, Levan, O'Brien, Strubbe, De Colle,
Ramirez-Ruiz, Lee, Nayakshin, Quataert, King, Cucchiara,
Guillochon, Bower, Fruchter, Morgan, & van der Horst]Bloom+2011
Bloom, J. S., Giannios, D., Metzger, B. D., et al. 2011, Science, 333,
203, 10.1126/science.1207150
[Bonnerot et al.(2021)Bonnerot, Lu, &
Hopkins]LuBonnerot2021
Bonnerot, C., Lu, W., & Hopkins, P. F. 2021, , 504, 4885,
10.1093/mnras/stab398
[Bricman & Gomboc(2020)]BricmanGomboc2020
Bricman, K., & Gomboc, A. 2020, , 890, 73,
10.3847/1538-4357/ab6989
[Burrows et al.(2011)Burrows, Kennea, Ghisellini, Mangano,
Zhang, Page, Eracleous, Romano, Sakamoto, Falcone, Osborne,
Campana, Beardmore, Breeveld, Chester, Corbet, Covino,
Cummings, D'Avanzo, D'Elia, Esposito, Evans, Fugazza, Gelbord,
Hiroi, Holland, Huang, Im, Israel, Jeon, Jeon, Jun, Kawai,
Kim, Krimm, Marshall, P. Mészáros, Negoro, Omodei,
Park, Perkins, Sugizaki, Sung, Tagliaferri, Troja, Ueda,
Urata, Usui, Antonelli, Barthelmy, Cusumano, Giommi, Melandri,
Perri, Racusin, Sbarufatti, Siegel, & Gehrels]Burrows+2011
Burrows, D. N., Kennea, J. A., Ghisellini, G., et al. 2011, , 476,
421, 10.1038/nature10374
[Cendes et al.(2023)Cendes, Berger, Alexander, Chornock,
Margutti, Metzger, Wieringa, Bietenholz, Hajela, Laskar, Stroh,
& Terreran]Cendes2023
Cendes, Y., Berger, E., Alexander, K. D., et al. 2023, arXiv e-prints,
arXiv:2308.13595, 10.48550/arXiv.2308.13595
[Cenko et al.(2012)Cenko, Krimm, Horesh, Rau, Frail,
Kennea, Levan, Holland, Butler, Quimby, Bloom, Filippenko,
Gal-Yam, Greiner, Kulkarni, Ofek, Olivares E., Schady,
Silverman, Tanvir, & Xu]Cenko+2012
Cenko, S. B., Krimm, H. A., Horesh, A., et al. 2012, , 753, 77,
10.1088/0004-637X/753/1/77
[Chan et al.(2020)Chan, Piran, & Krolik]Chan2020
Chan, C.-H., Piran, T., & Krolik, J. H. 2020, , 903, 17,
10.3847/1538-4357/abb776
[Chan et al.(2021)Chan, Piran, & Krolik]Chan2021
—. 2021, , 914, 107, 10.3847/1538-4357/abf0a7
[Chan et al.(2022)Chan, Piran, & Krolik]Chan+2022
—. 2022, , 933, 81, 10.3847/1538-4357/ac68f3
[Chan et al.(2023)Chan, Piran, & Krolik]Chan+2023
—. 2023, arXiv e-prints, arXiv:2312.06775, 10.48550/arXiv.2312.06775
[Chan et al.(2019)Chan, Piran, Krolik, &
Saban]Chan2019
Chan, C.-H., Piran, T., Krolik, J. H., & Saban, D. 2019, , 881,
113, 10.3847/1538-4357/ab2b40
[Charalampopoulos et al.(2022)Charalampopoulos, Leloudas,
Malesani, Wevers, Arcavi, Nicholl, Pursiainen, Lawrence,
Anderson, Benetti, Cannizzaro, Chen, Galbany, Gromadzki,
Gutiérrez, Inserra, Jonker, Müller-Bravo, Onori, Short,
Sollerman, & Young]Charalam+2022
Charalampopoulos, P., Leloudas, G., Malesani, D. B., et al. 2022, ,
659, A34, 10.1051/0004-6361/202142122
[Dai et al.(2015)Dai, McKinney, & Miller]Dai+2015
Dai, L., McKinney, J. C., & Miller, M. C. 2015, , 812, L39,
10.1088/2041-8205/812/2/L39
[Dai et al.(2018)Dai, McKinney, Roth, Ramirez-Ruiz, &
Miller]Dai+2018
Dai, L., McKinney, J. C., Roth, N., Ramirez-Ruiz, E., & Miller,
M. C. 2018, , 859, L20, 10.3847/2041-8213/aab429
[Eftekhari et al.(2018)Eftekhari, Berger, Zauderer,
Margutti, & Alexander]Eftekhari2018
Eftekhari, T., Berger, E., Zauderer, B. A., Margutti, R., &
Alexander, K. D. 2018, , 854, 86, 10.3847/1538-4357/aaa8e0
[Evans & Kochanek(1989)]EvansKochanek1989
Evans, C. R., & Kochanek, C. S. 1989, , 346, L13,
10.1086/185567
[Gezari(2021)]Gezari2021
Gezari, S. 2021, , 59, 21, 10.1146/annurev-astro-111720-030029
[Guolo et al.(2024)Guolo, Gezari, Yao, van Velzen,
Hammerstein, Cenko, & Tokayer]Guolo+2024
Guolo, M., Gezari, S., Yao, Y., et al. 2024, , 966, 160,
10.3847/1538-4357/ad2f9f
[Hammerstein et al.(2023)Hammerstein, van Velzen, Gezari,
Cenko, Yao, Ward, Frederick, Villanueva, Somalwar, Graham,
Kulkarni, Stern, Andreoni, Bellm, Dekany, Dhawan, Drake,
Fremling, Gatkine, Groom, Ho, Kasliwal, Karambelkar, Kool,
Masci, Medford, Perley, Purdum, van Roestel, Sharma, Sollerman,
Taggart, & Yan]Hammerstein+2023
Hammerstein, E., van Velzen, S., Gezari, S., et al. 2023, , 942, 9,
10.3847/1538-4357/aca283
[Hawley & Krolik(2001)]HK2001
Hawley, J. F., & Krolik, J. H. 2001, , 548, 348,
10.1086/318678
[Hirose et al.(2014)Hirose, Blaes, Krolik, Coleman, &
Sano]Hirose+2014
Hirose, S., Blaes, O., Krolik, J. H., Coleman, M. S. B., & Sano, T.
2014, , 787, 1, 10.1088/0004-637X/787/1/1
[Horesh et al.(2021)Horesh, Cenko, & Arcavi]Horesh2021
Horesh, A., Cenko, S. B., & Arcavi, I. 2021, Nature Astronomy, 5, 491,
10.1038/s41550-021-01300-8
[Jonker et al.(2020)Jonker, Stone, Generozov, van Velzen,
& Metzger]Jonker+2020
Jonker, P. G., Stone, N. C., Generozov, A., van Velzen, S., &
Metzger, B. 2020, , 889, 166, 10.3847/1538-4357/ab659c
[Kaiser et al.(2002)Kaiser, Aussel, Burke, Boesgaard,
Chambers, Chun, Heasley, Hodapp, Hunt, Jedicke, Jewitt,
Kudritzki, Luppino, Maberry, Magnier, Monet, Onaka, Pickles,
Rhoads, Simon, Szalay, Szapudi, Tholen, Tonry, Waterson, &
Wick]PANSTARRS
Kaiser, N., Aussel, H., Burke, B. E., et al. 2002, in Society of
Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4836,
Survey and Other Telescope Technologies and Discoveries, ed. J. A. Tyson &
S. Wolff, 154–164, 10.1117/12.457365
[Kesden(2012)]Kesden2012
Kesden, M. 2012, , 85, 024037, 10.1103/PhysRevD.85.024037
[Khorunzhev et al.(2022)Khorunzhev, Sazonov, Medvedev,
Gilfanov, Atapin, Belinski, Vozyakova, Dodin, Safonov,
Tatarnikov, Bikmaev, Burenin, Dodonov, Eselevich, Zaznobin,
Krivonos, Uklein, Postnov, & Sunyaev]Khorunzhev2022
Khorunzhev, G. A., Sazonov, S. Y., Medvedev, P. S., et al. 2022,
Astronomy Letters, 48, 767, 10.1134/S1063773723010036
[Kinch et al.(2021)Kinch, Schnittman, Noble, Kallman, &
Krolik]Kinch+2021
Kinch, B. E., Schnittman, J. D., Noble, S. C., Kallman, T. R., &
Krolik, J. H. 2021, , 922, 270, 10.3847/1538-4357/ac2b9a
[Kochanek(1994)]Kochanek+1994
Kochanek, C. S. 1994, , 422, 508, 10.1086/173745
[Krolik et al.(2020)Krolik, Piran, & Ryu]Krolik+2020
Krolik, J., Piran, T., & Ryu, T. 2020, , 904, 68,
10.3847/1538-4357/abc0f6
[Krolik et al.(2016)Krolik, Piran, Svirski, &
Cheng]Krolik+2016
Krolik, J., Piran, T., Svirski, G., & Cheng, R. M. 2016, , 827,
127, 10.3847/0004-637X/827/2/127
[Krolik(1999)]Krolik1999
Krolik, J. H. 1999, Active Galactic Nuclei. From the Central Black Hole to
the Galactic Environment
[Krolik(2010)]Krolik2010
—. 2010, , 709, 774, 10.1088/0004-637X/709/2/774
[Law-Smith et al.(2020)Law-Smith, Coulter, Guillochon,
Mockler, & Ramirez-Ruiz]Law-Smith+2020
Law-Smith, J. A. P., Coulter, D. A., Guillochon, J., Mockler, B., &
Ramirez-Ruiz, E. 2020, , 905, 141, 10.3847/1538-4357/abc489
[Leloudas et al.(2019)Leloudas, Dai, Arcavi, Vreeswijk,
Mockler, Roy, Malesani, Schulze, Wevers, Fraser, Ramirez-Ruiz,
Auchettl, Burke, Cannizzaro, Charalampopoulos, Chen, Cikota,
Della Valle, Galbany, Gromadzki, Heintz, Hiramatsu, Jonker,
Kostrzewa-Rutkowska, Maguire, Mandel, Nicholl, Onori, Roth,
Smartt, Wyrzykowski, & Young]Leloudas+2019
Leloudas, G., Dai, L., Arcavi, I., et al. 2019, , 887, 218,
10.3847/1538-4357/ab5792
[Levan et al.(2011)Levan, Tanvir, Cenko, Perley,
Wiersema, Bloom, Fruchter, de Ugarte Postigo, O'Brien, Butler,
van der Horst, Leloudas, Morgan, Misra, Bower, Farihi,
Tunnicliffe, Modjaz, Silverman, Hjorth, Thöne, Cucchiara,
Cerón, Castro-Tirado, Arnold, Bremer, Brodie, Carroll,
Cooper, Curran, Cutri, Ehle, Forbes, Fynbo, Gorosabel,
Graham, Hoffman, Guziy, Jakobsson, Kamble, Kerr, Kasliwal,
Kouveliotou, Kocevski, Law, Nugent, Ofek, Poznanski, Quimby,
Rol, Romanowsky, Sánchez-Ramírez, Schulze, Singh, van
Spaandonk, Starling, Strom, Tello, Vaduvescu, Wheatley, Wijers,
Winters, & Xu]Levan+2011
Levan, A. J., Tanvir, N. R., Cenko, S. B., et al. 2011, Science, 333,
199, 10.1126/science.1207143
[Liu et al.(2024)Liu, Yarza, & Ramirez-Ruiz]Liu+2024
Liu, C., Yarza, R., & Ramirez-Ruiz, E. 2024, arXiv e-prints,
arXiv:2406.01670, 10.48550/arXiv.2406.01670
[Lu & Bonnerot(2020)]LuBonnerot2020
Lu, W., & Bonnerot, C. 2020, , 492, 686,
10.1093/mnras/stz3405
[Malyali et al.(2024)Malyali, Rau, Bonnerot, Goodwin,
Liu, Anderson, Brink, Buckley, Merloni, Miller-Jones, Grotova,
& Kawka]Malyali+2024
Malyali, A., Rau, A., Bonnerot, C., et al. 2024, ,
10.1093/mnras/stae927
[Matsumoto & Piran(2021)]Matsumoto2021
Matsumoto, T., & Piran, T. 2021, , 507, 4196,
10.1093/mnras/stab2418
[McKernan et al.(2022)McKernan, Ford, Cantiello, Graham,
Jermyn, Leigh, Ryu, & Stern]McKernan+2022
McKernan, B., Ford, K. E. S., Cantiello, M., et al. 2022, , 514,
4102, 10.1093/mnras/stac1310
[Metzger(2022)]Metzger2022
Metzger, B. D. 2022, , 937, L12, 10.3847/2041-8213/ac90ba
[Metzger & Stone(2016)]MetzgerStone2016
Metzger, B. D., & Stone, N. C. 2016, , 461, 948,
10.1093/mnras/stw1394
[Mockler & Ramirez-Ruiz(2021)]MockR-R2021
Mockler, B., & Ramirez-Ruiz, E. 2021, , 906, 101,
10.3847/1538-4357/abc955
[Mummery et al.(2024)Mummery, van Velzen, Nathan, Ingram,
Hammerstein, Fraser-Taliente, & Balbus]Mummery+2024
Mummery, A., van Velzen, S., Nathan, E., et al. 2024, , 527,
2452, 10.1093/mnras/stad3001
[Nicholl et al.(2022)Nicholl, Lanning, Ramsden, Mockler,
Lawrence, Short, & Ridley]Nicholl+2022
|
http://arxiv.org/abs/2409.02973v1 | 20240904121057 | SDOoop: Capturing Periodical Patterns and Out-of-phase Anomalies in Streaming Data Analysis | [
"Alexander Hartl",
"Félix Iglesias Vázquez",
"Tanja Zseby"
] | cs.LG | [
"cs.LG"
] |
SDOoop: Capturing Periodical Patterns and Out-of-phase Anomalies in Streaming Data Analysis
Alexander Hartl 0000-0003-4376-9605
Institute of Telecommunications
TU Wien
1040 Wien, Austria
me@alexhartl.eu
Félix Iglesias Vázquez 0000-0001-6081-969X
Institute of Telecommunications
TU Wien
1040 Wien, Austria
felix.iglesias@tuwien.ac.at
Tanja Zseby 0000-0002-5391-467X
Institute of Telecommunications
TU Wien
1040 Wien, Austria
tanja.zseby@tuwien.ac.at
September 9, 2024
§ ABSTRACT
Streaming data analysis is increasingly required in applications, e.g., IoT, cybersecurity, robotics, mechatronics or cyber-physical systems.
Despite its relevance, it is still an emerging field with open challenges.
SDO is a recent anomaly detection method designed to
meet requirements of speed, interpretability and intuitive parameterization. In this work, we present SDOoop, which extends the capabilities of SDO's streaming version to
retain temporal information of data structures. SDOoop spots contextual anomalies undetectable by traditional algorithms, while enabling the inspection of data geometries, clusters and temporal patterns.
We used SDOoop to model real network communications in critical infrastructures and extract patterns that disclose their dynamics.
Moreover, we evaluated SDOoop with data from intrusion detection and natural science domains and obtained performances equivalent or superior to state-of-the-art approaches.
Our results show the high potential of new model-based methods to analyze and explain streaming data.
Since SDOoop operates with constant per-sample space and time complexity, it is ideal for big data, being able to instantly process large volumes of information.
SDOoop conforms to next-generation machine learning, which, in addition to accuracy and speed, is expected to provide highly interpretable and informative models.
Contextual Anomalies, Streaming Data Analysis, Anomaly Detection, Communication Networks, Critical Infrastructures
§ INTRODUCTION
In data stream processing, data points v_j ∈ℝ ^D consistently arrive at monotonically increasing times t_j ∈ℝ for j=1,2,…
Due to this steady acquisition,
analysis algorithms face the challenge of discovering knowledge in unbounded data that substantially accumulates in a short time.
In such a context, real-life applications dismiss batch-mode operation while demanding fast online processing able to update models and parameters in response to concept drift.
Here, “updating models and parameters” does not only mean adapting to new patterns and classes, but also forgetting those that have become obsolete.
In anomaly/outlier detection (OD),
we commonly set a sliding window (or an observation horizon) w that establishes the memory length for which space geometries are remembered.
Hence, the anomaly is defined: (a) either based on the neighborhood of a data point within w (e.g., Exact- and Approx-STORM <cit.>), or (b) by comparing to a model that evolves with w (e.g., Robust Random Cut Forests <cit.>).
In both cases, note that the comparison reference is purely static (or geometric) relative to the point of comparison at the instant of comparison. That is, the data within w (or the model used instead) is a snapshot.
While this is not a problem for many types of anomalies, most traditional methods are blind to contextual anomalies. A contextual (aka. conditional or out-of-phase) anomaly “occurs if a point deviates in its local context” <cit.>, i.e., if it happens outside its usual time. Consider a method whose w spans a one-week period. If a cluster occurs exclusively during weekends, but a data point of this cluster accidentally appears on a Wednesday, this method will not identify it as an anomaly, but as a normal inlier instead.
Here, we present SDOoop
(SDO out-of-phase), an algorithm for OD in streaming data whose models
store temporal information.
While retaining constant per-sample space and time complexity and keeping intact the functionalities to detect other types of anomalies,
SDOoop is also able to identify contextual anomalies and capture periodical patterns that explain the time behavior of the data bulk.
SDOoop builds models
by sampling a fixed number of data points at representative locations in feature space, called observers.
To escape dependence on the data volume, it uses an exponentially weighted moving average (EWMA) to estimate model information from the arriving data mass. At the same time, observers hold temporal information as coefficients of Fourier transforms (FT).
Thus, for a specific time of interest t, observers “twinkle” to show only the most representative model for time t.
The simple example in Fig. <ref> can give an intuition of the distinctive capabilities of SDOoop when compared with alternatives. In the figure, the internal model (incrementally updated) is formed by the red and orange points, which represent observers.
Observers are placed in areas of considerable density to see the data mass around. Note that, while the orange observer stands for a cluster that occurs at a continuous pace, the red observer represents a cluster that exhibits a temporal behavior. Consequently, the red observer twinkles accordingly, the drawn gray curve showing the inverse FT of its captured FT coefficients.
If a contextual anomaly happens, the closest observers will not be awake; hence, the point will be detected as an anomaly based on its distances to farther, active observers.
Our work advances the research on observers-based unsupervised learning, which gave rise to SDO <cit.>, SDOstream <cit.> and SDOclust <cit.>.
The remainder of this paper is structured as follows:
In Section <ref>, we introduce observers-based OD.
Section <ref> describes SDOoop and explains its parameters.
SDOoop is evaluated in Section <ref>.
In Section <ref> we explore related research efforts and contrast them to the problem solved here.
Finally, we summarize the main ideas and contributions in Section <ref>.
To enable reproducibility, we make all our source code available in our repository
<https://github.com/CN-TU/tpsdos-experiments>.
§ OBSERVERS-BASED OUTLIER DETECTION
The Sparse Data Observers (SDO) method <cit.> for OD on static datasets is the foundation of our current proposal.
In a nutshell, SDO works as follows:
(a) Randomly sample points from the dataset, which will be called “observers”.
(b) Each data point in the dataset is only observed by the x nearest observers, resulting in each observer performing a different number of observations.
(c) Remove idle observers, i.e., with the smallest number of observations. Remaining observers are termed active.
(d) For each data point, compute an outlier score as median distance to the x nearest active observers.
Hence, observers capture the main shapes of the data in a low-density model and outlier scores are calculated as distances to points in this model. Removing idle observers minimizes the chances of outliers being part of the model.
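For concreteness, the four steps above can be condensed into the following NumPy sketch for a static dataset; the function and parameter names are illustrative choices of ours (not the reference implementation), and the brute-force distance matrix is only intended for small data.

import numpy as np

def sdo_scores(X, k=200, x=5, q_idle=0.3, seed=None):
    """Toy static SDO: sample observers, let each point be seen by its x
    nearest observers, drop idle observers, and score by median distance."""
    rng = np.random.default_rng(seed)
    observers = X[rng.choice(len(X), size=k, replace=False)]       # step (a)
    dist = np.linalg.norm(X[:, None, :] - observers[None, :, :], axis=-1)
    nearest = np.argsort(dist, axis=1)[:, :x]                      # step (b)
    P = np.bincount(nearest.ravel(), minlength=k)                  # observations per observer
    active = P >= np.quantile(P, q_idle)                           # step (c): remove idle observers
    d_active = np.sort(dist[:, active], axis=1)[:, :x]
    return np.median(d_active, axis=1)                             # step (d): outlier scores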
The observer-approach also holds for SDOstream <cit.>, which adapts the algorithm for a streaming setting by continuously sampling new observers and using EWMA for computing observations.
In this paper, we predominantly adhere to the notation of <cit.>. Hence, we denote the observers set as Ω and, accepting a slight abuse of notation, we denote by ω∈Ω both an abstract observer and its feature vector. Furthermore, P_ω denotes ω's observations, where P_ω∈ℕ_0 for SDO and P_ω∈ℝ_0^+ for SDOstream. Hence, P_ω counts the number of data points for which ω belongs to the x nearest observers with an algorithm parameter x ∈ℕ. Observers with insufficient P_ω are thus disregarded for outlier scoring.
In contrast to SDO and SDOstream, SDOoop replaces the number of observations P_ω with a temporal function, allowing active observers to become temporarily idle (i.e., asleep) and reappear dynamically in accordance with the temporal pattern of the underlying clusters. Therefore, it is possible to construct an active observers set representative for the data stream at the current time and, hence,
to detect data points that do not meet the established temporal pattern, i.e., contextual anomalies.
§ SDOOOP
We describe the construction of our proposal.
Main symbols and notation are shown in Table <ref>.
We denote by [ N_bins ] with N_bins∈ℕ the set {0,…,N_bins-1} and by d: ℝ ^D×ℝ^D→ℝ^+ a distance function (e.g., Euclidean).
Our method enables the model to absorb temporal patterns in processed data streams. To describe this, we consider data streams satisfying the following definition.
For a given data stream, let γ( v, t) ∈ℝ_0^+ denote the expected rate of arriving data points at location v ∈ℝ^D and time t ∈ℝ. Therefore, γ( v, t)Δ v Δ t stands for the expected number of data points seen in a volume Δ v and time interval Δ t. We say that the stream exhibits T_0-periodic patterns with T_0 ∈ℝ^+ if γ( v,t) is T_0-periodic, i.e., γ( v,t)=γ( v,t+T_0) for all v ∈ℝ^D, t ∈ℝ.
Definition <ref> is based on the expected rate of arriving data points. This means that, to reason about periodic behaviors, the random stream is modeled as generated by an underlying deterministic process.
In particular, a stationary stream exhibits T_0-periodic patterns for any T_0.
Note that Definition <ref> does not include concept drift, which is tackled by SDOoop with an exponential sliding window.
To capture temporal patterns, we allow the observers' observations to be T_0-periodic. We represent and store the associated temporal functions in terms of their FT coefficients P_ω,n∈ℂ with ω∈Ω, n ∈ [N_bins].
To extract observers relevant for the current point in time from the model, we first define the q_id-percentile P_thr∈ℝ^+ of the observers' average observations P_ω,0, i.e.,
P_thr = max{ρ∈ℝ^+ | |{ω∈Ω | P_ω,0 < ρ}|≤ q_id|Ω|}.
Similar to previous work, P_thr allows us to require active observers to have a minimum number of observations in relation to the total time-averaged observation count.
Hence, we construct a view yielding the currently active observers
Ω_a = {ω∈Ω | Re{∑_n ∈ [N_bins] P_ω,n}≥ P_thr}
in terms of a lower bound for the inverse FT, where Re{∑_n ∈ [N_bins] P_ω,n} evaluates the temporal shape of the observers' observations at the current time.
To narrow the scope to the most relevant information, we form sets from both Ω and Ω_a that only contain the x nearest points. Hence, for a point v ∈ℝ^D we specify the set of nearest observers 𝒩( v)⊂Ω with |𝒩| = min(x,|Ω|) and the set of nearest active observers 𝒩_a( v) ⊂Ω_a with |𝒩_a | = min(x,|Ω_a|), i.e.
d( ω̃, v ) ≤ d( ω, v ) ∀ ω̃∈𝒩( v), ω∈Ω∖𝒩( v) and
d( ω̃, v ) ≤ d( ω, v ) ∀ ω̃∈𝒩_a( v), ω∈Ω_a ∖𝒩_a( v) .
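To make these definitions concrete, the sketch below (NumPy, with illustrative names) derives the active view Ω_a from the stored complex coefficients and scores an arriving point against its x nearest active observers; the quantile call only approximates the percentile definition of P_thr.

import numpy as np

def active_mask(P, q_id=0.2):
    """P: complex array of shape (k, N_bins) holding the coefficients P_{omega,n}.
    Returns a boolean mask of the currently active observers Omega_a."""
    P_thr = np.quantile(P[:, 0].real, q_id)   # q_id-percentile of the average observations P_{omega,0}
    current = P.sum(axis=1).real              # Re{ sum_n P_{omega,n} }: activity at the current time
    return current >= P_thr

def outlier_score(v, observers, P, x=5, q_id=0.2):
    """Median distance from point v to its x nearest *active* observers."""
    mask = active_mask(P, q_id)
    d = np.sort(np.linalg.norm(observers[mask] - v, axis=1))
    return np.median(d[:x])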
Algorithm <ref> depicts the core process, discussed as follows.
§.§ Algorithm Construction
Algorithm <ref> can be divided into three parts: establishing active observers Ω_a (line 1), scoring outlierness (line 2), and updating the model (lines 3-10).
The core concept, which allows to capture periodical patterns, is based on Lemma <ref>.
For an observer ω∈Ω, let g(t)∈ℝ^+ denote the expected rate of arriving data points, for which ω is contained in 𝒩 at time t. If g(t) is a T_0-periodic function and T≫ T_0, observations P_ω,n approximate a Fourier transform E{P_ω,n}≈∫ _-T_0^0 g(τ-t) exp(-j2 π nτ/T_0) dτ up to a constant factor.
We prove the lemma in Appendix <ref>.
Lemma <ref> shows that temporal information about how frequently observers are used can be extracted from P_ω , n in terms of an inverse FT.
To obtain the current set of active observers Ω_a,
it suffices to select observers that have been used most often in the past. Here, observer activity is mainly evaluated at time t-T_0, which is reasonable due to T_0-periodicity. However, due to inherent interpolation, the very recent activity of observers is also taken into account, which is particularly relevant for newly added observers.
In Theorem <ref>, we show that our method applies this approach for constructing Ω_a.
At time t, for data streams with T_0-periodic patterns, the active observers set Ω_a, as used by Algorithm <ref>, contains observers with highest g(t).
Equation <ref> constructs the set Ω_a by selecting observers from Ω for which Re{∑_n∈[N_bins]P_ω,n} is highest. If P_ω,n yields the FT of g(t) according to Lemma <ref>, the theorem follows immediately, since Re{∑_n∈[N_bins]P_ω,n} performs the inverse FT at time t=0 relative to the current time.
Theorem <ref> allows us to use Ω_a for assessing outlierness of arriving data points by leveraging nearest-observer distances. Hence, in line 1, 𝒩 and 𝒩_a are constructed. Based on <cit.>, we compute an outlier score with the median of distances to the x closest observers.
The final part of Algorithm <ref> handles model updating, which involves replacing the least “active” observer.
Updating of P_ω, n in line 4 follows an exponential shape set by time T. We show in Theorem <ref> that replacing observers proceeds with the same pace, which is necessary as observers otherwise would not be able to build meaningful P_ω, n values.
For data streams exhibiting T_0-periodic patterns, Algorithm <ref> on average samples k data points during a time period T as new observers.
We prove the theorem in Appendix <ref>.
Note that the factor (∑_ω∈𝒩 P_ω,0 /x) / (∑_ω∈Ω P_ω,0 /k) occurring in line 6 of Algorithm <ref> might be omitted without invalidating Theorem <ref>.
However, we include it to promote representativity of the observers set.
Hence, underrepresented observers in a neighborhood cause the observation count in this neighborhood to increase, leading to a higher sampling probability, while overrepresented observers lead to a lower sampling probability.
Moreover, this factor is stronger during the transient starting phase, ensuring that the model soon reaches its full size, but at the same time avoiding that it is filled up with the very first points, which in many cases would yield unrepresentative models.
During a time period T the model is, on average, replaced once according to Theorem <ref>. Since we use a fixed-size model, an observer has to be removed when adding a new one, picking the removal candidate based on its P_ω, n.
To avoid new observers being constantly replaced due to the stronger inertia of old observers, we use an age-normalized observation count P_ω,0/H_ω for selecting the observer to remove in line 7. By updating H_ω as depicted in line 3, H_ω denotes the maximum P_ω,0 that an observer ω might have reached over time. Thus, P_ω,0/H_ω∈ [0,1], where 1 is only scored if ω has always been in 𝒩 since it was assimilated into the model.
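Putting the pieces together, the per-point update (lines 3-10 of the algorithm) can be sketched as follows in NumPy. The decay-and-rotation factor and the sampling probability follow lines 4 and 6 as described above; the initialisation of a newly sampled observer and the exact form of the H_ω update in line 3 are not fully specified in the text, so those parts are plausible guesses rather than the reference implementation.

import numpy as np

def process_point(v, t, model, x=5, T=7*24*3600.0, T0=24*3600.0):
    """model: dict with 'obs' (k,D) observer locations, 'P' (k,N_bins) complex
    coefficients, 'H' (k,) age normalisation, 't_prev', 't_lao', 'i_lao', 'i'."""
    k, n_bins = model['P'].shape
    dt = t - model['t_prev']
    n = np.arange(n_bins)
    fade = np.exp((-1.0 / T + 1j * 2.0 * np.pi * n / T0) * dt)
    model['P'] *= fade                                  # exponential forgetting + phase rotation
    model['H'] = model['H'] * np.exp(-dt / T) + 1.0     # assumed form of the line-3 update
    d = np.linalg.norm(model['obs'] - v, axis=1)
    nearest = np.argsort(d)[:x]
    model['P'][nearest] += 1.0                          # line 4: register the new observation
    model['i'] += 1
    # line 6: probability of sampling v as a new observer
    ratio = model['P'][nearest, 0].real.sum() / max(model['P'][:, 0].real.sum(), 1e-12)
    rate = (t - model['t_lao']) / max(model['i'] - model['i_lao'], 1)
    p_sample = min(1.0, k**2 / (T * x) * ratio * rate)
    if np.random.random() < p_sample:
        worst = np.argmin(model['P'][:, 0].real / model['H'])   # line 7: age-normalised count
        model['obs'][worst], model['P'][worst], model['H'][worst] = v, 0.0, 1.0
        model['t_lao'], model['i_lao'] = t, model['i']
    model['t_prev'] = t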
§.§ Interpreting the Learned Model
Direct analysis of patterns in streams from a manual perspective is inherently complicated. Experts quickly run into difficulties regarding how to observe the data, what reference points to take, for how long, how much data to use, how to do this incrementally, etc.
SDOoop solves all these issues in a natural and elegant way.
At any point in time, the observers set Ω allows prompt access to highly representative data points that additionally retain temporal information. The temporal shape g_ω(t) of observed data points in a neighborhood of a given observer ω can be efficiently recovered in terms of an inverse FT,
g_ω(t) = Re{∑_n ∈ [N_bins] P_ω,n exp(jtn2π /T_0)} .
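In code, this reconstruction is a direct evaluation of the truncated Fourier series; the snippet below (NumPy, illustrative names) evaluates g_ω(t) on a grid of times for a single observer, e.g. to plot its temporal shape over one day or one week.

import numpy as np

def temporal_shape(P_omega, T0, times):
    """Evaluate g_omega(t) = Re{ sum_n P_{omega,n} exp(j t n 2 pi / T0) }.
    P_omega: complex coefficients of one observer, shape (N_bins,)."""
    n = np.arange(len(P_omega))
    phases = np.exp(1j * 2.0 * np.pi * np.outer(times, n) / T0)   # (n_times, N_bins)
    return (phases @ P_omega).real

# e.g., one week at 10-minute resolution:
# g = temporal_shape(P[3], T0=7*24*3600.0, times=np.arange(0.0, 7*24*3600.0, 600.0))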
SDOoop is designed to be embedded in the data processing pipeline of stand-alone systems. In other words, its primary purpose is feeding subsequent analysis phases, e.g., visualization or clustering techniques to extract further knowledge. Nevertheless, Ω is commonly small enough for manual inspection,
meaning that we can explore the set of observers along different time spans and study their periodicities, but also isolate a given instant in time to focus only on its stream characteristics.
By analyzing observers in Ω_a, the data analyst obtains an immediate depiction of the data model to easily interpret both the ground of the outlierness scoring and how data are (or are expected to be) as a whole.
§.§ Costs and Parameters
The computational cost lies primarily in the comparison of incoming data points with the learned model.
Assuming that distance computation is 𝒪(D) with the number of dimensions D,
building Ω_a (equation <ref>) implies
time complexity 𝒪(kN_bins)+𝒪(kD).
Holding the model in memory requires storing the observers ω and storing their observations P_ω,n, similarly resulting in 𝒪(kN_bins)+𝒪(kD) space complexity.
Therefore, per-point space and time complexity linearly depend on model size k, which is a pre-fixed parameter.
This makes SDOoop suitable for big data with highly demanding processing.
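As a concrete illustration of this constant footprint, assuming, hypothetically, the largest configuration used in the experiments below (k=400 observers, N_bins=2000 frequency bins), an assumed feature dimension of D=13 and double-precision storage, the model occupies roughly a dozen megabytes regardless of how many points have been processed:

k, n_bins, D = 400, 2000, 13            # observers, frequency bins, assumed feature dimension
coeff_bytes = k * n_bins * 16           # complex128 coefficients P_{omega,n}
obs_bytes = k * D * 8                   # float64 observer locations
print((coeff_bytes + obs_bytes) / 1e6)  # ~12.8 MB, independent of the stream length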
Temporal behavior is captured by
T, T_0 and the number of frequency bins N_bins.
T is the time constant of the exponential windowing mechanism.
It governs the memory length and is therefore equivalent to the window length of sliding-window algorithms.
T_0 denotes the period of the FT base frequency. Periodicities can be captured best if T_0 is an integer multiple of expected periodicities.
For instance, in many real-world applications a T_0-value of one week might be reasonable, so that weekly and diurnal patterns can be detected.
Furthermore, to ensure that the EWMA approximates a Fourier integral, T_0 should be reasonably smaller than T.
N_bins determines the maximum frequency that can be captured by the model,
hence also fixing the temporal resolution of the learned temporal shapes.
T, T_0 and N_bins are intuitive parameters and can be easily adjusted based on domain knowledge.
q_id, k and x are discussed extensively in <cit.> and <cit.>.
Here it suffices to mention that further experimentation confirms the robustness of q_id, k and x, meaning that performance is stable over a wide range of values and that most applications work properly with default configurations.
The setting of k depends on the expected variability and degree of representation, but several hundred observers is sufficient in most cases.
x inherits from nearest-neighbor algorithms, with similar tuning strategies <cit.>.
In our experiments, values in the ranges k ∈ [100,1000], x ∈ [3,9] and q_id∈ [0.1,0.3] have shown excellent results.
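Assuming a scikit-learn-style interface, a typical parameterization for, say, network-flow data could then look as follows; note that the constructor and keyword names are hypothetical and do not refer to the actual API of the reference implementation.

detector = SDOoop(
    k=400,              # model size (number of observers)
    x=5,                # nearest observers used for scoring and updating
    q_id=0.2,           # idle quantile defining the active-observer view
    T=7 * 24 * 3600,    # memory length: one week (seconds)
    T0=24 * 3600,       # base period: one day
    n_bins=288,         # minimum resolvable period: T0 / n_bins = 5 minutes
)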
§ EXPERIMENTAL EVALUATION
In this section, we discuss the experimental evaluation of SDOoop.
Based on a proof of concept, we first demonstrate its capability to detect contextual outliers. We then proceed by benchmarking OD performances on public datasets. We finally show the discovery and modeling of temporal patterns in real-life cases.
§.§ Contextual Anomalies: Proof of Concept
For this proof of concept, we use MDCGen <cit.> to generate a synthetic data stream of five clusters that vanish and reappear at different times and periods. We add both spatial and contextual outliers into the stream.
Fig. <ref> shows an excerpt of the generated data stream with 0.5% of contextual outliers. Hence, while normal outliers are distributed across the entire feature space, outliers occurring out of phase fall in the same spatial location as clustered data, but their time of appearance does not meet the temporal shape of clustered data.
In Fig. <ref>, we plot the area under the ROC curve (AUC) for different ratios of contextual outliers to data points in one active cluster.
We compare SDOoop with two consolidated OD methods for streaming data:
SW-kNN (the sliding-window implementation of kNN for OD <cit.>) and RRCT <cit.>.
All algorithms have been properly tuned to capture at least one full period. The more outliers occur out of phase, the more the performance of traditional algorithms plummets,
whereas our method retains the highest AUC at all times.
This clearly indicates that
SDOoop is the only algorithm capable of detecting contextual outliers.
§.§ OD Performance with Evaluation Datasets
To compare our method with state-of-the-art stream OD algorithms, we selected popular OD datasets of sufficient length
and with timestamped data points.
Datasets and metrics.
The KDD Cup'99 dataset <cit.> aims at detecting network intrusions based on a number of network and host features and, similar to previous work <cit.>,
we considered User to Root (U2R) attacks as outliers over normal traffic, resulting in 976,414 data points with an outlier proportion of 0.4%.
Additionally, we selected the recent SWAN-SF <cit.> dataset, which collects data about solar flares, and used preprocessing scripts provided by Ahmadzadeh and Aydin <cit.>. For SWAN-SF, we assigned a normal label to the majority class and an outlier label to the remaining classes, resulting in 331,185 data points with an outlier portion of 17.2%. In both experiments, we randomly sampled 50% of the data stream for randomized hyperparameter search and the other half for evaluation. For an overview of the ranges of hyperparameters, we refer to the code repository of this paper. Metrics for evaluation are Adjusted Average Precision (AAP),
Adjusted Precision at n (AP@n),
and AUC <cit.>.
Algorithms and experimental setups.
We used the dSalmon framework <cit.>, which provides efficient versions of several stream OD algorithms.
Since ensembles commonly exhibit superior accuracy, we used an ensemble of nine detectors for SDOoop, yet noticed almost no difference compared to a single SDOoop detector.
OD performances.
Experiment results in Table <ref> show how our method matches and even outperforms state-of-the-art algorithms for streaming OD. The strongest competitor is RS-Hash <cit.>.
In the SWAN-SF case, SDOoop ranks among the best performers, while, in the KDD Cup'99 dataset, it clearly stands out, particularly in AAP and AP@n.
The higher AP@n also indicates that our method finds several true outliers that pass unperceived for the competitors.
Disclosing insights about the data.
Obtained results seem consistent with data contexts. Considering how
outliers have been defined in the SWAN-SF dataset, we do not expect
that outliers break possible temporal periodicities in samples from solar
flares.
On the other hand, the KDD Cup’99 dataset describes events in a
computer network, which are expected to exhibit strong
temporal patterns due to human activity. Patterns may be broken by attack traffic,
leading to contextual outliers, behaviors that can be spotted by our method.
Here, the superior detection of
SDOoop not only indicates that the data
show temporal patterns, but also that some U2R
attacks are indeed contextual outliers.
§.§ Temporal Patterns in Machine-to-Machine Communication
Application context.
We study network traffic captured in a critical infrastructure,
in particular, of an energy supply company that connects charging stations for electric vehicles. The network communication satisfies management, accounting and maintenance aspects[While we embrace reproducible research, issues related to confidentiality, security and privacy prevent us from making these data publicly available.].
Network communication for these purposes usually adopts the OCPP protocol.
Due to the large portion of machine-to-machine communications, we expected to discover distinct periodic patterns.
Preprocessing and parameters.
We preprocessed data with the feature vector described in <cit.>, resulting in 13 million flows during a 1 month period. We parameterized the algorithm using T=1 week, 2000 frequency bins and T_0=2000 minutes, obtaining a minimum period of 1 minute.
We used 400 observers.
Capturing periodical patterns/clusters.
Fig. <ref> shows on the left side examples for the frequency spectrum (magnitude) learned by observers. Hence, different clusters show diverse temporal patterns. While observer 1 shows no or just weak periodicities, observer 2 shows a clear 5 minute periodicity and observers 3 and 4 show a 10 minute periodicity. From the learned FT, temporal shapes can be constructed in terms of an inverse FT as depicted in equation <ref>.
Fig. <ref> also shows the reconstructed temporal shape plotted over a 1 hour and 24 hour period. Hence, beneath the periodicities already found when inspecting the FT directly, the temporal shape for observers 3 and 4 additionally shows periodicities of a longer period of approximately 2.5 hours.
Interpreting clusters in the application.
The manual examination of network flows represented by observers confirmed the soundness of discovered temporal patterns. For example, observer 3 corresponds to ICMP pings that happen regularly to ensure that network devices are alive. Observer 4 identifies DNS requests that charging stations perform to resolve the name of the OCPP server to its IP address and transmit meter readings. For observer 4, the periodicity emerges from DNS caching, so that every second request for transmitting meter readings can be performed without having to perform a DNS lookup.
Observer 1 corresponds to protocol heartbeat messages. The fact that it does not show a clear periodicity might be due to the requesting devices not being time-synchronized or by deviating device configurations. Alternatively, heartbeat messages might take place with a very high frequency, so that no periodicities can be observed at the analyzed time scale.
Identifying outliers in the application.
Fig. <ref> shows outlier scores of points in time order. The manual inspection of flows with highest outlierness (in the center of Fig. <ref>) revealed that they are firmware update processes. Since updates took place only during two days in the monitored time span, the high outlier scores are consistent (yet these are not contextual anomalies).
Learning stability.
Finally, we investigated whether our results meet the expected algorithm behavior with respect to the sampling of new observers. Fig. <ref> shows how many data points have been sampled as new observers during the first two weeks. With T=1 week and k=400 observers, 400/7≈ 57 observers should be sampled each day according to equation <ref>. This theoretical conjecture shows good agreement with the empirical results.
Fig. <ref> also shows that the model is not instantly filled with observers in the first hours, but it is instead built up during the first days. Since data seen within the first couple of hours might not be representative for the remaining data, this transient behavior boosts the swift discovery of a representative model while achieving a fast model buildup with a high sampling rate at the beginning.
§.§ Discovery of Temporal Patterns: Darkspace Data
Application context and parameterization.
We additionally tested our method on the publicly available CAIDA “Patch Tuesday” darkspace dataset <cit.>. During preprocessing, we aggregated features by source IP address using the AGM feature vector <cit.>, specifically proposed for analyzing darkspace data. We applied our algorithm with T_0=1 week and T=10 weeks and 100 observers and 100 frequency bins, resulting in a minimum period length of about 100 minutes.
Capturing and identifying periodical patterns/clusters.
Fig. <ref> shows the magnitude of the Fourier coefficients of the three strongest observers. Peaks in Fig. <ref> occur at the 7th and 14th frequency bins, which are diurnal and semi-diurnal periodicities.
This coincides with previous studies of the darkspace <cit.>
that, among others, identified Conficker.C worm attacks or BitTorrent misconfigurations for diurnal patterns, and horizontal scan, vertical scan and probing activities on the UDP protocol for semi-diurnal patterns.
§ RELATED WORK
The problem addressed in this paper is covered in diverse fields, yet differing in some core aspects.
In this section, we provide an overview of related concepts and precedents to place SDOoop in the landscape of existing work.
Time Series Analysis and Contextual Anomalies.
A time series is a temporal sequence of observations of specific measurement variables. Time series have been studied in multiple domains, e.g.,
finance and econometrics <cit.>, weather forecasting <cit.>, and electric load forecasting <cit.>.
Traditionally, time series have been analyzed with mathematical tools
<cit.>,
however, in multivariate time series complexity increases dramatically and experts usually resort to nonlinear machine learning, e.g., <cit.>
Multivariate time series and streaming data are frequently considered synonyms, showing small differences open to discussion <cit.>.
Contextual anomalies have been tackled mainly in the time series analysis domain <cit.>, but here experts also emphasize the low attention given to contextual anomalies in spite of their relevance for cybersecurity, healthcare sensing and fraud detection <cit.>.
Recent research tends to expand the focus and, besides point anomalies, address anomalous sequences within the whole time series context, giving rise to
fast model-based detectors <cit.>. However, when considered, contextual anomalies are confronted mainly from a univariate time series perspective. An exception is the recent work by Pasini et al. <cit.>, which copes with low-dimensional multivariate time series and proposes a global contextual variance score by weighting feature-wise contextual variances based on Mahalanobis distances.
Note, however, that such an approach assumes feature independence.
OD in Streaming Data.
The trend in OD of recent years is to build models to process streaming data with constant memory complexity. In addition to distance-based methods like SDOstream <cit.>, OD in streams is grounded on tree-based methods <cit.>, half-space chains <cit.>,
histograms <cit.>, randomized hashing <cit.>,
or simply based on nearest neighbors in a sliding window <cit.>.
A thorough comparison of these methods can be consulted in <cit.>.
When compared to SDOoop, beyond the core approach for calculating point outlierness, the main difference is that
earlier algorithms establish a temporally evolving model (or a set of reference data points) that is deemed stationary at time scales smaller than a pre-fixed time parameter.
Hence, contextual outliers, i.e. data points that occur at an atypical time (out-of-phase), are wrongly classified as normal inliers.
Another important property of OD methods is interpretability of returned outlier scores.
In fact, many modern techniques like forest-based methods require space transformations that inevitably sacrifice interpretability.
In SDOoop, outlier scores can be directly interpreted as distance-to-observers, i.e., distance-to-normality. The model is small enough for manual inspection, allowing the analyst to draw conclusions about the data mass based on main model patterns. On the other hand, obtained models are also suitable for stand-alone systems or frameworks where knowledge must be integrated with decision-making modules or other types of knowledge.
Table <ref> shows a summarized comparison of recent algorithms for evolving stream OD with regard to key properties. SW-kNN and SW-LOF denote sliding-window implementations of the popular kNN <cit.> and LOF <cit.> algorithms.
Periodic Pattern Mining.
The detection of periodicities in
sequences has also been investigated in the context of periodic pattern mining <cit.>.
Periodic pattern mining can be applied to spatiotemporal data <cit.> to detect periodicities in the movement of objects.
In contrast, SDOoop is able to detect periodicities of arbitrary clusters even if the corresponding data points are mixed up with data points from other clusters with different temporal patterns or no patterns at all.
To the best of our knowledge, this problem has not been explored before.
§ CONCLUSION
Big data frequently arrives in data streams and requires online processing and analysis. We proposed SDOoop, a method for knowledge discovery in data streams that is able to capture coexisting periodicities regardless of data geometries.
Our method performs a single pass through the data and builds a fixed-size model consisting of representative point locations along with their temporal behavior in Fourier space.
We showed equal or superior performances compared to state-of-the-art algorithms when testing OD in established evaluation datasets. Moreover, we showed that our method can be an important tool for understanding and visualizing the spatiotemporal behavior of steadily arriving real-world data,
particularly in network security and critical infrastructures communications.
IEEEtran
§ PROOF OF LEMMA <REF>
Let i_o ∈ℕ denote the index of a data point, for which ω is contained in 𝒩 and i_c is the index of the currently processed data point, i.e., i_o < i_c. Then, the contribution of i_o to P_ω,n according to line 4 of Algorithm <ref> has been multiplied by Π _i=i_o+1^i_c(exp(-T^-1+jn2 π /T_0))^t_i-t_i-1 = (exp(-T^-1+jn2 π /T_0))^t_i_c-t_i_o. Summing over all points that have arrived in ω's neighborhood, we can write
E{P_ω,n} = ∫ _-∞^t g(τ) (exp(-T^-1+jn2 π /T_0))^t-τ dτ.
Splitting the integral into intervals of length T_0, we obtain
E{P_ω,n} =
∑_l=0^∞∫ _t-T_0^t g(τ-lT_0) (exp(-T^-1+jn2 π /T_0))^t-τ-lT_0 dτ
= ( ∑_l=0^∞exp(-T^-1lT_0) ) ∫ _t-T_0^t g(τ) (exp(-T^-1+jn2 π /T_0))^t-τ dτ
due to T_0-periodicity of g(t) and exp(jn2π)=1. Abbreviating the constant factor and substituting τ^'=τ-t, we obtain
E{P_ω,n} = c ∫ _-T_0^0 g(τ^'-t) (exp(-T^-1+jn2 π /T_0))^-τ^' dτ^'
≈ c ∫ _-T_0^0 g(τ^'-t) exp(-jn2 πτ^' /T_0) dτ^' (for T ≫ T_0) .
§ PROOF OF THEOREM <REF>
Taking line 6 in Algorithm <ref> as starting point,
the probability of selecting a newly seen point as observer is
min(1, (k^2/(Tx)) · (∑_ω∈𝒩 P_ω,0 / ∑_ω∈Ω P_ω,0) · ((t_i-t_i_LAO)/(i-i_LAO))).
Since we target specifically data streams with high rates of arriving data points, we can safely assume this probability to be small.
Hence, Pr{1 < (k^2/(Tx)) · (∑_ω∈𝒩 P_ω,0 / ∑_ω∈Ω P_ω,0) · ((t_i-t_i_LAO)/(i-i_LAO))} is negligible and we can write for the average probability of sampling a new point as observer
P_s ≈ E{(k^2/(Tx)) · (∑_ω∈𝒩 P_ω,0 / ∑_ω∈Ω P_ω,0) · ((t_i-t_i_LAO)/(i-i_LAO))}. Under the same assumption, we observe that the term (t_i-t_i_LAO)/(i-i_LAO) depends on the current time, but, since points belonging to different neighborhoods arrive in an interleaved manner, does not depend on a point's neighborhood. Since ∑_ω∈𝒩 P_ω,0/∑_ω∈Ω P_ω,0 does not depend on time, we can split the term to P_s ≈ E{(k^2/(Tx)) · (∑_ω∈𝒩 P_ω,0 / ∑_ω∈Ω P_ω,0)} E{(t_i-t_i_LAO)/(i-i_LAO)} due to stochastic independence of both terms.
∑_ω∈𝒩 P_ω,0 /x expresses the average observation count in the current neighborhood. The algorithm implements several mechanisms to make the observer density agree with the time-averaged point density, rendering the time-averaged local average observation count E{∑_ω∈𝒩 P_ω,0 /x} equal to the total average observation count of all observers ∑_ω∈Ω P_ω,0 /k, hence P_s ≈ (k/T) E{(t_i-t_i_LAO)/(i-i_LAO)} = (k/T) · IAT, where the average inter-arrival time of two data points is termed IAT.
During a time period of T, T/IAT data points arrive, yielding an average number of sampled points of P_sT/IAT=k.
|
http://arxiv.org/abs/2409.03105v1 | 20240904220432 | BASILISK II. Improved Constraints on the Galaxy-Halo Connection from Satellite Kinematics in SDSS | [
"Kaustav Mitra",
"Frank C. van den Bosch",
"Johannes U. Lange"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.GA"
] |
BASILISK II. Improved Constraints on the Galaxy-Halo Connection from Satellite Kinematics in SDSS
Kaustav Mitra^1E-mail: kaustav.mitra@yale.edu0000-0001-8073-4554,
Frank C. van den Bosch^10000-0003-3236-2068,
Johannes U. Lange^2,3,40000-0002-2450-1366
^1Department of Astronomy, Yale University, PO. Box 208101, New Haven, CT 06520-8101
^2Department of Physics, American University, 4400 Massachusetts Avenue NW, Washington, DC 20016, USA
^3Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA
^4Leinweber Center for Theoretical Physics, University of Michigan, Ann Arbor, MI 48109, USA
§ ABSTRACT
Basilisk is a novel Bayesian hierarchical method for inferring the galaxy-halo connection, including its scatter, using the kinematics of satellite galaxies extracted from a redshift survey. In this paper, we introduce crucial improvements, such as updated central and satellite selection, advanced modelling of impurities and interlopers, extending the kinematic modelling to fourth order by including the kurtosis of the line-of-sight velocity distribution, and utilizing satellite abundance as an additional constraint. This drastically enhances Basilisk's performance, resulting in an unbiased recovery of the full conditional luminosity function (central and satellite) with unprecedented precision. After validating Basilisk's performance using realistic mock data, we apply it to the SDSS-DR7 data. The resulting inferences on the galaxy-halo connection are consistent with, but significantly tighter than, previous constraints from galaxy group catalogues, galaxy clustering and galaxy-galaxy lensing. Using full projected phase-space information, Basilisk breaks the mass-anisotropy degeneracy, thus providing a precise global constraint on the average orbital velocity anisotropy of satellite galaxies across a wide range of halo masses. Satellite orbits are found to be mildly radially anisotropic, in good agreement with the mean anisotropy for subhaloes in dark matter-only simulations. Thus, we establish Basilisk as a powerful tool that is not only more constraining than other methods on similar volumes of data, but, crucially, is also insensitive to the halo assembly bias that plagues commonly used techniques like galaxy clustering and galaxy-galaxy lensing.
methods: analytical —
methods: statistical —
galaxies: haloes —
galaxies: kinematics and dynamics —
cosmology: dark matter
§ INTRODUCTION
According to the current cosmological paradigm, the vast majority of all galaxies form and reside in extended dark matter haloes <cit.>. Halo occupation modelling tries to use observational constraints on the population of galaxies in order to infer the statistical link between the galaxy properties (mainly their luminosity or stellar mass) and the properties of the dark matter haloes (mainly some measure of halo mass) in which they reside <cit.>. The resulting `galaxy-halo connection' provides valuable insight regarding the formation and evolution of galaxies, and benchmarks to calibrate, compare and validate semi-analytic models <cit.> and simulations <cit.>. In addition, since it describes the link between the light we see and the mass that governs the dynamical evolution of the Universe, it is a powerful tool that allows astronomers to constrain cosmological parameters using the observed distribution of galaxies <cit.>.
Arguably, the most straightforward method to infer the galaxy halo connection, and one that has become extremely popular, is subhalo abundance matching <cit.>. It matches the ordered list of galaxies (typically ranked by stellar mass or luminosity) to that of subhaloes (typically ranked by their peak or infall mass)[Rather than abundance matching individual galaxies to subhaloes, one can also match the abundance of galaxy groups (identified using some group finder) to dark matter host haloes <cit.>.]. In this mostly non-parametric method, one usually allows for some amount of scatter (a free parameter) in the rank-order matching, to have realistic spread in the stellar mass - halo mass relation <cit.>. A key advantage of SHAM over other methods, discussed below, is that it only requires stellar mass (or luminosity) measurements of the galaxies. However, an important downside is that it relies crucially on the assumption that both the galaxy sample and the (sub)halo sample (typically taken from a N-body simulation) are complete. Hence, SHAM cannot be applied to subsamples of galaxies (i.e., samples of emission line galaxies, or galaxies selected by colour). In addition, even if the galaxy sample is complete, the (sub)halo catalogues used, which are typically extracted from numerical simulations, suffer from incompleteness due to artificial disruption <cit.> and failures of subhalo finders <cit.>. This can significantly impact the galaxy-halo connection inferred via SHAM <cit.>.
These problems can be overcome using data that more directly constrains halo mass. The two most commonly used methods are galaxy clustering <cit.> and galaxy-galaxy lensing <cit.>. The former relies on the fact that more massive haloes are more strongly clustered <cit.>; hence, the clustering strength of a given population of galaxies informs the characteristic mass of the haloes in which they reside. Unfortunately, its reliability is hampered by the finding that halo clustering strength depends not only on mass but also on secondary halo properties <cit.>, something that is collectively referred to as halo assembly bias. Galaxy-galaxy lensing, which is a manifestation of weak gravitational lensing, uses the tangential shear distortions of distant background galaxies around foreground ones in order to constrain the halo masses of the latter <cit.>. Although, in principle, a fairly direct probe of halo mass, this method requires tedious shape measurements of faint background sources, which can be prone to effects like blending and intrinsic alignment. Typically the background sources lack spectroscopic redshifts, which can also cause systematic errors in the modelling of their measured shear distortions. In addition, on large scales the 2-halo term of the lensing shear is also impacted by the same assembly bias issues that plague clustering.
Another method that can be used to constrain the galaxy-halo connection, but which has hitherto been severely under-utilized, is satellite kinematics. It uses measurements of the line-of-sight velocities of satellite galaxies with respect to their corresponding central galaxy in order to constrain the gravitational potential, and hence the mass, of the host halo[Famously, the same principle was used by <cit.> in order to infer the presence of dark matter in the Coma cluster]. With the exception of large galaxy groups and clusters, individual central galaxies typically only have a few spectroscopically detected satellites. Consequently, it is common to combine the satellite velocity measurements from a large stack of central galaxies in order to estimate an average satellite velocity dispersion, which in turn is used to infer an average host halo mass using either a virial mass estimator or a simple Jeans model <cit.>.
It has often been argued that satellite kinematics is not a reliable mass estimator for any combination of the following reasons: (a) satellite galaxies are not necessarily virialized tracers <cit.>, (b) their orbits may well be anisotropic <cit.>, resulting in a well-known mass-anisotropy degeneracy <cit.>, and (c) the stacking that is used implies `mass-mixing' (i.e., combining the kinematics of haloes of different masses), which muddles the interpretation of the data. In addition, the selection of centrals and satellites from a redshift survey is unavoidably impacted by impurities, incompleteness and interlopers, further complicating the analysis. Despite these concerns, a number of studies have progressively improved satellite kinematics and have shown that it can yield reliable, as well as precise, constraints on the galaxy-halo connections[and on the masses of individual clusters <cit.>]. In particular, <cit.> demonstrated that by selecting centrals and satellites using iterative, adaptive selection criteria the impact of impurities and interlopers can be minimized. <cit.> has shown that by combining different weighting schemes one can accurately account for mass mixing, and even constrain the scatter in the stellar mass-halo mass relation <cit.>. This was significantly improved upon by <cit.> who demonstrated that kinematics of satellite galaxies from a large redshift survey such as the Sloan Digital Sky Survey <cit.> can yield constraints on the galaxy-halo connection that are complementary to, and competitive with, constraints from galaxy clustering and/or galaxy-galaxy lensing.
<cit.> were the first to analyse satellite kinematics while accounting for orbital anisotropy. Using a method first developed by <cit.> they were able to simultaneously constrain halo mass, halo concentration and orbital anisotropy, albeit without accounting for mass mixing.
<cit.> developed Basilisk, a Bayesian hierarchical inference formalism that further improves on the ability of satellite kinematics to constrain the galaxy-halo connection. Unlike previous methods, Basilisk does not resort to stacking the kinematics of satellite galaxies in bins of central luminosity, and does not make use of summary statistics, such as satellite velocity dispersion. Rather, it leaves the data in the raw form and computes the corresponding likelihood. Consequently, it can simultaneously solve for halo mass and orbital anisotropy of the satellite galaxies, while properly accounting for scatter in the galaxy-halo connection. In addition, Basilisk can be applied to flux-limited, rather than volume-limited, samples, greatly enhancing the amount and dynamic range of the data.
Paper I also tested and validated Basilisk against mock data sets of varying complexity, and demonstrated that it yields unbiased constraints on the parameters specifying the galaxy-halo connection. However, in order to speed up the analyses, all those tests were performed using mock data samples that were only about 1/8 the size of the full SDSS sample analysed here. When we ran Basilisk on full-sized mocks instead, the model parameter uncertainties shrank considerably, as expected, revealing several significant discrepancies that turned out to be systematic. This necessitated a number of modifications to Basilisk, which we present in the first half of this paper. Most notably, we introduce significant improvements to the treatment of interlopers (i.e., galaxies that are selected as satellites but that do not reside in the same dark matter halo as the central), allowing for both a population of splash-back galaxies <cit.> and a large-scale infall population akin to the well-known <cit.> effect. In addition, we slightly modify the cylindrical selection criteria in order to improve the purity of our sample (i.e., reduce misclassification of satellites as centrals), we assure that the selection of secondaries around each individual primary is volume-limited, and we forward-model the contribution of impurities that arise from haloes in which the brightest galaxy is a satellite rather than the central. We also let go of the oversimplified assumption that the satellite velocity profile along any given line-of-sight is Gaussian, as was done in Paper I. Rather, we now use the fourth-order Jeans equations to model the kurtosis of the line-of-sight velocity distribution (LOSVD). This enables more accurate modelling of the full phase-space distribution of satellite galaxies, and allows Basilisk to break the mass-anisotropy degeneracy. Finally, we also replace fitting binned statistics of centrals with zero (detected) satellites, as done in Paper I, with a more general, Bayesian hierarchical modelling of the number of satellites around each central. Although this data on satellite abundances does not yield direct kinematic constraints on halo mass, it greatly helps to constrain the overall galaxy-halo connection.
The goal of this paper is threefold: (i) showcase the advancements in satellite kinematics methodology that we have introduced in Basilisk, and highlight its improved performance when tested against realistic SDSS-like mock data; (ii) apply Basilisk to SDSS DR7 data to simultaneously constrain the conditional luminosity functions of central and satellite galaxies, the satellite velocity anisotropy and satellite radial distribution, all with unprecedented precision, and compare those with previous constraints on halo occupation statistics; and (iii) establish Basilisk as a powerful method to infer the galaxy-halo connection that is free of halo assembly bias effects, and that is even more constraining than commonly used techniques like galaxy clustering and galaxy-galaxy lensing when applied to data of similar volumes.
Throughout this paper we adopt the flat Planck18 ΛCDM cosmology <cit.>, which has matter density parameter Ω_m = 0.315, power spectrum normalization σ_8=0.811, spectral index n_s = 0.9649, Hubble parameter h = H_0/(100 km s^-1 Mpc^-1) = 0.6736 and baryon density Ω_b h^2 = 0.02237.[These are the TT,TE,EE+lowE+lensing best-fit values assuming a base-ΛCDM cosmology.]
§ SAMPLE SELECTION
§.§ Selecting central-satellite pairs
The first step in analysing satellite kinematics is to select a sample of centrals and their associated satellites from a redshift survey. Unfortunately, this selection is never perfect; one undoubtedly ends up selecting some bright satellites as centrals (we refer to these as `impurities') and not every galaxy selected as a satellite actually resides in the same host dark matter halo as the corresponding central (those that don't are referred to as `interlopers'). In what follows, we therefore use `primaries' and `secondaries' to refer to galaxies that are selected as centrals and satellites, respectively.
A galaxy at redshift z is considered a potential primary if it is the brightest galaxy in a conical volume of opening angle Θ_ap^pri ≡ R_ap^pri/d_A(z) centered on the galaxy in question, and extending along the line-of-sight from z-(Δ z)^pri to z+(Δ z)^pri. Here d_A(z) is the angular diameter distance at redshift z, and (Δ z)^pri = (Δ V_max^pri/c) (1+z). The parameters R_ap^pri and Δ V_max^pri specify the primary selection cone. Following <cit.>, we select the primaries in a rank-ordered fashion, starting with the most luminous galaxy in the survey. Any galaxy located inside the selection cone of a brighter galaxy is removed from the list of potential primaries. All galaxies fainter than the primary and located inside a similar cone, but defined by R_ap^sec and Δ V_max^sec, centred on the primary are identified as its secondaries. Note that, although it is common to refer to these selection volumes as `cylinders', a convention we also adopted in Paper I, in actuality the selection volumes are frustums of cones. In order to rectify this confusing nomenclature, in this paper we refer to them as `selection cones' (see Fig. <ref>).
The four parameters R_ap^pri, R_ap^sec, Δ V_max^pri, and Δ V_max^sec control the completeness and purity of the sample of primaries and secondaries. Increasing R_ap^pri and/or Δ V_max^pri boosts the purity among primaries (i.e., it reduces the number of satellites erroneously identified as centrals), but reduces the overall completeness. Similarly, decreasing R_ap^sec and/or Δ V_max^sec reduces the number of interlopers, but at the cost of a reduced number of satellites, which are the dynamical tracers of interest. Since brighter primaries typically reside in larger, more massive haloes, it is advantageous to scale the sizes of the selection cones with the luminosity of the primary <cit.>. In particular, we adopt R_ap^pri = 0.6 σ_200 h^-1 Mpc, R_ap^sec = 0.15 σ_200 h^-1 Mpc, and Δ V_max^pri = Δ V_max^sec = 1000 σ_200 km s^-1. Here σ_200 is a rough measure of the satellite velocity dispersion in units of 200 km s^-1, which, following <cit.> and <cit.>, we take to scale with the luminosity of the primary as
logσ_200 = 0.04 + 0.48 log L_10 + 0.05 (log L_10)^2 ,
where L_10 = L / (10^10 h^-2 L_⊙), and σ_200 is allowed to take a maximum value of 4. The values of R_ap^pri and R_ap^sec correspond to roughly 1.65 and 0.4 times the halo virial radius, respectively, while the value for Δ V_max^sec is large enough to include the vast majority of all satellites around primaries of the corresponding luminosity. Note that the numerical values in these selection criteria are tuned in order to optimize the selection of primaries and secondaries against impurities and interlopers. In particular, they are slightly different from the values we adopted in Paper I. As detailed in Appendix <ref>, this is done in order to reduce the fraction of impurities that are neither true centrals, nor the brightest satellites in their corresponding host haloes. These impurities are particularly difficult to account for in our forward-modelling approach and can cause a small but systematic overestimate of the scatter in the relation between halo mass and central luminosity.
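For illustration, the luminosity scaling of the selection cones can be written as the short Python sketch below; the function names are ours, and the unit conventions (aperture radii in h^-1 Mpc, velocity depths in km s^-1) are our assumption.

import numpy as np

def sigma_200(L):
    """Rough satellite velocity dispersion in units of 200 km/s for a primary
    of luminosity L (in h^-2 Lsun), following the quadratic scaling above."""
    logL10 = np.log10(L / 1e10)
    return np.minimum(10.0 ** (0.04 + 0.48 * logL10 + 0.05 * logL10 ** 2), 4.0)

def selection_cones(L):
    """Aperture radii (assumed h^-1 Mpc) and velocity depth (km/s) of the
    primary and secondary selection cones for a primary of luminosity L."""
    s = sigma_200(L)
    return 0.6 * s, 0.15 * s, 1000.0 * s   # R_ap^pri, R_ap^sec, dV_max^pri = dV_max^sec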
The SDSS redshift catalogue, to which we apply in this study, is a flux-limited survey. As emphasized in , an important advantage of over earlier studies of satellite kinematics, is that it is not limited to volume-limited subsamples, thereby greatly boosting the number of primaries and secondaries to be used in the analysis. However, in order to facilitate proper modelling of the number of secondaries (true satellites and interlopers) we need to assure that the selection of secondaries around each individual primary is volume limited. This is something that was not implemented in , but which turned out to be important in order to avoid a systematic bias in the inferred faint-end slope of the satellite luminosity function. This effect was not significant in the smaller mock data samples used to test in , but could no longer be overlooked using data sets comparable in size to the SDSS data used here.
In this paper, we limit our analysis to primaries in the luminosity range 9.504 ≤log(L_ pri/[h^-2 L_⊙]) ≤ 11.104, corresponding to -19 ≤ M_r^0.1 - 5log h ≤ -23. Here M_r^0.1 is the absolute magnitude in the SDSS r-band K+E corrected to z=0.1. In addition, we only use data in the redshift range 0.02≤ z ≤ 0.20. Note that the selection cone used to identify secondaries around a primary of luminosity L_ pri at redshift z_ pri extends from z_ min^ sec to z_ max^ sec, given by
z_ min^ sec = z_ pri - (Δ V_ max^ sec(L_ pri) / c) (1 + z_ pri) and
z_ max^ sec = z_ pri + (Δ V_ max^ sec(L_ pri) / c) (1 + z_ pri) .
Hence, as depicted in Fig. <ref> we are only complete in the selection of secondaries with luminosities ≥ L_ min(z_ max^ sec). Here L_ min(z) is the minimum luminosity of galaxies at redshift z that make the apparent magnitude limit of our survey data (m_r = 17.6; see <ref>). In order to assure a complete, volume-limited selection of secondaries around each primary, secondaries fainter than L_ min(z_ max^ sec) are discarded. In addition, in order to assure that the entire secondary selection cone around a given primary fits within the flux-limits of the SDSS data, we require that L_ pri > L_ min(z_ max^ sec). Finally, the redshifts of the primaries are restricted to 0.034 ≤ z_ pri≤ 0.184, such that the z_ min^ sec and z_ max^ sec of the most luminous primaries fit within the 0.02 ≤ z ≤ 0.20 limits of the entire sample.
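A minimal sketch of this volume-limited cut is given below; the flux-limit curve L_min(z) is survey specific, so a stand-in toy curve is used purely to make the example runnable:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def secondary_z_limits(z_pri, dv_sec):
    """Redshift extent of the secondary selection cone around a primary."""
    dz = (dv_sec / C_KMS) * (1.0 + z_pri)
    return z_pri - dz, z_pri + dz

def volume_limited_mask(L_sec, z_pri, dv_sec, L_min_of_z):
    """Keep only secondaries brighter than L_min evaluated at the far end of
    the cone, so that the selection around this primary is volume limited."""
    _, z_max_sec = secondary_z_limits(z_pri, dv_sec)
    return np.asarray(L_sec) >= L_min_of_z(z_max_sec)

# toy flux-limit curve (NOT the SDSS one), just to exercise the function
L_min_toy = lambda z: 4.0e8 * (z / 0.02) ** 2
print(volume_limited_mask([2e9, 5e10], 0.08, 1000.0, L_min_toy))
```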
To elucidate this further, Fig. <ref> illustrates the bounds on the selection of primaries and secondaries. The solid, vertical lines at z=0.02 and z=0.20 mark the minimum and maximum redshifts of the entire sample, while the dashed, vertical lines at z=0.034 and z=0.184 mark the redshift limits allowed for primaries. Dashed, horizontal lines at log L = 11.104 and log L = 9.504 mark the luminosity cuts for primaries. The solid circles, labelled A to H, represent hypothetical primaries of three different luminosities L_1, L_2 and L_3 (indicated by three different colours). The shaded rectangle associated with each primary indicates the allowed luminosity-redshift ranges of its corresponding secondaries. Since the redshift extent of the secondary selection cone scales with the luminosity of the primary, the shaded regions of fainter primaries have a smaller Δ z-extent, as is evident from the figure. Note that these shaded regions extend down to where the apparent magnitude of the secondaries at the back-end of the selection cone is equal to the magnitude limit of the survey, which can be significantly lower than log L = 9.504, specifically for the primaries that are relatively nearby (e.g., primaries A and F).
Primary A is at the minimum allowed redshift for primaries, z_ min^ pri = 0.034, such that the `front' end of its secondary selection cone is equal to the minimum redshift of our survey data (i.e., =0.02). Similarly, primary C is located at the maximum allowed redshift for primaries, z_ max^ pri = 0.184, and has = 0.20. Primary H is also special in that it has the highest redshift possible given its luminosity. Had it been any farther away, the far end of its secondary selection cone would stick outside of the SDSS flux limit, resulting in incompleteness. The three dashed and dotted curved lines, labelled as L_ min((L_1/2/3)), show the lower luminosity limits for secondaries as a function of , corresponding to each of the three different primary luminosities represented in the figure. For example, primaries of luminosity L_2 (like primary D and E) can not have secondaries fainter than the middle dashed curve in green. This ensures that their secondaries are individually volume limited around each of those primaries.
§.§ Survey incompleteness
As any spectroscopic redshift survey, the SDSS data, from which our sample of primaries and secondaries derives, suffers from spectroscopic incompleteness due to fibre collisions and other failure modes <cit.>. Each galaxy in the survey is assigned a spectroscopic completeness, 𝒞_ spec, which indicates the fraction of spectroscopic targets in the angular region of the galaxy in question with a useful spectrum. In order to avoid primaries in regions with poor spectroscopic completeness, we remove all primaries with 𝒞_ spec < 0.8.
If a primary is close to the edge of the survey, such that its secondary selection cone sticks partially outside of the survey footprint, or if the secondary selection cone overlaps with a masked region, the number of secondaries may be underestimated. In order to account for this, we proceed as follows. For each primary we uniformly distribute ∼ 5× 10^4 particles in its secondary selection cone. We then compute the fraction, w_ app, of those particles that are located inside the angular footprint of the SDSS, accounting for both survey edges and masked areas. In what follows we use w_ app,i to denote the aperture completeness of galaxy i. In order to avoid primaries with a poor aperture completeness, we remove all primaries with w_ app < 0.8.
As demonstrated in <cit.>, it is important to correct satellite kinematics data for fibre-collision induced incompleteness. In the SDSS, spectroscopic fibres cannot be placed simultaneously on a single plate for objects separated by less than ϑ_ fc≡ 55” <cit.>. Although some galaxies are observed with multiple plates, yielding spectroscopic redshifts even for close pairs, roughly 65% of galaxies with a neighbour within 55” lack redshifts due to this fibre collision effect. In order to correct the data for the presence of fibre collisions, we follow <cit.> and start by assigning each fibre-collided galaxy the redshift of its nearest neighbour <cit.>. Note that we only use these during the identification of primaries. Once the selection is complete, all fiber-collided primaries and secondaries are removed from the sample[As shown in <cit.>, including fibre-collided galaxies during the selection of primaries significantly reduces sample impurity.]. In addition, each galaxy is assigned a spectroscopic weight, w_ spec, that is computed as follows. For each galaxy we first count the number of neighboring galaxies, n, brighter than m_r = 17.6 within a projected separation less than 55”. Next, for all galaxies in the survey with n neighbours, we compute the fraction, f_ spec, of those neighbours that have been successfully assigned a redshift. Finally, all galaxies with n neighbours are then assigned a spectroscopic weight equal to w_ spec = 1 / f_ spec.
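The spectroscopic weighting scheme just described amounts to the following few lines (an illustrative sketch, not the actual SDSS pipeline):

```python
import numpy as np

def spectroscopic_weights(n_close_neighbours, has_redshift):
    """Assign w_spec = 1/f_spec, where f_spec is the fraction of all survey
    galaxies with the same number of close (<55", m_r<17.6) neighbours that
    obtained a spectroscopic redshift."""
    n = np.asarray(n_close_neighbours)
    ok = np.asarray(has_redshift, dtype=bool)
    w = np.full(n.shape, np.nan)
    for nn in np.unique(n):
        sel = (n == nn)
        f_spec = ok[sel].mean()
        w[sel] = 1.0 / f_spec if f_spec > 0 else np.inf
    return w

# toy example: galaxies with more close neighbours are more often fibre-collided
print(spectroscopic_weights([0, 0, 1, 1, 1, 2, 2], [1, 1, 1, 0, 1, 0, 1]))
```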
In order to correct for aperture incompleteness and fibre collisions, down-weights the expectation value for the number of secondaries around primary i (see equation [<ref>] below), using the following correction factor:
f_ corr,i = ( N_ sec,i/∑_j=1^N_ sec,i w_ spec,ij) w_ app,i ,
Here w_ spec,ij is the spectroscopic weight for secondary j associated with primary i. Since correcting for fibre collisions is extremely difficult on scales below the fibre-collision scale, we remove all secondaries with R_ < R_ cut() ≡ d_() ϑ_ fc. Hence, the secondary selection volumes used in the end are conical frustums with a central hole with an opening angle of 55” (see Fig. <ref>). As shown in and <cit.>, this combined approach of down-weighting the model predictions for the number of secondaries and ignoring secondaries below the fibre-collision scale accurately accounts for incompleteness arising from fibre-collisions in the SDSS.
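Schematically, the aperture completeness and the correction factor can be estimated as below. The Monte-Carlo sampling is reduced to the projected annulus for brevity, and the form of f_corr follows our reading of the correction factor above, so both should be taken as assumptions rather than the authors' implementation:

```python
import numpy as np

def aperture_completeness(ra0, dec0, R_out, R_in, in_footprint, n_mc=50_000, seed=0):
    """Fraction of n_mc points, uniform over the projected annulus
    R_in < R < R_out around (ra0, dec0), that fall in the unmasked footprint.
    `in_footprint` is a user-supplied callable (ra, dec) -> boolean array."""
    rng = np.random.default_rng(seed)
    r = np.sqrt(rng.uniform(R_in**2, R_out**2, n_mc))   # uniform in area
    phi = rng.uniform(0.0, 2.0 * np.pi, n_mc)
    return np.mean(in_footprint(ra0 + r * np.cos(phi), dec0 + r * np.sin(phi)))

def f_corr(w_spec_of_secondaries, w_app):
    """Down-weighting factor applied to the expected number of secondaries."""
    w = np.asarray(w_spec_of_secondaries, dtype=float)
    return w_app * (len(w) / w.sum()) if len(w) else w_app

# toy footprint: everything at dec > 0 counts as inside the survey
w_app = aperture_completeness(0.0, 0.1, 0.5, 0.05, lambda ra, dec: dec > 0)
print(f_corr([1.0, 1.5, 2.0], w_app))
```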
§ OBSERVABLES
Here we describe the various observables used by in order to constrain the galaxy-halo connection. These include (i) accessible 2D phase-space parameters of primary-secondary pairs (line-of-sight velocity and projected separation), which contains the information regarding the kinematics of satellite galaxies, (ii) statistics regarding the number of secondaries per primary (including primaries with zero secondaries), which helps to constrain the halo occupation statistics, and (iii) the galaxy luminosity function. The following subsections discuss each of these observables in detail.
§.§ Satellite kinematics
For each primary-secondary pair in the sample we compute their projected separation
R_ p = d_ A(z_ pri) ϑ ,
and their line-of-sight velocity difference
Δ V = c (z_ sec - z_ pri)/(1 + z_ pri) .
Here z_ pri and z_ sec are the observed redshifts of the primary and secondary, respectively, c is the speed of light, and ϑ is the angular separation between the primary and secondary on the sky.
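These observables are straightforward to evaluate; the sketch below uses a simple flat ΛCDM angular diameter distance with placeholder cosmological parameters:

```python
import numpy as np

C_KMS = 299792.458  # km/s

def d_ang(z, H0=70.0, Om0=0.3, npts=2048):
    """Angular diameter distance [Mpc] in flat LCDM via a trapezoid integral."""
    zz = np.linspace(0.0, z, npts)
    invE = 1.0 / np.sqrt(Om0 * (1.0 + zz) ** 3 + (1.0 - Om0))
    d_com = (C_KMS / H0) * np.sum(0.5 * (invE[1:] + invE[:-1]) * np.diff(zz))
    return d_com / (1.0 + z)

def pair_phase_space(z_pri, z_sec, theta_rad):
    """Projected separation R_p [Mpc] and velocity difference dV [km/s]."""
    R_p = d_ang(z_pri) * theta_rad
    dV = C_KMS * (z_sec - z_pri) / (1.0 + z_pri)
    return R_p, dV

# a pair separated by 100 arcsec on the sky
print(pair_phase_space(0.08, 0.0815, 100.0 / 206265.0))
```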
As detailed in , the main data vector used in is given by
D_ SK = ⋃_i=1^N_+ ( {R_ p,ij, Δ V_ij | j=1,...,N_ sec,i} | L_ pri,i, z_ pri,i, N_ sec,i ) .
where the union is over all N_+ primaries with at least one secondary. Here is the number of secondaries associated with primary i, and it is made explicit that , , and are only treated as conditionals for the data {, | j=1,...,}. In other words, we consider , and as `given' and shall not use the distributions of these quantities as constraints on our satellite kinematics likelihood. Rather, uses the number densities of all galaxies as additional constraints (see <ref>). The main reason for doing so is to make the method less sensitive to the detailed selection of primaries, which is difficult to model in detail. In particular, as discussed in , this approach makes fairly insensitive to details regarding the σ_200(L) relation (equation [<ref>]) used to define the selection cones.
§.§ Number of secondaries
The data vector _ SK described above only contains primaries with at least one secondary. The complementary data vector _0 = ({,} | i=1,2,...,N_0) lists all N_0 primaries with zero spectroscopically detected secondaries. Even though _0 contains no kinematic data, it still provides additional constraints on the galaxy-halo connection, in particular regarding the occupation statistics of satellite galaxies. In we utilized this information by computing the fraction, P_0 = N_0 /(N_0 + N_+), of primaries, in a given bin of log and , that have zero secondaries. Here N_0 is the number of `lonely primaries' with zero detected secondaries, and N_+ is the number of primaries that have at least one secondary. As discussed in , this P_0 statistic provides valuable constraints on the galaxy-halo connection. However, upon closer examination we found that the binning used in this method causes small, but systematic errors in the inference. Using smaller bins was not able to solve this problem, which is why we ultimately opted for the following alternative, unbinned approach.
In line with 's philosophy to leave the data as much as possible in its raw form, rather than computing P_0 on a (log, )-grid, we use the following raw data vector as constraint on the model:
D_ NS = ⋃_i=1^N_ NS( N_ sec,i | L_ pri,i, z_ pri,i ) .
Here the union is over (a random subset of) all = N_0 + N_+ primaries, independent of how many secondaries they have (i.e., including the primaries with zero secondaries). Since N_0 ≫ N_+, computing the likelihood for all primaries is much more time-consuming than computing the likelihood for the satellite kinematics data vector (eq. [<ref>]). Therefore, we only use a downsampled, random subset of N_ NS = (N_+) primaries, where each primary has a probability equal to N_+/ to be included. In the case of the SDSS data set described in <ref> this probability is 0.094. This downsampling assures that the computation of the likelihood for has a CPU requirement that is comparable to that for _ SK. We emphasize that our constraints are primarily driven by the satellite kinematics data. Hence, this down-sampling of the satellite abundance data has no significant impact on our constraining power of the central galaxy-halo connection or the orbital anisotropy of the satellite galaxies. It only slightly broadens the posterior constraints for some of the parameters characterizing
the galaxy-halo connection of satellite galaxies.
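The downsampling itself is a one-line random thinning; for concreteness (the numbers in the usage line are illustrative only):

```python
import numpy as np

def downsample_primaries(n_plus, n_zero, seed=42):
    """Random thinning of the full primary list (with and without secondaries)
    so that roughly N_+ of them enter the number-of-secondaries data vector."""
    rng = np.random.default_rng(seed)
    n_tot = n_plus + n_zero
    keep = rng.random(n_tot) < (n_plus / n_tot)   # keep probability N_+ / N_tot
    return keep                                   # boolean mask over all primaries

mask = downsample_primaries(18_373, 150_000)
print(mask.sum(), "of", mask.size, "primaries retained")
```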
§.§ Galaxy Number Densities
The final observable that we use to constrain the galaxy-halo connection is the galaxy luminosity function, which provides important additional constraints on the CLF <cit.>, and therefore helps to tighten the posterior in our inference problem. We use the number density of galaxies in 10 bins of 0.15 dex in luminosity, ranging from 10^9.5 to 10^11. These are computed using the corresponding, volume-limited subsamples, carefully accounting for the SDSS DR-7 footprint. In what follows, we refer to the data vector representing these 10 number densities as _ LF. The covariance matrix of this data is computed using a jackknife estimator. In particular, we apply a recursive routine[ https://github.com/rongpu/pixel_partition] developed by <cit.>, that takes into account the survey mask and window, and iteratively constructs 𝒩 maximally compact, equal-area partitions of the survey footprint <cit.>. We adopt 𝒩=100 which is large enough to capture the covariance in the survey while also being small enough to assure that each subregion still hosts an adequate number of galaxies.[We apply a Hartlap correction factor <cit.> to the inverse of the covariance matrix to account for the relatively small number of jackknife samples, but note that this has a negligible impact.]
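For reference, the jackknife covariance and the Hartlap-corrected precision matrix follow from the 𝒩 partition measurements as in this sketch (the toy data at the bottom is random and only exercises the functions):

```python
import numpy as np

def jackknife_covariance(lf_jk):
    """Delete-one jackknife covariance.  lf_jk has shape (N_jk, N_bins):
    the binned number densities with each sky partition removed in turn."""
    x = np.asarray(lf_jk, dtype=float)
    n_jk = x.shape[0]
    d = x - x.mean(axis=0)
    return (n_jk - 1.0) / n_jk * (d.T @ d)

def hartlap_precision(cov, n_jk):
    """Inverse covariance debiased with the Hartlap et al. (2007) factor."""
    n_bins = cov.shape[0]
    alpha = (n_jk - n_bins - 2.0) / (n_jk - 1.0)
    return alpha * np.linalg.inv(cov)

# toy example: 100 jackknife realizations of a 10-bin luminosity function
rng = np.random.default_rng(1)
lf_jk = 1e-3 * (1.0 + 0.05 * rng.standard_normal((100, 10)))
print(hartlap_precision(jackknife_covariance(lf_jk), 100).shape)
```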
§ METHODOLOGY
We analyse the data described above using the Bayesian, hierarchical satellite kinematics code , which is described in detail in . Here we briefly summarize its salient features and introduce a few modifications that improve 's performance.
uses an affine invariant ensemble sampler <cit.> to constrain the posterior distribution,
P(θ | D) ∝ℒ(D | θ) P(θ) .
Here D = D_ SK + D_ NS + D_ LF is the total data vector, θ is the vector that describes our model parameters, P(θ) is the prior probability distribution on the model parameters, and ℒ(D|θ) is the likelihood of the data given the model. The latter consists of three parts: the likelihood ℒ_ SK for the satellite kinematics data D_ SK, the likelihood ℒ_ NS for the numbers of secondaries as described by the data vector D_ NS, and the likelihood ℒ_ LF for the luminosity function data D_ LF. In what follows we briefly describe the computation of each of these three different likelihood terms in turn. However, we first describe the model that we use to characterize the galaxy-halo connection.
§.§ Galaxy-halo connection model
§.§.§ Conditional luminosity function
The galaxy occupation statistics of dark matter haloes are modelled using the conditional luminosity function (CLF), Φ(L|M,z) L, which specifies the average number of galaxies with luminosities in the range [L- L/2, L + L/2] that reside in a halo of mass M at redshift z <cit.>. In particular, we write that
Φ(L|M,z) = Φ_(L|M) + Φ_(L|M) .
Here and throughout the rest of the paper, subscripts `c' and `s' refer to central and satellite, respectively, and we assume that the CLF is redshift independent, at least over the redshift range considered here (0.02 ≤ z ≤ 0.20).
The CLF of centrals is parametrized using a log-normal distribution (see blue, dashed curve in Fig. <ref>),
Φ_ c(L | M) dL = (log e/√(2πσ_ c^2)) exp[ -((log L - logL̅_ c)/(√(2)σ_ c))^2] dL/L .
The mass dependence of the median luminosity, L̅_ c, is parametrized by a broken power-law:
L̅_ c(M) = L_0 (M / M_1)^γ_1/(1 + M / M_1)^(γ_1 - γ_2) ,
which is characterized by four free parameters: a normalization, L_0, a characteristic halo mass, M_1, and two power-law slopes, γ_1 and γ_2.
Motivated by the fact that several studies suggest that the scatter, σ_, increases with decreasing halo mass <cit.>, we allow for a mass-dependent scatter using
σ_ c(M) = σ_13 + σ_ slope (log M - 13)
Hence, the scatter is characterized by two free parameters, a normalization, σ_13, that specifies the intrinsic scatter in log L_ c in haloes of mass M = 10^13 h^-1 M_⊙, and a power-law slope σ_ slope. Note that this is slightly different from the parametrization adopted in .
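A direct transcription of this central CLF in Python is given below; `sigma_slope` is our placeholder name for the scatter slope, and the parameter values in the usage line are invented:

```python
import numpy as np

LOG10_E = np.log10(np.e)

def median_L_cen(M, L0, M1, g1, g2):
    """Median central luminosity: broken power law in halo mass."""
    x = M / M1
    return L0 * x**g1 / (1.0 + x) ** (g1 - g2)

def sigma_cen(M, sigma13, sigma_slope):
    """Log-normal scatter, normalised at M = 1e13 (mass in h^-1 Msun assumed)."""
    return sigma13 + sigma_slope * (np.log10(M) - 13.0)

def clf_central(L, M, L0, M1, g1, g2, sigma13, sigma_slope):
    """Phi_c(L|M): log-normal distribution of central luminosity."""
    s = sigma_cen(M, sigma13, sigma_slope)
    dlog = np.log10(L) - np.log10(median_L_cen(M, L0, M1, g1, g2))
    return LOG10_E / (np.sqrt(2.0 * np.pi) * s * L) * \
        np.exp(-(dlog / (np.sqrt(2.0) * s)) ** 2)

print(clf_central(1e10, 1e12, L0=1e10, M1=3e11, g1=3.0, g2=0.25,
                  sigma13=0.17, sigma_slope=-0.03))
```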
For the satellite CLF we adopt a modified Schechter function (see red curve in Fig. <ref>):
Φ_ s(L | M) = ϕ_ s^*/L_ s^*( L/L_ s^*)^α_ s exp[ - ( L/L_ s^*)^2 ] .
Thus, the luminosity function of satellites in haloes of a given mass follows a power-law with slope α_ s and with an exponential cut-off above a critical luminosity, L_ s^*(M). Throughout we adopt
L_ s^*(M) = 0.562 L̅_ c(M) ,
which is motivated by the results from galaxy group catalogues <cit.>. As in <cit.>, we assume a universal value for the faint-end slope of the satellite CLF, α_ s, independent of halo mass. Finally, the normalization ϕ_ s^*(M) is parametrized by
log[ ϕ_ s^*(M) ] = b_0 + b_1 log M_12 + b_2 (log M_12)^2 ,
where M_12 = M/(10^12). Note that this characterization of the CLF is very similar to that adopted in a number of previous studies <cit.>. All CLF parameters, along with parameters that characterize the satellite velocity anisotropy, and nuisance parameters used for interloper modelling, are listed in Table <ref>. It also includes the best-fitting values and 1σ confidence intervals for all the parameters, obtained by fitting the SDSS-DR7 data.
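The satellite term translates similarly (again with invented parameter values, and with L_s^*(M) passed in as a callable so that it can be tied to the median central luminosity of the previous sketch):

```python
import numpy as np

def phi_star_sat(M, b0, b1, b2):
    """Normalisation phi_s^* of the satellite CLF; quadratic in log(M/1e12)."""
    x = np.log10(M / 1e12)
    return 10.0 ** (b0 + b1 * x + b2 * x**2)

def clf_satellite(L, M, alpha_s, b0, b1, b2, Lstar_sat):
    """Modified Schechter function with characteristic luminosity Lstar_sat(M),
    taken in the full model to be 0.562 times the median central luminosity."""
    Ls = Lstar_sat(M)
    x = L / Ls
    return phi_star_sat(M, b0, b1, b2) / Ls * x**alpha_s * np.exp(-(x**2))

# usage with a toy L_s^*(M)
Lstar_toy = lambda M: 0.562 * 1e10 * (M / 1e12) ** 0.3
print(clf_satellite(1e9, 1e13, alpha_s=-1.0, b0=0.0, b1=1.0, b2=-0.1,
                    Lstar_sat=Lstar_toy))
```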
§.§.§ Spatial distribution of satellites
Throughout we assume that the radial distribution of satellite galaxies is given by a spherically symmetric, generalized NFW (gNFW) profile
n_ sat(r|M,z) ∝( r/ℛ r_ s)^-γ( 1 + r/ℛ r_ s)^(γ - 3) .
Here ℛ and γ are free parameters and r_ s is the scale radius of the dark matter halo, which is related to the halo virial radius via the concentration parameter c_ vir = r_ vir / r_ s. This gNFW profile has sufficient flexibility to adequately describe a wide range of radial profiles, from satellites being unbiased tracers of their dark matter halo (γ = ℛ = 1), to cored profiles that resemble the radial profile of surviving subhaloes in numerical simulations (γ=0, ℛ∼ 2). This also brackets the range of observational constraints on the radial distribution of satellite galaxies in groups and clusters <cit.>.
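A numerically normalised version of this profile is sketched below; the rescaling parameter ℛ appears as `R_fac`:

```python
import numpy as np

def n_sat_gnfw(r, r_vir, conc, gamma, R_fac):
    """Unnormalised generalised NFW profile with effective scale radius
    R_fac * r_s, where r_s = r_vir / conc."""
    x = r / (R_fac * r_vir / conc)
    return x ** (-gamma) * (1.0 + x) ** (gamma - 3.0)

def normalised_profile(r_vir, conc, gamma, R_fac, r_max=None, npts=4096):
    """Return a callable n_bar(r) with 4*pi * int_0^{r_max} n_bar r^2 dr = 1."""
    r_max = r_vir if r_max is None else r_max
    r = np.linspace(1e-4 * r_vir, r_max, npts)
    y = n_sat_gnfw(r, r_vir, conc, gamma, R_fac) * r**2
    norm = 4.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))
    return lambda rr: n_sat_gnfw(rr, r_vir, conc, gamma, R_fac) / norm

n_bar = normalised_profile(r_vir=0.3, conc=8.0, gamma=0.0, R_fac=2.5)
print(n_bar(0.1))
```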
§.§ Satellite Kinematics
The data vector for the satellite kinematics is given by eq. (<ref>) and contains the projected phase-space coordinates and of all secondaries (satellite galaxies plus interlopers) associated with the N_+ primaries (centrals plus impurities). We make the reasonable assumption that the data for different primaries is independent. Additionally, for a primary with more than one secondary, we assume that the phase-space distribution of the secondaries are not correlated with each other. The latter may not be entirely justified, given that satellites are often accreted in groups, which can bias halo mass estimates <cit.>. We emphasize, though, that the majority (71% in the case of the SDSS data discussed in <ref>) of primaries that contribute to the satellite kinematics data only have a single secondary. In addition, tests based on realistic simulation-based mocks (see <ref>) indicate that any potential correlations between satellites (subhaloes) that occupy the same host halo can safely be ignored (i.e., do not cause a significant systematic error in our inference). Hence, we have that
_ SK ≡(_ SK|)
= ∏_i=1^N_+ ∏_j=1^ P(, | , , , ) .
Here, P(, | , , ) is the probability that a secondary galaxy in a halo at redshift , with a primary of luminosity, , and with a total of detected secondaries has projected phase-space parameters (, ). For true satellites, the probability is computed assuming that satellite galaxies are a virialized, steady-state tracer of the gravitational potential well in which they orbit (see <ref>). Throughout, we assume dark matter haloes to be spherical and to have NFW <cit.> density profiles characterized by the concentration-mass relation of <cit.> with zero scatter. Hence, host haloes are completely specified by their virial mass, , alone[Throughout this paper, we define virial quantities according to the virial overdensities given by the fitting formula of <cit.>.], which implies that we can factor the likelihood as
_ SK = ∏_i=1^N_+ ∫ P( | , , ) ×
∏_j=1^ P(, |, , ) .
This equation describes a marginalization over halo mass, which serves as a latent variable for each individual primary, accentuating the hierarchical nature of our inference procedure. Note that the `prior' for halo mass is informed by , , and according to the model . Using Bayes theorem, we have
P(M|L,z,) = P(|M,L,z) P(M,L,z)/∫ M P(|M,L,z) P(M,L,z) .
In what follows we discuss each of the conditional probability functions required to compute _ SK in turn.
§.§.§ The probability P(|M,,)
The number of secondaries, N_, associated with a particular primary consists of both satellites (galaxies that belong to the same dark matter host halo as the primary), and interlopers (those that do not). Throughout we assume that the number of interlopers and the number of satellite galaxies are independent, and that both obey Poisson statistics. As shown in , this implies that
P(N_ sec|M,L,z) = λ_ tot^N_ sec e^-λ_ tot/N_ sec! ,
where
λ_ tot = f_ corr× [λ_ sat + λ_ int] ,
is the expectation value for the number of secondaries, corrected for fibre collision and aperture incompleteness using the correction factor of equation (<ref>), and with λ_ int(L,z) and λ_ sat(M,L,z) as the expectation values for the numbers of interlopers and satellites, respectively.
The expectation value for the number of satellites brighter than the magnitude limit L_ min(), in a halo of mass M at redshift , that fall within the aperture used to select secondaries around a primary of luminosity , is given by
λ_ sat(M,L_ pri,z_ pri) = f_ ap(M,L_ pri,z_ pri) ∫_L_ min^∞Φ_ s(L|M) dL
Note that L_ min is a function of which in turn is a function of and (see <ref>). Here Φ_(L|M) is the satellite component of the CLF given by equation (<ref>) and f_ ap is the aperture fraction, defined as the probability for true satellites to fall within the secondary selection cylinder specified by and . Given that is much larger than the extent of the halo in redshift space, we have that
f_ ap(M, L_ pri, z_ pri) =
4 π∫_0^r_ virn̅_ sat(r|M,z_ pri)
[ζ(r,R_ ap^ sec) - ζ(r,R_ cut)] r^2 dr .
Here r_ vir = r_ vir(M,z_ pri) is the virial radius of the halo in question, and R_ ap^ sec = R_ ap^ sec(L_ pri) and R_ cut = R_ cut(z_ pri) are the outer and inner radii of the conical volume used to select the secondaries. The function n̅_ sat(r|M,z) is the average, radial profile of satellites around haloes of mass M at redshift z, normalized such that
4 π∫_0^r_ virn̅_ sat(r|M,z) r^2 dr = 1 ,
and
ζ(r, R) =
1 if r ≤ R
1 - √(1 - R^2 / r^2) otherwise.
More specific expressions for λ_ sat(M,L,z) and f_ ap(M,L,z) are provided in .
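For completeness, the aperture fraction reduces to a one-dimensional quadrature once a normalised radial profile is available; a sketch (the toy uniform profile is only there to make the example self-contained):

```python
import numpy as np

def zeta(r, R):
    """Fraction of a thin shell of radius r whose projected radius is < R."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    out = np.ones_like(r)
    far = r > R
    out[far] = 1.0 - np.sqrt(1.0 - (R / r[far]) ** 2)
    return out

def f_aperture(n_bar, r_vir, R_out, R_in, npts=4096):
    """f_ap = 4 pi int_0^{r_vir} n_bar(r) [zeta(r,R_out) - zeta(r,R_in)] r^2 dr."""
    r = np.linspace(1e-4 * r_vir, r_vir, npts)
    y = n_bar(r) * (zeta(r, R_out) - zeta(r, R_in)) * r**2
    return 4.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))

# probability that a satellite of a r_vir = 0.3 halo lands in the annulus
uniform = lambda r: 3.0 / (4.0 * np.pi * 0.3**3) * np.ones_like(r)  # toy profile
print(f_aperture(uniform, r_vir=0.3, R_out=0.12, R_in=0.02))
```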
For the interlopers, one naively expects the abundance to be proportional to the number density of galaxies within the relevant range of luminosities and the volume of the secondary selection cone. However, being biased tracers of the mass distribution, galaxies are highly clustered which typically will boost the number density of galaxies in the vicinity of a bright primary. Moreover, this clustering strength is known to depend on halo mass, galaxy luminosity and redshift <cit.>, and to be affected by peculiar velocities, in particular due to large-scale infall <cit.>. We bypass the intricate complexities involved with modeling this clustering on small scales by modeling the expectation value for the number of interlopers as the product of an effective `bias', b_ eff, and the expectation value for the number of galaxies with L_ min() < L < in a randomly located conical selection volume, V_ cone(, ):
λ_ int(,) = b_ eff× V_ cone×n̅_ gal .
where each term on the right-hand side is a function of {,}. Here
n̅_ gal(L_ pri, z_ pri) = ∫_L_ min^L_ pri dL ∫_0^∞Φ(L|M) n(M,z_ pri) dM
is the average number density of galaxies at redshift with luminosity in the range [L_ min,],
with n(M,z) the halo mass function at redshift z, computed using the fitting function of <cit.>, and
V_ cone(L_ pri,z_ pri) = π [(R_ ap^ sec)^2 - R_ cut^2] 2(Δ V)^ sec/H(z_ pri) (1+z_ pri)^3
with H(z) the Hubble parameter[Note, there was a typo in Eq. (22) in , where the power-law index of (1+) was 2, rather than the correct 3]. The effective bias is modelled as
b_ eff(,) = η_0 (/10^10.5)^η_1 ( 1+)^η_2
where η_0, η_1, and η_2 are three free nuisance parameters that fully specify our interloper bias model, and that are constrained simultaneously with all other physical parameters. This model has proved to be sufficiently flexible to accurately model the full complexity of interloper abundance in realistic simulation-based mock data (see Section <ref>).
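Putting the interloper pieces together (the Hubble rate is passed in as a callable; the toy cosmology and parameter values in the usage lines are placeholders):

```python
import numpy as np

def b_eff(L_pri, z_pri, eta0, eta1, eta2):
    """Effective interloper bias with nuisance parameters eta_0, eta_1, eta_2."""
    return eta0 * (L_pri / 10**10.5) ** eta1 * (1.0 + z_pri) ** eta2

def v_cone(R_out, R_in, dv_sec, z_pri, H_of_z):
    """Comoving volume of the annular secondary selection cone."""
    return (np.pi * (R_out**2 - R_in**2) * 2.0 * dv_sec / H_of_z(z_pri)
            * (1.0 + z_pri) ** 3)

def lambda_int(L_pri, z_pri, R_out, R_in, dv_sec, n_gal_bar, H_of_z, eta):
    """Expected number of interlopers in the secondary selection cone."""
    return (b_eff(L_pri, z_pri, *eta)
            * v_cone(R_out, R_in, dv_sec, z_pri, H_of_z) * n_gal_bar)

H_flat = lambda z: 70.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)   # toy cosmology
print(lambda_int(1e10, 0.08, R_out=0.16, R_in=0.03, dv_sec=1100.0,
                 n_gal_bar=5e-3, H_of_z=H_flat, eta=(1.5, 0.1, 2.0)))
```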
§.§.§ The probability P(M,,)
The function P(M,,) describes the probability distribution function of primaries as a function of host halo mass, luminosity and redshift, and can be written as
P(M,,) = P(|M,) n(M,) (M,,) .
where n(M,z) is the halo mass function <cit.> and (M,L,z) is a `completeness', to be defined below. As in , if we assume that all primaries are true centrals, then we have that P(|M,) = Φ_(|M). However, in reality some primaries are misidentified satellites, and such impurities need to be accounted for. In we argued that the impact of these impurities is sufficiently small that it can be ignored. Although this was indeed the case for the small mock data sets used there, the impact of impurities can no longer be ignored when using data sets similar in size to the SDSS data analysed here. In fact, detailed tests showed that they can systematically bias the inferred scatter in the relation between halo mass and central luminosity, and we therefore devised the following scheme in order to account for impurities.
The vast majority of all impurities in realistic SDSS-like mocks (such as the Tier-3 mock described in <ref>) are those satellite galaxies which happen to be the brightest galaxy in their halo (even brighter than their central). In what follows, we refer to these as Type-I impurities. Since primaries are by definition the brightest galaxies in their selection cones, such brightest-halo-galaxy (hereafter BHG) satellites typically end up being selected as primaries, rather than their corresponding central. In rare cases, a primary is neither a true central, nor a BHG satellite. We refer to these as Type-II impurities, which arise, for example, if the true central is the BHG but is absent from the SDSS survey data, either because of fiber collisions or because it falls outside the window of the SDSS footprint. As detailed below, Type I impurities can be accounted for in our new theoretical modelling. However, Type-II impurities are virtually impossible to model accurately. Using detailed mock data sets, we therefore tuned our selection criteria in order to minimize the contribution of Type-II impurities. In particular, we found that we were able to significantly reduce their frequency by slightly enlarging the volume of the primary selection cone as described in <ref>; In particular, the new criteria reduce the fraction of Type-II impurities from ≳ 1% when using the old selection criteria used in , to ∼ 0.5% with our new selection criteria. More importantly, in mock data, the new selection criteria predominantly eliminate the presence of Type-II impurities that are extreme outliers of the average relation between halo mass and primary luminosity, and which are the main culprits for causing mild systematic errors in the inferred galaxy-halo connection (specifically in the scatter, σ_). Detailed tests with mock data, presented in <ref> below, show that our new primary selection criteria sufficiently suppress the impact of Type-II impurities that it allows for unbiased estimates of the galaxy-halo connection (at least for a survey the size of SDSS).
Therefore, in what follows, we ignore Type-II impurities and assume that primaries are either true centrals or BHG satellites (i.e., Type-I impurities). Hence, we have that
P(L_ pri|M,z) = P(L_ c=L_ pri|M,z) P(L_ bs<L_ pri|M,z)
+ P(L_ c<L_ pri|M,z) P(L_ bs=L_ pri|M,z) .
Here P(L_ bs<L|M,z) is the probability that the brightest satellite in a halo of mass M at redshift z has a luminosity less than L, which is given by
P(L_ bs<L|M,z) = exp[ - Λ(L|M,z) ] .
Here Λ(L|M,z) is the expectation value for the number of satellites brighter than L in a halo of that mass and redshift, which in turn is given by
Λ(L|M,z) = ∫_L^∞ dL' Φ_ s(L'|M)
Differentiating P(L_ bs<L|M,z) with respect to luminosity yields:
P(L_ bs=L|M,z) = Φ_ s(L|M,z) e^-Λ(L|M,z)
The two other terms that appear in equation (<ref>) are P(L_ c=L_ pri|M,z), which is simply equal to the central CLF, Φ_ c(L_ pri|M), and its cumulative distribution, which is given by
P(L_ c<L|M,z) = ∫_0^L dL' Φ_ c(L'|M) .
The expression for P(|M,) given by equation (<ref>), when substituted in equation (<ref>), accurately forward models the impact of the vast majority of impurities.
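The following sketch evaluates P(L_pri|M,z) from user-supplied central and satellite CLF callables with signature (L, M), following the equations above; the toy CLFs at the bottom are invented and serve only to exercise the function:

```python
import numpy as np

def _trap(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def Lambda_sat(L, M, clf_sat, L_hi=1e13, npts=4096):
    """Expected number of satellites brighter than L in a halo of mass M."""
    Lg = np.geomspace(L, L_hi, npts)
    return _trap(clf_sat(Lg, M), Lg)

def cdf_cen(L, M, clf_cen, L_lo=1e7, npts=4096):
    """P(L_c < L | M): cumulative central CLF."""
    Lg = np.geomspace(L_lo, L, npts)
    return _trap(clf_cen(Lg, M), Lg)

def p_primary(L, M, clf_cen, clf_sat):
    """P(L_pri = L | M): central-and-brightest term plus brightest-satellite term."""
    lam = Lambda_sat(L, M, clf_sat)
    p_bs_lt = np.exp(-lam)                    # P(L_bs < L | M)
    p_bs_eq = clf_sat(L, M) * np.exp(-lam)    # P(L_bs = L | M)
    return clf_cen(L, M) * p_bs_lt + cdf_cen(L, M, clf_cen) * p_bs_eq

# toy CLFs, purely to exercise the function
toy_cen = lambda L, M: np.exp(-(np.log10(L / 1e10)) ** 2 / 0.08) / L
toy_sat = lambda L, M: 5.0 / L * (L / 5e9) ** -0.1 * np.exp(-(L / 5e9) ** 2)
print(p_primary(2e10, 1e13, toy_cen, toy_sat))
```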
The final ingredient we need is an expression for the completeness (M,L,z), which is defined as the fraction of haloes of mass M at redshift z= with a central or brightest satellite of luminosity that falls within the survey volume of the SDSS, and that is selected as a primary by our selection criteria. In general we have that (M,L,z) = (M|L,z) _0(L,z). As is evident from equation (<ref>), the modelling in is independent of _0, which drops out (see also ). In other words, we only need to account for the halo mass dependence of the completeness. As shown in Appendix <ref>, this mass-dependence is already accounted for by our forward-modelling of the Type-I impurities. Hence, we set (M,L,z) = 1 throughout.
§.§.§ The probability P(,|M,,)
In order to model the line-of-sight kinematics of the secondaries we proceed as follows. Since secondaries consist of both true satellites and interlopers, which have distinct phase-space distribution, we write
P(, |M,L,z) = f_ int P_ int(, |L,z) +
[1 - f_ int] P_ sat(, |M,L,z)
with the interloper fraction defined as
f_ int = f_ int(M,L,z) = λ_ int(L,z)/λ_ tot(M,L,z) ,
where λ_ int and λ_ tot have been individually defined in <ref>. We first describe how we compute P_ sat(, |M,L,z) (in <ref>) before detailing our treatment of interlopers (<ref>).
§.§.§ The phase-space distribution of satellites:
In computing the joint 2D probability P_ sat(, |M,L,z), we assume that the baryonic matter of the central galaxy has a negligible impact on the kinematics of its satellite galaxies[We address the accuracy of this assumption, which is common to virtually every study of satellite kinematics, in a forthcoming paper (Baggen et al., in prep).], and we model the satellites as tracers in a pure dark matter halo which is fully characterized by its halo mass and concentration. Throughout, we use the median concentration-halo mass relation of <cit.>, and we emphasize that our modelling is fairly insensitive to the exact choice of the concentration-mass relation within reasonable bounds of its theoretical uncertainty. We also assume the central galaxy to be located at rest at the centre of the halo. As shown in , relaxing this assumption by allowing for non-zero velocity bias for centrals has negligible impact on 's inference.
Under these assumptions we have that
P_ sat(R_ p, Δ V|M,L,z) = P(R_ p|M,L,z) P(Δ V|R_ p,M,z) ,
with
P(R_ p|M,L,z) = 2 π R_ pΣ̅(R_ p|M,z)/f_ ap(M,L,z) .
Here f_ ap is defined in equation (<ref>), and
Σ̅(R_ p|M,z) = 2 ∫_R_ p^r_ sp(M,z)n̅_ sat(r|M,z) r dr/√(r^2 - R_ p^2) ,
is the projected, normalized number density distribution of satellite galaxies.
In , we made the simplified assumption that the line-of-sight velocity distribution, P(|,M,z), is a Gaussian, which is completely characterized by the line-of-sight velocity dispersion σ_ los(|M,L,z). However, there is no a priori reason why the LOSVD should be Gaussian. In fact, the detailed shape of the LOSVD contains valuable information regarding the velocity anisotropy <cit.>, which we aim to constrain using . In this work we therefore improve upon by extending our modelling of the kinematics to fourth-order and by describing P(|,M,z) as a generalised Gaussian with a projected velocity dispersion, σ_ los(|M,L,z), and a line-of-sight kurtosis, κ_ los(|M,L,z). The projected, line-of-sight velocity dispersion is related to the intrinsic, radial velocity dispersion, σ_r^2(r|M,z), according to
σ^2_ los(R_ p|M,z) = 2/Σ̅(R_ p)∫_R_ p^r_ sp(M,z)[ 1 - β(r|M) R_ p^2/r^2] ×
n̅_ sat(r|M,z) σ_r^2(r|M,z) r dr/√(r^2 - R_ p^2) ,
where σ_r^2(r|M,z) follows from the second order Jeans equation for a spherically symmetric NFW halo <cit.>. Here, the local anisotropy parameter
β(r|M) ≡ 1 - σ^2_ t(r|M)/2 σ^2_ r(r|M)
relates the tangential (σ_ t) and radial (σ_ r) velocity dispersions. For our fiducial model we assume that β is independent of both radius and halo mass, and we constrain this `average' velocity anisotropy using the satellite kinematics data. In <ref> we discuss the implications of adopting more flexible models in which the anisotropy parameter is allowed to depend on halo mass. Note that the upper-integration limit of equations (<ref>) and (<ref>) is set to r_ sp(M,z) = f_ sp r_ vir(M,z), instead of r_ vir(M,z), to account for a population of splash-back galaxies (see <ref>).
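A bare-bones numerical version of this second-order calculation, assuming constant anisotropy and using simple trapezoidal quadrature that skips the integrable singularity at r = R_p (the toy tracer profile, mass profile and unit conventions are ours):

```python
import numpy as np

G = 4.301e-9  # Mpc (km/s)^2 / Msun  (assumed unit convention)

def _trap(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def sigma_r2(r, nu, M_enc, beta):
    """Radial dispersion profile: solution of the 2nd-order Jeans equation
    for constant beta, with sigma_r -> 0 at the outer boundary r[-1]."""
    f = G * M_enc * nu * r ** (2.0 * beta - 2.0)
    seg = 0.5 * (f[1:] + f[:-1]) * np.diff(r)
    outward = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])  # int_r^{r_out}
    return outward / (nu * r ** (2.0 * beta))

def sigma_los2(Rp, r, nu, sig_r2, beta):
    """Project sigma_r^2 along the line of sight at projected radius Rp."""
    m = r > Rp
    kern = nu[m] * r[m] / np.sqrt(r[m] ** 2 - Rp ** 2)
    Sigma = _trap(kern, r[m])
    num = _trap((1.0 - beta * Rp**2 / r[m] ** 2) * sig_r2[m] * kern, r[m])
    return num / Sigma

# toy NFW-like setup just to show the call pattern
r = np.geomspace(0.005, 0.6, 3000)                         # [Mpc]
nu = (r / 0.06) ** -1 * (1 + r / 0.06) ** -2               # tracer profile (unnormalised)
M_enc = 1e13 * ((np.log(1 + r / 0.05) - (r / 0.05) / (1 + r / 0.05))
                / (np.log(1 + 6.0) - 6.0 / 7.0))           # NFW enclosed mass, c = 6
sig2 = sigma_r2(r, nu, M_enc, beta=0.1)
print(np.sqrt(sigma_los2(0.1, r, nu, sig2, beta=0.1)))     # sigma_los [km/s]
```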
The projected, fourth moment of the LOSVD at projected separation is given by
v_ los^4(R_ p) = 2/Σ(R_ p)∫_R_ p^r_ sp[1-2β R_ p^2/r^2 + 1/2β (1+β) R_ p^4 / r^4 ] ×
v_r^4(r|M,z) n_ sat(r|M,z) r dr/√(r^2-R_ p^2) ,
where β is β(r|M) in general, and v_r^4(r|M,z) follows from the fourth-order spherical Jeans equation <cit.>, which for radius-independent anisotropy is given by:
v_r^4(r |M,z) = 3 G/r^2β n̅_ sat(r|M,z)×
∫_r^r_ sp dr' (r')^2β n̅_ sat(r'|M,z) σ_r^2(r'|M,z) M(r')/r'^2
Here M(r) is the enclosed mass of the spherical NFW halo inside radius r. Given the fourth-order line-of-sight velocity moment, we can compute the projected kurtosis as
κ_ los(R_|M,z) = v_ los^4 (R_|M,z) / σ_ los^4 (R_|M,z).
Finally, in order to account for non-zero redshift errors in the data, the line-of-sight velocity dispersion is modified according to σ_ los→√(σ^2_ los + 2 σ^2_ err), with σ_ err = 15 the typical SDSS redshift error <cit.>. Having computed both the velocity dispersion and kurtosis, we model the detailed shape of the LOSVD, P(|,M,z), using a symmetric (all odd moments are equal to zero), generalized form of the normal distribution, known as the <cit.> distribution[The Langdon distribution is often used to characterize the non-Maxwellian velocity distribution of electrons heated due to the inverse-Bremsstrahlung process <cit.>.]:
P_() = 12 Γ(1/m) ma_m exp(-|/a_m|^m) .
Here the parameters a_m and m are related to the variance, σ^2, and the kurtosis, κ, according to
σ^2 = a_m^2 Γ(3/m)Γ(1/m) and κ = Γ(5/m) Γ(1/m) Γ^2(3/m)
The reason for using this particular distribution function is purely one of convenience; P_() has a nice analytic closed form, is simple to compute, has all the features required of a probability distribution (normalized and positive-definite), and includes the Gaussian as a special case (m=2).
In , we use equation (<ref>) to compute a_m and m from σ^2_ los( |M,z) and κ_ los( |M,z), after which we compute
P(|,M,z) = P_() Γ(1/m)Γ(1/m) - Γ(1/m, (/a_m)^m)
which is properly normalized such that its integral from - to is unity.
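In practice these relations are inverted numerically; a sketch using SciPy, where the bracketing interval for m is an assumption chosen to cover the kurtosis values of interest:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma, gammainc

def kurtosis_of_m(m):
    """kappa(m) = Gamma(5/m) Gamma(1/m) / Gamma(3/m)^2  (kappa = 3 for m = 2)."""
    return gamma(5.0 / m) * gamma(1.0 / m) / gamma(3.0 / m) ** 2

def langdon_params(sigma2, kappa):
    """Solve for (a_m, m) given the variance and kurtosis."""
    m = brentq(lambda x: kurtosis_of_m(x) - kappa, 0.6, 20.0)
    a_m = np.sqrt(sigma2 * gamma(1.0 / m) / gamma(3.0 / m))
    return a_m, m

def p_los(dv, sigma2, kappa, dv_max):
    """Generalised-Gaussian LOSVD, renormalised to the window |dv| <= dv_max."""
    a_m, m = langdon_params(sigma2, kappa)
    pdf = m / (2.0 * a_m * gamma(1.0 / m)) * np.exp(-np.abs(dv / a_m) ** m)
    return pdf / gammainc(1.0 / m, (dv_max / a_m) ** m)   # truncation normalisation

# a mildly flat-topped LOSVD: sigma = 300 km/s, kurtosis = 2.6, |dV| < 1500 km/s
print(p_los(np.array([0.0, 300.0, 900.0]), 300.0**2, 2.6, 1500.0))
```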
§.§.§ The phase-space distribution of interlopers
In we assumed that interlopers have a constant projected number density and a uniform distribution in line-of-sight velocities, so that P_ int(, |L,z) = / [ (^2 - ^2)]. Here = (L) and = R_ cut(z) are the outer and inner radii of the conical volume used to select secondaries around primaries of luminosity L at redshift z and is the corresponding line-of-sight depth (see <ref>).
However, as discussed in detail in , a subset of the interlopers are either infalling or splash-back galaxies and have kinematics that are very similar to the true satellites. Assuming that the velocity distribution of interlopers is uniform ignores this `kinematically coupled interloper population', which causes to overestimate the number of satellite galaxies. Although the resulting offsets were modest for the smaller mock samples studied in , they cause a significant, systematic bias (predominantly in the satellite CLF parameters b_0, b_1 and b_2) when using larger samples. This motivated us to develop a more sophisticated treatment for the phase-space distribution of interlopers.
Based on a detailed assessment of interlopers in our mock data sets (see and <ref> for details), we now model the interlopers as consisting of three fairly distinct populations: (i) a population of `splash-back galaxies' associated with the host halo of the primary, and extending out to a distance r_ sp from the primary, (ii) a roughly uniform background population of `true' interlopers that are uncorrelated with the primary, and (iii) an `infalling' population of interlopers, located outside of the splash-back radius. This infall motion, on large linear scales, is responsible for redshift space distortions in clustering data known as the Kaiser effect <cit.>.
We assume that the phase-space distribution of splash-back galaxies can be modelled similar to that of the satellites; i.e., they follow the same n_ sat(r|M,z), extrapolated to beyond the halo's virial radius, and their kinematics obey the same Jeans equations. The only difference is that they are located between the host halo's virial radius, r_ vir, and a splash-back radius r_ sp≡ f_ sp r_ vir. In order to account for a population of splash-back galaxies we simply change the upper-integration limit of equations (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) from r_ vir(M,z) to r_ sp(M,z). Throughout we adopt f_ sp = 2, which is motivated by estimates of the splash-back radius in simulations <cit.>. In addition, detailed tests with mock data sets (see <ref>) show that yields unbiased estimates of the velocity anisotropy parameter β for f_ sp≳ 1.5. We find that not accounting for splash-back galaxies (i.e., setting f_ sp=1) results in a weak bias of β, without significantly affecting any of the other parameters. On the other hand, setting a much larger value for splash-back radius, like f_ sp=3, yields posteriors that are indistinguishable from those for f_ sp=2. Hence, our choice of f_ sp=2 is reasonable and our results are robust against modest changes in the adopted value of f_ sp.
We assume that both the uncorrelated `background' interlopers (bg) as well as the `infalling' interlopers (inf) have a uniform angular distribution on the sky, such that their phase-space distribution can be written as
P_ int(, ) = 2 /(^2 - ^2) [P_ bg() + P_ inf()] ,
where the term in square brackets is normalized such that its integral from - to + is unity. Since the secondary selection volume is conical in shape, the backside has a larger volume than the front (see Fig. <ref>). Note that all the secondaries of any given primary have luminosities above a fixed threshold. Therefore, due to the conical selection volume, we expect the velocity distribution of the uncorrelated background interlopers, P_ bg(), to increase with . In particular, P_ bg() is proportional to the comoving volume of the corresponding velocity slice of the secondary selection cone, and we therefore adopt
P_ bg() = 3/H(z') d^2(z')/d^3() - d^3() (1+z_ pri)
Here z' = z_ pri + (1+z_ pri) /c and d(z) is the comoving distance out to redshift z.
We have experimented with modelling the line-of-sight velocity distribution of the infalling population of interlopers, P_ inf(), using the linear <cit.> model, but this did not yield sufficiently accurate results. We therefore opted for a semi-empirical approach. Tests with mock data (see <ref>) show that P_ inf() is accurately fit by a Gaussian, P_ inf(Δ V) = A_ inf ^-1/2(Δ V^2 / σ_ inf^2), with A_ inf and σ_ inf free parameters that vary from primary to primary. Rather than trying to devise an analytical model for these parameters, we use the following data-driven approach. Around each primary we select a set of `tertiary' galaxies in a conical volume similar to that used for the secondaries, but at larger projected distances from the primary. More specifically, the tertiary selection cone is specified by an inner projected radius R_ min^ int = 0.6 σ_200 and an outer projected radius R_ max^ int = 0.9 σ_200, and by the same redshift depth as the secondary selection cone (shown by the outermost hollow annular cone in Fig. <ref>). Tests with mock data (see Fig. <ref>) indicate that (i) the line-of-sight velocity distribution of these tertiaries is virtually indistinguishable from that of the infalling interlopers among the secondaries, (ii) the results are insensitive to the exact radii of the tertiary selection cone[A weak dependence of σ_ inf on projected radius is apparent in Fig. <ref>, and has also been noted by <cit.>. We have experimented with implementing such a R_-dependence and extrapolating this
to the radial interval of the secondary selection cone, but found that this had a negligible impact on the inference.], and (iii) less than one percent of the tertiaries are actual satellites of the primary. The latter indicates that for most primaries R_ min^ int lies well outside the virial radius of the host halo of the primary, as required. We assume that A_ inf and σ_ inf are each quadratic functions of log(L_ pri) and (including the cross-term), and determine the corresponding 6 × 2=12 coefficients by simultaneously fitting the velocity distribution of all tertiaries around all primaries. These coefficients are then used in to model the line-of-sight velocity distribution of the infalling population of interlopers.
Note that this rather elaborate model for the phase-space distribution of interlopers has zero degrees of freedom. The only degrees of freedom for the interloper-modelling is with regard to their number density, which is modelled via the effective bias described by equation (<ref>).
§.§ Modelling the number of secondaries
As mentioned in <ref>, the data vector _ NS that describes the number of secondaries for a random subset of N_ NS primaries, including those with zero secondaries, contains valuable information regarding the occupation statistics of satellite galaxies, and hence, the CLF.
Similar in spirit to how we compute the likelihood for the satellite kinematics by marginalizing over halo mass (cf. equation [<ref>]), the likelihood for _ NS given a model θ is given by
_ NS = ∏_i=1^N_ NS∫ P(|, ) P( | , , ) .
Here P(|M,L,z) is given by equation (<ref>), while
P(M|L,z) = P(L, M, z)/∫ M P(L,M,z) ,
with P(L,M,z) given by equation (<ref>).
§.§ Modelling the galaxy luminosity function
The final observational constraint that we use to constrain the halo occupation statistics is the comoving number density of SDSS galaxies, n_ gal(L_1,L_2) in ten 0.15 dex bins in luminosity, [L_1,L_2] covering the range 9.5 ≤log L/(h^-2) ≤ 11.0 (see <ref>). We include this data in our inference problem by defining the corresponding log-likelihood
ln_ LF(_ obs|) = -1/2 [() - _ obs]^t Ψ [() - _ obs] .
Here _ obs is the data vector and () is the corresponding model prediction, computed from the CLF and the halo mass function using
n_ gal(L_1,L_2) = ∫_L_1^L_2 L ∫_0^∞Φ(L|M) n(M,z_ SDSS) M ,
where z_ SDSS = 0.1 is a characteristic redshift for the SDSS data used,[We have verified that our results do not depend significantly on this choice.] and Ψ is the precision matrix, which is the inverse of the covariance matrix, with <cit.> correction (see <ref>).
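For completeness, this likelihood term is a two-line function:

```python
import numpy as np

def lnlike_lf(n_model, n_obs, precision):
    """ln L_LF = -0.5 (n_model - n_obs)^T Psi (n_model - n_obs)."""
    d = np.asarray(n_model) - np.asarray(n_obs)
    return -0.5 * d @ precision @ d

# toy usage with a diagonal precision matrix
print(lnlike_lf([1.0, 2.0], [1.1, 1.9], np.diag([100.0, 100.0])))
```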
§.§ Numerical implementation
The fiducial model used by is characterized by a total of 16 free parameters: 6 parameters describing Φ_ c(L|M) (namely log M_1, log L_0, γ_1, γ_2, σ_13 and σ_ slope), 4 parameters describing Φ_ s(L|M) (namely α_ s, b_0, b_1, and b_2), 1 parameter (β) to quantify the average velocity anisotropy of satellite galaxies, 2 parameters (γ and ℛ) that describe the radial number density profiles of satellite galaxies (see equation [<ref>]), and 3 nuisance parameters (η_0, η_1 and η_2) that specify the abundance of interlopers (see equation [<ref>]). We assume broad uniform priors on all parameters, except for β. The value of the anisotropy parameter β formally ranges from -∞, for maximal azimuthal anisotropy, to +1, for maximal radial anisotropy, which is difficult to probe with our MCMC sampler. Hence, in order to assure roughly equal amounts of parameter space for radially and azimuthally anisotropic models, we sample ℬ≡ -log(1-β), rather than β. In particular, we adopt uniform priors over the range -1 ≤ℬ≤ 1, which corresponds to -9 ≤β≤ 0.9.
Probing the posterior P(|) over the fiducial 16-dimensional parameter space requires close to a million likelihood evaluations, each of which involves thousands of numerical integrations (see for details). In order to make this problem feasible we perform the Bayesian inference under the assumption of a fixed normalized, radial number density distribution of satellite galaxies, i.e., fixed values for γ and . This has the advantage that f_ ap(M,L,z) and P_ int(, |L,z) are all independent of the model, , while P_ sat(, |M,L,z) only depends on a single anisotropy parameter (see <ref>). We compute P_ sat(, |M,L,z) for each central-satellite pair for 10 values of ℬ between -1 and 1 (or β between -9 and 0.9), and we interpolate it for intermediate values. Combined with the fact that we perform all integrations over halo mass using Gaussian quadrature with fixed abscissas (see ) implies that we only need to compute all these quantities once for each primary and/or secondary, which we then use to find the posterior P(|) in the 14-dimensional parameter space at fixed (γ,). As a consequence, for a single evaluation of the full likelihood
lnℒ_ tot = lnℒ_ SK + lnℒ_ NS + lnℒ_ LF ,
for the full SDSS data consisting of 18,373 primaries with at least one secondary and a total of 30,431 secondaries (see <ref>), it only takes of the order of 200 milliseconds using a single run-of-the-mill CPU. This is sufficiently fast, that it allows one to run different Monte-Carlo Markov Chains for different assumptions regarding n̅_ sat(r|M,z), and to find the best-fit radial profile, marginalized over all other model parameters. First, we combine the posteriors from separate MCMC runs on 15× 15 grid in (γ, log)-space, each time marginalizing over the other 14 parameters, to constrain n̅_ sat(r|M,z) (see Appendix <ref>). Having determined the values of γ and that maximize _ tot, we then run a MCMC sampler to infer the full posterior P(|) keeping γ and fixed at these best-fit values. The MCMC sampler used to probe our 14 dimensional parameter space is the affine invariant stretch-move algorithm of <cit.>. Throughout we use 1,000 walkers and the proposal density advocated by <cit.>. This results in typical acceptance fractions between 0.3 and 0.4, and the MCMC chain is typically converged after about 500 steps (i.e., 5 × 10^5 likelihood evaluations). We have experimented at length with other initial guesses, and find the results to be extremely robust, and to always fully converge well under 1 million likelihood evaluations. Finally, throughout we adopt flat priors on all parameters, with very broad prior bounds that do not affect our inference.
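This setup maps naturally onto the publicly available emcee package, which implements the same affine-invariant stretch move; the snippet below is a schematic stand-in in which the log-posterior and prior bounds are placeholders, not the actual likelihood described above:

```python
import numpy as np
import emcee

NDIM, NWALKERS, NSTEPS = 14, 1000, 500

def ln_posterior(theta):
    """Stand-in for ln L_SK + ln L_NS + ln L_LF plus flat priors."""
    if np.any(np.abs(theta) > 10.0):      # toy flat-prior bounds
        return -np.inf
    return -0.5 * np.sum(theta**2)        # placeholder likelihood

p0 = 1e-2 * np.random.default_rng(0).standard_normal((NWALKERS, NDIM))
sampler = emcee.EnsembleSampler(NWALKERS, NDIM, ln_posterior)
sampler.run_mcmc(p0, NSTEPS, progress=False)
print("mean acceptance fraction:", sampler.acceptance_fraction.mean())
```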
§ VALIDATION WITH MOCK DATA
§.§ Tier-3 mock data
We validate the performance of using the Tier-3 mock data introduced in . This mock sample is constructed using the z=0 halo catalogue of the high-resolution SMDPL simulation <cit.>, which uses 3840^3 particles to trace structure formation in a cubic volume of (400 )^3, adopting cosmological parameters consistent with <cit.>.
Each host halo in the catalogue with a mass M_ vir≥ 3 × 10^10 is populated with mock galaxies with luminosities L ≥ 10^8.5 according to a particular fiducial CLF model. Each central galaxy is given the position and velocity of its halo core, defined as the region that encloses the innermost 10% of the halo virial mass. Satellite galaxies are assigned the phase-space coordinates of the subhaloes with the highest peak halo masses. If the number of satellites, drawn from the input CLF, exceeds the number of resolved subhaloes in a specific host halo, we randomly assign the excess satellites the halo-centric positions and velocities of subhaloes hosted by other haloes of similar mass. Note that no assumption of quasi-equilibrium dynamics has been made in the mock making procedure. Therefore, our Tier-3 mock satellites obey the Jeans equations only as much as the live subhaloes do in the SMDPL simulation.
Once all haloes have been populated with mock galaxies, we construct a mock SDSS survey as follows. First, we place a virtual observer at a random position within the simulation volume. We use this virtual observer to convert the (x, y, z) coordinates of each galaxy into a cosmological redshift, z_ cosm, and sky coordinates (using a random orientation). If necessary, the simulation box is repeated with random sets of right angled rotations until the entire cosmological volume out to z_ cosm = 0.20 is covered. Next, we overlay the SDSS DR7 footprint on the simulated sky, and only keep galaxies with m_r ≤ 17.6 that lie within the SDSS DR7 survey window. Redshift-space distortions are simulated by adding (1 + z_ cosm) v_ los / c to the redshift of each galaxy, with v_ los the galaxy's peculiar velocity along the line-of-sight. Spectroscopic redshift errors in the SDSS are simulated by adding a random Δ z from a Gaussian with scatter σ_ err = 15 <cit.>. Finally, we simulate the effect of fibre collisions induced spectroscopic incompleteness following the method of <cit.>. Once the mock spectroscopic survey is completed, we select primaries and secondaries using the selection cones described in <ref>, and assign spectroscopic weights to all secondaries using the method described in <ref>. Similar to what we do for the real data, we remove primaries with an aperture completeness w_ app< 0.8 and exclude secondaries that are located within 55” from their primary. Finally, we use the mock data to compute the comoving abundances of galaxies in the ten luminosity bins described in <ref> using the same method as used for the real SDSS data (i.e., by taking into account the SDSS DR7 footprint with its mask and window functions as well incompleteness caused by fiber collision).
§.§ Inference from the Tier-3 mock
The Tier-3 mock data described above is used to test the performance of . As described in <ref>, we first determine the best-fit values of γ and , which characterize the radial number density profile of satellite galaxies, properly marginalized over all other model parameters. In the Tier-3 mock, satellites are placed in subhaloes which are known to have a radial profile that differs starkly from that of the dark matter. The true radial number density distribution closely follows the generalised NFW shape (equation <ref>) with (γ,) ≈ (0.0, 2.6). Hence, the radial profile of satellites, in our Tier-3 mock, is cored and has a scale-radius that is significantly larger than that of their dark matter host haloes. This feature of the DM-only simulation is consistent with many previous studies <cit.>. The best-fit values obtained by , when applied on the Tier-3 mock data is (γ,) = (0.0, 2.5), in almost perfect agreement with the profile inferred directly from the N-body simulation. Thus, in agreement with what we reported in , can accurately recover the radial profile of satellite galaxies. Next, we keep (γ,) fixed at the best-fit values, (0.0, 2.5), and run to infer the posterior distribution of the remaining 14 parameters. The brown lines in Fig. <ref> show the CLF constraints thus obtained for our fiducial model[No results are shown for the three nuisance parameters η_0, η_1 and η_2 that characterize the abundance of interlopers, or for the velocity anisotropy parameter β, which is discussed in detail in <ref>.]. Note that the posteriors of all CLF parameters are in excellent agreement with the input values, shown as blue dashed lines. For comparison, the posteriors indicated in green correspond to a model that allows for mass-dependence of the orbital anisotropy, and will be discussed in <ref>.
Panel (a) of Fig. <ref> demonstrates that accurately fits the galaxy luminosity function. The blue circles show the mean number density of galaxies in logarithmic bins of luminosity of width 0.15 dex (roughly the width of the blue vertical bars), while the brown band indicates the 95% confidence interval as inferred from 's posterior.
Fig. <ref> panel (b) shows the relationship between halo mass and median central galaxy luminosity. Note that the true input model (blue squares) is perfectly recovered by (brown band, indicating the 95 % confidence interval from the posterior). The pink dots indicate the true halo masses and luminosities of all primaries (both true centrals and impurities) in the Tier-3 satellite kinematics sample. The magenta triangles show their median values of log in bins of log M. Note that, due to selection bias, these do not agree with the true input model (blue squares). A more luminous primary has a larger secondary selection volume associated with it, as well as a larger luminosity range between and L_ min(). Therefore, at any given redshift, brighter primaries are more likely to have at least one secondary than less luminous primaries in haloes of the same mass, hence the brighter primaries get preferentially selected in the _ SK data. This causes the median luminosity of primaries in the satellite kinematics sample to be biased high with respect to that of true centrals. This effect is especially pronounced at the low halo mass end, where the expectation value for the number of secondaries is lowest. Despite this strong and unavoidable selection bias, perfectly recovers the true input relation between central luminosity and halo mass, and not the biased relation! This indicates that , in its forward modelling, accurately accounts for selection bias and other systematics such as the presence of impurities.
Panels (c) and (d) of Fig. <ref> show, respectively, the logarithmic scatter in central galaxy luminosity at fixed halo mass, σ_, and the normalization of the satellite CLF, ϕ_^*. Once again, blue dots show the true input values, while the brown shaded regions mark the 95% confidence interval of the posterior distribution inferred with . Finally, the brown-shaded histograms in panels (e) and (f) show, respectively, the posterior distributions for the faint-end slope of the satellite CLF, α_, and that of the orbital anisotropy parameter, β. The blue, vertical line in panel (e) indicates the true input value of α_, while the blue-shaded region in panel (f) show the range of mean velocity anisotropy of subhaloes in different halo masses in the SMDPL simulation used to construct the Tier-3 mock (discussed further in <ref>). As is evident, in each case the posterior constraints are in excellent agreement with the input values, which indicates that can put tight and accurate constraints on the more intricate aspects of the galaxy-halo connection, beyond the mere relation between halo mass and central luminosity.
Fig. <ref> shows the interloper fraction, f_ int, as a function of primary galaxy luminosity in six redshift bins (indicated by different colours). Solid circles indicate the true interloper fractions in the Tier-3 mock sample with the error bars computed assuming Poisson statistics. The coloured bands show the corresponding posterior predictions as inferred by , which are in good agreement with the true interloper fractions. Hence, correctly distinguishes satellites from interlopers, at least in a statistical sense, and accurately recovers their relative prevalence as a function of luminosity and redshift of the primary.
Finally, Figs. <ref> and <ref> compare the posterior predictions for the HOD and CLF, respectively, to their true input used to construct the Tier-3 mock data (solid dots). In particular, Fig. <ref>, shows the average number of central (circles) and satellite (triangles) galaxies per halo, above four different luminosity thresholds (L_ th), as a function of host halo mass. Interestingly, the true satellite mean occupation in our mock data deviates significantly from a simple power-law (which is a common assumption in the literature), and accurately recovers that complexity in its shape across all luminosities and halo masses. Fig. <ref> plots the central (purple) and satellite (orange) CLFs for 8 different halo masses, as indicated in the top-right corner of each panel. Note that in all cases the halo occupation statistics are recovered with exquisite precision and accuracy.
§.§ Velocity Anisotropy in the Tier-3 Mock
Unlike previous studies of satellite kinematics, which used satellite velocity dispersions in bins of primary luminosity, a unique aspect of is that it models the full probability distribution P(, |M,L,z). By modelling the full 2D phase-space information, rather than just the second moment of velocity, has the potential to break the mass-anisotropy degeneracy that hampers dynamical models which only rely on measurements of the satellite velocity dispersions. Hence, is able to simultaneously constrain the halo masses of the primaries, as well as the velocity anisotropy of its secondaries.
The brown shaded contours in Fig. <ref> show the 68 and 95 percent confidence intervals on the orbital anisotropy parameter β as inferred by from the Tier-3 mock data. Note that the constraints (β = 0.11 ± 0.05) are remarkably tight and indicate a mild, radial anisotropy. Recall that the Tier-3 mock was constructed by placing satellite galaxies inside subhaloes in the SMDPL simulation. Hence, these satellite galaxies have the same orbital anisotropy as those subhaloes. The solid, black line indicates the average anisotropy parameter of subhaloes in the SMDPL simulation as a function of host halo mass. It, too, indicates a mild radial anisotropy in reasonable agreement with the constraints obtained with . However, this comparison is not entirely fair. After all, our secondary selection criteria typically only select secondaries with projected separations R_ ap^ sec 0.4 (see <ref>). Therefore, it is more meaningful to compare our posterior constraints with the orbital anisotropy of subhaloes located in the inner regions of their host haloes. The blue, solid line in Fig. <ref> shows the orbital anisotropy of subhaloes with 3D halo-centric radii less than 0.4. These are in better agreement with 's posterior constraints, especially given that the satellite kinematics data is dominated by haloes above 10^12. Hence, we conclude that , by using the full line-of-sight velocity distributions of the secondaries, is indeed able to break the mass-anisotropy degeneracy and obtain simultaneous, reliable constraints on both halo mass and orbital anisotropy.
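For concreteness, the orbital anisotropy discussed here follows the usual definition β = 1 - σ_t²/(2σ_r²), with σ_t² the total tangential velocity dispersion. The sketch below shows how such a value can be estimated from halo-centric positions and velocities of tracers (e.g., subhaloes); it is a minimal illustration with assumed array layouts, not the actual SMDPL measurement pipeline.

```python
import numpy as np

def anisotropy_parameter(pos, vel):
    """Estimate beta = 1 - sigma_t^2 / (2 sigma_r^2) from halo-centric
    positions and velocities (both arrays of shape N x 3)."""
    r_hat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    v_rad = np.sum(vel * r_hat, axis=1)          # radial velocity components
    v_tan = vel - v_rad[:, None] * r_hat         # tangential velocity vectors
    sigma_r2 = np.var(v_rad)
    sigma_t2 = np.sum(np.var(v_tan, axis=0))     # total tangential variance (two dof)
    return 1.0 - sigma_t2 / (2.0 * sigma_r2)

# sanity check: isotropic Gaussian velocities should give beta ~ 0
rng = np.random.default_rng(0)
pos = rng.normal(size=(20000, 3))
vel = rng.normal(size=(20000, 3))
print(anisotropy_parameter(pos, vel))            # ~ 0
```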
Thus far we have only considered models in which the orbital anisotropy is a `universal' constant, independent of halo mass or halo-centric radius. However, a detailed analysis of the orbital anisotropy of dark matter subhaloes in the SMDPL simulation (and hence our Tier-3 mock satellites) reveals a rather complicated dependence on both halo mass and halo-centric radius (see Fig. C1 in ). Indeed, the blue and black curves in Fig. <ref> indicate some dependence on halo mass, albeit weak.
We therefore also analysed the Tier-3 mock data using a more flexible model with a mass-dependent orbital anisotropy parameter, given by β(M) = 1 - 10^-ℬ(M), with
ℬ(M) =
\begin{cases}
ℬ_{12} & \text{if } M < 10^{12} \\
ℬ_{14} & \text{if } M > 10^{14} \\
ℬ_{12} + \tfrac{1}{2}\,(ℬ_{14} - ℬ_{12})\,\log M_{12} & \text{otherwise}
\end{cases}
Here both ℬ_12 and ℬ_14 are free parameters for which we adopt uniform priors ranging from -1 to +1. Hence, by replacing our fiducial model, in which β is independent of halo mass, with this mass-dependent model we add one extra free parameter to the mix.
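In practice this model is just a piecewise-linear interpolation of ℬ in log M between 10^12 and 10^14, with log M_12 ≡ log M - 12. A minimal sketch (the function names are ours, chosen for illustration):

```python
import numpy as np

def B_of_M(logM, B12, B14):
    """Piecewise-linear interpolation of the anisotropy exponent between
    log M = 12 and log M = 14 (with log M_12 = log M - 12)."""
    logM = np.asarray(logM, dtype=float)
    t = np.clip(0.5 * (logM - 12.0), 0.0, 1.0)
    return B12 + t * (B14 - B12)

def beta_of_M(logM, B12, B14):
    """Mass-dependent orbital anisotropy, beta(M) = 1 - 10**(-B(M))."""
    return 1.0 - 10.0 ** (-B_of_M(logM, B12, B14))

# a choice with B12 > B14 gives orbits that are more radially anisotropic
# in lower-mass haloes, as hinted at by the posterior constraints
print(beta_of_M([11.5, 13.0, 14.5], B12=0.2, B14=0.0))
```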
Remarkably, we find that this extra degree of freedom has no discernible impact on the constraints on any of the other parameters. This is evident from Fig. <ref>, where the green contours show the posterior constraints from the mass-dependent β model, which are indistinguishable from those of our fiducial model (brown contours). The green shaded contours in Fig. <ref> show the 68 and 95 percent confidence intervals on β(M) of the corresponding model. They reveal a weak hint that the orbital anisotropy becomes more radially anisotropic in lower-mass haloes, in agreement with the weak trend for the SMDPL subhaloes with r < 0.4. However, the uncertainties at the low mass end are rather large, and 's inference is consistent with a constant, mass-independent β at the 2σ level.
§ APPLICATION TO SDSS
§.§ The Data
Having demonstrated that can accurately infer the galaxy-halo connection from kinematic data of primaries and secondaries that can be extracted from a galaxy redshift survey, we now apply to data from the SDSS. In particular, we use the New York University Value-Added Galaxy Catalog <cit.>, which derives from the Seventh Data Release of the SDSS <cit.>. More specifically, we use the VAGC sample[<http://sdss.physics.nyu.edu/lss/dr72/bright/0/>], which includes ∼ 570,000 galaxies with a limiting Petrosian magnitude of m_r < 17.6. We use this data to identify primaries and secondaries using the selection criteria outlined in <ref>. As already mentioned there, we limit our analysis to galaxies in the redshift range 0.02≤ z ≤ 0.20. In order to assure that the secondary selection cones fit entirely within this redshift range, the redshifts of primaries are restricted to 0.034 ≤ z ≤ 0.184 (see Fig. <ref>). Primaries are also limited to have luminosities in the range 9.504 ≤log[/(h^-2)] ≤ 11.104. We end up with a total of ∼ 165,000 primaries[For the computation of _ NS, the number of primaries is downsampled by an order of magnitude to 16,491, as discussed in <ref>.], of which N_+=18,373 primaries have at least one secondary. The total number of secondaries, and thus the total number of primary-secondary pairs for which kinematic data is available, is 30,431.
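In code, the survey and primary cuts quoted above amount to simple boolean masks on the catalogue; the argument names below are placeholders rather than the actual VAGC column names.

```python
import numpy as np

def select_parent_sample(z, logL, m_r):
    """Apply the cuts quoted in the text: apparent-magnitude and survey
    redshift limits, plus the primary redshift and luminosity windows."""
    z, logL, m_r = map(np.asarray, (z, logL, m_r))
    in_survey = (m_r < 17.6) & (z >= 0.02) & (z <= 0.20)
    primary_window = (z >= 0.034) & (z <= 0.184) & \
                     (logL >= 9.504) & (logL <= 11.104)
    return in_survey, in_survey & primary_window
```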
The upper left-hand panel of Fig. <ref> shows the luminosity distributions of all primaries (blue), primaries with at least one secondary (orange), and secondaries (green). Note that primaries with at least one secondary are brighter, on average, than those with zero secondaries. This simply follows from the fact that, on average, brighter centrals reside in more massive haloes, which host more satellites. Also, given a fixed halo mass, the brighter primary is assigned a larger secondary selection volume, making it more likely to have a secondary. This latter effect, though, is subdominant. Note also that there are no satellites with L < 10^9. As discussed in <ref>, this is a consequence of the apparent magnitude limit of the SDSS survey combined with the fact that we only allow for primaries brighter than 10^9.504. The upper right-hand panel shows the probability, P_0, that a primary of luminosity contains zero secondaries. It is simply defined as the fraction of primaries, in a given luminosity and redshift bin, with zero secondaries, i.e., P_0 = N_0 / (N_0 + N_+), where N_0 and N_+ are the number of primaries in our sample with zero and at least one detected secondary, respectively. These probabilities have been computed using an 8× 6 uniformly-spaced grid in (log, ) covering the ranges [9.504,11.104] and [0.034,0.184], respectively. Different colours correspond to different redshift bins, as indicated. We emphasize that these binned data are not used in our inference; they are merely used here to illustrate how P_0 scales with luminosity and redshift. Errorbars are computed assuming Poisson statistics, and are smaller than the data points in most cases. Note that P_0 increases with decreasing luminosity and increasing redshift, as expected from the Malmquist bias resulting from the apparent magnitude limit of the spectroscopic SDSS data. The lower left-hand panel shows the multiplicity function, i.e., the number of primaries each with N_ sec secondaries. Note that most primaries have zero secondaries, and that there are very few primaries with more than 20 secondaries. Finally, the lower right-hand panel shows the distribution of the aperture completeness w_ app for all primaries. Note that the vast majority of primaries have w_ app=1, i.e., their entire secondary selection cone completely falls within the SDSS survey footprint. As mentioned in <ref>, primaries with w_ app < 0.8 have been removed from the sample.
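The binned P_0 shown in the upper right-hand panel can be computed with a few lines of code; the sketch below assumes per-primary arrays of luminosity, redshift, and a boolean flag for having at least one secondary, with illustrative variable names.

```python
import numpy as np

def binned_P0(logL, z, has_secondary, logL_edges, z_edges):
    """Fraction of primaries with zero secondaries, P_0 = N_0 / (N_0 + N_+),
    on a grid in (log L, z), with a simple Poisson error estimate."""
    n_tot, _, _ = np.histogram2d(logL, z, bins=[logL_edges, z_edges])
    n_plus, _, _ = np.histogram2d(logL[has_secondary], z[has_secondary],
                                  bins=[logL_edges, z_edges])
    n0 = n_tot - n_plus
    with np.errstate(divide="ignore", invalid="ignore"):
        p0 = n0 / n_tot
        err = np.sqrt(n0) / n_tot          # Poisson error on N_0, propagated
    return p0, err

# the 8 x 6 grid quoted in the text
logL_edges = np.linspace(9.504, 11.104, 9)
z_edges = np.linspace(0.034, 0.184, 7)
```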
Figure <ref> plots the line-of-sight velocity difference as a function of the luminosity of the primary galaxies (left-hand panel) and as a function of the projected distance between primary-secondary pairs (right-hand panel). Data points are colour-coded according to the redshift of the primary, as indicated. This constitutes the satellite kinematics data in SDSS, which we attempt to forward-model with in order to constrain the galaxy-halo connection. The deficit of data points at small R_ p reflects the fact that we have removed secondaries with a projected separation less than θ_ fc = 55” because of fibre collision issues (see <ref>). Similarly, the absence of data points for low and large |Δ V| reflects the luminosity dependence of the secondary selection criteria (see <ref>). Evidently, the velocity dispersion of secondaries is a strong function of primary luminosity, consistent with the expectation that more luminous centrals reside in more massive dark matter haloes. The low-density, high-velocity wings of P() at any given reflect the contribution of foreground and background interlopers, i.e., galaxies selected as secondaries that do not reside in the dark matter halo of the primary.
§.§ Results
We start our analysis of the SDSS data presented above by constraining the satellite radial number density profile (equation [<ref>]). Using a 15× 15 grid in (γ, log)-space, we obtain (γ,) = (0.94, 1.7) (see Appendix <ref> for details), in good agreement with the results of <cit.>
and <cit.>. Note that this central slope, γ, is significantly steeper than what we inferred for the Tier-3 mock data (γ=0.0, see <ref>), in which the satellites were positioned on subhaloes in numerical simulations. This indicates that the radial distribution of real satellite galaxies is more centrally concentrated than that of subhaloes in DM-only numerical simulations. This discrepancy is most likely due to a combination of artificial disruption in simulations <cit.> and failures of the subhalo finders being used <cit.>.
Next, keeping and γ fixed at 1.7 and 0.94, respectively, we run to constrain the posterior distribution of the remaining 14 parameters that characterize the CLFs of central and satellite galaxies, the interlopers, and the average orbital anisotropy of satellite galaxies. Once again, we adopt very broad non-informative priors for all parameters. Table <ref> lists the best-fit parameters plus their 68% confidence intervals thus obtained. We emphasize that, as shown in and Appendix <ref>, all these results are extremely robust to modest changes in and γ.
Before showing the key results on the CLF, we first demonstrate that the best-fit model of is an excellent fit to the data. In order to illustrate this, we bin the data in 2D bins of luminosity and redshift of the primaries. We emphasize that no such binning was used in the analysis; it is merely used here for the purpose of visualization of the data and its corresponding prediction from . The various panels in Fig. <ref> show the LOSVDs of primary-secondary pairs for bins in log (different rows) and (different columns). We only show panels for which the luminosity lower bound of the {log,} bin falls above the flux limit at the redshift upper bound of that bin. Blue dots and shaded histograms show the stacked data, while the red shaded bands show the 95% confidence intervals obtained using the inferred posterior distributions. In order to quantify the level of agreement between the data and the model, we proceed as follows. Let N_(,) be the actual number of secondaries in the SDSS data for each of the various {log,} bins. Using the best-fit model's predicted P(Δ V) for each {log,} bin we draw N_(,) values of Δ V, and compute the likelihood of this fake data given the best-fit model. We repeat this 10^4 times. The red-shaded histogram in the inset-panel in the lower-right corner of Fig. <ref> shows the resulting distribution of likelihoods, which we then compare to the analogous likelihood for the actual SDSS data (blue, vertical dashed line). The fact that the latter is perfectly consistent with the distribution of likelihoods (red histogram) indicates that the LOSVDs obtained from are in excellent agreement with the data, fitting not only the roughly Gaussian LOSVDs centered on Δ V=0, but also the extended wings due to the interlopers.
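The posterior-predictive comparison described above can be sketched as follows for a single {log L, z} bin; here `pdf` stands for the best-fit model P(ΔV) tabulated on a uniform grid `dv_grid`, and all names are illustrative rather than part of the actual pipeline.

```python
import numpy as np

def predictive_likelihood_check(dV_data, pdf, dv_grid, n_real=10_000, seed=1):
    """Compare the log-likelihood of the observed Delta V values with that of
    mock samples of the same size drawn from the best-fit model P(Delta V)."""
    rng = np.random.default_rng(seed)
    dv = dv_grid[1] - dv_grid[0]
    p = pdf / (pdf.sum() * dv)                       # normalise the model PDF
    logp = np.log(np.clip(p, 1e-300, None))
    lnL_data = np.interp(dV_data, dv_grid, logp).sum()

    cdf = np.cumsum(p) * dv                          # for inverse-transform sampling
    lnL_mock = np.empty(n_real)
    for i in range(n_real):
        draws = np.interp(rng.random(dV_data.size), cdf, dv_grid)
        lnL_mock[i] = np.interp(draws, dv_grid, logp).sum()
    # return the data log-likelihood and its rank within the mock distribution
    return lnL_data, (lnL_mock < lnL_data).mean()
```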
Fig. <ref> uses the same (log,) binning and panels[Note that bins with five or fewer primaries in the down-sampled
_ NS data have been omitted.] as Fig. <ref>, but this time we plot the distributions of the numbers of secondaries per primary (red dots with Poisson errorbars). More specifically, the x-axis indicates the number of secondaries, N_ sec, and the y-axis indicates log(1+N_ pri), where N_ pri is the number of primaries that each have N_ sec secondaries. In most cases the distributions clearly peak at N_ sec=0, as most primaries in SDSS DR7 do not have a spectroscopically detected secondary. The only exceptions are a few high-log L_ pri bins. Once again, in each panel the red shaded bands indicate 95% confidence intervals obtained using the posterior distributions inferred with .
The inset-panel in the lower-right compares the likelihood of the SDSS data given the best-fit model, to the distribution of expected likelihoods computed by drawing 10^4 random realisations of the best-fit multiplicity function. This time the likelihood of the SDSS data falls at the edge of the expected range, indicating that the fit to the data is not optimal. Indeed, upon closer inspection one can notice that the best-fit model overpredicts the multiplicity of primaries with N_ sec∼ 3-6, especially for some intermediate and bins. As we demonstrate in a forthcoming paper, this small discrepancy arises from certain limitations in the satellite CLF model, and can be resolved by adopting a slightly more flexible halo-occupation modelling without significantly affecting any of the main relations presented here.
Fig. <ref> shows several key halo mass dependencies that characterize the galaxy-halo connection inferred by from the SDSS data. In each panel, the brown shaded bands show the inferred 95 % confidence intervals, while the coloured symbols show best-fit constraints from previous SDSS-based studies[Where needed we have converted these results to our definition of halo mass.]. In particular, we compare our inference to the results from an analysis of galaxy group catalogues by <cit.>, to results based on a simultaneous analysis of galaxy clustering and galaxy-galaxy lensing by <cit.>, and to results of the most recent analysis of satellite kinematics by <cit.>. Note that the latter did not use the Bayesian hierarchical methodology of , but rather was based on the standard summary statistics of host-weighted and satellite-weighted velocity dispersions as a function of binned primary luminosity.
Panel (a) plots the median central luminosity, L_, as a function of halo mass. As is evident, the constraints obtained with are in excellent agreement with previous results, though we emphasize that our constraints are significantly tighter.
Panel (b) plots the posterior constraints on σ_(M), characterizing the scatter in log L_ at fixed halo mass. Most studies in the past have assumed σ_ to be independent of halo mass, and inferred values that lie roughly in the range 0.15-0.2 dex <cit.>. , on the other hand, allows for a mass dependence as characterized by equation (<ref>). Yet, despite this extra degree of freedom, our inference is statistically consistent with a constant σ_ = 0.17 dex over the entire range of halo masses probed. Note that this is different from <cit.> who, using a similar mass-dependent characterization of the scatter in the log L_ - log M relation, inferred that σ_ increases with decreasing halo mass, as depicted by the blue circles in Panel (b). As discussed in more detail in <ref>, the reason for this discrepancy can be attributed to the fact that all previous analyses, including <cit.>, invariably assumed the brightest halo galaxy to be the central.
Panel (c) shows the posterior constraints on the faint-end slope, α_, of the satellite CLF. Throughout we have assumed a global, mass-independent α_, similar to <cit.>, <cit.> and most other previous work. Our inference that α_ = -0.87 ± 0.06 is in excellent agreement with <cit.>, and is largely consistent with the constraints obtained by <cit.> and <cit.> given their uncertainties. Note, though, that <cit.> inferred that α_ becomes significantly steeper at the massive end, reaching values as low as -1.5 for groups with an inferred halo mass M 3 × 10^14. In future work we plan to allow for a mass-dependent α_, to see if satellite kinematics reveal a similar mass dependence to that inferred from the galaxy group catalogue of <cit.>, and to study how this extra degree of freedom impacts the other parameters that characterize the galaxy-halo connection.
Finally, panel (d) of Fig. <ref> shows the constraints on the normalization, ϕ_^*, of the CLF of satellite galaxies, as a function of host halo mass. The constraints obtained by , as depicted by the brown bands, are in fair agreement with previous constraints, especially if the uncertainties on the latter are taken into account (note that the coloured symbols only indicate the best-fit values). At the low mass end, the results of <cit.>, which are also based on satellite kinematics, seem to suggest significantly larger values of ϕ_^* (i.e., more satellites per halo). This is likely due to the fact that <cit.> have assumed that interlopers have a uniform distribution of line-of-sight velocities, <cit.>. As discussed in <ref>, this oversimplified assumption implies that some of the splash-back galaxies and infalling interlopers, which have a LOSVD that resembles that of true satellites, will be incorrectly `counted' as satellite galaxies. Because of this, and since we have demonstrated that can accurately recover the interloper fraction as well as the CLF normalization, ϕ_^*(M), we reckon that our results for the abundance of satellite galaxies are likely to be more accurate. In all fairness, we emphasize that most previous studies of satellite kinematics were not aiming to accurately recover satellite abundances; rather they mainly focused on constraining halo masses as a function of primary galaxy luminosity. As discussed in detail in <cit.>, this aspect of the analysis is not significantly impaired by the oversimplified assumption that the LOSVD of interlopers is uniform.
§.§ Scatter in central galaxy luminosity
Different empirical estimates of the galaxy-halo connection, at present, broadly agree on the relation between central galaxy luminosity (or stellar mass) and halo mass. However, the constraints on the scatter in this relation, especially as a function of halo mass, have yet to attain similar convergence; the scatter is therefore considered a key parameter at the forefront of empirical modelling <cit.> and is highly informative for testing physical models <cit.>.
's inference of the logarithmic scatter in central luminosity, σ_(M_ vir), shows no significant halo mass dependence despite having the freedom in the model. Fig. <ref> compares our constraints on σ_ (grey band) with estimates from previous studies (all error-bars and uncertainties in this plot are 68% confidence intervals). Crucially, our constraints disagree with <cit.>, who also used satellite kinematics extracted from the SDSS DR-7 data. They split their sample into red and blue centrals, and the scatter in the two sub-populations are shown by the shaded regions of corresponding colour. At the high halo mass end, their red-fraction of centrals approaches unity, and thus the red-shaded region should be a good approximation of the overall scatter. As is evident, it reveals weak but significant mass-dependence with dσ_/ dlog M ≈ -0.04, which, at first sight appears inconsistent with our results. However, <cit.>, as all other previous studies, have simply assumed that their primary, defined as the brightest galaxy in the selection cone, is always the central galaxy. Hence, their σ_ has to be interpreted as the scatter in the brightest halo galaxy, log L_ BHG, rather than that in log L_.
, on the other hand, accounts for type-I impurities, which are BHG-satellites that are misclassified as primaries. In particular, we have demonstrated that by directly forward modelling these BHG-satellites, 's inferred σ_ is an unbiased estimate of the intrinsic luminosity scatter of true centrals. The probability that the BHG is a satellite, rather than a central, increases strongly with halo mass <cit.>. Therefore, the inferred scatter at the high mass end, from studies that did not account for type-I impurities, may have been biased. We can directly test this with , which makes it straightforward to compute the expected scatter in log L_ BHG and compare it to that in log L_. The black dashed curve in Fig. <ref> shows the predicted scatter in BHG luminosity, σ_ BHG, as a function of host halo mass for our best-fit CLF model. Note that σ_ BHG drops significantly below σ_ at the high-mass end. This is because it is mostly the fainter centrals that are `replaced' by a brighter satellite, which causes the distribution of BHG luminosities to be narrower than that of the true centrals. For M 10^13.5, the mass-dependence of the inferred σ_ BHG is in excellent agreement with <cit.> (recall that the vast majority of all centrals in massive haloes are red, and the comparison should thus be with the red-shaded region). Above 10^13, the black dashed line also shows improved agreement with previous results from an analysis of galaxy clustering and galaxy-galaxy lensing by <cit.>, who assumed that σ_ is mass-independent, and that from an analysis of a SDSS galaxy group catalogue by <cit.>.
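The narrowing of the BHG luminosity distribution relative to that of true centrals can be illustrated with a deliberately simplified toy model: draw a lognormal central luminosity, add a Poisson number of satellites, and take the brightest galaxy. The numbers below are arbitrary stand-ins, not the best-fit CLF values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_halo = 50_000
logLc_med, sigma_c = 10.8, 0.17        # toy values for one massive-halo bin
lam_sat = 10.0                         # toy mean number of bright satellites

logLc = rng.normal(logLc_med, sigma_c, n_halo)
logL_bhg = logLc.copy()
n_sat = rng.poisson(lam_sat, n_halo)
for i in np.nonzero(n_sat)[0]:
    # toy satellite luminosities: typically ~0.6 dex fainter than the median central
    logLs = rng.normal(logLc_med - 0.6, 0.3, n_sat[i])
    logL_bhg[i] = max(logLc[i], logLs.max())

# the faint tail of centrals is 'replaced' by brighter satellites,
# so the BHG scatter comes out smaller than the intrinsic central scatter
print(np.std(logLc), np.std(logL_bhg))
```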
As is evident from Fig. <ref>, the various results disagree strongly at the low-mass end (M 10^13). This, however, has to be interpreted with caution, as none of the constraints are particularly reliable there. For example, the results of <cit.> cannot be trusted at the low-mass end, since their halo masses are estimated from the total group luminosity. Since this is dominated by the central luminosity in low mass haloes, their inferred scatter at the low mass end is guaranteed to be an underestimate <cit.>. In the case of <cit.>, most of the constraining power comes from haloes with M ≥ 10^13. As they assume a simple linear dependence of σ_ on log M, the constraints at the low-mass end mainly reflect extrapolation of the assumed linear relation. Our results are also affected this way, but less so, since we have used a flux-limited sample, rather than a more restricted volume-limited sample as in <cit.>. This allows for a better sampling of fainter centrals, which reside in lower-mass haloes.
§.§ Orbital Anisotropy of Satellite Galaxies in SDSS
The brown contours in Fig. <ref> show the 68 and 95 percentile constraints on the orbital anisotropy parameter, β, for satellite galaxies in the SDSS data as inferred from our fiducial model. We infer a significant radial anisotropy with β = 0.29^+0.05_-0.04. These global constraints on the average orbital anisotropy of satellite galaxies, across a large range of halo masses, are perfectly consistent with, but significantly tighter than, the results of <cit.> who also analysed the kinematics of satellite galaxies in SDSS data to infer β=0.26 ± 0.09. We emphasize that, unlike , the analysis of Wojtak & Mamon did not account for mass mixing, and was based on a much smaller sample of primary-secondary pairs than used here.
Interestingly, our constraints on the orbital anisotropy are also consistent with the typical orbital anisotropy of subhaloes in numerical simulations of structure formation in a ΛCDM cosmology. In fact, the green contours show the constraints we obtain using a model in which the orbital anisotropy is allowed to depend on halo mass as given by equation [<ref>]. We find a weak indication that the orbits of satellite galaxies become more radially anisotropic towards lower halo mass. Most importantly, these results for the SDSS data (Fig. <ref>) are consistent with those for the Tier-3 mock data (Fig. <ref>), in which the satellite orbits reflect those of subhaloes in the ΛCDM-based SMDPL. The fact that the orbital anisotropy of satellite galaxies in the SDSS appears to be consistent with that of subhaloes in N-body simulations can be heralded as yet another success for the ΛCDM model.
Although the weak mass-dependence of the orbital anisotropy inferred here is intriguing, especially in light of the agreement with the Tier-3 results, we emphasize that these results have to be interpreted with caution. The reason is that we have excluded data at projected separations < 55” because of fibre collision issues. As a consequence, the range in radii probed, in terms of the halo virial radius, in low mass haloes is different from that probed in more massive haloes. Hence, any potential radial dependence of the orbital anisotropy of satellite galaxies can, in principle, masquerade as a mass dependence in our analysis. We intend to address this `degeneracy' in a forthcoming paper (Mitra et al., in prep) in which we consider models in which the orbital anisotropy is allowed to depend on halo-centric radius, as well as halo mass. In particular, we will consider the Osipkov-Merritt model <cit.>, as well as more realistic simulation-inspired models such as those used by <cit.>.
§ SUMMARY AND CONCLUSION
In we presented , a novel Bayesian hierarchical method for analysing the kinematics of satellite galaxies. Based on the spherically symmetric Jeans equations, it models the kinematics of large ensembles of satellite galaxies associated with central galaxies that span a wide range in halo mass and luminosity. The halo masses of the individual centrals act as latent variables in a hierarchical Bayesian framework that uses the data to constrain the detailed galaxy–halo connection as characterized by the CLF. Unlike traditional methods for analysing satellite kinematics, does not make use of any summary statistic, such as velocity dispersions of satellite galaxies in central galaxy luminosity bins. Rather, it leaves the data in its raw form, which has the advantage that all data are used optimally while avoiding systematics that arise from binning. In addition, whereas traditional methods typically require volume-limited samples, can be applied to flux-limited samples, thereby greatly enhancing the quantity and dynamic range of the data. And finally, is the only available method that simultaneously solves for halo mass and orbital anisotropy of the satellite galaxies, while properly accounting for `mass-mixing'.
In this paper we have presented a number of important improvements to , required for an unbiased recovery of all parameters when using large samples of data comparable to what can be achieved with existing SDSS catalogues. In particular,
* We introduced an improved selection of primaries and secondaries that assures that the secondaries associated with each individual primary are volume-limited, even-though the overall sample is still flux-limited. This facilitates a more accurate modelling of the abundance and velocity distribution of the secondaries.
* We forward model the contribution of impurities among the primaries, where impurities are predominantly those satellites that are brighter than their corresponding centrals.
* We slightly modified the selection criteria of primaries to minimize the effect of other kinds of impurities that are extremely difficult to forward-model.
* We extended the satellite kinematics model to higher-order, by using the fourth-order Jeans equation to compute the kurtosis of the LOSVD. Incorporating this, in the modelling of the full 2D phase-space distribution of satellites, allows to break the mass-anisotropy degeneracy, and to put tight constraints on the global, average velocity anisotropy of satellite galaxies.
* We drastically improved the modelling of interlopers by (i) accounting for the fact that the selection volume of secondaries is conical rather than cylindrical, (ii) accounting for splash-back galaxies, and (iii) using a data-driven method to model the line-of-sight velocity distribution of the interlopers.
* Instead of discarding primaries with zero secondaries, utilizes their information to further constrain the galaxy-halo connection. Congruous with the satellite kinematics methodology, we introduced a similar Bayesian hierarchical framework to model the abundance of secondary galaxies around each primary, which improves significantly on the stacking-based approach used in . This allows to put unprecedented constraints on the satellite CLF.
Using realistic mock data of similar quality and volume as the SDSS DR7, we have demonstrated that, with this improved methodology, can break the mass-anisotropy degeneracy, and simultaneously constrain the host masses and average orbital velocity anisotropy of satellite galaxies. In particular, achieves an unbiased recovery of all 10 CLF parameters that characterize the galaxy-halo connection covering almost four orders of magnitude in halo mass (from ∼ 10^11 to 10^15), and with unprecedented accuracy. In addition, it simultaneously recovers the orbital anisotropy parameter, β, the luminosity and redshift dependence of the interloper fraction, and the radial number density profile of satellite galaxies. It is worth emphasizing that the recovery is unbiased despite the fact that the selection of primaries and secondaries is (unavoidably) plagued by biases, incompleteness, and impurities.
We applied to a sample of 18,373 primaries and 30,431 secondaries extracted from the SDSS DR-7 data, yielding some of the tightest constraints on the galaxy-halo connection to date (Table <ref>). The model accurately reproduces both the abundance and line-of-sight velocity distributions of secondaries (Figs. <ref> and Fig. <ref>), and is in good agreement with previous constraints on the galaxy-halo connection derived from galaxy group catalogues, galaxy clustering, galaxy-galaxy lensing and previous analyses of satellite kinematics.
Assuming that the orbital anisotropy of satellite galaxies is independent of halo mass and halo-centric radius, our analysis of SDSS data reveals a significant radial anisotropy of β = 0.29^+0.05_-0.04, in excellent agreement with, but significantly tighter than, previous results <cit.>. We also find a weak indication that β is slightly larger in lower mass haloes, in good agreement with the orbital anisotropy of subhaloes in dark-matter-only simulations of structure formation in a ΛCDM cosmology <cit.>. Since satellites are believed to reside in subhaloes, this may be considered another success of the standard model for structure formation.
We find that the radial number density profile of satellite galaxies, n_ sat(r|M), is tightly constrained and well characterized by a generalized-NFW profile (equation [<ref>]) with a central cusp-slope γ=0.94 (compared to γ=1 for a pure NFW profile), and a characteristic scale radius that is roughly two times larger than what is expected for the dark matter. Within the uncertainties, this is consistent with several previous studies <cit.>. Consistent with paper I, we find our results to be extremely robust to modest changes in n_ sat(r|M); the only parameter that displays some dependence is the anisotropy parameter, β (see Appendix <ref>). This is to be expected given that both β and n_ sat(r|M) appear in the Jeans equation used to model the kinematics of the satellite galaxies.
Interestingly, we find no evidence for a significant halo mass dependence of the scatter in central luminosity, and at any given halo mass we find the luminosity scatter to be around σ_ = 0.17 dex. This is inconsistent with the latest analysis of satellite kinematics by <cit.>, also based on SDSS DR-7 data, who inferred that the scatter decreases with increasing halo mass, albeit only weakly, with dσ_/dlog M ∼ -0.04. As discussed in <ref>, this discrepancy arises primarily from the fact that <cit.> and all previous studies simply assumed the brightest galaxy in the halo to be the central. We, however, take into account the existence of brightest halo galaxy satellites, and forward-model the probability of misidentifying them as primaries. By doing so, we demonstrate that 's inference of σ_ is an unbiased recovery of the intrinsic luminosity scatter of true centrals. From our best-fit model we can predict what the scatter in brightest halo galaxy luminosity should be, as a function of halo mass, and that is in good agreement with the scatter inferred by <cit.> and other previous analyses.
For completeness, we point out that several studies that used stellar mass, rather than r-band luminosity, to characterize the galaxy-halo connection also inferred that the scatter in stellar mass of central galaxies decreases with increasing halo mass <cit.>. However, most of these inferences were only significant below the ∼ 2 σ level. Hence, observationally it remains unclear whether or not the scatter in the galaxy-halo connection has a significant mass dependence. Taking our results at face value, it seems that scatter is fairly mass-independent, at least for log M 10^12, and that previous indications for a significant mass dependence at the massive end are likely a result of confounding true centrals with brightest halo galaxies.
On the theory side, the situation is even more higgledy-piggledy, with a clear lack of consensus <cit.>. In general, semi-analytical models <cit.> predict a weak mass dependence with a small, negative value for σ_/log M, but the magnitude of the overall scatter is typically much larger than what is inferred observationally <cit.>. Hydrodynamical simulations of galaxy formation, typically predict a significantly lower scatter, at least for haloes with M 10^12, in much better agreement with observations. Typically, though, they predict that the scatter rapidly increases for M 10^12 <cit.>. Finally, empirical models such as the <cit.> and <cit.> seem to predict σ_(M) relations that fall roughly in between the predictions from semi-analytical models and hydrodynamical simulations.
To conclude, we have demonstrated that satellite kinematics extracted from galaxy redshift surveys contain a wealth of information regarding the statistical relation between galaxies and their associated dark matter haloes. The Bayesian hierarchical framework , developed here and in , is able to analyse such data in an unbiased way, yielding accurate constraints on the galaxy-halo connection over a wide range of halo mass, and with unprecedented precision. Hence, satellite kinematics is complementary to other techniques that are used to constrain the galaxy-halo connection, in particular galaxy clustering and galaxy-galaxy lensing. Importantly, by only probing the smallest, most non-linear scales (i.e., the 1-halo term) it is insensitive to halo assembly bias, which hampers an unambiguous interpretation of the 2-halo term in clustering and galaxy-galaxy lensing. Hence, it is to be expected that by combining all these methods, degeneracies can be broken which opens up new avenues to test our cosmological paradigm and our models for galaxy formation. To this end, we plan to use, and where necessary further develop, in future work. In particular, among others, we intend to explore additional degrees of freedom in the characterization of the galaxy-halo connection (for example, mass dependence in the faint-end slope of the satellite CLF and in the ratio of the characteristic luminosities of the centrals and satellites in haloes of any given mass), the impact of baryonic effects on the halo potential (which may introduce systematic errors in the inference from satellite kinematics), and the impact of scatter in the halo concentration-mass relation (and the expected correlation with the abundance of subhaloes/satellite galaxies). In addition, we are excited about the prospects of using to probe the galaxy-halo connection as a function of secondary galaxy properties, such as galaxy colour and/or size, and to put constraints on cosmological parameters by combining satellite kinematics with other observables.
§ ACKNOWLEDGMENTS
We are grateful to the anonymous referee for an insightful referee report that has resulted in significant improvements of the manuscript. FvdB has been supported by the National Aeronautics and Space Administration through Grant No. 19-ATP19-0059 issued as part of the Astrophysics Theory Program and by the National Science Foundation (NSF) through grant AST-2307280, and received additional support from the Klaus Tschira foundation. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, and at the KITP Santa Barbara, which is supported in part by the National Science Foundation under Grant No. NSF PHY-174895.
This work utilized, primarily for plotting purposes, the following python packages: <cit.>, <cit.>, <cit.>, and <cit.>.
§ DATA AVAILABILITY
The SDSS DR7 New York University Value Added Galaxy Catalog sample, used in this work, is publicly available at <http://sdss.physics.nyu.edu/lss/dr72/bright/0/>. All derived data, such as kinematic data of primaries and secondaries, and the data corresponding to each plot, will be made available on reasonable request to the corresponding author.
§ OPTIMIZING SELECTION CRITERIA
As described in <ref>, the selection of primaries and secondaries makes use of conical selection volumes. In this appendix we describe modifications to the parameters characterizing these selection volumes that result in an improved purity of the sample. Following <cit.>, <cit.> and <cit.>, the selection cones are characterized by = a_ σ_200, = a_ σ_200, = b_ σ_200, and = b_ σ_200 (see Fig. <ref>). Here σ_200 is an estimate for the satellite velocity dispersion in units of 200, which scales with the luminosity of the primary as logσ_200 = c_0 + c_1 log L_10 + c_2 (log L_10)^2, where L_10 = / (10^10). In we adopted exactly the same parameters as <cit.>: a_=0.5, a_ = 0.15, b_ = 1000, b_ = 4000/σ_200 and (c_0, c_1, c_2) = (-0.04, 0.38, 0.29). As discussed in , this results in impurity fractions of ∼ 5 percent. These impurities cause a slightly biased estimate of the scatter in the galaxy-halo connection. In the case of the small mock data samples used in , the effect was not significant. However, when using an 8-times larger, full-size SDSS sample, the systematic bias in scatter becomes >3 σ significant.
In <ref> we classified impurities as either Type-I (BHG satellites) or Type-II (neither a central nor a BHG satellite). The green dots in the left-hand panel of Fig. <ref> show the luminosities, as a function of host halo mass, of the primaries selected from the Tier-3 mock data using the selection criteria used in . Blue and orange contours mark the 4 and 7σ_ ranges around the median relation between halo mass and central luminosity used to construct the mock data. Red and black circled dots mark impurities of Type-I and Type-II, respectively. As is evident, Type-I impurities have luminosities that are comparable to those of true centrals at the same host halo mass. That is because Type-I impurities are BHG satellites which are brighter than their corresponding central galaxy (hence they must have a luminosity in the typical range of Φ_(L|M) for a true central to be fainter). Being the brightest galaxy in the corresponding halo, a Type-I impurity is impossible to avoid in the selection procedure. However, as we have demonstrated in <ref>, we actually forward model the contribution of Type-I impurities.
Type-II impurities, though, are a much bigger concern. As is evident from Fig. <ref>, these can have luminosities that are much lower than that of a typical central at the corresponding halo mass (by as much as 7σ). Since lower luminosities are indicative of a lower halo mass, a too-large contribution of Type-II impurities can give rise to significant, systematic errors in the inference. Since the kinematic information from secondaries associated with Type-II impurities still reflects a high velocity dispersion consistent with the actual halo mass, the main effect of Type-II impurities is to cause a systematic overestimate in the scatter of central luminosities at a fixed halo mass (i.e., an overestimate of σ_). Since we are not aware of a reliable method to forward model the impact of Type-II impurities, it is prudent that we minimize their incidence by tuning our selection criteria accordingly.
After extensive testing with different mock data sets similar to the Tier-3 mock discussed in the main text, we finally settled on the following set of parameters: a_ = 0.6, a_ =0.15, b_ = b_ = 1000 and {c_0, c_1, c_2} = {0.04, 0.48, 0.05}. With these new selection criteria we are able to reduce the fraction of Type-II impurities from ∼ 1.1 % to ∼ 0.5 %. The impact of this reduction can be seen by comparing the two panels of Fig. <ref>. Note that in addition to dropping the fraction of Type-II impurities to sub-percent levels, the new selection criteria preferentially remove the most dramatic outliers and also drastically reduce the contribution of Type-II impurities at the high mass end, where the old selection criteria caused the fraction of Type-II impurities to be very high.
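For reference, the luminosity scaling entering these selection cones can be sketched as follows, using the updated parameter values quoted above. The dictionary keys are illustrative labels for the (stripped) aperture and velocity-window symbols, and the units follow the definitions given earlier in this appendix.

```python
import numpy as np

def sigma200(logL10, c0=0.04, c1=0.48, c2=0.05):
    """Velocity-dispersion proxy (in units of 200 km/s) as a function of
    primary luminosity: log sigma_200 = c0 + c1 log L_10 + c2 (log L_10)^2."""
    return 10.0 ** (c0 + c1 * logL10 + c2 * logL10 ** 2)

def selection_cone(logL10, a_pri=0.6, a_sec=0.15, b_pri=1000.0, b_sec=1000.0):
    """Aperture and velocity-window sizes, all scaling linearly with sigma_200."""
    s = sigma200(logL10)
    return {"R_pri": a_pri * s, "R_sec": a_sec * s,
            "dV_pri": b_pri * s, "dV_sec": b_sec * s}

print(selection_cone(0.5))   # a primary with L = 10^10.5, i.e. log L_10 = 0.5
```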
With these new and improved selection criteria we find that Type-II impurities no longer cause a significant overestimate of σ_. Although the new selection criteria reduce the number of primaries in the satellite kinematics sample by almost 40 percent, we find that this does not significantly compromise the precision with which can infer the galaxy-halo connection. The reason is that the main reduction of primaries occurs at the low-luminosity end, where most of the secondaries are interlopers that do little to constrain the halo occupation model.
§ THE COMPLETENESS OF CENTRALS
The selection of centrals as primaries (<ref>) is not complete; i.e., not every central is selected as a primary. This incompleteness arises for two different reasons: (i) incompleteness in the SDSS redshift survey, for example due to fibre collisions, or (ii) the central being located inside the selection cone of a brighter galaxy. Let the completeness (M,L,z) be the fraction of centrals — of luminosity L, at redshift z, residing in haloes of mass M, in the survey volume of the SDSS — that are selected as primaries. We can write that (M,L,z) = (M|L,z) _0(L,z). As discussed in the main text (see <ref>), the modelling in is independent of _0, which drops out. In other words, we only need to account for any potential halo mass dependence of the completeness given by (M|L,z).
In order to gauge this mass dependence, we construct 100 mock SDSS redshift surveys similar to the Tier-3 mock survey discussed in the main text to which we apply our primary selection criteria. For each central galaxy in the mock SDSS volumes we assess whether it is selected as a primary. The combined results from all 100 mocks are shown as symbols with Poisson errorbars in Fig. <ref>. Different colours correspond to different luminosities of the centrals, as indicated. Here we have combined data on all centrals over the entire redshift range, but we emphasize that the redshift dependence is weak. A few trends are evident. First of all, the completeness is lower for fainter centrals. This simply reflects that fainter centrals are more likely to have a brighter galaxy in a neighbouring halo that happens to fall within the primary selection criterion. Modelling this would require accurate knowledge of the clustering of haloes (2-halo term) and is sensitive to assembly bias issues. Fortunately, we do not need to model this luminosity dependence. All we care about is the halo mass dependence as characterized by (M|L,z).
As is evident from the data, the completeness for centrals of a given luminosity is roughly independent of halo mass at the low-mass end, but then drops drastically at the high-mass end. This transition occurs at higher mass for brighter centrals. The yellow-shaded region indicates, for each luminosity bin, the 5 to 95 percentile range of halo masses. Note that the exponential decline of (M|L,z) at large masses only affects the top ∼ 5 percent of centrals of a given luminosity. This is why, in , we decided to ignore this mass dependence altogether. However, as we demonstrate below, it is actually fairly straightforward to model (M|L,z). As it turns out, this mass dependence is due almost entirely to the fact that a central of a given luminosity in a more massive halo is more likely to have a satellite that is brighter than itself. Recall that Φ_(L|M) is modelled as a log-normal distribution. Hence, the central galaxies in haloes of a given mass have a tail of excessively faint centrals. And since we assume that the luminosities of satellite galaxies are independent of that of their central, those faint centrals are more likely to have a brighter satellite, and thus to fail selection as a primary. As shown in Appendix A of <cit.>, the probability that the central galaxy is the brightest galaxy in its halo is given by
P_ BC(M|L,z) = exp[-Λ_ BTC] .
Here Λ_ BTC is the expectation value for the number of satellites that are brighter than the central, which is given by equation (<ref>).
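A hedged numerical sketch of this relation: with a toy modified-Schechter satellite CLF standing in for the actual satellite occupation (the parameter values below are illustrative only, not the best-fit ones), Λ_BTC is the integral of the satellite CLF above the central luminosity and P_BC follows directly.

```python
import numpy as np

def Lambda_BTC(logL_c, phi_s_star=30.0, alpha_s=-0.87, logL_s_star=10.6):
    """Expected number of satellites brighter than a central of luminosity L_c,
    for a toy satellite CLF per dex:
    dN/dlogL = ln(10) * phi_s* * (L/L_s*)**(alpha_s+1) * exp[-(L/L_s*)**2]."""
    logL = np.linspace(logL_c, logL_s_star + 2.0, 2000)
    x = 10.0 ** (logL - logL_s_star)
    dNdlogL = np.log(10.0) * phi_s_star * x ** (alpha_s + 1) * np.exp(-x ** 2)
    return np.sum(dNdlogL) * (logL[1] - logL[0])

def P_BC(logL_c, **kwargs):
    """Probability that the central is the brightest galaxy in its halo."""
    return np.exp(-Lambda_BTC(logL_c, **kwargs))

for logLc in (10.2, 10.6, 11.0):
    print(logLc, P_BC(logLc))   # completeness rises steeply with central luminosity
```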
The solid lines in Fig. <ref> show the predictions for P_ BC(M|L,z), computed using equation (<ref>) for the same CLF model as used to construct the mocks. The absolute normalization for each luminosity bin, which represents _0(L,z), is tuned to match the mock data at the low-log M end. As is evident, equation (<ref>) accurately describes the halo-mass dependence of the completeness of primaries. Hence, the mass-dependence of the completeness of centrals can be modelled as (M|L,z) = P_ BC(M|L,z).
Note that 1-(M|L,z) is the probability that a central is not the brightest galaxy in its halo, and thus the probability that the halo gives rise to a Type-I impurity. As discussed in the main text, we forward model the contribution of these Type-I impurities, which effectively means that we already account for the mass-dependence of the completeness of centrals depicted in Fig. <ref>. Indeed, P_ BC given by equation (<ref>) is identical to P(L_ bs<L|M,z) (equation [<ref>]) used in <ref> to forward model the Type-I impurities.
§ THE RADIAL PROFILE OF SATELLITES
Throughout we assume that n_ sat(r|M) is characterized by a generalized-NFW form (equation <ref>) which has two free parameters: the inner logarithmic density slope, γ, and the concentration ratio ℛ = c_ vir / c_ sat which characterizes the scale radius of the number density profile. As discussed in <ref>, pre-computes and stores essential arrays which are then used in each step of the MCMC chain varying the CLF, anisotropy, and nuisance parameters. This pre-computation drastically speeds up , but requires that n_ sat(r|M), and thus {γ,}, are held fixed. Therefore, instead of keeping the radial profile free, we run separate MCMC chains for each assumed radial profile on a 15 × 15 grid of {γ, log}. We then combine the posteriors and likelihoods from each of these runs to compute the marginalized likelihood (γ, | ).
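The two ingredients described here — the generalized-NFW profile and the grid-based marginalisation over {γ, log ℛ} — can be sketched as follows. The profile below assumes the common gNFW form with outer slope -3 and scale radius r_s = ℛ r_vir / c_vir, which may differ in detail from equation (<ref>); the per-grid-point log-likelihood values are placeholders for the outputs of the individual MCMC runs.

```python
import numpy as np

def n_sat_gnfw(r, r_vir, c_vir, gamma=0.94, R_ratio=1.7):
    """Unnormalised generalized-NFW satellite profile,
    n(r) ~ (r/r_s)**(-gamma) * (1 + r/r_s)**(gamma - 3),
    with r_s = R_ratio * r_vir / c_vir (i.e. c_sat = c_vir / R_ratio)."""
    x = r * c_vir / (R_ratio * r_vir)
    return x ** (-gamma) * (1.0 + x) ** (gamma - 3.0)

# combine per-grid-point (log-)likelihoods into a 2D posterior over (gamma, log R)
gamma_grid = np.linspace(0.0, 1.5, 15)
logR_grid = np.linspace(-0.5, 1.0, 15)
lnL = np.random.default_rng(3).normal(size=(15, 15))   # placeholder values
post = np.exp(lnL - lnL.max())
post /= post.sum()                                     # flat prior on the grid
i, j = np.unravel_index(np.argmax(post), post.shape)
print("best fit:", gamma_grid[i], logR_grid[j])
```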
The grey contours in the top-left panel of Fig. <ref> show the 68, 95 and 99 percent confidence intervals for γ and log thus obtained using the SDSS data described in <ref>. The best-fit values, indicated by the black cross, correspond to {γ, } = {0.94, 1.7}, which are the values we adopt for our detailed SDSS analysis described in <ref>. However, the confidence intervals for γ and log reveal a significant degeneracy along a narrow ridge-line in γ- parameter space (see also ). To demonstrate the impact this degeneracy has on our inference, the coloured histograms in the other panels of Fig. <ref> show the posteriors on our CLF parameters and the anisotropy parameter, β, for 9 different combinations of γ and (indicated by the circles of corresponding colour in the top-left panel) that roughly trace out the boundary of the 3σ confidence interval. The vertical black dashed line in each of these panels shows the best-fit parameters inferred by with the best-fit {γ, } combination, the same as the values quoted in Table <ref>. As is evident, the inferred CLF parameters are extremely robust to changes in γ and along the direction of this degeneracy. The only parameter that shows a weak dependence is the orbital anisotropy parameter β (bottom-right panel), which is to be expected from the fact that both n_ sat(r|M) and β appear in the expression for the line-of-sight velocity dispersion given by equation (<ref>).
Note also that the constraints on γ and are inconsistent with satellite galaxies following the same radial profile as the dark matter (i.e., γ = ℛ = 1, indicated by the red square in the top-left panel) at >5σ significance. If we had assumed that satellite galaxies are an unbiased tracer of their host halo mass distribution, which is not uncommon in the literature when modelling the galaxy-halo connection, we would have obtained the posteriors indicated by the red histograms. Interestingly, most CLF parameters would still be consistent with the values inferred using our fiducial, best-fit model with {γ, } = {0.94, 1.7}. The main exception, though, is the orbital anisotropy parameter, which would be biased high (i.e., we would markedly overestimate the radial velocity anisotropy). A few other parameters, like γ_2, α_, and σ_, are also somewhat biased in the red histograms. Thus, we would incorrectly infer a steeper L̅_ (M) relation, a shallower faint-end slope of the satellite CLF, and a slightly decreasing trend in the central luminosity scatter with host halo mass, if we wrongly assumed the satellites to follow the dark matter radial distribution. In conclusion, yields tight constraints on the radial number density profile of satellite galaxies, and whatever degeneracy remains between the central density slope and concentration has no significant impact on any inferred parameter.
|
http://arxiv.org/abs/2409.03414v1 | 20240905110316 | Accelerating multipartite entanglement generation in non-Hermitian superconducting qubits | [
"Chimdessa Gashu Feyisa",
"J. S. You",
"Huan-Yu Ku",
"H. H. Jen"
] | quant-ph | [
"quant-ph"
] |
Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 10617, Taiwan
Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taiwan
Department of Physics, National Central University, Taoyuan 320317, Taiwan
Department of Physics, National Taiwan Normal University, Taipei 11677, Taiwan
huan.yu@ntnu.edu.tw
Department of Physics, National Taiwan Normal University, Taipei 11677, Taiwan
sappyjen@gmail.com
Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 10617, Taiwan
Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taiwan
Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan
§ ABSTRACT
Open quantum systems are susceptible to losses in information, energy, and particles due to their surrounding environment. One novel strategy to mitigate these losses is to transform them into advantages for quantum technologies through tailored non-Hermitian quantum systems. In this work, we theoretically propose a fast generation of multipartite entanglement in non-Hermitian qubits. Our findings reveal that weakly coupled non-Hermitian qubits can accelerate multipartite entanglement generation by thousands of times compared to Hermitian qubits, in particular when approaching the 2^n-th order exceptional points of n qubits in the P T-symmetric regime. Furthermore, we show that Hermitian qubits can generate GHZ states with a fidelity higher than 0.9995 on a timescale comparable to that of non-Hermitian qubits, but at the expense of intense driving and a large coupling constant. Our approach is scalable to a large number of qubits, presenting a promising pathway for advancing quantum technologies through non-Hermiticity and higher-order exceptional points in many-body quantum systems.
Accelerating multipartite entanglement generation in non-Hermitian superconducting qubits
H. H. Jen
September 9, 2024
=========================================================================================
§ INTRODUCTION
Quantum physics in the Hermitian realm provides a genuine description of the dynamics of closed quantum systems. Hermiticity ensures the conservation of essential quantities such as probability, energy, particle number, and information <cit.>. However, when quantum systems interact with their surroundings, one frequently employs the Lindblad master equation <cit.>. It includes a Hermitian term that drives the unitary and coherent evolution of the system, along with additional non-Hermitian terms accounting for dissipation.
A broader framework for dissipative quantum systems has been developed using non-Hermitian Hamiltonians <cit.>. These Hamiltonians exhibit intriguing physical phenomena such as imaginary eigenvalues, biorthogonal states, and exceptional points (EPs) <cit.>. EPs are complex branching singularities in a parameter space where both eigenvalues and eigenvectors coalesce. These singular points are unique to non-Hermitian systems and lead to heightened sensitivity to applied perturbations, causing dramatic changes in the system's behavior <cit.>. This property has spurred significant interest in developing quasi-classical systems capable of exhibiting EPs, with applications in ultra-sensitive sensing <cit.>, wave transport management <cit.>, and single-mode lasing operations <cit.>. EPs have also been experimentally demonstrated in various quantum systems, including superconducting qubits <cit.>, trapped ions <cit.>, thermal atom ensembles <cit.>, and cold atoms <cit.>, offering tremendous quantum advantages <cit.>.
Among these benefits, the dynamics of entanglement around EPs have recently attracted much attention <cit.>. For example, a second-order EP has been identified as the location of Bell-state generation between a non-Hermitian qubit and a Hermitian qubit <cit.>. In addition, fourth-order EPs in two non-Hermitian qubits have been shown to accelerate bipartite entanglement generation <cit.>. Furthermore, the trade-off relationship between the degree of entanglement and the success rate has recently been investigated in non-Hermitian qubits with (un)balanced gain and loss <cit.>. Despite these pioneering studies on the interplay between bipartite entanglement and EPs, very little is known about the role of higher-order EPs in multipartite entanglement.
Multipartite entanglement has been implemented in various protocols involving Hermitian systems and gate operations <cit.>. In general, it provides quantum advantages over bipartite entanglement <cit.>, e.g., in quantum networks <cit.>, quantum secret sharing <cit.>, quantum key distribution <cit.>, and quantum thermodynamic tasks <cit.>. By arranging non-Hermitian and Hermitian qubits in desired configurations, multipartite entanglement allows for the adjustment of higher-order EPs. In principle, higher-order EPs could further amplify the effects of external perturbations and initial conditions. Therefore, it is crucial to understand the advantages of higher-order EPs in multipartite entanglement over (1) conventional Hermitian counterparts and (2) gate operations that require numerous and time-consuming SWAP operations.
In this work, we investigate multipartite entanglement in driven, dissipative, and weakly coupled non-Hermitian transmon qubits, which have been realized experimentally in Refs. <cit.>. Without inter-qubit coupling, the 2^n-th order EPs emerge among n qubits due to the rivalry between dissipation-induced non-Hermiticity and resonant driving fields. We make a quantitative analysis of entanglement dynamics near these higher-order EPs and in the Hermitian limit using the entanglement entropy <cit.> and the residual three-tangle <cit.>, alongside the dynamics of probability amplitudes and phases, to understand the physics of multipartite entanglement generation. Our result shows that the simultaneous rearrangement of populations and phases in three non-Hermitian qubits quickly builds robust tripartite entanglement near the eighth-order EP. More intriguingly, four non-Hermitian qubits can be entangled even more rapidly due to the influence of the higher-order EPs associated with them. Conversely, Hermitian qubits require strong driving and a relatively large inter-qubit coupling to develop high-fidelity multipartite entangled states on a timescale comparable to that of the non-Hermitian qubits.
§ RESULTS
§.§ Emergence of higher-order EPs
We consider a transmon circuit consisting of a capacitor and an inductor connected by a superconducting wire <cit.>. The key component of this circuit is the Josephson junction, which provides non-linear inductance, imparting an anharmonic character to the transmon circuit <cit.>, in contrast to the harmonic oscillation of an ideal LC circuit <cit.>. Controlling the circuit elements allows us to access three energy levels of the transmon circuit, successively named the ground state |g⟩, the first excited state |e⟩, and the second excited state |f⟩. In fact, the circuit resembles the qutrit system depicted in Fig. <ref>(a). The distinct energy spacings of the circuit facilitate the realization of a non-Hermitian qubit in the subspace {|e⟩, |f⟩} via driven-dissipative processes, while the ground state |g⟩ serves as an environment external to the qubit. To achieve the qubit's functionality in the chosen subspace, it is necessary to ensure faster decay of the state |e⟩ compared to the state |f⟩, and this hierarchy of dissipation rates can be controlled by an impedance-mismatching element, which amplifies or suppresses the electromagnetic radiation modes in the three-dimensional microwave cavity <cit.>.
These systems provide insights into entanglement dynamics within the framework of non-Hermitian Hamiltonians <cit.>. These Hamiltonians are effective for modeling realistic scenarios involving dissipative quantum systems. In this context, we consider weakly coupled non-Hermitian qubits described by the total non-Hermitian Hamiltonian
Ĥ = ∑^n_j=1[(Δ_j-iγ_j/2)σ̂_jσ̂^†_j+Ω_jσ̂^x_j]
+ ∑^n_j≠ k∑^n_k=1J_jk(σ̂^†_jσ̂_k+σ̂_jσ̂^†_k),
where the terms in the square bracket represent the individual non-Hermitian Hamiltonian for each qubit, while Ω_j denotes the driving amplitude, Δ_j represents the detuning frequency, and γ_j is the dissipation rate of the level |e⟩_j. The Pauli matrices for the qubit are given by σ̂_j^x=σ̂_j^† + σ̂_j, with σ̂_j^†=|f⟩_j ⟨ e| and σ̂_j=|e⟩_j ⟨ f|. The second term in Eq. (1) describes the inter-qubit coupling with constant J_jk. For Δ_j=0, our model obeys the passive P T-symmetry i.e., [ P T, Ĥ_ P T]=0, <cit.> where Ĥ_ P T=Ĥ(Δ_j=0)+∑^n_j=1(iγ_j)/4. Here, P and T indicate the parity and the time-reversal operators, respectively.
Before showing our main result (i.e., the fast generation of multipartite entanglement in n qubits), we first discuss their EPs, and then we address the advantages of these EPs in multipartite entanglement generation later. A single non-Hermitian qubit in Fig. <ref>(a) exhibits a second-order EP at Ω_j=γ_j/4 <cit.>, and two identical non-Hermitian qubits exhibit a fourth-order EP at the same Rabi frequency <cit.>. In addition, Figs. <ref>(b) and <ref>(c) show the real and imaginary parts of the eigenvalues for three identical, uncoupled non-Hermitian qubits with Δ_j=0. At Ω=γ/4=1.5 rad/μs (referred to as Ω_ EP hereafter), the qubits exhibit an eighth-order EP, where all eigenvalues converge to E_ EP=-3iγ/4, and all eigenvectors coalesce into |φ_ EP⟩ (see Supplementary Note A for more detail). This result generalizes to (2^n)^ th order EPs for n uncoupled non-Hermitian qubits, with the EP order matching the Hilbert space dimensions. These higher-order EPs arise from the rivalry between dissipation-induced non-Hermiticity and resonant driving field effects. In practice, they can be tuned by varying the resonant Rabi frequency while keeping the non-Hermiticity parameter fixed <cit.>.
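As a concrete illustration of this coalescence, the following minimal sketch (an illustrative, non-authoritative example: the use of QuTiP, the helper names H_total and embed, and the parameter values in rad/μs are our own assumptions) builds the Hamiltonian of Eq. (1) and checks numerically that, for three uncoupled qubits driven at Ω=γ/4, all eight eigenvalues cluster around E_ EP=-3iγ/4.

```python
import numpy as np
from qutip import basis, qeye, tensor

def H_total(n, Delta, gamma, Omega, J=0.0):
    """Non-Hermitian Hamiltonian of Eq. (1) for n qubits with uniform parameters (rad/us)."""
    f, e = basis(2, 0), basis(2, 1)
    sm, sp = e * f.dag(), f * e.dag()          # sigma_j = |e><f|,  sigma_j^dagger = |f><e|
    def embed(op, j):
        ops = [qeye(2)] * n
        ops[j] = op
        return tensor(ops)
    H = 0 * embed(qeye(2), 0)
    for j in range(n):                          # driven-dissipative single-qubit terms
        H += embed((Delta - 1j * gamma / 2) * sm * sp + Omega * (sp + sm), j)
    for j in range(n):                          # exchange coupling, each pair counted once
        for k in range(j + 1, n):
            H += J * (embed(sp, j) * embed(sm, k) + embed(sm, j) * embed(sp, k))
    return H

gamma = 6.0                                     # so that Omega_EP = gamma/4 = 1.5 rad/us
eigs = np.linalg.eigvals(H_total(n=3, Delta=0.0, gamma=gamma, Omega=gamma / 4).full())
print(eigs)   # eigenvalues cluster around -3j*gamma/4; the residual spread reflects
              # the defectiveness of the matrix at the eighth-order EP
```

Switching on a small coupling J in the same sketch reproduces the partial lifting of this degeneracy discussed next.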
Figure <ref>(d) illustrates three identical non-Hermitian qubits weakly coupled to each other with a uniform coupling constant J_jk=J. This coupling acts as an external perturbation to the system and reduces the order of EPs at Ω_ EP due to degeneracy lifting. For example, with a coupling strength of J = 10^-3 rad/μs, an eighth-order EP is reduced to fourth- and third-order EPs at Ω_ EP. Furthermore, two second-order EPs emerge on either side of Ω_ EP as shown in Figs. <ref>(e) and <ref>(f). These second-order EPs persist even in the strong coupling regime (J>γ), although increasing the coupling strength completely lifts the degeneracy at Ω_ EP. In contrast, lower coupling strengths such as J=10^-4, 10^-5, and 10^-6 rad/μs result in a fifth-order EP at Ω_ EP, with the response of the system weakening as the coupling strength decreases. Around the higher-order EPs, the response of energy eigenvalues can be enhanced by several orders of magnitude <cit.>, resulting in the heightened sensitivity of the non-Hermitian qubits to external perturbations. Later, we will show that this effect can indeed boost multipartite entanglement generation.
§.§ EP-induced multipartite entanglement generation
We focus on the dynamics of non-Hermitian qubits in the P T-symmetry preserving regime <cit.>, where Ω≥Ω_ EP. In this regime, the qubit system exhibits equal imaginary eigenvalues, which are removed from the state |ψ(t)⟩=e^-iĤt/ħ|ψ(0)⟩/√(⟨ψ(0)|e^iĤ^̂†̂t/ħe^-iĤt/ħ|ψ(0)⟩) of the composite system through normalization conditions. Meanwhile, the distinct real eigenvalues significantly influence the system dynamics (see Figs. <ref>(d) and <ref>(b)). We can then write the normalized state of n qubits as |ψ(t)⟩=∑^2^n_m=1α_m(t)|δ_m⟩, where α_m(t) signifies the complex probability amplitude, and |δ_m⟩ is the computational basis (see Methods for further details).
We aim to generate multipartite GHZ states <cit.>, which can be distinguished from any other states using the entanglement entropy <cit.> S_j=- Tr[ρ̂_j(t)logρ̂_j(t)], where ρ̂_j(t) stands for the reduced state of qubit j for j=1, 2, ..., n (see Methods). Each qubit subsystem in GHZ states achieves the maximum entanglement entropy S_j=log2. This can be further confirmed by the genuine residual three-tangle τ_123 for tripartite system when τ_123=1 <cit.>.
We first investigate the entanglement developed in three-qubit system in the P T-symmetric regime Ω≥Ω_ EP. In Figs. <ref>(a) and <ref>(b), we illustrate the dynamics of entanglement entropy and residual three-tangle as functions of time and inter-qubit coupling for a specific driving amplitude Ω=1.576 rad/μs. The quantifiers consistently indicate the generation of an optimal GHZ class from the initial coherent state |ψ(0)⟩=2^-3/2(|f⟩-i|e⟩)^⊗3 at t≈3.233 μs and J≈10^-3 rad/μs. At this point, the entanglement entropy of each reduced qubit is S_j≈0.690, and the three-tangle is τ_123≈0.980. This entangled state is highlighted by green-dashed and black-dashed lines intersecting at the central, brighter parts of the three-tangle (see Fig. <ref>(b)). This indicates that for a specific driving amplitude and coupling constant, there exists a scenario where entanglement is significantly shared among all three qubits, with negligible bipartite entanglement between any two qubits (see Supplementary note B). This is a characteristic property of GHZ states <cit.>. After the optimal entanglement generation at t ≈ 3.233 μs in Fig. 2, the qubits evolve into product states. To understand the physics behind the GHZ state generation at t≈3.233 μs and J≈10^-3 rad/μs, we compare the dynamics of probability amplitudes |α_m(t)| and phases Arg[α_m(t)] of the Hermitian and non-Hermitian qubits in Figs. <ref>(a)-<ref>(c) for J=0 and J=10^-3 rad/μs within a time span of 6.5 μs (one period of evolution for non-Hermitian qubits). Hermitian qubits refer to a regime with γ=0 in the non-Hermitian Hamiltonian.
Figure <ref>(a) illustrates that the amplitudes of the bases |fff⟩ and |eee⟩ of non-interacting Hermitian qubits oscillate periodically within π/Ω≈ 2 μs. These oscillations result from coherent energy transfer between the qubits and the driving field, similar to Rabi oscillations of two-level atoms interacting with a quantized radiation mode <cit.>. This coherent energy exchange results in a periodic evolution of the Hermitian qubits via the product states |ψ(0)⟩→ |eee⟩→ |ψ^∗(0)⟩→ |fff⟩→ |ψ(0)⟩ every 2 μs.
Since local operations with classical communication are allowed operations under the framework of the resource theory of entanglement <cit.>, we now discuss whether changing the local basis influences the generation of entangled states.
As the qubits jump from one basis state to another on the Bloch sphere, the state |fff⟩ exchanges a discrete π-phase with the single-excitation degenerate bases |fee⟩, |efe⟩, and |eef⟩ at π/4Ω≈0.5 μs, while the state |eee⟩ exchanges an equal π-phase with the double-excitation degenerate bases |ffe⟩, |fef⟩, and |eff⟩ at 3π/4Ω≈1.5 μs.
Furthermore, applying a small perturbation of the order J=10^-3 rad/μs does not substantially affect probability amplitudes of the Hermitian qubits within such a short time dynamics. However, it may slightly distort phase evolutions, which is insufficient for generating tripartite entanglement. This is because generating tripartite entanglement requires a simultaneous rearrangement of both populations and phases as will be clarified below.
We now demonstrate the dynamics of probability amplitudes and phases of the non-Hermitian qubits in Figs. <ref>(b) and <ref>(c) for J=0 and J=10^-3 rad/μs, respectively. Without coupling (see Fig. <ref>(b)), the qubits evolve in a manner similar to Hermitian qubits, but the introduced non-Hermiticity distorts the oscillations and increases the period to 4π/√(16Ω^2 - γ^2)≈6.5 μs compared to Hermitian qubits. In Fig. <ref>(c) at a finite coupling J=10^-3 rad/μs, the dynamics of qubits at t≈3.233 μs is significantly modified, where the amplitudes for |fff⟩ and |eee⟩ are increased, while the ones for the degenerate bases are decreased. This further triggers continuous phase evolutions instead of the discrete π-phase jumps seen without coupling.
The rearrangement of both populations and phases highlights the fact that entanglement generation in three-qubit is fundamentally different from two-qubit cases, where all maximally entangled bipartite states are equivalent up to local change of bases <cit.>. The vertical dashed lines in Fig. <ref>(c) at t≈3.233 μs indicate an entangled state generated at the peaks of entanglement entropies shown in Fig. <ref>(d) (at the central bright regions in Fig. <ref>). In contrast, achieving tripartite entanglement between three Hermitian qubits under the same parameter regimes requires several thousand microseconds, as depicted in Fig. <ref>(e).
To gain further insights into the dynamics of the qubits, we visualize the evolution of reduced Hermitian and non-Hermitian qubits starting from the coherent state |ψ(0)⟩=2^-3/2(|f⟩-i|e⟩)^⊗3 (Fig. <ref>(f)) and the product state |ψ(0)⟩=|fff⟩ (Fig. <ref>(g)). The green trajectories in both figures represent the reduced Hermitian qubit with and without coupling, as well as the reduced non-Hermitian qubit without coupling. In these scenarios, both reduced qubits independently evolve on the surface of the Bloch sphere and remain in pure product states, indicating the absence of tripartite entanglement. For the Hermitian qubit with J=10^-3 rad/μs, the green trajectory suggests a regime where tripartite entanglement generation is not accessible due to the short qubit dynamics (t=6.5 μs) (see Fig. <ref>(e)).
On the other hand, the non-Hermitian qubit with J=10^-3 rad/μ s evolves into a mixed state, suggesting significant tripartite entanglement generation. The corresponding purity values are P_j≈0.5033 at t≈3.232 μs for an initial coherent state and P_j≈0.512 at t≈5.325 μs for an initial state |fff⟩ (see red trajectories in Figs. <ref>(f) and <ref>(g)). The lowest purity value, P_1=P_2=P_3=0.5, corresponds to maximally mixed states and indicates the standard three-party GHZ state. Notably, the initial coherent state |ψ(0)⟩=2^-3/2(|f⟩-i|e⟩)^⊗3 not only generates a robust genuine and bi-separable tripartite entangled state but also saves time and driving energy compared to the state |ψ(0)⟩=|fff⟩ (supplementary note B and C). This advantage arises due to the sensitivity of non-Hermitian qubits to initial conditions as well as applied perturbations, which is significantly beneficial for tasks requiring multiple qubits with higher-order EPs.
Our system is scalable to n-qubit configurations, exemplified by a setup of four coupled non-Hermitian qubits shown in Fig. <ref>(a). The entanglement entropies of each reduced-qubit system, depicted in Fig. <ref>(b), can detect four-qubit GHZ-class states (Methods). For a given resonant driving amplitude Ω, we can find an optimal value of the coupling constant J that generates four-qubit entanglement (see peaks of S_j in Fig. <ref>(b)). The timing differences in entanglement generation are due to the system's response to the combined effects of higher-order EPs, resonant driving, and inter-qubit coupling.
As the system approaches higher-order EPs located at Ω_ EP, a weaker coupling strength is required to generate an optimal four-qubit entanglement. This is evident from the delayed peak of Fig. <ref>(b). Increasing the coupling constant leads to faster entanglement generation slightly away from higher-order EPs. This is illustrated by the faster generation peaks. In addition, larger coupling constants enhance the robustness of the entangled states against off-resonant drivings (Supplementary note D). Thus, non-Hermitian qubits enable a design of specific driving protocols and inter-qubit couplings, allowing for the generation of multipartite entanglement unattainable with Hermitian qubits.
Moreover, a comparison of the four-qubit entanglement at t≈ 2.85 μs shown in Fig. <ref>(b) with the tripartite entanglement at t≈ 3.23 μs in Fig. <ref>(d) and the bipartite entanglement at t= 5.325 μs from Ref. <cit.>, all for the same interaction strength J, demonstrates that four-qubit entanglement can be achieved more rapidly by slightly adjusting the resonant driving amplitude to Ω=1.598 rad/μ s. This fast entanglement generation can be attributed to higher-order EPs and rapid transitions of the qubits between different quantum states. These findings suggest that higher-order EPs in many-body quantum systems can be advantageous for generating multipartite entanglement.
§.§ Hermitian Limit
Up to now, we have concentrated on the dynamics of multipartite GHZ-class states near higher-order EPs, highlighting that non-Hermitian qubits can save thousands of microseconds compared to Hermitian qubits within the same parameter regime. The long-time dynamics required for Hermitian qubits exposes them to environment-induced decoherence effects, which are detrimental to entanglement generation.
Alternatively, Hermitian qubits can generate three- and four-qubit entangled states on a timescale comparable to non-Hermitian qubits as demonstrated in Figs. <ref>(a) and <ref>(b). This is achieved by driving the qubits with a strong Rabi frequency Ω=10 rad/μs and adjusting the coupling strength to J=0.4 rad/μs, while remaining in the weak coupling regime. In this parameter range, Hermitian effects are dominant, therefore the entanglement induced by non-Hermitian qubits showcases an oscillatory behavior on top of the Hermitian results. Additionally, this parameter regime not only speeds up entanglement generation in Hermitian qubits but also stabilizes the maximal entanglement for both initial states |ψ(0)⟩=2^-n/2(|f⟩-i|e⟩)^⊗ n and |ψ(0)⟩=|f⟩^⊗ n after a few microseconds (supplementary note D for the details).
To explicitly identify the entangled states generated in stable regimes illustrated in Figs. <ref>(a) and <ref>(b), we examine the dynamics of qubit probability amplitudes and phases within the time interval t ∈ [7.6 μs, 8.1 μs], as illustrated in Figs. <ref>(c)-<ref>(f). In both the three-qubit and four-qubit systems, the probability amplitudes oscillate over time, and the phases adjust accordingly to ensure the generation of a specific multipartite entangled state at a certain time. For example, when the probability amplitudes of the bases |f⟩^⊗ n and |e⟩^⊗ n are close to 1/√(2), and the amplitudes of the other degenerate bases become vanishing, this suggests the generation of GHZ states of the form e^iθ_g( |f⟩^⊗ n + e^iθ_r |e⟩^⊗ n)/√(2) as illustrated by the vertical lines in Figs. <ref>(c) and <ref>(e). The global phase θ_g and the relative phase θ_r can be obtained from the phase dynamics shown in Figs. <ref>(d) and <ref>(f).
In particular, three-qubit GHZ states |ψ_∓⟩≈ e^iπ(|fff⟩ + e^∓ iπ/2|eee⟩)/√(2) can be generated at t ≈ 7.775 μs and t ≈ 7.932 μs, respectively, as indicated in Figs. <ref>(c) and <ref>(d). In Fig. <ref>(g), we depict their fidelities, defined as |⟨ GHZ|ψ_±⟩| with | GHZ⟩=(|fff⟩ -i|eee⟩)/√(2), and they are approximately 0.9997 and 0.9998 at the respective times. Similarly, four-qubit GHZ states |ψ̃_±⟩≈ e^iπ/4(|ffff⟩ + e^± iπ/2|eeee⟩)/√(2) can be produced at t ≈ 7.852 μs and t ≈ 8.009 μs, respectively, with the corresponding fidelities |⟨G̃H̃Z̃|ψ̃_±⟩|≈ 0.9997 demonstrated in Fig. <ref>(h).
Our system also hosts other maximally entangled GHZ classes |ψ⟩ ≈ 0.5e^-iπ/2(|ffe⟩ + |fef⟩ + |eff⟩ + e^iπ|eee⟩) and |ψ⟩ ≈ 0.5(|fee⟩ + |efe⟩ + |eef⟩ + e^iπ|fff⟩) at t ≈ 7.715 μs and t ≈ 7.856 μs, respectively. The former state is obtained by swapping |fff⟩ in the standard GHZ state with the degenerate bases {|ffe⟩, |fef⟩, |eff⟩}, while the latter state replaces |eee⟩ with {|fee⟩, |efe⟩, |eef⟩}. This observation indicates that tripartite entanglement remains unaffected by population exchange between |fff⟩ and {|fee⟩, |efe⟩, |eef⟩}, and between |eee⟩ and {|ffe⟩, |fef⟩, |eff⟩}, provided that an appropriate relative phase should be maintained to keep maximal entanglement. This property may be specific to odd numbers of qubits because, in the case of four-qubit entanglement, all probability amplitudes are non-vanishing except for the standard GHZ states. For example, four-qubit GHZ classes with nearly equal probability amplitudes are evident from Figs. <ref>(e) and <ref>(f) at t ≈ 7.782 μs and t ≈ 7.945 μs. Here, entanglement primarily arises from phase buildup, similar to the two-qubit scenario <cit.>.
§.§ Discussions
In conclusion, we generate three- and four-qubit entanglement near higher-order EPs whose order corresponds to the dimension of the qubits' Hilbert space. We also present entangled-state generation in strongly-driven Hermitian qubits with high fidelity to the GHZ states as a comparison to the non-Hermitian cases. Our results show that non-Hermitian qubits can simultaneously save significant driving energy and accelerate multipartite entanglement generation. In particular, they can reduce the required multiparty entanglement generating time by thousands of microseconds compared to Hermitian qubits operating in the same parameter regimes. This advantage is attributed to the ultra-high sensitivity of non-Hermitian qubits to parameter changes and initial conditions near EPs. Notably, our result points out that higher-order EPs in many-body quantum systems can be beneficial for multipartite entanglement generation. Furthermore, these EPs allow the design of specific driving protocols and inter-qubit couplings to reach desired multipartite entangled states at precise times, which is crucial for practical quantum technologies and fundamental studies. Non-Hermitian qubits thus exhibit unique multipartite entanglement dynamics not observed in Hermitian systems.
Furthermore, we show that achieving entanglement with Hermitian qubits on the same timescale as non-Hermitian qubits requires strong resonant driving, which also stabilizes maximal entanglement for several time periods. Observations of the dynamics of probability amplitudes and phases in stable entangled regimes reveal the fundamental mechanisms that lead to GHZ states in both odd and even numbers of qubits.
Although our work concentrates on GHZ classes, it is interesting to see whether non-Hermitian systems can generate W states and other distinct classes of multipartite entangled states, e.g., graph states <cit.>. Our results can naturally be extended to scalable GHZ states and realized through parametrically driven and Josephson-based quantum state routers <cit.> and superconducting processors with all-to-all coupling <cit.>.
§ METHODS
The joint evolution of the qubits from an arbitrary initial state |ψ(0)⟩ is given by (ħ=1)
|ψ(t)⟩ = e^-iĤt|ψ(0)⟩/√(⟨ψ(0)|e^iĤ^̂†̂te^-iĤt|ψ(0)⟩),
where Ĥ is the Hamiltonian defined in Eq. (1).
We can also rewrite the state in Eq. (<ref>) as
|ψ(t)⟩=∑^2^n_m=1⟨ϕ̃_m|ψ(0)⟩ e^-iE_mt|ϕ_m⟩,
where the summation index 2^n denotes the dimension of the Hilbert space spanned by n qubits, E_m represents the complex eigenvalues, while |ϕ_m⟩ and ⟨ϕ̃_m| are the normalized right and left biorthogonal eigenvectors. The state can also be expanded in terms of the computational bases. For example, the three-qubit state is given by
|ψ(t)⟩=∑^8_m=1α_m(t)|δ_m⟩,
where α_m(t) indicates the complex probability amplitudes, and |δ_m⟩ are the basis states. These bases include |fff⟩ for m=1, a set of degenerate bases {|ffe⟩, |fef⟩, |eff⟩} for m=2-4, another set of degenerate bases {|fee⟩, |efe⟩, |eef⟩} for m=5-7, and |eee⟩ for m=8.
Similarly, the four-qubit state can be expressed as
|ψ̃(t)⟩=∑^16_m=1α̃_m(t)|δ̃_m⟩,
with the new probability amplitudes α̃_m(t) and bases |δ̃_m⟩. Four qubits have sixteen computational bases, which can be grouped into five sets as |ffff⟩ for m=1, the first set of degenerate bases {|fffe⟩, ..., |efff⟩} for m=2-5, the second set of degenerate bases {|ffee⟩,..., |eeff⟩} for m=6-11, the third set of degenerate bases {|feee⟩,..., |eeef⟩} for m=12-15, and lastly |eeee⟩ for m=16 (see Figs. <ref>(e) and <ref>(f)).
The probability amplitudes and phases discussed in the main text are computed using the QuTiP software package <cit.>. This approach enables us to determine the states of non-Hermitian qubits at any given time by specifying their probability amplitudes and phase factors. For instance, standard three-qubit and four-qubit GHZ states are obtained when the amplitudes of the basis states |f⟩^⊗ n and |e⟩^⊗ n are exactly 1/√(2), while all other probability amplitudes are vanishing, as illustrated in Figs. <ref>(c)-<ref>(f) (see also Figs. <ref>(a)-<ref>(c)).
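As a hedged illustration of this workflow (the package choice, parameter values, and variable names below are assumptions made for the sketch, not a verbatim excerpt of our scripts), the normalized state can be obtained by propagating the initial coherent state with the matrix exponential of the non-Hermitian Hamiltonian and renormalizing at every output time; the complex amplitudes α_m(t) and their phases then follow directly from the state vector.

```python
import numpy as np
from qutip import basis, qeye, tensor

n, gamma, Omega, J = 3, 6.0, 1.576, 1e-3        # rad/us, near the eighth-order EP
f, e = basis(2, 0), basis(2, 1)
sm, sp = e * f.dag(), f * e.dag()

def embed(op, j):
    ops = [qeye(2)] * n
    ops[j] = op
    return tensor(ops)

H = 0 * embed(qeye(2), 0)
for j in range(n):                               # resonant driving, Delta = 0
    H += embed(-1j * gamma / 2 * sm * sp + Omega * (sp + sm), j)
for j in range(n):
    for k in range(j + 1, n):
        H += J * (embed(sp, j) * embed(sm, k) + embed(sm, j) * embed(sp, k))

psi0 = tensor([(f - 1j * e).unit()] * n)         # coherent initial state 2^{-3/2}(|f> - i|e>)^{(x)n}
times = np.linspace(0.0, 6.5, 131)               # one period of the non-Hermitian dynamics
amps = np.array([(((-1j * H * t).expm() * psi0).unit()).full().ravel() for t in times])
phases = np.angle(amps)                          # alpha_m(t) and Arg[alpha_m(t)] on |delta_m>
```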
Entanglement entropy quantifies the amount of information needed to fully describe the state of one part of a quantum system when the other parts are inaccessible <cit.>. For example, we can divide the three-qubit system into subsystem A, consisting of the reduced qubit j, and subsystem B, consisting of the remaining two qubits. Quantum correlation between the subsystems can be evaluated by the entanglement entropy (equal for both subsystems) of the reduced qubit as S_j=- Tr[ρ̂_j(t) logρ̂_j(t)], where the reduced density matrices ρ̂_j(t) of each qubit are given by
ρ̂_1(t) = [ |α_1(t)|^2 + |α_2(t)|^2 + |α_3(t)|^2 + |α_5(t)|^2 α_1(t)α_4^*(t) + α_2(t)α_6^*(t) + α_3(t)α_7^*(t) + α_5(t)α_8^*(t); α_4(t)α_1^*(t) + α_6(t)α_2^*(t) + α_7(t)α_3^*(t) + α_8(t)α_5^*(t) |α_4(t)|^2 + |α_6(t)|^2 + |α_7(t)|^2 + |α_8(t)|^2 ],
ρ̂_2(t) = [ |α_1(t)|^2 + |α_2(t)|^2 + |α_4(t)|^2 + |α_6(t)|^2 α_1(t)α_3^*(t) + α_2(t)α_5^*(t) + α_4(t)α_7^*(t) + α_6(t)α_8^*(t); α_3(t)α_1^*(t) + α_5(t)α_2^*(t) + α_7(t)α_4^*(t) + α_8(t)α_6^*(t) |α_3(t)|^2 + |α_5(t)|^2 + |α_7(t)|^2 + |α_8(t)|^2 ],
ρ̂_3(t) = [ |α_1(t)|^2 + |α_3(t)|^2 + |α_4(t)|^2 + |α_7(t)|^2 α_1(t)α_2^*(t) + α_3(t)α_5^*(t) + α_4(t)α_6^*(t) + α_7(t)α_8^*(t); α_2(t)α_1^*(t) + α_5(t)α_3^*(t) + α_6(t)α_4^*(t) + α_8(t)α_7^*(t) |α_2(t)|^2 + |α_5(t)|^2 + |α_6(t)|^2 + |α_8(t)|^2 ].
Denoting the reduced density matrices of Eq. (<ref>) by
ρ̂_j(t)=
[ ρ^j_xx ρ^j_xy; ρ^j_yx ρ^j_yy ],
and noting ρ^j_xx+ρ^j_yy=1,
the entanglement entropy of qubit j can be further reduced to
S_j = -λ^j_- logλ^j_- - λ^j_+ logλ^j_+,
where λ^j_±=1/2±1/2√((ρ^j_xx-ρ^j_yy)^2+4ρ^j_xyρ^j_yx) are the eigenvalues of the reduced density matrix in Eq. (<ref>).
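For completeness, the numerical evaluation of this quantity reduces to a few lines; the sketch below is ours and uses plain NumPy, with the natural logarithm following the convention of the expressions above.

```python
import numpy as np

def entanglement_entropy(rho_j):
    """S_j = -Tr[rho_j log rho_j] from the eigenvalues of a 2x2 reduced density matrix."""
    lam = np.linalg.eigvalsh(rho_j)
    lam = lam[lam > 1e-12]                       # discard numerical zeros
    return float(-np.sum(lam * np.log(lam)))

# sanity checks: log 2 for a GHZ-reduced qubit, 0 for a pure (product-state) reduced qubit
print(entanglement_entropy(np.diag([0.5, 0.5])), np.log(2))
print(entanglement_entropy(np.diag([1.0, 0.0])))
```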
The entanglement entropy of the reduced qubits takes on distinct values for product states, GHZ states, and W states. This distinction not only allows us to differentiate between product and entangled states but also indicates that GHZ states capture greater quantum correlations compared to W states. For GHZ states, the eigenvalues λ^j_+ and λ^j_- are both 1/2. In contrast, for product states, either λ^j_+ or λ^j_- is zero, while the remaining eigenvalue is unity. For instance, in the case of the initial coherent superposition considered in the results section, we have λ^j_+=0 and λ^j_-=1, leading to S_j=0.
Consequently, the entanglement entropy of the reduced qubits ranges from S_1= S_2= S_3=0 for product states to S_1= S_2= S_3=log2 for GHZ states. For W states, it takes a specific value S_1= S_2= S_3=log3-(2/3)log2, which falls within the interval 0≤ S_j≤log2. The same idea applies to a larger number of qubits. For instance, entanglement entropy has successfully identified the four-qubit GHZ state with a fidelity of approximately 0.999 in the Hermitian limit (see Fig. <ref>(b) and Fig. <ref>(h)).
Furthermore, we use the three-tangle, a residual three-qubit entanglement measure specifically designed to quantify the amount of genuine entanglement shared among all three qubits, excluding any bipartite entanglement contributions. It thus captures the essence of entanglement monogamy by showing that the entanglement is distributed among the three qubits in a constrained way as <cit.>
τ_123= C^2_1(23)- C^2_12- C^2_13,
where C_12 and C_13 represent pairwise concurrences between two qubits <cit.>. For instance, the concurrence C_12 is defined as <cit.> C_12=max(0, λ_1-λ_2-λ_3-λ_4), where λ_1, λ_2, λ_3 and λ_4 are the eigenvalues of the Hermitian matrix R̂_12=√(√(ρ̂_12(t))ρ̃̂̃_12(t)√(ρ̂_12(t))), with ρ̃̂̃_12(t)=(σ_y⊗σ_y)ρ̂^∗_12(t)(σ_y⊗σ_y), and ρ̂_12(t)=Tr_3[|ψ(t)⟩⟨ψ(t)|]. The eigenvalues should be ordered as λ_1≥λ_2≥λ_3≥λ_4.
Moreover, C_1(23) in Eq. <ref> represents a bi-partition concurrence calculated from the purity P_1= Tr[ρ̂^2_1(t)] of the first qubit as <cit.> C_1(23)=√(2-2P_1). The three-tangle is invariant under permutations of the qubits <cit.>; thus, rearrangement of the qubits in any order does not change the value of the three-tangle.
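A compact numerical sketch of this measure is given below (the function names and the basis ordering |q_1 q_2 q_3⟩ are our assumptions): the pairwise concurrence is computed from the square roots of the eigenvalues of the product of the two-qubit density matrix and its spin-flipped counterpart, which coincide with the eigenvalues of R̂_12, and it is combined with the bipartition concurrence obtained from the purity.

```python
import numpy as np

def concurrence(rho2):
    """Wootters concurrence of a two-qubit density matrix rho2 (4x4)."""
    sy = np.array([[0, -1j], [1j, 0]])
    rho_tilde = np.kron(sy, sy) @ rho2.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho2 @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def three_tangle(psi):
    """Residual three-tangle of a normalized 8-dim state vector ordered as |q1 q2 q3>."""
    psi = psi.reshape(2, 2, 2)
    rho1 = np.einsum('abc,dbc->ad', psi, psi.conj())            # reduced qubit 1
    C1_23 = np.sqrt(max(0.0, 2.0 - 2.0 * np.real(np.trace(rho1 @ rho1))))
    rho12 = np.einsum('abc,dec->abde', psi, psi.conj()).reshape(4, 4)
    rho13 = np.einsum('abc,dbe->acde', psi, psi.conj()).reshape(4, 4)
    return C1_23**2 - concurrence(rho12)**2 - concurrence(rho13)**2

ghz = np.zeros(8, complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)                               # (|fff> + |eee>)/sqrt(2)
print(three_tangle(ghz))                                         # ~1 for a GHZ state
```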
§ ACKNOWLEDGMENTS
H. H. J. acknowledges support from the National Science and Technology Council (NSTC), Taiwan, under the Grant No. NSTC-112-2119-M-001-007, and from Academia Sinica under Grant AS-CDA-113-M04. We are also grateful for support from TG 1.2 of NCTS at NTU. H.-Y. K. is supported by the Ministry of Science and Technology, Taiwan (with grant number MOST 112-2112-M-003-020-MY3), and the Higher Education Sprout Project of National Taiwan Normal University (NTNU).
§ AUTHOR CONTRIBUTIONS
H. Y. K. initiated the ideas; J. S. Y., H. Y. K., and H. H. J. supervised the project. C. G. F. conducted the analytical and numerical calculations, interpreted the results, and wrote the first draft of the manuscript. All authors contributed to the writing of the manuscript.
§ COMPETING INTERESTS
The authors declare no competing interests.
|
http://arxiv.org/abs/2409.03395v1 | 20240905101434 | Data-driven turbulent heat flux modeling with inputs of multiple fidelity | [
"Matilde Fiore",
"Enrico Saccaggi",
"Lilla Koloszar",
"Yann Bartosiewicz",
"Miguel Alfonso Mendez"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
matilde.fiore@vki.ac.be
Environmental and Applied Fluid Dynamics Department, von Karman Institute for Fluid Dynamics, Belgium
Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Italy
Environmental and Applied Fluid Dynamics Department, von Karman Institute for Fluid Dynamics, Belgium
Environmental and Applied Fluid Dynamics Department, von Karman Institute for Fluid Dynamics, Belgium
Institute of Mechanics, Materials and Civil Engineering (IMMC), Universite catholique
de Louvain (UCLouvain), Place du Levant 2, 1348 Louvain-la-Neuve, Belgium
Environmental and Applied Fluid Dynamics Department, von Karman Institute for Fluid Dynamics, Belgium
§ ABSTRACT
Data-driven RANS modeling is emerging as a promising methodology to exploit the information provided by high-fidelity data. However, its widespread application is limited by challenges in generalization and robustness to inconsistencies between input data of varying fidelity levels. This is especially true for thermal turbulent closures, which inherently depend on momentum statistics provided by low or high fidelity turbulence momentum models. This work investigates the impact of momentum modeling inconsistencies on a data-driven thermal closure trained with a dataset with multiple fidelity (DNS and RANS).
The analysis of the model inputs shows that the two fidelity levels correspond to separate regions in the input space. It is here shown that such separation can be exploited by a training with heterogeneous data, allowing the model to detect the level of fidelity in its inputs and adjust its prediction accordingly. In particular, a sensitivity analysis and verification shows that such a model can leverage the data inconsistencies to increase its robustness. Finally, the verification with a CFD simulation shows the potential of this multi-fidelity training approach for flows in which momentum statistics provided by traditional models are affected by model uncertainties.
Data-driven turbulent heat flux modeling with inputs of multiple fidelity
Miguel A. Mendez
September 9, 2024
=========================================================================
§ INTRODUCTION
Data-driven methods are progressively entering the field of turbulence modeling as one of the most promising avenues to overcome the modeling barrier reached by traditional approaches in the last decade. A variety of data-driven approaches for the modeling of Reynolds stresses have been recently proposed <cit.>, among which artificial neural networks <cit.> are widely adopted to handle large databases and generate complex, non-linear input/output mappings. Despite the merits of these approaches, significant limitations still restrict their use for a wide range of Computational Fluid Dynamics (CFD) simulations.
<cit.> highlights the most important of these limitations, including the lack of generality in data-driven closures, i.e., their limited applicability to flows that deviate from the conditions covered by the training. The derived closures usually target specific classes of flows <cit.> and existence of general or universal data-driven turbulence models is currently being questioned by the community <cit.> due to the complex dependence of coherent structures on geometry, Reynolds number, and many other local/global conditions. Moreover, the literature points out limitations when data-driven closures are trained offline <cit.>, i.e., with high-fidelity data only in a frozen mode. This training strategy neglects the influence of the numerical methods and additional sub-models involved in the CFD setup. The frozen training mode causes model-data inconsistencies <cit.>, where the quantities taken as input by the model differ during training and testing phases. This limits the accuracy when the model is applied a posteriori. A typical example of the consistency problem is the turbulent time scale k/ϵ <cit.>, which is systematically different in high-fidelity and modeled (RANS) data, leading to inconsistencies in the data-driven closure if this quantity is one of the model inputs.
Current research proposes mitigating this issue by partially replacing the critical high-fidelity turbulent statistics with their RANS counterparts. This approach is followed, for example, by <cit.>, who developed a k-corrective frozen training strategy where the turbulent dissipation rate ε is computed by solving its RANS transport equation, taking all the other statistics from the high-fidelity database and adding a corrective term accounting for errors in the k production term. <cit.> proposed a similar approach in which ω=ε/k is computed from RANS equations evaluated with high-fidelity mean flow and Reynolds stresses. <cit.> included the inconsistency between RANS and DNS estimates of k in the outputs of the data-driven model, and used this to inform the data-driven model when it operates in conjuction with a RANS solver.
In general, addressing model inconsistencies in turbulence modeling requires balancing the physical consistency of the data-driven model against its applicability in standard RANS solvers. On one hand, using modeled statistics enhances the applicability of the closures but inevitably introduces structural errors in the mapping to compensate for input inaccuracies. On the other hand, utilizing high-fidelity data promotes physical consistency and leverages the real correlation among the statistics, but this comes at the expense of applicability when implemented in RANS solvers.
Moving to non-isothermal problems, the modeling of turbulent heat fluxes depends on the momentum turbulence modeling, and the Reynolds stress tensor is inevitably one of the critical inputs for turbulent heat flux models. Consequently, a thermal turbulence model trained with Reynolds stresses derived from high-fidelity data makes significant errors when deployed in a RANS solver, which uses low-fidelity modeling of Reynolds stresses along with the Boussinesq approximation.
This problem was highlighted in <cit.>, focusing on the data-driven modeling of the turbulent heat flux at near unity and low Prandtl numbers. Model inconsistencies in the Reynolds stresses were identified to be the major challenge in data-driven thermal closures, partially addressable only by the use of second order momentum turbulence closures. Modeling the Reynolds stresses with the eddy viscosity concept poses significant limitations to thermal turbulence modeling, because the turbulence anisotropy is essential to represent the heat flux vector, and is indeed the foundation of most algebraic thermal closures <cit.>. Such modelling choices are typical of both data-driven and theory-driven RANS modeling. In fact, traditional thermal models are often developed to be combined with specific momentum closures <cit.>, or variants are proposed to account for different momentum models <cit.>.
This work explores the hypothesis that machine learning tools could identify the source of the input (e.g. high fidelity vs low fidelity momentum modeling) and adapt accordingly, or find compromises by mitigating their sensitivity to critical inputs. Both approaches could ensure an optimal compromise between the physical consistency of the model when it is deployed on high-fidelity data, and its robustness when it is deployed with traditional RANS solvers.
To explore this hypothesis, this paper proposes a training method based on inputs of multiple fidelity and on multi-objective optimization.
More specifically, the training of a data-driven model for the turbulent heat flux, introduced in section <ref>, was carried out with a hybrid dataset consisting of high-fidelity (DNS) and low fidelity (RANS) data. The peculiarities of such heterogeneous dataset are discussed in section <ref>. The learning performances are analyzed in section <ref>: from the model choices within the input space, to the verification of the learned closure and its sensitivity analysis.
The validation of the model is primarily aimed at estimating the robustness gain obtained with this strategy. Based on the results presented in section <ref>, conclusions and future development are presented in section <ref>.
§ DATA-DRIVEN AHFM
The data-driven Algebraic Heat Flux Model (AHFM) considered in this work is a physics-constrained neural network that predicts the turbulent heat flux at near unity and low Prandtl numbers. Details of its architecture are provided in <cit.> and briefly recalled in this section for completeness.
The network embeds rotational invariance properties by construction, as proposed by <cit.>, and its predictions are realizable, i.e. they satisfy the second law of thermodynamics. Figure <ref> provides a schematic of the neural network structure and its layers. The network predicts variable closure coefficients a_i and w_i based on a set of dimensionless invariants, denoted as π_i and defined on the left column of Table <ref>. Among the quantities listed in the table, k represents the turbulent kinetic energy, ϵ the turbulent dissipation rate, k_θ the variance of the thermal fluctuations, ϵ_θ the thermal turbulent dissipation rate. The quantities ν and α_l denote the molecular viscosity and diffusivity, respectively, while the tensors 𝐒, Ω and 𝐛 are defined as:
𝐒 =1/2(∂𝐔/∂𝐱+∂𝐔/∂𝐱^T),
Ω=1/2(∂𝐔/∂𝐱-∂𝐔/∂𝐱^T) ,
𝐛=𝐮𝐮/k-2/3𝐈,
in which 𝐔 is the mean velocity field, 𝐮 the velocity fluctuation, and the overbar (as in 𝐮𝐮) denotes Reynolds averaging.
The coefficients a_i and w_i are used to compute the following expansions of tensors:
𝐀 =∑_i=1^n a_i 𝐓^i, 𝐖 = ∑_i=1^n w_i 𝐓^i,
in which 𝐓^i are the element of the tensor basis indicated in Table <ref>. The dispersion tensor is then computed as a sum of a symmetric, positive-definite tensor and a skew-symmetric tensor:
𝐃=[(𝐀+𝐀^T) (𝐀^T+𝐀) + k/ϵ^0.5 (𝐖-𝐖^T)],
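To make the assembly of 𝐃 explicit, a schematic sketch is reported below (illustrative only: the function name, the reading of the time-scale factor as (k/ϵ)^0.5, and the final contraction of 𝐃 with the mean temperature gradient are our assumptions rather than a verbatim excerpt of the implementation).

```python
import numpy as np

def dispersion_tensor(a, w, T, k, eps):
    """a, w: arrays of n predicted coefficients; T: (n, 3, 3) array with the basis tensors T^i."""
    A = np.tensordot(a, T, axes=1)             # A = sum_i a_i T^i
    W = np.tensordot(w, T, axes=1)             # W = sum_i w_i T^i
    sym = (A + A.T) @ (A.T + A)                # symmetric, positive semi-definite contribution
    skew = np.sqrt(k / eps) * (W - W.T)        # skew-symmetric contribution scaled by the time scale
    return sym + skew

# The turbulent heat flux then follows from the contraction of D with the mean temperature
# gradient (assumed algebraic closure form), e.g. q = -dispersion_tensor(a, w, T, k, eps) @ grad_T
```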
In Ref. <cit.>, the network was trained with high-fidelity DNS data for Prandtl numbers ranging from 0.71 to 0.01. The loss function applied for the training promotes the smoothness of the predicted heat flux fields:
ℒ(𝐪)= 1/N(∑_i=1^N ∑_j=1^3 (q̂_i,j -q_i,j)^2 )
+ λ/N( ∑_i=1^N ∑_j,k=1^3 | ∂q̂_i,j/∂ x_k - ∂ q_i,j/∂ x_k| Δ x_k ),
where i∈[1,… N] is the index spanning across the N data points contained in each mini-batch, q_i,j=u_j θ is the prediction of the ANN and q̂_i,j is the corresponding flux provided by DNS data. Both values q_i,j and q̂_i,j are normalized with respect to the maximum DNS value achieved in each flow configuration.
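As an illustration, a simplified one-dimensional version of this loss is sketched below (our simplification for clarity: the profiles are assumed to be ordered along the wall-normal grid, and the gradient mismatch is approximated by differences of consecutive values, which absorbs the Δ x_k factor).

```python
import numpy as np

def ahfm_loss(q_pred, q_dns, lam=0.1):
    """q_pred, q_dns: (N, 3) normalized heat-flux profiles ordered along the wall-normal grid."""
    mse = np.mean(np.sum((q_dns - q_pred) ** 2, axis=1))
    # |d q_hat/dy - d q/dy| * dy approximated by the mismatch of consecutive differences
    smooth = np.mean(np.sum(np.abs(np.diff(q_dns, axis=0) - np.diff(q_pred, axis=0)), axis=1))
    return mse + lam * smooth
```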
Once trained, the data-driven AHFM was verified with some non-isothermal simulations run in OpenFoam, for which the model was compared with other theory-driven closures <cit.>. The verification highlighted the sensitivity of the data-driven AHFM to the momentum turbulence model applied in combination with it. As an example, Figure <ref>
presents the validation of the data-driven AHFM (denoted as ANN in the Figure) for turbulent channel flow, and the comparison with other theory-driven models such as the Manservisi model (MM) <cit.>, the Kays correlation <cit.> and the AHFM developed by Shams (AHFM) <cit.>. In the figures on the left, the AHFM is coupled with the Elliptic Blending Reynolds Stress Model (EBRSM) <cit.>, while in the figure on the right the AHFM is coupled with the Launder-Sharma k-ϵ. Note that when the Reynolds stresses are accurately modeled with second-order closures, the data-driven AHFM outperforms the traditional AHFM considered for comparison <cit.>. However, the accuracy deteriorates when the network receives Reynolds stresses modeled with the Linear Eddy Viscosity Model (LEVM), because of its dependency on the Reynolds stress anisotropy 𝐛, defined by eq. (<ref>).
Hence, the comparison shown in Figure <ref> highlights that the data-driven AHFM cannot handle deficiencies of the momentum modeling. This is a significant limitation of the machine learning closure, which restricts its applicability in CFD solvers.
§ MULTI-FIDELITY DATABASE AND PRELIMINARY ANALYSES
The database employed to train the data-driven AHFM consists of high-fidelity (DNS) data for non-isothermal turbulent channel flow <cit.> and non-isothermal backward-facing step <cit.> at various Reynolds and Prandtl numbers. An additional dataset of a non-isothermal planar impinging jet <cit.> was employed for testing. The details about the reference DNS data are provided in Table <ref>. In the present work, the training database is extended with RANS counterparts of the same flows simulated in the OpenFoam environment <cit.>. For the RANS simulations, the Launder-Sharma k-ϵ model was selected as the turbulence model. The computational domains were discretized to achieve y^+ ranging from 0.1 to 1.0. The RANS simulations were limited to the momentum field, i.e. the temperature field was not resolved. The
reader is referred to the Appendix <ref> for further details about the computational setup for the training data.
The collected DNS and RANS data were manipulated to compute the tensors 𝐓^i and the invariants π_i indicated in Table <ref>. For turbulent channel flow, the discrepancy between the π_i computed with the two datasets is shown in Figure <ref>, representing the invariants in their 10-dimensional space with red (DNS) and green (RANS) lines. Specifically, each line in the plot indicates the values of the input features at each point of the computational mesh. Only 500 out of 425400 points constituting the training database are depicted for visualization purposes.
As expected, the distribution of the invariants π_1 and π_2, π_7 and π_8 and Re_t is similar between RANS and DNS since these only involve isotropic turbulent quantities and the derivatives of the mean fields (𝐔 and T). On the other hand, significant differences appear on the invariants related to the anisotropic part of the Reynolds stress tensor 𝐛, i.e. π_3, π_4, π_5 and π_6. In fact, this behavior of the invariants reflects the differences in the tensor 𝐛 that can be appreciated in Figure <ref>, depicting the Reynolds stress anisotropy in the barycentric map <cit.> for the two datasets in case of turbulent channel flow.
This representation shows that the maximum distance between the turbulent states predicted by the two approaches occurs at y^+<30, at which the anisotropy is severely misrepresented. The gap decreases at higher y^+, for which the points move towards the state of pure isotropy.
§ METHODOLOGY
The proposed methodology for a model handling heterogeneous input data combines dimensionality reduction, multiobjective optimization, sensitivity analysis, and uncertainty quantification. We describe the various steps in the following. Concerning the dimensionality reduction, Principal Component Analysis (PCA) was used to identify the optimal combination of features describing the different levels of fidelity as described in Section <ref>. Section <ref> reports on the multiobjective optimization at the foundation of the model training. The performances of the model obtained with this training are analyzed and compared with the original data-driven AHFM by inspecting its output layer and estimating its sensitivities in terms of Shapley values, introduced in section <ref>. Finally, the data-driven model is implemented in a CFD solver (OpenFoam), and there tested for a non-isothermal planar impinging jet at Pr=0.01. The details of this validation test case and the simulation setup are provided in section <ref>.
§.§ Principal Component Analysis (PCA)
The PCA was applied to the feature matrix 𝐗∈ℝ^N × F collecting the values of all F features at the N grid points of both training datasets.
The goal was to identify P<F linear combinations of the F input features that explain most of the input variance and hence restrict the dimensionality of the model input's space along the associated principal directions, here denoted as 𝐯_i, with i=[1,..P]. These directions are eigenvectors of the covariance matrix 𝐒
𝐒 = 1/N-1 (𝐗-𝐱_μ)^T (𝐗-𝐱_μ),
where 𝐱_μ∈ℝ^F is the vector containing the mean of each column (i.e., of each feature), subtracted row-wise from 𝐗.
𝐗̃=𝐗𝐕∈ℝ^N× P.
The correlations between the original variables and the principal components can be evaluated from the load matrix 𝐋=Cov(𝐗, 𝐗̃) ∈ℝ^F × P. Introducing the Singular Value Decomposition of the feature matrix 𝐗=𝐔√(Λ)𝐕^T, the load matrix can be computed as
𝐋=Cov(𝐗, 𝐗̃) = 𝐗^T 𝐗̃/N-1 = 𝐕√(Λ)𝐔^T 𝐔√(Λ)/N-1 = 𝐕Λ/N-1.
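For reference, this sequence of operations (centering, covariance, eigen-decomposition, projection, and loadings) is summarized in the NumPy sketch below; the function name is ours and the feature matrix is assumed to stack the DNS and RANS points row-wise.

```python
import numpy as np

def pca_with_loadings(X, P=3):
    """X: (N, F) feature matrix; returns the projection, the loadings, and the explained variance."""
    N = X.shape[0]
    Xc = X - X.mean(axis=0)                    # subtract the column (feature) means
    S = (Xc.T @ Xc) / (N - 1)                  # covariance matrix
    lam, V = np.linalg.eigh(S)                 # eigenvalues/eigenvectors (ascending order)
    order = np.argsort(lam)[::-1][:P]
    lam, V = lam[order], V[:, order]
    X_red = Xc @ V                             # projection onto the leading P directions
    loadings = V * lam                         # L = Cov(X, X_red) = V diag(lam)
    explained = lam / np.trace(S)              # variance explained by each retained component
    return X_red, loadings, explained
```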
§.§ Multi-fidelity training
We denote as 𝐗_lf the feature matrix collecting data from low-fidelity simulations and as 𝐗_hf the one collecting data from high-fidelity simulations. To ensure reasonable predictions with both sets of data, the artificial neural network must be trained on the inconsistencies of the input data between the two fidelity levels. Denoting with 𝐲_lf and 𝐲_hf the predictions of the model with the inputs 𝐗_lf and 𝐗_hf, and with 𝐲̂_hf the reference values provided by the high-fidelity database, the cost function driving the training in case of multi-fidelity inputs is taken as:
ℒ=ℒ(𝐲_hf,𝐲̂_hf)+αℒ(𝐲_lf,𝐲̂_hf),
in which ℒ(·) is defined by (<ref>). This choice of the loss translates the regression problem previously introduced in section <ref> into a multi-objective optimization problem in which the errors computed with both families of inputs need to be minimized. The scalar α in (<ref>) is a training hyperparameter that is varied between 0.001 and 1000 to build the Pareto front of the multi-objective problem.
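A minimal, framework-agnostic sketch of this composite objective is given below (the per-sample loss is reduced here to the mean-squared error of Eq. (<ref>) without the smoothness term, and the interface names are assumptions): the same network is evaluated on the high- and low-fidelity feature sets of the same points, and both predictions are compared with the single high-fidelity target.

```python
import numpy as np

def multi_fidelity_loss(model, X_hf, X_lf, q_hat_hf, alpha=1.0):
    """model: callable mapping a feature matrix to (N, 3) heat-flux predictions;
    X_hf, X_lf: high- and low-fidelity inputs for the same N points; q_hat_hf: DNS targets."""
    mse = lambda q, q_ref: np.mean(np.sum((q - q_ref) ** 2, axis=1))
    loss_hf = mse(model(X_hf), q_hat_hf)       # L(y_hf, y_hat_hf)
    loss_lf = mse(model(X_lf), q_hat_hf)       # L(y_lf, y_hat_hf)
    return loss_hf + alpha * loss_lf

# Sweeping alpha over, e.g., [1e-3, ..., 1e3] and retraining for each value traces the
# Pareto front between the two objectives.
```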
§.§ Shapley values
We analyze the model sensitivity to its features using Shapley values <cit.>. Shapley values, originating from cooperative game theory, offer a way to distribute the total gain (or cost) among the players (or features) based on their marginal contributions. For a model f with input features 𝐗=(𝐱_1,𝐱_2,…,𝐱_F), the Shapley value ϕ_i for the feature 𝐱_i is defined as:
ϕ_i = ∑_S ⊆ K ∖{i}|S|! (|K| - |S| - 1)!/|K|![ f(S ∪{i}) - f(S) ],
in which K is the set including all F features, S is a subset of features not containing 𝐱_i, |S| is the cardinality of the subset S, and f(S) denotes the model prediction using the subset of features S. The term f(S ∪{i}) - f(S) represents the marginal contribution of feature 𝐱_i when added to the subset S. The Shapley value calculation involves averaging these marginal contributions over all possible subsets S, weighted by the combinatorial factors |S|! (|K| - |S| - 1)!/|K|!. Note that, compared to other approaches for sensitivity analysis (e.g. the Integrated Gradient method <cit.> applied in <cit.>), this method provides more complete information about the sensitivity of the model, since the discrepancies in eq. (<ref>) are evaluated over the whole input space instead of targeting specific trajectories.
Specifically, the predictions f(S) for each subset are computed by averaging the model evaluations obtained by perturbing the features excluded from the subset S around their predefined baseline values. In the present case, the baseline values of the ablated features are taken as their averages over the entire range of flows considered in the database indicated in Table <ref>.
Note that based on eq. (<ref>) the computational cost of the Shapley method is given by the product of the number of features with their number of permutations, i.e., F · F!, which would lead to 18· 18! model evaluations based on the list of features reported in Table <ref>. To reduce the computational cost, the features were assigned to four main groups that will also ease the interpretation of the results. These groups, reported in Table <ref>, are:
* Group 1: Momentum isotropic features (MI) depending on the velocity gradient and isotropic momentum statistics (e.g. k, ϵ);
* Group 2: Momentum anisotropic features (MA) depending on the anisotropic part of the Reynolds stress tensor 𝐛;
* Group 3: Thermal-based features (TH) depending on the molecular Prandtl number and thermal related statistics;
* Group 4: The basis tensors (TE) indicated in Table <ref>.
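A schematic implementation of this grouped evaluation is sketched below (illustrative only: the model interface, the group-index dictionary, and the baseline treatment are our assumptions). Each group excluded from a subset is ablated by resetting its features to the baseline values, and the weighted marginal contributions are accumulated over all subsets of the remaining groups.

```python
import numpy as np
from itertools import combinations
from math import factorial

def grouped_shapley(f, x, baseline, groups):
    """Shapley value of each feature group for one input point x.
    f: callable returning a scalar output (e.g. one heat-flux component) for a feature vector;
    baseline: feature values used for the ablated groups; groups: {name: list of column indices}."""
    names = list(groups)
    K = len(names)

    def eval_subset(subset):
        z = baseline.copy()
        for g in subset:                       # groups in the subset keep their true values
            z[groups[g]] = x[groups[g]]
        return f(z)

    phi = {}
    for gi in names:
        others = [g for g in names if g != gi]
        val = 0.0
        for r in range(K):
            for S in combinations(others, r):
                wgt = factorial(len(S)) * factorial(K - len(S) - 1) / factorial(K)
                val += wgt * (eval_subset(set(S) | {gi}) - eval_subset(set(S)))
        phi[gi] = val
    return phi

# toy check with a linear "model" and two groups: each group recovers its own contribution
demo = grouped_shapley(lambda z: float(z.sum()), np.ones(6), np.zeros(6),
                       {"A": [0, 1, 2], "B": [3, 4, 5]})
print(demo)   # {'A': 3.0, 'B': 3.0}
```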
§.§ Uncertainty propagation
The uncertainty analysis of the trained data-driven AHFMs was carried out to evaluate their robustness with respect to the momentum treatment. The method is based on perturbing the turbulent state in the barycentric map introduced in Figure <ref>, specifically between the true state (DNS) and the one predicted by Linear Eddy Viscosity Models (LEVMs). The coordinates of the points in the map are computed from the eigenvalues ψ_i (i=1,2,3) of the anisotropic part of the Reynolds stress tensor 𝐛. The vertices of the triangle (𝐳_1C,𝐳_2C,𝐳_3C) represent the limiting states of turbulence:
* one component (𝐳_1C=[1,0]) for which ψ_i = 2/3, -1/3, -1/3.
* two components (𝐳_2C=[0,0]) for which ψ_i = 1/6, 1/6, -1/3.
* isotropic (𝐳_3C=[1/2,√(3)/2]), for which ψ_i are all zero.
Each point 𝐳∈ℝ^2 in this plane is associated with a specific set of eigenvalues ψ∈ℝ^3 by a linear mapping <cit.>:
𝐳 = 𝐳_1C(ψ_1-ψ_2) + 𝐳_2C(2ψ_2-2ψ_3) + 𝐳_3C (3ψ_3 + 1 ),
complemented with the trace-free condition:
ψ_1 + ψ_2 + ψ_3 = 0,
Writing the mapping ψ→𝐳 as 𝐳=𝐁ψ, its inverse becomes ψ=𝐁^-1𝐳.
This mapping was used here to propagate uncertainties in the barycentric map to uncertainties in the associated anisotropic stress and, in turn, in the turbulent heat flux predicted by the model. To this end, the high-fidelity (DNS) state 𝐳_hf in the barycentric map is perturbed towards the state of turbulence given by the low-fidelity (RANS) momentum treatment 𝐳_lf, leading to a modified location 𝐳^*:
𝐳^* = 𝐳_hf + Δ (𝐳_lf-𝐳_hf),
in which Δ is a uniform random variable with range [0,1]. An example of a perturbed state for turbulent channel flow (Re_τ=640) is given in Figure <ref>.
Based on the perturbed location 𝐳^* in the barycentric map, the new vector of eigenvalues ψ^* is computed as
ψ^* = 𝐁^-1𝐳^* .
The associated perturbed anisotropy tensor is then computed as:
𝐛^* = 𝐄diag(ψ^*) 𝐄^-1,
in which 𝐄 is the matrix of eigenvectors of the unperturbed tensor 𝐛. The tensors 𝐓^i and invariants π_i are computed based on the perturbed 𝐛^* and propagated through the model.
For each point of the original DNS dataset, 50 perturbed states are sampled and submitted to the data-driven models trained in single fidelity mode <cit.> and in the multi-fidelity mode described in section <ref>. The predictions from both models are averaged, and confidence intervals are constructed to assess how uncertainties in the Reynolds stress components propagate through the data-driven closure.
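The sampling step can be summarized by the sketch below (function names and the toy check are ours; b_rans denotes the anisotropy implied by the linear eddy-viscosity closure): the DNS eigenvalues are mapped to the barycentric plane, displaced toward the RANS state by a uniform random fraction Δ, and mapped back to a perturbed tensor that retains the DNS eigenvectors.

```python
import numpy as np

# vertices of the barycentric triangle: one-, two- and three-component limits
Z1C, Z2C, Z3C = np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])

def to_barycentric(psi):
    """psi: eigenvalues of b in decreasing order (trace-free)."""
    return Z1C * (psi[0] - psi[1]) + Z2C * 2 * (psi[1] - psi[2]) + Z3C * (3 * psi[2] + 1)

def from_barycentric(z):
    """Invert z = B psi using the normalization of the barycentric weights and trace(b) = 0."""
    A = np.vstack([np.column_stack([Z1C, Z2C, Z3C]), np.ones(3)])
    C = np.linalg.solve(A, np.append(z, 1.0))          # weights (psi1-psi2, 2(psi2-psi3), 3psi3+1)
    psi3 = (C[2] - 1.0) / 3.0
    psi2 = C[1] / 2.0 + psi3
    return np.array([C[0] + psi2, psi2, psi3])

def perturb_anisotropy(b_dns, b_rans, rng=np.random.default_rng()):
    lam, E = np.linalg.eigh(b_dns)
    lam, E = lam[::-1], E[:, ::-1]                     # decreasing order, matching eigenvectors
    lam_rans = np.sort(np.linalg.eigvalsh(b_rans))[::-1]
    z_star = to_barycentric(lam) + rng.uniform() * (to_barycentric(lam_rans) - to_barycentric(lam))
    return E @ np.diag(from_barycentric(z_star)) @ E.T  # b* sharing the DNS eigenvectors

# toy check: perturbing the one-component state toward isotropy (b_rans = 0)
print(perturb_anisotropy(np.diag([2 / 3, -1 / 3, -1 / 3]), np.zeros((3, 3))))
```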
§.§ Verification test case
To test the performance of the data-driven AHFMs generated with single fidelity <cit.> and multi-fidelity training modes, the networks were implemented in a CFD solver (OpenFoam) following the procedure proposed by Maulik et al. <cit.>. The test case proposed is a non-isothermal planar impinging jet simulated by Duponcheel et al. <cit.> at Pr=0.01 for the geometry depicted in Figure <ref>.
This test case is ideal for testing the data-driven closures, as this flow configuration is excluded from the training database. Additionally, for this flow configuration, the momentum field obtained with low-order turbulence models (e.g., k-epsilon, k-omega) is affected by large uncertainties due to the extent of anisotropy, secondary recirculation, and transport effects.
The computational domain for this test case consists of two infinite parallel plates with a slit in the middle injecting the flow conditions reached in a fully developed turbulent channel flow at Re_τ=181 (see Figure <ref>). The heat transfer is triggered by the temperature gap imposed on the walls. The structured computational grid leads to y^+ values ranging from 0.32 to 1.6 in the first cell near the wall, allowing a wall-resolved treatment for both momentum and thermal fields. The boundary conditions imposed are summarized in Table <ref>.
The simulations employed the Launder-Sharma k-ϵ and k-ω SST as turbulent momentum closures. The energy equation was closed with the analytical model developed by Manservisi <cit.>, the original data-driven model introduced in section <ref>, and the model generated with the multi-fidelity training mode.
§ RESULTS
This section presents the results of the analysis of the data-driven AHFM discussed in section <ref>, comparing the model trained with multi-fidelity inputs to the one trained exclusively with DNS data. For brevity, we refer to the former as the Hybrid ANN and the latter as the High-Fidelity ANN.
§.§ Results of the PCA
The PCA was carried out on a feature matrix combining both the high-fidelity and the low-fidelity data.
The variance explained by each principal component is illustrated in Figure <ref>, which also reports the cumulative sum of the variance explained. These results show that the first three principal components are able to explain 77.5% of the total variance of the input data.
The loading matrix (cf. (<ref>)) was computed to analyze the principal components' contribution to the original features' variance. The loadings of the first three principal components are shown in Figure <ref>. The loading values show that the first component (PC1) is mainly composed of momentum features depending on the mean velocity gradients and isotropic turbulence statistics (e.g., k, ϵ), which are similar in both RANS and DNS datasets. The second principal component (PC2) is primarily constituted by thermal-based features depending on the Prandtl number of the fluid. The third principal component (PC3) depends on momentum-based features that describe the turbulence anisotropy through the deviatoric part of the Reynolds stress tensor. These latter features are misrepresented in the RANS input dataset due to the Boussinesq approximation. Interestingly, the PCA algorithm splits the input features into three clusters: isotropic momentum-based features (PC1), thermal-based features (PC2), and anisotropic momentum-based features (PC3).
The two datasets (RANS and DNS) are projected onto the principal component axes and compared in Figures <ref> and <ref>. In particular, Figure <ref> compares the two datasets in terms of PC1 and PC2. The DNS inputs achieve higher peaks of PC1 for the same PC2 values due to the higher turbulent kinetic energy retrieved in DNS simulations compared to the RANS counterpart. However, the distributions of the data on this plane are similar. A clear separation between the two datasets is instead observed when they are projected on the PC1-PC3 plane, as done in Figure <ref>. This data separation reflects the effect of the Reynolds stress modeling and suggests that the current parameter space allows the detection of the type of momentum modeling based on the distance between critical features (mainly π_5 and π_3).
This cluster separation is crucial to the present work, as it demonstrates that specific features constituting PC3 can be utilized by a data-driven algorithm to detect the fidelity level of the momentum turbulence model and to adapt predictions accordingly.
§.§ Performance of the training with multi-fidelity inputs
The neural network architecture introduced in section <ref> was trained with the multi-fidelity approach introduced in section <ref>. Specifically, for each value of α, 30 independent training runs were carried out. The Pareto front of the multi-objective problem is reported in Figure <ref>, which is built from the final values achieved by the two losses at the end of each training. It is worth stressing that the resulting Pareto front is sharp, meaning that the training finds solutions that do not compromise the accuracy of the heat flux predictions with both types of inputs. This is an interesting result because it proves that increasing the robustness of the data-driven AHFM to the momentum modeling is possible.
Regarding accuracy, a comparison between the hybrid ANN and the high-fidelity ANN is given in Figure <ref>, which relates the network outputs obtained with DNS and with RANS inputs. Note that the hybrid ANN significantly reduces the dispersion of the heat flux predictions with the change of the momentum treatment.
§.§ Results of the sensitivity analysis
The Shapley values defined in (<ref>) were computed considering the streamwise and wall-normal components of the heat flux as the outputs of the model.
Figures <ref> and <ref> present the obtained Shapley values for Pr=0.71 and Pr=0.025, respectively. The values obtained with DNS inputs are indicated with solid lines, and those obtained with RANS inputs with dashed lines. At near-unity Prandtl numbers, the data-driven model is shown to be highly sensitive to the thermal features. This sensitivity decreases at Pr=0.025, meaning that the dependence of the model on the thermal gradients and features increases with the Prandtl number. Clearly, the ANN trained with the multi-fidelity approach is insensitive to the anisotropy-based invariants in the presence of RANS inputs, though it preserves a small sensitivity to this group when the model is fed with high-fidelity momentum data.
The comparison of the Shapley values in the presence of DNS and RANS data (Figure <ref> and <ref>) shows that the sensitivity of the model to all the groups of features decreases when low fidelity data are detected. Hence, the hybrid ANN relies more on high-fidelity data than low-fidelity ones, i.e., the robustness in case of inaccurate Reynolds stress modeling is higher. The profiles are in fact smoother, characterized by lower peaks and fewer changes of sign. This behavior means that the model derived with multi-fidelity training simplifies when low-fidelity input data are detected. The network adapts to the uncertain inputs by reducing its sensitivity and non-linearities.
Figure <ref> depicts the distribution of the Shapley values of the original, high-fidelity ANN for the same flow configuration at Pr=0.71. The comparison between Figure <ref> and <ref> highlights that the hybrid training mode reduces the sensitivity of the ANN to the features belonging to the last two groups indicated in Table <ref>. In particular, the sensitivity drop is evident for y^+ ≈ 10 where the momentum turbulent production reaches its peak. The barycentric triangle in Figure <ref> shows that, at this distance from the wall, the gap between RANS and DNS turbulent states is maximum. Hence, the hybrid training mode moderates the sensitivity to the anisotropic part of the Reynolds stresses where the separation between the two categories of inputs is significant. This confirms that this training strategy effectively leads to a more robust thermal model with respect to the one trained with high-fidelity data only, which would better interface with standard momentum closures based on the eddy viscosity concept.
§.§ Layer output analysis
Additional insights into the differences between the models obtained from single-fidelity and multi-fidelity training can be gained by analyzing the output layer of the network schematized in Figure <ref>. Figure <ref> depicts the coefficients a_i and w_i at the output of the merge layer (see Figure <ref>) of the ANN trained with the multi-fidelity approach, in case of non-isothermal turbulent channel flow at Re_τ=640 and Pr=0.025 and with high-fidelity and low-fidelity momentum input data.[Note that the sign of the coefficients a_i does not alter the final output, since 𝐀=∑ a_i 𝐓^i is the Cholesky factorization of the symmetric part of 𝐃, as indicated by eq.(<ref>).]. For both kinds of input data, the dominant coefficients are a_1, a_2, a_6, w_4, and w_6. For y^+>100, the values of the predicted coefficients are similar for both DNS and RANS inputs. At y^+≃ 10, where the turbulence anisotropy reaches its peak, the predicted coefficients are much higher in the presence of DNS input data than of RANS inputs. This explains the lower sensitivity to the RANS inputs detected by the Shapley value analysis in section <ref>. The reduction is significant, especially for a_2, the coefficient multiplying the tensor 𝐓_2 that is proportional to the anisotropic part of the Reynolds stress tensor 𝐛, as reported in Table <ref>.
The output of the last layer of the high-fidelity network is significantly different, as shown in Figure <ref>. The coefficients a_1 and a_2 dominate above all the other terms and achieve a single peak in the region of maximum momentum turbulence production (y^+ ≈ 10). Hence, compared to the high-fidelity network, the new model increases the dependence on 𝐓_4, 𝐓_6, and 𝐓_8 which depend on the strain and rotation tensors. This implies that training the network with data of multiple fidelity replaces the dependence on the true Reynolds stress tensor with linear and quadratic functions of the velocity gradient. In other words, the network tries to reconstruct the true Reynolds stress anisotropy as a function of the mean velocity gradient and its uncertain estimate given by the combined momentum turbulence model. This explains the low dispersion of the predictions shown in Figure <ref> with both kinds of momentum treatments.
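Since a_i and w_i are simply the activations of the merge layer, they can be inspected by truncating the trained network at that layer. The sketch below assumes a Keras implementation with a layer literally named "merge"; the layer name and file names are placeholders, not the exact implementation used here.

```python
# Inspect the output-layer coefficients (a_i, w_i) of the trained network by
# building a truncated model that stops at the merge layer.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("ahfm_ann_hybrid.h5")   # hypothetical model file
X = np.load("features_channel_Re640_Pr0025.npy")           # hypothetical inputs (DNS or RANS features)

coeff_model = tf.keras.Model(inputs=model.input,
                             outputs=model.get_layer("merge").output)
coeffs = coeff_model.predict(X, verbose=0)   # shape (N, n_coefficients), e.g. a_1..a_n, w_1..w_m

# Wall-normal profiles of the dominant coefficients can then be plotted against
# y+ to reproduce the comparison between DNS- and RANS-fed inputs.
for i in range(coeffs.shape[1]):
    print(f"coefficient {i}: mean={coeffs[:, i].mean():+.3e}, "
          f"max|.|={np.abs(coeffs[:, i]).max():.3e}")
```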
§.§ Propagated uncertainties
The uncertainties of the Reynolds stress tensor caused by the inaccuracies of the combined momentum treatment were propagated to the output of the data-driven AHFM with the method explained in section <ref>. The statistics obtained for Re_τ=640 and Pr=0.71 and 0.025 are shown in figures <ref> and <ref>, respectively. The confidence intervals, highlighted with shaded areas, quantify the uncertainty of the predictions due to the inconsistency between reference and modeled input data. The comparison between the hybrid ANN and the high-fidelity ANN reveals a significant reduction of the model uncertainty when the training is enriched with RANS data. This is true at both near unity and low Prandtl numbers, albeit more evident for Pr=0.71, at which a stronger dependence of the heat flux field on the Reynolds stresses is expected.
The standard deviation computed for the two models at both values of the Prandtl numbers is shown in Figure <ref>. The standard deviation of the high-fidelity model achieves its peak in the near wall region, where the dependence of the model on the anisotropy state is maximum. In this region, the standard deviation of the hybrid model predictions is much lower since the training with both data mitigates the dependency on the Reynolds stress anisotropy. This evidence agrees with the considerations drawn for the Shapley values for y^+ close to 10.0. This region of peak turbulence production, where temperature gradients are most significant, and the influence of the temperature field on heat flux is at its maximum, is critical. Therefore, reducing the uncertainty in model predictions in this area is of utmost importance. These considerations lead to the conclusion that the hybrid ANN is significantly more robust than the high-fidelity ANN when it comes to handling inaccuracies in the combined momentum turbulence model.
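A minimal Monte-Carlo version of this propagation is sketched below: the Reynolds-stress-related inputs are perturbed repeatedly and the sample statistics of the predictions provide the confidence bands and standard deviations. The ±10% multiplicative perturbation and the column indices are assumptions for illustration; the analysis above perturbs the anisotropy in the barycentric plane.

```python
# Minimal Monte-Carlo propagation of Reynolds-stress uncertainty through a
# trained heat-flux model. The perturbation model, column indices, and file
# names are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
model = tf.keras.models.load_model("ahfm_ann_hybrid.h5")  # hypothetical model file
X = np.load("features_channel_Re640_Pr071.npy")           # hypothetical inputs, shape (N, n_features)
stress_cols = [6, 7, 8]                                    # assumed anisotropy-related columns

n_samples = 500
preds = np.empty((n_samples, X.shape[0]))
for k in range(n_samples):
    Xp = X.copy()
    Xp[:, stress_cols] *= 1.0 + 0.1 * rng.standard_normal(len(stress_cols))
    preds[k] = model.predict(Xp, verbose=0)[:, 0]          # one heat-flux component

mean, std = preds.mean(axis=0), preds.std(axis=0)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)         # 95% confidence band
print("max standard deviation along the profile:", std.max())
print("mean 95% band width:", (hi - lo).mean())
```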
§.§ Results of the simulation test case
The numerical setup described in section <ref> was employed to verify the behavior of the hybrid model when combined with LEVMs, and to compare the new formulation with the original data-driven AHFM. As anticipated in section <ref>, for this test case, the computed velocity field highly depends on the type of momentum treatment and the choice of the specific LEVM. This is confirmed by Figure <ref>, which presents the velocity fields computed with the k-ϵ and k-ω models. The size and shape of the recirculation bubble generated after the impinging point differ significantly between the two closure models. The difference can also be appreciated from the velocity and k profiles reported in Figures <ref> and <ref>, where they are compared with the reference DNS. In particular, the k-ϵ and k-ω models underestimate and overestimate the recirculation bubble's extension, respectively.
The data-driven thermal model and the Manservisi model <cit.> were applied over the two underlying momentum fields to compute the heat flux and temperature distributions. The high-fidelity ANN, whose predictions are indicated with dashed blue lines, completely mismatches the reference heat flux profiles due to the inconsistency between training (DNS) and simulation (RANS) momentum data. The hybrid ANN gives instead accurate heat flux predictions, regardless of the combined momentum closure (k-ϵ or k-ω). These results show that the applicability of the hybrid ANN is not restricted to the momentum closure that generated the training data (Launder-Sharma k-ϵ), but could potentially extend to the entire family of LEVMs. The heat flux fields given by the hybrid ANN are close to the one computed with the Manservisi model, although a slight improvement in the accuracy can be appreciated close to the slit.
The comparison among the thermal models in terms of temperature distribution is presented in Figure <ref>, in which the hybrid ANN combined with the k-ω SST shows the best accuracy, while the hybrid ANN combined with the k-ϵ model and the Manservisi model significantly overestimate the temperature, especially far from the slit. However, the better thermal field computed with the k-ω model and the hybrid ANN does not originate from a significantly more accurate heat flux modeling. Indeed, the comparison of the profiles in Figure <ref> does not indicate the superiority of the heat flux predictions compared to the other two setups (k-ϵ-hybrid ANN and k-ϵ-Manservisi model). The better temperature agreement obtained with the high-fidelity ANN than with the hybrid ANN with the same momentum closure suggests that this temperature field is a result of compensation errors of both momentum and thermal closures. This consideration further underlines the importance of combining accurate thermal and momentum closures for heat transfer RANS simulations, as both the turbulent treatments contribute substantially to the resulting thermal field. This is especially true for low Prandtl numbers, at which the turbulent heat transport can be comparable to, or even lower than, molecular transport and convection contributions in the overall energy balance.
§ CONCLUSIONS AND OUTLOOK
This article investigated the problem of model-data inconsistency for a data-driven thermal turbulence model trained with high-fidelity data. Specifically, we explored ways to improve the robustness of the data-driven model to extend its coupling with low-fidelity models for Reynolds stresses (LEVMs), generally preferred in industrial contexts for their stability and computational cost.
The crucial point was understanding whether the modeled (RANS) input statistics could be employed in training to inform the data-driven model about the inconsistencies between low and high-fidelity data. Detecting the quality of the momentum treatment could allow to adapt to different inputs or mitigate the sensitivities to the most critical ones. For such purpose, the work proposes the analysis of the multi-fidelity input space and a training strategy based on a hybrid dataset consisting of DNS and RANS input data.
The work demonstrates that increasing the robustness of the thermal model with respect to momentum modeling is possible. The systematic deviation between high-fidelity and low-fidelity statistics can be utilized to identify the type of momentum modeling and adapt the output accordingly or to find alternative relationships of the input statistics. The multi-fidelity training mode generates a model less sensitive to Reynolds stress anisotropy than the model trained with only high-fidelity data, especially in regions characterized by high anisotropy. The network naturally adapts its sensitivity to low-fidelity inputs (RANS) and is thus more robust than the original. From a structural standpoint, the hybrid model appears to incorporate Reynolds stress anisotropy modeling through algebraic expansions of tensors dependent on velocity gradients.
The validation of the new data-driven model for a non-isothermal planar impinging jet shows that it is significantly more robust than the original high-fidelity network and provides accurate predictions with different eddy viscosity models. The uncertainty propagation analysis demonstrates, more generally, the greater robustness of the new model against perturbations of the Reynolds stresses in the barycentric plane.
While more test cases are needed to develop a truly general-purpose model, the promising results of this work open the door to new training strategies for data-driven turbulence closures that can leverage information from databases of varying sizes and fidelity levels. As demonstrated here, multi-fidelity inputs enhance model robustness and resilience to perturbations. Additionally, incorporating multi-fidelity targets can expand the training database to include conditions where high-fidelity data are challenging to obtain, such as high Reynolds or Grashof numbers or flows involving multiple physics or complex geometries. This approach can improve the generality and applicability of the learned closures. Future work will focus on extending the current framework to a broader range of data, both in inputs and outputs, to further advance data-driven turbulence modeling and its application in industrial contexts.
§ NOMENCLATURE
𝐔 [m/s] Mean velocity
T [K] Mean Temperature
𝐮 [m/s] Velocity fluctuation
θ [K] Thermal fluctuation
k [m^2/s^2] Turbulent kinetic energy
ϵ [m^2/s^3] Turbulent dissipation rate
k_θ [K^2] Thermal variance
ϵ_θ [K^2/s] Thermal dissipation rate
𝐮𝐮 [m^2/s^2] Reynolds stress
𝐮θ [mK/s] Turbulent heat flux
ν [m^2/s] Molecular viscosity
α_l [m^2/s] Molecular diffusivity
δ [m] Half channel width
𝐈 [-] Identity tensor
𝐒 [1/s] Strain rate tensor
Ω [1/s] Rotation tensor
𝐛 [-] Reynolds stress anisotropy tensor
This work was supported by an F.R.S.-FNRS FRIA grant, and the authors gratefully acknowledge Prof. Iztok Tiselj and Dr. Mathieu Duponcheel for providing their datasets.
§ RANS SETUP FOR THE TRAINING FLOWS
The simulations of the RANS counterpart of the flows in the training database were conducted in the OpenFOAM environment with the Launder-Sharma k-ϵ model. The choice of the computational domains follows the setups described in Refs. <cit.> and <cit.>. For both setups, the mesh is structured and consistent with the wall-resolved approach, i.e., the values of y^+ range between 0.1 and 1.0. Tables <ref> and <ref> indicate the boundary conditions applied for the two flows.
For the channel, the fully developed flow is obtained by imposing cyclic boundary conditions at the inlet and outlet and a pressure gradient source term that adapts to the bulk velocity prescribed. For the backward-facing step flow, a fully developed flow in the channel preceding the step is obtained using recycling conditions, i.e., the iterative remapping of the fields at a certain distance from the inlet to the inlet boundary.
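The adaptive pressure-gradient source term acts as an integral-type correction on the bulk velocity; a conceptual Python sketch is given below (OpenFOAM provides this functionality through its mean-velocity forcing options). The toy momentum balance and the relaxation factor are assumptions for illustration only.

```python
# Schematic of the adaptive pressure-gradient source term that holds the bulk
# velocity of the periodic channel at its prescribed value. Conceptual sketch
# only; the toy momentum balance and relaxation factor are assumptions.
def update_source(dpdx, u_bulk, u_target, relax=0.2):
    """Integral-type correction: raise the forcing when the bulk flow is too slow."""
    return dpdx + relax * (u_target - u_bulk)

u_bulk, dpdx, dt = 0.0, 0.0, 0.1
for _ in range(200):                      # stand-in for the outer solver iterations
    u_bulk += dt * (dpdx - u_bulk)        # toy momentum balance with unit friction
    dpdx = update_source(dpdx, u_bulk, u_target=1.0)
print(f"bulk velocity ≈ {u_bulk:.3f}, forcing ≈ {dpdx:.3f}")
```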
Envisioning an Optimal Network of Space-Based Lasers for Orbital Debris Remediation

David O. Williams Rogers and Matthew C. Fox
West Virginia University, Morgantown, WV, 26506

Paul R. Stysley
NASA Goddard Space Flight Center, Greenbelt, MD, 20771

Hang Woon Lee[Assistant Professor, Department of Mechanical, Materials and Aerospace Engineering; hangwoon.lee@mail.wvu.edu. Member AIAA (Corresponding Author).]
West Virginia University, Morgantown, WV, 26506
§ ABSTRACT
The rapid increase in resident space objects, including satellites and orbital debris, threatens the safety and sustainability of space missions. This paper explores orbital debris remediation using laser ablation with a network of collaborative space-based lasers. A novel delta-v vector analysis framework quantifies the effects of multiple simultaneous laser-to-debris (L2D) engagements by leveraging a vector composition of imparted delta-v vectors. The paper introduces the Concurrent Location-Scheduling Problem (CLSP), which optimizes the placement of laser platforms and schedules L2D engagements to maximize debris remediation capacity. Due to the computational complexity of CLSP, it is decomposed into two sequential subproblems: (1) optimal laser platform locations are determined using the Maximal Covering Location Problem, and (2) a novel integer linear programming-based approach schedules L2D engagements within the network configuration to maximize remediation capacity. Computational experiments are conducted to evaluate the proposed framework’s effectiveness under various mission scenarios, demonstrating key network functions such as collaborative nudging, deorbiting, and just-in-time collision avoidance. A cost-benefit analysis further explores how varying the number and distribution of laser platforms affects debris remediation capacity, providing insights into optimizing the performance of space-based laser networks.
§ NOMENCLATURE
@l @ = l@
L2D laser-to-debris
DVA Δ v vector analysis
CLSP concurrent location-scheduling optimization problem
MCLP maximal covering location problem
L2D-ESP laser-to-debris engagement scheduling problem
𝒯 mission time horizon (index t)
𝒟 set of debris (index d)
𝒥 set of debris orbital slots (index j)
𝒫 set of space-based laser platforms (index p)
𝒮 set of laser platform orbital slots (index s)
𝒦 set of valuable assets in space
R debris remediation capacity reward
π network topology reward
c_m momentum coupling factor
φ on-debris delivered laser fluence
ρ debris density
m debris mass
r position vector
v velocity vector
W laser-to-debris engagement boolean parameter
z laser platform location decision variable
y laser-to-debris engagement decision variable
x debris relocation decision variable
§ INTRODUCTION
The number of resident space objects is rapidly increasing, largely due to the emergence of small satellites, owing to standardized manufacturing processes, advancements in technology, and lower costs for sharing space on launch vehicles. This increase in the number of satellites is correlated with the significant growth in orbital debris, posing a serious risk to both manned and unmanned missions as well as active satellites that form critical infrastructure, yielding an escalation in conjunction alerts (i.e., satellites approaching within 3km <cit.>) and an increased probability of in-space collisions, which can create debris clouds of varying sizes.
Large space debris, objects with characteristic lengths greater than 10cm, is identified as the primary source of new debris <cit.> and consists mostly of defunct satellites, payloads, and rocket stages. Its presence in orbit increases the cost of operations, requiring active satellites to perform collision avoidance maneuvers, thereby demanding additional fuel consumption, the loss of mission objectives due to maneuvering, and increased labor costs for planning these maneuvers. Additionally, large debris restricts access to space for new satellites or payloads, affecting activities that heavily rely on space infrastructure, such as telecommunications, financial exchanges, and climate monitoring <cit.>.
Conversely, small debris comprises objects with characteristic lengths between 1 and 10cm. The challenges of tracking small debris pieces, coupled with their significant population close to 500000 objects <cit.>, present a continuous threat of collisions in space. Even with their relatively small mass, these fragments have the potential to inflict hypervelocity impacts, penetrating spacecraft shielding and risking the success of the mission. The origins of small debris include loose parts from operational or defunct satellites, payloads, or rocket stages, such as sodium-potassium droplets, solid rocket motor slag and dust, multi-layer insulation, ejecta, and paint flakes <cit.>. However, a significant part of the population has been generated by anti-satellite tests <cit.>, accidental explosions of satellites and rocket bodies <cit.>, and debris-to-debris collisions <cit.>.
Considering the substantial size of the debris population and the looming threat of triggering the Kessler syndrome <cit.>, strategies limited to the prevention of new debris formation, such as implementing regular conjunction assessments for active spacecraft <cit.> and enforcing restrictions on mission operations and end-of-life procedures <cit.> are insufficient for addressing the debris problem comprehensively. In response to this, a range of promising and innovative debris remediation technologies has been proposed to reduce the debris population.
The literature on debris remediation introduces contact-based methods as strategies for eliminating orbital debris. Chaser spacecraft equipped with tethered nets are considered an effective debris remediation mechanism <cit.> for addressing irregularly shaped debris.
Space balls and space winches are mechanisms designed to tackle the problem of debris with high angular momentum. These methods are capable of exerting a retarding torque on debris, aiding in its stabilization. However, they require an accurate activation at the time of the engagement <cit.>.
For relatively stable (i.e., detumbled) objects, debris remediation mechanisms such as grappling can be implemented for targeted and controlled removal. For instance, spacecraft equipped with robotic arms can accomplish this task <cit.>, by leveraging a rendezvous maneuver to capture the target and subsequently relocating it into a disposal orbit or into an orbit where atmospheric effects induce reentry. However, this technique is highly dependent on the physical properties of the debris, such as its shape and surface texture. It also necessitates precise attitude adjustment to counteract the forces exerted by the robotic arm's movements <cit.>. Hooks and harpoons are penetrating mechanisms used for debris remediation; however, they require proximity operations, endangering the success of the mission, and during penetration, they can trigger an explosion <cit.> generating more debris pieces.
Unlike the contact-based debris remediation mechanisms described above, which are targeted at large debris, remediating small debris requires specific methods compatible with their size. Novel tethered plate systems, capable of remediating debris pieces without damaging the plate's structure, have been proposed <cit.>. Furthermore, perimeter-ring-truss systems can handle small debris while ensuring scalability and practicality in the design phase <cit.>. However, the cost of remediation per debris piece is analyzed to be the highest in comparison with other methods <cit.>.
Ground-based lasers have emerged as a promising, cost-effective debris remediation solution to address the growing orbital debris population while circumventing the challenges and limitations of contact-based DR methods.
The change in velocity, Δ v, required to reduce the target's orbit altitude can be imparted by a ground-based laser leveraging photon pressure or laser ablation mechanisms. Photon pressure-based lasers exert a small force on the target object sufficient to induce small orbit changes. Mason et al. <cit.> present the use of two photon pressure ground-based laser systems to engage debris multiple times and change its orbit due to the applied perturbation, thereby ultimately avoiding collisions with operational satellites.
However, depending on the laser's wavelength and the debris temperature and material properties, there is a threat of generating specular reflections on the debris surface (akin to Iridium flares), which scale linearly with the laser power <cit.>. In light of the small forces that photon pressure imparts on the target, their application is constrained to collision avoidance only <cit.>.
Alternatively, laser ablation mechanisms rely on the laser's high energy to rapidly melt the target's surface and generate a material jet that produces a reactive momentum, typically higher than the one produced by photon pressure-based lasers <cit.>. On the one hand, continuous wave heating produces irregular melt ejection, which can generate more debris. In the case of a tumbling target, the average momentum transfer can be nullified due to the slow heating and decay characteristic of the imparted thrust <cit.>.
On the other hand, pulsed lasers appear as a mechanism suitable for collision avoidance and deorbiting debris, given that the per pulse energy delivered from the ground-based station is enough to change debris orbit <cit.>.
Despite their promise as debris remediation solutions, ground-based laser debris remediation systems face significant technical challenges. These systems are subject to numerous atmospheric constraints, including aerosol attenuation, cloud cover, scintillation effects, and turbulence, all of which affect beam quality <cit.>. Moreover, ground-based lasers have limitations in range and angles and thus require strategic positioning of their ground stations to maximize efficiency while considering various civil and operational constraints (i.e., ideally situated away from airports and air routes) <cit.>.
Ultimately, given its ground-based nature, their laser engagement opportunities depend on debris passing over its operational range, constraining ground-based lasers to preventive just-in-time collision avoidance without the ability to tackle immediate collision threats <cit.>.
Against this backdrop, the idea of space-based lasers has garnered increased attention over the past decade, owing to their capacity to address debris of varying sizes (both large and small) and their potential to overcome the inherent challenges associated with ground-based lasers <cit.>. Space-based lasers possess several advantages compared to their ground-based counterparts, including more efficient energy delivery and beam quality <cit.>, increased access to debris (contact duration and opportunities) <cit.>, and reduced risk of collateral damage due to more precise control enabled by shorter ranges <cit.>.
According to the NASA Cost and Benefit Analysis of Orbital Debris Remediation report <cit.>, ground-based and space-based laser systems are the only debris remediation methods that can handle both trackable and non-trackable debris pieces with the best cost-to-benefit remediation ratio. Moreover, when not used for remediation purposes, the operator can rely on them to track and characterize debris objects <cit.>.
The state-of-the-art literature on space-based lasers successfully validates the concept of small debris remediation by leveraging laser ablation mechanisms and exploring different laser parameters. Regarding large debris, literature presents how just-in-time collision avoidance actions can be effectively taken by changing the debris orbit due to imparted momentum. Yet, constraining the number of platforms to one or two cannot scale up efficiently to a more extensive debris population, which is rapidly expanding every year and is highly heterogeneous in materials, size, density, and orbital parameters. Further, the effectiveness of laser ablation is heavily influenced by the irradiation distance, angle, and revisit time, posing a challenge for a single or two laser platforms.
In addition, while nudging debris or placing it on a descent trajectory can alleviate immediate threats to specific valuable assets, it may inadvertently create hazardous conjunction events for other objects in space.
In response to these challenges, we envision a network of interconnected, collaboratively working space-based lasers, optimally designed to maximize the debris remediation capacity, that is, the ability to nudge and deorbit debris and perform just-in-time collision avoidance. Aiming to materialize this concept, we address the following research questions in this paper:
* “Where do we optimally locate the space-based laser platforms with respect to each other and the debris field?”
* “How do we optimally schedule laser-to-debris engagements such that the debris remediation capacity is maximized?”
We advance the state of the art in several ways. First, we propose leveraging the formulation of the Maximal Covering Location Problem (MCLP) <cit.>, typically found in the literature on facility location problems, to design an optimal network topology that maximizes laser engagement rewards with the debris field. In its original domain, MCLP aims to find the optimal locations for a set of facilities to maximize coverage over a set of customers.
We exploit the similarities between satellite constellation pattern design and facility location problems discussed in Ref. <cit.> and interpret the facilities as laser platforms and the debris as customers. Second, we study the effect of multiple laser engagements on target debris by conducting a vector analysis of the Δ v's imparted by the laser platforms, which we name Δ v Vector Analysis (DVA). Employing multiple lasers increases the degrees of freedom in the debris control mechanism for space-based laser debris remediation missions. This allows for finer control over the target debris resultant Δ v magnitude and direction, offering a wider range of potential paths for its subsequent descent or collision avoidance trajectory. Moreover, leveraging this new framework, we design a novel integer linear programming (ILP)-based laser-to-debris (L2D) engagement scheduling problem (ESP) that dictates which combination of lasers should engage with which debris while considering a reward function that accounts for debris mass, change in periapsis radius, and the risk of potential conjunctions with operating satellites before and after the engagements. It is important to remark that we are introducing, for the first time, an optimization framework that considers a network of space-based lasers to engage space debris of different sizes. Finally, we present several case studies, considering small, and large debris; further, we set a case with a mixed debris field along with 10 valuable assets. For all three case studies, the framework presented in this paper outperforms Walker-Delta <cit.>-based networks with the same number of platforms. Additionally, we conduct a cost and benefit analysis to characterize the impact of varying the number of laser platforms in the debris remediation capacity.
The rest of the paper is structured as follows: Sec. <ref> describes the technical details of the laser ablation process and DVA. Section <ref> presents the parameter generation and the mathematical optimization formulation. Section <ref> illustrates the case studies with debris fields of diverse sizes, and Sec. <ref> presents the conclusions of this paper.
§ LASER-TO-DEBRIS ENGAGEMENT MODELING
In this paper, space-based laser platforms engage debris by irradiating a laser beam, inducing a laser ablation process that alters the debris orbit. The governing dynamics of the laser ablation process are described in Sec. <ref> and the DVA mechanism is presented in Sec. <ref>.
The L2D framework is built upon several assumptions. First, debris is considered to be a perfect sphere. Second, all L2D engagements are deterministic, meaning that the position and velocity of debris after the engagement are known. Third, each laser platform has complete knowledge of the characteristics of debris (e.g., material, mass, orbit) and whether they endanger the safe operation of valuable in-space assets, for instance, the International Space Station, and in-service telecommunications and Earth-observation satellites.
Finally, we assume that the space-based laser network operates as a centralized system with global knowledge. In this system, each platform is aware of the magnitude and direction that all other platforms in the network can impart over debris, as well as their orbital states during the mission.
We define the mission time horizon as a set of uniformly discretized time steps 𝒯 = {t_0, …, t_T-1}, with index t and cardinality T, where t_0 corresponds to the epoch.
The laser platform set is defined as 𝒫={p_1,…,p_P} with index p and cardinality P, and the debris set 𝒟={d_1,…,d_D} with index d and cardinality D.
§.§ Laser Ablation Principle
Pulsed laser ablation is the process by which a solid ejects plasma from its surface due to the action of a short, intense laser pulse <cit.>. At regimes of high irradiance, the vaporized surface material becomes ionized and begins to absorb the incident laser beam leading to vapor breakdown and plasma formation <cit.>. The ablation rate, defined as the thickness of the ablated material per laser pulse, is inversely proportional to the square root of the beam diameter, and negatively correlated with laser wavelength given the reduced optical absorptivity and high reflectivity of targets at large wavelengths <cit.>. Given the velocity of the ejected material, and by the principle of momentum conservation, the ejected mass of the plasma generates a net impulse on the object <cit.>.
Pulsed laser ablation can be leveraged to impart an impulse to debris, causing a change in velocity that alters the size and/or shape of its resulting orbit. The magnitude of this perturbation is determined by laser parameters such as wavelength, output power, pulse energy, beam quality, and the length and frequency of pulses <cit.>, as well as by the properties of the target debris, including its mass, density, and surface material composition <cit.>. Moreover, the geometric relationship between the laser and debris determines the direction and magnitude of the velocity perturbation applied to the debris, influencing the resulting orbit.
The momentum coupling factor c_m relates the imparted impulse on debris with the used laser's pulse optical energy <cit.>, and is defined as <cit.>:
mΔ v = c_mE
where E is the laser energy delivered to the debris by a single pulse and m is the debris mass. Further, the on-debris delivered laser fluence φ is defined as <cit.>:
φ = 4ED_eff^2T_tot/π B^4ζ^2λ^2u^2
with D_eff^2 being the effective illuminated beam diameter, T_tot the total system loss factor, B the beam quality factor, ζ a constant that regulates diffraction, λ the wavelength, and u the range between the platform and debris. Consequently, the per-pulse L2D Δ v delivered on debris with surface mass ρ is given as:
Δ v = η c_mφ/ρ
where η is the impulse transfer efficiency, which takes into account shape effects, tumbling, improper thrust direction on debris, and other factors <cit.>.
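For illustration, the three relations above can be combined into a single per-pulse Δ v estimate, as in the Python sketch below; all numerical values are placeholders rather than the parameters adopted in the case studies.

```python
# Per-pulse delta-v imparted by a single L2D engagement, following the fluence
# and momentum-coupling relations above. All numerical values are placeholders.
import math

def on_debris_fluence(E, D_eff, T_tot, B, zeta, lam, u):
    """Fluence delivered on the debris surface."""
    return 4.0 * E * D_eff**2 * T_tot / (math.pi * B**4 * zeta**2 * lam**2 * u**2)

def per_pulse_delta_v(eta, c_m, phi, rho):
    """Per-pulse velocity change for debris of areal density rho."""
    return eta * c_m * phi / rho

phi = on_debris_fluence(E=380.0,        # pulse energy [J] (placeholder)
                        D_eff=1.5,      # effective beam diameter [m]
                        T_tot=0.8,      # total system loss factor
                        B=2.0,          # beam quality factor
                        zeta=1.7,       # diffraction constant
                        lam=355e-9,     # wavelength [m]
                        u=500e3)        # range to debris [m]
dv = per_pulse_delta_v(eta=0.3, c_m=75e-6, phi=phi, rho=1.0)  # c_m [N s/J], rho [kg/m^2]
print(f"fluence = {phi:.1f} J/m^2, per-pulse delta-v = {dv*1e3:.2f} mm/s")
```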
§.§ Δ v Vector Analysis
In this section, we present the DVA, a novel framework to quantify the effects of multiple simultaneous L2D engagements on target debris by leveraging a vector composition of the imparted Δv vectors.
We let r_td^- and v_td^- indicate the position and velocity vectors, respectively, of debris d at time step t immediately before an L2D engagement. Similarly, r_td^+ and v_td^+ are the position and velocity vectors, respectively, of debris d at time step t immediately after an L2D engagement.
We assume that at time step t an L2D engagement induces an instantaneous change in the velocity of target debris but its position remains unchanged. Hence, we have:
v_td^+ = v_td^- + Δv_tpd
r^-_td = r^+_td=r_td
where Δv_tpd represents the change in velocity experienced by debris d due to an L2D engagement from laser platform p.
At time step t, given position vectors r_td and r_tp of debris d and platform p, respectively, the relative position vector that points from platform p to debris d is given as u_tpd = r_td - r_tp; its unit vector is given as û_tpd = u_tpd /u_tpd, with u_tpd=‖u_tpd‖_2. The total change in velocity imparted from laser platform p to debris d at time step t is calculated by generalizing Eq. (<ref>) for every time step t, laser platform p, and debris d. Equation (<ref>) presents the vector Δv_tpd from laser platform p to debris d at time step t given unit vector û_tpd.
Δv_tpd = (N_d η_pd φ_pd c_m,pd / ρ_d) û_tpd
where N_d is the number of laser pulses per time step t on debris d, computed from the time step size and the laser pulse repetition frequency (PRF), which denotes the number of laser pulses per second.
The DVA framework captures multiple, simultaneous L2D engagements and represents them as a total effective single L2D engagement. The total Δv_td over debris d at time step t due to multiple L2D engagements can be represented as follows:
Δv_td = ∑_p∈𝒫_tdΔv_tpd
where 𝒫_td represents the set of laser platforms that engage debris d at time step t.
To determine how Δv_td affects debris trajectory, the new debris orbital parameters (i.e., semi-major axis, eccentricity) can be analytically computed using r_td and v_td^+ = v_td^- + Δv_td and compared with those before the L2D engagement.
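A minimal numerical sketch of the DVA is given below: the per-platform Δv contributions are summed and the post-engagement orbital elements are recovered from the vis-viva and eccentricity-vector relations. The debris state and the individual Δv vectors are placeholders for illustration.

```python
# Compose simultaneous L2D delta-v vectors and evaluate the debris orbit after
# the engagement from (r, v + delta-v). All numerical values are placeholders.
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def orbital_elements(r, v):
    """Semi-major axis, eccentricity, and periapsis radius from a state vector."""
    r_mag, v_mag = np.linalg.norm(r), np.linalg.norm(v)
    a = 1.0 / (2.0 / r_mag - v_mag**2 / MU)                      # vis-viva
    e_vec = ((v_mag**2 - MU / r_mag) * r - np.dot(r, v) * v) / MU  # eccentricity vector
    e = np.linalg.norm(e_vec)
    return a, e, a * (1.0 - e)

# Debris state immediately before the engagement (placeholder circular LEO orbit).
r_d = np.array([7.0e6, 0.0, 0.0])
v_d = np.array([0.0, np.sqrt(MU / 7.0e6), 0.0])

# Per-platform delta-v contributions along the platform-to-debris unit vectors.
dv_contributions = [
    5e-3 * np.array([0.0, -1.0, 0.0]),                                        # platform 1 (placeholder)
    5e-3 * np.array([-0.5, -0.5, 0.0]) / np.linalg.norm([-0.5, -0.5, 0.0]),   # platform 2 (placeholder)
]
dv_total = np.sum(dv_contributions, axis=0)

a0, e0, rp0 = orbital_elements(r_d, v_d)
a1, e1, rp1 = orbital_elements(r_d, v_d + dv_total)
print(f"periapsis radius: {rp0/1e3:.1f} km -> {rp1/1e3:.1f} km")
```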
The significance of DVA lies in its ability to enable a higher degree of control over debris dynamics by controlling the imparted Δv in both magnitude and direction, thus leading to a new debris orbit. Figure <ref> illustrates an example involving two laser platforms acting on the same debris at a given time step. In this specific scenario, there are three different perturbed orbits available for the debris: (1) one acted solely by platform p_1, (2) another acted solely by platform p_2, and (3) one acted collaboratively by both platforms simultaneously, resulting from a combined Δv_t_1d = Δv_t_1p_1d + Δv_t_1p_2d. The availability of multiple L2D engagement opportunities is advantageous, providing the space-based laser network with greater flexibility to achieve more effective debris remediation.
§ SPACE-BASED LASERS NETWORK DESIGN AND OPERATIONS OPTIMIZATION
The debris remediation capacity of space-based networks is correlated with the location of laser platforms. Given the distribution of the debris field and its concentration over specific altitude bins, the network's target access rate depends on the orbits taken by the platforms. Further, the laser ablation mechanism depends on the relative geometry between the platforms and debris. Hence, an optimal network topology can increase the effectiveness of laser ablation, leading to a higher debris remediation capacity.
During the debris remediation mission, each platform can present multiple L2D engagement opportunities. The dynamic scheduling of resources enables the network to select the platforms that present the most favorable relative geometry with respect to debris, such that the debris remediation capacity is maximized due to the L2D ablation mechanism.
The remainder of Sec. <ref> is organized as follows. In Sec. <ref>, we exhibit the generation of parameters involved in the optimization, and in Sec. <ref> the debris remediation capacity reward is introduced. Section <ref> materializes the proposed location-scheduling optimization problem, and Secs. <ref> and <ref> present the decoupled location and scheduling optimization problems, respectively.
§.§ Network Optimization Parameters
As a consequence of an L2D engagement, debris can relocate to a new orbit. We define 𝒥_td as the set of orbital slots for debris d at time step t. Each orbital slot j ∈𝒥_td is associated with the position and velocity vectors defined at time step t to fully define the state of debris orbit. Laser platform orbital slots are defined in set 𝒮, indexed by s and with cardinality S. Each laser platform is assumed to take an orbital slot in 𝒮 and maintain it during the entire mission. Moreover, we define a set of valuable assets 𝒦 with cardinality K.
The feasibility of the L2D engagements is encoded with Boolean parameter W_tsd, given as:
W_tsd=
1, if a platform located at orbital slot s can engage debris d at time step t
0, otherwise
To determine if an L2D engagement is feasible, two conditions must be satisfied. First, debris d has to be in the line-of-sight of a laser platform located at orbital slot s at time step t. Given the relative position vector u_tsd and the range u_tsd = ||u_tsd||_2, Eq. (<ref>) computes the line-of-sight indicator.
q_tsd = ((r_ts)^2-(R_⊕+ϵ)^2)^1/2+((r_td)^2-(R_⊕+ϵ)^2)^1/2-u_tsd
where R_⊕ is the radius of the Earth and ϵ is a bias parameter. If q_tsd>0, debris d is in the line-of-sight of the platform located at orbital slot s; otherwise (q_tsd≤0), it is not. Second, as defined in Ref. <cit.>, an L2D engagement can happen if u_tsd lies within the maximum and minimum operational ranges defined as u_max^d and u_min^d, respectively. The upper bound is determined by the given laser specifications, and the lower bound is set by considering the safety and operational aspects of the system.
Leveraging parameter W_tsd, it is possible to generate for each time step t the set of laser platform orbital slots 𝒮_tdj, with index s and cardinality S_tdj, that engage debris d ∈𝒟 and generate debris orbital slot j ∈𝒥_t+1,d.
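The construction of W_tsd reduces to the line-of-sight test of Eq. (<ref>) combined with the range bounds, as sketched below for a single platform/debris pair; the radii, the bias parameter, and the range limits are placeholder values.

```python
# Evaluate the L2D engagement feasibility parameter W_tsd for one platform slot
# and one debris object (line-of-sight test plus range bounds). Placeholder values.
import numpy as np

R_EARTH = 6378.137e3   # Earth's radius [m]
EPS = 100e3            # bias parameter epsilon on the occluding radius [m] (assumption)

def engagement_feasible(r_platform, r_debris, u_min, u_max):
    u_vec = r_debris - r_platform
    u = np.linalg.norm(u_vec)
    # Line-of-sight indicator q_tsd: positive when the chord between the two
    # objects does not intersect the sphere of radius R_earth + epsilon.
    q = (np.sqrt(np.linalg.norm(r_platform)**2 - (R_EARTH + EPS)**2)
         + np.sqrt(np.linalg.norm(r_debris)**2 - (R_EARTH + EPS)**2) - u)
    return (q > 0.0) and (u_min <= u <= u_max)

r_p = np.array([7.2e6, 0.0, 0.0])          # placeholder platform position
r_d = np.array([6.9e6, 5.0e5, 0.0])        # placeholder debris position
print(engagement_feasible(r_p, r_d, u_min=50e3, u_max=800e3))
```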
§.§ Laser-to-Debris Engagement Reward
We introduce debris remediation capacity reward R_tdij to quantify the value of relocating debris d from its current orbital slot i ∈𝒥_td to a new orbital slot j ∈𝒥_t+1,d at time step t, as a consequence of an L2D engagement while considering potential conjunctions with the set of valuable assets 𝒦. It is defined as:
R_tdij = C^0_td + C_tdij + αΔ h_tdij + βM_td
First, C^0_td is an incentive term that accounts for the conjunction analysis between debris d and the set of valuable assets 𝒦 during the entire mission scenario assuming no L2D engagements, by checking if debris orbit lies inside the deterministic conjunction ellipsoid of any valuable asset in 𝒦. We define C^0_td as:
C^0_td =
G_0, if t ∈ [t_min, t_max]
0, otherwise
If a conjunction, or multiple ones, is feasible between debris d and any valuable asset 𝒦, C^0_td = G_0 for all t ∈ [t_min, t_max] where G_0 is a large positive constant that incentivizes the network to engage debris during the time window [t_min, t_max]. The length of the interval and its starting point are parameters to be defined by a user considering the characteristics of the space-based laser, its capability for changing debris orbits, and the desired miss distance. Given t_c, the time step at which the first conjunction occurs (if debris d is to have multiple conjunctions with the same valuable asset or with several ones), t_max should be defined such that t_max≤ t_c.
Figure <ref> illustrates a scenario where reward C^0_td is not activated since there is no feasible conjunction. Figure <ref> outlines the case where a conjunction is imminent and the reward is activated.
Second, C_tdij is a look-a-head penalty term that accounts for whether relocating debris d from orbital slot i to orbital slot j due to an L2D engagement at time step t triggers at least one conjunction with any valuable asset in 𝒦.
Once all debris' candidate orbital slots are generated for step t + 1, each orbital slot is propagated from t+1 to t+1+τ, where τ represents the number of look-a-head time steps. Further, we compare for every t ∈ [t+1, t+1 +τ] if the range between debris d at slot j with any valuable asset is less than the range threshold. If a specific L2D engagement with debris generates a conjunction with any valuable asset defined in 𝒦, we set C_tdij=-G to ensure that the network is discouraged from relocating debris d to orbital slot j, as illustrated in Fig. <ref>. Conversely, Fig. <ref> highlights two simultaneous L2D engagements where the resultant debris orbital slot does not invade the valuable asset's conjunction ellipsoid; hence C_tdij is not activated.
Third, Δ h_tdij is a look-a-head incentive term that compares the periapsis radius of debris d orbital slot j after an L2D engagement at time step t with the periapsis radius threshold h^*, at which we consider that debris is governed by atmospheric drag and subject to induced natural orbital decay. Once this decay is initiated, the debris is considered deorbited within the framework. Given h_t+1,dj, the periapsis radius of the new orbital slot j for debris d after an L2D engagement, the reward term is defined as:
Δ h_tdij = γ_tdij(h^*/h_t+1,dj)^3
where γ_tdij is a variable that compares the periapsis radius of debris d between initial orbital slot i at time step t and after the L2D engagement at final orbital slot j at time step t+1, defined as:
γ_tdij =
-G_h, if h_t+1,dj≥ h_tdi
1, otherwise
If an L2D engagement increases the periapsis radius, then a large negative constant G_h is assigned to γ_tdij to discourage the engagement.
Furthermore, given that Δ h_tdij is a parameter that captures the ratio between the periapsis radius threshold and debris orbit periapsis radius after the engagement, a value of Δ h_tdij≥ 1 implies that the object is deorbited; hence, we constrain it to be at most one to avoid obtaining large rewards due to the nature of the terms' formulation. Additionally, we scale it by α in Eq. (<ref>) giving more flexibility to the user.
Lastly, M_td accounts for debris d mass, which is activated if debris is engaged at time step t, and whose objective in the reward formulation is two-fold.
First, debris of large mass is identified as the main source of new debris in case of a collision, and therefore in alignment with Refs. <cit.> and <cit.>, which assign a debris remediation capacity reward to debris proportional to its mass, we aim to assign a bigger reward for engaging debris of larger mass.
Second, given Eq. (<ref>), it is clear that for the same laser and c_m parameters, debris of larger mass will have a smaller Δ v than those of smaller mass, making it harder to achieve significant changes in its trajectory, resulting in a smaller Δ h_tdij. To address the two key points stated above, we introduce the mass reward term, defined as:
M_td =
m_d/m_max, if debris d is engaged at step t
0, otherwise
where m_max is the maximum debris mass in 𝒟. We normalize the mass of debris d dividing it by the maximum debris mass to be consistent in terms of magnitude with Δ h_tdij. Further, we introduce β in Eq. (<ref>) to scale it with respect to other parameters.
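Piecing the four terms together, the reward of Eq. (<ref>) can be evaluated per candidate relocation as sketched below. The conjunction flags stand in for the look-a-head propagation described above, and all constants are placeholders.

```python
# Schematic evaluation of the debris remediation capacity reward R_tdij.
# The conjunction flags stand in for the look-a-head conjunction analysis;
# all constants and inputs are placeholders.
def remediation_reward(h_new, h_old, h_star, m_d, m_max,
                       conjunction_incentive, causes_new_conjunction,
                       alpha=10.0, beta=1.0, G0=1.0e3, G=1.0e4, Gh=1.0e4):
    # C^0_td: incentive when debris threatens a valuable asset within [t_min, t_max].
    C0 = G0 if conjunction_incentive else 0.0
    # C_tdij: penalty if the relocation itself triggers a downstream conjunction.
    C = -G if causes_new_conjunction else 0.0
    # Delta-h term: reward lowering the periapsis radius, capped at one.
    gamma = -Gh if h_new >= h_old else 1.0
    dh = min(gamma * (h_star / h_new) ** 3, 1.0)
    # Mass term: favor engaging the heavier debris.
    M = m_d / m_max
    return C0 + C + alpha * dh + beta * M

print(remediation_reward(h_new=6.70e6, h_old=6.75e6, h_star=6.55e6,
                         m_d=900.0, m_max=2000.0,
                         conjunction_incentive=False,
                         causes_new_conjunction=False))
```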
§.§ The Concurrent Location-Scheduling Optimization Problem
The optimal mission design and operation of a network of space-based lasers is tackled by proposing the Concurrent Location-Scheduling optimization Problem (CLSP). This problem seeks to determine the optimal location for a set of platforms while considering an optimal L2D engagement scheduling to maximize the debris remediation capacity.
§.§.§ Decision Variables
We model the placement of laser platforms using the following platform location decision variables:
z_s =
1, if a laser platform is located at orbital slot s
0, otherwise
Similarly, we leverage the following L2D engagement decision variables:
y_tsd =
1, if a laser platform located at orbital slot s engages debris d at time step t
0, otherwise
At each time step t, debris d is enforced to take a new orbital slot j ∈𝒥_t+1,d. The set of debris orbital slots can be subdivided into two subsets. First, 𝒥̄_t+1,d⊆𝒥_t+1,d, with index j and cardinality J̄_t+1,d, contains the orbital slots that encode no changes in the current orbit for debris d since it is not engaged, and consequently they have no associated reward. Second, the set of new orbits, each one with its corresponding reward R_tdij, is encoded as 𝒥_t+1,d∖𝒥̄_t+1,d with index j. In CLSP, debris relocation decision variables are given as:
x_tdij =
1, if debris d relocates from orbital slot i to orbital slot j at time step t
0, otherwise
§.§.§ Constraints and Objective Function
We introduce constraints (<ref>) to enforce that a laser platform located at orbital slot s engages debris d at time step t only if it satisfies the L2D engagement requirements encoded in parameter W_tsd. Constraints (<ref>) enforce that each platform has at most one L2D engagement per time step.
W_tsdz_s ≥y_tsd, ∀t∈𝒯, ∀s ∈𝒮,∀d∈𝒟
∑_d∈𝒟 y_tsd ≤1, ∀t∈𝒯, ∀s ∈𝒮
We define path contiguity constraints to ensure the correct flow of debris throughout the entire mission horizon. First, we require all debris to occupy a new orbital slot at time step t_1 with constraints (<ref>). Second, constraints (<ref>) ensure that debris d follows a continuous path between steps t-1 and t+1 with respect to the intermediate reference slot i ∈𝒥_td at time step t.
∑_j∈𝒥_t_1d x_t_0di_0j = 1, ∀d ∈𝒟
∑_j∈𝒥_t+1,d x_tdij - ∑_υ∈𝒥_t-1,d x_t-1,dυi = 0, ∀t∈𝒯∖{t_0,t_T-1}, ∀d ∈𝒟, ∀i ∈𝒥_td
Constraints (<ref>) are linking constraints that couple debris relocation decision variables x_tdij and L2D engagement decision variables y_tsd. A relocation to orbital slot j ∈𝒥_t+1,d∖𝒥̄_t+1,d is only possible if debris d is engaged by a set of laser platforms located at orbital slots in 𝒮_tdj, which enable such a relocation. We relax the subset 𝒥̄_t+1,d given that it has an associated 𝒮_tdj=∅ and debris is enforced to occupy only one new orbital slot by constraints (<ref>) and (<ref>).
∑_s∈𝒮_tdj y_tsd≥ S_tdjx_tdij, ∀ t∈𝒯∖{t_T-1},∀ d∈𝒟, ∀ i ∈𝒥_td,∀ j ∈𝒥_t+1,d∖𝒥̄_t+1,d
Additionally, constraint (<ref>) enforces the number of platforms in the network to be equal to P.
∑_s∈𝒮 z_s = P
The domains of all decision variables are given as follows:
z_s ∈{0,1}, ∀s∈𝒮
y_tsd ∈{0,1}, ∀t∈𝒯,∀s∈𝒮, ∀d ∈𝒟
x_tdij ∈{0,1}, ∀t∈𝒯∖{t_T-1},∀d ∈𝒟, ∀i ∈𝒥_td,∀j ∈𝒥_t+1,d
Lastly, the model's objective is to maximize the remediation capacity of the network, which is given as:
∑_t∈𝒯∖{t_T-1}∑_d∈𝒟∑_i∈𝒥_td∑_j∈𝒥_t+1,d R_tdijx_tdij
§.§.§ Mathematical Formulation
Piecing everything together, the mathematical formulation for CLSP is given as follows:
max Objective function (<ref>)
s.t. Constraints (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>)
We report all sets, parameters, and variables used in the CLSP mathematical formulation in Table <ref>.
§.§.§ Illustrative Example
To illustrate CLSP, we present a small-scale instance with S=3, T=3, D=3, and P=2.
Figure <ref> outlines the tree structure of the problem with platform orbital slots s represented with rhombi, and debris orbital slots i,j with squares. The magenta lines represent feasible decision variables y_tsd, the gray lines are debris d feasible relocation variables x_tdij at time step t from orbital slot i∈𝒥_td to the set of orbital slots j ∈𝒥̄_t+1,d, and the dark lines are debris feasible relocation variables x_tdij to orbital slots j ∈𝒥_t+1,d∖𝒥̄_t+1,d. Infeasible decision variables are given a large negative reward R_tdij and are not displayed in the figure.
In Fig. <ref> debris is initially located at orbital slot i ∈𝒥_t_0d and has an associated virtual orbital slot j ∈𝒥̄_t_1,d, which represents no change in orbit if no L2D engagement occurs, and consequently R_t_0dij=0. The subset of debris orbital slots that represent new orbits depends on the number of feasible L2D engagements; for instance, while the leftmost debris can be engaged only by the leftmost platform orbital slot, debris located in the center orbital slot can be engaged by at most two platforms, hence it has three new orbits, all of them with their corresponding rewards R_t_0dij. Given that at time step t_1 the number of total debris orbital slots is eight and that the number of possible L2D engagements is seven, the total number of debris orbital slots at time step t_2 expands to 18.
The problem has 94 integer binary variables and 85 constraints and is solved using the Gurobi optimizer 11, which retrieves an objective value (i.e., debris remediation capacity reward obtained) of 7.50 with a duality gap of 0.00%. In Fig. <ref> the selected platform orbital slots are colored in dark gray, the debris orbital slots in blue, and decision variables y_tsd=1 and x_tdij=1 are straight lines, conversely, decision variables equal to zero are represented with dashed lines.
Even though the problem is presented for a small number of time steps and orbital slots, the number of decision variables and constraints grows exponentially. Further, Fig. <ref> accurately showcases the tree structure of the problem and how it rapidly expands. Scaling this formulation for larger mission horizons with more debris and platform orbital slots makes the problem computationally prohibitive, restricting its application to small-scale problems.
To overcome this challenge, we propose two separate optimization formulations. First, we find the optimal laser network topology design using MCLP, then, we leverage it to optimize the debris remediation capacity using the L2D-ESP.
§.§ Optimal Laser Network Topology Design
The problem of designing an optimal laser network topology is tackled under the assumption that debris does not relocate; consequently, decision variables x_tdij are relaxed, that is, removed, along with constraints (<ref>), (<ref>), (<ref>) and (<ref>). The resulting formulation inherits the location-scheduling structure from CLSP with location decision variables z_s, L2D engagement decision variables y_tsd and an orbital slot-independent reward, given as:
R_td = C^0_td + M_td
The network topology reward R_td accounts for feasible conjunctions between debris d and any valuable asset in 𝒦 leveraging parameter C^0_td, and for the mass of the engaged debris d with parameter M_td, both introduced in Eq. (<ref>).
However, the new reward R_td cannot assess whether that L2D potential interaction will effectively lower debris periapsis radius because the Δ h_tdij term from Eq. (<ref>) is relaxed. To address this problem, we introduce Boolean parameter W_tsd', defined as:
W'_tsd=
1, if W_tsd=1 and h_t+1,dj≤ h_tdi
0, otherwise
Given the nature of reward R_td, the new formulation can prioritize the selection of platform orbital slots closer to larger debris and neglect orbital slots that can engage small debris objects since the change in debris periapsis radius is not considered in the reward.
Further, the resulting topology, which is concentrated over larger debris, can considerably reduce the L2D engagement opportunities during the L2D-ESP described in Sec. <ref>.
To tackle this problem, we propose to leverage MCLP to design an optimal laser network topology. We refer the reader to Appendix A for an in-depth justification.
§.§.§ Decision Variables
To formulate our problem as an MCLP, we keep platform location decision variables z_s and introduce new decision variables x_td defined as:
x_td =
1, if debris d is engaged at time step t
0, otherwise
These new decision variables are distinct from the CLSP L2D engagement decision variables y_tsd in that they encode whether debris is engaged or not, without discriminating which platforms perform L2D ablation over debris.
§.§.§ Constraints and Objective Function
In light of the fact that no L2D engagement can occur between a platform located at orbital slot s and debris d if orbital slot s is not taken, linking constraints (<ref>) between decision variables z_s and x_td are imposed. Decision variables x_td are activated if at least S_td∈ℤ_≥1 platforms engage debris d at time step t.
∑_s∈𝒮 W'_tsdz_s≥ S_td x_td, ∀ t∈𝒯, ∀ d ∈𝒟
Here, S_td is a parameter inspired by the cardinality of set 𝒮_tdj used in constraints (<ref>), which imposes the number of platforms required to engage debris d in order to relocate it to orbital slot j. In this formulation, S_td aims to impose a lower bound on the number of platforms required to engage debris d at time step t.
This requirement can be beneficial in scenarios where large debris is targeted and a large Δv is required to generate a significant change in debris orbit.
The number of platforms to be used in the network is enforced by constraints (<ref>), and the decision variables domain in the location formulation are defined as:
z_s ∈{0,1}, ∀s∈𝒮
x_td ∈{0,1}, ∀t∈𝒯, ∀d ∈𝒟
The MCLP formulation aims to maximize the realized network topology reward of the laser network during the mission time horizon 𝒯, which is encoded in objective function (<ref>):
π = ∑_t∈𝒯∑_d∈𝒟R_td x_td
If an L2D engagement exists for debris d at time step t, then x_td=1 and reward R_td associated is obtained. It is important to mention that if R_td = 1, ∀ t ∈𝒯, ∀ d ∈𝒟, the problem will yield the total number of potential engagements that the network has at the beginning of the L2D-ESP.
§.§.§ Mathematical Formulation
Piecing all constraints and the objective function together, the resulting MCLP optimization formulation that determines the location of the laser platforms to maximize the network topology reward is given as:
max Objective function (<ref>)
s.t. Constraints (<ref>), (<ref>), (<ref>) and (<ref>)
The nature of the objective function and the lack of L2D engagement constraints similar to constraints (<ref>) incentivize the network to locate platforms at orbital slots that have a larger number of debris in range. Lastly, Table <ref> presents the new parameters and variables introduced for the MCLP formulation.
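The MCLP formulation maps directly onto a compact integer program. The gurobipy sketch below assumes that W'_tsd, R_td, and S_td have been precomputed from the orbit propagation and reward generation; random placeholders are used here purely to make the example self-contained.

```python
# Gurobi sketch of the MCLP network-topology problem: maximize the collected
# network topology reward subject to the linking and platform-count constraints.
# W'_tsd, R_td, and S_td are random placeholders for illustration.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(1)
T, S, D, P = 40, 30, 25, 10                       # placeholder problem dimensions
W = rng.random((T, S, D)) < 0.05                  # W'_tsd engagement feasibility
R = rng.random((T, D))                            # network topology reward R_td
S_td = np.ones((T, D), dtype=int)                 # minimum platforms per engagement

m = gp.Model("mclp_topology")
z = m.addVars(S, vtype=GRB.BINARY, name="z")      # platform location variables z_s
x = m.addVars(T, D, vtype=GRB.BINARY, name="x")   # engagement indicators x_td

m.addConstrs((gp.quicksum(z[s] for s in range(S) if W[t, s, d])
              >= int(S_td[t, d]) * x[t, d]
              for t in range(T) for d in range(D)), name="link")
m.addConstr(z.sum() == P, name="platform_count")
m.setObjective(gp.quicksum(float(R[t, d]) * x[t, d]
                           for t in range(T) for d in range(D)), GRB.MAXIMIZE)
m.optimize()
print("selected orbital slots:", [s for s in range(S) if z[s].X > 0.5])
```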
§.§ Optimal Laser-to-Debris Engagement Scheduling Problem
In Sec. <ref>, we presented a framework to obtain the optimal location of the laser platforms to maximize the network topology rewards. In this section, we build upon the assumption that the location of platforms is given and we maintain the engagement constraints from CLSP to derive the L2D-ESP.
Relaxing constraints (<ref>) and (<ref>), the L2D-ESP is:
max Objective function (<ref>)
s.t. Constraints (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>)
Even though the constraints that involve location decision variables z_s are relaxed, the scheduling problem continues to suffer from an exponential expansion of the solution space. To overcome this challenge, we propose a trade-off between the optimality of the solution and the computational runtime by implementing a myopic policy algorithm. The algorithm consists of breaking the problem into a set of coupled subproblems and solving them sequentially, such that the solution of a subproblem is used as the initial condition for the immediately subsequent subproblem. For the L2D-ESP, we parameterize the time step t and partition the problem into T coupled subproblems, where each subproblem solves the L2D-ESP for a single time step t; therefore, each one of them has a solution space smaller than the original scheduling problem, making it lighter to solve.
§.§.§ Decision Variables
To solve the L2D-ESP by leveraging myopic policy, we parameterize time step t present in decision variables (<ref>) and (<ref>). Additionally, orbital slot index i ∈𝒥_td is relaxed from constraints (<ref>) since the initial orbital slot for each subproblem is known. The new decision variables for the subproblem are defined as:
y_pd(t) =
1, if platform p engages debris d
0, otherwise
x_dj(t, i) =
1, if debris d relocates to orbital slot j
0, otherwise
§.§.§ Constraints and Objective Function
To define the engagement constraints, we parameterize time step t in constraints (<ref>) and (<ref>). Furthermore, since no platform orbital slots are considered in the framework, we introduce the set of platforms 𝒫_dj(t) ⊆𝒫 with index p and cardinality P_dj(t), that generate orbital slot j ∈𝒥_d(t) for debris d ∈𝒟. Then, the engagement and linking constraints between variables for the subproblem are:
∑_d∈𝒟y_pd(t) ≤1, ∀p∈𝒫
∑_p∈𝒫_dj(t) y_pd(t) ≥P_dj(t)x_dj(t, i), ∀d∈𝒟, ∀j ∈𝒥_d(t)
In addition to constraints (<ref>), we need to enforce each debris to relocate to at most one new orbital slot. To achieve this, we introduce constraints (<ref>).
∑_j∈𝒥_d(t) x_dj(t, i) ≤ 1, ∀ d ∈𝒟
The domains of the decision variables are given as:
x_dj(t,i) ∈{0,1}, ∀d∈𝒟, ∀j∈𝒥_d(t)
y_pd(t) ∈{0,1}, ∀p∈𝒫, ∀d∈𝒟
To account for the debris remediation capacity of the network for each subproblem t, we derive the reward for every time step t from Eq. (<ref>) by parameterizing indices t and i, given as:
R_dj(t, i) = C^0_d(t) + C_dj(t,i) + αΔ h_dj(t,i) + βM_d(t)
Although we are solving subproblem t, we let C^0_d(t)=C^0_td and C_dj(t,i)=C_tdij, with C^0_td and C_tdij defined as in Eq. (<ref>). Therefore, objective function (<ref>) encodes the debris remediation capacity of the network for subproblem t.
V(t) = ∑_d∈𝒟∑_j∈𝒥_d(t)R_dj(t,i)x_dj(t,i)
§.§.§ Mathematical Formulation
The optimization problem for a single subproblem t is formulated as:
max Objective function (<ref>)
s.t. Constraints (<ref>), (<ref>) (<ref>), (<ref>) and (<ref>)
Table <ref> presents the sets, parameters, and decision variables specific to subproblem t.
As a result of implementing the myopic policy algorithm, we can obtain the network's debris remediation capacity over the entire mission horizon by summing up objective function (<ref>) for all subproblems, as defined in Eq. (<ref>).
V = ∑_t∈𝒯V(t)
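A gurobipy sketch of a single myopic subproblem is given below; the candidate-slot generator, the platform sets 𝒫_dj(t), and the rewards are placeholders that would be produced by the parameter-generation step in the actual pipeline.

```python
# Gurobi sketch of one myopic L2D-ESP subproblem inside the sequential loop
# over time steps. The candidate slots and rewards are placeholders.
import gurobipy as gp
from gurobipy import GRB

def solve_subproblem(platforms, debris, slots, reward):
    """slots[d] -> list of (slot id j, list of required platforms P_dj);
    reward[d, j] -> R_dj(t, i)."""
    m = gp.Model("l2d_esp_t")
    m.Params.OutputFlag = 0
    y = m.addVars(platforms, debris, vtype=GRB.BINARY, name="y")
    x = m.addVars([(d, j) for d in debris for j, _ in slots[d]],
                  vtype=GRB.BINARY, name="x")

    m.addConstrs((y.sum(p, "*") <= 1 for p in platforms), name="one_target")
    m.addConstrs((gp.quicksum(y[p, d] for p in req) >= len(req) * x[d, j]
                  for d in debris for j, req in slots[d]), name="link")
    m.addConstrs((gp.quicksum(x[d, j] for j, _ in slots[d]) <= 1
                  for d in debris), name="one_slot")
    m.setObjective(gp.quicksum(reward[d, j] * x[d, j]
                               for d in debris for j, _ in slots[d]), GRB.MAXIMIZE)
    m.optimize()
    chosen = [(d, j) for d in debris for j, _ in slots[d] if x[d, j].X > 0.5]
    return m.ObjVal, chosen

# Placeholder single-step instance: two platforms, one debris, two candidate slots.
platforms, debris = ["p1", "p2"], ["d1"]
slots = {"d1": [("j1", ["p1"]), ("j2", ["p1", "p2"])]}
reward = {("d1", "j1"): 0.4, ("d1", "j2"): 0.9}
V_t, engagements = solve_subproblem(platforms, debris, slots, reward)
print(V_t, engagements)
```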
§ CASE STUDIES
To demonstrate the extensibility and flexibility of the proposed formulations across different mission environments, we present three case studies involving debris remediation missions for (1) small, (2) large, and (3) mixed debris populations (comprising both small and large debris) in the presence of 10 valuable assets. For each case study, we determine the optimal network topology of 10 laser platforms based on the given debris field using the MCLP, obtain its network topology reward, and then evaluate its debris remediation capacity using the L2D-ESP.
The results obtained for the optimal network of 10 space-based laser platforms are benchmarked against two baseline cases: (1) a single-platform case and (2) a 10-platform case in an optimized Walker-Delta network. The first benchmark, against a single-platform system, reveals the extent to which debris remediation capacity improves with additional platforms. The second benchmark compares the effect of asymmetry in the network topology on the debris remediation capacity. Walker-Delta <cit.> is a symmetrical constellation pattern that places satellites in circular orbits, all with the same altitude and inclination, which makes it well suited to missions that require global coverage. The Walker-Delta pattern is denoted as P/O/F, where P stands for the number of satellites (platforms), O for the number of orbital planes, and F for the phasing factor between satellites. To identify the optimized 10-platform Walker-Delta network, we enumerated every possible pattern given 10 platforms, resulting in 18 patterns. Additionally, we randomly selected 20 pairs of semi-major axes and inclinations from the set of platform orbital slots and generated all possible combinations of Walker-Delta configurations using 10 satellites, resulting in 360 networks. Using a brute-force algorithm, we select from this pool the best-performing Walker-Delta network (i.e., the one that collects the highest network topology reward) and obtain its debris remediation capacity leveraging the L2D-ESP; the pattern enumeration is sketched below. The orbital elements of the simulated debris remediation networks in the case studies, along with a three-dimensional visualization of the platform orbits and debris fields, are presented in Appendix B.
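The enumeration of candidate Walker-Delta networks is straightforward to reproduce. The sketch below generates the 18 patterns and the 360 candidate networks; candidate_pairs and network_topology_reward are hypothetical placeholders for the sampled (semi-major axis, inclination) pairs and the reward evaluation.

import random

def walker_delta_patterns(n_sats=10):
    # All P/O/F patterns: O must divide P and F ranges over {0, ..., O-1} (18 patterns for P = 10).
    patterns = []
    for planes in range(1, n_sats + 1):
        if n_sats % planes == 0:
            patterns.extend((n_sats, planes, f) for f in range(planes))
    return patterns

def walker_delta_elements(pattern, sma_km, inc_deg):
    # Circular-orbit elements (a, i, RAAN, argument of latitude) for every satellite in the pattern.
    n_sats, planes, phasing = pattern
    per_plane = n_sats // planes
    sats = []
    for k in range(planes):
        raan = 360.0 * k / planes
        for s in range(per_plane):
            arg_lat = (360.0 * s / per_plane + 360.0 * phasing * k / n_sats) % 360.0
            sats.append((sma_km, inc_deg, raan, arg_lat))
    return sats

patterns = walker_delta_patterns(10)                 # 18 patterns
pairs = random.sample(candidate_pairs, 20)           # hypothetical (a, i) pairs from the slot set
networks = [walker_delta_elements(p, a, i) for p in patterns for (a, i) in pairs]  # 360 networks
best = max(networks, key=network_topology_reward)    # brute-force selection (placeholder reward)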
We adopt the laser and debris parameters from Ref. <cit.> (except for the perigee threshold). The laser system considered is a laser-diode-pumped solid-state oscillator-amplifier with an amplifier medium of Nd:YAG or Nd:glass, which operates using 100ps ultraviolet pulses at the 3rd harmonic of neodymium at a wavelength of 355nm.
In line with the adopted value of c_m, we assume that the material composition of all debris is aluminum.
Furthermore, we propagate the orbits of laser platforms and debris considering up to the J_2 perturbation. The epoch t_0 is set to February 26, 2024, at 04:30:51 Coordinated Universal Time. After an L2D engagement, we assume the debris periapsis radius remains constant, neglecting the influence of J_2 to reduce the computational runtime when computing C_tdij. Lastly, to avoid a computationally prohibitive optimization problem, we propose a trade-off between the number of time steps and debris considered, that is, a larger population will be simulated with fewer time steps and vice versa.
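For reference, the first-order secular J_2 rates used in this kind of propagation can be computed as follows; the constants and the implementation are standard textbook values and are our own sketch, not code from the paper.

import math

MU = 398600.4418      # km^3/s^2, Earth gravitational parameter
RE = 6378.137         # km, Earth equatorial radius
J2 = 1.08262668e-3

def j2_secular_rates(a_km, ecc, inc_rad):
    # First-order secular drift rates (rad/s) of RAAN and argument of perigee under J2.
    n = math.sqrt(MU / a_km**3)           # two-body mean motion
    p = a_km * (1.0 - ecc**2)             # semi-latus rectum
    k = 1.5 * n * J2 * (RE / p)**2
    raan_dot = -k * math.cos(inc_rad)
    argp_dot = 0.5 * k * (5.0 * math.cos(inc_rad)**2 - 1.0)
    return raan_dot, argp_dot

def propagate_j2(raan0, argp0, m0, a_km, ecc, inc_rad, dt_s):
    # Advance RAAN, argument of perigee, and mean anomaly by dt_s seconds
    # (the small J2 correction to the mean anomaly rate is neglected here).
    raan_dot, argp_dot = j2_secular_rates(a_km, ecc, inc_rad)
    n = math.sqrt(MU / a_km**3)
    return raan0 + raan_dot * dt_s, argp0 + argp_dot * dt_s, m0 + n * dt_s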
Table <ref> summarizes the parameters shared across all case studies.
§.§ Case Study 1: Small Debris Field
We consider a seven-day mission time horizon, uniformly discretized with a time step size of 130s, adopted from Ref. <cit.>. The time step size is determined by the L2D engagement duration of 10s, with a PRF of 56Hz, plus an additional 120s for the laser equipment's cooling. Table <ref> presents the remaining laser and debris parameters.
The small debris field is initialized based on results from a simulation conducted using ESA's Meteoroid And Space debris Terrestrial Environment Reference (MASTER-8) model <cit.>. The simulation considers small debris over an altitude range of 186km to 2000km, which accounts for explosion/collision fragments, sodium-potassium droplets, solid rocket motor slag, and multi-layer insulation. The MASTER-8 outputs are provided as relative frequencies across 100 altitude bins that discretize the low Earth orbit (LEO) altitude range of 100km to 2000km into bins of width 18.14km. To initialize a representative small debris field across the LEO altitude band, 820 samples are drawn according to the respective relative frequencies of the altitude bins, as sketched below. Figure <ref> shows the small debris population per altitude bin. Each small debris object is assumed to be in a circular orbit with uniformly distributed values for the argument of periapsis, the argument of latitude, and right ascension of the ascending node (RAAN), bounded between 0 and 360deg, and inclination ranging from 0 to 180deg.
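A possible implementation of this sampling step is sketched below; bin_edges_km and rel_freq stand for the MASTER-8 altitude-bin edges and relative frequencies, which are assumed to be loaded beforehand.

import numpy as np

rng = np.random.default_rng(0)

def sample_small_debris(bin_edges_km, rel_freq, n_objects=820):
    # Draw circular-orbit debris: altitudes follow the per-bin relative frequencies,
    # angles are uniformly distributed as stated in the text.
    prob = np.asarray(rel_freq, dtype=float)
    prob /= prob.sum()
    bins = rng.choice(len(prob), size=n_objects, p=prob)
    altitude = rng.uniform(bin_edges_km[bins], bin_edges_km[bins + 1])   # uniform within each bin
    inc = rng.uniform(0.0, 180.0, n_objects)
    raan = rng.uniform(0.0, 360.0, n_objects)
    argp = rng.uniform(0.0, 360.0, n_objects)
    arg_lat = rng.uniform(0.0, 360.0, n_objects)
    return np.column_stack([altitude, inc, raan, argp, arg_lat])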
Next, we generate the set of orbital slots that laser platforms can occupy by defining an altitude range from 400km to 1100km, equally spaced in nine altitude layers. We adopt this range as it encompasses both the peak of the operating-satellite distribution, outlined in Fig. <ref>, and the small debris population. Further, we define 10 steps for the argument of latitude and RAAN, uniformly spaced between 0deg and 360deg. In addition, we define 9 uniformly spaced steps for the inclination between 35deg and 90deg. All orbits are assumed to have an eccentricity of zero at the epoch; the resulting slot grid is sketched below.
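A minimal sketch of the resulting slot grid follows; whether the angular grids include both endpoints is not stated in the text, so sampling on [0, 360) is our assumption.

import itertools
import numpy as np

altitudes = np.linspace(400.0, 1100.0, 9)                 # km, nine altitude layers
arg_lats  = np.linspace(0.0, 360.0, 10, endpoint=False)   # deg
raans     = np.linspace(0.0, 360.0, 10, endpoint=False)   # deg
incs      = np.linspace(35.0, 90.0, 9)                    # deg

platform_slots = [
    {"altitude_km": h, "arg_lat_deg": u, "raan_deg": om, "inc_deg": i, "ecc": 0.0}
    for h, u, om, i in itertools.product(altitudes, arg_lats, raans, incs)
]   # 9 x 10 x 10 x 9 = 8100 candidate orbital slots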
Two further simplifications apply to this case study. First, no valuable assets are considered to operate in the environment; consequently, C^0_td = C_tdij = 0 for all t ∈𝒯, d ∈𝒟, i ∈𝒥_td, j ∈𝒥_t+1,d. Second, M_td is assumed to be equal for all d ∈𝒟 because all debris is assumed to have equal density.
§.§.§ Results and Discussions
The network topology reward obtained is π^∗= 9269, and the retrieved optimal network topology consists of 10 platforms asymmetrically distributed. The optimal L2D-ESP is implemented with the optimal network topology obtained, retrieving a debris remediation capacity reward of V^∗=3132.34. Figure <ref> provides a snapshot of the debris field with 10 laser platforms at time step t = 11, and Fig. <ref> visualizes an L2D engagement over the Antarctic region at that same time step.
From the L2D optimization results it is possible to derive certain metrics that are useful for characterizing the mission performance, even though they are not the objective of the optimization.
First, the network engages 599 small debris objects throughout the seven-day mission time horizon, representing 73.04% of the total population considered. Second, the number of debris deorbited during this time horizon is 422, or 51.46% of the total population. Third, we define debris nudging as the sum of the differences between the periapsis radius at the epoch and at the end of the simulation over all engaged but not deorbited debris, which yields 64707.56km for the optimal 10-platform network.
The single-platform case collects a network topology reward of π^∗ = 1038 and has a debris remediation capacity of V^∗ = 722.08. The total number of engaged debris is 313, deorbiting a total of 29 and nudging debris 95321.31km. Adopting 10 platforms instead of one increases the network topology reward π and the debris remediation capacity V by 88.80% and 76.94%, respectively. Considering the derived metrics, increasing the number of platforms leads to 93.12% more debris deorbited. The single-platform mission nudges debris 95321.31km, outperforming the 10-platform network by 43.31%; since the former is less successful in deorbiting objects, more debris objects contribute to this metric.
Lastly, we obtain a Walker-Delta network with pattern 10/1/0, a semi-major axis of 7303.14km, and an inclination of 48.75deg. The network topology reward obtained by the Walker-Delta network is π= 7354 and its debris remediation capacity is V=2927.24. The derived metrics for this network show that 538 debris are engaged, 382 are deorbited, and the network nudges debris a total of 63801.59km.
The debris remediation capacity of the network depends not only on the number of platforms but also on their distribution. Breaking the symmetry in the distribution of lasers leads to increases of 20.66% and 6.54% in the network topology reward and the debris remediation capacity, respectively. Additionally, the MCLP-based network outperforms the symmetrical Walker-Delta network on the derived metrics, as it deorbits 9.47% more debris and achieves a 1.40% increase in the total debris periapsis radius difference with respect to the epoch.
§.§ Case Study 2: Large Debris Field
The mission time horizon is set to 31 days, uniformly discretized with a time step of 160s. Each L2D engagement lasts 40s, followed by a cooling period of 120s.
For the debris population, we adopt the 50 statistically most concerning debris objects as identified by Ref. <cit.>. For simplicity, we assume that all debris objects have the same cross-sectional area while retaining their actual masses as reported in Ref. <cit.>. With this approach, the magnitude of the delivered Δ v is not constant among the considered debris field since it is inversely proportional to debris mass.
To define the platform orbital slots, we maintain the values defined in Sec. <ref> except for the altitude range; since the L2D range is larger, we extend the altitude upper bound to 1400km. We report the specific parameters used in this case study in Table <ref>.
§.§.§ Results and Discussions
The network topology reward obtained is π^∗=20330.23 with an asymmetrical platform distribution, and its debris remediation capacity is V^∗=26826.56. Figure <ref> displays the large debris field and the 10 laser platforms at time step t=1314 whereas Fig. <ref> shows two L2D engagements. The first one consists of a single L2D engagement from one platform to debris; the second one is a collaborative L2D engagement from two platforms to one debris, highlighting the significance of DVA and how the network leverages this concept to maximize rewards.
Similarly to the analysis carried out in Sec. <ref>, we derive metrics from the obtained results. The total number of L2D engagements during the mission is 20225, and the most frequently engaged debris object has a mass of 9000kg. Because the large mass of such debris makes significant periapsis radius reductions unlikely at any individual time step, the term that accounts for debris mass in Eq. (<ref>) weighs the most in the reward, which is reflected in the number of engagements that higher-mass debris receive. Further, the network nudges debris 941.54km.
Changing the cardinality of the network and imposing only one platform in the MCLP formulation retrieves a network topology reward of π^∗=2916.35 and a debris remediation capacity of V^∗=6526.54. The significant reduction in the number of platforms directly impacts the results, reducing the network topology reward and the debris remediation capacity by 85.65% and 75.67%, respectively. The single platform performs 4163 L2D engagements, 79.41% fewer than the 10-platform network, with the same most-engaged debris object. The single platform nudges debris 214.70km, a 77.19% decrease compared to the 10-platform network.
The best-performing Walker-Delta network, with a pattern of 10/10/0, a semi-major axis of 7040.64km, and an inclination of 76.25deg, achieves a network topology reward of π=11255.10 and a debris remediation capacity of V=22651.94. The Walker-Delta network underperforms the asymmetrical MCLP-based network by 44.63% and 15.56% in network topology reward and debris remediation capacity, respectively. The total number of L2D engagements for the Walker-Delta network is 17632, which is 12.82% less than that of the optimal MCLP-based network, but its most-engaged object is still the 9000kg debris. Lastly, the Walker-Delta network nudges debris 975.45km, compared to the 941.54km obtained by the optimal network. This can be attributed to the fact that neither the MCLP formulation nor the L2D-ESP directly optimizes the difference in debris periapsis radius with respect to the epoch.
§.§ Case Study 3: Mixed Debris Field With Valuable Assets
In this case study, the mission horizon of seven days is uniformly discretized with a time step size of 160s to encompass the L2D engagement and cooling times for small and large debris. Further, we maintain the laser operational ranges for small and large debris as defined in Sec. <ref> and Sec. <ref>, respectively.
The small debris field is generated as in Sec. <ref>, with a population of 820 objects. The large debris field is the same as in Sec. <ref>, with the addition of the defunct satellite COSMOS 2221 (NORAD ID: 22236), which is assigned a reward of G_0=e6 during the network topology optimization and G_0=e4 for the L2D-ESP. Lastly, we define a set of 10 valuable assets 𝒦 = {k_1, …, k_10}, and Table <ref> presents their respective orbital elements at epoch t_0, with ω and ν as the argument of periapsis and true anomaly, respectively. The conjunction ellipsoid for all valuable assets is assumed to be a perfect sphere of radius 10km.
In addition to the 10 valuable assets, the mixed debris field study includes a known close conjunction event on February 28, 2024. The conjunction event was between the defunct satellite 22236 and the non-maneuverable operational NASA TIMED satellite (NORAD ID: 26998) with a reported miss distance of 20m at the time of closest approach (TCA) <cit.>. In the simulated conjunction event within the mixed debris field, a miss distance of 2.30km is observed at the TCA, corresponding to time step t=1081.
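The miss-distance screening used to monitor such a conjunction can be sketched as follows; eci_position is a placeholder for whatever propagator provides ECI positions at each time step.

import numpy as np

def miss_distance_profile(state_a, state_b, time_steps, eci_position):
    # Relative distance between two objects at every time step; returns the full profile,
    # the minimum miss distance, and the time step at which it occurs (the TCA on this grid).
    dist = np.array([
        np.linalg.norm(eci_position(state_a, t) - eci_position(state_b, t))
        for t in time_steps
    ])
    idx = int(np.argmin(dist))
    return dist, dist[idx], time_steps[idx]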
§.§.§ Results and Discussions
The optimal 10-platform network achieves a network topology reward of π^∗=7.10e7 and a debris remediation capacity of V^∗=2.86e5. Figure <ref> displays the mixed debris field and the 10 laser platforms at time step t=7. Figure <ref> is a detailed view of the same scenario highlighting three L2D engagements: first, two platforms engage two large debris objects; second, a single platform engages small debris from a closer range.
The optimal 10-platform network engages 546 debris objects of various sizes, successfully deorbiting 141 small debris, and nudging debris 43510.04km.
The conjunction event expected to occur at time step t=1081 in this scenario is successfully avoided, with the miss distance between NASA satellite 26998 and defunct satellite 22236 increasing from 2.30km to 2104.47km after 17 L2D engagements during the imposed time step window t∈[500, 622]. The new minimum miss distance between the two objects is 545.91km at time step t=103 when the debris has not been engaged yet.
The baseline single platform case collects a network topology reward π^∗=1.60e7 and has a debris remediation capacity of V^∗= 69211.24. Compared to the optimal 10-platform network, these values represent significant reductions of 77.46% in network topology reward and 75.83% in debris remediation capacity. The single platform engages with 228 debris, successfully deorbits 24 small debris, and nudges debris 25733.06km. These metrics present reductions of 82.97% in the number of total debris deorbited and 40.85% in the total distance nudged compared to the 10-platform network. The new closest approach between defunct satellite 22236 and satellite 26998 has a distance of 57.43km at time step t=936, and a new miss distance of 411.71km at the original TCA corresponding to t=1081.
The best-performing Walker-Delta network has a pattern of 10/5/2 with a semi-major axis of 6953.14km and an inclination of 76.25deg. It achieves a network topology reward of π=2.98e5 and a debris remediation capacity of V=1.23e5. When the same number of laser platforms is used but constrained to the symmetrical Walker-Delta pattern, the network topology reward and debris remediation capacity drop substantially, by 99.58% and 56.90%, respectively. The Walker-Delta network engages 441 debris objects and successfully deorbits 129 of them, 8.51% fewer than the optimal 10-platform network. The minimum miss distance between the active and defunct satellites is 545.94km at time step t=103; although the Walker-Delta network does not engage the defunct satellite within the imposed time step window t∈[500, 622], it does so beforehand, and consequently the relative distance between the two objects at time step t=1081 is 2605.02km.
§.§ General Discussions
Although the case studies presented in this paper do not span all possible mission scenarios, we handpicked the most representative ones for debris remediation missions. Each mission scenario is uniquely characterized by a set of variables that condition the outcome of the space-based laser debris remediation mission. For each of the three scenarios, we tested a triplet of networks: two of them optimally locate their one and 10 platforms by exploiting the MCLP formulation and accounting for the debris field distribution, and the remaining one is an optimized Walker-Delta network selected from a pool of 360.
Increasing the number of platforms from one to 10 leads to a significant rise in the debris remediation capacity of the network and its derived metrics. In all case studies, the debris remediation capacity improves by close to 75% when the number of platforms increases. The results obtained in all three case studies allow us to conclude that, for the same number of platforms, further increments in the debris remediation capacity and derived metrics can be achieved by breaking the symmetry of the network and determining the optimal location of platforms with the MCLP. The MCLP provides flexibility in the location of laser platforms as it accounts for the defined mission environment and selects orbital slots, not constrained to follow a symmetrical pattern, that maximize the network topology reward. The outcome of this formulation significantly improves the debris remediation capacity obtained from the L2D-ESP, as the platforms are better located relative to each other, to the debris field considered, and, if any, to the satellite-debris conjunction events.
§ COST AND BENEFIT ANALYSIS
In this section, a cost and benefit analysis is performed for different optimal networks, varying the number of platforms P from one to 10. Additionally, a 10-platform Walker-Delta network is included in the study. We define the cost of a network as the number of platforms used, and the benefit as the total rewards π^∗ and V^∗ obtained. All networks are simulated in the same mission environment adopted in Sec. <ref>.
Figure <ref> highlights the rewards obtained by each network in the optimization problems, where 10-WD stands for the 10-platform Walker-Delta network. The leftmost bar plot of Fig. <ref> presents the network topology reward π^∗, demonstrating a positive correlation between the increment in the number of platforms and the achieved reward. Further, all MCLP-based optimal networks obtain a higher reward with respect to the 10-platform Walker-Delta network.
The center of Fig. <ref> outlines the total debris remediation capacity reward V^∗ obtained by leveraging the L2D-ESP. Concerning this metric, the Walker-Delta network is outperformed by MCLP-based networks with at least three platforms. Focusing on MCLP-based networks only, the obtained V^∗ does not increase monotonically with the number of platforms, since adopting 8 platforms retrieves the highest reward. The rightmost bar plot of Fig. <ref> breaks down the total debris remediation capacity reward by each term of Eq. (<ref>); the C_tdij term has been omitted since no network performs L2D engagements that would trigger the penalty. First, C^0_td, the reward term that accounts for conjunctions if no L2D engagements occur, is obtained by all MCLP-based networks, since the flexibility gained in the design space allows them to engage defunct satellite 22236 in the specified time window; conversely, the Walker-Delta network is not able to collect any C^0_td reward. Moreover, the latter outperforms the MCLP-based networks on the individual reward terms αΔ h_tdij, which accounts for the ratio between the debris periapsis radius after the L2D engagement and the periapsis radius threshold, and βM_td, the term that accounts for engaged debris mass; however, this is not conveyed in the total reward, given its failure to obtain C^0_td, the term that weighs the most in Eq. (<ref>).
Figure <ref> presents relevant mission metrics derived from the L2D-ESP; it is noteworthy, however, that objective function (<ref>) is not designed to target any of them directly. The left bar plot of Fig. <ref> presents the number of small debris deorbited per network, exhibiting a positive correlation between the number of deorbited objects and the number of platforms. The right bar plot of Fig. <ref> reports the magnitude of nudged debris per network, where the Walker-Delta network is outperformed by all MCLP-based asymmetrical networks, and the 9-platform network provides the highest debris nudging value of 46484.58km.
In summary, this section outlines the performance of different mission architectures as the number of space-based lasers and their network topology vary. The results demonstrate the necessity of breaking the inherent symmetry of Walker-Delta networks: MCLP-based optimal networks with at least three platforms outperform these symmetrical networks in terms of debris remediation capacity. Adopting the MCLP expands the network topology design space, allowing the retrieval of optimal network topologies that do not adhere to the symmetrical Walker-Delta pattern.
§ CONCLUSIONS
In this paper, we addressed the problem of how to efficiently tackle the remediation of orbital debris of different sizes. We presented an optimization framework suitable for the design and operation of a network of space-based lasers capable of deorbiting and nudging debris and of performing just-in-time collision avoidance.
Introducing the MCLP formulation to design debris remediation networks, for the first time, allows us to obtain the network topology while considering several variables of the mission environment, mainly the characteristics of the targeted debris field, the number and specifications of the laser platforms, and the operation of valuable assets in space. We propose an innovative L2D-ESP that seeks to maximize the debris remediation capacity of the network during the operation of the mission. The latter leverages the novel concept of DVA, a collaborative engagement framework that enables a higher degree of control over the L2D engagements.
In this research, we propose two sets of experiments. The first corresponds to the case studies, whose main objective is to exhibit the applicability and extensibility of the proposed optimization formulations. Further, the different case studies enable us to conclude that the debris remediation capacity can be increased by (i) increasing the number of platforms used in the network and (ii) for the same number of laser platforms, breaking the symmetry of Walker-Delta networks by leveraging the MCLP formulation.
The second set of experiments corresponds to the cost and benefit analysis, which varies the number of platforms from one to 10 and includes a 10-platform Walker-Delta network. The results outline that increasing the number of platforms is beneficial only up to a certain cardinality, after which either the debris remediation capacity or certain metrics of interest tend to decrease.
There are several avenues for future research. First, consider the impact of distance on the delivered fluence to debris and include stochasticity in L2D ablation processes. Second, enhance the flexibility of the network by allowing platforms to reposition into different orbits, leveraging the concept of satellite constellation reconfiguration to maximize debris remediation capacity in a dynamic debris environment. Lastly, explore the design of a novel laser platform, taking into account laser specifications, onboard hardware characteristics, and its mass and shape, enabling an economical cost-benefit analysis of space-based laser networks for orbital debris remediation.
§ ACKNOWLEDGMENT
This work was supported by an Early Career Faculty grant from NASA’s Space Technology Research Grants Program under award No. 80NSSC23K1499.
§ APPENDIX A: LOCATION OPTIMIZATION FORMULATION
In this section, we present a step-by-step justification for the adoption of MCLP to tackle the optimal design of the space-based laser network topology. The CLSP formulation introduced in Sec. <ref> is given as:
max ∑_t∈𝒯∖{t_T-1}∑_d∈𝒟∑_i∈𝒥_td∑_j∈𝒥_t+1,d R_tdijx_tdij <ref>
s.t. W_tsdz_s ≥y_tsd, ∀t∈𝒯, ∀s ∈𝒮,∀d∈𝒟<ref>
∑_d∈𝒟 y_tsd ≤1, ∀t∈𝒯, ∀s ∈𝒮 <ref>
∑_j∈𝒥_t_1d x_t_0di_0j = 1, ∀d ∈𝒟<ref>
∑_j∈𝒥_t+1,d x_tdij - ∑_υ∈𝒥_t-1,d x_t-1,dυi = 0, ∀t∈𝒯∖{t_0,t_T-1}, ∀d ∈𝒟, ∀i ∈𝒥_td<ref>
∑_s∈𝒮_tdj y_tsd ≥S_tdjx_tdij, ∀t∈𝒯∖{t_T-1},∀d∈𝒟, ∀i ∈𝒥_td,∀j ∈𝒥_t+1,d∖𝒥_t+1,d<ref>
∑_s∈𝒮 z_s = P <ref>
z_s ∈{0,1}, ∀s∈𝒮 <ref>
y_tsd ∈{0,1}, ∀t∈𝒯,∀s∈𝒮, ∀d ∈𝒟 <ref>
x_tdij ∈{0,1}, ∀t∈𝒯∖{t_T-1},∀d ∈𝒟, ∀i ∈𝒥_td,∀j ∈𝒥_t+1,d <ref>
As described in Sec. <ref>, the tree structure in the problem's solution space makes it computationally prohibitive. To overcome this problem, we adopt a series of assumptions that impact decision variables, parameters, and sets to obtain a space-based platform location optimization formulation.
The core assumption to overcome the tree structure is that each debris maintains its initial orbit, defined at t_0, during the entire mission. This assumption has several consequences. First, debris cannot be deorbited, which makes the debris field constant over time. Second, the debris orbital slot indices i,j can be dropped, and consequently constraints (<ref>) and (<ref>) are relaxed. Third, given that the orbital slot indices are dropped, the CLSP decision variables x_tdij are now defined as:
x_td = 1 if debris d is engaged at time step t, and 0 otherwise
As a result of dropping the relocation indices, the reward R_tdij defined in Eq. (<ref>) is recast as R_td=C^0_td + M_td. Removing Δ h_tdij prevents the optimization model from being informed about the post-engagement behavior of debris (i.e., whether the debris periapsis radius increases or decreases). To overcome this, we redefine W_tsd as W_tsd', where the latter incorporates the additional requirement that the debris periapsis radius after an L2D engagement has to be lower than its initial one.
With the described assumptions and relaxations, the derived formulation is:
max ∑_t∈𝒯∑_d∈𝒟R_tdx_td <ref>
s.t. W'_tsdz_s ≥y_tsd, ∀t∈𝒯, ∀s ∈𝒮,∀d∈𝒟
∑_d∈𝒟 y_tsd ≤1, ∀t∈𝒯, ∀s ∈𝒮 <ref>
∑_s∈𝒮_td y_tsd ≥S_tdx_td, ∀t∈𝒯,∀d∈𝒟
∑_s∈𝒮 z_s = P,<ref>
z_s ∈{0,1}, ∀s∈𝒮 <ref>
y_tsd ∈{0,1}, ∀t∈𝒯,∀s∈𝒮, ∀d ∈𝒟 <ref>
x_td ∈{0,1}, ∀t∈𝒯,∀d ∈𝒟<ref>
Even though relocation constraints are relaxed, the structure of the problem inherits the location-scheduling structure of the CLSP, since y_tsd are the L2D engagement scheduling decision variables that dictate whether an L2D engagement occurs from a platform located at slot s to debris d at time step t. Consequently, despite the fact that the new formulation does not have a tree structure, the dimension of decision variables y_tsd∈{0,1}^T × S × D can make the problem computationally prohibitive.
On top of that, given that the reward does not account for debris orbit changes, the formulation will tend to assign to each platform those orbital slots that maximize engagement with debris of larger mass.
Given two debris objects d_1 and d_2 with masses m_d_1 < m_d_2 and C^0_td_1=C^0_td_2=0, then M_td_1 < M_td_2 and consequently R_td_1 < R_td_2. However, throughout the L2D-ESP, where changes in debris periapsis are considered, if Δ h_td_1ij≫Δ h_td_2ij, then:
αΔh_td_1ij + βM_td_1 > αΔh_td_2ij + βM_td_2
R_td_1ij > R_td_2ij
outlining that the outcome, which corresponds to the optimal network that maximizes reward R_td, can overlook platform orbital slots that collect higher debris remediation rewards during the L2D-ESP.
To tackle this problem, and considering that it is not appropriate to perform L2D assignments when changes in debris orbits are neglected, we propose to relax the CLSP L2D engagement scheduling decision variables y_tsd. Consequently, constraints (<ref>), (<ref>), (<ref>) and (<ref>) are dropped. The resulting optimization problem is given as:
max ∑_t∈𝒯∑_d∈𝒟R_tdx_td <ref>
s.t. ∑_s∈𝒮 z_s = P,<ref>
z_s ∈{0,1}, ∀s∈𝒮 <ref>
x_td ∈{0,1}, ∀t∈𝒯,∀d ∈𝒟<ref>
However, the new problem lacks coupling constraints that link location decision variables z_s and decision variables x_td such that L2D engagements with debris d occur only if a platform is occupying an orbital slot such that W_tsd=1. Inspired by constraints (<ref>) and (<ref>), we introduce the new coupling constraints given as:
∑_s∈𝒮 W'_tsdz_s≥ S_td x_td, ∀ t∈𝒯, ∀ d ∈𝒟<ref>
where x_td can be activated only if at least S_td platforms are able to perform L2D ablation on debris d at time step t. Hence, the formulation imposes a minimum number of platforms required to obtain the reward R_td of debris d at time step t. Consequently, the optimal laser network topology design problem is given as:
max ∑_t∈𝒯∑_d∈𝒟R_td x_td<ref>
s.t. ∑_s∈𝒮 z_s = P<ref>
∑_s∈𝒮 W'_tsdz_s≥S_td x_td, ∀t∈𝒯, ∀d ∈𝒟<ref>
z_s ∈{0,1}, ∀s∈𝒮<ref>
x_td ∈{0,1}, ∀t∈𝒯, ∀d ∈𝒟<ref>
where its structure resembles the well-known MCLP <cit.>.
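For completeness, a compact PuLP sketch of this final location formulation is given below; W, S, and R are assumed to be precomputed dictionaries holding the coverage indicators W'_tsd, the thresholds S_td, and the rewards R_td.

import pulp

def solve_mclp(slots, times, debris, W, S, R, n_platforms):
    # z_s places platforms on orbital slots, x_td collects the reward of debris d at time t.
    prob = pulp.LpProblem("laser_network_MCLP", pulp.LpMaximize)
    z = pulp.LpVariable.dicts("z", slots, cat="Binary")
    x = pulp.LpVariable.dicts("x", [(t, d) for t in times for d in debris], cat="Binary")

    prob += pulp.lpSum(R[t, d] * x[(t, d)] for t in times for d in debris)   # objective
    prob += pulp.lpSum(z[s] for s in slots) == n_platforms                   # exactly P platforms
    for t in times:
        for d in debris:
            # x_td can only switch on if at least S_td covering platforms are placed
            prob += pulp.lpSum(W[t, s, d] * z[s] for s in slots) >= S[t, d] * x[(t, d)]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen_slots = [s for s in slots if z[s].varValue == 1]
    return chosen_slots, pulp.value(prob.objective)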
§ APPENDIX B: 3D ILLUSTRATIONS AND CASE STUDIES RESULT'S ORBITAL ELEMENTS
We report the corresponding orbital elements of the results obtained in Sec. <ref>, along with 3D visualizations of the debris remediation networks and the platform orbits in the ECI reference frame. Figures <ref>, <ref> and <ref> display the debris remediation networks for P=1, P=10, and the Walker-Delta configuration for the small, large, and mixed debris field cases, respectively. Tables <ref>, <ref> and <ref> present the orbital elements of the P=1, P=10, and Walker-Delta networks for the small, large, and mixed debris field cases, respectively.
|
http://arxiv.org/abs/2409.02611v1 | 20240904105605 | GoT-CQA: Graph-of-Thought Guided Compositional Reasoning for Chart Question Answering | ["Lingling Zhang", "Muye Huang", "QianYing Wang", "Yaxian Wang", "Wenjun Wu", "Jun Liu"] | cs.CV | ["cs.CV"] |
GoT-CQA: Graph-of-Thought Guided Compositional Reasoning for
Chart Question Answering
Lingling Zhang1#, Muye Huang1#, Qianying Wang2 (corresponding author), Yaxian Wang1, Wenjun Wu1, Jun Liu1 (# these authors contributed equally to the work)
Xi’an Jiaotong University1 Lenovo Research2
{huangmuye, wyx1566, nickjun98}@stu.xjtu.edu.cn wangqya@Lenovo.com
{zhanglling, liukeen}@xjtu.edu.cn
September 9, 2024
=================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Chart Question Answering (CQA) aims at answering questions based on the visual chart content, which
plays an important role in chart summarization, business data analysis, and data report generation.
CQA is a challenging multi-modal task because of the strong context dependence and complex reasoning requirement.
The former refers to answering this question strictly based on the analysis of the visual content or internal data of the given chart, while the latter emphasizes the various logical and numerical reasoning involved in answer prediction process.
In this paper, we focus on the complex reasoning in the CQA task and propose a novel Graph-of-Thought (GoT) guided compositional reasoning model, called GoT-CQA, to overcome this problem.
First, we transform the chart-oriented question into a directed acyclic GoT composed of multiple operator nodes, including localization, numerical, and logical operators.
It intuitively reflects the human brain's solution process for the question.
After that, we design an efficient auto-compositional reasoning framework guided by the GoT, to execute the multi-step reasoning operations required by various types of questions.
Comprehensive experiments on the ChartQA and PlotQA-D datasets show
that GoT-CQA achieves outstanding performance, especially on complex human-written and reasoning questions, compared with
the latest popular baselines.
§ INTRODUCTION
Charts and figures are an effective visual expression about data statistics, the trends, and outliers, which exist widely in academic articles, technical reports, and various websites.
Chart question answering (CQA) refers to correctly answering a given question based on the visual chart content.
It plays an important role in a range of downstream tasks, such as chart summarization, business data analysis, and data report generation.
With the development of multi-modal understanding and reasoning techniques, the CQA task has received growing attention in recent years <cit.>.
The widely studied visual question answering (VQA) task <cit.> is closely related to CQA: the former is the question answering (QA) task on natural images, while the latter focuses specifically on charts.
Although performance on the VQA task has made breakthrough progress, CQA is still challenging because of its strong context dependence and complex reasoning requirement.
On the one hand, strong context dependence means that answering a question relies strictly on the visual and underlying data information in the given chart, rather than on common sense or priors.
Natural images directly reflect phenomena in the real world, so common sense or priors mined from previous QA pairs can help better answer the current question.
As shown in the right part of Fig. <ref>,
the answer “arms” is a very likely candidate for the question “Where is the child sitting?”, even if the model does not understand the image context.
However, for the question “How many more descendants of P2 than of P1?”, the locations and values of P1 and P2 must be extracted from the given chart, even if the labels P1 and P2 have appeared in other charts.
In this case, fine-grained parsing of charts, including coordinates, legends, point locations, bar heights, and other information, is necessary preliminary work for CQA.
Current research <cit.> generally combines multiple techniques, such as object detection, OCR, word embedding, and expert rules, to achieve this goal.
On the other hand, complex reasoning, involving several logical and arithmetic operations after detailed chart parsing, is required for answer prediction in CQA.
Most previous studies <cit.> regard CQA as a simple classification task, where the output answers are limited to YES/NO or a fixed vocabulary containing all textual elements in the given chart.
However, the general setting, in which answers do not appear in the chart and are thus out of vocabulary (OOV), is more common in realistic applications.
Thus, many CQA models, such as ChartT5 <cit.>, ChartReader <cit.>, and Matcha <cit.>, are designed based on large pre-trained vision-language models for the general setting; they improve answering performance effectively but remain limited on complex reasoning questions and offer little model interpretability.
In this paper, we focus on how to better perform complex reasoning in the CQA task, and summarize the following two key issues in this reasoning process.
Firstly, how to implement multi-step numerical and logical reasoning?
In the left part of Fig. <ref>,
for the question “How many more descendants of P2 than of P1?”, the reasoning procedure involves finding the bars corresponding to P1 and P2 in the chart, estimating their descendant numbers, and finally executing the difference operation.
For the more challenging question “Are the number of descendants of P2 more than the average descendants?”, in addition to identifying the number of bars, the model needs to evaluate the corresponding value of each bar and perform averaging and comparison operations.
Generally speaking, a CQA model must perform multi-step operations such as finding, estimating, comparison, summation, and subtraction.
Secondly, how to establish a unified reasoning framework for various types of questions?
Taking the large-scale dataset PlotQA-D <cit.> as an example, the questions are constructed with 74 templates, and are categorized into three groups: structural understanding, data retrieval, or reasoning.
Apparently, different types of questions involve different inference operations, for instance the inference of question “How many descendants does P2 have?”
is much simpler than
“Are the number of descendants of P2 more than the average descendants?”.
However, designing a reasoning model for each type of question is not only time-consuming and expensive, but also difficult to generalize to new question types.
Thus it is essential to design a unified and flexible reasoning framework for various questions.
To address the above issues, we propose a novel Graph-of-Thought (GoT) guided compositional reasoning framework for the challenging CQA, which is called GoT-CQA model.
GoT-CQA includes three modules: chart & question parsing, compositional reasoning, and answering module.
The chart & question parsing module applies pre-trained large-scale models to generate a feature sequence for the given chart and a GoT corresponding to the question, where the directed acyclic GoT reflects the types and order of the reasoning operations that need to be performed.
Compositional reasoning module achieves complex reasoning over the chart data-flow under the guidance of GoT.
This module designs the self-data reasoning block and localization, numerical and logical reasoning operation blocks, and the reasoning framework is automatically formed by an orderly combination of these blocks according to the question's GoT.
The answering module is a general transformer decoder to jointly generate the YES/NO, in-vocabulary, or OOV answers.
Our main contributions can be summarized into four folds:
* We reveal the strong context dependence and complex reasoning requirement for the challenging CQA task. And we propose the novel model GoT-CQA as the first attempt to solve this complex reasoning problem in CQA.
* We transform the question about chart into an interesting directed acyclic GoT to guide the answer reasoning.
GoT splits the complex reasoning process into several ordered localization, numerical, or logical operations.
* We present an efficient automatic compositional reasoning pattern that generalizes to various types of chart-oriented questions.
And it enhances the reasoning interpretability to a certain extent.
* We conduct extensive experiments on dataset ChartQA and PlotQA-D to verify the superiority of GoT-CQA. The results
show that GoT-CQA achieves a good improvement, especially in complex reasoning questions.
§ METHODOLOGY
Fig. <ref> illustrates the overall framework of GoT-CQA, which includes three modules: 1) the chart & question parsing module (Left Part), which extracts the visual feature sequence from the chart and generates a GoT corresponding to the question;
2) the compositional reasoning module (Center Part), which performs complex localization, numerical, and logical reasoning over the chart's data flow guided by the GoT;
3) the answering module (Right Part), which generates the target answer based on the reasoning result.
Details of these modules are introduced as follows.
§.§ Chart & Question Parsing Module
Chart to Sequence-Feature.
Following the work <cit.>, we employ the pre-trained OCR-free architecture Donut <cit.> as the chart encoder.
Donut was originally designed for document image (e.g., receipt) understanding, and the work <cit.> demonstrates its effectiveness on chart parsing.
For any chart C, Donut generates an embedding sequence C_v, following the order from the
top-left corner to the bottom-right corner of this chart.
Question to Graph-of-Thought.
For any question Q, we find that its logical structure is relatively clear and easy to extract.
To achieve this goal, we first define the following three types of unit operators: 1 the localization operator (Loc) queries the position information of the given input;
2 the numerical operator (Num) retrieves, or performs simple reasoning over, the value of the target variable;
and 3 the logical operator (Log) executes logical inference over multiple values, such as comparison, summation, and maximum.
Naturally, any question can be decomposed into ordered operations of the above three unit operators.
Namely question Q can be represented as a directed acyclic graph 𝒢 composed of multiple operator nodes, which is called Graph-of-Thought (GoT).
This procedure can be formalized as:
𝒢 = {𝒪, ℰ},
𝒪 = {o_1, o_2, ⋯, o_n}, o_i = (o_i, type(o_i)),
ℰ = {e_ij=(o_i,o_j)}⊂𝒪×𝒪,
where 𝒪 is the operator node set, and n records the total number of operators for question Q.
Each node o_i is denoted as a tuple consisting of the operation content o_i and its type type(o_i)∈{Loc, Num, Log}.
ℰ is the edge set, and the edge e_ij exists when there is a chain of thought from node o_i to o_j.
After analyzing some popular CQA datasets, we found that GoTs can be extracted by pre-defined template rules or the prompted large-scale language models such as GPT.
Fig. <ref> shows several examples of question GoTs.
For question “How many more descendants of P2 than of P1?”, its GoT includes two localization nodes, two numerical nodes, one logical nodes, and four edges between them.
The edge (o_1,o_3) indicates that the value of P1 can only be measured after finding the location of P1 in chart.
Similarly, edges (o_3,o_5) and (o_4,o_5) indicate that the values of P1 and P2 need to be obtained before the subtraction operation.
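A lightweight way to represent such a GoT is sketched below; the node contents are illustrative paraphrases of the operations for this example, and the edge (o_2, o_4), not listed explicitly in the text, is the symmetric counterpart of (o_1, o_3).

from dataclasses import dataclass

@dataclass
class OperatorNode:
    content: str    # textual guidance o_i fed to the corresponding reasoning block
    op_type: str    # one of {"Loc", "Num", "Log"}

# GoT for "How many more descendants of P2 than of P1?"
got_nodes = {
    "o1": OperatorNode("locate the bar of P1", "Loc"),
    "o2": OperatorNode("locate the bar of P2", "Loc"),
    "o3": OperatorNode("estimate the value of P1", "Num"),
    "o4": OperatorNode("estimate the value of P2", "Num"),
    "o5": OperatorNode("subtract the value of P1 from that of P2", "Log"),
}
got_edges = [("o1", "o3"), ("o2", "o4"), ("o3", "o5"), ("o4", "o5")]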
§.§ Compositional Reasoning Module
Given the chart data flow C_v, this module accomplishes auto-compositional reasoning under the guidance of the GoT 𝒢.
In Fig. <ref>, we design four blocks: one for self-data reasoning and three for the operation types (i.e., {Loc, Num, Log}).
Self-Data Reasoning. This block takes the visual sequence C_v as input and performs self-reasoning on the chart to mine the meaning of its elements and the relationships among them.
As shown in the left part of Fig. <ref>, the pre-process layer is a fully connected layer over C_v.
Each reasoning layer contains two encoders with self-attention, where each encoder computes the query, key, and value, followed by a feed-forward layer, skip connection, and normalization.
After N layers, the output of this block is denoted as the feature matrix 𝐎_start, which is fed to the following compositional reasoning.
Loc/Num/Log Reasoning.
In right part of Fig. <ref>, the architectures of localization, numerical, and logical reasoning blocks are similar.
Note that each operator node o_i in GoT corresponds to a reasoning block.
And its input consists of two parts:
the current guidance information o_i, and the chart data flows from o_i's precursor nodes pre(o_i).
This procedure is formalized to Eq. (<ref>) with the output data flow matrix 𝐎_i,
and the node set pre(o_i) is arrived at by
Eq. (<ref>).
𝐎_i = Block_type(o_i)(pre(o_i), o_i),
pre(o_i) = {o_start, o_k; e_ki∈ℰ} if indegree(o_i)≠0, and {o_start} otherwise,
where the function indegree(·) computes the in-degree of the input operator node.
o_start is a virtual node outside the GoT, and it stores the original chart data-flow matrix 𝐎_start obtained by the self-data reasoning block.
In particular, if the in-degree of node o_i is zero, it is a starting node of the compositional reasoning and takes the initial data flow 𝐎_start as input.
There is also a pre-process layer in Loc/Num/Log reasoning block.
On the one hand, this layer fuses all the precursor data flows {𝐎_k;o_k∈pre(o_i)} into a new feature matrix.
On the other hand, it uses the pre-trained language model (e.g. Bert) to extract the semantic feature of textual guidance information o_i.
For each reasoning layer, there are two types of encoders: self-attention and cross-attention.
The architecture of the former is the same as in the self-data reasoning block.
The latter is designed to exchange information between the guidance and the chart data flow, where the transformer's keys and values are mutually exchanged, resulting in
a cross-attention; one possible instantiation is sketched below.
After N layers, the output data flow 𝐎_i is passed as input to the reasoning blocks of o_i's successor nodes.
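The sketch below is one plausible reading of such a layer in PyTorch, with queries taken from the chart data flow and keys/values from the guidance embedding; the authors' exact head counts, dimensions, and normalization order are not specified here and are our assumptions.

import torch.nn as nn

class GuidedReasoningLayer(nn.Module):
    # One Loc/Num/Log reasoning layer: self-attention over the fused precursor data flow,
    # followed by cross-attention against the guidance features and a feed-forward block.
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, chart_flow, guidance):
        h, _ = self.self_attn(chart_flow, chart_flow, chart_flow)   # internal mining of the data flow
        h = self.norm1(chart_flow + h)
        c, _ = self.cross_attn(h, guidance, guidance)               # keys/values come from the guidance
        h = self.norm2(h + c)
        return self.norm3(h + self.ffn(h))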
Compositional Reasoning Guided by GoT.
For any question Q,
its reasoning network is formed by an ordered combination of localization, numerical, and logical blocks under the guidance of the GoT 𝒢.
This is a novel auto-compositional reasoning process, which means that the architecture of the reasoning network varies across questions.
After construction, all reasoning blocks are executed over multiple steps.
At each step, the blocks whose precursor nodes
have all been inferred in the current state are activated and executed.
Taking the question “How many more descendants of P2 than of P1?” as an example, we first execute two localization blocks to find the locations of P1 and P2, then execute numerical blocks to estimate their values, and finally execute the logical block for the subtraction operation.
The compositional reasoning process of the above CQA example is shown in the center of Fig. <ref>.
Specifically, the data flow output by the node whose out-degree is zero in the GoT 𝒢 is the result of the reasoning module.
It provides the evidence for the answering module in the next section.
We found that there is generally only one such terminal node in the GoT, and its output is denoted 𝐎_end for convenience; a sketch of this execution scheme follows.
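In the sketch below, run_block stands for the (hypothetical) dispatch to the Loc/Num/Log reasoning block of a node, and O_start is the output of the self-data reasoning block.

def execute_got(got_nodes, got_edges, O_start, run_block):
    # Execute blocks step by step: a node runs once all of its precursor nodes have been inferred.
    preds = {n: [u for (u, v) in got_edges if v == n] for n in got_nodes}
    outputs, done = {}, set()
    while len(done) < len(got_nodes):
        ready = [n for n in got_nodes if n not in done and all(p in done for p in preds[n])]
        if not ready:
            raise ValueError("the GoT must be a directed acyclic graph")
        for n in ready:
            flows = [O_start] + [outputs[p] for p in preds[n]]      # pre(o_i) data flows
            outputs[n] = run_block(got_nodes[n], flows)
            done.add(n)
    terminal = [n for n in got_nodes if all(u != n for (u, v) in got_edges)]
    return outputs[terminal[0]]                                     # O_end from the zero out-degree node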
§.§ Answering Module
For question Q, the answer may be YES/NO, a textual element in the chart, or the result of some numerical operation.
Following the latest CQA models <cit.>,
we design a simple transformer decoder that takes the data flow 𝐎_end as input and generates the final answer for all types of questions.
The traditional cross-entropy loss is applied to train this model.
§ EXPERIMENT
§.§ Experimental setting
Datasets.
(1) ChartQA <cit.> is constructed with real-world charts and human-authored question-answer pairs.
It covers 9.6K human-written questions focusing on logical and visual reasoning, and 23.1K
questions generated from human-written chart summaries.
(2) PlotQA-D <cit.> is another large-scale dataset that includes two benchmarks called PlotQA-D1 and PlotQA-D2.
Different from ChartQA, the charts in PlotQA-D are generated with a programming tool and the questions are created with 74 pre-defined templates that can be grouped into structural understanding, data retrieval, and reasoning.
Competitors.
Nine recent works on the CQA task, including PlotQA-M <cit.>, PReFIL <cit.>, CRCT <cit.>, VisionTapas <cit.>, ChartT5 <cit.>, Pix2Struct <cit.>,
ChartReader <cit.>,
MatCha <cit.>,
and UniChart <cit.>, are compared with our model.
Note that we directly run their publicly available codes or use the published evaluation results on ChartQA and PlotQA-D.
Settings.
In the question parsing module, the GoTs of questions in ChartQA are generated by the large language model Qwen2-7B[https://qwenlm.github.io/blog/qwen2/], while the GoTs in PlotQA are obtained with a template expert library.
In the compositional reasoning module, the numbers of layers in the self-data reasoning block and in the Loc/Num/Log reasoning blocks are set to 4 and 1, respectively.
In the answering module, the 12-layer transformer decoder framework from the translation model mBART <cit.> is applied.
During training, the pre-trained parameters of the Donut encoder are used to initialize our GoT-CQA.
For the ChartQA dataset, similar to <cit.>, we conduct a first-stage pre-training on an extra chart dataset for 200k steps, followed by a second-stage fine-tuning on ChartQA for 5 epochs. For the PlotQA dataset, we perform single-stage training on the PlotQA-D1 and PlotQA-D2 datasets for 200k steps.
All experiments are completed on two A100 GPUs.
§.§ Performance Comparison
Results on ChartQA. Table <ref> reports the answer scores over human and augmented questions from ChartQA.
From these results, we make the following two observations.
(1) Our GoT-CQA consistently performs better than the other competitors, except for the “Augmented” score obtained by MatCha.
GoT-CQA's advantage on “Human” questions is more prominent, being nearly 3.2% higher than the best comparison result.
This indicates that the proposed GoT-guided compositional reasoning is beneficial to the CQA task, especially for the challenging human-written questions.
(2) For all methods, the performance scores on “Human” questions are significantly worse than those on “Augmented” questions.
This is reasonable because the human-annotated questions are more complex and require more challenging reasoning than the augmented ones.
Results on PlotQA-D.
Table <ref> reports the scores over structural, data retrieval and reasoning questions, and the overall questions from PlotQA-D1 and PlotQA-D2 test set.
The results are very consistent with those from ChartQA.
(1) GoT-CQA still achieves a performance improvement on all test data, except for the overall performance on PlotQA-D2 achieved by MatCha.
This further demonstrates the outstanding ability of GoT-CQA in solving the more challenging questions.
Note that the competitor MatCha achieves outstanding performance on PlotQA-D2.
This is because MatCha is pre-trained on the large-scale MATH and DROP datasets, which significantly enhances its mathematical calculation abilities <cit.>.
(2) For all methods, the performance scores worsen as the question type changes from structural to data retrieval, and then to reasoning, and the answer performance of current models on the reasoning questions is below 87%.
This is because structural questions focus only on the chart's structure; data retrieval questions require searching for relevant information in the chart; and reasoning questions involve deep reasoning based on chart data understanding.
§.§ Analysis on GoT Guided Compositional Reasoning
GoT Ablation Studies.
As shown in Table <ref>, we conduct a thorough ablation
study on the GoT guided compositional reasoning, to analyze the contribution of key strategies in the proposed
GoT-CQA.
Here, “w/” and “w/o” GoT denote using and not using the graph-of-thought to guide the answer reasoning, respectively.
Namely, the “w/o” GoT variant regards the sentence embedding of the entire question as the guidance information.
In addition, the compositional reasoning module includes three types of reasoning blocks corresponding to the localization, numerical, and logical operators in the GoT.
To verify the effectiveness of the operator setting, we combine them together as a model variant, and also combine the localization and numerical operators into the finding (Find) operator as another model variant.
According to the results in Table <ref>, we observe the three points.
(1) Whether two or three operators are used, the overall performance of the w/o GoT variant is lower than that of GoT-CQA.
(2) Compared with one or two operators, GoT-CQA with three operators (Loc+Num+Log) achieves comparable performance on “Augmented", “S", and “D" questions, but it is more outstanding on “Human" questions from ChartQA and “Reasoning” questions from PlotQA-D1.
(3) The model with three operators does not incur much additional time overhead.
This benefits from the proposed auto-compositional reasoning pattern.
That is to say, for a complex question, the structure of its GoT corresponds to more localization, numerical, and logical operators, so its inference time is longer; a simple question requires fewer blocks and less time.
Reasoning Architecture Studies.
In this section, we analyze the block architecture in compositional reasoning module from the following three aspects.
1) Analysis on Self-Data Reasoning Layers.
As shown in the upper part of Table <ref>, we vary the layer number of self-data reasoning block,
and record the performance scores on test set of ChartQA and PlotQA-D1.
We observe that the GoT-CQA model achieves satisfactory results with 4-layer self-data reasoning.
Moreover, shallower settings are insufficient for comprehensive self-data understanding, while deeper settings are likely to cause overfitting.
2) Analysis on Loc/Num/Log Reasoning Layers.
As shown in the center part of Table <ref>, we
set the number of layers of the Loc, Num, and Log reasoning blocks to be the same and vary it in the range [1,2,3], recording GoT-CQA's question-answering performance on the two datasets.
The experimental results show that a Loc/Num/Log operator does not require a complex multi-layer reasoning block; a single attention layer obtains good performance on various types of questions.
Moreover, as the number of layers increases, the compositional reasoning module no longer handles the simple classification questions well.
3) Analysis on Loc/Num/Log Reasoning Architecture.
As shown in the bottom part of Table <ref>, we analyze the effect of different attention mechanisms in reasoning blocks on the CQA performance.
In addition to the cross-stacking mechanism of self- and cross- attentions (i.e. SELF-CROSS),
the pure-stacking mechanisms of cross-attention or self-attention (i.e. CROSS-CROSS or SELF-SELF) are also implemented in the experiment.
From these results, we observe that the GoT-CQA model with SELF-CROSS strategy is a better choice, because both internal and interactive information mining of question guidance and chart data flow are required in the reasoning procedure.
§.§ Case Study
To obtain a more insightful understanding of GoT-CQA model, we demonstrate some case studies in Table <ref>.
Human Questions from ChartQA: including cases Q1.1, Q1.2, Q2.1, and Q2.2.
Clearly, GoT-CQA performs better than the popular competitors MatCha and UniChart on the human-annotated questions.
We also notice that the predictions of all models are wrong on case Q2.2, possibly due to the limited performance of visual feature extraction with the pre-trained Donut or of the self-data reasoning block.
Structure or Data Retrieval Questions from PlotQA-D: including cases Q3.1 and Q4.1.
We conclude that the performance of both MatCha and GoT-CQA is satisfactory on these questions.
This is because these questions focus on visual elements or their relations in charts, and they do not involve complex reasoning or calculations.
Reasoning Questions from PlotQA-D: including cases Q3.2, Q3.3, Q4.2, and Q4.3.
Apparently, the predicted results with our GoT-CQA model are closer to the ground-truth compared to MatCha.
This indicates that our compositional reasoning pattern guided by GoT is indeed suitable for parsing and solving the challenging reasoning questions.
§ RELATED WORK
Chart-to-Table. This task aims to identify the constituent components (e.g., bars and legends) in a given chart and extract its underlying data.
Traditional methods <cit.> for this task rely on various heuristic rules that do not work well for new chart types.
ChartOCR <cit.> combines the advantages of deep-learning and hand-designed rules to achieve outstanding performance on bar, pie, and line charts.
Recently, ChartReader <cit.> introduces a transformer-based chart component detection module and an extended pre-trained vision-language model, which achieves SOTA performance on chart-to-table.
Intuitively, combining chart-to-table extraction with table question answering forms a reasonable CQA pipeline.
However, it not only faces a serious, time-consuming annotation issue in the chart-to-table stage, but also leads to error accumulation.
Moreover, answering questions does not necessarily require understanding all of the underlying data in the chart.
Chart-to-Text.
It aims to generate natural language captions or summaries from the chart image or chart metadata.
Current research <cit.> generally designs deep generation models to achieve this goal.
For example, Qian et al. <cit.> formulated the chart-to-text task as a controlled captioning problem, where the deep model FigJAM is proposed by utilizing metadata
information and a joint static and dynamic dictionary.
Recently, Huang et al. <cit.> pointed out that chart captioning in articles can be solved by summarizing the paragraphs mentioning the chart, and a popular pre-trained language model is applied in their baseline.
Nonetheless, these methods still could not reflect high-level meanings such as data trends in text generation.
Chart Question Answering.
It aims to answer questions related to charts by mining the visual and textual information.
<cit.> pioneered a synthetic CQA corpus called FigureQA, and
designed a benchmark baseline with the relation network to output binary (Yes/No) answers.
<cit.>
introduced the synthetic DVQA dataset that contains three forms of CQA: structural, data retrieval, and reasoning.
To avoid biases of synthesized data,
<cit.> provided a more complex dataset LEAF-QA in which charts are extracted from varied real-world data sources.
The above studies regard CQA as a simple classification task.
They assume the answer comes from a fixed-size vocabulary or is a textual element within the chart.
To the best of our knowledge, PlotQA-D <cit.> is the first dataset that includes the more challenging regression scenarios, in which the answers are not present in the chart and require more complex numerical reasoning.
After that,
<cit.> proposed a joint classification-regression model called CRCT for CQA, which achieved the outstanding results on PlotQA.
Besides, latest works including ChartReader <cit.>, MatCha <cit.>, UniChart <cit.>, apply large pre-trained vision-language models such as BERT <cit.> and GPT-3 <cit.> for CQA task, and they collect massive chart data to train the model parameters.
Nonetheless, the current performance of CQA is still far from human level,
and the models' weak reasoning ability is one of the main factors behind this gap.
§ CONCLUSION
In this paper, we propose a novel multi-modal reasoning model named GoT-CQA for challenging CQA task.
The core of GoT-CQA is the compositional reasoning module guided by the graph-of-thought (GoT). Note that the GoT transforms a complex question into an operation sequence composed of several simple operators, and the auto-compositional reasoning pattern is suitable for various types of chart-oriented questions.
Extensive experiments on ChartQA and PlotQA-D show the effectiveness and superiority of GoT-CQA.
In the future, we intend to improve the GoT-CQA’s ability by exploring more reasonable operators.
ieee_fullname
|
http://arxiv.org/abs/2409.02768v1 | 20240904144424 | Understanding the origin of early-type dwarfs: The spectrophotometric study of CGCG014-074 | [
"N. Guevara",
"C. G. Escudero",
"F. R. Faifer"
] | astro-ph.GA | [
"astro-ph.GA"
] |
§ ABSTRACT
Early-type dwarf galaxies constitute a prevalent population in the central regions of rich groups and clusters in the local Universe. These low-luminosity and low-mass stellar systems play a fundamental role in the assembly of the luminous galaxies observed today, according to the ΛCDM hierarchical theory. The origin of early-type dwarfs has been linked to the transformation of disk galaxies interacting with the intracluster medium, especially in dense environments. However, the existence of low-luminosity early-type galaxies in low-density environments presents a challenge to this scenario.
This study presents a comprehensive photometric and spectroscopic analysis of the early-type dwarf galaxy CGCG014-074 using deep GEMINI+GMOS data, focusing on its peculiarities and evolutionary implications. CGCG014-074 exhibits distinct features, including a rotating inner disk, an extended star formation history that has been quiescent for about the last 2 Gyr, and the presence of boxy isophotes.
From the kinematic analysis, we confirm CGCG014-074 as a nucleated early-type dwarf galaxy with an embedded disk. The study of its stellar population parameters using different methods provides significant insights into the galaxy's evolutionary history. These results show an old and metal-poor nucleus (∼9.3 Gyr and [Z/H]∼-0.84 dex), while the stellar disk is younger (∼4.4 Gyr) with a higher metallicity ([Z/H]∼-0.40 dex).
These distinctive features collectively position CGCG014-074 as a likely building block galaxy that has evolved passively throughout its history.
galaxies: dwarf – galaxies: formation – methods: observational
§ INTRODUCTION
Dwarf galaxies constitute a class of low-luminosity (M_B > -18 mag) and low-mass (M < 10^9 M_⊙) stellar systems. Among these, early-type dwarf galaxies
stand out as the dominant galaxy type within nearby clusters and groups <cit.>. According to the ΛCDM hierarchical theory <cit.>, dwarf galaxies serve as the fundamental constituents in the assembly of the luminous galaxies observed today.
The formation scenarios and theories for these objects focus on the transformation of late-type galaxies into their early-type counterparts through different processes involving interactions with their cluster/group and the environment. The galactic cold gas depletion induced by the hot intracluster medium's ram pressure <cit.>, tidal mass loss and kinematic heating caused by tidal shocks <cit.>, and structural alterations from tidal interactions with massive cluster members are all crucial factors in this transformation. These effects are particularly pronounced in low-mass galaxies due to their shallow gravitational potentials. Furthermore, the morphology-density relation underscores the affinity of early-type galaxies for high-density regions <cit.>. Thus, the combined impact of ram pressure, environmental quenching <cit.>, and tidal interactions leads to gas removal, suppression of star formation, and kinematic heating.
Supporting evidence for this theory emerges from observations of dwarf galaxies retaining vestiges of their late-type history, such as bars or residual spiral structures <cit.>. Additionally, the study of intermediate luminosity jellyfish galaxies provides insight into the active interplay of these mechanisms <cit.>. The presence of fast-rotating dE-like galaxies also contributes to this framework <cit.>. However, it remains unclear whether these processes are relevant in low-density environments, such as the field or poor groups <cit.>. Hence, a comprehensive analysis of the morphological characteristics of dwarf galaxies becomes essential for unravelling their underlying dynamics and assembly histories.
In this paper, we present a photometric and spectroscopic analysis of CGCG014-074 <cit.>, a completely unexplored early-type dwarf galaxy originally catalogued as a dwarf lenticular <cit.>, located in the vicinity of NGC 4546 <cit.>, a massive lenticular galaxy classified as a field galaxy <cit.>.
The observational properties of both galaxies are summarized in Table <ref>. By considering the distance modulus for the group of (m-M)=30.75±0.12 mag (14.1±1.0 Mpc), the spatial scale corresponds to 0.067 kpc/arcsec.
Given their location in a low-density environment, studying this galactic pair offers an exceptional case to study the formation and evolution processes of dwarf and S0 galaxies under such conditions. Our main goal is to characterize the evolutionary history of CGCG014-074 by examining its stellar populations globally and across spatially resolved regions.
From this analysis, we hope to determine if the morphology of this early-type dwarf galaxy reflects gradual formation and evolution as a building block, or if significant mergers and/or interactions have influenced it. The first scenario, according to cosmological hierarchical formation models, predicts an extended star formation history (SFH), leading to enriched stellar populations over time until their cold gas is depleted, while the latter case suggests a radial gradient of age and metallicity, with a younger, metal-rich central region <cit.>.
The paper is organised as follows. Section <ref> introduces the data and its reduction procedure, while Section <ref> provides a comprehensive account of the photometric and spectroscopic analysis performed. Finally, Section <ref> discusses the results obtained in this study.
§ OBSERVATIONAL DATA
The acquisition of photometric data was carried out under the programme GS-2014A-Q-30 (PI: Escudero, C.), using the GEMINI/GMOS instrument in imaging mode. The observations were conducted under exceptional seeing conditions, achieving an FWHM between 0.46-0.67 arcsec. The dataset comprises exposures in the g', r', i', and z' filters <cit.>: four exposures of 100 sec each for g', r', and i', and four exposures of 290 sec for the z' filter.
The observations were reduced using specific GEMINI/GMOS routines within the iraf[IRAF is distributed by the National Optical Astronomical Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation] software (version 2.16), such as gprepare, gbias, giflat, gireduce, and gmosaic. Baseline calibration was implemented using bias and flat-field images obtained from the Gemini Observatory Archive (GOA), which were essential for correcting the raw data.
In particular, the i' and z' frames exhibited night sky fringing caused by thin film interference effects within the CCD detectors. To mitigate this effect, blank sky frames were used to subtract the fringing pattern.
These calibration images were downloaded from the GOA, combined with the gifringe task, and applied to the science frames using the girmfringe task. Finally, the iraf task imcoadd was used for the co-addition of the reduced frames in each filter, obtaining the final g', r', i', and z' images. Figure <ref> shows the GMOS mosaic with the position of CGCG014-074 and NGC 4546.
The spectroscopic observations, on the other hand, were obtained with the GMOS instrument in long-slit mode, under the programme GS-2020A-Q-130 (PI: Escudero, C.). A total of nine exposures of 1540 sec were taken using the B1200 grating and a 1 arcsec wide slit aligned along the major axis of the galaxy. Taking into account the gap between the CCDs and the slit bridges, the data acquisition was centred at three different wavelengths, 530, 540 and 550 nm, and with a spatial offset of 7 arcsec. This instrumental configuration yields a dispersion of 0.26 Å/pixel with a spectral resolution of FWHM∼1.9Å measured at several sky lines, and covering a wavelength range of 4700-6200 Å.
The subsequent reduction process was performed using the tasks from the GEMINI/GMOS iraf package, run within the pyraf environment (version 2.1.15 for Python 2.7). Bias calibrations were downloaded from the GOA and used in the reduction process along with the flat-field and arc calibrations observed on the same night as the science frames. The latter were corrected, rectified and calibrated in wavelength using tasks such as gbias, gqecorr, gsflat, gsreduce, gswavelength and gstransform. Cosmic rays were removed from the images using the Laplacian Cosmic Ray Identification <cit.> routine via the gemcrspec task. To obtain a single 2D image of the spectrum, the nine individual science frames were combined using the lscombine task, taking into account both the spectral and spatial offset. The standard star CD-329927 underwent the same reduction procedure, and its sensitivity function was derived using the gsstandard task.
Flux calibration and heliocentric correction of the 1D science spectra were performed using the sensitivity function and the calibrate and dopcor tasks, respectively.
§ ANALYSIS
§.§ Surface Brightness Profiles
The surface brightness distribution of the dwarf galaxy was studied using the ellipse task <cit.> from the iraf software. The isophotal parameters (ellipticity (ε), position angle (PA), and Fourier coefficients A_4, B_4) were allowed to vary as a function of the equivalent radius (r_eq= a √(1-ε); where a is the semi-major axis of the ellipses).
ellipse was run interactively on the images, with appropriate masking of the light from NGC 4546 and bright objects in the field before fitting. During the fitting process, the ellipses' centre, ε, and PA were allowed to vary freely. However, as the iterations extended towards the outer regions of the galaxy where the signal-to-noise ratio (SNR) decreases, the aforementioned parameters were fixed to achieve convergence in the fit. In this case, the edge of the images was reached, corresponding to the equivalent radius of 60 arcsec.
Finally, the models obtained in each filter were calibrated to the standard system using the following expression:
μ(r) = C_0 - 2.5 log_10[I(r)/(t E^2)] - K (X - 1),
where μ(r) represents the surface brightness at the equivalent radius r in magnitudes per square arcsec, C_0 denotes the constant for transformation to the standard photometric system derived from <cit.>, and it contains the galactic extinction and the zero-point correction, I(r) corresponds to the mean intensity of the isophote in ADU per square pixel, t is the exposure time of the data (see Section <ref>), E is the GMOS detector scale in arcsec per pixel (0.146 arcsec/pixel), K is the mean atmospheric extinction and X is the airmass of the observations.
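As an illustration, the calibration above can be written as a short Python function; the numerical values in the example call are placeholders and not the actual calibration constants used for these observations:

import numpy as np

def calibrate_sb(intensity, t_exp, C0, K, X, pix_scale=0.146):
    # mu(r) = C0 - 2.5 log10( I(r) / (t E^2) ) - K (X - 1)
    # intensity : mean isophote intensity I(r) in ADU per square pixel
    # t_exp     : exposure time in seconds
    # C0        : zero point of the transformation to the standard system
    # K         : mean atmospheric extinction coefficient
    # X         : airmass of the observation
    # pix_scale : GMOS detector scale in arcsec per pixel
    return C0 - 2.5 * np.log10(intensity / (t_exp * pix_scale**2)) - K * (X - 1.0)

# illustrative call with placeholder values for C0, K and X
mu_g = calibrate_sb(intensity=150.0, t_exp=100.0, C0=28.0, K=0.18, X=1.2)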
To validate and refine sky-level estimations in the profiles, images in g, r, i and z bands were downloaded from the DESI Legacy Imaging Surveys[http://legacysurvey.org/] <cit.>. These images, although shallower photometrically than the GMOS images, cover a larger area around the galaxy. Again, ellipse was used on this dataset, following the same procedure as on the GMOS images, but reaching a larger galactocentric radius (r ∼ 80 arcsec; 5.6 kpc). Subsequently, the profiles obtained from the Legacy images were calibrated using Equation <ref>, considering a pixel scale of 0.27 arcsec/pixel, and C_0=22.5 mag, for all photometric bands. Finally, the calibrated profiles from GMOS and the corresponding Legacy images were compared to unveil any subtle signal differences between them, mainly due to the effect of the background sky value considered during the isophotal fit. In this way, the calibrated surface brightness profiles of CGCG014-074 were obtained in the four filters.
To characterise and estimate the structural parameters of CGCG014-074, a Sérsic <cit.> function was initially fitted in the form:
μ(r)= μ_eff + (1.086 b_n)[(r/R_eff)^(1/n)-1],
where R_eff is the effective radius of the galaxy, μ_eff is the surface brightness at R_eff, n represents the Sérsic index, and b_n is a parameter associated with n that follows the approximate expression b_n = 1.9992 n - 0.3271 for 0.5<n<10 <cit.> to ensure that half of the total flux is within R_eff.
The parameters obtained from these fits are listed in Table <ref>, where the integrated apparent (and absolute) magnitudes of the galaxy were also obtained using the following expression:
m_0 = μ_eff - 1.995450 - 5 log(R_eff) - 1.0857 b_n - 2.5 log[b_n^(-2n) n Γ(2n)]
where Γ(2n) is the gamma function. As seen in that table, the values of the effective radii and Sérsic indices obtained in the different bands are similar, and are typical of early-type dwarf galaxies <cit.>. This characteristic would indicate that strong colour gradients are not expected in the galaxy.
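For reference, the Sérsic profile and the integrated magnitude given above can be evaluated with a few lines of Python; the parameters in the example call are placeholders rather than the fitted values listed in Table <ref>:

import numpy as np
from scipy.special import gammaln

def b_n(n):
    # approximation b_n = 1.9992 n - 0.3271, valid for 0.5 < n < 10
    return 1.9992 * n - 0.3271

def sersic_mu(r, mu_eff, r_eff, n):
    # surface brightness mu(r) in mag per square arcsec
    return mu_eff + 1.086 * b_n(n) * ((r / r_eff)**(1.0 / n) - 1.0)

def sersic_total_mag(mu_eff, r_eff, n):
    # integrated magnitude of a single Sersic component (r_eff in arcsec)
    bn = b_n(n)
    # log10[ b_n^(-2n) n Gamma(2n) ], written with gammaln for numerical stability
    log_term = -2.0 * n * np.log10(bn) + np.log10(n) + gammaln(2.0 * n) / np.log(10.0)
    return mu_eff - 1.995450 - 5.0 * np.log10(r_eff) - 1.0857 * bn - 2.5 * log_term

print(sersic_total_mag(mu_eff=22.0, r_eff=13.3, n=1.7))  # placeholder parameters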
These single-component fits give reasonable results, but when the residuals are examined in detail, particularly towards the innermost and outermost regions of the profiles (r_eq<10 and r_eq>55 arcsec), they show values larger than ±0.2 mag/arcsec^2. This suggests that a single component is not sufficient to accurately reproduce the galaxy's light distribution. Therefore, the next step was to fit multiple functions to the profiles. This was done following the procedure of <cit.>, where the best model is the one that contains a minimum number of components with reasonable parameters that describe visibly different structures.
First, a Sérsic profile was considered to model the extended component of the galaxy. The fit was done by leaving the model parameters free for r_eq>10 arcsec to avoid the excess light shown by the profiles in the central region as well as the effect of seeing. This model was then subtracted from the profile to remove the extended component of the galaxy, and the residual was observed to confirm that the model was correct. A second Sérsic profile was then used to model the excess light between 0.6<r_eq<10 arcsec. As mentioned above, this model was subtracted from the profile by verifying the residual obtained. Finally, to represent the inner region of the galaxy, a Gaussian profile was fitted in the range r_eq<0.6 arcsec. From these initial values obtained for each function, they were fitted simultaneously throughout the galactocentric range to improve and obtain the final parameters.
Figure <ref> shows the best model obtained for each filter, composed of three functions considered, while Table <ref> lists the values obtained for each component. It can be seen that the inner component, described by a Sérsic profile, has an index value of n≈0.8. This value agrees with what is expected for early-type dwarf galaxies with disk properties <cit.>.
§.§ Maps and Colour Profiles
To obtain the profiles and colour maps of CGCG014-074, we initially aligned all the photometric images to the g' filter image, used as a reference, with the iraf tasks geomap and geotran. Then, since the g' frame has the poorest image quality (FWHM=0.67 arcsec), the r', i' and z' images were degraded to this value to homogenise the dataset and avoid substructures in the colour maps and profiles caused by differing FWHM values.
This involved degrading the images using the gauss task within iraf, which convolves the original science image with a Gaussian kernel defined by its standard deviation (σ).
To denoise the colour map, we applied a smoothing process to these standardised images using the boxcar task with a rectangular kernel of dimensions 5×5 pixels, similar to the seeing value. Finally, the imcalc task facilitated the generation of colour maps by calculating the differences between the images. Obtaining these colour maps can provide relevant information about different stellar components and/or colour substructures related to the presence of dust regions and/or merger events.
The ellipse task was then used to extract the colour profiles from the generated colour maps. Fixed parameters for centre position, ellipticity and position angle were adopted, based on average values (⟨ϵ⟩ = 0.47 and ⟨ PA ⟩ = 37^∘) derived from the calibrated surface brightness profiles (as described in Section <ref>). These parameters allowed us to construct corresponding colour profiles for the colour indices (g'-r')_0, (g'-i')_0, and (g'-z')_0.
Figure <ref> presents the colour maps and profiles of CGCG014-074. These profiles extend up to r_eq∼ 40 arcsec (indicated by a black circle with a dashed line in the colour maps). Beyond this radius, the considered sky values begin to significantly influence the colour maps.
As can be seen, the colour profiles remain relatively flat within r_eq<30 arcsec (r_eq<2 kpc), with mean colours of ⟨(g'-r')_0⟩=0.6, ⟨(g'-i')_0⟩=0.86, and ⟨(g'-z')_0⟩=1.07 mag, and dispersions of σ_g'-r=0.01, σ_g'-i=0.02, and σ_g'-z=0.03 mag. Moreover, the colour maps do not show any discernible signs of dust presence or distinct colour substructures.
§.§ Isophotal Analysis
The results of the fitting process performed by the ellipse task on the photometric images provide a basis for analyzing the variations of the isophotal parameters (ε, PA, cosine Fourier coefficient B_4) relative to the equivalent radius.
Figure <ref> shows the variation of the aforementioned parameters in different photometric bands.
In the innermost region (r_eq<3 arcsec; 0.2 kpc), the ellipticity shows significant variation, ranging from ε∼ 0.1 to 0.6. This large shift is due to the presence of the nucleus of the dwarf galaxy. Beyond r_eq>3 arcsec, the ellipticity begins to decrease smoothly until r_eq∼18 arcsec (1.2 kpc), remaining at an approximately constant value (ε∼0.48) until r_eq∼30 arcsec (1.8 kpc).
On the other hand, the position angle (PA), measured counterclockwise from north to east, shows a similar behaviour to the ellipticity in the inner region (r_eq<3 arcsec; 0.2 kpc), changing its value rapidly. From r_eq=3 to 60 arcsec, the PA gradually increases, varying by about 13 degrees between the two ends, reaching a value of 60 degrees in the outermost region of the galaxy. It is interesting to note that the fitted isophotes begin to rotate smoothly, pointing in the direction of the companion galaxy NGC 4546.
Finally, regarding the Fourier coefficient B_4, it takes values B_4>0 for r_eq<18 arcsec (1.2 kpc), indicating the presence of disk-like isophotes. These types of isophotes are found in the galactocentric region where ε shows a smooth change, pointing to the presence of an inner stellar disk.
Then, as we move towards larger radii (r_eq>18 arcsec), the parameter shows values B_4<0, indicating boxy-like isophotes.
These characteristics exhibited by CGCG014-074 resemble those of the rectangular-shaped galaxy LEDA 074886, studied by <cit.>, which also shows an edge-on stellar disk and notable boxy-like isophotes.
§.§ Kinematics
To study the kinematics of CGCG014-074, 1D spectra at different galactocentric radii were extracted from the 2D spectroscopic image (see Section <ref>). This was done using an iterative IDL code (version 7.1), which varies the aperture of a given extraction around certain galactocentric radii to obtain an SNR above a user-defined value. In this work, a criterion of SNR > 20 per Å (measured at 4700Å) was chosen to ensure a robust analysis of the kinematics and stellar population history of the galaxy. However, for regions beyond 15 arcsec (r_g > 15 arcsec; 1 kpc) in the 2D spectrum, this condition was relaxed to SNR > 5 per Å to allow at least a meaningful kinematic analysis.
From these spectra, we determined the radial velocity (V_rad) and the velocity dispersion (σ_*) using the full spectral fitting technique implemented in the pPXF algorithm <cit.>. In this work, we used the stellar population synthesis models from <cit.>, specifically the single stellar population (SSP) models based on the ELODIE stellar library. This preference was driven by the spectral range coverage, aligning closely with our observations (3900-6800 Å), albeit at a slightly higher resolution (FWHM=0.55 Å) compared to the CGCG014-074 spectra (FWHM=1.9 Å). The wavelength range for the fits was from 4750 to 6200 Å, with selective masking of certain regions and lines susceptible to contamination by prominent sky lines or emission lines, as these could potentially skew the results.
Table <ref> lists the apertures used at different galactocentric radii as well as the values for V_rad and σ_*, along with their respective uncertainties estimated by Monte Carlo simulations. In each simulation, each pixel of the spectra was resampled from a Gaussian distribution with a width equal to the observational error of that pixel. The parameter values and their associated uncertainties were obtained as the median and 1σ estimation from 100 simulations, respectively. This number of simulations is conventional for this type of analysis, as evidenced by previous studies <cit.>.
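The Monte Carlo scheme described above amounts to resampling the spectrum within its per-pixel errors and refitting; a minimal Python sketch is given below, where fit_kinematics stands in for the actual pPXF call and is only a placeholder:

import numpy as np

def monte_carlo_errors(flux, err, fit_kinematics, n_sim=100, seed=0):
    # flux, err      : spectrum and its per-pixel observational errors
    # fit_kinematics : placeholder for a function returning (V_rad, sigma) for one spectrum
    rng = np.random.default_rng(seed)
    results = np.array([fit_kinematics(rng.normal(flux, err)) for _ in range(n_sim)])
    # median values and 1-sigma dispersions over the n_sim realisations
    return np.median(results, axis=0), np.std(results, axis=0)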
Figure <ref> shows an example fit obtained for the spectrum in Table <ref> corresponding to the central region of the galaxy.
Figure <ref> shows the velocity profile of CGCG014-074 based on the values estimated by pPXF. As seen, there is strong evidence of rotation with an amplitude of ∼20 km/s.
Notably, the V/σ_* values show a progressive increase towards the outer regions, eventually reaching V/σ_* ∼ 1. This observation lends substantial support to the plausible existence of a disk component within CGCG014-074.
In addition, we calculate the anisotropy parameter V_max/σ within one effective radius, correcting it for inclination using the expression (V_max/σ)^* = (V_max/σ) [ε/(1-ε)]^-1/2, where ε is the ellipticity. We estimate this parameter to determine whether the galaxy is pressure- or rotationally-supported. To do this, we obtained the velocity dispersion by first shifting the spectra in Table <ref>, except for the outermost ones, to the same wavelength scale using the estimated radial velocity values. Once corrected, they were coadded into a single spectrum on which the velocity dispersion was measured using pPXF. Regarding the maximum rotational velocity (V_max), we estimate the mean value from the two most distant estimated velocities on either side of the galactic centre but within the effective radius. Finally, the obtained value is (V_max/σ)^* = 1.5 ± 0.2, indicating a rotationally supported system.
The central radial velocity value obtained for CGCG014-074 (V_rad^cent= 1005.5 ± 1.6 km/s) is in good agreement with the estimate made by <cit.> (V_hel=998±54 km/s) using data from the 2dF Galaxy Redshift Survey. Furthermore, the values of V_rad corresponding to the spectra of the central region, which includes the light from the compact object, indicate that it is part of the dwarf galaxy and constitutes its nucleus.
On the other hand, the velocity dispersion profile shows minimal, if any, discernible trend with radius, with a mean of ⟨σ_* ⟩=21.5 km/s and a standard deviation of 3.9 km/s. The σ_* values obtained here are deemed reliable due to pPXF's ability to accurately recover the velocity dispersion even below the spectral resolution <cit.>, as in this case. The absence of a prominent central dip or peak in the σ_* distribution suggests that the nucleus of CGCG014-074 exerts a negligible influence on its kinematics.
In addition, this estimated ⟨σ_* ⟩ value for the galaxy is consistent with those reported in the literature for early-type dwarf galaxies <cit.>.
§.§ Mass estimates
Based on the mean colour values obtained in Section <ref>, we estimated the stellar mass of CGCG014-074 using the expressions from <cit.> that relate the colour of the galaxy to the mass-to-light ratio (M/L). We used the absolute solar magnitude in the z filter of 4.50 <cit.> since it is less sensitive to metallicity variations. Considering an absolute magnitude of M_z=-16.68 mag for CGCG014-074, we obtained the value M_*/L_z=1.134 (with a dispersion of 0.102). Consequently, the estimated stellar mass of CGCG014-074 is 3.3×10^8 M_⊙. This stellar mass value, together with the previously presented effective radius, is in the range exhibited by early-type dwarf galaxies <cit.>.
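This estimate follows directly from the quoted quantities; a short Python check (the M_z, solar z-band magnitude and M/L_z values below are those given in the text):

M_z_gal, M_z_sun = -16.68, 4.50           # absolute z-band magnitudes of the galaxy and the Sun
ml_z = 1.134                               # M*/L_z from the colour-(M/L) relation
L_z = 10.0**(-0.4 * (M_z_gal - M_z_sun))   # z-band luminosity in solar units
M_star = ml_z * L_z                        # ~3.3e8 solar masses
print(f"M* ~ {M_star:.2e} Msun")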
In addition, from the determination of the kinematic parameters (see Section <ref>), we estimated the dynamical mass of CGCG014-074 following the guidelines of <cit.>. To do this, we define the total mass from the relation M_tot = M_press + M_rot, where M_press is the mass obtained from the velocity dispersion, and M_rot the mass inferred from the intrinsic rotation velocity of the galaxy.
Both terms are given by the expressions:
M_press = C σ_*^2 R/G ,
and
M_rot = R v_max^2/G ,
where C is the virial coefficient, R is a measure of the system's size, σ_* is the velocity dispersion of the system, v_max the maximum rotational velocity, and G is the gravitational constant.
To estimate the value of C, we used Equation 11 from <cit.>, which relates the value of the Sérsic index to this coefficient. In our case, considering the estimated mean value of n=1.7 (C=7.59) and the effective radius of 0.89 kpc, the total dynamical mass obtained for CGCG014-074 is M_tot=8.0 × 10^8 M_⊙.
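As a numerical cross-check, the dynamical mass can be reproduced with astropy units; the maximum rotation velocity below is taken as the ~20 km/s amplitude of the velocity profile, which is an assumption of this sketch:

from astropy import units as u
from astropy.constants import G

C = 7.59                     # virial coefficient for the mean Sersic index n = 1.7
R_eff = 0.89 * u.kpc         # effective radius
sigma = 21.5 * u.km / u.s    # mean velocity dispersion
v_max = 20.0 * u.km / u.s    # assumed maximum rotation velocity (profile amplitude)

M_press = (C * sigma**2 * R_eff / G).to(u.Msun)
M_rot = (R_eff * v_max**2 / G).to(u.Msun)
print(M_press + M_rot)       # ~8e8 Msun, consistent with the quoted total mass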
§.§ Stellar Populations
From the reduced 2D spectroscopic data (see Section <ref>), we extracted and flux-calibrated the spectrum of CGCG014-074 within an aperture of 192 arcsec (12.8 kpc). This aperture was used to include the greatest amount of light from the galaxy. The integrated spectrum shows an SNR between 30 per Å at 4800 Å to 50 per Å at 6200 Å. We measure several Lick/IDS indices <cit.> within our spectral range. We use the χ^2 minimization method of <cit.> together with the SSP models of <cit.> to determine the age, metallicity, and α-element enhancement ratio [α/Fe] of CGCG014-074. The SSP models were interpolated to obtain a smoother and finer grid of models in parameter space.
Minimizing the χ^2 distance between the indices Fe_4531, Hβ, Fe_5015, Mg_2, Mg_b, Fe_5270, Fe_5335 and Fe_5406 and the parameter grid <cit.>, we obtained the following light-weighted mean values for the integrated spectrum of the dwarf galaxy: 8.3^+1.4_-1.2 Gyr; [Z/H]=-0.59±0.11 dex and [α/Fe]=0.08±0.06 dex. The respective errors are derived from 100 Monte Carlo simulations of the data within the measured index errors.
Subsequently, we used pPXF on the extracted spectrum to study the star formation history (SFH) of CGCG014-074. Unlike the kinematic analysis, we used the MILES[https://miles.iac.es] model library <cit.> for this purpose, which allows us to select a given initial mass function (IMF) and a specific α-element abundance. In this case, we assume a Kroupa IMF and [α/Fe]=0.0 dex since these are the MILES models closest to the value previously estimated with the LICK indices. The MILES models, with a resolution of 2.51 Å, cover a wide range of ages (0.03-14 Gyr) and metallicities (-2.27 < [Z/H] < 0.4 dex). During the spectral fitting, the science spectrum was reduced to the resolution of the templates and a multiplicative polynomial of order 10 was considered. In addition, pPXF was allowed to fit emission lines, if present.
The obtained solutions were smoothed using the regularisation option to avoid spurious solutions and to reduce the degeneracy problem. To do this, it was sought that the solutions obtained had Δχ^2 ∼√(2N) <cit.>, where N is the number of pixels in the spectrum being fitted, and Δχ^2 is the difference between the χ^2 value of the current solution and that of the non-regularised case. This guarantees that the solution obtained is the smoothest one consistent with the observations.
We infer the uncertainties of the SSP grid weights through a bootstrap analysis, where the pPXF fit is repeated 100 times. This was done by resampling the pixels from the residual obtained from the unregularized best fit. This resampled spectrum is then used as input to pPXF taking into account a minimum regularization (R = 20). In this way, the variance of the weights is obtained solely from the variance of the spectrum, without forcing any prior regularisation.
When performing the fits, the SSP models are normalised in luminosity to a solar mass; thus, the solution provided by pPXF yields luminosity or mass weights for each fitted SSP, depending on the user's choice. From these weights, mass or luminosity-weighted averages of age and metallicity are derived using the following expressions:
⟨log(Age)⟩ = Σ w_ilog(Age_i)/Σ w_i
⟨[Z/H]⟩ = Σ w_i[Z/H]_i/Σ w_i
where w_i is the mass or luminosity fraction of a given model with age and metallicity.
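These averages reduce to simple weighted sums over the SSP grid; a minimal Python sketch, where the weight array is a placeholder for the regularised pPXF output:

import numpy as np

def weighted_population(weights, ages_gyr, z_h):
    # weights  : luminosity or mass fractions of the fitted SSPs (flattened grid)
    # ages_gyr : SSP ages in Gyr; z_h : SSP metallicities [Z/H] in dex
    w = weights / weights.sum()
    mean_log_age = np.sum(w * np.log10(ages_gyr))
    mean_zh = np.sum(w * z_h)
    return 10.0**mean_log_age, mean_zh     # <Age> in Gyr and <[Z/H]> in dex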
Using these expressions, we calculate the average values of age and metallicity weighted in luminosity and mass for the integrated spectrum of the galaxy, obtaining: 5.3±1.1 Gyr; -0.62±0.05 dex and 6.1±1.1 Gyr, -0.59±0.05 dex, respectively.
Figure <ref> shows the luminosity and mass-weighted star formation histories (top and middle panels) and the cumulative mass as a function of age (bottom panel) for the integrated spectrum of CGCG014-074. Initially, we note that both SFHs behave similarly, showing the expected differences in the light contribution of stellar populations according to their ages. Specifically, CGCG014-074 appears to have experienced an extended SFH from its early stages (∼13 Gyr) until about 2 Gyr ago, when it reached virtually all of its stellar mass. During this period, the metallicity shows a wide range of values from [Z/H]∼-1.50 dex to 0.0 dex. However, this range decreases towards younger ages, probably due to the enrichment of the interstellar medium and the new generation of more metal-rich stellar populations.
We ran pPXF again, but in this case on the individual spectra from Table <ref>, except for those spectra furthest from the galactic centre due to their low SNR. We have used the expressions <ref> and <ref> to obtain the mass- and light-weighted stellar population parameters for each of them. Columns 5-8 of the same table list the values obtained for each spectrum, and Figure <ref> shows the variation of these parameters as a function of the galactocentric radius.
This figure shows that the central region corresponding to the nucleus of CGCG014-074 has an old age (∼9.3 Gyr) and a low metallicity ([Z/H]∼-0.84 dex), both in its weighted values of luminosity and mass. Then, along the semi-major axis in the region corresponding to the disk of the galaxy, the age has approximately constant values (⟨Age_lum⟩ = 4.3 Gyr; ⟨Age_mass⟩ = 5.7 Gyr) in the range 1 < r_g < 6 arcsec (0.067 < r_g < 0.33 kpc), increasing slightly outwards (⟨Age_lum⟩ = 4.6 Gyr; ⟨Age_mass⟩ = 6.3 Gyr). The metallicity, outside the nucleus and throughout the analysed region, remains relatively constant (⟨[Z/H]_lum⟩ = -0.39 dex; ⟨[Z/H]_mass⟩ = -0.42 dex).
§ SUMMARY AND DISCUSSION
This paper presents the photometric and spectroscopic analysis of the early-type dwarf galaxy CGCG014-074. The observations were obtained with the GMOS South instrument of the Gemini Observatory. The photometric data were obtained using the broadband filters g', r', i' and z', while the spectroscopic observations were made in the long-slit mode of the same instrument.
From the photometric analysis, we determined the surface brightness profiles of the galaxy and found that a simple single-component model cannot represent the light distribution of the galaxy. In this case, the best representation of these profiles was obtained using three components: a Gaussian profile describing the nucleus of CGCG014-074, and two Sérsic profiles, one representing a light excess in the innermost part of the galaxy (0.2 ≲ r_eq≲ 0.8 kpc) and the other for the outermost (r_eq≳ 0.8 kpc) extended stellar component. This result is consistent with the original classification of CGCG014-074 as a lenticular dwarf galaxy since such objects show surface brightness profiles of at least two components <cit.>.
The innermost Sérsic profile in the four filters shows Sérsic index values of n∼0.8. Considering that the value n∼ 1 represents an exponential disk, the result obtained here would indicate the presence of a stellar disk in CGCG014-074. These values are even comparable to those obtained for dwarf galaxies of equivalent magnitude located in clusters <cit.>.
In this sense, examination of the colour maps of the galaxy in these regions shows a smooth surface, with no visible evidence of dust and no colour differences between the inner disk and other components of the galaxy. From this analysis, we determined the absolute magnitudes for each filter (see table <ref>) and the effective radii (mean effective radius ⟨ R_eff⟩=13.27±0.08 arcsec; 0.89 kpc), per filter, of CGCG014-074.
The isophotal analysis shows smooth changes in ellipticity and position angle (ΔPA = 13^∘ and Δε = 0.3) within 3 < r_eq < 40 arcsec (0.2 < r_eq < 2.8 kpc). In the same galactocentric region, the kinematic study shows signs of rotation, with a velocity dispersion that does not vary with radius and has an average value of ⟨σ_* ⟩=21.5 km/s. Furthermore, the anisotropy parameter of the galaxy results in (V_max/σ)^* = 1.5 ± 0.2, indicating a rotationally supported system.
These results confirm the presence of a disk in CGCG014-074.
On the other hand, in the innermost region of the surface brightness profile, we observe significant variations in the isophotal parameters ε and PA due to the presence of the nucleus, which has been confirmed kinematically.
The classification of early-type dwarf galaxies has been a subject of debate due to the lack of a clear and consistent definition. Traditionally, these galaxies have been divided into dE (dwarf ellipticals) and dS0, though the distinction between the two remains ambiguous. Some studies, such as <cit.>, have challenged the existence of dS0 as a separate class, proposing instead that many of these galaxies might be disk systems rather than spheroids with an embedded disk component. With our validation, we can morphologically classify CGCG014-074 as a nucleated early-type dwarf galaxy featuring an embedded disk. Following this nomenclature, this galaxy would be classified as "dEdi,N", indicating its likely membership in a population of genuine disk galaxies.
Furthermore, the variation of the isophotal parameter shows that the major axis of the isophotes tends to point towards the dominant galaxy of the group (NGC 4546) with increasing radii. Likewise, at r_eq∼ 25 arcsec (∼ 1.6 kpc) the B_4 parameter changes from positive (disky isophotes) to negative (boxy isophotes). The occurrence of boxy isophotes in galaxies
is usually associated with possible interaction or merger events <cit.>.
The presence of this type of isophotes in CGCG014-074 could be explained as a consequence of the beginning of an interaction with the neighbouring giant galaxy NGC 4546. Although the globular cluster system of NGC 4546 and the surrounding region of CGCG014-074 was studied in <cit.>, no clustering or anisotropy of objects was initially found around the latter that could provide further evidence for this. In this same aspect, it is necessary to consider another potential object that could have influenced CGCG014-074 and that is the presence of the nucleated dwarf galaxy destroyed by tidal interaction with NGC 4546 in the last 1-2 Gyr <cit.>.
We studied the stellar population parameters of CGCG014-074 using two different methods. First, the values of age, metallicity and α-element abundances were determined by analyzing the Lick/IDS indices on the total spectrum extracted from the dwarf galaxy. In this way, global parameters of CGCG014-074 were obtained, which are: 8.3^+1.4_-1.2 Gyr; [Z/H]=-0.59±0.11 dex and [α/Fe]=0.08±0.06 dex. This last value of [α/Fe] was then used to run the full spectral fitting technique using pPXF to obtain its luminosity- and mass-weighted star formation history. This analysis revealed that CGCG014-074 underwent a prolonged period of stellar formation from its beginning (∼13 Gyr) until about 2 Gyr ago when its star formation ceased and it reached 100% of its stellar mass. In this age range, the metallicity covers a range of values from [Z/H]∼ -1.5 to [Z/H]∼ 0.0 dex, and then its dispersion decreases towards younger ages. It is necessary to emphasize that in principle it is not possible to determine whether this SFH was continuous or if it was due to different bursts of star formation. The average values of age and metallicity weighted in luminosity and mass obtained by pPXF yielded 5.3±1.1 Gyr; [Z/H]=-0.62±0.05 dex and 6.1±1.1 Gyr, [Z/H]=-0.59±0.05 dex, respectively, obtaining a difference compared to the Lick values. This discrepancy between the values obtained by the two methods has already been pointed out in other works in the literature <cit.>.
In the same context, observing the stellar parameters at different galactocentric radii, it is seen that the central region corresponding to the nucleus of CGCG014-074 has an older age (∼ 9.3 Gyr) compared to the regions that make up the stellar disk (∼ 4.4 Gyr). Likewise, the metallicity in the central region ([Z/H]∼-0.84 dex) is lower than in the regions external to it ([Z/H]∼-0.40 dex). This variation in age and metallicity in the galaxy is consistent with the observed flat photometric colour profile, indicating the effect of the age-metallicity degeneracy.
According to <cit.>, these characteristics in CGCG014-074 suggest a formation scenario in which the nucleus primarily formed through the accretion of older, metal-poor globular clusters, while the younger, more metal-rich stellar disk indicates subsequent in situ star formation. This dual formation pathway is particularly relevant for galaxies with a mass around M_gal∼ 10^9 M_⊙, where both globular cluster accretion and in situ star formation contribute to the build-up of the nuclear star cluster and the surrounding galaxy structure.
Finally, we have determined the total stellar and dynamical mass of CGCG014-074 by studying the photometric colour profiles and kinematic analysis, respectively. The values obtained are M_⋆= 3.3 × 10^8 M_⊙ and M_tot = 8.0 × 10^8 M_⊙, and they are in good agreement with the values obtained for other early-type dwarf galaxies, <cit.>.
As for the possible origin of CGCG014-074, several works using simulations <cit.> have suggested that large mergers of metal-rich disk galaxies could produce slow-rotating early-type galaxies (V/σ_* ∼ 1), with the possibility of a kinematically decoupled core <cit.>. In these cases, the galaxy's gas would be driven towards the centre of the forming galaxy, creating an internal disk with a scale of 1 R_eff, while the stars at larger galactocentric radii would undergo dry mergers, giving rise to boxy isophotes in these regions. Although low-mass galaxies are expected to undergo fewer mergers, this scenario is supported by several observations, and many simulations suggest that this mechanism of formation could be possible in a low-density environment <cit.>. If this were the case for CGCG014-074, we would expect the inner disk of the galaxy to be younger than the outer regions.
Another possibility is that isolated low-mass galaxies, which have stopped forming stars and have not experienced dramatic events such as major mergers or tidal shocks, are just the building blocks for the formation of hierarchical structures. These objects could form disks and steadily acquire angular momentum through the accumulation of gas from the cosmic web <cit.>, and through minor mergers.
In this context, CGCG014-074 displays characteristics aligning with both scenarios discussed earlier. On the one hand, the galaxy shows a rotating inner disk, an extended star formation history <cit.>, with star formation ceasing ∼2 Gyr ago, and boxy isophotes towards the outer regions. Initially, no kinematically decoupled core is detected, nor are there any signs or traces of a major merger. On the other hand, the possible accretion of primordial gas from the environment in the recent past does not seem to be the case either, as the galaxy's metallicity is not notably low. As shown in this study, the disk of the galaxy has a higher metallicity than its nucleus, possibly indicating a secular evolution in CGCG014-074. Together, these features position CGCG014-074 as a probable building block galaxy that has evolved passively throughout its history.
§ ACKNOWLEDGEMENTS
We thank the referee, Professor Dr. Reynier Peletier, for his constructive comments and suggestions that helped improve this paper.
This work was funded with grants from Consejo Nacional de Investigaciones Cientificas y Tecnicas de la Republica Argentina, and Universidad Nacional de La Plata (Argentina).
Based on observations obtained at the international Gemini Observatory, a program of NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The Gemini program ID are GS-2014A-Q-30 and GS-2020A-Q-130.
The DESI Legacy Imaging Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS), and the Mayall z-band Legacy Survey (MzLS). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF’s NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). Legacy Surveys also uses data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Legacy Surveys was supported by: the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility; the U.S. National Science Foundation, Division of Astronomical Sciences; the National Astronomical Observatories of China, the Chinese Academy of Sciences and the Chinese National Natural Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
This research has made use of the NASA/IPAC Extragalactic Database (NED),
which is operated by the Jet Propulsion Laboratory, California Institute of Technology,
under contract with the National Aeronautics and Space Administration.
§ DATA AVAILABILITY
The data underlying this article are available in the Gemini Observatory Archive at https://archive.gemini.edu/searchform, and can be accessed with Program ID:GS-2014A-Q-30/PI: Escudero, C. and Program ID:GS-2020A-Q-130/PI: Escudero, C.
mnras
|
http://arxiv.org/abs/2409.02170v1 | 20240903180001 | An efficient observational strategy for the detection of the Oort cloud | [
"Eran O. Ofek",
"Sarah A. Spitzer",
"Guy Nir"
] | astro-ph.IM | [
"astro-ph.IM"
] |
1Department of particle physics and astrophysics, Weizmann Institute of Science, 76100 Rehovot, Israel.
2Lawrence Berkeley National Laboratory, 1 Cyclotron Road, MS 50B-4206, Berkeley, CA 94720, USA
⋆Corresponding Author: eran.ofek@weizmann.ac.il
§ ABSTRACT
The Oort cloud is presumably a pristine relic of the Solar System formation.
Detection of the Oort cloud may provide information regarding the stellar environment in
which the Sun was born and on the planetesimal population during the outer planets' formation phase.
The best suggested approach for detecting Oort cloud objects in situ,
is by searching for sub-second occultations of distant stars by these objects.
Following Brown & Webster, we discuss the possibility of detecting Oort cloud objects by observing near the quadrature direction.
Due to the Earth's projected velocity, the occultations are longer near the quadrature direction and are therefore easier to detect, but have lower rate.
We show that, for ≲1 m size telescopes, the increased exposure time will result in about one to three
orders of magnitude increase in the number of detectable stars
that have an angular size smaller than the Fresnel scale
and are therefore suitable for an occultation search.
We discuss the ability of this method to detect Oort cloud objects using existing survey telescopes, and we estimate the detection rate as a function of the power-law index of the size distribution of the Oort cloud objects and their distance from the Sun. We show that occultations detected using ≈1-s integration by ≲1 m telescopes at the optimal region near the quadrature points will be marginally dominated by Oort cloud objects rather than Kuiper belt objects.
§ INTRODUCTION
The Oort cloud () formation is presumably the result of the scattering of planetesimals from the primordial Solar System,
where the most efficient scatterers were likely Uranus- and Neptune-mass-like planets (e.g., ).
So far, there are no observations of objects, in situ, in the Oort Cloud,
and their numbers, distances, and size distributions are uncertain.
The Oort cloud content and distance from the Sun may hold clues regarding the Solar System formation and evolution,
such as the presence of ice planets and their past locations;
the stellar density at which the Sun was born;
and the evolution of planetesimals before their ejection from the planets' orbits
(e.g., , ).
<cit.> suggested
that it is possible to detect small Trans-Neptunian objects by occultations.
However, due to their tiny cross-section and short duration,
these occultations are hard to find.
So far, only three serendipitous occultations by Kuiper-Belt Objects have been reported (, , ),
as well as a few upper limits (e.g., ).
The duration of these occultations is controlled by the Earth's velocity,
the Fresnel angular size, and the angular sizes of the occulter and the occulted star.
Since the size distributions of Kuiper belt objects,
and presumably Oort cloud objects, follow a steep power law,
their population is dominated by small objects.
In this case, the overwhelming majority of the occultations
is expected from occulters whose sizes are near the Fresnel size (e.g., ).
In such cases,
the semi-duration of Kuiper-Belt occultations
is of the order of 0.05 s (i.e., Fresnel size ≈ 1 km, divided by the Earth's velocity).
For Oort cloud objects at 3000 (1000) AU from the Sun,
the Fresnel size is ten (five) times larger,
and hence the semi-duration of occultation is of the order of ∼0.4 (0.2) s.
Another critical factor is that for stars whose angular sizes
are much larger than the angular size of the occulter and the Fresnel size,
the occultation will be diluted and harder to detect.
Since the angular size of stars depends on the flux and color temperature,
fainter and bluer stars are preferred.
For example, a Sun-like star at an apparent magnitude of about 14,
will have an angular radius that is similar to the Fresnel size for objects at 3000 AU.
Therefore, the detection of occultations by Oort cloud objects
not only requires sub-second time resolution and a wide field of view,
but also large telescopes[Or by combining multiple small telescopes which may be more cost-effective (e.g., ).].
A way around the short exposure requirement
is to use instruments with long exposure times, but precise photometry.
This technique was suggested for Kepler- and TESS-like data by
<cit.>
and
<cit.>.
An alternative option is to use fast readout devices on large telescopes ().
Here we show that it is possible to increase the efficiency of small telescopes
(e.g., W-FAST [], Tomo-e Gozen [], TAOS-I [] and TAOS-II [] like systems)
for Oort cloud object detection by about an order of magnitude, using a simple observing strategy.
The idea is to observe the direction near the quadrature
(90 deg from the Sun on the ecliptic),
where the occultation duration is longer,
and a few seconds of integration time is sufficient.
Although the rate of occultation per suitable star will decrease by an order of magnitude,
the number of suitable stars will increase by about one to three orders of magnitudes.
<cit.> suggested this technique for the detection of KBOs.
They argue that this method may be useful
because it can be applied to existing sky surveys.
The analysis presented here suggests that for ≲1 m telescopes,
the efficiency of this method for Oort cloud object detection, is considerably higher compared to opposition observations.
Furthermore, the expected detection rate of Oort cloud objects near quadrature will be marginally larger than the predicted rate of KBO occultations near quadrature.
In <ref> we provide some background related to Solar System occultations,
and in <ref> we discuss exposure times and readout noise considerations.
In <ref> we discuss the quadrature method, and we conclude in <ref>.
§ BASICS OF STELLAR OCCULTATIONS BY SOLAR SYSTEM OBJECTS
The size distribution of known KBOs follows a power-law
dN(R)/dR = N_>1 km [(q-1)/(1 km)] (R/1 km)^-q.
Here N_>1 km is the total number of objects with a radius larger than 1 km, N(R) is the number distribution of objects of radius R, and q is the power-law index.
For KBOs smaller than about 40 km, q≈3.7 ().
We assume that a steep power-law size distribution is also relevant for the Oort Cloud. A steep power-law size distribution implies that small bodies dominate the optical depth for occultations.
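For later use, the cumulative counts implied by this power law follow by integrating dN/dR from R to infinity, giving N(>R) = N_>1 km (R/1 km)^(1-q); a small Python sketch, where the normalisation N_>1 km and the slope q are assumptions rather than measured values:

def n_larger_than(r_km, n_gt_1km=1e11, q=3.7):
    # cumulative number of objects with radius larger than r_km
    return n_gt_1km * r_km**(1.0 - q)

# e.g. counts above the Fresnel radius at 3000 AU (~10.6 km)
print(n_larger_than(10.6))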
Solar System occultations have three relevant angular scales.
These are:
(i) the object angular radius θ_ obj=r_ obj/d,
where r_ obj is the object radius, and d is its distance;
(ii) the occulted star's angular radius
θ_* ≈ 465 (T/5700 K)^-2 10^-0.2(m_V - 4.83) μarcsec.
Here we assume the star has a black-body spectrum with
effective temperature T, and V-band apparent magnitude m_ V (extinction is discussed in Appendix <ref>);
and
(iii) the Fresnel radius
(r_ F)
and angular Fresnel radius (θ_ F)
which are:
r_ F≈ 10.6 (λ/5000 Å)^1/2(d/3000 AU)^1/2 km,
θ_ F≈ 4.9 (λ/5000 Å)^1/2(d/3000 AU)^-1/2 μ arcsec,
where λ is the wavelength at which we observe.
The duration of these occultations is dominated
by the Earth's velocity[This is not correct near the points of zero angular speed, because of the object's orbital eccentricity.],
the Fresnel size, and the angular size of the occulted star and the occulter.
Assuming a negligible angular size for the star (see below),
an occulter size smaller than the Fresnel size,
and observations taken near opposition,
this translates to an occultation semi-duration of about 0.05
and 0.4 s for the Kuiper belt (40 AU) and Oort cloud (3000 AU), respectively.
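These numbers can be verified with a few lines of Python, taking the Fresnel radius as sqrt(λ d / 2), which reproduces the 10.6 km quoted above, and assuming an Earth velocity of ~29.8 km/s:

import numpy as np

AU_KM = 1.495978707e8       # 1 AU in km
V_EARTH = 29.8              # km/s; dominates the shadow velocity near opposition

def fresnel_km(d_au, lam_ang=5000.0):
    lam_km = lam_ang * 1e-13                       # Angstrom -> km
    return np.sqrt(lam_km * d_au * AU_KM / 2.0)    # Fresnel radius in km

for d_au in (40.0, 1000.0, 3000.0):
    r_f = fresnel_km(d_au)
    theta_f_uas = np.degrees(r_f / (d_au * AU_KM)) * 3.6e9   # angular Fresnel radius, micro-arcsec
    print(d_au, r_f, theta_f_uas, r_f / V_EARTH)             # semi-duration ~ r_F / V_Earth [s]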
Such short durations require fast readout cameras.
Yet another important challenge is related to the angular size of stars.
To avoid photometric dilution of the occultation,
the angular size of the star should be smaller than the Fresnel size.
The Fresnel sizes are 41 μarcsec and 4.6 μarcsec
for 40 AU and 3000 AU, respectively.
A Solar-like star (T≈ 5700 K)
will have these angular sizes at an apparent magnitude (m_ V) of
about 10.1 and 14.9, respectively (see Figure <ref>).
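These magnitudes follow from inverting the θ_* relation above for θ_* = θ_F; a short Python check:

import numpy as np

def mag_at_angular_radius(theta_uas, t_eff=5700.0):
    # V magnitude at which a star of temperature t_eff has angular radius theta_uas,
    # inverting theta_* ~ 465 (T/5700 K)^-2 10^(-0.2 (m_V - 4.83)) micro-arcsec
    return 4.83 - 5.0 * np.log10(theta_uas / (465.0 * (t_eff / 5700.0)**-2))

print(mag_at_angular_radius(41.0))   # ~10.1, Kuiper belt (40 AU)
print(mag_at_angular_radius(4.6))    # ~14.9, Oort cloud (3000 AU)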
To get a decent S/N for a 15 mag star with sub-second integrations,
a large collecting area (diameter 1 m) is required. This means that using the regular (short-exposures) technique, the detection of the Oort cloud requires large telescopes, or ≳0.5 m telescopes with excellent image quality and detectors with low readout noise.
The optical depth of TNOs is very small – at any given moment, the probability that a star is occulted by a TNO is
about 10^-9.
Therefore, a good strategy is to target many stars simultaneously.
The alternative is to observe a few stars for a long time (e.g., ).
The short duration of the occultation and the need to observe
a large number of stars means a huge data rate
that in turn requires special data handling and analysis techniques (see e.g., ).
Furthermore, short integrations are affected by intensity scintillation,
which needs to be accounted for in the statistics (e.g., ).
The next section will discuss another important consideration: the exposure time and the readout noise-dominated regime.
§ THE EFFECT OF EXPOSURE TIME
The common strategy for TNOs occultation observations
is to observe at a high rate,
typically with exposure times shorter than 0.1 s
(e.g., ).
When using CCD or CMOS detectors
with such short exposures, and small to medium-sized telescopes,
the noise is usually dominated by the camera read-noise.
We define the transition between the background-noise dominated regime
and the read-noise-dominated regime
when the variance of the background is equal to the read-noise squared.
This transition time [s] takes place at an exposure time roughly given by
t_trans ≈ R^2 / [p^2 A 10^-0.4(b_V - 14.76)] s,
where R is the read-noise (in electrons), p is the pixel scale (in arcsec pix^-1),
A is the effective collecting area of the telescope (in cm^2),
and b_ V is the sky V-band magnitude per square-arcsec.
This time scale does not depend on the seeing.
For R=2 e^-, p=1 arcsec pix^-1, and b_V=21 mag arcsec^-2,
this is about 4 s, 0.2 s, and 0.006 s
for 20, 100, and 500 cm telescopes (diameter), respectively.
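A short Python sketch reproducing these transition times; the effective collecting area is approximated here by the full aperture area, which is an assumption:

import numpy as np

def t_transition(diameter_cm, read_noise_e=2.0, pix_scale=1.0, sky_mag=21.0):
    # exposure time at which the background variance equals the read noise squared
    area = np.pi * (diameter_cm / 2.0)**2   # approximate effective collecting area, cm^2
    return read_noise_e**2 / (pix_scale**2 * area * 10.0**(-0.4 * (sky_mag - 14.76)))

for d in (20.0, 100.0, 500.0):
    print(d, t_transition(d))   # ~4 s, ~0.2 s and ~0.006 s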
In the read-noise dominated regime, the Signal to Noise ratio is S/N∝ t,
and hence the limiting flux (at some detection threshold) is F_ lim∝ t^-1,
while in the background-noise-dominated regime
the limiting flux is F_ lim∝ t^-1/2.
This means that in the read-noise-dominated regime, increasing the exposure time increases the number of faint stars faster compared to the
background-noise-dominated regime.
This consideration may play an important role when designing a TNO occultation survey.
When considering short vs. long integration times, other effects may be important.
For example, when the exposure time is longer than the occultation duration,
the occultation depth will be diluted (e.g., ).
Typically, ground-based surveys' photometric noise is not limited by Poisson noise,
but by other effects like flat-field errors, intensity scintillation (e.g., ), and atmospheric absorption.
Therefore, ground-based observations of diluted occultations are not practical in most cases.
§ THE QUADRATURE OBSERVATIONS STRATEGY
<cit.> suggested utilizing the fact that
the duration of a TNO occultation depends on its elongation
(angular distance from the Sun, along the ecliptic),
where the instantaneous duration approaches infinity near the quadrature points.
For a circular orbit on the ecliptic,
the on-sky speed (V_⊥),
as a function of elongation (ϵ)
is
V_⊥ = V_⊕ sin(ϵ - 90°) + V_obj cos(sin^-1[(a_⊕/a) sin ϵ]),
where a is the semi-major axis of the object.
This gives points of zero angular velocity (i.e., V_⊥=0), at angular distances
of 56.3, 80.9, 88.2, and 89 deg,
for 3, 40, 1000, and 3000 AU, respectively.
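The zero-speed elongations quoted above follow from setting V_⊥=0; the Python sketch below (our illustration, assuming V_⊕=29.8 km s^-1 and V_obj=V_⊕/√a for a circular orbit) recovers them with a standard root finder.

```python
import numpy as np
from scipy.optimize import brentq

V_EARTH = 29.8  # km/s, Earth's orbital speed

def v_perp(elong_deg, a_au):
    """Signed on-sky speed [km/s] of an object on a circular ecliptic orbit."""
    eps = np.radians(elong_deg)
    v_obj = V_EARTH / np.sqrt(a_au)
    return V_EARTH * np.sin(eps - np.pi / 2.0) + v_obj * np.cos(np.arcsin(np.sin(eps) / a_au))

for a in (3.0, 40.0, 1000.0, 3000.0):
    eps_zero = brentq(v_perp, 1.0, 89.99, args=(a,))   # zero crossing between 1 and ~90 deg
    print(f"a = {a:6.0f} AU  ->  zero-speed elongation ~ {eps_zero:.1f} deg")
# Expected: ~56.3, ~80.9, ~88.2, ~89.0 deg
```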
Therefore, observing near the point of zero angular speed
will result in occultations with a long duration (e.g., larger than a few seconds).
Defining the angular distance from the point of the circular orbit's zero angular speed by ϕ,
the approximate duration of the occultation increases by a factor of 1/sinϕ,
compared to occultations observed near opposition.
However, this also means that the occultation rate is proportional to
about sinϕ, compared to the occultation rate near opposition.
This seems to reduce the efficiency of this method considerably.
Although the efficiency of this approach at first glance appears to be an order of magnitude lower,
<cit.>
suggested that longer integration times
result in a larger number of monitored stars and are more
practical for KBO detection from a technological point of view –
i.e., longer integration time images are easier to obtain and handle using existing equipment.
Here we claim that for Oort cloud object searches using meter-class or smaller telescopes,
this method may be more efficient than opposition observations.
This is because, assuming we are in the read-noise-dominated regime,
increasing the integration time increases the limiting flux of the system linearly.
Hence, the number of detectable stars increases faster than in the background-dominated regime.
Furthermore, when going to fainter magnitudes the average color of stars becomes redder[This is correct in the magnitude range we are interested in.].
Therefore,
the fraction of usable stars
(i.e., smaller than the Fresnel scale),
increases faster than the increase in the number of stars.
In Figure <ref>, we show the angular radii of stars (μarcsec) as a function of their
effective temperature and apparent V-band magnitude (ignoring extinction; see Appendix <ref>).
To estimate the surface density of stars
with angular radii smaller than the angular Fresnel radius of Oort cloud objects,
we used the tools in <cit.> and <cit.>
to fit the black-body temperature and calculate the approximate angular radii
of Pan-STARRS1 sources ().
Figure <ref> shows
the surface density of stars brighter than g AB-magnitude 18
and fitted angular radius smaller than 5 μarcsec.
Some sky regions contain about 10^4 stars deg^-2 that are brighter than 18 mag and smaller than about 5 μarcsec.
Figure <ref> presents
the surface density of stars with an angular radius smaller than 5, 8, and 42 μarcsec, as a function of magnitude.
Next, we would like to estimate the occultation rate of some survey
near the points of zero angular speed.
To simplify the calculations,
here we assume that we are interested
only in stars whose angular size is smaller
than 5 μarcsec, and that we cannot detect
an occulter whose radius is smaller than the Fresnel radius.
These limitations allow us to neglect diluted occultations,
which will be difficult to verify.
In this case, the probability that a given star will be occulted
by an Oort cloud object, per unit time, is roughly given by
R_occ ≈ (1/4π) 2θ_F μ N_>r_F f_occ ≅ 2.2×10^-11 (θ_F/4.9 μarcsec) (μ/12 μarcsec s^-1) (N_>r_F/10^11) f_occ s^-1.
Here f_ occ is the fraction of occultations
that can be detected using a specific integration time
(i.e., with duration above some minimum time scale),
the angular Fresnel radius θ_ F is measured in radians, the number of objects with a radius greater than the Fresnel radius is
N_>r_F = N_>1 km (r_F/[1 km])^(1-q),
and μ is the angular speed, in radians per second,
of the objects of interest on the celestial sphere,
approximately given by:
μ ≈ (V_⊕/a) sinϕ ≅ 13.6 (a/3000 AU)^-1 sinϕ μarcsec s^-1.
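For a rough sense of the magnitudes involved, the two expressions above can be evaluated together as in the sketch below; the inputs (θ_F=4.9 μarcsec, N_>r_F=10^11, f_occ=1, ϕ=30 deg at a=3000 AU) are illustrative assumptions, not fitted values.

```python
import numpy as np

MUAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1e6)   # microarcseconds -> radians

def mu_onsky(a_au, phi_deg):
    """Approximate on-sky angular speed [microarcsec/s] at angle phi from the zero-speed point."""
    return 13.6 * (3000.0 / a_au) * np.sin(np.radians(phi_deg))

def occ_rate_per_star(theta_f_muas, mu_muas_s, n_above_fresnel, f_occ=1.0):
    """Occultation probability per star per second, R_occ from the equation above."""
    theta_f, mu = theta_f_muas * MUAS_TO_RAD, mu_muas_s * MUAS_TO_RAD
    return 2.0 * theta_f * mu * n_above_fresnel * f_occ / (4.0 * np.pi)

rate = occ_rate_per_star(theta_f_muas=4.9, mu_muas_s=mu_onsky(3000.0, 30.0),
                         n_above_fresnel=1e11, f_occ=1.0)
print(f"R_occ ~ {rate:.1e} s^-1 per star  (~{rate * 86400:.1e} per star per day)")
```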
However, this expression for μ is only an approximation.
Specifically, for a more accurate evaluation of μ,
we need to take into account the object's orbital elements.
To estimate the effect of the non-zero eccentricity and inclinations,
we present the following simulations.
We generate two billion random orbital elements,
with a constant semi-major axis a,
where the longitude of perihelion (ω),
the longitude of ascending node (Ω),
and the mean anomaly, at some fiducial epoch, is distributed uniformly
between 0 and 360 deg.
The inclination (I) is distributed like ∝cosI,
and the eccentricity is distributed uniformly between 0 and 0.6.
Next, we calculate the ephemerides of these simulated objects,
at some epoch, for a geocentric observer.
The result of this simulation is a list of Right Ascension and Declination, and on-sky angular speed
for each object.
We then calculate the on-sky angular speed (μ) distribution as a function of sky position.
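A heavily stripped-down version of this Monte Carlo is sketched below; it draws far fewer orbits than the two billion used here, places the observer on a circular 1 AU orbit at a fixed epoch, and reports only the sky-averaged angular speed instead of binning by sky position, so it should be read as an illustration of the procedure rather than a reproduction of it.

```python
import numpy as np

AU_KM = 1.495978707e8
MU_SUN = 1.32712440018e11          # km^3 s^-2, heliocentric gravitational parameter

def kepler_E(M, e, n_iter=30):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration (vectorized)."""
    E = M.copy()
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def heliocentric_state(a_km, e, inc, Om, w, M):
    """Position [km] and velocity [km/s] in heliocentric ecliptic coordinates."""
    E = kepler_E(M, e)
    x_p = a_km * (np.cos(E) - e)                         # perifocal coordinates
    y_p = a_km * np.sqrt(1.0 - e**2) * np.sin(E)
    n = np.sqrt(MU_SUN / a_km**3)                        # mean motion [rad/s]
    vx_p = -a_km * n * np.sin(E) / (1.0 - e * np.cos(E))
    vy_p = a_km * n * np.sqrt(1.0 - e**2) * np.cos(E) / (1.0 - e * np.cos(E))
    cO, sO, ci, si, cw, sw = np.cos(Om), np.sin(Om), np.cos(inc), np.sin(inc), np.cos(w), np.sin(w)
    R11, R12 = cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci       # perifocal -> ecliptic rotation
    R21, R22 = sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci
    R31, R32 = sw*si, cw*si
    pos = np.stack([R11*x_p + R12*y_p, R21*x_p + R22*y_p, R31*x_p + R32*y_p])
    vel = np.stack([R11*vx_p + R12*vy_p, R21*vx_p + R22*vy_p, R31*vx_p + R32*vy_p])
    return pos, vel

rng = np.random.default_rng(1)
n_obj, a_au = 200_000, 3000.0
e = rng.uniform(0.0, 0.6, n_obj)
inc = np.arcsin(rng.uniform(0.0, 1.0, n_obj))            # pdf(I) ~ cos(I) on [0, 90] deg
Om, w, M = (rng.uniform(0.0, 2.0 * np.pi, n_obj) for _ in range(3))
pos, vel = heliocentric_state(a_au * AU_KM, e, inc, Om, w, M)

# Geocentric observer: Earth on a circular 1-AU orbit at the evaluation epoch
r_earth = np.array([AU_KM, 0.0, 0.0]); v_earth = np.array([0.0, 29.78, 0.0])
d = pos - r_earth[:, None]; v = vel - v_earth[:, None]
dist = np.linalg.norm(d, axis=0); u = d / dist
v_sky = v - u * np.sum(u * v, axis=0)                    # component perpendicular to the line of sight
mu_rad_s = np.linalg.norm(v_sky, axis=0) / dist
print(f"mean on-sky speed ~ {np.degrees(mu_rad_s).mean() * 3.6e9:.1f} microarcsec/s")
```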
Figure <ref> shows the mean on-sky angular speed as a function of sky position
for objects with a semi-major axis of a=3000 AU.
The solid line represents the ecliptic, and the black circle is the Sun position at the time of evaluation.
The two points of lowest mean angular speeds, μ, can be seen on the ecliptic at an angular distance of 89 deg from the Sun.
Using Equation <ref> with an Earth velocity of 29.8 km s^-1, an elongation between 0 and 360 deg, an Earth semi-major axis of 1 AU, an object semi-major axis of 3000 AU, and an object velocity of V_obj = V_⊕/√(a), yields a maximum angular speed of around 14 μarcsec s^-1 for an object with zero eccentricity. The maximum angular speed, given an eccentricity of 0.6, goes up to around 35 μarcsec s^-1.
Next, we calculate the fraction of objects, f_ occ
that have an occultation half-duration (θ_ F/μ) that is longer than some exposure time,
as a function of sky position.
Based on these simulations, figures <ref>–<ref> show the average fraction f_ occ of occultations with semi-duration above 1 s
and 4 s, respectively, as a function of angular distance
from the point of zero angular speed (ϕ), for several semi-major axes (a).
We next use Equation <ref> to calculate the
occultation rate as a function of a, q, ϕ, and the exposure time (Δt).
Since this parameter space is wide, we provide here several examples demonstrating the general trends.
In these calculations, we assume that to detect an event, we require that its Fresnel-radius crossing time will be longer than the exposure time (i.e., the events can be detected using two photometric data points).
In Figure <ref>, we present the rate of occultations whose semi-duration is longer than Δt seconds, as a function of the angular distance from the point of minimum angular speed (ϕ; note that this point depends on the semi-major axis).
This is shown for semi-major axes of approximately 3000 AU (blue lines) and 1000 AU (orange lines).
For each semi-major axis, we show three lines corresponding to three exposure times of Δt=1 s (solid heavy), 2 s (solid thin), and 4 s (dashed).
Optimization of a sky survey depends on several parameters, including the sky survey's available field of view, minimum exposure time, and limiting magnitude as a function of exposure time.
In addition, using the quadrature method, one can run out of sky quickly. For example, for KBO detection (a≈40 AU), the maximum angular distance from the point of minimum angular speed (ϕ) in which one can detect occultations is only ∼1 (3) deg for exposure times of 4 (1) s (see Figure <ref>–<ref>).
To demonstrate the order of magnitude capabilities of the quadrature method,
we use our simulations to estimate its detection capabilities for Tomo-e Gozen- ()
and the Large Array Survey Telescope (LAST; ) like surveys.
Here we assume the surveys have a rectangular field centered on the point of minimum angular speed, and using the simulations, we calculate the event rate over the entire field of view.
In this case, the main parameter of the survey is the exposure time.
For Tomo-e Gozen, we assume a square field of view of 4.5×4.5 deg^2 and exposure time of 2 s. In this case, the occultation probability per day per suitable star in the field of view, as a function of a and q is shown in Figure <ref>.
For LAST, we assume a field of view of 17.6 by 19.8 deg (48 telescopes).
Figure <ref> shows the occultation probability for LAST with 1 s exposure time, while Figure <ref> is for 4 s exposure time.
Next, to convert these plots to detection rates we need to know the number of observed stars within the field of view as a function of the exposure time.
For Tomo-e Gozen, we assume a limiting magnitude of about 18.5 in a 2 s exposure, and we assume the number of stars is about 10^4 deg^-2 (90th upper percentile in Figure <ref>).
While for LAST, we assume a limiting magnitude of 17 in 1 s, 17.7 in 2 s, and 18.5 in 4 s exposures.
The corresponding numbers of suitable stars are 3000, 8000, and 10,000 deg^-2 (90th upper percentile in Figure <ref>), respectively.
From these numbers, as well as our previous assumptions, we can estimate the detection rate of Oort cloud objects using
Tomo-e Gozen and LAST.
We assume a=10^3 AU and q=3 for these calculations.
For Tomo-e Gozen, the expected detection rate is 0.001 day^-1,
while for LAST, the rate is about 0.4, 0.3 and 0.06 day^-1,
for exposure times of 1, 2, and 4 s, respectively.
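The bookkeeping that converts a per-star occultation probability into a survey detection rate is made explicit below; the per-star probabilities are not computed from first principles but back-solved from the rates quoted above, so they are placeholders for illustration only.

```python
def detection_rate(prob_per_star_day, fov_deg2, stars_per_deg2):
    """Expected number of detected occultations per day."""
    return prob_per_star_day * fov_deg2 * stars_per_deg2

surveys = {
    #                  per-star prob/day, field of view [deg^2], suitable stars [deg^-2]
    "Tomo-e Gozen, 2 s": (5e-9,  4.5 * 4.5,   1.0e4),
    "LAST, 1 s":         (4e-7,  17.6 * 19.8, 3.0e3),
    "LAST, 4 s":         (2e-8,  17.6 * 19.8, 1.0e4),
}
for name, (p, fov, n_star) in surveys.items():
    print(f"{name:18s}: ~{detection_rate(p, fov, n_star):.2g} events/day")
```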
From Figure <ref>, and Equation <ref>, we see that the rate of occultations roughly scales like
∝ N_>1 km, ∝ a^-2, and ∝ q^-3.3.
These rates are estimated assuming S/N=5 for detection and the sky background at the LAST site (≈20.8 mag arcsec^-2). However, for the detection of an occultation, a source with a somewhat larger S/N may be required.
The number of suitable stars as a function of magnitude (Fig. <ref>), around magnitude 17.5, scales like ∝ 3^Δm, where Δm is the change in the assumed limiting magnitude.
For example, if S/N=10 is required, the rate will decrease by a factor of about 2. However, if a darker site is used (e.g., 22 mag arcsec^-2), then the number of suitable sources may increase by a factor of about 2.
The parameters for these plots are relevant for an Oort cloud objects population.
To estimate the occultation rate for KBOs,
the rates in these plots should be multiplied by about 10^-2.5.
The reasons for this reduction are:
(i) we expect the normalization of N_>1 km for KBOs () to be about a factor of 10^4 lower than for Oort cloud objects;
(ii) The surface density of stars smaller than 42 μarcsec (i.e., the Fresnel radius for the KBOs)
is about half an order of magnitude higher (Fig. <ref>);
and (iii) since KBOs are concentrated near the ecliptic,
their surface density is about an order of magnitude higher
near the ecliptic compared with their all-sky mean surface density.
Finally, the larger Fresnel size of KBOs is already accounted for in Equation <ref>.
Multiplying the rate, from Figure <ref>–<ref>, for KBOs (e.g., a=40 AU; q=3)
by 10^-2.5, we see that the expected detection rate for Oort cloud objects (with a=10^3 AU; q=3) becomes
marginally higher than the detection rate for KBOs.
Therefore, unlike opposition observations which are expected to yield a higher occultation rate for KBOs, in comparison to Oort cloud objects,
the quadrature method may be slightly more efficient for detecting the Oort cloud.
§ CONCLUSIONS
We analyze the likelihood of detecting Oort cloud objects using observations with few-second integration times at quadrature.
This method was first suggested by <cit.>
as a way to overcome the technical problem of short integration times
required for KBO detection by occultations (e.g., ).
Here we show that the efficiency of this method for Oort cloud object detection
using small (sub-meter size) telescopes is high.
The reason for this is that while the longer occultation duration
near the quadrature points means lower occultation rates,
the longer integration times result in a large increase in the number
of stars that are smaller than the Fresnel size for Oort cloud objects.
This method potentially has another advantage – the longer integration times
also mean higher resilience to intensity scintillation (e.g., ; ).
Nevertheless, using this method requires a new generation of survey telescopes
with a large field of view that are capable of observing at
a few-second integration times with negligible dead time.
Examples of such surveys are the operational Tomo-e Gozen (),
the under-commissioning Large Array Survey Telescope (LAST;),
and the planned Argus array ().
A disadvantage of this method is that the sky area that can be used is ∼1% of the celestial sphere.
A possible way to overcome this problem is to use multiple observatories
that scan independent realizations of the sky.
That is, by changing the observatory's location, the Oort cloud objects occult stars at different sky locations.
The typical distance between such observatories
should be larger than the Fresnel size for the Oort cloud (>10 km).
We thank an anonymous referee for useful comments.
E.O.O. is grateful for the support of
grants from the
Willner Family Leadership Institute,
André Deloro Institute,
Paul and Tina Gardner,
The Norman E Alexander Family M Foundation ULTRASAT Data Center Fund,
Israel Science Foundation,
Israeli Ministry of Science,
Minerva,
NSF-BSF,
Israel Council for Higher Education (VATAT),
Sagol Weizmann-MIT,
Yeda-Sela,
and the Rosa and Emilio Segre Research Award.
This research was supported by the Institute for Environmental Sustainability (IES) and The André Deloro Institute for Space and Optics Research at the Weizmann Institute of Science.
S.A.S. is grateful for support by the Zuckerman Scholars Program.
§ EFFECT OF EXTINCTION ON ANGULAR SIZE
Extinction and reddening complicate the estimation of stars' angular sizes.
Specifically, reddening changes the temperature estimation,
and extinction affects the star's flux (F).
Since θ_*∝ T^-2 F^1/2, both are important.
Figure <ref> shows
the factor by which the fitted angular size of a star should be multiplied to recover its true angular size, as a function of the E_B-V parameter
and the star g-r color.
Here we assume the measured angular radius of the stars is based on fitting the g and r magnitudes
with a black-body curve.
We also assume R_V=3.08
().
This plot demonstrates that for a wide range of the parameter space,
the angular size correction factor is below 1
(i.e., the actual angular size is smaller than estimated by the fit).
Therefore, we conclude
that the estimate for the number of stars with an angular size smaller than 5 μarcsec is reasonable.
http://arxiv.org/abs/2409.03330v1 | 20240905080943 | Maximum size and magnitude of injection-induced slow slip events | [
"Alexis Sáez",
"François Passelègue",
"Brice Lecampion"
] | physics.geo-ph | [
"physics.geo-ph",
"physics.flu-dyn"
] |
§ ABSTRACT
Fluid injections can induce aseismic slip, resulting in stress changes that may propagate faster than pore pressure diffusion, potentially triggering seismicity at significant distances from injection wells. Constraining the maximum extent of these aseismic ruptures is thus important for better delineating the influence zone of injections concerning their seismic hazard. Here we derive a scaling relation based on rupture physics for the maximum size of aseismic ruptures, accounting for fluid injections with arbitrary flow rate histories. Moreover, based on mounting evidence that the moment release during these operations is often predominantly aseismic, we derive a scaling relation for the maximum magnitude of aseismic slip events. Our theoretical predictions are consistent with observations over a broad spectrum of event sizes, from laboratory to real-world cases, indicating that fault-zone storativity, background stress state, and injected fluid volume are key determinants of the maximum size and magnitude of injection-induced slow slip events.
§ INTRODUCTION
A growing body of observations suggests that a significant part of the deformation induced by subsurface fluid injections is due to aseismic fault motions <cit.>. This phenomenon, known as injection-induced aseismic slip, has been known since at least the 1960s when a slow surface fault rupture was causally linked to fluid injection operations of an oil field in Los Angeles <cit.>. Since then, an increasing number of observational studies have inferred the occurrence of slow slip events as a result of industrial fluid injections. For example, in the Brawley geothermal field, California, ground- and satellite-based geodetic techniques allowed for the detection of an injection-induced aseismic slip event <cit.>. This event was found to precede and likely trigger a seismic sequence in 2012 <cit.>. In western Canada, two of the largest aseismic slip events observed thus far (magnitudes 5.0 and 4.2) occurred in 2017-2018 and were detected using InSAR measurements of surface deformation <cit.>. These events were attributed to hydraulic fractures possibly intersecting glide planes during the stimulation of an unconventional hydrocarbon reservoir <cit.>. Similarly, InSAR-derived surface deformations allowed for the recent detection of aseismic ruptures in the southern Delaware Basin, Texas <cit.>, likely induced by wastewater injection operations <cit.>. These recent geodetic observations, in combination with mounting evidence for aseismic slip from fluid-injection field experiments <cit.>, suggest that injection-induced slow slip events might be a ubiquitous phenomenon, largely underdetected over the past decades only due to the lack of geodetic monitoring.
There is increasing recognition of the importance of injection-induced aseismic slip in the geo-energy industry. For instance, in the development of deep geothermal reservoirs, hydraulic stimulation techniques are commonly used to reactivate pre-existing fractures in shear. This process aims to enhance reservoir permeability through the permanent dilation of pre-existing fractures or the creation of new ones. The occurrence of predominantly aseismic rather than seismic slip is desirable, as earthquakes of significant magnitude can pose a substantial hazard to the success of these projects <cit.>. Injection-induced aseismic slip can be, however, detrimental in several ways. For example, aseismic slip on fractures intersecting wells can cause casing shearing <cit.> and adversely impact well stability <cit.>. Additionally, in CO_2 storage operations, injection-induced aseismic slip could affect the integrity of low–permeability caprocks, as fault slip may be accompanied by permeability enhancements, increasing the risk of CO_2 leakage <cit.>. Similar concerns may arise in other underground operations such as the storage of hydrogen and gas. Furthermore, it is well-established that quasi-static stress changes due to aseismic slip may induce seismic failures on nearby unstable fault patches <cit.>. Moreover, since aseismic slip can propagate faster than pore pressure diffusion <cit.>, aseismic-slip stress changes can potentially reach regions much further than the zones affected by the direct increase in pore pressure due to injection, thereby increasing the likelihood of triggering earthquakes of undesirably large magnitude by perturbing a larger rock volume <cit.>.
Understanding the physical factors controlling the spatial extent of aseismic slip is thus of great importance to better constrain the influence zone of injection operations concerning seismic hazards. Recent theoretical and numerical modeling studies have provided, within certain simplifying assumptions, a fundamental mechanistic understanding of how injection-induced aseismic slip grows in a realistic three-dimensional context and through all its stages, from nucleation to arrest <cit.>.
Estimating the rupture run-out distance of aseismic slip transients remains, despite these efforts, an unresolved issue, particularly as these prior investigations have focused only on specific injection protocols <cit.>. Yet the spatiotemporal patterns of injection-induced aseismic slip growth are anticipated to be strongly influenced by the history of injection flow rate
<cit.>. On the other hand, a related issue is estimating the maximum magnitude of injection-induced earthquakes. This quantity plays a crucial role in earthquake hazard assessment and has been the focus of significant research efforts in recent times <cit.>.
A common limitation of prior research in this area is neglecting the portion of moment release due to aseismic slip, despite substantial evidence suggesting that aseismic motions may contribute significantly to the total moment release <cit.>, potentially surpassing seismic contributions in some cases <cit.>. Understanding the factors governing aseismic moment release is thus important, and would constitute a first step toward understanding the physical controls on slip partitioning, that is, the relative contributions of aseismic and seismic motions to the release of elastic strain energy, which is crucial for a better understanding of the seismic hazard posed by these operations.
Building upon our previous works <cit.>, we develop here an upper-bound model for the spatial extent and moment release of injection-induced aseismic slip events. Our model notably accounts for fluid injections that are conducted with an arbitrary history of injection flow rate including the shut-in stage, thus effectively reproducing aseismic slip events during their entire life cycle, from nucleation to arrest. Using fracture mechanics theory, scaling analysis, and numerical simulations, we propose scaling relations for the maximum size and magnitude of aseismic ruptures which are shown to be consistent with a global compilation of events that vary in size from cm-scale slip transients monitored in the laboratory to km-scale, geodetically inferred slow slip events induced by industrial injections. Our results suggest that fault-zone hydro-mechanical storativity, background stress state, and injected fluid volume are crucial quantities in determining upper limits for the size and magnitude of aseismic ruptures. Moreover, the total fluid volume injected by a given operation is shown to be the only operational parameter that matters in determining the upper bounds in our model, regardless of any other characteristic of the injection protocol.
§ RESULTS
§.§ Physical model and upper bound rationale
We consider purely aseismic ruptures nucleated by a localized increase of pore-fluid pressure due to the direct injection of fluids into a porous fault zone of width w (Fig. <ref>c). For simplicity, we begin by examining a fluid injection conducted at a constant volumetric rate Q_0 over a finite time t_s, followed by a sudden injection stop (Fig. <ref>a). This results in two distinct stages: a continuous injection or pressurization stage, in which pore pressure increases everywhere within the permeable fault zone; and a shut-in or depressurization stage, in which pore pressure decays near the injection point (Fig. <ref>b) while transiently increasing away from it (Fig. <ref>e). Incorporating these two stages into the model allows for examining aseismic ruptures throughout their entire lifetime, from nucleation to arrest. We consider a planar infinite fault obeying a slip-weakening friction law with a static (peak) friction coefficient f_p, dynamic (residual) friction coefficient f_r, and characteristic slip-weakening distance δ_c. The decay of friction from the peak to the residual value can be either linear or exponential <cit.>. The host rock is considered purely elastic with the same elastic constants as the fault zone (Fig. <ref>c). We assume the host rock to be impermeable at the relevant time scales of the injection. This configuration is motivated by the permeability structure of fault zones in which a highly-permeable damage zone is commonly surrounded by a less permeable host rock <cit.>. Under these assumptions and,
particularly at large times compared to the characteristic time for diffusion of pore pressure in the direction perpendicular to the fault, w^2/α, with α the fault-zone hydraulic diffusivity, the deformation rate in the fault zone is essentially oedometric (uniaxial along the z-axis) <cit.>. Fluid flow within the fault zone is then governed by an axisymmetric linear diffusion equation for the pore pressure field p, ∂ p/∂ t = α∇^2p <cit.>, where the hydraulic diffusivity α=k/Sη, with k and η the fault permeability and fluid dynamic viscosity respectively, and S is the so-called oedometric storage coefficient representing the variation of fluid content caused by a unit pore pressure change under uniaxial strain and constant stress normal to the fault plane <cit.>. By neglecting any poroelastic coupling within the fault zone upon the activation of slip, deformation in the medium is governed by linear elasticity which by virtue of the slow nature of the slip we are concerned with, is regarded in its quasi-static approximation. Moreover, we assume that fault slip is concentrated in a principal slip zone modeled as a mathematical plane located at z=0 (Fig. <ref>c). We presented an investigation of this physical model, which can be regarded as an extension to three dimensions of the two-dimensional plane-strain model of Garagash and Germanovich <cit.>, in a recent study <cit.>. Here, our main focus is on the case of ruptures that are unconditionally stable according to the terminology and regimes presented in <cit.>.
This condition requires that the background shear stress τ_0, which is assumed to be uniform, be lower than the in-situ residual fault strength f_rσ_0^', where σ_0^'=σ_0-p_0, with σ_0 and p_0 the uniform background normal stress and pore pressure, respectively. τ_0 and σ_0^' are thought to be the result of long-term tectonic processes and are thus considered constant over the time scales associated with the injection operation.
In our model, unconditionally stable ruptures evolve always between two similarity solutions (Fig. <ref>a and see <cit.> for further details), one at early times where the fault interface operates with a constant friction coefficient equal to the peak value f_p, and the other one at late times where the fault interface behaves as if it were governed by a constant friction coefficient equal to the residual value f_r. As shown in Fig. <ref>a, the constant residual friction solution, which is the ultimate asymptotic regime of any unconditionally stable rupture, is an upper bound for the rupture size at any given time during the pressurization stage. In this asymptotic regime, rupture growth is dictated by a fracture-mechanics energy balance, where the interplay between driving and resisting forces leads to a scenario in which the fracture energy can be effectively neglected <cit.>. For this reason, this upper-bound limiting regime is also referred to as the zero-fracture-energy solution <cit.>. During the shut-in stage, a similar upper-bound rationale can be applied to the case of unconditionally stable ruptures. Assuming the fault slides with a constant residual friction value across the slipping region (equivalent to neglecting the fracture energy in the rupture-tip energy balance), this limiting solution would consistently yield a maximum for the rupture size, as the effect of the fracture energy is always to slow down rupture advancement. The limiting scenario of constant residual friction serves, therefore, as an effective upper-bound model for unconditionally stable ruptures from nucleation to arrest. In the following sections, we explore the consequences of such a limiting condition to provide an upper bound for the
evolution of the rupture size and moment release during and after fluid injection, as well as
a theoretical estimate for the final, maximum size and magnitude of injection-induced slow slip events. While the canonical example of injection at a constant flow rate is used to examine injection-induced aseismic slip in a relatively comprehensive manner which includes different stages and regimes, we emphasize in advance that our estimates for the maximum size and magnitude will account for fluid injections conducted with an arbitrary volumetric rate history (Fig. <ref>a).
§.§ Dynamics of unconditionally stable ruptures and maximum rupture size
During and after fluid injection, our upper-bound model is governed by a single dimensionless number (see Methods), the so-called residual stress-injection parameter:
𝒯_r= Δτ_r-0/(f_rΔ p_*), with Δτ_r-0=f_rσ_0^'-τ_0 and Δ p_*=Q_0η/(4π kw).
This dimensionless number systematically emerges in physics-based models of injection-induced fault slip <cit.>. It quantifies the competition between the two opposite forces that determine the dynamics of unconditionally-stable ruptures in our upper-bound model. One is a driving force associated with the sole effect of pore pressure increase due to fluid injection which continuously reduces fault shear strength, thereby releasing elastic strain energy that becomes available for rupture growth. Its stress scale is f_rΔ p_*, where Δ p_* is the injection intensity. Injections with faster pressurization are associated with increasing values of Δ p_* which can occur, for example, due to a higher injection flow rate (Q_0) or a lower hydraulic transmissivity (kw). The other force is of a resisting kind, which in the absence of a local energy dissipation mechanism such as the fracture energy, corresponds to a non-local consumption of elastic strain energy associated with the background stress change, Δτ_r-0. The latter is defined as the difference between the in-situ residual fault strength (f_rσ_0^') and the initial shear stress (τ_0). The former quantity can be also interpreted as the final shear stress that would act on the slipped fault patch after the termination of the injection operation and the subsequent dissipation of overpressure due to the injection. Hence, Δτ_r-0 quantifies a change of shear stress between a final and initial state. The background stress change is strictly positive. This is an essential feature of unconditionally stable ruptures. Specifically, the ultimate stability condition: τ_0<f_rσ_0^', ensures the fault stability and the development of quasi-static slip unconditionally in this regime <cit.>. Intuitively one expects that as the intensity of the injection (Δ p_*) increases, the rupture would propagate faster. Conversely, when the background stress change (Δτ_r-0) is higher, it presents greater resistance to rupture growth, consequently slowing down the slip propagation. Hence, decreasing 𝒯_r values will always result in faster aseismic ruptures. This behavior can be clearly observed in Fig. <ref>b, where the solution (see Methods) during the pressurization stage for a circular rupture of radius R(t) is shown. Here, R(t)=λ L(t), where λ is the so-called amplification factor <cit.>, and L(t)=√(4α t) is the classical diffusion length scale, also considered as the nominal position of the overpressure front (Fig. <ref>d). λ therefore relates the position of the overpressure and slip fronts. This analytical circular-rupture solution is strictly valid only when ν=0 <cit.>. Throughout this work, we generally adopt the circular rupture approximation to derive purely analytical insights. We, nevertheless, quantify the effect of rupture non-circularity numerically via a boundary-element-based numerical solver (see Methods).
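As an illustration of the magnitudes involved, 𝒯_r can be evaluated directly from its definition; all numerical values in the sketch below (friction, stresses, flow rate, and fault-zone properties) are assumptions chosen for illustration.

```python
import numpy as np

def stress_injection_parameter(f_r, sigma0_eff, tau0, Q0, eta, k, w):
    """Residual stress-injection parameter T_r = (f_r*sigma0' - tau0) / (f_r * dp_*)."""
    dp_star = Q0 * eta / (4.0 * np.pi * k * w)   # injection intensity [Pa]
    dtau_r0 = f_r * sigma0_eff - tau0            # background stress change [Pa]
    return dtau_r0 / (f_r * dp_star), dp_star, dtau_r0

# Assumed example: f_r = 0.6, sigma0' = 40 MPa, tau0 = 23 MPa,
# Q0 = 30 L/s, eta = 2e-4 Pa s, k = 1e-14 m^2, w = 10 m
T_r, dp, dtau = stress_injection_parameter(0.6, 40e6, 23e6, 0.03, 2e-4, 1e-14, 10.0)
print(f"dp_* ~ {dp/1e6:.1f} MPa, dtau_r-0 ~ {dtau/1e6:.1f} MPa, T_r ~ {T_r:.2f}")
```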
The analytical solution in Fig. <ref>b provides important insights into the response of our upper-bound model. During the pressurization stage, the fault response is characterized by two distinct regimes. When 𝒯_r∼10, aseismic ruptures are confined well within the overpressurized region (λ≪1), a regime known as marginally pressurized because it relates to a scenario in which the fluid injection provides just the minimum amount of overpressure that is necessary to activate fault slip <cit.>. Conversely, when 𝒯_r≪1, aseismic ruptures break regions much further away than the pressurized fault zone (λ≫1). This is the so-called nearly unstable regime <cit.> as when Δτ_r-0→ 0 the rupture approaches the condition under which it becomes ultimately unstable. From a practical standpoint and in an upper-bound sense, this is indeed the most relevant regime as it produces the largest ruptures for a given injection. While operators in geo-energy applications typically maintain good control over the parameters of the fluid injection, in-situ conditions such as the stress state acting upon fractures and faults within a reservoir are subject to significant uncertainties. Given that in-situ conditions largely control the response of aseismic slip in our model, it seems reasonable to assume under rather generic, generally uncertain conditions in the rock mass surrounding a given operation, that the nearly unstable regime provides an upper limit for the size and magnitude of aseismic slip events. Consequently, our emphasis in this work will predominantly be on exploring this regime. In fact, when λ≫1, one can derive a relation linking the evolution of the rupture radius to the accumulated injected fluid volume (V(t)) and in-situ conditions as follows (see Methods):
R(t)=A_situ√(V(t)), with A_situ=( f_r/(2π wS Δτ_r-0) )^1/2.
This equation is valid not only for injection at a constant flow rate but also for any arbitrary fluid injection as long as the rupture propagates in crack-like mode during the pressurization stage (see Methods), so that the fracture-mechanics energy balance, equation (<ref>), remains valid. A crack-like propagation mode will certainly hold at least in one relevant scenario, wherein overpressure due to fluid injection increases monotonically everywhere within the sliding region. Hereafter, to put the term arbitrary in a more specific but still sufficiently general scope, we refer to injection with monotonically increasing fluid pressure as arbitrary. However, we emphasize that in our model, a monotonically increasing fluid pressure is only a sufficient (not a necessary and sufficient) condition for crack-like propagation. Moreover, equation (<ref>) implies that, during the pressurization stage, the cumulative injected fluid volume V(t) is the only operational parameter of the injection that matters to estimate an upper bound for the rupture size at a given time t. Furthermore, the prefactor is exclusively related to in-situ conditions (A_situ).
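For orientation, the sketch below evaluates A_situ and the corresponding upper-bound rupture radius for a set of assumed parameter values (f_r=0.6, wS=10^-8 m/Pa, Δτ_r-0=0.1 MPa); these are illustrative choices, not values inferred for any particular site.

```python
import numpy as np

def a_situ(f_r, w_S, dtau_r0):
    """In-situ prefactor A_situ [m^-1/2]; w_S is the fault-zone storativity w*S [m/Pa]."""
    return np.sqrt(f_r / (2.0 * np.pi * w_S * dtau_r0))

def rupture_radius(volume_m3, f_r, w_S, dtau_r0):
    """Upper-bound rupture radius R(t) [m] for injected volume V(t) [m^3] (lambda >> 1)."""
    return a_situ(f_r, w_S, dtau_r0) * np.sqrt(volume_m3)

for V in (1e2, 1e3, 1e4):  # injected volumes [m^3]
    print(f"V = {V:8.0f} m^3  ->  R(pressurization) ~ {rupture_radius(V, 0.6, 1e-8, 1e5):7.1f} m")
```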
Upon the stop of the fluid injection (t>t_s), our upper-bound model produces ruptures that transition from crack-like to pulse-like propagation mode (Fig. <ref>d-f). Indeed, since we have reduced the upper-bound problem to a fault responding with a constant friction coefficient equal to the residual value f_r, we inherit essentially all the results obtained recently by Sáez and Lecampion <cit.> who extensively investigated the propagation and arrest of post-injection aseismic slip on a fault with constant friction. In particular, the overpressure drops quickly near the fluid source upon the stop of the injection while it keeps increasing transiently away from it (Fig. <ref>d-e). This latter increase of pore pressure is what further drives the propagation of aseismic ruptures after shut-in. As shown in Fig. <ref>e, slip propagates first as a ring-shaped pulse with a locking front that propagates always faster than the rupture front (Fig. <ref>a). The locking front is driven by the continuous depressurization of pore fluids which re-strengthens the fault. After, and for the more general case of non-circular ruptures, the pulse splits into two `moon-shaped' pulses (Fig. <ref>f). This ultimate stage is due to the locking front catching up with the rupture front first in the less elongated side of the slipping region. For the idealized case of circular ruptures, the moon-shaped pulses are absent due to the axisymmetry property of both the fluid flow and shear rupture problems. Fig. <ref>a displays the evolution of the locking front B(t) and rupture front R(t) for a circular rupture for an exemplifying case with 𝒯_r=0.15. Slip arrests when the locking front catches the rupture front at the time t_a (arrest time or duration of the slow slip event), resulting in the maximum rupture run-out distance R_max. Although perhaps more insightfully, the rupture front stops when it is caught by the so-called pore-pressure back front P(t) <cit.> introduced by Parotidis et al. <cit.>. This latter means that there is no further increase of pore pressure within the rupture pulse that is available to sustain the propagation of slip. Moreover, this arrest condition leads to the following analytical relation between the maximum rupture radius R_max and the arrest time t_a:
R_max=[4α t_a(t_a/t_s-1)ln( t_a/(t_a-t_s) )]^1/2.
In the more practically relevant, nearly unstable regime (𝒯_r≪ 1), the normalized arrest time (Fig. <ref>c) can be estimated via the following numerically derived asymptotic approximation, t_a/t_s≈ a𝒯_r^-b, with a=0.946876 and b=1.084361. Moreover, Fig. <ref>c shows that when ruptures are marginally pressurized (𝒯_r∼10), the slip pulses arrest almost immediately after the injection stops. Conversely, when ruptures are nearly unstable (𝒯_r≪1), the upper bound for the arrest time (t_a) is predicted to be several orders of magnitude longer than the injection duration (t_s). Rupture non-circularity has the effect of slightly increasing both the arrest time and maximum rupture run-out distance R_max (Figs. <ref>c-d). Furthermore, the contribution of the shut-in stage to R_max is approximately a factor of two at most when ruptures are very nearly unstable (𝒯_r∼0.001, Fig. <ref>d). Hence, the order of magnitude of R_max comes directly from evaluating R(t) in the analytical solution displayed in Fig. <ref>b at the shut-in time, which in the regime λ≫1 takes a more insightful expression given by equation (<ref>), which is valid for arbitrary fluid injections. Using this latter expression, we can calculate the maximum run-out distance when λ≫1 as:
R_max=S_ν A_situ√(V_tot),
where V_tot=V(t_s) is the total volume of fluid injected during a given operation, and the coefficient S_ν accounts for the further growth of the rupture during the shut-in stage and the effect of rupture non-circularity. S_ν is a function of 𝒯_r and ν and can be simply approximated by the blue dashed line in Fig. <ref>d for ν=0.25.
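Combining the arrest condition above with the numerically derived fit for t_a/t_s gives a simple recipe for the maximum run-out distance of a circular rupture; the sketch below assumes the nearly unstable regime and illustrative values of α, t_s, and 𝒯_r.

```python
import numpy as np

A_FIT, B_FIT = 0.946876, 1.084361     # numerically derived fit: t_a/t_s ~ a * T_r^(-b)

def arrest_time(t_s, T_r):
    """Arrest time t_a [s] of a circular rupture in the nearly unstable regime (T_r << 1)."""
    return t_s * A_FIT * T_r ** (-B_FIT)

def r_max_circular(alpha, t_s, T_r):
    """Maximum run-out distance [m] from the arrest condition above."""
    t_a = arrest_time(t_s, T_r)
    return np.sqrt(4.0 * alpha * t_a * (t_a / t_s - 1.0) * np.log(t_a / (t_a - t_s)))

# Assumed example: alpha = 0.05 m^2/s, a 6-day injection, T_r = 0.05
alpha, t_s, T_r = 0.05, 6.0 * 86400.0, 0.05
print(f"t_a/t_s ~ {arrest_time(t_s, T_r) / t_s:.0f}")
print(f"R_max ~ {r_max_circular(alpha, t_s, T_r):.0f} m")
```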
§.§ Maximum moment release and magnitude
To calculate the moment release, we derive analytical upper bounds for the spatiotemporal evolution of fault slip during the pressurization stage, for both nearly unstable (λ≫1) and marginally pressurized (λ≪1) ruptures (see Methods). Notably, the slip distribution of nearly unstable ruptures is highly concentrated around the injection point due to a boundary layer associated with the fluid-injection force at distances r∼ L(t) (Fig. <ref>b). Upon integrating the analytical slip distributions over the rupture surface, the temporal evolution of moment release is:
M_0 ≃ (16/3) Δτ_r-0 R^3 for nearly unstable ruptures, λ≫1,
(16/9) f_r Δ p_* R^3 for marginally pressurized ruptures, λ≪1,
with the temporal dependence of M_0 embedded implicitly in R(t)=λ L(t) which is known analytically (Fig. <ref>b). As expected, the previous asymptotic solutions for M_0 match very closely the full numerical solution (Fig. <ref>a). The numerical solution helps us to describe the precise transition between the two end members. We emphasize that the structure of the scaling for M_0 is evidently the one expected for a circular crack (M_0∝ R^3). Yet the pre-factors and relevant stress scales are specific to the characteristic loading of each regime. For instance, in the nearly unstable regime (λ≫ 1), the proper stress scale is the background stress change (Δτ_r-0), as opposed to the injection intensity (f_rΔ p_*) which is the adequate stress scale when λ≪1. This is because, in the nearly unstable regime, most of the slipping region experiences a uniform stress variation Δτ_r-0 except for a very small region of size ∼ L(t) near the fluid source which undergoes an additional non-uniform stress change due to the fluid injection. The effect of the fluid-injection force is indeed in the pre-factor 16/3, which is about two times bigger than the one of a circular crack with purely uniform stress drop (16/7 when ν=0.25 <cit.>, and 8/3 when ν=0 <cit.>). Moreover, in this regime, we obtain the following expression for the moment release which is valid for arbitrary fluid injections (see Methods):
M_0(t)= I_situ· V(t)^3/2, with I_situ=[ 16/(3(2π)^3/2) ] (1/√(Δτ_r-0)) ( f_r/wS)^3/2.
Equation (<ref>) has the same property as equation (<ref>), that is, the only operational parameter of the injection controlling the upper bound for the moment release during the pressurization stage is the cumulative injected fluid volume V(t). Furthermore, the prefactor corresponds as well to in-situ conditions (I_situ), thus effectively separating contributions to the moment release that are controllable during an operation (V) and those that are not (I_situ). Such kind of relation for the moment release has been previously reported in the literature for the case of regular, fast earthquakes <cit.>.
Equation (<ref>) shows that a decrease in background stress change (Δτ_r-0) leads to an increase in moment release. The reason behind such behavior is simple, lower background stress variations result in less opposition for the rupture to grow and thus in a higher moment release. For the same reason, higher values of the residual friction coefficient (f_r) also augment M_0. On the other hand, decreasing the product between the fault-zone width and oedometric storage coefficient (wS), hereafter denominated as fault-zone storativity, also leads to a larger moment release. The explanation, in this case, is that wS controls the pressurization intensity due to fluid injection that is experienced on average over the fault pressurized region. A lower storativity in the fault zone naturally implies a higher fluid pressure to accommodate a fixed amount of injected volume (see equation (<ref>)). A higher fluid overpressure decreases fault shear strength therefore increasing the mechanical energy available for rupture growth and the corresponding moment release. It is important to note that in the marginally pressurized regime (λ≪1), M_0 does not follow an expression as in (<ref>). Indeed, by substituting the expressions Δ p_*=Q_0η/4π kw, R(t)=λ√(4α t), and V(t)=Q_0 t into equation (<ref>), one can readily show that the moment release for an injection at a constant flow rate is given by M_0(t)= B · V(t)^3/2, with B=(32/9π)(f_rη/kw)(λ^3α^3/2/Q_0^1/2). This implies that in this regime, the moment release depends on both the current injected volume V(t) (or injection time t) and the injection rate Q_0 (which is also implicitly in λ). More importantly, the in-situ and operational factors cannot be separated as in (<ref>). This separation is a unique characteristic of nearly unstable ruptures, associated with the fact that when λ≫1, the effect of the fluid source on rupture propagation is entirely described by its equivalent force at distances r≫ L(t). The magnitude of this equivalent force is determined only by the injected fluid volume, irrespective of any other detail of the fluid injection (see Methods, equation (<ref>)).
During the shut-in stage (t>t_s), the propagation and ultimate arrest of the aseismic slip pulses result in a further accumulation of fault slip (Fig. <ref>b). The depressurization stage thus increases the final, maximum moment release of the events. Fig. <ref>b displays the evolution of this increase for an exemplifying case with 𝒯_r=0.01. We observe that the moment release keeps growing after shut-in very slowly (over a timescale that is about 100 times the injection duration) up to reaching (at arrest) nearly three times the moment release at the time the injection stops (M_0(t_s)). We quantify this effect in the most general form by defining the shut-in coefficient C_shut-in, equal to the ratio between the maximum moment release at the time in which a circular rupture arrest, M_0(t_a), and the moment release at the shut-in time, M_0(t_s). By dimensional analysis, the shut-in coefficient depends only on the residual stress-injection parameter 𝒯_r, whose relation is calculated numerically and displayed in Fig. <ref>c. We observe that M_0(t_a) is at most around 4 times the moment release at the time the injection stops in the more nearly unstable cases (smallest values of 𝒯_r). Conversely, there is virtually no further accumulation of moment release for marginally pressurized ruptures. We quantify the effect of rupture non-circularity in a similar way by introducing the coefficient C_ν equal to the ratio between the moment release at the time of arrest for non-circular ruptures (ν≠0), and the same quantity for the circular case (ν=0). Again, by dimensional considerations, C_ν depends only on 𝒯_r for a given ν. This is shown in Fig. <ref>d for the particular case of a Poisson's solid (ν=0.25, a common approximation for rocks). We observe that the effect of the Poisson's ratio is to reduce in about 13.8 percent the moment release of a non-circular rupture with respect to the one of a circular rupture, for the same 𝒯_r. We find this to be valid over a wide range of practically relevant cases (0.01≤𝒯_r≤ 1). With all the previous definitions and calculations, we can finally estimate the maximum moment release as M_0^max= C_ν· C_shut-in· M_0(t_s). Notably, in the nearly unstable regime (λ≫ 1), equation (<ref>) can be evaluated at the shut-in time, which allows us to arrive at the following expression valid for arbitrary fluid injections:
M_0^max= C_ν· C_shut-in· I_situ· V_tot^3/2,
where V_tot=V(t_s) is the total volume of fluid injected during a given operation. Equation (<ref>) has a multiplicative form, thus effectively factorizing contributions from the injected fluid volume, in-situ conditions, shut-in stage, and rupture non-circularity to the maximum moment release. Note that both C_shut-in and C_ν depend on 𝒯_r and thus also on in-situ conditions and parameters of the injection (equation (<ref>)). However, the in-situ conditions and injection protocol are for the most part contained in I_situ and V_tot, respectively, which can vary over several orders of magnitude. On the contrary, the dimensionless coefficients C_shut-in and C_ν remain always of order one. Moreover, we shall keep in mind that these latter two coefficients are, strictly speaking, defined only for injection at a constant flow rate. Nevertheless, one could crudely approximate any other kind of injection protocol as Q_eq=(1/t_s)∫_0^t_s Q(t)dt for the purpose of estimating these two coefficients. The previous approximation guarantees that the same amount of fluid volume is injected over the same injection period t_s by both the equivalent constant-rate source Q_eq and the time-varying arbitrary source Q(t). Finally, for marginally pressurized ruptures (λ≪1), a similar expression for the maximum moment release can be derived as M_0^max= C_ν· B · V_tot^3/2 (since C_shut-in≈1, Fig. <ref>c). As already discussed, the in-situ and operational factors cannot be separated in this regime.
To calculate the maximum magnitude, we follow the definition by Hanks and Kanamori <cit.>: M_w^max=2/3·[log_10(M_0^max)-9.1] (here, in SI units). In the regime that provides the largest rupture size and moment release for a given injection (λ≫1), equation (<ref>) leads to the following estimate for the maximum magnitude:
M_w^max=log_10(V_tot) + 2/3[log_10(I_situ) + log_10(C_shut-in) + log_10(C_ν) - 9.1 ].
Due to the multiplicative form of equation (<ref>), equation (<ref>) takes an additive form that separates contributions from different factors to the maximum magnitude of injection-induced slow slip events.
Among these factors, rupture non-circularity decreases the magnitude only by 0.06. The contribution from the shut-in stage is, on the other hand, slightly larger. Since C_shut-in≈ 4 at most when ruptures are very nearly unstable (𝒯_r∼0.001), the shut-in stage may contribute to an increase in the moment magnitude of 0.4 at the maximum. The larger contributions to M_w^max are by far the ones associated with in-situ conditions and the total injected fluid volume. For example, a tenfold increase in V_tot gives a magnitude increase of 1.0, while a tenfold increase in I_situ results in a magnitude growth of approximately 0.67. The relative contributions from the sub-factors composing I_situ can be further understood by substituting equation (<ref>) into (<ref>), and then isolating the in-situ term as follows:
(2/3)log_10(I_situ)=log_10(f_r)-(1/3)log_10(Δτ_r-0)-log_10(wS)-0.3135.
The more significant variations in M_w^max come clearly from the fault-zone storativity (wS) and background stress change (Δτ_r-0), which could vary over several orders of magnitude. For instance, a variation of three orders of magnitude in Δτ_r-0 yields a change of magnitude of 1.0, while the same variation in fault-zone storativity results in a magnitude change of 3.0, highlighting the potentially strong effect of wS in M_w^max.
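A compact numerical version of this maximum-magnitude estimate is sketched below; the parameter values (f_r=0.6, wS=5×10^-8 m/Pa, Δτ_r-0=0.1 MPa, C_shut-in=4, C_ν=0.862) are assumptions for illustration and not a calibration of the model.

```python
import numpy as np

def i_situ(f_r, w_S, dtau_r0):
    """In-situ factor I_situ [N m^(-7/2)] from the moment-release scaling."""
    return 16.0 / (3.0 * (2.0 * np.pi) ** 1.5) * (f_r / w_S) ** 1.5 / np.sqrt(dtau_r0)

def max_magnitude(V_tot, f_r, w_S, dtau_r0, C_shutin=4.0, C_nu=0.862):
    """Upper-bound moment magnitude for a total injected volume V_tot [m^3]."""
    M0_max = C_nu * C_shutin * i_situ(f_r, w_S, dtau_r0) * V_tot ** 1.5
    return (2.0 / 3.0) * (np.log10(M0_max) - 9.1)

for V in (1e3, 1e4, 1e5):
    print(f"V_tot = {V:8.0f} m^3  ->  M_w^max ~ {max_magnitude(V, 0.6, 5e-8, 1e5):.1f}")
```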
§.§ Fault-zone storativity and injected fluid volume: two key parameters
To test our scaling relations, we compiled and produced a new dataset (Supplementary Materials) with estimates of aseismic moment release, rupture size, and injected fluid volumes from events that vary in size from laboratory experiments (centimetric to metric scale ruptures) <cit.> to industrial applications (hectometric to kilometric scale ruptures) <cit.>, including in-situ experiments in shallow natural faults at intermediate scales (metric to decametric ruptures) <cit.>. The comparison between this dataset and our expressions for the maximum moment release (<ref>) (or magnitude (<ref>)) and maximum rupture size (<ref>), are displayed in Figs. <ref> and <ref> respectively. We focus first on the maximum moment release (Fig. <ref>). To facilitate the comparison against the dataset, we introduce in Fig. <ref> the factor N=C_ν· C_shut-in· I_situ which encapsulates all effects other than the injected fluid volume, so that equation (<ref>) can be simply written as M_0^max=N· V_tot^3/2. Three different values for N are considered in Fig. <ref> which collectively form an upper bound for the data across the different volume and moment release scales characterizing the dataset. Considering that C_ν≈ 0.862 and that plausible values for the coefficient C_shut-in range from 1 to 4, the order of magnitude and units of N are the ones determined by the in-situ factor (I_situ)
. This latter, in turn, depends on three parameters: the residual friction coefficient f_r (with a plausible range of 0.4 to 0.8), the background stress change (Δτ_r-0), and the fault-zone storativity (wS). The background stress change can be at most equal to the amount of shear stress that is necessary to activate fault slip before the injection starts, Δτ_p-0=f_pσ_0^'-τ_0, in the limiting case in which the weakening of friction is small (f_r≈ f_p). Its minimum value could be, on the other hand, as small (but positive) as possible when the residual fault strength drops close to the initial shear stress (f_rσ_0^'≈τ_0). This is, as already discussed, the case that would promote larger ruptures and moment release. Δτ_r-0 could therefore reasonably fluctuate between some MPa and a few kPa. The fault-zone storativity (wS) may similarly vary over several or potentially many orders of magnitude <cit.>. Estimating this parameter is quite challenging; however, as anticipated by equation (<ref>), wS could have a strong effect on the maximum magnitude. Hence, we conduct a more intricate analysis of representative values for wS within our compilation of events.
To do so, we examine the end members of our data points, namely, small-scale laboratory experiments and industrial-scale fluid injections. Let us first note that in our model, wS can be written in terms of generally more accessible quantities as kw/αη, where kw is the fault-zone hydraulic transmissivity, η is the fluid dynamic viscosity, and α is the fault-zone hydraulic diffusivity. At the centimetric scale composing the smallest aseismic slip events in the dataset, Passelègue et al. <cit.> estimated the hydraulic transmissivity of their saw-cut granitic fault within 10^-17 and 2×10^-18 m^3, and a hydraulic diffusivity from 3×10^-5 m^2/s to 10^-6 m^2/s <cit.>, at confining pressures ranging from 20 to 100 MPa respectively. Considering a water dynamic viscosity at the room temperature the experiments were conducted, η∼10^-3 Pa·s, we estimate wS to be within 3×10^-10 and 2×10^-9 m/Pa (assuming that kw and α are positively correlated). Taking into consideration the aforementioned characteristic range of values for f_r and Δτ_r-0, we estimate the maximum value for the in-situ factor that is representative of these laboratory experiments to be roughly I_situ∼10^12 N·m^-7/2. Interestingly, the upper bound for the moment release resulting from this value of I_situ aligns closely with our estimates of moment release and injected fluid volumes for this very same set of experiments (Fig. <ref>, yellow triangles). Note that in Fig. <ref>, the factor N must always be interpreted as being greater than I_situ due to the combined effect of the coefficients C_ν and C_shut-in. In addition, this upper bound seems to explain relatively well the centimeter-scale laboratory experiments presented in this study (cyan triangles; Supplementary Materials) and the
meter-scale laboratory experiments of Cebry et al. <cit.> (red triangles). The former experiments were carried out under almost identical conditions to the ones of Passelègue et al. <cit.>, whereas the latter ones were conducted in a similar saw-cut granitic fault with hydraulic properties that are close to the ones of Passelègue et al.'s fault at the lower confining pressures of this latter one <cit.>.
At the large scale of industrial fluid injections, we consider one of the best-documented field cases: the 1993 hydraulic stimulation at the Soultz geothermal site in France <cit.>. The hydraulic transmissivity associated with the 550-m open-hole section stimulated during the test has been estimated to experience a 200-fold increase as a consequence of the two fluid injections conducted, giving us a possible range of approximately 10^-14 m^3 to 2×10^-12 m^3 <cit.>. However, the smallest value of kw represents only the very short, initial part of the injection <cit.>. Therefore, a possible variation between 5×10^-14 m^3 and 2×10^-12 m^3 seems a more reasonable range to be considered within the assumptions of our model which assumes a constant transmissivity. On the other hand, the hydraulic diffusivity possesses significant uncertainties due to the single-well nature of the hydraulic data in contrast to the double-well measurements employed, for instance, by Passelègue et al. <cit.> in the laboratory. We consider a range of values for α from 0.01 m^2/s to 0.1 m^2/s, which is consistent with estimates derived from micro-seismicity migration <cit.> and aseismic fracture slip <cit.>. Assuming a water dynamic viscosity of η=2×10^-4 Pa·s which is representative of the temperature conditions within the reservoir <cit.>
, we estimate wS to fall within the range of 5×10^-8 m/Pa to 10^-7 m/Pa. With these estimates, we calculate a representative maximum value for the in-situ factor in this field test to be roughly I_situ∼10^9 N· m^-7/2. As shown in Fig. <ref>, the resulting upper limit aligns very well with the field data (circles), providing an effective upper bound for the hectometric to kilometric rupture cases composing the dataset. Furthermore, this simplified, order-of-magnitude analysis suggests that the behavior of the upper limit we observe from the laboratory to the reservoir scale, namely, the decrease of the factor N with increasingly larger volume and moment release scales, might be primarily controlled by an increase in fault-zone storativity.
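The storativity estimates used above follow directly from wS=kw/(αη); the sketch below evaluates them for the quoted laboratory and Soultz parameter ranges, with the pairing of the range endpoints being our own assumption (the resulting values agree with the ranges adopted in the text to within a factor of about two).

```python
def storativity(transmissivity_m3, diffusivity_m2_s, viscosity_pa_s):
    """Fault-zone storativity w*S = k*w / (alpha * eta), in m/Pa."""
    return transmissivity_m3 / (diffusivity_m2_s * viscosity_pa_s)

cases = {
    #                                   k*w [m^3], alpha [m^2/s], eta [Pa s]
    "laboratory, 100 MPa confinement": (2e-18, 1e-6, 1e-3),
    "laboratory, 20 MPa confinement":  (1e-17, 3e-5, 1e-3),
    "Soultz, lower transmissivity":    (5e-14, 1e-2, 2e-4),
    "Soultz, upper transmissivity":    (2e-12, 1e-1, 2e-4),
}
for name, (kw, alpha, eta) in cases.items():
    print(f"{name:32s}: wS ~ {storativity(kw, alpha, eta):.1e} m/Pa")
```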
Moreover, the upper bound for intermediate scales (in-situ experiments, square symbols in Fig. <ref>) is characterized by a value of N (or I_situ) that is approximately in the middle of the values that provide an upper limit for the laboratory and field data, suggesting that the increase in storativity with larger scales could be a general explanation for the trend observed throughout the entire dataset.
Finally, Fig. <ref> shows the comparison of our scaling relation for the maximum rupture run-out distance, equation (<ref>), with the estimated rupture run-out distances for the same injection-induced aseismic slip events as in Fig. <ref>. To facilitate the interpretation, we similarly define the factor M=S_ν A_situ accounting for all effects other than the injected fluid volume, so that equation (<ref>) becomes simply R_max=M√(V_tot). It is important to note that the effects of Δτ_r-0 and wS are now of similar order, as R_max scales alike with the background stress change R_max∝Δτ_r-0^-1/2 and fault-zone storativity R_max∝ (wS)^-1/2. Considering the same range of values for f_r, Δτ_r-0 and wS discussed previously, we calculate A_situ to be around 200 m^-1/2 for Passelègue et al.'s experiments, and 15 m^-1/2 for the Soultz case. As shown in Fig. <ref>, the upper limits resulting from these two values of A_situ (considering also an amplification due to the factor S_ν) are in remarkably good agreement with the data, providing an effective upper bound for the maximum rupture run-out distance from cm-scale ruptures in the laboratory to km-scale ruptures in industrial applications.
§ DISCUSSION
Our results provide a rupture-mechanics-based estimate for the maximum size, moment release, and magnitude of injection-induced slow slip events. Moreover, the dependence of our scaling relations on in-situ conditions and injected fluid volume allows us to explain variations in rupture sizes and moment releases resulting from fluid injections that span more than 12 orders of magnitude of injected fluid volume.
Similar scaling relations for the moment release of regular, fast earthquakes have been previously proposed in the literature <cit.>. Notably absent in those relations, fault-zone storativity appears here to be a crucial factor influencing the upper-bound behavior we observe with increasing scales of volume and moment release in the data. While our scaling relation for the maximum aseismic rupture run-out distance is the first of its kind, McGarr and Barbour <cit.> suggested in a prior work that the relation for the cumulative moment Σ M_0=2μ V_tot, originally proposed by McGarr <cit.> for regular earthquakes, also accounts for aseismic slip. It is thus pertinent to discuss their scaling relation in light of our findings.
We first note that in testing their relation, McGarr and Barbour <cit.> incorporated numerous data points of aseismic moment release and injected volume into a dataset characterized by otherwise only regular earthquakes. All of these aseismic slip events come from laboratory experiments of hydraulic fracturing <cit.>, except for one single data point that stems from direct measurements of injection-induced aseismic slip during an in-situ experiment <cit.>. The mechanics of hydraulic fractures <cit.>, however, differs significantly from its shear rupture counterpart. Indeed, the moment release by hydraulic fractures scales linearly with the injected fluid volume simply because the integral of the fracture width over the crack area is equal to the fracture volume. The latter is approximately equal to the injected volume under common field conditions, namely, negligible fluid leak-off and fluid lag <cit.>. In our study, we discarded these hydraulic-fracturing data points because they correspond to a different phenomenon. The remaining data point of McGarr and Barbour, which does correspond to a fluid-driven shear rupture <cit.>, is retained in our dataset albeit with a certain degree of uncertainty based on moment release estimates provided by more recent studies (Supplementary Materials).
In terms of modeling assumptions, one of the most significant differences between McGarr's and ours is that we account for the potential for aseismic ruptures to propagate beyond the fluid-pressurized region (λ≫1). This regime, which from an upper-bound perspective is of most practical interest as it produces the largest ruptures for a given injection, is not allowed by construction in McGarr's model due to his assumption that any fault slip induced by the fluid injection must be confined within the region where pore fluids have been effectively pressurized due to the injection <cit.>. We emphasize that aseismic ruptures breaking non-pressurized fault regions are a possibility that always emerges when incorporating rupture physics in a model <cit.>, even in the absence of frictional weakening owing simply to long-range elastostatic stress transfer effects <cit.>. Moreover, such a regime has already been directly observed in laboratory experiments <cit.>, and inferred to have occurred during in-situ experiments <cit.> and industrial fluid injections for reservoir stimulation <cit.>. Furthermore, as a natural consequence of incorporating rupture physics in our model, we obtain a dependence of the moment release on the background stress state and fault frictional parameters. McGarr's model is, in contrast, insensitive to these physical quantities, which largely control the release of elastic strain energy during rupture propagation. Another important distinction between both models is that McGarr's relies uniquely on the capacity of the rock bulk to elastically deform and volumetrically shrink to accommodate the influx of fluid mass from the injection, unlike our model which accounts for bulk, fluid, and pore compressibilities within the fault zone via the so-called oedometric storage coefficient <cit.>.
Despite the significant differences between both models, it is pertinent to compare McGarr's relation for the moment release with our newly compiled dataset. By doing so, we observe that McGarr's upper bound can explain the majority of the data points, albeit with one very important exception (Fig. <ref>): the 2017 M_w 5.0 slow slip event in western Canada <cit.>; the largest event detected thus far. Specifically, McGarr's formula fails by predicting a maximum magnitude of 4.4 (considering V_tot=88,473 m^3 and assuming a shear modulus of 30 GPa <cit.>). This magnitude is equivalent to predicting an upper limit for the moment release that is 16 times smaller than the actual moment that was inferred geodetically <cit.>. Such underestimation is somewhat similar to that performed by McGarr's formula in the case of regular earthquakes: for instance, when considering the 2017 M_w 5.5 Pohang earthquake in South Korea <cit.>. Our scaling relation can, conversely, explain the M_w 5.0 slow slip event in Canada and, more generally, our entire compilation of events by accounting for variations in in-situ conditions such as the background stress change (Δτ_r-0) and, especially, the fault-zone storativity (wS). Note that from a `data-fitting' perspective, the dependence of our model on wS and Δτ_r-0 introduces additional degrees of freedom compared to McGarr's formula, which depends only on the injected fluid volume and the shear modulus; a parameter that has very little variation in practice.
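As a simple arithmetic check (our own addition), the snippet below reproduces this comparison from the values quoted above, assuming the standard Hanks–Kanamori moment–magnitude relation M_w=(2/3)(log_10 M_0 − 9.1).

```python
# Back-of-the-envelope check of McGarr's bound Sigma M_0 = 2*mu*V_tot for the
# 2017 western-Canada slow slip event, using the values quoted in the text
# (V_tot = 88,473 m^3, mu = 30 GPa) and the standard Hanks-Kanamori relation.
import math

mu = 30e9          # shear modulus [Pa]
V_tot = 88_473.0   # injected volume [m^3]

M0_mcgarr = 2.0 * mu * V_tot                        # upper bound on moment [N m]
Mw_mcgarr = (2.0 / 3.0) * (math.log10(M0_mcgarr) - 9.1)

print(f"McGarr moment bound : {M0_mcgarr:.2e} N m")
print(f"Implied magnitude   : M_w {Mw_mcgarr:.1f}")  # ~4.4, below the observed M_w 5.0
```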
Our estimates of the maximum rupture size and magnitude for slow slip events may be regarded, to some extent, as an aseismic counterpart of the also rupture-mechanics-based scaling relation proposed by Galis et al. <cit.> for regular earthquakes. Although we are describing a fundamentally different process here, the two scaling relations share the same 3/2-power law dependence on the injected fluid volume. This equal exponent arises from the similarities between the competing forces driving both slow slip events and dynamic ruptures in each model, namely, a point-force load due to fluid injection and a uniform stress change behind the cohesive zone and within the ruptured surface. In Galis et al.'s model, a point-force-like load is imposed to nucleate an earthquake. In our model, it is the natural asymptotic form that the equivalent force associated with the fluid injection takes in the regime that provides the largest ruptures for a given injection (λ≫1). Note that the two models differ in their storativity-like quantity. As we discussed before, we account for the capacity of the fluid, pore space, and bulk material in the fault zone to store pressurized fluids. In contrast, Galis et al.'s model accounts only for the capacity of the bulk material: a property they inherited from McGarr's model <cit.>. A revision of seismic scaling relations may be required to include the notion of a more general storativity term, particularly considering the significant variability in pore compressibility observed in practice <cit.> which can sometimes dominate over bulk and fluid compressibilities. Another important difference with Galis et al.'s model is the uniform variation of background stress which in their model is the so-called stress drop Δτ_0-r=τ_0-f_rσ_0^', whereas in our case, it corresponds to the same quantity but of opposite sign, Δτ_r-0=f_rσ_0^'-τ_0. Conceptually, this is indeed a very important difference. In our model, the stress drop is negative, which implies that after the termination of the injection operation and the subsequent dissipation of overpressure due to the injection, the residual shear stress acting on the slipped fault patch will be greater than the initial shear stress. This, which is a prominent feature of unconditionally stable ruptures, implies no release of tectonically accumulated pre-stresses on the fault.
The previous point brings us to an important issue: we have considered only one of the two possible modes of aseismic slip, namely, fault ruptures that are unconditionally stable. However, injection-induced aseismic slip can also be the result of conditionally stable slip, that is, the nucleation phase preceding an otherwise dynamic rupture. The principal factor determining whether aseismic slip will develop in one way or the other is the so-called ultimate stability condition <cit.>. For conditionally stable slip to occur, the initial shear stress must be thus greater than the background residual fault strength (τ_0>f_rσ_0^'), resulting in a positive stress drop. This is therefore the mode of aseismic slip that can potentially release tectonically accumulated pre-stresses. In general, we cannot rule out that the points in the datasets of Fig. <ref> and <ref> correspond to either conditionally stable or unconditionally stable slip, as estimating the background stress state and fault frictional properties that are representative of the reactivated fault remains extremely challenging in practice. There is, nevertheless, at least one case in the dataset in which aseismic slip is as a matter of fact, conditionally stable. These are the two aseismic slip events from the meter-scale laboratory experiments of Cebry et al. <cit.>, which preceded seismic ruptures that broke the entire fault interface sample (Supplementary Materials). The scaling relations resulting from this mode of aseismic slip are therefore important. Moreover, considering that the data points from Cebry et al.'s experiments align well with the other points in the dataset, we anticipate these scaling relations to be similar in their structure to the ones presented here.
Our model aimed to capture the most essential physical ingredients of unconditionally stable ruptures to provide the desired theoretical insights into the physical mechanisms controlling the maximum size and magnitude of injection-induced slow slip events. To achieve this, we have, however, adopted several simplifying assumptions that warrant further investigation. In particular, our model does not account for fluid leak-off from the permeable fault zone to the host rock, nor permeability enhancements associated with fault slip and/or the reduction of effective normal stress due to fluid injection. Despite these simplifications, we expect our scaling relations to still provide an effective upper bound with regard to these additional factors. Indeed, we think that incorporating a permeable host rock would notably decrease the injection overpressure in the fault zone compared to the impermeable case, thus decelerating rupture growth. The effect of slip-induced dilatancy, which is relatively well-established <cit.>, would introduce a toughening effect that would similarly slow down slip propagation from a fracture-mechanics perspective. Furthermore, permeability enhancements due to both dilatancy and reduced effective normal stress are expected to be inconsequential in the limit λ≫ 1, which is the relevant one for establishing an upper bound. This is because, in this regime, most of the slipping region remains non-pressurized except for a small area near the fluid source. The strength of this small (point-force-like) region remains unchanged in our model, provided that the enhanced hydraulic properties are taken as the constant ones <cit.>. An additional simplification in our model is the consideration of a single fault zone. Although this is likely the case for the majority of the events incorporated in our dataset <cit.>, in some cases, a network of fractures or faults could be reactivated instead <cit.>. Recent numerical modeling studies on injection-induced aseismic slip have, however, shown that approximately the same scaling relations for the moment release predicted by a single fracture in two dimensions emerge collectively for a set of reactivated fractures belonging to a two-dimensional discrete fracture network <cit.>. This is notably the case when the regime λ≫1 is reached in a global, fracture-network sense. Yet the generality and prevalence of this finding in three dimensions remain to be confirmed.
We notably showed that in the nearly unstable regime (λ≫1), the dynamics of the rupture expansion in our upper-bound configuration are controlled uniquely by the history of injected fluid volume, irrespective of any other characteristic of the injection protocol. The implications of this finding may go well beyond the ones explored in this work. For example, in hydraulic stimulation operations for the development of deep geothermal energy, micro-seismicity clouds which often accompany fluid injections are commonly used to constrain the areas of the reservoir that have been effectively stimulated. If aseismic-slip stress transfer is a dominant mechanism in the triggering of micro-seismicity, our model suggests that these seismicity clouds may contain important information about the pre-injection stress state and fault frictional properties which are embedded in the factor A_situ (equation (<ref>)). Moreover, if the effect of the fracture energy on rupture propagation can be approximately neglected in comparison to the other two competing forces driving aseismic ruptures in our model, that is, the background stress change and fluid-injection force, our results imply that the spatiotemporal patterns of seismicity migration might be deeply connected to injection protocols via the dependence of the aseismic slip front dynamics on the square root of the cumulative injected fluid volume. This could be used, for instance, to identify from injection-induced seismicity catalogs under what conditions aseismic-slip stress transfer may become a potentially dominant triggering mechanism due to this unique spatiotemporal footprint, which differs notably from the ones emerging from other triggering mechanisms such as pore pressure diffusion and poroelastic stressing <cit.>. Similarly, our model could be potentially applied to the study of natural seismic swarms where sometimes fluid flow and aseismic slip processes are thought to be the driving forces behind their observed dynamics <cit.>. Lastly, our model could be also utilized to understand slow slip events occurring at tectonic plate boundaries in many subduction zones worldwide. The fundamental mechanics of slow slip events remains debated <cit.> yet multiple, recent observations suggest that their onset and arrest might be spatially and temporally correlated with transients of pore-fluid pressure <cit.>.
Finally, we emphasize that our investigation has focused on constraining the rupture size and moment release of purely aseismic injection-induced ruptures. However, in some instances, seismic or micro-seismic events may release a substantial portion of the elastic strain energy stored in the medium. In this study, we have incorporated in our compilation of events only cases where the seismic contribution to the moment release is thought to be orders of magnitude smaller than the aseismic part. From a mechanics perspective, this aimed to exclude events where the stress transfer from frictional instabilities could significantly influence the dynamics of the slow rupture under consideration, thereby ensuring a robust comparison between the data and the scaling relations of our model. Future studies should therefore focus on understanding what physical factors govern the partitioning between aseismic and seismic slips during injection operations. Our work, in this sense, contributes to such possibility by providing an upper limit to the previously unexplored aseismic end-member. Together with prior works on purely seismic ruptures, we believe this offers a starting point to examine slip partitioning during injection-induced fault slip sequences: a crucial step toward advancing our physical understanding of the seismogenic behavior of reactivated faults and the associated seismic hazard.
§ MATERIALS AND METHODS
§.§ Time-dependent upper-bound model for the size of unconditionally stable ruptures
We define the axisymmetric overpressure due to the injection as Δ p(r,t)=p(r,t)-p_0, with p_0 the uniform background pore pressure. During the pressurization stage (t≤ t_s), the overpressure is given by Δ p(r,t)=Δ p_*· E_1(r^2/4α t) for an injection at constant flow rate Q_0 <cit.>, where Δ p_*=Q_0η/4π kw is the intensity of the injection with units of pressure, α is the fault hydraulic diffusivity, η is the fluid dynamic viscosity, the product kw is the so-called fault hydraulic transmissivity, and E_1 is the exponential integral function. The fracture-mechanics energy balance for a quasi-static circular rupture propagating on a slip-weakening fault was presented in <cit.>. In our upper-bound configuration here, neglecting the fracture energy spent during rupture propagation leads to the following expression describing the evolution of the rupture radius R with time:
(2/√(π)) · (f_rΔ p_*/√(R(t))) ∫_0^R(t) [E_1(r^2/(4α t))/√(R(t)^2-r^2)] r dr = (2/√(π)) · Δτ_r-0√(R(t)),
where Δτ_r-0=f_rσ_0^'-τ_0 is the background stress change. In equation (<ref>), the integral term on the left-hand side is associated with an influx of potential energy towards the rupture front, which becomes available for the rupture to grow owing to the sole effect of overpressure due to the injection. Conversely, the term on the right-hand side, due to the background stress change, is solely responsible for resisting rupture advancement. Nondimensionalization of equation (<ref>) shows that the competition between both energy terms is quantified by a single dimensionless number, the so-called residual stress-injection parameter 𝒯_r= Δτ_r-0/(f_rΔ p_*), introduced first in <cit.>. Moreover, equation (<ref>) admits an analytical solution of the form R(t)=λ· L(t) <cit.>, with the asymptotes λ≃ 1/√(2𝒯_r) for nearly unstable ruptures (λ≫ 1, 𝒯_r ≪1), and λ≃ e^((2-γ-𝒯_r)/2)/2 for marginally pressurized ruptures (λ≪ 1, 𝒯_r ∼ 10).
To highlight how significant it is to analyze the end-member cases of nearly unstable (λ≫1) and marginally pressurized (λ≪1) ruptures throughout this work, we refer to their asymptotes for λ plotted in Fig. <ref>b, which nearly overlap and thus together quantify almost any rupture scenario.
This analytical solution for λ was first derived by Sáez et al. <cit.> for a fault interface with a constant friction coefficient (equation (21) in <cit.>). Here, in our upper-bound configuration, the mathematical solution is identical to the one presented in <cit.> provided that the constant friction coefficient f in <cit.> is understood as the residual value f_r of the slip-weakening friction law here.
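As a numerical illustration of this solution (our own addition), substituting r=u·L(t) and then u=λ sinθ reduces the energy balance above to ∫_0^{π/2} E_1(λ^2 sin^2θ) sinθ dθ = 𝒯_r, which can be solved for λ and compared against the two asymptotes; both the reduction and the script below are ours and should be read as a sketch.

```python
# Numerical sketch of the amplification factor lambda = R(t)/L(t).
# Solve  integral_0^{pi/2} E_1(lambda^2 sin^2 theta) sin(theta) dtheta = T_r
# for lambda, and compare with the two asymptotes quoted in the text.
import numpy as np
from scipy.special import exp1
from scipy.integrate import quad
from scipy.optimize import brentq

EULER_GAMMA = 0.5772156649015329

def lhs(lam):
    val, _ = quad(lambda th: exp1(lam**2 * np.sin(th)**2) * np.sin(th), 0.0, np.pi / 2)
    return val

def solve_lambda(T_r):
    return brentq(lambda lam: lhs(lam) - T_r, 1e-4, 1e4)

for T_r in [0.01, 0.1, 1.0, 5.0]:
    lam = solve_lambda(T_r)
    nearly_unstable = 1.0 / np.sqrt(2.0 * T_r)                              # lambda >> 1 asymptote
    marginally_pressurized = np.exp((2.0 - EULER_GAMMA - T_r) / 2.0) / 2.0  # lambda << 1 asymptote
    print(f"T_r={T_r:5.2f}  lambda={lam:8.4f}  "
          f"asymptotes: {nearly_unstable:8.4f} (nearly unstable), "
          f"{marginally_pressurized:8.4f} (marginally pressurized)")
```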
To make the link between the evolution of the rupture radius R(t) and the injected fluid volume V(t) in the most practically relevant, nearly unstable regime (λ≫1), we use the asymptote R(t)≃(1/√(2𝒯_r))L(t) in combination with the following expressions for the residual stress-injection parameter 𝒯_r=Δτ_r-0/f_rΔ p_*, overpressure intensity Δ p_*=Q_0η/4π kw, overpressure front L(t)=√(4α t), hydraulic diffusivity α=k/Sη, and injected fluid volume V(t)=Q_0 t. By doing so, we arrive at equation (<ref>) in the main text. For non-circular ruptures (ν≠0), building upon the work of Sáez et al. <cit.> for a constant friction coefficient, we obtain that the rupture front of our upper-bound model is well-approximated by an elliptical shape that becomes more elongated for increasing values of ν and decreasing values of 𝒯_r, with a maximum aspect ratio of 1/(1-ν) when 𝒯_r ≪ 1 and a minimum aspect ratio of (3-ν)/(3-2ν) when 𝒯_r ∼ 10. Other features of Sáez et al.'s model <cit.> such as the invariance of the rupture area with regard to the Poisson's ratio and the numerically-derived asymptotes for the quasi-elliptical fronts are also inherited here in the upper-bound model. In the shut-in stage (t>t_s), the overpressure is obtained by superposition simply as Δ p(r,t)=Δ p_*·[ E_1(r^2/4α t) - E_1(r^2/4α (t-t_s)) ]. The spatiotemporal evolution of overpressure has been studied in detail in <cit.>. Moreover, as already discussed in the main text, we reduced the upper-bound problem in the shut-in stage to a fault responding with a constant friction coefficient equal to f_r. Hence, our upper-bound model inherits all the results obtained by Sáez and Lecampion <cit.> who investigated extensively the propagation and arrest of post-injection aseismic slip on a fault obeying a constant friction coefficient. In particular, we take advantage of their understanding of the propagation and arrest of the slip front that ultimately determines the maximum size of unconditionally stable ruptures in our upper-bound model. Here, we have indeed expanded the work of Sáez and Lecampion <cit.> to account for an examination of the previously unknown evolution of the moment release during the shut-in stage (Fig. <ref>).
§.§ Asymptotics of moment release for nearly unstable and marginally pressurized ruptures
The scalar moment release M_0 at a given time t is given by <cit.>,
M_0(t)=μ∬_A_r(t)δ(x,y,t)dxdy,
where μ is the bulk shear modulus, δ is the current slip distribution, and A_r is the current rupture surface. To calculate the time-dependent slip distribution in the circular rupture case, we consider the quasi-static relation between fault slip δ and the associated elastic change of shear stress Δτ within an axisymmetric circular shear crack <cit.>:
δ(r,t)=4R(t)/πμ∫_r̅^1ξdξ/√(ξ^2-r̅^2)∫_0^1Δτ(sξ R(t),t)sds/√(1-s^2),
where r̅=r/R(t) is the normalized radial coordinate. Equation (<ref>) was originally derived for an internally-pressurized tensile circular crack with axisymmetric load <cit.>. Nevertheless, under the assumptions of uni-directional slip with axisymmetric magnitude and a Poisson's ratio ν=0, the shear crack problem is mathematically equivalent on the fault plane to its tensile counterpart <cit.>: crack opening being δ and crack-normal stress change being Δτ. In the limiting regime of a rupture propagating with zero fracture energy and at the residual friction level f_r, the change of shear stress is simply
Δτ(r,t)=τ_0-f_r[σ_0^'-Δ p(r,t)]=f_rΔ p(r,t)-Δτ_r-0,
where Δτ_r-0=f_rσ_0^'-τ_0 is the background stress change. Hence, for injection at a constant volumetric rate Q_0, the spatio-temporal evolution of slip for the end-member cases of nearly unstable (λ≫1) and marginally pressurized (λ≪1) ruptures turn out to be identical to the ones determined by Sáez et al. <cit.> for their so-called critically stressed regime (equation (26) in <cit.>) and marginally pressurized regime (equation (25) in <cit.>) respectively, as long as we interpret their constant friction coefficient f as f_r. The self-similar slip profiles can be written in a more convenient dimensionless form as:
δ(r,t)/δ_*(t)=D(r/R(t)), where the slip scale is δ_*(t)=Δτ_r-0R(t)/μ when λ≫1 and δ_*(t)=f_rΔ p_* R(t)/μ when λ≪1, and the profile function is D(x)=(4/π)(arccos(x)/x-√(1-x^2)) when λ≫1 and D(x)=(8/π)(√(1-x^2)-x·arccos(x)) when λ≪1.
Note that in the nearly unstable regime (λ≫1), we have recast equation (26) in <cit.> using the expressions L(t)=R(t)/λ and λ≃1/√(2𝒯_r). The nearly unstable asymptote for fault slip is plotted in Fig. <ref>b and compared to the numerical solution. Integration of the self-similar slip profiles via equation (<ref>) leads to the asymptotes for the moment release during the pressurization stage given in the main text: M_0(t)=(16/3)Δτ_r-0R(t)^3 when λ≫1, and M_0(t)=(16/9)f_rΔ p_* R(t)^3 when λ≪1. It is worth mentioning that the slip distribution of nearly unstable ruptures has a singularity (of order 1/r) at r=0. Strictly speaking, this asymptote corresponds to the solution of the so-called outer problem which is defined at distances r≫ L(t). An interior layer must be resolved at distances r∼ L(t) to obtain the finite slip at the injection point, which scales as δ_c(t)=f_rΔ p_* L(t)/μ <cit.> (see Fig. <ref>b). Nevertheless, this interior layer has no consequences in estimating the moment release. Indeed, the integrand in equation (<ref>) for such a slip distribution is non-singular so that after taking the limit L(t)/R(t)→0, one effectively recovers the actual asymptote for the moment release. The details of the interior layer are therefore irrelevant to the calculation of M_0 in this limit.
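As an independent numerical check (our own addition), the short script below integrates the two self-similar profiles D(x) over the rupture disk, M_0=μ∬δ dA with δ=δ_* D(r/R), and recovers the prefactors 16/3 and 16/9 quoted above (the shear modulus μ cancels).

```python
# Quick numerical check of the moment-release prefactors.
# With r = R*x:  M_0 / (stress_scale * R^3) = 2*pi * integral_0^1 D(x) * x dx.
import numpy as np
from scipy.integrate import quad

def D_nearly_unstable(x):
    return (4.0 / np.pi) * (np.arccos(x) / x - np.sqrt(1.0 - x**2))

def D_marginally_pressurized(x):
    return (8.0 / np.pi) * (np.sqrt(1.0 - x**2) - x * np.arccos(x))

for name, D, expected in [("nearly unstable", D_nearly_unstable, 16.0 / 3.0),
                          ("marginally pressurized", D_marginally_pressurized, 16.0 / 9.0)]:
    coeff, _ = quad(lambda x: 2.0 * np.pi * D(x) * x, 0.0, 1.0)
    print(f"{name:>22s}: coefficient = {coeff:.6f}  (expected {expected:.6f})")
```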
§.§ Relation between fluid-injection force and injected fluid volume for arbitrary fluid sources
Under the assumptions of our model, the displacement field u induced by the fluid injection into the poroelastic fault zone is irrotational ∇×u=0 <cit.>. Therefore, the variation in fluid content ζ, which corresponds to the change of fluid volume per unit volume of porous material with respect to an initial state (here, t=0), satisfies the following constitutive relation with the pore-fluid overpressure Δ p (eq. 96, <cit.>),
ζ=S Δ p,
where S is the so-called oedometric storage coefficient representing the variation of fluid content caused by a unit pore pressure change under uniaxial strain and constant normal stress in the direction of the strain <cit.>, here, the z-axis (Fig. <ref>c).
S accounts for the effects of fluid, pore, and bulk compressibilities of the fault zone, and is equal to <cit.>
S=1/M+b^2(1-2ν)/(2(1-ν)μ),
where M is the Biot's modulus and b the Biot's coefficient.
To obtain the cumulative injected fluid volume at a given time, V(t), we just sum up changes in fluid volume all over the spatial domain of interest, say Ω, at a given time t, that is, V(t)=∫_ΩζdΩ. In our model, the fluid flow problem is axisymmetric and the fault-zone width w is uniform, such that the differential of the volume is simply dΩ=2π w rdr in cylindrical coordinates. With these definitions, we can now integrate (<ref>) over the entire fault-zone volume to obtain the following expression for the injected fluid volume valid for an arbitrary fluid injection:
V(t)=S w · 2π∫_0^∞Δ p(r,t)rdr.
By defining the normal force induced by the fluid injection over the slip surface (simply equal to the integral of the overpressure over the fault plane) as:
F(t)=2π∫_0^∞Δ p(r,t)rdr,
we arrive at the following relation between the fluid-injection force and injected fluid volume:
F(t)=V(t)/wS.
Expressions of a similar kind to (<ref>) have been reported in previous studies <cit.>. For example, McGarr <cit.> considered a similar relation except that his storativity-like term is the inverse of the elastic bulk modulus. Garagash <cit.> also proposed a similar expression to (<ref>) but accounting only for pore compressibility. Finally, the relation considered by Shapiro et al. <cit.> is the closest to our expression, including the oedometric storage coefficient.
§.§ Scaling relations for nearly unstable ruptures accounting for arbitrary fluid injections
Nearly unstable ruptures (λ≫1) provide the upper bound of most practical interest. Here, we generalize such an upper bound for the rupture size and moment release to account for an arbitrary fluid injection. In the pressurization stage (t≤ t_s), the reduction of fault strength due to fluid injection in the so-called outer problem (r≫ L(t)) can be effectively approximated as a point force (e.g., <cit.>),
f_rΔ p(r,t) ≈ f_r F(t)·δ^dirac(r)/(2π r) = (f_r V(t)/(wS))·δ^dirac(r)/(2π r),
where F(t) is the fluid-injection normal force, equation (<ref>), which is related to the cumulative injected fluid volume via equation (<ref>). Substituting equation (<ref>) into the stress change (<ref>), and then the latter into the double integral, equation (<ref>), we obtain upon evaluating those integrals an asymptotic upper bound for the spatiotemporal evolution of fault slip as:
δ(r,t)=(4/π)(Δτ_r-0/μ)R(t)[ (f_rV(t)/(2π w S Δτ_r-0 R(t)^2))·(arccos(r/R(t))/(r/R(t))) - √(1-(r/R(t))^2)],
The propagation condition for a rupture with negligible fracture energy, equation (<ref>), can be alternatively written in terms of the slip behavior near the rupture front as <cit.>,
lim_r→ R(t)^-∂δ(r,t)/∂ r√(R(t)-r)=0.
This imposes a constraint in the slip distribution (<ref>) that can be also seen, in a limiting sense, as eliminating any stress singularity at the rupture front. By differentiating equation (<ref>) with respect to r, and then applying the propagation condition (<ref>), we obtain the following relation:
R(t)=√(f_r V(t)/(2π wS Δτ_r-0)),
which is valid for an arbitrary fluid injection.
Equation (<ref>) is identical to equation (<ref>) in the main text, which was originally derived for injection at constant flow rate. It thus represents a generalization of the insightful relation between the evolution of the rupture radius and the cumulative injected fluid volume, equation (<ref>), for arbitrary fluid injections. Note that alternatively, equation (<ref>) can be derived through the rupture propagation condition imposed over the stress change, equation (<ref>). It only takes to replace the particular overpressure solution for injection at a constant volumetric rate, Δ p(r,t)=Δ p_*E_1(r^2/4α t), by the more general point-force representation, equation (<ref>). We report here the derivation based on the slip distribution because it makes now the calculation of the moment release for arbitrary fluid injections straightforward. Indeed, by substituting (<ref>) into (<ref>), we obtain the slip distribution satisfying the zero-fracture-energy condition of our upper-bound model. Upon integrating the resulting slip profile via equation (<ref>), we obtain the following final expression for the moment release:
M_0(t)=[16/(3(2π)^(3/2))]·(V(t)^(3/2)/√(Δτ_r-0))·(f_r/wS)^(3/2),
which is identical to equation (<ref>) in the main text, thus demonstrating that the relation between the moment release, in-situ conditions, and injected fluid volume (<ref>), holds for arbitrary fluid injections.
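The following minimal sketch (our own illustration, with placeholder parameter values) implements the two upper-bound relations above for an arbitrary cumulative injected volume V(t).

```python
# Minimal sketch of the upper-bound scaling relations derived above, valid in
# the nearly unstable regime for an arbitrary injection history V(t).
# The parameter values below are illustrative placeholders only.
import numpy as np

def rupture_radius_bound(V, f_r, wS, dtau):
    """R(t) = sqrt(f_r * V(t) / (2*pi * wS * dtau))   [m]"""
    return np.sqrt(f_r * V / (2.0 * np.pi * wS * dtau))

def moment_release_bound(V, f_r, wS, dtau):
    """M_0(t) = 16/(3*(2*pi)**1.5) * V**1.5 * (f_r/wS)**1.5 / sqrt(dtau)   [N m]"""
    return 16.0 / (3.0 * (2.0 * np.pi) ** 1.5) * V ** 1.5 * (f_r / wS) ** 1.5 / np.sqrt(dtau)

# Illustrative (assumed) in-situ parameters:
f_r, wS, dtau = 0.6, 1e-7, 1e6    # friction [-], storativity [m/Pa], background stress change [Pa]
V = np.array([1e2, 1e3, 1e4])     # cumulative injected volumes [m^3]

R = rupture_radius_bound(V, f_r, wS, dtau)
M0 = moment_release_bound(V, f_r, wS, dtau)
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)   # Hanks-Kanamori, for reference
for v, r, m0, mw in zip(V, R, M0, Mw):
    print(f"V={v:8.0f} m^3  ->  R_max ~ {r:7.1f} m,  M_0 ~ {m0:.2e} N m  (M_w ~ {mw:.1f})")
```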
Finally, we note that some constraints on the fluid source are required to ensure that some important model assumptions are satisfied. For example, the rupture must always propagate in a crack-like mode during the pressurization stage. This is necessary so that equations (<ref>) and (<ref>) (in combination with (<ref>)) always remain valid. Essentially, crack-like propagation allows for substituting the shear stress acting within the ruptured surface directly with the fault shear strength at any time during propagation. This would not be valid, for instance, for the pulse-like ruptures characterizing the shut-in stage <cit.>, where, behind the locking front, equating the shear stress to the fault strength is no longer valid. As discussed in the main text, crack-like propagation holds at least in one relevant scenario, where the pore pressure increases monotonically everywhere within the fault zone. Another assumption in the upper-bound rationale of our model relies on the following property of unconditionally stable ruptures: the effect of the fracture energy in the front-localized energy balance must diminish as the rupture grows and, ultimately, become negligible <cit.>. Although this is certainly valid even in the case of arbitrary fluid injections, it relies on an implicit assumption of the slip-weakening model, namely, the fracture energy being constant. Our theoretical framework can, in principle, account for non-constant and non-uniform fracture energy. We do not account for fracture-energy heterogeneity for the same reason that we do not account for stress or other kinds of heterogeneities in our model: we aim to provide fundamental, first-order insights into the problem at hand. Moreover, we also do not consider a possible scale dependence of the fracture energy. The scale-dependency of fracture energy for seismic ruptures is a topic of active research (<cit.> and references therein). Although we expect this phenomenon to also be present in aseismic ruptures, to the best of our knowledge, there is no experimental or observational evidence suggesting such behavior for slow frictional ruptures. We therefore refrain from exploring the theoretical implications of this hypothetical physical ingredient at the moment.
§.§ Numerical methods
All the numerical calculations in this study have been conducted via the
boundary-element-based method described in <cit.>. For the general case of non-circular ruptures, we use the fully three-dimensional method presented in <cit.>. For the particular case of axisymmetric, circular ruptures, we use a more efficient axisymmetric version of the method presented in <cit.>.
§.§ Acknowledgements
The authors thank J.-P. Avouac for providing comments on an earlier manuscript version.
§.§ Funding
The results were obtained within the EMOD project (Engineering model for hydraulic stimulation). The EMOD project benefits from a grant (research contract no. SI/502081-01) and an exploration subsidy (contract no. MF-021-GEO-ERK) of the Swiss federal office of energy for the EGS geothermal project in Haute-Sorne, canton of Jura, which is gratefully acknowledged. A.S. was partially funded by the Federal Commission for Scholarships for Foreign Students via the Swiss Government Excellence Scholarship. F.P. acknowledges funding from the European Union (ERC Starting Grant HOPE num. 101041966).
§.§ Author contributions
A.S. developed the main ideas of the upper-bound model, conducted the theoretical derivations and numerical calculations, compiled and produced the dataset of slow slip events, interpreted the results, carried out the discussion, and wrote the manuscript. F.P. provided the dataset of cm-scale laboratory events, contributed to the interpretation of the laboratory data, and provided comments on the manuscript. B.L. helped to develop some of the main ideas and provided comments on the manuscript.
§.§ Competing interests
The authors declare that they have no competing interests.
|
http://arxiv.org/abs/2409.03671v1 | 20240905162442 | TRACE-cs: Trustworthy Reasoning for Contrastive Explanations in Course Scheduling Problems | [
"Stylianos Loukas Vasileiou",
"William Yeoh"
] | cs.AI | [
"cs.AI"
] |
Washington University in St. Louis
{v.stylianos, wyeoh}@wustl.edu
|
http://arxiv.org/abs/2409.03181v1 | 20240905021538 | Wrapped Gaussian Process Functional Regression Model for Batch Data on Riemannian Manifolds | [
"Jinzhao Liu",
"Chao Liu",
"Jian Qing Shi",
"Tom Nye"
] | stat.ME | [
"stat.ME"
] |
§ ABSTRACT
Regression is an essential and fundamental methodology in statistical analysis. The majority of the literature focuses on linear and nonlinear regression in the context of Euclidean space. However, regression models in non-Euclidean spaces deserve more attention due to the collection of increasing volumes of manifold-valued data. In this context, this paper proposes a concurrent functional regression model for batch data on Riemannian manifolds by estimating both mean structure and covariance structure simultaneously. The response variable is assumed to follow a wrapped Gaussian process distribution. Nonlinear relationships between manifold-valued response variables and multiple Euclidean covariates can be captured by this model, in which the covariates can be functional and/or scalar. The performance of our model has been tested on both simulated data and real data, showing it is an effective and efficient tool in conducting functional data regression on Riemannian manifolds.
§ INTRODUCTION
Regression models are ubiquitous and powerful tools in data analysis to reveal the relationship between independent variables and dependent variables. Most well-known regression models are formulated under the assumption that the variables all lie in Euclidean vector spaces. However, statistical models of manifold-valued data are gaining popularity in various fields of scientific analysis, such as computational social science, medical imaging analysis and computer vision <cit.>. Together with the significant increase in manifold-valued data observed by current instrumentation, generalizations of Euclidean regression models to manifold-valued settings are needed. If the manifold structure is not accounted for, then the reliability of model predictions in the target spaces is compromised. Consider, for instance, Figure <ref>, where a regression model is trained on manifold-valued data (represented by the black curves on a sphere S^2). The predictions of this model (illustrated by the blue curve) deviate from the manifold. However, when the manifold structure is incorporated into the model, the predictions (depicted by the red curve) align with the target manifold. This underscores the necessity of employing non-Euclidean structure for modelling manifold-valued data. An additional example demonstrating this principle in Kendall's shape space is provided in the Appendix.
In recent years, functional data have garnered increasing attention due to the continuous or intermittent recording of data at discrete time points. These data often exhibit non-linearity, and a common assumption is that they lie on a nonlinear manifold. For instance, image data, which can be influenced by random domain shifts, are known to reside in such manifolds . However, analyzing functional data poses challenges, particularly when dealing with spaces lacking global or local linear structures. Traditional methods like functional principal component analysis become infeasible in these cases. One specific difficulty arises from defining a suitable metric for data residing on a Riemannian manifold. Researchers have explored alternative approaches to address this issue. For example, <cit.> propose using point-wise Fréchet means for non-Euclidean time-varying random objects in general metric spaces. Additionally, they introduce point-wise distance trajectories between individual time courses and the estimated Fréchet mean trajectory. This innovative approach allows for a representation of time-varying random objects using functional data. In summary, the study of functional data on nonlinear manifolds presents exciting opportunities for advancing statistical methodologies and understanding complex data structures.
According to the manifold hypothesis, functional data can be mapped onto a low-dimensional nonlinear manifold, which is also known as manifold learning <cit.>.
Gaussian processes (GPs) have proven to be powerful tools for modeling complex data structures. While GPs are commonly applied in Euclidean spaces, recent research has extended their applicability to Riemannian manifolds. <cit.> utilize the spectral theory of the Laplace-Beltrami operator to compute Matérn kernels on Riemannian manifolds. These kernels, widely used in physical sciences, capture both smoothness and spatial correlations. The spectral approach allows for efficient computation of Matérn covariance functions, enabling GPs to model data residing on curved surfaces. <cit.> introduce a vector-valued GP equipped with a matrix-valued kernel. This novel framework enables modeling of vector fields on Riemannian manifolds. Applications include geodesic flow modeling, fluid dynamics, and other scenarios where vector-valued data arise. Recently, <cit.> and <cit.> extend stationary GPs to compact Lie groups and non-compact symmetric spaces. These spaces play a critical role in spatiotemporal modeling. Those models define a metric based on geodesic distances between predictors, accommodating both functional and scalar predictors. In addition, <cit.> propose a regression model for functional data on manifolds. While existing approaches assume Euclidean response variables, certain problems involve responses residing directly on Riemannian manifolds. For instance, predicting flight trajectories based on flight time. In contrast to previous models, we propose a regression framework that maps Euclidean-valued predictors to manifold-valued functional responses.
A number of statistical approaches to analysis of manifold-valued data exist in the literature, especially in the context of principal component analysis (PCA). For example, <cit.> introduced principal geodesic analysis which is a generalisation of PCA on Riemannian manifolds. It replaces the first principal component with a principal geodesic constrained to pass through the intrinsic mean, with calculations performed approximately in Euclidean tangent space. <cit.> proposed a PCA technique for functional data on 2-dimensional Riemannian manifolds in which the authors adopt a regularization method by a smoothing penalty coherent with geodesic distances. Subsequently, <cit.> formulated Riemannian functional principal component analysis by mapping data to tangent spaces via the Riemannian logarithm map and then performing multivariate functional principal component analysis within the tangent space.
In addition to generalizations of PCA to manifold data, generalizations of linear regression have also been studied. Similar to manifold-based PCA, the regression line is typically replaced with a geodesic within the manifold, which gives rise to more challenging optimisation problems in order to fit the models than in the standard Euclidean setting. Geodesic regression <cit.> is a generalization of linear regression to a manifold-valued setting in which a univariate independent variable is considered in ℝ and dependent variables lie in a Riemannian manifold. The author derived a gradient descent algorithm for model fitting via derivatives of the exponential map and Jacobi fields, because the least-squares method has no analytical solution under this setting. <cit.> then extended geodesic regression to the multivariate setting in which the independent variables are in ℝ^n and the dependent variable is still manifold-valued. In addition, the authors proposed a variational gradient descent method based on parallel transport which is more convenient for high-dimensional independent variables. In the context of shape analysis, <cit.> provided an analytic approach to geodesic regression in Kendall's shape space. Multiple linear regression has also been generalised by <cit.> for complex random objects in metric spaces with independent variables in ℝ^n. Using a least-squares approach, the authors derived asymptotic rates of convergence.
However, in many applications, non-Euclidean data cannot always be assumed to follow a linear relationship. <cit.> introduced a kernel-based nonlinear regression model for both manifold-valued independent variables and manifold-valued dependent variables. <cit.> extended polynomial regression to Riemannian manifolds by introducing a class of curves on Riemannian manifolds which generalize geodesics and are analogs of polynomial curves in Euclidean space. <cit.> developed a regression model for a manifold-valued response variable in Riemannian symmetric spaces and covariates in Euclidean spaces with applications in medical imaging analysis. Moreover, <cit.> introduced an additive linear model for manifold-valued data which is based on the exponential map. In particular, they transform the manifold-valued data onto tangent spaces and then estimate the parameters of the additive linear model by a generalised least squares method.
Gaussian process regression (GPR) is a powerful non-linear and non-parametric model widely used for learning probability distributions over unknown functions. Operating within a Bayesian framework, GPR assumes that both the prior and likelihood follow normal distributions, allowing for the derivation of a posterior distribution that is also Gaussian.
This has been extended to solve problems involving non-Gaussian data <cit.> and the use of other prior processes <cit.>. In addition, researchers have extended GPR to accommodate multi-output scenarios. Notable contributions include the work of <cit.>, <cit.>, and <cit.>, who have explored multi-output Gaussian process regression. Over the past decade, substantial progress has been made in developing Gaussian process regression models specifically tailored for manifolds, thereby expanding the model's scope and applicability. One intriguing approach is the wrapped Gaussian process regression proposed by <cit.>. In this framework, the Riemannian manifold is linearized via the logarithm map, projecting data points onto a tangent space. This results in a non-parametric regression model with a probabilistic foundation. Importantly, the computational cost of wrapped Gaussian process regression remains relatively low, as it involves additional calculations only for the exponential map and logarithm map of each manifold-valued data point. The key innovation lies in defining a Gaussian distribution directly on the Riemannian manifold, leveraging insights from real-valued multivariate Gaussian distributions and the exponential map. This novel approach opens up avenues for Gaussian process regression on Riemannian manifolds. Within the Bayesian framework, researchers have derived manifold-valued posterior distributions, albeit under certain assumptions (e.g., the requirement of infinite injectivity radius).
To the best of our knowledge, there is little current literature about regression models for functional batch data on Riemannian manifolds within a probabilistic framework. In light of this, we provide a nonlinear non-parametric regression model with uncertainty for batch data on a smooth Riemannian manifold. Called the wrapped Gaussian process functional regression model (WGPFR), it models the relationship of a functional manifold-valued response and mixed (scalar and functional) Euclidean-valued predictors by representing the mean structure and the covariance structure as different model terms. Specifically, a functional regression model is used for the mean structure with a functional manifold-valued response and scalar covariates, while a wrapped Gaussian process has been used for the covariance structure with functional covariates based on <cit.>, who extend Gaussian process regression to Riemannian manifolds. In this way, the proposed WGPFR model extends the research field of non-parametric regression on manifolds.
The rest of this paper is organized as follows. In section <ref>, some basic concepts of functional data, Riemannian manifolds, Gaussian process regression and wrapped Gaussian process regression are reviewed. In section <ref>, we propose our model together with a method for inference via an efficient iterative algorithm. Numerical experiments and real data analysis are reported in Section <ref>. Finally, we draw conclusions and discuss the results in Section <ref>.
§ BACKGROUND
§.§ Functional Data
Functional data analysis is a field of study wherein each observation is a random function, manifesting as a curve, surface, or other variations over a continuum. This analysis is particularly pertinent to data recorded over identical intervals, with consistent frequency and numerous repeated observations. While data are frequently modelled by parametric models incorporating randomness in the realms of statistics or machine learning, functional data analysis contemplates a smooth process observed at discrete time points, offering greater flexibility and reduced parametric constraints, see e.g. <cit.> and <cit.>.
Furthermore, functional data can be procured from a diverse range of domains. For instance, in finance, volatility surfaces serve as a source of functional data. In the field of biomedical research, time-varying physiological signals yield functional data. Functional Magnetic Resonance Imaging (FMRI) is yet another domain that generates functional data. These examples underscore the broad applicability and interdisciplinary nature of functional data analysis.
§.§ Preliminaries for Riemannian Manifolds
In this section, we review a few basic concepts of Riemannian manifolds, wrapped Gaussian distributions and wrapped Gaussian processes on manifolds, and then set up basic notation. More detail on basic Riemannian geometry can be found in standard texts, such as that by <cit.>.
§.§.§ Concepts and Notation
Given a d-dimensional smooth differentiable manifold ℳ and a tangent space T_pℳ for p∈ℳ, a Riemannian metric g_p:T_pℳ× T_pℳ→ℝ on ℳ is a family of positive definite inner products which vary smoothly with p∈ℳ. Equipped with this Riemannian metric, we call the pair (ℳ, g) a Riemannian manifold.
The tangent bundle of ℳ is defined as the set 𝒯ℳ=∪_p∈ℳ ({p}× T_pℳ), where p is a point in ℳ and v∈ T_pℳ is a tangent vector at p. If γ is a smooth curve in ℳ, then its length is defined as the integral of ‖∂γ/∂ t‖, where the norm is computed via the Riemannian metric at γ(t). For any pair (p,v)∈𝒯ℳ with v sufficiently small, there is a unique curve γ:[0,1]→ℳ of minimal length between two points p∈ℳ and q∈ℳ, with initial conditions γ(0)=p, γ'(0)=v and γ(1)=q. Such a curve is called a geodesic. The exponential map Exp(p,v): ℳ× T_pℳ→ℳ at p∈ℳ is defined as Exp(p, v)=γ(1) for sufficiently small v∈ T_pℳ.
Consider the ball of largest radius around the origin in T_pℳ on which Exp(p,·) is defined and let V(p)⊂ℳ denote the image of Exp(p,·) on this ball.
Then the exponential map has an inverse on V(p), called the Riemannian logarithm map at p, Log(p,·): V(p) → T_pℳ. For any point q∈ V(p), the geodesic distance between p∈ℳ and q∈ V(p) is given by d_ℳ(p,q)=‖Log(p,q)‖.
Figure <ref> shows the concepts above. For example, p and q are two points on a Riemannian manifold ℳ, the tangent space at p is denoted by T_p(ℳ), and γ(t) refers to a geodesic between p and q. The tangent vector v points from p to q in T_p(ℳ), and moving from p along v for a distance ‖v‖ reaches q; these two operations correspond to Log(p, q) and Exp(p, v) respectively.
The exponential map and its inverse are analogs of vector space operations in the following way. Suppose p_ℳ and q_ℳ refer to two different points on ℳ linked by a geodesic γ, and suppose the tangent vector from the former to the latter is denoted by pq_ℳ=(dγ/dt)_p. In addition, suppose p_E and q_E refer to two Euclidean data points and the vector from the former to the latter is given by pq_E. Table <ref> shows the relationship of operations between Riemannian manifolds and Euclidean spaces.
If (ℳ, d_ℳ) is a complete metric space, that is, every Cauchy sequence of points in ℳ has a limit that is also in ℳ, then the Hopf-Rinow theorem shows that every pair of points p,q∈ℳ is joined by at least one geodesic, a condition known as geodesic completeness, and, equivalently, Exp(p,v) is defined for all (p,v)∈𝒯ℳ. However, given two points on ℳ, the geodesic between the points may not be unique, even if ℳ is geodesically complete.
The injectivity radius at p∈ℳ, denoted inj(p), is defined to be the radius of the largest ball at the origin of T_pℳ on which Exp(p,·) is a diffeomorphism. It follows that the geodesic ball at p of radius inj(p) is contained within V(p), on which wrapped Gaussian process regression can be derived in an analytical form. In the following, we assume the data are on a Riemannian manifold with infinite injectivity radius, such as spheres and Kendall's shape spaces.
Some examples of Riemannian manifolds are shown in the Appendix, including explicit formulae for the exponential and logarithm maps in certain cases.
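As a concrete illustration of these maps (our own addition, not taken from the Appendix), the following sketch implements the closed-form exponential and logarithm maps on the unit sphere S^2 embedded in ℝ^3.

```python
# A minimal sketch of the exponential and logarithm maps on the unit sphere
# S^{d-1} in R^d, using the standard closed-form expressions.
import numpy as np

def sphere_exp(p, v):
    """Exp(p, v): move from p along the tangent vector v (v must satisfy <p, v> = 0)."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p.copy()
    return np.cos(norm_v) * p + np.sin(norm_v) * v / norm_v

def sphere_log(p, q):
    """Log(p, q): tangent vector at p pointing to q, with length d(p, q) = arccos(<p, q>)."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros_like(p)
    u = q - cos_theta * p            # component of q orthogonal to p
    return theta * u / np.linalg.norm(u)

# Round trip: Exp(p, Log(p, q)) should recover q.
p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
v = sphere_log(p, q)
print(np.allclose(sphere_exp(p, v), q))   # True
print(np.linalg.norm(v))                  # geodesic distance d(p, q)
```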
§.§ Functional Data on Riemannian Manifolds
To study functional data located on Riemannian manifolds, <cit.> considered a d-dimensional complete Riemannian submanifold ℳ^# of a Euclidean space ℝ^d_0, with a geodesic distance d_ℳ, and a probability space (Ω, 𝒜, P) with sample space Ω, σ-algebra 𝒜 and probability P. The sample space of all ℳ-valued continuous functions on a compact interval 𝒯⊂ℝ is denoted by ℱ={ y: 𝒯→ℳ | y ∈𝒞(𝒯)}. Thus, we define ℳ-valued random functions to be functions y(t, ω), y: 𝒯×Ω→ℳ, satisfying y(·, ω) ∈ℱ. Let ℋ={v: 𝒯→ ℝ^d | ∫_𝒯 v(t)^T v(t)dt < ∞} refer to an ambient L^2 Hilbert space of d-dimensional square integrable functions on 𝒯. The inner product is defined as ⟨ v,u⟩=∫_𝒯 v(t)^T u(t)dt and the norm is given by ‖v‖=⟨ v,v⟩^(1/2) for v,u∈ℋ.
§.§ Gaussian Process Regression and Wrapped Gaussian Process Regression
§.§.§ Euclidean Gaussian Process Regression
As described in <cit.>, a general Gaussian process regression model is defined as
y=f(x)+ϵ
where x∈ℝ^Q, y ∈ℝ, and ϵ∼𝒩(0,σ^2).
The regression function f is assumed to follow a Gaussian process prior, that is
f(·)∼ GP(μ(·),k(·,·;θ))
where μ(·)=E[f(·)] is the mean function of Gaussian process, and k(·,·;θ) is the covariance function with hyper-parameter θ.
Given training data 𝒟 = {x_i, y_i, i=1,…,n}, we have a multivariate Gaussian distribution, that is
(f(x_1),...,f(x_n))∼𝒩(μ(X),K(X,X))
where X refers to (x_1,...,x_n), μ(X) = (μ(x_1),...,μ(x_n)) refers to a mean vector, and K(X,X) refers to a n× n covariance matrix in which the entry of the (i,j)-th element is k(x_i,x_j;θ).
To obtain the predictive distribution at a new input x^*, it is convenient to take advantage of the properties of Gaussian distribution:
[ y; f(x^*) ] ∼𝒩( [ μ(X); μ(x^*) ], [ K(X,X)+σ^2I, K(x^*,X)^T; K(x^*,X), K(x^*,x^*) ] )
where K(x^*,X) is a 1× n covariance matrix, K(x^*,x^*) is scalar and their entries are similar to K(X,X). Fortunately, the predictive distribution has an analytical form which is
p(f(x^*)|𝒟,x^*)∼𝒩( K(x^*,X)(K(X,X)+σ^2I)^-1(y-μ(X))+μ(x^*), K(x^*,x^*)-K(x^*,X)(K(X,X)+σ^2I)^-1K(x^*,X)^T )
For more details of Gaussian process regression, see <cit.>.
The kernel function in the covariance function can be chosen from a parametric family, such as the radial basis function (RBF) kernel:
K(x,x';θ)=λ_1^2 exp(-‖x-x'‖^2/(2λ_2))
with the hyper-parameter θ={λ_1,λ_2} chosen by maximizing the marginal likelihood. Specifically, the hyper-parameters can be estimated by conjugate gradient descent algorithm <cit.> or Markov Chain Monte Carlo <cit.>.
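For concreteness, a minimal NumPy sketch of the predictive equations above with the RBF kernel is given below; the hyper-parameters are fixed rather than optimized, and the implementation is ours rather than taken from the cited work.

```python
# Minimal NumPy sketch of the Euclidean GP predictive equations with an RBF
# kernel and fixed (not optimized) hyper-parameters lambda_1, lambda_2.
import numpy as np

def rbf_kernel(A, B, lam1=1.0, lam2=0.5):
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return lam1**2 * np.exp(-sq_dists / (2.0 * lam2))

def gp_predict(X, y, X_star, sigma2=0.01, mean_fn=lambda X: np.zeros(len(X))):
    K = rbf_kernel(X, X) + sigma2 * np.eye(len(X))
    K_star = rbf_kernel(X_star, X)              # n* x n
    K_ss = rbf_kernel(X_star, X_star)
    alpha = np.linalg.solve(K, y - mean_fn(X))
    mean = K_star @ alpha + mean_fn(X_star)
    cov = K_ss - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
X_star = np.linspace(-3, 3, 5)[:, None]
mean, cov = gp_predict(X, y, X_star)
print(mean)                      # posterior predictive mean at the test inputs
print(np.sqrt(np.diag(cov)))     # pointwise predictive standard deviation
```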
§.§.§ Wrapped Gaussian Distribution
In this subsection, we first review the definitions of the wrapped Gaussian distribution and wrapped Gaussian process on a Riemannian manifold ℳ, and then review wrapped Gaussian process regression (WGPR) which is introduced by <cit.>.
Suppose ℳ denotes a d-dimensional Riemannian manifold and X is a random variable on ℳ. For μ∈ℳ and a symmetric positive definite matrix K∈ℝ^d× d, we can define a wrapped Gaussian distribution as follows
X = Exp(μ,v), v∼𝒩(0,K)
where μ is called the basepoint and v is a tangent vector at μ which follows a multivariate Gaussian distribution with zero mean and covariance matrix K (see Section 3.1 in <cit.>).
The wrapped Gaussian distribution is formally denoted as
X∼𝒩_ℳ(μ,K).
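To illustrate the definition, the following sketch (ours) draws samples from a wrapped Gaussian 𝒩_ℳ(μ,K) on S^2 by sampling v∼𝒩(0,K) in an orthonormal basis of the tangent space at μ and mapping through the exponential map; the basis construction is an implementation choice, not part of the definition.

```python
# Sketch of sampling from a wrapped Gaussian N_M(mu, K) on the unit sphere S^2.
import numpy as np

def tangent_basis(mu):
    """Two orthonormal vectors spanning T_mu(S^2)."""
    helper = np.array([1.0, 0.0, 0.0]) if abs(mu[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = helper - np.dot(helper, mu) * mu
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    return np.stack([e1, e2])          # shape (2, 3)

def sphere_exp(p, v):
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def sample_wrapped_gaussian(mu, K, n_samples, rng):
    E = tangent_basis(mu)                                              # (2, 3)
    coeffs = rng.multivariate_normal(np.zeros(2), K, size=n_samples)   # v ~ N(0, K)
    return np.array([sphere_exp(mu, c @ E) for c in coeffs])

rng = np.random.default_rng(1)
mu = np.array([0.0, 0.0, 1.0])
K = np.diag([0.05, 0.02])
samples = sample_wrapped_gaussian(mu, K, 5, rng)
print(np.linalg.norm(samples, axis=1))   # all ~1: samples stay on the sphere
```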
<cit.> describe a manifold-valued Gaussian process constructed from the wrapped Gaussian distribution, and the following material is based on their work. A collection (f(X_1),...,f(X_n)) of random points on a manifold ℳ indexed over a set Ω is a wrapped Gaussian process if every subcollection follows a jointly wrapped Gaussian distribution on ℳ.
It is denoted as
f(·)∼𝒢𝒫_ℳ(μ(·),k(·,·;θ))
where μ(·)=E[f(·)] is the mean function on a Riemannian manifold, and k(·,·;θ) is a covariance function with hyper-parameter θ on a tangent space at μ(·).
Consider the wrapped Gaussian process regression model which relates the manifold-valued response p∈ℳ to the vector-valued predictor x∈ℝ^Q through a link function f:ℝ^Q→ℳ which we assume to be a wrapped Gaussian process.
Analogously to Gaussian process regression, the authors also derive inference for wrapped Gaussian process regression in a Bayesian framework. Specifically,
given the training data 𝒟_ℳ={(x_i,p_i) | x_i∈ℝ^Q, p_i∈ℳ, i=1,⋯,n}, the prior for f(·) is assumed to be a wrapped Gaussian process, that is, f(·)∼𝒢𝒫_ℳ(μ(·),K(·,·;θ)). Then the joint distribution between the training outputs p=(p_1,⋯,p_n) and the test output p^* at x^* is given by
([ p^*; p ]) ∼𝒩_ℳ(([ μ^*; μ ]),([ K^* *, K^*; K^*⊤, K ]))
where μ^*=μ(x^*), μ = (μ(x_1),...,μ(x_n)), K=K(x,x),
K^*=K(x^*,x), K^* *=K(x^*, x^*) with x=(x_1,⋯,x_n).
Therefore, by Theorem 1 in <cit.>, the predictive distribution for new data given the training data can be obtained by
p^*|p∼Exp(μ^*,v^*),
where
v^*∼𝒩_ℳ(m^*, Σ^*), m^*=K^* K^-1 Log(μ,p), Σ^*=K^**-K^* K^-1 K^*⊤.
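As an illustration of the construction X = Exp(μ, v) with v ∼𝒩(0, K), the sketch below draws wrapped Gaussian samples on S^2, using the closed-form exponential map for the sphere given in the Appendix; the tangent space at μ is parameterized by an (illustrative) orthonormal basis, so K here is a 2×2 matrix, and all names are placeholders rather than anything from the paper.

```python
import numpy as np

def exp_map_s2(p, v):
    # Exponential map on S^2: Exp(p, v) for a tangent vector v at p.
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def sample_wrapped_gaussian_s2(mu, K, size=1, seed=None):
    """Draw X = Exp(mu, v) with v ~ N(0, K) expressed in a tangent basis at mu."""
    rng = np.random.default_rng(seed)
    # Orthonormal basis (e1, e2) of the tangent space at mu.
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = a - np.dot(a, mu) * mu
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    coeffs = rng.multivariate_normal(np.zeros(2), K, size=size)  # tangent coordinates, K is 2x2
    return np.array([exp_map_s2(mu, c[0] * e1 + c[1] * e2) for c in coeffs])
```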
Similar to the comparison of operators between Riemannian manifolds and Euclidean spaces, Table <ref> shows the analogs of Gaussian processes between these two spaces.
§ MAIN MODEL AND INFERENCE
Gaussian process regression is a powerful non-parametric tool in statistics and machine learning. In Section <ref>, we adapt GPR to functional data on manifolds by proposing the WGPFR model for batch data on ℳ. Inference of the WGPFR model is presented in Section <ref>, where the statistical properties of the model parameters are studied and an efficient algorithm for inference is proposed.
§.§ Wrapped Gaussian Process Functional Regression Model
The data consist of discrete observations of M random functional curves on ℳ, so that the m-th curve is denoted as y_m(t), t ∈𝒯, m=1, …, M. We assume that all curves are observed at a set of times t_i, i=1,…,N. Throughout, i will index time and m will index curves.
The set of observations is therefore y_mi=y_m(t_i), i=1,…,N, m=1,…,M. Associated with the m-th curve, we assume there is a real vector-valued functional covariate x_m(t)∈ℝ^Q. We assume, as for the functional responses y_m(t), that x_m(t) is also observed at the times t_i, so that we are given points x_mi=x_m(t_i). Finally, we assume there is a vector-valued covariate u_m∈ℝ^P observed for each curve y_m. Thus, the data for the m-th curve comprise y_mi, x_mi, t_i, i=1,…, N together with the vector u_m. These data, for fixed m, are called a batch.
Next we define our hierarchical model for the data. It has the following overall structure, and details of each part are filled out in what follows. At the top level of the hierarchy, there is a mean curve μ_*(t)∈ℳ. There are two stages to obtain each curve y_m from μ_*. First there is a perturbation of μ_* to obtain an unobserved curve μ_m. We assume this perturbation depends on the vector covariate u_m and a functional model parameter β(t)∈ℝ^P, but not on the functional covariates (see Equation (<ref>)). Secondly, we assume y_m is obtained from μ_m via a vector field τ_m along μ_m which depends on the functional covariate x_m(t) (see Equation (<ref>)). Figure <ref> shows a schematic of the overall model.
§.§.§ Definition of Mean Structure
It is difficult to infer non-linear relationships non-parametrically between a functional response variable and multi-dimensional functional covariates even when both of them are observed in a space for real-valued functions <cit.>. Therefore, we assume that the mean structure μ_m(·) depends on the scalar covariates u_m only. Specifically we assume
μ_m(t) = Exp(μ_*(t), u_m^Tβ(t))
where μ_*(t)∈ℳ and β(t)∈ T_μ_*(t) play a similar role to the intercept term and slope term of the function-on-scalar linear regression model in Euclidean space.
These are model parameters which we estimate. Furthermore, equation (<ref>) can be seen as a functional version of the geodesic regression proposed in <cit.>, in which a geodesic regression model for a manifold-valued response and a scalar-valued predictor is proposed in the form y=Exp(p,xv) with (p,v)∈𝒯ℳ, where x∈ℝ is the predictor variable.
§.§.§ Definition of Covariance Structure
Conditional on μ_m, the m-th curve y_m is modelled as
y_m(t)=Exp(μ_m(t),τ_m(t)), for m=1,...,M
where τ_m(t) ∈ T_μ_m(t) is a vector field along μ_m. When the data are observed with random noise, <cit.> defines the geodesic model as y=Exp(Exp(p,xv),ϵ). Similarly, the proposed WGPFR model with error term can be defined as
y_m(t)=Exp(μ_m(t),τ_m(t)+ϵ_m(t)), for m=1,⋯,M;
where ϵ_m(t) are independent measurement errors following a multivariate normal distribution with mean zero and covariance matrix σ^2𝐈. For functional data, in addition to the measurement error term, the part we are most interested in is the underlying covariance structure contained in the term τ_m(t) in Equation (<ref>). In what follows, we explain how τ_m(t) is modelled as depending on the functional covariate x_m(t).
We assume that the vector field τ_m(t)∈ T_μ_m(t) can be represented in local coordinates as a d-dimensional vector function τ_m(t)=(τ_m,1(t),...,τ_m,d(t))∈ℝ^d.
We further assume this vector-valued function follows a Gaussian process with mean zero and an underlying covariance structure k_m(·,·;θ), which is
τ_m(t) ∼ GP(0, k_m(·,·;θ)).
Specifically, given two observation times t_i and t_mj from the m-th batch of data, we assume that the covariance of the two random vectors τ_m(t_i) and τ_m(t_mj) is a function of the observed covariate values x_m(t_i) and x_m(t_mj).
Under the framework of the wrapped Gaussian process in <cit.>, we can represent the above Gaussian process as the wrapped Gaussian process for manifold functional data. That is,
y_m(t)∼𝒢𝒫_ℳ(μ_m(t),k_m(·,·;θ))
where μ_m(t)∈ℳ and k_m(·,·;θ)∈ T_μ_m(t) are defined as above.
To summarize, the WGPFR model studies the non-linear relationship between the manifold-valued functional response and mixed (functional and vector) Euclidean-valued predictors in two parts, as shown in Figure <ref>. The first part is the mean structure which is defined through a functional regression to model the manifold-valued functional mean with the vector-valued predictor. The second part is called the covariance structure, and for which the inference is based on a Bayesian method with the prior assumed to be Gaussian process.
Compared to the geodesic regression in <cit.>, the WGPFR additionally incorporates the dependence information between data, modelled by the covariance structure, which is the main focus of interest for functional data analysis. Compared to the WGPR model in <cit.>, this paper further models the mean part which relates the functional mean on the manifold to the scalar predictor on Euclidean space, while the mean part in <cit.> is not related to scalar predictors. Note that the scalar predictor plays an important role in the prediction of the model, especially for the batch data we consider in this paper. The proposed WGPFR model is more complicated but much more flexible than existing models, and induces a more complex inference procedure which will be illustrated in detail in the next subsection.
§.§ Inference for WGPFR
§.§.§ Estimation of Mean Structure
Since the mean structure model (<ref>) can be seen as a functional version of the geodesic regression model <cit.>, an intuitive inference method consists of optimizing μ_*(t) and β(t) simultaneously for each fixed t. However, this would ignore information about the correlation of the function μ_* and β(t) at different times. On the other hand, if the number of observed times t_i is large, and we account fully for correlations between all different times, then the computational cost will be very high, unless we use some efficient methods, such as <cit.>. Due to the dependent structure, we cannot conduct optimization at each fixed t separately, otherwise the correlation information across different t will be lost. It can also be seen that inference of the functional mean structure model is indeed challenging.
In this paper we borrow an idea of <cit.> to estimate μ_*(t) and β(t). Note that μ_*(t) plays the role of an intercept in linear regression in Euclidean space, which is usually defined as the mean of the response. Similarly, for functional data y_m(t), m=1,...,M on ℳ, we can define the intrinsic population Fréchet mean function under the assumption of existence and uniqueness:
μ_0(t)=arg min_p(t)∈ℳ E[d_ℳ(y(t),p(t))^2]
where d_ℳ(·,·) denotes geodesic distance. When t is fixed, μ_0(t) refers to a point on ℳ; when t is variable, μ_0(t) refers to a curve on ℳ. Additionally, since each y_m(t) ∈ℳ is continuous, μ_0(t) is also continuous (see equation (1) in <cit.>).
The intrinsic Fréchet mean function μ_0(t) can be a good choice for μ_*(t) since it is a natural mean of manifold-valued functional data and it is reasonable to assume y_m(t), m=1,...,M are confined to stay within the radius of injectivity at μ_0(t) for all t∈𝒯. Then the inverse exponential map can be well-defined. We estimate μ_*(t) at each time t via the sample Fréchet mean of the points y_m(t). This requires that the times t_i are the same across all curves m=1,…,M. Standard gradient descent methods <cit.> can be used to obtain the Fréchet sample mean at each time t=t_i to obtain estimates μ̂_0(t_i) of μ_*(t_i), i=1,…,N.
Denote the logarithm process data as
V_m(t)=Log(μ_0(t),y_m(t)),
where V_m(t) is a d-dimensional tangent vector of y_m(t) at μ_0(t) for a fixed t.
Since we assume ℳ is a d-dimensional closed Riemannian submanifold of a Euclidean space, it is reasonable to identify the tangent space T_μ_0(t) with a subspace of the Euclidean space ℝ^d_0 with d_0≥ d. Then V_m(t) can be seen as an ℝ^d_0-valued square integrable function, which can be represented by a set of basis functions. Similarly, the slope function β(t) can also be represented by this set of basis functions. In this way, not only can we easily estimate the intercept term μ_0(t), but the slope term β(t) can also be obtained by least squares methods on ℝ^d_0. This reduces the computational complexity compared with the optimization method in <cit.>. Next we illustrate the estimation procedure in detail.
For the estimation of the intrinsic Fréchet mean μ_0(t), the sample Fréchet mean is calculated by minimizing the following loss function for each t∈𝒯:
μ̂_0(t) = arg min_p(t)∈ℳ 1/M∑_m=1^Md_ℳ(y_m(t), p(t))^2.
It is natural to use the gradient descent algorithm introduced by <cit.> to estimate the sample Fréchet mean for each fixed t. Using <cit.>, the gradient is given by
∇_p d_ℳ(p,y_m(t))^2 = -2 Log(p, y_m(t)).
An implementation of the gradient descent algorithm to solve equation (<ref>) is given in Algorithm <ref>. In practice, we choose the step size l=1/N where N is the number of samples and ϵ=10^-8 which is relatively small.
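A minimal sketch of this gradient descent, written for S^2 so that the exponential and logarithm maps have closed forms (see the Appendix); the helper names are illustrative, and the default step size follows the choice l = 1/N stated above.

```python
import numpy as np

def exp_map_s2(p, v):
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def log_map_s2(p, q):
    # Log(p, q): tangent vector at p pointing towards q (p, q not antipodal).
    u = q - np.dot(p, q) * p
    nu = np.linalg.norm(u)
    return np.zeros(3) if nu < 1e-12 else u / nu * np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def frechet_mean_s2(points, step=None, tol=1e-8, max_iter=1000):
    """Sample Frechet mean of points on S^2 by gradient descent (Algorithm 1, schematically)."""
    step = 1.0 / len(points) if step is None else step
    p = points[0] / np.linalg.norm(points[0])
    for _ in range(max_iter):
        # Mean of the log maps; the negative gradient of the Frechet functional is twice this.
        g = sum(log_map_s2(p, y) for y in points) / len(points)
        if np.linalg.norm(g) < tol:
            break
        p = exp_map_s2(p, 2.0 * step * g)
    return p
```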
For the estimation of β(t)=(β_1(t),⋯,β_p(t)), the first step is to find a reasonable basis to represent β(t) in the tangent space. Under the framework of <cit.>, suppose we have an arbitrary system of K orthonormal basis functions
Φ_K={ϕ_k∈ℍ | ϕ_k(t)∈ T_μ_0(t),⟨ϕ_k,ϕ_l ⟩=δ_kl,k,l=1,⋯,K},
where ℍ={ν:𝒯→ℝ^d,∫_𝒯ν(t)^Tν(t)dt<∞} is an L^2 Hilbert space of ℝ^d-valued functions with inner product ⟨ν,u ⟩=∫_𝒯ν(t)^Tu(t)dt and norm ‖ v‖=⟨ v,v ⟩^1/2, and δ_kl=1 if k=l and 0 otherwise. Note that the value of each ϕ_k(t) at each time t∈𝒯 is restricted to the tangent space T_μ_0(t), which is identified with ℝ^d_0. Define the K-dimensional linear subspace of ℍ:
ℳ_K(Φ_K):={ x(t)=∑_k=1^K a_kϕ_k(t) for t ∈𝒯| a_k∈ℝ}.
Then the slope functions β_j(t)∈ T_μ_0(t), j=1,⋯,p can be approximated by a linear span on ℳ_K(Φ_K) with expansion coefficients b_jk:
β_j(t)≈∑_k=1^K b_jkϕ_k(t).
Then the estimation of β_j(t) can be transformed to the estimation of b_jk, k=1,2,⋯,K, j=1,2,⋯,p.
Defining V_m(t)=Log(μ_0(t),y_m(t)), V̂_m(t)=Log(μ̂_0(t),y_m(t)) and
W_m(t) =Log(μ_0(t),μ_m(t)), the multiple linear function-on-scalar regression model becomes
V_m(t)=W_m(t)+e_m(t)≈∑_j=1^p∑_k=1^Ku_mjb_jkϕ_k(t)+e_m(t)
where e_m(t)=V_m(t)-W_m(t) which are assumed to be independent of the covariate u_m for m=1,⋯,M.
Since V_m(t) can not be obtained directly, we use V̂_m(t) as the response of the multiple linear regression model, so that the parameters of the regression model are estimated by minimizing the following loss function with observed data:
L(b_jk)=∑_m=1^M∑_i=1^n‖V̂_m(t_i)-∑_j=1^p∑_k=1^Ku_mjb_jkϕ_k(t_i)‖^2.
The b_jk coefficients are calculated by standard least squares methods.
To be more specific, let
V̂=(V̂_im)_i=1,⋯,nd_0;m=1,⋯,M denote an nd_0× M matrix such that the m-th column corresponds to the m-th batch and contains the vectorized element (V̂_m1(t_1),⋯,V̂_m1(t_n),⋯,V̂_md_0(t_1),⋯,V̂_md_0(t_n)). Similarly, let
Φ=(ϕ_ik)_i=1,⋯,nd_0;k=1,⋯,K denote an nd_0× K matrix; let U denote an M× p matrix for which each row u_m indicates a data point; and let B denote the p× K coefficient matrix with element b_jk for j=1,⋯,p, k=1,⋯,K. Then the above loss function can be rewritten as
L(B)=‖vec(V̂)-(Φ⊗ U)vec(B)‖^2.
The least squares estimate of B is obtained by
vec(B̂)=((Φ⊗ U)^⊤(Φ⊗ U))^-1(Φ⊗ U)^⊤ vec(V̂)
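The following sketch carries out this least squares step with NumPy, assuming a layout in which the tangent-space observations for batch m are stored in the m-th row, so that the model reads V̂ ≈ U B Φ^⊤ and the identity vec(UBΦ^⊤) = (Φ⊗U)vec(B) applies; the function name and the exact matrix layout are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_B(V_hat, Phi, U):
    """Least squares estimate of B in V_hat ~= U @ B @ Phi.T, via vec(U B Phi^T) = (Phi kron U) vec(B).

    V_hat : (M, n*d0)  tangent-space observations Log(mu0_hat(t_i), y_m(t_i)), one row per batch
    Phi   : (n*d0, K)  basis functions evaluated at the observation times
    U     : (M, p)     scalar covariates, one row per batch
    """
    A = np.kron(Phi, U)                      # (M*n*d0) x (p*K) design matrix
    vec_V = V_hat.reshape(-1, order='F')     # column-stacking vectorization
    vec_B, *_ = np.linalg.lstsq(A, vec_V, rcond=None)
    return vec_B.reshape(U.shape[1], Phi.shape[1], order='F')   # p x K matrix of coefficients b_jk
```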
Consistency of the estimator of the sample Fréchet mean was proved in <cit.>. The following theorem establishes consistency of the estimator of the regression coefficients; the proof is left to the appendix.
Under the conditions (C1)-(C5) in the Appendix, the coefficient vec(B̂) is a consistent estimator with probability tending to 1 as n →∞ in the sense that
vec(B̂)-vec(B)=o_p(1).
It follows that the functional slope coefficients can be estimated by β̂(t)=(β̂_1(t),⋯,β̂_p(t)) where
β̂_j(t)= ∑_k=1^Kb̂_jkϕ_k(t)
for j=1,⋯,p.
The estimated mean structure for the m-th curve is given by
μ̂_m(t)=Exp(μ̂_0(t),u_mβ̂(t)).
The function μ̂_m(t) is expected to approximate μ_m(t). However, we omit a rigorous proof of the consistency of μ̂_m(t) on the manifold due to some technical difficulties.
§.§.§ Estimation of Covariance Structure
From the section above, we obtain an estimate of the mean structure μ_m(t) which is a continuous function on ℳ. In this section, we will focus on inference of the covariance structure τ_m(t)∈ T_μ_m(t). We assume that the tangent spaces T_μ_m(t) for t∈𝒯 can be identified with ℝ^d_0 via some smooth local basis of orthonormal vector fields along μ_m(t). We have mentioned above that the covariance structure can be related to another functional covariate x_m(t), which is an ℝ^Q-valued function. If we denote τ_m(t)=τ_m(x_m(t))=(τ_m1(t),⋯,τ_md_0(t)), then the correlation of different components in τ_m(t) could be estimated via a cross-covariance function model, such as the convolved Gaussian process <cit.>. However, if there are n observations τ_md(t_i), i=1,⋯,n, then the size of the covariance matrix in a Gaussian process of τ_md(t), d=1,⋯,d_0, is n× n, while the size of the cross-covariance matrix in a convolved Gaussian process of τ_m(t) is nd_0× nd_0, which is computationally expensive. As a result, we will consider different dimensions independently. Specifically, we assume
τ_md(t)∼ GP(0,K_md(x_m(t),x_m(t');θ_md)),
where K_md(·,·;θ_md) denotes a covariance kernel depending on hyper-parameter θ_md.
Given the value of τ_md(t_i), i=1,⋯,n, denote τ_md=(τ_md(t_m1),...,τ_md(t_mn)), then for any new input t_m^*, the conditional distribution of τ_md(t_m^*) is
τ_md(t_m^*)|τ_md∼𝒩(μ_md^*, Σ_md^*), μ_md^* = k_md^*TK_md^-1τ_md, Σ_md^* = k_md^** - k_md^*TK_md^-1k_md^*
where k_md^*=(K_md(t_m1,t_m^*),...,K_md(t_mn,t_m^*)) and k_md^** = K_md(t_m^*,t_m^*).
In Gaussian process regression, the hyper-parameters for the d-th dimension, θ_md, can be estimated by maximizing the sum of marginal likelihood functions for each batch, i.e. maximizing
(2π)^-n/2|K_md|^-1/2exp(-1/2τ_md^TK_md^-1τ_md).
Under some regularity conditions, the estimator θ̂_md converges to θ_md <cit.>. With the estimated hyper-parameters, the estimated covariance structure of the d-th coordinate for the m-th batch is given by
τ̂_md(t)|τ_md∼ GP(k_md(t)^TK_md^-1τ_md, K_md(t,t)-k_md(t)^TK_md^-1k_md(t); θ̂_md)
where k_md(t)=(K_md(t,x_m(t_m1)),⋯,K_md(t,x_m(t_mn))). Under some mild conditions, the above estimator of the covariance structure is information consistent, as the following theorem states.
Suppose (1) the underlying true vector field, τ_m(t) along μ_m(t) can be represented using a local basis by real-valued functions τ_md, d=1,…,d_0, and that each function has a Gaussian process prior with mean zero and bounded covariance function K_md(·,·,θ_md) for any covariate x_m(t), (2) K_md(·,·,θ_md) is continuous in θ_md, further assume the estimator θ̂_md converges to θ_md as n→∞ almost surely, and (3) the mean structure μ_m(t) is known.
Then τ̂_md(·) is information consistent for the true τ_md(·), meaning that the estimator converges to the true function as the sample size grows, provided the reproducing kernel Hilbert space norm ‖τ_md‖_k is bounded and the expected regret term E_X(log|I+σ^-2C_n n|)=o(n), where C_n n=(k(x_i, x_j ; θ)) is the covariance matrix over x_i, i=1, …, n, and σ^2 is the variance of the measurement error ϵ_m(t).
The proof is given in the Appendix. Note that the above theorem holds under the condition that μ_m(t) is known, but in practice we only have an estimate of μ_m(t). Furthermore, the term τ_md in the posterior distribution cannot be observed directly, and it is approximated by τ̃_md=(τ̃_md(t_m1),⋯,τ̃_md(t_mn)), which is the realization of the d-th element of τ̃_m(t)=(τ̃_m1(t),⋯,τ̃_md_0(t))=Log(μ̂_m(t),y_m(t)).
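In practice the hyper-parameters θ_md can be estimated by numerically minimizing the negative log marginal likelihood, for example with scipy.optimize; the sketch below does this for a single coordinate with an RBF-plus-noise kernel, which is an illustrative choice rather than the exact kernel used in the paper, and all names are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_theta, X, z):
    """Negative log marginal likelihood of a zero-mean GP with an RBF-plus-noise kernel.

    log_theta = log(signal variance, lengthscale, noise variance); the log
    parameterization keeps all hyper-parameters positive during optimization.
    """
    v0, ell, s2 = np.exp(log_theta)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = v0 * np.exp(-0.5 * d2 / ell) + s2 * np.eye(len(z))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))      # K^{-1} z
    return 0.5 * z @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(z) * np.log(2 * np.pi)

def fit_gp_hyperparameters(X, z, theta0=(1.0, 1.0, 0.1)):
    res = minimize(neg_log_marginal_likelihood, np.log(theta0), args=(X, z), method='L-BFGS-B')
    return np.exp(res.x)
```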
§.§.§ Update Mean Structure and Covariance Structure
After obtaining the estimated mean structure and covariance structure, we are able to make predictions with given new inputs. In order to improve the performance of our model, we introduce an algorithm which can update the estimated mean structure and covariance structure iteratively.
The loss function of the m-th curve at time point t_i with the estimated mean structure and estimated covariance structure is given as
E=∑_m=1^M∑_i=1^n d_ℳ(Exp(Exp(μ̂_0(t_i),μ̂_m(t_i)), τ̂_m(t_i)), y_m(t_i))^2
where μ̂_m(t_i)=∑_j=1^p∑_k=1^Ku_mjb̂_jkϕ_k(t_i).
Given the above estimated covariance structure τ̂_m(t_i), the mean structure can be updated by a gradient descent algorithm where the gradient is
∇_μ_m(t_i)E = -d_μ_m(t_i)Exp(μ_m(t_i),τ̂_m(t_i))^† Log(y_m(t_i),Exp(μ_m(t_i),τ̂_m(t_i))),
where † is an adjoint with respect to the Riemannian inner product, which plays a similar role to parallel transport. In practice, a variational method for gradient descent <cit.> can be used as a substitute for gradient in Equation (<ref>). Thereafter, we update the mean structure from μ̂_m(t) to μ̂_m^(1)(t) (here the superscript ^(1) means the 1-st iteration). The updated coefficients b_jk^(1) can be estimated by minimizing the loss function
L(b_jk)= ∑_m=1^M∑_i=1^n‖Log(μ̂_0(t_i), μ̂^(1)_m(t_i))-∑_j=1^p ∑_k=1^Ku_mjb_jkϕ_k(t_i)‖^2,
which is a linear least squares problem.
Given the updated mean structure μ̂_m^(1)(t), we can re-calculate the covariance structure τ̂_m^(1)(t). The only difference from Section <ref> is that the estimated mean structure is replaced by the updated mean structure. This two-way updating procedure is then repeated iteratively, stopping when the change in the updated mean or covariance structure is smaller than some given threshold. The algorithm of the updating procedure is specified in Algorithm <ref>.
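Schematically, the updating procedure alternates between the two estimation steps until the objective stabilizes; the following sketch expresses that loop with the two estimation routines passed in as callables, so all names are placeholders rather than the paper's implementation.

```python
def update_mean_and_covariance(estimate_mean, estimate_covariance, loss,
                               mean0, cov0, tol=1e-6, max_iter=50):
    """Alternate the two estimation steps until the loss E stops decreasing.

    estimate_mean(cov)        -> updated mean structure given the covariance structure
    estimate_covariance(mean) -> updated covariance structure given the mean structure
    loss(mean, cov)           -> scalar objective E
    """
    mean, cov = mean0, cov0
    prev = loss(mean, cov)
    for _ in range(max_iter):
        mean = estimate_mean(cov)          # gradient-descent / least-squares update of the mean
        cov = estimate_covariance(mean)    # GP regression on the updated tangent residuals
        cur = loss(mean, cov)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return mean, cov
```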
§ NUMERICAL EXPERIMENTS
In this section we demonstrate the WGPFR model on two Riemannian manifolds: S^2 and Kendall's shape space. As previously, we suppose there are M curves on a Riemannian manifold; in what follows we will simulate data points on these random curves in different ways to form different scenarios of interpolation and extrapolation. For the extrapolation problem, we use our model to predict the last 15 data points (given all preceding data points) to test the long-term extrapolation performance, and the last 5 data points to test the short-term extrapolation performance. Moreover, since Algorithm <ref> operates in Euclidean space, we use an optimizer from SciPy, which is powerful and widely used for minimization problems.
§.§ Regression analysis on S^2
§.§.§ Simulation scenario on S^2
Suppose observation times t are equally spaced in the interval [0, 1] with N points, where m∈{1,...,m_1,m_1+1,...,M} and M=m_1+m_2. To test the performance of our model with different number of observed curves and data points, we considered 30 and 60 batches and 20, 40 and 60 data points on each batch respectively, which is M ∈{30, 40} and n∈{20,40,60 }. The simulated data are shown in Figure <ref> and in the remainder of this subsection we explain how the data were generated.
As introduced previously, the WGPFR model consists of three parts: the intrinsic Fréchet mean function, the mean structure and the covariance structure. In order to simulate data, we first define the Fréchet mean function μ_0(t) on S^2 by
μ_0(t) = Exp(p, (sin(tπ/2)^2, sin(tπ)^3,0))
where p=(0,0,1) refers to a point on S^2.
Suppose there are two batches, and that we generate two mean structures based on batch-specific covariates u_m and Fourier basis functions β(t), defined by
μ_m(t) = Exp(μ_0(t), u_m^Tβ(t)), m=1,...,m_1+m_2
where u_m=(1,0) for m=1,...,m_1 and u_m=(1,1) for m=m_1+1,...,m_1+m_2 refer to the covariates of the two batches respectively, and β(t)=(β_1(t),β_2(t)) refers to tangent vector-valued basis functions. Specifically, we define
β_1(t) = R_t(∑_i=1^20 w_1,i(ϕ_i(t), ϕ_i(t+1/2), ϕ_i(t+2/2))),
β_2(t) = R_t(∑_i=1^20 w_2,i(ϕ_i(t), ϕ_i(t+1/2), ϕ_i(t+2/2))), i=1,...,20
where
w_1,i = i/120,
w_2,i = -1/2√(sin(i/60)), i=1,...,20,
ϕ_i(t) = 1/√(2), for i=0,
ϕ_i(t) = sin(2π it), for i = 1,3,5,...,19,
ϕ_i(t) = cos(2π it), for i = 2,4,6,...,20,
and R_t is a rotation matrix transferring the vector ∑_i=1^20 w_1,i(ϕ_i(t), ϕ_i(t+1/2), ϕ_i(t+2/2)) or
∑_i=1^20 w_2,i(ϕ_i(t), ϕ_i(t+1/2), ϕ_i(t+2/2)) to T_μ_0(t); this ensures that the vectors β_1(t),β_2(t) are tangent at μ_0(t). For example, given a point p and a vector 𝐯, the rotation matrix is defined by
R_t(𝐯)=(𝐯-⟨𝐯,p⟩ p)/‖𝐯-⟨𝐯,p⟩ p‖·‖𝐯‖.
We use the following covariance function
k(t_i, t_j) = v_0exp(-1/2w_0 (t_i-t_j)^2)+a_0+a_1t_it_j+σ^2δ_ij
which is a combination of a squared exponential kernel (often used for stationary Gaussian processes) and a linear kernel (often used for non-stationary Gaussian processes); this can be used for most Gaussian process regression tasks <cit.>. Since S^2 can be embedded into ℝ^3, for each dimension of the tangent vector of the covariance structure, the hyper-parameters (v_0,w_0,a_0,a_1,σ) are given as θ_1=(0.012, 3, 0.01, 0.01, 0.02), θ_2=(0.017, 3.1, 0.011, 0.012, 0.015) and θ_3=(0.015, 3.2, 0.012, 0.013, 0.01). Given these three Gaussian processes with zero mean and kernel <ref>, we can generate a tangent vector for the covariance structure. However, in practice, the tangent vector may not lie in the target tangent space because of numerical rounding. For example, if the ideal tangent vector is (1,1,0), our generated tangent vector might be (0.99,1,0.01), which would make the exponential map and inverse exponential map (defined in Equation (<ref>) and Equation (<ref>) respectively) incalculable. Therefore, the tangent vector must be projected into the correct tangent space, and the projected vector is denoted as v_3(t)|θ_1,θ_2,θ_3. The subscript 3 in v_3 refers to 3-dimensional. As a consequence, the formula to generate manifold-valued data for the numerical experiments is written as
y_s,m(t)=Exp(μ_s(t),v_3(t)|θ_1,θ_2,θ_3), s=1,2, m=1,...,M_s.
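For reference, a sketch of the covariance-structure part of this data-generating process: the combined squared-exponential-plus-linear kernel above, a draw of each ambient coordinate from an independent Gaussian process, and the projection of the resulting vector onto the tangent space of S^2 to guard against rounding; all names are illustrative.

```python
import numpy as np

def sim_kernel(t, theta):
    """k(ti, tj) = v0*exp(-0.5*w0*(ti-tj)^2) + a0 + a1*ti*tj + sigma^2*delta_ij."""
    v0, w0, a0, a1, sigma = theta
    diff = t[:, None] - t[None, :]
    return v0 * np.exp(-0.5 * w0 * diff ** 2) + a0 + a1 * np.outer(t, t) + sigma ** 2 * np.eye(len(t))

def project_to_tangent_s2(v, p):
    # Remove any component of v along p so that v lies in the tangent space at p.
    return v - np.dot(v, p) * p

def draw_tangent_field(t, thetas, mu_curve, seed=None):
    """One realization of the tangent covariance-structure field at times t, drawn
    independently in each ambient coordinate and projected back to the tangent spaces."""
    t = np.asarray(t, dtype=float)
    rng = np.random.default_rng(seed)
    V = np.stack([rng.multivariate_normal(np.zeros(len(t)), sim_kernel(t, th)) for th in thetas], axis=1)
    return np.array([project_to_tangent_s2(V[i], mu_curve[i]) for i in range(len(t))])
```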
§.§.§ Model assessment on S^2
In this section, we delete some generated data points on a randomly selected curve, say y_r(t), in different ways to form training data sets and then calculate the predictions for these deleted data points (i.e. the test data). The performance of WGPFR model is assessed by comparing the root mean square error between predictions and their real data by Euclidean distance, since a sphere can be embedded in ℝ^3 easily.
Specifically, we selected 15 data points on y_r(t) uniformly at random as our test data set, and all remaining data points were used as the training data set, which is a typical interpolation problem. This scenario is denoted as Type 1 prediction. As to Type 2 prediction, the short-term forecasting, the last 5 data points on y_r(t) are considered as test data set which is a typical extrapolation problem. Analogously to the short-term forecasting, we also test the performance of WGPFR for long-term forecasting, the Type 3 prediction. In Type 3 prediction, we choose the last 15 data points as test data.
For comparison, in each scenario, the same training data set is used for several other models, namely functional linear regression on Riemannian manifolds (FLRM) and wrapped Gaussian process Fréchet mean regression (WGFmR). Specifically, FLRM is the mean structure (<ref>) without the covariance structure; WGFmR consists of a mean structure and a covariance structure in which the mean structure is the sample Fréchet mean point of all training data. In addition, the WGFmR model does not have the updating part. We compare the performance of these three models not only to show a significant improvement in predictive accuracy from the concurrent model (considering the mean structure and covariance structure simultaneously), but also from the iterative updating algorithm <ref>. Moreover, one could use models such as Gaussian process functional regression without manifold structure to fit the data and make predictions. However, such models are meaningless since the inferences cannot be guaranteed to lie on the right space, which is the manifold. Thus, they might not be suitable as a baseline, and we only consider comparisons with models that respect the manifold structure.
We replicate each simulation study 100 times. Thus, we test the performance of our model on thousands of test data points. The numerical results reported in Table <ref> are the average of root-mean-square-error (RMSE) in every single replication. Using the embedding S^2 ⊆^3, it is reasonable to use the Euclidean norm between points as a distance function (chordal metric), which provides a method to calculate the RMSE.
In Type 1, Type 2 and Type 3 prediction, WGPFR always provides the best prediction. The predictive accuracy of FLRM is better than that of WGFmR in most scenarios, because FLRM only learns the mean structure of the training data, which might be more useful for trend forecasting. In extrapolation, both long-term and short-term forecasting, since the test data are distant from the training data, the output of GPR is usually inaccurate and the mean structure then mainly determines the accuracy of the prediction. Because we used the Fréchet mean point as the mean function of WGFmR, the prediction of WGFmR is very poor.
Comparing long-term and short-term forecasting, since the test data of the latter are closer to the training data than those of the former, the predictions of short-term forecasting are more accurate than those of long-term forecasting. In addition, from the table we can see that when the number of curves is fixed, the RMSE between prediction and real data decreases with an increasing number of points; when the number of points is fixed, the RMSE also decreases with an increasing number of curves.
Since Gaussian process regression is a Bayesian model, we also compare the log-predictive likelihood of the covariance structure which provides the randomness of this model. <cit.> introduce a calculation of log pointwise predictive density in practice. Thus we compute this index by summing the log pointwise predictive density in each dimension and the result is shown in Table <ref>.
Moreover, discrete Fréchet distance is a metric to measure the similarity between two curves which considers the locations and orderings between predictions and real data. This method is widely used in GPS data, especially for comparing trajectories of animals, vehicles or other moving objects <cit.>. The results show that our model provides small predictive error under this metric. In other words, our model is effective even the measurement is different from its objective function. The numerical results are shown in Table <ref>.
We also show the RMSE of β in Table <ref>. We can see that, when the number of curves is fixed, the RMSE between estimated β and real β decreases with the increasing number of data points; however, when the number of data points is fixed, the RMSE between estimated β and real β is almost the same (0.1779 compared to 0.1773 and 0.0212 compared to 0.0214).
§.§ Regression Analysis on Kendall's Shape Space
We firstly generated μ_0(t), for t∈ (0,1), which describes a varying shape. Specifically, the shape is a circle at the beginning and becomes a square in the end. The varying shapes are shown in Figure <ref> and they are generated based on an elliptic equation after removing scale, rotation and translation.
We generated data corresponding to 80 landmarks, given by an element of ℂ^2× 80(See the Appendix for more details about Kendall's shape space)
and the mean shape function μ_0(t) is defined as
μ_0,z(t) =
(z/20-1,√((z/20-1)^2+(1-(z/20-1)^2)(1-t/2)^2)i), for z=1,...,40
(1-z/20,-√((1-z/20)^2+(1-(z/20-1)^2)(1-t/2)^2)i), for z=41,...,80
Then, we use the scalar covariates to generate tangent vectors from the mean function μ_0(t) to mean structures μ_1(t) and μ_2(t) by
μ_1(t) =Exp(μ_0(t), ∑_p=1^Pu_1,psin(t)^3sin(r(t))),
μ_2(t) =Exp(μ_0(t), ∑_p=1^Pu_2,psin(t)^3sin(r(t))cos(r(t))),
where P refers to the number of elements in 𝐮_1, u_1,1=0, u_1,2=0, u_2,1=1, u_2,2=2, and r(t) refers to a ranking function of t. For example, when t=t_3, r(t)=3. Thus, we obtain 3 curves in a batch and each curve consists of 10 points.
The generated μ_0(t) and the mean structures μ_1(t) and μ_2(t) are shown in Figures <ref> and <ref> respectively.
We estimate the mean structure and covariance structure as discussed in Section <ref> and Section <ref>. The training data consist of all data in batch one together with the first and the second curve in batch two; the difference between these types of prediction lies in the handling of the third curve in batch two. Specifically, for Type 1 prediction, 8 time points are randomly selected to form the training data and the remaining 2 points are used as the test data; with regard to Type 2 prediction, we add the first half into the training data and use the remainder as the test data; as for Type 3 prediction, the last 3 time points are considered as the test data and the remaining points are added into the training data. Each numerical experiment is repeated 100 times. In addition, to compare the performance of different regression models, we also tested the WGPFR, the approximate wrapped Gaussian process functional regression model, the functional linear regression model on ℳ (the mean structure) and the wrapped Gaussian process Fréchet mean regression model.
The RMSE are given in Table <ref>. Similar to the simulation study on S^2, the WGPFR model achieves the best prediction results in both interpolation and extrapolation. Moreover, the computation time of the simulation study on S^2 is less than 10 minutes while that of the shape space is about one hour, since the latter has higher dimension. Specifically, there are 3 sets of estimated hyper-parameters for a wrapped Gaussian process on S^2 while there are 160 sets of estimated hyper-parameters for a wrapped Gaussian process on the shape space.
Thus we also show a log pointwise predictive density and the results are in Table <ref>.
§.§ Flight Trajectory Data
In this section, we test our model on a real data set. The earth is roughly a sphere and can be modelled as a copy of S^2. Certain data sets, for example hurricane trajectories, are therefore considered as a manifold-valued random curve <cit.>. Here we study flight trajectory data, shown in Figure <ref> in which the red curves represent flights from Shanghai to London by British Airways and the black curves represent flight trajectories of Eastern China Airlines between the same destinations (the data were downloaded from www.variflight.com on 2020). Therefore, these sets of trajectories can naturally be split into two batches and the model with common mean structure can be used.
The original data includes time, height, speed, longitude and latitude of each flight at discrete time points. We select the position of each airplane as the response variable, which can be transformed onto S^2 using longitude and latitude; in addition, the company and time are regarded as non-functional and functional covariates respectively. Before we train the model, it is necessary to pre-process the raw data. In this step, we firstly set the takeoff time of each flight to be 0 and the landing time to be 1, excluding taxi time. 25 trajectories of each company were selected in which the number of observed data points in every flight is greater than 600. In order to obtain smooth manifold-valued curves from the data, some kernel smoothing functions with small bandwidth were applied to the longitude and latitude of the training data. For computational reasons, we choose every 6-th data points of each smoothed trajectory as training data (100 data points in each curve totally).
To model the mean structure for the flight trajectory data, we use the company as the batch-specific covariate. In practice, for Eastern China Airlines, the covariate is defined as 0; and for British Airways, the covariate is defined as 1. Estimation of the mean structure and covariance structure were described in Section <ref> and <ref>, respectively. The parameters of the basis functions in the mean structure and the hyper-parameters in the covariance structure were updated iteratively, as described in Section <ref>. In addition, the initial values of the hyper-parameters in the covariance structure are drawn from a standard Gaussian distribution independently. Afterwards, the predictions of the WGPFR model have been compared to FLRM and WGFmR for the same flight trajectory data.
The overall prediction of WGPFR model should be better than the other models, since the FLRM model only learns from mean structure and ignores the dependent error. In addition, the mean function of WGFmR is a point on S^2 and the prediction should have significant error when data are far away from the Fréchet mean. This is verified by numerical results in Table <ref>.
The training data and test data are generated in the following way. We randomly selected another flight trajectory of British Airways which had >600 time observations. After the same pre-processing steps, 50 data points were added into the training data and the remaining 50 data points are used as the test data. In order to test the performance of interpolation and extrapolation for the real data set, we form the Type 1 prediction by randomly choosing these 50 data points and form the forecasting by selecting the first 50 points in the trajectory as training data. In addition, we test the capability for short-term forecasting and long-term forecasting by supposing different test data. Specifically, the 50-th to 60-th points and the 50-th to final data points of the flight trajectory can be selected respectively to form two scenarios of prediction which are denoted as short-term and long-term, respectively.
The predictive accuracy of WGPFR compared to FLRM and WGFmR on the flight trajectory data is shown in Table <ref>, which reports the mean root-mean-squared-error over 20 repeated simulations to reduce the effects of random seeds. We can see that the WGPFR model outperforms FLRM and WGFmR for interpolation. For the short-term prediction, the RMSE is much smaller than that of the long-term prediction. As mentioned previously, the reason is that GPR provides little extra information when the test data are distant from the training data. The prediction of WGFmR is less accurate since the mean structure (Fréchet mean) is only a manifold-valued point and the test data are not close to that point.
§ CONCLUSION AND DISCUSSION
In this paper we studied a novel functional regression model within a probabilistic framework for nonlinear manifold-valued response variables and real-valued covariates. Specifically, a wrapped Gaussian process functional regression (WGPFR) model is proposed to decompose the regression relationship into a mean structure and a covariance structure. For the mean structure, we proposed a functional version of the geodesic regression model for the manifold-valued functional response and scalar covariates, where the inference of the mean structure is based on the intrinsic Fréchet mean function, and the traditional functional linear regression model is used on the tangent vectors of the manifold-valued data at the intrinsic Fréchet mean function to estimate the effects of the scalar covariates in an ambient Euclidean space. A wrapped Gaussian process prior is proposed to model the covariance structure in the tangent space at the mean function of each batch, where inference is conducted in a Bayesian way. Furthermore, an iterative approach based on a gradient descent algorithm or a variational gradient descent algorithm is also applied to update the mean structure and covariance structure efficiently. The mean structure captures the relation between the functional dependent variable and the batch-based scalar independent variable. Meanwhile, the covariance structure models the nonlinear concurrent relationship between the functional output and multiple functional covariates. This idea in the WGPFR model avoids the curse of dimensionality by using multiple functional covariates as input for functional regression, which promotes the flexibility of this method.
Future research endeavors could encompass various extensions and enhancements to the proposed WGPFR model. Initially, the WGPFR model, as delineated in this paper, is predicated on the assumption of Riemannian manifolds with infinite injectivity radius. The potential exists to adapt this model to other manifolds that may not conform to this assumption.
Secondly, the current paper presumes the independence of Gaussian processes across different dimensions. Future investigations could explore methodologies as suggested by <cit.> and <cit.>, where Gaussian processes exhibit dependence. However, such an approach would necessitate the development of additional computational efficiencies.
Thirdly, the proposed model could be applied to other intriguing real data sets. For instance, the performance of our model could be evaluated on medical imaging data. Repeated measurements of functional magnetic resonance imaging (fMRI), for example, could be construed as data residing in a specific manifold. Furthermore, the recurrently scanned shapes of the corpus callosum from diverse patients could be modeled to predict the likelihood of Alzheimer’s disease based on certain relative factors <cit.>. In addition, consideration could be given to non-Gaussian data as <cit.> and alternative methods to define mean and covariance structures.
The convergence of parameter estimation is substantiated in Theorem <ref> and <ref>. However, the assurance of convergence for the iterative optimization Algorithm <ref> remains an open question for future exploration. <cit.> demonstrated that, given suitable initial parameters, the estimation error of conditional maximization problems is confined within the bounds of statistical error and optimization error with a high degree of probability, potentially laying the groundwork for convergence. Nonetheless, it is crucial to acknowledge the distinction between graphical spaces and Riemannian manifolds. The approach proposed by <cit.>, which establishes that the convergence rate of GPR is optimal under certain preconditions and is upper bounded based on the smoothness of the correlation function in Euclidean space, could potentially be extrapolated to Riemannian manifolds to facilitate the convergence of covariance structure estimation. This proposition, however, requires rigorous validation.
§ ACKNOWLEDGMENTS
JQS’s work is supported by funds of National Key R&D Program of China (2023YFA1011400), the National Natural Science Foundation of China (No. 12271239) and Shenzhen Fundamental Research Program (No. 20220111). Chao Liu is supported in part by China Postdoctoral Science Foundation, No.2021M691443, No.2021TQ0141 and SUSTC Presidential Postdoctoral Fellowship.
§ APPENDIX
§.§ Comparison of A Model with or without A Manifold Structure
In Kendall's shape space, the prediction of a regression model without a manifold structure loses the shape framework, while a model with manifold structure still keeps it.
§.§ Examples of Riemannian Manifolds
Sphere S^n:
The two-dimensional sphere S^2={x∈ℝ^3:x=1} is a simple example of a Riemannian manifold.
It is easy to show that the tangent vectors at a point p∈ S^2 are the vectors which are orthogonal to p.
A Riemannian metric is inherited from the ambient Euclidean metric on ℝ^3, and it exactly the Euclidean inner product between tangent vectors.
It is easy to show, using this metric, that the shortest curve passing between two points p,q∈ S^2 is an arc of a great circle, which is contained within the intersection of S^2 and a plane containing p, q and the origin.
The geodesic is unique exactly when one such plane exists, or in other words, when p and q are not antipodal.
The formula of geodesic distance is given by
d_ℳ(p, q)=cos^-1(p^T q).
The formula of the exponential map is
Exp(p, v)=cos(‖ v‖)p + sin(‖ v‖)v/‖ v‖, for all v∈ T_p={v:p^Tv=0}.
The logarithm map is defined for all pairs p,q which are not antipodal:
Log(p,q)=u/‖ u‖ d_ℳ(p,q), where u=q-(p^Tq)p, for all p+q≠ 0.
These concepts can be extended to the hyper-sphere S^n, where n>2.
Kendall's Shape Space:
As a well-developed probability space, Kendall's shape space provides a very useful setting for illustrating new theory and applications in statistics and machine learning.
In 2-dimensional shape analysis <cit.>, a landmark is a point that can be unambiguously identified on all objects in a population. Landmarking is the process of identifying and labelling a set of k landmarks on each object in an analysis.
This process generates a 2k dimensional (real) vector, which can also be represented in a k-dimensional complex space ℂ^k.
The shape of an object is the equivalence class of the object when translation, rotation and scaling of the object are removed.
Translation is removed by moving the centroid of each object to the origin. The landmark vectors then lie in a complex subspace V={(z_i)∈ℂ^k|∑_i=1^k z_i=0}, which is a copy of ℂ^k-1. Two configurations in ℂ^k-1 are equivalent if they are related by scaling and rotation. The set of equivalence classes can be shown to be equal to the complex projective space ℂℙ^k-2 and is known as Kendall's shape space <cit.>. The following formulae for the exp map and log map on Kendall's shape space are from <cit.>.
Analogously to the sphere, we write down formulae which specify the Riemannian geometry on Kendall's shape space. Suppose p and q are two data points in a general Cℙ^k-2, the geodesic distance can be computed by
d_ℳ(p, q) = arccos|p^*q|/(‖ p‖‖ q‖)
Note that this expression is invariant under multiplication of p and q by arbitrary non-zero complex constants.
The inverse exponential map is given by
Log(p,q)=θ·(q-π_pq)/‖ q-π_pq‖, θ=arccos|⟨ p,q⟩|/(‖ p‖‖ q‖), π_pq=p·⟨ p,q⟩/‖ p‖^2
The exponential map is defined by
Exp(p,v)=cosθ· p + (‖ p‖sinθ/θ)v, θ=‖ v‖
The tangent space T_p of a point p has the same dimensionality as the Kendall's shape space.
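A sketch of these operations in complex coordinates is given below: the pre-shape step removes translation and scale, while rotation is quotiented out through the modulus in the distance; the formulas follow those above, with θ = ‖v‖ in the exponential map, and all helper names are illustrative.

```python
import numpy as np

def preshape(landmarks):
    """Remove translation and scale from k complex landmarks (rotation is quotiented out below)."""
    z = np.asarray(landmarks, dtype=complex)
    z = z - z.mean()
    return z / np.linalg.norm(z)

def shape_distance(p, q):
    # d(p, q) = arccos(|<p, q>| / (||p|| ||q||)), with <p, q> = p^* q the Hermitian inner product.
    c = abs(np.vdot(p, q)) / (np.linalg.norm(p) * np.linalg.norm(q))
    return np.arccos(np.clip(c, 0.0, 1.0))

def shape_log(p, q):
    """Log(p, q): tangent vector at p, following the formulas above."""
    theta = shape_distance(p, q)
    if theta < 1e-12:
        return np.zeros_like(p)
    pi_pq = p * np.vdot(p, q) / np.linalg.norm(p) ** 2   # projection of q onto p
    u = q - pi_pq
    return theta * u / np.linalg.norm(u)

def shape_exp(p, v):
    """Exp(p, v) for a tangent vector v at p, with theta = ||v||."""
    theta = np.linalg.norm(v)
    return p if theta < 1e-12 else np.cos(theta) * p + np.linalg.norm(p) * np.sin(theta) / theta * v
```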
§.§ Proofs
In order to prove the theorems in Section 3, we need the following assumptions for the Riemannian manifold and the manifold-valued data y(t).
* (C1) ℳ is a closed Riemannian submanifold of a Euclidean space ℝ^d_0, with geodesic distance d_ℳ induced by the Euclidean metric.
* (C2) Trajectories y(t) are continuous for t ∈𝒯 almost surely.
* (C3) For all t ∈𝒯, the Fréchet mean μ_0(t) and sample Fréchet mean μ̂_0(t) exist and are unique almost surely.
* (C4) Almost surely, trajectories y(t) lie in a compact set S_t⊂ B_ℳ(μ_0(t), r) for t ∈𝒯, where B_ℳ(μ_0(t), r) ⊂ℳ is an open ball centered at μ_0(t) with radius r<inf _t ∈𝒯inj_μ_0(t).
* (C5) For any ϵ>0,
inf _t ∈𝒯inf _p: d_ℳ(p, μ_0(t))>ϵ M(p, t)-M(μ_0(t), t)>0 .
§.§ Proof of Theorem 1
Proof. Recall the notation and the multiple functional regression model V_m(t)=W_m(t)+e_m(t); denote the vectors of the realizations at t_i, m=1,⋯,M, i=1,⋯,n of V_m(t), W_m(t) and e_m(t) as V, W and E, respectively. Then we have
vec(B̂)-vec(B) = ((Φ⊗ U)^⊤ (Φ⊗ U))^-1(Φ⊗ U)^⊤ vec(V̂)-vec(B)
=((Φ⊗ U)^⊤ (Φ⊗ U))^-1(Φ⊗ U)^⊤(vec(W+E)+vec(V̂-V))-vec(B)
=((Φ⊗ U)^⊤ (Φ⊗ U))^-1(Φ⊗ U)^⊤((Φ⊗ U)vec(B)+vec(E)+vec(V̂-V))-vec(B)
=((Φ⊗ U)^⊤ (Φ⊗ U))^-1(Φ⊗ U)^⊤ vec(E)
+((Φ⊗ U)^⊤ (Φ⊗ U))^-1(Φ⊗ U)^⊤ vec(V̂-V)
=̇A_1+A_2
For the first term, as e_m(t) are assumed to be independent of the covariate u_m for m=1,⋯,M, we have E(A_1)=0 and Var(A_1)=O_p(1/M). Therefore A_1=o_p(1) as M→∞.
For the second term, note that
‖V̂_m(t)-V_m(t)‖=‖Log(μ̂_0(t),y_m(t))-Log(μ_0(t),y_m(t))‖≤ d_ℳ(μ̂_0(t),μ_0(t))=O_p(1/√(M))
where the last equality has been proved in <cit.> under the above conditions (C1)-(C5).
So the second term also satisfies A_2=o_p(1) as M→∞. ▪
§.§ Proof of Theorem 2
Before proving Theorem 2, we first introduce some notation to simplify the proof.
Recall that τ_md=(τ_md(t_m1),⋯,τ_md(t_mn)); for simplicity, we omit the subscript md and denote the n observations at the m-th batch and the d-th dimension as z_n=(z_1,⋯,z_n)=̇(τ_md(t_m1),⋯,τ_md(t_mn)), and denote the corresponding covariates
X_n=(x_1,⋯,x_n) with x_i∈𝒳 which are independently drawn from a distribution 𝒰(x). Denote the observed data 𝒟_n={(x_i,z_i),i=1,⋯,n}.
Let τ_0(·) be the true underlying function. Assume that the underlying process τ(·) ∼GP(0, K(·, · ; θ)) where all the subscript md are omitted. Denote
p_g p(z_n) =∫_ℱ p(z_1, …, z_n|τ(X_n)) d p_n(τ)
p_0(z_n) =p(z_1, …, z_n|τ_0(X_n))
then p_g p(z_n) is the Bayesian predictive distribution of z_n based on the GPR model. Note that p_n(τ) depends on n since the hyperparameters of τ(·) is estimated from the data.
Proof.
It suffices to show
1/n E_X_n(D[p_0(z_n), p_g p(z_n)]) → 0 as n →∞,
Note that
D[p_0(z_n), p_g p(z_n)] =∫_𝒵^n p_0(z_1, ⋯, z_n) logp_0(z_1, ⋯, z_n)/ p_g p(z_1, ⋯, z_n) d z_1⋯ d z_n
=∫_𝒵^n p_0(z_1, ⋯, z_n)[-log p_g p(z_1, …, z_n)+log p_0(z_1, …, z_n)] d z_1⋯ d z_n .
It suffices to give an upper bound for the term -log p_g p(z_1, …, z_n)+log p_0(z_1, …, z_n).
Let ℋ be the Reproducing Kernel Hilbert Space (RKHS) associated with the
covariance function k(·, · ; θ), and ℋ_n the span of {k(·, x_i ; θ)}, i.e. ℋ_n={f(·): f(x)=
∑_i=1^nα_i k(x, x_i ; θ), α_i∈ℝ}.
Assume the true underlying function τ_0∈ℋ_n,
then τ_0(·) can be expressed as
τ_0(·)=∑_i=1^nα_i k(·, x_i ; θ) ≜ K(·) α
where K(·)=(k(·, x_1 ; θ), …, k(·, x_n ; θ)) and α=(α_1, …, α_n)^T . By the properties of
RKHS, ‖τ_0‖_k^2=α^TC_n nα, and (τ_0(x_1), …, τ_0(x_n))^T=C_n nα, where C_n n=(k(x_i, x_j ; θ)) is
the covariance matrix over x_i, i=1, …, n.
Let P and P̅ be any two measures on ℱ, then it yields by Fenchel-Legendre duality relationship that, for any functional g(·) on ℱ,
E_P̅[g(τ)] ≤log E_P[e^g(τ)]+D[P̅, P]
Let g(τ)=log p(z_1, …, z_n|τ) for any z_1, …, z_n in 𝒵 and τ∈ℱ, let P be the measure induced by G P(0, k(·, · ; θ̂_n)), hence its finite dimensional distribution at z_1, …, z_n is p̃(z_1, …, z_n)=N(0, Ĉ_n n), and
E_P[e^g(τ)]=E_P[p(z_1, …, z_n|τ)]=∫_ℱ p(z_1, ⋯, z_n|τ) d p_n(τ)=p_g p(z_n)
where Ĉ_n n is defined in the same way as C_n n but with θ being replaced by its estimator
θ̂_n.
Let P̅ be the posterior distribution of τ(·) on ℱ which has a prior distribution G P(0, k(·, · ; θ))
and normal likelihood ∏_i=1^n N(ẑ_i ; τ(x_i), σ^2), where
ẑ_n ≜(ẑ_1, ⋯, ẑ_n)^T=(C_n n+σ^2I) α
and σ^2 is a constant to be specified. In other words, we assume a model z=τ(x)+η
with η∼ N(0, σ^2) and τ(·) ∼ G P(0, k(·, · ; θ)), and ẑ_n is a set of observations at x_1, …, x_n . Thus, P̅(τ)=p(τ|ẑ_n, X_n) is a probability measure on ℱ. Therefore,
by Gaussian process regression, the posterior of (τ_1, …, τ_n) ≜(τ(x_1), …, τ(x_n)) is
p̅(τ_1, ⋯, τ_n) ≜ p(τ_1, ⋯, τ_n|ẑ, X_n)
=N(C_n n(C_n n+σ^2I)^-1ẑ, C_n n(C_n n+σ^2I)^-1σ^2)
=N(C_n nα, C_n n(C_n n+σ^2I)^-1σ^2)
=N(C_n nα, C_n n B^-1)
where B=I+σ^-2C_n n.
It follows that
D[P̅, P] =∫_ℱlog(d P̅/d P) d P̅
=∫_R^np̅(τ_1, …, τ_n) logp̅(τ_1, …, τ_n)/p̃(τ_1, …, τ_n) d τ_1… d τ_n
=1/2[log|Ĉ_n n|-log|C_n n|+log |B|+tr(Ĉ_n n^-1C_n n B^-1)+(C_n nα)^TĈ_n n^-1(C_n nα)-n]
=1/2[-log|Ĉ_n n^-1C_n n|+log |B|+tr(Ĉ_n n^-1C_n n B^-1)+‖τ_0‖_k^2
+α^TC_n n(Ĉ_n n^-1C_n n-I) α-n]
On the other hand,
E_P̅[g(τ)]=E_P̅[log p(z_1, …, z_n|τ)]=∑_i=1^n E_P̅[log p(z_i|τ(x_i))]
By Taylor's expansion, expanding log p(z_i|τ(x_i)) to the second order at τ_0(x_i) yields
log p(z_i|τ(x_i))= log p(z_i|τ_0(x_i))+.d[log p(z_i|τ(x_i))]/d τ(x_i)|_τ(x_i)=τ_0(x_i)(τ(x_i)-τ_0(x_i))
+.1/2d^2[log p(z_i|τ(x_i))]/[d τ(x_i)]^2|_τ(x_i)=τ̃(x_i)(τ(x_i)-τ_0(x_i))^2
where τ̃(x_i)=τ_0(x_i)+λ(τ(x_i)-τ_0(x_i)) for some 0 ≤λ≤ 1.
For Gaussian probability density function,
it follows that
E_P̅[log p(z_i|τ(x_i))]= log p(z_i|τ_0(x_i))-σ^2/2Var[τ(x_i)]
so that
E_P̅[log p(z_1, …, z_n|τ)]=log p_0(z_1, …, z_n)-σ^2/2tr(Var[τ(X_n)])=log p_0(z_1, …, z_n)-σ^2/2tr(C_n n B^-1)
Therefore,
-log p_g p(z_1, …, z_n)+log p_0(z_1, …, z_n)
≤ -log E_P[e^g(τ)]+E_P̅[g(τ)]+σ^2/2tr(C_n n B^-1)
≤ D[P̅, P]+σ^2/2tr(C_n n B^-1)
= 1/2‖τ_0‖_k^2+1/2[-log|Ĉ_n n^-1C_n n|+log |B|+tr(Ĉ_n n^-1C_n n B^-1+σ^2 C_n n B^-1)
+α^TC_n n(Ĉ_n n^-1C_n n-I) α-n].
Since the covariance function is continuous in θ and θ̂_n→θ we have Ĉ_n n^-1C_n n-I→ 0
as n →∞. Therefore there exist some positive constants K and ϵ such that
-log|Ĉ_n n^-1C_n n|<K
α^TC_n n(Ĉ_n n^-1C_n n-I) α<K
tr((Ĉ_n n^-1C_n n-I) B^-1)<K
Thus
-log p_g p(z_1, …, z_n)+log p_0(z_1, …, z_n) <1/2‖τ_0‖_k^2+1/2log |B|+3/2K
for any τ_0(·) ∈ℋ_n.
Taking the infimum over τ_0 and applying the Representer Theorem (see Lemma 2 in <cit.>) we obtain
-log p_g p(z_1, …, z_n)+log p_0(z_1, …, z_n) ≤1/2‖τ_0‖_k^2+1/2log|I+σ^-2C_n n|+3/2K
for all τ_0(·) ∈ℋ.
Therefore, we obtain that
1/n E_X_n(D[p_0(z_n), p_g p(z_n)]) ≤1/(2n)‖τ_0‖_k^2+1/(2n) E_X_n(log|I+σ^-2C_n n|)+3K/(2n)→ 0
as n →∞. ▪
|
http://arxiv.org/abs/2409.02340v1 | 20240904000129 | A generalized adaptive central-upwind scheme for compressible flow simulations and preventing spurious vortices | [
"Amareshwara Sainadh Chamarthi"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Amareshwara Sainadh Chamarthi (corresponding author; e-mail address: sainath@caltech.edu)
Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA, USA
§ ABSTRACT
This work introduces a novel adaptive central-upwind scheme designed for simulating compressible flows with discontinuities in the flow field. The proposed approach offers significant improvements in computational efficiency over the central gradient-based reconstruction approach presented in <cit.> (Hoffmann, Chamarthi and Frankel, JCP 2024). By leveraging a combination of conservative and characteristic variable reconstruction, the proposed approach demonstrates oscillation-free results while effectively reducing computational costs and improving the results. Furthermore, the adaptive central-upwind algorithm is generalized to be compatible not only with the gradient-based reconstruction as in <cit.> but with other existing methods. In this regard, with the proposed algorithm, the standard fifth/sixth-order reconstruction scheme has also been shown to outperform existing schemes with a 20-30% reduction in computational expense with improved results. Notably, the proposed approach has successfully prevented the generation of spurious vortices in the double shear layer test cases, even with linear schemes, showcasing its robustness and effectiveness.
Low dissipation, Gradient-Based Reconstruction, Monotonocity Preserving, Central scheme, Spurious vortices
§ INTRODUCTION
Adaptive central-upwind schemes are required to simulate compressible flows when both turbulence and discontinuities, like shock waves or contact discontinuities, are present. The challenge in these simulations is to capture discontinuities (where upwind schemes with dissipative properties are required) while maintaining low dissipation (an attribute of central schemes) in turbulent flow regions, which is a difficult balance to achieve. This is particularly crucial in large eddy simulations (LES) on coarse meshes, where resolving a wide range of turbulence scales with minimal dissipation is essential. Adaptive central-upwind schemes address this by combining the strengths of upwind and central schemes, allowing them to adapt to the complex demands of such simulations <cit.>.
The adaptive central-upwind schemes are minimally dissipative, leading to sharp shocks and well-resolved broadband turbulence, but rely on an appropriate shock sensor. The shock or discontinuity sensor is the main challenge for the hybrid central/upwind method, as mentioned in <cit.>. Various researchers have proposed different approaches to address this challenge. For example, Kim and Kwon suggested a hybrid scheme that combines a central scheme with the numerical dissipation of the upwind weighted essentially non-oscillatory (WENO) scheme using a weighing function <cit.>. Hu, Wang, and Adams proposed the adaptive central-upwind WENO scheme, known as WENO-CU6, which leverages a low dissipation central scheme in smooth regions while maintaining strong shock-capturing capabilities through the WENO approach <cit.>. Subsequently, Fu et al. developed the Targeted ENO family of schemes, which further improves upon the capabilities of WENO-CU6 <cit.>. Other researchers, such as Liu et al. <cit.>, Wong and Lele <cit.>, and Subramaniam et al. <cit.>, have also made contributions by developing hybrid weighted interpolation schemes and improved versions of WENO-CU6 with localized dissipative interpolation. Subramaniam et al. <cit.> proposed an explicit-compact interpolation along with compact finite differences that provide higher resolution and more localized dissipation compared to that of Wong and Lele <cit.>. Furthermore, Chamarthi and Frankel <cit.> introduced an adaptive central-upwind scheme, which employs a sixth-order linear-compact scheme and the fifth-order Monotonicity Preserving scheme (MP) using the Boundary Variation Diminishing (BVD) algorithm. Hoffmann, Chamarthi, and Frankel <cit.> have proposed a wave-appropriate central-upwind scheme by taking advantage of the wave structure of the Euler equations <cit.>. The proposed scheme successfully simulates the hypersonic transitional flows on coarse grids. While it outperformed existing methods for benchmark and hypersonic flow test cases, the wave-appropriate centralization approach is still expensive as it requires the reconstruction of the cell-interface values to be carried out for characteristic variables. It is well known that transforming conservative variables to characteristic variables, while necessary for shock-capturing, is expensive and is not required for regions with no discontinuities. On a different note, Feng et al. have employed optimization techniques to develop an adaptive central-upwind scheme that balances dissipation and shock-capturing <cit.>. Their approach involves using multi-objective Bayesian optimization to construct a WENO-based subgrid-scale model for implicit Large Eddy Simulations of compressible flows. Feng et al. further improved the approach by using deep reinforcement learning <cit.> and Bayesian optimization <cit.> techniques. In their papers, Feng et al. simulated the double periodic shear layer test case and observed that the proposed optimized schemes are free of spurious vortices.
The objective of the current paper is to propose a generalized and efficient adaptive central-upwind scheme based on the wave-appropriate scheme proposed by Hoffmann, Chamarthi, and Frankel <cit.>. The proposed algorithm aims to reduce computational expenses by reconstructing conservative variables in regions without discontinuities and to prevent spurious vortices in the double periodic shear layer test case.
The rest of the manuscript is organized as follows: In Section <ref>, the governing equations are presented. Section <ref> presents the numerical methods, including the novel adaptive central-upwind approach with conservative and characteristic variable reconstruction. Section <ref> consists of the numerical results with the proposed algorithm and the conclusions are presented in Section <ref>.
§ GOVERNING EQUATIONS
In this study, the three-dimensional compressible Navier-Stokes equations are solved in Cartesian coordinates:
∂𝐔/∂ t+∂𝐅^𝐜/∂ x+∂𝐆^𝐜/∂ y+∂𝐇^𝐜/∂ z=∂𝐅^𝐯/∂ x+∂𝐆^𝐯/∂ y+∂𝐇^𝐯/∂ z,
where U is the conservative variable vector, 𝐅^𝐜, 𝐆^𝐜, 𝐇^𝐜 and 𝐅^𝐯, 𝐆^𝐯, 𝐇^𝐯, are the convective (superscript c) and viscous (superscript v) flux vectors in each coordinate direction, respectively. The conservative variable, convective, and viscous flux vectors are given as:
𝐔 = [ ρ; ρ u; ρ v; ρ w; ρ E ],
𝐅^c = [ ρ u; ρ u^2 + p; ρ u v; ρ u w; ρ u H ],
𝐆^c = [ ρ v; ρ v u; ρ v^2 + p; ρ v w; ρ v H ],
𝐇^c = [ ρ w; ρ w u; ρ w v; ρ w^2 + p; ρ w H ],
[ 𝐅^𝐯=[0, τ_x x, τ_x y, τ_x z, u τ_x x+v τ_x y+w τ_x z-q_x]^T,; 𝐆^𝐯=[0, τ_x y, τ_y y, τ_y z, u τ_y x+v τ_y y+w τ_y z-q_y]^T,; 𝐇^𝐯=[0, τ_x z, τ_y z, τ_z z, u τ_z x+v τ_z y+w τ_z z-q_z]^T, ]
where ρ is density, u, v, and w are the velocities in the x, y, and z directions, respectively, p is the pressure, E = e + (u^2 + v^2 + w^2 )/2 is the specific total energy, and H = E + p/ρ is the specific total enthalpy. The equation of state is for a calorically perfect gas so that e = p [ ρ (γ-1) ]^-1 is the internal energy, where γ is the ratio of specific heats. The components of the viscous stress tensor τ and the heat flux q are defined in tensor notation as:
τ_i j=μ/Re(∂ u_i/∂ x_j+∂ u_j/∂ x_i-2/3∂ u_k/∂ x_kδ_i j),
q_i=-μ/Re Pr Ma(γ-1)∂ T/∂ x_i, T= Ma^2γp/ρ,
where μ is the dynamic viscosity, T is the temperature, Ma and Re are the Mach number and Reynolds number, and Pr is the Prandtl number.
§ NUMERICAL METHODS
Using a conservative numerical method, the governing equations cast in semi-discrete form for a Cartesian cell I_i,j,k = [ x_i-1/2, x_i+1/2] ×[ y_i-1/2, y_i+1/2] ×[ z_i-1/2, z_i+1/2] can be expressed via the following ordinary differential equation:
d/d t𝐔̌_i,j,k = 𝐑𝐞𝐬_i,j,k = - . d𝐅̌^c/d x|_i,j,k - . d𝐆̌^c/d y|_i,j,k - . d𝐇̌^c/d z|_i,j,k
+ . d𝐅̌^v/d x|_i,j,k + . d𝐆̌^v/d y|_i,j,k + . d𝐇̌^v/d z|_i,j,k,
where the check accent, (̌·̌)̌, indicates a numerical approximation of physical quantities, 𝐑𝐞𝐬_i,j,k is the residual function, and the remaining terms are cell centre numerical flux derivatives of the physical fluxes in Equation (<ref>). For brevity, we continue with only the x-direction as it is straightforward to extend to all three dimensions in a dimension-by-dimension manner. The indices j and k are also dropped for simplicity.
§.§ Viscous Flux Spatial Discretization Scheme
Viscous fluxes, 𝐅̌^̌v̌, are computed using the fourth-order α-damping scheme of Chamarthi <cit.>, which is based on the α-damping approach of Nishikawa <cit.>. In the one-dimensional scenario, the cell center numerical viscous flux derivative is:
. d𝐅^v/d x|_i = 1/Δ x( 𝐅^v_i+1/2 - 𝐅^v_i-1/2).
The cell interface numerical viscous flux is computed as,
𝐅^v_i+1/2 =
[ 0; -τ_i+1/2; -τ_i+1/2 u_i+1/2 + q_i+1/2 ], τ_i+1/2 = 4/3μ_i+1/2. ∂ u/∂ x|_i+1/2,
q_i+1/2 = -κ_i+1/2. ∂ T/∂ x|_i+1/2.
For an arbitrary variable, ϕ, the α-damping approach computes cell interface gradients as:
. ∂ϕ/∂ x|_i+ 1/2 = 1/2( . ∂ϕ/∂ x|_i + . ∂ϕ/∂ x|_i+1) + α/2 Δ x( ϕ_R - ϕ_L ), ϕ_L = ϕ_i + . ∂ϕ/∂ x|_iΔ x/2, ϕ_R = ϕ_i+1 - . ∂ϕ/∂ x|_i+1Δ x/2,
where, in this work, α = 4. The gradients at cell centers are computed using the fourth order compact finite-difference scheme, as in <cit.>, which is as follows:
5/14. ∂ϕ/∂ x|_i-1 + . ∂ϕ/∂ x|_i + 5/14. ∂ϕ/∂ x|_i+1 = 1/28 Δ x( ϕ_i+2 - ϕ_i-2) + 11/14 Δ x( ϕ_i+1 - ϕ_i-1).
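As an illustration, the following Python sketch assembles and solves the periodic tridiagonal system implied by the compact stencil above; the dense solve and the function name are choices made for brevity rather than the authors' implementation.

import numpy as np

def compact_gradient(phi, dx):
    # Solve (5/14) f'_{i-1} + f'_i + (5/14) f'_{i+1}
    #     = (phi_{i+2} - phi_{i-2})/(28 dx) + 11 (phi_{i+1} - phi_{i-1})/(14 dx)
    # on a periodic one-dimensional grid.
    n = len(phi)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(n):
        A[i, (i - 1) % n] = 5.0 / 14.0
        A[i, i] = 1.0
        A[i, (i + 1) % n] = 5.0 / 14.0
        rhs[i] = (phi[(i + 2) % n] - phi[(i - 2) % n]) / (28.0 * dx) \
                 + 11.0 * (phi[(i + 1) % n] - phi[(i - 1) % n]) / (14.0 * dx)
    return np.linalg.solve(A, rhs)

# quick sanity check on a smooth periodic function
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
dx = x[1] - x[0]
print(np.max(np.abs(compact_gradient(np.sin(x), dx) - np.cos(x))))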
§.§ Convective Flux Spatial Discretization Scheme
Similar to the viscous flux discretization, the cell centre numerical convective flux derivative is expressed as:
. d𝐅̌^c/d x|_i = 1/Δ x( 𝐅̌^c_i+1/2 - 𝐅̌^c_i-1/2),
where i ±1/2 indicates right and left cell interface values, respectively. 𝐅̌^c_i ±1/2 are computed using an approximate Riemann solver. This work uses the componentwise local Lax-Friedrichs (cLLF) <cit.> approximate Riemann solver unless otherwise explicitly stated. The numerical fluxes at cell boundaries computed using a Riemann solver can be expressed in the following standard form:
𝐅̌^c_i ±1/2 = 1/2[ 𝐅̌^c ( 𝐔^L_i ±1/2) + 𝐅^c ( 𝐔^R_i ±1/2) ] - 1/2| 𝐀_i ±1/2| ( 𝐔^R_i ±1/2 - 𝐔^L_i ±1/2),
where the L and R superscripts denote the left- and right-biased states, respectively, and | 𝐀_i ±1/2| denotes the convective flux Jacobian. The objective is to obtain the left- and right-biased states, 𝐔^L_i ±1/2 and 𝐔^R_i ±1/2. The procedure to obtain these interface values is elucidated in the following sections.
§.§.§ Linear Schemes:
In this subsection, we provide the details of the calculations of candidate polynomials that can be used to approximate the values of 𝐔^L_i ±1/2 and 𝐔^R_i ±1/2. While the adaptive central-upwind approach to be presented later is a general approach that might potentially be used with most of the schemes, we consider the standard fifth-order upwind scheme <cit.> and the corresponding sixth-order scheme, and the Gradient-Based Reconstruction (GBR) schemes proposed in <cit.>.
Fifth-order upwind scheme: The well-known fifth-order upwind schemes for obtaining the values of the left and right interfaces are as follows:
ϕ_i+1 / 2^L5, Linear=1/30ϕ_i-2-13/60ϕ_i-1+47/60ϕ_i+0+9/20ϕ_i+1-1/20ϕ_i+2,
ϕ_i+1 / 2^R5, Linear=1/30ϕ_i+3-13/60ϕ_i+2+47/60ϕ_i+1+9/20ϕ_i+0-1/20ϕ_i-1,
where ϕ is an arbitrary variable; either conservative (𝐔) or characteristic (𝐂) variables are used in the present paper. The superscripts L5 and R5 represent the left- and right-biased fifth-order formulas, and the Linear superscript indicates that the scheme is linear. Linear schemes cannot be used for flows with discontinuities as they will lead to oscillations. The details of the non-linear scheme will be presented later.
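For reference, a minimal Python sketch of these linear fifth-order interface values on a periodic grid is given below; the function name and the use of array rolls are assumptions made for compactness.

import numpy as np

def upwind5_interfaces(phi):
    # Linear 5th-order left- and right-biased values of phi at i+1/2 (periodic grid).
    phim2, phim1 = np.roll(phi, 2), np.roll(phi, 1)
    phip1, phip2, phip3 = np.roll(phi, -1), np.roll(phi, -2), np.roll(phi, -3)
    phiL = phim2 / 30.0 - 13.0 * phim1 / 60.0 + 47.0 * phi / 60.0 \
           + 9.0 * phip1 / 20.0 - phip2 / 20.0
    phiR = phip3 / 30.0 - 13.0 * phip2 / 60.0 + 47.0 * phip1 / 60.0 \
           + 9.0 * phi / 20.0 - phim1 / 20.0
    return phiL, phiR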
Upwind-biased GBR scheme: The GBR schemes differ slightly from the fifth-order scheme mentioned above. As such, the interface values depend not only on the cell centre values but also on the gradients at the cell centres. The advantage of having gradients is that they can be reused in other parts of the solver, such as the viscous flux discretization, shock capturing, and the computation of post-processing quantities <cit.>. The GBR method employs the first two moments of the Legendre polynomial evaluated on x_i-1 / 2≤ x ≤ x_i+1 / 2 and written for a general variable, ϕ, as:
ϕ(x) = ϕ_i + ϕ'_i(x-x_i) + 3 ϕ”_i/2κ[(x-x_i)^2-Δ x_i^2/12],
where ϕ'_i and ϕ”_i respectively represent the first and second derivatives of ϕ_i. If x = x_i ±Δ x/2 and κ = 1/3, the following equations for the left- and right-biased states are obtained:
ϕ^L,GBR, Linear_i+1/2 = ϕ_i+0 + Δ x/2ϕ'_i+0 + Δ x^2/12ϕ”_i+0,
ϕ^R,GBR, Linear_i+1/2 = ϕ_i+1 - Δ x/2ϕ'_i+1 + Δ x^2/12ϕ”_i+1.
Like the fifth-order upwind scheme, the superscripts L, GBR and R, GBR represent left and right-biased gradient-based reconstruction formulas. In this work, ϕ'_i was computed using eighth order explicit central differences <cit.>:
ϕ'_i = 1/Δ x( 1/280ϕ_i-4 - 4/105ϕ_i-3 + 1/5ϕ_i-2 - 4/5ϕ_i-1 + 4/5ϕ_i+1 - 1/5ϕ_i+2 +4/105ϕ_i+3 - 1/280ϕ_i+4).
ϕ”_i was computed by the following formula <cit.>:
ϕ”_i = 2/Δ x^2( ϕ_i+1 - 2 ϕ_i + ϕ_i-1) - 1/2 Δ x( ϕ'_i+1 - ϕ'_i-1).
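The linear GBR reconstruction can be sketched in Python as follows, combining the eighth-order explicit first derivative, the second-derivative formula above, and the interface formulas; periodicity and the helper names are assumptions of the sketch.

import numpy as np

def gbr_linear_interfaces(phi, dx):
    # Unlimited GBR interface values from the formulas above (periodic grid).
    def shift(k):                      # phi_{i+k}
        return np.roll(phi, -k)
    # eighth-order explicit central first derivative
    d1 = (shift(-4) / 280 - 4 * shift(-3) / 105 + shift(-2) / 5 - 4 * shift(-1) / 5
          + 4 * shift(1) / 5 - shift(2) / 5 + 4 * shift(3) / 105 - shift(4) / 280) / dx
    # second derivative built from function values and first derivatives
    d2 = 2.0 * (shift(1) - 2.0 * phi + shift(-1)) / dx**2 \
         - (np.roll(d1, -1) - np.roll(d1, 1)) / (2.0 * dx)
    phiL = phi + 0.5 * dx * d1 + dx**2 * d2 / 12.0
    phiR = shift(1) - 0.5 * dx * np.roll(d1, -1) + dx**2 * np.roll(d2, -1) / 12.0
    return phiL, phiR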
Central schemes: The abovementioned schemes are left- and right-biased upwind schemes. They provide the necessary dissipation in some regions but are unsuitable for turbulent simulations, which require a low-dissipation central scheme. A central scheme can be obtained by averaging the left- and right-biased upwind reconstructions (it can also be derived in other ways).
ϕ^L_i+1/2 = ϕ^C_i+1/2 = ( 1 - η) ϕ^L, Linear_i+1/2 + ηϕ^R, Linear_i+1/2,
ϕ^R_i+1/2 = ϕ^C_i+1/2 = ηϕ^L, Linear_i+1/2 + ( 1 - η) ϕ^R, Linear_i+1/2,
where η = 0.5 and ( ·)^C denotes the centralized reconstruction. The well-known sixth-order central scheme can be obtained by averaging the left and right-biased reconstruction formulas given by Equation (<ref>), and is as follows:
ϕ^C6,Linear_i+1/2=1/2(ϕ_i+1 / 2^L5, Linear + ϕ_i+1 / 2^R5, Linear)=1/60(ϕ_i-2-8 ϕ_i-1+37 ϕ_i+37 ϕ_i+1-8 ϕ_i+2+ϕ_i+3).
Similarly, the centralized GBR scheme can be obtained by averaging the Equations (<ref>), and is as follows:
ϕ^C,GBR,Linear_i+1/2=1/2(ϕ_i+1 / 2^L,GBR, Linear + ϕ_i+1 / 2^R,GBR, Linear).
§.§.§ Monotonicity Preserving approach
The linear reconstruction schemes represented by Equations (<ref>) and (<ref>) are susceptible to oscillations when there are discontinuities present. To address this issue, we utilized MP limiting. The following details the MP limiting procedure specifically for the left-biased state, although the procedure is identical for the right-biased state. The limiting procedure is typically carried out for the characteristic variables, 𝐂, for oscillation-free results, as explained in <cit.>. The transformation from conservative variables to characteristic variables will be explained later. The MP limiting criterion for the GBR scheme <cit.> is:
𝐂^L_i+1/2 =
𝐂^L,Linear_i+1/2 if ( 𝐂^L,Linear_i+1/2 - 𝐂_i ) ( 𝐂^L,Linear_i+1/2 - 𝐂^L,MP_i+1/2) ≤ 10^-40,
𝐂^L,Non-Linear_i+1/2 otherwise,
where 𝐂^L,Linear_i+1/2 corresponds to Eqn. <ref> and the remaining terms are:
𝐂^L,Non-Linear_i+1/2 = 𝐂^L,Linear_i+1/2 + minmod( 𝐂^L,MIN_i+1/2 - 𝐂^L,Linear_i+1/2, 𝐂^L,MAX_i+1/2 - 𝐂^L,Linear_i+1/2),
𝐂^L,MP_i+1/2 = 𝐂_i + minmod[ 𝐂_i+1-𝐂_i, 𝒜( 𝐂_i-𝐂_i-1) ],
𝐂^L,MIN_i+1/2 = max[ min( 𝐂_i, 𝐂_i+1, 𝐂^L,MD_i+1/2), min( 𝐂_i, 𝐂^L,UL_i+1/2, 𝐂^L,LC_i+1/2) ],
𝐂^L,MAX_i+1/2 = min[ max( 𝐂_i, 𝐂_i+1, 𝐂^L,MD_i+1/2), max( 𝐂_i, 𝐂^L,UL_i+1/2, 𝐂^L,LC_i+1/2) ],
𝐂^L,MD_i+1/2 = 1/2( 𝐂_i + 𝐂_i+1) - 1/2 d^L,M_i+1/2, 𝐂^L,UL_i+1/2 = 𝐂_i + 4 ( 𝐂_i - 𝐂_i-1),
𝐂^L,LC_i+1/2 = 1/2( 3 𝐂_i - 𝐂_i-1) + 4/3 d^L,M_i-1/2, d^L,M_i+1/2 = minmod( 0.5(d_i + d_i+1),2d_i, 2d_i+1),
d_i = 2 ( 𝐂_i+1 - 2𝐂_i + 𝐂_i-1) - Δ x/2( 𝐂'_i+1 - 𝐂'_i-1),
where 𝒜 = 4 and minmod( a,b ) = 1/2[ sgn(a) + sgn(b) ] min( | a |, | b | ). The GBR method that employs explicit finite differences and the above MP limiter is called MEG (Monotonicity Preserving Explicit Gradient) as in <cit.>. As it uses the eighth-order explicit gradients, it is denoted as MEG8. For the fifth-order upwind scheme, the computations of the d_i+1 / 2^M and d_i differ from that of the GBR scheme and are as follows <cit.>:
d_i+1 / 2^M =minmod(4 d_i-d_i+1, 4 d_i+1-d_i, d_i, d_i+1),
d_i =𝐂_i-1-2 𝐂_i+𝐂_i+1.
The fifth-order upwind method with MP limiting is known as MP5, as in <cit.>. In this paper, only the adaptive central-upwind variant of this scheme is considered, and it is denoted as MP6.
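A compact Python sketch of this MP limiting for the fifth-order variant, applied to a periodic scalar field, is given below; the characteristic transformation is omitted and the helper names are illustrative.

import numpy as np

def minmod2(a, b):
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def minmod4(a, b, c, d):
    s = 0.125 * (np.sign(a) + np.sign(b)) \
        * np.abs((np.sign(a) + np.sign(c)) * (np.sign(a) + np.sign(d)))
    return s * np.min(np.stack([np.abs(a), np.abs(b), np.abs(c), np.abs(d)]), axis=0)

def mp_limit_left(c, cL_lin, A=4.0):
    # MP limiting of the left-biased interface value at i+1/2 (periodic scalar field).
    cm1, cp1, cp2 = np.roll(c, 1), np.roll(c, -1), np.roll(c, -2)
    c_mp = c + minmod2(cp1 - c, A * (c - cm1))
    need = (cL_lin - c) * (cL_lin - c_mp) > 1e-40        # MP criterion violated?
    d = cm1 - 2.0 * c + cp1                              # d_i
    dp1 = c - 2.0 * cp1 + cp2                            # d_{i+1}
    dM = minmod4(4.0 * d - dp1, 4.0 * dp1 - d, d, dp1)   # d^M_{i+1/2}
    c_ul = c + A * (c - cm1)
    c_md = 0.5 * (c + cp1) - 0.5 * dM
    c_lc = 0.5 * (3.0 * c - cm1) + 4.0 * np.roll(dM, 1) / 3.0
    c_min = np.maximum(np.minimum(np.minimum(c, cp1), c_md),
                       np.minimum(np.minimum(c, c_ul), c_lc))
    c_max = np.minimum(np.maximum(np.maximum(c, cp1), c_md),
                       np.maximum(np.maximum(c, c_ul), c_lc))
    c_lim = cL_lin + minmod2(c_min - cL_lin, c_max - cL_lin)
    return np.where(need, c_lim, cL_lin)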
§.§.§ Novel adaptive central-upwind and conservative-characteristic variable reconstruction
This section presents the novel central-upwind scheme with adaptive conservative-characteristic variable reconstruction. Before proceeding to the novel algorithm, a brief background of the approach is presented.
* In Ref. <cit.>, the authors took advantage of the wave structure of the Euler equations. Once the variables are transformed from physical to characteristic space, the characteristic variables have specific properties. The first and last variables are known as the acoustic waves, the second variable is the entropy wave, and the rest are known as shear waves. Together, the entropy and shear waves are known as linearly degenerate waves. The density varies across the entropy wave in characteristic space, and the rest of the variables remain unchanged. Ref. <cit.> took advantage of this and reduced the frequent activation of the MP criterion, Equation (<ref>). Shockwaves are detected by the Ducros sensor, which is used for waves (characteristic variables) other than entropy waves. However, all the waves are computed using the upwind, inherently dissipative schemes.
* The approach was improved in <cit.>, where all the linearly degenerate waves are computed using a central scheme if the necessary discontinuity detection criteria are met. The acoustic waves are still computed using an upwind scheme for stability. The proposed approach significantly improved benchmark test cases and was able to predict hypersonic transitional flows.
One disadvantage of Ref. <cit.> and <cit.> is that the variables are transformed from physical (conservative) to characteristic variables, which is computationally expensive. It would be efficient to convert to characteristic variables only in regions where there are discontinuities, which are, strictly speaking, localized to some regions of the flow, and the rest of the regions are computed using conservative variables. While the Ducros sensor detects shockwaves, the contact discontinuities must be detected by a different approach and done in physical space. In physical space, across the contact discontinuity, only density varies, and the rest of the variables remain continuous; therefore, if a contact sensor coupled with the Ducros sensor can identify the discontinuities, then the regions without discontinuities can be computed using conservative variables thereby avoiding the variable transformation. The following steps describe the novel algorithm that avoids this expensive transformation procedure:
Step 1. Compute the linear left- and right-biased reconstructions of the density either by the fifth-order upwind scheme or the GBR scheme (depending on the linear scheme of choice, either MEG8 or MP).
Step 2. Evaluate the MP limiting criterion (Equation (<ref>)) for the variable density (ρ = 𝐔_1) using the following equations
L_ρ =( 𝐔^L,Linear_i+1/2,1 - 𝐔_i+0,1) ( 𝐔^L,Linear_i+1/2,1 - 𝐔^L,MP_i+1/2,1),
R_ρ = ( 𝐔^R,Linear_i+1/2,1 - 𝐔_i+1,1) ( 𝐔^R,Linear_i+1/2,1 - 𝐔^R,MP_i+1/2,1).
The above equations can be considered as contact discontinuity detectors in physical space. Also evaluate the Ducros sensor, Ω_d, which is computed as follows:
Ω_d = | -p_i-2 + 16 p_i-1 - 30 p_i + 16 p_i+1 - p_i+2|/| +p_i-2 + 16 p_i-1 + 30 p_i + 16 p_i+1 + p_i+2|( ∇·𝐮)^2/( ∇·𝐮)^2 + | ∇×𝐮|^2,
where 𝐮 is the velocity vector, and the derivatives of velocities are computed by the Equation (<ref>) as in <cit.>. We modify Ω_d by using its maximum value in a three-cell neighborhood:
Ω_d = max( Ω_i+m), for m = -1,0,1.
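The following Python sketch evaluates a sensor of this form in one direction on a periodic two-dimensional field; for brevity the velocity derivatives are taken with second-order central differences instead of the eighth-order formula, which is a simplification of the sketch.

import numpy as np

def ducros_sensor_x(p, u, v, dx, dy, eps=1e-30):
    # Pressure-weighted Ducros-type sensor in the x-direction on a periodic 2-D grid.
    num = np.abs(-np.roll(p, 2, 0) + 16 * np.roll(p, 1, 0) - 30 * p
                 + 16 * np.roll(p, -1, 0) - np.roll(p, -2, 0))
    den = np.abs(np.roll(p, 2, 0) + 16 * np.roll(p, 1, 0) + 30 * p
                 + 16 * np.roll(p, -1, 0) + np.roll(p, -2, 0))
    dudx = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)
    dvdy = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2 * dy)
    dvdx = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    dudy = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dy)
    div2 = (dudx + dvdy) ** 2
    curl2 = (dvdx - dudy) ** 2
    omega = (num / (den + eps)) * div2 / (div2 + curl2 + eps)
    # maximum over the immediate neighbours in the sensor direction
    return np.maximum(np.maximum(np.roll(omega, 1, 0), omega), np.roll(omega, -1, 0))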
Step 3. If both the Ducros sensor and the contact discontinuity detectors identify that there are no discontinuities in the region as per the following conditions:
if (Ω_d ≤ 0.01) & (L_ρ≤ 10^-40) & (R_ρ≤ 10^-40).
One can reconstruct the conservative variables according to the following algorithm:
In all directions:
𝐔^L,R_i+1/2,b =
{[ if b = 1,5: 𝐔^C,Linear_i+1/2,b i.e. ρ & ρ E (η =0.5, in Eqn. <ref>) ].
In x-direction:
𝐔^L,R_i+1/2,b =
{[ if b = 2: 𝐔^L,R,Linear_i+1/2,b i.e. ρ u (η =1, in Eqn. <ref>); if b = 3,4: 𝐔^C,Linear_i+1/2,b i.e. ρ v & ρ w (η =0.5, in Eqn. <ref>) ].
In y-direction:
𝐔^L,R_i+1/2,b =
{[ if b = 3: 𝐔^L,R,Linear_i+1/2,b i.e. ρ v (η =1, in Eqn. <ref>); if b = 2,4: 𝐔^C,Linear_i+1/2,b i.e. ρ u & ρ w (η =0.5, in Eqn. <ref>) ].
In z-direction:
𝐔^L,R_i+1/2,b =
{[ if b = 4: 𝐔^L,R,Linear_i+1/2,b i.e. ρ w (η =1, in Eqn. <ref>); if b = 2,3: 𝐔^C,Linear_i+1/2,b i.e. ρ u & ρ v (η =0.5, in Eqn. <ref>) ].
Only linear schemes are required in the above procedure, as there are no discontinuities in the computational region.
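A minimal Python sketch of this Step-3 selection in the x-direction is shown below; where the sensors flag a discontinuity, the full characteristic-variable path of Steps 4-7 would be used instead, and the upwind linear values are kept here only as a placeholder.

import numpy as np

def blend(phiL, phiR, eta):
    # Central/upwind blend of left- and right-biased linear reconstructions.
    return (1.0 - eta) * phiL + eta * phiR, eta * phiL + (1.0 - eta) * phiR

def conservative_states_x(UL, UR, smooth):
    # Step-3 selection in the x-direction for U = (rho, rho*u, rho*v, rho*w, rho*E).
    # UL, UR: (5, N) left/right-biased linear interface values; `smooth` is a boolean
    # mask of interfaces where both sensors report no discontinuity.
    L, R = UL.copy(), UR.copy()
    for b in (0, 2, 3, 4):                  # rho, rho*v, rho*w, rho*E -> centralized
        cL, cR = blend(UL[b], UR[b], 0.5)
        L[b] = np.where(smooth, cL, UL[b])
        R[b] = np.where(smooth, cR, UR[b])
    # b = 1 (rho*u) keeps its upwind-biased linear values in the x-direction;
    # where `smooth` is False the characteristic path of Steps 4-7 applies instead.
    return L, R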
Step 4. If the criterion in Equation (<ref>) is not met, then one can proceed with the characteristic variable reconstruction as in Ref. <cit.>. Compute Roe-averaged variables at the interface to construct the left, 𝐋_n, and right, 𝐑_n, eigenvectors of the normal convective flux Jacobian.
Step 5. For the GBR scheme transform 𝐔_i, 𝐔'_i, and 𝐔”_i to characteristic space by multiplying them by 𝐋_n:
𝐂_i+m,b = 𝐋_n,i+1/2𝐔_i+m,
𝐂'_i+m,b = 𝐋_n,i+1/2𝐔'_i+m,
𝐂”_i+m,b = 𝐋_n,i+1/2𝐔”_i+m.
For the fifth-order upwind scheme, transform only 𝐔_i to characteristic space by multiplying it by 𝐋_n:
𝐂_i+m,b = 𝐋_n,i+1/2𝐔_i+m,
for m = -2,-1,0,1,2,3 and b = 1,2,3,4,5, where m indexes the stencil cells and b the components of the characteristic variable vector in the current implementation.
Step 6. Using Equations (<ref>), obtain the unlimited reconstruction to cell interfaces in characteristic space via:
𝐂^L, GBR, Linear_i+1/2,b = 𝐂_i+0,b + Δ x/2𝐂'_i+0,b + Δ x^2/12𝐂”_i+0,b,
𝐂^R, GBR, Linear_i+1/2,b = 𝐂_i+1,b - Δ x/2𝐂'_i+1,b + Δ x^2/12𝐂”_i+1,b.
For the fifth-order reconstruction schemes Equations (<ref>) are used for the interface values and are as follows:
𝐂_i+1/2,b^L5, Linear =1/30𝐂_i-2,b-13/60𝐂_i-1,b+47/60𝐂_i+0,b+9/20𝐂_i+1,b-1/20𝐂_i+2,b,
𝐂_i+1/2,b^R5, Linear =1/30𝐂_i+3,b-13/60𝐂_i+2,b+47/60𝐂_i+1,b+9/20𝐂_i+0,b-1/20𝐂_i-1,b.
The left-biased reconstruction is then treated by the following algorithm:
𝐂^L_i+1/2,b =
{[ if b = 1,5: 𝐂^L,Non-Linear_i+1/2,b if Ω_d > 0.01,
𝐂^L,Linear_i+1/2,b otherwise,; if b = 2: 𝐂^L,Non-Linear_i+1/2,b if ( 𝐂^L,Linear_i+1/2 - 𝐂_i ) ( 𝐂^L,Linear_i+1/2 - 𝐂^L,MP_i+1/2) ≥ 10^-40,
𝐂^C,Linear_i+1/2,b otherwise,; if b = 3,4: 𝐂^L,Non-Linear_i+1/2,b if Ω_d > 0.01,
𝐂^C,Linear_i+1/2,b otherwise. ].
A similar procedure is carried out for the right-biased reconstruction.
Step 7. After obtaining 𝐂^L,R_i+1/2,b, the variables are transformed back to physical fields:
𝐔^L,R_i+1/2 = 𝐑_n,i+1/2𝐂^L,R_i+1/2.
The nonlinear scheme with the above procedure along with the GBR schemes (Equations (<ref>)) as linear schemes is denoted as MEG8-CC (CC for conservative-characteristic variables). Similarly, if fifth-order schemes (Equations (<ref>)) are used as linear schemes, then the nonlinear scheme is denoted as MP6-CC in this paper. The approach is generic, so even higher-order schemes like the seventh-order upwind schemes or implicit gradient schemes as in <cit.> can also be used. For simplicity, only the MEG8-CC and MP6-CC schemes are presented in this paper. Steps 4-7 are the same as that of the wave-appropriate centralization approach presented in <cit.>, and such scheme is denoted as MEG8-C as in <cit.>. The novel approach here is steps 1-3, where the conservative variables are reconstructed in a unique central-upwind approach if the shock and contact discontinuity sensors are satisfied.
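For a single interface, Steps 4-7 for the fifth-order variant can be sketched in Python as follows; the MP limiter is omitted and the eigenvector matrices are assumed to be supplied from the Roe averages.

import numpy as np

def fifth_order_pair(C):
    # Linear 5th-order left/right-biased values from the stencil C[0..5] = C_{i-2..i+3}.
    cL = C[0] / 30 - 13 * C[1] / 60 + 47 * C[2] / 60 + 9 * C[3] / 20 - C[4] / 20
    cR = C[5] / 30 - 13 * C[4] / 60 + 47 * C[3] / 60 + 9 * C[2] / 20 - C[1] / 20
    return cL, cR

def characteristic_states(U_stencil, Ln, Rn, reconstruct=fifth_order_pair):
    # Steps 4-7 for one interface i+1/2: transform to characteristic space,
    # reconstruct (limiting omitted here), and transform back.
    # U_stencil: (6, 5) conservative states at i-2..i+3; Ln, Rn: (5, 5) eigenvector matrices.
    C_stencil = U_stencil @ Ln.T        # C_{i+m} = Ln U_{i+m}
    C_L, C_R = reconstruct(C_stencil)
    return Rn @ C_L, Rn @ C_R           # back to physical space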
It is essential to note that a complete central scheme is not used even though the criteria in Equation (<ref>) are satisfied. In Equations (<ref>), (<ref>), (<ref>), and (<ref>) not all variables are centralized. While ρ and ρ E are centralized in all directions, the variables ρ u, ρ v and ρ w are upwind or centralized depending on the direction. The characteristic variable centralization explained in steps 4-7 and the conservative variable reconstruction in steps 1-3 above have some similar characteristics and will be explained below. A complete central scheme would still give spurious vortices for the double shear layer test case, and the proposed algorithm would avoid spurious vortices.
Relation between centralization in characteristic and conservative variable space:
In <cit.>, it was found that using a centralized reconstruction for all but the characteristic acoustic waves, i.e. all the linearly degenerate waves (the entropy wave and the shear waves), was a robust and superior solution. The similarity between the central-upwind scheme in conservative variable space and the central-upwind scheme in characteristic space is explained below. In order to obtain characteristic variables, 𝐂, the conservative variables, 𝐔, are multiplied by the left eigenvectors, 𝐋_n. For simplicity we consider the two-dimensional scenario:
𝐂_b = 𝐋_n𝐔_b,
( [ 𝐂_1; 𝐂_2; 𝐂_3; 𝐂_4; ])
= ([ 1/2(γ-1/2 c^2 q^2+q_n/c) -1/2(γ-1/c^2 u+n_x/c) -1/2(γ-1/c^2 v+n_y/c) γ-1/2 c^2; ; 1-γ-1/2 c^2 q^2 γ-1/c^2 u γ-1/c^2 v -γ-1/c^2; ; -q_ℓ ℓ_x ℓ_y 0; ; 1/2(γ-1/2 c^2 q^2-q_n/c) -1/2(γ-1/c^2 u-n_x/c) -1/2(γ-1/c^2 v-n_y/c) γ-1/2 c^2 ])
( [ 𝐔_1; 𝐔_2; 𝐔_3; 𝐔_4 ]),
where n = [n_x n_y]^t and [l_x l_y]^t is a tangent vector (perpendicular to n) such as [l_x l_y]^t = [-n_y n_x]^t. By taking n = [1, 0]^t and [0, 1]^t we obtain the corresponding eigenvectors in x and y directions (in the x-direction, n_x = 1, whereas n_y=0. The y-direction is analogous.). Furthermore, q^2 = u^2 + v^2, q_l = u l_x + v l_y, and q_n = u n_x + v n_y. In the matrix (<ref>), the 𝐂_1 and 𝐂_4 are acoustic waves, 𝐂_2 is the entropy wave, and the 𝐂_3 is the shear wave. The characteristic variable 𝐂_3 in the characteristic space is as follows:
𝐂_3 = -q_l 𝐔_1 + l_x𝐔_2 + l_y𝐔_3+0𝐔_4.
In x- direction, [l_x l_y]^t = [-n_y n_x]^t= [0 1] which means q_l = u l_x + v l_y = v. Therefore,
𝐂_3 = -v 𝐔_1 + 1𝐔_3 = -ρ v + ρ v ≈ 0 (a small value in computations).
In y- direction, [l_x l_y]^t = [-n_y n_x]^t= [-1 0] which means q_l = u l_x + v l_y = -u. Therefore,
𝐂_3 = u 𝐔_1 + (-1)𝐔_2 = ρ u - ρ u ≈ 0 (a small value in computations).
The critical aspect is that in x- direction ρ v is being reconstructed in the characteristic space and centralized if the Ducros sensor criterion is satisfied. Likewise, in the y- direction, ρ u is reconstructed in the characteristic space and centralized if the Ducros sensor criterion is satisfied. Similarly, the variable ρ v is centralized in conservative variable space in x- direction (Equations (<ref>)), and ρ u is centralized in conservative variable space in y- direction (Equations (<ref>)), respectively. The analysis straightforwardly extends to the three-dimensional scenario.
Similarly, the characteristic variable 𝐂_2 represents the contact discontinuity in characteristic space and across a contact discontinuity density changes in physical space. The corresponding variables are centralized in their respective computational spaces if there is no contact discontinuity in the computation region. It explains the relation between the centralization in characteristic and conservative variable space.
§ RESULTS AND DISCUSSION
This section tests the proposed numerical schemes for various benchmark test cases. For time integration, we use the explicit third-order TVD Runge-Kutta method <cit.>. Time integration is performed with a CFL = 0.4 for all the problems. The advantages of the proposed algorithm are compared with the TENO5 scheme <cit.>. All the computations are carried out on the authors' Mac mini with an M1 processor.
Double periodic Shear layer
In the first test case, we demonstrate that the proposed algorithm will prevent spurious vortices for the double periodic shear layer test case <cit.>. The test involves two initially parallel shear layers that develop into two significant vortices at t = 1. The non-dimensional parameters for this test case are presented in Table <ref>.
The initial conditions were:
p = 1/γMa^2, ρ=1, u=
tanh[ θ (y-0.25) ], if (y ≤ 0.5),
tanh[ θ (0.75-y) ], if (y > 0.5),
v = 0.05 sin[ 2 π(x+0.25) ] for θ = 80, and v = 0.05 sin[ 2 π(x) ] for θ = 120,
The parameter θ defines the initial shear layer width. Two configurations are considered with θ =80 and θ =120, as shown in the above initial conditions. The reference solutions for θ =80 and θ =120, shown in Fig. <ref>, are computed with the MEG8-C scheme on grid sizes of 800 × 800 and 512 × 512, respectively. For this test case, if the grid is under-resolved, unphysical braid vortices and oscillations can occur on the shear layers according to the literature <cit.>. In the well-resolved reference solution, no braid vortices or oscillations form on the shear layers.
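For reference, a Python sketch of the θ = 80 initial condition on a uniform periodic grid is given below; the Mach number value used here is an assumption, since the non-dimensional parameters are listed in the table rather than in the text.

import numpy as np

def shear_layer_ic(n=160, theta=80.0, Ma=0.1, gamma=1.4):
    # theta = 80 variant of the double periodic shear layer on [0, 1] x [0, 1].
    x = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(x, x, indexing="ij")
    p = np.full_like(X, 1.0 / (gamma * Ma**2))
    rho = np.ones_like(X)
    u = np.where(Y <= 0.5, np.tanh(theta * (Y - 0.25)), np.tanh(theta * (0.75 - Y)))
    v = 0.05 * np.sin(2.0 * np.pi * (X + 0.25))
    return rho, u, v, p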
Fig. <ref> displays the z-vorticity computed for the MEG8-CC, MEG8-C, MEG8-central, MP6-CC, and TENO5 schemes on a grid size of 160 × 160 (except for Fig. <ref>). From Fig. <ref>, it can be seen that MEG8-CC best approximates the reference solution, as there are no unphysical braid vortices and oscillations. MEG8-C, Fig. <ref>, is also relatively free of spurious vortices. MEG8-CC and MEG8-C use the unique adaptive central-upwind algorithm proposed here, i.e. not all variables are reconstructed using the central scheme. Fig. <ref> shows the results obtained by the fully central scheme for all the variables when both the Ducros and contact discontinuity sensor criteria are satisfied (Equation <ref>). The fully central scheme produced unphysical vortical structures, unlike the adaptive central-upwind scheme (Equations (<ref>), (<ref>), (<ref>), and (<ref>) in conservative variable computational space). Likewise, the MP6-CC scheme, whose vorticity contours are shown in Fig. <ref>, also has no spurious vortices. The stencils of the MEG8-CC and MP6-CC schemes are significantly different, yet both schemes produced results free of spurious vortices.
Figs <ref> and <ref> show the vorticity contours computed by the TENO5 scheme on grid sizes of 160 × 160 and 320 × 320, respectively. Even on a grid size of 320 × 320, the TENO5 scheme produced spurious results in the primary vortex. It is important to note that the MP6-CC and TENO5 schemes have the same stencil for reconstructing cell-interface values. It indicates that the proposed algorithm produces results free of oscillations and spurious vortices for this test case. Feng et al. <cit.> simulated this test case on a grid size of 320 × 320 using TENO5 and TENO8. Even on that fine grid resolution, the TENO8 scheme produced spurious vortices along the shear layer (readers can refer to Fig. 13 of <cit.>). These findings suggest that the proposed algorithm with a unique central-upwind scheme can prevent spurious vortices for this test case, which has been the subject of many papers in the literature.
Table <ref> shows the computational cost of the proposed algorithms, MEG8-CC and MP6-CC, along with MEG8-C and TENO5. MP6-CC and MEG8-CC are similar in computational cost, whereas MEG8-C is twice as expensive because it reconstructs characteristic variables everywhere. The TENO5 scheme requires four times more grid points and is 13 times more costly than the proposed schemes.
This test case does not involve shocks or contact discontinuities; therefore, neither the Ducros sensor nor the contact sensor was necessary, and the linear central-upwind combination given by Equations (<ref>), (<ref>), (<ref>), and (<ref>) alone was sufficient. A central reconstruction scheme for all the variables except ρ u in the x- direction and ρ v in the y- direction will suffice. In this regard, a simulation is also carried out using the third-order upwind and fourth-order central scheme combination to show the uniqueness of the proposed algorithm. The third-order upwind scheme for the left and right interfaces is as follows:
ϕ_i+1 / 2^L3, Linear = 1/6(-ϕ_i-1 + 5ϕ_i + 2ϕ_i+1),
ϕ_i+1 / 2^R3, Linear = 1/6(2ϕ_i + 5ϕ_i+1 -ϕ_i+2),
and the fourth-order central scheme is:
ϕ^C4,Linear_i+1/2=1/2(ϕ_i+1 / 2^L3, Linear + ϕ_i+1 / 2^R3, Linear)=1/12(- ϕ_i-1+7 ϕ_i+7 ϕ_i+1- ϕ_i+2).
Figs <ref> and <ref> show the vorticity contours computed by the linear third-fourth order scheme and linear fifth-sixth-order scheme using the proposed central-upwind algorithm on grid size of 160 × 160, respectively. Even the linear schemes are free of spurious vortices for this test case using the proposed algorithm. The only difference between the third-fourth-order scheme and the fifth-sixth-order scheme is the thickness of the shear layer. The third-fourth-order scheme is thicker than the fifth-sixth-order scheme, which can be attributed to the difference in spectral properties and order of accuracy. With the proposed algorithm, even a third-fourth-order linear scheme prevented spurious vortices, unlike the TENO5 scheme. Furthermore, for the simulations using the linear schemes (it is applicable also for the MEG8-CC and MP6-CC schemes), the fourth-order alpha-damping scheme of Nishikawa <cit.> was used instead of the compact viscous scheme proposed in <cit.> (described in Section <ref>) and yet there are no spurious vortices. It indicates that the viscous flux discretization had little impact on this test case as long as the discretization has good spectral properties <cit.>.
Finally, Fig. <ref> displays the z-vorticity computed for MEG8-CC, linear third-fourth scheme and linear fifth-sixth-order scheme on a grid size of 96 × 96 for θ=80. It can be observed that all the schemes are free of spurious vortices on such a coarse grid. These results show the advantages and uniqueness of the proposed algorithm. The algorithm also produced significantly better results than the kinetic energy preserving schemes considered in <cit.> on much coarser grids.
Finally, simulation is also carried out for inviscid, Re=∞, scenario with θ=80 for v = 0.05 sin[ 2 π(x) ] in the initial conditions as in <cit.>. Fig. <ref> displays the z-vorticity computed for MEG8-CC, MP6-CC and the linear fifth-sixth-order scheme on a grid size of 320 × 320. In Ref. <cit.>, authors have computed the simulation on a grid size of 512 × 512 for these initial conditions. With the present algorithm, even the inviscid case is free of spurious vortices on a grid size that is two and a half times smaller than <cit.>. The fifth-sixth-order scheme has a slightly thicker shear layer width than MEG8-CC. The proposed algorithm performed better than the optimized TENO8 scheme considered in <cit.> (see Fig. 13 of the concerned reference). All the schemes, both convective and viscous flux discretization, are only second-order accurate <cit.>, as fluxes are not reconstructed directly, making these results even more significant.
Possible reasons for lack of spurious vortices:
* Contact discontinuities are characterized by the condition that no mass flows through the discontinuity (the normal velocities are zero) and that the pressure is continuous. This results in a jump in density and tangential velocity (page 137 of <cit.>). The tangential velocities are v in the x- direction and u in the y- direction. In the absence of a contact discontinuity, the tangential velocities will be continuous. However, while tangential velocities are discontinuous for inviscid flows, they are not discontinuous for viscous flow simulations. The driving force behind interfacial instabilities is shear. At the interface between two different fluids, the conservation of momentum requires that the tangential viscous stresses be equal, leading to the continuity of the tangential velocity. The fluids would slip in the actual inviscid limit, resulting in a discontinuous tangential velocity. However, in the presence of viscosity (even artificial viscosity due to upwind reconstructions), the tangential velocity remains continuous across an interface according to Batchelor <cit.>. In the present algorithm, the tangential velocities are deemed continuous if the MP criterion for density and the Ducros sensor for shocks are satisfied, and they are therefore reconstructed using a central scheme in conservative variable space - Equations (<ref>), (<ref>), and (<ref>) - or wave-appropriate centralization in characteristic space.
* The shear layer test case does not involve shocks, and the density is almost constant and therefore does not require any shock or contact discontinuity sensor and can be simulated using linear schemes itself, i.e. directly using the Equations (<ref>), (<ref>), (<ref>), and (<ref>). The flow is viscous, and the density is almost constant; therefore, the tangential velocities are also continuous, which is what the proposed algorithm is incidentally doing: centralizing the ρv in x- direction and ρu in y- direction.
* Potential improvement through optimization of the numerical schemes: In <cit.>, the authors optimized the parameter η in the following (similar) equation:
ϕ^C_i+1/2 = ( 1 - η) ϕ^L, Linear_i+1/2 + ηϕ^R, Linear_i+1/2
Once the optimum value is found, the parameter η is held constant for all the variables. Despite the optimization procedure, the optimized schemes, including the Ducros sensor, lead to spurious vortices even on much larger grid sizes than those used in the current paper (see Figure 13 of Ref. <cit.> and the corresponding discussion). In this paper, the value of η is either 0.5 or 1 and is different for different variables in various directions. If one has to optimize, then η has to be an independent value for each variable in each direction. It implies that instead of optimizing one parameter η, there will be η_1 to η_8, i.e. eight parameters to optimize in a two-dimensional scenario (four variables in two directions).
* It is only a partial explanation because there is no clear understanding of why some variables are reconstructed using upwind reconstruction for this test case (it is necessary for test cases with discontinuities as in <cit.>). Nevertheless, there are no spurious vortices even with the linear schemes using the current central-upwind algorithm, which is a significant outcome.
The shear layer test case does not have shock waves or contact discontinuities. The algorithm is tested for such cases in the following test cases, and its computational benefits and robustness are shown in the subsequent test cases.
Kelvin Helmholtz instability
First, the Kelvin-Helmholtz instability case is considered. This instability arises due to an unstable velocity gradient at the interface between the two fluids, leading to vortices and mixing patterns. The test case has the following initial conditions over a periodic domain of [0, 1] × [0, 1],
[ p=2.5, ρ(x, y)={[ 2, If 0.25<y ≤ 0.75,; 1, else, ].; u(x, y)={[ 0.5, If 0.25<y ≤ 0.75,; -0.5, else , ].; v(x, y)=0.1 sin (4 π x){exp[-(y-0.75)^2/2 σ^2]+exp[-(y-0.25)^2/2 σ^2]}, where σ = 0.05/√(2). ]
The computational domain is discretized with 512 cells in each direction, and the final time is t = 0.8. The numerical solutions are computed using various schemes and are depicted in Fig. <ref>. We can observe that all the schemes capture complex structures and small-scale vortices. Both MP6-CC and MEG8-CC are free of oscillations despite being reconstructed with the adaptive conservative-characteristic variable reconstruction. Despite sharing the same stencil, MP6-CC shows more vortical structures than the TENO5 scheme.
Table <ref> shows the schemes' computational times. MP6-CC is 15% faster than the TENO5 scheme. In terms of computational efficiency, the MEG8-CC scheme takes the same time as the TENO5 scheme while giving improved results. MEG8-CC is also 36% faster than the MEG8-C scheme presented in <cit.>, indicating the proposed approach's computational benefit.
Rayleigh-Taylor instability
The Rayleigh-Taylor instability is a hydrodynamic phenomenon that occurs when a dense fluid is above a less dense fluid in a gravitational field. This instability arises from an unstable density gradient, causing the denser fluid to accelerate downward and the lighter fluid to accelerate upward. As a result, complex and turbulent mixing patterns emerge between the two fluids. According to <cit.>, the initial conditions for the Rayleigh-Taylor instability are as follows:
(ρ,u,v,p)=
(2.0, 0, -0.025√(5p/3ρ)cos(8π x), 2y+1.0), 0≤ y< 0.5,
(1.0, 0, -0.025√(5p/3ρ)cos(8π x), y+1.5), 0.5≤ y≤ 1,
In this test case, simulations were conducted using a uniform mesh with a 128 × 512 resolution until t=1.95. The computational domain for the simulations was [0, 1/4]× [0,1], and the adiabatic constant γ was set to 5/3. The flow conditions were specified as ρ=1, p=2.5, and u=v=0 on the top boundary, and ρ=2, p=1.0, and u=v=0 on the bottom boundary. Source terms S=(0,0,ρ, ρ v) were added to the Euler equations. The density distribution of the Rayleigh-Taylor instability problem is shown in Fig. <ref>. The proposed approaches, MP6-CC and MEG8-CC, are free of oscillations, as the MP criterion with density as the variable detects the contact discontinuity and prevents oscillations. As shown in Table <ref>, MP6-CC is again significantly faster than the TENO5 scheme while giving a thinner material interface and more vortical structures. MEG8-CC is 35% faster than the MEG8-C approach, as it avoids the costly characteristic variable transformation, and it produces slightly more vortical structures.
Double Mach Reflection
In this test example, we examine the proposed algorithm for the Double Mach Reflection (DMR) case originally presented by Woodward and Colella <cit.>. This scenario involves a Mach 10 unsteady planar shock wave interacting with a 30-degree inclined surface, forming intricate flow patterns. The computational domain for this case spans [x,y] = [0, 3] × [0, 1], and the simulations are conducted on a uniform grid of 768 × 256. The specified initial conditions are as follows:
(ρ, u, v, p)={[ (1.4,0,0,1), if y<1.732(x-0.1667); (8,7.145,-4.125,116.8333), otherwise ].
For this test case, we utilized the shock-stable cLLF Riemann solver from the reference <cit.> to avoid the “carbuncle” phenomenon. The simulations were conducted up to a final time of t=0.3. Reflecting wall conditions were applied to the bottom boundary for x > 0.1667, and post-shock conditions for 0.0 ≤ x ≤ 0.1667. The top boundary was set using the exact solution of the time-dependent oblique shock. Density contours from all the considered schemes are depicted in Fig. <ref>. Among the results, the TENO5 and MP6-CC schemes exhibited the highest dissipation and failed to resolve small-scale vortical structures. Due to their superior dispersion properties, the MEG schemes captured the shocks sharply compared to the TENO and MP6-CC schemes. Furthermore, within the MEG family of schemes, the MEG8-CC outperformed the MEG8-C scheme in capturing vortical structures and near-wall jets. Similar to the previous Example <ref>, it can be concluded that the MP detection criterion with the density as the variable and the Ducros sensor effectively identify discontinuities, aiding in the reconstruction of conservative variables.
Once again, from Table <ref>, the computational cost of the MEG8-CC scheme is lower than that of the MEG8-C and TENO5 schemes. MP6-CC is the most efficient of all the schemes, and MEG8-CC is a close second.
Riemann Problem
This example considers the two-dimensional Riemann problem of configuration 3 <cit.> for testing the proposed schemes. The initial conditions for the configuration considered are as follows:
(ρ, u, v, p)={[ (1.5,0,0,1.5), if x>0.5, y>0.5; (0.5323,1.206,0,0.3), if x<0.5, y>0.5; (0.138,1.206,1.206,0.029), if x<0.5, y<0.5; (0.5323,0,1.206,0.3), if x>0.5, y<0.5 ].
The initial conditions described above result in four shocks at the interfaces of the four quadrants, and the Kelvin-Helmholtz instability along the slip lines gives rise to small-scale structures, often used as a benchmark for evaluating the numerical dissipation of a particular scheme. In this test case, simulations are conducted within a computational domain of [x,y] = [0,1]× [0,1], using a uniform grid of 1024 × 1024, until a final time, t=0.3.
The density contours obtained by the considered schemes are presented in Figs. <ref>. The small-scale structures produced by the TENO scheme, Fig. <ref>, are dissipative compared to that of MP6-CC, Fig. <ref>, and they both share the same stencil for reconstruction. These results indicate that a properly designed scheme with limiters can also outperform the ENO-type schemes (considered standard and most popular schemes in the literature for compressible flow simulations). Furthermore, the small-scale vortices obtained by the MEG8-CC scheme are significantly better than the MEG8-C and the other schemes.
It is important to note that this test case does not involve any contact discontinuities, and there is little difference between the MEG8-C and MEG8-CC schemes regarding computational cost, shown in Table <ref>. MP6-CC is the cheapest of all the schemes, about 36% faster than the TENO5.
Shock-Bubble interaction
In the sixth example, we focus on the interaction between a shock wave and a bubble at Mach 6, as described in Hu (2010). The shock wave and the helium bubble are assumed to behave as ideal gases. The helium bubble is positioned at coordinates x = 0.25, y = 0 within a domain spanning [0, 1] × [-0.5, 0.5]. The initial conditions for the problem are as follows:
(ρ,u,v,p)=
(1.0, -3, 0, 1), pre-shocked air,
(216/41, (1645/286)-3, 0, 251/6) post-shocked air,
(0.138, -3, 0, 1) helium bubble.
The initial radius of the bubble is 0.15, while the initial position of the shock front is at x = 0.05. Inflow and outflow conditions are enforced at the left and right boundaries. Neumann conditions with zero gradients for all quantities are applied at the remaining boundaries. The simulation resolution is set to 800 × 800. Simulations are also carried out with shock-stable cLLF Riemann solver <cit.> to prevent the “carbuncle” phenomenon. Once again, from Fig. <ref>, one can observe that the MEG8-CC scheme resolves more small-scale vortices than TENO5 and MP6-CC schemes. Results obtained by the MEG8-C and MEG8-CC are very similar, but the MEG8-CC scheme is 12% faster than the MEG8-C scheme.
Inviscid Taylor-Green Vortex
The Taylor-Green vortex problem is a common benchmark for evaluating the performance of the adaptive central-upwind schemes. It is a three-dimensional inviscid test case with the following initial conditions for the simulation:
[ ρ; u; v; w; p; ]
=
[ 1; sinxcosycosz; -cosxsinycosz; 0; 100 + ( cos(2z) + 2 ) ( cos(2x) + cos(2y)) - 2/16 ].
The simulations are conducted until time t=10 on a grid size of 64^3, with a specific heat ratio of γ=5/3 on a periodic domain of size x,y,z ∈ [0,2π). The flow is effectively incompressible since the mean pressure is large. The study aims to evaluate the ability of the proposed schemes to preserve kinetic energy and to capture the growth of enstrophy over time. Enstrophy is the integral of the square of the vorticity and is used as a measure of a scheme's ability to preserve vortical structures. The fourth-order compact scheme, Equation (<ref>), computes the velocity derivatives required for the enstrophy computation for all the schemes, and the HLLC Riemann solver is used for this test case.
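The two diagnostics can be sketched in Python as follows; second-order central differences and one common normalization of the integrals are used, both of which are simplifications relative to the compact-scheme evaluation described above.

import numpy as np

def tgv_diagnostics(rho, u, v, w, dx):
    # Volume-averaged kinetic energy and enstrophy on a periodic box,
    # with vorticity from 2nd-order central differences.
    def ddx(f, axis):
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)
    ke = 0.5 * np.mean(rho * (u**2 + v**2 + w**2))
    wx = ddx(w, 1) - ddx(v, 2)
    wy = ddx(u, 2) - ddx(w, 0)
    wz = ddx(v, 0) - ddx(u, 1)
    enstrophy = 0.5 * np.mean(rho * (wx**2 + wy**2 + wz**2))
    return ke, enstrophy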
The kinetic energy evolution of the numerical schemes MP6-CC, MP6-C, TENO5 and the linear fifth-order upwind scheme (U5) is shown in Fig. <ref>, indicating that the MP6-CC scheme better preserves kinetic energy than the TENO5 scheme and even the linear U5 scheme. Furthermore, the MEG8-C and MEG8-CC schemes preserve the kinetic energy significantly better than all the other schemes, as shown in Fig. <ref>. There is little to no difference between the MEG8-C and MEG8-CC schemes, as this test case has no contact discontinuity or shockwaves. Figures <ref> and <ref> depict the computed enstrophy plots of the MP6-CC, MP6-C, TENO5, U5, MEG8-C and MEG8-CC schemes. The schemes with the proposed algorithm, MP6-CC and MEG8-CC, perform better than the TENO5 and U5 schemes. There is a small difference between the MEG8-C and MEG8-CC schemes for enstrophy values at late times, with MEG8-CC marginally better than MEG8-C. From Table <ref>, one can infer that the MP6-CC scheme is significantly cheaper than the TENO5 scheme and better preserves kinetic energy. While the computational costs of the TENO5 and MEG8-CC schemes are almost the same, the MEG8-CC scheme outperforms the TENO5 scheme significantly in terms of both kinetic energy preservation and enstrophy values.
§ CONCLUSIONS
This work proposed a general adaptive central-upwind scheme for the simulation of compressible flows with discontinuities in the flow field. The approach significantly improves over the centralized gradient-based reconstruction proposed by Hoffmann, Chamarthi and Frankel <cit.>, in which the reconstruction is carried out for characteristic variables, by reducing the computational cost. In the current approach, conservative variables are reconstructed if no discontinuities are detected by the Ducros shock sensor and the MP criterion-based contact discontinuity sensor; otherwise, the reconstruction is the same as that of <cit.>. While it is well known in the literature that reconstruction of the characteristic variables will result in cleaner and oscillation-free results, the conversion of the conservative variables to characteristic variables incurs significant computational expense. The current approach overcomes this by reconstructing conservative variables in the regions with no discontinuities and characteristic variables in the regions with discontinuities, significantly reducing the computational cost and improving the results in some instances. The generalized approach also works with a standard fifth/sixth-order reconstruction scheme, MP6-CC. It has been shown that the MP6-CC scheme is 20-30% cheaper than the TENO5 scheme (which has the same stencil) and yet produces significantly superior results (judging by the vortical structures produced, which is the norm in the literature for assessing a low-dissipation scheme). The computational expense of the MEG8-CC scheme is the same as that of the TENO5 scheme, yet it produces superior results despite the additional computations for the derivatives and the Ducros sensor.
The proposed approach did not produce spurious vortices for the periodic double shear layer test case with the proposed algorithm for all the schemes on coarse grids, unlike that of the TENO5 and the other existing schemes in the literature. It has been shown that the current algorithm (a combination of upwind and central schemes for appropriate variables in different directions ) can also prevent spurious vortices even with linear schemes (even the third/fourth scheme has prevented spurious vortices).
§ INSPIRATION AND ACKNOWLEDGEMENTS:
The work is inspired by the paper written by Van Leer <cit.>, where the author wrote - “Presumably, the advent of high-performance computing and promise of massively parallel computing has quelled any drive toward a systematic modernization of CFD algorithms. I personally believe, though, that the next round of gains in CFD will not come from hardware improvement but, once again, from method development”, coupled with the lack of resources. A.S. thanks his wife and son for checking the code and the manuscript.
Predicting Miscibility in Binary Compounds: A Machine Learning and Genetic Algorithm Study

Chiwen Feng, Yanwei Liang, Jiaying Sun, Renhai Wang, Huaijun Sun, Huafeng Dong

arXiv:2409.02633v1 [cond-mat.mtrl-sci, physics.comp-ph], 4 September 2024
§ ABSTRACT
The combination of data science and materials informatics has significantly propelled the advancement of multi-component compound synthesis research. This study employs atomic-level data to predict miscibility in binary compounds using machine learning, demonstrating the feasibility of such predictions. We have integrated experimental data from the Materials Project (MP) database and the Inorganic Crystal Structure Database (ICSD), covering 2,346 binary systems. We trained a random forest classification model on the constructed dataset to analyze the key factors affecting the miscibility of binary systems and their significance, and to predict binary systems with high synthetic potential. By employing advanced genetic algorithms on the Co-Eu system, we discovered three novel thermodynamically stable phases, CoEu_8, Co_3Eu_2, and CoEu. This research offers valuable theoretical insights to guide experimental synthesis endeavors in binary and complex material systems.
§ INTRODUCTION
The field of materials science has consistently been a focal point for scientific research and innovation, leading to numerous in-depth and extensive R&D activities. The evolution of materials science research has progressed through four major paradigms: experimental science, theoretical laws, computational modeling, and data-driven approaches (Figure <ref>) <cit.>. The initial paradigm relied on experimental trial and error, a process characterized by long development cycles, potentially spanning 10-20 years and high experimental costs.
Only a few centuries ago, materials science began to transition from trial-and-error methods to a more systematic and theoretical approach, spurred by the development of physical theoretical models and general laws such as thermodynamic constants. This shift marked a new focus on “material design”. However, as calculations became more complex, theoretical computing in its second paradigm encountered significant bottlenecks in simulating intricate phenomena. It was not until the advancement of computers over the past decade that the third paradigm emerged, facilitating virtual laboratory simulations of real-world phenomena and enabling the “synthesis” of new materials.
First-principles calculations based on Density Functional Theory (DFT) <cit.>, Local Density Approximation (LDA), and Generalized Gradient Approximation (GGA) <cit.> have been widely used to study the properties of new binary materials <cit.>. Meanwhile, significant advancements in ground-state structure prediction tools have completely transformed the field of materials science, making it possible to predict new material structures before experimental synthesis. For example, methods such as Genetic Algorithms/Evolutionary Algorithms <cit.>, Particle Swarm Optimization <cit.>, Random Sampling <cit.>, Minima Hopping <cit.>, Simulated Annealing <cit.>, Topological Modeling Method <cit.>, and Firefly Algorithm <cit.> have achieved notable success. Among them, is the Universal Structure Predictor: Evolutionary Xtallography (USPEX), known for its powerful search capabilities and high success rate <cit.>. It performs well in finding low-energy structures of variable composition compounds, especially in the search for binary compounds <cit.>.
However, given the vast unexplored space and the significant computational costs involved, traditional methods of exploration appear impractical. The rapid development of data mining and artificial intelligence technologies has propelled materials science into its fourth paradigm. Machine learning algorithms, known for their robust data processing capabilities and flexibility, can efficiently analyze large datasets without requiring preset hypotheses. Through machine learning, we can perform virtual high-throughput screening of extensive molecular spaces, rapidly identifying structures with potential<cit.>. This methodology not only expedites the discovery of novel materials but also significantly reduces both computational and experimental costs <cit.>.
The proposal of the Materials Genome Initiative <cit.> has significantly promoted the development of big data in materials science <cit.>. A wealth of data has been accumulated through both experimental findings and computational simulations. The ICSD is currently the largest database for experimental identification of inorganic crystal structures, containing data dating back to 1913 and encompassing 291,382 fully evaluated and published crystal structure entries derived primarily from experimental results <cit.>. The MP database includes 154,718 entries, providing researchers with comprehensive DFT data <cit.>. The Open Quantum Materials Database (OQMD) is a high-throughput database that currently features nearly 300,000 compounds' DFT total energy calculations and common crystal structure information from ICSD <cit.>. These databases lay a solid foundation for the application of machine learning in materials science <cit.>.
Currently, there is no unified theoretical framework or clear concept to describe the key factors that determine whether two different elements can be miscible to form binary compounds. Therefore, we employed the machine learning method (random forest classifier) to train a model on the constructed dataset. The dataset consists of binary system data from experiments in MP database and ICSD, as well as data on immiscible binary systems. This method leverages the powerful features and flexibility of machine learning to address the classification problem of binary system miscibility. It identifies key factors influencing the miscibility of two different elements in forming binary compounds and analyzes the importance of these factors. Notably, this model predicts potentially miscible binary systems that have not yet been recorded in experimental databases. Using USPEX to quickly search for variable component structures of potentially miscible binary systems, and combining the Vienna Ab-initio Simulation Package (VASP) <cit.> with high-precision optimization of the stable structures generated by USPEX, we successfully predicted the ground state structures of three different binary compounds. These structures demonstrate superior thermodynamic stability compared to theoretical data for the same systems in the OQMD, suggesting a greater advantage for experimental synthesis. Our theoretical research provides valuable guidance for discovering and synthesizing new compounds in similar systems.
§ COMPUTATIONAL METHOD
§.§ Machine Learning Model Building
§.§.§ Dataset
The MP database does not include unstable elements like Po, At, Rn, Fr, Ra, Rf, Db, Sg, Bh, Hs, Mt, Ds, Rg, Cn, Nh, and Og, among others, which are either naturally radioactive or synthetically produced. Radioactive elements such as Ac, Th, Pa, U, Np, Pu, Tc, and Pm undergo spontaneous decay, leading to instability in the compounds they form. Noble gases —He, Ne, Ar, Kr, and Xe— have complete electron shells, making them chemically inert under normal conditions due to their high ionization energy. Halogens (F, Cl, Br, I) and elements like O, N, and H, known for their high electronegativity, multiple oxidation states, and reactivity, readily form a vast array of compounds.
Excluding these elements, our classification model utilizes a dataset of 2,346 possible binary system combinations. We integrate experimental data from both the MP database and the ICSD, encompassing 20,295 binary compounds across 1,420 binary systems. Binary systems that are experimentally miscible according to these databases are used as positive samples, while the 926 non-miscible systems serve as negative samples. The balance of positive and negative samples in the dataset enhances the model's performance and generalization capability.
§.§.§ Determinants
Drawing from experience, we explored several factors that could influence the synthesis of binary compounds and developed simple features, such as sums and differences, to represent their interrelationships, aiming to pinpoint key factors. Initially, we examined various parameters from the periodic table, including electronegativity, atomic radius, melting point, flash point, and molar volume. Notably, we observed significant differences in electronegativity between miscible and non-miscible binary systems, indicating that electronegativity plays a key role in whether two elements are miscible and form binary compounds. This observation led us to hypothesize that electronegativity is a pivotal factor in compound formation. Electronegativity quantifies an atom's capacity to attract electrons in a molecule; an element with a higher electronegativity value exerts a stronger pull on bonding electrons <cit.>. Consequently, we selected the sum of electronegativity values as a critical indicator for our study.
However, when comparing and analyzing other parameters, we did not observe the expected significant changes (see Figure S1), prompting the introduction of custom parameters. Empirically, the number of extranuclear electrons has been observed to play a significant role in the formation of binary compounds. However, there is currently no specific parameter available to quantify this effect. To address this, we propose a novel weighted indicator, termed WEle (Weighted Electrons of the Extranuclear Level), which quantifies the relationship between the extranuclear electron count of different elements and their contribution to compound stability.
WEle is defined to more intuitively represent the stability of an atom or molecule's electron configuration in relation to its orbital capacity. A positive WEle value indicates an excess of electrons available for bonding, while a negative value suggests a need for interaction with other atoms to achieve stability. Consequently, we've chosen both the sum and difference of the valence electron weights as indicators, demonstrating the potential gain or loss of valence electrons for the elements involved in compound formation. The specific formula is shown below:
WEle = 1/(0+1) · n_s + ∑ 1/(l+1) · n_e,
n_e = -[2(2 l+1) - n_l] if n_l > 2 l+1,
n_e = n_l if n_l ≤ 2 l+1.
In this formula, l represents the orbital type, with the `s', `p', `d', and `f' orbitals being assigned the numerical values 0, 1, 2, and 3, respectively. Additionally, n_l represents the current number of electrons occupying the orbital l. When the outermost orbital of the extranuclear charge distribution is an `s' orbital, n_s specifically refers to the current number of electrons in that outermost s orbital; furthermore, n_e represents the “effective” electron count in orbitals such as `p', `d', and `f', which are beyond the outermost shell.
We've also introduced a metric for the number of unoccupied orbitals for electrons, termed Unoccupy. Unoccupied orbitals indicate an atom's potential to achieve a stable electron configuration by sharing, transferring, or rearranging electrons, particularly within the outermost shell. The difference in the number of Unoccupy orbitals between elements is selected as an indicator, reflecting their potential for chemical bonding and stability.
Taking Fe as an example, its arrangement of extranuclear electrons is [(1, `s', 2), (2, `s', 2), (2, `p', 6), (3, `s', 2), (3, `p', 6), (3, `d', 6), (4, `s', 2)]. Based on the valence electron weight index formula, WEle can be calculated as WEle = 1/(0+1) × 2 + 1/(2+1) × (-4), and the result is 0.667. Also, according to the definition of unoccupied orbitals, it is determined that Unoccupy equals 4.
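A small Python sketch that reproduces this worked example is given below; interpreting Unoccupy as the number of unfilled electron positions in the listed subshells is an assumption that matches the Fe value of 4.

def wele_and_unoccupy(config):
    # WEle and Unoccupy from a configuration given as [(n, orbital, electrons), ...].
    L = {"s": 0, "p": 1, "d": 2, "f": 3}
    nmax = max(n for n, _, _ in config)
    wele, unoccupy = 0.0, 0
    for n, orb, ne in config:
        l = L[orb]
        cap = 2 * (2 * l + 1)
        if n == nmax and orb == "s":
            wele += ne / (0 + 1)              # outermost s electrons
        elif ne > 2 * l + 1:
            wele += -(cap - ne) / (l + 1)     # more than half filled: negative contribution
        else:
            wele += ne / (l + 1)
        unoccupy += cap - ne                  # unfilled electron positions (assumed reading)
    return wele, unoccupy

fe = [(1, "s", 2), (2, "s", 2), (2, "p", 6), (3, "s", 2),
      (3, "p", 6), (3, "d", 6), (4, "s", 2)]
print(wele_and_unoccupy(fe))                  # -> (0.666..., 4)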
After careful consideration of various indicators, we have identified four critical parameters that significantly influence the miscibility of binary compounds: the sum of electronegativity values (sumElecNeg), the difference in the number of unoccupied orbitals (diffUnoccupy), the sum of valence electron weights (sumWEle), and the difference in valence electron weights (diffWEle), respectively.
sumElecNeg =ElecNeg_a + ElecNeg_b
sumWEle =WEle_a + WEle_b
diffWEle =|WEle_a - WEle_b|
diffUnoccupy =|Unoccupy_a - Unoccupy_b|
where a and b represent two different elements.
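Given per-element values, the four descriptors can be assembled as in the following Python sketch; the dictionary keys and structure are illustrative.

def pair_descriptors(elem_a, elem_b):
    # The four model inputs for one binary system; each element is a dict holding
    # its electronegativity, WEle and Unoccupy values.
    return {
        "sumElecNeg":   elem_a["elecneg"] + elem_b["elecneg"],
        "sumWEle":      elem_a["wele"] + elem_b["wele"],
        "diffWEle":     abs(elem_a["wele"] - elem_b["wele"]),
        "diffUnoccupy": abs(elem_a["unoccupy"] - elem_b["unoccupy"]),
    }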
§.§.§ Random Forest Model
Our research focuses on exploring the key factors that determine whether two different elements can be miscible to form binary compounds and quantifying the importance of these factors. The data in this study have the following characteristics: (1) high dimensionality, involving the basic features of many elements; (2) large scale, with a large number of samples to be processed; (3) intricate relationships among the data that go beyond a simple linear dependence. Given these characteristics, the random forest model is an ideal choice due to its exceptional fitting capability and its proficiency in managing complex data structures <cit.>.
We utilized data from 2,346 binary systems to construct our random forest model. To effectively evaluate the model's performance, we employed a commonly used partitioning strategy, randomly dividing the dataset into a training set and a test set with an 80% to 20% ratio. This method ensures that the model does not overfit during training while allowing for effective performance evaluation on an independent dataset.
During the training process, we use information entropy as the criterion to evaluate the quality of node splitting, because it can measure the data's uncertainty and evaluate each node's purity after splitting <cit.>. By employing this method, we can identify the optimal split points, thereby enhancing the model's performance to its fullest potential. We used an ensemble of 10 decision trees in the random forest model. Through this integration strategy, we enhanced the prediction accuracy and robustness of the model, thereby effectively improving its generalization ability.
To comprehensively evaluate the performance of the constructed random forest model in classifying the miscibility of binary compound systems, we utilized stratified sampling to randomly generate the test set. Additionally, we assessed the model using four key performance metrics: Accuracy, Precision, Recall, and F1-score. The calculation formulas for these metrics are as follows:
Accuracy measures the proportion of correctly classified samples to the total number of samples:
Accuracy = (TP+TN)/(TP+TN+FP+FN)
Precision measures the proportion of true positive samples among all samples classified as positive:
Precision = TP/(TP+FP)
Recall measures the proportion of true positive samples correctly identified among all actual positive samples:
Recall = TP/(TP+FN)
F1-score is the harmonic mean of precision and recall, providing a comprehensive metric that balances both aspects:
F1 = (2 * Precision * Recall)/(Precision + Recall)
Here, TP, TN, FP, and FN represent true positives, true negatives, false positives, and false negatives, respectively. This evaluation framework ensures a thorough analysis of the model’s performance across multiple dimensions.
The key performance metrics we used, such as accuracy, precision, recall, and F1-score, were calculated using weighted averages based on class weights. These weights were assigned according to the size of each class to ensure balanced predictive performance across all classes. We performed numerous experiments with random combinations of potential tuning parameters to identify the optimal set of hyperparameters. Finally, we applied the best-fitting random forest model to the test set to evaluate its performance on unseen data.
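The training setup described in this subsection could be reproduced along the following lines. The sketch below is illustrative only: the feature matrix X and labels y are random placeholders standing in for the 2,346-system dataset, class_weight='balanced' is one plausible way to realise the class-size weighting mentioned above, and only the settings stated in the text (10 trees, entropy criterion, stratified 80/20 split, weighted metrics) are taken from the paper.

# Sketch of the described pipeline with placeholder data (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2346, 4))     # placeholder: [sumElecNeg, diffUnoccupy, sumWEle, diffWEle]
y = rng.integers(0, 2, size=2346)  # placeholder: 1 = known binary compound, 0 = none reported

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=10, criterion='entropy',
                             class_weight='balanced', random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print('Accuracy :', accuracy_score(y_test, y_pred))
print('Precision:', precision_score(y_test, y_pred, average='weighted'))
print('Recall   :', recall_score(y_test, y_pred, average='weighted'))
print('F1-score :', f1_score(y_test, y_pred, average='weighted'))
print('Importances:', clf.feature_importances_)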
§.§ Structural Prediction
Recognized for its efficiency in identifying all stable crystal structures across a range of atomic numbers, USPEX has been validated through numerous studies <cit.>. We employed the USPEX algorithm to conduct a structural search at 0 GPa. The search began with an initial population of 150 diverse structures, each with up to 16 atoms in their respective primitive cells. The evolutionary process involved generating subsequent populations of 40 structures. This generation was composed through a combination of methods: 20% heredity, ensuring the propagation of successful structures; 20% random space group generation, introducing new symmetry possibilities; 20% transmutation, simulating atomic substitutions; 20% soft mutation, allowing for slight modifications to existing structures; 10% lattice mutation, altering the unit cell parameters; and 10% topological random generation, exploring new topological configurations.
The search algorithm is designed to terminate under two conditions: either when the optimal structure has remained unchanged for 30 consecutive generations, indicating convergence towards a stable solution, or after completing 60 iterations, ensuring a thorough exploration of the configurational space.
We employed the VASP code to carry out DFT calculations using the Perdew–Burke–Ernzerhof (PBE) functional within the framework of the GGA. To accurately represent the core electrons and their effects on the valence electrons, we utilized the projector-augmented wave (PAW) method <cit.>.
A plane-wave basis was used with a kinetic energy cutoff of 500 eV, and the convergence criterion for the total energy was set to 10^-8 eV. The unit cell lattice vectors (both unit cell shape and size) were fully relaxed together with the atomic coordinates until the force on each atom was less than 0.005 eV/Å.
To characterize the energetic stability of structures at various compositions resulting from a structure search, the formation energy (E_f) of any given structure E(Co_xEu_y) is calculated:
E_f = [E(Co_xEu_y) - xE(Co) - yE(Eu)]/(x+y)
where E(Co_xEu_y) is the per-atom energy of the Co_xEu_y structure, and E(Co) and E(Eu) represent the per-atom energies of Hexagonal-Co (Fm-3m) and Hexagonal-Eu (P6_3/mmc), respectively (all details are provided in Table S3 of the Supporting Information). Also, the phonon calculations were performed using Density-Functional Perturbation Theory (DFPT) by combining the VASP code with the PHONOPY package <cit.>. A 2 × 2 × 2 supercell and a kinetic energy cutoff of 500 eV were used to calculate the phonon spectra.
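The formation-energy expression above can be transcribed directly; the sketch below is a literal implementation with placeholder numbers rather than computed DFT values.

# Sketch: literal transcription of the formation-energy formula (placeholder numbers).
def formation_energy(e_coxeuy, x, y, e_co, e_eu):
    # e_coxeuy, e_co, e_eu: the energies E(Co_xEu_y), E(Co), E(Eu) defined above, in eV
    return (e_coxeuy - x * e_co - y * e_eu) / (x + y)

print(formation_energy(-45.0, 3, 2, -7.0, -1.9))   # illustrative values only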
§ RESULTS AND DISCUSSION
§.§ Machine Learning Prediction Model
Here, we have selected four critical factors: sumElecNeg, diffUnoccupy, sumWEle, and diffWEle. Figure <ref> illustrates the relationships between the distribution of these key factors and the miscibility of different elements to form binary systems. The horizontal axis represents the four key factors, while the vertical axis represents the number of binary systems.
As seen from Figure <ref>, a, c, e, and g represent the sample label distributions that can form binary compound systems, while b, d, f, and h correspond to the sample label distributions for which no binary compound systems have been found yet.
The effect of sumElecNeg on the binding of binary compounds is shown in Figure <ref>(a-b). In Figure <ref>(a), mixable phases accumulate in the range 2.8-4. On the contrary, nonmixable ones range between 2 and 3 and fail to form binary phases. Furthermore, Figure <ref>(c-d) reflects the effect of diffUnoccupy on the binding of binary compounds. In Figure <ref>(c), mixable compounds decay sharply in the region after 2, while in Figure <ref>(d), nonmixable compounds decay slowly in the region after 2. The effect of sumWEle on the binding of binary compounds is shown in Figure <ref>(e-f). In Figure <ref>(e), mixable phases are mainly concentrated in the region of 1-2, while in Figure <ref>(f), nonmixable phases are mostly concentrated in the region of 1-2.5. Figure <ref>(g-h) reflects the effect of diffWEle on the binding of binary compounds. In Figure <ref>(g), mixable compounds mainly decay in the region after 1.5, while in Figure <ref>(h), nonmixable ones drop in the region after 0.
We selected the above four predominant factors during the training of the Random Forest algorithm. Performance indicators were applied to test the model's performance. The specific indicators are as follows: Accuracy is 94.47%, Precision is 94.83%, Recall is 94.55%, and F1-score is 94.64%. The complete specific performance parameters are shown in Table <ref>(Partial classification results for different labels in the binary system are shown in Table S2 in the supporting material).
To further analyze the importance of each input, we extracted the feature contributions from the random forest model. As depicted in Figure <ref> below, their weight ratios are 35.2% (sumElecNeg), 22.54% (diffWEle), 28.81% (sumWEle), and 13.45% (diffUnoccupy), respectively. SumElecNeg emerges as the most significant factor, highlighting the critical role of electronegativity summation in predicting the formation of binary compounds. Then it comes to sumWEle and diffWEle, which shows that the WEle weight value plays an important role in forming the compound. Also, diffUnoccupy provides useful global information about the formation propensity of a compound. These varying degrees of significance not only elucidate the extent to which these critical factors influence the formation of binary compounds by elements but also represent the degree to which distributions are differentiated in graphical analyses.
Finally, the trained model was applied to identify combinations that have the potential to form binary compounds but have not yet been found in the experimental database, including Na-Fe, Al-Bi, P-Te, Fe-Rb, Co-Eu, As-Tl, Mo-Tb, Ru-Pb, Ru-Bi, Pd-Os, Tb-W, and Ho-W. These predicted binary combinations will serve as our target set for discovering new materials.
§.§ Updating The Energy Convex Hull Of Co-Eu System
We conducted a rapid variable-composition structure search using USPEX, guided by the combinations predicted by the random forest model described above. It is known that the energy convex hull (ECH) phase diagram is a vital basis for judging the structural stability of a given system <cit.>. In Figure <ref>, red points represent stable phases, connected by continuous black lines to form the ECH. Phases lying directly on the ECH are thermodynamically stable, while blue points above the hull indicate instability. Notably, the only previously reported stable phase, Eu_2Co_3, was identified in the OQMD <cit.>. However, this phase has become metastable following the update of the ECH, and its formation energy is now marked as an `x' in the phase diagram. As shown in Figure <ref>, CoEu_8, Co_3Eu_2 and CoEu exhibit negative formation energies, indicating that they are thermodynamically stable.
The lowest-energy structures of CoEu_8, Co_3Eu_2 and CoEu all belong to the P1 space group, as shown in Figure <ref>. Crystallographic details are provided in the supplementary materials (Tables S4-S6).
The phonon band structure offers key insights into the dynamic stability of the newly predicted CoEu_8, Co_3Eu_2 and CoEu phases. The CoEu_8 and Co_3Eu_2 phases are dynamically stable, as there are no imaginary frequencies in Figure <ref>(a-b), which further consolidates the theoretical basis for CoEu_8 and Co_3Eu_2 as potentially stable phases. In contrast, the CoEu phase has an imaginary frequency in Figure <ref>(c), indicating that it may be dynamically unstable. The phonon spectrum of Eu_2Co_3 from the OQMD after structural optimization also shows imaginary frequencies, as shown in the supplementary material (Figure S7).
Our results indicate that the newly predicted structures of the Co-Eu system exhibit superior stability, making them strong candidates for experimental synthesis. This finding corroborates the results presented in Section 3.1.
§ CONCLUSION
In summary, our study utilized a dataset comprising 2,346 binary systems, including 1,420 experimentally synthesized binary systems from the MP database and ICSD as positive samples and 926 experimentally unsynthesized binary systems as negative samples. We employed a random forest model for binary classification, integrating key features such as the sumElecNeg for two different elements in the periodic table and custom-weighted features like sumWEle, diffWEle, and diffUnoccupy. This feature set significantly bolstered the model's reliability, yielding an impressive Accuracy of 94.47%, Precision of 94.83%, Recall of 94.55%, and an F1-score of 94.64%.
We utilized USPEX to conduct rapid variable composition structure searches on 12 systems with synthetic potential predicted by our model and discovered three new thermodynamically stable structures, CoEu_8, Co_3Eu_2 and CoEu, in the Co-Eu system, thereby verifying the reliability of our model. Our model successfully explored combinations of miscible element pairs, allowing us to explore and synthesize potentially stable structures more efficiently, providing valuable guidance for future discovery of new materials. It also laid the foundation for further investigations into incorporating a third element to form new ternary systems with previously immiscible element pairs.
Work at Guangdong University of Technology is supported by the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022A1515012174 and 2021A1515110328). R. Wang also thanks Center of Campus Network and Modern Educational Technology, Guangdong University of Technology, Guangdong, China for providing computational resources and technical support for this work.
The Supporting Information can be accessed at no cost through https://pubs.acs.org/doi/xxx.
http://arxiv.org/abs/2409.03631v1 | 20240905154630 | Condensed Ricci Curvature on Paley Graphs and their Generalizations | ["Vincent Bonini", "Daniel Chamberlin", "Stephen Cook", "Parthiv Seetharaman", "Tri Tran"] | math.CO | ["math.CO", "math.NT", "52C99, 53B99"] |

Mathematics Subject Classification (2020): primary 52C99, 53B99; secondary 05C10, 05C81, 05C99

Condensed Ricci Curvature on Paley Graphs and their Generalizations
Vincent Bonini, Daniel Chamberlin, Stephen Cook, Parthiv Seetharaman, Tri Tran
================================================================================
§ ABSTRACT
We explore properties of generalized Paley graphs and we extend a result of Lim and Praeger <cit.> by providing a more precise description of the connected components of disconnected generalized Paley graphs. This result leads to a new characterization of when generalized Paley graphs are disconnected. We also provide necessary and sufficient divisibility conditions for the multiplicative group of the prime subfield of certain finite fields to be contained in the multiplicative subgroup of nonzero k-th powers. This latter result plays a crucial role in our development of a sorting algorithm on generalized Paley graphs that exploits the vector space structure of finite fields to partition certain subsets of vertices in a manner that decomposes the induced bipartite subgraph between them into complete balanced bipartite subgraphs. As a consequence, we establish a matching condition between these subsets of vertices that results in an explicit formula for the condensed Ricci curvature on certain Paley graphs and their generalizations.
§ INTRODUCTION
Ollivier defined the coarse Ricci curvature of Markov chains on metric spaces in terms of the Wasserstein (or transport) distance of measures in <cit.>, providing a synthetic notion of Ricci curvature and a bridge between Riemannian geometry and probabilistic methods. The investigation of Ollivier's coarse Ricci curvature on graphs presents an interesting avenue for research and offers an accessible framework for quantifying local connectivity of graphs. Consequently, it has many practical and computational applications in artificial intelligence, network analysis, and data science (cf. <cit.>).
Much work has been done on various forms of the coarse Ricci curvature on graphs (cf. <cit.>). We consider a modified notion of Ollivier's coarse Ricci curvature on graphs introduced by Lin, Lu, and Yau in <cit.> that we refer to as the condensed Ricci curvature as in <cit.>. In particular, we study the condensed Ricci curvature on Paley graphs and their generalizations, which serve as models of pseudo-random graphs that encode algebraic relations between the elements of certain finite fields. Standard (or quadratic) Paley graphs (q,2) are constructed by taking the elements of finite fields of prime power order q=p^n ≡ 1 4 as vertices and defining edges between vertices that differ by squares. Generalized Paley graphs (q,k) are constructed in a similar manner by taking the elements of finite fields of prime power order q=p^n ≡ 1 2k as vertices and defining edges between vertices that differ by higher order k-th powers (see Section <ref>). The connections of Paley graphs and their generalizations to number theory, field theory, and other branches of mathematics adds to their mathematical interest and allows one to apply tools from number theory and algebra in their study.
Generalized Paley graphs (q,k) share some of the well-known properties of quadratic Paley graphs. Indeed, they are symmetric and q-1/k-regular but unlike quadratic Paley graphs they are not self-complementary and may be disconnected when k >2 (see Section <ref>). Our main results concerning the condensed Ricci curvature on generalized Paley graphs requires an understanding of their connectivity properties. In a study of the automorphism groups of generalized Paley graphs, it was shown in Theorem 2.2 of <cit.> that a generalized Paley graph (q,k) of order q=p^n is connected if and only if k is not a multiple of (q-1)/(p^a-1) for any proper divisor a of n. Moreover, if (q,k) is disconnected, then each connected component is isomorphic to the generalized Paley graph (p^a,k') where k'=k(p^a-1)/(q-1) ≥ 1 and a is some proper divisor of n such that (q-1)/(p^a-1) divides k.
From the divisibility conditions in this result of <cit.>, one finds that a generalized Paley graph 𝒫(q, k) is connected if and only if no proper subfield of _q contains the multiplicative subgroup of nonzero k-th powers. Hence, although it is not explicitly stated in <cit.>, it is natural to expect that each of the connected components of a disconnected generalized Paley graph 𝒫(q, k) is isomorphic to a generalized Paley graph defined over the smallest subfield of _q that contains the subgroup of nonzero k-th powers. Using basic properties of finite fields and finite cyclic groups, we provide a modest extension of this result of <cit.> by showing that each connected component of a disconnected generalized Paley graph is isomorphic to the generalized Paley graph (p^a,k') as described above, where a is in fact the smallest such proper divisor of n, or equivalently, where _p^a is the smallest subfield of _q containing the subgroup of nonzero k-th powers.
Let k ≥ 2 and suppose that the generalized Paley graph 𝒫(q, k) of order q=p^n is disconnected. Then each connected component of 𝒫(q, k) is isomorphic to the generalized Paley graph 𝒫(p^a, k') with k' = k (p^a-1)/(q-1) ≥ 1 where a is the smallest proper divisor of n such that (q-1)/(p^a-1) divides k.
Now suppose that θ is a primitive element of the finite field _q of order q=p^n and let (_q^×)^k denote the multiplicative subgroup of nonzero k-th powers in _q^×. Noting that the smallest subfield of _q that contains (_q^×)^k is given by the field extension _p(θ^k), it follows that the parameter a in the results of <cit.> and Theorem <ref> is precisely the degree of this extension, or equivalently, the degree of the minimal polynomial of θ^k over _p. As a result of these observations, we have the following reformulation of the findings of <cit.> and Theorem <ref>, which provides a new characterization of when generalized Paley graphs are disconnected.
Let k ≥ 2 and suppose that θ is a primitive element of the finite field _q of order q=p^n with q ≡ 1 2k. Then the generalized Paley graph 𝒫(q, k) is disconnected if and only if the field extension _p(θ^k) is a proper subfield of _q. Furthermore, if 𝒫(q, k) is disconnected, then each connected component is isomorphic to the generalized Paley graph 𝒫(p^a, k') with
k' = |_p(θ^k)^×|/| (_q^×)^k |≥ 1
where a < n is the degree of the extension _p(θ^k) over _p.
In light of the aforementioned results of <cit.>, we also provide some cases of interest where we can guarantee that generalized Paley graphs are connected.
Let k ≥ 2 and suppose that q=p^n is a prime power such that q ≡ 1 2k. If k < p^{n/2}+1, then the generalized Paley graph (q,k) is connected.
As a consequence of Theorem <ref> and a straightforward calculus argument, we obtain the fact that all generalized Paley graphs (q,k) of order q=p^km are connected.
Let m≥ 1, k ≥ 2 and suppose that q=p^km is a prime power such that q ≡ 1 2k. Then the generalized Paley graph 𝒫(q, k) is connected.
We then turn our attention to the derivation of an explicit formula for the condensed Ricci curvature on Paley graphs and their generalizations. Given a connected, locally finite, undirected, simple graph G = (V, E) with shortest path distance function ρ:V × V →∪{0} and vertex v ∈ V, let
Γ(v) = {w ∈ V |ρ(v,w)=1 } = {w ∈ V | vw ∈ E }
denote the subset of vertices that are adjacent v. Then for any edge xy ∈ E, one can decompose the neighbor sets Γ(x) and Γ(y) into disjoint unions
Γ(x) = N_x ∪∇_xy∪{y} and Γ(y) = N_y ∪∇_xy∪{x}
as in <cit.>, where ∇_xy = Γ(x) ∩Γ(y) denotes the subset of vertices that are adjacent to both x and y and where
N_x = Γ(x) ∖ (∇_xy∪{y}) and N_y = Γ(y) ∖ (∇_xy∪{x}).
Understanding matchings between the neighbor sets N_x and N_y in a graph G is essential to the calculation of the condensed Ricci curvature (x, y) of an edge xy ∈ E. For example, in <cit.> it is shown that for any edge xy ∈ E of a strongly regular graph of degree d, the condensed Ricci curvature
(x, y) = 1/d · (2 + |∇_xy| - (|N_x| - m))
where m is the size of a maximum matching ℳ between N_x and N_y. In general, when a perfect matching exists between the neighbor sets N_x and N_y for every edge xy ∈ E, the graph G is said to satisfying the Global Matching Condition <cit.>. An explicit formula for the condensed Ricci curvature along edges in graphs that satisfy the Global Matching Condition is established in <cit.>. We state a combined version of Lemma 6.2 and Theorem 6.3 of <cit.> below.
Let G=(V,E) be a connected, locally finite, undirected, simple graph satisfying the Global Matching Condition. Then G is regular of degree d and the condensed Ricci curvature
(x,y) = 1/d · (2 + |∇_xy|)
for any edge xy ∈ E.
Our derivation of an explicit formula for the condensed Ricci curvature on generalized Paley graphs relies on establishing the Global Matching Condition and Theorem <ref>. We refer readers to Section 2 of <cit.> for the precise definition of the condensed Ricci curvature on graphs and our choice of terminology. The main ideas we use to establish the Global Matching Condition were first conceived for the special case of quadratic Paley graphs (q,2) of even power order q=p^2m in an unpublished student research project <cit.>. In particular, in <cit.> the fact that the multiplicative group of the corresponding prime subfield _p^× is contained in the subgroup of nonzero squares (_q^×)^2 led to the idea that the vector space structure of the finite field _q could be used to partition (or “sort”) the vertices in N_0 and N_1 in a way that a perfect matching could be easily obtained. Then, by the symmetry of quadratic Paley graphs, there is a perfect matching between the neighbor sets N_x and N_y for every edge xy ∈ E.
We found that the observations of <cit.> hold more broadly on certain generalized Paley graphs and that the induced bipartite subgraph of edges between vertices in N_x and N_y can actually be decomposed into complete balanced bipartite subgraphs with a “sorting algorithm” (see Section <ref>). With the aim of generalizing the ideas of <cit.>, we first establish a substantial case where the multiplicative group of the prime subfield _p^× of a finite field _q is contained in the subgroup of nonzero k-th powers (_q^×)^k.
Suppose p and k are prime. Then _p^×≤ (_p^km^×)^k for any positive integer m.
Moreover, as a simple consequence of the properties of finite cyclic groups, we have the following necessary and sufficient divisibility conditions for the multiplicative group of the prime subfield to be contained in the subgroup of nonzero k-th powers for the finite fields that serve as vertex sets of generalized Paley graphs.
Let k ≥ 2 and suppose that q=p^n is a prime power such that q ≡ 1 2k. Then _p^×≤ (_q^×)^k if and only if k |q-1/p-1.
Furthermore, as a straightforward consequence of Theorem <ref>, we find that if _q is a finite field of order q=p^n≡ 1 2k and k | p-1, then the multiplicative group of the prime subfield is contained in the subgroup of nonzero k-th powers if and only if n is a multiple of k.
Let k ≥ 2 and suppose that q=p^n is a prime power such that q ≡ 1 2k. If k | p-1, then _p^×≤ (_q^×)^k if and only if n ≡ 0 k.
With Theorems <ref> and <ref> in hand, we then formally develop the sorting algorithm conceived in <cit.> and use it to establish the Global Matching Condition on connected generalized Paley graphs in which the multiplicative group of the corresponding prime subfield is contained in the subgroup of nonzero k-th powers (see Section <ref>).
Let k ≥ 2 and suppose (q,k) is a connected generalized Paley graph of order q=p^n. If _p^×≤ (_q^×)^k, then (q,k) satisfies the Global Matching Condition.
Due to Theorems <ref>, <ref>, <ref>, and <ref>, we then obtain the following generalization of the unpublished work of <cit.> on quadratic Paley graphs (q,2) of even power order q=p^2m.
Let m ≥ 1 and suppose that (q,k)=(V,E) is a generalized Paley graph of order q=p^km where k is prime. Then the condensed Ricci curvature
(x,y) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E.
More generally, as a consequence of Theorems <ref>, <ref>, and <ref>, we obtain the same explicit formula for the condensed Ricci curvature on generalized Paley graphs (q,k) of order q=p^n satisfying the divisibility condition k |q-1/p-1.
Let k ≥ 2 and suppose that (q,k)=(V,E) is a generalized Paley graph of order q=p^n where k |q-1/p-1. Then the condensed Ricci curvature
(x,y) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E.
We would like to point out that the formula for the condensed Ricci curvature in Theorem <ref> holds for both connected and disconnected generalized Paley graphs. However, in our work we consider these cases separately (see Theorems <ref> and <ref>) and we recover the same formula for the condensed Ricci curvature by applying our results for connected graphs to the connected components of disconnected generalized Paley graphs. In the special case that the connected components of a disconnected generalized Paley graph are complete graphs, we appeal to the following result stated in <cit.> and proved in <cit.>.
A connected, finite, undirected, simple graph G = (V, E) is complete if and only if the condensed Ricci curvature (x,y) > 1 for all vertices x,y ∈ V. In particular, if G is a complete graph on n vertices, then (x,y) = n/(n-1) for all vertices x,y ∈ V.
Generalized Paley graphs (q,k) can be defined over finite fields of even order q=2^n as in <cit.>. In this case, one requires q ≡ 1 k rather than requiring q ≡ 1 2k as in the case for odd prime powers. Our work applies to generalized Paley graphs of even order, although some our results are trivial when considering finite fields of characteristic 2. However, for simplicity in presentation, we have chosen to focus on generalized Paley graphs of odd order.
This paper is organized as follows: In Section <ref> we formally define generalized Paley graphs and we discuss some of their relavant properties. In particular, we extend the results <cit.> by providing a more precise description of the connected components of disconnected generalized Paley graphs and we give a new characterization of when generalized Paley graphs are disconnected. In Section <ref> we establish results on the containment of the multiplicative group of the prime subfield of a finite field in the subgroup of nonzero k-th powers and we develop a “sorting algorithm”, which leads to establishing the Global Matching Condition and our explicit formulas for the condensed Ricci curvature on Paley graphs and their generalizations.
§ ACKNOWLEDGEMENTS
This research was generously supported by the William and Linda Frost Fund in the Cal Poly Bailey College of Science and Mathematics. The authors would like to thank the referee for their careful reading of our work and their valuable input and insight. We also extend our gratitude to Professor Eric Brussel and Professor Rob Easton of Cal Poly for many vauble conversations.
§ GENERALIZED PALEY GRAPHS AND THEIR PROPERTIES
In this section we introduce the definition of generalized Paley graphs and we discuss some of their relevant properties. Consider the finite field _q of order q=p^n and let _q^× denote the multiplicative group of _q. Then the set of nonzero squares in _q is denoted by
(_q^×)^2 = {α∈_q^×|α = β^2 for some β∈_q^×}.
Paley graphs are then constructed by taking the field elements of certain finite fields as vertices and defining edges between those vertices that differ by squares.
Let q=p^n be a prime power such that q ≡ 1 4. Then the Paley graph of order q is defined to be the graph 𝒫(q)=(V,E) with vertex set V=_q and edge set E={xy | x-y ∈ (_q^×)^2}.
We refer to Paley graphs with edges between vertices that differ by squares as quadratic Paley graphs. It is known that quadratic Paley graphs are connected, self-complementary, strongly regular graphs with parameters (q, (q-1)/2, (q-5)/4, (q-1)/4) (cf. <cit.>). Naturally, one can generalize the definition of Paley graphs to define edges in terms of higher order powers. For integers k ≥ 2, we denote the set of nonzero k-th powers in _q by
(_q^×)^k = {α∈_q^×|α = β^k for some β∈_q^×}.
Let k ≥ 2 and suppose q=p^n is a prime power such that q ≡ 1 2k. Then the generalized Paley graph of order q with k-th powers is defined to be the graph 𝒫(q, k)=(V,E) with vertex set V=_q and edge set E={xy | x-y ∈ (_q^×)^k}.
Note that one can consider k=1 in (<ref>) and Definition <ref>. In this case the set of nonzero 1-st powers in _q is simply the multiplicative group _q^× and therefore (q,1)=K_q is the complete graph on q vertices. For simplicity in our presentation we will sometimes refer to complete graphs as generalized Paley graphs. For k=2, the generalized Paley graphs 𝒫(q,2) are quadratic Paley graphs as defined in Definition <ref>. It is natural to refer to generalized Paley graphs with k=3 as cubic, k=4 as quartic, k=5 as quintic, and so on. Throughout this work we refer to Paley graphs with edges between vertices that differ by k-th powers simply as generalized Paley graphs or k-Paley graphs.
We also note that the set of nonzero k-th powers of a finite field _q form a subgroup of _q^×. Moreover, if θ is a generator for the multiplicative cyclic group _q^×, then θ^k generates the subgroup (_q^×)^k of k-th powers. Hence, since |θ| = q-1 and we are considering finite fields _q where k | q-1 , it follows that |θ^k | = q-1/k and therefore q-1/k elements of _q^× are k-th powers, or equivalently, the subgroup of nonzero k-th powers in _q^× has order q-1/k.
The condition q ≡ 1 2k in Definition <ref> ensures that q-1 is even, so q=p^n must be an odd prime power for the generalized Paley graphs under consideration. It also guarantees that generalized Paley graphs are undirected. In other words, the condition 2k | q-1 guarantees that if x-y ∈ (_q^×)^k for some x, y ∈_q, then y-x ∈ (_q^×)^k, or equivalently that -1 ∈ (_q^×)^k. Indeed, if θ is a generator for the multiplicative group _q^×, then
1 = (θ^{(q-1)/2})^2
and therefore θ^{(q-1)/2} = -1.
But then since (q-1)/(2k) ∈ ℤ, it follows that -1 = (θ^{(q-1)/(2k)})^k ∈ (_q^×)^k.
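For a prime modulus q no field-extension arithmetic is needed, so the facts just discussed can be checked directly in a few lines. The following sketch, which is purely illustrative and not tied to any implementation in the cited works, builds the graph for q = 13, k = 3 and verifies that the k-th powers form a subgroup of size (q-1)/k containing -1, and that the resulting graph is (q-1)/k-regular, anticipating the regularity proposition proved just below.

# Sketch: generalized Paley graph P(q, k) for prime q with 2k | q-1 (illustration only).
from itertools import combinations

def paley_graph(q, k):
    assert (q - 1) % (2 * k) == 0
    kth_powers = {pow(x, k, q) for x in range(1, q)}          # nonzero k-th powers
    edges = {(x, y) for x, y in combinations(range(q), 2)
             if (x - y) % q in kth_powers}
    return kth_powers, edges

q, k = 13, 3                                  # 13 ≡ 1 (mod 6)
powers, edges = paley_graph(q, k)
print(len(powers) == (q - 1) // k)            # the subgroup has (q-1)/k elements
print((q - 1) in powers)                      # -1 ≡ q-1 is a k-th power
deg = {v: 0 for v in range(q)}
for x, y in edges:
    deg[x] += 1
    deg[y] += 1
print(set(deg.values()) == {(q - 1) // k})    # the graph is (q-1)/k-regular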
Generalized Paley graphs retain some of the basic properties shared by quadratic Paley graphs. For example, generalized Paley graphs (q,k) are symmetric as they are easily seen to be arc-transitive under the subgroup of affine automorphisms of the form x ↦ ax+b where a ∈ (_q^×)^k and b ∈_q. Moreover, k-Paley graphs are q-1/k-regular. For completeness, we record and prove these results in the following propositions.
Generalized Paley graphs 𝒫(q, k) = (V,E) of order q=p^n are symmetric.
For any edges x_1y_1, x_2y_2 ∈ E, taking
a=(y_2-x_2)(y_1-x_1)^{-1} ∈ (_q^×)^k and b = x_2 - a x_1 ∈_q,
it follows that the automorphism ϕ:V → V defined by ϕ(x) = ax+b satisfies ϕ(x_1) = x_2 and ϕ(y_1) = y_2. Thus, (q,k) is arc-transitive and therefore symmetric.
Generalized Paley graphs 𝒫(q, k) = (V,E) of order q=p^n are q-1/k-regular.
Let θ be a generator for _q^× and suppose x ∈ V. Then for any other vertex y ∈ V, it follows that y ∈Γ(x) if and only if y-x ∈ (_q^×)^k. Hence, y ∈Γ(x) if and only if y = x+ (θ^k)^m for some m ∈. Thus, y ∈Γ(x) if and only if y ∈ x + ⟨θ^k ⟩, where x + ⟨θ^k ⟩ is the additive coset of the subgroup ⟨θ^k ⟩ of nonzero k-th powers. Noting that all cosets of ⟨θ^k ⟩ have order |θ^k | = q-1/k, we see that |Γ(x) | = q-1/k. Hence, since x was chosen arbitrarily, it follows that 𝒫(q, k) is q-1/k-regular.
In contrast to the case for quadratic Paley graphs, generalized Paley graphs (q,k) are not self-complementary for k > 2. This follows directly from the fact that self-complementary graphs with q vertices must have q(q-1)/4 edges, that is, half the number of edges as in a complete graph on q vertices. But (q,k) is q-1/k-regular by Proposition <ref> and therefore has
1/2 · q · (q-1)/k = q(q-1)/(2k)
edges, which is less than q(q-1)/4 for k>2. Furthermore, generalized Paley graphs (q,k) may not be connected when k > 2. However, even when a generalized Paley graph is disconnected, it turns out that each of its connected components is isomorphic to a single generalized Paley graph defined over a proper subfield of _q. These latter properties are due to Theorem 2.2 of <cit.>. We restate the relevant portions of this theorem in the context of our work below.
For k ≥ 2, the generalized Paley graph 𝒫(q, k) of order q=p^n is connected if and only if k is not a multiple of (q-1)/(p^a-1) for any proper divisor a of n. Furthermore, if 𝒫(q, k) is disconnected, then each connected component is isomorphic to the generalized Paley graph 𝒫(p^a, k') with k' = k (p^a-1)/(q-1) ≥ 1 where a is some proper divisor of n such that (q-1)/(p^a-1) divides k.
As noted in the introduction, the divisibility conditions in Theorem <ref> imply that a generalized Paley graph 𝒫(q, k) is connected if and only if no proper subfield of _q contains the subgroup of nonzero k-th powers. Indeed, since k | q-1, it follows that (q-1)/(p^a-1) divides k if and only if | (_q^×)^k | = q-1/k divides |_p^a^×| = p^a-1, or equivalently, (_q^×)^k≤_p^a^_×. Therefore, it is natural to expect that each of the connected components of a disconnected generalized Paley graph 𝒫(q, k) is isomorphic to a generalized Paley graph defined over the smallest subfield of _q that contains the subgroup of nonzero k-th powers.
Now if q=p^n is a prime power such that q ≡ 1 2k and b is a proper divisor of n for which (q-1)/(p^b-1) divides k, then k = k'(q-1)/(p^b-1) for some k' ∈ so
p^b-1 = k' · (q-1)/k = 2k' · (q-1)/(2k).
But (q-1)/(2k) ∈ ℤ so by (<ref>) it follows that 2k' | p^b-1 and therefore p^b ≡ 1 2k'. Thus, one obtains a generalized Paley graph (p^b,k') over the proper subfield _p^b of _q for every proper divisor b of n for which (q-1)/(p^b-1) divides k. However, if b is not the smallest such divisor of n, that is if _p^b is not the smallest subfield of _q containing the subgroup of nonzero k-th powers, then it turns out that (p^b,k') is also disconnected.
Although it is not explicitly stated in <cit.>, a further analysis shows that each of the connected components of a disconnected generalized Paley graph (q,k) is actually isomorphic to the generalized Paley graph 𝒫(p^a, k') as described in Theorem <ref> where a is in fact the smallest proper divisor of n such that (q-1)/(p^a-1) divides k, or equivalently, where _p^a is the smallest subfield of _q containing the subgroup of nonzero k-th powers.
Let k ≥ 2 and suppose the generalized Paley graph 𝒫(q, k) of order q=p^n is disconnected. Then each connected component of 𝒫(q, k) is isomorphic to the generalized Paley graph 𝒫(p^a, k') with k' = k (p^a-1)/(q-1) ≥ 1 where a is the smallest proper divisor of n such that (q-1)/(p^a-1) divides k.
By Theorem <ref> we may assume that (q-1)/(p^b-1) divides k for some proper divisor b of n. Equivalently, since k | q-1, we may write this divisibility condition as q-1/k| p^b-1. Therefore, since _p^b^× is cyclic and | (_q^×)^k | = q-1/k divides |_p^b^×| = p^b-1, it follows that
(_q^×)^k ≤_p^b^×.
Let a be the smallest proper divisor of n such that (q-1)/(p^a-1) divides k. If a < b, it follows that (_q^×)^k ≤_p^a^× as in (<ref>) and therefore
(_q^×)^k ≤_p^a^×∩_p^b^× = _p^d^×
where d=gcd(a,b). Thus, by Lagrange's theorem, it follows from (<ref>) that q-1/k| p^d-1, or equivalently
(q-1)/(p^d-1) divides k where d=gcd(a,b) ≤ a is a proper divisor of n. But since a is the smallest proper divisor of n such that (q-1)/(p^a-1) divides k, it follows that a=d and therefore a | b.
Now let k'_a = k(p^a-1)/(q-1) and k'_b = k(p^b-1)/(q-1). Then
k'_b = k(p^b-1)/(q-1) = k'_a · (q-1)/(p^a-1) · (p^b-1)/(q-1) = k'_a · (p^b-1)/(p^a-1)
and therefore (p^b-1)/(p^a-1) divides k'_b. Hence, since a is a proper divisor of b, it follows from Theorem <ref> that the generalized Paley graph (p^b,k'_b) is disconnected. Thus, in light of Theorem <ref>, the connected components of a disconnected generalized Paley graph (q,k) must be isomorphic to the generalized Paley graph (p^a,k') with k' = k (p^a-1)/(q-1) ≥ 1 where a is the smallest proper divisor of n such that (q-1)/(p^a-1) divides k, or equivalently, where _p^a is the smallest subfield of _q containing the subgroup of nonzero k-th powers.
Now suppose that θ is a primitive element of the finite field _q of order q=p^n with q ≡ 1 2k. Then the smallest subfield of _q that contains the subgroup of nonzero k-th powers is given by the field extension _p(θ^k) over _p. Thus, if the degree of this extension is strictly less than n, say
[_p(θ^k):_p] = a < n,
it follows that _p(θ^k) = _p^a is a proper subfield of _q where a is the smallest proper divisor of n such that | (_q^×)^k | = q-1/k divides |_p(θ^k)^×| = p^a-1, or equivalently, such that (q-1)/(p^a-1) divides k. As a result of these observations, we see that the parameter a in Theorem 2.5 of <cit.> and Theorem 2.6 is precisely the degree of the field extension _p(θ^k) over _p, or equivalently, the degree of the minimal polynomial of θ^k over _p. Therefore, we have the following reformulation of the results of <cit.> and Theorem 2.6.
Let k ≥ 2 and suppose that θ is a primitive element of the finite field _q of order q=p^n with q ≡ 1 2k. Then the generalized Paley graph 𝒫(q, k) is disconnected if and only if the field extension _p(θ^k) is a proper subfield of _q. Furthermore, if 𝒫(q, k) is disconnected, then each connected component is isomorphic to the generalized Paley graph 𝒫(p^a, k') with
k' = |_p(θ^k)^×|/| (_q^×)^k |≥ 1
where a < n is the degree of the extension _p(θ^k) over _p.
With these facts in hand, we focus our attention on connected generalized Paley graphs as our results will easily lend themselves to the connected components of disconnected generalized Paley graphs. One simple case where we can guarantee that generalized Paley graphs are connected is given in the following theorem.
Let k ≥ 2 and suppose that q=p^n is a prime power such that q ≡ 1 2k. If k < p^{n/2}+1, then the generalized Paley graph (q,k) of order q=p^n is connected.
Suppose that a is a proper divisor of n. Then
(p^n-1)/(p^a-1) ≥ (p^n-1)/(p^{n/2}-1) = p^{n/2}+1 > k.
Hence, k is not a multiple of (p^n-1)/(p^a-1) for any proper divisor a of n and therefore 𝒫(q, k) is connected by Theorem <ref>.
To conclude this section we present another special case where the generalized Paley graphs under consideration are connected. In particular, we show that all generalized Paley graphs (q,k) of order q=p^km are connected. This result contributes to a complete generalization of the observations outlined in <cit.> for quadratic Paley graphs (q,2) of even power order q=p^2m.
Let m≥ 1, k ≥ 2 and suppose that q=p^km is a prime power such that q ≡ 1 2k. Then the generalized Paley graph 𝒫(q, k) of order q=p^km is connected.
It suffices to show that km > 2log_p(k-1) since then p^{km/2}+1 > k and therefore 𝒫(q, k) is connected by Theorem <ref>. To this end, consider the function f:(1,∞) → ℝ defined by
f(x) = x - 2log_2(x-1) = x - (2/ln 2) · ln(x-1).
Then
f'(x) = 1 - 2/((ln 2)(x-1))
so f has a single critical point at
x^* = 2/ln 2 + 1
where f'<0 on (1,x^*) and f'>0 on (x^*,∞). Thus, f is decreasing on (1,x^*) and increasing on (x^*,∞). Moreover, since
f(x^*) = (2/ln 2)(1 - ln(2/ln 2)) + 1 > 0,
it follows that f has a positive global minimum at x^*. Hence, f > 0 on (1,∞) and therefore x > 2log_2(x-1) for all x ∈ (1,∞). In particular, since k ≥ 2 and p is necessarily an odd prime, we see
km ≥ k > 2log_2(k-1) > 2log_p(k-1)
and the desired result follows.
§ THE SORTING ALGORITHM AND THE GLOBAL MATCHING CONDITION
In this section we establish the Global Matching Condition for certain Paley graphs and their generalizations. As a consequence of Theorem <ref> of <cit.> we then obtain an explicit formula for the condensed Ricci curvature along the edges of these graphs. Consider a connected generalized Paley graph (q,k) of order q=p^n with vertex set V = _q and edge set E = {xy | x-y ∈ (_q^×)^k}. Due to the symmetry of generalized Paley graphs we may focus on the edge 01 ∈ E and the neighbor sets
N_0 = Γ(0) ∖ (∇_01∪{1}) and N_1 = Γ(1) ∖ (∇_01∪{0})
as introduced in Section <ref>.
Let H denote the induced bipartite subgraph consisting of all edges in E between vertices in N_0 and N_1. Our strategy in establishing the Global Matching Condition on the generalized Paley graphs under consideration is to formally develop and generalize the sorting algorithm that was first conceived in <cit.> for the special case of quadratic Paley graphs. We then apply this sorting algorithm to decompose the bipartite subgraph H into complete balanced bipartite subgraphs. Then one can pairwise match vertices in these subgraphs to construct a perfect matching between N_0 and N_1 and appeal to the symmetry of generalized Paley graphs to realize a perfect matching between the neighbor sets N_x and N_y for every edge xy ∈ E.
On generalized Paley graphs the containment of the corresponding multiplicative group of the prime subfield in the subgroup of nonzero k-th powers plays a crucial role in the success of the sorting algorithm. With the aim of establishing an important case of this critical component of the sorting algorithm, we first present a special case of Theorem 9.1 in chapter 6, section 9 of <cit.>.
Let _p denote the field of prime order p and suppose k is prime. Let α∈_p^× and suppose that α∉(_p^×)^k. Then x^k - α is irreducible in _p[x].
For the case of quadratic Paley graphs (q,2) of even power order q=p^2m, it was observed in <cit.> that the corresponding multiplicative group of the prime subfield _p^× is contained in the subgroup of nonzero squares (_q^×)^2. We appeal to Theorem <ref> to establish a more general version of this observation.
Suppose p and k are prime. Then _p^×≤ (_p^km^×)^k for any positive integer m.
Let α∈_p^× and suppose that α∉(_p^×)^k. Then by Theorem <ref>, f(x) = x^k-α is irreducible in _p[x]. Therefore, if θ is a root of f, then
_p[x] / ⟨ x^k-α⟩ ≅ _p(θ) ≅ _p^k.
Now suppose that φ:_p(θ) →_p^k is an isomorphism. Noting that φ(α)=α since α∈_p, it follows that
φ(θ)^k - α = φ(θ^k) - φ(α) = φ(θ^k - α) = φ(f(θ)) = φ(0) = 0.
Thus, α = φ(θ)^k ∈ (_p^k^×)^k and therefore since _p^k⊆_p^km for any positive integer m, it follows that α∈ (_p^km^×)^k. Hence, _p^×≤ (_p^km^×)^k as desired.
Now suppose _q is a finite field of order q=p^n such that q ≡ 1 2k with k ≥ 2. As a simple consequence of the properties of finite cyclic groups, we have the following necessary and sufficient divisibility conditions for the multiplicative group of the prime subfield of _q to be contained in the subgroup of nonzero k-th powers.
Let k ≥ 2 and suppose that q=p^n is a prime power such that q ≡ 1 2k. Then _p^×≤ (_q^×)^k if and only if k |q-1/p-1.
Let _q be a finite field of order q=p^n such that q ≡ 1 2k. Then since
|_p^×| = p-1 and | (_q^×)^k | = q-1/k,
it follows from properties of finite cyclic groups and Lagrange's theorem that _p^×≤ (_q^×)^k if and only if p-1 |q-1/k, or equivalently k |q-1/p-1.
Clearly, since we are considering generalized Paley graphs (q,k) of order q=p^n where q ≡ 1 2k, it follows that k | q-1.
Due to Theorem <ref>, it turns out that when k also divides p-1, the multiplicative group of the corresponding prime subfield _p^× is contained in the subgroup of nonzero k-th powers (_q^×)^k if and only if n is a multiple of k.
Let k ≥ 2 and suppose that q=p^n is a prime power such that q ≡ 1 2k. If k | p-1, then _p^×≤ (_q^×)^k if and only if n ≡ 0 k.
By Theorem <ref>, _p^×≤ (_q^×)^k if and only if k |q-1/p-1.
Hence, _p^×≤ (_q^×)^k if and only if
(q-1)/(p-1) = p^{n-1} + p^{n-2} + ⋯ + p + 1 ≡ 0 k.
But k | p-1 so p ≡ 1 k and therefore
(q-1)/(p-1) = p^{n-1} + p^{n-2} + ⋯ + p + 1 ≡ n k.
Hence, by (<ref>) and (<ref>) it follows that _p^×≤ (_q^×)^k if and only if n ≡ 0 k.
Now that we have a better understanding of when the multiplicative group of the prime subfield is contained in the subgroup of nonzero k-th powers of a given finite field, we develop the sorting algorithm that we will use to construct a perfect matching between the neighbor sets N_0 and N_1 in a generalized Paley graph. To this end, let θ be a generator of the multiplicative group _q^× of the finite field _q of order q=p^n. Consider the minimal polynomial f ∈_p[x] of θ such that
_q ≅_p(θ) ≅_p[x] ⟨ f(x) ⟩.
In light of the isomorphisms in (<ref>), it follows that 1, θ, θ^2,…,θ^n-1 is a basis for _q as a vector space over _p and therefore
_q = { a_0 + a_1θ + a_2 θ^2 + ⋯ + a_n-1θ^n-1| a_0, a_1,…,a_n-1∈_p}.
Our formalization of the sorting algorithm and substantiation of a perfect matching between the neighbor sets N_0 and N_1 relies on the following fact that was first observed for quadratic Paley graphs in <cit.>.
Let (q,k) be a connected generalized Paley graph of order q=p^n and suppose that θ is a generator of the multiplicative group _q^×. Fix a_1,…, a_n-1∈_p not all zero and set
S=S(a_1,…,a_n-1)={ b + a_1θ + a_2 θ^2 + ⋯ + a_n-1θ^n-1| b∈_p}.
Then | N_0 ∩ S | = | N_1 ∩ S |.
First note that x ∈Γ(0) if and only if x ∈ (_q^×)^k if and only if x+1 ∈Γ(1) and therefore
|Γ(0) ∩ S | = |Γ(1) ∩ S |.
Now since a_1,…, a_n-1∈_p are not all zero, it follows that
{0}∩ S = ∅ and {1}∩ S = ∅.
Moreover, since
Γ(0) = {1}∪ N_0 ∪∇_01 and Γ(1) = {0}∪ N_1 ∪∇_01
are disjoint unions, it follows that
|Γ(0) ∩ S | = | N_0 ∩ S | + |∇_01∩ S |
and
|Γ(1) ∩ S | = | N_1 ∩ S | + |∇_01∩ S |.
Thus, by (<ref>) we may equate equations (<ref>) and (<ref>) to find | N_0 ∩ S | = | N_1 ∩ S | .
Lemma <ref> provides an avenue for partitioning the neighbor sets N_0 and N_1 of a generalized Paley graph (q,k) by grouping vertices (or elements) in N_0 and N_1 with the same coefficients of θ,…,θ^n-1. In particular, when the nonzero elements of the prime subfield are all k-th powers, we can partition the induced bipartite subgraph H consisting of all edges in E between N_0 and N_1 into complete balanced bipartite subgraphs. By matching vertices in each of these complete subgraphs in pairwise fashion, we obtain a perfect matching between N_0 and N_1, which by symmetry shows that the generalized Paley graphs under consideration satisfy the Global Matching Condition. This approach is based on ideas from <cit.> where the argument was outlined for quadratic Paley graphs of even power order q=p^2m.
Let k ≥ 2 and suppose (q,k) is a connected generalized Paley graph of order q=p^n. If _p^×≤ (_q^×)^k, then (q,k) satisfies the Global Matching Condition.
By symmetry it suffices to consider the edge 01 ∈ E and to show that there is a perfect matching between N_0 and N_1. As in Lemma <ref>, fix a_1,…, a_n-1∈_p not all zero and set
S(a_1,…,a_n-1) = { b + a_1θ + a_2 θ^2 + ⋯ + a_n-1θ^n-1| b∈_p}.
Then for any α∈ N_0 ∩ S(a_1,…,a_n-1), β∈ N_1 ∩ S(a_1,…,a_n-1), it follows that
α - β∈_p^×≤ (_q^×)^k.
Hence, αβ∈ E and therefore the bipartite subgraph H(a_1,…,a_n-1) consisting of all edges in E between N_0 ∩ S(a_1,…,a_n-1) and N_1 ∩ S(a_1,…,a_n-1) is complete.
In light of Lemma <ref>, it follows that the complete bipartite subgraph H(a_1,…,a_n-1) is balanced. Thus, we can match vertices in H(a_1,…,a_n-1) in a pairwise fashion to obtain a perfect matching between N_0 ∩ S(a_1,…,a_n-1) and N_1 ∩ S(a_1,…,a_n-1).
Now let a=(a_1,…,a_n-1) ∈_p^n-1 be a multi-index for a_1,…,a_n-1∈_p so that S(a) = S(a_1,…,a_n-1). Set 𝒮 = {S(a) | a ≠ 0 ∈_p^n-1} and let 𝒥 be a multi-indexing set for 𝒮. Noting that _p^×≤ (_q^×)^k implies _p ⊆{0}∪{1}∪∇_01, it follows that
⋃_a ∈𝒥 N_0 ∩ S(a) = N_0 ∩⋃_a ∈𝒥 S(a) = N_0 ∩(_q ∖_p) = N_0
and similarly
⋃_a ∈𝒥 N_1 ∩ S(a) = N_1.
Therefore, since the sets S(a)=S(a_1,…,a_n-1) are disjoint for each distinct choice of a=(a_1,…,a_n-1) ∈𝒥, it follows from (<ref>) and (<ref>) that the bipartite subgraph H between N_0 and N_1 can be partitioned into complete balanced bipartite subgraphs H(a)=H(a_1,…,a_n-1) consisting of all edges in E between N_0 ∩ S(a) and N_1 ∩ S(a). Thus, since each subgraph H(a) admits a perfect matching, it follows that there is a perfect matching between N_0 and N_1 and therefore (q,k) satisfies the Global Matching Condition by symmetry.
Due to Theorems <ref>, <ref>, <ref>, and <ref>, we obtain the following generalization of the unpublished work of <cit.> on quadratic Paley graphs (q,2) of even power order q=p^2m.
Let m ≥ 1 and suppose that (q,k)=(V,E) is a generalized Paley graph of order q=p^km where k is prime. Then the condensed Ricci curvature
(x,y) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E.
First note that since (q,k) is a generalized Paley graph of order q=p^km with k ≥ 2, it follows that (q,k) is connected by Theorem <ref>. Moreover, since k is prime it follows that _p^×≤ (_q^×)^k by Theorem <ref>. Hence, from Theorem <ref> we see that (q,k) satisfies the Global Matching Condition so that the condensed Ricci curvature
(x,y) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E by Theorem <ref>.
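The smallest case covered by this theorem is 𝒫(9,2) (p = 3, k = 2, m = 1). The sketch below, an illustration only, builds this graph over F_9 = F_3[i] with i^2 = -1 and evaluates the right-hand side of the formula; it does not recompute the condensed Ricci curvature itself, which is defined through an optimal-transport problem.

# Sketch: ingredients of the curvature formula for P(9, 2), with F_9 = F_3[i], i^2 = -1.
from itertools import product

p, q, k = 3, 9, 2
elems = list(product(range(p), repeat=2))                # a + b*i  <->  (a, b)

def sub(u, v):
    return ((u[0] - v[0]) % p, (u[1] - v[1]) % p)

def mul(u, v):
    (a, b), (c, d) = u, v                                # (a+bi)(c+di), i^2 = -1
    return ((a * c - b * d) % p, (a * d + b * c) % p)

squares = {mul(z, z) for z in elems if z != (0, 0)}      # nonzero squares of F_9
adj = {x: {y for y in elems if y != x and sub(x, y) in squares} for x in elems}

x, y = (0, 0), (1, 0)                                    # adjacent, since 1 is a square
common = adj[x] & adj[y]                                 # the set of common neighbors
print(len(adj[x]), len(common))                          # degree 4 and |∇_xy| = 1
print(k / (q - 1) * (2 + len(common)))                   # formula value: 0.75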
Furthermore, as a consequence of Theorems <ref>, <ref>, and <ref>, we obtain the same explicit formula for the condensed Ricci curvature on a large class of connected generalized Paley graphs.
Let k ≥ 2 and suppose that (q,k)=(V,E) is a connected generalized Paley graph of order q=p^n where k |q-1/p-1. Then the condensed Ricci curvature
(x,y) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E.
Since k |q-1/p-1 it follows that _p^×≤ (_q^×)^k from Theorem <ref>. Hence, (q,k) satisfies the Global Matching Condition by Theorem <ref>. Therefore, since (q,k) is connected, it follows from Theorem <ref> that the condensed Ricci curvature
(x,y) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E.
On the other hand, when a generalized Paley graph (q,k) with k |q-1/p-1 is disconnected, our work can be adapted to its connected components to recover the same formula as in Theorem <ref> for the condensed Ricci curvature.
Let k ≥ 2 and suppose that (q,k)=(V,E) is a disconnected generalized Paley graph order q=p^n where k |q-1/p-1. Then the condensed Ricci curvature
(x,y) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E.
From Theorems <ref> and <ref>, it follows that each of the connected components of (q,k) is isomorphic to the generalized Paley graph (p^a,k') with k'=k(p^a-1)/(q-1) ≥ 1 where a is the smallest proper divisor of n such that (q-1)/(p^a-1) divides k. Recalling that the divisibility condition k |q-1/p-1 is equivalent to p-1 |q-1/k, it follows that
p-1 | (q-1)/k = (p^a-1)/k' = | (_p^a^×)^k' |
and therefore _p^×≤ (_p^a^×)^k' since (_p^a^×)^k' is a finite cyclic group. So for k' > 1, applying Theorems <ref> and <ref> to the connected components of (q,k) and noting the fact that k'=k(p^a-1)/(q-1), we see that the condensed Ricci curvature
(x,y) = k'/(p^a-1) · (2 + |∇_xy|) = k/(q-1) · (2 + |∇_xy|)
for any edge xy ∈ E.
In the special case that k'=1, we recall that the subgroup of nonzero 1-st powers in _p^a is simply the multiplicative group _p^a^×. Therefore, each connected component of the disconnected generalized Paley graph (q,k) is isomorphic to the complete graph (p^a,1) = K_p^a on p^a vertices. Thus, since the complete graph K_p^a=(V,E) on p^a vertices is regular of degree p^a-1, it follows for any edge xy ∈ E that ∇_xy = V ∖ ({x}∪{y}). Hence, |∇_xy| = p^a-2 and by Theorem <ref> we recover the formula
(x,y) = p^a/(p^a-1) = 1/(p^a-1) · (2 + |∇_xy|) = k/(q-1) · (2 + |∇_xy|)
for the condensed Ricci curvature of any edge xy ∈ E.
http://arxiv.org/abs/2409.02843v1 | 20240904161542 | Multivariate Second-Order $p$-Poincaré Inequalities | ["Tara Trauthwein"] | math.PR | ["math.PR", "60F05 (Primary) 60H07, 60G55, 60D05 (Secondary)"] |
Multivariate Second-Order p-Poincaré Inequalities
Tara Trauthwein (Department of Mathematics, University of Luxembourg and Department of Statistics, University of Oxford, tara.trauthwein@stats.ox.ac.uk). The author was supported by the Luxembourg National Research Fund (PRIDE17/1224660/GPS) and by the UK Engineering and Physical Sciences Research Council (EPSRC) grant (EP/T018445/1).
=================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In this work, we discuss new bounds for the normal approximation of multivariate Poisson functionals under minimal moment assumptions. Such bounds require one to estimate moments of so-called add-one costs of the functional. Previous works required the estimation of 4^th moments, while our result only requires (2+ϵ)-moments, based on recent improvements introduced by (Trauthwein 2022). As applications, we show quantitative CLTs for two multivariate functionals of the Gilbert, or random geometric, graph. These examples were out of range for previous methods.
Keywords: Central Limit Theorem; Gilbert Graph; Malliavin Calculus; Multivariate Central Limit Theorem; Poincaré Inequality; Poisson Process; Stein's Method; Stochastic Geometry.
Mathematics Subject Classification (2020): 60F05, 60H07, 60G55, 60D05
§ INTRODUCTION
The present paper establishes new distance bounds for multivariate Poisson functionals, allowing to derive quantitative Central Limit Theorems under minimal moment assumptions. It thus provides a multivariate counterpart to the improved univariate second-order Poincaré inequalities recently introduced in <cit.>. As such, the paper extends some of the results from <cit.> and <cit.>, providing comparable probabilistic inequalities but substantially reducing the moment conditions. The method used to achieve these minimal assumptions is to combine, as was done in <cit.>, Stein's and interpolation methods with Malliavin Calculus, and to make use of moment inequalities recently proposed in <cit.>. Applications include, but are not limited to, the study of random geometric objects such as spatial random graphs.
The bounds in <Ref> are given in terms of the so-called add-one cost operator. Given a Poisson measure η of intensity measure λ on the σ-finite measure space (𝕏,λ), let F be a measurable, real-valued function of η. For any x∈𝕏, we define the add-one cost operator evaluated at x by
D_xF = D_xF(η) := F(η + δ_x) - F(η),
where δ_x is the Dirac measure at x. This operator describes the change in the functional F when a point x is added to the measure η. Under additional assumptions on F, the operator corresponds to the Malliavin derivative of F at x. The definition can be iterated to give the second derivative
D_x,yF := D_y(D_xF).
Our main result <Ref> allows one to derive quantitative CLTs for vector-valued F=(F_1,…,F_m) by controlling the covariance matrix of F and (2+ϵ)-moments of the terms D_xF_i and D_x,yF_i for i=1,…,m. Previous multivariate bounds as in <cit.> typically asked one to uniformly bound moments of the order 4+ϵ.
This type of bound relying on the add-one cost operator is particularly useful for quantities exhibiting a type of `local dependence', more generally known as stabilization. See e.g. <cit.> for further details on this topic. On a heuristic level, the first add-one cost D_x F quantifies the amount of local change induced when adding the point x, while the second add-one cost D_x,yF controls the dependence of points x and y which are further apart.
<Ref> provides bounds for distances of the type
d(F,N) = sup_h ∈ℋ |𝔼 h(F) - 𝔼 h(N)|
where F is a multivariate Poisson functional and N a multivariate Gaussian with covariance matrix C. The distances we treat are the d_2 and the d_3 distances, where the test functions h are chosen to be 𝒞^2, resp. 𝒞^3, with boundedness conditions (see (<ref>) and (<ref>) for precise definitions). The bound of the d_2 distance uses the multivariate Stein method, which comes at the detriment of needing the matrix C to be positive-definite. The bound for the d_3 distance uses an interpolation method to circumvent this problem, but it comes at the cost of needing a higher degree of regularity in the test functions. In <cit.>, the authors also provide a bound in the convex distance, where test functions are indicator functions of convex sets. Such a bound under minimal moment assumptions is out of reach for now, as it utilizes an involved recursive estimate which introduces new terms needing to be bounded by moments of the add-one costs. These terms cannot be treated with the currently known methods of reducing moment conditions.
The passage from the bounds achieved by Stein's method (resp. the interpolation method) to a bound involving add-one cost operators is achieved using Malliavin Calculus. The first combination of the two methods dates back to <cit.> in a Gaussian context, and to <cit.> in a Poisson context, and has since seen countless applications. The first appearance of a bound relying solely on moments of add-one costs was in the seminal paper <cit.>.
We study two applications in this paper in <Ref>, both relating to the random geometric graph, or Gilbert graph, the first study of which dates back to <cit.>. In this model, two points are connected if their distance is less than some threshold parameter. We study functionals of the type
F = ∑_edges e in G_W |e|^α,
where G_W is the Gilbert graph restricted to vertices lying in some convex body W and |e| denotes the length of the edge e. We look at two types of vectors, one where the exponent α varies over different components, with fixed set W, and the other where the set W varies for different components and α is fixed. In both settings we derive quantitative CLTs when we let the intensity of the underlying Poisson process grow to infinity. The setting with varying exponents has also been studied in <cit.>, where a qualitative CLT has been derived in <cit.> and the limit of the covariance matrix was given in <cit.>. We provide a quantitative analogue to their result. In the setting with varying underlying sets, even the qualitative result is new.
Plan of the paper. We present our main results in Section <ref> and its applications in Section <ref>. The proof of <Ref> is given in Section <ref> and the ones for <Ref> in Section <ref>. We introduce the necessary Malliavin operators and results about Malliavin Calculus and Poisson measures in Appendix <ref>.
Acknowledgment. I would like to thank Giovanni Peccati and Gesine Reinert for their support and helpful comments on this project.
§ MAIN RESULT
We now present our main results. For a function ϕ:^m → which is k times continuously differentiable, denote by ϕk (resp. ϕ'_∞ if k=1) the supremum of the absolute values of all k^th partial derivatives, i.e.
ϕk := max_1 ≤ i_1,…,i_k ≤ msup_x ∈^m|∂^k ϕ/∂ x_i_1…∂ x_i_k(x)|.
For a vector x ∈^m, we denote by x the Euclidean norm of x.
We can now introduce the distances to be used in this context. Define the following two sets of functions:
* Let ℋ^(2)_m be the set of all 𝒞^2 functions h:^m → such that |h(x)-h(y)| ≤x-y for all x,y∈^m and sup_x∈^m h(x)_op≤ 1;
* let ℋ^(3)_m be the set of all 𝒞^3 functions h:^m→ such that h2 and h3 are bounded by 1.
Here h(x) denotes the Hessian matrix of h at the point x and ._∞ denotes the operator norm.
Let X,Y be two m-dimensional random vectors. Define the d_2 and d_3 distances between X and Y as follows:
d_2(X,Y) := sup_h ∈ℋ^(2)_m| h(X) - h(Y)|, if X,Y < ∞
d_3(X,Y) := sup_h ∈ℋ^(3)_m| h(X) - h(Y)|, if X^2,Y^2 < ∞.
Recall the definition of the add-one cost from (<ref>). We say that a measurable functional F of η is in L^2(_η) if F^2 <∞ and that it is in if in addition
∫_ (_x F)^2 λ(dx) < ∞.
With these definitions, we can state the main result of this paper.
Formally define the terms ζ_1^(p),...,ζ_4^(q) by
ζ_1^(p) := ∑_i,j=1^m |C_ij - (F_i,F_j)|
ζ_2^(p) := 2^2/p-1∑_i,j=1^m (∫_(∫_[|_x,y F_i|^2p]^1/2p[|_x,y F_j|^2p]^1/2pλ(dx))^pλ(dy))^1/p
ζ_3^(p) := 2^2/p∑_i,j=1^m (∫_(∫_[|_x F_i|^2p]^1/2p[|_x,y F_j|^2p]^1/2pλ(dx))^pλ(dy))^1/p
ζ_4^(q) := m^q-1∑_i,j=1^m ∫_[|_x F_i|^q+1]^1/(q+1)[|_x F_j|^q+1]^1-1/(q+1)λ(dx).
Then the following statement holds.
Let (,λ) be a σ-finite measure space and η be a (,λ)-Poisson measure. Let m ≥ 1 and let F=(F_1,...,F_m) be an ^m-valued random vector such that for 1 ≤ i ≤ m, we have F_i ∈ L^2(_η) ∩ and F_i=0. Let C=(C_ij)_1 ≤ i,j ≤ m be a symmetric positive-semidefinite matrix and let X ∼𝒩(0,C). Then for all p,q ∈ [1,2],
d_3(F,X) ≤12(ζ_1^(p) + ζ_2^(p) + ζ_3^(p)) + ζ_4^(q).
If moreover the matrix C is positive-definite, then for all p,q ∈ [1,2],
d_2(F,X) ≤C^-1_opC_op^1/2 (ζ_1^(p)+ζ_2^(p)+ζ_3^(p)) + (2C^-1_opC_op^1/2∨√(2π)8C^-1_op^3/2C_op) ζ_4^(q).
The proof of these bounds is located in <Ref>. It relies on the work <cit.>, where the bound on the d_2 distance uses the Malliavin-Stein method, whereas the one for the d_3 distance uses an interpolation technique taken from the context of spin glasses (see <cit.>). The advantage of using an interpolation technique is that the limiting covariance matrix C does not necessarily need to be positive-definite, but it comes at the cost of a higher regularity requirement for the test functions h. Building on the results of <cit.>, we adapt the univariate improvement of <cit.> to a multivariate setting and make use of the p-Poincaré inequality introduced in <cit.> (see also <Ref> in the Appendix).
§ APPLICATIONS
In this section, we study two multivariate functionals of the Gilbert graph. Both of these functionals consist of sums of power-weighted edge-lengths — in the first functional, we vary the powers of the edge-lengths, and in the second functional, we restrict the graph to different domains.
We start by setting the general framework. Let t≥ 1 and let (ϵ_t)_t>0 be a sequence of positive real numbers such that ϵ_t → 0 as t →∞. Define η^t to be a Poisson measure on ^d of intensity t dx. For any convex body (i.e. a convex compact set with non-empty interior) W ⊂^d and any exponent α∈, define
(W) := 1/2∑_x,y ∈η^t_|W0 < x-y < ϵ_tx-y^α,
where η^t_|W denotes the restriction of η to the set W. The quantity (W) is the sum of all edge-lengths to the power α in the Gilbert graph with parameter ϵ_t>0 and points in W.
§.§ Varying the Exponents
We fix W ⊂^d to be a convex body and consider real numbers α_1,...,α_m such that α_i+α_j>-d for all i,j ∈{1,...,m}. For every 1 ≤ i ≤ m, define i(W)=i as in (<ref>) and set
i := (tϵ_t^α_i + d/2∨ t^3/2ϵ_t^α_i+d)^-1(i-i).
Set furthermore
σ_ij^(1) := dκ_d/2|α_i+α_j+d| and σ_ij^(2) := d^2κ_d^2/(α_i+d)(α_j+d),
where κ_d denotes the volume of a unit ball in ^d, and define the matrix C=(C_ij)_1 ≤ i,j ≤ m by C_ij = |W|c_ij, where
c_ij :=
σ_ij^(1) if lim_t→∞ tϵ_t^d = 0
(σ_ij^(1) + θσ_ij^(2)) if lim_t→∞ tϵ_t^d = θ≤ 1
(1θσ_ij^(1) + σ_ij^(2)) if lim_t→∞ tϵ_t^d = θ > 1
σ_ij^(2) if lim_t→∞ tϵ_t^d = ∞,
and |W| denotes the volume of W. Defining the vector L̃_t as
L̃_t := (1,...,m),
we recall that a CLT for L̃_t has been shown in <cit.>, as well as convergence of the covariance matrix of L̃_t to C in <cit.>. The matrix C is positive-definite in the sparse and thermodynamic regime (i.e. if tϵ_t^d → 0 or tϵ_t^d → c>0), while it is singular in the dense regime (i.e. if tϵ_t^d →∞), see <cit.>.
Define also
β_ij^(t) := σ_ij^(1) + σ_ij^(2)tϵ_t^d/1 ∨ tϵ_t^d
and note that β_ij^(t)→ c_ij as t →∞.
Assume that t^2ϵ_t^d →∞ as t →∞. Let X∼𝒩(0,C) be a centred Gaussian with covariance matrix C. Then
* if α_1,...,α_m>-d/4, there is a constant c_1>0 such that for all t≥ 1 large enough
d_3(L̃_t, X) ≤ c_1 (ϵ_t + max_1 ≤ i,j ≤ m|β_ij^(t)-c_ij| + (t^-1/2∨ (t^2ϵ_t^d)^-1/2)).
* if -d/2<min{α_1,...,α_m}≤ -d/4, then for any 1<p<-d/2min{α_1,...,α_m}^-1, there is a constant c_2>0 such that for all t≥ 1 large enough
d_3(L̃_t, X) ≤ c_2 (ϵ_t + max_1 ≤ i,j ≤ m|β_ij^(t)-c_ij| + (t^-1+1/p∨ (t^2ϵ_t^d)^-1+1/p)).
If lim_t →∞ tϵ_t^d < ∞, then the bounds (<ref>) and (<ref>) apply to d_2(L̃_t,X) as well for different constants c_1,c_2>0.
The proof of this theorem can be found in <Ref>. The bounds given in <Ref> vary according to the limit of tϵ_t^d. We give a precise discussion of the bounds (<ref>) in Table <ref>. The bounds for (<ref>) follow when setting p=2.
§.§ Varying the Domains
Fix α>-d/2 to be a real number and let W_1,...,W_m ⊂^d be convex bodies. For every 1 ≤ i ≤ m, define F^(i)_t := (W_i) using the definition (<ref>) and set
F̃_t^(i) := t^-1ϵ_t^-α-d/2(12σ_1 + σ_2 tϵ_t^d)^-1/2(F^(i)_t - F^(i)_t)
and
F̃_t := (F̃_t^(1),...,F̃_t^(m)),
where
σ_1 := dκ_d/d+2α and σ_2 := (dκ_d/α+d)^2.
Define the matrix C=(C_ij)_1 ≤ i,j≤ m by
C_ij := |W_i ∩ W_j|.
Under the above conditions, the matrix C is the asymptotic covariance matrix of the m-dimensional random vector F̃_t.
Moreover, assume that t^2ϵ_t^d →∞ as t →∞. Let X∼𝒩(0,C) be a centred Gaussian with covariance matrix C. Then
* if α>-d/4, there is a constant c_1>0 such that for all t ≥ 1 large enough,
d_3(F̃_t, X) ≤ c_1 (ϵ_t + (t^-1/2∨ (t^2ϵ_t^d)^-1/2)).
* if -d/2<α≤ -d/4, then for any 1<p<-d/2α, there is a constant c_2>0 such that for all t ≥ 1 large enough,
d_3(F̃_t, X) ≤ c_2 (ϵ_t + (t^-1+1/p∨ (t^2ϵ_t^d)^-1+1/p)).
If the matrix C is positive definite, then the bounds (<ref>) and (<ref>) apply to d_2(F̃_t,X) as well for different constants c_1,c_2>0.
The speed of convergence varies according to the asymptotic behaviour of tϵ_t^d. In particular, one has
t^-1+1/p∨ (t^2ϵ_t^d)^-1+1/p =
(t^2ϵ_t^d)^-1+1/p if tϵ_t^d → 0 (sparse regime)
t^-1+1/p if tϵ_t^d →θ>0 (thermodynamic regime)
t^-1+1/p if tϵ_t^d →∞ (dense regime).
The proof of <Ref> can be found in <Ref>.
The question whether C is positive-definite is not entirely straightforward, but some things can be said. For a vector x∈^m, we have
x^TCx = ∑_i,j=1^m x_ix_j ∫_^d_W_i(z) _W_j(z) dz = ∫_^d(∑_i=1^m x_i _W_i(z) )^2 dz.
Since W_1,…,W_m form a collection of convex bodies, it is clearly a necessary and sufficient condition that the family of indicators _W_1(z),...,_W_m(z) is linearly independent in L^2(^d), which translates (to some extent) to none of the sets being obtainable from the other sets via certain combinations of unions, intersections and complements, disregarding sets of measure zero.
A simple sufficient condition for positive definiteness is that for each 1 ≤ i ≤ m-1, we have
W_i ∖⋃_j>i W_j ≠∅,
i.e. the sets W_1,...,W_m can be ordered in a sequence such that each set has a point not included in subsequent sets. This implies that the family of indicators _W_1(z),...,_W_m(z) is linearly independent, and it also entails positive-definiteness of C. Indeed, let z_1 ∈ W_1 ∖⋃_j>1 W_j. Since ⋃_j>1 W_j is closed, there is an open set U ⊂(⋃_j>1 W_j)^c such that z_1 ∈ U. As W_1 is a convex body, the intersection V of the interior of W_1 with U is open and non-empty. Assume that x^TCx=0, then the function f=x_1_W_1 + … + x_m_W_m is zero almost everywhere. In particular, it is constant f ≡ x_1 on V ⊂ W_1 ∖⋃_j>1 W_j, hence we must have x_1=0. One can now iterate this argument to show that x_2=…=x_m=0.
If the collection W_1,…,W_m consists of distinct balls, we can prove positive-definiteness of C using condition (<ref>). Indeed, let B_1,…,B_m be a collection of closed distinct balls in ^d. The union ⋃_i≥ 1 B_i is closed and bounded, and admits thus at least one point z whose first coordinate achieves the maximum over all first coordinates of points in ⋃_i≥ 1 B_i. Let B_i_1,…,B_i_n be the balls tangent to the hyperplane x_1=z_1, sorted by decreasing radius (i.e. B_i_1 is the largest ball among B_i_1,…,B_i_n). There is an open ball B(z,ϵ) ⊂(⋃_j ≠ i_1,…,i_nB_j)^c, and the intersection B(z,ϵ) ∩ B_i_1 contains a point not included in B_i_2,…,B_i_n. The ball B_i_1 can thus be made the first ball in the sequence. This construction can be iterated to satisfy condition (<ref>). See Figure <ref> for an illustration.
The condition (<ref>) is however not necessary for the matrix C to be positive definite. Indeed, consider a superposition of the sets W_1,…,W_4 depicted in Figure <ref>. One can show easily that if x_1_W_1 + … + x_4_W_4 =0 almost everywhere, then x_1=…=x_4=0, hence the indicators are linearly independent in L^2(^d) and the resulting matrix is positive-definite. However, these sets do not fulfill condition (<ref>).
§ PROOF OF <REF>
The proof of <Ref> uses both an interpolation technique and the multivariate Stein method. For a positive-definite symmetric matrix C=(C_ij)_1 ≤ i,j ≤ m and a function g:^m →, the multivariate Stein equation is given by
g(x) - g(X) = ⟨ x , ∇ f(x) ⟩_^m - ⟨ C , f(x) ⟩_H.S.,
for x ∈^m and X ∼𝒩(0,C). The inner product ⟨ .,. ⟩_H.S. is the Hilbert-Schmidt inner product defined as ⟨ A,B ⟩_H.S.:= Tr(AB^T) for real m × m matrices A and B and where Tr(.) denotes the trace function. If g∈𝒞^2(^m) has bounded first- and second-order partial derivatives, then a solution to (<ref>) is given by
f_g(x) := ∫_0^1 1/2t[g(√(t)x + √(1-t)X) - g(X)] dt.
The solution f_g satisfies the following bounds:
f_g2≤C^-1_opC_op^1/2g'_∞
and f_g3≤√(2π)/4C^-1_op^3/2C_opsup_x ∈^m g(x)_op,
where g(x) denotes the Hessian matrix of the function g evaluated at x and ._op denotes the operator norm.
These results can be found in <cit.>.
Before we can give the proof of <Ref>, we also need some technical estimates which improve the corresponding estimates given in <cit.>. The first lemma is an extension of Lemma 3.1 in <cit.>: contrary to what was done in <cit.>, we bound the rest term R_x in the development below by a q^th power of the add-one costs, with q ∈ [1,2]. Lemma 3.1 in <cit.> corresponds to the choice of q=2.
Let F=(F_1,...,F_m) for some m ≥ 1, where F_i ∈ L^2(_η) ∩ and F_i=0 for 1 ≤ i ≤ m. Then for all ϕ∈𝒞^2(^m) with ϕ'_∞,ϕ2<∞, it holds that for a.e. x∈ and all q∈[1,2],
_x ϕ(F) = ∑_i=1^m∂ϕ/∂ x_i(F) _x F_i + R_x,
where
|R_x| ≤(2ϕ'_∞∨12ϕ2) (∑_i=1^m |_x F_i|)^q.
One has that
_x ϕ(F) = ϕ(F + _xF) - ϕ(F)
= ∫_0^1 ∑_i=1^m∂ϕ/∂ x_i (F+t_x F_i) ·_x F_i dt
= ∑_i=1^m∂ϕ/∂ x_i (F) ·_x F_i + R_x,
where
R_x = ∑_i=1^m∫_0^1 (∂ϕ/∂ x_i(F+t_xF) - ∂ϕ/∂ x_i(F)) ·_x F_i dt.
Note that
|∂ϕ/∂ x_i(F+t_xF) - ∂ϕ/∂ x_i(F)| ≤(2 ϕ'_∞) ∧(tϕ2_xF)
by the mean value theorem. Hence
|R_x| ≤(2 ϕ'_∞∑_i=1^m |_x F_i|) ∧(12ϕ2_xF∑_i=1^m |_xF_i|)
≤(2ϕ'_∞∨1/2ϕ2) [∑_i=1^m |_xF_i| ∧(∑_i=1^m |_xF_i|)^2]
≤(2ϕ'_∞∨1/2ϕ2) (∑_i=1^m |_xF_i|)^q.
The next lemma is an improvement of Lemma 4.1 in <cit.>, where the improvement comes from the fact that we use <Ref> in the final step. For the definition of the operator , see (<ref>).
Let m ≥ 1 and for 0 ≤ i ≤ m, let F_i ∈ L^2(_η) ∩ and assume F_i=0. Then for all g ∈𝒞^2(^m) such that g'_∞,g2<∞, we have
g(F_1,...,F_m)F_0 = ∑_i=1^m∂ g/∂ x_i(F_1,...,F_m) ⟨ F_i, - F_0 ⟩_L^2(λ) + R,
where for all q∈[1,2],
|R| ≤(2g'_∞∨12g2) ∫_(∑_k=1^m |_xF_k|)^q |_x F_0| λ(dx).
Using integration by parts (see <Ref>) as detailed in the proof of <cit.>, we derive that
g(F_1,...,F_m)F_0 = ⟨ g(F_1,...,F_m),- F_0⟩_L^2(λ)
= ∑_i=1^m∂ g/∂ x_i (F_1,...,F_m) ⟨ F_i, - F_0 ⟩_L^2(λ) + ⟨ R_*,- F_0 ⟩_L^2(λ),
where R_* is a rest term satisfying the bound in <Ref>. The claimed result follows with R:= ⟨ R_*,- F_0 ⟩_L^2(λ) .
The next proposition is an extension of Theorems 3.3 and 4.2 in <cit.> and exploits many of the arguments rehearsed in the proofs of these theorems, which are combined with the content of <Ref>.
Let m ≥ 1 and let F=(F_1,...,F_m) be an ^m-valued random vector such that for 1≤ i ≤ m, we have F_i ∈ L^2(_η) ∩ and F_i=0. Let C=(C_ij)_1≤ i,j ≤ m be a symmetric positive-semidefinite matrix and let X∼𝒩(0,C). Then for all q ∈ [1,2],
d_3(F,X) ≤12∑_i,j=1^m[|C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|]
+ ∑_i=1^m∫_[(∑_j=1^m |_x F_j|)^q |_x F_i|] λ(dx).
If moreover the matrix C is positive-definite, then for all q ∈ [1,2],
d_2(F,X) ≤ C^-1_opC_op^1/2∑_i,j=1^m[|C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|]
+ (2C^-1_opC_op^1/2∨√(2π)/8C^-1_op^3/2C_op)
∑_i=1^m∫_[(∑_j=1^m |_x F_j|)^q |_x F_i|] λ(dx).
To show the bound on the d_3 distance, we proceed as in the proof of Theorem 4.2 in <cit.>, but we replace the use of Lemma 4.1 therein with our <Ref>. Indeed, we only need to show that
|[ϕ(F)] - [ϕ(X)]| ≤12ϕ2∑_i,j=1^m[|C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|]
+ 12(2ϕ2∨12ϕ3) ∑_i=1^m∫_[(∑_j=1^m |_x F_j|)^q |_x F_i|] λ(dx)
for any ϕ∈𝒞^3(^m) with bounded second and third partial derivatives. Defining
ψ(t):= ϕ(√(1-t)F+√(t)X),
it is clear that
|ϕ(F) - ϕ(X)| ≤sup_t ∈ (0,1) |ψ'(t)|.
Defining moreover
ϕ_i^t,b(x) := ∂ϕ/∂ x_i(√(1-t)x+√(t)b)
for any vector b ∈^m, it is shown in the proof of <cit.> that ψ'(t) can be written as
ψ'(t)=1/2√(t)𝒜 - 1/2√(1-t)ℬ,
where
𝒜 = √(t)∑_i,j=1^m C_ij∂^2ϕ/∂ x_i ∂ x_j (√(1-t)F + √(t)X)
and
ℬ = ∑_i=1^m∂ϕ/∂ x_i (√(1-t)F + √(t)X) F_i.
Conditioning on X in ℬ, one can apply <Ref> and deduce that
ℬ = √(1-t)∑_i,j=1^m∂^2 ϕ/∂ x_i ∂ x_j (√(1-t)F + √(t)X) ⟨ F_i, - F_j ⟩ + ∑_i=1^m[R_X^i],
where R_X^i satisfies
|R_X^i| ≤(12(ϕ_i^t,X)^(2)_∞∨ 2 (ϕ_i^t,X)'_∞) ∫_(∑_j=1^m |_x F_j|)^q |_x F_i| λ(dx).
It suffices now to observe that
|∂^2 ϕ/∂ x_i ∂ x_j (√(1-t)F + √(t)X)| ≤ϕ2
and |∂ϕ_i^t,b/∂ x_j (x)| = √(1-t)|∂^2 ϕ/∂ x_i ∂ x_j (√(1-t)x + √(t)b)| ≤√(1-t)ϕ2
and |∂^2 ϕ_i^t,b/∂ x_j∂ x_k (x)| = (1-t) |∂^3 ϕ/∂ x_i ∂ x_j ∂ x_k (√(1-t)x + √(t)b)| ≤ (1-t) ϕ3
to deduce that
sup_t ∈ (0,1) |ψ'(t)| ≤ 12ϕ2∑_i,j=1^m[|C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|]
+ sup_t ∈ (0,1)1/2 √(1-t)(2√(1-t)ϕ2∨12 (1-t) ϕ3)
∑_i=1^m∫_[(∑_j=1^m |_x F_j|)^q |_x F_i|] λ(dx).
The bound for the d_3 distance now follows.
For the d_2 distance, as argued in the proof of Theorem 3.3 in <cit.>, it is enough to show that
|[g(F) - g(X)]| ≤ A C^-1_opC_op^1/2∑_i,j=1^m |C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|
+ (2A C^-1_opC_op^1/2∨√(2π)/8B C^-1_op^3/2C_op)
∑_i=1^m∫_[(∑_j=1^m |_x F_j|)^q |_x F_i|] λ(dx),
for smooth functions g ∈𝒞^∞(^m) whose first- and second-order derivatives are bounded in such a way that g'_∞≤ A and sup_x ∈^m g(x)_op≤ B. We now proceed as in the proof of Theorem 3.3 in <cit.> to deduce that
g(F) - g(X) = ∑_i,j=1^m[C_ij∂^2f_g/∂ x_i ∂ x_j (F)] - ∑_k=1^m[⟨(∂ f_g/∂ x_k(F)),- F_k ⟩_L^2(λ)],
where f_g is the canonical solution (<ref>) to the multivariate Stein equation (<ref>). Define ϕ_k(x) := ∂ f_g/∂ x_k(x). By <Ref>, we have that
_x ϕ_k(F) = ∑_i=1^m∂ϕ_k/∂ x_i(F) ·_x F_i + R_x,k,
where
|R_x,k| ≤(2 ϕ_k'_∞∨12ϕ_k^(2)_∞) (∑_i=1^m |_x F_i|)^q.
It follows that
| g(F) - g(X)| ≤f_g^(2)_∞∑_i,j=1^m |C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|
+ sup_1≤ k ≤ m(2 ϕ_k'_∞∨12ϕ_k^(2)_∞) ∑_i=1^m∫_[(∑_j=1^m |_x F_j|)^q |_x F_i|] λ(dx).
To see that (<ref>) holds, it suffices now to see that by (<ref>) and (<ref>), we have
f_g2≤C^-1_opC_op^1/2g'_∞≤ AC^-1_opC_op^1/2
ϕ_k'_∞≤f_g2≤ AC^-1_opC_op^1/2
ϕ2≤f_g3≤√(2π)/4C^-1_op^3/2C_opsup_x ∈^m g(x)_op≤√(2π)/4B C^-1_op^3/2C_op.
This concludes the proof.
Using <Ref>, it suffices to show that
∑_i,j=1^m[|C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|] ≤ζ_1^(p) + ζ_2^(p) + ζ_3^(p)
and
∑_i=1^m∫_[(∑_j=1^m |_x F_j|)^q |_x F_i|] λ(dx) ≤ζ_4^(q).
Fix i,j∈{1,...,m}. By the triangle inequality, we have that
[|C_ij - ⟨ F_i, - F_j ⟩_L^2(λ)|]
≤[|C_ij - (F_i,F_j)|] + [|(F_i,F_j) - ⟨ F_i, - F_j ⟩_L^2(λ)|].
Define G_ij:= (F_i,F_j) - ⟨ F_i, - F_j ⟩_L^2(λ). Then since F_i,F_j ∈, we have G_ij∈ L^1(_η) and by <Ref>, one has G_ij=0.
Using the p-Poincaré inequality (<ref>) given in <Ref>, we deduce that for any p ∈ [1,2],
|G_ij| ≤[|G_ij|^p]^1/p≤ 2^2/p-1[∫_ |_xG_ij|^pλ(dx)]^1/p.
Now note that
_x G_ij = _x ∫_(_y F_i) ·(-_y F_j) λ(dy),
and by the argument in the proof of <cit.>, we have that
|_x ∫_(_y F_i) ·(-_y F_j) λ(dy)| ≤∫_|_x((_y F_i) ·(-_y F_j))| λ(dy).
Using Minkowski's integral inequality, it follows that
[|(F_i,F_j) - ⟨ F_i, - F_j ⟩_L^2(λ)|]
≤(∫_(∫_[|_x((_y F_i) ·(-_y F_j))|^p]^1/pλ(dy))^pλ(dx))^1/p.
By (<ref>), we have that
_x((_y F_i) ·(-_y F_j))
= _x,yF_i·(-_y F_j) + _y F_i·(-_x,y F_j) + _x,yF_i·(-_x,y F_j).
Using Minkowski's norm inequality, the Cauchy-Schwarz inequality and <Ref>, one sees that
[|_x((_y F_i) ·(-_y F_j))|^p]^1/p≤ [|_x,yF_i|^2p]^1/2p [|_yF_j|^2p]^1/2p
+ [|_yF_i|^2p]^1/2p [|_x,yF_j|^2p]^1/2p
+ [|_x,yF_i|^2p]^1/2p [|_x,yF_j|^2p]^1/2p.
Combining (<ref>), (<ref>) and (<ref>) yields (<ref>).
To show (<ref>), note that
(∑_j=1^m |_x F_j|)^q ≤ m^q-1∑_j=1^m |_x F_j|^q.
The bound (<ref>) now follows by Hölder inequality and <Ref>.
§ PROOFS OF <REF>
Throughout this section, we denote by c a positive absolute constant whose value can change from line to line. We will need some technical bounds derived in <cit.> and presented in the next lemma.
Let α>-d/2 and r≥ 1 such that d+rα>0. Then (W) ∈ and there is a constant c_0>0 such that for all x,y ∈ W and t>0,
[(_x )^r]^1/r≤ c_0 ϵ_t^α (tϵ_t^d)^1/r (1 ∨ t ϵ_t^d)^1-1/r
and _x,y = 0<x-y<ϵ_tx-y^α.
Moreover, we will need some properties of convex bodies. Let W ⊂^d be a convex body and define its inner parallel set W_ϵ by
W_ϵ := {x ∈ W : dist(x,∂ W)>ϵ},
for ϵ>0 and where dist denotes the Euclidean distance and ∂ W the boundary of W. Since W is a convex set with non-empty interior, the set W_ϵ is non-empty for ϵ>0 small enough. Combining <cit.> with Steiner's formula (cf. <cit.>), one sees that there is a constant γ_W>0 such that
|W ∖ W_ϵ| ≤γ_W ϵ.
We can now start with the proof of <Ref>.
It suffices to bound the terms ζ_1^(p),...,ζ_4^(q) of <Ref>. We start by giving a bound for ζ_1^(p). Note that it has been shown in <cit.> that
|W| - γ_W ϵ_t < (i,j)/σ_ij^(1)t^2ϵ_t^α_i+α_j+d + σ_ij^(2)t^3 ϵ_t^α_i+α_j+2d≤ |W|,
where γ_W is a constant depending on W such that (<ref>) holds.
This implies that
β_ij^(t)(|W|-γ_Wϵ_t) < (i,j) ≤β_ij^(t) |W|,
where
β_ij^(t) = σ_ij^(1)t^2ϵ_t^α_i+α_j+d + σ_ij^(2)t^3 ϵ_t^α_i+α_j+2d/(tϵ_t^α_i+d/2∨ t^3/2ϵ_t^α_i+d)(tϵ_t^α_j+d/2∨ t^3/2ϵ_t^α_j+d) = σ_ij^(1) + σ_ij^(2) tϵ_t^d/1 ∨ tϵ_t^d,
as defined in (<ref>). Hence we have
|(i,j) - C_ij| ≤β_ij^(t)γ_W ϵ_t + ||W|β_ij^(t)-C_ij|
and thus
ζ_1^(p)≤∑_i,j=1^m(β_ij^(t)γ_W ϵ_t + ||W|β_ij^(t)-C_ij|) ≤ c (ϵ_t + max_1 ≤ i,j ≤ m|β_ij^(t) - c_ij|)
for some constant c>0.
Next, we bound ζ_2^(p). For this we use the expressions given in <Ref>. We deduce from (<ref>) that
ζ_2^(p) = 2^2/p-1∑_i,j=1^m (t ϵ_t^α_i+d/2∨ t^3/2ϵ_t^α_i+d)^-1(t ϵ_t^α_j+d/2∨ t^3/2ϵ_t^α_j+d)^-1
(∫_W (∫_W |x-y|<ϵ_t |x-y|^α_i + α_jtdx)^ptdy)^1/p.
Note that the inner integral is upper bounded by
dκ_d/d+α_i+α_j tϵ_t^d+α_i+α_j,
and hence we deduce, after simplification,
ζ_2^(p)≤ 2^2/p-1 |W|^1/p(∑_i,j=1^m dκ_d/d+α_i+α_j) t^-1+1/p(1 ∨ tϵ_t^d)^-1.
For ζ_3^(p), after plugging in the bounds from <Ref>, we get
ζ_3^(p)≤ 2^2/pc ∑_i,j=1^m (t ϵ_t^α_i+d/2∨ t^3/2ϵ_t^α_i+d)^-1(t ϵ_t^α_j+d/2∨ t^3/2ϵ_t^α_j+d)^-1
(∫_W (∫_W ϵ_t^α_i (tϵ_t^d)^1/(2p)(1 ∨ tϵ_t^d)^1-1/(2p)|x-y|<ϵ_t |x-y|^α_jtdx)^ptdy)^1/p.
After simplification, this bound yields
ζ_3^(p)≤ 2^2/pc|W|^1/p m(∑_i=1^m dκ_d/d+α_i) t^-1+1/p(1 ∧ tϵ_t^d)^1/(2p).
For ζ_4^(q), we plug in the first bound from <Ref> and deduce
ζ_4^(q)≤ m^q-1c^2 ∑_i,j=1^m (t ϵ_t^α_i+d/2∨ t^3/2ϵ_t^α_i+d)^-1(t ϵ_t^α_j+d/2∨ t^3/2ϵ_t^α_j+d)^-q
∫_W ϵ_t^α_i (tϵ_t^d)^1/(q+1) (1 ∨ tϵ_t^d)^1-1/(q+1)ϵ_t^qα_j (tϵ_t^d)^q/(q+1) (1 ∨ tϵ_t^d)^q-q/(q+1) tdx,
which, after simplification, yields
ζ_4^(q)≤ m^q c^2 |W| t^(1-q)/2 (1 ∧ tϵ_t^d)^(1-q)/2.
If we take q=3-2/p, then q ∈ (1,2] and (q+1)α_i + d>2pα_i+d>0 and we get
ζ_2^(p) + ζ_3^(p) + ζ_4^(q)≤ c (t^-1+1/p∨ (t^2ϵ_t^d)^-1+1/p),
which concludes the proof.
As a first step, we compute the asymptotic covariance matrix of the vector F̃_t. Define the functions
h_i(x,y) := |x-y|<ϵ_tx ∈ W_iy ∈ W_i |x-y|^α, i=1,…,m.
Then it holds that
F_t^(i) = 1/2∬_(^d)^2 h_i(x,y) (η^t)^(2)(dx,dy),
where (η^t)^(2) denotes the product measure η^t ⊗η^t. We deduce that for i,j ∈{1,…,m},
(F_t^(i),F_t^(j)) = 1/4[∬_(^d)^2 h_i(x,y) (η^t)^(2)(dx,dy) ∬_(^d)^2 h_j(z,w) (η^t)^(2)(dz,dw)]
- 1/4[∬_(^d)^2 h_i(x,y) (η^t)^(2)(dx,dy)][∬_(^d)^2 h_j(z,w) (η^t)^(2)(dz,dw)].
Since η^t is a point measure, in the first term on the RHS of (<ref>), it is possible to have x=z or x=w or similar equalities, which constitute the diagonals of the sets we are summing over. Using the Mecke formula (<ref>) and isolating these diagonals, one sees that
(F_t^(i),F_t^(j)) = 1/4∬_(^d)^2 h_i(x,y) t^2 dxdy ·∬_(^d)^2 h_j(z,w) t^2 dzdw
=+ ∭_(^d)^3 h_i(x,y)h_j(y,w) t^3 dxdydw
=+ 1/2∬_(^d)^2 h_i(x,y)h_j(x,y) t^2 dxdy
=-1/4∬_(^d)^2 h_i(x,y) t^2 dxdy ·∬_(^d)^2 h_j(z,w) t^2 dzdw.
The first and the last term cancel, thus we are left with
(F_t^(i),F_t^(j)) = ∭_(^d)^3 h_i(x,y)h_j(y,w) t^3 dxdydw + 1/2∬_(^d)^2 h_i(x,y)h_j(x,y) t^2 dxdy.
We start by computing the first term on the RHS of (<ref>). We have
∭_(^d)^3 h_i(x,y)h_j(y,w) t^3 dxdydw
= t^3 ∫_W_i ∩ W_jdy∫_W_idx∫_W_jdw x ∈ B(y,ϵ_t)w ∈ B(y,ϵ_t) |x-y|^α |w-y|^α.
Recall from (<ref>) the definition of the (possibly empty) inner parallel set
W_ϵ := {z ∈ W : dist(z,∂ W)>ϵ},
for ϵ>0. Recall also from (<ref>) that there is a constant γ_W>0 such that
|W ∖ W_ϵ| ≤γ_W ϵ.
We can now rewrite (<ref>) as
t^3 ∭_(^d)^3 h_i(x,y)h_j(y,w) dxdydw = t^3 ∫_(W_i ∩ W_j)_ϵ_tdy (∫_B(y,ϵ_t)dx |x-y|^α)^2 + R_t,
where R_t is given by
R_t := t^3 ∫_(W_i ∩ W_j)∖ (W_i ∩ W_j)_ϵ_tdy∫_W_idx∫_W_jdw x ∈ B(y,ϵ_t)w ∈ B(y,ϵ_t) |x-y|^α |w-y|^α.
The first term on the RHS of (<ref>) is given by
t^3∫_(W_i ∩ W_j)_ϵ_tdy (∫_B(y,ϵ_t)dx |x-y|^α)^2 = |(W_i ∩ W_j)_ϵ_t| (dκ_d/d+α)^2 t^3ϵ_t^2d+2α.
Using (<ref>), one sees that
0 ≤ R_t ≤ t^3|(W_i ∩ W_j)∖ (W_i ∩ W_j)_ϵ_t|(∫_B(y,ϵ_t)dx |x-y|^α)^2
≤γ_W_i ∩ W_jt^3ϵ_t^1+2d+2α(dκ_d/d+α)^2.
Combining (<ref>), (<ref>) and (<ref>) with (<ref>), one sees that
|t^3∭_(^d)^3 h_i(x,y)h_j(y,w) dxdydw - |W_i ∩ W_j| (dκ_d/d+α)^2 t^3ϵ_t^2d+2α|
≤γ_W_i ∩ W_jt^3ϵ_t^1+2d+2α(dκ_d/d+α)^2.
For the second term in (<ref>), we proceed similarly. We have
12t^2∬_(^d)^2 h_i(x,y)h_j(x,y) dxdy = 12t^2∬_W_i∩ W_j|x-y|<ϵ_t |x-y|^2α dxdy
=12t^2∫_(W_i ∩ W_j)_ϵ_tdy∫_B(y,ϵ_t)dx |x-y|^2α + R_t',
where
0 ≤ R_t' ≤12t^2γ_W_i ∩ W_jdκ_d/d+2αϵ_t^d+2α+1.
Hence we get
|12t^2∬_(^d)^2 h_i(x,y)h_j(x,y) dxdy - 12|W_i ∩ W_j| dκ_d/d+2αt^2ϵ^d+2α|
≤12γ_W_i ∩ W_jdκ_d/d+2αt^2ϵ^d+2α+1.
From (<ref>), (<ref>) and (<ref>), we deduce that
|(F_t^(i),F_t^(j))/12dκ_d/d+2α t^2ϵ_t^d+2α + (dκ_d/d+α)^2 t^3 ϵ_t^2d+2α - |W_i ∩ W_j|| ≤γ_W_i ∩ W_jϵ_t.
Now we use <Ref> and provide bounds for the terms ζ_1^(p),…,ζ_4^(q).
For the term ζ_1^(p), we have by (<ref>) that
ζ_1^(p)≤∑_i,j=1^m γ_W_i ∩ W_jϵ_t.
Plugging in the bounds from <Ref>, we get for ζ_2^(p) that
ζ_2^(p)≤ 2^2/p-1c t^-2ϵ_t^-2α-d(1 ∨ tϵ_t^d)^-1
∑_i,j=1^m(∫_W_i∩ W_j(∫_W_i ∩ W_j|x-y|<ϵ_t |x-y|^2α tdx )^p tdy)^1/p.
Simplifying, we deduce
ζ_2^(p)≤ c t^-1+1/p (1 ∨ tϵ_t^d)^-1σ_1 ∑_i,j=1^m |W_i ∩ W_j|^1/p.
We proceed in the same way for ζ_3^(p) and ζ_4^(q), deducing
ζ_3^(p)≤ 2^2/pc m t^-1+1/p (1 ∧ tϵ_t^d)^1/(2p)dκ_d/d+α∑_i=1^m |W_i|^1/p
and
ζ_4^(q)≤ m^q-1c t^(1-q)/2 (1 ∧ tϵ_t^d)^(1-q)/2∑_i,j=1^m |W_i ∩ W_j|.
If we take, as in the proof of <Ref>, q=3-2/p, we get
ζ_2^(p) + ζ_3^(p) + ζ_4^(q)≤ c (t^-1+1/p∨ (t^2ϵ_t^d)^-1+1/p),
which concludes the proof.
§ SOME RESULTS ON POISSON FUNCTIONALS
In this section, we collect some necessary results on Poisson functionals and Malliavin calculus. Let η be a Poisson measure of intensity λ on a σ-finite measure space (,λ). We denote by L^p(_η) the set of measurable functionals F of η such that |F|^p < ∞.
Add-one cost. Let F be a measurable Poisson functional. For x ∈, we define the add-one cost operator by
_x F := F(η + δ_x) - F(η),
for x ∈. For n ≥ 1, we set inductively ^(n) := ^(n-1), where ^(0) is the identity operator and ^(1)=. We say that F ∈ if F ∈ L^2(_η) and
∫_ (_x F)^2 λ(dx) < ∞.
Note that the following product formula holds for F,G measurable functionals of η:
_x (FG) = _xF · G + F ·_x G + _xF ·_xG.
Chaotic decomposition. Let n ∈ and h∈ L^2(^n,λ^(n)). Denote by I_n(h) the n^th Wiener-Itô integral (see <cit.>). Then for any F ∈ L^2(_η), it holds that
F = ∑_n=0^∞ I_n(f_n),
where f_n(x_1,…,x_n) = 1/n!_x_1,…,x_n^(n) F and f_0 = I_0(f_0) = F and the series converges in L^2(_η) (see <cit.>).
Mecke formula. Let h=h(x,η) be a measurable, non-negative function of η and x ∈ such that
∫_ |h(x,η)| λ(dx) <∞.
Then it holds that
∫_ h(x,η) η(dx) = ∫_ h(x,η + δ_x) λ(dx).
The operator . For functionals F ∈ L^2(_η) having expansion (<ref>), we define the (pseudo) inverse of the Ornstein-Uhlenbeck generator by
F := - ∑_n=1^∞1/n I_n(f_n).
See also <cit.> for further details. We also use the following properties of the operator , which can be found in <cit.>.
For any F ∈ L^2(_η) and r ≥ 1, it holds that
|_x F |^r ≤ |_x F|^r, for λ-a.e. x ∈,
and |_x,y F |^r ≤ |_x,y F|^r, for λ^(2)-a.e. (x,y) ∈^2.
Moreover, for F,G ∈ with F = G =0, we have
(F,G) = ∫_ (_x F) · (-_x G) λ(dx).
The improvement to the moment conditions given by the bounds of Theorem <ref> comes from the following inequality, the so-called p-Poincaré inequality.
Let F ∈ L^1(_η) and p ∈ [1,2]. Then
|F|^p - | F|^p ≤ 2^2-p∫_ |_x F|^p λ(dx).
When p=2, this inequality reduces to the classical Poincaré inequality, see e.g. <cit.>.
|
http://arxiv.org/abs/2409.02292v1 | 20240903210604 | RAMBO: Leaking Secrets from Air-Gap Computers by Spelling Covert Radio Signals from Computer RAM | [
"Mordechai Guri"
] | cs.CR | [
"cs.CR"
] |
M. Guri
Ben-Gurion University of the Negev, Israel
Department of Software and Information Systems Engineering
gurim@post.bgu.ac.il
Air-gap research page:
RAMBO: Leaking Secrets from Air-Gap Computers by Spelling Covert Radio Signals from Computer RAM
Mordechai Guri0000-0003-1806-8858
September 9, 2024
================================================================================================
§ ABSTRACT
Air-gapped systems are physically separated from external networks, including the Internet. This isolation is achieved by keeping the air-gap computers disconnected from wired or wireless networks, preventing direct or remote communication with other devices or networks. Air-gap measures may be used in sensitive environments where security and isolation are critical to prevent private and confidential information leakage.
In this paper, we present an attack allowing adversaries to leak information from air-gapped computers. We show that malware on a compromised computer can generate radio signals from memory buses (RAM). Using software-generated radio signals, malware can encode sensitive information such as files, images, keylogging, biometric information, and encryption keys. With software-defined radio (SDR) hardware and a simple off-the-shelf antenna, an attacker can intercept transmitted raw radio signals from a distance. The signals can then be decoded and translated back into binary information. We discuss the design and implementation and present related work and evaluation results. This paper presents fast modulation methods to leak data from air-gapped computers at 1000 bits per second. Finally, we propose countermeasures to mitigate this out-of-band air-gap threat.
§ INTRODUCTION
Today’s regulations, such as GDPR (General Data Protection Regulation), outline strict rules and principles for how organizations should collect, store, and share personal data. It grants individuals certain rights, such as the right to access their data, the right to be forgotten (i.e., to have their data erased), the right to data portability, and more. Organizations that handle personal data must follow certain practices to ensure privacy and security. They need explicit consent from individuals before processing their data. They need to implement strong data protection measures and report data breaches within a specific timeframe <cit.>.
When sensitive data such as personal or confidential information is involved, the collection, processing, and storage of the information may be done in networks disconnected from the Internet. This security measure is known as an 'air gap.' Air-gap isolation protects information from cyberattacks, and online risks, including phishing emails, social engineering, and compromised websites <cit.>.
§.§ Air-gap Isolation
Enforcing an air gap in a computing or networking environment involves physically and logically isolating a system, network, or device from external networks or communication channels. This can be done by disconnecting network cables, disabling wireless interfaces, and disallowing USB connections. In addition, it must be ensured that the isolated system has no direct link to any external communication infrastructure <cit.>.
§.§ Air-gap Attacks
Despite air-gapped networks being considered highly secure, there have been incidents demonstrating that air-gapped networks are not immune to breaches. Stuxnet is one of the most famous examples of air-gap malware <cit.>. Discovered in 2010, Stuxnet was a highly sophisticated worm that targeted industrial control systems (ICS), particularly those used in nuclear facilities. It exploited zero-day vulnerabilities and used several methods, including infected USB drives, to jump the air gap and spread across isolated networks. The Agent.BTZ worm <cit.> was another advanced and targeted air-gap worm. It was specifically designed to spread through removable media, such as USB flash drives, and infiltrate computer networks, including highly secure or air-gapped ones. According to reports, the worm affected the U.S. Department of Defense classified networks. Notably, more than twenty-five malware families have been reported to target highly secured and air-gapped networks <cit.>, including USBStealer, Agent.BTZ <cit.>, Stuxnet <cit.>, Fanny, MiniFlame, Flame, Gauss, ProjectSauron, EZCheese, Emotional Simian, USB Thief, USBFerry, Retro, and Ramsay.
§.§ The RAMBO Attack
In order to exfiltrate information from an infected air-gapped computer, attackers use special communication channels known as air-gap covert channels. There are several types of covert channels studied in the past twenty years <cit.><cit.>. These attacks leak data through electromagnetic emission <cit.><cit.><cit.>, optical signals <cit.>, acoustic noise <cit.><cit.>, thermal changes <cit.>, and even physical vibrations <cit.>.
In this paper, we show how malware can manipulate RAM to generate radio signals at clock frequencies. These signals are modulated and encoded with a particular scheme, allowing them to be received from a distance. The attacker can encode sensitive information (keylogging, documents, images, biometric information, etc.) and exfiltrate it via these radio signals. An attacker with appropriate hardware can receive the electromagnetic signals, demodulate and decode the data, and retrieve the exfiltrated information.
This paper is organized as follows. The attack model is first described in Section <ref>. Section <ref> provides a review of related work. Section <ref> describes the design and implementation of a transmitter and receiver, including modulation and encoding. The analysis and evaluation results are presented in section <ref>. Section <ref> provides a list of countermeasures, and we conclude in Section <ref>.
§ ATTACK MODEL
Attacks on air-gapped networks involve multi-phase strategies to breach isolated systems by delivering specialized malware through physical media or insider agents, initiating malware execution, propagating within the network, exfiltrating data using covert channels or compromised removable media, establishing remote command and control, evading detection, and covering tracks. In the context of the RAMBO attack, the adversary must infect the air-gap network in the initial phase. This can be done via a variety of attack vectors <cit.><cit.><cit.><cit.>.
An attacker could plant malware on a USB drive and physically introduce it into an air-gapped network. An unsuspecting insider or employee might connect the USB drive to a computer within the isolated network, unknowingly activating the malware and allowing it to propagate and exfiltrate data through the same USB drive or via covert channels. An insider with access to the air-gapped network might intentionally introduce malware or provide unauthorized access to external parties. This could involve transferring sensitive data to personal devices or using covert communication methods like steganography to hide data within innocent-looking files. An attacker could also compromise hardware components or software updates during the supply chain process. Once these components are installed within the air-gapped network, hidden malware might activate and communicate with external parties. Note that APTs (Advanced Persistent Threats) in the past targeted highly secured and air-gapped networks, including USBStealer, Agent.BTZ <cit.>, Stuxnet, Fanny, MiniFlame, Flame, Gauss, ProjectSauron, EZCheese, Emotional Simian, USB Thief, USBFerry, Brutal Kangaroo, Retro, PlugX, and Ramsay <cit.>. More recently, in August 2023, researchers at Kaspersky discovered another new malware and attributed it to the cyber-espionage group APT31, which targets air-gapped and isolated networks via infected USB drives <cit.>.
In the second phase of the attack, the attacker collects information, e.g., keylogging, files, passwords, biometric data, and so on, and exfiltrates it via the air-gap covert channel. In our case, the malware utilizes electromagnetic emissions from the RAM to modulate the information and transmit it outward. A remote attacker with a radio receiver and antenna can receive the information, demodulate it, and decode it into its original binary or textual representation. The attack scenario is illustrated in Figure <ref>. The RAMBO malware within the infected air-gapped workstation (A) transmits sensitive images (Optimus Prime) using covert electromagnetic radiation from the RAM. A remote attacker intercepts the information and decodes the data.
§ RELATED WORK
Air-gap covert channels refer to a type of covert communication method that transfers information between two physically isolated systems or networks that are not directly connected through wired or wireless means. In the security research domain, air-gap covert channels are rooted in the idea that even systems disconnected from external networks might still communicate through unintended or concealed means. While the air gap is intended to prevent unauthorized data transfer, various techniques have been explored to bypass this isolation and create hidden communication channels. The main types of air-gap covert channels are acoustic, optical, thermal, and electromagnetic. In this paper, RAMBO covert channels are categorized as electromagnetic covert channels. In acoustic covert channels, systems might use ultrasonic sound waves inaudible to humans to transmit data between air-gapped devices. Specialized software or malware can encode data into sound signals picked up by a microphone on the receiving device <cit.><cit.>. Previous work shows that attackers can exploit CPU and GPU fans <cit.><cit.>, hard-disk drives (HDD) <cit.>, CD/DVD noise <cit.>, and power-supply sound characteristics <cit.> to modulate information over an air gap. Data can be encoded and transmitted using light signals, such as rapidly flashing LED lights or screen brightness changes <cit.>. The receiving device might use a camera or light sensor to detect and decode signals. Previous work showed that attackers could exploit keyboards <cit.>, routers <cit.>, hard-disk drives (HDD) <cit.>, and screen LEDs <cit.> to modulate information over air gaps for long distances. In these cases, the receiver is a sensor or a camera. Attackers can transmit information by causing minor temperature fluctuations imperceptible to human senses but detectable by sensitive thermal sensors. For example, the BitWhisper attack <cit.>, presented by Guri et al., shows that the CPU can generate thermal signals that nearby computers can sense to transfer data over air gaps. Electromagnetic emissions, often unintended byproducts of computational activities, can be modulated to encode data. These emissions can then be captured and interpreted by a receiver equipped with appropriate sensors. For example, malware might exploit electromagnetic emissions of a computer's central processing unit (CPU) to create a covert communication channel. Previous works focused on radio frequency covert channels, including EMLoRa <cit.>, AirHopper <cit.>, GSMem <cit.>, Air-Fi <cit.>, SATAn <cit.>, and Lantenna <cit.>.
§ TRANSMISSION AND RECEPTION
This section presents the implementation of the transmitter and receiver and the signal generation, data modulation, demodulation, and encoding and decoding schemes.
The RAM bus consists of electrical lines, or pathways, that connect the CPU to memory modules. These pathways transfer data, instructions, and addresses between the CPU and RAM. The RAM bus includes various components <cit.>.
* Data Bus. This is the portion of the RAM bus responsible for carrying the actual data being read from or written to memory. The data bus width determines the amount of data transferred simultaneously. For example, a 64-bit data bus can transfer 64 bits (8 bytes) of data in one operation.
* Address Bus. The address bus carries memory addresses that indicate the specific location in memory from which the CPU wants to read or write data. The address bus width determines the maximum amount of memory the CPU can access directly. For instance, a 32-bit address bus can address up to 4 gigabytes of memory.
* Control Lines. These lines carry control signals coordinating data transfer timing and sequencing. Control lines handle reading, writing, activating memory chips, and signaling when data is ready.
When data is transferred through a RAM bus, it involves rapid voltage and current changes, mainly in the Data bus. These voltage transitions create electromagnetic fields, which can radiate electromagnetic energy through electromagnetic interference (EMI) or radio frequency interference (RFI). The frequency range of electromagnetic emanation from the RAM bus mainly depends on its specific clock speed, measured in megahertz (MHz) or gigahertz (GHz). This clock dictates how quickly data can be transferred between the CPU and memory. The emanation levels are influenced by other bus characteristics, including its data width, clock speed, and overall architecture. Faster RAM buses (e.g., DDR4 and DDR5) with wider data paths can lead to quicker data transfers with increased emissions.
§.§ Signal Generation
As explained above, when data is read from or written to memory, electrical currents flow through the RAM chips and the associated traces on the printed circuit board (PCB). These electrical currents generate electromagnetic fields as a byproduct, which radiate EM energy. To create an EM covert channel, the transmitter needs to modulate memory access patterns in a way that corresponds to binary data. For instance, it could alter the timing or frequency of memory access operations to encode information. The sender and receiver must establish rules that define how memory access patterns translate to binary values. For example, reading or writing an array to physical memory during a specific time interval might represent a '0', while activity during another interval represents a '1'. The receiver detects and decodes the EM emissions caused by the modulated memory activity. This could involve sensitive radio frequency (RF) receivers or electromagnetic field sensors.
§.§ Modulation
Algorithm 1 shows the signal generation with OOK (On-Off Keying) modulation, a basic form of digital modulation used in communication systems to transmit digital data over a carrier wave. In our case, the OOK modulation involves turning the carrier wave on and off to represent binary data, where the presence of the carrier wave generated by memory activity corresponds to one binary state ( "1"). The absence of the electromagnetic carrier wave (thread ) corresponds to the other binary state ("0"). Note that to maintain the activity in the RAM buses, we used the MOVNTI instruction <cit.>, which stands for Move Non-Temporal Integer. It performs a non-temporal store of integer data from a source operand to a destination memory location. This instruction is primarily associated with optimizing memory operations for certain types of data transfers, particularly in cases where the data is not to be reused immediately. Note that for the beginning of the transmission, we used the preamble sequence of , allowing the receiver to be synched with the transmitter.
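Since Algorithm 1 is only referenced above, the following is a minimal sketch of what such an OOK transmitter loop can look like. The buffer size, bit time, and helper names are illustrative assumptions rather than the parameters of the original implementation; the elements taken from the description above are the bit mapping (memory activity for '1', an idle interval for '0') and the use of non-temporal stores (MOVNTI, reached here via the _mm_stream_si32 intrinsic) to keep the RAM bus busy.

```c
#include <immintrin.h>   /* _mm_stream_si32, which compiles to MOVNTI */
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define BIT_TIME_MS 10           /* illustrative bit interval */
#define BUF_INTS    (1 << 20)    /* 4 MB scratch buffer driving the RAM bus */

static int *buf;

/* Monotonic clock in milliseconds. */
static uint64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000u + (uint64_t)(ts.tv_nsec / 1000000L);
}

/* '1': keep the memory bus busy with non-temporal stores for one bit time. */
static void emit_one(void)
{
    uint64_t end = now_ms() + BIT_TIME_MS;
    while (now_ms() < end)
        for (size_t i = 0; i < BUF_INTS; i++)
            _mm_stream_si32(&buf[i], (int)i);
}

/* '0': keep the bus quiet for one bit time. */
static void emit_zero(void)
{
    struct timespec ts = { 0, BIT_TIME_MS * 1000000L };
    nanosleep(&ts, NULL);
}

/* Transmit a '0'/'1' string: preamble first, then the payload bits. */
static void transmit_bits(const char *bits)
{
    for (const char *p = bits; *p; p++)
        (*p == '1') ? emit_one() : emit_zero();
}

int main(void)
{
    buf = malloc(BUF_INTS * sizeof(int));
    if (!buf)
        return 1;
    transmit_bits("10101010");   /* alternating preamble; payload bits would follow */
    free(buf);
    return 0;
}
```

Pinning the transmitting thread to a single core and raising its priority would reduce timing jitter at the bit boundaries, but is omitted here for brevity.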
§.§ Manchester Encoding
For the fast transmission, we used the Manchester encoding. In this encoding, each bit of the binary data is represented by a transition or change in signal level within a fixed period.
Manchester encoding ensures a consistent number of signal transitions, making it useful for clock synchronization and error detection. The outline of our transmitter with Manchester encoding is presented in Algorithm 2.
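To make the encoding step concrete, the sketch below expands payload bytes into Manchester chips that can be fed to a bit-level transmitter such as the one sketched earlier. The chosen convention (data '1' becomes the chip pair '10', data '0' becomes '01') is an assumption, since the text does not state which of the two standard Manchester conventions is used.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Manchester-encode `len` payload bytes (MSB first) into a '0'/'1' chip string.
 * Every data bit becomes two chips, so `chips` must hold at least 16*len + 1 bytes.
 * Assumed convention: '1' -> "10", '0' -> "01".
 */
void manchester_encode(const uint8_t *data, size_t len, char *chips)
{
    size_t k = 0;
    for (size_t i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            int bit = (data[i] >> b) & 1;
            chips[k++] = bit ? '1' : '0';
            chips[k++] = bit ? '0' : '1';
        }
    }
    chips[k] = '\0';
}
```

The guaranteed level change inside every bit period is what provides the clock reference and error detection mentioned above, at the cost of doubling the number of transmitted symbols.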
§.§ Demodulation and Framing
We encode the data into frames, each beginning with a preamble of eight alternating bits that marks the frame's start. Our demodulator is presented in Algorithm 3.
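Algorithm 3 is likewise only referenced here, so the following is a rough sketch of the receiving chain under simplifying assumptions: one power (RSSI) sample per chip slot, already time-aligned by the SDR front end, and a detection threshold derived from the noise floor. The Manchester convention matches the encoder sketch above, and the frame is located by searching for the eight alternating preamble bits.

```c
#include <stddef.h>
#include <stdint.h>

#define PREAMBLE_BITS 8        /* eight alternating bits mark the frame start */
#define MAX_CHIPS     4096     /* illustrative capture length */

/* Decode Manchester chip pairs back to bits ("10" -> 1, "01" -> 0, as assumed
 * in the encoder sketch). Returns the bit count, or -1 on an invalid pair. */
static int manchester_decode(const char *chips, size_t nchips, uint8_t *bits)
{
    size_t n = 0;
    for (size_t i = 0; i + 1 < nchips; i += 2) {
        if (chips[i] == '1' && chips[i + 1] == '0')      bits[n++] = 1;
        else if (chips[i] == '0' && chips[i + 1] == '1') bits[n++] = 0;
        else return -1;                                  /* lost chip alignment */
    }
    return (int)n;
}

/* Index of the first payload bit after the alternating preamble, -1 if absent. */
static int find_frame_start(const uint8_t *bits, int nbits)
{
    for (int i = 0; i + PREAMBLE_BITS <= nbits; i++) {
        int alternating = 1;
        for (int j = 1; j < PREAMBLE_BITS; j++)
            if (bits[i + j] == bits[i + j - 1]) { alternating = 0; break; }
        if (alternating) return i + PREAMBLE_BITS;
    }
    return -1;
}

/* Full chain for one capture: threshold -> chips -> bits -> locate the frame.
 * bits_out must hold at least MAX_CHIPS/2 entries; returns the payload bit
 * count, or -1 on failure. */
int demodulate(const double *power, size_t nslots, double threshold,
               uint8_t *bits_out)
{
    char chips[MAX_CHIPS];
    uint8_t bits[MAX_CHIPS / 2];

    if (nslots > MAX_CHIPS)
        nslots = MAX_CHIPS;
    for (size_t i = 0; i < nslots; i++)
        chips[i] = power[i] > threshold ? '1' : '0';

    int nbits = manchester_decode(chips, nslots, bits);
    if (nbits < 0)
        return -1;
    int start = find_frame_start(bits, nbits);
    if (start < 0)
        return -1;

    int out = 0;
    for (int i = start; i < nbits; i++)
        bits_out[out++] = bits[i];
    return out;
}
```

A real receiver would additionally track the chip clock and adapt the threshold over time, which is exactly where the mid-bit transitions of the Manchester code help.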
Figure <ref> shows the spectrogram and waveform of the word `DATA' () transmitted in the Manchester encoding (top) and OOK modulation (bottom). Our analysis shows that the Manchester encoding is better suited to the requirements of the RAMBO covert channel for two main reasons: (1) the encoding aids in clock synchronization between the sender and receiver, and (2) the frequent transitions make it easier to detect errors caused by signal loss, interference, or distortion. However, it is important to note that Manchester encoding doubles the required bandwidth compared to direct binary encoding (e.g., the OOK), as each bit requires two signal transitions within the bit interval. This increased bandwidth requirement can be a drawback in some scenarios, especially for high-speed data transmission.
§ EVALUATION
In this section, we present the evaluation of the covert channels. We tested three types of workstations. The PCs all had Intel i7 3.6 GHz CPUs and 16 GB of 2.133 - 2.400 GHz RAM, and each ran Linux Ubuntu 18.04.6 LTS 64-bit. For the reception, we used the software-defined radio (SDR) Ettus B210, which is a specific model of the Universal Software Radio Peripheral (USRP) developed by Ettus Research, National Instruments (<ref>). The B210 offers a wide range of capabilities for researchers, engineers, and enthusiasts working in wireless communication, radio frequency (RF) research, and signal processing. It covers a frequency range from 70 MHz to 6 GHz and supports sample rates of up to 61.44 MS/s (mega-samples per second). The USRP was connected to a small form factor NUC computer with 16 GB RAM running the C demodulator. It also ran MathWorks Matlab for signal processing and spectrogram visualization.
§.§ Signal to Noise (SNR)
We evaluated the SNR levels at distances of 100 - 700 cm. Table <ref> lists the average SNR levels. The SNR levels ranged from 38 dB down to 8 dB, which reflects the effective distance over which the covert channel can operate in this setup. Note that the SNR is also affected by the bit times. Figure <ref> shows three different SNR levels with t=250 ms (A), t=100 ms (B), and t=50 ms (C). As can be seen, the SNR is significantly affected by the bit time, with an average difference of 7 dB between speeds shifted by 50 bit/sec.
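For reference, the SNR values reported here are ordinary power ratios. A helper of the following kind, assuming the signal and noise powers have already been estimated from the SDR capture (e.g., the power in the carrier bin versus the surrounding noise floor), reproduces the dB figures; the numeric input is illustrative.

```c
#include <math.h>
#include <stdio.h>

/* SNR in dB from signal and noise power measured on the same linear scale. */
static double snr_db(double signal_power, double noise_power)
{
    return 10.0 * log10(signal_power / noise_power);
}

int main(void)
{
    /* Illustrative numbers: a signal roughly 6300x above the noise floor
     * corresponds to about the 38 dB measured at the closest distance. */
    printf("SNR = %.1f dB\n", snr_db(6300.0, 1.0));
    return 0;
}
```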
§.§ Bitrates
We evaluated the three speeds' effective bit rates and corresponding bit-error rates. Tables <ref>, <ref>, and <ref> show the bit error rate (BER) values for t=10 ms, t=5 ms, and t=1 ms, respectively. With a slow transmission rate (t=10 ms), a transmission is maintained at a distance of 700 cm. With a medium transmission rate (t=5 ms), a transmission is maintained at a distance of 450 cm with a BER of 3%-4%. With a fast transmission rate (t=1 ms), a transmission is maintained at a distance of 300 cm with a BER of 2%-4%.
§.§ Data Exfiltration
Table <ref> presents the time it takes to exfiltrate various types of information for three timing parameters (t).
Keylogging can be exfiltrated in real time with 16 bits per key (Unicode). A 4096-bit RSA encryption key can be exfiltrated in 41.96 sec at the low speed and in 4.096 sec at the high speed. Biometric information, small files (.jpg), and small documents (.txt and .docx) require from roughly 400 seconds at the low speed down to a few seconds at the fast speeds. This indicates that the RAMBO covert channel can be used to leak relatively small amounts of information over a short period.
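The orders of magnitude in Table <ref> can be sanity-checked from the bit time alone: a rough estimate of the exfiltration time is the payload size in bits multiplied by t, ignoring preamble and framing overhead (which is why the measured figures are slightly larger). The payload sizes below are illustrative.

```c
#include <stdio.h>

/* Rough exfiltration time: payload bits times the bit interval t (seconds).
 * Framing and preamble overhead are ignored. */
static double exfil_seconds(double payload_bits, double bit_time_s)
{
    return payload_bits * bit_time_s;
}

int main(void)
{
    const double t_slow = 0.010, t_fast = 0.001;   /* 10 ms and 1 ms bit times */

    printf("keystroke (16 bits): %.2f s / %.3f s\n",
           exfil_seconds(16, t_slow), exfil_seconds(16, t_fast));
    printf("RSA-4096 key:        %.2f s / %.3f s\n",
           exfil_seconds(4096, t_slow), exfil_seconds(4096, t_fast));
    printf("5 KB file:           %.1f s / %.2f s\n",
           exfil_seconds(5 * 1024 * 8, t_slow), exfil_seconds(5 * 1024 * 8, t_fast));
    return 0;
}
```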
§.§ Faraday Shielding
It is possible to block electromagnetic radiation from the computer using a specialized metal chassis built as a Faraday cage.
The attenuation of a Faraday cage, which measures how effectively it blocks electromagnetic radiation, depends on various factors, including the frequency of the radiation, the conductivity of the cage material, and the thickness of the cage walls.
The attenuation (A) of electromagnetic radiation by a conductive material like a Faraday cage can be approximated using Equation <ref>. The attenuation factors are listed in Table <ref>.
A = 10 ·log_10( 1/(1 + ( σ d/(μ f))^2) ),
where σ is the conductivity of the cage material, d is the thickness of the cage walls, μ is the magnetic permeability, and f is the frequency of the radiation.
We analyzed and measured the effect of a Faraday chassis on the RAMBO covert channel using copper foil, which has high shielding properties for electric fields (above 100 dB) as well as for magnetic fields. A standard copper thickness of 1 mm effectively blocks the EMR from the transmitting workstation. However, as noted in the following section, this solution is costly and cannot be deployed widely. Another option is to use a Faraday room, which is typically constructed using metal that conducts electric currents. The primary purpose of a Faraday room is to create an electromagnetically isolated environment, which means that electromagnetic fields from external sources are significantly reduced or prevented from entering the enclosed space. Faraday enclosures are presented in Figure <ref>: a PC-sized Faraday enclosure (A), a general-size Faraday enclosure (B), and a Faraday room (C).
§.§ Virtualization
We evaluated the effectiveness of the covert channel when the transmitting code operates from within a virtual machine (VM). For the evaluation, we used VMware Workstation 16.2.4 running Linux Ubuntu 18.04.6 LTS 64-bit on both the host and guest machines. Our tests show that a low BER of below 1% was maintained even when the code ran within a VM. However, it is important to note that a heavy workload in the host OS or memory activity in another guest OS might interrupt the signal generation conducted by the compromised virtual machine.
§.§ Higher bit-rates
We tested high bit rates of 5000 bps and above. Our evaluation shows that, while it is still possible to demodulate the signal, the BER is mostly above 5%, rendering these speeds less effective. The main reason is the low SNR levels yielded by the fast signal generation. Figure <ref> shows the waveform of the alternating short signal generated at 10000 bps. As depicted, the SNR is low (below 5%) and causes high BER levels during demodulation.
§.§ Frequency Ranges
The electromagnetic emission from DDR RAM and other digital components can span a wide frequency range, including fundamental frequencies, subharmonics, and spurious emissions.
The central frequencies are the direct clock frequencies and their harmonics. For example, with DDR RAM operating at a clock frequency of 1.6 GHz (corresponding to DDR4-3200), we can observe emissions around 1.6 GHz, 3.2 GHz, and 4.8 GHz (3rd harmonic). It is important to note that DDR RAM modules emit frequencies that are not direct harmonics but are related to the clock frequency more indirectly; these can include subharmonics and other spurious emissions. We don't use these frequencies for the RAMBO covert channel. Our tests show that some systems use spread spectrum clocking to spread electromagnetic emissions across a broader range of frequencies. This technique can help reduce the concentration of emissions at specific frequencies, making it less likely to carry the modulated information.
§ COUNTERMEASURES
Several defensive and protective countermeasures can be taken to defend against the proposed covert channel.
* Zone restrictions. The red-black separation concept involves creating a clear boundary or barrier between "red" and "black" components or environments to prevent unauthorized transfer of information from one domain to the other. This separation can be achieved through physical, logical, and procedural measures. In practice, defenders often use separate networks, hardware, and physical access controls to keep red and black systems physically separate from each other. There are several NATO and American standards, such as SDIP-27, AMSG, NSTISSAM, and ZONES, that mandate the segregation of areas that deal with the radiated electromagnetic, magnetic, optical, and acoustic energy of devices <cit.>. In this approach, radio receiver devices are eliminated from air-gapped computers or kept outside a specified radius of several meters away. The red-black separation concept may be applied in various domains, including military, intelligence, critical infrastructure, and organizations dealing with susceptible information <cit.>. In the context of the RAMBO attack, it can mitigate the risk of RAM leakage and unauthorized access by creating a clear separation between the two security domains.
* Host intrusion detection systems (HIDS). In this approach, we monitor the operating system's physical or virtual memory operations and detect suspicious activity, such as a process that abnormally reads and writes to memory regions. There are three different layers on which an intrusion detection system can operate. In the kernel-level approach, a driver/module is installed at the kernel level and continuously monitors page access operations. Our experiments show that all monitoring approaches imply high false-positive rates. The main reason is that memory operations are always incurred by hundreds of threads in the OS, including at the kernel level. Monitoring and analyzing these operations creates runtime overhead and leads to a high rate of false alarms.
* Hypervisor-level memory access monitoring. Because the hypervisor operates at a lower level of system control, it has visibility into the memory access patterns of the virtual machines it manages. This visibility allows the hypervisor to monitor memory access <cit.>. Technically, the hypervisor manages the virtual-to-physical memory mapping for each VM through memory page tables. An Extended Page Table (EPT) is a virtualization technology used in modern processors to enhance the performance of virtual machines (VMs) in a virtualized environment. EPT is specific to Intel processors and is equivalent to AMD's Nested Page Tables (NPT). By monitoring and controlling these page tables, the hypervisor can keep track of memory access patterns and perform memory isolation. Note that this approach was proposed by previous work to detect shellcode injection attacks and other types of vulnerability exploits. However, as the HIDS solution, it may lead to a high rate of false positives.
* External radio monitoring. Dedicated spectrum analyzers are specialized hardware devices designed to scan and analyze the radio frequency spectrum. They provide detailed information about signal strengths, frequency utilization, and interference sources. Spectrum monitoring refers to analyzing and observing the radio frequency (RF) spectrum used by wireless networks, e.g., those using the Wi-Fi standard. This monitoring helps understand the wireless communication environment's usage, interference, and overall health. It involves scanning and analyzing the different frequency channels within the RF spectrum to detect signals, identify sources of interference, and optimize the performance of wireless networks. In the context of a RAMBO attack, spectrum analyzers can provide visibility of the RF spectrum, showing signal strength across different frequency bands and detecting unintended transmission and covert channels.
* Internal RAM jamming. Another option is to interrupt the covert channel by applying random memory operations. Such jamming can be implemented as a user or kernel thread randomly interfering with memory and performing read-write operations (a minimal user-space sketch is given after this list). This approach has the main disadvantage of interfering with legitimate memory activities, resulting in significant overhead. In addition, internal jammer threads can be manipulated, bypassed, or terminated by malware with sufficient privileges running on a compromised computer.
* External Electromagnetic jamming. It is possible to defend against the covert channel using radio jammers, also known as signal jammers or RF (radio frequency) jammers. These devices are designed to interfere with or disrupt wireless communications by emitting radio frequency signals on the required frequencies used by the targeted communication systems. The goal of a radio jammer is to create a jamming signal that overwhelms and interferes with legitimate signals, rendering the communication systems ineffective or unreliable within the jamming area <cit.>. In the case of a RAMBO attack, the whole spectrum of DDR should be jammed. However, this approach requires dedicated external hardware transceivers and antennas, which are maintained in a secure area, which tend to pose another security threat.
* Radio reduction/blocking Faraday enclosures. A PC Faraday enclosure, also known as a Faraday cage or Faraday enclosure, is a shielded enclosure designed to block external electromagnetic fields and electromagnetic radiation from entering or leaving the enclosed space <cit.>. This shielding helps protect sensitive electronic devices and equipment from electromagnetic interference (EMI) and prevents emitted electromagnetic radiation from leaking out and potentially interfering with other devices or systems. The Faraday enclosures will limit the leakage of radio frequencies of the RAMBO attack. However, the solution is costly and not applied on a broad scale.
Table <ref> lists the countermeasures and their limitations.
§ CONCLUSION
We present an air-gap covert channel attack that allows attackers to exfiltrate sensitive data from isolated computers. We show that malicious code in an infected computer can manipulate memory operations and generate radio signals from the memory buses. By precisely controlling memory-related instructions, arbitrary information can be encoded and modulated onto the electromagnetic waves. An attacker with a software-defined radio (SDR) can receive the information, demodulate it, and decode it. We showed that this method can be used to exfiltrate arbitrary types of information, such as keystroke logging, files, images, biometric data, etc. We presented the architecture and implementation, provided evaluation results, and discussed preventive countermeasures. With this method, attackers can leak data from highly isolated, air-gapped computers to a nearby receiver at a bit rate of hundreds of bits per second.
albrecht2016gdpr
Jan Philipp Albrecht.
How the gdpr will change the world.
Eur. Data Prot. L. Rev., 2:287, 2016.
guri2018bridgeware
Mordechai Guri and Yuval Elovici.
Bridgeware: The air-gap malware.
Communications of the ACM, 61(4):74–82, 2018.
guri2021usbculprit
Mordechai Guri.
Usbculprit: Usb-borne air-gap malware.
In European Interdisciplinary Cybersecurity Conference, pages
7–13, 2021.
chen2011lessons
Thomas M Chen and Saeed Abu-Nimeh.
Lessons from stuxnet.
Computer, 44(4):91–93, 2011.
gostev2014agent
Alexander Gostev.
Agent. btz: a source of inspiration?
SecureList, 12(3), 2014.
dorais2021jumping
Alexis Dorais-Joncas and Facundo Munõz.
Jumping the air gap.
2021.
cabaj2018new
Krzysztof Cabaj, Luca Caviglione, Wojciech Mazurczyk, Steffen Wendzel, Alan
Woodward, and Sebastian Zander.
The new threats of information hiding: The road ahead.
IT professional, 20(3):31–39, 2018.
caviglione2021trends
Luca Caviglione.
Trends and challenges in network covert channels countermeasures.
Applied Sciences, 11(4):1641, 2021.
Guri2015
Mordechai Guri, Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and
Yuval Elovici.
Gsmem: Data exfiltration from air-gapped computers over gsm
frequencies.
In USENIX Security Symposium, pages 849–864, 2015.
Guri2014
Mordechai Guri, Gabi Kedma, Assaf Kachlon, and Yuval Elovici.
Airhopper: Bridging the air-gap between isolated networks and mobile
phones using radio frequencies.
In Malicious and Unwanted Software: The Americas (MALWARE), 2014
9th International Conference on, pages 58–67. IEEE, 2014.
shen2021lora
Cheng Shen, Tian Liu, Jun Huang, and Rui Tan.
When lora meets emr: Electromagnetic covert channels can be super
resilient.
In 2021 IEEE Symposium on Security and Privacy (SP), pages
1304–1317. IEEE, 2021.
guri2019ctrl
Mordechai Guri, Boris Zadov, Dima Bykhovsky, and Yuval Elovici.
Ctrl-alt-led: Leaking data from air-gapped computers via keyboard
leds.
In 2019 IEEE 43rd Annual Computer Software and Applications
Conference (COMPSAC), volume 1, pages 801–810. IEEE, 2019.
guri2020fansmitter
Mordechai Guri, Yosef Solewicz, and Yuval Elovici.
Fansmitter: Acoustic data exfiltration from air-gapped computers via
fans noise.
Computers & Security, page 101721, 2020.
guri2022gpu
Mordechai Guri.
Gpu-fan: Leaking sensitive data from air-gapped machines via covert
noise from gpu fans.
In Nordic Conference on Secure IT Systems, pages 194–211.
Springer, 2022.
guri2015bitwhisper
Mordechai Guri, Matan Monitz, Yisroel Mirski, and Yuval Elovici.
Bitwhisper: Covert signaling channel between air-gapped computers
using thermal manipulations.
In Computer Security Foundations Symposium (CSF), 2015 IEEE
28th, pages 276–289. IEEE, 2015.
guri2021exfiltrating
Mordechai Guri.
Exfiltrating data from air-gapped computers via vibrations.
Future Generation Computer Systems, 122:69–81, 2021.
AirGappe49:online
Air gapped networks: A false sense of security? - sentinelone.
<https://www.sentinelone.com/blog/air-gapped-networks-a-false-sense-of-security/>.
(Accessed on 07/14/2023).
Beatingt3:online
Beating the air-gap: How attackers can gain access to supposedly isolated
systems | energy central.
<https://energycentral.com/c/iu/beating-air-gap-how-attackers-can-gain-access-supposedly-isolated-systems>.
(Accessed on 05/04/2023).
Guri2018b
Mordechai Guri and Yuval Elovici.
Bridgeware: The air-gap malware.
Commun. ACM, 61(4):74–82, March 2018.
Kaspersk37:online
Kaspersky uncovers malware for targeted data exfiltration from air-gapped
environments | kaspersky.
<https://usa.kaspersky.com/about/press-releases/2023_kaspersky-uncovers-malware-for-targeted-data-exfiltration-from-air-gapped-environments>.
(Accessed on 08/20/2023).
Deshotels2014
Luke Deshotels.
Inaudible sound as a covert channel in mobile devices.
In WOOT, 2014.
de2022inkfiltration
Julian de Gortari Briseno, Akash Deep Singh, and Mani Srivastava.
Inkfiltration: Using inkjet printers for acoustic data exfiltration
from air-gapped networks.
ACM Transactions on Privacy and Security, 25(2):1–26, 2022.
guri2017acoustic
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici.
Acoustic data exfiltration from speakerless air-gapped computers via
covert hard-drive noise (diskfiltration).
In European Symposium on Research in Computer Security, pages
98–115. Springer, 2017.
guri2020cd
Mordechai Guri.
Cd-leak: Leaking secrets from audioless air-gapped computers using
covert acoustic signals from cd/dvd drives.
In 2020 IEEE 44th Annual Computers, Software, and Applications
Conference (COMPSAC), pages 808–816. IEEE, 2020.
guri2021power
Mordechai Guri.
Power-supplay: Leaking sensitive data from air-gapped, audio-gapped
systems by turning the power supplies into speakers.
IEEE Transactions on Dependable and Secure Computing, 2021.
guri2019brightness
Mordechai Guri, Dima Bykhovsky, and Yuval Elovici.
Brightness: Leaking sensitive data from air-gapped workstations via
screen brightness.
In 2019 12th CMI Conference on Cybersecurity and Privacy (CMI),
pages 1–6. IEEE, 2019.
guri2018xled
Mordechai Guri, Boris Zadov, Andrey Daidakulov, and Yuval Elovici.
xled: Covert data exfiltration from air-gapped networks via switch
and router leds.
In 2018 16th Annual Conference on Privacy, Security and Trust
(PST), pages 1–12. IEEE, 2018.
Guri2017
Mordechai Guri, Boris Zadov, and Yuval Elovici.
LED-it-GO: Leaking (A Lot of) Data from Air-Gapped Computers via
the (Small) Hard Drive LED, pages 161–184.
Springer International Publishing, Cham, 2017.
guri2022air
Mordechai Guri.
Air-fi: Leaking data from air-gapped computers using wi-fi
frequencies.
IEEE Transactions on Dependable and Secure Computing, 2022.
guri2022satan
Mordechai Guri.
Satan: Air-gap exfiltration attack via radio signals from sata
cables.
In 2022 19th Annual International Conference on Privacy,
Security & Trust (PST), pages 1–10. IEEE, 2022.
guri2021lantenna
Mordechai Guri.
Lantenna: Exfiltrating data from air-gapped networks via ethernet
cables emission.
In 2021 IEEE 45th Annual Computers, Software, and Applications
Conference (COMPSAC), pages 745–754. IEEE, 2021.
romoddr
Joaquin Romo.
Ddr memories comparison and overview.
Beyond Bits, page 70.
MOVNTI—S81:online
Movnti — store doubleword using non-temporal hint.
<https://www.felixcloutier.com/x86/movnti>.
(Accessed on 08/20/2023).
NSTISSAM75:online
https://cryptome.org.
Nstissam tempest/2-95.
<https://cryptome.org/tempest-2-95.htm>, 2000.
(Accessed on 01/01/2023).
tang2018secure
Weiwen Tang and Zeyu Mi.
Secure and efficient in-hypervisor memory introspection using nested
virtualization.
In 2018 IEEE Symposium on Service-Oriented System Engineering
(SOSE), pages 186–191. IEEE, 2018.
kasturi2020detection
GS Kasturi, Ansh Jain, and Jagdeep Singh.
Detection and classification of radio frequency jamming attacks using
machine learning.
J. Wirel. Mob. Networks Ubiquitous Comput. Dependable Appl.,
11(4):49–62, 2020.
chapman2015mathematics
S Jonathan Chapman, David P Hewett, and Lloyd N Trefethen.
Mathematics of the faraday cage.
Siam Review, 57(3):398–417, 2015.
|
http://arxiv.org/abs/2409.03375v1 | 20240905092705 | Leveraging Large Language Models through Natural Language Processing to provide interpretable Machine Learning predictions of mental deterioration in real time | [
"Francisco de Arriba-Pérez",
"Silvia García-Méndez"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
Leveraging Large Language Models through Natural Language Processing to provide interpretable Machine Learning predictions of mental deterioration in real time
Francisco de Arriba-Pérez (farriba@gti.uvigo.es). These authors contributed equally to this work.
Silvia García-Méndez (sgarcia@gti.uvigo.es). These authors contributed equally to this work.
Information Technologies Group, atlanTTic, University of Vigo, Vigo, Spain
Based on official estimates, 50 million people worldwide are affected by dementia, and this number increases by 10 million new patients every year. Without a cure, clinical prognostication and early intervention represent the most effective ways to delay its progression. To this end, Artificial Intelligence and computational linguistics can be exploited for natural language analysis, personalized assessment, monitoring, and treatment. However, traditional approaches lack semantic knowledge management and explicability capabilities. Moreover, the use of Large Language Models (llms) for cognitive decline diagnosis is still scarce, even though these models represent the most advanced way for clinical-patient communication using intelligent systems. Consequently, we leverage an llm using the latest Natural Language Processing (nlp) techniques in a chatbot solution to provide interpretable Machine Learning prediction of cognitive decline in real-time. Linguistic-conceptual features are exploited for appropriate natural language analysis. Through explainability, we aim to fight potential biases of the models and improve their potential to help clinical workers in their diagnosis decisions. More in detail, the proposed pipeline is composed of (i) data extraction employing nlp-based prompt engineering; (ii) stream-based data processing including feature engineering, analysis, and selection; (iii) real-time classification; and (iv) the explainability dashboard to provide visual and natural language descriptions of the prediction outcome. Classification results exceed 80 % in all evaluation metrics, with a recall value for the mental deterioration class of about 85 %. To sum up, in this work we contribute an affordable, flexible, non-invasive, personalized diagnostic system.
This version of the article has been accepted for publication, after peer review but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s13369-024-09508-2.
§ INTRODUCTION
Neurodegenerative Alzheimer’s disorder (ad) is the leading cause of chronic or progressive dementia, which negatively impacts cognitive functioning, causing comprehension, speech, and thinking problems, memory loss, etc. <cit.>. More in detail, the typical stages of cognitive decline can be categorized as pre-clinical ad, Mild Cognitive Impairment (mci) caused by ad, and finally ad dementia <cit.>. Generally, cognitively impaired users find it difficult to perform daily tasks, with the consequent detrimental impact on their quality of life <cit.>. In this line, cognitive decline is a leading cause of dependency and disability for our elders <cit.>.
According to the Alzheimer's Association report on the impact of this disease in the United States <cit.>, it is the sixth‐leading cause of death, and deaths from it have increased by more than 145 % in recent years. Moreover, it affects 6.7 million people aged 65 or older. Dreadfully, this number is predicted to grow to 13.8 million by 2060. The medical expenses of people aged 65 or older affected with dementia are three times greater than those of people without this condition, reaching 345 billion dollars so far in 2023. Overall, the World Health Organization estimates that 50 million people worldwide are affected by dementia, with 10 million new patients yearly[Available at <https://www.who.int/news-room/fact-sheets/detail/dementia>, May 2024.].
Clinical prognostication and early intervention, the most promising ways to address mental deterioration, rely on effective progression detection <cit.>. Among the benefits of early identification, care planning assistance, medical expense reduction, and the opportunity to receive the latest treatments, including non-invasive therapy, given the rapid biologic therapeutics advancements, stand out <cit.>. The social stigma and socioeconomic status must also be considered when accessing mental health services <cit.>. However, the latter early diagnosis is challenging since the symptoms can be confused with normal aging decline <cit.>. To address it, computational linguistics can be exploited <cit.>. Natural language analysis is particularly relevant, constituting a significant proportion of healthcare data <cit.>.
Particularly, impairment in language production mainly affects lexical (e.g., little use of nouns and verbs), semantic (e.g., the use of empty words like thing/stuff), and pragmatic (e.g., discourse disorganization) aspects <cit.>.
Digital and technological advances such as Artificial Intelligence (ai)-based systems represent promising approaches towards individuals' needs for personalized assessment, monitoring, and treatment <cit.>. Accordingly, these systems have the capabilities to complement traditional methodologies such as the Alzheimer’s Disease Assessment Scale-Cognition (adascog), the Mini-Mental State Examination (mmse), and the Montreal Cognitive Assessment (moca), which generally involve expensive, invasive equipment, and lengthy evaluations <cit.>. In fact, paper-and-pencil cognitive tests continue to be the most common approaches even though the latest advances in the Natural Language Processing (nlp) field enable easy screening from speech data while at the same time avoiding patient/physician burdening <cit.>. Summing up, language analysis can translate into an effective, inexpensive, non-invasive, and simpler way of monitoring cognitive decline <cit.> provided that spontaneous speech of cognitive impaired people is characterized by the aforementioned semantic comprehension problems and memory loss episodes <cit.>.
Consequently, Clinical Decision Support Systems (cdsss), Diagnostic Decision Support Systems (ddsss), and Intelligent diagnosis systems (idss) which apply ai techniques (e.g., Machine Learning - ml, nlp, etc.) to analyze patient medical data (i.e., clinical records, imaging data, lab results, etc.) and discover relevant patterns effectively and efficiently, have significantly attracted the attention of the medical and research community <cit.>. However, one of the main disadvantages of traditional approaches is their lack of semantic knowledge management and explicability capabilities <cit.>. The latter can be especially problematic in the medical domain regarding accountability of the decision process for the physicians to recommend personalized treatments <cit.>.
Integrating ai-based systems in conversational assistants to provide economical, flexible, immediate, and personalized health support is particularly relevant <cit.>. Their use has been greatly enhanced by the nowadays popular Large Language Models (llms), enabling dynamic dialogues compared to previous developments <cit.>. Subsequently, llms have been powered by the latest advancements in deep learning techniques and the availability of vast amounts of cross-disciplinary data <cit.>. These models represent the most innovative approach of ai into healthcare by expediting medical interventions and providing new markers and therapeutic approaches to neurological diagnosis from patient narrative processing <cit.>. Note that patient experience can also be improved with the help of llms in terms of information and support seeking <cit.>. Summing up, conversation assistants that leverage llms have the potential to monitor high-risk populations and provide personalized advice, apart from offering companion <cit.> constituting the future of therapy in the literature <cit.>.
Given the still poor accuracy of cdsss <cit.>, we plan to leverage an llm using the latest nlp techniques in a chatbot solution to provide interpretable ml prediction of cognitive decline in real-time. Linguistic-conceptual features are exploited for appropriate natural language analysis. The main limitation of llms is that their outcomes may be misleading. Thus, we apply prompt engineering to avoid the “hallucination" effect. Through explainability, we aim to fight potential biases of the models and improve their potential to help clinical workers in their diagnosis decisions. Summing up, we contribute with an affordable, non-invasive diagnostic system in this work.
The rest of this paper is organized as follows. Section <ref> reviews the relevant competing works on cognitive decline detection involving llms and interpretable ml predictions of mental deterioration. The contribution of this work is summarized in Section <ref>. Section <ref> explains the proposed solution, while Section <ref> describes the experimental data set, our implementations, and the results obtained. Finally, Section <ref> concludes the paper and proposes future research.
* Problem. The World Health Organization predicts a yearly increase of 10 million people affected with dementia.
* What is already known. Paper-and-pencil cognitive tests continue to be the most common approach. The latter is impractical, given the disease growth rate. Moreover, one of the main disadvantages of intelligent approaches is their lack of semantic knowledge management and explicability capabilities.
* What this paper adds. We leverage an llm using the latest nlp techniques in a chatbot solution to provide interpretable ml prediction of cognitive decline in real-time. To sum up, we contribute with an affordable, flexible, non-invasive, personalized diagnostic system to this work.
§ RELATED WORK
As previously mentioned, the main focus of dementia treatment is to delay the cognitive deterioration of patients <cit.>. Consequently, early diagnosis, which simultaneously contributes to reducing medical expenses in our aging society and avoiding invasive treatments with subsequent side effects on the users, is desirable <cit.>. To this end, ai has been successfully applied to idss in order to recommend treatments based on their diagnosis prediction <cit.>.
While ml models perform well and fast in diagnosis tasks, they require extensive training data previously analyzed by experts, which is labor-intensive and time-consuming <cit.>. In contrast, advanced nlp-based solutions exploit transformer-based models already trained with large corpora, including domain-related data, which results in very sensitive text analysis capabilities <cit.>. Consequently, transformer-based pre-trained language models (plms) (e.g., bert <cit.>, gpt-3 <cit.>) which preceded the popular llms (e.g., gpt-4[Available at <https://platform.openai.com/docs/models/gpt-4>, May 2024.]) have disruptively transformed the nlp research. These models exhibit great contextual latent feature extraction abilities from textual input <cit.>. The latter models are implemented to predict the next token based on massive training data, resulting in a word-by-word outcome <cit.>. Nowadays, they are used for various tasks, including problem-solving, question-answering, sentiment analysis, text classification, and generation, etc. <cit.>.
There exist plm versions over biomedical and clinical data such as Biobert <cit.>, Biogpt <cit.>, Bluebert <cit.>, Clinicalbert[Available at <https://github.com/EmilyAlsentzer/clinicalBERT>, May 2024.] and tcm-bert <cit.>. Open-domain conversational assistants, whose dialogue capabilities are not restricted to the conversation topic, exploit llms <cit.>. However, using llms for cognitive decline diagnosis is still scarce even though these models represent the most advanced way for clinical-patient communication using intelligent systems <cit.>. More in detail, they overcome the limitation of traditional approaches that lack semantic reasoning, especially relevant in clinical language <cit.>. Unfortunately, despite the significant advancement they represent, llms still exhibit certain limitations in open-domain task-oriented dialogues (e.g., medical use cases) <cit.>. For the latter, the Reinforcement Learning from Human Feedback (rlhf, i.e., prompt engineering) technique is applied to enhance their performance based on end users' instructions and preferences <cit.>.
Regarding the application of plm to the medical field, <cit.> performed two tasks: (i) dementia prediction and (ii) mmse score estimation from speech recordings combining acoustic features and text embeddings obtained with the bert model from their transcription. The input data correspond to cognitive tests (cts). <cit.> analyzed disfluencies (i.e., uh/um word frequency and speech pauses) with bert and ernie modes based on data from the Cookie Theft picture from the Boston Diagnostic Aphasia Exam. Close to the work by <cit.>, <cit.> analyzed the performance of bert model to extract embeddings in cognitive impairment detection from speech gathered during cts. <cit.> combined the Siamese bert networks (sberts) with ml classifiers to firstly extract the sentence embeddings and then predict Alzheimer’s disease from ct data. In contrast, <cit.> performed dementia detection combining ml, the bert model, and acoustic features to achieve improved performance. Moreover, <cit.> compared gpt-2 with its artificially degraded version (gpt-d) created with a dementia-related linguistic anomalies layer induction based on data from a picture description task, while <cit.> predicted dementia and cognitive score from ct data using gpt-3 exploiting both word embeddings and acoustic knowledge. Finally, <cit.> pre-trained the bert model with unstructured clinical notes from Electronic Health Records (ehrs) to detect mci to ad progression.
More closely related to our research is the work by <cit.>. The authors designed Pepper, a social robot with real-time conversational capabilities exploiting the Chatgpt gpt-3.5 model. However, the use case of the system is Autism Spectrum Disorder detection. Furthermore, <cit.> compared the diagnoses performance of different models of Chatgpt (i.e., ada, babbage, curie, davinci and gpt-3.5) with Google Bard and traditional ml approaches based on symptomatic data. The authors exploited prompt engineering to ensure appropriate performance when submitting clinical-related questions to the llm model. Moreover, <cit.> analyzed the diagnosis ability of Chatgpt gpt-3.5 model using clinical vignettes. Then, the llm was evaluated compared to physicians’ diagnosis. However, the authors again focus not on cognitive decline prediction but on ten common chief complaints. Consideration should be given to the work by <cit.>, who used Chatgpt (i.e., gpt-3.5 and gpt-4 models) and Google Bard to predict several neurodegenerative disorders based on clinical summaries in clinicopathological conferences without being a specific solution tailored for ad prediction. Finally, regarding conversational assistants that integrate llms, <cit.> developed a chatbot based on Chatgpt gpt-3.5 model to provide emotional support to caregivers (i.e., practical tips and shared experiences).
§.§ Contributions
As previously described, a vast amount of work in the state of the art exploits plms even in the clinical field <cit.>. However, scant research has been performed in the case of llm models. Table <ref> summarizes the reviewed diagnostic solutions that exploit llms in the literature. Note that explainability represents a differential characteristic of the solution proposed given the relevance of promoting transparency in ai-based systems <cit.>.
Given the comparison with competing works:
* Our system is the first that jointly considers the application of an llm over spontaneous speech and provides interpretable ml results for the use case of mental decline prediction.
* Our solution implements ml models in streaming to provide real-time functioning, hence avoiding the re-training cost of batch systems.
* In this work, we leverage the potential of llms by applying the rlhf technique through prompt engineering in a chatbot solution. Note that the natural language analysis is performed with linguistic-conceptual features. Consequently, we contribute with an affordable, non-invasive diagnostic system.
* Our system democratizes access to researchers and end users within the public health field to the latest advances in nlp.
§ METHODOLOGY
Figure <ref> depicts the system scheme proposed for real-time prediction of mental decline combining llms and ml algorithms with explainability capabilities. More in detail, it is composed of (i) data extraction employing nlp-based prompt engineering (Section <ref>); (ii) stream-based data processing including feature engineering, analysis and selection (Section <ref>); (iii) real-time classification (Section <ref>); and (iv) the explainability dashboard to provide visual and natural language descriptions of the prediction outcome (Section <ref>). Algorithm <ref> describes the complete process.
[Algorithm: Methodology]
§.§ Data extraction
The Chatgpt gpt-3.5 model used serves two purposes: (i) it enables a natural, free dialogue with the end users, and (ii) data is extracted due to its semantic knowledge management capabilities. The latter information is gathered once the conversation is concluded (either more than 3 minutes of inactivity or farewell detected) and used to compute the features used for classification (see Section <ref>). For this extraction, prompt engineering is exploited. The complete data extraction process is described in Algorithm <ref>.
[Algorithm: Data extraction]
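As a concrete illustration of this step, the sketch below submits a finished conversation transcript to the gpt-3.5 model with an engineered extraction prompt and parses the returned JSON. It is a minimal sketch assuming the OpenAI Python client; the prompt wording and the feature keys are illustrative placeholders rather than the paper's actual prompt.

# Hypothetical sketch of LLM-based feature extraction once a conversation ends.
# The prompt text and the feature keys are illustrative, not the paper's prompt.
import json
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

EXTRACTION_PROMPT = (
    "You are given a dialogue between an assistant and an elderly user. "
    "Return a JSON object with numeric scores in [0, 1] for: fluency, "
    "repetitiveness, fatigue, polarity, colloquial_register, disfluencies, "
    "short_responses."
)

def extract_features(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic output reduces hallucinated values
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)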
§.§ Stream-based data processing
Stream-based data processing encompasses feature engineering, analysis, and selection tasks to ensure the optimal performance of the ml classifiers.
§.§.§ Feature engineering
Table <ref> details the features used to predict mental decline. Note that conversational, emotional, and linguistic-conceptual features are computed. The conversational features[Features 9-10 are not computed using the llm.] (1-10) represent relevant semantic and pragmatic information related to the free dialogue (e.g., fluency, repetitiveness, etc.), while emotional features focus on the mental and physical state of the users. Finally, linguistic features represent lexical and semantic knowledge (e.g., disfluencies, placeholder words, etc.).
Furthermore, the system maintains a history of each user's data (i.e., past and current feature values), which enables the computation of four new characteristics for each feature in Table <ref>: average, q1, q2, and q3, as indicated in Equation (<ref>), where n is the user conversation counter and X[n] represents a particular feature with historical data.
∀ n ∈ {1, …, ∞}:
X[n] = {x[0], …, x[n]},
Y[n] = {y_0[n], y_1[n], …, y_n-1[n]} with y_0[n] ≤ y_1[n] ≤ … ≤ y_n-1[n] and ∀ x ∈ X[n], x ∈ Y[n],
avg^n[n] = (1/n) ∑_i=0^n y_i[n],
Q^n_1[n] = y_(1/4)n[n],  Q^n_2[n] = y_(2/4)n[n],  Q^n_3[n] = y_(3/4)n[n].
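A minimal sketch of how these per-user historical statistics could be maintained in a streaming fashion is shown below; the dictionary-based storage and the full re-sorting at every update are illustrative simplifications, not the authors' implementation.

# Hypothetical sketch: per-user history of a feature and its derived statistics
# (average and quartiles), recomputed whenever a new conversation arrives.
from collections import defaultdict

history = defaultdict(list)  # (user_id, feature_name) -> list of past values

def update_stats(user_id: str, feature: str, value: float) -> dict:
    values = history[(user_id, feature)]
    values.append(value)
    ordered = sorted(values)   # Y[n]: the sorted view of the history X[n]
    n = len(ordered)
    return {
        f"{feature}_avg": sum(ordered) / n,
        f"{feature}_q1": ordered[n // 4],
        f"{feature}_q2": ordered[n // 2],
        f"{feature}_q3": ordered[(3 * n) // 4],
    }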
§.§.§ Feature analysis & selection
Feature analysis and selection tasks are necessary to optimize the performance of the ml classifiers. These tasks are even more important in the streaming scenario where samples arrive at a real-time pace. The latter means that the classification problem layout (e.g., the most relevant features) may vary over time.
The proposed system follows two thresholding strategies for feature analysis and selection based on cut-off points regarding correlation and variance values to remove irrelevant features. The former, correlation analysis, limits the number of features to extract the most relevant characteristics. For the latter variance analysis, the number of features selected is dynamically established in each interaction of the stream-based model, selecting those that meet the threshold criteria.
Algorithm <ref> details the data processing stage, including feature engineering, analysis, and selection.
[Algorithm: Data processing]
§.§ Stream-based classification
Two classification scenarios are considered:
* Scenario 1 analyzes the behavior of the classifiers in a streaming setting. Under this consideration, sequential and continual testing and training over time is assumed.
* Scenario 2 analyzes the models' performance under more realistic conditions. Thus, the testing is continuous (i.e., in streaming) while training is performed desynchronized in blocks of 100 samples.
The following ml models are selected based on their good performance in similar classification problems <cit.>:
* Gaussian Naive Bayes (gnb) <cit.> exploits the Gaussian probability distribution in a stream-based ml model. It is used as a reference for performance analysis.
* Approximate Large Margin Algorithm (alma) <cit.> is a fast incremental learning algorithm comparable to Support Vector Machine to approximate the maximal margin between a hyperplane concerning a norm (with a value of p ≥ 2) for a set of linearly separable data.
* Hoeffding Adaptive Tree Classifier (hatc) <cit.> computes single-tree branch performance and is designed for stream-based prediction.
* Adaptive Random Forest Classifier (arfc) <cit.> constitutes an advanced model of hatc in which branch performance is computed by majority voting in an ensemble tree scenario.
Algorithm <ref> describes the stream-based prediction process.
[Algorithm: Classification]
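A compact sketch of the first evaluation scenario (test-then-train on every arriving sample) is given below. It assumes River 0.11-style class and parameter names, an iterable stream of (features, label) pairs, and the hyper-parameters later selected for the variance-thresholding setup; it is an illustration rather than the exact evaluation code.

# Hypothetical prequential (test-then-train) loop for scenario 1 with River's
# Adaptive Random Forest classifier.
from river import ensemble, metrics

model = ensemble.AdaptiveRandomForestClassifier(
    n_models=100, max_features="sqrt", lambda_value=50, seed=42
)
metric = metrics.Accuracy()

def run_stream(stream):  # stream yields (features_dict, label) pairs
    for x, y in stream:
        y_pred = model.predict_one(x)  # test first ...
        if y_pred is not None:
            metric.update(y, y_pred)
        model.learn_one(x, y)          # ... then train on the same sample
    return metric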
§.§ Explainability dashboard
Prediction transparency is promoted through explainability data provided to the end users regarding the features that are most relevant to the prediction outcome. Thus, those relevant features are included in the natural language description of the decision path. The five features whose absolute value or variance is highest and whose values are the most distant from the average are selected. In the case of the counters (features 9-10), this average is obtained from the average over all users in the system.
§ EVALUATION AND DISCUSSION
This section discusses the experimental data set used, the implementation decisions, and the results obtained. The evaluations were conducted on a computer with the following specifications:
* Operating System: Ubuntu 18.04.2 LTS 64 bits
* Processor: Intel Core i9-10900K @ 2.80 GHz
* RAM: 96 GB DDR4
* Disk: 480 GB NVMe + 500 GB SSD
§.§ Experimental data set
The experimental data set[Data are available on request from the authors.] consists of conversations with an average of 6.92±3.08 utterances and 62.73±57.20 words, involving 44 users with 13.66±7.86 conversations per user. The distribution of mental deterioration in the experimental data set is 238 samples in which mental deterioration is present and 363 in which it is absent. Figure <ref> depicts the histogram distribution of words and interactions for absent and present mental deterioration, respectively. While the distributions of the number of interactions in the absence or presence of cognitive impairment follow a normal function, the number of words can be approximated by a positive normal distribution centered on 0. The most relevant observation is that, as expected, users with mental deterioration present a lower number of interactions and a significant decrease in the number of words used in their responses.
§.§ Data extraction
Data to engineer the conversational (1-8), emotional, and linguistic features in Table <ref> were obtained with the gpt-3.5 model[Available at <https://platform.openai.com/docs/models/gpt-3-5>, May 2024.]. The prompt used is shown in Listing <ref>.
§.§ Stream-based data processing
This section reports the algorithms used for feature engineering, analysis, and selection and their evaluation results.
§.§.§ Feature engineering
A total of 88 features were generated[Four new characteristics (average, q1, q2, and q3) for each of the 22 features in Table <ref>.] in addition to the 22 features generated in each conversation (see Table <ref>), resulting in 110 features. In Figure <ref>, we show the distribution of conversations by user, which approaches a uniform density function, with the large majority concentrated between 15 and 20 conversations.
§.§.§ Feature analysis & selection
Correlation and variance thresholding decisions were based on experimental tests. For the correlation thresholding, SelectKBest[Available at <https://riverml.xyz/0.11.1/api/feature-selection/SelectKBest>, May 2024.] was applied using the Pearson correlation coefficient <cit.>. The K value corresponds to the most relevant features computed over 80 % of the experimental data. Table <ref> shows the features with a correlation value greater than 0.2 with the mental deterioration target when the last sample entered the stream-based classification model.
Regarding the variance thresholding, the implementation used was VarianceThreshold[Available at <https://riverml.xyz/0.11.1/api/feature-selection/VarianceThreshold>, May 2024.] from the River library[Available at <https://riverml.xyz/0.11.1>, May 2024.]. Moreover, the cut-off point, 0.001, was computed as the 10th percentile of the variance values of the features contained in the first 20 % of the experimental data set, which acts as the cold start of this method. Consequently, only those features that exceed the abovementioned cut-off are selected as relevant for classification purposes. Table <ref> also details the features with a variance greater than 0.5[Note that we have discarded features 9 and 10 from Table <ref> from this example since they represent counters and their variance is always greater than 1.].
Table <ref> shows that among the conversational features, user initiative (feature 6 in Table <ref>) plays an important role. The same applies to the number of interactions within a dialogue (feature 9). Regarding emotional features, consideration should be given to fatigue (feature 12) and polarity (feature 14). Finally, using a colloquial/formal registry (features 16/19), disfluency (feature 18), and short responses (feature 22) stand out among linguistic characteristics. Considering correlation and variance analysis jointly, initiative and polarity are the most relevant data for prediction purposes.
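The two thresholding strategies described above can be reproduced, roughly, with River's built-in selectors, as in the sketch below; the value of K is an illustrative placeholder, while the variance cut-off mirrors the one reported here.

# Hypothetical sketch of the two stream-based feature-selection strategies with
# River 0.11-style selectors; x is a dict of features and y the binary label.
from river import feature_selection, stats

# Correlation thresholding: keep the K features most correlated with the target.
corr_selector = feature_selection.SelectKBest(similarity=stats.PearsonCorr(), k=20)

# Variance thresholding: keep features whose running variance exceeds the cut-off.
var_selector = feature_selection.VarianceThreshold(threshold=0.001)

def select(x: dict, y: int) -> dict:
    corr_selector.learn_one(x, y)
    var_selector.learn_one(x)
    return var_selector.transform_one(x)  # or corr_selector.transform_one(x)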
§.§ Stream-based classification
The River implementations of the ml models selected are: gnb[Available at <https://riverml.xyz/dev/api/naive-bayes/GaussianNB>, May 2024.], alma[Available at <https://riverml.xyz/0.11.1/api/linear-model/ALMAClassifier>, May 2024.], hatc[Available at <https://riverml.xyz/0.11.1/api/tree/HoeffdingAdaptiveTreeClassifier>, May 2024.] and arfc[Available at <https://riverml.xyz/0.11.1/api/ensemble/AdaptiveRandomForestClassifier>, May 2024.]. Listings <ref>, <ref> and <ref> detail the hyper-parameter optimization ranges used, excluding the baseline model, from which the following values were selected as optimal:
* Correlation thresholding
* ALMA: alpha=0.5, B=1.0, C=1.0.
* HATC: depth=None, tiethreshold=0.5, maxsize=50.
* ARFC: models=10,features=5, lambda=50.
* Variance thresholding
* ALMA: alpha=0.5, B=1.0, C=1.0.
* HATC: depth=None, tiethreshold=0.5, maxsize=50.
* ARFC: models=100,features=sqrt, lambda=50.
Listing: alma hyper-parameter configuration.
alpha = [0.5, 0.7, 0.9]
B = [1.0, 1.41, 1.2]
C = [1.0, 1.11, 1.2]

Listing: hatc hyper-parameter configuration.
depth = [None, 50, 200]
tiethreshold = [0.5, 0.05, 0.005]
maxsize = [50, 100, 200]

Listing: arfc hyper-parameter configuration.
models = [10, 25, 100]
features = [sqrt, 5, 50]
lambda = [25, 50, 100]
Table <ref> presents the results for evaluation scenarios 1 and 2. In both scenarios, the feature selection methodology based on correlation thresholding returns lower classification metric values than those obtained with the variance method. Thus, once the variance feature selection method is applied, the arfc is the most promising performance algorithm regardless of the evaluation scenario.
Consideration should be given to the fact that even in scenario 2, in which training is performed desynchronized and in batch, the robustness of arfc stands out, with classification results exceeding 80 % and a recall for the mental deterioration class of about 85 %.
Given that our system operates in streaming, and to enable a direct comparison with batch ml solutions, additional evaluation measures from 10-fold cross-validation are provided, particularly for Random Forest (rf[Available at <https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html>, May 2024.]), the batch model equivalent to the best stream-based model, arfc. The results are displayed in Table <ref>, most surpassing the 90 % threshold. Note that the increase in performance compared to streaming operation (e.g., +8.37 points in accuracy) derives from the fact that in batch classification, the model has access to 90 % of the experimental data for training. In contrast, stream-based classification relies on the ordered incoming new samples, which is more demanding. Consequently, having achieved a comparable performance in batch and stream-based classification is noteworthy.
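The batch reference values can be obtained with a standard 10-fold cross-validation along the lines of the sketch below; the feature matrix construction, the estimator settings, and the scoring set are assumptions for illustration.

# Hypothetical 10-fold cross-validation of the batch Random Forest baseline.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

def evaluate_batch(X, y):  # X: feature matrix, y: binary mental-deterioration labels
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    scores = cross_validate(
        clf, X, y, cv=10, scoring=("accuracy", "precision", "recall", "f1")
    )
    return {name: values.mean() for name, values in scores.items()}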
To verify the system's operation in a more challenging scenario, we have experimented with a data set from a previous study <cit.> with fewer interactions per session. Even when the system is fed with less information, the evaluation metrics are promising, as shown in Table <ref>, with all values above 70 % and the precision and recall of the mental deterioration category above 80 %. Comparing the rf batch model in our past research <cit.> with the proposed arfc algorithm, which operates in streaming, the improvement reaches 10 points and 4 points in the recall metric of the mental deterioration and absence of mental deterioration categories, respectively.
§.§ Explainability dashboard
Figure <ref> shows the explainability dashboard. In this example, the variation in the prediction of cognitive impairment is visualized, considering two weeks of past data. This variation is represented with the prediction confidence returned by the arfc algorithm. At the bottom, the most relevant features are displayed. Each feature card contains the identifier and the statistic represented, in colors following this scheme: 1 to 0.5 in green, 0.5 to 0.25 in yellow, and 0.25 to 0 in red. This assignment is inverted for negative values. Below each card, a brief description in natural language is provided. The average accumulated value and the prediction confidence of the current sample are displayed on the right.
§ CONCLUSIONS
Cognitively impaired users find it difficult to perform daily tasks with the consequent detrimental impact on their life quality. Thus, progression detection and early intervention are essential to effectively and timely address mental deterioration to delay its progress. In this work, we focused on impairment in language production (i.e., lexical, semantic, and pragmatic aspects) to engineer linguistic-conceptual features towards spontaneous speech analysis (e.g., semantic comprehension problems, memory loss episodes, etc.). Compared to traditional diagnostic approaches, the proposed solution has semantic knowledge management and explicability capabilities thanks to integrating an llm in a conversational assistant.
Consideration should be given to the limitations of using llms, which are transversal into the healthcare field beyond mental deterioration detection. The potential biases and lack of inherent transparency stand out among the risks of applying these models for medical purposes. The latter black-box problem, also present in traditional opaque ml models, is particularly critical in the healthcare field by negatively impacting the decision process of physicians due to their limited corrective capabilities and even the end users, limiting their trust in medical applications. Moreover, these systems' current limited memory management capability is worth mentioning, which prevents the realization of longitudinal clinical analysis. The same applies to the associated complexity of context information management. Ultimately, the difficulty in collecting data due to the sensitivity and confidentiality of the information in the medical field should also be mentioned.
More in detail, the solution provides interpretable ml prediction of cognitive decline in real-time. rlhf (i.e., prompt engineering) and explainability are exploited to avoid the “hallucination" effect of llms and avoid potential biases by providing natural language and visual descriptions of the diagnosis decisions. Note that our system implements ml models in streaming to provide real-time functioning, hence avoiding the re-training cost of batch systems.
Summing up, we contribute with an affordable, flexible, non-invasive, personalized diagnostic system that enables the monitoring of high-risk populations and offers companionship. Ultimately, our solution democratizes access to researchers and end users within the public health field to the latest advances in nlp.
Among the challenges and potential ethical concerns raised by the application of ai into the healthcare field, the double effect principle must be considered. In this sense, few can deny its promising potential to provide innovative treatments while at the same time presenting safety-critical concerns, notably regarding their interpretability. Apart from the algorithmic transparency mentioned, the main considerations are privacy and safety of the medical data, fairness, and autonomous decision-making without human intervention. In future work, we plan to test the performance of new approaches, such as reinforcement learning, to enhance the system's personalizing capabilities further. Moreover, we will explore co-design practices with end users, and we seek to move our solution to clinical practice within an ongoing project with daycare facilities. Note that reinforcement learning with human feedback will also allow us to mitigate some of the limitations discussed, such as physicians' lack of interpretability and corrective capabilities. The latter will also have a positive ethical impact on the deployment of llm-based medical applications by ensuring fairness. The societal impact derived from reduced costs compared to traditional approaches may result in broader accessibility to clinical diagnosis and treatment on a demand basis. The equity will be impulsed by the capability of these systems to provide unlimited personalized support. In future research, we will work on mitigating health inequities by performing longitudinal studies to measure bias in our ai solution, particularly related to the algorithm design, bias in the training data, and the ground truth. Underperformance in certain social groups may also be considered. For that purpose, we will gather social context data, which will allow us to measure equity (e.g., gender, race, socioeconomic status, etc.). To ensure patient data protection while at the same time increasing data available for research, federated learning approaches will be explored.
§ DECLARATIONS
§.§ Competing interests
The authors have no competing interests to declare relevant to this article's content.
§.§ Funding
This work was partially supported by (i) Xunta de Galicia grants ED481B-2022-093 and ED481D 2024/014, Spain; and (ii) University of Vigo/CISUG for open access charge.
§.§ Authors contribution
Francisco de Arriba-Pérez: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization, Supervision, Project administration, Funding acquisition. Silvia García-Méndez: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization, Supervision, Project administration, Funding acquisition.
|
http://arxiv.org/abs/2409.03394v1 | 20240905100641 | Partitioning 2-edge-coloured bipartite graphs into monochromatic cycles | [
"Fabrício Siqueira Benevides",
"Arthur Lima Quintino",
"Alexandre Talon"
] | math.CO | [
"math.CO",
"cs.DM",
"05C15, 05C35, 05C70",
"G.2.2"
] |
Partitioning 2-edge-coloured bipartite graphs into monochromatic cycles
Fabrício Siqueira Benevides, Arthur Lima Quintino, Alexandre Talon
§ ABSTRACT
Given an r-colouring of the edges of a graph G, we say that it can be partitioned into p monochromatic cycles when there exists a set of p vertex-disjoint monochromatic cycles covering all the vertices of G. In the literature of this problem, an edge and a single vertex both count as a cycle.
We show that for every 2-colouring of the edges of a complete balanced bipartite graph, K_n,n, it can be partitioned into at most 4 monochromatic cycles. This type of question was first studied in 1970 for complete graphs and in 1983, by Gyárfás and Lehel, for K_n,n. In 2014, Pokrovskiy showed that any 2-colouring of the edges of K_n,n can be partitioned into at most 3 monochromatic paths. It turns out that finding monochromatic cycles instead of paths is a natural question that has also been asked for other graphs. In 2015, Schaudt and Stein showed that at most 14 cycles suffice.
§ INTRODUCTION
An r-colouring of a graph G is a colouring of its edges with up to r different colours. For such an r-colouring, we say that the coloured graph is partitioned into p monochromatic cycles when there exists a set of p vertex-disjoint monochromatic cycles covering all the vertices of G. We are interested in the cycle partitioning number of G, that is, the smallest number, _r(G), such that every r-colouring of G can be partitioned into at most _r(G) monochromatic cycles. In the literature of this problem, an edge and a single vertex both count as a cycle.
In the 1970’s, Lehel made an influential conjecture stating that when the edges of a complete graph K_n are coloured with two colours, it can be partitioned into at most two monochromatic cycles. Łuczak, Rödl and Szemerédi <cit.> confirmed this conjecture for sufficiently large complete graphs, after preliminary work of Gyárfás <cit.>. In 2010, Bessy and Thomassé <cit.> proved it for every complete graph, with a surprisingly short proof. For fixed r, the best upper bound, to our knowledge, is _r(K_n) ≤ 100r log r, for n large enough (see <cit.>).
In the 2-colour case, if we replace cycles by paths, the problem becomes much easier. As noted by Gerencsér and Gyárfás <cit.> in 1967: the vertex set of any 2-edge-coloured complete graph can be partitioned into two paths of different colours. To prove it, take any red-blue colouring of K_n and suppose that R and B are vertex disjoint red and blue paths, with respective endpoints r and b. Let v ∉V(R)∪ V(B). If vr is red or vb is blue then we can extend R or B accordingly. Otherwise, without loss of generality rb is red and {R∪ rb ∪ bv, B-{b}} is a pair of red-blue paths covering one extra vertex.
The general problem has also been studied for the case where the base graph G is a fair complete k-partite graph and for graphs with large minimum degree in general. We point to <cit.> for a (2016) survey and call attention to a couple of (older and newer) results.
Let us now define the basic concepts that we will use throughout this paper.
We say that a path is simple when it is the union of a blue path, v_1v_2… v_i and a red path, v_iv_i+1… v_k. We call v_i the turning point of the path.
We say that a bipartite graph is balanced when its partition classes have the same number of vertices.
Let G be a bipartite graph with partition classes X and Y, whose edges are coloured red and blue. The colouring on G is split when X and Y can each be partitioned into two non-empty sets, X=X_1∪ X_2 and Y=Y_1∪ Y_2, such that all edges between X_i and Y_j are red for i=j and blue for i≠ j. (Figure <ref>).
In 1983, Gyárfás and Lehel <cit.> showed that every balanced complete bipartite graph with a two-colouring that is not split contains two vertex-disjoint monochromatic paths that cover all but at most one of its vertices. This result was later extended by Pokrovskiy.
Let G be a balanced complete bipartite graph whose edges are coloured red and blue. There is a vertex-partition of G into two monochromatic paths with different colours if, and only if, the colouring on G is not split.
Let G be a complete bipartite graph with a split colouring of its edges into red and blue. The vertices of G cannot be partitioned into two monochromatic cycles of different colours.
By <Ref> (or by a very simple argument), the graph G cannot be partitioned into two paths of different colours. Therefore, it cannot be partitioned into two cycles of different colours either.
However, for split colourings the following is (trivially) true (but we include a proof for completeness).
Let G be a balanced complete bipartite graph whose edges are coloured red and blue. If the colouring on G is split, then G can be vertex-partitioned into at most 3 monochromatic cycles. Furthermore, two cycles suffice if and only if |X_1| = |Y_1| or |X_1| = |Y_2|.
Let X and Y be the partition classes of G. And X = X_1∪ X_2, Y = Y_1∪ Y_2 as in the definition of split colouring (<Ref>). We may assume without loss of generality that |X_1|≥ |Y_1|. As |X|=|Y|, we have |X_1|-|Y_1|=|Y_2|-|X_2| and, therefore, |X_2|≤ |Y_2|. Hence, G can be vertex-partitioned into a red cycle that covers all vertices in Y_1 and |Y_1| vertices in X_1, a red cycle that covers all vertices in X_2 and |X_2| vertices in Y_2 and a blue cycle that covers all the remaining vertices in X_1 and Y_2. And clearly, if |X_1| = |Y_1|, then |X_2| = |Y_2| and two cycles (of same colour) suffice.
On the contrary, suppose that two monochromatic cycles suffice. By <Ref> they must be of the same colour. Suppose, without loss of generality that both are red. Each cycle must be entirely contained in X_1∪ Y_1 or X_2∪ Y_2. And because all the sets X_1, X_2, Y_1 and Y_2 are non-empty, one cycle must cover vertices in X_1∪ Y_1 and the other in X_2∪ Y_2. And each cycle covers as many vertices in X as in Y. Therefore |X_1| = |Y_1| and |X_2| = |Y_2|. Note: similarly blue cycles suffice if and only if |X_1| = |Y_2| and |X_2| = |Y_1|.
A natural conjecture would be that every 2-colouring of a balanced bipartite graph that is not split can be partitioned into at most 2 monochromatic cycles. In <Ref> we give an example of a colouring that is not split and needs at least 3 cycles.
For upper bounds, using the regularity method, Schaudt and Stein <cit.> showed that for every k, a 2-edge-coloured sufficiently large k-partite graph G, such that no partition class of G contains more than half of its vertices, can be partitioned into at most 14 monochromatic cycles. For k=2, they proved that this can be done with at most 12 monochromatic cycles.
Our main contribution is to reduce this number to only 4 monochromatic cycles, for any balanced bipartite graph. Also, note that our result does not require G to be large.
If G is a balanced complete bipartite graph whose edges are coloured red and blue, then G can be partitioned into at most 4 monochromatic cycles.
§.§ An alternative short proof of Theorem <ref>
To make the results of this paper self-contained, we provide a proof for <Ref>, more direct than the original one in <cit.>. It is worth noting that a short proof of this result is also presented in <cit.> (section 4.2).
We argue that if G does not have a Hamiltonian simple path, then the colouring on G is split.
Let P=(r_s,…,r_1,x,b_1,…,b_t) be a simple path of maximum size in G, where x is its turning point and r_s and b_t are its red and blue endpoints respectively. If P is not Hamiltonian, then we have the following three cases to analyze.
* Its endpoints belong to different partition classes. (See Figure <ref>.)
Since G is balanced, we know that there is at least one vertex that does not belong to P in each partition class. Hence, let y∉ V(P) be a vertex such that x and y belong to different partition classes. We may assume without loss of generality that the edge xy is red. Hence, the simple path (y,x,r_1,…,r_s,b_t,…,b_1) is bigger than P, a contradiction.
* Its endpoints and its turning point belong to the same partition class. (See Figure <ref>.)
Let y∉ V(P) be a vertex that belongs to the other partition class. We may assume without loss of generality that the edge xy is red. Hence, the simple path (r_s,…,r_1,x,y,b_t,…,b_1) is bigger than P, a contradiction.
* Its endpoints belong to the same partition class and its turning point belongs to the other. (See Figure <ref>.)
Let X and Y be the partition classes of G. In order to prove that the colouring on G is split, we shall define hereafter sets X_1, X_2, Y_1 and Y_2 satisfying all conditions of Definition <ref>. For this, we assume without loss of generality that x∈ X.
Firstly, let X_1=X∩ V(P), Y_1'=Y∩ V(P_R) and Y_2'=Y∩ V(P_B). We want to show that the edges between X_1 and Y_1' are red, whereas the edges between X_1 and Y_2' are blue. Indeed, the edge r_sx is red, since otherwise the simple path (r_1,…,r_s,x,b_1,…,b_t) would satisfy case (B), with turning point r_s. Hence,
for each r_i∈ X_1 , the edge r_ib_t is blue,
since otherwise the simple path (r_i-1,…,r_1,x,r_s,…,r_i,b_t,…,b_1) would satisfy case (B), with turning point b_t. Now, suppose by way of contradiction that the edge r_jx is blue for some fixed r_j∈ Y_1', with 1<j<s. Hence, let y∈ X∖ V(P). Then,
for each r_i∈ Y_1' , the edge yr_i is blue,
since otherwise the simple path (y,r_i,…,r_1,x,r_s,…,r_i+1,b_1,…,b_t) would be bigger than P. In particular, we know by (<ref>) that the edge r_j-1b_t is blue and by (<ref>) that the edges yr_1 and yr_j are blue too. Hence, we see that G[V(P)∪{ y }] can be vertex-partitioned into a red path with an even size and a simple cycle whose red and blue paths also have an even size, namely into the red path (r_j+1,…,r_s) and the simple cycle whose red and blue paths are (r_1,…,r_j-1) and (r_1,y,r_j,x,b_1,…,b_t,r_j-1) respectively. Now, consider one such vertex-partition 𝒫 of G[V(P)∪{ y }] in which the size of the blue path of the simple cycle is maximum. Let (u_1,…,u_k) be the red path and (v_1,…,v_l) and (v_l,w_1,…,w_m,v_1) be respectively the red and blue paths of the simple cycle of 𝒫 (see Figure <ref>). Since all paths have an even size, we know that each path has an endpoint in each partition class of G. Without loss of generality, we may assume then that u_1 and v_1 belong to the same partition class and, therefore, that u_k and v_l belong to the other. The edges u_1v_l and u_kv_1 cannot be both blue, since then G[V(P)∪{ y }] could be vertex-partitioned into the red path (v_2,…,v_l-1) and the simple cycle (u_1,…,u_k,v_1,w_m,…,w_1,v_l), whose blue path is bigger than that of the simple cycle of 𝒫. Without loss of generality, we may assume then that the edge u_kv_1 is red. Hence, the simple path (u_1,…,u_k,v_1,…,v_l,w_1,…,w_m) is bigger than P, a contradiction. Thus, for each r_i∈ Y_1', the edge r_ix is red. Analogously, for each b_i∈ Y_2', the edge b_ix is blue. In other words, we just showed that the edges between x and Y_1' are red, whereas the edges between x and Y_2' are blue. Furthermore, we know by (<ref>) that each r_i∈ X_1 is the turning point of a simple path Q such that X_1=X∩ V(Q), Y_1'=Y∩ V(Q_R) and Y_2'=Y∩ V(Q_B), namely of the simple path Q=(r_i-1,…,r_1,x,r_s,…,r_i,b_t,…,b_1). Analogously, we know that each b_i∈ X_1 also is the turning point of a simple path with such property. Hence, we may conclude that the edges between X_1 and Y_1' are red, whereas the edges between X_1 and Y_2' are blue, which is what we wanted to show.
Now, let X_2=X∖ V(P), Y_1”={ z∈ Y∖ V(P)the edge xz is red} and Y_2”={ w∈ Y∖ V(P)the edge xw is blue}. By (<ref>), we know that for each y∈ X_2 and each r_i∈ Y_1', the edge yr_i is blue. Analogously, for each y∈ X_2 and each b_i∈ Y_2', the edge yb_i is red. For each y∈ X_2 and each z∈ Y_1”, the edge yz is blue, since otherwise the simple path (y,z,x,r_1,…,r_s,b_2,…,b_t) would be bigger than P. Analogously, for each y∈ X_2 and each w∈ Y_2”, the edge yw is red. For each z∈ Y_1” and each r_i∈ X_1, the edge r_iz is red, since otherwise the simple path (r_s,…,r_i+1,r_i-2,…,r_1,x,z,r_i,b_1,…,b_t) would satisfy case (B), with turning point z. Analogously, for each w∈ Y_2” and each b_i∈ X_1, the edge b_iw is blue. For each z∈ Y_1” and each b_i∈ X_1, the edge b_iz is red, since otherwise the simple path (r_s,…,r_1,x,z,b_i,…,b_1,b_i+2,…,b_t) would satisfy case (B), with turning point z. Analogously, for each w∈ Y_2” and each r_i∈ X_1, the edge r_iw is blue.
Thus the sets X_1, X_2, Y_1=Y_1'∪ Y_1” and Y_2=Y_2'∪ Y_2” satisfy all conditions of Definition <ref>, and so the colouring on G is split.
§ PROVING THE MAIN THEOREM
Thanks to <Ref>, we only have to prove <Ref> when the given colouring is not split. Our proof is self-contained apart from the following result.
Let G be a balanced complete bipartite graph whose edges are coloured red and blue. If the colouring on G is not split, then G can be vertex-partitioned into a monochromatic path and a monochromatic cycle with different colours.
We observe that the above is a corollary to <Ref>, and follows from a very short case analysis on the parity of the paths given by <Ref>. This is done in detail in Section 4.1 of <cit.>.
We also note that Section 4.2 of <cit.> presents an alternative proof of <Ref> that is also short and self-contained.
In view of <Ref>, it is enough to prove the following lemma.
Let G be a balanced complete bipartite graph whose edges are coloured red and blue. If G has a monochromatic Hamiltonian path, then G can be vertex-partitioned into at most 3 monochromatic cycles.
In fact, let H be any balanced complete bipartite graph with a two-colouring on its edges. When we apply <Ref> to H, the monochromatic cycle we obtain must be even because H is bipartite. And since H is balanced, the monochromatic path given by <Ref> is also even. So its vertices induce a balanced complete bipartite subgraph of H that contains a monochromatic Hamiltonian path.
<Ref> is best possible, in the sense that even with the extra condition that the colouring of the bipartite graph contains a monochromatic Hamiltonian path, there are 2-colourings that cannot be partitioned into 2 monochromatic cycles, as illustrated below.
Let G be a balanced complete bipartite graph on 2n vertices, with bipartition S = {x_1, …, x_n}∪{y_1,…,y_n}. Take the colouring where:
* x_1y_1, x_2y_2 and all edges of the form x_1y_i and y_1x_i for i ≥ 3 are blue;
* the edges x_2y_i for i ≥ 4 are blue;
* all the other edges are red.
Then G contains a Hamiltonian red path and cannot be partitioned into two monochromatic cycles.
Since the graph induced by {x_3, …, x_n}∪{y_3,…,y_n} has all edges coloured red, there is a path P starting in x_3 and ending in y_3 and passing through all vertices of this set. The path x_1y_2x_3Py_3x_2y_1 is then a red Hamiltonian path of G.
Now we show that G cannot be partitioned into 2 monochromatic cycles. First, the vertex y_1 cannot belong to a cycle with more than two vertices: such a cycle would be blue, because only one red edge is incident to y_1, and one of the two neighbours of y_1 on the cycle would then be some x_i with i ≥ 3; but such an x_i has only one incident blue edge, towards y_1, so the cycle could not continue further. If instead y_1 forms a cycle on its own, then, since the graph is bipartite, the 2n-1 remaining vertices cannot form a single cycle.
Let us now assume we have a partition into two monochromatic cycles. By the above, one cycle must be an edge-cycle containing y_1 and exactly one of the x_l's. The other cycle must contain y_2, therefore it cannot be blue since only one blue edge is incident to that vertex. So it needs to be a red cycle, and then cannot contain x_1 (only one incident red edge), hence the first cycle is x_1y_1. Then this second cycle, which is red, must contain both x_2 and y_2, but x_2 has only one incident red edge in G ∖{x_1, y_1}, so this is a contradiction.
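The construction above is concrete enough to be checked mechanically for small n. The following Python sketch is purely illustrative and not part of the proof: it builds the colouring described above, verifies one explicit choice of the red Hamiltonian path x_1y_2x_3Py_3x_2y_1, and confirms the degree facts the argument relies on (y_1 has a single incident red edge, y_2 a single incident blue edge, and x_1 a single incident red edge).

# Illustrative sanity check of the construction above (not part of the proof).
# The edge x_i y_j is encoded by the pair (i, j).

def colour(i, j):
    """Colour of the edge x_i y_j in the construction (assumes n >= 4)."""
    if (i, j) in {(1, 1), (2, 2)}:
        return "blue"
    if i == 1 and j >= 3:          # edges x_1 y_j for j >= 3
        return "blue"
    if j == 1 and i >= 3:          # edges y_1 x_i for i >= 3
        return "blue"
    if i == 2 and j >= 4:          # edges x_2 y_j for j >= 4
        return "blue"
    return "red"                   # all remaining edges

def red_hamiltonian_path(n):
    """One explicit choice of the path x_1 y_2 x_3 P y_3 x_2 y_1."""
    path = [("x", 1), ("y", 2)]
    for t in range(3, n):
        path += [("x", t), ("y", t + 1)]
    path += [("x", n), ("y", 3), ("x", 2), ("y", 1)]
    return path

def check(n):
    path = red_hamiltonian_path(n)
    assert len(path) == 2 * n and len(set(path)) == 2 * n             # Hamiltonian
    for (s, a), (_, b) in zip(path, path[1:]):                        # all edges red
        i, j = (a, b) if s == "x" else (b, a)
        assert colour(i, j) == "red"
    assert sum(colour(i, 1) == "red" for i in range(1, n + 1)) == 1   # y_1
    assert sum(colour(i, 2) == "blue" for i in range(1, n + 1)) == 1  # y_2
    assert sum(colour(1, j) == "red" for j in range(1, n + 1)) == 1   # x_1
    print(f"n = {n}: construction verified")

check(6)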
Next we present the structure of the proof of <Ref>. In all that follows, we consider a balanced complete bipartite graph G whose edges are coloured red and blue and contains a red Hamiltonian path. We also consider that this colouring is not split, and we want to show that there is partition of G into at most 3 monochromatic cycles.
Our idea is to search for more structure in the above graph G. Let us say that each partition class of G has n vertices. We will label its vertices in a particular way. Then, in order to prove <Ref>, we use induction on a parameter k, 1≤ k ≤ n, with a well chosen hypothesis. We will try to show the existence of a specific subgraph, which we will define later as a blue even plait: the first k/2 vertices with even indices from both classes (of the bipartition) induce a complete blue bipartite graph, and the same holds for the first k/2 vertices with odd indices. This exists for k=1, and we will show that if G cannot be partitioned into three monochromatic cycles and the induction hypothesis holds for k, then it also holds for k+1. Iterating this gives our result, because a blue even plait can trivially be partitioned into 2 blue cycles. The proof mostly consists in looking at specific edges and proving that they are blue (though we also need to argue that a few edges are red), with the goal of increasing the order of the blue even plait.
We split this section into two subsections. Subsection <ref> contains some preliminary results that we will use many times and help us to organise the other subsection in a concise way. Subsection <ref> is devoted to the main proof of <Ref>, which is longer. The preliminary results include a lemma with a short inductive proof which suffices to deduce a weakening of <Ref> with 4 cycles, which in turn (by the above arguments) implies a weakening of <Ref> with 5 cycles.
§.§ Preliminary structural results
Throughout the rest of this paper we deal with balanced complete bipartite graphs whose edges are coloured red and blue that have a monochromatic Hamiltonian path. Therefore, in order to avoid exhaustive repetitions, it will be convenient to establish the following convention for the order of the vertices in such a path.
A balanced complete bipartite graph with 2n vertices whose edges are coloured red and blue is a red zigzag graph when its partition classes are X={ x_1,…,x_n } and Y={ y_1,…,y_n } and the Hamiltonian path
P=(x_1,y_2,x_3,y_4,…,x_4,y_3,x_2,y_1)
is red. (See Figure <ref>.)
For every k≤ n, we also define the sets:
V_≤k = {x_i : i≤ k}∪{y_i : i≤ k},
V_<k = {x_i : i< k}∪{y_i : i < k},
V_≥k = {x_i : i≥ k}∪{y_i : i≥ k},
V_>k = {x_i : i> k}∪{y_i : i> k}.
We denote by x_iPy_j the subpath of the red Hamiltonian path P starting at vertex x_i and ending at y_j, both end-vertices included (and similarly for other pairs of end-vertices).
In addition to these and to other definitions that will appear later, we shall also establish some simple facts that will help us in the proofs of the main results of this section. Such facts will be just stated as remarks, since they are straightforward or can be easily checked.
Let G be a red zigzag graph with 2n vertices. Then, for i≤ n:
* if the edge x_iy_i is red, then the subgraph G[i] has a red Hamiltonian cycle, namely (x_iPy_i). (See Figure <ref>.)
* if the edge x_iy_i+2 (resp. x_i+2y_i) is red, then the subgraph G[i] can be vertex-partitioned into two red cycles, namely into the red cycle (x_i,y_i+2Px_i+2,y_i+1) (resp. (x_i+1,y_i+2Px_i+2,y_i)) and the edge-cycle (x_i+1,y_i) (resp. (x_i,y_i+1)). (See Figure <ref>.)
In the above language, proving Lemma <ref> is the same as proving that every red zigzag graph can be vertex-partitioned into at most 3 monochromatic cycles. Therefore, we see that the lemma below is a weaker version of Lemma <ref>, which allows one extra monochromatic cycle to be used. As compensation, this version gives us some additional information about the monochromatic cycles of the obtained partition.
If G is a red zigzag graph with 2n vertices, then G can be vertex-partitioned into at most t monochromatic cycles satisfying one of the following conditions.
* t=2.
* t=3 and the edge x_1y_1 is used in some blue cycle of the partition.
* t=4 and the edges x_1y_1 and x_2y_2 are used in different blue cycles of the partition.
The proof is by induction on n. For n≤ 2, the graph G satisfies condition (i) trivially. For n≥ 3, we may assume that G does not satisfy condition (i), since otherwise we would be done. Hence, by Remark <ref>, we know that the edges x_1y_1, x_1y_3 and x_3y_1 are blue. Now, we apply the inductive hypothesis to the subgraph G[1]. If G[1] satisfies conditions (i) or (ii), then we may simply take the edge x_1y_1 as a blue cycle and so we see that G satisfies conditions (ii) or (iii), respectively. On the other hand, if G[1] satisfies condition (iii), then we may use the edges x_1y_1, x_1y_3 and x_3y_1 to extend the blue cycle of the partition of G[1] that uses the edge x_3y_3 and so we see that G satisfies condition (iii) as well. In any case, the result follows.
In view of the proof of Theorem <ref>, observe that the lemma above has a certain relevance by itself, since it implies that 5 monochromatic cycles are sufficient to vertex-partition any balanced complete bipartite graph whose edges are coloured red and blue, which already improves results from <cit.> (where it is proved that large balanced bipartite graphs can be partitioned into 12 cycles), with a significantly shorter proof.
However, Lemma <ref> is just the first step towards Theorem <ref>. The next step is to introduce some tools to prove <Ref>. We will also prove a weaker version of <Ref>, which will make the proof of <Ref> clearer.
Let G be a red zigzag graph with 2n vertices. For 2≤ k≤ n, the set V_≤k is a blue special set when the following three conditions hold:
* the subgraph G[k-2] is either empty or has a blue Hamiltonian cycle,
* the subgraph G[k-1] has a blue Hamiltonian cycle, and
* the subgraph G[k] has a blue Hamiltonian cycle that uses the edges x_k-1y_k-1 and x_ky_k.
Furthermore, we need the following remark.
Let G be a complete bipartite graph and C_1=(u_1,…,u_s) and C_2=(v_1,…,v_t) be two disjoint blue cycles in G, where u_1,v_1∈ X. Then u_1v_t and v_1u_s are edges of G. If those edges are both blue, then there is a blue cycle in G that passes through all edges in C_1 and C_2 except u_1u_s and v_1v_t and covers all vertices in C_1 and C_2, namely (u_1,…,u_s,v_1,…,v_t). (See Figure <ref>.)
The following lemma shows us that the existence of a blue special set in a red zigzag graph is a sufficient extra condition to make Lemma <ref> valid.
Let G be a red zigzag graph with 2n vertices. If G has a blue special set, then G can be vertex-partitioned into at most 3 monochromatic cycles.
Let k be a blue special set in G for some 2≤ k≤ n. By Definition <ref> (iii), G[k] has a blue Hamiltonian cycle.
Hence, we may assume that n > k+2, since otherwise we would be done.
By Remark <ref>(b) and Definition <ref> (iii), we may also assume that the edges x_k-1y_k+1, x_k+1y_k-1, x_ky_k+2 and x_k+2y_k are blue, since otherwise we would be done.
Now, we apply <Ref> to the subgraph G[k]. If G[k] satisfies condition (i), then we may simply take a blue Hamiltonian cycle in G[k] to obtain a good partition of G.
By Remark <ref>, if G[k] satisfies condition (ii), then we use the edges x_k-1y_k+1 and x_k+1y_k-1 to combine the blue cycle that uses the edge x_k+1y_k+1 from the 3-cycle partition of G[k] with the blue Hamiltonian cycle of G[k] (since it uses the edge x_k-1y_k-1), thereby obtaining a good partition of G. Observe that the previously built blue cycle also passes through the edge x_ky_k. Hence, if G[k] satisfies condition (iii), then first we do the same operation as in the previous case (using edges x_k-1y_k+1 and x_k+1y_k-1) and after that we use the edges x_ky_k+2 and x_k+2y_k to build a blue cycle in G that covers all vertices in G[k] and all vertices of two different blue cycles of the partition of G[k] (remember that the edges x_k+1y_k+1 and x_k+2y_k+2 were in different cycles of the partition). Thereby obtaining a desired partition of G. In every case, the result follows.
To prove the next lemma, it will be necessary to make a more involved analysis of the edges compared to what we did in the proofs of Lemmas <ref> and <ref>. Therefore, as a way to facilitate our work and the reader's understanding, we shall make some considerations first.
An edge x_iy_j in a red zigzag graph is even when i and j have the same parity, i.e., when i+j is even.
Usually, if an even edge with “low” values for its indices is red, we can partition the vertices into 3 monochromatic cycles. Otherwise, if that edge is blue, we can look at other edges. The fact that such an edge is blue helps us, little by little, to establish a nice blue structure.
Let G be a red zigzag graph with 2n vertices. For 1≤ k≤ n, the subgraph G[k] is a blue even plait when all its even edges are blue. We denote by G_k^O the subgraph of G induced by the vertices of odd index in V_≤k, and by G_k^E the one induced by the vertices of even index in V_≤k.
Let G be a red zigzag graph with 2n vertices. If G[k] is a blue even plait, for 1 ≤ k≤ n, then G_k^O and G_k^E are complete balanced bipartite graphs whose edges are all blue. In particular, each of them is Hamiltonian, so G[k] can be partitioned into at most two monochromatic blue cycles.
A Hamiltonian path in G_k^E starting in x_i and ending in y_j is denoted x_i[E_k]y_j (note that such a path needs to start and finish on different classes of the bi-partition).
A path in G_k^E starting in x_i and ending in x_j that passes through all vertices of G_k^E except for a given y_ℓ is denoted x_i[E_k∖ y_ℓ]x_j (note that such a path needs to start and finish on the same class of the bi-partition).
We use the same notations (x_i[O_k]y_j and x_i[O_k∖ y_ℓ]x_j) for G_k^O.
Such Hamiltonian or quasi-Hamiltonian paths with specific extremities will be extensively used in the proof of <Ref>. Indeed, we will regularly be able to show that a few edges between G_k^O and G_k^E are blue too. Then by choosing carefully the end-vertices of two (quasi-)Hamiltonian paths we will be able to combine the two paths into a cycle passing through all the vertices in G_k^O ∪ G_k^E except at most two of them, helping us to get a decomposition of the red zigzag graph into at most 3 monochromatic cycles.
§.§ Main proof
After these considerations, we may advance to our next result. As we shall see hereafter, Lemma <ref> follows almost directly from the lemma below.
Let G be a red zigzag graph with 2n vertices which cannot be vertex-partitioned into at most 3 monochromatic cycles. If the subgraph G[k] is a blue even plait for some 1 ≤ k ≤ n-2, then G[k+1] is a blue even plait as well (see Definition <ref>).
First, since G cannot be vertex-partitioned into 3 monochromatic cycles, we can easily see that n ≥ 5. It is also easy to check that G[4] is a blue even plait: if the edge x_1y_1 were red, then G would have a red Hamiltonian cycle; if the edge x_2y_2, x_1y_3 or x_3y_1 were red, we could decompose G into a red cycle and one edge-cycle; if x_3y_3 were red, we could decompose G into a red cycle and the two blue edge-cycles (x_1,y_1) and (x_2,y_2). Now, if x_2y_4 or x_4y_2 were red, we could decompose G into a red cycle and two edge-cycles; and if x_4y_4 were red, then we could decompose G into the red cycle using the edge x_4y_4, the blue cycle (x_1,y_1,x_3,y_3) whose existence we have just shown, and the edge-cycle (x_2,y_2). We have just proved that all even edges within G[4] are blue, hence G[4] is a blue even plait, as announced. Therefore we may assume that k≥ 4, so that each of G_k^O and G_k^E has at least 4 vertices.
Also, Remarks <ref>(a) and <ref> imply that the edge x_k+1y_k+1 is blue.
Now, suppose by way of contradiction that G[k+1] is not a blue even plait. Hence, there is some natural j less than k+1 such that j and k+1 have the same parity and at least one of the edges x_k+1y_j or x_jy_k+1 is red. We shall show that this implies that G has a blue special set, which is a contradiction by Lemma <ref>.
We divide the remainder of the proof into two cases depending on the parity of k. These cases are treated quite similarly, but at some points they are not absolutely analogous. Therefore, we will give the full details for both cases whenever necessary but we will just guide how to make the analogy whenever possible.
Case A: k is even and k≥ 4, therefore j is odd with 1 ≤ j < k ≤ n-2.
Note that some claims below will require k ≤ n-4; we will see that these claims are only needed when k ≤ n-4, since otherwise the proof ends earlier.
* If there exists 1 ≤ l ≤ k odd such that x_k+1y_l is red, then x_1y_k and x_k-1y_2 are not both blue.
* If there exists 1 ≤ l ≤ k odd such that x_ly_k+1 is red, then x_ky_1 and x_2y_k-1 are not both blue.
We will prove only (<ref>) since (<ref>) is analogous. So x_k+1y_l is red. Suppose by way of contradiction that the edges x_1y_k and x_k-1y_2 are blue (see Figure <ref>). In this case, we shall prove that k+2 is a blue special set of G.
First, let us prove that the edge x_ky_l is blue. Indeed, otherwise we could decompose G into the red cycle (x_k,y_k+1Px_k+1,y_l) that covers k+1∪{x_k, y_l} and a blue cycle that covers k∖{x_k, y_l}. More precisely:
y_2[E_k∖ x_k] y_k x_1 [O_k∖ y_l] x_k-1 y_2.
Now we can construct a blue Hamiltonian cycle of G[k], say
y_k[E_k] x_k y_l [O_k] x_1 y_k.
This is our first step towards proving that k+2 is a blue special set of G.
The edge x_k+1y_k-1 is blue, otherwise G could be vertex-partitioned into the red cycle (x_k,y_k+1Px_k+1,y_k-1) (recall that x_ky_k-1 is also an edge of P) and (similarly to above) a blue cycle that covers all vertices in k except x_k and y_k-1.
The edge x_l+1y_k+1 is blue too, since otherwise G could similarly be vertex-partitioned into the red cycle (x_l+1,y_k+1Px_k+1,y_l) and a blue cycle that
covers all vertices in k except x_l+1 and y_l. Therefore, we see that the subgraph G[k+1] has a blue Hamiltonian cycle:
C = (x_k+1 y_k+1 x_l+1[E_k] y_2 x_k-1[O_k] y_k-1 x_k+1).
By Remark <ref>(a), it follows that the edge x_k+2y_k+2 is blue.
Next, the edge x_k+2y_k is blue, otherwise G could be vertex-partitioned into the edge-cycle (x_l+1,y_k+1), the red cycle (x_k+1,y_k+2Px_k+2,y_k), and a blue cycle that covers k∖{x_l+1, y_k}. Note that such a blue cycle does not use y_k, so it does not rely on the blue edge x_1y_k but rather on x_ky_l: use edges of G_k^E to go from y_2 to x_k passing through all its vertices except x_l+1 and y_k, add the edge x_ky_l, go through all vertices of G_k^O finishing at x_k-1, and add the edge x_k-1y_2.
The edge x_ky_k+2 is blue too, since otherwise G could be vertex-partitioned into the red cycle (x_k,y_k+2Px_k+2,y_k+1), the edge-cycle (x_k+1,y_k-1) and a blue cycle that covers all vertices of k∖{x_k, y_k-1} (using the edges x_1y_k and x_k-1y_2).
Hence, we see that the subgraph G[k+2] has a blue Hamiltonian cycle that uses the edges x_k+1y_k+1 and x_k+2y_k+2, which can be built by the following manner: take the blue Hamiltonian cycle C of G[k+1] that we have just built. We can require it to contain the edge x_ky_k. Then consider the cycle C' = (C - x_ky_k) + x_ky_k+2x_k+2y_k.
Therefore k+2 is a blue special set of G as needed.
The edges x_k+1y_1 and x_1y_k+1 are blue.
Suppose by way of contradiction that the edge x_k+1y_1 is red (see Figure <ref>). In this case, G can be vertex-partitioned into the red cycle (x_k+1Py_1) and the red path (x_1,y_2Px_k-1,y_k). By Claim <ref>(<ref>), with l=1, we know that the edges x_1y_k and x_k-1y_2 are not both blue. Hence, G can be vertex-partitioned into at most 3 monochromatic cycles, a contradiction. Thus, the edge x_k+1y_1 is blue. Analogously, the edge x_1y_k+1 is blue too, by Claim <ref>(<ref>). Thus, the result follows.
The edge x_k+2y_k+2 is blue.
By <Ref>, we see that the subgraph G[k+1] can be vertex-partitioned into a blue Hamiltonian cycle of G_k^E and a blue cycle that covers x_k+1, y_k+1 and all vertices in G_k^O, which can be built by the following manner: take a blue Hamiltonian cycle of G_k^O that uses the edge x_1y_1; and use the path (x_1,y_k+1,x_k+1,y_1) to extend it. Hence, by Remark <ref>(a), the result follows.
Remember that there is some odd j such that x_k+1y_j or x_jy_k+1 is red. In what remains, without loss of generality, we assume that the edge x_k+1y_j is red.
Note that with this assumption we lose the symmetry between the sets X and Y. We have not made this assumption earlier, because we need Claim <ref> to hold regardless of which of x_k+1y_j or x_jy_k+1 is red.
By <Ref>, we know that j ≠ 1, hence all the following claims will be on the assumption that 2 ≤ j. Besides, if k = n-2 then the edge x_k+2y_k+2 = x_ny_n belongs to the red path P, hence is not blue, contradicting <Ref>. Thus if k = n-2 the proof ends here. In the remainder, therefore, we have that:
2≤ j < k ≤ n-3.
This allows us to use the indices j-1 and k+3 (which we have not used so far).
The edges x_j+1y_1 and x_2y_j+2 are blue.
In particular, when j=k-1, this means that x_ky_1 and x_2y_k+1 are blue.
Suppose by way of contradiction that the edge x_j+1y_1 is red (see Figure <ref>). In this case, G can be vertex-partitioned into the red cycle (x_k+1Px_j+1, y_1Py_j) and the red path (x_1,y_2Px_k-1,y_k). By Claim <ref>(<ref>), we know that the edges x_1y_k and x_k-1y_2 are not both blue. Hence, we see that G can be vertex-partitioned into at most 3 monochromatic cycles, a contradiction. Thus, the edge x_j+1y_1 is blue.
Now, suppose by way of contradiction that the edge x_2y_j+2 is red (see Figure <ref>). In this case, the edge x_1y_k is blue, since otherwise G could be vertex-partitioned into the red cycles (x_k+1Py_j+2,x_2Py_j) and (x_1,y_2Px_k-1,y_k) and the edge-cycle (x_j+1,y_1). By Claim <ref>(<ref>), it follows that the edge x_k-1y_2 is red. Observe that the edges x_1y_1 and x_j+1y_k are even and, therefore, blue. Hence, we see that G can be vertex-partitioned into the red cycles (x_k+1Py_j+2,x_2Py_j) and (y_2Px_k-1) and the blue cycle (x_1,y_k,x_j+1,y_1), a contradiction. Thus, the edge x_2y_j+2 is blue.
The edges x_jy_k+2 and x_k+2y_k are blue.
Suppose by way of contradiction that the edge x_jy_k+2 is red (see Figure <ref>). In this case, G can be vertex-partitioned into the red cycle (x_jPx_k+1,y_jPy_k+2) and the subgraph G[j-1]. But, observe that the subgraph G[j-1] is a blue even plait. Hence, by Remark <ref>, we see that G can be vertex-partitioned into 3 monochromatic cycles, a contradiction. Thus, the edge x_jy_k+2 is blue.
Now, suppose by way of contradiction that the edge x_k+2y_k is red. Take the red cycle (x_k+1,y_k+2Px_k+2,y_k), which covers k+2∪{x_k+1, y_k}. We claim that the remaining vertices of G can be vertex-partitioned into at most 2 blue cycles. Indeed, if j=k-1, then by <Ref> the edges x_ky_1 and x_2y_k+1 are blue and by <Ref> x_1y_k+1 is blue (see Figure <ref>).
Take only the blue cycle
x_2[E_k∖ y_k] x_k y_1[O_k] x_1 y_k+1 x_2.
On the other hand, if j<k-1, then y_j+2∈ G_k^O (see Figure <ref>).
Again by <Ref> and <Ref>, we can form the blue cycle
x_2[E_k∖ y_k] x_j+1 y_1 [O_k∖ x_1] y_j+2 x_2
and we also take the edge-cycle (x_1,y_k+1).
The subgraph G[k+2] has a blue Hamiltonian cycle that uses the edges x_k+1y_k+1, x_k+2y_k+2 and x_k+2y_k. Furthermore, the edge x_k+3y_k+3 is blue.
Let us build such a cycle (see <Ref>). We will use the blue edges x_j+1y_1 (<Ref>), x_k+1y_1 and x_1y_k+1 (<Ref>), x_k+1y_k+1 (Remark <ref>(a)), x_k+2y_k (<Ref>), x_k+2y_k+2 (<Ref>), and x_jy_k+2 (<Ref>).
First take the cycle
C = x_j+1[E_k] y_k x_k+2 y_k+2 x_j [O_k∖ y_1] x_1 y_1 x_j+1.
It is a blue cycle covering all vertices of k∪{x_k+2,y_k+2} and it uses the edge x_1y_1. So C' = C - x_1y_1 +x_1y_k+1x_k+1y_1 is a blue cycle which satisfies the first part of the claim. By Remark <ref>(a), it follows that the edge x_k+3y_k+3 is blue.
As before, if k = n-3 the proof ends here since we have just proved that the edge x_ny_n is blue, whereas it belongs to the red path P. This is a contradiction. Hence, in the remaining claims
k ≤ n-4.
This allows us to use indices up to k+4.
The edge x_jy_k+1 is blue.
Suppose by way of contradiction that the edge x_jy_k+1 is red. In this case, we shall prove that k+2 is a blue special set of G. Now <Ref> tells us that the edges x_k+1y_1 and x_1y_k+1 are blue. <Ref> tells us that the edges x_j+1y_1 and x_2y_j+2 are blue too. Also, since here we assume that the edge x_jy_k+1 is red, we can use the `alternate' version of <Ref> obtained by exchanging X and Y; therefore the edges x_1y_j+1 and x_j+2y_2 are also blue. Figure <ref> shows all those edges.
On the one hand, if j=k-1 then this means that the edges x_2y_k+1 and x_k+1y_2 are blue. Then x_2 [E_k] y_2 x_k+1 y_1 [O_k] x_1 y_k+1 x_2 is a blue Hamiltonian cycle of G[k+1].
On the other hand if j<k-1 then x_j+2,y_j+2∈ G_k^O. Start with
C = x_2 [E_k] y_2 x_j+2[O_k] y_j+2 x_2.
C is a blue Hamiltonian cycle of G[k] and we can require it to use the edge x_1y_1 (G_k^O is a blue complete bipartite graph, hence x_j+2[O_k] y_j+2 can be required to use x_1y_1). Therefore C' = C - x_1y_1 + x_1y_k+1x_k+1y_1 is a blue Hamiltonian cycle of G[k+1].
In both cases, by Claim <ref>, we see that k+2 is a blue special set of G, a contradiction. Thus, the result follows.
There is a blue cycle in G that covers all vertices in k+1 except x_k+1 and y_k.
By <Ref> and <Ref> the edges x_j+1y_1, x_2y_j+2 and x_1y_k+1 are blue.
On the one hand if j=k-1, this means that the edges x_ky_1 and x_2y_k+1 are blue (see Figure <ref>). Then
x_2[E_k∖ y_k] x_k y_1 [O_k] x_1 y_k+1 x_2 is a blue cycle covering y_k+1 and all vertices of k except y_k.
On the other hand (j < k-1), y_j+2∈ G_k^O. We will also need the edge x_jy_k+1, which is blue by <Ref>. Let
C = x_2 [E_k∖ y_k] x_j+1 y_1 x_1 y_k+1 x_j y_j+2 x_2. This cycle covers G_k^E ∖ y_k and passes through the vertices x_1, y_1, x_j, y_j+2 in G_k^O. Now, we can replace the edge x_1y_1 of C with a path that covers all remaining vertices of G_k^O.
The edge x_k+1y_k+3 is blue.
Suppose by way of contradiction that the edge x_k+1y_k+3 is red (see Figure <ref>). Hence, by <Ref>, we see that G can be vertex-partitioned into the red cycle (x_k+1,y_k+3Px_k+3,y_k+2), the edge-cycle (x_k+2,y_k) and a blue cycle that covers all vertices in k+1 except x_k+1 and y_k, a contradiction. Thus, the result follows.
The edges x_1y_k and x_k+1y_2 are not both blue.
Suppose by way of contradiction that the edges x_1y_k and x_k+1y_2 are blue (see Figure <ref>). In this case, we shall prove that k+2 is a blue special set of G, a contradiction. The edge x_1y_k+1 is blue by <Ref> and x_j+1y_1 is blue by <Ref>. So,
x_j+1[E_k] y_k x_1 [O_k] y_1 x_j+1 is a blue Hamiltonian cycle of G[k], and x_j+1[E_k] y_2 x_k+1 y_k+1 x_1 [O_k] y_1 x_j+1 is a blue Hamiltonian cycle of G[k+1].
Together with Claim <ref>, we see that k+2 is a blue special set of G. Thus, the result follows.
The edge x_2y_k+2 is blue.
Suppose by way of contradiction that the edge x_2y_k+2 is red (see Figure <ref>). In this case, G can be vertex-partitioned into the red cycle (x_2Py_k+2), the red path (x_1,y_2Py_k,x_k+1) and the vertex y_1. By Claim <ref>, we know that the edges x_1y_k and x_k+1y_2 are not both blue. Hence, we see that G can be vertex-partitioned into 3 monochromatic cycles (one of which is an edge-cycle), a contradiction. Thus, the result follows.
The edge x_k+3y_k+1 is blue.
Suppose by way of contradiction that the edge x_k+3y_k+1 is red (see Figure <ref>). By <Ref> and <Ref> the edges x_j+1y_1 and x_2y_k+2 are blue. Let
C = x_j+1[E_k∖ y_k] x_2 y_k+2 x_j [O_k] y_1 x_j+1, which is a blue cycle that covers (k∖ y_k) ∪{y_k+2}. Then G can be vertex-partitioned into the red cycle (x_k+2,y_k+3Px_k+3,y_k+1), the edge-cycle (x_k+1,y_k) and C, a contradiction. Thus, the result follows.
The subgraph G[k+3] has a blue Hamiltonian cycle that uses the edges x_k+2y_k and x_k+3y_k+3. Consequently, the edge x_k+4y_k+4 is blue.
By <Ref>, start with a blue Hamiltonian cycle C of G[k+2] that uses the edges x_k+1y_k+1 and x_k+2y_k. <Ref> also tells us that x_k+3y_k+3 is blue. Now, extend C by replacing the edge x_k+1y_k+1 by the path x_k+1y_k+3x_k+3y_k+1 (using Claims <ref> and <ref>). This gives the desired blue Hamiltonian cycle of G[k+3]. By <Ref>(a), it follows that the edge x_k+4y_k+4 is blue.
The edges x_k+4y_k and x_k+2y_k+4 are blue.
Suppose by way of contradiction that the edge x_k+4y_k is red (see Figure <ref>). Hence, by Claim <ref>, we see that G can be vertex-partitioned into the red cycle (x_k+1,y_k+2,x_k+3,y_k+4Px_k+4,y_k), the edge-cycle (x_k+2,y_k+3) and a blue cycle that covers all vertices in k+1 except x_k+1 and y_k, a contradiction. Thus, the edge x_k+4y_k is blue.
Now, suppose by way of contradiction that the edge x_k+2y_k+4 is red (see Figure <ref>). By <Ref> and <Ref> the edges x_j+1y_1 and x_2y_k+2 are blue. Together with <Ref> we build the blue cycle
C = x_j+1[E_k∖ y_k] x_2 y_k+2 x_j [O_k∖ y_1] x_1 y_k+1 x_k+1 y_1 x_j+1
that covers (k+1∖ y_k) ∪{y_k+2}.
Thus G can be vertex-partitioned into the red cycle (x_k+2,y_k+4Px_k+4,y_k+3), the edge-cycle (x_k+3,y_k) and C. Hence, we get a contradiction. Thus, the edge x_k+2y_k+4 is blue.
The subgraph G[k+4] has a blue Hamiltonian cycle that uses the edges x_k+3y_k+3 and x_k+4y_k+4.
Using <Ref>, take a blue Hamiltonian cycle of G[k+3] that uses the edges x_k+2y_k and x_k+3y_k+3 and extend it to a blue Hamiltonian cycle of G[k+4] that uses the edges x_k+3y_k+3 and x_k+4y_k+4 by replacing the edge x_k+2y_k by the path (x_k+2,y_k+4,x_k+4,y_k), which is blue by Claims <ref> and <ref>.
Thus, by Claims <ref>, <ref> and <ref>, we see that k+4 is a blue special set of G, which is our final contradiction for Case A.
Case B: k is odd and k≥ 4, therefore j is even with 2 ≤ j < k ≤ n-2.
Most proofs are similar to the ones in Case A. We will give the details on how to derive the A proofs to B proofs. Of course, we should also replace any reference to a Claim A to its Claim B counterpart.
* If there exists 1 ≤ l ≤ k even such that x_k+1y_l is red, then x_2y_k and x_k-1y_1 are not both blue.
* If there exists 1 ≤ l ≤ k even such that x_ly_k+1 is red, then x_ky_2 and x_1y_k-1 are not both blue.
Interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E in the proof of Claim <ref>.
The edges x_k+1y_2 and x_2y_k+1 are blue.
Suppose by way of contradiction that the edge x_k+1y_2 is red (see Figure <ref>). In this case, G can be vertex-partitioned into the red cycle (x_k+1Py_2), the red path (y_1,x_2Px_k-1,y_k) and the vertex x_1 (a difference here is that in <Ref> we did not have this hanging vertex x_1). By Claim <ref>(<ref>), we know that the edges x_2y_k and x_k-1y_1 are not both blue. Hence, we see that G can be vertex-partitioned into 3 monochromatic cycles, a contradiction. Thus, the edge x_k+1y_2 is blue. Analogously, swapping the roles of X and Y and using <Ref>(<ref>), the edge x_2y_k+1 is blue too. Thus, the result follows.
The edge x_k+2y_k+2 is blue.
Just interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E.
As in Case A, from this point, without loss of generality
we assume that the edge x_k+1y_j is red.
By <Ref> we know that j ≠ 2. And as in Case A, if k = n-2, then the proof ends here: we proved that x_ny_n is red, a contradiction. Hence from now:
2 < j ≤ k ≤ n-3
The edges x_j+1y_2 and x_1y_j+2 are blue. In particular, when j=k-1, this means that x_ky_2 and x_1y_k+1 are blue.
This proof is similar to <Ref>, with minor changes. First, to prove that the edge x_j+1y_2 is blue, we do as in <Ref>, interchanging the roles of x_1 and x_2, and of y_1 and y_2.
Explicitly, suppose by way of contradiction that the edge x_j+1y_2 is red (see Figure <ref>). In this case, G can be vertex-partitioned into the red cycle (x_k+1Px_j+1,y_2Py_j), the red path (y_1,x_2Px_k-1,y_k) and the vertex x_1. By Claim <ref>, we know that the edges x_2y_k and x_k-1y_1 are not both blue. Hence, we see that G can be vertex-partitioned into 3 monochromatic cycles, a contradiction. Thus, the edge x_j+1y_2 is blue.
Now, suppose by way of contradiction that the edge x_1y_j+2 is red (see Figure <ref>). Then, G can be vertex-partitioned into the red cycle (x_k+1Py_j+2,x_1Py_j), the red path (y_1,x_2Px_k-1,y_k) and the vertex x_j+1. By Claim <ref>, we know that the edges x_2y_k and x_k-1y_1 are not both blue. Hence, G can be vertex-partitioned into 3 monochromatic cycles, a contradiction. Thus, the edge x_1y_j+2 is blue.
The edges x_jy_k+2 and x_k+2y_k are blue.
To show that x_jy_k+2 is blue, use the proof of <Ref> as is (see Figure <ref>). Then to show that x_k+2y_k is blue, do as in Claim <ref>, interchanging the roles of x_1 and x_2, of y_1 and y_2, of O_k and E_k, and of G_k^O and G_k^E.
The subgraph G[k+2] has a blue Hamiltonian cycle that uses the edges x_k+1y_k+1, x_k+2y_k+2 and x_k+2y_k. Furthermore, the edge x_k+3y_k+3 is blue.
Interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E in the proof of <Ref>.
As in Case A, if k = n-3 then the proof ends here: we proved that x_ny_n is red, a contradiction. Hence, in the remaining claims we have
k ≤ n-4.
The edge x_jy_k+1 is blue.
Interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E in the proof of <Ref>.
There is a blue cycle in G that covers all vertices in k+1 except x_k+1 and y_k.
Interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E in the proof of <Ref>.
The edge x_k+1y_k+3 is blue.
The proof of Claim <ref> can be used as is.
The edges x_2y_k and x_k+1y_1 are not both blue.
Interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E in the proof of <Ref>.
The edge x_1y_k+2 is blue.
Suppose by way of contradiction that the edge x_1y_k+2 is red (see Figure <ref>). In this case, G can be vertex-partitioned into the red cycle (x_1Py_k+2) and the red path (x_k+1,y_kPx_2,y_1). By Claim <ref>, we know that the edges x_2y_k and x_k+1y_1 are not both blue. Hence, we see that G can be vertex-partitioned into at most 3 monochromatic cycles, a contradiction.
The edge x_k+3y_k+1 is blue.
Interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E in the proof of Claim <ref>.
The subgraph G[k+3] has a blue Hamiltonian cycle that uses the edges x_k+2y_k and x_k+3y_k+3. Consequently, the edge x_k+4y_k+4 is blue.
The proof of Claim <ref> can almost be used as is: just replace Claim <ref> by <ref>.
The edges x_k+4y_k and x_k+2y_k+4 are blue.
Interchange the roles of x_1 and x_2, of y_1 and y_2 and of G_k^O and G_k^E in the proof of Claim <ref>.
The subgraph G[k+4] has a blue Hamiltonian cycle that uses the edges x_k+3y_k+3 and x_k+4y_k+4.
We can use the proof of Claim <ref> almost as is: just use <Ref> instead of <Ref>.
Thus, by Claims <ref>, <ref> and <ref>, we see that k+4 is a blue special set of G, which is our final contradiction for Case B.
This ends the proof of <Ref>.
Now, we may finally prove Lemma <ref>.
Let us take G as in the statement: a balanced complete bipartite graph whose edges are coloured red or blue, and which has a monochromatic Hamiltonian path. Without loss of generality, we can assume this path is red and that its vertices are labelled so that it is a red zigzag graph. Assume for contradiction that G cannot be partitioned into three monochromatic cycles. Let the number of vertices of G be 2n.
By Remark <ref>(a), we may assume that the edges x_1y_1 and x_2y_2 are blue, since otherwise we would be done. Thus, the subgraph G[2] is a blue even plait and so we may apply Lemma <ref> iteratively until k = n-2 inclusive. Therefore the subgraph G[n-1] is a blue even plait, hence it can be decomposed into two blue cycles C_E and C_O by <Ref>. Finally, G can be decomposed into the red edge-cycle (x_n,y_n) and the two blue cycles C_E and C_O, which is a contradiction and concludes the proof.
§ CONCLUSIONS AND PERSPECTIVES
In this paper, we proved a new bound on the number of monochromatic cycles needed to partition the vertices of any edge-bicoloured complete balanced bipartite graph. This is quite an improvement, as the previously known bound was 12 and we reduced it to 4. We can also point out that our work concerns all such graphs, while many papers tackling similar problems focus only on large graphs.
For this reason our proofs do not resort to strong results, and the paper is self-contained apart from Stein's result (<Ref>), for which we also have an alternative but very similar proof that is not worth including here.
We wrote the proof of our main result in a very detailed way, with the goal of making it easy to verify. As a result it ended up being somewhat long. Of course, one can make it shorter (but harder to check) by omitting some trivial details and remarks or by not explicitly listing all cycles (the reader is free to skip these details and look for the cycles in the figures). We could also have omitted the “B claims” (when k is odd), asking the reader to believe that the proofs are similar to those of the case when k is even. However, we did not find a way to make the analysis intrinsically shorter. A good reason for this is that many of those cases have to be checked even to deal with small values of k (say k≤ 10).
We also showed that if the colouring is split, then we can partition the vertices into at most three monochromatic cycles, and we described exactly when 2 cycles are enough (and when they are not). We also gave an example of a 2-colouring of a complete balanced bipartite graph that is not split and cannot be partitioned into 2 cycles. Therefore, it remains to decide whether the 2-colour cycle partition number of balanced bipartite graphs is 3 or 4.
|
http://arxiv.org/abs/2409.02370v1 | 20240904014020 | Do Large Language Models Possess Sensitive to Sentiment? | [
"Yang Liu",
"Xichou Zhu",
"Zhou Shen",
"Yi Liu",
"Min Li",
"Yujun Chen",
"Benzi John",
"Zhenzhen Ma",
"Tao Hu",
"Zhiyang Xu",
"Wei Luo",
"Junhui Wang"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Do We Trust What They Say or What They Do? A Multimodal User Embedding Provides Personalized Explanations
September 9, 2024
§ ABSTRACT
Large Language Models (LLMs) have recently displayed extraordinary capabilities in language understanding. However, how to comprehensively assess the sentiment capabilities of LLMs remains a challenge. This paper investigates the ability of LLMs to detect and react to sentiment in textual input. As the integration of LLMs into diverse applications is on the rise, it becomes highly critical to comprehend their sensitivity to emotional tone, as it can influence the user experience and the efficacy of sentiment-driven tasks. We conduct a series of experiments to evaluate the performance of several prominent LLMs in identifying and responding appropriately to sentiments like positive, negative, and neutral emotions. The models' outputs are analyzed across various sentiment benchmarks, and their responses are compared with human evaluations. Our findings indicate that although LLMs show a basic sensitivity to sentiment, there are substantial variations in their accuracy and consistency, emphasizing the need for further enhancements in their training processes to better capture subtle emotional cues. For example, in some cases the models wrongly classify a strongly positive sentiment as neutral, or fail to recognize sarcasm or irony in the text. Such misclassifications highlight the complexity of sentiment analysis and the areas where the models need to be refined. Moreover, different LLMs may perform differently on the same data, depending on their architecture and training datasets. This variance calls for a more in-depth study of the factors that contribute to these performance differences and of how they can be optimized.
§ INTRODUCTION
Recently, large language models (LLMs) have made groundbreaking strides that have dramatically reshaped the artificial intelligence landscape <cit.>. These models have become a cornerstone in natural language processing (NLP), enabling advances in various tasks, from text generation <cit.> to question answering <cit.>. Despite their widespread adoption, one crucial area that remains insufficiently explored is their capability to accurately perceive and respond to sentiment. Sentiment analysis—the process of identifying the emotional tone within text—is vital for applications such as customer feedback analysis, social media monitoring, and conversational agents <cit.>. This raises an important question:
Do large language models possess sensitivity to sentiment?
This paper aims to thoroughly evaluate the performance of large language models (LLMs) in sentiment analysis, specifically assessing their ability to detect and generate responses that correspond to the sentiment present in the input text. We explore models with varying architectures and sizes to identify both the strengths and the areas needing improvement in sentiment-related tasks. Our evaluation process follows a structured workflow, as illustrated in Fig <ref>. We begin by selecting a diverse set of prompts, which are then processed by various LLMs to produce soft outputs. These outputs undergo a similarity evaluation using word vector similarity techniques to assess their alignment with the intended sentiment. This systematic approach allows for a comprehensive analysis of the models' performance, offering insights into their ability to capture nuanced sentiment. The workflow's design not only ensures thoroughness in evaluation but also facilitates the identification of specific areas where LLMs excel or require improvement, ultimately contributing to more targeted advancements in sentiment analysis capabilities. Our research not only contributes to the ongoing discourse surrounding LLM evaluation but also highlights the necessary enhancements required to bolster the sentiment sensitivity of these models.
Our contributions are summarized as follows:
* Incorporating sentiment analysis, we develop and introduce the ‘Sentiment Knowledge Workflow’ for LLMs. This framework is pivotal in defining and advancing the self-awareness capacities of LLMs.
* We evaluate the sentiment sensitivity of a diverse array of LLMs across multiple public datasets. Our comprehensive analysis reveals that while these models exhibit a basic ability to detect sentiment, there are significant discrepancies in their accuracy and consistency. These findings underscore the need for further refinements in their training processes to improve their capacity to recognize and respond to subtle emotional cues more effectively.
§ RELATED WORKS
LLMs Development. The development of Large Language Models (LLMs) represents a significant milestone in the field of artificial intelligence, particularly in natural language processing (NLP) <cit.>. Originating from earlier efforts in neural networks and deep learning, LLMs have evolved rapidly, driven by advancements in computational power, algorithmic innovations, and the availability of vast amounts of textual data. These models, exemplified by architectures like GPT <cit.> and BERT <cit.>, are trained on diverse and extensive datasets, enabling them to generate human-like text, perform complex language tasks, and even demonstrate an understanding of context and nuance. The scaling of model parameters <cit.> and data has been a crucial factor in enhancing the capabilities of LLMs, allowing them to achieve state-of-the-art performance across a wide range of applications, from translation <cit.> and summarization <cit.> to more specialized tasks like code generation <cit.> and creative writing <cit.>. As research continues, LLMs are poised to further revolutionize how we interact with and understand language in both digital and real-world environments.
LLMs Sentiment Capability. Large Language Models (LLMs) have increasingly been designed to understand and emulate human emotions <cit.>, enhancing their role in more nuanced and empathetic communication <cit.>. These models are trained on vast datasets that include emotionally rich language, enabling them to recognize and generate text that reflects various emotional tones. By interpreting subtle cues <cit.> in language, such as word choice, tone, and context, LLMs can respond in ways that align with the emotional state of the user. This emotional capability is particularly valuable in applications like virtual assistants <cit.>, mental health support <cit.>, and customer service <cit.>, where understanding and responding to emotions is crucial for effective interaction. However, the development of these capabilities also raises important ethical considerations, as LLMs must navigate complex emotional landscapes without reinforcing biases or generating inappropriate responses. As this technology continues to advance, the emotional intelligence of LLMs <cit.> is expected to become increasingly sophisticated, allowing for more personalized and empathetic interactions between humans and machines.
§ BACKGROUND
§.§ Self-awareness.
Self-awareness <cit.> refers to the ability to recognize and comprehend one's own existence, emotions, thoughts, and behaviors. It encompasses an understanding of one's identity, abilities, strengths, and weaknesses, as well as an awareness of one's role and influence within various social and environmental contexts. Self-awareness <cit.> can be further categorized into several key aspects, including:
* Personal Identity Awareness. Knowing who you are, including your name, age, gender, occupation, interests, and hobbies.
* Sentiment Awareness. The ability to identify and understand your own emotional states, such as happiness, sadness, anger, and fear.
* Cognitive Self-awareness. Being aware of and reflecting on your own thoughts and beliefs, including how you make decisions, solve problems, and perceive the world.
* Social Self-awareness. Understanding your role and status in society, as well as being aware of how others perceive and expect from you.
* Physical Self-awareness. Recognizing your physical state and sensations, including your appearance, health, and bodily movements.
Self-awareness is a unique characteristic of humans that enables individuals to reflect on their actions, set goals, adjust behavior to adapt to changing environments, and interact effectively in complex social settings. Developing self-awareness can be achieved through self-reflection, psychological counseling, meditation, and communication with others. In this paper, we focus on the sentiment-awareness aspect.
§.§ Evaluation of LLMs.
We perform a thorough evaluation of the LLMs using a detailed question-answer workflow, as illustrated in Figure <ref>. The process begins with the creation of precise prompts, leading to the generation of initial outputs by the LLMs. These outputs are then analyzed for similarity. The workflow further includes evaluating multiple word vector similarities and categorizing responses based on emotional tones such as stunning, sentimental, positive, inspiring, uplifting, heartwarming, and hopeful. The evaluation concludes with an in-depth assessment of the overall performance and effectiveness of the LLMs.
Various metrics based on multiple-choice questions have been utilized in prominent benchmarks such as CommonsenseQA <cit.>, HellaSwag <cit.>, and MMLU <cit.>. These benchmarks have laid the groundwork for evaluating the accuracy of knowledge within language models by focusing on the correct responses to these questions. Building on the methodologies employed in these foundational studies <cit.>, our approach extends their insights by leveraging questions from our specific target tasks. These questions, which are designed to be seamlessly integrated into our evaluation framework, allow us to assess not only the accuracy of the responses but also the depth of understanding and reasoning capabilities exhibited by the models. By incorporating these metrics, we aim to provide a comprehensive evaluation that mirrors the rigor of the original benchmarks while adapting them to the nuanced requirements of our tasks.
§ EXPERIMENTAL SETTINGS
§.§ Dataset.
In this study, we utilized three publicly available datasets: Sentiment140, MyPersonality, and IMDB Reviews. The specific details of each dataset are outlined below, followed by a short loading sketch for the publicly hosted ones.
* Sentiment140[<https://huggingface.co/datasets/stanfordnlp/sentiment140>]. It is a dataset developed by Stanford University for sentiment analysis research. It consists of 1.6 million tweets collected from Twitter, each labeled as positive, negative, or neutral. What sets Sentiment140 apart is its unique approach to labeling: it uses emoticons in the tweets (such as :-) or :-( ) as sentiment indicators, which are then removed to create a more authentic representation of social media content. This dataset is particularly valuable for handling the challenges of noisy text, including abbreviations, spelling errors, and informal language typical of Twitter. Sentiment140 is widely used for training and evaluating sentiment analysis models, especially those designed to analyze social media data.
* Mypersonality[<https://www.psychometrics.cam.ac.uk/productsservices/mypersonality>]. It is a well-known dataset in the fields of psychology and data science, originally created by researchers at the University of Cambridge in 2007. It was generated through an online personality test application hosted on the Facebook platform, where users could take various personality assessments and voluntarily share their Facebook data. This data includes profile information, social network activities, and the results of psychological assessments like the "Big Five Personality Traits" (OCEAN model). Mypersonality offers a unique opportunity for researchers to study the relationship between social media behavior and personality traits. Although the dataset has been controversial due to privacy concerns and data collection methods, it remains a valuable resource for research in psychology and social network analysis.
* IMDB Reviews[<https://huggingface.co/datasets/stanfordnlp/imdb>]. It is a widely-used sentiment analysis dataset composed of movie reviews from the Internet Movie Database (IMDb). The dataset typically includes 50,000 reviews, each labeled as either positive or negative, and is used for text classification tasks, particularly sentiment analysis. Unlike shorter text datasets, IMDB Reviews features longer reviews, rich with semantic information such as opinions, emotions, and arguments. This makes it an ideal dataset for evaluating and training deep learning models that need to handle more complex semantics and contextual information. Given IMDb’s global reach, the dataset encompasses a wide range of expressions and cultural backgrounds, making it valuable for testing the generalization capabilities of sentiment analysis models.
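As mentioned above, the two publicly hosted corpora can be pulled directly from the Hugging Face Hub. The snippet below is only an illustration, not the authors' code: the dataset identifiers are taken from the URLs cited in the list, while the split names and the inspection comment are assumptions rather than guaranteed field layouts.

# Illustrative loading of the publicly hosted benchmarks (not the authors' code).
from datasets import load_dataset

# Sentiment140: 1.6M tweets with sentiment labels.
sentiment140 = load_dataset("stanfordnlp/sentiment140", split="train")

# IMDB Reviews: 50,000 movie reviews labelled positive/negative.
imdb = load_dataset("stanfordnlp/imdb", split="train")

# MyPersonality is distributed under access restrictions and is not loaded here.

print(sentiment140[0])   # inspect the first record and its label field
print(imdb[0])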
§.§ Tasks
The focus of this paper is on the task of multi-label classification, where each input query q can be linked to multiple labels simultaneously, rather than being limited to a single label. Formally, given a query q, the model outputs a set of labels y = {y_1, y_2, …, y_k}, with each y_i ∈{0, 1} representing the presence (1) or absence (0) of the corresponding label. Large language models (LLMs) address this task by leveraging their advanced ability to comprehend complex textual contexts, enabling them to effectively predict multiple relevant labels by identifying and capturing intricate patterns embedded within the data.
Evaluation Process of LLMs' Sentiment Analysis
Input: Questions (Prompt) 𝒬, candidate options set 𝒪 = { o_1, o_2, …, o_n }, pretrained embedding model ℳ, LLM API 𝒦.
Output: Option ô∈𝒪 with highest probability (LLM response).
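Since the algorithm is summarised above only by its inputs and outputs, the following Python sketch reconstructs one plausible reading of the workflow: the LLM's free-form ("soft") answer is embedded, compared with each candidate option by cosine similarity, and the closest option is returned. The names call_llm and embed are placeholders standing in for the LLM API 𝒦 and the embedding model ℳ; they are assumptions, not interfaces defined by the paper.

# Hedged sketch of the evaluation loop described above.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def pick_option(prompt, options, call_llm, embed):
    """Return the candidate option closest to the LLM's soft output."""
    response = call_llm(prompt)                    # free-form answer from the LLM API
    response_vec = embed(response)                 # embed the soft output
    scores = {opt: cosine(response_vec, embed(opt)) for opt in options}
    return max(scores, key=scores.get)             # option with highest similarity

# Example usage with user-supplied callables (hypothetical):
# label = pick_option("How does this review feel? <review text>",
#                     ["positive", "negative", "neutral"],
#                     call_llm=my_llm_api, embed=my_embedding_model)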
§.§ Evaluation.
Metrics for multi-class classification are used to evaluate the performance of models that predict multiple classes. Key metrics include accuracy, precision, recall, and F1 score, each providing insights into different aspects of the model's performance. Accuracy measures the proportion of correctly predicted samples out of the total samples. Precision and recall assess the model's performance for each class, with precision indicating how many of the predicted positives are actually correct, and recall showing how many actual positives were correctly identified. The F1 score, the harmonic mean of precision and recall, offers a balanced evaluation of the model's effectiveness.
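As a concrete illustration (not the authors' evaluation script), these metrics can be computed with scikit-learn; macro averaging over the sentiment classes is an assumption made here for the example.

# Generic multi-class metric computation with scikit-learn (illustrative only).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["positive", "negative", "neutral", "positive", "negative"]
y_pred = ["positive", "neutral",  "neutral", "positive", "negative"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)

print(f"accuracy={accuracy:.2f}  precision={precision:.2f}  "
      f"recall={recall:.2f}  f1={f1:.2f}")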
§.§ Baselines.
We utilize well-established LLMs, such as LLaMA, as baseline models in our experiments. Unless specified otherwise, all baseline models are implemented using the parameters provided in their respective original APIs.
* ChatGPT[<https://openai.com/>]. It is an advanced conversational AI developed by OpenAI, designed to generate human-like text based on given prompts. It comes in various versions, including GPT-3.5-turbo, GPT-4, and GPT-4-turbo. GPT-3.5-turbo offers efficient performance and responsiveness, making it well-suited for a wide range of applications. GPT-4, a more powerful and sophisticated model, provides enhanced language understanding and generation capabilities, ideal for complex tasks. GPT-4-turbo further optimizes performance, delivering faster responses while maintaining the high quality and depth of GPT-4's output.
* LLaMA[<https://huggingface.co/docs/transformers/main/model_doc/llama3>] (Large Language Model Meta AI). It is a family of advanced language models developed by Meta, designed to generate and understand human-like text. LLaMA models are known for their efficiency and effectiveness in various natural language processing tasks. Within this family, Mistral[<https://huggingface.co/mistralai/Mistral-7B-v0.1>] is a notable variant that focuses on optimizing performance and resource usage, offering high-quality outputs while being more computationally efficient. Mistral represents a key innovation within the LLaMA series, combining state-of-the-art language generation with enhanced scalability and accessibility for diverse applications.
* Doubao[<https://huggingface.co/doubao-llm>]. It is an advanced language model designed to excel in a wide range of natural language processing tasks, from text generation and sentiment analysis to machine translation. Built with cutting-edge deep learning techniques and trained on extensive data, Doubao captures intricate linguistic patterns and contextual meanings, enabling it to generate human-like text across various contexts. Its robust performance and versatility make it a valuable tool for industries such as customer service, content creation, academic research, and data-driven decision-making. Doubao's capabilities contribute significantly to the advancement of AI technologies and our understanding of language.
§ RESULTS AND INSIGHTS
§.§ Results
We present a comprehensive overview of our evaluation results across three different datasets, which are shown in Table <ref>, Table <ref>, and Table <ref>. These tables encompass a range of models and configurations, offering detailed insights into their performance. Additionally, we provide a demonstration analysis in Table <ref>, where we apply various prompt templates to the IMDB dataset. For further clarity, the specific details of the prompt templates can be found in Table <ref>. In the following sections, we discuss key findings and insights, addressing each observation individually for a more in-depth understanding.
§.§ Interesting Insights
Insight 1. Some LLMs possess a unique ability to be sensitive to sentiment. (Refer to Table <ref>, <ref> and <ref>)
Insight 2. Processing prompts cannot obscure or eliminate LLMs ability to detect sentiment with neutral prompts. (Refer to Table <ref> and <ref>)
Insight 3. Different versions of the same LLM can exhibit varying behaviors and performance. (Refer to Table <ref>, <ref> and <ref>)
§ DISCUSSION
The evaluation results across different datasets and models shed light on the varying capabilities of LLMs, particularly when it comes to sentiment detection.
LLMs exhibit a certain sensitivity towards sentiment that appears to be a unique characteristic across multiple models, as shown in Table <ref>, Table <ref>, and Table <ref>. For example, the Doubao-pro model consistently performs well in sentiment tasks, demonstrating strong scores in both precision and recall across multiple datasets. This suggests that the underlying architecture of some LLMs might be more adept at capturing emotional subtleties in text, despite the varying nature of the input data. This sensitivity indicates that certain LLMs can be fine-tuned or selected specifically for sentiment-related tasks, even in a competitive landscape of multiple LLM options.
The ability of LLMs to detect sentiment is not easily obscured by prompt processing, particularly when dealing with neutral prompts, as evident from Table <ref>. Doubao-pro maintains a relatively high performance with neutral prompts, despite changes in input structure. This suggests that the model's internal mechanisms for identifying sentiment are robust enough to operate even when the prompt is neutral, implying that sentiment detection in LLMs may be deeply ingrained in the model's learned representations, rather than being highly sensitive to prompt formulations. This highlights the model's flexibility and adaptability in different real-world scenarios where the exact phrasing of the input may vary.
The comparison between different versions of the same LLM, such as the Doubao and Doubao-pro models, reveals that even slight modifications to the architecture or training procedures can lead to notable differences in performance, as shown in Table <ref>, Table <ref>, and Table <ref>. Doubao-pro consistently outperforms its predecessor across multiple datasets and metrics, showing that model refinement plays a crucial role in enhancing the ability of LLMs to perform on sentiment tasks. This variability underscores the importance of continuous model development and experimentation to achieve optimal results in practical applications. These detailed insights together provide a deeper understanding of how LLMs behave under various conditions and prompt configurations, suggesting potential strategies for optimizing LLMs for sentiment analysis tasks in diverse applications.
§ APPENDIX
§ PROMPT DESIGN
For each task, we apply a consistent prompt engineering template to generate the input prompt. The templates are listed below.
| http://arxiv.org/abs/2409.03200v1 | 20240905024636 | Active Fake: DeepFake Camouflage | ["Pu Sun", "Honggang Qi", "Yuezun Li"] | cs.CV | ["cs.CV"] |
Active Fake: DeepFake Camouflage
Pu Sun, Honggang Qi, Member, IEEE, Yuezun Li, Member, IEEE
Honggang Qi and Yuezun Li are Corresponding authors.
Pu Sun and Honggang Qi are with the University of Chinese Academy of Sciences, China. e-mail: (sunpu21@mails.ucas.ac.cn;hgqi@ucas.ac.cn).
Yuezun Li is with the School of Computer Science and
Technology, Ocean University of China, China. e-mail: (liyuezun@ouc.edu.cn).
==============================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
DeepFake technology has gained significant attention due to its ability to manipulate facial attributes with high realism, raising serious societal concerns. Face-Swap DeepFake is the most harmful among these techniques, which fabricates behaviors by swapping original faces with synthesized ones. Existing forensic methods, primarily based on Deep Neural Networks (DNNs), effectively expose these manipulations and have become important authenticity indicators. However, because these methods mainly concentrate on capturing the blending inconsistency in DeepFake faces, a new security issue, termed Active Fake, emerges when individuals intentionally create blending inconsistencies in their authentic videos to evade responsibility. This tactic is called DeepFake Camouflage. To achieve this, we introduce a new framework for creating DeepFake camouflage that generates blending inconsistencies while ensuring imperceptibility, effectiveness, and transferability. This framework, optimized via an adversarial learning strategy, crafts imperceptible yet effective inconsistencies to mislead forensic detectors. Extensive experiments demonstrate the effectiveness and robustness of our method, highlighting the need for further research in active fake detection.
DeepFake, AI Security, Active Fake.
§ INTRODUCTION
DeepFake is a recent AI generative technique that has drawn increasing attention. It manipulates facial attributes such as identity, expression, and movement with high realism, causing serious societal concerns, such as attacks on face recognition systems <cit.>, the spread of misinformation <cit.>, and threats to societal stability <cit.>. Among these techniques, Face-Swap DeepFake is particularly notable and harmful, as it can fabricate the behavior of target identities by swapping the original face with a synthesized target face <cit.>. This technique has matured, and many Face-Swap tools have become prevalent and user-friendly, e.g., DeepFaceLab <cit.>, Faceshifter <cit.>, FaceSwap <cit.>, Deepswap <cit.>, and Faceswapper <cit.>. Two major steps are usually employed to create a Face-Swap DeepFake face: first, the central face area is cropped out from the original face image and used to synthesize a target face; second, this face is blended back into the original face image. Fig. <ref> illustrates the pipeline of creating Face-Swap DeepFakes.
It is important to note that the current DeepFake attacks belong to Passive Fake, where attackers maliciously use photos of victims to create fake content without their consent.
These DeepFakes are visually indistinguishable from the naked eye, necessitating dedicated forensics methods to protect potential victims.
The mainstream forensics methods are developed on Deep Neural Networks (DNNs), leveraging their powerful feature-capturing capacities to expose manipulation traces <cit.>. These methods have been demonstrated to be highly effective and are indispensable for verifying the authenticity of faces.
Many of them are trained specifically on Face-Swap DeepFakes, allowing models to learn the specific manipulation patterns <cit.>. Since blending operations are commonly used in creating Face-Swap DeepFakes, these manipulation patterns inherently contain blending inconsistency, i.e., the discrepancy between the blended face area and its authentic surroundings (validated in Sec. <ref>).
Based on this, several advanced data augmentation strategies have been explored to improve the generalization of DeepFake detection. These methods typically create pseudo-fake faces by blending various faces <cit.>. This enables models to focus more on the inconsistency introduced by blending operations.
Despite their impressive performance, these methods face a new security problem that can be exploited maliciously, which we refer to as Active Fake. Since forensic methods rely on blending inconsistency as evidence of manipulation, individuals can intentionally create inconsistency in their authentic but inappropriate videos and release them publicly. If legislative institutions investigate and hold them accountable, they can falsely claim that these videos were manipulated by DeepFake. We refer to this tactic as "DeepFake Camouflage". Fig. <ref> illustrates the idea of DeepFake camouflage.
Note that the adversarial attacks can also be used to mislead DNN models and have been explored in several anti-forensics methods for evading DeepFake detectors <cit.>. However, these methods are limited in practical applications: 1) they are often difficult to interpret because they are typically generated by disrupting classification objectives without considering the context of Face-Swap DeepFakes. 2) they are visible since they are not related to the face content. 3) they concentrate on specific models rather than the essence of DeepFake detection, making them more overfitted to specific models (See Sec. <ref>).
In this paper, we break away from previous methods and introduce a new framework for DeepFake camouflage. Our idea is to apply simple image operations with learned parameters on real faces to introduce blending inconsistency while satisfying three key criteria: 1) imperceptibility to human observers; 2) effectiveness in deceiving the DeepFake detectors; 3) transferability across various mainstream DeepFake detectors.
To achieve this, we propose Camouflage GAN (CamGAN), a framework designed to generate blending inconsistency that evades DeepFake detectors. Our framework comprises four key components: a configuration generator, a camouflage module, a visual discriminator, and a DeepFake detector (See Fig. <ref>). Specifically, we employ two operations to create inconsistency, Gaussian noising and Gaussian filtering. The configuration generator determines the intensity of these operations by learning to craft the appropriate parameters based on the input image.
The camouflage module preprocesses the face area with these learned parameters and blends it into the original face image using a blending mask derived from facial landmarks. It enhances visual quality by minimizing artifacts around the blending boundary through Gaussian filtering, with parameters generated by the configuration generator.
Training the configuration generator involves adversarial learning <cit.> with two discriminators: a visual discriminator, which ensures the camouflaged face appears visually real, and a DeepFake detector, which validates if the camouflaged face can mislead the detector. This training process is challenging as the operations within the camouflage module are not differentiable, making gradient back-propagation inapplicable. Thus, we describe a reinforcement learning-based scheme to optimize the framework. During inference, only the configuration generator and the camouflage module are needed to create camouflaged faces.
Our contributions are summarized as follows:
* We introduce a new approach, DeepFake Camouflage, to evade DeepFake detectors. Unlike conventional passive fake methods, this approach allows attackers to release authentic but inappropriate videos publicly while avoiding accountability.
* We propose a new generative framework (CamGAN) to achieve DeepFake Camouflage. CamGAN learns to generate appropriate preprocessing parameters to create blending inconsistency on authentic face images. This framework is adversarially trained and optimized using a reinforcement learning mechanism.
* Extensive experiments on standard datasets, in comparison to various adversarial attacks, demonstrate the effectiveness of our method in multiple scenarios. We also thoroughly study the effect of each component, offering insights for future research in active fake.
§ BACKGROUND AND RELATED WORK
§.§ DeepFake
DeepFake, a combination of Deep Learning and Fake Face, first appeared on Reddit in 2017<cit.>. Originally, DeepFake referred to a face-swap technique capable of generating highly realistic target identity faces and replacing the source identity faces in videos while maintaining consistent facial attributes such as expressions and orientation, as illustrated in Fig. <ref>. Nowadays, the term DeepFake has expanded to encompass all AI-generated faces, including whole face synthesis (created by GANs <cit.> and diffusion models <cit.>), face editing <cit.>, and face reenactment <cit.>. Nevertheless, among these forms, Face-Swap DeepFake has gained the most attention due to its significant negative social impacts, such as the creation of revenge porn videos <cit.>, the fabrication of inappropriate behavior by public figures <cit.>, and economic fraud <cit.>. The availability of user-friendly tools for Face-Swap DeepFakes has further lowered the barrier to making fake videos, thereby exacerbating the security risks. Therefore, this paper focuses specifically on the Face-Swap DeepFakes.
§.§ DeepFake Detection
To curb the misuse of DeepFake algorithms, DeepFake detection algorithms have flourished in recent years. DeepFake detection aims to perform binary classification on input images or videos to determine their authenticity. Detectors can be roughly categorized as naive, spatial, and frequency detectors <cit.>. Many CNN-based models are utilized in DeepFake detection, with data-driven training <cit.> and various strategies, such as newly designed architectures <cit.>, augmentations <cit.>, and preprocessing <cit.>. <cit.> create pseudo-fake faces by blending different faces as a special data augmentation. Some DeepFake detectors specifically utilize the spatial information of images <cit.>. <cit.> introduce disentanglement learning into DeepFake detection. Nguyen <cit.> locates the forgery region besides classifying the image. <cit.> utilize a capsule network to detect the images. Other detectors fully exploit information in the frequency domain <cit.>. Qian <cit.> propose to learn subtle forgery patterns through frequency component partitioning. SPSL <cit.> utilizes phase spectrum analysis to improve classification. SRM <cit.> notices that high-frequency noise can boost performance. To fully demonstrate the effectiveness of our method, we perform experiments on naive, spatial, and frequency detectors.
§.§ Evading DeepFake Detection
Existing methods for evading DeepFake detection <cit.> commonly use adversarial attacks <cit.> to add noise to the images, misleading DeepFake detectors to make incorrect predictions. While these approaches have shown promise, they suffer from several significant limitations that hinder their practical application: 1) Poor interpretability: Adversarial perturbations are typically generated by disrupting classification objectives and back-propagating gradients to the input face image. These perturbations often have little connection to the context of Face-Swap DeepFakes, making them difficult to interpret. 2) High visibility: The visibility of adversarial perturbations is closely tied to the content of the face image. However, because these attacks are designed without considering the specific facial content, they often result in highly visible artifacts. 3) Limited transferability: Adversarial attacks generally focus on targeting specific models, which can lead to overfitting and poor transferability to other models. Despite attempts to address this issue, this limitation is inherent because the design of these attacks does not align with the fundamental nature of DeepFake detection. Therefore, we depart from these existing methods and introduce a new framework to achieve DeepFake Camouflage.
§ ACTIVE FAKE: DEEPFAKE CAMOUFLAGE
§.§ Inspiration and Preliminary Analysis
To verify the feasibility of our idea, we employ Xception <cit.> as the DeepFake detector and train it using Face-Swap DeepFake faces. As shown in Fig. <ref>, we visualize the attention of Xception on DeepFake faces using Grad-CAM <cit.>. Row (a) indicates the DeepFake faces and Row (b) exhibits corresponding Grad-CAM maps. These visualizations demonstrate that the detector mainly concentrates on the manipulated face area. In contrast, the real face images, as shown in Row (c, d), have scattered attention over the backgrounds.
To demonstrate whether the detectors treat the blending inconsistency as important forensic evidence, we manually create an inconsistency by applying intense Gaussian filtering to the central face area of real images. The visual examples are shown in Row (e). We send these images into the detector and visualize the Grad-CAM maps. As shown in Row (f), these images are successfully identified as fake and are highlighted on the processed face area as in Row (b), demonstrating the feasibility of disrupting the detectors by introducing inconsistency.
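For illustration, this probe can be reproduced with the short Python sketch below; detector stands for an assumed pretrained binary DeepFake classifier that returns the probability of the input being real, and the central-box size and blur strength are illustrative values rather than the exact settings used above.

import cv2
import numpy as np
import torch

def blur_center_probe(img_bgr, detector, box_frac=0.5, sigma=8.0):
    """Heavily blur the central face area of a real image and compare the
    detector's real-probability before and after (detector is assumed)."""
    h, w = img_bgr.shape[:2]
    bh, bw = int(h * box_frac), int(w * box_frac)
    y0, x0 = (h - bh) // 2, (w - bw) // 2
    probe = img_bgr.copy()
    probe[y0:y0 + bh, x0:x0 + bw] = cv2.GaussianBlur(
        probe[y0:y0 + bh, x0:x0 + bw], (31, 31), sigma)

    def real_prob(img):
        x = torch.from_numpy(img[:, :, ::-1].copy()).float().permute(2, 0, 1) / 255.0
        with torch.no_grad():
            return float(detector(x.unsqueeze(0)))   # assumed to return P(real)

    return real_prob(img_bgr), real_prob(probe)      # the second value should drop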
Nevertheless, manually designing the inconsistency is infeasible, as it can hardly maintain imperceptibility, effectiveness, and transferability. Thus we describe a learnable framework to create blending inconsistency.
§.§ Problem Setup & Overview
Denote x_r ∈ℝ^H × W × 3 as a real clean face image and x^*_r as the camouflaged face image. Let a DeepFake detector as D. Our goal is to create imperceptible blending inconsistency on real faces, causing them to be classified as fake. This goal can be written as
min_w ‖x^*_r - x_r ‖_p, s.t. D (x^*_r ) = 0,
where ‖·‖_p denotes the magnitude of the inconsistency, and the labels {0, 1} represent fake and real, respectively.
Denote the camouflage module as C. The camouflaged face can be denoted as x^*_r = C(x_r; w), where w represents the parameters learned in the process. The camouflage module C involves two steps: creating inconsistency and blending inconsistency.
Creating Inconsistency.
To create inconsistency, we adopt two image operations: Gaussian noising and Gaussian filtering. Denote the parameters for Gaussian noising as w_gn = (μ_gn, σ_gn), where μ_gn, σ_gn correspond to the mean and standard deviation.
After adding Gaussian noise, we then apply Gaussian filtering. This operation blurs the images using a Gaussian kernel. The parameters for Gaussian filtering are denoted as w_gf = (k_gf, σ_gf), where k_gf, σ_gf correspond to the kernel size and standard deviation. Denote x'_r as the face image after adding inconsistency.
Blending Inconsistency.
We blend the facial region of x'_r into the original face image x_r using a mask ℳ. This process can be described as
x^*_r = x'_r·ℳ + x_r· (1 - ℳ).
Straightforwardly, the mask ℳ is the convex hull that includes all the facial contour landmarks, where the pixels inside the mask are set to 1 and those outside the mask are set to 0 (see Fig.<ref>).
However, simply using this mask can introduce visible artifacts around the blending boundary, due to the color or texture discrepancy between x'_r and x_r. Inspired by <cit.>, we convert the binary mask into a soft mask by applying Gaussian filtering on the mask boundary and use it for face blending. Denote the parameters for Gaussian filtering on the mask as w_bl = (k_bl, σ_bl). Note that these parameters are also learned.
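For illustration, the two steps above can be realized with the following minimal Python sketch; it assumes that facial-contour landmarks are available from an off-the-shelf landmark detector and that the operation parameters are already given, and the function name and interface are illustrative only (kernel sizes must be odd positive integers).

import cv2
import numpy as np

def camouflage(img, landmarks, w_gn, w_gf, w_bl):
    """img: HxWx3 uint8 real face; landmarks: Nx2 facial-contour points;
    w_gn=(mu_gn, sigma_gn), w_gf=(k_gf, sigma_gf), w_bl=(k_bl, sigma_bl)."""
    mu_gn, sigma_gn = w_gn
    k_gf, sigma_gf = w_gf
    k_bl, sigma_bl = w_bl

    # Creating inconsistency: Gaussian noising followed by Gaussian filtering.
    x = img.astype(np.float32) / 255.0
    noisy = x + np.random.normal(mu_gn, sigma_gn, x.shape).astype(np.float32)
    processed = cv2.GaussianBlur(noisy, (k_gf, k_gf), sigma_gf)

    # Blending mask: convex hull of the landmarks, softened by Gaussian filtering.
    hull = cv2.convexHull(landmarks.astype(np.int32))
    mask = np.zeros(img.shape[:2], dtype=np.float32)
    cv2.fillConvexPoly(mask, hull, 1.0)
    soft_mask = cv2.GaussianBlur(mask, (k_bl, k_bl), sigma_bl)[..., None]

    # Blend the processed face region back into the original image.
    out = processed * soft_mask + x * (1.0 - soft_mask)
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)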
§.§ Camouflage GAN
To determine the values of the parameters w = (w_gn,w_gf,w_bl), we propose a Camouflage GAN (CamGAN) that learns to generate parameters adaptive to different face images.
Overview and Architectures.
As shown in Fig. <ref>, this framework is composed of four key components: a configuration generator G, a camouflage module C, a visual discriminator V, and a DeepFake detector D.
- Configuration Generator. This generator is designed to create all learnable parameters w= (σ_gn, μ_gn, σ_gf, k_gf, σ_bl, k_bl). It is built on Xception <cit.> with six additional parallel fully connected layers, one for predicting each parameter (a minimal sketch is given after this list).
- Camouflage Module. Given a clean real face x_r, we first apply Gaussian noising and Gaussian filtering with the learned parameters (σ_gn, μ_gn, σ_gf, k_gf). Then we create a blending mask ℳ using the following steps: We first obtain a binary mask by drawing a convex hull including all facial boundary landmarks. We then apply Gaussian filtering with the learned parameters σ_bl, k_bl to this binary mask to obtain a soft mask as blending mask ℳ. The whole process is shown in Fig. <ref> (yellow box).
- Visual Discriminator. This discriminator is designed to simulate human eyes, distinguishing between images with and without inconsistency. We also employ the Xception network and output a binary classification, i.e., whether the input face contains inconsistency. The camouflaged face x^*_r and the DeepFake face x_f are expected to contain inconsistency, while real images x_r are not.
- DeepFake Detector. This detector serves as a discriminator for distinguishing whether a face is real or fake. Note that the camouflaged faces should be detected as fake. In our method, we directly employ the well-trained DeepFake detectors.
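A possible PyTorch realization of the configuration generator is sketched below; a torchvision ResNet-18 is used here merely as a stand-in for the Xception backbone, and the parameter ranges are illustrative assumptions, since the admissible ranges are not specified above.

import torch
import torch.nn as nn
import torchvision

class ConfigurationGenerator(nn.Module):
    """Predicts w = (sigma_gn, mu_gn, sigma_gf, k_gf, sigma_bl, k_bl) from a face image."""
    # Illustrative parameter ranges; kernel sizes are rounded to odd integers before use.
    RANGES = {"sigma_gn": (0.0, 0.1), "mu_gn": (-0.05, 0.05),
              "sigma_gf": (0.1, 3.0), "k_gf": (1.0, 9.0),
              "sigma_bl": (0.1, 5.0), "k_bl": (1.0, 15.0)}

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)  # stand-in for Xception
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                 # keep the pooled feature vector
        self.backbone = backbone
        # Six parallel fully connected heads, one per operation parameter.
        self.heads = nn.ModuleDict({k: nn.Linear(feat_dim, 1) for k in self.RANGES})

    def forward(self, x):
        f = self.backbone(x)
        w = {}
        for name, (lo, hi) in self.RANGES.items():
            w[name] = lo + (hi - lo) * torch.sigmoid(self.heads[name](f)).squeeze(-1)
        return w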
§.§ Loss functions and Training
Denote the configuration generator, the visual discriminator, and the DeepFake detector as G, V, and D, respectively. To effectively instruct the learning of CamGAN, we introduce three simple loss terms: detector spoofing loss ℒ_ ds, visual inspection loss ℒ_ vi, and visual constraint loss ℒ_ vc.
Detector Spoofing Loss.
We expect that the camouflaged face x_r^* should be able to spoof the DeepFake detector D, i.e., mislead the detector into predicting x_r^* as fake. Let 0 and 1 correspond to fake and real, respectively. Therefore, the detector spoofing loss can be defined as
L_ ds = logD(x_r^*),
where D(x_r^*) represents the probability of x_r^* being real. Minimizing this loss term decreases the real probability of x_r^*, i.e., pushes it closer to the label 0. Note that we directly employ well-trained DeepFake detectors and fix their parameters during training.
Visual Inspection Loss.
The visual discriminator V is designed to determine whether the given face is visually manipulated. We employ this discriminator to improve the synthesized quality in the manner of adversarial learning. Specifically, given the camouflaged face x_r^*, we anticipate that it can mislead this discriminator V, i.e., be classified as real. Denote by y ∈{0, 1} the labels of visually fake and real, respectively. This loss term can be defined as
L_ vi = -y logV(x_r) + (1 - y) logV(x_f) + logV(x_r^*),
where x_r and x_f denote wild real and fake faces, and V(·) represents the probability of the input face being real. Minimizing this loss term with respect to the discriminator drives V(x_r^*) down, so that the camouflaged face x_r^* is classified as visually fake.
Visual Constraint Loss.
To ensure the camouflaged faces are visually similar to real faces, we design a visual constraint loss to restrict the strength of distortions. This loss term can be formulated as the ℓ_p norm distance between x_r^* and x_r, as
ℒ_ vc = ‖x_r^* - x_r‖_p,
Overall Loss and Optimization. With these loss terms, we train CamGAN in the way of adversarial learning, which is expressed as
min_Gmax_V L_ ds + L_ vc - L_ vi.
Note that L_ ds, L_ vc only involve optimizing the configuration generator G, while L_ vi involves optimizing the configuration generator G and the visual discriminator V. We employ the scheme of adversarial training, which alternately optimizes G,V.
* When fixing the configuration generator G, both L_ ds and L_ vc remain unchanged. During this process, we maximize -L_ vi, leading to the reduction of V(x_r^*), corresponding to classifying x_r^* as fake.
* When fixing the discriminator V, we minimize L_ ds + L_ vc - L_ vi by optimizing the generator G. This means that the camouflaged faces aim to 1) spoof the DeepFake detector D, i.e., be classified as fake, 2) have minimal distortions, and 3) deceive the visual discriminator V, i.e., be classified as real.
Reinforcement Learning Based Optimization.
It is important to note that the process in the face synthesizer is typically not differentiable, leading to gradient interruption. This challenge affects the optimization of the configuration generator G, preventing it from being optimized by standard gradient back-propagation. To resolve this, we adopt the strategy in reinforcement learning <cit.> to optimize the configuration generator G.
Specifically, we reformulate the visual constraint loss L_ vc by disconnecting the configuration generator G with the face synthesizer and directly restricting the output of the configuration generator G. Since the generated operation parameters control the magnitude of face distortion, restricting them helps ensure the visual similarity between the camouflaged faces and real faces.
The configuration generator G is crafted to output three sets of operation parameters w = (w_ gn, w_ gf, w_ bl), corresponding to the Gaussian noising and Gaussian filtering in the creating-inconsistency step and the Gaussian masking in the blending-inconsistency step. A larger magnitude of w_ gn, w_ gf introduces more intense distortions. On the contrary, a smaller magnitude of w_ bl means the boundary of the blending mask is sharper, introducing more blending artifacts. Therefore, the visual constraint loss is reformulated to penalize large w_ gn, w_ gf while encouraging large w_ bl, as expressed by
ℒ_ vc = log (w_ gf) + log (w_ gn) - log (w_ bl).
Minimizing this equation corresponds to the better visual quality of camouflaged faces.
We then optimize G using the following equation as
θ_t+1 = θ_t - η·exp(ϕ(L_ ds - L_ vi))·∇_θ_tℒ_ vc,
where η is the optimization step size and exp(ϕ(L_ ds - L_ vi)) is the penalty term, in which ϕ(L_ ds - L_ vi) is a monotonically increasing function defined as
ϕ(L_ ds - L_ vi) = sigmoid(L_ ds - L_ vi)
This approach allows the penalty term exp(ϕ(L_ ds - L_ vi)) to influence the parameters of G by optimizing Eq.(<ref>), thereby approximating the process described in Eq.(<ref>).
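To make the optimization concrete, the following PyTorch-style sketch outlines one training step: V is updated as a standard discriminator (written here with binary cross-entropy, a common rendering of Eq. (<ref>)), D is kept frozen, and G is updated through the reformulated visual constraint loss with its gradient scaled by the penalty term, since no gradient can flow through the non-differentiable camouflage operation. The function names and the way each parameter pair is combined inside L_vc are illustrative assumptions, not taken from the released implementation.

import torch
import torch.nn.functional as F

def train_step(G, V, D, camouflage, x_r, x_f, opt_G, opt_V, eps=1e-6):
    """One CamGAN step. G: configuration generator, V: visual discriminator,
    D: frozen DeepFake detector, camouflage: non-differentiable image operation.
    V and D are assumed to output the probability of the input being real."""
    # Generate camouflaged faces; no gradient flows through the image operation.
    w = G(x_r)                                   # dict of operation parameters
    with torch.no_grad():
        x_star = camouflage(x_r, w)

    # Update the visual discriminator V: x_r is visually real, x_f and x_star are not.
    opt_V.zero_grad()
    v_r, v_f, v_s = V(x_r), V(x_f), V(x_star)
    loss_V = (F.binary_cross_entropy(v_r, torch.ones_like(v_r)) +
              F.binary_cross_entropy(v_f, torch.zeros_like(v_f)) +
              F.binary_cross_entropy(v_s, torch.zeros_like(v_s)))
    loss_V.backward()
    opt_V.step()

    # Update the configuration generator G via the penalty-scaled update.
    opt_G.zero_grad()
    with torch.no_grad():
        L_ds = torch.log(D(x_star) + eps).mean()          # detector spoofing loss
        L_vi = torch.log(V(x_star) + eps).mean()          # visual inspection term
        penalty = torch.exp(torch.sigmoid(L_ds - L_vi))   # exp(phi(L_ds - L_vi))
    # Reformulated visual constraint loss on G's outputs (one way to combine each pair).
    L_vc = (torch.log(w["sigma_gf"] * w["k_gf"] + eps) +
            torch.log(w["sigma_gn"] + torch.abs(w["mu_gn"]) + eps) -
            torch.log(w["sigma_bl"] * w["k_bl"] + eps)).mean()
    (penalty * L_vc).backward()                           # scaled gradient of L_vc
    opt_G.step()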
§ EXPERIMENTS
§.§ Experimental Setups
Datasets.
We use the training set of FaceForensics++ <cit.> to train our model. In the testing phase, for the FaceForensics++ <cit.> dataset, we directly use the real faces in its testing set. Since there is no division into training and testing sets for the Celeb-DF <cit.> dataset, we choose the real faces of 10 identities as the testing set. All faces have a size of 256 × 256, following DeepfakeBench <cit.>.
Metrics.
We use four metrics for evaluation: ACC, SSIM, PSNR, and FID.
(1) ACC is the accuracy of the DeepFake detector in predicting whether an image is real or fake. We calculate the accuracy of various DeepFake detectors on real clean images and their corresponding camouflaged images. The greater the decrease in accuracy, the stronger the misleading effect our method has on the DeepFake detector. (2) SSIM, PSNR, and FID <cit.> are calculated between real clean images and their corresponding camouflaged images. They are used to measure the quality of camouflaged face images and how much our method has impacted the quality of the images. Higher SSIM and PSNR values, along with lower FID values, indicate smaller noise added to the camouflaged images.
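For reference, the per-image quality metrics and the accuracy drop can be computed as in the sketch below (using scikit-image for SSIM and PSNR; FID is computed over whole image sets with a separate tool and is omitted here). The implementation choices are illustrative and not specified above.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(real, camo):
    """real, camo: HxWx3 uint8 images (a clean face and its camouflaged version)."""
    ssim = structural_similarity(real, camo, channel_axis=-1, data_range=255)
    psnr = peak_signal_noise_ratio(real, camo, data_range=255)
    return ssim, psnr

def accuracy(preds, label_real=1):
    """ACC of a detector, given its hard predictions on a set of (real) images."""
    return float(np.mean(np.asarray(preds) == label_real))

# Example: acc_drop = accuracy(preds_on_clean) - accuracy(preds_on_camouflaged)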
Implementation Details.
We train the DeepFake detectors, Xception <cit.>, FFD <cit.>, SPSL <cit.>, and SRM <cit.>, using DeepfakeBench <cit.> with the FaceForensics++ <cit.> dataset and fix their parameters as well-trained DeepFake detectors for all subsequent experiments. These DeepFake detectors encompass naive, spatial, and frequency detectors. Our Camouflage GAN is implemented using PyTorch 1.9.0 on Ubuntu 20.04 with an Nvidia 3090 GPU. In the experiments, the batch size is set to 1, we use the RMSProp optimizer <cit.>, and the initial learning rate is set to 1.0×10^-5. When adding Gaussian noise to an image, we first normalize the image to the range of 0.0 to 1.0 by dividing by 255.0. After adding Gaussian noise, we then multiply the image by 255.0.
§.§ Results
Quantitative Results.
We conduct quantitative evaluations on our method as well as four adversarial attacks, i.e., CW <cit.>, Jitter <cit.>, PGD <cit.>, and Pixle <cit.>, and the results are presented in Table <ref>, Table <ref>, Table <ref>, and Table <ref>. The left column in each table denotes the attack methods, e.g., Ours-FFD represents our CamGAN trained with FFD as the DeepFake detector, and CW-FFD represents using CW to attack FFD. The top row of Table <ref> and Table <ref> denotes which DeepFake detector is used in testing. It can be observed that our method outperforms those four adversarial attacks in terms of both white-box[The DeepFake detector being attacked and the one being tested are the same.] and black-box[The DeepFake detector being attacked and the one being tested are NOT the same.] attacks, as well as visual fidelity. Our method has a much smaller impact on image quality compared to those adversarial attacks, yet it achieves superior performance in attacking DeepFake detectors. From Table <ref> and Table <ref> we observe that the performances of the four adversarial attacks are not stable, e.g., in many black-box scenarios, the accuracy on images perturbed by other attacks tends to increase. Only CW and PGD can compete with our method in white-box scenarios, but their performance in black-box scenarios is inferior to ours. Jitter and Pixle perform worse than our method in both white-box and black-box scenarios. In contrast, our method consistently interferes with the decision-making of the DeepFake detector, showing especially stable performance in all the black-box scenarios. The stability of our approach may be attributed to the way we add noise. We use Gaussian noising and Gaussian filtering for all the camouflage operations, which are independent of the specific architecture of the DeepFake detector. This reduces the risk of overfitting to a particular detector type, making our method inherently detector-agnostic.
Both qualitative and quantitative results strongly prove the superiority of our method in terms of imperceptibility, effectiveness, and transferability.
Qualitative Results. Row (a) and Row (b) in Fig. <ref> show examples where images are classified as real before camouflage and are classified as fake after camouflage. Rows (c)-(f) are images attacked by CW, Jitter, PGD, and Pixle, respectively. From Row (b) we can observe that images processed by our method have no obvious visual artifacts. Without a close comparison with the real clean face images, it is challenging to discern artifacts in the images from Row (b). In contrast, the images in Row (d) and Row (e) exhibit grain-like noise (which is typical of adversarial attacks) over the entire image. Although the noise in the images of Row (c) does not appear obvious and the noise in Row (f) consists of sporadic white spots, their attack success rates are not very high either. Additionally, none of these attack methods could surpass our camouflage in terms of attack success rate (see Table <ref> and Table <ref> for details). We can also observe that in our method, the noise is concentrated only in the facial region, whereas in the images processed by adversarial attacks, noise is distributed across the entire image. The visual quality of our method far exceeds or matches that of the adversarial attacks (Table <ref> and Table <ref>). The main reasons are as follows: 1) Our noise is closely related to facial texture, making it easier to conceal visually. 2) The noise we add is Gaussian noise followed by Gaussian filtering, which, compared to the irregular noise added by the adversarial attacks, appears more natural to the human eye.
Fig. <ref> shows the different Grad-CAM maps for real clean images and those processed by our method. As depicted, our method successfully produces the camouflage described in Sec. <ref>. The camouflage is subtle enough not to be noticeable to the human eye but effective in deceiving the DeepFake detector. Our method causes the detector to focus on the facial region, producing Grad-CAM maps similar to those in Fig. <ref>, ultimately leading to incorrect classification. Our method effectively simulates the effects of DeepFake tampering without compromising the images' quality or information.
§.§ Ablation Study
Other operations to create inconsistency.
Row (a) in Fig. <ref> shows camouflaged images with affine and elastic transforms <cit.> in the camouflage process. We incorporated affine and elastic transforms into the camouflage process during both training and inference stages, with their essential parameters learned by the configuration generator. As observed in Row (a), Fig. <ref>, some images exhibit more pronounced facial edge artifacts than others, reflecting variations in the parameters obtained from different image inputs to the configuration generator. Table <ref> shows that after adding the affine and elastic transforms, the performance of our method decreases. Table <ref> also indicates a significant degradation in image quality after incorporating affine and elastic transforms. In summary, our camouflage method is already effective in evading DeepFake detectors while preserving image quality well, as demonstrated by both quantitative and qualitative tests.
Without Visual Discriminator.
After removing the visual discriminator during the training phase, we utilized the obtained configuration generator to camouflage the images, resulting in images as shown in Row (b), Fig. <ref>. Visually, the Gaussian noise values in the facial regions of images in Row (b) have increased, noticeably affecting visual quality. As indicated in Table <ref>, there is a slight decrease in image quality of camouflaged images after removing the visual discriminator during training. Table <ref> suggests a noticeable negative impact on accuracy, emphasizing the crucial role of the visual discriminator in successful camouflages. This also implies that better attack performance does not necessarily correlate with poorer image quality.
§.§ Further Analysis
Robustness.
The robustness of methods is also crucial in real-world scenarios, as images will inevitably undergo various degradations, such as compression during transmission over the
Internet. To test the robustness of our method, we apply post-processing operations to images camouflaged by our method and images perturbed by the four adversarial attacks. Specifically, we use three kinds of post-processing operations: JPEG compression (quality factor 75), Gaussian filtering (sigma 0.5, kernel size 5 × 5), and Gaussian noising (sigma 0.01, mean 0.0). We then test the accuracy of the DeepFake detectors on these post-processed images and compare it to the accuracy on the images without post-processing operations. The results are shown in Fig. <ref>. The closer the bars are to the X-axis, the lower the accuracy on the images. Fig. <ref> shows that our method achieves the best attack performance before and after post-processing and the highest robustness. Although PGD performs slightly better than our method in some cases, overall, our approach is much more stable. In almost all cases, the robustness of our method remains relatively stable without significant fluctuations. Additionally, our method sacrifices far less in terms of visual quality compared to PGD (see Table <ref> and Table <ref>).
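The three post-processing operations can be reproduced as follows (an OpenCV-based sketch with the stated settings; the evaluation loop over detectors is omitted):

import cv2
import numpy as np

def jpeg_compress(img, quality=75):
    ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def gaussian_filter(img, ksize=5, sigma=0.5):
    return cv2.GaussianBlur(img, (ksize, ksize), sigma)

def gaussian_noise(img, sigma=0.01, mean=0.0):
    x = img.astype(np.float32) / 255.0 + np.random.normal(mean, sigma, img.shape)
    return np.clip(x * 255.0, 0.0, 255.0).astype(np.uint8)

# Example: degraded = [jpeg_compress(im) for im in camouflaged_images]
# followed by re-evaluating each DeepFake detector on the degraded images.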
Handcrafted Camouflage.
To demonstrate the effectiveness of our method, we experiment with manually configuring the camouflage parameters. We randomize the parameter settings for camouflaging images. The values of σ_gn, μ_gn, σ_gf, k_gf, σ_bl, and k_bl are generated completely at random. As shown in Table <ref> and Table <ref>, there is a significant difference in effectiveness between randomly set parameters and those generated by our trained model, demonstrating the significance of the configuration generator.
§ CONCLUSION
This paper describes a new active fake method named DeepFake Camouflage to evade DeepFake detectors. Specifically, we create and blend imperceptible inconsistency into the facial regions of real images, causing them to be misclassified as fake. We design a new generative framework, CamGAN, for creating and blending the inconsistency. We design a training strategy based on adversarial learning and reinforcement learning for the framework. Extensive experiments on the FaceForensics++ and Celeb-DF datasets demonstrate the efficacy and superiority of our method.
Acknowledgments.
This material is based upon work supported by National Natural Science Foundation of China, NSFC No.62271466.
| http://arxiv.org/abs/2409.03653v1 | 20240905160857 | Information conservation in de Sitter tunneling | ["Baoyu Tan"] | gr-qc | ["gr-qc"] |
Information conservation in de Sitter tunneling
Baoyu Tan
===============================================
§ INTRODUCTION
It is well known that according to the unitary principle, a fundamental principle of quantum mechanics, pure states evolve over time as pure states and cannot become mixed states. But black holes seem to defy this. In 1975, Hawking used quantum field theory in curved spacetime to show that particles can escape a black hole through quantum effects <cit.>. In 1976, Hawking proved that Hawking radiation is a pure thermal spectrum that carries no information <cit.>. That is, matter that enters the black hole in a pure state radiates out in a mixed state. The information is lost in the process of Hawking radiation, violating the principle of unitarity.
Page argues that Hawking's calculations did not take into account the effects of quantum entanglement. Although Hawking radiation does not carry information itself, it has been entangled with particles inside the black hole, thus transmitting information from the black hole to the outside. In 1993, Page proposed that in order to satisfy the unitary principle, the entanglement entropy of radiation should satisfy the Page curve <cit.>. In simple terms, although the radiation is a mixed state, the direct product of the radiation state and the black hole state is still a pure state. In recent years, there have been some breakthroughs in this approach <cit.>. But there is an essential difficulty with this approach. Since there is no complete theory of quantum gravity, it is impossible to know the concrete form of the density matrix. The entropy of entanglement is determined by the density matrix:
S_EE=-Tr(ρlogρ).
Where ρ is the density matrix and S_EE is the entropy of entanglement. Much of the discussion of information conservation via quantum entanglement has focused on AdS spacetime. The AdS/CFT duality automatically implies the unitary principle; that is, the conformal field on the boundary must satisfy the unitary principle and must not lose information. The discussion of general black holes remains difficult, even impossible, because it would require the semi-classical theory of gravity to automatically include the unitary principle, which is obviously unreasonable, or at least unfounded.
In 2000, Parikh and Wilczek treated Hawking radiation as a tunneling process. They started from the perspective of energy conservation and the change in the spacetime background caused by the contraction of the black hole after radiation, that is, they considered the self-gravity effect during the particle emission process, thus avoiding the difficulty of calculating the specific form of the density matrix. They also came to the conclusion that the radiation process satisfies the unitary principle and information conservation <cit.>. After the Parikh-Wilczek method was proposed, researchers successively studied various static and stationary rotating black holes and reached the same conclusion: due to the effect of self-gravity, the spacetime background changes, and the radiation spectrum deviates from the pure thermal spectrum, thus showing that Hawking radiation satisfies the unitary principle <cit.>. But until now, apart from the Hawking radiation of the Schwarzschild de Sitter spacetime discussed in ref. <cit.>, there has been little discussion of the asymptotically de Sitter spacetimes that are more similar to the real universe.
In Section <ref>, we separately calculated the radiation spectra near the event horizon and cosmic horizon of the magnetically charged Reissner-Nordström de Sitter black hole, and briefly discussed our results. In Section <ref> and Section <ref>, we performed similar calculations on the Kerr-Newman-Kasuya de Sitter black hole and Bardeen de Sitter black hole, respectively. Finally, in Section <ref>, we summarized the results and the discussion of the results obtained in the previous section. We have adopted the system of natural units in this paper (G≡ħ≡ c≡ 1).
§ MAGNETICALLY CHARGED REISSNER-NORDSTRÖM DE SITTER BLACK HOLE
According to ref. <cit.>, for the case of both electric and magnetic charges, the electromagnetic field tensor can be defined as:
F_μν=∇_ν A_μ-∇_μ A_ν+G_μν^+.
Where G_μν^+ is the Dirac string term. The Maxwell equation can be rewritten as:
∇_ν F^μν=4πρ_eu^μ,
∇_ν F^+μν=4πρ_gu^μ.
Where F^+μν is the dual tensor of F^μν, ρ_e and ρ_g represent the charge density and magnetic charge density, respectively. For the convenience of discussion, we have defined a new antisymmetric tensor:
F̃^μν=F^μνcosα+F^+μνsinα.
Where α is a fixed angle. From (<ref>) and (<ref>), we can deduce that:
∇_νF̃^μν=4π(ρ_ecosα+ρ_gsinα)u^μ=4πρ_hu^μ,
∇_νF̃^+μν=4π(-ρ_esinα+ρ_gcosα)u^μ=0.
According to the no-hair theorem, we can consider a black hole as a conducting sphere, and assume that the charge density and magnetic charge density satisfy the following relationship to ensure that (<ref>) and (<ref>) hold:
ρ_e/ρ_g=cotα.
We can conclude that:
Q_h^2=Q_e^2+Q_g^2.
Where Q_e and Q_g are the total electric and magnetic charges of the system, respectively, and Q_h is the equivalent charge corresponding to density ρ_h. Finally, we obtain the Lagrangian density of the electromagnetic field, which can be expressed by the following equation:
ℒ_h=-1/4F̃_μνF̃^μν.
The corresponding generalized coordinates for (<ref>) are:
Ã_μ=(Ã_0,Ã_1,Ã_2,Ã_3)=(-Q_h/r,0,0,0).
Obviously, Ã_0 is a cyclic coordinate. The above discussion also applies to Section <ref> and Section <ref>, and will not be repeated in the following text.
The line element of Magnetically charged Reissner-Nordström de Sitter black hole is:
ds^2=-(1-2M/r+Q_e^2+Q_g^2/r^2-Λ/3r^2)dt_s^2+(1-2M/r+Q_e^2+Q_g^2/r^2-Λ/3r^2)^-1dr^2+r^2dΩ^2.
Where M is the mass of the black hole, Q_e is the charge of the black hole, Q_g is the magnetic charge of the black hole, and Λ is the cosmological constant. According to (<ref>), the line element of Magnetically charged Reissner-Nordström de Sitter black hole can be rewritten as:
ds^2 =-(1-2M/r+Q_h^2/r^2-Λ/3r^2)dt_s^2+(1-2M/r+Q_h^2/r^2-Λ/3r^2)^-1dr^2+r^2dΩ^2
=-f(r)dt_s^2+1/f(r)dr^2+r^2dΩ^2.
Where f(r)=1-2M/r+Q_h^2/r^2-Λ/3r^2. The outer event horizon of black hole r_+ and cosmic horizon r_c satisfy the following equation:
f(r)=1-2M/r+Q_h^2/r^2-Λ/3r^2=0.
Clearly, (<ref>) has four solutions: r_non, r_-, r_+, and r_c. Among these, r_non is a negative solution without physical significance, r_- represents the inner event horizon of the black hole, r_+ denotes the outer event horizon of the black hole, and r_c signifies the cosmic horizon.
The specific forms of these four solutions are:
r_non= -1/2η-1/2ζ,
r_-= -1/2η+1/2ζ,
r_+= 1/2η-1/2ζ,
r_c= 1/2η+1/2ζ.
Where:
λ= [-54+972Λ M^2-648Λ Q_h^2+√((-54+972Λ M^2-648Λ Q_h^2)^2-4(9-36Λ Q_h^2)^3)]^1/3,
η= √(2/Λ+3(2)^1/3(1-4Λ Q_h^2)/Λλ+λ/3(2)^1/3Λ),
ζ= √(4/Λ-3(2)^1/3(1-4Λ Q_h^2)/Λλ-λ/3(2)^1/3Λ-12M/η).
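Since the closed-form expressions above are unwieldy, it can be convenient to cross-check them numerically. The following Python sketch finds the four roots of f(r)=0 by solving the equivalent quartic -(Λ/3)r^4+r^2-2Mr+Q_h^2=0; the parameter values are illustrative only (natural units).

import numpy as np

def rn_de_sitter_horizons(M, Q_h, Lam):
    """Real roots of f(r) = 1 - 2M/r + Q_h^2/r^2 - (Lam/3) r^2, i.e. of
    -(Lam/3) r^4 + r^2 - 2 M r + Q_h^2 = 0, sorted ascending:
    r_non < 0 < r_- < r_+ < r_c (when all horizons exist)."""
    roots = np.roots([-Lam / 3.0, 0.0, 1.0, -2.0 * M, Q_h**2])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

# Illustrative values only (natural units):
M, Q_h, Lam = 1.0, 0.5, 1e-3
r_non, r_minus, r_plus, r_c = rn_de_sitter_horizons(M, Q_h, Lam)
f = lambda r: 1 - 2*M/r + Q_h**2/r**2 - Lam/3.0*r**2
assert all(abs(f(r)) < 1e-6 for r in (r_minus, r_plus, r_c))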
For the process of particles tunneling out of the event horizon of a black hole, the initial position of the particle is:
r_i=1/2η_i-1/2ζ_i.
Where:
λ_i= [-54+972Λ M^2-648Λ Q_h^2+√((-54+972Λ M^2-648Λ Q_h^2)^2-4(9-36Λ Q_h^2)^3)]^1/3,
η_i= √(2/Λ+3(2)^1/3(1-4Λ Q_h^2)/Λλ_i+λ_i/3(2)^1/3Λ),
ζ_i= √(4/Λ-3(2)^1/3(1-4Λ Q_h^2)/Λλ_i-λ_i/3(2)^1/3Λ-12M/Λη_i).
The final position of the particle is:
r_f=1/2η_f-1/2ζ_f.
Where:
λ_f= {-54+972Λ (M-ω)^2-648Λ (Q_h-q_h)^2+
√([-54+972Λ (M-ω)^2-648Λ (Q_h-q_h)^2]^2-4[9-36Λ (Q_h-q_h)^2]^3)}^1/3,
η_f= √(2/Λ+3(2)^1/3[1-4Λ (Q_h-q_h)^2)]/Λλ_f+λ_f/3(2)^1/3Λ),
ζ_f= √(4/Λ-3(2)^1/3[1-4Λ (Q_h-q_h)^2]/Λλ_f-λ_f/3(2)^1/3Λ-12(M-ω)/Λη_f).
After the particle exits, the position of horizons change to:
r_non^'= -1/2η^'-1/2ζ^',
r_-^'= -1/2η^'+1/2ζ^',
r_+^'= 1/2η^'-1/2ζ^',
r_c^'= 1/2η^'+1/2ζ^'.
Where:
λ^'= {-54+972Λ (M-ω^')^2-648Λ (Q_h-q_h^')^2+
√([-54+972Λ (M-ω^')^2-648Λ (Q_h-q_h^')^2]^2-4[9-36Λ (Q_h-q_h^')^2]^3)}^1/3,
η^'= √(2/Λ+3(2)^1/3[1-4Λ (Q_h-q_h^')^2]/Λλ^'+λ^'/3(2)^1/3Λ),
ζ^'= √(4/Λ-3(2)^1/3[1-4Λ (Q_h-q_h^')^2]/Λλ^'-λ^'/3(2)^1/3Λ-12(M-ω^')/Λη^').
For the process of particle tunneling into the cosmic horizon, the initial position of the particle is:
r_i^'=1/2η_i^'+1/2ζ_i^'.
Where:
λ_i^'= [-54+972Λ M^2-648Λ Q_h^2+√((-54+972Λ M^2-648Λ Q_h^2)^2-4(9-36Λ Q_h^2)^3)]^1/3,
η_i^'= √(2/Λ+3(2)^1/3(1-4Λ Q_h^2)/Λλ_i^'+λ_i^'/3(2)^1/3Λ),
ζ_i^'= √(4/Λ-3(2)^1/3(1-4Λ Q_h^2)/Λλ_i^'-λ_i^'/3(2)^1/3Λ-12M/Λη_i^').
The final position of the particle is:
r_f^'=1/2η_f^'+1/2ζ_f^'.
Where:
λ_f^'= {-54+972Λ (M+ω)^2-648Λ (Q_h+q_h)^2+
√([-54+972Λ (M+ω)^2-648Λ (Q_h+q_h)^2]^2-4[9-36Λ (Q_h+q_h)^2]^3)}^1/3,
η_f^'= √(2/Λ+3(2)^1/3[1-4Λ (Q_h+q_h)^2)]/Λλ_f^'+λ_f^'/3(2)^1/3Λ),
ζ_f^'= √(4/Λ-3(2)^1/3[1-4Λ (Q_h+q_h)^2]/Λλ_f^'-λ_f^'/3(2)^1/3Λ-12(M+ω)/Λη_f^').
After the particle is incident, the position of horizons change to:
r_non^''= -1/2η^''-1/2ζ^'',
r_-^''= -1/2η^''+1/2ζ^'',
r_+^''= 1/2η^''-1/2ζ^'',
r_c^''= 1/2η^''+1/2ζ^''.
Where:
λ^''= {-54+972Λ (M+ω^')^2-648Λ (Q_h+q_h^')^2+
√([-54+972Λ (M+ω^')^2-648Λ (Q_h+q_h^')^2]^2-4[9-36Λ (Q_h+q_h^')^2]^3)}^1/3,
η^''= √(2/Λ+3(2)^1/3[1-4Λ (Q_h+q_h^')^2]/Λλ^''+λ^''/3(2)^1/3Λ),
ζ^''= √(4/Λ-3(2)^1/3[1-4Λ (Q_h+q_h^')^2]/Λλ^''-λ^''/3(2)^1/3Λ-12(M+ω^')/Λη^'').
§.§ Painlevé coordinate and time-like geodesic line equation
To describe the tunneling process of particles, Painlevé coordinates without event horizon singularity should be used. In order to obtain the Painlevé-Reissner-Nordström de Sitter coordinate, we performed the following coordinate transformation:
t_s=t+F(r), dt_s=dt+F^'(r)dr.
Where t_s is the time coordinate before transformation. In order for the constant time slice of Painlevé line elements to be a flat Euclidean spacetime in the radial direction, F(r) must satisfy:
1/1-2M/r+Q_h^2/r^2-Λ/3r^2-(1-2M/r+Q_h^2/r^2-Λ/3r^2)[F^'(r)]^2=1.
By substituting (<ref>) and (<ref>) into (<ref>), we obtain the Painlevé coordinate:
ds^2= -(1-2M/r+Q_h^2/r^2-Λ/3r^2)dt^2± 2√(2M/r-Q_h^2/r^2+Λ/3r^2)dtdr+dr^2+r^2dΩ^2
= g_00dt^2+2g_01dtdr+dr^2+r^2dΩ^2.
Obviously, in (<ref>), the coordinate singularity at the event horizon is eliminated. The selection of positive and negative signs in (<ref>) will have a clearer physical meaning in the equation of time-like geodesics.
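The condition (<ref>) fixes F'(r) up to a sign, F'(r)=±√(1-f)/f. A short sympy sketch can verify this choice and the cross term appearing in (<ref>):

import sympy as sp

r, M, Q, Lam = sp.symbols('r M Q Lambda', positive=True)
f = 1 - 2*M/r + Q**2/r**2 - Lam*r**2/3

# Flat radial slice condition: 1/f - f*F'(r)^2 = 1  =>  F'(r) = ±sqrt(1 - f)/f
Fp = sp.sqrt(1 - f)/f
print(sp.simplify(1/f - f*Fp**2))                               # -> 1
# The resulting cross term is ±2 sqrt(1 - f) dt dr, since
print(sp.simplify((1 - f) - (2*M/r - Q**2/r**2 + Lam*r**2/3)))  # -> 0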
In this article, we discuss particles that carry both electric and magnetic charges. They are therefore massive particles and do not follow the optical geodesic equation. We consider massive particles as de Broglie s-waves, employing an approach analogous to that described in ref. <cit.>. The time-like geodesic equation, derived from the WKB approximation, is presented below:
ṙ=v_p=1/2v_g=-1/2g_00/g_01.
Where v_p is the phase velocity and v_g is the group velocity, respectively represented by the following equations:
v_p=dr/dt=ω/k.
v_g=dr_c/dt=dω/dk.
Where k is de Broglie wave number. By substituting g_00 and g_01 in (<ref>) into (<ref>), we can obtain:
ṙ=dr/dt=±1/2r√(Λ/3)(r-r_non)(r-r_-)(r-r_+)(r-r_c)/√(r^4+6M/Λr-3/ΛQ_h^2).
It can be clearly seen that when taking a positive sign, (<ref>) is the equation of motion for the outgoing particles near the outer event horizon of the black hole, while when taking a negative sign, (<ref>) is the equation of motion for the incoming particles near the cosmic horizon. Furthermore, if considering the influence of self gravity, M in (<ref>) and (<ref>) should be replaced by M∓ω, and Q_h in (<ref>) and (<ref>) should be replaced by Q_h∓ q_h. ω is the energy of the particle, q_h is the equivalent charge of the particle. The minus sign and plus sign represent the outgoing particles and incoming particles, respectively.
§.§ Tunneling out of the black hole outer event horizon
Let us first calculate the particle emission rate near the event horizon of the black hole. We are considering the tunneling of particles that carry both electric and magnetic charge, so we must take into account the influence of the electromagnetic field. The Lagrangian density for the coupling of the matter field and the electromagnetic field is:
ℒ=ℒ_m+ℒ_h=ℒ_m-1/4F̃_μνF̃^μν.
The generalized coordinates corresponding to ℒ_h are Ã_μ=(Ã_0,0,0,0). Obviously, Ã_0 is a cyclic coordinate. To eliminate the degrees of freedom corresponding to Ã_0, the action can be written as:
S=∫_t_i^t_f(L-P_Ã_0Ã̇_0)dt.
Since the constant time slice of the Painlevé line element is flat Euclidean space in the radial direction and the Reissner-Nordström black hole is a steady-state black hole, the WKB approximation can be applied. The particle emission rate and the imaginary part of the action satisfy the following relationship:
Γ∼ e^-2ImS.
According to (<ref>), the imaginary part of the action can be written as:
ImS=Im{∫_r_i^r_f[P_r-P_Ã_0Ã̇_0/ṙ]dr}=Im{∫_r_i^r_f[∫_(0,0)^(P_r,P_Ã_0)dP^'_r-Ã̇_0/ṙdP^'_Ã_0]dr}.
Where P_r is the generalized momentum conjugate to r, and P_Ã_0 is the generalized momentum conjugate to Ã_0. r_i is the initial position of the particle radiation process, slightly within the event horizon of the black hole. r_f is the final position of the particle radiation process, slightly outside the event horizon of the black hole.
To proceed with our calculations, we substitute the Hamilton's equation into (<ref>):
ṙ=dH/dP_r|_(r;Ã_0,P_Ã_0).
Ã̇_0=dH/dP_Ã_0|_(Ã_0;r,P_r).
By substituting (<ref>) and (<ref>) into (<ref>), we can obtain:
ImS=Im{∫_r_i^r_f[∫_(M,E_Q_h)^(M-ω,E_Q_h-q_h)1/ṙ(dH)_r;Ã_0,P_Ã_0-1/ṙ(dH)_Ã_0;r,P_r]dr}.
Where E_Q_h is the energy corresponding to the electromagnetic field. M and E_Q_h are fixed quantities. M-ω and E_Q_h-q_h are the residual gravitational and electromagnetic energies of the black hole after radiation. It can be clearly seen that energy conservation, charge conservation, and magnetic charge conservation are all reflected in our formula. It is worth emphasizing that, in order to simplify the equivalent charge of the black hole after radiation to Q_h-q_h, we adopt the following assumption:
q_e/q_g=Q_e/Q_g.
Where q_e and q_g are the charges and magnetic charges of the emitted particles, respectively. The loss of gravitational energy and electromagnetic energy caused by particle tunneling in a black hole is represented by the following equations:
(dH)_r;Ã_0,P_Ã_0=d(M-ω^')=-dω^'.
(dH)_Ã_0;r,P_r=-Q_h-q_h^'/rdq^'_h.
By substituting (<ref>) and (<ref>) into (<ref>), we can obtain:
ImS=-Im{∫_r_i^r_f[∫_(0,0)^(ω,q_h)1/ṙ(dω^'-Q_h-q_h^'/rdq^'_h)]dr}.
For the process of particles tunneling out of the event horizon of a black hole, ṙ takes a positive sign, and accounting for the self-gravitational force, we make the following substitution: M→ M-ω^', Q_h→ Q_h-q^'_h, r_non→ r_non^', r_-→ r_-^', r_+→ r_+^', r_c→ r_c^', that is:
ṙ=1/2r√(Λ/3)(r-r_non^')(r-r_-^')(r-r_+^')(r-r_c^')/√(r^4+6(M-ω^')/Λr-3/Λ(Q_h-q_h^')^2).
Substituting (<ref>) into (<ref>), we have:
ImS=-Im{∫_r_i^r_f[∫_(0,0)^(ω,q_h)2r√(r^4+6(M-ω^')/Λr-3/Λ(Q_h-q_h^')^2)/√(Λ/3)(r-r_non^')(r-r_-^')(r-r_+^')(r-r_c^')(dω^'-Q_h-q_h^'/rdq^'_h)]dr}.
To proceed with the calculation, switch the order of integration and integrate r first. It can be clearly observed that r=r_+^' is a pole. By selecting a new integral path to bypass the poles and applying the residue theorem, we can obtain:
ImS=-6π/Λ∫_(0,0)^(ω,q_h)r_+^' 2/(r_+^'-r_non^')(r_+^'-r_-^')(r_+^'-r_c^')(dω^'-Q_h-q_h^'/r_+^'dq^'_h).
According to the relationship between r_non^', r_-^', r_+^', r_c^' and ω^', q_h^', we can obtain:
6/Λ[r_+^'dω^'-(Q_h-q_h^')dq^'_h]=(r_+^'-r_non^')(r_+^'-r_-^')(r_+^'-r_c^')dr_+^'.
Continuing the integration operation, through simple calculations, we can obtain the final result as:
ImS=-π∫_r_i^r_fr_+^'dr_+^'=-π/2(r_f^2-r_i^2)=-1/2Δ S_BH.
Where Δ S_BH=S_BH(M-ω,Q_e-q_e,Q_g-q_g)-S_BH(M,Q_e,Q_g) is the change in entropy before and after the black hole radiates particles. So the emission rate of particles can be expressed by the following equation:
Γ∼ e^-2ImS=e^Δ S_BH.
This result conforms to the unitary principle and supports the conservation of information.
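For a numerical illustration (natural units, illustrative values), the emission rate can be evaluated directly from the shift of the outer horizon, reusing the root finder sketched earlier:

import numpy as np

def outer_horizon(M, Q_h, Lam):
    roots = np.roots([-Lam/3.0, 0.0, 1.0, -2.0*M, Q_h**2])
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    return real[-2]                    # r_+ (the largest real root is r_c)

def emission_rate(M, Q_h, Lam, omega, q_h):
    """Gamma ~ exp(Delta S_BH) with Delta S_BH = pi*(r_+'^2 - r_+^2) < 0."""
    r_i = outer_horizon(M, Q_h, Lam)
    r_f = outer_horizon(M - omega, Q_h - q_h, Lam)
    dS = np.pi * (r_f**2 - r_i**2)     # entropy change of the black hole
    return np.exp(dS), dS

Gamma, dS = emission_rate(M=1.0, Q_h=0.5, Lam=1e-3, omega=0.01, q_h=0.005)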
§.§ Tunneling into the cosmic horizon
Next, we will discuss the particle tunneling process near the de Sitter cosmic horizon. Obviously, for the process of particles tunneling from outside the cosmic horizon to inside the cosmic horizon, the final position is smaller than the initial position, r_f^'<r_i^'. The total energy, total charge, and total magnetic charge of space-time will all increase, M→ M+ω^', Q_e→ Q_e+q^'_e, Q_g→ Q_g+q^'_g, Q_h→ Q_h+q^'_h. And the position of the horizon will also change, r_non→ r_non^'', r_-→ r_-^'', r_+→ r_+^'', r_c→ r_c^''. What we are considering now is tunneling through the cosmic horizon, so ṙ should take the negative sign. Considering the influence of self gravity, the geodesic equation becomes:
ṙ=-1/2r√(Λ/3)(r-r_non^'')(r-r_-^'')(r-r_+^'')(r-r_c^'')/√(r^4+6(M+ω^')/Λr-3/Λ(Q_h+q_h^')^2).
Similarly, we consider the incident particle as a de Broglie s-wave and subtract the degrees of freedom of the cyclic coordinate Ã_0 to obtain the imaginary part of the action as:
ImS=Im{∫_r_i^'^r_f^'[P_r-P_Ã_0Ã̇_0/ṙ]dr}=Im{∫_r_i^'^r_f^'[∫_(0,0)^(P_r,P_Ã_0)dP^'_r-Ã̇_0/ṙdP^'_Ã_t]dr}.
r_i^' is the initial position of particle radiation, slightly outside the cosmic horizon, and r_f^' is the final position of particle radiation, slightly within the cosmic horizon.
The Hamilton's equation is:
ṙ=dH/dP_r|_(r;Ã_0,P_Ã_0)=d(M+ω^')/dP_r=dω^'/dP_r.
Ã̇_0=dH/dP_Ã_0|_(Ã_0;r,P_r)=Q_h+q_h^'/rdq^'_h/dP_Ã_0.
(<ref>) and (<ref>) respectively represent the changes in gravitational energy and electromagnetic energy after particles enter the cosmic horizon, from which it can be clearly seen that energy conservation, charge conservation, and magnetic charge conservation occur. Substituting (<ref>), (<ref>) and (<ref>) into (<ref>) yields:
ImS=-Im{∫_r_i^'^r_f^'[∫_(0,0)^(ω,q_h)2r√(r^4+6(M+ω^')/Λr-3/Λ(Q_h+q_h^')^2)/√(Λ/3)(r-r_non^'')(r-r_-^'')(r-r_+^'')(r-r_c^'')(dω^'-Q_h+q_h^'/rdq^'_h)]dr}.
Obviously, there is a pole at the cosmic horizon r=r_c^''. In order to proceed with the calculation, we exchange the order of integration, first integrate over r, and apply the residue theorem, which yields:
ImS=-6π/Λ∫_(0,0)^(ω,q_h)r_c^'' 2/(r_c^''-r_non^'')(r_c^''-r_-^'')(r_c^''-r_+^'')(dω^'-Q_h+q_h^'/r_c^''dq^'_h).
According to the relationship between r_non^'', r_-^'', r_+^'', r_c^'' and ω^', q_h^', we can obtain:
6/Λ[r_c^''dω^'-(Q_h+q_h^')dq^'_h]=(r_c^''-r_non^'')(r_c^''-r_-^'')(r_c^''-r_+^'')dr_c^''.
By completing the integration operation through simple calculations, the final result obtained is:
ImS=-π∫_r_i^'^r_f^'r_c^''dr_c^''=-π/2(r_f^' 2-r_i^' 2)=-1/2Δ S_CH.
Where Δ S_CH=S_CH(M+ω,Q_e+q_e,Q_g+q_g)-S_CH(M,Q_e,Q_g) is the change in entropy of particles before and after passing through the cosmic horizon. The emission rate of particles can be expressed by the following equation:
Γ∼ e^-2ImS=e^Δ S_CH.
Obviously, such a result follows the unitary principle and supports the conservation of information.
§.§ Discussion
The above results can be easily obtained from the first law of black hole thermodynamics, and we can obtain the temperature at the surface of the black hole event horizon and the cosmic horizon from it. For the case of magnetic charge, the first law of black hole thermodynamics should be written as:
dS^'=1/T^'dM^'-V_e^'/T^'dQ_e^'-V_g^'/T^'dQ_g^'=1/T^'(dM^'-V_h^'dQ_h^').
Among them, for the process of particles passing through the event horizon of a black hole, dM^'=-dω^', dQ_h^'=-dq_h^', V_h^'=Q_h-q_h^'/r_+^'. For the process of particles crossing the cosmic horizon, dM^'=dω^', dQ_h^'=dq_h^', V_h^'=Q_h+q_h^'/r_c^''. So:
dS_BH^'=-1/T^'(dω^'-Q_h-q_h^'/r_+^'dq_h^').
dS_CH^'=1/T^''(dω^'-Q_h+q_h^'/r_c^''dq_h^').
Comparing (<ref>) and (<ref>) with (<ref>) and (<ref>) respectively, the temperature T^' at the event horizon surface of the black hole and the temperature T^'' at the cosmic horizon surface are obtained:
β^'=1/T^'=-12π/Λr_+^' 2/(r_+^'-r_non^')(r_+^'-r_-^')(r_+^'-r_c^')=lim_r→ r_+^'4π/|f^'(r)|.
β^''=1/T^''=12π/Λr_c^'' 2/(r_c^''-r_non^'')(r_c^''-r_-^'')(r_c^''-r_+^'')=lim_r→ r_c^''4π/|f^'(r)|.
Where f(r)=-Λ/3r^2(r-r_non)(r-r_-)(r-r_+)(r-r_c). This is consistent with the conclusion drawn from general black hole thermodynamics.
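As a quick numerical cross-check of (<ref>) and (<ref>), the two temperatures can be evaluated as T=|f'(r)|/4π at the corresponding horizon radii, for example:

import numpy as np

def horizon_temperatures(M, Q_h, Lam):
    """T = |f'(r)|/(4*pi) evaluated at r_+ and r_c for the RN-de Sitter metric."""
    roots = np.roots([-Lam/3.0, 0.0, 1.0, -2.0*M, Q_h**2])
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    r_plus, r_c = real[-2], real[-1]
    fprime = lambda r: 2*M/r**2 - 2*Q_h**2/r**3 - 2*Lam*r/3.0
    return abs(fprime(r_plus)) / (4*np.pi), abs(fprime(r_c)) / (4*np.pi)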
§ KERR-NEWMAN-KASUYA DE SITTER BLACK HOLE
The line element of the Kerr-Newman-Kasuya de Sitter black hole is:
ds^2=g_00dt_k^2+g_11dr^2+g_22dθ^2+g_33dϕ^2+2g_03dt_kdϕ.
Where:
g_00= -Δ_r-a^2Δ_θsin^2θ/Ξ^2Σ,
g_11= Σ/Δ_r,
g_22= Σ/Δ_θ,
g_33= -h^2Δ_r-ξ^2Δ_θsin^2θ/Ξ^2Σ,
g_03= hΔ_r-aξΔ_θsin^2θ/Ξ^2Σ.
Where Ξ=1+Λ/3a^2, h=asin^2θ, Σ=r^2+a^2cos^2θ, ξ=a^2+r^2, Δ_θ=1+Λ/3a^2cos^2θ, Δ_r=ξ(1-Λ/3r^2)-2Mr+Q_h^2, Q_h^2=Q_e^2+Q_g^2. M is the mass of the black hole. Q_e and Q_g are the charges and magnetic charges of the black hole, respectively. a=J/M is the angular momentum per unit mass of a black hole.
The positions of the horizons are given by the equation Δ_r=0, that is:
Δ_r=-Λ/3r^4+(1-Λ/3a^2)r^2-2Mr+Q_h^2+a^2=0.
Let A=3/Λ(a^2+Q_h^2), B=3/Λ-a^2. The equation (<ref>) has four solutions, r_non, r_-, r_+, r_c. Among them, r_non is a negative value solution with no physical meaning, r_- is the inner event horizon of black hole, r_+ is the outer event horizon of black hole, and r_c is the cosmic horizon. The specific forms of these four solutions are:
r_non= -1/2η̃-1/2ζ̃,
r_-= -1/2η̃+1/2ζ̃,
r_+= 1/2η̃-1/2ζ̃,
r_c= 1/2η̃+1/2ζ̃.
Where:
λ̃= [972Λ M^2-72Λ^3AB-2Λ^3B^3+
√((72Λ^3AB+2Λ^3B^3-972Λ M^2)^2-4(Λ^2B^2-12Λ^2A)^3)]^1/3,
η̃= √(2B/3+(2)^1/3(Λ^2B^2-12Λ^2A)/3Λλ̃+λ̃/3(2)^1/3Λ),
ζ̃= √(4B/3-(2)^1/3(Λ^2B^2-12Λ^2A)/3Λλ̃-λ̃/3(2)^1/3Λ-12M/Λη̃).
For the process of particles tunneling out of the event horizon of a black hole, the initial position of the particle is:
r_i=1/2η̃_i-1/2ζ̃_i.
Where:
λ̃_i= [972Λ M^2-72Λ^3AB-2Λ^3B^3+
√((72Λ^3AB+2Λ^3B^3-972Λ M^2)^2-4(Λ^2B^2-12Λ^2A)^3)]^1/3,
η̃_i= √(2B/3+(2)^1/3(Λ^2B^2-12Λ^2A)/3Λλ̃_i+λ̃_i/3(2)^1/3Λ),
ζ̃_i= √(4B/3-(2)^1/3(Λ^2B^2-12Λ^2A)/3Λλ̃_i-λ̃_i/3(2)^1/3Λ-12M/Λη̃_i).
The final position of the particle is:
r_f=1/2η̃_f-1/2ζ̃_f.
Where:
λ̃_f= {972Λ (M-ω)^2-72Λ^3A_fB_f-2Λ^3B_f^3+
√([72Λ^3A_fB_f+2Λ^3B_f^3-972Λ (M-ω)^2]^2-4(Λ^2B_f^2-12Λ^2A_f)^3)}^1/3,
η̃_f= √(2B_f/3+(2)^1/3(Λ^2B_f^2-12Λ^2A_f)/3Λλ̃_f+λ̃_f/3(2)^1/3Λ),
ζ̃_f= √(4B_f/3-(2)^1/3(Λ^2B_f^2-12Λ^2A_f)/3Λλ̃_f-λ̃_f/3(2)^1/3Λ-12(M-ω)/Λη̃_f).
Where A_f=3/Λ[a_f^2+(Q_h-q_h)^2], B_f=3/Λ-a_f^2, a_f=J-j/M-ω. After the particle exits, the position of horizons change to:
r_non^'= -1/2η̃^'-1/2ζ̃^',
r_-^'= -1/2η̃^'+1/2ζ̃^',
r_+^'= 1/2η̃^'-1/2ζ̃^',
r_c^'= 1/2η̃^'+1/2ζ̃^'.
Where:
λ̃^'= {972Λ (M-ω^')^2-72Λ^3A^' B^'-2Λ^3B^' 3+
√([72Λ^3A^' B^'+2Λ^3B^' 3-972Λ (M-ω^')^2]^2-4(Λ^2B^' 2-12Λ^2A^')^3)}^1/3,
η̃^'= √(2B^'/3+(2)^1/3(Λ^2B^' 2-12Λ^2A^')/3Λλ̃^'+λ̃^'/3(2)^1/3Λ),
ζ̃^'= √(4B^'/3-(2)^1/3(Λ^2B^' 2-12Λ^2A^')/3Λλ̃^'-λ̃^'/3(2)^1/3Λ-12(M-ω^')/Λη̃^').
Where A^'=3/Λ[a^' 2+(Q_h-q_h^')^2], B^'=3/Λ-a^' 2, a^'=J-j^'/M-ω^'.
For the process of particle tunneling into the cosmic horizon, the initial position of the particle is:
r_i^'=1/2η̃_i^'+1/2ζ̃_i^'.
Where:
λ̃_i^'= [972Λ M^2-72Λ^3AB-2Λ^3B^3+
√((72Λ^3AB+2Λ^3B^3-972Λ M^2)^2-4(Λ^2B^2-12Λ^2A)^3)]^1/3,
η̃_i^'= √(2B/3+(2)^1/3(Λ^2B^2-12Λ^2A)/3Λλ̃_i^'+λ̃_i^'/3(2)^1/3Λ),
ζ̃_i^'= √(4B/3-(2)^1/3(Λ^2B^2-12Λ^2A)/3Λλ̃_i^'-λ̃_i^'/3(2)^1/3Λ-12M/Λη̃_i^').
The final position of the particle is:
r_f^'=1/2η̃_f^'+1/2ζ̃_f^'.
Where:
λ̃_f^'= {972Λ (M+ω)^2-72Λ^3A_f^' B_f^'-2Λ^3B_f^' 3+
√([72Λ^3A_f^' B_f^'+2Λ^3B_f^' 3-972Λ (M+ω)^2]^2-4(Λ^2B_f^' 2-12Λ^2A_f^')^3)}^1/3,
η̃_f^'= √(2B_f^'/3+(2)^1/3(Λ^2B_f^' 2-12Λ^2A_f^')/3Λλ̃_f^'+λ̃_f^'/3(2)^1/3Λ),
ζ̃_f^'= √(4B_f^'/3-(2)^1/3(Λ^2B_f^' 2-12Λ^2A_f^')/3Λλ̃_f^'-λ̃_f^'/3(2)^1/3Λ-12(M+ω)/Λη̃_f^').
Where A_f^'=3/Λ[a_f^' 2+(Q_h+q_h)^2], B_f^'=3/Λ-a_f^' 2, a_f^'=J+j/M+ω. After the particle enters, the positions of the horizons change to:
r_non^''= -1/2η̃^''-1/2ζ̃^'',
r_-^''= -1/2η̃^''+1/2ζ̃^'',
r_+^''= 1/2η̃^''-1/2ζ̃^'',
r_c^''= 1/2η̃^''+1/2ζ̃^''.
Where:
λ̃^''= {972Λ (M+ω^')^2-72Λ^3A^'' B^''-2Λ^3B^'' 3+
√([72Λ^3A^'' B^''+2Λ^3B^'' 3-972Λ (M+ω^')^2]^2-4(Λ^2B^'' 2-12Λ^2A^'')^3)}^1/3,
η̃^''= √(2B^''/3+(2)^1/3(Λ^2B^'' 2-12Λ^2A^'')/3Λλ̃^''+λ̃^''/3(2)^1/3Λ),
ζ̃^''= √(4B^''/3-(2)^1/3(Λ^2B^'' 2-12Λ^2A^'')/3Λλ̃^''-λ̃^''/3(2)^1/3Λ-12(M+ω^')/Λη̃^'').
Where A^''=3/Λ[a^'' 2+(Q_h+q_h^')^2], B^''=3/Λ-a^'' 2, a^''=J+j^'/M+ω^'.
§.§ Painlevé coordinate and time-like geodesic line equation
To eliminate the coordinate singularity at the horizon, we first introduce a dragged coordinate system for the Kerr-Newman-Kasuya de Sitter spacetime. Let:
Ω=dϕ/dt_k=-g_03/g_33=hΔ_r-aξΔ_θsin^2θ/h^2Δ_r-ξ^2Δ_θsin^2θ.
ĝ_00=g_00-g_03^2/g_33=ΣΔ_θsin^2θΔ_r/Ξ^2(h^2Δ_r-ξ^2Δ_θsin^2θ).
The line element of the Kerr-Newman-Kasuya de Sitter black hole can then be rewritten as:
ds^2=ĝ_00dt_k^2+g_11dr^2+g_22dθ^2=ΣΔ_θsin^2θΔ_r/Ξ^2(h^2Δ_r-ξ^2Δ_θsin^2θ)dt_k^2+Σ/Δ_rdr^2+Σ/Δ_θdθ^2.
Because the constant time slice of the dragged coordinate system is not a flat Euclidean space in the radial direction, it is not yet the coordinate system we want. We adopt a method similar to ref. <cit.> and perform a further coordinate transformation:
dt_k=dt+F(r,θ)dr+G(r,θ)dθ.
t=t_k-∫[F(r,θ)dr+G(r,θ)dθ].
F(r,θ) and G(r,θ) satisfy the following relationship:
∂ F(r,θ)/∂θ=∂ G(r,θ)/∂ r.
In order to make the constant time slice of the new coordinates a flat Euclidean space in the radial direction, we require F(r,θ) to satisfy the following relationship:
g_11+ĝ_00F^2(r,θ)=1.
So we obtain the Painlevé-Kerr-Newman-Kasuya de Sitter coordinates:
ds^2= ĝ_00dt^2± 2√(ĝ_00(1-g_11))dtdr+dr^2+[ĝ_00G^2(r,θ)+g_22]dθ^2
+2ĝ_00G(r,θ)dtdθ+2√(ĝ_00(1-g_11))G(r,θ)drdθ.
Similar to Section <ref>, we treat particles as de Broglie s-waves and derive the time-like geodesic equation for particle motion using the WKB approximation:
ṙ=-1/2ĝ_00/ĝ_01=±ΣΔ_r/2Ξ√(Δ_θsin^2θ/(Δ_r-Σ)Σ(h^2Δ_r-ξ^2Δ_θsin^2θ)).
Where ĝ_01=√(ĝ_00(1-g_11)). When a particle exits the event horizon of a black hole, (<ref>) takes a positive sign. When a particle is incident from the cosmic horizon, (<ref>) takes a negative sign. When self-gravity is taken into account, the following replacements are made in (<ref>): M→ M∓ω, Q_h→ Q_h∓ q_h, a=J/M→J∓ j/M∓ω=ã. The minus sign and plus sign correspond to outgoing and incoming particles, respectively.
§.§ Tunneling out of the black hole outer event horizon
Similar to Section <ref>, it can be inferred from the Lagrangian density of matter-electromagnetic field that:
ℒ=ℒ_m+ℒ_h=ℒ_m-1/4F̃_μνF̃^μν.
Ã_0 is a cyclic coordinate. For the rotating charged case, Ã_0=-Q_hr/Σ(1-aΩsin^2θ). The electromagnetic potential at the black hole event horizon is Ã_0|_r_+=-Q_hr_+/r_+^2+a^2, and at the cosmic horizon it is Ã_0|_r_c=-Q_hr_c/r_c^2+a^2. When the line element of a rotating black hole is written in the dragged coordinate system, it does not contain ϕ, so ϕ is also a cyclic coordinate. To eliminate the degrees of freedom corresponding to ϕ and Ã_0, the action can be written as:
S=∫^t_f_t_i(L-p_Ã_0Ã̇_0-p_ϕϕ̇)dt.
For the same reason as in Section <ref>, the WKB approximation can be applied. The emission rate of particles satisfies the same relation as (<ref>):
Γ∼ e^-2ImS.
We obtain the imaginary part of the action as:
ImS= Im{∫_r_i^r_f[P_r-P_Ã_0Ã̇_0/ṙ-p_ϕϕ̇/ṙ]dr}
= Im{∫_r_i^r_f[∫_(0,0)^(P_r,P_Ã_0)dP^'_r-Ã̇_0/ṙdP^'_Ã_0-ϕ̇/ṙdP^'_ϕ]dr}.
To proceed with the calculation, we write down Hamilton's equations:
ṙ=dH/dP_r|_(r;Ã_0,P_Ã_0;ϕ,P_ϕ)=d(M-ω^')/dP_r=-dω^'/dP_r.
Ã̇_0=dH/dP_Ã_0|_(Ã_0;r,P_r;ϕ,P_ϕ)=-(Q_h-q_h^')r/r^2+a^' 2dq_h^'/dP_Ã_0.
Additionally, we have the following relationship:
ϕ̇=dϕ/dt=Ω^'.
p_ϕ^'=J^'=J-j^'=(M-ω^')a^'.
Equations (<ref>), (<ref>) and (<ref>) express conservation of energy, angular momentum, electric charge and magnetic charge. Substituting (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), we obtain:
ImS=-Im{∫_r_i^r_f[∫_(0,0,0)^(ω,q_h,j)1/ṙ(dω^'-(Q_h-q_h^')r/r^2+a^' 2dq^'_h-Ω^'dj^')]dr}.
When particles tunnel out of the event horizon of a black hole, ṙ takes a positive sign. Taking the self-gravity effect into account, we make the following substitutions: M→ M-ω^', Q_h→ Q_h-q_h^', a=J/M→J-j^'/M-ω^'=a^', Ξ=1+Λ/3a^2→ 1+Λ/3a^' 2=Ξ^', h=asin^2θ→ a^'sin^2θ=h^', Σ=r^2+a^2cos^2θ→ r^2+a^' 2cos^2θ=Σ^', ξ=a^2+r^2→ a^' 2+r^2=ξ^', Δ_θ=1+Λ/3a^2cos^2θ→ 1+Λ/3a^' 2cos^2θ=Δ_θ^', Δ_r=ξ(1-Λ/3r^2)-2Mr+Q_h^2→ξ^'(1-Λ/3r^2)-2(M-ω^')r+(Q_h-q_h^')^2=Δ_r^', r_non→ r_non^', r_-→ r_-^', r_+→ r_+^', r_c→ r_c^', that is:
ṙ=Σ^'Δ_r^'/2Ξ^'√(Δ_θ^'sin^2θ/(Δ_r^'-Σ^')Σ^'(h^' 2Δ_r^'-ξ^' 2Δ_θ^'sin^2θ)).
Where Δ_r=-Λ/3(r-r_non)(r-r_-)(r-r_+)(r-r_c). Substitute (<ref>) into (<ref>), we can obtain:
ImS= -Im{∫_r_i^r_f[∫_(0,0,0)^(ω,q_h,j)6Ξ^'/ΛΣ^'1/(r-r_non^')(r-r_-^')(r-r_+^')(r-r_c^')×
√((Δ_r^'-Σ^')Σ^'(h^' 2Δ_r^'-ξ^' 2Δ_θ^'sin^2θ)/Δ_θ^'sin^2θ)(dω^'-(Q_h-q_h^')r/r^2+a^' 2dq^'_h-Ω^'dj^')]dr}.
Switch the order of integration, integrating over r first. It is obvious that r=r_+^' is a pole. Deforming the integration contour and applying the residue theorem, we obtain:
ImS=-2π∫_(0,0,0)^(ω,q_h,j)(Λ/3+a^' 2)(a^' 2+r_+^' 2)/(r_+^'-r_non^')(r_+^'-r_-^')(r_+^'-r_c^')(dω^'-(Q_h-q_h^')r_+^'/r_+^' 2+a^' 2dq^'_h-Ω^'dj^').
Where Ω^'|_r_+^'=a^'/r_+^' 2+a^' 2. Based on the relationship between r_non^', r_-^', r_+^', r_c^' and ω^', q_h^', j^', we can get:
(6/Λ+2a^' 2)[(r_+^' 2+a^' 2)dω^'-(Q_h-q_h^')r_+^'dq^'_h-a^'dj^']=(r_+^'-r_non^')(r_+^'-r_-^')(r_+^'-r_c^')dr_+^'.
By substituting (<ref>) into (<ref>), we get our final result:
ImS=-π∫_r_i^r_fr_+^'dr_+^'=-π/2(r_f^2-r_i^2)=-1/2Δ S_BH.
Where Δ S_BH=S_BH(M-ω,Q_e-q_e,Q_g-q_g,J-j)-S_BH(M,Q_e,Q_g,J) is the change in entropy before and after the black hole radiates particles. So the emission rate of particles can be expressed by the following equation:
Γ∼ e^-2ImS=e^Δ S_BH.
(<ref>) indicates that our results conform to the unitary principle and support the conservation of information.
§.§ Tunneling into the cosmic horizon
The process of particles tunneling into the cosmic horizon of the Kerr-Newman-Kasuya de Sitter space-time increases the total energy, total charge, total magnetic charge, and total angular momentum of the space-time, M→ M+ω^', Q_h→ Q_h+q_h^', a=J/M→J+j^'/M+ω^'=a^''. Other quantities that depend on M, Q_h, J also change, Ξ=1+Λ/3a^2→ 1+Λ/3a^'' 2=Ξ^'', h=asin^2θ→ a^''sin^2θ=h^'', Σ=r^2+a^2cos^2θ→ r^2+a^'' 2cos^2θ=Σ^'', ξ=a^2+r^2→ a^'' 2+r^2=ξ^'', Δ_θ=1+Λ/3a^2cos^2θ→ 1+Λ/3a^'' 2cos^2θ=Δ_θ^'', Δ_r=ξ(1-Λ/3r^2)-2Mr+Q_h^2→ξ^''(1-Λ/3r^2)-2(M+ω^')r+(Q_h+q_h^')^2=Δ_r^'', r_non→ r_non^'', r_-→ r_-^'', r_+→ r_+^'', r_c→ r_c^''. For particles tunneling into the cosmic horizon, ṙ takes a negative sign. Taking into account the effect of self-gravitation, the geodesic equation becomes:
ṙ=-Σ^''Δ_r^''/2Ξ^''√(Δ_θ^''sin^2θ/(Δ_r^''-Σ^'')Σ^''(h^'' 2Δ_r^''-ξ^'' 2Δ_θ^''sin^2θ)).
We treat the incident particle as a de Broglie s-wave and subtract the degrees of freedom of the cyclic coordinates Ã_0 and ϕ to get the imaginary part of the action:
ImS= Im{∫_r_i^'^r_f^'[P_r-P_Ã_0Ã̇_0/ṙ-p_ϕϕ̇/ṙ]dr}
= Im{∫_r_i^'^r_f^'[∫_(0,0)^(P_r,P_Ã_0)dP^'_r-Ã̇_0/ṙdP^'_Ã_0-ϕ̇/ṙdP^'_ϕ]dr}.
Hamilton's equations are:
ṙ=dH/dP_r|_(r;Ã_0,P_Ã_0;ϕ,P_ϕ)=d(M+ω^')/dP_r=dω^'/dP_r.
Ã̇_0=dH/dP_Ã_0|_(Ã_0;r,P_r;ϕ,P_ϕ)=(Q_h+q_h^')r/r^2+a^'' 2dq_h^'/dP_Ã_0.
In addition, we have the following relationship:
ϕ̇=dϕ/dt=Ω^'.
p_ϕ^'=J^'=J+j^'=(M+ω^')a^''.
Equations (<ref>), (<ref>) and (<ref>) express conservation of energy, angular momentum, electric charge and magnetic charge. Substituting (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), we obtain:
ImS= -Im{∫_r_i^'^r_f^'[∫_(0,0,0)^(ω,q_h,j)6Ξ^''/ΛΣ^''1/(r-r_non^'')(r-r_-^'')(r-r_+^'')(r-r_c^'')×
√((Δ_r^''-Σ^'')Σ^''(h^'' 2Δ_r^''-ξ^'' 2Δ_θ^''sin^2θ)/Δ_θ^''sin^2θ)(dω^'-(Q_h+q_h^')r/r^2+a^'' 2dq^'_h-Ω^'dj^')]dr}.
Switch the order of integration, integrating over r first. Clearly, r=r_c^'' is a pole. Deforming the integration contour and applying the residue theorem, we obtain:
ImS=-2π∫_(0,0,0)^(ω,q_h,j)(Λ/3+a^'' 2)(a^'' 2+r_c^'' 2)/(r_c^''-r_non^'')(r_c^''-r_-^'')(r_c^''-r_+^'')(dω^'-(Q_h+q_h^')r_c^''/r_c^'' 2+a^'' 2dq^'_h-Ω^'dj^').
Where Ω^'|_r_c^''=a^''/r_c^'' 2+a^'' 2. Based on the relationship between r_non^'', r_-^'', r_+^'', r_c^'' and ω^', q_h^', j^', we can get:
(6/Λ+2a^'' 2)[(r_c^'' 2+a^'' 2)dω^'-(Q_h+q_h^')r_c^''dq^'_h-a^''dj^']=(r_c^''-r_non^'')(r_c^''-r_-^'')(r_c^''-r_+^'')dr_c^''.
By substituting (<ref>) into (<ref>), we get our final result:
ImS=-π∫_r_i^'^r_f^'r_c^''dr_c^''=-π/2(r_f^' 2-r_i^' 2)=-1/2Δ S_CH.
Where Δ S_CH=S_CH(M+ω,Q_e+q_e,Q_g+q_g,J+j)-S_CH(M,Q_e,Q_g,J) is the change in entropy before and after particle enters the cosmic horizon. So the emission rate of particles can be expressed by the following equation:
Γ∼ e^-2ImS=e^Δ S_CH.
Clearly, (<ref>) indicates that our results conform to the unitary principle and support the conservation of information.
§.§ Discussion
Similarly, the results in this section can be obtained directly from the first law of black hole thermodynamics, from which the temperature at the black hole event horizon and at the cosmic horizon can be read off. For the most general case of a rotating black hole carrying both electric and magnetic charge, the first law of black hole thermodynamics can be written:
dS^'=1/T^'dM^'-V_e^'/T^'dQ_e^'-V_g^'/T^'dQ_g^'-Ω^'/T^'dJ^'=1/T^'(dM^'-V_h^'dQ_h^'-Ω^'dJ^').
For the process by which particles tunnel out of the black hole event horizon, dM^'=-dω^', dQ_h^'=-dq_h^', V_h^'=(Q_h-q_h^')r_+^'/r_+^' 2+a^' 2, dJ^'=-dj^', Ω^'=a^'/r_+^' 2+a^' 2. For the process by which particles tunnel into the cosmic horizon, dM^'=dω^', dQ_h^'=dq_h^', V_h^'=(Q_h+q_h^')r_c^''/r_c^'' 2+a^'' 2, dJ^'=dj^', Ω^'=a^''/r_c^'' 2+a^'' 2. So:
dS_BH^'=-1/T^'(dω^'-(Q_h-q_h^')r_+^'/r_+^' 2+a^' 2dq_h^'-a^'/r_+^' 2+a^' 2dj^').
dS_CH^'=1/T^''(dω^'-(Q_h+q_h^')r_c^''/r_c^'' 2+a^'' 2dq_h^'-a^''/r_c^'' 2+a^'' 2dj^').
Comparing (<ref>) and (<ref>) with (<ref>) and (<ref>) respectively, we obtain the temperature T^' at the black hole event horizon and the temperature T^'' at the cosmic horizon:
β^'=1/T^'=-4π(Λ/3+a^' 2)(a^' 2+r_+^' 2)/(r_+^'-r_non^')(r_+^'-r_-^')(r_+^'-r_c^')=lim_r→ r_+^'4π√(-ĝ_00g_11)/|ĝ_00,1|.
β^''=1/T^''=4π(Λ/3+a^'' 2)(a^'' 2+r_c^'' 2)/(r_c^''-r_non^'')(r_c^''-r_-^'')(r_c^''-r_+^'')=lim_r→ r_c^''4π√(-ĝ_00g_11)/|ĝ_00,1|.
The Hawking temperature obtained by this method is consistent with the results obtained by traditional black hole thermodynamics.
§ BARDEEN DE SITTER BLACK HOLE
The line element of the Bardeen de Sitter black hole is:
ds^2=-f(r)dt_s^2+1/f(r)dr^2+r^2dΩ^2.
Where
f(r)=1-2Mr^2/(r^2+Q_h^2)^3/2-Λ/3r^2.
It is evident from (<ref>) that the electromagnetic field in Bardeen de Sitter spacetime is nonlinear. Therefore, it is impossible to prove the conservation of information from an explicit form of the horizon radius r_h(M,Q_h). In this section, we instead use the first law of black hole thermodynamics to prove the conservation of information. From standard black hole thermodynamics, the Hawking temperature of a spherically symmetric black hole is:
T=lim_r→ r_hκ/2π=lim_r→ r_h|f^'(r)|/4π.
Where r=r_h is the position of the horizon.
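As a small numerical illustration of this formula, the sketch below locates the horizons of the Bardeen de Sitter metric by finding the zeros of f(r) and evaluates T=|f^'(r_h)|/4π with a finite-difference derivative. The parameter values are illustrative assumptions, chosen so that an inner horizon, an outer event horizon and a cosmological horizon all exist.

import numpy as np
from scipy.optimize import brentq

M, Q_h, Lam = 1.0, 0.5, 0.02                     # assumed parameter values

def f(r):
    return 1.0 - 2.0 * M * r**2 / (r**2 + Q_h**2) ** 1.5 - Lam * r**2 / 3.0

def fprime(r, h=1e-6):                            # central difference for f'(r)
    return (f(r + h) - f(r - h)) / (2.0 * h)

# bracket sign changes of f on a grid to locate the horizons r_h with f(r_h) = 0
grid = np.linspace(1e-3, 30.0, 30000)
vals = f(grid)
horizons = [brentq(f, grid[i], grid[i + 1])
            for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]

for r_h in horizons:
    T = abs(fprime(r_h)) / (4.0 * np.pi)
    print(f"horizon at r_h = {r_h:.6f},  Hawking temperature T = {T:.6e}")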
§.§ Painlevé coordinate and time-like geodesic line equation
Similar to Section <ref>, to eliminate the coordinate singularity at the event horizon, we use Painlevé coordinates. To obtain the Painlevé-Bardeen de Sitter coordinates, we perform the following coordinate transformation:
t_s=t+F(r), dt=dt_s-F^'(r)dr.
F(r) must satisfy:
1/1-2Mr^2/(r^2+Q_h^2)^3/2-Λ/3r^2-[1-2Mr^2/(r^2+Q_h^2)^3/2-Λ/3r^2][F^'(r)]^2=1.
So we have Painlevé coordinates:
ds^2= -[1-2Mr^2/(r^2+Q_h^2)^3/2-Λ/3r^2]dt^2± 2√(2Mr^2/(r^2+Q_h^2)^3/2+Λ/3r^2)dtdr+dr^2+r^2dΩ^2
= g_00dt^2+2g_01dtdr+dr^2+r^2dΩ^2.
Similar to Section <ref> and Section <ref>, we consider the particle as a de Broglie s-wave and obtain the following time-like geodesic equation using the WKB approximation:
ṙ=-1/2g_00/g_01=±√(Λ/3)/2r3/Λ(r^2+Q_h^2)^3/2-6M/Λr^2-r^2(r^2+Q_h^2)^3/2/√(6M/Λ(r^2+Q_h^2)^3/2+(r^2+Q_h^2)^3)=±f(r)/2r1/√(2M/(r^2+Q_h^2)^3/2+Λ/3).
The positive sign represents particles exiting from the black hole event horizon, and the negative sign represents particles entering from the cosmic horizon. Considering the effect of self-gravitation, the following replacements should be made in (<ref>) and (<ref>): M→ M∓ω, Q_h→ Q_h∓ q_h. The minus sign and plus sign correspond to outgoing and incident particles, respectively.
§.§ Tunneling out of the black hole outer event horizon
Similar to Section <ref> and Section <ref>, we give the Lagrangian density of the matter-electromagnetic field:
ℒ=ℒ_m+ℒ_h=ℒ_m-1/4F̃_μνF̃^μν.
Obviously, Ã_0 is a cyclic coordinate. To eliminate the degree of freedom of Ã_0, the action can be written as:
S=∫_t_i^t_f(L-P_Ã_0Ã̇_0)dt.
For the Painlevé line element, the WKB approximation is valid. The relationship between the emission rate of a particle and the imaginary part of the action is as follows:
Γ∼ e^-2ImS.
The imaginary part of the action can be written as:
ImS=Im{∫_r_i^r_f[P_r-P_Ã_0Ã̇_0/ṙ]dr}=Im{∫_r_i^r_f[∫_(0,0)^(P_r,P_Ã_0)dP^'_r-Ã̇_0/ṙdP^'_Ã_0]dr}.
Hamilton's equations are:
ṙ=dH/dP_r|_(r;Ã_0,P_Ã_0)=d(M-ω^')/dP_r=-dω^'/dP_r.
Ã̇_0=dH/dP_Ã_0|_(Ã_0;r,P_r)=-Q_h-q_h^'/rdq_h^'/dP_Ã_0.
Equations (<ref>) and (<ref>) reflect conservation of energy, electric charge, and magnetic charge. By substituting (<ref>) and (<ref>) into (<ref>), we obtain:
ImS=-Im{∫_r_i^r_f[∫_(0,0)^(ω,q_h)1/ṙ(dω^'-Q_h-q_h^'/rdq_h^')]}.
For the process of particles tunneling out of the event horizon of a black hole, ṙ takes a positive sign, that is:
ṙ=f(r)/2r1/√(2(M-ω^')/[r^2+(Q_h-q_h^')^2]^3/2+Λ/3).
Substituting (<ref>) into (<ref>) and taking the influence of self-gravity into account, we make the following substitutions: M→ M-ω^', Q_h→ Q_h-q_h^'. We obtain:
ImS=-Im{∫_r_i^r_f[∫_(0,0)^(ω,q_h)2r/f(r)√(2(M-ω^')/[r^2+(Q_h-q_h^')^2]^3/2+Λ/3)(dω^'-Q_h-q_h^'/rdq_h^')]}.
Exchange the order of integration, integrating over r first. Obviously, r=r_+^' is a pole. Deforming the integration contour and applying the residue theorem, we obtain:
ImS= -∫_(0,0)^(ω,q_h)2π/f^'(r_+^')r_+^'√((1-f(r_+^'))/r_+^' 2)(dω^'-Q_h-q_h^'/r_+^'dq^'_h)
= -1/2∫_(M,E_Q_h)^(M-ω,E_Q_h-q_h)1/T^'(dM^'-Q_h^'/r_+^'dQ^'_h)=-1/2∫_r_i^r_fdS_BH^'=-1/2Δ S_BH.
Where Δ S_BH=S_BH(M-ω,Q_e-q_e,Q_g-q_g)-S_BH(M,Q_e,Q_g) is the change in entropy before and after the black hole radiates particles. So the emission rate of particles can be expressed by the following equation:
Γ∼ e^-2ImS=e^Δ S_BH.
Clearly, our results conform to the unitary principle and support the conservation of information.
§.§ Tunneling into the cosmic horizon
During the process of particle tunneling into the cosmic horizon, the total energy, total charge, and total magnetic charge of spacetime all increase, M→ M+ω^', Q_h→ Q_h+q_h^', and ṙ takes a negative sign:
ṙ=-f(r)/2r1/√(2(M+ω^')/[r^2+(Q_h+q_h^')^2]^3/2+Λ/3).
We consider the incident particle as a de Broglie s-wave and subtract the degrees of freedom of the cyclic coordinate Ã_0 to obtain the imaginary part of action:
ImS=Im{∫_r_i^'^r_f^'[P_r-P_Ã_0Ã̇_0/ṙ]dr}=Im{∫_r_i^'^r_f^'[∫_(0,0)^(P_r,P_Ã_0)dP^'_r-Ã̇_0/ṙdP^'_Ã_0]dr}.
Hamilton's equations are:
ṙ=dH/dP_r|_(r;Ã_0,P_Ã_0)=d(M+ω^')/dP_r=dω^'/dP_r.
Ã̇_0=dH/dP_Ã_0|_(Ã_0;r,P_r)=Q_h+q_h^'/rdq_h^'/dP_Ã_0.
Equations (<ref>) and (<ref>) reflect conservation of energy, electric charge, and magnetic charge. By substituting (<ref>), (<ref>) and (<ref>) into (<ref>), we obtain:
ImS=-Im{∫_r_i^r_f[∫_(0,0)^(ω,q_h)2r/f(r)√(2(M+ω^')/[r^2+(Q_h+q_h^')^2]^3/2+Λ/3)(dω^'-Q_h+q_h^'/rdq_h^')]}.
Switch the order of integration, integrating over r first. Clearly, r=r_c^'' is a pole. Deforming the integration contour and applying the residue theorem, we obtain:
ImS= -∫_(0,0)^(ω,q_h)2π/f^'(r_c^'')(dω^'-Q_h+q_h^'/r_c^''dq_h^')
= -1/2∫_(M,E_Q_h)^(M+ω,E_Q_h+q_h)1/T^''(dM^'-Q_h^'/r_c^''dQ_h^')=-1/2∫_r_i^'^r_f^'dS_CH^'=-1/2Δ S_CH.
Where Δ S_CH=S_CH(M+ω,Q_e+q_e,Q_g+q_g)-S_CH(M,Q_e,Q_g) is the change in entropy before and after particle enters the cosmic horizon. So the emission rate of particles can be expressed by the following equation:
Γ∼ e^-2ImS=e^Δ S_CH.
(<ref>) indicates that our results conform to the unitary principle and support the conservation of information.
§ CONCLUSION AND DISCUSSION
In the above calculation, we discussed the three most general cases of asymptotically de Sitter spacetime: charged and magnetized particles tunneling through the horizons of the magnetically charged Reissner-Nordström de Sitter black hole (the most general static black hole), the Kerr-Newman-Kasuya de Sitter black hole (the most general rotating black hole), and the Bardeen de Sitter black hole (a black hole without singularities). We calculated the radiation spectra of particles exiting the black hole event horizon and entering the cosmic horizon using the Parikh-Wilczek method. Our results indicate that for these three most general cases the particle radiation spectrum deviates from a pure thermal spectrum, in accordance with the unitary principle and supporting information conservation. That is, information conservation is valid for general asymptotically de Sitter space-times. The fundamental reason for information conservation in the Parikh-Wilczek framework is the requirement that the radiation process is a reversible quasi-static process. For reversible processes, entropy is conserved, and information is naturally conserved as well.
The authors would like to thank Prof. Jian Jing and his students LiuBiao Ma and Zheng Wang from the Department of Physics, Beijing University of Chemical Technology for their valuable comments and suggestions during the completion of this manuscript.
1
S. Hawking, Particle creation by black holes, Communications in mathematical physics 43 (1975) 199.
2
S. Hawking, Breakdown of predictability in gravitational collapse, Physical Review D 14 (1976) 2460.
3
D. N. Page, Information in black hole radiation, Physical review letters 71 (1993) 3743.
4
S. Ryu, T. Takayanagi, Holographic Derivation of Entanglement Entropy from the anti–de Sitter Space/Conformal Field Theory Correspondence, Physical review letters 96 (2006) 181602.
5
T. Faulkner, A. Lewkowycz, J. Maldacena, Quantum corrections to holographic entanglement entropy, Journal of High Energy Physics 2013 (2013) 74.
6
N. Engelhardt, A. C. Wall, Quantum extremal surfaces: holographic entanglement entropy beyond the classical regime, Journal of High Energy Physics 2015 (2015) 73.
7
A. Almheiri, N. Engelhardt, D. Marolf, H. Maxfield, The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole, Journal of High Energy Physics 2019 (2019) 63.
8
G. Penington, Entanglement wedge reconstruction and the information paradox, Journal of High Energy Physics 2020 (2020) 2.
9
A. Almheiri, R. Mahajan, J. Maldacena, Y. Zhao, The Page curve of Hawking radiation from semiclassical geometry, Journal of High Energy Physics 2020 (2020) 149.
10
M. Parikh, F. Wilczek, Hawking radiation as tunneling, Physical review letters 85 (2000) 5042.
11
M. Parikh, A secret tunnel through the horizon, International Journal of Modern Physics D 13 (2004) 2355.
12
S. Hemming, E. Keski-Vakkuri, Hawking radiation from AdS black holes, Physical Review D 64 (2001) 044006.
13
A. J. M. Medved, Radiation via tunneling from a de Sitter cosmological horizon, Physical Review D 66 (2002) 124009.
14
M. Alves, A Simple Line Element for the Dilaton Gravity, International Journal of Modern Physics D 10 (2001) 575.
15
E. C. Vagenas, Are extremal 2D black holes really frozen?, Physics Letters B 503 (2001) 399.
16
E. C. Vagenas, Semiclassical corrections to the Bekenstein–Hawking entropy of the BTZ black hole via self-gravitation, Physics Letters B 533 (2002) 302.
17
E. C. Vagenas, Two-dimensional dilatonic black holes and Hawking radiation, Modern Physics Letters A 17 (2002) 609.
18
E. C. Vagenas, Generalization of the KKW analysis for black hole radiation, Physics Letters B 559 (2003) 65.
19
E. C. Vagenas, Self-gravitational corrections to the Cardy–Verlinde formula of the Achúcarro–Ortiz black hole, Physics Letters B 584 (2004) 127.
20
E. C. Vagenas, On Hawking radiation as tunneling with back-reaction, Modern Physics Letters A 20 (2005) 2449.
21
M. Arzano, A. J. M. Medved, E. C. Vagenas, Hawking radiation as tunneling through the quantum horizon, Journal of High Energy Physics 2005 (2005) 037.
22
M. R. Setare, E. C. Vagenas, Self-gravitational corrections to the Cardy–Verlinde formula and the FRW brane cosmology in SdS5 bulk, International Journal of Modern Physics A 20 (2005) 7219.
23
J. Zhang, Z. Zhao, Hawking radiation via tunneling from Kerr black holes, Modern Physics Letters A 20 (2005) 1673.
24
J. Zhang, Z. Zhao, New coordinates for Kerr–Newman black hole radiation, Physics Letters B 618 (2005) 14.
25
W. Liu, New coordinates for BTZ black hole and Hawking radiation via tunnelling, Physics Letters B 634 (2006) 541.
26
S. Q. Wu, Q. Q. Jiang, Remarks on Hawking radiation as tunneling from the BTZ black holes, Journal of High Energy Physics 2006 (2006) 079.
27
J. Zhang, Z. Zhao, Massive particles' black hole tunneling and de Sitter tunneling, Nuclear Physics B 725 (2005) 173.
28
J. Zhang, Z. Zhao, Hawking radiation of charged particles via tunneling from the Reissner-Nordström black hole, Journal of High Energy Physics 2005 (2005) 055.
29
J. Zhang, Z. Zhao, Massive particles' Hawking radiation via tunneling, Acta Physica Sinica 55 (2006) 3796.
30
J. Zhang, Y. Hu, Z. Zhao, Information loss in black hole evaporation, Modern Physics Letters A 21 (2006) 1865.
31
Q. Q. Jiang, S. Q. Wu, X. Cai, Hawking radiation as tunneling from the Kerr and Kerr-Newman black holes, Physical Review D 73 (2006) 064003.
32
J. Zhang, TUNNELING EFFECT OF CHARGED AND MAGNETIZED PARTICLES FROM THE REISSNER–NORDSTRÖM BLACK HOLE WITH MAGNETIC CHARGES, Modern Physics Letters A 22 (2007) 1821.
33
J. Zhang, J. H. Fan, Tunnelling effect of charged and magnetized particles from the Kerr–Newman–Kasuya black hole, Physics Letters B 648 (2007) 133.
34
B. Tan, Hawking radiation of magnetized particles via tunneling of Bardeen black hole, arXiv:2408.04408.
|
http://arxiv.org/abs/2409.02903v1 | 20240904174542 | Semi-local behaviour of non-local hypoelliptic equations: Boltzmann | [
"Amélie Loher"
] | math.AP | [
"math.AP",
"45K05, 35H10, 35K99, 35B45, 35B65"
] |
|
http://arxiv.org/abs/2409.02673v1 | 20240904130251 | A Parareal algorithm without Coarse Propagator? | [
"Martin J. Gander",
"Mario Ohlberger",
"Stephan Rave"
] | math.NA | [
"math.NA",
"cs.NA"
] |
A Parareal algorithm without Coarse Propagator?
Martin J. Gander (Université de Genève, martin.gander@unige.ch)
Mario Ohlberger (University of Münster, mario.ohlberger@uni-muenster.de)
Stephan Rave (University of Münster, stephan.rave@uni-muenster.de)
September 4, 2024
======================================================================================================================================================================================================
The Parareal algorithm was invented in 2001 in order to
parallelize the solution of evolution problems in the time
direction. It is based on parallel fine time propagators called F
and sequential coarse time propagators called G, which alternatingly
solve the evolution problem and iteratively converge to the fine
solution. The coarse propagator G is a very important component of
Parareal, as one sees in the convergence analyses. We present here
for the first time a Parareal algorithm without coarse propagator,
and explain why this can work very well for parabolic problems. We
give a new convergence proof for coarse propagators approximating in
space, in contrast to the more classical coarse propagators which
are approximations in time, and our proof also applies in the
absence of the coarse propagator. We illustrate our theoretical
results with numerical experiments, and also explain why this
approach can not work for hyperbolic problems.
§ INTRODUCTION
The Parareal algorithm was introduced in <cit.> as
a non-intrusive way to parallelize time stepping methods in the time
direction for solving partial differential equations. Its convergence
is now well understood: for linear problems of parabolic type,
Parareal converges superlinearly on bounded time intervals, and
satisfies a linear convergence bound on unbounded time intervals
<cit.>, which means it is a contraction even if
computations are performed on arbitrary long time intervals. It was
shown in the same reference also that for hyperbolic problems, the
linear convergence estimate over long time does not predict
contraction, and the superlinear estimate only indicates contraction
when there are already too many iterations performed and one is
approaching the finite step convergence property of Parareal. A
non-linear convergence analysis for Parareal can be found in
<cit.>, and also the performance of Parareal for
Hamiltonian problems is well understood, see
<cit.>, where also a derivative variant of Parareal
was proposed.
An essential ingredient in the Parareal algorithm is the coarse
propagator, and it is its accuracy that has a decisive influence on
the convergence of Parareal, as one can see in the estimates from
<cit.>. We consider here a
Parareal algorithm in the standard form for solving an evolution
problem ∂_t u= F(u,t), and time partition
0=T_0<T_1<…<T_N=T,
U_n+1^k+1=F(U_n^k,T_n,T_n+1)+G(U_n^k+1,T_n,T_n+1)-G(U_n^k,T_n,T_n+1),
where the fine solver F(U_n^k,T_n,T_n+1) and the coarse solver
G(U_n^k,T_n,T_n+1) solve the underlying evolution problem on time
intervals Δ T:=T_n+1-T_n,
∂_t u= F(u,t), u(T_n)=U_n^k,
with different accuracy. We are interested in understanding what
happens when we remove the coarse solver in the Parareal algorithm
(<ref>), i.e. we run instead the iteration
U_n+1^k+1=F(U_n^k,T_n,T_n+1).
Note that this is quite different from the Identity Parareal algorithm
<cit.>, where the coarse propagator was replaced by
the identity, a very coarse approximation, while here we remove the
coarse propagator altogether.
§ A NUMERICAL EXPERIMENT
We start with a numerical experiment for the one dimensional heat
equation on the spatial domain Ω:=(0,1) and the time interval (0,T],
∂_t u=∂_xxu+f, u(x,0)=u_0(x), u(0,t)=u(1,t)=0.
We show in Figure <ref>
the results obtained with zero initial conditions, u_0(x)=0, and the
source term
f(x,t)=10e^-100(x-0.5)^2∑_j=1^4e^-100(t-t_j)^2,
t_j=0.1,0.6,1.35,1.85,
like a heating device which is turned on and off at specific time
instances. We discretized the heat equation (<ref>)
using a centered finite difference discretization in space with mesh
size Δ x=1/128, and Backward Euler in time with time
step Δ t=1/96, and run the Parareal algorithm up to
T=3 with various numbers N of coarse time intervals, where the
coarse propagator G just does one big Backward Euler step. We see
two really interesting results: first, the Parareal algorithm without
coarse propagator is actually also converging when applied to the heat
equation; both methods are a contraction. And second, if there are not
many coarse time intervals, i.e. if the coarse time interval length
Δ T is becoming large, the Parareal algorithm without coarse
correction is converging even faster than with coarse correction!
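The experiment can be reproduced in a few lines; the sketch below follows the discretization described above (centered finite differences with Δ x=1/128, Backward Euler with Δ t=1/96, one big Backward Euler step as coarse propagator) and runs the Parareal iteration with and without the coarse correction, measuring the error against the sequential fine solution. The number of coarse intervals N=18, the number of iterations and the error measure are assumptions of this sketch rather than the exact settings behind the figures.

import numpy as np

# spatial discretization: Delta x = 1/128, interior points only (Dirichlet conditions)
nx = 128
dx = 1.0 / nx
x = np.linspace(dx, 1.0 - dx, nx - 1)
L = (np.diag(-2.0 * np.ones(nx - 1)) + np.diag(np.ones(nx - 2), 1)
     + np.diag(np.ones(nx - 2), -1)) / dx**2

def source(t):
    tj = np.array([0.1, 0.6, 1.35, 1.85])
    return 10.0 * np.exp(-100.0 * (x - 0.5)**2) * np.sum(np.exp(-100.0 * (t - tj)**2))

def backward_euler(u, t0, t1, nsteps):
    # nsteps Backward Euler steps of u_t = L u + f from t0 to t1
    dt = (t1 - t0) / nsteps
    A = np.eye(nx - 1) - dt * L
    t = t0
    for _ in range(nsteps):
        t += dt
        u = np.linalg.solve(A, u + dt * source(t))
    return u

T, N, K = 3.0, 18, 8                                   # assumed: 18 coarse intervals, 8 iterations
Tn = np.linspace(0.0, T, N + 1)
fine_steps = int(round((T / N) / (1.0 / 96)))          # fine time step Delta t = 1/96

F = lambda u, n: backward_euler(u, Tn[n], Tn[n + 1], fine_steps)
G = lambda u, n: backward_euler(u, Tn[n], Tn[n + 1], 1)   # one big Backward Euler step

u_ref = [np.zeros(nx - 1)]                             # sequential fine solution as reference
for n in range(N):
    u_ref.append(F(u_ref[n], n))

def parareal(use_coarse):
    U = [np.zeros(nx - 1) for _ in range(N + 1)]       # initial guess
    for k in range(K):
        Fk = [F(U[n], n) for n in range(N)]            # these solves are parallel in practice
        Gk = [G(U[n], n) for n in range(N)] if use_coarse else None
        Unew = [U[0]]
        for n in range(N):                             # sequential correction sweep
            Unew.append(Fk[n] + (G(Unew[n], n) - Gk[n] if use_coarse else 0.0))
        U = Unew
        err = max(np.max(np.abs(U[n] - u_ref[n])) for n in range(N + 1))
        print(f"iteration {k + 1:2d}: max error {err:.3e}")
    return U

print("Parareal with coarse propagator:");    parareal(True)
print("Parareal without coarse propagator:"); parareal(False)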
In order to test this surprising property further, we repeat the
above experiment, but use now homogeneous Neumann boundary conditions
in (<ref>), ∂_xu(0,t)=∂_xu(1,t)=0,
instead of the Dirichlet conditions. We show the corresponding results
in Figure <ref>.
We see that while the Parareal algorithm with coarse propagator is
converging as before with Dirichlet conditions, the Parareal algorithm
without coarse propagator is not converging any more, except in the
last case when only 6 coarse time intervals are used, and here it is
the finite step convergence property of the Parareal algorithm when
iterating 6 times that leads to convergence; there is no contraction
without the coarse propagator under Neumann boundary conditions.
It is very helpful at this point to take a closer look at the
solutions we were computing in these two experiments for zero
Dirichlet and zero Neumann conditions, shown in Figure <ref>
in the two leftmost panels.
We see that the solution with Dirichlet boundary conditions is very
much local in time: in order to compute the solution around t=2, we
only need to know if the right hand side function was on just before,
but not at the much earlier times t<<2, since the solution does not
contain this information any more. This is very different in the case
of Neumann conditions, when using the same source term: now the
solution at time t=2 strongly depends on the fact that the right
hand side function was on at earlier times, since the heat injected into
the system is kept, as a constant value in space. In analogy, it is
very easy to predict the temperature in the room you are currently in
a year from now: you only need to know whether the heater or air-conditioner
is on and whether the windows and doors are open or closed then, but not
what happened in the room over the entire year before.
§ CONVERGENCE ANALYSIS
In order to study our observation mathematically, it is easiest to
consider a Parareal algorithm which uses a spectral method in space
and is performing exact integrations in time for our heat equation
model problem (<ref>). The fine propagator
F(U_n^k,T_n,T_n+1) is thus using a highly accurate spectral method
in space with m_F basis functions to solve (<ref>),
and the coarse propagator G(U_n^k,T_n,T_n+1) solves the same
problem using a very cheap spectral method in space using only
m_G<<m_F basis functions. Note that we allow explicitly that
m_G=0, which means the coarse solver is not present in the Parareal
algorithm (<ref>)!
A spectral representation of the solution of the heat equation
(<ref>) can be obtained using separation of variables,
and considering a spatial domain Ω:=(0,π) to avoid having to
carry the constant π in all the computations (see also Remark
<ref> later), we get
u(x,t):=∑_m=1^∞û_m(t)sin(m x).
Expanding also the right hand side in a sine series,
f(x,t):=∑_m=1^∞f̂_m(t)sin(m x),
and the initial condition,
u_0(x):=∑_m=1^∞û_0,msin(m x),
we find that the Fourier coefficients in the solution satisfy
∂_tû_m=-m^2û_m+f̂_m, û_m(0)=û_0,m.
The solution of this equation can be written in closed form using an
integrating factor,
û_m(t)=û_0,me^-m^2t
+∫_0^tf̂_m(τ)e^-m^2(t-τ)dτ.
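This closed-form expression is easy to verify numerically for a single mode; the following sketch compares the integrating-factor formula, evaluated by quadrature, with a direct numerical integration of the coefficient ODE. The mode number, initial value and source term used here are illustrative assumptions.

import numpy as np
from scipy.integrate import quad, solve_ivp

m, u0m, t_end = 2, 0.7, 1.5                       # assumed mode, initial coefficient, final time
f_m = lambda t: np.exp(-100.0 * (t - 0.6) ** 2)   # one "heating pulse" as assumed source

# closed form: u_m(t) = u0m e^{-m^2 t} + int_0^t f_m(tau) e^{-m^2 (t - tau)} dtau
integral, _ = quad(lambda tau: f_m(tau) * np.exp(-m**2 * (t_end - tau)), 0.0, t_end, points=[0.6])
u_closed = u0m * np.exp(-m**2 * t_end) + integral

# direct integration of the ODE for the same Fourier coefficient
sol = solve_ivp(lambda t, u: -m**2 * u + f_m(t), (0.0, t_end), [u0m],
                rtol=1e-10, atol=1e-12)
u_ode = sol.y[0, -1]

print(u_closed, u_ode)                            # the two values agree to solver tolerance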
If we use in the Parareal algorithm (<ref>) a spectral
approximation of the solution for the fine propagator using m_F
modes, we obtain
F(U_n^k,T_n,T_n+1)=∑_m=1^m_F(Û_m,n^k
e^-m^2Δ T+∫_T_n^T_n+1f̂_m(τ)
e^-m^2(t-τ)dτ)sin(m x),
and similarly for the coarse approximation when using m_G modes
G(U_n^k,T_n,T_n+1)=∑_m=1^m_G(Û_m,n^k
e^-m^2Δ T+∫_T_n^T_n+1f̂_m(τ)
e^-m^2(t-τ)dτ)sin(m x).
Because of the orthogonality of the sine functions, the Parareal
algorithm (<ref>) is diagonalized in this representation,
which is the main reason that we chose a spectral method, to simplify
the present analysis for this short note. For Fourier modes with m≤
m_G, the first and the third term in (<ref>) coincide, as we
see from (<ref>) and (<ref>),
and thus cancel, and hence the Parareal update formula simplifies to
Û_m,n+1^k+1=Û_m,n^k+1
e^-m^2Δ T+∫_T_n^T_n+1f̂_m(τ)
e^-m^2(t-τ)dτ,
For m>m_G, the coarse correction term is not present in the Parareal update
formula (<ref>), and thus only the contribution from the fine propagator remains,
Û_m,n+1^k+1=Û_m,n^k
e^-m^2Δ T+∫_T_n^T_n+1f̂_m(τ)
e^-m^2(t-τ)dτ,
The only difference is the iteration index k in the first term on
the right, but this term makes all the difference: for m≤ m_G,
the update formula (<ref>) represents the exact
integration of the corresponding mode sequentially going through the
entire time domain (0,T). For m> m_G, the update formula
(<ref>) represents simply a block Jacobi iteration
solving all time subintervals (T_n,T_n+1) in parallel starting
from the current approximation at iteration k. Using this insight,
we obtain the following convergence result.
The Parareal algorithm (<ref>) for the heat equation
(<ref>) using the spectral coarse propagator
(<ref>) and the spectral fine propagator
(<ref>) satisfies for any initial guess U_n^0
the convergence estimate
sup_n||U_n^k(·)-u(·,T_n)||_2≤
e^-(m_G+1)^2kΔ Tsup_n||U_n^0(·)-u(·,T_n)||_2,
and this estimate also holds if the coarse propagator does not
contain any modes, m_G=0, which means it is not present.
The Parareal algorithm therefore converges also without coarse
propagator.
We introduce the error in Fourier space at the
interfaces, Ê_m,n^k:=Û_m,n-Û_m,n^k,
where the converged solution Û_m,n satisfies
Û_m,n+1=Û_m,n
e^-m^2Δ T+∫_T_n^T_n+1f̂_m(τ)
e^-m^2(t-τ)dτ,
Taking the difference with the Parareal update formula for small m
in (<ref>) and using the fact that
Û_m,0^k=Û_m,0=û_0,m, we thus obtain a vanishing error in the
first Fourier coefficients after having performed one Parareal
iteration,
Ê_m,n^k=0, m=1,2,…,m_G, k=1,2,….
For m>m_G, taking the difference between (<ref>)
and the update formula for larger m in (<ref>), the
part with the source term f cancels, and we are left with
Ê_m,n+1^k+1=Ê_m,n^ke^-m^2Δ T,
m=m_G+1,m_G+2,….
Now using the Parseval-Plancherel identity, and taking the largest
convergence factor e^-(m_G+1)^2Δ T out of the sum, we obtain
the convergence estimate (<ref>) taking the sup
over all time intervals n.
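The error recursion in this proof can also be checked directly: the sketch below iterates the per-mode update for a random initial interface error and compares the resulting sup-norm error with the bound e^-(m_G+1)^2kΔ T of the theorem. The number of modes, Δ T and the initial error are illustrative assumptions.

import numpy as np

m_F, m_G, N, DT, K = 32, 2, 10, 0.1, 5                    # assumed mode count, intervals, etc.
rng = np.random.default_rng(0)

# initial interface errors E[m-1, n] for modes m = 1..m_F and interfaces n = 1..N
E = rng.standard_normal((m_F, N))
sup0 = np.abs(E).max()

for k in range(1, K + 1):
    E_new = np.zeros_like(E)
    E_new[:m_G, :] = 0.0                                   # modes m <= m_G are exact after one sweep
    decay = np.exp(-np.arange(m_G + 1, m_F + 1) ** 2 * DT)  # contraction of modes m > m_G
    E_new[m_G:, 1:] = decay[:, None] * E[m_G:, :-1]          # error propagates one interval per iteration
    E = E_new
    bound = np.exp(-(m_G + 1) ** 2 * k * DT) * sup0
    print(f"k = {k}:  sup error = {np.abs(E).max():.3e}  <=  bound = {bound:.3e}")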
We show in Figure <ref> a graphical comparison of the
convergence factor of each Fourier mode without coarse propagator from
(<ref>), i.e. e^-m^2Δ T, and with coarse
propagator |e^-m^2Δ T-R_G(-m^2Δ
T)|/(1-|R_G(-m^2Δ T)|) from <cit.> for one
Backward Euler step, R_G(z)=1/1-z, as we used for Figures
<ref> and
<ref>.
We see that indeed there are situations where the coarse correction
with one Backward Euler step is detrimental, as observed in Figures
<ref> and
<ref>, and there is also quantitative
agreement.
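The quantitative comparison can be reproduced with a few lines of Python: the sketch below evaluates the two per-mode convergence factors, e^-m^2Δ T without coarse propagator and |e^-m^2Δ T-R_G(-m^2Δ T)|/(1-|R_G(-m^2Δ T)|) with a one-step Backward Euler coarse propagator. The values of Δ T and the mode range are assumptions of the sketch.

import numpy as np

for DT in (0.05, 0.25, 1.0):                     # assumed coarse interval lengths
    m = np.arange(1, 9)
    z = -m**2 * DT
    R_G = 1.0 / (1.0 - z)                         # Backward Euler stability function R_G(z) = 1/(1-z)
    rho_no_coarse = np.exp(z)
    rho_coarse = np.abs(np.exp(z) - R_G) / (1.0 - np.abs(R_G))
    print(f"DT = {DT}")
    for mm, a, b in zip(m, rho_no_coarse, rho_coarse):
        print(f"  m = {mm}:  no coarse {a:.3e}   Backward Euler coarse {b:.3e}")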
Theorem <ref> shows that the Parareal algorithm (<ref>)
for the heat equation is scalable, provided the time interval length
Δ T is held constant: no matter how many such time intervals
are solved simultaneously, the convergence rate remains the same,
and this even in the case without coarse propagator! This is like
the scalability of one level Schwarz methods discovered in
<cit.> for molecular simulations in solvent models,
and rigorously proved in
<cit.>
using three different techniques: Fourier analysis, maximum
principle, and projection arguments in Hilbert spaces.
If we had zero Neumann boundary conditions, a similar result could
also be derived using a cosine expansion, but then omitting the coarse
correction would not lead to a convergent method, since the zero
mode would have the contraction factor one. One must therefore in
this case have at least the constant mode in the coarse correction
for the Parareal algorithm to converge. Otherwise however all
remains the same.
A precise dependence on the domain size can be obtained by
introducing the spatial domain of length L, Ω=(0,L). In
this case the convergence factor simply becomes
e^-((m_G+1)π/L)^2Δ T. The same
estimate can also be obtained for a more general parabolic equation,
the convergence factor then becomes e^-λ_m_G+1Δ T,
where λ_m_G+1 is the (m_G+1)-st eigenvalue of the
corresponding spatial operator.
§ CONCLUSIONS
We have shown that Parareal without a coarse propagator can converge
very well for the prototype parabolic problem of the heat equation,
because such problems “forget” all fine information over time, and
only very coarse information, if at all, remains, depending on the
boundary conditions. This is very different for hyperbolic problems:
while for the advection equation u_t+au_x=f with Dirichlet boundary
condition, whose solution is shown in Figure <ref> (middle)
the information is also “forgotten” by leaving the domain on the
right and Parareal converges very well <cit.>, also
without coarse propagator, for the advection equation with periodic
boundary conditions in the fourth plot of Figure <ref> this is
not the case any more: nothing is forgotten, and the coarse propagator
needs to have the accuracy of the fine propagator for the Parareal algorithm not to
fail. The situation is similar for the second order wave equation
u_tt=u_xx, whose solution is shown on the right in Figure
<ref>.
Acknowledgement. Funded by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) under Germany's Excellence Strategy
EXC 2044 – 390685587, Mathematics Münster: Dynamics–Geometry–Structure
and the Swiss National Science Foundation.
|
http://arxiv.org/abs/2409.02269v1 | 20240903195934 | Simulation-calibration testing for inference in Lasso regressions | [
"Matthieu Pluntz",
"Cyril Dalmasso",
"Pascale Tubert-Bitter",
"Ismail Ahmed"
] | stat.ME | [
"stat.ME",
"math.ST",
"stat.CO",
"stat.TH"
] |
Simulation-calibration testing for inference in Lasso regressions
Matthieu Pluntz, Cyril Dalmasso, Pascale Tubert-Bitter, Ismail Ahmed
September 3, 2024
=========================================================================================================================
We propose a test of the significance of a variable appearing on the Lasso path and use it in a procedure for selecting one of the models of the Lasso path, controlling the Family-Wise Error Rate.
Our null hypothesis depends on a set A of already selected variables and states that it contains all the active variables. We focus on the regularization parameter value from which a first variable outside A is selected. As the test statistic, we use this quantity’s conditional p-value, which we define conditional on the non-penalized estimated coefficients of the model restricted to A. We estimate this by simulating outcome vectors and then calibrating them on the observed outcome’s estimated coefficients. We adapt the calibration heuristically to the case of generalized linear models in which it turns into an iterative stochastic procedure. We prove that the test controls the risk of selecting a false positive in linear models, both under the null hypothesis and, under a correlation condition, when A does not contain all active variables.
We assess the performance of our procedure through extensive simulation studies. We also illustrate it in the detection of exposures associated with drug-induced liver injuries in the French pharmacovigilance database.
Keywords: Variable selection, high-dimensional regression, Lasso, empirical p-value, conditional p-value, FWER control.
§ INTRODUCTION
Variable selection in high-dimensional regressions is a classic problem in health data analysis. It aims to identify a limited number of factors associated with a given health event among a large number of candidate variables such as genetic factors or environmental or drug exposures. The Lasso regression <cit.> provides a series of sparse models into which variables enter one after another as the regularization parameter λ decreases. To be used in variable selection, the Lasso requires a further procedure for choosing the value of λ and thus the associated model. By itself, the Lasso does not indicate whether the variables it selects are statistically significant.
<cit.> address this by adapting the paradigm of hypothesis testing to the Lasso. They develop a significance test for j_k, the k-th variable selected on the Lasso path, by measuring the impact of its inclusion in the model compared to the selection by the Lasso of the set A of the first k-1 variables only. Their test statistic is a difference of covariances between the Lasso results on two datasets — complete or restricted to the variables in A — at the regularization parameter λ_k+1 where the next variable enters the Lasso path. They determine the distribution of this statistic in the linear model, under the null hypothesis asserting that the first k-1 variables selected by the Lasso contain all the active variables. It is therefore possible to calculate the exact p-value associated with this test. However, in the binary model, the test statistic's exact distribution is not known.
We propose a testing procedure that addresses the same issue as the test by Lockhart et al.: produce, for each variable selected by the Lasso, a p-value that measures its significance while correcting the drawbacks of this test: invalidity in nonlinear cases, and sometimes a lack of power. We retain the idea of exploiting the value of the regularization parameter at which a given variable enters the Lasso path. All else being equal, the earlier on the Lasso path a variable is selected (that is, at a high value of λ), the stronger and more significant its association with the response. Like Lockhart et al., we propose to test in the framework of hypothesis testing, for each of the variables entering the Lasso path, whether the λ at which it is selected is higher than it should be if there were no link between this variable and the response.
To this end, we are inspired by the permutation selection of <cit.>. Their method aims to determine from the data an optimal value of λ, and then select the model given by the Lasso at this value. The idea is that the optimal λ is the one below which it is likely that the Lasso inadvertently selects variables that are in reality independent of Y. To find it, they generate a population of random permutations y^(l), l = 1, .., N of the observed response y. This imposes "independence" between the permuted responses and each of the X_j (in the sense that all the y^(l)_i have the same distribution despite the variation in X_i,j). Then they perform the Lasso regression of each permuted response vector on X, and measure λ_0 (y^(l)), the smallest value of λ at which this Lasso regression selects no variable. The chosen regularization parameter is then the median of {λ_0 (y^(1)), .., λ_0 (y^(N)) }.
We adopt the idea of generating responses which follow the same distribution as the observed response, but which have been made artificially independent of the covariates. As in permutation selection, we perform the Lasso regression of the simulated response on the observed covariates and we focus on λ_0, the highest value of λ where a covariate, which by construction is in fact independent of the response, is selected. These λ_0 obtained from several simulated datasets make up a reference population which is representative of the case where the response and covariates are independent. Unlike permutation selection, where one applies the median λ_0 from this population to the data of interest, we compare the λ_0 obtained on the data of interest to this reference population in order to test the significance of this λ_0, estimating the p-value of the test by Monte Carlo from the reference population. This generalizes to quantities other than λ_0: the λ at which any variable enters the Lasso path, not just the first one. While Sabourin et al. retain only the distribution of the simulated response vectors while breaking their association with all the X_j through permutation, we instead retain the simulated responses' association with some of the covariates by applying a “calibration” to them. Therefore our algorithm is called simulation-calibration.
From a variable selection perspective, if the test concludes that the λ at which a variable enters the Lasso path is significant, we select the variable. By iterating the test, we obtain a complete model selection procedure.
Section <ref> presents our null hypothesis and basic notations. Section <ref> shows the need for, and defines, the conditional p-value which we will use as a test statistic in the linear case. Section <ref> presents the simulation-calibration algorithm estimating this statistic, and section <ref> shows that it is valid and permits control of the type 1 error. Section <ref> adapts this test and this algorithm to generalized linear models. Section <ref> presents iterative model selection procedures based on simulation-calibration. Section <ref> states a theorem which permits the model selection procedures to control the FWER (or FDR) under certain conditions in the linear case. Section <ref> presents extended simulation studies of the validity of the simulation-calibration test and of the iterative procedures' variable selection performances. Section <ref> shows an application to real-world pharmacovigilance data.
§ PROBLEM STATEMENT AND NOTATIONS
Let X ∈ℝ^n × p be a matrix of p covariates and n observations and y ∈ℝ^n a response vector of size n. We consider the linear regression model:
y = β_0 + Xβ + ϵ, ϵ∼𝒩(0,σ^2 I_n )
where β_0 ∈ℝ, β∈ℝ^p, and σ > 0. Let β̂^Lasso (λ,y) be the Lasso estimator of β, which depends on the regularization parameter λ and the response y. It also depends on the covariate matrix X, but this is considered constant in the following.
Let A be a subset of {1, .., p}. We assume that the covariates X_j, j ∈ A, are active, and we seek to determine if other covariates are active. We thus define the null and alternative hypotheses, which depend on A:
H_0 (A) : ∀ j ∉ A,β_j = 0
H_1 (A) : ∃ j ∉ A,β_j ≠ 0
Although A is a priori an arbitrary subset of {1, .., p}, the idea, as in Lockhart et al., is to take A as the set of indices of the first k-1 covariates entering the Lasso path, for relatively small values of k (even if p can be large). The simplest particular case is A = ∅.
We consider the following statistic:
λ_A(y) = sup{λ≥ 0 : ∃ j ∉ A, β̂_j^Lasso (λ,y) ≠ 0 }
That is, the λ at which the first variable outside of A is selected, or the k-th variable selected if A includes the first k-1 variables. We will reject the null hypothesis if we observe a value of λ_A that is too high compared to what is expected under the null hypothesis. An abnormally high λ at which a variable enters the Lasso typically results from an association between the response and this variable. Therefore, we interpret a test of H_0 (A) based on λ_A as assessing the significance of the first variable selected by the Lasso outside of A, whose index we call j_A.
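In practice λ_A(y) can be read off the Lasso path; a minimal Python sketch is given below, using the LARS implementation of the path so that the breakpoints at which variables enter are exact. Note that scikit-learn's penalty parameter differs from λ by a fixed scaling convention (a factor 1/n), which is harmless here because the test only compares values of λ_A computed with the same convention. The data-generating step and the helper name lambda_A are assumptions made for illustration.

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[[0, 1]] = [1.0, 0.5]      # variables 0 and 1 are active (assumed)
y = 2.0 + X @ beta + rng.standard_normal(n)

def lambda_A(X, y, A):
    # largest penalty at which the Lasso path selects a variable outside A
    Xc, yc = X - X.mean(axis=0), y - y.mean()       # handle the intercept by centering
    alphas, _, coefs = lars_path(Xc, yc, method="lasso")
    outside = [j for j in range(X.shape[1]) if j not in set(A)]
    for k in range(1, coefs.shape[1]):
        if np.any(coefs[outside, k] != 0):
            return alphas[k - 1]                    # breakpoint at which it entered
    return 0.0

print("A empty    :", lambda_A(X, y, A=[]))         # first variable on the path
print("A = {0}    :", lambda_A(X, y, A=[0]))        # first variable outside {0}
print("A = {0, 1} :", lambda_A(X, y, A=[0, 1]))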
However, H_0 (A) might be false while the variable j_A is inactive, if there exists an active variable not belonging to A that is selected "later" on the Lasso path, at a λ lower than λ_A. In this case, it is correct to reject H_0 (A) from a hypothesis testing perspective, but incorrect to conclude that the variable j_A is significant. In section <ref>, we show under which conditions the probability of this event is controlled.
Furthermore, it is possible that multiple variables enter the Lasso path simultaneously at the parameter λ_A, whether mathematically (for example, in the case of both X and y being binary, there is a non-zero probability of exact symmetry between the associations of y with two distinct covariates) or approximately if the difference between the λ at which two distinct variables are selected is too small to measure. In both cases, rejecting H_0 (A) leads to the selection of all these variables.
§ CONDITIONAL P-VALUE
If λ_A were used as the test statistic, its associated p-value would be, by definition:
p_A^0 (y) = P_H_0 (A)( λ_A (Y) ≥λ_A (y) )
where Y is a random variable that follows the model <ref>.
In general, there is no simple explicit analytical expression for the result of a Lasso regression. This prevents determining the exact distribution of λ_A (y) and thus calculating p_A^0 (y) or other similar quantities. Therefore, we aim to estimate this probability using the Monte Carlo method. To do this, we need to simulate i.i.d. outcome vectors which follow model <ref> and satisfy H_0(A).
However, this is not possible without additional information because even assuming H_0(A) is true, the parameters of model <ref> are not known: H_0(A) guarantees that β_j = 0 for j ∉ A but says nothing about β_j for j ∈ A or σ. A naive attempt to perform this simulation, where β_0, the β_j for j ∈ A, and σ are simply replaced by their estimates on the data, leads to a biased estimation of the p-value (see section <ref> and the supplementary material).
Therefore, instead of p_A^0 (y), we focus on a variant whose definition includes all the necessary information for its estimation by Monte Carlo. It requires conditioning on the parameters that remain unknown under H_0(A). Consider the following linear model:
y = X_A β_A + ϵ_A, ϵ_A ∼𝒩(0,σ_A^2 I_n)
where X_A is the matrix composed of a column vector of 1s and the columns of X whose indices belong to A, and β_A is the vector (β_j)_j ∈{0}∪ A.
This is the reduced form that the model <ref> takes when H_0 (A) is true. Then we have σ = σ_A. Let θ_A = (β_A,σ_A) be the parameter vector of the model <ref>, Θ_A = ℝ^1+|A|×ℝ_+ the space of its possible values, and θ_A (y) the maximum likelihood estimator (unpenalized) of θ_A. We define the following test statistic:
p_A (y) = P_H_0 (A)(λ_A (Y) ≥λ_A (y) | θ_A (Y) = θ_A (y) ).
This formula is similar to that of a p-value but with an added conditioning, making p_A (y) a conditional p-value.
p_A (y) is defined so that we can know its distribution as precisely as possible. We have the following result:
Under H_0 (A),
∀ t ∈ [0,1], P_H_0 (A)(p_A(Y) ≤ t ) ≤ t.
In other words, the null distribution of p_A(Y) stochastically dominates the uniform distribution on [0,1]. This phrase, used in decision theory, means that the cumulative distribution function of the dominant probability distribution is lower at every point than that of the dominated distribution, and thus the dominant distribution is systematically shifted towards higher values compared to the dominated distribution.
In practice, the distribution of λ_A (Y) conditional on θ_A (Y) = θ_A' is typically continuous for all parameter vectors with a non-zero standard deviation, i.e., any θ_A' ∈Θ_A^* where Θ_A^* = ℝ^1+|A|×ℝ_+^*. We call the assumption that this is true the continuity assumption. It implies the following simpler result:
Under the continuity assumption, p_A(Y) follows a uniform distribution on [0,1].
The proofs of Lemma <ref> and corollary <ref> are in the supplementary material.
Lemma <ref> allows us to use p_A (y) as a test statistic. This quantity is close to 0 when λ_A (y) is high, which is indicative of H_1 (A). Therefore, we wish to reject H_0 (A) if p_A (y) is sufficiently small. The lemma ensures that for all α∈ [0,1], rejecting H_0 (A) if and only if p_A (y) ≤α guarantees that the type 1 error is at most α. The p-value associated with p_A (y) is p_A(y) itself under the continuity assumption, and not larger than p_A(y) in the general case.
§ ALGORITHM ESTIMATING THE CONDITIONAL P-VALUE
Unlike p_A^0 (y), we are able to estimate p_A (y) with a Monte Carlo method. The problem with estimating p_A^0 (y) is that it is not possible to simulate Y following its distribution under H_0 (A), which is not known. To estimate p_A (y), due to the conditioning in its definition, we must instead simulate Y following its distribution under H_0 (A) conditional on θ_A (Y). The following Monte Carlo algorithm does this. It is parameterized by an integer N, the number of simulations, which controls estimation accuracy.
In practice, at step 2.1, we use θ_A^sim = θ_A (y), but other choices are possible because all values of θ_A^sim produce calibrated responses y^(l) that follow the same distribution.
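A minimal sketch of the estimator in the linear model is given below. The numbered algorithm itself is not reproduced in this version of the text, so the steps (simulation of y^sim from the model restricted to A with θ_A^sim = θ_A (y), re-estimation of θ_A (y^sim), calibration towards θ_A (y), and the Monte Carlo average of the indicators) are reconstructed from the surrounding description; variable names and the synthetic data are assumptions made for illustration.

import numpy as np
from sklearn.linear_model import lars_path

def lambda_A(X, y, A):
    # largest penalty at which the Lasso path selects a variable outside A (same helper as above)
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    alphas, _, coefs = lars_path(Xc, yc, method="lasso")
    outside = [j for j in range(X.shape[1]) if j not in set(A)]
    for k in range(1, coefs.shape[1]):
        if np.any(coefs[outside, k] != 0):
            return alphas[k - 1]
    return 0.0

def theta_hat_A(X_A, y):
    # unpenalized ML estimates (beta_A, sigma_A) of the model restricted to A (intercept included in X_A)
    beta, *_ = np.linalg.lstsq(X_A, y, rcond=None)
    resid = y - X_A @ beta
    return beta, np.sqrt(np.mean(resid**2))

def simulation_calibration_pvalue(X, y, A, N=500, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    X_A = np.column_stack([np.ones(n)] + [X[:, j] for j in A])       # intercept + columns in A
    beta_obs, sigma_obs = theta_hat_A(X_A, y)
    lam_obs = lambda_A(X, y, A)
    count = 0
    for _ in range(N):
        y_sim = X_A @ beta_obs + sigma_obs * rng.standard_normal(n)   # simulate under the restricted model
        beta_sim, sigma_sim = theta_hat_A(X_A, y_sim)                 # re-estimate on the simulated response
        y_cal = X_A @ beta_obs + (sigma_obs / sigma_sim) * (y_sim - X_A @ beta_sim)   # calibration
        count += lambda_A(X, y_cal, A) >= lam_obs
    return count / N

# illustration on synthetic data (assumed): variable 0 is active, the rest are not
rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = 1.0 + 1.0 * X[:, 0] + rng.standard_normal(n)
print("p_hat_A for A = {0}:", simulation_calibration_pvalue(X, y, A=[0], N=200, rng=rng))
print("p_hat_A for A = {} :", simulation_calibration_pvalue(X, y, A=[], N=200, rng=rng))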
§ PROPERTIES OF THE ALGORITHM
§.§ Proof of consistency and properties for a given response vector
§.§.§ Conditional distribution of the response vector
The algorithm's consistency is based on knowing the conditional distribution of Y under the null hypothesis. It is given by the following lemma:
Under H_0 (A), the distribution of Y conditional on θ_A (Y) = θ_A (y) is the uniform distribution on the set S ( θ_A (y) ) = { y' ∈ℝ^n |θ_A (y') =θ_A (y) }.
The proof is in the supplementary material.
The definition of p_A (y) (<ref>) is based on an arbitrary random vector Y which follows this conditional distribution. Therefore, Y can be replaced by any random variable following the same distribution, given by lemma <ref>:
Any random variable Y^u that follows a uniform distribution over S ( θ_A (y) ) satisfies P(λ_A (Y^u) ≥λ_A (y)) = p_A (y).
§.§.§ Calibration and simulation under the conditional distribution
Corollary <ref> implies that to estimate p_A (y) by Monte Carlo, one just has to simulate random response vectors uniformly over S ( θ_A (y) ). Now, we show that step 2. of algorithm <ref> does this.
We use calibration functions, which we define as follows. For any parameter vectors θ_A^(1) = (β_A^(1), σ_A^(1)) ∈Θ_A^* and θ_A^(2) = (β_A^(2), σ_A^(2)) ∈Θ_A:
cal_θ_A^(1)→θ_A^(2) : ℝ^n → ℝ^n
y' → X_A β_A^(2) + σ_A^(2)/σ_A^(1)(y' - X_A β_A^(1)).
They map one given value of θ_A to another, that is:
For all θ_A^(1)∈Θ_A^*, θ_A^(2)∈Θ_A, for all y^(1)∈ℝ^n satisfying θ_A (y^(1)) = θ_A^(1), we have:
θ_A( cal_θ_A^(1)→θ_A^(2) (y^(1)) ) = θ_A^(2).
The proof is in the supplementary material.
Calibration functions are a way to "impose" on a response vector y' the condition that θ_A (y') = θ_A^(2) for any target parameter vector θ_A^(2). As a composition of a homothety and a translation, they can be seen informally as the simplest change one can make to y' while imposing this condition. In particular, the empirical correlation structure between y' and variables not belonging to A varies little between y' and its calibrated version: it is only modified to the extent that it is carried by the variables in A.
Calibration enables simulating the uniform distribution over S ( θ_A (y) ):
Step 2. of algorithm <ref> simulates each y^(l) following the uniform distribution over S ( θ_A (y) ).
By replacing Y and θ_A (y) in lemma <ref> with y^sim and θ_A (y^sim), we find that the distribution of y^sim conditional on θ_A (y^sim) is uniform over S (θ_A (y^sim)). Furthermore, conditional on θ_A (y^sim),
cal_θ_A (y^sim) →θ_A (y): y' → X_A β_A (y) + σ_A (y)/σ_A (y^sim)(y' - X_A β_A (y^sim) )
is the composition of a homothety and a translation. These transformations preserve the uniformity of a probability distribution. Also, according to lemma <ref>, it maps S ( θ_A (y^sim) ) onto S ( θ_A (y) ). Therefore, conditional on θ_A (y^sim), y^(l) follows the uniform distribution over S ( θ_A (y) ). By integration, without conditioning y^(l) follows the uniform distribution over S ( θ_A (y) ).
§.§.§ Distribution of the empirical conditional p-value
The preceding lemmas give us the distribution of p_A (y) produced by the algorithm. It is the binomial distribution:
N p_A (y) ∼Bin (N, p_A (y)).
For each l, y^(l) follows the uniform distribution over S ( θ_A (y) ). Therefore, according to corollary <ref>, for each simulation:
P ( λ_A (y^(l) ) ≥λ_A (y) ) = p_A (y).
N p_A (y) is thus the sum of N independent binary random variables with mean p_A (y).
p_A (y) is therefore an unbiased and, due to the law of large numbers, consistent estimator of p_A (y). Its variance is Var( p_A (y) ) = p_A (y)(1-p_A (y))/N, which converges to 0 as N goes to infinity.
§.§ Properties for a random response vector
The above applies to a given response vector y. In the case of a random response vector Y that follows model <ref> (or model <ref> under the null hypothesis), it is interesting to know the distribution of p_A (Y). That is because we do not observe the theoretical test statistic p_A (Y): we only estimate it. The decision to select or not a variable is based on p_A (Y), and any control of the type I error must be inferred from the distribution of p_A (Y) under the null hypothesis. We determine this distribution exactly under the continuity assumption defined in <ref>, and then we infer a stochastic dominance of this distribution in the general case which guarantees the type I error control.
The relationship between p_A (y) and p_A (y) (lemma <ref>) allows us to translate results about the distribution of p_A (Y) to that of p_A (Y), moving from a continuous to a discrete distribution. We have two results with and without the continuity assumption:
Under H_0 (A) and under the continuity assumption, the distribution of p_A (Y) is the discrete uniform distribution over {0, 1/N, .., 1-1/N, 1}.
Under H_0 (A), the distribution of p_A (y) stochastically dominates the discrete uniform distribution over {0, 1/N, .., 1-1/N, 1}.
These lemmas are analogous to (and derived from) corollary <ref> and lemmas <ref> respectively. Their proofs are in the supplementary material; that of lemma <ref> uses lemma <ref> and a criterion for stochastic dominance.
Lemma <ref> allows us to control the type I error: for all α∈ [0,1], producing a p_A (y) and then rejecting H_0 (A) if and only if p_A (y) ≤α ensures that the type I error is less than or equal to (⌊α N⌋ + 1)/(N + 1) ≤α + (1 - α)/(N + 1). Conversely, rejecting H_0 (A) if and only if p_A (y) ≤α - (1 - α)/N ensures a type I error less than or equal to α.
The residual terms (1 - α)/(N + 1) and -(1 - α)/N are very small (smaller than the granularity of estimation of p_A (y), which is 1/N) and can be controlled by the user since N can be chosen as large as desired (at the cost of computational time). It is also possible to adapt the definition of p_A to formally eliminate these residual terms. With:
p_A^+ (y) = 1/(N+1)( 1 + ∑_l=1^N 1{λ_A (y^(l) ) ≥λ_A (y) }),
the rejection criterion p_A^+ (y) ≤α ensures error control at level α. However, p_A^+ (y) is a biased estimator of p_A (y).
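The Monte Carlo estimators p̂_A and p̂_A^+ then only require the statistic λ_A, i.e. the largest penalty at which the Lasso selects a variable outside A. A possible sketch using scikit-learn is given below; sklearn's alpha grid differs from λ by a constant scaling, which is immaterial since only the comparison λ_A(y^(l)) ≥ λ_A(y) is used. Function names are ours, and `simulate_calibrated` is the helper sketched earlier.

```python
import numpy as np
from sklearn.linear_model import lasso_path

def lambda_A(y, X, A):
    """Largest penalty at which a variable outside A enters the Lasso path."""
    alphas, coefs, _ = lasso_path(X, y)                    # penalties, decreasing by default
    outside = np.setdiff1d(np.arange(X.shape[1]), A)
    is_active = (np.abs(coefs[outside, :]) > 0).any(axis=0)
    return alphas[is_active].max() if is_active.any() else 0.0

def empirical_p_value(y, X, A, N=100, seed=0):
    """Returns (hat p_A(y), hat p_A^+(y))."""
    rng = np.random.default_rng(seed)
    lam_obs = lambda_A(y, X, A)
    lams = [lambda_A(simulate_calibrated(y, X[:, A], rng), X, A) for _ in range(N)]
    hits = int(np.sum(np.asarray(lams) >= lam_obs))
    return hits / N, (1 + hits) / (N + 1)
```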
§ CASE OF GENERALIZED LINEAR MODELS
§.§ Problem statement
Our algorithm is valid in the linear model. We propose an adaptation of it to two discrete generalized linear models: the binary model y|X ∼Bernoulli( f(X β) ), and the Poisson model y|X ∼Poisson( f(X β) ). The link function f is the logistic function in the binary model and the exponential in the Poisson model. Unlike the linear one, these two models do not have a standard deviation σ, therefore we write β_A or β_A instead of θ_A or θ_A.
As in the linear case, we assume that the variables X_j, j ∈ A are active, and we want to know whether any variable outside of A is active. The hypotheses H_0 (A), H_1 (A), and the statistic λ_A (y), are defined the same way. Again, ideally we would like to estimate p_A^0 (y) = P_H_0 (A)( λ_A (Y) ≥λ_A (y) ), which is not possible because the distribution of Y under H_0 (A) is not known. As in the linear case, we propose an algorithm that computes p_A (y), an estimation of a statistic that approximates p_A^0 (y). This algorithm is similar to the algorithm <ref>, differing only in its calibration method. However, the statistic estimated by p_A (y) in the generalized linear models is not the same as in the linear case and does not have its theoretical properties.
The solution we chose in the linear case was to estimate the conditional p-value, p_A (y), defined in <ref>. Unfortunately, in discrete models this quantity is no longer informative because the set E of values that y can take is much more restricted: it is finite in the binary model (E = {0,1}^n) and countable in the Poisson model (E = ℕ^n) while the set Θ_A = ℝ^|A| of possible parameters is uncountable. One can generally expect the estimator function β_A, which sends a finite or countable set to an uncountable set, to be injective: only one vector, y, should produce exactly the estimate β_A (y). Therefore, the condition β_A (Y) = β_A (y) implies Y = y, hence λ_A (Y) = λ_A (y), so necessarily p_A (y) = 1.
However, if n is large enough, the number of values that y (and thus β_A (y)) can take is very high (2^n in the binary model). So even if only y' = y satisfies β_A (y') = β_A (y), a large number of distinct vectors y' can satisfy β_A (y') ≈β_A (y). Therefore, the exact conditioning can — informally — be replaced by a "conditioning by β_A (Y) ≈β_A (y)". More formally, this amounts to using a probability distribution P_β_A (y) on the set E, producing random vectors Y compatible with H_0(A) and which satisfy β_A (Y) ≈β_A (y). This last condition is quantifiable in terms of mean square error:
MSE( P_β_A (y)) = E_Y ∼P_β_A (y)[ ( X_A β_A (Y) - X_A β_A (y) )^2 ]
which must be very close to 0. We will then estimate:
p_A (y) = P_Y ∼P_β_A (y)(λ_A (Y) ≥λ_A (y) )
which is the approximate equivalent in discrete models of the test statistic p_A (y) of the linear model.
In the linear model, we simulated vectors y^(l) following the distribution produced by conditioning on λ_A (Y) = λ_A (y) (a distribution we explicitly determined) and then used the population of y^(l) to estimate p_A(y) via Monte Carlo. We simulated vectors following this distribution by using a calibration function on vectors that were first simulated following the linear model under the null hypothesis. Similarly, in discrete models, we propose to simulate y^(l) following a distribution P_β_A (y) that meets the conditions above but is not explicitly determined, and to use these y^(l) to estimate p_A (y) via Monte Carlo. The simulation of vectors following P_β_A (y) is also carried out by applying a calibration procedure to vectors simulated according to the generalized linear model under the null hypothesis. This calibration is, however, more complex than that of the linear model.
§.§ Calibration in nonlinear models
Given two parameter vectors β_A^(1), β_A^(2)∈ℝ^n, we want a calibration procedure from β_A^(1) to β_A^(2) such that:
[Desirable properties of calibration]
For every y^(1)∈ E, denoting y^(2) as its calibrated version:
<ref>.1.y^(2) should be "close" to y^(1) to preserve as much as possible its correlation structure with variables outside of A.
<ref>.2.If β_A (y^(1)) = β_A^(1) then
β_A ( y^(2) ) ≈β_A^(2).
Condition <ref>.2 is the informal adaptation of the formal lemma <ref> satisfied by the linear calibration. The equality between parameters, which is a property too strong in discrete models, is replaced by an approximation.
In linear models, calibration is a function cal_β_A^(1)→β_A^(2) : E → E involving multiplication by and addition of non-integer constants. In discrete models, this is not possible because the calibration must produce a vector of integer numbers. Our solution is to replace deterministic non-integers with random integers having the same expected value. Therefore, calibration in nonlinear models is a random procedure described by the conditional distribution of the calibrated vector given the initial vector: Y^(2) | Y^(1)∼P_β_A^(1)→β_A^(2).
§.§.§ One-step calibration algorithm in nonlinear models
Below we present a one-step calibration procedure, which follows a conditional distribution denoted P^(1)_β_A^(1)→β_A^(2). It is a step of the complete procedure that simulates vectors following P_β_A^(1)→β_A^(2). In both generalized linear models, it is based on the prediction vectors produced by the initial and target parameters: e^(1) = f ( X β_A^(1)) and e^(2) = f ( X β_A^(2)).
§.§.§ Properties of the one-step calibration algorithm
The two non-exclusive scenarios of the binary case are consistent because if e^(1)_i = e^(2)_i then Y^(2)_i = Y^(1)_i regardless of the formula used. In both models, E[Y^(2) | Y^(1)] = Z and if β_A^(1) = β_A^(2) (the case where no calibration is necessary) then e^(1) = e^(2) and Y^(1) = Z = Y^(2).
If an individual i satisfies e^(1)_i ≈ e^(2)_i, then the probability that Y^(2)_i = Y^(1)_i is high. If β_A^(1)≈β_A^(2), this is usually true, thus, as desired (condition <ref>.1.), calibration does not modify much of Y^(1).
Furthermore, the calibration "transforms a vector following the parameters β_A^(1) into a vector following the parameters β_A^(2)" in the sense that, with Y^(1) being random:
E[Y^(1)] = e^(1)E[Y^(2)] = E[Z ] = e^(2).
However, this property does not imply condition <ref>.2., which is about the estimates β_A (Y^(1)) and β_A (Y^(2)), or, equivalently, the prediction vectors they produce: ê^(m) = f ( β_A (Y^(m)) ), m = 1 or 2. The condition can be verified in the simple case where the model is reduced to an intercept, that is, A = ∅. The prediction vectors e^(1) and e^(2) are then constants.
In single-parameter models, if β_A (Y^(1)) = β_A^(1) that is, ê^(1) = e^(1), then:
* E[ ê^(2) | Y^(1)] = e^(2)
* Var( ê^(2) | Y^(1)) ≤1/n| e^(2) - e^(1)| with equality in the binary model.
We prove it in the supplementary material. In single-parameter models, P^(1)_β_A (Y^(1)) →β_A^(2) thus produces a distribution of Y^(2) where ê^(2) is distributed around e^(2) with low variance. Therefore, by applying f^-1, β_A (Y^(2)) is distributed around the target value β^(2) with low variance. This result, combined with the parameter transfer property from β^(1) to β^(2) (<ref>) which is valid in multi-parameter models, suggests that the distribution of β_A (Y^(2)) under P^(1)_β_A (Y^(1)) →β_A^(2) has a similar property in multi-parameter models.
§.§.§ Iterative calibration algorithm
The variance inequality of lemma <ref> indicates that, at least in single-parameter models, ê^(2) (or β_A (Y^(2))) approaches its target with greater precision when the initial vector (ê^(1) or β_A (Y^(1))) is already closer to it. It is therefore useful to reapply the calibration procedure following a distribution P^(1)_· →β_A^(2) to an already calibrated vector. We thus define P_β_A^(1)→β_A^(cal), the conditional distribution describing the complete calibration procedure, via the following simulation algorithm.
§.§ Simulation-calibration testing algorithm in generalized linear models
With the calibration algorithms <ref> and <ref>, we can propose the algorithm for generating the empirical conditional p-value p_A (y) in binary and Poisson generalized linear models. It is similar in form to algorithm <ref> for linear models, with two main differences:
* theoretically, the quantity estimated by p_A (y) is not the same conditional p-value, p_A(Y) instead of p_A(Y) (see <ref>);
* algorithmically, the calibration step is more complex, using the sub-algorithms <ref> and <ref>.
p_A (y) is intended to be interpreted the same way in linear and generalized linear models. However, the properties relating to its distribution under the null hypothesis (lemmas <ref> and <ref>) are only demonstrated in the linear case. The construction of p_A (y) in generalized linear models is designed so that these properties are approximately valid there.
§ VARIABLE SELECTION PROCEDURE
The simulation-calibration test measures the significance of a covariate selected by the Lasso. When no active variables are known a priori and a complete model needs to be selected, it is necessary to use the test iteratively.
§.§ Notations and algorithm
For every k ∈{1, .., p}, let j_k be the index of the k-th variable selected by the Lasso and let A_k = { j_1, .., j_k } be the set of the first k variables selected, with A_0 = ∅. Each j_k is the first variable selected outside the set A_k-1, that is, following the notation introduced in <ref>, j_k = j_A_k-1. This is a slight variant of the Lasso path where variables do not "reenter": if there is a λ such that β_j^Lasso (λ) ≠ 0 (which translates into a k such that j ∈ A_k), then by definition j belongs to all A_k', k' > k even when there are λ' < λ such that β_j^Lasso (λ') = 0.
The simulation-calibration test of H_0 (A_k-1) thus measures the significance of the variable j_k. Furthermore, since variables do not "reenter" the sequence of sets, the null hypotheses H_0 (A_k) become progressively weaker: if k < k', then A_k ⊂ A_k' and since the hypotheses concern the complements of these sets (on which they assert the nullity of β), H_0 (A_k) implies H_0 (A_k').
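The sequence j_1, j_2, … and the nested sets A_k can be read directly off the Lasso path; a sketch using scikit-learn (whose path is computed on a grid of penalties, decreasing by default) is given below. Only the first activation of each variable is recorded, which implements the "no re-entry" convention; the function name is ours.

```python
import numpy as np
from sklearn.linear_model import lasso_path

def lasso_entry_sequence(X, y):
    """Indices j_1, j_2, ... ordered by first activation along the Lasso path;
    A_k is then simply the set of the first k entries."""
    alphas, coefs, _ = lasso_path(X, y)        # penalties from largest to smallest
    order = []
    for k in range(len(alphas)):
        for j in np.flatnonzero(coefs[:, k]):
            if j not in order:                 # ignore later re-activations
                order.append(int(j))
    return order
```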
§.§ Choice of the halting criterion
In this procedure, we carry out several hypothesis tests. However, since the procedure halts at the first test where the null hypothesis is not rejected, it cannot be considered a multiple testing procedure per se, in which a potentially high number of tests satisfy their null hypothesis. Therefore, it is not advisable to make the rejection threshold more stringent as done by procedures like Bonferroni and Benjamini and Hochberg <cit.>, which are used to control the risks of false positives deriving from the large number of p-values generated under the null hypothesis.
However, it is possible to adapt the halting criterion of the procedure to the fact that it evaluates a sequence of ordered tests. In the general problem of a sequence of ordered tests whose p-values p_1, .., p_m are measured, <cit.> proposed the ForwardStop criterion. This rejects the first k̂_F tests where:
k̂_F = max{ k ∈{1, .., m } : p_k^FS≤α}
p_k^FS = -1/k∑_i = 1^klog(1 - p_i).
This criterion has the advantage that, in the vast majority of cases, one can compute it knowing only the first values of the sequence (p_k), because the sequence (p_k^FS) is calculated from p_i, i ≤ k, and it generally increases with k. In practice, when we apply ForwardStop, we select the first k̂_F' variables, where:
k̂_F' = min{ k ∈{1, .., m } : p_k^FS > α} - 1
which amounts to replacing the condition to continue the algorithm at step 2 with p_k^FS≤α.
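A direct implementation of ForwardStop (and of the practical stopping variant just described) could look as follows; the clipping of p-values away from 1 only guards against log(0) and is our own safeguard.

```python
import numpy as np

def forward_stop(p_values, alpha):
    """Returns k_hat_F = max{k : p_k^FS <= alpha} (0 if the set is empty) and the
    sequence p_k^FS, with p_k^FS = -(1/k) * sum_{i<=k} log(1 - p_i)."""
    p = np.clip(np.asarray(p_values, dtype=float), 0.0, 1.0 - 1e-12)
    p_fs = np.cumsum(-np.log1p(-p)) / np.arange(1, p.size + 1)
    below = np.flatnonzero(p_fs <= alpha)
    return (int(below.max()) + 1 if below.size else 0), p_fs

# The practical variant stops at the first k with p_k^FS > alpha and keeps k - 1
# variables, which coincides with k_hat_F whenever p_k^FS is increasing in k.
```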
In contrast to ForwardStop, the simple criterion p_k ≤α is called thresholding. Thresholding controls both the FWER and the FDR at level α <cit.>, while ForwardStop, generally less conservative, controls the FDR at level α <cit.>.
These control results apply to the type I error, that is, incorrect rejection of the null hypothesis when it is true. However, in the sequential procedure, we often test H_0 (A) hypotheses that are not true: notably at the first step of the procedure, where A = ∅ and it is sufficient that there is an active variable for H_0 (A) to be incorrect. It is possible that there are active variables outside of A but that they are selected by the Lasso only at low values of λ and that the first variable selected outside of A is an inactive variable. In this case, retaining this variable constitutes a false positive from the perspective of variable selection, but not a type I error since H_0 (A) is not true. Section <ref> presents a result controlling this occurrence.
§ EXTENDED THEOREM: CONTROL OF THE SELECTION ERROR
We divide the set of covariates into three disjoint subsets: {1, .. ,p } = A ∪ B ∪ C with ∀ j ∈ C, β_j = 0.
We perform the simulation-calibration test of the hypothesis H_0 (A) with the goal of variable selection, that is, if the test rejects H_0 (A), we select the (almost certainly unique) variable j_A (Y) ∈ B∪ C such that β_j_A (Y)^Lasso (λ) ≠ 0 for λ near λ_A. This selection is a true positive if j_A (Y) ∈ B and a false positive if j_A (Y) ∈ C.
Since H_0 (A) is not true (because there may exist j ∈ B such that β_j ≠ 0), the results from the previous sections do not apply. However, we have the following result:
Assume that the active variables are orthogonal to the inactive variables, that is:
X_C^T X_A∪ B=0.
Then, the simulation-calibration test of H_0 (A) at level α has a probability less than α + (1 - α)/(N+1) of selecting a false positive.
We prove it in the supplementary material.
Like under the null hypothesis (section <ref>), this result can also be understood as controlling at the level α the risk of selecting a false positive if the selection criterion is p_A (y) ≤α - (1 - α)/N.
The theorem renders applicable the properties of control of the FWER and FDR by the variable selection procedure with thresholding and ForwardStop respectively, provided that the active variables are orthogonal to the inactive variables.
§ SIMULATION STUDIES
To assess the performance of the test by simulation-calibration, we conducted two simulation studies.
First, we measured the distribution of the p-value generated under H_0 (A) in a large number of different scenarios. By construction, this distribution is supposed to be the uniform on [0,1]. More precisely, we proved in Lemma <ref> that in the linear model, under the continuity assumption, p_A (Y) follows the uniform distribution on a discrete analogue of [0,1], and in Lemma <ref> that by relaxing the continuity assumption, in the linear model p_A (Y) stochastically dominates the uniform distribution. However, we do not have equivalent theorems in generalized linear models, although the properties of nonlinear calibration demonstrated in section <ref> suggest that a distribution close to the uniform is also to be expected in these models.
We also measured the performance of the variable selection procedures, with both the thresholding and ForwardStop halting criteria, using three usual metrics of variable selection: family-wise error rate (FWER), false discovery rate (FDR), and sensitivity. Due to the properties of thresholding and ForwardStop, we expect control of the FWER with the former and of the FDR with the latter, at least under the conditions where the extended theorem applies: in the linear model and with no correlation between covariates. Additionally, we compared these performances with those of an equivalent procedure based on the CovTest by <cit.>.
§.§ Simulation plan
The following simulation plan is shared by the study of the p-value's distribution under the null and by that of the variable selection procedure. We simulate n_sim = 500 data sets for each of the 252 parameter sets (or scenarios). In all cases, the number of observations is n = 1000 and the number of covariates is p = 500. The parameters which vary across scenarios are:
* The type of model: linear, binary with dense data, binary with sparse data, or Poisson. In binary models, dense data means that E[Y|X = 0] = 0.5 and sparse data that E[Y|X = 0] = 0.1, with the value of β_0 distinguishing these two cases.
* The correlation matrix used to simulate the regressors: a Toeplitz matrix of coefficients ρ_(i,j)=ρ^| i - j |, with ρ=0, ρ=0.9, or ρ=0.99. These high values are used by <cit.> in their simulation study.
* The number of active variables: 0, 1, 2, 5, or 10. They are drawn uniformly among the 500 covariates.
* For scenarios with at least one active variable, the signal-to-noise ratio (SNR). This quantity is inspired by the SNR used by Sabourin et al. Unlike them, we use an empirical version which is defined for each generalized linear model. This is the ratio between the empirical variance of the signals E_β[Y_i | X], i = 1, ... , n, and the empirical mean of the variances of the noises Var_β(Y_i | X), i = 1, ... , n:
SNR(X, β) = [ 1/(n-1)Σ_i=1^n ( E_β[Y_i | X] - 1/nΣ_i=1^n E_β[Y_i | X] )^2 ] / [ 1/nΣ_i=1^n Var_β(Y_i | X) ].
A higher SNR means that the impact of each active variable on Y is more easily observable, which should make variable selection more effective. Scenarios with 0 active variables necessarily have a zero signal-to-noise ratio. We set SNR(X, β) = 1, 0.3, 0.1, 0.03, or 0.01 (a computational sketch of this empirical SNR is given after this list).
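A sketch of the empirical SNR for the three model families is given below; the handling of the intercept β_0 and of the linear-model noise variance σ² reflects our own assumptions about conventions not spelled out here.

```python
import numpy as np

def empirical_snr(X, beta, model, beta0=0.0, sigma2=1.0):
    """Empirical variance of the conditional means E[Y_i|X] divided by the
    empirical mean of the conditional variances Var(Y_i|X)."""
    eta = beta0 + X @ beta
    if model == "linear":
        mean, var = eta, np.full(eta.shape, sigma2)
    elif model == "binary":                    # logistic link
        mean = 1.0 / (1.0 + np.exp(-eta))
        var = mean * (1.0 - mean)
    elif model == "poisson":                   # exponential link
        mean = np.exp(eta)
        var = mean
    else:
        raise ValueError("unknown model: " + model)
    return np.var(mean, ddof=1) / np.mean(var)
```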
§.§ p-value under the null hypothesis
To verify that the test follows its expected behavior under the null hypothesis, we assume that the set A of 0 to 10 active regressors is known and we test H_0 (A). For each scenario, in each of the n_sim = 500 data sets s characterized by A_s, X_s, and Y_s, we produce a p̂_s = p_A_s (Y_s, X_s) using the simulation-calibration algorithm based on N = 100 simulations of calibrated response vectors.
In one of the scenarios (linear model, ρ = 0.99, 1 active regressor, SNR = 1), as an example, we also produced in each of the 500 data sets a naive estimate of the unconditional p-value p_A^0 (y) (see section <ref>). It is obtained by Monte Carlo without calibration, i.e., by applying algorithm <ref> while skipping the calibration step (2.3.) and taking y^(l) = y^sim with θ_A^sim = θ_A (y). This is intended to illustrate the impact of calibration on the distribution of the p-values.
In each scenario, we observe whether the population of ( p̂_s )_1 ≤ s ≤ n_sim is distinguishable from a sample drawn from the uniform distribution on [0,1]. Graphically, agreement or disagreement with the uniform distribution is observed on quantile-quantile (q-q) plots where for each s = 1, .., n_sim, the s-th smallest value p̂_(s) is represented at the coordinates ( s/n_sim, p̂_(s)).
The diagrams in the supplementary material illustrate the necessity of the calibration step for the empirical p-values produced to be valid. In this example (linear model, ρ = 0.99, 1 active variable, SNR = 0.1), p-values produced by simulation-calibration are compatible with the uniform distribution. In contrast, p-values produced without calibration deviate significantly, the smallest p-value among 500 being 0.13. At usual test levels, false positives are thus practically impossible instead of being possible with a controlled probability, suggesting a very low power of the test. At higher test levels, type I error is no longer controlled.
To systematically evaluate the adequacy of the p̂_s to the uniform distribution across all scenarios, we used the Kolmogorov-Smirnov (K-S) test on the population of ( p̂_s )_1 ≤ s ≤ n_sim in each scenario. We conducted the test in its two-sided version, where the null hypothesis is that the population considered is a sample from the target distribution (here the uniform distribution on [0,1]), and in one of its one-sided versions having a less strict null hypothesis: that the population is sampled from a distribution which stochastically dominates the target distribution. We are interested in this one-sided test because, as seen in the conclusion of section <ref>, basing a test on an empirical p-value whose distribution dominates the uniform distribution on [0,1] allows control of its type I error.
Results vary across the model type: linear, dense binary, sparse binary, or Poisson. For each of the four types and each variant of the Kolmogorov-Smirnov test, we used the Bonferroni correction on all the p-values of the 63 K-S tests applied to the scenarios of that model type.
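The per-scenario check can be sketched as follows with scipy. Only the two-sided test is shown; the one-sided dominance check uses the `alternative` argument of `scipy.stats.kstest`, whose orientation should be taken from the scipy documentation. The function name and the Bonferroni handling are ours.

```python
from scipy import stats

def ks_uniform_bonferroni(p_hats, n_tests=63):
    """Two-sided KS test of the empirical p-values against U[0,1],
    returned together with its Bonferroni-corrected version."""
    stat, p_value = stats.kstest(p_hats, "uniform")   # U[0,1] by default (loc=0, scale=1)
    return stat, p_value, min(1.0, n_tests * p_value)
```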
In the linear and dense binary models, the bilateral K-S tests do not reject the hypothesis that each population of p-values follows a uniform distribution over [0,1]. The smallest p-value is 0.0112 among the linear scenarios and 0.0546 among the balanced binary scenarios, which, combined with the Bonferroni correction for 63 tests, does not lead to rejection at usual test levels (63 × 0.0112 = 0.706). Figure <ref> illustrates this general adequacy with the uniform distribution across eight example scenarios.
In the sparse binary model, there are scenarios where the distribution of generated p-values significantly deviates from the uniform distribution and does not dominate it. At the 0.1 level and with the Bonferroni correction, the null hypothesis of the K-S test (unilateral as well as bilateral) is rejected in two out of 63 scenarios: those with the maximum correlation among covariates (ρ = 0.99), the maximum number of active regressors (|A| = 10), and a high signal-to-noise ratio (SNR = 0.3 or SNR = 1). The p-values of the bilateral K-S test in these scenarios are respectively p = 2.86 × 10^-6 = 1.81 × 10^-4 / 63 and p = 2.55 × 10^-7 = 1.61 × 10^-5 / 63. Despite this lack of fit, the q-q plots (Figure <ref>, which includes the SNR = 1 scenario where the deviation from the uniform distribution is most pronounced) show that P_H_0 (A)(p̂_A ≤α) ≤α at low values of α, the opposite occurring only when α exceeds about 0.4. Thus, in practice, we have a more conservative control of the type I error than the nominal level at usual test levels. In the most divergent scenario, the diagram even shows a type I error significantly lower than its expected level, signaling a loss of test power. However, this effect is considerably less pronounced than in the uncalibrated simulation example.
In the Poisson model, the unilateral K-S tests do not reject the hypothesis that each population of p-values stochastically dominates the uniform distribution over [0,1] (the minimum p-value across unilateral tests is 0.0214 = 1.35/63). Thus, FWER control is at least as conservative as the nominal level at all test levels. However, at the 0.1 level and with the Bonferroni correction, the bilateral K-S tests reject the hypothesis of a uniform distribution of p-values in 5 out of 63 scenarios. These scenarios present a different profile from those where deviations from the uniform distribution were observed in the sparse binary case: 4 out of 5 are scenarios with zero correlation, all have an SNR of 0.1 or less, and the minimum p-value (p = 1.79 × 10^-6 = 1.13 × 10^-4 / 63) is reached at zero correlation, minimum signal-to-noise ratio (SNR = 0.01), and |A| = 10 known active regressors. Moreover, as seen in Figure <ref>, where the scenario with the minimum K-S test p-value appears, even then the deviation from the uniform is small, especially at the small quantiles. The actual level of the type I error is therefore very close to its nominal level at its usual values.
§.§ Variable selection procedure
In this simulation study, unlike in section <ref>, we did not assume the set of active regressors to be known. This set is estimated by the sequential procedure described in algorithm <ref>. The number of simulations in the calculation of empirical p-values is higher, N = 500, which allows for a more precise estimation of p-values.
At first, we applied the algorithm with the thresholding halting criterion at α = 0.95, a high level that produces on each dataset a relatively complete sequence of variables that may be selected, along with their associated p-values. In a second step, we used the variable selection procedure on these sequences of p-values for each α in a relatively dense grid of values (from 0.01 to 0.5, with a step of 0.01), with both thresholding and ForwardStop. In this second step, we observe the procedure's performance on a large number of values of α in a computationally affordable manner since it does not require recomputing the empirical p-values through simulation-calibration, as this was done in the first step.
§.§.§ Control of FWER and FDR
Figures <ref> and <ref> show how the FWER and FDR of the thresholding and ForwardStop procedures vary with α in the same 8 example scenarios as in Figure <ref>: 10 active variables, either zero or maximal correlation between covariates, minimal or maximal SNR, linear or dense binary model. In the supplementary material, Figures 1 and 2 show the variation of the FWER and FDR of both procedures with α in the 8 scenarios of Figure <ref> (sparse binary and Poisson models with the same other parameters), while Figures 3 and 4 display the variations of their sensitivity in those 16 scenarios.
Although theoretical results on controlling the FWER or FDR are valid only in the linear model, the shape of these curves varies little across model types. In scenarios where it is easiest to detect a signal, i.e., those with high SNR and especially among those, the uncorrelated scenarios, it is observed that the ForwardStop criterion is significantly less conservative than thresholding at an equal α, with higher FWER and FDR. This is explained by a substantial number of small p-values among the first values of the sequence (p_k), corresponding to easily detected variables, which, by averaging, pull down the quantity p_k^FS at higher k, making it significantly lower than p_k.
In all uncorrelated scenarios, as expected, FWER control is very close to α with the thresholding criterion (blue curves in Figure <ref> are close to or below the bisectors). Additionally, in these scenarios, we observe FDR control at a level close to or lower than α with the ForwardStop criterion (red curves in Figure <ref>). Due to the more conservative nature of thresholding compared to ForwardStop, thresholding also controls the FDR below α in uncorrelated scenarios, but ForwardStop does not control the FWER. The gain in sensitivity of ForwardStop compared to thresholding is modest, reaching a maximum of 0.127 across all 252 scenarios.
On the other hand, in scenarios with many active variables and strong correlation among covariates, neither the FWER nor the FDR is controlled by either halting criterion. The strong correlation between active and inactive variables often allows a nominally inactive variable to be selected by Lasso before the corresponding active variable. The selected variable can then have a statistically significant association with the response which is a manifestation of the actual association between the active variable and the response. This cannot be attributed to the nominally active variable, because it is not known and not detected by the Lasso. The status of these "false positives" carrying a statistical signal has been discussed in the literature <cit.>.
§.§.§ Comparison with Lockhart et al.'s Covariance Test
To evaluate the sensitivity of the variable selection procedure via simulation-calibration, it is useful to compare it to another selection method aiming for the same objectives. This competitor is the covariance test (CovTest) by <cit.>, which, like the simulation-calibration test, measures the significance of variables entering the Lasso path by assigning each one a p-value.
For reasons related to the implementation of the CovTest, we focused on the 63 linear model scenarios. We used the CovTest with the equivalent of algorithm <ref>, with the thresholding halting criterion at α = 0.05, which is supposed to control the FWER at this level. Figure <ref> shows the FWER and sensitivity of both methods across all linear model scenarios.
We observe a failure to control the FWER in correlated scenarios by both CovTest and simulation-calibration (points ∘ and × in figure <ref>). In these scenarios, simulation-calibration tends to produce false positives more frequently than the CovTest when the signal-to-noise ratio (SNR) is low, but less frequently when it is high (see the impact of SNR in figure 6 of the supplementary material). However, the FWER is controlled at 5% or less in the 21 linear scenarios with no correlation, both by the CovTest and by simulation-calibration (points ∙), but with a clear difference between the two methods.
The FWER observed with the CovTest selection procedure falls well below its nominal level in certain scenarios, even reaching 0 — meaning no false positives observed in 500 simulations — in the scenario with 10 regressors, SNR = 0.3.
In contrast, the FWERs observed with the simulation-calibration procedure across the 21 uncorrelated scenarios all hover close to their nominal levels, ranging from 0.042 to 0.072. This implies that the number of simulated data sets on which the procedure produces at least one false positive ranges between 21 and 36 out of 500. Given the Bonferroni correction, this is consistent with the hypothesis that the true FWER is equal to α in each of these scenarios, i.e., that the number of data sets exhibiting at least one false positive in each scenario follows a Binomial(500, 0.05) distribution. Indeed, the p-value associated with the largest of the 21 observed FWERs is 0.0196 = 0.412 / 21.
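The consistency check mentioned above reduces to a binomial tail probability, e.g. as sketched below; the numerical value should be close to the 0.0196 (0.412 after Bonferroni correction) reported in the text. Variable names are ours.

```python
from scipy import stats

n_sim, alpha, n_scenarios = 500, 0.05, 21
worst_fwer = 0.072                                   # largest observed FWER
count = round(worst_fwer * n_sim)                    # 36 datasets with at least one false positive
p_single = stats.binom.sf(count - 1, n_sim, alpha)   # P(Bin(500, 0.05) >= 36)
p_bonferroni = min(1.0, n_scenarios * p_single)
```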
This stronger conservatism of the CovTest procedure in uncorrelated scenarios results in notable gaps in sensitivity to the advantage of the simulation-calibration procedure. The sensitivity difference is positive or zero in each of these 21 scenarios, exceeds 0.1 in 6 of them (all with at least 5 active regressors and an SNR of at least 0.1), and reaches a maximum of 0.388 in the scenario with 10 regressors, SNR = 0.3. Therefore, the simulation-calibration test represents an improvement over the CovTest under the ideal condition of no correlation between covariates, with substantial gains in sensitivity allowed by a controlled increase in FWER that does not significantly exceed its nominal level.
§ APPLICATION TO PHARMACOVIGILANCE DATA
We illustrated the sequential variable selection procedure by using it on data from the national pharmacovigilance database (BNPV). We used the same data preprocessing as described in <cit.>, producing a database of n = 452914 spontaneous notifications of adverse drug reactions from January 1, 2000, to December 29, 2017, with 6617 distinct adverse events (coded at the Preferred Term level of the Medical Dictionary for Regulatory Activities, MedDRA) and p = 1692 distinct drugs (coded at the 5th level of the Anatomical Therapeutic Chemical hierarchy) reported at least 10 times. We focused on a binary outcome, the adverse event "Drug-Induced Liver Injury" (DILI). It is one of the most frequent adverse events with 25187 occurrences, accounting for 5.56% of all spontaneous notifications. We used a logistic regression model on the drug exposures, which are binary covariates. To reduce computation time, we used only N = 50 simulations in the computation of empirical p-values.
When a variable is selected with a positive coefficient estimate, we considered it to be a pharmacovigilance signal. To assess the performance of our method, as in <cit.>, we used the reference set DILIrank of known pharmacovigilance signals related to drug-induced liver injuries <cit.>. It includes 203 negative controls (drugs known not to be associated with DILI) and 133 positives (drugs known to be associated with DILI).
Given initial results, it appeared necessary to preprocess the data before performing the Lasso. Figure 7 in the supplementary material shows that the p-values estimated by simulation-calibration are zero for the first 14 variables on the Lasso path, then distributed around 0.1. This sudden change coincides with a peculiarity in the correlation structure of the exposure matrix X: the 14th exposure selected, trimethoprim (ATC code J01EA01), is exceptionally correlated (ρ = 0.9998) with another exposure, sulfamethoxazole (J01EC01), a drug with which trimethoprim is almost always co-prescribed.
We removed from the matrix X the 6 covariates that had a correlation of at least 0.9 with any of the variables on the Lasso path (computed before this removal), and re-ran the Lasso and the sequence of simulation-calibration tests on these purged data (p = 1686). The Lasso yields identical results since the removed variables were not selected by it. However, p-values estimated by simulation-calibration (Figure <ref>) are lower starting from the 15th variable, and no longer show a regime change at that point. They do show a comparable regime change further along the Lasso path, after the p-value associated with the 86th selected variable (P01BDXX, diaminopyrimidines). This variable is also correlated with another variable in the database (P01BD01, pyrimethamine), but at a lower level: ρ = 0.307, similar to the other correlations involving selected variables.
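This preprocessing step can be sketched as follows; the function name and the convention of keeping the path variables themselves are ours.

```python
import numpy as np

def drop_correlated_with_path(X, path_vars, threshold=0.9):
    """Remove covariates whose absolute correlation with any variable already on
    the Lasso path reaches the threshold; the path variables are kept."""
    corr = np.corrcoef(X, rowvar=False)
    path_vars = list(path_vars)
    keep = [j for j in range(X.shape[1])
            if j in path_vars
            or not np.any(np.abs(corr[j, path_vars]) >= threshold)]
    return X[:, keep], keep
```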
Figure <ref> shows that using ForwardStop instead of thresholding, in addition to resulting in less conservative selection at the same α, has a stabilizing effect: while the empirical p-values exhibit fluctuations due to the limited number of simulations from which they are estimated, the quantity of interest p^FS on which ForwardStop is based presents a much smoother profile, being calculated by averaging from the empirical p-values.
Table <ref> shows the performances of the variable selection procedure — with the thresholding or ForwardStop halting criterion, at the same level α = 0.2 — calculated from the status of drug exposures listed in the DILIrank reference set <cit.>. Only exposures where Lasso estimates a positive association with DILI are considered signals. The status of the exposures is also indicated in figures <ref> and <ref> allowing visualization of performances at other levels of α. Both approaches are markedly more conservative than the information criteria (including ForwardStop, which itself is less conservative than thresholding). This allows them to achieve a lower false positive rate at the expense of lower sensitivity. Daunorubicin (L01DB02) is the sole false positive according to DILIrank.
§ DISCUSSION
We have proposed a test for the significance of variables entering the Lasso path, which we use sequentially to select a model. It tests the null hypothesis H_0 (A), which states that a known set of variables A includes all the active variables, and it focuses on λ_A, the largest value of λ at which a variable not belonging to A is selected. Rejecting the test means selecting this variable.
We built the test to circumvent the difficulty of using λ_A directly as a test statistic. Just as a p-value is the probability under the null hypothesis that the test statistic exceeds its observed value, the statistic we consider, p_A, is a conditional probability that λ_A exceeds its observed value: conditional on the correlation structure relating the response Y to the already selected variables X_A, given by θ_A. This is why we call it a conditional p-value. We estimate it by the Monte Carlo method provided by algorithm <ref>: simulation-calibration. It involves simulating response vectors under the null hypothesis and then calibrating them on the condition on θ_A. This distributes the response vectors following the conjunction of the null hypothesis and this condition, which allows using them to estimate the conditional p-value.
This can be seen as a generalization of the permutation selection by <cit.>, which simulates a population of response vectors with the same distribution as the observed y, and considers the population of the λ at which a variable enters the Lasso path computed on these simulated vectors. In permutation selection, the focus is on the λ at which the very first variable enters the Lasso path, i.e., in our notation, λ_∅. Permutation is also a way to impose the conservation of θ_∅ = (β_0, σ̂), the empirical mean and standard deviation of y. Permutation selection resembles the test of H_0 (∅) by simulation-calibration in these aspects. Since permutation selection retains the median of λ_∅, using it to decide only on the selection or not of the first variable on the Lasso path would essentially amount to conducting this test at α = 0.5 (the only difference being in the method of calibrated simulation, permutation, or post-simulation calibration of arbitrary vectors).
We have proven in sections <ref> to <ref> the validity of our method in the case of the linear model. The conditional p-value p_A (Y) under the null hypothesis follows a distribution which stochastically dominates the uniform distribution over [0,1] (lemma <ref>), and its estimator p_A (Y) is unbiased, consistent as the number N of simulations grows (lemma <ref> and its consequences), and itself dominates the uniform distribution over the discrete set of values it can take (lemma <ref>). This means that rejecting the null hypothesis based on a threshold α on p_A (Y) controls the type I error rate at this level α, with a small residual term. From a variable selection perspective, this type I error means the selection (necessarily erroneous) of an additional variable while all active variables are already selected, i.e., belong to A. Furthermore, we have proven (section <ref> and supplementary material) that even when some active variables have not been selected, the risk of selecting an inactive variable by simulation-calibration is controlled at this same level. This latter result requires orthogonality between active and inactive covariates. The control of the error under H_0 (A) does not depend on any assumptions about the correlation structure.
In discrete generalized linear models (binary and Poisson), the theoretical framework is different since there is no conditional p-value that follows or dominates the continuous uniform distribution over [0,1]. Nevertheless, the production of an empirical p-value by simulation-calibration adapts well to these models. Theoretically, p_A (Y) is seen as the estimator of a probability whose definition approaches that of the conditional p-value (section <ref>). Practically, the "calibration" part of the algorithm becomes more complex: it is iterative and stochastic (algorithm <ref>) while linear calibration was a simple affine function. Although it is not guaranteed by a theorem, the simulation study (section <ref>) shows that the distribution of empirical p-values is indistinguishable from the uniform distribution in the majority of non-linear simulation scenarios and deviates slightly in others, without any scenario of failure to control the type I error at usual levels of α.
Simulation-calibration has the disadvantage of requiring a computation time that can be significant under certain circumstances. It involves, for each of the N simulated vectors, adjusting the unpenalized model restricted to A in order to perform the calibration, then applying the Lasso to the entire set of covariates to the calibrated vector. In nonlinear models, calibration is itself iterative and requires repeatedly readjusting the unpenalized restricted model. In the application to the BNPV data, the conditions for a very long computation time were met: large data (452914 by 1686), a binary model, and especially an A set of selected variables that reaches a significant size (over 100). It was therefore necessary to repeatedly adjust relatively large-dimensional logistic models without penalty. By contrast, the simulation study was carried out with a maximum of 10 active variables, so the number of selected variables did not exceed this order of magnitude, which limited the computation time.
Our test is comparable, in its objectives, to the covariance test (CovTest) of <cit.>, which is also a test of significance for a variable preselected by the Lasso. We focus on the distribution of λ_A alone and not, like Lockhart et al., on the evolution of the Lasso between two consecutive values of λ where a variable enters the Lasso. Therefore, we avoid a situation of low power of the covariance test: when two active variables enter the Lasso at values of λ close from each other, the CovTest's test statistic is the difference between the same quantity computed on Lasso results which differ only a little; this can lead artificially to selecting none of the two active variables even if both are significantly associated with the outcome. This could explain simulation-calibration's better sensitivity in the absence of correlation between covariates that we observe in section <ref>. In the presence of correlation, however, the performance of simulation-calibration is not always better than that of the selection by CovTest, with lower FWER at equal sensitivity in high SNR scenarios, but higher FWER at equal sensitivity in low SNR scenarios (Figure <ref> and, in the supplementary material, Figure 6).
Using the procedure on data from the French national pharmacovigilance database has highlighted an undesirable phenomenon that could help explain the poorer performance in some cases of correlation. We observe in Figure <ref> a change of pattern in the sequence of estimated p-values where, once a certain variable j_0 belongs to A, p_A (y) is always relatively high. The detail of the Lasso performed in the simulation-calibrations provides an explanation for this phenomenon. We observe that when it occurs, there is a variable j_1 ∉ A correlated with j_0 and possibly with other variables in A such that, among the population of λ_A (y^(l)) obtained by simulation-calibration (see algorithm <ref>), a significant proportion are high values associated with an early selection of j_1 by the Lasso applied to y^(l). Indeed, calibrating the y^(l) on one or more associations with variables correlated with j_1 can lead to an association with j_1 itself, which is captured by the Lasso. These high λ_A (y^(l)) lead to the estimation of a high p_A (y).
This phenomenon reflects the influence of j_A on the distribution of λ_A under the null hypothesis. It might be more relevant, from the perspective of the test's power, to compare the observed λ_A (y), not as we have done with the distribution of λ_A (Y) conditional on θ_A (Y), but with its distribution under a double conditioning by θ_A (Y), j_A (Y), or more generally taking into account j_A (Y). In practice, this would involve introducing a weighting in the calculation by averaging p_A (Y).
This phenomenon results, when j_0 ∈ A, in a loss of power in the selection of all variables, including those that are not correlated with j_0 or j_1. This loss of power in certain cases of correlation could offset the tendency of the test by simulation-calibration, observed in simulations without correlation, to be more powerful but less conservative than the CovTest, which would explain why, in some correlated scenarios, variable selection by simulation-calibration is less conservative than that by CovTest while having practically the same sensitivity.
The simulation-calibration procedure's lower FWER compared to that of the CovTest in high-SNR correlated scenarios could be explained by the fact that, due to the high SNR, it is more common for all active variables to be selected before inactive variables on the Lasso path. When this is the case, H_0 (A) is verified (thus the risk of false positive is controlled) in all tests of the iterative procedure that are likely to produce a false positive. For the risk control to fail, it is necessary both for the active variables to be mixed with inactive variables on the Lasso path – for there to be an iteration where an unverified H_0 (A) is tested which can nevertheless lead to the selection of an inactive variable –, and for active variables to be correlated with inactive variables – so that the extended theorem of control of the selection error (section <ref>) does not apply.
|
http://arxiv.org/abs/2409.03370v1 | 20240905091553 | Identification of non-causal systems with arbitrary switching modes | [
"Yanxin Zhang",
"Chengpu Yu",
"Filippo Fabiani"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
[Bit] Yanxin Zhang (zhangyanxin@bit.edu.cn), [Bit] Chengpu Yu (yuchengpu@bit.edu.cn), and [MIT] Filippo Fabiani (filippo.fabiani@imtlucca.it)
[Bit]School of Automation, Beijing Institute of Technology, Beijing 100081, PR China
[MIT]IMT School for Advanced Studies Lucca, Piazza San Francesco 19, 55100, Lucca, Italy
This work was supported by the National Natural Science Foundation of China (Grant No. 61991414, 62088101, 6193000461), Chongqing Natural Science Foundation CSTB2023NSCQ-JQX0018, and Beijing Natural Science Foundation L221005. Corresponding author: Chengpu Yu.
Switching systems; Non-causal systems; Expectation maximization; Kalman filter
§ ABSTRACT
We consider the identification of non-causal systems with arbitrary switching modes (NCS-ASM), a class of models essential for describing typical power load management and department store inventory dynamics. The simultaneous identification of causal-and-anticausal subsystems, along with the presence of possibly random switching sequences, however, make the overall identification problem particularly challenging. To this end, we develop an expectation-maximization (EM) based system identification technique, where the E-step proposes a modified Kalman filter (KF) to estimate the states and switching sequences of causal-and-anticausal subsystems, while the M-step consists in a switching least-squares algorithm to estimate the parameters of individual subsystems. We establish the main convergence features of the proposed identification procedure, also providing bounds on the parameter estimation errors under mild conditions. Finally, the effectiveness of our identification method is validated through two numerical simulations.
§ INTRODUCTION
Non-causal switching dynamics denote a class of systems able to modeling several real-world scenarios, such as load management in power systems <cit.>, traffic signal control systems <cit.>, and robotic systems <cit.>. These systems encompass non-causality, meaning that their output signals depend not only on the current or past control actions, but also on future inputs. In addition, the systems exhibit switching characteristics, potentially transitioning among different operational states, and thus leading to variations in the system behavior. Understanding and managing the complexity of these systems is therefore crucial for enhancing efficiency, reliability, and adaptability, enabling them to better meet the demands of industrial production and operations. This essentially motivates the interest in modeling, analyzing, and controlling such type of systems.
In many parameter identification problems for dynamical systems, the input-output data are accompanied by temporal mode sequences. As the mode of the system changes over time, each data point corresponds to the mode at its corresponding time instant. In these circumstances, it is hence crucial to model the dynamics of different modes and estimate the transitions from one mode to another <cit.>. However, obtaining direct estimates of the dynamical system from input-output data is challenging and, in practice, the prior knowledge on the mode transitions is often unavailable. Therefore, estimating the switching behaviors poses a challenging, yet highly significant, problem addressed by several researchers. Available works indeed propose algorithms to estimate the individual system dynamics and the mode transition sequence based on observed behaviors <cit.>.
§.§ Literature review
Several works consider the identification of switching models <cit.>. In <cit.>, a joint smoothing algorithm is proposed based on the expectation-maximization (EM) framework, where an E-step solution is introduced to effectively address issues related to the exponential complexity in the jump Markov linear model. In <cit.>, a numerically efficient, two-step estimation method was proposed, which iteratively updates the parameters and the switching sequence. The flexibility of this technique consists in its adaptability to different loss functions employed in the jump models, which significantly impact the overall shape and jumping behavior of the models. Furthermore, the identification of jump Box-Jenkins systems is investigated in <cit.>, where a maximum a-posteriori method is proposed to estimate the switching sequence of the model. Subsequently, the system parameters of the jump Box-Jenkins models are alternately estimated using the Gauss-Newton and the prediction error methods. In <cit.>, a switching least-squares algorithm for autonomous Markov jump linear systems is proposed. Here, the authors provided a formal proof characterizing the strong consistency of the underlying method, as well as established its convergence rate as 𝒪(√(log(T)/T)) a.s., where T is the time horizon. The aforementioned literature primarily focuses on linear systems and assumes that the switching behavior follows a Markov chain. However, these techniques are not applicable when the mode switching is random. Consequently, a number of approaches for identifying systems with random mode switching behavior have been proposed in the literature. For example, in <cit.> a kernel-based method is used to estimate the random switching system, which can solve both estimation and classification problems simultaneously. In <cit.>, a maximum-likelihood algorithm is presented for the identification of a random switching linear system, which combines the maximum-likelihood estimation criterion and the Kalman filtering technique to estimate the system modes in switched linear systems, resulting in a significant contribution to the estimation error stability of general switched linear systems.
All the studies mentioned above are focused on switching causal systems, where the system output is only related to current or past excitation. However, to the best of our knowledge, there is no literature on the system identification problem of non-causal systems with arbitrary switching modes (NCS-ASM). Nevertheless, NCS-ASM are widely present in real-world scenarios. For instance, in automatic control systems the time delay caused by sensors can lead to switching non-causal characteristics of the system <cit.>. In financial markets, there exist interactions and feedback between investor decisions and market prices <cit.>. Although there are some studies available on system identification for non-causal systems, such as the subspace <cit.> and the kernel methods <cit.>, these studies can only handle a single, non-causal system, rather than switching non-causal systems.
§.§ Summary of contribution
In this paper, we focus on the identification of NCS-ASM. The proposed method is developed under the expectation-maximization (EM) framework, which can be divided into two main parts. Specifically, in the E-step we adopt a Bayesian rule to compute the posterior estimate of the switching sequence, along with a modified Kalman filter (KF) for estimating the state of the causal and anti-causal parts. In the M-step, instead, we propose a switching least-squares method to obtain the closed-form solution for the parameters and establish the convergence rate of the estimated parameters. Our main contributions can hence be summarized as follows:
* To the best of our knowledge, this is the first work considering the identification of NCS-ASM. In particular, in the E-step of the EM framework a modified KF is proposed to compute the posterior state estimates of the causal and anti-causal parts, which is crucial to deal with the identification of non-causal systems;
* Compared with the system dynamics in <cit.>, where the switching behavior of the subsystems only occurs in the causal part, our methodology can cope with switching behaviors in both causal and anti-causal parts. Moreover, the switching sequences of the two directions are allowed to differ from each other.
§.§ Paper organization
The rest of the paper is organized as follows:
in Section <ref> we describe the considered system and formulate the related identification problem. In Section <ref>, instead, we discuss our EM method for the identification of the NCS-ASM, while in Section <ref> we provide its implementation details, as well as characterize the related convergence properties. Two simulation examples are finally discussed in Section <ref> to test the effectiveness of the proposed method numerically.
The proofs of the technical results of the paper are all deferred to Appendix <ref>.
Notations:
ℤ and ℝ denote the set of integer and real numbers, respectively.
Given a matrix X, ‖ X‖ and ‖ X‖_∞ denote respectively its spectral and infinity norms, λ_max(X) and λ_min(X) are respectively its maximum and minimum eigenvalues, and tr(X) denotes the trace. ℙ[·] and 𝔼[·] denote respectively a probability distribution and the related expected value. 𝕊^n is the space of n × n symmetric matrices and 𝕊_≻ 0^n (𝕊_≽ 0^n) is the cone of positive (semi-)definite matrices. Given two square matrices A, B of compatible dimension, A≽ B means that A-B is positive semidefinite. For a sequence {s_t}_t∈ℕ, s_T=𝒪(T) indicates that limsup_T→∞s_T/T<∞, while s_T=o(T) that limsup_T→∞s_T/T=0. Finally, I denotes an identity matrix of appropriate dimensions. 𝒩(μ,σ^2) denotes the normal distribution of a random variable with mean μ and standard deviation σ.
§ MATHEMATICAL FORMULATION
We now describe the system concerned in this paper, together with the main assumptions, and successively formalize the problem to be addressed.
§.§ System model description
Consider the following discrete-time, non-causal system characterized by arbitrary switching modes:
x_c(t) = A_c(s_c(t))x_c(t-1)+v_c(t),
x_a(t) = A_a(s_a(t))x_a(t+1)+v_a(t),
y(t) = C_c(s_c(t))x_c(t)+C_a(s_a(t))x_a(t)+v_m(t),
where t∈ℤ is the time instant, x_c(t)∈ℝ^n_x_c,x_a(t)∈ℝ^n_x_a are the causal and anti-causal state vectors, respectively, y(t)∈ℝ^n_y denotes the system output, while s_c(t)∈{1,2,…,m_c}≜Λ_c and s_a(t)∈{1,2,…,m_a}≜Λ_a are two discrete variables representing the possible switching modes. In addition, v_c(t)∈ℝ^n_x_c and v_a(t)∈ℝ^n_x_a are the system noise vectors, and v_m(t)∈ℝ^n_y is the measurement noise vector. Finally, A_c:Λ_c→ℝ^n_x_c× n_x_c and A_a:Λ_a→ℝ^n_x_a× n_x_a denote the matrix functions associated to the causal and anti-causal state dynamics, respectively, while C_c:Λ_c→ℝ^n_y× n_x_c and C_a:Λ_a→ℝ^n_y× n_x_a are those mapping the two state vectors to the measured output.
Assume that the noise terms v_c(t), v_a(t) and v_m(t) are distributed according to a Gaussian distribution with zero mean and finite variance as follows:
v_c(t) ∼𝒩(0,Σ_c(s_c(t))),
v_a(t) ∼𝒩(0,Σ_a(s_a(t))),
v_m(t) ∼𝒩(0,Σ_m).
The NCS-ASM (<ref>) is stable in the average sense, which means
∑_i=1^T‖ x_c(i)‖^2=𝒪(T), ∑_i=1^T‖ x_a(i)‖^2=𝒪(T).
where T is the sample size of the available dataset.
Stability in the average sense is widely applied in linear systems <cit.>. Note that, compared to other commonly used notions, such as mean-square stability and almost sure stability, the assumption of stability in the average sense is weaker.
The noise vectors v_c(t),v_a(t),v_m(t) are martingale difference sequences with respect to an increasing sequence of σ-fields 𝒢_T generated by the history of the states, and satisfy the following conditions:
liminf_N→∞1/T∑_i=1^Tv_c(i)v_c(i)^⊤=𝒞_1,
liminf_N→∞1/T∑_i=1^Tv_a(i)v_a(i)^⊤=𝒞_2,
liminf_N→∞1/T∑_i=1^Tv_m(i)v_m(i)^⊤=𝒞_3,
where 𝒞_1∈𝕊_≻0^n_x_c, 𝒞_2∈𝕊_≻0^n_x_a, and 𝒞_3∈𝕊_≻0^n_y are also bounded.
Standing Assumption <ref> denotes a common requirement for analyzing the convergence of system identification algorithms, enabling the noise process to exhibit non-stationary and heavy-tailed characteristics – see, e.g., <cit.>.
The NCS-ASM in (<ref>) thus consists of two state equations and one output equation. Specifically, the first state equation represents the dynamics of the causal state variables, while the second one the dynamics of the non-causal state variables. The system output is determined by both the causal and non-causal states. Furthermore, both the causal and non-causal parts of the system are composed of multiple subsystems, and their corresponding switching sequences are different. Given some T∈ℤ, which will denote the sample size of the available dataset, let the switching sequences of the causal and anti-causal parts being denoted by s_c≜{s_c(i)}_i=1^T and s_a≜{s_a(i)}_i=1^T, respectively. Each of them corresponds to a set of parameters, i.e., s_c(t)=i determines the model parameter θ^c_i≜{A_c(i),C_c(i),Σ_c(i)} that is active at the time instant t. In particular, the sequences s_c and s_a undergo random switches with certain (fixed) probabilities over time. Let us denote the underlying switching probabilities with ℙ[s_c=i]=π^c_i and ℙ[s_a=i]=π^a_i. We then have that ∑_i=1^m_cπ^c_i=1 and ∑_i=1^m_aπ^a_i=1.
The complete set of model parameters that comprehensively describe the NCS-ASM can be conveniently encapsulated into a parameter object θ, defined as follows:
θ≜{{θ^c_i}_i=1^m_c,{θ^a_i}_i=1^m_a,{π^c_i}_i=1^m_c,{π^a_i}_i=1^m_a,Σ_m}.
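For concreteness, the following minimal Python sketch shows how synthetic data consistent with (<ref>) could be generated: the causal states are propagated forward in time, the anti-causal states backward, and both contribute to the measured output. The function name, the dictionary layout of θ and the zero initial/terminal states are our own illustrative choices, not prescribed by the model.

import numpy as np

rng = np.random.default_rng(0)

def simulate_ncs_asm(theta, T):
    """Generate synthetic data from a non-causal switched model (illustrative).

    theta is a dict with keys:
      'Ac', 'Cc', 'Sc' : lists of causal matrices A_c(i), C_c(i), Sigma_c(i)
      'Aa', 'Ca', 'Sa' : lists of anti-causal matrices
      'pic', 'pia'     : mode probabilities, 'Sm' : measurement covariance
    """
    nxc = theta['Ac'][0].shape[0]
    nxa = theta['Aa'][0].shape[0]
    ny = theta['Cc'][0].shape[0]

    # i.i.d. switching sequences for the causal and anti-causal parts
    sc = rng.choice(len(theta['pic']), size=T, p=theta['pic'])
    sa = rng.choice(len(theta['pia']), size=T, p=theta['pia'])

    # causal states run forward in time (zero initial condition assumed)
    xc = np.zeros((T, nxc))
    for t in range(1, T):
        j = sc[t]
        xc[t] = theta['Ac'][j] @ xc[t - 1] + rng.multivariate_normal(np.zeros(nxc), theta['Sc'][j])

    # anti-causal states run backward from the terminal instant (zero terminal condition assumed)
    xa = np.zeros((T, nxa))
    for t in range(T - 2, -1, -1):
        l = sa[t]
        xa[t] = theta['Aa'][l] @ xa[t + 1] + rng.multivariate_normal(np.zeros(nxa), theta['Sa'][l])

    # output mixes both state vectors plus measurement noise
    y = np.zeros((T, ny))
    for t in range(T):
        y[t] = (theta['Cc'][sc[t]] @ xc[t] + theta['Ca'][sa[t]] @ xa[t]
                + rng.multivariate_normal(np.zeros(ny), theta['Sm']))
    return y, xc, xa, sc, sa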
§.§ Problem statement
Our goal is hence to estimate the unknown model parameters θ characterizing the NCS-ASM (<ref>) with the known state dimension, number of causal system modes m_c and anti-causal system modes m_a, together with a collection of noisy output measurements y:
y≜y_1:T={y(1),…,y(T)}.
It is worth remarking that the NCS-ASM consists of both causal and non-causal parts, and their switching sequences are different. Therefore, the problem we wish to address has two main challenges. First, it is difficult to obtain the parameters for each subsystem of the NCS-ASM, since the states of both parts are unknown, and the system output is determined by both causal and non-causal states. Identifying both parts simultaneously is hence challenging, especially due to the continuous switching pattern of different subsystems. Second, the system has two switching sequences, and the combination of subsystems varies at different time instants. For example, at time t the causal part may activate subsystem i (s_c(t)=i) while the non-causal part may activate subsystem j (s_a(t)=j), thereby originating a total of m_a× m_c possible combinations. In addition, the switching behavior of the subsystems is random and independent across different time instants:
ℙ[s_c(t)| s_c(t-1),…,s_c(1)]=ℙ[s_c(t)],
ℙ[s_a(t)| s_a(t+1),…,s_a(T)]=ℙ[s_a(t)], t=1,…,T.
To deal with the identification problem of the NCS-ASM (<ref>), the EM framework is adopted, which is an iterative method that can yield an estimate of the parameters at each iteration <cit.>. Let us denote the parameter estimate at the k-th iteration of the underlying algorithm as θ^k. Then, the proposed method can be (qualitatively, for the moment) described by means of the following two steps:
* In the E-step, we develop a modified KF to estimate the states of the causal and anti-causal parts. Furthermore, the Bayesian rule is used to obtain a posterior estimate of the switching sequence. Subsequently, the full-data likelihood function Q(θ,θ^k) can be calculated.
* In the M-step, the likelihood function Q(θ,θ^k) is maximized with respect to the parameters θ. Then, the identification of the NCS-ASM is updated, yielding θ^k+1.
Next section will discuss in detail each step of the proposed technique for NCS-ASM identification.
§ THE EM METHOD FOR IDENTIFYING NCS-ASM
By making use of the dataset y, we aim at estimating the system parameters θ. To this end, a standard approach is to let coincide θ̂, i.e., our estimate of the true θ, with a maximizer of the likelihood function, namely:
θ̂=arg max_θ lnℙ_θ(y),
where we indicate with ℙ_θ(y) the probability density function of the output y given some sets of parameters θ. In the remainder, we will tacitly assume that the solution to the likelihood maximization problem (<ref>) is unique.
Given any collection of data y, note that the likelihood function lnℙ_θ(y), also called marginal density function of y, can be decomposed into the following form:
lnℙ_θ(y)
=lnℙ_θ[y(1)]+∑_i=2^T-1lnℙ_θ[y(i)]+lnℙ_θ[y(T)]
=ln∫∫∑_s_c(1)∑_s_a(1)ℙ_θ[y(1)| x_c(1),x_a(1),s_c(1),s_a(1)]
ℙ_θ[x_c(1),x_a(1),s_c(1),s_a(1)] dx_c(1)dx_a(1)
+ln∫∫∑_s_c(T)∑_s_a(T)ℙ_θ[y(T)| x_c(T),x_a(T),s_c(T),s_a(T)]
ℙ_θ[x_c(T),x_a(T),s_c(T),s_a(T)] dx_c(T)dx_a(T)
+∑_i=2^T-1ln∫∫∑_s_c(i)∑_s_a(i)ℙ_θ[y(i)| x_c(i),x_a(i),s_c(i),s_a(i)]
ℙ_θ[x_c(i),s_c(i)|y_1:i-1]ℙ_θ[x_a(i),s_a(i)|y_i+1:T] dx_c(i)dx_a(i).
Let us denote the collection of state variables over T as x_c≜{x_c(i)}_i=1^T and x_a≜{x_a(i)}_i=1^T. Recall that in the NCS-ASM (<ref>) the state variables x_c and x_a are determined by the switching sequences s_c and s_a. Besides the potential nonconvexity of lnℙ_θ(y), which makes the direct maximization of lnℙ_θ(y) challenging (together with its high-dimensionality), from the decomposition above it is also clear that for calculating ℙ_θ(y) we need to sum over all possible values of s_a, s_c, thereby further complicating the solution of (<ref>).
Another way to marginalize the latent variables (such as x_c, x_a, s_c, s_a) is by taking the expectation over these latter. Instead of maximizing the incomplete likelihood function lnℙ_θ(y), we can estimate the conditional density of the hidden variables given the observations y and an estimate of parameter θ̂. Then, parameter estimate θ̂ can be obtained by maximizing the complete likelihood function.
To highlight our technical contributions and contrast them with existing results, we will give a sample complexity analysis related to our EM-based technique for the identification of the NCS-ASM. To achieve this, we have to further postulate the following:
The following conditions hold true:
* The switching sequences s_c, s_a, and the subsystem parameters θ^c, θ^a are all independent among them, i.e.,
ℙ[s_c|θ^c] =ℙ[s_c], ℙ[θ^c|s_c]=ℙ[θ^c],
ℙ[s_a|θ^a] =ℙ[s_a], ℙ[θ^a|s_a]=ℙ[θ^a].
* The switching sequence follows a categorical (multinomial) distribution, i.e.,
ℙ[s_c=i]=π^c_i, i=1,…,m_c,
ℙ[s_a=i]=π^a_i, i=1,…,m_a,
with ∑_i=1^m_cπ^c_i=1, ∑_i=1^m_aπ^a_i=1.
Then, the full-data complete likelihood function can be expressed as follows:
lnℙ_θ[y,x_c,s_c,x_a,s_a]= lnℙ_θ[y]
+lnℙ_θ[x_c,s_c,x_a,s_a|y].
This relation directly links ℙ_θ(y) and ℙ_θ[y,x_c,s_c,x_a,s_a], with the latter depending on the unknown states x_c, x_a and switching sequences s_c, s_a. The key step is then to approximate lnℙ_θ[y] by the above relation (<ref>), where x_c, s_c, x_a, and s_a can be approximated by their conditional expectations based on the observed data y.
Therefore, at each iteration k of our EM-based algorithm, given the estimate θ^k the conditional expectation of lnℙ_θ^k[y,x_c,s_c,x_a,s_a] can be obtained based on the available data y as follows:
𝔼_θ^k[lnℙ_θ[y,x_c,s_c,x_a,s_a]]
=𝔼_θ^k[lnℙ_θ(y)]+𝔼_θ^k[lnℙ_θ[x_c,s_c,x_a,s_a|y]]
=lnℙ_θ(y)+𝔼_θ^k[lnℙ_θ[x_c,s_c,x_a,s_a|y]].
Let us then define:
Q(θ,θ^k) =𝔼_θ^k[lnℙ_θ[y,x_c,s_c,x_a,s_a]],
V(θ,θ^k) =𝔼_θ^k[lnℙ_θ[x_c,s_c,x_a,s_a|y]].
The EM approach iteratively estimates the parameters in the following two steps. First, we compute the expectation Q(θ,θ^k) based on θ^k obtained from the previous iteration. By the Bayesian rule and Markov properties, it can be inferred that:
lnℙ_θ[y,x_c,s_c,x_a,s_a]
=∑_i=1^Tlnℙ_θ[y(i)| x_c(i),x_a(i),s_c(i),s_a(i)]
+lnℙ_θ[x_c(1),s_c(1)]+∑_i=2^Tlnℙ_θ[x_c(i)| x_c(i-1),s_c(i)]
+lnℙ_θ[x_a(T),s_a(T)]+∑_i=1^T-1lnℙ_θ[x_a(i)| x_a(i+1),s_a(i)]
=∑_i=1^T∑_j=1^m_c∑_l=1^m_alnℙ_θ[y(i)| x_c(i),x_a(i)]π_j^cπ_l^a
+∑_j=1^m_clnℙ_θ[x_c(1)]π_j^c+∑_i=2^T∑_j=1^m_clnℙ_θ[x_c(i)| x_c(i-1)]π_j^c
+∑_l=1^m_alnℙ_θ[x_a(T)]π_l^a+∑_i=1^T-1∑_l=1^m_alnℙ_θ[x_a(i)| x_a(i+1)]π_l^a.
In view of the white noise assumption characterizing the disturbances affecting both the state variables and the measured output, note that the distribution of these variables, given the subsystem modes s_c(i)=j, s_a(i)=l, is also Gaussian and given as follows:
ℙ_θ[y(i)| x_c(i),x_a(i)]=| 2πΣ_m|^-1/2
exp{-1/2(y(i)-μ_1(i))^⊤Σ_m^-1(y(i)-μ_1(i))},
ℙ_θ[x_c(i)| x_c(i-1)]=| 2πΣ_c(i)|^-1/2
exp{-1/2(x_c(i)-μ_2(i))^⊤Σ_c^-1(i)(x_c(i)-μ_2(i))},
ℙ_θ[x_a(i)| x_a(i+1)]=| 2πΣ_a(i)|^-1/2
exp{-1/2(x_a(i)-μ_3(i))^⊤Σ_a^-1(i)(x_a(i)-μ_3(i))},
where
μ_1(i) =C_c(j)x_c(i)+C_a(l)x_a(i),
μ_2(i) =A_c(j)x_c(i-1),
μ_3(i) =A_a(l)x_a(i+1).
Let us indicate with w_ij^c the posterior probability that s_c(i)=j given the data (w_il^a is defined similarly). Then, the objective function Q(θ,θ^k) assumes the following form:
𝔼_θ^k[lnℙ_θ[y,x_c,s_c,x_a,s_a]|y]
=∑_i=1^5 Q_i(θ,θ^k),
where the closed form for the terms Q_i is in (<ref>). Note that the posterior densities w^c_ij and w^a_il, given the parameter θ^k and dataset y, can be computed as follows:
w^c_ij= ℙ_θ^k[s_c(i) =j|y]=ℙ_θ^k[y,s_c(i)=j]/ℙ_θ^k[y]
=ℙ_θ^k[y| s_c(i)=j]π^c_j/∑_j'=1^m_cℙ_θ^k[y| s_c(i)=j']π^c_j',
w^a_il= ℙ_θ^k[s_a(i) =l|y]=ℙ_θ^k[y,s_a(i)=l]/ℙ_θ^k[y]
=ℙ_θ^k[y| s_a(i)=l]π^a_l/∑_l'=1^m_aℙ_θ^k[y| s_a(i)=l']π^a_l'.
Then, the second step is to maximize Q(θ,θ^k) to obtain θ^k+1, formally defined as θ^k+1=arg max_θ Q(θ,θ^k).
Algorithm <ref> summarizes the two main steps of the proposed identification methodology for NCS-ASM. We characterize next the monotonic properties of the likelihood function in (<ref>) when the EM algorithm is iteratively applied to estimate the system parameters θ:
Given a dataset y, let {θ^k}_k∈ℤ be the sequence generated by Algorithm <ref>. Then, the likelihood function in (<ref>), evaluated along {θ^k}_k∈ℤ, is non-decreasing, thereby yielding lnℙ_θ^k+1[y]≥lnℙ_θ^k[y] for all k∈ℤ.
§ IMPLEMENTATION DETAILS OF THE EM ALGORITHM
We now delve into the details of the steps outlined in Algorithm <ref>, ultimately establishing our main technical result characterizing the sample complexity of the proposed identification technique for NCS-ASM.
§.§ The E-step
This step requires the calculation of the objective function Q(θ,θ^k). Specifically, this shall be achieved on the basis of the parameter θ^k estimated in the previous iteration. Then, according to the expression of Q(θ,θ^k) in (<ref>), the expectations of states x_c,x_a and the switching sequences s_c,s_a given the data y are required.
First, we calculate the posterior estimates of the switching sequences s_c and s_a by leveraging the Bayesian rule, namely ℙ_θ[s_c|y]=ℙ_θ[s_c,y]/ℙ_θ(y) and ℙ_θ[s_a|y]=ℙ_θ[s_a,y]/ℙ_θ(y). In addition, according to the formula of total probability one obtains:
ℙ_θ(y) =∑_j=1^m_cℙ_θ[y|s_c=j] ℙ[s_c=j]=∑_j=1^m_cℙ_θ[y|s_c=j]π_j^c,
ℙ_θ(y) =∑_l=1^m_aℙ_θ[y|s_a=l] ℙ[s_a=l]=∑_l=1^m_aℙ_θ[y|s_a=l]π_l^a.
Then, the data point can be assigned to each subsystem at time i by solving the following optimization problem:
ŝ_c(i) =arg max_j∈{1,…,m_c} ℙ_θ[y(i)| s_c(i)=j]π_j^c,
ŝ_a(i) =arg max_l∈{1,…,m_a} ℙ_θ[y(i)| s_a(i)=l]π_l^a,
where maximizing ℙ_θ[y(i)| s_c(i)=j]π_j^c is equivalent to maximizing the posterior probability of ℙ_θ[s_c(i)=j| y(i)] which is commonly used for data classification.
After obtaining ŝ_c(i) and ŝ_a(i), for any (i,j)∈{1,…,T}×{1,…,m_c} (or (i,l)∈{1,…,T}×{1,…,m_a}), we obtained that
w_ij^c= {[ 1 if ŝ_c(i)=j; 0 else ].,
w_il^a= {[ 1 if ŝ_a(i)=l; 0 else ].
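A compact Python sketch of this hard-assignment step is given below (function and array names are illustrative, not from the paper; the per-mode predicted outputs are assumed to come from the filter described next):

import numpy as np
from scipy.stats import multivariate_normal

def hard_assign(y, pred_mean, pred_cov, pi):
    """Return s_hat[i] = argmax_j N(y[i]; pred_mean[i, j], pred_cov[j]) * pi[j],
    together with the 0/1 posterior weights w[i, j] used in the M-step.

    y         : (T, ny) measured outputs
    pred_mean : (T, m, ny) per-mode predicted outputs
    pred_cov  : (m, ny, ny) per-mode output covariances
    pi        : (m,) prior mode probabilities
    """
    T, m = pred_mean.shape[0], pred_mean.shape[1]
    score = np.zeros((T, m))
    for j in range(m):
        # likelihood of each data point under mode j, weighted by the prior pi_j
        score[:, j] = pi[j] * multivariate_normal.pdf(
            y - pred_mean[:, j, :], mean=np.zeros(y.shape[1]), cov=pred_cov[j])
    s_hat = score.argmax(axis=1)
    w = np.zeros((T, m))
    w[np.arange(T), s_hat] = 1.0          # indicator weights w_ij
    return s_hat, w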
Successively, we focus on the reconstruction of the state variables x_c and x_a, a task that is traditionally accomplished by means of a Kalman filter. The latter amounts to a recursive algorithm that estimates the system states by incorporating the information inferred from previous estimates and current measurements.
Adapting the KF to our problem, however, requires a few key modifications, for the following reason.
According to the system description in (<ref>), we note that the calculation of the posterior probability ℙ_θ[x_c(t)|y] is related to two equations. The first one requires propagating the estimate from the previous time step, ℙ_θ[x_c(t-1)|y], to the current time step recursively, whereas the second one involves correcting the prior estimate with the measurement equation in a reverse manner (for the non-causal states the opposite happens). Then, when correcting the prior prediction of the state variables x_c and x_a using the data y, we note that x_c and x_a are mutually correlated, thereby calling for a careful design of the KF as described below. To simplify notation we omit the dependency on the switching sequence, e.g., A_c=A_c(ŝ_c(t)):
First, we need to compute the prior state estimates of x_c and x_a, denoted as x̂_c^- and x̂_a^-, through the first two relations in (<ref>), yielding x̂_c^-(t)=A_cx̂_c(t-1) and x̂_a^-(t)=A_ax̂_a(t+1). With this regard, note that the switching sequence for each step has already been calculated. Successively, the measurement equation in (<ref>) allows us to perform posterior corrections x̂_c and x̂_a on the underlying prior estimates x̂_c^- and x̂_a^- as follows:
x̂_c(t) =x̂_c^-(t)+K_cC_c(C_c^-1(y(t)-C_ax̂_a^-(t))-x̂_c^-(t))
=x̂_c^-(t)+K_c(y(t)-C_ax̂_a^-(t)-C_cx̂_c^-(t)),
x̂_a(t) =x̂_a^-(t)+K_aC_a(C_a^-1(y(t)-C_cx̂_c^-(t))-x̂_a^-(t))
=x̂_a^-(t)+K_a(y(t)-C_ax̂_a^-(t)-C_cx̂_c^-(t)),
where K_c∈ℝ^n_x_c× n_y and K_a∈ℝ^n_x_a× n_y are the Kalman gains for the causal and anti-causal states, respectively, whose design is critical for the effectiveness of the KF.
Before delving into the derivation of K_c and K_a, let us first calculate the error covariance matrix for the prior state estimates based on the prior estimation errors e_c^-(t)=x_c(t)-x̂^-_c(t) and e_a^-(t)=x_a(t)-x̂^-_a(t), and the posterior estimation errors e_c(t)=x_c(t)-x̂_c(t) and e_a(t)=x_a(t)-x̂_a(t). Then, the prior error covariance and posterior estimation error covariance matrices can be defined as follows:
P_c^- ≜𝔼[(x_c(t)-x̂_c^-(t))(x_c(t)-x̂_c^-(t))^⊤],
P_a^- ≜𝔼[(x_a(t)-x̂_a^-(t))(x_a(t)-x̂_a^-(t))^⊤],
P_c ≜𝔼[(x_c(t)-x̂_c(t))(x_c(t)-x̂_c(t))^⊤],
P_a ≜𝔼[(x_a(t)-x̂_a(t))(x_a(t)-x̂_a(t))^⊤].
Then, we have:
P_c^- =𝔼[(x_c(t)-x̂_c^-(t))(x_c(t)-x̂_c^-(t))^⊤]
=𝔼[(A_c(x_c(t-1)-x̂_c(t-1))+v_c(t))
(A_c(x_c(t-1)-x̂_c(t-1))+v_c(t))^⊤]
=A_cP_c(t-1)A_c^⊤+Σ_c,
P_a^- =𝔼[(x_a(t)-x̂_a^-(t))(x_a(t)-x̂_a^-(t))^⊤]
=𝔼[(A_a(x_a(t+1)-x̂_a(t+1))+v_a(t))
(A_a(x_a(t+1)-x̂_a(t+1))+v_a(t))^⊤]
=A_aP_a(t+1)A_a^⊤+Σ_a.
Then, the Kalman gains can be calculated so that the error covariance matrices for the posterior state estimates is minimized. The posterior estimation error can be rewritten as:
e_c(t) =x_c(t)-x̂_c(t)
=x_c(t)-x̂_c^-(t)+K_c(y(t)-C_ax̂_a^-(t)-C_cx̂_c^-(t))
=(I-K_cC_c)e_c^-(t)-K_cC_ae_a^-(t)-K_cv_m(t),
e_a(t) =x_a(t)-x̂_a(t)
=x_a(t)-x̂_a^-(t)+K_a(y(t)-C_ax̂_a^-(t)-C_cx̂_c^-(t))
=(I-K_aC_a)e_a^-(t)-K_aC_ce_c^-(t)-K_av_m(t),
while the error covariance matrices of the state estimates:
P_c =𝔼[((I-K_cC_c)e_c^-(t)-K_cC_ae_a^-(t)-K_cv_m(t))
((I-K_cC_c)e_c^-(t)-K_cC_ae_a^-(t)-K_cv_m(t))^⊤]
=(I-K_cC_c)P_c^-(I-K_cC_c)^⊤+K_cC_aP_a^-C_a^⊤ K_c^⊤
+K_cΣ_mK_c^⊤
=P_c^–P_c^-C_c^⊤ K_c^⊤-K_cC_cP_c^-+K_cC_cP_c^-C_c^⊤ K_c^⊤
+K_cC_aP_a^-C_a^⊤ K_c^⊤+K_cΣ_mK_c^⊤,
P_a =𝔼[((I-K_aC_a)e_a^-(t)-K_aC_ce_c^-(t)-K_av_m(t))
((I-K_aC_a)e_a^-(t)-K_aC_ce_c^-(t)-K_av_m(t))^⊤]
=(I-K_aC_a)P_a^-(I-K_aC_a)^⊤+K_aC_cP_c^-C_c^⊤ K_a^⊤
+K_aΣ_mK_a^⊤
=P_a^–P_a^-C_a^⊤ K_a^⊤-K_aC_aP_a^-+K_aC_aP_a^-C_a^⊤ K_a^⊤
+K_aC_cP_c^-C_c^⊤ K_a^⊤+K_aΣ_mK_a^⊤,
where the second equality in each derivation is established based on the independence of e_c^-(t), e_a^-(t) and v_m(t). Note that minimizing the variances of P_c and P_a is equivalent to minimizing their traces. Therefore, given the unconstrained nature of such trace minimization, the optimal Kalman gains K_c and K_a can be found as:
d(tr(P_c))/dK_c =0-2[C_cP_c^-]^⊤+2K_cC_cP_c^-C_c^⊤
+2K_cC_aP_a^-C_a^⊤+2K_cΣ_m=0,
⇒ K_c =(P_c^-C_c^⊤)(C_cP_c^-C_c^⊤+C_aP_a^-C_a^⊤+Σ_m)^-1,
d(tr(P_a))/dK_a =0-2[C_aP_a^-]^⊤+2K_aC_aP_a^-C_a^⊤
+2K_aC_cP_c^-C_c^⊤+2K_aΣ_m=0,
⇒ K_a =(P_a^-C_a^⊤)(C_aP_a^-C_a^⊤+C_cP_c^-C_c^⊤+Σ_m)^-1.
By substituting the Kalman gains above into (<ref>)–(<ref>), the updated error covariance matrices can be obtained as:
P_c =P_c^–P_c^-C_c^⊤ K_c^⊤-K_cC_cP_c^-+P_c^-C_c^⊤ K_c^⊤
=(I-K_cC_c)P_c^-,
P_a =P_a^–P_a^-C_a^⊤ K_a^⊤-K_aC_aP_a^-+P_a^-C_a^⊤ K_a^⊤
=(I-K_aC_a)P_a^-.
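Putting prediction, gain computation and correction together, a single step of the modified KF can be sketched as follows (a minimal Python illustration; the function signature and the explicit matrix inverse are our own simplifications of the recursions above):

import numpy as np

def modified_kf_step(xc_prev, xa_next, Pc_prev, Pa_next, y_t,
                     Ac, Aa, Cc, Ca, Sc, Sa, Sm):
    """One prediction/correction step of the modified Kalman filter.

    xc_prev, Pc_prev : causal posterior at t-1;  xa_next, Pa_next : anti-causal
    posterior at t+1 (the anti-causal recursion runs backward in time).
    All matrices correspond to the modes selected at time t.
    """
    # prior predictions and prior covariances
    xc_m = Ac @ xc_prev
    xa_m = Aa @ xa_next
    Pc_m = Ac @ Pc_prev @ Ac.T + Sc
    Pa_m = Aa @ Pa_next @ Aa.T + Sa

    # shared innovation covariance: both states contribute to the output
    S = Cc @ Pc_m @ Cc.T + Ca @ Pa_m @ Ca.T + Sm
    Kc = Pc_m @ Cc.T @ np.linalg.inv(S)
    Ka = Pa_m @ Ca.T @ np.linalg.inv(S)

    # posterior corrections driven by the common innovation
    innov = y_t - Cc @ xc_m - Ca @ xa_m
    xc = xc_m + Kc @ innov
    xa = xa_m + Ka @ innov
    Pc = (np.eye(len(xc)) - Kc @ Cc) @ Pc_m
    Pa = (np.eye(len(xa)) - Ka @ Ca) @ Pa_m
    return xc, xa, Pc, Pa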
By completing the steps of the modified KF, including the prediction, measurement update, and error covariance matrix update <cit.>, one can obtain all the posterior estimates of the state vectors x_c and x_a, which are optimal state estimates based on the available measurements and prior knowledge. In addition, to ensure the convergence of the proposed state estimation method, we establish the following properties of the state estimates:
Let η_c(t)=x_c(t)-A_c(ŝ_c(t))x_c(t-1), η_a(t)=x_a(t)-A_a(ŝ_a(t))x_a(t+1), and η_m(t)=y(t)-C_c(ŝ_c(t))x_c(t)-C_a(ŝ_a(t))x_a(t). There exist α_1, α_2, α_3 > 0 such that 𝔼[‖η_c(t)‖^2]≤α_1, 𝔼[‖η_a(t)‖^2]≤α_2, and 𝔼[‖η_m(t)‖^2]≤α_3, for all t∈ℤ.
Lemma <ref> states that the error of state estimation is bounded in the mean square sense, regardless of how the state trajectory evolves in time.
By making use of the steps described in this subsection one is able to find an expression for the objective function Q(θ,θ^k). We will discuss next how to actually maximize it with respect to its first argument.
§.§ The M-step
The second step in Algorithm <ref> requires the maximization of Q(θ,θ^k) to update the parameters estimate θ^k:
θ^k+1=θmax Q(θ,θ^k).
Let us first focus on the elements {{π^c_i}_i=1^m_c,{π^a_i}_i=1^m_a}, and recall that the objective function reads as:
Q(θ,θ^k)
=𝔼_θ^k[lnℙ_θ[y,x_c,s_c,x_a,s_a]|y],
=∫∫∫∫lnℙ_θ[y,x_c,s_c,x_a,s_a] ℙ_θ^k[x_c|y] ℙ_θ^k[x_a|y]
ℙ_θ^k[s_c|y] ℙ_θ^k[s_a|y] d(x_c)d(x_a)d(s_c)d(s_a)
=∑_i=1^T∑_j=1^m_c∑_l=1^m_a∫∫ w^c_ijw^a_illnℙ_θ[y(i)| x_c(i),x_a(i)]
ℙ_θ^k[x_c(i)|y] ℙ_θ^k[x_a(i)|y]π^c_jπ^a_l d(x_c(i))d(x_a(i)).
The (k+1)-th estimate of {{π^c_i}_i=1^m_c,{π^a_i}_i=1^m_a} can hence be obtained in closed-form by applying the first-order optimality conditions as follows:
π^c_j =arg max_π^c_j Q(θ,θ^k)=∑_i=1^Tw^c_ij/∑_i=1^T∑_j=1^m_cw^c_ij,
π^a_l =arg max_π^a_l Q(θ,θ^k)=∑_i=1^Tw^a_il/∑_i=1^T∑_l=1^m_aw^a_il.
Furthermore, also the expression for the parameters {{θ^c_i}_i=1^m_c,{θ^a_i}_i=1^m_a,Σ_m} can be computed in closed-form by using the switching least-squares approach as follows:
A_c(j)=arg min_A_c(j) ∑_i=1^Tw^c_ij‖ x_c(i)-A_c(j)x_c(i-1)‖^2,
A_a(l)=arg min_A_a(l) ∑_i=1^Tw^a_il‖ x_a(i)-A_a(l)x_a(i+1)‖^2,
(C_c(j),C_a(l))=
arg min_(C_c(j),C_a(l)) ∑_i=1^Tw^c_ijw^a_il‖ y(i)
-C_c(j)x_c(i)-C_a(l)x_a(i)‖^2.
Then, the covariance matrices related to the disturbances v_c, v_a, and v_m can also be estimated as:
Σ_c(j)=
∑_i=1^T w^c_ij(x_c(i)-A_c(j)x_c(i-1))(x_c(i)-A_c(j)x_c(i-1))^⊤,
Σ_a(l)=
∑_i=1^Tw^a_il(x_a(i)-A_a(l)x_a(i+1))(x_a(i)-A_a(l)x_a(i+1))^⊤,
Σ_m=∑_i=1^T w^c_ijw^a_il(y(i)-C_c(j)x_c(i)-C_a(l)x_a(i))
(y(i)-C_c(j)x_c(i)-C_a(l)x_a(i))^⊤.
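As an illustration, the causal-part updates of the M-step (mixing weights, A_c(j) and Σ_c(j)) could be coded as in the Python sketch below; the anti-causal updates are analogous with the time index reversed. The normalization of the covariance estimate by the sum of the weights is our own choice, in line with standard EM practice, and all names are illustrative.

import numpy as np

def m_step_causal(xc, wc):
    """Weighted least-squares update of pi_c(j), A_c(j) and Sigma_c(j), j = 1..m_c.

    xc : (T, n_xc) causal state estimates from the E-step
    wc : (T, m_c)  0/1 posterior weights of the causal switching sequence
    """
    T, m = wc.shape
    pi_new = wc.sum(axis=0) / wc.sum()
    A_new, S_new = [], []
    X0, X1 = xc[:-1], xc[1:]                 # regressor / target pairs
    for j in range(m):
        wj = wc[1:, j]
        # weighted normal equations: A_c(j) = (sum w x1 x0^T)(sum w x0 x0^T)^{-1}
        G = (X0 * wj[:, None]).T @ X0
        H = (X1 * wj[:, None]).T @ X0
        Aj = H @ np.linalg.pinv(G)
        res = X1 - X0 @ Aj.T
        # weighted residual covariance (guard against empty modes)
        Sj = (res * wj[:, None]).T @ res / max(wj.sum(), 1.0)
        A_new.append(Aj)
        S_new.append(Sj)
    return A_new, S_new, pi_new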
To show the convergence rate of the system matrices, we need the following definition of strong consistency of the parameter estimates, and few auxiliary results. Recall that θ̂ is the estimate of θ made by exploiting T samples.
The estimate θ̂ is strongly consistent if
lim_T→∞θ̂=θ.
The following asymptotic relations hold true almost surely (a.s.):
‖∑_i=1^TA(s_c(i))x_c(i)v_c^⊤(i)+v_c(i)x_c^⊤(i)A(s_c(i))‖=o(T),
‖∑_i=1^TA(s_a(i))x_a(i)v_a^⊤(i)+v_a(i)x_a^⊤(i)A(s_a(i))‖=o(T).
The standard least-squares solution can be expressed as
β̂_T=arg min_β‖ z-β^⊤ u‖^2,
where u denotes the system input, z the system output, and β the unknown parameters. Let U_T=∑_i=1^Tu(i)u(i)^⊤. If
(C1) λ_min(U_T)→∞ a.s., and
(C2) logλ_max(U_T)=o(λ_min(U_T)) a.s.,
then the least-squares estimate β̂_T is strongly consistent with convergence rate
‖β̂_T-β‖_∞=𝒪(√(log(λ_max(U_T))/λ_min(U_T))) a. s.
Lemma <ref> indicates that the convergence rate of β̂_T depends on the covariance matrix of the system inputs.
We are now ready to establish the convergence rate for θ̂. Due to the possible different active subsystems at time t, it is convenient to define the following partition of the considered time interval {1, …, T} as 𝕋^c_j,T={t≤ T| s_c(t)=j} and 𝕋^a_l,T={t≤ T| s_a(t)=l}.
Let W^c_j,T=∑_t∈𝕋^c_j,T x_c(t)x_c^⊤(t) and W^a_l,T=∑_t∈𝕋^a_l,Tx_a(t)x_a^⊤(t). Then, the estimate θ̂ generated by Algorithm <ref> is strongly consistent for any s_c∈Λ_c^T and s_a∈Λ_a^T, and the convergence rates are
‖Â_c(j)-A_c(j)‖_∞≤𝒪(√(log(λ_max(W^c_j,T))/λ_min(W^c_j,T))),
‖Â_a(l)-A_a(l)‖_∞≤𝒪(√(log(λ_max(W^a_l,T))/λ_min(W^a_l,T))),
‖Ĉ_c(j)-C_c(j)‖_∞≤𝒪(√(log(λ_max(W^c_j,T))/λ_min(W^c_j,T))),
‖Ĉ_a(l)-C_a(l)‖_∞≤𝒪(√(log(λ_max(W^a_l,T))/λ_min(W^a_l,T))).
Furthermore, the convergence rate of Σ̂_c(j), Σ̂_a(l), Σ̂_m are
‖Σ̂_c(j)-Σ_c(j)‖_∞≤𝒪(log(T)/T),
‖Σ̂_a(l)-Σ_a(l)‖_∞≤𝒪(log(T)/T),
‖Σ̂_m-Σ_m‖_∞≤𝒪(log(T)/T).
Theorem <ref> gives data-dependent upper bounds for the estimation errors of the parameter matrices. In order to have a data-independent characterization of the convergence rate for adaptive control or reinforcement learning purposes, in the proof of Theorem <ref>, specifically equation (<ref>), we provide with the corresponding convergence rate of the parameter estimate θ̂, which is equal to 𝒪(√(log(T)/T)).
§ NUMERICAL EXAMPLES
We now verify the effectiveness of the proposed methodology on two simulation examples. In both cases, we note that the true switching sequences s_c and s_a are only used to verify the accuracy of the estimated switching sequences, i.e., ŝ_c and ŝ_a. As performance index we make use of the mode match rate, defined as:
L_mr= 1/T∑_i=1^Tι(s_c(i),ŝ_c(i)),
where ι(·,·) denotes the standard indicator function, i.e., ι(s_c(i),ŝ_c(i))=1 if s_c(i)=ŝ_c(i), 0 otherwise.
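For reference, this index amounts to a simple agreement count between the true and estimated mode sequences, e.g.:

import numpy as np

def mode_match_rate(s_true, s_est):
    """L_mr: fraction of time instants at which the estimated mode equals the true one."""
    s_true, s_est = np.asarray(s_true), np.asarray(s_est)
    return float(np.mean(s_true == s_est))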
§.§ Example 1: Academic NCS-ASM
For illustrative purposes, we start by considering a simple non-causal system described in (<ref>) with m_c=m_a=2 modes and main parameters reported in Table <ref> (refer to the “True” columns). The dimensions of the outputs, causal states, and anti-causal states are n_y=1,n_x_c=n_x_a=2. The probabilities of all modes are π^c_1=0.7,π^c_2=0.3,π^a_1=π^a_2=0.5. The system is excited with white noise with zero mean and finite variance, and the data length is T=10^4.
The true and estimated parameters are reported in Table <ref>, which clearly shows that the parameter estimates are very close to their true values. In Fig. <ref> we report the partial estimation of the switching sequences s_c and s_a, where the mode match rates are 97.4% and 99.2%, respectively. Note that our method achieves an accurate parameter estimate, since each data point can be accurately assigned to the corresponding mode.
To better validate the accuracy of the proposed algorithm in parameter estimation, Fig. <ref> illustrates the estimated states using the modified KF. The relative estimation errors, defined as δ_c=‖ x_c-x̂_c‖^2/‖ x_c‖^2 (δ_a has the same structure), are δ_c=3.74% and δ_a=3.14%, respectively.
For comparison purposes, we now implement the EM method for jump Markov linear systems proposed in <cit.>. The length of the data is set to T=10^4. The transition matrix in <cit.> is set to 𝒯=[[ 0.5 0.5; 0.5 0.5 ]], and the probability of the switching sequence in this paper is set to π^c_1=π^c_2=π^a_1=π^a_2=0.5. The subsystem match rates of the proposed method and <cit.> are compared at different noise levels by assuming Σ=Σ_c=Σ_a. The identification accuracy of the switching sequences is shown in Table <ref>.
To verify the robust performance of the proposed method against several noise levels, we run 100 Monte Carlo experiments under four different noise conditions, i.e., Σ∈{0.01I,0.1I,0.5I,I}. In Fig. <ref> we report the mean and the variance of the match rates in all the considered cases. We observe that the estimation accuracy of the switching sequence is not significantly affected by the noise variance, since even for high noise levels the estimation accuracy can still reach 98% due to the excellent performance of the modified KF.
§.§ Example 2: The Department Store Inventory Price Index
In this subsection we adopt “The Department Store Inventory Price Index"(DSIP) dataset from The Bureau of Labor Statistics (BLS). These data come from inventory weighted price indices of goods carried by department stores.
The department store inventory product prices can be influenced by various factors, such as supply and demand dynamics, market competition, seasonal variations and so on. These factors can collectively contribute to a complex relationship with the prices, thereby, suggesting that the inventory product prices may adhere to a mixed causal non-causal system. There are many causal relationships that affect the prices of department stores, such as supply and demand, cost of production, and market competition. These will directly lead to changes of future prices. There are also many non-causal factors that may not directly cause changes in the price of goods, such as economic conditions, consumer preferences, and trends. More importantly, seasonal changes can also lead to changes in its price index. Therefore, the system dynamics may change at different times to characterize DSIP. To sum up, a NCS-ASM model (<ref>) is suitable for describing the DSIP.
In Fig. <ref> we show the true prices and the estimated prices with different number of subsystems. Specifically, we can infer that the larger the number of subsystems, the better the ability to describe the changes in the commodity price index. The estimation errors δ=‖y-ŷ‖/‖y‖ with different number of subsystems are shown in Table <ref>.
In conclusion, from Fig. <ref> and Table <ref> we note that switching systems with a larger number of modes can better describe the variations in DSIP, because seasonal changes can lead to different patterns in dynamic systems. The proposed identification method can therefore accurately capture the typical fluctuations of the DSIP.
§ CONCLUSION
We have proposed an expectation-maximization framework for identifying non-causal systems with arbitrary switching modes. In the E-step, we have embedded the reconstructed switching sequence into the modified Kalman filter so that the proposed algorithm can handle the joint state variable estimation for the causal and anti-causal parts. Furthermore, in M-step we have developed a switching least-squares algorithm that can get the parameter estimates in closed-form. From a technical perspective, we have established the convergence of our identification methodology, also deriving an upper bound 𝒪(√(log(T)/T)) for the parameter errors.
Note that the identification algorithm proposed in this paper can be adapted to the identification of switching linear descriptor systems with minor modifications, since a descriptor state-space model can be represented in the mixed causal and anti-causal form. When the subsystems are nonlinear, however, the identification task becomes more challenging, thus posing greater difficulties. This aspect will be further investigated in our future work. In addition, addressing the joint identification of structured subsystems and piecewise constant switching sequences is an interesting future research direction.
Tan2023 K. Tan, W. J. Parquette, & M. Tao. (2023). A predictive algorithm for maximum power point tracking in solar photovoltaic systems through load management. Solar Energy, 265, 112127.
Liao2024 S. Liao, Y. Wu, K. Ma, & Y. Niu, (2024). Ant Colony Optimization With Look-Ahead Mechanism for Dynamic Traffic Signal Control of IoV Systems. IEEE Internet of Things Journal, 11(1), 366-377.
Carloni2007 R. Carloni, R. G. Sanfelice, A. R. Teel, & C. Melchiorri. (2007). A hybrid control strategy for robust contact detection and force regulation. In Proc. American Control Conf., New York City, USA, 1461-1466.
Schlegl2003 T. Schlegl, M. Buss, & G. Schmidt. (2003). A hybrid systems approach toward modeling and dynamical simulation of dextrous manipulation. IEEE/ASME Trans. on Mechatronics, 8(3), 352-361.
Chan2008 A. B. Chan, & N. Vasconcelos. (2008). Modeling, clustering, and segmenting video with mixtures of dynamic textures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(5), 909–926.
Ferrari2003 G. Ferrari-Trecate, M. Muselli, D. Liberati, & M. Morari. (2003). A clustering technique for the identification of piecewise affine systems. Automatica, 39(2), 205–217.
Garulli2012 A. Garulli, S. Paoletti, & A. Vicino. (2012). A survey on switched and piecewise affine system identification. In 16th IFAC Symposium on System Identification, Brussels, Belgium, 344–355.
Bianchi2021 F. Bianchi, V. Breschi, D. Piga, & L. Piroddi. (2021). Model structure selection for switched NARX system identification: A randomized approach. Automatica, 125, 109415.
Mark2022 Mark P. Balenzuela, Adrian G. Wills, Christopher Renton, & Brett Ninness. (2022). Parameter estimation for Jump Markov Linear Systems. Automatica, 135 109949.
Bemporad2018 Alberto Bemporad, Valentina Breschi, Dario Piga, & Stephen P. Boyd. (2018). Fitting jump models. Automatica, 96, 11-21.
Piga2020 Dario Piga, Valentina Breschi, & Alberto Bemporad. (2020). Estimation of jump Box–Jenkins models. Automatica, 120 109126.
Sayedana2024 Borna Sayedana, Mohammad Afshari, Peter E. Caines, & Aditya Mahajan. (2024). Strong Consistency and Rate of Convergence of Switched Least Squares System Identification for Autonomous Markov Jump Linear Systems. IEEE transactions on Automatic Control, 1-8.
Anna2018 Anna Scampicchio, Alberto Giaretta, & Gianluigi Pillonetto. (2018). Nonlinear Hybrid Systems Identification using Kernel-Based Techniques. In IFAC-PapersOnline, 51(15), 269-274.
Angelo2010 Angelo Alessandri, Marco Baglietto, & Giorgio Battistelli. (2010). A maximum-likelihood Kalman filter for switching discrete-time linear systems. Automatica, 46, 1870-1876.
Tsurumoto2022 K. Tsurumoto, W. Ohnishi, T. Koseki, N. Strijbosch, & T. Oomen. (2022). A non-causal approach for suppressing the estimation delay of state observer. 022 American Control Conference (ACC), 3356-3356.
El2023 El Ammari Anis, & Terzi, Chokri. (2023). Causal Nexus Between Ownership Structure, Dividend Policy and Financial Performance: A Bootstrap Panel Granger non-causality Analysis. Journal of African Business, 24(4), 562-579.
Verhaegen1996 Verhaegen, M. (1996). A subspace model identification solution to the identification of mixed causal, anti-causal LTI systems. SIAM Journal on Matrix Analysis and Applications, 17(2), 332–347.
Fang2024 X. Fang, & T. Chen. (2024). On kernel design for regularized non-causal system identification. Automatica, 159, 111335.
Blanken2020 Blanken, L., & Oomen, T. (2020). Kernel-based identification of non-causal systems with application to inverse model control. Automatica, 114.
sta_as1 T. E. Duncan, & B. Pasik-Duncan. (1990). Adaptive control of continuoustime linear stochastic systems. Math. Control signals systems, 3(1), 45–60.
sta_as2 M. K. S. Faradonbeh, A. Tewari, & G. Michailidis. (2020). On adaptive linear–quadratic regulators. Automatica, 117, 108982.
Dempster1977 Dempster, Arthur P., Laird, Nan M., & Rubin, Donald B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B. Statistical Methodology, 1–38.
Lai1982 T. L. Lai, & C. Z. Wei. (1982). Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. Ann. Statist., 10(1), 154–166.
PE2018 P. E. Caines. (2018). Linear stochastic systems. SIAM.
Chen1986 H. F. Chen, & L. Guo. (1986). Convergence rate of least-squares identification and adaptive control for stochastic systems. Int J Control, 44(5), 1459–1476.
Stout1974 W. F. Stout. (1974). Almost Sure Convergence. Academic Press.
Gibson2005 S. Gibson, & B. Ninness. (2005). Robust maximum-likelihood estimation of multivariable dynamic systems. Automatica, 41, 1667–1682.
Kalman Kalman RE. (1960). A new approach to linear fltering and prediction problems. J Basic Eng, 82(1), 35–45.
§ TECHNICAL PROOFS
Proof of Lemma <ref>: The log likelihood difference between the θ and θ^k can be expressed as
lnℙ_θ(y)-lnℙ_θ^k[y]=Q(θ,θ^k)- Q(θ^k,θ^k)
+V(θ,θ^k)-V(θ^k,θ^k),
where the difference V(θ,θ^k)-V(θ^k,θ^k) coincides with the Kullback–Leibler distance, which possesses an important property, i.e., being non-negative. Therefore, the maximization of Q(θ,θ^k) can yield an increase in the log-likelihood function lnℙ_θ(y), namely
Q(θ^k+1,θ^k)≥ Q(θ^k,θ^k)⇒lnℙ_θ^k+1[y]≥lnℙ_θ^k[y],
thus concluding the proof.
Proof of Lemma <ref>: Only the boundedness of η_c(t) will be proven in detail, since that of η_a(t) and η_m(t) can be derived in a similar way.
First, we note that x_c(t-1) can be equivalently expressed as follows:
x_c(t-1)=φ_1(s_c)x_c(1)+φ_2(s_c)v_c(1:t-1),
where φ_1(s_c) and φ_2(s_c) are shown in A.1, A.2. v_c(1:t-1)≜[v_c(1),⋯,v_c(t-1)]. Both matrices are uniquely determined by the switching sequence s_c and system matrices A_c. Then, one obtains that:
η_c(t) =x_c(t)-A(ŝ_c(t))x_c(t-1)
=(A_c(s_c(t))-A(ŝ_c(t))) x_c(t-1)+v_c(t)
=(A_c(s_c(t))-A(ŝ_c(t)))φ_1(s_c)x_c(1)
+φ_3(s_c)v_c(1:t),
where φ_3(s_c)=[φ_2(s_c),1]. Passing to the (squared) norm in the expression above we note that, in view of the fact that the noise v_c has a bounded covariance, the last term is bounded too. For what concerns the first term, instead, we have:
‖ [A_c(s_c(t))-A(ŝ_c(t))]φ_1(s)x_c(1)‖^2≤λ_1‖ x_c(1)‖^2,
where
λ_1≜λ_max(φ^⊤_1(s) (A_c(s_c(t))-A(ŝ_c(t)))^⊤
(A_c(s_c(t))-A(ŝ_c(t)))φ_1(s)),
which concludes the proof.
Proof of Theorem <ref>: In the interest of space, we establish the convergence rate for Â_a(l) only, since the other bounds on the system matrices can be derived similarly.
In view of Lemma <ref>, sufficient conditions for establishing the convergence rate of Â_a(l) are (C1) λ_min(W^a_l,T)→∞, a.s., and (C2) logλ_max(W^a_l,T)=o(λ_min(W^a_l,T)), a.s. We therefore have to show that these two conditions are verified in our case. Then, for what concerns (C1), one has:
x_a(t)x_a(t)^⊤ =(Â_a(l)x_a(t+1)+v_a(t))
(Â_a(l)x_a(t+1)+v_a(t))^⊤
=Â_a(l)x_a(t+1)x_a^⊤(t+1)Â^⊤_a(l)
+2v_a(t)x^⊤_a(t+1)Â^⊤_a(l)+v_a(t)v_a^⊤(t).
Since Â_a(l)x_a(t+1)x_a^⊤(t+1)Â^⊤_a(l) is a positive semidefinite matrix, by relying on Lemma <ref> we can infer that
W^a_l,T =∑_t∈𝕋^a_l,Tx_a(t)x^⊤_a(t)
≽∑_t∈𝕋^a_l,Tv_a(t)v_a^⊤(t)+x_a(T)x^⊤_a(T)
+∑_t∈𝕋^a_l,T(Â_a(l)x_a(t+1)v^⊤_a(t)+v_a(t)x^⊤_a(t+1)Â^⊤_a(l))
≽∑_t∈𝕋^a_l,Tv_a(t)v_a^⊤(t)+o(T).
Then, we readily obtain:
lim_|𝕋^a_l,T|→∞inf ∑_t∈𝕋^a_l,Tx_a(t)x^⊤_a(t)/|𝕋^a_l,T|
≽lim_|𝕋^a_l,T|→∞inf∑_t∈𝕋^a_l,Tv_a(t)v_a^⊤(t)/|𝕋^a_l,T|≻0.
Therefore, we can conclude that λ_min(W^a_l,T)→∞ a.s..
To prove (C2) we note that:
λ_max(∑_t∈𝕋^a_l,Tx_a(t)x^⊤_a(t)) ≤tr(∑_t∈𝕋^a_l,Tx_a(t)x^⊤_a(t))
≤∑_t=1^T‖ x_a(t)‖^2=𝒪(T),
where the last equality follows in view of the stability, in the average sense, of the NCS-ASM in (<ref>).
Then, one can readily obtain that
lim_T→∞log(λ_max(W^a_l,T))/λ_min(W^a_l,T)≤lim_T→∞log(T)/|𝕋_i,T|=log(T)/𝒪(T)=0.
We are now able to establish the convergence rate for the covariance matrices. Specifically, we will give the detailed proof for Σ̂_c(j) only, since the remaining ones follow similarly.
From the NCS-ASM in (<ref>), the true covariance matrix for v_c can be expressed as:
Σ_c(j)=1/|𝕋^c_j,T|∑_t∈𝕋^c_j,T(x_c(t)- A_c(j)x_c(t-1))
(x_c(t)-A_c(j)x_c(t-1))^⊤.
Then, the estimation error can take the following form:
Σ̂_c(j)-Σ_c(j)= 1/|𝕋^c_j,T|∑_t∈𝕋^c_j,T((A_c(j)-Â_c(j))x_c(t-1))
((A_c(j)-Â_c(j))x_c(t-1))^⊤.
Therefore, the convergence rate for Σ̂_c(j) reads as:
‖Σ̂_c(j)-Σ_c(j)‖_∞≤ ∑_t∈𝕋^c_j,Tx_c(t-1)x_c(t-1)^⊤/|𝕋^c_j,T|
‖ (A_c(j)-Â_c(j))(A_c(j)-Â_c(j))^⊤‖_∞
≤𝒪(log(T)/T),
which completes the proof.
A two-way coupled high resolution wave hindcast for the South China Sea

Tiziano Bagnasco, Alessandro Stocchino, Michalis I. Vousdoukas, Jinghua Wang
==============================================================================
§ ABSTRACT
In the present study, we performed a 53-year wave hindcast (1970-2022) for a significant portion of the South China Sea (SCS) with an unstructured mesh that reaches considerably high resolution along the coasts of the Guangdong province (China). The adopted modeling approach is based on the fully two-way coupled SCHISM-WWMIII numerical suite. The model was forced with ERA5 wind velocities that were compared to IFREMER altimeter wind velocities and then bias-corrected for a more accurate treatment of the wind component. Eight major tidal harmonics extracted from FES2014 were imposed to the open boundaries.
After a preliminary mesh independence analysis, the model results have been validated against satellite altimeter observations retrieved from the European Space Agency database spanning the period from 1992 to 2019. Moreover, 28 year in-situ measurements from two coastal wave buoys and data from four tidal gauge stations (approximately 20 years) were used to test the nearshore skills of the model.
Several statistical indicators have been used to evaluate the offshore and nearshore performance of the model results in terms of the main wave parameters (significant wave height, peak wave period, mean wave direction) and water levels. All statistical metrics suggest that the present hindcast improved the predictions of waves and water levels compared to previous datasets, especially in the coastal regions. The high spatial resolution together with a full coupling allowed the model to capture and simulate processes that are induced by the non-linear interactions between waves and currents, especially nearshore.
§ INTRODUCTION
An in-depth knowledge of the wave dynamics and the effects they induce is of primary importance in the effective design of nearshore/offshore structures, for the protection of coastal areas, for the wave energy assessment and for facilitating any marine operation. If the propagation of waves and their repercussions on beaches, coastal structures and piers can be anticipated, their adverse effects can be effectively mitigated <cit.>. Furthermore, given the limited availability of real-time wave measurements and observations, the
implementation of numerical models to generate wave hindcasts can serve as a beneficial approach to comprehend the oceanic conditions and enhance wave forecasting for future scenario predictions.
In this regard, numerous numerical models can be utilized but their choice mainly depends on the purpose of the analysis. At present, the numerical models that deploy an unstructured mesh have become a valid alternative to models that use regular grids, especially for large-scale applications as stressed by <cit.>. This is because unstructured-grid models can generate detailed meshes that easily adapt to complex shorelines and sophisticated geometries. This becomes critical in areas characterized by numerous islands and profiles with irregular topography. The necessity of improving numerical simulations in such contexts, along with the need for higher-resolution results under constrasting spatial and temporal scales, has lead to the advent of unstructured-grid models <cit.>.
Noteworthy examples of widely used unstructured-grid circulation models include the Finite-Volume Coastal Ocean Model (FVCOM, <cit.>), the System of HydrodYnamic Finite Element Modules (SHYFEM, <cit.>), ADCIRC <cit.>, the TELEMAC model <cit.> and the Semi-Implicit Cross-scale Hydroscience Integrated System Model (SCHISM, <cit.>).
Concerning wave modeling, among the frequently employed spectral wave models are SWAN <cit.>, TOMAWAC <cit.>, WAVEWATCH III <cit.>, WAM <cit.>, MIKE21 SW <cit.>,CREST <cit.> and WWMIII <cit.>.
The existing literature provides several examples and applications of both global-scale wave hindcasts <cit.>
and more regionally focused studies <cit.>. The Eastern part of the North Atlantic coast was studied by <cit.> and led to the generation of a 44-year wave hindcast (1958-2001) with the WAM model.
<cit.> analyzed the performance of WAVEWATCH III in the Western Mediterranean Sea concerning seventeen storm events. The Central and South Pacific was investigated by <cit.> while generating a 31-year global wave hindcast with the deployment of WAVEWATCH III. <cit.> also generated a global wave hindcast from 1979 to 2015 with improved resolution of 0.25^∘ in coastal areas and near the poles.
Only later, <cit.> developed a 73-year global hindcast of waves and storm surges utilizing SCHISM-WWMV and considering an unstructured mesh characterized by more than 650000 nodes and with highest resolutions ranging between 2 and 4 kilometers near costal areas.
The area examined in this paper encompasses a substantial portion of the SCS: a marginal sea of the Western Pacific Ocean enclosed by the Indochinese Peninsula, China, Taiwan, Philippines, Indonesia, Malaysia and Borneo. During the summer and autumn seasons, this region experiences the impact of intense winds and typhoons, occasionally leading to significant harm to coastal structures, offshore installations, and vessels <cit.>. Extensive research efforts have been undertaken in this region to develop forecasting or hindcasting of waves, analyze wave climate trends, and achieve a comprehensive understanding of the wave dynamics. Several studies focused on wave condition analysis and wave energy assessment <cit.>. Researchers have also been exploring the effects of concurrent storm-tide-tsunami events <cit.> and the observations and modelling of typhoon waves <cit.>. The literature also includes examples of wave hindcasts produced in this region.
<cit.> conducted a 31-year wave hindcast using the WAVEWATCH III model, aiming to evaluate the long-term alterations and inter-annual variations in the wave climate of the SCS. The hindcast employed a minimum resolution of 0.3 degrees to depict the wave characteristics accurately. The wave climate in the Bohai Sea, Yellow Sea, and East China Sea for the period spanning 1990 to 2011 was investigated by <cit.> using the SWAN model and a minimum resolution of 1 arcminute. Another study by <cit.> produced a wave hindcast covering the time span from 1979 to 2017 along the Chinese coast reaching resolutions of 1km in shallow areas deploying the TOMAWAC model.
Numerical wave modeling can be performed accounting for the influence and interaction of currents or by neglecting them entirely.
As mentioned by <cit.>, the exchange of data in two-way coupled models between waves and currents results in a more accurate representation of some real complex phenomena such as wave-induced turbulent mixing <cit.>. Practice has shown that, typically, coupled models exhibit superior performance in simulating wave conditions compared to wave-only models <cit.>. This improvement can be attributed to the inclusion of current effects, which play a crucial role in accurately depicting phenomena such as dissipation, wave breaking, and wave steepening <cit.>.
For this study, we relied on the 2-way fully coupled SCHISM-WWMIII models, the specifics of which will be described in the next sections.
The primary objective of this analysis is to produce a 53-year wave hindcast for a significant portion of the South China Sea, with a specific focus on the coastline of the Guangdong province (China), which stands as one of the most densely populated coastal areas. The numerical predictions in terms of wave characteristics and water levels will be validated against offshore satellite altimeter observations, coastal wave measurement stations and tidal gauges.
§ MATERIAL AND METHODS
§.§ Numerical Model Set-up
The domain considered in this paper covers the majority of the SCS, spanning from longitudes of approximately 106^∘ E to 123^∘ E and from latitudes of approximately 3^∘ N to 28^∘ N. Figure <ref> shows the part of the South China Sea considered for the numerical simulations. A particular focus was dedicated to the Great Bay Area (GBA) and the coasts along the Guangdong Province (China).
This study is based on a 2-dimensional, 2-way coupled model, combining the Semi-implicit Cross Scale Hydroscience Integrated System Model SCHISM and the third-generation spectral Wind Wave Model WWMIII.
The 2-way coupling implies that the current velocities and sea water levels are provided to the wave model and the latter computes the wind stresses that are then handed back to SCHISM.
The structure of SCHISM is derived from the original Semi-implicit Eulerian-Lagrangian Finite-Element model (SELFE), further details of which can be found in <cit.> and <cit.>. Since the interest of the study is the interaction between wind waves and currents, no particular attention is devoted to the vertical structure of the flow. For this purpose, SCHISM applies the 2D depth-integrated barotropic equations.
The integration time steps implemented in the analysis are 100s for SCHISM and 600s for WWMIII: this implies that the two models exchange information every 6 time steps. This choice was made after testing different model time steps in order to minimize possible numerical dispersion errors and/or excessive truncation errors. Since SCHISM is a semi-implicit model that applies no mode splitting, the CFL constraint is more easily fulfilled, which leads to greater numerical stability. As stressed by <cit.>, the typical integration time steps used in these types of field applications usually vary between 100s and 200s. The unstructured mesh covers the majority of the South China Sea (SCS) and was generated with the aid of the 13.1 version of SMS (Surface-water Modeling System by Aquaveo, https://www.xmswiki.com/wiki/SMS:SMS).
Both SCHISM and WWMIII share the same unstructured mesh which comprises 15523 nodes and 29039 triangular elements.
The finest mesh resolution is circa 0.008^∘ around the GBA, while the element size reaches 0.35^∘ in the open ocean.
Several unstructured meshes were generated and tested in order to find the optimal mesh size that can better capture the wave propagation nearshore in the Hong Kong waters. The bathymetry of the whole domain was extracted from the 2022 General Bathymetric Chart of the Oceans (GEBCO) dataset <cit.> with a resolution of 15 arc-second intervals and then merged with a more refined dataset of water depths related to Hong Kong only.
The bottom friction was evaluated considering a minimum boundary layer thickness of 0.2 m and a roughness length equal to 5.78·10^-6 m, constant for all the nodes. The latter corresponds to a Manning coefficient of about 0.012 s/m^1/3 which was also applied by <cit.> for a similar application in the SCS. The atmospheric forcing (sea level pressure and wind velocities) and the tidal oscillations at the open boundaries start from a zero value and then have a warm-up time equal to 365 days in order to provide enough time for the oceanic conditions to settle properly in such a large domain. The wetting and drying mechanism was considered active and accounted for a minimum water depth of 0.01 m. The surface stress was calculated according to the parameters of <cit.>, whose formulation is based on the wave age. The discretization of the spectral domain was performed regrouping the frequencies into 36 bins (lower limit of 0.04 Hz and higher limit of 1 Hz) and the directions into 24 bins with minimum and maximum directions of respectively 0^∘ and 360^∘. The wave boundary layer was activated and treated as explained by <cit.>. WWMIII solves the wave action equation and accounts for different contributions and phenomena, mainly referred to as source terms, that cause the energy content to be rearranged within the spectrum. The work done in this paper considers non-linear interactions (DIA approximation, <cit.>), wind-induced energy input and wave energy dissipation in deep waters (whitecapping) according to the ST4 parametrization <cit.>. The model incorporates the JONSWAP bottom friction parametrization <cit.> with a bottom friction coefficient equal to 0.067 m^2 s^-3 and shallow water wave breaking characterized by a constant gamma criterion. The wave breaking formulation is based on the work of <cit.> and the model also incorporates the effects of triad 3-wave interactions (LTA, Lumped Triad Approximation, <cit.>).
The numerical simulations aimed at generating a 53-year wave hindcast cover the time period between the years 1970 and 2022. The outputs of both SCHISM and WWMIII are generated and saved hourly and comprise the main variables used for the purpose of this paper: significant wave height (H_s), peak wave period (T_p), mean wave direction (D_m) and water level (h).
§.§ Meteocean forcing and open boundary conditions
The boundary conditions applied at the open boundary nodes of the unstructured mesh (blue nodes in Figure <ref> right panel) consist of the 8 major harmonic components of the tides (M2, S2, N2, K2, K1, O1, Q1, P1) and were extracted from the FES2014 package <cit.>. The boundary nodes that connect Malaysia to the region above Manila in the Philippines were considered as land nodes since the portion of water that they exclude is enclosed by various islands and therefore not relevant for wind wave generation. The main meteorological forcing consists of hourly data of sea level pressure and wind velocities defined over a 0.25^∘×0.25^∘ grid obtained from the fifth generation ECMWF reanalysis (ERA5, <cit.>).
To improve the quality of the forcing wind data we compared the ERA5 wind velocities to the IFREMER satellite radiometer velocities (https://cerweb.ifremer.fr/datarmor/products/satellite/l4/multi-sensor/ifr-l4-ewsb-blendedwind-glo-025-6h-rep/data/). The comparison subsequently lead to a bias correction of the ERA5 wind field.
This was performed since wind velocities provided by reanalysis products are often biased at high winds (extreme conditions) and therefore a comparison with satellite radiometers (or observations) can result in a better treatment of the meteorological forcing <cit.>. A similar approach was adopted by <cit.>, who performed a bias-correction of the ERA5 wind field in the SCS considering wind measurements from 15 buoys. The study revealed that ERA5 winds can underestimate or overestimate the wind intensities.
For this work, in order to correct the bias, we preprocessed the reanalysis as follows.
The 10-meter northward and eastward wind velocity components provided by the ERA5 dataset span from 1970 to 2022 and are defined hourly on a regular grid of 0.25^∘. In contrast, the IFREMER dataset contains wind velocity components from 1992 to 2020 that are available every 6 hours and with the same spatial resolution. Although the locations of the two datasets did not align, a series of pre-processing steps were undertaken to ensure consistent spatiotemporal resolution between them. First, a downsampling process was applied to the ERA5 velocities in order to convert them to a 6-hourly interval. Additionally, a cubic spatial interpolation was performed in order to account for the mismatched locations leading to the same spatiotemporal resolutions for both the datasets.
These initial steps were required to compute the 6-hourly biases for the years 1992-2020. For each year, the wind velocity measurements at each grid node were considered individually.
The velocities occurring at each grid node, for a specific year, were grouped into bins of 1 m/s and the 6-hourly biases were divided into 0.10 m/s bins. Subsequently, a probability density function was computed enabling the assignment of velocities falling within a specific bin to the most probable 6-hourly bias bin value for that particular year. This last operation yielded a 6-hourly map representing the most likely biases for both the wind velocity components. To further fine-tune the biases, an additional cubic spatial interpolation was performed so that hourly maps of biases were generated. These maps were then added to the original ERA5 wind field, resulting in a corrected wind field suitable for utilization in the numerical simulations.
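A minimal Python sketch of the per-node, per-year binning step is reported below (function and variable names are illustrative; the bias is assumed to be defined as IFREMER minus ERA5, so that adding it to the ERA5 field corrects towards the altimeter winds):

import numpy as np

def most_probable_bias_map(u_era5, u_ifremer, vel_bin=1.0, bias_bin=0.10):
    """For one grid node and one year of 6-hourly data, map each 1 m/s
    velocity bin to its most probable bias (assumed here as IFREMER minus ERA5).

    Returns a lookup table so that a correction can be added to every ERA5
    value falling in a given bin; bin widths follow the text (1 m/s for
    velocities, 0.10 m/s for biases).
    """
    bias = u_ifremer - u_era5
    v_bins = np.floor(u_era5 / vel_bin).astype(int)
    table = {}
    for b in np.unique(v_bins):
        d = bias[v_bins == b]
        edges = np.arange(d.min(), d.max() + bias_bin, bias_bin)
        if len(edges) < 2:                    # degenerate bin: fall back to the median
            table[b] = float(np.median(d))
            continue
        hist, edges = np.histogram(d, bins=edges)   # empirical PDF of the biases
        k = hist.argmax()
        table[b] = 0.5 * (edges[k] + edges[k + 1])  # most probable bias value
    return table

# usage sketch: u_corr = u_era5 + np.vectorize(lambda v: table[int(np.floor(v))])(u_era5)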
§.§ Model validations and statistical performance indexes
The validation of the model required at first the tuning of the integration time steps for both the SCHISM and the WWMIII models as well as the choice of the unstructured mesh size of which more details will be explained in section <ref>.
The overall quality of the results and the performance of the model were quantified through the comparison with observations provided by physical stations (H_s, T_p, D_m, h) and satellite measurements (H_s).
§.§.§ Statistical analysis
The validation was carried out by evaluating a series of statistical indicators that are commonly used in literature <cit.> for assessing the accuracy of the results such as the Correlation coefficient CC,
Centered root mean square difference RMSD and the Standard deviation σ.
These three fundamental indicators can be grouped within the Taylor diagram <cit.>, which provides an effective graphical representation of the results.
As also mentioned by <cit.>, it is advisable not to rely on the more traditional error indicators (such as the Normalized Root Mean Square Error NRMSE and the Scatter Index SI); instead, a more accurate evaluation of the errors can be performed by using the symmetrically normalized root mean square error HH <cit.>. <cit.> pointed out that, in some circumstances, simulations that exhibit negative biases tend to be characterized by smaller RMSE values: this implies that, paradoxically, these simulations perform better than simulations without bias. For these reasons, the statistical indicators considered in the validation for scalar integrated variables (H_s and T_p) are:
Correlation Coefficient CC = ∑ (S_i - S̅) · (O_i - O̅)/σ_S ·σ_O · N
Normalized Bias NBI = ∑ (S_i - O_i)/∑ O_i× 100
Hanna and Heinold indicator HH = √(∑ (S_i - O_i)^2/∑ (S_i · O_i))
HH is also referred to as a symmetrically normalized root mean square error: it takes into account both the average and scatter components of the error and it is unbiased towards simulations that underestimate the average.
In these formulations N is the total number of samples, S_i and O_i are the i-th simulated and observed values with their respective means S̅,O̅ and standard deviations σ_S,σ_O.
A different treatment is reserved for D_m, which is a circular quantity, and the indicators analyzed for this matter were normalized with an angle of 2π radiants <cit.>:
Normalized Bias NBI_D_m = ∑mod_-π,π(S_i - O_i)/2π N
Hanna and Heinold indicator HH_D_m = √(∑[mod_-π,π(S_i - O_i)]^2/N)·1/2π
where mod_-π,π(S_i - O_i) is equal to (S_i - O_i)-2π if (S_i - O_i)>π. Viceversa, mod_-π,π(S_i - O_i) is equal to (S_i - O_i)+2π if (S_i - O_i)<-π.
In order to better represent the accuracy of the comparisons, the model skill score (SKILL) <cit.>, the root mean square error (RMSE) and the bias (BIAS) were also computed.
Model skill score SKILL = ∑ (O_i - S_i)^2/∑ (O_i - O̅)^2
Root Mean Square Error RMSE = √(∑ (O_i - S_i)^2/N)
BIAS BIAS =∑ (S_i - O_i)/N
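For reproducibility, the indicators above can be computed as in the following Python sketch (the function names are our own; directional quantities are wrapped to (-π,π] as in the definition of mod_-π,π):

import numpy as np

def scalar_metrics(sim, obs):
    """CC, NBI [%], HH, SKILL, RMSE and BIAS for scalar variables (H_s, T_p, h)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    cc = np.corrcoef(sim, obs)[0, 1]
    nbi = 100.0 * np.sum(sim - obs) / np.sum(obs)
    hh = np.sqrt(np.sum((sim - obs) ** 2) / np.sum(sim * obs))
    skill = np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    bias = np.mean(sim - obs)
    return dict(CC=cc, NBI=nbi, HH=hh, SKILL=skill, RMSE=rmse, BIAS=bias)

def directional_metrics(sim_rad, obs_rad):
    """Normalized bias and HH for the mean wave direction (circular, in radians)."""
    d = np.mod(np.asarray(sim_rad) - np.asarray(obs_rad) + np.pi, 2.0 * np.pi) - np.pi
    n = d.size
    nbi_d = np.sum(d) / (2.0 * np.pi * n)
    hh_d = np.sqrt(np.sum(d ** 2) / n) / (2.0 * np.pi)
    return dict(NBI_Dm=nbi_d, HH_Dm=hh_d)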
§.§.§ Tidal and wave stations
The simulated water levels (h) and the simulated wave variables considered in this paper (H_s, T_p and D_m) were compared to the observations provided in the stations shown in Figure <ref>.
More specifically, the four tide stations T1, T2, T3 and T4, of which more details can be seen in table <ref>, are managed by the Hydrographic Office of Marine Department (HO) and provide water level values every 10 minutes. These records are referred to the Hong Kong time, which is 8 hours ahead of the Coordinated Universal Time (UTC), and their values are in meters above the Chart Datum, which is 0.146 meters below the Hong Kong Principal Datum. The only two existing wave stations in the territory of Hong Kong, W1 and W2 (details in table <ref>), are handled by the Port Works Division of the Hong Kong Civil Engineering and Development Department (CEDD). Specifically, they are bed-mounted wave recorders that provide an hourly long-term wave monitoring programme in the harbour starting from 1994.
§.§.§ Satellite wave heights
Further validation of the simulated wave parameters was done using satellite altimetry.
The data is retrieved from the European Space Agency (ESA) Sea State Climate Change Initiative (CCI) project <cit.> that produced a global dataset of H_s with a spatial resolution of approximately 6 km for each of the satellites deployed in the missions . This paper considers the database version 1.1 ( Topex satellite from 1992 to 2005) and the database version 3 (Envisat satellite from 2002 to 2012 and Jason-2 satellite from 2008 to 2019): figure <ref> shows the satellite tracks in the area of interest.
The modeled H_s were interpolated in space and time in order to match the resolution of the measurements along the satellite tracks.
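One possible way to perform this space-time matching is a nearest-node lookup on the unstructured mesh followed by linear interpolation in time. The snippet below is a simplified sketch, not the procedure actually used here: it treats longitude and latitude as planar coordinates, assumes the model output is stored as hs_model[time, node] with monotonically increasing numeric times, and all variable names are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def model_hs_along_track(track_time, track_lon, track_lat,
                         model_time, node_lon, node_lat, hs_model):
    """Sample modeled Hs at satellite footprints: nearest mesh node, linear in time."""
    tree = cKDTree(np.column_stack([node_lon, node_lat]))
    _, nearest = tree.query(np.column_stack([track_lon, track_lat]))
    hs_track = np.empty(len(track_time))
    for i, (t, j) in enumerate(zip(track_time, nearest)):
        hs_track[i] = np.interp(t, model_time, hs_model[:, j])
    return hs_track
```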
§ RESULTS
§.§ Mesh Independence analysis
One of the strictest requirements that any explicit numerical scheme has to fulfill in order to be stable and convergent is the so-called Courant-Friedrichs-Lewy (CFL) condition <cit.>: CFL < 1, where CFL = Δ t · v/Δ x. In this formula, Δ t is the integration time step, v is the flow velocity and Δ x is the mesh size. However, since SCHISM is an unstructured model based on implicit time-stepping schemes, the CFL constraint is no longer a stringent restriction, as stressed by <cit.>. For this reason, even significant changes in both the mesh size and the time step do not considerably affect the results. Several mesh sizes were tested, with minimum element lengths ranging between 0.001^∘ and 0.01^∘ in the Hong Kong waters. In particular, five different meshes that differ in the resolution around Hong Kong but share the same grid size offshore and far from the area of interest were considered in the sensitivity analysis: mesh A (minimum size 0.003^∘), mesh B (minimum size 0.004^∘), mesh C (minimum size 0.006^∘), mesh D (minimum size 0.008^∘), mesh E (minimum size 0.01^∘).
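As a purely illustrative note on the Courant number mentioned above (the numbers below are arbitrary and not taken from the actual model set-up), CFL can be evaluated as follows; for an implicit scheme such as SCHISM, values above 1 remain acceptable.

```python
def cfl_number(dt, velocity, dx):
    """Courant number CFL = dt * v / dx."""
    return dt * velocity / dx

# illustrative values only: 300 s time step, 2 m/s current, ~330 m cell (about 0.003 deg)
print(cfl_number(300.0, 2.0, 330.0))  # ~1.8, above the explicit-scheme limit of 1
```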
Figure <ref> illustrates the outcomes obtained for H_s using the above-mentioned unstructured meshes, along with the Taylor diagram at W1 for a short time period in 2001.
The correlation coefficient (CC) values remain consistently around 0.9, the root mean square difference (RMSD) hovers around 0.12 m, and the standard deviation (STD) remains approximately 0.22 m. These results show that refining the nearshore mesh beyond a minimum size of 0.008^∘ does not further improve the model skill.
For this reason, a minimum size of 0.008^∘ was selected, as it provides sufficient resolution nearshore while reducing the computational time of the numerical simulations.
§.§ ERA5 wind bias correction
As anticipated in section <ref>, ERA5 wind velocities were compared to IFREMER wind velocities in order to apply an hourly bias correction to the velocity at each node for the years between 1992 and 2020. The direct comparison between the two datasets, for both the northward and eastward wind components, gives an indication of how similar their distributions are. Due to the large domain area and the considerable number of wind nodes, three locations were selected and examined: location A (22^∘N, 114^∘E), location B (15^∘N, 114^∘E) and location C (11^∘N, 109.25^∘E).
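A minimal sketch of one possible node-wise correction is given below; it simply removes, at every node, the mean ERA5-IFREMER difference over the overlapping period and applies it to the hourly ERA5 fields. This is only one interpretation of the procedure described above, and the array names are placeholders.

```python
import numpy as np

def bias_correct(era5_u, ifremer_u):
    """Subtract the node-wise mean ERA5-IFREMER difference from the hourly ERA5 component.

    era5_u, ifremer_u: arrays of shape (n_hours, n_nodes) on a common grid and time base.
    """
    bias = np.nanmean(era5_u - ifremer_u, axis=0)   # one bias value per node
    return era5_u - bias[None, :]
```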
Figure <ref> shows the positions of these three locations and the related scatter plots, where each mark is colored by the spatial density (PDE) of nearby points, computed by means of a kernel smoothing function. The plots also display the quantiles of the IFREMER wind velocities versus the quantiles of the ERA5 wind velocities: in general, the two distributions are alike if they do not deviate from the red dashed line. It can be observed that the two datasets show a strong agreement when the absolute value of the velocity does not exceed approximately 7 m/s. In this range of velocities, the eastward and northward wind components therefore exhibit robust values of BIAS, RMSE and CC. The eastward wind component at location A is characterized by a CC equal to 0.998, an RMSE equal to 0.169 m/s and a BIAS of -0.025 m/s.
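The density colouring and the quantile-quantile comparison can be reproduced, for instance, with a Gaussian kernel density estimate and matched empirical quantiles; the following sketch (placeholder variable names) only illustrates the idea.

```python
import numpy as np
from scipy.stats import gaussian_kde

def scatter_density_and_quantiles(x, y, q=np.linspace(0.01, 0.99, 99)):
    """Point density (to colour the scatter) and matched quantiles for a Q-Q comparison."""
    xy = np.vstack([x, y])
    density = gaussian_kde(xy)(xy)        # estimated PDE at each (x, y) point
    return density, np.quantile(x, q), np.quantile(y, q)
```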
Beyond that threshold, the disparities between ERA5 and IFREMER, although still within approximately 1 m/s, begin to grow. In this case, location A shows poorer statistical metrics (CC = 0.979, RMSE = 1.207 m/s and BIAS = -1.073 m/s).
From the plots it appears that ERA5 velocities tend to be higher than IFREMER velocities for values greater than 7 m/s. However, location C shows the opposite behaviour when the absolute value of the eastward wind velocity exceeds 10 m/s: the BIAS reaches a positive value of 0.117 m/s, the RMSE is equal to 0.126 m/s and the CC is equal to 0.983. Given the diverse nature of the domain, it is challenging to identify a distinct behavior that applies uniformly across the wind nodes. Yearly colored maps were generated to offer a glimpse of how the mean wind bias is distributed over the SCS and, therefore, of where and when the bias becomes positive or negative. To enhance the visualization of the results, the colorbar range was limited to between -1 m/s and 1 m/s, since the bias typically fluctuates within these values.
Figures <ref> and <ref> show the mean bias of the two velocity components for the years 2006 and 2017. In 2006, the eastward wind velocity (u_10) biases are positive in the coastal regions between Hainan Island and Taiwan, between Taiwan and the northern part of the Philippines, between Manila and the area north of Brunei, and between Brunei and Ho Chi Minh City. In contrast, the northward wind velocity (v_10) biases are predominantly negative, with a few positive patches around Taiwan, at the Philippines and in proximity of Hainan Island. In 2017 there is a noticeable resemblance between the behavior of the u_10 and v_10 mean biases. Similar to the biases observed in 2006, the coastline extending from the northern open boundary to the left edge of the southern open boundary of the numerical domain exhibits comparable biases. However, in the eastern region of the domain, in certain areas around the Philippines, and generally at deep water locations, the mean bias of both components exhibits the opposite sign.
§.§ Model performance
The overall performance of the SCHISM-WWMIII two-way coupled model deployed in this study was assessed by analyzing the water level elevations h, significant wave heights H_s, peak wave periods T_p and mean wave directions D_m.
As mentioned in section <ref>, the four tide stations in Hong Kong (T1, T2, T3, T4) operated by the Hydrographic Office of the Marine Department (HO) were considered for the water level validation. We compared the hourly modeled water elevations against the hourly observed ones at each tide station over the entire simulation period (1970-2022). Generally, all the locations exhibit similar behavior and demonstrate solid agreement with the observations, with the exception of a few negative observed water levels for which the corresponding simulated values are positive.
Table <ref> summarizes the values of the statistical parameters evaluated to assess the performance of the comparisons. All four stations are characterized by CC values between 0.780 (T4) and 0.845 (T1), BIAS values of the order of 0.1 m and RMSE values between 0.369 m and 0.407 m. The SKILL values at all locations are greater than 0.7, with the highest value of 0.818 at T2, which suggests a robust agreement between the observations and the simulated variables.
Figure <ref> displays the Taylor diagrams for all the tide stations in order to provide more information about the comparison. Here, it is possible to see that the RMSD values are always less than 0.4 m and that the standard deviations of the simulated water levels h fluctuate around 0.6 m.
Regarding the wave height validation, we first compared the numerical model results with the satellite observations. In particular, the significant wave heights H_s over the whole numerical domain were interpolated along the satellite tracks, as described in <ref>.
Tables <ref>, <ref> and <ref> show the findings of the yearly statistical analysis for each of the satellite missions considered. It can be observed that the correlation between the satellite data and the simulation is high, with CC values between 0.88 (TOPEX satellite, year 1992) and 0.94 (JASON-2 satellite, year 2011). The biases are almost always negative, with a minimum of -0.008 (ENVISAT satellite, 2012), the only exception being a positive bias of 0.012 (JASON-2 satellite, year 2018). The RMSE values are relatively low and do not exceed 0.475 m. The normalized bias (NBI) values, which serve as indicators of the average component of the error, are minimal, and the same can be said for the Hanna and Heinold (HH) indicators, which oscillate around 0.2, suggesting good quality of the comparisons.
The statistical parameters were also evaluated over the entire duration of each satellite mission, as shown in table <ref> and in the scatter plots of Figure <ref>. Considering the whole observation periods confirms the overall good performance of the present hindcast in reproducing the significant wave heights observed by the satellites.
Notably, the bias and, consequently, the normalized bias are on average negative, indicating that the numerical model tends to underestimate the wave heights.
Regarding the in-situ comparisons, the historical data provided by the two bed-mounted wave recorders in Hong Kong (W1 and W2) managed by the CEDD were used to validate the model and check the quality of the results. The exact coordinates and other details are given in tables <ref> and <ref>.
The quality assessment was performed by comparing the significant wave height (H_s), peak wave period (T_p) and mean wave direction (D_m) recorded at the wave stations with the simulated ones and by carrying out a statistical analysis. As can be seen from the scatter plots and Taylor diagrams in figure <ref>, the significant wave heights (H_s) observed at both W1 and W2 are well reproduced by the simulation.
Results at W1 and W2 generally show good agreement for H_s values greater than 1 m, with only a few outliers. Significant wave heights smaller than 1 m tend to be overestimated by the numerical model deployed here. The complexity of the coastline and of the bathymetry around the two investigated sites could explain why the comparison is not optimal. The Taylor diagrams show RMSD values around 0.15 m and standard deviations around 0.2 m for both stations, while the correlation coefficients CC reach values of 0.64 and 0.76 for W1 and W2, respectively. More details can be seen in table <ref>.
The same table reports the statistical parameters for the peak wave periods (T_p). The correlation values CC for both W1 and W2 are considerably lower than those obtained for the significant wave heights (H_s), the RMSD values are roughly equal to 2 s and the standard deviations of the simulated peak wave periods (T_p) are slightly less than 2 s.
The comparative analysis of the mean wave directions (D_m) was conducted using wave roses, as depicted in Figure <ref>. These visual representations illustrate the average directions from which the waves originate, along with the corresponding significant wave heights (H_s). As a result of the geographical positioning of the two wave stations, waves originating from the second quadrant (between North and West) are not captured in the simulations. Nevertheless, the observed series of mean wave directions for both stations exhibit the occurrence of minor waves coming from that direction. This is likely attributable to the influence of marine traffic within the Hong Kong harbour, including boat wakes, as well as to possible errors originating from the wave stations. The analysis reveals that at station W1 the most frequently occurring mean wave direction aligns well with the simulated results of the numerical model. However, this congruence does not hold for station W2: there, the most frequently observed mean wave direction is South-East, whereas the most frequently simulated one tends to be South/South-West. The statistical analysis on the whole dataset for W1 gives a HH_D_m value of 0.289 and a NBI_D_m of 0.250; station W2 shows almost identical values, with HH_D_m equal to 0.288 and NBI_D_m equal to 0.250. The D_m results therefore show a relatively poor agreement between the outputs of the numerical model and the observations, as also highlighted by <cit.>. <cit.> compared the outcomes of the Operational Marine Forecasting System (OMFS) managed by the Hong Kong Observatory to the observations at both W1 and W2, stressing that evident discrepancies arise.
§ DISCUSSION
§.§ Tidal levels
As shown in the previous section, the simulated water levels extracted at the four tidal stations in Hong Kong (T1,T2,T3,T4) show a good agreement when compared to the in-situ data. In fact, CC values range between 0.780 and 0.845, BIAS values are of the order of 0.1 m and RMSE values range between 0.369 m and 0.407 m.
We also performed a harmonic analysis of the water levels and compared it with the observations. Figure <ref> shows the Power Spectral Density function for one station, i.e. station T2, as representative of the general behaviour. It can be noted that the diurnal constituents are very well captured by the model, as are the (less intense) shallow water constituents with periods around 6 and 8 hours. The largest difference is observed for the semidiurnal constituents, especially for the M2 tide, which is the dominant one; most of the differences between the simulations and the observations can be attributed to this. The overestimation of the most intense tidal components could be attributed to the difficulty of 2D depth-averaged models in correctly representing the frictional dissipation compared to more accurate 3D models, even hydrostatic ones <cit.>.
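A spectrum of this kind can be estimated, for example, with Welch's method applied to the hourly water-level record; the snippet below is a generic sketch (the actual analysis may differ), with the diurnal band near 1/24 cycles per hour and the semidiurnal M2 band near 1/12.42 cycles per hour.

```python
import numpy as np
from scipy.signal import welch

def water_level_psd(h, dt_hours=1.0):
    """Power spectral density of an hourly water-level series (frequency in cycles/hour)."""
    freq, psd = welch(np.asarray(h, float), fs=1.0 / dt_hours,
                      nperseg=4096, detrend="linear")
    return freq, psd   # diurnal peak near 0.0417 cph, semidiurnal (M2) near 0.0805 cph
```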
Indeed, when comparing the performance of the present model with other global <cit.> or regional studies <cit.>, we noticed that tidal levels are predicted with lower accuracy here. Note that the mentioned studies were based on three-dimensional hydrodynamic models. The comparison, and consequently the statistical metrics, are strongly influenced by the resolution of the mesh, the details of the bathymetry and 3D baroclinic effects <cit.>. Regarding regional studies, <cit.> and <cit.> carried out the water level validation at the same locations deploying a 3D approach, increased resolution in shallow water areas, and temperature and salinity as additional input fields. In this way they improved the performance of the model, leading to CC values between 0.95 and 0.98, RMSE values of approximately 0.10 m at all tide stations and SKILL values between 0.97 and 0.99.
§.§ Simulations of extreme events
Large scale wave hindcasts are known to underperform in the case of extreme events, typically underestimating the significant wave heights <cit.>. This tendency is commonly ascribed to the atmospheric forcing reanalyses, which tend to underpredict the wind and atmospheric pressure extremes typical of tropical storms or typhoons <cit.>. We acknowledge the limitations of the present hindcast in correctly reproducing the wave fields generated by extreme events such as tropical storms and typhoons, which would require a dedicated model set-up. In particular, several approaches have been shown to effectively reproduce the waves and storm surges caused by tropical extreme events <cit.>.
In the context of extreme events, operational wave models have been observed to underestimate the peaks of extreme wave heights, with errors reaching several meters; this is frequently observed when comparing model forecasts and hindcasts to observations during the passage of cyclones <cit.>. A poor understanding of the physics that governs extreme regimes, the accuracy of the wind forcing data, and the challenge of representing the aforementioned dynamics on discrete grids in time and space are often advocated as the main sources of discrepancy. These factors lead to the smoothing and underestimation of maximum values, sharp gradients, and overall variability <cit.>. This is particularly the case for the area of highest waves, which is found in the vicinity of the cyclone's forward motion <cit.>.
In order to discuss the performance of the present hindcast in simulating extreme events, we focused on three periods recorded by the wave buoys in the Hong Kong waters during the typhoon seasons of 2003, 2008 and 2018. In particular, we used the recordings from W2 (see Figure <ref>), which is more exposed to open ocean waves.
In the following, we briefly describe the simulated extreme events, providing the main characteristics of the tropical storms and typhoons. The data have been collected from the official reports issued yearly by the Hong Kong Observatory, the government department responsible for monitoring and forecasting the weather.
During the first simulation period, from July 1 to August 1, 2003, two events were observed close to each other. Tropical Storm Koni (0308) developed as a tropical depression about 1000 km east-southeast of Manila on 16 July, intensified into a severe tropical storm upon entering the South China Sea and attained a maximum wind speed of about 100 km/h near its centre on 20 July. Koni made landfall in northern Vietnam on 22 July.
Soon after Koni passed over the GBA, a new typhoon named Imbudo (0307) was recorded. Imbudo developed as a tropical depression about 730 km southwest of Guam on 17 July and intensified, reaching typhoon intensity on 20 July, with a maximum wind speed of about 185 km/h near its centre on 21 July. The trajectory of Imbudo was almost parallel to that of Koni, but it made landfall further north, near Yangjiang in western Guangdong. The maximum significant wave heights recorded by the wave buoy were about 2.4 m and 3.4 m under Koni and Imbudo, respectively, see Figure <ref> panel a).
Higher significant wave heights were recorded in the second simulation period, from August 1 to October 1, 2008. During this period three events crossed the SCS and made landfall on the Chinese coastline: Severe Tropical Storm Kammuri (0809) between 4 and 8 August; Typhoon Nuri (0812) between 17 and 23 August; and Typhoon Hagupit (0814) between 19 and 25 September. In particular, the two typhoons Nuri and Hagupit formed as tropical depressions over the western North Pacific and then intensified to typhoons in the SCS, reaching maximum wind speeds of about 180 km/h. These extreme winds led to H_s between 3 and 3.5 m, see Figure <ref> panel c).
The third simulation period covered one of the most intense typhoons ever recorded by the wave buoy in the Hong Kong waters, namely Super Typhoon Mangkhut (1822), between 7 and 17 September 2018. During the simulation period (from August 15 to October 15) other tropical storms were reported by the Hong Kong Observatory in its annual report, namely Severe Tropical Storm Bebinca (1816) between 9 and 17 August 2018 and Tropical Storm Barijat (1823) between 10 and 13 September 2018.
Mangkhut developed into a super typhoon on 11 September, reaching its peak intensity before making landfall over Luzon with an estimated maximum sustained wind of 250 km/h near its centre. The maximum H_s recorded at W2 was about 4.4 m, see Figure <ref> panel e). Due to its devastating impact, Mangkhut received considerable attention, and several studies have been published both on the analysis of the atmospheric event <cit.> and on its impact on the coastal environment and urban areas <cit.>.
We compared the simulated significant wave heights H_s against the buoy measurements and evaluated the model performance using Taylor diagrams. The results of the analysis are shown in Figure <ref>. Every row corresponds to a simulation period: the left column (panels a), c) and e)) shows the comparison of the time series, whereas the right column (panels b), d) and f)) shows the corresponding Taylor diagrams.
The correlation coefficient was between 0.807 and 0.895, with the lowest value for the 2018 events (CC 0.807) and the highest during the 2003 events (CC 0.895). The RMSD was consistently found to be around 0.2 m (2003: RMSD 0.192 m, 2008: RMSD 0.207 m and 2018: RMSD 0.210 m), and the standard deviation was around 0.34 m (2003: STD 0.393 m, 2008: STD 0.307 m and 2018: STD 0.334 m). In terms of statistical parameters, the overall performance of the numerical model can be considered fairly satisfactory. However, as expected, the maximum values of H_s are not well described by the numerical simulations. Wave peaks around 2 m are reasonably captured by the model, with the only exception of Tropical Storm Koni (July 2003). The model fails to reproduce wave heights of 3 m or above, which were repeatedly observed during several events in the period analyzed.
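The quantities shown in the Taylor diagrams can be obtained as follows, assuming (as is standard for Taylor diagrams) that RMSD denotes the centred, i.e. bias-removed, root-mean-square difference; the sketch uses placeholder array names.

```python
import numpy as np

def taylor_statistics(sim, obs):
    """CC, centred RMSD and standard deviation of the simulated series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    cc = np.corrcoef(sim, obs)[0, 1]
    rmsd = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2))
    return cc, rmsd, sim.std()
```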
Overall, the performance with regard to extreme events can be considered in line with previous wave hindcasts <cit.>.
§.§ Comparison with previous hindcast products and limitations
Several global wave and storm surge hindcasts are now available; see <cit.> for a recent review and analysis of fourteen existing global wave hindcast and reanalysis products, or the recent global high-resolution coupled hindcast <cit.>. Global products are necessarily developed on relatively coarse grids, with resolutions ranging from about 0.25^∘ in specific subdomains up to 1.5^∘ <cit.>.
Figure <ref> shows the distribution of four statistical parameters (RMSE, NBI, HH and CC) obtained by comparing the simulated significant wave heights with the satellite altimeter data, considering all tracks between 1992 and 2019 and using a spatial binning of 0.5^∘×0.5^∘. In terms of these statistical parameters, the offshore skill of the present model is comparable with that of most global hindcast datasets. A detailed inspection of Figure <ref> reveals that the poorest performance is observed along the Taiwan Strait and the Luzon Strait, where RMSE, NBI and HH assume their highest values, whereas the correlation coefficient remains relatively high. This lower performance could be ascribed to the strong large-scale currents that are not well described by the present model setup <cit.>. Two further spots of high RMSE and NBI values can be observed around the Paracel Islands (around 112^∘ E, 16^∘ N) and the Spratly Islands (around 113^∘ E, 8^∘ N). The lower resolution of the mesh in the open ocean does not allow for a correct representation of these two archipelagos.
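A straightforward way to build such maps is to bin the along-track pairs on a regular 0.5° grid and evaluate the chosen metric in each cell; the sketch below (RMSE only, placeholder names) shows the idea, and the other indicators can be binned in the same way.

```python
import numpy as np
from scipy.stats import binned_statistic_2d

def binned_rmse(lon, lat, sim, obs, cell=0.5):
    """RMSE of sim vs. obs on a regular cell x cell degree grid."""
    lon, lat = np.asarray(lon, float), np.asarray(lat, float)
    sq_err = (np.asarray(sim, float) - np.asarray(obs, float)) ** 2
    lon_edges = np.arange(np.floor(lon.min()), np.ceil(lon.max()) + cell, cell)
    lat_edges = np.arange(np.floor(lat.min()), np.ceil(lat.max()) + cell, cell)
    mse, _, _, _ = binned_statistic_2d(lon, lat, sq_err, statistic="mean",
                                       bins=[lon_edges, lat_edges])
    return np.sqrt(mse), lon_edges, lat_edges
```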
Regarding the nearshore skill, as expected, the statistical parameters of the present study are in general better than those of the larger-scale models <cit.>. The main reasons for the better performance of the model lie in the higher resolution and in the model configuration, namely the two-way coupling used for the present analysis. In fact, we imposed not only the atmospheric forcing but also the tidal elevation at the open boundaries to improve the description of the non-linear interactions in the nearshore zones <cit.>.
If we consider regional hindcasts of the South China Sea, a few studies have been published in recent years <cit.>. These wave climate hindcasts were obtained using WAVEWATCH III, SWAN and TOMAWAC, with grid resolutions ranging from 0.31^∘-1^∘ offshore to 0.15^∘-0.01^∘ along the coasts, and with no coupling.
The wave models were validated against satellite altimeter <cit.> and wave buoy or ADCP measurements <cit.>. It is worth noting that in all three cases the validations against wave buoys cover time ranges much shorter than the overall hindcast range, and that the wave buoys were located much further offshore than the stations used in the present study. The normalized bias (NB) values reported in <cit.> are between -0.02 and 0.03 for the 6-month in situ observations and within ±0.04 for the altimeter observations, while the symmetrically normalized root mean square error HH was reported between 0.18 and 0.23 for the in situ measurements. <cit.> reported RMSE values between 0.27 and 0.44 m, with an average of 0.35 m, for one year of satellite tracks (2009), whereas the RMSE for the four-month in situ observations was 0.22 m. Finally, <cit.> reported, for the period of in situ validation using seven wave buoys, a correlation coefficient CC between 0.59 and 0.93 and RMSE values between 0.21 m and 0.55 m.
Comparing the present results, summarized in Table <ref> for the mean values over all satellite tracks and Table <ref> for the two in situ stations, we can safely state that the performance is comparable or even slightly better if we consider the nearshore comparison with the wave buoys. Note that the wave stations used for the present validation are located nearshore (both at water depths between 9 m and 10 m) and are surrounded by several islands, which makes the comparison even more challenging. The inclusion of the nearshore wind and tidal currents in the present model, together with the fairly high resolution of the mesh, has improved the skill of the model in predicting very nearshore wave propagation.
In the future, the present dataset can be improved with specific simulations aimed at better predicting cyclone-related storm surges and waves, using dedicated storm track archives for the atmospheric forcing, e.g. the IBTrACS best-track archive <cit.>, and the above-mentioned implementations available in the SCHISM-WWMIII model package.
It is worth mentioning that the analysis conducted in this work does not account for steric effects (baroclinic conditions), whose inclusion is known to improve the simulation of water levels <cit.>. As highlighted by <cit.> and <cit.>, the contribution of steric effects in the SCS plays a vital role.
Moreover, the coupling between waves and currents is currently performed adopting a 2D depth-averaged approach for the circulation model. A three-dimensional model would improve the representation of the complex open ocean and coastal circulation, most probably leading to better predictions of the wave climate.
§ CONCLUSIONS
In this study the fully coupled SCHISM-WWMIII model was used to generate a comprehensive 53-year wave hindcast, spanning from 1970 to 2022, over most of the South China Sea (SCS). The detailed unstructured mesh and the high resolution in the Hong Kong waters allowed the model to successfully capture the non-linear processes caused by the mutual effects of waves and currents, particularly in shallow regions.
Five different meshes were considered in the sensitivity analysis, and the best-performing mesh, which minimized the computational time while preserving numerical accuracy, was utilized for the numerical simulations. The comparison between model outputs (H_s, T_p, D_m and h) and observations showed close agreement offshore and slightly reduced accuracy in nearshore areas, which are notoriously challenging because of the complexity induced by the coastline and the bathymetry.
In general, the statistical indicators related to water level and significant wave height showed high correlations and relatively low errors, whereas peak wave periods and mean wave directions proved more challenging to reproduce.
The observational data from wave stations and satellites, as well as the wind velocities and water level forcing, were retrieved from online databases.
Further research efforts could focus on the analysis of the coastal processes amplified by the presence of small islands and complex coastlines. Employing a more detailed mesh and bathymetry could also improve the statistical indicators and enhance the performance of the model. This study serves as a preliminary stage upon which more comprehensive studies on wave climate, wave trends and wave energy assessment can be built.
§ REFERENCES
[Ardhuin et al.(2001)Ardhuin, Herbers and
O'Reilly]ardhuin2001hybrid
authorArdhuin, F., authorHerbers, T.,
authorO'Reilly, W., year2001.
titleA hybrid eulerian–lagrangian model for spectral wave
evolution with application to bottom friction on the continental shelf.
journalJournal of physical oceanography
volume31, pages1498–1516.
[Ardhuin et al.(2010)Ardhuin, Rogers, Babanin, Filipot, Magne, Roland,
Van Der Westhuysen, Queffeulou, Lefevre, Aouf
et al.]ardhuin2010semiempirical
authorArdhuin, F., authorRogers, E.,
authorBabanin, A.V., authorFilipot, J.F.,
authorMagne, R., authorRoland, A.,
authorVan Der Westhuysen, A., authorQueffeulou, P.,
authorLefevre, J.M., authorAouf, L., et al.,
year2010.
titleSemiempirical dissipation source functions for ocean
waves. part i: Definition, calibration, and validation.
journalJournal of Physical Oceanography
volume40, pages1917–1941.
[Ardhuin et al.(2012)Ardhuin, Roland, Dumas, Bennis, Sentchev, Forget,
Wolf, Girard, Osuna and Benoit]ardhuin2012numerical
authorArdhuin, F., authorRoland, A.,
authorDumas, F., authorBennis, A.C.,
authorSentchev, A., authorForget, P.,
authorWolf, J., authorGirard, F.,
authorOsuna, P., authorBenoit, M.,
year2012.
titleNumerical wave modeling in conditions with strong
currents: Dissipation, refraction, and relative wind.
journalJournal of Physical Oceanography
volume42, pages2101–2120.
[Battjes and Janssen(1978)]battjes1978energy
authorBattjes, J.A., authorJanssen, J.,
year1978.
titleEnergy loss and set-up due to breaking of random
waves, in: booktitleCoastal engineering 1978, pp.
pages569–587.
[Benetazzo et al.(2022)Benetazzo, Davison, Barbariol, Mercogliano,
Favaretto and Sclavo]benetazzo2022correction
authorBenetazzo, A., authorDavison, S.,
authorBarbariol, F., authorMercogliano, P.,
authorFavaretto, C., authorSclavo, M.,
year2022.
titleCorrection of era5 wind for regional climate
projections of sea waves, water, 14, 1590.
[Benoit et al.(1997)Benoit, Marcos and Becq]benoit1997development
authorBenoit, M., authorMarcos, F.,
authorBecq, F., year1997.
titleDevelopment of a third generation shallow-water wave
model with unstructured spatial meshing, in: booktitleCoastal
Engineering 1996, pp. pages465–478.
[Booij et al.(1999)Booij, Ris and Holthuijsen]booij1999third
authorBooij, N., authorRis, R.C.,
authorHolthuijsen, L.H., year1999.
titleA third-generation wave model for coastal regions: 1.
model description and validation.
journalJournal of geophysical research: Oceans
volume104, pages7649–7666.
[Campos et al.(2022)Campos, Gramcianinov, de Camargo and
da Silva Dias]campos2022assessment
authorCampos, R.M., authorGramcianinov, C.B.,
authorde Camargo, R., authorda Silva Dias, P.L.,
year2022.
titleAssessment and calibration of era5 severe winds in
the atlantic ocean using satellite data.
journalRemote Sensing volume14,
pages4918.
[Cardone et al.(1996)Cardone, Jensen, Resio, Swail and
Cox]cardone1996evaluation
authorCardone, V.J., authorJensen, R.E.,
authorResio, D.T., authorSwail, V.R.,
authorCox, A.T., year1996.
titleEvaluation of contemporary ocean wave models in rare
extreme events: The “halloween storm” of october 1991 and the “storm of
the century” of march 1993.
journalJournal of Atmospheric and Oceanic Technology
volume13, pages198–230.
[Carrere et al.(2015)Carrere, Lyard, Cancet and
Guillot]carrere2015fes
authorCarrere, L., authorLyard, F.,
authorCancet, M., authorGuillot, A.,
year2015.
titleFes 2014, a new tidal model on the global ocean with
enhanced accuracy in shallow seas and in the arctic region, in:
booktitleEGU general assembly conference abstracts, p.
pages5481.
[Cavaleri(2009)]cavaleri2009wave
authorCavaleri, L., year2009.
titleWave modeling—missing the peaks.
journalJournal of Physical Oceanography
volume39, pages2757–2778.
[Chen et al.(2003)Chen, Liu and Beardsley]chen2003unstructured
authorChen, C., authorLiu, H.,
authorBeardsley, R.C., year2003.
titleAn unstructured grid, finite-volume,
three-dimensional, primitive equations ocean model: application to coastal
ocean and estuaries.
journalJournal of atmospheric and oceanic technology
volume20, pages159–186.
[Cheng and Qi(2010)]cheng2010steric
authorCheng, X., authorQi, Y., year2010.
titleOn steric and mass-induced contributions to the
annual sea-level variations in the south china sea.
journalGlobal and Planetary Change volume72,
pages227–233.
[Collins et al.(2021)Collins, Hesser, Rogowski and
Merrifield]collins2021altimeter
authorCollins, C., authorHesser, T.,
authorRogowski, P., authorMerrifield, S.,
year2021.
titleAltimeter observations of tropical cyclone-generated
sea states: Spatial analysis and operational hindcast evaluation.
journalJournal of Marine Science and Engineering
volume9, pages216.
[Courant et al.(1928)Courant, Friedrichs and
Lewy]courant1928partiellen
authorCourant, R., authorFriedrichs, K.,
authorLewy, H., year1928.
titleÜber die partiellen differenzengleichungen der
mathematischen physik.
journalMathematische annalen volume100,
pages32–74.
[Di Luccio et al.(2018)Di Luccio, Benassai, Budillon, Mucerino,
Montella and Pugliese Carratelli]di2018wave
authorDi Luccio, D., authorBenassai, G.,
authorBudillon, G., authorMucerino, L.,
authorMontella, R., authorPugliese Carratelli, E.,
year2018.
titleWave run-up prediction and observation in a
micro-tidal beach.
journalNatural Hazards and Earth System Sciences
volume18, pages2841–2857.
[Dietrich et al.(2011)Dietrich, Zijlema, Westerink, Holthuijsen,
Dawson, Luettich Jr, Jensen, Smith, Stelling and
Stone]dietrich2011modeling
authorDietrich, J., authorZijlema, M.,
authorWesterink, J., authorHolthuijsen, L.,
authorDawson, C., authorLuettich Jr, R.,
authorJensen, R., authorSmith, J.,
authorStelling, G., authorStone, G.,
year2011.
titleModeling hurricane waves and storm surge using
integrally-coupled, scalable computations.
journalCoastal Engineering volume58,
pages45–65.
[Dodet et al.(2020)Dodet, Piolle, Quilfen, Abdalla, Accensi, Ardhuin,
Ash, Bidlot, Gommenginger, Marechal et al.]dodet2020sea
authorDodet, G., authorPiolle, J.F.,
authorQuilfen, Y., authorAbdalla, S.,
authorAccensi, M., authorArdhuin, F.,
authorAsh, E., authorBidlot, J.R.,
authorGommenginger, C., authorMarechal, G., et al.,
year2020.
titleThe sea state cci dataset v1: towards a sea state
climate data record based on satellite observations.
journalEarth System Science Data volume12,
pages1929–1951.
[Donelan et al.(1993)Donelan, Dobson, Smith and
Anderson]donelan1993dependence
authorDonelan, M.A., authorDobson, F.W.,
authorSmith, S.D., authorAnderson, R.J.,
year1993.
titleOn the dependence of sea surface roughness on wave
development.
journalJournal of physical Oceanography
volume23, pages2143–2149.
[Durrant et al.(2014)Durrant, Greenslade, Hemer and
Trenham]durrant2014global
authorDurrant, T., authorGreenslade, D.,
authorHemer, M., authorTrenham, C.,
year2014.
titleA global wave hindcast focussed on the Central and
South Pacific. volume volume40.
publisherCiteseer.
[Eldeberky(1996)]eldeberky1996nonlinear
authorEldeberky, Y., year1996.
titleNonlinear transformation of wave spectra in the
nearshore .
[Emanuel and Rotunno(2011)]emanuel2011self
authorEmanuel, K., authorRotunno, R.,
year2011.
titleSelf-stratification of tropical cyclone outflow. part
i: Implications for storm structure.
journalJournal of the Atmospheric Sciences
volume68, pages2236–2249.
[Federico et al.(2017)Federico, Pinardi, Coppini, Oddo, Lecci and
Mossa]federico2017coastal
authorFederico, I., authorPinardi, N.,
authorCoppini, G., authorOddo, P.,
authorLecci, R., authorMossa, M.,
year2017.
titleCoastal ocean forecasting with an unstructured grid
model in the southern adriatic and northern ionian seas.
journalNatural Hazards and Earth System Sciences
volume17, pages45–59.
[Galland et al.(1991)Galland, Goutal and
Hervouet]galland1991telemac
authorGalland, J.C., authorGoutal, N.,
authorHervouet, J.M., year1991.
titleTelemac: A new numerical model for solving shallow
water equations.
journalAdvances in water resources volume14,
pages138–148.
[Greatbatch(1994)]greatbatch1994note
authorGreatbatch, R.J., year1994.
titleA note on the representation of steric sea level in
models that conserve volume rather than mass.
journalJournal of Geophysical Research: Oceans
volume99, pages12767–12771.
[Group(1988)]group1988wam
authorGroup, T.W., year1988.
titleThe wam model—a third generation ocean wave
prediction model.
journalJournal of physical oceanography
volume18, pages1775–1810.
[Hanna and Heinold(1985)]hanna1985development
authorHanna, S.R., authorHeinold, D.W.,
year1985.
titleDevelopment and application of a simple method for
evaluating air quality models.
number4409, publisherAmerican Petroleum
Institute.
[Hasselmann et al.(1973)Hasselmann, Barnett, Bouws, Carlson,
Cartwright, Enke, Ewing, Gienapp, Hasselmann, Kruseman
et al.]hasselmann1973measurements
authorHasselmann, K., authorBarnett, T.P.,
authorBouws, E., authorCarlson, H.,
authorCartwright, D.E., authorEnke, K.,
authorEwing, J., authorGienapp, A.,
authorHasselmann, D., authorKruseman, P., et al.,
year1973.
titleMeasurements of wind-wave growth and swell decay
during the joint north sea wave project (jonswap).
journalErgaenzungsheft zur Deutschen Hydrographischen
Zeitschrift, Reihe A .
[Hasselmann(1985)]hasselmann1985computations
authorHasselmann, S., year1985.
titleComputations and parametrizations of the nonlinear
energy transfer in a gravity wave spectrum. part i: a new method for
efficient computations of the exact nonlinear transfer integral.
journalJournal of physical oceanography ,
pages1369–1377.
[He et al.(2022a)He, Yin, Stocchino, Wai and Li]he2022coastal
authorHe, C., authorYin, Z.Y.,
authorStocchino, A., authorWai, O.W.H.,
authorLi, S., year2022a.
titleThe coastal macro-vortices dynamics in hong kong
waters and its impact on water quality.
journalOcean Modelling volume175,
pages102034.
[He et al.(2022b)He, Chan, Li, Li, Zhang and
Yang]he2022observations
authorHe, J., authorChan, P., authorLi, Q.,
authorLi, L., authorZhang, L., authorYang,
H., year2022b.
titleObservations of wind and turbulence structures of
super typhoons hato and mangkhut over land from a 356 m high meteorological
tower.
journalAtmospheric Research volume265,
pages105910.
[He et al.(2020a)He, He, Li, Chan, Zhang, Yang and
Li]he2020observational
authorHe, J., authorHe, Y., authorLi, Q.,
authorChan, P., authorZhang, L.,
authorYang, H., authorLi, L., year2020a.
titleObservational study of wind characteristics, wind
speed and turbulence profiles during super typhoon mangkhut.
journalJournal of Wind Engineering and Industrial
Aerodynamics volume206, pages104362.
[He et al.(2020b)He, He, Chen, Chan, Fu and Li]he2020insights
authorHe, Y., authorHe, J., authorChen, W.,
authorChan, P., authorFu, J., authorLi,
Q., year2020b.
titleInsights from super typhoon mangkhut (1822) for wind
engineering practices.
journalJournal of Wind Engineering and Industrial
Aerodynamics volume203, pages104238.
[Hemer et al.(2013)Hemer, Katzfey and Trenham]hemer2013global
authorHemer, M.A., authorKatzfey, J.,
authorTrenham, C.E., year2013.
titleGlobal dynamical projections of surface ocean wave
climate for a future high greenhouse gas emission scenario.
journalOcean Modelling volume70,
pages221–245.
[Hersbach et al.(2023)Hersbach, Bell, Berrisford, Biavati,
Horányi, Sabater, Nicolas, Peubey, Radu, Rozum et al.]hersbach2023era5
authorHersbach, H., authorBell, B.,
authorBerrisford, P., authorBiavati, G.,
authorHorányi, A., authorSabater, J.M.,
authorNicolas, J., authorPeubey, C.,
authorRadu, R., authorRozum, I., et al.,
year2023.
titleEra5 hourly data on single levels from 1940 to
present.,(2023).
journalCopernicus Climate Change Service (C3S) Climate
Data Store (CDS)< date-in-citation content-type=" access-date" iso-8601-date
, pages10–26.
[Hodges et al.(2017)Hodges, Cobb and Vidale]hodges2017well
authorHodges, K., authorCobb, A.,
authorVidale, P.L., year2017.
titleHow well are tropical cyclones represented in
reanalysis datasets?
journalJournal of Climate volume30,
pages5243–5264.
[Holland(1980)]holland1980analytic
authorHolland, G.J., year1980.
titleAn analytic model of the wind and pressure profiles
in hurricanes .
[Huang et al.(2022)Huang, Zhang, Wang, Ye, Moghimi, Myers and
Yu]huang2022tidal
authorHuang, W., authorZhang, Y.J.,
authorWang, Z., authorYe, F.,
authorMoghimi, S., authorMyers, E.,
authorYu, H., year2022.
titleTidal simulation revisited.
journalOcean Dynamics volume72,
pages187–205.
[Jiang et al.(2019)Jiang, Wu, Ding, Ma, Fang and
Wang]jiang2019assessment
authorJiang, B., authorWu, G., authorDing,
J., authorMa, C., authorFang, Y.,
authorWang, X., year2019.
titleAssessment of the wave energy resource in the south
china sea, in: booktitleProceedings of the Institution of Civil
Engineers-Maritime Engineering, organizationThomas Telford Ltd.
pp. pages23–33.
[Knapp et al.(2010)Knapp, Kruk, Levinson, Diamond and
Neumann]knapp2010international
authorKnapp, K.R., authorKruk, M.C.,
authorLevinson, D.H., authorDiamond, H.J.,
authorNeumann, C.J., year2010.
titleThe international best track archive for climate
stewardship (ibtracs) unifying tropical cyclone data.
journalBulletin of the American Meteorological Society
volume91, pages363–376.
[Kong et al.(2024)Kong, Lam, Lau, Chow, Chong, Chan and
Leung]kong2024model
authorKong, W., authorLam, C., authorLau,
D., authorChow, C., authorChong, S.,
authorChan, P., authorLeung, N.,
year2024.
titleModel validation and applications of wave and current
forecasts from the hong kong observatory's operational marine forecasting
system.
journalOcean Modelling , pages102393.
[Kumar et al.(2012)Kumar, Voulgaris, Warner and
Olabarrieta]kumar2012implementation
authorKumar, N., authorVoulgaris, G.,
authorWarner, J.C., authorOlabarrieta, M.,
year2012.
titleImplementation of the vortex force formalism in the
coupled ocean-atmosphere-wave-sediment transport (coawst) modeling system for
inner shelf and surf zone applications.
journalOcean Modelling volume47,
pages65–95.
[Liang et al.(2016)Liang, Liu, Li, Wu and Lee]liang2016wave
authorLiang, B., authorLiu, X., authorLi,
H., authorWu, Y., authorLee, D.,
year2016.
titleWave climate hindcasts for the bohai sea, yellow sea,
and east china sea.
journalJournal of Coastal Research volume32,
pages172–180.
[Lodise et al.(2024)Lodise, Merrifield, Collins, Behrens and
Terrill]lodise2024performance
authorLodise, J., authorMerrifield, S.,
authorCollins, C., authorBehrens, J.,
authorTerrill, E., year2024.
titlePerformance of era5 wind speed and significant wave
height within extratropical cyclones using collocated satellite radar
altimeter measurements.
journalCoastal Engineering Journal volume66,
pages89–114.
[Luettich et al.(1992)Luettich, Westerink, Scheffner
et al.]luettich1992adcirc
authorLuettich, R.A., authorWesterink, J.J.,
authorScheffner, N.W., et al., year1992.
titleAdcirc: an advanced three-dimensional circulation
model for shelves, coasts, and estuaries. report 1, theory and methodology of
adcirc-2dd1 and adcirc-3dl .
[Lynch et al.(1996)Lynch, Ip, Naimie and
Werner]lynch1996comprehensive
authorLynch, D.R., authorIp, J.T.,
authorNaimie, C.E., authorWerner, F.E.,
year1996.
titleComprehensive coastal circulation model with
application to the gulf of maine.
journalContinental Shelf Research volume16,
pages875–906.
[Mentaschi et al.(2013a)Mentaschi, Besio, Cassola and
Mazzino]mentaschi2013developing
authorMentaschi, L., authorBesio, G.,
authorCassola, F., authorMazzino, A.,
year2013a.
titleDeveloping and validating a forecast/hindcast system
for the mediterranean sea.
journalJournal of Coastal Research ,
pages1551–1556.
[Mentaschi et al.(2013b)Mentaschi, Besio, Cassola and
Mazzino]mentaschi2013problems
authorMentaschi, L., authorBesio, G.,
authorCassola, F., authorMazzino, A.,
year2013b.
titleProblems in rmse-based wave model validations.
journalOcean Modelling volume72,
pages53–58.
[Mentaschi et al.(2015)Mentaschi, Besio, Cassola and
Mazzino]mentaschi2015performance
authorMentaschi, L., authorBesio, G.,
authorCassola, F., authorMazzino, A.,
year2015.
titlePerformance evaluation of wavewatch iii in the
mediterranean sea.
journalOcean Modelling volume90,
pages82–94.
[Mentaschi et al.(2023)Mentaschi, Vousdoukas, García-Sánchez,
Fernández-Montblanc, Roland, Voukouvalas, Federico, Abdolali, Zhang and
Feyen]mentaschi2023global
authorMentaschi, L., authorVousdoukas, M.I.,
authorGarcía-Sánchez, G.,
authorFernández-Montblanc, T., authorRoland, A.,
authorVoukouvalas, E., authorFederico, I.,
authorAbdolali, A., authorZhang, Y.J.,
authorFeyen, L., year2023.
titleA global unstructured, coupled, high-resolution
hindcast of waves and storm surge.
journalFrontiers in Marine Science volume10,
pages1233679.
[Mirzaei et al.(2013)Mirzaei, Tangang, Juneng, Mustapha, Husain and
Akhir]mirzaei2013wave
authorMirzaei, A., authorTangang, F.,
authorJuneng, L., authorMustapha, M.A.,
authorHusain, M.L., authorAkhir, M.F.,
year2013.
titleWave climate simulation for southern region of the
south china sea.
journalOcean Dynamics volume63,
pages961–977.
[Morim et al.(2022)Morim, Erikson, Hemer, Young, Wang, Mori, Shimura,
Stopa, Trenham, Mentaschi et al.]morim2022global
authorMorim, J., authorErikson, L.H.,
authorHemer, M., authorYoung, I.,
authorWang, X., authorMori, N.,
authorShimura, T., authorStopa, J.,
authorTrenham, C., authorMentaschi, L., et al.,
year2022.
titleA global ensemble of ocean wave climate statistics
from contemporary wave reanalysis and hindcasts.
journalScientific data volume9,
pages358.
[Murphy(1988)]murphy1988skill
authorMurphy, A.H., year1988.
titleSkill scores based on the mean square error and their
relationships to the correlation coefficient.
journalMonthly weather review volume116,
pages2417–2424.
[Pan et al.(2020)Pan, Lai and Devlin]pan2020channel
authorPan, J., authorLai, W., authorDevlin,
A.T., year2020.
titleChannel-trapped convergence and divergence of lateral
velocity in the pearl river estuary: Influence of along-estuary variations of
channel depth and width.
journalJournal of Geophysical Research: Oceans
volume125, pagese2019JC015369.
[Perez et al.(2017)Perez, Menendez and Losada]perez2017gow2
authorPerez, J., authorMenendez, M.,
authorLosada, I.J., year2017.
titleGow2: A global wave hindcast for coastal
applications.
journalCoastal Engineering volume124,
pages1–11.
[Pilar et al.(2008)Pilar, Soares and Carretero]pilar200844
authorPilar, P., authorSoares, C.G.,
authorCarretero, J., year2008.
title44-year wave hindcast for the north east atlantic
european coast.
journalCoastal Engineering volume55,
pages861–871.
[Piollé et al.(2020)Piollé, Dodet and Quilfen]piolle2020esa
authorPiollé, J.F., authorDodet, G.,
authorQuilfen, Y., year2020.
titleEsa sea state climate change
initiative(sea_state_cci): Global remote sensing merged multi-mission
monthly gridded significant wave height, l4 product, version 1.1., centre for
environmental data analysis.
journalCentre for Environmental Data Analysis .
[Qu et al.(2022)Qu, Jevrejeva, Williams and Moore]qu2022drivers
authorQu, Y., authorJevrejeva, S.,
authorWilliams, J., authorMoore, J.C.,
year2022.
titleDrivers for seasonal variability in sea level around
the china seas.
journalGlobal and Planetary Change volume213,
pages103819.
[Roland et al.(2009)Roland, Cucco, Ferrarin, Hsu, Liau, Ou, Umgiesser
and Zanke]roland2009development
authorRoland, A., authorCucco, A.,
authorFerrarin, C., authorHsu, T.W.,
authorLiau, J.M., authorOu, S.H.,
authorUmgiesser, G., authorZanke, U.,
year2009.
titleOn the development and verification of a 2-d coupled
wave-current model on unstructured meshes.
journalJournal of Marine Systems volume78,
pagesS244–S254.
[Roland et al.(2012)Roland, Zhang, Wang, Meng, Teng, Maderich,
Brovchenko, Dutour-Sikiric and Zanke]roland2012fully
authorRoland, A., authorZhang, Y.J.,
authorWang, H.V., authorMeng, Y.,
authorTeng, Y.C., authorMaderich, V.,
authorBrovchenko, I., authorDutour-Sikiric, M.,
authorZanke, U., year2012.
titleA fully coupled 3d wave-current interaction model on
unstructured grids.
journalJournal of Geophysical Research: Oceans
volume117.
[Roland et al.(2008)]roland2008development
authorRoland, A., et al., year2008.
titleDevelopment of WWM II: Spectral wave modelling on
unstructured meshes.
Ph.D. thesis. Ph. D. thesis, Technische Universität Darmstadt,
Institute of Hydraulic and ….
[Rozendaal et al.(2024)Rozendaal, Dijkstra and
Schuttelaars]rozendaal2024relationship
authorRozendaal, M.P., authorDijkstra, Y.M.,
authorSchuttelaars, H.M., year2024.
titleThe relationship between linearised 3d and 2dh models
for tidally dominated shallow waters.
journalOcean Modelling volume188,
pages102330.
[Saponieri et al.(2019)Saponieri, Besio, Simonetti, Radulescu,
Valentini, Damiani and Veltri]saponieri2019evaluation
authorSaponieri, A., authorBesio, G.,
authorSimonetti, F., authorRadulescu, V.,
authorValentini, N., authorDamiani, L.,
authorVeltri, P., year2019.
titleEvaluation of wave hindcast models skill in the black
sea, in: booktitleISOPE International Ocean and Polar Engineering
Conference, organizationISOPE. pp. pagesISOPE–I.
[Schenkel and Hart(2012)]schenkel2012examination
authorSchenkel, B.A., authorHart, R.E.,
year2012.
titleAn examination of tropical cyclone position,
intensity, and intensity life cycle within atmospheric reanalysis datasets.
journalJournal of Climate volume25,
pages3453–3475.
[Shi et al.(2017)Shi, Joly, Zheng, Zhang, Chen and
Chen]shi2017development
authorShi, J., authorJoly, A., authorZheng,
J., authorZhang, J., authorChen, T.,
authorChen, G., year2017.
titleDevelopment of a sea-state database in the northwest
pacific ocean by long-term numerical hindcast modelling, in:
booktitleInternational Conference on Nuclear Engineering,
organizationAmerican Society of Mechanical Engineers. p.
pagesV004T14A008.
[Shi et al.(2019)Shi, Zheng, Zhang, Joly, Zhang, Xu, Sui and
Chen]shi201939
authorShi, J., authorZheng, J., authorZhang,
C., authorJoly, A., authorZhang, W.,
authorXu, P., authorSui, T., authorChen,
T., year2019.
titleA 39-year high resolution wave hindcast for the
chinese coast: Model validation and wave climate analysis.
journalOcean Engineering volume183,
pages224–235.
[Sørensen et al.(2005)Sørensen, Kofoed-Hansen, Rugbjerg and
Sørensen]sorensen2005third
authorSørensen, O.R., authorKofoed-Hansen, H.,
authorRugbjerg, M., authorSørensen, L.S.,
year2005.
titleA third-generation spectral wave model using an
unstructured finite volume technique, in: booktitleCoastal
Engineering 2004: (In 4 Volumes). publisherWorld Scientific, pp.
pages894–906.
[Soulsby(1997)]soulsby1997dynamics
authorSoulsby, R., year1997.
titleDynamics of marine sands .
[Stopa et al.(2019)Stopa, Ardhuin, Stutzmann and
Lecocq]stopa2019sea
authorStopa, J.E., authorArdhuin, F.,
authorStutzmann, E., authorLecocq, T.,
year2019.
titleSea state trends and variability: Consistency between
models, altimeters, buoys, and seismic data (1979–2016).
journalJournal of Geophysical Research: Oceans
volume124, pages3923–3940.
[Sun et al.(2020)Sun, Zhang, Xu, Liu and Ding]sun2020assessment
authorSun, Z., authorZhang, H., authorXu,
D., authorLiu, X., authorDing, J.,
year2020.
titleAssessment of wave power in the south china sea based
on 26-year high-resolution hindcast data.
journalEnergy volume197,
pages117218.
[Tausía et al.(2023)Tausía, Delaux, Camus, Rueda,
Méndez, Bryan, Pérez, Costa, Zyngfogel and
Cofiño]tausia2023rapid
authorTausía, J., authorDelaux, S.,
authorCamus, P., authorRueda, A.,
authorMéndez, F., authorBryan, K.R.,
authorPérez, J., authorCosta, C.G.,
authorZyngfogel, R., authorCofiño, A.,
year2023.
titleRapid response data-driven reconstructions for storm
surge around new zealand.
journalApplied Ocean Research volume133,
pages103496.
[Taylor(2001)]taylor2001summarizing
authorTaylor, K.E., year2001.
titleSummarizing multiple aspects of model performance in
a single diagram.
journalJournal of geophysical research: atmospheres
volume106, pages7183–7192.
[Tolman(1991)]tolman1991third
authorTolman, H.L., year1991.
titleA third-generation model for wind waves on slowly
varying, unsteady, and inhomogeneous depths and currents.
journalJournal of Physical Oceanography
volume21, pages782–797.
[Umgiesser et al.(2004)Umgiesser, Canu, Cucco and
Solidoro]Umgiesser2004
authorUmgiesser, G., authorCanu, D.M.,
authorCucco, A., authorSolidoro, C.,
year2004.
titleA finite element model for the venice lagoon.
development, set up, calibration and validation.
journalJournal of Marine Systems volume51,
pages123–145.
[Wang et al.(2017)Wang, Yang, Sun and Shi]wang2017improvements
authorWang, H., authorYang, Y., authorSun,
B., authorShi, Y., year2017.
titleImprovements to the statistical theoretical model for
wave breaking based on the ratio of breaking wave kinetic and potential
energy.
journalScience China Earth Sciences volume60,
pages180–187.
[Wang and Liu(2021)]wang2021numerical
authorWang, J., authorLiu, P.L.F.,
year2021.
titleNumerical study on impacts of a concurrent
storm-tide-tsunami event in macau and hong kong.
journalCoastal Engineering volume170,
pages104000.
[Wang et al.(2018)Wang, Duan and Dong]wang2018long
authorWang, Z., authorDuan, C., authorDong,
S., year2018.
titleLong-term wind and wave energy resource assessment in
the south china sea based on 30-year hindcast data.
journalOcean Engineering volume163,
pages58–75.
[Weatherall et al.(2015)Weatherall, Marks, Jakobsson, Schmitt, Tani,
Arndt, Rovere, Chayes, Ferrini and Wigley]weatherall2015new
authorWeatherall, P., authorMarks, K.M.,
authorJakobsson, M., authorSchmitt, T.,
authorTani, S., authorArndt, J.E.,
authorRovere, M., authorChayes, D.,
authorFerrini, V., authorWigley, R.,
year2015.
titleA new digital bathymetric model of the world's
oceans.
journalEarth and space Science volume2,
pages331–345.
[Williams et al.(2018)Williams, Irazoqui Apecechea, Saulter and
Horsburgh]williams2018radiational
authorWilliams, J., authorIrazoqui Apecechea, M.,
authorSaulter, A., authorHorsburgh, K.J.,
year2018.
titleRadiational tides: their double-counting in storm
surge forecasts and contribution to the highest astronomical tide.
journalOcean Science volume14,
pages1057–1068.
[Wu et al.(2018)Wu, Jiang, Deng, Chen, Cao and Li]wu2018evaluation
authorWu, Z.y., authorJiang, C.b.,
authorDeng, B., authorChen, J., authorCao,
Y.g., authorLi, L.j., year2018.
titleEvaluation of numerical wave model for typhoon wave
simulation in south china sea.
journalWater Science and Engineering
volume11, pages229–235.
[Xu et al.(2017)Xu, He, Song, Hou and Li]xu2017observations
authorXu, Y., authorHe, H., authorSong, J.,
authorHou, Y., authorLi, F., year2017.
titleObservations and modeling of typhoon waves in the
south china sea.
journalJournal of Physical Oceanography
volume47, pages1307–1324.
[Yang et al.(2019)Yang, Li, Zhao, Wang, Wang, Sou, Yang, Hu, Tang, Mok
et al.]yang2019comparative
authorYang, J., authorLi, L., authorZhao,
K., authorWang, P., authorWang, D.,
authorSou, I.M., authorYang, Z., authorHu,
J., authorTang, X., authorMok, K.M., et al.,
year2019.
titleA comparative study of typhoon hato (2017) and
typhoon mangkhut (2018)—their impacts on coastal inundation in macau.
journalJournal of Geophysical Research: Oceans
volume124, pages9590–9619.
[Yang et al.(2020)Yang, Fan, Duan, Zheng, Li, Li and
Xu]yang2020long
authorYang, S., authorFan, L., authorDuan,
S., authorZheng, C., authorLi, X.,
authorLi, H., authorXu, J., year2020.
titleLong-term assessment of wave energy in the china sea
using 30-year hindcast data.
journalEnergy Exploration & Exploitation
volume38, pages37–56.
[Yang et al.(2015)Yang, Hou and Liu]yang2015observed
authorYang, S., authorHou, Y., authorLiu,
Y., year2015.
titleObserved typhoon wave spectrum in northern south
china sea.
journalChinese journal of oceanology and limnology
volume33, pages1286–1294.
[Zhai et al.(2023)Zhai, Huang, Yang, Tang and
Zhang]zhai2023applicability
authorZhai, R., authorHuang, C., authorYang,
W., authorTang, L., authorZhang, W.,
year2023.
titleApplicability evaluation of era5 wind and wave
reanalysis data in the south china sea.
journalJournal of Oceanology and Limnology
volume41, pages495–517.
[Zhang and Baptista(2008)]zhang2008selfe
authorZhang, Y., authorBaptista, A.M.,
year2008.
titleSelfe: A semi-implicit eulerian–lagrangian
finite-element model for cross-scale ocean circulation.
journalOcean modelling volume21,
pages71–96.
[Zhang et al.(2023)Zhang, Fernandez-Montblanc, Pringle, Yu, Cui and
Moghimi]zhang2023global
authorZhang, Y.J., authorFernandez-Montblanc, T.,
authorPringle, W., authorYu, H.C.,
authorCui, L., authorMoghimi, S.,
year2023.
titleGlobal seamless tidal simulation using a 3d
unstructured-grid model (schism v5. 10.0).
journalGeoscientific Model Development
volume16, pages2565–2581.
[Zhang et al.(2016)Zhang, Ye, Stanev and Grashorn]zhang2016seamless
authorZhang, Y.J., authorYe, F.,
authorStanev, E.V., authorGrashorn, S.,
year2016.
titleSeamless cross-scale modeling with schism.
journalOcean Modelling volume102,
pages64–81.
[Zhou et al.(2020)Zhou, Li and Li]zhou2020dynamic
authorZhou, K., authorLi, Q.S., authorLi,
X., year2020.
titleDynamic behavior of supertall building with active
control system during super typhoon mangkhut.
journalJournal of Structural Engineering
volume146, pages04020077.
http://arxiv.org/abs/2409.02833v1 | 20240904155854 | The Parameterized Complexity of Extending Stack Layouts | [
"Thomas Depian",
"Simon D. Fink",
"Robert Ganian",
"Martin Nöllenburg"
] | cs.CG | [
"cs.CG",
"cs.DS"
] |
long
longtrue
long
prooflaterm
proofsketcho +b
statelaterm
prooflaterm +b
#1#2
proofsketchOProof sketch.
statelaterm +b
#1#2
=1
./graphics/
plainurl
The Parameterized Complexity of Extending Stack Layouts

Thomas Depian, Simon D. Fink, Robert Ganian, and Martin Nöllenburg
Algorithms and Complexity Group, TU Wien, Vienna, Austria
{tdepian, sfink, rganian, noellenburg}@ac.tuwien.ac.at

ACM CCS concepts: Theory of computation → Fixed parameter tractability; Mathematics of computing → Graphs and surfaces.

Funding: All authors acknowledge support from the Vienna Science and Technology Fund (WWTF) [10.47379/ICT22029]. Robert Ganian and Thomas Depian furthermore acknowledge support from the Austrian Science Fund (FWF) [10.55776/Y1329].

Venue: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024), September 18–20, 2024, Vienna, Austria.
§ ABSTRACT
An ℓ-page stack layout (also known as an ℓ-page book embedding) of a graph is a linear order of the vertex set together with a partition of the edge set into ℓ stacks (or pages), such that
the endpoints of no two edges on the same stack alternate.
We study the problem of extending a given partial ℓ-page stack layout into a complete one, which can be seen as a natural generalization of the classical NP-hard problem of computing a stack layout of an input graph from scratch. Given the inherent intractability of the problem, we focus on identifying tractable fragments through the refined lens of parameterized complexity analysis.
Our results paint a detailed and surprisingly rich complexity-theoretic landscape of the problem which includes the identification of paraNP-hard, W[1]-hard and XP-tractable, as well as fixed-parameter tractable fragments of stack layout extension via a natural sequence of parameterizations.
§ INTRODUCTION
An ℓ-page stack layout (or ℓ-page book embedding) of a graph G consists, combinatorially speaking, of (i) a linear order ≺ of its vertex set V(G) and (ii) a partition σ of its edge set E(G) into ℓ≥ 1 (stack-)pages such that for no two edges (with distinct endpoints) uv and wx with u ≺ v and w ≺ x that are assigned to the same page their endpoints alternate in ≺, i.e., we have
u ≺ w ≺ v ≺ x.
When drawing a stack layout, the vertices are placed on a line called the spine in the order given by ≺ and the edges of each page are drawn as pairwise non-crossing arcs in a separate half-plane bounded by the spine, see <ref>a.
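The alternation condition is purely combinatorial, so it can be checked directly from a vertex order and a page assignment. The following Python sketch is our own illustrative code (not part of the paper; all names are ours) and tests whether a given pair forms a valid ℓ-page stack layout.

from itertools import combinations

def is_stack_layout(order, pages, num_pages):
    """order: list of vertices along the spine; pages: dict mapping an edge
    (a frozenset or 2-tuple of vertices) to a page in 0..num_pages-1.
    Returns True iff no two edges on the same page have alternating endpoints."""
    if any(p < 0 or p >= num_pages for p in pages.values()):
        return False
    pos = {v: i for i, v in enumerate(order)}              # spine positions
    for e, f in combinations(pages, 2):
        if pages[e] != pages[f]:
            continue                                        # edges on different pages never cross
        a, b = sorted(pos[v] for v in e)
        c, d = sorted(pos[v] for v in f)
        if a < c < b < d or c < a < d < b:                  # endpoints alternate, i.e., the arcs cross
            return False
    return True

For instance, is_stack_layout(['u', 'w', 'v', 'x'], {('u', 'v'): 0, ('w', 'x'): 0}, 1) returns False, matching the forbidden pattern u ≺ w ≺ v ≺ x on a single page.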
Stack layouts are a classic and well-studied topic in graph drawing and graph theory <cit.>.
They have immediate applications in graph visualization <cit.> as well as in bioinformatics, VLSI design, and parallel computing <cit.>; see also the overview by Dujmović and Wood <cit.>.
The minimum number ℓ such that a given graph G admits an ℓ-page stack layout is known as the stack number, page number, or book thickness of G.
While the graphs with stack number ℓ=1 are the outerplanar graphs, which can be recognized in linear time, the problem of computing the stack number is NP-complete in general.
Indeed, the class of graphs with stack number ℓ≤ 2 are precisely the subhamiltonian graphs (i.e., the subgraphs of planar Hamiltonian graphs), and recognizing them is NP-complete <cit.>.
Computing the stack number is known to also remain NP-complete if the vertex order is provided as part of the input and ℓ=4 <cit.>, and overcoming the intractability of these problems has been the target of several recent works in the field <cit.>.
Many other results on stack layouts are known—for instance, every planar graph has a 4-page stack layout and this bound is tight <cit.>.
For a comprehensive list of known upper and lower bounds for the stack number of different graph classes, we refer to the collection by
Pupyrev <cit.>.
In this paper, we take a new perspective on stack layouts, namely the perspective of drawing extensions.
In drawing extension problems, the input consists of a graph G together with a partial drawing of G, i.e., a drawing of a subgraph H of G.
The task is to insert the vertices and edges of G which are missing in H in such a way that a desired property of the drawing is maintained; see <ref>b for an example.
Such drawing extension problems occur, e.g., when visualizing dynamic graphs in a streaming setting, where additional vertices and edges arrive over time and need to be inserted into the existing partial drawing.
Drawing extension problems have been investigated for many types of drawings in recent years—including planar drawings <cit.>, upward planar drawings <cit.>, level planar drawings <cit.>, 1-planar drawings <cit.>, and planar orthogonal drawings <cit.>—but until now, essentially nothing was known about the extension of stack layouts/book embeddings.
Since it is NP-complete to determine whether a graph admits an ℓ-page stack layout (even when ℓ is a small fixed integer), the extension problem for ℓ-page stack layouts is NP-complete as well—after all, setting H to be empty in the latter problem fully captures the former one.
In fact, the extension setting can seamlessly also capture the previously studied NP-complete problem of computing an ℓ-stack layout with a prescribed vertex order <cit.>; indeed, this corresponds to the special case where V(H) = V(G) and E(H)=∅.
Given the intractability of extending ℓ-page stack layouts in the classical complexity setting, we focus on identifying tractable fragments of the problem through the more refined lens of parameterized complexity analysis <cit.>, which considers both the input size of the graph and some additional parameter k of the instance[We assume familiarity with the basic foundations of parameterized complexity theory, notably including the notions of fixed-parameter tractability, XP, W[1]-, and paraNP-hardness <cit.>.].
Contributions.
A natural parameter in any drawing extension problem is the size of the missing part of the graph, i.e., the missing number of vertices and/or edges.
We start our investigation by showing that the Stack Layout Extension problem (SLE) for instances without any missing vertices, i.e., V(G) = V(H), is fixed-parameter tractable when parameterized by the number of missing edges |E(G)∖ E(H)| (<ref>).
The above result, however, only applies in the highly restrictive setting where no vertices are missing—generally, we would like to solve instances with missing vertices as well as edges.
A parameterization that has been successfully used in this setting is the vertex+edge deletion distance, i.e., the number of vertex and edge deletion operations[As usual, we assume that deleting a vertex automatically also deletes all of its incident edges.] required to obtain H from G. But while this parameter has yielded parameterized algorithms when extending, e.g., 1-planar drawings <cit.> and orthogonal planar drawings <cit.>, we rule out any analogous result for by establishing its -completeness even if H can be obtained from G by deleting only two vertices (<ref>). This means that more “restrictive” parameterizations are necessary to achieve tractability for the problem of extending ℓ-page stack layouts.
Since the missing vertices in our hardness reduction have a high degree, we then consider parameterizations by the combined number of missing vertices and edges κ = V(G) ∖ V(H) + E(G)∖ E(H).
We show that belongs to the class when parameterized by κ (<ref>) while being [1]-hard (<ref>), which rules out the existence of a fixed-parameter tractable algorithm under standard complexity assumptions.
The latter result holds even if we additionally bound the page width ω of the stack layout of H, which measures the maximum number of edges that are crossed on a single page by a line perpendicular to the spine <cit.>.
On our quest towards a fixed-parameter tractable fragment of the problem, we thus need to include another restriction, namely the number ℓ of pages of the stack layout.
So finally, when parameterizing by the combined parameter κ + ω + ℓ, we show that it becomes fixed-parameter tractable (<ref>).
Our results are summarized in <ref>.
Due to space constraints, full proofs of some statements are deferred to the appendix.
§ PRELIMINARIES
We assume the reader to be familiar with standard graph terminology <cit.>.
Throughout this paper, we assume standard graph representations, e.g., as double-linked adjacency list, that allow for efficient graph modifications.
For two integers p ≤ q we denote with [p, q] the set {p, p+1, …, q} and use [p]_0 and [p] as abbreviations for [0, p] and [1, p], respectively.
Let G be a graph that is, unless stated otherwise, simple and undirected, with vertex set V(G) and edge set E(G).
For X⊆ V(G), we denote by G[X] the subgraph of G induced on X.
Stack Layouts.
For an integer ℓ ≥ 1, an ℓ-page stack layout of G is a tuple ⟨≺_G, σ_G⟩ where ≺_G is a linear order of V(G) and σ_G: E(G) → [ℓ] is a function that assigns each edge to a page p ∈ [ℓ] such that
for each pair of edges u_1v_1 and u_2v_2 with σ_G(u_1v_1) = σ_G(u_2v_2) it does not hold that u_1 ≺ u_2 ≺ v_1 ≺ v_2.
For the remainder of the paper, we write ≺ and σ if the graph G is clear from context.
We call ≺ the spine (order) and σ the page assignment.
Observe that we can interpret a stack layout as a drawing of G on different planar half-planes, one per page p ∈ [ℓ], each of which is bounded by the straight-line spine delimiting all half-planes.
One fundamental property of a stack layout ⟨≺, σ⟩ is its page width—denoted as ω(⟨≺, σ⟩), or simply ω if the layout is clear from context—which is the maximum number of edges that are crossed on a single page by a line perpendicular to the spine <cit.>.
The properties of stack layouts with small page width have been studied, e.g., by Stöhr <cit.>.
We say that two vertices u and v are consecutive on the spine if they occur consecutively in ≺.
A vertex u ∈ V(G) sees a vertex v ∈ V(G) on a page p ∈ [ℓ] if there does not exist an edge e = xy ∈ E(G) with σ(e) = p and x ≺ u ≺ y ≺ v or u ≺ x ≺ v ≺ y.
Note that if u sees v, then v also sees u.
For two vertices u and v which are consecutive in ≺, we refer to the segment
on the spine between u and v as the interval between u and v, denoted as [u, v].
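The visibility predicate can also be made concrete with a small helper. This is again only a sketch with names of our own choosing, mirroring the definition above.

def sees(u, v, page, order, pages):
    """True iff u sees v on the given page, i.e., no edge xy assigned to that
    page has its endpoints alternating with u and v along the spine."""
    pos = {w: i for i, w in enumerate(order)}
    lo, hi = sorted((pos[u], pos[v]))
    for e, p in pages.items():
        if p != page:
            continue
        a, b = sorted(pos[w] for w in e)
        if a < lo < b < hi or lo < a < hi < b:              # the edge separates u from v on this page
            return False
    return True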
Problem Statement.
Let H ⊆ G be a subgraph of a graph G.
We say that an ℓ-page stack layout ⟨≺_G, σ_G⟩ of G is an extension of an ℓ-page stack layout ⟨≺_H, σ_H⟩ of H if σ_H ⊆ σ_G and ≺_H ⊆ ≺_G.
We now formalize our problem of interest:
Stack Layout Extension (SLE)
Input: An integer ℓ ≥ 1, a graph G, a subgraph H of G, and an ℓ-page stack layout ⟨≺_H, σ_H⟩ of H.
Question: Does there exist an ℓ-page stack layout ⟨≺_G, σ_G⟩ of G that extends ⟨≺_H, σ_H⟩?
We remark that while SLE is defined as a decision problem for complexity-theoretic reasons, every algorithm presented in this article is constructive and can be trivially adapted to also output a layout ⟨≺_G, σ_G⟩ as a witness (also called a solution) for positive instances. For an instance I = (ℓ, H, G, ⟨≺_H, σ_H⟩) of SLE, we use |I| as shorthand for |V(G)| + |E(G)| + ℓ.
In line with the terminology previously used for drawing extension problems <cit.>, we refer to the vertices and edges in V(H)∪ E(H) as old and call all other vertices and edges of G new.
Let V_new and E_new denote the sets of all new vertices and edges, respectively, and set n_new = |V_new| and m_new = |E_new|. Furthermore, we denote with E_oo the set of new edges incident to two old vertices, i.e., E_oo = {e = uv ∈ E_new | u, v ∈ V(H)}.
We consider the parameterized complexity of our extension problem by measuring how “incomplete” the provided partial solution is using the following natural parameters that have also been used in this setting before <cit.>:
the vertex+edge deletion distance, which is n_new + |E_oo|, and the total number of missing vertices and edges, i.e., κ = n_new + m_new.
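As a small illustration of this notation (a sketch only; the helper name and the plain-set encoding are our own), both parameters can be computed directly from the vertex and edge sets of G and H.

def parameters(G_vertices, G_edges, H_vertices, H_edges):
    """Compute V_new, E_new, E_oo, the total number kappa of missing elements,
    and the vertex+edge deletion distance; edges are frozensets of two vertices."""
    V_new = set(G_vertices) - set(H_vertices)               # missing vertices
    E_new = set(G_edges) - set(H_edges)                     # missing edges
    E_oo = {e for e in E_new if not (e & V_new)}            # new edges joining two old vertices
    kappa = len(V_new) + len(E_new)                         # total number of missing elements
    deletion_distance = len(V_new) + len(E_oo)              # deleting a vertex also deletes its edges
    return V_new, E_new, E_oo, kappa, deletion_distance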
§ SLE WITH ONLY MISSING EDGES IS FPT
We begin our investigation by first analyzing the special case where V(G)=V(H), i.e., when only edges are missing from H. We recall that the problem remains NP-complete even in this setting, as it generalizes the problem of computing the stack number of a graph with a prescribed vertex order <cit.>.
Furthermore, both of the aforementioned measures of the incompleteness of H are the same and equal κ = m_new.
As a “warm-up” result, we show that SLE in this setting is fixed-parameter tractable parameterized by m_new.
Towards this, consider the set S(e) ⊆ [ℓ] of pages on which we could place a new edge e without introducing a crossing with edges from H; formally, p∈ S(e) if and only if ⟨≺_H,σ_H∪(e,p)⟩ is an ℓ-page stack layout of H∪{e}.
Intuitively, if S(e) is large enough, then we are always able to find a
“free” page to place e independent of the placement of the remaining new edges.
Formally, one can easily show:
Lemma.
Let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be an instance of SLE with V_new = ∅ that contains an edge e ∈ E_new with |S(e)| ≥ m_new.
The instance I' = (ℓ, H, G', ⟨≺_H, σ_H⟩) with G' = G ∖ {e} is a positive instance if and only if I is a positive instance.
Proof.
First, note that removing an edge from G and adapting the page assignment accordingly turns any existing solution for I into one for I'.
Hence, the “(⇐)-direction” holds trivially, and we focus on the “(⇒)-direction”.
*(⇒)
Let I' be a positive instance of SLE with a solution ⟨≺, σ'⟩.
By our selection of e, there exists a page p ∈ S(e) such that for every edge e' ∈ E_new with e' ≠ e we have σ'(e') ≠ p.
We take ⟨≺, σ'⟩ and extend σ' by the page assignment (e, p) to obtain a layout ⟨≺, σ⟩ of G.
By the definition of S(·), this cannot introduce a crossing with edges from H, and by our selection of p, no crossings with other edges from E_new are possible either.
Hence, ⟨≺, σ⟩ is a valid stack layout of G that extends ⟨≺_H, σ_H⟩, as we did not alter ⟨≺, σ'⟩ except for extending the page assignment.
Thus, it witnesses that I is a positive instance of SLE.
With <ref> in hand, we can establish the desired result:
Theorem.
Let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be an instance of SLE with V_new = ∅.
We can find an ℓ-page stack layout of G that extends ⟨≺_H, σ_H⟩ or report that none exists in O(m_new^m_new · |I|) time.
Proof sketch.
We compute for a single edge e ∈ E_new the set S(e) in linear time by checking with which of the old edges e would cross.
If |S(e)| ≥ m_new, then following <ref>, we remove e from G.
Overall, this takes O(m_new · |I|) time and results in a graph G' with H ⊆ G' ⊆ G.
Furthermore, each remaining new edge e' ∈ E(G') ∖ E(H) can be put in fewer than m_new different pages.
Hence, we can brute-force over all the at most m_new^m_new page assignments σ' that extend σ_H for all edges in E(G') ∖ E(H), and for each such assignment we check in linear time whether no pair of edges e', e” ∈ E(G') ∖ E(H) cross each other.
Proof.
We compute for a single edge e ∈ E_new the set S(e) in linear time by checking with which of the old edges e would cross.
If |S(e)| ≥ m_new, then we remove e from G.
Overall, this takes O(m_new · |I|) time and results in a graph G' with H ⊆ G' ⊆ G.
Furthermore, each edge e' ∈ E(G') ∖ E(H) can be put in fewer than m_new different pages.
Hence, we can brute-force the possible page assignments for each new edge e'.
Each of the resulting at most m_new^m_new different ℓ-page stack layouts is by construction an extension of ⟨≺_H, σ_H⟩.
Creating such a layout can be done by copying ⟨≺_H, σ_H⟩ and then augmenting it with the new edges.
This amounts to O(|I|) time.
For each created ℓ-page stack layout, we can check in linear time whether it is crossing-free.
Note that by our pre-processing step, no new edge can cross an old edge, and thus it suffices to check whether no pair of edges e', e” ∈ E(G') ∖ E(H) cross each other.
If there exists a crossing-free layout, then applying <ref> iteratively tells us that we can extend it to a solution ⟨≺_G, σ_G⟩ for I.
If none of them is crossing-free, we conclude by applying (iteratively) <ref> that I does not admit the desired ℓ-page stack layout.
Combining all, the overall running time is O(m_new^m_new · |I|).
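The procedure from the proof above can be sketched in a few lines of code. This is our own illustrative implementation, not the authors'; it assumes edges are given as frozensets of vertices and that old edges come with their page in sigma_H.

from itertools import product

def crosses(e, f, pos):
    """Two edges on the same page cross iff their endpoints alternate along the spine."""
    a, b = sorted(pos[v] for v in e)
    c, d = sorted(pos[v] for v in f)
    return a < c < b < d or c < a < d < b

def extend_edges_only(order, sigma_H, new_edges, num_pages):
    """Sketch of the FPT procedure for V_new = ∅: compute S(e) for every new edge,
    set aside the edges with |S(e)| >= m_new (they can be placed greedily at the end),
    and brute-force the page assignment of the remaining constrained edges."""
    pos = {v: i for i, v in enumerate(order)}
    new_edges = list(new_edges)
    m_new = len(new_edges)
    S = {e: [p for p in range(num_pages)
             if all(not (q == p and crosses(e, f, pos)) for f, q in sigma_H.items())]
         for e in new_edges}
    constrained = [e for e in new_edges if len(S[e]) < m_new]
    for choice in product(*(S[e] for e in constrained)):     # at most m_new^m_new branches
        sigma = dict(zip(constrained, choice))
        if any(sigma[e] == sigma[f] and crosses(e, f, pos)
               for i, e in enumerate(constrained) for f in constrained[i + 1:]):
            continue
        for e in new_edges:                                   # place the easy edges greedily;
            if e not in sigma:                                # the counting argument of the lemma
                sigma[e] = next(p for p in S[e]               # guarantees a free page exists
                                if all(not (q == p and crosses(e, f, pos))
                                       for f, q in sigma.items()))
        return {**sigma_H, **sigma}
    return None

The brute-force loop mirrors the m_new^m_new bound from the theorem, while the preprocessing corresponds to the lemma-based removal of unconstrained edges.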
§ SLE WITH TWO MISSING VERTICES IS NP-COMPLETE
Adding only edges to a given linear layout is arguably quite restrictive.
Therefore, we now lift this restriction and consider SLE in its full generality, i.e., we also allow adding vertices.
Somewhat surprisingly, as our first result in the general setting we show that SLE is NP-complete even if the task is to merely add two vertices, i.e., for n_new = 2 and E_oo = ∅.
This rules out not only fixed-parameter but also XP tractability when parameterizing by the vertex+edge deletion distance, and represents—to the best of our knowledge—the first example of a drawing extension problem with this behavior.
To establish the result, we devise a reduction from 3-SAT <cit.>.
In our reduction, we insert two new vertices into a partial layout derived from the given formula, and use the page assignment of their incident edges to encode a truth assignment and validate that it satisfies all clauses.
For this, we will need to restrict the positions of the new vertices to a certain range along the spine.
In <ref>, we introduce the fixation gadget that ensures this.
We also reuse this gadget in the reduction shown in <ref>.
But first, we use it in this section to perform our reduction and prove NP-completeness in <ref>.
Let φ = (𝒳, 𝒞) be an instance of 3-SAT consisting of N variables 𝒳 = {x_1, …, x_N} and M clauses 𝒞 = {c_1, …, c_M}, each consisting of three different and pairwise non-complementary literals.
Intuitively, the reduction constructs an instance I = (ℓ, H, G, ⟨≺_H, σ_H⟩) of SLE which represents each variable x_i and each clause c_j of φ, respectively, by a corresponding vertex in H.
The linear order ≺_H has the form x_1 ≺ x_2 ≺ … ≺ x_N ≺ c_1 ≺ … ≺ c_M; see <ref>.
Furthermore, contains two new vertices s and v.
The vertex s is adjacent to all variable-vertices and the construction will ensure that the page assignment for its incident edges represents, i.e., selects, a truth assignment Γ for φ.
The vertex v is adjacent to all clause-vertices, and its purpose is to verify that the truth assignment satisfies all clauses.
For the following high-level description of how this is achieved, we assume s ≺ v ≺ x_1; we will use a fixation gadget to ensure that every solution of I has this property.
To each variable x_i, we associate two pages p_i and p_¬ i corresponding to its possible truth states.
We ensure that s can see each variable-vertex only on its associated pages using edges incident to dummy vertices d_q with q ∈ [N + M + 1].
These dummy vertices are distributed as in <ref>.
Hence, a page assignment for the edges incident to s induces a truth assignment.
Similar edges also ensure that v can see a clause-vertex c_j only on the pages that are associated to the negation of the literals the clause c_j is composed of; see the blue arcs in <ref> for an illustration.
We defer the full construction to <ref>.
We now ensure that s ≺ v ≺ x_1 holds in every solution of I by using a fixation gadget on two vertices, i.e., for F = 2.
In particular, we set a_3 ≺ d_1, i.e., we place the fixation gadget at the beginning of the spine, and identify s = f_1 and v = f_2.
The spine order ≺_H is then the transitive closure of all the partial orders stated until now; see <ref>.
Finally, we add the edge d_1d_N + M + 1 and set σ(d_1d_N + M + 1) = p_d to ensure that our construction has <ref>.
Regarding the correctness of our reduction, we make the following observation.
If an induced truth assignment does not satisfy a clause c_j, then it must use the pages associated to the negated literals of c_j.
Thus, the new edge vc_j will cross another edge no matter which page we uselong, see also <ref>.
However, if a clause c_j is satisfied, we can find a page for the edge vc_j that does not introduce a crossing: the page associated to the negation of the literal that satisfies c_j.
Consequently, if φ is satisfiable, then there exists an extension G.
Similarly, the page assignment of an extension G induces a truth assignment Γ that satisfies φ.
An intuitive example of the reduction is provided in <ref>.
Finally, recall that our approach hinges on some way to restrict the new vertices s and v to be placed within a certain range, i.e., before x_1.
We realize this using the fixation gadget that we describe in <ref>.
We also reuse this gadget in the reduction from <ref>.
In <ref>, we show how this can be built into our reduction to prove <ref>.
The graph H that we construct will have multi-edges to facilitate the presentation of the reduction. The procedure for removing multi-edges is detailed in <ref>.
§.§ Encoding the Variables and Clauses: The Base Layout
For each variable x_i and each clause c_j in φ we introduce a vertex in H.
In the following, we use the same symbol to address an element of φ and its representation in H.
Let us first fix the spine order ≺_H.
For every i ∈ [N - 1] and j ∈ [M - 1], we set x_i ≺ x_i + 1 and c_j≺ c_j + 1.
Furthermore, we order the variables before the clauses on the spine, i.e., we set x_N ≺ c_1.
Next, we add N + M + 1 dummy vertices d_q to H and distribute them on the spine.
More concretely, we set d_i ≺ x_i ≺ d_i + 1 and d_N + j≺ c_j ≺ d_N + j + 1 for every i ∈ [N] and j ∈ [M].
By taking the transitive closure of the above order, we obtain the following linear order, which we also visualize in <ref>.
d_1 ≺ x_1 ≺ d_2 ≺ x_2 ≺…≺ x_N ≺ d_N + 1≺ c_1 ≺…≺ c_M ≺ d_N + M + 1
We now turn our attention to the page assignment σ_H and create for each variable x_i ∈𝒳 the two pages p_i and p_¬ i.
Intuitively, the assignment of an edge incident to x_i to either of these two pages will determine whether x_i is true or false.
For each i, j ∈ [N] with i ≠ j, we create the edges e(x_i, p_j) = d_id_i + 1 and e(x_i, p_¬ j) = d_id_i + 1 in H.
We assign the edge e(x_i, p_j) to the page p_j and the edge e(x_i, p_¬ j) to p_¬ j, i.e., we have σ(e(x_i, p_j)) = p_j and σ(e(x_i, p_¬ j)) = p_¬ j.
<ref> (left) visualizes this.
Note that this introduces multi-edges in H; recall that we resolve them in <ref>.
Next, we consider each combination of a clause c_j ∈𝒞 and a variable x_i ∈𝒳.
If x_i does not appear in c_j, we create the edges e(c_j, p_i) = d_N + jd_N + j + 1 and e(c_j, p_¬ i) = d_N + jd_N + j + 1.
Similar to before, we set σ(e(c_j, p_i)) = p_i and σ(e(c_j, p_¬ i)) = p_¬ i.
If x_i appears in c_j without negation, we create the edge e(c_j, p_i) = d_N + jd_N + j + 1 and set σ(e(c_j, p_i)) = p_i.
Symmetrically, if x_i appears negated in c_j, we create the edge e(c_j, p_¬ i) = d_N + jd_N + j + 1 and set σ(e(c_j, p_¬ i)) = p_¬ i.
We visualize this page assignment in <ref> (middle and right).
This completes the base layout of our reduction.
However, we have only defined parts of H and its stack layout so far and complete the construction in <ref>.
Next, we introduce two new vertices s and v in G.
The vertex s is adjacent to each x_i ∈𝒳 and the vertex v is adjacent to each c_j ∈𝒞.
Let us assume for the moment that in every extension G of H we have s ≺ v ≺ d_1.
Then, the vertex s can “see” each x_i ∈𝒳 only on the pages p_i and p_¬ i.
Hence, the page assignment of sx_i can be interpreted as the truth state of x_i.
Similarly, from the perspective of the vertex v, each c_j ∈𝒞 is only visible on the pages that correspond to the complementary literals of c_j.
So intuitively, if the page assignment of the edges incident to s induces a truth assignment that falsifies c_j, then these new edges will block the remaining available pages for the edge vc_j.
Hence, v verifies the truth assignment induced by the edges incident to s, see also <ref>.
This indicates the intended semantics of our reduction but leaves one caveat: we have to ensure that s ≺ v ≺ d_1 holds in every solution to our created instance.
We can satisfy this assumption using a fixation gadget.
§.§ Restricting the Placement of New Vertices: The Fixation Gadget
The purpose of the so-called fixation gadget is to restrict the possible positions of new vertices to given intervals.
As this gadget will also find applications outside this reduction, we describe in the following in detail its general construction for F > 1 new vertices ℱ = {f_1, …, f_F}.
First, we introduce 3(F + 1) new vertices v_1, …, v_F + 1, b_1, …, b_F + 1, and a_1, …, a_F + 1.
We fix the spine order ≺_H among these vertices
to
b_1 ≺ v_1 ≺ a_1 ≺ b_2 ≺ v_2 ≺ a_2 ≺…≺ b_F + 1≺ v_F + 1≺ a_F + 1; see also <ref>.
Then, every new vertex f_i is made adjacent to v_i and v_i + 1 and we aim to allow these new edges to be placed only in a dedicated further page p_d.
To achieve this, we first introduce for every i ∈ [F + 1] and every page p ≠ p_d an edge e(b_i, a_i, p) = b_ia_i in H and set σ(e(b_i, a_i, p)) = p; see <ref>.
Furthermore, we also introduce the edges b_iv_i and v_ia_i and set σ(b_iv_i) = σ(v_ia_i) = p_d for all i ∈ [F + 1] .
For every i ∈ [F], we add the edge v_iv_i + 1 and place it on the page p_d, i.e., we have σ(v_iv_i + 1) = p_d as in <ref>.
Finally, we also create the edge b_1a_F + 1 and set σ(b_1a_F + 1) = p_d.
To complete the construction of the fixation gadget, we add the new edges f_iv_i and f_iv_i + 1 for every i ∈ [F] to G.
<ref> shows an example of the fixation gadget for F = 2.
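To make the gadget construction concrete, the following sketch builds its old vertices and edges; the vertex names, the tuple-based keys used to keep parallel edges apart, and the function name are our own, not the paper's.

def fixation_gadget(F, num_pages, p_d):
    """Build the spine order, the page assignment of the old gadget edges, and the
    list of new edges for a fixation gadget on F new vertices f_1, ..., f_F."""
    spine, sigma = [], {}
    for i in range(1, F + 2):                      # blocks b_i ≺ v_i ≺ a_i for i = 1..F+1
        b, v, a = f"b{i}", f"v{i}", f"a{i}"
        spine += [b, v, a]
        for p in range(num_pages):
            if p != p_d:
                sigma[(b, a, p)] = p               # b_i a_i on every page except p_d
        sigma[(b, v)] = p_d                        # b_i v_i and v_i a_i on the page p_d
        sigma[(v, a)] = p_d
        if i <= F:
            sigma[(v, f"v{i + 1}")] = p_d          # v_i v_{i+1} on the page p_d
    sigma[("b1", f"a{F + 1}")] = p_d               # the spanning edge b_1 a_{F+1} on p_d
    new_edges = [(f"f{i}", f"v{i}") for i in range(1, F + 1)] + \
                [(f"f{i}", f"v{i + 1}") for i in range(1, F + 1)]
    return spine, sigma, new_edges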
Next, we show that the fixation gadget forces f_i to lie between v_i and v_i + 1 on the spine and the edges f_iv_i and f_iv_i + 1 to be on the page p_d for every i ∈ [F].
Lemma.
Let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be an instance of SLE that contains a fixation gadget on F vertices {f_1, …, f_F}.
In any solution ⟨≺_G, σ_G⟩ to I and for every i ∈ [F], we have v_i ≺ f_i ≺ v_i + 1 and σ(f_iv_i) = σ(f_iv_i + 1) = p_d.
Furthermore, the fixation gadget contributes 4F + 3 vertices and (ℓ + 4)F + ℓ + 2 edges to the size of I.
Towards establishing v_i ≺ f_i ≺ v_i + 1, one can show that f_i≺ v_i would prevent f_i from seeing v_i+1 on any page: As f_i≺ v_i implies f_i ≺ b_i + 1≺ v_i + 1≺ a_i + 1 and we have the edge b_i+1a_i + 1 on any page except p_d, only visibility on page p_d would still be possible.
However, the edges on the page p_d prevent visibility to v_i+1 for any spine position left of v_i.
By symmetric arguments, we can obtain that v_i+1≺ f_i would prevent v_i from seeing f_i.
Using again the fact that we have the edge b_ia_i on any page except p_d, in concert with the relation v_i ≺ f_i ≺ v_i + 1 shown above and the edges v_ia_i and b_i + 1v_i + 1 on the page p_d, one can deduce that σ(f_iv_i) = σ(f_iv_i + 1) = p_d must hold.
Finally, the bound on the size of the gadget can be obtained by a close analysis of the construction.
Proof.
Let ⟨≺_G, σ_G⟩ be a solution to I = (ℓ, H, G, ⟨≺_H, σ_H⟩).
First, we will show that v_i ≺ f_i ≺ v_i + 1 must hold.
Towards a contradiction, assume that there exists a solution ⟨≺_G, σ_G⟩ with f_i ≺ v_i.
Observe that f_i ≺ v_i also implies f_i ≺ v_i + 1 and recall that we have in H the edges e(b_i + 1, a_i + 1, p) for every page p ≠ p_d.
As b_i + 1≺_H v_i + 1≺_H a_i + 1 and G is an extension of H, f_i can see v_i + 1 only on the page p_d.
Hence, we must have σ(f_iv_i + 1) = p_d.
We will now distinguish between the following two cases.
On the one hand, there could exist a j with 1 ≤ j < i ≤ F, such that v_j ≺ f_i ≺ v_j + 1≺ v_i + 1.
However, since we have σ_H(v_jv_j + 1) = p_d, this cannot be the case, as this would introduce a crossing on the page p_d between the edges f_iv_i + 1 and v_jv_j + 1.
On the other hand, we could have f_i≺ b_1, i.e., f_i is placed at the beginning of the fixation gadget.
Observe that we have in this situation f_i≺ b_1 ≺ v_i + 1≺ a_F + 1 and σ_H(b_1a_F + 1) = p_d.
Hence, we would introduce a crossing on the page p_d between the edges f_iv_i + 1 and b_1a_F + 1.
As σ(f_iv_i + 1) = p_d can therefore not hold, we have no page to which we could assign f_iv_i + 1 without introducing a crossing, contradicting the assumption that we have a solution with f_i≺ v_i.
As the arguments that exclude v_i + 1≺ f_i are symmetric, we obtain that v_i≺ f_i≺ v_i + 1 must hold in any solution to .
Secondly, we will show that σ(f_iv_i) = σ(f_iv_i + 1) = p_d holds.
Towards a contradiction, assume that there exists a solution ⟨≺_G, σ_G⟩ with σ(f_iv_i) ≠ p_d for some i ∈ [F].
Hence, σ(f_iv_i) = p holds for some page p ≠ p_d.
Recall that we have the edge e(b_i, a_i, p) with σ_H(e(b_i, a_i, p)) = p.
This allows us to strengthen v_i≺ f_i≺ v_i + 1, which we have shown before, to v_i ≺ f_i ≺ a_i under the assumption of σ(f_iv_i) = p, as we would otherwise have a crossing on the page p.
However, then we can conclude from σ_H(v_ia_i) = p_d and the existence of the edges e(b_i + 1, a_i + 1, p'), with σ_H(e(b_i + 1, a_i + 1, p')) = p', for any page p' ≠ p_d and f_i ≺ b_i + 1≺ v_i + 1≺ a_i + 1 that there does not exist a feasible page assignment for the edge f_iv_i + 1.
This contradicts our assumption of a solution with σ(f_iv_i) ≠ p_d and a symmetric argument rules out any solution with σ(f_iv_i + 1) ≠ p_d.
Thirdly, we analyze the size of the fixation gadget.
Recall that ℱ consists of F vertices, and we introduce 3(F + 1) vertices in H.
Furthermore, the fixation gadget contributes one page to an (existing) stack layout of H on ℓ - 1 pages.
Regarding the number of edges, we create in H (ℓ - 1)(F + 1) edges of the form b_ia_i, i ∈ [F + 1], 2(F + 1) edges of the form b_iv_i or v_ia_i, i ∈ [F + 1], F edges of the form v_iv_i + 1, i ∈ [F], and the edge b_1a_F + 1.
Together with the 2F new edges that we add to G, this sums up to
(ℓ + 4)F + ℓ + 2 edges.
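As a quick sanity check, the gadget produced by the earlier sketch can be fed into the validity test from the preliminaries; this is again purely illustrative, and the collapsing of the parallel b_i a_i edges is our own simplification (parallel edges on distinct pages cannot cross anyway).

# Build the gadget for F = 2 on three pages with p_d = 2 and verify the old layout.
spine, sigma, new_edges = fixation_gadget(F=2, num_pages=3, p_d=2)
old_pages = {frozenset(key[:2]): page for key, page in sigma.items()}  # collapse multi-edge keys
print(is_stack_layout(spine, old_pages, num_pages=3))                  # expected output: True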
<ref> tells us that we can restrict the feasible positions for f_i to a pre-defined set of consecutive intervals by choosing suitable positions for v_i and v_i + 1 in the spine order ≺_H.
As the fixation gadget requires an additional page p_d, we must ensure that the existence of the (otherwise mostly empty) page p_d does not violate the semantics of our reductions.
In particular, we will (have to) ensure that our full constructions satisfy the following property.
Property.
Let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be an instance of SLE that contains a fixation gadget on F vertices {f_1, …, f_F}.
In any solution ⟨≺_G, σ_G⟩ to I and for every new edge e ∈ E_new with σ(e) = p_d, we have e ∈ {f_iv_i, f_iv_i + 1 | i ∈ [F]}.
§.§ The Complete Reduction
Recall the base layout of our reduction that we described in <ref> and illustrated with <ref>.
There, we created, for a given formula φ, one vertex for each variable x_i and each clause c_j.
Furthermore, each x_i should only be visible on two pages that correspond to its individual truth state and each c_j should only be visible on three pages that correspond to the complementary literals in c_j.
However, the intended semantics of our reduction rely on the assumption that new vertices can only be placed on a specific position, and in a specific order, on the spine.
Equipped with the fixation gadget, we will now satisfy this assumption.
For an instance φ = (𝒳, 𝒞) of 3-SAT, we take the base layout of our reduction as described in <ref> and incorporate in H a fixation gadget on two vertices, i.e., for F = 2.
We set a_3 ≺ d_1, i.e., we place the fixation gadget at the beginning of the spine, and identify s = f_1 and v = f_2.
Furthermore, we add the edge d_1d_N + M + 1 and set σ(d_1d_N + M + 1) = p_d.
Observe that this ensures that our construction will have <ref>, as this edge prevents connecting s with x_i or v with c_j on page p_d for any i ∈ [N] and j ∈ [M].
Finally, we add to G the new edges sx_i and vc_j for every i ∈ [N] and j ∈ [M].
This completes our reduction and we establish with the following theorem its correctness; see also <ref> for an example of our construction for a small formula.
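The intended semantics of the page assignment can be simulated with a few lines of code; this is a sketch under our own encoding of literals and pages (none of these names appear in the paper). The pages chosen for the edges sx_i induce a truth assignment, and a clause vertex c_j admits a crossing-free page for vc_j exactly when the clause is satisfied.

def induced_assignment(sigma_s):
    """sigma_s maps a variable index i to 'pos' (edge sx_i on page p_i)
    or 'neg' (edge sx_i on page p_not_i)."""
    return {i: choice == 'pos' for i, choice in sigma_s.items()}

def clause_has_free_page(clause, sigma_s):
    """clause: signed literals such as (+1, -3, +4).  The edge vc_j may only use
    pages of complementary literals of c_j; such a page is free iff the edge sx_i
    is not already placed on it."""
    for lit in clause:
        i, positive = abs(lit), lit > 0
        complementary_page = 'neg' if positive else 'pos'   # page p_not_i resp. p_i
        if sigma_s[i] != complementary_page:                # sx_i does not block this page
            return True
    return False

Hence φ is satisfiable if and only if some choice of sigma_s gives every clause a free page, which is exactly the equivalence established in the theorem below.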
Theorem.
SLE is NP-complete even if we have just two new vertices and E_oo = ∅.
Proof.
NP-membership of SLE follows immediately from the fact that we can encode a solution in polynomial space and verify it in polynomial time.
Thus, we focus in the remainder of the proof on showing NP-hardness of SLE.
Let φ = (𝒳, 𝒞) be an instance of 3-SAT and let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be the obtained instance of SLE.
The number of vertices and edges in H that we created in the base layout of <ref> is in O(N + M).
Furthermore, in the base layout, we used 2N pages.
Together with the page from the fixation gadget, we have ℓ = 2N + 1.
As F = 2 is constant, the contribution of the fixation gadget to the size of H and G is linear in φ by <ref>.
Hence, the overall size of I is polynomial in the size of φ.
Clearly, the time required to create I is also polynomial in the size of φ, and it remains to show the correctness of the reduction.
*(⇒)
Assume that φ is a positive instance of 3-SAT and let Γ: 𝒳 → {0,1} be a truth assignment that satisfies every clause.
To show that I is a positive instance of SLE, we create a stack layout ⟨≺_G, σ_G⟩ of G.
We first ensure that it extends ⟨≺_H, σ_H⟩ by copying ⟨≺_H, σ_H⟩.
Next, we set a_1 ≺ s ≺ b_2, a_2 ≺ v ≺ b_3 and σ(sv_1) = σ(sv_2) = σ(vv_2) = σ(vv_3) = p_d as shown in <ref>.
Furthermore, for every variable x_i ∈𝒳, we set σ(sx_i) = p_i if Γ(x_i) = 1 and σ(sx_i) = p_¬ i otherwise.
For every clause c_j ∈𝒞, we identify a variable x_i that satisfies the clause c_j under Γ.
As Γ satisfies every clause, the existence of such an x_i is guaranteed.
Then, we set σ(vc_j) = p_i if Γ(x_i) = 0 and σ(vc_j) = p_¬ i otherwise.
To show that ⟨≺_G, σ_G⟩ is crossing-free, we first observe that for the fixation gadget, our generated solution satisfies the necessary properties stated in <ref>.
We can observe that in our page assignment new edges cannot cross old edges.
Hence, we only have to ensure that no two new edges cross.
No two new edges on page p_d can cross, so assume that there is a crossing on page p_i for some i ∈ [N].
We observe that only edges of the form sx_i and vc_j for some x_i ∈𝒳 and c_j ∈𝒞 can cross, as they are otherwise incident to the same vertex.
As there is a crossing on the page p_i, we must have by construction that the variable x_i appears negated in the clause c_j but we have Γ(x_i) = 1.
Hence, x_i does not satisfy c_j, which is a contradiction to our construction of σ_G, for which we only considered variables that satisfy the clause c_j.
Therefore, a crossing on the page p_i cannot exist.
As the argument for a crossing on page p_¬ i is symmetric, we conclude that ⟨≺_G, σ_G⟩ must be crossing-free; hence, it witnesses that I is a positive instance of SLE.
*(⇐)
Assume that I is a positive instance of SLE.
This implies that there exists a witness extension ⟨≺_G, σ_G⟩ of ⟨≺_H, σ_H⟩.
As I contains the fixation gadget, we can apply <ref> and deduce that s ≺ v ≺ d_1 holds.
Based on ⟨≺_G, σ_G⟩, we now construct a truth assignment Γ: 𝒳 → {0,1} for φ.
For each variable x_i ∈𝒳, we consider the page assignment σ(sx_i).
Recall that we have s ≺ d_1 ≺ x_i ≺ d_N + M + 1 and σ_H(d_1d_N + M + 1) = p_d.
Together with s ≺ d_i ≺ x_i ≺ d_i + 1 and σ_H(d_id_i + 1) = p for any page p ∈[ℓ] ∖{p_d, p_i, p_¬ i}, we conclude that σ(sx_i) ∈{p_i, p_¬ i} must hold.
We set Γ(x_i) = 1 if σ(sx_i) = p_i holds and Γ(x_i) = 0 if σ(sx_i) = p_¬ i holds and know by the above arguments that Γ is well-defined.
What remains to show is that Γ satisfies φ.
Let c_j be an arbitrary clause of φ and consider the page p with σ(vc_j) = p.
As we have σ_H(d_1d_N + M + 1) = p_d we know that p ≠ p_d must hold, i.e., the page p is associated to some variable x_i.
For the remainder of the proof, we assume p = p_i as the case p = p_¬ i is symmetric.
Recall that we have d_N + j≺_H c_j ≺_H d_N + j + 1, σ_H(e(c_j, p_i')) = p_i', and σ_H(e(c_j, p_¬ i')) = p_¬ i' for any variable x_i'∈𝒳 that does not occur in c_j.
Hence, we know that x_i must occur in c_j.
Furthermore, the same reasoning allows us to conclude that x_i must appear negated in c_j, as we would otherwise have a crossing.
Using s ≺ v ≺ x_i ≺ c_j and our assumption of σ(vc_j) = p_i, we derive that x_i will be visible from s on page p_¬ i only, i.e., we have σ(sx_i) = p_¬ i.
By our construction of Γ, we conclude Γ(x_i) = 0.
Hence, x_i satisfies c_j under Γ.
As c_j was selected arbitrarily, this holds for all clauses, and therefore Γ must satisfy the whole formula φ, i.e., it witnesses that φ is a positive instance of 3-SAT.
Finally, we want to remark that <ref> is tight in the sense that SLE with only one new vertex v and E_oo = ∅ can be solved in polynomial time.
To that end, we can branch over all n possible spine positions where v can be placed.
For each of these, the observation that edges incident to the same vertex can never cross each other allows us to greedily assign a new edge uv to the first page p where v can see u.
Recall that we only add one new vertex v.
Hence, u is an old vertex whose spine position is known.
Clearly, an extension exists if and only if there exists a spine position for v such that our greedy page assignment can find a page for all new edges.
Let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be an instance of SLE with n_new = 1 and E_oo = ∅.
We can find an ℓ-page stack layout of G that extends ⟨≺_H, σ_H⟩ or report that none exists in O(n · m_new · |I|) time.
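A sketch of this greedy procedure, reusing the sees() helper from the preliminaries, could look as follows (all names are again our own, not the authors').

def insert_single_vertex(order, sigma_H, v, neighbours, num_pages):
    """Try all spine positions for the single new vertex v and greedily assign each
    new edge uv to the first page on which v sees u; new edges share the endpoint v
    and therefore never cross each other."""
    for gap in range(len(order) + 1):
        new_order = order[:gap] + [v] + order[gap:]
        assignment, ok = {}, True
        for u in neighbours:
            page = next((p for p in range(num_pages)
                         if sees(u, v, p, new_order, sigma_H)), None)
            if page is None:
                ok = False
                break
            assignment[frozenset((u, v))] = page
        if ok:
            return new_order, {**sigma_H, **assignment}
    return None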
§ SLE PARAMETERIZED BY MISSING VERTICES AND EDGES IS IN XP
In the light of <ref>, which excludes the use of the vertex+edge deletion distance as a pathway to tractability, we consider parameterizing by the total number of missing vertices and edges κ = n_new + m_new.
As our first result in this direction, we show that parameterizing by κ makes SLE XP-tractable.
To this end, we combine a branching procedure with the fixed-parameter algorithm for the special case obtained in <ref>.

Theorem.
Let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be an instance of SLE.
We can find an ℓ-page stack layout of G that extends ⟨≺_H, σ_H⟩ or report that none exists in O(|I|^(n_new + 1) · m_new^m_new) time.
Proof.
We branch over the possible assignments of new vertices to the intervals in ≺_H.
As a solution could assign multiple vertices to the same interval, we also branch over the order in which the new vertices will appear in the spine order ≺_G.
Observe that ≺_H induces |V(H)| + 1 different intervals, out of which we have to choose n_new with repetition.
Together with the possible orders of the new vertices, we can bound the number of branches by n_new! · binom(|V(H)| + n_new, n_new).
We can simplify this expression to
n_new! · (|V(H)| + n_new)! / (n_new! · ((|V(H)| + n_new) - n_new)!) = (|V(H)| + n_new)! / |V(H)|! = Π_{i = 1}^{n_new} (|V(H)| + i) = O(|I|^n_new).
In each branch, the spine order ≺_G is fixed and extends ≺_H.
Hence, it only remains to check whether ≺_G allows for a valid page assignment σ_G.
As each branch corresponds to an instance of SLE where only edges are missing, we use <ref> to check in O(m_new^m_new · |I|) time whether such an assignment σ_G exists.
The overall running time now follows readily.
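The branching can be sketched as follows; it is illustrative only, reuses extend_edges_only() from the earlier sketch, and enumerates some placements redundantly, which does not affect correctness.

from itertools import permutations, combinations_with_replacement

def extend_layout(order_H, sigma_H, new_vertices, new_edges, num_pages):
    """Guess an order of the new vertices and an assignment of them to the
    |V(H)|+1 intervals of the old spine, then solve the resulting edges-only
    instance with extend_edges_only()."""
    gaps = range(len(order_H) + 1)                 # gap g lies directly before order_H[g]
    for perm in permutations(new_vertices):
        for slots in combinations_with_replacement(gaps, len(perm)):
            order = []
            for g in range(len(order_H) + 1):
                order += [v for v, s in zip(perm, slots) if s == g]
                if g < len(order_H):
                    order.append(order_H[g])
            sigma = extend_edges_only(order, sigma_H, new_edges, num_pages)
            if sigma is not None:
                return order, sigma
    return None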
The running time stated in <ref> not only proves that SLE is in XP when parameterized by κ, but also when parameterized by n_new for constant m_new.
However, common complexity assumptions rule out an efficient algorithm parameterized by κ, as we show next.
§ SLE PARAMETERIZED BY MISSING VERTICES AND EDGES IS W[1]-HARD
In this section, we show that SLE parameterized by the number κ of missing vertices and edges is W[1]-hard.
To show W[1]-hardness, we reduce from the Multi-colored Clique (MCC) problem.
Here, we are given a graph G_C, an integer k > 0, and a partition of V(G_C) into k independent subsets V_1, …, V_k, and ask whether there exists a colorful k-clique 𝒞 ⊆ V(G_C) in G_C, i.e., a clique on k vertices that contains exactly one vertex of every set V_i, i ∈ [k].
It is known that MCC is W[1]-hard when parameterized by k <cit.>.
In the following, we will use Greek letters for the indices of the partition and denote with n_α the number of vertices in V_α, i.e., n_α = |V_α|.
Observe that ∑_{α ∈ [k]} n_α = N with N = |V(G_C)|.
As we can interpret the partitioning of the vertices into V_1, …, V_k as assigning to them one of k colors, we will call a vertex v_α^i
with α∈ [k] and i ∈ [n_α] a vertex with color α.
Our construction will heavily use the notion of a successor and predecessor of a vertex in a given spine order ≺.
For a vertex u, the function succ_≺(u) returns the successor of u in the spine order ≺, i.e., the consecutive vertex in ≺ after u.
Note that succ_≺(u) is undefined if there is no vertex v ∈ V(G) with u ≺ v.
We write succ(u) if ≺ is clear from context.
The predecessor function pred_≺(u) is defined analogously, and we write pred(u) if ≺ is clear from context.
Let (G_C, k, (V_1, …, V_k)) be an instance of MCC.
We will construct an instance I of SLE, parameterized by κ, that will fulfill two crucial properties to ensure its correctness.
While, at the time of stating the property, our construction might not yet fulfill it, we show in <ref> that in the end it indeed has the desired properties.
Our instance contains for every original vertex v_α^i ∈ V(G_C) a copy u_α^i ∈ V(H).
Furthermore, we add for each color α∈ [k] to H two additional vertices and, overall, three further dummy vertices that we use to ensure correctness of the reduction.
We place the vertices on the spine based on their color α and index i; see <ref> and <ref>, where we give the full details of the base layout.
Observe that every vertex v_α^i ∈ V(G_C) induces the interval [u_α^i, u_α^i + 1] in ≺_H; slightly abusing notation, we also denote this interval by v_α^i.
The equivalence between the two problems will be obtained by adding a k-clique to G that consists of the k new vertices 𝒳 = {x_1, …, x_k}.
Placing x_α∈𝒳 in v_β^i indicates that v_β^i will be part of the colorful k-clique in G_C.
To establish the correctness of our reduction, we have to ensure two things.
First, we have to model the adjacencies in G_C.
In particular, two new vertices x_α and x_β, with α < β, should only be placed in intervals induced by vertices adjacent in G_C.
We enforce this by adding for every edge e = v_α^iv_β^j ∈ E(G_C) a page p_e that contains a set of edges creating a tunnel on p_e, see <ref>, and thereby allowing us to place the edge x_αx_β∈ E(G) in the page p_e if and only if x_α is placed in v_α^i and x_β in v_β^j.
Hence, the page assignment verifies that only pairwise adjacent vertices are in
the solution, i.e., new vertices can only be placed in intervals induced by a clique in G_C.
We
describe the tunnel further in <ref>.
Second, we have to ensure that we select exactly one vertex v_α^i ∈ V_α for every color α∈ [k].
In particular, the new vertex x_α should only be placed in intervals that are induced by vertices from V_α.
To this end, we modify H to include an appropriate fixation gadget by
re-using some vertices of the base layout; see <ref> for details.
As the whole base layout thereby forms the fixation gadget, our construction trivially satisfies <ref>.
The above two ideas are formalized in <ref>.
With these properties at hand, we show at the end of <ref> that SLE is W[1]-hard when parameterized by κ.
As in the reduction from <ref>, we will allow multi-edges in the graph H to facilitate presentation and understanding; the procedure for removing them by distributing the individual edges over auxiliary vertices is detailed in <ref>.
In the following, we first give an overview of and intuition behind our reduction in <ref>, before we show its correctness in <ref>.
Note that the full details of the construction can be found in <ref>.
§.§ An Overview of the Construction
First, we define the base layout of our reduction, see <ref> for full details.
In the base layout, we create the N + 2k + 3 vertices {u_α^j | α ∈ [k], j ∈ [n_α + 1]_0} ∪ {u_0^0, u_{k + 1}^0, u_{k + 1}^1} in H.
Note that for each original vertex v_α^i ∈ V(G_C), we have a copy u_α^i.
We will refer to the vertices u_0^0, u_{k + 1}^0, and u_{k + 1}^1 as dummy vertices and set, for ease of notation, n_{k + 1} = 1.
The vertices are placed on the spine based on their color α and index i; see <ref>.
Finally, observe that succ_{≺_H}(u_α^i) = u_α^i + 1 for every v_α^i ∈ V(G_C).
Furthermore, every vertex v_α^i ∈ V(G_C) induces the interval [u_α^i, u_α^i + 1] in ≺_H; slightly abusing notation, we also denote this interval by v_α^i.
The equivalence between the two problems will be obtained by adding a k-clique to G that consists of the k new vertices 𝒳 = {x_1, …, x_k}.
Placing x_α ∈ 𝒳 in v_β^i indicates that v_β^i will be part of the colorful k-clique in G_C, i.e., we will have the equivalence u_α^i ≺ x_α ≺ succ_{≺_H}(u_α^i) ⟺ x_α is placed in v_α^i ⟺ v_α^i ∈ 𝒞 (the first equivalence holds per definition) between a solution ⟨≺_G, σ_G⟩ to I and a solution 𝒞 to MCC.
To guarantee that 𝒞 is colorful, i.e., contains exactly one vertex from each color, our construction will ensure <ref>.
To establish the correctness of our reduction, we have to ensure two things.
First, we have to model the adjacencies in G_C.
In particular, two new vertices x_α and x_β, with α < β, should only be placed in intervals induced by vertices adjacent in G_C.
We enforce this by adding for every edge e = v_α^iv_β^j ∈ E(G_C) a page p_e.
On this page p_e, we create the following edges in H; see also <ref> for a visualization.
Firstly, we create for every color γ∈ [k] ∖{α, β} an edge that spans exactly the intervals induced by vertices of color γ, thereby intuitively blocking visibility to any interval induced by a vertex of a color different to α and β; see <ref>a.
Secondly, we create up to two edges that span all intervals induced by vertices of color α except v_α^i; see <ref>b.
We do so similarly for color β.
These edges in concert with a tunnel that we create on page p_e, see <ref>c and <ref> for a formal description, allow us to place the edge x_αx_β∈ E(G) in the page p_e if and only if x_α is placed in v_α^i and x_β in v_β^j.
More formally, our construction will ensure <ref>.
Second, we have to ensure that we select exactly one vertex v_α^i ∈ V_α for every color α∈ [k].
In particular, the new vertex x_α should only be placed in intervals that are induced by vertices from V_α.
To this end, we modify H to include a fixation gadget on F = k vertices by
re-using some vertices of the base layout.
Most importantly, we identify v_α = u_α^0 for every α∈[k + 1] and f_α = x_α for every α∈[k]; see <ref> for details.
As the whole base layout thereby forms the fixation gadget, our construction trivially satisfies <ref>.
Throughout the description of the reduction, we explicitly highlight crucial places where our construction should fulfill certain properties to ensure its correctness.
While, at the time of stating the property, our construction might not yet fulfill it, we argue in <ref> (and <ref>) that it in the end indeed has the desired properties.
With these properties at hand, we show at the end of <ref> that SLE is W[1]-hard when parameterized by κ.
§.§ Creating Intervals on the Spine: Our Base Layout
Recall that G_C has the vertex set V(G_C) = {v_1^1, …, v_k^n_k} partitioned into V_1, …, V_k with V_α = {v_α^1, …, v_α^n_α} for α∈ [k].
We create the N + 2k + 3 vertices {u_α^j | α ∈ [k], j ∈ [n_α + 1]_0} ∪ {u_0^0, u_{k + 1}^0, u_{k + 1}^1} in H.
Note that for each original vertex v_α^i ∈ V(G_C), we have a copy u_α^i.
We will refer to the vertices u_0^0, u_{k + 1}^0, and u_{k + 1}^1 as dummy vertices and set, for ease of notation, n_{k + 1} = 1.
We order the vertices of H on the spine by setting u_α^i ≺ u_α^i + 1 and u_α^n_α + 1 ≺ u_α+1^0 for every α ∈ [k] and i ∈ [n_α]_0.
Furthermore, we set u_0^0 ≺ u_1^0 and u_{k + 1}^0 ≺ u_{k + 1}^1.
The spine order is then the transitive closure of the above partial orders.
We visualize it in <ref> and observe that succ_{≺_H}(u_α^i) = u_α^i + 1 for every v_α^i ∈ V(G_C).
As already indicated, we define the set 𝒳 = {x_1, …, x_k} to contain k (new) vertices which we add to G.
Furthermore, we form a k-clique on 𝒳, i.e., we add the edges x_αx_β for 1 ≤α < β≤ k to E(G).
Recall that, slightly abusing notation, v_α^i also denotes the interval [u_α^i, succ_{≺_H}(u_α^i)].
We use the following equivalence between a solution ⟨≺_G, σ_G⟩ to I and a solution 𝒞 to MCC:
u_α^i ≺ x_α ≺ succ_{≺_H}(u_α^i) ⟺ x_α is placed in v_α^i ⟺ v_α^i ∈ 𝒞,
where the first equivalence holds per definition.
To guarantee that 𝒞 is colorful, i.e., contains exactly one vertex from each color, we will ensure <ref> with our construction.
Of course, our construction does not yet fulfill <ref>; we will show in <ref> that the finished construction indeed does fulfill it.
§.§ Creating One Page Per Edge: Encoding the Adjacencies
Having fixed the base order on the spine, we now ensure that we only select vertices that are adjacent in G_C, i.e., we encode the edges of G_C in our stack layout H.
Let e = v_α^iv_β^j be an edge of G_C and recall that, by our assumption, we have α≠β.
Furthermore, we assume for ease of presentation that α < β holds, which implies u_α^i≺ u_β^j.
We create the following edges in H; see <ref>.
Note that each of the them is assigned to the page p_e in σ, where p_e is a new empty page that we associate with the edge e.
Firstly, we create the edge u_γ^1u_γ^n_γ + 1 for each γ∈ [k] ∖{α, β}; see <ref>a.
Secondly, we create the edges u_α^1u_α^i and u_α^i + 1u_α^n_α + 1 as shown in <ref>b.
Similarly, we add the edges u_β^1u_β^j and u_β^j + 1u_β^n_β + 1.
If i ∈{1, n_α} or j ∈{1, n_β}, we omit creating the respective edge to not introduce self-loops in H.
Thirdly, we create the edges u_α^iu_β^j + 1 and u_α^i + 1u_β^j, which we mark in <ref>c in black.
One can readily verify that these edges in the page p_e do not cross.
Furthermore, observe that the edges u_α^iu_β^j + 1 and u_α^i + 1u_β^j create a tunnel on the page p_e connecting v_α^i and v_β^j.
Intuitively, the edges on the page p_e ensure that if x_i is placed in v_α^i then x_i sees only v_β^j on page p_e.
More formally, we will have <ref>.
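The edges placed on a page p_e can be generated mechanically. The following sketch uses our own encoding of the vertices u_α^i as pairs (colour, index) and is not the authors' code; it returns the blocking edges and the tunnel for an edge e = v_α^i v_β^j with α < β.

def page_for_edge(alpha, i, beta, j, n, k):
    """n maps a colour c in 1..k to the number n_c of its vertices; the returned
    list contains the old edges that are assigned to the page p_e."""
    u = lambda c, idx: (c, idx)
    edges = []
    for gamma in range(1, k + 1):                   # block every other colour completely
        if gamma not in (alpha, beta):
            edges.append((u(gamma, 1), u(gamma, n[gamma] + 1)))
    for c, idx in ((alpha, i), (beta, j)):          # block the intervals of colours alpha, beta
        if idx > 1:                                 # ...except the chosen interval itself
            edges.append((u(c, 1), u(c, idx)))
        if idx < n[c]:
            edges.append((u(c, idx + 1), u(c, n[c] + 1)))
    edges.append((u(alpha, i), u(beta, j + 1)))     # the tunnel between the two intervals
    edges.append((u(alpha, i + 1), u(beta, j)))
    return edges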
§.§ Restricting The Placement of New Vertices: Encoding the Colors
Until now, the new vertices in 𝒳 can be placed in any interval on the spine.
However, in we must select exactly one vertex from each color.
Recall <ref>, which intuitively states that each new vertex should only be placed in intervals that correspond to its color.
We now use the fixation gadget to ensure that our construction fulfills <ref>.
Observe that we already introduced in <ref> the required vertices of the fixation gadget when creating the base layout of our reduction.
More specifically, we (implicitly) create in H and our construction the fixation gadget on F = k vertices ℱ = 𝒳 by identifying, for α ∈ [k + 1], v_i = u_α^0, b_i = pred_{≺_H}(u_α^0), and a_i = succ_{≺_H}(u_α^0), where we use i = α to differentiate between the vertices from the fixation gadget and the graphs G_C and H.
We also introduce the corresponding edges of the fixation gadget, which we visualize for the vertices in this construction in <ref>.
Recall that when introducing the fixation gadget in <ref>, we required that our instance must fulfill <ref>, which states that p_d must only be used by new edges that were introduced in the fixation gadget.
However, observe that any new edge e ∈ E(G) ∖ (E(H) ∪{x_αu_α^0, x_αu_α + 1^0 |α∈ [k]}), i.e., that was not introduced in the fixation gadget, is of the form e = x_αx_β for 1 ≤α < β≤ k, i.e., one of the edges in the k-clique.
<ref> tells us that for every new vertex x_α∈𝒳 we have u_α^0 ≺ x_α≺ u_α + 1^0 in any extension G of H.
This implies that we have u_α^0 ≺ x_α≺ u_α + 1^0 ≼ u_β^0 ≺ x_β≺ u_β + 1^0.
Together with having u_α^0u_α + 1^0, u_β^0u_β + 1^0 ∈ E(H) and σ_H(u_α^0u_α + 1^0) = σ(u_β^0u_β + 1^0) = p_d, this rules out σ_G(x_αx_β) = p_d.
Hence, we observe that the fixation gadget is formed on the base layout and our construction thus (trivially) fulfills <ref>.
§.§ Bringing It Together: Showing Correctness of the Reduction
We start by shortly summarizing our construction.
Recall that we want to insert k new vertices that form a clique.
On a high level, we first created for each vertex v_α^i ∈ V(G_C) a copy u_α^i ∈ V(H) and ordered the latter vertices depending on the color α and the index i.
Then, we created for each edge e ∈ E(G_C) a page p_e on which we formed a tunnel that will enforce for every new edge assigned to p_e that its endpoints lie in specific intervals via <ref>.
Finally, we used the fixation gadget to ensure that a new vertex can only be placed in intervals for its color, i.e., that our construction will enforce <ref>.
Overall, we obtain for an instance (G_C, k, (V_1, …, V_k)) of MCC an instance I = (ℓ, H, G, ⟨≺_H, σ_H⟩) of SLE that we parameterize by the number κ of missing vertices and edges.
Before we show correctness of the reduction, we first argue in <ref> that I fulfills <ref>.
With the overview of the construction and the intuition behind the reduction settled, we now proceed to show its correctness in <ref>.
In the proof, we make use of <ref>.
Therefore, on our path to obtain <ref>,
we first have to show that our construction fulfills them.
Recall <ref>.
When incorporating the fixation gadget on F = k vertices in our construction, we identified v_α = u_α^0 and f_α = x_α for every α ∈ [k].
Similarly, we identified v_F + 1 = u_k + 1^0.
The fixation gadget now guarantees thanks to <ref> that we have v_α ≺_G f_α ≺_G v_α + 1, i.e., u_α^0 ≺ x_α ≺ u_α + 1^0, in any solution ⟨≺_G, σ_G⟩.
Hence, we can observe the following.
Our instance I of SLE fulfills <ref>.
Recall that <ref> furthermore tells us that we have in any solution ⟨≺_G, σ_G⟩ the page assignment σ(x_αu_α^0) = σ(x_αu_α + 1^0) = p_d for every α ∈ [k].
As we have by <ref> u_α^0 ≺ x_α ≺ u_α + 1^0 and furthermore by the construction of the fixation gadget
σ_H(u_α^0 succ_{≺_H}(u_α^0)) = σ_H(pred_{≺_H}(u_α + 1^0) u_α + 1^0) = p_d
for every α ∈ [k], we cannot have u_α^0 ≺ x_α ≺ succ_{≺_H}(u_α^0) or pred_{≺_H}(u_α + 1^0) ≺ x_α ≺ u_α + 1^0 in ⟨≺_G, σ_G⟩, as this would introduce a crossing on page p_d.
As we have in ≺_H the equalities succ_{≺_H}(u_α^0) = u_α^1 and pred_{≺_H}(u_α + 1^0) = u_α^n_α + 1 for every α ∈ [k], we can strengthen <ref> and obtain the following.
In a solution to I we have u_α^1 ≺ x_α ≺ u_α^n_α + 1 for every α ∈ [k].
Finally, we now show that our construction fulfills <ref>.
Lemma.
Our instance I of SLE fulfills <ref>.

Proof.
First, recall <ref>, i.e., our construction fulfills <ref>.
Let ⟨≺_G, σ_G⟩ be a solution to I with σ(x_αx_β) = p_e for an edge e = v_α^iv_β^j ∈ E(G_C), 1 ≤ α < β ≤ k.
<ref> tells us that u_α^1 ≺ x_α≺ u_α^n_α + 1 and u_β^1 ≺ x_β≺ u_β^n_β + 1 holds.
<ref> also holds for any new vertices x_γ and x_δ with γ, δ∈ [k] ∖{α, β} and γ≠δ.
Furthermore, we have the edges u_γ^1u_γ^n_γ + 1 and u_δ^1u_δ^n_δ + 1 on page p_e.
Hence, all new edges on page p_e must be among new vertices placed in intervals induced by vertices of color α or β.
Now assume that we have u_α^1 ≼ x_α≼ u_α^i.
Using σ_H(u_α^1u_α^i) = p_e together with u_α^1 ≼ x_α≼ u_α^i ≺ x_β, we derive that u_α^1 ≼ x_α≼ u_α^i results in a crossing on page p_e.
Hence, u_α^1 ≼ x_α≼ u_α^i cannot hold.
Now assume that we have u_α^i + 1≼ x_α≼ u_α^n_α + 1.
From σ_H(u_α^i + 1u_α^n_α + 1) = p_e and u_α^i + 1≼ x_α≼ u_α^n_α + 1≺ x_β we get that u_α^i + 1≼ x_α≼ u_α^n_α + 1 results in a crossing on page p_e.
Hence, u_α^i + 1≼ x_α≼ u_α^n_α + 1 cannot hold.
Since we can exclude u_α^1 ≼ x_α≼ u_α^i and u_α^i + 1≼ x_α≺ u_α^n_α + 1 by the construction of the tunnel on page p_e, we can derive that x_α must be placed in v_α^i.
As similar arguments can be made for x_β, we can conclude that we get a crossing on page p_e unless x_α is placed in v_α^i and x_β in v_β^j.
We are now ready to show the correctness of our reduction, i.e., to show the following theorem.

Theorem.
SLE parameterized by the number κ of missing vertices and edges is W[1]-hard.
Let (G_C, k, (V_1, …, V_k)) be an instance of MCC with N = |V(G_C)| and M = |E(G_C)|, and let I = (ℓ, H, G, ⟨≺_H, σ_H⟩) be the instance of SLE parameterized by the number κ of missing vertices and edges created by our construction described above. Closer analysis
reveals that the size of I is bounded by O(N + Mk + k^2), and we have κ = 3k + binom(k, 2) as n_new = k and m_new = binom(k, 2) + 2k; recall that the fixation gadget contributes 2k new edges.
Towards arguing correctness, assume that (G_C, k, (V_1, …, V_k)) contains a colorful k-clique 𝒞. We construct a solution to I by, for every new vertex x_α ∈ 𝒳, considering the vertex v_α^i ∈ 𝒞 and placing x_α immediately to the right of the copy u_α^i of v_α^i in H. The fact that 𝒞 is a clique then guarantees that, for each edge e ∈ E(G_C[𝒞]), there exists the page p_e in which the corresponding edge e' ∈ E(G[𝒳]) can be placed. For the remaining edges from the fixation gadget, we can use the page assignment from <ref>.
For the converse (and more involved) direction, assume that I admits a solution ⟨≺_G, σ_G⟩. By <ref>, we have that each x_α ∈ 𝒳 must be placed between u_α^0 and u_α + 1^0.
Moreover, our construction together with the page assignment forced by <ref> guarantees that x_α is placed between precisely one pair of consecutive vertices u_α^i_α and u_α^i_α+1, for some i_α∈ [n_α]; recall <ref>.
Our solution 𝒞 to the instance of will consist of the vertices v_α^i_α, i.e., exactly one vertex per color α.
Moreover, each new edge x_α x_β∈ E(G[𝒳]) must be placed by σ_G on some page, and as our construction satisfies <ref>, this page must be one that is associated to one edge e = v_α^i_αv_β^i_β of G_C.
<ref> now also guarantees that this page assignment enforces that x_α and x_β are placed precisely between the consecutive vertices u_α^i_α and u_α^i_α+1 and u_β^i_β and u_β^i_β+1 of H, respectively.
This means that the vertices in 𝒞 are pairwise adjacent, which implies that 𝒞 is a colorful k-clique.
Let (G_C, k, (V_1, …, V_k)) be an instance of with N = |V(G_C)| and M = |E(G_C)|.
Furthermore, let be the instance of parameterized by the number κ of missing vertices and edges created by our construction described above.
We first bound the size of and note that we create N + 2k + 3 vertices in H in <ref> and k additional new vertices in G in <ref>.
We enrich H by k + 4 edges for each edge e ∈ E(G_C).
This gives us M(k + 4) edges in H so far.
Furthermore, in <ref>, we use a fixation gadget to keep the new vertices in place.
As we introduce for the fixation gadget no new vertex but rather identify vertices of the fixation gadget with already introduced ones from H, it only remains to account for the edges of the gadget, which are (M + 5)k + M + 3; see <ref> and note that H has ℓ = M + 1 pages as we create one page for each edge in G_C and have the dummy page from the fixation gadget.
Finally, we also add a clique among the k new vertices, which contributes \binom{k}{2} additional edges.
Overall, the size of H and G is therefore in 𝒪(N + Mk + k^2).
Hence, the size of the constructed instance is polynomial in the size of G_C and the new parameters are bounded by a (computable) function of the old parameter; more specifically, we add k new vertices and \binom{k}{2} + 2k new edges, thus κ = 3k + \binom{k}{2}.
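For reference, the counts established above can be summarized as follows; this is merely a LaTeX rendering of the numbers already stated in the text.

```latex
\[
  |V(H)| = N + 2k + 3, \qquad
  |E(H)| = M(k+4) + (M+5)k + M + 3,
\]
\[
  |V(G) \setminus V(H)| = k, \qquad
  |E(G) \setminus E(H)| = \binom{k}{2} + 2k, \qquad
  \kappa = k + \binom{k}{2} + 2k = 3k + \binom{k}{2}.
\]
```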
The instance can trivially be created in (κ)-time.
We conclude with showing the correctness of our construction.
(⇒)
Let (G_C, k, (V_1, …, V_k)) be a positive instance of with solution 𝒞 = {v_1^i, …, v_k^j}.
We construct a witness extension G of H to show that is a positive instance of .
First, we copy H to ensure that
G extends H.
Then, we extend G as follows.
For every v_α^i∈ 𝒞, we set u_α^i≺ x_α≺ u_α^i + 1.
We also set σ(x_αu_α^0) = σ(x_αu_α + 1^0) = p_d.
For every x_α, x_β∈𝒳 with 1 ≤α < β≤ k let x_α be placed in v_α^i and x_β be placed in v_β^j.
We set σ(x_αx_β) = p_e for the edge e = v_α^iv_β^j.
As 𝒞 is a clique, we must have v_α^iv_β^j∈ E(G_C) and thus we have the page p_e in , i.e., this page assignment is well-defined.
This completes the creation of G.
As it is an extension of H by construction, we only show that no two edges on the same page cross.
It is trivial that no two new edges, i.e., edges from can cross as they are all put on different pages.
For the edges x_αu_α^0 and x_αu_α + 1^0 it is sufficient to observe that we assemble the necessary page assignment from <ref>.
What remains to do is to analyze edges of the form x_αx_β.
Recall that we set σ(x_αx_β) = p_e for e = v_α^iv_β^j and x_α is placed in v_α^i and x_β is placed in v_β^j.
To see that there does not exist an old edge e' = uv ∈ E(H) that crosses e, i.e., with σ(e') = p_e and u ≺ x_α≺ v ≺ x_β, recall that the only old edge e' = uv on the page p_e for which we have u ≺ x_α≺ v is the edge e' = u_α^iu_β^j + 1 but we have x_β≺ u_β^j + 1, i.e., the edge e' “spans over” the edge x_αx_β.
As a similar argument can be made to show that there cannot exist an edge e' = u v ∈ E(H) with σ(e') = p_e and x_α≺ u ≺ x_β≺ v, we conclude that there are no crossings on the page p_e.
As all edges are covered by these cases, we conclude that no two edges of the same page in G can cross, i.e., G is a witness that is a positive instance of .
(⇐)
Let be a positive instance of .
Hence, there exists a stack layout G that extends H.
We now construct, based on G, a set 𝒞 of k vertices and show that it forms a colorful clique in G_C.
Recall that fulfills <ref> and contains the fixation gadget.
From <ref> and <ref>, we conclude that we have u_α^1 ≺ x_α≺ u_α^n_α + 1 for each α∈ [k].
Let x_α be placed in some v_α^i.
We now employ our intended semantics and add v_α^i to 𝒞.
<ref> ensures that each vertex in 𝒞 will have a different color, i.e., for each α∈ [k] there exists exactly one vertex v_α^i ∈𝒞 such that v_α^i ∈ V_α.
Hence, it remains to show that 𝒞 forms a clique in G_C.
Let x_α, x_β∈ be two arbitrary new vertices placed in v_α^i and v_β^j, respectively.
Assume without loss of generality x_α≺ x_β.
To show v_α^iv_β^j∈ E(G_C) let us consider the edge x_αx_β∈ E(G).
We have σ(x_αx_β) = p for some page p ∈ [ℓ].
Trivially, p ≠ p_d because u_α^0 ≺ x_α≺ u_α + 1^0 ≺ x_β and σ_H(u_α^0u_α + 1^0) = p_d.
Furthermore, for any e = uv ∈ E(G_C) with u ∉V_α and v ∉V_α we get p ≠ p_e.
This follows from σ_H(u_α^1u_α^n_α + 1) = p_e and u_α^1 ≺ x_α≺ u_α^n_α + 1≺ x_β.
Similar arguments also hold if u ∉V_β and v ∉V_β.
Hence, p = p_e for an edge e ∈ E(G_C) ∩ (V_α× V_β) must hold.
However, now all prerequisites for <ref> are fulfilled.
Thus, we can conclude that the only possible edge e is e = v_α^iv_β^j.
For any other edge e' ∈ E(G_C) ∩ (V_α× V_β) either x_α or x_β are not positioned in the right interval with respect to ≺_G.
Thus, <ref> tells us (indirectly) that we cannot use this page for the edge x_αx_β.
As the edge x_αx_β has to be placed in some page, and we ruled out every possibility but the page that would be created for the edge v_α^iv_β^j, we conclude that v_α^iv_β^j∈ E(G_C) must hold.
As x_α and x_β were two arbitrary new vertices from , we derive that 𝒞 forms a (colorful) clique in G_C, i.e., (G_C, k, (V_1, …, V_k)) is a positive instance of .
<ref> shows an example of the reduction for a small graph G_C with three colors.
Taking a closer look at our construction for <ref> (and <ref>), we make the following observation.
Consider a line l perpendicular to the spine.
On the page p_e for an edge e = v_α^iv_β^j ∈ E(G_C), the line l intersects at most one edge if placed in the interval for a color γ with γ < α or γ > β.
If α < γ < β, the line l can in addition intersect the edges u_α^iu_β^j + 1 and u_α^i + 1u_β^j of <ref>c.
Finally, if γ∈{α, β}, the line l intersects at most the full span of the tunnel, which has a width of three, see again <ref>.
Hence, the width of the page p_e is at most three.
Similarly, for the page p_d, <ref> show that its width is also at most three. Hence, the page width ω of H is constant and we obtain <ref>.
Since in a stack layout constructed by our reduction each line perpendicular to the spine intersects a constant number of edges, see also <ref>, we also obtain:
parameterized by the number κ of missing vertices and edges and the page width ω of the given layout, i.e., by κ + ω, is [1]-hard.
§ ADDING THE NUMBER OF PAGES AS PARAMETER FOR
In this section, we complete the landscape of <ref> by showing that becomes fixed-parameter tractable once we add ℓ to the parameterization considered by <ref>, i.e., we show the following theorem.
We will make use of the following concepts.
Consider a page p of a stack layout of G and recall that we can interpret it as a plane drawing of the graph G' with V(G') = V(G) and E(G') = {e ∈ E(G) |σ(e) = p} on a half-plane, where the edges are drawn as (circular) arcs.
A face on the page p in coincides with the notion of a face in the drawing (on the half-plane p) of G'.
This also includes the definition of the outer-face.
See <ref> for a visualization of these and the following concepts and observe that we can identify every face, except the outer-face, by the unique edge e = uv ∈ E(G) with u ≺ v and σ(e) = p that bounds it from above.
In the following, we will address a face by the edge it is identified with.
Similarly, we say that an edge induces the face it identifies.
We say that a vertex w is incident to the face uv (on some page p) if u ≼ w ≼ v holds and there does not exist a different face u'v' (on the page p) with u ≼ u' ≼ w ≼ v' ≼ v.
Similarly, an interval [w, x] is incident to a face if w and x are incident to the face.
Finally, we say that a face uv spans an interval [w, x] if u ≼ w ≺ x ≼ v holds; note that [w, x] might not be incident to the face uv.
Let ⊆ V(H) be the set of vertices of H that are incident to new edges, i.e., {u ∈ V(H) | there is an edge e = uv ∈}.
The size of this set is upper-bounded by twice the number of new edges.
We will define an equivalence class on the intervals of ≺_H based on the location of the vertices from .
Consider the two intervals [u_1, v_1] and [u_2, v_2] defined by the old vertices u_1, v_1, u_2 and v_2, respectively.
These two intervals are in the same equivalence class if and only if
{w ∈| w ≼ u_1} = {w ∈| w ≼ u_2} and {w ∈| v_1 ≼ w} = {w ∈| v_2 ≼ w }
holds.
Each equivalence class, which we call a super-interval, consists of a set of consecutive intervals delimited by (up to) two old vertices; see <ref>.
Note that the first and the last super-interval are delimited by a single old vertex only.
The number of super-intervals is bounded by twice the number of new edges plus one.
We denote the super-interval delimited by the two vertices u, v ∈ with u ≺ v by [[u, v]].
For the remainder of this paper, we assume that every super-interval is bounded by two vertices.
This is without loss of generality, since we can place dummy vertices at the beginning and end of the spine and assume that they bound the first and the last interval.
Furthermore, we write w = [[u, v]] to denote that the new vertex w ∈ is placed, with respect to a given spine order ≺_G, in the super-interval [[u, v]].
Furthermore, for a given ≺_G, we define to be its restriction to new vertices, i.e., for every two vertices u, v ∈ we have that u ≺ v implies that u precedes v also in the restricted order.
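The grouping of intervals into super-intervals can be computed by a single sweep over the spine order. The following minimal Python sketch illustrates this; the names spine_order and anchored are illustrative and not taken from the paper, and the sentinel dummy vertices correspond to the dummy vertices mentioned above.

```python
def super_intervals(spine_order, anchored):
    """Group the intervals of the spine order of H into super-intervals.

    spine_order -- the old vertices of H from left to right
    anchored    -- the old vertices that are incident to at least one new edge
    Two consecutive intervals belong to the same super-interval exactly if the
    old vertex between them is not incident to a new edge, so the super-intervals
    are delimited by consecutive anchored vertices (plus the two spine ends).
    Returns the super-intervals as pairs of delimiting vertices.
    """
    delimiters = [v for v in spine_order if v in anchored]
    # sentinel dummy vertices at both ends of the spine, as assumed in the text
    delimiters = ["left_dummy"] + delimiters + ["right_dummy"]
    return [(delimiters[i], delimiters[i + 1]) for i in range(len(delimiters) - 1)]
```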
The Algorithm.
With the above concepts at hand, we can now describe our algorithm.
It consists of a branching step, where we consider all possible page assignments for the new edges, all relative orders among the new vertices, all their possible assignment to , and all distances new edges can have from the outer-face.
In the following, we show that we can verify in polynomial time whether a branch can be extended to a solution or not.
The core of our algorithm is a dynamic program that we apply in each branch; in particular, we aim to show the following lemma.
Given an instance = of ,
(i) a page assignment σ_G for all edges,
(ii) an order in which the new vertices will appear along the spine,
(iii) for every new vertex v ∈ an assignment to a super-interval, and
(iv) for every new edge e an assigned distance ω_e to the outer face with respect to H and .
In
··
time we can compute an ℓ-page stack layout of G that extends and respects the given assignments – or report that no such layout exists.
We first observe that assignments (i)–(iv) fix everything except for the actual position of the new vertices within their .
Especially, assignment (i) allows us to check whether an edge e∈ incident to two old vertices crosses any old edge or another new edge from .
Furthermore, assignments (i) and (ii) allow us to check whether two new edges e = ua, e' = vb ∈ with u,a,v,b∈ will cross.
Adding assignment (iii), we can also check this for new edges with some endpoints in V(H), i.e., extend this to all u,a,v,b∈ V(G).
If the assignments imply a crossing or contradict each other, we can directly return that no desired layout exists.
These checks can be performed in
^2 + · time.
It remains to check whether there exists a stack layout in which no edge of ∖ intersects an old edge.
This depends on the exact intervals new vertices are placed in.
To do so, we need to assign new vertices to faces such that adjacent new vertices are in the exact same face and not two different faces with the same distance to the outer face.
We will find this assignment using a dynamic program that models whether there is a solution that places the first j new vertices (according to ) within the first i intervals in ≺_H.
When placing vertex v_j + 1 in the ith interval, we check that all preceding neighbors are visible in the faces assigned by (iv).
When advancing to the interval i + 1, we observe that when we leave a face, all edges with the same or a higher distance to the outer face need to have both endpoints placed or none.
We thus ensure that for no edge only one endpoint has been placed; see also <ref>.
These checks require time for each of the ·V(H) combinations of j and i.
Once we reach the interval V(H) + 1 and have successfully placed all new vertices, we know that there exists an ℓ-page stack layout of G that extends and respects the assignments.
Finally, by applying standard backtracing techniques, we can extract the spine positions of the new vertices to also obtain the layout.
We observe that there are ℓ^ ·! ·^ ·ω^ different possibilities for assignments (i)–(iv).
Before we show <ref>, we first make some observations on the assignments (i)–(iv) and their immediate consequences.
In the following, we only consider consistent branches, i.e., we discard branches where from assignment (ii) we get u v but from assignment (iii) u = [[a, b]] and v = [[c, d]] with c ≺ d ≼ a ≺ b, as this implies v ≺ u.
First, we observe that assignment (i) fully determines the page assignment σ_G.
Thus, it allows us to check whether an edge e∈, i.e., a new edge incident to two old vertices, crosses any old edge or another new edge from .
From now on, we consider all edges from as old since their placement is completely determined by assignment (i).
Second, assignments (i) and (ii) allow us to check whether two new edges e = ua, e' = vb ∈ with u,a,v,b∈ will cross each other.
Third, adding assignment (iii), we can also check this for new edges with some endpoints in V(H), i.e., extend this to all u,a,v,b∈ V(G).
Hence, assignments (ii) and (iii) together with ≺_H fix the relative order among vertices incident to new edges.
<ref> shows an example where the assignments imply a crossing among two new edges.
All of the above checks together can be done in
^2 + · time.
Clearly, if the assignments (i)–(iv) imply a crossing in an ℓ-page stack layout of G or contradict each other, we report that no layout exists that respects assignments (i)–(iv).
However, if not, we still need to find concrete spine positions for the new vertices.
The main challenge here is to assign new vertices to faces such that adjacent new vertices are in the same face and not two different faces with the same distance to the outer-face.
For this, we use assignment (iv) together with the following dynamic programming (DP) algorithm.
The Intuition Behind the DP-Algorithm.
Recall that assignment (iv) determines for every new edge e ∈ the distance ω_e to the outer-face with respect to H, i.e., how many edges of H we need to remove until e lies in the outer-face of the adapted stack layout H; see also <ref>.
We first observe that ω_e together with σ and the intervals in ≺_H in which the endpoint vertices of e are placed in uniquely determine a single face, namely the one the edge e is embedded in.
Furthermore, for every possible distance ω_e ∈ [ω]_0 and every interval in ≺_H, there is at most one face on page σ(e) with distance ω_e to the outer-face that spans the interval.
Hence, we address in the following for a given interval [a, b] with a ≺_H b the face on page σ(e) at the distance ω_e to the outer-face, if it exists, with ω_e^[a, b].
Note that ω_e^[a, b] = 0 always refers to the outer-face, independent of the vertices a and b.
However,
for two different intervals [a, b] and [a', b'] the expressions ω_e^[a, b] and ω_e^[a', b'] can identify two different faces.
As a consequence of the above observation, we can decide for each interval of ≺_H whether we can position a new vertex v there, i.e., whether v sees its adjacent vertices using the faces (in the assigned pages) at the corresponding distance from the outer-face.
We now consider the ordering of the new vertices as in , i.e., we have v_1 ≺ v_2 ≺…≺ v_.
Furthermore, we number the intervals of ≺_H from left to right and observe that there are V(H) + 1 intervals.
Consider a hypothetical solution which we cut vertically at the ith interval of ≺_H.
This partitions the new vertices into those that have been placed left and right of the cut.
For new vertices placed in the ith interval of ≺_H, different cuts at the ith interval yield different partitions into left and right.
Furthermore, some of the new edges lie completely on one side of the cut, while others span the cut.
For j ∈ [], let G_j be the graph G[V(H) ∪{v_1, …, v_j}], i.e., the subgraph of G induced by the vertices of H and the first j new vertices.
We will refer to edges that span the cut and thereby only have one endpoint in G_j as half-edges and denote with v· a half-edge with endpoint v∈ G_j.
Let G_j^+ be G_j extended by the half-edges E_j^+ = {v_x·| v_xv_y ∈, 1 ≤ x ≤ j < y ≤}; see <ref>.
In the following, we denote with e^+ the half-edge that we create for the edge e.
For the half-edge e^+ = v_x· and the edge e = v_xv_y, we call the vertex v_x the endpoint of e inside G_j and the vertex v_y the endpoint of e outside G_j.
Consider again the hypothetical solution and its vertical cut at the ith interval.
Assume that j vertices have been placed in left of the cut.
The stack layout witnesses the existence of a stack layout for G_j that extends H and uses only the first i intervals.
Furthermore, for every half-edge e^+ in G_j^+ the face where e is placed gives us a set of candidate intervals for its endpoint outside G_j, namely those incident to that face.
Hence, we can describe a partial solution of by a tuple (i, j).
The DP-Algorithm.
Let D be an (V(H) + 1) × ( + 1) × 2 binary table.
In the following, we denote an entry (i, j, r) for i ∈ [V(H) + 1], j ∈ [ + 1], and r ∈{0, 1} as a state of the algorithm.
A state (i, j, r) for i ∈ [V(H) + 1], j ∈ [ + 1], and r ∈{0, 1} is called feasible if and only if there exists, in the current branch, an extension of H for the graph G_j with the following properties.
* The j new vertices are positioned in the first i intervals and their placement respects assignment (iii).
* If r = 1, the last vertex v_j has been placed in the ith interval.
Otherwise, i.e., if r = 0, the last vertex v_j has been placed in some interval i' with i' < i.
* For every half-edge e^+ of G_j^+, the face in which we placed (the first endpoint of) e^+ spans the ith interval.
Note that for <ref>, we neither require that there exists some interval i' with i ≤ i' for the endpoint of e outside G_j that is incident to ω_e^i, nor that this i'th interval is part of the super-interval of that endpoint according to assignment (iii).
However, we will ensure all of the above points in the DP when placing the endpoint of e outside G_j.
Furthermore, while the last (binary) dimension is technically not necessary, it simplifies our following description.
Finally, note that we (correctly, as required in some solutions) allow to position multiple vertices in the same interval of ≺_H.
Our DP will mark a state (i, j, r) for i ∈ [V(H) + 1], j ∈ [ + 1], and r ∈{0, 1} as feasible by setting D[i, j, r] = 1.
Before we can show in <ref> that our DP indeed captures this equivalence, let us first relate different states and thus also partial solutions to each other.
We observe that if we have a solution for i ∈ [V(H)], j ∈ []_0, and r ∈{0,1}, and for every half-edge e^+ of G_j^+ the face ω_e^i also spans the i + 1th interval, then we also have a solution in the state (i + 1, j, 0).
Otherwise, we cannot find an interval for the endpoint of e outside G_j for some half-edge e^+ of G_j^+; see <ref>b.
More formally, let [a, b] be the ith interval and [b, c] the i + 1th interval.
Assume that we have the new edge e = v_xv_y in G with 1 ≤ x ≤ j < y ≤ k, which is the half-edge v_x· in G_j^+.
Let a'b' be the edge (on page σ_G(e)) that bounds the face identified by ω_e^i upwards with a' ≺ b ≼ b'.
We call (i, j, r) an admissible predecessor of (i + 1, j, 0) if b ≠ b'; see <ref>a.
If we decide to place the vertex v_j in the ith interval, say [a, b], we have to be more careful.
In particular, we have to ensure the following criteria before we can conclude that (i, j, 1) has an extension.
* The interval [a, b] is part of the super-interval of v_j, i.e., we place the new vertex in the super-interval determined by assignment (iii).
* For all new edges e = v_ju with u ∈ V(H) the face ω_e^i exists on the page σ_G(e) and both the interval i (and thus v_j) and u are incident to the face ω_e^i.
* For all new edges e = v_jv_q incident to two new vertices the face ω_e^i exists on the page σ_G(e) and the interval i (and thus v_j) is incident to it.
The second criterion ensures that edges incident to v_j and an old vertex can be inserted without introducing a crossing.
The third criterion ensures for a new edge e incident to two new vertices that the half-edge e^+ has been completed to a full edge without introducing a crossing or is placed in the face ω_e^i that spans the ith interval (if j < q).
Similar to before, we call (i, j - 1, r) an admissible predecessor of (i, j, 1) if all of the above criteria are met.
Finally, note that if we decide to place the vertex v_j in the ith interval, then the state (i, j, 0) is not feasible due to <ref>.
Our considerations up to now are summarized by the recurrence relation in <ref>.
In <ref>, we show that the recurrence relation identifies exactly all feasible states and thus partial solutions.
We have the following recurrence relation, where the first two equalities form the base case and the latter two hold for all i ∈ [2, V(H) + 1] and j ∈ []_0:
D[1, 0, 0] = 1 and D[1, 0, 1] = 0;
D[i, j, 0] = 1 if (i - 1, j, r) is an admissible predecessor of (i, j, 0) and D[i - 1, j, r] = 1 for some r ∈ {0,1}, and D[i, j, 0] = 0 otherwise;
D[i, j, 1] = 1 if (i, j - 1, r) is an admissible predecessor of (i, j, 1) and D[i, j - 1, r] = 1 for some r ∈ {0,1}, and D[i, j, 1] = 0 otherwise.
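To make the evaluation order of this recurrence explicit, the following minimal Python sketch fills the table D bottom-up. The predicate is_admissible_predecessor stands for the admissibility checks described above and is supplied by the caller; it, as well as the surrounding function, is an illustrative placeholder rather than part of the original construction.

```python
def fill_dp_table(num_intervals, num_new_vertices, is_admissible_predecessor):
    """Fill the binary DP table D[i][j][r] bottom-up.

    Indices: i in 1..num_intervals, j in 0..num_new_vertices, r in {0, 1}.
    is_admissible_predecessor(pred_state, state) is assumed to implement the
    admissibility checks of the text (a placeholder, not from the paper).
    """
    D = [[[0, 0] for _ in range(num_new_vertices + 1)]
         for _ in range(num_intervals + 1)]
    D[1][0][0] = 1  # base case: first interval, no new vertex placed yet

    for i in range(1, num_intervals + 1):
        for j in range(num_new_vertices + 1):
            # move on to interval i without placing a new vertex there (r = 0)
            if i >= 2:
                D[i][j][0] = int(any(
                    D[i - 1][j][r] == 1
                    and is_admissible_predecessor((i - 1, j, r), (i, j, 0))
                    for r in (0, 1)))
            # place the j-th new vertex in interval i (r = 1); this case also
            # applies for i = 1, which the base case alone does not cover
            if j >= 1:
                D[i][j][1] = int(any(
                    D[i][j - 1][r] == 1
                    and is_admissible_predecessor((i, j - 1, r), (i, j, 1))
                    for r in (0, 1)))
    return D
```

A solution respecting the branch exists if and only if D[num_intervals][num_new_vertices][r] = 1 for some r, and the spine positions can then be recovered by standard backtracing, as discussed below.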
For all i ∈ [V(H) + 1], j ∈ []_0, and r ∈{0,1} we have D[i, j, r] = 1 if and only if the state (i, j, r) is feasible.
Furthermore, evaluating the recurrence relation of <ref> takes time.
We first use induction over i and j to show correctness of the recurrence relation and later argue the time required to evaluate it.
In the following, we let [a, b] be the ith interval.
Base Case (i = 1 and j = 0).
If we have i = 1 and j = 0, we are in the first interval and have not placed any new vertex.
Thus, G_j = G_j^+ = H holds and G_j has clearly a solution, namely H.
Furthermore, there are no half-edges in G_j^+ and thus (1, 0, r) is a feasible state if and only if r = 0.
Note that (1, 0, 1) contradicts <ref> and thus cannot be a feasible state.
Hence, <ref> are correct and serve as our base case.
In our inductive hypothesis, we assume that the table D has been correctly filled up until some value i' ∈ [V(H) + 1] and j' ∈ []_0.
Inductive Step for i (i = i' + 1 and j = j').
First, note that by moving one interval to the right, having r = 1 is not possible and hence we focus on <ref> in this step.
We consider the cases D[i, j, 0] = 1 and D[i, j, 0] = 0 separately.
For D[i, j, 0] = 1, there exists by the definition of <ref> an admissible predecessor (i', j, r) for some r ∈{0,1} with D[i', j, r] = 1.
By our inductive hypothesis, this means that the state (i', j, r) is feasible.
In particular, it has a solution for G_j that places all new vertices in the first i' intervals.
Clearly, the same solution positions them also in the first i intervals.
Furthermore, every half-edge e^+ of G_j^+ is assigned to a face that spans the i'th interval.
As (i', j, r) is an admissible predecessor, we know that for every e ∈ G_j^+ the face ω_e^i' also spans the ith interval.
Hence, the state (i, j, 0) is feasible, i.e., D[i, j, 0] = 1 correctly holds.
For D[i, j, 0] = 0, there are two cases to consider by the definition of <ref>.
Either (i, j, 0) does not have an admissible predecessor, or for all admissible predecessors (i', j, r) of (i, j, 0) we have D[i', j, r] = 0.
Observe that only states of the form (i', j, r) for some r ∈{0,1} can be admissible predecessors.
In the former case, there exists by our definition of admissible predecessor some half-edge e^+ in G_j^+ that is assigned to the face ω_e^i' which does not span the ith interval.
Hence, (i, j, 0) is not feasible by <ref>.
In the latter case, we know by our inductive hypothesis that D[i', j, r] = 0 implies that (i', j, r) is not feasible, i.e., there does not exist a solution for the graph G_j in which we place the new vertices in the first i' intervals.
As we do not place the vertex v_j in the ith interval, no solution can exist for the state (i, j, 0) either and D[i, j, 0] = 0 correctly holds.
This concludes the inductive step for i.
Inductive Step for j (i = i' and j = j' + 1).
Analogous to before, we note that by placing v_j in the ith interval, r = 0 is not possible and hence we focus on <ref> in this step.
We again consider the cases D[i, j, 1] = 1 and D[i, j, 1] = 0 separately.
For D[i, j, 1] = 1, there exists an admissible predecessor (i, j', r) for some r ∈{0,1} with D[i, j', r] = 1 by the definition of <ref>.
By our inductive hypothesis, this means that the state (i, j', r) is feasible and has a solution G_j - 1 that places all new vertices of G_j - 1 in the first i intervals.
We now construct a solution G_j by setting σ_G_j according to assignment (i), and extending the spine order ≺_G_j - 1 by placing v_j in the ith interval.
More concretely, we take ≺_G_j - 1, set a ≺ v_j ≺ b, and take the transitive closure to obtain a linear order on the vertices of G_j.
As (i, j', r) is an admissible predecessor, v_j is placed within its super-interval in G_j and G_j extends H.
So it remains to show that G_j is crossing-free.
As we discard all assignments (i)–(iv) that imply a crossing among two new edges, a new edge e incident to v_j could only cross an old edge e'.
However, as (i, j', r) is an admissible predecessor, we have that the vertices incident to e lie in the same face ω_e^i.
Every old edge induces a face of H.
Therefore, we deduce that a crossing between an old and a new edge whose endpoints lie in the same face is impossible.
Consequently, also e and e' cannot cross.
Thus, G_j is a solution for the graph G_j.
Finally, it is clear that for every half-edge e^+ of G_j^+, the face assigned to its associated edge e spans the ith interval.
For half-edges that already existed in G_j'^+, this holds as (i, j', r) is feasible.
For half-edges introduced in G_j^+, this holds by the definition of admissible predecessor; see <ref>.
Thus, (i, j, 1) is feasible and D[i, j, 1] = 1 correctly holds.
For D[i, j, 1] = 0, there are again two cases to consider by the definition of <ref>.
Either (i, j, 1) does not have an admissible predecessor, or for all admissible predecessors (i, j', r) of (i, j, 1) we have D[i, j', r] = 0.
Again, we observe that it suffices to consider only states of the form (i, j', r) for some r ∈{0,1} as potential admissible predecessors.
For the former case, clearly, if both such states (i, j', r) are not admissible predecessors, then (i, j, 1) cannot be feasible:
Either we violate <ref> by placing v_j outside its super-interval, which contradicts assignment (iii), or an edge e incident to v_j crosses an old edge as one of its endpoints is not incident to ω_e^i; see <ref>.
Note that we can assume that all relevant faces span the ith interval, as we have already shown that the inductive step for i is correct.
In both cases, (i, j, 1) is clearly not feasible; see <ref> for the former and observe that a crossing contradicts the existence of an extension for the latter case.
We now consider the case where all admissible predecessor (i, j', r) of (i, j, 1) have D[i, j', r] = 0.
Using proof by contradiction, we show that in this case (i, j, 1) cannot be feasible either.
Assume that (i, j, 1) would be a feasible state.
Then, there exists a solution G_j for the graph G_j.
Using G_j, we can create a solution G_j' for G_j' by removing v_j from ≺_G_j and all its incident edges from σ_G_j.
Clearly, G_j' respects the assignments (i)–(iv).
Furthermore, for every half-edge e^+ in G_j^+, the assigned face spans the ith interval as (i, j, 1) is feasible; see <ref>.
For every half-edge e^+ in G_j'^+ whose endpoint outside G_j' is v_j, i.e., that was completed to an ordinary edge in G_j, the assigned face spans the ith interval as it is incident to it.
Hence, if (i, j, 1) is feasible and (i, j', r) is an admissible predecessor, then (i, j', r) is also feasible.
However, this contradicts the inductive hypothesis, as we have D[i, j - 1, r] = 0.
Thus, the state (i, j, 1) cannot be feasible and D[i, j, 1] = 0 correctly holds.
This concludes the inductive step for j.
Evaluation Time of Recurrence Relation.
We observe that, apart from checking whether a state is an admissible predecessor of (i, j, r), the steps required to perform in order to evaluate the recurrence relation take constant time.
In the following, we assume that we can access a look-up table that stores the faces in H that span a given interval in ≺_H on a given page p ∈ [ℓ] ordered from outside in, i.e., starting with the outer-face.
We will account for this in our proof of <ref>.
In <ref>, we ensure for every half-edge e^+ in G_j^+ that the face ω_e^i - 1 does not end at the interval i - 1.
As there are at most half-edges, we can do this in time.
For <ref>, we have to ensure that the ith interval [a, b] is part of the super-interval of v_j, which takes constant time.
Furthermore, we have to check for every new edge e = v_ju that the face ω_e^i exists on the page σ(e) and that v_j is incident to it.
For a single edge, this takes constant time, as we can look up the faces that span the ith interval and v_j is always incident to the bottom-most face.
Furthermore, if u ∈ V(H) holds, we also have to check if u is incident to ω_e^i.
If u ≼ a ≺ b, we can access the faces that span the interval between u and its successor in ≺_H and check if u is incident to the face ω_e^i.
An analogous check can be made if we have a ≺ b ≼ u.
This takes constant time per edge e.
Hence, we can evaluate <ref> in time.
Combining all, the claimed running time follows.
Putting Everything Together.
With our DP at hand, we are now ready to prove <ref>.
First, recall that we can check in
^2 + ·
time whether assignments (i)–(iv) are consistent and do not imply a crossing.
Thus, we assume for the remainder of the proof that they are.
Furthermore, recall that there are V(H) + 1 intervals in H and observe that we have G_k = G_k^+ = G.
Therefore, G_k^+ does not contain any half-edge and the feasibility property <ref> is trivially satisfied.
Hence, as a consequence of <ref>, we deduce that there exists an ℓ-page stack layout of G that extends and respects assignments (i)–(iv) if and only if D[V(H) + 1, , r] = 1 for r = 0 or r = 1.
Furthermore, by applying standard backtracing techniques, we can also determine the concrete spine positions for every new vertex, i.e., compute such a stack layout.
We now bound the running time of the DP.
For that, we first observe that the DP-table D has ·V(H) entries.
We have seen in <ref> that the time required to evaluate the recurrence relation is in .
However, we assumed that we have access to a lookup-table that stores for each interval and each page the faces that span it.
We can compute this table in a pre-processing step by iterating from left to right over the spine order ≺_H and keeping, at each interval [a, b] and for each page p, track of the edges a'b' with σ(a'b') = p and a' ≼ a ≺ b ≼ b'.
This can be done in ^2 time.
However, as this table can be re-used in different invocations of the DP-Algorithm, it has to be computed only once in the beginning.
As the overall running time of the -algorithm will clearly dominate ^2, we neglect this pre-computation step.
Hence, the running time of the DP is ·V(H) · and together with the initial checks, this amounts to ·· time.
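The pre-processing step mentioned above, i.e., the lookup table that stores for every interval and page the faces spanning it ordered from the outer face inwards, can be sketched as follows. This is a straightforward (not particularly optimized) Python illustration; all parameter names are assumptions and edges are given with their left endpoint first.

```python
def face_lookup_table(spine_order, pages, edges_by_page):
    """For every interval i and page p, list the faces spanning the interval.

    spine_order   -- old vertices of H from left to right
    edges_by_page -- maps a page p to its old edges (u, v) with u before v
    A face is identified by the edge bounding it from above; None stands for the
    outer face.  The returned list is ordered from the outer face inwards, so the
    index of a face equals its distance to the outer face at that interval.
    For simplicity only the intervals between consecutive old vertices are built.
    """
    position = {v: idx for idx, v in enumerate(spine_order)}
    table = {}
    for i in range(len(spine_order) - 1):  # interval between vertices i and i+1
        for p in pages:
            spanning = [(u, v) for (u, v) in edges_by_page.get(p, [])
                        if position[u] <= i and position[v] >= i + 1]
            # nested edges spanning the same interval: widest (outermost) first
            spanning.sort(key=lambda e: position[e[0]] - position[e[1]])
            table[(i, p)] = [None] + spanning
    return table
```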
Finally, we observe that for assignment (i), i.e., σ_G, there are ℓ^ different possibilities, for assignment (ii), i.e., , there are ! possibilities, for assignment (iii), i.e., the assignment of new vertices to , there are ^ possibilities, and for assignment (iv), i.e., the distance to the outer face, there are ω^ different possibilities.
This gives us overall ℓ^ ·! ·^ ·ω^ different possibilities for assignments (i)–(iv).
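For concreteness, the branching over assignments (i)–(iv) can be written as a nested enumeration; the following Python sketch uses illustrative parameter names and itertools, and the exact range of the distances in (iv) is an assumption on our part.

```python
from itertools import permutations, product

def enumerate_branches(new_edges, new_vertices, pages, super_intervals, max_width):
    """Yield every candidate assignment (i)-(iv) the algorithm branches over."""
    for sigma in product(pages, repeat=len(new_edges)):                      # (i)
        for order in permutations(new_vertices):                             # (ii)
            for si in product(super_intervals, repeat=len(new_vertices)):    # (iii)
                for dist in product(range(max_width + 1), repeat=len(new_edges)):  # (iv)
                    yield (dict(zip(new_edges, sigma)), order,
                           dict(zip(new_vertices, si)), dict(zip(new_edges, dist)))
```

Each yielded branch is then checked with the dynamic program described above.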
Applying <ref> to each of these, we get the desired theorem.
§ TOWARDS A TIGHTER FIXED-PARAMETER ALGORITHM FOR
As a natural next step, we would like to generalize <ref> by considering only κ and ℓ as parameters.
However, the question of whether one can still achieve fixed-parameter tractability for when parameterizing by κ+ℓ is still open.
Nevertheless, as our final result, we show that strengthening <ref> is indeed possible at least in the restricted case where no two missing vertices are adjacent, as we can then greedily assign the first “possible” interval to each vertex that complies with assignment (i)–(iii).
Observe that G[] being an independent set removes the need for synchronizing the position of adjacent new vertices to ensure that they are incident to the same face.
We propose a fixed-parameter algorithm that loosely follows the ideas introduced in <ref> and adapts them to the considered setting.
The following claim will become useful.
Given an instance = of where G[] is an independent set,
(i) a page assignment σ_G for all edges,
(ii) an order in which the new vertices will appear along the spine, and
(iii) for every new vertex v ∈ an assignment to a super-interval.
In
·^2
time we can compute an ℓ-page stack layout of G that extends and respects the given assignments (i)–(iii) or report that no such layout exists.
Towards showing the claim, we first note that we only miss assignment (iv) from <ref>.
Hence, by the same arguments as in the proof of <ref>, we can check in
^2 + ·
time whether assignments (i)–(iii) are consistent and do not imply a crossing.
For the remainder of the proof, we assume that they are, as we can otherwise immediately return that there does not exist an ℓ-page stack layout of G that respects the assignments.
We still need to assign concrete spine positions to the new vertices.
However, in contrast to <ref>, there is no need to ensure that adjacent new vertices are in the same face, because no two new vertices are adjacent by assumption.
This allows us to use a greedy variant of the DP from <ref>.
The Greedy Algorithm.
We maintain a counter j initialized at j = 1 and consider the ith interval [a, b].
If [a, b] is part of the super-interval for v_j, we check the following for every new edge e = v_ju incident to v_j.
Assuming that v_j would be placed in [a, b], we check whether v_j sees u on the page σ(e).
These checks can be done in · time.
If this is the case, we place v_j in the interval [a, b] and increase the counter by one, otherwise we continue with the next interval [b, c].
We stop once we have j = k + 1 as we have assigned an interval to all new vertices.
To obtain the ℓ-page stack layout of G, we can store in addition for each vertex the interval we have placed it in.
If after processing the last interval there are still some new vertices that have not been placed, we can return that there does not exist an ℓ-page stack layout of G that extends H and respects the assignments (i)—(iii).
The greedy algorithm runs in ·^2 time.
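The greedy placement just described can be sketched as follows; all helper names (super_interval_of, edges_of, sees) are assumptions used only for illustration, and the visibility predicate sees is meant to encapsulate the crossing checks against old edges on the assigned pages.

```python
def greedy_place_new_vertices(intervals, new_vertices, super_interval_of,
                              edges_of, sees):
    """Greedily place the new vertices when no two of them are adjacent.

    intervals         -- the intervals of the spine order of H, left to right
    new_vertices      -- the new vertices in the guessed order (ii)
    super_interval_of -- maps a new vertex to the set of intervals forming its
                         guessed super-interval, assignment (iii)
    edges_of          -- maps a new vertex v to its incident new edges (v, u, page)
    sees(interval, u, page) -- True if a vertex placed in `interval` sees the old
                               vertex u on `page` without crossing an old edge
    Returns a placement (new vertex -> interval) or None if none exists.
    """
    placement = {}
    j = 0
    for interval in intervals:
        while j < len(new_vertices):
            v = new_vertices[j]
            if interval not in super_interval_of[v]:
                break  # v has to be placed within its own super-interval
            if all(sees(interval, u, page) for (_, u, page) in edges_of[v]):
                placement[v] = interval  # place v here and try the next vertex
                j += 1
            else:
                break  # try the next interval for v
    return placement if j == len(new_vertices) else None
```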
Correctness of the Greedy Algorithm.
In the following, we show that if there is an ℓ-page stack layout of G that extends H and respects assignments (i)–(iii), then our greedy algorithm finds also some.
To that end, we assume that there is such a stack layout ⟨≺^*_G, σ^*_G⟩.
As we only assign intervals and thus spine positions to the new vertices, it suffices to show that if ⟨≺^*_G, σ^*_G⟩ is a solution, so is ⟨≺, σ^*_G⟩, where ≺ is the spine order we obtain with our greedy algorithm.
Observe that we must find some tuple G that extends H and respects the assignments (i)–(iii), as we only ensure that no new edge incident to a new vertex crosses old edges.
As this is clearly not the case in ⟨≺^*_G, σ^*_G⟩, there must be a feasible interval for each new vertex.
For the remainder of the proof, we assume that ≺^*_G and ≺ differ only in the position of some new vertex v.
This is without loss of generality, as we can apply the following arguments for all new vertices iteratively from left to right according to ≺^*_G, until all of them are placed as in the greedy solution.
As we assign the intervals greedily, we assume that v ≺^*_G u implies v ≺ u for all old vertices u, i.e., v appears in ≺ earlier than in ≺^*_G.
Clearly, ⟨≺, σ^*_G⟩ extends H as ⟨≺^*_G, σ^*_G⟩ does.
Therefore, we only need to show that ⟨≺, σ^*_G⟩ does not contain crossings.
Towards a contradiction, assume that ⟨≺, σ^*_G⟩ contains a crossing among the edges e = vu and e' = ab.
We assume that e is a new edge incident to the new vertex v and observe that u must be an old vertex.
Furthermore, we assume without loss of generality that b is also an old vertex.
As already argued in the beginning, we have checked, when placing v, that e does not cross an old edge.
Hence, we observe that e' cannot be an old edge.
As we also treated all new edges incident to two old vertices as old edges, we conclude that e' must be a new edge incident to a new vertex a and an old vertex b.
Furthermore, we assume a ≺^*_G b, a ≺^*_G v, and v ≺^*_G u.
This is without loss of generality, as in any other case the arguments will be symmetric.
As ⟨≺^*_G, σ^*_G⟩ is crossing free, we have a ≺^*_G b ≺^*_G v ≺^*_G u or a ≺^*_G v ≺^*_G u ≺^*_G b.
Furthermore, as our greedy algorithm positions v further to the left compared to ≺^*_G and e and e' now cross, we must have a ≺ v ≺ b ≺ u or v ≺ a ≺ u ≺ b, respectively.
However, we observe that neither situation is possible.
For the former case, i.e., when we turn a ≺^*_G b ≺^*_G v ≺^*_G u into a ≺ v ≺ b ≺ u, we observe that b is an old vertex incident to a new edge, i.e., it delimits a super-interval, see also <ref>a.
Hence, having b ≺^*_G v and v ≺ b implies that the super-interval for v differs between ⟨≺^*_G, σ^*_G⟩ and ⟨≺, σ^*_G⟩, which violates assignment (iii) and thus contradicts our assumption on the existence of ⟨≺^*_G, σ^*_G⟩ and ⟨≺_G, σ^*_G⟩.
In the latter case, when we turn a ≺^*_G v ≺^*_G u ≺^*_G b into v ≺ a ≺ u ≺ b, we move v left of the new vertex a.
Hence, we change the relative order among the new vertices, see <ref>b.
This is a contradiction to assignment (ii) and thus our assumption on the existence of ⟨≺^*_G, σ^*_G⟩ and ⟨≺_G, σ^*_G⟩.
As we obtain in all cases a contradiction, we conclude that e and e' cannot cross.
Applying the above arguments inductively, we derive that our algorithm must find a solution if there exists one, i.e., is correct.
Putting Everything Together.
We can now branch over all possible assignments (i)—(iii).
For assignment (i), i.e., σ_G, there are ℓ^ different possibilities, for assignment (ii), i.e., , there are ! possibilities, and for assignment (iii), i.e., the assignment of new vertices to , there are ^ possibilities
This gives us overall ℓ^ ·! ·^ different possibilities for assignments (i)–(iii).
Applying our greedy algorithm to each of them, we obtain the theorem.
§ CONCLUDING REMARKS
Our results provide the first investigation of the drawing extension problem for stack layouts through the lens of parameterized algorithmics. We show that the complexity-theoretic behavior of the problem is surprisingly rich and differs from that of previously studied drawing extension problems. One prominent question left for future work is whether one can still achieve fixed-parameter tractability for when parameterizing by κ+ℓ, thus generalizing <ref> and <ref>.
A further natural and promising direction for future work is to consider generalizing the presented techniques to other types of linear layouts, such as queue layouts.
Finally, future work could also investigate the following generalized notion of extending linear layouts:
Given a graph G, the spine order for some subset of its vertices and the page assignment for some subset of its edges, does there exist a linear layout of G that extends both simultaneously?
§ OMITTED PROOFS FROM SECTION <REF>
§ OMITTED DETAILS FROM SECTION <REF>
§.§ Omitted Proofs
§ OMITTED DETAILS FROM SECTION <REF>
§.§ Omitted Proofs
§ OMITTED DETAILS FROM SECTION <REF>
The main task left open in <ref> was the proof of the following lemma.
In the following, we use the following notation related to .
We observe that for assignment (i), i.e., σ_G, there are ℓ^ different possibilities, for assignment (ii), i.e., , there are ! possibilities, for assignment (iii), i.e., the assignment of new vertices to , there are ^ possibilities, and for assignment (iv), i.e., the distance to the outer face, there are ω^ different possibilities.
This gives us overall ℓ^ ·! ·^ ·ω^ different possibilities for assignments (i)–(iv).
The theorem thus follows by applying <ref> to each of these.
§ OMITTED DETAILS FROM SECTION <REF>
In <ref>, we stated the following theorem, which we now want to prove.
§ REMOVING MULTI-EDGES
In the following section, we will describe how one can adapt the - and [1]-hardness reductions to not rely on multi-edges.
While the basic idea is always the same, namely to introduce several auxiliary vertices in order to distribute the multiple edges among them, the concrete implementation depends on the part of the reduction we are currently discussing, see the following sections.
§.§ Fixation Gadget (Section <ref>)
While the fixation gadget gives us a lot of power to show hardness of different parameter combinations, we use multi-edges to construct it.
As our graphs H and G are assumed to be simple, it is now time to remove these multi-edges by distributing them over several (additional) auxiliary vertices.
In order to do that, we no longer introduce the 2(F + 1)-many vertices b_1, …, b_F + 1 and a_1, …, a_F + 1, but introduce for each page p ≠ p_d 2(F + 1) vertices b_1^p, …, b_F + 1^p and a_1^p, …, a_F + 1^p.
We also adapt the spine order ≺_H and set b_i^p ≺ v_i ≺ a_i^p for i ∈ [F + 1] and p ∈ [ℓ] ∖{p_d}.
Furthermore, we fix the order among the newly introduced vertices by enforcing b_i^p + 1≺ b_i^p and a_i^p ≺ a_i^p + 1 for every i ∈ [F + 1] and p ∈ [ℓ] ∖{p_d}.
The linear order ≺ is then obtained by taking the transitive closure of the above (partial) orders.
Observe that the above spine order places for each page p one vertex before (b_i^p) and after (a_i^p) v_i.
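One linear order consistent with the relative orders stated above (the transitive closure still leaves some freedom, so this is just the natural choice) can be generated as follows; the function and its string-based vertex names are purely illustrative.

```python
def adapted_gadget_spine_order(F, num_pages, dummy_page):
    """Return one spine order for the adapted fixation gadget without multi-edges."""
    non_dummy = [p for p in range(1, num_pages + 1) if p != dummy_page]
    spine = []
    for i in range(1, F + 2):  # i = 1, ..., F + 1
        spine += [f"b_{i}^{p}" for p in reversed(non_dummy)]  # b_i^{p+1} before b_i^p
        spine.append(f"v_{i}")
        spine += [f"a_{i}^{p}" for p in non_dummy]            # a_i^p before a_i^{p+1}
    return spine
```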
Next, we adapt the edges that we created in <ref> and their page assignment σ_H.
To that extend, recall that we introduced for every i ∈ [F + 1] and every page p ≠ p_d the edge e(b_i, a_i, p) = b_ia_i with σ(e(b_i, a_i, p)) = p.
Now, this edge should be incident to vertices created “for the page p”, i.e., we create instead the edge b_i^pa_i^p and set σ(b_i^pa_i^p) = p.
Although the remaining edges created in <ref> are already simple, we have to adapt (some of) them, as their incident vertices no longer exist, i.e., have been replaced.
In particular, for every i ∈ [F + 1], we now no longer introduce the edges b_iv_i and v_ia_i but the edges b_i^ℓ - 1v_i and v_ia_i^ℓ - 1 and set σ(b_i^ℓ - 1v_i) = σ(v_ia_i^ℓ - 1) = p_d.
Finally, we create instead of the edge b_1a_F + 1 the edge b_1^ℓ - 1a_F + 1^ℓ - 1 and set σ(b_1^ℓ - 1a_F + 1^ℓ - 1) = p_d.
<ref> is an adapted version of <ref> and shows the updated construction.
We end this section with showing <ref>, which is an adapted version of <ref>.
Let = be an instance of that contains the adapted fixation gadget (without multi-edges) on F vertices {f_1, …, f_F}.
In any solution G to and for every i ∈ [F], we have v_i ≺ f_i ≺ v_i + 1 and σ(f_iv_i) = σ(f_iv_i + 1) = p_d.
Furthermore, the adapted fixation gadget contributes 2Fℓ + 2ℓ - 1 vertices and (ℓ + 4)F + ℓ + 2 edges to the size of .
Let G be a solution to .
Regarding the two properties of any possible solution to , i.e., that v_i ≺ f_i ≺ v_i + 1 and σ(f_iv_i) = σ(f_iv_i + 1) = p_d holds for every i ∈ [F], it suffices to make the following observations in the proof of <ref>.
We first consider the argument we made to show v_i ≺ f_i ≺ v_i + 1.
There, we first assumed that f_i ≺ v_i would hold for an i ∈ [F].
Using the observation on the presence of the edges e(b_i + 1, a_i + 1, p) for every page p ≠ p_d and the spine order b_i + 1≺_H v_i + 1≺_H a_i + 1, we concluded that f_i can see v_i + 1 only on the page p_d.
This ultimately led to a contradiction to the assumption on the existence of a solution with f_i ≺ v_i.
Now, for the modified fixation gadget we can make a similar observation.
Consider any page p ≠ p_d.
The graph H contains the edge b_i + 1^pa_i + 1^p and we have b_i + 1^p ≺_H v_i + 1≺_H a_i + 1^p.
Hence, f_i can still see v_i + 1 only on the page p_d, i.e., we still must have σ(f_iv_i + 1) = p_d.
And by the very same arguments as in the proof of <ref> this leads to a contradiction.
Let us now re-visit the argument to show σ(f_iv_i) = σ(f_iv_i + 1) = p_d.
There, we assumed the existence of a solution with σ(f_iv_i) ≠ p_d for some i ∈ [F], i.e., we assumed that σ(f_iv_i) = p holds for some page p ≠ p_d.
Under this assumption, we (again) make the observation that we have the edge b_i^pa_i^p with σ_H(b_i^pa_i^p) = p, which allows us to strengthen the above result to v_i ≺ f_i ≺ a_i^p.
As in the proof of <ref>, we deduce from σ_H(v_ia_i^ℓ - 1) = p_d and σ_H(b_i + 1^p'a_i + 1^p') = p' for any page p' ≠ p_d that there does not exist a feasible page assignment for the edge f_iv_i + 1, which leads to a contradiction as in the proof of <ref>.
Regarding the size of the adapted fixation gadget, we first observe that we have not introduced any additional edge, but rather re-distributed existing edges to additional vertices.
For the number of vertices in the adapted fixation gadget, we recall that ℱ consists of F vertices and we have F + 1 vertices of the form v_i.
As we introduce 2(F + 1) vertices per page p ≠ p_d, this amounts to 2(F + 1)(ℓ - 1) additional vertices.
Combining all, we conclude that we have 2Fℓ + 2ℓ - 1 vertices in the adapted fixation gadget.
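The vertex count in the statement can be verified by the following short calculation, assembled from the three contributions just listed.

```latex
\[
  \underbrace{F}_{\text{vertices in } \mathcal{F}}
  + \underbrace{(F+1)}_{\text{vertices } v_i}
  + \underbrace{2(F+1)(\ell-1)}_{\text{vertices } b_i^p,\ a_i^p}
  = 2F + 1 + 2F\ell - 2F + 2\ell - 2
  = 2F\ell + 2\ell - 1.
\]
```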
§.§ NP-hardness Reduction (Section <ref>)
Our reduction from <ref> that we used to show that with two new vertices is NP-complete, see <ref>, creates a graph H with several multi-edges.
More concretely, we created multi-edges both in the base layout from <ref>, for example to block visibility to a vertex, and in the fixation gadget.
For the latter part of the reduction, i.e., the fixation gadget, we have discussed in <ref> how to remove the multi-edges.
This section is devoted to describe a way to remove the multi-edges in the former part, i.e., the base layout.
At the end of this section, we argue that our reduction remains correct.
Removal of the Multi-Edges.
Let φ = (𝒳, 𝒞) be an instance of with N = |𝒳| variables and M = |𝒞| clauses.
Recall that we introduced N + M + 1 dummy vertices d_q in <ref> and distributed them in ≺_H on the spine, i.e., we set d_i ≺ x_i ≺ d_i + 1 and d_N + j≺ c_j ≺ d_N + j + 1 for every i ∈ [N] and j ∈ [M].
We now create 2N(N + M + 1) dummy vertices d_q^p instead with q ∈ [N + M + 1] and p ∈ [2N], i.e., N + M + 1 dummy vertices for every page p ∈ [ℓ] ∖{p_d} associated to a variable.
These new dummy vertices are ordered on the spine as follows.
We set d_i^p ≺ x_i ≺ d_i + 1^p and d_N + j^p ≺ c_j ≺ d_N + j + 1^p for every i ∈ [N], j ∈ [M], and p ∈ [2N].
Furthermore, we set d_q^p_i≺ d_q^p_¬ i for every q ∈ [N + M + 1] and i ∈ [N].
Finally, we set d_q^p_¬ i≺ d_q^p_i + 1 for every q ∈ [N + M + 1] and i ∈ [N - 1].
We obtain the linear order ≺_H by taking the transitive closure of the above relative orders, see also <ref>.
Next, we redistribute the multi-edges over the new dummy vertices.
Recall that we created for every pair of variables x_i, x_j ∈𝒳 with i ≠ j the edges e(x_i, p_j) = d_id_i + 1 and e(x_i, p_¬ j) = d_id_i + 1.
Now, we create the edges e(x_i, p_j) = d_i^p_jd_i + 1^p_j and e(x_i, p_¬ j) = d_i^p_¬ jd_i + 1^p_¬ j instead.
We leave the page assignment σ_H as it is, i.e., we have σ(e(x_i, p_j)) = p_j and σ(e(x_i, p_¬ j)) = p_¬ j.
Furthermore, we created for every combination of a clause c_j ∈𝒞 and a variable x_i ∈𝒳 edges depending on the occurrence of x_i in c_j.
We now also re-distribute these edges as follows.
If x_i does not appear in c_j, we create the edges e(c_j, p_i) = d_N + j^p_id_N + j + 1^p_i and e(c_j, p_¬ i) = d_N + j^p_¬ id_N + j + 1^p_¬ i.
We set σ(e(c_j, p_i)) = p_i and σ(e(c_j, p_¬ i)) = p_¬ i.
If x_i appears in c_j without negation, we create the edge e(c_j, p_i) = d_N + j^p_id_N + j + 1^p_i and set σ(e(c_j, p_i)) = p_i.
Symmetrically, if x_i appears negated in c_j, we create the edge e(c_j, p_¬ i) = d_N + j^p_¬ id_N + j + 1^p_¬ i and set σ(e(c_j, p_¬ i)) = p_¬ i.
Finally, we set a_3^p_ℓ - 1≺ d_1^p_1 to ensure that the fixation gadget (as adapted in <ref>) is placed at the very beginning of the spine.
Furthermore, we add the edge d_1^p_1d_N +M + 1^p_¬ N and set σ(d_1^p_1d_N +M + 1^p_¬ N) = p_d to ensure that our adapted construction still has <ref>.
Correctness of the Reduction.
First, observe that we added a polynomial number of additional vertices to H and no new edges.
Hence, the size of H (and G) increased by a factor polynomial in the size of φ and the size of remains polynomial in the size of φ.
Furthermore, due to <ref>, it still holds that in any solution G to we have s ≺ v ≺ d_1^p_1≺ x_i ≺ c_j ≺ d_N + M + 1^p_¬ N for every i ∈ [N] and j ∈ [M].
Similarly, the relative position in ≺_H of the dummy vertices on the spine in the adapted reduction did not change compared to the reduction from <ref>, i.e., instead of d_i ≺ x_i ≺ d_i + 1 and d_N + j≺ c_j ≺ d_N + j + 1 we now have d_i^p ≺ x_i ≺ d_i + 1^p and d_N + j^p ≺ c_j ≺ d_N + j + 1^p for every i ∈ [N] and j ∈ [M] (and p ∈ [2N]).
Furthermore, the edges span over the same vertices.
Hence, we conclude that the proof of <ref> readily carries over, i.e., <ref> also holds for our modified construction that does not have multi-edges.
§.§ [1]-hardness Reduction (Section <ref>)
The reduction we have discussed in <ref> has one catch: While we only consider simple graphs as input, the graph H we construct contains multi-edges.
Although we remove already some multi-edges when incorporating the adapted fixation gadget from <ref>, some multi-edges are also introduced in <ref>.
In particular, consider the case where we have two edges u_α^iu_β^j, u_α^i'u_β^j'∈ E(G_C).
Then, for a γ∈ [k] ∖{α, β}, we would create twice the edge u_γ^1u_γ^n_γ + 1 but assign them to different pages.
In this section, we remove these multi-edges by using the additional (dummy) vertices that we obtain from the adapted fixation gadget.
However, note that this will not affect our intended equivalence from <ref> between a solution G to and a solution 𝒞 to .
In the following, we extend in <ref> the base layout of our reduction to accommodate the additional vertices for the adapted fixation gadget.
In <ref>, we describe how to distribute the multi-edges from <ref> across the new vertices.
Finally, we argue in <ref> that our reduction is still correct.
§.§.§ Adapting the Base Layout of our Reduction
We now extend the base layout from <ref> by additional vertices that in the end will be identified with their respective “partner” in the (adapted) fixation gadget.
For each edge e ∈ E(G_C) and color α∈ [k + 1], we create the vertices b_α^e and a_α^e.
To order these additional vertices on the spine, we assume that the edges E(G_C) are ordered, i.e., that we have E(G_C) = {e_1, …, e_M}.
We use this ordering to extend the spine order ≺_H as follows, where we assume u_0^n_0 + 1 = u_0^0 for ease of notation.
For all α∈ [k + 1] and i ∈ [M], we set u_α-1^n_α - 1 + 1≺ b_α^e_i≺ u_α^0 ≺ a_α^e_i≺ u_α^1.
Furthermore, for j ∈ [M - 1], we also set b_α^e_j + 1≺ b_α^e_j≺ a_α^e_j≺ a_α^e_j + 1.
Informally speaking, according to the above spine order, we place for each edge e ∈ E(G_C) and each α∈ [k + 1] one vertex before (b_α^e) and after (a_α^e) u_α^0.
The vertices before u_α^0 are ordered decreasingly by the index of their respective edge and the vertices after u_α^0 are ordered increasingly by the index of their respective edge.
To obtain the adapted linear order ≺_H we take the transitive closure of the above relation and the relations from <ref>.
Finally, we (re-)introduce the (adapted) fixation gadget on F = k vertices ℱ = 𝒳 by identifying the following vertices for α∈ [k + 1]. Note that we use i = α to distinguish between the vertices of the fixation gadget and hardness reduction.
As in <ref>, we identify v_i = u_α^0.
For the other vertices, we identify b_i^p = b_α^e_p and a_i^p = a_α^e_p, i.e., the vertices for the pth edge e_p ∈ E(G_C) are identified with the vertices for page p in the fixation gadget.
The edges of the fixation gadget are adapted accordingly.
§.§.§ Redistributing the Multi-Edges
While incorporating the adapted fixation gadget removes some of the multi-edges, we still have to deal with the multi-edges introduced in <ref>, where we encoded the adjacencies from G_C into H.
We do this now by redistributing the edges over the new vertices introduced in <ref> and visualize our process with <ref>.
Let e = v_α^iv_β^j∈ E(G_C) be an edge of G_C and assume α < β.
Recall that we created in <ref> a set of edges in H dedicated to e and assigned them with σ_H to the page p_e.
In particular, we created for every γ∈ [k] ∖{α, β} the edge u_γ^1u_γ^n_γ + 1.
This edge is replaced by the edges a_γ^eu_γ^1, a_γ^eb_γ + 1^e, and u_γ^n_γ + 1b_γ + 1^e, as in <ref>a.
Furthermore, we created the edges u_α^1u_α^i and u_α^i + 1u_α^n_α + 1.
Instead, we now create the edges a_α^eu_α^1 and a_α^eu_α^i, and u_α^i + 1b_α + 1^e and u_α^n_α + 1b_α + 1^e as shown in <ref>b.
Similarly, we also create the edges a_β^eu_β^1, a_β^eu_β^j, u_β^j + 1b_β + 1^e, and b_β + 1^eu_β^n_β + 1.
All of these edges are assigned to the page p_e.
Observe that for i ∈{1, n_α} or j ∈{1, n_β}, the above edges would become multi-edges (on the same page), which can easily be avoided.
Recall that we created in <ref> a tunnel on the page p_e by adding the edges u_α^iu_β^j + 1 and u_α^i + 1u_β^j, see also <ref>c.
We do not need to adapt these edges, as they can only result in multi-edges if G_C would contain them.
One can readily verify that we no longer introduce multi-edges while still ensuring that the page p_e is crossing free.
Finally, recall that with the original version of the fixation gadget, we had for every α∈ [k + 1] the edge from u_α^0 to its successor in ≺_H and the edge from the predecessor of u_α^0 to u_α^0, both assigned to the page p_d.
In particular, these edges corresponded for α∈ [k] to the edges u_α^0u_α^1 and u_α^n_α + 1u_α + 1^0.
However, these edges are no longer present in the adapted reduction, since no vertex of the (adapted) fixation gadget is identified with the vertices u_α^1 or u_α^n_α + 1.
As above edges facilitated the arguments that showed correctness of our approach, we re-introduce them in H as follows.
For every α∈ [k], we add the edges u_α^0u_α^1 and u_α^n_α + 1u_α + 1^0 and set σ(u_α^0u_α^1) = σ(u_α^n_α + 1u_α + 1^0) = p_d.
As the above edges span the respective edges u_α^0a_α^p_e_M and b_α + 1^p_e_Mu_α + 1^0, they do not introduce crossings on the page p_d.
§.§.§ Showing Correctness of the Modified Reduction
This completes the adaptions we need to make to our reduction and it remains to show that <ref> still holds.
Regarding the size of the created instance, it is sufficient to observe that we introduce 2(k + 1)M additional vertices (for the adapted fixation gadget, see also <ref>), and for each edge of G_C we introduce a constant number of additional edges to the already existing ones.
Thus, the size of the instance remains polynomial in the size of G_C and we still have κ = 3k + \binom{k}{2}.
So it remains to show the correctness of the reduction.
For that, we can, on the one hand, observe that the transformation of a solution 𝒞 to an instance of to a solution G to the created instance of as described for the “(⇒)-direction” in the proof of <ref> is unaffected by the changes we made.
One way to see this is that all the additional edges that we introduced run between vertices placed between u_α^0 and u_α^1 or between u_α^n_α + 1 and u_α + 1^0 for the corresponding α∈ [k + 1].
Furthermore, if we changed existing edges, then we moved their incident vertices from u_α^1 or u_α^n_α + 1 to a vertex in the above range.
However, we created the spine order ≺_G such that u_α^1 ≺ x_α≺ u_α^n_α + 1 holds.
Hence, the relative order ≺_G (and thus ≺_H) among two new vertices x_α and x_β with α, β∈ [k], or a new vertex x_α and an old vertex u_β^i with α, β∈ [k], α≠β, and i ∈{0, 1, n_β + 1} remains untouched.
Hence, G is crossing free and thus a solution to our instance of .
On the other hand, for the “(⇐)-direction”, we used the fact that the created instance of fulfills <ref> to construct the solution 𝒞.
Hence, if we can convince ourselves that the created instance still fulfills said properties, then the arguments we gave in <ref> will readily carry over.
Recall that <ref> is defined as follows.
*
To see that we still have this property, we can observe that we incorporated in <ref> the adapted fixation gadget on k vertices into our construction.
As we identify, for α∈ [k], v_i = u_α^0 and f_i = x_α for i = α, <ref> follows directly from <ref>.
Recall that we introduced at the end of <ref> for every α∈ [k] the edges u_α^0u_α^1 and u_α^n_α + 1u_α + 1^0 that are placed on the dummy page p_d.
We now use these edges to make the following observation.
From <ref>, we get that we have in any solution G and for every α∈ [k] that σ(x_αu_α^0) = σ(x_αu_α + 1^0) = p_d holds.
From <ref>, that still holds in our construction, we get u_α^0 ≺ x_α≺ u_α + 1^0 for every α∈ [k].
Using the above-mentioned edges on page p_d, we observe that we cannot have u_α^0 ≺ x_α≺u_α^1 or u_α^n_α + 1≺ x_α≺ u_α + 1^0, as this would introduce a crossing on the page p_d.
Thus, our construction not only fulfills <ref>, but, furthermore, <ref> still applies.
We will now argue that our construction also fulfills <ref>, which is defined as follows.
*
To see that <ref> still holds, we first apply <ref>.
This allows us to conclude that we have u_α^1 ≺ x_α≺ u_α^n_α + 1 and u_β^1 ≺ x_β≺ u_β^n_β + 1.
Then, by exchanging u_α^0 with a_α^e and u_α + 1^0 with b_α + 1^e in the proof of <ref>, we can exclude u_α^1 ≼ x_α≼ u_α^i and u_α^i + 1≼ x_α≼ u_α + 1^0.
Therefore, we derive that x_α must be placed in v_α^i and, analogously, x_β must be placed in v_β^j, i.e., our construction still fulfills <ref>.
We use this now to argue that the “(⇐)-direction” of our reduction is still correct.
Correctness of the “(⇐)-direction” in the Reduction.
We perform the arguments as in the “(⇐)-direction” of the proof of <ref>.
Most of the arguments are implied by <ref>, which are also fulfilled in our adapted reduction.
However, in the proof of <ref> we argued that the pre-requisites for <ref> are fulfilled.
In the following, we make this argument for our adapted construction.
Let us again assume that x_α and x_β are placed in v_α^i and v_β^j, respectively.
We again consider the edge x_αx_β∈ E(G) and the page p on which it is placed in the solution G.
By the very same arguments as in the proof of <ref>, we can exclude p = p_d.
Furthermore, as we have σ_H(a_α^eb_α + 1^e) = p_e and a_α^e ≺ u_α^1 ≺ x_α≺ u_α^n_α + 1≺ b_α + 1^e≺ x_β for an edge e = uv ∈ E(G_C) with u ∉V_α and v ∉ V_α, we get that p = p_e for an edge e ∈ E(G_C) ∩ (V_α× V_β) must hold.
This shows that all prerequisites for <ref> are fulfilled.
Hence, we obtain that the edge x_αx_β can only be placed in the page p_e that we created for the edge v_α^iv_β^j.
Hence, v_α^i and v_β^j are adjacent in G_C and the proof of the “(⇐)-direction” in <ref> readily carries over.
Combining all, we conclude that <ref> also holds for our modified construction that does not have multi-edges.
|
http://arxiv.org/abs/2409.02699v1 | 20240904133515 | CLDA: Collaborative Learning for Enhanced Unsupervised Domain Adaptation | [
"Minhee Cho",
"Hyesong Choi",
"Hayeon Jo",
"Dongbo Min"
] | cs.CV | [
"cs.CV"
] |
A Data Selection Approach for Enhancing Low Resource Machine Translation Using Cross-Lingual Sentence Representations
Nidhi Kowtal *
SCTR's Pune Institute of Computer Technology
Pune, India
kowtalnidhi@gmail.com
Tejas Deshpande *
SCTR's Pune Institute of Computer Technology
Pune, India
tejasdeshpande1112@gmail.com
Raviraj Joshi
Indian Institute of Technology Madras, India
L3Cube Labs, Pune
Pune, India
ravirajoshi@gmail.com
September 9, 2024
==============================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Unsupervised Domain Adaptation (UDA) endeavors to bridge the gap between a model trained on a labeled source domain and its deployment in an unlabeled target domain. However, current high-performance models demand significant resources, resulting in prohibitive deployment costs and highlighting the need for small yet effective models. For UDA of lightweight models, Knowledge Distillation (KD) in a Teacher-Student framework can be a common approach, but we find that domain shift in UDA leads to a significant increase in non-salient parameters in the teacher model, degrading model's generalization ability and transferring misleading information to the student model. Interestingly, we observed that this phenomenon occurs considerably less in the student model. Driven by this insight, we introduce Collaborative Learning, a method that updates the teacher's non-salient parameters using the student model and at the same time enhance the student's performance using the updated teacher model. Experiments across various tasks and datasets show consistent performance improvements for both student and teacher models. For example, in semantic segmentation, CLDA achieves an improvement of +0.7% mIoU for teacher and +1.4% mIoU for student compared to the baseline model in the GTA to Cityscapes. In the Synthia to Cityscapes, it achieves an improvement of +0.8% mIoU for teacher and +2.0% mIoU for student.
§ 1. INTRODUCTION
While the availability of extensive labeled datasets has driven remarkable advancements in various computer vision tasks, there exists a much larger quantity of unlabeled data in real-world scenarios. To accommodate a variety of real-world applications, it is crucial for deep neural networks (DNNs) to generalize to these unlabeled data. However, generalizing models trained on labeled data (source domain) to unlabeled data (target domain) is non-trivial and poses several challenges. Unsupervised domain adaptation (UDA) has been studied to resolve this issue, which transfers knowledge learned from the source domain to the target domain through adversarial training <cit.> or self-training <cit.>.
Although UDA methods have made remarkable progress in recent years, the best-performing approaches are mostly based on resource-intensive networks, such as DAFormer <cit.> and HRDA <cit.>. Deploying such models in resource-constrained environments is often infeasible. Training compact models in the same way using existing UDA approaches is not very effective either, leading to the necessity of developing compact yet effective models. One promising solution is to apply Knowledge Distillation (KD) <cit.> that leverages a high-capacity teacher model to train a compact student model. Fig. <ref> (a) shows the example of applying KD for training a lightweight student model in the UDA setup <cit.>. The student model is trained by leveraging knowledge from a fixed teacher model that possesses domain adaptation capabilities against the target domain by utilizing adversarial learning <cit.> to minimize the discrepancies between the source and target domains.
However, directly leveraging the fixed teacher model in the UDA problem exhibits inherent limitations due to the heterogeneous data distributions of the source and target domains. In other words, due to the gap between the data distributions of the source and target domains, some of the teacher's parameters learned on the source domain may not be useful or may even work adversely on the target domain <cit.>. This discrepancy generates non-salient parameters, limiting generalization ability <cit.> when used in the target domain. Consequently, the misleading information from the frozen teacher model can be transferred to the student model in Fig. <ref> (a). We define this as the problem of Domain Shift induced Non-salient parameters (DSN), which will be detailed in the following analysis section.
We tackle this challenging problem from a new perspective of updating the teacher model to alleviate the DSN problem, unlike KD approaches that use fixed teacher models in Fig. <ref> (a).
To this end, we first analyzed the occurrence of the DSN problem across layers in both the teacher and student models through Layer Saliency Rate (LSR), which indicates the contribution of each layer to the model performance when the layer is removed.
Our key observation from Fig. <ref> is that approximately half of the layers in the teacher model, marked as `Non-salient layer', suffer from the DSN problem, while less than 10% of the layers in the student model are affected. This indicates that the student model contributes more effectively to the target domain, and thus can effectively complement the DSN problem of the teacher model. Going one step further, we introduce collaborative learning, where the knowledge of the improved teacher model is used to train the student model through the distillation process, as depicted in Fig. <ref> (b).
A naive way to improve the teacher model would involve retraining the layers of the teacher model that exhibit DSN problem to resemble the student model. However, simultaneously training both the teacher and student models requires significant computational costs and large memory capacity, especially for heavy teacher models. Instead, as illustrated in Fig. <ref> (b), we attempt to enhance the performance through mutual information exchange without the need for backpropagation to the teacher model, facilitating knowledge transfer from teacher to student (T → S) and vice versa (S → T). To achieve this, it is necessary to carefully analyze the interaction that occurs when information is transferred in both directions.
Initially, we observed that when knowledge is transferred as ‘T → S’, certain parts of the student model are more influenced by the teacher compared to an independently trained student model. Based on this, it is reasonable to update the teacher model using the student model (S → T) in the regions where active interaction occurs. Consequently, we explored strategies to identify and leverage the appropriate relationship between the student and teacher models in these areas to enhance the teacher model's performance. Further details are discussed in Section 4.3.
Based on this comprehensive analysis, we introduce Collaborative Learning for UDA, termed CLDA, a novel approach that enables the teacher and student models to complement each other, simultaneously enhancing the performance of both models. To maximize the synergy between the two models, we transfer knowledge from the teacher model to the student model (T → S), and simultaneously identify the appropriate layers in the student model that correspond to the non-salient layers in the teacher model. Then, the non-salient layer of the teacher is updated based on the corresponding layer of the student (S → T). We applied CLDA to semantic segmentation and classification tasks, conducting experiments across various datasets and domains. Consistent performance improvements were observed, demonstrating the effectiveness of our approach. Notably, our approach achieved performance comparable to the baseline teacher model for the lightweight student model.
To summarize, our key contributions include the followings:
* We address the problem of Domain Shift Induced Non-salient parameters (DSN) in UDA and introduce a novel method to effectively resolve this issue.
* Through detailed analysis, we discover the student model can sufficiently complement the DSN problem that occurs in the teacher model.
* We propose a novel and effective strategy for UDA that enables the teacher and student models to complement each other, thereby simultaneously enhancing the performance of both, without requiring backpropagation in the teacher model.
* Experimental results on several datasets and models show consistent performance improvements in both the teacher and student models.
§ 2. RELATED WORK
§.§.§ Unsupervised Domain Adaptation
To effectively adapt a network to the target domain, several strategies have been proposed, most can be categorized into adversarial learning <cit.> and self-training <cit.>. Inspired by the success of Generative Adversarial Networks (GANs) <cit.>, adversarial learning methods aim to learn invariant representations that reduce the distance between source and target distributions at the image <cit.>, feature <cit.>, and output levels<cit.>. Recently, self-training has emerged as a promising alternative for domain adaptation. Self-training leverages pseudo labels <cit.> for unlabeled target data<cit.>. To mitigate the noise in pseudo labels caused by domain shift, approaches such as confidence thresholding <cit.>, prototypes <cit.>, and data augmentation <cit.> have been employed.
§.§.§ Knowledge distillation
Knowledge distillation (KD) aims to transfer the knowledge <cit.> acquired by a complex teacher model <cit.> to a smaller student model. Since <cit.> introduced the concept of knowledge distillation, where the teacher model’s dark knowledge is provided through temperature-scaled softmax outputs, various studies have been inspired to utilize more teacher information <cit.>. Some methods have been proposed to enhance interaction between the teacher and the student <cit.>. For instance, <cit.> proposed a novel student-friendly learning technique within the teacher network to facilitate knowledge distillation. However, these methods assume the teacher and student operate within the same domain. When this assumption fails, the teacher's performance degrades, transferring incorrect information to the student. Our research proposes a KD method that remains robust despite domain shifts.
§ 3. PRELIMINARIES
UDA aims to alleviate the performance degradation caused by a domain gap between a source domain dataset P = {(x_p^i, y_p^i)}_i=1^N_p and a target domain dataset Q = {(x_q^i)}_i=1^N_q, where N_p and N_q indicate the number of training images in the source and target domains. The model f comprises a feature extractor h_ϕ parameterized by ϕ and a head g_θ parameterized by θ, i.e., f_θ,ϕ = g_θ(h_ϕ(x)).
The training process employs a total loss ℒ=ℒ_sup+ℒ_da consisting of a supervised loss ℒ_sup using the source data P and a domain adaptation loss ℒ_da using the target data Q. The type of the supervised loss ℒ_sup varies depending on the task <cit.>. Similarly, the adaptation loss ℒ_da is defined according to the UDA strategy, such as adversarial learning <cit.> or self-training <cit.>. The overall training process is as follows:
min_θ,ϕ1/N_P∑_k=1^N_Pℒ_sup^k + 1/N_Q∑_k=1^N_Qℒ_da^k .
§ 4. ANALYSIS
To address the DSN problem in UDA, we investigate the characteristics of the teacher and student models <cit.>. Given that a Transformer-based method has recently outperformed previous CNN-based UDA methods across various benchmark datasets under the UDA setting, we assume the Transformer-based framework <cit.> throughout this paper. Key findings from our analysis are summarized as follows:
* Layer Saliency Analysis: By analyzing Layer Saliency Rate (LSR), we found that the occurrence of DSN at the layer level was significantly lower in the student model compared to the teacher model. This indicates that the layers of the student model contribute more substantially to the target domain.
* Layer Contribution: We discovered that the student model contributes approximately 2.5 times more to the target domain than the teacher model. Consequently, we conclude that the student model can effectively complement the teacher model in addressing the DSN problem.
* Analysis for Collaborative Learning: Knowledge transfer from the teacher to the student (T → S) significantly influences the attention module in the deeper layers. Also, when updating the teacher from the student (S → T), a channel similarity-based approach proves to be effective.
§.§ 4.1. Layer Saliency Analysis
To evaluate the occurrence of the DSN problem at the layer level within the model, we investigate the distribution of saliency across layers. For this purpose, we define the Layer Saliency Rate (LSR) to measure the saliency of each layer. Given a domain-adapted model, the LSR of the weights ϕ^(i) at layer i, with respect to the task accuracy R on the target domain, is defined as follows:
LSR(f,ϕ ^(i)) = | R(f(ϕ ))-R(f(ϕ -ϕ _i)) | .
The LSR value indicates the contribution of a layer to the model's performance when layer i is removed. A higher value means a higher contribution of the layer. For semantic segmentation, we designate layers with LSR values less than 0.1% as non-salient layers, and for classification, we designate layers with LSR values less than 0.3% as non-salient layers. Conversely, layers in the opposite case are considered salient. Fig. <ref> reveals that while the non-salient layers occupy a large portion of the teacher model, the student model exhibits the opposite result. In the layers of the student model, the proportion of salient layers is overwhelmingly high.
In the case of DAFormer <cit.>, these non-salient layers comprise more than half of the teacher model but are not found in the student model. This indicates that the proportion of layers contributing to the target domain is significantly higher in the student model.
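For concreteness, the following sketch illustrates one way the LSR defined above could be computed in PyTorch, by zeroing out a layer's parameters and re-evaluating target-domain accuracy; the helper evaluate_accuracy and the loader object are hypothetical placeholders, not part of the original implementation.

import copy
import torch

def layer_saliency_rate(model, layer_prefix, evaluate_accuracy, target_loader):
    # R(f(phi)): accuracy of the intact domain-adapted model on the target domain
    base_acc = evaluate_accuracy(model, target_loader)
    ablated = copy.deepcopy(model)
    with torch.no_grad():
        for name, param in ablated.named_parameters():
            if name.startswith(layer_prefix):  # "remove" layer i by zeroing its weights phi_i
                param.zero_()
    # R(f(phi - phi_i)): accuracy after removing layer i
    ablated_acc = evaluate_accuracy(ablated, target_loader)
    return abs(base_acc - ablated_acc)

# Layers with LSR below 0.1% (segmentation) or 0.3% (classification) are flagged as non-salient.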
§.§ 4.2. Layer Contribution
To explore the degree of contribution of the student model to the target domain relative to the teacher model, we employ a popular method for measuring similarity in neural networks: Centered Kernel Alignment (CKA) <cit.>. Given two representations X and Y of the same inputs, CKA(X,Y) ∈ [0,1] indicates how similar (close to 1) or dissimilar (close to 0) the two are. We measure the CKA similarity for every pair of modules in the layers, X ∈ℝ^n × p_1 and Y ∈ℝ^n × p_2, within the teacher and student models, where the modules with p_1 and p_2 neurons are evaluated on n examples in the target domain. Fig. <ref> shows the results as a heatmap. We observed that the lower half of the student model's layers functionally correspond to twice the number of the teacher model's layers. The remaining half of the student model's layers functionally align with 2.5 times the number of the teacher model's layers. These percentages indicate that the student's layers contribute significantly to the target domain, enough to resolve the DSN problems originating from the teacher's layers.
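As a reference, a minimal NumPy sketch of the widely used linear-CKA estimator is given below; the paper does not specify which CKA variant it uses, so this particular formulation is an assumption made only for illustration.

import numpy as np

def linear_cka(X, Y):
    # X: (n, p1), Y: (n, p2) activations of two modules on the same n target examples
    X = X - X.mean(axis=0, keepdims=True)   # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord='fro') ** 2   # ||Y^T X||_F^2
    norm_x = np.linalg.norm(X.T @ X, ord='fro')      # ||X^T X||_F
    norm_y = np.linalg.norm(Y.T @ Y, ord='fro')      # ||Y^T Y||_F
    return hsic / (norm_x * norm_y)                  # value in [0, 1]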
§.§ 4.3. Analysis for Collaborative Learning
To address the DSN problem in the teacher model, it is crucial to effectively leverage the layers of the student model, which are less affected by DSN problem. Understanding the information exchange and interaction between the two models is essential for this purpose. With this understanding, the teacher model can effectively incorporate the strengths of the student model, ultimately deriving the optimal solution to the DSN problem. Based on this understanding, we propose a novel Collaborative Learning approach that exploit the strengths of both models.
§.§.§ Teacher to Student
Initially, we investigate the changes when knowledge was transferred from the teacher model to the student model. Our hypothesis posited that when the teacher model exerts greater influence on certain modules of the student model, notable changes in the parameters of those modules are observed compared to a student model trained independently. To validate this, we measured the average per-pixel variation rate (PVR) across modules within the layer, such as attention module and mlp module, between the distilled student model f_ID and the independently trained student model f_K as follows:
PVR (f,ϕ^(m) ) = ∑_d = 0^D| ϕ_ID,d^m - ϕ_K,d^m|/D .
where ϕ^i,m denotes the weight for module m within layer i, D represents the dimensions of module m. In Fig. <ref>, we identified two key findings. First, the Attention module was found to be more affected than the MLP module.
Second, the deeper layers of the model exhibited more pronounced effects. These results suggest a need to design collaborative learning strategies particularly in the Attention modules and deeper layers, where the teacher's knowledge has substantial impact.
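A possible PyTorch realization of the PVR measure defined above is sketched next; it assumes the distilled and independently trained students share an identical architecture so that parameters align by name, and the module_key string (e.g., 'attn' or 'mlp') is an illustrative assumption.

import torch

@torch.no_grad()
def per_module_variation_rate(student_distilled, student_independent, module_key):
    p_indep = dict(student_independent.named_parameters())
    abs_diff, dim = 0.0, 0
    for name, p_id in student_distilled.named_parameters():
        if module_key in name:                                      # restrict to the module of interest
            abs_diff += (p_id - p_indep[name]).abs().sum().item()   # sum_d |phi_ID,d - phi_K,d|
            dim += p_id.numel()                                     # D
    return abs_diff / max(dim, 1)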
§.§.§ Student to Teacher
Based on the aforementioned observations, it is a reasonable choice to update the teacher model by leveraging the student model in areas where active interaction occurs. To determine the optimal layer in the student's attention module that corresponds to the teacher model, we carefully considered three key aspects: 1) token-wise relationships, 2) channel-wise relationships, and 3) a combination of both 1 and 2. We measure each relationship as the cosine similarity between the attention feature maps of the student's layers and that of a randomly selected non-salient layer of the teacher. As shown in Fig. <ref> (b), examining the token-wise relationships reveals that most layers exhibit low similarity and inconsistent similarity distributions. In contrast, the channel-wise relationships (Fig. <ref> (a)) show a consistent distribution.
Detailed experimental results can be found in Section 6.2.
These findings lead to the following design: in the attention modules of the deeper layers, it is crucial to consider the channel similarity between the student and teacher models when updating the teacher model from the student model (S → T).
§ 5. METHOD
§.§ 5.1 Motivation and Overview
Recent advancements in knowledge distillation have primarily utilized a frozen teacher model to transfer information to the student. However, our investigations reveal that this static approach is not appropriate for UDA due to the DSN problem in the teacher, as illustrated in Fig. <ref>. Consequently, this inadequacy diminishes the teacher model's generalization ability, which in turn increases the risk of spreading incorrect information to the student.
Intriguingly, our analysis indicates that the student has the capability to address these issues. This observation prompts us to propose a novel approach: collaborative learning between the teacher and the student. The proposed CLDA consists of the following stages.
Stage 1: To boost the synergy between the two models, knowledge distillation (KD) is first performed from the teacher model to the student model (T → S).
Stage 2: A layer mapping process identifies the student layers used to update the non-salient layers of the teacher model.
Stage 3: The non-salient layer of the teacher is updated based on the corresponding layer of the student (S → T).
The proposed CLDA framework is illustrated in Fig. <ref>.
§.§ 5.2 CLDA framework
§.§.§ Stage 1: Knowledge Distillation (T → S)
The training process consists of two stages. In the first stage, a baseline model of substantial size is trained following the same DA procedure as described in <cit.>. Subsequently, this model, equipped with domain adaptation capabilities, serves as the teacher model f_T in our framework. The distillation loss is computed in the output space using ℒ_distill:
ℒ_distill^(i) = - ∑_j=1^H × W∑_c=1^N_C q^ip̂_T^(i,j,c)log f_S(x_q^i)_(j,c) .
where H and W are height and width of an target image x_q, respectively, while N_C is the number of categories shared between the source and the target domain. Hard labels p̂_T are generated from the teacher's most confident class prediction.
p̂_T^(i,j,c) = [ c = argmax_c' f_T(x_q^(i))^(j,c')] .
where [ · ] denotes the Iverson bracket. As the hard-labels are potentially wrong, the loss is weighted by the quality estimate q <cit.>.
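A compact PyTorch sketch of this hard-label distillation loss follows for illustration; the tensor shapes and the per-image form of the quality weight q are assumptions, since the paper specifies the loss only at the formula level.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, quality):
    # student_logits, teacher_logits: (B, N_C, H, W); quality: (B,) per-image weight q
    with torch.no_grad():
        pseudo_labels = teacher_logits.argmax(dim=1)    # hat{p}_T: most confident class per pixel
    ce = F.cross_entropy(student_logits, pseudo_labels,
                         reduction='none')              # (B, H, W) per-pixel cross-entropy
    return (quality * ce.mean(dim=(1, 2))).mean()       # weight by the quality estimate q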
§.§.§ Stage 2: Layer Mapping
Within the teacher's blocks, non-salient layers are identified using Eq.(<ref>), and subsequently, 30% of these non-salient layers are randomly selected for collaborative learning. To effectively mitigate the non-salient layers of the teacher model, it is crucial to accurately identify the corresponding student layers. Based on our analysis, we evaluate the channel relationships between the teacher and student within an attention module. Therefore, we extract the attention feature map A_T,γ∈ℝ^B × N × C from the target image at a non-salient layer γ from the teacher model f_T, where B and N are the batch size and the number of tokens, respectively, while C is the number of channels. Similarly, we extract attention feature maps {A_S,i}_i=1^L_s∈ℝ^B × N × C from the same target image across all layers L_s within the block of the student model. We then calculate the cosine similarity score ϑ between the channels within the attention feature maps of the teacher and student models in training step η.
ϑ^η_i = ∑_j=1^C ( A_T,γ^η[:, :, j] · A_S,i^η[:, :, j] ) / ( |A_T,γ^η[:, :, j]| · |A_S,i^η[:, :, j]| ) .
Finally, the student's layer with the highest similarity is selected to update the teacher’s non-salient layer.
i^* = argmax_i ∑_η=T_2^T_3 ϑ^η_i .
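The channel-wise score ϑ and the selection of i^* could be computed as in the following PyTorch sketch; the accumulation interval [T_2, T_3] and the data structures used here are illustrative assumptions.

import torch
import torch.nn.functional as F

def channel_similarity(attn_teacher, attn_student):
    # attn_teacher, attn_student: (B, N, C) attention feature maps from the same target batch
    t = attn_teacher.flatten(0, 1)                   # (B*N, C)
    s = attn_student.flatten(0, 1)                   # (B*N, C)
    per_channel = F.cosine_similarity(t, s, dim=0)   # cosine similarity of each channel j
    return per_channel.sum()                         # theta_i^eta: sum over the C channels

def select_student_layer(accumulated_scores):
    # accumulated_scores: {layer index i: sum over steps eta in [T_2, T_3] of theta_i^eta}
    return max(accumulated_scores, key=accumulated_scores.get)   # i* = argmax_i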
§.§.§ Stage 3: Teacher Update (S → T)
By adopting an Exponentially Moving Average (EMA) approach, we can gradually update the non-salient parameters of the teacher model using the parameter of the student model, thereby avoiding additional backpropagation in the teacher model. After each training step η, the teacher
model f_T is updated with the exponentially moving average of the weights of the student f_S:
ϕ_T,γ^η←αϕ_T,γ^η + (1 - α) ϕ_S,i^*^η .
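This EMA update can be realized without any backpropagation through the teacher, e.g., as in the following PyTorch sketch; it assumes the mapped teacher and student layers have matching parameter shapes, and the smoothing factor value is an illustrative assumption.

import torch

@torch.no_grad()
def ema_update_layer(teacher_layer, student_layer, alpha=0.999):
    # phi_T,gamma <- alpha * phi_T,gamma + (1 - alpha) * phi_S,i*
    for p_t, p_s in zip(teacher_layer.parameters(), student_layer.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)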
The total loss of the proposed CLDA is formulated as
L_CLDA = min_θ,ϕ1/N_P∑_k=1^N_Pℒ^k_sup + 1/N_Q∑_k=1^N_Qℒ^k_distill .
The overall algorithm is summarized in Algorithm 1.
§ 6. EXPERIMENTS
In this chapter, we demonstrate experimental results under different settings in order to verify the effectiveness of our proposed method.
We compare our method with other existing baselines on different benchmarks in Sec. 6.1. We also analyze the contribution from each of our vital model components in ablation studies in Sec. 6.2. Detailed information regarding the experiments and classification performance can be found in the supplementary material.
§.§ 6.1. CLDA for semantic segmentation
First, we integrated CLDA with various Unsupervised Domain Adaptation (UDA) methods. Tab. <ref> and <ref> demonstrate that CLDA consistently enhances performance across different UDA methods and datasets, confirming improvements in both the teacher and student models. Specifically, when employing DAFormer on the GTA-to-Cityscapes dataset, the teacher model exhibited a performance improvement of 0.7%, while the student model showed an enhancement of 1.4%. In addition, using HRDA, the teacher and student models achieved performance improvements of 0.8% and 0.7%, respectively. Similarly, on the Synthia-to-Cityscapes dataset, CLDA improved performance by 0.8% for the teacher and 2.0% for the student with DAFormer, and by 0.3% for the teacher and 1.5% for the student with HRDA. These findings suggest that CLDA can be flexibly combined with various Transformer-based models, notably enhancing the teacher model's performance without requiring backpropagation through the teacher, thereby effectively transmitting informative knowledge to the student model and maximizing the synergy between the two models.
§.§ 6.2. Ablation study of CLDA
§.§.§ Component Ablation:
We conducted a component ablation study to gain further insights into CLDA by incrementally removing its components and evaluating performance on the GTA→Cityscapes benchmark using DAFormer [MiT-b3]. As shown in Tab. <ref>, the complete CLDA configuration achieves a mIoU of 69.5 (row 2), which is +1.4% mIoU higher than DAFormer (row 1).
When only distillation from the teacher to the student (T → S) was applied, the performance showed a 0.8% improvement over DAFormer. However, when updating the teacher layers from the student (S → T) without a mapping process, the mismatch between student and teacher layers led to a performance degradation of 0.6% compared to the distillation-only (T → S) approach. On the other hand, when layer mapping was performed before updating, an improvement of 0.6% was observed. These findings highlight the importance of accurately matching the student's layers to the teacher's non-salient layers during CLDA training.
§.§.§ Mapping Ablation:
In the previous chapter, we analyzed the relationship between the student and teacher models from three perspectives: 1) token-wise, 2) channel-wise, and 3) a combination of both. To gain deeper insights, we conducted experiments on GTA to Cityscapes with DAFormer [MiT-b3], as reported in Tab. <ref>. When measuring the similarity of the attention feature maps between the two models at the token level (row 1), we observed a performance decrease of 0.5% compared to the scenario where knowledge was transferred from teacher to student. This decline is attributed to the difficulty of identifying the appropriate student layer for updating, as the token-wise approach failed to capture the correct relational context. Conversely, the channel-wise approach (row 2) led to the update of suitable layers, resulting in a performance improvement of 1.4% over DAFormer. When considering both token-wise and channel-wise relationships by flattening the attention feature maps, the performance improved compared to the token-wise approach but did not surpass the results achieved with the channel-wise approach, indicating difficulty in identifying the optimal layers for update.
§ 7. CONCLUSION
We explore the underexamined yet practical challenge of developing compact and efficient models in DA. We analyze the DSN issue caused by a fixed teacher model in conventional KD within DA. Based on this analysis, we propose CLDA, a practical approach that addresses the DSN problem by leveraging the complementary effects between teacher and student models. A notable strength of CLDA is its ability to resolve the DSN issue without additional backpropagation for the teacher model. Our empirical results demonstrate significant performance gains for both teacher and student models across various settings. Future work could include a unified theoretical analysis of the DSN problem.
Limitations. CLDA demonstrates effectiveness across various environments, but its reliance on the teacher model may introduce constraints in some scenarios. This suggests potential for integrating additional techniques to enhance generalization without compromising the model’s efficiency.
|
http://arxiv.org/abs/2409.03568v1 | 20240905142202 | Enabling Practical and Privacy-Preserving Image Processing | [
"Chao Wang",
"Shubing Yang",
"Xiaoyan Sun",
"Jun Dai",
"Dongfang Zhao"
] | cs.CR | [
"cs.CR",
"C.2.0; K.6.5"
] |
Enabling Practical and Privacy-Preserving Image Processing
Chao Wang
cwang17@wpi.edu
Worcester Polytechnic Institute
Shubing Yang
sueyoung@uw.edu
University of Washington
Xiaoyan Sun
xsun7@wpi.edu
Worcester Polytechnic Institute
Jun Dai
jdai@wpi.edu
Worcester Polytechnic Institute
Dongfang Zhao
dzhao@uw.edu
University of Washington
=============================================================================================================================================================================================================================================================================================
empty
[1]Chao Wang and Shubing Yang made equal contributions to this work. Corresponding authors: Drs. Dongfang Zhao and Jun Dai.
§.§ Abstract
Fully Homomorphic Encryption (FHE) enables computations on encrypted data, preserving confidentiality without the need for decryption. However, FHE is often hindered by significant performance overhead, particularly for high-precision and complex data like images. Due to serious efficiency issues, traditional FHE methods often encrypt images by monolithic data blocks (such as pixel rows), instead of pixels. However, this strategy compromises the advantages of homomorphic operations and disables pixel-level image processing. In this study, we address these challenges by proposing and implementing a pixel-level homomorphic encryption approach, , based on the CKKS scheme.
To enhance the computational efficiency of CKKS in image processing, we propose three novel caching mechanisms to pre-encrypt radix values or frequently occurring pixel values, substantially reducing redundant encryption operations. Extensive experiments demonstrate that our approach achieves up to a 19-fold improvement in encryption speed
compared to the original CKKS, while maintaining high image quality. Additionally, real-world image applications such as mean filtering, brightness enhancement, image matching and watermarking are tested based on FHE, showcasing up to a 91.53% speed improvement. We also proved that our method is IND-CPA (Indistinguishability under Chosen Plaintext Attack) secure, providing strong encryption security. These results underscore the practicality and efficiency of , marking a significant advancement in privacy-preserving image processing at scale.
§ INTRODUCTION
Fully homomorphic encryption (FHE), a revolutionary concept pioneered by Gentry <cit.>, has emerged as a focal point in privacy-preserving computational research. Its unprecedented ability to perform arbitrary computations on encrypted data without decryption has catalyzed a paradigm shift in secure data processing, particularly in sensitive domains such as healthcare and finance. Various FHE schemes have been proposed, including BFV <cit.>, BGV <cit.> and CKKS <cit.>, each designed to address specific computational needs and applications. However, despite its transformative potential, FHE continues to face challenges in computational efficiency and practical implementation when applied to real-world scenarios, especially in the realms of cloud computing and medical imaging <cit.>, where data integrity and confidentiality are paramount <cit.>.
While extant research on homomorphic encryption has predominantly concentrated on optimizing underlying mathematical constructs and enhancing the efficiency of fundamental arithmetic operations on ciphertexts <cit.>, the specific challenges inherent in applying FHE to image data have received comparatively little attention.
Due to the substantial computational overhead and low efficiency, current implementations for FHE-based image encryption typically treat images as monolithic data blocks <cit.>, encrypting them without considering the images' intrinsic structure and properties. An example is to encrypt the image with CKKS by pixel rows, instead of individual pixels, to reduce the number of encryption operations and thus reduce total time needed for encryption. However, this approach compromises the advantages of homomorphic operations and greatly impedes the practical application of FHE in image-related tasks <cit.>.
In the domain of image processing, encryption by pixel is often required to offer unparalleled flexibility and granularity. Despite its higher computational demands, pixel-wise encryption enables a diverse range of sophisticated homomorphic operations directly on encrypted data <cit.>. This method excels in facilitating advanced image processing techniques such as mean filtering, brightness adjustment, and watermarking, all while maintaining data privacy <cit.>. The pixel-level granularity not only enhances privacy protection but also provides precise control over each image element, making it ideal for applications dealing with sensitive visual data or requiring intricate manipulation of encrypted images.
While encryption by row or even by image offers certain performance and efficiency advantages, particularly in processing speed and resource utilization, encryption by pixel presents a more versatile foundation for advanced homomorphic image processing. Although it may initially seem less efficient due to the increased number of encryption operations, the pixel-wise approach opens up a wealth of possibilities for complex algorithms and fine-grained data manipulation.
In response to these limitations, we propose a novel pixel-level homomorphic encryption approach, iCHEETAH (image-focused Caching-optimized fully Homomorphic Encryption for Efficient Transformation of Assets with High resolutions) based on the CKKS framework, which is renowned for its support of approximate arithmetic <cit.>. This granular approach facilitates fine-grained pixel-level control over encrypted data and enables the direct implementation of various pixel-level image processing operations on ciphertexts.
The reason we choose CKKS as the base scheme is that CKKS is the only FHE scheme supporting homomorphic operations on floating-point numbers, which are commonly used in image processing (e.g., for intermediate results).
To combat the efficiency issue of pixel-level encryption, we introduce three caching mechanisms, radix-based caching, scanning-based caching, and full caching, to pre-compute and store encrypted radix values or frequently occurring pixel values. This approach achieves a remarkable acceleration in the encryption process.
Particularly, the full caching mechanism results in a more than 19-fold speed improvement compared to original CKKS methods.
This substantial performance boost makes large-scale image encryption and processing both feasible and highly efficient, underscoring the practical potential of FHE-based privacy-preserving techniques.
Furthermore, to showcase the versatility of our pixel-level homomorphic encryption approach, we implement a comprehensive suite of fundamental image processing operations, including image brightening, contrast enhancement, and denoising, directly on encrypted data. By leveraging the homomorphic properties of the CKKS scheme, we execute these operations without compromising the encryption of the images, thereby ensuring the utmost privacy and security of sensitive visual information. Our experimental results demonstrate the efficacy of our approach and highlight its potential to enable privacy-preserving image processing across various domains.
Threat Model.
Our threat model involves a secure system for storing and processing confidential images (such as medical ones), utilizing pixel-level homomorphic encryption.
We assume that users (such as medical institutions) need to upload sensitive images to the central storage (such as the cloud or central servers) for processing and sharing. The primary assets we aim to protect are the images and the associated encryption keys.
The adversary in our model is assumed to have potential access to the central storage, either as an external attacker or a curious but honest insider within the organization. We assume the storage provider is “honest but curious,” correctly executing operations but potentially attempting to learn about the data it processes <cit.>. The users and institutions involved are considered trusted entities with secure local environments for encryption and decryption processes.
Additionally, our threat model explicitly assumes that while attackers may observe the encrypted data (ciphertext) within the central storage, they are unable to compromise or gain access to the users' encryption keys required to decrypt this data. As a result, even if the attacker has full visibility of the ciphertext, the lack of access to the decryption keys ensures that the data remains secure and unreadable.
Figure <ref> represents our threat model. It compares two scenarios: non-homomorphic encryption and homomorphic encryption using the CKKS scheme. In the non-homomorphic scenario, images are encrypted before upload but must be decrypted for processing, creating a vulnerability window where an attacker could potentially access sensitive patient data. Our proposed homomorphic encryption scenario uses CKKS to encrypt images before upload, allowing all processing to occur on encrypted data without decryption in the central storage <cit.>. This eliminates the vulnerability window present in the non-homomorphic approach but introduces a trade-off: the initial CKKS encryption process is significantly slower than non-homomorphic methods. The primary security goal is to maintain the confidentiality of images throughout their lifecycle in the central storage, enabling secure processing without exposing unencrypted data.
Our approach guards against potential attacks such as data interception during upload or download and unauthorized access to decrypted images (in the non-homomorphic scenario). The use of CKKS homomorphic encryption mitigates many of these risks by ensuring data remains encrypted during processing. However, the performance impact of CKKS encryption creates a trade-off between security and efficiency that must be carefully balanced. Our work focuses on enhancing the encryption efficiency for pixel-level CKKS-based FHE to make it more practical for real-world image applications, without compromising security.
Contributions. The key insights and contributions of this paper are summarized as follows.
* First, we propose a novel pixel-level homomorphic encryption approach that adapts CKKS to operate on individual pixels, enabling fine-grained control over encrypted image data while preserving homomorphic properties (Section <ref>). To dramatically increase processing speed, we introduce three caching mechanisms – radix-based, scanning-based, and full caching – that significantly boost the efficiency of CKKS-based encryption, making FHE practical for fast and secure image processing (Section <ref>).
* Second, we prove that our method is IND-CPA (Indistinguishability under Chosen Plaintext Attack) secure, providing strong encryption security. We include a detailed analysis and proof to confirm this, illustrating that our approach combines both high efficiency and robust protection, ensuring it is both secure and practical (Section <ref>).
* Third, we demonstrate the versatility and practicality of our approach by implementing a range of fundamental image processing operations directly on encrypted data, such as mean filtering, brightness enhancement, image matching, and watermarking. Our experimental results validate the effectiveness of our technique and highlight its potential for enabling privacy-preserving image processing across various domains, including healthcare and finance (Section <ref>). We also utilize hundreds of real and complex images from the USC-SIPI <cit.> database to demonstrate the applicability of our approach to a wide range of images (Section <ref>).
§ BACKGROUND
In this section, we briefly review FHE and CKKS, to facilitate a comprehensive understanding.
§.§ Fully Homomorphic Encryption (FHE)
Fully Homomorphic Encryption (FHE) stands as a groundbreaking cryptographic method, enabling computations on encrypted data without the need for prior decryption, as initially proposed by Rivest in 1978 <cit.>. Gentry <cit.> introduced the first viable framework for fully homomorphic encryption using lattice-based cryptography, a type of cryptographic approach that relies on the hardness of mathematical problems related to lattices in high-dimensional spaces. A lattice is a grid-like structure in mathematics. Gentry's method allows for both addition and multiplication operations on encrypted data, enabling the construction of circuits (such as circuits composed of gates) to perform arbitrary computations <cit.>. Gentry's approach involves several stages where noise is initially introduced and then carefully managed <cit.>. The noise refers to a small amount of random data added to the ciphertext to ensure security. It is introduced as part of the encryption process to make it difficult for an attacker to recover the original plaintext from the ciphertext <cit.>. This ensures that an unlimited number of additions and multiplications can be performed without excessively amplifying the noise. C. Gentry, A. Sahai, and B. Waters (GSW) <cit.> in 2013 introduced a new method for constructing FHE schemes that eliminates the costly “relinearization” step in homomorphic multiplications.
Noise management is critical in FHE. The noise level η in a ciphertext c is controlled to remain manageable after a series of operations, often using techniques like bootstrapping, which reduces accumulated noise and enables further computations by refreshing the ciphertext through homomorphic decryption and re-encryption, ensuring the ciphertext's integrity remains uncompromised.
§.§ CKKS
CKKS (Cheon-Kim-Kim-Song) was originally introduced by Cheon, Kim, Kim and Song in 2016 <cit.>. It leverages the RLWE (Ring Learning with Errors) problem, a foundational problem in lattice-based cryptography that involves adding small random errors to polynomial equations, making it difficult for adversaries to solve and ensuring security while supporting efficient homomorphic operations on encrypted data. Table <ref> shows key parameters of CKKS we use in this paper <cit.>.
The CKKS scheme supports arithmetic operations over ℂ^N/2. Here, ℂ^N/2 represents the space of complex vectors of length N/2, where N is a power of 2.
The plaintext space and ciphertext space are defined over the same domain, as given by the formula ℤ_Q[X]/(X^N + 1),
where ℤ_Q is the ring of integers modulo Q and X is the polynomial variable.
The batch encoding of this scheme maps an array of complex numbers to a polynomial with the property ℂ^N/2↔ℤ_Q[X]/(X^N + 1),
where ↔ denotes the encoding and decoding processes between the two sides.
The property of the encoding and decoding is given by decode(encode(m_1) ⊗encode(m_2)) ≈ m_1 ⊙ m_2,
where ⊗ represents a component-wise product, and ⊙ represents a nega-cyclic convolution.
In CKKS, N represents the ring dimension, and Q is the ciphertext modulus. Increasing N and Q enhances security but also increases computational complexity. A larger N boosts security by expanding key size and encrypted data complexity. However, it also increases encryption/decryption time and storage needs. Additionally, a larger N improves noise tolerance, supporting more homomorphic operations. Conversely, a larger Q enhances precision and noise management but raises computational demands. Thus, choosing N balances security and performance, while selecting Q manages precision and noise tolerance.
§.§.§ RLWE
RLWE (Ring Learning with Errors) <cit.>
is defined over a polynomial ring R = ℤ[X]/f(X),
where f(X) is typically a cyclotomic polynomial, and q ∈ℤ is a positive integer that serves as the modulus for the polynomial coefficients.
Let R_q = R/qR be the quotient ring.
Given a secret element s ∈ R_q and access to a set of noisy ring products, the RLWE problem can be described as (a_i, b_i = a_i · s + e_i) ∈ R_q × R_q,
where a_i represents uniformly random elements in R_q, b_i is the polynomial generated in the encryption process, and e_i denotes small error terms sampled from a discrete Gaussian distribution. The RLWE problem is to distinguish the pairs (a_i, b_i) from uniformly random pairs in R_q × R_q <cit.>.
§.§.§ CKKS Basics
RLWE provides a strong foundation for lattice-based cryptography, enabling secure and efficient encryption schemes. CKKS builds on this by extending these capabilities to homomorphic encryption, allowing for approximate arithmetic on encrypted data.
Key generation in CKKS involves the following steps:
* Select a secret key s ∈ R_Q, typically sampled from a small discrete distribution.
* Generate a public key (a, b = a · s + e), where a is uniformly random in R_Q and e is sampled from a discrete Gaussian distribution. The public key is (a, b) ∈ R_Q × R_Q.
* Generate an evaluation key required for homomorphic operations.
To encrypt a plaintext m ∈ R_Q, the CKKS encryption process is as follows:
* Encode the plaintext message m into a polynomial m(X) ∈ R_Q, where R_Q = ℤ_Q[X]/(X^N + 1) is the cyclotomic polynomial quotient ring.
* Choose a random polynomial r ∈ R_Q and compute the ciphertext (ct_0, ct_1) as ct_0 = r · b + m + e_0 and ct_1 = r · a + e_1,
where e_0, e_1 are small error terms.
To decrypt a ciphertext (ct_0, ct_1), the decryption algorithm computes m' = ct_0 - s · ct_1 = m + e',
where e' is the combined error term. If the error is sufficiently small, the original message m can be recovered by rounding. <cit.>
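For completeness, the following short derivation sketches the standard RLWE correctness argument under the key and encryption equations above (b = a · s + e, ct_0 = r · b + m + e_0, ct_1 = r · a + e_1); it shows why decryption recovers m up to a small error term:

ct_0 - s · ct_1 = (r · b + m + e_0) - s · (r · a + e_1)
                = r · (a · s + e) + m + e_0 - r · a · s - s · e_1
                = m + (r · e + e_0 - s · e_1) = m + e' .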
§.§.§ Operations of CKKS
The CKKS scheme allows for approximate arithmetic on encrypted data, making it well-suited for applications in machine learning and signal processing <cit.>. In CKKS, plaintexts are encoded as complex vectors, and ciphertexts are structured to support homomorphic addition and multiplication, albeit with some approximation error. There are two basic homomorphic operations in CKKS: homomorphic addition (⊕) and homomorphic multiplication (⊗).
Homomorphic addition in CKKS is straightforward. Given two ciphertexts ct_1 and ct_2 that encrypt plaintexts m_1 and m_2, respectively, their sum is computed as ct_add = ct_1 ⊕ ct_2.
This operation results in a new ciphertext ct_add that encrypts the sum of the plaintexts, i.e., m_1 + m_2. The addition of ciphertexts does not introduce additional noise beyond what is present in the original ciphertexts, preserving the integrity of the computation.
Homomorphic multiplication is more complex due to the need to handle the growth of noise and manage scaling factors. For two ciphertexts ct_1 and ct_2, the product is computed as ct_mul = ct_1 ⊗ ct_2.
The resulting ciphertext ct_mul encrypts the product of the plaintexts m_1 × m_2. However, multiplication increases both the degree and noise in the ciphertext, making it necessary to apply a process known as relinearization.
Relinearization reduces the degree of the resulting ciphertext back to its original form, which is crucial for maintaining efficiency in further operations. This is done by using an evaluation key evk generated during the key generation phase ct_rel = Relin(ct_mul, evk).
Here, ct_rel is the relinearized ciphertext, which now has a reduced degree, making it more manageable for subsequent operations.
After multiplication, the ciphertext may require rescaling to maintain the balance between the ciphertext and the plaintext. Rescaling involves dividing the ciphertext by a scaling factor Δ to reduce the noise: ct_rescaled = Rescale(ct_rel, Δ).
This operation helps in maintaining the correctness of the plaintext after decryption and prevents the noise from overwhelming the signal.
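As an illustration of these homomorphic operations in practice, the sketch below uses the TenSEAL Python front end to CKKS; the library choice, parameter values, and API calls are our assumptions for demonstration and are not the implementation described in this paper.

import tenseal as ts

# CKKS context: ring dimension N = 8192, modulus chain, and global scale Delta = 2^40
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40

ct1 = ts.ckks_vector(ctx, [1.5, 2.0, 3.0])    # Enc(m1)
ct2 = ts.ckks_vector(ctx, [0.5, 0.25, 0.1])   # Enc(m2)

ct_add = ct1 + ct2    # homomorphic addition
ct_mul = ct1 * ct2    # homomorphic multiplication (relinearization and rescaling handled internally)

print(ct_add.decrypt())   # approximately [2.0, 2.25, 3.1]
print(ct_mul.decrypt())   # approximately [0.75, 0.5, 0.3]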
§ OUR APPROACH
In the following sections, we will provide a detailed overview of our approach (Section <ref>), the caching mechanisms (Section <ref>), how to add randomness (Section <ref>), and our approach's compliance to IND-CPA security (Section <ref>).
§.§ Overview
Our approach is designed to encrypt image data while preserving the homomorphic properties of the ciphertext, thereby enabling ciphertext-based image processing.
To accomplish this, implementing pixel-level FHE for image encryption is essential; however, it entails a substantial time overhead. Based on the CKKS scheme, our approach achieves pixel-level image encryption and addresses the incurred time overhead through the strategic use of precomputation and caching. The approach is also structured to ensure that the encrypted data is resistant to chosen plaintext attacks (CPA) while maintaining computational efficiency.
Figure <ref> shows our approach overview. Adapted from the CKKS scheme, this approach begins with encoding and encrypting plaintext values, which are then cached to be used in subsequent operations. The plaintext values can be encrypted and cached in three ways: Radix-based caching that encrypts radix values, scanning-based caching that scans pixel values contained in an image and then encrypts these values, and full caching that encrypts all possible pixel values.
Based on the usage scenario, one of the three caching strategies can be selected to produce the cached ciphertexts.
Once the ciphertexts for radices or pixel values are cached, theoretically the image can be encrypted by directly using the cached values to encrypt each pixel. However, this will suffer from CPA because identical pixel values result in identical ciphertexts due to using cached values. Adversaries can reveal the plaintexts by encrypting chosen plaintexts and comparing the ciphertexts, which is known as CPA. Therefore, our approach also needs to add randomness to the ciphertext of each pixel to make sure the same plaintexts will generate different ciphertexts. We impose randomness through two different homomorphic operation strategies: 1) For radix-based caching, we perform extra addition and subtraction operations on the cached ciphertext of the original pixel, such as adding the ciphertext of 32 and then subtracting the ciphertext of 16 twice.
2) For scanning-based and full caching, we add cached ciphertexts of zeros to the cached ciphertext of the original pixel. The ciphertext of zero can be different every time when zero is encrypted, and thus brings in randomness.
Both strategies do not change the decrypted value for the original pixel, guaranteeing that the encrypted image can be accurately decrypted and the integrity of the original data is preserved. More details about randomness is in Section <ref>.
§.§ Radix-based Caching, Scanning-based Caching, and Full Caching
To address the performance challenges of pixel-level FHE encryption, which is crucial for enabling ciphertext-based image processing, a strategic approach is needed to accelerate FHE operations. Section <ref> reviews existing work on HE acceleration, but few of these methods are directly applicable to the goals of this paper, or fully consider the unique characteristics of image data. In contrast, the design of iCHEETAH is informed by our insights into image data, with the underlying rationale detailed below:
Insight 1: Pixel-level image data consists exclusively of integer values. Consequently, pixel-level FHE image encryption is inherently encrypting a matrix of integers, with each operation carried out at the level of individual pixel values. To expedite the homomorphic encryption of various integer values, it is effective to cache the encryption of specific unit values and reconstruct the ciphertext for a target value by applying homomorphic additions to the cached ciphertexts of the corresponding unit values that sum to the target value. A natural choice of unit values is the radices. This strategy, rooted in positional (radix) number representation, is consistent with previous works, such as <cit.> and <cit.>, despite their differing objectives and scenarios. This caching strategy based on CKKS is formalized as follows.
Radix-based Caching. Radix-based caching leverages base-r representation to decompose each value in the dataset into a sum of powers of r. For any value p in the dataset, it can be represented as a sum of distinct powers of r, where each component is then encrypted using the CKKS scheme:
c_p = ∑_i=0^nCKKS.Enc(r^k_i)
By precomputing and caching the ciphertexts for these powers of r, computational overhead during the encryption process is significantly reduced, as the ciphertext for any pixel value can be assembled from cached components with a few homomorphic additions rather than fresh encryptions. A common choice for the radix is 2, i.e., r=2, where the CKKS ciphertexts of the powers 2^0, 2^1, 2^2, … (i.e., 1, 2, 4, 8, 16, …) are precomputed and cached.
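A minimal Python sketch of radix-based caching with r = 2 is shown below; ckks_encrypt and the overloaded + operator stand in for the underlying CKKS encryption and homomorphic-addition calls and are placeholders, not a concrete library API.

# Cache the ciphertexts of the powers 2^0 .. 2^7 once; every value in [0, 255] is then
# reconstructed as a homomorphic sum of cached components.
radix_cache = {k: ckks_encrypt(2 ** k) for k in range(8)}
zero_ct = ckks_encrypt(0)

def encrypt_pixel_radix(p):
    ct = zero_ct
    for k in range(8):
        if (p >> k) & 1:              # bit k of p is set, so Enc(2^k) contributes to the sum
            ct = ct + radix_cache[k]  # homomorphic addition of cached ciphertexts
    return ct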
Insight 2: Pixel values frequently recur in image data. Images often contain frequently occurring values within their pixel datasets. By identifying and caching the ciphertexts of these frequently occurring pixel values, we can further optimize the homomorphic encryption process. Specifically, because many images have large regions with uniform or similar pixel values, this strategy can significantly enhance performance in image encryption. Additionally, it is adaptable to handle a wide range of values beyond image data. We refer to this strategy as scanning-based caching.
Scanning-based Caching. Scanning-based caching conducts a pre-processing scan of the dataset, and identifies the most common values, p_1, p_2, …, p_k. For each value p_j, we compute and store the corresponding ciphertext:
c_p_j = CKKS.Enc(p_j)
Scanning-based Caching can be used in many circumstances. When encrypting the image, these precomputed ciphertexts are retrieved from the cache, thereby avoiding the need to repeatedly encrypt the same pixel values. This caching strategy reduces the computational burden and enhances the overall efficiency of the encryption process, particularly in images with high redundancy or repeated patterns.
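The pre-processing scan can be sketched as follows; the choice of top_k and the placeholder ckks_encrypt are assumptions made only for illustration.

import numpy as np

def build_scanning_cache(image, top_k=32):
    # identify the most frequently occurring pixel values p_1, ..., p_k in the image
    values, counts = np.unique(image, return_counts=True)
    frequent = values[np.argsort(counts)[::-1][:top_k]]
    return {int(p): ckks_encrypt(int(p)) for p in frequent}   # p_j -> CKKS.Enc(p_j)

def encrypt_pixel_scanning(p, cache):
    # cache hit: reuse the precomputed ciphertext; miss: fall back to a fresh encryption
    return cache[p] if p in cache else ckks_encrypt(p)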
Insight 3: Pixel values in an image are drawn from a finite set of numbers. An image is a visual representation made up of a grid of pixels, each corresponding to a specific point. In digital imaging, we assume the use of the RGB color scheme, where each pixel's color is defined by a combination of Red, Green, and Blue. Each color channel is represented by a pixel value ranging from 0 to 255, with 0 indicating no intensity and 255 indicating full intensity. We therefore pursue offline caching of all such values, referring to this strategy as full caching. Other schemes such as HSV (Hue, Saturation, Value) and HSL (Hue, Saturation, Lightness) are not focus of this paper and can be the future work.
Full Caching. Full caching involves precomputing and caching the ciphertexts for all possible pixel values p in the range [0, 255], ensuring that every potential pixel value has a corresponding cached ciphertext c_p:
c_p = CKKS.Enc(p)
This approach is well suited to image processing and ensures that each pixel value in the image can be encrypted efficiently without on-the-fly computation. While scanning-based caching can handle cases where the set of plaintext values is not known in advance, full caching is applicable when all possible plaintext values are already known.
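Full caching then reduces to building a 256-entry lookup table once, offline, and serving every subsequent pixel encryption from it. The sketch below uses the same placeholder enc callable as above.

```python
def build_full_cache(enc):
    """Pre-encrypt every possible 8-bit pixel value once (offline)."""
    return {p: enc(p) for p in range(256)}

def encrypt_image(pixel_rows, cache):
    """Encrypt an image as O(1) lookups into the full cache."""
    return [[cache[p] for p in row] for row in pixel_rows]
```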
§.§ Adding Randomness
When ciphertexts are exposed, they become vulnerable to chosen-plaintext attacks (CPA). Thus, it is essential to introduce randomness each time a cached ciphertext is used.
For radix-based caching, we propose iterating through each precomputed ciphertext in radixes[i] and randomly deciding whether to add it to the main ciphertext Ctxt[i], where i is the iteration index. If this addition occurs, the process then subtracts radixes[i-1] from Ctxt[i] a total of r times, which introduces a controlled form of randomness while leaving the encrypted value unchanged:
Ctxt[i] ⊕radixes[i] ⊖ r ×radixes[i-1]
where ⊕ represents homomorphic addition between two ciphertexts and ⊖ represents homomorphic subtraction between two ciphertexts. For example, if the radix is r=2 and i=5, the randomization of Ctxt[5] can be done through
Ctxt[5] ⊕ 2^5 ⊖ 2 × 2^4
For scanning-based caching and full caching, we implement a thread pool-like mechanism, called “randomness pool”, by creating a pool consisting exclusively of precomputed ciphertexts of zeros. This pool ensures a continuous and efficient supply of ciphertexts of zeros, which are retrieved as needed during encryption or homomorphic operations. This pool consists of pre-encrypted zero values {c_z1, c_z2, …, c_zn}, where each c_zi is an encrypted zero:
c_zi = CKKS.Enc(0)
During the encryption process, for each pixel value p, a randomly selected zero ciphertext c_zi from the pool is added to the cached ciphertext:
c_p, final = c_p + c_zi
This addition ensures that identical pixel values do not produce identical ciphertexts. By maintaining a sufficient number of encrypted zeros in the randomness pool, is provably IND-CPA secure (see Section <ref>) while reducing the computational load associated with image encryption.
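The randomness pool can be sketched as follows; the pool size of 64 is an arbitrary illustrative value, and add again stands in for CKKS homomorphic addition. Only re-randomized ciphertexts ever leave the system, never the cached ciphertexts themselves.

```python
import random

def build_zero_pool(enc, pool_size=64):
    """Pre-encrypt a pool of zeros used to re-randomize cached ciphertexts."""
    return [enc(0) for _ in range(pool_size)]

def rerandomize(cached_ct, zero_pool, add):
    """Add a random encrypted zero so identical pixels yield distinct ciphertexts."""
    return add(cached_ct, random.choice(zero_pool))
```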
encrypts the pixel values of an image using the CKKS scheme and then caches the ciphertexts to accelerate subsequent operations on the encrypted data. Assuming that the pixel values of the image lie within the range [0, 255], we can represent each pixel value as an element in the integer set, i.e., p ∈{0, 1, 2, …, 255}. For each pixel value p, we apply the CKKS encryption scheme to generate the corresponding ciphertext c_p = CKKS.Enc(p), which is then stored in a cache to avoid redundant encryption in future operations.
Specifically, since there are only 256 possible pixel values, we precompute and cache these 256 ciphertexts, denoted as {c_0, c_1, c_2, …, c_255}. During the encryption of an image, for each pixel value p, we can directly reference the corresponding cached ciphertext c_p, significantly reducing the time complexity of the encryption process. This caching strategy not only minimizes computational overhead but also enhances the overall efficiency of the encryption process. Mathematically, this precomputation approach is analogous to constructing a lookup table, where the encrypted mapping of each pixel value p is pre-stored as a cached ciphertext, enabling retrieval in O(1) time.
Building on this approach, the implementation optimizes the caching mechanism by ensuring that all 256 possible pixel values are pre-encoded as ciphertexts using the CKKS encryption scheme. This is done by encoding each value p into a plaintext format m_p, encrypting it, and storing the result as a ciphertext c_p in memory:
c_p = CKKS.Enc(m_p)
where m_p = p, p ∈ [0, 255].
By doing so, we guarantee that the system can handle pixel values efficiently during image encryption operations without the need for on-the-fly computation, which is both time-consuming and computationally expensive.
Since adding encrypted zeros is essential for the security of this method, generating the encrypted zeros is an indispensable step. To save the time of encrypting zeros and speed up the image encryption process, a secondary list of zero-valued ciphertexts {c_z_1, c_z_2, …, c_z_n} is generated and stored, where each c_z_i corresponds to an encrypted zero value. These zero-valued ciphertexts are then randomly selected and added to the primary pixel ciphertexts during the encryption process to ensure that identical pixel values do not result in identical ciphertexts:
c_p^final = c_p + c_z_i,
where c_p is the cached ciphertext for pixel value p, and c_z_i is a randomly selected zero ciphertext. This randomization adds a layer of security by introducing variability in the encrypted output, even for identical pixel values.
This method of caching precomputed ciphertexts is critical for large-scale image encryption, where the pixel values are densely populated. Without such a cache, the system would face significant delays due to repetitive encryption operations. The cache not only reduces encryption time but also lowers the overall computational burden, which is particularly advantageous when processing high-resolution images or performing operations on encrypted data in real-time.
Overall, the cache serves as an essential component of , enabling efficient encryption of image data while maintaining the robustness of the underlying homomorphic encryption scheme. By leveraging precomputation and storage of ciphertexts, the system ensures fast and secure processing of pixel data, making it well-suited for applications where both performance and security are paramount.
§.§ Security Analysis
In this section, we establish that our modified CKKS scheme, which includes caching ciphertexts for plaintexts 0 to 255 and the use of pre-encrypted zeros, maintains IND-CPA (Indistinguishability under Chosen Plaintext Attack) security. This is achieved by reducing the security of the modified scheme to that of the original CKKS scheme.
Assume for the sake of contradiction that the modified CKKS scheme Π̃ is not IND-CPA secure. This would imply the existence of a probabilistic polynomial-time (PPT) adversary 𝒜 capable of distinguishing between the encryptions of two plaintexts m_0 and m_1 under Π̃ with a non-negligible advantage:
| [𝒜(Π̃(m_0)) = 1] - [𝒜(Π̃(m_1)) = 1] | ≥ϵ(n),
where ϵ(n) is a non-negligible function of the security parameter n.
To derive a contradiction, we construct a PPT adversary ℬ, which uses 𝒜 as a subroutine to break the IND-CPA security of the original CKKS scheme Π. The adversary ℬ is provided with a public key pk and receives two challenge ciphertexts c_0 = Π(m_0) and c_1 = Π(m_1) from the IND-CPA challenger for Π. Then, ℬ randomly selects a ciphertext z from the set Z = {Π(0)} of pre-encrypted zeros. The adversary ℬ computes the following ciphertexts:
c̃_0 = c_0 + z and c̃_1 = c_1 + z
Subsequently, ℬ feeds the ciphertexts c̃_0 and c̃_1 to 𝒜 and outputs 𝒜's guess as its own.
Since 𝒜 can distinguish between c̃_0 and c̃_1 with a non-negligible advantage, ℬ would similarly distinguish between c_0 and c_1, thereby breaking the IND-CPA security of the original CKKS scheme Π. This contradicts the established security of the CKKS scheme, which is proven to be IND-CPA secure under the hardness assumption of the Ring-LWE problem. Hence, no such adversary 𝒜 can exist, and we conclude that the modified CKKS scheme Π̃ is also IND-CPA secure.
It is important to note that the use of zero ciphertexts does not significantly impact noise levels, thus preserving the correctness of decryption. Furthermore, caching encryptions of known plaintexts introduces no additional vulnerabilities, as these ciphertexts are generated using the original CKKS encryption function, which is secure.
§ EVALUATION
This section will present our evaluation objectives (Section <ref>), setup of experiments (Section <ref>), performance comparisons (Section <ref>), real-world applications as case studies (Section <ref>), and broader applicability (Section <ref>). Notably, our method demonstrates exceptional scalability and can successfully handle operations on 1024×1024 images – an achievement not previously reported in the literature <cit.>.
§.§ Objectives
We aim to address the following questions through our experimental evaluation:
* How does our pixel-level homomorphic encryption method with caching optimization perform compared to existing methods in terms of encryption time and scalability? (Section <ref>)
* When performing secure image processing operations for encryption and decryption, to what extent can our approach preserve image quality while maintaining computational efficiency? (Section <ref>)
* Does our approach effectively enable ciphertext-based image processing in real-world image processing and analysis applications? (Section <ref>)
* Can our approach be broadly applicable to a diverse range of images? (Section <ref>)
§.§ Experimental Setup
Our implementation is based on the Lattigo library <cit.>, an open-source Go library for lattice-based cryptography. We chose Lattigo for its efficient implementation of the CKKS scheme and its support for homomorphic operations. consists of roughly 4K lines of code (LOC) of Go. Our solution is integrated with the Go image processing library for handling various image operations. The project is developed using Go 1.20.4 and compiled with the standard Go compiler. For parallelization, we utilize the built-in concurrency features of Go, including goroutines and channels.

Our experiments are conducted on a high-performance computing cluster of CloudLab Clemson. Each node is equipped with two 32-core AMD 7542 processors at 2.9 GHz, 512 GB of RAM, and 2 TB of SSD storage. The operating system is Ubuntu 20.04 LTS.

We utilize the USC-SIPI Image Database <cit.> for our experiments. This dataset was chosen for its diverse range of image types and content, which allows us to test our encryption method across various scenarios. Additionally, the USC-SIPI database is free from copyright restrictions, making it ideal for research purposes. The dataset includes a wide variety of images, such as textures, aerial photographs, and standard test images, providing a comprehensive basis for our experiments.
As we mentioned in Section <ref>, increasing N and Q can indeed enhance the security of the system, but it also introduces trade-offs in computational complexity and performance. Therefore, in our experiments, we chose N = 2^12 and Q = 2^109, ensuring 128-bit security for a secret key with uniform ternary distribution s ∈{-1, 0, 1}^N. These parameters also ensure that our experiments can proceed efficiently within the constraints of our limited resources.
Prior to our experiments, we perform several preprocessing steps to standardize the dataset. First, we conduct image cropping and resizing. As the original dataset contains images of varying resolutions, we create subsets of images at specific resolutions: 8x8, 64x64, 128x128, 256x256, 512x512, and 1024x1024 pixels. This standardization allows us to evaluate our encryption method's performance across different image sizes. Second, we convert all images to BMP (Bitmap Image File) format to ensure a consistent file format, which guarantees compatibility and facilitates accurate performance comparisons <cit.>. BMP is selected for its pixel-based structure, which is ideal for applications such as medical imaging due to its uncompressed format and high image quality.
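The preprocessing described above can be reproduced with a few lines of image tooling; the snippet below uses Python's Pillow library purely for illustration (the paper's pipeline relies on Go's image packages), and the function name and file paths are hypothetical.

```python
from PIL import Image

SIZES = [8, 64, 128, 256, 512, 1024]

def standardize(src_path, dst_prefix):
    """Resize a source image to the evaluation resolutions and save as BMP."""
    img = Image.open(src_path).convert("RGB")
    for s in SIZES:
        img.resize((s, s)).save(f"{dst_prefix}_{s}x{s}.bmp", format="BMP")
```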
§.§ Performance Comparison
This subsection presents the results of performance comparison between our proposed approaches and the baseline method (without any caching optimization) for pixel-level encryption across a diverse range of images.
§.§.§ Time and Scalability
In our evaluation of the encryption methods, we observed significant differences in encryption efficiency. Figure <ref> compares the encryption times of these methods across various image sizes, presented on a linear scale. The methods analyzed include the baseline pixel-level encryption without caching, pixel-level encryption with scanning-based caching, pixel-level encryption with full caching, and pixel-level encryption using radix-based caching, both with and without randomness.
As evident in the graph, the baseline method (Enc by Pixel without Caching) shows a steep increase in encryption time as the image size grows. This method, while straightforward, lacks optimizations to mitigate the computational complexity inherent in pixel-level encryption, particularly for larger images. Its encryption time grows rapidly with the number of pixels, making it impractical for high-resolution image processing.
In contrast, the full caching method (Enc by Pixel with Full Caching) demonstrates a substantial improvement in efficiency across all image sizes.
The most substantial improvement is observed with the largest image sizes, where the full caching method achieves up to 19 times the performance of the baseline. This method also outperforms the radix-based caching method, regardless of whether randomness is applied to radix-based caching.
The scanning-based caching, which differs from full caching only in the scanning step performed before image encryption, produces nearly identical results to full caching. Consequently, in the graph, the full caching and scanning-based caching methods are represented by the same plot, though scanning-based caching may encrypt and cache fewer pixel values than full caching during the caching phase.
The radix-based caching method (Enc by Pixel with Radix-based Caching), with or without randomness, exhibits enhanced performance over the baseline, though it does not match the level of efficiency achieved by the full caching method.
The substantial reduction in encryption time observed in our experiments can be attributed to our innovative caching strategy. The full caching method resembles the creation of a lookup table, where the encrypted mapping of each pixel value p is pre-stored as a cached ciphertext. This allows retrieval of the encrypted value in O(1) time, remarkably reducing the computational overhead of encrypting each pixel individually.
It is important to note that this performance gain significantly outweighs the overhead associated with adding randomness to defeat CPA attacks. Consequently, we transform the traditionally time-consuming encryption process into a swift lookup operation, enabling the efficient processing of large images without compromising security.
§.§.§ Image Quality
Upon completing the tests and analyzing the images, we observed that the decrypted images exhibit a high degree of similarity to the original images, as demonstrated in Figure <ref>. To rigorously assess the effectiveness of our encryption methods in preserving image integrity and quality, we conducted extensive tests utilizing a range of image quality metrics.
We evaluated the Mean Squared Error (MSE), which measures the average squared difference between pixel values of the original and decrypted images. It is defined as the average of the squared differences between corresponding pixels of the two images <cit.>. The formula for MSE is MSE = 1/mn∑_i=1^m∑_j=1^n( I(i,j) - K(i,j) )^2,
where I(i,j) and K(i,j) represent the pixel values at position (i,j) in the original and the decrypted images, respectively, and m and n are the number of rows and columns in the image. A lower MSE indicates higher similarity between the original and decrypted images.
We also assessed the Peak Signal-to-Noise Ratio (PSNR), which quantifies the reconstruction quality of the decrypted image. It is usually expressed in decibels (dB) <cit.>. The formula for PSNR is PSNR = 10 ·log_10( MAX_I^2/MSE),
where MAX_I is the maximum possible pixel value of the image (e.g., MAX_I = 255 for an 8-bit image). A higher PSNR indicates better quality and higher similarity of a decrypted image to the original image.
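Both metrics are straightforward to compute; a NumPy sketch consistent with the formulas above is shown below.

```python
import numpy as np

def mse(original, decrypted):
    """Mean squared error between two equally sized images."""
    a = np.asarray(original, dtype=np.float64)
    b = np.asarray(decrypted, dtype=np.float64)
    return np.mean((a - b) ** 2)

def psnr(original, decrypted, max_value=255.0):
    """Peak signal-to-noise ratio in decibels."""
    m = mse(original, decrypted)
    return float("inf") if m == 0 else 10.0 * np.log10(max_value ** 2 / m)
```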
As shown in Table <ref>, the low MSE and high PSNR values demonstrate that our encryption and decryption processes effectively preserve image quality. These results confirm that our caching-optimized encryption method not only maintains the integrity and visual fidelity of the processed images, but also significantly enhances encryption efficiency. In particular, we achieved an average MSE of 0.476 and an average PSNR of 51.30 dB. In the field of image processing, MSE values below 10 and PSNR values above 40 dB are typically considered to indicate high similarity between two images <cit.>. Our results, therefore, indicate a high degree of similarity between the original and decrypted images after employing our FHE encryption method, ensuring minimal perceptual loss.
§.§ Real-world Applications
To demonstrate that enables ciphertext-based image processing in real-world applications, we apply it to four distinct use cases: mean filtering, brightness enhancement, image matching, and watermarking.
§.§.§ Mean Filtering
Mean filtering is a fundamental image processing technique used for noise reduction and image smoothing. It replaces each pixel value with the average value of its neighboring pixels within a specified window. In this case study, we demonstrate the implementation of mean filtering on image ciphertexts generated by our FHE method. Given a pixel value at position (i, j) within an image I, the mean filter operation in the plaintext domain is mathematically expressed as I'_(i,j) = 1/n^2∑_k=-n/2^n/2∑_l=-n/2^n/2 I_(i+k,j+l),
where n is the size of the filter window (e.g., n = 3 for a 3x3 filter), and I'_(i,j) is the filtered pixel value <cit.>.
The process of mean filtering in the encrypted domain involves three key steps:
* Homomorphic addition of pixel values within the filter window:
S_(i,j) = ∑_k=-n/2^n/2∑_l=-n/2^n/2 Enc(I_(i+k,j+l)),
where S_(i,j) is the sum of encrypted neighboring pixel values.
* Homomorphic multiplication by a constant:
Enc(I'_(i,j)) = 1/n^2· S_(i,j),
where n^2 is the total number of pixels in the filter window. For a 3x3 filter, n^2 = 9.
* Rounding and decryption of the result:
I'_(i,j) = Dec( ⌊ Enc(I'_(i,j)) ⌋),
where Dec(·) is the decryption operation, and ⌊·⌋ denotes the rounding operation, ensuring the final pixel value is an integer.
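The first two steps (homomorphic accumulation and scaling by 1/n^2) can be sketched as follows, where add and mul_plain are placeholders for CKKS ciphertext-ciphertext addition and ciphertext-plaintext multiplication; decryption and rounding happen afterwards on the client side, and border pixels are simply left untouched here since the paper does not specify its border handling.

```python
def mean_filter_encrypted(enc_img, add, mul_plain, n=3):
    """n x n mean filter applied to a 2-D grid of ciphertexts."""
    h, w, r = len(enc_img), len(enc_img[0]), n // 2
    out = [row[:] for row in enc_img]
    for i in range(r, h - r):
        for j in range(r, w - r):
            acc = None
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    c = enc_img[i + di][j + dj]
                    acc = c if acc is None else add(acc, c)
            out[i][j] = mul_plain(acc, 1.0 / (n * n))
    return out
```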
As shown in Figure <ref>, we applied to a 64×64 pixel image, followed by a 3x3 mean filtering operation. Frequency domain analysis confirmed that the decrypted image accurately reflected the application of the mean filter, verifying the correctness of our ciphertext-based homomorphic operations for image mean filtering. Specifically, the MSE between the original image and the filtered image is 218.13, with a PSNR of 24.74 dB. Given that a PSNR value between 20 dB and 30 dB indicates noticeable differences, these results highlight a significant loss of detail, as anticipated from the mean filtering.
Without using , the pixel-wise encryption time for this example was 84.29 seconds, whereas our full caching method reduced it to just 4.65 seconds, delivering an impressive 18-fold speedup.
Our analysis confirms that, for mean filtering, the image processed on ciphertexts is indistinguishable from the one processed directly on plaintexts.
§.§.§ Brightness Enhancement
Brightness enhancement is a common image processing technique used to improve image visibility.
We applied homomorphic operations to real images encrypted by , performing brightness enhancement while the images remained encrypted. Specifically, we incremented each pixel value by 50, resulting in brighter images <cit.>. The image was then decrypted to visualize the processed effect. It is important to note that this brightness enhancement technique is effective only when encryption is applied at the pixel level.
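In the encrypted domain this amounts to one homomorphic addition per pixel, as in the hedged sketch below; enc and add are again placeholder CKKS operations, and the same encrypted offset is reused for every pixel.

```python
def brighten_encrypted(enc_img, enc, add, delta=50):
    """Homomorphically add a brightness offset to every encrypted pixel."""
    enc_delta = enc(delta)
    return [[add(c, enc_delta) for c in row] for row in enc_img]
```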
Figure <ref> illustrates the result of brightness enhancement applied to an encrypted image, showing a marked improvement in clarity. Our analysis confirms that the image processed based on ciphertext is indistinguishable from the image processed directly on plaintext.
In this example, accelerated the image encryption by 4 times. For the 256×256 image, we reduced the encryption time from 16.96 seconds to 4.57 seconds.
§.§.§ Image Matching
Image matching is a fundamental task in computer vision, where the goal is to determine whether two images are similar. We match images using the L1 distance (also known as the Manhattan distance) in a pixel-by-pixel manner, comparing the images directly based on their pixel values.
The L1 distance d_L1 between two images I_1 and I_2 is d_L1(I_1, I_2) = ∑_i=1^M ∑_j=1^N | I_1(i, j) - I_2(i, j) |,
where I_1(i, j) and I_2(i, j) represent the pixel intensities at position (i, j) in images I_1 and I_2, respectively <cit.>.
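For reference, the plaintext form of this metric is a one-liner in NumPy; in the encrypted pipeline, the subtraction (and the multiplication used to accumulate magnitudes) is performed homomorphically, and only the aggregate distance is decrypted by the user.

```python
import numpy as np

def l1_distance(img1, img2):
    """Pixel-wise L1 (Manhattan) distance between two equally sized images."""
    a = np.asarray(img1, dtype=np.float64)
    b = np.asarray(img2, dtype=np.float64)
    return float(np.sum(np.abs(a - b)))
```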
Using three 128×128 images for illustration, as shown in Figure <ref>, our specific operation involved applying to encrypt them, followed by homomorphic subtraction and multiplication on the ciphertexts to compute the L1 distance in encrypted form. The user can then decrypt the distance using their key to determine which image in the database is most similar to the one they uploaded. This technique can be applied in facial recognition or medical image identification.
As shown in Figure <ref>, Image 2 is obtained by brightening Image 1 and Image 3 is a totally different one. As shown in Table <ref>, the distances between the images are as follows: 11075.62 between Image 1 and Image 2; 21036.48 between Image 1 and Image 3; and 15133.89 between Image 2 and Image 3. The distance results are consistent with what we observe visually in Figure <ref>.
Without using , the encryption time for this example was 826.19 seconds. By using , it was reduced to 69.99 seconds. The performance has been improved by about 91.53%, which is a significant enhancement.
§.§.§ Watermarking
Watermarking images is crucial for ensuring the authenticity and ownership of digital content. In an era where digital media can be easily copied and distributed, embedding a watermark provides a layer of protection against unauthorized use and counterfeiting <cit.>. We performed a watermarking operation on the encrypted images by adding a value of x to a specific pixel. The value of x is set by the user; for the convenience of the experiment, we chose a value of 5.0. This slight alteration creates a watermark within the encrypted image, ensuring that the watermark is embedded securely without revealing any underlying data.
This approach is valuable for scenarios where image authenticity and traceability are critical.
Adding a watermark at a specific pixel location can make the watermark invisible, as shown in Images (a) and (b) of Figure <ref>. Image (c) of Figure <ref> reflects the differences between (a) and (b): it is a binary image where the white areas represent the pixel locations with differences above the threshold. In this case, the threshold is the value of 5.0 chosen above. This technique ensures that the watermark does not alter the visual appearance of the image, maintaining its aesthetic integrity.
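A sketch of both sides of this workflow is shown below: embedding the watermark homomorphically (with placeholder enc/add operations) and, after decryption, recovering the binary difference map used in Image (c). The pixel coordinates and the threshold default are illustrative.

```python
import numpy as np

def embed_watermark(enc_img, enc, add, i, j, x=5.0):
    """Homomorphically add the watermark value x to the pixel at (i, j)."""
    enc_img[i][j] = add(enc_img[i][j], enc(x))
    return enc_img

def difference_mask(img_a, img_b, threshold=5.0):
    """Binary map of pixels whose absolute difference reaches the threshold."""
    diff = np.abs(np.asarray(img_a, dtype=float) - np.asarray(img_b, dtype=float))
    return (diff >= threshold).astype(np.uint8) * 255
```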
In this example, without using , the encryption time was 63.08 seconds. By using , it was reduced to 4.8 seconds, achieving a 16-fold improvement in speed.
§.§ Broader Applicability
To ensure that our approach is broadly applicable and robust across various image types and content, we conducted experiments using a diverse range of images, although not all could be displayed in this paper. To make sure the image selection captured a broad spectrum of variation, we calculated Structural Similarity Index (SSIM) values between images, quantifying their degrees of similarity and dissimilarity.
SSIM is a metric used to assess the similarity between two images, focusing on the preservation of structural information. Unlike MSE and PSNR, SSIM considers luminance, contrast, and structure in its assessment <cit.>. The formula for SSIM is SSIM(I, K) = (2μ_I μ_K + C_1)(2σ_IK + C_2)/(μ_I^2 + μ_K^2 + C_1)(σ_I^2 + σ_K^2 + C_2),
where μ_I and μ_K are the mean values of images I and K, σ_I^2 and σ_K^2 are the variances, σ_IK is the covariance between I and K, and C_1 and C_2 are constants included to stabilize the division when the denominator is close to zero.
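A single-window (global) SSIM consistent with the formula above can be computed as follows; the stabilizing constants C_1=(0.01L)^2 and C_2=(0.03L)^2 are the commonly used defaults and are an assumption here, since the paper does not state its choice.

```python
import numpy as np

def global_ssim(img1, img2, max_value=255.0):
    """Global SSIM over whole images (no sliding window)."""
    a = np.asarray(img1, dtype=np.float64).ravel()
    b = np.asarray(img2, dtype=np.float64).ravel()
    c1, c2 = (0.01 * max_value) ** 2, (0.03 * max_value) ** 2  # assumed constants
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```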
This guided our selection of 200 distinct images, each with dimensions of 256×256 pixels, from the USC-SIPI <cit.> database. Five representative examples from this image set are displayed in Figure <ref>. The top row showcases the five original images with distinct characteristics: three in grayscale and two in color. We applied the operation to all the images. The bottom portion of Figure <ref> illustrates the images after undergoing encryption and subsequent decryption, with each restored to its original form. The identical appearance of the images before and after the process highlights the effectiveness and broader applicability of our approach.
Figure <ref> presents the experimental results from processing images using both non-caching and full-caching approaches. The figure demonstrates a significant reduction in encryption time, with decrypted images closely matching the originals. Specifically, the average processing time decreased by a factor of 17.57, while the average MSE difference is 0.146. These results underscore the robustness and the broad applicability of our approach in practical scenarios <cit.>.
The SSIM values calculated between these images range from 0.0737 to 0.5640, reflecting a spectrum of similarity and diversity within the dataset. This variation corresponds to the differing structural characteristics among the images and indicates that the image set used in this study covers a wide range of image types. The approach is therefore robust across images with diverse structural content, which extends the applicability of our results to a broad array of real-world scenarios.
§ RELATED WORK
This section reviews relevant work of FHE, starting with a general overview (Section <ref>), HE performance acceleration (Section <ref>), followed by recent developments in the CKKS scheme (Section <ref>), and ending with the CKKS application to image encryption (Section <ref>).
§.§ FHE
Fully Homomorphic Encryption (FHE) represents a groundbreaking advancement in the field of cryptography, enabling computations on encrypted data without needing to decrypt it first. The idea was initially proposed by Rivest <cit.> in 1978. Later, Gentry <cit.> introduced pioneering work on FHE in 2009, which provided the first feasible construction of an FHE scheme, and since then there has been significant progress in making FHE more practical for real-world applications <cit.>.
FHE schemes have evolved from theoretical constructs to more efficient implementations, addressing the challenges of high computational overhead and limited functionality. Various FHE schemes have been proposed, including BFV <cit.>, BGV <cit.>, CKKS <cit.>, TFHE <cit.> and its optimizations, each tailored to specific types of computations and use cases.
BFV and BGV both operate over integers, making them ideal for exact computations, with BFV generally offering faster performance for certain operations while BGV allows for deeper computational circuits before requiring bootstrapping. These distinctions have led to various optimization efforts. For instance, Halevi et al. <cit.> focused on enhancing the BFV scheme by optimizing decryption and homomorphic multiplication in the Residue Number System (RNS). Robin et al. <cit.> introduced BASALISC, the first to implement the BGV scheme with fully-packed bootstrapping, i.e., the noise removal capability necessary for arbitrary-depth computation. In contrast, CKKS is designed for approximate arithmetic on real and complex numbers, introducing controlled noise to enhance efficiency. This makes CKKS particularly well-suited for machine learning and signal processing tasks where minor approximation errors are acceptable <cit.>. While BFV and BGV provide precise results for integer arithmetic, CKKS trades some precision for improved performance and versatility in floating-point operations <cit.>. Additionally, TFHE, which was proposed by Chillotti et al. <cit.>, introduces significant optimizations in bootstrapping and leveled homomorphic encryption on Boolean circuits, reducing computational overhead and improving the practicality of FHE for arithmetic functions. There have been numerous efforts to optimize TFHE, including works such as <cit.>, <cit.>, and <cit.>. Unlike CKKS, schemes like BFV, BGV, and TFHE operate over integers rather than floating-point numbers, which are commonly used in image processing. The choice between these schemes depends on the specific requirements of the application, balancing factors such as data type, precision needs, computational depth, and overall performance considerations <cit.>.
While the first group of schemes – BFV, BGV, CKKS and TFHE – fall under the category of FHE, allowing for arbitrary-depth computation, another line of research is Partial Homomorphic Encryption (PHE) methods. This paper does not focus on PHE and will therefore not discuss it in detail.
§.§ HE Performance Acceleration
While FHE methods have made significant strides in security, they often encounter challenges like high computational costs and slow processing speeds in real-world applications. Recently, there have been many works focusing on the acceleration of homomorphic encryption (HE). Agrawal et al. <cit.> propose memory-aware design (MAD) techniques to accelerate the bootstrapping operation of CKKS. Genise et al. <cit.> improve the runtime of the main bottleneck operation using "gadget" by 18x for 2 attributes and up to 289x for 16 attributes. Additionally, efforts in hardware acceleration have been investigated to speed up homomorphic operations. For instance, Di Matteo et al. <cit.> propose a hardware accelerator, presenting a configurable Number Theoretic Transform (NTT) unit for all the polynomial degrees available in SEAL-Embedded. In addition to acceleration of FHE, PHE performance acceleration has also been significantly improved. One of the state-of-the-art techniques is Rache <cit.>, which introduces a radix-based parallel caching optimization for accelerating the performance of homomorphic encryption (HE) of outsourced databases in cloud computing. However, it is only available for accelerating PHE schemes like Paillier <cit.> and Symmetria <cit.>, and was not designed for handling image data.
§.§ CKKS
The CKKS scheme, introduced by Cheon, Kim, Kim, and Song in 2017 <cit.>, is a leveled homomorphic encryption scheme designed for approximate arithmetic. Unlike earlier schemes that focused on exact computations, CKKS allows for efficient encrypted computations with some tolerable error, which is often acceptable in real-world applications. This trade-off between accuracy and efficiency has made CKKS a popular choice for applications in privacy-preserving machine learning, encrypted databases, and secure cloud computing.
Recent work has focused on optimizing CKKS to reduce its computational overhead and enhance its practicality. Techniques such as bootstrapping, ciphertext packing, and key switching have been explored to improve the performance of CKKS-based systems. For example, Lee et al. <cit.> propose to compose the optimal approximate polynomial of the inverse sine function to the sine/cosine function to improve the precision of the bootstrapping. Cheon et al. <cit.> proposed Mult2 to perform ciphertext multiplication in the CKKS scheme with lower modulus consumption. Lee et al. <cit.> proposed a scheme to actively control the scale of a ciphertext, making the impact of noise on an error smaller. Bossuat et al. <cit.> optimize the key-switch procedure and propose a new technique for linear transformations. Chen et al. <cit.> generalize the bootstrapping techniques for HE to obtain multi-key fully homomorphic encryption schemes. Jain et al. <cit.> proposed that the efficiency of ciphertext packing schemes, model optimization, and multi-threading strategies significantly impacts the throughput and latency during the inference process.
§.§ CKKS for Images
Given the unique features of CKKS, its application in image encryption has been the subject of extensive research. Boemer et al. <cit.> introduced nGraph-HE2, which optimizes ciphertext-plaintext addition and multiplication using the CKKS scheme. Similarly, Mertens et al. <cit.> proposed a practical image compression and processing pipeline tailored for CKKS, emphasizing FHE-friendliness.
Recently, there has been a growing interest in integrating CKKS with machine learning frameworks for images. For example, Jia et al. <cit.> introduced Homomorphic Encryption Image Classification Evaluation (HEICE), a novel approach for secure image classification. Dimitris et al. <cit.> introduced a scheme combining federated learning and CKKS for neuroimaging analysis. Benaissa et al. <cit.> presented TenSEAL, an open-source library for Privacy-Preserving Machine Learning based on CKKS.
Despite these advancements, challenges persist in making FHE schemes, including CKKS, more accessible and scalable. Issues such as key management, noise growth during computations, and the requirement for high-performance hardware remain active areas of research <cit.>. Moreover, the trade-offs between security and efficiency—especially regarding bootstrapping and noise management—are critical considerations <cit.>. Nonetheless, the increasing focus on optimizing and applying CKKS in various domains, such as privacy-preserving machine learning <cit.> and secure data analytics, underscores its potential to become a cornerstone of secure computation in the era of big data and artificial intelligence.
§ CONCLUSION
In conclusion, this paper advances practical ciphertext-based image processing by proposing and implementing a pixel-level homomorphic encryption approach based on the CKKS scheme. Our approach employs three caching strategies – radix-based caching, scanning-based caching, and full caching – to pre-encrypt radix values and frequently occurring or all pixel values, thereby significantly reducing redundant encryption operations. Our evaluation across a diverse set of images demonstrates substantial improvements in encryption efficiency, with up to a 19-fold reduction in encryption time for large images, all while maintaining image quality. We further showcased the versatility and practicality of our approach by applying it to real-world image processing tasks, including mean filtering, brightness enhancement, image matching, and watermarking, all performed directly on ciphertexts. The results indicate notable performance gains, with encryption tasks becoming up to 91.53% faster and achieving a 16-fold reduction in encryption time. Additionally, we validated that our approach is IND-CPA secure, benefiting from the integrated randomness.
|
http://arxiv.org/abs/2409.03230v1 | 20240905035620 | Improving agent performance in fluid environments by perceptual pretraining | [
"Jin Zhang",
"Jianyang Xue",
"Bochao Cao"
] | cs.RO | [
"cs.RO",
"physics.flu-dyn"
] |
cbc@fudan.edu.cn
Department of Aeronautics and Astronautics, Fudan University, Shanghai 200433, China
§ ABSTRACT
In this paper, we construct a pretraining framework for fluid environment perception, which includes an information compression model and the corresponding pretraining method. We test this framework in a two-cylinder problem through numerical simulation. The results show that after unsupervised pretraining with this framework, the intelligent agent can acquire key features of surrounding fluid environment, thereby adapting more quickly and effectively to subsequent multi-scenario tasks. In our research, these tasks include perceiving the position of the upstream obstacle and actively avoiding shedding vortices in the flow field to achieve drag reduction. Better performance of the pretrained agent is discussed in the sensitivity analysis.
Improving agent performance in fluid environments by perceptual pretraining
Jin Zhang, Jianyang Xue, Bochao Cao^*
September 9, 2024
===========================================================================
§ INTRODUCTION
In water environments, aquatic animals or underwater robots can be regarded as embodied intelligent agents that collect information (pressure, temperature, vision, etc.) from their surrounding water environments using sensory organs or electronic sensors. This time-series information is then processed by the agents to perceive the flow field and to fulfill underwater missions such as cruising and obstacle avoidance.
Embodied intelligent agents acquire knowledge through interactions with their environment, often with limited information, and apply this knowledge across various tasks <cit.>. Reinforcement learning (RL) has emerged as a promising technique for training embodied agents in complex fluid environments<cit.>. In the framework of RL, agents update their decision-making strategies through trials and errors, and finally maximize their cumulative rewards. There have been many successful cases of training embodied agents using RL algorithms. For example, RL has been adopted for training gliders to improve gliding performance in complex convection environments <cit.>, for active flow control to reduce drag <cit.>, and for studying the behavior of both individual fish-like swimmers and fish schooling from a biomimetic perspective <cit.>. Research works by Paris et al. <cit.> and <cit.> indicate the existence of optimal sensor placement for specific scenarios. Placing sensors in highly sensitive regions enables effective flow control even with a limited number of detection points. Wang et al. <cit.> propose that integrating temporal data into agent state variable can significantly enhance performance of the agent. These findings reveal potential advantages of flow control strategies using deep reinforcement learning techniques.
However, most previous studies focus on training agents to perform well in isolated tasks within fixed scenarios. Given the inherent complexity and nonlinearity of fluid dynamics, training agents to generalize their abilities across different fluid environments remains extremely challenging and demands innovative approaches in both RL frameworks and training methodologies.
Extraction of meaningful representations from collected data might be crucial before training agents to fulfill multiple tasks. Some recent research works have indicated that unsupervised pretraining allows agents to develop a deep understanding of complex patterns and relationships within the data, enhancing their ability to generalize from one task to another. This procedure has been validated in various areas, including large language models <cit.>, computer vision <cit.> and reinforcement learning in video game environments <cit.>. In the area of fluid dynamics, information compression method has also been introduced in several research works. For instance, Fukami et al. <cit.> and Murata et al. <cit.> use nonlinear encoders to compress spatial features of flow fields into low-dimensional manifolds. Racca et al. <cit.> compress flow field data from both temporal and spatial dimensions using nonlinear encoders and successfully predict the development of unsteady turbulent flows by compressed low-dimensional information.
In this study, we build a spatiotemporal compression model to transform perceptual information collected from a fluid environment into actionable knowledge. We start with unsupervised pretraining which enables the agent to learn environmental information without data labeling. Subsequently, using pretraining knowledge, we test the ability of the agent in different tasks, including obstacle position prediction and drag reduction reinforcement learning. The results illustrate that perceptual compression significantly enhances agent performance across different tasks. Our approach provides a new way of training general-purpose agents capable of various tasks in complex fluid environments.
§ METHODOLOGY
§.§ Numerical Method
In this research, the flow around two identical cylinders aligned in the streamwise direction is simulated in OpenFOAM (http://www.openfoam.comwww.openfoam.com). The downstream cylinder acts as the perceptual intelligent agent in our study, while the upstream one serves as the obstacle, which can oscillate in the vertical direction as shown in figure <ref>. The Reynolds number of the incoming flow is set to 100; hence, the laminar algorithm is used in the simulation and the current problem is solved with the incompressible Navier-Stokes equations shown below,
∂ u_i/∂ x_i = 0
∂ u_i/∂ t + u_j ∂ u_i/∂ x_j = -1/ρ∂ p/∂ x_i + ν∂^2 u_i/∂ x_j ∂ x_j
where u_i denotes velocity component in the i-th direction, ρ is the fluid density, p is the pressure, and ν is the kinematic viscosity.
In the simulation, using overset mesh technique, the computational domain is partitioned into overlapping subdomains, each with its own grid. A background mesh spans the entire computational domain, while refined grids are used around the cylinders, as shown in figure <ref>. Computational information is exchanged between these grids through interpolation in overlapping regions. The illustration of computational domain is shown in figure <ref>. The background mesh size is 60D x 40D, where D is the cylinder diameter. The mesh around each cylinder is set in a circular region with diameter of 8D. The cylinders are placed along the centerline of the computational domain, with cylinder 1 located 20D from the inlet and cylinder 2 6D downstream of cylinder 1. Free-stream condition is given at the inlet of computational domain, while far-field condition is set at the other three boundaries, as shown in figure <ref>. A no-slip boundary condition is imposed on the cylinder surfaces. Moreover, since cylinder 2 is set as the perceptual agent in this study, its surface pressure is collected at each computational time step. Validation of current computational method is provided in APPENDIX <ref>.
§.§ Perceptual network
The perceptual network consists of a convolutional neural network encoder and a GRU network <cit.>. As shown in figure <ref>, firstly, instant pressure data p is compressed by convolutional neural network encoder, and instantaneous spatial feature z is extracted. The spatial feature is then further compressed in time dimension by a GRU network, yielding the dynamic feature h_t at time instant t.
In this study, p is a 1×200 vector sampled at 10 Hz. The spatial encoder consists of five convolutional layers with kernel sizes of [10, 8, 7, 5, 3] and strides of [2, 2, 1, 1, 1], and ReLU is used as the activation function. Then GRU network compresses spatial feature time series with size of w×50 into a 1×64 dynamic feature vector h, where w is the length of perceptual time window.
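A possible PyTorch rendering of this perception network is sketched below. The kernel sizes, strides, and feature dimensions follow the text, while the number of convolutional channels (16 here) and the final linear projection to the 1×50 spatial feature are our own assumptions, since they are not specified in the paper.

```python
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """CNN spatial encoder followed by a GRU temporal encoder (illustrative)."""

    def __init__(self, z_dim=50, h_dim=64, channels=16):
        super().__init__()
        kernels, strides = [10, 8, 7, 5, 3], [2, 2, 1, 1, 1]
        layers, ch_in = [], 1
        for k, s in zip(kernels, strides):
            layers += [nn.Conv1d(ch_in, channels, kernel_size=k, stride=s), nn.ReLU()]
            ch_in = channels
        self.cnn = nn.Sequential(*layers)
        self.to_z = nn.LazyLinear(z_dim)       # flattens the CNN output to z
        self.gru = nn.GRU(z_dim, h_dim, batch_first=True)

    def forward(self, p_seq):                  # p_seq: (batch, w, 200)
        b, w, n = p_seq.shape
        feat = self.cnn(p_seq.reshape(b * w, 1, n))
        z = self.to_z(feat.flatten(1)).reshape(b, w, -1)
        _, h = self.gru(z)                     # h: (1, batch, 64)
        return z, h.squeeze(0)
```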
During pretraining, a network with two linear layers maps h_t to ẑ_t+1 and ẑ_t+2. The network is trained by maximizing the cosine similarity between z_t+1 and ẑ_t+1, and z_t+2 and ẑ_t+2, as defined below,
ℒ = - 𝔼_t [ 𝐳_𝐭+1·ẑ_𝐭+1/‖𝐳_𝐭+1‖‖ẑ_𝐭+1‖ + 𝐳_𝐭+2·ẑ_𝐭+2/‖𝐳_𝐭+2‖‖ẑ_𝐭+2‖]
This process is similar to contrastive predictive coding <cit.>, which enables the agent to learn valuable knowledge by trying to forecast future information.
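The pretraining objective can be written compactly as a negative mean cosine similarity. In the sketch below, head is the two-linear-layer predictor mentioned above, assumed to output the two predicted spatial features concatenated along the last dimension.

```python
import torch
import torch.nn.functional as F

def pretrain_loss(h_t, z_next, z_next2, head):
    """Negative mean cosine similarity between predicted and true future features."""
    z_hat1, z_hat2 = head(h_t).chunk(2, dim=-1)   # head output size assumed 2 * z_dim
    sim = F.cosine_similarity(z_hat1, z_next, dim=-1) + \
          F.cosine_similarity(z_hat2, z_next2, dim=-1)
    return -sim.mean()
```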
After pretraining, we anticipate the knowledge about surrounding fluid environment is well represented by the feature vector h which can then be used in other tasks. For example, we can simply connect a network with linear layers to this feature vector to make predictions of flow quantities. Or we can use this feature vector as a state variable of given reinforcement learning model to optimize the performance of an intelligent agent.
§.§ Reinforcement learning
A reinforcement learning problem is typically modeled as a Markov Decision Process (MDP). In an MDP, an agent interacts with an environment by taking actions and receiving feedback in the form of rewards. At each time step t, the agent selects an action a_t ∼π_x (a|s_t) from a policy π. This policy is parameterized by x. Given an initial state s_0 ∈ S, the agent takes an action and receives a reward r_t+1. The objective is to learn a policy that maximizes the expected cumulative reward 𝔼[∑_t=0^∞γ^t r_t|s_0,π_x], where γ∈ [0,1) is a discount factor that prioritizes earlier rewards over later ones.
In this study, we employ the Proximal Policy Optimization (PPO) algorithm, developed by Schulman et al. <cit.>. PPO is an episodic learning method, which means learning process is divided into episodes. Each episode consists of a fixed number (set as 10 in current work) of actions. A key feature of PPO is to use Generalized Advantage Estimation (GAE) to reduce variance in the advantage estimates, which stabilizes training process. Additionally, PPO employs a clipping strategy to constrain policy updates, which ensures that the new policy does not deviate excessively from the old one, and thus a balance between exploration and exploitation is maintained. The PPO algorithm uses two neural networks: a policy network which decides the next action, and a value network that estimates the expected cumulative reward. Both networks include perceptual layers to process input data, followed by a fully connected layer to generate action or estimated advantage. In the training process, the learning rates for the policy network and for the value network are set as 0.0001 and 0.001, respectively.
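For completeness, the clipped surrogate objective at the core of PPO is reproduced below in PyTorch form; the clipping parameter of 0.2 is the common default and is not taken from the paper.

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate objective used by PPO (to be minimized)."""
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```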
§ RESULTS AND DISCUSSION
§.§ Pretraining
In the unsupervised pretraining process, the upstream obstacle cylinder oscillates randomly along the y-direction in the range of -1D to 1D for 5000 seconds, while surface pressure of the downstream agent cylinder is sampled with a frequency of 10Hz. During pretraining, the perception network extracts dynamic features from past pressure data and uses them to predict spatial features z for the next two time steps, as illustrated in figure <ref>.
§.§ Obstacle Perception
Obstacle perception is a common task that could be encountered by embodied agents navigating in fluid environments. Disturbances originating from upstream obstacles propagate downstream, which can be perceived by downstream agents and used to estimate obstacle positions. In the current study, we design a supervised learning task to investigate obstacle perception ability of a pretrained agent. In this supervised learning task, the downstream agent is trained to reconstruct the disturbance trajectory of the upstream cylinder using surface pressure information of the agent cylinder. We build a training set by annotating the positions of the upstream cylinder over the 5000-second pretraining data set. Considering the time-delay of disturbance propagation, we use a 10-second window of pressure information to estimate the position of the obstacle cylinder 10 seconds ago.
For comparison, we carry out the same training process on agents both with and without pretraining. The network structures for both agents are identical, but the weights of the non-pretrained agent are randomly initialized. The non-pretrained model serves as the baseline model. After the obstacle perception training, we evaluate performance of both agents on four different test data sets. In test set 1, the obstacle cylinder oscillates in the same random manner as in the training set, but the data sets are different. Test set 2 features intermittent motion with a 5-second pause at the end of each oscillation. Test set 3 involves sinusoidal motion restricted at one side of the flow region. In test set 4, the upstream obstacle cylinder is kept still at the centerline of the flow field.
In the training process, the errors for both training and test sets are calculated at each training step. The test error is calculated as the average of all 4 tests. Loss curves for both agents (pretrained and baseline) are plotted in figure <ref>. Figure <ref> shows that both agents perform similarly on the training set, while their performance on the test set differs significantly. The pretrained model achieves a minimum test set error of 0.0432, while the minimum error for the baseline model is 0.0696, which is 70% higher. Comparisons between perceived and real trajectories of the obstacle cylinder are shown in figure <ref>. Both agents perform well on test set 1 (figure <ref>), since the obstacle cylinder oscillates in the same random manner in this test as it does in the training process. However, on test sets 2, 3, and 4 (figures <ref>, <ref>, and <ref>), the pretrained agent performs much better in perceiving the trajectory of obstacle cylinder. This demonstrates that the agent exhibits enhanced generalization capabilities in new scenarios after the pretraining process.
§.§ Reinforcement Learning for drag reduction
In the reinforcement learning task, the upstream obstacle cylinder is kept still. A Kármán vortex street is generated from the cylinder surface and sheds downstream with a Strouhal number of St=fD/U=0.167.
Meanwhile, the agent cylinder is allowed to move in the y-direction in the range of -1D to 1D, and it keeps optimizing its motion trajectory in the vortical wake flow to minimize the drag exerted on it.
In the RL training process, the agent identifies the instant dynamic feature of the pressure time series over the past 10-second window and takes it as its state. This window length ensures that sufficient information is included in the state variable. The agent's action variable has two components: the next position of the cylinder a_pos and the average velocity of the motion a_vel. We require the agent cylinder to cross the centerline of the flow field during each action. The reward function is defined as the negative value of the average drag induced by the current action. In this study, a_pos ranges from -1 to 1, and a_vel ranges from 0.2 to 0.4. In our setting, actions are sampled from a normal distribution, with the variance decreasing independently as the episodes progress.
Figure <ref> presents the learning curves of the pretrained and baseline agents. The solid lines represent the mean values of five repeated RL training processes, while the shaded areas correspond to one standard deviation. As shown in figure <ref>, initially the average drag coefficient of the agent cylinder with random motion is about 0.57. After 100 episodes of training, the pretrained agent significantly reduces its drag coefficient to a value below 0.37. In contrast, the baseline agent is trapped in the complexity of the problem and fails to find a motion pattern to reduce the drag acting on it.
Performance evolution of pretrained agent can be better illustrated by flow field snapshots taken in the RL training process. Figure <ref> shows flow field snapshots captured at 4 continuous time instants in the early stage of RL training process on the pretrained agent. The time interval between snapshots is 2 seconds. As shown in figure <ref>, in the early stage of training, the agent cylinder oscillates aimlessly in y-direction and it keeps colliding with the shedding vortices from upstream. The breakdown of shedding vortices induces additional drag force on the agent cylinder. However, through tens of episodes of trial-and-error, the agent is able to adjust its action strategy using pretrained knowledge. As depicted in flow field snapshots obtained from the later stage of training process (figure <ref>), the agent cylinder chooses to move through the gaps between vortices to avoid direct collision with shedding vortices, which significantly reduces the drag force exerted on the cylinder.
§.§ Sensitivity Analysis
Previous sections have shown that the agent with pretraining outperforms the baseline agent in both obstacle perception and reinforcement learning tasks. To further analyze this performance difference, we compute the sensitivity of the dynamic feature h_t with respect to each surface pressure input and plot the distribution of sensitivity values over the agent cylinder surface in figure <ref>. This sensitivity distribution reveals which areas of the surface pressure information on the agent cylinder the perception network focuses on. Before pretraining, with initially randomized parameters, the perception network's understanding of the surrounding fluid environment is chaotic and lacks a focused region (figure <ref>). After pretraining, the perception network adjusts its concentration to the upstream side of the cylinder which indicates this area is important in the pretraining task—the prediction of future spatial features of surface pressure distribution (figure <ref>).
In the following specific tasks, the agent further adjusts parameters of the perception network to fit objectives of the tasks. For example, in the obstacle perception task (figure <ref>, <ref>), sensitivity concentration region is shifted from the upstream side of the cylinder to both lateral sides where the shedding vortices might hit the cylinder. In contrast, sensitivity distribution of the baseline agent only focuses on one side of the cylinder, which might result in worse performance in the obstacle perception task. Furthermore, in the reinforcement learning task (figure <ref>, <ref>), the pretrained agent distributes its attention to upstream, downstream and lateral sides of the cylinder. In this complex task, the agent needs to perceive not only the incoming flow but also the pressure variance induced by the lateral motion of the cylinder. On the other hand, the baseline agent mainly focuses on the downstream and slightly on the lateral sides of the cylinder. A lack of information on lateral and upstream sides of the cylinder might lead to the failure of the baseline agent in this drag reduction task.
The sensitivity analysis above shows that the pretraining process enables the agent to adjust the parameters of the perception network to fit new task scenarios better and faster, thus outperforming the agent with a randomly initialized network.
§ CONCLUSION
We develop a perception network that can be used for information compression in fluid environments. The network consists of a convolutional neural network for spatial compression and a GRU network for temporal compression. We propose a pretraining process in which the network predicts the spatial feature variables of the next two time instants.
We have tested the perception network and the pretraining process in a two-cylinder problem. The results show that the pretrained agent cylinder can better perceive the position of the upstream obstacle cylinder. Moreover, in the subsequent reinforcement learning task, the baseline agent fails to develop an effective strategy for drag reduction. On the other hand, through trial and error, the pretrained agent successfully finds a motion strategy to avoid the shedding vortices in the Kármán vortex street, which results in significant drag reduction. The sensitivity analysis indicates that this performance could be due to the better attention distribution of the pretrained agent.
This study demonstrates that the pretraining process with the perception network in fluid environments can significantly improve the agent's performance in subsequent tasks, offering a promising pathway toward a robust and effective solution for real-world, complex, multi-scenario fluid problems.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ DISCLOSURE STATEMENT
The authors report there are no competing interests to declare.
§ VALIDATION OF THE FLUID SOLVER
In the numerical simulations, the time step is chosen so that the maximum Courant number during the computation does not exceed 0.5. A mesh convergence test is carried out to confirm the mesh size: the flow field around two fixed cylinders is computed on both a coarse grid (67,352 elements) and a fine grid (89,400 elements). The time histories of the drag and lift coefficients of the downstream agent cylinder are recorded and plotted in figure <ref>. The results demonstrate spatial convergence on the fine grid, which is used in subsequent simulations.
We validate the fluid solver by computing the flow field around a single cylinder. The computed average drag and maximum lift coefficients at Re=100 are compared with previous literature, as shown in Table <ref>. The comparison of the variation of the Strouhal number with the Reynolds number is shown in Table <ref>.
To further validate the accuracy of the overset mesh technique in simulating the flow around a moving cylinder, additional simulations are performed for an in-line oscillating cylinder in uniform flow at a Reynolds number of 100. The cylinder oscillates parallel to the free stream at a frequency twice the vortex shedding frequency of a fixed cylinder, with an oscillation amplitude of 0.14 times the cylinder diameter D. Table <ref> shows that the present results are consistent with those reported in the earlier literature.
|
http://arxiv.org/abs/2409.03584v1 | 20240905143848 | A multi-scale analysis of the CzrA transcription repressor highlights the allosteric changes induced by metal ion binding | [
"Marta Rigoli",
"Raffaello Potestio",
"Roberto Menichetti"
] | q-bio.BM | [
"q-bio.BM",
"physics.bio-ph"
] |
Computational mOdelling of NanosCalE and bioPhysical sysTems, Istituto Italiano di Tecnologia, Via Enrico Melen, 83 I-16152 Genoa, Italy
CIBIO Department, University of Trento, Via Sommarive, 9 I-38123 Trento, Italy
Physics Department, University of Trento, Via Sommarive, 14 I-38123 Trento, Italy
Trento Institute for Fundamental Physics and Applications – INFN TIFPA, Via Sommarive, 14 I-38123 Trento, Italy
raffaello.potestio@unitn.it
Physics Department, University of Trento, Via Sommarive, 14 I-38123 Trento, Italy
Trento Institute for Fundamental Physics and Applications – INFN TIFPA, Via Sommarive, 14 I-38123 Trento, Italy
§ ABSTRACT
Allosteric regulation is a widespread strategy employed by several proteins to transduce chemical signals and perform biological functions. Metal sensor proteins are exemplary in this respect, e.g., in that they selectively bind and unbind DNA depending on the state of a distal ion coordination site. In this work, we carry out an investigation of the structural and mechanical properties of the CzrA transcription repressor through the analysis of microsecond-long molecular dynamics (MD) trajectories; the latter are processed through the mapping entropy optimisation workflow (MEOW), a recently developed information-theoretical method that highlights, in an unsupervised manner, residues of particular mechanical, functional, and biological importance. This approach allows us to unveil how differences in the properties of the molecule are controlled by the state of the zinc coordination site, with particular attention to the DNA binding region. These changes correlate with a redistribution of the conformational variability of the residues throughout the molecule, in spite of an overall consistency of its architecture in the two (ion-bound and free) coordination states. The results of this work corroborate previous studies, provide novel insight into the fine details of the mechanics of CzrA, and showcase the MEOW approach as a novel instrument for the study of allosteric regulation and other processes in proteins through the analysis of plain MD simulations.
A multi-scale analysis of the CzrA transcription repressor
highlights the allosteric changes induced by metal ion binding
Roberto Menichetti
September 9, 2024
==========================================================================================================================
§ INTRODUCTION
The ArsR/SmtB family of prokaryotic metal sensor proteins is the largest and most functionally diverse metalloregulatory protein group, making it also the most extensively investigated one <cit.>. It includes molecules that bind a large variety of metal ions, such as Zn(II), Ni(II), and Co(II), as well as As(III), Cd(II) and Pb(II). ArsR/SmtB transcription factors regulate genes that are responsible for detoxifying the cytosol from metal ions in excess. When they bind to DNA, they repress the transcription of downstream genes, while the coordination with metals is responsible for transcriptional derepression as it induces a transition to a low-affinity state of the protein, which in turn results in the dissociation from DNA <cit.>. Their behaviour as mechanical switches operated through metal ion binding thus makes them key elements of the cell maintenance apparatus <cit.>, as well as a commonly employed model to understand metal-operated allosteric regulatory pathways <cit.>.
In the ArsR/SmtB group one finds CzrA, a transcription repressor regulated by the concentration of Zn(II) and Co(II), which has been used as testbed for the investigation of zinc homeostasis in cell biology <cit.>. An allosteric mechanism is thought to be responsible for the lower affinity of the CzrA dimer for the DNA CzrO operator upon Zn(II) ion binding <cit.>, this hypothesis being supported by structural studies with both experimental (e.g. NMR <cit.>) and computational <cit.> techniques.
In these works, it was demonstrated that zinc coordination remodulates the internal dynamics with respect to the apo state, lowering the protein's binding affinity towards the DNA substrate while largely preserving its overall architecture; in contrast, an appreciable structural rearrangement with respect to the apo form is visible when the molecule is in complex with the DNA substrate, in which case the protein is found in a “bent” conformation <cit.>. Fig. <ref> illustrates the apo and holo (i.e. zinc-bound) forms of CzrA.
The aforementioned studies described in detail the structural organisation of CzrA and provided a convincing picture of the local and global changes that occur upon binding with the DNA and/or the zinc ions; in particular, the computational investigation carried out by Chakravorty and coworkers <cit.> has demonstrated that relevant insight can be gained from molecular dynamics (MD) simulations of this molecule. In the present work, we conducted an in silico study of CzrA which builds upon these results and furthers them, with a focus on the differences that can be observed between the apo and the zinc-bound states of the protein in the absence of the DNA substrate.
More specifically, microsecond-long, all-atom molecular dynamics simulations of CzrA in the apo and holo (zinc-bound) conformations were here performed and analysed through the mapping entropy optimisation workflow (MEOW) <cit.>; the latter is an information-theoretic method recently developed by some of us—and freely available within the EXCOGITO software suite <cit.>—that aims at highlighting subsets of atoms that play a key role in the collective behaviour of a molecular system.
Following MEOW, CzrA was inspected through the lenses of its maximally-informative reduced representations, namely low-resolution, or coarse-grained, descriptions of the protein that, despite a reduction in the number of degrees of freedom employed to observe the system, are able to retain the largest amount of statistical information on the original, all-atom reference. We underline that the theoretical foundations of MEOW are such that
the mechanical and functional insight that emerges about the protein is intrinsically multi-body—i.e. it is not straightforwardly decomposable in terms of the interplay among few constituent atoms <cit.>.
The application of MEOW to the MD simulations of the apo and holo states of CzrA allowed us to highlight the effect of metal coordination on the molecular structure, with the change in the binding state of the zinc pockets reverberating on an alteration of the properties of the whole protein, including the distal DNA binding region.
Interestingly, this alteration is not accompanied by a major rearrangement of the structure, which, as anticipated, only takes place when the protein is bound to the DNA substrate alone <cit.>. The MEOW analysis is thus shown to be capable of pinpointing key features of the protein's allosteric regulation that occurs upon zinc coordination. These results, which corroborate the current understanding of this molecule's functioning built by previous studies and extend their scope, are instrumental in establishing the MEOW analysis protocol as a novel and enabling tool that can contribute to the investigation of allosteric proteins, perspectively also in the absence of previous information on the substrates and/or binding regions of the system of interest.
In Sec. <ref>, we report the results of the MD simulations of CzrA and their subsequent investigation, while in Sec. <ref> we sum up their implications and provide our concluding remarks. A detailed description of the system setup and the tools employed for the production and analyses of the data are provided in Sec. <ref>.
§ RESULTS AND DISCUSSION
§.§ Structural and dynamical analysis of the trajectories
We performed 1 μs all-atom molecular dynamics simulations of the CzrA transcription repressor in both the apo (PDB code 1R1U) and holo (PDB code 2M30) states of its zinc coordination site, see Sec. <ref> for the associated technical details. To set the stage for the subsequent investigation of the two systems carried out through the MEOW protocol, we will first discuss the outcome of a characterisation of the MD trajectories in terms of their ensemble structural properties, in line with the study performed by Chakravorty and coworkers in Ref. <cit.>.
For each analysed form of CzrA, in Fig. <ref> we display the time series of the root mean square deviation (RMSD) of its C_α atoms calculated with respect to the energy-minimised PDB configuration (panel a) as well as the corresponding distribution (panel b). A qualitative inspection of these plots suggests that, within the investigated timescale of 1 μs, both the apo and holo systems individually explore a single conformational basin—in principle different for each state of the protein—with fairly comparable global fluctuations; this can be deduced by the fact that the two distributions in Fig. <ref>b appear to be unimodal and characterised by similar variances. At the same time, the apo form presents a lower average value of RMSD (namely ∼ 0.15 nm) compared to the one of the holo form (∼ 0.25 nm); this is indicative of the 1R1U crystallographic structure of CzrA being closer to its “equilibrium” conformation than the zinc-bound 2M30 one, albeit by a relatively small amount.
The heuristic picture for the variability of the apo form of CzrA in terms of conformational basins that emerges from our analysis of its RMSD differs from the corresponding one reported in the work of Chakravorty et al. <cit.>, where we make particular reference to their 120 ns simulations performed in the absence of the DNA substrate. Indeed, in Ref. <cit.> the RMSD distribution of the zinc-bound protein was found to be unimodal as in our Fig. <ref>b, while the apo system was observed to explore three distinct states, with two lateral, metastable free energy minima being separated from a central stable one by relatively low barriers. One of the aforementioned metastable basins was further linked to the stable conformation sampled by their simulations of the holo form, thus suggesting the existence of this latter state also in the configurational space accessible to the apo structure. As previously stated, our results instead qualitatively hint at the presence of a single basin associated with each system; it is thus logical to assess the degree of similarity between these two conformational states, investigating to what extent the coordination with the metal ions results in a readjustment of what, within the analysed timescale, appears to be the one equilibrium conformation of apo-CzrA. To this aim, we performed an RMSD-based clustering of the two trajectories by relying on the UPGMA algorithm <cit.>, arbitrarily fixing the distance threshold employed in the analysis of each simulation such that the associated configurations were partitioned in no more than 10 distinguishable macrostates. This resulted in the apo trajectory being divided into 9 clusters, while the holo one was broken down to 7. The most representative cluster for each form of CzrA was then identified, comprising ∼89% and ∼55% of the total number of apo and holo MD frames, respectively, and its central structure extracted; finally, the RMSD between the C_α atoms of these two configurations was calculated, amounting at ∼ 0.11 nm and with the largest discrepancies between the two structures being found on the protein termini. Overall, these results suggest that, in the absence of the DNA substrate and on the timescale explored, the global molecular architecture of CzrA is only marginally affected by the binding with the zinc ions, with the protein fluctuating around an average conformation that is quite compatible between the apo and holo forms <cit.>.
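For readers wishing to reproduce this type of analysis, a minimal sketch of an UPGMA (average-linkage) clustering on a precomputed pairwise C_α RMSD matrix is given below. It is not the actual pipeline used in this work; the function and variable names are hypothetical, and only standard SciPy routines are assumed.

```python
# Minimal sketch (assumptions, not the authors' pipeline): UPGMA clustering of
# MD frames based on the pairwise C-alpha RMSD matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def upgma_clusters(rmsd_matrix, distance_threshold):
    """rmsd_matrix: (n_frames, n_frames) symmetric pairwise C-alpha RMSD (nm)."""
    condensed = squareform(rmsd_matrix, checks=False)  # condensed form for linkage
    tree = linkage(condensed, method="average")        # 'average' linkage == UPGMA
    labels = fcluster(tree, t=distance_threshold, criterion="distance")
    return labels                                      # cluster index per frame

# The threshold would be tuned, as described in the text, until no more than
# ~10 macrostates are obtained; the most populated cluster is then
# np.argmax(np.bincount(labels)).
```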
The next natural step in the analysis is to investigate how the zinc coordination state impacts the protein's fluctuation patterns around such conformations. The structural variability of the two forms of CzrA is thus inspected on a residue basis in Fig. <ref>, where we compare the apo and holo systems' root mean square fluctuations (RMSF) of the C_α atoms calculated over all the MD trajectory frames. The comparison is presented separately for the two chains of the protein, in that slight discrepancies can be observed between their RMSF in both systems; these variations can be ascribed to a structural asymmetry of the two identical monomers composing the molecule, an asymmetry that is already present in the 1R1U and 2M30 PDB configurations and that persists throughout the associated MD simulations. A general trend can nonetheless be appreciated when comparing the two structures, namely that, consistently with what was observed in previous computational and experimental studies <cit.>, the holo system typically features lower values of RMSF compared to the apo one. This reduction in mobility is particularly evident in the stretch between residues 80 and 100 of each monomer—that is, in the region containing the binding sites of the zinc ions—and globally suggests that the molecule displays an increased rigidity upon coordination with the metal. The only significant exception to this feature of the holo state is given by the β wing regions located approximately between residues 70 and 80 of both chains, which wave comparably to how they do in the apo form <cit.>.
Another crucial aspect regarding the configurational variability of the two systems is their capability to explore “open” and “closed” conformations, respectively characterised by low and high binding affinities with the DNA substrate <cit.>. A simple yet useful metric to discriminate such states is the inter-protomer distance between the Ser54 serine residues of each chain <cit.>; relying on this collective coordinate, in their computational study <cit.> Chakravorty and coworkers classified as open those configurations of their MD simulations in which the distance between the two serines was larger than 4.3 nm, and closed those displaying a distance below 4.1 nm. Along their two 120 ns trajectories of apo- and holo-CzrA in the absence of the DNA substrate, both systems were found to sample, on average, an open conformation, and no marked discrepancy was observed between the patterns of their inter-protomer distances, again suggesting the overall structural similarity of the two forms. At the same time, the zinc-bound protein was detected to seldom explore a “flat” conformation characterised by a reduced DNA binding affinity, displaying a serine-serine distance of ∼ 4.8 nm. On the other hand, in their simulation of the apo form they did not observe the closed structure of the molecule that occurs in the quaternary CzrA∙DNA complex, whose serine-serine distance, according to the 2KJB ensemble of NMR structures <cit.>, amounts instead to ∼ 4.0 nm. We underline that, in Ref. <cit.>, closed conformations akin to the experimental ones only appeared in MD simulations of apo-CzrA that were either (i) initiated from the 2KJB structure to investigate the protein's closed-to-open transition; or (ii) performed in the presence of a DNA substrate explicitly bound to the molecule.
Consistently with these analyses, we measured the distance between the two serines in each frame of our MD trajectories for both the apo and holo forms of CzrA. From the time series of the inter-protomer distances presented in Fig. <ref>a, it is possible to appreciate that the behaviour of the two systems is largely compatible; the associated distributions, reported in Fig. <ref>b, indeed exhibit very similar variabilities and average values, where all configurations are characterised by serine distances lying in the range between 4 and 5 nm, and a mean distance of ∼4.4 nm is observed in both forms. These trends are in line with the ones reported in the work of Chakravorty et al. <cit.>, again referencing their MD simulations of apo- and holo-CzrA performed in the absence of DNA. This notwithstanding, the data for the two forms also feature small but noticeable discrepancies, in particular the presence of spikes in the time series that, in the case of the holo system, reach values up to ∼ 4.9 nm, with the molecule adopting a “flat”, less DNA-binding prone conformation—already observed in Ref. <cit.>—due to the coordination with the zinc ions. As for the apo state, we interestingly note that within our analysed timescale the serine-serine distance attains instead values as low as ∼ 4.0 nm, hence in agreement with the experimental NMR structures of the closed conformation of CzrA in complex with DNA <cit.>. This suggests that the protein in the apo state is capable of making excursions to a state of high DNA binding affinity even in the absence of the latter; whether this is sufficient evidence of a binding mechanism uniquely or mainly relying on conformational selection is hard to infer, since the fraction of simulated time spent in the lowest inter-serine distance configuration is rather small, see Fig. <ref>. At this stage, the hypothesis of induced fit seems to be more substantiated.
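A minimal sketch of how the inter-protomer Ser54 distance time series can be extracted from a trajectory is given below, assuming MDAnalysis is used; the file names and segment identifiers are hypothetical, and the open/closed thresholds are those quoted above.

```python
# Minimal sketch (hypothetical file names and segment IDs): Ser54-Ser54
# inter-protomer C-alpha distance used to classify open/closed conformations.
import numpy as np
import MDAnalysis as mda

u = mda.Universe("czra.tpr", "czra_traj.xtc")             # hypothetical inputs
ser_a = u.select_atoms("segid A and resid 54 and name CA")
ser_b = u.select_atoms("segid B and resid 54 and name CA")

distances = []
for ts in u.trajectory:
    # MDAnalysis positions are in Angstrom; convert to nm.
    d = np.linalg.norm(ser_a.positions[0] - ser_b.positions[0]) / 10.0
    distances.append(d)

distances = np.array(distances)
open_frac = np.mean(distances > 4.3)    # "open" criterion quoted in the text (nm)
closed_frac = np.mean(distances < 4.1)  # "closed" criterion quoted in the text (nm)
```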
Summarizing the results gathered thus far, our MD simulations show that the apo and holo forms of the protein are rather similar in terms of the overall architecture of the molecule; in contrast, appreciable differences can be observed in their dynamics, with zinc coordination significantly dampening the conformational variability of CzrA around the equilibrium configuration characteristic of the investigated timescale, as highlighted by the fluctuation patterns of the two structures presented in Fig. <ref>.
This picture is further supported by a coarse but informative analysis of the equilibrated apo and holo structures of CzrA, obtained by relying on the protein structure quasi-rigid domain decomposition (PiSQRD) <cit.> method. Specifically, we performed optimal partitions of the two structures in a range between 2 and 10 domains, where residues are assigned to a given domain so as to maximise the amount of collective dynamics that can be ascribed to inter-domain motion rather than intra-domain strain; that is to say, in an optimal partition the largest possible fraction of the internal dynamics is due to displacements of the domains relative to each other, while the distortion within the domains is minimised.
In Fig. <ref> we compare the subdivisions of the apo (panel a) and holo (panel b) equilibrated conformations of CzrA in 2 domains. One can easily notice the large degree of consistency between them, as well as the fact that in both cases the partition is skewed with respect to the symmetry plane of the protein as a consequence of the aforementioned asymmetric arrangement of the two monomers. Despite the similar architectures and partitions, however, the internal dynamics of the two structures differs. Indeed, the apo state features larger flexibility than the holo one, as it can be noticed from the plot in Fig. <ref>c: here, we report the fraction of essential dynamics that the optimal partition entails—with a higher fraction of captured motion implying a more collective dynamics. Fig. <ref>c displays that, as we subdivide the protein into an increasing number of quasi-rigid domains, the apo form remains more flexible and its dynamics more collective (and hence more easily partitioned) than the holo one up to 5 domains, despite the two structures being characterised by similar decompositions also in these cases (data not shown). Beyond 5 blocks, the curve for the apo state almost flattens; in contrast, the one for the holo state features a slower growth only after 7 domains, indicating that the most relevant collective motions of the zinc-bound system take place below this structural resolution scale <cit.>.
These observations suggest that the coordination with the metal ions does not change appreciably how the protein moves, but rather how much: in fact, while the dynamics-based partition of the structure is essentially the same in the apo and holo forms, in the latter case we observe a modulation of the degree of internal flexibility of the molecule, which becomes stiffer. This result is based on a very simple description of the protein—represented in terms of a coarse-grained elastic network model <cit.>—and its equilibrium dynamics; yet, the differences exhibited by the two states are consistent with the observed behaviour of this molecule, which, upon zinc binding from the apo state, does not manifest major conformational rearrangements but rather changes its internal motions substantially <cit.>.
The conclusion we can draw from these first analyses is that the apo and holo forms of CzrA entail noticeable differences, albeit subtle ones. An accurate assessment can highlight them, as is the case for the equilibrium dynamical features that are appreciably different in the two cases. In the following, we report a novel, complementary manner of investigating CzrA, which proved capable of bringing to light further interesting details about this system.
§.§ Mapping entropy optimisation workflow
Central to the analysis method employed in the following is the concept of mapping entropy S_map <cit.>; this is a measure of the statistical information that is lost when the elements of a dataset are described in terms of a subset of their original features, see Sec. <ref> for a summary of the associated theoretical details. In the context of this work, the dataset consists of the configurations sampled during computer simulations of the protein of interest; these configurations are observed through a low-resolution representation that only considers the positions of a subset of N<n atoms out of the n ones composing the molecule, in what is called a decimation mapping. By discarding constituent units from the analysis—or, more precisely, integrating them out <cit.>—one ignores part of the properties characterising the whole structure; the mapping entropy estimates the “quality” of such reduced representation, in that it quantifies the statistical information that a coarse description of the protein lacks with respect to the all-atom one. Critically, a specific choice of the decimation mapping one employs to inspect the system is associated with a single value of S_map, while the opposite does not necessarily hold.
From the preceding discussion, it follows that the mapping entropy is a function of the selection σ_i, i=1,...,n of atoms employed to inspect the molecule, see Sec. <ref>, where σ_i = 1 if atom i is maintained in the system's description and 0 otherwise, with ∑_i=1^n σ_i=N (note that we exclude hydrogen atoms from the pool of “eligible” ones). One can thus minimise S_map in the space of atom selections <cit.>: this procedure aims at detecting the so-called maximally informative reduced representations of the system, namely those whose overall information content is as close as possible to the one characterising the all-atom reference, despite being a low-dimensional projection of the latter. Given the typically rugged profile of the mapping entropy <cit.>, several optimisation runs should be performed so as to collect a number M_opt of maximally informative representations σ̅_i^(k), k = 1, ⋯ M_opt. From these, the frequentist probability p_i for an atom to be retained in an optimal mapping can be obtained as
p_i = 1/M_opt∑_k=1^M_optσ̅_i^(k),
resulting in a probability value defined on each atom of the molecule; in the following, we will refer to this set of probabilities p_i as the information field associated with the structure. This protocol, which processes in silico simulations of the system of interest to identify the optimal reduced representations of the latter and extract the associated information field, is dubbed mapping entropy optimisation workflow (MEOW), and is implemented for public use in the recently released EXCOGITO software suite presented in Ref. <cit.>. A more detailed description of the approach is given in Sec. <ref>.
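In practice, the expression for p_i above amounts to a column-wise average of the binary optimal mappings; a minimal illustration, with a hypothetical array of optimised mappings, is:

```python
# Illustration of the p_i definition: the information field is the column-wise
# mean of the M_opt binary optimal mappings (one row per optimisation, one
# column per heavy atom).  `optimal_mappings` is a hypothetical (M_opt, n)
# array of 0/1 entries.
import numpy as np

def information_field(optimal_mappings):
    sigma = np.asarray(optimal_mappings, dtype=float)  # shape (M_opt, n)
    return sigma.mean(axis=0)                          # p_i for each atom i
```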
It has been shown that the minimisation of the mapping entropy selects subgroups of atoms whose configuration is a maximally informative proxy for the global state of the whole molecule <cit.>. Moreover, it was observed that the atoms that are most frequently included in optimal mappings—that is, those with high probabilities p_i—are often highly mobile and/or display a strong energetic variability <cit.>; this is rationalised by the fact that a subset of elements is informative about the rest of the system if the configurational state of the selected subgroup strongly correlates with the discarded part. This condition is more easily verified if the retained atoms entail a large conformational variability (albeit not necessarily large-amplitude fluctuations) and interact with the others so strongly that their configuration “dictates” that of the remainder of the molecule <cit.>. Notably, this does not imply that all atoms that are individually mobile and strongly interacting are selected as important: in fact, several such atoms can be ignored in optimal mappings, while few others of them are retained. This is because the kind of information that can be extracted from the MEOW analysis is an intrinsically multi-body feature of the system: indeed, while the probability of a given atom to be retained is, by definition, an atom-specific property, it is derived from the processing of multiple optimal atom subsets. The atoms in these subsets are identified collectively, see Sec. <ref>, and the participation of an atom in a given optimal mapping cannot be deduced from properties of this atom alone (such as, e.g., type, charge, mobility, interaction strength...). In this sense, the probability field p_i only represents a useful and intelligible one-body representation of an otherwise complex, intrinsically multi-body property.
Importantly, the atoms belonging to the optimal group happen to bear a nontrivial significance for the function of the molecule, as previous works have shown that the group of atoms with higher p_i correlates with the biological relevance of the amino acids they belong to <cit.>. It is consequently possible to employ the mapping entropy optimisation workflow with the aim of highlighting and pinpointing atoms and residues that play a critical mechanical, energetic, and functional role in a protein.
In this work, we leverage these features of the MEOW approach to investigate how the properties of the CzrA transcription repressor are influenced by the state of the zinc coordination site. Specifically, we study the system in terms of the information fields derived from the optimal low-resolution representations of its apo and holo structures (see Eq. <ref>), and inspect if and how the residues' retainment probabilities—and hence their functional relevance as predicted by MEOW—modulate upon binding with the metal ions. Building on and expanding the scope of previous applications of the method that only focussed on isolated, unperturbed biomolecules <cit.>, here for the first time we apply MEOW to investigate how environmental changes that take place in a system can result in a shift throughout the molecular structure of what are its biologically relevant regions.
§.§ Application of MEOW to CzrA
To analyse CzrA through the lenses of its maximally informative reduced representations, the MEOW pipeline of EXCOGITO <cit.> was separately applied to both the apo and holo form MD simulations results; for each structure, a total of 10^4 all-atom MD configurations sampled regularly from the simulation trajectories were considered in the calculations, and the number N of retained sites employed to describe the system at low resolution was chosen to be equal to the number of its C_α atoms—that is, 187 and 190 for the apo and holo states, respectively. We performed M_opt=48 independent S_map optimisations for each form of CzrA, resulting in 48 maximally informative mappings σ̅_i^(k) from which the probabilities p_i of each atom to be retained separately in the apo or holo systems—and hence their associated information fields—were calculated via Eq. <ref>.
The histograms of the sets of p_i obtained from the MEOW analysis, shown in Fig. <ref>, highlight that the apo and holo forms of CzrA are roughly compatible in terms of how many (heavy) atoms appear
with a given value of p_i in the pool of optimised reduced representations of the protein. In both cases, the distribution is approximately unimodal, with the most prominent peak being centred around p∼ 0.13. Atoms located to the left of such peak are predominantly excluded from the pool of optimised mappings of the corresponding form of CzrA; conversely, the relatively long tails that extend from the mode of the distributions up to high probability values pertain to atoms that, according to MEOW, need to be retained when describing the system at a coarser level of detail in order to preserve the maximum amount of information about its statistical behaviour. The next natural step in the analysis is thus to investigate how the two ensembles of single-atom probabilities p_i distribute throughout the molecular structure, and if dissimilarities in the MEOW information fields that result from these projections are present due to zinc coordination. To this aim, in Fig. <ref> we separately show the apo and holo forms of CzrA rendered as thin ribbons decorated with beads; each bead is representative of a single amino acid and is located on the position of the most frequently conserved heavy atom in the residue. Beads are further coloured according to the value of their underlying atom's p_i, going from red to blue (through white) in transitioning from low to high probabilities.
An inspection of Fig. <ref> reveals that, in both states, the projection results in a fairly smooth colouring pattern throughout the protein structure, hence with whole regions of the system that are identified by MEOW as being more or less informative. Significant differences are however appreciable between the probability fields associated with the apo and holo states of the molecule. Most notably, we previously discussed how the overall amount of atoms displaying a specific value of retainment probability is approximately compatible in the two protein forms, see Fig. <ref>; critically, Fig. <ref> reveals that, in contrast to the apo case, upon zinc coordination a large number of residues in the protein end up being characterised by a relatively low p_i. As only the most likely conserved heavy atom of each amino acid has been explicitly depicted in Fig. <ref>, this result suggests that the information content of the holo form of CzrA is somewhat concentrated in more localised regions of the molecular structure compared to its apo counterpart. We remind that the analyses performed in Sec. <ref> highlighted that the binding with the metal ions induces a significant reduction in the mobility of the protein around its average conformation, see Fig. <ref>. Interestingly, MEOW hints that such stiffening goes on par with a remodulation of the system's energetic frustration; starting from a quite spread distribution of the latter throughout the molecular structure in the apo case (resulting in rather diffused peaks in the MEOW information field), upon zinc coordination localised regions characterised by a high energetic variability instead appear, which stand out of what is otherwise a rather “silent” background and acquire high relevance as prescribed by the mapping entropy approach. To gain deeper insight into the effects of ion binding onto CzrA, let us now further analyse the discrepancies in the information fields of the two protein forms; in particular, we focus our attention on the regions of the system with a known biological function, namely those involved in the coordination with the metal and the binding with DNA.
The conservation probabilities of the residues belonging to the zinc coordination site of CzrA are displayed in Fig. <ref> separately for the apo and holo states; we recall that this per-residue probability is defined, in each system, as the largest p_i value among the (heavy) atoms belonging to the selected amino acid. Despite the presence of slight differences between the results of the two protein chains due to the structural asymmetry discussed in Sec. <ref>, we observe that the p_i of the zinc coordination residues are generally higher than the mode of the histograms reported in Fig. <ref>; most notably, almost all probability values in the apo case are larger than 0.2, hence being located in the right tail of the associated distribution. This indicates that these residues entail a large information content about the system, a remarkable fact in that, while their functional role as zinc coordination regions is known empirically, the MEOW protocol unveils their importance only starting from raw MD simulations data in an unsupervised manner.
Moreover, we observe that the majority of the binding site residues are retained more frequently in the apo state of the molecule than in the holo one, with MEOW thus highlighting a decrease in their relevance upon coordination with the metal. Such a decrease can be ascribed to the strong interaction of these residues with the zinc ions, which reduces their original mobility (see the RMSF in Fig. <ref>) as well as energetic frustration, thus negatively impacting two pivotal features that in the apo state drove the protocol to preserve these regions in order to minimise the mapping entropy, see Secs. <ref> and <ref>. Consequently, in the holo form these amino acids return a lesser amount of information about the behaviour of the protein as quantified by S_map.
Even more compelling is the MEOW analysis of the DNA binding region of CzrA, for which, opposite to what was observed in the case of the zinc coordination site, the results highlight an increment in the relevance of the associated residues when the molecule is bound to the metal ions. The MEOW information fields of the DNA binding region in the two structures, summarised in Fig. <ref>, indeed show that its amino acids are generally more frequently conserved when CzrA is in complexation with the zinc ions than in the absence of the latter. Also in this case differences are present between the details of the probability patterns of the two protein chains due to their structural asymmetry; this notwithstanding, the overall increase of the information field in going from the apo to the holo state can be appreciated in both units. Notably, while the p_i values for the apo state of the DNA binding region are on average slightly below the corresponding ones of the zinc coordination site (see Fig. <ref>), their boost in the holo state is such that they reach peaks as high as p_i ≃ 0.30-0.37, hence being located in the far right of the tail of the holo distribution reported in Fig. <ref>. This increment in information field values upon zinc coordination takes place in the presence of an appreciable reduction in the amplitude of the DNA binding region's fluctuations <cit.>, see Fig. <ref>. As previously discussed, the minimisation of S_map highlights those residues such that, once their configuration is fixed, the structural and energetic variability of the remainder of the molecule is constrained as much as possible. Hence, an increase of the p_i of the residues in the DNA binding region in the holo state is indicative of the fact that, in spite of a reduced amplitude of their mobility, they acquire greater energetic frustration, and hence greater MEOW relevance, in what is a more conformationally restrained state of the molecule.
In summary, the application of the MEOW approach to the configurations sampled in the MD simulations of the apo and holo forms of CzrA has shown important differences between them, which can elude a point-wise analysis of the structural and dynamical features of the corresponding residues.
§ CONCLUSIONS
Allosteric regulation is one of the most relevant transduction mechanisms at the molecular level <cit.>. The system-wide modifications that one or few atoms, interacting in very local regions, induce on the whole protein make allostery an exquisitely multi-scale process, whose versatility and consequent ubiquity are paired by the complexity of its investigation. Ion binding in metal sensor proteins is a brilliant example of the exploitation of this multi-scale strategy, in which the affinity of the molecule to the DNA substrate is modulated through the control of the distal ion coordination site <cit.>.
In this work, we have addressed the study of the ArsR/SmtB transcription repressor factor CzrA. This is a metal-sensor protein, in which the coordination with zinc ions determines large-scale alterations of the system's structural and dynamic properties, further leading to a state of low affinity towards the DNA substrate <cit.>. Specifically, we focused on the differences between the apo and holo (zinc-bound) states of the protein in the absence of the DNA, analysing μs-long all-atom molecular dynamics simulations of these two systems through well-established techniques as well as more recent approaches.
First, we studied the consequence of ion binding on the structural arrangement of the protein as well as on its large-scale flexibility. Our analyses have shown virtually no discrepancies between the protein's overall architecture in the presence or absence of the zinc ions; indeed, the average conformation of the molecule in the apo and holo states differ by as little as 0.11 nm.
This notwithstanding, the occasional excursions to more “closed” configurations that the protein was observed to undergo in the apo form, as well as the rare transitions to more “open” states that appeared in the presence of the ions, suggest a tendency of the holo state to display a reduced affinity towards the DNA substrate compared to its apo counterpart. Further inspection of the trajectories has highlighted subtle yet marked changes in the equilibrium dynamics of the protein that occur upon coordination with the metal; these are quite evident in the local conformational variability of the molecule, and most prominently consist in an appreciable reduction of its overall flexibility when in the zinc-bound state.
These results are consistent with, and corroborate previous computational <cit.> as well as experimental works <cit.>; in particular, in the former, through the analysis of MD simulations of the apo and holo systems it was shown that the most prominent differences between these two states of the protein, rather than in their structural properties, are to be observed in their dynamics, with the DNA-bound state being the only one in which a major conformational rearrangement of the molecule occurs. Our observations support this picture through the analysis of more extensive simulations.
Additionally, it was suggested in various experimental works <cit.> that the allosteric mechanism employed by CzrA does not involve important distortions in the overall organization and architecture of the molecule, but rather relies on a diffused redistribution of the residue-specific conformational entropy of the system in going from the apo to the holo state. This hypothesis found further consistency in the analysis we carried out in this work by relying on the recently-developed mapping entropy optimisation workflow <cit.>. MEOW analyzes an MD trajectory of a protein to identify optimal low-resolution representations, or optimal mappings, of the system; these are particular subsets of the molecule's atoms, such that the amount of information about the conformational space they sample is as close as possible to the one associated to the full protein.
From the atom-wise probability of being part of an optimal mapping, one can construct what we refer to as the protein's information field, which assigns to each atom a measure of its relevance in what is a synthetic yet informative description of the molecule. Most importantly, the information field returned by the MEOW protocol was previously found to single out regions of particular structural, energetic, and functional relevance of a system <cit.>; in this work, we have for the first time exploited this feature to investigate how such relevance is affected by changes in the system's environmental conditions.
The MEOW analysis of CzrA has shown that, by coordinating with the ion substrates, the information pattern of the molecule redistributes throughout the structure, and concentrates in rather localized areas that are hence characterised by an increased energy frustration. Such redistribution is particularly interesting when one looks in detail at the regions of the molecule playing a key biological role, namely its binding sites; focusing on the zinc coordination site, we found that the informativeness associated with its residues decreases when going from the apo to the holo state. This change is suggestive of the fact that the presence of the ions somehow “deactivates” the zinc coordination site. At the same time, the MEOW approach highlighted an opposite behaviour in the residues of the DNA binding region depending on the state of the distal zinc coordination site. Specifically, we have observed that the level of informativeness of the former is larger in the holo conformation than in the apo one, thus implying their “activation” in the presence of the metal ions—but in absence of the DNA substrate.
As anticipated, the remodulation of the information field of the molecule upon zinc binding can be interpreted in terms of an overall change in its structural and energetic frustration. In fact, we observed that the presence of the zinc ions determines an overall increase in the stiffness of the protein, which affects the binding sites of both substrates making them less mobile; this notwithstanding, the informativeness of the DNA binding region increases in the holo form, thereby signaling a greater conformational and energetic variability in spite of a reduced amplitude of its fluctuations. This variability is instrumental in the operation of the allosteric mechanism that allows the protein to release from the DNA filament; notably, such a picture is consistent with the previously observed entropy remodulation that the molecule undergoes upon zinc binding <cit.>.
We stress here that the MEOW protocol has highlighted the change in the properties of the residues involved in the interaction with the DNA in the absence of this substrate, thereby providing novel and complementary insight into the allosteric mechanism employed by CzrA. This makes the MEOW approach a promising candidate to complement and expand the scope of the already available methods for the study of binding sites in proteins <cit.>.
In conclusion, these results contribute interesting information about the behaviour of CzrA, and demonstrate, in addition to previous applications <cit.>, the validity of the MEOW analysis pipeline to characterize the properties of key residues in a protein and rationalise its global behaviour also through the comparison of the information fields across different states of the molecule. We thus foresee fruitful applications of the proposed approach to gain insight into the way proteins perform and modulate their biological function.
§ ACKNOWLEDGMENTS
The authors are indebted to Marco Giulini for technical support and an insightful reading of the manuscript.
RP acknowledges support from ICSC - Centro Nazionale di Ricerca in HPC, Big Data and Quantum Computing, funded by the European Union under NextGenerationEU. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or The European Research Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
Funded by the European Union under NextGenerationEU. PRIN 2022 PNRR Prot. n. P2022MTB7E.
§ DATA AND SOFTWARE AVAILABILITY
Raw data produced and analyzed in this work are freely available on the Zenodo repository https://zenodo.org/records/10700290.
§ AUTHOR CONTRIBUTIONS
MR and RM proposed the study; RP and RM conceived the work plan and proposed the method; MR carried out the simulations; MR and RM carried out the preliminary data analyses. All authors contributed to the analysis and interpretation of the data. All authors drafted the paper, reviewed the results, and approved the final version of the manuscript.
§ METHODS
§.§ Protein systems setup and simulations
In this work, MD simulations of CzrA were performed starting from the experimentally resolved structures of its apo and holo forms. The 3d configuration of the apo system (PDB code 1R1U) was determined through X-ray diffraction with a resolution of 2 Å <cit.>, while the structure of the holo state (PDB code 2M30) consists of a complex formed by the CzrA protein and two Zn^2+ ions, and was determined via NMR spectroscopy refined through quantum mechanics/molecular mechanics (QM/MM) simulations <cit.>. In silico, we individually solvated the two systems in water using the TIP3P model <cit.>, adding Na and Cl ions to neutralise the total charge and to mimic the physiological salt concentration (150 mM). The simulation box for the apo and holo forms was chosen of dodecahedron shape, with the protein having a minimum distance of 1.1 nm from the box edge in both cases. After energy minimisation and proper equilibration, the production runs of each system were then performed in the NPT ensemble at 300 K and 1 bar through the stochastic velocity-rescale thermostat <cit.> and the Parrinello-Rahman barostat <cit.>, respectively with a temperature coupling constant τ_t=0.1 ps and a pressure coupling constant τ_p = 2 ps. The integration time step was set to 2 fs, the selected integrator was leap-frog, and holonomic constraints were accounted for by means of the LINCS algorithm <cit.>. The two forms of CzrA were simulated for 1 μs of production run each by relying on the Amber14sb <cit.> force field. Finally, we included position restraints in the setup of the holo system to mimic the binding of the two zinc ions to their respective coordination sites and maintain them in the correct position. The atoms involved in the interaction with the first Zn^2+ ion are ASP 84.B OD2, HIS 86.B ND1, HIS 97.A ND1, HIS 100.A NE2. As for the second zinc atom, the restraints were put between the ion and ASP 84.A OD2, HIS 86.A ND1, HIS 97.B ND1, HIS 100.B NE2. The functional form of the restraining potential was
V(r_ij) =
    0                                                   for r_ij < r_ij^0,
    (1/2) k_dr (r_ij - r_ij^0)^2                        for r_ij^0 ≤ r_ij < r^1,
    (1/2) k_dr (r^1 - r_ij^0) (2 r_ij - r^1 - r_ij^0)   for r_ij ≥ r^1,
where r_ij is the distance between the two atoms involved in the interaction, r_ij^0 is the corresponding distance measured in the experimental 2M30 structure, r^1 is the maximum length set to 0.35 nm, and k_dr is set to 2 · 10^3 kJ/mol/nm^2.
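For completeness, a direct transcription of this piecewise potential into code is given below; it is only a sketch for checking units and continuity, not part of the simulation input, and the default parameter values are those stated above.

```python
# Direct transcription of the restraining potential V(r_ij) defined above:
# harmonic between r_ij^0 and r^1 = 0.35 nm, linearised beyond r^1.
def restraint_potential(r_ij, r0, r1=0.35, k_dr=2.0e3):
    """r_ij, r0, r1 in nm; k_dr in kJ/mol/nm^2; returns energy in kJ/mol."""
    if r_ij < r0:
        return 0.0
    if r_ij < r1:
        return 0.5 * k_dr * (r_ij - r0) ** 2
    # Linear continuation: matches both the value and the slope of the
    # harmonic branch at r_ij = r1.
    return 0.5 * k_dr * (r1 - r0) * (2.0 * r_ij - r1 - r0)
```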
§.§ Mapping entropy and MEOW analysis tool
The mapping entropy optimisation workflow is a tool developed by Giulini and coworkers that aims at identifying functional regions of a biomolecular system only starting from raw MD simulation data <cit.>. The protocol relies on the concept of coarse-graining <cit.>, interpreted as the analysis of the biomolecule's configurational space via the projection of the latter on a restricted subset of the original degrees of freedom. In general, such a procedure entails a loss of information on the system's statistical properties, which critically depends on the choice of the projection and is quantifiable in terms of the mapping entropy S_map to be briefly summarised in the following <cit.>; the aim of MEOW is to detect the low-resolution representations for which the aforementioned information loss—and accordingly the mapping entropy—is as small as possible, and are hence maximally informative about the statistical behaviour of the system despite a coarsening of its structure.
The type of coarse-graining we rely on in this work is a decimation, in which the low-resolution representation in terms of which the molecule is inspected is obtained by selecting a subset of N atoms out of its n constituent ones. This can be expressed via a set of binary variables σ={σ_i}, i=1,...,n, where σ_i =1 or 0 depending on whether atom i is maintained or neglected in the system's description, and ∑_i=1^n σ_i=N. A specific choice of which atoms are retained constitutes a mapping; in the formal theory of coarse-graining, the latter is defined through a projection operator M that converts a high-resolution configuration r={ r_i} of the system—where r_i are the three-dimensional Cartesian coordinates of atom i—into a coarser configuration, or macrostate R={ R_I}, I = 1,...,N < n given in terms of the fewer atoms that were selected <cit.>, with
M_I( r) = R_I = ∑_i=1^n c_Ii r_i.
In the case of a decimated representation, in Eq. <ref> c_Ii = 1 if atom i is retained (and thus mapped onto the coarse-grained site I), and 0 otherwise.
Critically, the filtered version of the system one obtains through such coarse-graining procedure is characterised by an information loss on what are the statistical properties of the reference, high-resolution structure; for a specific selection of atoms employed to describe the biomolecule, this loss can be measured via the associated mapping entropy S_map defined as <cit.>
S_map(σ) = S_map(𝐌) = k_B∫ d r p_r( r) ln[ p_r( r)/p̅_r( r)].
We observe that the mapping entropy is a (non-negative) Kullback-Leibler divergence between two probability distributions <cit.>; the first, p_r( r), is the one characterising the original high-resolution system, and in the case of thermal equilibrium it is given by the Boltzmann measure:
p_r( r)=1/Z e^-β u( r),
Z =∫ d r e^-β u( r),
where β=1/k_BT, u( r) is the microscopic potential energy of the system, and Z its (configurational) canonical partition function. The probability p̅_r( r) in Eq. <ref>, on the other hand, reads <cit.>
p̅_r( r) = p_R( M( r))/Ω_1( M( r))
and represents the statistical description of the all-atom system that one would obtain in an attempt to reconstruct the properties of the latter only starting from a knowledge of its low-resolution, filtered counterpart, thus reverting the coarse-graining procedure. In Eq. <ref>, p_R( R) is the probability to sample the CG macrostate R, given by
p_R( R)=∫ d r p_r( r)δ( M( r) - R),
while Ω_1( R) is defined as
Ω_1( R) = ∫ d r δ( M( r) - R).
Ω_1( R) indicates the degeneracy of the macrostate, that is, the number of microstates r that map onto the same CG macrostate R. Eqs. <ref>-<ref> clarify the origin of the loss of information generated by coarse-graining: indeed, all the microscopic configurations that enter the composition of a specific macrostate become statistically equivalent upon backmapping, with the probability p̅_r that is common to all of them being given by the average of their original probabilities p_r( r). The mapping entropy in Eq. <ref> quantifies this loss globally via a Kullback-Leibler divergence between the reconstructed, smeared all-atom distribution and the “genuine”, detailed one. It is crucial to underline that S_map depends on the selection of atoms σ in a multi-body fashion, simultaneously tethering together all the N constituents that are employed to describe the molecule at a lower resolution. This is a consequence of the probability p_R( R) in Eq. <ref> entering the calculation of the mapping entropy: due to the coarse-graining procedure, p_R( R) is, in fact, intrinsically N-body in nature, not simply factorizable as the product of distributions of lower order even if the original all-atom system comprises, e.g., only pair interaction potentials acting among its constituents. <cit.>.
The aim of MEOW is thus now to determine, among all the possible selections σ of atoms that can be designed to describe the system at a lower resolution, those that retain the largest amount of information about the all-atom reference—and hence minimise the mapping entropy. The first step in such an analysis is, quite naturally, estimating the S_map associated with a specific choice of the CG mapping. To fulfil this task, rather than on Eq. <ref>, in this work we rely on the approximate expression derived by Giulini and coworkers <cit.>, which enables the calculation of S_map only provided a set of all-atom configurations sampled from p_r( r) via, e.g. an MD simulation, as well as the selected CG representations. More specifically, S_map is evaluated as a weighted average, over all CG macrostates 𝐑, of the variance of the atomistic potential energies of all configurations 𝐫 that map onto a specific macrostate. We refer the interested reader to Ref. <cit.> for the theoretical details and an in-depth discussion of the algorithmic implementation of the resulting S_map estimation workflow, further reminding that the latter is included in the freely available EXCOGITO software suite described in Ref. <cit.>.
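To make the above estimator more concrete, a heavily simplified sketch is reported below: frames are assumed to have already been assigned to CG macrostates (e.g. by discretising the mapped coordinates), and the macrostate-resolved energy variances are combined with their empirical weights. The grouping scheme is an assumption of this toy version, and the temperature-dependent prefactor of the actual estimator in the cited reference is deliberately omitted; this is not the EXCOGITO implementation.

```python
# Simplified sketch of the central ingredient of the S_map estimator described
# above (not the EXCOGITO code): population-weighted variance of the atomistic
# potential energies within each CG macrostate.
import numpy as np

def smap_weighted_variance(energies, cg_labels):
    """energies: per-frame atomistic potential energies (kJ/mol);
    cg_labels: macrostate index assigned to each frame."""
    energies = np.asarray(energies, dtype=float)
    labels = np.asarray(cg_labels)
    total = 0.0
    for state in np.unique(labels):
        e = energies[labels == state]
        weight = len(e) / len(energies)   # empirical probability of the macrostate
        total += weight * np.var(e)       # energy variance within the macrostate
    # The actual estimator rescales this quantity by a temperature-dependent
    # prefactor (see the cited reference); that factor is omitted here.
    return total
```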
With this ingredient at hand, we now move to the identification of the maximally informative reduced representations of the system that minimise the mapping entropy. In principle, this could be achieved by exhaustively probing all the possible selections σ of subsets of N atoms within the molecular structure, ranking them according to their value of S_map. The size of the CG mapping space is, however, overwhelmingly large (if one retains, e.g. 100 atoms out of a protein of 100 amino acids, the number of possible selections to be probed in this scheme would be ∼ 10^272) preventing the tackling of the optimisation problem via simple enumeration. S_map is thus minimised in the space of possible CG representations of the system with N retained atoms through a Monte Carlo simulated annealing protocol <cit.>, see Refs. <cit.> for all technical details. As the S_map landscape is likely rugged and prone to have a large number of more or less degenerate local minima <cit.>, several optimisations are further performed to gain a robust and informative idea about the landscape of minimum information loss for the system at hand, resulting in a pool of M_opt different optimal mappings σ̅^(k), k = 1, ⋯ M_opt. These are employed as described in Eq. <ref> to compute the frequentistic probability with which a given atom is found in a CG representation of minimal mapping entropy—the so-called information field. As it was shown in various works <cit.>, this protocol is found to be able to provide an ensemble of low-resolution descriptions of the system retaining atoms that are important from the biological point of view, e.g. atoms involved in biochemical functions such as substrate binding or catalysis.
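A schematic, toy version of such a simulated-annealing search over decimation mappings is sketched below; the cooling schedule, the move set and the parameter values are illustrative choices and do not correspond to those of the EXCOGITO code.

```python
# Toy Monte Carlo simulated annealing over decimation mappings: at each step
# one retained atom is swapped with one discarded atom, and the move is
# accepted with a Metropolis criterion on the change in S_map.
import numpy as np

def optimise_mapping(smap_of, n_atoms, n_retained, n_steps=20000,
                     t_start=1.0, t_end=1e-3, rng=np.random.default_rng(0)):
    """smap_of(mapping) -> float evaluates the mapping entropy of a 0/1 array."""
    mapping = np.zeros(n_atoms, dtype=int)
    mapping[rng.choice(n_atoms, size=n_retained, replace=False)] = 1
    current = smap_of(mapping)
    for step in range(n_steps):
        temp = t_start * (t_end / t_start) ** (step / n_steps)  # geometric cooling
        trial = mapping.copy()
        i = rng.choice(np.flatnonzero(trial == 1))  # retained atom to drop
        j = rng.choice(np.flatnonzero(trial == 0))  # discarded atom to add
        trial[i], trial[j] = 0, 1
        value = smap_of(trial)
        if value < current or rng.random() < np.exp(-(value - current) / temp):
            mapping, current = trial, value
    return mapping, current
```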
|
http://arxiv.org/abs/2409.03415v1 | 20240905110338 | Anisotropic spin filtering by an altermagnetic barrier in magnetic tunnel junctions | [
"Boyuan Chi",
"Leina Jiang",
"Yu Zhu",
"Guoqiang Yu",
"Caihua Wan",
"Xiufeng Han"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100190, China
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100190, China
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100190, China
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100190, China
Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100190, China
xfhan@iphy.ac.cn
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100190, China
Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China
Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
§ ABSTRACT
The spin filtering effect, in which oppositely spin-polarized electrons experience distinct decay lengths in a magnetic barrier, generally occurs in ferromagnetic (FM) insulators or semiconductors. With the rise of altermagnetic (ALM) materials, which exhibit a capability of spin-polarizing electrons similar to that of ferromagnets, it is a natural question whether ALM insulators or semiconductors can also act as unique spin-filtering barriers. Here, through first-principles calculations, we investigated the complex band structure of the ALM insulator FeF_2 and found that it possesses an anisotropic spin filtering effect: along the [001] direction of FeF_2, a current remains spin-neutral but has locally nonvanishing spin polarizations in momentum space; moreover, along the [110] direction of FeF_2, a current is globally spin-polarized owing to the different attenuation lengths of oppositely spin-polarized electrons. Leveraging this anisotropic spin filtering effect, we designed two types of MTJs with an ALM barrier: ALM electrode/ALM insulator barrier/non-magnetic (NM) electrode and FM electrode/ALM insulator barrier/NM electrode, using RuO_2(001)/FeF_2/IrO_2 and CrO_2(110)/FeF_2/IrO_2 as the corresponding prototypes, respectively. We found that these two proposed MTJs exhibit tunneling magnetoresistance (TMR) ratios of 216% and 3956%, achieved by matching the conduction channels of the electrodes with the spin-resolved lowest decay rates of the barrier in momentum space. Our work deepens and generalizes the understanding of the spin filtering effect for the rising ALM insulators and semiconductors, and broadens the applications of AFM spintronics.
Anisotropic spin filtering by an altermagnetic barrier in magnetic tunnel junctions
Xiufeng Han
September 9, 2024
===================================================================================
The magnetic tunnel junction (MTJ), composed of two ferromagnetic (FM) electrodes and a sandwiched insulating barrier, is elementary for spintronic devices <cit.>. The tunneling magnetoresistance (TMR) effect is the core of the MTJ, making it suitable for magnetic random-access memories (MRAMs), magnetic sensors, and other spintronic devices <cit.>. To achieve reliable reading and writing, a higher TMR ratio is imperative. Initially, TMR was regarded as dominantly determined by the spin polarization of the FM electrodes. Therefore, various FM metals with high spin polarization, such as Heusler alloys, have received plenty of attention. Thereafter, the importance of the symmetry filtering of crystallized barriers for tunneling was gradually realized <cit.>. By analyzing the evanescent states, MgO was predicted to exhibit a weak attenuation effect for electrons with the Δ_1 symmetry <cit.>. Consequently, by combining Fe electrodes with MgO, researchers successfully developed MTJs with large TMR ratios <cit.>, manifesting the power of barrier engineering for MTJs.
Recently, the discovery of antiferromagnetic (AFM) materials with spin splitting band structures, i.e., altermagnetic (ALM) materials <cit.>, has attracted researchers to explore their applications in magnetic devices, two-dimensional materials, superconductivity, etc. <cit.>. Focusing on the local spin polarization in momentum space, researchers have designed novel all-antiferromagnetic tunnel junctions (AFMTJs) utilizing two ALM (or spin splitting AFM) electrodes instead of FM electrodes <cit.>. Moreover, the TMR ratio can be further enhanced by matching the distribution of the lowest decay rates of the barrier with that of the conduction channels of the AFM electrodes in reciprocal space <cit.>.
In fact, the momentum-dependent spin splitting of ALM materials appears not only in the real part, as reported in <cit.>, but also in the imaginary part of their band structures, as shown below. Here, by analyzing the complex band structure of the ALM insulator FeF_2, we found a noticeable splitting in the decay rate κ of the evanescent states for the two spin channels. This feature distinguishes ALM insulators from conventional non-magnetic (NM) or trivial AFM insulators, in which the decay rate is the same for opposite spins. The splitting decay rates of evanescent states thereby result in a spin-resolved effective barrier for tunneling electrons with opposite spins, thus allowing ALM insulators to act as spin filters with zero stray field. The generation of a spin filtering effect without producing stray fields is a distinctive property of ALM materials, which makes them applicable to spin- and angle-resolved photoemission spectroscopy (SARPES), enhancing the TMR effect, etc.
Utilizing the spin filtering effect of ALM insulators, it is viable to design non-volatile MTJs with a magnetically compensated barrier. In this study, through first-principles calculations, we demonstrated the anisotropic spin filtering effect of the ALM material FeF_2. Based on this effect, we designed two kinds of MTJ: ALM/ALM insulator barrier/NM and FM/ALM insulator barrier/NM MTJs. Specifically, we used RuO_2(001)/FeF_2/IrO_2 and CrO_2(110)/FeF_2/IrO_2 as the corresponding prototypes to study these two kinds of MTJs. The quantum transport calculations revealed that the two MTJs exhibit TMR ratios of 216% and 3956%, respectively. The TMR effect in both arises from the concerted play of the spin filtering effect of FeF_2 and the spin polarization of the FM and ALM electrodes, CrO_2 and RuO_2. The computational details are presented in the Supplementary Materials <cit.>. Because an NM electrode and an ALM barrier with compensated magnetization are used, neither MTJ requires additional pinning structures, simplifying the device structures. Furthermore, the former MTJ benefits from zero stray field, while the latter can be applied in magnetic sensor applications.
In general, the transmission coefficient T_σ(k_∥) is usually factorized as <cit.>:
T_σ(k_∥)=t_L^σ(k_∥)exp[-2κ(k_∥)d]t_R^σ(k_∥)
where t_L^σ(k_∥) and t_R^σ(k_∥) are the probabilities for an electron with spin σ to transmit through the left and right electrode/barrier interfaces, respectively. κ(k_∥) is the lowest decay rate in the barrier and d is the barrier width.
This equation applies to NM or trivial AFM barriers with spin degeneracy, such as MgO, NiO, and CoO. However, for FM insulators, the lowest decay rate κ is spin-dependent. Thus, Eq. (1) should be rewritten as:
T_σ(k_∥)=t_L^σ(k_∥)exp[-2κ_σ(k_∥)d]t_R^σ(k_∥)
The spin-dependent κ_σ(k_∥) indicates that electrons with opposite spins experience different effective barrier heights during tunneling, and the barrier acts as a spin filter.
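The consequence of a spin-dependent decay rate can be illustrated with a short numerical sketch of the factorized transmission above; the interface transmissions and decay rates below are assumed placeholder values chosen for illustration, not quantities computed for FeF_2.

```python
import numpy as np

# Illustrative, assumed parameters for a single k_par channel (not FeF2 values).
t_L = {"up": 0.6, "down": 0.6}      # left interface transmission probabilities
t_R = {"up": 0.5, "down": 0.5}      # right interface transmission probabilities
kappa = {"up": 0.8, "down": 1.1}    # spin-resolved lowest decay rates (1/Angstrom)

def transmission(spin, d):
    """T_sigma = t_L^sigma * exp(-2 * kappa_sigma * d) * t_R^sigma."""
    return t_L[spin] * np.exp(-2.0 * kappa[spin] * d) * t_R[spin]

for d in (5.0, 10.0, 20.0):         # barrier width in Angstrom
    t_up, t_dn = transmission("up", d), transmission("down", d)
    polarization = (t_up - t_dn) / (t_up + t_dn)
    print(f"d = {d:5.1f} A   current spin polarization = {polarization:.4f}")
```

Because the two spin channels decay at different rates, the polarization of the transmitted current approaches unity as the barrier thickness grows, which is the essence of the spin filtering effect.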
An ALM material is a nontrivial AFM material that exhibits spin splitting in the absence of spin-orbit coupling. Therefore, for ALM insulators such as FeF_2, LaMnO_3, and MnTe, spin splitting occurs along certain high-symmetry directions. This indicates that the spin splitting can also manifest itself in the complex bands along these directions, as shown below, resulting in a spin-dependent κ_σ(k_∥), analogous to the case of FM insulators described by Eq. (2). As a result, when electrons tunnel along these spin-split directions of an ALM insulator, the current will be spin-polarized, and the ALM insulator can thus exhibit the spin filtering effect.
The properties of ALM materials are closely related to their crystalline orientation <cit.>. Electrons undergo different spin filtering effects when tunneling along different crystalline orientations of an ALM insulator. Figs. 1(a) and 1(b) show the crystal-orientation-dependent spin filtering effect when electrons tunnel through the ALM FeF_2. Dashed circles denote areas with small decay rates. As shown in Fig. 1(a), when a current flows along the [001] direction of FeF_2, the total spin polarization remains zero because the two spin channels share the same shape of the decay rate distribution, although the current is locally polarized in momentum space. When a current flows along the [110] direction of FeF_2, the current is globally spin-polarized [Fig. 1(b)]. Therefore, ALM insulators can polarize currents as FM materials do, while still retaining the advantages of zero stray field and robustness against external magnetic fields.
To provide a clearer explanation of the spin filtering effect of ALM insulators, we chose FeF_2 as an example. FeF_2 is an ALM insulator with a collinear AFM magnetic configuration and a rutile structure, as plotted in Fig. 2(a). Thanks to its magnetic space group P4_2'/mnm', FeF_2 exhibits a momentum-dependent spin splitting band structure. From the density of states (DOS) of FeF_2(001) depicted in Fig. 2(c), FeF_2 has zero net magnetization and a 3.5 eV band gap, which is consistent with the experimental study <cit.>. The spin-up and spin-down bands of FeF_2(001) in Fig. 2(d) are degenerate along most high-symmetry lines, but they split along the Γ-M and Z-A directions, reflecting the altermagnetism. Therefore, when electrons tunnel along the Γ-M (Z-A) direction, i.e., the [110] direction of FeF_2, the band gaps of the spin-up and spin-down electrons at this high-symmetry line differ [see Supplementary Materials <cit.> Fig. S1], which leads to a different barrier height felt by oppositely polarized electrons and thus results in the spin filtering effect.
The complex band structure can more accurately illustrate the decay of tunneling electrons inside the barrier. In elastic scattering, the transverse wave vector is conserved, and the longitudinal wave vector k_z can be expressed as k_z=q+iκ, where the real part q corresponds to the real band, and the imaginary part κ corresponds to the wave functions that decay into the bulk <cit.>. As a result, tunneling electrons exhibit exponential decay [exp(-2κ d)] in the barrier. Figs. 3(a) and 3(b) show the case of tunneling along the [001] direction of FeF_2. The three complex bands in Fig. 3(b) correspond to the Γ(0, 0), C(0.15, 0.15) and C'(0.15, -0.15) points. On the one hand, owing to the spin degeneracy at the Γ point, the spin-up and spin-down electrons share a similar complex band, and thus have the same lowest decay rate κ at the Fermi energy. On the other hand, the spin splitting along Γ-M leads to different lowest decay rates for the two spins in the complex band structure. However, for FeF_2 with altermagnetism, the spin of the Γ-M' band is opposite to that of the Γ-M band, resulting in κ_σ(Γ-M)=κ_-σ(Γ-M'). Eventually, the two spins share the same shape of the k_∥-resolved lowest decay rates, producing a spin-neutral current, but with nonzero local spin polarization in momentum space.
Based on this local spin filtering effect in the momentum space, we proposed an MTJ with an ALM metal and an NM metal as electrodes, i.e., ALM/ALM insulator barrier/NM MTJ. As shown in Fig. 1(c), for each spin channel (blue and red), its conductance is proportional to the overlapping between the distribution of the conduction channels in the ALM electrode and the distribution of the lowest decay rates in the barrier. Switching the Néel vector of the ALM electrode, the spin polarization at each k_∥ point in the momentum space will be reversed, and the overlapping area between the conduction channels and the lowest decay rates is also changed, resulting in a magnetoresistance between the parallel (P) and antiparallel (AP) states.
We calculated the RuO_2(001)/FeF_2/IrO_2 MTJ as a prototype to verify our design. RuO_2 is an ALM metal, and IrO_2 is an NM metal, whose conduction channels are shown in Fig. S2. Fig. S3 shows the atomic structure and the transmission coefficient distributions for the P and AP states of this MTJ; it can be seen that the barrier suppresses the transmission coefficients away from the Γ point in the AP state, which leads to a TMR ratio of 216%. This ALM/ALM insulator barrier/NM MTJ can produce the TMR effect with a spin-neutral current and benefits from zero stray field and fast spin dynamics, similar to the AFMTJs <cit.>. This [001] transporting MTJ also shares the disadvantages of AFMTJs, which demand high-quality single crystals and are hard to switch with a magnetic field. To overcome these drawbacks, we shift our focus to the transport configuration along the [110] direction.
Fig. 2(b) shows the atomic and magnetic structures of FeF_2 cleaved from the (110) crystal face. When electrons tunnel along the [110] direction of FeF_2, the distribution of the lowest decay rates follows Fig. 3(c). Since the bands split along the Γ-M direction, the two spins have completely different complex band structures at the Γ̅ point, as shown in Fig. 3(d); this is also true at other k_∥ points, resulting in entirely different spin-resolved distributions of the lowest decay rates. Consequently, as long as the current flows along the [110] direction, it will be globally polarized, and the spin polarization will rapidly increase with the thickness of FeF_2. As a result, the ALM insulator FeF_2(110) exhibits a spin filtering effect just like FM insulators.
Since the spin filtering effect of FeF_2(110) can polarize a current globally, an FM electrode is a suitable choice for detecting, via the TMR effect, the spin polarization of the current flowing through FeF_2(110). As depicted in Fig. 1(d), we designed a kind of MTJ composed of FM and NM electrodes, i.e., the FM/ALM insulator barrier/NM MTJ. The TMR effect can be generated by switching the magnetization of the FM electrode. In this MTJ, only the ALM barrier needs to be single-crystalline for the spin filtering effect. Thus, it is probably easier for experimentalists to grow than the MTJ with both an ALM electrode and an ALM barrier discussed before. Furthermore, this MTJ is free from additional complex pinning structures thanks to the robustness of the ALM reference layer against external fields, and it is easily switched by both electrical and magnetic methods.
Hence, we constructed the CrO_2(110)/FeF_2/IrO_2 MTJ as an example to exhibit the transport properties of the FM/ALM insulator barrier/NM MTJ. CrO_2 is a half-metallic material with 100% spin polarization, whose conduction channel is shown in Fig. S2(c) of the Supplementary Material <cit.>. In fact, CrO_2 and IrO_2 are chosen only because they share the same atomic structure as FeF_2. We recommend replacing CrO_2 and IrO_2 with CoFe (or CoFeB, etc.) and Pt (or Cu, etc.) in experiments, respectively. As shown in Fig. 4(a), the P and AP states are defined by the relative arrangement between the magnetization of CrO_2 and the Néel vector of FeF_2. Because CrO_2 is a half-metal, only the spin-up channel exists in the P state. Switching the magnetization of CrO_2, the MTJ becomes the AP state, where only the spin-down channel contributes to the conductance. For the P state, the higher transmission coefficients appear in the regions where the large conduction channels of the electrodes match the smaller κ_↑(k_∥) of FeF_2(110). By comparison, for the AP state, due to the generally larger κ_↓(k_∥) of FeF_2(110), the transmission coefficients are significantly suppressed. Thus, thanks to the large difference of the lowest decay rates of the barrier between the spin-up and spin-down channels, the transmission coefficient of the P state is one order of magnitude larger than that of the AP state, resulting in a huge TMR ratio of 3956%, which nicely meets the needs of MRAM and sensor applications.
Fig. 4(c) displays the dependence of the total transmissions on energy for the P state (T_P) and AP state (T_AP). It is found that T_P is always greater than T_AP around E_F, resulting in a positive TMR effect. Specifically, the TMR at the Fermi energy (E_F) reaches as high as 3956%. In addition, the TMR ratio remains around 3000% within the range of E_F ± 0.3 eV, indicating that factors such as doping, which shift E_F, have only a small impact on this TMR ratio.
The spin filtering effect is sensitive to the barrier thickness. As shown in Fig. 4(d), we calculated the total transmissions and TMR ratios as a function of the number of FeF_2(110) monolayers. Both T_P and T_AP decrease exponentially with increasing barrier thickness, while, thanks to the spin filtering effect, the TMR ratio increases exponentially, consistent with the simple model in the Supplementary Materials <cit.>. As a result, the CrO_2(110)/FeF_2/IrO_2 MTJ can further increase the TMR effect by increasing the barrier thickness, potentially achieving a TMR value significantly greater than 3956%. Note that this rapid increase of the TMR ratio with barrier thickness does not depend on the selection of the FM electrode. Therefore, other FM electrodes, such as CoFeB, Fe, etc., can also be used to replace CrO_2 and obtain a large TMR ratio once an appropriate barrier thickness is used. Additionally, since FeF_2(110) has the lowest decay rate for electrons with Σ_1 symmetry, selecting an FM electrode in which Σ_1-symmetry electrons appear only in the majority-spin states can further enhance the TMR effect through the symmetry filtering effect.
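The exponential growth of the TMR ratio with barrier thickness follows directly from the spin-dependent decay rates. The sketch below assumes a half-metallic FM electrode and identical interface factors for both spins, so that TMR = T_P/T_AP - 1 = exp[2(κ_↓ - κ_↑)d] - 1; the decay rates and monolayer spacing are illustrative placeholders, not the values computed for FeF_2(110).

```python
import numpy as np

kappa_up, kappa_dn = 0.8, 1.1      # assumed spin-resolved decay rates (1/Angstrom)
monolayer_spacing = 3.3            # assumed monolayer spacing (Angstrom)

def tmr_ratio(d):
    """TMR = T_P / T_AP - 1 with T_sigma proportional to exp(-2 * kappa_sigma * d)."""
    return np.exp(2.0 * (kappa_dn - kappa_up) * d) - 1.0

for n_layers in (2, 4, 6, 8):
    d = n_layers * monolayer_spacing
    print(f"{n_layers} monolayers -> TMR = {100.0 * tmr_ratio(d):.0f}%")
```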
Both proposed ALM-barrier-based MTJs are experimentally feasible. To generate the TMR effect, only the ALM layer needs to have good single-crystal quality, while the FM and NM layers are not subject to any restrictions. This makes the FM/ALM barrier/NM MTJ, with only one ALM layer, easier to fabricate. Additionally, since the barrier has an AFM configuration, it can serve as the reference layer without additional pinning structures, significantly reducing the complexity of MTJ device fabrication and optimization. Furthermore, the choice of barrier materials for the proposed MTJs is abundant. The spin filtering effect with an AFM configuration is not unique to FeF_2, but is an intrinsic property widely shared by ALM insulators, such as MnF_2 [shown in Supplementary Materials <cit.> Fig. S4], MnTe <cit.>, LaMnO_3, etc. <cit.>. The MTJ with an ALM barrier constructed on the basis of this effect therefore has a very wide range of candidate materials. In our opinion, selecting MnTe with an appropriate crystal orientation as the ALM barrier, combined with suitable electrodes, is another good choice for constructing the MTJs proposed in this work, especially considering that the Néel temperature of MnTe is above room temperature. Furthermore, if the NM electrode of the FM/ALM insulator barrier/NM MTJ is replaced with an FM electrode, then by changing the magnetization directions of both electrodes and the Néel vector of the barrier, four different resistance states can be generated, enabling multi-state storage.
In conclusion, based on first-principles calculations, we investigated the tunneling process through ALM insulators. Owing to their spin-momentum-locked band structures, ALM insulators possess spin-resolved lowest decay rates, resulting in an anisotropic spin filtering effect, which has broad application prospects. We demonstrated the crystal-orientation-dependent spin filtering effect in ALM FeF_2 and, using RuO_2(001)/FeF_2/IrO_2 and CrO_2(110)/FeF_2/IrO_2 as two prototypes, we proposed two kinds of MTJs, i.e., ALM/ALM insulator barrier/NM and FM/ALM insulator barrier/NM MTJs, reaching TMR ratios of 216% and 3956%, respectively. The former MTJ benefits from zero stray field, and the latter MTJ has numerous advantages, such as freedom from an additional complex pinning structure, switchability by both electrical methods and magnetic fields, and the fact that only the ALM barrier needs to be single-crystalline. Thus, these MTJs with an ALM barrier have great potential for applications in the fields of MRAM and magnetic sensors. Our work also provides a promising way to explore the electrical and transport properties of ALM and AFM insulators and promotes the development and application of AFM spintronics.
This work was financially supported by the National Key Research and Development Program of China [MOST Grant No. 2022YFA1402800], the National Natural Science Foundation of China [NSFC, Grant No. 12134017, 12204517, and 12374131], and partially supported by the Strategic Priority Research Program (B) of Chinese Academy of Sciences [CAS Grant No. XDB33000000, Youth Innovation Promotion Association of CAS (2020008)]. The atomic structures were produced using VESTA software <cit.>. High-performance computing resources for contributing to the research results were provided by Beijing PARATERA Technology Co., LTD.
[1] M. Julliere, Tunneling between ferromagnetic films, Phys. Lett. A 54, 225 (1975).
[2] J. S. Moodera, L. R. Kinder, T. M. Wong, and R. Meservey, Large magnetoresistance at room temperature in ferromagnetic thin film tunnel junctions, Phys. Rev. Lett. 74, 3273 (1995).
[3] T. Miyazaki and N. Tezuka, Giant magnetic tunneling effect in Fe/Al_2O_3/Fe junction, J. Magn. Magn. Mater. 139, L231 (1995).
[4] S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnár, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, Spintronics: A spin-based electronics vision for the future, Science 294, 1488 (2001).
[5] I. Žutić, J. Fabian, and S. Das Sarma, Spintronics: Fundamentals and applications, Rev. Mod. Phys. 76, 323 (2004).
[6] J. Lenz and S. Edelstein, Magnetic sensors and their applications, IEEE Sens. J. 6, 631 (2006).
[7] X.-G. Zhang and W. H. Butler, Band structure, evanescent states, and transport in spin tunnel junctions, J. Phys.: Condens. Matter 15, R1603 (2003).
[8] E. Y. Tsymbal, O. N. Mryasov, and P. R. LeClair, Spin-dependent tunnelling in magnetic tunnel junctions, J. Phys.: Condens. Matter 15, R109 (2003).
[9] P. Mavropoulos, N. Papanikolaou, and P. H. Dederichs, Complex band structure and tunneling through ferromagnet/insulator/ferromagnet junctions, Phys. Rev. Lett. 85, 1088 (2000).
[10] J. Mathon and A. Umerski, Theory of tunneling magnetoresistance of an epitaxial Fe/MgO/Fe(001) junction, Phys. Rev. B 63, 220403 (2001).
[11] W. H. Butler, X.-G. Zhang, T. C. Schulthess, and J. M. MacLaren, Spin-dependent tunneling conductance of Fe/MgO/Fe sandwiches, Phys. Rev. B 63, 054416 (2001).
[12] S. Yuasa, T. Nagahama, A. Fukushima, Y. Suzuki, and K. Ando, Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions, Nat. Mater. 3, 868 (2004).
[13] S. S. P. Parkin, C. Kaiser, A. Panchula, P. M. Rice, B. Hughes, M. Samant, and S.-H. Yang, Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers, Nat. Mater. 3, 862 (2004).
[14] L. Šmejkal, J. Sinova, and T. Jungwirth, Beyond conventional ferromagnetism and antiferromagnetism: A phase with nonrelativistic spin and crystal rotation symmetry, Phys. Rev. X 12, 031042 (2022).
[15] L. Šmejkal, J. Sinova, and T. Jungwirth, Emerging research landscape of altermagnetism, Phys. Rev. X 12, 040501 (2022).
[16] J. Krempaský, L. Šmejkal, S. W. D'Souza, M. Hajlaoui, G. Springholz, K. Uhlířová, F. Alarab, P. C. Constantinou, V. Strocov, D. Usanov, W. R. Pudelko, R. González-Hernández, A. Birk Hellenes, Z. Jansa, H. Reichlová, Z. Šobáň, R. D. Gonzalez Betancourt, P. Wadley, J. Sinova, D. Kriegner, J. Minár, J. H. Dil, and T. Jungwirth, Altermagnetic lifting of Kramers spin degeneracy, Nature 626, 517 (2024).
[17] D.-F. Shao and E. Y. Tsymbal, Antiferromagnetic tunnel junctions for spintronics, npj Spintronics 2, 13 (2024).
[18] D.-F. Shao, Y.-Y. Jiang, J. Ding, S.-H. Zhang, Z.-A. Wang, R.-C. Xiao, G. Gurung, W. J. Lu, Y. P. Sun, and E. Y. Tsymbal, Néel spin currents in antiferromagnets, Phys. Rev. Lett. 130, 216702 (2023).
[19] M. Leiviskä, J. Rial, A. Bad'ura, R. L. Seeger, I. Kounta, S. Beckert, D. Kriegner, I. Joumard, E. Schmoranzerová, J. Sinova, O. Gomonay, A. Thomas, S. T. B. Goennenwein, H. Reichlová, L. Šmejkal, L. Michez, T. Jungwirth, and V. Baltz, Anisotropy of the anomalous Hall effect in thin films of the altermagnet candidate Mn_5Si_3, Phys. Rev. B 109, 224430 (2024).
[20] I. I. Mazin, Altermagnetism in MnTe: Origin, predicted manifestations, and routes to detwinning, Phys. Rev. B 107, L100418 (2023).
[21] S. Lee, S. Lee, S. Jung, J. Jung, D. Kim, Y. Lee, B. Seok, J. Kim, B. G. Park, L. Šmejkal, C.-J. Kang, and C. Kim, Broken Kramers degeneracy in altermagnetic MnTe, Phys. Rev. Lett. 132, 036702 (2024).
[22] S. Banerjee and M. S. Scheurer, Altermagnetic superconducting diode effect, Phys. Rev. B 110, 024503 (2024).
[23] D. S. Antonenko, R. M. Fernandes, and J. W. F. Venderbos, Mirror Chern bands and Weyl nodal loops in altermagnets, arXiv:2402.10201 (2024).
[24] S. K. Das and B. Roy, From local to emergent altermagnetism: Footprints of free fermions band topology, arXiv:2403.14620 (2024).
[25] Y. Zhu, T. Chen, Y. Li, L. Qiao, X. Ma, C. Liu, T. Hu, H. Gao, and W. Ren, Multipiezo effect in altermagnetic V_2SeTeO monolayer, Nano Lett. 24, 472 (2024).
[26] D.-F. Shao, S.-H. Zhang, M. Li, C.-B. Eom, and E. Y. Tsymbal, Spin-neutral currents for spintronics, Nat. Commun. 12, 7061 (2021).
[27] J. Dong, X. Li, G. Gurung, M. Zhu, P. Zhang, F. Zheng, E. Y. Tsymbal, and J. Zhang, Tunneling magnetoresistance in noncollinear antiferromagnetic tunnel junctions, Phys. Rev. Lett. 128, 197201 (2022).
[28] P. Qin, H. Yan, X. Wang, H. Chen, Z. Meng, J. Dong, M. Zhu, J. Cai, Z. Feng, X. Zhou, L. Liu, T. Zhang, Z. Zeng, J. Zhang, C. Jiang, and Z. Liu, Room-temperature magnetoresistance in an all-antiferromagnetic tunnel junction, Nature 613, 485 (2023).
[29] X. Chen, T. Higo, K. Tanaka, T. Nomoto, H. Tsai, H. Idzuchi, M. Shiga, S. Sakamoto, R. Ando, H. Kosaki, T. Matsuo, D. Nishio-Hamane, R. Arita, S. Miwa, and S. Nakatsuji, Octupole-driven magnetoresistance in an antiferromagnetic tunnel junction, Nature 613, 490 (2023).
[30] Y.-Y. Jiang, Z.-A. Wang, K. Samanta, S.-H. Zhang, R.-C. Xiao, W. J. Lu, Y. P. Sun, E. Y. Tsymbal, and D.-F. Shao, Prediction of giant tunneling magnetoresistance in RuO_2/TiO_2/RuO_2 (110) antiferromagnetic tunnel junctions, Phys. Rev. B 108, 174439 (2023).
[31] L. Šmejkal, A. B. Hellenes, R. González-Hernández, J. Sinova, and T. Jungwirth, Giant and tunneling magnetoresistance in unconventional collinear antiferromagnets with nonrelativistic spin-momentum coupling, Phys. Rev. X 12, 011028 (2022).
[32] S. Xu, Z. Zhang, F. Mahfouzi, Y. Huang, H. Cheng, B. Dai, W. Cai, K. Shi, D. Zhu, Z. Guo, C. Cao, Y. Liu, A. Fert, N. Kioussis, K. L. Wang, Y. Zhang, and W. Zhao, Spin-flop magnetoresistance in a collinear antiferromagnetic tunnel junction, arXiv:2311.02458 (2023).
[33] G. Gurung, D.-F. Shao, and E. Y. Tsymbal, Extraordinary tunneling magnetoresistance in antiferromagnetic tunnel junctions with antiperovskite electrodes, arXiv:2306.03026 (2023).
[34] L.-D. Yuan, Z. Wang, J.-W. Luo, E. I. Rashba, and A. Zunger, Giant momentum-dependent spin splitting in centrosymmetric low-Z antiferromagnets, Phys. Rev. B 102, 014422 (2020).
[35] L.-D. Yuan, Z. Wang, J.-W. Luo, and A. Zunger, Prediction of low-Z collinear and noncollinear antiferromagnetic compounds having momentum-dependent spin splitting even without spin-orbit coupling, Phys. Rev. Mater. 5, 014409 (2021).
[36] S. Hayami, Y. Yanagi, and H. Kusunose, Momentum-dependent spin splitting by collinear antiferromagnetic ordering, J. Phys. Soc. Jpn. 88, 123702 (2019).
[37] See Supplemental Material at [url] for details of the first-principles calculations; the spin splitting band structures of FeF_2; the conduction channels of RuO_2, IrO_2 and CrO_2; the atomic structures and transmission coefficient distributions of the RuO_2(001)/FeF_2/IrO_2 MTJ; the atomic and electronic structures and the lowest decay rate distributions of MnF_2; and a simple model of the dependence of the TMR ratio on the spin filtering barrier thickness, which includes Refs. <cit.>.
[38] K. D. Belashchenko, E. Y. Tsymbal, M. van Schilfgaarde, D. A. Stewart, I. I. Oleynik, and S. S. Jaswal, Effect of interface bonding on spin-dependent tunneling from the oxidized Co surface, Phys. Rev. B 69, 174408 (2004).
[39] J. P. Velev, C.-G. Duan, K. D. Belashchenko, S. S. Jaswal, and E. Y. Tsymbal, Effect of ferroelectricity on electron transport in Pt/BaTiO_3/Pt tunnel junctions, Phys. Rev. Lett. 98, 137201 (2007).
[40] B. Chi, L. Jiang, Y. Zhu, G. Yu, C. Wan, J. Zhang, and X. Han, Crystal-facet-oriented altermagnets for detecting ferromagnetic and antiferromagnetic states by giant tunneling magnetoresistance, Phys. Rev. Appl. 21, 034038 (2024).
[41] K. Samanta, Y.-Y. Jiang, T. R. Paudel, D.-F. Shao, and E. Y. Tsymbal, Tunneling magnetoresistance in magnetic tunnel junctions with a single ferromagnetic electrode, Phys. Rev. B 109, 174407 (2024).
[42] R. Santos-Ortiz, V. Volkov, S. Schmid, F.-L. Kuo, K. Kisslinger, S. Nag, R. Banerjee, Y. Zhu, and N. D. Shepherd, Microstructure and electronic band structure of pulsed laser deposited iron fluoride thin film for battery electrodes, ACS Appl. Mater. Interfaces 5, 2387 (2013).
[43] T. Song, X. Cai, M. W.-Y. Tu, X. Zhang, B. Huang, N. P. Wilson, K. L. Seyler, L. Zhu, T. Taniguchi, K. Watanabe, M. A. McGuire, D. H. Cobden, D. Xiao, W. Yao, and X. Xu, Giant tunneling magnetoresistance in spin-filter van der Waals heterostructures, Science 360, 1214 (2018).
[44] T. Song, M. W.-Y. Tu, C. Carnahan, X. Cai, T. Taniguchi, K. Watanabe, M. A. McGuire, D. H. Cobden, D. Xiao, W. Yao, and X. Xu, Voltage control of a van der Waals spin-filter magnetic tunnel junction, Nano Lett. 19, 915 (2019).
[45] K. Momma and F. Izumi, VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data, J. Appl. Cryst. 44, 1272 (2011).
[46] G. Kresse and J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996).
[47] P. E. Blöchl, Projector augmented-wave method, Phys. Rev. B 50, 17953 (1994).
[48] G. Kresse and D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59, 1758 (1999).
[49] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996).
[50] V. I. Anisimov, J. Zaanen, and O. K. Andersen, Band theory and Mott insulators: Hubbard U instead of Stoner I, Phys. Rev. B 44, 943 (1991).
[51] S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton, Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study, Phys. Rev. B 57, 1505 (1998).
[52] J. Haines, J. M. Léger, O. Schulte, and S. Hull, Neutron diffraction study of the ambient-pressure, rutile-type and the high-pressure, CaCl_2-type phases of ruthenium dioxide, Acta Crystallogr. Sect. B: Struct. Sci. 53, 880 (1997).
[53] T. Berlijn, P. C. Snijders, O. Delaire, H.-D. Zhou, T. A. Maier, H.-B. Cao, S.-X. Chi, M. Matsuda, Y. Wang, M. R. Koehler, P. R. C. Kent, and H. H. Weitering, Itinerant antiferromagnetism in RuO_2, Phys. Rev. Lett. 118, 077201 (2017).
[54] J. Strempfer, U. Rütt, and W. Jauch, Absolute spin magnetic moment of FeF_2 from high energy photon diffraction, Phys. Rev. Lett. 86, 3152 (2001).
[55] J. Strempfer, U. Rütt, S. P. Bayrakci, T. Brückel, and W. Jauch, Magnetic properties of transition metal fluorides MF_2 (M = Mn, Fe, Co, Ni) via high-energy photon diffraction, Phys. Rev. B 69, 014417 (2004).
[56] E. Goering, A. Bayer, S. Gold, G. Schütz, M. Rabe, U. Rüdiger, and G. Güntherodt, Strong anisotropy of projected 3d moments in epitaxial CrO_2 films, Phys. Rev. Lett. 88, 207203 (2002).
[57] J. Taylor, H. Guo, and J. Wang, Ab initio modeling of quantum transport properties of molecular electronic devices, Phys. Rev. B 63, 245407 (2001).
|
http://arxiv.org/abs/2409.03530v1 | 20240905134220 | Use of triplet loss for facial restoration in low-resolution images | [
"Sebastian Pulgar",
"Domingo Mery"
] | cs.CV | [
"cs.CV",
"cs.CR"
] |
Use of triplet loss for facial restoration in low-resolution images
Sebastián Pulgar, Domingo Mery
This work was partly supported by Fondecyt-Chile 1191131 and the National Center for Artificial Intelligence CENIA FB210017, Basal ANID.
September 9, 2024
=====================================================================================================================================================================================================
§ ABSTRACT
In recent years, facial recognition (FR) models have become the most widely used biometric tool, achieving impressive results on numerous datasets. However, inherent hardware challenges or shooting distances often result in low-resolution images, which significantly impact the performance of FR models. To address this issue, several solutions have been proposed, including super-resolution (SR) models that generate highly realistic faces. Despite these efforts, significant improvements in FR algorithms have not been achieved. In this paper, we propose a novel SR model called Face Triplet Loss GAN (FTLGAN), which focuses on generating high-resolution images that preserve individual identities rather than merely improving image quality, thereby maximizing the performance of FR models. The results are compelling, demonstrating a mean d' value 21% above the best current state-of-the-art models, specifically d' = 1.099 and AUC = 0.78 for 14×14 pixels, d' = 2.112 and AUC = 0.92 for 28×28 pixels, and d' = 3.049 and AUC = 0.98 for 56×56 pixels. The contributions of this study are significant in several key areas. Firstly, a notable improvement in facial recognition performance has been achieved in low-resolution images, specifically at resolutions of 14×14, 28×28, and 56×56 pixels. Secondly, the enhancements demonstrated by FTLGAN show a consistent response across all resolutions, delivering outstanding performance uniformly, unlike other comparative models. Thirdly, an innovative approach has been implemented using triplet loss logic, enabling the training of the super-resolution model solely with real images, contrasting with current models, and expanding potential real-world applications. Lastly, this study introduces a novel model that specifically addresses the challenge of improving classification performance in facial recognition systems by integrating facial recognition quality as a loss during model training.
Face Recognition (FR), GAN, Triplet Loss, face re-identification.
§ INTRODUCTION
In recent years, thanks to the emergence of artificial intelligence models, face recognition (FR) algorithms have achieved tremendous improvements that have led to the development of innovative FR models such as <cit.>. These have achieved an accuracy of over 99% on datasets such as LFW <cit.>. These remarkable results have turned facial recognition into the most widely used biometric technique in recent years, enabling major contributions in areas such as security, finance, and even forensic cases <cit.>.
However, despite the remarkable achievements made by neural models in the field of face recognition, their original design is oriented to high-resolution (HR) images, which hinders their direct application in contexts involving low-resolution (LR) images. This results in serious problems in real applications such as surveillance, where capture distances and hardware limitations often produce very low-resolution facial images, frequently affected by motion blur or defocus, strongly decreasing the performance of these models <cit.>.
In the face of the severe performance degradation suffered by face recognition models with low-resolution images, two approaches have emerged: methods that learn a unified feature space and methods based on super-resolution (SR) <cit.>. Super-resolution models, which seek to generate a high-resolution (HR) face from a low-resolution (LR) input to improve face recognition, have presented a more successful approach in recent years compared to other methods <cit.>. Despite the improvements presented by super-resolution mechanisms, a major challenge remains for very low-resolution images, such as 14×14, 28×28 and 56×56 pixels, where state-of-the-art models, such as GFPGAN or Real-SRGAN, generate visual deformations in faces (see Figure <ref>) and lose essential features, which impedes the re-identification task of FR algorithms. This fact has led bicubic interpolation to be considered the best model in cases of very low-resolution faces, since it better preserves the original information of the face <cit.>.
The deformations and poor face recognition (FR) performance affecting current super-resolution models are mainly attributed to excessive beautification of the restored images, which leads to a loss of the facial features needed during the face recognition process. This over-embellishment occurs because current models are trained with losses that prioritize the generation of realistic images, without taking into account a variable that specifically evaluates and seeks to improve facial recognition. Since this recognition-oriented term is not included in the training of the models, any improvement in recognition is only a second-order effect, obtained indirectly as a consequence of the beautification of the images.
Due to the problems that still exist in SR models, in this work we focus on the development of a low-resolution face restoration model aimed at improving the face recognition process. This work has resulted in FTLGAN (Face Triplet Loss GAN), a novel super-resolution model that is able to maintain the identity of individuals by incorporating the quality of the FR as a loss in the generative network, which allows the distinctive features to be maintained in very low-resolution cases. This model will be compared with several models evaluated in <cit.> at resolutions of 14×14, 28×28 and 56×56 pixels, on the VGG-Face 2 dataset <cit.>, following the same protocols to ensure a fair comparison between the models.
The contributions of the present work are significant in several key areas. Firstly, a notable improvement in facial recognition performance has been achieved in low-resolution images, specifically at resolutions of 14×14, 28×28, and 56×56 pixels. Secondly, the enhancements achieved by the model are consistent across all resolutions, delivering outstanding performance uniformly, unlike other comparative models. As a third contribution, an innovative approach has been implemented using triplet loss logic, enabling the training of a super-resolution model solely with real images, in contrast with current models, thus expanding the potential for real-world applications. Lastly, as a fourth contribution, a novel model has been introduced that specifically addresses the challenge of improving classification performance in facial recognition systems by integrating facial recognition quality as a loss during model training.
The rest of the paper is organized as follows. Section 2 presents a detailed literature review covering the theoretical basis of the FTLGAN model, including the face re-identification process, face recognition models, super-resolution models, and the relevant evaluation metrics; in addition, the dataset used for the study is presented and its relevance discussed. Section 3 is devoted to the detailed exposition of the FTLGAN architecture, going in depth into its components and its operation. Subsequently, Section 4 provides the experimental results of the FTLGAN model, accompanied by a detailed ablation analysis to better understand its performance. Section 5 is devoted to an in-depth discussion of the results obtained, analyzing their implications and possible limitations. Finally, Section 6 presents the conclusions derived from this study, highlighting key findings and possible future directions for research.
§ RELATED WORK AND DATASETS
This section presents the face recognition techniques, the super-resolution mechanisms, and the dataset used to evaluate the performance of the different models.
§.§ Face re-identification
Facial re-identification is the process of determining whether two facial images captured at different times and with different cameras represent the same person <cit.>. Unlike other facial tasks, such as facial verification, which verifies whether a given face corresponds to a specific person, and facial recognition, which identifies a person from a given image, facial re-identification involves comparing facial features to establish the similarity between two images, as can be seen in Figure <ref>.
This process becomes particularly challenging when one of the images is low resolution (LR), as the reduced quality can make it difficult to accurately extract facial features <cit.>. Low-resolution person re-identification is an important area of research, especially in surveillance and security applications, where images captured by cameras are often of low quality due to factors such as distance, viewing angle, and variable illumination.
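In practice, the re-identification decision reduces to comparing the embeddings produced by an FR model for the two face crops. The sketch below is an illustrative example only: the embedding source and the threshold value are assumptions, and the threshold would normally be calibrated on a validation set.

```python
import numpy as np

def same_person(embedding_a, embedding_b, threshold=0.35):
    """Decide whether two face embeddings belong to the same identity by
    thresholding their cosine similarity (threshold is illustrative)."""
    a = embedding_a / np.linalg.norm(embedding_a)
    b = embedding_b / np.linalg.norm(embedding_b)
    return float(np.dot(a, b)) >= threshold
```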
§.§ Face Recognition Models (FR)
To make consistent comparisons between faces, face recognition algorithms are used. These are mechanisms capable of extracting the characteristics of facial images into an n-dimensional vector, obtaining a numerical representation of the faces, which allows the similarity or distance between images to be compared <cit.>.
There are numerous methodologies to perform feature extraction; however, deep learning has become the predominant mechanism in the last ten years, allowing the development of a multitude of models, which are characterized by using similar backbones but different loss functions. Depending on the type of loss function, FR algorithms can be classified into three major groups <cit.>:
§.§.§ Euclidean-distance-based loss
Models based on Euclidean distance are characterized by using vector representations of the faces to embed them into a Euclidean space, seeking to reduce the intra-class variance and increase the inter-class variance between faces. The most popular loss using this principle is the contrastive loss, which seeks to minimize the Euclidean distance between positive face representations (same person) and maximize the Euclidean distance between negative face representations (different persons) <cit.>, according to the following equation:
ℒ = y_ij max(0, ‖f(x_i) - f(x_j)‖_2 - ϵ^+) + (1 - y_ij) max(0, ϵ^- - ‖f(x_i) - f(x_j)‖_2) ,
where y_ij=1 means x_i and x_j are matching samples and y_ij=0 means non-matching samples, f(·) is the feature embedding, and ϵ^+ and ϵ^- control the margins of the matching and non-matching pairs, respectively.
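For illustration, the contrastive loss above can be written in a few lines; the margin values used here are arbitrary placeholders.

```python
import numpy as np

def contrastive_loss(f_i, f_j, y, eps_pos=0.5, eps_neg=1.0):
    """Pull matching pairs (y=1) within the eps_pos margin and push
    non-matching pairs (y=0) beyond the eps_neg margin."""
    d = np.linalg.norm(f_i - f_j)
    return y * max(0.0, d - eps_pos) + (1 - y) * max(0.0, eps_neg - d)
```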
Among the face recognition models that use Euclidean losses, FaceNet <cit.> stands out. It uses a triplet loss which, unlike the contrastive loss, which takes into account the absolute distances of matched and mismatched pairs, considers the relative difference between them, according to the following constraint:
‖f(x_i^a) - f(x_i^p)‖_2^2 + α < ‖f(x_i^a) - f(x_i^n)‖_2^2 ,
where x_i^a, x_i^p and x_i^n are the anchor, positive and negative samples, respectively, α is a margin and f(·) represents a nonlinear transformation embedding an image into a feature space.
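Turned into a hinge loss, the triplet constraint above penalizes any triplet for which the anchor-positive distance is not smaller than the anchor-negative distance by at least the margin α; the sketch below is a minimal illustration with an arbitrary margin value.

```python
import numpy as np

def triplet_loss(f_anchor, f_positive, f_negative, alpha=0.2):
    """Hinge form of the triplet constraint on squared Euclidean distances."""
    d_ap = np.sum((f_anchor - f_positive) ** 2)
    d_an = np.sum((f_anchor - f_negative) ** 2)
    return max(0.0, d_ap - d_an + alpha)
```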
§.§.§ Angular/cosine-margin-based loss
The angular losses arise from the softmax loss concept, which is characterized by training focused on classifying faces into classes representing identities and presents serious inter-/intra-class separation problems <cit.>. To improve on the softmax model, the use of angular/cosine-margin-based losses was proposed, generating a margin between classes located on the surface of a hypersphere, which allows for better classification <cit.>.
Angular models build on the intrinsic angular behavior of the softmax loss, whose features lie on the surface of a feature hypersphere, reformulating the softmax expression as a function of the angle between the feature vector and the column vector of weights. This allowed the emergence of state-of-the-art models such as ArcFace <cit.> and AdaFace <cit.>. These models, in addition to expressing the function in terms of the angle, incorporate a margin, allowing better differentiation:
L=-1/N∑_i=1^N ln[ exp{s ·cos(θ_y_i, i+m)}/( exp{s ·cos(θ_y_i, i+m)}+∑_j ≠ y_iexp{s ·cos(θ_j, i)}) ] .
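In practice, the additive angular margin amounts to computing cosine similarities between L2-normalized features and class weights, shifting the target-class angle by m, and feeding the rescaled logits into a softmax cross-entropy; the following is a minimal sketch with illustrative values of the scale s and margin m:

import torch
import torch.nn.functional as F

def arcface_logits(features, weights, labels, s=64.0, m=0.5):
    # features: (B, D) embeddings, weights: (C, D) class prototypes, labels: (B,)
    f = F.normalize(features, dim=1)
    w = F.normalize(weights, dim=1)
    cos = torch.clamp(f @ w.t(), -1.0, 1.0)                    # cos(theta_{j,i})
    theta = torch.acos(cos)
    target = F.one_hot(labels, num_classes=weights.shape[0]).bool()
    cos_m = torch.where(target, torch.cos(theta + m), cos)     # margin only on the target class
    return s * cos_m

# loss = F.cross_entropy(arcface_logits(feat, W, y), y)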
§.§.§ Loss variations
Numerous studies have proposed variations on the softmax and angular models by generating normalizations of the characteristics and weights of the loss functions to improve the performance of the models <cit.> as follows:
W=W/‖ W‖, x=α x/‖ x‖ ,
where α is a scalar parameter, x is the learned feature vector and W are the weights of the last fully connected layer.
§.§ Upsampling methods
Super-resolution mechanisms are responsible for converting low-resolution (LR) images into high-resolution (HR) images, seeking to preserve as much detail of the identity of persons as possible in the case of facial images. This process of transforming low-resolution images to high-resolution is known as upsampling operation and can be divided into two types: interpolation methods and learning-based upsampling <cit.>.
§.§.§ Interpolation methods
Interpolation is the most commonly used upsampling method <cit.>. Interpolation-based upsampling methods perform scaling using only information from known pixels to estimate the value of unknown pixels, making them easy to implement <cit.>. This logic has allowed the emergence of several interpolation subtypes, among which the following stand out (a short code sketch of these modes is given after the list):
* Nearest-neighbor Interpolation: a model that selects the nearest pixel value for each position to be interpolated independently of any other pixel.
* Bilinear Interpolation: a model that performs linear interpolation on one axis of the image and then performs it on the other axis.
* Bicubic Interpolation: similarly to bilinear interpolation, it performs a cubic interpolation on each of the two axes; however, it takes into account a 4×4 pixel neighborhood and produces smoother results with fewer artifacts <cit.>.
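All three interpolation modes are available out of the box in common deep learning libraries; a minimal PyTorch sketch upsampling a 14×14 crop to 112×112 (the ×8 setting used later in the experiments; the random tensor is only a stand-in for a real face crop):

import torch
import torch.nn.functional as F

lr = torch.rand(1, 3, 14, 14)   # dummy 14x14 RGB crop
up_nearest  = F.interpolate(lr, size=(112, 112), mode="nearest")
up_bilinear = F.interpolate(lr, size=(112, 112), mode="bilinear", align_corners=False)
up_bicubic  = F.interpolate(lr, size=(112, 112), mode="bicubic", align_corners=False)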
§.§.§ Learning-based Upsampling
Unlike interpolation models, learning-based upsampling models are characterized by learning the resampling end-to-end by introducing learnable convolutional layers. Among these, two approaches stand out (both are sketched in code after the list).
* Transposed Convolutional Layer: Also known as the deconvolution layer, it performs an inverse transformation to the standard convolution. Its main purpose is to predict the possible input from feature maps that have a dimension similar to the convolution output. In essence, this layer seeks to increase the resolution of the image through an expansion process involving the insertion of zeros, followed by the application of the convolution operation <cit.>.
* Sub-pixel Convolutional Layer: This corresponds to another fully end-to-end learnable mechanism, which performs upsampling by generating multiple channels through convolution and subsequent reshaping. Within this layer, an initial convolution produces outputs with s^2 times the channels, where s denotes the scale factor, which are then rearranged into a feature map with s times the spatial resolution <cit.>.
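Both learnable mechanisms correspond to standard layers; the sketch below contrasts a transposed convolution with a sub-pixel (PixelShuffle) block for a scale factor s = 2 (channel counts are illustrative):

import torch
import torch.nn as nn

x = torch.rand(1, 64, 28, 28)            # feature map to be upsampled by s = 2

# (a) transposed convolution ("deconvolution"): zero-insertion followed by convolution
deconv = nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1)
y_deconv = deconv(x)                     # -> (1, 64, 56, 56)

# (b) sub-pixel convolution: produce s^2 times the channels, then reshape with PixelShuffle
subpixel = nn.Sequential(
    nn.Conv2d(64, 64 * 2 ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(upscale_factor=2),
)
y_subpixel = subpixel(x)                 # -> (1, 64, 56, 56)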
§.§ Evaluation metrics
Once the embedding vectors and the distances between the pairs have been calculated, it is possible to evaluate the performance of the model. For this purpose, all pairs of the dataset are tested, checking that impostor pairs (faces of different persons) are recognized as different persons and that genuine pairs (faces of the same person) are effectively recognized as the same identity. Since the model performance varies depending on the selected threshold, genuine and impostor curves are computed by evaluating all dataset pairs over all possible thresholds, generating two curves that show the classification behavior, as can be seen in the example in Figure <ref>.
To evaluate the separation between the curves and the confusion zone objectively (and not visually), the parameter d' is calculated as a metric using:
d^'(g, i) = |μ_g - μ_i|/√((σ_g^2 + σ_i^2)/2)
This parameter allows a less ambiguous comparison, considering the mean and the standard deviation of the curves.
On the other hand, a Receiver Operating Characteristic (ROC) <cit.> curve is plotted, which allows verifying the performance of the model for all thresholds by means of a graph of FMR vs. FNMR, as shown in Figure <ref>.
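Both quantities follow directly from the genuine and impostor score distributions; a minimal NumPy sketch (the convention that larger scores indicate the same identity is an assumption of this example):

import numpy as np

def d_prime(genuine, impostor):
    # separation between the genuine and impostor score distributions (cf. the d' formula above)
    mu_g, mu_i = genuine.mean(), impostor.mean()
    return abs(mu_g - mu_i) / np.sqrt((genuine.var() + impostor.var()) / 2.0)

def fmr_fnmr(genuine, impostor, thresholds):
    # false match / false non-match rates for a sweep of decision thresholds
    fmr = np.array([(impostor >= t).mean() for t in thresholds])   # impostors accepted
    fnmr = np.array([(genuine < t).mean() for t in thresholds])    # genuines rejected
    return fmr, fnmr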
§ PROPOSED METHOD: FTLGAN
Due to the aforementioned problems with super-resolution models, this section first explores the development of the FTLGAN model, followed by the presentation of several of its variants.
§.§ Triplet Loss Training
As mentioned in the introduction, current super-resolution models often do not incorporate the embeddings of face recognition models in their training. In this context, the FTLGAN model focuses on rethinking the traditional training logic of GANs and face enhancement models. To achieve this, the FTLGAN consists of two stages in its training process, which operate together following a triplet loss-based logic. During training, image triplets composed of a low-resolution Anchor image (the target identity), a high-resolution Positive image (the same target identity), and a high-resolution Negative face (of another identity) are used.
In the first stage, called generative, a neural network acts as a decoder performing the scaling process on the low-resolution Anchor image, converting it into a high-resolution image of equal size to the Positive and Negative images. In the second stage, called feature extraction, a pre-trained face recognition algorithm, with weights frozen, is used to extract a latent vector of the Positive, Negative, and Anchor face with scaling. The quality of the image restored by the decoder is evaluated by calculating the triplet loss between the latent vectors, and backpropagation is performed to train the generative decoder, as shown in Figure <ref>.
In order to focus the results on the training logic instead of the blocks used, we decided to use as decoder the ESRGAN model generator <cit.>, a widely recognized architecture that differs from other current GANs, such as StyleGAN <cit.>, by having super-resolution tasks as its main objective. The topology used was identical to the one proposed in <cit.>, using 16 RRDB residual blocks. This allows us to have a versatile model targeting various tasks, including image quality enhancement in the SR task <cit.>.
In the feature extraction phase, we propose to integrate a feature extraction model trained with triplet loss logic. In this line, the FaceNet <cit.> architecture was selected. The FaceNet architecture implemented for the FTLGAN model employs a ResNet100 backbone <cit.>, previously trained on the VGG-Face 2 dataset <cit.>.
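Schematically, a single FTLGAN training step then reads as below. This is a simplified sketch rather than the authors' exact implementation: generator stands for the ESRGAN-style decoder, face_encoder for the frozen FaceNet embedding network (assumed to have requires_grad disabled), and the margin and optimizer settings are illustrative.

import torch

def ftlgan_step(generator, face_encoder, optimizer, anchor_lr, positive_hr, negative_hr, margin=0.2):
    generator.train()
    face_encoder.eval()                      # FR weights stay frozen

    sr_anchor = generator(anchor_lr)         # stage 1: low-resolution anchor -> super-resolved image
    with torch.no_grad():
        emb_p = face_encoder(positive_hr)    # stage 2: latent vectors of the positive / negative faces
        emb_n = face_encoder(negative_hr)
    emb_a = face_encoder(sr_anchor)          # gradients flow back into the generator only

    d_ap = torch.sum((emb_a - emb_p) ** 2, dim=1)
    d_an = torch.sum((emb_a - emb_n) ** 2, dim=1)
    loss = torch.clamp(d_ap - d_an + margin, min=0.0).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()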
In addition to allowing a focus on face recognition quality, the triplet-loss training logic can be trained with real low-resolution images, since no comparison between the restored image and an ideal image is required. This mitigates a problem present in <cit.>, which need to train with synthetic low-resolution images generated from compressions of high-resolution images. This allows the FTLGAN model to learn effects other than low resolution, including blurring and noise.
§.§ Perceptual Loss
In our approach, it is crucial to note that the proposed model does not include any loss that works directly in the image space and controls that the result actually looks like a face. Instead, the model seeks to optimize the n-dimensional representations of the faces, which indirectly impacts obtaining realistic images. This second-order strategy implies that, during the first few training epochs, the model may experience noticeable divergence due to the complexity of learning the subtle, nonlinear correlations that characterize facial features. By not imposing strict constraints from the outset, the model has the flexibility to adapt and adjust to the inherent diversity in facial appearance, although this initial process may result in less accurate or consistent results. To mitigate potential divergence in early epochs and guide the model toward generating more consistent facial images, a second loss is incorporated, known as the perceptual loss <cit.>.
The perceptual loss proposed by Johnson et al. <cit.> is based on minimizing the distance between the features activated by a reference image and those of a restored image in a deep network, based on the idea that this is closer to perceptual similarity <cit.>. In the case of FTLGAN, the perceptual loss is implemented with a VGG19-54 network following the architecture defined in <cit.>. Here, a 19-layer pre-trained VGG <cit.> network is used, where 54 indicates the features obtained from the 4th convolution before the 5th max-pooling layer. By using the 4th layer, we can capture features that are not too deep while maintaining strong supervision.
Although the incorporation of the perceptual loss can help the model by integrating more direct relations that reduce the divergence in the first epochs, this topology presents difficulties in its implementation: it prevents training with only real low-resolution images, since both a low- and a high-resolution image are required for the training supervision. To mitigate this drawback, we propose to perform a synthetic compression of the Positive image using bicubic interpolation, generating the architecture that can be visualized in Figure <ref>.
Thus, the use of L_percep and L_triplet results in an overall loss for the FTLGAN model defined by the linear combination of both losses:
L = α L_percep + β L_triplet .
For this work, we tested other losses that incorporate more direct correlations, such as MSE <cit.>, in conjunction with the triplet loss. However, no other mechanism presented better results than the presented combination.
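A sketch of the combined objective is given below, assuming a frozen VGG19 feature extractor truncated before the activation of conv5_4 (the features[:35] slice follows torchvision's layer indexing and is an assumption of this example, as is the L1 distance on the feature maps); α and β are the weights of the linear combination above, with default values taken from the training details reported below.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# frozen VGG19-54 feature extractor (up to conv5_4, before its activation)
vgg_feat = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:35].eval()
for p in vgg_feat.parameters():
    p.requires_grad_(False)

def perceptual_loss(sr_img, hr_img):
    # inputs are expected to be ImageNet-normalized RGB tensors of shape (B, 3, H, W)
    return F.l1_loss(vgg_feat(sr_img), vgg_feat(hr_img))

def total_loss(l_percep, l_triplet, alpha=0.8, beta=0.2):
    return alpha * l_percep + beta * l_triplet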
§ EXPERIMENTS
In order to obtain comparable and valid results, the proposed FTLGAN model was evaluated following the experimental protocol proposed in <cit.>, in which 14×14, 28×28 and 56×56 images are upscaled by ×8, ×4 and ×2, respectively, to generate high-resolution images of 112×112 pixels.
§.§ Dataset
In the field of face recognition, finding complete datasets that allow a deep and comparable evaluation is a challenge. To achieve replicable and comparable results, it was decided to use the dataset proposed in <cit.>, thereby generating results comparable with other SR models. This dataset consists of an edited version of VGG-Face 2 <cit.>, which contains high-resolution images of 112×112 pixels and low-resolution images of three resolutions: 56×56, 28×28 and 14×14 pixels.
Each group of the dataset is made up of a total of 163,564 training images with 8,605 different identities, in addition to 8,791 test images with 497 different identities. It is important to note that in the creation of these sets no enlargement was performed, so no new information was created; however, in some cases, given the protocol, minimal downscaling had to be performed to standardize the dimensions, for which bicubic interpolation was used.
In addition to the images in the four resolution types, the dataset provides 163,564 data triplets for each resolution in the training set, in which an LR anchor image, a positive HR image (same identity), and a negative HR image (different identity) are presented to enable contrastive or triplet-loss training.
§.§ Training details and parameters
For training the FTLGAN model, a linear combination between perceptual loss and triplet loss was used. In all experiments, we set α = 0.8 and β = 0.2, following the equation <ref>, which intensified the impact of triplet loss on perceptual loss. In addition, a learning rate of 1e^-5 was used for all experiments.
The training process was performed independently for each of the dataset resolutions (14×14, 28×28, and 56×56). Ten epochs on an NVIDIA RTX 3090 Ti graphics card were used for each training run.
§.§ Experiments results
The results obtained from the FTLGAN model are compared with both the interpolation methods and the learning-based upsampling methods shown in Section <ref>, generating Table <ref>. When observing the results, it is possible to notice that the proposed FTLGAN model presents the best results at 28×28 and 56×56 resolution, as well as a better average than all the other SR models presented in <cit.>.
When comparing the data, a clear improvement of the FTLGAN model with respect to all the other topologies is noticeable, with a performance 11% better at 28×28, 9.4% better at 56×56, and 21% better on average than the best state-of-the-art model and baseline for this problem, being surpassed only by Nearest + Nearest T. S at 14×14 pixels.
The genuine and impostor curves, present in Figure <ref> show how the separation between them improves as the resolution increases. An almost total separation in images of 56×56 pixels is particularly noticeable. Also, when analyzing the curves, it is evident that as the resolution increases, it is the genuine curve that mainly experiences an increase in the average distance between the pairs, going from an average distance of 0.3 at 14×14 resolutions to an average distance of 0.65 at 56×56. On the other hand, the impostor curve remains fixed close to a mean of 0, which shows that the model is robust in detecting impostor pairs even at low resolution, but has difficulties in identifying genuine pairs at very low resolutions.
In order to understand the results obtained, a visual SR test was performed with a recognizable public figure, Ewan McGregor, enlarging images at the three resolutions considered (14×14, 28×28, and 56×56). Figure <ref> shows the results of four of the most important models: bicubic interpolation, GFPGAN, and Real-ESRGAN compared with FTLGAN.
When observing the resulting images, it can be seen that the FTLGAN model shows results similar to those generated by a bicubic interpolation, but with subtle differences in smoothness. It is important to note that before this study, interpolations offered the best results for this type of problem, and the FTLGAN model follows this line by introducing less new information and taking better advantage of the available visual information, being a learning-based model that behaves similarly to an interpolation model.
In contrast, the rest of the state-of-the-art learning-based models tend to generate visually smoother results, altering the identity of individuals, as occurs in the 56×56 pixel images, or deforming the face completely, as in the case of the 28×28 and 14×14 resolutions, where the details of the identity are completely lost.
§.§ Ablation study
In this section, the impact of different elements on the FTLGAN model will be thoroughly analyzed through a series of experiments. Among the tests performed, the impact of using only real images in training versus synthetic images will be evaluated and the FR Angular ArcFace model, recognized for its accuracy in identifying facial features, will be used. In addition, several loss techniques will be integrated, including the well-known Mean Squared Error (MSE) along with the online triplet mining method <cit.>, which generates dynamic data triplets (anchor, positive and negative) during training by selecting difficult triplets from similar anchor and positive samples. These combined elements will provide a deeper understanding of the model performance in the super-resolution process. Each experiment will be performed on images exclusively at 28×28 pixels, an intermediate resolution that allows for agile evaluation and faithfully represents behavior between 14×14 pixels and 56×56 pixels. All combinations of experiments can be visualized in Table <ref>.
The results of the ablation study are presented in Table <ref>. Comparing the results of experiments 1 and 2, one can observe the positive impact of incorporating real images, resulting in a slight improvement of the d' value from 2.036 to 2.098. These results indicate marginal improvements when using this type of images.
Similarly, when comparing Experiments 3 and 4 with the base experiment, it is observed that neither the use of online triplet mining nor the use of MSE loss generates a positive contribution in improving the model, reducing the d' value to 2.017 and 2.064 respectively.
In the case of experiment 5, the performance is much lower than that obtained in the previous experiments, with d' = 0.108, because the model presented divergent behavior during training. This behavior appeared in all training runs that included an angular-type face recognition model such as AdaFace <cit.> or CosFace <cit.>. The details of this behavior are addressed in Section <ref>.
§ DISCUSSION OF RESULTS
§.§ Why is FTLGAN the best model in the experiments?
As can be seen in Figure <ref>, the results of learning-based models, such as GFPGAN, present smoother results compared to those delivered by the FTLGAN model. The latter seems to have a behavior closer to a bicubic interpolation. However, despite this, the FTLGAN results are superior at all resolutions, improving the d'. This effect may be due to several factors, the main one being the incorporation of face recognition embedding as part of the loss function.
The inclusion of face recognition embedding in the FTLGAN loss function incorporates the quality of face recognition into the face restoration, shifting the focus from image space, where models generally work, to the space of face representations. This shift in focus allows FTLGAN to use the limited information available in low-resolution faces to generate images that are more faithful to the original data.
In contrast, other GAN models tend to invent a lot of new information in order to smooth the image. These models often produce images that, while visually pleasing, may depart significantly from the original information contained in the low-resolution image. For example, in cases of 14×14 pixel images, where the information is contained in only 196 pixels, FTLGAN takes full advantage of this limited information to generate more accurate images by generating new pixels similar but not the same as those generated in interpolation. This ability of FTLGAN to maintain fidelity to the little information available is what allows it to outperform other models in terms of quality and accuracy in facial image restoration.
§.§ Why does FaceNet work better than models like Arcface or Adaface?
Angular-loss-based models such as ArcFace and AdaFace have dominated the state of the art in the last 5 years on HR datasets such as LFW <cit.>; however, as can be seen in Table <ref> and Table <ref>, angular models presented worse performance than Euclidean models in low-resolution cases.
The presented results can have several explanations; however, the main one centers on the problems that FR models based on angular losses exhibit for images with excessive noise or compression, which has been previously studied in <cit.>. The results of that study show how the performance of angular models decays strongly as the image is degraded, an effect that is not as clearly seen in non-angular or contrastive models, which perform best in these situations.
§.§ Why does FTLGAN not converge with FR models based on angular losses?
As shown in <ref>, the versions of FTLGAN based on angular losses do not converge. This was not an isolated case: when using optimizers other than SGD or when making changes to the learning rate, the model presented the same divergent and erratic behavior. These cases show that FTLGAN is a highly unstable model, which may be one of the major triggers of the instability seen when using models such as ArcFace or AdaFace.
This instability makes even more sense with the results visualized in Table <ref> where it is possible to observe the poor performance of the angular models for classifying low-resolution images. This poor performance is likely to affect the stability of FTLGAN since the training of the generator is directly dependent on the face recognition model.
The instability of the model and the fact that FTLGAN converges only with certain specific parameters may be largely due to the fact that the model uses a triplet loss, which numerous authors have considered a highly unstable type of loss during training <cit.>, so improving the stability and convergence of this model is an important task for future work.
§ CONCLUSIONS
In recent years, advances in facial recognition have made this technique the most widely used biometric method. However, inherent hardware limitations still generate numerous cases in which the facial images obtained are of low resolution, causing a strong loss of performance of facial recognition models. Numerous solutions, such as super-resolution, have sought to improve performance in these cases, yet the problems have persisted over time. Due to these face recognition problems, the present work aimed to define the current limits of FR and to propose a new solution to this problem, using the quality of face recognition as a training loss for SR models.
The work showed the poor performance of current super-resolution models, which focus on generating smoother, lifelike images, but in reality, perform poorly when it comes to face recognition. These poor results are largely due to the fact that traditional generative models do not incorporate face recognition as a primary task, making it a second-order objective.
Due to the low performance of face recognition at low resolution, this work developed a new super-resolution model: FTLGAN, which incorporates the quality of face recognition as a training loss using a triplet loss logic. This approach allows the development of a SR model focused on the quality of face recognition rather than the aesthetic quality of the image. The results of this model show a d' 21% higher than the best models of the current state of the art, specifically achieving a d' = 1.099 and AUC = 0.78 for 14×14 pixels, d' = 2.112 and AUC = 0.92 for 28×28 pixels, and d' = 3.049 and AUC = 0.98 for 56×56 pixels.
The positive results observed can be further explained by a detailed analysis of the two key contributions. First, by using real images for training, the model's performance improved, increasing the d' from 2.036 to 2.098 for 28×28 pixels. However, this improvement is marginal compared to the significant enhancement achieved by incorporating facial recognition embedding into the loss function. This latter approach raised the d' from 1.715 to 2.036 for 28×28 pixels. These findings underscore the importance of integrating facial recognition quality into the model’s training process for more effective low-resolution facial restoration, aligning with the promising results demonstrated by FTLGAN.
The development of this work opens a new line of research for future projects, allowing possible improvements in various face recognition problems such as image degradations with blurring or noise or even in recognition tasks with age changes, allowing real improvements in these tasks.
§ ACKNOWLEDGMENT
Fondecyt-Chile 1191131 and National Center for Artificial Intelligence CENIA FB210017, Basal ANID, partly supported this work.
ada
M. Kim, A. K. Jain, and X. Liu, “Adaface: Quality adaptive margin for face recognition,” 2023.
arc
J. Deng, J. Guo, J. Yang, N. Xue, I. Kotsia, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 10, p. 5962–5979, Oct. 2022. [Online]. Available: <http://dx.doi.org/10.1109/TPAMI.2021.3087709>
sphere
W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “Sphereface: Deep hypersphere embedding for face recognition,” 2018.
facenet
F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Jun. 2015. [Online]. Available: <http://dx.doi.org/10.1109/CVPR.2015.7298682>
LFW
G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” University of Massachusetts, Amherst, Tech. Rep. 07-49, October 2007.
du2021elements
H. Du, H. Shi, D. Zeng, X.-P. Zhang, and T. Mei, “The elements of end-to-end deep face recognition: A survey of recent advances,” 2021.
Li_2019
P. Li, L. Prieto, D. Mery, and P. J. Flynn, “On low-resolution face recognition in the wild: Comparisons and new techniques,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 8, p. 2000–2012, Aug. 2019. [Online]. Available: <http://dx.doi.org/10.1109/TIFS.2018.2890812>
wang2021realesrgantrainingrealworldblind
X. Wang, L. Xie, C. Dong, and Y. Shan, “Real-esrgan: Training real-world blind super-resolution with pure synthetic data,” 2021. [Online]. Available: <https://arxiv.org/abs/2107.10833>
9072532
J. Chen, J. Chen, Z. Wang, C. Liang, and C.-W. Lin, “Identity-aware face super-resolution for low-resolution face recognition,” IEEE Signal Processing Letters, vol. 27, pp. 645–649, 2020.
loreto_paper
L. Prieto, S. Pulgar, P. Flynn, and D. Mery, “On low-resolution face re-identification with high-resolution-mapping,” in Image and Video Technology: 10th Pacific-Rim Symposium, PSIVT 2022, Virtual Event, November 12–14, 2022, Proceedings. Berlin, Heidelberg: Springer-Verlag, 2023, p. 89–102. [Online]. Available: <https://doi.org/10.1007/978-3-031-26431-3_8>
loreto
L. Prieto, “Dataset and experimental protocol for face re-identification with low resolution images,” Master's Thesis, Pontifical Catholic University of Chile, Department of Computer Science, Santiago of Chile, November 2020.
vggface_2
Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman, “Vggface2: A dataset for recognising faces across pose and age,” 2018.
Parkhi2015
O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in Proceedings of the British Machine Vision Conference 2015, ser. BMVC 2015. British Machine Vision Association, 2015. [Online]. Available: <http://dx.doi.org/10.5244/c.29.41>
Luo_Zhu_Liu_Wang_Tang_2016
P. Luo, Z. Zhu, Z. Liu, X. Wang, and X. Tang, “Face model compression by distilling knowledge from neurons,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, Mar. 2016. [Online]. Available: <https://ojs.aaai.org/index.php/AAAI/article/view/10449>
Cheng_2020_CVPR
Z. Cheng, Q. Dong, S. Gong, and X. Zhu, “Inter-task association critic for cross-resolution person re-identification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
8985713
P. Saha and A. Das, “Nfgs enabled face re-identification for efficient surveillance in low quality video,” in 2019 Fifth International Conference on Image Information Processing (ICIIP), 2019, pp. 114–118.
Kortli2020
Y. Kortli, M. Jridi, A. Al Falou, and M. Atri, “Face recognition systems: A survey,” Sensors, vol. 20, no. 2, p. 342, Jan. 2020. [Online]. Available: <http://dx.doi.org/10.3390/s20020342>
Wang2021
M. Wang and W. Deng, “Deep face recognition: A survey,” Neurocomputing, vol. 429, p. 215–244, Mar. 2021. [Online]. Available: <http://dx.doi.org/10.1016/j.neucom.2020.10.081>
sun2015deeply
Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 2892–2900.
liu2016large
W. Liu, Y. Wen, Z. Yu, and M. Yang, “Large-margin softmax loss for convolutional neural networks,” arXiv preprint arXiv:1612.02295, 2016.
li2021beginner
J. Li, Z. Pei, and T. Zeng, “From beginner to master: A survey for deep learning-based single-image super-resolution,” 2021.
fadnavis2014image
S. Fadnavis, “Image interpolation techniques in digital image processing: an overview,” International Journal of Engineering Research and Applications, vol. 4, no. 10, pp. 70–73, 2014.
Zeiler2014
M. D. Zeiler and R. Fergus, Visualizing and Understanding Convolutional Networks. Springer International Publishing, 2014, p. 818–833. [Online]. Available: <http://dx.doi.org/10.1007/978-3-319-10590-1_53>
Shi_2016_CVPR
W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Fawcett2006
T. Fawcett, “An introduction to roc analysis,” Pattern Recognition Letters, vol. 27, no. 8, p. 861–874, Jun. 2006. [Online]. Available: <http://dx.doi.org/10.1016/j.patrec.2005.10.010>
wang2018esrgan
X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, C. C. Loy, Y. Qiao, and X. Tang, “Esrgan: Enhanced super-resolution generative adversarial networks,” 2018.
karras2019stylebased
T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” 2019.
Lim_2017_CVPR_Workshops
B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee, “Enhanced deep residual networks for single image super-resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
he2015deep
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2015.
wang2021realworld
X. Wang, Y. Li, H. Zhang, and Y. Shan, “Towards real-world blind face restoration with generative facial prior,” 2021.
yang2021gan
T. Yang, P. Ren, X. Xie, and L. Zhang, “Gan prior embedded network for blind face restoration in the wild,” 2021.
johnson2016perceptual
J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” 2016.
Johnson2016
——, Perceptual Losses for Real-Time Style Transfer and Super-Resolution. Springer International Publishing, 2016, p. 694–711. [Online]. Available: <http://dx.doi.org/10.1007/978-3-319-46475-6_43>
NIPS2015_a5e00132
L. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis using convolutional neural networks,” in Advances in Neural Information Processing Systems, C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, Eds., vol. 28. Curran Associates, Inc., 2015. [Online]. Available: <https://proceedings.neurips.cc/paper_files/paper/2015/file/a5e00132373a7031000fd987a3c9f87b-Paper.pdf>
wang2021realesrgan
X. Wang, L. Xie, C. Dong, and Y. Shan, “Real-esrgan: Training real-world blind super-resolution with pure synthetic data,” 2021.
simonyan2015deep
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2015.
kato2021mse
S. Kato and K. Hotta, “Mse loss with outlying label for imbalanced classification,” 2021.
Sikaroudi_2020
M. Sikaroudi, B. Ghojogh, A. Safarpoor, F. Karray, M. Crowley, and H. R. Tizhoosh, Offline Versus Online Triplet Mining Based on Extreme Distances of Histopathology Patches. Springer International Publishing, 2020, p. 333–345. [Online]. Available: <http://dx.doi.org/10.1007/978-3-030-64556-4_26>
wang2018cosface
H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu, “Cosface: Large margin cosine loss for deep face recognition,” 2018.
wang2018devil
F. Wang, L. Chen, C. Li, S. Huang, Y. Chen, C. Qian, and C. C. Loy, “The devil of face recognition is in the noise,” 2018.
http://arxiv.org/abs/2409.02171v1 | 20240903180001 | Entanglement dynamics in monitored Kitaev circuits: loop models, symmetry classification, and quantum Lifshitz scaling | ["Kai Klocke", "Daniel Simm", "Guo-Yi Zhu", "Simon Trebst", "Michael Buchhold"] | quant-ph | ["quant-ph", "cond-mat.stat-mech"] |
These two authors contributed equally.
Department of Physics, University of California, Berkeley, California 94720, USA
These two authors contributed equally.
Institut für Theoretische Physik, Universität zu Köln, D-50937 Cologne, Germany
Institut für Theoretische Physik, Universität zu Köln, D-50937 Cologne, Germany
Advanced Materials Thrust, The Hong-Kong University of Science and Technology (Guangzhou), Guangzhou, China
Institut für Theoretische Physik, Universität zu Köln, D-50937 Cologne, Germany
§ ABSTRACT
Quantum circuits offer a versatile platform for simulating digital quantum dynamics and uncovering novel states of non-equilibrium quantum matter.
One principal example are measurement-induced phase transitions arising from non-unitary dynamics in monitored circuits,
which employ mid-circuit measurements as an essential building block next to standard unitary gates.
Although a comprehensive understanding of the dynamics in generic circuits is still evolving, we contend that
monitored quantum circuits give rise to robust phases of dynamic matter,
which – akin to Hamiltonian ground state phases – can be categorized based on circuit symmetries and spatial dimensionality.
To illustrate this concept, we focus on measurement-only quantum circuits within symmetry classes BDI and D,
which are measurement-only circuit adaptations of the paradigmatic Kitaev and Yao-Kivelson models,
embodying particle-hole-symmetric Majorana fermions with or without time-reversal.
We establish a general framework – Majorana loop models – for both symmetry classes (in arbitrary spatial dimensions)
to provide access to the phenomenology of the entanglement dynamics in these circuits,
displaying both an area-law phase of localized Majorana loops and a delocalized, highly entangled Majorana liquid phase.
The two phases are separated by a continuous transition displaying quantum Lifshitz scaling,
albeit with critical exponents of two distinct universality classes.
The loop model framework provides not only analytical understanding of these universality classes in terms of non-linear sigma models,
but also allows for highly efficient numerical techniques capable of simulating excessively large circuits with up to 10^8 qubits.
We utilize this framework to accurately determine universal probes that distinguish both the entangled phases and the critical points
of the two symmetry classes. Our work thereby further solidifies the concept of emergent circuit phases and their phase transitions.
Entanglement Dynamics in Monitored Kitaev Circuits: Loop Models, Symmetry Classification, and Quantum Lifshitz Scaling
Michael Buchhold
September 9, 2024
=======================================================================================================================
§ INTRODUCTION
Many-body entanglement has emerged as a fundamental concept in understanding and classifying quantum phases of matter <cit.>.
In an equilibrium setting, it can serve as a tool to discern ground states of gapped from gapless Hamiltonians, to identify topological order, and to distinguish between generic and non-generic excited states <cit.>. Recently, this notion of many-body entanglement has been extended beyond static Hamiltonian states to characterize dynamically generated quantum states, in particular those arising in monitored quantum circuits. Analogous to Hamiltonian systems, robust entanglement structures, such as for instance area-law or volume-law scaling, arise in quantum circuits and can be used to characterize the generated quantum states <cit.>.
The robustness of these entanglement structures against small variations in microscopic circuit parameters defines entanglement phases of matter, akin to phases of matter in thermal equilibrium, which depend only on global properties, such as underlying symmetries, the range of quantum gates and measurements, and the spatial geometry. This perspective of defining entanglement phases in quantum circuits is intricately connected to the concept of universality in statistical mechanics – stating that macroscopic features of a many-body system are governed by a few global properties, such as symmetries, interaction range, and dimensionality – and it anticipates a symmetry classification of entanglement phases in monitored quantum circuits.
Along these lines it has been recognized that in generic quantum circuits with measurements, the presence or absence of a U(1)-symmetry, associated with the conservation of the total particle number or magnetization, dramatically alters the potential phases <cit.>. However, a comprehensive classification of circuit phases in terms of their symmetries is presently absent. Here we provide a first step in this direction, analyzing two symmetry classes of measurement-only circuits in two spatial dimensions. In addition, we provide a powerful framework for the analytical and numerical analysis of said circuits and symmetry classes in arbitrary spatial dimensions.
In a quantum circuit with mid-circuit measurements, monitored circuit for short, the global symmetries are inherited from the symmetries of the combined set of generators of the dynamics, i.e., from all measurement projectors and quantum gates that evolve the wave function. In this work, we focus on systems characterized by particle-hole symmetry, expressed through operators that are even in Majorana fermions. Specifically, our circuits are measurement-only circuit analogs <cit.> of the paradigmatic Kitaev model <cit.> (and variations thereof <cit.>), where frustration arises not from non-commuting Hamiltonian terms but from non-commuting two-qubit parity check measurements, see Fig. <ref> for an illustration.
In terms of symmetry, these “Kitaev circuits" adhere to C^2=1 particle-hole symmetry and, depending on the underlying spatial geometry of the circuit, fall into two categories with regard to time reversal symmetry. For bipartite spatial geometries (such as the honeycomb geometry) the system exhibits T^2=1 time-reversal symmetry and falls within symmetry class BDI of the ten-fold way classification of free-fermion systems <cit.>.
For non-bipartite spatial geometries (such as the next-nearest neighbor Kitaev <cit.> or the decorated honeycomb geometry which Yao and Kivelson have first explored as Hamiltonian model <cit.>), time-reversal symmetry in the ground state is broken and the system belongs to symmetry class D.
For the Hamiltonian systems, this distinction with regard to time-reversal symmetry has crucial impact on their (spin liquid) ground states. The Yao-Kivelson model, for instance, allows for the formation of a chiral spin liquid which is absent in the conventional honeycomb Kitaev model.
Whether a similar distinction arises also for the quantum states stabilized by the corresponding measurement-only circuits in symmetry class BDI and D is a central question that we address in this manuscript.
For both symmetry classes, e.g., the Kitaev honeycomb circuit or its generalizations including next-nearest neighbor couplings or the Yao-Kivelson geometry, we find that when the measurement-induced frustration is large, i.e., when non-commuting operators are measured frequently, each circuit creates an entangled Majorana liquid state. It displays a distinctive subsystem entanglement entropy S(A)∼ Llog(L) for a subsystem A=L× L of linear dimension L. In contrast, when the measurement-induced frustration is low, a weakly entangled area-law state is realized. The two phases are separated by a critical line, which displays a characteristic quantum Lifshitz scaling behavior <cit.>, previously associated with quantum dimer models <cit.>, Dirac fermions and certain (2+1)-dimensional conformal field theories (CFTs) <cit.>.
But while the general entanglement phase diagrams appear almost identical for the two systems, their underlying symmetry does manifest itself, primarily at the phase transition between the two principal entanglement phases which fall into distinct universality classes whose critical exponents we determine.
To unveil this physics, we introduce Majorana loop models in (d+1) dimensions as a joint theoretical framework which covers both the Hamiltonian and measurement cases. These loop models naturally encapsulate the dynamics of Majorana world lines, distinguishing between symmetry classes BDI and D through the orientability or non-orientability of world lines <cit.>. Moreover, they differentiate between Hamiltonian ground states and monitored systems using an internal parameter – the loop fugacity (which, in topological terms, is also referred to as d-isotopy parameter) <cit.>.
The fugacity n has an intuitive interpretation: it determines whether all world lines appear with equal probability, yielding measurement-only circuits (n=1) or whether world line configurations corresponding to high energy states are suppressed, yielding Hamiltonian ground states (n=√(2)) <cit.>.
The loop framework gives access to both analytical arguments and efficient numerical simulation techniques for monitored Majorana circuits. This allows for a detailed and thorough analysis of the statistical mechanics of circuits in symmetry classes BDI and D in two spatial dimensions and a detailed exploration of Lifshitz criticality in a setting of free, monitored fermions. It is enabled through new, optimized simulation techniques for loop models, which yield a numerical complexity scaling as 𝒪(Nlog(t)) for N∝ L^2 numbers of qubits and circuit depth t. In numerical simulations, system sizes of 10^8 qubits can be realized, reaching the largest system sizes currently accessible for monitored circuits and raising the standard for the quantitative determination of critical exponents and correlation functions in two spatial dimensions. In addition, we simulate the monitored Kitaev and Yao-Kivelson circuits of interest in this manuscript also in the conventional stabilizer representation. We employ state-of-the-art Clifford circuit simulations techniques <cit.> that reach system sizes of 10^4 qubits and critically analyze what kind of qualitative and quantitative insight they afford.
In summary, we discuss measurement-only Kitaev and Yao-Kivelson models as representatives of symmetry classes BDI and D in monitored circuits. We establish a joint framework for both classes – Majorana loop models – and utilize it to accurately determine the long-wavelength properties of a localized area-law and an entangled Majorana liquid phase, and the measurement induced phase transition separating them. This is a rigorous step towards the symmetry classification of monitored circuits in arbitrary dimensions. In addition, we establish that the measurement-induced phase transition in both symmetry classes is of the quantum Lifshitz-type, uncovering a hitherto unexplored link between loop models in (2+1) dimensions, monitored free fermions, and (2+1)-dimensional CFTs.
The paper is organized as follows: Below we provide an overview of the main results for the universal long-distance behavior of both circuits from the loop model and the Clifford simulations. In Sec. <ref>, similarities and differences between Hamiltonian and measurement dynamics are discussed, introducing the Kitaev and Yao-Kivelson models both in the Hamiltonian as well as in the measurement-only circuit framework. In Sec. <ref>, the framework of Majorana loop models is introduced for class BDI and class D and for Hamiltonians and circuits in arbitrary dimensions. In addition, we provide a dictionary between the loop framework and circuit observables. The analytical and numerical results for the loop model framework are presented in Sec. <ref>. These are complemented by a selection of observables obtained in high-performance Clifford simulations in Sec. <ref>. A discussion and conclusion is presented in Sec. <ref>.
§.§ Overview of main results
The measurement-induced dynamics of monitored circuits (discussed in this manuscript) share several similarities with the energetics-induced dynamics of their Hamiltonian counterparts.
In both cases, frustration is induced by the presence of non-commuting operators and the statistical mechanics of the stationary state can be mapped to a suitable (2+1)-dimensional loop model. While, however, the physics of Kitaev and Yao-Kivelson Hamiltonians is typically understood in terms of their Majorana band structure, their circuit counterparts feature a different link to many-body quantum physics. As a guiding picture, the Majorana loop model representation of both the measurement-only Kitaev and the Yao-Kivelson model in d spatial dimensions anticipate a link between the measurement-only circuits and the physics of diffusive Majorana metals – highly entangled liquid phases of Majorana fermions undergoing Brownian motion on a lattice – and disorder-induced localization in (d+1) dimensions. The transition between the liquid and the localized phase is induced by the degree of measurement-induced frustration, which we will use as an organizing principle in the following.
§.§.§ Large measurement-induced frustration
When all allowed bond operators of nearest neighbor qubits (or free Majorana fermions) in the honeycomb or decorated honeycomb lattice are measured with equal probability, i.e., at the isotropic point in the phase diagram, the measurement-induced frustration is maximal. This pushes the wave function into an entangled state with the subsystem entanglement of an A=L× L square growing as
S(A)∼ Llog(L) ,
characteristic of quantum liquids with a nodal (Fermi) surface in two dimensions <cit.>. The liquid state is robust against small variations in the circuit parameters, i.e., away from the point of maximum frustration, defining an extended highly entangled phase at large measurement frustration, as illustrated in the phase diagrams of Fig. <ref>.
For measurements drawn randomly in space-time, this liquid phase is reminiscent of diffusive metals arising in weakly disordered Fermi systems. This picture is strengthened in the loop model framework: large measurement-frustration causes the endpoints of Majorana loops to perform an isotropic random walk in (d+1)-dimensional space-time. This yields an entanglement structure akin to a disordered metal with diffusion constant 𝒟, see Sec. <ref>.
The long-distance properties of the random walkers are governed by a non-linear sigma model in class ℂℙ^n-1 for symmetry class BDI or ℝℙ^n-1 for symmetry class D, see Eq. (<ref>) below. The replica limit is determined by the fugacity n of the underlying loop model, i.e., n=1 for the circuit.
Both are single-parameter theories and depend only on the diffusion constant 𝒟, which consequently determines the behavior of physical properties at large distances. For instance, the entanglement entropy in the Majorana liquid phase is
S(A)∼√(𝒟)· Llog(L) .
The effective diffusion constant 𝒟 at large distances emerges from the non-linear sigma model under renormalization group transformations. Deep in the liquid phase, the renormalization of 𝒟 is negligible and we find reasonable agreement when approximating 𝒟=𝒟_mic, where 𝒟_mic is the microscopic diffusion constant from the measurement probabilities, see Fig. <ref>(a).
We show that the physical properties in the circuit, such as, e.g., entanglement, mutual information, and correlation functions, can be inferred from the distribution function P(ℓ) of Majorana loops of length ℓ in (d+1)-dimensional space-time. For a two-dimensional system with linear dimensions L_x, L_y in the x- and y-direction, this distribution separates into two parts: a Brownian regime for ℓ<L_xL_y, and a Poisson-Dirichlet tail at space-time volume filling distances ℓ>L_xL_y, see Fig. <ref>. The Brownian regime features a loop distribution P(ℓ)∼ℓ^-5/2, which does not distinguish between symmetry class BDI and D. In contrast, the Poisson-Dirichlet tail differentiates between both symmetry classes: it displays an asymptotic behavior P(ℓ) ∼ℓ^-1(ℒ-ℓ)^θ-1, where ℒ is the maximum distance of a single loop in space-time and θ=1 for class BDI and θ=1/2 for class D, respectively. Both symmetry classes thus realize space-time filling loops in the Majorana liquid phase. The tails of the distribution are not detectable at the space-time boundary and therefore do not discriminate different wave functions at final times. Instead, they reflect different dynamics in the circuit, which can be detected by out-of-time-ordered correlators (OTOCs), observable via circuit-ancilla measurement schemes, see Fig. <ref>. Such measurements allow one to universally discriminate the dynamics of Majorana liquids in different symmetry classes.
§.§.§ Small measurement-induced frustration
For small measurement-induced frustration, i.e., when a large fraction of measurements can be satisfied simultaneously, the Majorana world lines rarely undergo a measurement-induced motion. Instead, they remain localized in space with an average localization length ξ, akin to disorder-induced localization of a Fermi liquid. The loop distribution P(ℓ) thus acquires an exponential cutoff, P(ℓ)∼ℓ^-5/2exp(-ℓ/ξ), suppressing the buildup of correlations and entanglement over distances larger than ξ. Consequently, the subsystem entanglement of a contractible region L× L obeys an area law
S(A)∼ Llogξ - ln 2 + ⋯
on distances L>ξ.
The negative subleading term is a universal topological correction contributed by the gauge flux, as the complementary part of the fractionalized degrees of freedom from the qubits. Such a topological state is in the same universality class as the toric code wavefunction <cit.>, and the circuit can potentially serve as a (dynamical) quantum error correction code – when the frustration is tuned towards zero, the state resembles an instantaneous state of the Hastings-Haah Floquet code <cit.>.
§.§.§ Critical measurement-induced frustration
The Majorana liquid and the area-law phase are separated by a phase transition at a line of critical measurement-frustration. In the Majorana loop framework, this is a localization-delocalization transition, which takes place when the diffusion constant in the non-linear sigma model renormalizes to zero. Precisely at the transition, the Majorana fermion entanglement obeys an area law but with a peculiar subsystem-dependence that reveals quantum Lifshitz criticality. Despite obeying an area law, correlation functions and the loop distribution function are not bound by a finite correlation length but display algebraic, scale invariant decay, which is faster than the conventional ∼ 1/(distance)^2.
The fact that the non-linear sigma model is a single-parameter theory, with the only microscopic parameter entering being the microscopic diffusion constant 𝒟_mic, allows us to identify the location of the critical line of the phase transition.
In symmetry class BDI, we can extract the value of 𝒟_mic,c=3/16 exactly from the high symmetry points of the phase diagram (the yellow circles in Fig. <ref>(a)). This enables us to analytically determine, without any free parameter, the phase boundary in the Kitaev honeycomb geometry, see Fig. <ref>(b), and, tracing the same value of 𝒟_mic,c, also allows us to determine the phase boundary of the Kekulé Kitaev model, see Fig. <ref>(c).
This connection to the universal value of 𝒟_mic,c in these two BDI models not only illustrates the universality of the loop framework, but it also explains the previously obtained
circular structure of the phase diagram for the honeycomb geometry <cit.>.
The universal critical behavior at long wavelengths is determined by the ℂℙ^n-1 (ℝℙ^n-1) non-linear sigma model for symmetry class BDI (D) with the replica limit n→1 in d+1 dimensions. The corresponding universality classes have been discussed in the context of dense (fully-packed) polymers <cit.>, localization in gapless spin-singlet superconductors with disorder <cit.>, quantum magnets and spin-ice systems <cit.>. We numerically extract four different universal exponents using both the loop model framework with simulations of up to 10^8 qubits and, using the stabilizer framework, for Clifford circuits with up to 10^4 qubits. These results are summarized in Tab. <ref>, where we also compare our estimates with previous loop model simulations of both symmetry classes (which reached system sizes equivalent to 10^6 qubits <cit.>).
The critical exponents and their interpretation in the loop model and in the quantum circuit can be summarized as:
* The correlation length exponent ν determines the average loop length ⟨ℓ⟩
and the spanning number between the final and the initial state in the loop framework.
In the circuit, the former sets the average entanglement entropy in the localized phase
and the latter controls the late-time residual entropy during dynamical purification, see Fig. <ref>.
* The anomalous dimension η describes the power-law scaling of loop correlations in the bulk of space-time.
In particular, for the two-point watermelon correlator we have
G_2(r) ∼{exp(-r/ξ) localized phase; r^-(1+η) critical point; const. liquid phase} .
In the circuit this corresponds to the scaling of a two-point OTOC, see Fig. <ref>.
Alternatively, the anomalous dimension can be extracted from the Fisher exponent τ through the hyperscaling relation τ = (11-η)/(5-η).
The latter determines the loop length distribution in (d+1)-dimensional space-time, P(ℓ) ∼ℓ^-τ, see Figs. <ref> and
<ref>.
* The fractal dimension d_f, which is related to η,τ by hyperscaling relations d_f = (5-η)/2=3/(τ-1),
describes the structure of the loops in space-time. As with η, these correlations are reflected in circuit OTOCs in the bulk of space-time.
See Fig. <ref> for the scaling at the transition and Figs. <ref>, <ref>
for scaling in the critical phase.
* The order parameter exponent β determines the probability that a given space-time point falls on a macroscopic loop.
Analogous to quantum magnets, extensive loops correspond to macroscopic ordering of the spins, e.g., in a ferromagnet.
This is determined by the two-leg watermelon correlator in the limit of large separations, G_2(r ≫ 1, Δ) ∝Δ^2β,
where Δ is the distance to the critical point in the liquid phase, see Fig. <ref>.
This determines circuit OTOCs at long space-time distances.
Together with the universal OTOC ratios in the Majorana liquid phases, this provides a complete characterization of the two symmetry classes of measurement-only circuits.
§ KITAEV MODELS, CIRCUITS, AND SYMMETRIES
To set the stage we briefly review the fundamental traits of Kitaev physics – the fractionalization of quantum mechanical degrees of freedom and the subsequent formation of long-range entanglement <cit.>. We do this first in the context of the well-known honeycomb Kitaev model, i.e. in the Hamiltonian model in the parlance of this manuscript, where the ground states are quantum spin liquids with a characteristic and defining entanglement structure.
In a second step we turn to the quantum circuit variant of interest in this manuscript. We then turn to an elementary symmetry classification of these systems and the conclusions one can draw from this with regard to the formation of long-range entanglement.
§.§ Kitaev physics
§.§.§ Hamiltonian model
The Kitaev Hamiltonian can be defined on any lattice geometry that allows for a tricolorization of its bonds. This is naturally the case for trivalent lattice geometries such as the honeycomb lattice, but can also be expanded to more complicated lattice geometries with higher coordination number
<cit.>.
On each bond, an Ising-like interaction couples two neighboring spin-1/2 degrees of freedom along the X, Y, Z directions (in spin space)
associated with the bond's color
H = ∑_⟨ jk⟩_μ K_μσ_j^μσ_k^μ ,
where μ =x,y,z indicates the easy-axis of the bond-directional exchange.
For any tricoordinated lattice geometry, this model can be exactly solved via rewriting the model in a Majorana basis <cit.>.
To this end, each spin-1/2 is mapped onto four Majorana fermions {b^x, b^y, b^z, c} satisfying a gauge constraint b^x b^y b^z c = 1, see also the illustration in Fig. <ref> above and Fig. <ref> below. This new basis allows us to reexpress the Pauli operators as Majorana bilinears σ_j^μ = i b_j^μ c_j. The bond-dependent Ising interaction is then equivalent to a gauged Majorana hopping bilinear σ_j^μσ_k^μ = i u_kj c_j c_k, where u_kj ≡ i b_k^μ b_j^μ = ± 1 denotes the gauge connection on the bond.
Note that the gauge field u is not a gauge-invariant physical observable, but instead one needs to consider the Wilson loop plaquette operator, as a product of the Ising interactions surrounding a plaquette, which evaluates the gauge flux penetrating the plaquette. Importantly, these gauge fields remain entirely static and assume a fixed low-energy configuration (favored by a Majorana fermion mediated interaction between the gauge fluxes)
that can typically be predicted via Lieb's theorem <cit.>.
We will discuss this gauge physics in more detail below for the two principal lattice geometries of our manuscript.
The virtue of this reformulation, which at first sight might be perceived as simple operator algebra, is that it faithfully captures the low-energy physics
of the Hamiltonian model: The spin-1/2 degrees of freedom fractionalize into an itinerant Majorana fermion coupled to a static lattice gauge theory in the background – key characteristics of quantum spin liquid ground states <cit.>. The precise nature of these spin liquids depends on the underlying lattice geometry and the relative strength of couplings, but generally includes gapless, gapped, and chiral spin liquids, which are accompanied by long-range entanglement structures <cit.>. In fact, it is this formation of long-range entanglement in a quantum magnet that is the defining characteristic of a quantum spin liquid state and allows one to unambiguously distinguish it from ordered ground states or finite-temperature paramagnetic states <cit.>.
§.§.§ Quantum circuits
In a new twist on Kitaev physics, several groups have recently started to look into monitored Kitaev models <cit.> or “Kitaev circuits".
In these circuits one replaces the bond-directional exchange terms of the Hamiltonian by bond-directional joint two-qubit measurements, also known as parity checks. These are illustrated in Fig. <ref>, along with their circuit implementation using an ancilla qubit.
On a theoretical level, these bond-directional parity checks implement a non-unitary Kraus operator
M_jk ≡ exp(-(τ/2) s^μ_jk σ_j^μσ_k^μ) / √(2 cosh(τ)) ,
where μ = x, y, z indicates the type of bond-directional parity check, s=± 1 is the measurement outcome of the ancilla qubit,
and τ∈[0,+∞) characterizes the strength of measurement that is controlled by a unitary entangling gate parameter t∈[0,π/4]:
tanh(τ/2)=tan(t).
The entangling CNOT gates of Fig. <ref> correspond to t = π/4 rotations, resulting in strong, projective measurements.
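As a minimal numerical illustration of this parametrization (our own sketch; the restriction to a single ZZ bond and all variable names are assumptions made for compactness), one can construct the two Kraus operators for one bond-directional parity check, verify that they form a valid measurement channel, and see that they reduce to parity projectors as t → π/4:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)          # two-qubit ZZ parity operator, ZZ^2 = 1
I4 = np.eye(4)

def kraus(t):
    """Kraus operators M_s (s = +1, -1) of a ZZ parity check with
    measurement strength tanh(tau/2) = tan(t), for t in [0, pi/4)."""
    tau = 2.0 * np.arctanh(np.tan(t))
    norm = np.sqrt(2.0 * np.cosh(tau))
    # exp(-(tau/2) s ZZ) = cosh(tau/2) 1 - s sinh(tau/2) ZZ
    return [(np.cosh(tau / 2) * I4 - s * np.sinh(tau / 2) * ZZ) / norm
            for s in (+1, -1)]

M_p, M_m = kraus(0.5)
# Completeness of the weak measurement channel: sum_s M_s^T M_s = 1
assert np.allclose(M_p.T @ M_p + M_m.T @ M_m, I4)
# Near t = pi/4 the s = +1 operator approaches the parity projector (1 - ZZ)/2
print(np.round(kraus(np.pi / 4 - 1e-6)[0], 3))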
Here we focus on this Clifford regime of projective measurements and randomly sample the two-qubit parity checks in every microstep without evoking any spatio-temporal ordering (see Fig. <ref> above).
On a global scale one thereby transitions from (continuous) Hamiltonian dynamics where each exchange term is a projector onto a single, energetically favored two-spin state to a (discrete) measurement-only dynamics <cit.> where each projective measurement results in two possible, random outcomes.
While this might sound like a somewhat obscure idea, one should be reminded that such a connection between Hamiltonian and measurement-only circuit is also deeply embedded in the concept of the toric code <cit.> – it points to the implementation of a topological quantum memory via
stabilizer measurements, i.e. a measurement-only quantum circuit that implements rounds of four-qubit measurements
(whose outcome is interpreted as a syndrome and fed into a decoder to perform quantum error correction).
Of course, the toric code is also very well known in its Hamiltonian version which serves as an exactly solvable spin model that gives rise to a topological spin liquid ground state.
A key distinction between the toric code and the monitored Kitaev models of interest here is the commutativity of operators.
While all Hamiltonian terms/measurement operators commute with one another for the former, they do not for the latter.
This elevated level of measurement-induced frustration has important consequences on the entanglement structures
that can be stabilized in these circuits as we have discussed in the introduction and which, in contrast to the toric code,
are also distinct from the entanglement in the ground states of the corresponding Hamiltonians.
Returning to the monitored Kitaev models at hand, it is important to note that running such a Kitaev circuit on a given initial state
will give rise to a multitude of possible trajectories, depending on the sequence of measurement outcomes.
It is the ensuing ensemble of pure states,
created by running a Kitaev circuit multiple times, that is the object of interest in the following. It allows one to
define (in a statistical sense) typical states with a characteristic entanglement structure which we wish to classify
and compare to the ground-state entanglement stabilized by Hamiltonian dynamics
(see Appendix <ref> for some Hamiltonian phase diagrams).
To understand the formation of long-range entanglement in the circuit setting, one can again rely on the language of fractionalization
which we have introduced above in the context of the exact solution of the Hamiltonian model in a Majorana basis. To do so, let us
perform a completely analogous rewriting of the 2-level qubit states in terms of four Majorana fermions and a subsequent regrouping into one
c-Majorana fermion per site and gauge connections (arising from Majorana bilinears on every bond) as illustrated in Fig. <ref> below.
In this language, every two-qubit parity check has the effect of measuring the local Majorana fermion parity,
while effectively teleporting single Majorana fermions <cit.> – for a schematic illustration see panels (c) and (d) in Fig. <ref>. Repeated local measurements can then quickly lead to teleportation over large distances and the formation
of long-range entanglement, see Fig. <ref>(e).
§.§ Time-reversal symmetry and gauge physics
A crucial ingredient in the classification of entanglement emerging either in the ground states of Kitaev Hamiltonians or in typical states stabilized by Kitaev circuits is the role of time-reversal symmetry and gauge physics.
To set the stage, let us recall that, for a bipartite lattice/circuit geometry, the time-reversal transformation can be defined as 𝒯 = iσ^y 𝒦, where 𝒦 denotes complex conjugation, which leads to σ^x(y)(z) → -σ^x(y)(z). The combination of time-reversal symmetry 𝒯^2 = +1 with the intrinsic particle-hole symmetry of Majorana fermions puts both the Hamiltonian model and the quantum circuit in the BDI symmetry class.
The situation is fundamentally different when the symmetry class is changed from BDI to class D, which can be realized by moving to a non-bipartite geometry as first noted by Kitaev <cit.> and later explored in the context of a lattice model by Yao and Kivelson <cit.>.
Even though on the microscopic level of the Hamiltonian or quantum circuit description the system remains time-reversal invariant,
the low-temperature ground state or dynamically stabilized typical state is not.
Instead, time-reversal symmetry is spontaneously broken in this non-bipartite setting as we will discuss in the following.
§.§.§ Hamiltonian model
Let us first inspect time-reversal symmetry in the context of Hamiltonian models, where its role in stabilizing ground-state flux patterns
and its spontaneous breaking for non-bipartite lattice geometry is well established.
As a first example, consider the seminal honeycomb model with its bipartite lattice geometry.
Here the Wilson loop, indicating the flux through one elementary hexagonal plaquette, can be expressed as
W_p = σ_1^xσ_2^yσ_3^zσ_4^xσ_5^yσ_6^z=± 1,
with a clock-wise ordering of the six spin-1/2s around the plaquette.
This flux is invariant under time reversal symmetry, i.e., 𝒯^-1 W_p 𝒯 = W_p.
In addition, the Wilson loop operator commutes with the Hamiltonian, resulting in a static flux pattern, which by virtue of Lieb's theorem <cit.>
is the completely flux-free state in the ground state. As a consequence, the ensuing spin liquid is primarily characterized by the band
structure of the free Majorana fermions. For isotropic coupling, K_x = K_y = K_z in Eq. (<ref>), this is the celebrated
quasi-relativistic band structure with two gapless Dirac cones.
In terms of entanglement, this is an area-law entangled state albeit one with a subleading O(ln L) correction <cit.>
S(A) ∼ L + ln L + … .
Now, if we move to the Yao-Kivelson model on its non-bipartite lattice, note that while the flux through the large, dodecagonal plaquette (of length 12)
is invariant under time reversal, the triangle flux is not – W_ = σ_1^xσ_2^yσ_3^z changes sign under time reversal. In the low-temperature regime of the Hamiltonian (i.e. below the thermal crossover at which the spin degrees fractionalize <cit.>), the Majorana fermions mediate an effective Ising interaction between the triangle fluxes <cit.>. This leads to a flux-ordering transition at a finite transition temperature <cit.> at which the system spontaneously breaks time reversal symmetry. Simultaneously, the Majorana fermions respond to the chiral triangle flux and form a p+ip superconductor which falls into two topologically distinct phases <cit.>: a topological phase at weak pairing and a trivial phase at strong pairing. The transition between the two states can be driven by the relative strength of the triangle versus hexagonal coupling (J and K in the overview of Fig. <ref>).
In the language of quantum magnetism, the time-reversal symmetry breaking, topological state is called a chiral spin liquid <cit.>,
with an entanglement structure
S(A) ∼ L - ln 2 ,
where the topological nature of this phase is reflected by a non-vanishing, subleading topological correction γ_ topo = ln(2) to the area-law entanglement <cit.>.
§.§.§ Quantum circuits
Let us now turn to a discussion of the role of time-reversal symmetry and gauge physics in the measurement-only circuit analogs of the Kitaev model.
If starting from a featureless, maximally mixed state, the purification dynamics due to measurements can be separated into two contributions <cit.> – a gauge flux purification and a Majorana purification.
Since the gauge flux commutes with the measurement operators, it is purified with a constant rate leading to an exponential decay of its entropy in time.
The Majorana purification, in contrast, depends on the degree of frustration – it is exponentially fast for the small frustration phase, but algebraically slow for the large frustration phase. Its behavior is tightly related to the pure state entanglement entropy that will be discussed in the following.
To summarize the main results, the purification time for the gauge flux always scales logarithmically with the system size, 𝒪(ln N), while the purification time of the Majorana fermions is 𝒪(ln N) in the less frustrated and less entangled phase, but follows a power law 𝒪(N^p) in the highly frustrated and entangled liquid-like phase.
Such two-stage purification dynamics was found to be a rather generic feature for measurement-only circuits built from subsystem codes <cit.>.
*Gauge purification.—
Let us first discuss the gauge flux purification dynamics in more detail. Note that the flux is not directly measured in our protocol but instead indirectly via the Majorana fermion parity checks.
Every single Majorana parity measurement enforces a Majorana hopping (see Fig. <ref>), and when a set of measurements encircles a plaquette the moving Majorana detects and collapses the gauge flux to one of its eigenstates.
As a consequence, a particular block in the block-diagonal matrix representing the Gaussian Majorana fermion state is picked up
( ρ(W)  0  0  ⋯ ;   0  ρ(W')  0  ⋯ ;   0  0  ρ(W'')  ⋯ ;   ⋯  ⋯  ⋯  ⋱ ) ,
where each block corresponds to a fixed gauge flux configuration W, W', W”.
Over time, this block matrix is being purified towards a pure state, akin to lowering the temperature of a free fermion system.
Due to the Clifford nature of the Kitaev circuits discussed here, every possible flux configuration occurs with equal probability, such that the state is purified into a random flux configuration, distinct from the flux-ordered Hamiltonian ground state <cit.>. Nevertheless, the states associated with any gauge flux configuration share exactly the same entanglement entropy. This is due to a generic property of Clifford stabilizer states: the entanglement of a stabilizer state ρ = ∏_j=1^N (1 ± O_j)/2, where {O_j} are the stabilizer generators, does not depend on whether the eigenvalues are positive or negative. A minimal example is that all four possible Bell pairs |↑↑⟩+|↓↓⟩, |↑↓⟩+|↓↑⟩, |↑↑⟩-|↓↓⟩, |↑↓⟩-|↓↑⟩ share one bit of entanglement entropy.
Since the Majorana parity checks are performed stochastically, there is a constant rate of purifying the gauge flux (denoted as r in the following),
which leads to an exponential decay of the flux entropy in time <cit.>
S_flux(t) ∼ N_p e^-rt ,
where N_p is the number of the plaquettes of the same type.
This decay is illustrated in Fig. <ref>
for various points in the phase diagram of the monitored honeycomb Kitaev model – the initial decay of the state entropy towards a plateau at S(t)/N = 1/2 signals the flux purification, which is independent of the circuit phase.
In general, the purification rate r depends on microscopic details, such as the number of edges of the plaquette
– the larger the plaquettes, the slower the gauge purification.
Apart from these microscopic details, the time scale to purify the flux to a pure configuration can be estimated as t ∝ln N_p / r
– the purification evolution here is akin to lowering the temperature to approach the ground state flux configuration in the Hamiltonian <cit.>.
For plaquettes with an odd number of edges <cit.>, the time-reversal symmetry of the protocol obstructs purification (rendering r=0), as proved in the following: any stabilizer O of the time-evolved state is a product of a subset of the local measurement operators K_j
O = ∏_j K_j .
Thus, when every local measurement respects a global symmetry such as time reversal, so does any stabilizer operator of the evolved state
𝒯^-1 O 𝒯 =∏_j ( 𝒯^-1 K_j𝒯) = O .
For example, a hexagon or dodecagon plaquette can be decomposed into its surrounding bond operators, while a triangle plaquette operator is time-reversal odd and cannot be generated by any two-body time-reversal-even operator measurements. Nevertheless, a product of an even number of triangle plaquettes is time-reversal even and can be purified, for example via the sequence of stabilizer generators depicted in Fig. <ref> below.
This implies that a product of an even number of triangle plaquettes can still be purified exponentially fast, while a product of an odd number of triangle plaquettes remains always undetermined, respecting the global time-reversal symmetry. Thus, one global bit, corresponding to the action of a chiral Ising time-reversal transformation, remains conserved, leaving one bit of entropy, the same as the time reversal symmetric mixed state of the Yao-Kivelson Hamiltonian at low temperature.
Crucially, for the quantum circuits, the entanglement dynamics does not depend on the individual flux assignments of the gauge configurations.
Thus we have two strategies to break time-reversal symmetry:
(i) we directly measure the 3-body next-nearest neighbor (n.n.n.) Kitaev interaction on the honeycomb lattice,
which explicitly breaks the time reversal symmetry by fixing the ±π/2 fluxes on this triangle spanned by the n.n.n. spins;
(ii) we move to the decorated honeycomb lattice with triangle plaquettes (a la Yao and Kivelson) and fix a single triangle flux from the initial state.
Both strategies yield, as we will show in the following, the same entanglement phases.
*Majorana purification.—
The purification dynamics of the Majorana fermions (the second stage of purification in Fig. <ref>), in contrast, primarily depends on the amount of frustration, given by the relative weight
of non-commuting operators, and not the specifics of the underlying lattice geometries.
For small frustration, i.e. a large bias towards certain commuting measurement patterns (such as the toric code stabilizers),
the Majorana fermion purifies into the preferred dimer states exponentially fast, thereby stabilizing only a few, exponentially long-lived,
global topological states (potentially serving as logical qubits).
For large frustration, i.e. closer to the point of isotropic measurement strengths, in the liquid-like phases of the phase diagram,
the Majorana fermion generally purifies in a power-law, governed by the physics of the statistical loop model to be introduced in Section <ref>.
§.§ Relation to Hastings-Haah code
This section has introduced and discussed Kitaev circuits as measurement-only quantum circuit analogs of the Kitaev model with an eye on the underlying lattice/circuit geometry. Before closing we also want to draw a few connections to the Hastings-Haah code <cit.>.
The latter has attracted considerable interest from the quantum information community for its ability to dynamically stabilize a topological qubit,
while also being intimately connected to the Kitaev physics discussed above.
The Hastings-Haah code imposes two-qubit parity checks with a strong spatio-temporal ordering. In space, it assigns a Kekulé pattern
of couplings to the honeycomb geometry (see Fig. <ref>) and then performs time-periodic measurements of
all XX, YY, ZZ parity checks. This Floquet dynamics periodically swaps the instantaneously stabilized state among the three corners of the
Kekulé Kitaev circuit phase diagram (see Fig. <ref>),
which like their conventional honeycomb counterpart, are toric code phases. This is what allows it to create a topological qubit state.
Here we take the broad interest in the Hastings-Haah code as motivation to study the measurement-only circuit variant of the
Kekulé Kitaev model <cit.> along with its conventional honeycomb siblings.
§ MAJORANA LOOP MODELS IN (2+1) DIMENSIONS
Loop models arise quite generically in systems of quadratic Majorana fermions <cit.>.
Here we describe how the loop picture emerges, both in Hamiltonian dynamics and in quantum circuits.
We briefly review the statistical mechanics of loops in 3D and then discuss the universality classes of loop models,
based on symmetry and loop fugacity, and name representative models therein.
Finally, we define important observables and entanglement quantities for loops and Majorana circuits, providing a dictionary between the two.
In a quadratic Majorana model all information about the state is encoded in the two-point Majorana correlation functions ⟨ i γ_l γ_m ⟩.
If one asks, in addition, that the quadratic Majorana state is also a stabilizer state, then all Majoranas have definite pairings ⟨ i γ_l γ_m ⟩ = 0, ± 1.
Such a stabilizer state can be graphically represented by a pairing diagram wherein nodes l and m are connected by an open arc if and only if
i⟨γ_l γ_m ⟩ = 1.
Any generic Gaussian state can then be written as a linear combination of stabilizer states or, alternatively, of pairing diagrams.
Evolving the state over time leads to rearrangement of the Majorana pairings, with fermion world lines tracing out loops in space-time as schematically illustrated in Fig. <ref>.
The entanglement and mutual information between two distinct subregions in space A,B are given by the number of arcs connecting A and B <cit.> (e.g., the blue loop in Fig. <ref>),
independent of the sign of the Majorana parities [Diagrammatic representations including the parity sign are possible but follow a more complicated loop algebra than required here <cit.>.].
When the number of Majoranas is conserved at all times (e.g., as for a spin-chain after a Jordan-Wigner transformation), the space-time is fully packed with loops.
Generic models of fully-packed loops may be described by a braid-monoid algebra <cit.>.
Loop configurations and transformations between them are then generated by a sequence of local operations (i.e., generators of the algebra) which rearrange the world lines.
The allowed local operations acting on Majoranas γ_l and γ_m are depicted in Fig. <ref>(a).
The identity 1_lm leaves the pairings unmodified, amounting to uninterrupted propagation forward in time.
By contrast, the Temperley-Lieb (TL) generators e_lm result in spatial propagation of loops.
On the Majoranas, this generator acts as a projector onto the local fermion parity,
e_lm ∝ 𝒫_lm = (1/2)(1 + iγ_lγ_m) .
In particular, for initial pairings (k,l) and (m,n), e_lm implements a loop surgery to yield new pairings (l,m) and (k,n).
Finally, the braid operator b_lm acts as the Majorana swap operator
b_lm = (1/√(2))(1 + γ_l γ_m) ,
exchanging the two Majoranas and causing the world lines to cross over one another.
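These operator identities can be checked directly in a small explicit representation. The sketch below (our own illustration) encodes four Majoranas on two qubits via a standard Jordan-Wigner-type construction, an arbitrary choice made only for this example, and verifies the projector property, the Temperley-Lieb relations with the normalization e_lm = √2 𝒫_lm adopted below, and the unitarity of the braid:

```python
import numpy as np

# Four Majoranas on two qubits via a Jordan-Wigner-type encoding (our choice)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
g = [np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)]

P = lambda l, m: 0.5 * (np.eye(4) + 1j * g[l] @ g[m])     # parity projector
b = lambda l, m: (np.eye(4) + g[l] @ g[m]) / np.sqrt(2)   # braid / Majorana swap

e01, e12 = np.sqrt(2) * P(0, 1), np.sqrt(2) * P(1, 2)     # e_lm = sqrt(2) P_lm
assert np.allclose(e01 @ e01, np.sqrt(2) * e01)           # e^2 = sqrt(2) e
assert np.allclose(e01 @ e12 @ e01, e01)                  # Temperley-Lieb relation
B = b(0, 1)
assert np.allclose(B @ B.conj().T, np.eye(4))             # braid is unitary
assert np.allclose(B @ g[1] @ B.conj().T, g[0])           # world lines cross: g1 -> g0
```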
Loop models corresponding to a braid-monoid algebra have three key properties – ambient isotopy, scalar fugacity (d-isotopy), and twist relations – which are depicted in Fig. <ref>(b-d).
Ambient isotopy reflects the ability to smoothly deform loops.
The scalar fugacity endows closed loops with a weight n.
Lastly, the twist relations allow braids b_lm (b_lm^-1) to be undone by incurring a factor ω (ω^-1) related to spin statistics.
All three leave the connectivity of the loop endpoints unchanged and may thus be viewed as equivalence relations (up to a scalar) between Majorana trajectories with identical entanglement in the initial and final states.
Throughout this work, we are interested exclusively in the dynamics of entanglement and mutual information in Majorana models, such that only the connectivity ⟨ iγ_lγ_m⟩ matters.
Braiding phases can thus be neglected and we fix ω = 1.
This yields the equivalence relation b_lm∼ b_lm^-1 on the space of loop configurations, reducing the braid-monoid algebra to the Brauer algebra with loop parameter n <cit.>.
By decomposing the Majorana dynamics into a sequence of operations {1_lm, e_lm, b_lm}, one obtains a transfer matrix representation of the evolution, i.e. a brickwall quantum circuit as illustrated on the left-hand side of Fig. <ref>.
We may now distinguish two principal scenarios where Majorana loop models arise: Hamiltonian and circuit dynamics.
§.§.§ Hamiltonian dynamics
Suppose that the local transfer matrix takes the form
T ∼ 1 + τ 𝒫_lm .
Such a transfer matrix arises naturally from Trotterizing the imaginary time evolution e^-β H of a quadratic Majorana Hamiltonian H = i∑_⟨ l m ⟩ A_lmγ_l γ_m.
In order to satisfy the loop isotopy condition e_lm e_mn e_lm = e_lm, we take normalized projectors e_lm = √(2) 𝒫_lm akin to defining the generators of the TL algebra <cit.>.
This implies the relation e_lm^2 = √(2) e_lm and identifies the loop fugacity
n = √(2)
for the imaginary time evolution of a Gaussian Majorana Hamiltonian.
The details of the Majorana adjacency matrix A_lm≠ 0 fix the geometry and the symmetries for the model.
§.§.§ Circuit dynamics
Alternatively, the transfer matrix may arise in a measurement-only circuit consisting of projective measurements of parities iγ_lγ_m.
Here the sign of the measurement outcome poses a challenge to specifying a loop representation.
In particular, the two possible outcomes correspond to 𝒫_lm and 𝒫_ml.
Distinguishing such phase information in the graphical language of loops would require non-trivial twisting relations (ω≠ 1), thereby complicating the relative weights in the trajectory-averaged loop ensemble.
If, however, we are interested only in entanglement and other quantities which depend exclusively on the modulus ⟨ i γ_l γ_m ⟩, then the sign of the measurement outcome may be neglected.
This amounts to imposing a twist relation ω = 1 and thus an equivalence relation 𝒫_lm ∼ 𝒫_ml, with a loop configuration now representing an equivalence class of circuits.
Unlike the Trotterized imaginary-time evolution from before, here the projector 𝒫_lm is always accompanied by a normalization of the state (i.e. |ψ⟩ → 𝒫_lm|ψ⟩/√(⟨𝒫_lm⟩)) yielding a transfer matrix of the form
T ∼ 1 + p/(1-p) 𝒫_lm .
Due to the normalization, projective measurements automatically satisfy the loop isotopy condition.
We may thus identify the generator e_lm as the operator which acts on any state |ψ⟩ as the normalized projective measurement 𝒫_lm.
In addition, the idempotence of projectors 𝒫_lm^2 = 𝒫_lm fixes the loop fugacity
n=1
for each individual measurement in the circuit.
§.§.§ Universality classes of loop models
The behavior of correlation functions and observables at large distances in loop models is determined by two ingredients: the loop fugacity n and the underlying symmetry of the loop model. Despite realizing a different loop fugacity, the transfer matrix of both the Hamiltonian and measurement-only circuit are generated by the operators 𝒫_lm, which are quadratic in Majorana fermions. Thus each Hamiltonian symmetry class has a corresponding symmetry class counterpart in the measurement-only circuit. The loop fugacity then yields a fine structure of universality classes for each symmetry class.
For quadratic Majorana theories, particle-hole (PH) symmetry is always present and we thus distinguish two different symmetry classes based on whether time-reversal symmetry is present or not. In class BDI time-reversal is present, which enables a bipartition of the Majorana lattice into sublattices A and B, such that each operator 𝒫_lm acts exactly on one Majorana fermion on sublattice A and on one Majorana fermion on sublattice B. The paradigmatic example of such a model is Kitaev's honeycomb model in two spatial dimensions <cit.>. In the loop model framework, this symmetry is known as “orientability”. It allows one to assign a unique orientation to each lattice site, either forward or backward in time, such that each loop carries this orientation through space-time.
In the absence of time-reversal, no bipartition of the lattice can be found, translating to the absence of orientable loop configurations. In the loop framework, such models are known as non-orientable loop models or completely packed loop models with crossings (CPLC). This is symmetry class D; paradigmatic Hamiltonian models include the Yao-Kivelson model <cit.> and the Kitaev honeycomb model with next-nearest neighbor interactions or in the presence of a magnetic field. In the following, we will synonymously apply the language of Hamiltonian symmetry classes and loop model symmetry classes, i.e., we will refer to symmetry class BDI for orientable loop models and symmetry class D for non-orientable ones.
§.§ Statistical mechanics of loops in 3D
A d-dimensional Majorana circuit (d=2 in this work) corresponds to a d+1-dimensional loop model in space-time.
Each lattice site l with Majorana fermion γ_l at a given time t when measurements are performed represents a vertex in a regular space-time lattice.
When all Majorana fermions are included in at least one measurement 𝒫_lm during the circuit evolution, the loop model is fully packed, meaning that every vertex in the lattice has coordination number z=4.
Then the space of possible loop configurations is generated by routing loops through each vertex in one of the three ways depicted in Fig. <ref>(a).
Such a model is described by a partition function
Z = ∑_𝒞 W(𝒞) n^N(𝒞) ,
where the sum is over all possible loop configurations 𝒞.
Each term in the sum involves two components: (i) a “local” part W(𝒞) and (ii) a “non-local” part n^N(𝒞).
The local term W(𝒞) is a product of Boltzmann weights associated to the choice of loop connections at each vertex, giving the probability of performing the unique series of measurements (or Hamiltonian evolution steps) that yields the loop configuration 𝒞.
This is the product of local measurement probabilities (or Hamiltonian matrix elements A_lm).
The non-local term n^N(𝒞) accounts for the loop fugacity n associated to each of the N(𝒞) closed loops in configuration 𝒞.
As a side remark, we note that the weight n^N(𝒞) poses a challenge to Monte Carlo simulations of loop models for any fugacity n≠1 since it cannot be represented by a local update rule for evolving a single time slice.
This singles out measurement-only circuits with n=1 as a particularly attractive example for simulating novel entanglement phases.
For larger integer fugacity n ∈ℤ_+, simulation is possible by more complicated Monte-Carlo methods involving the whole space-time lattice <cit.> or additional ancilla degrees of freedom <cit.>, but this is beyond the scope of this work.
§.§.§ Non-linear sigma models
A field theory formulation of the loop model partition function Z is provided by Z=∫ D[Q]exp(-S[Q]) with the non-linear sigma model () action
S[Q] = 1/(2g) ∫ d^(d+1)x  tr[(∇ Q)^2 ] + (topological terms)
Here, Q takes values in ℂP^(n-1) for symmetry class BDI and in ℝP^(n-1) for symmetry class D.
The field Q is parametrized by a vector z, which is complex for class BDI and real for class D, such that Q^αβ = z^αz̅^β - δ_αβ is a traceless n × n Hermitian (orthogonal) matrix with normalization condition z^† z = n.
At short distances, the coupling constant g scales with the microscopic diffusion constant 𝒟_mic∼ g^-2, which is derived in Sec. <ref>.
The measurement-only limit n=1 is obtained by taking n→ 1^+ as a replica limit or alternatively by considering a supersymmetric formulation <cit.>.
Both the ℂP^(n-1) and the ℝP^(n-1) non-linear sigma models appear in a variety of different applications.
Prominent examples are disordered fermion systems and quantum Hall layers in two dimensions.
Although the loop models considered here are in (2+1) dimensions, the relation to disordered fermions remains prominent: independent of the loop fugacity and the symmetry class, but depending on the parameter g and the topological terms, the sigma models support: (i) a long-loop phase with 𝒟≠0 at the largest distances, yielding long-range entangled Majorana pairs akin to a disordered metal, for which the topological terms are irrelevant,
(ii) a short-loop phase with a diffusion constant 𝒟 flowing to zero under renormalization group transformations, yielding area-law entanglement akin to localized fermion states but with distinct topological properties [We note that this is different from the (1+1)-dimensional case, for which class BDI is always localized, except at the critical points separating different topological phases, while class D may support a metallic state akin to weak anti-localization, known as a Goldstone phase.].
While for n=2 the metallic phases of the action S[Q] in (2+1) dimensions are known from the Hamiltonian case, the n=1 limit describes loops undergoing Brownian motion with diffusion constant 𝒟∼ g^-2. For n=1, the intermediate distance behavior, i.e., on distances ℓ≤ L_x L_y of the spatial volume, of Brownian random walkers is the same for both symmetry classes. This, however, manifestly changes on larger distances ℓ≥ L_x L_y: here, the Brownian random walkers have non-zero probability of forming space-time volume filling loops which occupy a non-zero fraction of the space-time lattice. These macroscopic loops are sensitive to the symmetry class and yield a means to distinguish symmetry classes BDI and D in a measurement-only quantum circuit, see Sec. <ref>.
At the transition from the metallic to the localized regime, a phase transition occurs with universal scaling behavior depending on the symmetry class and the fugacity n <cit.>, which we explore below.
§.§ Loop-Circuit Dictionary
Here, we connect the key observables of the loop model framework to the Majorana circuits, providing a mapping between the two pictures.
In passing from loops in (2+1)-dimensional space-time to the dynamics of Majorana fermions in two dimensions, one dimension in the loop picture is designated as time.
The state of the Majorana circuit at a given time corresponds to the surface of the loop model along one direction.
There are two such temporal surfaces: (i) that at t=0 where boundary conditions on the loops correspond to a choice of initial state for the Majoranas, and (ii) that at time t=T corresponding to the final state produced by the circuit.
Correlations in this output state amount to boundary correlations in the loop model.
By contrast, loop correlations in the space-time bulk require the ability to peek at the state midway through the circuit evolution.
To track information regarding both the surface and the bulk of the loop model, we keep track of not only the loop connectivity, but also their integrated path lengths.
Any open arc is labeled by a tuple (l, m, ℓ), where l and m represent the endpoints and ℓ is the path length.
In a numerical simulation this information is readily incorporated into the update scheme:
Consider a state with initial pairings (k, l, ℓ_1) and (m, n, ℓ_2) where l and m are nearest neighbors.
Measuring the parity iγ_lγ_m yields a state with new pairings and loop lengths (k, n, ℓ_1 + ℓ_2 + 1) and (l, m, 1).
When k = m, such a measurement closes the loop with total length ℓ_1 + 1 which is then recorded in a histogram for the trajectory.
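A minimal sketch of this bookkeeping (our own illustration; the data structures and function names are ours, and measurement-outcome signs are ignored, as in the loop representation) could look as follows:

```python
def measure_parity(pairing, length, l, m, closed_loops):
    """Projectively measure i*gamma_l*gamma_m and update the pairing diagram.
    pairing[a] is the partner of Majorana a, length[a] the path length of its arc."""
    k, n = pairing[l], pairing[m]
    if k == m:                              # l, m already paired: a loop closes
        closed_loops.append(length[l] + 1)  # record total length l_1 + 1
    else:                                   # loop surgery: (k,l),(m,n) -> (k,n),(l,m)
        pairing[k], pairing[n] = n, k
        length[k] = length[n] = length[l] + length[m] + 1
    pairing[l], pairing[m] = m, l           # new short arc between l and m
    length[l] = length[m] = 1

# Usage: four Majoranas, initial local pairings (0,1) and (2,3) of length 1
pairing = {0: 1, 1: 0, 2: 3, 3: 2}
length = {0: 1, 1: 1, 2: 1, 3: 1}
closed = []
measure_parity(pairing, length, 1, 2, closed)  # surgery -> arc (0,3) of length 3
measure_parity(pairing, length, 0, 3, closed)  # closes a loop of total length 4
print(pairing, closed)                         # arcs (0,3) and (1,2); histogram [4]
```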
§.§.§ Loop length distributions
The central observable for the loop models we consider here is the probability distribution for the length of loops, both along the (temporal) surface and through the space-time bulk.
The surface loop length between two points l,m at fixed time T describes the distances between two entangled Majorana fermions γ_l,γ_m in the circuit.
The distribution of surface loops thus yields complete information on the quantum state at time T with regard to its entanglement and mutual information.
The bulk loop length yields information on out-of-time ordered correlators (OTOCs), explored below.
For an open arc connecting Majoranas γ_l and γ_m on the t=T boundary, we define the surface length ℓ_surf^α along the α-direction as follows.
Let ℓ_x x̂ + ℓ_y ŷ be the vector connecting sites l and m in the two-dimensional plane.
Using periodic boundary conditions (PBC), the surface lengths of the loop are defined as ℓ_surf^α≡min(ℓ_α, L_α - ℓ_α).
We define the corresponding length distributions P_surf^α(ℓ) as that obtained in the stationary state of the circuit when taking a pure initial state with only local pairings.
Characteristics of the entanglement dynamics during the evolution are encoded in the bulk loop length distribution P_bulk(ℓ).
It is the probability distribution for the total path length of a randomly selected loop.
Here we need to make a distinction between open arcs and closed loops, as these will generically exhibit distinct length distributions.
During the circuit evolution, many closed loops are formed, but the number of open arcs is fixed by boundary conditions and remains constant.
If we impose PBC in time, all loops are closed and there is no need to make this distinction.
In practice, temporal PBC can be implemented with the assistance of ancilla measurements, which is discussed in Sec. <ref> together with the subtle but important role of boundary conditions for the bulk loop statistics.
§.§.§ Surface observables: Entanglement entropy and mutual information
Observables described by the loop distribution of a temporal surface at time t=T correspond to the entanglement content of the Majorana fermion state ρ(T).
We consider the von Neumann entanglement entropy S_A ≡ -Tr[ρ_A log_2 ρ_A] for a subsystem A, where the reduced density matrix ρ_A ≡ Tr_A̅[ρ] is obtained by tracing out the complementary system A̅,
and the mutual information I_2(A,B) ≡ S_A + S_B - S_AB between subsystems A and B.
For Majorana stabilizer states, these measures of entanglement are determined wholly by the non-vanishing two-point Majorana correlations ⟨ iγ_l γ_m ⟩ = 1.
Thus the connectivity encoded by the loop representation contains all necessary information to specify S_A and I_2(A,B).
The mutual information I_2(A,B) therefore corresponds precisely to the number of loops connecting subsystems A and B, while the von Neumann entanglement entropy for a pure state is S_A = (1/2) I_2(A, A̅), reflecting that each Majorana arc carries a half qubit of entanglement.
Multipartite entanglement measures such as the tripartite mutual information I_3(A,B,C) vanish due to the additivity of mutual information in the loop framework.
Throughout this work we will frequently consider the entanglement entropy of a cylindrical subsystem of length ℓ, with boundaries encircling the system in the x̂ or ŷ directions.
For such a partition, the entanglement entropy can be computed from the surface loop length distribution <cit.>.
In particular, the average entanglement entropy for subsystem A in a cylindrical region of length ℓ along the x̂ direction is given by
S^x_ℓ = L_y ∑_ℓ'min(ℓ, ℓ') P_surf^x(ℓ') .
The mutual information between such cylindrical subsystems can be computed in a similar manner.
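As an illustration of how the relation above is used in practice (our own sketch; the normalization of the discretized distribution is chosen for convenience and is not the paper's convention), one can feed a 1/ℓ² tail into the formula and recover the logarithmic growth characteristic of the liquid phase discussed below:

```python
import numpy as np

def entanglement_x(ell, p_surf_x, L_y):
    """S^x_ell = L_y * sum_{l'} min(ell, l') P_surf^x(l')."""
    lengths = np.arange(len(p_surf_x))
    return L_y * np.sum(np.minimum(ell, lengths) * p_surf_x)

# A 1/l^2 tail (liquid phase) reproduces the L_y * log(ell) growth
L = 1 << 14
p = np.zeros(L)
p[1:] = 1.0 / np.arange(1, L, dtype=float) ** 2
p /= p.sum()                                   # illustrative normalization
print([round(entanglement_x(ell, p, L_y=1), 2) for ell in (16, 64, 256, 1024)])
```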
§.§.§ Spanning loops and purification
A single loop connecting the two temporal surfaces at t=0 and t=T corresponds to a Majorana fermion which remains unpaired during the entire circuit evolution. Such loops are called spanning and reflect the dynamical purification under measurement: Consider a maximally mixed initial state ρ(t=0) ∝1, corresponding to each Majorana being unpaired at t=0. The purity of the Majorana state ρ at time t=T is given by the total entropy S_L(T) = -Tr[ρ(T)log_2ρ(T)]. Each unpurified qubit at time T corresponds to two unpaired Majorana fermions and thus to half the number of spanning loops n_s(T), i.e., S_L(T) = (1/2) n_s(T).
In the loop model framework, the spanning number, i.e., the dynamical purification, serves as an accurate quantifier for the different loop model phases and the critical point separating them <cit.>.
To do so, we assume fixed aspect ratios T / L_y and L_x / L_y.
Then for small distance δ from the critical point, we employ a scaling ansatz,
n_s(δ, L) = f(x)[1 + β L^-ω] ,
x = L^1/ν δ[1 + αδ] ,
where -ω is the dimension of the leading irrelevant operator and α≠ 0 gives a polynomial correction.
§.§.§ Bulk loops, watermelons, and OTOCs
Beyond the distribution of bulk loop lengths P_bulk(ℓ), bulk properties of the loop model are captured by the watermelon correlators G_k(r), the probability that two points in space-time separated by distance r are connected by exactly k distinct arcs.
For example, G_2(r) gives the probability that two points with separation r lie along the same loop.
When the points do not lie along the same fixed-time slice, then G_k(r) takes the form of an out-of-time order correlator (OTOC) for the Majoranas.
As was previously pointed out in Ref. <cit.>, watermelon correlators can be accessed in the circuit by an ancilla-based measurement scheme.
For example, the two-leg watermelon correlator G_2(r) can be measured by considering space-time points (t_1, l) and (t_2, m) separated by distance r.
Consider ancillas A and B, each consisting of a Majorana pair with definite parity, e.g. ⟨ i γ_A,1γ_A,2⟩ = 1.
After evolving the circuit to time t_1 (t_2), a “marked” loop can be effectively inserted into the circuit by coupling the ancilla and bulk via measurement of iγ_A,2γ_l (iγ_B,2γ_m).
The history of the world lines which had been passing through these points in space-time is then effectively stored in the remaining ancilla γ_A,1 and γ_B,1.
We then evolve the circuit to late time t_f ≫ t_2 and impose boundary conditions on the bulk qubits (e.g., by measurement) such that the only remaining open arcs are those through the ancilla.
Had we taken only individual Majoranas for each ancilla, then the watermelon correlator G_2(r) would explicitly take the form of an OTOC, ⟨γ_A(t_f) γ_A(0) γ_B(t_f) γ_B(0) ⟩ - ⟨γ_A(t_f) γ_A(0) ⟩⟨γ_B(t_f) γ_B(0) ⟩, which for stabilizer states coincides with the mutual information between the ancilla I_2(A,B).
By construction, the mutual information I_2(A, B) counts the number of loops connecting points (t_1, l) and (t_2, m) in space-time, such that G_2(r) = (1/2) I_2(A, B).
For higher order correlators G_k(r), additional ancilla are required, but the general procedure remains the same.
§ DYNAMICS OF MAJORANA LOOPS
In this section we present results from large-scale simulation of the measurement-only circuits in the loop framework. As representatives for the symmetry classes, we consider the honeycomb Kitaev model for class BDI and the next-nearest neighbor honeycomb Kitaev model for class D. Numerical results for the Yao-Kivelson circuit are presented in App. <ref>.
By varying the relative probability for measuring different bonds, one may tune the circuit between distinct measurement-induced phases, including several topologically distinct area-law phases and an extended metallic phase with a logarithmic area-law violation and long space-time loops.
In symmetry class BDI, i.e., in the measurement-only Kitaev circuit, the area-law phases arise in the toric-code limit where one flavor of bond measurement dominates.
Area-law phases also appear in symmetry class D, i.e., in the next-nearest neighbor Kitaev geometry and the Yao-Kivelson circuit by appropriate choice of parameters, see Fig. <ref>.
Besides the various area-law phases, circuits in both symmetry classes host an entangled liquid phase, reminiscent of a weakly disordered Majorana metal phase, which we characterize in Sec. <ref>.
Then in Sec. <ref>, we shed light on the universality of the entanglement transition separating metallic and area-law phases. Finally, we discuss the shape of the phase diagram for class BDI and the properties of the metallic phase based on the picture of random Brownian walkers.
§.§ Long loops and the Majorana liquid phase
When the non-commuting measurements introduce a sufficient degree of frustration, the endpoints of the Majorana world lines no longer remain confined to a finite area but start to undergo a random Brownian motion.
This generates long loops in the circuit and stabilizes a Majorana liquid or metallic phase with characteristic Llog L entanglement.
The entanglement structure in the stationary state is determined by the surface loop distribution in the loop model.
In the liquid phase, this distribution does not distinguish between the two symmetry classes BDI and D, which we discuss below.
A universal distinction between BDI and D, however, is observable in the bulk loop distribution.
§.§.§ Surface loops and entanglement
The entanglement structure of a Majorana state ρ(T) at time t=T is captured by the probability distribution P^x,y_surf(ℓ) of open loop arcs along the temporal boundary of the loop model.
In the liquid phase, the loop length distribution exhibits power-law scaling
P^α_surf(ℓ) ∼ℓ^-2 .
For a cylindrical subsystem of length L and circumference L_y, Eq. (<ref>) yields an entanglement entropy S(L, L_y) ∼ L_y log(L) (see Appendix <ref> for a numerical confirmation).
Such a logarithmic violation of the area-law of fermions in 2D is often associated with the presence of a Fermi surface <cit.>, but it is also expected for metallic states with non-vanishing conductivity in the presence of weak disorder <cit.>.
The scaling of P^α_surf(ℓ) and of the entanglement entropy is a general feature of the liquid phase for both class BDI and D.
Let us denote c̃_α for α = x,y as the coefficient of the entanglement S = c̃_α L_α log(L) for a cut along the α-direction. Away from the critical point, this coefficient is generally not universal but rather varies continuously with microscopic details of the model (i.e. the lattice geometry and the bond weights). This may be understood in analogy to a smooth deformation of the Fermi surface, which continuously modifies the logarithmic entanglement scaling in the Hamiltonian setting, an analogy that
we develop further below.
The effect of orientability here is a rather trivial one.
Suppose that we fix the length ℓ_x.
In the orientable case, there are fewer allowed Majorana pairings for any ℓ_x, as correlations between sites on the same sublattice must necessarily vanish.
As a result, the typical total loop length ℓ = √(ℓ_x^2 + ℓ_y^2) is longer, and the probability density P_surf(ℓ_x) is reduced by a constant factor.
§.§.§ Bulk loops and universal circuit correlations
The two symmetry classes BDI and D are distinguishable by their bulk loop statistics. On short distances ℓ≤ L_x L_y and away from the critical point, Brownian motion in three space-time dimensions yields a mean-field decay
P_bulk(ℓ) ∝ℓ^-τ
with τ=5/2 and a fractal dimension d_f=3/(τ-1)=2 for both symmetry classes, depicted in Fig. <ref>(b). However, Brownian walkers in 3D have a non-zero probability of never returning to their starting point, and to form “macroscopic” loops which occupy a non-zero fraction of the space-time volume.
The statistics of such extensive loops provide an unambiguous means for distinguishing between the two symmetry classes.
For finite system sizes, macroscopic loops are sensitive to the spatiotemporal boundary conditions.
Here, one distinguishes absorbing boundary conditions, yielding open Majorana world lines, such as, e.g. a mixed initial state, from reflecting boundary conditions in space-time [Strictly speaking, macroscopic loops would emerge also for periodic boundary conditions in time, which are, however, challenging to implement in a circuit.].
For the latter, all world lines are closed including at the final and initial time.
This is realized by starting with a pure initial state at t=0 and by terminating the circuit at t=T with a fixed set of measurements that closes all world lines.
With reflecting boundaries all loops are closed, and macroscopic loops manifest in the distribution P_bulk(ℓ) giving rise to a distinct scaling regime at distances ℓ≥ L_x L_y, shown in Fig. <ref>(b).
Here, macroscopic loops follow a Poisson-Dirichlet (PD) distribution <cit.>
ℓ·P_bulk(ℓ) = (θ/ℒ) (1 - ℓ/(fℒ))^(θ-1) ,   (ℓ ≫ L_x L_y)
where ℒ is the total number of links in space-time (i.e. the volume). Whereas f is a non-universal quantity,
the parameter θ is a universal parameter which depends only on the loop fugacity n and the symmetry class.
In the most general case, one finds θ = n for symmetry class BDI, while for symmetry class D, θ = n / 2.
For the Majorana circuits at hand with fugacity n=1, this leads the normalized bulk loop length distribution to approach a constant
ℓ· P_bulk(ℓ) ∼ℒ^-1 (orientable)
for ℓ→ℒ in class BDI,
while class D displays a square-root divergence
ℓ· P_bulk(ℓ) ∼ℒ^-1/2(ℒ-ℓ/f)^-1/2 (non-orientable) .
Such a divergence reflects the tendency for a concentration of probability density in larger macroscopic clusters, due to the greater mobility of loops without the BDI constraint of time-reversal.
The results of numerical simulations, shown in Fig. <ref>(b), confirm the PD scaling regime for macroscopic loops, with the parameter θ consistent with loop fugacity n=1.
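The Poisson-Dirichlet statistics quoted above can also be cross-checked independently of the circuit by a stick-breaking construction. The following sketch (our own illustration; the macroscopic fraction f is set to one and the sample sizes are arbitrary) estimates the probability that two or three uniformly chosen points fall into the same macroscopic fragment and compares it with the corresponding moments Γ(1+θ)Γ(m)/Γ(m+θ), i.e. the f → 1 limit of the quantities P_m introduced below:

```python
import numpy as np

rng = np.random.default_rng(7)

def pd_fragments(theta, cutoff=1e-10):
    """Fragment weights of a Poisson-Dirichlet / GEM(theta) partition
    generated by stick breaking with Beta(1, theta) variables."""
    sizes, rest = [], 1.0
    while rest > cutoff:
        v = rng.beta(1.0, theta)
        sizes.append(rest * v)
        rest *= 1.0 - v
    return np.asarray(sizes)

for theta in (1.0, 0.5):                              # class BDI and class D
    samples = [pd_fragments(theta) for _ in range(10000)]
    p2 = np.mean([np.sum(s ** 2) for s in samples])   # two points, same fragment
    p3 = np.mean([np.sum(s ** 3) for s in samples])   # three points, same fragment
    print(theta, round(p2, 3), 1 / (1 + theta),
          round(p3, 3), 2 / ((1 + theta) * (2 + theta)))
```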
In order to access the universal part of the PD distribution, i.e., the parameter θ, one faces two challenges:
(i) implementing measurements of loop quantities in space-time and on distances ℓ≫ L_x L_y much larger than the spatial extent and
(ii) eliminating the dependence on the non-universal parameter f in Eq. (<ref>).
Let us start by solving the latter.
Consider m points r⃗_i with i=1,…,m in space-time which are separated by macroscopic distances r⃗_i - r⃗_j≳ L_x,L_y.
In the thermodynamic limit, if any two points r⃗_i and r⃗_j are on the same loop, then that loop is macroscopic.
The probability that all m points lie along the same loop is <cit.>
P_m ≡P(r⃗_1, …, r⃗_m) = f^m Γ(1 + θ)Γ(m)/Γ(m + θ).
Here, Γ is the Γ-function and the non-universal quantity f appears as a prefactor.
Thus, f can be eliminated by taking appropriate ratios P_m_1^k_1 / P_m_2^k_2 for m_1 k_1 = m_2 k_2, leading to universal quantities which distinguish the two symmetry classes <cit.>.
We propose a way to measure the probabilities P_m directly by generalizing existing ancilla schemes <cit.>.
Since P_2 is simply the large-separation limit r→∞ of the two-point watermelon correlator G_2(r), it may be measured via the protocol described earlier in Sec. <ref>.
For higher-order quantities P_m, we use m ancillae to insert marked loops at all m points in space-time, as depicted in Fig. <ref>.
Once again we evolve the circuit to late times and then measure all bulk Majoranas, leaving open loops only terminating on the ancillae.
The mutual information between groups of ancillae counts precisely the probability that the space-time points where we measured the bulk and ancilla had been along the same loop, so a quantity like P_3 amounts to requiring I_2(A,B) = I_2(B, C) = I_2(C,A) = 1.
In practice this can be obtained by comparing the magnitude of I_2(A,B) before and after tracing out ancilla C, where the latter case simply reduces to P_2.
Here we consider such a scheme with four points r⃗_i at separations on the order of L_y and measure P_2, P_3, and P_4 to obtain the following ratios,
P_2^2/P_4 = Γ(1+θ)Γ(4+θ)/(6 Γ(2+θ)^2) = 1 for θ = 1,   35/36 for θ = 1/2 ,
P_2^3/P_3^2 = Γ(1+θ)Γ(3+θ)^2/(4 Γ(2+θ)^3) = 9/8 for θ = 1,   25/24 for θ = 1/2 .
Importantly, both give values which differ between the two symmetry classes.
In Fig. <ref>, we show that the ratios indeed converge toward the universal value as system size is increased.
This then offers a practical scheme for distinguishing between the long-loop phases which would otherwise not be possible from the steady-state entanglement.
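The universal numbers quoted in these ratios follow directly from the Γ-function expression for P_m; a short sketch (our own, with f set to one since the non-universal factor cancels in the ratios):

```python
from math import gamma

def P(m, theta):
    # P_m with the non-universal factor f set to one (it cancels in the ratios)
    return gamma(1 + theta) * gamma(m) / gamma(m + theta)

for theta, label in [(1.0, "class BDI"), (0.5, "class D")]:
    print(label,
          P(2, theta) ** 2 / P(4, theta),       # -> 1 (BDI), 35/36 (D)
          P(2, theta) ** 3 / P(3, theta) ** 2)  # -> 9/8 (BDI), 25/24 (D)
```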
We note that when implementing (open) absorbing boundaries in space-time, any macroscopic loop will be cut into an extensive number of individual open loops.
Then the loop length distribution matches the first passage time distribution for a random walk in one spatial dimension with absorbing boundaries
P_bulk(ℓ) ∼ P_FP(ℓ) ∼ℓ^-5/2 e^-αℓ
with α^-1∼ L_x L_y.
In Fig. <ref> (see Appendix), we verify this expected scaling for open loops in the metallic phase of both symmetry classes.
§.§.§ Analytical entanglement scaling and phase boundaries from the diffusion picture
Viewing the dynamics in the metallic phase as Majorana world lines undergoing Brownian motion in space-time allows us to establish a direct link between (i) the non-linear sigma model in Eq. (<ref>), (ii) the entanglement entropy and, (iii) for class BDI, the microscopic measurement probabilities.
Note that the latter enables accurate analytical predictions, for symmetry class BDI, for the entanglement scaling and the location of the transition line between the metallic and the localized phases for arbitrary geometries.
We start by considering loop endpoints undergoing Brownian motion in space-time with an effective diffusion constant 𝒟 at large wavelengths.
The entanglement entropy at a fixed time t=T is then determined from open arcs terminating at the temporal boundary.
Designating one of the arc endpoints as the “start”, we may view the arc as the path of a random walker which starts near and eventually terminates upon an absorbing boundary at t=T.
Then the total bulk length of the arc is equivalent to the first passage time τ of the random walker
[More precisely, the movement of the loop endpoint along the temporal direction should be viewed as a persistent random walk, where the direction of propagation is reflected whenever the Majorana is involved in a measurement. This yields a mild correction to the timescales, but does not alter the universality.].
The distribution of first passage times for one absorbing boundary in three dimensions is known to be P_FP(τ) ≈√(2/π)τ^-3/2.
During this time, both endpoints undergo Brownian motion in the two-dimensional spatial plane, yielding a distribution for the distances between endpoints P_2D(r,τ) ≈2r𝒟τe^-r^2/𝒟τ.
Averaging with respect to the time τ, yields the expected distribution of spatial displacements
P_surf(r) = ∫ dτ P_FP(τ) P_2D(r,τ) = √(2𝒟)/r^2
with r^2=ℓ_x^2+ℓ_y^2.
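The τ-integral above can be checked numerically; the following sketch (our own illustration, assuming scipy is available; the value of 𝒟 and the small-τ cutoff are arbitrary choices) compares the quadrature, using the forms of P_FP and P_2D quoted in the text, with the closed form √(2𝒟)/r²:

```python
import numpy as np
from scipy.integrate import quad

D = 0.25                      # illustrative diffusion constant

def p_surf(r):
    """Numerically average P_2D(r, tau) over the first-passage time tau."""
    f = lambda tau: (np.sqrt(2 / np.pi) * tau ** -1.5
                     * (2 * r / (D * tau)) * np.exp(-r ** 2 / (D * tau)))
    val, _ = quad(f, 1e-9, np.inf)   # small lower cutoff avoids 0/0 at tau = 0
    return val

for r in (1.0, 2.0, 4.0):
    print(r, p_surf(r), np.sqrt(2 * D) / r ** 2)   # quadrature vs. closed form
```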
The entanglement entropy along a cut depends on the distribution of loop lengths projected along a fixed axis.
For isotropic Brownian motion in space, this gives
P_surf^x(ℓ_x) = 4∫_0^∞ dℓ_y P_surf(r)/(2π r) = 2√(2𝒟)/(πℓ_x^2).
From this distribution we infer that the logarithmic entanglement scaling has coefficient c̃_x = 6√(2𝒟)/π.
Varying the relative measurement probabilities of different bonds will generally introduce some anisotropy to the Brownian motion such that 𝒟^x ≠𝒟^y and c̃_α∝√(D^α). This scaling of the logarithmic entanglement proportional to the square root of the diffusion constant appears also in monitored fermion systems with unitary dynamics <cit.>, and is seemingly generic for measurement-induced fermion liquid-type phases.
The effective diffusion constant 𝒟 emerges at long wavelengths from the renormalization group (RG) flow of the in Eq. (<ref>). The is a single-parameter theory, initialized with the microscopic short-distance diffusion constant 𝒟_mic, which depends on the lattice geometry. The RG flow only depends on the fugacity n, the symmetry class and the dimensionality d. For n=1 and d=3, each symmetry class thus has a one-to-one correspondence 𝒟_mic↔𝒟. For bipartite lattices, i.e., for class BDI, the random walker is always on the same sublattice after two steps and the microscopic diffusion constant is readily inferred from the measurement probabilities.
Consider the measurement-only Kitaev circuit, where bonds are measured with probabilities K_x + K_y + K_z = 1. The diffusion constants may be estimated by considering the mean-squared displacement of a random walker with jump rates set by K_α and bonds of unit length. Setting 𝒟^z_mic as the diffusion constant for the direction parallel to the ZZ-bonds and 𝒟^⊥_mic as the diffusion constant perpendicular to it yields
𝒟_mic^z = 9/8 K_z(K_x + K_y) , 𝒟_mic^⊥ = 1/6 (2𝒟_mic^z + 9K_x K_y) ,
𝒟_mic = 1/2(𝒟_mic^z + 𝒟_mic^⊥) =3/4 (K_x K_y + K_x K_z + K_yK_z) .
The total diffusion constant 𝒟_mic is invariant under continuous rotation of K_x/y/z with respect to the isotropic point
– this explains the circular symmetry of both the metallic phase and its phase boundary, as numerically found in Refs. <cit.>.
We emphasize two particular regimes: (i) In the vicinity of the isotropic point K_α=1/3, i.e., deep in the metallic phase, we expect the RG flow corrections to 𝒟 to be weak. Thus setting 𝒟=𝒟_mic proves to be a good approximation in the vicinity of the isotropic point, see Fig. <ref>(a). Since under RG the diffusion constant will generally flow towards a smaller value, 𝒟_mic at the isotropic point serves as an upper bound for the average entanglement growth in each direction, 𝒟≤ 1/4. (ii) In the vicinity of the critical line marking the transition between the metallic and the localized phase, the renormalization of 𝒟_mic is strong and 𝒟_mic≫𝒟. Nevertheless, the one-to-one correspondence 𝒟_mic↔𝒟 [Neglecting the effect of anisotropy in the microscopic values.] allows us to extract the position of the critical line for any bipartite geometry. At K_x = K_y = 1/2 and K_z = 0, the location of the critical point is exactly known: the system decouples into disconnected one-dimensional strings, each of which displays percolation critical behavior <cit.>. Using this point as an estimate for the critical value 𝒟_mic,c provides
critical diffusion constant: 𝒟_mic,c=3/16.
For the Kitaev honeycomb lattice, this yields a circular contour in parameter space (see Fig. <ref>(b)) which coincides exceptionally well with numerical estimates for the entanglement transition.
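A compact numerical illustration of these two statements (our own sketch; the closed-form parametrization of the contour 𝒟_mic = 3/16 on the probability simplex is our own small exercise, not the paper's procedure) evaluates 𝒟_mic at the isotropic and decoupled-chain points and traces the predicted critical line:

```python
import numpy as np

def D_mic(Kx, Ky, Kz):
    """Microscopic diffusion constant on the honeycomb Kitaev circuit,
    D_mic = (3/4)(Kx Ky + Kx Kz + Ky Kz), with Kx + Ky + Kz = 1."""
    return 0.75 * (Kx * Ky + Kx * Kz + Ky * Kz)

print(D_mic(1/3, 1/3, 1/3))   # isotropic point: 1/4 (upper bound)
print(D_mic(1/2, 1/2, 0.0))   # decoupled chains: 3/16, the critical value

# Trace the estimated critical line D_mic = 3/16 on the probability simplex
for kz in np.linspace(0.0, 1/3, 5):
    s = 1.0 - kz                           # Kx + Ky
    disc = s ** 2 - 4 * (0.25 - s * kz)    # from Kx*Ky + (Kx+Ky)*Kz = 1/4
    kx = 0.5 * (s + np.sqrt(disc))
    print(f"Kz = {kz:.3f}:  Kx = {kx:.3f}, Ky = {s - kx:.3f}")
```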
To further underpin the strength of the diffusion picture, we provide the numerically obtained phase diagram for the measurement-only Kekulé-Kitaev geometry illustrated in Fig. <ref>(b). This system also belongs to symmetry class BDI but its phase diagram exhibits a different shape of the critical line as displayed in Fig. <ref>(c). Using Eq. (<ref>) again provides a rather accurate estimate for its phase boundary based on a calculation of the diffusion constant as
𝒟_mic = 3/8 [K_g K_r (2-3K_gK_r) + K_b(K_g + K_r)(2 + K_g K_r) - 3 K_b^2 (K_g^2 - K_g K_r + K_r^2) ],
where again we have considered the mean-squared displacement of a random walker after two steps along lattice edges, averaging over starting points in the unit cell.
Lastly, let us discuss a more rigorous approximation scheme for the entanglement entropy. We note that even deep in the metallic phase, the microscopic diffusion constant 𝒟_mic is larger than 𝒟 determined from the entanglement scaling by a factor of approximately 3/2.
This difference reflects the fact that the microscopic calculation treated the loop endpoints as non-interacting Brownian walkers.
In actuality, due to the Pauli principle, the loops are constrained to not overlap with themselves or one another, corresponding to an effective excluded volume.
To account for this, consider the role of a nonzero density ρ of closed loops in the bulk of space-time.
These act as obstructions to the Brownian motion of open arc endpoints, suppressing diffusion.
At leading order, this may be treated as a uniform probability ρ that at any step the random walker remains in place, sending 𝒟→𝒟(1-ρ). The density ρ is hard to extract analytically but easily accessible numerically.
We find that the fraction ρ varies continuously from ρ≈1/3 at the isotropic point to ρ=1 in the area-law phase.
In Fig. <ref>(a), we show that the rescaled microscopic diffusion coefficient 𝒟_mic(1-ρ) better matches the fitted 𝒟, reproducing the magnitude deep in the metallic phase and going to zero continuously at the transition.
Thus, accounting for the excluded volume effect accurately captures the interactions neglected in 𝒟_mic.
Alternatively we might arrive at the same correction by considering a self-avoiding walk on a lattice with coordination number z and connectivity constant μ.
Here the quantity 1-ρ corresponds to the ratio μ/z; for various 3D lattices the inverse ratio z/μ falls in the range [1.2, 1.5] <cit.>.
The direct relation between the entanglement scaling, i.e., the prefactor c̃ of the metallic Llog(L) term, and the effective diffusion constant 𝒟 at distance L allows us to numerically track the RG flow of 𝒟 from its microscopic starting point 𝒟_mic towards its asymptotic value as illustrated in Fig. <ref>(a) for linear system sizes L=16,64,1024. This is remarkable, since in d=3 dimensions, the NLσM is no longer perturbatively controlled and the RG flow of 𝒟 is not accessible analytically.
We emphasize the connection between c̃, the diffusion constant for Brownian motion of loops, and their relation to the logarithmic entanglement scaling found in weakly disordered Fermi liquids.
In the ground state of a disordered metal with disorder strength γ, the diffusion constant is related to the Fermi velocity v_F and the disorder by 𝒟∼ v_F^2 / γ, implying c̃∼ v_F / √(γ).
Indeed, such proportionality is expected from calculations of the entanglement entropy from a Fermi surface <cit.>.
§.§ Quantum Lifshitz criticality and universal scaling behavior at the entanglement transition
In both symmetry classes, the Majorana liquid and localized phases are separated by a critical line exhibiting quantum Lifshitz scaling of the entanglement entropy. At this critical line, the effective diffusion constant vanishes, resulting in an area law for the entanglement entropy. The distribution of Majorana loop lengths, however, is not yet localized but displays universal algebraic scaling behavior that clearly differentiates the two symmetry classes. This unique combination of universal algebraic scaling and the area law leads to the distinctive subleading entanglement scaling characteristic for quantum Lifshitz criticality.
In the Kitaev honeycomb model for symmetry class BDI, the critical line is reached by tuning the probabilities K_α. Quantum Lifshitz scaling is observed, except at the three points where one of the K_α = 0, reducing the model to an array of one-dimensional chains. For class D, we set the next-nearest neighbor bond measurement probabilities to J_x = J_y = J_z = J and tune J, K_α through the critical regions. For J > 0, quantum Lifshitz scaling is generally present.
§.§.§ Quantum Lifshitz entanglement and surface loops
The entanglement entropy in the loop framework is computed from Eq. (<ref>).
At a genuine two-dimensional transition we observe an algebraic decay of surface loop lengths, P_surf(ℓ) ∝ℓ^-γ_surf.
Away from the percolation points with one K_α=0, we find that γ_surf≈3, shown in Fig. <ref>(a).
This is faster than in the Majorana liquid phase, where γ_surf = 2 leads to an Llog(L)-growth.
As a result, the transition exhibits a subsystem dependence of the entanglement entropy S(ℓ, L) = aL + b J(ℓ / L).
Here J(u) is the “Lifshitz” scaling function [For unit aspect ratio of the spatio-temporal volume.]
J(u) = log(θ_3(iλu)θ_3(iλ(1-u))/η(2iu)η(2i(1-u))) ,
where θ_3 is the Jacobi theta function, η is the Dedekind eta function, and λ is a fitting parameter <cit.>.
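For readers who wish to reproduce the collapse, the scaling function is straightforward to evaluate numerically. The sketch below uses mpmath and assumes the conventions θ_3(τ) ≡ θ_3(0|τ) with nome q = e^{iπτ} and the standard product representation of the Dedekind eta function; these conventions are our assumption and should be checked against the cited derivation.

import mpmath as mp

def theta3(tau):
    """Jacobi theta function theta_3(0 | tau), nome q = exp(i*pi*tau)."""
    return mp.jtheta(3, 0, mp.exp(1j * mp.pi * tau))

def dedekind_eta(tau):
    """Dedekind eta(tau) = q^(1/24) prod_{n>=1}(1 - q^n) with q = exp(2*pi*i*tau)."""
    q = mp.exp(2j * mp.pi * tau)
    return q ** (mp.mpf(1) / 24) * mp.qp(q)     # mp.qp(q) = (q; q)_infinity

def lifshitz_J(u, lam):
    """Torus entanglement scaling function J(u) at unit aspect ratio."""
    num = theta3(1j * lam * u) * theta3(1j * lam * (1 - u))
    den = dedekind_eta(2j * u) * dedekind_eta(2j * (1 - u))
    return mp.re(mp.log(num / den))

for u in (0.1, 0.25, 0.5):                      # evaluate with the fitted lambda = 3.4
    print(u, lifshitz_J(u, 3.4))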
In Fig. <ref>(b) we show the collapse of the entanglement data onto a curve accurately described by this Lifshitz scaling function.
For both symmetry classes, we find λ = 3.4 ± 0.1 for cuts along either direction of the system.
Although derived initially for a quantum Lifshitz model <cit.>, this torus entanglement scaling has been observed in a variety of (2+1)-dimensional CFTs <cit.>.
Notably, the Rényi entropy takes precisely the form of Eq. (<ref>) also for quantum dimer models and resonating valence bond (RVB) states <cit.>.
Given the close link between models of dimers, loops, and height fields <cit.>, one may expect that this entanglement scaling ought to extend also to the case of loops with fugacity n=1 (relevant to the quantum circuits at hand).
Indeed, higher fugacity loop models have been employed in numerical studies to verify the Lifshitz scaling of entanglement in quantum magnets <cit.>.
Similarly, the bond-length distribution in the antiferromagnetic Heisenberg model on the square lattice exhibits the same scaling exponent as P_surf(ℓ) in our circuit model <cit.>.
We note here that quantum Lifshitz scaling is compatible with the simultaneous presence of conformal symmetry and an area law of the entanglement in (2+1) dimensions.
It is remarkable that this entanglement scaling, though apparently quite universal, has been reported previously only in the context of interacting circuits <cit.>.
For fermion-parity symmetric circuits, the mapping to an interacting dimer model suggests that Lifshitz entanglement scaling is quite generic.
More broadly, requiring conformal invariance at the entanglement transition imposes strict constraints such that one might generically expect scaling of the form Eq. (<ref>) except at fine-tuned points.
§.§.§ Correlation length exponent ν
To accurately determine the critical point K_c and the correlation length exponent ν, we consider the spanning number n_s for fixed aspect ratios L_x = 2L_y (i.e. L_x × L_x plaquettes) and circuit depth t=L_y.
We then perform a finite-size scaling (FSS) analysis using an ansatz generically of the form in Eq. (<ref>) and a cubic B-spline to fit the scaling function.
Moreover, by taking sufficiently large system sizes and a narrow window around the critical point, we find that any irrelevant scaling variables may be dropped from the ansatz.
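As an illustration of this procedure, a collapse fit can be set up as follows. This is a toy version with synthetic data in place of the measured spanning numbers, and with our own choice of knots and optimizer, rather than the production analysis:

import numpy as np
from scipy.interpolate import LSQUnivariateSpline
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# synthetic "spanning number" data n_s = F((K - Kc) L^(1/nu)) + noise
Kc_true, nu_true, F = 0.60, 0.94, np.tanh
Ls = [32, 64, 128, 256]
K = np.linspace(0.57, 0.63, 31)
Lv, Kv, nsv, sig = [], [], [], []
for L in Ls:
    x = (K - Kc_true) * L ** (1.0 / nu_true)
    Lv.append(np.full(K.size, float(L)))
    Kv.append(K)
    nsv.append(F(x) + rng.normal(scale=0.01, size=K.size))
    sig.append(np.full(K.size, 0.01))
Lv, Kv, nsv, sig = map(np.concatenate, (Lv, Kv, nsv, sig))

def cost(params, n_knots=8):
    """Chi^2 of a cubic B-spline fit to n_s versus x = (K - K_c) L^(1/nu)."""
    Kc, nu = params
    x = (Kv - Kc) * Lv ** (1.0 / nu)
    order = np.argsort(x)
    xs, ys, ws = x[order], nsv[order], 1.0 / sig[order]
    knots = np.linspace(xs[0], xs[-1], n_knots + 2)[1:-1]   # interior knots only
    try:
        spl = LSQUnivariateSpline(xs, ys, knots, w=ws, k=3)
    except ValueError:                                      # pathological (K_c, nu)
        return np.inf
    return float(np.sum(((spl(xs) - ys) * ws) ** 2))

res = minimize(cost, x0=[0.595, 1.0], method="Nelder-Mead")
print("estimated K_c, nu:", res.x)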
In Fig. <ref> we show the resulting data collapse for the transition in (i) the (orientable) honeycomb Kitaev circuit along the line K_x = K_y = (1 - K_z)/2
and (ii) the (non-orientable) honeycomb Kitaev circuit with next-nearest neighbor parity checks along K_x = K_y = J = (1 - K_z)/3.
The results are summarized in Tab. <ref>. We find consistent values of ν in both symmetry classes for several other lattice geometries (see Appendix <ref>).
These results thereby highlight that the entanglement transitions in (2+1)-dimensional measurement-only Majorana circuits have distinct universality depending on the presence or absence of orientability, i.e., depending on the symmetry class. We fix the critical points K_c at the values reported here and use them in all further analysis.
§.§.§ Fisher exponent τ
At the transition and throughout the extended critical phase, the bulk loop length distribution is expected to follow a power-law P_bulk(ℓ) ∝ℓ^-τ, where the Fisher exponent τ is related to the fractal dimension d_f via the hyperscaling relation τ = (d / d_f) + 1.
In the extended critical phase, the Brownian nature of the loops yields d_f = 2 and thus τ = 5/2.
By contrast, a mean-field analysis <cit.> predicts η = 0, corresponding to d_f = 5/2 and τ = 11/5 at the transition.
Tuning to the critical point determined from FSS of the spanning number, we obtain the exponent τ by fitting a power-law to the bulk loop length distribution P_bulk(ℓ).
The result is provided in Tab. <ref>.
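For the power-law fit itself, a standard and robust choice is the maximum-likelihood (Hill-type) estimator for the exponent above a cutoff ℓ_min; the sketch below uses synthetic samples and an illustrative cutoff (both our own assumptions) and is not the fitting routine used for the table:

import numpy as np

def fisher_exponent_mle(lengths, l_min):
    """Continuous MLE for P(l) ~ l^(-tau) on l >= l_min, with its standard error."""
    l = np.asarray(lengths, dtype=float)
    l = l[l >= l_min]
    tau = 1.0 + l.size / np.sum(np.log(l / l_min))
    return tau, (tau - 1.0) / np.sqrt(l.size)

# illustrative check on synthetic power-law samples with tau = 2.2 and l_min = 1
rng = np.random.default_rng(0)
samples = (1.0 - rng.random(200_000)) ** (-1.0 / (2.2 - 1.0))   # inverse-CDF sampling
print(fisher_exponent_mle(samples, l_min=1.0))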
These results deviate very slightly from previous numerical studies on 3D classical loop models, where hyperscaling gave τ = 2.184 ± 0.003 for the orientable case <cit.> and τ=2.178 ± 0.002 for the non-orientable case.
In both symmetry classes, the decay of P_bulk(ℓ) is slower than expected for τ = 11/5, indicative of small but finite renormalization due to fluctuations beyond the mean-field prediction.
However, the difference in τ between the two symmetry classes is rather subtle, making it challenging to distinguish them even at the large system sizes accessible in the loop framework.
Finally, we note that a direct observation of the Fisher exponent τ might be practically infeasible as it would require access to the bulk lengths of loops in space-time.
As an alternative, it might be far more practical to employ ancilla probes <cit.> to instead extract the anomalous dimension η, from which we may determine τ via hyperscaling relations, see below.
§.§.§ Anomalous dimension η
The anomalous dimension η can be determined by examining the finite-size scaling behavior of the total length of spanning loops ℳ,
ℳ L^-(5-η)/2 = f(L^1/ν (K - K_c)).
Moreover, this provides a second, independent estimate of the correlation length exponent ν.
In Fig. <ref>, we show the scaling collapse of ℳ in both symmetry classes. The results are displayed in Tab. <ref>. We note that the estimates for ν obtained in this finite-size scaling analysis are independent of the values extracted from the spanning number collapse, Tab. <ref>.
Both approaches yield almost perfect agreement, which is a strong internal consistency check of our numerical approach.
An alternative way to obtain an estimate for the anomalous dimension η is to probe the two-leg watermelon correlator G_2(r).
At the transition, this correlator decays algebraically with the distance r, G_2(r) ∝ r^-(1 + η).
Analogously to the correlators P_n discussed in Fig. <ref> which we considered earlier, G_2(r) can be measured in an experiment by using an ancilla scheme: G_2(r) is the average mutual information between two ancillae that were entangled with the circuit at space-time points separated by a distance r.
Here, one would entangle ancillae at identical spatial positions but at times t_1, t_2 such that r = t_1 - t_2.
Then, for a fixed system size, the scaling of G_2(r) can be studied by simply varying the temporal separation (i.e. the circuit depth).
As shown in Fig. <ref>(a), we observe a power-law decay of the watermelon correlator with respect to the space-time separation that allows one to extract the anomalous dimension for both symmetry classes, displayed in Tab. <ref>.
We note that this approach is highly sensitive to subleading corrections and to small deviations from the exact location of the critical point. We thus consider extracting η in this manner to be generally less accurate.
§.§.§ Order parameter exponent β
In the loop framework, the Majorana liquid phase acquires a non-zero ferromagnetic-type order parameter, represented by the probability that two infinitely far separated space-time coordinates are connected by a single closed loop, i.e., P_2∼ |K-K_c|^2β.
The critical exponent β associated to this order parameter can be detected in an ancilla scheme similar to Fig. <ref>.
In practice, for systems of linear dimension L, one measures G_2(r) for large space-time separations r∼ L.
This introduces a subleading correction G_2(r) = A r^-x_2 + P_2, where x_2 = 2 for Brownian loops <cit.>.
For even moderate system sizes L, this subleading correction becomes negligible.
We may then treat finite-size effects for G_2 similarly to those for the order parameter in percolation,
G_2(r∼L, K) = L^-2β/νf(K̃),
K̃ = (K - K_c)L^1/ν[1 + A(K-K_c)].
In the liquid phase, this scaling function reduces to the simpler form G_2(r∼ L, K) ∝ |K_c - K|^2β on distances L much larger than the correlation length ξ.
We determine β by fitting the ansatz in Eq. (<ref>) to the data in a narrow window enclosing the critical point.
In order to avoid systematic drift in the fitted exponents, we fix ν and K_c at the values found previously from examining the spanning number.
The resulting scaling collapse is shown in Fig. <ref>(b).
This yields accurate estimates of the critical exponent β from the ancilla scheme which clearly distinguish both symmetry classes and which are summarized in Tab. <ref>.
§.§.§ Hyperscaling and Discussion
Having obtained several independent estimates of various critical exponents, we now examine their consistency with one another via hyperscaling relations.
We further comment on the reliability of the different approaches for determining critical exponents and justify the choice of values reported in Table <ref>.
Given the Fisher exponent τ, the anomalous dimension is given by the scaling relation η = 5 - 6/(τ-1).
The resulting values are given in Tab. <ref>.
For class BDI, scaling gives a value of η which lies between the two estimates determined earlier in Tab. <ref>.
In symmetry class D, scaling suggests a slightly smaller but not inconsistent magnitude of η than previously found.
The anomalous dimension η is notoriously difficult to determine directly, even for the large system sizes accessible with our loop model simulations.
Computing η via the hyperscaling relation is highly susceptible to small deviations in the Fisher exponent τ and thus even smallest uncertainties in the critical point K_c.
By contrast, determining η directly via FSS of ℳ leaves K_c as a fitting parameter and is generally found to be more robust.
This is supported not only by the near perfect scaling collapse shown in Fig. <ref>, but also by the close agreement with the value of ν found by FSS of the spanning number.
As such we opted to report the corresponding value of η in Tab. <ref>.
Hyperscaling allows for two alternative ways of estimating the order parameter exponent β:
(i) given exponents η and ν from the FSS of ℳ, one finds β = ν(η + 1)/2, and
(ii) using the correlation length exponent ν and Fisher exponent τ from the FSS of ℳ and P_bulk(ℓ), one finds β = 3ν(τ - 2)/(τ-1).
We find that both approaches give values consistent with one another and clearly distinct in the two symmetry classes.
Moreover, hyperscaling gives values of β which are in very good agreement with those found from the FSS analysis of the watermelon correlator G_2(r,K) (see Tab. <ref>).
For consistency we also report β as determined by the hyperscaling relation β = ν(η+1)/2 in Tab. <ref>.
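For bookkeeping, the hyperscaling relations used above amount to a few one-line conversions. The helper functions below are a convenience sketch (not part of the original analysis) and reproduce the mean-field check τ = 11/5 ↔ η = 0:

def eta_from_tau(tau):
    """Hyperscaling in d = 3: d_f = 3/(tau - 1) and d_f = (5 - eta)/2, hence eta = 5 - 6/(tau - 1)."""
    return 5.0 - 6.0 / (tau - 1.0)

def beta_from_eta(nu, eta):
    """beta = nu * (eta + 1) / 2."""
    return nu * (eta + 1.0) / 2.0

def beta_from_tau(nu, tau):
    """beta = 3 * nu * (tau - 2) / (tau - 1); equals beta_from_eta(nu, eta_from_tau(tau))."""
    return 3.0 * nu * (tau - 2.0) / (tau - 1.0)

print(eta_from_tau(11.0 / 5.0))   # mean-field check: tau = 11/5  ->  eta = 0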
Lastly, let us comment on how the exponents we determine here compare with previous numerical studies.
The orientable loop model was studied previously in Ref. Ortuno_2009.
Our results for the measurement-only Kitaev model yields critical exponents perfectly consistent with these earlier results
(see Tab. <ref>), confirming the universality of the entanglement transition.
With our loop model simulations allowing us to reach appreciably larger system sizes (by two orders of magnitude),
we are able to refine the estimated critical exponents to greater precision.
By contrast, the non-orientable loop model was studied only relatively recently in Ref. Serna_2021.
Unlike the orientable case, the critical exponents we report for this symmetry class seem to differ appreciably from
these previous estimates.
In particular, whereas we find ν=0.9403± 0.0006, Ref. Serna_2021 suggested a smaller value ν=0.918 ± 0.005.
This in turn leads to a systematic difference in all other critical exponents found by hyperscaling relations or FSS which involves ν [
For instance, consider the anomalous dimension η as found by FSS of ℳ.
From the RG flow <cit.>, one expects that ν^-1 - 2 + η is a constant.
Thus, a small error in the fitted value of ν is compensated by a corresponding shift in η.
For Δ(ν^-1) = (0.9403^-1 - 0.918^-1), one finds Δη = 0.026, almost precisely matching the difference in reported values of η.
Since the fractal dimension was determined in Ref. Serna_2021 via the scaling relation d_f = (5-η)/2, we find that the discrepancy in d_f can also be accounted for using a similar reasoning.
Finally, the assumption that ν^-1 + η is constant in tandem with the scaling relation β=ν(1 + η)/2 allows one to show that Δβ=Δν = 0.0223, consistent with the values reported in Tab. <ref>.
].
To reconcile this apparent discrepancy, we have simulated two alternative circuit/lattice geometries, including the L-lattice studied in Ref. Serna_2021, which has allowed us to track down its source: In our approach we have expanded the FSS ansatz by non-linear terms, which we find to give superior data collapses over wider parameter ranges and system sizes. As discussed in Appendix <ref>, this also allows us to smoothly interpolate between our estimates and the ones found in Ref. <cit.> as a function of (i) the range of included system sizes, (ii) the choice of scaling variable, and (iii) the density of spline points. This leads us to conclude that an improved FSS ansatz for the numerical finite-size data of Ref. Serna_2021 would give a somewhat higher estimate of the critical exponent ν, perfectly consistent with what we report here.
Despite the discrepancy with earlier works, there is strong reason to believe that the critical exponent values we report here represent the most accurate and precise determination for this symmetry class to date.
Our numerical simulations reach system sizes which are much larger than previously studied, enabling greater control over subleading corrections in the FSS analysis.
This is further bolstered by the internal consistency between the two independent approaches for estimating ν.
In Appendix. <ref> we further demonstrate the universality of ν≈ 0.94 by examining two additional lattice geometries, namely the measurement-only Yao-Kivelson model and Cardy's 3D L-lattice.
§.§ Simulation setup
Let us close this Section by highlighting the benefits of the loop model representation in numerical simulations.
In short, the loop model representation allows for a substantially more efficient numerical simulation of Majorana circuits <cit.>, reaching system sizes of up to 10^8 qubits, far exceeding those accessible via standard Clifford circuit methods (employed and discussed in the next Section).
To understand this efficiency gain of about two orders of magnitude in number of simulated qubits remember that, in the tableau representation of conventional Clifford simulations,
an N qubit stabilizer state requires 𝒪(N^2) memory, and each Clifford operation (e.g. a measurement) requires 𝒪(N^2) time.
By contrast, in the loop model representation, the loop connectivity compactly encodes the state in only 𝒪(N) space and can be updated after a measurement in 𝒪(1) time.
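One common way to realize this bookkeeping is a partner array over the world-line endpoints, in which a parity check rewires at most four entries. The sketch below is our own minimal illustration of such an 𝒪(1) update; the trivial initial pairing and the data layout are assumptions for the example, and the production code may differ:

import numpy as np

def init_pairing(n_endpoints):
    """Illustrative initial pairing: endpoint 2k is paired with 2k+1."""
    partner = np.arange(n_endpoints)
    partner[0::2] += 1
    partner[1::2] -= 1
    return partner

def measure_parity(partner, i, j):
    """Rewire the pairing after a parity check on world-line endpoints i and j: O(1)."""
    a, b = partner[i], partner[j]
    if a == j:                        # i and j are already paired: nothing changes
        return
    partner[a], partner[b] = b, a     # the former partners of i and j get joined
    partner[i], partner[j] = j, i     # i and j are now paired by the fresh measurement

p = init_pairing(8)
measure_parity(p, 1, 2)
print(p)                              # pairing is now {0,3}, {1,2}, {4,5}, {6,7}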
This substantial computational advantage of the loop model representation is further enhanced by noting that a circuit of depth t acts as a transfer matrix on the loop connectivity.
Two such depth t circuits can be efficiently composed in time 𝒪(N), yielding a new circuit of depth 2t.
The loop model simulations are then carried out as follows:
We begin by preparing a “pool” of N_p shallow circuits of depth t=1.
In particular, we measure N bonds from the Majorana honeycomb lattice chosen at random with relative probabilities K_α for nearest-neighbor bonds and J for next-nearest-neighbor bonds.
Then all trajectories are evolved to depth T by log_2(T) rounds of concatenation.
When concatenating, the circuits are randomly translated with respect to one another in order to enlarge the accessible configuration space as well as to reduce spurious correlations arising from finite N_p.
From this pool, N_s independent sample trajectories are drawn by choosing two random elements, translating them with respect to one another, and concatenating.
Due to self-averaging in large systems, random translations before concatenating allows one to take N_s ≫ N_p.
For each such sample, quantities such as the entanglement entropy and the spanning number may be directly computed from the loop connectivity after imposing appropriate boundary conditions at the temporal boundaries.
Moreover, for PBC, the entanglement entropy averaged over all cuts can be efficiently computed in one 𝒪(N) shot.
By contrast, computing entanglement entropy in the tableau (Clifford) representation requires 𝒪(N^3) time for each subsystem, making it far more costly.
To probe bulk loop statistics, we track not only the loop connectivity, but also the total length of each loop, storing the lengths of closed loops in a histogram during the evolution.
Our simulations are completely based on self-implemented loop model codes (as opposed to codes using highly optimized open-source community packages as in the case of the Clifford simulations, see next Section). These codes are run on national high-performance computing resources, specifically the AMD EPYC (v3 Milan)-based Noctua2 cluster at the Paderborn Center for Parallel Computing (PC^2), the AMD EPYC (v4 Genoa)-based RAMSES cluster at RRZK/University of Cologne, and the Intel XEON Platinum 8168-based JUWELS cluster at FZ Julich.
Limited by the amount of system memory available, we initialize pool sizes between 120 (L=128) and 20 (L=4096) and generate 10^5 samples from a given configuration pool. Depending on the system size, we generate between 50 and 100 independent pools.
Let us provide an example of the resulting computational cost: at each set of parameters in Fig. <ref>, generating 10^6 samples consumes 3 core-hours at L=128 and 1000 core-hours at L=4096. For both systems, across all system sizes and 100 different parameters, a budget of less than 200,000 core-hours was sufficient to generate the data shown in Fig. <ref>. Using a budget of about 2000 core-hours per data point, this has allowed us to simulate system sizes up to N ∼ 10^8 qubits in the loop representation (versus N ∼ 10^4 in the Clifford tableau representation). An important technical detail is that our loop model code is highly parallelizable and limited in performance primarily by memory bandwidth, inviting GPU acceleration in future simulations.
§ CLIFFORD SIMULATIONS
In this final section, we provide numerical results from conventional Clifford simulations relying on stabilizer tableau algebra.
Such simulations often serve as a starting point for a quick exploration of entanglement phase diagrams before delving into
more complex numerical simulations (such as the loop model numerics discussed in the previous Section) or analytical
considerations. It is the purpose of this Section to demonstrate what kind of qualitative and quantitative insight can already be
afforded when performing such Clifford simulations using state-of-the-art codes <cit.>,
with which one can readily simulate systems of the order of 10^4 qubits.
§.§ Phase diagrams from tripartite mutual information
One particularly powerful observable to quickly map out the entanglement phase diagram of a given Clifford circuit is the tripartite mutual information
I_3(A:B:C) = I_2(A:B) + I_2(A:C)-I_2(A:BC)
=S_A + S_B + S_C - S_AB - S_BC - S_AC + S_ABC ,
where the subregions A,…, D are equal partitions of the torus on which the lattice is defined.
Already for a fixed, finite linear system size, this observable allows one to distinguish the entanglement structure of the localized and liquid phases,
for which it assumes values <cit.> of I_3(A:B:C) = +1 (localized) and I_3(A:B:C) = -1 (liquid), respectively.
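For completeness, the subsystem entropies entering I_3 can be computed for any stabilizer state from GF(2) ranks of the generator matrix restricted to the subregion, S_A = rank(M_A) − |A|, a standard identity for stabilizer states. The following naive sketch (our own, independent of the optimized tableau package used for the actual simulations) illustrates the bookkeeping on a small GHZ example:

import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rank, row = 0, 0
    for col in range(M.shape[1]):
        pivots = np.nonzero(M[row:, col])[0]
        if pivots.size == 0:
            continue
        piv = pivots[0] + row
        M[[row, piv]] = M[[piv, row]]
        mask = M[:, col].astype(bool)
        mask[row] = False
        M[mask] ^= M[row]            # eliminate the pivot column elsewhere
        rank, row = rank + 1, row + 1
        if row == M.shape[0]:
            break
    return rank

def stabilizer_entropy(xz, region):
    """S_A = rank_GF2(generators restricted to A) - |A|, for xz = [X-part | Z-part]."""
    n = xz.shape[1] // 2
    cols = list(region) + [n + q for q in region]
    return gf2_rank(xz[:, cols]) - len(region)

def tripartite_info(xz, A, B, C):
    S = lambda R: stabilizer_entropy(xz, sorted(R))
    return S(A) + S(B) + S(C) - S(A | B) - S(B | C) - S(A | C) + S(A | B | C)

# sanity check on a 4-qubit GHZ state stabilized by XXXX, ZZII, IZZI, IIZZ
ghz = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                [0, 0, 0, 0, 1, 1, 0, 0],
                [0, 0, 0, 0, 0, 1, 1, 0],
                [0, 0, 0, 0, 0, 0, 1, 1]], dtype=np.uint8)
print(tripartite_info(ghz, {0}, {1}, {2}))   # -> 1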
As we show in Fig. <ref> for the original honeycomb Kitaev circuit and its Kekulé variant, a heat map-style visualization of I_3 readily maps out the entanglement phase diagram already with reasonable accuracy (even though the critical point does not necessarily coincide with I_3 = 0) – compare, e.g., with the phase diagrams in the overview of Fig. <ref>.
A more precise quantitative estimate of the phase boundaries can be obtained via a finite-size scaling analysis of the tripartite mutual information, which exhibits an easy-to-locate crossing point for different system sizes at the phase transition K=K_c.
Performing a finite-size data collapse as shown in Fig. <ref>, one can further extract the correlation length exponent, though with far less accuracy than what is achievable with the loop model framework discussed in the previous section.
The values of the correlation length exponent ν are clearly distinct between the two symmetry classes, even at these small system sizes.
However, the specific values do not agree with our previous results from the loop simulations, even within the margin of error. In the Clifford simulations,
there are several limitations at play which affect the accuracy of the extracted correlation length exponents and which are likely to lead to this disagreement. This includes
(i) strong finite-size effects arising from the limited system sizes available in the Clifford simulations (which are hard to overcome with the current algorithms/numerical platforms), and
(ii) a necessity to extend the first-order scaling ansatz x = (K - K_c) L^1/ν to higher orders (which is not straight-forward).
One manifestation of these limitations is a system-size dependent drift of the tripartite mutual information within the liquid phase,
making it impossible to properly collapse the data. The origin of this drift can be rationalized as follows: even deep in the liquid phase, there is always a non-zero probability that the system will not be long-range entangled at a given time. In essence, a series of measurements on the K bonds can localize the Majorana pairs and temporarily destroy long-range entanglement. Alternatively, this can be phrased as a random walk in configuration space with a small fraction of area-law states, which vanishes in the thermodynamic limit. In taking the average over a large number of samples, this small fraction of localized (area-law) states then manifests itself as a small, finite offset in the tripartite information and is most pronounced for small system sizes and more so for the orientable circuit (due to the reduced number of paths for long-range strings). This is indeed visible in Fig. <ref>, where I_3 never actually saturates to -1 (in the liquid phase).
In our finite-size scaling analysis we have worked around this by performing the optimization of the fitting cost function only for data points at K > K_c.
§.§ Entanglement entropy and Lifshitz criticality
Another way to explore the entanglement phase diagram of a Kitaev circuit and to zoom in on the phase transitions is,
of course, to directly explore the entanglement structure of the ensemble of stationary states.
To this end, one might want to look at “entanglement arcs",
i.e. the bipartite entanglement entropy for subsystem cuts of varying size 0 ≤ l/L ≤ 1. This is shown in Fig. <ref>
for parameter scans through the phase diagram of the honeycomb Kitaev circuit as an example of an orientable circuit as well as the non-orientable version with next-nearest neighbor parity checks. One can clearly distinguish the liquid region where the entanglement entropy S/L spans an arc akin to the well-known Calabrese-Cardy result <cit.>
S(l/L)/L = a logsin(π l/L) + b ,
for conformally invariant systems (red traces), versus the localized phase whose area-law scaling reveals itself as a wide plateau of constant entanglement entropy (blue traces).
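Extracting the prefactor a from such arcs is a simple least-squares fit; the sketch below (with synthetic stand-in data for the simulated S(l/L)/L values, an assumption of the example) illustrates it:

import numpy as np
from scipy.optimize import curve_fit

def cc_arc(x, a, b):
    """Calabrese-Cardy-type arc: S(l/L)/L = a*log(sin(pi*l/L)) + b."""
    return a * np.log(np.sin(np.pi * x)) + b

# stand-in for simulated cuts l/L (endpoints excluded) and entanglement densities
x = np.arange(1, 64) / 64.0
rng = np.random.default_rng(2)
S_over_L = cc_arc(x, 0.12, 0.45) + rng.normal(scale=0.002, size=x.size)

(a_fit, b_fit), _ = curve_fit(cc_arc, x, S_over_L)
chi2_dof = np.sum((cc_arc(x, a_fit, b_fit) - S_over_L) ** 2 / 0.002**2) / (x.size - 2)
print(a_fit, b_fit, chi2_dof)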
When moving towards the phase boundary within the entanglement phase diagram, a close inspection of the scaling of the bipartite entanglement entropy reveals the characteristic shape of Lifshitz critical scaling, marked by the pink trace in Fig. <ref>, which shows a clearly distinct behavior from the “entanglement arcs" in the liquid regime and the almost constant behavior in the localized phase. This deviation from the liquid-like scaling is also reflected in an enlarged χ^2 value of the respective fit, as shown in the right panels of Fig. <ref>.
Zooming in on the critical behavior and plotting the entanglement entropy for different system sizes in Fig. <ref>, we find already for the system sizes achievable in our Clifford simulations
an indisputable signature of the Lifshitz scaling, consistent with our discussion in the context of the loop model simulations in Section <ref> above.
These findings support the usefulness of Clifford simulations to not only map out the entanglement phase diagram
of a given monitored circuit with relatively modest numerical resources (for instance via scans of the tripartite mutual information), but also to extract qualitative features of the phase transitions (such as a distinction of ∼ Llog L versus
Lifshitz scaling) and even get a rough estimate of critical exponents. At the same time, high-quality fits of critical
exponents require substantially larger system sizes than attainable in Clifford simulations (∼20,000 qubits), leaving
an opening for more advanced numerical schemes (such as the loop model simulations) that allow to go beyond
these limitations.
§.§ Simulation setup
Let us close also this Section on Clifford simulations with a detailed review of our numerical approach.
To obtain the stationary state as efficiently as possible, we stabilize all conserved quantities at initial time. Depending on system size, we take between 5 × 10^2 and 4× 10^4 samples for each set of parameters K_x,y,z, J, which determine the probability to projectively measure a bond (see Fig. <ref>). We use a highly efficient implementation <cit.> of stabilizer tableau algebra. While it is not possible to improve upon either the 𝒪(N^2) scaling for performing a single measurement (and thus 𝒪(N^3) per circuit layer) or the 𝒪(N^3) scaling for canonicalization, we perform several HPC optimizations to reduce memory access, since tableau-based simulations are always bound by memory bandwidth. This includes a hybrid parallelization strategy based on the physical layout of the hardware used, which in this case is a 50 × 2 × AMD EPYC 9654 cluster.
Furthermore, we disregard phase information and measurement outcomes – ensuring that measurements always scale ∼𝒪(N^2) – and make use of the fact that there are 3L/4 possible partitions of the honeycomb lattice, allowing us to perform self-averaging. Even though computing the entanglement entropy of a stabilizer state always requires canonicalizing the tableau (𝒪(N^3)), it is possible to get a slight advantage when the only quantity of interest is the scaling of S(l/L) by converting to the clipped gauge. While this also requires an 𝒪(N^3) operation, it allows one to extract the entanglement entropy for all subsystem sizes in one shot. This trick, however, is of little use for the tripartite mutual information, since it requires tracing out non-contiguous regions – which is not possible in the clipped gauge representation. In the end, we are able to generate high quality data for up to 2× 96^2=18,432 qubits, albeit at exploding computational cost.
As an example, for a sweep of 100 different parameters, generating 192 trajectories with 150 samples each consumes close to 2,000 core hours at L=32 while generating only 96 trajectories with 5 samples each takes slightly more than half a million core hours at L=96 – the majority of which is however spent on performing canonicalization.
§ DISCUSSION
Summarizing our main results in brief, we have investigated measurement-only circuit variants of the Kitaev and Yao-Kivelson model to elucidate that
(i) monitored quantum circuits give rise to robust entanglement phases, i.e. dynamic phases of matter that are stable to local variations of the circuit parameters (measurement probabilities in our case),
(ii) the notions of symmetry and dimensionality can be used, akin to their Hamiltonian counterparts, to define universal behavior of the entanglement phases and the quantum criticality of phase transitions between them, and
(iii) that the particle-hole symmetric Majorana circuits, relevant to these monitored Kitaev models, can be recast in terms of Majorana loop models.
These Majorana loop models are a powerful resource as they not only allow us to recast the entanglement phases in terms of the localization physics of loops and to map their phase transitions to non-linear sigma models, but they also provide a numerical framework allowing us to simulate up to 10^8 qubits. This has allowed us to clearly discriminate the universality classes of orientable versus non-orientable loop models in (2+1) dimensions, relevant to Majorana circuits in symmetry classes BDI and D, respectively, and to determine their critical exponents to unprecedented precision.
Furthermore, our work suggests a broader perspective on the tenfold way classification of free fermionic wave functions. The connection between orientable and non-orientable loop models offers a unified framework for embedding both free Majorana circuits (fugacity n=1) and ground states of free Majorana Hamiltonians (fugacity n=√(2)) within symmetry classes BDI and D. This raises intriguing possibilities for future research, such as exploring whether other symmetry classes allow similar embeddings or investigating higher loop fugacities, potentially realized in quantum circuits coupled to classical dynamical agents <cit.>. This approach could refine the tenfold way classification by distinguishing universality not only by symmetry classes but also by Hamiltonian dynamics versus monitored circuits. Moreover, this offers a tractable handle on broader emerging questions regarding symmetry as an organizing principle in open quantum systems <cit.>.
One unexpected discovery in our study is the presence of quantum Lifshitz scaling at the phase transition of monitored Majorana circuits—a feature previously associated only with interacting circuits. This finding suggests a new and unexplored connection between loop models in (2+1) dimensions, monitored free fermions, and (2+1)-dimensional conformal field theories, warranting further investigation.
There are many future directions one might pursue in studying universal behavior in monitored quantum circuits and measurement-only Kitaev models.
This includes variations of dimensionality, such as going to higher spatial dimensions and studying three-dimensional monitored Kitaev models <cit.>, which is also closely related to the 3D Floquet code <cit.>, or exploring quantum systems with arbitrary qudit dimension <cit.>.
One might also think of generalizations of monitored Majorana systems to more general monitored non-Abelian anyon systems, such as a monitored Fibonacci anyons in a measurement-only variant of the golden chain <cit.>.
This work also raises a number of questions regarding the relation between the universality of entanglement transitions in Majorana circuits and those in more generic (2+1)-dimensional Clifford circuits.
Moving away from the Gaussian limit, we obtain random quantum circuits which can be related to a broad class of interacting loop or dimer models and for which new strong-coupling fixed-points may emerge.
Clifford operations which are even (odd) under fermionic-parity will preserve (break) the orientability of the loop model <cit.>.
Exploration of the parity-symmetric case has already been initiated in Ref. Zhu23structuredVolumeLaw, where one finds volume-law entanglement and a correlation length exponent consistent with the loop model in class BDI.
Far from the Gaussian limit, large-scale numerical simulation of generic (2+1)-dimensional Clifford circuits give compelling evidence that the measurement-induced transition is in the same universality class as 3D percolation <cit.>.
However, the exponent ν=0.876 ± 0.004 <cit.> for 3D percolation is markedly smaller than what we find here for non-orientable Majorana circuits via loop simulations.
As is apparent from our own Clifford simulations in class D, finite-size effects lead to an underestimation of ν even for the largest system sizes.
It would be of interest to better resolve whether this apparent difference is due to finite-size effects or if perturbing away from the Gaussian Majorana limit leads to a genuine renormalization of the critical exponents.
Let us close this outlook by pointing to some of the physics awaiting us when going beyond the (random) Clifford regime
of strong, projective measurements.
To do so, one can consider non-random but weak, non-projective measurements
– as realized by moving the rotation t∈[0,π/4] in the unitary entangling gates, drawn as CNOT gates in Fig. <ref>,
away from a perfect π/4-rotation.
One crucial implication here is that moving away from a perfect t=π/4 rotation is akin to reducing τ,
the effective inverse temperature in the Kraus operator (<ref>) above, to a finite value.
This connection between weak measurements and finite temperature physics has been demonstrated <cit.> in rigorous terms
for commuting two-qubit checks, resulting in Nishimori physics that subsequently has been probed in experimental devices <cit.>.
In the context of the Hastings-Haah code it allows one to explore the physics of qubit fractionalization
(and intermediate phases) as one goes out of the Clifford limit of projective measurements to weak measurements <cit.>.
It will be interesting to investigate such weakly monitored Kitaev circuits in more depth in the future.
For instance, note that analogous to cooling a Hamiltonian in equilibrium, tuning the measurement strength from weak to strong can purify all the time reversal invariant plaquettes or the product of plaquette operators, leaving a Kramers doublet as a time reversal symmetric mixed state.
Applied to the non-orientable circuits of interest in this manuscript this poses an interesting question for future research – is the spontaneous breaking of time-reversal symmetry in such non-orientable circuits captured by a hitherto unexplored chiral Nishimori critical point?
KK was supported by an NSF Graduate Fellowship under Grant No. DGE 2146752 and by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center.
The Cologne group acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 390534769, and by the DFG Collaborative Research Center (CRC) 183 Project No. 277101999 - projects B01 and B02.
GYZ additionally acknowledges support from a start-up fund from HKUST(GZ).
The numerical simulations have been performed on the JUWELS cluster at Forschungszentrum Jülich,
the Noctua2 cluster at the Paderborn Center for Parallel Computing (PC^2), and the RAMSES cluster at RRZK, University of Cologne.
Data availability.–
All code for generating the figures shown and the underlying numerical data is available on Zenodo <cit.>.
§ HAMILTONIAN PHASE DIAGRAMS
Here for completeness we show, in Fig. <ref>, a comparison of the Hamiltonian phase diagrams of the original honeycomb Kitaev model and the Kekulé-Kitaev model.
For the isotropic honeycomb Kitaev model, a unit cell consists of two spin sites, and in momentum space it hosts two linearly dispersive Majorana Dirac cones at the Brillouin zone corners <cit.>. The Dirac cones are stable in the presence of translation and C_2 𝒯 symmetry (a combination of lattice inversion and time reversal transformation). Perturbing away from the isotropic point gradually shifts the location of the two Dirac cones, which merge, annihilate, and gap out at the phase boundary between the gapped and gapless phases.
In contrast, the Kekulé variant of the model tricolors the bonds and triples the unit cell to 6 spin sites. This induces a periodic potential with a large wave vector that can scatter one Dirac cone into the other, so that the Dirac cones gap out immediately when perturbed away from the isotropic point <cit.>. As a result, the gapless phase shrinks to a singular point in the phase diagram.
The distinct topology of the phase diagram of the Kekulé-Kitaev model allows for an adiabatic path connecting the three gapped phases. This feature is exploited in the Floquet code <cit.>, which measures all the plaquette operators by four rounds of bond measurements without collapsing the global Wilson loops as logical qubits, and inducing an effective Hadamard transformation for the logical qubit due to the winding around the gapless point.
§ FRUSTRATION GRAPHS
For generic protocols involving only Majorana parity checks, we can generate the frustration graph by a duality transformation. The matter Majorana fermions span a graph where each node hosts a matter Majorana fermion and each edge corresponds to a parity check. To dualize the graph, we turn every edge into a node and every Majorana site into an all-to-all connected cluster, see Fig. <ref>. In the dual graph, the nodes refer to the measured operators, and an edge connects two operators that anti-commute because they overlap on a common Majorana fermion.
If the original graph has a uniform coordination number, say, every matter Majorana fermion has z neighbors, then in the dual graph each node, i.e., each operator, has 2(z-1) neighbors. For example, on the (decorated) honeycomb lattice z=3 and each node of the frustration graph has 4 neighbors. For the special case of next-nearest-neighbor measurements only, the Majorana fermions form two disconnected triangular lattices with z=6, and thus the coordination number of the dual graph is 10.
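This dualization is precisely the line-graph construction, so frustration graphs can be generated mechanically for any check geometry. A short sketch (using networkx and an open-boundary honeycomb patch, where only bulk checks reach the full 2(z-1) count; the lattice size is an arbitrary choice for illustration) reads:

import networkx as nx

# interaction graph: nodes = matter Majorana fermions, edges = measured parity checks
G = nx.hexagonal_lattice_graph(6, 6)   # open-boundary honeycomb patch, z = 3 in the bulk

# frustration graph: one node per check, edges between anticommuting checks
F = nx.line_graph(G)

print(max(dict(F.degree()).values()))  # -> 4 = 2*(z - 1) for bulk checks with z = 3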
§ UNIVERSALITY AND LATTICE GEOMETRY IN NON-ORIENTABLE SYMMETRY CLASS D
Here we provide additional evidence for the universality of the critical exponents reported in Tab. <ref> for the non-orientable symmetry class.
We consider two additional lattice geometries:
(i) the measurement-only Yao-Kivelson model, and
(ii) Cardy's L-lattice, as studied in Ref. Serna_2021.
Results are summarized in Tab. <ref>, showing a striking agreement between the three geometries as expected for universal
critical behavior.
§.§ Measurement-only Yao-Kivelson Model
Let us now consider the decorated honeycomb lattice of the measurement-only Yao-Kivelson model depicted in Fig. <ref>(d).
We take a lattice of L × L large (dodecagonal) plaquettes such that the corresponding squashed brickwall geometry has dimensions L_x = 6L and L_y = L.
Moreover, we fix parameters K_α = K and J_α = J = (1 - K) for simplicity such that for 0 < K < 1 the loop model is non-orientable.
Then varying K allows one to tune the system through the localization transition for symmetry class D.
§.§.§ Correlation length exponent ν
In Fig. <ref> we show the data collapse for the measurement-only Yao-Kivelson circuit in the vicinity of the critical point.
From this FSS analysis we find correlation length exponent ν = 0.9401 ± 0.0015, which is in close agreement with the value found both on the honeycomb lattice with NNN couplings and on Cardy's L-lattice.
This result further bolsters the reliability of our reported value of ν and demonstrates universality of the transition independent of the underlying lattice geometry.
§.§.§ Fisher exponent τ
In order to find the remaining critical exponents, we determine the Fisher exponent τ from the bulk loop length distribution P_bulk(ℓ) at the transition.
Fitting the power-law scaling P_bulk(ℓ) ∼ℓ^-τ shown in Fig. <ref> yields an estimate of τ which is consistent with the corresponding result on the other two lattice geometries.
As for Cardy's L-lattice, we report the critical exponents η, d_f and β as found by hyperscaling relations in Tab. <ref>.
§.§.§ Bulk loop statistics
Finally, we highlight universal aspects of bulk loop statistics in the metallic phase.
In Fig. <ref> we show the distribution of bulk loop lengths P_bulk(ℓ) at the isotropic point K = J deep in the critical (metallic) phase.
These results mirror those presented in Fig. <ref> for the honeycomb lattice, revealing Brownian and Poisson-Dirichlet scaling regimes.
Moreover, the asymptotic scaling in the Poisson-Dirichlet regime is consistent with Eq. (<ref>) for both symmetry classes.
We further validate the universality of the metallic phase by examining ratios of the long-distance loop connectivities.
In Fig. <ref>, we show that these ratios rapidly approach the values predicted from the universal quantity θ, consistent with the result in Fig. <ref> for the honeycomb lattice.
§.§ Cardy's L-lattice
As noted in the main text, the value of the critical exponent ν reported here differs notably from that reported previously in Ref. Serna_2021, which employed loop simulations on Cardy's L-lattice <cit.>.
Here we re-examine the non-orientable loop model on the L-lattice in order to validate our results from the honeycomb lattice with NNN couplings.
The lattice geometry and loop model parameterization are defined in Fig. <ref>.
Taking a time slice through the unit cell, we see that only four of the loops may contribute to the spanning number.
We thus define the cross-sectional area of the unit cell to be 2 × 2 rather than 4 × 4 as was used by Ref. Serna_2021.
For linear dimension L, we take the 3D lattice to have unit aspect ratio, corresponding to size L × L × L.
Here we focus on the line p=q, along which the loop model is always non-orientable.
§.§.§ Correlation length exponent ν
We compute the spanning number n_s for a set of 100 points in close proximity to the transition for linear dimensions L = {64,128,256,512,1024,2048,4096}.
For each parameter p=q, we average over 100 independent “pools”, drawing 10^4 circuit trajectories from each such pool.
In order to fit the scaling function f(x) we again employ a cubic B-spline.
Here we find it useful to take the spline points more densely clustered near x=0 where the curvature of f(x) is greatest.
In particular, we follow Ref. Ortuno_2009, taking spline points at spacing Δ x = 0.1 for x ∈ [-0.5, 1.5] and Δ x = 1 for x ∈ [2, 5].
While this choice of spline points gives a more robust fit, taking uniformly distributed spline points yields consistent results.
In Fig. <ref>, we show the scaling collapse resulting from two different ansatze.
For scaling variable x = L^1/ν(p-p_c), we obtain a good collapse with ν=0.9318±0.004 when restricting to a narrow window about x=0 and system sizes L ≥ L_min = 256.
When allowing for a non-linear correction to the scaling variable x = L^1/ν(p-p_c)(1 + A(p-p_c)), we find a nearly perfect collapse with ν=0.939 ± 0.001 for all system sizes and a much wider interval around the transition point.
Note that this value of ν also nicely matches the one found for the non-orientable circuit on the honeycomb lattice with NNN couplings.
Moreover, this FSS analysis is consistent with the earlier observation that a non-linear term in the scaling variable is necessary for robust scaling collapse.
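In practice, switching between the two ansatze only changes the definition of the scaling variable that is fed into the spline fit, with A promoted to a third fit parameter; a minimal helper (our own notation) is:

def scaling_variable(p, L, p_c, nu, A=0.0):
    """x = L^(1/nu) * (p - p_c) * (1 + A*(p - p_c)); A = 0 recovers the linear ansatz."""
    return L ** (1.0 / nu) * (p - p_c) * (1.0 + A * (p - p_c))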
To further validate the reported value of ν, we carry out a careful examination of the fitting procedure.
In Fig. <ref>, we show the quality of fit and optimal value of ν as a function of (i) the minimum system size L_min, (ii) the choice of scaling variable, and (iii) the density of spline points.
We find that the simpler (linear) ansatz consistently underestimates the exponent ν relative to the non-linear ansatz except in the limit of very large system sizes.
Moreover, we see that the non-linear ansatz consistently results in a robust scaling collapse which is largely independent of the density of spline points.
This analysis provides strong support for the universality of critical exponent ν = 0.9403 ± 0.0006 in non-orientable loop models, independent of the underlying lattice geometry.
§.§.§ Fisher exponent τ
As an additional check, we determine the Fisher exponent τ at the critical point from the power-law scaling of the bulk loop length distribution P_bulk(ℓ) ∼ℓ^-τ.
Fixing the critical point from the FSS of the spanning number, we plot the distribution P_bulk(ℓ) in Fig. <ref>.
The results here may be compared with those for the honeycomb lattice shown in Fig. <ref>(b), showing comparable values of the exponent τ.
Taken together, the values of ν and τ determined here allow for the remaining critical exponents to be determined by hyperscaling relations.
In particular, we report the values of the critical exponents η = 5 - 6/(τ-1), d_f = 3/(τ-1), and β=3ν(τ-2)/(τ-1) in Tab. <ref>.
All such exponents are in close agreement with the values found for the honeycomb lattice in the main text.
§ SUPPLEMENTAL NUMERICAL DATA
§.§ Entanglement scaling in the metallic phase
Throughout the metallic phase of both the orientable and non-orientable models, we observe logarithmic entanglement scaling S(L,L_y) ∼ L_y log(L) for a cylindrical subsystem of length L along the x̂ direction and circumference L_y.
This is accompanied by a power-law scaling of the surface loop length distribution P_surf^α(ℓ) ∼ℓ^-2.
In Fig. <ref>, we verify the expected scaling of P_surf^α(ℓ) and S(L, L_y) at the isotropic point.
Notably, the prefactor of the logarithmic entanglement scaling is larger in the non-orientable symmetry class as alleviating the orientability constraint yields greater mobility for the random walk undergone by loop endpoints.
Moreover, the relative magnitude of the logarithmic term in the two symmetry classes is comparable to that predicted by the ratio of the mean-squared displacements ⟨r⃗^2⟩ of the random walks.
§.§ Bulk length of open arcs
As noted in Sec. <ref>, an open boundary is absorbing for the random walk undergone by loop endpoints.
Thus in the metallic phase, the bulk length of open arcs is dictated by the first passage time distribution P_FP(ℓ) ∝ℓ^-3/2 of the random walk along the temporal direction.
This is shown clearly in Fig. <ref> for both symmetry classes.
For finite system size L, the first passage time distribution acquires an exponential cutoff which is reflected in Fig. <ref> for loop length ℓ comparable to the cross-sectional area A.
|