diff --git "a/test.jsonl" "b/test.jsonl"
new file mode 100644
--- /dev/null
+++ "b/test.jsonl"
@@ -0,0 +1,5000 @@
+{"abstract": " This paper proposes a new Jacobian-based inverse kinematics (IK) solver that\nexplicitly considers the box-constrained joint space. To control humanoid robots, the\nreference pose of the end effector(s) is planned in task space and then mapped into\nreference joint angles by IK. Because analytical solutions for IK exist only in\nlimited cases, iterative numerical IK solvers based on the Jacobian between task and\njoint spaces have become popular. However, conventional Jacobian-based IK does not\nexplicitly consider the joint constraints, so in practice such solvers usually clamp\nthe obtained joints to the constraints during iteration. This clamping operation,\nhowever, has been shown to cause numerical instability because it makes the\nobjective function non-smooth. To alleviate the clamping problem, this study\nexplicitly handles the joint constraints, specifically box constraints, inside the\nnew IK solver. Instead of clamping, a mirror descent (MD) method with a\nbox-constrained real joint space and an unconstrained mirror space is integrated\nwith conventional Jacobian-based IK methods, yielding the so-called MD-IK. In\naddition, to escape local optima lying near the boundaries of the constraints, a\nheuristic technique, called $\\epsilon$-clamping, is implemented as a\nsoftware-level margin. As a result, MD-IK achieved more stable and sufficiently\nfast i) regulation to random reference poses and ii) tracking of random\ntrajectories compared to conventional IK solvers.\n"}
+{"abstract": " Hand gesture recognition (HGR) is a quite challenging task, as its performance\nis influenced by various aspects such as illumination variations, cluttered\nbackgrounds, spontaneous capture, etc.
Conventional CNN networks for HGR follow a\ntwo-stage pipeline to deal with various challenges: complex signs,\nillumination variations, and complex, cluttered backgrounds. Existing\napproaches need expert knowledge as well as auxiliary computation at stage 1\nto remove these complexities from the input images. Therefore, in this paper, we\npropose a novel end-to-end compact CNN framework: the fine-grained feature\nattentive network for hand gesture recognition (Fit-Hand), to solve the\nchallenges discussed above. The pipeline of the proposed architecture\nconsists of two main units: a FineFeat module and a dilated convolutional (Conv)\nlayer. The FineFeat module extracts fine-grained feature maps by employing an\nattention mechanism over multiscale receptive fields. The attention mechanism\nis introduced to capture effective features by enlarging the average behaviour\nof multi-scale responses. Moreover, the dilated convolution provides global\nfeatures of hand gestures through a larger receptive field. In addition, an\nintegration layer is utilized to combine the features of the FineFeat module\nand the dilated layer, which enhances the discriminability of the network by\ncapturing complementary contextual information of hand postures. The\neffectiveness of Fit-Hand is evaluated using subject-dependent (SD) and\nsubject-independent (SI) validation setups over seven benchmark datasets:\nMUGD-I, MUGD-II, MUGD-III, MUGD-IV, MUGD-V, Finger Spelling, and OUHANDS.\nFurthermore, to investigate the deep insights of the proposed Fit-Hand\nframework, we performed ten ablation studies.\n"}
+{"abstract": " The residue cocycle associated to a suitable spectral triple is the key\ncomponent of the Connes-Moscovici local index theorem in noncommutative\ngeometry. We review the relationship between the residue cocycle and heat\nkernel asymptotics.
We use a modified version of the Getzler calculus to\ncompute the cocycle for a class of Dirac-type operators introduced by Bismut,\nobtained by deforming a Dirac operator by a closed 3-form B. We also compute\nthe cocycle in low-dimensions when the 3-form B is not closed.\n"} +{"abstract": " We propose the adversarially robust kernel smoothing (ARKS) algorithm,\ncombining kernel smoothing, robust optimization, and adversarial training for\nrobust learning. Our methods are motivated by the convex analysis perspective\nof distributionally robust optimization based on probability metrics, such as\nthe Wasserstein distance and the maximum mean discrepancy. We adapt the\nintegral operator using supremal convolution in convex analysis to form a novel\nfunction majorant used for enforcing robustness. Our method is simple in form\nand applies to general loss functions and machine learning models. Furthermore,\nwe report experiments with general machine learning models, such as deep neural\nnetworks, to demonstrate that ARKS performs competitively with the\nstate-of-the-art methods based on the Wasserstein distance.\n"} +{"abstract": " The dynamics and radiation of ultrarelativistic electrons in strong\ncounterpropagating laser beams are investigated. Assuming that the particle\nenergy is the dominant scale in the problem, an approximate solution of\nclassical equations of motion is derived and the characteristic features of the\nmotion are examined. A specific regime is found with comparable strong field\nquantum parameters of the beams, when the electron trajectory exhibits\nultrashort spike-like features, which bears great significance to the\ncorresponding radiation properties. An analytical expression for the spectral\ndistribution of spontaneous radiation is derived in the framework of the\nBaier-Katkov semiclassical approximation based on the classical trajectory. 
All\nthe analytical results are further validated by exact numerical calculations.\nWe consider a non-resonant regime of interaction, when the laser frequencies in\nthe electron rest frame are far from each other, avoiding stimulated emission.\nSpecial attention is devoted to settings when the description of radiation via\nthe local constant field approximation fails and to corresponding spectral\nfeatures. Periodic and non-periodic regimes are considered, when lab\nfrequencies of the laser waves are always commensurate. The sensitivity of\nspectra with respect to the electron beam spread, focusing and finite duration\nof the laser beams is explored.\n"} +{"abstract": " Interactive driving scenarios, such as lane changes, merges and unprotected\nturns, are some of the most challenging situations for autonomous driving.\nPlanning in interactive scenarios requires accurately modeling the reactions of\nother agents to different future actions of the ego agent. We develop\nend-to-end models for conditional behavior prediction (CBP) that take as an\ninput a query future trajectory for an ego-agent, and predict distributions\nover future trajectories for other agents conditioned on the query. Leveraging\nsuch a model, we develop a general-purpose agent interactivity score derived\nfrom probabilistic first principles. The interactivity score allows us to find\ninteresting interactive scenarios for training and evaluating behavior\nprediction models. We further demonstrate that the proposed score is effective\nfor agent prioritization under computational budget constraints.\n"} +{"abstract": " The performance of fine-tuning pre-trained language models largely depends on\nthe hyperparameter configuration. In this paper, we investigate the performance\nof modern hyperparameter optimization methods (HPO) on fine-tuning pre-trained\nlanguage models. 
First, we study and report three HPO algorithms' performances\non fine-tuning two state-of-the-art language models on the GLUE dataset. We\nfind that using the same time budget, HPO often fails to outperform grid search\ndue to two reasons: insufficient time budget and overfitting. We propose two\ngeneral strategies and an experimental procedure to systematically troubleshoot\nHPO's failure cases. By applying the procedure, we observe that HPO can succeed\nwith more appropriate settings in the search space and time budget; however, in\ncertain cases overfitting remains. Finally, we make suggestions for future\nwork. Our implementation can be found in\nhttps://github.com/microsoft/FLAML/tree/main/flaml/nlp/.\n"} +{"abstract": " The relative abundance of alpha particles with respect to proton, usually\nexpressed as $A_{He}$ = ($n_\\alpha/n_p$)*100, is known to respond to solar\nactivity although changes in its behaviour in the last four solar cycles are\nnot known. In this letter, by systematically analysing inter-calibrated\n$A_{He}$ data obtained from the first Lagrangian point of the Sun-Earth system,\nwe show that $A_{He}$ variations are distinctively different in solar cycle 24\nas compared to the last three cycles. The frequency of $A_{He}$ = 2-3% events\nis significantly higher in slow/intermediate solar winds in cycle 24 as opposed\nto the dominance of the typical $A_{He}$ = 4-5% events in the previous three\ncycles. Further, the occurrence of $A_{He}$ $\\geq$ 10% events is significantly\nreduced in cycle 24. Not only that, the changes in delay of $A_{He}$ with\nrespect to peak sunspot numbers are less sensitive to changes in solar wind\nvelocity in cycle 24. 
The investigation suggests that the coronal magnetic\nfield configuration began undergoing systematic changes from cycle 23 onward,\nand that this altered magnetic field configuration affected the way helium was\nprocessed and depleted in the solar atmosphere.\n"}
+{"abstract": " Large-area crop classification using multi-spectral imagery has been a widely\nstudied problem for several decades and is generally addressed using the classical\nRandom Forest classifier. Recently, deep convolutional neural networks (DCNN)\nhave been proposed. However, these methods have only achieved results comparable\nwith Random Forest. In this work, we present a novel CNN-based architecture for\nlarge-area crop classification. Our methodology combines both spatio-temporal\nanalysis via a 3D CNN and temporal analysis via a 1D CNN. We evaluated the\nefficacy of our approach on the Yolo and Imperial county benchmark datasets. Our\ncombined strategy outperforms both classical and recent DCNN-based\nmethods in classification accuracy by 2% while maintaining a minimum\nnumber of parameters and the lowest inference time.\n"}
+{"abstract": " Man-in-The-Middle (MiTM) attacks present numerous threats to a smart grid. In\na MiTM attack, an intruder embeds itself within a conversation between two\ndevices to either eavesdrop or impersonate one of the devices, making it appear\nto be a normal exchange of information. Thus, the intruder can perform false\ndata injection (FDI) and false command injection (FCI) attacks that can\ncompromise power system operations, such as state estimation, economic\ndispatch, and automatic generation control (AGC). Very few researchers have\nfocused on MiTM methods that are difficult to detect within a smart grid.
To\naddress this, we are designing and implementing multi-stage MiTM intrusions in\nan emulation-based cyber-physical power system testbed against a large-scale\nsynthetic grid model to demonstrate how such attacks can cause physical\ncontingencies such as misguided operation and false measurements. MiTM\nintrusions create FCI, FDI, and replay attacks in this synthetic power grid.\nThis work enables stakeholders to defend against these stealthy attacks, and we\npresent detection mechanisms that are developed using multiple alerts from\nintrusion detection systems and network monitoring tools. Our contribution will\nenable other smart grid security researchers and industry to develop further\ndetection mechanisms for inconspicuous MiTM attacks.\n"} +{"abstract": " In this paper, we present a provably correct controller synthesis approach\nfor switched stochastic control systems with metric temporal logic (MTL)\nspecifications with provable probabilistic guarantees. We first present the\nstochastic control bisimulation function for switched stochastic control\nsystems, which bounds the trajectory divergence between the switched stochastic\ncontrol system and its nominal deterministic control system in a probabilistic\nfashion. We then develop a method to compute optimal control inputs by solving\nan optimization problem for the nominal trajectory of the deterministic control\nsystem with robustness against initial state variations and stochastic\nuncertainties. We implement our robust stochastic controller synthesis approach\non both a four-bus power system and a nine-bus power system under generation\nloss disturbances, with MTL specifications expressing requirements for the grid\nfrequency deviations, wind turbine generator rotor speed variations and the\npower flow constraints at different power lines.\n"} +{"abstract": " It is often said that asymmetric dark matter is light compared to typical\nweakly interacting massive particles. 
Here we point out a simple scheme with a\nneutrino portal and $\\mathcal{O}(60 \\text{ GeV})$ asymmetric dark matter which\nmay be \"added\" to any standard baryogenesis scenario. The dark sector\ncontains a copy of the Standard Model gauge group, as well as (at least) one\nmatter family, Higgs, and right-handed neutrino. After baryogenesis, some\nlepton asymmetry is transferred to the dark sector through the neutrino portal,\nwhere dark sphalerons convert it into a dark baryon asymmetry. Dark hadrons\nform asymmetric dark matter and may be directly detected due to the vector\nportal. Surprisingly, even dark anti-neutrons may be directly detected if they\nhave a sizeable electric dipole moment. The dark photons decay visibly at\ncurrent and future experiments, which probe parameter space complementary to\ndark matter direct detection searches. Exotic Higgs decays are excellent\nsignals at future $e^+ e^-$ Higgs factories.\n"}
+{"abstract": " A high statistics $\\Sigma p$ scattering experiment has been performed at the\nJ-PARC Hadron Experimental Facility. Data for momentum-tagged $\\Sigma^{-}$\nrunning in a liquid hydrogen target were accumulated by detecting the $\\pi^{-}p\n\\to K^{+}\\Sigma^{-}$ reaction with a high intensity $\\pi^{-}$ beam of 20\nM/spill. Differential cross sections of the $\\Sigma^{-}p$ elastic scattering\nwere derived with drastically improved accuracy by identifying the largest\nstatistics of about 4,500 events from 1.72 $\\times$ $10^{7}$ $\\Sigma^{-}$. The\nderived differential cross section shows a clear forward-peaking angular\ndistribution for a $\\Sigma^{-}$ momentum range from 470 to 850 MeV/$c$. The\naccurate data will impose a strong constraint on the theoretical models of the\nbaryon-baryon interactions.\n"}
+{"abstract": " Bound states in the continuum (BICs) are non-radiating solutions of the wave\nequation with a spectrum embedded in the continuum of propagating waves of the\nsurrounding space.
The complete decoupling of BICs from the radiation continuum\nmakes their excitation from the far field impossible. Here, we develop a\ngeneral theory of parametric excitation of BICs in nonlinear systems with\nKerr-type nonlinearity via spontaneous symmetry breaking, which results in a\ncoupling of a BIC and a bright mode of the system. Using temporal\ncoupled-mode theory and perturbation analysis, we find the threshold intensity\nfor excitation of a BIC and study the possible stable and unstable solutions\ndepending on the pump intensity and the frequency detuning between the pump and the\nBIC. We reveal that for some parameters of the pump beam there are no stable\nsolutions, and the BIC can then be used for frequency comb generation. Our findings\nare promising for use in nonlinear photonic devices and all-optical\nnetworks.\n"}
+{"abstract": " Six significant new methodological developments of the previously presented\n\"metastimuli architecture\" for human learning through machine learning of\nspatially correlated structural position within a user's personal information\nmanagement system (PIMS), providing the basis for haptic metastimuli, are\npresented. These include architectural innovation, recurrent (RNN) artificial\nneural network (ANN) application, a variety of atom embedding techniques\n(including a novel technique we call \"nabla\" embedding, inspired by\nlinguistics), ANN hyper-parameter (one that affects the network but is not\ntrained, e.g. the learning rate) optimization, and meta-parameter (one that\ndetermines the system performance but is not trained and is not a hyper-parameter,\ne.g. the atom embedding technique) optimization for exploring the large design\nspace. A technique for using the system for automatic atom categorization in a\nuser's PIMS is outlined.
ANN training and hyper- and meta-parameter\noptimization results are presented and discussed in service of methodological\nrecommendations.\n"} +{"abstract": " We present a full analysis of a broadband spectral line survey of Sagittarius\nB2 (Main), one of the most chemically rich regions in the Galaxy located within\nthe giant molecular cloud complex Sgr B2 in the Central Molecular Zone. Our\ngoal is to derive the molecular abundances and temperatures of the high-mass\nstar-forming region Sgr B2(M) and thus its physical and astrochemical\nconditions. Sgr B2(M) was observed using the Heterodyne Instrument for the\nFar-Infrared (HIFI) on board the Herschel Space Observatory in a spectral line\nsurvey from 480 to 1907 GHz at a spectral resolution of 1.1 MHz, which provides\none of the largest spectral coverages ever obtained toward this high-mass\nstar-forming region in the submillimeter with high spectral resolution and\nincludes frequencies > 1 THz unobservable from the ground. We model the\nmolecular emission from the submillimeter to the far-IR using the XCLASS\nprogram. For each molecule, a quantitative description was determined taking\nall emission and absorption features of that species across the entire spectral\nrange into account. Additionally, we derive velocity resolved ortho / para\nratios for those molecules for which ortho and para resolved molecular\nparameters are available. Finally, the temperature and velocity distributions\nare analyzed and the derived abundances are compared with those obtained for\nSgr B2(N) from a similar HIFI survey. A total of 92 isotopologues were\nidentified, arising from 49 different molecules, ranging from free ions to\ncomplex organic compounds and originating from a variety of environments from\nthe cold envelope to hot and dense gas within the cores. Sulfur dioxide,\nmethanol, and water are the dominant contributors. For the ortho / para ratios\nwe find deviations from the high temperature values between 13 and 27 %. 
In\ntotal 14 % of all lines remain unidentified.\n"} +{"abstract": " This paper concerns the a priori generalization analysis of the Deep Ritz\nMethod (DRM) [W. E and B. Yu, 2017], a popular neural-network-based method for\nsolving high dimensional partial differential equations. We derive the\ngeneralization error bounds of two-layer neural networks in the framework of\nthe DRM for solving two prototype elliptic PDEs: Poisson equation and static\nSchr\\\"odinger equation on the $d$-dimensional unit hypercube. Specifically, we\nprove that the convergence rates of generalization errors are independent of\nthe dimension $d$, under the a priori assumption that the exact solutions of\nthe PDEs lie in a suitable low-complexity space called spectral Barron space.\nMoreover, we give sufficient conditions on the forcing term and the potential\nfunction which guarantee that the solutions are spectral Barron functions. We\nachieve this by developing a new solution theory for the PDEs on the spectral\nBarron space, which can be viewed as an analog of the classical Sobolev\nregularity theory for PDEs.\n"} +{"abstract": " In this paper, we study the problem of fair sparse regression on a biased\ndataset where bias depends upon a hidden binary attribute. The presence of a\nhidden attribute adds an extra layer of complexity to the problem by combining\nsparse regression and clustering with unknown binary labels. The corresponding\noptimization problem is combinatorial, but we propose a novel relaxation of it\nas an \\emph{invex} optimization problem. To the best of our knowledge, this is\nthe first invex relaxation for a combinatorial problem. We show that the\ninclusion of the debiasing/fairness constraint in our model has no adverse\neffect on the performance. Rather, it enables the recovery of the hidden\nattribute. The support of our recovered regression parameter vector matches\nexactly with the true parameter vector. 
Moreover, we simultaneously solve the\nclustering problem by recovering the exact value of the hidden attribute for\neach sample. Our method uses carefully constructed primal dual witnesses to\nprovide theoretical guarantees for the combinatorial problem. To that end, we\nshow that the sample complexity of our method is logarithmic in terms of the\ndimension of the regression parameter vector.\n"} +{"abstract": " Neutral atom arrays are promising for large-scale quantum computing\nespecially because it is possible to prepare large-scale qubit arrays. An\nunsolved issue is how to selectively excite one qubit deep in a 3D atomic array\nto Rydberg states. In this work, we show two methods for this purpose. The\nfirst method relies on a well-known result: in a dipole transition between two\nquantum states driven by two off-resonant fields of equal strength but opposite\ndetunings $\\pm\\Delta$, the transition is characterized by two counter-rotating\nRabi frequencies $\\Omega e^{\\pm i\\Delta t}$~[or $\\pm\\Omega e^{\\pm i\\Delta t}$\nif the two fields have a $\\pi$-phase difference]. This pair of detuned fields\nlead to a time-dependent Rabi frequency $2\\Omega \\cos(\\Delta t)$~[or $2i\\Omega\n\\sin(\\Delta t)$], so that a full transition between the two levels is\nrecovered. We show that when the two detuned fields are sent in different\ndirections, one atom in a 3D optical lattice can be selectively addressed for\nRydberg excitation, and when its state is restored, the state of any nontarget\natoms irradiated in the light path is also restored. Moreover, we find that the\nRydberg excitation by this method can significantly suppress the fundamental\nblockade error of a Rydberg gate, paving the way for a high-fidelity entangling\ngate with commonly used quasi-rectangular pulse that is easily obtained by\npulse pickers. 
Along the way, we find a second method for single-site Rydberg\naddressing in 3D, where a selected target atom can be excited to Rydberg state\nwhile preserving the state of any nontarget atom due to a spin echo sequence.\nThe capability to selectively address a target atom in 3D atomic arrays for\nRydberg excitation makes it possible to design large-scale neutral-atom\ninformation processor based on Rydberg blockade.\n"} +{"abstract": " The $r$-index (Gagie et al., JACM 2020) represented a breakthrough in\ncompressed indexing of repetitive text collections, outperforming its\nalternatives by orders of magnitude. Its space usage, $\\mathcal{O}(r)$ where\n$r$ is the number of runs in the Burrows-Wheeler Transform of the text, is\nhowever larger than Lempel-Ziv and grammar-based indexes, and makes it\nuninteresting in various real-life scenarios of milder repetitiveness. In this\npaper we introduce the $sr$-index, a variant that limits the space to\n$\\mathcal{O}(\\min(r,n/s))$ for a text of length $n$ and a given parameter $s$,\nat the expense of multiplying by $s$ the time per occurrence reported. The\n$sr$-index is obtained by carefully subsampling the text positions indexed by\nthe $r$-index, in a way that we prove is still able to support pattern matching\nwith guaranteed performance. Our experiments demonstrate that the $sr$-index\nsharply outperforms virtually every other compressed index on repetitive texts,\nboth in time and space, even matching the performance of the $r$-index while\nusing 1.5--3.0 times less space. Only some Lempel-Ziv-based indexes achieve\nbetter compression than the $sr$-index, using about half the space, but they\nare an order of magnitude slower.\n"} +{"abstract": " We present a subsampling strategy for the offline stage of the Reduced Basis\nMethod. The approach is aimed at bringing down the considerable offline costs\nassociated with using a finely-sampled training set. 
The proposed algorithm\nexploits the potential of the pivoted QR decomposition and the discrete\nempirical interpolation method to identify important parameter samples. It\nconsists of two stages. In the first stage, we construct a low-fidelity\napproximation to the solution manifold over a fine training set. Then, for the\navailable low-fidelity snapshots of the output variable, we apply the pivoted\nQR decomposition or the discrete empirical interpolation method to identify a\nset of sparse sampling locations in the parameter domain. These points reveal\nthe structure of the parametric dependence of the output variable. The second\nstage proceeds with a subsampled training set containing a by far smaller\nnumber of parameters than the initial training set. Different subsampling\nstrategies inspired from recent variants of the empirical interpolation method\nare also considered. Tests on benchmark examples justify the new approach and\nshow its potential to substantially speed up the offline stage of the Reduced\nBasis Method, while generating reliable reduced-order models.\n"} +{"abstract": " We address the detection of material defects, which are inside a layered\nmaterial structure using compressive sensing based multiple-input and\nmultiple-output (MIMO) wireless radar. Here, the strong clutter due to the\nreflection of the layered structure's surface often makes the detection of the\ndefects challenging. Thus, sophisticated signal separation methods are required\nfor improved defect detection. In many scenarios, the number of defects that we\nare interested in is limited and the signaling response of the layered\nstructure can be modeled as a low-rank structure. Therefore, we propose joint\nrank and sparsity minimization for defect detection. 
In particular, we propose\na non-convex approach based on the iteratively reweighted nuclear and\n$\\ell_1-$norm (a double-reweighted approach) to obtain a higher accuracy\ncompared to the conventional nuclear norm and $\\ell_1-$norm minimization. To\nthis end, an iterative algorithm is designed to estimate the low-rank and\nsparse contributions. Further, we propose deep learning to learn the parameters\nof the algorithm (i.e., algorithm unfolding) to improve the accuracy and the\nspeed of convergence of the algorithm. Our numerical results show that the\nproposed approach outperforms the conventional approaches in terms of mean\nsquare errors of the recovered low-rank and sparse components and the speed of\nconvergence.\n"} +{"abstract": " A two-dimensional string is simply a two-dimensional array. We continue the\nstudy of the combinatorial properties of repetitions in such strings over the\nbinary alphabet, namely the number of distinct tandems, distinct quartics, and\nruns. First, we construct an infinite family of $n\\times n$ 2D strings with\n$\\Omega(n^{3})$ distinct tandems. Second, we construct an infinite family of\n$n\\times n$ 2D strings with $\\Omega(n^{2}\\log n)$ distinct quartics. Third, we\nconstruct an infinite family of $n\\times n$ 2D strings with $\\Omega(n^{2}\\log\nn)$ runs. This resolves an open question of Charalampopoulos, Radoszewski,\nRytter, Wale\\'n, and Zuba [ESA 2020], who asked if the number of distinct\nquartics and runs in an $n\\times n$ 2D string is $\\mathcal{O}(n^{2})$.\n"} +{"abstract": " Bit depth adaptation, where the bit depth of a video sequence is reduced\nbefore transmission and up-sampled during display, can potentially reduce data\nrates with limited impact on perceptual quality. In this context, we conducted\na subjective study on a UHD video database, BVI-BD, to explore the relationship\nbetween bit depth and visual quality. 
In this work, three bit depth adaptation\nmethods are investigated, including linear scaling, error diffusion, and a\nnovel adaptive Gaussian filtering approach. The results from a subjective\nexperiment indicate that above a critical bit depth, bit depth adaptation has\nno significant impact on perceptual quality, while reducing the amount of\ninformation that needs to be transmitted. Below the critical bit depth,\nadvanced adaptation methods can be used to retain `good' visual quality (on\naverage) down to around 2 bits per color channel for the outlined experimental\nsetup - a large reduction compared to the typically used 8 bits per color\nchannel. A selection of image quality metrics was subsequently benchmarked on\nthe subjective data, and analysis indicates that a bespoke quality metric is\nrequired for bit depth adaptation.\n"}
+{"abstract": " Trialities of $\\mathcal{W}$-algebras are certain nontrivial isomorphisms\nbetween the affine cosets of three different $\\mathcal{W}$-(super)algebras, and\nwere first conjectured in the physics literature by Gaiotto and Rap\\v{c}\\'ak.\nIn this paper we prove trialities among eight families of\n$\\mathcal{W}$-(super)algebras of types $B$, $C$, and $D$. The key idea is to\nidentify the affine cosets of these algebras with one-parameter quotients of\nthe universal two-parameter even spin $\\mathcal{W}_{\\infty}$-algebra which was\nrecently constructed by Kanade and the second author. Our result is a vast\ngeneralization of both Feigin-Frenkel duality in types $B$, $C$, and $D$, and\nthe coset realization of principal $\\mathcal{W}$-algebras of type $D$ due to\nArakawa and us. It also provides a new coset realization of principal\n$\\mathcal{W}$-algebras of types $B$ and $C$.
As an application, we prove the\nrationality of the affine vertex superalgebra $L_k(\\mathfrak{osp}_{1|2n})$, the\nminimal $\\mathcal{W}$-algebra $\\mathcal{W}_{k-1/2}(\\mathfrak{sp}_{2n+2},\nf_{\\text{min}})$, and the coset $\\text{Com}(L_k(\\mathfrak{sp}_{2m}),\nL_k(\\mathfrak{sp}_{2n}))$, for all integers $k,n,m \\geq 1$ with $m