Upload ArXiv-Tweets-from-AK.csv
ArXiv-Tweets-from-AK.csv (ADDED, +620 -0)
@@ -0,0 +1,620 @@
1 |
+
,id,tweet_text,paper_reference,like_count
|
2 |
+
0,1546707909748342784,"High-resource Language-specific Training for Multilingual Neural Machine Translation
|
3 |
+
abs: https://t.co/fYrwIPVpV2 https://t.co/b23EVZ6J5O",High-resource Language-specific Training for Multilingual Neural Machine Translation,11
|
4 |
+
1,1546669556789387264,"Exploring Length Generalization in Large Language Models
|
5 |
+
abs: https://t.co/7Gphb7Q8jJ https://t.co/cCpLTSrXfR",Exploring Length Generalization in Large Language Models,17
|
6 |
+
2,1546667351885729792,"LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
|
7 |
+
abs:… https://t.co/lCk3P8KIwM","LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action",32
|
8 |
+
3,1546665636734140417,"Scaling the Number of Tasks in Continual Learning
|
9 |
+
abs: https://t.co/F4HxAxGUpI https://t.co/cyvXSBKthk",Scaling the Number of Tasks in Continual Learning,47
|
10 |
+
4,1546707909748342784,"High-resource Language-specific Training for Multilingual Neural Machine Translation
|
11 |
+
abs: https://t.co/fYrwIPVpV2 https://t.co/b23EVZ6J5O",High-resource Language-specific Training for Multilingual Neural Machine Translation,11
|
12 |
+
5,1546669556789387264,"Exploring Length Generalization in Large Language Models
|
13 |
+
abs: https://t.co/7Gphb7Q8jJ https://t.co/cCpLTSrXfR",Exploring Length Generalization in Large Language Models,17
|
14 |
+
6,1546667351885729792,"LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
|
15 |
+
abs:… https://t.co/lCk3P8KIwM","LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action",32
|
16 |
+
7,1546665636734140417,"Scaling the Number of Tasks in Continual Learning
|
17 |
+
abs: https://t.co/F4HxAxGUpI https://t.co/cyvXSBKthk",Scaling the Number of Tasks in Continual Learning,47
|
18 |
+
8,1546379163803721729,"CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships
|
19 |
+
abs: https://t.co/ozIrQ7gx68 https://t.co/gSGfnsZbji",CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships,53
|
20 |
+
9,1546376106122567681,"The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications
|
21 |
+
a… https://t.co/TOPpVPQbM8","The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications",11
|
22 |
+
10,1546375104262725632,"Code Translation with Compiler Representations
|
23 |
+
abs: https://t.co/nTT3dmXH4c
|
24 |
+
|
25 |
+
method improves upon the state of the… https://t.co/wD4SozbilN",Code Translation with Compiler Representations,127
|
26 |
+
11,1546363822121820162,"End-to-End Binaural Speech Synthesis
|
27 |
+
abs: https://t.co/tR86cSAjQO
|
28 |
+
project page: https://t.co/nB1iSV68U2
|
29 |
+
|
30 |
+
end-to-end… https://t.co/OTzfVZTFqb",End-to-End Binaural Speech Synthesis,58
|
31 |
+
12,1545243820496936960,"Cross-Scale Vector Quantization for Scalable Neural Speech Coding
|
32 |
+
abs: https://t.co/AbE9rP0ApQ https://t.co/pZXUTNipgs",Cross-Scale Vector Quantization for Scalable Neural Speech Coding,25
|
33 |
+
13,1545240373328592897,"Finding Fallen Objects Via Asynchronous Audio-Visual Integration
|
34 |
+
abs: https://t.co/mv9Rvl0hFA
|
35 |
+
project page:… https://t.co/N8l4zaP9bH",Finding Fallen Objects Via Asynchronous Audio-Visual Integration,33
|
36 |
+
14,1545228848391938048,"Back to the Source: Diffusion-Driven Test-Time Adaptation
|
37 |
+
abs: https://t.co/5jmESOLQxG https://t.co/cI5UFyQI0B",Back to the Source: Diffusion-Driven Test-Time Adaptation,82
|
38 |
+
15,1544897525664169986,"When does Bias Transfer in Transfer Learning?
|
39 |
+
abs: https://t.co/tf8FWyf8Ge https://t.co/0l6vy8RHXI",When does Bias Transfer in Transfer Learning?,135
|
40 |
+
16,1544865587343630342,"Transformers are Adaptable Task Planners
|
41 |
+
abs: https://t.co/6lgFJD2Olt
|
42 |
+
|
43 |
+
TTP can be pre-trained on multiple preferenc… https://t.co/XrolcxlV22",Transformers are Adaptable Task Planners,82
|
44 |
+
17,1544853650316599299,"Ultra-Low-Bitrate Speech Coding with Pretrained Transformers
|
45 |
+
abs: https://t.co/rYRe5N7Bqu https://t.co/zOsCY53r2s",Ultra-Low-Bitrate Speech Coding with Pretrained Transformers,34
|
46 |
+
18,1544721641049145345,"CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations
|
47 |
+
|
48 |
+
abs:… https://t.co/6ng3UArKdE","CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations",52
|
49 |
+
19,1544521037274046464,"An Empirical Study of Implicit Regularization in Deep Offline RL
|
50 |
+
abs: https://t.co/rCjHkQ2jwL https://t.co/8hJOsVA6D0",An Empirical Study of Implicit Regularization in Deep Offline RL,45
|
51 |
+
20,1544519268234153984,"Offline RL Policies Should be Trained to be Adaptive
|
52 |
+
abs: https://t.co/kC7TPSOTt2 https://t.co/Ox2D028P33",Offline RL Policies Should be Trained to be Adaptive,34
|
53 |
+
21,1544491557293854721,"Efficient Representation Learning via Adaptive Context Pooling
|
54 |
+
abs: https://t.co/zZzezhvbN7 https://t.co/xJoStGBSqp",Efficient Representation Learning via Adaptive Context Pooling,163
|
55 |
+
22,1544488616734429185,"CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning
|
56 |
+
abs:… https://t.co/HqXmDpaUEh",CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning,102
|
57 |
+
23,1544485593991811072,"How Much More Data Do I Need? Estimating Requirements for Downstream Tasks
|
58 |
+
abs: https://t.co/RNXT4IRIaL https://t.co/uJGrEfgaAv",How Much More Data Do I Need? Estimating Requirements for Downstream Tasks,230
|
59 |
+
24,1544483235542990856,"Neural Networks and the Chomsky Hierarchy
|
60 |
+
abs: https://t.co/u6Jl2WvKMr
|
61 |
+
|
62 |
+
sota architectures, such as LSTMs and Trans… https://t.co/DyHnH8Q8z7",Neural Networks and the Chomsky Hierarchy,209
|
63 |
+
25,1544207617102331906,"GlowVC: Mel-spectrogram space disentangling model for language-independent text-free voice conversion
|
64 |
+
abs:… https://t.co/kFYdKhrhSA",GlowVC: Mel-spectrogram space disentangling model for language-independent text-free voice conversion,19
|
65 |
+
26,1544201186739458049,"Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation
|
66 |
+
abs:… https://t.co/yL9kWlUYfs",Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation,112
|
67 |
+
27,1544193877053161480,"WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents
|
68 |
+
abs: https://t.co/8hZyMt90Rv
|
69 |
+
pro… https://t.co/eHzGN2GHqj",WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents,52
|
70 |
+
28,1544127293660037120,"UserLibri: A Dataset for ASR Personalization Using Only Text
|
71 |
+
abs: https://t.co/0bug7OWU42 https://t.co/OMqJSGlqDx",UserLibri: A Dataset for ASR Personalization Using Only Text,9
|
72 |
+
29,1543981460964708352,"LaserMix for Semi-Supervised LiDAR Semantic Segmentation
|
73 |
+
abs: https://t.co/SvqHy1y7LI
|
74 |
+
project page:… https://t.co/jbQtQiDbDy",LaserMix for Semi-Supervised LiDAR Semantic Segmentation,74
|
75 |
+
30,1543766808309669889,"Rethinking Optimization with Differentiable Simulation from a Global Perspective
|
76 |
+
abs: https://t.co/trEcw4VZb2
|
77 |
+
proje… https://t.co/1UsI0q03IL",Rethinking Optimization with Differentiable Simulation from a Global Perspective,94
|
78 |
+
31,1543763117515182082,"Visual Pre-training for Navigation: What Can We Learn from Noise?
|
79 |
+
abs: https://t.co/Rn5UGvvMMz
|
80 |
+
github:… https://t.co/eKeMSlBxVx",Visual Pre-training for Navigation: What Can We Learn from Noise?,134
|
81 |
+
32,1543759817449390080,"DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale
|
82 |
+
abs:… https://t.co/IbF6IdUDj7",DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale,120
|
83 |
+
33,1543757524356272134,"When Does Differentially Private Learning Not Suffer in High Dimensions?
|
84 |
+
abs: https://t.co/yws7BhoBaP https://t.co/bD2Gz6B3GU",When Does Differentially Private Learning Not Suffer in High Dimensions?,28
|
85 |
+
34,1542740430084792320,"Implicit Neural Spatial Filtering for Multichannel Source Separation in the Waveform Domain
|
86 |
+
abs:… https://t.co/3cNoOlr5SD",Implicit Neural Spatial Filtering for Multichannel Source Separation in the Waveform Domain,31
|
87 |
+
35,1542713456268304384,"Denoised MDPs: Learning World Models Better Than the World Itself
|
88 |
+
abs: https://t.co/CPwlF0soWZ
|
89 |
+
project page:… https://t.co/5BBwGXYZ2l",Denoised MDPs: Learning World Models Better Than the World Itself,98
|
90 |
+
36,1542712192746782720,"Forecasting Future World Events with Neural Networks
|
91 |
+
abs: https://t.co/tD8F0ZC1rC
|
92 |
+
github: https://t.co/v8HZgye0ZH… https://t.co/eJaakYSUSw",Forecasting Future World Events with Neural Networks,77
|
93 |
+
37,1542709853516431361,"Learning Iterative Reasoning through Energy Minimization
|
94 |
+
abs: https://t.co/WDLx1hKPqG
|
95 |
+
project page:… https://t.co/oDEClr0ho1",Learning Iterative Reasoning through Energy Minimization,125
|
96 |
+
38,1542709029964849154,"Improving the Generalization of Supervised Models
|
97 |
+
abs: https://t.co/3CzEuuxvHt
|
98 |
+
project page: https://t.co/uSjiKvSMN8 https://t.co/ffUkpTL7Ng",Improving the Generalization of Supervised Models,189
|
99 |
+
39,1542325850036752394,"RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness
|
100 |
+
abs:… https://t.co/iFAou98U0X",RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness,172
|
101 |
+
40,1542316111743664133,"Masked World Models for Visual Control
|
102 |
+
abs: https://t.co/eZx53zuqnm
|
103 |
+
project page: https://t.co/hgZwrV3zO5
|
104 |
+
|
105 |
+
Can MAE… https://t.co/UfybFx81uj",Masked World Models for Visual Control,83
|
106 |
+
41,1542313347835731970,"Beyond neural scaling laws: beating power law scaling via data pruning
|
107 |
+
abs: https://t.co/OFYkTt5b2d https://t.co/7SKXMClaR8",Beyond neural scaling laws: beating power law scaling via data pruning,164
|
108 |
+
42,1542312585768435712,"3D-Aware Video Generation
|
109 |
+
abs: https://t.co/N64ARXFKMJ
|
110 |
+
project page: https://t.co/5MoGVKqItn https://t.co/uZdLIXWc1P",3D-Aware Video Generation,122
|
111 |
+
43,1541957148070010881,"DayDreamer: World Models for Physical Robot Learning
|
112 |
+
abs: https://t.co/quyTQGcjEA
|
113 |
+
project page:… https://t.co/DD67NUzgJy",DayDreamer: World Models for Physical Robot Learning,182
|
114 |
+
44,1541948699559006210,"Long Range Language Modeling via Gated State Spaces
|
115 |
+
abs: https://t.co/HEd2lwlGan https://t.co/tPOHv7dP0T",Long Range Language Modeling via Gated State Spaces,124
|
116 |
+
45,1541945827035332610,"ProGen2: Exploring the Boundaries of Protein Language Models
|
117 |
+
abs: https://t.co/kelWMlhH8r
|
118 |
+
github:… https://t.co/nzvei5pMJR",ProGen2: Exploring the Boundaries of Protein Language Models,64
|
119 |
+
46,1541626617490837504,"Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers
|
120 |
+
abs: https://t.co/QZLcoFOeSz https://t.co/315WfiVVRr",Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers,11
|
121 |
+
47,1541599748624351233,"Programmatic Concept Learning for Human Motion Description and Synthesis
|
122 |
+
abs: https://t.co/uIoxGozwhD
|
123 |
+
project page:… https://t.co/MmCMQouLF7",Programmatic Concept Learning for Human Motion Description and Synthesis,83
|
124 |
+
48,1541592312094101506,"Prompting Decision Transformer for Few-Shot Policy Generalization
|
125 |
+
abs: https://t.co/bD2f4SjRP6
|
126 |
+
project page:… https://t.co/ZfAxxx6zCu",Prompting Decision Transformer for Few-Shot Policy Generalization,48
|
127 |
+
49,1541590513241006080,"Repository-Level Prompt Generation for Large Language Models of Code
|
128 |
+
abs: https://t.co/GG1YHoCQdf
|
129 |
+
github:… https://t.co/Z9fUO4r8sU",Repository-Level Prompt Generation for Large Language Models of Code,56
|
130 |
+
50,1541588372631818241,"Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One
|
131 |
+
abs:… https://t.co/uJuKxO7XJC",Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One,121
|
132 |
+
51,1541226747533922308,"PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction
|
133 |
+
abs: https://t.co/yXdFTqRWF3
|
134 |
+
|
135 |
+
dataset… https://t.co/ZDNMPI2NVR",PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction,94
|
136 |
+
52,1541219433259175937,"Megapixel Image Generation with Step-Unrolled Denoising Autoencoders
|
137 |
+
abs: https://t.co/6fX9PseXBT
|
138 |
+
|
139 |
+
obtain FID score… https://t.co/HPodJ8xzPx",Megapixel Image Generation with Step-Unrolled Denoising Autoencoders,147
|
140 |
+
53,1540184734390706176,"Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision
|
141 |
+
abs: https://t.co/NO2vzfdYdS https://t.co/WoN73BzgeQ",Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision,66
|
142 |
+
54,1540176838017916933,"Offline RL for Natural Language Generation with Implicit Language Q Learning
|
143 |
+
abs: https://t.co/wYTtUgdryZ
|
144 |
+
project p… https://t.co/xS8JCODxwP",Offline RL for Natural Language Generation with Implicit Language Q Learning,43
|
145 |
+
55,1540161095930880001,"MaskViT: Masked Visual Pre-Training for Video Prediction
|
146 |
+
abs: https://t.co/uhMEB6ashb
|
147 |
+
project page:… https://t.co/gbnxrCxUrc",MaskViT: Masked Visual Pre-Training for Video Prediction,147
|
148 |
+
56,1540156319923060736,"The ArtBench Dataset: Benchmarking Generative Models with Artworks
|
149 |
+
abs: https://t.co/Zzq0A2i5ob
|
150 |
+
github:… https://t.co/SfQlvTLrk3",The ArtBench Dataset: Benchmarking Generative Models with Artworks,200
|
151 |
+
57,1539811680359796739,"TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning
|
152 |
+
abs:… https://t.co/UArbr7zhRE",TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning,85
|
153 |
+
58,1539794210190155778,"Jointist: Joint Learning for Multi-instrument Transcription and Its Applications
|
154 |
+
abs: https://t.co/xeuPUBcr01
|
155 |
+
proje… https://t.co/QmyCioKviJ",Jointist: Joint Learning for Multi-instrument Transcription and Its Applications,18
|
156 |
+
59,1539780412297330689,"GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
|
157 |
+
abs: https://t.co/pKS5mgoDkG
|
158 |
+
|
159 |
+
GEMv2 supports 40 docum… https://t.co/qMitHzTlO0",GEMv2: Multilingual NLG Benchmarking in a Single Line of Code,18
|
160 |
+
60,1539777865688010753,"reStructured Pre-training
|
161 |
+
abs: https://t.co/mYm7qbt59N https://t.co/O5T3tSY4PL",reStructured Pre-training,32
|
162 |
+
61,1539672920456298498,"Scaling Autoregressive Models for Content-Rich Text-to-Image Generation
|
163 |
+
paper: https://t.co/NKkTeHttLd
|
164 |
+
project page… https://t.co/CcKxsWPmjR",Scaling Autoregressive Models for Content-Rich Text-to-Image Generation,137
|
165 |
+
62,1539480179151712256,"Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding
|
166 |
+
abs: https://t.co/Bq3GUQywPV https://t.co/iLTaoXm0yC",Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding,66
|
167 |
+
63,1539460213211910150,"EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine
|
168 |
+
abs: https://t.co/F4XkHLRxPi
|
169 |
+
github:… https://t.co/JiwSuMdkZH",EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine,34
|
170 |
+
64,1539459120667021312,"EpiGRAF: Rethinking training of 3D GANs
|
171 |
+
abs: https://t.co/RcY2vQr0NH
|
172 |
+
project page: https://t.co/kuXPKA00bZ https://t.co/CVCsseAS21",EpiGRAF: Rethinking training of 3D GANs,145
|
173 |
+
65,1539453554578055168,"Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors
|
174 |
+
abs:… https://t.co/noluSxtqzu",Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors,72
|
175 |
+
66,1539435374103220226,"Global Context Vision Transformers
|
176 |
+
abs: https://t.co/d6go0yv7fu
|
177 |
+
github: https://t.co/rUYFs09ReC
|
178 |
+
|
179 |
+
On ImageNet-1K dat… https://t.co/HJnw5wclQV",Global Context Vision Transformers,89
|
180 |
+
67,1539421251076247554,"(Certified!!) Adversarial Robustness for Free!
|
181 |
+
abs: https://t.co/NTU6lioyII
|
182 |
+
|
183 |
+
show how to achieve sota certified adv… https://t.co/2VW1CDARya",(Certified!!) Adversarial Robustness for Free!,42
|
184 |
+
68,1539076449788997632,"A Closer Look at Smoothness in Domain Adversarial Training
|
185 |
+
abs: https://t.co/GgKE9695vj
|
186 |
+
github:… https://t.co/33MX6TZhjt",A Closer Look at Smoothness in Domain Adversarial Training,97
|
187 |
+
69,1538710356444471296,"Fast Finite Width Neural Tangent Kernel
|
188 |
+
abs: https://t.co/iY1lFoYMjA https://t.co/hWzzcCd5OZ",Fast Finite Width Neural Tangent Kernel,23
|
189 |
+
70,1538706936211951617,"What do navigation agents learn about their environment?
|
190 |
+
abs: https://t.co/eXelV0REgZ
|
191 |
+
github:… https://t.co/TGSzEQ1v1c",What do navigation agents learn about their environment?,37
|
192 |
+
71,1538698653493338114,"Bootstrapped Transformer for Offline Reinforcement Learning
|
193 |
+
abs: https://t.co/YiEY3uiTgL https://t.co/yle4hPgMmf",Bootstrapped Transformer for Offline Reinforcement Learning,137
|
194 |
+
72,1538695457550921728,"Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning
|
195 |
+
abs:… https://t.co/uLQLmf4l3M",Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning,42
|
196 |
+
73,1538692524830769152,"MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
|
197 |
+
abs: https://t.co/etfGL1xnum
|
198 |
+
project pa… https://t.co/Fv1aLuEJSV",MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge,265
|
199 |
+
74,1538687423722541056,"Lossy Compression with Gaussian Diffusion
|
200 |
+
abs: https://t.co/tw5YiZAN3B
|
201 |
+
|
202 |
+
implement a proof of concept and find that… https://t.co/4nvLjhIX4e",Lossy Compression with Gaussian Diffusion,102
|
203 |
+
75,1538686489491648514,"NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates
|
204 |
+
abs: https://t.co/4S8sBXq6Ko
|
205 |
+
|
206 |
+
a diffu… https://t.co/xd3eQ0ApQJ",NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates,87
|
207 |
+
76,1538006265363738625,"iBoot: Image-bootstrapped Self-Supervised Video Representation Learning
|
208 |
+
abs: https://t.co/dkZUd4QC81 https://t.co/pJFpxd7ckU",iBoot: Image-bootstrapped Self-Supervised Video Representation Learning,73
|
209 |
+
77,1538000649933115393,"Neural Scene Representation for Locomotion on Structured Terrain
|
210 |
+
abs: https://t.co/68xY622f4w https://t.co/W3wTYp31f6",Neural Scene Representation for Locomotion on Structured Terrain,83
|
211 |
+
78,1537924151389736961,"Programmatic Concept Learning for Human Motion Description and Synthesis
|
212 |
+
paper: https://t.co/Qemk23gUHX
|
213 |
+
project pag… https://t.co/ImHeYQC5vj",Programmatic Concept Learning for Human Motion Description and Synthesis,60
|
214 |
+
79,1537640654968324099,"Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing
|
215 |
+
abs: https://t.co/9tpvhXuaRw
|
216 |
+
project page:… https://t.co/XxpZg5PGke",Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing,73
|
217 |
+
80,1537637590274277376,"MoDi: Unconditional Motion Synthesis from Diverse Data
|
218 |
+
abs: https://t.co/YBV9jSUemo https://t.co/o1uvG18RSk",MoDi: Unconditional Motion Synthesis from Diverse Data,70
|
219 |
+
81,1537630146244517889,"OmniMAE: Single Model Masked Pretraining on Images and Videos
|
220 |
+
abs: https://t.co/j9a3imUEJ6
|
221 |
+
|
222 |
+
single pretrained model… https://t.co/OiR2pY5emm",OmniMAE: Single Model Masked Pretraining on Images and Videos,146
|
223 |
+
82,1537622879386456064,"SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos
|
224 |
+
abs: https://t.co/0MkpFJiUzM
|
225 |
+
|
226 |
+
using spars… https://t.co/x1Hvgf13qE",SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos,54
|
227 |
+
83,1537621348339572736,"BYOL-Explore: Exploration by Bootstrapped Prediction
|
228 |
+
abs: https://t.co/xXQtolzjlP
|
229 |
+
|
230 |
+
BYOL-Explore achieves superhuman… https://t.co/uZvAbVd1Bb",BYOL-Explore: Exploration by Bootstrapped Prediction,79
|
231 |
+
84,1537618457365303296,"Know your audience: specializing grounded language models with the game of Dixit
|
232 |
+
abs: https://t.co/T8d5ir8LDQ https://t.co/zSk5oR2F9D",Know your audience: specializing grounded language models with the game of Dixit,39
|
233 |
+
85,1537323042380124160,"VCT: A Video Compression Transformer
|
234 |
+
abs: https://t.co/llH1L1ooKa
|
235 |
+
|
236 |
+
presented an elegantly simple transformer-based… https://t.co/ErovCWVDg3",VCT: A Video Compression Transformer,68
|
237 |
+
86,1537314480056672258,"Contrastive Learning as Goal-Conditioned Reinforcement Learning
|
238 |
+
abs: https://t.co/6dv7PNn0qq
|
239 |
+
project page:… https://t.co/vRSdekL9If",Contrastive Learning as Goal-Conditioned Reinforcement Learning,77
|
240 |
+
87,1537288570880368640,"Masked Siamese ConvNets
|
241 |
+
abs: https://t.co/YMG1O1ZZ5N https://t.co/LCVqVvFNfR",Masked Siamese ConvNets,83
|
242 |
+
88,1537265816609116161,"Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone
|
243 |
+
abs: https://t.co/UgdYW9Cf1g
|
244 |
+
project page:… https://t.co/v2sTfFBq5r",Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone,89
|
245 |
+
89,1537257011657814016,"Variable Bitrate Neural Fields
|
246 |
+
abs: https://t.co/Rp1t2LaQaW
|
247 |
+
project page: https://t.co/e2t8OrznxI https://t.co/6hw7OwbjZN",Variable Bitrate Neural Fields,162
|
248 |
+
90,1537254679188488195,"A Unified Sequence Interface for Vision Tasks
|
249 |
+
abs: https://t.co/hXbVXdqHh1
|
250 |
+
|
251 |
+
explore a unified sequence interface fo… https://t.co/QG5UxvIgS4",A Unified Sequence Interface for Vision Tasks,50
|
252 |
+
91,1537252952666087424,"Prefix Language Models are Unified Modal Learners
|
253 |
+
abs: https://t.co/BD4b3rQnKg https://t.co/2ofScnMIKN",Prefix Language Models are Unified Modal Learners,66
|
254 |
+
92,1537248480074293251,"Diffusion Models for Video Prediction and Infilling
|
255 |
+
abs: https://t.co/MwfxwKXG4z
|
256 |
+
project page:… https://t.co/rnwB8eGFAs",Diffusion Models for Video Prediction and Infilling,103
|
257 |
+
93,1536879515883945984,"ReCo: Retrieve and Co-segment for Zero-shot Transfer
|
258 |
+
abs: https://t.co/YwxkCGGyG1
|
259 |
+
project page:… https://t.co/WzVhmfhWCz",ReCo: Retrieve and Co-segment for Zero-shot Transfer,58
|
260 |
+
94,1536872875885580288,"Object Scene Representation Transformer
|
261 |
+
abs: https://t.co/SUfNIBGAxt
|
262 |
+
project page: https://t.co/j8ebSAeM8v
|
263 |
+
|
264 |
+
scales… https://t.co/wa4vo3RJAK",Object Scene Representation Transformer,97
|
265 |
+
95,1536871347372052480,"Adversarial Audio Synthesis with Complex-valued Polynomial Networks
|
266 |
+
abs: https://t.co/ekeC0nKIhR
|
267 |
+
|
268 |
+
APOLLO results in… https://t.co/sDcl2nydkt",Adversarial Audio Synthesis with Complex-valued Polynomial Networks,23
|
269 |
+
96,1536526888289574915,"Large-Scale Retrieval for Reinforcement Learning
|
270 |
+
abs: https://t.co/fjzGvI3ZXB https://t.co/eFRHt8yXoq",Large-Scale Retrieval for Reinforcement Learning,86
|
271 |
+
97,1536522198785183744,"GLIPv2: Unifying Localization and Vision-Language Understanding
|
272 |
+
abs: https://t.co/3GomrHG8xq
|
273 |
+
github:… https://t.co/bD68NZk4Lp",GLIPv2: Unifying Localization and Vision-Language Understanding,73
|
274 |
+
98,1536521362898145280,"Self-critiquing models for assisting human evaluators
|
275 |
+
abs: https://t.co/8Zy2xfA5Qz https://t.co/qndZMS9zXa",Self-critiquing models for assisting human evaluators,19
|
276 |
+
99,1536515535202136064,"Multi-instrument Music Synthesis with Spectrogram Diffusion
|
277 |
+
abs: https://t.co/UNDV4e7A6R
|
278 |
+
|
279 |
+
use a simple two-stage pr… https://t.co/AebIraqLF2",Multi-instrument Music Synthesis with Spectrogram Diffusion,87
|
280 |
+
100,1536493418305703938,"How Much is Enough? A Study on Diffusion Times in Score-based Generative Models
|
281 |
+
abs: https://t.co/qFEZBDrdrq https://t.co/iBlNs4iNE2",How Much is Enough? A Study on Diffusion Times in Score-based Generative Models,60
|
282 |
+
101,1536491133513129990,"Meta Optimal Transport
|
283 |
+
abs: https://t.co/UKdYXKA8Vd
|
284 |
+
github: https://t.co/xb9FVcim7g
|
285 |
+
|
286 |
+
Meta OT models surpass the sta… https://t.co/OlfwZIC52r",Meta Optimal Transport,67
|
287 |
+
102,1535656084488192005,"Neural Prompt Search
|
288 |
+
abs: https://t.co/wZTUHIcqdv
|
289 |
+
github: https://t.co/vnYEMBrKzt
|
290 |
+
|
291 |
+
view existing parameter-efficien… https://t.co/pLvxNt84gV",Neural Prompt Search,174
|
292 |
+
103,1535521674233319424,"Deep Surrogate Assisted Generation of Environments
|
293 |
+
abs: https://t.co/1RYhxJ71tt
|
294 |
+
project page:… https://t.co/5MuAOKIePA",Deep Surrogate Assisted Generation of Environments,58
|
295 |
+
104,1535521046257975297,"Deep Hierarchical Planning from Pixels
|
296 |
+
abs: https://t.co/xXBDevsRnK
|
297 |
+
project page: https://t.co/LoNsGVecaR https://t.co/K7RKIq2hBT",Deep Hierarchical Planning from Pixels,101
|
298 |
+
105,1535506620624642048,"VN-Transformer: Rotation-Equivariant Attention for Vector Neurons
|
299 |
+
abs: https://t.co/OkS58YpYq8 https://t.co/ailLjhzsqa",VN-Transformer: Rotation-Equivariant Attention for Vector Neurons,144
|
300 |
+
106,1535469100436271105,"Factuality Enhanced Language Models for Open-Ended Text Generation
|
301 |
+
abs: https://t.co/YX83NnfpMU
|
302 |
+
|
303 |
+
factual-nucleus sa… https://t.co/suFGgO8Ajv",Factuality Enhanced Language Models for Open-Ended Text Generation,31
|
304 |
+
107,1535449832332177408,"Unveiling Transformers with LEGO: a synthetic reasoning task
|
305 |
+
abs: https://t.co/FCnAD9AjMY https://t.co/LsUblvE3Ig",Unveiling Transformers with LEGO: a synthetic reasoning task,77
|
306 |
+
108,1535392356068892674,"BigVGAN: A Universal Neural Vocoder with Large-Scale Training
|
307 |
+
abs: https://t.co/4NRS1WBePa
|
308 |
+
project page:… https://t.co/rpuKyOEGMH",BigVGAN: A Universal Neural Vocoder with Large-Scale Training,170
|
309 |
+
109,1535069067052195862,"Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
|
310 |
+
abs:… https://t.co/v2aIh9B5H2",Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models,158
|
311 |
+
110,1535067850435600403,"Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer
|
312 |
+
abs: https://t.co/0s94Tbwh3q
|
313 |
+
|
314 |
+
propose i… https://t.co/lQZWEHXeRI",Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer,52
|
315 |
+
111,1535066703075352601,"VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution
|
316 |
+
abs:… https://t.co/UKXo53aomf",VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution,146
|
317 |
+
112,1535061799975919633,"Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem
|
318 |
+
abs:… https://t.co/fUyM4hz22a",Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem,48
|
319 |
+
113,1535026713100537871,"Sparse Fusion Mixture-of-Experts are Domain Generalizable Learners
|
320 |
+
abs: https://t.co/koYO5SuiDQ
|
321 |
+
github:… https://t.co/1xMmVzboCC",Sparse Fusion Mixture-of-Experts are Domain Generalizable Learners,70
|
322 |
+
114,1534712305790894081,"STable: Table Generation Framework for Encoder-Decoder Models
|
323 |
+
abs: https://t.co/P8GcsztVFp https://t.co/lJnhODKXyn",STable: Table Generation Framework for Encoder-Decoder Models,32
|
324 |
+
115,1534702470202630144,"Neural Diffusion Processes
|
325 |
+
abs: https://t.co/do2pFgpRWY
|
326 |
+
|
327 |
+
empirically show that NDPs are able to capture functional… https://t.co/Fx5BFrA9qQ",Neural Diffusion Processes,229
|
328 |
+
116,1534701793183252485,"Patch-based Object-centric Transformers for Efficient Video Generation
|
329 |
+
abs: https://t.co/oeAa0hiBqZ
|
330 |
+
project page:… https://t.co/qCoaulnDfS",Patch-based Object-centric Transformers for Efficient Video Generation,30
|
331 |
+
117,1534700653628764160,"Accelerating Score-based Generative Models for High-Resolution Image Synthesis
|
332 |
+
abs: https://t.co/rC90ydANVJ
|
333 |
+
project… https://t.co/5reyDDPyBN",Accelerating Score-based Generative Models for High-Resolution Image Synthesis,69
|
334 |
+
118,1534476660355043329,"On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning
abs: https://t.co/1gEuTB7Sf1
multi-task pre… https://t.co/zx8QDoxq2l",On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning,39
119,1534465882512146432,"Few-Shot Learning by Dimensionality Reduction in Gradient Space
abs: https://t.co/IMwlsW0r5V
introduce SubGD, a no… https://t.co/YltxH8mUtF",Few-Shot Learning by Dimensionality Reduction in Gradient Space,204
120,1534376291453083648,"DETR++: Taming Your Multi-Scale Detection Transformer
abs: https://t.co/kOQ5V4vC3C
DETR++, a new architecture that… https://t.co/i7qtSX9eA3",DETR++: Taming Your Multi-Scale Detection Transformer,85
121,1534347375128547328,"Intra-agent speech permits zero-shot task acquisition
abs: https://t.co/2yVGA91kSA
with ~ 150 additional image cap… https://t.co/DtBczvw7lh",Intra-agent speech permits zero-shot task acquisition,60
122,1534343347334176770,"Universal Speech Enhancement with Score-based Diffusion
abs: https://t.co/jv1rQ14Do4
project page:… https://t.co/UMEE3irGWN",Universal Speech Enhancement with Score-based Diffusion,125
123,1534341405920870400,"Generating Long Videos of Dynamic Scenes
abs: https://t.co/SjMCJub1RO
project page: https://t.co/c97Jcf3lcC
presen… https://t.co/jgcfMwGMo6",Generating Long Videos of Dynamic Scenes,336
124,1533997063951765506,"Zero-Shot Voice Conditioning for Denoising Diffusion TTS Models
abs: https://t.co/iTfFppABzr
method requires a sho… https://t.co/GALvAsiQ0J",Zero-Shot Voice Conditioning for Denoising Diffusion TTS Models,89
125,1533996337557020672,"Drawing out of Distribution with Neuro-Symbolic Generative Models
abs: https://t.co/PcRRRLIVyV
DooD trained on MNI… https://t.co/h28KgM3m3k",Drawing out of Distribution with Neuro-Symbolic Generative Models,39
126,1533993050627776512,"Separable Self-attention for Mobile Vision Transformers
abs: https://t.co/Xj1aZMucFe
With ~ 3M parameters, MobileV… https://t.co/LTag2ck7Ew",Separable Self-attention for Mobile Vision Transformers,89
127,1533989659017199617,"Extreme Compression for Pre-trained Transformers Made Simple and Efficient
abs: https://t.co/7epbwDmV31 https://t.co/n9nppcTgGJ",Extreme Compression for Pre-trained Transformers Made Simple and Efficient,84
128,1533988146102288386,"On the duality between contrastive and non-contrastive self-supervised learning
abs: https://t.co/O2GdHjqiTz https://t.co/nUibodNE9M",On the duality between contrastive and non-contrastive self-supervised learning,83
129,1533982101653098503,"ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
abs:… https://t.co/tQuBWS3uaH",ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers,25
130,1533980842867015681,"Torsional Diffusion for Molecular Conformer Generation
abs: https://t.co/VfhEdlJLd7
github: https://t.co/DYpXh7NbKe https://t.co/khz3yO5FFZ",Torsional Diffusion for Molecular Conformer Generation,24
131,1533980437114232832,"Blended Latent Diffusion
abs: https://t.co/5K8QQnlQfz
project page: https://t.co/ztlJtR4Sio
present an accelerated… https://t.co/qzrdUJc4i9",Blended Latent Diffusion,55
132,1533979552761913344,"Diffusion-GAN: Training GANs with Diffusion
abs: https://t.co/rxRpORfP5U
DiffusionGAN can provide stable and data-… https://t.co/ScQTvm3XaA",Diffusion-GAN: Training GANs with Diffusion,237
133,1533676404063232000,"Beyond Tabula Rasa: Reincarnating Reinforcement Learning
abs: https://t.co/r8TcfqPyIs https://t.co/qSO5K11vYB",Beyond Tabula Rasa: Reincarnating Reinforcement Learning,34
134,1533649732345778177,"Improving Fairness in Large-Scale Object Recognition by CrowdSourced Demographic Information
abs:… https://t.co/3mGwmSsO6M",Improving Fairness in Large-Scale Object Recognition by CrowdSourced Demographic Information,17
135,1533634419986153472,"Positive Unlabeled Contrastive Learning
abs: https://t.co/LC33ii48Q6 https://t.co/eWLoasRamS",Positive Unlabeled Contrastive Learning,67
136,1533633258545610754,"Reinforcement Learning with Neural Radiance Fields
abs: https://t.co/8ESw75I2N9
project page:… https://t.co/DQrpZ5dyrb",Reinforcement Learning with Neural Radiance Fields,131
137,1533619945996697600,"Compositional Visual Generation with Composable Diffusion Models
abs: https://t.co/FEKYaDOlwf
project page:… https://t.co/qvaTyuj3un",Compositional Visual Generation with Composable Diffusion Models,122
138,1533611409069711368,"Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
abs:… https://t.co/rQTNT4yfcB",Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules,40
139,1532729442321170433,"Deep Learning on Implicit Neural Datasets
abs: https://t.co/nPGleDBRSq
introduce the INR-Net, the first general fr… https://t.co/i1xT7bLhSN",Deep Learning on Implicit Neural Datasets,81
140,1532726423697465344,"SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners
abs: https://t.co/SIR2ufE89J
github:… https://t.co/tZoNFvtDFQ",SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners,178
141,1532558380119752705,"DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
abs:… https://t.co/dHBUdpmqm9",DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks,31
142,1532554016072376323,"Cascaded Video Generation for Videos In-the-Wild
abs: https://t.co/wDkiRCEWXN https://t.co/GJSVK80qC0",Cascaded Video Generation for Videos In-the-Wild,57
143,1532547568567300096,"Finding the Right Recipe for Low Resource Domain Adaptation in Neural Machine Translation
abs:… https://t.co/FAEEhSyQpY",Finding the Right Recipe for Low Resource Domain Adaptation in Neural Machine Translation,12
144,1532540853071265799,"BayesFormer: Transformer with Uncertainty Estimation
abs: https://t.co/0OqGgau2D2
introduce BayesFormer, a Transfo… https://t.co/znYfXmUPpJ",BayesFormer: Transformer with Uncertainty Estimation,188
145,1532539121662574605,"Improving Diffusion Models for Inverse Problems using Manifold Constraints
abs: https://t.co/Mt78QlNgZZ https://t.co/d6T7XFkqf1",Improving Diffusion Models for Inverse Problems using Manifold Constraints,115
146,1532538212438130697,"DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps
abs:… https://t.co/PBn2cEeEle",DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps,93
147,1532201565167267840,"Hopular: Modern Hopfield Networks for Tabular Data
abs: https://t.co/O5h6GYoGZd
github: https://t.co/kztLUsmzMY
pro… https://t.co/xqlUFoil7K",Hopular: Modern Hopfield Networks for Tabular Data,485
148,1532173830428442627,"PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs
abs: https://t.co/MdoshW31xe
gith… https://t.co/d0PWKpIufP",PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs,121
149,1532162242715721728,"Elucidating the Design Space of Diffusion-Based Generative Models
abs: https://t.co/WtodJSq1wa
improve efficiency… https://t.co/Fp84kzysBZ",Elucidating the Design Space of Diffusion-Based Generative Models,257
150,1531810146178957312,"Chefs' Random Tables: Non-Trigonometric Random Features
abs: https://t.co/qrt5BnhG2g https://t.co/AuWq9HKnl5",Chefs' Random Tables: Non-Trigonometric Random Features,19
151,1531802121280147457,"Few-Shot Diffusion Models
abs: https://t.co/Oz75eOx0Ue
At test time, the model is able to generate samples from pr… https://t.co/qw3Wdivfks",Few-Shot Diffusion Models,114
152,1531798720550952961,"SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections
abs: https://t.co/eviBoaJ1Zw… https://t.co/XsdD2CSafR",SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections,148
153,1531484127177936896,"Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning
abs:… https://t.co/yafGze7shH",Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning,36
154,1531466054492364800,"Dataset Condensation via Efficient Synthetic-Data Parameterization
abs: https://t.co/IA66WHQQCH
github:… https://t.co/PuBEVyx5EK",Dataset Condensation via Efficient Synthetic-Data Parameterization,110
155,1531465172262567937,"Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors
abs: https://t.co/25EYR1yE1A
pro… https://t.co/qdqxXZtyYx",Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors,56
156,1531460153152786432,"Teaching Models to Express Their Uncertainty in Words
abs: https://t.co/rKcZNhBLt5
GPT-3 model can learn to expres… https://t.co/Z3YCzXqaMX",Teaching Models to Express Their Uncertainty in Words,163
157,1531454478968406016,"Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning
abs:… https://t.co/U47eMKEmf3",Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning,36
158,1531451492120535041,"Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers
abs:… https://t.co/Ar0fNxMRi9",Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers,28
159,1531445364217237509,"Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models
abs: https://t.co/myWID3paI2 https://t.co/S0WUP71wz8",Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models,66
160,1531444059780308996,"Neural Volumetric Object Selection
abs: https://t.co/ZLiJ5iBZzQ
project page: https://t.co/YGsNO14XK7 https://t.co/4twrRcyExx",Neural Volumetric Object Selection,97
161,1531442002814025728,"Multi-Game Decision Transformers
abs: https://t.co/5JtgTx3B49
project page: https://t.co/rKk7h7wLga
a single trans… https://t.co/zcJXA5tDhR",Multi-Game Decision Transformers,105
162,1531440090161025024,"Diffusion-LM Improves Controllable Text Generation
abs: https://t.co/YYVX2fuWrM
Diffusion-LM iteratively denoises… https://t.co/1pJ5djHV9T",Diffusion-LM Improves Controllable Text Generation,145
163,1531176037400338432,"MyoSuite -- A contact-rich simulation suite for musculoskeletal motor control
abs: https://t.co/HpRvGT2UDz
project… https://t.co/6noxiVtz85",MyoSuite -- A contact-rich simulation suite for musculoskeletal motor control,47
164,1531174102572191744,"Neural Basis Models for Interpretability
abs: https://t.co/u0G7oK87X4 https://t.co/ML7UCNPDkP",Neural Basis Models for Interpretability,55
165,1531173694214656005,"Scalable Interpretability via Polynomials
abs: https://t.co/EKZDra09oM https://t.co/XyIoQHWftG",Scalable Interpretability via Polynomials,32
166,1531173081393336320,"Sharpness-Aware Training for Free
abs: https://t.co/R6SSrWAjL2 https://t.co/alHDGt3zQo",Sharpness-Aware Training for Free,155
167,1531165352037691392,"Global Normalization for Streaming Speech Recognition in a Modular Framework
abs: https://t.co/OfIb7wiVkx
demonstr… https://t.co/0iVBVXVBBs",Global Normalization for Streaming Speech Recognition in a Modular Framework,21
168,1531104909927628806,"Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions
abs: https://t.co/gVXiOx5Df3 https://t.co/eufEJbHHRr",Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions,47
169,1531100741166833664,"FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
abs: https://t.co/3aHeecihur
an IO-awa… https://t.co/GoJsOKYEgt",FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness,233
170,1531098962932944896,"Contrastive Siamese Network for Semi-supervised Speech Recognition
abs: https://t.co/SL374ByjZO
experiments show t… https://t.co/efVonWBQC5",Contrastive Siamese Network for Semi-supervised Speech Recognition,71
171,1531096569365282816,"X-ViT: High Performance Linear Vision Transformer without Softmax
abs: https://t.co/A6HZ2vXKDB https://t.co/kArY0Tm4VE",X-ViT: High Performance Linear Vision Transformer without Softmax,120
172,1531093245308059650,"Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval
transformer… https://t.co/OSLGlyUNqb",Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval,12
173,1531092289090736129,"Quark: Controllable Text Generation with Reinforced Unlearning
abs: https://t.co/OmS9AqhC7d
introduce Quantized Re… https://t.co/M4DHSUpwF3",Quark: Controllable Text Generation with Reinforced Unlearning,144
174,1531091654567919616,"Training and Inference on Any-Order Autoregressive Models the Right Way
abs: https://t.co/G8DNeKtoJK
leads to impr… https://t.co/JjXafy7iAu",Training and Inference on Any-Order Autoregressive Models the Right Way,22
175,1531090584231890947,"Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation
abs:… https://t.co/binMlc2scV",Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation,52
176,1531089687263293442,"Maximum Likelihood Training of Implicit Nonlinear Diffusion Models
abs: https://t.co/U2YtYUURqH https://t.co/lw7hcspT7o",Maximum Likelihood Training of Implicit Nonlinear Diffusion Models,110
177,1531088458839740416,"Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters
a… https://t.co/e1H5ZyvcQg","Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters",20
178,1531086920461307906,"Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures
abs:… https://t.co/7DWwix1kP1","Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures",81
179,1531017163284393987,"CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
github: https://t.co/1JuOHU7puc https://t.co/Wilcq2Xxb9",CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers,1498
180,1530278551676657665,"Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality
abs: https://t.co/swtjYLryr5 https://t.co/Ny4wTtkaAI",Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality,31
181,1530029153101168645,"Towards Learning Universal Hyperparameter Optimizers with Transformers
abs: https://t.co/yON7zKZCRy
extensive expe… https://t.co/UWv7nrCmhF",Towards Learning Universal Hyperparameter Optimizers with Transformers,129
182,1530028097692647449,"BiT: Robustly Binarized Multi-distilled Transformer
abs: https://t.co/buQ40Vo9ee https://t.co/Q8iyC2Auql",BiT: Robustly Binarized Multi-distilled Transformer,37
183,1530018008667660300,"Evaluating Multimodal Interactive Agents
abs: https://t.co/CtrOihrZBZ https://t.co/sThFVydSUZ",Evaluating Multimodal Interactive Agents,23
184,1530013711645253632,"Matryoshka Representations for Adaptive Deployment
abs: https://t.co/KkqN7sxmnN
flexibility within the learned Mat… https://t.co/RYra48uEKN",Matryoshka Representations for Adaptive Deployment,69
185,1530010193836244992,"Green Hierarchical Vision Transformer for Masked Image Modeling
abs: https://t.co/r4Y9LfE4HC
github:… https://t.co/o7ZihujhkM",Green Hierarchical Vision Transformer for Masked Image Modeling,26
186,1529673576835698691,"Inception Transformer
abs: https://t.co/EoPDBOafSS
iFormer-S hits the top-1 accuracy of 83.4% on ImageNet-1K, much… https://t.co/24J3SnTBdm",Inception Transformer,117
187,1529640184081534977,"FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech
abs: https://t.co/IABvUreqHv https://t.co/iUUzNPaPFp",FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech,30
188,1529637573462831105,"Autoformalization with Large Language Models
abs: https://t.co/SoGYXkMGhV
methodology results in a new state-of-th… https://t.co/pTxpC00QFC",Autoformalization with Large Language Models,24
189,1529630110885851136,"AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models
abs: https://t.co/aD0daO7HEa
By… https://t.co/NW3DbOJdwH",AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models,64
190,1529625016471633920,"An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems
abs:… https://t.co/gks4xeDd22",An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems,10
191,1529341790335246336,"Policy Compliance Detection via Expression Tree Inference
abs: https://t.co/Ic7Wm852Qz https://t.co/4RtEnug1RD",Policy Compliance Detection via Expression Tree Inference,8
192,1529309686318653441,"History Compression via Language Models in Reinforcement Learning
abs: https://t.co/N1smkJUAW9 https://t.co/4v1an4CkTU",History Compression via Language Models in Reinforcement Learning,85
193,1529303237572034560,"On the Role of Bidirectionality in Language Model Pre-Training
abs: https://t.co/fG2SbUhB1W
propose a new framewor… https://t.co/Gc40i0zyeV",On the Role of Bidirectionality in Language Model Pre-Training,26
194,1529301315221917699,"Large Language Models are Zero-Shot Reasoners
abs: https://t.co/GgdLms77wF
LLMs are decent zero-shot reasoners by… https://t.co/PTH6QpdSo2",Large Language Models are Zero-Shot Reasoners,85
195,1529278657856000000,"Naive Few-Shot Learning: Sequence Consistency Evaluation
abs: https://t.co/ySAzuujz2O https://t.co/aVVLHJdBUC",Naive Few-Shot Learning: Sequence Consistency Evaluation,19
196,1529075001256824834,"All Birds with One Stone: Multi-task Text Classification for Efficient Inference with One Forward Pass
abs:… https://t.co/fcPGWaFEk5",All Birds with One Stone: Multi-task Text Classification for Efficient Inference with One Forward Pass,12
197,1529071850860453888,"StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models
abs:… https://t.co/MDT1Bxw9by",StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models,20
198,1528909940324192256,"Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods
abs:… https://t.co/B65LGrnCLg",Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods,38
199,1528907841335066625,"Flexible Diffusion Modeling of Long Videos
abs: https://t.co/Cx1BUqA7zM
demonstrate improved video modeling over p… https://t.co/Y15RoaMAFg",Flexible Diffusion Modeling of Long Videos,84
200,1528904484553900034,"Scaling Laws and Interpretability of Learning from Repeated Data
abs: https://t.co/UbSQazzMwa
performance of 800M… https://t.co/4HHdSCe8ZT",Scaling Laws and Interpretability of Learning from Repeated Data,46
201,1528851863306752000,"Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
project page:… https://t.co/5yJZQIqMdn",Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding,2724
202,1528584642407841792,"Self-Supervised Depth Estimation with Isometric-Self-Sample-Based Learning
abs: https://t.co/rE7gjT0COx https://t.co/EtbaT2jTle",Self-Supervised Depth Estimation with Isometric-Self-Sample-Based Learning,116
203,1528576691152494592,"Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors
abs: https://t.co/bXqr4sP1V4
github:… https://t.co/efonU0Az1m",Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors,59
204,1528569396456828929,"Why GANs are overkill for NLP
abs: https://t.co/zwjCFxh22z https://t.co/tuM1ufFC7x",Why GANs are overkill for NLP,139
205,1528555058916429828,"Lossless Acceleration for Seq2seq Generation with Aggressive Decoding
abs: https://t.co/7bmGFXe47E
github:… https://t.co/bXfTVfP56t",Lossless Acceleration for Seq2seq Generation with Aggressive Decoding,48
206,1528541664561844229,"Planning with Diffusion for Flexible Behavior Synthesis
abs: https://t.co/HSoQhC6WBV
project page:… https://t.co/PA69vLOYmb",Planning with Diffusion for Flexible Behavior Synthesis,119
207,1527765335528591361,"Disentangling Visual Embeddings for Attributes and Objects
abs: https://t.co/QlDsekM1rH https://t.co/YfsJGNzjlX",Disentangling Visual Embeddings for Attributes and Objects,253
208,1527452603264733184,"RankGen: Improving Text Generation with Large Ranking Models
abs: https://t.co/uVVfXNnZeR
github:… https://t.co/GfRxgf4hKe",RankGen: Improving Text Generation with Large Ranking Models,37
209,1527450826343604233,"Robust and Efficient Medical Imaging with Self-Supervision
abs: https://t.co/oBqk2TTp73
strategy leads to strong d… https://t.co/ptwGGG2NkL",Robust and Efficient Medical Imaging with Self-Supervision,45
210,1527097137825202177,"Masked Autoencoders As Spatiotemporal Learners
abs: https://t.co/MWlK2uV6qF
MAE method can learn strong representa… https://t.co/KX2kb7Zf0m",Masked Autoencoders As Spatiotemporal Learners,288
211,1527092033374113795,"Meta-Learning Sparse Compression Networks
abs: https://t.co/pDKyAXyGmg https://t.co/ExJQyGQefn",Meta-Learning Sparse Compression Networks,34
212,1526825242026512385,"An Empirical Investigation of Representation Learning for Imitation
abs: https://t.co/P6C15OJ0ft https://t.co/C0PcBJ72kH",An Empirical Investigation of Representation Learning for Imitation,40
213,1526798231191048192,"Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space
abs: https://t.co/kKjj0oKSyE https://t.co/bk1DPxlwZ9",Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space,49
214,1526738002621472768,"SKILL: Structured Knowledge Infusion for Large Language Models
abs: https://t.co/vbExGmg4hx https://t.co/3hVTWxLVE1",SKILL: Structured Knowledge Infusion for Large Language Models,46
215,1526435187093123072,"FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization
abs:… https://t.co/qG4s6MlGmd",FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization,46
216,1526428632868167682,"PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning
abs: https://t.co/JTAU1vnmst https://t.co/fyonWO1rKa",PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning,45
217,1526373200183033857,"Diffusion Models for Adversarial Purification
abs: https://t.co/VdSXsTahOY https://t.co/lFxumNcuIj",Diffusion Models for Adversarial Purification,135