,id,tweet_text,paper_reference,total_likes
0,1541238366599012355,"HM3D-ABO: A Photo-realistic Dataset for Object-centric Multi-view 3D Reconstruction
abs: https://t.co/fSVklQH3H4
gi… https://t.co/38aK0bOtoh",HM3D-ABO: A Photo-realistic Dataset for Object-centric Multi-view 3D Reconstruction,77
1,1541226747533922308,"PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction
abs: https://t.co/yXdFTqRWF3

dataset… https://t.co/ZDNMPI2NVR",PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction,51
2,1541224802425442305,"RT @aerinykim: Before I forget, I'd like to summarize some interesting papers that I found at #CVPR2022.

Dual-key multimodal backdoors for…","RT @aerinykim: Before I forget, I'd like to summarize some interesting papers that I found at #CVPR2022.",0
3,1541222358735790082,"Text-Driven Stylization of Video Objects
abs: https://t.co/dQps6x2n65
project page: https://t.co/Ycsjsus0y6

TL;DR:… https://t.co/l9v0AGY7Ks",Text-Driven Stylization of Video Objects,70
4,1541219433259175937,"Megapixel Image Generation with Step-Unrolled Denoising Autoencoders
abs: https://t.co/6fX9PseXBT

obtain FID score… https://t.co/HPodJ8xzPx",Megapixel Image Generation with Step-Unrolled Denoising Autoencoders,94
5,1541125242118078465,"RT @dasayan05: #CVPR2022 summary:
1. Boiling temperature at NOLA
2. Reading NeRF posters
3. Searching for @ak92501 
4. Reading more NeRF po…",RT @dasayan05: #CVPR2022 summary:,0
6,1541101988125048838,"The @CVPR event on @huggingface is ending on June 30th  (AOE Time Zone), 118 team members and 25 @Gradio demos have… https://t.co/dS8GWnOvid","The @CVPR event on @huggingface is ending on June 30th  (AOE Time Zone), 118 team members and 25 @Gradio demos have… https://t.co/dS8GWnOvid",37
7,1540790151273517056,github: https://t.co/nw8tY5xWN3 https://t.co/VmCO75ftIQ,github: https://t.co/nw8tY5xWN3 https://t.co/VmCO75ftIQ,63
8,1540760803900530691,"RT @zhengzhongtu: Already back in Austin now!

Finally caught up with @ak92501 the Arxiv robot on the last day of CVPR~ https://t.co/9hFLvt…",RT @zhengzhongtu: Already back in Austin now!,0
9,1540531617609011200,RT @saihv: @sitzikbs @CSProfKGD @ak92501 #6 seems interesting.. https://t.co/7PIEQOraSz,RT @saihv: @sitzikbs @CSProfKGD @ak92501 #6 seems interesting.. https://t.co/7PIEQOraSz,0
10,1540526641264353283,"RT @MatthewWalmer: Today we’re presenting our poster for “Dual Key Multimodal Backdoors for Visual Question Answering” at #cvpr2022

Aftern…",RT @MatthewWalmer: Today we’re presenting our poster for “Dual Key Multimodal Backdoors for Visual Question Answering” at #cvpr2022,0
11,1540518390904807424,RT @sitzikbs: @WaltonStevenj @ak92501 @CSProfKGD Wow! Same thing happned to me! https://t.co/SndtMVGdkd,RT @sitzikbs: @WaltonStevenj @ak92501 @CSProfKGD Wow! Same thing happned to me! https://t.co/SndtMVGdkd,0
12,1540514393653395457,RT @WaltonStevenj: @CSProfKGD @ak92501 I tried to get a picture but this happened https://t.co/LFqqqwfwGl,RT @WaltonStevenj: @CSProfKGD @ak92501 I tried to get a picture but this happened https://t.co/LFqqqwfwGl,0
13,1540498719245746178,RT @apsdehal: Come stop by at our WinoGround poster during afternoon session at #CVPR2022  today to talk about where today's advanced visio…,RT @apsdehal: Come stop by at our WinoGround poster during afternoon session at #CVPR2022  today to talk about where today's advanced visio…,0
14,1540496892018188289,"WALT: Watch And Learn 2D amodal representation from Time-lapse imagery
paper: https://t.co/8GHgNUGdi6
project page:… https://t.co/5YSt8ydEu0",WALT: Watch And Learn 2D amodal representation from Time-lapse imagery,64
15,1540492673039187969,RT @CSProfKGD: FUN FACT: @ak92501 spends 4-5 hours each night sifting through the arXiv feed and posting.,RT @CSProfKGD: FUN FACT: @ak92501 spends 4-5 hours each night sifting through the arXiv feed and posting.,0
16,1540451974797316096,@mervenoyann Happy birthday! 🎈🎉 🎁,@mervenoyann Happy birthday! 🎈🎉 🎁,4
17,1540439841007083520,RT @shahrukh_athar: Really excited to present RigNeRF today at Poster Session 4.2 of #CVPR2022 (@CVPR)!! Drop by PosterID 161b to discuss R…,RT @shahrukh_athar: Really excited to present RigNeRF today at Poster Session 4.2 of #CVPR2022 (@CVPR)!! Drop by PosterID 161b to discuss R…,0
18,1540422370153881601,RT @jw2yang4ai: We are at 46b to present our UniCL/mini-Florence! https://t.co/U5nvHiO4bR,RT @jw2yang4ai: We are at 46b to present our UniCL/mini-Florence! https://t.co/U5nvHiO4bR,0
19,1540407710038065152,"RT @sitzikbs: OK, @ak92501 just stopped by our poster. Officially, not a bot. https://t.co/tSljzLLjer","RT @sitzikbs: OK, @ak92501 just stopped by our poster. Officially, not a bot. https://t.co/tSljzLLjer",0
20,1540383826630909953,"RT @DrJimFan: Introducing MineDojo for building open-ended generalist agents! https://t.co/PmOCWz6T5E
✅Massive benchmark: 1000s of tasks in…",RT @DrJimFan: Introducing MineDojo for building open-ended generalist agents! https://t.co/PmOCWz6T5E,0
21,1540367998745206784,RT @YiwuZhong: #CVPR2022 We just released a web demo for RegionCLIP (https://t.co/rGvI5L9tXN). The pre-trained RegionCLIP demonstrates inte…,RT @YiwuZhong: #CVPR2022 We just released a web demo for RegionCLIP (https://t.co/rGvI5L9tXN). The pre-trained RegionCLIP demonstrates inte…,0
22,1540353957289234432,will be here until 11,will be here until 11,8
23,1540350076274593794,"RT @karol_majek: @PDillis @ak92501 Real, 3 instances, they balance the load https://t.co/eMMYwmS3xV","RT @karol_majek: @PDillis @ak92501 Real, 3 instances, they balance the load https://t.co/eMMYwmS3xV",0
24,1540349713953595393,"RT @Jerry_XU_Jiarui: 🥰This morning 10:00AM-12:30PM at #CVPR2022, I will present GroupViT at poster 208a. Please come by and  have a chat!…","RT @Jerry_XU_Jiarui: 🥰This morning 10:00AM-12:30PM at #CVPR2022, I will present GroupViT at poster 208a. Please come by and  have a chat!…",0
25,1540349465265061889,RT @CSProfKGD: Got an autograph 🤩 #CVPR2022 https://t.co/897WuqIdM4,RT @CSProfKGD: Got an autograph 🤩 #CVPR2022 https://t.co/897WuqIdM4,0
26,1540347498606346245,"RT @jw2yang4ai: If you are interested, just stop at our RegionCLIP poster detected by our RegionCLIP model. https://t.co/Qnc71nMGuZ","RT @jw2yang4ai: If you are interested, just stop at our RegionCLIP poster detected by our RegionCLIP model. https://t.co/Qnc71nMGuZ",0
27,1540336050488446977,"Sitting at tables on the other side of coffee shop next to door and between cafe, wearing a red shirt https://t.co/EgkMDHNvyQ","Sitting at tables on the other side of coffee shop next to door and between cafe, wearing a red shirt https://t.co/EgkMDHNvyQ",29
28,1540320889753030661,"RT @sitzikbs: Are you still at #CVPR2022 ? Come chat with us at the last poster session (4.2). @ChaminHewa and I will be at poster 61b, 14:…","RT @sitzikbs: Are you still at #CVPR2022 ? Come chat with us at the last poster session (4.2). @ChaminHewa and I will be at poster 61b, 14:…",0
29,1540320736971300871,"RT @confusezius: If contrastive learning and language is something that sounds interesting, drop by at this mornings oral (or poster) sessi…","RT @confusezius: If contrastive learning and language is something that sounds interesting, drop by at this mornings oral (or poster) sessi…",0
30,1540306609594826753,"RT @jw2yang4ai: If you are there, please try our CVPR 2022 work RegionCLIP demo! You can feed any queries to localize the fine-grained obje…","RT @jw2yang4ai: If you are there, please try our CVPR 2022 work RegionCLIP demo! You can feed any queries to localize the fine-grained obje…",0
31,1540197464543838208,"""New York City, oil painting"" - CogView2
demo: https://t.co/KgWC23knx7 https://t.co/28oJbeDKsm","""New York City, oil painting"" - CogView2",18
32,1540187756164423687,"RT @Zhao_Running: Our #INTERSPEECH paper introduces Radio2Speech, a #wirelesssensing system that recovers high quality speech via RF signal…","RT @Zhao_Running: Our #INTERSPEECH paper introduces Radio2Speech, a #wirelesssensing system that recovers high quality speech via RF signal…",0
33,1540184734390706176,"Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision
abs: https://t.co/NO2vzfdYdS https://t.co/WoN73BzgeQ",Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision,65
34,1540180978425073664,"BlazePose GHUM Holistic: Real-time 3D Human Landmarks and Pose Estimation
abs: https://t.co/qnxAmRVP71

present Bla… https://t.co/w4Zi72blos",BlazePose GHUM Holistic: Real-time 3D Human Landmarks and Pose Estimation,81
35,1540176838017916933,"Offline RL for Natural Language Generation with Implicit Language Q Learning
abs: https://t.co/wYTtUgdryZ
project p… https://t.co/xS8JCODxwP",Offline RL for Natural Language Generation with Implicit Language Q Learning,40
36,1540173636774002688,github: https://t.co/Nu0jgZ3qKo https://t.co/cnG50SKwpf,github: https://t.co/Nu0jgZ3qKo https://t.co/cnG50SKwpf,12
37,1540173392996958209,"GODEL: Large-Scale Pre-Training for Goal-Directed Dialog
abs: https://t.co/ayJI8xXVL2

GODEL outperforms sota pre-t… https://t.co/eUfnl7dszD",GODEL: Large-Scale Pre-Training for Goal-Directed Dialog,40
38,1540166602364174338,RT @victormustar: « A lion man is typing in the office »  CogView2 demo is nice 😅 https://t.co/6ZTomM8NBs https://t.co/4wnutOZASQ,RT @victormustar: « A lion man is typing in the office »  CogView2 demo is nice 😅 https://t.co/6ZTomM8NBs https://t.co/4wnutOZASQ,0
39,1540166227162812421,"Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis 🎤🎤
abs:… https://t.co/acdjzVMMU3",Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis 🎤🎤,35
40,1540161095930880001,"MaskViT: Masked Visual Pre-Training for Video Prediction
abs: https://t.co/uhMEB6ashb
project page:… https://t.co/gbnxrCxUrc",MaskViT: Masked Visual Pre-Training for Video Prediction,144
41,1540156319923060736,"The ArtBench Dataset: Benchmarking Generative Models with Artworks
abs: https://t.co/Zzq0A2i5ob
github:… https://t.co/SfQlvTLrk3",The ArtBench Dataset: Benchmarking Generative Models with Artworks,177
42,1540151560939921409,"RT @ccloy: We cast blind 😀 restoration as a code prediction task, and exploit global compositions and long-range dependencies of low-qualit…","RT @ccloy: We cast blind 😀 restoration as a code prediction task, and exploit global compositions and long-range dependencies of low-qualit…",0
43,1540138378498383873,a @Gradio Demo for RegionCLIP: Region-based Language-Image Pretraining on @huggingface Spaces for @CVPR 2022 by… https://t.co/XZCASqN208,a @Gradio Demo for RegionCLIP: Region-based Language-Image Pretraining on @huggingface Spaces for @CVPR 2022 by… https://t.co/XZCASqN208,45
44,1540136841155907585,I will be near the coffee shop outside Hall C tomorrow if anyone wants to meet up after 9 am at CVPR,I will be near the coffee shop outside Hall C tomorrow if anyone wants to meet up after 9 am at CVPR,90
45,1540134704057294848,"EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
abs: https://t.co/qzJtFOGuNK
project page:… https://t.co/drOF3x8DLH",EventNeRF: Neural Radiance Fields from a Single Colour Event Camera,160
46,1540114214756536320,RT @elliottszwu: .@ak92501 is real! Come to hall C!,RT @elliottszwu: .@ak92501 is real! Come to hall C!,0
47,1540109042584064001,"@CSProfKGD @elliottszwu @CVPR thanks, would also be great to meet, sent a dm, also I am at the coffee shop outside… https://t.co/j3i3h6Bbfs","@CSProfKGD @elliottszwu @CVPR thanks, would also be great to meet, sent a dm, also I am at the coffee shop outside… https://t.co/j3i3h6Bbfs",17
48,1540101501456187395,"RT @hyungjin_chung: For those interested diffusion models and inverse problems, come check out our poster on 174a #CVPR2022 ! Joint work wi…","RT @hyungjin_chung: For those interested diffusion models and inverse problems, come check out our poster on 174a #CVPR2022 ! Joint work wi…",0
49,1540098318029692928,"RT @gclue_akira: CogView2のWebデモ
https://t.co/OVu6EE6YQD

https://t.co/kUtxCq4EqV",RT @gclue_akira: CogView2のWebデモ,0
50,1540078626745589761,RT @cyrilzakka: Was working on something very similar but never got the chance to publish due to finals and graduation. Still a WIP but I'v…,RT @cyrilzakka: Was working on something very similar but never got the chance to publish due to finals and graduation. Still a WIP but I'v…,0
51,1540073247177408516,RT @ducha_aiki: #CVPR2022 https://t.co/6NU0e5LA16,RT @ducha_aiki: #CVPR2022 https://t.co/6NU0e5LA16,0
52,1540043756216492035,@elliottszwu @CVPR I will be around in the poster session today in the exhibits hall,@elliottszwu @CVPR I will be around in the poster session today in the exhibits hall,21
53,1540035360860045312,https://t.co/qTaxrKwP7R,https://t.co/qTaxrKwP7R,10
54,1540033980128436226,a @Gradio Demo for CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers on… https://t.co/qQF0GG5cxR,a @Gradio Demo for CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers on… https://t.co/qQF0GG5cxR,119
55,1540032783023849473,RT @elliottszwu: How can we find @ak92501 @CVPR?,RT @elliottszwu: How can we find @ak92501 @CVPR?,0
56,1540028949920710657,RT @jeffclune: Introducing Video PreTraining (VPT): it learns complex behaviors by watching (pretraining on) vast amounts of online videos.…,RT @jeffclune: Introducing Video PreTraining (VPT): it learns complex behaviors by watching (pretraining on) vast amounts of online videos.…,0
57,1539985557937340418,"RT @douwekiela: Check out these FLAVA-based demos: https://t.co/VmnTJwIGey
And this one for Winoground:
https://t.co/rU3Gf2ZOwz
Loading FLA…",RT @douwekiela: Check out these FLAVA-based demos: https://t.co/VmnTJwIGey,0
58,1539982089113767936,RT @lidaiqing: Excited to share BigDatasetGAN @CVPR!  We are able to synthesize ImageNet with pixel-wise labels using as few as 5 annotatio…,RT @lidaiqing: Excited to share BigDatasetGAN @CVPR!  We are able to synthesize ImageNet with pixel-wise labels using as few as 5 annotatio…,0
59,1539961370971541505,"RT @yangtao_wang: #CVPR2022 23/6
Welcome to our poster ""TokenCut: Self-Supervised Transformers for Unsupervised Object Discovery Using Norm…",RT @yangtao_wang: #CVPR2022 23/6,0
60,1539820424376320000,"Multimodal Colored Point Cloud to Image Alignment
paper: https://t.co/YD9bnByUYx
colab: https://t.co/vwGwlrWZhg https://t.co/zE5z2gnzdb",Multimodal Colored Point Cloud to Image Alignment,35
61,1539811680359796739,"TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning
abs:… https://t.co/UArbr7zhRE",TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning,83
62,1539809856168890368,proposed system Qin achieves 40 points higher than the average scores made by students and 15 points higher than GP… https://t.co/bAiPTd9WlF,proposed system Qin achieves 40 points higher than the average scores made by students and 15 points higher than GP… https://t.co/bAiPTd9WlF,8
63,1539809066033487872,"BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing
abs: https://t.co/mi3tdM4hjU https://t.co/C5sOd9hwUk",BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing,13
64,1539806514466144257,"Radio2Speech: High Quality Speech Recovery from Radio Frequency Signals
abs: https://t.co/oFcSQlgsX8
project page:… https://t.co/xfYJtJWIpQ",Radio2Speech: High Quality Speech Recovery from Radio Frequency Signals,239
65,1539794210190155778,"Jointist: Joint Learning for Multi-instrument Transcription and Its Applications
abs: https://t.co/xeuPUBcr01
proje… https://t.co/QmyCioKviJ",Jointist: Joint Learning for Multi-instrument Transcription and Its Applications,17
66,1539782468504412160,"Towards Robust Blind Face Restoration with Codebook Lookup Transformer
abs: https://t.co/NNhj6EhwIP
project page:… https://t.co/3lkIhDyh6P",Towards Robust Blind Face Restoration with Codebook Lookup Transformer,96
67,1539780412297330689,"GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
abs: https://t.co/pKS5mgoDkG

GEMv2 supports 40 docum… https://t.co/qMitHzTlO0",GEMv2: Multilingual NLG Benchmarking in a Single Line of Code,17
68,1539779702306603008,"Questions Are All You Need to Train a Dense Passage Retriever
abs: https://t.co/qdSmN5pe7a

a novel approach to tra… https://t.co/NKgAHWaLsh",Questions Are All You Need to Train a Dense Passage Retriever,57
69,1539777865688010753,"reStructured Pre-training
abs: https://t.co/mYm7qbt59N https://t.co/O5T3tSY4PL",reStructured Pre-training,31
70,1539756137070878721,"RT @earthcurated: Gausdal, Norway ✨ https://t.co/tCYoryrbff","RT @earthcurated: Gausdal, Norway ✨ https://t.co/tCYoryrbff",0
71,1539755999065772034,"RT @earthcurated: Tuscany, Italy 🇮🇹 https://t.co/tswGswZcJL","RT @earthcurated: Tuscany, Italy 🇮🇹 https://t.co/tswGswZcJL",0
72,1539751376263192577,RT @wightmanr: I’m excited to announce that I’ve joined @huggingface  to take AI based computer vision to the next level. I will continue t…,RT @wightmanr: I’m excited to announce that I’ve joined @huggingface  to take AI based computer vision to the next level. I will continue t…,0
73,1539749459915149313,a @Gradio Demo for FLAVA: A Foundation Language And Vision Alignment Model on @huggingface Spaces for @CVPR 2022 by… https://t.co/fxXcV0KZkQ,a @Gradio Demo for FLAVA: A Foundation Language And Vision Alignment Model on @huggingface Spaces for @CVPR 2022 by… https://t.co/fxXcV0KZkQ,23
74,1539736626087206913,RT @imtiazprio: Catch us at the #CVPR2022 Oral Session 3.1.1 at 8:30 am Thursday and Poster Session 10:30 am right after!!,RT @imtiazprio: Catch us at the #CVPR2022 Oral Session 3.1.1 at 8:30 am Thursday and Poster Session 10:30 am right after!!,0
75,1539728223638097920,"RT @Sa_9810: It was really great to see everyone today at the poster session. Thanks for coming!
If you would like to meet for coffee or if…",RT @Sa_9810: It was really great to see everyone today at the poster session. Thanks for coming!,0
76,1539711494522392577,RT @AnimaAnandkumar: Minedojo is largest open-ended language-prompted  multitask #benchmark #AI agents explore procedurally generated #3D w…,RT @AnimaAnandkumar: Minedojo is largest open-ended language-prompted  multitask #benchmark #AI agents explore procedurally generated #3D w…,0
77,1539705700347219975,@RealGilbaz @DatagenTech Sure will visit,@RealGilbaz @DatagenTech Sure will visit,1
78,1539689285137432578,RT @ducha_aiki: #CVPR2022 https://t.co/xRaw8ulZi6,RT @ducha_aiki: #CVPR2022 https://t.co/xRaw8ulZi6,0
79,1539672920456298498,"Scaling Autoregressive Models for Content-Rich Text-to-Image Generation
paper: https://t.co/NKkTeHttLd
project page… https://t.co/CcKxsWPmjR",Scaling Autoregressive Models for Content-Rich Text-to-Image Generation,134
80,1539672517903847425,RT @victormustar: Looking for inspiration? https://t.co/0pyZ02Xxu6 is full of awesome ML demos 🤩 https://t.co/F3eYSZAC3x,RT @victormustar: Looking for inspiration? https://t.co/0pyZ02Xxu6 is full of awesome ML demos 🤩 https://t.co/F3eYSZAC3x,0
81,1539665352258625537,"Check out Talking Face Generation with Multilingual TTS  at @CVPR and try out the live @Gradio Demo

online… https://t.co/mCj9bIMB5u",Check out Talking Face Generation with Multilingual TTS  at @CVPR and try out the live @Gradio Demo,18
82,1539638155111956480,"RT @abidlabs: Slides for my @CVPR 2022 talk: 

""Papers and Code Aren't Enough: Why Demos are Critical to ML Research and How to Build Them""…",RT @abidlabs: Slides for my @CVPR 2022 talk: ,0
83,1539622527890333697,"RT @Gradio: 🔥 Exciting to see live *physical* @Gradio demos at #CVPR2022 

Demo link for automatic sign language recognition:  https://t.co…",RT @Gradio: 🔥 Exciting to see live *physical* @Gradio demos at #CVPR2022 ,0
84,1539614419541528578,"RT @zsoltkira: @ak92501 Thanks @ak92501! The poster at #CVPR202 for this is today!

Location: Halls B2-C
Poster number: 183b
Time: 6/22 (We…",RT @zsoltkira: @ak92501 Thanks @ak92501! The poster at #CVPR202 for this is today!,0
85,1539612340718637057,RT @Jimantha: To all the CVPR-heads out there -- check out @KaiZhang9546's work on inverse rendering in this morning's oral session! Religh…,RT @Jimantha: To all the CVPR-heads out there -- check out @KaiZhang9546's work on inverse rendering in this morning's oral session! Religh…,0
86,1539480179151712256,"Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding
abs: https://t.co/Bq3GUQywPV https://t.co/iLTaoXm0yC",Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding,65
87,1539473926778236934,"RT @zhanghe920312: Thanks @ak92501  for sharing.  
Our poster session happening on Thursday Morning at @CVPR.   Feel free to check out our…",RT @zhanghe920312: Thanks @ak92501  for sharing.  ,0
88,1539473873816719360,RT @zengxianyu18: Thanks for sharing our work😀 I will be presenting SketchEdit @CVPR 2022. If you are interested in our work or just want t…,RT @zengxianyu18: Thanks for sharing our work😀 I will be presenting SketchEdit @CVPR 2022. If you are interested in our work or just want t…,0
89,1539460213211910150,"EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine
abs: https://t.co/F4XkHLRxPi
github:… https://t.co/JiwSuMdkZH",EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine,32
90,1539459120667021312,"EpiGRAF: Rethinking training of 3D GANs
abs: https://t.co/RcY2vQr0NH
project page: https://t.co/kuXPKA00bZ https://t.co/CVCsseAS21",EpiGRAF: Rethinking training of 3D GANs,142
91,1539453554578055168,"Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors
abs:… https://t.co/noluSxtqzu",Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors,71
92,1539451329034297349,RT @ahatamiz1: Please check out our new paper which introduces a new vision transformer model dubbed as GC ViT !,RT @ahatamiz1: Please check out our new paper which introduces a new vision transformer model dubbed as GC ViT !,0
93,1539442569733718016,"GAN2X: Non-Lambertian Inverse Rendering of Image GANs
abs: https://t.co/ziYgRUK2Sr
project page:… https://t.co/rLK6Qp9by0",GAN2X: Non-Lambertian Inverse Rendering of Image GANs,182
94,1539435374103220226,"Global Context Vision Transformers
abs: https://t.co/d6go0yv7fu
github: https://t.co/rUYFs09ReC

On ImageNet-1K dat… https://t.co/HJnw5wclQV",Global Context Vision Transformers,87
95,1539434284213227528,"M&M Mix: A Multimodal Multiview Transformer Ensemble
abs: https://t.co/jQEZR3WCY4 https://t.co/8LZDCG0ePF",M&M Mix: A Multimodal Multiview Transformer Ensemble,39
96,1539431648374099968,"CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation
abs: https://t.co/yy78osDplK

CMTDeepLab improv… https://t.co/zCvYqSLp3G",CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation,26
97,1539425826177007616,"nuQmm: Quantized MatMul for Efficient Inference of Large-Scale Generative Language Models
abs:… https://t.co/13fwAaXIn3",nuQmm: Quantized MatMul for Efficient Inference of Large-Scale Generative Language Models,84
98,1539423930984931329,"Temporally Consistent Semantic Video Editing
abs: https://t.co/sg1dRt2xkw
project page: https://t.co/PyZKnxUQko https://t.co/1Az9nG5ccH",Temporally Consistent Semantic Video Editing,93
99,1539421251076247554,"(Certified!!) Adversarial Robustness for Free!
abs: https://t.co/NTU6lioyII

show how to achieve sota certified adv… https://t.co/2VW1CDARya",(Certified!!) Adversarial Robustness for Free!,39
100,1539419136467554305,"DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection
abs: https://t.co/rXx4npbY5G https://t.co/QBHP494eSn",DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection,143
101,1539379827966459904,"paper: https://t.co/cm0NWvfHVO
poster: https://t.co/cyLKrP84wD https://t.co/8iW8nEYdUi",paper: https://t.co/cm0NWvfHVO,4
102,1539379340324048898,a @Gradio Demo for SPOTER + Media Pipe: Combining Efficient and Precise Sign Language Recognition on @huggingface S… https://t.co/wg6qExJtL3,a @Gradio Demo for SPOTER + Media Pipe: Combining Efficient and Precise Sign Language Recognition on @huggingface S… https://t.co/wg6qExJtL3,17
103,1539355589159026689,"GlideNet: Global, Local and Intrinsic based Dense Embedding NETwork for Multi-category Attributes Prediction
abs:… https://t.co/ztR7AnAQHl","GlideNet: Global, Local and Intrinsic based Dense Embedding NETwork for Multi-category Attributes Prediction",32
104,1539322541482860545,RT @SaurabhBanga4: @ak92501 @CVPR @Gradio @abidlabs @huggingface https://t.co/9KxGEaHp0J,RT @SaurabhBanga4: @ak92501 @CVPR @Gradio @abidlabs @huggingface https://t.co/9KxGEaHp0J,0
105,1539304673211031554,Starting in 10 minutes @CVPR https://t.co/tAppaZFKep,Starting in 10 minutes @CVPR https://t.co/tAppaZFKep,10
106,1539302809404952577,RT @ak92501: Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How to Build The…,RT @ak92501: Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How to Build The…,0
107,1539291146710654976,Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How t… https://t.co/rmjCWbTxJH,Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How t… https://t.co/rmjCWbTxJH,41
108,1539260231062065154,"RT @mattjr97: I somehow didn’t see this until today. Whomever is at CVPR, swing by the poster tomorrow afternoon, I’d love to answer any qu…","RT @mattjr97: I somehow didn’t see this until today. Whomever is at CVPR, swing by the poster tomorrow afternoon, I’d love to answer any qu…",0
109,1539256590737580034,"RT @permutans: Best paper shortlisted at CVPR’22 (U. Washington, OpenAI, Google Brain, Columbia U)

“ensembling the weights of the zero-sho…","RT @permutans: Best paper shortlisted at CVPR’22 (U. Washington, OpenAI, Google Brain, Columbia U)",0
110,1539246900020449281,"RT @humphrey_shi: Last Minute UPDATE:
Our Invited Talk about ML Demos @ Hall B1 will be 1-1:30PM instead due to a scheduling conflict. @CVP…",RT @humphrey_shi: Last Minute UPDATE:,0
111,1539113571388366849,GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy I… https://t.co/9i8574hPgN,GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy I… https://t.co/9i8574hPgN,23
112,1539111398437011460,"RT @yan_xg: Code/pretained model is released, please have a try! 😁https://t.co/iAW5MlgDcp","RT @yan_xg: Code/pretained model is released, please have a try! 😁https://t.co/iAW5MlgDcp",0
113,1539093616886534146,RT @humphrey_shi: Come join us tmr/Tue 10am - 5pm @CVPR to check out in-person Demos at the Demo Area. (also online 27/7 ones at https://t.…,RT @humphrey_shi: Come join us tmr/Tue 10am - 5pm @CVPR to check out in-person Demos at the Demo Area. (also online 27/7 ones at https://t.…,0
114,1539076449788997632,"A Closer Look at Smoothness in Domain Adversarial Training
abs: https://t.co/GgKE9695vj
github:… https://t.co/33MX6TZhjt",A Closer Look at Smoothness in Domain Adversarial Training,96
115,1539066735965380608,"a @Gradio Demo for Thin-Plate Spline Motion Model for Image Animation on @huggingface Spaces for @CVPR 2022
 
demo:… https://t.co/ieg4Xlfnu0",a @Gradio Demo for Thin-Plate Spline Motion Model for Image Animation on @huggingface Spaces for @CVPR 2022,121
116,1539058707643961345,"Holiday at arXiv, underway 🔧, I can sleep today
status: https://t.co/JEXsWfngyb https://t.co/rVve6lNLfB","Holiday at arXiv, underway 🔧, I can sleep today",58
117,1538970393859526656,"Day 2 at @CVPR 2022

Join the CVPR event on @huggingface to build @Gradio demos for CVPR papers here:… https://t.co/ekTNYuUkCQ",Day 2 at @CVPR 2022,47
118,1538765711169966080,@_arohan_ there is already a queue 😄 https://t.co/3ggYefcjMI,@_arohan_ there is already a queue 😄 https://t.co/3ggYefcjMI,2
119,1538764856991547393,https://t.co/UjLVdJKjDt,https://t.co/UjLVdJKjDt,12
120,1538757119796715520,https://t.co/ghtd6xHQ7c,https://t.co/ghtd6xHQ7c,4
121,1538756244298661889,temporary link: https://t.co/fHFgtTir64 https://t.co/9Qbwr3mUwu,temporary link: https://t.co/fHFgtTir64 https://t.co/9Qbwr3mUwu,5
122,1538754677466087424,WIP @Gradio Demo for CogView2 https://t.co/hPmcvwjLsk,WIP @Gradio Demo for CogView2 https://t.co/hPmcvwjLsk,66
123,1538734927604338688,"a @Gradio Demo for V-Doc : Visual questions answers with Documents on @huggingface Spaces for @CVPR 2022
 
demo:… https://t.co/dF6Y2s4H5d",a @Gradio Demo for V-Doc : Visual questions answers with Documents on @huggingface Spaces for @CVPR 2022,20
124,1538731091175038977,"RT @Seungu_Han: Our paper ""NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates"" got accepted to Interspeech 2022…","RT @Seungu_Han: Our paper ""NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates"" got accepted to Interspeech 2022…",0
125,1538719219818409994,"TAVA: Template-free Animatable Volumetric Actors
abs: https://t.co/lJ2C6e1VpG
project page: https://t.co/lpUgeGI7CX https://t.co/D62WYod4by",TAVA: Template-free Animatable Volumetric Actors,71
126,1538716898015293440,"RT @yilin_sung: Excited to participate in my first in-person @CVPR to present VL-Adapter, that benchmarks different parameter-efficient tra…","RT @yilin_sung: Excited to participate in my first in-person @CVPR to present VL-Adapter, that benchmarks different parameter-efficient tra…",0
127,1538710356444471296,"Fast Finite Width Neural Tangent Kernel
abs: https://t.co/iY1lFoYMjA https://t.co/hWzzcCd5OZ",Fast Finite Width Neural Tangent Kernel,22
128,1538706936211951617,"What do navigation agents learn about their environment?
abs: https://t.co/eXelV0REgZ
github:… https://t.co/TGSzEQ1v1c",What do navigation agents learn about their environment?,36
129,1538700561800912896,RT @DrJimFan: @ak92501 Thank you so much AK for posting our work 🥰! What an honor! I’m the first author of MineDojo. We will have an announ…,RT @DrJimFan: @ak92501 Thank you so much AK for posting our work 🥰! What an honor! I’m the first author of MineDojo. We will have an announ…,0
130,1538698653493338114,"Bootstrapped Transformer for Offline Reinforcement Learning
abs: https://t.co/YiEY3uiTgL https://t.co/yle4hPgMmf",Bootstrapped Transformer for Offline Reinforcement Learning,136
131,1538695806311665665,RT @mark_riedl: MineDojo: a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse op…,RT @mark_riedl: MineDojo: a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse op…,0
132,1538695457550921728,"Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning
abs:… https://t.co/uLQLmf4l3M",Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning,41
133,1538694061531533313,"Evolution through Large Models
abs: https://t.co/2B0yygTiWa

pursues the insight that large language models trained… https://t.co/tfvNrHbTYG",Evolution through Large Models,97
134,1538692524830769152,"MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
abs: https://t.co/etfGL1xnum
project pa… https://t.co/Fv1aLuEJSV",MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge,262
135,1538689482534309890,"EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes
abs:… https://t.co/GfAeLP6iAD","EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes",105
136,1538687423722541056,"Lossy Compression with Gaussian Diffusion
abs: https://t.co/tw5YiZAN3B

implement a proof of concept and find that… https://t.co/4nvLjhIX4e",Lossy Compression with Gaussian Diffusion,102
137,1538686489491648514,"NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates
abs: https://t.co/4S8sBXq6Ko

a diffu… https://t.co/xd3eQ0ApQJ",NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates,85
138,1538685207385079809,"Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
abs: https://t.co/ydrEo1SVh9
project page:… https://t.co/4LgYqVNenf","Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks",177
139,1538685023708127238,RT @phiyodr: Check out our work/demo for the #VizWiz workshop at #CVPR2022,RT @phiyodr: Check out our work/demo for the #VizWiz workshop at #CVPR2022,0
140,1538642504609832960,"RT @gclue_akira: I shared #CogView2 colab working.

https://t.co/jwFBWFCSos

@ak92501",RT @gclue_akira: I shared #CogView2 colab working.,0
141,1538593847764197386,Made it to @CVPR 2022 https://t.co/alBnBYHmnT,Made it to @CVPR 2022 https://t.co/alBnBYHmnT,222
142,1538558197459460096,"RT @mitts1910: Excited to share our #CVPR2022 paper, a collaboration of @Microsoft & @RITtigers, that achieves SOTA on Online Action Detect…","RT @mitts1910: Excited to share our #CVPR2022 paper, a collaboration of @Microsoft & @RITtigers, that achieves SOTA on Online Action Detect…",0
143,1538347108671049728,RT @gowthami_s: I will be in person at #CVPR22 to discuss our paper on understanding model reproducibility! Drop by and say hi if you are a…,RT @gowthami_s: I will be in person at #CVPR22 to discuss our paper on understanding model reproducibility! Drop by and say hi if you are a…,0
144,1538331269863510017,Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boun… https://t.co/oqjzwd8h3E,Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boun… https://t.co/oqjzwd8h3E,326
145,1538211869017653249,"RT @keunwoochoi: https://t.co/wEZo4Sxn0Q

AI Song Contest 2022 - the finalists 🔥🔥🔥",RT @keunwoochoi: https://t.co/wEZo4Sxn0Q,0
146,1538200789243596800,"RT @_tingliu: See you at Poster Session 3.2 on Thursday June 23, 2:30 - 5pm at #CVPR2022!","RT @_tingliu: See you at Poster Session 3.2 on Thursday June 23, 2:30 - 5pm at #CVPR2022!",0
147,1538200381863481344,submit @Gradio demos for CVPR papers by joining the organization on @huggingface here: https://t.co/sNaZf2ztdy https://t.co/jc7VX1Hekd,submit @Gradio demos for CVPR papers by joining the organization on @huggingface here: https://t.co/sNaZf2ztdy https://t.co/jc7VX1Hekd,21
148,1538026339747307521,"RT @weichiuma: Can you match images with little or no overlaps?
 
Humans can🧠but most existing methods fail😰
 
Our #CVPR2022 paper shoots c…",RT @weichiuma: Can you match images with little or no overlaps?,0
149,1538019922667659265,"RT @humphrey_shi: AI Research is empowering the world, and DEMO is a best way to showcase this power. Besides in-person Demos, we invite @C…","RT @humphrey_shi: AI Research is empowering the world, and DEMO is a best way to showcase this power. Besides in-person Demos, we invite @C…",0
150,1538006265363738625,"iBoot: Image-bootstrapped Self-Supervised Video Representation Learning
abs: https://t.co/dkZUd4QC81 https://t.co/pJFpxd7ckU",iBoot: Image-bootstrapped Self-Supervised Video Representation Learning,72
151,1538002482088931331,dalle2 - robot reading arxiv papers on a laptop at midnight on a small desk with a lamp turn on and a full coffee m… https://t.co/sg2WIavOZn,dalle2 - robot reading arxiv papers on a laptop at midnight on a small desk with a lamp turn on and a full coffee m… https://t.co/sg2WIavOZn,38
152,1538000649933115393,"Neural Scene Representation for Locomotion on Structured Terrain
abs: https://t.co/68xY622f4w https://t.co/W3wTYp31f6",Neural Scene Representation for Locomotion on Structured Terrain,82
153,1537998346350043137,"Disentangling visual and written concepts in CLIP
abs: https://t.co/VsyuDV4HNI
project page: https://t.co/2hTQnhR2o1 https://t.co/LbWpnpTTHT",Disentangling visual and written concepts in CLIP,93
154,1537992206987845638,dalle2 -  a digital art piece of a robot reading arxiv papers at midnight on a small desk with a lamp turn on and a… https://t.co/V7tHDksfFX,dalle2 -  a digital art piece of a robot reading arxiv papers at midnight on a small desk with a lamp turn on and a… https://t.co/V7tHDksfFX,221
155,1537989713256099848,"a @Gradio Demo for It's About Time: Analog Clock Reading in the Wild on @huggingface Spaces for @CVPR 2022
 
demo:… https://t.co/P8xkisydJQ",a @Gradio Demo for It's About Time: Analog Clock Reading in the Wild on @huggingface Spaces for @CVPR 2022,10
156,1537972518438379520,"RT @imisra_: Why train separate models for visual modalities?

Following up on our Omnivore work: We train a single model on images, videos…",RT @imisra_: Why train separate models for visual modalities?,0
157,1537924151389736961,"Programmatic Concept Learning for Human Motion Description and Synthesis
paper: https://t.co/Qemk23gUHX
project pag… https://t.co/ImHeYQC5vj",Programmatic Concept Learning for Human Motion Description and Synthesis,59
158,1537825873931472898,"RT @abidlabs: Excited to announce the 2022 @CVPR-@Gradio competition ahead of the conference next week!

Our goal is to make it machine lea…",RT @abidlabs: Excited to announce the 2022 @CVPR-@Gradio competition ahead of the conference next week!,0
159,1537818135444828160,a @Gradio Demo for Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model on @huggingface Spaces for… https://t.co/tpSavhBA9G,a @Gradio Demo for Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model on @huggingface Spaces for… https://t.co/tpSavhBA9G,17
160,1537817765213519873,RT @taesiri: @ak92501 @Gradio @huggingface @CVPR Neat! 😄 https://t.co/R6vy3QXcfB,RT @taesiri: @ak92501 @Gradio @huggingface @CVPR Neat! 😄 https://t.co/R6vy3QXcfB,0
161,1537796080238305280,"RT @armandjoulin: Thanks @ak92501 for sharing our work! Masked Autoencoders are insanely easy to use. You can throw any data at them, and t…","RT @armandjoulin: Thanks @ak92501 for sharing our work! Masked Autoencoders are insanely easy to use. You can throw any data at them, and t…",0
162,1537790206946181120,"RT @danxuhk: Please check our paper and project for talking head video generation at the incoming CVPR 22 😃😃😃
@harlan_hong 
You may also tr…",RT @danxuhk: Please check our paper and project for talking head video generation at the incoming CVPR 22 😃😃😃,0
163,1537778006302793728,"RT @_rohitgirdhar_: Excited to share the next evolution of Omnivore: https://t.co/SikzTdVIgx  

Omnivore meets MAE! OmniMAE is a single mod…",RT @_rohitgirdhar_: Excited to share the next evolution of Omnivore: https://t.co/SikzTdVIgx  ,0
164,1537777742590230528,RT @CVPR: The papers to be presented will be listed here: https://t.co/IZfETICs8J https://t.co/dcRQ1BayrT,RT @CVPR: The papers to be presented will be listed here: https://t.co/IZfETICs8J https://t.co/dcRQ1BayrT,0
165,1537775332316614656,"RT @victormustar: 🚪Can you tell if a Neural Net contains a Backdoor Attack? 🤓
A really cool HF Space with good explanations and some nice e…",RT @victormustar: 🚪Can you tell if a Neural Net contains a Backdoor Attack? 🤓,0
166,1537688195206418433,"Virtual Correspondence: Humans as a Cue for Extreme-View Geometry
abs: https://t.co/hAx8x4rnIO
project page:… https://t.co/z19LsVo2qX",Virtual Correspondence: Humans as a Cue for Extreme-View Geometry,195
167,1537685927505678337,"Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning
abs:… https://t.co/n02uqo0cb2",Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning,167
168,1537650506683801601,"GateHUB: Gated History Unit with Background Suppression for Online Action Detection
abs: https://t.co/3DqwFesEZi https://t.co/t1Pcz09AUR",GateHUB: Gated History Unit with Background Suppression for Online Action Detection,24
169,1537640654968324099,"Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing
abs: https://t.co/9tpvhXuaRw
project page:… https://t.co/XxpZg5PGke",Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing,72
170,1537639309888610305,"Realistic One-shot Mesh-based Head Avatars
abs: https://t.co/aETolvwoiH
project page: https://t.co/rTTLG67oPy https://t.co/C8aUN3VS37",Realistic One-shot Mesh-based Head Avatars,562
171,1537637590274277376,"MoDi: Unconditional Motion Synthesis from Diverse Data
abs: https://t.co/YBV9jSUemo https://t.co/o1uvG18RSk",MoDi: Unconditional Motion Synthesis from Diverse Data,70
172,1537630146244517889,"OmniMAE: Single Model Masked Pretraining on Images and Videos
abs: https://t.co/j9a3imUEJ6

single pretrained model… https://t.co/OiR2pY5emm",OmniMAE: Single Model Masked Pretraining on Images and Videos,144
173,1537626871319470080,"FWD: Real-time Novel View Synthesis with Forward Warping and Depth
abs: https://t.co/hbo0vxrlDd

propose a generali… https://t.co/etVCe4HPI9",FWD: Real-time Novel View Synthesis with Forward Warping and Depth,37
174,1537622879386456064,"SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos
abs: https://t.co/0MkpFJiUzM

using spars… https://t.co/x1Hvgf13qE",SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos,54
175,1537621348339572736,"BYOL-Explore: Exploration by Bootstrapped Prediction
abs: https://t.co/xXQtolzjlP

BYOL-Explore achieves superhuman… https://t.co/uZvAbVd1Bb",BYOL-Explore: Exploration by Bootstrapped Prediction,79
176,1537618457365303296,"Know your audience: specializing grounded language models with the game of Dixit
abs: https://t.co/T8d5ir8LDQ https://t.co/zSk5oR2F9D",Know your audience: specializing grounded language models with the game of Dixit,39
177,1537616695749230592,"Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
abs: https://t.co/JVutpfCfIq

pro… https://t.co/8nvWHPxXYm",Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models,11
178,1537615160172589056,"GoodBye WaveNet -- A Language Model for Raw Audio with Context of 1/2 Million Samples
abs: https://t.co/XRTTRbABXG… https://t.co/2ewOJYVqTC",GoodBye WaveNet -- A Language Model for Raw Audio with Context of 1/2 Million Samples,360
179,1537613030225240066,"Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation
abs: https://t.co/RBbFId9jPF

On dance-to… https://t.co/IrXLM4bPcQ",Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation,68
180,1537593193407053826,a @Gradio Demo for Dual-Key Multimodal Backdoors for Visual Question Answering on @huggingface Spaces for @CVPR 202… https://t.co/g0MakJAhtz,a @Gradio Demo for Dual-Key Multimodal Backdoors for Visual Question Answering on @huggingface Spaces for @CVPR 202… https://t.co/g0MakJAhtz,16
181,1537586831310602240,"RT @chaaarig: Also have a try at our demo on @Gradio/@huggingface !

Demo: https://t.co/qyqmbg4eIC

and do join the CVPR 2022 organization…",RT @chaaarig: Also have a try at our demo on @Gradio/@huggingface !,0
182,1537568313504681986,RT @jw2yang4ai: We added a heat map visualization for our demo. It can somehow segment the concepts you are querying. Try it out.,RT @jw2yang4ai: We added a heat map visualization for our demo. It can somehow segment the concepts you are querying. Try it out.,0
183,1537546603262787584,"RT @gadelha_m: Always nice to see the work in AK’s feed! Congrats, @YimingXie4!","RT @gadelha_m: Always nice to see the work in AK’s feed! Congrats, @YimingXie4!",0
184,1537539330901782528,"RT @MatthewWalmer: Can you tell if a Neural Net contains a Backdoor Attack? Try this demo for ""Dual-Key Multimodal Backdoors for Visual Que…","RT @MatthewWalmer: Can you tell if a Neural Net contains a Backdoor Attack? Try this demo for ""Dual-Key Multimodal Backdoors for Visual Que…",0
185,1537489260126904322,"a @Gradio Demo for Bamboo_ViT-B16 for Image Recognition on @huggingface Spaces for @CVPR 2022
 
demo:… https://t.co/lEM23bNPL0",a @Gradio Demo for Bamboo_ViT-B16 for Image Recognition on @huggingface Spaces for @CVPR 2022,26
186,1537478059154079751,"RT @K_S_Schwarz: Sparse voxel grids have proven super useful for speeding up novel view synthesis. Inspired by this, our latest work uses a…","RT @K_S_Schwarz: Sparse voxel grids have proven super useful for speeding up novel view synthesis. Inspired by this, our latest work uses a…",0
187,1537477283409272836,"RT @skamalas: TLDR is now accepted at the Transactions of Machine Learning Research (TMLR) journal - @TmlrOrg 

Openreview: https://t.co/wV…",RT @skamalas: TLDR is now accepted at the Transactions of Machine Learning Research (TMLR) journal - @TmlrOrg ,0
188,1537460438463651842,RT @yilin_sung: Do you still get Out-of-Memory error even when you've saved >95% params w. adapter/prompt-tuning? Try Ladder Side-Tuning (L…,RT @yilin_sung: Do you still get Out-of-Memory error even when you've saved >95% params w. adapter/prompt-tuning? Try Ladder Side-Tuning (L…,0
189,1537460412937019396,"RT @yilin_sung: All our code is available at https://t.co/gTrTXtEodS. Feel free to check it out. @uncnlp
 
(and thanks @ak92501 for sharing)",RT @yilin_sung: All our code is available at https://t.co/gTrTXtEodS. Feel free to check it out. @uncnlp,0
190,1537446428259233792,"RT @roeiherzig: Thanks for featuring our work @ak92501! For more info, please visit our page!

This research is a collaborative effort w/ @…","RT @roeiherzig: Thanks for featuring our work @ak92501! For more info, please visit our page!",0
191,1537324192978419713,"AVATAR: Unconstrained Audiovisual Speech Recognition
abs: https://t.co/ZXdnRJppOk https://t.co/OTcPmcNM9E",AVATAR: Unconstrained Audiovisual Speech Recognition,30
192,1537323042380124160,"VCT: A Video Compression Transformer
abs: https://t.co/llH1L1ooKa

presented an elegantly simple transformer-based… https://t.co/ErovCWVDg3",VCT: A Video Compression Transformer,68
193,1537319908920393729,"It’s Time for Artistic Correspondence in Music and Video
abs: https://t.co/BKyP9MErgw
project page:… https://t.co/NYbUVqPTFo",It’s Time for Artistic Correspondence in Music and Video,58
194,1537316756880072705,"PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos
abs:… https://t.co/TpuSD4Ybkd",PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos,763
195,1537315443932815360,"LET-3D-AP: Longitudinal Error Tolerant 3D Average Precision for Camera-Only 3D Detection
abs:… https://t.co/tRCXSz3kxE",LET-3D-AP: Longitudinal Error Tolerant 3D Average Precision for Camera-Only 3D Detection,33
196,1537314480056672258,"Contrastive Learning as Goal-Conditioned Reinforcement Learning
abs: https://t.co/6dv7PNn0qq
project page:… https://t.co/vRSdekL9If",Contrastive Learning as Goal-Conditioned Reinforcement Learning,77
197,1537312940956712961,RT @ashkamath20: Presenting FIBER (Fusion In-the-Backbone transformER) a novel V&L architecture w/ deep multi-modal fusion + a new pre-trai…,RT @ashkamath20: Presenting FIBER (Fusion In-the-Backbone transformER) a novel V&L architecture w/ deep multi-modal fusion + a new pre-trai…,0
198,1537301855595790337,"LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling
abs:https://t.co/RGQy8Vv1LG https://t.co/G1bdakn5Pr",LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling,42
199,1537288570880368640,"Masked Siamese ConvNets
abs: https://t.co/YMG1O1ZZ5N https://t.co/LCVqVvFNfR",Masked Siamese ConvNets,83
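The rows above follow the schema declared in the header: an unnamed row index, id, tweet_text, paper_reference, total_likes. A minimal sketch of loading and summarizing the dump is shown below, assuming the CSV is saved locally as tweets.csv; the filename and the treatment of "RT @" rows as zero-like retweets are assumptions inferred from the data, not documented anywhere.

```python
# Minimal sketch: load the tweet dump and rank referenced papers by likes.
# Assumes the CSV above is saved as "tweets.csv"; column meanings are
# inferred from the header row.
import pandas as pd

df = pd.read_csv("tweets.csv", index_col=0)  # first (unnamed) column is the row index

# Tweet IDs are 64-bit integers; keep them as strings to avoid precision loss
# if the file is later re-exported through tools that coerce them to float.
df["id"] = df["id"].astype(str)

# Retweets in this dump start with "RT @" and carry zero likes; drop them before ranking.
original = df[~df["tweet_text"].str.startswith("RT @")]
top = original.sort_values("total_likes", ascending=False)
print(top[["paper_reference", "total_likes"]].head(10))
```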