lckr's Collections
random_papers
DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models • arXiv:2309.03883 • 33 upvotes
LoRA: Low-Rank Adaptation of Large Language Models • arXiv:2106.09685 • 30 upvotes
Agents: An Open-source Framework for Autonomous Language Agents • arXiv:2309.07870 • 41 upvotes
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback • arXiv:2309.00267 • 47 upvotes
One Wide Feedforward is All You Need • arXiv:2309.01826 • 31 upvotes
Retentive Network: A Successor to Transformer for Large Language Models • arXiv:2307.08621 • 170 upvotes
Large Language Models as Optimizers • arXiv:2309.03409 • 75 upvotes
Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers • arXiv:2309.08532 • 52 upvotes
CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages • arXiv:2309.09400 • 82 upvotes
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models • arXiv:2309.09958 • 18 upvotes
Contrastive Decoding Improves Reasoning in Large Language Models • arXiv:2309.09117 • 37 upvotes
Do Vision Transformers See Like Convolutional Neural Networks? • arXiv:2108.08810 • 1 upvote
Neural Networks are Decision Trees • arXiv:2210.05189 • 1 upvote
On the cross-validation bias due to unsupervised pre-processing • arXiv:1901.08974 • 1 upvote
The Forward-Forward Algorithm: Some Preliminary Investigations • arXiv:2212.13345 • 2 upvotes
Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets • arXiv:2201.02177 • 2 upvotes
DINOv2: Learning Robust Visual Features without Supervision • arXiv:2304.07193 • 5 upvotes
High-Resolution Image Synthesis with Latent Diffusion Models • arXiv:2112.10752 • 11 upvotes
Training Compute-Optimal Large Language Models • arXiv:2203.15556 • 10 upvotes
Training language models to follow instructions with human feedback • arXiv:2203.02155 • 15 upvotes
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models • arXiv:2201.11903 • 9 upvotes
Language Models are Few-Shot Learners • arXiv:2005.14165 • 11 upvotes
Automatic Prompt Optimization with "Gradient Descent" and Beam Search • arXiv:2305.03495 • 1 upvote
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning • arXiv:2301.13688 • 8 upvotes
LLaMA: Open and Efficient Foundation Language Models • arXiv:2302.13971 • 13 upvotes
Toolformer: Language Models Can Teach Themselves to Use Tools • arXiv:2302.04761 • 11 upvotes
RWKV: Reinventing RNNs for the Transformer Era • arXiv:2305.13048 • 14 upvotes
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model • arXiv:2309.16058 • 55 upvotes
Context Tuning for Retrieval Augmented Generation • arXiv:2312.05708 • 16 upvotes
GAIA: a benchmark for General AI Assistants • arXiv:2311.12983 • 183 upvotes
Exponentially Faster Language Modelling • arXiv:2311.10770 • 118 upvotes
Orca 2: Teaching Small Language Models How to Reason • arXiv:2311.11045 • 70 upvotes
LocalMamba: Visual State Space Model with Windowed Selective Scan • arXiv:2403.09338 • 7 upvotes