- Attention Is All You Need
  Paper • 1706.03762 • Published • 45
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11
- GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
  Paper • 2305.13245 • Published • 5
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 242
Eli Chen
elichen3051
AI & ML interests
Learning Algorithms, Reinforcement Learning, Data Synthesis, Benchmarking
Recent Activity
- upvoted an article 7 days ago: Efficient LLM Pretraining: Packed Sequences and Masked Attention
- updated a dataset 9 days ago: skymizer/axolotl-tiny-set
- updated a dataset 10 days ago: skymizer/fineweb-edu-dedup-5B
Organizations
Collections: 2
Models: None public yet