TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention Paper • 2410.05076 • Published Oct 7 • 7
SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs Paper • 2410.13276 • Published Oct 17 • 25
Star Attention: Efficient LLM Inference over Long Sequences Paper • 2411.17116 • Published Nov 26 • 47
KV Shifting Attention Enhances Language Modeling Paper • 2411.19574 • Published Nov 29 • 8