---
license: mit
datasets:
- lmms-lab/LLaVA-NeXT-Data
language:
- en
---

# Large Multi-modal Models Can Interpret Features in Large Multi-modal Models

For the first time in the multimodal domain, we demonstrate that features learned by Sparse Autoencoders (SAEs) in a smaller Large Multimodal Model (LMM) can be effectively interpreted by a larger LMM. Our work introduces the use of SAEs to analyze the open-semantic features of LMMs, providing a solution for feature interpretation across model scales.

This research is inspired by Anthropic's remarkable [work](https://transformer-circuits.pub/2024/scaling-monosemanticity/) on applying SAEs to interpret features in large-scale language models. In multimodal models, we discovered intriguing features that correlate with diverse semantics and can be leveraged to steer model behavior, enabling more precise control and understanding of LMM functionality.
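
The sketch below illustrates one common way such steering is done: adding a scaled copy of a feature's decoder direction to the model's hidden states. This is a generic, hypothetical illustration of the technique, not the exact procedure used in this work; see the repository for the actual steering code.

```python
import torch

def steer(hidden: torch.Tensor, decoder_weight: torch.Tensor,
          feature_idx: int, scale: float = 5.0) -> torch.Tensor:
    """Nudge hidden states along one SAE feature's decoder direction.

    Illustrative only: `decoder_weight` is assumed to have shape
    (n_features, d_model), i.e. one row per learned feature.
    """
    direction = decoder_weight[feature_idx]    # (d_model,)
    direction = direction / direction.norm()   # unit-normalize the direction
    return hidden + scale * direction          # broadcasts over the token dim
```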

This model is the SAE trained on the LLaVA-NeXT SFT data, with 131k features and 256 activated features. For usage instructions, refer to the [GitHub repository](https://github.com/EvolvingLMMs-Lab/multimodal-sae/tree/main).
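
For intuition, here is a minimal sketch of a generic top-k SAE in PyTorch, assuming "131k features" means a 131,072-wide feature dictionary and "256 activated features" means the top 256 activations are kept per input. The class name, shapes, and defaults are hypothetical and only illustrate the architecture class; the actual implementation lives in the repository above.

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Generic top-k sparse autoencoder (illustrative, not the repo's code)."""

    def __init__(self, d_model: int, n_features: int = 131072, k: int = 256):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # activation -> feature space
        self.decoder = nn.Linear(n_features, d_model)  # feature space -> reconstruction
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pre = self.encoder(x)
        # keep only the k strongest feature activations per input; zero the rest
        vals, idx = torch.topk(pre, self.k, dim=-1)
        sparse = torch.zeros_like(pre).scatter_(-1, idx, vals)
        return self.decoder(sparse)

# Example: reconstruct a batch of hidden states from a hypothetical LMM layer
sae = TopKSAE(d_model=4096)
hidden = torch.randn(8, 4096)    # 8 token activations of width 4096 (assumed)
reconstruction = sae(hidden)     # same shape as the input
```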