🧠 CLEAR: First multimodal benchmark to make models forget what we want them to forget
With privacy concerns rising, we sometimes need our models to "forget" specific information - like a person's data - while keeping everything else intact. Researchers just released CLEAR, the first benchmark to test how well this works with both text and images.
❌ Bad news: Current methods either fail to truly forget or end up forgetting way too much. It's like trying to remove a single ingredient from a baked cake!
✨ But there's hope: Adding simple mathematical constraints (L1 regularization) during the forgetting process significantly improves results.
🎯 Key insights:
✅ The benchmark tests forgetting on 200 fictional personas
↳ 3,770 visual Q&A pairs
↳ 4,000 textual Q&A pairs
↳ Additional real-world tests
🔍 Most current forgetting methods don't work well with both text and images
↳ They either remember what they should forget
↳ Or they forget too much unrelated information
✨ Simple mathematical constraints work surprisingly well
↳ L1 regularization prevents excessive forgetting
↳ Works especially well with the LLMU method
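For intuition on that L1 point, here is a toy NumPy sketch (not the paper's code; all names, targets, and constants are illustrative assumptions): a linear "model" is tuned so its answer on one forget example moves to a new target, while an L1 penalty on how far the weights drift from the original discourages forgetting unrelated knowledge.

```python
import numpy as np

# Toy sketch of relabel-style unlearning with an L1 drift constraint.
# theta is tuned so its answer on one forget example moves to a new target,
# while lam * |theta - theta_orig|_1 penalizes drifting from the original
# weights (i.e. forgetting things unrelated to the forget example).
rng = np.random.default_rng(0)
theta_orig = rng.normal(size=8)      # frozen copy of the original weights
x_forget = rng.normal(size=8)        # the example the model should forget
y_new = theta_orig @ x_forget + 2.0  # shifted target standing in for a random answer

def unlearn(lam, steps=300):
    lr = 0.4 / (x_forget @ x_forget)  # step size scaled for stable convergence
    theta = theta_orig.copy()
    for _ in range(steps):
        grad_fit = 2 * (theta @ x_forget - y_new) * x_forget  # move answer toward y_new
        grad_l1 = lam * np.sign(theta - theta_orig)           # subgradient of drift penalty
        theta -= lr * (grad_fit + grad_l1)
    return theta

drift_plain = np.abs(unlearn(lam=0.0) - theta_orig).sum()  # unconstrained unlearning
drift_l1 = np.abs(unlearn(lam=0.5) - theta_orig).sum()     # L1-constrained unlearning
```

With the L1 term, the weights move less overall, so less unrelated knowledge is disturbed while the forget example is still unlearned; that is the mechanism behind "prevents excessive forgetting".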
📄 Read the full paper here: CLEAR: Character Unlearning in Textual and Visual Modalities (2410.18057)