m-ric posted an update 10 days ago
🔍 Meta teams use a fine-tuned Llama model to fix production issues in seconds

One of Meta's engineering teams shared how they use a small fine-tuned Llama (Llama 2 7B, so not even a very recent model) to identify the root cause of production issues with 42% accuracy.

🤔 42%, isn't that too low?
➡️ Usually, whenever there's an issue in production, engineers dig through recent code changes to find the offending commit. At Meta's scale (thousands of changes per day), that's like finding a needle in a haystack.
💡 So when the LLM-based suggestion is right, it cuts incident resolution time from hours to seconds!

How did they do it?

🔄 Two-step approach (rough sketch below):
‣ Heuristics (code ownership, directory structure, runtime graphs) narrow thousands of potential changes down to a manageable set
‣ A fine-tuned Llama 2 7B then ranks the most likely culprits
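To make the two-step idea concrete, here is a rough Python sketch (not Meta's internal code): the CodeChange fields, the heuristic signals, and the llm_score helper are hypothetical stand-ins for whatever signals and model interface Meta actually uses.

```python
# Rough sketch of the two-step culprit-ranking idea.
# Everything here (field names, heuristics, llm_score) is illustrative,
# not Meta's actual implementation.
from dataclasses import dataclass


@dataclass
class CodeChange:
    commit_id: str
    author: str
    touched_dirs: set[str]
    summary: str


def heuristic_filter(changes: list[CodeChange], incident_dirs: set[str],
                     on_call_team: set[str], max_candidates: int = 20) -> list[CodeChange]:
    """Step 1: cheap signals (directory overlap, code ownership) shrink
    thousands of recent changes down to a small candidate set."""
    scored = []
    for change in changes:
        score = 2 * len(change.touched_dirs & incident_dirs)  # directory overlap
        score += 1 if change.author in on_call_team else 0    # ownership signal
        scored.append((score, change))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [change for score, change in scored[:max_candidates] if score > 0]


def rank_with_llm(candidates: list[CodeChange], incident_report: str, llm_score) -> list[CodeChange]:
    """Step 2: the fine-tuned model scores each surviving candidate and
    we return them ordered from most to least likely root cause."""
    def relevance(change: CodeChange) -> float:
        prompt = (f"Incident:\n{incident_report}\n\n"
                  f"Candidate change {change.commit_id}:\n{change.summary}\n\n"
                  "How likely is this change to be the root cause?")
        return llm_score(prompt)  # assumed helper wrapping the fine-tuned Llama
    return sorted(candidates, key=relevance, reverse=True)
```

The design point worth noting: the expensive LLM call only ever sees a handful of candidates, which is also why the training data (next section) caps the candidate count at 20.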

🎓 Training pipeline:
‣ Continued pre-training on Meta's internal docs and wikis
‣ Supervised fine-tuning on past incident investigations
‣ Training data mimicked real-world constraints: 2 to 20 candidate changes per incident (see the sketch after this list)
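For that last bullet, here is a hedged sketch of what "mimicking real-world constraints" could look like when building the supervised fine-tuning examples: each sample pairs an incident with 2 to 20 candidate changes, exactly one of which is the known culprit. Field names and prompt/label format are assumptions, not Meta's actual data schema.

```python
# Hypothetical construction of one SFT example mirroring deployment conditions
# (2-20 candidate changes per incident). Prompt/label format is an assumption.
import random


def build_sft_example(incident_report: str, culprit_summary: str,
                      distractor_summaries: list[str]) -> dict:
    # Mirror the production setting: between 2 and 20 candidates per incident.
    n_candidates = random.randint(2, 20)
    distractors = random.sample(distractor_summaries,
                                min(n_candidates - 1, len(distractor_summaries)))
    candidates = distractors + [culprit_summary]
    random.shuffle(candidates)
    culprit_index = candidates.index(culprit_summary)

    prompt = ("Incident:\n" + incident_report + "\n\nCandidate changes:\n"
              + "\n".join(f"[{i}] {summary}" for i, summary in enumerate(candidates))
              + "\n\nWhich candidate is the most likely root cause?")
    # The target the model is fine-tuned to produce: the culprit's index.
    return {"prompt": prompt, "completion": f"[{culprit_index}]"}
```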

🔮 What comes next:
‣ Language models could handle more of the incident response workflow (runbooks, mitigation, post-mortems)
‣ Improvements in model reasoning should boost accuracy further

Read it in full 👉 https://www.tryparity.com/blog/how-meta-uses-llms-to-improve-incident-response