---
license: mit
datasets:
- adeeshajayasinghe/devops-guide-demo
metrics:
- accuracy
base_model:
- microsoft/phi-2
new_version: microsoft/phi-2
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- text-generation-inference
---

# DevOps Mastermind Model

This repository hosts the **DevOps Mastermind** model, a model based on `microsoft/phi-2` adapted for specialized DevOps knowledge tasks. It supports downstream tasks such as code generation, documentation assistance, and knowledge inference in DevOps domains.

## Model Details

- **Base Model**: `microsoft/phi-2`
- **Purpose**: Enhanced with additional training and modifications for DevOps and software engineering contexts.
- **Files Included**:
  - `config.json`: Model configuration.
  - `pytorch_model.bin`: The primary model file containing the weights.
  - `tokenizer.json`: Tokenizer for processing text inputs.
  - `added_tokens.json`: Additional tokens specific to the DevOps vocabulary.
  - `generation_config.json`: Generation configuration for text generation tasks.
  - Other auxiliary files required for model usage and compatibility.

## Usage

To load and use this model in your code, run the following:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "kavinduc/devops-mastermind"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

# Example usage
input_text = "Explain how to set up a CI/CD pipeline"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)  # raise the default length cap so the answer is not truncated
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
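
For quick experiments, the same checkpoint can also be wrapped in the `transformers` `pipeline` API. The snippet below is a minimal sketch that reuses the repository id from the example above; the prompt, `max_new_tokens`, and sampling parameters are illustrative assumptions and can be tuned for your use case.

```python
from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint (repo id assumed from the Usage example).
generator = pipeline(
    "text-generation",
    model="kavinduc/devops-mastermind",
    tokenizer="kavinduc/devops-mastermind",
)

# Illustrative DevOps-style prompt.
prompt = "Write a GitHub Actions workflow that runs unit tests on every push."

result = generator(
    prompt,
    max_new_tokens=200,  # cap the length of the generated continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # moderate randomness
    top_p=0.9,           # nucleus sampling
)

print(result[0]["generated_text"])
```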