Project Organization

β”œβ”€β”€ LICENSE
β”œβ”€β”€ Makefile           <- Makefile with commands like `make dirs` or `make clean`
β”œβ”€β”€ README.md          <- The top-level README for developers using this project.
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ processed      <- The final, canonical data sets for modeling.
β”‚   └── raw            <- The original, immutable data dump
β”‚
β”œβ”€β”€ models             <- Trained and serialized models, model predictions, or model summaries
β”‚
β”œβ”€β”€ notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
β”‚                         the creator's initials, and a short `-` delimited description, e.g.
β”‚                         `1.0-jqp-initial-data-exploration`.
β”œβ”€β”€ references         <- Data dictionaries, manuals, and all other explanatory materials.
β”œβ”€β”€ reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
β”‚   β”œβ”€β”€ figures        <- Generated graphics and figures to be used in reporting
β”‚   β”œβ”€β”€ metrics.txt    <- Relevant metrics after evaluating the model.
β”‚   └── training_metrics.txt <- Relevant metrics from training the model.
β”‚
β”œβ”€β”€ requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
β”‚                         generated with `pip freeze > requirements.txt`
β”‚
β”œβ”€β”€ setup.py           <- Makes the project pip installable (`pip install -e .`) so `src` can be imported (see the setup.py sketch below the tree)
β”œβ”€β”€ src                <- Source code for use in this project.
β”‚   β”œβ”€β”€ __init__.py    <- Makes src a Python package
β”‚   β”‚
β”‚   β”œβ”€β”€ data           <- Scripts to download or generate data
β”‚   β”‚   β”œβ”€β”€ great_expectations  <- Folder containing data integrity check files
β”‚   β”‚   β”œβ”€β”€ make_dataset.py
β”‚   β”‚   └── data_validation.py  <- Script to run data integrity checks
β”‚   β”‚
β”‚   β”œβ”€β”€ models         <- Scripts to train models and then use trained models to make
β”‚   β”‚   β”‚                 predictions
β”‚   β”‚   β”œβ”€β”€ predict_model.py
β”‚   β”‚   └── train_model.py
β”‚   β”‚
β”‚   └── visualization  <- Scripts to create exploratory and results oriented visualizations
β”‚       └── visualize.py
β”‚
β”œβ”€β”€ .pre-commit-config.yaml  <- pre-commit hooks file with the hooks selected for this project.
β”œβ”€β”€ dvc.lock           <- Records the state of the DVC pipeline (generated by DVC; not edited by hand).
└── dvc.yaml           <- Constructs the ML pipeline with defined stages, e.g. training a model on the processed data (see the dvc.yaml sketch below the tree).
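
The contents of `dvc.yaml` are project-specific; as a rough illustration, a minimal pipeline wiring together the scripts from the tree above could look like the following. Stage names, script arguments, and output paths are assumptions for the example, not this project's actual configuration.

```yaml
# Hypothetical dvc.yaml sketch: stage names, commands, and paths are assumptions
# based on the directory layout above, not this project's actual pipeline.
stages:
  prepare:
    cmd: python src/data/make_dataset.py data/raw data/processed
    deps:
      - src/data/make_dataset.py
      - data/raw
    outs:
      - data/processed
  train:
    cmd: python src/models/train_model.py data/processed models/model.pkl
    deps:
      - src/models/train_model.py
      - data/processed
    outs:
      - models/model.pkl
    metrics:
      - reports/training_metrics.txt:
          cache: false
```

Running `dvc repro` executes the stages in order and updates `dvc.lock` with the resulting file hashes.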

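The `setup.py` referenced above only needs to declare `src` as a package for `pip install -e .` to work. A minimal sketch (metadata values are placeholders):

```python
# Hypothetical minimal setup.py: metadata values are placeholders; the
# find_packages() call is what makes the `src` package importable after
# running `pip install -e .`
from setuptools import find_packages, setup

setup(
    name="src",
    version="0.1.0",
    description="Source code for this project",
    packages=find_packages(),
)
```
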
Project based on the cookiecutter data science project template. #cookiecutterdatascience


To create a project like this, just go to https://dagshub.com/repo/create and select the Cookiecutter DVC project template.

Made with 🐢 by DAGsHub.
