repo_id (string, 19-138 chars) | file_path (string, 32-200 chars) | content (string, 1-12.9M chars) | __index_level_0__ (int64, 0-0) |
---|---|---|---|
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/mkdocs.yml | # Project information
site_name: YOLO3D
site_url: https://ruhyadi.github.io/yolo3d-lightning
site_author: Didi Ruhyadi
site_description: >-
YOLO3D: 3D Object Detection with YOLO
# Repository
repo_name: ruhyadi/yolo3d-lightning
repo_url: https://github.com/ruhyadi/yolo3d-lightning
edit_uri: ""
# Copyright
copyright: Copyright © 2020 - 2022 Didi Ruhyadi
# Configuration
theme:
name: material
language: en
# Don't include MkDocs' JavaScript
include_search_page: false
search_index_only: true
features:
- content.code.annotate
# - content.tabs.link
# - header.autohide
# - navigation.expand
- navigation.indexes
# - navigation.instant
- navigation.sections
- navigation.tabs
# - navigation.tabs.sticky
- navigation.top
- navigation.tracking
- search.highlight
- search.share
- search.suggest
# - toc.integrate
palette:
- scheme: default
primary: white
accent: indigo
toggle:
icon: material/weather-night
name: Vampire Mode
- scheme: slate
primary: indigo
accent: blue
toggle:
icon: material/weather-sunny
name: Beware of Your Eyes
font:
text: Noto Serif
code: Noto Mono
favicon: assets/logo.png
logo: assets/logo.png
icon:
repo: fontawesome/brands/github
# Plugins
plugins:
# Customization
extra:
social:
- icon: fontawesome/brands/github
link: https://github.com/ruhyadi
- icon: fontawesome/brands/docker
link: https://hub.docker.com/r/ruhyadi
- icon: fontawesome/brands/twitter
link: https://twitter.com/
- icon: fontawesome/brands/linkedin
link: https://linkedin.com/in/didiruhyadi
- icon: fontawesome/brands/instagram
link: https://instagram.com/didiir_
extra_javascript:
- javascripts/mathjax.js
- https://polyfill.io/v3/polyfill.min.js?features=es6
- https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
# Extensions
markdown_extensions:
- admonition
- abbr
- pymdownx.snippets
- attr_list
- def_list
- footnotes
- meta
- md_in_html
- toc:
permalink: true
- pymdownx.arithmatex:
generic: true
- pymdownx.betterem:
smart_enable: all
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_index: !!python/name:materialx.emoji.twemoji
emoji_generator: !!python/name:materialx.emoji.to_svg
- pymdownx.highlight:
anchor_linenums: true
- pymdownx.inlinehilite
- pymdownx.keys
- pymdownx.magiclink:
repo_url_shorthand: true
user: squidfunk
repo: mkdocs-material
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
# Page tree
nav:
- Home:
- Home: index.md | 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/LICENSE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2021-2022 Megvii Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/requirements.txt | # --------- pytorch --------- #
torch>=1.8.0
torchvision>=0.9.1
pytorch-lightning==1.6.5
torchmetrics==0.9.2
# --------- hydra --------- #
hydra-core==1.2.0
hydra-colorlog==1.2.0
hydra-optuna-sweeper==1.2.0
# --------- loggers --------- #
# wandb
# neptune-client
# mlflow
# comet-ml
# --------- others --------- #
pyrootutils # standardizing the project root setup
pre-commit # hooks for applying linters on commit
rich # beautiful text formatting in terminal
pytest # tests
sh # for running bash commands in some tests
| 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/Makefile |
help: ## Show help
@grep -E '^[.a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
clean: ## Clean autogenerated files
rm -rf dist
find . -type f -name "*.DS_Store" -ls -delete
find . | grep -E "(__pycache__|\.pyc|\.pyo)" | xargs rm -rf
find . | grep -E ".pytest_cache" | xargs rm -rf
find . | grep -E ".ipynb_checkpoints" | xargs rm -rf
rm -f .coverage
clean-logs: ## Clean logs
rm -rf logs/**
format: ## Run pre-commit hooks
pre-commit run -a
sync: ## Merge changes from main branch to your current branch
git pull
git pull origin main
test: ## Run not slow tests
pytest -k "not slow"
test-full: ## Run all tests
pytest
train: ## Train the model
python src/train.py
debug: ## Enter debugging mode with pdb
#
# tips:
# - use "import pdb; pdb.set_trace()" to set breakpoint
# - use "h" to print all commands
# - use "n" to execute the next line
# - use "c" to run until the breakpoint is hit
# - use "l" to print src code around current line, "ll" for full function code
# - docs: https://docs.python.org/3/library/pdb.html
#
python -m pdb src/train.py debug=default
| 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/convert.py | """ Convert checkpoint to model (.pt/.pth/.onnx) """
import torch
from torch.utils.data import Dataset, DataLoader
from pytorch_lightning import LightningModule
from src import utils
import dotenv
import hydra
from omegaconf import DictConfig
import os
# load environment variables from `.env` file if it exists
# recursively searches for `.env` in all folders starting from work dir
dotenv.load_dotenv(override=True)
log = utils.get_pylogger(__name__)
@hydra.main(config_path="configs/", config_name="convert.yaml")
def convert(config: DictConfig):
# assert model conversion type
assert config.get('convert_to') in ['pytorch', 'torchscript', 'onnx', 'tensorrt'], \
"Please choose one of [pytorch, torchscript, onnx, tensorrt]"
# Init lightning model
log.info(f"Instantiating model <{config.model._target_}>")
model: LightningModule = hydra.utils.instantiate(config.model)
# regressor: LightningModule = hydra.utils.instantiate(config.model)
# regressor.load_state_dict(torch.load(config.get("regressor_weights"), map_location="cpu"))
# regressor.eval().to(config.get("device"))
# Convert relative ckpt path to absolute path if necessary
log.info(f"Load checkpoint <{config.get('checkpoint_dir')}>")
ckpt_path = config.get("checkpoint_dir")
if ckpt_path and not os.path.isabs(ckpt_path):
ckpt_path = os.path.join(hydra.utils.get_original_cwd(), ckpt_path)
# load model checkpoint
model = model.load_from_checkpoint(ckpt_path)
model.cuda()
# input sample
input_sample = config.get('input_sample')
# Convert
if config.get('convert_to') == 'pytorch':
log.info("Convert to Pytorch (.pt)")
torch.save(model.state_dict(), f'{config.get("name")}.pt')
log.info(f"Saved model {config.get('name')}.pt")
if config.get('convert_to') == 'onnx':
log.info("Convert to ONNX (.onnx)")
model.cuda()
input_sample = torch.rand((1, 3, 224, 224), device=torch.device('cuda'))
model.to_onnx(f'{config.get("name")}.onnx', input_sample, export_params=True)
log.info(f"Saved model {config.get('name')}.onnx")
if __name__ == '__main__':
convert() | 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/pyproject.toml | [tool.pytest.ini_options]
addopts = [
"--color=yes",
"--durations=0",
"--strict-markers",
"--doctest-modules",
]
filterwarnings = [
"ignore::DeprecationWarning",
"ignore::UserWarning",
]
log_cli = "True"
markers = [
"slow: slow tests",
]
minversion = "6.0"
testpaths = "tests/"
[tool.coverage.report]
exclude_lines = [
"pragma: nocover",
"raise NotImplementedError",
"raise NotImplementedError()",
"if __name__ == .__main__.:",
]
| 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/README.md | <div align="center">
# YOLO3D: 3D Object Detection with YOLO
</div>
## Introduction
YOLO3D is inspired by [Mousavian et al.](https://arxiv.org/abs/1612.00496) and their paper **3D Bounding Box Estimation Using Deep Learning and Geometry**. YOLO3D takes a slightly different approach: the 2D ground-truth labels stand in for the output of the first-stage detector, and those 2D boxes are then fed to the regressor model that estimates the 3D parameters.
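As a rough illustration of that second stage, the sketch below follows the crop-and-regress flow used in `inference.py`; the `regressor` callable and its `(orient, conf, dim)` outputs are assumptions for illustration, not the repository's exact API.
```python
# Minimal sketch of the two-stage idea, not the repository's exact API.
# A 2D box (from GT labels or a detector) is cropped, resized to 224x224,
# normalized, and passed to a regressor that returns orientation bins,
# bin confidences, and dimension offsets.
import cv2
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.406, 0.456, 0.485],  # BGR order, as in inference.py
                         std=[0.225, 0.224, 0.229]),
])

def second_stage(img_bgr, box_2d, regressor):
    """Crop one 2D box and regress its 3D attributes (hypothetical regressor)."""
    (x1, y1), (x2, y2) = box_2d
    crop = cv2.resize(img_bgr[y1:y2 + 1, x1:x2 + 1], (224, 224),
                      interpolation=cv2.INTER_CUBIC)
    batch = preprocess(crop).unsqueeze(0)      # shape [1, 3, 224, 224]
    with torch.no_grad():
        orient, conf, dim = regressor(batch)   # assumed output heads
    return orient, conf, dim
```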
## Quickstart
```bash
git clone git@github.com:ApolloAuto/apollo-model-yolo3d.git
```
### create env for YOLO3D
```shell
cd apollo-model-yolo3d
conda create -n apollo_yolo3d python=3.8 numpy
conda activate apollo_yolo3d
pip install -r requirements.txt
```
### datasets
Here we use the KITTI dataset for training. You can download it from the [official website](http://www.cvlibs.net/datasets/kitti/). After that, extract the dataset and link it to `data/KITTI`:
```shell
ln -s /your/KITTI/path data/KITTI
```
```bash
├── data
│ └── KITTI
│ ├── calib
│ ├── image_2
│ └── label_2
```
Modify the [datasplit](data/datasplit.py) script to split the train and val data however you like (a minimal sketch of such a script follows the commands below).
```shell
cd data
python datasplit.py
```
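The repository's `datasplit.py` is not reproduced in this dump, so the following is only a hypothetical sketch of such a split; the 80/20 ratio and the paths are assumptions, though the `ImageSets/val.txt` convention is referenced elsewhere in the repo.
```python
# Hypothetical sketch, not the repository's datasplit.py.
# Writes ImageSets/train.txt and ImageSets/val.txt from the label files.
import os
import random

root = "./KITTI"  # run from data/, matching `cd data` above
ids = sorted(f[:-4] for f in os.listdir(os.path.join(root, "label_2"))
             if f.endswith(".txt"))
random.seed(0)
random.shuffle(ids)
split = int(0.8 * len(ids))  # assumed 80/20 train/val split
os.makedirs(os.path.join(root, "ImageSets"), exist_ok=True)
for name, subset in (("train.txt", sorted(ids[:split])),
                     ("val.txt", sorted(ids[split:]))):
    with open(os.path.join(root, "ImageSets", name), "w") as fh:
        fh.write("\n".join(subset) + "\n")
```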
### train
Modify [train.yaml](configs/train.yaml) to configure your model, then start training:
```shell
python src/train.py experiment=sample
```
> log path: /logs \
> model path: /weights
### convert
Modify the [convert.yaml](configs/convert.yaml) file to convert the .ckpt checkpoint to a .pt model:
```shell
python convert.py
```
### inference
To demonstrate the model's real inference ability, we crop the image according to the ground-truth 2D boxes and use the crops as YOLO3D input; the following command plots the 3D results.
Modify the [inference.yaml](configs/inference.yaml) file to point to your .pt model.
Setting **export_onnx=True** also exports an ONNX model.
```shell
python inference.py \
source_dir=./data/KITTI \
detector.classes=6 \
regressor_weights=./weights/pytorch-kitti.pt \
export_onnx=False \
func=image
```
- source_dir: dataset path, containing the /image_2 and /label_2 folders
- detector.classes: number of KITTI classes
- regressor_weights: path to your trained model
- export_onnx: export an ONNX model for Apollo
> result path: /outputs
### evaluate
Generate prediction labels for the 3D results:
```shell
python inference.py \
source_dir=./data/KITTI \
detector.classes=6 \
regressor_weights=./weights/pytorch-kitti.pt \
export_onnx=False \
func=label
```
> result path: /data/KITTI/result
```bash
├── data
│ └── KITTI
│ ├── calib
│ ├── image_2
│ ├── label_2
│ └── result
```
Modify `label_path`, `result_path`, and `label_split_file` in the [kitti_object_eval](kitti_object_eval) script `run.sh`; with it we can calculate mAP:
```shell
cd kitti_object_eval
sh run.sh
```
## Acknowledgement
- [yolo3d-lightning](https://github.com/ruhyadi/yolo3d-lightning)
- [skhadem/3D-BoundingBox](https://github.com/skhadem/3D-BoundingBox)
- [Mousavian et al.](https://arxiv.org/abs/1612.00496)
```
@misc{mousavian20173d,
title={3D Bounding Box Estimation Using Deep Learning and Geometry},
author={Arsalan Mousavian and Dragomir Anguelov and John Flynn and Jana Kosecka},
year={2017},
eprint={1612.00496},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/setup.py | #!/usr/bin/env python
from setuptools import find_packages, setup
setup(
name="src",
version="0.0.1",
description="Describe Your Cool Project",
author="",
author_email="",
url="https://github.com/user/project", # REPLACE WITH YOUR OWN GITHUB PROJECT LINK
install_requires=["pytorch-lightning", "hydra-core"],
packages=find_packages(),
)
| 0 |
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/inference.py | """ Inference Code """
from typing import List
from PIL import Image
import cv2
from glob import glob
import numpy as np
import torch
from torchvision.transforms import transforms
from pytorch_lightning import LightningModule
from src.utils import Calib
from src.utils.averages import ClassAverages
from src.utils.Math import compute_orientaion, recover_angle, translation_constraints
from src.utils.Plotting import Plot3DBoxBev
import dotenv
import hydra
from omegaconf import DictConfig
import os
import pyrootutils
import src.utils
from src.utils.utils import KITTIObject
import torch.onnx
from torch.onnx import OperatorExportTypes
log = src.utils.get_pylogger(__name__)
try:
import onnxruntime
import openvino.runtime as ov
except ImportError:
log.warning("ONNX and OpenVINO not installed")
dotenv.load_dotenv(override=True)
root = pyrootutils.setup_root(__file__, dotenv=True, pythonpath=True)
class Bbox:
def __init__(self, box_2d, label, h, w, l, tx, ty, tz, ry, alpha):
self.box_2d = box_2d
self.detected_class = label
self.w = w
self.h = h
self.l = l
self.tx = tx
self.ty = ty
self.tz = tz
self.ry = ry
self.alpha = alpha
def mkdir(path):
folder = os.path.exists(path)
if not folder:
os.makedirs(path)
print("--- creating new folder... ---")
print("--- finished ---")
else:
# print("--- pass to create new folder ---")
pass
def format_img(img, box_2d):
# transforms
normalize = transforms.Normalize(
mean=[0.406, 0.456, 0.485],
std=[0.225, 0.224, 0.229])
process = transforms.Compose([
transforms.ToTensor(),
normalize
])
# crop image: clamp the 2D box corners to the image bounds, then crop
pt1, pt2 = box_2d[0], box_2d[1]
point_list1 = [pt1[0], pt1[1]]
point_list2 = [pt2[0], pt2[1]]
if point_list1[0] < 0:
point_list1[0] = 0
if point_list1[1] < 0:
point_list1[1] = 0
if point_list2[0] < 0:
point_list2[0] = 0
if point_list2[1] < 0:
point_list2[1] = 0
if point_list1[0] >= img.shape[1]:
point_list1[0] = img.shape[1] - 1
if point_list2[0] >= img.shape[1]:
point_list2[0] = img.shape[1] - 1
if point_list1[1] >= img.shape[0]:
point_list1[1] = img.shape[0] - 1
if point_list2[1] >= img.shape[0]:
point_list2[1] = img.shape[0] - 1
crop = img[point_list1[1]:point_list2[1]+1, point_list1[0]:point_list2[0]+1]
try:
cv2.imwrite('./tmp/img.jpg', img)
crop = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_CUBIC)
cv2.imwrite('./tmp/demo.jpg', crop)
except cv2.error:
print("pt1 is ", pt1, " pt2 is ", pt2)
print("image shape is ", img.shape)
print("box_2d is ", box_2d)
# apply transform for batch
batch = process(crop)
return batch
def inference_label(config: DictConfig):
"""Inference function"""
# ONNX provider
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] \
if config.get("device") == "cuda" else ['CPUExecutionProvider']
# global calibration P2 matrix
P2 = Calib.get_P(config.get("calib_file"))
# dimension averages
class_averages = ClassAverages()
# initialize regressor model
if config.get("inference_type") == "pytorch":
# pytorch regressor model
log.info(f"Instantiating regressor <{config.model._target_}>")
regressor: LightningModule = hydra.utils.instantiate(config.model)
regressor.load_state_dict(torch.load(config.get("regressor_weights"), map_location="cpu"))
regressor.eval().to(config.get("device"))
elif config.get("inference_type") == "onnx":
# onnx regressor model
log.info(f"Instantiating ONNX regressor <{config.get('regressor_weights').split('/')[-1]}>")
regressor = onnxruntime.InferenceSession(config.get("regressor_weights"), providers=providers)
input_name = regressor.get_inputs()[0].name
elif config.get("inference_type") == "openvino":
# openvino regressor model
log.info(f"Instantiating OpenVINO regressor <{config.get('regressor_weights').split('/')[-1]}>")
core = ov.Core()
model = core.read_model(config.get("regressor_weights"))
regressor = core.compile_model(model, 'CPU')
infer_req = regressor.create_infer_request()
# initialize preprocessing transforms
log.info(f"Instantiating Preprocessing Transforms")
preprocess: List[torch.nn.Module] = []
if "augmentation" in config:
for _, conf in config.augmentation.items():
if "_target_" in conf:
preprocess.append(hydra.utils.instantiate(conf))
preprocess = transforms.Compose(preprocess)
# Create output directory
os.makedirs(config.get("output_dir"), exist_ok=True)
# loop thru images
imgs_path = sorted(glob(os.path.join(config.get("source_dir") + "/image_2", "*")))
image_id = 0
for img_path in imgs_path:
image_id += 1
print("\r", end="|")
print("now is saving : {} ".format(image_id) + "/ {}".format(len(imgs_path)) + " label")
# read gt image ./eval_kitti/image_2_val/
img_id = img_path[-10:-4]
# dt result
result_label_root_path = config.get("source_dir") + '/result/'
mkdir(result_label_root_path)
f = open(result_label_root_path + img_id + '.txt', 'w')
# read image
img = cv2.imread(img_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
gt_label_root_path = config.get("source_dir") + '/label_2/'
gt_f = gt_label_root_path + img_id + '.txt'
dets = []
try:
with open(gt_f, 'r') as file:
content = file.readlines()
for i in range(len(content)):
gt = content[i].split()
top_left, bottom_right = (int(float(gt[4])), int(float(gt[5]))), (int(float(gt[6])), int(float(gt[7])))
bbox_2d = [top_left, bottom_right]
label = gt[0]
dets.append(Bbox(bbox_2d, label, float(gt[8]), float(gt[9]), float(gt[10]), float(gt[11]), float(gt[12]), float(gt[13]), float(gt[14]), float(gt[3])))
except:
continue
DIMENSION = []
# loop thru detections
for det in dets:
# initialize object container
obj = KITTIObject()
obj.name = det.detected_class
if(obj.name == 'DontCare'):
continue
if(obj.name == 'Misc'):
continue
if(obj.name == 'Person_sitting'):
continue
obj.truncation = float(0.00)
obj.occlusion = int(-1)
obj.xmin, obj.ymin, obj.xmax, obj.ymax = det.box_2d[0][0], det.box_2d[0][1], det.box_2d[1][0], det.box_2d[1][1]
crop = format_img(img, det.box_2d)
# # preprocess img with torch.transforms
crop = crop.reshape((1, *crop.shape)).to(config.get("device"))
# regress 2D bbox with Regressor
if config.get("inference_type") == "pytorch":
[orient, conf, dim] = regressor(crop)
orient = orient.cpu().detach().numpy()[0, :, :]
conf = conf.cpu().detach().numpy()[0, :]
dim = dim.cpu().detach().numpy()[0, :]
# dimension averages
try:
dim += class_averages.get_item(obj.name)
DIMENSION.append(dim)
except:
dim = DIMENSION[-1]
obj.alpha = recover_angle(orient, conf, 2)
obj.h, obj.w, obj.l = dim[0], dim[1], dim[2]
obj.rot_global, rot_local = compute_orientaion(P2, obj)
obj.tx, obj.ty, obj.tz = translation_constraints(P2, obj, rot_local)
# output prediction label
obj.score = 1.0
output_line = obj.member_to_list()
output_line = " ".join([str(i) for i in output_line])
f.write(output_line + '\n')
f.close()
def inference_image(config: DictConfig):
"""Inference function"""
# ONNX provider
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] \
if config.get("device") == "cuda" else ['CPUExecutionProvider']
# global calibration P2 matrix
P2 = Calib.get_P(config.get("calib_file"))
# dimension averages
class_averages = ClassAverages()
export_onnx = config.get("export_onnx")
# initialize regressor model
if config.get("inference_type") == "pytorch":
# pytorch regressor model
log.info(f"Instantiating regressor <{config.model._target_}>")
regressor: LightningModule = hydra.utils.instantiate(config.model)
regressor.load_state_dict(torch.load(config.get("regressor_weights"), map_location="cpu"))
regressor.eval().to(config.get("device"))
elif config.get("inference_type") == "onnx":
# onnx regressor model
log.info(f"Instantiating ONNX regressor <{config.get('regressor_weights').split('/')[-1]}>")
regressor = onnxruntime.InferenceSession(config.get("regressor_weights"), providers=providers)
input_name = regressor.get_inputs()[0].name
elif config.get("inference_type") == "openvino":
# openvino regressor model
log.info(f"Instantiating OpenVINO regressor <{config.get('regressor_weights').split('/')[-1]}>")
core = ov.Core()
model = core.read_model(config.get("regressor_weights"))
regressor = core.compile_model(model, 'CPU')
infer_req = regressor.create_infer_request()
# initialize preprocessing transforms
log.info(f"Instantiating Preprocessing Transforms")
preprocess: List[torch.nn.Module] = []
if "augmentation" in config:
for _, conf in config.augmentation.items():
if "_target_" in conf:
preprocess.append(hydra.utils.instantiate(conf))
preprocess = transforms.Compose(preprocess)
# Create output directory
os.makedirs(config.get("output_dir"), exist_ok=True)
imgs_path = sorted(glob(os.path.join(config.get("source_dir") + "/image_2", "*")))
image_id = 0
for img_path in imgs_path:
image_id += 1
print("\r", end="|")
print("now is saving : {} ".format(image_id) + "/ {}".format(len(imgs_path)) + " image")
# Initialize object and plotting modules
plot3dbev = Plot3DBoxBev(P2)
img_name = img_path.split("/")[-1].split(".")[0]
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# check if image shape 1242 x 375
if img.shape != (375, 1242, 3):
# crop center of image to 1242 x 375
src_h, src_w, _ = img.shape
dst_h, dst_w = 375, 1242
dif_h, dif_w = src_h - dst_h, src_w - dst_w
img = img[dif_h // 2 : src_h - dif_h // 2, dif_w // 2 : src_w - dif_w // 2, :]
img_id = img_path[-10:-4]
# read image
img = cv2.imread(img_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
gt_label_root_path = config.get("source_dir") + '/label_2/'
gt_f = gt_label_root_path + img_id + '.txt'
# use gt 2d result as output of first stage
dets = []
try:
with open(gt_f, 'r') as file:
content = file.readlines()
for i in range(len(content)):
gt = content[i].split()
top_left, bottom_right = (int(float(gt[4])), int(float(gt[5]))), (int(float(gt[6])), int(float(gt[7])))
bbox_2d = [top_left, bottom_right]
label = gt[0]
dets.append(Bbox(bbox_2d, label, float(gt[8]), float(gt[9]), float(gt[10]), float(gt[11]), float(gt[12]), float(gt[13]), float(gt[14]), float(gt[3])))
except:
continue
DIMENSION = []
for det in dets:
# initialize object container
obj = KITTIObject()
obj.name = det.detected_class
if(obj.name == 'DontCare'):
continue
if(obj.name == 'Misc'):
continue
if(obj.name == 'Person_sitting'):
continue
obj.truncation = float(0.00)
obj.occlusion = int(-1)
obj.xmin, obj.ymin, obj.xmax, obj.ymax = det.box_2d[0][0], det.box_2d[0][1], det.box_2d[1][0], det.box_2d[1][1]
crop = format_img(img, det.box_2d)
crop = crop.reshape((1, *crop.shape)).to(config.get("device"))
# regress 2D bbox with Regressor
if config.get("inference_type") == "pytorch":
[orient, conf, dim] = regressor(crop)
orient = orient.cpu().detach().numpy()[0, :, :]
conf = conf.cpu().detach().numpy()[0, :]
dim = dim.cpu().detach().numpy()[0, :]
if(export_onnx):
traced_script_module = torch.jit.trace(regressor, (crop))
traced_script_module.save("weights/yolo_libtorch_model_3d.pth")
onnx_model_save_path = "weights/yolo_onnx_model_3d.onnx"
# dynamic batch
# dynamic_axes = {"image": {0: "batch"},
# "orient": {0: "batch", 1: str(2), 2: str(2)}, # for multi batch
# "conf": {0: "batch"},
# "dim": {0: "batch"}}
if True:
torch.onnx.export(regressor, crop, onnx_model_save_path, opset_version=11,
verbose=False, export_params=True, operator_export_type=OperatorExportTypes.ONNX,
input_names=['image'], output_names=['orient','conf','dim']
# ,dynamic_axes=dynamic_axes
)
print("Please check onnx model in ", onnx_model_save_path)
import onnx
onnx_model = onnx.load(onnx_model_save_path)
# for dla&trt speedup
onnx_fp16_model_save_path = "weights/yolo_onnx_model_3d_fp16.onnx"
from onnxmltools.utils import float16_converter
trans_model = float16_converter.convert_float_to_float16(onnx_model,keep_io_types=True)
onnx.save_model(trans_model, onnx_fp16_model_save_path)
export_onnx = False # once
try:
dim += class_averages.get_item(obj.name)
DIMENSION.append(dim)
except:
dim = DIMENSION[-1]
obj.alpha = recover_angle(orient, conf, 2)
obj.h, obj.w, obj.l = dim[0], dim[1], dim[2]
obj.rot_global, rot_local = compute_orientaion(P2, obj)
obj.tx, obj.ty, obj.tz = translation_constraints(P2, obj, rot_local)
# output prediction label
output_line = obj.member_to_list()
output_line.append(1.0)
output_line = " ".join([str(i) for i in output_line]) + "\n"
# save results
if config.get("save_txt"):
with open(f"{config.get('output_dir')}/{img_name}.txt", "a") as f:
f.write(output_line)
if config.get("save_result"):
# dt
plot3dbev.plot(
img=img,
class_object=obj.name.lower(),
bbox=[obj.xmin, obj.ymin, obj.xmax, obj.ymax],
dim=[obj.h, obj.w, obj.l],
loc=[obj.tx, obj.ty, obj.tz],
rot_y=obj.rot_global,
gt=False
)
# gt
plot3dbev.plot(
img=img,
class_object=obj.name.lower(),
bbox=[obj.xmin, obj.ymin, obj.xmax, obj.ymax],
dim=[det.h, det.w, det.l],
loc=[det.tx, det.ty, det.tz],
rot_y=det.ry,
gt=True
)
# save images
if config.get("save_result"):
plot3dbev.save_plot(config.get("output_dir"), img_name)
def copy_eval_label():
label_path = './data/KITTI/ImageSets/val.txt'
label_root_path = './data/KITTI/label_2/'
label_save_path = './data/KITTI/label_2_val/'
# get all labels
label_files = []
sum_number = 0
from shutil import copyfile
with open(label_path, 'r') as file:
img_id = file.readlines()
for id in img_id:
label_path = label_root_path + id[:6] + '.txt'
copyfile(label_path, label_save_path + id[:6] + '.txt')
def copy_eval_image():
label_path = './data/KITTI/ImageSets/val.txt'
img_root_path = './data/KITTI/image_2/'
img_save_path = './data/KITTI/image_2_val'
# get all labels
label_files = []
sum_number = 0
with open(label_path, 'r') as file:
img_id = file.readlines()
for id in img_id:
img_path = img_root_path + id[:6] + '.png'
img = cv2.imread(img_path)
cv2.imwrite(f'{img_save_path}/{id[:6]}.png', img)
@hydra.main(version_base="1.2", config_path=root / "configs", config_name="inference.yaml")
def main(config: DictConfig):
if(config.get("func") == "image"):
# inference_image:
# inference for kitti bev and 3d image, without model
inference_image(config)
else:
# inference_label:
# for kitti gt label, predict without model
inference_label(config)
if __name__ == "__main__":
# # tools for copy target files
# copy_eval_label()
# copy_eval_image()
main() | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/LICENSE | MIT License
Copyright (c) 2018
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/run.sh | python evaluate.py evaluate \
--label_path=/home/your/path/data/KITTI/label_2 \
--result_path=/home/your/path/data/KITTI/result \
--label_split_file=/home/your/path/data/KITTI/ImageSets/val.txt \
--current_class=0,1,2 | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/README.md | # Note
This code is from [traveller59/kitti-object-eval-python](https://github.com/traveller59/kitti-object-eval-python)
# kitti-object-eval-python
Fast KITTI object detection evaluation in Python (finishes in under 10 seconds), supporting 2D/BEV/3D/AOS metrics and COCO-style AP. If you use the command-line interface, numba needs some time to compile its JIT functions.
_WARNING_: The "coco" metric isn't official; only "AP (Average Precision)" is.
## Dependencies
Only Python 3.6+ is supported; `numpy`, `skimage`, `numba`, `fire`, and `scipy` are required. If you have Anaconda, just install `cudatoolkit` in Anaconda. Otherwise, please refer to this [page](https://github.com/numba/numba#custom-python-environments) to set up LLVM and CUDA for numba.
* Install by conda:
```
conda install -c numba cudatoolkit=x.x (8.0, 9.0, 10.0, depend on your environment)
```
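As an alternative (an assumption, not part of the upstream README), the Python dependencies themselves can be installed with pip:
```
pip install numpy scikit-image numba fire scipy
```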
## Usage
* commandline interface:
```
python evaluate.py evaluate --label_path=/path/to/your_gt_label_folder --result_path=/path/to/your_result_folder --label_split_file=/path/to/val.txt --current_class=0 --coco=False
```
* python interface:
```Python
import kitti_common as kitti
from eval import get_official_eval_result, get_coco_eval_result
def _read_imageset_file(path):
with open(path, 'r') as f:
lines = f.readlines()
return [int(line) for line in lines]
det_path = "/path/to/your_result_folder"
dt_annos = kitti.get_label_annos(det_path)
gt_path = "/path/to/your_gt_label_folder"
gt_split_file = "/path/to/val.txt" # from https://xiaozhichen.github.io/files/mv3d/imagesets.tar.gz
val_image_ids = _read_imageset_file(gt_split_file)
gt_annos = kitti.get_label_annos(gt_path, val_image_ids)
print(get_official_eval_result(gt_annos, dt_annos, 0)) # 6s in my computer
print(get_coco_eval_result(gt_annos, dt_annos, 0)) # 18s in my computer
```
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/kitti_common.py | import concurrent.futures as futures
import os
import pathlib
import re
from collections import OrderedDict
import numpy as np
from skimage import io
def get_image_index_str(img_idx):
return "{:06d}".format(img_idx)
def get_kitti_info_path(idx,
prefix,
info_type='image_2',
file_tail='.png',
training=True,
relative_path=True):
img_idx_str = get_image_index_str(idx)
img_idx_str += file_tail
prefix = pathlib.Path(prefix)
if training:
file_path = pathlib.Path('training') / info_type / img_idx_str
else:
file_path = pathlib.Path('testing') / info_type / img_idx_str
if not (prefix / file_path).exists():
raise ValueError("file not exist: {}".format(file_path))
if relative_path:
return str(file_path)
else:
return str(prefix / file_path)
def get_image_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'image_2', '.png', training,
relative_path)
def get_label_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'label_2', '.txt', training,
relative_path)
def get_velodyne_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'velodyne', '.bin', training,
relative_path)
def get_calib_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'calib', '.txt', training,
relative_path)
def _extend_matrix(mat):
mat = np.concatenate([mat, np.array([[0., 0., 0., 1.]])], axis=0)
return mat
def get_kitti_image_info(path,
training=True,
label_info=True,
velodyne=False,
calib=False,
image_ids=7481,
extend_matrix=True,
num_worker=8,
relative_path=True,
with_imageshape=True):
# image_infos = []
root_path = pathlib.Path(path)
if not isinstance(image_ids, list):
image_ids = list(range(image_ids))
def map_func(idx):
image_info = {'image_idx': idx}
annotations = None
if velodyne:
image_info['velodyne_path'] = get_velodyne_path(
idx, path, training, relative_path)
image_info['img_path'] = get_image_path(idx, path, training,
relative_path)
if with_imageshape:
img_path = image_info['img_path']
if relative_path:
img_path = str(root_path / img_path)
image_info['img_shape'] = np.array(
io.imread(img_path).shape[:2], dtype=np.int32)
if label_info:
label_path = get_label_path(idx, path, training, relative_path)
if relative_path:
label_path = str(root_path / label_path)
annotations = get_label_anno(label_path)
if calib:
calib_path = get_calib_path(
idx, path, training, relative_path=False)
with open(calib_path, 'r') as f:
lines = f.readlines()
P0 = np.array(
[float(info) for info in lines[0].split(' ')[1:13]]).reshape(
[3, 4])
P1 = np.array(
[float(info) for info in lines[1].split(' ')[1:13]]).reshape(
[3, 4])
P2 = np.array(
[float(info) for info in lines[2].split(' ')[1:13]]).reshape(
[3, 4])
P3 = np.array(
[float(info) for info in lines[3].split(' ')[1:13]]).reshape(
[3, 4])
if extend_matrix:
P0 = _extend_matrix(P0)
P1 = _extend_matrix(P1)
P2 = _extend_matrix(P2)
P3 = _extend_matrix(P3)
image_info['calib/P0'] = P0
image_info['calib/P1'] = P1
image_info['calib/P2'] = P2
image_info['calib/P3'] = P3
R0_rect = np.array([
float(info) for info in lines[4].split(' ')[1:10]
]).reshape([3, 3])
if extend_matrix:
rect_4x4 = np.zeros([4, 4], dtype=R0_rect.dtype)
rect_4x4[3, 3] = 1.
rect_4x4[:3, :3] = R0_rect
else:
rect_4x4 = R0_rect
image_info['calib/R0_rect'] = rect_4x4
Tr_velo_to_cam = np.array([
float(info) for info in lines[5].split(' ')[1:13]
]).reshape([3, 4])
Tr_imu_to_velo = np.array([
float(info) for info in lines[6].split(' ')[1:13]
]).reshape([3, 4])
if extend_matrix:
Tr_velo_to_cam = _extend_matrix(Tr_velo_to_cam)
Tr_imu_to_velo = _extend_matrix(Tr_imu_to_velo)
image_info['calib/Tr_velo_to_cam'] = Tr_velo_to_cam
image_info['calib/Tr_imu_to_velo'] = Tr_imu_to_velo
if annotations is not None:
image_info['annos'] = annotations
add_difficulty_to_annos(image_info)
return image_info
with futures.ThreadPoolExecutor(num_worker) as executor:
image_infos = executor.map(map_func, image_ids)
return list(image_infos)
def filter_kitti_anno(image_anno,
used_classes,
used_difficulty=None,
dontcare_iou=None):
if not isinstance(used_classes, (list, tuple)):
used_classes = [used_classes]
img_filtered_annotations = {}
relevant_annotation_indices = [
i for i, x in enumerate(image_anno['name']) if x in used_classes
]
for key in image_anno.keys():
img_filtered_annotations[key] = (
image_anno[key][relevant_annotation_indices])
if used_difficulty is not None:
relevant_annotation_indices = [
i for i, x in enumerate(img_filtered_annotations['difficulty'])
if x in used_difficulty
]
for key in image_anno.keys():
img_filtered_annotations[key] = (
img_filtered_annotations[key][relevant_annotation_indices])
if 'DontCare' in used_classes and dontcare_iou is not None:
dont_care_indices = [
i for i, x in enumerate(img_filtered_annotations['name'])
if x == 'DontCare'
]
# bounding box format [y_min, x_min, y_max, x_max]
all_boxes = img_filtered_annotations['bbox']
ious = iou(all_boxes, all_boxes[dont_care_indices])
# Remove all bounding boxes that overlap with a dontcare region.
if ious.size > 0:
boxes_to_remove = np.amax(ious, axis=1) > dontcare_iou
for key in image_anno.keys():
img_filtered_annotations[key] = (img_filtered_annotations[key][
np.logical_not(boxes_to_remove)])
return img_filtered_annotations
def filter_annos_low_score(image_annos, thresh):
new_image_annos = []
for anno in image_annos:
img_filtered_annotations = {}
relevant_annotation_indices = [
i for i, s in enumerate(anno['score']) if s >= thresh
]
for key in anno.keys():
img_filtered_annotations[key] = (
anno[key][relevant_annotation_indices])
new_image_annos.append(img_filtered_annotations)
return new_image_annos
def kitti_result_line(result_dict, precision=4):
prec_float = "{" + ":.{}f".format(precision) + "}"
res_line = []
all_field_default = OrderedDict([
('name', None),
('truncated', -1),
('occluded', -1),
('alpha', -10),
('bbox', None),
('dimensions', [-1, -1, -1]),
('location', [-1000, -1000, -1000]),
('rotation_y', -10),
('score', None),
])
res_dict = [(key, None) for key, val in all_field_default.items()]
res_dict = OrderedDict(res_dict)
for key, val in result_dict.items():
if all_field_default[key] is None and val is None:
raise ValueError("you must specify a value for {}".format(key))
res_dict[key] = val
for key, val in res_dict.items():
if key == 'name':
res_line.append(val)
elif key in ['truncated', 'alpha', 'rotation_y', 'score']:
if val is None:
res_line.append(str(all_field_default[key]))
else:
res_line.append(prec_float.format(val))
elif key == 'occluded':
if val is None:
res_line.append(str(all_field_default[key]))
else:
res_line.append('{}'.format(val))
elif key in ['bbox', 'dimensions', 'location']:
if val is None:
res_line += [str(v) for v in all_field_default[key]]
else:
res_line += [prec_float.format(v) for v in val]
else:
raise ValueError("unknown key. supported key:{}".format(
res_dict.keys()))
return ' '.join(res_line)
def add_difficulty_to_annos(info):
min_height = [40, 25,
25] # minimum height for evaluated groundtruth/detections
max_occlusion = [
0, 1, 2
] # maximum occlusion level of the groundtruth used for evaluation
max_trunc = [
0.15, 0.3, 0.5
] # maximum truncation level of the groundtruth used for evaluation
annos = info['annos']
dims = annos['dimensions'] # lhw format
bbox = annos['bbox']
height = bbox[:, 3] - bbox[:, 1]
occlusion = annos['occluded']
truncation = annos['truncated']
diff = []
easy_mask = np.ones((len(dims), ), dtype=bool)
moderate_mask = np.ones((len(dims), ), dtype=bool)
hard_mask = np.ones((len(dims), ), dtype=bool)
i = 0
for h, o, t in zip(height, occlusion, truncation):
if o > max_occlusion[0] or h <= min_height[0] or t > max_trunc[0]:
easy_mask[i] = False
if o > max_occlusion[1] or h <= min_height[1] or t > max_trunc[1]:
moderate_mask[i] = False
if o > max_occlusion[2] or h <= min_height[2] or t > max_trunc[2]:
hard_mask[i] = False
i += 1
is_easy = easy_mask
is_moderate = np.logical_xor(easy_mask, moderate_mask)
is_hard = np.logical_xor(hard_mask, moderate_mask)
for i in range(len(dims)):
if is_easy[i]:
diff.append(0)
elif is_moderate[i]:
diff.append(1)
elif is_hard[i]:
diff.append(2)
else:
diff.append(-1)
annos["difficulty"] = np.array(diff, np.int32)
return diff
def get_label_anno(label_path):
annotations = {}
annotations.update({
'name': [],
'truncated': [],
'occluded': [],
'alpha': [],
'bbox': [],
'dimensions': [],
'location': [],
'rotation_y': []
})
with open(label_path, 'r') as f:
lines = f.readlines()
# if len(lines) == 0 or len(lines[0]) < 15:
# content = []
# else:
content = [line.strip().split(' ') for line in lines]
annotations['name'] = np.array([x[0] for x in content])
annotations['truncated'] = np.array([float(x[1]) for x in content])
annotations['occluded'] = np.array([int(x[2]) for x in content])
annotations['alpha'] = np.array([float(x[3]) for x in content])
annotations['bbox'] = np.array(
[[float(info) for info in x[4:8]] for x in content]).reshape(-1, 4)
# dimensions will convert hwl format to standard lhw(camera) format.
annotations['dimensions'] = np.array(
[[float(info) for info in x[8:11]] for x in content]).reshape(
-1, 3)[:, [2, 0, 1]]
annotations['location'] = np.array(
[[float(info) for info in x[11:14]] for x in content]).reshape(-1, 3)
annotations['rotation_y'] = np.array(
[float(x[14]) for x in content]).reshape(-1)
if len(content) != 0 and len(content[0]) == 16: # have score
annotations['score'] = np.array([float(x[15]) for x in content])
else:
annotations['score'] = np.zeros([len(annotations['bbox'])])
return annotations
def get_label_annos(label_folder, image_ids=None):
if image_ids is None:
filepaths = pathlib.Path(label_folder).glob('*.txt')
prog = re.compile(r'^\d{6}.txt$')
filepaths = filter(lambda f: prog.match(f.name), filepaths)
image_ids = [int(p.stem) for p in filepaths]
image_ids = sorted(image_ids)
if not isinstance(image_ids, list):
image_ids = list(range(image_ids))
annos = []
label_folder = pathlib.Path(label_folder)
for idx in image_ids:
image_idx = get_image_index_str(idx)
label_filename = label_folder / (image_idx + '.txt')
annos.append(get_label_anno(label_filename))
return annos
def area(boxes, add1=False):
"""Computes area of boxes.
Args:
boxes: Numpy array with shape [N, 4] holding N boxes
Returns:
a numpy array with shape [N*1] representing box areas
"""
if add1:
return (boxes[:, 2] - boxes[:, 0] + 1.0) * (
boxes[:, 3] - boxes[:, 1] + 1.0)
else:
return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
def intersection(boxes1, boxes2, add1=False):
"""Compute pairwise intersection areas between boxes.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes
boxes2: a numpy array with shape [M, 4] holding M boxes
Returns:
a numpy array with shape [N*M] representing pairwise intersection area
"""
[y_min1, x_min1, y_max1, x_max1] = np.split(boxes1, 4, axis=1)
[y_min2, x_min2, y_max2, x_max2] = np.split(boxes2, 4, axis=1)
all_pairs_min_ymax = np.minimum(y_max1, np.transpose(y_max2))
all_pairs_max_ymin = np.maximum(y_min1, np.transpose(y_min2))
if add1:
all_pairs_min_ymax += 1.0
intersect_heights = np.maximum(
np.zeros(all_pairs_max_ymin.shape),
all_pairs_min_ymax - all_pairs_max_ymin)
all_pairs_min_xmax = np.minimum(x_max1, np.transpose(x_max2))
all_pairs_max_xmin = np.maximum(x_min1, np.transpose(x_min2))
if add1:
all_pairs_min_xmax += 1.0
intersect_widths = np.maximum(
np.zeros(all_pairs_max_xmin.shape),
all_pairs_min_xmax - all_pairs_max_xmin)
return intersect_heights * intersect_widths
def iou(boxes1, boxes2, add1=False):
"""Computes pairwise intersection-over-union between box collections.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
boxes2: a numpy array with shape [M, 4] holding N boxes.
Returns:
a numpy array with shape [N, M] representing pairwise iou scores.
"""
intersect = intersection(boxes1, boxes2, add1)
area1 = area(boxes1, add1)
area2 = area(boxes2, add1)
union = np.expand_dims(
area1, axis=1) + np.expand_dims(
area2, axis=0) - intersect
return intersect / union | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/evaluate.py | import time
import fire
import kitti_common as kitti
from eval import get_official_eval_result, get_coco_eval_result
def _read_imageset_file(path):
with open(path, 'r') as f:
lines = f.readlines()
return [int(line) for line in lines]
def evaluate(label_path, # gt
result_path, # dt
label_split_file,
current_class=0, # 0: bbox, 1: bev, 2: 3d
coco=False,
score_thresh=-1):
dt_annos = kitti.get_label_annos(result_path)
# print("dt_annos[0] is ", dt_annos[0], " shape is ", len(dt_annos))
# if score_thresh > 0:
# dt_annos = kitti.filter_annos_low_score(dt_annos, score_thresh)
# val_image_ids = _read_imageset_file(label_split_file)
gt_annos = kitti.get_label_annos(label_path)
# print("gt_annos[0] is ", gt_annos[0], " shape is ", len(gt_annos))
if coco:
print(get_coco_eval_result(gt_annos, dt_annos, current_class))
else:
print("not coco")
print(get_official_eval_result(gt_annos, dt_annos, current_class))
if __name__ == '__main__':
fire.Fire()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/eval.py | import io as sysio
import time
import numba
import numpy as np
from scipy.interpolate import interp1d
from rotate_iou import rotate_iou_gpu_eval
def get_mAP(prec):
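# 11-point interpolated AP: average precision at every 4th of the 41 recall
# sample points (11 samples total), reported in percent.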
sums = 0
for i in range(0, len(prec), 4):
sums += prec[i]
return sums / 11 * 100
@numba.jit
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
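# Walk detection scores in descending order and keep one score threshold per
# recall sample point, so recall is sampled roughly uniformly at num_sample_pts points.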
scores.sort()
scores = scores[::-1]
current_recall = 0
thresholds = []
for i, score in enumerate(scores):
l_recall = (i + 1) / num_gt
if i < (len(scores) - 1):
r_recall = (i + 2) / num_gt
else:
r_recall = l_recall
if (((r_recall - current_recall) < (current_recall - l_recall))
and (i < (len(scores) - 1))):
continue
# recall = l_recall
thresholds.append(score)
current_recall += 1 / (num_sample_pts - 1.0)
# print(len(thresholds), len(scores), num_gt)
return thresholds
def clean_data(gt_anno, dt_anno, current_class, difficulty):
CLASS_NAMES = [
'car', 'pedestrian', 'cyclist', 'van', 'person_sitting', 'car',
'tractor', 'trailer'
]
MIN_HEIGHT = [40, 25, 25]
MAX_OCCLUSION = [0, 1, 2]
MAX_TRUNCATION = [0.15, 0.3, 0.5]
dc_bboxes, ignored_gt, ignored_dt = [], [], []
current_cls_name = CLASS_NAMES[current_class].lower()
num_gt = len(gt_anno["name"])
num_dt = len(dt_anno["name"])
num_valid_gt = 0
for i in range(num_gt):
bbox = gt_anno["bbox"][i]
gt_name = gt_anno["name"][i].lower()
height = bbox[3] - bbox[1]
valid_class = -1
if (gt_name == current_cls_name):
valid_class = 1
elif (current_cls_name == "Pedestrian".lower()
and "Person_sitting".lower() == gt_name):
valid_class = 0
elif (current_cls_name == "Car".lower() and "Van".lower() == gt_name):
valid_class = 0
else:
valid_class = -1
ignore = False
if ((gt_anno["occluded"][i] > MAX_OCCLUSION[difficulty])
or (gt_anno["truncated"][i] > MAX_TRUNCATION[difficulty])
or (height <= MIN_HEIGHT[difficulty])):
# if gt_anno["difficulty"][i] > difficulty or gt_anno["difficulty"][i] == -1:
ignore = True
if valid_class == 1 and not ignore:
ignored_gt.append(0)
num_valid_gt += 1
elif (valid_class == 0 or (ignore and (valid_class == 1))):
ignored_gt.append(1)
else:
ignored_gt.append(-1)
# for i in range(num_gt):
if gt_anno["name"][i] == "DontCare":
dc_bboxes.append(gt_anno["bbox"][i])
for i in range(num_dt):
if (dt_anno["name"][i].lower() == current_cls_name):
valid_class = 1
else:
valid_class = -1
height = abs(dt_anno["bbox"][i, 3] - dt_anno["bbox"][i, 1])
if height < MIN_HEIGHT[difficulty]:
ignored_dt.append(1)
elif valid_class == 1:
ignored_dt.append(0)
else:
ignored_dt.append(-1)
return num_valid_gt, ignored_gt, ignored_dt, dc_bboxes
@numba.jit(nopython=True)
def image_box_overlap(boxes, query_boxes, criterion=-1):
N = boxes.shape[0]
K = query_boxes.shape[0]
overlaps = np.zeros((N, K), dtype=boxes.dtype)
for k in range(K):
qbox_area = ((query_boxes[k, 2] - query_boxes[k, 0]) *
(query_boxes[k, 3] - query_boxes[k, 1]))
for n in range(N):
iw = (min(boxes[n, 2], query_boxes[k, 2]) - max(
boxes[n, 0], query_boxes[k, 0]))
if iw > 0:
ih = (min(boxes[n, 3], query_boxes[k, 3]) - max(
boxes[n, 1], query_boxes[k, 1]))
if ih > 0:
if criterion == -1:
ua = (
(boxes[n, 2] - boxes[n, 0]) *
(boxes[n, 3] - boxes[n, 1]) + qbox_area - iw * ih)
elif criterion == 0:
ua = ((boxes[n, 2] - boxes[n, 0]) *
(boxes[n, 3] - boxes[n, 1]))
elif criterion == 1:
ua = qbox_area
else:
ua = 1.0
overlaps[n, k] = iw * ih / ua
return overlaps
def bev_box_overlap(boxes, qboxes, criterion=-1):
riou = rotate_iou_gpu_eval(boxes, qboxes, criterion)
return riou
@numba.jit(nopython=True, parallel=True)
def d3_box_overlap_kernel(boxes,
qboxes,
rinc,
criterion=-1,
z_axis=1,
z_center=1.0):
"""
z_axis: the z (height) axis.
z_center: unified z (height) center of box.
"""
N, K = boxes.shape[0], qboxes.shape[0]
for i in range(N):
for j in range(K):
if rinc[i, j] > 0:
min_z = min(
boxes[i, z_axis] + boxes[i, z_axis + 3] * (1 - z_center),
qboxes[j, z_axis] + qboxes[j, z_axis + 3] * (1 - z_center))
max_z = max(
boxes[i, z_axis] - boxes[i, z_axis + 3] * z_center,
qboxes[j, z_axis] - qboxes[j, z_axis + 3] * z_center)
iw = min_z - max_z
if iw > 0:
area1 = boxes[i, 3] * boxes[i, 4] * boxes[i, 5]
area2 = qboxes[j, 3] * qboxes[j, 4] * qboxes[j, 5]
inc = iw * rinc[i, j]
if criterion == -1:
ua = (area1 + area2 - inc)
elif criterion == 0:
ua = area1
elif criterion == 1:
ua = area2
else:
ua = 1.0
rinc[i, j] = inc / ua
else:
rinc[i, j] = 0.0
def d3_box_overlap(boxes, qboxes, criterion=-1, z_axis=1, z_center=1.0):
"""kitti camera format z_axis=1.
"""
bev_axes = list(range(7))
bev_axes.pop(z_axis + 3)
bev_axes.pop(z_axis)
rinc = rotate_iou_gpu_eval(boxes[:, bev_axes], qboxes[:, bev_axes], 2)
d3_box_overlap_kernel(boxes, qboxes, rinc, criterion, z_axis, z_center)
return rinc
@numba.jit(nopython=True)
def compute_statistics_jit(overlaps,
gt_datas,
dt_datas,
ignored_gt,
ignored_det,
dc_bboxes,
metric,
min_overlap,
thresh=0,
compute_fp=False,
compute_aos=False):
det_size = dt_datas.shape[0]
gt_size = gt_datas.shape[0]
dt_scores = dt_datas[:, -1]
dt_alphas = dt_datas[:, 4]
gt_alphas = gt_datas[:, 4]
dt_bboxes = dt_datas[:, :4]
# gt_bboxes = gt_datas[:, :4]
assigned_detection = [False] * det_size
ignored_threshold = [False] * det_size
if compute_fp:
for i in range(det_size):
if (dt_scores[i] < thresh):
ignored_threshold[i] = True
NO_DETECTION = -10000000
tp, fp, fn, similarity = 0, 0, 0, 0
# thresholds = [0.0]
# delta = [0.0]
thresholds = np.zeros((gt_size, ))
thresh_idx = 0
delta = np.zeros((gt_size, ))
delta_idx = 0
for i in range(gt_size):
if ignored_gt[i] == -1:
continue
det_idx = -1
valid_detection = NO_DETECTION
max_overlap = 0
assigned_ignored_det = False
for j in range(det_size):
if (ignored_det[j] == -1):
continue
if (assigned_detection[j]):
continue
if (ignored_threshold[j]):
continue
overlap = overlaps[j, i]
dt_score = dt_scores[j]
if (not compute_fp and (overlap > min_overlap)
and dt_score > valid_detection):
det_idx = j
valid_detection = dt_score
elif (compute_fp and (overlap > min_overlap)
and (overlap > max_overlap or assigned_ignored_det)
and ignored_det[j] == 0):
max_overlap = overlap
det_idx = j
valid_detection = 1
assigned_ignored_det = False
elif (compute_fp and (overlap > min_overlap)
and (valid_detection == NO_DETECTION)
and ignored_det[j] == 1):
det_idx = j
valid_detection = 1
assigned_ignored_det = True
if (valid_detection == NO_DETECTION) and ignored_gt[i] == 0:
fn += 1
elif ((valid_detection != NO_DETECTION)
and (ignored_gt[i] == 1 or ignored_det[det_idx] == 1)):
assigned_detection[det_idx] = True
elif valid_detection != NO_DETECTION:
# only a tp add a threshold.
tp += 1
# thresholds.append(dt_scores[det_idx])
thresholds[thresh_idx] = dt_scores[det_idx]
thresh_idx += 1
if compute_aos:
# delta.append(gt_alphas[i] - dt_alphas[det_idx])
delta[delta_idx] = gt_alphas[i] - dt_alphas[det_idx]
delta_idx += 1
assigned_detection[det_idx] = True
if compute_fp:
for i in range(det_size):
if (not (assigned_detection[i] or ignored_det[i] == -1
or ignored_det[i] == 1 or ignored_threshold[i])):
fp += 1
nstuff = 0
if metric == 0:
overlaps_dt_dc = image_box_overlap(dt_bboxes, dc_bboxes, 0)
for i in range(dc_bboxes.shape[0]):
for j in range(det_size):
if (assigned_detection[j]):
continue
if (ignored_det[j] == -1 or ignored_det[j] == 1):
continue
if (ignored_threshold[j]):
continue
if overlaps_dt_dc[j, i] > min_overlap:
assigned_detection[j] = True
nstuff += 1
fp -= nstuff
if compute_aos:
tmp = np.zeros((fp + delta_idx, ))
# tmp = [0] * fp
for i in range(delta_idx):
tmp[i + fp] = (1.0 + np.cos(delta[i])) / 2.0
# tmp.append((1.0 + np.cos(delta[i])) / 2.0)
# assert len(tmp) == fp + tp
# assert len(delta) == tp
if tp > 0 or fp > 0:
similarity = np.sum(tmp)
else:
similarity = -1
return tp, fp, fn, similarity, thresholds[:thresh_idx]
def get_split_parts(num, num_part):
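    # e.g. get_split_parts(10, 3) -> [3, 3, 3, 1]; get_split_parts(9, 3) -> [3, 3, 3]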
same_part = num // num_part
remain_num = num % num_part
if remain_num == 0:
return [same_part] * num_part
else:
return [same_part] * num_part + [remain_num]
@numba.jit(nopython=True)
def fused_compute_statistics(overlaps,
pr,
gt_nums,
dt_nums,
dc_nums,
gt_datas,
dt_datas,
dontcares,
ignored_gts,
ignored_dets,
metric,
min_overlap,
thresholds,
compute_aos=False):
gt_num = 0
dt_num = 0
dc_num = 0
for i in range(gt_nums.shape[0]):
for t, thresh in enumerate(thresholds):
overlap = overlaps[dt_num:dt_num + dt_nums[i], gt_num:gt_num +
gt_nums[i]]
gt_data = gt_datas[gt_num:gt_num + gt_nums[i]]
dt_data = dt_datas[dt_num:dt_num + dt_nums[i]]
ignored_gt = ignored_gts[gt_num:gt_num + gt_nums[i]]
ignored_det = ignored_dets[dt_num:dt_num + dt_nums[i]]
dontcare = dontcares[dc_num:dc_num + dc_nums[i]]
tp, fp, fn, similarity, _ = compute_statistics_jit(
overlap,
gt_data,
dt_data,
ignored_gt,
ignored_det,
dontcare,
metric,
min_overlap=min_overlap,
thresh=thresh,
compute_fp=True,
compute_aos=compute_aos)
pr[t, 0] += tp
pr[t, 1] += fp
pr[t, 2] += fn
if similarity != -1:
pr[t, 3] += similarity
gt_num += gt_nums[i]
dt_num += dt_nums[i]
dc_num += dc_nums[i]
def calculate_iou_partly(gt_annos,
dt_annos,
metric,
num_parts=50,
z_axis=1,
z_center=1.0):
"""fast iou algorithm. this function can be used independently to
do result analysis.
Args:
gt_annos: dict, must from get_label_annos() in kitti_common.py
dt_annos: dict, must from get_label_annos() in kitti_common.py
metric: eval type. 0: bbox, 1: bev, 2: 3d
num_parts: int. a parameter for fast calculate algorithm
z_axis: height axis. kitti camera use 1, lidar use 2.
"""
assert len(gt_annos) == len(dt_annos)
total_dt_num = np.stack([len(a["name"]) for a in dt_annos], 0)
total_gt_num = np.stack([len(a["name"]) for a in gt_annos], 0)
num_examples = len(gt_annos)
split_parts = get_split_parts(num_examples, num_parts)
parted_overlaps = []
example_idx = 0
bev_axes = list(range(3))
bev_axes.pop(z_axis)
for num_part in split_parts:
gt_annos_part = gt_annos[example_idx:example_idx + num_part]
dt_annos_part = dt_annos[example_idx:example_idx + num_part]
if metric == 0:
gt_boxes = np.concatenate([a["bbox"] for a in gt_annos_part], 0)
dt_boxes = np.concatenate([a["bbox"] for a in dt_annos_part], 0)
overlap_part = image_box_overlap(gt_boxes, dt_boxes)
elif metric == 1:
loc = np.concatenate(
[a["location"][:, bev_axes] for a in gt_annos_part], 0)
dims = np.concatenate(
[a["dimensions"][:, bev_axes] for a in gt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in gt_annos_part], 0)
gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
loc = np.concatenate(
[a["location"][:, bev_axes] for a in dt_annos_part], 0)
dims = np.concatenate(
[a["dimensions"][:, bev_axes] for a in dt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in dt_annos_part], 0)
dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
overlap_part = bev_box_overlap(gt_boxes,
dt_boxes).astype(np.float64)
elif metric == 2:
loc = np.concatenate([a["location"] for a in gt_annos_part], 0)
dims = np.concatenate([a["dimensions"] for a in gt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in gt_annos_part], 0)
gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
loc = np.concatenate([a["location"] for a in dt_annos_part], 0)
dims = np.concatenate([a["dimensions"] for a in dt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in dt_annos_part], 0)
dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
overlap_part = d3_box_overlap(
gt_boxes, dt_boxes, z_axis=z_axis,
z_center=z_center).astype(np.float64)
else:
raise ValueError("unknown metric")
parted_overlaps.append(overlap_part)
example_idx += num_part
overlaps = []
example_idx = 0
for j, num_part in enumerate(split_parts):
gt_annos_part = gt_annos[example_idx:example_idx + num_part]
dt_annos_part = dt_annos[example_idx:example_idx + num_part]
gt_num_idx, dt_num_idx = 0, 0
for i in range(num_part):
gt_box_num = total_gt_num[example_idx + i]
dt_box_num = total_dt_num[example_idx + i]
overlaps.append(
parted_overlaps[j][gt_num_idx:gt_num_idx +
gt_box_num, dt_num_idx:dt_num_idx +
dt_box_num])
gt_num_idx += gt_box_num
dt_num_idx += dt_box_num
example_idx += num_part
return overlaps, parted_overlaps, total_gt_num, total_dt_num
def _prepare_data(gt_annos, dt_annos, current_class, difficulty):
gt_datas_list = []
dt_datas_list = []
total_dc_num = []
ignored_gts, ignored_dets, dontcares = [], [], []
total_num_valid_gt = 0
for i in range(len(gt_annos)):
rets = clean_data(gt_annos[i], dt_annos[i], current_class, difficulty)
num_valid_gt, ignored_gt, ignored_det, dc_bboxes = rets
ignored_gts.append(np.array(ignored_gt, dtype=np.int64))
ignored_dets.append(np.array(ignored_det, dtype=np.int64))
if len(dc_bboxes) == 0:
dc_bboxes = np.zeros((0, 4)).astype(np.float64)
else:
dc_bboxes = np.stack(dc_bboxes, 0).astype(np.float64)
total_dc_num.append(dc_bboxes.shape[0])
dontcares.append(dc_bboxes)
total_num_valid_gt += num_valid_gt
gt_datas = np.concatenate(
[gt_annos[i]["bbox"], gt_annos[i]["alpha"][..., np.newaxis]], 1)
dt_datas = np.concatenate([
dt_annos[i]["bbox"], dt_annos[i]["alpha"][..., np.newaxis],
dt_annos[i]["score"][..., np.newaxis]
], 1)
gt_datas_list.append(gt_datas)
dt_datas_list.append(dt_datas)
total_dc_num = np.stack(total_dc_num, axis=0)
return (gt_datas_list, dt_datas_list, ignored_gts, ignored_dets, dontcares,
total_dc_num, total_num_valid_gt)
def eval_class(gt_annos,
dt_annos,
current_classes,
difficultys,
metric,
min_overlaps,
compute_aos=False,
z_axis=1,
z_center=1.0,
num_parts=50):
"""Kitti eval. support 2d/bev/3d/aos eval. support 0.5:0.05:0.95 coco AP.
Args:
gt_annos: dict, must from get_label_annos() in kitti_common.py
dt_annos: dict, must from get_label_annos() in kitti_common.py
current_class: int, 0: car, 1: pedestrian, 2: cyclist
difficulty: int. eval difficulty, 0: easy, 1: normal, 2: hard
metric: eval type. 0: bbox, 1: bev, 2: 3d
min_overlap: float, min overlap. official:
[[0.7, 0.5, 0.5], [0.7, 0.5, 0.5], [0.7, 0.5, 0.5]]
format: [metric, class]. choose one from matrix above.
num_parts: int. a parameter for fast calculate algorithm
Returns:
dict of recall, precision and aos
"""
assert len(gt_annos) == len(dt_annos)
num_examples = len(gt_annos)
split_parts = get_split_parts(num_examples, num_parts)
rets = calculate_iou_partly(
dt_annos,
gt_annos,
metric,
num_parts,
z_axis=z_axis,
z_center=z_center)
overlaps, parted_overlaps, total_dt_num, total_gt_num = rets
N_SAMPLE_PTS = 41
num_minoverlap = len(min_overlaps)
num_class = len(current_classes)
num_difficulty = len(difficultys)
precision = np.zeros(
[num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
recall = np.zeros(
[num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
aos = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
all_thresholds = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
for m, current_class in enumerate(current_classes):
for l, difficulty in enumerate(difficultys):
rets = _prepare_data(gt_annos, dt_annos, current_class, difficulty)
(gt_datas_list, dt_datas_list, ignored_gts, ignored_dets,
dontcares, total_dc_num, total_num_valid_gt) = rets
for k, min_overlap in enumerate(min_overlaps[:, metric, m]):
thresholdss = []
for i in range(len(gt_annos)):
rets = compute_statistics_jit(
overlaps[i],
gt_datas_list[i],
dt_datas_list[i],
ignored_gts[i],
ignored_dets[i],
dontcares[i],
metric,
min_overlap=min_overlap,
thresh=0.0,
compute_fp=False)
tp, fp, fn, similarity, thresholds = rets
thresholdss += thresholds.tolist()
thresholdss = np.array(thresholdss)
thresholds = get_thresholds(thresholdss, total_num_valid_gt)
thresholds = np.array(thresholds)
all_thresholds[m, l, k, :len(thresholds)] = thresholds
pr = np.zeros([len(thresholds), 4])
idx = 0
for j, num_part in enumerate(split_parts):
gt_datas_part = np.concatenate(
gt_datas_list[idx:idx + num_part], 0)
dt_datas_part = np.concatenate(
dt_datas_list[idx:idx + num_part], 0)
dc_datas_part = np.concatenate(
dontcares[idx:idx + num_part], 0)
ignored_dets_part = np.concatenate(
ignored_dets[idx:idx + num_part], 0)
ignored_gts_part = np.concatenate(
ignored_gts[idx:idx + num_part], 0)
fused_compute_statistics(
parted_overlaps[j],
pr,
total_gt_num[idx:idx + num_part],
total_dt_num[idx:idx + num_part],
total_dc_num[idx:idx + num_part],
gt_datas_part,
dt_datas_part,
dc_datas_part,
ignored_gts_part,
ignored_dets_part,
metric,
min_overlap=min_overlap,
thresholds=thresholds,
compute_aos=compute_aos)
idx += num_part
for i in range(len(thresholds)):
precision[m, l, k, i] = pr[i, 0] / (pr[i, 0] + pr[i, 1])
if compute_aos:
aos[m, l, k, i] = pr[i, 3] / (pr[i, 0] + pr[i, 1])
for i in range(len(thresholds)):
precision[m, l, k, i] = np.max(
precision[m, l, k, i:], axis=-1)
if compute_aos:
aos[m, l, k, i] = np.max(aos[m, l, k, i:], axis=-1)
ret_dict = {
# "recall": recall, # [num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]
"precision": precision,
"orientation": aos,
"thresholds": all_thresholds,
"min_overlaps": min_overlaps,
}
return ret_dict
def get_mAP_v2(prec):
sums = 0
for i in range(0, prec.shape[-1], 4):
sums = sums + prec[..., i]
return sums / 11 * 100
def do_eval_v2(gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos=False,
               difficultys=(0, 1, 2),
z_axis=1,
z_center=1.0):
# min_overlaps: [num_minoverlap, metric, num_class]
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
0,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
# ret: [num_class, num_diff, num_minoverlap, num_sample_points]
mAP_bbox = get_mAP_v2(ret["precision"])
mAP_aos = None
if compute_aos:
mAP_aos = get_mAP_v2(ret["orientation"])
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
1,
min_overlaps,
z_axis=z_axis,
z_center=z_center)
mAP_bev = get_mAP_v2(ret["precision"])
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
2,
min_overlaps,
z_axis=z_axis,
z_center=z_center)
mAP_3d = get_mAP_v2(ret["precision"])
return mAP_bbox, mAP_bev, mAP_3d, mAP_aos
def do_eval_v3(gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos=False,
difficultys=(0, 1, 2),
z_axis=1,
z_center=1.0):
# min_overlaps: [num_minoverlap, metric, num_class]
types = ["bbox", "bev", "3d"]
metrics = {}
for i in range(3):
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
i,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
metrics[types[i]] = ret
return metrics
def do_coco_style_eval(gt_annos,
dt_annos,
current_classes,
overlap_ranges,
compute_aos,
z_axis=1,
z_center=1.0):
# overlap_ranges: [range, metric, num_class]
min_overlaps = np.zeros([10, *overlap_ranges.shape[1:]])
for i in range(overlap_ranges.shape[1]):
for j in range(overlap_ranges.shape[2]):
min_overlaps[:, i, j] = np.linspace(*overlap_ranges[:, i, j])
mAP_bbox, mAP_bev, mAP_3d, mAP_aos = do_eval_v2(
gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
# ret: [num_class, num_diff, num_minoverlap]
mAP_bbox = mAP_bbox.mean(-1)
mAP_bev = mAP_bev.mean(-1)
mAP_3d = mAP_3d.mean(-1)
if mAP_aos is not None:
mAP_aos = mAP_aos.mean(-1)
return mAP_bbox, mAP_bev, mAP_3d, mAP_aos
def print_str(value, *arg, sstream=None):
if sstream is None:
sstream = sysio.StringIO()
sstream.truncate(0)
sstream.seek(0)
print(value, *arg, file=sstream)
return sstream.getvalue()
def get_official_eval_result(gt_annos,
dt_annos,
current_classes,
difficultys=[0, 1, 2],
z_axis=1,
z_center=1.0):
"""
    gt_annos and dt_annos must contain the following keys:
[bbox, location, dimensions, rotation_y, score]
"""
overlap_mod = np.array([[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7]])
overlap_easy = np.array([[0.5, 0.5, 0.5, 0.7, 0.5, 0.5, 0.5, 0.5],
[0.25, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5],
[0.25, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5]])
min_overlaps = np.stack([overlap_mod, overlap_easy], axis=0) # [2, 3, 5]
class_to_name = {
0: 'Car',
1: 'Pedestrian',
2: 'Cyclist',
3: 'Van',
4: 'Person_sitting',
5: 'car',
6: 'tractor',
7: 'trailer',
}
name_to_class = {v: n for n, v in class_to_name.items()}
if not isinstance(current_classes, (list, tuple)):
current_classes = [current_classes]
current_classes_int = []
for curcls in current_classes:
if isinstance(curcls, str):
current_classes_int.append(name_to_class[curcls])
else:
current_classes_int.append(curcls)
current_classes = current_classes_int
min_overlaps = min_overlaps[:, :, current_classes]
result = ''
# check whether alpha is valid
compute_aos = False
for anno in dt_annos:
if anno['alpha'].shape[0] != 0:
if anno['alpha'][0] != -10:
compute_aos = True
break
metrics = do_eval_v3(
gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos,
difficultys,
z_axis=z_axis,
z_center=z_center)
for j, curcls in enumerate(current_classes):
# mAP threshold array: [num_minoverlap, metric, class]
# mAP result: [num_class, num_diff, num_minoverlap]
for i in range(min_overlaps.shape[0]):
mAPbbox = get_mAP_v2(metrics["bbox"]["precision"][j, :, i])
mAPbbox = ", ".join(f"{v:.2f}" for v in mAPbbox)
mAPbev = get_mAP_v2(metrics["bev"]["precision"][j, :, i])
mAPbev = ", ".join(f"{v:.2f}" for v in mAPbev)
mAP3d = get_mAP_v2(metrics["3d"]["precision"][j, :, i])
mAP3d = ", ".join(f"{v:.2f}" for v in mAP3d)
result += print_str(
(f"{class_to_name[curcls]} "
"AP(Average Precision)@{:.2f}, {:.2f}, {:.2f}:".format(*min_overlaps[i, :, j])))
result += print_str(f"bbox AP:{mAPbbox}")
result += print_str(f"bev AP:{mAPbev}")
result += print_str(f"3d AP:{mAP3d}")
if compute_aos:
mAPaos = get_mAP_v2(metrics["bbox"]["orientation"][j, :, i])
mAPaos = ", ".join(f"{v:.2f}" for v in mAPaos)
result += print_str(f"aos AP:{mAPaos}")
return result
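# Example usage (a sketch; assumes annotation lists produced by get_label_annos()
# from kitti_common.py, as referenced in the docstrings above):
#   gt_annos = kitti_common.get_label_annos("data/KITTI/label_2", val_image_ids)
#   dt_annos = kitti_common.get_label_annos("data/KITTI/result", val_image_ids)
#   print(get_official_eval_result(gt_annos, dt_annos, current_classes=[0]))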
def get_coco_eval_result(gt_annos,
dt_annos,
current_classes,
z_axis=1,
z_center=1.0):
class_to_name = {
0: 'Car',
1: 'Pedestrian',
2: 'Cyclist',
3: 'Van',
4: 'Person_sitting',
5: 'car',
6: 'tractor',
7: 'trailer',
}
class_to_range = {
0: [0.5, 1.0, 0.05],
1: [0.25, 0.75, 0.05],
2: [0.25, 0.75, 0.05],
3: [0.5, 1.0, 0.05],
4: [0.25, 0.75, 0.05],
5: [0.5, 1.0, 0.05],
6: [0.5, 1.0, 0.05],
7: [0.5, 1.0, 0.05],
}
class_to_range = {
0: [0.5, 0.95, 10],
1: [0.25, 0.7, 10],
2: [0.25, 0.7, 10],
3: [0.5, 0.95, 10],
4: [0.25, 0.7, 10],
5: [0.5, 0.95, 10],
6: [0.5, 0.95, 10],
7: [0.5, 0.95, 10],
}
name_to_class = {v: n for n, v in class_to_name.items()}
if not isinstance(current_classes, (list, tuple)):
current_classes = [current_classes]
current_classes_int = []
for curcls in current_classes:
if isinstance(curcls, str):
current_classes_int.append(name_to_class[curcls])
else:
current_classes_int.append(curcls)
current_classes = current_classes_int
overlap_ranges = np.zeros([3, 3, len(current_classes)])
for i, curcls in enumerate(current_classes):
overlap_ranges[:, :, i] = np.array(
class_to_range[curcls])[:, np.newaxis]
result = ''
# check whether alpha is valid
compute_aos = False
for anno in dt_annos:
if anno['alpha'].shape[0] != 0:
if anno['alpha'][0] != -10:
compute_aos = True
break
mAPbbox, mAPbev, mAP3d, mAPaos = do_coco_style_eval(
gt_annos,
dt_annos,
current_classes,
overlap_ranges,
compute_aos,
z_axis=z_axis,
z_center=z_center)
for j, curcls in enumerate(current_classes):
# mAP threshold array: [num_minoverlap, metric, class]
# mAP result: [num_class, num_diff, num_minoverlap]
o_range = np.array(class_to_range[curcls])[[0, 2, 1]]
o_range[1] = (o_range[2] - o_range[0]) / (o_range[1] - 1)
result += print_str((f"{class_to_name[curcls]} "
"coco AP@{:.2f}:{:.2f}:{:.2f}:".format(*o_range)))
result += print_str((f"bbox AP:{mAPbbox[j, 0]:.2f}, "
f"{mAPbbox[j, 1]:.2f}, "
f"{mAPbbox[j, 2]:.2f}"))
result += print_str((f"bev AP:{mAPbev[j, 0]:.2f}, "
f"{mAPbev[j, 1]:.2f}, "
f"{mAPbev[j, 2]:.2f}"))
result += print_str((f"3d AP:{mAP3d[j, 0]:.2f}, "
f"{mAP3d[j, 1]:.2f}, "
f"{mAP3d[j, 2]:.2f}"))
if compute_aos:
result += print_str((f"aos AP:{mAPaos[j, 0]:.2f}, "
f"{mAPaos[j, 1]:.2f}, "
f"{mAPaos[j, 2]:.2f}"))
return result | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/rotate_iou.py | #####################
# Based on https://github.com/hongzhenwang/RRPN-revise
# Licensed under The MIT License
# Author: yanyan, scrin@foxmail.com
#####################
import math
import numba
import numpy as np
from numba import cuda
@numba.jit(nopython=True)
def div_up(m, n):
return m // n + (m % n > 0)
@cuda.jit('(float32[:], float32[:], float32[:])', device=True, inline=True)
def trangle_area(a, b, c):
return ((a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) *
(b[0] - c[0])) / 2.0
@cuda.jit('(float32[:], int32)', device=True, inline=True)
def area(int_pts, num_of_inter):
area_val = 0.0
for i in range(num_of_inter - 2):
area_val += abs(
trangle_area(int_pts[:2], int_pts[2 * i + 2:2 * i + 4],
int_pts[2 * i + 4:2 * i + 6]))
return area_val
@cuda.jit('(float32[:], int32)', device=True, inline=True)
def sort_vertex_in_convex_polygon(int_pts, num_of_inter):
if num_of_inter > 0:
center = cuda.local.array((2, ), dtype=numba.float32)
center[:] = 0.0
for i in range(num_of_inter):
center[0] += int_pts[2 * i]
center[1] += int_pts[2 * i + 1]
center[0] /= num_of_inter
center[1] /= num_of_inter
v = cuda.local.array((2, ), dtype=numba.float32)
vs = cuda.local.array((16, ), dtype=numba.float32)
for i in range(num_of_inter):
v[0] = int_pts[2 * i] - center[0]
v[1] = int_pts[2 * i + 1] - center[1]
d = math.sqrt(v[0] * v[0] + v[1] * v[1])
v[0] = v[0] / d
v[1] = v[1] / d
if v[1] < 0:
v[0] = -2 - v[0]
vs[i] = v[0]
j = 0
temp = 0
for i in range(1, num_of_inter):
if vs[i - 1] > vs[i]:
temp = vs[i]
tx = int_pts[2 * i]
ty = int_pts[2 * i + 1]
j = i
while j > 0 and vs[j - 1] > temp:
vs[j] = vs[j - 1]
int_pts[j * 2] = int_pts[j * 2 - 2]
int_pts[j * 2 + 1] = int_pts[j * 2 - 1]
j -= 1
vs[j] = temp
int_pts[j * 2] = tx
int_pts[j * 2 + 1] = ty
@cuda.jit(
'(float32[:], float32[:], int32, int32, float32[:])',
device=True,
inline=True)
def line_segment_intersection(pts1, pts2, i, j, temp_pts):
A = cuda.local.array((2, ), dtype=numba.float32)
B = cuda.local.array((2, ), dtype=numba.float32)
C = cuda.local.array((2, ), dtype=numba.float32)
D = cuda.local.array((2, ), dtype=numba.float32)
A[0] = pts1[2 * i]
A[1] = pts1[2 * i + 1]
B[0] = pts1[2 * ((i + 1) % 4)]
B[1] = pts1[2 * ((i + 1) % 4) + 1]
C[0] = pts2[2 * j]
C[1] = pts2[2 * j + 1]
D[0] = pts2[2 * ((j + 1) % 4)]
D[1] = pts2[2 * ((j + 1) % 4) + 1]
BA0 = B[0] - A[0]
BA1 = B[1] - A[1]
DA0 = D[0] - A[0]
CA0 = C[0] - A[0]
DA1 = D[1] - A[1]
CA1 = C[1] - A[1]
acd = DA1 * CA0 > CA1 * DA0
bcd = (D[1] - B[1]) * (C[0] - B[0]) > (C[1] - B[1]) * (D[0] - B[0])
if acd != bcd:
abc = CA1 * BA0 > BA1 * CA0
abd = DA1 * BA0 > BA1 * DA0
if abc != abd:
DC0 = D[0] - C[0]
DC1 = D[1] - C[1]
ABBA = A[0] * B[1] - B[0] * A[1]
CDDC = C[0] * D[1] - D[0] * C[1]
DH = BA1 * DC0 - BA0 * DC1
Dx = ABBA * DC0 - BA0 * CDDC
Dy = ABBA * DC1 - BA1 * CDDC
temp_pts[0] = Dx / DH
temp_pts[1] = Dy / DH
return True
return False
@cuda.jit(
'(float32[:], float32[:], int32, int32, float32[:])',
device=True,
inline=True)
def line_segment_intersection_v1(pts1, pts2, i, j, temp_pts):
a = cuda.local.array((2, ), dtype=numba.float32)
b = cuda.local.array((2, ), dtype=numba.float32)
c = cuda.local.array((2, ), dtype=numba.float32)
d = cuda.local.array((2, ), dtype=numba.float32)
a[0] = pts1[2 * i]
a[1] = pts1[2 * i + 1]
b[0] = pts1[2 * ((i + 1) % 4)]
b[1] = pts1[2 * ((i + 1) % 4) + 1]
c[0] = pts2[2 * j]
c[1] = pts2[2 * j + 1]
d[0] = pts2[2 * ((j + 1) % 4)]
d[1] = pts2[2 * ((j + 1) % 4) + 1]
area_abc = trangle_area(a, b, c)
area_abd = trangle_area(a, b, d)
if area_abc * area_abd >= 0:
return False
area_cda = trangle_area(c, d, a)
area_cdb = area_cda + area_abc - area_abd
if area_cda * area_cdb >= 0:
return False
t = area_cda / (area_abd - area_abc)
dx = t * (b[0] - a[0])
dy = t * (b[1] - a[1])
temp_pts[0] = a[0] + dx
temp_pts[1] = a[1] + dy
return True
@cuda.jit('(float32, float32, float32[:])', device=True, inline=True)
def point_in_quadrilateral(pt_x, pt_y, corners):
ab0 = corners[2] - corners[0]
ab1 = corners[3] - corners[1]
ad0 = corners[6] - corners[0]
ad1 = corners[7] - corners[1]
ap0 = pt_x - corners[0]
ap1 = pt_y - corners[1]
abab = ab0 * ab0 + ab1 * ab1
abap = ab0 * ap0 + ab1 * ap1
adad = ad0 * ad0 + ad1 * ad1
adap = ad0 * ap0 + ad1 * ap1
return abab >= abap and abap >= 0 and adad >= adap and adap >= 0
@cuda.jit('(float32[:], float32[:], float32[:])', device=True, inline=True)
def quadrilateral_intersection(pts1, pts2, int_pts):
num_of_inter = 0
for i in range(4):
if point_in_quadrilateral(pts1[2 * i], pts1[2 * i + 1], pts2):
int_pts[num_of_inter * 2] = pts1[2 * i]
int_pts[num_of_inter * 2 + 1] = pts1[2 * i + 1]
num_of_inter += 1
if point_in_quadrilateral(pts2[2 * i], pts2[2 * i + 1], pts1):
int_pts[num_of_inter * 2] = pts2[2 * i]
int_pts[num_of_inter * 2 + 1] = pts2[2 * i + 1]
num_of_inter += 1
temp_pts = cuda.local.array((2, ), dtype=numba.float32)
for i in range(4):
for j in range(4):
has_pts = line_segment_intersection(pts1, pts2, i, j, temp_pts)
if has_pts:
int_pts[num_of_inter * 2] = temp_pts[0]
int_pts[num_of_inter * 2 + 1] = temp_pts[1]
num_of_inter += 1
return num_of_inter
@cuda.jit('(float32[:], float32[:])', device=True, inline=True)
def rbbox_to_corners(corners, rbbox):
# generate clockwise corners and rotate it clockwise
angle = rbbox[4]
a_cos = math.cos(angle)
a_sin = math.sin(angle)
center_x = rbbox[0]
center_y = rbbox[1]
x_d = rbbox[2]
y_d = rbbox[3]
corners_x = cuda.local.array((4, ), dtype=numba.float32)
corners_y = cuda.local.array((4, ), dtype=numba.float32)
corners_x[0] = -x_d / 2
corners_x[1] = -x_d / 2
corners_x[2] = x_d / 2
corners_x[3] = x_d / 2
corners_y[0] = -y_d / 2
corners_y[1] = y_d / 2
corners_y[2] = y_d / 2
corners_y[3] = -y_d / 2
for i in range(4):
corners[2 *
i] = a_cos * corners_x[i] + a_sin * corners_y[i] + center_x
corners[2 * i
+ 1] = -a_sin * corners_x[i] + a_cos * corners_y[i] + center_y
@cuda.jit('(float32[:], float32[:])', device=True, inline=True)
def inter(rbbox1, rbbox2):
corners1 = cuda.local.array((8, ), dtype=numba.float32)
corners2 = cuda.local.array((8, ), dtype=numba.float32)
intersection_corners = cuda.local.array((16, ), dtype=numba.float32)
rbbox_to_corners(corners1, rbbox1)
rbbox_to_corners(corners2, rbbox2)
num_intersection = quadrilateral_intersection(corners1, corners2,
intersection_corners)
sort_vertex_in_convex_polygon(intersection_corners, num_intersection)
# print(intersection_corners.reshape([-1, 2])[:num_intersection])
return area(intersection_corners, num_intersection)
@cuda.jit('(float32[:], float32[:], int32)', device=True, inline=True)
def devRotateIoUEval(rbox1, rbox2, criterion=-1):
area1 = rbox1[2] * rbox1[3]
area2 = rbox2[2] * rbox2[3]
area_inter = inter(rbox1, rbox2)
if criterion == -1:
return area_inter / (area1 + area2 - area_inter)
elif criterion == 0:
return area_inter / area1
elif criterion == 1:
return area_inter / area2
else:
return area_inter
@cuda.jit('(int64, int64, float32[:], float32[:], float32[:], int32)', fastmath=False)
def rotate_iou_kernel_eval(N, K, dev_boxes, dev_query_boxes, dev_iou, criterion=-1):
threadsPerBlock = 8 * 8
row_start = cuda.blockIdx.x
col_start = cuda.blockIdx.y
tx = cuda.threadIdx.x
row_size = min(N - row_start * threadsPerBlock, threadsPerBlock)
col_size = min(K - col_start * threadsPerBlock, threadsPerBlock)
block_boxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32)
block_qboxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32)
dev_query_box_idx = threadsPerBlock * col_start + tx
dev_box_idx = threadsPerBlock * row_start + tx
if (tx < col_size):
block_qboxes[tx * 5 + 0] = dev_query_boxes[dev_query_box_idx * 5 + 0]
block_qboxes[tx * 5 + 1] = dev_query_boxes[dev_query_box_idx * 5 + 1]
block_qboxes[tx * 5 + 2] = dev_query_boxes[dev_query_box_idx * 5 + 2]
block_qboxes[tx * 5 + 3] = dev_query_boxes[dev_query_box_idx * 5 + 3]
block_qboxes[tx * 5 + 4] = dev_query_boxes[dev_query_box_idx * 5 + 4]
if (tx < row_size):
block_boxes[tx * 5 + 0] = dev_boxes[dev_box_idx * 5 + 0]
block_boxes[tx * 5 + 1] = dev_boxes[dev_box_idx * 5 + 1]
block_boxes[tx * 5 + 2] = dev_boxes[dev_box_idx * 5 + 2]
block_boxes[tx * 5 + 3] = dev_boxes[dev_box_idx * 5 + 3]
block_boxes[tx * 5 + 4] = dev_boxes[dev_box_idx * 5 + 4]
cuda.syncthreads()
if tx < row_size:
for i in range(col_size):
offset = row_start * threadsPerBlock * K + col_start * threadsPerBlock + tx * K + i
dev_iou[offset] = devRotateIoUEval(block_qboxes[i * 5:i * 5 + 5],
block_boxes[tx * 5:tx * 5 + 5], criterion)
def rotate_iou_gpu_eval(boxes, query_boxes, criterion=-1, device_id=0):
"""rotated box iou running in gpu. 500x faster than cpu version
(take 5ms in one example with numba.cuda code).
convert from [this project](
https://github.com/hongzhenwang/RRPN-revise/tree/master/lib/rotation).
Args:
boxes (float tensor: [N, 5]): rbboxes. format: centers, dims,
angles(clockwise when positive)
query_boxes (float tensor: [K, 5]): [description]
device_id (int, optional): Defaults to 0. [description]
Returns:
[type]: [description]
"""
box_dtype = boxes.dtype
boxes = boxes.astype(np.float32)
query_boxes = query_boxes.astype(np.float32)
N = boxes.shape[0]
K = query_boxes.shape[0]
iou = np.zeros((N, K), dtype=np.float32)
if N == 0 or K == 0:
return iou
threadsPerBlock = 8 * 8
cuda.select_device(device_id)
blockspergrid = (div_up(N, threadsPerBlock), div_up(K, threadsPerBlock))
stream = cuda.stream()
with stream.auto_synchronize():
boxes_dev = cuda.to_device(boxes.reshape([-1]), stream)
query_boxes_dev = cuda.to_device(query_boxes.reshape([-1]), stream)
iou_dev = cuda.to_device(iou.reshape([-1]), stream)
rotate_iou_kernel_eval[blockspergrid, threadsPerBlock, stream](
N, K, boxes_dev, query_boxes_dev, iou_dev, criterion)
iou_dev.copy_to_host(iou.reshape([-1]), stream=stream)
return iou.astype(boxes.dtype) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/weights/get_regressor_weights.py | """Get checkpoint from W&B"""
import wandb
run = wandb.init()
artifact = run.use_artifact('3ddetection/yolo3d-regressor/experiment-ckpts:v11', type='checkpoints')
artifact_dir = artifact.download() | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_configs.py | import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig
def test_train_config(cfg_train: DictConfig):
assert cfg_train
assert cfg_train.datamodule
assert cfg_train.model
assert cfg_train.trainer
HydraConfig().set_config(cfg_train)
hydra.utils.instantiate(cfg_train.datamodule)
hydra.utils.instantiate(cfg_train.model)
hydra.utils.instantiate(cfg_train.trainer)
def test_eval_config(cfg_eval: DictConfig):
assert cfg_eval
assert cfg_eval.datamodule
assert cfg_eval.model
assert cfg_eval.trainer
HydraConfig().set_config(cfg_eval)
hydra.utils.instantiate(cfg_eval.datamodule)
hydra.utils.instantiate(cfg_eval.model)
hydra.utils.instantiate(cfg_eval.trainer)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/conftest.py | import pyrootutils
import pytest
from hydra import compose, initialize
from hydra.core.global_hydra import GlobalHydra
from omegaconf import DictConfig, open_dict
@pytest.fixture(scope="package")
def cfg_train_global() -> DictConfig:
with initialize(version_base="1.2", config_path="../configs"):
cfg = compose(config_name="train.yaml", return_hydra_config=True, overrides=[])
# set defaults for all tests
with open_dict(cfg):
cfg.paths.root_dir = str(pyrootutils.find_root())
cfg.trainer.max_epochs = 1
cfg.trainer.limit_train_batches = 0.01
cfg.trainer.limit_val_batches = 0.1
cfg.trainer.limit_test_batches = 0.1
cfg.trainer.accelerator = "cpu"
cfg.trainer.devices = 1
cfg.datamodule.num_workers = 0
cfg.datamodule.pin_memory = False
cfg.extras.print_config = False
cfg.extras.enforce_tags = False
cfg.logger = None
return cfg
@pytest.fixture(scope="package")
def cfg_eval_global() -> DictConfig:
with initialize(version_base="1.2", config_path="../configs"):
cfg = compose(config_name="eval.yaml", return_hydra_config=True, overrides=["ckpt_path=."])
# set defaults for all tests
with open_dict(cfg):
cfg.paths.root_dir = str(pyrootutils.find_root())
cfg.trainer.max_epochs = 1
cfg.trainer.limit_test_batches = 0.1
cfg.trainer.accelerator = "cpu"
cfg.trainer.devices = 1
cfg.datamodule.num_workers = 0
cfg.datamodule.pin_memory = False
cfg.extras.print_config = False
cfg.extras.enforce_tags = False
cfg.logger = None
return cfg
# this is called by each test which uses `cfg_train` arg
# each test generates its own temporary logging path
@pytest.fixture(scope="function")
def cfg_train(cfg_train_global, tmp_path) -> DictConfig:
cfg = cfg_train_global.copy()
with open_dict(cfg):
cfg.paths.output_dir = str(tmp_path)
cfg.paths.log_dir = str(tmp_path)
yield cfg
GlobalHydra.instance().clear()
# this is called by each test which uses `cfg_eval` arg
# each test generates its own temporary logging path
@pytest.fixture(scope="function")
def cfg_eval(cfg_eval_global, tmp_path) -> DictConfig:
cfg = cfg_eval_global.copy()
with open_dict(cfg):
cfg.paths.output_dir = str(tmp_path)
cfg.paths.log_dir = str(tmp_path)
yield cfg
GlobalHydra.instance().clear()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_train.py | import os
import pytest
from hydra.core.hydra_config import HydraConfig
from omegaconf import open_dict
from src.train import train
from tests.helpers.run_if import RunIf
def test_train_fast_dev_run(cfg_train):
"""Run for 1 train, val and test step."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.fast_dev_run = True
cfg_train.trainer.accelerator = "cpu"
train(cfg_train)
@RunIf(min_gpus=1)
def test_train_fast_dev_run_gpu(cfg_train):
"""Run for 1 train, val and test step on GPU."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.fast_dev_run = True
cfg_train.trainer.accelerator = "gpu"
train(cfg_train)
@RunIf(min_gpus=1)
@pytest.mark.slow
def test_train_epoch_gpu_amp(cfg_train):
"""Train 1 epoch on GPU with mixed-precision."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
cfg_train.trainer.accelerator = "cpu"
cfg_train.trainer.precision = 16
train(cfg_train)
@pytest.mark.slow
def test_train_epoch_double_val_loop(cfg_train):
"""Train 1 epoch with validation loop twice per epoch."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
cfg_train.trainer.val_check_interval = 0.5
train(cfg_train)
@pytest.mark.slow
def test_train_ddp_sim(cfg_train):
"""Simulate DDP (Distributed Data Parallel) on 2 CPU processes."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 2
cfg_train.trainer.accelerator = "cpu"
cfg_train.trainer.devices = 2
cfg_train.trainer.strategy = "ddp_spawn"
train(cfg_train)
@pytest.mark.slow
def test_train_resume(tmp_path, cfg_train):
"""Run 1 epoch, finish, and resume for another epoch."""
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
HydraConfig().set_config(cfg_train)
metric_dict_1, _ = train(cfg_train)
files = os.listdir(tmp_path / "checkpoints")
assert "last.ckpt" in files
assert "epoch_000.ckpt" in files
with open_dict(cfg_train):
cfg_train.ckpt_path = str(tmp_path / "checkpoints" / "last.ckpt")
cfg_train.trainer.max_epochs = 2
metric_dict_2, _ = train(cfg_train)
files = os.listdir(tmp_path / "checkpoints")
assert "epoch_001.ckpt" in files
assert "epoch_002.ckpt" not in files
assert metric_dict_1["train/acc"] < metric_dict_2["train/acc"]
assert metric_dict_1["val/acc"] < metric_dict_2["val/acc"]
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_sweeps.py | import pytest
from tests.helpers.run_if import RunIf
from tests.helpers.run_sh_command import run_sh_command
startfile = "src/train.py"
overrides = ["logger=[]"]
@RunIf(sh=True)
@pytest.mark.slow
def test_experiments(tmp_path):
"""Test running all available experiment configs with fast_dev_run=True."""
command = [
startfile,
"-m",
"experiment=glob(*)",
"hydra.sweep.dir=" + str(tmp_path),
"++trainer.fast_dev_run=true",
] + overrides
run_sh_command(command)
@RunIf(sh=True)
@pytest.mark.slow
def test_hydra_sweep(tmp_path):
"""Test default hydra sweep."""
command = [
startfile,
"-m",
"hydra.sweep.dir=" + str(tmp_path),
"model.optimizer.lr=0.005,0.01",
"++trainer.fast_dev_run=true",
] + overrides
run_sh_command(command)
@RunIf(sh=True)
@pytest.mark.slow
def test_hydra_sweep_ddp_sim(tmp_path):
"""Test default hydra sweep with ddp sim."""
command = [
startfile,
"-m",
"hydra.sweep.dir=" + str(tmp_path),
"trainer=ddp_sim",
"trainer.max_epochs=3",
"+trainer.limit_train_batches=0.01",
"+trainer.limit_val_batches=0.1",
"+trainer.limit_test_batches=0.1",
"model.optimizer.lr=0.005,0.01,0.02",
] + overrides
run_sh_command(command)
@RunIf(sh=True)
@pytest.mark.slow
def test_optuna_sweep(tmp_path):
"""Test optuna sweep."""
command = [
startfile,
"-m",
"hparams_search=mnist_optuna",
"hydra.sweep.dir=" + str(tmp_path),
"hydra.sweeper.n_trials=10",
"hydra.sweeper.sampler.n_startup_trials=5",
"++trainer.fast_dev_run=true",
] + overrides
run_sh_command(command)
@RunIf(wandb=True, sh=True)
@pytest.mark.slow
def test_optuna_sweep_ddp_sim_wandb(tmp_path):
"""Test optuna sweep with wandb and ddp sim."""
command = [
startfile,
"-m",
"hparams_search=mnist_optuna",
"hydra.sweep.dir=" + str(tmp_path),
"hydra.sweeper.n_trials=5",
"trainer=ddp_sim",
"trainer.max_epochs=3",
"+trainer.limit_train_batches=0.01",
"+trainer.limit_val_batches=0.1",
"+trainer.limit_test_batches=0.1",
"logger=wandb",
]
run_sh_command(command)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_mnist_datamodule.py | from pathlib import Path
import pytest
import torch
from src.datamodules.mnist_datamodule import MNISTDataModule
@pytest.mark.parametrize("batch_size", [32, 128])
def test_mnist_datamodule(batch_size):
data_dir = "data/"
dm = MNISTDataModule(data_dir=data_dir, batch_size=batch_size)
dm.prepare_data()
assert not dm.data_train and not dm.data_val and not dm.data_test
assert Path(data_dir, "MNIST").exists()
assert Path(data_dir, "MNIST", "raw").exists()
dm.setup()
assert dm.data_train and dm.data_val and dm.data_test
assert dm.train_dataloader() and dm.val_dataloader() and dm.test_dataloader()
num_datapoints = len(dm.data_train) + len(dm.data_val) + len(dm.data_test)
assert num_datapoints == 70_000
batch = next(iter(dm.train_dataloader()))
x, y = batch
assert len(x) == batch_size
assert len(y) == batch_size
assert x.dtype == torch.float32
assert y.dtype == torch.int64
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_eval.py | import os
import pytest
from hydra.core.hydra_config import HydraConfig
from omegaconf import open_dict
from src.eval import evaluate
from src.train import train
@pytest.mark.slow
def test_train_eval(tmp_path, cfg_train, cfg_eval):
"""Train for 1 epoch with `train.py` and evaluate with `eval.py`"""
assert str(tmp_path) == cfg_train.paths.output_dir == cfg_eval.paths.output_dir
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
cfg_train.test = True
HydraConfig().set_config(cfg_train)
train_metric_dict, _ = train(cfg_train)
assert "last.ckpt" in os.listdir(tmp_path / "checkpoints")
with open_dict(cfg_eval):
cfg_eval.ckpt_path = str(tmp_path / "checkpoints" / "last.ckpt")
HydraConfig().set_config(cfg_eval)
test_metric_dict, _ = evaluate(cfg_eval)
assert test_metric_dict["test/acc"] > 0.0
assert abs(train_metric_dict["test/acc"].item() - test_metric_dict["test/acc"].item()) < 0.001
| 0 |
apollo_public_repos/apollo-model-yolo3d/tests | apollo_public_repos/apollo-model-yolo3d/tests/helpers/package_available.py | import platform
import pkg_resources
from pytorch_lightning.utilities.xla_device import XLADeviceUtils
def _package_available(package_name: str) -> bool:
"""Check if a package is available in your environment."""
try:
return pkg_resources.require(package_name) is not None
except pkg_resources.DistributionNotFound:
return False
_TPU_AVAILABLE = XLADeviceUtils.tpu_device_exists()
_IS_WINDOWS = platform.system() == "Windows"
_SH_AVAILABLE = not _IS_WINDOWS and _package_available("sh")
_DEEPSPEED_AVAILABLE = not _IS_WINDOWS and _package_available("deepspeed")
_FAIRSCALE_AVAILABLE = not _IS_WINDOWS and _package_available("fairscale")
_WANDB_AVAILABLE = _package_available("wandb")
_NEPTUNE_AVAILABLE = _package_available("neptune")
_COMET_AVAILABLE = _package_available("comet_ml")
_MLFLOW_AVAILABLE = _package_available("mlflow")
| 0 |
apollo_public_repos/apollo-model-yolo3d/tests | apollo_public_repos/apollo-model-yolo3d/tests/helpers/run_if.py | """Adapted from:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/helpers/runif.py
"""
import sys
from typing import Optional
import pytest
import torch
from packaging.version import Version
from pkg_resources import get_distribution
from tests.helpers.package_available import (
_COMET_AVAILABLE,
_DEEPSPEED_AVAILABLE,
_FAIRSCALE_AVAILABLE,
_IS_WINDOWS,
_MLFLOW_AVAILABLE,
_NEPTUNE_AVAILABLE,
_SH_AVAILABLE,
_TPU_AVAILABLE,
_WANDB_AVAILABLE,
)
class RunIf:
"""RunIf wrapper for conditional skipping of tests.
Fully compatible with `@pytest.mark`.
Example:
@RunIf(min_torch="1.8")
@pytest.mark.parametrize("arg1", [1.0, 2.0])
def test_wrapper(arg1):
assert arg1 > 0
"""
def __new__(
self,
min_gpus: int = 0,
min_torch: Optional[str] = None,
max_torch: Optional[str] = None,
min_python: Optional[str] = None,
skip_windows: bool = False,
sh: bool = False,
tpu: bool = False,
fairscale: bool = False,
deepspeed: bool = False,
wandb: bool = False,
neptune: bool = False,
comet: bool = False,
mlflow: bool = False,
**kwargs,
):
"""
Args:
min_gpus: min number of GPUs required to run test
min_torch: minimum pytorch version to run test
max_torch: maximum pytorch version to run test
min_python: minimum python version required to run test
skip_windows: skip test for Windows platform
tpu: if TPU is available
sh: if `sh` module is required to run the test
fairscale: if `fairscale` module is required to run the test
deepspeed: if `deepspeed` module is required to run the test
wandb: if `wandb` module is required to run the test
neptune: if `neptune` module is required to run the test
comet: if `comet` module is required to run the test
mlflow: if `mlflow` module is required to run the test
kwargs: native pytest.mark.skipif keyword arguments
"""
conditions = []
reasons = []
if min_gpus:
conditions.append(torch.cuda.device_count() < min_gpus)
reasons.append(f"GPUs>={min_gpus}")
if min_torch:
torch_version = get_distribution("torch").version
conditions.append(Version(torch_version) < Version(min_torch))
reasons.append(f"torch>={min_torch}")
if max_torch:
torch_version = get_distribution("torch").version
conditions.append(Version(torch_version) >= Version(max_torch))
reasons.append(f"torch<{max_torch}")
if min_python:
py_version = (
f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
)
conditions.append(Version(py_version) < Version(min_python))
reasons.append(f"python>={min_python}")
if skip_windows:
conditions.append(_IS_WINDOWS)
reasons.append("does not run on Windows")
if tpu:
conditions.append(not _TPU_AVAILABLE)
reasons.append("TPU")
if sh:
conditions.append(not _SH_AVAILABLE)
reasons.append("sh")
if fairscale:
conditions.append(not _FAIRSCALE_AVAILABLE)
reasons.append("fairscale")
if deepspeed:
conditions.append(not _DEEPSPEED_AVAILABLE)
reasons.append("deepspeed")
if wandb:
conditions.append(not _WANDB_AVAILABLE)
reasons.append("wandb")
if neptune:
conditions.append(not _NEPTUNE_AVAILABLE)
reasons.append("neptune")
if comet:
conditions.append(not _COMET_AVAILABLE)
reasons.append("comet")
if mlflow:
conditions.append(not _MLFLOW_AVAILABLE)
reasons.append("mlflow")
reasons = [rs for cond, rs in zip(conditions, reasons) if cond]
return pytest.mark.skipif(
condition=any(conditions),
reason=f"Requires: [{' + '.join(reasons)}]",
**kwargs,
)
| 0 |
apollo_public_repos/apollo-model-yolo3d/tests | apollo_public_repos/apollo-model-yolo3d/tests/helpers/run_sh_command.py | from typing import List
import pytest
from tests.helpers.package_available import _SH_AVAILABLE
if _SH_AVAILABLE:
import sh
def run_sh_command(command: List[str]):
"""Default method for executing shell commands with pytest and sh package."""
msg = None
try:
sh.python(command)
except sh.ErrorReturnCode as e:
msg = e.stderr.decode()
if msg:
pytest.fail(msg=msg)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/docs/command.md | # Quick Command
## Train Regressor Model
- Train with the default configuration
```bash
python src/train.py
```
- Train with an experiment config
```bash
python src/train.py \
experiment=sample
```
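- Resume training from a checkpoint (a sketch; `ckpt_path` is defined in `configs/train.yaml`, the checkpoint path below is an example)
    ```bash
    python src/train.py \
    ckpt_path="weights/last.ckpt"
    ```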
## Train Detector Model
### YOLOv5
- Multi GPU Training
```bash
cd yolov5
python -m torch.distributed.launch \
--nproc_per_node 4 train.py \
--epochs 10 \
--batch 64 \
--data ../configs/detector/yolov5_kitti.yaml \
--weights yolov5s.pt \
--device 0,1,2,3
```
- Single GPU Training
```bash
cd yolov5
python train.py \
--data ../configs/detector/yolov5_kitti.yaml \
--weights yolov5s.pt \
--img 640
```
## Hyperparameter Tuning with Hydra
```bash
python src/train.py -m \
hparams_search=regressor_optuna \
experiment=sample_optuna
``` | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/docs/index.md | # YOLO3D: 3D Object Detection with YOLO
<div align="center">
<a href="https://www.python.org/"><img alt="Python" src="https://img.shields.io/badge/-Python 3.8+-blue?style=flat&logo=python&logoColor=white"></a>
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/-PyTorch 1.8+-ee4c2c?style=flat&logo=pytorch&logoColor=white"></a>
<a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning 1.5+-792ee5?style=flat&logo=pytorchlightning&logoColor=white"></a>
<a href="https://hydra.cc/"><img alt="Config: hydra" src="https://img.shields.io/badge/config-hydra 1.1-89b8cd?style=flat&labelColor=gray"></a>
<a href="https://black.readthedocs.io/en/stable/"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-black.svg?style=flat&labelColor=gray"></a>
<a href="https://github.com/ashleve/lightning-hydra-template"><img alt="Template" src="https://img.shields.io/badge/-Lightning--Hydra--Template-017F2F?style=flat&logo=github&labelColor=gray"></a><br>
</div>
## ⚠️ Cautions
> This repository is currently under development.
## 📼 Demo
<div align="center">
![demo](./assets/demo.gif)
</div>
## 📌 Introduction
Unofficial implementation of [Mousavian et al.](https://arxiv.org/abs/1612.00496), **3D Bounding Box Estimation Using Deep Learning and Geometry**. YOLO3D takes a slightly different approach: the detector is **YOLOv5** (the original paper used Faster-RCNN) and the regressor is **ResNet18/VGG11** (the original used VGG19).
## 🚀 Quickstart
> We use hydra as the config manager; if you are unfamiliar with hydra, you can visit the official website or see the tutorial on this site.
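For example, most values in the config tree can be overridden directly from the command line (a minimal sketch; the keys follow the structure under `configs/`):
```bash
python src/train.py trainer.max_epochs=10 datamodule.batch_size=32
```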
### 🍿 Inference
You can use a pretrained weight from the [Releases](https://github.com/ruhyadi/yolo3d-lightning/releases) page and download it with the `get_weights.py` script:
```bash
# download pretrained model
python script/get_weights.py \
--tag v0.1 \
--dir ./weights
```
Inference with `inference.py`:
```bash
python inference.py \
source_dir="./data/demo/images" \
detector.model_path="./weights/detector_yolov5s.pt" \
regressor_weights="./weights/regressor_resnet18.pt"
```
### ⚔️ Training
Two models are trained here: the **detector** and the **regressor**. For now, the only supported detector is **YOLOv5**, while the regressor can use any model supported by **Torchvision**.
#### 🧭 Training YOLOv5 Detector
The first step is to convert the KITTI `label_2` format to the YOLO format. You can use `src/kitti_to_yolo.py` as follows:
```bash
cd yolo3d-lightning/src
python kitti_to_yolo.py \
    --dataset_path ../data/KITTI/training/ \
    --classes ["car", "van", "truck", "pedestrian", "cyclist"] \
    --img_width 1224 \
--img_height 370
```
The next step is to follow the [wiki provided by ultralytics](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data); a minimal data YAML is sketched below. **Note:** *this readme will be updated in the future*.
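A minimal YOLOv5 data YAML for the converted dataset might look like this (paths are placeholders; see `configs/detector/yolov5_kitti.yaml` for the config used in this repo):
```yaml
train: ../data/KITTI/training/images/train  # adjust to your split
val: ../data/KITTI/training/images/val
nc: 5
names: ["car", "van", "truck", "pedestrian", "cyclist"]
```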
#### 🪀 Training Regressor
Next, you can train the regressor model. The regressor can be any of the models available in `torchvision`, or you can build a custom one.
The first step is to create the train and validation sets. You can use `script/generate_sets.py` (point `--images_path` to your `images` or `image_2` directory):
```bash
cd yolo3d-lightning/script
python generate_sets.py \
    --images_path ../data/KITTI/training/images \
    --dump_dir ../data/KITTI/training \
    --postfix _80 \
--train_size 0.8
```
In the next step, we will only use models that are available in `torchvision`. The easiest way is to edit the configuration in `configs/model/regressor.yaml`, as shown below:
```yaml
_target_: src.models.regressor.RegressorModel
net:
_target_: src.models.components.base.RegressorNet
backbone:
_target_: torchvision.models.resnet18 # edit this
pretrained: True # maybe this too
bins: 2
lr: 0.001
momentum: 0.9
w: 0.4
alpha: 0.6
```
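For example, to try a VGG11 backbone instead of ResNet18, only the `backbone` target needs to change (a sketch; make sure `RegressorNet` handles the feature size of the chosen backbone):
```yaml
net:
  _target_: src.models.components.base.RegressorNet
  backbone:
    _target_: torchvision.models.vgg11
    pretrained: True
  bins: 2
```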
The next step is to create an experiment configuration in `configs/experiment/your_exp.yaml`. If in doubt, you can refer to [`configs/experiment/demo.yaml`](./configs/experiment/demo.yaml).
Once the experiment configuration is ready, you can simply run `train.py` as follows:
```bash
cd yolo3d-lightning
python train.py \
experiment=demo
```
## ❤️ Acknowledgement
- [YOLOv5 by Ultralytics](https://github.com/ultralytics/yolov5)
- [skhadem/3D-BoundingBox](https://github.com/skhadem/3D-BoundingBox)
- [Mousavian et al.](https://arxiv.org/abs/1612.00496)
```
@misc{mousavian20173d,
title={3D Bounding Box Estimation Using Deep Learning and Geometry},
author={Arsalan Mousavian and Dragomir Anguelov and John Flynn and Jana Kosecka},
year={2017},
eprint={1612.00496},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | 0 |
apollo_public_repos/apollo-model-yolo3d/docs | apollo_public_repos/apollo-model-yolo3d/docs/javascripts/mathjax.js | window.MathJax = {
tex: {
inlineMath: [["\\(", "\\)"]],
displayMath: [["\\[", "\\]"]],
processEscapes: true,
processEnvironments: true
},
options: {
ignoreHtmlClass: ".*|",
processHtmlClass: "arithmatex"
}
};
document$.subscribe(() => {
MathJax.typesetPromise()
}) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/convert.yaml | # @package _global_
# specify here default training configuration
defaults:
- _self_
- model: regressor.yaml
# enable color logging
- override hydra/hydra_logging: colorlog
- override hydra/job_logging: colorlog
# pretty print config at the start of the run using Rich library
print_config: True
# disable python warnings if they annoy you
ignore_warnings: True
# root
root: ${hydra:runtime.cwd}
# TODO: change to your checkpoint file
checkpoint_dir: ${root}/weights/last.ckpt
# dump dir
dump_dir: ${root}/weights
# input sample shape
input_sample:
__target__: torch.randn
size: (1, 3, 224, 224)
# convert to
convert_to: "pytorch" # [pytorch, onnx, tensorrt]
# TODO: model name without extension
name: ${dump_dir}/pytorch-kitti
# convert_to: "onnx" # [pytorch, onnx, tensorrt]
# name: ${dump_dir}/onnx-3d-0817-5
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/train.yaml | # @package _global_
# specify here default configuration
# order of defaults determines the order in which configs override each other
defaults:
- _self_
- datamodule: kitti_datamodule.yaml
- model: regressor.yaml
- callbacks: default.yaml
- logger: null # set logger here or use command line (e.g. `python train.py logger=tensorboard`)
- trainer: dgx.yaml
- paths: default.yaml
- extras: default.yaml
- hydra: default.yaml
# experiment configs allow for version control of specific hyperparameters
# e.g. best hyperparameters for given model and datamodule
- experiment: null
# config for hyperparameter optimization
- hparams_search: null
# optional local config for machine/user specific settings
# it's optional since it doesn't need to exist and is excluded from version control
- optional local: default.yaml
  # debugging config (enable through command line, e.g. `python train.py debug=default`)
- debug: null
# task name, determines output directory path
task_name: "train"
# tags to help you identify your experiments
# you can overwrite this in experiment configs
# overwrite from command line with `python train.py tags="[first_tag, second_tag]"`
# appending lists from command line is currently not supported :(
# https://github.com/facebookresearch/hydra/issues/1547
tags: ["dev"]
# set False to skip model training
train: True
# evaluate on test set, using best model weights achieved during training
# lightning chooses best weights based on the metric specified in checkpoint callback
test: False
# simply provide checkpoint path to resume training
# ckpt_path: weights/last.ckpt
ckpt_path: null
# seed for random number generators in pytorch, numpy and python.random
seed: null
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/inference.yaml | # @package _global_
# specify here default training configuration
defaults:
- _self_
- detector: yolov5.yaml
- model: regressor.yaml
- augmentation: inference_preprocessing.yaml
  # debugging config (enable through command line, e.g. `python train.py debug=default`)
- debug: null
# enable color logging
- override hydra/hydra_logging: colorlog
- override hydra/job_logging: colorlog
# run name
name: inference
# directory
root: ${hydra:runtime.cwd}
output_dir: ${root}/${hydra:run.dir}/inference
# calib_file
calib_file: ${root}/assets/global_calib.txt
# save 2D bounding box
save_det2d: False
# show and save result
save_result: True
# save result in txt
# save_txt: True
# regressor weights
regressor_weights: ${root}/weights/regressor_resnet18.pt
# regressor_weights: ${root}/weights/mobilenetv3-best.pt
# inference type
inference_type: pytorch # [pytorch, onnx, openvino, tensorrt]
# source directory
# source_dir: ${root}/tmp/kitti/
source_dir: ${root}/tmp/video_001
# device to inference
device: 'cpu'
export_onnx: False
func: "label" # image/label
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/evaluate.yaml | # @package _global_
# specify here default training configuration
defaults:
- _self_
- detector: yolov5.yaml
- model: regressor.yaml
- augmentation: inference_preprocessing.yaml
  # debugging config (enable through command line, e.g. `python train.py debug=default`)
- debug: null
# enable color logging
- override hydra/hydra_logging: colorlog
- override hydra/job_logging: colorlog
# run name
name: evaluate
# directory
root: ${hydra:runtime.cwd}
# predictions/output directory
# pred_dir: ${root}/${hydra:run.dir}/${name}
# calib_file
calib_file: ${root}/assets/global_calib.txt
# regressor weights
regressor_weights: ${root}/weights/regressor_resnet18.pt
# validation images directory
val_images_path: ${root}/data/KITTI/images_2
# validation sets directory
val_sets: ${root}/data/KITTI/ImageSets/val.txt
# number of classes to evaluate
classes: 6
# class_to_name = {
# 0: 'Car',
# 1: 'Cyclist',
# 2: 'Truck',
# 3: 'Van',
# 4: 'Pedestrian',
# 5: 'Tram',
# }
# gt label path
gt_dir: ${root}/data/KITTI/label_2
# dt label path
pred_dir: ${root}/data/KITTI/result
# device to inference
device: 'cuda:0' | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/eval.yaml | # @package _global_
defaults:
- _self_
- datamodule: mnist.yaml # choose datamodule with `test_dataloader()` for evaluation
- model: mnist.yaml
- logger: null
- trainer: default.yaml
- paths: default.yaml
- extras: default.yaml
- hydra: default.yaml
task_name: "eval"
tags: ["dev"]
# passing checkpoint path is necessary for evaluation
ckpt_path: ???
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/hparams_search/optuna.yaml | # @package _global_
# example hyperparameter optimization of some experiment with Optuna:
# python train.py -m hparams_search=optuna experiment=example
defaults:
- override /hydra/sweeper: optuna
# choose metric which will be optimized by Optuna
# make sure this is the correct name of some metric logged in lightning module!
optimized_metric: "val/loss"
# here we define Optuna hyperparameter search
# it optimizes for value returned from function with @hydra.main decorator
# docs: https://hydra.cc/docs/next/plugins/optuna_sweeper
hydra:
mode: "MULTIRUN"
sweeper:
_target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
# storage URL to persist optimization results
# for example, you can use SQLite if you set 'sqlite:///example.db'
storage: null
# name of the study to persist optimization results
study_name: null
# number of parallel workers
n_jobs: 2
# 'minimize' or 'maximize' the objective
direction: 'minimize'
# total number of runs that will be executed
n_trials: 10
# choose Optuna hyperparameter sampler
# docs: https://optuna.readthedocs.io/en/stable/reference/samplers.html
sampler:
_target_: optuna.samplers.TPESampler
seed: 42069
n_startup_trials: 10 # number of random sampling runs before optimization starts
# define range of hyperparameters
params:
model.lr: interval(0.0001, 0.001)
datamodule.batch_size: choice(32, 64, 128)
model.optimizer: choice(adam, sgd) | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/hparams_search/mnist_optuna.yaml | # @package _global_
# example hyperparameter optimization of some experiment with Optuna:
# python train.py -m hparams_search=mnist_optuna experiment=example
defaults:
- override /hydra/sweeper: optuna
# choose metric which will be optimized by Optuna
# make sure this is the correct name of some metric logged in lightning module!
optimized_metric: "val/acc_best"
# here we define Optuna hyperparameter search
# it optimizes for value returned from function with @hydra.main decorator
# docs: https://hydra.cc/docs/next/plugins/optuna_sweeper
hydra:
mode: "MULTIRUN" # set hydra to multirun by default if this config is attached
sweeper:
_target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
# storage URL to persist optimization results
# for example, you can use SQLite if you set 'sqlite:///example.db'
storage: null
# name of the study to persist optimization results
study_name: null
# number of parallel workers
n_jobs: 1
# 'minimize' or 'maximize' the objective
direction: maximize
# total number of runs that will be executed
n_trials: 20
# choose Optuna hyperparameter sampler
# you can choose bayesian sampler (tpe), random search (without optimization), grid sampler, and others
# docs: https://optuna.readthedocs.io/en/stable/reference/samplers.html
sampler:
_target_: optuna.samplers.TPESampler
seed: 1234
n_startup_trials: 10 # number of random sampling runs before optimization starts
# define hyperparameter search space
params:
model.optimizer.lr: interval(0.0001, 0.1)
datamodule.batch_size: choice(32, 64, 128, 256)
model.net.lin1_size: choice(64, 128, 256)
model.net.lin2_size: choice(64, 128, 256)
model.net.lin3_size: choice(32, 64, 128, 256)
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/datamodule/kitti_datamodule.yaml | _target_: src.datamodules.kitti_datamodule.KITTIDataModule
dataset_path: ${paths.data_dir} # data_dir is specified in configs/paths/default.yaml
train_sets: ${paths.data_dir}/train_80.txt
val_sets: ${paths.data_dir}/val_80.txt
test_sets: ${paths.data_dir}/test_80.txt
batch_size: 64
num_worker: 32 | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/augmentation/inference_preprocessing.yaml | to_tensor:
_target_: torchvision.transforms.ToTensor
normalize:
_target_: torchvision.transforms.Normalize
mean: [0.406, 0.456, 0.485]
std: [0.225, 0.224, 0.229] | 0 |
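For reference, the two entries above are the usual ImageNet statistics with the channel order reversed (BGR rather than RGB), presumably because the 2D crops are read with OpenCV. Below is a minimal sketch of the pipeline this config describes; `hydra.utils.instantiate` would build the same two transform objects, while the Compose wrapper and the dummy crop are illustrative assumptions, not project code.

import numpy as np
from torchvision import transforms

# Equivalent of instantiating the to_tensor and normalize entries and chaining them.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.406, 0.456, 0.485], std=[0.225, 0.224, 0.229]),
])

dummy_crop = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)  # stand-in for a detected 2D box crop
tensor = preprocess(dummy_crop)
print(tensor.shape)  # torch.Size([3, 224, 224])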
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/comet.yaml | # https://www.comet.ml
comet:
_target_: pytorch_lightning.loggers.comet.CometLogger
api_key: ${oc.env:COMET_API_TOKEN} # api key is loaded from environment variable
save_dir: "${paths.output_dir}"
project_name: "lightning-hydra-template"
rest_api_key: null
# experiment_name: ""
experiment_key: null # set to resume experiment
offline: False
prefix: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/csv.yaml | # csv logger built in lightning
csv:
_target_: pytorch_lightning.loggers.csv_logs.CSVLogger
save_dir: "${paths.output_dir}"
name: "csv/"
prefix: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/tensorboard.yaml | # https://www.tensorflow.org/tensorboard/
tensorboard:
_target_: pytorch_lightning.loggers.tensorboard.TensorBoardLogger
save_dir: "${paths.output_dir}/tensorboard/"
name: null
log_graph: False
default_hp_metric: True
prefix: ""
# version: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/neptune.yaml | # https://neptune.ai
neptune:
_target_: pytorch_lightning.loggers.neptune.NeptuneLogger
api_key: ${oc.env:NEPTUNE_API_TOKEN} # api key is loaded from environment variable
project: username/lightning-hydra-template
# name: ""
log_model_checkpoints: True
prefix: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/wandb.yaml | # https://wandb.ai
wandb:
_target_: pytorch_lightning.loggers.wandb.WandbLogger
# name: "" # name of the run (normally generated by wandb)
save_dir: "${paths.output_dir}"
offline: False
id: null # pass correct id to resume experiment!
anonymous: null # enable anonymous logging
project: "yolo3d-regressor"
log_model: True # upload lightning ckpts
prefix: "" # a string to put at the beginning of metric keys
# entity: "" # set to name of your wandb team
group: ""
tags: []
job_type: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/many_loggers.yaml | # train with many loggers at once
defaults:
# - comet.yaml
- csv.yaml
# - mlflow.yaml
# - neptune.yaml
- tensorboard.yaml
- wandb.yaml
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/mlflow.yaml | # https://mlflow.org
mlflow:
_target_: pytorch_lightning.loggers.mlflow.MLFlowLogger
# experiment_name: ""
# run_name: ""
tracking_uri: ${paths.log_dir}/mlflow/mlruns # run `mlflow ui` command inside the `logs/mlflow/` dir to open the UI
tags: null
# save_dir: "./mlruns"
prefix: ""
artifact_location: null
# run_id: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/rich_progress_bar.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichProgressBar.html
# Create a progress bar with rich text formatting.
# Look at the above link for more detailed information.
rich_progress_bar:
_target_: pytorch_lightning.callbacks.RichProgressBar
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/wandb.yaml | defaults:
- default.yaml
watch_model:
_target_: src.callbacks.wandb_callbacks.WatchModel
log: "all"
log_freq: 100
upload_code_as_artifact:
_target_: src.callbacks.wandb_callbacks.UploadCodeAsArtifact
code_dir: ${original_work_dir}/src
upload_ckpts_as_artifact:
_target_: src.callbacks.wandb_callbacks.UploadCheckpointsAsArtifact
ckpt_dir: "checkpoints/"
upload_best_only: True
# log_f1_precision_recall_heatmap:
# _target_: src.callbacks.wandb_callbacks.LogF1PrecRecHeatmap
# log_confusion_matrix:
# _target_: src.callbacks.wandb_callbacks.LogConfusionMatrix
# log_image_predictions:
# _target_: src.callbacks.wandb_callbacks.LogImagePredictions
# num_samples: 8 | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/early_stopping.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.EarlyStopping.html
# Monitor a metric and stop training when it stops improving.
# Look at the above link for more detailed information.
early_stopping:
_target_: pytorch_lightning.callbacks.EarlyStopping
monitor: ??? # quantity to be monitored, must be specified !!!
min_delta: 0. # minimum change in the monitored quantity to qualify as an improvement
patience: 3 # number of checks with no improvement after which training will be stopped
verbose: False # verbosity mode
mode: "min" # "max" means higher metric value is better, can be also "min"
strict: True # whether to crash the training if monitor is not found in the validation metrics
check_finite: True # when set True, stops training when the monitor becomes NaN or infinite
stopping_threshold: null # stop training immediately once the monitored quantity reaches this threshold
divergence_threshold: null # stop training as soon as the monitored quantity becomes worse than this threshold
check_on_train_epoch_end: null # whether to run early stopping at the end of the training epoch
# log_rank_zero_only: False # this keyword argument isn't available in stable version
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/model_summary.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichModelSummary.html
# Generates a summary of all layers in a LightningModule with rich text formatting.
# Look at the above link for more detailed information.
model_summary:
_target_: pytorch_lightning.callbacks.RichModelSummary
max_depth: 1 # the maximum depth of layer nesting that the summary will include
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/model_checkpoint.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.ModelCheckpoint.html
# Save the model periodically by monitoring a quantity.
# Look at the above link for more detailed information.
model_checkpoint:
_target_: pytorch_lightning.callbacks.ModelCheckpoint
dirpath: null # directory to save the model file
filename: null # checkpoint filename
monitor: null # name of the logged metric which determines when model is improving
verbose: False # verbosity mode
save_last: null # additionally always save an exact copy of the last checkpoint to a file last.ckpt
save_top_k: 1 # save k best models (determined by above metric)
mode: "min" # "max" means higher metric value is better, can be also "min"
auto_insert_metric_name: True # when True, the checkpoints filenames will contain the metric name
save_weights_only: False # if True, then only the model’s weights will be saved
every_n_train_steps: null # number of training steps between checkpoints
train_time_interval: null # checkpoints are monitored at the specified time interval
every_n_epochs: null # number of epochs between checkpoints
save_on_train_epoch_end: null # whether to run checkpointing at the end of the training epoch or the end of validation
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/default.yaml | defaults:
- model_checkpoint.yaml
- early_stopping.yaml
- model_summary.yaml
- rich_progress_bar.yaml
- _self_
# model save config
model_checkpoint:
dirpath: "weights"
filename: "epoch_{epoch:03d}"
monitor: "val/loss"
mode: "min"
save_last: True
save_top_k: 1
auto_insert_metric_name: False
early_stopping:
monitor: "val/loss"
patience: 100
mode: "min"
model_summary:
max_depth: -1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/paths/default.yaml | # path to root directory
# this requires PROJECT_ROOT environment variable to exist
# PROJECT_ROOT is inferred and set by pyrootutils package in `train.py` and `eval.py`
root_dir: ${oc.env:PROJECT_ROOT}
# path to data directory
data_dir: ${paths.root_dir}/data/KITTI
# path to logging directory
log_dir: ${paths.root_dir}/logs/
# path to output directory, created dynamically by hydra
# path generation pattern is specified in `configs/hydra/default.yaml`
# use it to store all files generated during the run, like ckpts and metrics
output_dir: ${hydra:runtime.output_dir}
# path to working directory
work_dir: ${hydra:runtime.cwd}
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/detector/yolov5.yaml | _target_: inference.detector_yolov5
model_path: ${root}/weights/detector_yolov5s.pt
cfg_path: ${root}/yolov5/models/yolov5s.yaml
classes: 5
device: 'cpu' | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/detector/yolov5_kitti.yaml | # KITTI to YOLO
path: ../data/KITTI/ # dataset root dir
train: train_yolo.txt # train images (relative to 'path') 3712 images
val: val_yolo.txt # val images (relative to 'path') 3768 images
# Classes
nc: 5 # number of classes
names: ['car', 'van', 'truck', 'pedestrian', 'cyclist'] | 0 |
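The 'path', 'train' and 'val' fields above are resolved relative to wherever YOLOv5 is launched, and every image listed in train_yolo.txt / val_yolo.txt is expected to have a matching label file produced by scripts/kitti_to_yolo.py. The sketch below is a hypothetical consistency check for that layout (the root path and the ./images/xxxxxx.png line format are assumptions based on scripts/generate_sets.py); it is not part of the repository.

import os

root = "data/KITTI"  # assumed local equivalent of the 'path' field
for split in ("train_yolo.txt", "val_yolo.txt"):
    split_file = os.path.join(root, split)
    if not os.path.isfile(split_file):
        print(f"missing split file: {split_file}")
        continue
    with open(split_file) as f:
        ids = [line.strip() for line in f if line.strip()]
    # each ./images/xxxxxx.png entry should have a labels/xxxxxx.txt counterpart
    missing = [i for i in ids
               if not os.path.isfile(os.path.join(root, "labels", os.path.basename(i).replace(".png", ".txt")))]
    print(f"{split}: {len(ids)} images, {len(missing)} without labels")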
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/model/regressor.yaml | _target_: src.models.regressor.RegressorModel
net:
_target_: src.models.components.base.RegressorNet
backbone:
    _target_: torchvision.models.resnet18 # change the backbone model here
pretrained: True
bins: 2
optimizer: adam
lr: 0.0001
momentum: 0.9
w: 0.8
alpha: 0.2 | 0 |
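Hydra instantiates this config recursively: the backbone node becomes a torchvision ResNet-18, which is passed to RegressorNet, which in turn is passed to RegressorModel along with the optimizer hyperparameters. The manual equivalent below is only a sketch — the import paths come from the _target_ fields and the keyword names from this file, but the actual constructor signatures are not verified here.

import torchvision
from src.models.components.base import RegressorNet  # _target_ of model.net
from src.models.regressor import RegressorModel      # _target_ of model

# roughly what hydra.utils.instantiate(cfg.model) does under the hood
backbone = torchvision.models.resnet18(pretrained=True)
net = RegressorNet(backbone=backbone, bins=2)
model = RegressorModel(net=net, optimizer="adam", lr=0.0001, momentum=0.9, w=0.8, alpha=0.2)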
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/experiment/sample.yaml | # @package _global_
# to execute this experiment run:
# python train.py experiment=sample
defaults:
- override /datamodule: kitti_datamodule.yaml
- override /model: regressor.yaml
- override /callbacks: default.yaml
- override /logger: wandb.yaml
- override /trainer: dgx.yaml
# all parameters below will be merged with parameters from default configurations set above
# this allows you to overwrite only specified parameters
seed: 42069
# name of the run determines folder name in logs
name: "new_network"
datamodule:
train_sets: ${paths.data_dir}/ImageSets/train.txt
val_sets: ${paths.data_dir}/ImageSets/val.txt
test_sets: ${paths.data_dir}/ImageSets/test.txt
trainer:
min_epochs: 1
max_epochs: 200
# limit_train_batches: 1.0
# limit_val_batches: 1.0
gpus: [0]
strategy: ddp | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/ddp.yaml | defaults:
- default.yaml
# use "ddp_spawn" instead of "ddp",
# it's slower but normal "ddp" currently doesn't work ideally with hydra
# https://github.com/facebookresearch/hydra/issues/2070
# https://pytorch-lightning.readthedocs.io/en/latest/accelerators/gpu_intermediate.html#distributed-data-parallel-spawn
strategy: ddp_spawn
accelerator: gpu
devices: 4
num_nodes: 1
sync_batchnorm: True
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/kaggle.yaml | _target_: pytorch_lightning.Trainer
gpus: 0
min_epochs: 1
max_epochs: 10
# number of validation steps to execute at the beginning of the training
# num_sanity_val_steps: 0
# ckpt path
resume_from_checkpoint: null
# disable progress_bar
enable_progress_bar: False | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/ddp_sim.yaml | defaults:
- default.yaml
# simulate DDP on CPU, useful for debugging
accelerator: cpu
devices: 2
strategy: ddp_spawn
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/cpu.yaml | defaults:
- default.yaml
accelerator: cpu
devices: 1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/dgx.yaml | defaults:
- default.yaml
# strategy: ddp
accelerator: gpu
devices: [0]
num_nodes: 1
sync_batchnorm: True | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/gpu.yaml | defaults:
- default.yaml
accelerator: gpu
devices: 1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/mps.yaml | defaults:
- default.yaml
accelerator: mps
devices: 1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/default.yaml | _target_: pytorch_lightning.Trainer
default_root_dir: ${paths.output_dir}
min_epochs: 1 # prevents early stopping
max_epochs: 25
accelerator: cpu
devices: 1
# mixed precision for extra speed-up
# precision: 16
# set True to to ensure deterministic results
# makes training slower but gives more reproducibility than just setting seeds
deterministic: False
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/hydra/default.yaml | # https://hydra.cc/docs/configure_hydra/intro/
# enable color logging
defaults:
- override hydra_logging: colorlog
- override job_logging: colorlog
# output directory, generated dynamically on each run
run:
dir: ${paths.log_dir}/${task_name}/runs/${now:%Y-%m-%d}_${now:%H-%M-%S}
sweep:
dir: ${paths.log_dir}/${task_name}/multiruns/${now:%Y-%m-%d}_${now:%H-%M-%S}
subdir: ${hydra.job.num}
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/profiler.yaml | # @package _global_
# runs with execution time profiling
defaults:
- default.yaml
trainer:
max_epochs: 1
profiler: "simple"
# profiler: "advanced"
# profiler: "pytorch"
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/overfit.yaml | # @package _global_
# overfits to 3 batches
defaults:
- default.yaml
trainer:
max_epochs: 20
overfit_batches: 3
# model ckpt and early stopping need to be disabled during overfitting
callbacks: null
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/limit.yaml | # @package _global_
# uses only 1% of the training data and 5% of validation/test data
defaults:
- default.yaml
trainer:
max_epochs: 3
limit_train_batches: 0.01
limit_val_batches: 0.05
limit_test_batches: 0.05
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/fdr.yaml | # @package _global_
# runs 1 train, 1 validation and 1 test step
defaults:
- default.yaml
trainer:
fast_dev_run: true
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/default.yaml | # @package _global_
# default debugging setup, runs 1 full epoch
# other debugging configs can inherit from this one
# overwrite task name so debugging logs are stored in separate folder
task_name: "debug"
# disable callbacks and loggers during debugging
callbacks: null
logger: null
extras:
ignore_warnings: False
enforce_tags: False
# sets level of all command line loggers to 'DEBUG'
# https://hydra.cc/docs/tutorials/basic/running_your_app/logging/
hydra:
job_logging:
root:
level: DEBUG
# use this to also set hydra loggers to 'DEBUG'
# verbose: True
trainer:
max_epochs: 1
accelerator: cpu # debuggers don't like gpus
devices: 1 # debuggers don't like multiprocessing
detect_anomaly: true # raise exception if NaN or +/-inf is detected in any tensor
datamodule:
num_workers: 0 # debuggers don't like multiprocessing
pin_memory: False # disable gpu memory pin
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/extras/default.yaml | # disable python warnings if they annoy you
ignore_warnings: False
# ask user for tags if none are provided in the config
enforce_tags: True
# pretty print config tree at the start of the run using Rich library
print_config: True
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/video_to_gif.py | """Convert video to gif with moviepy"""
import argparse
import moviepy.editor as mpy
def generate(video_path, gif_path, fps):
"""Generate gif from video"""
clip = mpy.VideoFileClip(video_path)
clip.write_gif(gif_path, fps=fps)
clip.close()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Convert video to gif")
parser.add_argument("--video_path", type=str, default="outputs/videos/004.mp4", help="Path to video")
parser.add_argument("--gif_path", type=str, default="outputs/gif/002.gif", help="Path to gif")
parser.add_argument("--fps", type=int, default=5, help="GIF fps")
args = parser.parse_args()
# generate gif
generate(args.video_path, args.gif_path, args.fps) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/schedule.sh | #!/bin/bash
# Schedule execution of many runs
# Run from root folder with: bash scripts/schedule.sh
python src/train.py trainer.max_epochs=5 logger=csv
python src/train.py trainer.max_epochs=10 logger=csv
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/frames_to_video.py | """
Generate a video from image frames.
Usage:
python scripts/frames_to_video.py \
--imgs_path /path/to/imgs \
--vid_path /path/to/vid \
--fps 24 \
--frame_size 1242 375 \
--resize
python scripts/frames_to_video.py \
--imgs_path outputs/2023-05-13/22-51-34/inference \
--vid_path tmp/output_videos/001.mp4 \
--fps 3 \
--frame_size 1550 387 \
--resize
"""
import argparse
import cv2
from glob import glob
import os
from tqdm import tqdm
def generate(imgs_path, vid_path, fps=30, frame_size=(1242, 375), resize=True):
"""Generate frames to vid"""
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
vid_writer = cv2.VideoWriter(vid_path, fourcc, fps, frame_size)
imgs_glob = sorted(glob(os.path.join(imgs_path, "*.png")))
if resize:
for img_path in tqdm(imgs_glob):
img = cv2.imread(img_path)
img = cv2.resize(img, frame_size)
vid_writer.write(img)
else:
for img_path in imgs_glob:
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
vid_writer.write(img)
vid_writer.release()
print('[INFO] Video saved to {}'.format(vid_path))
if __name__ == "__main__":
# create argparser
parser = argparse.ArgumentParser(description="Generate frames to vid")
parser.add_argument("--imgs_path", type=str, default="outputs/2022-10-23/21-03-50/inference", help="path to imgs")
parser.add_argument("--vid_path", type=str, default="outputs/videos/004.mp4", help="path to vid")
parser.add_argument("--fps", type=int, default=24, help="fps")
parser.add_argument("--frame_size", type=int, nargs=2, default=(int(1242), int(375)), help="frame size")
parser.add_argument("--resize", action="store_true", help="resize")
args = parser.parse_args()
# generate vid
generate(args.imgs_path, args.vid_path, args.fps, args.frame_size) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/get_weights.py | """Download pretrained weights from github release"""
from pprint import pprint
import requests
import os
import shutil
import argparse
from zipfile import ZipFile
def get_assets(tag):
"""Get release assets by tag name"""
url = 'https://api.github.com/repos/ruhyadi/yolo3d-lightning/releases/tags/' + tag
response = requests.get(url)
return response.json()['assets']
def download_assets(assets, dir):
"""Download assets to dir"""
for asset in assets:
url = asset['browser_download_url']
filename = asset['name']
print('[INFO] Downloading {}'.format(filename))
response = requests.get(url, stream=True)
with open(os.path.join(dir, filename), 'wb') as f:
shutil.copyfileobj(response.raw, f)
del response
with ZipFile(os.path.join(dir, filename), 'r') as zip_file:
zip_file.extractall(dir)
os.remove(os.path.join(dir, filename))
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Download pretrained weights')
parser.add_argument('--tag', type=str, default='v0.1', help='tag name')
parser.add_argument('--dir', type=str, default='./', help='directory to save weights')
args = parser.parse_args()
assets = get_assets(args.tag)
download_assets(assets, args.dir)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/post_weights.py | """Upload weights to github release"""
from pprint import pprint
import requests
import os
import dotenv
import argparse
from zipfile import ZipFile
dotenv.load_dotenv()
def create_release(tag, name, description, target="main"):
"""Create release"""
token = os.environ.get("GITHUB_TOKEN")
headers = {
"Accept": "application/vnd.github.v3+json",
"Authorization": f"token {token}",
"Content-Type": "application/zip"
}
url = "https://api.github.com/repos/ruhyadi/yolo3d-lightning/releases"
payload = {
"tag_name": tag,
"target_commitish": target,
"name": name,
"body": description,
"draft": True,
"prerelease": False,
"generate_release_notes": True,
}
print("[INFO] Creating release {}".format(tag))
response = requests.post(url, json=payload, headers=headers)
print("[INFO] Release created id: {}".format(response.json()["id"]))
return response.json()
def post_assets(assets, release_id):
"""Post assets to release"""
token = os.environ.get("GITHUB_TOKEN")
headers = {
"Accept": "application/vnd.github.v3+json",
"Authorization": f"token {token}",
"Content-Type": "application/zip"
}
for asset in assets:
asset_path = os.path.join(os.getcwd(), asset)
with ZipFile(f"{asset_path}.zip", "w") as zip_file:
zip_file.write(asset)
asset_path = f"{asset_path}.zip"
filename = asset_path.split("/")[-1]
url = (
"https://uploads.github.com/repos/ruhyadi/yolo3d-lightning/releases/"
+ str(release_id)
+ f"/assets?name={filename}"
)
print("[INFO] Uploading {}".format(filename))
response = requests.post(url, files={"name": open(asset_path, "rb")}, headers=headers)
pprint(response.json())
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Upload weights to github release")
parser.add_argument("--tag", type=str, default="v0.6", help="tag name")
parser.add_argument("--name", type=str, default="Release v0.6", help="release name")
parser.add_argument("--description", type=str, default="v0.6", help="release description")
parser.add_argument("--assets", type=tuple, default=["weights/mobilenetv3-best.pt", "weights/mobilenetv3-last.pt", "logs/train/runs/2022-09-28_10-36-08/checkpoints/epoch_007.ckpt", "logs/train/runs/2022-09-28_10-36-08/checkpoints/last.ckpt"], help="directory to save weights",)
args = parser.parse_args()
release_id = create_release(args.tag, args.name, args.description)["id"]
post_assets(args.assets, release_id)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/kitti_to_yolo.py | """
Convert KITTI format to YOLO format.
"""
import os
import numpy as np
from glob import glob
from tqdm import tqdm
import argparse
from typing import Tuple
class KITTI2YOLO:
def __init__(
self,
dataset_path: str = "../data/KITTI",
classes: Tuple = ["car", "van", "truck", "pedestrian", "cyclist"],
img_width: int = 1224,
img_height: int = 370,
):
self.dataset_path = dataset_path
self.img_width = img_width
self.img_height = img_height
self.classes = classes
self.ids = {self.classes[i]: i for i in range(len(self.classes))}
# create new directory
self.label_path = os.path.join(self.dataset_path, "labels")
if not os.path.isdir(self.label_path):
os.makedirs(self.label_path)
else:
print("[INFO] Directory already exist...")
def convert(self):
files = glob(os.path.join(self.dataset_path, "label_2", "*.txt"))
for file in tqdm(files):
with open(file, "r") as f:
filename = os.path.join(self.label_path, file.split("/")[-1])
dump_txt = open(filename, "w")
for line in f:
parse_line = self.parse_line(line)
if parse_line["name"].lower() not in self.classes:
continue
xmin, ymin, xmax, ymax = parse_line["bbox_camera"]
xcenter = ((xmax - xmin) / 2 + xmin) / self.img_width
if xcenter > 1.0:
xcenter = 1.0
ycenter = ((ymax - ymin) / 2 + ymin) / self.img_height
if ycenter > 1.0:
ycenter = 1.0
width = (xmax - xmin) / self.img_width
if width > 1.0:
width = 1.0
height = (ymax - ymin) / self.img_height
if height > 1.0:
height = 1.0
bbox_yolo = f"{self.ids[parse_line['name'].lower()]} {xcenter:.3f} {ycenter:.3f} {width:.3f} {height:.3f}"
dump_txt.write(bbox_yolo + "\n")
dump_txt.close()
def parse_line(self, line):
parts = line.split(" ")
output = {
"name": parts[0].strip(),
"xyz_camera": (float(parts[11]), float(parts[12]), float(parts[13])),
"wlh": (float(parts[9]), float(parts[10]), float(parts[8])),
"yaw_camera": float(parts[14]),
"bbox_camera": (
float(parts[4]),
float(parts[5]),
float(parts[6]),
float(parts[7]),
),
"truncation": float(parts[1]),
"occlusion": float(parts[2]),
"alpha": float(parts[3]),
}
# Add score if specified
if len(parts) > 15:
output["score"] = float(parts[15])
else:
output["score"] = np.nan
return output
if __name__ == "__main__":
# argparser
    parser = argparse.ArgumentParser(description="KITTI to YOLO Conversion")
parser.add_argument("--dataset_path", type=str, default="../data/KITTI")
    parser.add_argument(
        "--classes",
        type=str,
        nargs="+",
        default=["car", "van", "truck", "pedestrian", "cyclist"],
    )
parser.add_argument("--img_width", type=int, default=1224)
parser.add_argument("--img_height", type=int, default=370)
args = parser.parse_args()
    kitti2yolo = KITTI2YOLO(
dataset_path=args.dataset_path,
classes=args.classes,
img_width=args.img_width,
img_height=args.img_height,
)
    kitti2yolo.convert()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/generate_sets.py | """Create training and validation sets"""
from glob import glob
import os
import argparse
def generate_sets(
images_path: str,
dump_dir: str,
postfix: str = "",
train_size: float = 0.8,
is_yolo: bool = False,
):
images = glob(os.path.join(images_path, "*.png"))
ids = [id_.split("/")[-1].split(".")[0] for id_ in images]
train_sets = sorted(ids[: int(len(ids) * train_size)])
val_sets = sorted(ids[int(len(ids) * train_size) :])
for name, sets in zip(["train", "val"], [train_sets, val_sets]):
name = os.path.join(dump_dir, f"{name}{postfix}.txt")
with open(name, "w") as f:
for id in sets:
if is_yolo:
f.write(f"./images/{id}.png\n")
else:
f.write(f"{id}\n")
print(f"[INFO] Training set: {len(train_sets)}")
print(f"[INFO] Validation set: {len(val_sets)}")
print(f"[INFO] Total: {len(train_sets) + len(val_sets)}")
print(f"[INFO] Success Generate Sets")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create training and validation sets")
parser.add_argument("--images_path", type=str, default="./data/KITTI/images")
parser.add_argument("--dump_dir", type=str, default="./data/KITTI")
parser.add_argument("--postfix", type=str, default="_95")
parser.add_argument("--train_size", type=float, default=0.95)
parser.add_argument("--is_yolo", action="store_true")
args = parser.parse_args()
generate_sets(
images_path=args.images_path,
dump_dir=args.dump_dir,
postfix=args.postfix,
train_size=args.train_size,
        is_yolo=args.is_yolo,
)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/video_to_frame.py | """
Convert video to frame
Usage:
python video_to_frame.py \
--video_path /path/to/video \
--output_path /path/to/output/folder \
--fps 24
python scripts/video_to_frame.py \
--video_path tmp/video/20230513_100429.mp4 \
--output_path tmp/video_001 \
--fps 20
"""
import argparse
import os
import cv2
def video_to_frame(video_path: str, output_path: str, fps: int = 5):
"""
Convert video to frame
Args:
video_path: path to video
output_path: path to output folder
        fps: keep every `fps`-th frame (a sampling stride, not a true frames-per-second value)
"""
if not os.path.exists(output_path):
os.makedirs(output_path)
cap = cv2.VideoCapture(video_path)
frame_count = 0
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break
if frame_count % fps == 0:
cv2.imwrite(os.path.join(output_path, f"{frame_count:06d}.jpg"), frame)
frame_count += 1
cap.release()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--video_path", type=str, required=True)
parser.add_argument("--output_path", type=str, required=True)
parser.add_argument("--fps", type=int, default=30)
args = parser.parse_args()
video_to_frame(args.video_path, args.output_path, args.fps) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/data/datasplit.py | #!/usr/bin/env python
# Copyright (c) Baidu apollo, Inc.
# All Rights Reserved
import os
import random
# TODO: change this to your own data path
pnglabelfilepath = r'./KITTI/label_2'
savePath = r"./KITTI/ImageSets/"
target_png = os.listdir(pnglabelfilepath)
total_png = []
for t in target_png:
if t.endswith(".txt"):
id = str(int(t.split('.')[0])).zfill(6)
total_png.append(id + '.png')
print("--- iter for image finished ---")
# TODO: change this ratio to your own
train_percent = 0.85
val_percent = 0.1
test_percent = 0.05
num = len(total_png)
# train = random.sample(num,0.9*num)
indices = list(range(num))
num_train = int(num * train_percent)
num_val = int(num * val_percent)
train = random.sample(indices, num_train)
num1 = len(train)
for i in range(num1):
    indices.remove(train[i])
val_test = [i for i in indices if i not in train]
val = random.sample(val_test, num_val)
num2 = len(val)
for i in range(num2):
    indices.remove(val[i])
def mkdir(path):
folder = os.path.exists(path)
if not folder:
os.makedirs(path)
print("--- creating new folder... ---")
print("--- finished ---")
else:
print("--- pass to create new folder ---")
mkdir(savePath)
ftrain = open(os.path.join(savePath, 'train.txt'), 'w')
fval = open(os.path.join(savePath, 'val.txt'), 'w')
ftest = open(os.path.join(savePath, 'test.txt'), 'w')
for i in train:
name = total_png[i][:-4]+ '\n'
ftrain.write(name)
for i in val:
name = total_png[i][:-4] + '\n'
fval.write(name)
for i in indices:
name = total_png[i][:-4] + '\n'
ftest.write(name)
ftrain.close()
fval.close()
ftest.close()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/assets/global_calib.txt | # KITTI
P_rect_02: 7.188560e+02 0.000000e+00 6.071928e+02 4.538225e+01 0.000000e+00 7.188560e+02 1.852157e+02 -1.130887e-01 0.000000e+00 0.000000e+00 1.000000e+00 3.779761e-03
calib_time: 09-Jan-2012 14:00:15
corner_dist: 9.950000e-02
S_00: 1.392000e+03 5.120000e+02
K_00: 9.799200e+02 0.000000e+00 6.900000e+02 0.000000e+00 9.741183e+02 2.486443e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_00: -3.745594e-01 2.049385e-01 1.110145e-03 1.379375e-03 -7.084798e-02
R_00: 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00
T_00: -9.251859e-17 8.326673e-17 -7.401487e-17
S_rect_00: 1.241000e+03 3.760000e+02
R_rect_00: 9.999454e-01 7.259129e-03 -7.519551e-03 -7.292213e-03 9.999638e-01 -4.381729e-03 7.487471e-03 4.436324e-03 9.999621e-01
P_rect_00: 7.188560e+02 0.000000e+00 6.071928e+02 0.000000e+00 0.000000e+00 7.188560e+02 1.852157e+02 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00
S_01: 1.392000e+03 5.120000e+02
K_01: 9.903522e+02 0.000000e+00 7.020000e+02 0.000000e+00 9.855674e+02 2.607319e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_01: -3.712084e-01 1.978723e-01 -3.709831e-05 -3.440494e-04 -6.724045e-02
R_01: 9.993440e-01 1.814887e-02 -3.134011e-02 -1.842595e-02 9.997935e-01 -8.575221e-03 3.117801e-02 9.147067e-03 9.994720e-01
T_01: -5.370000e-01 5.964270e-03 -1.274584e-02
S_rect_01: 1.241000e+03 3.760000e+02
R_rect_01: 9.996568e-01 -1.110284e-02 2.372712e-02 1.099810e-02 9.999292e-01 4.539964e-03 -2.377585e-02 -4.277453e-03 9.997082e-01
P_rect_01: 7.188560e+02 0.000000e+00 6.071928e+02 -3.861448e+02 0.000000e+00 7.188560e+02 1.852157e+02 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00
S_02: 1.392000e+03 5.120000e+02
K_02: 9.601149e+02 0.000000e+00 6.947923e+02 0.000000e+00 9.548911e+02 2.403547e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_02: -3.685917e-01 1.928022e-01 4.069233e-04 7.247536e-04 -6.276909e-02
R_02: 9.999788e-01 -5.008404e-03 -4.151018e-03 4.990516e-03 9.999783e-01 -4.308488e-03 4.172506e-03 4.287682e-03 9.999821e-01
T_02: 5.954406e-02 -7.675338e-04 3.582565e-03
S_rect_02: 1.241000e+03 3.760000e+02
R_rect_02: 9.999191e-01 1.228161e-02 -3.316013e-03 -1.228209e-02 9.999246e-01 -1.245511e-04 3.314233e-03 1.652686e-04 9.999945e-01
S_03: 1.392000e+03 5.120000e+02
K_03: 9.049931e+02 0.000000e+00 6.957698e+02 0.000000e+00 9.004945e+02 2.389820e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_03: -3.735725e-01 2.066816e-01 -6.133284e-04 -1.193269e-04 -7.600861e-02
R_03: 9.995578e-01 1.656369e-02 -2.469315e-02 -1.663353e-02 9.998582e-01 -2.625576e-03 2.464616e-02 3.035149e-03 9.996916e-01
T_03: -4.738786e-01 5.991982e-03 -3.215069e-03
S_rect_03: 1.241000e+03 3.760000e+02
R_rect_03: 9.998092e-01 -9.354781e-03 1.714961e-02 9.382303e-03 9.999548e-01 -1.525064e-03 -1.713457e-02 1.685675e-03 9.998518e-01
P_rect_03: 7.188560e+02 0.000000e+00 6.071928e+02 -3.372877e+02 0.000000e+00 7.188560e+02 1.852157e+02 2.369057e+00 0.000000e+00 0.000000e+00 1.000000e+00 4.915215e-03
| 0 |
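This follows the standard KITTI camera calibration layout; the P_rect_02 line added at the top is the 3x4 projection matrix of the left color camera, and this file is what calib_file in configs/evaluate.yaml points to. A hedged parsing sketch is below — the helper name and the hard-coded path are illustrative, not code that exists in the repository.

import numpy as np

def load_projection(calib_file: str, key: str = "P_rect_02") -> np.ndarray:
    """Read one calibration key as a 3x4 projection matrix."""
    with open(calib_file) as f:
        for line in f:
            if line.startswith(key + ":"):
                values = [float(x) for x in line.split(":", 1)[1].split()]
                return np.array(values, dtype=np.float64).reshape(3, 4)
    raise KeyError(f"{key} not found in {calib_file}")

P2 = load_projection("assets/global_calib.txt")
print(P2.shape)  # (3, 4)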
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/src/train.py | import pyrootutils
root = pyrootutils.setup_root(
search_from=__file__,
indicator=[".git", "pyproject.toml"],
pythonpath=True,
dotenv=True,
)
# ------------------------------------------------------------------------------------ #
# `pyrootutils.setup_root(...)` is recommended at the top of each start file
# to make the environment more robust and consistent
#
# the line above searches for ".git" or "pyproject.toml" in present and parent dirs
# to determine the project root dir
#
# adds root dir to the PYTHONPATH (if `pythonpath=True`)
# so this file can be run from any place without installing project as a package
#
# sets PROJECT_ROOT environment variable which is used in "configs/paths/default.yaml"
# this makes all paths relative to the project root
#
# additionally loads environment variables from ".env" file (if `dotenv=True`)
#
# you can get away without using `pyrootutils.setup_root(...)` if you:
# 1. move this file to the project root dir or install project as a package
# 2. modify paths in "configs/paths/default.yaml" to not use PROJECT_ROOT
# 3. always run this file from the project root dir
#
# https://github.com/ashleve/pyrootutils
# ------------------------------------------------------------------------------------ #
from typing import List, Optional, Tuple
import hydra
import pytorch_lightning as pl
from omegaconf import DictConfig
from pytorch_lightning import Callback, LightningDataModule, LightningModule, Trainer
from pytorch_lightning.loggers import LightningLoggerBase
from src import utils
log = utils.get_pylogger(__name__)
@utils.task_wrapper
def train(cfg: DictConfig) -> Tuple[dict, dict]:
"""Trains the model. Can additionally evaluate on a testset, using best weights obtained during
training.
This method is wrapped in optional @task_wrapper decorator which applies extra utilities
before and after the call.
Args:
cfg (DictConfig): Configuration composed by Hydra.
Returns:
Tuple[dict, dict]: Dict with metrics and dict with all instantiated objects.
"""
# set seed for random number generators in pytorch, numpy and python.random
if cfg.get("seed"):
pl.seed_everything(cfg.seed, workers=True)
log.info(f"Instantiating datamodule <{cfg.datamodule._target_}>")
datamodule: LightningDataModule = hydra.utils.instantiate(cfg.datamodule)
log.info(f"Instantiating model <{cfg.model._target_}>")
model: LightningModule = hydra.utils.instantiate(cfg.model)
log.info("Instantiating callbacks...")
callbacks: List[Callback] = utils.instantiate_callbacks(cfg.get("callbacks"))
log.info("Instantiating loggers...")
logger: List[LightningLoggerBase] = utils.instantiate_loggers(cfg.get("logger"))
log.info(f"Instantiating trainer <{cfg.trainer._target_}>")
trainer: Trainer = hydra.utils.instantiate(cfg.trainer, callbacks=callbacks, logger=logger)
object_dict = {
"cfg": cfg,
"datamodule": datamodule,
"model": model,
"callbacks": callbacks,
"logger": logger,
"trainer": trainer,
}
if logger:
log.info("Logging hyperparameters!")
utils.log_hyperparameters(object_dict)
# train
if cfg.get("train"):
log.info("Starting training!")
trainer.fit(model=model, datamodule=datamodule, ckpt_path=cfg.get("ckpt_path"))
train_metrics = trainer.callback_metrics
if cfg.get("test"):
log.info("Starting testing!")
ckpt_path = trainer.checkpoint_callback.best_model_path
if ckpt_path == "":
log.warning("Best ckpt not found! Using current weights for testing...")
ckpt_path = None
trainer.test(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
log.info(f"Best ckpt path: {ckpt_path}")
test_metrics = trainer.callback_metrics
# merge train and test metrics
metric_dict = {**train_metrics, **test_metrics}
return metric_dict, object_dict
@hydra.main(version_base="1.2", config_path=root / "configs", config_name="train.yaml")
def main(cfg: DictConfig) -> Optional[float]:
# train the model
metric_dict, _ = train(cfg)
# safely retrieve metric value for hydra-based hyperparameter optimization
metric_value = utils.get_metric_value(
metric_dict=metric_dict, metric_name=cfg.get("optimized_metric")
)
# return optimized metric
return metric_value
if __name__ == "__main__":
main()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/src/eval.py | import pyrootutils
root = pyrootutils.setup_root(
search_from=__file__,
indicator=[".git", "pyproject.toml"],
pythonpath=True,
dotenv=True,
)
# ------------------------------------------------------------------------------------ #
# `pyrootutils.setup_root(...)` is recommended at the top of each start file
# to make the environment more robust and consistent
#
# the line above searches for ".git" or "pyproject.toml" in present and parent dirs
# to determine the project root dir
#
# adds root dir to the PYTHONPATH (if `pythonpath=True`)
# so this file can be run from any place without installing project as a package
#
# sets PROJECT_ROOT environment variable which is used in "configs/paths/default.yaml"
# this makes all paths relative to the project root
#
# additionally loads environment variables from ".env" file (if `dotenv=True`)
#
# you can get away without using `pyrootutils.setup_root(...)` if you:
# 1. move this file to the project root dir or install project as a package
# 2. modify paths in "configs/paths/default.yaml" to not use PROJECT_ROOT
# 3. always run this file from the project root dir
#
# https://github.com/ashleve/pyrootutils
# ------------------------------------------------------------------------------------ #
from typing import List, Tuple
import hydra
from omegaconf import DictConfig
from pytorch_lightning import LightningDataModule, LightningModule, Trainer
from pytorch_lightning.loggers import LightningLoggerBase
from src import utils
log = utils.get_pylogger(__name__)
@utils.task_wrapper
def evaluate(cfg: DictConfig) -> Tuple[dict, dict]:
"""Evaluates given checkpoint on a datamodule testset.
This method is wrapped in optional @task_wrapper decorator which applies extra utilities
before and after the call.
Args:
cfg (DictConfig): Configuration composed by Hydra.
Returns:
Tuple[dict, dict]: Dict with metrics and dict with all instantiated objects.
"""
assert cfg.ckpt_path
log.info(f"Instantiating datamodule <{cfg.datamodule._target_}>")
datamodule: LightningDataModule = hydra.utils.instantiate(cfg.datamodule)
log.info(f"Instantiating model <{cfg.model._target_}>")
model: LightningModule = hydra.utils.instantiate(cfg.model)
log.info("Instantiating loggers...")
logger: List[LightningLoggerBase] = utils.instantiate_loggers(cfg.get("logger"))
log.info(f"Instantiating trainer <{cfg.trainer._target_}>")
trainer: Trainer = hydra.utils.instantiate(cfg.trainer, logger=logger)
object_dict = {
"cfg": cfg,
"datamodule": datamodule,
"model": model,
"logger": logger,
"trainer": trainer,
}
if logger:
log.info("Logging hyperparameters!")
utils.log_hyperparameters(object_dict)
log.info("Starting testing!")
trainer.test(model=model, datamodule=datamodule, ckpt_path=cfg.ckpt_path)
# for predictions use trainer.predict(...)
# predictions = trainer.predict(model=model, dataloaders=dataloaders, ckpt_path=cfg.ckpt_path)
metric_dict = trainer.callback_metrics
return metric_dict, object_dict
@hydra.main(version_base="1.2", config_path=root / "configs", config_name="eval.yaml")
def main(cfg: DictConfig) -> None:
evaluate(cfg)
if __name__ == "__main__":
main()
| 0 |