# Pose Estimation

## Dataset and Code
* [Download the dataset and code here](https://forms.office.com/r/WCtC0FRWpA)

## Structure of the Pose Estimation Dataset
* Annotations follow the MPII format and are stored as `.json` files.
* Annotation fields (an example entry is shown after the keypoint list below):
    * `image`: Path to the image
    * `animal`: Name of the animal
    * `animal_parent_class`: Parent class of the animal (e.g., Amphibian)
    * `animal_class`: Class of the animal (e.g., Amphibian)
    * `animal_subclass`: Subclass of the animal (e.g., Frog / Toad)
    * `joints_vis`: Visibility of each joint (1 means visible, 0 means not visible)
    * `joints`: Coordinates of the joints. All images have a resolution of 640×360 px (width × height); invisible joints have coordinates [-1, -1]
    * `scale`: Scale of the bounding box with respect to 200 px
    * `center`: Coordinates of the centre point of the bounding box

* There are 23 keypoints in the following order:
    * `joint_id`:
<details><summary>Click to show list of keypoints</summary>

* 0: Head_Mid_Top
* 1: Eye_Left
* 2: Eye_Right
* 3: Mouth_Front_Top
* 4: Mouth_Back_Left
* 5: Mouth_Back_Right
* 6: Mouth_Front_Bottom
* 7: Shoulder_Left
* 8: Shoulder_Right
* 9: Elbow_Left
* 10: Elbow_Right
* 11: Wrist_Left
* 12: Wrist_Right
* 13: Torso_Mid_Back
* 14: Hip_Left
* 15: Hip_Right
* 16: Knee_Left
* 17: Knee_Right
* 18: Ankle_Left
* 19: Ankle_Right
* 20: Tail_Top_Back
* 21: Tail_Mid_Back
* 22: Tail_End_Back

</details>
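
The snippet below is a minimal sketch of how one annotation entry could be read and turned into a bounding box, based only on the fields described above. The file name `ak_pose_train.json` and the assumption that the file contains a list of such entries are illustrative; the box side of `scale * 200` px follows the description of `scale` above.

```python
import json

# Hypothetical annotation file name; use the actual file shipped with the dataset.
with open("ak_pose_train.json") as f:
    annotations = json.load(f)  # assumed: a list of per-image entries

entry = annotations[0]
print(entry["image"], entry["animal"], entry["animal_class"])

# `joints` holds 23 [x, y] pairs in a 640x360 image; invisible joints are [-1, -1].
visible = [(x, y) for (x, y), v in zip(entry["joints"], entry["joints_vis"]) if v == 1]

# MPII-style box: `scale` is relative to 200 px, `center` is the box centre.
cx, cy = entry["center"]
side = entry["scale"] * 200.0
x0, y0, x1, y1 = cx - side / 2, cy - side / 2, cx + side / 2, cy + side / 2
print(f"{len(visible)} visible joints, box = ({x0:.1f}, {y0:.1f}, {x1:.1f}, {y1:.1f})")
```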

## Evaluation Metric
* We use PCK@0.05 (Percentage of Correct Keypoints at a 0.05 threshold).
* For the evaluation code, please refer to <https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/blob/master/lib/core/evaluate.py> (a simplified sketch is given below).
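
For orientation, the following is a minimal, simplified sketch of PCK@0.05: a visible joint counts as correct when the predicted location lies within 0.05 of a per-sample normalization factor from the ground truth. The normalization and averaging conventions used in our experiments follow the linked evaluation script; the array shapes and toy data below are illustrative only.

```python
import numpy as np

def pck(pred, gt, vis, norm, thr=0.05):
    """PCK sketch: fraction of visible joints whose prediction lies within
    thr * norm of the ground truth.

    pred, gt: (N, K, 2) predicted / ground-truth joint coordinates
    vis:      (N, K) visibility flags (1 = visible, 0 = not visible)
    norm:     (N,) per-sample normalization factors (e.g., a reference length)
    """
    dist = np.linalg.norm(pred - gt, axis=-1)   # (N, K) pixel distances
    within = dist <= thr * norm[:, None]        # normalized threshold test
    mask = vis.astype(bool)                     # ignore invisible joints
    return within[mask].mean()

# Toy usage with random data (shapes only; not real annotations).
rng = np.random.default_rng(0)
pred = rng.uniform(0, 640, size=(4, 23, 2))
gt = pred + rng.normal(0, 5, size=pred.shape)
vis = rng.integers(0, 2, size=(4, 23))
print(pck(pred, gt, vis, norm=np.full(4, 256.0)))
```
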
## Instructions to run Pose Estimation models

This code was tested separately on RTX 3090 and RTX 3080 Ti GPUs using CUDA 10.2.

1. To prepare the environment, refer to:
    * [HRNet] <https://github.com/leoxiaobin/deep-high-resolution-net.pytorch>
    * [HRNet-DARK] <https://github.com/ilovepose/DarkPose#distribution-aware-coordinate-representation-for-human-pose-estimation>
    * **IMPORTANT**: Perform the next step (Step 2) before running `make` for the libs (Step 4 of the HRNet setup), so that the dataset is initialized.

2. Move and replace files according to the directories in `$DIR_AK_AR/pose_estimation/code/code_new`:
    * Helper script to move / create symbolic links to the files:
        * Remember to change the root directory `$DIR_ROOT` in `$DIR_AK/pose_estimation/code/code_new/prepare_dir_PE.sh`
        * `bash $DIR_AK/pose_estimation/code/code_new/prepare_dir_PE.sh`

3. Untar the dataset:
    * `tar -zxvf $DIR_AK/pose_estimation/dataset.tar.gz`

4. Execute the code (a consolidated sketch of Steps 1-4 is shown after this list):
    * `python tools/train.py --cfg $DIR_HRNET/experiments/mpii/hrnet/w32_256x256_adam_lr1e-3_ak.yaml`

5. [Alternative] We have also prepared the dataset for use with MMPose <https://mmpose.readthedocs.io/en/latest/get_started.html> by OpenMMLab.
    * COCO annotations are available (not used in our experiments).
    * Only the mAP metric is available for COCO datasets in MMPose (not used in our experiments): <https://github.com/open-mmlab/mmpose/issues/721#issuecomment-859453118>, <https://github.com/open-mmlab/mmpose/issues/707>
    * Helper script to set up the environment:
        * Remember to change the root directory `$DIR_ROOT` in `$DIR_AK/pose_estimation/code/code_new/prepare_dir_PE.sh`
        * `bash $DIR_AK/pose_estimation/code/code_new/prepare_dir_PE_mmpose.sh`
    * `python $DIR_MMPOSE/tools/train.py configs/animal/2d_kpt_sview_rgb_img/topdown_heatmap/ak/hrnet_w32_ak_256x256.py`
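
The HRNet path (Steps 1-4) can be summarized in the following shell sketch. The directory variables `$DIR_AK` and `$DIR_HRNET` stand for your local copies of the dataset and the HRNet repository, the placeholder paths are illustrative, and the ordering reflects the note in Step 1: the dataset files are put in place before `make` is run in `lib`.

```bash
#!/usr/bin/env bash
# Sketch of Steps 1-4 for the HRNet setup; adjust the paths to your machine.
set -e

export DIR_AK=/path/to/AnimalKingdom                          # downloaded dataset root (illustrative path)
export DIR_HRNET=/path/to/deep-high-resolution-net.pytorch    # cloned HRNet repository (illustrative path)

# Step 2: move / link the AnimalKingdom files into the HRNet tree
# (edit DIR_ROOT inside the script first).
bash "$DIR_AK/pose_estimation/code/code_new/prepare_dir_PE.sh"

# Step 3: untar the dataset.
tar -zxvf "$DIR_AK/pose_estimation/dataset.tar.gz"

# Only now build the HRNet libs ("make libs" in the HRNet instructions).
cd "$DIR_HRNET/lib" && make

# Step 4: train.
cd "$DIR_HRNET"
python tools/train.py --cfg "$DIR_HRNET/experiments/mpii/hrnet/w32_256x256_adam_lr1e-3_ak.yaml"
```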

## Solutions to potential issues
<details><summary>Click to expand</summary>

1. `unable to execute 'gcc': No such file or directory. error: command 'gcc' failed with exit status 1`
    * `sudo apt install gcc`

2. `ModuleNotFoundError: No module named 'nms.cpu_nms'`
    * <https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/issues/24>
    * `cd $DIR_HRNET/lib`
    * `make`

3. `OSError: The nvcc binary could not be located in your $PATH. Either add it to your path, or set $CUDAHOME`
    * <https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/issues/143>
    * `export CUDAHOME="/usr/lib/cuda"`

4. `OSError: The CUDA nvcc path could not be located in /usr/lib/cuda/bin/nvcc`
    * Ensure CUDA and nvcc are installed:
        * `sudo apt install nvidia-cuda-toolkit`
        * `which nvcc` should show `/usr/bin/nvcc`
        * `echo $CUDAHOME` should show `/usr/lib/cuda`
    * `sudo ln -s /usr/bin/nvcc /usr/lib/cuda/bin/nvcc`

5. `RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW`
    * The driver may have been uninstalled after running `sudo apt install nvidia-cuda-toolkit`
    * Check whether the driver is installed: `nvidia-smi` should show the drivers available for installation (e.g., `sudo apt install nvidia-utils-470`)

6. `AttributeError: module 'torch.onnx' has no attribute 'set_training'`
    * <https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/issues/230>
    * `pip install tensorboardX --upgrade`
    * `pip install tensorboard`

7. `ImportError: libcudart.so.10.2: cannot open shared object file: No such file or directory`
    * <https://itsfoss.com/solve-open-shared-object-file-quick-tip>
    * `sudo /sbin/ldconfig -v`

</details>