# Dataset Card for this Human-Machine Interaction Dataset

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Overview](#dataset-overview)
- [Summary of Data](#summary-of-data)
- [Motivation for this Dataset](#motivation-for-this-dataset)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Data Contents](#data-contents)
- [Data Frame](#data-frame)
- [Data Collection](#data-collection)
- [Machine of Focus and Facility](#machine-of-focus-and-facility)
- [Sensor and Data Modality](#sensor-and-data-modality)
- [A Note on Privacy](#a-note-on-privacy)
- [Additional Information and Analysis Techniques](#additional-information-and-analysis-techniques)
- [Action List](#action-list)
- [Skeleton Features](#skeleton-features)
- [Machine Learning Techniques](#machine-learning-techniques)
- [Acknowledgements](#acknowledgements)
- [Dataset Curators](#dataset-curators)
- [Funding and Support](#funding-and-support)
- [Citation](#citation)
## Dataset Overview

This dataset contains a collection of observed interactions between humans and an advanced manufacturing machine, specifically a Wire Arc Additive Manufacturing (WAAM) machine. The motivations for collecting this dataset, its contents, and some ideas for how to analyze and use it can be found below.

Additionally, the paper introducing this dataset is under review for publication in the American Society of Mechanical Engineers (ASME) Journal of Mechanical Design (JMD) special issue: “Cultivating Datasets for Engineering Design”. If accepted, the paper will be referenced here.
### Motivation for this Dataset

The engineering design process for any solution or product is essential to ensure quality results and standards. However, this process can be tedious and require many iterations, especially when a product must be manufactured. If engineers and designers are designing a product to be manufactured but are disconnected from the realities of the available manufacturing capabilities, the mismatch between design specifications and production or supply-chain abilities can force many redesign iterations. Design for Manufacturing (DfM) is a design approach that relies on accurate simulation and modeling of the available manufacturing processes and accounts for manufacturing during design, reducing this redesign inefficiency. Improving the transparency between manufacturing and design therefore requires methods to understand and quantify the various steps of the manufacturing process. Within that effort, one of the most difficult aspects of manufacturing to understand and quantify is the interaction between humans and machinery. While manufacturing is undergoing immense change due to automation technologies and robotics, humans still play a central role in operations, yet their behaviors and actions, and how these influence the manufacturing process, remain poorly understood. This dataset supports the understanding of humans in manufacturing by observing realistic interactions between humans and an advanced manufacturing machine.
### Supported Tasks

- `video-classification`: Using the provided sequences of depth image frames and joint skeletons, machine learning techniques can be used to classify them by human action.
### Languages

English
## Data Contents

This dataset comprises 3.87 hours of footage (209,230 frames of data at 15 FPS) representing a total of 1228 interactions captured over 6 months.

The depth images were captured with the Microsoft Azure Kinect DK sensor in NFOV mode (more can be found on the [Azure Kinect Hardware Specs Website](https://learn.microsoft.com/en-us/azure/kinect-dk/hardware-specification)), and the skeletons of the humans in each frame were extracted using the Azure Kinect Body Tracking SDK (found [here](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.1.x/index.html)).
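
As an illustration of the image format, the sketch below reads one depth frame without losing its 16-bit precision. It is a minimal, hypothetical example: the file path is a placeholder, and it assumes only what is stated above (a 320x288, 16-bit grayscale .png), using Pillow and NumPy.

```python
# Minimal sketch: read a 16-bit depth .png as described above.
# The path is a placeholder; Pillow and NumPy are assumed to be installed.
import numpy as np
from PIL import Image

depth = np.array(Image.open("frames/example_frame.png"))  # hypothetical path

print(depth.shape, depth.dtype)   # e.g. (288, 320) and an unsigned 16-bit dtype
print(depth.min(), depth.max())   # raw 16-bit depth values
```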

### Data Frame

Each frame contains the following data points and labels (a minimal loading sketch follows the list):

* image: A 320x288, 16-bit grayscale .png file of the captured depth image. The depth image is from either the outer machine perspective or the inner perspective, according to the view label.
* frame(#): An integer (from 0 - 209230) representing a unique frame identifier. Frames are numbered in chronological order.
* skeleton: An array of 32 3D coordinates. Each skeleton array captures 32 joints on the human body within the frame according to the Microsoft Azure Kinect Body Tracking SDK (linked above). For more information about the indexing of each joint, see this [Azure Kinect Joint Skeleton Webpage](https://learn.microsoft.com/en-us/azure/kinect-dk/body-joints).
* action_label: A label of which action the current frame captures. A list of all action labels can be found below.
* location_label: A label of where on the machine the human is performing the interaction in the current frame.
* user_label: The unique user ID of the person in the frame. There are a total of 4 users (numbered 0 - 3), ordered by how frequently they use the machine: 0 is the most frequent user and 3 the least.
* view_label: A label of which sensor perspective best captures the action in the frame (0 for the outer perspective, 1 for the inner).
* action_number: A label (0 - 1227) indicating which of the 1228 actions a particular frame belongs to. The data originally consisted of 1228 depth video clips, each covering one action from start to finish, and these videos were later split into individual frames. Since analyzing human actions usually needs temporal context, the action number allows all frames that make up a complete action to be grouped and ordered (in conjunction with the frame number or timestamp label).
* datetime: A timestamp of when the frame was captured. This allows frames and actions to be ordered, shows how much time elapsed between adjacent actions, and makes it possible to split experimental sessions by day. The context of the ordering of actions, and of which occur at the beginning or end of a day, is very useful.
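
The sketch below shows one way to regroup individual frames into complete actions using these fields. It is a minimal, hypothetical example: it assumes the data is loaded through the Hugging Face `datasets` library, the repository ID is a placeholder, and the column names are assumed to match the field names listed above.

```python
# A minimal loading sketch (not the official loader): assumes the Hugging Face
# `datasets` library, a placeholder repository ID, and column names matching
# the field list above.
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("username/waam-hmi-dataset", split="train")  # hypothetical repo ID

# Group frames by action_number so each of the 1228 actions can be replayed in order.
frames_by_action = defaultdict(list)
for example in ds:
    frames_by_action[example["action_number"]].append(example)

for frames in frames_by_action.values():
    frames.sort(key=lambda f: f["frame(#)"])  # chronological order within one action

first_action = next(iter(frames_by_action.values()))
print(len(first_action), first_action[0]["action_label"], first_action[0]["user_label"])
```

Grouping by action number and sorting by the frame identifier recovers the temporal context described above.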
## Data Collection
### Machine of Focus and Facility

The machine being interacted with in this dataset is the Lincoln Electric Sculptprint RND Wire Arc Additive Manufacturing (WAAM) machine. The WAAM machine is a large-format metal 3D printer housed in a 2.2m x 4.1m x 2.3m (LxWxH) chamber and includes a robotic welder arm that deposits molten metal filament onto a specially configured build plate in a layered fashion. We chose this machine as a starting point because it exemplifies a wide variety of human interactions, ranging from very direct, hands-on actions like grinding down the metal build plate or refitting parts on the build plate to more indirect, hands-off actions like calibrating the robot arm with a joystick or using the digital control panel.

Additionally, the machine we studied was housed at Mill19, a manufacturing and robotics research facility run by the Manufacturing Futures Institute (MFI) at Carnegie Mellon University. More about this machine and facility can be found on [MFI's page about the WAAM](https://engineering.cmu.edu/mfi/facilities/equipment-details/lincoln-electric-sculptprint-rnd.html).
### Sensor and Data Modality

For our data collection, we used 2 Microsoft Azure Kinect DK cameras (linked again [here](https://learn.microsoft.com/en-us/azure/kinect-dk/hardware-specification) for convenience). Because the WAAM machine has points of interaction both inside its welding chamber and outside, we installed the 2 sensors so that one captures the ‘outer perspective’ and the other the ‘inner perspective’. While the Azure Kinect captures many modalities of data, we chose to focus on depth images (in narrow field-of-view ‘NFOV’ mode) and human joint skeletons, captured at a rate of 15 frames per second.
### A Note on Privacy

The choice to focus on only depth images and joint skeletons was made to preserve the privacy of the users being sensed. This is very important when observing humans in a largely shared environment, and it matters just as much in industry and public infrastructure settings. If we can show that meaningful knowledge can be learned using privacy-preserving technologies, these technologies can see safer, more widespread use.
## Additional Information and Analysis Techniques
### Action List

A complete list of actions, with brief descriptions:

* using_control_panel : Interfacing with the machine start/stop controls and the digital screen used for visualizing build files and configuring machine parameters.
* using_flexpendant_mounted : Flexpendant being used in its control mode for loading build parameters and viewing machine output logs.
* using_flexpendant_mobile : Flexpendant being used in its machine operation mode for moving the robotic arm with the attached joystick.
* inspecting_buildplate : Performing light build plate modifications and inspections before or after a build.
* preparing_buildplate : Clearing or moving the build plate to set up the next build.
* refit_buildplate : Completely switching out the build plate configuration for a new project.
* grinding_buildplate : Grinding down the new build plate to expose conductive metal and level the surface.
* toggle_lights : Turning the internal WAAM light on/off.
* open_door : Opening the WAAM door.
* close_door : Closing the WAAM door.
* turning_gas_knobs : Turning the shielding gas on/off.
* adjusting_tool : Installing or modifying new/existing sensors on the robotic welder arm.
* wiring : Installing or adjusting the wiring of tool sensors.
* donning_ppe : Users putting on personal protective equipment.
* doffing_ppe : Users taking off personal protective equipment.
* observing : Simply looking around or watching WAAM activity.
* walking : Simply walking around the WAAM.
### Skeleton Features

The skeleton data provided in each frame consists of an array of 32 joint coordinates in 3D space (x, y, z). Each coordinate value is in millimeters, and the origin is the respective Kinect sensor that captured the frame (more on the coordinate system can be found on [the Azure Kinect webpage on the sensor coordinate system](https://learn.microsoft.com/en-us/azure/kinect-dk/coordinate-systems) and the [Body Tracking SDK’s webpage on joints](https://learn.microsoft.com/en-us/azure/kinect-dk/body-joints)).

While analysis techniques can be applied to these ‘raw’ coordinates directly, many hand-crafted features can also be extracted from them. Some basic and popular examples, illustrated in the sketch after this list, include:

* Joint Coordinate Normalization: The skeleton coordinates can be normalized with respect to each other. Another common technique is to choose a single joint at the center of the body as the ‘origin’ and re-calculate the coordinates of every other joint relative to this central one.
* Joint Velocities: Calculated from the difference in a joint’s coordinates between consecutive frames (each frame is 1/15 of a second apart).
* Joint Angles: The angle formed at a specific joint by its adjacent limbs, computed trigonometrically from the vectors between the joint of focus and its adjacent joints.
* Joint Distances: Pick 2 joints of interest and derive the distance between them with a basic geometric calculation.
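
The sketch below illustrates these features with NumPy. It is a minimal, hypothetical example: it assumes each frame's skeleton is available as a (32, 3) array of millimeter coordinates, and the joint indices used (e.g., 0 for the pelvis) should be checked against the Body Tracking SDK's joint enumeration.

```python
# Minimal sketches of the hand-crafted skeleton features described above.
# Assumes each frame's skeleton is a (32, 3) NumPy array of (x, y, z) joint
# coordinates in millimeters; joint indices follow the Azure Kinect Body
# Tracking SDK enumeration (e.g., 0 = pelvis) and should be verified there.
import numpy as np

FPS = 15  # consecutive frames are 1/15 of a second apart


def normalize_to_root(skeleton: np.ndarray, root: int = 0) -> np.ndarray:
    """Re-express all joints relative to a central 'origin' joint (e.g., the pelvis)."""
    return skeleton - skeleton[root]


def joint_velocity(prev_skeleton: np.ndarray, curr_skeleton: np.ndarray) -> np.ndarray:
    """Per-joint velocity in mm/s between two consecutive frames."""
    return (curr_skeleton - prev_skeleton) * FPS


def joint_angle(skeleton: np.ndarray, joint: int, neighbor_a: int, neighbor_b: int) -> float:
    """Angle (radians) at `joint` formed by the limbs toward two adjacent joints."""
    v1 = skeleton[neighbor_a] - skeleton[joint]
    v2 = skeleton[neighbor_b] - skeleton[joint]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))


def joint_distance(skeleton: np.ndarray, a: int, b: int) -> float:
    """Euclidean distance (mm) between two joints of interest."""
    return float(np.linalg.norm(skeleton[a] - skeleton[b]))
```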
### Machine Learning Techniques

Human action recognition often utilizes deep learning techniques to analyze and identify patterns in human actions, since several deep learning architectures are well suited to analyzing data both temporally and spatially. Some popular deep learning models, with a minimal classifier sketch following the list, include:

* Long Short-Term Memory (LSTM) : This deep learning model is a type of recurrent neural network (RNN) specifically designed to avoid the vanishing gradient problem and tailored to temporal/sequential data, remaining robust to large or small gaps between the important pieces of information distributed through a sequence.
* Convolutional Neural Network (CNN) : A powerful image-based model that can extract visual features from complex imagery.
* Graph Convolutional Network (GCN) : A convolutional model that operates over a defined, specialized graph rather than an array of pixels. A specific example is the Spatial-Temporal GCN (STGCN), which is widely used for skeleton-based human action recognition.
* Autoencoders : An unsupervised learning technique that can learn sets of patterns and features shared by the data. This can be particularly powerful for clustering data and quantifying differences between particular actions, and for reducing data dimensionality by representing the data with a smaller set of features than the original.
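
As a concrete starting point for the LSTM option above, the sketch below classifies fixed-length skeleton clips. It is a minimal, hypothetical example rather than the model used by the authors: it assumes PyTorch, clips of T frames with skeletons flattened to 32 x 3 = 96 features per frame, and 17 output classes matching the action list.

```python
# Minimal sketch of a skeleton-sequence classifier (not the authors' model).
# Assumes PyTorch, fixed-length clips of T frames, skeletons flattened to
# 32 joints x 3 coordinates = 96 features per frame, and the 17 action classes.
import torch
import torch.nn as nn


class SkeletonLSTM(nn.Module):
    def __init__(self, num_joints: int = 32, num_classes: int = 17, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * 3, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, T, 32, 3); flatten the joints of each frame.
        batch, t = x.shape[:2]
        out, _ = self.lstm(x.reshape(batch, t, -1))
        return self.head(out[:, -1])  # classify from the final time step


# Example: a batch of 8 clips, each 45 frames long (3 seconds at 15 FPS).
logits = SkeletonLSTM()(torch.randn(8, 45, 32, 3))
print(logits.shape)  # torch.Size([8, 17])
```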
## Acknowledgements
### Dataset Curators

This dataset was collected by John Martins with the guidance of Katherine Flanigan and Christopher McComb.

The corresponding paper was written by John Martins, Katherine Flanigan, and Christopher McComb.
### Funding and Support

We thank Carnegie Mellon’s Manufacturing Futures Institute for graciously funding and supporting the effort to collect this data. We also thank Mill19 for granting access to their facilities and allowing us to install sensors. Lastly, we would like to thank the users of the WAAM machine for allowing us to collect data on their use of the machine over the 6-month data collection period.
### Citation

As mentioned above, the paper introducing this dataset is under review for publication in the American Society of Mechanical Engineers (ASME) Journal of Mechanical Design (JMD) special issue: “Cultivating Datasets for Engineering Design”. If accepted, the paper will be referenced here.