dc.description.abstract |
Detecting users' actions, anticipating their needs, and performing realistic mapping between the virtual and physical worlds are core capabilities of wearable computing devices. However, the wearable computing community lacks a multipurpose dataset for developing all of these methods. In this work, we employed a ZED camera to generate a novel, challenging benchmark dataset that can be used to develop methods for hand detection, egocentric hand gesture estimation, stereo visual odometry/SLAM trajectory estimation, and disparity estimation. Our "EgoCentric+" dataset comprises 231 stereo image-pair sequences along with ground-truth trajectory values, hand gesture annotations, and 2D bounding boxes for hand detection. We designed 20 continuous and 15 static hand gestures focused on interaction with virtual content in head-mounted wearable computers. Furthermore, the dataset was collected with a head-mounted camera under diverse backgrounds and illumination settings to resemble the environment of wearable glasses. These settings can help researchers develop appropriate and robust methodologies for head-mounted wearable devices and thoroughly test and evaluate their models and applications. |
en_US |