The iCub multisensor datasets
This website hosts novel datasets constructed with the iCub robot
equipped with an additional depth sensor and color camera, an Intel RealSense D435i. We employed the iCub robot to acquire
color and depth information for 210 objects in different acquisition scenarios. These datasets can be used for robot and computer vision
applications such as multisensory object representation, object concept formation, action recognition, and rotation- and distance-invariant object recognition.
Note that this is ongoing work and the content of the website will be updated regularly.
The list below provides the specifications of the datasets.
- RGB-D turntable dataset: the experimental setup for this dataset consists of a motorized turntable and the iCub robot.
To capture color and depth images, we placed each object on the turntable and rotated the table in five-degree steps
until it completed a full rotation. At the end of this experiment, we obtained 72 different views of an object, and for each view we
collected one depth image and three color images, for a total of 288 images per object (60480 color and depth images over all 210 objects).
Link to download dataset: https://box.hu-berlin.de/d/6f7b371ea38744a1bcf3/
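The view and image counts for the turntable dataset follow from simple arithmetic, sketched below (the constants come directly from the description above; the loop-free calculation itself is just an illustration):

```python
# Turntable acquisition arithmetic: 5-degree steps over a full rotation,
# with one depth image and three color images captured per view.
STEP_DEG = 5
views = 360 // STEP_DEG              # 72 views per object
images_per_view = 1 + 3              # 1 depth + 3 color images
per_object = views * images_per_view # 288 images per object
total = 210 * per_object             # 60480 images across all 210 objects
print(views, per_object, total)      # 72 288 60480
```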
- RGB-D operator dataset: the experimental setup for this dataset is similar to the turntable dataset. Here, instead of a turntable, we
employed four different operators to obtain more varied views of each object: rotations about three axes instead
of a single one, more poses, and noisier environmental conditions that introduced factors such as specularities, shadows, etc.
Link to download dataset: will be available soon.
- Scene understanding dataset(s): in this setup, we constructed two datasets for indoor
scene understanding by creating cluttered scenes with 10 and 20 objects. In the first setup, we placed 10 objects in front of the robot to
capture depth and color images. After obtaining color and depth data of the scene, we randomly shuffled the objects' poses 20 times
to capture more variations. This procedure was repeated for all objects in the dataset. In the end, we created 21 different scenes
with 10 objects and collected 1680 depth and color images. In the second setup, we replicated the same procedure with 20 objects
and shuffled them in the scene 40 times, creating a dataset with 1760 color and depth images. Combining the data from the first
and second experiments yields 3440 color and depth images.
Link to download dataset: https://box.hu-berlin.de/d/8cf5d696d4d5497180d2/
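As a cross-check, the first-setup total is consistent with four images per captured scene configuration (one depth plus three color, as in the turntable dataset; that per-capture count is an assumption here, not stated for this setup):

```python
# Cross-check of the first scene-understanding setup:
# 21 scenes x 20 pose shuffles x 4 images per capture.
scenes = 21             # scenes of 10 objects each
shuffles = 20           # random pose shuffles per scene
images_per_capture = 4  # assumed: 1 depth + 3 color images
print(scenes * shuffles * images_per_capture)  # 1680, matching the stated total
```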
- Action recognition dataset: we designed a setup in which the iCub robot acts as an action observer and four different operators act as action
performers. The operators use four different tools to perform four different actions on 20 objects. In this setting, color and depth images were captured before and after
performing an action on an object with a tool.
Link to download dataset: https://box.hu-berlin.de/d/6bc742f6dafb4dc2a36a/
Object and data specification
The color and depth images in these datasets were captured at a resolution of 640x480 and saved in PNG format. Note that the depth data were visualized using OpenCV (cv2.COLORMAP_JET)
and saved as images.
Objects: as can be seen from the banner of the web page, the datasets were constructed with various types of objects. Note that we used the same objects
for the RGB-D turntable, RGB-D operator, and scene understanding datasets. Here, you can access the complete list
of objects with their corresponding object IDs. The action recognition dataset was constructed with 20 objects, and the list of objects can be found here.
Citing the dataset
If you use these datasets in your project, please cite the following technical report:
Murat Kirtay, Ugo Albanese, Lorenzo Vannucci, Guido Schillaci, Cecilia
Laschi, and Egidio Falotico. 2020. The iCub multisensor datasets for robot
and computer vision applications. In Proceedings of the 2020 International
Conference on Multimodal Interaction (ICMI’20), October 25–29, 2020, Virtual