These datasets are provided for research purposes only. When using the data, please cite the corresponding publications.
Paired Egocentric Video (PEV) Dataset (CVPR’16)
Ryo Yonetani, Kris M. Kitani, and Yoichi Sato, “Recognizing Micro-Actions and Reactions from Paired Egocentric Videos,” In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR2016), 2016.
EgoSurf Dataset (CVPR’15)
Ryo Yonetani, Kris M. Kitani, and Yoichi Sato, “Ego-Surfing First-Person Videos,” In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR2015), 2015.
Multi-view Gaze Dataset (CVPR’14)
Yusuke Sugano, Yasuyuki Matsushita, and Yoichi Sato, “Learning-by-Synthesis for Appearance-based 3D Gaze Estimation,” In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR2014), 2014.
- Subjects 00–09 (4GB)
- Subjects 10–19 (4GB)
- Subjects 20–29 (4GB)
- Subjects 30–39 (4GB)
- Subjects 40–49 (4GB)
Gaze on BSDS500 Dataset (TAP’13)
Yusuke Sugano, Yasuyuki Matsushita, and Yoichi Sato, “Graph-based Joint Clustering of Fixations and Visual Entities,” ACM Transactions on Applied Perception (TAP), Volume 10, Issue 2, Article 10, June 2013.
Gaze + EgoMotion Dataset (ECV’12)
Keisuke Ogaki, Kris M. Kitani, Yusuke Sugano, and Yoichi Sato, “Coupling Eye-Motion and Ego-Motion Features for First-Person Activity Recognition,” In Proc. CVPR Workshop on Ego-Centric Vision (ECV2012), June 2012.
Wide-field Gaze Dataset (ETRA’12)
Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, and Kazuo Hiraki, “Incorporating Visual Field Characteristics into a Saliency Map,” In Proc. of the 7th International Symposium on Eye Tracking Research & Applications (ETRA2012), March 2012.