Sato Lab./Sugano Lab.
Publications
UniGaze: Towards Universal Gaze Estimation via Large-scale Pre-Training
Despite decades of research on data collection and model architectures, current gaze estimation models encounter significant challenges …
Jiawei Qin, Xucong Zhang, Yusuke Sugano
Robust Long-term Test-Time Adaptation for 3D Human Pose Estimation through Motion Discretization
Online test-time adaptation addresses the train-test domain gap by adapting the model on unlabeled streaming test inputs before making …
Yilin Wen, Kechuan Dong, Yusuke Sugano
Generative Modeling of Shape-Dependent Self-Contact Human Poses
One can hardly model self-contact of human poses without considering underlying body shapes. For example, the pose of rubbing a belly …
Takehiko Ohkawa, Jihyun Lee, Shunsuke Saito, Jason Saragih, Fabian Prada, Yichen Xu, Shoou-I Yu, Ryosuke Furuta, Yoichi Sato, Takaaki Shiratori
AssemblyHands-X: Modeling 3D Hand-Body Coordination for Understanding Bimanual Human Activities
Bimanual human activities inherently involve coordinated movements of both hands and body. However, the impact of this coordination in …
Tatsuro Banno, Takehiko Ohkawa, Ruicong Liu, Ryosuke Furuta, Yoichi Sato
EgoInstruct: An Egocentric Video Dataset of Face-to-face Instructional Interactions with Multi-modal LLM Benchmarking
Analyzing instructional interactions between an instructor and a learner who are co-present in the same physical space is a critical …
Yuki Sakai, Ryosuke Furuta, Juichun Yen, Yoichi Sato
Can MLLMs Read the Room? A Multimodal Benchmark for Verifying Truthfulness in Multi-Party Social Interactions
As AI systems become increasingly integrated into human lives, endowing them with robust social intelligence has emerged as a critical …
Caixin Kang, Yifei Huang, Liangyang Ouyang, Mingfang Zhang, Yoichi Sato
Affordance-Guided Diffusion Prior for 3D Hand Reconstruction
How can we reconstruct 3D hand poses when large portions of the hand are heavily occluded by itself or by objects? Humans often resolve …
Naru Suzuki, Takehiko Ohkawa, Tatsuro Banno, Jihyun Lee, Ryosuke Furuta, Yoichi Sato
Egocentric Action-aware Inertial Localization in Point Clouds with Vision-Language Guidance
This paper presents a novel inertial localization framework named Egocentric Action-aware Inertial Localization (EAIL), which leverages …
Mingfang Zhang, Ryo Yonetani, Yifei Huang, Liangyang Ouyang, Ruicong Liu, Yoichi Sato
Data-driven Head Motion Generation through Natural Gaze-Head Coordination
We present the first data-driven approach to model temporal gaze-head coordination from large-scale in-the-wild facial videos. To …
Xiaohan Liu, Yilin Wen, Yusuke Sugano
ChartQC: Question Classification from Human Attention Data on Charts
Understanding how humans interact with information visualizations is crucial for improving user experience and designing effective …
Takumi Nishiyasu*, Tobias Kostorz*, Yao Wang, Yoichi Sato, Andreas Bulling