Patient-Specific Pose Estimation in a Clinical Environment

Published in SoCal Machine Learning Symposium, 2017

K. Chen, P. Gabriel, A. Alasfour, W.K. Doyle, O. Devinsky, D. Friedman, T. Thesen, and V. Gilja, “Patient-Specific Pose Estimation in a Clinical Environment,” SoCal Machine Learning Symposium, USC Viterbi School of Engineering, Los Angeles, CA, 2017.

We demonstrate a method for improving automatic upper-body pose estimation from hours of RGB video recordings of a subject in a clinical environment. Our semi-automated approach trains a patient-specific ConvNet model on a 15-minute subset of postures drawn from manually annotated video, then uses it to estimate the same patient's postures over an additional 120 minutes of video. By incorporating temporal constraints and adapting to scene lighting changes, the proposed framework labels poses more consistently across a range of spatial tolerances than comparable methods applied to a single subject recorded in a clinical setting.
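The temporal constraints mentioned in the abstract can be illustrated with a simple per-joint smoothing filter applied to frame-wise keypoint predictions. This is a minimal sketch, not the paper's implementation; the sliding-median approach, the window size, and the `smooth_keypoints` function name are all assumptions made for illustration.

```python
import numpy as np

def smooth_keypoints(keypoints, window=5):
    """Apply a per-joint sliding median filter over time.

    keypoints: array of shape (frames, joints, 2) holding (x, y) per joint.
    Returns an array of the same shape with temporally smoothed coordinates.
    A median over a short temporal window suppresses single-frame outliers
    while leaving slowly varying postures largely unchanged.
    """
    frames = keypoints.shape[0]
    half = window // 2
    smoothed = np.empty_like(keypoints, dtype=float)
    for t in range(frames):
        # Clip the window at the start and end of the recording.
        lo, hi = max(0, t - half), min(frames, t + half + 1)
        smoothed[t] = np.median(keypoints[lo:hi], axis=0)
    return smoothed

# Example: noisy x-coordinates of one joint across 7 frames;
# frame 2 contains an outlier detection.
kp = np.zeros((7, 1, 2))
kp[:, 0, 0] = [10, 11, 50, 12, 13, 14, 15]
print(smooth_keypoints(kp, window=5)[2, 0, 0])  # → 12.0
```

In a real pipeline, such a filter would run on the per-frame ConvNet outputs before evaluation, trading a small temporal lag for robustness to spurious detections.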