Image domain adaption of simulated data for human pose estimation

Published in Proc. SPIE 11543, Artificial Intelligence and Machine Learning in Defense Applications II, 2020

Leveraging the power of deep neural networks, single-person pose estimation has made substantial progress in recent years. More recently, multi-person pose estimation has also gained importance, driven mainly by the high demand for reliable video surveillance systems in public security. To keep up with these demands, considerable effort has gone into improving the performance of such systems, which nevertheless remains limited by the insufficient amount of available training data. This work addresses that lack of labeled data: by mitigating the frequently encountered domain shift between synthetic images rendered by computer game graphics engines and real-world data, annotated training data can be provided at zero labeling cost. To this end, generative adversarial networks are applied as a domain adaptation framework, adapting the data of a novel synthetic pose estimation dataset to several real-world target domains. State-of-the-art domain adaptation methods are extended to meet the important requirement of exact content preservation between synthetic and adapted images. Subsequent experiments indicate the improved suitability of the adapted data, as human pose estimators trained on it outperform those trained on purely synthetic images.
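
The abstract does not specify the adaptation framework in detail; a minimal sketch of how a GAN-based adaptation loss could be extended with a content-preservation term is given below. It assumes a CycleGAN-style setup, and all names, modules, and weights (`G`, `F_inv`, `D_real`, `lambda_content`, ...) are illustrative assumptions, not the paper's actual implementation:

```python
# Sketch of a generator loss combining adversarial, cycle-consistency, and
# content-preservation terms. Assumed CycleGAN-style setup; all names and
# weights are hypothetical placeholders, not the paper's code.
import torch
import torch.nn.functional as F

def generator_loss(G, F_inv, D_real, syn_batch,
                   lambda_cyc=10.0, lambda_content=5.0):
    """G:      generator mapping synthetic -> real-world style
       F_inv:  inverse generator mapping adapted -> synthetic
       D_real: discriminator on the real-world target domain
    """
    adapted = G(syn_batch)

    # Adversarial term: adapted images should fool the target discriminator.
    pred = D_real(adapted)
    adv = F.mse_loss(pred, torch.ones_like(pred))

    # Cycle-consistency: translating back should recover the synthetic input.
    cyc = F.l1_loss(F_inv(adapted), syn_batch)

    # Content preservation: penalize pixel-level drift so that the pose
    # annotations of the synthetic image stay valid for the adapted image.
    content = F.l1_loss(adapted, syn_batch)

    return adv + lambda_cyc * cyc + lambda_content * content
```

The key idea the sketch illustrates is the third term: because the synthetic images come with pose labels, the adapted images must keep joints in exactly the same positions, so a penalty tying the adapted output to its synthetic source is one plausible way to enforce the content preservation the abstract requires.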