Smartphone Classifies Single-Leg Squat Into Three Levels

Researchers at Sangji and Yonsei Universities present an interpretable machine-learning framework (2026) that classifies single-leg squat (SLS) performance into three levels (good/moderate/poor) from frontal-view smartphone videos of 105 young adults. Using 17 engineered trunk/pelvis/knee features and adaptive boosting, the model achieved 0.84 accuracy, 0.85 F1, and 0.92 AUC; SHAP and LIME highlighted coordination-informed features as primary drivers, supporting clinical screening feasibility.
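The classification pipeline described above (engineered kinematic features fed to an adaptive-boosting classifier) can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic, the feature count and cohort size merely mirror the figures reported in the summary (17 features, 105 participants), and the SHAP/LIME interpretability step is omitted.

```python
# Hypothetical sketch of the described workflow: 17 engineered trunk/pelvis/knee
# kinematic features -> AdaBoost -> 3-class SLS quality label.
# All data below are synthetic placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
n_subjects, n_features = 105, 17            # mirrors the reported cohort/feature counts
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 3, size=n_subjects)     # 0=good, 1=moderate, 2=poor (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred, average="macro")  # macro-F1 treats the 3 classes equally
```

With random labels the scores here are near chance; the study's reported 0.84 accuracy and 0.85 F1 reflect real kinematic structure in the features.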
Scoring Rationale
Combines credible peer-reviewed results with an actionable, interpretable smartphone workflow; novelty is incremental relative to prior markerless SLS studies.
Sources
- Smartphone-Based Interpretable Machine Learning for Classifying Single-Leg Squat Performance Using Trunk, Pelvic, and Knee Kinematics: Cross-Sectional Study (mhealth.jmir.org)


