Wearables and machine learning assess balance

  • July 8, 2024
  • Steve Rogerson

Using wearable sensors and machine learning algorithms, researchers from Florida Atlantic University (FAU) have developed a way to assess balance.

They say this sets a benchmark in the application of wearable technology and machine learning in health care.

The approach, developed by researchers at the university’s College of Engineering & Computer Science, advances objective balance assessment, particularly for remote monitoring in home-based or nursing care settings, and could transform how balance disorders are managed.

Balance can be impacted by various factors, including diseases such as Parkinson’s, acute and chronic injuries to the nervous system, and the natural aging process. Accurately assessing balance in patients is important to identify and manage conditions that affect coordination and stability. Balance assessments also play a key role in preventing falls, understanding movement disorders, and designing appropriate therapeutic interventions across age groups and medical conditions.

However, traditional methods used to assess balance are often subjective, lack comprehensiveness and cannot be administered remotely. Moreover, these assessments rely on expensive, specialised equipment that may not be readily accessible in all clinical settings, and they depend on the clinician’s expertise, which can lead to variability in results. More objective and comprehensive tools for balance evaluation are needed.

For the study, researchers used the modified clinical test of sensory interaction on balance (m-CTSIB), widely used in healthcare to assess a person’s ability to maintain balance under different sensory conditions. Wearable sensors were placed on study participants’ ankle, lumbar (lower back), sternum, wrist and arm.

Researchers collected comprehensive motion data from the participants under the four sensory conditions of the m-CTSIB: eyes open and eyes closed on a stable surface, and eyes open and eyes closed on a foam surface. Each test condition lasted about eleven seconds without breaks to simulate continuous balance challenges and streamline the assessment process. For their analysis, researchers used inertial measurement unit (IMU) sensors alongside a reference system that provided ground-truth m-CTSIB balance scores.
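As a rough illustration of how such trials might be summarised before modelling, the sketch below computes simple sway-related statistics from an 11-second window of accelerometer data for each of the four conditions. The sampling rate, column layout and feature choices are assumptions made for illustration, not details taken from the study.

    # Minimal sketch, assuming raw IMU samples are stored per condition; the sampling
    # rate and the summary features below are illustrative, not the study's.
    import numpy as np

    FS = 100  # assumed sampling rate (Hz); each ~11 s trial gives ~1100 samples
    CONDITIONS = ["eyes_open_firm", "eyes_closed_firm", "eyes_open_foam", "eyes_closed_foam"]

    def summarise_trial(acc: np.ndarray) -> dict:
        """Basic sway-related summary features from a (samples x 3) acceleration trial."""
        mag = np.linalg.norm(acc, axis=1)
        return {
            "mean": float(mag.mean()),
            "std": float(mag.std()),
            "rms": float(np.sqrt((mag ** 2).mean())),
            "range": float(mag.max() - mag.min()),
        }

    # Example: synthetic 11-second trials for one sensor under each m-CTSIB condition.
    rng = np.random.default_rng(0)
    trials = {c: rng.normal(size=(11 * FS, 3)) for c in CONDITIONS}
    features = {c: summarise_trial(a) for c, a in trials.items()}
    print(features["eyes_closed_foam"])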

The data were then pre-processed and features were extracted for analysis. To estimate the m-CTSIB scores, researchers applied multiple linear regression, support vector regression and XGBoost algorithms. The wearable sensor data served as the input for these machine-learning models, and the corresponding m-CTSIB scores from Falltrak II, one of the leading tools in fall prevention, acted as the ground truth labels for model training and validation. Researchers also explored the most effective sensor placements to optimise balance analysis.
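A minimal sketch of this kind of regression pipeline is shown below, assuming windowed IMU features as inputs and reference balance scores as labels. The synthetic data, feature dimensions and model settings are placeholders rather than the study’s actual configuration.

    # Illustrative sketch only: the study's exact features, preprocessing and labels are
    # not given in this article, so synthetic data stand in for both inputs and scores.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)

    # Stand-in for windowed IMU features (e.g. acceleration statistics per sensor).
    X = rng.normal(size=(200, 12))
    # Stand-in for ground-truth balance scores from a reference system such as Falltrak II.
    y = X @ rng.normal(size=12) + rng.normal(scale=0.1, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    models = {
        "linear regression": LinearRegression(),
        "support vector regression": SVR(kernel="rbf", C=1.0),
        "xgboost": XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1),
    }

    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        print(f"{name}: MAE = {mean_absolute_error(y_test, pred):.3f}")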

Results of the study, published in the journal Frontiers in Digital Health (www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2024.1366176/full), underscore this approach’s accuracy and correlation with ground truth balance scores, suggesting the method is effective and reliable in estimating balance. Data from lumbar and dominant ankle sensors demonstrated the highest performance in balance score estimation, highlighting the importance of strategic sensor placement for capturing relevant balance adjustments and movements.

“Wearable sensors offer a practical and cost-effective option for capturing detailed movement data, which are essential for balance analysis,” said senior author Behnaz Ghoraani, an associate professor at FAU. “Positioned on areas like the lower back and lower limbs, these sensors provide insights into 3D movement dynamics, essential for applications such as fall risk assessment in diverse populations. Coupled with the evolution of machine learning, these sensor-derived datasets transform into objective, quantifiable balance metrics, using an array of machine-learning techniques.”

Results provide important insights into the significance of specific movements, feature selection and sensor placement in estimating balance. Notably, the XGBoost model, using the lumbar sensor data, achieved outstanding results in both cross-validation methods and demonstrated a high correlation and a low mean absolute error, indicating consistent performance.
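To make those evaluation terms concrete, the sketch below shows one way such a model could be scored with cross-validation, reporting Pearson correlation and mean absolute error against the ground truth. The data, fold scheme and XGBoost settings are illustrative assumptions, not the study’s.

    # Hedged sketch: cross-validated scoring of an XGBoost regressor on placeholder
    # lumbar-sensor features; none of the values below come from the published study.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.model_selection import KFold, cross_val_predict
    from sklearn.metrics import mean_absolute_error
    from xgboost import XGBRegressor

    rng = np.random.default_rng(1)
    X_lumbar = rng.normal(size=(160, 8))        # placeholder lumbar-sensor features
    y_scores = X_lumbar @ rng.normal(size=8)    # placeholder m-CTSIB scores

    model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
    cv = KFold(n_splits=5, shuffle=True, random_state=1)

    # Out-of-fold predictions allow correlation and MAE to be computed on held-out data.
    pred = cross_val_predict(model, X_lumbar, y_scores, cv=cv)
    r, _ = pearsonr(y_scores, pred)
    print(f"Pearson r = {r:.3f}, MAE = {mean_absolute_error(y_scores, pred):.3f}")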

“Findings from this important research suggest that this novel method has the potential to revolutionise balance assessment practices, especially in situations where traditional methods are impractical or inaccessible,” said Stella Batalama, dean of FAU’s College of Engineering & Computer Science (www.fau.edu/engineering). “This approach is more accessible, cost-effective and capable of remote administration, which could have significant implications for health care, rehabilitation, sports science or other fields where balance assessment is important.”

The objectives of this study emerged from recognising the need for tools to capture the nuanced effects of different sensory inputs on balance.

“Traditional balance assessments often lack the granularity to dissect these influences comprehensively, leading to a gap in our understanding and management of balance impairments,” said Ghoraani. “Moreover, wearables support remote monitoring, enabling healthcare professionals to evaluate patients’ balance remotely, which is particularly useful in diverse healthcare scenarios.”

Study co-authors are Marjan Nassajpour and Mustafa Shuqair in the FAU (www.fau.edu) Department of Electrical Engineering & Computer Science; and Amie Rosenfeld, Magdalena Tolea and James Galvin at the University of Miami Miller School of Medicine.

The work was supported by the Ed and Ethel Moore Alzheimer’s Disease Research Program at the Florida Department of Health and by the National Science Foundation.