<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>EPSL | Shovito Barua Soumma</title><link>https://www.shovitobarua.com/authors/epsl/</link><atom:link href="https://www.shovitobarua.com/authors/epsl/index.xml" rel="self" type="application/rss+xml"/><description>EPSL</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© 2025 Shovito Barua Soumma</copyright><lastBuildDate>Thu, 17 Jul 2025 23:58:08 -0800</lastBuildDate><image><url>https://www.shovitobarua.com/media/sharing.png</url><title>EPSL</title><link>https://www.shovitobarua.com/authors/epsl/</link></image><item><title>HeatMind</title><link>https://www.shovitobarua.com/project/heatmind/</link><pubDate>Thu, 17 Jul 2025 23:58:08 -0800</pubDate><guid>https://www.shovitobarua.com/project/heatmind/</guid><description>&lt;p>Firefighters are faced with myriad stressors from hazardous work conditions and exposure to extreme heat that place their health at significant risks. The intense heat, smoke, shift work, long working hours, and stressful work put firefighters at substantial risk for heat-related injuries, long-term chronic complications, and mental health challenges. The health risks are further compounded in the Phoenix metropolitan, which has one of the highest heat indexes in the nation. Therefore, there is a need to develop technologies to objectively assess the impacts of extreme heat and harsh working conditions on firefighters' health and to provide actionable information to mitigate the health risks. This project develops HeatMind, an AI-powered sensor-based platform that provides firefighters and community organizations with the tools to objectively monitor heat-related health and provide intervention strategies to minimize risks. The project brings together researchers with expertise in AI, pervasive computing, social and behavioral science, user-centered design, community engagement, heat resilience, and hydration science to collaborate with community partners including firefighters, fire and forestry departments, and nonprofit organizations. The project aims to improve the physical and mental health of firefighters, reduce healthcare costs, improve performance, and enhance job satisfaction and efficiency. The developed technologies can be further refined for use in other communities, such as construction workers, miners, and agricultural workers, who experience prolonged heat exposure.&lt;/p>
&lt;p>This interdisciplinary project will design a scalable and adaptable infrastructure for continuous and objective heat-related health monitoring and proactive decision making by developing new methods for community engagement, passive monitoring of key aspects of heat-related health, real-time risk mitigation, and sustaining engagement in digital platforms. Specifically, the project will (1) establish a structured community engagement approach, called Design Studios for Health, where each design studio session will focus on collaborative discussions, prototype testing, and structured feedback collection from firefighters and community partners; (2) design deep learning algorithms that use multimodal wearable sensor data to continuously assess firefighters' health; (3) develop AI methods that identify mitigation strategies to minimize firefighters' health risks by generating counterfactual explanations that reason about the machine learning predictions and provide counterfactual feedback to avert impending high-risk events; (4) develop new techniques to ensure robust inference of the AI algorithms so that the HeatMind platform can be reliably deployed in uncontrolled settings and across different environments; and (5) implement a community-facing testbed that integrates sensors, data, and algorithms in a unified framework for data collection, visualization, inference, and intervention delivery.&lt;/p>
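&lt;p>As a rough illustration of aim (3), the sketch below searches for the smallest change to a single modifiable feature that flips a trained risk model's high-risk prediction to low risk, which is the basic shape of counterfactual feedback. It is a minimal sketch under assumed names (the model, feature index, and value grid are hypothetical), not the HeatMind algorithm.&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def counterfactual(model, x, feature_idx, grid):
    # Try each candidate value for one modifiable feature and keep the
    # value closest to the current one that flips the (hypothetical)
    # model's prediction from 1 (high risk) to 0 (low risk).
    candidates = []
    for v in grid:
        x_cf = x.copy()
        x_cf[feature_idx] = v
        if model.predict(x_cf.reshape(1, -1))[0] == 0:
            candidates.append((abs(v - x[feature_idx]), v))
    return min(candidates)[1] if candidates else None

# Hypothetical usage, assuming a scikit-learn-style risk_model and a
# hydration feature at index HYDRATION_IDX:
# counterfactual(risk_model, features, HYDRATION_IDX, np.linspace(0, 1, 21))
&lt;/code>&lt;/pre>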
&lt;h5 id="this-material-is-based-upon-work-supported-by-the-us-national-science-foundation-nsf-under-award-number-2531465-any-opinions-findings-and-conclusions-or-recommendations-expressed-in-this-material-do-not-necessarily-reflect-the-views-of-the-us-national-science-foundation">&lt;em>This material is based upon work supported by the U.S. National Science Foundation (NSF) under Award Number 2531465. Any opinions, findings and conclusions or recommendations expressed in this material do not necessarily reflect the views of the U.S. National Science Foundation.&lt;/em>&lt;/h5>
&lt;style>
  .people-widget .col-12.col-lg-auto {
    flex: 0 0 auto;
    width: auto;
    max-width: 25%;
  }
  @media (max-width: 768px) {
    .people-widget .col-12.col-lg-auto {
      max-width: 50%;
    }
  }
&lt;/style>
&lt;div class="container">
&lt;div class="row justify-content-center people-widget">
&lt;div class="col-md-12 section-heading">&lt;h1>Team&lt;/h1>&lt;/div>
&lt;div class="col-md-12">&lt;h2 class="mb-4">Faculty&lt;/h2>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/4018242">&lt;img class="avatar avatar-circle" src="Faculty/hassan.jpg" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/4018242">Hassan Zadeh&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/1783317">&lt;img class="avatar avatar-circle" src="Faculty/mbuman.png" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/1783317">Matthew Buman&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/4872722">&lt;img class="avatar avatar-circle" src="Faculty/smcarpe4.png" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/4872722">Stephanie Carpenter&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/3166703">&lt;img class="avatar avatar-circle" src="Faculty/fwardena.png" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/3166703">Floris Wardenaar&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/3335553">&lt;img class="avatar avatar-circle" src="Faculty/skavoura.png" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/3335553">Stavros Kavouras&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/1795222">&lt;img class="avatar avatar-circle" src="Faculty/pavan.jpg" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/1795222">Pavan Turaga&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/3817853">&lt;img class="avatar avatar-circle" src="Faculty/mkhatib2.png" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/3817853">Maissa Khatib&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://www.mayo.edu/research/faculty/kocher-jean-pierre-a-ph-d/bio-00094538">&lt;img class="avatar avatar-circle" src="Faculty/Jean-Pierre.jpg" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://www.mayo.edu/research/faculty/kocher-jean-pierre-a-ph-d/bio-00094538">Jean-Pierre A. Kocher&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;div class="col-md-12">&lt;h2 class="mb-4">Students&lt;/h2>&lt;/div>
&lt;div class="col-12 col-lg-auto people-person">&lt;a href="https://search.asu.edu/profile/4754067">&lt;img class="avatar avatar-circle" src="Students/shovito.jpg" alt="Avatar">&lt;/a>&lt;div class="portrait-title">&lt;h2>&lt;a href="https://search.asu.edu/profile/4754067">Shovito Barua Soumma&lt;/a>&lt;/h2>&lt;/div>&lt;/div>
&lt;/div>
&lt;/div></description></item><item><title>Mental Health</title><link>https://www.shovitobarua.com/project/mental-health/</link><pubDate>Wed, 13 Jan 2021 23:58:23 -0800</pubDate><guid>https://www.shovitobarua.com/project/mental-health/</guid><description>&lt;!-- Stress and challenges associated with stress management are prevalent problems of modern life. Many physical and mental health problems are driven by or escalate with the degree of stress. Stress has harmful effects on those who suffer from mental and physical health problems. Therefore, a comprehensive study of stress and its effects is an important research topic in the mobile health domain. Our research lies at the intersection of sensor systems and machine learning, in which we research methods of detecting stress in real-life settings. We use wearable sensor systems to capture biomarkers of stress and design and develop machine learning algorithms for stress detection and classification. Our research aims to develop tools, methodologies, and algorithms for comprehensive approaches to stress detection and to invent smart intervention strategies to promote the well-being of individuals. --></description></item><item><title>Human-in-the-Loop Learning</title><link>https://www.shovitobarua.com/project/human-in-the-loop-learning/</link><pubDate>Thu, 27 Aug 2020 05:18:17 +0000</pubDate><guid>https://www.shovitobarua.com/project/human-in-the-loop-learning/</guid><description>&lt;h1 id="user-studies">User Studies:&lt;/h1>
&lt;h2 id="human-in-the-loop-data-collection-using-wearable-sensors-and-mobile-devices-for-machine-learning-algorithm-design">Human-in-the-Loop Data Collection Using Wearable Sensors and Mobile Devices for Machine Learning Algorithm Design&lt;/h2>
&lt;p>This research study aims to collect sensor data that will be used to develop machine learning algorithms. We want to collect data from a set of typical activities of daily living. The data collected in this study will be fully de-identified. You are being asked to take part because you are eligible to participate in this experiment. You will be considered eligible if you 1) are 18 years of age or older, 2) speak English, 3) are willing to participate in data collection, and 4) have an Android smartphone with a data plan. You are not eligible to participate if you have a severe cognitive, hearing, visual, or mobility impairment that would impact your ability to complete study procedures. We will collect data from 20 individuals in this study.&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://ghasemzadeh.com/cps_resources/cps_flyer_aug_2023.pdf" target="_blank">Flyer&lt;/a>&lt;/li>
&lt;li>Interested? Fill out this &lt;a href="https://www.ghasemzadeh.com/cps" target="_blank">screening survey&lt;/a>.&lt;/li>
&lt;li>&lt;a href="https://ghasemzadeh.com/cps_resources/cps_consent_aug_2023.pdf" target="_blank">Consent Form&lt;/a>&lt;/li>
&lt;/ul>
&lt;h1 id="related-research-papers">Related Research Papers:&lt;/h1>
&lt;h2 id="mindful-active-learning">Mindful Active Learning&lt;/h2>
&lt;p>We propose a novel active learning framework for activity recognition using wearable sensors. Our work is unique in that it takes the limitations of the oracle into account when selecting sensor data for annotation by the oracle. Our approach is inspired by human beings' limited capacity to respond to prompts on their mobile device. This capacity constraint is manifested not only in the number of queries that a person can respond to in a given time frame but also in the time lag between the query issuance and the oracle response. We introduce the notion of mindful active learning and propose a computational framework, called EMMA, to maximize active learning performance while taking the informativeness of sensor data, the query budget, and human memory into account. We formulate this optimization problem, propose an approach to model memory retention, discuss the complexity of the problem, and develop a greedy heuristic to solve it. Additionally, we design an approach to perform mindful active learning in batch mode, where multiple sensor observations are selected simultaneously for querying the oracle. We demonstrate the effectiveness of our approach using three publicly available activity datasets and by simulating oracles with various memory strengths. We show that the activity recognition accuracy ranges from 21% to 97% depending on memory strength, query budget, and the difficulty of the machine learning task. Our results also indicate that EMMA achieves an accuracy level that is, on average, 13.5% higher than when only the informativeness of the sensor data is considered for active learning. Moreover, we show that the performance of our approach is at most 20% less than the experimental upper bound and up to 80% higher than the experimental lower bound. To evaluate the performance of EMMA for batch active learning, we design two instantiations of EMMA that perform active learning in batch mode. We show that these algorithms improve the algorithm training time at the cost of reduced accuracy. Another finding in our work is that integrating clustering into the process of selecting sensor observations for batch active learning improves the activity learning performance by 11.1% on average, mainly by reducing the redundancy among the selected sensor observations. We observe that mindful active learning is most beneficial when the query budget is small and/or the oracle&amp;rsquo;s memory is weak. This observation emphasizes the advantages of utilizing mindful active learning strategies in mobile health settings that involve interaction with older adults and other populations with cognitive impairments.&lt;/p>
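&lt;p>As a rough sketch of the greedy selection idea, the snippet below scores each unlabeled sample by the product of its informativeness (prediction entropy) and a modeled probability that the oracle still remembers the corresponding activity, then queries the top-scoring samples within the budget. The exponential-decay retention model and the entropy score are illustrative assumptions, not the EMMA implementation.&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def retention(time_lags_s, strength=0.8):
    # Hypothetical exponential-decay memory model: the probability that
    # the oracle still recalls an activity time_lags_s seconds later.
    return np.exp(-np.asarray(time_lags_s, dtype=float) / (strength * 3600.0))

def informativeness(probs):
    # Prediction entropy of the classifier's class probabilities (n, k).
    p = np.clip(np.asarray(probs), 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_queries(probs, time_lags_s, budget, strength=0.8):
    # Greedy "mindful" selection: weight informativeness by modeled recall
    # and query the highest-scoring samples within the labeling budget.
    scores = informativeness(probs) * retention(time_lags_s, strength)
    return np.argsort(scores)[::-1][:budget]
&lt;/code>&lt;/pre>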
&lt;h2 id="human-in-the-loop-learning-for-personalized-diet-monitoring-from-unstructured-mobile-data">Human-in-the-Loop Learning for Personalized Diet Monitoring from Unstructured Mobile Data&lt;/h2>
&lt;p>Lifestyle interventions focused on diet are crucial in the self-management and prevention of many chronic conditions such as obesity, cardiovascular disease, diabetes, and cancer. Such interventions require a diet monitoring approach to estimate overall dietary composition and energy intake. Although wearable sensors have been used to estimate eating context (e.g., food type and eating time), accurate monitoring of dietary intake has remained a challenging problem. In particular, because monitoring dietary intake is a self-administered task that requires the end-user to record or report their nutrition intake, current diet monitoring technologies are prone to measurement errors related to challenges of human memory, estimation, and bias. New approaches based on mobile devices have been proposed to facilitate the process of dietary intake recording. These technologies require individuals to use mobile devices such as smartphones to record nutrition intake by either entering text or taking images of the food. Such approaches, however, suffer from errors due to low adherence to technology adoption and the time sensitivity of the dietary intake context. We introduce EZNutriPal, an interactive diet monitoring system that operates on unstructured mobile data such as speech and free-text to facilitate dietary recording, real-time prompting, and personalized nutrition monitoring. EZNutriPal features a natural language processing unit that learns incrementally to add user-specific nutrition data and rules to the system. To prevent missing data that are required for dietary monitoring (e.g., calorie intake estimation), EZNutriPal devises an interactive operating mode that prompts the end-user to complete missing data in real time. Additionally, we propose a combinatorial optimization approach to identify the most appropriate pairs of food names and food quantities in complex input sentences. We evaluate the performance of EZNutriPal using real data collected from 23 human subjects who participated in two user studies conducted over 13 days each. The results demonstrate that EZNutriPal achieves an accuracy of 89.7% in calorie intake estimation. We also assess the impacts of the incremental training and interactive prompting technologies on the accuracy of nutrient intake estimation and show that incremental training and interactive prompting improve the performance of diet monitoring by 49.6% and 29.1%, respectively, compared to a system without such computing units.&lt;/p>
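&lt;p>The food-quantity pairing step can be framed as an assignment problem. The sketch below is a simplified stand-in for the combinatorial optimization described above, not EZNutriPal's actual formulation: it pairs detected food-name and quantity mentions so that the total token distance between paired mentions is minimized, using the Hungarian algorithm.&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_foods_quantities(food_positions, quantity_positions):
    # Cost of pairing a food mention with a quantity mention is their
    # token distance in the sentence; minimize the total cost.
    cost = np.abs(np.subtract.outer(np.asarray(food_positions),
                                    np.asarray(quantity_positions)))
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# "2 eggs and a bowl of oatmeal": foods at tokens 1 and 6, quantities at
# tokens 0 and 4; the assignment pairs eggs with "2", oatmeal with "bowl".
print(pair_foods_quantities([1, 6], [0, 4]))  # [(0, 0), (1, 1)]
&lt;/code>&lt;/pre>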
&lt;h2 id="proximity-based-active-learning-for-eating-moment-recognition-in-wearable-systems">Proximity-Based Active Learning for Eating Moment Recognition in Wearable Systems&lt;/h2>
&lt;p>Detecting when eating occurs is an essential step toward automatic dietary monitoring, medication adherence assessment, and diet-related health interventions. Wearable technologies play a central role in designing unobtrusive diet monitoring solutions by leveraging machine learning algorithms that work on time-series sensor data to detect eating moments. While much research has been done on developing activity recognition and eating moment detection algorithms, the performance of the detection algorithms drops substantially when the model is utilized by a new user. To facilitate the development of personalized models, we propose PALS, Proximity-based Active Learning on Streaming data, a novel proximity-based model for recognizing eating gestures that significantly decreases the need for labeled data from new users. Our extensive analysis in both controlled and uncontrolled settings indicates that the F-score of PALS ranges from 22% to 39% for a budget that varies from 10 to 60 queries. Furthermore, compared to the state-of-the-art approaches, offline PALS achieves up to 40% higher recall and 12% higher F-score in detecting eating gestures.&lt;/p></description></item><item><title>Gait and Mobility</title><link>https://www.shovitobarua.com/project/gait-and-mobility/</link><pubDate>Mon, 13 Jan 2020 23:58:41 -0800</pubDate><guid>https://www.shovitobarua.com/project/gait-and-mobility/</guid><description>&lt;p>&lt;img src="gait2.jpg" alt="">&lt;/p>
&lt;p>The utility of wearable sensors for continuous gait monitoring has grown substantially, enabling novel applications in mobility assessment in healthcare. Existing approaches for gait cycle detection rely on predefined or experimentally tuned platform parameters and are often platform-specific, parameter-sensitive, and unreliable in noisy environments, with limited generalizability. To address these challenges, we develop algorithms and tools for reliable, platform-independent, and reconfigurable gait cycle detection and step counting. We also study the utility of gait monitoring in various populations.&lt;/p>
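&lt;p>For intuition, the sketch below shows one parameter-light way to recover the dominant gait cycle length: compute the orientation-independent acceleration magnitude, take its normalized autocorrelation, and pick the strongest peak within a physiologically plausible stride range. This is a minimal sketch of the general approach, with an assumed 0.4 to 2.0 s stride window, not the exact algorithm we develop.&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np
from scipy.signal import find_peaks

def estimate_cycle_length(accel_xyz, fs):
    # accel_xyz: (n, 3) accelerometer samples; fs: sampling rate in Hz.
    mag = np.linalg.norm(accel_xyz, axis=1)   # orientation-independent
    mag = mag - mag.mean()
    ac = np.correlate(mag, mag, mode="full")[mag.size - 1:]
    ac = ac / ac[0]                           # normalize by zero-lag energy
    lo, hi = int(0.4 * fs), int(2.0 * fs)     # plausible stride durations
    peaks, _ = find_peaks(ac[lo:hi])
    if peaks.size == 0:
        return None
    return lo + peaks[np.argmax(ac[lo:hi][peaks])]  # cycle length in samples
&lt;/code>&lt;/pre>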
&lt;!-- ## CyclePro
The utility of wearable sensors for continuous gait monitoring has grown substantially, enabling novel applications on mobility assessment in healthcare. Existing approaches for gait cycle detection rely on predefined or experimentally tuned platform parameters and are often platform-specific, parameter-sensitive, and unreliable in noisy environments with constrained generalizability. To address these challenges, we introduce CyclePro, a novel framework for reliable and platform-independent gait cycle detection. CyclePro offers unique features: (1) It leverages physical properties of human gait to learn model parameters; (2) captured signals are transformed into signal magnitude and processed through a normalized cross-correlation module to compensate for noise and search for repetitive patterns without predefined parameters; and (3) an optimal peak detection algorithm is developed to accurately find strides within the motion sensor data.
## Gait Speed and Survival of Older Cancer Surgical Patients
Gait speed in older patients with cancer is associated with their mortality risk. One approach to assess the gait speed is through the Timed Up and Go (TUG) test. However, performing TUG is personnel-dependent and may be infeasible to perform on every patient as routine care. We utilize machine learning algorithms to automatically predict the result of the TUG test and its association with survival using patient-generated responses. In this research, we propose to learn a decision tree classifier based on functional status, obtained from preoperative geriatric assessment, and TUG test performance of older patients with cancer. The functional status data are used as input features to the decision tree and the actual TUG data are used as ground truth labels. The decision tree is then constructed to assign each patient one of the three categories: “TUG less than ten seconds”, “TUG more than ten seconds”, and “uncertain.” We demonstrate that machine learning algorithms can be trained to accurately predict the gait speed of older patients with cancer based on their response to questions addressing other aspects of functional status.
## Gait Pattern Examination in Glaucoma Patients
This research presents a wearable wireless sensor system designed for real-time gait pattern analysis in glaucoma patients. Many clinical studies have reported that glaucoma patients experienced mobility issues such as walking slowly and bumping into obstacles frequently. The gait attributes of glaucoma patients, however, have not been studied in the literature. We design and develop a shoe-integrated sensing system for objective bio-information collection, utilize signal processing algorithms for feature estimation and leverage machine learning as well as statistical analysis approaches for gait pattern examination. The developed sensor platform is utilized in a randomized clinical trial with 19 participants. We develop signal processing and machine learning algorithms to provide a quantitative comparison between gait characteristics in older adults with and without glaucoma. Our results demonstrate that machine learning algorithms achieve an accuracy of over 80% in distinguishing extracted gait features of those with glaucoma from healthy individuals.
## How Accurate is Your Activity Tracker?
As commercially available activity trackers are being utilized for step counting in clinical trials, the research community remains uncertain about reliability of the trackers, particularly in studies that involve aided walking and in those involving low-intensity activities (i.e., a metabolic equivalent of task &lt; 3). While these trackers have been tested for reliability during normal walking and running, there has been limited research on validating these trackers during low-intensity activities and walking with assistive tools. The aim of this study is threefold: (1) To determine the accuracy of three Fitbit devices (i.e., Zip, One, and Flex) at different wearing positions (i.e., pants pocket, chest, and wrist) during walking at three different speeds including 2.5 km/h, 5 km/h, and 8 km/h performed by healthy adults during treadmill walking; (2) To determine the accuracy of the Fitbit trackers (Zip, One, and Flex) worn at different sites (pants pocket, chest, and wrist) during real-life activities including walking with a shopping cart, walking with a walker, and eating; and (3) To examine whether intensity of physical activities impacts the choice of optimal wearing site of the tracker.
## Glaucoma-Specific Gait Pattern Assessment Using Body-Worn Sensors
Many studies have reported that glaucoma patients experience mobility issues such as walking slowly and bumping into obstacles frequently. However, little is known to date about how a person’s gait is impacted due to glaucoma. This research presents signal processing and machine learning algorithms to automatically detect effective gait cycles and extract both steady-state and spatio-temporal gait features from the signal segments. We perform machine learning algorithms to distinguish glaucoma patients from healthy controls, and identify several prominent features with high discriminability between the two groups. The results demonstrate that classification algorithms can be used to identify gait patterns of glaucoma patients with an accuracy higher than 94% in a 10-meter-walk test. It is also demonstrated that gait features such as evenness of the sway speed along medio-lateral direction between the two feet are significantly different (p-value &lt; 0.001) between older adults with and without glaucoma. --></description></item></channel></rss>