Helping college students manage their stress-eating tendencies by leveraging wearable tech and just-in-time interventions.
As part of one of my core classes at Georgia Tech, 'HCI Foundations', students were divided into teams of four and tasked with working on a semester-long project to formulate an evidence-based design solution. The theme for the class was "wellbeing," and my team chose to tackle the problem of stress-eating in college students.
College is a huge transition for students, presenting an environment that amplifies academic, social, and financial pressures. Acting as an incubator for stress, college often leaves students with few helpful resources to cope with these challenges. In such situations, one of the maladaptive coping strategies students turn to is stress-eating.
Stress, the hormones it unleashes, and the effects of high-fat, sugary 'comfort foods' push people toward overeating. With unpredictable schedules and little time to devote to nutritional needs, students often skip meals and rely on unhealthy snacks (we're all guilty of reaching for that bag of Flamin' Hot Cheetos while trying to beat a deadline). For many, barriers to healthy eating include time constraints, limited availability of nutritious options, and the high cost of healthier alternatives. We therefore realized that college students need a low-effort, affordable solution to curb these maladaptive eating behaviors.
Over the course of 12 weeks, we developed an application that we feel tackles these issues successfully. CompanionAI creates a calming and encouraging experience for users, providing them with a delightful and engaging way to alleviate their stress and curb stress-eating.
How can we help college students manage their stress-eating tendencies?
A mobile application that works in tandem with a wearable device to identify users' stress levels and suggest stress-alleviation activities based on physiological metrics.
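To make the stress-detection concept concrete, here is a minimal illustrative sketch of the kind of rule such a system could apply to wearable metrics. The thresholds, metric names (heart rate, RMSSD heart-rate variability), and baseline value are illustrative assumptions, not values from our study or from any particular wearable's API.

```python
def is_stressed(heart_rate_bpm: float, hrv_rmssd_ms: float,
                resting_hr_bpm: float = 65.0) -> bool:
    """Hypothetical rule: flag a stress episode when heart rate is well
    above the resting baseline AND heart-rate variability (RMSSD) is
    suppressed. Both thresholds are illustrative assumptions."""
    elevated_hr = heart_rate_bpm > resting_hr_bpm * 1.2  # 20% above baseline
    suppressed_hrv = hrv_rmssd_ms < 20.0                 # low RMSSD, in ms
    return elevated_hr and suppressed_hrv

print(is_stressed(95, 15))  # elevated HR, suppressed HRV -> True
print(is_stressed(70, 45))  # near baseline               -> False
```

A real implementation would replace this fixed-threshold rule with a per-user calibrated model, but the sketch shows how physiological metrics could trigger a just-in-time intervention.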
First off, we aimed to understand our stakeholders, users, and existing solutions. To holistically understand the problem space and our users' needs, our team created a research plan that incorporated the following methods.
We divided our research into two phases. After several interpretation sessions, we synthesized all of the data into an affinity map, which helped us build personas, empathy maps, journey maps, and design concepts.
To better understand our problem space, we conducted a literature review to learn about existing research on stress-eating among college students. We pored over published studies and journal articles to understand the connection between stress and eating and the specific triggers of stress-eating that students may encounter.
Upon establishing our research goals, we talked to college students and two registered dietitians at Georgia Tech to better understand how stress affects students' eating habits. Through these conversations, we were also able to identify our target users and their goals.
After understanding why stress-eating happens, we wanted to know if there were existing tools that address our problem of interest. While we didn't find a solution that specifically addresses stress eating among students, we found applications that address emotional eating through mindfulness.
Noom uses periodic journals that prompt users to consciously think about their eating habits, while Am I Hungry? intervenes before the emotional-eating episode even takes place by guiding the user through a series of questions. Alternatively, In the Moment leverages gamification and positive affirmations to steer users away from emotional eating and stress-eating.
Upon attaining a theoretical understanding of the problem space, we moved forward with surveys to obtain clear, quantitative data about students' snacking habits and coping strategies. Because surveys are relatively easy to distribute and analyze, we reasoned that this method would let us collect data from a larger number of target users, giving us a more comprehensive and reliable data set.
We distributed our survey through various communication channels to peers and students across the Georgia Tech network. We also shared the survey with students at various colleges across the US, including Georgetown University and the University of Minnesota. In total, we received 74 survey responses.
To better inform our design implications, we wanted to identify which interventions would help our target users overcome the urge to stress-eat. To gather more in-depth contextual data about our users' day-to-day behaviors, we decided to conduct a diary study.
To recruit participants for the diary study, we first conducted an initial screening survey through which we excluded anyone who reported having been diagnosed with an eating disorder or mental health disorder. We set this exclusion criterion because we recognized that asking participants to record their eating habits and report their stress for a week could allow problematic thoughts and behaviors to resurface.
In the survey, we asked participants how they relieve stress and how they view their relationship with food. In addition, we asked participants to rate the likelihood of doing a particular activity (meditating, doing a physical activity, talking to a loved one, playing a game, taking a nap, etc.) when they felt stressed. Overall, 11 people responded, and three were excluded due to reported mental health or eating disorders.
We designed the diary study in a survey format on Qualtrics and conducted feedback sessions with two registered dietitians before deploying it. Each participant received an anonymized link to their personal survey, which they could access using the same link every day. Overall, we were able to collect seven days' worth of data from six participants and five days' worth of data from two participants.
The insights gained from the wants-and-needs analysis allowed us to iterate on our visual task analysis model and create a more holistic picture of the user experience. For the task analysis, our focus was on understanding what our users do when they are faced with a stressor: how they try to alleviate their stress and how they end up turning to food as a coping mechanism. We found large variation in participants' stress-eating behaviors, including when they stress-eat, in which settings, and which foods they choose. However, we identified two distinct factors that led our users to choose food over other coping strategies: lack of time to spend on a stress-alleviation activity and lack of motivation to complete one.
We started the design ideation phase of the project by establishing functional and non-functional design requirements. We then used brainwriting to come up with divergent solutions and sketched our top 10 design ideas. Finally, we picked our top two designs and established clear criteria to determine which design to carry forward into our final prototype.
We studied the findings from our literature review, survey, and diary study and divided our design requirements into functional and non-functional requirements.
Upon reviewing the functional requirements, the team came together for a brainwriting exercise, generating design ideas through rapid brainstorming. The exercise consisted of four rounds, each lasting five minutes. We used a template to guide the process, with each team member taking turns writing new ideas or expanding on ideas from the previous round. Once a round finished, team members passed their sheet to the person on their right and the process repeated. Ultimately, we generated 30+ unique ideas through the brainwriting exercise. Upon completing it, the team discussed and grouped ideas that complemented each other, removing any that did not meet our functional design requirements.
After coming up with 30+ initial design ideas in the Brainwriting session, the team narrowed them down and sketched out the top 10 designs.
After the design brainstorming and ideation phases, we decided on the top two designs that met our design requirements and user goals.
In order to move forward with the solution that best caters to our application’s needs, we reevaluated both ideas and assessed how well each one satisfied our functional design requirements and user goals. Ultimately, the design that fulfilled the majority of our functional requirements and emerged as the most effective solution was CompanionAI.
We did a 5-day design sprint to convert our ideation concepts and sketches into an interactive, high-fidelity prototype. We used a clean interface with subtle contrast and rounded edges to make the content more appealing.
Before conducting our evaluations, the team went through each of our functional requirements and their corresponding evaluation goals and ultimately picked the following design requirements to evaluate:
To understand how well our application met these design requirements, we used a think-aloud, task-based benchmark protocol in a semi-structured interview format to test our prototype with four target users. During these interviews, participants were asked to perform six application-specific tasks, rate the difficulty of each, and complete a System Usability Scale (SUS) questionnaire. At this stage in the project, we wanted to understand users' opinions while they interacted with the system and how easily they could accomplish the given tasks.
Target Population: Since the think-aloud protocol is solely focused on identifying usability issues within the prototype and does not address the stress-detection component of CompanionAI, the inclusion criteria for this evaluation were less rigid; we did not require participants to be free of any mental health or eating disorders.
Test Users: We had four first-year MS-HCI students evaluate our prototype for the discount evaluation. These students fit our target user criteria very closely since they were all students currently enrolled in a graduate program.
Procedure: In each evaluation session, the facilitator began by providing a brief overview of the problem statement and the primary aim of our application. The first 15 minutes of the session were spent observing the user interact with the prototype using a think-aloud protocol. The facilitator would then ask the designated user to perform a series of benchmark tasks to test the viability of the application.
The users were asked to perform and rate the following tasks on a scale of 1-7, 1 being very easy and 7 being very difficult:
Upon completing and rating the above tasks, we then asked users to perform two separate tasks that were designed to evaluate the two chosen design requirements:
Upon completing these tasks, which tested the viability of our two design requirements, we asked users to rate our application using the System Usability Scale, scoring a set of usability statements from Strongly Disagree (1) to Strongly Agree (5).
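For readers unfamiliar with how SUS responses become a single score: odd-numbered items are positively worded and contribute (response - 1), even-numbered items are negatively worded and contribute (5 - response), and the sum is scaled by 2.5 to yield a 0-100 score. A minimal sketch of this standard formula (the example responses are illustrative, not our study data):

```python
def sus_score(responses):
    """Compute the standard SUS score (0-100) from ten 1-5 responses.
    Odd-numbered items (index 0, 2, ...) contribute (response - 1);
    even-numbered items (index 1, 3, ...) contribute (5 - response)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# All-neutral responses (3s) land at the midpoint score.
print(sus_score([3] * 10))  # -> 50.0
```

This scoring is what lets a 10-item Likert questionnaire be summarized as a single comparable usability number per participant.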
Overall, we gathered a mostly positive response to the prototype. Participants were able to perform the benchmark tasks with ease, and the semi-structured interviews allowed us to obtain qualitative data about users' thoughts while using the app. Here are a few highlights from the results: