What is at stake when sensors can predict death? My job is to find out

I interview family members of people with dementia to understand what technology can do to help them die with dignity—and the pitfalls we must avoid.

By Justin Haugland-Pruitt, PhD candidate at the University of Bergen and DIGIT member, Class of 2025.

  • Each year, new PhD fellows in the DIGIT Research School are invited to submit a short popular science text about their research as part of the research school’s kick-off.

    This contribution by Justin Haugland-Pruitt was selected as one of the best submissions from the 2025 cohort.

    This article is written for policymakers, healthcare practitioners, and anyone interested in ethical technology in elder care.

Sensing digital biomarkers: A participant in the 5-D project at UiB wearing a Garmin smartwatch that monitors her movement and heart rate. Photo credit: Silje Robinson.

Can technology help us understand what someone with dementia is feeling when they can’t tell us themselves?

This question is no longer hypothetical. Advances in digital biomarkers—measurable signals from the body collected through sensors and wearable devices—are rapidly moving from research labs into real-world care. They promise earlier recognition of distress, better pain management, and even predictions about the final stage of life.

But before society embraces these tools, we must ask what they mean for autonomy, consent, and the deeply human experience of dying.

For people with dementia, who often lose the ability to communicate clearly, digital biomarkers may help caregivers recognize suffering sooner. A sensor that detects agitation, disrupted sleep, or changes in heart rate could enable staff to respond faster.
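To make this concrete, here is a minimal, purely illustrative sketch of the kind of rule-based flagging such a system might perform. It is not the 5-D project's actual method; the data fields, thresholds, and function names are invented for this example.

```python
# Illustrative only: a toy, rule-based "distress flag" on wearable data.
# Field names, thresholds, and logic are invented for this sketch and are
# not the method used in the 5-D project.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Sample:
    minute: int        # minutes since midnight
    heart_rate: float  # beats per minute from the smartwatch
    movement: float    # arbitrary activity count from the accelerometer

def flag_possible_distress(samples, baseline_window=60, z_threshold=2.5):
    """Flag minutes where heart rate is unusually high relative to the
    resident's own recent baseline while movement stays low (i.e. the
    elevation is not explained by physical activity)."""
    flags = []
    for i in range(baseline_window, len(samples)):
        baseline = [s.heart_rate for s in samples[i - baseline_window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        current = samples[i]
        if sigma > 0:
            z = (current.heart_rate - mu) / sigma
            if z > z_threshold and current.movement < 1.0:
                flags.append(current.minute)  # candidate for staff review
    return flags
```

Even in this toy form, the ethical questions discussed below are already visible: who chooses the threshold, who reviews the flags, and what happens when the system is wrong.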


The global prevalence of dementia is expected to rise from 57 million in 2019 to 152 million by 2050, with Alzheimer’s disease being the most common cause. Studies show that up to 80% of nursing home residents with dementia experience clinically significant pain, frequently undiagnosed and untreated.


Imagine being told, based on sensor data, that your loved one is likely in the final days of life. That knowledge could be a gift: time to prepare, to say goodbye, to make sure comfort is prioritized. But it could also be a burden. What if the prediction is wrong? What if it leads to decisions that hasten death, or change the way someone is treated before their time?

These are not abstract concerns. In clinical settings, decisions about pain management, sedation, and medical intervention often hinge on assessments of suffering and prognosis. If digital biomarkers suggest someone is actively dying, caregivers might choose to administer stronger painkillers, which can also suppress breathing and accelerate death. In cases where the person cannot speak for themselves, as is often true with advanced dementia, the ethical stakes are even higher.

Consent is another unresolved question. Can a person with dementia truly agree to be monitored? Do families feel empowered to make a decision on behalf of their loved ones—or pressured by the promise of better care? These questions belong not only in ethics boards but in everyday conversations between clinicians, caregivers, and families.

This is where my work begins.

At the University of Bergen’s Centre for Elderly and Nursing Home Medicine (SEFAS), the ongoing 5-D project (Decoding Death and Dying in people with Dementia by Digital thanotyping) uses wearable sensors to study distress and end-of-life patterns in nursing home residents with dementia. My role is to ask the difficult questions—not only of the technology, but of the people living alongside it.

This is why I ask family members how they feel about participating in studies that use sensor data to predict death. These are the people who make proxy decisions for their parents or partners, who live with the emotional weight of those choices, and who must navigate the tension between hope, realism, and compassion. They share diverse experiences—watching a husband visibly in pain but unable to speak, trying to fulfill a mother’s wishes, keeping diaries to understand a disease that steals words.

Families I speak to consistently emphasize this point: technology should support—not replace—the relationships and moral judgment that define good caregiving. A sensor may detect a spike in heart rate, but it cannot know the person’s story or what dignity means to them. Dying is not just a technical problem; it is a human one.

As one family member put it:

"I don't like the idea of being watched in my home, but my mother, this is her home now, so she's being watched in some ways. But I feel it's more positive than negative. Because I think she needs somebody to take care of her."

Predictive technology in end-of-life care demands caution. Clinicians must be careful about when, how, and with whom these tools are used. Society needs to prepare for how this will shape our view of death, and of the time leading up to it. Accuracy is not enough; responsible use matters.

Digital biomarkers are not inherently good or bad. They are tools. They can help us hear what a person can no longer put into words. They can guide care, reduce suffering, and offer clarity. But dying is not just a technical problem to be solved; it is a deeply human experience. The future of care may be more informed, but it must also be more humane.

So we end with a question: What kind of death do you want—and who gets to decide?
