From Wearables to Workflows: A Q&A with Dr. Aiden J. Chauntry on What Public Health Practitioners Should Know About Physiological Data

Introduction

Public health and program teams are increasingly interested in using wearable and physiological data to inform prevention, surveillance, and intervention efforts, yet many struggle with a fundamental challenge: knowing when these data are truly ready to support action rather than adding uncertainty, burden, or unintended consequences. Practitioners must decide how much weight to give wearable signals, how to interpret them in context, and how to integrate them into real‑world workflows marked by limited staff capacity, variable data quality, and equity considerations.

Dr. Aiden J. Chauntry is a clinical research scientist at an industry leader in medical device design and implementation, where his work directly addresses these challenges. Working at the intersection of cardiovascular physiology, stress science, and real‑world public health implementation, his research focuses on validating wearable‑derived physiological data in real‑world settings and helping teams understand when—and how—those data can actually support decision‑making. Drawing on applied research and operational validation, he examines how lifestyle behaviors—such as sedentary time, physical activity, and stress exposure—translate into measurable physiological signals, and whether those signals can be captured, interpreted, and used responsibly outside controlled laboratory environments.

In this Q&A, Dr. Chauntry addresses the practical questions practitioners face, including how to assess decision‑readiness, interpret signals in context, and avoid common pitfalls related to usability, data quality, staff capacity, and equity—offering guidance for integrating wearable data into public health workflows that support, rather than replace, professional judgment.

When Wearable Data Are Ready for Action

Q: When teams are considering whether to act on wearable‑derived data, what questions and warning signs most clearly separate data that are truly decision‑ready from data that are interesting but likely to stall or undermine confident action?

A: In my experience across academic wearable studies and current validation work in real-world settings, wearable data become decision-ready when there is a clear chain from measurement to interpretation to action. This means that teams understand what the metric represents physiologically, what level of uncertainty is acceptable for the intended use, and what action should follow when the values change. The performance of the wearable must also have been tested in the actual setting where the data will be used (not just under ideal lab conditions). Some warning signs that the data may stall or undermine decision-making include: (1) the metric is a proprietary “score” with unclear meaning or poor transparency, (2) missingness and artifact rates are high in the real workflow, (3) the data are not available in the timeframe needed for the intended decision or action, or (4) different stakeholders interpret the same signal differently. A practical rule I like to use: if the team spends more time debating what the metric means than what to do with it, the data are probably not yet decision-ready.

Choosing Metrics That Work in Real‑World Settings

Q: Drawing on your validation work, which types of physiological measures tend to integrate most smoothly into applied public health efforts today—and which ones often introduce complications once teams try to use them operationally?

A: From a translational physiology and validation perspective, measures that integrate most smoothly are the ones that are simple, interpretable, and less sensitive to context or sensor placement. This can include things like physical activity measures and some relatively stable cardiovascular metrics. By contrast, some other measures (such as “stress biomarkers”) can be very informative, but are heavily influenced by posture, timing, prior physical activity, temperature, medications, hydration, sickness, and many other factors. It is often the case that a metric looks compelling on a screen, but its usability decreases if teams do not understand what is driving changes in the signal. When teams try to use a metric before they have assessed signal quality in the target population, performance under routine workflow conditions, and what level of error is acceptable for the intended decision, this can cause serious problems. So, I do not think it’s as simple as “good metrics” versus “bad metrics.” It is more about fit-for-purpose metrics versus metrics being used outside the conditions where they are currently well understood.

Interpreting Physiological Signals in Context

Q: Because physiological and behavioral data are highly context‑dependent, how should practitioners account for context when deciding how much weight to give wearable metrics in day‑to‑day workflows?

A: Context is key. In practice, I would encourage teams to interpret wearable metrics using a “signal + context + trend” mindset rather than treating a single number as the final answer. For example, the same physiological value may mean very different things depending on whether someone is resting, exercising, under acute mental stress, sleep-deprived, or sick. This means teams should define the conditions under which a metric is most interpretable (e.g., resting vs mental stress), collect contextual data that matters (time of day, posture/activity state, relevant recent events), prioritize within-person trends rather than relying only on one-off values, and document when data quality is uncertain so staff do not give too much weight to low-confidence signals. In my own work, especially when thinking about real-world physiological monitoring, one of the biggest lessons I’ve learned is that context turns data into interpretable information. Without context, even the most accurate measurements can be misinterpreted and misused.
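To make the “signal + context + trend” mindset concrete, here is a minimal Python sketch of within-person trend flagging. The context labels, seven-reading baseline window, and z-score threshold are illustrative assumptions for this sketch, not values from any validated device or program:

```python
from statistics import mean, stdev

def flag_readings(readings, min_history=7, z_threshold=2.0):
    """Flag readings that deviate from a person's own recent baseline.

    Each reading is a (value, context) tuple. Only 'resting' readings are
    compared against the baseline, since context changes what a value means,
    and readings collected before a baseline exists are marked low-confidence
    rather than interpreted.
    """
    flags = []
    history = []  # the person's own prior resting values (within-person trend)
    for value, context in readings:
        if context != "resting":
            flags.append("skip: not interpretable in this context")
        elif len(history) < min_history:
            flags.append("skip: baseline still forming")
        else:
            baseline, spread = mean(history), stdev(history)
            z = (value - baseline) / spread if spread else 0.0
            flags.append("review" if abs(z) > z_threshold else "ok")
        if context == "resting":
            history.append(value)
    return flags
```

The point of the sketch is the ordering of the checks: context is evaluated first, then baseline sufficiency, and only then is the value itself compared to the person’s own trend rather than to a population cutoff.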

Fit, Misfit, and Moving Beyond Pilots

Q: When deciding both whether wearable data are the right tool for a specific public health task and whether they’re ready to move beyond a pilot, what practical criteria and “fit/misfit” signals do you look for—and what signals suggest the data may mislead or distract?

A: I often advise starting with a basic question: is the wearable solving an actual problem, or are we trying to find a use for wearable data simply because they are available? That is a critical starting point. Wearables are a good fit when they provide information that existing systems cannot capture well, or cannot capture at the right frequency, context, or level of detail. This includes more ecologically valid data or insight into real-world physiology/behavior that is hard to capture with traditional methods or in traditional settings. For moving beyond a pilot, I look for evidence in five areas: (1) use-case clarity (what decision/workflow is being improved), (2) technical performance in the real setting/environment, (3) operational feasibility (staff time, training, device management, troubleshooting), (4) data interpretability, and (5) net benefit (i.e., the data actually support action). There are also key warning signs that suggest a poor fit, including pilots that do not have the potential to change decisions or workflows, persistent missing data or poor adherence without a realistic plan to fix it, excessive manual processing, and growing confidence in metrics that have not actually been validated for the intended purpose.

Safeguards and Unintended Consequences

Q: Physiological signals related to stress or cardiovascular function can be compelling but easy to over‑interpret. How can practitioners build safeguards into decision‑making so these data inform action without driving unintended consequences?

A: This is a very important point. Physiological signals can feel “objective,” which sometimes gives them more authority than they deserve in a specific context. For example, “stress measures” are typically trying to capture specific aspects of the body’s physiological response to stress, but they do not directly measure how stressed a person feels. That distinction matters, especially because perceived stress and physiological responses do not always map neatly onto each other. In addition, “stress measures” are influenced by many other “non-stress” factors (e.g., exercise, posture, sleep, illness, emotions, caffeine, and environmental conditions), so they can be easy to over-interpret if context is ignored. A practical safeguard is to treat these data as decision support, not decision replacement. In other words, the signal should inform judgment, not automatically determine action, unless the use case has been carefully validated and the action pathway is clearly defined. Specifically, I would advise against acting on isolated readings when trend and context are available, using physiological metrics without clear guidance on what they do and do not mean, and rolling out wearables before staff are trained in interpretation and limitations. Teams should incorporate simple safeguards such as predefined response rules and periodic review of false alarms, missed events, and unintended consequences.

Workflow, Capacity, and Organizational Reality

Q: Once wearable data are collected, what organizational or infrastructure factors most strongly determine whether they become a useful input to public health work—or an additional burden?

A: What matters most is often less about the wearable device itself and more about the systems and workflows around it. Wearable data are most useful when organizations build a clear process for how the data should be reviewed, interpreted, and used in practice. That usually means having clear ownership, defined staff roles, basic data quality checks, practical integration into existing workflows, and realistic expectations about time and staffing. Wearable initiatives can quickly become burdensome when they are layered onto existing responsibilities without any changes to workflow or capacity. A common problem we see is that teams collect large amounts of data but do not have a clear plan or enough time to turn those data into action. Success depends on treating wearables as a process that needs infrastructure and support, not simply as a technology add-on.

Wearable Data as a Complement, Not a Standalone System

Q: In your experience, when do wearable data add the most value as a supplement to existing public health data systems, rather than as a standalone source of information?

A: Most of the time, wearable data add the most value as a complementary layer, not a standalone source of information. Their main strength is that they can often provide continuous or real-world information. This can be very useful for understanding patterns over time, identifying periods of increased physiological strain or behavior change, and improving the timing or targeting of interventions. But on their own, wearable data often lack the broader clinical, behavioral, social, and environmental context needed for confident interpretation. Practically, this means the most value is often found when wearable data are combined with clinical, self-report, or symptom data, as well as relevant contextual information. In essence, wearables tell you when physiology is changing, but other sources often help explain why it is changing and what action is appropriate.

Governance, Transparency, and Trust

Q: What governance and transparency practices (e.g., data ownership, documentation, access, communication with participants/communities) most often determine whether wearable data can be used responsibly and sustained within public health workflows over time?

A: Programs that maintain wearable use over time usually take governance seriously from the start rather than treating it as an afterthought. For example, wearable outputs can change meaningfully with firmware updates, algorithm changes, or different preprocessing pipelines. If those changes are not documented, organizations can lose comparability over time, which can inadvertently undermine trust.

At a minimum, teams should document device/firmware versions, preprocessing rules, and who has access to which data and for what purpose. It also helps to define data ownership, permitted uses, and decision-making authority up front so there is less ambiguity once the data begin to influence workflows.
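The documentation habit described above can be as lightweight as a provenance record written alongside each data export. The following Python sketch uses invented field names and an invented file layout purely for illustration; the substance is that device, firmware, algorithm, and preprocessing versions travel with the data:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """Minimal provenance entry stored alongside each wearable data export."""
    device_model: str
    firmware_version: str
    algorithm_version: str
    preprocessing_rules: str  # e.g., artifact filter and epoch settings
    export_date: str

def record_provenance(path, record):
    """Append a provenance entry (one JSON object per line) so later
    analysts can check whether two exports are actually comparable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

In practice, a simple downstream check might compare `firmware_version` and `algorithm_version` across two records before trending their data together, and flag the pair for review when they differ.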

Communication with participants and communities is also central. People are more likely to engage when expectations are clear and the purpose feels legitimate and beneficial. In my view, responsible use is not just about privacy compliance, it is also about interpretive transparency, accountability, and maintaining trust over time.

Equity and Responsible Implementation

Q: How should practitioners think about equity when deciding whether wearable data are appropriate for a given effort—especially when access, adherence, or data quality may differ across populations or settings?

A: Equity should be built into wearable data initiatives from the outset, rather than treated as something to address later. In public health practice, the key question is not always whether people have access to a wearable, but whether they can use it consistently in real-world conditions and whether the resulting data are comparable across groups. Teams should consider who is more likely to experience lower data quality, who carries the burden of charging, syncing, and troubleshooting the device, and who may be unintentionally excluded if the program assumes high adherence or near-complete data.

In practice, this means assessing patterns of missingness and data quality across populations and settings, as well as being cautious about workflows that may penalize people for structural barriers such as limited connectivity, demanding work schedules, caregiving responsibilities, language access, device comfort, or digital literacy. A useful principle is to avoid making wearable adherence a hidden gatekeeper for receiving services or support, unless there is strong evidence that access, burden, and usability are equitable. Wearable data can strengthen public health programs, but if equity is not built into implementation and interpretation, they can also unintentionally widen existing gaps.
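Assessing missingness across populations, as described above, can start with a very simple summary. This sketch assumes the team has already computed a per-person fraction of missing days; the group labels and numbers are hypothetical:

```python
def missingness_by_group(records):
    """Summarize average missing-data burden per group.

    `records` maps a group label to a list of per-person missing-day
    fractions (0.0 = complete data, 1.0 = fully missing). The result lets
    teams spot groups that would be penalized by a near-complete-data
    assumption.
    """
    return {
        group: sum(fractions) / len(fractions)
        for group, fractions in records.items()
        if fractions  # skip empty groups rather than dividing by zero
    }

def equity_gap(summary):
    """Difference between the worst- and best-covered groups."""
    values = list(summary.values())
    return max(values) - min(values)
```

A large gap is not itself a verdict; it is a prompt to ask whether structural barriers (connectivity, work schedules, caregiving, device comfort) are driving the difference before any workflow treats low adherence as an individual failing.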

Looking Ahead: What Needs to Change

Q: Looking ahead, what changes—technological, analytical, or organizational—would most improve the transition from wearable data collection to actionable public health workflows, and what evidence gaps still need to be addressed?

A: The biggest improvement would be better alignment across three areas: measurement quality, interpretability, and workflow integration. In public health practice, it is not enough for wearables to simply collect large amounts of data; the data also need to be reliable in real-world conditions, understandable to the people using them, and incorporated into workflows in ways that support timely action rather than create additional burden.

On the technology side, the field would benefit from more robust device performance under real-world conditions, clearer indicators of signal quality, and better interoperability across devices and platforms so teams are not repeatedly rebuilding data pipelines. On the analytics side, what is still needed is stronger validation in the actual populations and settings where these tools are being used, along with clearer evidence about which metrics meaningfully improve decisions and outcomes in real workflows.

From an organizational perspective, workforce capacity is a major limiting factor. Many promising efforts struggle not because the idea is poor, but because the technical, clinical, and operational components are handled in separate silos. Public health teams often need people who can bridge physiology, data interpretation, and implementation, so that wearable data can be translated into workable processes.

There are also important evidence gaps that still need to be addressed before wider adoption is justified in many settings. These include whether validation findings generalize across populations and contexts, which thresholds or trends are truly actionable in practice, whether wearable-informed workflows meaningfully improve health outcomes, whether programs are feasible and sustainable over time, and how equity impacts emerge during implementation. I’m optimistic about the field, but I think the next phase needs to focus less on whether we can collect wearable data and more on whether we can use wearable data responsibly, consistently, and in ways that genuinely improve public health decision-making.

Key Takeaways

  • Wearable data are most useful when there is a clear chain from measurement to interpretation to action.
  • Fit‑for‑purpose matters more than novelty; metrics validated in controlled settings may not perform well in real‑world workflows.
  • Physiological signals should be interpreted using a signal + context + trend approach rather than isolated values.
  • Wearables should function as decision support, not decision replacement, with safeguards to prevent over‑interpretation.
  • Workflow integration, staff capacity, and governance often determine whether wearable data help or hinder practice.
  • Equity must be built in from the start to avoid making adherence or data completeness a hidden gatekeeper for services.

 


About Our Guest

Aiden J. Chauntry, PhD, is a Clinical Research Scientist whose work focuses on translational cardiovascular physiology, stress psychophysiology, and lifestyle behaviors. His research examines how physiological responses to stressors and daily lifestyle patterns relate to meaningful health outcomes, and how these relationships can be measured and interpreted reliably enough to support real-world public health practice and clinical decision-making.
Dr. Chauntry’s training spans behavioral medicine, exercise science, and cardiovascular physiology. He completed his PhD in Behavioral Medicine at Loughborough University (UK), following undergraduate and master’s degrees in Exercise and Sport Science from the University of Birmingham (UK), and later undertook postdoctoral training at the University of North Carolina at Chapel Hill (US). Across his academic career, his research has examined stress-related physiology, prolonged sitting and other lifestyle exposures, and their links to cardiovascular risk across a range of populations, with publications in journals such as British Journal of Sports Medicine, Annals of Behavioral Medicine, Biological Psychology, and Discover Public Health.
In his current translational role, Dr. Chauntry focuses on the clinical validation and reproducibility of wearable physiological monitoring in real-world settings. His work emphasizes the practical constraints that determine whether physiological data can be used responsibly in public health and clinical practice, including measurement quality in artifact-prone environments.
His ongoing work also includes implementation-oriented research and program development, including population-level intervention mapping and efforts to support integration into clinical workflows. His overall goal is to translate academic findings into solutions that are robust, reproducible, and implementation-relevant, so they can have a lasting positive impact on cardiovascular outcomes and global public health.
Connect with Dr. Chauntry:

About the Author

Sheryl Monks
Sheryl Monks is the Managing Editor of the Journal of Public Health Management and Practice. She is passionate about connecting public health professionals with the insights and resources they need to improve community health.
