Epidemiology · Health Equity

Effect Modification vs Confounding in Programme Reports

Programme reports often adjust for confounders without checking whether the risk factor actually behaves differently in different groups. That distinction changes what the numbers mean.

April 2, 2026 · 10 min read · Africure Analytics

Most applied epidemiology reports adjust for age, sex, and a few other variables and call it done. That handles confounding. But it misses effect modification entirely, and the two are not the same thing.

Confounding distorts the association

A confounder is a variable that is associated with both the exposure and the outcome, and that distorts the relationship between them. Age is the classic example: older people are more likely to have diabetes and more likely to have heart disease, so if you do not adjust for age, diabetes looks more strongly associated with heart disease than it really is.
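
To make the distortion concrete, here is a minimal simulation sketch in Python (numpy and statsmodels; every coefficient is invented for illustration). Age drives both diabetes and heart disease, so the crude odds ratio overshoots the true conditional odds ratio of about 1.35 that the simulation builds in:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cohort in which age drives both diabetes and heart disease.
# All coefficients are invented; the true diabetes -> heart OR is exp(0.3) ~ 1.35.
rng = np.random.default_rng(1)
n = 5000
age = rng.normal(50, 12, n)
diabetes = rng.binomial(1, 1 / (1 + np.exp(-(age - 55) / 8)))
heart = rng.binomial(1, 1 / (1 + np.exp(-(-4.0 + 0.05 * age + 0.3 * diabetes))))
df = pd.DataFrame({"age": age, "diabetes": diabetes, "heart": heart})

crude = smf.logit("heart ~ diabetes", data=df).fit(disp=0)
adjusted = smf.logit("heart ~ diabetes + age", data=df).fit(disp=0)
print(f"crude OR:        {np.exp(crude.params['diabetes']):.2f}")     # inflated by age
print(f"age-adjusted OR: {np.exp(adjusted.params['diabetes']):.2f}")  # near 1.35
```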

Adjusting for confounders is standard practice. Most statistical software makes it straightforward. The problem is not that people forget to do it. The problem is that people stop there.

Identifying the right confounders requires thinking about the causal structure of the problem, not just which variables are available in the dataset. A variable can be associated with both the exposure and the outcome and still not be a confounder if it sits on the causal pathway between them. Adjusting for a mediator (a variable that is caused by the exposure and in turn causes the outcome) removes part of the effect you are trying to measure. A programme report that adjusts for HIV viral load when estimating the effect of an adherence intervention on mortality may inadvertently adjust away the very mechanism through which the intervention works.
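
A small simulation illustrates the mechanism. In this sketch the intervention is assumed to work entirely through viral load, so conditioning on the mediator pushes the estimate back toward the null (all numbers are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# The intervention lowers viral load; viral load drives mortality. There is
# no direct intervention -> death path, so the whole effect is mediated.
rng = np.random.default_rng(2)
n = 5000
intervention = rng.integers(0, 2, n)
viral_load = rng.normal(4.0 - 1.0 * intervention, 0.8)  # the mediator
death = rng.binomial(1, 1 / (1 + np.exp(-(-5.0 + 0.8 * viral_load))))
df = pd.DataFrame({"intervention": intervention,
                   "viral_load": viral_load, "death": death})

total = smf.logit("death ~ intervention", data=df).fit(disp=0)
overadjusted = smf.logit("death ~ intervention + viral_load", data=df).fit(disp=0)
print(f"total-effect OR:      {np.exp(total.params['intervention']):.2f}")        # protective
print(f"mediator-adjusted OR: {np.exp(overadjusted.params['intervention']):.2f}")  # ~1.0
```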

Directed acyclic graphs (DAGs) are useful for making these distinctions explicit. Drawing the assumed causal relationships between variables before running the analysis forces the team to articulate which variables are confounders, which are mediators, and which are colliders. In practice, most programme reports skip this step entirely and adjust for whatever variables are available, which sometimes introduces more bias than it removes.
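
The bookkeeping does not require special tooling. As a rough sketch, an edge list plus a reachability check can flag mediators and simple common-cause confounders, here using networkx and the adherence example above. The edges are the analyst's assumptions, and this toy check ignores colliders and other subtleties:

```python
import networkx as nx

# Assumed causal structure for the adherence example: age is a common cause,
# viral load sits on the causal pathway between intervention and death.
dag = nx.DiGraph([
    ("age", "intervention"), ("age", "death"),
    ("intervention", "viral_load"), ("viral_load", "death"),
])

exposure, outcome = "intervention", "death"
for v in sorted(set(dag) - {exposure, outcome}):
    mediator = v in nx.descendants(dag, exposure) and outcome in nx.descendants(dag, v)
    confounder = exposure in nx.descendants(dag, v) and outcome in nx.descendants(dag, v)
    role = ("mediator - do not adjust" if mediator
            else "confounder - adjust" if confounder else "neither")
    print(f"{v}: {role}")
```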

Effect modification changes the association

Effect modification is different. It means the relationship between exposure and outcome is genuinely different in different groups. If a treatment works well in younger patients but poorly in older patients, age is an effect modifier. Adjusting for age would hide that difference instead of revealing it.

The correct approach for effect modification is stratification (reporting the results separately for each group), not adjustment. When a programme report adjusts for a variable that is actually an effect modifier, it produces a single average effect that does not describe any real subgroup accurately.
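
In code, stratification is nothing more exotic than one estimate per group. A toy pandas sketch (data and column names invented) makes the contrast with a single pooled number visible:

```python
import pandas as pd

# Toy stratified report: one risk difference per stratum instead of one
# pooled coefficient. Data are invented purely for illustration.
df = pd.DataFrame({
    "stratum": ["urban"] * 4 + ["rural"] * 4,
    "exposed": [1, 1, 0, 0, 1, 1, 0, 0],
    "outcome": [1, 0, 1, 0, 1, 1, 0, 0],
})

table = df.groupby(["stratum", "exposed"])["outcome"].mean().unstack("exposed")
table["risk_difference"] = table[1] - table[0]
print(table)  # rural shows a large effect, urban shows none
```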

This matters for resource allocation. If a prevention programme works in urban areas but not rural areas, the average effect across both looks modest. A programme manager reading the average might conclude the programme is marginally effective everywhere, when the truth is it works well in one setting and not at all in another.

A real-world example: a community health worker programme to improve childhood immunisation coverage showed an overall modest effect of 8 percentage points. But when stratified by distance to the nearest health facility, the programme increased coverage by 22 percentage points in communities more than 10 kilometres from a facility, and had no measurable effect in communities within 5 kilometres. The overall average of 8 points would have led to a conclusion that the programme was marginally helpful. The stratified result showed it was transformative in hard-to-reach areas and unnecessary in areas with good facility access. That distinction changes where the programme should be scaled.
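
The pooled figure is just a weighted average of the stratum-specific effects. The share of hard-to-reach communities is not reported above, so the 36% used here is an assumed value chosen purely to show the arithmetic:

```python
# Assumed share of communities >10 km from a facility, chosen so that the
# weighted average reproduces the 8-point pooled figure from the text.
share_far = 0.36
pooled = share_far * 22 + (1 - share_far) * 0
print(f"pooled effect ~ {pooled:.1f} percentage points")  # ~8
```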

Testing for effect modification is straightforward: include an interaction term between the exposure and the suspected modifier in the regression model. If the interaction term is statistically significant, there is evidence of effect modification, and the results should be reported stratified. If it is not significant, the pooled estimate with adjustment is appropriate. Many analysts skip the interaction test because stratified results are harder to present, but skipping it means potentially hiding the most policy-relevant finding in the data.
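
As a concrete sketch of that workflow, here is what the test can look like with statsmodels on synthetic data; the variable names and effect sizes are invented, and the pattern mirrors the facility-distance example:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data in which the programme only works where modifier == 1.
rng = np.random.default_rng(0)
n = 2000
modifier = rng.integers(0, 2, n)   # e.g. far from the nearest facility
exposed = rng.integers(0, 2, n)    # e.g. enrolled in the programme
outcome = rng.binomial(1, 0.3 + 0.2 * exposed * modifier)
df = pd.DataFrame({"outcome": outcome, "exposed": exposed, "modifier": modifier})

# "exposed * modifier" expands to both main effects plus the interaction term
model = smf.logit("outcome ~ exposed * modifier", data=df).fit(disp=0)
p_int = model.pvalues["exposed:modifier"]
print(f"interaction p-value: {p_int:.4f}")

if p_int < 0.05:
    # evidence of effect modification: report one estimate per stratum
    for level, stratum in df.groupby("modifier"):
        m = smf.logit("outcome ~ exposed", data=stratum).fit(disp=0)
        print(f"modifier={level}: OR = {np.exp(m.params['exposed']):.2f}")
else:
    # no evidence of interaction: the pooled, adjusted estimate is appropriate
    m = smf.logit("outcome ~ exposed + modifier", data=df).fit(disp=0)
    print(f"pooled OR = {np.exp(m.params['exposed']):.2f}")
```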

  • Confounders distort. Adjust for them.
  • Effect modifiers change the relationship. Stratify by them.
  • An averaged effect can hide important subgroup differences.
  • Test for interaction before deciding to adjust or stratify.

Common mistakes in programme reporting

One of the most common errors is adjusting for a variable without considering whether it might be an effect modifier. In malaria programme reports, adjusting for bed net ownership when estimating the effect of indoor residual spraying produces a single adjusted estimate. But if spraying is more effective in households that do not have bed nets (because bed nets already provide protection), net ownership is an effect modifier, and the adjusted estimate understates the benefit of spraying for the unprotected population.

Another frequent mistake is interpreting a non-significant interaction term as evidence that effect modification does not exist. In small programme datasets, the sample size may be too small to detect a meaningful interaction even when one is present. A non-significant interaction test in a study with 200 participants per group does not rule out effect modification; it means the study was not powered to detect it. The report should note this limitation rather than defaulting to a single adjusted estimate.
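
A quick simulation shows how underpowered such a test can be. Assuming a true effect of 0.3 standard deviations in one stratum and zero in the other, with 200 participants per arm split evenly across the two strata (all assumed numbers), the interaction test comes out significant only around a third of the time:

```python
import numpy as np

# Power of the interaction test: 100 participants per treatment-by-stratum
# cell (200 per arm, split evenly across strata - an assumption), true effect
# of 0.3 SD in stratum A and 0 in stratum B.
rng = np.random.default_rng(3)
n_cell, sims, hits = 100, 2000, 0

for _ in range(sims):
    effects = {}
    for stratum, true_effect in {"A": 0.3, "B": 0.0}.items():
        treated = rng.normal(true_effect, 1.0, n_cell)
        control = rng.normal(0.0, 1.0, n_cell)
        effects[stratum] = treated.mean() - control.mean()
    interaction = effects["A"] - effects["B"]
    se = np.sqrt(4 / n_cell)  # known unit variance, four cells of n_cell each
    if abs(interaction / se) > 1.96:
        hits += 1

print(f"power to detect the interaction: {hits / sims:.0%}")  # roughly a third
```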

Presenting only adjusted results without showing the stratified results as a supplement is a missed opportunity. Even when the formal interaction test is non-significant, showing the stratum-specific estimates in a supplementary table allows readers to see the pattern and form their own judgment. We include stratified results as standard practice in every programme report, regardless of the interaction test result.

Reporting that respects the distinction

We build programme reports that test for interaction before defaulting to adjustment. When evidence of effect modification is found, the report presents stratified results with a clear explanation of what the difference means in practical terms.

This takes more work than running a single adjusted model. But it produces reports that programme managers can actually act on, because the results describe the groups they are responsible for, not an average that applies to nobody in particular.

The report format we use includes a dedicated section titled 'Subgroup Analyses and Effect Modification' that presents the interaction tests, the stratum-specific estimates, and a plain-language interpretation. For example: 'The programme effect differed significantly by facility distance (interaction p=0.003). In communities more than 10km from the nearest facility, coverage increased by 22.1 percentage points (95% CI: 15.3 to 28.9). In communities within 5km, the change was 1.4 percentage points (95% CI: -3.2 to 6.0).' That level of specificity is what programme managers need to make allocation decisions.
