AI workflows for the decisions public health teams make every day.

We map the questions public health professionals ask, then build reviewable AI workflows that teams can inspect before acting.

Human accountable · Source linked · Evaluation first

The APHI Method

APHI is designed as an operating system for disciplined public health AI work: start with the question, build the workflow, evaluate the output, then decide what is ready for a pilot.

01

Question

What decision or operational task is the team trying to support?

02

Workflow

What inputs, outputs, review roles, and boundaries are required?

03

Draft

What can AI summarize, compare, structure, or flag for review?

04

Review

Which qualified human approves the output before use?

05

Evaluate

How are accuracy, safety, usability, equity, and source quality tested?

Built from real public health AI needs

APHI extends the practical logic of the Public Health AI Handbook into a product system: evaluate before adoption, keep humans accountable, and make field constraints visible in every workflow.

Read the Public Health AI Handbook
01

Signal review without false certainty

Surveillance workflows should separate early warning from confirmed action and expose data delays, bias, and alternative explanations.

02

Evidence briefs with source discipline

AI output needs visible citations, uncertainty, reviewer questions, and a clear boundary between draft assistance and final guidance.

03

Evaluation that survives real operations

Each workflow should document accuracy, safety, usability, equity, drift, reviewer acceptance, and when not to deploy.

Workflow Library

APHI organizes AI support around the work public health teams already do. Each workflow includes the question, intended user, input requirements, AI-assisted output, human-led boundary, and evaluation method.

Open the library
Category        Example question                Output
Surveillance    Is this pattern unusual?        Signal review memo
Evidence        What source supports action?    Citation-checked brief
Communication   What should the public know?    Reviewed advisory draft
Evaluation      Is the program working?         Indicator and rubric pack
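The fields each library entry carries can be sketched as a simple record. This is an illustrative sketch only: the class and field names below are assumptions for readability, not a published APHI schema.

```python
from dataclasses import dataclass

# Illustrative sketch: field names are assumptions, not a published APHI schema.
@dataclass
class Workflow:
    question: str            # the operational question the team is trying to answer
    intended_user: str       # who acts on the output
    inputs: list[str]        # required data inputs
    output: str              # AI-assisted artifact produced for review
    human_boundary: str      # which qualified human must approve before use
    evaluation_method: str   # how accuracy, safety, and equity are tested

# Example entry mirroring the Surveillance row of the table above.
surveillance = Workflow(
    question="Is this pattern unusual?",
    intended_user="Epidemiologist",
    inputs=["syndromic visit counts", "reporting delays"],
    output="Signal review memo",
    human_boundary="Epidemiologist approval before escalation",
    evaluation_method="Shadow review against historical signals",
)
```

Keeping every workflow in one explicit shape is what makes the library inspectable: a missing review role or evaluation method is visible as an empty field rather than an unstated gap.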

Flagship workflow: outbreak signal review

A serious APHI workflow does not just produce a polished paragraph. It creates a reviewable evidence trail: signal summary, uncertainty, source checks, reviewer questions, escalation logic, and explicit limits.

Workflow status: Draft specification
Decision boundary: Epidemiologist approval
Evaluation path: Shadow review before pilot

Example output: Outbreak Signal Review Memo
Signal summary
Respiratory syndromic visits increased in two districts over the current reporting window.
Uncertainty
Recent reporting delays and care-seeking changes could explain part of the pattern.
Reviewer questions
Does the pattern persist after late reports? Are laboratory confirmations aligned?
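The memo above can be treated as structured data rather than free text, so a reviewer (or a pipeline) can check that every required section is present before the memo circulates. The keys and the completeness check below are hypothetical illustrations, not APHI's actual output format.

```python
# Hypothetical memo structure; keys are illustrative assumptions.
memo = {
    "signal_summary": (
        "Respiratory syndromic visits increased in two districts "
        "over the current reporting window."
    ),
    "uncertainty": [
        "Recent reporting delays may inflate the apparent increase.",
        "Changes in care-seeking behavior could explain part of the pattern.",
    ],
    "reviewer_questions": [
        "Does the pattern persist after late reports arrive?",
        "Are laboratory confirmations aligned with the syndromic rise?",
    ],
    "decision_boundary": "Epidemiologist approval required before action",
}

REQUIRED_SECTIONS = [
    "signal_summary", "uncertainty", "reviewer_questions", "decision_boundary",
]

def is_reviewable(m: dict) -> bool:
    """A memo is reviewable only if every required section is present and non-empty."""
    return all(m.get(section) for section in REQUIRED_SECTIONS)
```

A memo that omits its uncertainty or reviewer questions fails the check, which enforces the point above: the workflow produces an evidence trail, not just a polished paragraph.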

Evaluation that funders and agencies can inspect

APHI makes evaluation visible. Every workflow should move through readiness levels only when its risks, tests, reviewers, and failure modes are documented.

View evaluation framework
Accuracy: Claim and citation checks
Safety: Refusal and escalation rules
Usability: Reviewer acceptance rubric
Equity: Bias and missingness review
Governance: Privacy and approval path
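The readiness rule above (advance only when every dimension is documented) can be expressed as a simple gate. This is a minimal sketch assuming a dictionary of documented evidence per dimension; it is not APHI's actual evaluation tooling.

```python
# Dimension names mirror the evaluation framework listed above.
DIMENSIONS = ["accuracy", "safety", "usability", "equity", "governance"]

def ready_to_advance(evidence: dict[str, str]) -> bool:
    """A workflow moves to the next readiness level only if every
    evaluation dimension has a non-empty documented result."""
    return all(evidence.get(dim, "").strip() for dim in DIMENSIONS)

partial = {
    "accuracy": "Claim and citation checks passed",
    "safety": "",  # escalation rules not yet documented
}
# ready_to_advance(partial) -> False: safety and other dimensions undocumented
```

The gate is deliberately all-or-nothing: strong accuracy results cannot compensate for an undocumented safety or equity review, which is the property that lets funders and agencies inspect readiness directly.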

Health department pilot pathway

APHI can be scoped as a practical pilot: choose one workflow, define data and review boundaries, run shadow evaluation, and decide whether the output improves the real public health process.

1

Define the operational question and reviewer role.

2

Build a workflow packet with inputs, outputs, limits, and review rules.

3

Test against synthetic, historical, or partner-approved data.

4

Report accuracy, safety, usability, equity, and implementation findings.

Build public health AI around real work and accountable review.

APHI is for institutions that want useful AI support with evidence standards, review boundaries, and implementation discipline.