Signal review without false certainty
Surveillance workflows should separate early warning from confirmed action and expose data delays, bias, and alternative explanations.
We map the questions public health professionals ask, then build reviewable AI workflows that teams can inspect before acting.
APHI is designed as an operating system for disciplined public health AI work: start with the question, build the workflow, evaluate the output, then decide what is ready for a pilot.
What decision or operational task is the team trying to support?
What inputs, outputs, review roles, and boundaries are required?
What can AI summarize, compare, structure, or flag for review?
Which qualified human approves the output before use?
How are accuracy, safety, usability, equity, and source quality tested?
APHI extends the practical logic of the Public Health AI Handbook into a product system: evaluate before adoption, keep humans accountable, and make field constraints visible in every workflow.
Read the Public Health AI Handbook
AI output needs visible citations, uncertainty, reviewer questions, and a clear boundary between draft assistance and final guidance.
Each workflow should document accuracy, safety, usability, equity, drift, reviewer acceptance, and when not to deploy.
APHI organizes AI support around the work public health teams already do. Each workflow includes the question, intended user, input requirements, AI-assisted output, human-led boundary, and evaluation method.
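The workflow elements above can be sketched as a simple record. This is a minimal illustration, not APHI's actual schema; the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class WorkflowPacket:
    """One workflow record in the style described above (hypothetical names)."""
    question: str                  # operational question the workflow supports
    intended_user: str             # role that consumes the output
    input_requirements: list[str]  # data required before the workflow runs
    ai_output: str                 # what the AI drafts, summarizes, or flags
    human_boundary: str            # which qualified human approves before use
    evaluation_method: str         # how accuracy, safety, usability, equity are tested

# Example instance with illustrative values
packet = WorkflowPacket(
    question="Is this syndromic signal worth analyst review?",
    intended_user="Epidemiology team lead",
    input_requirements=["ED visit counts", "reporting-delay metadata"],
    ai_output="Draft signal summary with citations and uncertainty notes",
    human_boundary="Qualified epidemiologist signs off before distribution",
    evaluation_method="Shadow review against historical signals",
)
```

Keeping every workflow in one explicit structure makes missing pieces, such as an undefined review boundary, visible before a pilot starts.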
Open the library

A serious APHI workflow does not just produce a polished paragraph. It creates a reviewable evidence trail: signal summary, uncertainty, source checks, reviewer questions, escalation logic, and explicit limits.
APHI makes evaluation visible. Every workflow should move through readiness levels only when its risks, tests, reviewers, and failure modes are documented.
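The gating rule above, advance only when risks, tests, reviewers, and failure modes are documented, can be sketched as a small check. The level names and dictionary keys are illustrative assumptions, not APHI's published framework:

```python
# Hypothetical readiness ladder, ordered from least to most deployed
READINESS_LEVELS = ["concept", "prototype", "shadow_pilot", "supervised_use"]

def can_advance(workflow: dict) -> bool:
    """Return True only when every required documentation field is present
    and non-empty; a workflow with any gap stays at its current level."""
    required = ("risks", "tests", "reviewers", "failure_modes")
    return all(workflow.get(key) for key in required)

# A fully documented workflow may advance; a partially documented one may not.
documented = {
    "risks": ["false reassurance on sparse data"],
    "tests": ["shadow evaluation vs. historical signals"],
    "reviewers": ["epidemiology team lead"],
    "failure_modes": ["stale input feed", "unflagged reporting delay"],
}
```

Encoding the gate as code means the "do not deploy yet" decision is checkable rather than a matter of meeting-room memory.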
View evaluation framework

APHI can be scoped as a practical pilot: choose one workflow, define data and review boundaries, run shadow evaluation, and decide whether the output improves the real public health process.
Define the operational question and reviewer role.
Build a workflow packet with inputs, outputs, limits, and review rules.
Test against synthetic, historical, or partner-approved data.
Report accuracy, safety, usability, equity, and implementation findings.
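The reporting step above can be sketched as a function that refuses to omit dimensions silently. The dimension names come from the list above; the function itself is a hypothetical illustration:

```python
def pilot_report(findings: dict[str, str]) -> str:
    """Compile pilot findings across the five reporting dimensions;
    any dimension without a finding is flagged rather than dropped."""
    dimensions = ["accuracy", "safety", "usability", "equity", "implementation"]
    lines = [f"{dim}: {findings.get(dim, 'NOT EVALUATED')}" for dim in dimensions]
    return "\n".join(lines)

# Partial findings produce an explicit gap, not a shorter report
report = pilot_report({
    "accuracy": "92% reviewer agreement on historical signals",
    "safety": "no harmful outputs in shadow period",
})
```

Forcing a line per dimension keeps an incomplete evaluation from reading like a finished one.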
APHI is for institutions that want useful AI support with evidence standards, review boundaries, and implementation discipline.