Public health questions, turned into reviewable AI workflows
APHI is an early-stage initiative for organizing real public health work around practical AI support, human review, and measurable evaluation.
APHI maps the questions public health professionals ask every day, then builds AI-assisted workflows that help answer them safely, transparently, and with human accountability.
This initiative is currently in its foundational phase, led by Dr. Bryan Tegomoh. The project reviews public evidence, defines implementation frameworks, and builds collaborative pathways with health departments, researchers, and public health practitioners.
Public health teams ask recurring questions across surveillance, outbreak response, communication, policy, and evaluation, but those questions are rarely organized as reusable workflows.
Drafts can look polished while containing weak sources, false certainty, fabricated citations, privacy problems, or recommendations that do not fit local context.
AI is useful only when it fits how public health teams actually work: the data they trust, the decisions they own, the approvals they need, and the time pressure they face.
Public health AI needs task-specific checks for accuracy, equity, source quality, privacy, usability, uncertainty, and failure modes before operational use.
APHI is exploring tools that help public health teams review emergency department visits, laboratory reports, environmental signals, and other surveillance inputs with better context and source traceability.
The goal is disciplined situational awareness: faster review where evidence supports it, explicit uncertainty where it does not.
Public health interventions often need to be adapted to community, setting, and implementation capacity. AI can support that work when outputs are governed and validated.
From vaccine outreach to behavioral health interventions, machine learning should be used to inform planning, not to make unsupported individual-level decisions.
Policy makers need clear evidence, options, uncertainty, and implementation constraints, not just more text. APHI workflows help structure those inputs for human review.
Modeling and scenario analysis can support planning when assumptions, data limits, and review responsibilities are explicit.
Public health teams already work across surveillance feeds, reports, guidance, community signals, and program data. The practical need is better organization, not more unsupported claims.
Natural language processing, computer vision, and deep learning can extract useful signals from unstructured sources such as clinical notes, medical images, and news reports. Public health use still requires validation and human oversight.
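As a minimal illustration of the kind of signal extraction meant here, the sketch below flags symptom mentions in free-text clinical notes while keeping the source note ID attached to every extracted signal. The symptom list, note format, and keyword-matching approach are illustrative assumptions, not a validated clinical method; real use would still require the validation and human oversight described above.

```python
import re

# Illustrative symptom vocabulary (an assumption, not a clinical standard).
SYMPTOM_TERMS = ["fever", "cough", "rash", "vomiting"]

def extract_signals(notes):
    """Return sorted (note_id, symptom) pairs found by keyword matching.

    Keeping note_id alongside each hit preserves source traceability,
    so a reviewer can always open the original note behind a signal.
    """
    signals = []
    for note_id, text in notes.items():
        lowered = text.lower()
        for term in SYMPTOM_TERMS:
            # \b word boundaries avoid matching inside longer words.
            if re.search(r"\b" + term + r"\b", lowered):
                signals.append((note_id, term))
    return sorted(signals)

notes = {
    "note-001": "Patient presents with fever and dry cough.",
    "note-002": "Mild vomiting after meals.",
}
print(extract_signals(notes))
```

Even this toy version shows why review matters: plain keyword matching cannot handle negation ("no rash observed") or context, which is exactly the gap human oversight has to cover.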
COVID-19 exposed gaps in surveillance, communication, coordination, and evaluation. AI tools should be built around those operational lessons, with clear limits and review.
APHI combines epidemiological reasoning, responsible AI development, and public health partnerships to build tools that can be tested honestly before scale.
Every prototype starts from a public health question, a defined workflow, and an evaluation plan. Real-world validation is required before any deployment claim.
AI systems can perpetuate or amplify existing biases. APHI workflows require equity review, local context, subgroup checks where applicable, and attention to who may be undercounted or harmed.
Public health requires public trust. Privacy-preserving methods such as differential privacy, federated learning, and secure multi-party computation are part of the technical design space, but each use case needs its own governance review.
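To make one of those techniques concrete, here is a minimal sketch of the Laplace mechanism from differential privacy applied to a case count. The epsilon value, the count, and the function name are illustrative assumptions; a real release would need its own governance review, sensitivity analysis, and privacy-budget accounting.

```python
import math
import random

def dp_count(true_count, epsilon, rng):
    """Release a count with Laplace noise calibrated to epsilon.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1/epsilon.
    Noise is sampled by inverse-CDF from the Laplace distribution.
    """
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Seeded generator so the sketch is reproducible.
rng = random.Random(0)
noisy = dp_count(137, epsilon=1.0, rng=rng)
print(round(noisy, 2))
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy; choosing that trade-off is a governance decision, not a purely technical one.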
Trustworthy public health AI requires transparent methods, source quality, reproducibility where possible, and honest reporting of what has not been validated.
Workflows should be grounded in evidence, clear assumptions, and reviewable outputs.
Technology must serve public health goals. Safety, fairness, and accountability are workflow requirements.
Useful workflows need practitioners, researchers, communities, and technical teams.
Our tools must work for under-resourced communities, not just wealthy institutions.
APHI is looking for practical use cases, workflow review, evaluation methods, and implementation partners.