For Health Departments

Build Public Health AI with Evidence and Restraint.

APHI works with public health teams to define practical AI use cases, design validation, and build tools that fit existing surveillance and decision workflows.

Challenges We Solve

Purpose-built for the realities of modern public health departments

Slow Signal Review

Surveillance teams often work across delayed feeds, manual review queues, and competing priorities. APHI prototypes are designed to organize signals for faster human review.

Data Silos

ED visits, lab results, environmental signals, and public reports often sit in separate systems. Integration only helps when sources are trusted, governed, and interpretable.

Equity Gaps

Vulnerable populations often have weaker data coverage. Every prototype needs missingness review, subgroup checks, and a plan for communities poorly represented in the data.

Resource Constraints

Limited staff cannot review every possible signal. Automated triage should reduce noise, not create a larger queue.

Interoperability Hurdles

Legacy systems don't talk to each other. We support HL7 FHIR, LOINC, and SNOMED CT, and integrate with CDC's National Syndromic Surveillance Program (NSSP) infrastructure.

Evidence Demands

Leadership needs proof before scale. APHI work starts with prototype cards, validation plans, and plain-language documentation of what is known, unknown, and not yet claimed.

How APHI Works with Your Department

Step 1

Data Integration (No Rip-and-Replace)

We connect to your existing systems via secure APIs:

  • Electronic lab reporting (ELR) feeds
  • Syndromic surveillance platforms (NSSP-compatible)
  • Hospital EHR data (HL7 FHIR)
  • Environmental monitoring (wastewater, air quality)
  • Optional: News articles, social signals, mobility data
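
As a concrete illustration, the sketch below shows one way a read-only FHIR R4 connection could pull recent lab observations. The endpoint, credential, and LOINC code are placeholders for illustration, not a description of APHI's production connectors; real integrations would go through the department's existing integration engine, security review, and data use agreements.

```python
"""Minimal sketch: pull recent lab Observations from a FHIR R4 server.
The base URL and token are hypothetical placeholders."""
import requests

FHIR_BASE = "https://fhir.example-health-dept.org/r4"   # hypothetical endpoint
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"                       # placeholder credential


def fetch_recent_observations(loinc_code: str, since: str) -> list[dict]:
    """Return Observation resources for one LOINC code on or after a given date."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "code": f"http://loinc.org|{loinc_code}",  # e.g. 94500-6: SARS-CoV-2 RNA PCR
            "date": f"ge{since}",                      # FHIR date prefix "ge" = on/after
            "_count": "100",                           # page size
        },
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    # A FHIR search returns a Bundle; each entry wraps one resource.
    return [entry["resource"] for entry in bundle.get("entry", [])]


if __name__ == "__main__":
    observations = fetch_recent_observations("94500-6", "2024-01-01")
    print(f"Retrieved {len(observations)} observations")
```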

Privacy-first: Federated learning keeps sensitive data on-premises, and differential privacy limits what shared outputs reveal about individuals, so jurisdictions can collaborate without pooling raw records.
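
For readers who want the mechanics, the sketch below shows one standard differential-privacy building block, the Laplace mechanism, applied to a single aggregate count before it leaves the jurisdiction. The epsilon value is illustrative only, not a recommended policy setting, and federated training itself is not shown.

```python
"""Illustrative sketch of the Laplace mechanism on one aggregate count.
Epsilon here is illustrative, not a recommended policy value."""
import numpy as np


def noisy_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: one person changes the count by at most `sensitivity`,
    so noise drawn from Laplace(scale=sensitivity/epsilon) bounds what the
    released number can reveal about any individual."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise


# Example: share a noised daily ED-visit count instead of the raw value.
print(round(noisy_count(true_count=42, epsilon=0.5), 1))
```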

Step 2

AI-Powered Analysis

Our ensemble models run continuously:

  • Anomaly detection across 50+ syndromic indicators
  • Multi-pathogen forecasting (flu, COVID, RSV, etc.)
  • Spatial clustering & hotspot identification
  • Risk stratification by demographic & geographic factors
  • Equity impact assessment for every alert

Design standard: Models must produce reviewable evidence, not opaque alerts.
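
The sketch below illustrates what a reviewable signal can look like: a simple rolling-baseline z-score on one synthetic syndromic series, where each flag carries the baseline and deviation a reviewer can check. It is a stand-in, far simpler than the ensemble models described above, and the data are synthetic.

```python
"""Minimal sketch of a reviewable anomaly flag on one syndromic indicator.
A rolling-baseline z-score stands in for the ensemble models; the point is
that each flag carries evidence (baseline, deviation) a reviewer can check."""
import numpy as np


def flag_anomalies(daily_counts, window: int = 28, threshold: float = 3.0):
    """Return the observed count, baseline mean, and z-score for flagged days."""
    counts = np.asarray(daily_counts, dtype=float)
    flags = []
    for day in range(window, len(counts)):
        baseline = counts[day - window:day]
        mean, std = baseline.mean(), baseline.std(ddof=1)
        if std == 0:
            continue
        z = (counts[day] - mean) / std
        if z >= threshold:
            flags.append({"day": day, "observed": counts[day],
                          "baseline_mean": round(mean, 1), "z_score": round(z, 2)})
    return flags


# Synthetic ED-visit counts with a jump near the end, purely for illustration.
rng = np.random.default_rng(0)
series = list(rng.poisson(20, 60)) + [45, 52, 50]
for flag in flag_anomalies(series):
    print(flag)
```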

Step 3

Actionable Intelligence

Epidemiologists receive:

  • Prioritized alerts with confidence scores & evidence trails
  • Interactive dashboards for drill-down investigation
  • Automated briefings for leadership (plain language summaries)
  • Decision notes with assumptions and uncertainty clearly marked
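
As an illustration of an evidence trail, the sketch below shows one possible shape for an alert record, with confidence, evidence, assumptions, and an uncertainty note as explicit fields. Field names are illustrative, not APHI's schema.

```python
"""Sketch of a structured alert record; field names are illustrative only."""
from dataclasses import dataclass, field


@dataclass
class Alert:
    indicator: str                 # e.g. "ILI-related ED visits"
    jurisdiction: str              # e.g. "County X"
    confidence: float              # model confidence, 0-1
    evidence: list[str]            # items a reviewer can check
    assumptions: list[str] = field(default_factory=list)  # stated, not hidden
    uncertainty_note: str = ""                             # plain-language caveat


example = Alert(
    indicator="ILI-related ED visits",
    jurisdiction="County X",
    confidence=0.78,
    evidence=["z-score 4.1 vs. 28-day baseline", "3 reporting hospitals affected"],
    assumptions=["ED feed complete through yesterday"],
    uncertainty_note="One hospital's feed lagged 24h; counts may be revised.",
)
print(example.indicator, example.confidence)
```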

Step 4

Continuous Learning

Models improve with your feedback:

  • Active learning from epidemiologist validation/rejection of alerts
  • Quarterly model retraining with new local data
  • Performance monitoring dashboards
  • External validation before public performance claims
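
The sketch below illustrates the feedback loop in its simplest form: each accept/reject decision is stored as a label, and a quarterly check decides when retraining is due. The storage format and cadence are placeholders for what a partner pilot would actually specify.

```python
"""Sketch of capturing epidemiologist feedback as labels for later retraining.
Storage and retraining cadence here are illustrative, not a fixed pipeline."""
import csv
from datetime import date, datetime, timezone


def record_review(alert_id: str, accepted: bool, reviewer: str,
                  path: str = "alert_feedback.csv") -> None:
    """Append one accept/reject decision; these become training labels."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            alert_id,
            int(accepted),
            reviewer,
            datetime.now(timezone.utc).isoformat(),
        ])


def due_for_retraining(last_trained: date, every_days: int = 90) -> bool:
    """Quarterly retraining check, matching the cadence described above."""
    return (date.today() - last_trained).days >= every_days


record_review("alert-0042", accepted=False, reviewer="epi-on-call")
print(due_for_retraining(date(2024, 1, 15)))
```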

Evidence Standard

APHI will not publish proprietary performance numbers until they are sourced, scoped, and reviewed.

Public Examples

CDC and other public health groups have documented AI uses in surveillance support, coding, summarization, and response readiness.

These examples inform APHI's design, but they are not APHI deployment metrics.

Local Validation

Any pilot should compare APHI outputs against the existing local workflow, including alert burden, missed signals, and usefulness to reviewers.

Claims should be specific to disease area, data source, jurisdiction, and evaluation period.
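
The sketch below shows two of the kinds of comparison metrics a pilot report could include: alert burden per reviewer-day and overlap with events the existing workflow confirmed. The metric definitions would be agreed with the partner, and the numbers shown are illustrative inputs, not results from any APHI pilot.

```python
"""Sketch of two pilot comparison metrics: alert burden and overlap with
events the existing workflow confirmed. Definitions are illustrative and
would be set with the partner before the pilot."""


def alert_burden(alerts_issued: int, review_days: int) -> float:
    """Average alerts per reviewer-day; lower means less review burden, all else equal."""
    return alerts_issued / review_days


def overlap_with_confirmed(tool_alert_days: set[int], confirmed_event_days: set[int]) -> float:
    """Share of workflow-confirmed event days that the tool also flagged."""
    if not confirmed_event_days:
        return float("nan")
    return len(tool_alert_days & confirmed_event_days) / len(confirmed_event_days)


# Illustrative numbers only; not results from any APHI pilot.
print(alert_burden(alerts_issued=36, review_days=90))
print(overlap_with_confirmed({5, 12, 40, 61}, {12, 40, 77}))
```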

Equity Review

Evaluation must ask which communities are poorly represented, which signals are less reliable, and whether recommendations shift resources fairly.

Equity checks are part of product readiness, not a separate compliance exercise.

External Metrics Policy

APHI may cite public CDC or peer-reviewed metrics when the source is linked and the scope is clear. APHI will not attribute those numbers to its own prototypes.

  • Public source required: link to government, peer-reviewed, or partner-approved documentation
  • Scope required: define setting, denominator, and method
  • Review date required: fast-moving AI claims need currency checks
  • APHI metrics required: partner evaluation before public performance claims

Pilot Evaluation Questions

A useful pilot should answer operational questions before scale.

Operational Fit:
  • Does the tool reduce review burden?
  • Does it fit existing escalation workflows?
  • Do epidemiologists trust the evidence trail?
Readiness:
  • Are data sources reliable enough?
  • Are equity risks documented?
  • Are governance and audit logs in place?

Note: APHI will report outcomes only after partner-approved validation. Until then, this page describes the evaluation standard.

Pilot Shape

A useful first pilot should stay narrow enough to validate honestly.

Discovery
  • Data infrastructure assessment
  • Stakeholder interviews (epidemiology, IT, leadership)
  • Privacy/security review & BAA execution
  • Pilot scope definition
Integration
  • API connections to ELR, syndromic systems
  • User training for epidemiology staff
  • Alert threshold calibration
  • Dashboard customization
Shadow Review
  • Shadow mode (alerts generated and reviewed, but not acted on)
  • Performance validation
  • Workflow refinement
  • Equity metrics monitoring
Decision Review
  • Go/no-go decision for continued development
  • Leadership briefings and SOP documentation
  • Validation report with limitations
  • Publication or public summary only if partner-approved

Discuss a Pilot

Early conversations should focus on one specific workflow, not a broad platform rollout.

What's Included

  • Narrow pilot scope
  • Data integration support
  • Staff training & onboarding
  • Performance validation report
  • Cost-benefit analysis
  • Optional: Peer-reviewed publication support

Eligibility Criteria

  • State/local health department (U.S.)
  • Electronic lab reporting capability
  • At least 1 FTE epidemiologist available
  • Commitment to share de-identified outcomes
  • IRB approval (we can assist)

Start a Pilot Conversation

Transparency Commitment: APHI will keep concept-stage claims separate from validated results. Partner pilots should produce honest evidence, including limitations and reasons not to scale.