Methodology appendix

Understand how Medicaid charts are computed, verified, and bounded.

This page is the public data contract and methodology reference. The narrative experience and role actions live at /medicaid.

This tool does not predict payer actions. It highlights statistical variance patterns that may warrant clearer documentation. These metrics measure statistical consistency, not clinical appropriateness.

How to use this appendix

This appendix is a methodology reference. Use it to verify boundaries, data coverage, and interpretation limits before applying any chart insight.

Trend context, not expected payment.

Data coverage at a glance

All charts are derived from a filtered aggregate cohort and do not evaluate clinical appropriateness.

  • The model currently evaluates 5,000 codes.
  • 1,673 codes meet the eligibility thresholds.
  • 3,327 codes are excluded by those criteria.
  • The threshold rule requires at least 100 samples, 25 providers, and 12 active months.
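The threshold rule above can be expressed as a simple predicate. This is a hedged sketch only: the field names (`samples`, `providers`, `active_months`) are hypothetical and do not come from the actual release schema.

```python
# Sketch of the stated eligibility rule; field names are illustrative.
from dataclasses import dataclass

@dataclass
class CodeStats:
    samples: int        # aggregate sample count for the code
    providers: int      # distinct providers observed
    active_months: int  # months with observed activity

def is_eligible(stats: CodeStats) -> bool:
    """At least 100 samples, 25 providers, and 12 active months."""
    return (
        stats.samples >= 100
        and stats.providers >= 25
        and stats.active_months >= 12
    )

print(is_eligible(CodeStats(samples=150, providers=30, active_months=14)))  # True
print(is_eligible(CodeStats(samples=150, providers=30, active_months=11)))  # False
```

A code failing any one of the three thresholds is excluded, which is how 3,327 of the 5,000 modeled codes fall out of the eligible cohort.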

Data governance and contract checks

Page values come from the latest release file and are validated with automated checks in CI. Hardcoded cohort counts are disallowed.

  • The current release file was generated on Feb 16, 2026.
  • The current schema version is medicaid-visuals-manifest.v2.
  • Every visual includes lineage context and a limit statement before reuse.
  • No PHI, patient-level records, or provider-identifiable data is displayed.
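One way to read the "hardcoded cohort counts are disallowed" rule is as a CI assertion that page values reconcile against the release manifest. A minimal sketch, assuming hypothetical manifest keys (`schema_version`, `modeled`, `eligible`, `excluded`) rather than the real medicaid-visuals-manifest.v2 fields:

```python
# Hypothetical CI contract check; keys are illustrative, not the real schema.
def check_release_manifest(manifest: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the release passes."""
    errors = []
    if manifest.get("schema_version") != "medicaid-visuals-manifest.v2":
        errors.append("unexpected schema version")
    modeled = manifest.get("modeled", 0)
    eligible = manifest.get("eligible", 0)
    excluded = manifest.get("excluded", 0)
    if eligible + excluded != modeled:
        errors.append("eligible + excluded must equal modeled")
    if not 0 < eligible <= modeled:
        errors.append("eligible count out of range")
    return errors

manifest = {
    "schema_version": "medicaid-visuals-manifest.v2",
    "modeled": 5000,
    "eligible": 1673,
    "excluded": 3327,
}
print(check_release_manifest(manifest))  # []
```

Because the counts reconcile arithmetically (1,673 + 3,327 = 5,000), a hardcoded number drifting out of sync with a new release would fail this kind of check.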

Scope tiers and alignment

Numeric context is separated into tiers so that platform-scale totals are not mixed with chart-rendering cohorts.

  • The context layer includes 227,000,000 aggregated rows from 2018-01 to 2024-12.
  • The discovery layer covers about 18,000 observed codes, with 11,847 meeting broader significance thresholds.
  • The visualization layer models 5,000 codes; the latest release reports 1,673 of them as eligible (33.5% coverage).
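The tier layering can be sanity-checked with the release numbers quoted above: each code tier should be a subset of the one before it, and the coverage percentage follows directly from the eligible and modeled counts.

```python
# Tier counts from the current release (code counts, not row counts).
observed_codes = 18_000       # discovery layer, approximate observed codes
significant_codes = 11_847    # codes meeting broader significance thresholds
modeled_codes = 5_000         # visualization layer
eligible_codes = 1_673        # codes passing the threshold rule

# Each tier narrows the previous one.
assert eligible_codes <= modeled_codes <= significant_codes <= observed_codes

coverage_pct = round(100 * eligible_codes / modeled_codes, 1)
print(coverage_pct)  # 33.5
```

The 227,000,000 aggregated rows in the context layer sit outside this chain because rows and codes are different units and are not directly comparable.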

Bill analysis happens client-side. Bill content is processed in your browser.


Visual registry and interpretation boundaries

Use this registry to understand what each visual represents and what it does not claim. Narrative sequencing lives on /medicaid.

Visual 1: The Shape of Variability

Observed coverage

This visual covers 1,673/1,673 eligible codes (100.0%). Full eligible cohort.

Derived from filtered aggregate cohort. Does not assess clinical appropriateness.

Open in hub narrative

Visual 2: The Red Zone

Observed coverage

This visual covers 1,673/1,673 eligible codes (100.0%). Full eligible cohort.

Derived from filtered aggregate cohort. Does not assess clinical appropriateness.

Visual 3: Mechanism of Risk

Observed coverage

This visual covers 1,673/1,673 eligible codes (100.0%). Full eligible cohort.

Derived from filtered aggregate cohort. Does not assess clinical appropriateness.

Visual 4: The Volatility Spectrum

Observed coverage

This visual covers 1,673/1,673 eligible codes (100.0%). Full cohort grouped by band.

Derived from filtered aggregate cohort. Does not assess clinical appropriateness.

Visual 5: Seasonality Signature

Planning context examples

This visual covers 6 planning-context examples (reference N=1,673). Stratified band examples with monthly archetype curves.

Planning-context only due to source limits: Monthly code-level time-series volume is not present in current aggregate source fields.

Visual 6: Specialization Index

Observed coverage

This visual covers 1,673/1,673 eligible codes (100.0%). Full cohort grouped by band.

Derived from filtered aggregate cohort. Does not assess clinical appropriateness.

Visual 7: Modifier Fingerprint

Planning context examples

This visual covers 8 planning-context examples (reference N=1,673). Stratified band examples using a modifier-mix estimate.

Planning-context only due to source limits: Modifier-level frequencies are not present in current aggregate source fields.

Visual 8: Telehealth Drift

Planning context examples

This visual covers 5 planning-context examples (reference N=1,673). Cohort-informed category curves (2020–2024).

Planning-context only due to source limits: Telehealth modality share over time is not present in current aggregate source fields.

Visual 9: Geographic Variance

Planning context examples

This visual covers 8 planning-context examples (reference N=1,673). Cohort-informed setting templates.

Planning-context only due to source limits: Geographic setting dimensions are not present in current aggregate source fields.

Visual 10: Documentation Burden Matrix

Planning context examples

This visual covers 15 planning-context examples (reference N=1,673). Stratified band examples using a documentation-burden estimate.

Planning-context only due to source limits: Documentation minutes and spend measures are not present in current aggregate source fields.

Module boundaries

Variance intelligence

Uses aggregate Medicaid patterns by code cohort.

Does not determine clinical appropriateness or expected payment.

Coding rule checks

Uses published NCCI structural edit logic.

Does not determine medical necessity.

Geographic pricing context

Uses Medicare locality proxy bands for context.

Does not represent Medicaid fee schedules.

Derived trend sample (educational)

Aggregate trend sample only.

No provider-level ranking or adjudication inference.

These metrics measure statistical consistency, not clinical appropriateness.

Explore or request a guided workflow

Explore Medicaid Intelligence now. If you want this mapped to your codes, state programs, or internal review workflows, request a guided session.

After you request a guided session, we send a short intake email and scheduling options within one business day.

Return to the chart-led story

Use /medicaid for narrative interpretation and role actions. Use this page for methodology, boundaries, and release checks.

Page updated: 2026-02-16

Frequently asked questions

What inclusion criteria determine which charts are observed versus illustrative?

Observed chart cohorts require sample size >= 100, active months >= 12, and provider count >= 25. Charts are labeled illustrative when required source dimensions are not present in the current aggregate fields.

Do these appendix metrics predict payer actions?

No. These metrics show statistical consistency and volatility context, not adjudication outcomes.

Where do displayed counts and provenance labels come from?

Counts and provenance labels are sourced from the generated visuals manifest that ships with the published chart artifacts.