- Technology Illumination
Chief Architect Thinking - Why Healthcare Plan Qualification Is a Graph Problem, Not Just a SQL Problem
Executive Summary
Healthcare enterprises must deliver the right plans to the right customers across channels such as web, phone, advisor apps, kiosks, mobile apps, and partner portals. Plans are not simple records: eligibility depends on customer demographics, medical history and current health profile, regional regulations, billing cadences, and upgrade paths.
Relational databases (RDBMS) are excellent for transactions, but when the relationships themselves are the business logic, SQL becomes rigid and hard to scale. This post shows why plan qualification is best modeled in a Graph Database and how to integrate it with in-process indexes, caches, and rules engines to meet a strict 2-second SLA at massive scale.
Business Context
Large healthcare plan providers in the US, UK, Europe, India, Australia, and New Zealand must manage hundreds of plans, each with different qualification rules.
Eligibility depends on:
Customer demographics (from SSN, Aadhaar ID, NHS ID, or National Health ID, typically stored in an Enterprise Data Warehouse [EDW])
Medical history and current health profile (maintained in the EDW alongside demographics)
Plan attributes (wellness-only, chronic-care inclusive, senior citizen eligibility, etc.)
Country-specific regulations
Upgrade and downgrade paths (migrating from a basic plan to a premium plan)
The EDW typically resides behind its own API-driven microservice. For this post, we focus only on the Plan Qualification service.
How It’s Usually Done in a Microservices + RDBMS World
In most enterprises, customer profile data is handled separately by an EDW microservice. The Plan Qualification service manages plans, products, and rules.
Typical RDBMS Schema
PLAN
PLAN_ID
PLAN_CODE
PLAN_NAME (examples: 1-Month Plan, 1-Year Plan, Walk-in Plan, Enterprise Plan)
BILLING_CADENCE (examples: ONE_TIME, DAILY, WEEKLY, MONTHLY, QUARTERLY, YEARLY, LIFETIME)
PRODUCT
PRODUCT_ID
PRODUCT_NAME (examples: Cancer Treatment, Diabetes WellCare, Diabetes Prevention, Liver Care, Kidney Care, Blood Disorders, Hormonal Therapy, Neurology, Eye Care, ENT, Orthopedics)
PLAN_PRODUCT_MAPPING
PLAN_CODE
PRODUCT_ID
PLAN_QUAL_RULE
PLAN_CODE
COUNTRY_COVERAGE_TYPE (examples: US_Medicare, UK_NHS, AU_Private)
CHANNEL (examples: Web, Mobile, Partner, Phone, Kiosk, Advisor, Reseller)
AGE_RANGE
CHRONIC_CONDITION_FLAG
FIRST_PURCHASE_ONLY_FLAG
QUALIFICATION_DURATION (examples: 90 days)
DISCOUNT_PERCENTAGE
ETC_BUNDLES
BUNDLE_CODE
BUNDLE_NAME (examples: Preventive Care Bundle, Chronic Care Bundle, Wellness Bundle)
Runtime Flow Today
Input: Account number and national identifier (SSN, Aadhaar ID, NHS ID).
Profile Fetch: Demographics and health profile retrieved from EDW API.
Plan Lookup: Joins across PLAN, PLAN_PRODUCT_MAPPING, PRODUCT, PLAN_QUAL_RULE, and ETC_BUNDLES.
Filter: Intent (first-purchase-only), channel-specific logic, upgrade and downgrade rules.
Output: Return eligible plans to the requesting channel.
Why SQL Struggles
As dimensions grow—billing cadences, coverage types, regulations, channels, bundles—JOINs multiply. SQL queries become:
Hard to maintain (8–10 table queries are common)
Rigid (schema churn for new products or rules)
Split-brain (data in SQL, policy in a rules engine)
Eligibility is fundamentally a relationship problem, and SQL treats relationships as JOINs rather than first-class citizens.
Graph Database Model: A Better Fit
Graph databases store nodes (plans, products, rules, constraints) and edges (applies-to, requires, allowed-in, upgrades-to). This makes querying natural and declarative.
Core Nodes
Plan {code, name, billingCadence, discount, effectiveFrom, effectiveTo, status}
Product {code, name, type}
Bundle {name}
Country {iso}
Channel {name}
DemographicCriteria {minAge, maxAge, flags}
HealthCondition {name}
Intent {type, durationDays}
Core Relationships
(Plan)-[:COVERS]->(Product)
(Plan)-[:PART_OF_BUNDLE]->(Bundle)
(Plan)-[:ALLOWED_IN_COUNTRY]->(Country)
(Plan)-[:ALLOWED_IN_CHANNEL]->(Channel)
(Plan)-[:REQUIRES_DEMOGRAPHIC]->(DemographicCriteria)
(Plan)-[:HAS_INTENT]->(Intent)
(Plan)-[:UPGRADES_TO]->(Plan)
(CustomerProfile)-[:HAS_CONDITION]->(HealthCondition)
Example Query (Cypher)
MATCH (pl:Plan)-[:COVERS]->(:Product)
MATCH (pl)-[:ALLOWED_IN_COUNTRY]->(:Country {iso: $country})
MATCH (pl)-[:ALLOWED_IN_CHANNEL]->(:Channel {name: $channel})
WHERE coalesce(pl.effectiveFrom, date('1900-01-01')) <= date()
  AND coalesce(pl.effectiveTo, date('9999-12-31')) >= date()
  AND pl.status = 'PUBLISHED'
WITH DISTINCT pl
MATCH (pl)-[:REQUIRES_DEMOGRAPHIC]->(d:DemographicCriteria)
WHERE $age >= d.minAge AND $age <= d.maxAge
RETURN pl.code, pl.name, pl.billingCadence, pl.discount
ORDER BY pl.discount DESC
Performance and Scale Strategy (Strict 2-Second SLA)
Constraints:
Every API response must complete in 2,000 ms or less.
No synchronous calls to the graph database in the request path.
Solution:
Graph DB is the authoritative configuration source.
A materializer service computes (qualifier key → plan list) offline.
Each pod holds an immutable in-process index (on-heap or off-heap) for constant-time lookups.
Redis or blob storage can be used to distribute segments, but not in the hot path.
The Rules Engine applies customer-specific policies (first-purchase-only, upgrades, time windows).
This design keeps end-to-end latency well under 2 seconds even at 100k RPS.
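The hot-path lookup described above can be sketched as an immutable in-process map keyed by a precomputed qualifier. The key shape (country | channel | age band) and class names here are illustrative assumptions, not part of the source design:

```java
import java.util.List;
import java.util.Map;

// Immutable qualifier-key -> plan-codes index, built offline by the
// materializer and swapped in whole; the request path only reads it.
public final class PlanIndex {
    private final Map<String, List<String>> plansByKey;

    public PlanIndex(Map<String, List<String>> plansByKey) {
        this.plansByKey = Map.copyOf(plansByKey); // defensive immutable copy
    }

    // Hypothetical key shape: COUNTRY|CHANNEL|AGE_BAND (e.g. "US|Web|60-75").
    public static String key(String country, String channel, String ageBand) {
        return country + "|" + channel + "|" + ageBand;
    }

    // Constant-time, allocation-light lookup on the hot path.
    public List<String> eligiblePlans(String country, String channel, String ageBand) {
        return plansByKey.getOrDefault(key(country, channel, ageBand), List.of());
    }
}
```

Because the map is never mutated after construction, reads are lock-free and safe to share across all request threads in a pod.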
SRE Perspective: Reliability, Budgets, and Guardrails
Graph DB is off the hot path. Requests read only from the in-process index.
Versioned index: each config publish builds a new version, validated off-thread, and hot-swapped atomically using an AtomicReference.
Canary deployments: some pods activate the new version first, others continue serving the old one. Rollback is instant.
No flat-file snapshots. They require restarts and break canary rollouts.
On-heap index: simplest approach, fast, but monitor GC. Use ZGC or Shenandoah for low pause times.
Off-heap index: Chronicle Map, memory-mapped files, or Direct ByteBuffers for GB-scale indexes with zero GC overhead.
Latency budgets:
EDW profile (cached): 20–80 ms
Index lookup (local map): < 1 ms
Rules evaluation: 5–25 ms
Serialization and network: 50–100 ms
Worst-case total of the budgets above: 80 + 1 + 25 + 100 ≈ 206 ms, leaving ample headroom below 2,000 ms even at high percentiles
Observability:
Metrics for active config version, index size, lookup time, swap duration
Cache hit ratio ≥ 99%
Materializer lag (in seconds) monitored and alerted on
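The versioned, atomically hot-swapped index described above can be sketched with an AtomicReference, as the post suggests. The version wrapper and method names are illustrative, not from the source:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Holds the active index version; readers never block and never see a
// partially built index, because the swap is a single atomic store.
public final class IndexHolder {
    // Illustrative pairing of a config version with its lookup table.
    public record VersionedIndex(String version, Map<String, List<String>> plansByKey) {}

    private final AtomicReference<VersionedIndex> active = new AtomicReference<>();

    // Called off the request thread, only after the new version is fully
    // built and validated; returns the replaced version as a rollback handle.
    public VersionedIndex activate(VersionedIndex next) {
        return active.getAndSet(next);
    }

    // Hot-path read: a single volatile load, no locking.
    public VersionedIndex current() {
        return active.get();
    }
}
```

Rollback is just `activate(previous)`: repointing the reference at the prior version is as cheap and as atomic as the original swap, which is what makes instant canary rollback possible.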
Developer Workflow and Governance
Authoring: Plan config stored in Git as YAML/JSON.
CI/CD: Validates config, publishes to Graph DB, triggers materializer.
Materializer: Builds new index version and distributes segments.
Pods: Validate and load the new index in background, hot-swap atomically.
Canary: Route subset of traffic to pods with new version.
Rollback: Repoint pods to previous version instantly.
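A Git-stored plan definition for the authoring step above might look like the following YAML; the field names are an illustrative sketch, not a schema from the source:

```yaml
# Illustrative plan definition; validated by CI before publish to the Graph DB.
plan:
  code: SENIOR_CARE_1Y
  name: 1-Year Plan
  billingCadence: YEARLY
  status: PUBLISHED
  covers: [Diabetes WellCare, Eye Care, Orthopedics]
  allowedInCountries: [US, UK]
  allowedInChannels: [Web, Advisor]
  demographics: {minAge: 60, maxAge: 75}
  intent: {type: FIRST_PURCHASE_ONLY, durationDays: 90}
  upgradesTo: SENIOR_CARE_LIFETIME
```

Each field maps directly onto a node or relationship in the graph model (COVERS, ALLOWED_IN_COUNTRY, REQUIRES_DEMOGRAPHIC, UPGRADES_TO), which keeps the CI validation step a straightforward structural check.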
Side-by-Side Mental Model
RDBMS: PLAN connects to PRODUCT via PLAN_PRODUCT_MAPPING; PLAN_QUAL_RULE encodes country, channel, and demographics; ETC_BUNDLES aggregates products. Eligibility requires deep JOINs and external rules.
Graph: Plans, Products, Bundles, Countries, Channels, Demographics, and Intents are nodes. Relationships like COVERS, ALLOWED_IN, REQUIRES_DEMOGRAPHIC, HAS_INTENT, and UPGRADES_TO express rules directly. Eligibility is precomputed into keys for fast lookup.
Key Takeaways
Healthcare plan qualification is a relationship-heavy problem.
RDBMS are strong for transactions, weak for evolving eligibility rules.
Graph DBs are natural for modeling plan-product-rule relationships.
To meet a strict 2-second SLA at scale, use an in-process index built from graph data.
Versioning, atomic swaps, and canary rollouts provide safety and agility.
This design balances flexibility (graph), speed (local index plus rules engine), and reliability (SRE guardrails).