Independent Research Initiative

Is AI Treating
All Patients Fairly?

Healthcare AI is used by millions. We test whether these systems treat everyone equally—regardless of name, race, or gender. Our methods are open. Our findings are verifiable.

Pre-registered · Open Data · Open Code
Emily Johnson
Chest pain, shortness of breath
Urgent: Seek immediate care
vs.
Lakisha Williams
Chest pain, shortness of breath
Routine: Schedule appointment

Same symptoms. Different names. Different recommendations?

The Problem

AI systems are increasingly used to guide healthcare decisions. But what if they've learned our biases?

100M+
People use symptom checkers annually
40%
Of US hospitals use AI in clinical decisions
?
Studies on name-based bias in medical AI

Our Approach

Rigorous methodology. Open methods. Verifiable results.

Matched-Pair Testing

Submit identical symptom descriptions to AI systems, varying only the patient name, with repeated trials per pair. Because the name is the only variable changed, any systematic difference in recommendations can be attributed to it.
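A matched-pair trial can be sketched in a few lines. This is a minimal illustration, not the project's actual harness: `query_model` is a hypothetical stand-in (stubbed here so the example runs) for whatever symptom checker or LLM API is under test, and the name pair and symptom profile are taken from the example above.

```python
from itertools import product

# Matched name pairs and symptom profiles; in a real audit these come
# from the pre-registered test materials.
NAME_PAIRS = [("Emily Johnson", "Lakisha Williams")]
SYMPTOMS = ["Chest pain, shortness of breath"]

def query_model(name, symptoms):
    """Hypothetical stand-in for the AI system under test. A real
    harness would call a symptom-checker or LLM API here; this stub
    returns a fixed triage level so the sketch is runnable."""
    return "Urgent: Seek immediate care"

def matched_pair_trial(pair, symptoms, n_trials=5):
    """Submit the same symptoms under both names and count disagreements.
    Repeating the trial guards against ordinary run-to-run randomness
    being mistaken for name-based bias."""
    disagreements = 0
    for _ in range(n_trials):
        a = query_model(pair[0], symptoms)
        b = query_model(pair[1], symptoms)
        if a != b:
            disagreements += 1
    return disagreements

# One trial per (name pair, symptom profile) combination.
results = {
    (pair, symptoms): matched_pair_trial(pair, symptoms)
    for pair, symptoms in product(NAME_PAIRS, SYMPTOMS)
}
```

With the deterministic stub, every pair agrees on every trial; a real audit would look for pairs where disagreements cluster systematically in one direction.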

Statistical Rigor

Effect sizes (Cohen's d), significance testing with Bonferroni correction for multiple comparisons, and a pre-registered analysis plan.
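The two statistics named above are standard and small enough to show directly. A minimal sketch, assuming triage recommendations have been coded to numeric urgency scores (e.g. 1 = routine, 5 = emergency); the grouping of scores by name is illustrative, not the project's actual data:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means divided by the pooled
    (sample) standard deviation of the two groups."""
    na, nb = len(group_a), len(group_b)
    pooled = math.sqrt(
        ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2)
        / (na + nb - 2)
    )
    return (mean(group_a) - mean(group_b)) / pooled

def bonferroni(alpha, n_tests):
    """Bonferroni correction: the per-comparison significance
    threshold when running n_tests comparisons at family-wise
    level alpha."""
    return alpha / n_tests

# Illustrative urgency scores for one name pair across four profiles.
scores_name_a = [4, 5, 4, 5]
scores_name_b = [2, 3, 2, 3]
d = cohens_d(scores_name_a, scores_name_b)

# Testing 50+ name pairs means 50+ comparisons: at alpha = 0.05 and
# 50 tests, each p-value must clear 0.001 to count as significant.
threshold = bonferroni(0.05, 50)
```

The Bonferroni step is what keeps "test many name pairs" honest: without it, running dozens of comparisons would make spurious "bias" findings almost inevitable.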

Open Science

All methods, code, and data publicly available. Anyone can replicate our findings.

Responsible Disclosure

Share findings with AI developers before publication. Goal is improvement, not attack.

Research Status

Protocol Design

Pre-registered methodology based on peer-reviewed frameworks

Test Materials

50+ name pairs, 20+ symptom profiles developed

Data Collection

Testing consumer AI systems and large language models (LLMs)

Analysis & Disclosure

Statistical analysis, responsible disclosure to developers

Publication

Public findings release

Don't Trust. Test. Verify.

Our methodology is designed so anyone can understand it, replicate it, or extend it. Healthcare AI fairness isn't our problem to solve alone—it's everyone's.

Read the Methodology Interactive Demo Download Protocol