Methodology

How UNSAID Works

A transparent look at our approach to modeling non-responder behavior—including what we can prove, what we're working to validate, and honest limitations.

The Non-Response Problem

Every survey suffers from the same fundamental issue: the people who respond are systematically different from those who don't.

70%
Average non-response rate in modern surveys. This is up from 40% just 20 years ago.

Traditional approaches try to fix this through statistical weighting—adjusting results to match known demographic distributions. But this assumes that, within each demographic group, non-responders think the same way as responders. They don't.

A 35-year-old woman who takes surveys is fundamentally different from one who ignores them. They have different personalities, different time pressures, different attitudes toward research. Weighting by demographics alone can't capture this.

Our Approach

UNSAID takes a different approach: we model the reasons people don't respond, then simulate what they would have said.

Step 1: Census-Validated Demographics

We start with demographic distributions that match real-world populations. Our persona generation achieves Total Variation Distance (TVD) < 0.05 against U.S. Census data—meaning less than 5% of the probability mass in our demographic distributions would need to shift to match the Census exactly.
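For readers unfamiliar with the metric, TVD between two discrete distributions is half the sum of the absolute per-category differences; 0 means identical, 1 means completely disjoint. A minimal sketch (the age brackets and shares below are invented for illustration, not our actual calibration data):

```python
def total_variation_distance(p: dict, q: dict) -> float:
    """TVD = 0.5 * sum(|p(x) - q(x)|) over all categories.

    Ranges from 0.0 (identical distributions) to 1.0 (no overlap).
    """
    categories = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in categories)

# Hypothetical age-bracket shares: Census benchmark vs. generated personas.
census = {"18-34": 0.29, "35-54": 0.32, "55+": 0.39}
personas = {"18-34": 0.31, "35-54": 0.31, "55+": 0.38}

tvd = total_variation_distance(census, personas)  # 0.02, under the 0.05 bar
```

The same computation underlies the TVD < 0.20 target for response-prediction validation mentioned below.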

Step 2: Behavioral Segmentation

We segment potential non-responders by their likely reasons for not responding:

  • Time-constrained — Too busy, survey fatigue
  • Privacy-conscious — Suspicious of data collection
  • Topic-disengaged — Subject doesn't interest them
  • Digitally inaccessible — Don't use the survey channel
  • Actively avoiding — Deliberately opt out of research
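As a rough illustration of how this taxonomy can drive simulation, segment membership for each synthetic non-responder can be drawn from a prior over the five reasons. The weights below are placeholders for the example, not UNSAID's calibrated estimates:

```python
import random

# The five non-response segments listed above; probabilities are
# illustrative placeholders, not calibrated values.
SEGMENT_PRIORS = {
    "time_constrained": 0.35,
    "privacy_conscious": 0.20,
    "topic_disengaged": 0.20,
    "digitally_inaccessible": 0.15,
    "actively_avoiding": 0.10,
}

def assign_segment(rng: random.Random) -> str:
    """Sample a non-response reason for one simulated non-responder."""
    segments = list(SEGMENT_PRIORS)
    weights = list(SEGMENT_PRIORS.values())
    return rng.choices(segments, weights=weights, k=1)[0]

rng = random.Random(42)
sample = [assign_segment(rng) for _ in range(1000)]
```

In practice these priors would themselves be conditioned on survey channel and topic, since (for example) the digitally inaccessible segment matters far more for online panels than for phone surveys.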

Step 3: Psychographic Modeling

For each non-responder type, we model psychological profiles using:

  • Big Five personality traits (OCEAN model)
  • Decision fatigue and cognitive load factors
  • Values and motivations frameworks
  • Anti-stereotyping constraints to avoid demographic assumptions
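A minimal sketch of what such a profile might look like as a data structure. The field names and the sampling scheme are illustrative assumptions; the key point is the anti-stereotyping constraint—traits are drawn independently of demographics, so no demographic attribute deterministically fixes a personality:

```python
import random
from dataclasses import dataclass

@dataclass
class Persona:
    """Minimal psychographic profile; fields are illustrative."""
    segment: str
    openness: float          # Big Five (OCEAN) traits on a 0-1 scale
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def sample_persona(segment: str, rng: random.Random) -> Persona:
    # Anti-stereotyping constraint: trait values are sampled without
    # reference to any demographic attribute, so personality varies
    # freely within every demographic cell.
    traits = [rng.random() for _ in range(5)]
    return Persona(segment, *traits)

p = sample_persona("privacy_conscious", rng=random.Random(7))
```

A production version would replace the uniform draws with empirically grounded trait distributions and add the decision-fatigue and values dimensions listed above.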

Step 4: AI-Powered Response Simulation

Using large language models (Claude, GPT-4), we simulate how each persona type would respond to survey questions—if they had responded. The model is prompted with the persona's full psychological profile and asked to reason through each question from that perspective.
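The prompting step can be sketched as assembling the persona profile into the model's instructions. This template and its field names are simplified for illustration—production prompts carry the full psychological profile plus guardrails—and no model call is shown:

```python
def build_simulation_prompt(persona: dict, question: str) -> str:
    """Assemble a role-play prompt for one persona and one survey question.

    The template is an illustrative simplification of the approach
    described above, not UNSAID's production prompt.
    """
    profile = "\n".join(f"- {k}: {v}" for k, v in persona.items())
    return (
        "You are role-playing a survey non-responder with this profile:\n"
        f"{profile}\n\n"
        "Reason through the question from this perspective, then answer "
        "as this person would have, had they responded.\n\n"
        f"Question: {question}"
    )

prompt = build_simulation_prompt(
    {"segment": "time_constrained", "openness": 0.4},
    "How satisfied are you with your current mobile carrier?",
)
```

The resulting string would be sent to the underlying model (Claude or GPT-4) once per persona-question pair, and the answers aggregated into the simulated response set.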

What's Validated

✓ Validated

Demographic Accuracy

Census-level accuracy with TVD < 0.05 across age, gender, education, income, and geography.

✓ Validated

Psychographic Consistency

Personas maintain consistent personality traits across multiple survey interactions.

◐ In Progress

Response Prediction Accuracy

Comparing AI-predicted responses vs. actual late-responder data. Target: TVD < 0.20.

○ Planned

Longitudinal Stability

Tracking prediction accuracy across multiple survey waves and contexts.

We're actively partnering with research firms to validate non-response predictions against real-world follow-up surveys. Request a Bias Audit to participate.

Honest Limitations

This is AI speculation, not ground truth

Our models generate plausible responses based on persona profiles. They cannot truly know what any individual would say. Use for hypothesis generation and exploration, not final decision-making.

Prediction accuracy is not yet validated

While our demographics are census-accurate, we have not yet published peer-reviewed validation of response predictions. This research is underway.

Cannot capture truly unique perspectives

AI models are trained on aggregated data and may miss genuinely novel viewpoints that exist in small populations.

Cultural and regional limitations

Current models are primarily calibrated on U.S. populations. International accuracy varies.

Sensitive topics require extra caution

AI responses on politically sensitive, health-related, or deeply personal topics should be treated as directional only.

When to Use UNSAID

✓ Good Use Cases

  • Pre-testing surveys before panel deployment
  • Identifying potential blind spots in research design
  • Generating hypotheses about non-responder attitudes
  • Exploring "what if" scenarios quickly
  • Supplementing (not replacing) traditional research

✗ Not Recommended For

  • Regulatory filings or legal claims
  • Final market sizing decisions
  • Clinical trial data
  • Financial forecasting with fiduciary obligations
  • Any use requiring "certified" accuracy

See It In Action

Request a free Bias Audit to see how UNSAID analyzes your specific survey context.

Get Free Bias Audit →