strategy · 6 min read · April 2026

Political Polling Methodology — NUUN's Method | NUUN Digital

A plain-spoken explainer of modern political polling methodology — sampling, likely-voter screens, weighting, and AAPOR/CRIC disclosure standards.

Quick answer
Modern political polling methodology centres on four disciplines: probability-based or quota-controlled sampling, rigorous likely-voter screens, multi-variable weighting to population benchmarks, and AAPOR- and CRIC-compliant disclosure. The widely-reported polling misses of the 2010s traced to sampling and likely-voter screen weakness, not method collapse. Done honestly, polling still produces the most reliable predictive signal available to campaigns, media, and government.

Modern political polling is a chain of methodological choices — sample frame, mode, likely-voter screen, question wording, fielding window, weighting, leaner allocation — where any weak link can bias the result by more than the headline margin of error. This piece walks through each choice, publishes NUUN's defaults, and explains what the AAPOR Transparency Initiative and CRIC disclosure standards actually require.

THE METHODOLOGICAL CHAIN

A political poll's validity depends on seven sequential choices:

  1. Sample frame
  2. Mode (online, phone, hybrid)
  3. Screening (adult population vs registered voters vs likely voters)
  4. Question wording and order
  5. Fielding window
  6. Weighting
  7. Leaner allocation

A failure in any of these distorts the result. The headline margin of error quantifies only one of them (sampling variability), leaving the other six sources of potential bias unmeasured.
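The sampling-variability component is the one the headline number does capture. A minimal sketch of that standard calculation (generic formula, not NUUN-specific code):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Headline margin of error (95% CI) for a simple random sample.

    p = 0.5 gives the maximum (most conservative) margin.
    This captures sampling variability only -- none of the other
    six links in the methodological chain.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national read:
print(f"n=1000 -> +/-{margin_of_error(1000) * 100:.1f} pp")  # +/-3.1 pp
```

Note that the formula shrinks with the square root of n, which is why halving the margin of error requires quadrupling the sample.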

SAMPLE FRAME

Probability-based online panels. The modern default. Leading Canadian frames include Léger Opinion and Dynata's probability sample. Coverage of adult Canadians is high, but panel recruitment skews toward the digitally engaged.

IVR/live phone. Declining response rates (now typically under 4%) have eroded pure phone sampling. Live phone remains valuable for hard-to-reach older rural populations and can be hybridized with online.

Mixed-mode. Online plus phone blend. Captures both digitally engaged and phone-first populations. Operationally expensive; methodologically defensible.

Our default: probability-based online for national and most provincial work; mixed-mode for rural-heavy jurisdictions and where coverage risk is high.

LIKELY VOTER SCREEN

The likely-voter screen is the highest-leverage methodological choice in election polling.

Self-report certainty. Standard approach — asks respondents how certain they are to vote. Over-predicts turnout (social desirability bias).

Behavioural history. Incorporates past-election self-report. Better predictive validity; vulnerable to respondent memory error.

Composite index. Combines certainty, past vote, interest, and demographics into a single turnout-propensity score. Our default; component weights are published with each release.

Validated voter file. Where a voter file with validated turnout history is available (rarely in Canada federally, often in US primary states), joining the survey sample to the voter file is the gold standard.

The difference between approaches is material. In a typical Canadian federal race, vote intent shifts 2–5 points between an "all decided voters" read and a "high-probability voters" read.
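A composite index of the kind described above can be sketched as follows. The component weights and the screen cutoff here are illustrative placeholders, not NUUN's published weighting:

```python
def likely_voter_score(certainty: int, past_votes: int, interest: int) -> float:
    """Composite turnout-propensity score on a 0-1 scale.

    certainty:  self-reported certainty to vote (0-10)
    past_votes: elections voted in out of the last three (0-3)
    interest:   self-reported campaign interest (0-10)

    The 0.5 / 0.3 / 0.2 weights are hypothetical; a production index
    uses calibrated, published weights and adds demographic terms.
    """
    return (0.5 * certainty / 10
            + 0.3 * past_votes / 3
            + 0.2 * interest / 10)

def screen(respondents: list[dict], cutoff: float = 0.7) -> list[dict]:
    """Keep only high-propensity respondents (illustrative cutoff)."""
    return [r for r in respondents if likely_voter_score(**r) >= cutoff]
```

Moving the cutoff up or down is exactly the "all decided voters" versus "high-probability voters" choice that shifts vote intent by 2–5 points.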

WEIGHTING

Weighting corrects for sample-frame imbalances. Core weights:

  • Demographics. Age × gender × region × education. Drawn from Statistics Canada census.
  • Turnout propensity. For likely-voter models.
  • Past vote. Recall of last election weighted against actual outcome. Controversial — corrects for some biases, risks over-fitting.
  • Mode. In mixed-mode designs.

Over-weighting (any single cell weighted more than 3× its raw value) inflates variance and should trigger a review of the sample frame rather than heavier weighting.
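Multi-variable weighting of this kind is usually done by raking (iterative proportional fitting). A small self-contained sketch with hypothetical variable names, plus a helper for the over-weighting check above:

```python
def rake(weights, sample, margins, max_iter=100, tol=1e-6):
    """Rake survey weights to population margins, one variable at a time.

    weights: initial (design) weight per respondent
    sample:  one dict per respondent, e.g. {"age": "18-34", "region": "ON"}
    margins: target shares per variable, e.g.
             {"age": {"18-34": 0.27, "35-54": 0.33, "55+": 0.40}}
    """
    w = list(weights)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, targets in margins.items():
            total = sum(w)
            # Current weighted share of each level of this variable.
            shares = {
                lvl: sum(w[i] for i, r in enumerate(sample) if r[var] == lvl) / total
                for lvl in targets
            }
            for i, r in enumerate(sample):
                lvl = r[var]
                if shares.get(lvl, 0) > 0:
                    factor = targets[lvl] / shares[lvl]
                    max_shift = max(max_shift, abs(factor - 1.0))
                    w[i] *= factor
        if max_shift < tol:  # all margins matched; converged
            break
    return w

def overweight_flags(w_final, w_initial, cap=3.0):
    """Respondents raked beyond `cap` times their raw weight --
    the sample-frame review trigger described above."""
    return [i for i, (wf, wi) in enumerate(zip(w_final, w_initial))
            if wf > cap * wi]
```

If `overweight_flags` returns a non-trivial share of the sample, the fix is a better frame, not a higher cap.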

LEANER ALLOCATION

"Leaners" are respondents who decline to name a party but indicate a direction. Three treatments:

  • Exclude leaners. Cleanest; produces smaller decided-voter reads with wider confidence intervals.
  • Allocate leaners proportionally. Apportions leaners to parties in proportion to decided-voter support.
  • Allocate per stated lean. Most common in NUUN's work — allocates leaners to the party they indicated, with partial weight.

We publish the leaner treatment applied to every wave.
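The partial-weight treatment can be sketched as follows. The 0.5 default weight and the function name are illustrative, since the actual leaner weight is the figure published per wave:

```python
def allocate_leaners(decided: dict, leaners: dict,
                     leaner_weight: float = 0.5) -> dict:
    """Allocate leaners to their stated party at partial weight.

    decided: weighted counts of decided voters per party
    leaners: weighted counts of leaners per party they lean toward
    leaner_weight: illustrative; published with every wave in practice
    Returns vote-intent shares over the combined decided+leaner base.
    """
    totals = dict(decided)
    for party, n in leaners.items():
        totals[party] = totals.get(party, 0.0) + leaner_weight * n
    base = sum(totals.values())
    return {p: n / base for p, n in totals.items()}
```

Setting `leaner_weight=0` reproduces the exclude-leaners treatment, and `leaner_weight=1` allocates fully per stated lean, so the three treatments sit on one dial.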

QUESTION WORDING

Question wording is under-discussed publicly but drives disproportionate variance. Four principles:

  1. Vote-intent wording should mirror the ballot. "If a federal election were held tomorrow, for which party's candidate would you vote?" not "Which party do you support?"
  2. Party names should rotate. To control for order bias.
  3. Undecided must be an allowed response. Forced-choice designs overstate vote intent.
  4. Leader questions should be separate from party questions. Favourability ≠ vote intent.

FIELDING WINDOW

Campaign polls should field tightly (2–5 days) to capture news-cycle shifts. Tracker polls (like NUUN's Public Opinion Tracker) field 6–10 days monthly for stability. Very short fields (<48 hours) risk weekend-effect bias and should be disclosed.

DISCLOSURE — WHAT AAPOR AND CRIC ACTUALLY REQUIRE

Under AAPOR Transparency Initiative and CRIC standards, public-release polls must disclose:

  • Sponsor and fieldwork organization
  • Sample frame and mode
  • Dates of fielding
  • Sample size and margin of error (or credibility interval)
  • Question wording for any released result
  • Weighting variables and methodology
  • Response rate (or completion rate for online panels)
  • Funding source

No disclosure, no credibility. When a poll release omits any of these, treat the result with appropriate skepticism.
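A release can be checked against this list mechanically. The field names below are shorthand invented for illustration, not a formal AAPOR or CRIC schema:

```python
# Shorthand for the disclosure checklist above (hypothetical field names).
REQUIRED_DISCLOSURES = (
    "sponsor", "fieldwork_org", "sample_frame", "mode", "field_dates",
    "sample_size", "margin_of_error", "question_wording",
    "weighting", "response_rate", "funding_source",
)

def missing_disclosures(release: dict) -> list[str]:
    """Checklist items absent or empty in a public poll release."""
    return [f for f in REQUIRED_DISCLOSURES if not release.get(f)]
```

Anything the function returns is a reason to treat the headline number as unverified.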

WHAT GOES WRONG IN ELECTION POLLING

Four recurring failure modes:

Herding. Pollsters unconsciously trim outlier results toward the consensus. Detected via lower-than-expected cross-pollster variance late in a campaign.

Non-response bias. Supporters of one party may be less reachable than supporters of another. Corrected by post-stratification but not eliminated.

Late deciders. Last-week movement captured only by tight fielding and modelled allocation.

Turnout estimation. Most polling misses arise from over-estimating turnout among low-propensity groups.

The 2020 US polling miss was primarily a non-response issue; the 2015 UK miss traced to unrepresentative samples and turnout estimation. The methods improve after each major miss.

NUUN'S POLLING STANDARDS

  • AAPOR Transparency Initiative member
  • CRIC code-of-conduct signatory
  • ESOMAR 28 responses published
  • Full methodology note with every public release
  • Partisan-campaign work declined while the Public Opinion Tracker is active
  • Conflict-of-interest register published

FAQ

Q: Why do pollsters sometimes disagree?

A: Methodological choices — sample frame, screen, weighting, leaner treatment — account for most of the between-pollster variance. A healthy polling environment has methodological diversity.

Q: How large a margin of error is acceptable?

A: n=800–1,500 for national reads (±2.5 to ±3.5pp) is standard. Smaller samples are defensible for directional reads; quoting a margin of error ≤±1pp should trigger skepticism unless the sample is n=10,000+.

Q: Are online polls less accurate than phone?

A: Modern probability-based online polls are comparable to or better than late-era phone polling on electoral accuracy. The comparison depends on sample frame and weighting discipline, not mode alone.

Q: Why do polls sometimes miss election results?

A: Most misses come from turnout estimation or late movement not captured in the final field window. The 2020 US polling miss was primarily a non-response bias among Trump-leaning voters.

Q: Can AI models replace traditional polling?

A: No. Predictive models can usefully forecast election outcomes given data; they cannot replace the underlying data (polls, voter files, economic fundamentals). Claims to the contrary are sales pitches.

Q: What's the minimum disclosure I should demand before trusting a poll?

A: Sponsor, sample frame, mode, dates, size, margin of error, weighting variables, and question wording. If any is missing, treat the result as unverified.

Q: How does NUUN handle campaign work vs public tracker?

A: We decline partisan vote-intent work while the Public Opinion Tracker is active, to avoid conflict of interest. We accept non-partisan public-affairs and ballot-measure work under separate methodology notes.

Q: What's the 2026 Canadian polling benchmark?

A: Expect a pre-election polling average within 2–3 percentage points of each party's final vote share. Individual pollsters will vary; aggregators (338Canada, CBC Poll Tracker) smooth the noise.

About the author

NUUN Digital Polling Practice

Reviewed by NUUN's polling leads with external academic review

AAPOR Transparency Initiative–aligned methodology; CRIC-compliant public polling across Canadian federal and provincial cycles.

FREQUENTLY ASKED

Q: How is a modern political polling sample built?

A: Through a combination of probability-based panels (where available), mixed-mode telephone/online sampling for representativeness, and quota controls by demographics, region, and vote history. Single-source online panels are de-prioritized for benchmark work.

Q: What is a likely-voter screen and why does it matter?

A: A set of questions used to filter respondents down to those most likely to actually vote. Badly calibrated likely-voter screens — too generous, not anchored to historical turnout — are the single biggest source of polling misses in recent cycles.

Q: How is polling data weighted?

A: On age, sex, education, region, vote history, and (where valid) party self-identification, benchmarked to census and elections-authority data. Over-weighting any single variable increases risk; best practice rakes across multiple variables.

Q: What disclosure standards should a political poll meet?

A: The AAPOR Transparency Initiative in the US; CRIC (formerly MRIA) in Canada. Disclose sample source, mode, dates, sample size, margin of error, weighting, and the exact wording of questions. Pollsters that decline disclosure are not credible.

Q: How close to election day is a poll still reliable?

A: Tracking polls are valid up to 24–48 hours before election day; after that, late deciders and turnout volatility drive unpredictable variance. Most reputable pollsters pause publication in the final 24 hours.

Q: Why do polls sometimes miss by large margins?

A: Usually some combination of sample coverage gaps, likely-voter screen miscalibration, unmeasured late-campaign shifts, or shy-voter effects. When multiple pollsters miss in the same direction, shared methodological assumptions are usually the cause.

Run a Public-Interest Poll

If you need a poll fielded to a fully published methodology — for public affairs, ballot measures, or issue research — we're built for it.