POLITICAL POLLING METHODOLOGY — NUUN'S METHOD
Quick Answer: Modern political polling is a chain of methodological choices — sample frame, mode, likely-voter screen, question wording, weighting, leaner allocation — where any weak link can bias the result by more than the headline margin of error. This piece walks through each choice, publishes NUUN's defaults, and explains what the AAPOR Transparency Initiative and CRIC disclosure standards actually require.
THE METHODOLOGICAL CHAIN
A political poll's validity depends on seven sequential choices:
- Sample frame
- Mode (online, phone, hybrid)
- Screening (adult population vs registered voters vs likely voters)
- Question wording and order
- Fielding window
- Weighting
- Leaner allocation
A failure in any of these distorts the result. The headline margin of error addresses only one source of error (sampling variability), none of the other six sources of potential bias.
SAMPLE FRAME
Probability-based online panels. The modern default. Leading Canadian frames include Léger Opinion and Dynata's probability sample. Coverage of adult Canadians is high, but recruitment skews toward the digitally engaged, so the achieved sample is not fully random.
IVR/live phone. Declining response rates (now typically under 4%) have eroded pure phone sampling. Live phone remains valuable for hard-to-reach older rural populations and can be hybridized with online.
Mixed-mode. Online plus phone blend. Captures both digitally engaged and phone-first populations. Operationally expensive; methodologically defensible.
Our default: probability-based online for national and most provincial work; mixed-mode for rural-heavy jurisdictions and where coverage risk is high.
LIKELY VOTER SCREEN
The likely-voter screen is the highest-leverage methodological choice in election polling.
Self-report certainty. Standard approach — asks respondents how certain they are to vote. Over-predicts turnout (social desirability bias).
Behavioural history. Incorporates past-election self-report. Better predictive validity; vulnerable to respondent memory error.
Composite index. Combines certainty, past vote, interest, and demographics. Our default; we publish the index weighting.
Validated voter file. Where a voter file with validated turnout history is available (rarely in Canada federally, often in US primary states), joining the survey sample to the voter file is the gold standard.
The difference between approaches is material. In a typical Canadian federal race, vote intent shifts 2–5 points between an "all decided voters" read and a "high-probability voters" read.
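A composite screen of this kind reduces to a weighted score with a cutoff. The sketch below is illustrative only: the component weights, the 0.6 threshold, and the input scales are assumptions for demonstration, not NUUN's published model.

```python
def likely_voter_score(certainty, voted_last, interest, age):
    """Return a 0-1 turnout propensity from four survey inputs.

    certainty:  0-10 self-reported certainty to vote
    voted_last: True if respondent reports voting in the last election
    interest:   0-10 interest in the campaign
    age:        respondent age in years (older cohorts turn out more)
    Weights are hypothetical, chosen for illustration.
    """
    score = 0.0
    score += 0.40 * (certainty / 10)               # self-report certainty
    score += 0.30 * (1.0 if voted_last else 0.0)   # behavioural history
    score += 0.20 * (interest / 10)                # campaign interest
    score += 0.10 * min(age, 70) / 70              # demographic proxy
    return score

def is_likely_voter(resp, threshold=0.6):
    return likely_voter_score(**resp) >= threshold

# A certain, engaged past voter clears the screen; a disengaged
# first-time respondent does not.
engaged = {"certainty": 9, "voted_last": True, "interest": 8, "age": 62}
disengaged = {"certainty": 4, "voted_last": False, "interest": 2, "age": 21}
```

The threshold is where the 2–5 point swing lives: raising it from 0.6 toward 0.8 moves the read from "all decided voters" toward "high-probability voters."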
WEIGHTING
Weighting corrects for sample-frame imbalances. Core weights:
- Demographics. Age × gender × region × education. Drawn from Statistics Canada census.
- Turnout propensity. For likely-voter models.
- Past vote. Recall of last election weighted against actual outcome. Controversial — corrects for some biases, risks over-fitting.
- Mode. In mixed-mode designs.
Over-weighting (any single cell weighted >3x its raw value) inflates variance and should trigger a review of sample frame rather than heavier weighting.
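One common way to apply demographic weights of this kind is raking (iterative proportional fitting) against census marginals. The sketch below, with invented counts and targets, also applies the 3x review line described above: cells pushed past it are flagged rather than silently up-weighted.

```python
def rake(counts, row_targets, col_targets, iters=50):
    """Iterative proportional fitting (raking) over a two-way cell table.

    counts:      {(row, col): raw respondent count}
    row_targets: {row: population share}, shares sum to 1
    col_targets: {col: population share}, shares sum to 1
    Returns (per-cell weights, cells weighted above the 3x review line).
    """
    total = sum(counts.values())
    w = {cell: 1.0 for cell in counts}
    for _ in range(iters):
        for axis, targets in ((0, row_targets), (1, col_targets)):
            for level, target in targets.items():
                cur = sum(w[c] * counts[c]
                          for c in counts if c[axis] == level) / total
                for c in counts:
                    if c[axis] == level:
                        w[c] *= target / cur
    flagged = [c for c, wt in w.items() if wt > 3.0]  # review the frame
    return w, flagged

# Illustrative sample: 18-34s are 10% of completes but 40% of the
# population, so raking pushes their weight to ~4x, above the 3x line.
counts = {("18-34", "ON"): 6, ("18-34", "QC"): 4,
          ("55+", "ON"): 54, ("55+", "QC"): 36}
weights, flagged = rake(counts,
                        row_targets={"18-34": 0.4, "55+": 0.6},
                        col_targets={"ON": 0.6, "QC": 0.4})
```

A non-empty flag list is the signal to revisit the sample frame rather than ship the heavier weights.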
LEANER ALLOCATION
"Leaners" are respondents who decline to name a party but indicate a direction. Three treatments:
- Exclude leaners. Cleanest; produces smaller decided-voter reads with wider confidence intervals.
- Allocate leaners proportionally. Apportions leaners to parties in proportion to decided-voter support.
- Allocate per stated lean. Most common in NUUN's work — allocates leaners to the party they indicated, with partial weight.
We publish the leaner treatment applied to every wave.
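The three treatments can be sketched on toy counts. The party labels, counts, and the 0.5 partial weight below are invented for illustration; the actual partial weight is a house choice.

```python
def headline_shares(decided, leaners, treatment, leaner_weight=0.5):
    """Topline party shares under the three leaner treatments.

    decided:   {party: decided-voter count}
    leaners:   {party: count of respondents leaning that way}
    treatment: "exclude" | "proportional" | "stated"
    """
    if treatment == "exclude":
        totals = dict(decided)
    elif treatment == "proportional":
        # apportion all leaners by decided-voter share
        n_lean, n_dec = sum(leaners.values()), sum(decided.values())
        totals = {p: v + n_lean * v / n_dec for p, v in decided.items()}
    elif treatment == "stated":
        # give each leaner partial weight toward the party they named
        totals = {p: decided[p] + leaner_weight * leaners.get(p, 0)
                  for p in decided}
    else:
        raise ValueError(treatment)
    n = sum(totals.values())
    return {p: round(100 * v / n, 1) for p, v in totals.items()}

decided = {"A": 400, "B": 350, "C": 150}
leaners = {"A": 20, "B": 50, "C": 10}
```

Note that proportional allocation leaves the shares identical to the exclude treatment (it only grows the base), while per-stated-lean allocation moves the topline toward whichever party the leaners actually named.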
QUESTION WORDING
Question wording is under-discussed publicly but drives disproportionate variance. Four principles:
- Vote-intent wording should mirror the ballot. "If a federal election were held tomorrow, for which party's candidate would you vote?" not "Which party do you support?"
- Party names should rotate. To control for order bias.
- Undecided must be an allowed response. Forced-choice designs overstate vote intent.
- Leader questions should be separate from party questions. Favourability ≠ vote intent.
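One simple way to implement the rotation principle is to randomize presentation order per respondent, seeded on the respondent id so each interview is reproducible. The party list below is illustrative.

```python
import random

def presented_order(parties, respondent_id):
    """Shuffle party order per respondent to control for order bias.

    Seeding on the respondent id makes each interview's order
    reproducible for audit while still varying across the sample.
    """
    rng = random.Random(respondent_id)
    order = list(parties)
    rng.shuffle(order)
    return order

parties = ["Liberal", "Conservative", "NDP", "Bloc", "Green"]
```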
FIELDING WINDOW
Campaign polls should field tightly (2–5 days) to capture news-cycle shifts. Tracker polls (like NUUN's Public Opinion Tracker) field 6–10 days monthly for stability. Very short fields (<48 hours) risk weekend-effect bias and should be disclosed.
DISCLOSURE — WHAT AAPOR AND CRIC ACTUALLY REQUIRE
Under AAPOR Transparency Initiative and CRIC standards, public-release polls must disclose:
- Sponsor and fieldwork organization
- Sample frame and mode
- Dates of fielding
- Sample size and margin of error (or credibility interval)
- Question wording for any released result
- Weighting variables and methodology
- Response rate (or completion rate for online panels)
- Funding source
No disclosure, no credibility. When a poll release omits any of these, treat the result with appropriate skepticism.
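That checklist is mechanical enough to automate. A minimal sketch, with hypothetical field names standing in for however a release's metadata is actually keyed:

```python
# Required items per the AAPOR/CRIC checklist above; field names are
# hypothetical keys for a release's metadata record.
REQUIRED_DISCLOSURES = {
    "sponsor", "fieldwork_org", "sample_frame", "mode", "field_dates",
    "sample_size", "margin_of_error", "question_wording",
    "weighting_method", "response_rate", "funding_source",
}

def missing_disclosures(release):
    """Return the required disclosure items absent from a poll release."""
    return sorted(REQUIRED_DISCLOSURES - set(release))

# A release that names only five of the eleven items fails the check.
release = {"sponsor": "...", "mode": "online", "sample_size": 1200,
           "field_dates": "...", "margin_of_error": 2.8}
```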
WHAT GOES WRONG IN ELECTION POLLING
Four recurring failure modes:
Herding. Pollsters unconsciously trim outlier results toward the consensus. Detected via lower-than-expected cross-pollster variance late in a campaign.
Non-response bias. Supporters of one party may be less reachable than supporters of another. Corrected by post-stratification but not eliminated.
Late deciders. Last-week movement captured only by tight fielding and modelled allocation.
Turnout estimation. Most polling misses arise from over-estimating turnout among low-propensity groups.
The 2020 US and the 2015 UK polling misses were primarily turnout-estimation and non-response issues. The methods get better after each major miss.
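The herding test described above can be made concrete: compare the observed spread of late-campaign polls against the spread that sampling error alone would produce. The polls below are invented for illustration; a real check would also account for fielding dates and house effects.

```python
import math
import statistics

def herding_check(polls, p):
    """Compare cross-pollster spread to pure sampling variability.

    polls: list of (share_in_percent, sample_size) for one party.
    p:     that party's approximate support as a proportion.
    An observed SD far below the expected sampling SD is the
    herding signature: less disagreement than chance allows.
    """
    shares = [s for s, _ in polls]
    observed_sd = statistics.pstdev(shares)
    # expected SD of a single poll's share from sampling alone,
    # averaged over the polls' sample sizes (percentage points)
    expected_sd = statistics.mean(
        100 * math.sqrt(p * (1 - p) / n) for _, n in polls)
    return observed_sd, expected_sd

# Five hypothetical final-week reads clustered within half a point:
polls = [(34.0, 1000), (34.5, 900), (34.2, 1200), (34.4, 800), (34.1, 1000)]
obs, exp = herding_check(polls, p=0.34)
```

Here the observed spread is a fraction of what sampling error alone predicts, which is exactly the lower-than-expected cross-pollster variance that flags herding.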
NUUN'S POLLING STANDARDS
- AAPOR Transparency Initiative member
- CRIC code-of-conduct signatory
- ESOMAR 28 answers published
- Full methodology note with every public release
- Partisan-campaign work declined while the Public Opinion Tracker is active
- Conflict-of-interest register published
FAQ
Q: Why do pollsters sometimes disagree?
A: Methodological choices — sample frame, screen, weighting, leaner treatment — account for most of the between-pollster variance. A healthy polling environment has methodological diversity.
Q: How large a margin of error is acceptable?
A: n=800–1,500 for national reads (±2.5 to ±3.5pp) is standard. Smaller samples are defensible for directional reads; quoting a margin of error ≤±1pp should trigger skepticism unless the sample is n=10,000+.
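The arithmetic behind those figures is the standard 95% interval for a proportion, assuming simple random sampling at p = 0.5 (real designs widen this with a design effect):

```python
import math

def moe_95(n, p=0.5):
    """95% margin of error in percentage points for a simple random
    sample of size n; weighting and design effects widen this."""
    return 100 * 1.96 * math.sqrt(p * (1 - p) / n)

# n=800 gives roughly +/-3.5pp, n=1,500 roughly +/-2.5pp, and a
# claimed +/-1pp requires a sample around n=10,000.
```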
Q: Are online polls less accurate than phone?
A: Modern probability-based online polls are comparable to or better than late-era phone polling on electoral accuracy. The comparison depends on sample frame and weighting discipline, not mode alone.
Q: Why do polls sometimes miss election results?
A: Most misses come from turnout estimation or late movement not captured in the final field window. The 2020 US polling miss was primarily a non-response bias among Trump-leaning voters.
Q: Can AI models replace traditional polling?
A: No. Predictive models can usefully forecast election outcomes given data; they cannot replace the underlying data (polls, voter files, economic fundamentals). Claims to the contrary are sales pitches.
Q: What's the minimum disclosure I should demand before trusting a poll?
A: Sponsor, sample frame, mode, dates, size, margin of error, weighting variables, and question wording. If any is missing, treat the result as unverified.
Q: How does NUUN handle campaign work vs public tracker?
A: We decline partisan vote-intent work while the Public Opinion Tracker is active, to avoid conflict of interest. We accept non-partisan public-affairs and ballot-measure work under separate methodology notes.
Q: What's the 2026 Canadian polling benchmark?
A: Expect the pre-election polling average to land within 2–3 percentage points of each major party's final vote share. Individual pollsters will vary; aggregators (338Canada, CBC Poll Tracker) smooth the noise.
RELATED READING
- Public opinion tracker Canada
- Top public affairs firms Canada
- Best market research firms Canada
- Public opinion polling (glossary)