Polling data in CBC News programming

Review from the Office of the Ombudsman | English Services


The use of polling data in CBC News programming

You wrote in June, 2009, to complain about the use of polling data in CBC News programming. You said you felt the results were inaccurate because the sample of Canadians surveyed was not randomly chosen. There are two reasons the sample was not random, you wrote: First, polling companies draw their “representative” sample from telephone lists that do not include mobile phones, even in the face of increasingly widespread and exclusive use of mobile phones; and, second, they draw their sample from a pool of internet users, which may be representative of internet users, but not of Canadians as a whole.

Esther Enkin, Executive Editor, CBC News, responded: “Your concerns about the methodology used in traditional telephone polling and the more recent internet polling, I understand, are shared by many in the industry; however, you have assumed that one or both of those techniques were used in this survey when they were not.

“EKOS says none of the interviews conducted for the surveys reported by the CBC was conducted on the internet. The company says the telephone survey methodology used in these surveys included both traditional landlines and mobile telephones. It says that, in an effort to reduce the landline-only bias, it created a dual landline/mobile-phone sampling frame. As a result, it says, it was able to reach those with both a landline and a mobile phone, as well as mobile-phone-only households and landline-only households. It says the combination produced “a near perfect” unweighted distribution on age group and gender.”
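EKOS's actual procedure is proprietary and not described in detail here, but the general technique behind a claim like this is post-stratification weighting: each group in the sample is weighted by the ratio of its share of the population to its share of the sample, so that over- or under-represented groups are brought back into line. A minimal sketch, using hypothetical population and sample figures chosen purely for illustration:

```python
# Generic post-stratification weighting sketch -- NOT EKOS's actual
# procedure. Population shares and sample counts below are hypothetical.

# Assumed population shares by phone-access group (census-style data)
population = {"landline_only": 0.15, "dual": 0.70, "mobile_only": 0.15}

# Assumed raw respondent counts from a dual-frame survey
sample = {"landline_only": 220, "dual": 680, "mobile_only": 100}

total = sum(sample.values())

# Weight = population share / sample share for each group
weights = {g: population[g] / (sample[g] / total) for g in sample}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
# Under-sampled groups (here, mobile-only) get weights above 1.0;
# over-sampled groups get weights below 1.0.
```

The closer the unweighted sample already matches the population, the closer all the weights sit to 1.0, which is what a "near perfect unweighted distribution" would mean in practice.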

You rejected this explanation, added some further comment, and asked for a review.

Before dealing with the specifics of your complaint, it may be useful to review part of the relevant CBC policies on the broadcasting of public opinion data.

Under the heading of Production Standards there is a section entitled Broadcasting Results of Surveys, with specific reference to “Non-CBC Surveys.” That section says:


Prior to broadcasting results of any non-CBC survey, the CBC journalists concerned are expected to: obtain all necessary information on the methods used, as well as the main results of the survey; compare the interpretation of the results by the authors of the study with the opinions of other experts in the field; seek the advice of CBC Research as to the validity of the methods used and the interpretation of the results.


In broadcasting the results of surveys, CBC journalists must: give prominence to the actual data over interpretations of that data; report the name of the person or organization conducting the survey and, where relevant, their political party affiliation, the name of the sponsor, the population surveyed, the size of the sample, the period during which the survey was conducted, the response rate and the margin of error; avoid leaving any impression that public opinion surveys are predictive.

The last clause is particularly important: “…avoid leaving any impression that public opinion surveys are predictive.”

I have to confess that I have been involved to varying degrees with public opinion—mainly political—polling for a long time. One of the most significant errors that can be committed is violating that clause. Any reasonable experience in journalism leads one to the conclusion that political polling is not predictive, but illustrative: within a margin of error, the data may show how segments of the population were leaning on the date the poll was taken.

One needs only to have covered a couple of federal elections to know that, depending on the timing of the survey, the predictive power of the data is minimal. The most recent example comes from the United States—the mayoral race in New York City, where the polls were substantially off-target in showing the margin of victory for Mr. Bloomberg. In the Canadian federal election that first brought Mr. Harper to power, it was “clear” in polling through most of December, 2005 that Mr. Martin would win at least a minority. Events overtook that.

So, your concern about using polls as predictors of political behaviour (or any other kind of behaviour, for that matter) is well placed.

However, you seem to range more widely, casting doubt on the whole exercise of this kind of polling. But after consulting a number of independent academic experts, I find general agreement that the method outlined by EKOS would generally produce a representative sample.

That being said, it should be noted that the crucial elements are what is being asked of that sample, and how it is being asked. It has long been known that the order of questions, their framing, and the content of the rest of the survey can all influence the outcome.

It then becomes vitally important that the journalistic organization take appropriate steps to ensure that the surveying organization has taken those caveats into consideration. When reporting on surveys over which the organization has no control, the results should be flagged appropriately and placed in context for the viewer, listener or reader.

Skillfully conducted polls on the eve of an election might have some predictive value, but it has been my experience, and the opinion of the experts I spoke with, that this value decreases sharply the further we are from voting day.

All of this presents challenges for the journalist. Polls are part of the national discussion and cannot easily be ignored. Political parties use them to plot strategy, as do lobbyists, trade organizations and others trying to become part of the national debate. Journalists have an obligation to report such information.

However, such reports must always be put in context—not just the now-routine disclaimers (plus/minus 2 or 3 percentage points, 19 times out of 20, etc.), but real information on who paid for the poll and how it fits with other information that the journalist may have available. What effect would those percentage points really have? What do we know about non-participation? If smaller segments are being cited (such as provincial data) what are the sample sizes for those “slices”?

The New York story seems illustrative: the “accepted” narrative was that Mayor Bloomberg was going to win, and win by a wide margin against his relatively unknown opponent. Downplayed in that narrative was the fact that only 50 percent of the public seemed happy with Mr. Bloomberg's decision to change the law that imposed term limits and run again. By voting day, sufficient numbers of people either stayed away from the polls or changed their minds to bring the margin down from a reported 18 percentage points to 5.

In his own review of the issue, my colleague Clark Hoyt, Public Editor of the New York Times, had this to say:

“… a newspaper both reports an impression and helps cement it when it says routinely, without elaboration, that Bloomberg has a ‘commanding' lead, a ‘healthy' lead or a ‘comfortable' one, or says he is ahead by a ‘wide margin' in unnamed polls.”

He also mentioned the reporting on the two most prominent university polling organizations in the New York area:

“In their press releases, Marist and Quinnipiac emphasized the gap between the candidates, which was as high as 18 points in Quinnipiac's polls. That gap got headlines on the Times Web site. The printed paper did not carry full articles on the polls and did not name them, referring only to Bloomberg's lead or Thompson's poor showing in ‘polls.' Had The Times put less emphasis on the spread and more on the fact that Bloomberg was stuck at roughly 50 percent, readers would have been better served.”

In my long years of covering Canadian politics, I have seen the same story unfold time and again: an “accepted” narrative forming, only to be challenged by reality. And the narrative was usually based on polling. I do not argue that the polls were wrong; only that the reporting tended to skew the context and the possibilities. I recall an anecdote told by a staff member of former Ontario Premier Bill Davis, well known for patience and foresight. This staffer related that when Mr. Davis decided that some action had to be taken on an issue that the public appeared to be largely against, he would raise the issue periodically, framing it to his ends; a debate would ensue with him still on the negative side. But, he would do another poll and watch the results move slightly in his direction. After several reprises, he would introduce the appropriate legislation when the numbers began to line up in his favour.

This should not be news to anyone covering Canadian politics today, but it is still not widely known to the average voter. However, it emphasizes that polls are illustrative, not predictive. It might be useful to view such polls as akin to the paintings of Monet: they give us a fairly accurate impression of the world, but one must always bear in mind that the edges are fuzzy.


Receiving and publishing public opinion polls based on the best practices of the industry is not a violation of CBC's Journalistic Standards and Practices. However, extreme care should be taken in presenting the information, always bearing in mind that polls are not predictive and can be, at times, wildly inaccurate for a number of reasons, including methodological error and untruthful responses.

Note: I would like to thank Nancy Reid, Professor of Statistics at the University of Toronto, Nelson Weisman, Professor of Politics at the University of Toronto and Christopher Waddell, Director of the School of Journalism and Communications at Carleton University, for the input they shared with me. All of the conclusions are my own and would not necessarily be shared by any of them.

Vince Carlin
CBC Ombudsman