‘Science Is Real’ Category Archive


The 2016 campaign was a study in media effects


I use this textbook to teach media effects, which I wouldn’t do if the contention that media didn’t affect public opinion were correct.

As different as the 2016 election cycle was from the norm, in many ways the actions and attitudes of the national political press were entirely as usual. This was especially true when it came to how reporters saw their influence on the public when they were critiqued over their coverage of Hillary Clinton and Donald Trump. A typical response to the idea that mainstream news coverage had helped deliver the election to Trump, either by helping him or hurting Clinton, argued that the press had provided honest coverage of Trump’s negatives, and that the negative things they reported about Clinton were true. In other words, coverage didn’t influence voters’ decisions; reality did. On this view, for the press to have influenced public opinion, it would have to have been unfair in its reporting, in accuracy or in completeness.

That working journalists so routinely fail to understand elementary media effects is a failure of curiosity and reflection for them, and a failure of communication and engagement for those of us in journalism scholarship. In this case, there are at least three major and fairly basic types of effects that almost certainly occurred, but that most journalists are unable or unwilling to see. All of them stem from the fact that reporting a piece of information accurately is neither the beginning nor the end of representing truth through the news.

Agenda setting

Some argue that agenda setting is the only real news effect that we’ve ever discovered, and everything else is just a variant on it. I disagree with that, but its various “levels” run throughout the kinds of effects we can see in this campaign. The basic idea behind agenda setting is that the news doesn’t tell people what to think, but does tell people what to think about. That is, based on what the press chooses to cover, and the extent of coverage it devotes to a range of stories, the public develops a set of beliefs about how different stories compare in importance. And stories that don’t get covered at all, of course, are stories the public never learns about.

That key relationship with knowledge tells us a lot about how Gallup’s word cloud came to be. Clinton’s email server dominated the press agenda, as every study of campaign coverage has confirmed (the unrelated email hacking stories most likely contributed to this as well). Negative Trump topics, such as the seemingly criminal Trump Foundation, were covered, but much less so. In the development of the press agenda, they were only minor pieces, sometimes pursued by only one outlet or even one reporter.

Perhaps more importantly, by running his rallies live so frequently, TV news allowed Trump to directly set their agenda. Print and digital outlets joined in by picking up elements from those rallies, as well as by obsessively covering his Twitter feed, which they continue to do now. As many have noted, Trump has essentially become a media outlet himself, and as such, theory related to intermedia agenda-setting is highly relevant. Needless to say, Clinton’s events and campaign communications were not granted the same access to the press agenda, which may be why so many people now wonder why she “never talked about economics.”

Framing

Mere prominence of one story over another was not the only thing going on. How those stories are presented contextually, even if the facts are 100% true, also has an impact on what people take away from them. In 2016, it’s clear a scandal frame was used to present many stories about Hillary Clinton; you can see this in the way the New York Times and others so often reported on “clouds” being raised or “shadows” being cast around her. For Clinton, the scandal frame was always operative, and it interacted with the game frame to produce stories that were often first about how things looked, and second about how the way they looked might affect the campaign.

We can hypothesize that these frames were picked up by the public by looking at candidate honesty judgments, which mostly found the public thinking Trump was the more honest candidate, despite direct assessments showing Clinton to be one of the most honest politicians, and Trump the very least. On top of that, Clinton was transparent about her and her foundation’s finances, while Trump refused to release anything he wasn’t legally required to. Given that, why would the public have things so backwards? Framing provides the simplest answer. It also provides the foundation on which the single most important priming effect of all time was built.

Priming

I can’t be sure what plans various scholars have for studying the effects described above, but I’d be astonished if there weren’t a dozen or more public opinion scholars ready to pounce on the effect of priming in late vote decisions. Back in 1990, Jon Krosnick and Donald Kinder published a seminal article on the role of priming in assessing the president. This article used data from the 1986 National Election Study, a panel survey that included interviews of the same people before and after the revelation of the Iran-Contra affair. What they found was that overall evaluations of Ronald Reagan shifted toward evaluations of him specifically on foreign policy grounds once the scandal emerged and primed people to think of it when they thought of him.
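The mechanics of that test are easy to sketch. What follows is a toy illustration of the design, not Krosnick and Kinder’s actual specification; the data file and variable names are hypothetical:

    # A toy priming test in the Krosnick-Kinder style: regress overall
    # evaluations of the president on domain-specific evaluations before and
    # after the scandal breaks. Priming shows up as growth in the
    # foreign-policy coefficient. File and variable names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("nes1986_panel.csv")  # hypothetical panel extract

    for wave in ("pre", "post"):
        sub = panel[panel["wave"] == wave]
        fit = smf.ols("overall_eval ~ foreign_eval + domestic_eval + "
                      "economic_eval", data=sub).fit()
        print(wave, round(fit.params["foreign_eval"], 2))

    # If the post-revelation coefficient on foreign_eval is clearly larger,
    # the scandal has primed foreign affairs as a criterion for judging
    # the president.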

James Comey’s October 28 letter to Congress serves the same role in the 2016 election as the Iran-Contra news did for the 1986 NES data. An analysis of panel data recently conducted for 538 follows the same structure, and the pre-letter and post-letter data gathered from the same respondents show a four-point swing to Trump as the dormant email story was made newly salient for late-deciding or flippable voters. Without additional data we can’t say for sure this was due to Comey’s letter, but there is no plausible competing explanation.

Some members of the press might point out that this is not really a media effect — Comey is a newsworthy individual who did a newsworthy thing in the context of the election, and they simply reported it. But this is where all three of these effects come together. The national political press responded to this event with zone-flooding coverage; the New York Times alone ran seven front-page stories on it in the first three days of coverage, making it highly salient, telling the public it was an especially important story, and presenting it in the scandal frame. These choices were not made for any Trump stories, such as the illegal donation his foundation made to the attorney general of Florida before she dropped her investigation of Trump University, or indeed, the $25 million settlement of the broader Trump University case.

None of this is “fake news.” And yet, we have a journalism that is working at odds with what we think journalism ought to be for. Matthew Yglesias has this right when he says that the overriding issue of what looks to be the most consequential election since 1932 was email server management. That’s not something that just happens. But really, none of the things that happen in public opinion just happen. The press and the decisions that journalists make every day — and they do make decisions every day and in every story — affect what people know, think, and believe. Their desire to stand apart and abdicate responsibility for outcomes doesn’t change that.

Filed: Science Is Real || 9:52, December 25 || No Comments »


Examining social identity influence: A meandering series of thought exercises

At the annual MAPOR conference a few weeks ago, we held the third in a series of panels built around Doug Hindman’s “belief gap” hypothesis. Following the panels in 2011 and 2012 I posted summaries of where my thoughts had gone based on what we’d all presented and talked about in these fruitful sessions. I had planned to do the same this year, but the thoughts have gotten too big. Instead, I have some thoughts on where the belief gap model has gone and how it can integrate with other similar models of social identity influence, to be followed by several posts exploring the deeper questions that are raised by what current research has done with this model.

Filed: Science Is Real || 8:00, December 22 || 1 Comment »


It’s so hard to say goodbye to yesterday

Nothing much to add to this, but half of Republicans believe that ACORN, an organization that no longer exists, stole the election for Barack Obama:

49% of GOP voters nationally say they think that ACORN stole the election for President Obama. We found that 52% of Republicans thought that ACORN stole the 2008 election for Obama, so this is a modest decline, but perhaps smaller than might have been expected given that ACORN doesn’t exist anymore.

The crosstabs show a slightly larger percentage of “very conservative” respondents believing ACORN stole the election (53%), and I wonder if this is the sort of question where ideology trumps partisanship, even though it’s typically the other way around. Republican leaders are going to have a politically tough time talking about ACORN specifically for obvious reasons (e.g., it doesn’t exist anymore), but the ideological leaders of very conservative voters may not be restrained by such “reality-based” concerns.

There’s also some very interesting fake knowledge stuff from the same poll:

The 39% of Americans with an opinion about Bowles/Simpson is only slightly higher than the 25% with one about Panetta/Burns, a mythical Clinton Chief of Staff/former western Republican Senator combo we conceived of to test how many people would say they had an opinion even about something that doesn’t exist.

Bowles/Simpson does have bipartisan support from the small swath of Americans with an opinion about it. Republicans support it 26/18, Democrats favor it 21/14, and independents are for it by a 24/18 margin. Panetta/Burns doesn’t fare as well with 8% support and 17% opposition.

I’ve been curious about the assertion of fake knowledge for quite a while (I conducted a study way back when that found people claiming more knowledge about a fake race riot story than about the real Canadian elections), and I wonder if there isn’t a connection there to the belief gap mechanism.

Filed: Science Is Real || 9:36, December 13 || No Comments »


Books I will never write

I’m not going to write these books, but I offer them up as a public service to others in need of high-concept ideas to pitch to publishers.

 

OK, back to finishing ICA papers.

Filed: Science Is Real || 1:23, November 2 || 1 Comment »


The first slide of my MAPOR presentation

I’m following up last year’s belief gap study at MAPOR this year, and analysis is still in progress, but I’ve found the thing that will open my presentation in Chicago — it’s these polls from Public Policy Polling in Ohio and North Carolina:

Who do you think deserves more credit for the killing of Osama bin Laden: Barack Obama or Mitt Romney?

OHIO             All   Democrats   Independents   Republicans
Barack Obama     63%      86%           64%           38%
Mitt Romney       6%       1%            1%           15%
Not Sure         31%      13%           36%           47%

NORTH CAROLINA   All   Democrats   Independents   Republicans
Barack Obama     63%      87%           61%           29%
Mitt Romney       6%       1%            3%           15%
Not Sure         31%      11%           36%           56%

Got that? Nearly 1 in 6 Republicans in these swing states say Mitt Romney deserves more credit than Barack Obama does for Osama bin Laden’s death. Unlike, say, the auto bailout, which Romney occasionally claims he ought to be credited for, neither Romney nor anyone else has ever said he deserves credit for this. And why would he? By any standard it’s a belief that’s totally detached from reality. And yet, there it is.

Filed: Science Is Real || 10:12, September 11 || 1 Comment »


Publication bias, pharmaceutical efficacy and the peer-review process

Reading this MetaFilter discussion on publication bias in favor of positive findings for studies of antipsychotics and antidepressants has me thinking about how research gets reviewed and published. The current model involves a researcher (or a team of researchers) having an idea (presumably based on theory), developing it into a set of hypotheses, designing a study to test them, securing funding to actually do the study, writing up the results, submitting the paper to a journal, getting peer-reviewed, and ultimately getting either rejected or published. Publication bias occurs at the peer-review stage — studies that find no relationships between the things they examine are less likely to be published than those that do. This is particularly problematic for pharmaceutical research, because it basically means that the body of pharmaceutical literature describes drugs as more effective than they actually are, which is likely to bias both the clinical use of those drugs and subsequent research.

There are reasons why this model persists, but it doesn’t make a lot of sense in the era of unlimited online page-space. When we could only publish a certain number of studies per issue, it made sense to filter them based on not only theoretical foundation and study design quality, but also on the impact of their contributions to the literature. Most studies with null findings aren’t going to make much of a splash, of course. But now, especially in medical research, why should we restrict publication on these criteria?

A better approach could be to submit proposals to the peer-review process, with approval carrying a commitment to publish. Right now, grants go through this process, but they’re not tied to publication — journal referees see finished products, with results. But if we really believe in building on theory and hypothesis testing, a literature review, hypothesis and study design should be enough to know if the research is worthwhile. A well-designed study that has null findings may not be a big attention-getter, but it is something that should be available to the rest of the research community, especially at the aggregate level. That is, if 40 null studies don’t get published, while 10 studies with positive findings are published, researchers ought to have a way to find out that null findings actually dominate this area of research.
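To see how badly the current filter can distort the aggregate picture, consider a minimal simulation, assuming a drug with zero true effect and a journal that accepts only positive, significant results:

    # A minimal publication-bias simulation: a drug with zero true effect,
    # 1,000 small two-arm trials, and a journal filter that accepts only
    # positive results with p < .05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_per_arm, n_studies = 30, 1000
    published = []

    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(0.0, 1.0, n_per_arm)  # true effect is zero
        t, p = stats.ttest_ind(treated, control)
        if p < 0.05 and t > 0:  # only positive, significant results survive
            published.append(treated.mean() - control.mean())

    print(f"published {len(published)} of {n_studies} studies")
    print(f"mean effect in the published record: {np.mean(published):.2f}")
    # Every published estimate overstates efficacy for a drug that does
    # nothing, and the roughly 97% of null results never appear at all.

Under a propose-first model, all 1,000 studies would enter the literature, and the aggregate null would be impossible to miss.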

Filed: Science Is Real || 17:55, April 3 || 3 Comments »


Sports, perception and sample-size bias

New York Rangers goalie Henrik Lundqvist is the Tim Tebow of hockey. Actually, he’s even better, having won nine of his last ten. Photo by Robert Kowal.

Apologies in advance, but if you don’t follow sports this post may not make much sense.

This afternoon, the Denver Broncos picked up their sixth win in seven games this season with Tim Tebow starting at quarterback. If you haven’t heard, Tebow has what might be called non-traditional passing mechanics, but as many commentators have noted, he “just wins.” There’s a lot that could be said, and already has been said, about the strange way in which quarterbacks are credited with team success in football, but that’s not really the point of this post. Rather, I want to point out how odd it is that seven games — even seven games that include six wins — can be considered so meaningful in football.

This stretch has turned Denver’s season around, to be sure. They lost four of their first five, but now find themselves tied for the lead in their division. But this is only possible because of the NFL’s relatively tiny schedule. Consider that, for a hockey goaltender — probably the only every-game player in North American major team sports with as much impact as a quarterback — six wins in seven games is barely noticeable; it’s less than a tenth of the season. For a baseball player (where there isn’t such a great analogue, since starting pitchers only go every fifth game), six wins in seven games is a good week. You could win your league’s player of the week award in May and be sent down to the minors in June. For Tebow, six wins in seven games is two months and half the season, and it’s especially significant when one of your divisional rivals (San Diego) is imploding at the same time.

But this is all perception; if we’re trying to think about what this seven-game sample means in terms of predicting the larger population of games that is a player’s career, seven tells us nothing. It doesn’t matter that an NFL season is only 16 games long; seven games don’t provide enough observations to reduce the error to an acceptable level. If we were to look at a proportionally similar slice of a baseball season — 70 games — the share of the season would be the same, but the sampling error would shrink because we’d be examining ten times as many actual cases.
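Putting numbers on this takes only the standard error of a proportion and a little binomial arithmetic; a quick sketch:

    # How noisy is a 7-game sample compared with a 70-game one?
    import math
    from math import comb

    p = 0.5  # assume a genuinely average (.500) team
    for n in (7, 70):
        se = math.sqrt(p * (1 - p) / n)
        print(f"n={n:>2}: SE of win pct = {se:.3f} "
              f"(95% CI about ±{1.96 * se:.3f})")

    # Chance that a true .500 team wins 6 or more of any given 7 games:
    prob = sum(comb(7, k) * p**k * (1 - p)**(7 - k) for k in (6, 7))
    print(f"P(6+ wins in 7) for a .500 team = {prob:.4f}")  # 8/128 = 0.0625

A probability of 1 in 16 means that in a 32-team league, a couple of perfectly average teams should produce a Tebow-like stretch in any given seven-game window on chance alone.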

So where does this perception error come from? Is it just the kind of rank innumeracy we see in many contexts? Maybe, but I suspect there’s an important media effect at work as well. Sports media — both reporters and game broadcasters — and the sports culture they’re embedded in frequently express hostility toward data-driven strategy. Narratives and tradition rule in sports, and when data contradict them it’s because data can’t possibly figure out the relevant “intangibles.” Noting that a seven-game span isn’t really an illuminating sample gets in the way of a lot of narrative structure.

Filed: Science Is Real || 21:49, December 4 || 1 Comment »


Who cues?

At the MAPOR conference last weekend, I presented a study on how partisan media strengthen belief gaps. The belief gap idea, first identified by Doug Hindman a couple of years ago, is an extension of the knowledge gap, a theory with over 40 years of work behind it. Whereas the knowledge gap hypothesis suggests that education predicts differential gains of knowledge about political issues — everyone learns, but high-education people learn more, creating a widening gap — the belief gap hypothesis suggests that ideology is a better predictor of such gaps than education. It’s called the “belief” gap because the conceptualization of beliefs better fits contexts in which facts and knowledge are politically contested.

My paper (with students Delwar Hossain and Ben Lyons) took the initial findings and expanded them in three ways. First, we examined partisanship in addition to ideology and found that partisanship is consistently a better predictor of beliefs. We attribute this to the fact that both are essentially markers of group affiliation, but partisanship is a clearer one for both researchers and survey respondents. Ideology has long been conceptualized as a coherent belief system that drives opinion-formation, but most research suggests few people actually have this kind of formal ideology. Instead, we use cues from elites to guide our opinions, attitudes and beliefs.

Second, we examined the role of partisan traditional and social media in the belief gap process. Despite concern that social media are politically polarizing and insular, we found that partisan traditional media are far stronger drivers of partisan beliefs. There is a structural explanation for this — cable TV and radio have far larger audiences than do blogs and pundits’ social media outlets — as well as a psychological one — we’re exposed to more elite opinion through these outlets, whether those elites are elected officials or opinion-leading commentators.

Finally, we examined belief gaps in five issues — two science-related issues that had previously been studied by Hindman (climate change and abstinence-only sex education), two evidence-free rumors about President Obama (he’s a Muslim, he was born outside the U.S.) and one factual economic issue (whether most Americans’ taxes have gone up during the Obama Administration). Each of these issues has a correct answer by consensus of relevant authorities, but each is also highly politicized. We found belief gaps for each, with largely similar patterns of partisan media influence.
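Concretely, the kind of specification behind a result like this looks roughly as follows. This is a hedged sketch, not our actual code or measures; the dataset and variable names are hypothetical:

    # A sketch of a belief gap specification: the partisanship main effect is
    # the gap itself, and a significant partisanship-by-media interaction is
    # the "partisan media strengthen the gap" result. All names hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("belief_gap_survey.csv")  # hypothetical dataset
    fit = smf.ols("climate_belief ~ republican * partisan_media_use"
                  " + education + age + income", data=df).fit()
    print(fit.params["republican:partisan_media_use"])  # the widening term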

I lay all this out because thinking about our findings in the context of the other presentations in the belief gap panel — from Hindman; Ken Blake and Misa Culley; and Rob Daves, Allen White and Stephen Everett — led me to think a lot about the broader, more abstract facets of this idea.1 To my mind, there are two big questions to be answered.

First, we need to think about what things a person can have “beliefs” about. During the panel, Rob Daves talked about “verifiable” issues and referenced the work of Cecilie Gaziano in this area, but I think we can approach this in cognitive terms. Given that the belief gap idea grew out of the knowledge gap, I suggest that we look towards the cognitive structure of knowledge to understand what we mean by “belief.” Presumably we are thinking of issues about which the believer can feel that their beliefs are “correct,” even if all evidence and authoritative consensus suggests otherwise, even if there is no consensus to draw on, and even if the answer exists but is unknowable. We may further want to separate issues that are retrospective (about which verification may already be possible), prospective (about which verification can’t be done yet) and ongoing (about which verification may be ephemeral or in constant dispute). These orthogonal issue dimensions would co-exist with the dimension already in use in existing research, politicization.2 The typology might look something like this — consistent with a seat-of-the-pants typology, the examples are the results of just some quick thinking on this and may not fit all that well:

Politicized
  Consensus: Obama born in U.S. (retrospective); Global temperatures will rise (prospective); Climate change (ongoing)
  Disputed: Roe v. Wade lowered crime rate (retrospective); Economy will improve next year (prospective); Gun ownership and safety (ongoing)
  Unknowable: 2000 election stolen (retrospective)

Non-politicized
  Consensus: Lincoln killed by Booth (retrospective); Vaccines and autism (ongoing)
  Disputed: JFK killed by Oswald (retrospective)
  Unknowable: Jesus will return someday (prospective); Alien life exists (ongoing)

 
If we believe that the process observed in the belief gap phenomenon is one of elite cuing by like-minded political leaders (consistent with the work of, e.g., John Zaller), the next question is who does the cuing across the range of this issue typology. For politicized issues, we’ve got a pretty strong hypothesis that political elites provide the most relevant cues, but who those elites are might vary by issue. Particularly for issues that are politicized along evangelical/non-evangelical religious lines, we might expect to see different people and sources playing important roles in mass-opinion formation. Maybe economic, defense and science issues all have different arrays of influential elites; still, we’re probably talking about a relatively narrow band of elites that cue beliefs across a lot of political issues.

But what about beliefs for which elite political cues are not relevant? With the possible exception of Michele Bachmann, nobody’s politicizing childhood vaccinations. So who cues beliefs about vaccinations? Is it scientific consensus (as reported by news media)? Jenny McCarthy? Oprah? If we can explain how non-political beliefs are cued, we may go a long way toward identifying the underlying cognitive and social psychological processes of political belief formation.

1. I should also acknowledge the suggestions of several members of the SIUC political science department (particularly Tobin Grant and Scott McClurg) during a preliminary presentation of this work, which I subsequently incorporated into the final product, and which have informed my ongoing thinking about this topic.

2. There’s another wrinkle here, which is the concept of issue domains and the cognitive work that goes into connecting our attitudes on related issues. For example, in the data used in our paper above, beliefs that tax cuts encourage job creation and that federal deficits discourage job creation were strongly correlated, even though tax cuts help to increase deficits. Additionally, our respondents anticipate strong inflation over the next year, even though we’ve been in a period of historically low inflation during the global recession. What it looks like is that, instead of considering each issue on its own, there’s a relationship between all these economic issues and general economic attitudes — that is, the economy is bad, and inflation is bad, so we’re in an inflationary period. Job creation is good and tax cuts are good, so they must go together. Probably relevant, but also probably not worth getting into until the first level of questions has been worked out.

Filed: Science Is Real || 20:11, November 21 || 2 Comments »


Gordian knots in research methods

Whenever I have a project in mind that involves Facebook, there’s a methodological stumbling block that almost always comes up: Most of what’s interesting isn’t accessible unless you are friends with the people you’re trying to study. So maybe you rework the research questions, or you come up with a way to address them using survey data, etc.

But now I see that I was overlooking the obvious solution: Just create fake profiles to friend people with, as a group of four researchers at the University of British Columbia did. For them, it was entirely necessary, as they were studying the vulnerability of online social networks to malicious bots, so they basically created their own benign bots and observed what they accomplished. The very first phase resulted in about a 20% friend-acceptance rate, so if you’ve got a good sampling method, this is looking decent enough as a way of getting real, live Facebook content.

Filed: Science Is Real || 23:10, November 6 || 2 Comments »


Civility out of context

I fielded a survey recently, and this out-of-context result really intrigues me. For the non-quantitative people, these are regression results that show how each variable predicts the outcome variable with all the others controlled. The outcome here is agreement with the statement, “When most Americans debate issues facing the country, they are more civil today compared to ten years ago.” I’m looking at blog use, ideology and partisanship in this study, and here are the predictive results (the ones with the footnote symbols are statistically significant):

Predictor                   Beta
Gender (Female)              .03
Age                         -.05
Race (White)                -.19***
Education                   -.11*
Income                       .01
Partisanship (Republican)    .10†
Ideology (Conservative)     -.04
Conservative Blog Use        .23**
Liberal Blog Use             .03

*** p < .001, ** p < .01, * p < .05, † p < .1
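For the quantitative people, a minimal sketch of how a table of betas like this can be produced: standardize everything, then fit ordinary least squares. The file and column names here are hypothetical, not the actual survey’s:

    # Standardized OLS ("betas"): z-score the outcome and the predictors,
    # then fit ordinary least squares. File and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    def z(s):
        return (s - s.mean()) / s.std()

    df = pd.read_csv("civility_survey.csv")  # hypothetical data file
    cols = ["female", "age", "white", "education", "income", "republican",
            "conservative", "cons_blog_use", "lib_blog_use"]

    X = sm.add_constant(df[cols].apply(z))
    y = z(df["civility_today"])
    print(sm.OLS(y, X).fit().summary())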

 

I’m pretty sure that the partisanship and conservative blog use results are manifestations of those individuals remembering 2001 as a time when everybody was being so mean to George W. Bush all the time. It’s likely also the reverse — Democrats seeing the current environment as severely uncivil — but the distribution of the blog use data suggests to me it’s more the former than the latter. More on this at MAPOR next month.

Filed: Science Is Real || 19:53, October 1 || No Comments »