Posts Tagged ‘research’


Moving forward: RQ3. What is journalism for?

This is admittedly not a great research question, but it is fundamentally important for grounding research on the role of journalism in a mature democracy. It implies more tractable research questions — for example, what journalists think journalism is for, or how the public understands the role of journalism — but it is conceptually a bigger question, one that the major institutions of the press have proven ideologically unable to grapple with.

This matters because an unmoored press is as likely to damage a democratic society as to serve it, and a press pointed at the wrong goals can be just as bad. The press, by which I mean the national political press, is a political institution, the only extra-governmental institution mentioned in the Constitution. Yet it refuses to acknowledge that status. Michael Wolff, for example, who declared that “stenographer is what you’re supposed to be” after receiving criticism for a feckless interview with Steve Bannon, recently mocked the idea that journalism is necessarily political as a “millennial view.” Instead, the press defines itself by a reflexive ideology: positioning itself between two conflicting poles as much as possible. In an electoral campaign, that bipolar structure is essentially a given.

Placing itself conspicuously in the middle makes things very easy, and is why we have so much “he said/she said” journalism. But it also detaches the press from the idea that it plays its own unique role in our democracy. As such, journalists become predisposed to disclaim any actual impact on things that tilt toward one pole or the other. Thus they can claim at the same time that they successfully informed the public about, for example, Donald Trump’s conflicts of interest, and that the choices they made didn’t affect the outcome of the election — or in other words: “Media reported all the terrible things that make us angry about him. Voters didn’t care.”

The enormous newshole modern news organizations have to fill makes it trivial to say you’ve “reported” or “covered” something. What we know from the aggregate is that the Clinton email server story received three times as much network TV coverage as all policy issues combined during 2016; all of that reporting can be factually correct and still present an incorrect picture to the public. To wit, the incredible word cloud based on Gallup’s summer surveys, mapping what respondents had recently heard about the candidates, may be the Rosetta Stone of the 2016 election (n.b., “email” defined Clinton in late 2015 as well).

The need for a bipolar structure also manifests in other important ways. Jay Rosen has written recently about what he calls “accusation-based” vs. “evidence-based” reporting. Though he was reacting to coverage of Trump’s false claim that millions of illegal votes were cast for Clinton, that story fit cleanly into the press’s campaign-season mindset: a partisan makes an accusation, the press reports it, counter-partisans respond, and the press reports that. It is fundamentally detached — to use another Rosen term, a “view from nowhere.” The problem with this is self-evident: if you know your claims will be repeated credulously by the press, you have no incentive not to lie like crazy. It’s especially insidious in a context like this one, in which democratic principles come to be seen through the zero-sum lens of partisan politics.

But strangely, we can also see the press react to a unipolar environment with more willingness to be adversarial. In mid-November, we saw a mini-boom of stories addressing Trump’s unprecedented conflicts of interest and, indeed, nascent pre-inauguration corruption. The facts behind these stories were known during the campaign and could just as easily have been front-paged then; they weren’t, in part because Clinton didn’t make specific campaign issues of them.

In all these situations, the press takes an effectively nihilistic approach, as reflected quite clearly in the centrist consensus that Jonathan Chait critiques in his David Brooks-focused assessment of the mainstream, establishment opinion press. But the unmoored desire of Brooks and his peers to put themselves in the center, regardless of where the center sits, isn’t a symptom of our times — see The Washington Post’s Paul Taylor, writing in 1990:

Sometimes I worry that my squeamishness about making sharp judgments, pro or con, makes me unfit for the slam-bang world of daily journalism. Other times I conclude that it makes me ideally suited for newspapering — certainly for the rigors and conventions of modern ‘objective’ journalism. For I can dispose of my dilemmas by writing stories straight down the middle. I can search for the halfway point between the best and the worst that might be said about someone (or some policy or idea) and write my story in that fair-minded place. By aiming for the golden mean, I probably land near the best approximation of truth more often than if I were guided by any other set of compasses — partisan, ideological, psychological, whatever… Yes, I am seeking truth. But I’m also seeking refuge. I’m taking a pass on the toughest calls I face.

There are a million more places I could take this, but it’s meant to be a philosophical question/blog post, and not a book. So I’m going to lay down some normative markers that I think ought to guide a period of introspection for major American press institutions:

The question then becomes, in light of these factors, what means ought the press use to achieve its ends? To be sure, major news organizations are facing institutional challenges for which they have few answers: traditional revenue streams evaporating, historically low levels of trust from the public, loss of their gatekeeping oligopoly. But they also face challenges to which they have simply failed to adapt: How to handle a president-elect who doesn’t need you, and who broadcasts lies on a regular basis? How to respond quickly to new information with both context and clarity, rather than stenography (for example, to claims from Trump that he is “handing off” his business)? How to incorporate an understanding of the potential effects of news on public opinion, as well as the value of “partisans” in the news ecosystem? These questions all have to do with the press’s willingness to stand as its own democratic institution, rather than primarily a reflexive one — a sun, rather than a moon. Without sunshine, this is where we are.

Filed: Super Special Questions || 13:59, December 1


Moving forward: RQ2. How do we know what Facebook affects?

“Trump didn’t win because of X” has become a popular genre of punditry in the last two and a half weeks, along with any number of declarations backed up by little or no specific data.[1] In the wake of concern about “fake news” and partisan echo chambers online, fed by both Russian intelligence and American hoaxsters, Facebook (and social media more broadly) has become the focal point. Keith Hampton and Eszter Hargittai make this argument about social media, but like most such analyses, theirs lacks data specific to actual voters; instead, they note that the demographics of Trump support are negatively correlated with social media use, and that most people don’t click through from headlines in their Facebook feeds.

But this sort of supposition ignores a range of ways that we know information filters through even pre-Internet social networks, let alone the supercharged networking that is the core function of Facebook. The point here is not to say that Facebook did or didn’t do anything, but that stitching together population-level generalities is not going to provide anything like compelling evidence.

So how do we figure out what Facebook affected, if anything, and how it did it? It’s important to have some handle on what we mean here, because no matter what we do, there are going to be lots of variables tangled up in a mess of collinearity. We also need to note that getting a look at actual Facebook content is difficult to impossible, and the online environment presents further measurement problems along these lines. Survey respondents might be able to recall how frequently they visited a major source; can they recall whether or not they ever read something from one of the minor partisan sources that use Facebook as their primary distribution platform?

If actual content is out, we’re going to need to contextualize Facebook use. One way to do this is at the model level, putting Facebook use for news into a mediation model with other media use and online and offline political discussion. Some co-authors and I have a paper in development that takes one approach to this, essentially wrapping an online version of the communication mediation model in a Facebook-based container. We find no direct effects of Facebook news use on any outcomes outside of Facebook, but significant indirect effects running through links to other media and discussion behaviors. This sort of thinking also suggests examining the relationship of Facebook shares to prominence in other media, especially major partisan media. Facebook may act as a conduit for stories that bubble up from 4chan, Reddit, or Twitter and make their way to Fox News and conservative talk radio, for example.
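
To make the indirect-effect logic concrete, here is a minimal sketch of a product-of-coefficients test in Python on simulated data. The variable names (fb_news, discussion, participation) are hypothetical stand-ins, not the measures from the paper described above:

    # Sketch: indirect effect of Facebook news use on participation via
    # discussion, estimated as a*b with a percentile bootstrap interval.
    # All data are simulated; variable names are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000

    # Simulated mediation structure: fb_news -> discussion -> participation,
    # with no direct fb_news -> participation path.
    fb_news = rng.normal(size=n)
    discussion = 0.4 * fb_news + rng.normal(size=n)
    participation = 0.5 * discussion + rng.normal(size=n)

    def indirect_effect(x, m, y):
        """Product-of-coefficients (a*b) estimate of the indirect effect."""
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]  # x -> m
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # m -> y, controlling x
        return a * b

    # Bootstrap a confidence interval for the indirect effect.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(fb_news[idx], discussion[idx], participation[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    point = indirect_effect(fb_news, discussion, participation)
    print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

With data simulated this way, the direct path is null while the a*b product is reliably positive — the pattern described above, where Facebook news use matters only through discussion and other media.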

Understanding potential Facebook effects at the individual level requires understanding individuals within their network contexts, as both senders and receivers of information. This helps us get at the central complicating factor of measuring Facebook’s effects, which is that everyone’s Facebook experience is different. Unlike a measurement of how often one watches network news broadcasts, for example, just asking for Facebook use frequency tells us basically nothing. However, what if we also knew something about people’s networks? In a survey this would be imperfect self-reported data, but we could ask questions about political homogeneity of one’s network, along with things like tendency to engage with agreeing or disagreeing others. An interaction term between frequency of Facebook use for news and network homogeneity would give us a measure of Facebook as a filter bubble or echo chamber; putting that in a model with reflection, elaboration, and talk would start us toward a model of how a variety of influences affect individuals’ attitudes. I have another paper in progress that utilizes an interaction term like this, and one problem with it is that it’s basically an impossible measure to validate. But that’s a problem for another day!
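
Here is a minimal sketch of what that interaction looks like in a regression, again on simulated data; fb_news, homogeneity, and talk are hypothetical survey measures, not items from the paper in progress:

    # Sketch: does Facebook news use predict attitude extremity more strongly
    # in politically homogeneous networks? Simulated data, hypothetical measures.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 800
    df = pd.DataFrame({
        "fb_news": rng.integers(0, 8, n),     # days/week using Facebook for news
        "homogeneity": rng.uniform(0, 1, n),  # share of network sharing one's politics
        "talk": rng.integers(0, 8, n),        # days/week of political talk
    })
    # Build in an echo-chamber pattern: fb_news matters more as homogeneity rises.
    df["extremity"] = (0.1 * df.fb_news
                       + 0.3 * df.fb_news * df.homogeneity
                       + 0.2 * df.talk
                       + rng.normal(size=n))

    # "fb_news * homogeneity" expands to both main effects plus their product.
    model = smf.ols("extremity ~ fb_news * homogeneity + talk", data=df).fit()
    print(model.summary().tables[1])

A significant positive coefficient on the fb_news:homogeneity product term is the filter-bubble signature: Facebook news use is associated with extremity only, or mostly, for people in homogeneous networks.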


[1] This is especially weird given the ultimate closeness of the election. Anything that could have cost Clinton 100,000 total votes across Pennsylvania, Michigan, and Wisconsin could be said to be the reason Trump won. The existence of multiple “but for” causes doesn’t make any single one invalid.

Filed: Super Special Questions || 13:32, November 26


Moving forward: RQ1. Who were the mediated voters?

I love blogging, but it’s obviously not the right medium for me. I’m going to try to rectify that with this series of short posts, using the chaos that currently ensnares us to develop some research questions for 2017 and beyond. Some of them are strongly journalism-focused, others are about campaign organization, and some are about information systems. I’m going to dig at the necessary research designs a bit, but I’m not thinking too much here about doability; this is more about what we should be figuring out.

The first one came to mind reading this New York Times piece on voters and non-voters, primarily black ones, in Milwaukee County:

“We went to the beach,” said Maanaan Sabir, 38, owner of the Juice Kitchen, a brightly painted shop a few blocks down West North Avenue, using a metaphor to describe the emotion after Mr. Obama’s election. “And then eight years happened.”

All four barbers had voted for Mr. Obama. But only two could muster the enthusiasm to vote this time. And even then, it was a sort of protest. One wrote in Mrs. Clinton’s Democratic opponent, Senator Bernie Sanders of Vermont. The other wrote in himself.

This sort of voter profile piece is a staple of post-election reporting, particularly when a candidate under- or overperforms in an unexpected way, and is presented as a way to understand the broader scope of what happened in the election. At the same time, this piece managed to find just one reluctant Clinton voter in a city that cast 76% of its votes for her (“…as did many others here” is how the story puts it). The genre existed during the campaign as well, as noted frequently by Eric Boehlert of Media Matters for America:

In general, I understand the media’s desire to try to explain what’s driving the support for Trump, who’s obviously running a highly unusual campaign and marketing his run in openly bigoted language. For a lot of people that’s deeply troubling, so understanding the dynamic behind Trump represents an obvious story of interest.

What I’m baffled by is the media’s corresponding lack of curiosity about examining Clinton voters. After all, she has accumulated more votes than any other candidate this year and is leading a Democratic surge into key states. (Why hasn’t The New Yorker published an 8,000-word piece on why Virginia has turned into a deeply blue state over the last decade?)

And I’m not alone in noting the year’s long-running disparity. Journalism professor and Clinton supporter Jeff Jarvis recently admonished the media (emphasis in original): “I never hear from voters like me who are enthusiastic supporters. I never see reporters wading among eager backers at Clinton rallies to ask them how much they like her and why.”

So the question here is a simple one: Do the voters and areas presented in mainstream press profiles represent the actual electorate that votes in the election? If not, in what ways is the presentation biased? Some sources of bias are fairly obvious — the view of Trump as a weird insurgent, at best, or a danger to the republic, at worst, makes for a Man Bites Dog story regardless of what else is going on. But “former First Lady becomes first woman ever to win a major party nomination” is also an unusual story. Projected swing states seem likely to have gotten more attention, but the Democratic shift in Arizona and Georgia is also compelling.

This sort of study would’ve been much easier to do 20 years ago. Identifying what qualifies as the national political press in 2016 is a study on its own, and figuring out how to find all the relevant profile pieces from, for example, CNN.com is another extensive piece of work. The difficulty of systematic sampling and the breadth of how profiles are presented suggest a qualitative approach may be the most sensible, but any comparison with real election results will need more precision than that. Assuming we find a way through the sampling process, the work gets a little easier. Codes for candidate support, enthusiasm, location, history, available demographic information, and anything else that helps form a picture of the voters being profiled can be aggregated into a model of what the mediated electorate looks like, as in the sketch below. It’s very possible there aren’t enough data points available to do a true statistical analysis, but I think that capturing the picture in the coverage is really the goal of a study like this.
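
The aggregation step itself is simple; here is a minimal sketch in Python, where the profile rows are invented examples and the vote shares are hypothetical placeholders rather than certified results:

    # Sketch: compare the "mediated electorate" from coded profile pieces
    # with actual election results. All rows and shares below are invented
    # placeholders, not real coded data.
    import pandas as pd

    # One row per profiled voter: coded state and candidate support.
    profiles = pd.DataFrame({
        "state": ["WI", "WI", "PA", "PA", "PA", "MI"],
        "support": ["Trump", "Trump", "Trump", "Clinton", "Trump", "Trump"],
    })

    # Hypothetical two-party Trump shares; substitute certified results.
    actual = pd.Series({"WI": 0.50, "PA": 0.51, "MI": 0.50},
                       name="actual_trump_share")

    mediated = (profiles.groupby("state")["support"]
                .apply(lambda s: (s == "Trump").mean())
                .rename("mediated_trump_share"))
    comparison = pd.concat([mediated, actual], axis=1)
    comparison["overrepresentation"] = (comparison.mediated_trump_share
                                        - comparison.actual_trump_share)
    print(comparison)

The overrepresentation column is the quantity of interest: a press corps that profiles Trump voters at twice their actual share of a state’s electorate would show up immediately in a table like this.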

Filed: Super Special Questions || 7:39, November 23


Publication bias, pharmaceutical efficacy and the peer-review process

Reading this MetaFilter discussion on publication bias in favor of positive findings in studies of antipsychotics and antidepressants has me thinking about how research gets reviewed and published. The current model involves a researcher (or a team of researchers) having an idea (presumably based on theory), developing it into a set of hypotheses, designing a study to test them, securing funding to actually run the study, writing up the results, submitting the paper to a journal, getting peer-reviewed, and ultimately being either rejected or published. Publication bias occurs at the peer-review stage: studies that find no relationships between the things they examine are less likely to be published than those that do. This is particularly problematic for pharmaceutical research, because it means the body of pharmaceutical literature describes drugs as more effective than they actually are, which is likely to bias both drug application and subsequent research.

There are reasons this model persists, but it doesn’t make a lot of sense in an era of unlimited online page-space. When we could only publish a certain number of studies per issue, it made sense to filter them based not only on theoretical foundation and study design quality, but also on the impact of their contributions to the literature. Most studies with null findings aren’t going to make much of a splash, of course. But now, especially in medical research, why should we restrict publication on these criteria?

A better approach could be to submit proposals to the peer-review process, with approval carrying a commitment to publish. Right now, grants go through this process, but they’re not tied to publication — journal referees see finished products, with results. But if we really believe in building on theory and hypothesis testing, a literature review, hypotheses, and a study design should be enough to know whether the research is worthwhile. A well-designed study with null findings may not be a big attention-getter, but it is something that should be available to the rest of the research community, especially at the aggregate level. That is, if 40 null studies go unpublished while 10 studies with positive findings are published, researchers ought to have a way to find out that null findings actually dominate this area of research.
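
To see how badly this skews the aggregate picture, here is a toy simulation of the file-drawer problem: a drug with zero true effect, many small trials, and publication of only those trials that find a significant benefit. The parameters are illustrative, not drawn from any actual literature:

    # Sketch: publication bias for a drug with zero true effect. Only trials
    # showing a significant positive result get "published"; the published
    # literature then reports an average effect that does not exist.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_trials, n_per_arm = 500, 40

    published = []
    for _ in range(n_trials):
        treatment = rng.normal(0.0, 1.0, n_per_arm)  # true effect is zero
        control = rng.normal(0.0, 1.0, n_per_arm)
        t, p = stats.ttest_ind(treatment, control)
        if p < 0.05 and t > 0:  # only significant positive results reach journals
            published.append(treatment.mean() - control.mean())

    print(f"{len(published)} of {n_trials} trials 'published'")
    print(f"mean published effect: {np.mean(published):.2f} SD (true effect: 0)")

The handful of trials that clear the significance filter by chance average an apparent benefit of roughly half a standard deviation, even though the drug does nothing — exactly the distortion that registering and publishing the null studies would correct.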

Filed: Science Is Real || 17:55, April 3


Peer-to-peer review

I’m in my tenth year as an academic, which means that my view of how things worked in the world of research pre-2002 is based purely on output — that is, the papers and books I’ve read that were published before my time. As a result, I really don’t know whether the phenomenon I’ve been noticing lately is new, but man, there are unpublished papers everywhere. Conference papers, working papers, executive reports, and the like are posted all over scholars’ web sites, their academia.edu profiles, and departmental sites, and then frequently logged in Google Scholar and/or touted in news releases. They may or may not make an academic splash, though a number of recent conference papers in political science and mass comm have certainly used the web to gain a lot of notice. But news organizations and commentators don’t operate with the same filters as academics do. This University of Washington study (including one author who is a former colleague of mine at Wisconsin) is a good example. It was linked by Talking Points Memo in a way that seems fairly typical of how unpublished work is disseminated through blogs and other online political commentary. Its conclusion makes the paper appealing, as it seems to inject some empirical evidence into the debate over whether “Twitter revolutions” really have anything to do with Twitter — a debate that has already gone back and forth in the press.

At the same time, it’s a study that hasn’t gone through the peer-review process. Maybe it applies theory in an unconventional way, or maybe there’s something odd about the data, or maybe it’s exactly right (and I should note I’m only highlighting this study because it’s the most recent example I’ve come across — there are dozens more). But the widespread availability of unreviewed research adds a twist to the science news cycle model, which at least starts from conclusions that have been carefully considered and reviewed before they are amplified. If we keep inserting research from earlier stages of the process into our discussions of public policy, current events, and the like, we could see major challenges to the peer-review model.

Filed: Science Is Real || 14:16, September 27