‘Science Is Real’ Category Archive
Nothing much to add to this, but half of Republicans believe that ACORN, an organization that no longer exists, stole the election for Barack Obama:
49% of GOP voters nationally say they think that ACORN stole the election for President Obama. We found that 52% of Republicans thought that ACORN stole the 2008 election for Obama, so this is a modest decline, but perhaps smaller than might have been expected given that ACORN doesn’t exist anymore.
The crosstabs show a slightly larger share of “very conservative” respondents believing ACORN stole the election (53%), and I wonder if this is the sort of question where ideology trumps partisanship, even though it’s typically the other way around. Republican leaders will have a politically tough time invoking ACORN specifically, for obvious reasons (e.g., it doesn’t exist anymore), but the ideological leaders of very conservative voters may not be restrained by such “reality-based” concerns.
There’s also some very interesting fake knowledge stuff from the same poll:
The 39% of Americans with an opinion about Bowles/Simpson is only slightly higher than the 25% with one about Panetta/Burns, a mythical Clinton Chief of Staff/former western Republican Senator combo we conceived of to test how many people would say they had an opinion even about something that doesn’t exist.
Bowles/Simpson does have bipartisan support from the small swath of Americans with an opinion about it. Republicans support it 26/18, Democrats favor it 21/14, and independents are for it by a 24/18 margin. Panetta/Burns doesn’t fare as well with 8% support and 17% opposition.
I’ve been curious about the assertion of fake knowledge for quite a while (I conducted a study way back when that found people claiming more knowledge about a fake race riot story than the real Canadian elections), and I wonder if there isn’t a connection there to the belief gap mechanism.
Filed: Science Is Real || 9:36, December 13 || View Comments
I’m not going to write these books, but I offer them up as a public service to others in need of high-concept ideas to pitch to publishers.
- Also, Too: Vice-Presidential Candidates’ Campaign Readiness, From Garner to Palin
- “Indeed,” Said the Blogger: How Content Replication Makes the Web Look 100 Times Bigger Than It Is
- TelePrimping: A Collection of Jabs About Barack Obama’s TelePrompter Usage That Were Delivered Via TelePrompter
- Bigger, Better, Faster, More: The Incredible Accomplishments of Paul Ryan
OK, back to finishing ICA papers.
Filed: Science Is Real || 1:23, November 2 || View Comments
I’m following up last year’s belief gap study at MAPOR this year, and analysis is still in progress, but I’ve found the thing that will open my presentation in Chicago — it’s these polls from Public Policy Polling in Ohio and North Carolina:
Who do you think deserves more credit for the killing of Osama bin Laden: Barack Obama or Mitt Romney?
OHIO

| | All | Democrats | Independents | Republicans |
| --- | --- | --- | --- | --- |
| Barack Obama | 63% | 86% | 64% | 38% |
| Mitt Romney | 6% | 1% | 1% | 15% |
| Not Sure | 31% | 13% | 36% | 47% |

NORTH CAROLINA

| | All | Democrats | Independents | Republicans |
| --- | --- | --- | --- | --- |
| Barack Obama | 63% | 87% | 61% | 29% |
| Mitt Romney | 6% | 1% | 3% | 15% |
| Not Sure | 31% | 11% | 36% | 56% |
Got that? Nearly 1 in 6 Republicans in these swing states say Mitt Romney deserves more credit than Barack Obama does for Osama bin Laden’s death. Unlike, say, the auto bailout, which Romney occasionally claims he ought to be credited for, neither Romney nor anyone else has ever said he deserves credit for this. And why would he? By any standard it’s a belief that’s totally detached from reality. And yet, there it is.
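A quick pass over the crosstabs makes the partisan gap concrete. The numbers below are taken from the PPP tables quoted above; the helper function and variable names are my own sketch.

```python
# Percent crediting Obama for bin Laden's death, by party,
# from the PPP crosstabs quoted above.
credit_obama = {
    "Ohio":           {"Democrats": 86, "Republicans": 38},
    "North Carolina": {"Democrats": 87, "Republicans": 29},
}

def partisan_gap(state):
    """Percentage-point gap between Democrats and Republicans
    crediting Obama."""
    row = credit_obama[state]
    return row["Democrats"] - row["Republicans"]

for state in credit_obama:
    print(state, partisan_gap(state))  # Ohio 48, North Carolina 58
```

A 48- to 58-point gap on a question of simple fact is about as clean an illustration of a belief gap as you could ask for.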
Filed: Science Is Real || 10:12, September 11 || View Comments
Reading this MetaFilter discussion on publication bias in favor of positive findings in studies of antipsychotics and antidepressants has me thinking about how research gets reviewed and published. The current model involves a researcher (or a team of researchers) having an idea (presumably based on theory), developing it into a set of hypotheses, designing a study to test them, securing funding to actually do the study, writing up the results, submitting the paper to a journal, getting peer-reviewed, and ultimately getting either rejected or published. Publication bias occurs at the peer-review stage — studies that find no relationships between the things they examine are less likely to be published than those that do. This is particularly problematic for pharmaceutical research, because it means the body of pharmaceutical literature describes drugs as more effective than they actually are, which is likely to bias both how the drugs are used and subsequent research.
There are reasons why this model persists, but it doesn’t make a lot of sense in the era of unlimited online page-space. When we could only publish a certain number of studies per issue, it made sense to filter them based on not only theoretical foundation and study design quality, but also on the impact of their contributions to the literature. Most studies with null findings aren’t going to make much of a splash, of course. But now, especially in medical research, why should we restrict publication on these criteria?
A better approach could be to submit proposals to the peer-review process, with approval carrying a commitment to publish. Right now, grants go through this kind of process, but they’re not tied to publication — journal referees see finished products, results included. But if we really believe in building on theory and hypothesis testing, a literature review, hypotheses and a study design should be enough to judge whether the research is worthwhile. A well-designed study with null findings may not be a big attention-getter, but it should be available to the rest of the research community, especially at the aggregate level. That is, if 40 null studies go unpublished while 10 studies with positive findings are published, researchers ought to have a way to find out that null findings actually dominate this area of research.
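A toy simulation shows why that aggregate level matters. This is my own sketch, not from any of the studies discussed: the sample size, true effect and significance filter are all made up, but it illustrates how publishing only “significant” results inflates the apparent effect.

```python
import random
import statistics

# Toy publication-bias simulation (illustrative numbers only):
# many small studies of a true effect of ZERO; only results that
# clear a crude significance filter get "published."
random.seed(1)

def run_study(n=30, true_effect=0.0):
    """Mean of a small sample drawn around true_effect."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(sample)

estimates = [run_study() for _ in range(500)]
# Crude filter: |estimate| beyond ~2 standard errors (SE = 1/sqrt(30)).
threshold = 2 * (1 / 30 ** 0.5)
published = [e for e in estimates if abs(e) > threshold]

all_mean_size = statistics.mean(abs(e) for e in estimates)
pub_mean_size = statistics.mean(abs(e) for e in published)
print(len(published), round(all_mean_size, 3), round(pub_mean_size, 3))
```

The published subset is a small fraction of the studies run, and its average effect size is several times larger than the true picture, even though the real effect is zero.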
Filed: Science Is Real || 17:55, April 3 || View Comments
Apologies in advance, but if you don’t follow sports this post may not make much sense.
This afternoon, the Denver Broncos picked up their sixth win in seven games this season with Tim Tebow starting at quarterback. If you haven’t heard, Tebow has what might be called non-traditional passing mechanics, but as many commentators have noted, he “just wins.” There’s a lot that could be said, and already has been said, about the strange way in which quarterbacks are credited with team success in football, but that’s not really the point of this post. Rather, I want to point out how odd it is that seven games — even seven games that include six wins — can be considered so meaningful in football.
This stretch has turned Denver’s season around, to be sure. They lost four of their first five, but now find themselves tied for the lead in their division. But this is only possible because of the NFL’s relatively tiny schedule. Consider that, for a hockey goaltender — probably the only every-game player in North American major team sports with as much impact as a quarterback — six wins in seven games is barely noticeable; it’s less than a tenth of the season. For a baseball player (where there isn’t such a great analogue, since starting pitchers only go every fifth game), six wins in seven games is a good week. You could win your league’s player of the week award in May and be sent down to the minors in June. For Tebow, six wins in seven games is two months and half the season, and it’s especially significant when one of your divisional rivals (San Diego) is imploding at the same time.
But this is all perception; if we’re trying to think about what this seven-game sample means for predicting the larger population of games that is a player’s career, seven games tell us almost nothing. It doesn’t matter that an NFL season is only 16 games long; sampling error depends on the absolute number of observations, not on the share of the season they represent. Seventy baseball games would be the same proportion of a season, but the tenfold increase in cases would shrink the sampling error considerably.
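To put rough numbers on that, here’s a back-of-the-envelope sketch (mine, not anyone’s official stat) using the standard error of a proportion, which shrinks with the square root of the number of games:

```python
# SE of an observed win proportion p over n games: sqrt(p(1-p)/n).
# The absolute n is what matters, not the fraction of the season.

def win_pct_se(p, n):
    """Standard error of an observed win proportion p over n games."""
    return (p * (1 - p) / n) ** 0.5

p = 6 / 7  # six wins in seven games
print(round(win_pct_se(p, 7), 3))   # ~0.132 over 7 NFL games
print(round(win_pct_se(p, 70), 3))  # ~0.042 over 70 games
```

An error band of ±13 points on a winning percentage covers everything from mediocre to all-time great; ten times the games cuts it to about a third of that.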
So where does this perception error come from? Is it just the kind of rank innumeracy we see in many contexts? Maybe, but I suspect there’s also an important media effect. Sports media — both reporters and game broadcasters — and the sports culture they’re embedded in frequently express hostility toward data-driven strategy. Narratives and tradition rule in sports, and when data contradict them, it’s because data can’t possibly capture the relevant “intangibles.” Noting that a seven-game span isn’t really an illuminating sample gets in the way of a lot of narrative structure.
Filed: Science Is Real || 21:49, December 4 || View Comments
At the MAPOR conference last weekend, I presented a study on how partisan media strengthen belief gaps. The belief gap idea, first identified by Doug Hindman a couple of years ago, is an extension of the knowledge gap, a theory with over 40 years of work behind it. Whereas the knowledge gap hypothesis suggests that education predicts differential gains in knowledge about political issues — everyone learns, but high-education people learn more, creating a widening gap — the belief gap hypothesis suggests that ideology is a better predictor than education. It’s called the “belief” gap because the conceptualization of beliefs better fits contexts in which facts and knowledge are politically contested.
My paper (with students Delwar Hossain and Ben Lyons) took the initial findings and expanded them in three ways. First, we examined partisanship in addition to ideology and found that it’s consistently a better predictor of beliefs. We attribute this to both being essentially markers of group affiliation, with partisanship the clearer marker for both researchers and survey respondents. Ideology has long been conceptualized as a coherent belief system that drives opinion formation, but most research suggests few people actually hold this kind of formal ideology. Instead, we use cues from elites to guide our opinions, attitudes and beliefs.
Second, we examined the role of partisan traditional and social media in the belief gap process. Despite concern that social media are politically polarizing and insular, we found that partisan traditional media are far stronger drivers of partisan beliefs. There is a structural explanation for this — cable TV and radio have far larger audiences than do blogs and pundits’ social media outlets — as well as a psychological one — we’re exposed to more elite opinion through these outlets, whether those elites are elected officials or opinion-leading commentators.
Finally, we examined belief gaps in five issues — two science-related issues that had previously been studied by Hindman (climate change and abstinence-only sex education), two evidence-free rumors about President Obama (he’s a Muslim, he was born outside the U.S.) and one factual economic issue (whether most Americans’ taxes have gone up during the Obama Administration). Each of these issues has a correct answer by consensus of relevant authorities, but each is also highly politicized. We found belief gaps for each, with largely similar patterns of partisan media influence.
I lay all this out because thinking about our findings in the context of the other presentations in the belief gap panel — from Hindman; Ken Blake and Misa Culley; and Rob Daves, Allen White and Stephen Everett — got me thinking a lot about the broader, more abstract facets of this idea.[1] To my mind, there are two big questions to be answered. First, we need to think about what things a person can have “beliefs” about. During the panel, Rob Daves talked about “verifiable” issues and referenced the work of Cecilie Gaziano in this area, but I think we can also frame this in cognitive terms. Given that the belief gap idea grew out of the knowledge gap, I suggest we look toward the cognitive structure of knowledge to understand what we mean by “belief.” Presumably we are thinking of issues about which the believer can feel that their beliefs are “correct,” even if all evidence and authoritative consensus suggests otherwise, even if there is no consensus to draw on, and even if the answer exists but is unknowable. We may further want to separate issues that are retrospective (about which verification may already be possible), prospective (about which verification can’t be done yet) and ongoing (about which verification may be ephemeral or in constant dispute). These orthogonal issue dimensions would co-exist with the dimension already in use in existing research, politicization.[2] The typology might look something like this — as befits a seat-of-the-pants typology, the examples come from some quick thinking and may not fit all that well:
- Consensus: Obama born in U.S.; global temperatures will rise; climate change; Lincoln killed by Booth; vaccines and autism
- Disputed: Roe v. Wade lowered crime rate; economy will improve next year; gun ownership and safety; JFK killed by Oswald
- Unknowable: 2000 election stolen; Jesus will return someday; alien life exists
If we believe that the process observed in the belief gap phenomenon is one of elite cuing by like-minded political leaders (consistent with the work of, e.g., John Zaller), the next question is who does the cuing across the range of this issue typology. For politicized issues, we’ve got a pretty strong hypothesis that political elites provide the most relevant cues, but who those elites are might vary by issue. Particularly for issues politicized along evangelical/non-evangelical religious lines, we might expect different people and sources to play important roles in mass-opinion formation. Maybe economic, defense and science issues all have different arrays of influential elites; still, we’re probably talking about a relatively narrow band of elites cuing beliefs across a lot of political issues.
But what about beliefs for which elite political cues are not relevant? With the possible exception of Michele Bachmann, nobody’s politicizing childhood vaccinations. So who cues beliefs about vaccinations? Is it scientific consensus (as reported by news media)? Jenny McCarthy? Oprah? If we can explain how non-political beliefs are cued, we may go a long way toward identifying the underlying cognitive and social psychological processes of political belief formation.
[1] I should also acknowledge the suggestions of several members of the SIUC political science department (particularly Tobin Grant and Scott McClurg) during a preliminary presentation of this work, which I subsequently incorporated into the final product, and which have informed my ongoing thinking about this topic.
[2] There’s another wrinkle here, which is the concept of issue domains and the cognitive work that goes into connecting our attitudes on related issues. For example, in the data used in our paper above, beliefs that tax cuts encourage job creation and that federal deficits discourage job creation were strongly correlated, even though tax cuts help to increase deficits. Our respondents also anticipate strong inflation over the next year, even though we’ve been in a period of historically low inflation during the global recession. What it looks like is that, instead of considering each issue on its own, people derive specific beliefs from general economic attitudes — that is, the economy is bad, and inflation is bad, so we must be in an inflationary period. Job creation is good and tax cuts are good, so they must go together. Probably relevant, but also probably not worth getting into until the first level of questions has been worked out.
Filed: Science Is Real || 20:11, November 21 || View Comments
Whenever I have a project in mind that involves Facebook, there’s a methodological stumbling block that almost always comes up: Most of what’s interesting isn’t accessible unless you are friends with the people you’re trying to study. So maybe you rework the research questions, or you come up with a way to address them using survey data, etc.
But now I see that I was overlooking the obvious solution: Just create fake profiles to friend the people you want to study, as a group of four researchers at the University of British Columbia did. For them, it was entirely necessary, as they were studying the vulnerability of online social networks to malicious bots, so they created their own benign bots and observed what they accomplished. The very first phase produced about a 20% friend-acceptance rate, so with a good sampling method, this looks decent enough as a way of getting real, live Facebook content.
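As rough planning math, a 20% acceptance rate tells you how many requests you’d need to send to hit a target sample size. The target numbers and the helper function here are mine, purely for illustration.

```python
import math

# Friend requests needed to reach a target sample, given an
# expected acceptance rate (20% per the UBC study's first phase).

def requests_needed(target_n, acceptance_rate=0.20):
    """Requests to send to expect target_n accepted friends."""
    return math.ceil(target_n / acceptance_rate)

print(requests_needed(300))  # 1500 requests for an expected 300 profiles
```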
Filed: Science Is Real || 23:10, November 6 || View Comments
I fielded a survey recently, and this out-of-context result really intrigues me. For the non-quantitative readers, these are regression results showing how each variable predicts the outcome with all the others controlled. The outcome here is agreement with the statement, “When most Americans debate issues facing the country, they are more civil today compared to ten years ago.” I’m looking at blog use, ideology and partisanship in this study, and here are the predictive results (the ones with footnote symbols are statistically significant):
| Predictor | Coefficient |
| --- | --- |
| Conservative Blog Use | .23** |
| Liberal Blog Use | .03 |

*** p < .001, ** p < .01, * p < .05, † p < .1
I’m pretty sure that the partisanship and conservative blog use results are manifestations of those individuals remembering 2001 as a time when everybody was being so mean to George W. Bush all the time. It’s likely also the reverse — Democrats seeing the current environment as severely uncivil — but the distribution of the blog use data suggests to me it’s more the former than the latter. More on this at MAPOR next month.
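For the non-quantitative readers, the “with all the others controlled” idea can be sketched with simulated data. Everything below is made up — fake respondents, true coefficients chosen to resemble the table above — and only illustrates how regression separates correlated predictors.

```python
import numpy as np

# Simulated illustration of "each variable predicts the outcome
# with the others controlled." Data and coefficients are fake.
rng = np.random.default_rng(0)
n = 5000
cons_blog = rng.normal(size=n)
lib_blog = 0.5 * cons_blog + rng.normal(size=n)  # correlated predictors
y = 0.23 * cons_blog + 0.03 * lib_blog + rng.normal(size=n)

# OLS via least squares: intercept, then the two slopes.
X = np.column_stack([np.ones(n), cons_blog, lib_blog])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coefs.round(2))  # intercept near 0, slopes near 0.23 and 0.03
```

Even though the two blog-use variables are correlated here, the regression recovers each one’s distinct relationship with the outcome — which is what lets me talk about conservative blog use net of everything else.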
Filed: Science Is Real || 19:53, October 1 || View Comments
I’m in my tenth year as an academic, which means my view of how research worked pre-2002 is based purely on output — that is, the papers and books I’ve read that were published before my time. As a result, I really don’t know whether the phenomenon I’ve been noticing lately is new, but man, there are unpublished papers everywhere. Conference papers, working papers, executive reports and the like are posted all over scholars’ web sites, their academia.edu profiles and departmental sites, and then frequently logged in Google Scholar and/or touted in news releases. They may or may not make an academic splash (though a number of recent conference papers in political science and mass comm have used the web to gain a lot of notice), but news organizations and commentators don’t operate with the same filters academics do. This University of Washington study (including one author who is a former colleague of mine at Wisconsin) is a good example. It was linked by Talking Points Memo in a way that seems fairly typical of how unpublished work is disseminated through blogs and other online political commentary. Its conclusion makes the paper appealing, as it seems to inject some empirical evidence into the debate over whether “Twitter revolutions” really have anything to do with Twitter, a debate that has already gone back and forth in the press.
At the same time, it’s a study that hasn’t gone through the peer review process. Maybe it’s applying theory in an unconventional way, or maybe there’s something odd about the data, or maybe it’s exactly right (and I should note I’m only highlighting this study because it’s the most recent example I’ve come across — there are dozens more). But the widespread availability of unreviewed research adds a twist to the science news cycle model, which counts on peer review to rein in conclusions before they circulate. If we insert more and more research from earlier stages of the process into our discussions of public policy, current events and the like, we could see major challenges to the peer review model.
Filed: Science Is Real || 14:16, September 27 || View Comments
17. Please tell me whether you agree or disagree with the following statement: Whatever its faults, the United States still has the best system of government in the world. Do you agree or disagree?

| | Agree | Disagree | No opinion |
| --- | --- | --- | --- |
| 9/8/11 | 77 | 21 | 2 |
| 10/3/10 | 77 | 21 | 2 |
| 12/19/07* | 81 | 17 | 1 |
| 12/15/00 | 89 | 11 | * |
| 5/6/96** | 83 | 15 | 2 |
| 4/9/95** | 85 | 14 | 1 |
| 9/1/94** | 84 | 13 | 3 |

*ABC News/Facebook; **ABC News
I’d quibble with the wording a bit (“Whatever its faults” seems like it would prime dismissal of those faults), and I think this is a question that’s being asked in the absence of much knowledge of other political systems among the American public, but still, we’re not getting rid of the Senate any time soon.