I know many tech-oriented skeptics are paying attention to the Apple Worldwide Developers Conference in San Francisco this week, wanting to find out what’s next in Macs, iPhones and iPads. But I’d like to call your attention to a different conference – a scientific conference – also going on this week: the 8th International AAAI Conference on Weblogs and Social Media (ICWSM), which runs through tomorrow in Ann Arbor, Michigan.
It might surprise you to learn there is a great deal of peer-reviewed science going on around blogs, social media and other newer online technologies. Curiously, while I see skeptics blogging about studies in alt-med, psychology, biology or physics almost daily, I rarely see skeptic blog posts about studies on Internet technology. (There are exceptions, of course). I see much more interest in this among the computer scientists, data scientists and journalists I follow online.
I suspect one of the reasons is that studies in the older scientific fields have more application to pseudoscience, the paranormal and the other things skeptics seek to critique. But this newer Internet research can address the methods and techniques of skepticism itself. Many of us skeptics do a great deal of our work online these days; we should take advantage of the available science in this area to make our online efforts more effective.
One nice thing about this week’s AAAI conference is that much of it is already published online – indeed, full copies of all the papers to be presented were available before the conference started. A number of them cover topics that should interest skeptics. One is specifically about sending Snopes.com links to people on Twitter – a common skeptic pursuit. Another may confirm some things we know about trolls.
Let me give you a peek.
Highlights of the Online Proceedings
As I mentioned, the full proceedings are online; just click through on the paper or poster titles to get an abstract and a link to the full PDF. Here are a few papers that seem, at least to me, to have relevance to skepticism or science online:
- Quantifying Political Polarity Based on Bipartite Opinion Networks
- How to Ask for a Favor: A Case Study on the Success of Altruistic Requests
- How Community Feedback Shapes User Behavior
- Modeling User Attitude toward Controversial Topics in Online Social Media
- Get Back! You Don’t Know Me Like That: The Social Mediation of Fact Checking Interventions in Twitter Conversations
- Why Won’t Aliens Talk to Us? Content and Community Dynamics in Online Citizen Science
- Audience-Aware Credibility: From Understanding Audience to Establishing Credible Blogs
- Critical Mass of What? Exploring Community Growth in WikiProjects
- Counting on Friends: Cues to Perceived Trustworthiness in Facebook Profiles
- Measuring Post Traumatic Stress Disorder in Twitter
- Identifying and Analyzing Moral Evaluation Frames in Climate Change Blog Discourse
- Finding Users we Trust: Scaling up Verified Twitter Users Using their Communication Patterns
- Why Do You Spread This Message? Understanding Users Sentiment in Social Media Campaigns
- Added: Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls
If you see others I missed, call them out in the comments to this post.
Is “Snoping” Effective on Twitter?
The paper titled “Get Back! You Don’t Know Me Like That” (a Ludacris lyric) examines the phenomenon of “snoping” on Twitter – the practice of replying to misinformation with a link to Snopes.com or another fact-checking site.
As skeptics know, there is much psychological research showing that people are reluctant to accept information that contradicts their prior beliefs. This reluctance is driven by cognitive dissonance, and it can even produce what is called the backfire effect – the new information actually reinforces the prior misinformation. This is a core problem for skeptics: how can we help educate people about bad information in the face of it?
In this case the researchers gathered over 3,600 tweets, posted between 2012 and 2013, that included links to Snopes.com, FactCheck.org or Politifact.com (all US-based fact-checking websites). They limited the set to tweets that were direct replies to other tweets – that is, tweets snoping other users. Further data reduction resulted in a set of about 1,600 “snoping events”.
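The paper does not publish its filtering code, but the selection step described above can be sketched roughly like this. The field names `in_reply_to_id` and `urls` are hypothetical stand-ins for whatever the researchers’ data actually contained:

```python
# Rough sketch of the paper's filtering step (not the authors' code):
# keep only tweets that are direct replies AND link to a fact-checking site.
FACT_CHECK_DOMAINS = ("snopes.com", "factcheck.org", "politifact.com")

def is_snoping_event(tweet):
    """tweet: dict with hypothetical 'in_reply_to_id' and 'urls' fields."""
    if tweet.get("in_reply_to_id") is None:
        return False  # not a direct reply to another tweet
    return any(
        domain in url.lower()
        for url in tweet.get("urls", [])
        for domain in FACT_CHECK_DOMAINS
    )

tweets = [
    {"in_reply_to_id": "123", "urls": ["http://www.snopes.com/some-rumor"]},
    {"in_reply_to_id": None, "urls": ["http://www.snopes.com/other-rumor"]},
    {"in_reply_to_id": "456", "urls": ["http://example.com/unrelated"]},
]
snoping_events = [t for t in tweets if is_snoping_event(t)]
# Only the first tweet qualifies: it is both a reply and a fact-check link.
```

The real study’s “further data reduction” step involved more than this, of course; this just shows the shape of the two filters.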
One of the things they found was that users tended to ignore “out of the blue” snopes, but were three times more likely to reply if they had a friend relationship (mutual following on Twitter) with the snoper. That has a huge implication for skeptics – it may mean correcting strangers is not an effective use of our time.
There’s much more to this 10-page paper (PDF here), and I’ve just started digging into it. It really deserves a whole post of its own. This paper will be presented tomorrow at the conference.
Is “Do Not Feed The Troll” A Good Strategy (Sometimes)?
Another interesting paper is “How Community Feedback Shapes User Behavior” by researchers from Stanford University and the Max Planck Institute. They presented this paper yesterday at the conference.
They looked at sites that allow users to vote on comments posted by other users, using data gathered from CNN, Breitbart, IGN and AllKPop. All of these sites use Disqus to provide comments, and that company supplied the researchers with data from March 2012 to August 2013: 1.2 million threads, 42 million comments, and 140 million up or down votes from 1.8 million users. A very substantial dataset!
Their assumption going in was that the up and down voting might act as operant conditioning (which one assumes is the purpose of the feature). That is, up votes would result in more and better posts from those users, and down votes would result in fewer posts or improved posts from the affected users.
Instead, they found the opposite occurs! After a negative evaluation, the quality of posts from the affected user goes down, and the quantity goes up! Quoting: “we find that negative evaluations actually decrease post quality, with no clear trend for positive evaluations having an effect either way.” Further, this behavior tends to spread to other users.
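To make the before/after comparison concrete, here is a toy sketch – not the authors’ actual method, which involved far more careful controls and matched comparisons – of measuring how a user’s post quality shifts around their first down-voted comment. The `quality_score` values in `history` are made up for illustration:

```python
# Toy before/after comparison around a user's first negative evaluation.
from statistics import mean

def quality_shift(posts):
    """posts: chronological list of (quality_score, was_downvoted) tuples."""
    first_neg = next(
        (i for i, (_, down) in enumerate(posts) if down), None
    )
    # Need at least one post after the negative evaluation to compare.
    if first_neg is None or first_neg == len(posts) - 1:
        return None
    before = mean(q for q, _ in posts[:first_neg + 1])
    after = mean(q for q, _ in posts[first_neg + 1:])
    return after - before  # negative result = quality dropped afterward

history = [(0.8, False), (0.7, False), (0.6, True), (0.4, False), (0.3, False)]
shift = quality_shift(history)  # about -0.35: quality fell after the downvote
```

The paper’s finding is that, aggregated over many users, this shift is reliably negative after down-votes.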
On posting frequency, they found that users who receive feedback of either kind tend to post more often, while those who receive no feedback at all post less often. Quoting from the paper:
The fact that both types of evaluations encourage users to post more frequently suggests that providing negative feedback to “bad” users might not be a good strategy for combating undesired behavior in a community. Given that users who receive no feedback post less frequently, a potentially effective strategy could be to ignore undesired behavior and provide no feedback at all.
Interestingly, this paper seems to confirm a design feature of Skeptics Stack Exchange – a site designed for question-and-answer interactions on skeptic topics. That site uses gamification to encourage participation: many user interactions earn a user reputation points, and some features of the site are only unlocked at certain reputation levels. Within that system, votes of all kinds (including negative votes) are limited per day, and downvoting subtracts from your own reputation. In other words, the site allows downvotes, but makes them costly to discourage their use.
Again, this is a ten-page paper (PDF here) which deserves a deeper dive in a post of its own.
Follow Along Online
Since this conference is about blogs and social media, it naturally has a huge presence online, particularly on Twitter. You can follow along with the commentary via the hashtag #ICWSM (even if you don’t have a Twitter account). There are other resources listed on the official website, and of course there is a conference Twitter account, @ICWSM.
The community feedback paper was presented on Monday; you can look back in the hashtag feed for additional comments. The Snoping paper will be presented on Wednesday. Other papers can be found in the conference agenda (all times are Eastern US time, currently UTC-4).
Update 6:15pm: Added Zeynep Tufekci’s paper to the list; it is pretty classic skepticism of the methodologies used in other big-data studies of Twitter.