Like it or not, Facebook is now central to the propagation of news and other media online. Links from Facebook posts often constitute the lion’s share of traffic to an article. So savvy publishers do anything they can to increase the likelihood that you will post or repost their content. At the same time, many legitimate journalists are a little freaked out that a private company like Facebook has so much power over what we all see.
This has led to the rise of clickbait – posts with headlines and/or graphics that all but dare you not to read them, sometimes with salacious or silly content that will encourage sharing. One annoying variety of clickbait that became prominent starting in 2013 is the “fake news” site. These sites post content ostensibly similar in intent to the comedy site The Onion, without any of the clever or hilarious writing. Some of them deliberately design their sites to encourage confusion with well-known media properties. One of them (National Report) even basically admitted in an interview that what they are doing is trolling extreme conservatives for clicks.
I’ve recently written about these sites, and how Facebook’s engagement numbers sometimes exaggerate their true reach. But exaggeration or not, Facebook still sends millions of readers every day to these sites and myriad other sources of misinformation. That’s a problem central to what skeptics are all about.
But last week Facebook made a change that might help solve this problem in a general way – and which also might be useful to skeptic activists. Read on.
Flagging False Stories
The rise of fake news sites has led to some complaints, particularly when users post these stories as if they are true. Last August it was reported that Facebook was experimenting with a “satire” tag that would appear on posts from The Onion and such, to help people identify posts that are intended to be humorous or satirical. I see posts from The Onion quite often on Facebook and I’ve never seen the satire tag appear. So I’m not sure how widely this rolled out. (Extremely large sites like Facebook and Google will often test new features with just a subset of users).
As these fake news sites have proliferated, it has become clear that many of them (such as National Report, mentioned above) aren’t even satire. Facebook has always had a way to report posts as being objectionable, illegal or otherwise in violation of the site rules. The company has apparently decided to take that feature and repurpose it toward fake news. In a blog post last Tuesday, Facebook announced that they had added an option in this flagging process to specifically call out posts which link to false or misleading content.
I’ve experimented with this on the site. It definitely only works on posts containing links to outside content – so you can’t flag your neighbor’s ridiculous stream-of-consciousness rant about the Illuminati as false. (Ah, well).
I found that how you get to the new option depends a little bit on context. Usually the drop-down in the upper right corner of a post contains an option labeled Report post or Report this post. But on some posts you need to use the option marked I don’t like this post instead. (And some posts have both options – Facebook is very confusing sometimes!) After that you get a prompt asking why you are reporting it, which looks something like the picture below on the web.
You need to pick the middle option I think it shouldn’t be on Facebook in order to get to the next prompt, which looks like this:
What none of the reporting I’ve seen on this feature so far has pointed out is that there’s another prompt after this. That prompt asks you to either unsubscribe from the person or page who posted the false story, or send them a message about why you are objecting to their post. Until you choose one of those options, the Done button is disabled – which makes it unclear to me whether a false news story report actually “counts” as far as Facebook is concerned until you choose one of those things. I’ve reached out to Facebook to get a comment on this.
Now how does Facebook use this information? According to the blog post they don’t plan to block these posts:
Today’s update to News Feed reduces the distribution of posts that people have reported as hoaxes and adds an annotation to posts that have received many of these types of reports to warn others on Facebook. We are not removing stories people report as false and we are not reviewing content and making a determination on its accuracy….
To reduce the number of these types of posts, News Feed will take into account when many people flag a post as false. News Feed will also take into account when many people choose to delete posts. This means a post with a link to an article that many people have reported as a hoax or chosen to delete will get reduced distribution in News Feed. This update will apply to posts including links, photos, videos and status updates.
(Emphasis mine). So they are dodging accusations of censorship by adjusting the infamous Facebook algorithm (the same one that has all those journalists freaked out) to use these reports to reduce the spread of these false stories. So far, so good.
When these stories do appear, they will be annotated thusly:
So basically, once stories are flagged this way by enough people, they should have reduced distribution on Facebook. That is, they will be less likely to go viral. In cases where they are still seen they will be marked as possibly false.
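Facebook hasn’t published any details of how this works internally, but the mechanism their blog post describes – reduced distribution plus a warning label once enough users flag a post – can be illustrated with a toy sketch. Everything here (the threshold, the penalty factor, the function name) is my own invented placeholder, not anything Facebook has disclosed:

```python
# Purely illustrative sketch of a demotion heuristic like the one Facebook
# describes: posts are never removed, but after "many" hoax reports their
# feed ranking drops and a warning annotation is attached.
# The threshold and penalty values below are invented for illustration.

HOAX_FLAG_THRESHOLD = 50      # hypothetical report count before action
DISTRIBUTION_PENALTY = 0.2    # hypothetical multiplier on the ranking score


def adjusted_score(base_score: float, hoax_reports: int):
    """Return (ranking_score, show_warning) for a post in the feed."""
    if hoax_reports >= HOAX_FLAG_THRESHOLD:
        # Demote the post and mark it as possibly false -- but keep it up.
        return base_score * DISTRIBUTION_PENALTY, True
    return base_score, False


# A heavily flagged post still appears, just demoted and labeled:
score, warn = adjusted_score(100.0, 120)
```

The key design point the sketch captures is that this is ranking, not removal: a flagged post’s score shrinks, so it is less likely to go viral, but anyone who does see it gets the warning.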
This could be a very good thing.
An Opportunity for Skeptical Activism?
Sharp-eyed skeptics have surely noted by now that the wording of the prompts does not limit them to fake news sites. The prompts leave open the possibility that this could be used for any type of falsehood. That could include links to conspiracy theories, pseudoscience or paranormal claims.
It seems that there’s an opportunity here for skeptical activism.
If enough skeptics were to repeatedly tag links to incorrect information, we might be able to reduce its spread, or at least get that disclaimer to run at the top of these items. Imagine if enough skeptics tagged the latest Mike Adams or Food Babe story this way, and these disclaimers started appearing regularly. Perhaps this would help curb their influence.
What About Abuse?
There’s a dark side to this, of course. Any crowdsourced system like this that can be used for good can also be used maliciously.
About a year ago Dorit Rubinstein Reiss reported a case of anti-vaccine campaigners ganging up on skeptics (such as the Australian group Stop the Australian Vaccination Network) via the Facebook report feature. They would report skeptic posts as objectionable or illegal, even though they weren’t, just to get Facebook to remove them. David Gorski also blogged about this, and reported that they were quite blatant about it – even creating a special group called “FB Time-Outs for Provaxers” where they would brag about their successes. There have been other cases where reporting functions on sites like Facebook and Twitter have been abused this way.
Facebook seems a bit in denial about the possibility of this type of abuse:
We’ve found from testing that people tend not to report satirical content intended to be humorous, or content that is clearly labeled as satire. This type of content should not be affected by this update.
The vast majority of publishers on Facebook will not be impacted by this update. A small set of publishers who are frequently posting hoaxes and scams will see their distribution decrease.
Facebook’s reluctance to admit the possibility of a downside here means we need to be on alert. Skeptics should be on the lookout for this type of abuse – particularly the false story tag appearing above skeptic content on Facebook.
Facebook’s new feature could be a good tool against misinformation. However, it could also be abused. Skeptics should be aware of this feature and be on alert.
- If you wish, use this feature to flag pseudoscience, paranormal and other content that clearly misleads the public.
- Keep an eye out for the false content flag appearing above links to skeptic sites.
Please report anything you find via the comments below or directly to me via my social media accounts.