Slate snubs skeptics in an item on misinformation in Google

Yesterday Slate posted a piece by Evgeny Morozov that asked the question, “Does Google have a responsibility to help stop the spread of 9/11 denialism, anti-vaccine activism, and other fringe beliefs?” On its face it is an interesting question, one that goes right to the heart of what this blog is about. But except for one nugget of wisdom which I applaud, the bulk of the article reveals the author’s naivete about matters skeptics deal with every day.

The article comments on a peer-reviewed paper in Vaccine that analyzes the “tactics and tropes” of the anti-vaccine movement. Unfortunately I don’t have access to that journal to comment on the paper directly. But I can say the author of the Slate article could have avoided some pitfalls had he availed himself of the large body of skeptic literature in addition to that one paper.

News flash: we’ve been fighting these battles for decades, and are well familiar with the tactics listed. We’ve even been going head-to-head with these communities in Google and on Twitter and in the rest of Web 2.0, using the very same techniques. The evidence is easy to find in Google; I’m not sure why Morozov can’t see it.

In the rest of this article I’ll point out how the piece’s proposed solution lacks vision, and suggest some other avenues that don’t require Google to get involved.

Skeptics Ignored

Like many who casually visit the topic areas of skepticism, but do not spend any substantive time studying them, Morozov seems to assume that no-one other than intellectuals and government workers have any interest in what the merchants of misinformation are up to.

He somehow manages to ignore the 45+ year history of the organized scientific skepticism movement, which has long been battling this type of misinformation through the efforts of non-profit groups, two different national magazines and hundreds of blogs and podcasts.

He laments that advocates of fringe beliefs have “branched out into manipulating search engines, editing Wikipedia entries” to push their ideas on an unsuspecting public. But it somehow never occurs to him that others might be using the same techniques to push back as well.

Indeed, this blog and others (such as Susan Gerbic’s wiki project) have long been assisting skeptics in going head to head with believers in actively editing Wikipedia, ranking highly in Google, as well as in other Web 2.0 efforts.

Missed Research Opportunities

It’s mystifying to me how Morozov completely missed this effort. It can be found using some of the very search terms he mentions as examples in the article, as well as closely related terms.

For instance he suggests the search “is global warming real” is a potential gateway to a fringe community. But the organic Google results include such reality-based sites as discovery.com, nationalgeographic.com and straightdope.com. It is naive to assume that those sites appear entirely by accident, while the global warming denial sites (which also appear nearby) have been manipulated into position.

If you search on the name Andrew Wakefield (whose photo graces the article) you do find several believer and supporter sites. But you also find Brian Deer’s investigation which led to Wakefield’s downfall, as well as Respectful Insolence and Pharyngula, both well known skeptical blogs. This is not luck or happenstance; those bloggers have worked very hard to get that content there.

Several of Morozov’s suggested searches return Wikipedia articles among the results, even as the first result. The search term “who caused 9/11?” actually returns two different Wikipedia articles in the first two slots.

But again Morozov misses a research opportunity here. If you actually read those Wikipedia articles, you will find that they are very reality-based. Indeed, Wikipedia takes a rather dim view of “fringe theories,” even dedicating a special discussion area where editors can take note when fringe theories are getting out of hand and coordinate a response.

As I have pointed out on this blog, Wikipedia’s own rules are skewed toward reality and against peddlers of misinformation. As long as skeptics step up and take the time to enforce the established rules on that site, fringe theories should never hold sway permanently in any article.

One Insight

I do want to congratulate Morozov for one insightful observation, with which I entirely agree. He makes it near the center of the article:

What to do then? Well, perhaps, it’s time to accept that many of these communities aren’t going to lose core members regardless of how much science or evidence is poured on them. Instead, resources should go into thwarting their growth by targeting their potential—rather than existent—members.

As long-time readers of this blog know, I could not agree more. I said so in the TAM6 talk which was the kickoff for this blog (see 3:45 in that video). Most of the techniques that I espouse on this blog are designed with the above concept in mind.


A host of issues from human psychology such as cognitive dissonance, various cognitive biases, logical fallacies and conspiracy thinking largely prevent true believers from seeing the many errors they continue to make in their thinking. Thus no amount of logic from skeptics can help them.

Unfortunately, after this insight, Morozov engages in more fallacious thinking as he composes his solution.

Only Two Solutions?

Morozov can only think of two solutions: build browser-based tools that can flag dubious information and ask Google to fix the problem for us. There are problems with both suggestions, as proposed in the article.

First, why does he assume that browser-based tools would naturally be immune to the poisoning of search results that deniers attempt? Due to the huge scale issues inherent in fact-checking the web, most such tools are likely to include a crowd-sourcing element.

The very example he gives, an academic project called Dispute Finder, was full of 9/11 conspiracy theory information the last time I took a look at it. It also seems to have been abandoned by its creator for all practical purposes, as the Firefox plugin is no longer available for download.

Another effort, called Hypothes.is (which I have blogged about here) has been proceeding very slowly and carefully in its early implementation efforts, no doubt in an effort to navigate around these same issues. I’m cautiously optimistic about its chances of success, but (like Wikipedia) the editors of Hypothes.is will need to be ever vigilant of fringe theories.

His primary idea is to get Google to flag fringe theories on the search engine result pages. He cites Google’s recent efforts to include suicide hotline information in relevant searches as precedent.

I would point out a counter-precedent. For years some rather unsavory hate sites have ranked very highly for the searches on the word “jew” in Google. Despite many public calls to manually alter the result, Google chose to address the problem in an interesting way (which you can see at that link). It purchased an advertisement in its own ad placement service titled “Offensive Search Results” that always appears when you make that search. The ad leads to an explanatory page. More recently, due to simple search engine optimization techniques (aka “manipulation of search results”) the very same page actually appears as an organic result as well.

It may seem like a dodge for Google to respond this way, but the Anti-Defamation League complimented Google on their response. More importantly, the response they made is simpler and more scalable.

And it doesn’t require Google’s permission! Skeptics can use these very same techniques. Indeed, the technique of placing an ad in Google results is already being used by science-based non-profits. If you use another of the putative search terms from Morozov’s article, “vaccination leads to autism,” you might well see one or more of the following ads at the top of the results right now:


This undoctored screen shot shows one instance where I saw all three! All three of these lead to reality-based websites unaffiliated with Google. This is no accident; this is skeptic activism.

Many Other Solutions

Morozov’s paltry two solutions show a lack of vision, both in their number and in the implication that neither exists in usable form. He speaks of both in the future tense, as if they have yet to be fully realized.

But the browser plug-in already exists, albeit in a slightly different form than he describes. Web of Trust (WOT) has been downloaded approximately 30 million times, and can give ratings on websites that often warn of dubious content right within Google search results and elsewhere. For instance, if a WOT user searches on “risks of vaccination”, here’s what part of the Google result screen looks like:

That red circle is an “unfavorable” rating from Web of Trust. Many of these negative ratings of denialist websites in WOT are the direct result of the efforts of skeptics.

As for other possible responses, I’ve already talked about search engine optimization efforts undertaken by skeptics as well as our Wikipedia efforts. Thousands of skeptics (including myself) edit Wikipedia every day.

Skeptics also have a huge presence on social media such as Twitter, Facebook and Google Plus, and some make a point of addressing false claims head-on whenever they occur. Because of new social-based search tools such as Search Plus Your World, these efforts by skeptics will actually start to change the results for their friends and acquaintances. Deniers may stay inside their own bubbles, but skeptics can reach others this way.

And there is much more, much of it documented on this blog.

Conclusion

I do agree with Evgeny Morozov that misinformation on the web is a pernicious problem. Indeed, the very reason I started this blog was to combat it.

However, I wish he had looked beyond one academic paper and realized that there are already thousands of people ahead of him on the battlefield actively engaged in the fight against it.

There is no need for a nuclear option, dropped from orbit by Google. We’ve got a land war well underway. Evgeny, you’re free to join us. Grab a rifle.


About Tim Farley
Focused on online misinformation, Tim Farley is a software engineer, computer security expert and scientific skeptic who created the site What's The Harm. He is a Past Fellow of the James Randi Educational Foundation.

10 Responses to Slate snubs skeptics in an item on misinformation in Google

  1. Tim Farley says:

    If you are interested in more on this topic, Orac posted a blog about the same underlying Vaccine paper right around the same time as I posted this. Some good comments there.

  2. Tim Farley says:

    For a more humorous (to skeptics) look at the same article, check out the post on Alex Jones’ conspiracy-mongering InfoWars site titled: Soros Mouthpiece Calls On Google To Police “Conspiracy Theories”. The comment section is quite entertaining if you have the stomach for it.

  3. My proposed solutions were framed to address a very narrow goal: how does one ensure that users who have placed a search query for, say, “risks of vaccination” do not end on a site run by conspiracy theorists. Sure, the macro-level solution that you outline here is worthwhile: let’s create more sites run by skeptics with better info and make sure that info is properly presented on Wikipedia, etc. All of this is great and valid. But the scenario that I outline in the piece already presupposes that we are dealing with a page of search results heavily dominated by pseudoscience. (See “Google already has a list of search queries that send most traffic to sites that trade in pseudoscience and conspiracy theories; why not treat them differently than normal queries?”). In other words, I’m proposing how to deal with a very limited subset of cases where broader ecological solutions failed; I’m not advocating to replace these broader ecological solutions with technological fixes per se.

  4. Tim Farley says:

Thanks for replying. A couple of additional comments.

    There are some very difficult questions posed by your idea. Where does Google draw the lines in this determination? Some sites (e.g. InfoWars, Natural News) are clearly largely full of nonsense, others have a mix of material both good and bad. For example, there are some autism charities that occasionally stray into vaccine denial, but also do some legitimate work. Should Google block them too? How much traffic needs to go to one of these sites from Google before we care about a given search?

    But I think you vastly overestimate what Google is willing to do. In fact, I think Google has thought long and hard about this problem and decided it is best not to intervene at all.

    In both the example you cite (suicide-related searches) and the example I cite (searches for “jew”) it is important to note that Google does not prevent the searches from ending “on a site run by conspiracy theorists”. It merely alters the results slightly so competing or explanatory information is included in the Google results. You are still free to click on the problematic results.

    But I have a more fundamental objection to what you just said, which is: I don’t think you’ve adequately demonstrated your original premise is true in any practical sense.

You claim that there are “search results heavily dominated by pseudoscience” in which “broader…solutions failed” and in which most users end up on conspiracy theory sites.

    Which searches are these, exactly? Have you measured them using Google Insight? Just because you can make Google produce a result page that has a certain pattern of results, it does not mean any users (other than you) have ever seen that page. Google, after all, is just an algorithm.

For example, when I run an Insight search on one of your quoted phrases, “vaccination leads to autism,” the result I get says:

    Not enough search volume to show graphs.

    In other words, very few people actually make that search. To put it another way, you may be proposing a solution for a problem that doesn’t really exist.

But where did I say that Google would be blocking the results? Or flagging individual sites? What I wrote is that “Thus, whenever users are presented with search results that are likely to send them to sites run by pseudoscientists or conspiracy theorists, Google may simply display a huge red banner asking users to exercise caution and check a previously generated list of authoritative resources before making up their minds.” As you see, I explicitly said that users would be presented with search results (no matter how kooky they are) – there will be no tinkering with algorithms or the search results. What would change is that users would see an extra line of code on top of search results asking them to be cautious.

Granted, Google would have some internal list of sites that have been flagged as “dubious,” but that list won’t be released to the public. (How that list is generated can be thought out in more detail – in consultation with respected scientists, etc.) The point is that, once Google knows that one of these kooky sites ends up on the first page of search results, it would display a banner on top of the page that would say “Watch Out!” It won’t specify which sites are dangerous or not; in addition, Google would say something like “for vaccination-related information you may want to check cdc.gov or site X” or “for climate related searches do check out http://www.ipcc.ch”.

  6. The concerns about perception of link ranking and what Google serves up are valid.

    I did a post awhile back on what turns up in a simple search on the term “vaccine” and “autism and vaccines” compared to a search on “economics.” http://biologyfiles.fieldofscience.com/2011/11/what-makes-expert-in-science-how-about.html

    One thing that triggered that post was the article linked within it noting that students tended to perceive the top-ranked links as more trustworthy.

    I’ve heard a rumor just last week that being a “person of trust” and a real name gets a higher ranking now on Google for certain searches and that there is some ongoing adjustment in the algorithm. But it’s rumor.

  7. Three words: Web Of Trust

    I suggested that when Sidewiki was dumped, that it be given to WoT as an extra source of content. That cuts the other way too. WoT is a great example of exactly what is being talked about – a warning which doesn’t actually censor the info, but lets you know that it might not be reliable.

    Imagine if WoT was integrated into Google results.

Well, for one thing, we’d see a lot more gaming of the WoT system happening. Still, I think it’s a good model for the proposal, even if in practical terms it has flaws.

  8. Tim Farley says:

Re: blocking of results, I was quoting your comment here; I think I quoted you accurately.

    I think the whole idea is a non-starter because of the accusations of partisanship that would immediately arise against Google. You can see them already on the InfoWars article. This is precisely why Google tries to avoid mucking with search results.

    Let’s change gears. I pointed out in my main post above how we actually don’t need Google to do this at all. Interested third parties can use Google AdWords to achieve the same effect. Indeed, this is already being done. Why do we even need Google to get involved? We’ve seen the disadvantages of this already, what if anything are the advantages?

  9. sgerbic says:

    Seems like WOT already does notify people when they are about to enter a “bad” URL. Am I missing something else here?

  10. Pingback: Content Roundup: January 2012 « Skeptical Software Tools
