How Gamification Can Beat Fake News
In the decade-ish since the second coming of social media, global democracy has been severely impaired by the spread of fake news. I believe that using gamification techniques could substantially slow this phenomenon and reset the balance.
N.B. I’m using “fake news” to refer to things which are actually fake, not things that are true but a certain someone (et al) doesn’t like to hear.
The Fake News Industrial Complex
Before we dive into the solution, it’s worthwhile starting with a brief recap of how fake news spreads on social channels. If you already understand these algorithms, feel free to skip this section.
All social media companies monetize by micro-optimizing your attention. That is, they understand that your only finite resource is time, and the more of your time they get, the more of the revenue generated from your attention they control. Therefore, they have developed complex machine learning (ML) algorithms to try to figure out what you will want to consume next. The most sophisticated (and therefore dangerous) of these is the Facebook News Feed, but every major tech company has its own version (Google, Netflix, YouTube, Amazon, Apple, etc.). They usually wrap this idea in “Things You May Like” or “Coming Up”, but the basic premise is the same: if I can figure out what you want next, I can keep you on my site/app for longer and make more money from advertising/e-commerce/etc.
The algorithm uses a combination of techniques to figure this out, but the simplest formula (sketched in code below) is:
You are similar to Person X.
Person X looked at this.
Therefore, you might like this.
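To make that concrete, here is a minimal sketch of the user-similarity heuristic in Python. The data, the Jaccard similarity measure, and the scoring are all illustrative assumptions on my part; a real feed ranker uses far richer signals and learned models.

```python
from collections import Counter

# Hypothetical engagement history: user -> set of items they interacted with.
history = {
    "you":      {"video_a", "video_b"},
    "person_x": {"video_a", "video_b", "video_c"},
    "person_y": {"video_d"},
}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between two users' engagement sets."""
    return len(history[a] & history[b]) / len(history[a] | history[b])

def recommend(user: str):
    """Score items the user hasn't seen by the similarity of the users who have."""
    scores = Counter()
    for other in history:
        if other == user:
            continue
        for item in history[other] - history[user]:
            scores[item] += similarity(user, other)
    return scores.most_common()

print(recommend("you"))
# [('video_c', 0.666...), ('video_d', 0.0)]
# person_x is most similar to you, so person_x's unseen video ranks first.
```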
There’s a lot of additional information being fed into this system, but the concept is quite democratic: the best content (for a given audience) should win. The most popular videos on YouTube are, by definition, the videos the most people chose to watch, which suggests the basic algorithm is surfacing what audiences want. As with all algorithms of value, it is subject to perversion and manipulation.
There are three basic ways to propagate content into this algorithm that are likely to result in the growth of your idea:
- Buy advertising that puts your content in front of people based on criteria you choose, regardless of their consumption preferences.
- Use influential people (real or fake) to propagate the content. In both cases, you want to match the influencer to the target audience so that masses of real people want to like, share, etc.
- Create content that is so novel, so great and so interesting that people naturally love it and share it on their own.
Regardless of which option you choose, the social media platform makes money. Sure, the advertising revenue in option 1 is higher margin, but if option 2 or 3 succeeds, the platform captures more of your attention and eventually collects that ad revenue anyway.
The bottom line is that they have little incentive to address the issue of fake news or undue influence because they are making money on it. Lots of money. Attention is revenue, and the most salacious of fake news stories tend to drive the most clicks.
Interestingly, however, the data gathering apparatus of these social media giants can also be leveraged in a positive way. By taking advantage of the algorithm’s underlying calculus and combining it with human pattern matching, we can potentially fix this problem quickly and cheaply. Of course, we’ll need a little help from gamification.
How to Stop the Spread of Fake News Using Gamification
One of the most important core experiences in gamified systems is the “trust score”. Examples of this can be seen in your eBay seller/buyer score, your Uber score, and your credit score. Admittedly, each of those numbers has a wildly different impact on your life, but the fundamental premise is the same: an algorithm made up of different inputs calculates a number that tells someone whether or not to trust you.
These algorithms exist primarily to optimize commerce among people without established trust. However, they are modeled on the basic sociology of real-world trust: tracking and feedback. If you know everyone in your community, you likely also know how trustworthy the people are through experience. A trust score usually attempts to do the same thing.
It is important to note that a trust score merely provides guidance to the decider. Each person or company has to make up their own mind about whether or not to trust, but such a score makes the decision easier.
In the case of a given piece of content on social media, it would be easy for the platform companies to expose a set of scores alongside the content they present. Like a credit score, this could give users an idea of whether or not to trust the content they are seeing.
Some of the factors that I believe should be scored include:
- Credibility of the Originator (number of followers, time on the platform, engagement percentage)
- Credibility of the Amplifiers (followers, time, engagement of followers)
- Partisanship (extremity of response, scored lower for very partisan)
- Velocity (speed of social spread, scored lower for very quickly)
The ideal way to present this information would be as a single “Trust Score” (out of 100), with the option for the user to drill down and see the scores of the component elements. It could also be implemented as a color or shading scheme, or shown in a simplified view alongside every post in a news feed. For example, a given post might show data like this:
Trust: 75 — O:90 A:20 P:30 V:20

Elements of the score would not necessarily be weighted equally and could have aging factors.
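As a minimal sketch, here is one way those weighted, aging components could combine into a single number. The weights and decay rate are hypothetical assumptions, chosen only so the worked example reproduces the “Trust: 75” illustration above; they are not any platform’s actual formula.

```python
import math

# Hypothetical weights: who posted it matters far more than how fast it moved.
WEIGHTS = {"O": 0.775, "A": 0.075, "P": 0.075, "V": 0.075}

def trust_score(o: float, a: float, p: float, v: float,
                post_age_days: float = 0.0) -> int:
    """Combine component scores (each 0-100) into one Trust Score out of 100."""
    raw = (WEIGHTS["O"] * o + WEIGHTS["A"] * a +
           WEIGHTS["P"] * p + WEIGHTS["V"] * v)
    # Aging factor: as the early signals go stale, pull the score back
    # toward a neutral 50 (the 30-day decay constant is also an assumption).
    decay = math.exp(-post_age_days / 30.0)
    return round(50 + (raw - 50) * decay)

print(trust_score(o=90, a=20, p=30, v=20))  # 75, matching the example above
```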
If accompanied by an education campaign, people could start making informed decisions on the content they consume. Because of the prevalence of such trust scores, I believe most consumers would be comfortable with the basic premise of analyzing a score before deciding to act. It’s not dissimilar to looking at the mutual friends list when someone friends you on Facebook or elsewhere. The social proof is a kind of quick “reference check”.
Importantly, all of this information is already in most news feed algorithms. You don’t actually need many human moderators or major data restructuring to achieve it, and it would be cheap to implement and test. Also, when used in combination with fact checking, you could provide a double-whammy for truth and give people multiple sources to draw from. In the long term, this would also help tech companies navigate the publicity surrounding fake news and their profit motive.
Gamifying the Reporting of Fake News
Many experiments have been done on gamifying micro-work for the greater good. CAPTCHA is a good example: every time you identify an image in one of those grids, you are helping train machines to “see” images. There are also more fun examples, such as the Google Image Labeler or Foldit, but the basic premise is the same: do a little work for me, and I’ll do something for you.
With some funding, we could build a game that would enlist everyone to spot, tag and filter fake news. The basic design could be an upvote/downvote system (perhaps using a browser plug-in) that would allow you to tag and vote on the accuracy of a piece of news for cash, prizes or glory. As you progress through the game, you unlock more and more complex types of interaction, including comparing two articles against each other or a Supermarket Sweep-style fact check-off.
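To illustrate, here is a minimal sketch of that progression loop. The point values, level thresholds, and interaction names are hypothetical design parameters, not a finished spec.

```python
from dataclasses import dataclass

# Hypothetical unlock schedule for the reporting game.
LEVEL_UNLOCKS = {
    1: "upvote/downvote story accuracy",
    2: "tag specific suspect claims",
    3: "compare two articles head-to-head",
    4: "timed fact check-off",
}
POINTS_PER_LEVEL = 100

@dataclass
class Player:
    name: str
    points: int = 0

    @property
    def level(self) -> int:
        return min(1 + self.points // POINTS_PER_LEVEL, max(LEVEL_UNLOCKS))

    def unlocked(self) -> list:
        return [LEVEL_UNLOCKS[lvl] for lvl in range(1, self.level + 1)]

    def vote(self, matched_consensus: bool) -> None:
        # Reward votes that later consensus or fact checks confirm;
        # this discourages reflexive partisan voting.
        self.points += 10 if matched_consensus else 1

p = Player("alice")
for _ in range(12):
    p.vote(matched_consensus=True)
print(p.level, p.unlocked())
# 2 ['upvote/downvote story accuracy', 'tag specific suspect claims']
```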
I don’t know precisely which design would be optimal, but I think we could develop something here that might move the needle.
Transparency is Key
Some argue that the spread of fake news would not be slowed by more transparency. That is, people love to espouse false narratives that fit their worldview, and have been doing so since the dawn of time. Unfortunately, confirmation bias worsens this phenomenon, enabling people to simply discard facts they don’t agree with.
However, I think the goal here is not to get people to admit that what they believe is false. Rather, it is to slow the spread of false ideas before they reach everyone. With transparency and good scorekeeping, we could also empower those with better critical thinking skills to act as additional misinformation firewalls, for example by weighting their opinions more heavily in the score.
If you gamify their participation, you could increase efficacy even more. People could fact check for cash, prizes and status, and this information could be included in the score when available.
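As a sketch of that weighting, here is one way fact-checker votes could fold into the score, with raters who have stronger track records counting for more. The reputation values are hypothetical assumptions.

```python
def weighted_verdict(votes: list) -> int:
    """votes: list of (judged_accurate: bool, rater_reputation: float in 0..1)."""
    total = sum(rep for _, rep in votes)
    if total == 0:
        return None  # no usable signal; fall back to the base trust score
    accurate = sum(rep for judged_accurate, rep in votes if judged_accurate)
    return round(100 * accurate / total)  # 0 = consensus fake, 100 = accurate

# Two raters with strong track records flag the story as false; three
# casual raters vouch for it. The seasoned raters carry the verdict.
print(weighted_verdict([(False, 0.9), (False, 0.8),
                        (True, 0.2), (True, 0.2), (True, 0.2)]))  # 26
```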
Some also argue that if you put too many roadblocks in people’s way, they will simply shift to more peer-to-peer communication methods. For example, nothing can stop people from making a huge WhatsApp group and sharing information there, free from any filters. I don’t believe this is a real consideration because such groups quickly get overwhelmed with too much content. And once they do, people tend to abandon the most cacophonous spaces.
This is precisely the reason we invented these content prioritization algorithms in the first place — a truly unstructured communication channel quickly becomes unusable. But if you control the structure of information, you can achieve both financial and behavioral goals with a single silver bullet.
That same power can be harnessed here to promote good. It can be done cheaply and easily, and it will help people stay informed. It’s mostly already built and baked into what we see. To make it work, we’ll need greater transparency.
It may not solve every aspect of the misinformation problem, but gamifying fake news can certainly help.