Nobody likes being deceived. Finding out that you misplaced trust in something, or someone, can make you feel foolish. Or perhaps like you’ve been taken advantage of.
This applies to relationships, friendships, business deals, purchases, and basically every other interaction we experience.
That’s why you’d probably have to do some due diligence before jumping right in.
You want to make sure the new man in your life isn’t some obsessive stalker. You want to know that a company isn’t going to scam you. And you’ll do some digging to establish the facts.
But what about the news we read? What about stories we hear?
Do we commit to the same level of research? Are our reactions the same when we’re duped by a hoax story, or an entirely made up report?
The rise of fake news in recent years suggests that’s simply not the case.
Surprisingly, it’s the elders of society that are the least vigilant. A recent study found that those aged 65 or over are eight times more likely to share bogus news.
And that’s even with the study controlling for both ideology and education level.
So much for the wisdom of age, then.
Still, it’s easy enough for anyone to fall for a fraudulent scoop on social media. No matter their age.
We’ve probably all seen it. A friend or family member shares something, and then there’s a comment:
“This never happened.” Or, “This is from 2012.”
The person posting the piece hadn’t taken the time to check all the details. Maybe they didn’t read it at all. Sometimes a headline alone is enough to provoke a share.
With big profits to be had and deepfake videos becoming more convincing, it certainly seems like our fact-checking skills will continue to be tested in the coming years.
What does fake news look like? How can you identify it? And what can you do about it?
Here are a few tips to help you navigate what can be a deceptive digital landscape.
P.S. I’ve intentionally included a false fact in this intro. Leave a comment at the end of the article if you think you can spot it.
What exactly does fake news mean?
Fake news is a relatively new term, but the general idea has been rooted in human activity for a looooong time.
In fact, you can go all the way back to 1274 BC and find evidence of fake news.
Ramses the Great invented a story following the Battle of Kadesh. He told his people that the result was a big victory for the Egyptian people.
In reality, the battle was more of a stalemate, with no clear winner.
He even re-inscribed monuments dedicated to the achievements of others, so that historians gave him all the credit. Pretty smart, when you think about it.
This is disinformation 101.
Ramses deliberately spread false news to achieve certain goals.
In his case, it was to appease his people and cement his status in history as a great Pharaoh.
Today, we get hoaxes trying to convince people that Donald Trump had an accident in his pants while playing golf.
That’s a doctored photo, guys. Don’t go spreading it around! 😏
But defining fake news is not always so simple.
False information can spread very easily, and it’s not always intentional. Journalists, for example, may only know part of a story and report on what they know.
It’s important to publish quickly in the digital media industry, to get ahead of competitors and be seen as the original source for the news in question. And later, further revelations can prove these original stories to be wrong, or at least not entirely correct.
News can also get published without the journalist even knowing the story is false. Sources can check out, and people can swear their testimony is true, only for disputes to arise down the line.
According to Claire Wardle, director of First Draft, the seven main types of misinformation are:
- Satire or parody – Where there’s no intention to harm, but the potential to fool.
- Misleading content – Where true information is used to mislead and frame an individual or an issue.
- Imposter content – Where genuine sources are impersonated.
- Fabricated content – Where the news in question is 100% false and has been designed to deceive and do harm.
- False connection – Where headlines, visuals, and captions do not support the content.
- False context – Where genuine content is shared with false context, or outside of its true context.
- Manipulated content – Where genuine information or imagery is manipulated or altered in order to deceive.
What does fake news look like?
Fake news comes in many forms.
Articles, pictures, videos, audio clips: basically anything can be manipulated to achieve a desired effect.
Ever listened to a song and thought the lyrics were saying one thing, but it turned out they were saying something else?
That gives you a good idea about how easily the brain can be fooled.
There are groups online that have used the internet and media outlets to exploit this weakness in our nature.
A lot of fake news has spread in recent years. Some are perfectly innocent pranks, but others have a darker side.
Here are a few examples:
Starbucks’ false Dreamer Day advertising
One example of a funny story that circulated on Twitter involved Starbucks.
Fake tweets advertised “Dreamer Day” and claimed Starbucks would offer free frappuccinos to all migrants living in America.
Starbucks quickly reacted, apologized, and said the ad was “completely false.”
Pope Francis to support Donald Trump’s presidential candidacy
This hoax was posted on a website that claimed its primary source was an appearance on the American TV station “WTOE 5 News.” Only, there is no TV station by this name.
The news that Pope Francis had endorsed Donald Trump’s presidential candidacy spread incredibly fast and created a huge splash on Facebook.
Trump’s inauguration had the largest audience ever
This fake rumor was spread by White House press secretary, Sean Spicer.
It was one of his first tasks: to convince everyone that Donald Trump was more popular, especially in comparison to his predecessor, Barack Obama.
In reality, Trump’s crowd was only one-third the size of Obama’s.
Fake news of missing children after the Manchester attack
A terrorist attack occurred in Manchester, UK, right after Ariana Grande finished her concert.
With enormous media coverage, and with many of the concertgoers being children and teenagers, social media was soon flooded with false alarms.
Viral photos claimed to show missing children and teenagers during the concert. The pictures were in fact stolen and misused: they portrayed youngsters who were on different continents at the time of the attack.
Social media says a comedian is guilty in a Florida shooting case
In early 2018, the U.S. faced another mass shooting episode, at a school in Parkland, Florida.
The event saw misinformation rear its ugly head once more, this time in the form of an innocent person being named as the alleged shooter.
Several tweets from a fake account circulated claiming that there were two shooters: Nikolas Cruz and Sam Hyde. Cruz was the actual shooter, but Hyde is a comedian who had no connection with the tragedy.
Hyde’s picture has also been distributed in connection with similar events, such as the San Bernardino shooting.
When most people hear the word deepfake, they associate it with the kind of videos that are about to get blocked in the UK. But what exactly are they?
Deepfakes are images or videos that look and sound like the real deal.
So how does it work?
They are created by setting two machine learning models against each other.
For the clip linked above, you would need plenty of Nicolas Cage and Daisy Ridley images. The first model then uses them to create convincing fake videos, where one face is replaced by another.
The second model’s job is to detect the fakes.
Repeat the process until the second model can’t detect any funny business and what you end up with is a terrifyingly realistic forgery.
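To make that adversarial loop concrete, here's a deliberately tiny sketch in Python. Real deepfake systems train deep neural networks on images; this toy shrinks everything to one dimension so the tug-of-war is visible. The "real data" is just numbers near 4.0, the generator is a single learned offset, and the discriminator is a logistic regression. All the numbers are illustrative assumptions, not anything from a real deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

b = 0.0           # generator parameter: fake = noise + b
a, c = 0.1, 0.0   # discriminator parameters: d(x) = sigmoid(a*x + c)
batch = 64

for step in range(1500):
    lr = 0.05 / (1.0 + 0.01 * step)           # decaying step size keeps things stable
    real = rng.normal(4.0, 1.0, batch)        # samples of the "real" data
    fake = rng.normal(0.0, 1.0, batch) + b    # the generator's current fakes

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # Generator step: nudge b so the discriminator mistakes fakes for real.
    df = sigmoid(a * fake + c)
    b -= lr * np.mean((df - 1) * a)

# b has drifted from 0 toward the real data's mean, to the point where
# the discriminator can no longer reliably tell real from fake.
print(b)
```

The key takeaway is the loop structure: each side's improvement becomes the other side's training signal, which is exactly why the finished fakes end up so hard to detect.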
The truth, however, is that deepfakes can be a lot more dangerous than this example suggests.
Are deepfakes the future of fake news?
It doesn’t take too much imagination to see a world leader, or an election candidate, becoming the victim of a deepfake video.
A well-timed and believable release could do untold damage to the person’s reputation, and potentially swing voting decisions in a major way.
And that’s just one situation where deepfakes could infiltrate the news cycle.
We’ve already seen that the U.S. government is willing to doctor footage in order to control a certain narrative. Pretty low-tech, but still convincing enough to make a CNN reporter’s actions seem more aggressive than they really were.
Citizens, consumers, shareholders, celebrities, politicians, and world leaders. The potential damage that disinformation could do to anyone is terrifying.
It may be that fake news in the written form is just the tip of the iceberg.
How fake news goes viral
Paul Horner made his living off viral news hoaxes for several years. But he’s most famous for distributing fake news during the 2016 American election.
One of his most famous viral stories involved the claim that protestors were paid to demonstrate against Donald Trump’s rally in Fountain Hills.
It was convincing enough to get a retweet from Trump’s actual campaign manager, Corey Lewandowski.
According to Horner, the rise of fake news and its continued success is ultimately our own fault:
Honestly, people are definitely dumber. They just keep passing stuff around. Nobody fact-checks anything anymore — I mean, that’s how Trump got elected. He just said whatever he wanted, and people believed everything, and when the things he said turned out not to be true, people didn’t care because they’d already accepted it. It’s real scary. I’ve never seen anything like it.
Donald Trump even alluded to this himself during an interview with People Magazine in 1998:
If I were to run [for President], I’d run as a Republican. They’re the dumbest group of voters in the country. They believe anything on Fox News. I could lie and they’d still eat it up. I bet my numbers would be terrific.
Turns out the famous meme, which went viral in late 2015, is just another example of fake news.
But is Paul Horner right in saying that we’re getting dumber, or are there other factors at play here?
The psychological effect of fake news
As you’ll see in the ‘Facebook Has Altered Our Online Habits, and It’s Not Healthy’ section, social media plays on the chemical structure of the brain.
You get rewarded with a dopamine boost for successful social interactions. On top of that, you get the feeling of establishing or increasing your standing within a group.
The brain is hard-wired to do this because it’s trying to motivate productive behaviors from you.
Add in the fact that most people tend to be friends with people who agree with them, and this creates a self-serving filter bubble effect.
You get how it works.
John shares an article about something, and it gets a good number of likes. He feels good because his post is popular, and his friendship group valued his thoughts.
This motivates John to publish similar posts in future, where he expresses similar views.
Dr Jens Binder, senior lecturer in Psychology at Nottingham Trent University, suggests this is why most people unwittingly share fake news on social media.
It’s not because we’re looking to influence the opinions of others, but rather for our own emotional reasons.
“We consume news not just because of the facts in there, but also to make sense of the world, to confirm our notion of how things are working ‘out there’,” he said, going on to allude to how our personal biases are validated by such confirmations. “The more people share my sense of understanding, the more I am convinced that I got it right.”
How do emotions apply to our reasoning?
Have you ever experienced a heated argument, either in person or online?
If you answered no, that’s fake news! 😉
For the honest folk among us, you know first-hand exactly how emotions can mess with your ability to think clearly and objectively.
Brian E. Weeks, a communications researcher at the University of Michigan, conducted a study in 2015 analyzing how emotions influence the way people tackle misinformation.
People who took part in the study were asked to write something about immigration reform or the death penalty. Each participant identified their political beliefs, then wrote about an issue that made them feel angry or anxious.
There were some pretty interesting results.
Those who felt angry about an issue while writing were more likely to refer to their political beliefs when looking at misinformation.
Meanwhile, those who felt anxious were more open-minded to views outside of their existing beliefs.
For both sets of people, being able to fact-check the fake news they were presented with diminished the chance of them believing the false info.
As these numbers from the Poynter Institute show, though, fake news often spreads far wider than the fact-checked versions.
How social media became a breeding ground for fake news
Plenty of seeds. Fertile soil. And regularly watered with ad spend.
It may sound like a trite analogy, but it paints the picture well enough.
Unfortunately, one of the key ingredients to why fake news is so effective on social media is us.
You and me. Mom and dad. Family and friends.
We’re the ones reacting, liking and sharing. And that’s why it’s such an effective tactic. But that doesn’t mean it should be there in the first place.
Facebook and Twitter (along with Google and Mozilla) have recently made pledges to do more in the fight against disinformation.
But they have A LOT to contend with.
Here’s just a taste:
Bots and fake accounts
It’s easy to set up a social media account.
All you need to do is enter an email address, set a password, and fill in some profile information. And there you go.
Well, all of that applies to fake ones too. It doesn’t take much effort.
So, if the Kremlin wants to create a bunch of profiles pretending to represent legitimate American news sites, it can do that.
With slightly more effort, you can even automate the account and leave it to run as a bot. An estimated 5% to 10% of accounts on the main social media platforms are set up in this way.
These are essentially lines of code that aim to replicate human behavior online. They can be set up in just 5 minutes to automatically send messages, share posts on certain topics or hashtags, operate during certain time windows, and much more.
They’re not overly sophisticated right now, but they can get a lot of eyes on fake news simply because of the volume of posts they can produce.
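To show how little code "replicating human behavior" can take, here's a self-contained Python sketch of the basic loop such a bot runs: scan for target topics or hashtags, then amplify the matches. Every name in it (`FeedItem`, `scan_feed`, `reshare`) is invented for illustration; a real bot would call an actual platform API instead of reading an in-memory list.

```python
import random

class FeedItem:
    """A post in a social feed (invented for this sketch)."""
    def __init__(self, author: str, text: str):
        self.author, self.text = author, text

def scan_feed(feed, keywords):
    """Return the posts mentioning any of the bot's target topics or hashtags."""
    return [item for item in feed
            if any(k.lower() in item.text.lower() for k in keywords)]

def reshare(item, canned_comments, rng):
    """Amplify a post, varying the attached comment to look more human."""
    return f"RT @{item.author}: {item.text} ({rng.choice(canned_comments)})"

rng = random.Random(42)  # seeded, so the sketch is reproducible
feed = [
    FeedItem("newsdesk", "Markets rally after strong jobs report"),
    FeedItem("hoax_central", "#DreamerDay Free frappuccinos for everyone, share now!"),
]

targets = scan_feed(feed, ["#dreamerday", "free frappuccino"])
posts = [reshare(t, ["Wow!", "Everyone needs to see this", "Unbelievable."], rng)
         for t in targets]
print(len(posts))  # prints 1: only the hashtagged hoax matched
```

Wrap that loop in a scheduler that only fires during, say, U.S. daytime hours, point it at hundreds of accounts, and you have the volume problem the platforms are fighting.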
Facebook and Twitter are closing down and banning this kind of thing any time it’s detected. But with billions of users globally, they are largely reliant on real users to flag suspicious behavior.
Social media platforms are set up to push the most viewed content. And controversial topics get way more interactions.
Whether that’s likes, shares from people who agree with the message, shares from people angered by the message, or comments from people on both sides who then end up arguing amongst themselves.
(We’ve all seen it happen.)
With the algorithms set up this way to push popular content, fake news can easily rise to the top of a lot of feeds. After all, the whole point is to trigger as many reactions and shares as possible.
Almost everything you do online is being tracked in some way.
I mean, it’s not just hackers after your information.
Advertisers and government agencies also find it useful to know how we operate online. What websites we like to visit, what links we clicked, how long we lingered, and much more.
Think about it.
How many times have you noticed the Facebook ‘like’ button embedded into a page? You’re not on Facebook’s platform. You may not even have an account. But your visit to that site has been reported back to Zuck & Co.
With all the data that is constantly being collected on you, it’s not too difficult to picture a scenario where a fake news publisher writes a piece about Kanye West and is able to specifically target ads at people with an interest in celeb culture and the Kardashians.
I definitely wouldn’t get that ad served to me. Nope.
Thankfully, there are some steps you can take to avoid the intrusion.
The impact of search engines
Searching for something on Google always brings you the best results. Right?
Turns out search engines can also be at fault when it comes to the escalation of false information.
For example, when everyone was searching for the final U.S. election results in 2016, the website boasting top rank on Google was a fake blog.
The WordPress site “70news” claimed Trump had won both the popular vote and the Electoral College.
When pressed on his sources, the creator of the blog explained his information mainly came from Twitter posts.
A totally infallible methodology there. Top marks.
In February 2019, Google released a 30-page white paper on disinformation, setting out how the company plans to tackle fake news going forward.
An interesting note is that Google Search does not remove content from its search results, unless it’s due to a legal reason, a violation of Google’s webmaster guidelines, or a request from the publishing site.
Google News, on the other hand, has more restrictive content rules:
[It] explicitly prohibits content that incites, promotes, or glorifies violence, harassment, or dangerous activities. Similarly, Google News does not allow sites or accounts that impersonate any person or organization, that misrepresent or conceal their ownership or primary purpose, or that engage in coordinated activity to mislead users.
So, for an added level of protection against fake stories, switch from the standard search results to the News tab. That’s at least one step towards ensuring you’re getting legitimate news stories from your Google searches.
5 ways to spot fake news
1. What is the news source?
The first thing you should check is who published the news.
Generally speaking, you can trust media outlets that have a strong reputation for reporting on actual events.
Publications such as the BBC, NPR, The Wall Street Journal, The Guardian, The New York Times and The Washington Post all land in this category.
Each of these news media companies may have their own partisan bias, but what they cover will almost certainly have happened.
Such famous outlets have to consider their own credibility. And as such, they tend to do a good job of verifying their sources.
Dedicated fake news sites, on the other hand, completely fabricate stories for the sake of traffic.
2. Who is the author of the piece?
If you come across a story that has no name attributed to it, the first thing you should ask yourself is “why?”
Sometimes, you’ll encounter articles online that are not signed by an author. This could be the first sign that something fishy is going on.
Real and trustworthy journalists know that putting their name to a piece adds weight. Particularly if they have an established reputation and extensive experience.
Up-and-coming reporters also know that it’s important to build an identifiable portfolio of their work. And they would definitely want to be credited for any big scoops, as this helps to build their respectability in the industry.
Many news sites will link the author’s name to a page that displays all the articles they’ve written. Perhaps all of their stories are to do with the U.S. economy, or maybe they focus on politics in the Middle East.
This can give you an extra feel for what the author’s expertise is, and whether they are a dependable voice to listen to on the story in question.
Fake news stories do come signed with names. Some real, some fabricated. Some even have a full backstory that takes quite a bit of digging to prove as false.
But as a guiding principle, this is something you should definitely consider when judging the trustworthiness of a writer, blog, or site.
3. When was it published?
Some news is not necessarily fake, but it’s older than your entire wardrobe.
Distortion of the truth is the intention here. The writer will take a true story, add some new details, and link something that happened a long time ago to present events.
Usually, the feature will have a strongly worded or controversial headline. This helps the piece spread extremely quickly, with many people not even bothering to read the details of the story or check when the events occurred.
4. Pay attention to fake domains and URLs
Sometimes, you can easily be tricked into believing you are reading a valuable, trustworthy source.
With some fake sites, you’d probably be able to tell just from the name that they’re not exactly trustworthy.
But for others, it can be less obvious.
Attention to detail is the key here.
This is because some phony websites are replicas of an original. They can often be recreated so well that you won’t instantly notice the difference.
One example is https://abcnews.go.com/ as opposed to https://abcnews.com.co/.
Just by looking, can you tell which one is fake?
The first is a valid news site.
However, the second was a fake replica owned by Paul Horner. While live, it aimed to mimic the URL, site design, and logo of ABC News.
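To see why that "abcnews.com.co" trick works, here's a minimal Python sketch of a domain check. The `TRUSTED_DOMAINS` allowlist is a tiny made-up example, not an official registry; the point it illustrates is that matching must respect dot boundaries, or lookalike hosts slip through.

```python
from urllib.parse import urlparse

# A tiny, illustrative allowlist. Real fact-checking tools maintain far
# larger databases of both reputable and known-fake domains.
TRUSTED_DOMAINS = {"abcnews.go.com", "bbc.co.uk", "nytimes.com"}

def looks_trusted(url: str) -> bool:
    """True only if the host is a trusted domain or one of its subdomains.

    A naive substring check is not enough: "abcnews.com.co" contains
    "abcnews.com", and "fakebbc.co.uk.evil.com" ends in a string that
    looks reassuring. Matching on dot boundaries avoids both traps.
    """
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_trusted("https://abcnews.go.com/politics"))   # True
print(looks_trusted("https://abcnews.com.co/some-story")) # False
```

You can do the same check by eye: read the address from the right-hand end, because everything after the registered domain is just window dressing.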
5. Do a little research work on article photos
Articles almost always have a picture attached. Usually one that reflects the highlight of the story.
But creating a fake picture is often easier than creating a fake article.
We’ve all heard of Photoshop.
The image editing software is great for graphic designers and artists to produce some amazing visuals.
Unfortunately, it can also be used deceptively. Just check out these examples of misleading pictures that went viral.
The good news is that it’s super easy to investigate the validity of any image you find online. Simply right click on the image and tell Google to search for it.
There are some more advanced techniques, too, when it comes to checking images that have been altered in multiple ways.
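If you'd rather script the basic lookup than right-click, here's a small sketch that builds a reverse image search link you can open in a browser. The `searchbyimage` endpoint below is Google's long-standing search-by-image URL; newer browsers route this flow through Google Lens, so treat the exact endpoint as an assumption that may change rather than a stable API.

```python
from urllib.parse import quote

def reverse_search_url(image_url: str) -> str:
    """Build a link that asks Google to reverse-search the given image."""
    return ("https://www.google.com/searchbyimage?image_url="
            + quote(image_url, safe=""))

print(reverse_search_url("https://example.com/viral-photo.jpg"))
```

Percent-encoding the image URL (the `quote` call) matters: an unescaped `?` or `&` inside it would be swallowed as part of the search query string.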
How to report and flag fake news
OK, so you’ve done all your detective work and you’re confident in your findings.
What can you do about the fake news once you identify it?
To start with, you can report any article on most of the big websites and platforms.
To report a post on Facebook:
- Click on the “•••” button on the top right of the post.
- Select “Hide post.” The post will vanish from your screen and be replaced with a brief message. This is where you can choose to “Report post.”
- Finally, hit “Mark this post as false news.”
To report a single tweet:
- Find the tweet you wish to report.
- Click on the “V” icon in the top right corner of the tweet.
- Select Report from the drop-down menu.
- Choose an option that best describes the issue you have with the tweet.
- Then, provide more info about the tweet and why you think it needs to be removed.
To report an account:
- Go to the account you wish to report.
- Click on the gear icon (web and iOS) or three dots icon (Android).
- Select Report from the drop-down menu.
- Choose an option that best describes the issue you have with the account.
- Provide more information on the reason for reporting the account.
To report an Instagram post:
- Head over to the Instagram post you wish to report.
- Click on the three dots icon at the top-right of the post and click Report.
To report a site or article to Google:
- Take a screenshot of the site or article and save it.
- On the Google search results page, scroll down to the bottom of the screen and click Send Feedback.
- A dialogue box appears where you can upload the screenshot and describe why you believe a site/article needs to be removed.
- Click Send, and then you just have to wait for the Google team to do the rest.
Organizations worldwide make efforts to fight fake news
Journalists, media outlets and several organizations have realized that they need to do something to combat fake news and misinformation.
Like that scene in Gotham when Bruce Wayne realizes that something has to be done about Penguin and the criminal underworld, but less entertaining.
Still, important though.
Journalism Trust Initiative
One endeavor is the Journalism Trust Initiative (JTI), a partnership launched in April 2018.
This partnership is formed of Reporters without Borders (RSF), Agence France Presse (AFP), the European Broadcasting Union (EBU), and the Global Editors Network (GEN).
With a clear mission to fight misinformation, the plan is to introduce transparency standards and ethical journalistic methods. On top of that, the JTI is proposing that quality journalism be given preferential treatment by the algorithms of search engines and social media sites.
European approaches on fighting fake news
In Spain, the problem of fake news has become a key political issue affecting both the recent national and Catalan elections. A dedicated unit was launched in March 2019 to combat the challenge.
April 2018 saw the socialist party (PSOE) announce plans to expand a bill related to private data protection. The aim is to stop the spread of fake news on social media platforms.
Since then, Spain has also agreed to create a joint cybersecurity group with Russia, which will aim to prevent misinformation from affecting relations between the two countries.
In France, President Macron has overseen the implementation of a new fake news law. Authorities now have the power to remove fake content that spreads on social media, and to block the websites that publish it.
On top of that, the new rules seek to shine a light on anonymous political messages by making it clear who is paying for advertisements.
And in a pretty funny turn of events, Twitter ended up blocking the French government itself because it fell foul of its own law.
Despite these steps, CNN recently reported on a case of fake news triggering a wave of violence against the Roma ethnic community in the country.
The conservative government in the UK wants to introduce new laws to make the internet a safer place.
A white paper all about “online harms” has been published by the Department for Digital, Culture, Media and Sport (DCMS).
It proposes some tough rules that will require digital companies to take responsibility for the safety of their users. And that includes the content found on their platforms.
In a report specifically dedicated to disinformation and fake news, the DCMS calls for:
- A compulsory code of ethics for tech companies, overseen by an independent regulator.
- Powers for the regulator to launch legal action against companies caught violating that code.
- Government reforms on electoral communication laws and rules on foreign involvement in UK elections.
- Obligations for social media companies to take down known sources of harmful content, including sources of fake news and misinformation.
The BBC has also set a list of priorities. Among them is a suggestion that UK public school staff and students are trained on how to pinpoint and avoid fake news and information.
Would you support an anti-fake news law?
According to a Eurobarometer survey, 83% of Europe’s population believe fake news is a real threat to democracy.
Another conclusion is that fake news is likely to have the most influence on elections and political opinions.
These results reveal the biggest concern that most people have. We don’t want our elections to be swayed by poor quality media sources.
In response, the European Commission introduced a voluntary Code of Practice on Disinformation. In short, the plan is to:
- Highlight fake news sites to advertisers and prevent such sites from making money on disinformation.
- Make it known when someone is being targeted by a political ad or issue-based ad.
- Have a clear policy on identity and online bots and take measures to close fake accounts.
- Offer tools to help people make informed decisions, promote diverse perspectives on important subjects, and make reliable sources more visible.
- Provide privacy-compliant access to data for researchers to track and better understand the spread and impact of disinformation.
Google, Facebook, and Twitter have signed up to the code on a voluntary basis. Each platform has also reported on a number of steps taken to tackle fake news.
Of course, any regulation on fake news is going to conflict with some ideas on freedom of expression.
And it’s not exactly straightforward to get the balance right.
It also opens the door for potential abuses by future governments.
A legal block on fake news sites could lead to a situation where any source that is critical of a certain government will get labelled as misinformation.
Not too farfetched an idea. Especially when you consider the likes of China, Russia, and Turkey are already limiting press freedom in this way.
Will we ever say goodbye to fake news?
Take a look at this video that was created using a combination of old and new technology: Adobe After Effects and the AI face-swapping tool FakeApp.
Now, this is my prediction: the future of fake news will look so real that not even an expert eye will be able to tell what’s real from what’s not.
Do you agree? Comment below.
Tom Rosenstiel, author and director of the American Press Institute, seems to be on my side when he says:
Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to.
Meanwhile, Irene Wu, culture and technology professor at Georgetown University, is more hopeful:
Information will improve because people will learn better how to deal with masses of digital information. Right now, many people naively believe what they read on social media. When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them.
Will we ever get rid of fake news?
It’s possible, for sure, that the wave of misinformation will vanish one day. And I definitely agree that removing the financial incentive is a big step to improving things.
However, there will always be an ideological incentive. And there will always be a political incentive. Those are things you just can’t stamp out.
Whichever way things go, being able to identify fake news is already an important skill to protect yourself from malicious actors. And it may become essential soon enough.
Original post by Dana Vioreanu in 2018. Updated by Tom Bradbury.