Tuesday 20 December 2016

On the problem of fake news....


Digital Trends, 6 December 2016:

It’s been half a decade since the co-founder of Avaaz, Eli Pariser, first coined the phrase “filter bubble,” but his prophetic TED Talk — and his concerns and warnings — are even more applicable now than they were then. In an era of fake news, curated content, personalized experiences, and deep ideological divisions, it’s time we all take responsibility for bursting our own filter bubbles.

When I search for something on Google, the results I see are quite different from yours, based on our individual search histories and whatever other data Google has collected over the years. We see this all the time on our Facebook timelines, as the social network uses its vats of data to offer us what it thinks we want to see and hear. This is your bubble…..

Filter bubbles may not seem too threatening a prospect, but they can lead to two distinct but connected issues. The first is that when you only see things you agree with, confirmation bias snowballs steadily over time.

A wider problem is that when people draw on such different sources of information, a real disconnect can develop, as they become unable to understand how anyone could think differently from themselves.

A look at any of the left- or right-leaning mainstream TV stations during the buildup to the recent election would have left you in no doubt over which candidate they backed. The same can be said of newspapers and other media; many of them published outright endorsements.

But we’re all aware of that bias. It’s easy to simply switch off or switch over to another station, to see the other side of the coin.

Online, the bias is more covert. Google searches, social network feeds, and even some news publications all curate what they show you. Worse, it all happens behind the scenes. They don’t overtly take a stance; they invisibly paint the digital landscape with things that are likely to align with your point of view…..
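
To make that curation mechanism concrete, here is a minimal, hypothetical sketch in Python of engagement-based personalization. The stories, leanings, and click counts are invented purely for illustration; this is not any platform's actual ranking code, just a toy showing how ranking solely on past clicks keeps feeding a reader more of what they already agree with.

    # Toy sketch of engagement-based personalization (illustrative only;
    # all data and scoring here are invented, not any real platform's logic).
    from dataclasses import dataclass

    @dataclass
    class Story:
        headline: str
        leaning: str  # "left", "right", or "neutral"

    def affinity(user_clicks: dict, story: Story) -> float:
        """Score a story by how often this reader has clicked content with the same leaning."""
        total = sum(user_clicks.values()) or 1
        return user_clicks.get(story.leaning, 0) / total

    def personalized_feed(user_clicks: dict, stories: list, k: int = 3) -> list:
        """Rank purely on past-click affinity: the reader sees more of what they
        already agree with, and the bubble tightens with every click."""
        return sorted(stories, key=lambda s: affinity(user_clicks, s), reverse=True)[:k]

    if __name__ == "__main__":
        stories = [
            Story("Candidate A triumphs in final debate", "left"),
            Story("Candidate B exposes rival's record", "right"),
            Story("Fact-checking both campaigns' claims", "neutral"),
        ]
        # A reader whose click history skews one way is shown mostly more of the same.
        reader_clicks = {"left": 40, "neutral": 8, "right": 2}
        for story in personalized_feed(reader_clicks, stories):
            print(story.headline)

Nothing in that loop is malicious; each pick simply optimizes for what the reader is likeliest to click, which is exactly how the bubble forms.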

This becomes even more of a problem when you factor in faux news. This latest election was one of the most contentious in history, with low-approval candidates on both sides and salacious headlines thrown out by every source imaginable. With so much mud being slung, it was hard to keep track of what was going on, and that was doubly so online, where fake news was abundant.

This is something that Facebook CEO Mark Zuckerberg has tried to play down, claiming that it accounted for only 1 percent of overall Facebook news. Considering Facebook has nearly 2 billion users, though, that’s potentially a lot of faux stories parroted as the truth. It has proved enough of an issue that studies suggest many people have difficulty telling fake news from real news, and in the weeks since the election, both Google and Facebook have pledged to deal with the problem.

Consider also that 61 percent of millennials use Facebook as their main source of news, and you can see how this issue is set to worsen if it isn’t stoppered soon…..

While Zuckerberg may not think fake news and memes made a difference to the election, Facebook employee and Oculus VR founder Palmer Luckey certainly did. He was outed earlier this year for investing more than $100,000 in a company that helped promote Donald Trump online through the proliferation of memes and inflammatory attack advertisements. He wouldn’t have put in the effort if he thought it worthless.

BuzzFeed’s analysis of the most popular shared stories on Facebook shows that while fake news underperformed compared to its real counterparts in early 2016, by the time Election Day rolled around at the start of November, it had built up a 1.5 million engagement lead over true stories.

That same analysis piece highlighted some of the biggest fake election stories, and all of them contained classic click-baiting tactics. They used scandalous wording, capitalization, and sensationalist claims to draw in the clickers, sharers, and commenters.

That’s because these sorts of words help to draw an emotional reaction from us. Marketing firm Co-Schedule discovered this back in 2014, but it’s likely something that many people would agree with even without the hard numbers. We’ve all been tempted by clickbait headlines before, and they’re usually ones that appeal to fear, anger, arousal, or some other part of us that isn’t related to critical thinking and political analysis. Everyone’s slinging mud from within their own filter bubbles, secure in the knowledge that they are right, and that everyone who thinks differently is an idiot.

And therein lies the difficulty. The only way to really understand why someone may hold a different viewpoint is through empathy. But how can you empathize when you don’t have control over how the world appears to you, and your filter serves as a buffer to stories that might help you connect with the other side?

Reaching out to us from the past, Pariser has some thoughts for those of us now living through the future he warned about. Even if Facebook is stripping all humanity from its news curation, there are still human minds and fingertips behind the algorithms that feed us content. He called on those programmers to instill a sense of journalistic integrity in the AI behind the scenes.

“We need the gatekeepers [of information] to encode [journalistic] responsibility into the code that they’re writing. […] We need to make sure that these algorithms have encoded in them a sense of the public life, a sense of civic responsibility. They need to be transparent enough that we can see what the rules are and […] we need [to be] given some control.”
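
Pariser’s call can also be sketched in code. The toy re-ranker below, again with invented names and weights, blends personal affinity with a visible, reader-adjustable diversity term, one crude way a feed might encode “a sense of the public life” while keeping its rules transparent. It is a sketch of the idea, not anything Facebook or Google actually runs.

    # Hypothetical sketch: rank with an explicit, transparent diversity term.
    # All names, scores, and weights are invented for illustration.
    from collections import Counter
    from typing import NamedTuple

    class Story(NamedTuple):
        headline: str
        leaning: str      # "left", "right", or "neutral"
        affinity: float   # how closely the story matches the reader's history, 0 to 1

    def civic_rank(stories, diversity_weight=0.5):
        """Build the feed one pick at a time, trading personal affinity against how
        under-represented each leaning is so far; diversity_weight is the kind of
        visible, adjustable control Pariser asks for."""
        feed, counts, remaining = [], Counter(), list(stories)
        while remaining:
            best = max(
                remaining,
                key=lambda s: (1 - diversity_weight) * s.affinity
                + diversity_weight / (1 + counts[s.leaning]),
            )
            feed.append(best)
            counts[best.leaning] += 1
            remaining.remove(best)
        return feed

    if __name__ == "__main__":
        stories = [
            Story("Candidate A triumphs in final debate", "left", 0.9),
            Story("Another big week for Candidate A", "left", 0.8),
            Story("Candidate B exposes rival's record", "right", 0.2),
            Story("Fact-checking both campaigns' claims", "neutral", 0.5),
        ]
        for story in civic_rank(stories, diversity_weight=0.6):
            print(f"{story.leaning:>7}: {story.headline}")

With the diversity term turned up, the feed interleaves leanings instead of stacking the reader’s favorites at the top; turn it down and it collapses back toward the bubble.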

That sort of suggestion seems particularly pertinent, since it was only at the end of August that Facebook laid off its entire editorial team, relying instead on automated algorithms to curate content. Those algorithms didn’t do a great job, though: within weeks they were found to have let a bevy of faux content through the screening process.

While it may seem a tall order to expect megacorporations to push for such an open platform, so much of a stink has been raised about fake news in the wake of the election that Facebook and Google, at least, do seem likely to do something to target that problematic aspect of social networking. They could do more, though, and it could start with helping to raise awareness of the differences in the content we’re each shown…..

