Showing posts with label Social media. Show all posts

Wednesday 13 September 2017

Study finds Trump, right-wing extremism and fake news won the media battle during the 2016 US presidential election campaign


In which Facebook Inc is identified as a major commercial player in the media landscape and a significant purveyor of fake news, one that also gives page space to highly partisan and clickbait news sites.

Excerpts from Harvard University, Berkman Klein Center for Internet & Society, Rob Faris et al., Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election, 16 August 2017:

Both winners and losers of the 2016 presidential election describe it as a political earthquake. Donald Trump was the most explicitly populist candidate in modern history. He ran an overtly anti-elite and anti-media campaign and embraced positions on trade, immigration, and international alliances, among many other topics, that were outside elite consensus. Trump expressed these positions in starkly aggressive terms. His detractors perceived Trump’s views and the manner in which he communicated them as alarming, and his supporters perceived them as refreshing and candid. He was outraised and outspent by his opponents in both the primary and the general election, and yet he prevailed—contrary to the conventional wisdom of the past several elections that winning, or at least staying close, in the money race is a precondition to winning both the nomination and the election.

In this report we explore the dynamics of the election by analyzing over two million stories related to the election, published online by approximately 70,000 media sources between May 1, 2015, and Election Day in 2016. We measure how often sources were linked to by other online sources and how often they were shared on Facebook or Twitter. Through these sharing patterns and analysis of the content of the stories, we identify both what was highly salient according to these different measures and the relationships among different media, stories, and Twitter users.
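The salience measures the authors describe (inlinks from other media, plus Facebook and Twitter shares) amount to simple per-source tallies over a story dataset. A minimal sketch in Python, with invented records and field names, not the study's actual pipeline:

```python
# Hypothetical story records: each story has a publishing source, a list of
# other sources that linked to it, and share counts on the two platforms.
from collections import Counter

stories = [
    {"source": "nytimes.com", "inlinks_from": ["vox.com", "msnbc.com"],
     "fb_shares": 1200, "tw_shares": 800},
    {"source": "breitbart.com", "inlinks_from": ["dailycaller.com"],
     "fb_shares": 5000, "tw_shares": 2500},
    {"source": "vox.com", "inlinks_from": ["nytimes.com"],
     "fb_shares": 900, "tw_shares": 400},
]

inlinks = Counter()   # media-to-media attention
shares = Counter()    # social media attention
for story in stories:
    inlinks[story["source"]] += len(story["inlinks_from"])
    shares[story["source"]] += story["fb_shares"] + story["tw_shares"]

# Ranking by each measure shows which sources are salient on which channel;
# the report's key finding is that these rankings diverge on the right.
by_inlinks = [s for s, _ in inlinks.most_common()]
by_shares = [s for s, _ in shares.most_common()]
```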

Our clearest and most significant observation is that the American political system has seen not a symmetrical polarization of the two sides of the political map, but rather the emergence of a discrete and relatively insular right-wing media ecosystem whose shape and communications practices differ sharply from the rest of the media ecosystem, ranging from the center-right to the left. Right-wing media were centered on Breitbart and Fox News, and they presented partisan-disciplined messaging, which was not the case for the traditional professional media that were the center of attention across the rest of the media sphere. The right-wing media ecosystem partly insulated its readers from nonconforming news reported elsewhere and moderated the effects of bad news for Donald Trump’s candidacy. While we observe highly partisan and clickbait news sites on both sides of the partisan divide, especially on Facebook, on the right these sites received amplification and legitimation through an attention backbone that tied the most extreme conspiracy sites, like Truthfeed and Infowars, through the likes of Gateway Pundit and Conservative Treehouse, to bridging sites like Daily Caller and Breitbart that legitimated and normalized the paranoid style that came to typify the right-wing ecosystem in the 2016 election. This attention backbone relied heavily on social media.

For the past 20 years there has been substantial literature decrying the polarization of American politics. The core claim has been that the right and the left are drawing farther apart, becoming more insular, and adopting more extreme versions of their own arguments. It is well established that political elites have become polarized over the past several decades, while other research has shown that the electorate has also grown apart. Other versions of the argument have focused on the internet specifically, arguing that echo chambers or filter bubbles have caused people of like political views to read only one another and to reinforce each other’s views, leading to the adoption of more extreme views. These various arguments have focused on general features of either the communications system or political psychology—homophily, confirmation bias, in-group/out-group dynamics, and so forth. Many commentators and scholars predicted and measured roughly symmetric polarization on the two sides of the political divide.

Our observations of the 2016 election are inconsistent with a symmetric polarization hypothesis. Instead, we see a distinctly asymmetric pattern with an inflection point in the center-right—the least populated and least influential portion of the media spectrum. In effect, we have seen a radicalization of the right wing of American politics: a hollowing out of the center-right and its displacement by a new, more extreme form of right-wing politics. During this election cycle, media sources that attracted attention on the center-right, center, center-left, and left followed a more or less normal distribution of attention from the center-right to the left, when attention is measured by either links or tweets, and a somewhat more left-tilted distribution when measured by Facebook shares. By contrast, the distribution of attention on the right was skewed to the far right. The number of media outlets that appeared in the center-right was relatively small; their influence was generally low, whether measured by inlinks or social media shares; and they tended to link out to the traditional media—such as the New York Times and the Washington Post—to the same extent as did outlets in the center, center-left, and left, and significantly more than did outlets on the right. The number of farther-right media outlets is very large, and the preponderance of attention to these sources, which include Fox News and Breitbart, came from media outlets and readers within the right. This asymmetry between the left and the right appears in the link ecosystem, and is even more pronounced when measured by social media sharing…..

Our data suggest that the “fake news” framing of what happened in the 2016 campaign, which received much post-election attention, is a distraction. Moreover, it appears to reinforce and buy into a major theme of the Trump campaign: that news cannot be trusted. The wave of attention to fake news is grounded in a real phenomenon, but at least in the 2016 election it seems to have played a relatively small role in the overall scheme of things. We do indeed find stories in our data set that come from sites, like Ending the Fed, intended as political clickbait to make a profit from Facebook, often with no real interest in the political outcome…..

Our observations suggest that fixing the American public sphere may be much harder than we would like. One feature of the more widely circulated explanations of our “post-truth” moment—fake news sites seeking Facebook advertising, Russia engaging in a propaganda war, or information overload leading confused voters to fail to distinguish facts from false or misleading reporting—is that these are clearly inconsistent with democratic values, and the need for interventions to respond to them is more or less indisputable. If profit-driven fake news is the problem, solutions like urging Facebook or Google to use technical mechanisms to identify fake news sites and silence them by denying them advertising revenue or downgrading the visibility of their sites seem, on their face, not to conflict with any democratic values. Similarly, if a foreign power is seeking to influence our democratic process by propagandistic means, then having the intelligence community determine how this is being done and stop it is normatively unproblematic. If readers are simply confused, then developing tools that will feed them fact-checking metrics while they select and read stories might help. These approaches may contribute to solving the disorientation in the public sphere, but our observations suggest that they will be working on the margins of the core challenge……  

In this study, we analyze both mainstream and social media coverage of the 2016 United States presidential election. We document that the majority of mainstream media coverage was negative for both candidates, but largely followed Donald Trump’s agenda: when reporting on Hillary Clinton, coverage primarily focused on the various scandals related to the Clinton Foundation and emails. When focused on Trump, major substantive issues, primarily immigration, were prominent. Indeed, immigration emerged as a central issue in the campaign and served as a defining issue for the Trump campaign.

We find that the structure and composition of media on the right and left are quite different. The leading media on the right and left are rooted in different traditions and journalistic practices. On the conservative side, more attention was paid to pro-Trump, highly partisan media outlets. On the liberal side, by contrast, the center of gravity was made up largely of long-standing media organizations steeped in the traditions and practices of objective journalism.

Our data supports lines of research on polarization in American politics that focus on the asymmetric patterns between the left and the right, rather than studies that see polarization as a general historical phenomenon, driven by technology or other mechanisms that apply across the partisan divide.

The analysis includes the evaluation and mapping of the media landscape from several perspectives and is based on large-scale data collection of media stories published on the web and shared on Twitter……

Immigration emerged as the leading substantive issue of the campaign. Initially, the Trump campaign used a hard-line anti-immigration stance to distinguish Trump from the field of GOP contenders. Later, immigration was a wedge issue between the left and the right. Pro-Trump media sources supported this with sensationalistic, race-centric coverage of immigration focused on crime, terrorism, fear of Muslims, and disease.

While coverage of his candidacy was largely critical, Trump dominated media coverage…..

Conservative media disrupted.
Breitbart emerges as the nexus of conservative media. The Wall Street Journal is treated by social media users as centrist and less influential. The rising prominence of Breitbart along with relatively new outlets such as the Daily Caller marks a significant reshaping of the conservative media landscape over the past several years…..  

Donald Trump succeeded in shaping the election agenda. Coverage of Trump overwhelmingly outperformed coverage of Clinton. Clinton’s coverage was focused on scandals, while Trump’s coverage focused on his core issues.
Figure 1: Number of sentences by topic and candidate from May 1, 2015, to November 7, 2016

On the partisan left and right, the popularity of media sources varies significantly across the different platforms. On the left, the Huffington Post, MSNBC, and Vox are prominent on all platforms. On the right, Breitbart, Fox News, the Daily Caller, and the New York Post are popular across platforms.

Table 1: Most popular media on the right from May 1, 2015, to November 7, 2016

Table 2: Most popular media on the left from May 1, 2015, to November 7, 2016

Disinformation and propaganda are rooted in partisanship and are more prevalent on social media.

The most obvious forms of disinformation are most prevalent on social media and in the most partisan fringes of the media landscape. Greater popularity on social media than attention from media peers is a strong indicator of reporting that is partisan and, in some cases, dubious.

Among the set of top 100 media sources by inlinks or social media shares, seven sources, all from the partisan right or partisan left, receive substantially more attention on social media than they do links from other media outlets.


These sites do not necessarily all engage in misleading or false reporting, but they are clearly highly partisan. In this group, Gateway Pundit is in a class of its own, known for “publishing falsehoods and spreading hoaxes.”

Disproportionate popularity on Facebook is a strong indicator of highly partisan and unreliable media.

A distinct set of websites receive a disproportionate amount of attention from Facebook compared with Twitter and media inlinks. From the list of the most prominent media, 13 sites fall into this category. Many of these sites are cited by independent sources and media reporting as progenitors of inaccurate if not blatantly false reporting. Both in form and substance, the majority of these sites are aptly described as political clickbait. Again, this does not imply equivalency across these sites. Ending the Fed is often cited as the prototypical example of a media source that published false stories. The Onion is an outlier in this group, in that it is explicitly satirical and ironic, rather than, as is the case with the others, engaging in highly partisan and dubious reporting without explicit irony.
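The "disproportionate Facebook attention" signal can be illustrated as a simple skew ratio: Facebook shares relative to a site's combined Twitter shares and media inlinks. The threshold and figures below are invented for demonstration; the report does not publish a formula:

```python
# Flag sites whose Facebook attention far outstrips their other attention --
# the kind of signal the report associates with highly partisan, unreliable
# media. Site names, counts, and the cutoff are hypothetical.
def facebook_skew(fb_shares, tw_shares, inlinks):
    """Ratio of Facebook attention to combined Twitter + inlink attention
    (floored at 1 to avoid division by zero)."""
    other = max(tw_shares + inlinks, 1)
    return fb_shares / other

sites = {
    "example-clickbait.com": (90_000, 2_000, 150),   # (fb, tw, inlinks)
    "example-newspaper.com": (40_000, 35_000, 9_000),
}

# A site is flagged when its Facebook attention exceeds its other
# attention by more than an (arbitrary) factor of 10.
flagged = [name for name, (fb, tw, links) in sites.items()
           if facebook_skew(fb, tw, links) > 10]
```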


Asymmetric vulnerabilities: The right and left were subject to media manipulation in different ways.

The more insulated right-wing media ecosystem was susceptible to sustained network propaganda and disinformation, particularly misleading negative claims about Hillary Clinton. Traditional media accountability mechanisms—for example, fact-checking sites, media watchdog groups, and cross-media criticism—appear to have wielded little influence on the insular conservative media sphere. Claims aimed for “internal” consumption within the right-wing media ecosystem were more extreme, less internally coherent, and appealed more to the “paranoid style” of American politics than claims intended to affect mainstream media reporting.

The institutional commitment to impartiality of media sources at the core of attention on the left meant that hyperpartisan, unreliable sources on the left did not receive the same amplification that equivalent sites on the right did.

These same standard journalistic practices were successfully manipulated by media and activists on the right to inject anti-Clinton narratives into the mainstream media narrative. A key example is the use of the leaked Democratic National Committee emails and Clinton campaign chairman John Podesta’s emails, released through WikiLeaks, and the sustained series of stories written around email-based accusations of influence peddling. Another example is the book and movie release of Clinton Cash together with the sustained campaign that followed, making the Clinton Foundation the major post-convention story. By developing plausible narratives and documentation susceptible to negative coverage, parallel to the more paranoid narrative lines intended for internal consumption within the right-wing media ecosystem, and by “working the refs,” demanding mainstream coverage of anti-Clinton stories, right-wing media played a key role in setting the agenda of mainstream, center-left media. We document these dynamics in the Clinton Foundation case study section of this report.

The New York Times, 6 September 2017:

Fake Russian Facebook Accounts Bought $100,000 in Political Ads

Providing new evidence of Russian interference in the 2016 election, Facebook disclosed on Wednesday that it had identified more than $100,000 worth of divisive ads on hot-button issues purchased by a shadowy Russian company linked to the Kremlin.

Most of the 3,000 ads did not refer to particular candidates but instead focused on divisive social issues such as race, gay rights, gun control and immigration, according to a post on Facebook by Alex Stamos, the company’s chief security officer. The ads, which ran between June 2015 and May 2017, were linked to some 470 fake accounts and pages the company said it had shut down.

Facebook officials said the fake accounts were created by a Russian company called the Internet Research Agency, which is known for using “troll” accounts to post on social media and comment on news websites.

The disclosure adds to the evidence of the broad scope of the Russian influence campaign, which American intelligence agencies concluded was designed to damage Hillary Clinton and boost Donald J. Trump during the election. Multiple investigations of the Russian meddling, and the possibility that the Trump campaign somehow colluded with Russia, have cast a shadow over the first eight months of Mr. Trump’s presidency.

Facebook staff members on Wednesday briefed the Senate and House intelligence committees, which are investigating the Russian intervention in the American election. Mr. Stamos indicated that Facebook is also cooperating with investigators for Robert S. Mueller III, the special counsel, writing that “we have shared our findings with U.S. authorities investigating these issues, and we will continue to work with them as necessary.”….

In its review of election-related advertising, Facebook said it had also found an additional 2,200 ads, costing $50,000, that had less certain indications of a Russian connection. Some of those ads, for instance, were purchased by Facebook accounts with internet protocol addresses that appeared to be in the United States but with the language set to Russian.

Thursday 3 August 2017

Facebook Inc still pursuing dream of spying on users through their webcams and via their touch screens or mobile phones


The Daily Dot, 8 June 2017:

Your worst internet nightmare could be on its way to becoming a reality.
A newly discovered patent application shows Facebook has come up with plans to potentially spy on its users through their phone or laptop cameras—even when they’re not turned on. This could allow it to send tailored advertisements to its nearly two billion members. The application, filed in 2014, says Facebook has thought of using “imaging components,” like a camera, to read the emotions of its users and send them catered content, like videos, photos, and ads.

“Computing devices such as laptops, mobile phones, and tablets increasingly include at least one, and often more than one, imaging component, such as a digital camera. Some devices may include a front-facing camera that is positioned on the same side of the device as a display. Thus, during normal operation, a user may be looking towards the imaging component. However, current content delivery systems typically do not utilize passive imaging information. Thus, a need exists for a content delivery solution that takes advantage of available passive imaging data to provide content to a user with improved relevancy.”

This is the US patent application to which the article is referring.

United States Patent Application 20150242679
Kind Code: A1
Techniques for emotion detection and content delivery are described. In one embodiment, for example, an emotion detection component may identify at least one type of emotion associated with at least one detected emotion characteristic. A storage component may store the identified emotion type. An application programming interface (API) component may receive a request from one or more applications for emotion type and, in response to the request, return the identified emotion type. The one or more applications may identify content for display based upon the identified emotion type. The identification of content for display by the one or more applications based upon the identified emotion type may include searching among a plurality of content items, each content item being associated with one or more emotion type. Other embodiments are described and claimed.

Publication number: US20150242679 A1
Publication type: Application
Application number: US 14/189,467
Publication date: Aug 27, 2015
Filing date: Feb 25, 2014
Priority date: Feb 25, 2014

Facebook Inc appears to have been granted this related patent, Techniques for emotion detection and content delivery (US 9681166 B2, publication date 13 June 2017):

ABSTRACT
Techniques for emotion detection and content delivery are described. In one embodiment, for example, an emotion detection component may identify at least one type of emotion associated with at least one detected emotion characteristic. A storage component may store the identified emotion type. An application programming interface (API) component may receive a request from one or more applications for emotion type and, in response to the request, return the identified emotion type. The one or more applications may identify content for display based upon the identified emotion type. The identification of content for display by the one or more applications based upon the identified emotion type may include searching among a plurality of content items, each content item being associated with one or more emotion type. Other embodiments are described and claimed.
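The abstract describes a three-part architecture: a detection component that identifies an emotion type, a storage component that records it, and an API component that applications query to select matching content. A minimal sketch of that flow in Python, with placeholder names (the patent discloses no algorithm for the detection step itself):

```python
# Storage component: holds the most recently identified emotion type.
class EmotionStore:
    def __init__(self):
        self._latest = None

    def store(self, emotion_type):
        self._latest = emotion_type

    def latest(self):
        return self._latest


# API component: applications request the identified emotion type.
class EmotionAPI:
    def __init__(self, store):
        self._store = store

    def get_emotion_type(self):
        return self._store.latest()


def select_content(api, content_items):
    """Pick content items tagged with the user's identified emotion type,
    as in the abstract's 'searching among a plurality of content items'."""
    emotion = api.get_emotion_type()
    return [c for c in content_items if emotion in c["emotion_types"]]


store = EmotionStore()
store.store("happy")   # stand-in for the detection component's output
api = EmotionAPI(store)
items = [{"id": 1, "emotion_types": ["happy"]},
         {"id": 2, "emotion_types": ["sad"]}]
matches = select_content(api, items)
```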

BACKGROUND
Users of computing devices spend increasing amounts of time browsing streams of posts on social networks, news articles, video, audio, or other digital content. The amount of information available to users is also increasing. Thus, a need exists for delivering content to a user that may be of current interest to them. For example, a user's interests may be determined based upon their current emotional state. Computing devices such as laptops, mobile phones, and tablets increasingly include at least one, and often more than one, imaging component, such as a digital camera. Some devices may include a front-facing camera that is positioned on the same side of the device as a display. Thus, during normal operation, a user may be looking towards the imaging component. However, current content delivery systems typically do not utilize passive imaging information. Thus, a need exists for a content delivery solution that takes advantage of available passive imaging data to provide content to a user with improved relevancy.

Facebook also appears to have been granted a US patent in May this year for Augmenting Text Messages With Emotion Information (US 20170147202 A1).

According to CB Insights, this patent would automatically add emotional information to text messages, predicting the user’s emotion based on methods of keyboard input. The visual format of the text message would adapt in real time based on the user’s predicted emotion. As the patent notes (and as many people have likely experienced), it can be hard to convey mood and intended meaning in a text-only message; this system would aim to reduce misunderstandings.
The system could pick up data from the keyboard, mouse, touch pad, touch screen, or other input devices, and the patent mentions predicting emotion based on relative typing speed, how hard the keys are pressed, movement (using the phone’s accelerometer), location, and other factors.

Wednesday 19 July 2017

The American Resistance has many faces and tweeters are just some of them (11)


In the matter of KNIGHT FIRST AMENDMENT INSTITUTE AT COLUMBIA UNIVERSITY; REBECCA BUCKWALTER; PHILIP COHEN; HOLLY FIGUEROA; EUGENE GU; BRANDON NEELY; JOSEPH PAPP; and NICHOLAS PAPPAS, Plaintiffs, v DONALD J. TRUMP, President of the United States; SEAN M. SPICER, White House Press Secretary; and DANIEL SCAVINO, White House Director of Social Media and Assistant to the President, Defendants, UNITED STATES DISTRICT COURT FOR THE SOUTHERN DISTRICT OF NEW YORK. Filed 11 July 2017.

The New York Times, 11 July 2017:

WASHINGTON — A group of Twitter users blocked by President Trump sued him and two top White House aides on Tuesday, arguing that his account amounts to a public forum that he, as a government official, cannot bar people from.

The blocked Twitter users, represented by the Knight First Amendment Institute at Columbia University, raised cutting-edge issues about how the Constitution applies to the social media era. They say Mr. Trump cannot bar people from engaging with his account because they expressed opinions he did not like, such as mocking or criticizing him.

“The @realDonaldTrump account is a kind of digital town hall in which the president and his aides use the tweet function to communicate news and information to the public, and members of the public use the reply function to respond to the president and his aides and exchange views with one another,” the lawsuit said.

By blocking people from reading his tweets, or from viewing and replying to message chains based on them, Mr. Trump is violating their First Amendment rights because they expressed views he did not like, the lawsuit argued.

It offered several theories to back that notion. They included arguments that Mr. Trump was imposing an unconstitutional restriction on the plaintiffs’ ability to participate in a designated public forum, get access to statements the government had otherwise made available to the public and petition the government for “redress of grievances.”

Filed in Federal District Court for the Southern District of New York, the lawsuit also names Sean Spicer, the White House press secretary, and Dan Scavino, Mr. Trump’s director of social media, as defendants. It seeks a declaration that Mr. Trump’s blocking of the plaintiffs was unconstitutional, an injunction requiring him to unblock them and prohibiting him from blocking others for the views they express, and legal fees.

Tuesday 16 May 2017

NSW Police public relations blunder


In light of ongoing revelations concerning data security and privacy breaches (including hacking) by police personnel around Australia, this was not exactly a wise post on the part of NSW Police on or about 6 May 2017, as evidenced by its apparent online deletion since.



Wednesday 12 April 2017

Examining the Alternative Media Ecosystem using Twitter



In the aftermath of major political disruptions in 2016—in Britain with the Brexit vote and in the United States with the election of Donald Trump to the presidency—there has been widespread attention to and theorizing about the problem of “fake news”. But this term is both amorphous and contested. One perspective locates the problem within the emerging ecosystem of alternative media, where the term has been applied to refer to “clickbait” content that uses tabloid-style headlines to attract viewers for financial reasons (Silverman & Alexander 2016) and to describe political propaganda intentionally planted and propagated through online spaces (Timberg 2016). Challenging these definitions, alternative media outlets have appropriated the term to attack “mainstream” media for its perceived economic and political biases and for hosting inaccurate or under-sourced content (e.g. Rappoport 2016). Beneath this rhetoric, we are seeing traditional news providers and emergent alternative media battle not only for economic viability, but over accepted methods of how information is shared and consumed, and, more profoundly, for how narratives around that information are shaped and by whom.

This research seeks to provide a systematic lens for exploring the production of a certain type of “fake news”— alternative narratives of man-made crisis events. For three years, our research group has examined online rumoring during crises. Over that time, we noted the presence of very similar rumors across many man-made crisis events— including the 2013 Boston Marathon Bombings, the downing of Malaysia Airlines flight MH17, and several mass shooting events including those at Umpqua Community College in Oregon (October, 2015). For each event, rumors claimed the event had been perpetrated by someone other than the official suspects—that it was instead either a staged event performed by “crisis actors” or a “false flag” orchestrated by someone else. Both explanations claimed that a powerful individual or group was pulling the strings for political reasons. Interestingly, though the arguments and evidence used to support these alternative narratives were somewhat consistent across events, the motives cited were often very different—e.g. from the U.S. government trying to support gun control to coordinated global actors staging violence to motivate military intervention.

For this paper, we utilize this type of conspiracy theory or alternative narrative rumor as an entry point for understanding the ecosystem of alternative media. We examine the production of these narratives through Twitter and across the external websites that Twitter users reference as they engage in these narratives. We propose and demonstrate that this lens—Twitter data from mass shooting events and our method for utilizing that data to reveal and explore connections across web domains—provides a systematic approach for shedding light on the emerging phenomena of alternative media and “fake news”.

Our contributions include an increased understanding of the underlying nature of this subsection of alternative media—which hosts conspiratorial content and conducts various anti-globalist political agendas. Noting thematic convergence across domains, we theorize about how alternative media may contribute to conspiratorial thinking by creating a false perception of information diversity……

We collected tweets related to shooting events for more than ten months in 2016. This time period included several high profile shooting events, including mass shootings with civilian casualties at an Orlando, FL nightclub on June 12, in a shopping district in Munich, Germany on July 22, and at a mall in Burlington, WA on September 23. Each of these events catalyzed considerable discussion online and elsewhere about the details and motives of the attack— including claims of the attack being a “false flag”.

More than half of our alternative narrative collection (30,361 tweets) relates to the Orlando event, including:

@ActivistPost: "Was Orlando Shooting A False Flag? Shooter Has Ties To FBI, Regular At Club, Did Not Act Alone? "

This tweet is typical of an alternative narrative tweet, leveraging uncertainty in the form of a leading question (Starbird et al. 2016) to present its theory. The linked-to article—whose title is the content of this tweet—presents evidence to support the theory, including facts about the case (such as previous contact between the FBI and the shooter) and perceived connections to past events that are similarly claimed to be false flags. The underlying theme here is that the U.S. government perpetrated the shooting with the intention of blaming it on Islamic terrorism. This tweet’s author, the ActivistPost, is associated with one of the central nodes in our network graph (see Figures 1-3), referenced in 191 tweets by 153 users and connected (by user activity) to a relatively high number of other domains.
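The network graph the authors reference can be sketched as follows: web domains are nodes, and two domains share an edge when the same Twitter user has cited both; a domain's degree is its count of connected domains, the "connected (by user activity)" measure mentioned above. The tweet data below is fabricated for illustration:

```python
# Build a domain co-citation graph from (user, cited_domain) pairs.
from collections import defaultdict
from itertools import combinations

tweets = [  # hypothetical (user, cited_domain) records
    ("u1", "activistpost.com"), ("u1", "veteranstoday.com"),
    ("u2", "activistpost.com"), ("u2", "nodisinfo.com"),
    ("u3", "memoryholeblog.com"),
]

# Group the domains each user cited.
domains_by_user = defaultdict(set)
for user, domain in tweets:
    domains_by_user[user].add(domain)

# Every pair of domains cited by the same user becomes an edge.
edges = set()
for domains in domains_by_user.values():
    for a, b in combinations(sorted(domains), 2):
        edges.add((a, b))

# Degree = number of distinct domains a domain is connected to.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
```

Central nodes like ActivistPost in the paper correspond here to domains with high degree; isolated domains (cited by users who cite nothing else) have degree zero.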

The following tweet, by an account associated with a domain that has a strong edge tie with ActivistPost, forwards a similarly themed alternative narrative:

@veteranstoday: Orlando nightclub shooting: Yet another false flag? - looks like another PR extravaganza

This article was linked-to 147 times in our data. The tweet and the article feature an image with the title, “Omar Mateen: Patsy or MK Mind-Control Slave”. The term patsy is often used to label an accused perpetrator who has been framed for the incident by government or other powerful groups. MK Mind-Control refers to a CIA project that experimented with mind control in the 1950s. This speculative tweet and related article therefore present two potential explanations of the Orlando shooting event, both building off alternative narratives used in previous events. The underlying claim here is that the named suspect was not responsible for the Orlando shootings, but that the U.S. government was. This claim is extended in the article to apply to other violent acts attributed to Muslim terrorists.

Alternative narratives around the Munich shooting had a similar theme, though blame was pushed onto international geo-political actors:

Desperate Zionists Commit Another Fraud with Munich Shooting Hoax - NODISINFO

The above tweet links to an article (tweeted 54 times) within the nodisinfo.com domain, one of the most highly tweeted and highly connected domains in our data. Citing photographic evidence from the scene, the article claims that the shooting was a drill, staged by crisis actors. All of these terms echo other alternative narratives of other events. In contrast to the Orlando narratives, which blame the U.S. government, in this case the accused “real” perpetrators are Zionists—echoing long-active narratives about covert power wielded by Jewish bankers and others. The article offers no evidence to support that connection other than reference to other “staged” events.

The Cascade Mall shooting in Burlington, Washington prompted a third kind of alternative narrative, one that has appeared after many U.S.-based shootings, including the Sandy Hook School shooting in 2012 and the Umpqua shooting in 2015. This narrative again claims that mass shooting events are staged using crisis actors, but in this case by a left-leaning U.S. government seeking a political basis for reducing gun rights.

Absence Of Footage Of Wounded/Deceased Victims. Media Were Told Victims Remained In The Mall #Cascade #FalseFlag

This tweet suggests that there were no actual victims of the event. It links to an article on the memoryholeblog.com domain, which also has a relatively high degree in our network graph and was tweeted 125 times. The linked-to article assembles evidence to make a case for the event being a drill and describes an outlook that connects several events to this narrative: “Such events are reported on by major news media uncritically, thus supporting the call for strengthened gun control measures. […]”

Interestingly, the second most highly referenced event in our alternative narrative collection from 2016 (at 5,914 tweets) is the Sandy Hook shootings, which occurred in 2012. Though a large portion of those tweets contest or deny that alternative narrative, several utilize Sandy Hook “evidence” to support alternative narratives around more recent events. For example:

Orlando shooting was a hoax. Just like Sandy Hook, Boston Bombing, and San Bernandino. Keep believing Rothschild Zionist news companies.

More Orlando shooting Hoax – proof - same actors in Sandy hook & Boston Marathon Fake bombing - gun take away agenda.

These two tweets both connect the Orlando Shooting to claims that Sandy Hook was a hoax. In the first, the author refers to the “Rothschild Zionist news companies”, a reference to anti-globalist and anti-media viewpoints that appear as major themes across many alternative news sites. The second tweet connects Orlando to Sandy Hook (and paradoxically the Boston Marathon bombings) as part of an ongoing agenda to reduce gun rights in the U.S.

Taken together, these examples illustrate a few of what turns out to be a larger collection of distinct alternative narratives that share several common features. As the above tweets highlight at the micro-level, at the macro-level our domain data demonstrate that different alternative narratives are connected across users and sites—e.g. some users reference both memoryholeblog.com (which assigns blame to U.S. government officials trying to take away gun rights) and veteranstoday.com and/or nodisinfo.com (which theorize that international conspirators set up these events to further their political agendas by falsely blaming Muslim terrorists). Our tweet and domain data suggest that the production of these narratives is a distributed activity in which “successful” elements (e.g. drills, crisis actors, Zionist conspirators) of one narrative are combined with others in a mutually reinforcing manner…
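The “connected by user activity” relation the report describes — two domains linked whenever the same users tweet links to both — can be sketched as a simple co-occurrence computation. This is only an illustrative toy, not the study's actual method or data: the usernames and tweet pairs below are invented, and only the domain names come from the excerpt above.

```python
from collections import defaultdict
from itertools import combinations

# Invented (user, domain) tweet pairs for illustration; the real study
# drew on millions of tweets.
tweets = [
    ("user_a", "activistpost.com"), ("user_a", "veteranstoday.com"),
    ("user_b", "veteranstoday.com"), ("user_b", "nodisinfo.com"),
    ("user_c", "activistpost.com"), ("user_c", "nodisinfo.com"),
    ("user_d", "memoryholeblog.com"), ("user_d", "nodisinfo.com"),
]

def domain_network(pairs):
    """Build a domain-domain graph: an edge joins two domains whenever at
    least one user tweeted links to both; its weight counts shared users."""
    by_user = defaultdict(set)
    for user, domain in pairs:
        by_user[user].add(domain)
    edges = defaultdict(int)
    for domains in by_user.values():
        for a, b in combinations(sorted(domains), 2):
            edges[(a, b)] += 1
    return dict(edges)

edges = domain_network(tweets)

# Degree of a domain = number of distinct domains it shares users with;
# a high-degree node is a "central" domain in the sense the report uses.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(sorted(degree.items()))
```

In this toy data, nodisinfo.com ends up with the highest degree, mirroring how the report identifies its most highly connected domains.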

Saturday 21 January 2017

A Collector's Item: "@POTUS hasn't tweeted yet"


A genuine rarity at 4:33 am Sydney Time - a Trump Twitter account with no tweets 😉


UPDATE

The crowd in that Twitter account banner?

Not from Trump's 20 January 2017 inauguration - not even from any of the rallies he held during the 2016 presidential election campaign.

No, it happens to be a Getty image from Barack Obama's 2009 inauguration; it was taken down from Trump's @POTUS account once social media users noticed.

Tuesday 20 December 2016

On the problem of fake news....


Digital Trends, 6 December 2016:

It’s been half a decade since the co-founder of Avaaz, Eli Pariser, first coined the phrase “filter bubble,” but his prophetic TED Talk — and his concerns and warnings — are even more applicable now than they were then. In an era of fake news, curated content, personalized experiences, and deep ideological divisions, it’s time we all take responsibility for bursting our own filter bubbles.

When I search for something on Google, the results I see are quite different from yours, based on our individual search histories and whatever other data Google has collected over the years. We see this all the time on our Facebook timelines, as the social network uses its vats of data to offer us what it thinks we want to see and hear. This is your bubble…..

Filter bubbles may not seem too threatening a prospect, but they can lead to two distinct but connected issues. The first is that when you only see things you agree with, it can lead to a snowballing confirmation bias that builds up steadily over time.


A wider problem is that when people draw on such different sources of information, a real disconnect can develop, as they become unable to understand how anyone could think differently from themselves.

A look at any of the left- or right-leaning mainstream TV stations during the buildup to the recent election would have left you in no doubt over which candidate they backed. The same can be said of newspapers and other media. In fact, this is true of many published endorsements.

But we’re all aware of that bias. It’s easy to simply switch off or switch over to another station, to see the other side of the coin.

Online, the bias is more covert. Google searches, social network feeds, and even some news publications all curate what they show you. Worse, it’s all behind the scenes. They don’t overtly take a stance, they invisibly paint the digital landscape with things that are likely to align with your point of view…..

This becomes even more of a problem when you factor in faux news. This latest election was one of the most contentious in history, with low-approval candidates on both sides and salacious headlines thrown out by every source imaginable. With so much mud being slung, it was hard to keep track of what was going on, and that was doubly so online, where fake news was abundant.

This is something that Facebook CEO Mark Zuckerberg has tried to play down, claiming that it only accounted for 1 percent of the overall Facebook news. Considering Facebook has nearly 2 billion users, though, that’s potentially a lot of faux stories parroted as the truth. It’s proved enough of an issue that studies suggest many people have difficulty telling fake news from real news, and in the weeks since the election, both Google and Facebook have made pledges to deal with the problem.

Also consider that 61 percent of millennials use Facebook as their main source of news, and you can see how this issue could be set to worsen if it’s not stoppered soon…..

While Zuckerberg may not think fake news and memes made a difference to the election, Facebook employee and Oculus VR founder Palmer Luckey certainly did. He was outed earlier this year for investing more than $100,000 in a company that helped promote Donald Trump online through the proliferation of memes and inflammatory attack advertisements. He wouldn’t have put in the effort if he thought it worthless.

Buzzfeed’s analysis of popular shared stories on Facebook shows that while fake news underperformed compared to its real counterparts in early 2016, by the time Election Day rolled around at the start of November, it had a 1.5 million engagement lead over true stories.

That same analysis piece highlighted some of the biggest fake election stories, and all of them contained classic click-baiting tactics. They used scandalous wording, capitalization, and sensationalist claims to draw in the clickers, sharers, and commenters.

That’s because these sorts of words help to draw an emotional reaction from us. Marketing firm Co-Schedule discovered this back in 2014, but it’s likely something that many people would agree with even without the hard numbers. We’ve all been tempted by clickbait headlines before, and they’re usually ones that appeal to fear, anger, arousal, or some other part of us that isn’t related to critical thinking and political analysis. Everyone’s slinging mud from within their own filter bubbles, secure in the knowledge that they are right, and that everyone who thinks differently is an idiot.

And therein lies the difficulty. The only way to really understand why someone may hold a different viewpoint is through empathy. But how can you empathize when you don’t have control over how the world appears to you, and your filter serves as a buffer to stories that might help you connect with the other side?

Reaching out to us from the past, Pariser has some thoughts for those of us now living through his warning of the future. Even if Facebook may be stripping all humanity from its news curation, there are still human minds and fingertips behind the algorithms that feed us content. He called on those programmers to instill a sense of journalistic integrity in the AI behind the scenes.

“We need the gatekeepers [of information] to encode [journalistic] responsibility into the code that they’re writing. […] We need to make sure that these algorithms have encoded in them a sense of the public life, a sense of civic responsibility. They need to be transparent enough that we can see what the rules are and […] we need [to be] given some control.”

That sort of suggestion seems particularly pertinent, since it was only at the end of August that Facebook laid off its entire editorial team, relying instead on automated algorithms to curate content. They didn’t do a great job, though, as weeks later they were found to have let a bevy of faux content through the screening process.

While it may seem like a tall order for megacorporations to push for such an open platform, so much of a stink has been raised about fake news in the wake of the election that it does seem like Facebook and Google at least will be doing something to target that problematic aspect of social networking. They can do more, though, and it could start with helping to raise awareness of the differences in the content we’re shown…..


Saturday 5 November 2016

Facebook allows real estate agents to place online advertisements with undisclosed racial exclusions


ProPublica, 28 October 2016:
Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers.
That’s basically what Facebook is doing nowadays.
The ubiquitous social network not only allows advertisers to target users by their interests or background, it also gives advertisers the ability to exclude specific groups it calls “Ethnic Affinities.” Ads that exclude people based on race, gender and other sensitive factors are prohibited by federal law in housing and employment.
Here is a screenshot of a housing ad that we purchased from Facebook’s self-service advertising portal:
The ad we purchased was targeted to Facebook members who were house hunting and excluded anyone with an “affinity” for African-American, Asian-American or Hispanic people. (Here’s the ad itself.)
When we showed Facebook’s racial exclusion options to prominent civil rights lawyer John Relman, he gasped and said, “This is horrifying. This is massively illegal. This is about as blatant a violation of the federal Fair Housing Act as one can find.”
The Fair Housing Act of 1968 makes it illegal “to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.” Violators can face tens of thousands of dollars in fines.
The Civil Rights Act of 1964 also prohibits the “printing or publication of notices or advertisements indicating prohibited preference, limitation, specification or discrimination” in employment recruitment.
Facebook’s business model is based on allowing advertisers to target specific groups — or, apparently, to exclude specific groups — using huge reams of personal data the company has collected about its users. Facebook’s microtargeting is particularly helpful for advertisers looking to reach niche audiences, such as swing-state voters concerned about climate change. ProPublica recently offered a tool allowing users to see how Facebook is categorizing them. We found nearly 50,000 unique categories in which Facebook places its users.
Facebook says its policies prohibit advertisers from using the targeting options for discrimination, harassment, disparagement or predatory advertising practices.
“We take a strong stand against advertisers misusing our platform: Our policies prohibit using our targeting options to discriminate, and they require compliance with the law,” said Steve Satterfield, privacy and public policy manager at Facebook. “We take prompt enforcement action when we determine that ads violate our policies."
Satterfield said it’s important for advertisers to have the ability to both include and exclude groups as they test how their marketing performs. For instance, he said, an advertiser “might run one campaign in English that excludes the Hispanic affinity group to see how well the campaign performs against running that ad campaign in Spanish. This is a common practice in the industry.”
He said Facebook began offering the “Ethnic Affinity” categories within the past two years as part of a “multicultural advertising” effort.
Satterfield added that the “Ethnic Affinity” is not the same as race — which Facebook does not ask its members about. Facebook assigns members an “Ethnic Affinity” based on pages and posts they have liked or engaged with on Facebook.
When we asked why “Ethnic Affinity” was included in the “Demographics” category of its ad-targeting tool if it’s not a representation of demographics, Facebook responded that it plans to move “Ethnic Affinity” to another section.
Facebook declined to answer questions about why our housing ad excluding minority groups was approved 15 minutes after we placed the order.
By comparison, consider the advertising controls that the New York Times has put in place to prevent discriminatory housing ads. After the newspaper was successfully sued under the Fair Housing Act in 1989, it agreed to review ads for potentially discriminatory content before accepting them for publication.

Friday 28 October 2016

SOCIAL MEDIA: Don't comment if......


Nooruddean Choudry at Joe.com:

Don't comment.

Don't comment if you're poor or disadvantaged, because you're a scrubber and a scrounger and basically a waste of space.

Don't comment if you've got any affiliation with a political party or social movement, or have previous for mouthing off about issues that matter to you, because you clearly have an agenda.

Don't comment if you've not commented about this before, because you're out of your depth and need to stick to what you know and what about all the other things in the world you're not commenting upon?

Don't comment if you've got 12 followers on Twitter because no one cares what you think, you unimportant loser. Don't comment if you've got 1.2 million followers because who do you think you are, you jumped up egotist?

Don't comment if you're brown or black or Muslim or Jewish or gay or trans or bi, because you just need to get over yourself and stop playing the victim all the bloody time.

Don't comment if you're none of the above because you're just a bleeding heart liberal leftard, who jumps onto bandwagons that have nothing to do with you. Wind your fucking neck in.

Don't comment if you're a woman because you're getting ideas above your station and you're too pretty to be worrying about that, or maybe you're just one of them feminazis and probably a lesbian.

Don't comment if you're rich or famous because you're a luvvie and you don't live in the real world, and why don't you open your own fucking home to them? Just like we take in orphans when we donate to Children In Need.

Don't comment if you haven't got the full facts because you're ill-informed and wrong. Don't comment if you're an expert in the field because we don't trust so-called experts and educated elites.

Don't care. Don't worry. Don't have compassion. Don't comment on anything or anyone that's not us. Don't question what 'us' is. Don't be offended. Don't feel guilty. Don't get angry. And don't fucking cry.

Don't comment. But yeah, free speech.