Showing posts with label Social media. Show all posts

Thursday, 3 August 2017

Facebook Inc still pursuing its dream of spying on users through their webcam and mobile phone cameras and via their touch screens


The Daily Dot, 8 June 2017:

Your worst internet nightmare could be on its way to becoming a reality.
A newly discovered patent application shows Facebook has come up with plans to potentially spy on its users through their phone or laptop cameras—even when they’re not turned on. This could allow it to send tailored advertisements to its nearly two billion members. The application, filed in 2014, says Facebook has thought of using “imaging components,” like a camera, to read the emotions of its users and send them catered content, like videos, photos, and ads.

“Computing devices such as laptops, mobile phones, and tablets increasingly include at least one, and often more than one, imaging component, such as a digital camera. Some devices may include a front-facing camera that is positioned on the same side of the device as a display. Thus, during normal operation, a user may be looking towards the imaging component. However, current content delivery systems typically do not utilize passive imaging information. Thus, a need exists for a content delivery solution that takes advantage of available passive imaging data to provide content to a user with improved relevancy.”

This is the US patent application to which the article is referring.

United States Patent Application 20150242679
Kind Code: A1

Techniques for emotion detection and content delivery are described. In one embodiment, for example, an emotion detection component may identify at least one type of emotion associated with at least one detected emotion characteristic. A storage component may store the identified emotion type. An application programming interface (API) component may receive a request from one or more applications for emotion type and, in response to the request, return the identified emotion type. The one or more applications may identify content for display based upon the identified emotion type. The identification of content for display by the one or more applications based upon the identified emotion type may include searching among a plurality of content items, each content item being associated with one or more emotion type. Other embodiments are described and claimed.

Publication number: US20150242679 A1
Publication type: Application
Application number: US 14/189,467
Publication date: Aug 27, 2015
Filing date: Feb 25, 2014
Priority date: Feb 25, 2014

Facebook Inc appears to have been granted this related patent, Techniques for emotion detection and content delivery (US 9681166 B2, publication date 13 June 2017):

ABSTRACT
Techniques for emotion detection and content delivery are described. In one embodiment, for example, an emotion detection component may identify at least one type of emotion associated with at least one detected emotion characteristic. A storage component may store the identified emotion type. An application programming interface (API) component may receive a request from one or more applications for emotion type and, in response to the request, return the identified emotion type. The one or more applications may identify content for display based upon the identified emotion type. The identification of content for display by the one or more applications based upon the identified emotion type may include searching among a plurality of content items, each content item being associated with one or more emotion type. Other embodiments are described and claimed.

BACKGROUND
Users of computing devices spend increasing amounts of time browsing streams of posts on social networks, news articles, video, audio, or other digital content. The amount of information available to users is also increasing. Thus, a need exists for delivering content to a user that may be of current interest to them. For example, a user's interests may be determined based upon their current emotional state. Computing devices such as laptops, mobile phones, and tablets increasingly include at least one, and often more than one, imaging component, such as a digital camera. Some devices may include a front-facing camera that is positioned on the same side of the device as a display. Thus, during normal operation, a user may be looking towards the imaging component. However, current content delivery systems typically do not utilize passive imaging information. Thus, a need exists for a content delivery solution that takes advantage of available passive imaging data to provide content to a user with improved relevancy.
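The pipeline the abstract describes — an emotion detection component, a storage component, an API component, and applications that search content tagged by emotion type — can be sketched roughly as follows. This is a hypothetical illustration only; the patent publishes no code, and every name, mapping, and rule below is invented.

```python
from dataclasses import dataclass

# Invented mapping from a "detected emotion characteristic" to an emotion type.
CHARACTERISTIC_TO_EMOTION = {
    "smile": "happy",
    "frown": "sad",
    "furrowed_brow": "confused",
}

@dataclass
class ContentItem:
    title: str
    emotion_types: list  # each content item is associated with one or more emotion types

class EmotionStore:
    """Storage component: remembers the most recently identified emotion type."""
    def __init__(self):
        self._emotion_type = None
    def save(self, emotion_type):
        self._emotion_type = emotion_type
    def load(self):
        return self._emotion_type

class EmotionAPI:
    """API component: applications request the identified emotion type."""
    def __init__(self, store):
        self._store = store
    def get_emotion_type(self):
        return self._store.load()

def detect_emotion(characteristic, store):
    """Emotion detection component: map a characteristic to a type and store it."""
    emotion = CHARACTERISTIC_TO_EMOTION.get(characteristic, "neutral")
    store.save(emotion)
    return emotion

def select_content(api, catalogue):
    """Application: search among content items for those tagged with the type."""
    emotion = api.get_emotion_type()
    return [item for item in catalogue if emotion in item.emotion_types]

store = EmotionStore()
api = EmotionAPI(store)
detect_emotion("smile", store)
catalogue = [
    ContentItem("Cat video", ["happy"]),
    ContentItem("Breakup ballad", ["sad"]),
]
print([item.title for item in select_content(api, catalogue)])  # ['Cat video']
```

The separation into detector, store, and API mirrors the claim structure: the application never sees the raw imaging data, only the stored emotion type returned through the API.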

Facebook also appears to have been granted a US patent in May this year for Augmenting Text Messages With Emotion Information (US 20170147202 A1).

According to CBINSIGHTS, this patent would automatically add emotional information to text messages, predicting the user’s emotion based on methods of keyboard input. The visual format of the text message would adapt in real time based on the user’s predicted emotion. As the patent notes (and as many people have likely experienced), it can be hard to convey mood and intended meaning in a text-only message; this system would aim to reduce misunderstandings.
The system could pick up data from the keyboard, mouse, touch pad, touch screen, or other input devices, and the patent mentions predicting emotion based on relative typing speed, how hard the keys are pressed, movement (using the phone’s accelerometer), location, and other factors.
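The kind of inference described — predicting an emotion from typing speed, key pressure, and device movement, then adapting the message's visual format — might look like the toy sketch below. The thresholds, signal names, emotion labels, and styling rules are all invented; the patent describes the idea, not an implementation.

```python
def predict_emotion(chars_per_second, key_pressure, accel_movement):
    """Crude rule-based predictor over hypothetical input-device signals.

    chars_per_second: relative typing speed
    key_pressure:     0..1, how hard keys are pressed
    accel_movement:   magnitude of accelerometer movement
    """
    if chars_per_second > 8 and key_pressure > 0.7:
        return "angry"       # fast, hard typing
    if chars_per_second < 2 and accel_movement < 0.1:
        return "tired"       # slow typing, device at rest
    return "neutral"

def format_message(text, emotion):
    """Adapt the visual format of a text message to the predicted emotion."""
    styles = {
        "angry":   {"font": "bold", "color": "red"},
        "tired":   {"font": "light", "color": "grey"},
        "neutral": {"font": "regular", "color": "black"},
    }
    return {"text": text, **styles[emotion]}

msg = format_message("fine.", predict_emotion(9.5, 0.9, 0.3))
print(msg)  # {'text': 'fine.', 'font': 'bold', 'color': 'red'}
```

A production system would presumably use a trained classifier over many more signals, but the shape is the same: sensor features in, emotion label out, presentation adapted accordingly.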

Wednesday, 19 July 2017

The American Resistance has many faces and tweeters are just some of them (11)


In the matter of KNIGHT FIRST AMENDMENT INSTITUTE AT COLUMBIA UNIVERSITY; REBECCA BUCKWALTER; PHILIP COHEN; HOLLY FIGUEROA; EUGENE GU; BRANDON NEELY; JOSEPH PAPP; and NICHOLAS PAPPAS, Plaintiffs, v DONALD J. TRUMP, President of the United States; SEAN M. SPICER, White House Press Secretary; and DANIEL SCAVINO, White House Director of Social Media and Assistant to the President, Defendants, UNITED STATES DISTRICT COURT FOR THE SOUTHERN DISTRICT OF NEW YORK. Filed 11 July 2017.

The New York Times, 11 July 2017:

WASHINGTON — A group of Twitter users blocked by President Trump sued him and two top White House aides on Tuesday, arguing that his account amounts to a public forum that he, as a government official, cannot bar people from.

The blocked Twitter users, represented by the Knight First Amendment Institute at Columbia University, raised cutting-edge issues about how the Constitution applies to the social media era. They say Mr. Trump cannot bar people from engaging with his account because they expressed opinions he did not like, such as mocking or criticizing him.

“The @realDonaldTrump account is a kind of digital town hall in which the president and his aides use the tweet function to communicate news and information to the public, and members of the public use the reply function to respond to the president and his aides and exchange views with one another,” the lawsuit said.

By blocking people from reading his tweets, or from viewing and replying to message chains based on them, Mr. Trump is violating their First Amendment rights because they expressed views he did not like, the lawsuit argued.

It offered several theories to back that notion. They included arguments that Mr. Trump was imposing an unconstitutional restriction on the plaintiffs’ ability to participate in a designated public forum, get access to statements the government had otherwise made available to the public and petition the government for “redress of grievances.”

Filed in Federal District Court for the Southern District of New York, the lawsuit also names Sean Spicer, the White House press secretary, and Dan Scavino, Mr. Trump’s director of social media, as defendants. It seeks a declaration that Mr. Trump’s blocking of the plaintiffs was unconstitutional, an injunction requiring him to unblock them and prohibiting him from blocking others for the views they express, and legal fees.

Tuesday, 16 May 2017

NSW Police public relations blunder


In light of ongoing revelations concerning data security and privacy breaches (including hacking) by police personnel around Australia, this was not exactly a wise post for NSW Police to make on or about 6 May 2017, as evidenced by its apparent deletion from the site since.



Wednesday, 12 April 2017

Examining the Alternative Media Ecosystem using Twitter



In the aftermath of major political disruptions in 2016—in Britain with the Brexit vote and in the United States with the election of Donald Trump to the presidency—there has been widespread attention to and theorizing about the problem of “fake news”. But this term is both amorphous and contested. One perspective locates the problem within the emerging ecosystem of alternative media, where the term has been applied to refer to “clickbait” content that uses tabloid-style headlines to attract viewers for financial reasons (Silverman & Alexander 2016) and to describe political propaganda intentionally planted and propagated through online spaces (Timberg 2016). Challenging these definitions, alternative media outlets have appropriated the term to attack “mainstream” media for its perceived economic and political biases and for hosting inaccurate or under-sourced content (e.g. Rappoport 2016). Beneath this rhetoric, we are seeing traditional news providers and emergent alternative media battle not only for economic viability, but over accepted methods of how information is shared and consumed, and, more profoundly, for how narratives around that information are shaped and by whom.

This research seeks to provide a systematic lens for exploring the production of a certain type of “fake news”— alternative narratives of man-made crisis events. For three years, our research group has examined online rumoring during crises. Over that time, we noted the presence of very similar rumors across many man-made crisis events— including the 2013 Boston Marathon Bombings, the downing of Malaysia Airlines flight MH17, and several mass shooting events including those at Umpqua Community College in Oregon (October, 2015). For each event, rumors claimed the event had been perpetrated by someone other than the official suspects—that it was instead either a staged event performed by “crisis actors” or a “false flag” orchestrated by someone else. Both explanations claimed that a powerful individual or group was pulling the strings for political reasons. Interestingly, though the arguments and evidence used to support these alternative narratives were somewhat consistent across events, the motives cited were often very different—e.g. from the U.S. government trying to support gun control to coordinated global actors staging violence to motivate military intervention.

For this paper, we utilize this type of conspiracy theory or alternative narrative rumor as an entry point for understanding the ecosystem of alternative media. We examine the production of these narratives through Twitter and across the external websites that Twitter users reference as they engage in these narratives. We propose and demonstrate that this lens—Twitter data from mass shooting events and our method for utilizing that data to reveal and explore connections across web domains—provides a systematic approach for shedding light on the emerging phenomena of alternative media and “fake news”.

Our contributions include an increased understanding of the underlying nature of this subsection of alternative media—which hosts conspiratorial content and conducts various anti-globalist political agendas. Noting thematic convergence across domains, we theorize about how alternative media may contribute to conspiratorial thinking by creating a false perception of information diversity……

We collected tweets related to shooting events for more than ten months in 2016. This time period included several high profile shooting events, including mass shootings with civilian casualties at an Orlando, FL nightclub on June 12, in a shopping district in Munich, Germany on July 22, and at a mall in Burlington, WA on September 23. Each of these events catalyzed considerable discussion online and elsewhere about the details and motives of the attack— including claims of the attack being a “false flag”.

More than half of our alternative narrative collection (30,361 tweets) relates to the Orlando event, including:

@ActivistPost: "Was Orlando Shooting A False Flag? Shooter Has Ties To FBI, Regular At Club, Did Not Act Alone? "

This tweet is typical of an alternative narrative tweet, leveraging uncertainty in the form of a leading question (Starbird et al. 2016) to present its theory. The linked-to article—whose title is the content of this tweet—presents evidence to support the theory, including facts about the case (such as previous contact between the FBI and the shooter) and perceived connections to past events that are similarly claimed to be false flags. The underlying theme here is that the U.S. government perpetrated the shooting with the intention of blaming it on Islamic terrorism. This tweet’s author, the ActivistPost, is associated with one of the central nodes in our network graph (see Figures 1-3), referenced in 191 tweets by 153 users and connected (by user activity) to a relatively high number of other domains.

The following tweet, by an account associated with a domain that has a strong edge tie with ActivistPost, forwards a similarly themed alternative narrative:

@veteranstoday: Orlando nightclub shooting: Yet another false flag? - looks like another PR extravaganza

This article was linked-to 147 times in our data. The tweet and the article feature an image with the title, “Omar Mateen: Patsy or MK Mind-Control Slave”. The term patsy is often used to label an accused perpetrator who has been framed for the incident by government or other powerful groups. MK Mind-Control refers to a CIA project that experimented with mind control in the 1950s. This speculative tweet and related article therefore present two potential explanations of the Orlando shooting event, both building off alternative narratives used in previous events. The underlying claim here is that the named suspect was not responsible for the Orlando shootings, but that the U.S. government was. This claim is extended in the article to apply to other violent acts attributed to Muslim terrorists.

Alternative narratives around the Munich shooting had a similar theme, though blame was pushed onto international geo-political actors:

Desperate Zionists Commit Another Fraud with Munich Shooting Hoax - NODISINFO

The above tweet links to an article (tweeted 54 times) within the nodisinfo.com domain, one of the most highly tweeted and highly connected domains in our data. Citing photographic evidence from the scene, the article claims that the shooting was a drill, staged by crisis actors. All of these terms echo other alternative narratives of other events. Diverging from the Orlando narratives, which blame the U.S. government, in this case the accused “real” perpetrators are Zionists—echoing long-active narratives about covert power wielded by Jewish bankers and others. The article offers no evidence to support that connection other than reference to other “staged” events.

The Cascade Mall Shooting in Burlington, Washington referenced a third kind of alternative narrative that has appeared after many U.S.-based shootings, including the Sandy Hook School shooting in 2012 and the Umpqua School shooting in 2015. This narrative claims that these mass shooting events are again staged using crisis actors, but in this case by the left-leaning U.S. government to provide a political basis for reducing gun rights.

Absence Of Footage Of Wounded/Deceased Victims. Media Were Told Victims Remained In The Mall #Cascade #FalseFlag

This tweet suggests that there were no actual victims of the event. It links to an article on the memoryholeblog.com domain, which also has a relatively high degree in our network graph and was tweeted 125 times. The linked-to article assembles evidence to make a case for the event being a drill and describes an outlook that connects several events to this narrative: “Such events are reported on by major news media uncritically, thus supporting the call for strengthened gun control measures. […]”

Interestingly, the second most highly referenced event in our alternative narrative collection from 2016 (at 5,914 tweets) is the Sandy Hook shootings, which occurred in 2012. Though a large portion of those tweets contest or deny that alternative narrative, several utilize Sandy Hook “evidence” to support alternative narratives around more recent events. For example:

Orlando shooting was a hoax. Just like Sandy Hook, Boston Bombing, and San Bernandino. Keep believing Rothschild Zionist news companies.

More Orlando shooting Hoax – proof - same actors in Sandy hook & Boston Marathon Fake bombing - gun take away agenda.

These two tweets both connect the Orlando Shooting to claims that Sandy Hook was a hoax. In the first, the author refers to the “Rothschild Zionist news companies”, a reference to anti-globalist and anti-media viewpoints that appear as major themes across many alternative news sites. The second tweet connects Orlando to Sandy Hook (and paradoxically the Boston Marathon bombings) as part of an ongoing agenda to reduce gun rights in the U.S.

Taken together, these examples describe a few of what turns out to be a collection of distinct alternative narratives that share several common features. As the above tweets highlight at the micro-level, at the macro-level our domain data demonstrate that different alternative narratives are connected across users and sites—e.g. some users reference both memoryholeblog.com (which assigns blame to U.S. government officials trying to take away gun rights) and veteranstoday.com and/or nodisinfo.com (which theorize that international conspirators set up these events to further their political agendas by falsely blaming Muslim terrorists). Our tweet and domain data suggest that the production of these narratives is a distributed activity where “successful” elements (e.g. drills, crisis actors, Zionist conspirators) of one narrative are combined with others in a mutually reinforcing manner……
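The paper's network construction, as described above, connects web domains through shared user activity: two domains are linked when the same user tweets links to both. A minimal sketch of that idea, with invented tweet data standing in for the paper's actual dataset and edge-weighting choices:

```python
from collections import defaultdict
from itertools import combinations

def build_domain_network(tweets):
    """tweets: iterable of (user, domain) pairs.

    Returns a dict mapping a sorted (domain_a, domain_b) pair to the
    number of users who tweeted links to both domains.
    """
    domains_by_user = defaultdict(set)
    for user, domain in tweets:
        domains_by_user[user].add(domain)

    edge_weights = defaultdict(int)
    for domains in domains_by_user.values():
        # every pair of domains cited by the same user gains an edge
        for a, b in combinations(sorted(domains), 2):
            edge_weights[(a, b)] += 1
    return dict(edge_weights)

# Invented example data, using domain names mentioned in the paper.
tweets = [
    ("u1", "activistpost.com"), ("u1", "veteranstoday.com"),
    ("u2", "veteranstoday.com"), ("u2", "nodisinfo.com"),
    ("u3", "activistpost.com"), ("u3", "veteranstoday.com"),
]
network = build_domain_network(tweets)
print(network[("activistpost.com", "veteranstoday.com")])  # 2
```

A domain's degree in this graph (how many other domains it shares users with) is what lets the authors flag hubs like nodisinfo.com as "highly connected": narratives that recur across many sites leave many such shared-audience edges.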

Sunday, 22 January 2017

Saturday, 21 January 2017

A Collector's Item: "@POTUS hasn't tweeted yet"


A genuine rarity at 4:33 am Sydney Time - a Trump Twitter account with no tweets 😉


UPDATE

The crowd in that Twitter account banner?

Not from Trump's 20 January 2017 inauguration - not even from any of the rallies he held during the 2016 presidential election campaign.

No, it happens to be a Getty image from Barack Obama's 2009 inauguration, taken down from Trump's @POTUS account once social media users noticed.

Tuesday, 20 December 2016

On the problem of fake news....


Digital Trends, 6 December 2016:

It’s been half a decade since the co-founder of Avaaz, Eli Pariser, first coined the phrase “filter bubble,” but his prophetic TED Talk — and his concerns and warnings — are even more applicable now than they were then. In an era of fake news, curated content, personalized experiences, and deep ideological divisions, it’s time we all take responsibility for bursting our own filter bubbles.

When I search for something on Google, the results I see are quite different from yours, based on our individual search histories and whatever other data Google has collected over the years. We see this all the time on our Facebook timelines, as the social network uses its vats of data to offer us what it thinks we want to see and hear. This is your bubble…..

Filter bubbles may not seem too threatening a prospect, but they can lead to two distinct but connected issues. The first is that when you only see things you agree with, it can lead to a snowballing confirmation bias that builds up steadily over time.


A wider problem is that with such different sources of information, a real disconnect can develop between people, as they become unable to understand how anyone could think differently from themselves.

A look at any of the left- or right-leaning mainstream TV stations during the buildup to the recent election would have left you in no doubt over which candidate they backed. The same can be said of newspapers and other media. In fact, this is true of many published endorsements.

But we’re all aware of that bias. It’s easy to simply switch off or switch over to another station, to see the other side of the coin.

Online, the bias is more covert. Google searches, social network feeds, and even some news publications all curate what they show you. Worse, it’s all behind the scenes. They don’t overtly take a stance, they invisibly paint the digital landscape with things that are likely to align with your point of view…..

This becomes even more of a problem when you factor in faux news. This latest election was one of the most contentious in history, with low-approval candidates on both sides and salacious headlines thrown out by every source imaginable. With so much mud being slung, it was hard to keep track of what was going on, and that was doubly so online, where fake news was abundant.

This is something that Facebook CEO Mark Zuckerberg has tried to play down, claiming that it only accounted for 1 percent of the overall Facebook news. Considering Facebook has near 2 billion users, though, that’s potentially a lot of faux stories parroted as the truth. It’s proved enough of an issue that studies suggest many people have difficulty telling fake news from real news, and in the weeks since the election, both Google and Facebook have made pledges to deal with the problem.

Also consider that 61 percent of millennials use Facebook as their main source of news, and you can see how this issue could be set to worsen if it’s not stoppered soon…..

While Zuckerberg may not think fake news and memes made a difference to the election, Facebook employee and Oculus VR founder Palmer Luckey certainly did. He was outed earlier this year for investing more than $100,000 in a company that helped promote Donald Trump online through the proliferation of memes and inflammatory attack advertisements. He wouldn’t have put in the effort if he thought it worthless.

Buzzfeed’s analysis of the most-shared stories on Facebook shows that while fake news underperformed compared to its real counterparts in early 2016, by the time Election Day rolled around at the start of November, it had a 1.5 million engagement lead over true stories.

That same analysis piece highlighted some of the biggest fake election stories, and all of them contained classic click-baiting tactics. They used scandalous wording, capitalization, and sensationalist claims to draw in the clickers, sharers, and commenters.

That’s because these sorts of words help to draw an emotional reaction from us. Marketing firm Co-Schedule discovered this back in 2014, but it’s likely something that many people would agree with even without the hard numbers. We’ve all been tempted by clickbait headlines before, and they’re usually ones that appeal to fear, anger, arousal, or some other part of us that isn’t related to critical thinking and political analysis. Everyone’s slinging mud from within their own filter bubbles, secure in the knowledge that they are right, and that everyone who thinks differently is an idiot.

And therein lies the difficulty. The only way to really understand why someone may hold a different viewpoint is through empathy. But how can you empathize when you don’t have control over how the world appears to you, and your filter serves as a buffer to stories that might help you connect with the other side?

Reaching out to us from the past, Pariser has some thoughts for those of us now living through his warning of the future. Even if Facebook may be stripping all humanity from its news curation, there are still human minds and fingertips behind the algorithms that feed us content. He called on those programmers to instill a sense of journalistic integrity in the AI behind the scenes.

“We need the gatekeepers [of information] to encode [journalistic] responsibility into the code that they’re writing. […] We need to make sure that these algorithms have encoded in them a sense of the public life, a sense of civic responsibility. They need to be transparent enough that we can see what the rules are and […] we need [to be] given some control.”

That sort of suggestion seems particularly pertinent, since it was only at the end of August that Facebook laid off its entire editorial team, relying instead on automated algorithms to curate content. They didn’t do a great job, though, as weeks later they were found to have let a bevy of faux content through the screening process.

While it may seem like a tall order for megacorporations to push for such an open platform, so much of a stink has been raised about fake news in the wake of the election that it does seem like Facebook and Google at least will be doing something to target that problematic aspect of social networking. They can do more, though, and it could start with helping to raise awareness of the differences in the content we’re shown…..


Saturday, 5 November 2016

Facebook allows real estate agents to place online advertisements with undisclosed racial exclusions


ProPublica, 28 October 2016:
Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers.
That’s basically what Facebook is doing nowadays.
The ubiquitous social network not only allows advertisers to target users by their interests or background, it also gives advertisers the ability to exclude specific groups it calls “Ethnic Affinities.” Ads that exclude people based on race, gender and other sensitive factors are prohibited by federal law in housing and employment.
Here is a screenshot of a housing ad that we purchased from Facebook’s self-service advertising portal:
The ad we purchased was targeted to Facebook members who were house hunting and excluded anyone with an “affinity” for African-American, Asian-American or Hispanic people. (Here’s the ad itself.)
When we showed Facebook’s racial exclusion options to prominent civil rights lawyer John Relman, he gasped and said, “This is horrifying. This is massively illegal. This is about as blatant a violation of the federal Fair Housing Act as one can find.”
The Fair Housing Act of 1968 makes it illegal "to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.” Violators can face tens of thousands of dollars in fines.
The Civil Rights Act of 1964 also prohibits the “printing or publication of notices or advertisements indicating prohibited preference, limitation, specification or discrimination” in employment recruitment.
Facebook’s business model is based on allowing advertisers to target specific groups — or, apparently to exclude specific groups — using huge reams of personal data the company has collected about its users. Facebook’s microtargeting is particularly helpful for advertisers looking to reach niche audiences, such as swing-state voters concerned about climate change. ProPublica recently offered a tool allowing users to see how Facebook is categorizing them. We found nearly 50,000 unique categories in which Facebook places its users.
Facebook says its policies prohibit advertisers from using the targeting options for discrimination, harassment, disparagement or predatory advertising practices.
“We take a strong stand against advertisers misusing our platform: Our policies prohibit using our targeting options to discriminate, and they require compliance with the law,” said Steve Satterfield, privacy and public policy manager at Facebook. “We take prompt enforcement action when we determine that ads violate our policies."
Satterfield said it’s important for advertisers to have the ability to both include and exclude groups as they test how their marketing performs. For instance, he said, an advertiser “might run one campaign in English that excludes the Hispanic affinity group to see how well the campaign performs against running that ad campaign in Spanish. This is a common practice in the industry.”
He said Facebook began offering the “Ethnic Affinity” categories within the past two years as part of a “multicultural advertising” effort.
Satterfield added that the “Ethnic Affinity” is not the same as race — which Facebook does not ask its members about. Facebook assigns members an “Ethnic Affinity” based on pages and posts they have liked or engaged with on Facebook.
When we asked why “Ethnic Affinity” was included in the “Demographics” category of its ad-targeting tool if it’s not a representation of demographics, Facebook responded that it plans to move “Ethnic Affinity” to another section.
Facebook declined to answer questions about why our housing ad excluding minority groups was approved 15 minutes after we placed the order.
By comparison, consider the advertising controls that the New York Times has put in place to prevent discriminatory housing ads. After the newspaper was successfully sued under the Fair Housing Act in 1989, it agreed to review ads for potentially discriminatory content before accepting them for publication.

Friday, 28 October 2016

SOCIAL MEDIA: Don't comment if......


Nooruddean Choudry at Joe.com:

Don't comment.

Don't comment if you're poor or disadvantaged, because you're a scrubber and a scrounger and basically a waste of space.

Don't comment if you've got any affiliation with a political party or social movement, or have previous for mouthing off about issues that matter to you, because you clearly have an agenda.

Don't comment if you've not commented about this before, because you're out of your depth and need to stick to what you know and what about all the other things in the world you're not commenting upon?

Don't comment if you've got 12 followers on Twitter because no one cares what you think, you unimportant loser. Don't comment if you've got 1.2 million followers because who do you think you are, you jumped up egotist?

Don't comment if you're brown or black or Muslim or Jewish or gay or trans or bi, because you just need to get over yourself and stop playing the victim all the bloody time.

Don't comment if you're none of the above because you're just a bleeding heart liberal leftard, who jumps onto bandwagons that have nothing to do with you. Wind your fucking neck in.

Don't comment if you're a woman because you're getting ideas above your station and you're too pretty to be worrying about that, or maybe you're just one of them feminazis and probably a lesbian.

Don't comment if you're rich or famous because you're a luvvie and you don't live in the real world, and why don't you open your own fucking home to them? Just like we take in orphans when we donate to Children In Need.

Don't comment if you haven't got the full facts because you're ill-informed and wrong. Don't comment if you're an expert in the field because we don't trust so-called experts and educated elites.

Don't care. Don't worry. Don't have compassion. Don't comment on anything or anyone that's not us. Don't question what 'us' is. Don't be offended. Don't feel guilty. Don't get angry. And don't fucking cry.

Don't comment. But yeah, free speech.
 

Wednesday, 26 October 2016

This type of police surveillance will come as no surprise to Australian blogs which post on local and regional protests


CNN.com, 11 October 2016:

The ACLU of California reported that Geofeedia had been providing law enforcement with data -- including locations -- from the social media accounts of protestors. In response, it said Tuesday that Twitter, Facebook, and Instagram had cut off Geofeedia's access to their feeds.

The extent of law enforcement's social media surveillance was discovered through public records requests of 63 agencies in California, according to the ACLU of California. Emails obtained show the tools were used to monitor chatter around "the Ferguson situation," and that Geofeedia told California law enforcement agencies to find out how police in Baltimore used its tools to "stay one step ahead of rioters," after the death of Freddie Gray in police custody.

Geofeedia provided searchable data from public Instagram posts, troves of publicly shared information from Facebook via the Topic Feed API, and public tweets. Information in Twitter, Facebook, and Instagram posts can be used to infer things like location, personal associations and religious affiliation.

The ACLU says Geofeedia and other social media surveillance tools can unfairly impact communities of color. Movements like #BlackLivesMatter began on social media, and Twitter, in particular, is used as a platform for organizing and amplifying protests.

"Communities of color rely on platforms to organize, to persuade, and to spread information," Matt Cagle, technology and civil liberties policy attorney at the ACLU of Northern California, told CNNMoney. "But here, the social networks left a side door open for surveillance by the police."

Law enforcement agencies invest thousands in the tools that aggregate and surveil conversation data -- the Daily Dot reported that the Denver Police Department spent $30,000 on these types of tools in May. The ACLU launched an investigation in Denver in response to this report.

Based on information in the @ACLU's report, we are immediately suspending @Geofeedia's commercial access to Twitter data.
— Policy (@policy) October 11, 2016

In an email obtained by the ACLU of California through public records requests, Geofeedia claims "over 500 law enforcement and public safety agencies" use its services.

After the ACLU's report on Tuesday, Twitter tweeted that Geofeedia's access had been revoked.

"In addition to cutting off data access, the social networks should take additional steps to implement clear rules that prohibit the use of user data for surveillance, and oversight measures to ensure developers are not using the user data for surveillance," Cagle said.

The organization is joining with the Center for Media Justice and Color of Change to ask social media sites to commit to better protecting users engaged in political and social discourse.

Malkia Cyril, the executive director of the Center for Media Justice, said that people are using social media to expose human rights abuses, turning these platforms into modern day news outlets. However, the sites aren't subject to the same kind of scrutiny or standards, she said.

"I wasn't surprised," Cyril told CNNMoney. "But I do think the average user should be shocked and dismayed at the scope and the scale of what the ACLU found."

Tuesday, 2 August 2016

Australian Infrastructure Developments Pty Ltd still insulting people on social media


In reference to the fact that commercial and recreational fishing form part of the economic underpinning of local town/village economies within the Clarence River estuary, the Facebook page No Yamba Mega Port produced this banner:


Apparently this did not impress the proponents of the Yamba Mega Port scheme, presumably including the public 'face' of this proposal, Des Euen.

A reader sent me this last Friday, 29 July 2016:



Which made me wonder what else this company was saying on Facebook that week and, oh dear, Australian Infrastructure Developments Pty Ltd aka AID Australia was back to its old ways - tossing insults.


Which by AID Australia standards is almost polite when you compare it to this use of bad language on 1 July 2016:



Thursday, 7 July 2016

Take a bow Twitter in Australia - a large part of the reason Turnbull & Co weren't unconditionally loved by the electorate on 2 July 2016 is your fault!


Excerpt from Sky News Australian Agenda 3 July 2016 interview with Liberal Senator for Tasmania George Brandis:

PAUL KELLY:
Can I just ask, do you think we are seeing deeper changes in Australian politics? We have now gone for a decade and the evidence is that it's very hard for a first-term government to be re-elected. The electorate seems to be more impatient and more critical. Do you think that's right?

ATTORNEY-GENERAL:
I do, and there are a lot of reasons for that. I think one of the drivers of this is the increasing velocity of events. Another is the trivialisation of political communication through Twitter and things like that. There are a lot of phenomena that sociologists and political scientists will no doubt write about, but I do think that the velocity of events, the increasing accelerating velocity of events and the trivialisation of political discourse have a lot to do with it.

Take a bow, Aussie twerps.

Friday, 6 May 2016

Social media, advertising, trust and dollars


Excerpt from the Australian Newspaper History Group Newsletter No. 87 May 2016:

87.2.1 Reach and effectiveness of social media

British marketing and branding specialist Mark Ritson will give a series of lectures to the Australian Association of National Advertisers that will challenge some thinking about the reach and effectiveness of social media over established media platforms, such as print. Professor Ritson, who is head of Marketing at Melbourne Business School, will conduct four talks over six dates in Sydney and Melbourne, from 24 May. His first lecture is titled "Marketing Deconstructed: Communications – the death of the digital/traditional divide". Prof Ritson believes the likes of Twitter, Facebook and Instagram are over-rated by some marketers, who choose to ignore the proven engagement of traditional media. Prof Ritson says the belief in social media as an advertising platform has become fashionable among some marketing executives who blindly denigrate television and print. To back his position on the strength of traditional platforms, Prof Ritson cited data from Nielsen's global trust in advertising survey published last September. The report showed 63 per cent of people trusted TV advertising, and 60 per cent trusted print ads, but only 46 per cent trusted ads served on social networks (TheNewspaperWorks, 18 March 2016).

87.2.2 Online advertising nears $6bn

Australian online advertising spending climbed to $5.9 billion in 2015, a 24 per cent increase from calendar year 2014, according to the latest Interactive Advertising Bureau/Pricewaterhouse Coopers Online Advertising Expenditure Report. The fourth quarter report is a significant result for the online advertising industry which has achieved double-digit growth of at least 20 per cent since 2010. The report examines advertising expenditure across five advertising categories, each of which experienced significant year on year growth:

* Mobile grew 81 per cent this year to $1.5 billion
* Video grew 75 per cent to $500 million
* General display grew 46 per cent to $2.1 billion
* Classifieds grew 22 per cent to $1.1 billion
* Search and directories grew 14 per cent to $2.8 billion.

Outgoing chief executive of the IAB, Alice Manners, said: "When the IAB first started recording online ad expenditure in 2003 it was at $1.3 billion and today we are poised to break the $6 billion barrier."

Friday, 8 January 2016

Politwoops is gathering Australian politicians' tweets once more


The Age 1 January 2016:

Politwoops will once again be able to collect and publish the deleted tweets of politicians around the world after Twitter announced that it reached a deal with the organisations that run the website.

Twitter revoked Politwoops' access to its API, the back-end code used by developers of other applications, earlier this year. Christopher Gates, the president of the Sunlight Foundation, a transparency group that runs the website in partnership with the Open State Foundation and Access Now, wrote at the time that Twitter's decision "truly mystified" him.

Politwoops has helped shine a light on apparent attempts by politicians to distance themselves from their remarks on Twitter. Perhaps the most notable case was when several politicians deleted tweets praising the release of Sgt. Bowe Bergdahl by captors in Afghanistan after questions arose about the soldier's past actions.

Politwoops Australia can be found here.

NOTE: A word of warning – there is at least one Australian politician's Twitter account which was comprehensively hacked, and the tweets recorded as deleted did not originate from that person, so double-check any tweets you may consider quoting.


Thursday, 7 January 2016

Twitter: no trolls, bullies, haters or racists allowed



Abusive Behavior

We believe in freedom of expression and in speaking truth to power, but that means little as an underlying philosophy if voices are silenced because people are afraid to speak up. In order to ensure that people feel safe expressing diverse opinions and beliefs, we do not tolerate behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user’s voice.

Any accounts and related accounts engaging in the activities specified below may be temporarily locked and/or subject to permanent suspension.

* Violent threats (direct or indirect): You may not make threats of violence or promote violence, including threatening or promoting terrorism. 

* Harassment: You may not incite or engage in the targeted abuse or harassment of others.

Some of the factors that we may consider when evaluating abusive behavior include:
o   if a primary purpose of the reported account is to harass or send abusive messages to others;
o   if the reported behavior is one-sided or includes threats;
o   if the reported account is inciting others to harass another account; and
o   if the reported account is sending harassing messages to an account from multiple accounts.

* Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories. 

* Multiple account abuse: Creating multiple accounts with overlapping uses or in order to evade the temporary or permanent suspension of a separate account is not allowed.

* Private information: You may not publish or post other people's private and confidential information, such as credit card numbers, street address, or Social Security/National Identity numbers, without their express authorization and permission. In addition, you may not post intimate photos or videos that were taken or distributed without the subject's consent. Read more about our private information policy here.

* Impersonation: You may not impersonate others through the Twitter service in a manner that is intended to or does mislead, confuse, or deceive others. Read more about our impersonation policy here.