Showing posts with label elections. Show all posts

Wednesday, 17 April 2024

Discussing Artificial Intelligence (AI) in April 2024

 

Well, this month attention has shifted from AI being used to create multiple fake bird species and celebrity images, or Microsoft using excruciatingly garish alternative landscapes to promote its software, back to AI being used by bad actors in the global and domestic political arenas of an election year.


Nature, WORLD VIEW, 9 April 2024:

AI-fuelled election campaigns are here — where are the rules?

Political candidates are increasingly using AI-generated ‘softfakes’ to boost their campaigns. This raises deep ethical concerns.

By Rumman Chowdhury


Of the nearly two billion people living in countries that are holding elections this year, some have already cast their ballots. Elections held in Indonesia and Pakistan in February, among other countries, offer an early glimpse of what’s in store as artificial intelligence (AI) technologies steadily intrude into the electoral arena. The emerging picture is deeply worrying, and the concerns are much broader than just misinformation or the proliferation of fake news.


As the former director of the Machine Learning, Ethics, Transparency and Accountability (META) team at Twitter (before it became X), I can attest to the massive ongoing efforts to identify and halt election-related disinformation enabled by generative AI (GAI). But uses of AI by politicians and political parties for purposes that are not overtly malicious also raise deep ethical concerns.


GAI is ushering in an era of ‘softfakes’. These are images, videos or audio clips that are doctored to make a political candidate seem more appealing. Whereas deepfakes (digitally altered visual media) and cheap fakes (low-quality altered media) are associated with malicious actors, softfakes are often made by the candidate’s campaign team itself.


How to stop AI deepfakes from sinking society — and science


In Indonesia’s presidential election, for example, winning candidate Prabowo Subianto relied heavily on GAI, creating and promoting cartoonish avatars to rebrand himself as gemoy, which means ‘cute and cuddly’. This AI-powered makeover was part of a broader attempt to appeal to younger voters and displace allegations linking him to human-rights abuses during his stint as a high-ranking army officer. The BBC dubbed him “Indonesia’s ‘cuddly grandpa’ with a bloody past”. Furthermore, clever use of deepfakes, including an AI ‘get out the vote’ virtual resurrection of Indonesia’s deceased former president Suharto by a group backing Subianto, is thought by some to have contributed to his surprising win.


Nighat Dad, the founder of the research and advocacy organization Digital Rights Foundation, based in Lahore, Pakistan, documented how candidates in Bangladesh and Pakistan used GAI in their campaigns, including AI-written articles penned under the candidate’s name. South and southeast Asian elections have been flooded with deepfake videos of candidates speaking in numerous languages, singing nostalgic songs and more — humanizing them in a way that the candidates themselves couldn’t do in reality.


What should be done? Global guidelines might be considered around the appropriate use of GAI in elections, but what should they be? There have already been some attempts. The US Federal Communications Commission, for instance, banned the use of AI-generated voices in phone calls, known as robocalls. Businesses such as Meta have launched watermarks — a label or embedded code added to an image or video — to flag manipulated media.


But these are blunt and often voluntary measures. Rules need to be put in place all along the communications pipeline — from the companies that generate AI content to the social-media platforms that distribute them.


What the EU’s tough AI law means for research and ChatGPT


Content-generation companies should take a closer look at defining how watermarks should be used. Watermarking can be as obvious as a stamp, or as complex as embedded metadata to be picked up by content distributors.


Companies that distribute content should put in place systems and resources to monitor not just misinformation, but also election-destabilizing softfakes that are released through official, candidate-endorsed channels. When candidates don’t adhere to watermarking — none of these practices are yet mandatory — social-media companies can flag and provide appropriate alerts to viewers. Media outlets can and should have clear policies on softfakes. They might, for example, allow a deepfake in which a victory speech is translated to multiple languages, but disallow deepfakes of deceased politicians supporting candidates.


Election regulatory and government bodies should closely examine the rise of companies that are engaging in the development of fake media. Text-to-speech and voice-emulation software from Eleven Labs, an AI company based in New York City, was deployed to generate robocalls that tried to dissuade voters from voting for US President Joe Biden in the New Hampshire primary elections in January, and to create the softfakes of former Pakistani prime minister Imran Khan during his 2024 campaign outreach from a prison cell. Rather than pass softfake regulation on companies, which could stifle allowable uses such as parody, I instead suggest establishing election standards on GAI use. There is a long history of laws that limit when, how and where candidates can campaign, and what they are allowed to say.


Citizens have a part to play as well. We all know that you cannot trust what you read on the Internet. Now, we must develop the reflexes to not only spot altered media, but also to avoid the emotional urge to think that candidates’ softfakes are ‘funny’ or ‘cute’. The intent of these isn’t to lie to you — they are often obviously AI generated. The goal is to make the candidate likeable.


Softfakes are already swaying elections in some of the largest democracies in the world. We would be wise to learn and adapt as the ongoing year of democracy, with some 70 elections, unfolds over the next few months.


COMPETING INTERESTS

The author declares no competing interests.

[my yellow highlighting]



Charles Darwin University, Expert Alert, media release, 12 April 2024, excerpt:


Governments must crack down on AI interfering with elections


Charles Darwin University Computational and Artificial Intelligence expert Associate Professor Niusha Shafiabady.


Like it or not, we are affected by what we come across on social media platforms. Future wars will not be fought with missiles or tanks, but they can easily be run on social media platforms by influencing what people think and do. This applies to election results.


Microsoft has said that the election outcomes in India, Taiwan and the US could be affected by the AI plays by powers like China or North Korea. In the world of technology, we call this disinformation, meaning producing misleading information on purpose to change people’s views. What can we do to fight these types of attacks? Well, I believe we should question what we see or read. Not everything we hear is based on the truth. Everyone should be aware of this.


Governments should enforce more strict regulations to fight misinformation, things like: Finding triggers that show signs of unwanted interference; blocking and stopping the unauthorised or malicious trends; enforcing regulations on social media platforms to produce reports to the government to demonstrate and measure the impact and the flow of the information on the matters that affect the important issues such as elections and healthcare; and enforcing regulations on the social media platforms to monitor and stop the fake information sources or malicious actors.


The Conversation, 10 April 2024:


Election disinformation: how AI-powered bots work and how you can protect yourself from their influence


Nick Hajli, AI Strategist and Professor of Digital Strategy, Loughborough University



Social media platforms have become more than mere tools for communication. They’ve evolved into bustling arenas where truth and falsehood collide. Among these platforms, X stands out as a prominent battleground. It’s a place where disinformation campaigns thrive, perpetuated by armies of AI-powered bots programmed to sway public opinion and manipulate narratives.


AI-powered bots are automated accounts that are designed to mimic human behaviour. Bots on social media, chat platforms and conversational AI are integral to modern life. They are needed to make AI applications run effectively......


How bots work


Social influence is now a commodity that can be acquired by purchasing bots. Companies sell fake followers to artificially boost the popularity of accounts. These followers are available at remarkably low prices, with many celebrities among the purchasers.


In the course of our research, for example, colleagues and I detected a bot that had posted 100 tweets offering followers for sale.


Using AI methodologies and a theoretical approach called actor-network theory, my colleagues and I dissected how malicious social bots manipulate social media, influencing what people think and how they act with alarming efficacy. We can tell if fake news was generated by a human or a bot with an accuracy rate of 79.7%. It is crucial to comprehend how both humans and AI disseminate disinformation in order to grasp the ways in which humans leverage AI for spreading misinformation.


To take one example, we examined the activity of an account named “True Trumpers” on Twitter.



The account was established in August 2017, had no followers and no profile picture, but had, at the time of the research, posted 4,423 tweets. These included a series of entirely fabricated stories. It’s worth noting that this bot originated from an eastern European country.




Research such as this influenced X to restrict the activities of social bots. In response to the threat of social media manipulation, X has implemented temporary reading limits to curb data scraping and manipulation. Verified accounts have been limited to reading 6,000 posts a day, while unverified accounts can read 600 a day. This is a new update, so we don’t yet know if it has been effective.


Can we protect ourselves?

The onus ultimately falls on users to exercise caution and discern truth from falsehood, particularly during election periods. By critically evaluating information and checking sources, users can play a part in protecting the integrity of democratic processes from the onslaught of bots and disinformation campaigns on X. Every user is, in fact, a frontline defender of truth and democracy. Vigilance, critical thinking and a healthy dose of scepticism are essential armour.


With social media, it’s important for users to understand the strategies employed by malicious accounts.


Malicious actors often use networks of bots to amplify false narratives, manipulate trends and swiftly disseminate misinformation. Users should exercise caution when encountering accounts exhibiting suspicious behaviour, such as excessive posting or repetitive messaging.


Disinformation is also frequently propagated through dedicated fake news websites. These are designed to imitate credible news sources. Users are advised to verify the authenticity of news sources by cross-referencing information with reputable sources and consulting fact-checking organisations.


Self-awareness is another form of protection, especially against social engineering tactics. Psychological manipulation is often deployed to deceive users into believing falsehoods or engaging in certain actions. Users should maintain vigilance and critically assess the content they encounter, particularly during periods of heightened sensitivity such as elections.


By staying informed, engaging in civil discourse and advocating for transparency and accountability, we can collectively shape a digital ecosystem that fosters trust, transparency and informed decision-making.


Philadelphia Inquirer, 14 April 2024:

Expect to see AI ‘weaponized to deceive voters’ in this year’s presidential election

Alfred Lubrano


As the presidential campaign slowly progresses, artificial intelligence continues to accelerate at a breathless pace — capable of creating an infinite number of fraudulent images that are hard to detect and easy to believe.


Experts warn that by November voters in Pennsylvania and other states will have witnessed counterfeit photos and videos of candidates enacting one scenario after another, with reality wrecked and the truth nearly unknowable.


“This is the first presidential campaign of the AI era,” said Matthew Stamm, a Drexel University electrical and computer engineering professor who leads a team that detects false or manipulated political images. “I believe things are only going to get worse.”


Last year, Stamm’s group debunked a political ad for then-presidential candidate Florida Republican Gov. Ron DeSantis that appeared on Twitter. It showed former President Donald Trump embracing and kissing Anthony Fauci, long a target of the right for his response to COVID-19.


That spot was a “watershed moment” in U.S. politics, said Stamm, director of his school’s Multimedia and Information Security Lab. “Using AI-created media in a misleading manner had never been seen before in an ad for a major presidential candidate,” he said.


“This showed us how there’s so much potential for AI to create voting misinformation. It could get crazy.”


Election experts speak with dread of AI’s potential to wreak havoc on the election: false “evidence” of candidate misconduct; sham videos of election workers destroying ballots or preventing people from voting; phony emails that direct voters to go to the wrong polling locations; ginned-up texts sending bogus instructions to election officials that create mass confusion.....


Malicious intent


AI allows people with malicious intent to work with great speed and sophistication at low cost, according to the Cybersecurity & Infrastructure Security Agency, part of the U.S. Department of Homeland Security.


That swiftness was on display in June 2018. Doermann’s University at Buffalo colleague, Siwei Lyu, presented a paper demonstrating that AI-generated deepfake videos could be detected because no one in them was blinking; the faces had been transferred from still photos.


Within three weeks, AI-equipped fraudsters stopped creating deepfakes based on photos and began culling from videos in which people blinked naturally, Doermann said, adding, “Every time we publish a solution for detecting AI, somebody gets around it quickly.”


Six years later, with AI that much more developed, “it’s gained remarkable capacities that improve daily,” said political communications expert Kathleen Hall Jamieson, director of the University of Pennsylvania’s Annenberg Public Policy Center. “Anything we can say now about AI will change in two weeks. Increasingly, that means deepfakes won’t be easily detected.


We should be suspicious of everything we see.”


AI-generated misinformation helps exacerbate already-entrenched political polarization throughout America, said Cristina Bicchieri, Penn professor of philosophy and psychology.


“When we see something in social media that aligns with our point of view, even if it’s fake, we tend to want to believe it,” she said.


To battle fabrications, Stamm of Drexel said, the smart consumer could delay reposting emotionally charged material from social media until checking its veracity.


But that’s a lot to ask.


Human overreaction to a false report, he acknowledged, “is harder to resolve than any anti-AI stuff I develop in my lab.


And that’s another reason why we’re in uncharted waters.”


Thursday, 5 January 2023

Outages still plaguing social media platform, Twitter Inc. is not paying its California landlords, Elon has a garage sale & announces he is opening the platform up to political advertising in the lead-up to the US presidential election


Twitter, 4 January 2023

Elon Musk's 'faster', smarter, more informative Twitter social media platform has been displaying the agility of dial-up Internet access in late-1990s rural and regional Australia.


From around 5am on Wednesday 4 January 2023, user reports of problems began to build: trouble accessing Twitter via website or app, notifications nowhere to be seen, and problems uploading to the site or having a tweet accepted. The degree of buffering was impressive, as were the alerts that something was wrong and to try again.


Twitter's underwhelming performance appeared to be affecting users in Australia and New Zealand.


A Downdetector graph showing the beginnings of the user-reported problem from 2.03pm on Tuesday 3 January up to 1.48pm on Wednesday 4 January 2023 in Australia. 








More people appear to have been reporting problems in New Zealand.


Meanwhile, on the morning of 4 January The Guardian newspaper revealed that Twitter Inc. is being sued for over $136,260 in unpaid rent on its California Street offices in San Francisco following Elon Musk's takeover. The landlord of 650 California Street has filed a lawsuit seeking back rent, as well as payment of attorney’s fees and other expenses.


The Guardian went on to say:


The company’s headquarters are located at another San Francisco address, 1355 Market Street, where Twitter has also reportedly fallen behind on rent, according to the New York Times. 


In addition to not paying rent and laying off workers, Musk’s Twitter is also auctioning off high-end office furniture, kitchen equipment and other relics from the past, when Twitter had over 7,500 full-time workers around the world and free lunch and other office perks were common. Some three-quarters of Twitter’s employee base is estimated to have left the company, either because they were laid off, fired or quit. 


Among the items Twitter is auctioning off are a pizza oven, a 40-quart commercial kitchen floor mixer (retails for around $18,000; bidding starts at $25), and high-end designer furniture such as Eames chairs from Herman Miller and Knoll Diamond chairs that retail in the thousands. 


Even a Twitter bird statue (bidding starts at $25) and a neon Twitter bird light display (bidding starts at $50) are up for grabs in this fire sale-style auction reminiscent of the dotcom bust of the early 2000s when failed tech startups were selling off their decadent office wares.


In yet another reversal of Twitter Inc's established policies, Musk announced he will allow political advertising on the ailing platform commencing sometime in 2023. 


It is no coincidence that 2023 will see the contest between candidates seeking party endorsement heat up ahead of the November 2024 US presidential election.


Wall Street Journal, 4 January 2023:


Twitter Inc. plans to expand the political advertisements it allows on the social-media platform after banning most of them in 2019, in the latest policy change by new owner Elon Musk.


The company also said Tuesday that it is relaxing its policy for cause-based ads in the U.S., which are ads that call for people to take action, educate and raise awareness in connection with the following categories: civic engagement, economic growth, environmental stewardship or social-equity causes.


In the coming weeks, the company said it would "align our advertising policy with that of TV and other media outlets," according to tweets from the Twitter Safety account. It didn't specify what that means and said it would "share more details as this work progresses." Twitter didn't respond to a request for comment.


Twitter largely banned political ads in November 2019, taking the opposite approach of social-media competitor Facebook at the time. Jack Dorsey, who was then chief executive of Twitter, said of the decision: "We believe political message reach should be earned, not bought."


The policy came with some exceptions that allowed for ads in support of certain politics-related topics such as voter registration. At the time, political advertising represented only a small portion of Twitter's overall advertising revenue.


Advertising in general has been a heated topic since Mr. Musk completed his $44 billion takeover of the company in October. Like many social-media companies, most of Twitter's revenue comes from advertising -- in 2021, roughly 89% of the $5.1 billion that the business brought in was from ads.


Some companies paused ad spending on the platform after the takeover amid uncertainty over how Mr. Musk planned to run the company…..


But as of Dec. 18, about 70% of Twitter's top 100 ad spenders from before the takeover weren't spending on the platform, according to an analysis of data from research firm Pathmatics…..


Meanwhile, as Musk remains obsessed with morphing Twitter into something other than a global social media platform, this rumour about his engagement with Tesla Inc. has surfaced.....


The New York Observer, 3 January 2023:


Elon Musk has reportedly named a deputy at Tesla amid shareholder pressure for him to resign as the electric carmaker’s CEO after its stock price tumbled 70 percent in 2022 and deliveries missed expectations. Zhu Xiaotong, who goes by Tom Zhu, is head of Tesla China and was promoted to oversee the company’s U.S. factories and sales operations in all of North America and Europe, Reuters reported today (Jan. 3).


Shareholders didn’t appear to think much of the news, as Tesla shares had fallen 13 percent by mid-afternoon.


An internal organizational chart reviewed by Reuters shows Zhu has retained his title as Tesla’s vice president for Greater China. But the new responsibilities in North America and Europe effectively make him the highest-level executive at Tesla after CEO Musk.


The promotion was confirmed by two anonymous sources who Reuters said had seen the new organizational chart. Tesla did not immediately respond to a request for comment….


Tesla’s China chief is rumored to be Musk’s successor


Zhu, who graduated from university in 2004 and holds a New Zealand passport, according to Chinese tech news site 36kr, joined Tesla in 2014 from an infrastructure background. He was credited for growing production capacity significantly at Tesla’s Shanghai Gigafactory, which opened in 2019 and is now Tesla’s most productive plant in the world….





Monday, 19 July 2021

Latest Newspoll has Federal Coalition neck and neck on the primary vote and Labor 6 points ahead on two-party preferred polling, as survey respondents mark PM Scott Morrison down following his mismanagement of the national pandemic response

 

The Australian


The Conversation, 18 July 2021:


Support for Scott Morrison and the government has slumped in Newspoll, in a major backlash against the botched vaccine rollout.


Labor has surged to a two-party lead of 53-47%, compared with 51-49% in the previous poll in late June.


The Australian reports the latest result is the worst for the Coalition this term, and if replicated at an election would deliver the government a clear loss.


Satisfaction with Morrison’s handling of the pandemic – which now sees lockdowns in the nation’s two largest states – plunged nine points in the last three weeks to 52%.


As the brought-forward Pfizer supplies start to arrive, confidence in the government’s management of the rollout is negative for the first time, with only 40% believing it is being handled satisfactorily.


Morrison’s net approval in Newspoll – plus 6 – is at its lowest since the bushfire crisis, with an eight point overall shift. Anthony Albanese’s position worsened a little – he is on net minus 8. Despite a small drop, Morrison retains a solid lead over Albanese as better PM – 51-33%.


Both Labor and the Coalition are polling 39% on primary votes – a two point fall for the Coalition and an equal rise for Labor.


The poll saw an 18 point drop in satisfaction with the handling of COVID since April.


Satisfaction with the government’s handling of the rollout was 53% in April and 50% in late June - in this poll 40% are satisfied with the handling and 57% are not…...


Wednesday, 21 August 2019

Vast majority of Australians (84%) support new laws to ban political parties and candidates from making “inaccurate and misleading” claims


The Guardian, 18 August 2019: 

The vast majority of Australians (84%) support new laws to ban political parties and candidates from making “inaccurate and misleading” claims, according to a new poll for the Australia Institute. 

On Sunday the progressive thinktank released a discussion paper canvassing options for truth in political advertising laws, following reports of widespread misinformation in the 2019 election campaign and calls from MPs including independent Zali Steggall and Liberal Jason Falinski for new minimum standards. 

The paper noted that truth in advertising laws operate in South Australia, where the Electoral Commission can request material be withdrawn and retracted and financial penalties apply, and New Zealand, where the media industry is self-regulated by an advertising standards body. 

It argues that industry bodies including Free TV Australia and the Advertising Standards Bureau could regulate truth in advertising, preventing the Australian Electoral Commission from being drawn into the contentious political process of adjudication. 

“Several models for increasing the truthfulness of election campaigns are available to policymakers,” it said. “They are popular and proven to work in other jurisdictions.” 

The paper includes results from a Dynata survey of 1,464 people conducted in the last week of July, with a margin of error of 3%, that found 84% of all voters want truth in advertising laws, with support in Labor, the Coalition and Greens all above the 84% level. 

Most respondents supported a range of penalties including fines (62%), forcing publications to retract claims (60%) and loss of public funding (54%). Criminal charges were supported by 42% of respondents. 

Respondents were unsure who should be the arbiter of truth, with support split between the judicial system (27%), electoral commissions (26%) and industry bodies (21%), with 15% unsure and 7% suggesting a new panel of experts. 

The survey also found 90% support for the proposition that newspapers, TV channels and social media networks should run corrections if they publish inaccurate or misleading ads.....

Tuesday, 13 November 2018

Like Turnbull before him, Scott Morrison fails to connect with voters




In the national Newspoll released on 11 November 2018, federal primary votes came in at:

Liberal-National Party 35 (-1)
Australian Labor Party 40 (+1)
Australian Greens 9 (0)
Pauline Hanson’s One Nation 6 (0)

These results gave this Two-Party Preferred Voting breakdown (based on 2016 federal election preference flows):

The Australian, Twitter, 11 November 2018


AAP General Newswire, 11 November 2018:

Bill Shorten has narrowed the gap to Scott Morrison as preferred prime minister as Labor extends its lead over the coalition in the latest Newspoll.

The coalition government has slipped further behind Labor in the latest Newspoll as Bill Shorten narrowed the gap to Scott Morrison as the nation's preferred leader.

The Liberal-National coalition now trail Labor by 10 points after slipping to 45-55 on a two-party preferred basis, according to the Newspoll published in The Australian on Sunday night.

The coalition’s primary vote fell by a point to 35 per cent - two points higher than the record low of 33 per cent.

Labor's primary vote, according to the national poll of 1802 voters, sits at 40 per cent - only the third time it has hit such a mark in almost four years.

The coalition has been behind on the primary vote since the leadership change in August.

Mr Morrison's latest effort to win back votes - his bus and plane tour of Queensland - appeared to not work with voters with his net approval rating sinking another five points to minus eight.....

Monday, 5 November 2018

Calling all Newtown, Erskineville, Redfern, Stanmore girls wherever you may now live - it's time to make history!


New South Wales goes to the polls on 23 March 2019 to elect a state government. It's everyone's chance to make a difference.

You may not know me, but people call me Aunty Norma. I'm a proud Wiradjuri woman and I've lived in the electorate of Newtown almost all my life.

I grew up in Redfern. Things were very different back then. Growing up was tough but we got by. My mother looked after us and because I was the baby, she took me everywhere with her.

I went to my local public school in Erskineville and Stanmore. As the only Indigenous kid in my class I remember sitting up the back and hoping no-one would notice me because I was so shy, but I knew all the answers and I always loved school.

My passion for education came from my mother. She taught me that education opens doors and that education is powerful.

After school, my love of education took me to Teacher's College and it was activists like Charles Perkins and Gary Foley who inspired me to make the journey to Harvard.

As the very first Aboriginal person to go to Harvard, I could not fail. I had to achieve.

With support from the Black Women's Action Group I got into Harvard. There were no scholarships back then. I did everything I could to survive and in 1985 I made history and graduated from Harvard with a Master of Education.

This to me was such a proud achievement.

These experiences made me the community activist I am today. I fought to open the National Aboriginal College and started the Lions Club in Redfern.

I also started Murawina in Redfern, the first fully Aboriginal-run full-day early childhood care program.

I feel like I've come so far from the little girl who sat at the back of the classroom, but every day things get harder for people like me.

Rent goes up, bills get more expensive, Uni and TAFE get more out of reach and our income stays the same.

That's why I'm asking for your help to make history. I need your help to become the first Indigenous member for Newtown.

I really can't do this alone and I need everyone's support.

If you can donate a couple of dollars, get involved in my campaign or tell your friends and family about my story, it all makes a huge impact.

Looking back, the shy kid at the back of the classroom would never have dreamed about running for Parliament.


This is our chance to make history.



Aunty Norma
Labor Candidate for Newtown


P.S. If you would like to contribute, click here!
Keep up to date with NSW Labor on Twitter and Facebook. To make a donation to NSW Labor, click here.

This email was authorised by Kaila Murnain, Level 9, 377 Sussex Street, Sydney.

Friday, 20 July 2018

Slowly but surely Russian connections between the UK Brexit referendum campaign and the US presidential campaign are beginning to emerge


“We have concluded that there are risks in relation to the processing of personal data by many political parties. Particular concerns include: the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence, a lack of fair processing, and use of third party data analytics companies with insufficient checks around consent….We have looked closely at the role of those who buy and sell personal data-sets in the UK. Our existing investigation of the privacy issues raised by their work has been expanded to include their activities in political processes….The investigation has identified a total of 172 organisations of interest that required engagement, of which around 30 organisations have formed the main focus of our enquiries, including political parties, data analytics companies and major social media platforms…..Similarly, we have identified a total of 285 individuals relating to our investigation.” [UK Information Commissioner’s Office, Investigation into the use of data analytics in political campaigns: Investigation update, July 2018]

Slowly but surely the Russian connections between the UK Brexit referendum campaign and the US presidential campaign are beginning to emerge.

The Guardian, 15 July 2018:

A source familiar with the FBI investigation revealed that the commissioner and her deputy spent last week with law enforcement agencies in the US including the FBI. And Denham’s deputy, James Dipple-Johnstone, confirmed to the Observer that “some of the systems linked to the investigation were accessed from IP addresses that resolve to Russia and other areas of the CIS [Commonwealth of Independent States]”.

It was also reported that Senator Mark Warner, vice chair of the US Senate Intelligence Committee, and Damian Collins MP, chair of the Digital, Culture, Media and Sport select committee inquiry into "fake news", met in Washington on or about 16 July 2018 to discuss Russian interference in both British and American democratic processes during an Atlantic Council meeting.

UK Information Commissioner’s Office (ICO), media release, 10 July 2018:

Information Commissioner Elizabeth Denham has today published a detailed update of her office’s investigation into the use of data analytics in political campaigns.

In March 2017, the ICO began looking into whether personal data had been misused by campaigns on both sides of the referendum on membership of the EU.

In May it launched an investigation that included political parties, data analytics companies and major social media platforms.

Today’s progress report gives details of some of the organisations and individuals under investigation, as well as enforcement actions so far.

This includes the ICO’s intention to fine Facebook a maximum £500,000 for two breaches of the Data Protection Act 1998.

Facebook, with Cambridge Analytica, has been the focus of the investigation since February when evidence emerged that an app had been used to harvest the data of 50 million Facebook users across the world. This is now estimated at 87 million.

The ICO’s investigation concluded that Facebook contravened the law by failing to safeguard people’s information. It also found that the company failed to be transparent about how people’s data was harvested by others.

Facebook has a chance to respond to the Commissioner’s Notice of Intent, after which a final decision will be made.

Other regulatory action set out in the report comprises:

warning letters to 11 political parties and notices compelling them to agree to audits of their data protection practices;

an Enforcement Notice for SCL Elections Ltd to compel it to deal properly with a subject access request from Professor David Carroll;

a criminal prosecution for SCL Elections Ltd for failing to properly deal with the ICO’s Enforcement Notice;

an Enforcement Notice for Aggregate IQ to stop processing retained data belonging to UK citizens;

a Notice of Intent to take regulatory action against data broker Emma’s Diary (Lifecycle Marketing (Mother and Baby) Ltd); and

audits of the main credit reference companies and Cambridge University Psychometric Centre.

Information Commissioner Elizabeth Denham said:

“We are at a crossroads. Trust and confidence in the integrity of our democratic processes risk being disrupted because the average voter has little idea of what is going on behind the scenes.

“New technologies that use data analytics to micro-target people give campaign groups the ability to connect with individual voters. But this cannot be at the expense of transparency, fairness and compliance with the law.

She added:

“Fines and prosecutions punish the bad actors, but my real goal is to effect change and restore trust and confidence in our democratic system.”

A second, partner report, titled Democracy Disrupted? Personal information and political influence, sets out findings and recommendations arising out of the 14-month investigation.

Among the ten recommendations is a call for the Government to introduce a statutory Code of Practice for the use of personal data in political campaigns.

Ms Denham has also called for an ethical pause to allow Government, Parliament, regulators, political parties, online platforms and the public to reflect on their responsibilities in the era of big data before there is a greater expansion in the use of new technologies.

She said:

“People cannot have control over their own data if they don’t know or understand how it is being used. That’s why greater and genuine transparency about the use of data analytics is vital.”

In addition, the ICO commissioned research from the Centre for the Analysis of Social Media at the independent thinktank DEMOS. Its report, also published today, examines current and emerging trends in how data is used in political campaigns, how use of technology is changing and how it may evolve in the next two to five years. 

The investigation, one of the largest of its kind by a Data Protection Authority, remains ongoing. The 40-strong investigation team is pursuing active lines of enquiry and reviewing a considerable amount of material retrieved from servers and equipment.

The interim progress report has been produced to inform the work of the DCMS Select Committee's inquiry into fake news.

The next phase of the ICO’s work is expected to be concluded by the end of October 2018.

The Washington Post, 28 June 2018:

BRISTOL, England — On Aug. 19, 2016, Arron Banks, a wealthy British businessman, sat down at the palatial residence of the Russian ambassador to London for a lunch of wild halibut and Belevskaya pastila apple sweets accompanied by Russian white wine.

Banks had just scored a huge win. From relative obscurity, he had become the largest political donor in British history by pouring millions into Brexit, the campaign to disentangle the United Kingdom from the European Union that had earned a jaw-dropping victory at the polls two months earlier.

Now he had something else that bolstered his standing as he sat down with his new Russian friend, Ambassador Alexander Yakovenko: his team’s deepening ties to Donald Trump’s insurgent presidential bid in the United States. A major Brexit supporter, Stephen K. Bannon, had just been installed as chief executive of Trump’s campaign. And Banks and his fellow Brexiteers had been invited to attend a fundraiser with Trump in Mississippi.

Less than a week after the meeting with the Russian envoy, Banks and firebrand Brexit politician Nigel Farage — by then a cult hero among some anti-establishment Trump supporters — were huddling privately with the Republican nominee in Jackson, Miss., where Farage wowed a foot-stomping crowd at a Trump rally.

Banks’s journey from a lavish meal with a Russian diplomat in London to the raucous heart of Trump country was part of an unusual intercontinental charm offensive by the wealthy British donor and his associates, a hard-partying lot who dubbed themselves the “Bad Boys of Brexit.” Their efforts to simultaneously cultivate ties to Russian officials and Trump’s campaign have captured the interest of investigators in the United Kingdom and the United States, including special counsel Robert S. Mueller III.

Vice News, 11 June 2018:

Yakovenko is already on the radar of special counsel Robert Mueller, who is investigating Russian interference in the U.S. presidential election, after he was named in the indictment of ex-Trump campaign aide George Papadopoulos….

Banks, along with close friend and former Ukip leader Nigel Farage, was among the very first overseas political figures to meet Trump after his surprise victory in November 2016.

It also emerged over the weekend that Banks passed contact information for Trump’s transition team to the Russians.