
Wednesday 17 April 2024

Discussing Artificial Intelligence (AI) in April 2024

 

Well, this month attention has turned away from AI being used to create multiple fake bird species and celebrity images, or from Microsoft using excruciatingly garish alternative landscapes to promote its software. The focus has shifted back to AI being used by bad actors in global and domestic political arenas during election years.


Nature, WORLD VIEW, 9 April 2024:

AI-fuelled election campaigns are here — where are the rules?

Political candidates are increasingly using AI-generated ‘softfakes’ to boost their campaigns. This raises deep ethical concerns.

By Rumman Chowdhury


Of the nearly two billion people living in countries that are holding elections this year, some have already cast their ballots. Elections held in Indonesia and Pakistan in February, among other countries, offer an early glimpse of what’s in store as artificial intelligence (AI) technologies steadily intrude into the electoral arena. The emerging picture is deeply worrying, and the concerns are much broader than just misinformation or the proliferation of fake news.


As the former director of the Machine Learning, Ethics, Transparency and Accountability (META) team at Twitter (before it became X), I can attest to the massive ongoing efforts to identify and halt election-related disinformation enabled by generative AI (GAI). But uses of AI by politicians and political parties for purposes that are not overtly malicious also raise deep ethical concerns.


GAI is ushering in an era of ‘softfakes’. These are images, videos or audio clips that are doctored to make a political candidate seem more appealing. Whereas deepfakes (digitally altered visual media) and cheap fakes (low-quality altered media) are associated with malicious actors, softfakes are often made by the candidate’s campaign team itself.




In Indonesia’s presidential election, for example, winning candidate Prabowo Subianto relied heavily on GAI, creating and promoting cartoonish avatars to rebrand himself as gemoy, which means ‘cute and cuddly’. This AI-powered makeover was part of a broader attempt to appeal to younger voters and displace allegations linking him to human-rights abuses during his stint as a high-ranking army officer. The BBC dubbed him “Indonesia’s ‘cuddly grandpa’ with a bloody past”. Furthermore, clever use of deepfakes, including an AI ‘get out the vote’ virtual resurrection of Indonesia’s deceased former president Suharto by a group backing Subianto, is thought by some to have contributed to his surprising win.


Nighat Dad, the founder of the research and advocacy organization Digital Rights Foundation, based in Lahore, Pakistan, documented how candidates in Bangladesh and Pakistan used GAI in their campaigns, including AI-written articles penned under the candidate’s name. South and southeast Asian elections have been flooded with deepfake videos of candidates speaking in numerous languages, singing nostalgic songs and more — humanizing them in a way that the candidates themselves couldn’t do in reality.


What should be done? Global guidelines might be considered around the appropriate use of GAI in elections, but what should they be? There have already been some attempts. The US Federal Communications Commission, for instance, banned the use of AI-generated voices in phone calls, known as robocalls. Businesses such as Meta have launched watermarks — a label or embedded code added to an image or video — to flag manipulated media.


But these are blunt and often voluntary measures. Rules need to be put in place all along the communications pipeline — from the companies that generate AI content to the social-media platforms that distribute them.




Content-generation companies should take a closer look at defining how watermarks should be used. Watermarking can be as obvious as a stamp, or as complex as embedded metadata to be picked up by content distributors.
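To make the 'embedded metadata' end of that spectrum concrete, here is a minimal Python sketch (using the Pillow imaging library) that writes and reads a provenance note in a PNG file's text metadata. The tag name, note wording and file names are invented for illustration; the real watermarking schemes the article alludes to are more robust and standardised than this.

```python
# Minimal sketch: embed and read back a provenance note in a PNG's text metadata.
# The "ai_provenance" key and the note's wording are hypothetical, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_provenance(src_path: str, dst_path: str, note: str) -> None:
    """Save a copy of the image with a provenance note stored as a PNG text chunk."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_provenance", note)
    img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> str | None:
    """Return the provenance note if the file carries one, else None."""
    img = Image.open(path)
    return getattr(img, "text", {}).get("ai_provenance")

# Example usage with placeholder file names:
# add_provenance("campaign_image.png", "campaign_image_tagged.png",
#                "generated-by: example-model; approved-by: example-campaign")
# print(read_provenance("campaign_image_tagged.png"))
```

Metadata of this kind is also easily stripped when an image is re-encoded or screenshotted, which is part of why such measures are described above as blunt and often voluntary.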


Companies that distribute content should put in place systems and resources to monitor not just misinformation, but also election-destabilizing softfakes that are released through official, candidate-endorsed channels. When candidates don’t adhere to watermarking — none of these practices are yet mandatory — social-media companies can flag and provide appropriate alerts to viewers. Media outlets can and should have clear policies on softfakes. They might, for example, allow a deepfake in which a victory speech is translated to multiple languages, but disallow deepfakes of deceased politicians supporting candidates.


Election regulatory and government bodies should closely examine the rise of companies that are engaging in the development of fake media. Text-to-speech and voice-emulation software from Eleven Labs, an AI company based in New York City, was deployed to generate robocalls that tried to dissuade voters from voting for US President Joe Biden in the New Hampshire primary elections in January, and to create the softfakes of former Pakistani prime minister Imran Khan during his 2024 campaign outreach from a prison cell. Rather than pass softfake regulation on companies, which could stifle allowable uses such as parody, I instead suggest establishing election standards on GAI use. There is a long history of laws that limit when, how and where candidates can campaign, and what they are allowed to say.


Citizens have a part to play as well. We all know that you cannot trust what you read on the Internet. Now, we must develop the reflexes to not only spot altered media, but also to avoid the emotional urge to think that candidates’ softfakes are ‘funny’ or ‘cute’. The intent of these isn’t to lie to you — they are often obviously AI generated. The goal is to make the candidate likeable.


Softfakes are already swaying elections in some of the largest democracies in the world. We would be wise to learn and adapt as the ongoing year of democracy, with some 70 elections, unfolds over the next few months.


COMPETING INTERESTS

The author declares no competing interests.

[my yellow highlighting]



Charles Darwin University, Expert Alert, media release, 12 April 2024, excerpt:


Governments must crack down on AI interfering with elections


Charles Darwin University Computational and Artificial Intelligence expert Associate Professor Niusha Shafiabady.


“Like it or not, we are affected by what we come across in social media platforms. The future wars are not planned by missiles or tanks, but they can easily run on social media platforms by influencing what people think and do. This applies to election results.


Microsoft has said that the election outcomes in India, Taiwan and the US could be affected by the AI plays by powers like China or North Korea. In the world of technology, we call this disinformation, meaning producing misleading information on purpose to change people’s views. What can we do to fight these types of attacks? Well, I believe we should question what we see or read. Not everything we hear is based on the truth. Everyone should be aware of this.


Governments should enforce more strict regulations to fight misinformation, things like: Finding triggers that show signs of unwanted interference; blocking and stopping the unauthorised or malicious trends; enforcing regulations on social media platforms to produce reports to the government to demonstrate and measure the impact and the flow of the information on the matters that affect the important issues such as elections and healthcare; and enforcing regulations on the social media platforms to monitor and stop the fake information sources or malicious actors.”


The Conversation, 10 April 2024:


Election disinformation: how AI-powered bots work and how you can protect yourself from their influence


Nick Hajli, AI Strategist and Professor of Digital Strategy, Loughborough University



Social media platforms have become more than mere tools for communication. They’ve evolved into bustling arenas where truth and falsehood collide. Among these platforms, X stands out as a prominent battleground. It’s a place where disinformation campaigns thrive, perpetuated by armies of AI-powered bots programmed to sway public opinion and manipulate narratives.


AI-powered bots are automated accounts that are designed to mimic human behaviour. Bots on social media, chat platforms and conversational AI are integral to modern life. They are needed to make AI applications run effectively......


How bots work


Social influence is now a commodity that can be acquired by purchasing bots. Companies sell fake followers to artificially boost the popularity of accounts. These followers are available at remarkably low prices, with many celebrities among the purchasers.


In the course of our research, for example, colleagues and I detected a bot that had posted 100 tweets offering followers for sale.


Using AI methodologies and a theoretical approach called actor-network theory, my colleagues and I dissected how malicious social bots manipulate social media, influencing what people think and how they act with alarming efficacy. We can tell if fake news was generated by a human or a bot with an accuracy rate of 79.7%. It is crucial to comprehend how both humans and AI disseminate disinformation in order to grasp the ways in which humans leverage AI for spreading misinformation.
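The article does not describe the model behind that 79.7% figure, so the following is only a toy sketch of the general approach (a supervised text classifier trained on labelled posts) using scikit-learn; the example posts and labels are invented placeholders, not the researchers' data or method.

```python
# Toy sketch of a bot-vs-human text classifier. This is NOT the published method;
# the corpus, labels and model choice below are placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: 1 = suspected bot-generated, 0 = human-written.
posts = [
    "Follow back! Get 1000 followers instantly, click the link now!",
    "Huge giveaway!!! Retweet and follow to win, link in bio!!!",
    "Breaking: candidate X exposed, share before it is deleted!!!",
    "Had a long chat with my neighbour about the local school funding vote.",
    "Reading the candidates' policy costings tonight, some surprises in there.",
    "Polling place on Elm St had a short queue this morning.",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features over word unigrams/bigrams feeding a logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(posts, labels)

new_post = "Retweet now!!! Click the link to expose candidate X!!!"
print(clf.predict([new_post])[0])           # predicted label (1 = bot-like)
print(clf.predict_proba([new_post])[0][1])  # estimated probability of bot-like
```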


To take one example, we examined the activity of an account named “True Trumpers” on Twitter.



The account was established in August 2017, has no followers and no profile picture, but had, at the time of the research, posted 4,423 tweets. These included a series of entirely fabricated stories. It’s worth noting that this bot originated from an eastern European country.




Research such as this influenced X to restrict the activities of social bots. In response to the threat of social media manipulation, X has implemented temporary reading limits to curb data scraping and manipulation. Verified accounts have been limited to reading 6,000 posts a day, while unverified accounts can read 600 a day. This is a new update, so we don’t yet know if it has been effective.


Can we protect ourselves?

However, the onus ultimately falls on users to exercise caution and discern truth from falsehood, particularly during election periods. By critically evaluating information and checking sources, users can play a part in protecting the integrity of democratic processes from the onslaught of bots and disinformation campaigns on X. Every user is, in fact, a frontline defender of truth and democracy. Vigilance, critical thinking, and a healthy dose of scepticism are essential armour.


With social media, it’s important for users to understand the strategies employed by malicious accounts.


Malicious actors often use networks of bots to amplify false narratives, manipulate trends and swiftly disseminate misinformation. Users should exercise caution when encountering accounts exhibiting suspicious behaviour, such as excessive posting or repetitive messaging.
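Those two warning signs, unusually high posting volume and near-duplicate messaging, are simple enough to express as a rough heuristic. The sketch below is illustrative only: the thresholds and sample data are made up, and real bot detection relies on far richer signals.

```python
# Illustrative heuristic only: flag accounts that post at a very high rate
# or repeat near-identical messages. Thresholds are arbitrary examples.
from difflib import SequenceMatcher

def repetition_ratio(messages: list[str]) -> float:
    """Fraction of consecutive message pairs that are near-duplicates."""
    if len(messages) < 2:
        return 0.0
    similar = sum(
        SequenceMatcher(None, a, b).ratio() > 0.9
        for a, b in zip(messages, messages[1:])
    )
    return similar / (len(messages) - 1)

def looks_suspicious(posts_per_day: float, messages: list[str]) -> bool:
    # Arbitrary example thresholds: over 100 posts/day or over 50% repeated content.
    return posts_per_day > 100 or repetition_ratio(messages) > 0.5

# Example with invented data:
sample_messages = ["Vote NO on bill 12!", "Vote NO on bill 12!!", "Vote NO on bill 12!"]
print(looks_suspicious(posts_per_day=240, messages=sample_messages))  # True
```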


Disinformation is also frequently propagated through dedicated fake news websites. These are designed to imitate credible news sources. Users are advised to verify the authenticity of news sources by cross-referencing information with reputable sources and consulting fact-checking organisations.


Self awareness is another form of protection, especially from social engineering tactics. Psychological manipulation is often deployed to deceive users into believing falsehoods or engaging in certain actions. Users should maintain vigilance and critically assess the content they encounter, particularly during periods of heightened sensitivity such as elections.


By staying informed, engaging in civil discourse and advocating for transparency and accountability, we can collectively shape a digital ecosystem that fosters trust, transparency and informed decision-making.


Philadelphia Inquirer, 14 April 2024:

Expect to see AI ‘weaponized to deceive voters’ in this year’s presidential election

Alfred Lubrano


As the presidential campaign slowly progresses, artificial intelligence continues to accelerate at a breathless pace — capable of creating an infinite number of fraudulent images that are hard to detect and easy to believe.


Experts warn that by November voters in Pennsylvania and other states will have witnessed counterfeit photos and videos of candidates enacting one scenario after another, with reality wrecked and the truth nearly unknowable.


“This is the first presidential campaign of the AI era,” said Matthew Stamm, a Drexel University electrical and computer engineering professor who leads a team that detects false or manipulated political images. “I believe things are only going to get worse.”


Last year, Stamm’s group debunked a political ad for then-presidential candidate Florida Republican Gov. Ron DeSantis that appeared on Twitter. It showed former President Donald Trump embracing and kissing Anthony Fauci, long a target of the right for his response to COVID-19.


That spot was a “watershed moment” in U.S. politics, said Stamm, director of his school’s Multimedia and Information Security Lab. “Using AI-created media in a misleading manner had never been seen before in an ad for a major presidential candidate,” he said.


“This showed us how there’s so much potential for AI to create voting misinformation. It could get crazy.”


Election experts speak with dread of AI’s potential to wreak havoc on the election: false “evidence” of candidate misconduct; sham videos of election workers destroying ballots or preventing people from voting; phony emails that direct voters to go to the wrong polling locations; ginned-up texts sending bogus instructions to election officials that create mass confusion.....


Malicious intent


AI allows people with malicious intent to work with great speed and sophistication at low cost, according to the Cybersecurity & Infrastructure Security Agency, part of the U.S. Department of Homeland Security.


That swiftness was on display in June 2018. Doermann’s University at Buffalo colleague, Siwei Lyu, presented a paper that demonstrated how AI-generated deepfake videos could be detected because no one was blinking their eyes; the faces had been transferred from still photos.
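The underlying statistic is simple: real people blink every few seconds, so a clip whose subject never blinks is suspect. The sketch below assumes a hypothetical upstream face-landmark model has already produced a per-frame eye-openness score; it merely counts blinks and flags clips whose blink rate is implausibly low. The scores and thresholds are illustrative, not Lyu's published method.

```python
# Sketch of the blink-frequency idea. Assumes a hypothetical upstream model has
# produced one eye-openness score per frame (1.0 = fully open, 0.0 = shut).
def count_blinks(openness: list[float], closed_threshold: float = 0.2) -> int:
    """Count transitions from open to closed eyes across the frame sequence."""
    blinks, previously_closed = 0, False
    for score in openness:
        closed = score < closed_threshold
        if closed and not previously_closed:
            blinks += 1
        previously_closed = closed
    return blinks

def flag_no_blink(openness: list[float], fps: float,
                  min_blinks_per_minute: float = 2.0) -> bool:
    """Flag a clip whose blink rate is implausibly low (thresholds are illustrative)."""
    minutes = len(openness) / fps / 60
    return minutes > 0 and count_blinks(openness) / minutes < min_blinks_per_minute

# Example with invented scores for a 10-second, 30 fps clip that never blinks:
scores = [0.95] * 300
print(flag_no_blink(scores, fps=30))  # True -> suspicious by this heuristic
```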


Within three weeks, AI-equipped fraudsters stopped creating deepfakes based on photos and began culling from videos in which people blinked naturally, Doermann said, adding, “Every time we publish a solution for detecting AI, somebody gets around it quickly.”


Six years later, with AI that much more developed, “it’s gained remarkable capacities that improve daily,” said political communications expert Kathleen Hall Jamieson, director of the University of Pennsylvania’s Annenberg Public Policy Center. “Anything we can say now about AI will change in two weeks. Increasingly, that means deepfakes won’t be easily detected.


“We should be suspicious of everything we see.”


AI-generated misinformation helps exacerbate already-entrenched political polarization throughout America, said Cristina Bicchieri, Penn professor of philosophy and psychology.


“When we see something in social media that aligns with our point of view, even if it’s fake, we tend to want to believe it,” she said.


To battle fabrications, Stamm of Drexel said, the smart consumer could delay reposting emotionally charged material from social media until checking its veracity.


But that’s a lot to ask.


Human overreaction to a false report, he acknowledged, “is harder to resolve than any anti-AI stuff I develop in my lab.


“And that’s another reason why we’re in uncharted waters.”


Tuesday 30 June 2020

"An infamous federal government bureaucrat at the centre of one of the biggest scandals in the ABC’s history – a fraudulent story which sparked the multi-billion dollar Northern Territory intervention – has been promoted to serve as Australia’s High Commissioner to Ghana."


New Matilda, 28 June 2020:

An infamous federal government bureaucrat at the centre of one of the biggest scandals in the ABC’s history – a fraudulent story which sparked the multi-billion dollar Northern Territory intervention – has been promoted to serve as Australia’s High Commissioner to Ghana.

Gregory Andrews was working as a senior adviser to Indigenous affairs minister Mal Brough in 2006 when he appeared as the star witness in an ABC Lateline story which falsely described him as an ‘anonymous youth worker’.
Andrews – whose face was filmed in shadow and his voice digitized to hide his identity – wept openly on camera as he described how, in the mid-2000s, he reported incidents of sexual violence against women and children in Mutitjulu to police, but withdrew his statements after being threatened by powerful men in the community.
It subsequently emerged the entire story was a fiction – Andrews had never made a single report of violence against women or children to police.
Andrews was also forced to apologise to the Federal Senate for providing misleading testimony, and later became the first public servant in history to avoid appearing before a Senate Inquiry on the grounds of stress.
The day after the ABC story was broadcast, the Northern Territory government announced a high-level inquiry into the claims. Almost a year later, the resulting reporting – Little Children Are Sacred – was used by the Howard Government as the basis for launching the Northern Territory intervention.

Infamous Canberra bureaucrat, Gregory Andrews, pictured in 2017.

Reporting by Fairfax revealed that shortly before Prime Minister John Howard announced the NT intervention, his government received a report from Liberal Party polling firm Crosby Textor advising that its best chance of winning the 2007 election was to intervene in the affairs of the state and territory governments, to try and make them look incompetent (all state and territory governments at the time were controlled by Labor).
The strategy failed – the Howard government lost the election, with former Foreign Affairs minister Alexander Downer lamenting afterwards that despite the loss, the policy proved popular with Australian voters.
Andrews worked in the community of Mutitjulu for a short period in 2005 as a project manager for the Northern Territory government. He subsequently joined the Department of Families and Community Services, Housing and Indigenous Affairs, and was providing advice directly to Minister Brough when the ABC falsely described him as an ‘anonymous youth worker’.
Talking points which had been prepared by Andrews for the minister prior to his appearance on Lateline were subsequently leaked – they revealed that once Andrews was provided anonymity by Lateline, he grossly embellished his story.
Andrews claimed children were being traded between Aboriginal communities in Central Australia as “sex slaves”. A lengthy investigation by Northern Territory police found “no evidence whatsoever” to support the claims. An Australian Crime Commission investigation also found the allegations to be false.....
Read the full article here.