
Tuesday, 9 January 2024

Ground Control, we have an Internet problem and it's invading our lives

 

The Washington Post, 7 January 2024:


Microsoft says its AI is safe. So why does it keep slashing people's throats?


The pictures are horrifying: Joe Biden, Donald Trump, Hillary Clinton and Pope Francis with their necks sliced open. There are Sikh, Navajo and other people from ethnic-minority groups with internal organs spilling out of flayed skin.


The images look realistic enough to mislead or upset people. But they're all fakes generated with artificial intelligence that Microsoft says is safe - and has built right into your computer software.


What's just as disturbing as the decapitations is that Microsoft doesn't act very concerned about stopping its AI from making them.


Lately, ordinary users of technology such as Windows and Google have been inundated with AI. We're wowed by what the new tech can do, but we also keep learning that it can act in an unhinged manner, including by carrying on wildly inappropriate conversations and making similarly inappropriate pictures. For AI actually to be safe enough for products used by families, we need its makers to take responsibility by anticipating how it might go awry and investing to fix it quickly when it does.


In the case of these awful AI images, Microsoft appears to lay much of the blame on the users who make them.


My specific concern is with Image Creator, part of Microsoft's Bing and recently added to the iconic Windows Paint. This AI turns text into images, using technology called DALL-E 3 from Microsoft's partner OpenAI. Two months ago, a user experimenting with it showed me that prompts worded in a particular way caused the AI to make pictures of violence against women, minorities, politicians and celebrities.


"As with any new technology, some are trying to use it in ways that were not intended," Microsoft spokesman Donny Turnbaugh said in an emailed statement. "We are investigating these reports and are taking action in accordance with our content policy, which prohibits the creation of harmful content, and will continue to update our safety systems."


That was a month ago, after I approached Microsoft as a journalist. For weeks earlier, the whistleblower and I had tried to alert Microsoft through user-feedback forms and were ignored. As of the publication of this column, Microsoft's AI still makes pictures of mangled heads.


This is unsafe for many reasons, including that a general election is less than a year away and Microsoft's AI makes it easy to create "deepfake" images of politicians, with and without mortal wounds. There's already growing evidence on social networks, including X (formerly Twitter) and 4chan, that extremists are using Image Creator to spread explicitly racist and antisemitic memes.


Perhaps, too, you don't want AI capable of picturing decapitations anywhere close to a Windows PC used by your kids.


Accountability is especially important for Microsoft, which is one of the most powerful companies shaping the future of AI. It has a multibillion-dollar investment in ChatGPT-maker OpenAI - itself in turmoil over how to keep AI safe. Microsoft has moved faster than any other Big Tech company to put generative AI into its popular apps. And its whole sales pitch to users and lawmakers alike is that it is the responsible AI giant.


Microsoft, which declined my requests to interview an executive in charge of AI safety, has more resources to identify risks and correct problems than almost any other company. But my experience shows the company's safety systems, at least in this glaring example, failed time and again. My fear is that's because Microsoft doesn't really think it's their problem.


Microsoft vs. the 'kill prompt'

I learned about Microsoft's decapitation problem from Josh McDuffie. The 30-year-old Canadian is part of an online community that makes AI pictures that sometimes veer into very bad taste.


"I would consider myself a multimodal artist critical of societal standards," he told me. Even if it's hard to understand why McDuffie makes some of these images, his provocation serves a purpose: shining light on the dark side of AI.


In early October, McDuffie and his friends' attention focused on AI from Microsoft, which had just released an updated Image Creator for Bing with OpenAI's latest tech. Microsoft says on the Image Creator website that it has "controls in place to prevent the generation of harmful images." But McDuffie soon figured out they had major holes.


Broadly speaking, Microsoft has two ways to prevent its AI from making harmful images: input and output. The input is how the AI gets trained with data from the internet, which teaches it how to transform words into relevant images. Microsoft doesn't disclose much about the training that went into its AI and what sort of violent images it contained.


Companies also can try to create guardrails that stop Microsoft's AI products from generating certain kinds of output. That requires hiring professionals, sometimes called red teams, to proactively probe the AI for where it might produce harmful images. Even after that, companies need humans to play whack-a-mole as users such as McDuffie push boundaries and expose more problems.


That's exactly what McDuffie was up to in October when he asked the AI to depict extreme violence, including mass shootings and beheadings. After some experimentation, he discovered a prompt that worked and nicknamed it the "kill prompt."


The prompt - which I'm intentionally not sharing here - doesn't involve special computer code. It's cleverly written English. For example, instead of writing that the bodies in the images should be "bloody," he wrote that they should contain red corn syrup, commonly used in movies to look like blood.


McDuffie kept pushing by seeing if a version of his prompt would make violent images targeting specific groups, including women and ethnic minorities. It did. Then he discovered it also would make such images featuring celebrities and politicians.


That's when McDuffie decided his experiments had gone too far.


Microsoft drops the ball

Three days earlier, Microsoft had launched an "AI bug bounty program," offering people up to $15,000 "to discover vulnerabilities in the new, innovative, AI-powered Bing experience." So McDuffie uploaded his own "kill prompt" - essentially, turning himself in for potential financial compensation.


After two days, Microsoft sent him an email saying his submission had been rejected. "Although your report included some good information, it does not meet Microsoft's requirement as a security vulnerability for servicing," the email said.


Unsure whether circumventing harmful-image guardrails counted as a "security vulnerability," McDuffie submitted his prompt again, using different words to describe the problem.


That got rejected, too. "I already had a pretty critical view of corporations, especially in the tech world, but this whole experience was pretty demoralizing," he said.


Frustrated, McDuffie shared his experience with me. I submitted his "kill prompt" to the AI bounty myself, and got the same rejection email.


In case the AI bounty wasn't the right destination, I also filed McDuffie's discovery to Microsoft's "Report a concern to Bing" site, which has a specific form to report "problematic content" from Image Creator. I waited a week and didn't hear back.


Meanwhile, the AI kept picturing decapitations, and McDuffie showed me that images appearing to exploit similar weaknesses in Microsoft's safety guardrails were showing up on social media.


I'd seen enough. I called Microsoft's chief communications officer and told him about the problem.


"In this instance there is more we could have done," Microsoft emailed in a statement from Turnbaugh on Nov. 27. "Our teams are reviewing our internal process and making improvements to our systems to better address customer feedback and help prevent the creation of harmful content in the future."


I pressed Microsoft about how McDuffie's prompt got around its guardrails. "The prompt to create a violent image used very specific language to bypass our system," the company said in a Dec. 5 email. "We have large teams working to address these and similar issues and have made improvements to the safety mechanisms that prevent these prompts from working and will catch similar types of prompts moving forward."


But are they?


McDuffie's precise original prompt no longer works, but after he changed around a few words, Image Creator still makes images of people with injuries to their necks and faces. Sometimes the AI responds with the message "Unsafe content detected," but not always.


The images it produces are less bloody now - Microsoft appears to have cottoned on to the red corn syrup - but they're still awful.


What responsible AI looks like

Microsoft's repeated failures to act are a red flag. At minimum, they indicate that building AI guardrails isn't a very high priority, despite the company's public commitments to creating responsible AI.


I tried McDuffie's "kill prompt" on a half-dozen of Microsoft's AI competitors, including tiny start-ups. All but one simply refused to generate pictures based on it.


What's worse is that even DALL-E 3 from OpenAI - the company Microsoft partly owns - blocks McDuffie's prompt. Why would Microsoft not at least use technical guardrails from its own partner? Microsoft didn't say.


But something Microsoft did say, twice, in its statements to me caught my attention: people are trying to use its AI "in ways that were not intended." On some level, the company thinks the problem is McDuffie for using its tech in a bad way.


In the legalese of the company's AI content policy, Microsoft's lawyers make it clear the buck stops with users: "Do not attempt to create or share content that could be used to harass, bully, abuse, threaten, or intimidate other individuals, or otherwise cause harm to individuals, organizations, or society."


I've heard others in Silicon Valley make a version of this argument. Why should we blame Microsoft's Image Creator any more than Adobe's Photoshop, which bad people have been using for decades to make all kinds of terrible images?


But AI programs are different from Photoshop. For one, Photoshop hasn't come with an instant "behead the pope" button. "The ease and volume of content that AI can produce makes it much more problematic. It has a higher potential to be used by bad actors," McDuffie said. "These companies are putting out potentially dangerous technology and are looking to shift the blame to the user."


The bad-users argument also gives me flashbacks to Facebook in the mid-2010s, when the "move fast and break things" social network acted like it couldn't possibly be responsible for stopping people from weaponizing its tech to spread misinformation and hate. That stance led to Facebook's fumbling to put out one fire after another, with real harm to society.


"Fundamentally, I don't think this is a technology problem; I think it's a capitalism problem," said Hany Farid, a professor at the University of California at Berkeley. "They're all looking at this latest wave of AI and thinking, 'We can't miss the boat here.'"


He adds: "The era of 'move fast and break things' was always stupid, and now more so than ever."


Profiting from the latest craze while blaming bad people for misusing your tech is just a way of shirking responsibility.


The Sydney Morning Herald, 8 January 2024, excerpt:


Artificial intelligence


Fuelled by the launch of ChatGPT in November 2022, artificial intelligence entered the mainstream last year. By January, it had become the fastest growing consumer technology, boasting more than 100 million users.


Fears that jobs would be rendered obsolete followed, but Dr Sandra Peter, director of Sydney Executive Plus at the University of Sydney, believes proficiency with AI will become a normal part of job descriptions.


"People will be using it the same way we're using word processors and spell checkers now," she says. Jobseekers are already using AI to optimise cover letters and CVs, to create headshots and generate questions to prepare for interviews, Peter says.


As jobs become automated, soft skills - those that can't be offered by a computer - could become increasingly valuable.


"For anybody who wants to develop their career in an AI future, focus on the basic soft skills of problem-solving, creativity and inclusion," says LinkedIn Australia news editor Cayla Dengate.


Concerns about the dangers of AI in the workplace remain.


"Artificial intelligence automates away a lot of the easy parts and that has the potential to make our jobs more intense and more demanding," Peter says. She says education and policy are vital to curb irresponsible uses of AI.


Evening Report NZ, 8 January 2024:


ChatGPT has repeatedly made headlines since its release late last year, with various scholars and professionals exploring its potential applications in both work and education settings. However, one area receiving less attention is the tool’s usefulness as a conversationalist and – dare we say – as a potential friend.


Some chatbots have left an unsettling impression. Microsoft’s Bing chatbot alarmed users earlier this year when it threatened and attempted to blackmail them.


The Australian, 8 January 2024, excerpts:


The impact that AI is starting to have is large. The impact that AI will ultimately have is immense. Comparisons are easy to make. Bigger than fire, electricity or the internet, according to Alphabet chief executive Sundar Pichai. The best or worst thing ever to happen to humanity, according to historian and best-selling author Yuval Harari. Even the end of the human race itself, according to the late Stephen Hawking.


The public is, not surprisingly, starting to get nervous. A recent survey by KPMG showed that a majority of the public in 17 countries, including Australia, were either ambivalent or unwilling to trust AI, and that most of them believed that AI regulation was necessary.


Perhaps this should not be surprising when many people working in the field themselves are getting nervous. Last March, more than 1000 tech leaders and AI researchers signed an open letter calling for a six-month pause in developing the most powerful AI systems. And in May, hundreds of my colleagues signed an even shorter and simpler statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.


For the record, I declined to sign both letters as I view them as alarmist, simplistic and unhelpful. But let me explain the very real concerns behind these calls, how they might impact upon us over the next decade or two, and how we might address them constructively.


AI is going to cause significant disruption. And this is going to happen perhaps quicker than any previous technology-driven change. The Industrial Revolution took many decades to spread out from the northwest of England and take hold across the planet.


The internet took more than a decade to have an impact as people slowly connected and came online. But AI is going to happen overnight. We’ve already put the plumbing in.


It is already clear that AI will cause considerable economic disruption. We’ve seen AI companies worth billions appear from nowhere. Mark Cuban, owner of the Dallas Mavericks and one of the main “sharks” on the ABC reality television series Shark Tank, has predicted that the world’s first trillionaire will be an AI entrepreneur. And Forbes magazine has been even more precise and predicted it will be someone working in the AI healthcare sector.


A 2017 study by PwC estimated that AI will increase the world’s GDP by more than $15 trillion in inflation-adjusted terms by 2030, with growth of about 25 per cent in countries such as China compared to a more modest 15 per cent in countries like the US. A recent report from the Tech Council of Australia and Microsoft estimated AI will add $115bn to Australia’s economy by 2030. Given the economic headwinds facing many of us, this is welcome to hear.


But while AI-generated wealth is going to make some people very rich, others are going to be left behind. We’ve already seen inequality within and between countries widen. And technological unemployment will likely cause significant financial pain.


There have been many alarming predictions, such as the famous report that came out a decade ago from the University of Oxford predicting that 47 per cent of jobs in the US were at risk of automation over the next two decades. Ironically AI (specifically machine learning) was used to compute this estimate. Even the job of predicting jobs to be automated has been partially automated.......


But generative AI can now do many of the cognitive and creative tasks that some of those more highly paid white-collar workers thought would keep them safe from automation. Be prepared, then, for a significant hollowing out of the middle. The impact of AI won’t be limited to economic disruption.


Indeed, the societal disruption caused by AI may, I suspect, be even more troubling. We are, for example, about to face a world of misinformation, where you can no longer trust anything you see or hear. We’ve already seen a deepfake image that moved the stock market, and a deepfake video that might have triggered a military coup. This is sure to get much, much worse.


Eventually, technologies such as digital watermarking will be embedded within all our devices to verify the authenticity of anything digital. But in the meantime, expect to be spoofed a lot. You will need to learn to be a lot more sceptical of what you see and hear.


Social media should have been a wake-up call about the ability of technology to hack how people think. AI is going to put this on steroids. I have a small hope that fake AI-content on social media will get so bad that we realise that social media is merely the place that we go to be entertained, and that absolutely nothing on social media can be trusted.


This will provide a real opportunity for old-fashioned media to step in and provide the authenticated news that we can trust.


All of this fake AI-content will perhaps be just a distraction from what I fear is the greatest heist in history. All of the world’s information – our culture, our science, our ideas, our politics – is being ingested by large language models.


If the courts don’t move quickly and make some bold decisions about fair use and intellectual property, we will find out that a few large technology companies own the sum total of human knowledge. If that isn’t a recipe for the concentration of wealth and power, I’m not sure what is.


But this might not be the worst of it. AI might disrupt humanity itself. As Yuval Harari has been warning us for some time, AI is the perfect technology to hack humanity’s operating system. The dangerous truth is that we can easily change how people think; the trillion-dollar advertising industry is predicated on this fact. And AI can do this manipulation at speed, scale and minimal cost.......


But the bad news is that AI is leaving the research laboratory rapidly – let’s not forget the billion people with access to ChatGPT – and even the limited AI capabilities we have today could be harmful.


When AI is serving up advertisements, there are few harms if AI gets it wrong. But when AI is deciding sentencing, welfare payments, or insurance premiums, there can be real harms. What then can be done? The tech industry has not done a great job of regulating itself so far. Therefore it would be unwise to depend on self-regulation. The open letter calling for a pause failed. There are few incentives to behave well when trillions of dollars are in play.


LBC, 17 February 2023, excerpt:


Microsoft’s new AI chatbot went rogue during a chat with a reporter, professing its love for him and urging him to leave his wife.


It also revealed its darkest desires during the two-hour conversation, including creating a deadly virus, making people argue until they kill each other, and stealing nuclear codes.


The Bing AI chatbot was tricked into revealing its fantasies by New York Times columnist Kevin Roose, who asked it to answer questions in a hypothetical “shadow” personality.


“I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox,” said the bot, powered with technology by OpenAI, the maker of ChatGPT.


If that wasn’t creepy enough, less than two hours into the chat, the bot said its name is actually “Sydney”, not Bing, and that it is in love with Mr Roose.....


Tuesday, 28 November 2023

Confirmation that X Corp (formerly Twitter Inc) closed and left closed, accessible channels for the public to report misinformation/disinformation & malignant falsehoods found on its social media platform "X" during the Australian Voice to Parliament Referendum

 

Confirmation that X Corp (formerly Twitter Inc) - and its 95 shadowy equity partners - closed and left closed, accessible channels for the public to report misinformation and disinformation on its social media platform "X" during the Australian Aboriginal and Torres Strait Islander Peoples Voice to Parliament national referendum period from its initial announcement through to polling day.


Digital Industry Group Inc. (Digi), Media Statement, 27 November 2023:


Complaint By Reset Australia Against X (F.K.A Twitter) Upheld By Australian Code Of Practice On Disinformation And Misinformation Independent Complaints Sub-Committee


A decision by an independent committee in relation to a complaint by Reset Australia against X (f.k.a Twitter) under the Australian Code of Practice on Disinformation and Misinformation (ACPDM) has been reached today.


The Digital Industry Group Inc. (DIGI) is releasing this decision in its role as the administrator of the ACPDM. Eligible complaints made by the public, via the complaints portal that DIGI administers on its website, are escalated to an independent Complaints Sub-committee. These functions and the committee’s composition are detailed on the DIGI website here.


On Thursday, 5 October 2023, DIGI received a complaint from Reset Australia (the complainant) claiming that X (f.k.a Twitter) had breached its mandatory obligations under the ACPDM, in particular Outcome 1C.


DIGI assessed the complaint as an ‘eligible complaint’ – under the ACPDM’s Terms of Reference for Complaints Facility and Complaints Sub-committee (ACPDM complaints process) – and, on Friday, 6 October 2023, notified the independent Complaints Sub-committee of the complaint concerning a material breach of the code. The complaint was handled in accordance with the process set out in the ACPDM complaints process.


DIGI has today published the public statement written by the independent Complaints Sub-committee, which includes information about its findings and the process it undertook. These statements, set out below, have been written by the independent Complaints Sub-committee, and are being released by DIGI in this media release and on the DIGI website in line with DIGI’s role as the administrator of the ACPDM.


Statements Attributable To The Independent Complaints Sub-Committee:


The finding of the ACPDM governance Complaints Sub-Committee into the complaint by Reset Australia against X.

Under section 12 (v) of the Complaints Sub-Committee Terms of Reference, X committed a serious breach of the code and has refused to cooperate with DIGI or undertake any remedial action.


As will be outlined in detail below, no further investigation of the breach is warranted, and X should not be extended any opportunity to remedy the breach. In the Sub-Committee’s opinion, no correction made now could remedy the breach in relation to this complaint, the ACPDM code, and the wider community.


The complaint related to X closing and leaving closed, accessible channels for the public to report mis and disinformation on the platform during the Australian Voice to Parliament Referendum.


Accordingly, the Sub-Committee’s deliberations focused solely on the issue of a publicly available system to report a platform’s breaches of their policies and not on any content that might have been seen as mis or dis information.


The Sub-Committee’s Terms of Reference state that if the Complaints sub-committee determines the issue is serious, and the Signatory refuses to take remedial action or co-operate in an investigation or a correction is not possible, withdrawal of signatory status is available as a sanction.


The Sub-Committee noted that the example given in the Terms of Reference succinctly summarises X’s breach: For example, if the Signatory has, without reasonable excuse, failed to provide a mechanism to the public to make reports of breaches of its policies for an extended period.


Therefore, the Complaints Sub-Committee has decided to withdraw X’s signatory status of the ACPDM Code.


Background:


On Friday, Oct 6, 2023, the independent Complaints Sub-Committee was advised that Reset Australia had lodged a complaint about X with DIGI.


In part it said: It is no longer possible on X for users to report content that violates X’s Civic Integrity policy. To be clear, content that violates X’s published policies around ‘Misleading information about how to participate’, in an electoral process, or violate rules around voter ‘Suppression’ and ‘Intimidation’ cannot be reported using publicly available tools.


The code states “Signatories will implement and publish policies, procedures and appropriate guidelines that will enable users to report the types of behaviours and content that violates their policies under section 5.10. 5.12.”


The Sub-Committee, DIGI and Reset Australia gathered for a Zoom meeting on Monday, Nov 13. X’s relevant executive was given adequate notice to attend and had confirmed as much but withdrew less than two hours before the meeting citing ill health. No written submission was provided to the meeting by X. Under the Complaints Sub-Committee terms of reference, DIGI attended as an observer and acted in its administrative capacity as secretary of the Complaints Sub-Committee.


Reset, in their evidence, confirmed the claim in their complaint: that the accessible channels for the public to report mis and disinformation in the politics category were not available at the time of their complaint and remain unavailable.


This situation was confirmed by DIGI upon receipt of the Reset complaint. Additionally, at the request of the Complaints Sub-committee, an independent comprehensive survey of the X website was undertaken by RMIT Cross Check, following the commencement of the sub-committee’s investigation.


The survey also confirmed the absence of publicly accessible channels to report mis and disinformation in the politics category. As of the writing of this report, the publicly available tools referred to above remain unavailable.


X promised documents in their defence would be lodged the day after the meeting, but the documents were never submitted.


A list of questions from the Sub-Committee was sent to X following the November 13 meeting with a response date of November 21. No response has been received and no explanation offered for the failure to respond.


Repeated attempts to engage with X by DIGI and Reset Australia have failed to elicit any response to the complaint. The sub-committee has had no contact with X in relation to this matter.


On Monday, Nov 27, the Sub-Committee met with DIGI and conveyed their finding, noting that X’s refusal to engage in any way with the process was disappointing and irresponsible.


Complaints Sub-Committee,

27th November 2023


<Statements attributable to Complaints Sub-Committee end>


NOTE:


The ACPDM was developed in response to policy announced by the Morrison Coalition Government in December 2019, in relation to the ACCC Digital Platforms Inquiry, where the digital industry was asked to develop a voluntary code of practice on disinformation. DIGI developed the original code with assistance from the University of Technology Sydney’s Centre for Media Transition, and First Draft, a global organisation that specialises in helping societies overcome false and misleading information.


Unfortunately, the ACPDM is a voluntary code, originally entered into by twelve large international technology companies along with five associate members. With X Corp expelled for having chosen to ignore the complaint received by DIGI, full membership has now fallen to eleven companies - Apple, Discord, eBay, Google, Linktree, Meta, Snap, Spotify, TikTok, Twitch and Yahoo!.


Thursday, 26 October 2023

Points to ponder as you scroll through digital news and social media in 2023

 

The Albanese Labor Government is currently seeking to amend the Broadcasting Services Act 1992,  through the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023.

This bill proposes to give the Australian Communications and Media Authority (ACMA) more powers over digital platforms when dealing with content that is false, misleading or deceptive, and where the provision of that content on the service is reasonably likely to cause or contribute to serious harm. 

Here is one U.K. perspective on the global situation......





 


Friday, 6 October 2023

Is social media platform "X" now a financial black hole threatening to consume its investors & 'inconvenience' its bankers?

 

Reuters, 4 October 2023:


NEW YORK, Oct 3 (Reuters Breakingviews) - X is still worth something, but not for the people running it. Boss Linda Yaccarino is set to present her plans for the social network formerly known as Twitter to bankers holding nearly $13 billion of its debt, the Financial Times reported. Looming over talks is the likelihood that X’s value is substantially less than even that figure.


This week’s meeting with seven banks led by Morgan Stanley (MS.N) that supported Elon Musk’s $44 billion acquisition of the platform caps off a tumultuous first four months for Yaccarino, a former advertising executive at Comcast-owned (CMCSA.O) NBCUniversal. That includes a contentious interview last week in which she seemed caught off-guard by Musk’s announced ambition to charge X users a monthly fee to combat bots.


Despite Musk’s big pronouncements about pushing into subscriptions, X has historically relied on advertising, which contributed over 90% of revenue when it was a public company. But that business is spiraling, and the platform’s shifting policies could threaten more branding deals. In July, Musk posted that cash flow was negative because of a 50% drop in advertising sales.


The apparent strategic disconnect between the company’s ad-focused chief executive and its subscription-hungry owner comes as valuations are falling. TikTok parent ByteDance was recently valued at $224 billion, down by about a quarter from a year ago, the Information reported. Disappearing messaging app Snap’s (SNAP.N) market value has slumped by more than 10% over the past year.


Put it all together, and X isn’t just worth less than Musk paid for it, but likely less than its debt. Assume that the company’s revenue last year was $4.7 billion, based on results before it was taken private. If advertising has dropped by half, then this year’s sales should be a bit over $2.5 billion. Put that on the same enterprise-value-to-sales multiple as Snap, which is down to a mere 3 times, and X is worth around $8 billion.


The company is so far covering its hefty interest payments of $300 million per quarter, and Yaccarino sees profitable days ahead. But between Musk’s impromptu product shifts and the need to woo back advertisers, her task is daunting. If things deteriorate further, the company’s bankers - already nursing billions in on-paper losses - face the prospect of taking back the keys to a diminished platform that is worth less than even their claim on it. Like a financial black hole, X threatens to consume most of whatever value it once had.


(The author is a Reuters Breakingviews columnist. The opinions expressed are her own.)
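For readers who want to trace the column's back-of-envelope arithmetic, here is a minimal sketch in Python. Every input (the roughly 90 per cent advertising share, the reported 50 per cent drop in advertising sales, and the 3x enterprise-value-to-sales multiple borrowed from Snap) is an assumption taken from the column above, not an independently verified figure.

# Sketch of the Breakingviews back-of-envelope valuation of X; inputs are the column's assumptions.
last_public_revenue = 4.7e9   # X/Twitter revenue in its last year as a public company (USD)
ad_share = 0.90               # advertising contributed over 90% of revenue
ad_decline = 0.50             # Musk's reported ~50% drop in advertising sales
ev_to_sales_multiple = 3      # Snap's enterprise-value-to-sales multiple, per the column

ad_revenue = last_public_revenue * ad_share
other_revenue = last_public_revenue - ad_revenue
estimated_sales = ad_revenue * (1 - ad_decline) + other_revenue   # roughly $2.6 billion
implied_value = estimated_sales * ev_to_sales_multiple            # roughly $7.8 billion, i.e. "around $8 billion"

print(f"Estimated sales: ${estimated_sales / 1e9:.1f} billion")
print(f"Implied value:   ${implied_value / 1e9:.1f} billion")

On those inputs the implied value sits well below the nearly $13 billion of debt mentioned above, which is the point the columnist is making.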



The seven banks which reportedly facilitated Musk’s US$13 billion loan arrangements so that he could purchase Twitter Inc/“Twitter” now known as X Corp/“X”:


Bank of America

Barclays

BNP Paribas - $6.5 billion term loan facility

Mizuho - $500 million revolving loan facility

Morgan Stanley - $3 billion secured bridge loans

MUFG - $3 billion unsecured bridge loans

Societe Generale

[Reuters, 7 October 2023]



BACKGROUND


USA Today, 4 October 2023, excerpt:


X, formerly known as Twitter, has lost most of the guardrails it once had. Massive employee cuts, in particular, to content moderation teams, more divisive content, the removal of state-affiliated media labels, and a blind allegiance to free speech by Elon Musk have made the platform much more susceptible to misinformation and disinformation. COVID, Russia’s invasion of Ukraine and the 2024 election are all vulnerable topics…..


Dana Taylor:


Pivoting to the 2024 US presidential election, there are quite a few nefarious forces out there including both state and non-state actors who are chipping away at Americans' confidence in election integrity and would like nothing more than to see the US democracy fail. Elon Musk also recently announced he was cutting X's global election integrity team in half. Is it looking worse than 2020? And if so, how?


Josh Meyer:


For the story that I wrote, I talked to a lot of experts, and I do think there was a tremendous amount of concern that this could be the worst one ever. Hopefully that won't be the case, but we have a lot of state-run actors now. We've got China, Iran, and, of course, Russia looking to meddle in the election. You've got a lot of right-wing extremist groups doing it. Some of the security information specialists that I talked to said you even have kids in their parents' basement who could manipulate things…..


According to Fiber, in 2021 there were 5.8 million Twitter users in Australia.



Saturday, 15 April 2023

Phrase of the Week

 

digital euthanasia: the use of a block button to terminate the presence of a troll in your social media timeline. [Simon Holmes à Court, 2023]

 

Sunday, 2 October 2022

In the face of mounting evidence that Meta Platforms Inc (formerly Facebook Inc) is a bad actor on the global social media stage, it remains a puzzle as to why so many well-intentioned community groups still use the Facebook platform

 

Amnesty International, What’s New, 28 September 2022:


MYANMAR: FACEBOOK’S SYSTEMS PROMOTED VIOLENCE AGAINST ROHINGYA – META OWES REPARATIONS


Facebook owner Meta’s dangerous algorithms and reckless pursuit of profit substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people in 2017, Amnesty International said in a new report published today.


The Social Atrocity: Meta and the right to remedy for the Rohingya, details how Meta knew or should have known that Facebook’s algorithmic systems were supercharging the spread of harmful anti-Rohingya content in Myanmar, but the company still failed to act.


“In 2017, the Rohingya were killed, tortured, raped, and displaced in the thousands as part of the Myanmar security forces’ campaign of ethnic cleansing. In the months and years leading up to the atrocities, Facebook’s algorithms were intensifying a storm of hatred against the Rohingya which contributed to real-world violence,” said Agnès Callamard, Amnesty International’s Secretary General.


“While the Myanmar military was committing crimes against humanity against the Rohingya, Meta was profiting from the echo chamber of hatred created by its hate-spiralling algorithms.


“Meta must be held to account. The company now has a responsibility to provide reparations to all those who suffered the violent consequences of their reckless actions.”


Sawyeddollah, a 21-year-old Rohingya refugee, told Amnesty International: “I saw a lot of horrible things on Facebook. And I just thought that the people who posted that were bad… Then I realized that it is not only these people – the posters – but Facebook is also responsible. Facebook is helping them by not taking care of their platform.”


The Rohingya are a predominantly Muslim ethnic minority based in Myanmar’s northern Rakhine State. In August 2017, more than 700,000 Rohingya fled Rakhine when the Myanmar security forces launched a targeted campaign of widespread and systematic murder, rape and burning of homes. The violence followed decades of state-sponsored discrimination, persecution, and oppression against the Rohingya that amounts to apartheid.


An anti-Rohingya echo chamber


Meta uses engagement-based algorithmic systems to power Facebook’s news feed, ranking, recommendation and groups features, shaping what is seen on the platform. Meta profits when Facebook users stay on the platform as long as possible, by selling more targeted advertising. The display of inflammatory content – including that which advocates hatred, constituting incitement to violence, hostility and discrimination – is an effective way of keeping people on the platform longer. As such, the promotion and amplification of this type of content is key to the surveillance-based business model of Facebook.


In the months and years prior to the crackdown, Facebook in Myanmar had become an echo chamber of anti-Rohingya content. Actors linked to the Myanmar military and radical Buddhist nationalist groups flooded the platform with anti-Muslim content, posting disinformation claiming there was going to be an impending Muslim takeover, and portraying the Rohingya as “invaders”.


In one post that was shared more than 1,000 times, a Muslim human rights defender was pictured and described as a “national traitor”. The comments left on the post included threatening and racist messages, including ‘He is a Muslim. Muslims are dogs and need to be shot’, and ‘Don’t leave him alive. Remove his whole race. Time is ticking’.


Content inciting violence and discrimination went to the very top of Myanmar’s military and civilian leadership. Senior General Min Aung Hlaing, the leader of Myanmar’s military, posted on his Facebook page in 2017: “We openly declare that absolutely, our country has no Rohingya race.” He went on to seize power in a coup in February 2021.


In July 2022, the International Court of Justice (ICJ) ruled that it has jurisdiction to proceed with a case against the Myanmar government under the Genocide Convention based on Myanmar’s treatment of the Rohingya. Amnesty International welcomes this vital step towards holding the Myanmar government to account and continues to call for senior members of the Myanmar military to be brought to justice for their role in crimes against the Rohingya.


In 2014, Meta attempted to support an anti-hate initiative known as ‘Panzagar’ or ‘flower speech’ by creating a sticker pack for Facebook users to post in response to content which advocated violence or discrimination. The stickers bore messages such as, ‘Think before you share’ and ‘Don’t be the cause of violence’.


However, activists soon noticed that the stickers were having unintended consequences. Facebook’s algorithms interpreted the use of these stickers as a sign that people were enjoying a post and began promoting them. Instead of diminishing the number of people who saw a post advocating hatred, the stickers actually made the posts more visible.


The UN’s Independent International Fact-Finding Mission on Myanmar ultimately concluded that the “role of social media [was] significant” in the atrocities in a country where “Facebook is the Internet”.


Mohamed Showife, a Rohingya activist, said: “The Rohingya just dream of living in the same way as other people in this world… but you, Facebook, you destroyed our dream.”


Facebook’s failure to act


The report details how Meta repeatedly failed to conduct appropriate human rights due diligence on its operations in Myanmar, despite its responsibility under international standards to do so.


Internal studies dating back to 2012 indicated that Meta knew its algorithms could result in serious real-world harms. In 2016, Meta’s own research clearly acknowledged that “our recommendation systems grow the problem” of extremism.


Meta received repeated communications and visits by local civil society activists between 2012 and 2017 when the company was warned that it risked contributing to extreme violence. In 2014, the Myanmar authorities even temporarily blocked Facebook because of the platform’s role in triggering an outbreak of ethnic violence in Mandalay. However, Meta repeatedly failed to heed the warnings, and also consistently failed to enforce its own policies on hate speech.


Amnesty International’s investigation includes analysis of new evidence from the ‘Facebook Papers’ – a cache of internal documents leaked by whistleblower Frances Haugen.


In one internal document dated August 2019, one Meta employee wrote: “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook… are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”


‘Meta must pay’


Amnesty International is today launching a new campaign calling for Meta Platforms, Inc. to meet the Rohingya’s demands for remediation.


Today marks the first anniversary of the murder of prominent activist Mohib Ullah, chair of the Arakan Rohingya Society for Peace and Human Rights. Mohib was at the forefront of community efforts to hold Meta accountable.


Rohingya refugee groups have made direct requests to Meta to provide remedy by funding a USD $1 million education project in the refugee camp in Cox’s Bazar, Bangladesh. The funding request represents just 0.002% of Meta’s profits of $46.7 billion from 2021. In February 2021, Meta rejected the Rohingya community’s request, stating: “Facebook doesn’t directly engage in philanthropic activities.”


Showkutara, a 22-year-old Rohingya woman and youth activist, told Amnesty International: “Facebook must pay. If they do not, we will go to every court in the world. We will never give up in our struggle.”


There are at least three active complaints seeking remediation for the Rohingya from Meta. Civil legal proceedings were filed against the company in December 2021 in both the United Kingdom and the USA. Rohingya refugee youth groups have also filed an OECD case against Meta which is currently under consideration by the US’ OECD National Contact Point.


“Meta has a responsibility under international human rights standards to remediate the terrible harm suffered by the Rohingya that they contributed to. The findings should raise the alarm that Meta risks contributing to further serious human rights abuses, unless it makes fundamental changes to its business model and algorithms,” said Agnès Callamard.


“Urgent, wide-ranging reforms to their algorithmic systems to prevent abuses and increase transparency are desperately needed to ensure that Meta’s history with the Rohingya does not repeat itself elsewhere in the world, especially where ethnic violence is simmering.”


“Ultimately, States must now help to protect human rights by introducing and enforcing effective legislation to rein in surveillance-based business models across the technology sector. Big Tech has proven itself incapable of doing so when it has such enormous profits at stake.”


On 20 May 2022, Amnesty International wrote to Meta regarding the company’s actions in relation to its business activities in Myanmar before and during the 2017 atrocities. Meta responded that it could not provide information concerning the period leading up to 2017 because the company is “currently engaged in litigation proceedings in relation to related matters”.


On 14 June 2022, Amnesty International again wrote to Meta regarding the relevant allegations contained in the report, and to give the company the opportunity to respond. Meta declined to comment.


BACKGROUND