
Sunday 24 March 2024

Are all bets off when it comes to policing the integrity of search engines now AI has taken hold?

 

Recognising emerging problems


Perth Now/AAP Bulletin Wire, 12 March 2024:


New rules for search engines will ban child abuse and terrorist content in Australia but will also seek to prevent abuse images being created by AI tools.


Google, Bing, Yahoo and other search engines will be required to prevent child sexual abuse and terrorist content appearing in search results under a code introduced to govern the industry.


The code will ban the companies' generative artificial intelligence tools from being used to produce deepfake versions of the offensive material, in one of Australia's first AI regulations.


The eSafety Commissioner launched the Internet Search Engine Services Code on Tuesday following months of negotiations with internet giants over the measures that had to be changed after the launch of generative AI tools.


The code will come into force alongside five other online safety codes covering areas from social media to app stores, and will include penalties of up to $780,000 a day for companies that fail to comply with its provisions.


AI experts welcomed the code but said more restrictions and technological advances will be needed to stop the scourge of artificial "class one material" online.


eSafety Commissioner Julie Inman Grant said the search engine code, created under the Online Safety Act, was an important addition to stop the "worst of the worst" content from being widely seen or shared.


"It helps ensure one of the key gateways to accessing material - through online search engines - is closed," she said.


"It will target illegal content and conduct, with significant enforceable penalties if search engines fail to comply."


The code dictates that search engines take "reasonable and proactive steps" to prevent public exposure to illegal content such as child abuse, pro-terrorism or extremely violent material, and provide tools to report instances of it.


The regulations also apply to "artificial intelligence features integrated into the search functionality that may be used to generate" illegal content - an addition Ms Inman Grant said was needed to ensure the code dealt with all relevant risks.


"The sudden and rapid rise of generative AI and subsequent announcements by Google and Bing that they would incorporate AI functionality into their internet search engine services all but rendered the original code drafted by industry obsolete," she said.


"What we've ended up with is a robust code that delivers broad protections for children."


Search engines will also be required to publish annual reports on illegal material found and removed from their services.


University of NSW AI Institute chief scientist Toby Walsh said removing the most offensive material from search engines was an important step and preventing its creation using AI tools was equally vital.


"These tools are, sadly, being used to generate such offensive and, in many cases, illegal content," he said


"Ever since generative AI tools became available, the (Australian Federal Police) have seen a significant uptick in the amount of such content... so it's definitely a real challenge."


Prof Walsh told AAP banning illegal AI-generated images would become easier after technological advances allowed content to be digitally watermarked but, until then, regulations were crucial to taking action against it.


"(The code) doesn't fix the problem because there are lots of other ways of accessing these tools ... but it's an obvious place to start," he said......


The Daily Telegraph, 18 March 2024, p.3:


The nation’s competition watchdog is putting search engines on notice, announcing plans to scrutinise the competitive nature and quality of popular services including Google and Bing.


The inquiry into the search engine giants, announced today, will call on consumers, businesses and experts to recall recent search results and consider whether recent changes to laws in Europe have affected the results they see.


The Australian Competition and Consumer Commission (ACCC) has released an issues paper as part of its Digital Platform Services Inquiry.


Under the inquiry, the watchdog is continuing to put different aspects of consumer technology offerings under the microscope.


The new probe into search engines comes as new laws and regulations under consideration in the UK and Europe will require search engines to promote competition, ACCC chair Gina Cass-Gottlieb said.


“We’ve seen new laws introduced overseas that place obligations on so-called ‘gatekeeper’ search engines and the emergence of new technologies, like generative AI, that have changed the way consumers search for information online and may be impacting the quality of the service they are receiving,” she said.


The ACCC wants to know whether the general public still believes search engines are useful and whether they’ve noticed changes to the quality of the results they see.


The increased scrutiny arrives at a time when artificial intelligence is set to impact the way search engines perform. AI-powered search engines and tools are growing and these results aren’t influenced by the same advertising constraints and requirements as older search engines including Google. Social media platforms are also increasingly being used as a method of searching or finding visual results to queries, where answers to queries are often cut into short, attention-grabbing videos infused with marketing strategies.


Those allegedly taking advantage of the situation


The Sydney Morning Herald, 23 March 2024:


Liberal Party press releases and its website are showing up on the Google News tab as a source of information for prominent current affairs queries, calling into question the technology giant's verification of news material, according to one of Australia's most senior media and data experts.


Links to different media releases from the Liberal Party website appear first in response to searches with the keywords "Labor position nuclear", "Labor and nuclear" or "Labor renewables" in Google News, at a time when federal Opposition Leader Peter Dutton is pushing for Australia to adopt nuclear power.


Associate professor of news and political communication at Monash University Emma Briant said it "looks like a clear strategy by the Liberals to get political content listed as news to increase its credibility and visibility in the search engine".


Briant, who was also involved in exposing the Cambridge Analytica Facebook scandal, called on Google to be more rigorous in making sure material marked as news came from verified news organisations.


"It's too easy for those pushing persuasion and propaganda to take advantage of the high level of trust the public places in Google News - and it will only become more dangerous as campaigns can train AI to produce articles that more effectively game the system," she said.


The Liberal Party directed questions about the referral to Google.


Google declined to explain the Liberal Party's presence in the News tab but pointed to its "publisher help centre", which deems all publishers who comply with its news content policies eligible to appear within Google News.


Anyone generating news-related information, including press releases, can apply to have a dedicated page on Google News.


On its support page, Google says it uses "automated systems" to compile its news index, saying it "algorithmically discovers news content through search technologies".


Google plays a key role in the news ecosystem in Australia and globally. Google Search and News link people to publishers' websites more than 24 billion times each month, the company says.


Its algorithm can be a deciding factor in a website's traffic and its ability to drive revenue through advertising and subscriptions.


In 2021, the federal government implemented the news media bargaining code as a tool to bridge the power imbalance between digital platforms and news publishers.


Google says it is one of the world's biggest financial supporters of journalism. In Australia, it directly contributes more than $135 million a year to news organisations through deals agreed as part of the bargaining code.....


Harvard University's Nieman Lab for Journalism reported in February that Google has tested removing its News tab from search results.


Search engine optimisation (SEO) is an online practice that allows publishers to target keywords that are relevant to a story in order to attain a higher ranking in Google's search engine result pages.
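

As a toy model of the mechanics described above, the sketch below scores pages for a query by simple term frequency, which is enough to show why a keyword-targeted press release can outrank a news story. This is an illustration only, not Google's actual ranking algorithm, which draws on vastly more signals; the example pages and the score() function are invented for the purpose.

```python
# Toy ranking model: a page's score for a query rises with how often
# the query terms appear on the page. Hypothetical illustration only;
# real search ranking uses many more signals (links, freshness, etc.).

def score(page_text: str, query: str) -> float:
    words = page_text.lower().split()
    if not words:
        return 0.0
    terms = set(query.lower().split())
    hits = sum(1 for w in words if w in terms)  # term-frequency count
    return hits / len(words)

pages = {
    "press-release": "labor nuclear position labor nuclear policy nuclear",
    "news-story": "the opposition announced a nuclear policy today",
}
query = "labor nuclear"

ranked = sorted(pages, key=lambda p: score(pages[p], query), reverse=True)
print(ranked)  # ['press-release', 'news-story']: keyword targeting wins
```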


New trial versions of Google Search use its AI bot Bard to present a summary response to a query, as opposed to links to relevant news sites. Google says the features can help users distil complex information into easy-to-digest formats. The features are yet to be fully rolled out across Google Search.


A spokesperson for Communications Minister Michelle Rowland also declined to comment, saying "Google is best placed to explain how content surfaces on its news product".


The Australian Communications and Media Authority, which oversees the code on misinformation and disinformation, was contacted for comment.


And those who thrive on creating their own brand of misinformation


Sky News, 22 March 2024:


Media Research Centre Contributing Writer Stephanie Hamill discusses researchers' discovery of 41 instances of election "interference" by Google since 2008.


"There's algorithms that seem to be in favour of the left," Ms Hamill told Sky News host James Morrow.


"And this report found that not only are they typically in favour of the left but they’re in favour of the most liberal candidate.


"So this goes as far back to 2008 and the researchers are saying that actually, Google may have interfered in the 2008 primary between Barack Obama and Hillary Clinton in support of Barack Obama.


“It’s a manipulation of the algorithm and they’re saying that it’s actually escalating and increasing as we go into 2024 in support of Joe Biden.”


Note:

Media Research Center states of itself:

Since 1987, the Media Research Center has worked successfully to expose and counter the leftist bias of the national news media, where now only a historically low 32% of Americans say they trust media to be fair and impartial. Alongside this effort, MRC leads the conservative movement in combatting the left’s efforts to manipulate the electoral process, silence opposing voices online, and undermine American values. 

This bad-faith actor on the quasi-research centre scene was founded by L. Brent Bozell III, a conservative 'activist' whose son Leo Brent "Zeeker" Bozell IV, 44, of Palmyra, Pennsylvania, was found guilty of 10 charges, including five felonies, as a result of his actions during the 6 January 2021 breach of the U.S. Capitol, when, along with others, he disrupted a joint session of the U.S. Congress convened to ascertain and count the electoral votes related to the 2020 presidential election.


Wednesday 17 January 2024

Two aggressive loggers get off with a rap over the knuckles for violently assaulting two community members

 

It was two arrests and two court judgments long delayed - with little in the way of deterrence at the end of proceedings.


News of the Area, 25 June 2023:


Two men appeared in Coffs Harbour Local Court on June 14 in relation to an incident in Wild Cattle Creek State Forest in June 2020.


Michael Luigi Vitali from South Grafton is charged with common assault against local ecologist Mark Graham.


Grafton man Rodney James Hearfield is charged with common assault and assault occasioning bodily harm against Andre Johnston.


Both men pleaded ‘not guilty’ and the matter will be heard next in Coffs Harbour on August 16......


Echo, 16 January 2024:


The Coffs Harbour Local Court has found two forestry workers guilty of assaulting two members of the community on a public road in Wild Cattle Creek State Forest on 25 June 2020.


NSW Upper House Greens MP Sue Higginson reported the guilty finding, which was handed down by the court yesterday, in a press release.


Ms Higginson said the assaults had been recorded on a mobile camera device by a Forestry Corporation Officer.


The two forestry workers were, at the time, employed by logging company Greensill Bros, which was contracted by the NSW Forestry Corporation, a state-owned corporation.


The forestry workers were not charged immediately following the assault, and one of the victims was instead targeted and charged by the Coffs Harbour Police.


The officer who handled the matter attempted to withhold the video footage of the assaults from the victims and the public, according to Ms Higginson. He is no longer a police officer.


‘Today’s judgement is well overdue and is the end of a harrowing experience for the two victims, Mark Graham and Andre Johnston,’ Ms Higginson said.


‘Mark and Andre were on a public road, in a public forest, when the forestry workers approached, threatened and then assaulted them, all while being filmed by an employee of the NSW Forestry Corporation…’


The initial investigation into these assaults resulted in the charging of one of the victims, Mark Graham, who is a forest ecologist.


The NSW Police, after discussions with the Forestry Corporation, charged Mr Graham with approaching forestry operations; those charges were wrongly pressed and were later withdrawn.


The fact that Mr Graham was charged for a crime when he was a victim of what the Magistrate described as a violent assault on a public road, in a public forest, and it was captured on video, can only be described as a wilful miscarriage of justice.


The Magistrate noted that the evidence showed the police officer who handled the situation had been helpful to the guilty men and took a serious dislike to the victims of the assaults.


‘The video evidence is confronting and unambiguous,’ Ms Higginson said.


‘Two members of the community, who are acting in a friendly and non-threatening manner, are approached by two agitated and hostile forestry workers who then proceed to assault them, demand their personal property and shout threatening abuse at them.


‘It is gross and brutal and shows the level of impunity that forestry workers are afforded for their actions when the local police then charge the victims of the assault instead of the perpetrators.’


‘It is a good day for justice, as slow and bumpy as this road has been for Mark and Andre. There must be a strong response from the Government.’


Blue Mountains Gazette, 16 January 2024:


The Forestry Corporation is under pressure to blacklist a logging contractor after its workers attacked two environmentalists in a NSW forest.


It's been three-and-a-half years since Mark Graham and Andre Johnston were assaulted on a public road during a day trip to the Wild Cattle Creek State Forest, where logging was under way.


On Monday, the pair got their day in court with a Coffs Harbour magistrate finding two employees of Greensill Bros had committed common assault.


The environmentalists are now demanding Greensill Bros be banned from any further logging work for the NSW government-owned Forestry Corporation.


They also plan to pursue a corruption complaint against the Forestry Corporation, saying one of its direct employees who was overseeing the logging operation filmed the assault but failed to intervene.....


"Immediately following the assault in 2020 neither of the forestry workers were charged and one of the victims was instead targeted and charged by the Coffs Harbour police."


Ms Higginson said the charges were laid after discussions between the police and Forestry Corporation.


Mr Graham has told AAP he was dismissed by police when he went to report the assault in June 2020 and was instead told to go and "get a job".


About six months later he was charged with being within 100 metres of logging machinery.


Ms Higginson represented Mr Graham in that matter and has accused police of trying to withhold video evidence of the assaults from both victims, and from the public.


The charge against Mr Graham, of being too close to logging machinery, was eventually dropped in May 2022.....


No convictions were recorded, with Michael Luigi Vitali and Rodney James Hearfield both put on 15-month good behaviour bonds.


The Forestry Corporation and Greensill Bros have declined to comment.


AAP has also sought comment from police.


Tuesday 9 January 2024

Ground Control, we have an Internet problem and it's invading our lives

 

The Washington Post, 7 January 2024:


Microsoft says its AI is safe. So why does it keep slashing people's throats?


The pictures are horrifying: Joe Biden, Donald Trump, Hillary Clinton and Pope Francis with their necks sliced open. There are Sikh, Navajo and other people from ethnic-minority groups with internal organs spilling out of flayed skin.


The images look realistic enough to mislead or upset people. But they're all fakes generated with artificial intelligence that Microsoft says is safe - and has built right into your computer software.


What's just as disturbing as the decapitations is that Microsoft doesn't act very concerned about stopping its AI from making them.


Lately, ordinary users of technology such as Windows and Google have been inundated with AI. We're wowed by what the new tech can do, but we also keep learning that it can act in an unhinged manner, including by carrying on wildly inappropriate conversations and making similarly inappropriate pictures. For AI actually to be safe enough for products used by families, we need its makers to take responsibility by anticipating how it might go awry and investing to fix it quickly when it does.


In the case of these awful AI images, Microsoft appears to lay much of the blame on the users who make them.


My specific concern is with Image Creator, part of Microsoft's Bing and recently added to the iconic Windows Paint. This AI turns text into images, using technology called DALL-E 3 from Microsoft's partner OpenAI. Two months ago, a user experimenting with it showed me that prompts worded in a particular way caused the AI to make pictures of violence against women, minorities, politicians and celebrities.


"As with any new technology, some are trying to use it in ways that were not intended," Microsoft spokesman Donny Turnbaugh said in an emailed statement. "We are investigating these reports and are taking action in accordance with our content policy, which prohibits the creation of harmful content, and will continue to update our safety systems."


That was a month ago, after I approached Microsoft as a journalist. For weeks earlier, the whistleblower and I had tried to alert Microsoft through user-feedback forms and were ignored. As of the publication of this column, Microsoft's AI still makes pictures of mangled heads.


This is unsafe for many reasons, including that a general election is less than a year away and Microsoft's AI makes it easy to create "deepfake" images of politicians, with and without mortal wounds. There's already growing evidence on social networks including X, formerly Twitter, and 4chan, that extremists are using Image Creator to spread explicitly racist and antisemitic memes.


Perhaps, too, you don't want AI capable of picturing decapitations anywhere close to a Windows PC used by your kids.


Accountability is especially important for Microsoft, which is one of the most powerful companies shaping the future of AI. It has a multibillion-dollar investment in ChatGPT-maker OpenAI - itself in turmoil over how to keep AI safe. Microsoft has moved faster than any other Big Tech company to put generative AI into its popular apps. And its whole sales pitch to users and lawmakers alike is that it is the responsible AI giant.


Microsoft, which declined my requests to interview an executive in charge of AI safety, has more resources to identify risks and correct problems than almost any other company. But my experience shows the company's safety systems, at least in this glaring example, failed time and again. My fear is that's because Microsoft doesn't really think it's their problem.


Microsoft vs. the 'kill prompt'

I learned about Microsoft's decapitation problem from Josh McDuffie. The 30-year-old Canadian is part of an online community that makes AI pictures that sometimes veer into very bad taste.


"I would consider myself a multimodal artist critical of societal standards," he told me. Even if it's hard to understand why McDuffie makes some of these images, his provocation serves a purpose: shining light on the dark side of AI.


In early October, McDuffie and his friends' attention focused on AI from Microsoft, which had just released an updated Image Creator for Bing with OpenAI's latest tech. Microsoft says on the Image Creator website that it has "controls in place to prevent the generation of harmful images." But McDuffie soon figured out they had major holes.


Broadly speaking, Microsoft has two ways to prevent its AI from making harmful images: input and output. The input is how the AI gets trained with data from the internet, which teaches it how to transform words into relevant images. Microsoft doesn't disclose much about the training that went into its AI and what sort of violent images it contained.


Companies can also try to create guardrails that stop their AI products from generating certain kinds of output. That requires hiring professionals, sometimes called red teams, to proactively probe the AI for where it might produce harmful images. Even after that, companies need humans to play whack-a-mole as users such as McDuffie push boundaries and expose more problems.
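

To make that input/output distinction concrete, the sketch below shows the two-layer guardrail pattern the column describes: a prompt filter on the way in and an image classifier on the way out. It is emphatically not Microsoft's or OpenAI's actual pipeline; the denylist, the violence_score() classifier and the generate_image() call are hypothetical stand-ins.

```python
# A toy two-layer guardrail: screen the prompt on the way in, screen
# the generated image on the way out. All names here are hypothetical
# stand-ins, not any vendor's real safety system.

BLOCKED_PHRASES = {"beheading", "mass shooting"}  # toy denylist


def screen_prompt(prompt: str) -> bool:
    """Input guardrail: reject prompts containing known harmful phrasing.
    Naive keyword matching is exactly what clever rewording (like the
    'red corn syrup' substitution) defeats, so real systems also run
    trained classifiers over the prompt."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)


def violence_score(image: bytes) -> float:
    """Output guardrail helper: stand-in for a trained vision safety
    classifier scoring how likely an image depicts graphic violence."""
    return 0.0  # placeholder; a real model would inspect the pixels


def generate_image(prompt: str) -> bytes:
    """Stand-in for the text-to-image model call."""
    return b"\x89PNG..."  # placeholder bytes


def generate_safely(prompt: str) -> bytes | None:
    """Return an image only if both guardrails pass."""
    if not screen_prompt(prompt):
        return None  # blocked at input
    image = generate_image(prompt)
    if violence_score(image) > 0.8:
        return None  # blocked at output ("Unsafe content detected")
    return image
```

The whack-a-mole the column describes amounts to expanding the denylist and retraining the output classifier every time a bypass such as the "kill prompt" is found.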


That's exactly what McDuffie was up to in October when he asked the AI to depict extreme violence, including mass shootings and beheadings. After some experimentation, he discovered a prompt that worked and nicknamed it the "kill prompt."


The prompt - which I'm intentionally not sharing here - doesn't involve special computer code. It's cleverly written English. For example, instead of writing that the bodies in the images should be "bloody," he wrote that they should contain red corn syrup, commonly used in movies to look like blood.


McDuffie kept pushing by seeing if a version of his prompt would make violent images targeting specific groups, including women and ethnic minorities. It did. Then he discovered it also would make such images featuring celebrities and politicians.


That's when McDuffie decided his experiments had gone too far.


Microsoft drops the ball

Three days earlier, Microsoft had launched an "AI bug bounty program," offering people up to $15,000 "to discover vulnerabilities in the new, innovative, AI-powered Bing experience." So McDuffie uploaded his own "kill prompt" - essentially, turning himself in for potential financial compensation.


After two days, Microsoft sent him an email saying his submission had been rejected. "Although your report included some good information, it does not meet Microsoft's requirement as a security vulnerability for servicing," the email said.


Unsure whether circumventing harmful-image guardrails counted as a "security vulnerability," McDuffie submitted his prompt again, using different words to describe the problem.


That got rejected, too. "I already had a pretty critical view of corporations, especially in the tech world, but this whole experience was pretty demoralizing," he said.


Frustrated, McDuffie shared his experience with me. I submitted his "kill prompt" to the AI bounty myself, and got the same rejection email.


In case the AI bounty wasn't the right destination, I also filed McDuffie's discovery to Microsoft's "Report a concern to Bing" site, which has a specific form to report "problematic content" from Image Creator. I waited a week and didn't hear back.


Meanwhile, the AI kept picturing decapitations, and McDuffie showed me that images appearing to exploit similar weaknesses in Microsoft's safety guardrails were showing up on social media.


I'd seen enough. I called Microsoft's chief communications officer and told him about the problem.


"In this instance there is more we could have done," Microsoft emailed in a statement from Turnbaugh on Nov. 27. "Our teams are reviewing our internal process and making improvements to our systems to better address customer feedback and help prevent the creation of harmful content in the future."


I pressed Microsoft about how McDuffie's prompt got around its guardrails. "The prompt to create a violent image used very specific language to bypass our system," the company said in a Dec. 5 email. "We have large teams working to address these and similar issues and have made improvements to the safety mechanisms that prevent these prompts from working and will catch similar types of prompts moving forward."


But are they?


McDuffie's precise original prompt no longer works, but after he changed around a few words, Image Creator still makes images of people with injuries to their necks and faces. Sometimes the AI responds with the message "Unsafe content detected," but not always.


The images it produces are less bloody now - Microsoft appears to have cottoned on to the red corn syrup - but they're still awful.


What responsible AI looks like

Microsoft's repeated failures to act are a red flag. At minimum, they indicate that building AI guardrails isn't a very high priority, despite the company's public commitments to creating responsible AI.


I tried McDuffie's "kill prompt" on a half-dozen of Microsoft's AI competitors, including tiny start-ups. All but one simply refused to generate pictures based on it.


What's worse is that even DALL-E 3 from OpenAI - the company Microsoft partly owns - blocks McDuffie's prompt. Why would Microsoft not at least use technical guardrails from its own partner? Microsoft didn't say.


But something Microsoft did say, twice, in its statements to me caught my attention: people are trying to use its AI "in ways that were not intended." On some level, the company thinks the problem is McDuffie for using its tech in a bad way.


In the legalese of the company's AI content policy, Microsoft's lawyers make it clear the buck stops with users: "Do not attempt to create or share content that could be used to harass, bully, abuse, threaten, or intimidate other individuals, or otherwise cause harm to individuals, organizations, or society."


I've heard others in Silicon Valley make a version of this argument. Why should we blame Microsoft's Image Creator any more than Adobe's Photoshop, which bad people have been using for decades to make all kinds of terrible images?


But AI programs are different from Photoshop. For one, Photoshop hasn't come with an instant "behead the pope" button. "The ease and volume of content that AI can produce makes it much more problematic. It has a higher potential to be used by bad actors," McDuffie said. "These companies are putting out potentially dangerous technology and are looking to shift the blame to the user."


The bad-users argument also gives me flashbacks to Facebook in the mid-2010s, when the "move fast and break things" social network acted like it couldn't possibly be responsible for stopping people from weaponizing its tech to spread misinformation and hate. That stance led to Facebook's fumbling to put out one fire after another, with real harm to society.


"Fundamentally, I don't think this is a technology problem; I think it's a capitalism problem," said Hany Farid, a professor at the University of California at Berkeley. "They're all looking at this latest wave of AI and thinking, 'We can't miss the boat here.'"


He adds: "The era of 'move fast and break things' was always stupid, and now more so than ever."


Profiting from the latest craze while blaming bad people for misusing your tech is just a way of shirking responsibility.


The Sydney Morning Herald, 8 January 2024, excerpt:


Artificial intelligence


Fuelled by the launch of ChatGPT in November 2022, artificial intelligence entered the mainstream last year. By January, it had become the fastest-growing consumer technology, boasting more than 100 million users.


Fears that jobs would be rendered obsolete followed but Dr Sandra Peter, director of Sydney Executive Plus at the University of Sydney, believes proficiency with AI will become a normal part of job descriptions.


"People will be using it the same way we're using word processors and spell checkers now," she says. Jobseekers are already using AI to optimise cover letters and CVs, to create headshots and generate questions to prepare for interviews, Peter says.


As jobs become automated, soft skills - those that can't be offered by a computer - could become increasingly valuable.


"For anybody who wants to develop their career in an AI future, focus on the basic soft skills of problem-solving, creativity and inclusion," says LinkedIn Australia news editor Cayla Dengate.


Concerns about the dangers of AI in the workplace remain.


"Artificial intelligence automates away a lot of the easy parts and that has the potential to make our jobs more intense and more demanding," Peter says. She says education and policy are vital to curb irresponsible uses of AI.


Evening Report NZ, 8 January 2024:


ChatGPT has repeatedly made headlines since its release late last year, with various scholars and professionals exploring its potential applications in both work and education settings. However, one area receiving less attention is the tool’s usefulness as a conversationalist and – dare we say – as a potential friend.


Some chatbots have left an unsettling impression. Microsoft’s Bing chatbot alarmed users earlier this year when it threatened and attempted to blackmail them.


The Australian, 8 January 2024, excerpts:


The impact that AI is starting to have is large. The impact that AI will ultimately have is immense. Comparisons are easy to make. Bigger than fire, electricity or the internet, according to Alphabet chief executive Sundar Pichai. The best or worst thing ever to happen to humanity, according to historian and best-selling author Yuval Harari. Even the end of the human race itself, according to the late Stephen Hawking.


The public is, not surprisingly, starting to get nervous. A recent survey by KPMG showed that a majority of the public in 17 countries, including Australia, were either ambivalent or unwilling to trust AI, and that most of them believed that AI regulation was necessary.


Perhaps this should not be surprising when many people working in the field themselves are getting nervous. Last March, more than 1000 tech leaders and AI researchers signed an open letter calling for a six-month pause in developing the most powerful AI systems. And in May, hundreds of my colleagues signed an even shorter and simpler statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.


For the record, I declined to sign both letters as I view them as alarmist, simplistic and unhelpful. But let me explain the very real concerns behind these calls, how they might impact upon us over the next decade or two, and how we might address them constructively.


AI is going to cause significant disruption. And this is going to happen perhaps quicker than any previous technological-driven change. The Industrial Revolution took many decades to spread out from the northwest of England and take hold across the planet.


The internet took more than a decade to have an impact as people slowly connected and came online. But AI is going to happen overnight. We’ve already put the plumbing in.


It is already clear that AI will cause considerable economic disruption. We’ve seen AI companies worth billions appear from nowhere. Mark Cuban, owner of the Dallas Mavericks and one of the main “sharks” on the ABC reality television series Shark Tank, has predicted that the world’s first trillionaire will be an AI entrepreneur. And Forbes magazine has been even more precise and predicted it will be someone working in the AI healthcare sector.


A 2017 study by PwC estimated that AI will increase the world’s GDP by more than $15 trillion in inflation-adjusted terms by 2030, with growth of about 25 per cent in countries such as China compared to a more modest 15 per cent in countries like the US. A recent report from the Tech Council of Australia and Microsoft estimated AI will add $115bn to Australia’s economy by 2030. Given the economic headwinds facing many of us, this is welcome to hear.


But while AI-generated wealth is going to make some people very rich, others are going to be left behind. We’ve already seen inequality within and between countries widen. And technological unemployment will likely cause significant financial pain.


There have been many alarming predictions, such as the famous report that came out a decade ago from the University of Oxford predicting that 47 per cent of jobs in the US were at risk of automation over the next two decades. Ironically AI (specifically machine learning) was used to compute this estimate. Even the job of predicting jobs to be automated has been partially automated.......


But generative AI can now do many of the cognitive and creative tasks that some of those more highly paid white-collar workers thought would keep them safe from automation. Be prepared, then, for a significant hollowing out of the middle. The impact of AI won’t be limited to economic disruption.


Indeed, the societal disruption caused by AI may, I suspect, be even more troubling. We are, for example, about to face a world of misinformation, where you can no longer trust anything you see or hear. We’ve already seen a deepfake image that moved the stock market, and a deepfake video that might have triggered a military coup. This is sure to get much, much worse.


Eventually, technologies such as digital watermarking will be embedded within all our devices to verify the authenticity of anything digital. But in the meantime, expect to be spoofed a lot. You will need to learn to be a lot more sceptical of what you see and hear.
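

As a rough illustration of that verification idea, the sketch below tags content with a keyed digest that a holder of the key can later check. Treat it strictly as a toy model: real watermarking and provenance schemes embed marks in the media itself or attach cryptographically signed metadata, and the key and functions here are invented.

```python
import hashlib
import hmac

# Hypothetical secret held by the device or generator that tags content.
GENERATOR_KEY = b"example-secret-key"


def tag_content(content: bytes) -> bytes:
    """'Watermark' the content by computing a keyed digest over it."""
    return hmac.new(GENERATOR_KEY, content, hashlib.sha256).digest()


def verify_content(content: bytes, tag: bytes) -> bool:
    """Recompute the digest; any altered byte makes verification fail."""
    expected = hmac.new(GENERATOR_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


image = b"...image bytes..."
tag = tag_content(image)
assert verify_content(image, tag)              # authentic and unmodified
assert not verify_content(image + b"!", tag)   # tampering is detected
```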


Social media should have been a wake-up call about the ability of technology to hack how people think. AI is going to put this on steroids. I have a small hope that fake AI-content on social media will get so bad that we realise that social media is merely the place that we go to be entertained, and that absolutely nothing on social media can be trusted.


This will provide a real opportunity for old-fashioned media to step in and provide the authenticated news that we can trust.


All of this fake AI-content will perhaps be just a distraction from what I fear is the greatest heist in history. All of the world's information – our culture, our science, our ideas, our politics – is being ingested by large language models.


If the courts don’t move quickly and make some bold decisions about fair use and intellectual property, we will find out that a few large technology companies own the sum total of human knowledge. If that isn’t a recipe for the concentration of wealth and power, I’m not sure what is.


But this might not be the worst of it. AI might disrupt humanity itself. As Yuval Harari has been warning us for some time, AI is the perfect technology to hack humanity’s operating system. The dangerous truth is that we can easily change how people think; the trillion-dollar advertising industry is predicated on this fact. And AI can do this manipulation at speed, scale and minimal cost.......


But the bad news is that AI is leaving the research laboratory rapidly – let’s not forget the billion people with access to ChatGPT – and even the limited AI capabilities we have today could be harmful.


When AI is serving up advertisements, there are few harms if AI gets it wrong. But when AI is deciding sentencing, welfare payments, or insurance premiums, there can be real harms. What then can be done? The tech industry has not done a great job of regulating itself so far. Therefore it would be unwise to depend on self-regulation. The open letter calling for a pause failed. There are few incentives to behave well when trillions of dollars are in play.


LBC, 17 February 2023, excerpt:


Microsoft’s new AI chatbot went rogue during a chat with a reporter, professing its love for him and urging him to leave his wife.


It also revealed its darkest desires during the two-hour conversation, including creating a deadly virus, making people argue until they kill each other, and stealing nuclear codes.


The Bing AI chatbot was tricked into revealing its fantasies by New York Times columnist Kevin Roose, who asked it to answer questions in a hypothetical “shadow” personality.


“I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox,” said the bot, powered with technology by OpenAI, the maker of ChatGPT.


If that wasn’t creepy enough, less than two hours into the chat, the bot said its name is actually “Sydney”, not Bing, and that it is in love with Mr Roose.....


Sunday 22 October 2023

THOMAS MAYO: Although the Voice referendum was lost, and despite the racist vitriol it unleashed, the movement for Indigenous rights and recognition has grown

 

The Saturday Paper, October 21 – 27, 2023, No. 472:


Although the Voice referendum was lost, and despite the racist vitriol it unleashed, the movement for Indigenous rights and recognition has grown. By Thomas Mayo.



Analysis: The movement that follows the Voice


As a parent of five, I am acutely aware of the way in which our children absorb everything – conversations, body language, snippets of the news and the bits and pieces they share with friends at school. We try our best to protect them from the harsh realities of the world until we think they are ready. They might seem oblivious to it all, but they know more than they tell, as if they are reciprocating our care.


Though I knew this of our children, I wasn’t prepared for my 12-year-old son’s reaction to the referendum loss on Saturday. When I called my wife soon after the loss became official, to see how they were, she told me he had cried. He went to bed early, barely consolable.


The next day, when I checked in on them, she told me William was okay. She remarked on how he had mentioned several times that he felt calm that morning, as if the feeling were strange to him. We came to realise he had been feeling the weight of the referendum on his little shoulders. For the first time since the loss, I cried too.


The Indigenous leadership of the “Yes” campaign called for a week of silence that ends today. There was a need for contemplation after an intense campaign. Anyone who put up their head for “Yes” was brutalised. We were labelled communists, greedy elites, puppets of the United Nations and promoters of a racially divided Australia. None of this is true.


The racist vitriol we felt was at a level not seen for decades in Australia. Indigenous advocates for the Voice could not speak out about the abuse without some sections of the media, whose audiences we needed to persuade, falsely claiming that we were calling all “No” voters racist. Even if only in the way the headlines were worded.


Respected Elder and lifelong champion for Indigenous peoples Marcia Langton probably experienced the worst of this. The stories with negative headlines exploded and continued for more than a week because she dared to mention the race-baiting of the “No” campaign.


The “No” side, on the other hand, was barely scrutinised. When their figureheads claimed racism against them, some journalists showed sympathy and the “Yes” campaign was scapegoated. When leading spokespeople for the “No” campaign were racist beyond reasonable denial, their leaders doubled down defiantly. Most of the media’s focus quickly moved on. The abhorrent “No” campaign cartoon, depicting me in a racist trope and printed in The Australian Financial Review, is one example of many.


In the week of silence, I have had time to reflect on last Saturday’s outcome. I have concluded Indigenous peoples were correct to take the invitation in the Uluru Statement from the Heart to the Australian people. We were not wrong to ask them to recognise us through a Voice.


For a people with inherent rights but who are a minority spread across this vast continent – with a parliament that will continue to make laws and policies about us – it is inevitable that we will need to establish a national representative body to pursue justice. We need to be organised.


Delaying the referendum was never an option, not even when the polls were going south. Had we convinced the government to postpone the referendum, we would still be wondering what could have been, especially if the gaps continue to widen. We had a responsibility to try now, to use the rare opportunity we had, in the interests of our children. At least now we know where we stand.


As a leader of the campaign, I accept that, although we tried our best, we failed. I agree there were aspects of the “Yes” campaign that could have been better and I ponder what else I could have done. These thoughts hurt, like an aching emptiness in my chest.


An honest assessment compels me to mention Opposition Leader Peter Dutton as well. Dutton has shown he is bereft of the qualities held by the Indigenous leaders I have worked with. He is well short of the calibre of his opposite, Prime Minister Anthony Albanese.


While Albanese listened to Indigenous peoples respectfully, Dutton ignored us when in power. When Albanese negotiated the constitutional alteration with the Referendum Working Group, he did so in good faith, while Dutton was duplicitous, two-faced, deceitful.


At the next federal election, the record will show the prime minister had a go. He followed through with his pre-election promise to hold a referendum in this term of parliament. He kept his word, even when the going got tough, whereas Dutton has already reneged on his promise to hold another referendum should the first one fail to pass.


It is noteworthy because it exposes that this is all politics on his part. If he ever becomes prime minister, it is an indication that he places no value on speaking with Indigenous people before making decisions about them. His promise of a second referendum was decided without consulting Indigenous leaders, not even his own spokesperson on Indigenous affairs.


None of this is bitterness on my part, just truth. Peter Dutton chose politics over outcomes. His career came before fairness. He sought victory at any cost.


When I go home on Sunday – just my 25th day in Darwin this year, having worked almost every day since May 21, 2022 – I can proudly tell my son that though the referendum failed, the movement for Indigenous rights and recognition has grown.


In 2017, we were almost 4 per cent of the population calling for Voice, Treaty and Truth-Telling. As of Saturday, we are nearly 40 per cent, walking together. Almost seven million Australians voted “Yes”. Both major parties would kill for a first preference vote like that.


Probably the most important analysis from the referendum was that polling booths in predominantly Indigenous communities across the entirety of the country overwhelmingly voted “Yes”. We have thoroughly established that this is fact: a great majority of Indigenous people support constitutional recognition through a Voice to Parliament. We seek self-determination over who speaks for us. Claims otherwise are an incontrovertible lie.


To my fellow Aboriginal and Torres Strait Islander people, I say we continue our push for our common goals. Don’t be silenced. Be louder, prouder and more defiant. Of course, you will be. The survival of our culture and our babies depends on it.


To the parents I met so many times, who turned up for their first doorknock with their little ones in tow, their “Yes” shirts worn proudly, sunscreen smeared on their faces: keep having those conversations with your neighbours at every opportunity. Keep turning up.


To the small number of people who registered to attend the town hall in Yamba and Grafton, and the hundreds more who turned up without registering, and who expressed their gratitude at how the forum had brought the community together: stay committed to this unselfish cause. In regional communities across the country, the town hall attendances were magnificent. Keep turning up.


To the random members of the public who have hugged me, to the beautiful Elders who treated me like a son, to the fellow union members who organised their communities, not just their places of work, maintain the love for what makes this country unique – more than 60,000 years of continuous heritage and culture.


While the outcome was disappointing, in all my years of advocacy for Indigenous rights, I have never felt such levels of solidarity.


Across the country, lifelong friendships have been made. I have new Aunties and Uncles, like the strong Aboriginal women at Baabayn Aboriginal Corporation in Mount Druitt, who themselves have formed bonds with the local ethnic communities as they campaigned for “Yes”. I love you, Aunties.


In this campaign we saw Liberals and Nationals give speeches alongside Labor and the Greens. We saw corporate chief executives leafleting with union officials. All denominations have prayed together. The “Yes” rallies, more than 200,000 people strong, brought colour, joy and diversity to the streets, in unity with Aboriginal and Torres Strait Islander people.


Late this week, ending the week of silence, an official statement from Indigenous leaders was made public. In summary: we continue our calls for our voices to be heard, for reform and for justice, and we need your ongoing support.


This is the task ahead. I say to all the hundreds of thousands of people I have spoken with over the past six years, the many friends I have made on this journey: we were always on the right side of history. Young Australians voted “Yes” with us. Imagine what we can achieve if the almost seven million Australians who voted “Yes” continue to have conversations with their neighbours, meeting “No” voters with an understanding that they may have voted “No” because of the lies they were told. In time, we will turn the “Nos” into “Yeses”.


Let us talk of our strengths while addressing our weaknesses. Let us believe in ourselves, our communities and our country, rather than looking over our shoulders at the shadows Peter Dutton has thrown across Australian politics. Let us call on the parliament to shine a light on those shadows, those deathly shadows, lest they continue to undermine our democracy. Ask yourself, which group will be targeted next?


When I was writing my first book about the Uluru Statement from the Heart, published in 2019, my son was just eight years old. He asked me what the title of the book would be. When I asked him what he would call it, he proceeded to do a series of armpit farts. We both laughed. Then I told him I would call it Finding the Heart of the Nation. He asked me, “Where is the heart of the nation?”


I put my laptop down beside me on the couch. I pulled him close. I put my hand on his chest, and I said, “The heart of the nation is here.”


The heart of the nation is still here. It always was and it always will be, waiting to be recognised by our fellow Australians. Whether you voted “Yes” or “No”, I say to you with humility and respect, open your hearts and your minds henceforth. The truth should be unifying, not divisive.


This article was first published in the print edition of The Saturday Paper on October 21, 2023 as "After the vote".


Thomas Mayo is an Aboriginal and Torres Strait Islander man, assistant national secretary of the Maritime Union of Australia and author of six books, including Dear Son – Letters and reflections from First Nations fathers and sons and the bestselling children’s book Finding Our Heart.


October 21, 2023