Major tech companies have been slammed as “pirates” in Australia as the artificial intelligence (AI) boom continues. Amazon, Google, and Meta were in the firing line as a Senate inquiry probed how Australian data is used.
The Australian inquiry examined the adoption of artificial intelligence and the impacts and opportunities it could present. Its report, originally scheduled for September 19, 2024, was delayed until November 26.
Tony Sheldon, the Labor senator who led the inquiry, said that the companies pushing AI are “pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed.”
The damning conclusion comes as Sheldon butted heads with the tech powerhouses, who weren’t answering questions directly.
In a statement, the senator lambasted those being questioned, “Watching Amazon, Meta, and Google dodge questions during the hearings was like sitting through a cheap magic trick – plenty of hand-waving, a puff of smoke, and nothing to show for it in the end.”
While the inquiry won’t lead to any immediate results, it has provided 13 recommendations for how the Australian government can tackle AI. A large portion of these concern protecting workers in different sectors from what has been dubbed “high-risk AI” – uses of the technology that could begin to put people out of work.
Sheldon wants to “rein in big tech” with a new set of laws created specifically to target AI. During the inquiry, Meta, Google, and Amazon all refused to reveal how data collected from their products – like WhatsApp and Alexa – is used to train AI.
Meta did say that it had been collecting this data since 2007, but couldn’t clearly explain how users could have agreed to that use when the AI products in question didn’t exist in 2007.
However, Liberal Party senator Linda Reynolds and James McGrath of the Liberal National Party of Queensland both disagreed that AI should be labeled “high-risk” with respect to people at work. They did agree that AI was more of a threat to the country’s cybersecurity efforts than to the creative industry.
Australian AI inquiry recommends protections for creatives
Another large part of the inquiry is about the creative space. Recommendation eight directly suggests that more work be done by continuing to “consult with creative workers, rightsholders, and their representative organizations” to combat the way that AI is trained. This has been labeled “unprecedented theft”.
AI models are trained by ingesting large quantities of data. Large language models, like Google Gemini or ChatGPT, then draw on patterns learned from that data to construct their responses.
Nvidia was caught training a generative AI model on thousands of hours of Netflix and YouTube videos, while OpenAI has said that products like ChatGPT wouldn’t be possible without using copyrighted material.
This also ties in with recommendations nine and ten. Nine suggests that, if adopted by the government, AI developers would be required to clearly outline the copyrighted works used in their training datasets.
Ten recommends that the government “urgently undertake further consultation” with creatives and the industry to “ensure fair remuneration is paid” if any artificially generated content is used in a commercial setting that has used their material.