Newsom taunts Trump over proposed ban on US states regulating AI | BBC threatens legal action against Perplexity | Country musician sues AI start-ups Udio and Suno
#59 | PLUS: 'Voice artists need protection from AI exploitation' | ✨AND: The $2,000 AI-generated TV commercial that points to advertising's future
A WARM WELCOME to Charting Gen AI, the global newsletter that keeps you informed on generative AI’s impacts on creators and human-made media, the ethics and behaviour of the AI companies, the future of copyright in the AI era, and the evolving AI policy landscape. Our coverage begins shortly. But first …
🏛️ AI POLICY & REGULATION
CALIFORNIA GOVERNOR Gavin Newsom this week welcomed a landmark report that paves the way for another attempt to regulate AI safety in the state that’s home to 32 of the top 50 artificial intelligence companies worldwide.
The report comes as senators in Washington mull President Trump’s One Big Beautiful Bill (OBBB), which proposes a 10-year ban on states regulating AI. The move has been widely condemned by bipartisan groups of lawmakers across all 50 US states. As we reported last week, Senator Ted Cruz — chair of the Senate’s powerful commerce committee — is seeking to amend the bill by blocking access to federal broadband funds for states that ignore the decade-long moratorium.
Newsom said that while he was helping to guide the responsible, safe, and ethical deployment of AI, Trump’s OBBB would gut California laws banning AI-generated child pornography, deepfake porn, and robocall scams against the elderly. “California is the home of innovation and technology that is driving the nation’s economic growth — including the emerging AI industry,” said Newsom. “As Donald Trump chooses to take our nation back to the past by dismantling laws protecting public safety, California will continue to lead the way with smart and effective policymaking.”
Newsom thanked the experts and academics who had responded to his call for detailed empirical research into the likely evolution of the most advanced models. That followed his veto last September of a state AI safety bill that would have made developers of the largest AI models liable for the catastrophic harms they might cause. The bill, known as SB 1047, had the support of AI pioneers Geoffrey Hinton and Yoshua Bengio and gained the unexpected backing of Elon Musk. But it was bitterly opposed by Google, Meta, OpenAI, Silicon Valley investors and powerful US Democrats including Nancy Pelosi.

Governor Newsom’s AI taskforce — led by ‘AI godmother’ Dr Fei-Fei Li, director of the Stanford Institute for Human-Centered AI, Dr Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, and Dr Jennifer Tour Chayes, dean of UC Berkeley’s computing and data science college — this week concluded that “without proper safeguards ... powerful AI could induce severe and, in some cases, potentially irreversible harms”.
With policy principles rooted in the ethos of “trust but verify” — the Russian proverb used by Ronald Reagan during nuclear disarmament discussions with the Soviet Union in the late 1980s — the 52-page report called for more disclosures by AI developers since there was “systemic opacity” in several key areas. Transparency would also be boosted by giving whistleblowers clear protections and allowing third parties to evaluate advanced models “above and beyond” information released by AI developers.
Senator Scott Wiener, who authored last year’s ill-fated SB 1047, said the “brightest AI minds” behind the report had affirmed “that while AI presents massive opportunities to deliver a future of abundance and broad-based prosperity, the immense power of these models requires policymakers to act with haste to establish effective safety guardrails”.
In a statement, Wiener said the AI lobby was “openly advocating for the federal government to wash away state protections against deepfake revenge porn, algorithmic bias in healthcare decisions, and intellectual property theft — commonsense protections we can all agree upon”. California could play a “vital role” in establishing safeguards and setting standards for others to follow, said Wiener, who in February introduced a new state AI safety bill, SB 53.
Wiener said his office was now considering which of the experts’ recommendations should be incorporated within SB 53, and invited “all relevant stakeholders to engage with us productively in that process”.
🌟 KEY TAKEAWAY: Newsom’s expert panel stops short of recommending any specific policies, implying that’s the job of elected politicians, who now have the very latest research on the capabilities of foundation models and the risks they pose. The report says there’s a limited window of opportunity to develop regulatory policy and warns that “if those whose analysis points to the most extreme risks are right … then the stakes and costs for inaction on frontier AI at this current moment are extremely high”. Wiener clearly hopes SB 53 will be the vehicle for the expert group’s findings. If he can incorporate them, SB 53 will face an even more hostile reception than SB 1047. Much has changed since September 2024. Big Tech has been emboldened by Trump’s ‘America First AI’ stance. AI developers will no doubt call on Trump to intervene and protect ‘Big Beautiful American AI’ from what they’ll say is burdensome regulation. And at that point AI safety will be a new front in the ongoing bitter spat between Trump and Newsom.
ALSO:
➡ G7 leaders steer clear of overtly pro-regulatory language in talks on AI
➡ Brussels is under pressure from Big Tech to pause pioneering EU AI Act
➡ European writers urge MEPs to look again at copyright and gen AI laws
➡ AI productivity gains will only happen ‘if we reward human originality’
➡ OpenAI CEO praises Trump saying he’s been ‘very good’ for AI policy
➡ LLMs are ‘unlikely to come to the rescue of countries’ slow growth’
➡ UK government’s AI tool is powered by OpenAI, Google and Anthropic
📰 AI & NEWS MEDIA
THE BBC IS THREATENING to take legal action against Perplexity unless the AI search engine stops scraping its copyrighted material. According to the Financial Times, the UK pubcaster has written to Perplexity CEO Aravind Srinivas saying its AI model was “trained using BBC content”.
The FT — which had seen the letter — said the BBC could seek an injunction against the Jeff Bezos-backed start-up unless it ceased scraping, deleted copies of scraped content, and came up with “a proposal for financial compensation” for material it had already used. The FT reported Perplexity had dismissed the BBC’s claims as “manipulative and opportunistic”, saying the corporation had “a fundamental misunderstanding of technology, the internet and intellectual property law”. “[The claims] also show how far the BBC is willing to go to preserve Google’s illegal monopoly for its own self-interest,” the FT further quoted Perplexity as saying. A BBC spokesperson confirmed to Charting Gen AI that the FT’s reporting was accurate but declined to comment further.
Last October The New York Times wrote to Perplexity demanding it stop using the newspaper’s articles to create summaries. Later that same month the parent companies of the Wall Street Journal and the New York Post sued Perplexity, accusing it of “engaging in a massive amount of illegal copying of publishers’ copyrighted works”. In their lawsuit, Dow Jones and NYP Holdings — both divisions of Rupert Murdoch’s News Corporation — said Perplexity’s “brazen scheme” sought to build a substitute product that competes “for readers while simultaneously free-riding on the valuable content the publishers produce”.
🌟 KEY TAKEAWAY: The BBC’s letter to Perplexity — which amounts to a cease and desist notice — is the first time the corporation has taken steps against an AI developer. It follows outputs within Perplexity that were reportedly verbatim copies of BBC content. The fear, as ever, is that AI products built on its journalism will compete with it, serving up substitutes. Perplexity’s bluster is similar to the line it took when WIRED last June published the results of an investigation titled ‘Perplexity Is A Bullshit Machine’. Puzzling, though, is Perplexity’s accusation that the BBC — a publicly funded body — is seeking to prop up Google. For the sake of the revenue-sharing deals Perplexity has already struck with news publishers, and those it hopes to strike in future, it would do well not to antagonise the BBC further.
ALSO:
➡ Audiences remain sceptical of AI use in news: Reuters Institute report
➡ ChatGPT shocks journo by offering to format translation for media site
➡ Business Insider’s AI policy insists ChatGPT ‘won’t replace journalists’
➡ Meanwhile Business Insider taps AI for instant summaries of top stories
➡ Press Gazette says US IT publication Levelact appears to be AI-written
⚖️ AI LEGAL ACTIONS
COUNTRY MUSICIAN Tony Justice this week filed lawsuits against music generators Udio and Suno claiming the AI start-ups trained their generative models on his copyrighted works. His attorney, Krystle Delgado, is now inviting other independent musicians to join the class action.
Justice’s lawsuit against Udio was filed in the Southern District of New York while the Suno copyright suit was filed in Boston. Both were filed on behalf of Justice, his indie music label 5th Wheel Records, and other musicians who want to join. Justice’s complaints — both lawsuits use substantially similar language — say the AI music generators admitted to training their models on “publicly available” songs, mostly owned by independent artists. Both lawsuits also say Udio and Suno’s actions were “not only unlawful, but an unconscionable attack on the music community’s most vulnerable and valuable creators”.
While Udio and Suno had claimed the unauthorised use of copyrighted material was covered by the US Copyright Act’s fair use doctrine, the lawsuits point to the US Copyright Office’s recent report that stated fair use would not excuse training on expressive works to generate outputs that then compete with original works. Both suits say the “intentional theft of millions of songs created by independent artists is appalling, wrong, unjustified and certainly not fair” and “runs contrary to the purpose of the fair use doctrine”.
On YouTube Delgado — aka Miss Krystle, who’s also a musician — said she was suing Udio and Suno on behalf of independent artists, songwriters and music producers whose rights had been “trampled the most”. Delgado stressed she was “not anti-AI” but was “anti-big corporations that raise hundreds of millions of dollars that can’t be bothered to pay you for your music and instead outrageously opt for theft in the training of their AI”.
🌟 KEY TAKEAWAY: Justice’s lawsuits bring the total number of copyright actions filed against AI companies in the US to 44. They come as music labels Universal Music Group, Warner Music Group and Sony Music Entertainment continue talks with Udio and Suno over potential licensing deals that could end the music majors’ own lawsuits against the AI start-ups. Significantly, Justice’s lawsuits make specific reference to the Copyright Office’s report on AI and copyright. That report was provisional, rushed out ahead of Donald Trump’s firing of its head, Shira Perlmutter. She’s now suing Trump, saying the attempt to oust her was “blatantly unlawful”. What impact the report will have on the courts, which ultimately decide fair use, remains to be seen. Also up in the air is the report’s status and whether the Trump admin will now demand a more Big Tech-friendly version. Worth noting: Perlmutter still appears on the Copyright Office website as its leader.
ALSO:
➡ Artists demand Midjourney unveil AI training data after Hollywood suit
➡ Ross Intelligence granted interlocutory appeal against AI fair use ruling
➡ AI experts give evidence in Getty trial — Rebecca Newman’s new update
🎨 AI & CREATIVITY
ON TUESDAY we reported the plight of voice actor Gayanne Potter whose voice has been cloned without her consent and used as an AI-generated announcer on Scotland’s rail network. Her lawyer Dr Mathilde Pavis, founder of digital replica advisory Replique, said Gayanne’s case was part of a “troubling pattern” of voiceover artists agreeing to lend their voices to AI projects only to later find they had been “cloned, commercialised and used far beyond what they agreed to”.
After we shared Gayanne’s story, voice actor and communication coach Melissa Thom told Charting Gen AI: “We’re at a point where legislation, industry standards, and collective advocacy need to come together to safeguard creatives from this kind of exploitation. No one should have to find out that their voice is being used by an AI model, especially without their knowledge or permission.”
Melissa, CEO of BRAVA, the Bristol Academy of Voice Acting, echoed advice given by Mathilde in our report. Said Melissa: “With years of experience in the business I’ve learned not to sign contracts with vague or ambiguous terms. And when needed, I consult legal counsel to ensure I’m fully protected.” Asked what advice she’d give to aspiring voice actors worried that AI might replace them, Melissa said: “Hone your craft, build relationships and focus on what makes you stand out from the crowd. AI will compete for low budget work, but there’s still plenty of opportunity for high-quality talent.”
RELATED:
Digital replica expert sounds alarm on voice artists being ripped off by AI companies
A LEADING EXPERT in digital replicas says creatives need to demand clauses in contracts that prevent their recorded performances being used for AI model training and voice cloning.
ALSO:
➡ Adobe releases still and moving image generator Firefly as a mobile app
➡ Stock image provider Shutterstock unveils new visual identity for AI era
➡ Midjourney launches the first version of its video generating model
➡ This artist trained a generative AI model on nothing. Here’s the result
➡ Charismatic.ai unveils AI animation platform outputting ‘microdramas’
➡ Dramatify launches AI Script Breakdown automating production needs
➡ Fake bands and AI tracks ‘are taking over YouTube and Spotify’: El País
➡ Sony Music boss pledges to share AI revenues with artists & songwriters
➡ Paris and London-based animator Animaj wins $85 million for gen AI
➡ Fstoppers’ Ken Lee asks when is it OK to use AI in your photography?
⏱️ AI BRIEFS
◼️ New study shows Meta’s Llama 3.1 70B model memorised 42% of the first Harry Potter book, up from the 4.4% memorised by the earlier Llama 1 model.
◼️ Researcher hits back at Apple’s Illusion of Thinking paper (Charting #58) saying errors “reflect experimental design limitations” not “reasoning failures”.
◼️ No surprises here ... researchers find AI tools are “great for brainstorming” but humans produce better ideas when they rely on their own thinking.
◼️ MIT Media Lab finds ChatGPT users get lazier the more they use it while “consistently” underperforming “at neural, linguistic, and behavioural levels”.
◼️ Survey shows 33% of American consumers will abandon a generative AI product if it shows bias and 80% expect businesses to prevent bias in AI outputs.
◼️ Business Insider investigation finds four out of 10 US datacentres are now, or are set to be, in locations where there are serious water shortages.
◼️ AI scraper bots are overwhelming online systems in libraries, archives, museums, and galleries, says a new report from GLAM-E Lab’s legal academics.
◼️ The hunt is on for uncontaminated AI training data produced before ChatGPT’s launch in 2022 as content created since could cause AI model collapse.
💬 QUOTES OF THE WEEK
“The very makers of AI, all of whom concede they don’t know with precision how it actually works, see a one in ten, maybe one in five, chance it wipes away our species. Would you get on a plane at those odds? Would you build a plane and let others on at those odds?” — Axios co-founders Jim VandeHei and Mike Allen writing on whether LLMs could escape human control
“On its current trajectory, the surest promise that generative AI offers is not creating new originators, but massively scaling up the production of derivative art.” — Sheldon Pearce, Editor at NPR Music, writing on Timbaland’s AI music project
“AI as an industry doesn’t grow if it cannot control the models, and it cannot control the models without controlling the data. The problem is that all that data is the same data, and training on the same data leads to the same outcomes.” — Eryk Salvaggio, hacker, researcher, designer, and media artist, writing in Tech Policy Press on the myth of the AI black box
“In their own ways, both technologists and artists are bidding for immortality — whether by creating a timeless novel or a godlike AI. A little humility from both sides could go a long way toward making future conversations more productive.” — Lauren Oliver, novelist and ethical AI developer, writing in Fast Company on the clash between the arts and tech
“When AI fails, it often fails quietly and badly. A misquote. A mislabelled name. An exam flagged as plagiarised because a neurodiverse student thinks differently. This isn’t sci-fi. It is real-world harm caused by systems that haven’t been tested where it counts.” — Daniel Aldridge MP writing in The House on the need for transparent AI testing
“[Google’s] Veo 3 and other models are giving us a glimpse into a new generation of misinformation — un-fact-checkable disinformation, with content that’s so life-like it’s no longer distinguishable from reality. For 90% of viewers, Veo 3 is already indiscernible from real, human content. However, the big fear is that in the next 1-2 years, all content, human or otherwise, will come into question.” — Ari Abelson, co-founder of OpenOrigins which proves images are human-made, talking to TechRadar
✨ AND FINALLY …
ADVERTISING LEGEND Sir Martin Sorrell this week urged agencies to move from “time-based to output-based compensation” now that 30-second commercials that used to take months and cost millions “take days and cost thousands” — thanks, of course, to AI. Sir Martin was speaking to Beet.tv on the sidelines of the Cannes Lions International Festival of Creativity, where brands and agencies celebrate stand-out practitioners. While Sir Martin dismissed the notion that AI is killing advertising, he accepted that copywriting and visual production were being “transformed”.
Last week US audiences watching the National Basketball Association (NBA) finals got a glimpse of that transformation — a TV ad that almost certainly won’t win a coveted Cannes Lions Award, but nonetheless delighted its client Kalshi, the online prediction market, and startled its creator, video producer PJ ‘Ace’ Accetturo. Why the surprise? That Kalshi had hired him “to make the most unhinged” TV ad possible. And that a network had approved it.
In a blog post PJ Ace explained how it was made, starting with Kalshi’s rough ideas (the price of eggs, the likelihood of hurricanes and aliens, etc.). He used Google Gemini to generate detailed prompts for Veo 3, Google’s latest video model, which outputs speech audio in near-perfect lip sync. Hundreds of Veo 3 generations yielded 15 usable clips, which were then stitched together. All in all, one person working over two days and a cost of $2,000, a 95% reduction “compared to traditional advertising”, said PJ Ace, who is now recruiting for an AI studio. So anyone can do this, right? Not quite. “You still need experience to make it look like a real commercial,” he said. “I’ve been a director 15+ years, and just because something can be done quickly, doesn’t mean it’ll come out great.”
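For readers curious what that workflow looks like in practice, here is a minimal sketch of the pipeline PJ Ace describes. To be clear, this is not his actual tooling: it assumes Google’s google-genai Python SDK and ffmpeg for the stitching, and the model IDs, clip counts and interactive review step are our own illustrative choices.

```python
# A sketch of the Gemini-to-Veo-to-stitch workflow PJ Ace describes.
# Assumes the google-genai SDK (pip install google-genai) and ffmpeg on PATH.
# Model IDs below are assumptions and may need updating.
import subprocess
import time

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Step 1: turn the client's rough ideas into a detailed, shot-level Veo prompt.
brief = "Egg prices, hurricanes, aliens. Unhinged prediction-market energy."
detailed_prompt = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model ID
    contents=f"Write a detailed 8-second video prompt, including spoken "
             f"dialogue, camera moves and lighting, for this brief: {brief}",
).text

# Step 2: generate many candidate clips; expect only a fraction to be usable.
keepers = []
for i in range(20):  # PJ Ace ran hundreds of generations to get 15 keepers
    op = client.models.generate_videos(
        model="veo-3.0-generate-preview",  # assumed model ID
        prompt=detailed_prompt,
    )
    while not op.done:  # video generation is asynchronous; poll until finished
        time.sleep(10)
        op = client.operations.get(op)
    clip = op.response.generated_videos[0]
    client.files.download(file=clip.video)
    clip.video.save(f"clip_{i:03d}.mp4")
    # Human judgement is the filter no model replaces.
    if input(f"Keep clip_{i:03d}.mp4? [y/N] ").strip().lower() == "y":
        keepers.append(f"clip_{i:03d}.mp4")

# Step 3: stitch the keepers into one spot with ffmpeg's concat demuxer.
with open("clips.txt", "w") as f:
    f.writelines(f"file '{name}'\n" for name in keepers)
subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", "clips.txt",
                "-c", "copy", "ad.mp4"], check=True)
```

Even as a sketch, it shows where the cost now sits: in taste and review time rather than crews and render budgets, which is precisely the shift Sir Martin says should change how agencies bill.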
Advertising purists will mock the ad’s inane AI sloppiness. And this wouldn’t be Charting without a reminder that Veo 3 was likely trained on video content scraped from the web and YouTube without the consent of creators. There are questions too over the ad’s copyrightability. But the fact it was approved suggests networks just don’t care. And that Sir Martin, the sage of marketing, is right.
The crazy thing: They are trying to make those of us opposed to OpenAI and Grok and Meta out to be Luddites. Anything but. I simply want AI without intellectual property theft.
Is that so hard?