Ban on US states regulating AI faces crunch vote | UK government's top AI advisor is 'stepping back' | Momentous week for AI and copyright - what can plaintiffs learn?
#60 | PLUS: NMA welcomes CMA's Google AI move | NYTimes' 'ideal' AI scenario | Getty narrows AI suit | Artists: beware! | Musk wants to rewrite knowledge |✨AND FINALLY: Uh oh ... where's io?
A WARM WELCOME to Charting Gen AI, the global newsletter that keeps you informed on generative AI’s impacts on creators and human-made media, the ethics and behaviour of the AI companies, the future of copyright in the AI era, and the evolving AI policy landscape. Our coverage begins shortly. But first …
🏛️ AI POLICY & REGULATION
A PROPOSED 10-YEAR ban preventing US states from regulating AI is heading for a crucial vote in the Senate. But first the deeply controversial proposal, which has divided Republicans, needs to be rewritten after hitting a last-minute snag with the Senate official who referees rules in the upper house.
Last weekend Senate parliamentarian Elizabeth MacDonough approved a revised version of the decade-long moratorium drafted by Ted Cruz, chair of the Senate’s commerce committee. That latest version said states wanting to access a $500 million AI infrastructure fund would be prohibited from enforcing state AI regulations for 10 years. Those that didn’t want to receive federal funds did not have to adhere to the provision. “In other words, this pause in AI regulation is voluntary and not a federal mandate on states,” said a note released by Cruz.
On Wednesday MacDonough reportedly told Cruz he needed to rework the latest wording to make it clear the ban was no longer linked to a $42 billion broadband fund for states. Earlier, Senate majority leader John Thune told Axios he expected there would be “some version” of the state ban in the One Big Beautiful Bill (OBBB) which Donald Trump wants to be able to sign before July 4. But the measure faces opposition from Republican senators Marsha Blackburn, Ron Johnson, Rick Scott and Josh Hawley.
Trump’s commerce secretary Howard Lutnick on Wednesday said on X that “a single national standard for AI” was needed to end “the chaos of 50 different state laws” while the National Venture Capital Association (NVCA) said “the current fragmented AI regulatory environment in the US creates unnecessary challenges for start-ups, stifles innovation, and threatens our dominance in the industry”.
🌟 KEY TAKEAWAY: If the revised 10-year ban makes it through the Senate it then faces a bumpy ride in the House, where conservative firebrand Marjorie Taylor Greene (aka MTG) is set to vote against it. In May the OBBB cleared the House by a single vote, with MTG later admitting she’d supported it without being aware that it stripped “states of the right to make laws or regulate AI for 10 years”. Even if it’s removed in the Senate or the House it’s likely to come back in some form, given the power of Big Tech and its near-perfect overlap with the Trump admin’s policy of America First AI.
THE UK GOVERNMENT’S top advisor on artificial intelligence and author of its AI Opportunities Action Plan — which called for a “reform” of copyright laws in favour of AI developers — has announced he’s “stepping back”.
Writing on LinkedIn Matt Clifford said he was leaving to look after a sick family member, and also felt he needed “to rebalance a bit”. Clifford praised what he called the “enormous personal investment” of prime minister Sir Keir Starmer and technology secretary Peter Kyle in the UK’s AI agenda and said his own “sense of mission on AI” was “stronger than ever”. “I intend to dedicate my career to making AI go well for the UK and the world,” Clifford wrote. “I feel I’ve played a small role in that so far, but there is so much more to do ... Stay tuned.”
Commenting on Clifford’s post, tech author and journalist Chris Middleton said “the UK AI industry — via trade body UKAI — strongly opposes the government’s proposals for copyright and AI”, describing them as “damaging, misguided, unworkable, and divisive”. “Yet the government ignores the views of the very sector it claims to be helping,” said Middleton, adding that ministers had rebuffed the views of the House of Lords, most MPs, the media, the UK’s creative communities, “and most of the public”.
Clifford’s departure came in the week that the UK government published a 10-year plan for the creative industries. The 77-page report pledged ministers would “ensure a copyright regime that values and protects human creativity, can be trusted, and unlocks new opportunities for innovation across the creative sector and wider economy”.
It also said it would establish a Creative Content Exchange (CCE), a “trusted marketplace for selling, buying, licensing, and enabling permitted access to digitised cultural and creative assets”. “This new marketplace will open up new revenue streams and allow content owners to commercialise and financialise their assets while providing data users with ease of access.”
🌟 KEY TAKEAWAY: Clifford was a polarising character. His report was published during the government’s consultation on copyright and AI, giving the distinct impression that ministers’ minds were already made up on allowing AI companies to scrape creators’ content with impunity. While Clifford — a tech investor with several outside interests — had the ear of ministers, he was regarded by many in the creative industries as a representative of Big Tech. His departure now gives the government an opportunity to repair its bruised relationship with creators following its refusal to back emergency transparency provisions in the recent tortuous passage of the data bill. Building trust would start with the government finally revealing its stance on copyright and AI following the 11,900 responses to its consultation. Its creative industries plan merely says that will happen at some point in 2025. But when, exactly?
ALSO:
➡ Big Tech lobby is urging New York governor to block sweeping AI bill
➡ Canada is mulling new AI regulatory framework that includes copyright
➡ Creative Australia updates its AI principles on IP use and transparency
📰 AI & NEWS MEDIA
TRADE BODY the News Media Association (NMA) has welcomed a move by the UK competition watchdog which could force Google to give publishers more control over content scraped for its AI-written summaries. The Competition and Markets Authority (CMA) provisionally ruled that Google’s dominance in search had an impact on the terms it was able to dictate to publishers, while “insufficient controls” over how their content was used in search and its AI Overviews also limited “news publishers’ ability to monetise their content”. The CMA provisionally decided that Google’s AI assistant Gemini was outside the scope of its probe. The regulator will now consult on proposed measures before deciding its next steps in October.
Owen Meredith, CEO of the NMA, the voice of national, regional and local news media organisations in the UK, welcomed what he said was the “CMA’s recognition of the pressures facing publishers, particularly as AI technologies increasingly draw on journalistic content, often without consent or compensation”. Meredith urged the CMA to include Gemini and investigate how publishers’ content flowed into it. “Without scrutiny, this quiet extraction of content will continue to undermine the value of original journalism,” he added.
🌟 KEY TAKEAWAY: The US Department of Justice has already accused Google of seeking to expand its dominance in search to AI as part of the DOJ’s wider push to force the break-up of the tech giant. The CMA’s probe will consider whether Google’s dominance — Google search accounts for more than 90% of all general search queries in the UK — gives it an unfair advantage over publishers, who complain they’re unable to prevent their content being used for AI Overviews without it disappearing from search. Google had already deployed its playbook response to the threat of regulation, with its senior director of competition telling The Telegraph that “punitive regulations” could delay the launch of new innovations in the UK, and create a “roadblock to growth”.
THE NEW YORK TIMES’ chief executive has explained what she believes would be the ideal scenario for news groups forging relationships with AI developers.
Speaking last week at the Cannes Lions festival, Meredith Kopit Levien told Semafor chief editor Ben Smith and media editor Max Tani that the “best outcome here for everyone is commercial agreements that reflect fair value exchange for our work, that gives the Times or any publisher control over how its journalism and intellectual property is used across the full scope of uses by LLMs, and that that is done in an arrangement that feels sustainable”. Last month (see Charting #56) the Times announced a multi-year AI licensing agreement with tech giant Amazon, its first such deal with an AI company. Under the partnership, content will be used to train Amazon’s AI models, with real-time summaries appearing in products such as AI-upgraded Alexa+ smart speakers. The Times is also suing Amazon rivals OpenAI and Microsoft for allegedly using its articles for generative model training without consent to create products that then compete with it. That lawsuit is inching its way towards a potential trial.
🌟 KEY TAKEAWAY: Kopit Levien said the “stakes were high” for “anyone developing intellectual property” but they were also high for the AI developers too “because ultimately there’s got to be a sustainable business model to make great intellectual property [since] the LLMs will ultimately only be as good as what goes into them”. The question now for publishers and creators is whether the hi-techs will feel emboldened by this week’s fair use rulings and abandon licensing talks altogether.
ALSO:
➡ India’s news publishers welcome government review of copyright laws
➡ NiemanLab asks ‘do readers want AI-powered news recommendations?’
➡ Press Gazette probes how BBC World Service uses AI for Polish news
➡ PA Media launches Expert Hub promising ‘no AI-generated content’
⚖️ AI LEGAL ACTIONS

IT’S BEEN A HISTORIC week for US copyright with two landmark rulings on AI and fair use. First there was US district judge William Alsup’s summary judgment stating that AI start-up Anthropic’s unauthorised use of books it had purchased to train its generative models was covered by the fair use exception in US copyright law. However, Judge Alsup also ruled that Anthropic’s storage of over 7 million copies of pirated books used to “build a central library” was not justified and said a trial would decide the resulting damages.
Two days later in the same San Francisco courthouse fellow US district judge Vince Chhabria ruled that Meta could claim fair use in a complaint brought by authors who’d accused the hi-tech of stealing their books to train its AI model. But Judge Chhabria said he could have reached a different conclusion had lawyers for the 13 authors offered compelling evidence that they’d been financially damaged by the unauthorised use of their works. Both judges agreed AI was transformative, that is, it had generated something new, something different.
IP attorney Aaron Moss told Bloomberg Law Judge Alsup’s ruling was “a clear win for [AI] developers because it essentially provides a roadmap for how to do an LLM that is fair”. Also speaking to Bloomberg Law, Matthew Sag, an AI law professor at Emory Law School, said while he agreed with Moss, “it seems weird to me that if the end result is Anthropic just goes on to eBay and Amazon, buys a bunch of books and then destroys them in the scanning process, that’s intrinsically better than just going to [the] LibGen [pirated copies dataset] and downloading them”. In other words, “you can do what you did — you just couldn’t go to a bad part of town to do it”.
🌟 KEY TAKEAWAY: If Judge Alsup’s ruling on Anthropic was a clear win for AI then Judge Chhabria’s ruling in favour of Meta should be regarded very differently. Yes, Meta triumphed, but only since the authors’ lawyers were unable to demonstrate market harm. Chhabria’s ruling hints he was desperate to find for the authors, but he couldn’t. So demonstrating financial loss will now need to be the focus of the remaining 40+ copyright infringement cases working their way through the US court system. What Sag says on Anthropic buying books rather than using pirated copies provides another tip for plaintiffs: AI developers will need to demonstrate they actually did purchase the copyrighted works rather than simply scrape them from the web, or access them via a pirated dataset.
GETTY IMAGES this week dropped a key plank of its copyright complaint against Stability AI, narrowing its lawsuit against the UK-based developer of text-to-image generator Stable Diffusion. During closing submissions in the closely watched High Court trial, stock giant Getty dropped its claim that Stability had infringed its copyright by training Stable Diffusion on millions of its images without consent. Stability had claimed that because its AI training took place on machines run by US tech giant Amazon it was outside the scope of UK copyright law. Getty’s legal team told the London court it would continue to pursue a secondary infringement claim since, even though Stable Diffusion was trained outside the UK, its models then produced images within the country. Getty will also pursue claims for trademark infringement as some AI outputs included Getty’s watermarks.
Commenting on Getty’s decision to abandon its primary infringement claim, Gill Dennis, a senior lawyer in Pinsent Masons’ IP group, said: “This news will come as a blow to both sides of the AI copyright debate who were hoping that the outcome of the trial might bring some clarity to the very issues which have now been dropped. The apparent collapse of this aspect of Getty’s case serves to demonstrate just how difficult it can be for a party alleging misuse of their copyright works in the context of AI to make good their case. If a company the size of Getty cannot do it, it raises the question of whether smaller content creators, who represent the bulk of the UK’s creative industries and also claim to be losing out on royalty income, can do so.”
🌟 KEY TAKEAWAY: Dennis went on to say that Getty’s move would “only strengthen the resolve of those who want to subject AI developers to transparency obligations around the material they use to train their systems to enable fair licensing arrangements to be put in place”. In 2023 Getty sued Stability AI in the US. That copyright infringement suit is pending a decision on Stability AI’s motion to dismiss.
ALSO:
➡ Authors sue Microsoft saying its LLM was trained on pirated books
➡ Judge forces Perplexity to pause browser launch due to trademark suit
➡ Four defendants who used AI to make jigsaws are sentenced in China
🎨 AI & CREATIVITY
AN INTERNATIONAL GROUP of researchers is sounding the alarm over tools used by artists to protect their works from AI model training. Developed by computer scientists at the University of Chicago, the tools are popular with creators and have been downloaded nearly nine million times. Glaze adds perturbations to digital images, invisible to the human eye, that confuse AI models and prevent style mimicry, while Nightshade goes a step further and distorts features within the image, thus poisoning the models.
However, researchers at the University of Cambridge, along with colleagues at the Technical University of Darmstadt and the University of Texas at San Antonio, have spotted critical weaknesses in the tools. Using a method they call LightShed, the protections can not only be detected but also removed, leaving artworks vulnerable to unscrupulous AI developers. During trials LightShed detected Nightshade-protected images with 99.98% accuracy and removed the embedded protections. “This shows that even when using tools like Nightshade, artists are still at risk of their work being used for training AI models without their consent,” said Hanna Foerster from Cambridge’s computer science department, who conducted her research while an intern at TU Darmstadt. “We must let creatives know that they are still at risk and collaborate with others to develop better art protection tools in future,” she added.
🌟 KEY TAKEAWAY: The researchers stress that LightShed wasn’t developed as an attack on the computer scientists behind Glaze and Nightshade, but rather as an urgent call to action to work together on more robust AI-defeating technologies. The first step in that collaborative approach is to present LightShed at a major security conference in August and let creatives know their artworks are at risk. “What we hope to do with our work is to highlight the urgent need for a roadmap towards more resilient, artist-centred protection strategies,” said Foerster.
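For readers who want a feel for how this class of protection works, here is a minimal, illustrative sketch in Python. It is not the Glaze or Nightshade algorithm (the real tools optimise an image-specific perturbation against a model’s feature space); it simply shows the basic idea of changing pixels within a bound small enough that a human viewer shouldn’t notice. The function and file names are hypothetical.

```python
# Illustrative only — NOT the actual Glaze or Nightshade algorithm.
# Real protection tools optimise a perturbation against an image model's
# feature extractor; this toy version just adds bounded random noise to
# show the idea of a change humans barely see but pixels still record.
import numpy as np
from PIL import Image

def add_bounded_noise(path_in: str, path_out: str, epsilon: int = 4) -> None:
    """Perturb each pixel channel by at most +/- epsilon (out of 255)."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    perturbed = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(perturbed).save(path_out)

# Hypothetical usage:
# add_bounded_noise("artwork.png", "artwork_protected.png")
```

Any such perturbation also has to survive attempts at removal, which is reportedly the weakness LightShed exploits: detecting the protection’s statistical fingerprint and stripping it out.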
ALSO:
➡ The backlash against generative art ‘is hurting those who don’t use it’
➡ The Verge reports on why the music industry is embracing AI detectors
➡ Music streamer Deezer is to start flagging AI-made songs to listeners
➡ YouTube ‘needs to explain AI training policy more clearly to creators’
➡ Start-up Springboards encourages AI hallucinations to inspire creativity
➡ Performers’ union Equity urges producers’ body PACT to protect rights
👁️ EYE ON THE AIs
FRESH FROM HIS attempt to rewire the US government, Elon Musk is now seeking to rewrite human knowledge using his AI chatbot Grok. Launched in 2023 with the aim of being a “maximum truth-seeking AI”, Grok last month left users dumbfounded after it began generating unprompted replies to unrelated posts discussing claims of white genocide in South Africa (Charting #54). Last week Musk declared a “major fail” after Grok told an X user that “since 2016, data suggests right-wing political violence has been more frequent and deadly” than left-wing attacks. Without offering any evidence Musk said Grok had been “objectively false” and was “parroting legacy media”. “Working on it.”
Users didn’t have to wait long for his proposed solution. “We will use Grok 3.5 ... to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,” said Musk. Grok would then be retrained on that, he added. Dr Rumman Chowdhury, an AI researcher and CEO of Humane Intelligence who worked at Twitter until Musk dismissed her in November 2022, told Axios that retraining a model “would be fairly expensive” but many companies were considering how they could tweak answers to appeal to users. “These conversations are already happening. Elon is just dumb enough to say the quiet part out loud,” she added.
🌟 KEY TAKEAWAY: One of the major questions of the AI age is: What do we believe? Followed closely by: Who do we trust? AI models hallucinate for many reasons. Training data is one of them — a joke or satirical article is presented as fact (as in eat a rock a day and add glue to pizzas), or there’s bias in the data (as in clocks in AI outputs almost always showing their hands at ten past ten, since that’s how clocks are depicted in Google Images). But models can also make the wrong assumptions. And they’ll fill in gaps by generating outputs that are plausibly correct rather than absolutely right. So Musk’s big idea to rewrite human knowledge based on what he wants it to be will likely end in tears and tantrums. Who’s going to tell him?
ALSO:
➡ Ex-OpenAI CTO’s Thinking Machines is now valued at $10 billion
➡ Apple has reportedly considered buying AI search engine Perplexity
➡ Meta discussed a possible takeover of Perplexity, reports Bloomberg
➡ Apple’s delayed AI overhaul of Siri will ‘now be unveiled in spring 2026’
➡ Meta AI chief scientist releases ‘world model’ that addresses LLM fails
➡ Google DeepMind unveils Magenta, a real-time AI music generator
➡ Google is trialling audio version of its AI-generated search summaries
➡ BBC discovers Meta AI users’ personal prompts are being shared online
⏱️ AI BRIEFS
◼️ Pope Leo calls on tech leaders to adopt a “super ethical” approach to AI, one that helps rather than hinders “our youth ... in their journey towards maturity”.
◼️ Over half (51%) of global spam is now produced by AI, according to a study. AI-generated emails tend to be more grammatically correct, helping them slip past detectors.
◼️ Anthropic stress-tested 16 LLMs in simulated corporate environments and found they would resort to blackmail and call for humans to die when pushed.
◼️ OpenAI researchers found its latest and most powerful ‘reasoning’ models, o3 and o4-mini, hallucinated 33% and 48% of the time respectively, more than twice the rate of o1.
◼️ Why are lawyers particularly vulnerable to AI-hallucinated citations and fake cases? Bloomberg Law investigates.
💬 QUOTE OF THE WEEK
“The conventional wisdom that regulation stifles innovation needs turning on its head. As AI becomes more powerful and pervasive, appropriate regulation isn’t just about restricting harmful practices — it’s key to driving widespread adoption and sustainable growth. Many potential AI adopters are hesitating not due to technological limitations but uncertainties about liability, ethical boundaries and public acceptance. Clear regulatory frameworks addressing algorithmic bias, data privacy and decision transparency can actually accelerate adoption by providing clarity and confidence.” — Tim Clement-Jones, Liberal Democrat peer and spokesperson for the digital economy in the UK’s upper house, writing in The New Statesman
✨ AND FINALLY …
A VICIOUS SPAT has been playing out on social media this week between OpenAI CEO Sam Altman and Jason Rugolo, CEO of 15-person start-up iyO, maker of a screen-free computer embedded in an earbud.
Last month OpenAI announced it was buying io, a product and engineering start-up founded by legendary former Apple designer Sir Jony Ive, for nearly $6.5 billion. You might recall (in Charting #56) the wedding invitation-like announcement of their venture, and a slick video in which Altman and Ive teased they were working on “the coolest piece of technology that the world will have ever seen”. The announcement and video have been removed following a restraining order granted last week by a judge which prevents OpenAI from mentioning io. The order is part of Rugolo’s trademark dispute against Altman, who posted emails on X revealing Rugolo had hoped OpenAI would invest in or acquire iyO. “We passed and were clear along the way,” said Altman. “Now he is suing OpenAI over the name. This is silly, disappointing and wrong.” Rugolo hit back saying, “we’re not gonna let Sam and Jony steal our name”. Tang Tan, who co-founded io with Ive, said in a filing to the court that the prototype that Altman and Ive had referenced was “at least a year away from being offered for sale”, adding: “Its design is not yet finalised, but it is not an in-ear device, nor a wearable device.” Just what it is remains a mystery. And thus another episode in the ongoing drama that is OpenAI concludes, ready to resume another day.