Rights hero shares her AI playbook with publishers | Google's AI Overviews 'causing irreparable harm' says landmark complaint
#61 | PLUS: State AI regulation ban is set to return | Cloudflare blocks the bots | Authors seek publishers' pledges | BBC to publicly trial gen AI |✨AND: What's real or fake with The Velvet Sundown?
A WARM WELCOME to Charting Gen AI, the global newsletter that keeps you informed on generative AI’s impacts on creators and human-made media, the ethics and behaviour of the AI companies, the future of copyright in the AI era, and the evolving AI policy landscape. Our coverage begins shortly. But first …
👊 AI FIGHTBACK
CREATIVE RIGHTS warrior Karen Rønde this week shared a blueprint for how human-made media can take on the might of Big Tech, and win. Two days after announcing her licensing organisation was suing OpenAI for training its models on news media stakeholders’ content without consent, Rønde revealed her strategy — one that was “rooted in a deeper need to protect democratic values”.
Rønde said Big Tech companies didn’t care “about our industry, or Europe”. “It’s a global AI race and they want to win no matter what,” she told the Publishers’ Licensing Services (PLS) conference in London. “But this isn’t just a tech race. It’s our future and we can win the battle together,” stressed Rønde, CEO of the Danish Press Publications’ Collective Management Organisation (DPCMO). The Danish media industry had united behind DPCMO’s move to challenge OpenAI’s failure to respect their copyright.
“OpenAI told us very frankly when we reached out to them that Denmark is not a priority. We had to understand they had a small team, and Denmark was not top of the list. Well, law is law. And in Denmark tech companies need to abide by the law,” said Rønde. The Danish government had supported the appointment of a mediator, “but of course OpenAI didn’t want to participate”. “So on Tuesday we announced we are going to sue OpenAI on behalf of the entire Danish industry including the public service broadcasters. And hopefully more rightsholders will join the case,” said Rønde, the former public policy chief of Netflix in the Nordics.
Litigation was one of five parallel pillars of her strategy. The top priority was licensing, ensuring “fair remuneration that fuels reinvestment in quality content”. Another was legislation. “We need strong clear laws that respect creative rights in the digital age,” explained Rønde, who once ran her own law firm and was a judge in the Danish Court of Impeachment.
Literacy — winning hearts and minds “to create lasting change” — was another plank. True change was emotional and cultural. “We must lead by educating, building trust, showing integrity, and listening deeply.” A fifth pillar was leadership. Denmark this week assumed the EU presidency with a commitment to address copyright and AI. “We are a small country, but Denmark is ready to show leadership and be ambitious. And so must we.”
Rønde said the discussion about copyright was “not just about money”. “Creative content, human-made content, such as books, news, video and music and scientific works bring and bind us together as human beings, as nations. Books open windows to other worlds and to a greater understanding of our own. They reflect the cultural diversity that defines humanity. We need to tell this story again, and again, and again. And not just to policymakers and civil servants, but to the people. To the voters. And we must take the new generation seriously and build stronger bonds. Our story must bring hope and a positive agenda.”
🌟 COMMENT: Rønde’s address to publishers and creators at the PLS conference was an inspiring tour de force. Several times she mentioned that Denmark is a small country. But Big Tech has heard her mighty roar. She’s already suing LinkedIn. Last February she reported Apple to the police, having sought for years to get the tech giant to pay for the content it had scraped from Danish publishers’ websites and used in its news widget. Google’s AI chatbot Gemini is now in her sights. The former member of the Danish parliament exudes fearlessness at a time when many politicians around the world worry about antagonising Silicon Valley and incurring the wrath of Trump. Rønde is fortunate to have her government’s ear. Immediately after her PLS speech I asked if she had a message for UK ministers who had failed to seize the opportunity to introduce emergency AI transparency protections in the recent data bill. “They should take their responsibilities seriously,” said Rønde. “And they should also think long-term. This is about ensuring and safeguarding democracy for the long-term.”
A LANDMARK COMPLAINT has been lodged with the UK competition watchdog claiming Google’s AI-written search summaries are causing “serious irreparable harm” to the UK’s news industry. The complaint says Google is abusing its dominant position in search to take publishers’ content and use it to promote AI Overviews, which then competes with the same publishers. The legal challenge has been raised with the Competition and Markets Authority (CMA) by justice non-profit Foxglove, the Independent Publishers Alliance, and the Movement for an Open Web. A similar complaint has been made to the European Commission.
Chris Dicker, CEO at CANDR Media Group and a board member of the Independent Publishers Alliance, told Charting Gen AI: “This action is not something we’ve taken lightly, but it’s absolutely necessary. Independent publishers are seeing their journalism effectively lifted by Google’s AI tools, with no consent, no compensation, and no ability to opt out without sacrificing all visibility in search.
“Publishers are already facing enormous commercial pressure, and now their content is being used to fuel AI-generated summaries that leave readers with little reason to click through. We’re just asking for basic fairness. As it stands there is no sustainable value exchange and unless something is done the digital ecosystem will fail to exist. We hope this legal challenge will send a clear message that independent journalism must be respected and protected. We urge the CMA to act quickly to prevent further damage before it’s too late.”
🌟 KEY TAKEAWAY: As we reported last week (Charting #60) the CMA has provisionally ruled that Google’s dominance in search affects the terms it is able to dictate to publishers, and that there are “insufficient controls” over how their content is used in search and AI Overviews. At the heart of the complaint is the charge that publishers are unable to prevent their content being used for AI Overviews without it disappearing from search. As Dicker says, time is of the essence.
ALSO:
➡ Non-fiction writers allege OpenAI and Microsoft violated copyright
🏛️ AI POLICY & REGULATION
THE BAN ON US states regulating AI might have been defeated, but the Big Tech lobby is likely plotting another attempt. As we reported in our Quick Take, by 99 votes to one, senators decisively backed an amendment to strike a five-year moratorium from Donald Trump’s sweeping tax-cut and spending megabill. Pro-tech think tank the ALFA Institute, founded by former House speaker Kevin McCarthy, said the removal of the ban was a “setback for American AI leadership and a missed opportunity”, leaving the door open for a “fragmented, 50-state patchwork that undermines any semblance of a national strategy”. Public Citizen, the consumer advocacy non-profit, said the defeat for its architect, senator Ted Cruz, had handed a victory to civil rights groups and state lawmakers. JB Branch, Big Tech accountability advocate at Public Citizen, told Charting Gen AI that Cruz wanted to reintroduce an AI moratorium as a standalone bill.
“Instead of pushing reckless legislation written to serve Big Tech, he should take a page from the dozens of state legislatures that are advancing bipartisan, commonsense AI safeguards,” said Branch. “State lawmakers should continue passing thoughtful AI laws that fill the gap left by federal inaction. At the same time, they should be in close dialogue with their congressional counterparts; educating them on the harms they’re seeing in their own districts and the solutions they’re crafting. The more Washington understands the real work happening in the states, the harder it will be for Big Tech to push through another federal power grab.”
🌟 KEY TAKEAWAY: Writing in his Blood in the Machine Substack, tech journalist Brian Merchant said the moratorium had aimed to shut down “meaningful laws” on AI in states such as California, New York and Colorado. “But now that it’s dead ... I worry many will forget what almost happened here. That Silicon Valley elites almost won a battle to stop states from passing AI laws, period.”
ALSO:
➡ Trump is set to boost US AI energy supply in AI Action Plan due July 23
➡ FT: CEOs of 44 major European firms call on Brussels to halt EU AI Act
➡ Danish government bill to give citizens copyright over their likenesses
➡ German watchdog tells Apple and Google that DeepSeek app is illegal
➡ Anthropic unveils grants for policy researchers exploring AI impacts
💰 COPYRIGHT & LICENSING
INTERNET INFRASTRUCTURE provider Cloudflare this week announced it was blocking AI crawlers from accessing its clients’ websites unless site owners choose to grant access. The industry-first move to block by default was welcomed by media groups including the Associated Press, Condé Nast, Dotdash Meredith, Gannett, Independent Media, Sky News and TIME. Cloudflare’s platform — which the company says helps manage and protect 20% of the web’s traffic — will also allow AI crawlers to state their purpose, such as AI training or search.
In a statement, Matthew Prince, co-founder and CEO of Cloudflare, said: “Original content is what makes the internet one of the greatest inventions in the last century, and it’s essential that creators continue making it. AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate. This is about safeguarding the future of a free and vibrant internet with a new model that works for everyone.”
🌟 KEY TAKEAWAY: In a blog post Prince said the web’s future interface would likely look more like ChatGPT than Google’s classic search links. In the past Google’s goal was to get you “off its site as quickly as possible” but now its AI-written summaries kept users on Google.com. That was hurting publishers and creators by sending dwindling amounts of traffic to their websites. AI chatbots sent even less. As a result the web was being “strip mined by AI crawlers with content creators seeing almost no traffic and therefore almost no value” — a situation he hopes to change. If AI developers want content, they’ll need to ask, and creators will be able to charge.
ALSO:
➡ Creative Commons unveils CC Signals live trial for AI licensing options
➡ Co-founder of Australian indie publisher explains AI licensing rationale
➡ MrBeast pulls YouTube thumbnail generator after AI training backlash
🎨 AI & CREATIVITY
MORE THAN 70 authors this week penned an open letter calling on leading publishers including Penguin, HarperCollins, Simon & Schuster and Macmillan to make a series of pledges on their use of generative AI.
Their letter and accompanying petition, which has since been signed by nearly 2,000 fellow writers, said authors were “rushing toward a future where our novels, our biographies, our poems and our memoirs — our records of the human experience — are ‘written’ by AI models that, by definition, cannot know what it is to be human. To bleed, or starve, or love”. Authors’ stories had been used without consent or payment to “train machines that, if short-sighted capitalistic greed wins, could soon be generating the books that fill our bookstores”. “The purveyors of AI have stolen our work from us and from our publishers, too,” said the letter while pointing out that “publishing as an art form” was also in jeopardy.
The authors called on all publishers to stand with them and “pledge that they will never release books that were created by machines” and “not replace their human staff with AI tools or degrade their positions into AI monitors”.
🌟 KEY TAKEAWAY: The letter further urged publishers to not “openly or secretly” publish books written by AI, use AI image generators “to design any part of the books we release”, and “only hire human audio book narrators” rather than AI tools “built on stolen voices”. “We want you to be guardians of the future of our work and the work of generations to come,” added the letter. Publishers should embrace it and respond positively. Without their authors, what do they have left?
ALSO:
➡ Is it OK for academic authors to use AI? A journal editor gives her take
➡ H&M releases first images generated using AI twins of human models
➡ To the relief of food bloggers Google ends AI-written recipe summaries
➡ Sir Martin Sorrell warns AI will replace 250,000 jobs in media buying
📰 AI & NEWS MEDIA
AI-GENERATED SUMMARIES are coming to the BBC News website as the corporation’s news chiefs pilot at-a-glance bullet points on longer news stories. The public trial is aimed at making BBC journalism more accessible, said Rhodri Talfan Davies, who leads on generative AI editorial development. “Short, scannable bullet-point summaries have proven popular with readers — particularly younger audiences — as a quick way to grasp the main points of a story,” said Talfan Davies in a statement.
“Journalists use a single, approved prompt to generate the summary, then review and edit the output before publication — so they’re always in control and editorial standards are maintained. We will also make clear to the audience where AI has been used as part of our commitment to transparency,” he added. Also being trialled is the use of generative AI to apply the BBC’s house style to stories contributed via the BBC-funded Local Democracy Reporting Service (LDRS), which receives hundreds of local news articles every day from partners across the UK. That public trial, known as BBC Style Assist, will use a BBC-trained large language model (LLM) developed by the corporation’s R&D team and trained on thousands of BBC News stories. The trial will initially involve LDRS news coverage in Wales and the east of England.
🌟 KEY TAKEAWAY: Talfan Davies stressed that Style Assist draft outputs would be reviewed by a BBC journalist, “checking for accuracy and clarity”. The AI would have “no role in creating the original story”, which would continue to be researched and written by the BBC’s trusted LDRS partners. As with the at-a-glance summaries, the use of gen AI in the production process will be made clear to audiences. That commitment to transparency is a core theme of the BBC’s approach to gen AI.
ALSO:
➡ Journalists at Law360 express fury after bosses force use of AI detectors
➡ Belgian Elle and several other mags had ‘hundreds of AI-written stories’
➡ Reuters video chief Rob Lang is appointed its first newsroom AI editor
⏱️ RESEARCH BRIEFS
◼️ Software engineer Stijn Spanhove has unveiled an experimental app for Snap Spectacles which uses Google Gemini to detect and block outdoor advertising.
◼️ Seeing AI-generated content boosts people’s “confidence in their own creative abilities” according to a new study from the Kellogg School of Management.
◼️ Prompting chatbots to collapse reveals how LLMs construct the “illusion” of understanding through statistical patterns, says RMIT University’s Daniel Binns.
◼️ Study shows AI chatbots can be prompted to generate health disinformation, such as claims that vaccines cause autism, that HIV is airborne, and that 5G causes infertility.
◼️ At least 13.5% of biomedical research abstracts written in 2024 were processed using AI chatbots, with an excess of words such as delve, crucial and significant.
◼️ Animal behaviourist Con Slobodchikoff tells PYMNTS he’s building an AI system that will enable humans to understand what their dogs are saying. Woof.
💬 QUOTES OF THE WEEK
“Although AI may replicate a style, it can never capture the deep narrative forged by human effort and intuition. The labour, creativity, and passion that characterise human artistry endow each work with an intrinsic value that a machine cannot achieve.” — Sheena Iyengar, professor at Columbia Business School, on a study showing AI-generated art enhances the appreciation of human-made works
“In journalism, how you introduce artificial intelligence is just as critical as what the AI does. Even the best system will spark resistance if it’s sprung without trust, transparency, and genuine respect for the craft. AI can be a powerful ally for newsrooms — if it’s brought in with care, buy-in, and a clear sense of partnership.” — Journalist and media commentator Pete Pachal writing in Fast Company on the recent AI backlash among Wikipedia editors
“People who are real technicians — art directors, cinematographers, writers, directors, and actors — have an opportunity with this technology. What’s really important is that we as an industry know what’s good about this and what’s bad about this, what is helpful for us in trying to tell our stories, and what is actually going to be dangerous.” — Filmmaker Bryn Mooser, founder of production house Asteria and co-founder of ethically trained video generator Moonvalley, talking to The Verge
✨ AND FINALLY …
IT LOOKS LIKE AI. It sounds like AI. Deezer even labelled it AI. So it’s probably safe to say it is AI. But who’s behind the band that’s attracted over 850,000 listeners on Spotify? And how were reporters fooled into reporting The Velvet Sundown’s vehement claims that its music was human-made? Behind the complex yarn that played out this week on social and mainstream media are important lessons for the music industry and journalism. It all started with users on social media noticing the indie band’s rapid rise and voluminous output, with two album releases in June and another due this month. And yet track listings failed to credit the songwriters and the Velvets’ social media following was non-existent. Deezer stuck an ‘AI-generated’ label on the band’s most recent album:
While Spotify, which takes a laissez-faire attitude to AI-generated slop on its platform, sat back and watched the listener count rise:
Then something weird happened. On Instagram several images appeared bearing the hallmark signs of generative AI:
An electric guitar with no strings and no neck; an acoustic with strings, several of them running to the same tuner; a microphone stand sprouting out of a musician’s arm. Other images showed band members — all exactly the same height, what are the odds of that? — recreating the Beatles’ Abbey Road cover, but with one wearing different shoes. In other images fingers and thumbs morphed into sausages. Clearly they’d been generated using AI:
And yet on X the band insisted “we are a real band and we never use AI”. “This is our music, written in long, sweaty nights in a cramped bungalow in California with real instruments, real minds, and real soul. Every chord, every lyric, every mistake — HUMAN,” said a post in a thread attacking journalists for “pushing the lazy, baseless theory” that the band was AI-generated “with zero evidence”.
Things then took another twist when someone claiming to be a spokesperson for the band gave an interview to Rolling Stone, admitting songs had been generated using Suno. A few hours later that same spokesperson, Andrew Frelon (not his real name), admitted on Medium that the posts on X had all been a hoax using a spoof account, and that he’d pranked Rolling Stone and journalists from several other media outlets. Meanwhile the ‘real’ band used its Spotify bio to insist they had nothing to do with Frelon and posted links to its actual social media accounts. It turns out the images on Insta weren’t official either.
So where does this leave us? We’re going with Deezer. The Velvet Sundown might call itself ‘real’ on its real Insta account, but it’s fake. Its tracks are AI generated. Spotify says The Velvet Sundown is a ‘verified artist’ but its verification process clearly doesn’t mean bands are verified human. Frelon used social engineering tricks to fool “a large number of professional journalists” who failed to check whether he was real. He hopes his prank will inspire “a more careful approach to prevent the publication of blatantly false information by people with worse or more dangerous agendas than my own foolish experiment”.
<< Non-fiction writers allege OpenAI and Microsoft violated copyright. >> Two other big vendors that publishers with deep pockets should pursue: Amazon Kindle and Google Books. I have seen eBook publishers selling my works. They have only been available in digital format on those two platforms. (Unless Shopify’s digital book app has been hacked.)