Defence fund is launched in memory of OpenAI whistleblower | Ousted US copyright chief loses another round | Creatives' coalition accuses Brussels of ignoring AI concerns
#65 | Meta is accused of training video models on adult movies | There's a new AI chatbot in town: how was it trained? | Stability CEO backs marketplace for artists | BBC News appoints AI chief
A WARM WELCOME to Charting Gen AI, the global newsletter that keeps you informed on generative AI’s impacts on creators and human-made media, the ethics and behaviour of the AI companies, the future of copyright in the AI era, and the evolving AI policy landscape. Our coverage begins shortly.
SPONSOR’S MESSAGE
As AI transforms the creative industries, organisations face a maze of unsettled law, conflicting regulations, and complex ethical considerations that threaten copyright protection. AiPHELION, a UK-US legal and tech consultancy with global partnerships, cuts through this complexity by providing clients with the strategic guidance needed to harness AI’s potential while managing risks, protecting IP rights, and ensuring compliance in a rapidly evolving landscape.
ETHICAL AI

THE GRIEVING PARENTS of OpenAI whistleblower Suchir Balaji this week launched a defence fund to support those who “courageously speak out against injustice”. Speaking at the inaugural Suchir Balaji Memorial Summit, held on National Whistleblower Day, his mother Poornima Ramarao said the goal was to keep Suchir’s “views and values alive”, and show that people would not be silenced. His father Balaji Ramamurthy said the fund would provide a safe haven, legal advice and support for whistleblowers in the “war they are going against”.
Suchir Balaji joined OpenAI in 2020 as a researcher and helped train the large language model (LLM) powering ChatGPT. He resigned in August 2024, and in October penned an online essay questioning whether the fair use exception in US copyright law applied to the training of AI models. In a subsequent interview with The New York Times he admitted he’d helped create technology that was “destroying the commercial viability of the individuals, businesses, and internet services that created the digital data used to train AI systems”. “If you believe what I believe, you have to just leave the company,” he said.
Suchir was due to give evidence supporting the Times’ copyright lawsuit against OpenAI and Microsoft. In November his body was found by police in his San Francisco apartment. Medical examiners swiftly ruled he’d taken his own life. In January Poornima said a private autopsy had contradicted the cause of death stated by police, while a private investigator had discovered his apartment had been “ransacked” with signs of a “struggle in the bathroom”. Poornima repeated the claims in an interview with NDTV, saying “more details from the autopsy reveal it is murder”. In May Poornima, a cloud security architect, and Balaji, an AI researcher developing ethical frameworks, filed a lawsuit demanding the release of police reports they said San Francisco authorities had withheld from them.
Thanking everyone taking part in Wednesday’s event — speakers included ethical training warrior Ed Newton-Rex, AI safety pioneer Prof Stuart Russell, Pav Gill, the Wirecard whistleblower, and Dr Jackie Garrick, founder of Whistleblowers of America — Poornima said they had helped Suchir’s “wishes come true”. “We lost him, but our goal is to keep his ambitions, his views and his values alive. And also show that they cannot silence people and we are not going to be quiet. We’re not just asking for funds, we also want to raise awareness. We are not saying we are at war with any organisation or anything. We just want the truth, and we want accountability. And we want some kind of laws that prevent the death of future whistleblowers.”
Event organiser and host Shan Sankaran, founder of Open Efficiency which is developing a secure whistleblower platform, praised Poornima and Balaji’s courage and paid tribute to Suchir. “Your son saw what others ignored. He saw algorithms that could ruin lives, systems built without guardrails and a culture that rewarded silence. He chose light not for fame, not for profit, but because he believed technology should serve humanity, not the other way around.”
Pointing to an empty chair he said it was for Suchir “and every truthteller who cannot be here”. “Let’s see how we can turn our pain into protection. Your family’s fight in court is yours. Our fight here is different. To build the world Suchir risked everything for. We won’t offer thoughts and prayers. We offer action, and we fight in your son’s name.”
🌟 KEY TAKEAWAY: Poornima and Shan emphasised their desire that the memorial summit becomes a regular event. Poornima and Balaji also hope to build an ecosystem of lawmakers, policymakers, attorneys and others who can provide whistleblowers with expert support. “We want people to be with us in this effort,” said Poornima. “Unless we get support in large numbers the government is not going to be favouring us.” Speaking to Charting Gen AI, Shan urged readers to “keep a tab on the ethical aspects of AI technology” by questioning AI developers’ views on “fairness, transparency, fair use, copyright and inclusion”. Shan hasn’t just conceived an international AI ethics summit; he’s launched a global movement.
ALSO:
➡ Ads featuring an AI model in Vogue magazine prompt an ethical debate
➡ OpenAI CEO tells ChatGPT users: ‘Stop thinking it’s your AI therapist’
🏛️ AI POLICY & REGULATION
THE OUSTED HEAD of the US Copyright Office has lost another round in her battle for reinstatement. Shira Perlmutter was fired by a White House email in May, hours after the release of a long-awaited report on AI model training which tentatively backed rightsholders. Her sacking followed the equally abrupt dismissal of Library of Congress (LoC) boss Carla Hayden, who had appointed Perlmutter to the top role at the Copyright Office in October 2020, during the first Trump administration.
Soon after her firing Perlmutter announced she was suing Donald Trump, saying the US president didn’t have the constitutional power to remove her. A lawsuit filed by her legal team, which includes Donald Verrilli, a former US solicitor general in the Obama era, said the attempt to oust Perlmutter was “blatantly unlawful” since the power to hire or fire the US Copyright Office’s senior-most staffer rested solely with the LoC.
Later in May Perlmutter sought a temporary restraining order which would have put Trump’s move on hold. In a memo Perlmutter’s legal team said she wasn’t suing for monetary harm “but instead the fundamental loss of her public office”. The memo referred to her report on copyright and model training, saying it took nearly two years to compile, and said a further report which would address “potential liability for infringing AI outputs” was now being prepared. Unless she was reinstated that report would not be completed “as expected by Congress”.
US district judge Timothy Kelly denied the restraining order, saying she had failed to demonstrate that her removal had caused her to suffer irreparable harm. In June Perlmutter sought a preliminary injunction preventing her removal while the court considered her case. This week Kelly, appointed by Trump in 2017, denied Perlmutter’s motion on similar grounds, saying her “asserted inability to do that job temporarily while this lawsuit proceeds is not enough to show irreparable harm”.
🌟 KEY TAKEAWAY: A quick glance at the US Copyright Office’s website shows it still lists Perlmutter as its leader. A staff org chart also shows her at the top. Trump installed Paul Perkins, described by the Authors Guild in May as “a Department of Justice attorney with no apparent copyright expertise”, to the role of acting Register of Copyrights — yet there’s no sign of him on the website. Nor does Perkins’ name appear on copyright registration certificates being issued by the office. In fact, no name appears on the certificates at all, raising concerns over their validity.
RELATED:
A BROAD COALITION of 40 creative industry groups representing authors, performers, publishers, producers and other rightsholders has condemned the European Commission (EC) for failing to address creators’ concerns over the use of their works for AI model training.
In a statement the coalition said the EC’s final AI Code of Practice and transparency template for general purpose models — which apply from tomorrow — were a “missed opportunity” to meaningfully protect IP rights and “did not deliver on the promise of the EU AI Act itself”. While the Act sought to allow creators to enforce their rights, their feedback had been “largely ignored ... to the sole benefit of the generative AI model providers that continuously infringe copyright and related rights to build their models”.
The coalition said Europe’s “creative sectors and copyright intensive industries” contributed nearly 7% of the bloc’s GDP — larger than European pharmaceutical, car or hi-tech industries — and employed nearly 17 million people, but were “being sold out in favour of those generative AI model providers”. The statement urged MEPs and member states to “challenge the unsatisfactory process” that had culminated in “a betrayal of the EU AI Act’s objectives” and would “further weaken the situation of the creative and cultural sectors across Europe and do nothing to tackle ongoing violations of EU laws”.
Speaking to Charting, Nina George, the best-selling author and political affairs commissioner at the European Writers’ Council, said the EC and member states on the AI board were “on the wrong side of history”. “You are ruining the cultural soul of Europe. You are promoting plunder, devaluation and demotivation. The wrong people in the wrong place at the wrong time, who are themselves never affected by the extent of their decisions, who are political bandits.”
🌟 KEY TAKEAWAY: Last week George told Charting the transparency template was “useless” for three reasons: (1) AI providers need only list the top 10% of domain names they have scraped; (2) the lack of title-specific information on works means authors won’t be able to check whether their book has been used; and (3) AI providers who say supplying training details is too burdensome need merely state that in the summary — a move George condemned as “a helping hand”. The coalition has voiced its concerns before, but its language is becoming more strident. It’s time politicians stood up for creators’ rights, before it’s too late.
ALSO:
➡ A bipartisan group of US Senators introduces an AI transparency bill
➡ Mark Cuban urges Trump White House to tackle advertising in LLMs
➡ The Washington Post says DOGE is using AI to delete US regulations
➡ China unveils its global AI action plan calling for regulatory cooperation
➡ Creatives’ union in Australia calls on the government to tax AI firms
➡ UN General Assembly adopts a resolution on ethical and safe use of AI
➡ A new survey across six nations says consumers want AI to be regulated
➡ Tony Blair Institute says UK must invest in AI deployment and adoption
➡ Campaigners say UK equality regulator to use AI to analyse consultation
⚖️ AI IN THE COURTS
ADULT MOVIE producer Strike 3 Holdings and its majority-owned subsidiary Counterlife Media are suing Meta over the “rampant copyright infringement” of their “award-winning, critically acclaimed adult motion pictures” to train its generative video models. According to the lawsuit filed in the Northern District of California, Meta used the BitTorrent protocol to download and distribute its works to others, bypassing laws in the two dozen US states that prevent minors from accessing adult content.
Strike 3 claimed Meta had been infringing its copyright “for years”, often on “the very same day” that new movies were released under brand names Blacked, Tushy, Vixen, and others. Since 2018 “at least” 2,396 movies had been “wilfully” infringed, the lawsuit claimed. Strike 3 said it had made “significant investments” in its productions while performers received “some of the highest pay rates for actresses and actors in the industry”. Meta was using its content to train AI models, “knowing that such models will eventually create identical content for little to no cost”, a move Strike 3 said would “effectively eliminate” its “future ability to compete in the marketplace”.
Strike 3 said Meta’s alleged infringement would provide its AI models with “natural, human-centric imagery, which shows parts of the body not found in regular videos, and a unique form of human interactions and facial expressions”, plus “unique dialogue, sound effects, and non-verbal vocalisations not found within either mainstream or competing adult content”.
“Meta’s AI programs will soon be able to produce motion pictures that look as real as plaintiffs’ works, and without the significant costs, time, care, and expense that plaintiffs invest into their motion pictures,” said the lawsuit. Strike 3 is seeking damages and a court order instructing Meta to “delete and permanently remove” its movies from Meta’s “computers, datacentres, AI clusters, models, and training data”.
🌟 KEY TAKEAWAY: Strike 3 is known in US legal circles for taking a robust stance on copyright protection. Were it to convince a court that Meta’s downloading amounted to wilful infringement then it could be looking at maximum statutory damages of nearly $360 million. Meta CEO Mark Zuckerberg might laugh that off as the price of three signing-on bonuses for his new Superintelligence Lab (one AI researcher was reportedly offered a $1 billion pay package and turned it down). But surely there’s a reputational risk here? Why on Earth is Meta apparently training its video models on porn? Does it really aim to allow users to create adult clips, or entire movies? And how will it prevent the unintentional generation of adult content?
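The “nearly $360 million” figure follows from simple arithmetic, assuming a court applied the US statutory ceiling of $150,000 per wilfully infringed work (17 U.S.C. § 504(c)(2)) to every one of the 2,396 movies in the complaint. A back-of-envelope sketch, not a damages forecast:

```python
# Back-of-envelope estimate of Strike 3's maximum statutory damages.
# Assumes the $150,000-per-work ceiling for wilful infringement applies
# to all works listed in the complaint; a court could award far less.
WORKS_INFRINGED = 2_396            # movies the lawsuit says were infringed
MAX_PER_WORK = 150_000             # wilful-infringement statutory maximum, USD

max_damages = WORKS_INFRINGED * MAX_PER_WORK
print(f"${max_damages:,}")         # → $359,400,000, i.e. "nearly $360 million"
```

Actual awards within the statutory range are at the court’s discretion, so this is strictly an upper bound.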
ALSO:
➡ Anthropic seeks to appeal US district judge’s class action certification
➡ Meanwhile authors seek financial data from Anthropic for damages trial
➡ Makers of Korea’s Lee Luda chatbot ordered to pay users over privacy
👁️ EYE ON THE AIs
A NEW AI CHATBOT claiming to put “people and privacy first” has been launched by Proton, the Swiss-based non-profit best known for its end-to-end encrypted email service, Proton Mail. Dubbed Lumo, the private AI assistant keeps interactions confidential and puts the user in control of their data. In a blog post Proton machine learning chief Dr Eamonn Maguire said Lumo had been built to help users “without demanding personal data in return”.
“AI opens the door to new opportunities but also new forms of data collection,” said Maguire. “Today, hundreds of millions of ordinary people interact with AI tools, unwittingly handing over sensitive information that is far more intimate than search or browsing history.” Maguire said the free chatbot didn’t use inputs to train its LLM, which was based on an open-source model “and operates from Proton’s European datacentres”. Charting wanted to know more about the model, and, as we always ask, how it was trained. No one at Proton replied to our queries, so we turned to Lumo itself. The answer was revealing:
“Lumo was trained using a combination of public datasets, books, and other publicly available information from the internet up until 2023,” came the reply. And the LLM? We heard it was based on a Mistral AI model. Mistral doesn’t comment on how its models are trained. So again we asked Lumo:
Yes, it operates on a Mistral LLM which was trained on “vast amounts of text data from the internet, up to April 2024, including books, articles, and websites”.
📣 COMMENT: Lumo could be hallucinating, of course. But judging from the replies it generated to our queries it doesn’t appear to have been trained on licensed data. Which is a shame, given Proton’s laudable comments on privacy, and its apparent desire to do the right thing. Might training data have been a blind spot? Might Proton be in the process of seeking the consent of the authors, writers and publishers whose works it has scraped? We’ll continue to seek a reply and bring it to you when we have it.
ALSO:
➡ Google’s AI Mode is released in the UK amid a new warning on traffic
➡ Axios breaks down what Microsoft and OpenAI want in their new pact
➡ AI Godfather Geoffrey Hinton says Musk and Zuckerberg are ‘oligarchs’
➡ Meta poaches ChatGPT co-founder for Superintelligence science role
🎨 AI & CREATIVITY
STABILITY AI CEO Prem Akkaraju this week told the Financial Times the developer of image generator Stable Diffusion was working on a marketplace for AI licensing. Asked by the FT’s AI correspondent Melissa Heikkilä whether a Spotify model compensating individual artists for their works made sense, Akkaraju said it was a “really great idea”.
“I think that a marketplace for people to opt into and then upload their art, I think that’s going to happen. Actually, something we’re working on, where artists can actually have a marketplace or a portal where they can say, ‘hey, you could train on this’, and then that actually gets licensed and used by us and others, and they get compensated for it. I think it’s really smart.” Heikkilä went on to ask if Stability AI — which is being sued by Getty Images in the US and UK for alleged copyright infringement, and by a group of artists in the US who have complained over the use of their works for generative model training — had reconsidered its stance on training data. “No, not really,” replied Akkaraju. “What we’re using is free-to-use data, as well as some bespoke license deals. I think that the way we’re doing it is the right way.”
🌟 KEY TAKEAWAY: Ethical training advocates will seize on Akkaraju’s reference to “free-to-use data”. It sounds suspiciously like ‘freely available’ or ‘freeware’ ... terms frequently used by AI-copyright deniers. Later in the interview Akkaraju said AI models were “essentially inspired by billions of images at one time, and definitely not duplicating or replicating anything”. That statement won’t endear him to artists, either.
ALSO:
➡ Amazon invests in start-up Fable which aims to be the ‘Netflix of AI’
➡ Gamers claim Activision Blizzard used AI-generated images in promo
➡ Students protest as a top Australian university offers AI course for artists
➡ Indian composer A.R. Rahman to team with OpenAI CEO on AI music
➡ Adobe releases Photoshop’s ‘most useful AI tool yet’ says Creative Bloq
➡ Meanwhile Adobe invites creatives to edit The Unfinished Film with AI
➡ According to Freelancer data, creative jobs are surging despite AI threat
📰 AI & NEWS MEDIA
BBC NEWS has appointed Meta’s media partnerships chief in the Asia-Pacific region to the new role of AI, innovation and growth director. Anjali Kapoor will oversee the news division’s responsible adoption of AI as well as its digital growth strategy in the UK and globally. Kapoor has previously held senior strategic positions at The Globe and Mail, Yahoo and Bloomberg Media. She will start next month and join the BBC News board.
“I’m honoured to be joining the BBC, one of the world’s most trusted news organisations, at such a pivotal moment for journalism,” said Kapoor. “Throughout my career, I’ve worked internationally at the intersection of media, technology and business leading audience and product strategies while also aligning innovation with editorial integrity, trust and long-term sustainability.” BBC News CEO Deborah Turness said: “This is a critical role as we transform BBC News to be fit for the future, using AI to enhance our journalism, growing audiences — particularly under-25s. Anjali’s deep knowledge of how news works, her future-focused approach to AI and her deep understanding of product and platforms make her uniquely positioned to lead this work.”
🌟 KEY TAKEAWAY: Kapoor’s appointment comes as the BBC’s commercial arm seeks to recruit creatives, technologists, producers and a leader for a new AI Creative Lab. The Corporation works to a clear AI policy which currently states that generative AI “should not be used to directly create news content” that the BBC publishes or broadcasts, “unless it is the subject of the content and its use is illustrative”.
ALSO:
➡ Amazon ‘to pay NYTimes up to $25 million per year for AI deal’: WSJ
➡ Reach NUJ chapel seeks ‘urgent’ talks with bosses over fair use of AI
➡ How can journalists spot and mitigate against gen AI bias, asks RISJ
⏱️ AI BRIEFS
◼️ 64% of US adults use AI to search for information and generate ideas while 16% of adults and 25% of under-30s use it for companionship, AP survey finds.
◼️ Microsoft researchers reveal jobs that overlap with AI. Among most at risk are interpreters and translators, writers, broadcast announcers and journalists.
◼️ Two humans working together came up with more original ideas than one human working with an AI chatbot or internet search tools, according to a study.
◼️ Carnegie Mellon and Anthropic researchers show how an LLM could replicate the 2017 Equifax cyberattack, installing malware and stealing data.
💬 QUOTES OF THE WEEK
“Just as a wary approach to AI is conflated with overall technophobia, AI ethics is often viewed as a muzzle on innovation. And the tech industry’s track record is an excellent argument for why not to uncritically embrace the next new thing. Companies like Facebook have not just waved away ethical considerations but deliberately shredded them with no consequences. The impacts of AI stand to be even more destructive.” — Andi Zeisler, Salon senior culture journalist, writing on claims women are wary of the AI rush
“Humans will find ways to wring creativity out of AI, using it as a tool. And yet there’s something intrinsically depressing, even sinister, about the technology’s incursions into the sphere of artistry. To accept its rise unconditionally feels like a surrender to bozos who view art as little more than ‘content’ — the mindset of insensible utilitarians uninterested in the possibility of true imaginative transcendence.” — Yo Zushi, journalist and musician writing in Monocle on the relationship between art and AI
“Creators need to have a reason for doing their creative work. If that goes away, if there’s no incentive, then everything will all turn to AI slop. We’ll all be AI slop.” — Bill Gross, founder of ethical AI solutions provider ProRata, talking to CJR on his plan to help publishers monetise generative AI
“I feel threatened even though my voice hasn’t been replaced by AI yet. We need legislation: Just as after the car, which replaced the horse-drawn carriage, we need a highway code.” — Voice actor Boris Rehlinger, talking to Reuters on the AI threat to the dubbing industry
“Imagine being the poor bureaucrat who has to write the analysis exploring exactly how much historical accuracy an AI needs to display when asked about, say, the Civil War. AI companies eyeing federal contracts now face a choice: do they create special ‘MAGA-compliant’ versions of their models that give different answers depending on who’s asking? Do they just avoid federal contracts entirely? Either way, it’s a lose-lose that makes the government less effective and the market less efficient.” — Mike Masnick, editor of the Techdirt blog, writing on US federal agencies having to work out whether AI systems are sufficiently ‘non-woke’ for government procurement
“We should want a future in which lives are improved by the positive uses of AI. But if America wants to continue leading the world in this technology, we must invest in what made us leaders in the first place: bold public research, open doors for global talent, and fair competition. Prioritising short-term industry profits over these bedrock principles won’t just put our technological future at risk — it will jeopardise America’s role as the world’s innovation superpower.” — Asad Ramzanali, director of AI and tech policy at the Vanderbilt Policy Accelerator, writing in MIT Technology Review
FROM OUR SPONSOR
AiPHELION is an exciting new legal and tech consultancy, and one of the relatively few companies in its sector led by a female CEO, Netflix and Warner Bros alum VICTORIA FURNISS. Victoria and her co-founders MICHAEL JURY, CBO (ex-Disney, Baker McKenzie), BRET BOIVIN, CTO (ex-Icopro, Warner Bros), and JOHN BYRNE, CSO (ex-Dorsey & Whitney, co-founder of Therium, the global litigation-funding firm) blend deep experience in law, technology and strategy to advise clients in UK and US marketplaces.
“AiPHELION helps clients protect and monetise their IP while embracing AI technology lawfully and ethically,” explains Victoria. “Our work is enhanced by our bespoke intelligence tools covering the complex legal, regulatory and technical landscapes. AiPHELION is where knowledge and experience can be your ultimate competitive weapon.” Find out how by getting in touch today.