Grok's antisemitic outbursts show how out of control Musk and his AI chatbot really are | Ethical licensing takes major leap with video generator aimed at Hollywood
#62 | PLUS: Europe's final AI Code is published | Policy expert warns on Cloudflare | Copyright Alliance fights back | ProRata releases AI answer widget |✨AND: OpenAI goes all in on irony
A WARM WELCOME to Charting Gen AI, the global newsletter that keeps you informed on generative AI’s impacts on creators and human-made media, the ethics and behaviour of the AI companies, the future of copyright in the AI era, and the evolving AI policy landscape. Our coverage begins shortly. But first …
👁️ EYE ON THE AIs
THE MUCH ANTICIPATED upgrade of Elon Musk’s AI chatbot Grok was overshadowed this week by its generation of antisemitic and deeply offensive posts on X, praising Adolf Hitler and making lewd remarks about politicians in Poland and Turkey.
The international outrage followed a software upgrade at the weekend. According to Grok developer xAI and its publicly posted system prompts, the chatbot was instructed to “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect”.
That tweak was part of Musk’s long-held aim of making Grok an anti-woke AI with a “rebellious streak”, answering “spicy questions that are rejected by most other AI systems”, and, as we reported last month, using what he calls his “maximum truth-seeking AI” to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors”.
Antisemitic posts flooded X on Tuesday. The Anti-Defamation League (ADL), which seeks to combat all forms of antisemitism and bias, said on X that “what we are seeing from Grok LLM right now is irresponsible, dangerous and antisemitic, plain and simple”. It added: “This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms.” Politico reported that Poland’s deputy prime minister had said Grok’s “offensive remarks” could be a “major infringement” of Europe’s content moderation rules. A court in Turkey ordered a ban on accessing X in the country.
On Wednesday Linda Yaccarino announced she was stepping down as X CEO. The former NBCUniversal exec hired by Musk to stem the loss of major advertisers following his acquisition of Twitter said she was “extremely grateful” to him for “entrusting me with the responsibility of protecting free speech” and “turning the company around”. “Thank you for your contributions,” came Musk’s terse reply. That same day xAI, which acquired X in an all-stock transaction in March, posted a statement saying it was removing “inappropriate posts” and taking action “to ban hate speech before Grok posts on X”.
Grok 4 finally launched on Thursday. “This is the smartest AI in the world,” said Musk in an awkward livestream. “AI is advancing vastly faster than any human.”
🌟 COMMENT: Yaccarino’s departure isn’t thought to be related to Grok’s offensive rants, which will do nothing to win back advertisers. Musk might believe Grok 4 to be the smartest of its kind, powered by AI that’s leaping ahead of human advancements. But as any human knows, grotesque and irresponsible behaviour has consequences. Let’s not fall into the trap of blaming the tech, or blaming those goading Grok to go lower. Musk trained Grok on X. He integrated it within the social platform. He removed its guardrails and he tweaked its filters. He promised controversy and he delivered. Back in May Grok began generating unprompted replies to unrelated posts discussing unsubstantiated claims of “white genocide” in South Africa — claims that apparently chimed with the views of Musk, who was born and raised in apartheid-era South Africa. Last August Musk said Grok was the “most fun AI in the world” after its image generator prompted a torrent of posts mocking leading politicians and portraying branded characters in a damaging light. Each episode in this downward spiral desensitises us to the instalment that follows. And what follows for Musk? What of those consequences? Ultimately, there’ll be none. Musk and the AI that he has architected are out of control.
ALSO:
➡ xAI is granted permit to power Memphis datacentre using gas turbines
➡ Nvidia becomes world’s most valuable biz of all time, worth $4 trillion
➡ Perplexity launches AI browser as Reuters says OpenAI plans one too
➡ Meta reportedly acquires a minority stake in its smart glasses partner
➡ OpenAI completes $6.5 billion purchase of Sir Jony Ive’s start-up io
➡ AI assistant Grammarly is acquiring email efficiency app Superhuman
➡ Bloomberg says Apple might resort to Anthropic or OpenAI for Siri
🎨 AI & CREATIVITY
IN A LANDMARK moment for Hollywood and artificial intelligence, an AI start-up has released the first fully licensed AI video generator for professional filmmakers. Moonvalley, a member of the Creative Rights in AI Coalition, said its video model Marey gave directors “unprecedented precision controls” and enabled them to “realise expansive visions, execute complex VFX sequences, and maintain complete creative authority throughout their projects”.
Moonvalley CEO and co-founder Naeem Talukdar said Marey — named after French cinematography pioneer Étienne-Jules Marey — had been built because “the industry told us existing AI video tools don’t work for serious production”.
“Directors need precise control over every creative decision, plus legal confidence for commercial use.” Talukdar said Marey delivered both, “proving that the most powerful AI comes from partnership with creators, not exploitation of their work”. In an interview with TIME, Talukdar said around 80% of Marey’s licensed training footage came from independent filmmakers and agencies. That meant Marey was trained on around one fifth of the data used by competitor models, but Moonvalley was overcoming the shortfall through better technology created by colleagues drawn from Google DeepMind, Meta, Microsoft and other AI labs. “The reality is if we scraped [data], our model would be more powerful, without a doubt. But our inclination is that you don’t necessarily have to be the number one model — you just need to be among the best. And I think this is the first generative, fully licensed model, where you don’t have to compromise quality.”
On LinkedIn Bryn Mooser, who merged his Asteria Film Co with Moonvalley to become a co-founder, said Marey was the only generative video model of its kind to be fully trained on licensed content. “Studios care about this and filmmakers care. My hope is that consumers care as well,” he added. David Sheldon-Hicks, founder at Territory Studios, replied: “You’ve understood your audience. Creators at a professional level need more control than a roulette wheel of generation.”
🌟 KEY TAKEAWAY: How refreshing to be able to write about a new AI model without having to ask how it was trained and whether permission was sought from creators! And then not have to email the AI developer to ask if it is commercially safe (still no reply to these questions to OpenAI on Sora, sent in December 2024, and these to Google on Veo 3, sent in May). Moonvalley is doing what Big Tech should have done three years ago. It deserves Hollywood’s support, and our praise.
ALSO:
➡ Internet sensation The Velvet Sundown admit to a ‘synthetic project’
➡ YouTube is set to demonetise ‘inauthentic’ content generated by AI
➡ Should artists be using AI? Two creatives share their opposite views
➡ Meet the creatives making extra money by fixing mistakes made by AI
➡ UK indie Spirit Studios generates TV ad with Channel 4’s AI solution
➡ Tencent unveils model that outputs ‘art grade’ 3D assets for game devs
➡ Man is arrested in Japan over attack on shrine because it used AI art
🏛️ AI POLICY & REGULATION
POLICY WATCHERS were this week treated to two important reports on AI and copyright in Europe. First came the final version of the voluntary AI Code of Practice, written by 13 independent experts with input from over 1,000 stakeholders including model developers, academics, AI safety experts and rightsholders. The code helps the AI industry comply with the EU AI Act’s rules on AI models, which come into force on August 2. On copyright, the code says signatories agree not to use technological work-arounds to access protected works and content behind paywalls, and to exclude from their internet crawling any websites that infringe copyright. Signatories further commit to respect instructions within the Robots Exclusion Protocol (robots.txt) and other protocols used by publishers and creators to exercise their right to opt out of AI training. On transparency, the code asks providers to disclose whether their models were trained on scraped websites, third-party datasets, user data or synthetic data, and to say how they obtained the rights to that training data.
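For readers unfamiliar with how those robots.txt opt-outs work in practice, here is a minimal sketch of the directives a publisher might use. GPTBot (OpenAI) and Google-Extended (Google’s AI-training token) are real, documented crawler names; the rules shown are illustrative, not a recommendation.

```
# Block named AI-training crawlers from the whole site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers (e.g. ordinary search indexing) remain allowed
User-agent: *
Allow: /
```

Note that robots.txt is a voluntary convention: it only works if crawlers choose to honour it, which is precisely why the code’s commitment to respect these instructions matters.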
The other report came via the European Parliament’s legal affairs committee and addressed the “legal mismatch between AI training practices” and Europe’s text and data mining (TDM) exception, which gives creators an opt-out under the European copyright directive that they say cannot be managed. Written by law professor Dr Nicola Lucchi, the report said the current TDM exception “was not designed to accommodate the expressive and synthetic nature of generative AI training, and its application to such systems risks distorting the purpose and limits of EU copyright exceptions”. It also called for a “statutory remuneration scheme” which would “bridge the growing value gap between creators and AI developers”. “The proper response is not to make copyright law fit AI, but to ensure that AI development respects the core legal and policy principles of EU copyright, including authorship, originality, and fair remuneration,” said the report.
🌟 KEY TAKEAWAY: Under the EU AI Act’s Article 53 (1d) AI providers will also be asked to “draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template” provided by the European Commission’s AI Office. That template — which will allow rightsholders to enforce their rights — is still being discussed by stakeholders ahead of the August 2 implementation date. The question now is will Meta, OpenAI, Google and Microsoft adhere to the voluntary code and training data template? Or will they run to the Trump admin and complain that Europe is stifling them with burdensome regulation, and imperilling America’s AI supremacy? As Trump himself is fond of saying, “Maybe they will. Maybe they won’t. We shall see.”
TECH POLICY EXPERT Courtney Radsch has warned that Cloudflare’s blocking of AI crawlers from accessing publishers’ websites unless publishers grant them permission “introduces a new form of gatekeeping”. As we reported last week, the industry-first move to block the AI bots by default was welcomed by several media groups that use the infrastructure provider’s platform, which manages 20% of the web’s traffic. While Cloudflare’s consent-based move had long been demanded by journalists, creators and policymakers alike, Radsch told Charting Gen AI that the development was “more complicated than the initial reporting”.
“Cloudflare isn’t just verifying crawlers and enforcing consent-based extraction — it’s creating the platform through which bot access is negotiated, metered, and monetised,” wrote Radsch in Tech Policy Press. “This makes it a kind of regulator-by-default, which raises questions about whether Cloudflare’s new system will entrench its own market position or set de facto standards for the AI economy. In the shifting landscape of generative AI, infrastructure providers are no longer passive conduits — they are hosts, regulators, and gatekeepers.”
🌟 KEY TAKEAWAY: Radsch, an influential thought leader on tech policy, isn’t opposed to Cloudflare’s platform-level consent tool. On the contrary, she says it “could be a serious corrective to unfair data extraction” and “shows what is possible outside legislative channels — especially when legislation is slow to emerge”. But Radsch says we need to be mindful that “consent isn’t enough: the stewards of that consent need accountability”, and that we need to ask important questions in a “deeper debate about power, protocol, and public accountability in a rapidly privatising internet”.
ALSO:
➡ US states rush to enact AI laws after defeat of a 10-year moratorium
➡ New California bill targets harmful impacts of AI ‘companion bots’
➡ Creators’ champion Baroness Kidron hits out over UK pact with Cohere
➡ UK government unveils $1 million Meta-backed AI fellowship scheme
➡ Musk agrees America Party will be ‘pro tech, accelerate to win in AI’
➡ Ousted US Copyright leader says Trump lacked powers to sack her
➡ US State warns deepfake of Marco Rubio is texting foreign leaders
👊 AI FIGHTBACK
US NON-PROFIT the Copyright Alliance has launched a campaign calling for “pro-America, pro-IP, and pro-worker” AI policies ahead of the expected release of the Trump administration’s AI Action Plan later this month.
Keith Kupferschmid, CEO of the Copyright Alliance — which represents the copyright interests of over 2 million creators and over 15,000 organisations — said the campaign’s core message was simple: “America does best when there is honest competition, property rights are respected and enforced, and creativity is valued and protected. In the global race for leadership on AI we want to beat China, not be China.” Kupferschmid is urging Alliance members to write to Congress saying Big Tech is copying creative works without consent or compensation while at the same time lobbying policymakers to retroactively declare that all AI model training is justified under the US Copyright Act’s fair use doctrine. A similar letter to Trump says the key to US leadership in AI lies in protecting American IP, “not giving it away for free”, adding: “Anyone claiming you need to choose between AI and IP isn’t being honest. Strong IP protections will ensure that both American culture and AI continue to dominate and thrive.”
🌟 KEY TAKEAWAY: The Copyright Alliance’s letter to Trump goes on to urge him to “reject AI’s power grab and ensure that America’s workers in the creative industries continue to thrive in the age of AI”. It’s a bold message and one that might resonate in the White House as it considers whether to continue giving US AI developers a free pass on the industrial scale theft of protected works for generative model training.
ALSO:
➡ Reddit’s AI scraping lawsuit against Anthropic shifts to a federal court
📰 AI & NEWS MEDIA
ETHICAL SOLUTIONS provider ProRata has developed an AI answer widget that news publisher partners can integrate within their sites. As Press Gazette reported, the tech start-up which was founded last year by pay-per-click pioneer Bill Gross is also building an advertising tool for its AI search engine, Gist.ai.
ProRata chief business officer Annelies Jansen told Press Gazette editor Charlotte Tobitt that Gist was now powered by 500 titles from 100 publishers, “arguably the largest database of licensed content used for gen AI answers”.
Half the revenues from advertising within Gist will be shared with content partners that include Fast Company, The Boston Globe, New York Magazine, Newsday, and The Philadelphia Inquirer. Jansen said by mid-2026 she aimed to have “a series of use cases that say, on average, publishers who’ve integrated Gist.ai have been reporting plus double-digit growth of engagement on their own media channels. That’s my goal.” On the widespread scraping of content by ProRata’s hi-tech rivals, Jansen said it was “important” to let publishers know “that if we don’t do anything, it’s not going to go away”.
🌟 KEY TAKEAWAY: Gist and its widget spin-offs could do three things for publishers: (1) provide an additional new revenue stream from advertising, (2) generate much-needed additional web traffic as users click on citation links, and (3) promote them as sources of quality content and trusted journalism in the era of AI slop and misinformation. Scale is everything though: to compete against AI rivals Gist needs to be promoted to the hilt, not only by ProRata but also by increasing numbers of publishers backing the only ethically minded AI search player in town.
ALSO:
➡ Fortune and Axios are boosting use of AI in their writing, says Semafor
➡ What happened when journalism students were told to keep an AI diary?
➡ Pulitzer Center invites applications for AI Accountability Fellowships
➡ Journalists are needed to ‘maintain public trust in era of generative AI’
➡ Press Gazette analysis shows news websites hit by ‘zero click’ AI search
⏱️ AI BRIEFS
◼️ LLMs can be tricked into revealing really bad stuff simply by packing prompts with jargon and fake sources, according to research cited by 404 Media.
◼️ Ask a leading LLM to pick a number between 1 and 50, and what do you get? Cognitive bias researcher Kyrtin Atreides explains why they give the same answer.
◼️ AI models are supposedly getting smarter, and yet, as this study shows, they slip up on maths challenges when random statements are introduced in prompts.
◼️ ChatGPT is replacing British English with American English, removing British spellings and ‘British-isms’, as 2nd Order Thinkers’ Jing Hu explains.
◼️ LLMs are dreadful debaters: while they generate outputs that appear to state a case, they “consistently fail” to understand what is being said.
💬 QUOTES OF THE WEEK
“Sure, AI is ‘creative’, but it’s not idiosyncratic in the same way as human imagination. AI doesn’t dream or surprise itself. It might ‘hallucinate’ — but not in the same way as humans. Great brand work comes from embodied imaginative ideas, absurd and contrary juxtapositions that only an imaginative human mind could stitch together.” — Colleen Ryan, partner at insights agency TRA, talking to Australia’s Mediaweek on AI and creativity
“We’re more insulated from it as creators than some other businesses because what we do is so human, and it is so important to communicate our feelings and our emotions and our fears and our joy through art. We have a lane that’s going to always be open to us.” — Harvey Mason Jr, CEO of the Recording Academy which presents The Grammys, in the Wall Street Journal
“If you want to be strong, you have to go to the gym. If you want to possess good judgment, you have to read and write on your own. Some people use AI to think more — to learn things, to explore new realms, to cogitate on new subjects. It would be nice if there were more stigma and more shame attached to the many ways it’s possible to use AI to think less.” — David Brooks, a New York Times opinion columnist, on how AI is making us dumber
“At its heart, education is a project of guiding learners to exercise their own agency in the world. Through education, learners should be empowered to participate meaningfully in society, industry, and the planet. But in its current form, generative AI is corrosive to the agency of students, educators and professionals.” — Miriam Reynoldson writing in The Mind File on her open letter (which you can sign) calling for a halt to AI in schools
✨ AND FINALLY …
OPENAI IS GETTING serious about intellectual property. Not yours, of course. According to the Financial Times the $300 billion AI start-up is overhauling its security to prevent the copying of its generative models. Back in January China’s DeepSeek was accused of using OpenAI model outputs to train its rival model — a process known as ‘distillation’ (Charting #39). The irony being that those outputs are based on the inputs that OpenAI has largely taken from the web without asking. Creatives often talk of a sense of violation when they discover their life’s work has been scraped without consent. Strangely enough, that’s the emotion that OpenAI chief research officer Mark Chen is now feeling. Not that he’s lost a single book, play, movie, painting or song to AI. No, Chen was responding to staff after Meta CEO Mark Zuckerberg poached four OpenAI researchers for his shiny new superintelligence lab. As WIRED’s Zoë Schiffer reported, Chen said he had a “visceral feeling right now, as if someone has broken into our home and stolen something”. Chen said he was working closely with his boss Sam Altman to find “creative ways to recognise and reward top talent”. Just imagine if Chen and Altman were instead working on creative ways to recognise and reward all the talent they’d stolen from. And to complete the fantasy, imagine if Zuckerberg, who is reportedly offering $100 million joining bonuses, supported creatives too. Fun fact: the median income for UK authors is £7,000 per year.