🗓️ THIS WEEK …
Welcome to this week’s newsletter, which manages to reunite Yann LeCun, Yoshua Bengio and Geoffrey Hinton — recipients of the 2018 Turing Award, the “Nobel Prize of computing” — who built the foundations for today’s AI boom. The week saw another big content licensing deal, again featuring OpenAI, with hints of several more to come from it and other AI companies. Meta’s LeCun warned the AIs are on the wrong track with generative AI and said large language models (LLMs) were “intrinsically unsafe” (has he told his boss?). US publisher Gannett is said to be close to adding AI-generated summaries to the top of news stories, while journalists elsewhere continue to experiment with AI tools. And the AI Seoul Safety Summit closed after receiving assurances from the world’s largest AIs that they’ll act transparently on AI dangers. So that’s all right then.
🔎 ESSENTIAL READING
COPYRIGHT & LICENSING
OpenAI signed a multi-year content licensing deal with News Corp, publisher of The Times and The Sunday Times in the UK, and The Wall Street Journal and The New York Post in the US, plus a raft of other titles. Under the deal — said to be worth more than $250 million over five years — content from News Corp newsbrands will appear in OpenAI products. News Corp will also “share journalistic expertise to help ensure the highest journalism standards are present across OpenAI’s offering”. News Corp CEO Robert Thomson said the “historic agreement” would “set new standards for veracity, for virtue and for value in the digital age” and marked the “beginning of a beautiful friendship”. For his part, OpenAI CEO Sam Altman said the partnership set “the foundation for a future where AI deeply respects, enhances, and upholds the standards of world-class journalism.” OpenAI’s deal with News Corp follows content licensing and partnership deals with the Financial Times, Dotdash Meredith, Le Monde, Axel Springer, Prisa Media and the Associated Press. OpenAI is being sued by hedge fund Alden Global Capital (owner of eight US newspaper publishers including the New York Daily News), as well as The New York Times, The Intercept, Raw Story and AlterNet. 🔗 News Corp-OpenAI statement; The Wall Street Journal
Meta is reportedly mulling whether to start knocking on the doors of news publishers and seek content licensing deals for its AI model training. According to Business Insider, teams within the Facebook owner are discussing whether paid deals providing broader and deeper access to news, photos and video content are the best way to make its generative AI products more effective in an increasingly crowded market. Meta wouldn’t comment. 🔗 Business Insider
Google may pursue more content licensing deals for generative AI training and outputs following its partnership with Reddit — said to be worth $60 million per year — in February. Nilay Patel, editor-in-chief of The Verge, discussed the relationship between content owners and search in the AI era with Google CEO Sundar Pichai and asked if Google planned more paid deals to train search results. “I think there are cases in which we will see dedicated incremental value to our models, and we’ll be looking at partnerships to get at that. I do think we’ll approach it that way,” said Pichai. 🔗 The Verge
Pete Brown, research director at the Tow Center for Digital Journalism, comprehensively reviewed the recent slew of content licensing deals between the AIs and news groups. Said Brown: “These developments typify the latest chapter in a fraught relationship between platforms and publishers, where the big question for news organisations — or some might argue, the chosen few that are given a choice — is: Deal or no deal? (For those not invited to the table, it’s more a case of: Deal with it.)” 🔗 Columbia Journalism Review
TECH WATCH
You’re doing it all wrong! That was Meta AI chief Yann LeCun’s message to rival AIs who believe generative AI’s LLMs will one day create artificial general intelligence (AGI), the point at which machines become smarter than humans. LeCun, chief scientist at Meta’s AI lab, told the Financial Times that LLMs had “very limited understanding of logic ... do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan ... hierarchically”. Ouch! Furthermore, LLMs can only provide accurate answers if they’ve been trained on the right data, and so are “intrinsically unsafe”. Ouch again! LeCun is working on an alternative ‘world model’ that can develop common sense in much the same way that humans do, by observing and interacting with the world around them. LeCun first shared that bold vision with the MIT Technology Review in 2022, saying: “This idea that we’re going to just scale up the current LLMs and eventually human-level AI will emerge — I don’t believe this at all, not for one second.” Meta released the first experimental model based on LeCun’s research last June. 🔗 Financial Times; MIT Technology Review; Meta AI Research
Elon Musk’s AI start-up xAI is reportedly seeking to raise nearly $6 billion in a funding round backed by Silicon Valley VCs. The Financial Times said the additional funds would give xAI a $24 billion valuation and help fund the development of Musk’s Grok chatbot. 🔗 Financial Times
The rise of generative AI video will drive even more demand for Nvidia’s graphics processing units (GPUs) in data centres, Nvidia CEO Jensen Huang told Reuters. “There’s a lot of information in life that has to be grounded by video, grounded by physics. So that’s the next big thing,” predicted Huang. Nvidia earlier announced better-than-expected quarterly revenues, sending shares above $1,000 and adding $140 billion to its $2.5 trillion market cap. Huang said Nvidia customers were creating a new type of data centre (“AI factories”) built on accelerated computing. “The next industrial revolution has begun,” he added. 🔗 Reuters; Nvidia release
OpenAI’s superalignment team, tasked with ensuring the company can steer and control AI systems smarter than humans, was disbanded following last week’s resignations of joint leaders Ilya Sutskever and Jan Leike. According to WIRED, members have either resigned or have been absorbed into other teams. Leike posted on X that he and OpenAI’s leadership had “reached a breaking point” while “safety culture and processes have taken a backseat to shiny products.” OpenAI president Greg Brockman and CEO Sam Altman penned a lengthy joint post on X defending the company’s stance on AI safety and predicting a future when users will be interacting with several models and tools “which can take actions on their behalf”. 🔗 WIRED; Leike on X; Brockman & Altman on X
OpenAI dropped the Sky voice for GPT-4o following an angry complaint from Scarlett Johansson. 🚨QUICK TAKE: OpenAI must show it can be trusted
Microsoft eagerly unveiled its AI innovations including a new class of personal computer called Copilot+ PCs, “the fastest, most intelligent Windows PCs ever built”. The key differentiator: Copilot+ PCs will run a suite of AI applications powered by small language models (SLMs) on the device, as well as being connected to LLMs in the cloud. One of the AI-on-the-device applications is Recall, which takes a snapshot of everything you’re doing and saves it to a timeline to aid retrieval using visual scrolling or natural language search with the Copilot+ voice assistant. (Sounds creepy? See Regulation & Legislation). Think of it as an AI era version of Clippy, the animated paperclip that launched with Office 97. While Clippy would pop up asking if you needed help writing a letter, Copilot+ will sharpen up your Minecraft gaming skills. At least, that’s the theory. 🔮 PREDICTION: While Microsoft wants us to call them Copilot+ PCs they’ll end up being marketed by manufacturers as ‘AI PCs’. 🔗 Microsoft Blog
Google is to start testing search and shopping ads within its new AI-generated answers. Google said the search-relevant ads — currently being tested in the US — would be drawn from existing search campaign partners and appear in a sponsored section. “As we move forward, we’ll continue to test and learn new formats, getting feedback from advertisers and the industry,” said Google. Where this leaves those in the search engine optimisation industry (global market size: $68.27 billion in 2022) is something of a mystery. Meanwhile, tech writer AW Ohlheiser penned a piece for Vox asking the question: if Google’s AI hallucinates and causes harm, is it legally liable? It’s the old chestnut of what constitutes a publisher. In the olden days, when its search engine merely returned a list of blue links, Google could claim it was just pointing to information; now that it is creating content, that distinction, and the legal status that comes with it, is less clear. 🔗 Google Ad Blog; Vox
Amazon is reportedly working on a generative AI overhaul of decade-old voice assistant Alexa. CNBC said an overhauled Alexa would be available for a monthly fee which won’t be part of Amazon Prime. Amazon didn’t comment. 🔗 CNBC
UK start-up Stability AI is in talks with what a spokesperson told Reuters is a “world-renowned technology investor syndicate”. The Information said the group includes former Facebook president Sean Parker and Prem Akkaraju, former CEO of visual effects company Weta Digital. 🔗 Reuters; The Information
AUGMENTED CREATIVITY
US publisher Gannett — owner of USA Today as well as hundreds of regional news titles — is looking to add AI-generated summaries at the top of pages on its websites, according to a memo seen by The Verge. The memo reportedly says pages featuring the AI-written summaries will make clear that AI has been used, and that a journalist has reviewed them prior to publication. 🔗 The Verge
Journalists need to be careful when using generative AI tools since the unreliability of their outputs “could undermine the integrity of their work”. That’s the warning from Nir Eisikovits, director of UMass Boston’s Applied Ethics Center. Eisikovits said economic pressures were driving newsrooms to boost productivity, but the time spent fact-checking generative AI outputs could offset any purported gains, and erode trust in journalism. 🔗 The Conversation
The Spinoza Project, a generative AI tool developed by journalism group Reporters Without Borders and an alliance of French newspaper and magazine publishers, is now being tested by 12 French media groups. The tool’s model has been trained on media articles plus technical, legal, and scientific data linked to climate change and includes sources in its outputs “to ensure transparency and, if necessary, allow journalists to verify and expand the information”. 🔗 RSF
Generative AI tools are helping to save journalists on France’s 20 Minutes news website up to eight minutes per article by automating tasks such as adding SEO metadata, rewriting news agency copy, and identifying content that some advertisers might not want their brands to appear next to. “With around 160 pieces of content published every day, this is a significant amount of time that can now be spent reporting the news to our readers,” said 20 Minutes CTO Aurélien Capdecomme while insisting that all AI-generated content is validated by humans before being published. 20 Minutes now plans to extend its use of Amazon’s Bedrock suite of generative AI products “to tag our existing image library and provide automated suggestions for article images”. 🔗 AWS blog
Boffins at Google DeepMind have partnered with YouTube to create Music AI Sandbox, a suite of tools designed to help musicians, songwriters and producers create instrumentals and alter musical styles. Artists Wyclef Jean, Justin Tranter and Marc Rebillet released demos using the Music AI Sandbox, powered by DeepMind’s Lyria music generator. 🔗 Music AI demos; DeepMind’s Lyria
REGULATION & LEGISLATION
Sixteen AIs including Google, Meta, Microsoft, OpenAI and China’s Zhipu.ai pledged to abide by the AI Seoul Safety Summit’s ‘Frontier AI Safety Commitments’ which include publishing how they will measure the risks of their most advanced AI models. Leading AI safety expert Yoshua Bengio welcomed the AIs’ “commitments to halt their models where they present extreme risks until they can make them safe”. The next AI Safety Summit will take place in France in early 2025. Magnifique. 🔗 AI Seoul Summit
Before the UK general election was called, culture secretary Lucy Frazer told the Financial Times that ministers were planning to bring forward legislation ensuring greater transparency over the use of copyrighted content for model training. Frazer said AI represented a “massive problem not just for journalism, but [also] for the creative industries”. “The first step is just to be transparent about what they [the AI companies] are using. [Then] there are ... questions about opt in and opt out [for content to be used], remuneration. I’m working with industry on all those things.” Except now she’s not. 🔗 Financial Times
The UK’s AI Safety Institute (AISI) is setting up its first overseas office in San Francisco. Technology secretary Michelle Donelan said the expansion into the US — hi-techs there must surely regard this as a ‘coals to Newcastle’ moment — was “pivotal” for the “UK’s ability to study both the risks and potential of AI from a global lens”. The AISI also announced an AI safety collaboration with Canada, and unveiled research showing five unnamed AI models were “highly vulnerable” to jailbreaks — prompts that get around ethical safeguards and elicit content that’s illegal, harmful, or just plain wrong. The AISI will now share its findings so the AIs can “assess and improve” their safety. 🔗 AISI SFO release; AISI research
The EU’s Council of Ministers gave the final green light to the EU AI Act. Lawmakers described the risk-based regulations as “ground-breaking” and as setting “a global standard for AI regulation”. The Act bans AI systems for cognitive behavioural manipulation and social scoring, and policing systems that profile people based on biometric data. There are exemptions for military uses (to the annoyance of Geoffrey Hinton, see Quotes of the Week). 🔗 EC Press release
The Council of Europe — the international human rights organisation not to be confused with the European Council which defines the EU’s political direction — adopted an AI treaty aimed at “ensuring a responsible use of AI that respects human rights, the rule of law and democracy”. 🔗 Council of Europe statement
The European Commission said Microsoft needed to make clear what generative AI safeguards had been built into search engine Bing by May 27, or face a fine. The Commission asked the tech giant to come clean in March but didn’t get an answer. In a statement it said the “request for information is based on the suspicion that Bing may have breached the Digital Services Act for risks linked to generative AI, such as so-called ‘hallucinations’, the viral dissemination of deepfakes, as well as the automated manipulation of services that can mislead voters”. Microsoft said it had been cooperating with the Commission and was “committed” to replying to its questions. Meanwhile, the UK’s Competition and Markets Authority (CMA) said it would not investigate Microsoft’s partnership with French start-up Mistral AI. 🔗 EC statement; Euractiv; CMA statement
Microsoft’s Recall application on Copilot+ PCs caught the attention of the UK’s Information Commissioner’s Office (ICO), the data protection regulator, which said it was “making inquiries with Microsoft to understand the safeguards in place to protect user privacy”. An FAQ on Microsoft’s Copilot+ PC website said Recall snapshots remain “on the local hard disk”. But Jen Caltrider, privacy lead at software community Mozilla, told BBC News that someone with a user’s password could access their full history, including “Microsoft if they change their mind about keeping all this content local and not using it for targeted advertising or training their AIs down the line. I wouldn’t want to use a computer running Recall to do anything I wouldn’t do in front of a busload of strangers.” Earlier, the ICO said it had closed its investigation into Snap’s ‘My AI’ chatbot, concluding it is compliant with data protection law. 🔗 ICO Recall statement; Microsoft Copilot+ PC; BBC News; ICO Snap statement
💬 QUOTES OF THE WEEK
“We have to educate ourselves about AI, and then report the hell out of it! That is the one tool we have that nobody else does — the power of reporting. Generative AI is going to be one of the seminal changes of our lives and we need to turn all our investigative and analytical power on to it to tell the story, hold these new AI powers to account, and inform people about how the tools work.” — Jane Barrett, Global Editor, Media News Strategy, Reuters, talking to Dr Alexandra Borchardt for the EBU News Report 2024
“I’m impressed by the fact that [governments] are beginning to take [AI safety] seriously. I’m unimpressed by the fact that none of them is willing to regulate military uses, and I’m unimpressed by the fact that most of the regulations have no teeth.” — Geoffrey Hinton, the ‘godfather of AI’, talking to Faisal Islam on BBC Newsnight
🤔 AND FINALLY …
Rishi Sunak, Britain’s tech bro prime minister, plunged the country into a surprise general election this week. We can confidently predict there’ll be deepfake videos of Sunak and Labour’s Sir Keir Starmer talking complete nonsense on the campaign trail. Former PM Boris Johnson will talk complete nonsense on TV and claim it’s a deepfake. Generative AI-created disinformation will flood social networks, causing confusion among voters and consternation in newsrooms as the cry goes out “is this real?”. Election manifestos will echo the usual hype of the opportunities and challenges of the AI era, talking about the need to prepare for an AI-assisted/enhanced/empowered society and mitigate the extinction risk of superintelligence. What the parties are unlikely to do is talk meaningfully about regulation that protects the UK’s world-class creative industries. As a template they need look no further than Lord Chris Holmes’ AI Regulation Bill, which would establish an AI Authority with regulatory teeth, force the AIs to reveal all the third-party data and intellectual property used to train their models, and ensure that all IP is used with the consent of owners. It’s a sensible set of measures that deserves to be reintroduced in the new Parliament, and be backed by whoever wins the keys to 10 Downing Street.