'BETRAYED': Author says publishing giant has let down writers over lucrative AI deals
'Please tell us how you're going to respect our work and give us a seat at the AI table'
AMID THE HIGH-PROFILE lawsuits brought by newspaper groups, music labels and visual artists against AI developers, the plight of one group of creators has largely gone unnoticed. Scholarly writers are the specialist wordsmiths who contribute to developing and disseminating human knowledge through a rigorous process that can involve years of painstaking research, critical thinking and multiple rounds of review before their work appears in books and journals.
Dr Janet Salmons is an independent writer living and working in Boulder, Colorado. The author of 13 books, she’s a former award-winning academic and 2024 research fellow at the Centre for Advanced Internet Studies in Bochum, Germany. Three of Janet’s books and a chapter in a collected volume were published by Routledge, one of the world’s leading academic publishers and part of the Taylor & Francis division of Informa, the global publishing group.
In May 2024 Informa announced a non-exclusive AI partnership with Microsoft to explore “the development of specialised expert agents … starting with Taylor & Francis’ Advanced Learning content and data”. Under the deal Informa said it would receive an initial $10 million plus undisclosed recurring payments over the following three years. “The agreement protects intellectual property rights, including limits on verbatim text extracts and alignment on the importance of detailed citation references,” said Informa.
Two months later Informa said it had secured another AI partner and that AI licensing would generate more than $75 million in revenues in 2024. That lucrative “source of significant new value” would generate “additional royalties for authors,” said Informa. Great news for creators ... or so you’d think. Janet is one of several scholarly writers who say they weren’t consulted on the AI deals and haven’t been offered compensation. They own copyright in the works that have been used for generative AI training but say their consent was never sought.
In an interview with Charting Gen AI Janet — who specialises in research methodologies — expresses her fury at not being informed about the AI deals, and her frustration at being unable to opt out of them. Janet also:
Explains what goes into creating a scholarly work, and why it’s such a time-consuming process — taking 15 years in the case of one AI-ingested book.
Shares her views on how AI is undermining trust in knowledge and critical information that we need to inform decision-making, and “damaging the public discourse”.
Reveals why she feels she’s been betrayed “by those who should be safeguarding our intellectual property and championing the value of scholarly research and writing”.
Janet has advice for new authors looking to get published as well as those whose “work has already been thrown into the AI chopper”. And she has some suggestions too for Informa, including inviting authors “to be part of a team” when thinking about AI. “We need a seat at the table,” Janet believes.
We’ve sent Janet’s messages to Informa along with questions of our own and will follow up with its reply.
In the meantime, here’s the full audio interview, with transcript below.
Janet, tell me about the process that scholarly writers go through, and how it differs from other forms of writing ...
Scholarly writers and others who do research-based writing, like investigative journalists, first have to do the research. So, while other kinds of writers begin with imagination, observation or personal experience, scholarly writers begin by actually conducting original research or developing theories about phenomena in the world. But the point I want to emphasise here is that for a scholarly researcher every stage of your work has oversight, so your proposal is approved and then anything that you’re writing goes through a series of review processes. So there are layers of other people who are looking at what you do to say, ‘Was this done appropriately? Were research ethics honoured?’, and so forth. The effort that goes into actually doing the writing is enormous, but you had to work for years before you got to that point to have the material that’s the basis for the writing.
What’s the mission that scholarly writers are on? Is it a calling?
I strongly believe that it is a calling, and that we feel a commitment on a couple of levels. One is to finding new insights and understandings of the things that are going on in the world, and some of us are writing in ways that will inform other scholars, so that you’re creating a foundation that others can work from. My particular specialty has been around using technology, so just to make it clear, my resistance doesn’t come from being a Luddite or someone who doesn’t believe in the value of technology.
Last summer you found out that three of your books plus a chapter in a collected volume, all published by Routledge, were part of an AI partnership that its parent company Informa had struck with Microsoft and another AI developer. You first saw that news on LinkedIn. What was your reaction, and what did you do?
Well, my immediate reaction was that I did not want to be a part of it. And so I immediately contacted my editor and said, ‘I would like to opt out, I don’t want my work used in this way’. The response was kind of, ‘Oh well’. And so then she forwarded me to someone else and I corresponded with this person, who is in a more senior position. And he was clear that there was no opt-out available and it was too late, that my work had already been consumed, and that they did not feel any compunction about not contacting writers at all, either before, during or after this deal. Now the books that I published with Routledge were done long before ChatGPT or any AI tools had been released to the public, and so naturally there’s nothing in our contract that would have anything to do with this kind of use of the material. Nevertheless, he insisted that existing contracts were perfectly adequate. So when I pushed on the question of royalties, I said, ‘Well, how do you give royalties on fragments of things? If you’re going to blast my work into bits that will then be mixed up with other bits stolen from other people and poured out, how do you figure out royalties on that?’ And he’s like, ‘Oh no, we won’t figure royalties on outputs’. So then I wrote and said, ‘Well, if it’s not on outputs then it must be on inputs, in which case I haven’t seen any money coming in, or any kind of communication to authors saying, “Hey, we’ve signed this deal, here’s what it means for you, and here’s how it’s going to work”.’ I’ve written now three times asking please explain to me how you’re going to determine royalties, and I’ve received no further response.
You blogged that Routledge wrote to you saying the deals struck with two AI companies would allow authors to “ensure their ideas make the fullest possible contribution”. What was your reaction to that statement?
Well, the writing that I have done for these books — one of those books was roughly 15 years of work: doing the original research, then using the ideas in the development of courses offered at the master’s level, using these ideas in presentations and getting feedback, not only the kind of formal feedback you get from review of the manuscript but also responses from people in the field. Also, in all of these I created the art. I did not do this to give it to an AI tool that would then use it in whatever way it is that they use it. That was not the purpose of this work. It’s not the way that I want it to be used. So, no, I was absolutely infuriated. And I feel like, because I am an independent scholar at this point in my life, I have the ability to speak up in a way that other writers, people who are trying to get tenure and promotion and to hold a secure position, can’t. Maybe their perspective is not the perspective of their institution, and they are limited in how they can speak up.
The plight of scholarly writers hasn’t achieved the same level of coverage as other creator battles with AI. Why do you think that’s the case?
One of the issues for academic writers is that we don’t have any organisation. When the Hollywood writers went on strike I thought, ‘Oh, this is great, I’m glad to see writers articulating the same issues that we face as academic writers’. But because they’re unionised they had the clout and ability to create some kind of agreements, some kind of guardrails and limitations for their work, that we don’t have. And that makes us really vulnerable. People always use the metaphor of David and Goliath. But in this case it’s kind of like just Goliath.
Where are things now with Routledge?
I’ve had no further response from any of my inquiries and I’ve had no communications generally from the company to authors about their policies or how they plan to assess royalties. So things are just kind of left hanging, and I don’t know where I can really push further to get more information.
Last year you wrote: “I thought I was a scholar and a writer, but I found out I’m just a link in generative AI’s content supply chain.” But this goes beyond financial compensation, doesn’t it? What are your bigger concerns?
There are the input concerns about using material without people’s permission and compensation. But to me there is a larger concern that is even more urgent, and that is: if we lose trust in scholarly writing — in writing that is based on rigorous empirical research that has been reviewed and conducted with oversight and ethical safeguards and all of those things — if we can’t trust that, then what do we have? As academic writers we talk about standing on the shoulders of giants. We honour the people who have come before us; whether we agree with them or not, we’re building on their work. We honour them by citing them and referencing them, so that when you read a piece of work you can say, ‘Where did this come from? OK, well, it came from the other scholars and from the new research’, and it’s all been explained in a coherent form. That is completely the opposite of something that is pulled from all kinds of sources, where there’s nobody critically saying, ‘Well, wait a minute, this was just a comment on a social media post, this was just a conspiracy theorist’. It’s just all mashed up and thrown out, and who knows where it came from. You can’t honour the sources by referencing them, because you don’t know what the sources are. So to me that’s really damaging to the public discourse and promotes a sense of distrust.
Could it be done properly? Could there be an ethical approach to this?
I’m sceptical, because I don’t think the people running the companies currently in charge of these major tools are acting in an ethical or respectful manner, and so it’s hard for me to really trust that they’re going to make decisions that would be something I’d want to be a part of.
How has this experience left you feeling?
Beyond all of the issues about compensation and permissions, I feel this sense of betrayal by those who should be safeguarding our intellectual property and championing the value of scholarly research and writing. It’s very frustrating, and it’s really sad, because I think that we need more scholarly writing, we need more research-based writing — whether you are in the academic world or journalism or blogging — any kind of writing that is based on an examination of solid foundations.
What advice would you give a scholarly writer at the early stages of their career, and those that might be in your situation?
My first suggestion is that if you’re looking at where to publish your work, whether in a journal or in book format, look at the policies and practices of the publishers you’re considering: what are they doing, how are they treating their authors, what kinds of protections are they offering to their published works? If you’re someone who has already been published, write to your editor now and say, ‘What are you doing, what are you considering, what’s the publisher considering? Keep me in the loop, I want to know’, so that those channels of communication are in place. If we simply throw up our hands and say, ‘Well, there’s nothing we can do, we’re just helpless’, then there’s not much chance that progress can be made.
What message do you have to those running Informa, the parent company of Routledge? What would you like to say to them?
Please communicate with your authors, editors and contributors to journals, and let us know what you’re planning and how you’re going to respect our work. Those of us who’ve already published know our work has already been thrown into the AI chopper. But what about new authors who are coming in? Do they have any choice in the matter? We need some communication. What about having some authors at the table when you’re making these decisions? What about inviting some authors to be part of a team to think about, OK, what happens next? How can we show our authors that we respect them? We need a seat at the table, I think.