Artificial intelligence is transforming industries, and publishing is no exception. HarperCollins’ recent decision to partner with an unnamed AI company, widely reported to be Microsoft, to license select nonfiction titles for training AI models has sparked intense debate among authors. While the initiative may seem forward-thinking, HarperCollins has faced significant challenges in enrolling authors in its AI program. This hesitation highlights critical missteps in communication, value proposition, and ethical considerations.
In this blog, we’ll explore the reasons behind HarperCollins’ struggle to gain author buy-in and what executives across industries can learn from this experience.
One of HarperCollins’ most significant missteps was the lack of transparency in its AI partnership. Authors were asked to opt in and license their works for $2,500 per book over three years, yet key details, starting with the identity of the AI partner, were shrouded in secrecy.
This opacity has fueled skepticism among authors. Many feel they’re being asked to take a leap of faith without sufficient information. As Daniel Kibblesmith, an author approached for the initiative, put it, the offer felt “abominable.”
Trust is the cornerstone of any partnership, especially when dealing with sensitive intellectual property. HarperCollins’ failure to provide clarity has alienated authors and tarnished the initiative’s credibility.
The $2,500-per-book offer, while tempting to some, has been widely criticized as inadequate. Authors argue that the amount does not reflect the value their works bring to AI training models, especially given AI’s transformative potential in publishing.
AI companies stand to gain far more from the rich datasets derived from books, which improve their models and, by extension, their commercial viability. Authors feel this flat-fee structure ignores the potential for long-term royalties or shared revenue from the AI’s outputs.
By offering a fixed, modest fee without exploring alternative compensation models like profit-sharing or royalties, HarperCollins has unintentionally reinforced the perception that authors’ contributions are being undervalued.
Authors have long feared the implications of AI in creative industries. From AI-generated novels to automated journalism, there’s growing concern that these technologies could render traditional creative roles obsolete.
By asking authors to contribute to AI training, HarperCollins inadvertently positioned itself as a facilitator of this perceived obsolescence. The email sent to authors even acknowledged the controversy, stating:
“There is concern that these AI models may one day make us all obsolete.”
Instead of addressing these fears with concrete reassurances or ethical guidelines, HarperCollins left authors grappling with existential questions about their relevance in an AI-driven future. This lack of proactive communication has exacerbated resistance.
Perhaps the most glaring oversight was the failure to involve authors in the decision-making process. Rather than fostering a collaborative dialogue, HarperCollins approached authors with a pre-structured deal, offering little room for negotiation or input.
This approach ignored a fundamental truth: authors are the backbone of the publishing industry. By sidelining them, HarperCollins missed an opportunity to build trust and shape the initiative collaboratively.
Workshops, focus groups, or even open forums could have helped HarperCollins understand and address authors’ concerns while refining the initiative.
HarperCollins’ initiative comes at a time when authors are suing companies like OpenAI for alleged copyright infringement. This legal backdrop has heightened sensitivities around AI’s use of creative works.
While HarperCollins made efforts to differentiate its opt-in model from unauthorized uses of copyrighted material, it underestimated the prevailing mistrust toward AI. Many authors see this as a slippery slope toward normalizing the exploitation of intellectual property.
Penguin Random House, for example, proactively updated its copyright pages to safeguard authors’ works from unauthorized AI usage. HarperCollins, however, has yet to take similarly robust measures to reassure authors of their rights and protections.
The challenges HarperCollins faces in enrolling authors offer valuable lessons for executives navigating AI adoption:
1. Prioritize transparency. Organizations must be upfront about their partnerships, terms, and long-term goals. Secrecy breeds mistrust, especially when dealing with stakeholders’ intellectual property.
2. Offer meaningful value. Stakeholders need to see tangible benefits. Explore alternative compensation models, such as royalties or co-ownership of AI outputs, to ensure equitable value distribution.
3. Address fears head-on. Anticipate concerns about job displacement or exploitation and counter them with clear ethical guidelines and commitments to stakeholder protection.
4. Engage stakeholders early. Involve stakeholders in the design and rollout of new initiatives. Collaboration fosters trust and ensures the program aligns with their needs and concerns.
5. Monitor industry sentiment. Keep a pulse on broader industry trends and sentiments. This awareness can help organizations position their initiatives more strategically and avoid backlash.
HarperCollins’ AI licensing initiative is a case study in how not to introduce disruptive technologies to a creative industry. While the company’s intentions may have been good, its approach lacked the transparency, value alignment, and stakeholder engagement needed to build trust and foster collaboration.
For authors, the initiative serves as a reminder to remain vigilant about their intellectual property rights. For executives, it’s a lesson in the importance of ethical, transparent, and inclusive AI rollouts.
As AI continues to reshape industries, the question isn’t whether organizations should adopt these technologies but how they can do so responsibly. HarperCollins’ experience shows that the road to innovation must be paved with trust, collaboration, and respect for the people who make these industries thrive.