As the industry grapples with an unprecedented surge in submission volume – 22 million peer reviews conducted annually, consuming 130 million reviewer hours – early adopters of AI peer review tools are providing crucial insights for the broader publishing community.
At their recent SSP Industry Breakout session, Hum President Dustin Smith and AIP Publishing Chief Transformation Officer Ann Michael shared candid lessons from developing Alchemist Review, offering a roadmap for publishers considering similar innovations.
Missed out on the session?
Check out the recording, or read on for some of the key takeaways.
Starting Points: Where AI Makes the Biggest Impact
In collaboration with the Purpose-Led Publishing consortium (AIP Publishing, IOP Publishing, and APS), Ann and the Alchemist Review team explored three journal types where AI could meaningfully enhance the peer review process:
- Prestige Journals: High-rigor publications with complex workflows and high submission volumes.
- Mega Journals: High-throughput, streamlined operations that require scalable tools.
- Specialty High-Volume Titles: Journals balancing niche expertise with significant volume.
For the first phase of the pilot, AIP Publishing focused on working with a limited set of journals led by 5–6 motivated editors. An “opt-in” model proved essential. As Ann Michael emphasized, successful adoption often begins when editors say, "If you could do this, then I could do that" — a sign they’re already imagining meaningful improvements.
Rethinking ROI: From Dollars to Impact
One of the most important shifts in adopting AI tools is redefining return on investment. Financial ROI, while still relevant, often needs to take a back seat to impact-based outcomes, including:
- Time savings for editors and reviewers
- Better manuscript analysis and triage
- Enhanced reviewer experiences through automation of repetitive tasks
- Shorter publication timelines
This mindset shift is essential for organizations experimenting with AI and scaling its use.
What It Takes to Build and Deploy AI Tools
Being an early mover certainly comes with rewards, but it also means facing real-world hurdles. The Purpose-Led Publishing group developed a structured rollout process: limited prototype deployment, continuous editor feedback, and iterative refinement.
Some of the most valuable lessons came from early implementation challenges:
Integration Complexity
Initial versions of Alchemist Review lacked manuscript ID search – a seemingly small omission that sparked intense frustration among editors. Adding the feature improved manuscript search performance by 25%, reinforcing how deeply editors rely on familiar workflows and how important it is to build new systems with editors, not just for them.
Processing Delays
Early versions took up to five days to process manuscripts – a non-starter in fast-paced editorial environments. This made clear the need to rebuild for near-real-time responsiveness.
UI Overload
Too many features, too soon, led to user fatigue. A revised interface focused on high-level indicators with the option to drill down, enabling editors to focus without being overwhelmed.
Culture Change: The Hardest Challenge
As Ann Michael noted at SSP, the toughest barriers often weren’t technical — they were human. Among the most critical lessons:
Managing Expectations
Many senior editors dismissed AI-generated summaries as redundant. While tough to hear, this feedback pushed the team to refine outputs toward analysis that humans couldn’t practically perform on their own.
Avoiding the Perfection Trap
Stakeholders expected error rates near zero from day one. In reality, AI adoption demands a tolerance for imperfection — and a long view of value creation through iteration. This will inform the way AIP Publishing sets realistic expectations about experimentation and continuous improvement in future implementations.
Framing the Message
Telling editors that AI would reduce reviewer burden often prompted defensive reactions. Reframing the message — AI as a tool to enhance efficiency and eliminate wasted effort — proved far more effective.
User Feedback and Iteration
With multiple editors actively testing Alchemist Review across manuscripts, feedback has been intense and constant. The team implemented multiple feedback channels:
- In-app feedback flowing through Slack channels (see the sketch below)
- Direct conversations with editors
- Formal surveys and structured interviews
This feedback loop enabled rapid iteration and feature prioritization. Some features that seemed important to developers proved less valuable to editors, while seemingly minor gaps (like the missing manuscript ID search) became critical pain points.
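To make the first of those channels concrete, here is a minimal sketch of how an application might forward in-app feedback to a Slack channel via Slack’s standard incoming-webhook API. This is an illustrative assumption, not Alchemist Review’s actual implementation; the webhook URL and payload fields are hypothetical placeholders.

```python
# Illustrative sketch only: forward an editor's in-app feedback to Slack
# via an incoming webhook. The URL and fields below are placeholders,
# not Alchemist Review's actual integration.
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"  # placeholder

def forward_feedback(manuscript_id: str, editor: str, comment: str) -> None:
    """Post one piece of editor feedback to the team's Slack channel."""
    payload = {"text": f"Feedback on {manuscript_id} from {editor}: {comment}"}
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # Slack returns the plain-text body "ok" on success

# Example call (hypothetical data):
# forward_feedback("MS-12345", "Dr. Chen", "The summary misses the paper's key result.")
```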
What’s Next: Seamless Integration
The current version of Alchemist Review is just the beginning. Ongoing collaborations – for example with Silverchair’s ScholarOne Manuscripts – aim to integrate AI directly into editorial workflows, reducing tool-hopping and friction.
Planned expansions include:
- Subject coverage extending beyond science, technology, and engineering to medicine, mathematics, and the humanities
- Reviewer integration with appropriate safeguards
- Author-facing tools that help improve submissions before they enter the editorial pipeline
- Advanced analytics for journal-level insights and trend analysis
Key Takeaways for Publishers
- Start small and focused: Pick willing journals and committed editorial teams rather than trying to transform everything at once.
- Prepare for honest feedback: Editors will be direct about what works and what doesn't. Treat criticism as valuable input, not a personal attack.
- Invest in integration: Standalone tools create workflow friction. Plan for seamless integration with existing systems from the beginning.
- Redefine success metrics: Traditional ROI models may not apply. Focus on impact, efficiency, and user satisfaction.
- Embrace experimentation: As Ann noted, "You are not getting perfection out of the gate." Organizations that demand perfection upfront will struggle with AI implementation.
Implementing AI in editorial workflows isn’t a one-size-fits-all solution – it’s a journey. Early efforts like Alchemist Review demonstrate that meaningful innovation happens when technology is built for editors, not just around them.
Remember: success in AI implementation comes not from the sophistication of the technology, but from how well it serves the humans who use it.