A monumental shift has arrived on the doorstep of publishers.
Rapid advances in artificial intelligence bring boundless opportunities, but only for those ready to embrace and experiment with them. In our newest whitepaper, The Bright Future of AI: An Optimist’s Guide to AI + Publishing, Hum prepares publishers to flourish amid rapidly advancing AI technology.
Inside, you’ll find:
- A 101 explainer demystifying core AI concepts
- Strengths and limitations of existing models
- A framework for pragmatic experimentation and building AI experience
- Guidance on crafting an AI-ready data architecture
- Real-world publisher case studies and expert perspectives
Get the full whitepaper - or check out the TL;DR version below.
Understanding AI
Intelligence is the ability to learn, understand, and apply knowledge, solve problems, and adapt to new situations. Artificial intelligence is focused on creating machine intelligence that mirrors intelligence in the natural world.
Practically, AI is a mix of data, models, and engineered systems that show signs of intelligence and, more importantly, put that intelligence to work.
Currently, AI systems span classic machine learning (task-specific algorithms with manually crafted features), narrow AI (deep learning systems designed for specific tasks), and broad AI (systems anchored by models like GPT-4, which operate across a wider set of domains and extend their capabilities with external tools and information).
There are three major types of AI models that exist within these systems:
- Generative AI, which focuses on generating new media from prompts, existing media or a combination of both.
- Interpretive AI, which focuses on context, understanding, and interpretation of media, and is often used to reason about large corpora.
- Predictive AI, which uses past data to predict future trends and behaviors.
These AI models can be made even more powerful when combined.
Data Quality Matters
An AI model is a structured, compressed, useful version of the data that fed it. Increasing the size of models, expanding the datasets they train on, and employing greater computational power leads to stronger AI capabilities.
At its most basic, bigger models + more compute + more data = stronger models.
A language model like Llama 2 is trained on two trillion tokens of language data from sources like Wikipedia, Reddit, internet blogs, and academic papers. In training, Llama 2 reads a piece of text and is asked to predict the next token: it’s rewarded for predicting correctly and penalized for guessing wrong.
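The next-token objective can be sketched with a toy bigram model. This is a deliberately simplified illustration (a frequency table rather than a neural network, and a ten-word corpus rather than two trillion tokens), but it shows the same idea: the model is a compressed summary of which tokens followed which in its training data.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows each token: the simplest possible
# "model" compressed out of the training data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Predict the token that most often followed `token` in training."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Note what happens outside the training data: for a token the model has never seen followed by anything, it has nothing to offer. Real LLMs replace the frequency table with a neural network that generalizes, but the training signal is the same next-token prediction.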
Image generation models like Stable Diffusion or DALL-E 3 are trained on text-image pairs. Diffusion models work by gradually adding noise to an image and then learning to reverse the process.
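The forward (noising) half of that process can be sketched in a few lines of NumPy. The hard part, learning to reverse each step, is what the neural network does; the array sizes, step count, and noise level below are illustrative, not any real model's schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, beta):
    """One forward-diffusion step: shrink the signal slightly, add Gaussian noise."""
    noise = rng.standard_normal(image.shape)
    return np.sqrt(1.0 - beta) * image + np.sqrt(beta) * noise

image = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # stand-in for real pixel data
noisy = image.copy()
for _ in range(50):  # many small steps drive the image toward pure noise
    noisy = add_noise(noisy, beta=0.1)
```

After enough steps the original image is unrecoverable by eye; training teaches a network to undo one step at a time, so that running it in reverse from pure noise generates a new image.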
While AI models can synthesize from within their training data, they struggle or fail to represent things outside it. (But these models are trained to please you, so they’ll try anyway, even if it means making up cited sources or inventing a nonsensical image.)
Likewise, if the training data quality is poor, the model quality will suffer.
"Large, clean, diverse data. The 3 pillars of a good dataset." - Andrej Karpathy
Top AI Use Cases for Publishers
Download the full whitepaper to explore practical ways that publishers are using AI to solve top use cases:
- Improving writing quality in the peer review process
- Conversational search
- Generating effective article summaries for different audiences
- Delivering personalized emails with curated recommendations
- Delivering context-aware recommendations, like suggesting related articles on a topic page
- Recommending appropriate journals for submitted manuscripts
- Tagging content for discovery and search
The Next Frontier: From LLMs to Multi-Modal AI
Large language models (LLMs) are massive multi-task learners. They can be trained for grammar correction, knowledge acquisition, summarization, logical or spatial reasoning, creative writing, or hypothesis generation.
LLMs will remain central to AI since they’re able to capture and encode so much intelligence. They’re also crucial since publishers deal with so much text. But we’ll also soon see models “go multi-modal”, mixing text, images, video, audio and other media. (How soon? OpenAI’s GPT-4.5 is rumored to include text, images, audio, video, and 3D images.)
How to Prepare for an AI Future
Many publishers are beginning to explore how AI can address their needs: increasing researcher prestige, saving researcher time, speeding peer review and freeing up editor time, detecting fraudulent manuscripts, translating research for new audiences, and more.
As you explore and evaluate AI products for your workflow, there are a few important things to keep in mind:
- Cost. As you start out, you should tend toward off-the-shelf frontier models like GPT-4. As you scale, you’ll want to consider smaller fine-tuned models that fit a particular use case. The cost difference can be as much as 100x, which makes a big difference in how much you can take advantage of AI.
- Data privacy. OpenAI and other services say that they don’t train on your data if you use their APIs. Many publishers are keeping their content out of OpenAI, Anthropic and the other generic model providers (they’re worried about “accidental escape”). Alternatives include deploying your own open source models or using private AI clouds like the one provided by Hum.
- Trust. Can you trust the model outputs? Hallucinations are a feature, rather than a bug, of the initial wave of AI models. You’ll want to carefully evaluate your models and their outputs, and you may want to include a human in the loop to review AI outputs, at least for a time.
Publishers should get comfortable experimenting, testing, and learning from AI initiatives. Now is the time to begin upskilling your team and to plan for new talent, like data engineers, UX designers, and AI product managers.
The most important thing a publisher can do to prepare for the AI future is start. Because AI is moving incredibly fast, publishers need to be “building with it” in order to gain more than a surface-level understanding and keep up with the rate of change.
“The single most important thing to understand about AI is how fast it is moving.” - Dario Amodei, CEO of Anthropic
Don’t forget to grab your copy of The Bright Future of AI for Publishers for a deeper dive on the current state of AI and an optimistic view of what the future holds - or learn more about Alchemist, Hum’s AI engine.