CHAPTERS 1–6 | EARLY RELEASE

O'Reilly | Designing Large Language Model Applications

Transformer-based language models are powerful tools for solving various language tasks and represent a phase shift in natural language processing. But the transition from demos and prototypes to full-fledged applications has been slow. With this book, you'll learn the tools, techniques, and playbooks for building useful products that incorporate the power of language models.

What you will learn in this book

Experienced ML researcher Suhas Pai provides practical advice on dealing with commonly observed failure modes and counteracting the current limitations of state-of-the-art models. You'll dive deep into the Transformer architecture and its variants, and get up to date on the taxonomy of language models, which offers insight into which models are better suited to which tasks.

  • Clever ways to deal with the failure modes of current state-of-the-art language models, and methods to exploit their strengths to build useful products

  • How to develop an intuition about the Transformer architecture and the impact of each architectural decision

  • Ways to adapt pre-trained language models to your domain and use cases

  • How to select a language model for your domain and task from among the choices available, and how to deal with the build-versus-buy conundrum

  • Effective fine-tuning and parameter-efficient fine-tuning, as well as few-shot and zero-shot learning techniques

  • How to interface language models with external tools and integrate them into an existing software ecosystem

About the Author

Suhas Pai is an experienced machine learning researcher who has worked in the tech industry for over a decade. He is the co-founder, CTO, and ML Research Lead at Bedrock AI, a Y Combinator-backed NLP startup in the financial domain, where he has invented several novel NLP techniques and LM-based architectures that power the core features of the company's products.

Suhas is also the co-chair of the Privacy Working Group at BigScience, the project behind the BLOOM language model, which was the world's largest open-source, multilingual language model when it was released.