Unlocking Data with Generative AI and RAG

Keith Bourne

Description

Generative AI is helping organizations tap into their data in new ways, with retrieval-augmented generation (RAG) combining the strengths of large language models (LLMs) with internal data for more intelligent and relevant AI applications. The author harnesses his decade of ML experience in this book to equip you with the strategic insights and technical expertise needed when using RAG to drive transformative outcomes.
The book explores RAG’s role in enhancing organizational operations by blending theoretical foundations with practical techniques. You’ll work with detailed coding examples using tools such as LangChain and Chroma’s vector database to gain hands-on experience in integrating RAG into AI systems. The chapters contain real-world case studies and sample applications that highlight RAG’s diverse use cases, from search engines to chatbots. You’ll learn proven methods for managing vector databases, optimizing data retrieval, effective prompt engineering, and quantitatively evaluating performance. The book also takes you through advanced integrations of RAG with cutting-edge AI agents and emerging non-LLM technologies.
By the end of this book, you’ll be able to successfully deploy RAG in business settings, address common challenges, and push the boundaries of what’s possible with this revolutionary AI technique.




Unlocking Data with Generative AI and RAG

Enhance generative AI systems by integrating internal data with large language models using RAG

Keith Bourne

Unlocking Data with Generative AI and RAG

Copyright © 2024 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Niranjan Naikwadi

Publishing Product Manager: Sanjana Gupta

Book Project Manager: Aparna Ravikumar Nair

Senior Editor: Tazeen Shaikh

Technical Editor: Sweety Pagaria

Copy Editor: Safis Editing

Proofreader: Tazeen Shaikh

Indexer: Hemangini Bari

Production Designer: Alishon Mendonca

DevRel Marketing Coordinator: Vinishka Kalra

First published: September 2024

Production reference: 1130924

Published by Packt Publishing Ltd.

Grosvenor House

11 St Paul’s Square

Birmingham

B3 1RB, UK

ISBN 978-1-83588-790-5

www.packtpub.com

To Rylee, Aubri, Taryn, Papa, Lukie, Phishy, Remy, Mitchy, and, especially, Lindsay, for making it all worthwhile.

And to my mom, Barbara – hey Mom, can you believe it, I wrote a book! You always encouraged me to write. I know you’d be proud. I love you and miss you!

– Keith Bourne

Foreword

The rise of Generative AI has fundamentally changed the landscape of technology, opening up new avenues for building intelligent applications. What once required a deep understanding of AI algorithms and a PhD is now accessible to software developers and engineers across the globe. AI is increasingly becoming a commodity, making it possible for anyone with the right tools and knowledge to build powerful, transformative applications.

I first met Keith through our shared passion for building and scaling LLM applications. As the founder of ragas, an open source evaluation library for LLMs, I was intrigued by Keith’s work in leading the Generative AI revolution at Johnson & Johnson. We quickly found common ground in our discussions about the challenges and opportunities in this space, and I was impressed by Keith’s practical insights and deep expertise.

Today, developers face an overwhelming array of tutorials, tools, and frameworks for Generative AI. This confusion is typical of any disruptive technology; the noise in the market can make it hard to discern the best path forward. Keith’s book offers a much-needed clear direction, guiding developers through the complexities of Generative AI and helping them navigate the choices available.

This book is an invaluable resource for software and application developers who are eager to start building LLM applications. It stands out as one of the most state-of-the-art guides available, covering the latest frameworks and libraries, such as LangChain and ragas. Whether you’re interested in prototyping new ideas or scaling LLM applications to production, the book provides both theoretical foundations and practical advice on creating robust, scalable AI systems, including emerging concepts such as Retrieval-Augmented Generation (RAG) and agentic workflows.

I am confident that this book will become a go-to resource for anyone looking to harness the power of Generative AI. Keith’s clear, insightful guidance will not only help you understand the potential of these technologies but also empower you to build the next generation of intelligent applications.

Shahul Es

Co-founder and CTO at ragas.io

Contributors

About the author

Keith Bourne is a senior Generative AI data scientist at Johnson & Johnson. He has over a decade of experience in machine learning and AI working across diverse projects in companies that range in size from start-ups to Fortune 500 companies. With an MBA from Babson College and a master’s in applied data science from the University of Michigan, he has developed several sophisticated modular Generative AI platforms from the ground up, using numerous advanced techniques, including RAG, AI agents, and foundational model fine-tuning. Keith seeks to share his knowledge with a broader audience, aiming to demystify the complexities of RAG for organizations looking to leverage this promising technology.

I would like to thank my incredible wife, Lindsay Avila, whose unwavering love, support, and encouragement have been the foundation of my success. Your belief in me has been a guiding light in my career, and I am forever grateful for your patience, understanding, and the countless sacrifices you have made. This book is a testament to your dedication and the strength you give me every day. I love you more than words can express.

About the reviewers

Prasad Gandham has over 18 years of experience in business transformation, helping enterprise customers achieve engineering goals and operational efficiency. He excels in delivering enterprise technology solutions, from planning to implementation and maintenance. Prasad has led the development of numerous workflow and dashboard products and is skilled in coordinating dispersed teams for timely delivery and consistent productivity. His expertise includes multi-cloud environments like Azure and AWS, leading large-scale migrations, and driving cloud adoption strategies. Prasad has published technical blogs, spoken at cloud events, and guided technical executives through organizational changes to maximize cloud value.

Shubhendu Satsangi is a product manager by profession. He graduated from Delhi University with an MBA and has worked with Microsoft for more than five years. His work is focused on Azure AI Platform, where he manages multiple areas, such as Azure OpenAI, Azure Speech, Azure machine translation, and other traditional AI services. He also advises multiple Azure customers on Generative AI implementations. He has worked on building several end-to-end GenAI infrastructures with an emphasis on RAG and other customization techniques.

Table of Contents

Preface

Part 1 – Introduction to Retrieval-Augmented Generation (RAG)

1

What Is Retrieval-Augmented Generation (RAG)

Understanding RAG – Basics and principles

Advantages of RAG

Challenges of RAG

RAG vocabulary

LLM

Prompting, prompt design, and prompt engineering

LangChain and LlamaIndex

Inference

Context window

Fine-tuning – full-model fine-tuning (FMFT) and parameter-efficient fine-tuning (PEFT)

Vector store or vector database?

Vectors, vectors, vectors!

Vectors

Implementing RAG in AI applications

Comparing RAG with conventional generative AI

Comparing RAG with model fine-tuning

The architecture of RAG systems

Summary

2

Code Lab – An Entire RAG Pipeline

Technical requirements

No interface!

Setting up a large language model (LLM) account

Installing the necessary packages

Imports

OpenAI connection

Indexing

Web loading and crawling

Splitting

Embedding and indexing the chunks

Retrieval and generation

Prompt templates from the LangChain Hub

Formatting a function so that it matches the next step’s input

Defining your LLM

Setting up a LangChain chain using LCEL

Submitting a question for RAG

Final output

Complete code

Summary

3

Practical Applications of RAG

Technical requirements

Customer support and chatbots with RAG

Technical support

Financial services

Healthcare

RAG for automated reporting

How RAG is utilized with automated reporting

Transforming unstructured data into actionable insights

Enhancing decision-making and strategic planning

E-commerce support

Dynamic online product descriptions

Product recommendations for e-commerce sites

Utilizing knowledge bases with RAG

Searchability and utility of internal knowledge bases

Expanding and enhancing private data with general external knowledge bases

Innovation scouting and trend analysis

Leveraging RAG for personalized recommendations in marketing communications

Training and education

Code lab 3.1 – Adding sources to your RAG

Summary

4

Components of a RAG System

Technical requirements

Key component overview

Indexing

Retrieval and generation

Retrieval focused steps

Generation stage

Prompting

Defining your LLM

UI

Pre-processing

Post-processing

Output interface

Evaluation

Summary

References

5

Managing Security in RAG Applications

Technical requirements

How RAG can be leveraged as a security solution

Limiting data

Ensuring the reliability of generated content

Maintaining transparency

RAG security challenges

LLMs as black boxes

Privacy concerns and protecting user data

Hallucinations

Red teaming

Common areas to target with red teaming

Resources for building your red team plan

Code lab 5.1 – Securing your keys

Code lab 5.2 – Red team attack!

Code lab 5.3 – Blue team defend!

Summary

Part 2 – Components of RAG

6

Interfacing with RAG and Gradio

Technical requirements

Why Gradio?

Benefits of using Gradio

Limitations to using Gradio

Code lab – Adding a Gradio interface

Summary

7

The Key Role Vectors and Vector Stores Play in RAG

Technical requirements

Fundamentals of vectors in RAG

What is the difference between embeddings and vectors?

What is a vector?

Vector dimensions and size

Where vectors lurk in your code

Vectorization occurs in two places

Vector databases/stores store and contain vectors

Vector similarity compares your vectors

The amount of text you vectorize matters!

Not all semantics are created equal!

Code lab 7.1 – Common vectorization techniques

Term frequency-inverse document frequency (TF-IDF)

Word2Vec, Sentence2Vec, and Doc2Vec

Bidirectional encoder representations from transformers

OpenAI and other similar large-scale embedding services

Factors in selecting a vectorization option

Quality of the embedding

Cost

Network availability

Speed

Embedding compatibility

Getting started with vector stores

Data sources (other than vector)

Vector stores

Common vector store options

Choosing a vector store

Summary

8

Similarity Searching with Vectors

Technical requirements

Distance metrics versus similarity algorithms versus vector search

Vector space

Semantic versus keyword search

Semantic search example

Code lab 8.1 – Semantic distance metrics

Euclidean distance (L2)

Dot product (also called inner product)

Cosine distance

Different search paradigms – sparse, dense, and hybrid

Dense search

Sparse search

Hybrid search

Code lab 8.2 – Hybrid search with a custom function

Code lab 8.3 – Hybrid search with LangChain’s EnsembleRetriever to replace our custom function

Semantic search algorithms

k-NN

ANN

Enhancing search with indexing techniques

Vector search options

pgvector

Elasticsearch

FAISS

Google Vertex AI Vector Search

Azure AI Search

Approximate Nearest Neighbors Oh Yeah

Pinecone

Weaviate

Chroma

Summary

9

Evaluating RAG Quantitatively and with Visualizations

Technical requirements

Evaluate as you build

Evaluate after you deploy

Evaluation helps you get better

Standardized evaluation frameworks

Embedding model benchmarks

Vector store and vector search benchmarks

LLM benchmarks

Final thoughts on standardized evaluation frameworks

What is the ground truth?

How to use the ground truth?

Generating the ground truth

Human annotation

Expert knowledge

Crowdsourcing

Synthetic ground truth

Code lab 9.1 – ragas

Setting up LLMs/embedding models

Generating the synthetic ground truth

Analyzing the ragas results

Retrieval evaluation

Generation evaluation

End-to-end evaluation

Other component-wise evaluation

Ragas founder insights

Additional evaluation techniques

Bilingual Evaluation Understudy (BLEU)

Recall-Oriented Understudy for Gisting Evaluation (ROUGE)

Semantic similarity

Human evaluation

Summary

References

10

Key RAG Components in LangChain

Technical requirements

Code lab 10.1 – LangChain vector store

Vector stores, LangChain, and RAG

Code lab 10.2 – LangChain Retrievers

Retrievers, LangChain, and RAG

Code lab 10.3 – LangChain LLMs

LLMs, LangChain, and RAG

Extending the LLM capabilities

Summary

11

Using LangChain to Get More from RAG

Technical requirements

Code lab 11.1 – Document loaders

Code lab 11.2 – Text splitters

Character text splitter

Recursive character text splitter

Semantic chunker

Code lab 11.3 – Output parsers

String output parser

JSON output parser

Summary

Part 3 – Implementing Advanced RAG

12

Combining RAG with the Power of AI Agents and LangGraph

Technical requirements

Fundamentals of AI agents and RAG integration

Living in an AI agent world

LLMs as the agents’ brains

Graphs, AI agents, and LangGraph

Code lab 12.1 – adding a LangGraph agent to RAG

Tools and toolkits

Agent state

Core concepts of graph theory

Nodes and edges in our agent

Cyclical graph setup

Summary

13

Using Prompt Engineering to Improve RAG Efforts

Technical requirements

Prompt parameters

Temperature

Top-p

Seed

Take your shot

Prompting, prompt design, and prompt engineering revisited

Prompt design versus engineering approaches

Fundamentals of prompt design

Adapting prompts for different LLMs

Code lab 13.1 – Custom prompt template

Code lab 13.2 – Prompting options

Iterating

Iterating the tone

Shorten the length

Changing the focus

Summarizing

Summarizing with a focus

Extract instead of summarize

Inference

Extracting key data

Inferring topics

Transformation

Expansion

Summary

14

Advanced RAG-Related Techniques for Improving Results

Technical requirements

Naïve RAG and its limitations

Hybrid RAG/multi-vector RAG for improved retrieval

Re-ranking in hybrid RAG

Code lab 14.1 – Query expansion

Code lab 14.2 – Query decomposition

Code lab 14.3 – MM-RAG

Multi-modal

Benefits of multi-modal

Multi-modal vector embeddings

Images are not just “pictures”

Introducing MM-RAG in code

Other advanced RAG techniques to explore

Indexing improvements

Retrieval

Post-retrieval/generation

Entire RAG pipeline coverage

Summary

Index

Other Books You May Enjoy

Part 1 – Introduction to Retrieval-Augmented Generation (RAG)

In this part, you will be introduced to retrieval-augmented generation (RAG), covering its basics, advantages, challenges, and practical applications across various industries. You will learn how to implement a complete RAG pipeline using Python, manage security risks, and build interactive applications with Gradio. We will also explore the key components of RAG systems, including indexing, retrieval, generation, and evaluation, and demonstrate how to optimize each stage for enhanced performance and user experience.

This part contains the following chapters:

Chapter 1, What Is Retrieval-Augmented Generation (RAG)
Chapter 2, Code Lab – An Entire RAG Pipeline
Chapter 3, Practical Applications of RAG
Chapter 4, Components of a RAG System
Chapter 5, Managing Security in RAG Applications

1

What Is Retrieval-Augmented Generation (RAG)

The field of artificial intelligence (AI) is rapidly evolving. At the center of it all is generative AI. At the center of generative AI is retrieval-augmented generation (RAG). RAG is emerging as a significant addition to the generative AI toolkit, harnessing the intelligence and text generation capabilities of large language models (LLMs) and integrating them with a company’s internal data. This offers a method to enhance organizational operations significantly. This book focuses on numerous aspects of RAG, examining its role in augmenting the capabilities of LLMs and leveraging internal corporate data for strategic advantage.

As this book progresses, we will outline the potential of RAG in the enterprise, suggesting how it can make AI applications more responsive and smarter, aligning them with your organizational objectives. RAG is well-positioned to become a key facilitator of customized, efficient, and insightful AI solutions, bridging the gap between generative AI’s potential and your specific business needs. Our exploration of RAG will encourage you to unlock the full potential of your corporate data, paving the way for you to enter the era of AI-driven innovation.

In this chapter, we will cover the following topics:

The basics of RAG and how it combines LLMs with a company’s private data
The key advantages of RAG, such as improved accuracy, customization, and flexibility
The challenges and limitations of RAG, including data quality and computational complexity
Important RAG vocabulary terms, with an emphasis on vectors and embeddings
Real-world examples of RAG applications across various industries
How RAG differs from conventional generative AI and model fine-tuning
The overall architecture and stages of a RAG system from user and technical perspectives

By the end of this chapter, you will have a solid foundation in the core RAG concepts and understand the immense potential it offers organizations so that they can extract more value from their data and empower their LLMs. Let’s get started!

Understanding RAG – Basics and principles

Modern-day LLMs are impressive, but they have never seen your company’s private data (hopefully!). This means the ability of an LLM to help your company fully utilize its data is very limited. This very large barrier has given rise to the concept of RAG, where you are using the power and capabilities of the LLM but combining it with the knowledge and data contained within your company’s internal data repositories. This is the primary motivation for using RAG: to make new data available to the LLM and significantly increase the value you can extract from that data.

Beyond internal data, RAG is also useful in cases where the LLM has not been trained on the data, even if it is public, such as the most recent research papers or articles about a topic that is strategic to your company. In both cases, we are talking about data that was not present during the training of the LLM. You can have the latest LLM trained on the most tokens ever, but if that data was not present for training, then the LLM will be at a disadvantage in helping you reach your full productivity.

Ultimately, this highlights the fact that, for most organizations, it is a central need to connect new data to an LLM. RAG is the most popular paradigm for doing this. This book focuses on showing you how to set up a RAG application with your data, as well as how to get the most out of it in various situations. We intend to give you an in-depth understanding of RAG and its importance in leveraging an LLM within the context of a company’s private or specific data needs.

Now that you understand the basic motivations behind implementing RAG, let’s review some of the advantages of using it.

Advantages of RAG

Some of the potential advantages of using RAG include improved accuracy and relevance, customization, flexibility, and expanding the model’s knowledge beyond the training data. Let’s take a closer look:

Improved accuracy and relevance: RAG can significantly enhance the accuracy and relevance of responses that are generated by LLMs. RAG fetches and incorporates specific information from a database or dataset, typically in real time, and ensures that the output is based on both the model’s pre-existing knowledge and the most current and relevant data that you are providing directly.

Customization: RAG allows you to customize and adapt the model’s knowledge to your specific domain or use case. By pointing RAG to databases or datasets directly relevant to your application, you can tailor the model’s outputs so that they align closely with the information and style that matters most for your specific needs. This customization enables the model to provide more targeted and useful responses.

Flexibility: RAG provides flexibility in terms of the data sources that the model can access. You can apply RAG to various structured and unstructured data, including databases, web pages, documents, and more. This flexibility allows you to leverage diverse information sources and combine them in novel ways to enhance the model’s capabilities. Additionally, you can update or swap out the data sources as needed, enabling the model to adapt to changing information landscapes.

Expanding model knowledge beyond training data: LLMs are limited by the scope of their training data. RAG overcomes this limitation by enabling models to access and utilize information that was not included in their initial training sets. This effectively expands the knowledge base of the model without the need for retraining, making LLMs more versatile and adaptable to new domains or rapidly evolving topics.

Removing hallucinations: The LLM is a key component within the RAG system. LLMs have the potential to provide wrong information, also known as hallucinations. These hallucinations can manifest in several ways, such as made-up facts, incorrect facts, or even nonsensical verbiage. Often, the hallucination is worded in a way that can be very convincing, causing it to be difficult to identify. A well-designed RAG application can remove hallucinations much more easily than when directly using an LLM.

With that, we’ve covered the key advantages of implementing RAG in your organization. Next, let’s discuss some of the challenges you might face.

Challenges of RAG

There are some challenges to using RAG as well, which include dependency on the quality of the internal data, the need for data manipulation and cleaning, computational overhead, more complex integrations, and the potential for information overload. Let’s review these challenges and gain a better understanding of how they impact RAG pipelines and what can be done about them:

Dependency on data quality: When talking about how data can impact an AI model, the saying in data science circles is garbage in, garbage out. This means that if you give a model bad data, it will give you bad results. RAG is no different. The effectiveness of RAG is directly tied to the quality of the data it retrieves. If the underlying database or dataset contains outdated, biased, or inaccurate information, the outputs generated by RAG will likely suffer from the same issues.

Need for data manipulation and cleaning: Data buried in the recesses of a company often holds a lot of value, but it is rarely in good, accessible shape. For example, data from PDF-based customer statements needs a lot of massaging before it can be put into a format that is useful to a RAG pipeline.

Computational overhead: A RAG pipeline introduces a host of new computational steps into the response generation process, including data retrieval, processing, and integration. LLMs are getting faster every day, but even the fastest response can take more than a second, and some can take several seconds. If you combine that with other data processing steps, and possibly multiple LLM calls, the result can be a very significant increase in the time it takes to receive a response. This all leads to increased computational overhead, affecting the efficiency and scalability of the entire system. As with any other IT initiative, an organization must balance the benefits of enhanced accuracy and customization against the resource requirements and potential latency introduced by these additional processes.

Data storage explosion; complexity in integration and maintenance: Traditionally, your data resides in a data source that is queried in various ways to be made available to your internal and external systems. With RAG, however, your data resides in multiple forms and locations, such as vectors in a vector database, that represent the same data but in a different format. Add in the complexity of connecting these various data sources to LLMs and relevant technical mechanisms such as vector searches, and you have a significant increase in complexity. This increased complexity can be resource-intensive. Maintaining this integration over time, especially as data sources evolve or expand, adds even more complexity and cost. Organizations need to invest in technical expertise and infrastructure to leverage RAG capabilities effectively while accounting for the rapid increase in complexity these systems bring with them.

Potential for information overload: RAG-based systems can pull in too much information. It is just as important to implement mechanisms to address this issue as it is to handle times when not enough relevant information is found. Determining the relevance and importance of retrieved information to be included in the final output requires sophisticated filtering and ranking mechanisms. Without these, the quality of the generated content could be compromised by an excess of unnecessary or marginally relevant details.

Hallucinations: While we listed removing hallucinations as an advantage of using RAG, hallucinations pose one of the biggest challenges to RAG pipelines if they are not dealt with properly. A well-designed RAG application must take measures to identify and remove hallucinations and undergo significant testing before the final output text is provided to the end user.

High levels of complexity within RAG components: A typical RAG application tends to have a high level of complexity, with many components that need to be optimized for the overall application to function properly. The components can interact with each other in several ways, often with many more steps than the basic RAG pipeline you start with. Every component within the pipeline needs significant amounts of trial and testing, including your prompt design and engineering, the LLMs you use and how you use them, the various algorithms and their parameters for retrieval, the interface you use to access your RAG application, and numerous other aspects that you will need to add over the course of your development.

In this section, we explored the key advantages of implementing RAG in your organization, including improved accuracy and relevance, customization, flexibility, and the ability to expand the model’s knowledge beyond its initial training data. We also discussed some of the challenges you might face when deploying RAG, such as dependency on data quality, the need for data manipulation and cleaning, increased computational overhead, complexity in integration and maintenance, and the potential for information overload. Understanding these benefits and challenges provides a foundation for diving deeper into the core concepts and vocabulary used in RAG systems.

To understand the approaches we will introduce, you will need a good understanding of the vocabulary used to discuss these approaches. In the following section, we will familiarize ourselves with some of the foundational concepts so that you can better understand the various components and techniques involved in building effective RAG pipelines.

RAG vocabulary

Now is as good a time as any to review some vocabulary that should help you become familiar with the various concepts in RAG. In the following subsections, we will familiarize ourselves with some of this vocabulary, including LLMs, prompting concepts, inference, context windows, fine-tuning approaches, vector databases, and vectors/embeddings. This is not an exhaustive list, but understanding these core concepts should help you understand everything else we will teach you about RAG in a more effective way.

LLM

Most of this book will deal with LLMs. LLMs are generative AI technologies that focus on generating text. We will keep things simple by concentrating on the type of model that most RAG pipelines use, the LLM. However, we would like to clarify that while we will focus primarily on LLMs, RAG can also be applied to other types of generative models, such as those for images, audio, and videos. We will focus on these other types of models and how they are used in RAG in Chapter 14.

Some popular examples of LLMs are the OpenAI ChatGPT models, the Meta Llama models, Google’s Gemini models, and Anthropic’s Claude models.

Prompting, prompt design, and prompt engineering

These terms are sometimes used interchangeably, but technically, while they all have to do with prompting, they do have different meanings:

Prompting is the act of sending a query or prompt to an LLM.

Prompt design refers to the strategy you implement to design the prompt you will send to the LLM. Many different prompt design strategies work in different scenarios. We will review many of these in Chapter 13.

Prompt engineering focuses more on the technical aspects surrounding the prompt that you use to improve the outputs from the LLM. For example, you may break up a complex query into two or three different LLM interactions, engineering it better to achieve superior results. We will also review prompt engineering in Chapter 13. The short sketch after this list illustrates how these three ideas differ in code.
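
To make the distinction concrete, here is a minimal Python sketch (not one of the book’s code labs) that assumes the openai package (v1 or later) and an OPENAI_API_KEY environment variable; the model name and prompts are purely illustrative. The first call is plain prompting, the structured template is prompt design, and the two chained calls show a simple prompt engineering pattern:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str) -> str:
    """Prompting: send a single query to the LLM and return its text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Prompt design: structure the prompt itself (role, constraints, output format).
designed_prompt = (
    "You are a helpful analyst. Answer in exactly three bullet points.\n"
    "Question: What are the main benefits of RAG for an enterprise?"
)
print(ask(designed_prompt))

# Prompt engineering: break one complex query into two chained LLM calls.
subtopics = ask("List the three biggest cost drivers of a RAG pipeline, comma-separated.")
summary = ask(f"Write a one-paragraph executive summary of these cost drivers: {subtopics}")
print(summary)
```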

LangChain and LlamaIndex

This book will focus on using LangChain as the framework for building our RAG pipelines. LangChain is an open source framework that supports not just RAG but any development effort that uses LLMs within a pipeline approach. With over 15 million monthly downloads, LangChain is the most popular generative AI development framework. It supports RAG particularly well, providing a modular and flexible set of tools that make RAG development significantly more efficient than working without a framework.

While LangChain is currently the most popular framework for developing RAG pipelines, LlamaIndex is a leading alternative to LangChain, with similar capabilities in general. LlamaIndex is known for its focus on search and retrieval tasks and may be a good option if you require advanced search or need to handle large datasets.

Many other options focus on various niches. Once you have gotten familiar with building RAG pipelines, be sure to look at some of the other options to see whether another framework works better for your particular project.

Inference

We will use the term inference from time to time. Generally, this refers to the process of a pre-trained LLM generating outputs or predictions based on given inputs. For example, when you ask ChatGPT a question, the process it goes through to provide you with a response is called inference.

Context window

A context window, in the context of LLMs, refers to the maximum number of tokens (words, sub-words, or characters) that the model can process in a single pass. It determines the amount of text the model can see or attend to at once when making predictions or generating responses.

The context window size is a key parameter of the model architecture and is typically fixed during model training. It directly relates to the input size of the model as it sets an upper limit on the number of tokens that can be fed into the model at a time.

For example, if a model has a context window size of 4,096 tokens, it means that the model can process and generate sequences of up to 4,096 tokens. When processing longer texts, such as documents or conversations, the input needs to be divided into smaller segments that fit within the context window. This is often done using techniques such as sliding windows or truncation.
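
As a rough illustration of how a long document gets segmented to fit a context window, here is a minimal sketch assuming the tiktoken tokenizer package; the 512-token window and 64-token overlap are arbitrary illustrative values:

```python
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI models (an assumption for this example)
encoding = tiktoken.get_encoding("cl100k_base")

long_document = "Your long internal document text goes here... " * 200

tokens = encoding.encode(long_document)
print(f"Document length: {len(tokens)} tokens")

# Sliding window: split the token sequence into overlapping segments that each
# fit inside the model's context window (here an illustrative 512-token window).
window_size = 512
overlap = 64
segments = []
for start in range(0, len(tokens), window_size - overlap):
    window = tokens[start:start + window_size]
    segments.append(encoding.decode(window))

print(f"Split into {len(segments)} overlapping segments")
```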

The size of the context window has implications for the model’s ability to understand and maintain long-range dependencies and context. Models with larger context windows can capture and utilize more contextual information when generating responses, which can lead to more coherent and contextually relevant outputs. However, increasing the context window size also increases the computational resources required to train and run the model.

In the context of RAG, the context window size is essential because it determines how much information from the retrieved documents can be effectively utilized by the model when generating the final response. Recent advancements in language models have led to the development of models with significantly larger context windows, enabling them to process and retain more information from the retrieved sources. See Table 1.1 for the context windows of many popular LLMs, both closed and open source:

LLM | Context Window (Tokens)
ChatGPT-3.5 Turbo 0613 (OpenAI) | 4,096
Llama 2 (Meta) | 4,096
Llama 3 (Meta) | 8,000
ChatGPT-4 (OpenAI) | 8,192
ChatGPT-3.5 Turbo 0125 (OpenAI) | 16,385
ChatGPT-4.0-32k (OpenAI) | 32,000
Mistral (Mistral AI) | 32,000
Mixtral (Mistral AI) | 32,000
DBRX (Databricks) | 32,000
Gemini 1.0 Pro (Google) | 32,000
ChatGPT-4.0 Turbo (OpenAI) | 128,000
ChatGPT-4o (OpenAI) | 128,000
Claude 2.1 (Anthropic) | 200,000
Claude 3 (Anthropic) | 200,000
Gemini 1.5 Pro (Google) | 1,000,000

Table 1.1 – Different context windows for LLMs

Figure 1.1, which is based on Table 1.1, shows that Gemini 1.5 Pro’s context window is far larger than those of the other models.

Figure 1.1 – Different context windows for LLMs

Note that Figure 1.1 shows models that have generally aged from right to left, meaning the older models tended to have smaller context windows, with the newest models having larger context windows. This trend is likely to continue, pushing the typical context window larger as time progresses.

Fine-tuning – full-model fine-tuning (FMFT) and parameter-efficient fine-tuning (PEFT)

FMFT is where you take a foundation model and train it further to gain new capabilities. You could simply give it new knowledge for a specific domain, or you could give it a skill, such as being a conversational chatbot. FMFT updates all the parameters and biases in the model.

PEFT, on the other hand, is a type of fine-tuning where you focus only on specific parts of the parameters or biases when you fine-tune the model, but with a similar goal as general fine-tuning. The latest research in this area shows that you can achieve similar results to FMFT with far less cost, time commitment, and data.
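
As a rough illustration of what PEFT looks like in practice, here is a minimal sketch using LoRA via the Hugging Face transformers and peft packages; the base model name and the target module names are illustrative assumptions and vary by model:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; substitute any causal LM you have access to.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# LoRA trains small low-rank adapter matrices instead of all model weights.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # rank of the adapter matrices
    lora_alpha=16,      # scaling factor applied to the adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names vary by model
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of the full model's parameters
```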

While this book does not focus on fine-tuning, it is a very valid strategy to try to use a model fine-tuned with your data to give it more knowledge from your domain or to give it more of a voice from your domain. For example, you could train it to talk more like a scientist than a generic foundation model, if you’re using this in a scientific field. Alternatively, if you are developing in a legal field, you may want it to sound more like a lawyer.

Fine-tuning also helps the LLM to understand your company’s data better, making it better at generating an effective response during the RAG process. For example, if you have a scientific company, you might fine-tune a model with scientific information and use it for a RAG application that summarizes your research. This may improve your RAG application’s output (the summaries of your research) because your fine-tuned model understands your data better and can provide a more effective summary.

Vector store or vector database?

Both! All vector databases are vector stores, but not all vector stores are vector databases. OK, while you get out your chalkboard to draw a Venn diagram, I will continue to explain this statement.

There are ways to store vectors that are not full databases. They are simply storage devices for vectors. So, to encompass all possible ways to store vectors, LangChain calls them all vector stores. Let’s do the same! Just know that not all the vector stores that LangChain connects with are officially considered vector databases, but in general, most of them are and many people refer to all of them as vector databases, even when they are not technically full databases from a functionality standpoint. Phew – glad we cleared that up!
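
To make the idea of a vector store concrete, here is a minimal sketch assuming the langchain-community, langchain-openai, and chromadb packages plus an OpenAI API key; the sample texts are illustrative, and later chapters build this out properly:

```python
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# A handful of internal "documents" to store as vectors (illustrative text).
texts = [
    "Our return policy allows refunds within 30 days of purchase.",
    "The Q3 sales report showed a 12% increase in the northeast region.",
    "Support tickets are triaged within four business hours.",
]

# Chroma is one of the vector stores LangChain connects with.
vector_store = Chroma.from_texts(texts=texts, embedding=OpenAIEmbeddings())

# Retrieve the most semantically similar document to a query.
results = vector_store.similarity_search("How long do customers have to return an item?", k=1)
print(results[0].page_content)
```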

Vectors, vectors, vectors!

Vectors are mathematical representations of your data. They are often referred to as embeddings when talking specifically about natural language processing (NLP) and LLMs. Vectors are one of the most important concepts to understand, and many different parts of a RAG pipeline utilize them.

We just covered many key vocabulary terms that will be important for you to understand the rest of this book. Many of these concepts will be expanded upon in future chapters. In the next section, we will continue to discuss vectors in further depth. And beyond that, we will spend Chapters 7 and 8 going over vectors and how they are used to find similar content.

Vectors

It could be argued that understanding vectors and all the ways they are used in RAG is the most important part of this entire book. As mentioned previously, vectors are simply the mathematical representations of your external data, and they are often referred to as embeddings. These representations capture semantic information in a format that can be processed by algorithms, facilitating tasks such as similarity search, which is a crucial step in the RAG process.

Vectors typically have a specific dimension based on how many numbers are represented by them. For example, this is a four-dimensional vector:

[0.123, 0.321, 0.312, 0.231]

If you didn’t know we were talking about vectors and you saw this in Python code, you might recognize it as a list of four floating-point numbers, and you wouldn’t be too far off. However, when working with vectors in Python, you will typically want to represent them as NumPy arrays rather than lists. NumPy arrays are generally more machine-learning-friendly because they are optimized to be processed much faster and more efficiently than Python lists, and they are broadly recognized as the de facto representation of embeddings across machine learning packages such as SciPy, pandas, scikit-learn, TensorFlow, Keras, PyTorch, and many others. NumPy also enables you to perform vectorized math directly on the array, such as element-wise operations, without having to write loops or use other approaches you might need with a different type of sequence.
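
Here is a minimal NumPy sketch of these ideas; the two tiny vectors are illustrative stand-ins for real embeddings, which have hundreds or thousands of dimensions:

```python
import numpy as np

# Two tiny illustrative "embeddings" (real ones have hundreds or thousands of dimensions).
vector_a = np.array([0.123, 0.321, 0.312, 0.231])
vector_b = np.array([0.110, 0.334, 0.290, 0.250])

# Element-wise (vectorized) math works directly on NumPy arrays -- no loops needed.
difference = vector_a - vector_b

# Cosine similarity is a common way to compare two embedding vectors.
cosine_similarity = np.dot(vector_a, vector_b) / (
    np.linalg.norm(vector_a) * np.linalg.norm(vector_b)
)
print(f"Cosine similarity: {cosine_similarity:.4f}")
```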

When working with vectors in RAG, they often have hundreds or thousands of dimensions, which refers to the number of floating-point values present in the vector. Higher dimensionality can capture more detailed semantic information, which is crucial for accurately matching query inputs with relevant documents or data in RAG applications.

In Chapter 7, we will cover the key role vectors and vector databases play in RAG implementation. Then, in Chapter 8, we will dive more into the concept of similarity searches, which utilize vectors to search much faster and more efficiently. These are key concepts that will help you gain a much deeper understanding of how to better implement a RAG pipeline.

Understanding vectors can be a crucial underlying concept to understand how to implement RAG, but how is RAG used in practical applications in the enterprise? We will discuss these practical AI applications of RAG in the next section.

Implementing RAG in AI applications

RAG is rapidly becoming a cornerstone of generative AI platforms in the corporate world. RAG combines the power of retrieving internal or new data with generative language models to enhance the quality and relevance of the generated text. This technique can be particularly useful for companies across various industries to improve their products, services, and operational efficiencies. The following are some examples of how RAG can be used:

Customer support and chatbots: Chatbots can exist without RAG, but when integrated with RAG, they can be connected to past customer interactions, FAQs, support documents, and anything else specific to that customer.

Technical support: With better access to customer history and information, RAG-enhanced chatbots can provide a significant improvement over current technical support chatbots.

Automated reporting: RAG can assist in creating initial drafts or summarizing existing articles, research papers, and other types of unstructured data into more digestible formats.

E-commerce support: For e-commerce companies, RAG can help generate dynamic product descriptions and user content, as well as make better product recommendations.

Utilizing knowledge bases: RAG improves the searchability and utility of both internal and general knowledge bases by generating summaries, providing direct answers to queries, and retrieving relevant information across various domains such as legal, compliance, research, medical, academia, patents, and technical documents.

Innovation scouting: This is like searching general knowledge bases but with a focus on innovation. With this, companies can use RAG to scan and summarize information from quality sources to identify trends and potential areas for innovation that are relevant to that company’s specialization.

Training and education: RAG can be used by educational organizations and corporate training programs to generate or customize learning materials based on the specific needs and knowledge levels of the learners. With RAG, a much deeper level of internal knowledge from the organization can be incorporated into the educational curriculum in ways that are highly customized to the individual or role.

These are just a few of the ways organizations are using RAG right now to improve their operations. We will dive into each of these areas in more depth in Chapter 3, helping you understand how you can implement all these game-changing initiatives in multiple places in your company.

You might be wondering, “If I am using an LLM such as ChatGPT to answer my questions in my company, does that mean my company is using RAG already?”

The answer is “No.”

If you just log in to ChatGPT and ask questions, that is not the same as implementing RAG. Both ChatGPT and RAG are forms of generative AI, and they are sometimes used together, but they are two different concepts. In the next section, we will discuss the differences between generative AI and RAG.

Comparing RAG with conventional generative AI

Conventional generative AI has already proven to be a revolutionary change for companies, helping their employees reach new levels of productivity. LLMs such as ChatGPT are assisting users with a rapidly growing list of applications that include writing business plans, writing and improving code, writing marketing copy, and even providing healthier recipes for a specific diet. Ultimately, much of what users are doing simply gets done faster.

However, conventional generative AI does not know what it does not know. And that includes most of the internal data in your company. Can you imagine what you could do with all the benefits mentioned previously, but combined with all the data within your company – about everything your company has ever done, about your customers and all their interactions, or about all your products and services combined with a knowledge of what a specific customer’s needs are? You do not have to imagine it – that is what RAG does!

Before RAG, most of the services you saw that connected customers or employees with the data resources of the company were just scratching the surface of what is possible compared to if they could access all the data in the company. With the advent of RAG and generative AI in general, corporations are on the precipice of something really, really big.

Another area you might confuse RAG with is the concept of fine-tuning a model. Let’s discuss what the differences are between these types of approaches.

Comparing RAG with model fine-tuning

LLMs can be adapted to your data in two ways:

Fine-tuning: With fine-tuning, you are adjusting the weights and/or biases that define the model’s intelligence based on new training data. This directly impacts the model, permanently changing how it will interact with new inputs.

Input/prompts: This is where you leave the model as it is and use the prompt/input to introduce new knowledge that the LLM can act upon, as the sketch after this list shows.
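
Here is a minimal sketch of the input/prompts route, assuming the openai package (v1 or later) and an OPENAI_API_KEY environment variable; the retrieved context is hardcoded here to stand in for a real vector search step:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stand-in for a real retrieval step (vector search over your internal data).
retrieved_context = (
    "Policy doc 42: Effective June 2024, enterprise customers receive "
    "priority support with a two-hour response SLA."
)

question = "What is the support SLA for enterprise customers?"

# The new knowledge rides along in the prompt, so no fine-tuning is required.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{retrieved_context}\n\n"
    f"Question: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```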

Why not use fine-tuning in all situations? Once you have introduced the new knowledge, the LLM will always have it! It is also how the model was created – by being trained with data, right? That sounds right in theory, but in practice, fine-tuning has been more reliable in teaching a model specialized tasks (such as teaching a model how to converse in a certain way), and less reliable for factual recall.

The reason is complicated, but in general, a model’s knowledge of facts is like a human’s long-term memory. If you memorize a long passage from a speech or book and then try to recall it a few months later, you will likely still understand the context of the information, but you may forget specific details. On the other hand, adding knowledge through the input of the model is like our short-term memory, where the facts, details, and even the order of the wording are all very fresh and available for recall. It is this latter scenario that lends itself better to situations where you need reliable factual recall. And given how much more expensive fine-tuning can be, this makes it that much more important to consider RAG.