Building LLM Powered Applications - Valentina Alto - E-Book


Valentina Alto

Description

Building LLM Powered Applications delves into the fundamental concepts, cutting-edge technologies, and practical applications of LLMs, and charts the emergence of large foundation models (LFMs), which extend the boundaries of AI capabilities.
The book begins with an in-depth introduction to LLMs. We then explore various mainstream architectural frameworks, including both proprietary models (GPT-3.5/4) and open-source models (Falcon LLM), and analyze their unique strengths and differences. Moving ahead, focusing on LangChain, a lightweight Python-based framework, we guide you through the process of creating intelligent agents capable of retrieving information from unstructured data and engaging with structured data using LLMs and powerful toolkits. Furthermore, the book ventures into the realm of LFMs, which transcend language modeling to encompass various AI tasks and modalities, such as vision and audio.
Whether you are a seasoned AI expert or a newcomer to the field, this book is your roadmap to unlock the full potential of LLMs and forge a new era of intelligent machines.

You can read this e-book in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 442

Year of publication: 2024




Building LLM Powered Applications

Create intelligent apps and agents with large language models

Valentina Alto

Building LLM Powered Applications

Copyright © 2024 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Senior Publishing Product Manager: Tushar Gupta

Acquisition Editors – Peer Reviews: Tejas Mhasvekar and Jane D'Souza

Project Editor: Namrata Katare

Content Development Editors: Shruti Menon and Bhavesh Amin

Copy Editor: Safis Editing

Technical Editor: Anirudh Singh

Proofreader: Safis Editing

Indexer: Subalakshmi Govindhan

Presentation Designer: Ajay Patule

Developer Relations Marketing Executive: Monika Sangwan

First published: May 2024

Production reference: 2100625

Published by Packt Publishing Ltd.

Grosvenor House

11 St Paul’s Square

Birmingham

B3 1RB, UK.

ISBN 978-1-83546-231-7

www.packt.com

Contributors

About the author

Valentina Alto is an AI enthusiast, tech author, and runner. After completing her master's in data science, she joined Microsoft in 2020, where she currently works as an AI specialist. Passionate about machine learning and AI since the outset of her academic journey, Valentina has deepened her knowledge in the field, authoring hundreds of articles on tech blogs. She also authored her first book with Packt, titled Modern Generative AI with ChatGPT and OpenAI Models. In her current role, she collaborates with large enterprises, aiming to integrate AI into their processes and create innovative solutions using large foundation models.

Beyond her professional pursuits, Valentina loves hiking in the beautiful Italian mountains, running, traveling, and enjoying a good book with a cup of coffee.

About the reviewers

Alexandru Vesa has over a decade of expertise as an AI engineer and is currently serving as the CEO at Cube Digital, an AI software development firm he leads with a vision inspired by the transformative potential of AI algorithms. He has a wealth of experience in navigating diverse business environments and shaping AI products in both multinational corporations and dynamic startups. Drawing inspiration from various disciplines, he has built a versatile skill set and seamlessly integrates state-of-the-art technologies with proven engineering methods. He is proficient in guiding projects from inception to scalable success.

Alex is a key figure in the DecodingML publication, collaborating with Paul Iusztin to curate the groundbreaking hands-on course LLM Twin: Building Your Production-Ready AI Replica, hosted on the Substack platform. His problem-solving and communication skills make him an indispensable force in utilizing AI to foster innovation and achieve tangible results.

Louis Owen is a data scientist/AI engineer hailing from Indonesia. Currently contributing to NLP solutions at Yellow.ai, a leading CX automation platform, he thrives on delivering innovative solutions. Louis’s diverse career spans various sectors, including NGO work with The World Bank, e-commerce with Bukalapak and Tokopedia, conversational AI with Yellow.ai, online travel with Traveloka, smart city initiatives with Qlue, and FinTech with Do-it. Louis has also written a book with Packt, titled Hyperparameter Tuning with Python, and published several papers in the AI field.

Outside of work, Louis loves to spend time mentoring aspiring data scientists, sharing insights through articles, and indulging in his hobbies of watching movies and working on side projects.

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/llm

Contents

Preface

Who this book is for

What this book covers

To get the most out of this book

Get in touch

Making the Most Out of This Book – Get to Know Your Free Benefits

Unlock Your Book’s Exclusive Benefits

How to unlock these benefits in three easy steps

Need help?

Introduction to Large Language Models

What are large foundation models and LLMs?

AI paradigm shift – an introduction to foundation models

Under the hood of an LLM

Most popular LLM transformers-based architectures

Early experiments

Introducing the transformer architecture

Training and evaluating LLMs

Training an LLM

Model evaluation

Base models versus customized models

How to customize your model

Summary

References

LLMs for AI-Powered Applications

How LLMs are changing software development

The copilot system

Introducing AI orchestrators to embed LLMs into applications

The main components of AI orchestrators

LangChain

Haystack

Semantic Kernel

How to choose a framework

Summary

References

Choosing an LLM for Your Application

The most promising LLMs in the market

Proprietary models

GPT-4

Gemini 1.5

Claude 2

Open-source models

LLaMA-2

Falcon LLM

Mistral

Beyond language models

A decision framework to pick the right LLM

Considerations

Case study

Summary

References

Prompt Engineering

Technical requirements

What is prompt engineering?

Principles of prompt engineering

Clear instructions

Split complex tasks into subtasks

Ask for justification

Generate many outputs, then use the model to pick the best one

Use delimiters

Advanced techniques

Few-shot approach

Chain of thought

ReAct

Summary

References

Embedding LLMs within Your Applications

Technical requirements

A brief note about LangChain

Getting started with LangChain

Models and prompts

Data connections

Memory

Chains

Agents

Working with LLMs via the Hugging Face Hub

Create a Hugging Face user access token

Storing your secrets in a .env file

Start using open-source LLMs

Summary

References

Building Conversational Applications

Technical requirements

Getting started with conversational applications

Creating a plain vanilla bot

Adding memory

Adding non-parametric knowledge

Adding external tools

Developing the front-end with Streamlit

Summary

References

Search and Recommendation Engines with LLMs

Technical requirements

Introduction to recommendation systems

Existing recommendation systems

K-nearest neighbors

Matrix factorization

Neural networks

How LLMs are changing recommendation systems

Implementing an LLM-powered recommendation system

Data preprocessing

Building a QA recommendation chatbot in a cold-start scenario

Building a content-based system

Developing the front-end with Streamlit

Summary

References

Using LLMs with Structured Data

Technical requirements

What is structured data?

Getting started with relational databases

Introduction to relational databases

Overview of the Chinook database

How to work with relational databases in Python

Implementing the DBCopilot with LangChain

LangChain agents and SQL Agent

Prompt engineering

Adding further tools

Developing the front-end with Streamlit

Summary

References

Working with Code

Technical requirements

Choosing the right LLM for code

Code understanding and generation

Falcon LLM

CodeLlama

StarCoder

Act as an algorithm

Leveraging Code Interpreter

Summary

References

Building Multimodal Applications with LLMs

Technical requirements

Why multimodality?

Building a multimodal agent with LangChain

Option 1: Using an out-of-the-box toolkit for Azure AI Services

Getting started with AzureCognitiveServicesToolkit

Setting up the toolkit

Leveraging a single tool

Leveraging multiple tools

Building an end-to-end application for invoice analysis

Option 2: Combining single tools into one agent

YouTube tools and Whisper

DALL·E and text generation

Putting it all together

Option 3: Hard-coded approach with a sequential chain

Comparing the three options

Developing the front-end with Streamlit

Summary

References

Fine-Tuning Large Language Models

Technical requirements

What is fine-tuning?

When is fine-tuning necessary?

Getting started with fine-tuning

Obtaining the dataset

Tokenizing the data

Fine-tuning the model

Using evaluation metrics

Training and saving

Summary

References

Responsible AI

What is Responsible AI and why do we need it?

Responsible AI architecture

Model level

Metaprompt level

User interface level

Regulations surrounding Responsible AI

Summary

References

Emerging Trends and Innovations

The latest trends in language models and generative AI

GPT-4V(ision)

DALL-E 3

AutoGen

Small language models

Companies embracing generative AI

Coca-Cola

Notion

Malbek

Microsoft

Summary

References

Other Books You May Enjoy

Index


Share your thoughts

Once you’ve read Building LLM Powered Applications, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Making the Most Out of This Book – Get to Know Your Free Benefits

Unlock exclusive free benefits that come with your purchase, thoughtfully crafted to supercharge your learning journey and help you learn without limits.

https://www.packtpub.com/unlock/9781835462317

Note: Have your purchase invoice ready before you begin.

Figure 1.1: Next-Gen Reader, AI Assistant (Beta), and Free PDF access

Enhanced reading experience with our Next-gen Reader:

Multi-device progress sync: Learn from any device with seamless progress sync.

Highlighting and Notetaking: Turn your reading into lasting knowledge.

Bookmarking: Revisit your most important learnings anytime.

Dark mode: Focus with minimal eye strain by switching to dark or sepia modes.

Learn smarter using our AI assistant (Beta):

Summarize it: Summarize key sections or an entire chapter.

AI code explainers: In Packt Reader, click the “Explain” button above each code block for AI-powered code explanations.

Note: AI Assistant is part of next-gen Packt Reader and is still in beta.

Learn anytime, anywhere:

Access your content offline with DRM-free PDF and ePub versions—compatible with your favorite e-readers.

Unlock Your Book’s Exclusive Benefits

Your copy of this book comes with the following exclusive benefits:

Next-gen Packt Reader

AI assistant (beta)

DRM-free PDF/ePub downloads

Use the following guide to unlock them if you haven’t already. The process takes just a few minutes and needs to be done only once.

How to unlock these benefits in three easy steps

Step 1

Have your purchase invoice for this book ready, as you’ll need it in Step 3. If you received a physical invoice, scan it on your phone and have it ready as either a PDF, JPG, or PNG.

For more help on finding your invoice, visit https://www.packtpub.com/unlock-benefits/help.

Note: Bought this book directly from Packt? You don’t need an invoice. After completing Step 2, you can jump straight to your exclusive content.

Step 2

Scan the following QR code or visit https://www.packtpub.com/unlock/9781835462317:

Step 3

Sign in to your Packt account or create a new one for free. Once you’re logged in, upload your invoice. It can be in PDF, PNG, or JPG format and must be no larger than 10 MB. Follow the rest of the instructions on the screen to complete the process.

Need help?

If you get stuck and need help, visit https://www.packtpub.com/unlock-benefits/help for a detailed FAQ on how to find your invoices and more. The following QR code will take you to the help page directly:

Note: If you are still facing issues, reach out to [email protected].