Course Overview
This course introduces generative artificial intelligence (AI) to software developers who want to use large language models (LLMs) without fine-tuning them. It provides an overview of generative AI and covers planning a generative AI project, getting started with Amazon Bedrock, the foundations of prompt engineering, and the architecture patterns for building generative AI applications with Amazon Bedrock and LangChain.
Course Content
- Introduction to Generative AI – Art of the Possible
- Planning a Generative AI Project
- Getting Started with Amazon Bedrock
- Foundations of Prompt Engineering
- Amazon Bedrock Application Components
- Amazon Bedrock Foundation Models
- LangChain
- Architecture Patterns
Who Should Attend
This course is intended for:
- Software developers interested in using LLMs without fine-tuning
Prerequisites
We recommend that attendees of this course have:
- Completed AWS Technical Essentials (AWSE)
- Intermediate-level proficiency in Python
Course Objectives
In this course, you will learn to:
- Describe generative AI and how it relates to machine learning
- Explain the importance of generative AI and its potential risks and benefits
- Identify business value from generative AI use cases
- Discuss the technical foundations and key terminology for generative AI
- Explain the steps for planning a generative AI project
- Identify some of the risks and mitigations when using generative AI
- Understand how Amazon Bedrock works
- Familiarize yourself with basic concepts of Amazon Bedrock
- Recognize the benefits of Amazon Bedrock
- List typical use cases for Amazon Bedrock
- Describe the typical architecture associated with an Amazon Bedrock solution
- Understand the cost structure of Amazon Bedrock
- Implement a demonstration of Amazon Bedrock in the AWS Management Console
- Define prompt engineering and apply general best practices when interacting with foundation models (FMs)
- Identify the basic types of prompt techniques, including zero-shot and few-shot learning
- Apply advanced prompt techniques when necessary for your use case
- Identify which prompt techniques are best suited for specific models
- Identify potential prompt misuses
- Analyze potential bias in FM responses and design prompts that mitigate that bias
- Identify the components of a generative AI application and how to customize an FM
- Describe Amazon Bedrock foundation models, inference parameters, and key Amazon Bedrock APIs
- Identify Amazon Web Services (AWS) offerings that help with monitoring, securing, and governing your Amazon Bedrock applications
- Describe how to integrate LangChain with LLMs, prompt templates, chains, chat models, text embedding models, document loaders, retrievers, and Agents for Amazon Bedrock
- Describe architecture patterns that you can implement with Amazon Bedrock for building generative AI applications
- Apply the concepts to build and test sample use cases that use the various Amazon Bedrock models, LangChain, and the Retrieval Augmented Generation (RAG) approach
Outline: Developing Generative AI Applications on AWS (DGAIA)
Day 1
Module 1: Introduction to Generative AI – Art of the Possible
- Overview of machine learning (ML)
- Basics of generative AI
- Generative AI use cases
- Generative AI in practice
- Risks and benefits
Module 2: Planning a Generative AI Project
- Generative AI fundamentals
- Generative AI in practice
- Generative AI context
- Steps in planning a generative AI project
- Risks and mitigation
Module 3: Getting Started with Amazon Bedrock
- Introduction to Amazon Bedrock
- Architecture and use cases
- How to use Amazon Bedrock
- Demonstration: Setting up Amazon Bedrock access and using the playgrounds
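The demonstration roughly corresponds to the sketch below, which lists the foundation models visible to an account. It assumes the AWS SDK for Python (boto3), configured credentials, the us-east-1 Region, and that model access has already been requested in the console:

```python
# Minimal sketch: list the foundation models available to your account.
import boto3

# The "bedrock" client handles control-plane calls such as model discovery;
# the "bedrock-runtime" client (used in later modules) handles inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.list_foundation_models()
for summary in response["modelSummaries"]:
    print(summary["modelId"], "-", summary["providerName"])
```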
Module 4: Foundations of Prompt Engineering
- Basics of foundation models
- Fundamentals of prompt engineering
- Basic prompt techniques (see the sketch after this module outline)
- Advanced prompt techniques
- Model-specific prompt techniques
- Demonstration: Fine-tuning a basic text prompt
- Addressing prompt misuses
- Mitigating bias
- Demonstration: Image bias mitigation
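To make the zero-shot and few-shot techniques in this module concrete, here is a minimal sketch contrasting the two; the review text and labels are illustrative only:

```python
# Zero-shot: the model receives only the task instruction and the input.
zero_shot_prompt = (
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: the same task, preceded by a handful of labeled examples.
few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The setup took five minutes and everything just worked.\n"
    "Sentiment: Positive\n\n"
    "Review: Support never answered my emails.\n"
    "Sentiment: Negative\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)
```

Either string can be sent to a foundation model through the inference calls covered in Module 6.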
Day 2
Module 5: Amazon Bedrock Application Components
- Overview of generative AI application components
- Foundation models and the FM interface
- Working with datasets and embeddings
- Demonstration: Word embeddings (see the sketch after this module outline)
- Additional application components
- Retrieval Augmented Generation (RAG)
- Model fine-tuning
- Securing generative AI applications
- Generative AI application architecture
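The word-embeddings demonstration can be approximated with the sketch below. It assumes boto3, configured credentials, and access to an Amazon Titan Embeddings model; the model ID and response fields vary by model version:

```python
# Minimal sketch: turn a piece of text into an embedding vector with Amazon Bedrock.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({"inputText": "Amazon Bedrock provides access to foundation models."})
response = runtime.invoke_model(
    modelId="amazon.titan-embed-text-v1",  # assumed model ID; use one enabled in your account
    body=body,
)

payload = json.loads(response["body"].read())
embedding = payload["embedding"]  # a list of floats
print(len(embedding), "dimensions")
```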
Module 6: Amazon Bedrock Foundation Models
- Introduction to Amazon Bedrock foundation models
- Using Amazon Bedrock FMs for inference
- Amazon Bedrock methods
- Data protection and auditability
- Lab: Invoke an Amazon Bedrock model for text generation using a zero-shot prompt
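A minimal sketch of the kind of call this lab works through, assuming boto3 and access to an Amazon Titan Text model (the model ID and request fields are assumptions and vary by model family):

```python
# Minimal sketch: zero-shot text generation with an Amazon Titan Text model.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

request = {
    "inputText": "Explain what a foundation model is in two sentences.",
    "textGenerationConfig": {
        "maxTokenCount": 200,   # inference parameters covered in this module
        "temperature": 0.5,
        "topP": 0.9,
    },
}

response = runtime.invoke_model(
    modelId="amazon.titan-text-premier-v1:0",  # assumed model ID
    body=json.dumps(request),
)

payload = json.loads(response["body"].read())
print(payload["results"][0]["outputText"])
```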
Module 7: LangChain
- Optimizing LLM performance
- Integrating AWS and LangChain
- Using models with LangChain
- Constructing prompts
- Structuring documents with indexes
- Storing and retrieving data with memory
- Using chains to sequence components (see the sketch after this module outline)
- Managing external resources with LangChain agents
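To show how these components fit together, here is a minimal sketch that pipes a prompt template into a Bedrock chat model and sequences them as a chain. It assumes the langchain-aws and langchain-core packages and access to an Anthropic Claude model on Amazon Bedrock; package and model names may differ in your environment:

```python
# Minimal sketch: a prompt template piped into a Bedrock chat model with LangChain.
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    model_kwargs={"temperature": 0.2},
)

prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)

# A chain sequences components: prompt -> model -> output parser.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({
    "text": "Amazon Bedrock is a fully managed service that offers foundation "
            "models from several providers through a single API."
}))
```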
Module 8: Architecture Patterns
- Introduction to architecture patterns
- Text summarization
- Lab: Using Amazon Titan Text Premier to summarize the text of small files
- Lab: Summarize long texts with Amazon Titan
- Question answering
- Lab: Using Amazon Bedrock for question answering
- Chatbot
- Lab: Build a chatbot
- Code generation
- Lab: Using Amazon Bedrock models for code generation
- LangChain and Agents for Amazon Bedrock
- Lab: Building conversational applications with the Converse API
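For reference, the Converse API used in the final lab accepts the same message structure regardless of the underlying model; a minimal sketch of a single conversational turn (the model ID is an assumption) might look like this:

```python
# Minimal sketch: one conversational turn through the Amazon Bedrock Converse API.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.converse(
    modelId="amazon.titan-text-premier-v1:0",  # assumed model ID
    messages=[
        {"role": "user", "content": [{"text": "Suggest a name for a bookstore chatbot."}]},
    ],
    inferenceConfig={"maxTokens": 100, "temperature": 0.7},
)

print(response["output"]["message"]["content"][0]["text"])
```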