Language: English (US)
Design and Integrate AI-Powered S/LLMs into Enterprise Apps using Prompt Engineering, RAG, Fine-Tuning and Vector DBs
https://www.udemy.com/course/generative-ai-architectures-with-llm-prompt-rag-vector-db/
In this course, you'll learn how to design Generative AI architectures by integrating AI-powered S/LLMs into an EShop Support enterprise application using Prompt Engineering, RAG, Fine-Tuning, and Vector DBs.

We will design Generative AI architectures with the following components:
- Small and Large Language Models (S/LLMs)
- Prompt Engineering
- Retrieval-Augmented Generation (RAG)
- Fine-Tuning
- Vector Databases

We start with the basics and progressively dive deeper into each topic. We'll also follow the LLM Augmentation Flow, a framework that augments LLM results through Prompt Engineering, RAG, and Fine-Tuning.

Large Language Models (LLMs) module:
- How Large Language Models (LLMs) work
- Capabilities of LLMs: Text Generation, Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, Code Generation
- Generate Text with ChatGPT: Understand the Capabilities and Limitations of LLMs (Hands-on)
- Function Calling and Structured Output in Large Language Models (LLMs)
- LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral Mixtral, xAI Grok
- SLM Models: OpenAI GPT-4o mini, Meta Llama 3.2 mini, Google Gemma, Microsoft Phi-3.5
- Interacting with Different LLMs via Chat UI: ChatGPT, Llama, Mixtral, Phi-3
- Interacting with the OpenAI Chat Completions Endpoint through code (a minimal sketch follows this description)
- Installing and Running Llama and Gemma Models Using Ollama to Run LLMs Locally (see the Ollama sketch below)
- Modernizing and Designing EShop Support Enterprise Apps with AI-Powered LLM Capabilities

Prompt Engineering module:
- Steps of Designing Effective Prompts: Iterate, Evaluate, and Templatize
- Advanced Prompting Techniques: Zero-shot, One-shot, Few-shot, Chain-of-Thought, Instruction-based and Role-based (a few-shot example follows below)
- Design Advanced Prompts for EShop Support: Classification, Sentiment Analysis, Summarization, Q&A Chat, and Response Text Generation
- Design Advanced Prompts for the Ticket Detail Page in the EShop Support App with Q&A Chat and RAG

Retrieval-Augmented Generation (RAG) module:
- The RAG Architecture Part 1: Ingestion with Embeddings and Vector Search
- The RAG Architecture Part 2: Retrieval with Reranking and Context Query Prompts
- The RAG Architecture Part 3: Generation with Generator and Output
- E2E Workflow of Retrieval-Augmented Generation (RAG): The RAG Workflow (a minimal RAG sketch follows below)
- Design EShop Customer Support using RAG
- End-to-End RAG Example for EShop Customer Support using OpenAI Playground

Fine-Tuning module:
- Fine-Tuning Workflow
- Fine-Tuning Methods: Full Fine-Tuning, Parameter-Efficient Fine-Tuning (PEFT), LoRA, Transfer Learning
- Design EShop Customer Support Using Fine-Tuning
- End-to-End Fine-Tuning of an LLM for EShop Customer Support using OpenAI Playground (a data-preparation sketch follows below)

Lastly, we will discuss Choosing the Right Optimization: Prompt Engineering, RAG, or Fine-Tuning.

This course is more than just learning Generative AI; it's a deep dive into designing advanced AI solutions by integrating LLM architectures into enterprise applications. You'll get hands-on experience designing a complete EShop Customer Support application, including LLM capabilities like Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, and Code Generation.
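As a taste of the Chat Completions topic in the LLM module, here is a minimal Python sketch of calling the OpenAI Chat Completions endpoint. The model name and the support question are illustrative assumptions, not taken from the course materials.

    # Minimal sketch: calling the OpenAI Chat Completions endpoint (Python SDK v1.x).
    # Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a customer support assistant for an e-shop."},
            {"role": "user", "content": "Where is my order #1234?"},
        ],
    )

    print(response.choices[0].message.content)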
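For running Llama or Gemma locally, the sketch below queries Ollama's local REST API on its default port. The model tag and prompt are assumptions; the model must already have been pulled (for example with "ollama pull llama3.2").

    # Minimal sketch: querying a locally running Ollama server (default http://localhost:11434).
    # Assumes the model was pulled beforehand, e.g.: ollama pull llama3.2
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2",   # illustrative model tag
            "prompt": "Summarize the return policy for electronics in two sentences.",
            "stream": False,       # return one JSON object instead of a stream
        },
        timeout=120,
    )

    print(resp.json()["response"])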
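To illustrate the few-shot technique from the Prompt Engineering module, here is a small sketch of a classification prompt for an EShop support ticket. The categories and example tickets are invented for demonstration.

    # Minimal sketch: a few-shot classification prompt for EShop support tickets.
    # Categories and example tickets are invented for illustration.
    few_shot_prompt = (
        "Classify the support ticket into one of: Shipping, Refund, Product Question.\n\n"
        'Ticket: "My package has not arrived after two weeks."\n'
        "Category: Shipping\n\n"
        'Ticket: "I want my money back for the broken headphones."\n'
        "Category: Refund\n\n"
        'Ticket: "Does this laptop support USB-C charging?"\n'
        "Category:"
    )

    # Send few_shot_prompt as the user message of any chat endpoint,
    # e.g. the Chat Completions sketch above; the model should answer "Product Question".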
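The RAG module's ingestion, retrieval, and generation steps can be sketched in a few lines of Python. The code below uses OpenAI embeddings with a brute-force similarity search standing in for a real vector database; the document snippets, model names, and question are invented placeholders.

    # Minimal RAG sketch: embed documents, retrieve the closest one, ground the answer in it.
    # A production setup would use a vector database; a plain list stands in for it here.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    docs = [
        "Orders can be returned within 30 days of delivery.",
        "Standard shipping takes 3-5 business days.",
    ]

    def embed(texts):
        # text-embedding-3-small is an illustrative choice of embedding model
        result = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([item.embedding for item in result.data])

    doc_vectors = embed(docs)                      # 1) Ingestion
    question = "How long do I have to return an item?"
    q_vector = embed([question])[0]

    # 2) Retrieval: OpenAI embeddings are unit-length, so the dot product acts as cosine similarity
    context = docs[int(np.argmax(doc_vectors @ q_vector))]

    # 3) Generation grounded in the retrieved context
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
    )
    print(answer.choices[0].message.content)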
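For the Fine-Tuning module, OpenAI fine-tuning expects training examples as chat-formatted JSONL. The sketch below writes two illustrative EShop support examples to such a file; the example content and file name are invented, and a real dataset would need many more curated examples.

    # Minimal sketch: preparing chat-formatted JSONL training data for OpenAI fine-tuning.
    # The two examples are invented; real training data should come from curated support logs.
    import json

    examples = [
        {"messages": [
            {"role": "system", "content": "You are the EShop customer support assistant."},
            {"role": "user", "content": "Can I change my delivery address?"},
            {"role": "assistant", "content": "Yes, you can update the address from 'My Orders' until the order ships."},
        ]},
        {"messages": [
            {"role": "system", "content": "You are the EShop customer support assistant."},
            {"role": "user", "content": "My discount code is not working."},
            {"role": "assistant", "content": "Please check the code's expiry date; if it is still valid, share it with us and we will investigate."},
        ]},
    ]

    with open("eshop_finetune.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")

    # The resulting file can be uploaded (e.g. in the OpenAI dashboard) to start a fine-tuning job.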