llm-cookbook
An introductory LLM tutorial for developers: the Chinese edition of Andrew Ng's large model course series
Stars: 11367
LLM Cookbook is a comprehensive, developer-oriented guide to LLMs for Chinese developers. It covers everything from prompt engineering to RAG development and model fine-tuning, showing how to learn and get started with LLM projects in a way suited to Chinese learners. The project translates and reproduces 11 courses from Professor Andrew Ng's large model course series and categorizes them so that beginners can systematically learn the essential skills and concepts before exploring the directions that interest them. It encourages developers to contribute by reproducing courses that have not yet been covered, following the existing format and submitting PRs for review and merging. Both online reading and PDF versions are available for easy access and learning.
README:
This project is a large model handbook for developers. Aimed at the practical needs of developers in China, it focuses on a complete, hands-on introduction to LLMs. Based on Professor Andrew Ng's large model course series, it selects, translates, reproduces, and tunes the original course content, covering the full workflow from Prompt Engineering to RAG development and model fine-tuning, and guides developers in China through learning and getting started with LLM projects in the way best suited to Chinese learners.
Taking the characteristics of the different courses into account, we have translated and reproduced a total of 11 of Professor Andrew Ng's large model courses, and have classified and ordered them according to the actual needs of learners in China. Beginners can first work through the required courses systematically to master the fundamental skills and concepts needed in every direction of LLM work, and then selectively study the elective courses to keep exploring the directions they are interested in.
If there is an Andrew Ng large model course you love that we have not yet reproduced, we welcome every developer to reproduce it following the format and style of our existing courses and submit a PR. Once the PR passes review, we will classify the course by its content and merge it. Every developer's contribution is welcome!
Online reading: An Introductory LLM Course for Developers (online edition)
PDF download: An Introductory LLM Tutorial for Developers (PDF)
English originals: Andrew Ng's course series on large models
LLMs are gradually changing people's lives. For developers, learning how to use the APIs that LLMs provide to quickly and conveniently build LLM-integrated applications with stronger, more novel, and more practical capabilities has become an essential skill.
The large model course series launched by Professor Andrew Ng in collaboration with OpenAI starts from the fundamental skills developers need in the era of large models and gives an accessible introduction to building applications on top of large model APIs and the LangChain framework. Prompt Engineering for Developers targets developers new to LLMs and explains how to construct prompts and use the OpenAI API to implement common functions such as summarization, inference, and transformation; it is the classic entry point for LLM development. Building Systems with the ChatGPT API targets developers who want to build applications on top of LLMs and gives a concise yet systematic introduction to building a complete conversational system with the ChatGPT API. LangChain for LLM Application Development uses the popular open-source framework LangChain to show how to develop practical, full-featured applications, and LangChain Chat With Your Data builds on it to show how to combine LangChain with private personal data to develop personalized large model applications. Building Generative AI Applications with Gradio and Evaluating and Debugging Generative AI introduce two practical tools, Gradio and W&B, and guide developers in using them to build and evaluate generative AI applications.
These courses are ideal for developers who want to start building real applications on top of LLMs. We have therefore translated the series into Chinese, reproduced its example code, and added Chinese subtitles to one of the videos so that Chinese learners can use the material directly and learn LLM development more easily. We have also implemented Chinese prompts with roughly equivalent effect, so learners can experience working with LLMs in a Chinese-language context and compare prompt design and LLM development across languages. In the future we will add more advanced prompting techniques to enrich the course and help developers master more, and more sophisticated, prompting skills.
All developers with basic Python skills who want to get started with LLMs.
Courses such as ChatGPT Prompt Engineering for Developers and Building Systems with the ChatGPT API are official courses jointly released by Professor Andrew Ng and OpenAI, and will remain important introductory LLM material for the foreseeable future; however, they are currently available only in English and are difficult to access from mainland China, so a Chinese version that can be accessed smoothly in China is of real value. At the same time, GPT understands Chinese and English differently. After repeated comparison and experimentation, this tutorial settled on Chinese prompts with roughly equivalent effect, so learners can study how to improve ChatGPT's understanding and generation in a Chinese-language context.
This tutorial is suitable for all developers with basic Python skills who want to get started with LLMs.
To start learning from this tutorial, you need:
- At least one LLM API key (preferably OpenAI; if you use another provider's API, you may need to consult other tutorials and adapt the API-calling code)
- The ability to work with Python in Jupyter Notebook
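For reference, here is a minimal sketch of the kind of API call the courses build on. It assumes the openai Python SDK (version 1 or later) is installed and that the OPENAI_API_KEY environment variable is set; the model name and prompt are placeholders, not code from the repository.

```python
# Minimal sketch (not course code): one chat completion via the OpenAI Python SDK (v1+).
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "用一句话解释什么是大语言模型。"},
    ],
    temperature=0,  # deterministic output, as the course notebooks commonly use
)
print(response.choices[0].message.content)
```

If you use a different provider, the shape of the call will differ, which is why the prerequisite above notes that non-OpenAI APIs may require adapting the code.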
This tutorial comprises 11 courses, divided into required and elective categories. The required courses are, in our view, the best starting point for beginners: they cover the fundamental skills and concepts needed in every direction of LLM work, and we have produced reader-friendly online and PDF versions of them. When studying the required courses, we recommend following the order listed below. The elective courses extend the required ones, covering areas such as RAG development, model fine-tuning, and model evaluation; they suit learners who have finished the required courses and want to pursue the directions they find most interesting.
The required courses are:
- Prompt Engineering for Developers. Built on Professor Andrew Ng's ChatGPT Prompt Engineering for Developers, it targets developers new to LLMs and explains how to construct prompts and use the OpenAI API to implement common functions such as summarization, inference, and transformation; it is the first step into LLM development (see the prompt sketch after this list).
- Building a Question-Answering System with ChatGPT. Built on Building Systems with the ChatGPT API, it guides developers through building a complete, full-featured intelligent question-answering system on the ChatGPT API. Working through the code implements the entire workflow of building a QA system with ChatGPT and introduces the new paradigm of large-model-based development; it is the practical foundation of LLM development.
- Developing Applications with LangChain. Built on LangChain for LLM Application Development, it gives an in-depth introduction to LangChain and helps learners understand how to use it to build complete, highly capable applications.
- Accessing Personal Data with LangChain. Built on LangChain Chat with Your Data, it explores LangChain's personal-data access capabilities in depth and shows developers how to build large model applications that can access users' personal data and provide personalized services.
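As a hedged taste of what the Prompt Engineering course covers, the sketch below shows the delimiter-based summarization pattern the course teaches: instructions and input text are kept clearly separated by explicit delimiters. The get_completion helper and the sample text here are illustrative placeholders, not code copied from this repository.

```python
# Illustrative sketch of the delimiter-based prompt pattern taught in the Prompt
# Engineering course; helper and sample text are placeholders. Assumes openai SDK v1+.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def get_completion(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content


text = "LangChain 是一个帮助开发者基于大模型构建应用的开源框架……"  # placeholder input

# Wrap the input in triple backticks so the instruction and the data stay separated.
prompt = f"请把三个反引号之间的文本总结为一句话。```{text}```"
print(get_completion(prompt))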
The elective courses are:
- Building Generative AI Applications with Gradio. Built on Building Generative AI Applications with Gradio, it shows developers how to use Gradio to quickly and efficiently build user interfaces for generative AI applications from Python.
- Evaluating and Improving Generative AI. Built on Evaluating and Debugging Generative AI, it uses wandb to provide a systematic set of methods and tools that help developers track and debug generative AI models effectively.
- Fine-tuning Large Language Models. Built on Finetuning Large Language Models, it uses the lamini framework to explain how to conveniently and efficiently fine-tune open-source large language models locally on your own data.
- Large Language Models with Semantic Search. Built on Large Language Models with Semantic Search, it covers a range of advanced retrieval techniques for more accurate and efficient retrieval-augmented LLM generation.
- Advanced Retrieval with Chroma. Built on Advanced Retrieval for AI with Chroma, it introduces advanced retrieval techniques based on Chroma to improve the accuracy of retrieval results (see the retrieval sketch after this list).
- Building and Evaluating Advanced RAG Applications. Built on Building and Evaluating Advanced RAG Applications, it introduces the key techniques and evaluation frameworks needed to build high-quality RAG systems.
- Functions, Tools and Agents with LangChain. Built on Functions, Tools and Agents with LangChain, it introduces how to build agents with LangChain's newer syntax.
- Advanced Prompting Techniques. Covers the theory and code implementation of advanced prompting techniques such as Chain-of-Thought (CoT) and self-consistency.
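To make the retrieval-oriented electives more concrete, here is a hedged sketch of a minimal Chroma workflow: create a collection, add a few documents, and query it. It assumes the chromadb package and its default embedding function; the collection name and documents are invented for illustration and are not taken from the course notebooks.

```python
# Minimal, illustrative Chroma retrieval sketch (not course code).
# Assumes `pip install chromadb`; uses Chroma's default embedding function.
import chromadb

client = chromadb.Client()  # in-memory client; use chromadb.PersistentClient(path=...) to persist
collection = client.create_collection(name="demo_docs")  # hypothetical collection name

# Add a few documents; Chroma embeds them with the default embedding function.
collection.add(
    documents=[
        "LangChain 是一个用于构建大模型应用的开源框架。",
        "Chroma 是一个开源的向量数据库。",
        "RAG 通过检索外部知识来增强大模型的回答。",
    ],
    ids=["doc1", "doc2", "doc3"],
)

# Retrieve the documents most similar to the query.
results = collection.query(query_texts=["什么是向量数据库?"], n_results=2)
print(results["documents"][0])  # top-2 matches for the first (and only) query
```

The courses themselves go well beyond this, covering query expansion, re-ranking, and evaluation; the sketch only shows the basic add-and-query loop that those techniques build on.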
Other materials:
Video with bilingual subtitles: a professionally translated version of the Andrew Ng x OpenAI Prompt Engineering course
Chinese-English subtitle download: unofficial bilingual subtitles for ChatGPT Prompt Engineering
Video walkthrough: Prompt Engineering for Developers explained (Digital Nomad Conference)
Directory structure:
content: bilingual reproductions of the original course code as runnable notebooks; this directory is updated most often and fastest.
docs: source for the online reading version of the required courses' written tutorials, as reader-friendly Markdown.
figures: image files.
Core contributors
- 邹雨衡 - Project lead (Datawhale member, graduate student at the University of International Business and Economics)
- 长琴 - Project initiator (content creator, Datawhale member, AI algorithm engineer)
- 玉琳 - Project initiator (content creator, Datawhale member)
- 徐虎 - Tutorial author (content creator, Datawhale member)
- 刘伟鸿 - Tutorial author (content creator, part-time graduate student at Jiangnan University)
- Joye - Tutorial author (content creator, data scientist)
- 高立业 (content creator, Datawhale member, algorithm engineer)
- 邓宇文 (content creator, Datawhale member)
- 魂兮 (content creator, front-end engineer)
- 宋志学 (content creator, Datawhale member)
- 韩颐堃 (content creator, Datawhale member)
- 陈逸涵 (content creator, prospective Datawhale member, AI enthusiast)
- 仲泰 (content creator, Datawhale member)
- 万礼行 (content creator, video translator)
- 王熠明 (content creator, Datawhale member)
- 曾浩龙 (content creator, prospective Datawhale member, JLU AI graduate student)
- 小饭同学 (content creator)
- 孙韩玉 (content creator, algorithm quantization and deployment engineer)
- 张银晗 (content creator, Datawhale member)
- 左春生 (content creator, Datawhale member)
- 张晋 (content creator, Datawhale member)
- 李娇娇 (content creator, Datawhale member)
- 邓恺俊 (content creator, Datawhale member)
- 范致远 (content creator, Datawhale member)
- 周景林 (content creator, Datawhale member)
- 诸世纪 (content creator, algorithm engineer)
- Zhang Yixin (content creator, IT enthusiast)
- Sarai (content creator, AI application enthusiast)
Other
- Special thanks to @Sm1les and @LSGOMYP for their help and support of this project;
- Thanks to GithubDaily for providing the bilingual subtitles;
- If you have any ideas, feel free to contact us at Datawhale, and we also welcome everyone to open issues;
- Special thanks to everyone who has contributed to this tutorial!
Made with contrib.rocks.
Datawhale is an open-source organization focused on data science and AI. It brings together outstanding learners from universities and well-known companies across many fields, gathering a team with an open-source and exploratory spirit. Search for the WeChat official account "Datawhale" to join us.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Alternative AI tools for llm-cookbook
Similar Open Source Tools
cherry-studio
Cherry Studio is a desktop client that supports multiple LLM providers on Windows, Mac, and Linux. It offers diverse LLM provider support, AI assistants & conversations, document & data processing, practical tools integration, and enhanced user experience. The tool includes features like support for major LLM cloud services, AI web service integration, local model support, pre-configured AI assistants, document processing for text, images, and more, global search functionality, topic management system, AI-powered translation, and cross-platform support with ready-to-use features and themes for a better user experience.
DB-GPT
DB-GPT is an open-source AI-native data app development framework with AWEL (Agentic Workflow Expression Language) and agents. It aims to build infrastructure for the large model field by developing technical capabilities such as multi-model management (SMMF), Text2SQL optimization, a RAG framework and its optimization, multi-agent framework collaboration, and AWEL (agent workflow orchestration), making it simpler and more convenient to build large model applications around data.
hdu-cs-wiki
The HDU Computer Science Lecture Notes is a comprehensive guide designed to help students navigate through various challenges in the field of computer science. It covers topics such as programming languages, artificial intelligence, software development, and more. The notes provide insights on how to effectively utilize university time, balance grades with project experience, and make informed decisions regarding career paths. Created by a collaborative effort involving students, teachers, and industry experts, the lecture notes aim to serve as a guiding tool for individuals seeking guidance in the computer science domain.
jd_scripts
jd_scripts is a repository containing scripts for automating various tasks on the JD platform. The scripts provide instructions for setting up and using the tools to enhance user experience and efficiency in managing JD accounts and assets. Users can automate processes such as receiving notifications, redeeming rewards, participating in group purchases, and monitoring ticket availability. The repository also includes resources for optimizing performance and security measures to safeguard user accounts. With a focus on simplifying interactions with the JD platform, jd_scripts offers a comprehensive solution for maximizing benefits and convenience for JD users.
KrillinAI
KrillinAI is a video subtitle translation and dubbing tool based on AI large models, featuring speech recognition, intelligent sentence segmentation, professional translation, and one-click deployment of the entire process. It provides a one-stop workflow from video downloading to the final product, empowering cross-language cultural communication with AI. The tool supports multiple languages for input and translation, integrates features like automatic dependency installation, video downloading from platforms like YouTube and Bilibili, high-speed subtitle recognition, intelligent subtitle segmentation and alignment, custom vocabulary replacement, professional-level translation engine, and diverse external service selection for speech and large model services.
HTFramework
HTFramework is a rapid development framework based on Unity, integrating modular requirements, code reusability, practicality, high cohesion, unified coding standards, extensibility, maintainability, generality, and pluggability. It provides continuous maintenance and upgrades. The framework includes modules for aspect-oriented program code tracking, audio management, controller simplification, coroutine scheduling, custom modules, custom datasets, debugging, entity-component-system, entity management, event handling, exception handling, finite state machines, hotfixing, input management, instruction system, main module access, network client, object pooling, procedures, reference pooling, resource loading, step editing, task editing, UI management, utility tools, web requests, and optional AI, ILRuntime-based hotfixing, XLua integration, and game component modules.
KB-Builder
KB Builder is an open-source knowledge base generation system based on the LLM large language model. It utilizes the RAG (Retrieval-Augmented Generation) data generation enhancement method to provide users with the ability to enhance knowledge generation and quickly build knowledge bases based on RAG. It aims to be the central hub for knowledge construction in enterprises, offering platform-based intelligent dialogue services and document knowledge base management functionality. Users can upload docx, pdf, txt, and md format documents and generate high-quality knowledge base question-answer pairs by invoking large models through the 'Parse Document' feature.
AI_Gen_Novel
AI_Gen_Novel is a project exploring the limits of AI in writing online fiction. Leveraging large language models and multi-agent technology, the tool aims to automatically generate web novels by compressing long texts, optimizing prompts, and enhancing originality. The tool combines the core idea of RecurrentGPT with language-based iterative computation to create texts of any length. Future directions include enhancing model capabilities, optimizing program architecture, and introducing more prior knowledge for structured storytelling.
DriveLM
DriveLM is a multimodal AI model that enables autonomous driving by combining computer vision and natural language processing. It is designed to understand and respond to complex driving scenarios using visual and textual information. DriveLM can perform various tasks related to driving, such as object detection, lane keeping, and decision-making. It is trained on a massive dataset of images and text, which allows it to learn the relationships between visual cues and driving actions. DriveLM is a powerful tool that can help to improve the safety and efficiency of autonomous vehicles.
awesome-khmer-language
Awesome Khmer Language is a comprehensive collection of resources for the Khmer language, including tools, datasets, research papers, projects/models, blogs/slides, and miscellaneous items. It covers a wide range of topics related to Khmer language processing, such as character normalization, word segmentation, part-of-speech tagging, optical character recognition, text-to-speech, and more. The repository aims to support the development of natural language processing applications for the Khmer language by providing a diverse set of resources and tools for researchers and developers.
kweaver
KWeaver is an open-source cognitive intelligence development framework that provides data scientists, application developers, and domain experts with the ability for rapid development, comprehensive openness, and high-performance knowledge network generation and cognitive intelligence large model framework. It offers features such as automated and visual knowledge graph construction, visualization and analysis of knowledge graph data, knowledge graph integration, knowledge graph resource management, large model prompt engineering and debugging, and visual configuration for large model access.
ComfyUI-BRIA_AI-RMBG
ComfyUI-BRIA_AI-RMBG is an unofficial implementation of the BRIA Background Removal v1.4 model for ComfyUI. The tool supports batch processing, including video background removal, and introduces a new mask output feature. Users can install the tool using ComfyUI Manager or manually by cloning the repository. The tool includes nodes for automatically loading the Removal v1.4 model and removing backgrounds. Updates include support for batch processing and the addition of a mask output feature.
instill-core
Instill Core is an open-source orchestrator comprising a collection of source-available projects designed to streamline every aspect of building versatile AI features with unstructured data. It includes Instill VDP (Versatile Data Pipeline) for unstructured data, AI, and pipeline orchestration, Instill Model for scalable MLOps and LLMOps for open-source or custom AI models, and Instill Artifact for unified unstructured data management. Instill Core can be used for tasks such as building, testing, and sharing pipelines, importing, serving, fine-tuning, and monitoring ML models, and transforming documents, images, audio, and video into a unified AI-ready format.
Interview-for-Algorithm-Engineer
This repository provides a collection of interview questions and answers for algorithm engineers. The questions are organized by topic, and each question includes a detailed explanation of the answer. This repository is a valuable resource for anyone preparing for an algorithm engineering interview.
For similar tasks
Flowise
Flowise is a tool that allows users to build customized LLM flows with a drag-and-drop UI. It is open-source and self-hostable, and it supports various deployments, including AWS, Azure, Digital Ocean, GCP, Railway, Render, HuggingFace Spaces, Elestio, Sealos, and RepoCloud. Flowise has three different modules in a single mono repository: server, ui, and components. The server module is a Node backend that serves API logics, the ui module is a React frontend, and the components module contains third-party node integrations. Flowise supports different environment variables to configure your instance, and you can specify these variables in the .env file inside the packages/server folder.
nlux
nlux is an open-source Javascript and React JS library that makes it super simple to integrate powerful large language models (LLMs) like ChatGPT into your web app or website. With just a few lines of code, you can add conversational AI capabilities and interact with your favourite LLM.
generative-ai-go
The Google AI Go SDK enables developers to use Google's state-of-the-art generative AI models (like Gemini) to build AI-powered features and applications. It supports use cases like generating text from text-only input, generating text from text-and-images input (multimodal), building multi-turn conversations (chat), and embedding.
awesome-langchain-zh
The awesome-langchain-zh repository is a collection of resources related to LangChain, a framework for building AI applications using large language models (LLMs). The repository includes sections on the LangChain framework itself, other language ports of LangChain, tools for low-code development, services, agents, templates, platforms, open-source projects related to knowledge management and chatbots, as well as learning resources such as notebooks, videos, and articles. It also covers other LLM frameworks and provides additional resources for exploring and working with LLMs. The repository serves as a comprehensive guide for developers and AI enthusiasts interested in leveraging LangChain and LLMs for various applications.
Large-Language-Model-Notebooks-Course
This practical free hands-on course focuses on Large Language models and their applications, providing a hands-on experience using models from OpenAI and the Hugging Face library. The course is divided into three major sections: Techniques and Libraries, Projects, and Enterprise Solutions. It covers topics such as Chatbots, Code Generation, Vector databases, LangChain, Fine Tuning, PEFT Fine Tuning, Soft Prompt tuning, LoRA, QLoRA, Evaluate Models, Knowledge Distillation, and more. Each section contains chapters with lessons supported by notebooks and articles. The course aims to help users build projects and explore enterprise solutions using Large Language Models.
ai-chatbot
Next.js AI Chatbot is an open-source app template for building AI chatbots using Next.js, Vercel AI SDK, OpenAI, and Vercel KV. It includes features like Next.js App Router, React Server Components, Vercel AI SDK for streaming chat UI, support for various AI models, Tailwind CSS styling, Radix UI for headless components, chat history management, rate limiting, session storage with Vercel KV, and authentication with NextAuth.js. The template allows easy deployment to Vercel and customization of AI model providers.
awesome-local-llms
The 'awesome-local-llms' repository is a curated list of open-source tools for local Large Language Model (LLM) inference, covering both proprietary and open weights LLMs. The repository categorizes these tools into LLM inference backend engines, LLM front end UIs, and all-in-one desktop applications. It collects GitHub repository metrics as proxies for popularity and active maintenance. Contributions are encouraged, and users can suggest additional open-source repositories through the Issues section or by running a provided script to update the README and make a pull request. The repository aims to provide a comprehensive resource for exploring and utilizing local LLM tools.
Awesome-AI-Data-Guided-Projects
A curated list of data science & AI guided projects to start building your portfolio. The repository contains guided projects covering various topics such as large language models, time series analysis, computer vision, natural language processing (NLP), and data science. Each project provides detailed instructions on how to implement specific tasks using different tools and technologies.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud-native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI, and vLLM. BricksLLM aims to provide enterprise-level infrastructure that can power any LLM production use case. Some use cases for BricksLLM: setting LLM usage limits for users on different pricing tiers; tracking LLM usage on a per-user and per-organization basis; blocking or redacting requests containing PII; improving LLM reliability with failovers, retries, and caching; and distributing API keys with rate limits and cost limits for internal development/production use cases or for students.
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.