oreilly_live_training_agents
Repository for all the code and notebooks for the O'Reilly live-training: "Getting Started with LLM Agents using Langchain"
This repository provides resources and notebooks for the O'Reilly Live Training on getting started with LLM Agents using LangChain & LangGraph. It includes setup instructions, a core learning path, additional topics, the repository structure, and further resources for learning and deploying LangGraph agents.
README:
Conda
- Install Anaconda
- This repo was tested on a Mac with `python=3.11`.
- Create an environment: `conda create -n oreilly-agents python=3.11`
- Activate your environment with: `conda activate oreilly-agents`
- Install requirements with: `pip install -r requirements/requirements.txt`
- Set up your OpenAI API key
Pip
- Create a virtual environment: navigate to your project directory and make sure you have Python 3.10 installed! If using Python 3's built-in venv: `python -m venv oreilly-agents`. If you're using virtualenv: `virtualenv oreilly-agents`
- Activate the virtual environment:
  - On Windows: `.\oreilly-agents\Scripts\activate`
  - On macOS and Linux: `source oreilly-agents/bin/activate`
- Install dependencies from `requirements.txt`: `pip install python-dotenv` and `pip install -r ./requirements/requirements.txt`
- Set up your OpenAI API key
- Remember to deactivate the virtual environment afterwards: `deactivate`
- Rename the `.env.example` file to `.env` and add your OpenAI API key:
  `OPENAI_API_KEY=<your openai api key>`
- Install Jupyter and register the environment as a notebook kernel:
  `conda install jupyter -y`
  `python -m ipykernel install --user --name=oreilly-agents`

The main notebooks are organized in a progressive learning path:
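The `.env` step above is what `python-dotenv`'s `load_dotenv()` automates for the notebooks. As an illustration only, here is a stdlib-only sketch of what that loading amounts to (the `load_env` helper is hypothetical, not part of the repo):

```python
import os

def load_env(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv():
    read KEY=VALUE lines, skip comments and blanks,
    and don't override variables that are already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# After loading, the key is available to client libraries:
# api_key = os.environ["OPENAI_API_KEY"]
```

In practice you would just call `load_dotenv()` from `python-dotenv` (installed above) at the top of each notebook.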
1. Simple ReAct Agent with LangGraph - Quick start with a basic ReAct agent
2. Intro to LangChain & LangGraph - Fundamentals of LangChain and LangGraph
   - 1.1 LangGraph with ChatGPT Search - Using search capabilities
   - 1.2 Intro LLM Agents from Scratch - Building agents without frameworks
3. Intro to LangGraph - Deep dive into LangGraph concepts
   - 2.1 LangGraph Basics - Core LangGraph components and patterns
4. Local Research Agent with LangGraph - Building a research agent
5. LangGraph Persistence - State management and persistence in LangGraph
6. Level 2: Structured Outputs with Agents - Advanced structured output patterns
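The "Intro LLM Agents from Scratch" notebook builds agents without frameworks. The core idea can be sketched as a plain ReAct-style loop: the model alternates between proposing a tool call and producing a final answer. The sketch below is illustrative only; `stub_model`, `react_loop`, and the `calculator` tool are hypothetical names, and the stub stands in for a real LLM call:

```python
def calculator(expression: str) -> str:
    """Toy tool: evaluate an arithmetic expression."""
    return str(eval(expression))  # fine for a demo; never eval untrusted input

TOOLS = {"calculator": calculator}

def stub_model(messages):
    """Stand-in for an LLM: act once with a tool, then answer.
    A real agent would send `messages` to a chat-completion API."""
    if not any(m["role"] == "tool" for m in messages):
        return {"thought": "I should compute this.", "action": ("calculator", "2 + 3")}
    return {"answer": messages[-1]["content"]}

def react_loop(question, model, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = model(messages)
        if "answer" in step:          # model is done reasoning
            return step["answer"]
        tool_name, tool_input = step["action"]
        observation = TOOLS[tool_name](tool_input)   # run the tool
        messages.append({"role": "tool", "content": observation})
    return None

print(react_loop("What is 2 + 3?", stub_model))  # → 5
```

The LangGraph notebooks replace this hand-rolled loop with a graph of nodes and edges, plus persistence of the message state between steps.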
Repository structure:

```
├── notebooks/                    # Main learning notebooks
│   ├── assets-resources/         # Images, diagrams, and research papers
│   ├── langgraph-app/            # LangGraph deployment example
│   ├── langgraph-mcp-quick-demo/ # Model Context Protocol demo
│   ├── legacy-notebooks/         # Previous course materials
│   └── legacy-scripts/           # Utility scripts and examples
├── presentation-slides/          # Course presentation materials (PDFs)
├── requirements/                 # Python dependencies
└── docs/                         # Additional documentation
```
- Presentation Slides: Course slides available in the `presentation-slides/` folder
  - Getting Started with LangGraph
  - Getting Started with Agents Using LangChain
  - Intro LLM Agents
- Deployment Example: Check `notebooks/langgraph-app/` for a complete LangGraph deployment setup
- MCP Demo: See `notebooks/langgraph-mcp-quick-demo/` for Model Context Protocol integration examples
- Legacy Materials: Previous course content available in `notebooks/legacy-notebooks/` and `notebooks/legacy-scripts/`
Similar Open Source Tools
For similar tasks
OpenAGI
OpenAGI is an AI agent creation package designed for researchers and developers to create intelligent agents using advanced machine learning techniques. The package provides tools and resources for building and training AI models, enabling users to develop sophisticated AI applications. With a focus on collaboration and community engagement, OpenAGI aims to facilitate the integration of AI technologies into various domains, fostering innovation and knowledge sharing among experts and enthusiasts.
GPTSwarm
GPTSwarm is a graph-based framework for LLM-based agents that enables the creation of LLM-based agents from graphs and facilitates the customized and automatic self-organization of agent swarms with self-improvement capabilities. The library includes components for domain-specific operations, graph-related functions, LLM backend selection, memory management, and optimization algorithms to enhance agent performance and swarm efficiency. Users can quickly run predefined swarms or utilize tools like the file analyzer. GPTSwarm supports local LM inference via LM Studio, allowing users to run with a local LLM model. The framework has been accepted by ICML2024 and offers advanced features for experimentation and customization.
AgentForge
AgentForge is a low-code framework tailored for the rapid development, testing, and iteration of AI-powered autonomous agents and Cognitive Architectures. It is compatible with a range of LLM models and offers flexibility to run different models for different agents based on specific needs. The framework is designed for seamless extensibility and database-flexibility, making it an ideal playground for various AI projects. AgentForge is a beta-testing ground and future-proof hub for crafting intelligent, model-agnostic autonomous agents.
atomic_agents
Atomic Agents is a modular and extensible framework designed for creating powerful applications. It follows the principles of Atomic Design, emphasizing small and single-purpose components. Leveraging Pydantic for data validation and serialization, the framework offers a set of tools and agents that can be combined to build AI applications. It depends on the Instructor package and supports various APIs like OpenAI, Cohere, Anthropic, and Gemini. Atomic Agents is suitable for developers looking to create AI agents with a focus on modularity and flexibility.
LongRoPE
LongRoPE is a method to extend the context window of large language models (LLMs) beyond 2 million tokens. It identifies and exploits non-uniformities in positional embeddings to enable 8x context extension without fine-tuning. The method utilizes a progressive extension strategy with 256k fine-tuning to reach a 2048k context. It adjusts embeddings for shorter contexts to maintain performance within the original window size. LongRoPE has been shown to be effective in maintaining performance across various tasks from 4k to 2048k context lengths.
ax
Ax is a TypeScript library that allows users to build intelligent agents inspired by agentic workflows and the Stanford DSP paper. It seamlessly integrates with multiple Large Language Models (LLMs) and VectorDBs to create RAG pipelines or collaborative agents capable of solving complex problems. The library offers advanced features such as streaming validation, multi-modal DSP, and automatic prompt tuning using optimizers. Users can easily convert documents of any format to text, perform smart chunking, embedding, and querying, and ensure output validation while streaming. Ax is production-ready, written in TypeScript, and has zero dependencies.
Awesome-AI-Agents
Awesome-AI-Agents is a curated list of projects, frameworks, benchmarks, platforms, and related resources focused on autonomous AI agents powered by Large Language Models (LLMs). The repository showcases a wide range of applications, multi-agent task solver projects, agent society simulations, and advanced components for building and customizing AI agents. It also includes frameworks for orchestrating role-playing, evaluating LLM-as-Agent performance, and connecting LLMs with real-world applications through platforms and APIs. Additionally, the repository features surveys, paper lists, and blogs related to LLM-based autonomous agents, making it a valuable resource for researchers, developers, and enthusiasts in the field of AI.
CodeFuse-muAgent
CodeFuse-muAgent is a Multi-Agent framework designed to streamline Standard Operating Procedure (SOP) orchestration for agents. It integrates toolkits, code libraries, knowledge bases, and sandbox environments for rapid construction of complex Multi-Agent interactive applications. The framework enables efficient execution and handling of multi-layered and multi-dimensional tasks.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API providing access to over 100 different AI models, from images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features:
- Self-contained, with no need for a DBMS or cloud service.
- OpenAPI interface, easy to integrate with existing infrastructure (e.g. Cloud IDE).
- Supports consumer-grade GPUs.
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.