langroid-examples

Using Langroid's Multi-Agent Framework to Build LLM Apps
Langroid-examples is a repository containing examples of using the Langroid Multi-Agent Programming framework to build LLM applications. It provides a collection of scripts and instructions for setting up the environment, working with local LLMs, using OpenAI LLMs, and running various examples. The repository also includes optional setup instructions for integrating with Qdrant, Redis, Momento, GitHub, and Google Custom Search API. Users can explore different scenarios and functionalities of Langroid through the provided examples and documentation.

README:

langroid-examples

Examples of using the Langroid Multi-Agent Programming framework to build LLM applications.

⚠️ Many of the examples in the examples folder in this repo are copied from the corresponding folder in the core langroid repo, although the core repo is generally more updated. We occasionally update this repo with the latest versions from the langroid repo. However, there are some examples in this repo that are not in the core langroid repo.

Set up virtual env and install langroid

IMPORTANT: Please ensure you are using Python 3.11+. If you are using Poetry, you may be able to just run poetry env use 3.11, provided Python 3.11 is available on your system.

# clone the repo and cd into repo root
git clone https://github.com/langroid/langroid-examples.git
cd langroid-examples

# create a virtual env under project root, .venv directory
python3 -m venv .venv

# activate the virtual env
. .venv/bin/activate

# install dependencies from pyproject.toml:
# This installs langroid with "all" extras.
poetry install 
# or equivalently:
# pip install "langroid[all]"

# or to update an existing installation:
pip install --upgrade "langroid[all]"
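As a quick sanity check after installing, you can confirm the interpreter meets the Python 3.11+ requirement before running any examples (a minimal sketch; the helper name is illustrative, not part of Langroid):

```python
import sys

def meets_min_python(version_info=sys.version_info, minimum=(3, 11)):
    """True if the running interpreter satisfies Langroid's 3.11+ requirement."""
    return tuple(version_info[:2]) >= minimum

if __name__ == "__main__":
    if not meets_min_python():
        sys.exit("Langroid requires Python 3.11 or newer")
```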

Once you have the environment set up, you can either:

  • work with a local LLM (see instructions here); some of the example scripts have been explicitly tested with various local LLMs,
  • or use an OpenAI LLM (see instructions below).

Set up environment variables (API keys, etc)

To use the example scripts with an OpenAI LLM, you need an OpenAI API Key. If you don't have one, see this OpenAI Page. Currently only OpenAI models are supported. Others will be added later (Pull Requests welcome!).

In the root of the repo, copy the .env-template file to a new file .env:

cp .env-template .env

Then insert your OpenAI API Key. Your .env file should look like this (the organization is optional but may be required in some scenarios):

OPENAI_API_KEY=your-key-here-without-quotes
OPENAI_ORGANIZATION=optionally-your-organization-id

Alternatively, you can set this as an environment variable in your shell (you will need to do this every time you open a new shell):

export OPENAI_API_KEY=your-key-here-without-quotes
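Langroid picks up these variables from the environment (the .env file is loaded via python-dotenv). A minimal sketch of the fail-fast check a script might do up front; the helper name is illustrative, not part of the Langroid API:

```python
import os

def get_openai_key(env=None):
    """Return the OpenAI API key from the environment (or a supplied mapping).

    Raises a clear error when the key is missing, rather than failing later
    inside an API call.
    """
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; add it to .env or export it in your shell"
        )
    return key
```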
Optional Setup Instructions
  • Qdrant Vector Store API key, URL: only required if you want to use Qdrant cloud; you can sign up for a free 1GB account at Qdrant cloud. If you skip this, Langroid will use Qdrant in local-storage mode. Alternatively, Chroma is also currently supported; we use the local-storage version of Chroma, so no API key is needed. Langroid uses Qdrant by default.
  • Redis Password, host, port: This is optional, and only needed to cache LLM API responses using Redis Cloud. Redis offers a free 30MB Redis account which is more than sufficient to try out Langroid and even beyond. If you don't set up these, Langroid will use a pure-python Redis in-memory cache via the Fakeredis library.
  • Momento Serverless Caching of LLM API responses (as an alternative to Redis). To use Momento instead of Redis:
    • enter your Momento Token in the .env file, as the value of MOMENTO_AUTH_TOKEN (see example file below),
    • in the .env file set CACHE_TYPE=momento (instead of CACHE_TYPE=redis which is the default).
  • GitHub Personal Access Token (required for apps that need to analyze git repos; token-based API calls are less rate-limited). See this GitHub page.
  • Google Custom Search API Credentials: Only needed to enable an Agent to use the GoogleSearchTool. To use Google Search as an LLM Tool/Plugin/function-call, you'll need to set up a Google API key, then setup a Google Custom Search Engine (CSE) and get the CSE ID. (Documentation for these can be challenging, we suggest asking GPT4 for a step-by-step guide.) After obtaining these credentials, store them as values of GOOGLE_API_KEY and GOOGLE_CSE_ID in your .env file. Full documentation on using this (and other such "stateless" tools) is coming soon, but in the meantime take a peek at the test tests/main/test_google_search_tool.py to see how to use it.

If you add all of these optional variables, your .env file should look like this:

OPENAI_API_KEY=your-key-here-without-quotes
GITHUB_ACCESS_TOKEN=your-personal-access-token-no-quotes
CACHE_TYPE=redis # or momento
REDIS_PASSWORD=your-redis-password-no-quotes
REDIS_HOST=your-redis-hostname-no-quotes
REDIS_PORT=your-redis-port-no-quotes
MOMENTO_AUTH_TOKEN=your-momento-token-no-quotes # instead of REDIS* variables
QDRANT_API_KEY=your-key
QDRANT_API_URL=https://your.url.here:6333 # note port number must be included
GOOGLE_API_KEY=your-key
GOOGLE_CSE_ID=your-cse-id
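The cache-related variables above drive a backend choice roughly like the following (an illustrative sketch of the behavior described in this README, not Langroid's actual implementation):

```python
import os

def choose_cache_backend(env=None):
    """Pick an LLM-response cache backend from environment variables.

    Mirrors the configuration above: CACHE_TYPE selects redis (default) or
    momento; with redis, missing REDIS_* credentials fall back to the
    in-memory Fakeredis cache.
    """
    env = os.environ if env is None else env
    cache_type = env.get("CACHE_TYPE", "redis").lower()
    if cache_type == "momento":
        if not env.get("MOMENTO_AUTH_TOKEN"):
            raise RuntimeError("CACHE_TYPE=momento requires MOMENTO_AUTH_TOKEN")
        return "momento"
    # default: Redis Cloud when credentials are present, else Fakeredis
    if all(env.get(k) for k in ("REDIS_PASSWORD", "REDIS_HOST", "REDIS_PORT")):
        return "redis"
    return "fakeredis"
```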

Running the examples

Typically, the examples are run as follows:

python3 examples/quick-start/chat-agent.py

Most of the scripts take additional flags:

  • -m to specify an LLM, e.g. -m ollama/mistral.
  • -nc to turn off cache retrieval for LLM responses, i.e., get fresh (rather than cached) responses each time you run the script.
  • -d to turn on debug mode, showing more detail such as prompts.
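These flags translate to command-line options along the lines of the sketch below (a hypothetical argparse version; the actual scripts may use a different CLI library):

```python
import argparse

def build_parser():
    """Build a parser for the common example-script flags described above."""
    parser = argparse.ArgumentParser(description="Run a langroid example script")
    parser.add_argument("-m", "--model", default=None,
                        help="LLM to use, e.g. ollama/mistral")
    parser.add_argument("-nc", "--nocache", action="store_true",
                        help="bypass cached LLM responses; always query the LLM")
    parser.add_argument("-d", "--debug", action="store_true",
                        help="show more detail, such as prompts")
    return parser
```

For example, `build_parser().parse_args(["-m", "ollama/mistral", "-nc"])` yields a namespace with `model="ollama/mistral"`, `nocache=True`, and `debug=False`.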

All of the examples are best run on the command line, preferably in a nice terminal like iTerm2.

Ubuntu

On Ubuntu, for the SQL applications you'll need to make sure a few dependencies are installed, including:

  • postgresql:
    sudo apt-get install libpq-dev
  • mysql dev:
    sudo apt install libmysqlclient-dev
  • and, if you are on an earlier version of Ubuntu, the Python 3.11 dev headers:
    sudo apt install python3.11-dev build-essential

Docker Instructions

We provide a containerized version of this repo via this Docker Image. All you need to do is set up environment variables in the .env file. Please follow these steps to set up the container:

# get the .env file template from the `langroid` repo
wget -O .env https://raw.githubusercontent.com/langroid/langroid/main/.env-template

# Edit the .env file with your favorite editor (here nano):
# add API keys as explained above
nano .env

# launch the container
docker run -it -v ./.env:/.env langroid/langroid

# Use this command to run any of the examples
python examples/<Path/To/Example.py> 
