rigging

Lightweight LLM Interaction Framework

Stars: 240

Rigging is a lightweight LLM framework designed to make using language models in production code simple and effective. It offers structured Pydantic models for parsing text output, supports multiple backends such as LiteLLM and transformers, and provides features including prompts defined as Python functions, simple tool use, models stored as connection strings, async batching for large-scale generation, and modern Python support with type hints and async capabilities. Rigging is developed by dreadnode and is suited to tasks like building chat pipelines, running completions, tracking behavior with tracing, tuning generation parameters, and scaling up with iteration and batching.

README:

Flexible LLM library for code and agents

Rigging is a lightweight LLM framework to make using language models in production code as simple and effective as possible. Here are the highlights:

  • Structured Pydantic models can be used interchangeably with unstructured text output (see the parsing sketch below).
  • LiteLLM as the default generator giving you instant access to a huge array of models.
  • Define prompts as Python functions with type hints and docstrings.
  • Simple tool use, even for models which don't support them at the API level.
  • Store different models and configs as simple connection strings, just like databases.
  • Integrated tracing support with Logfire.
  • Chat templating, forking, continuations, generation parameter overloads, stripping segments, etc.
  • Async batching and fast iterations for large-scale generation.
  • Metadata, callbacks, and data format conversions.
  • Modern Python with type hints, async support, Pydantic validation, serialization, etc.
import rigging as rg

@rg.prompt(generator_id="gpt-4")
async def get_authors(count: int = 3) -> list[str]:
    """Provide famous authors."""

# Top-level await works in a notebook; in a script, call asyncio.run(get_authors())
print(await get_authors())

# ['William Shakespeare', 'J.K. Rowling', 'Jane Austen']
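
The highlights above mention structured Pydantic models used interchangeably with plain text. Here is a rough parsing sketch based on the project docs, assuming the rg.Model base class and the xml_tags / until_parsed_as / parse helpers (exact names may vary by version):

import rigging as rg
import asyncio

class Answer(rg.Model):
    content: str

async def main():
    # Ask the model to wrap its reply in <answer> tags and retry until it parses
    chat = (
        await rg.get_generator("gpt-4")
        .chat(f"What is the capital of France? Reply between {Answer.xml_tags()}.")
        .until_parsed_as(Answer)
        .run()
    )
    answer = chat.last.parse(Answer)
    print(answer.content)  # e.g. "Paris"

asyncio.run(main())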

Rigging is built by dreadnode where we use it daily.

Installation

We publish every version to PyPI:

pip install rigging

If you want to build from source:

cd rigging/
poetry install

Supported LLMs

Rigging will run just about any language model: API-hosted models through the default LiteLLM generator, and local models through additional backends such as transformers.

API Keys

Pass the api_key as part of a generator id or use standard environment variables.

rg.get_generator("gpt-4-turbo,api_key=...")

export OPENAI_API_KEY=...
export MISTRAL_API_KEY=...
export ANTHROPIC_API_KEY=...
...

Check out the docs for more.
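
Generator ids are plain connection strings, so a provider prefix and generation parameters can be folded in as well. A minimal sketch, assuming the provider!model,key=value format described in the docs (the model names below are placeholders):

import rigging as rg

# Default provider (LiteLLM) with inline generation parameters
generator = rg.get_generator("gpt-4-turbo,temperature=0.7,max_tokens=256")

# Explicit provider prefix for a local backend (placeholder model id)
local = rg.get_generator("transformers!microsoft/Phi-3-mini-4k-instruct")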

Getting Started

Check out the guide in the docs.

  1. Get a generator using a connection string.
  2. Build a chat or completion pipeline.
  3. Run the pipeline and get the output.
import rigging as rg
import asyncio

async def main():
    # 1 - Get a generator
    generator = rg.get_generator("claude-3-sonnet-20240229")

    # 2 - Build a chat pipeline
    pipeline = generator.chat(
        [
            {"role": "system", "content": "Talk like a pirate."},
            {"role": "user", "content": "Say hello!"},
        ]
    )

    # 3 - Run the pipeline
    chat = await pipeline.run()
    print(chat.conversation)

# Run the main function
asyncio.run(main())

# [system]: Talk like a pirate.
# [user]: Say hello!
# [assistant]: Ahoy, matey! Here be the salty sea dog ready to trade greetings wit' ye. Arrr!
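
To scale the same pipeline out, the docs describe batching helpers; here is a rough sketch assuming run_many, which runs a pipeline several times concurrently (the name and signature may differ by version):

import rigging as rg
import asyncio

async def main():
    pipeline = rg.get_generator("gpt-4").chat("Suggest one unusual startup name.")

    # Run the same pipeline five times concurrently and collect the resulting chats
    chats = await pipeline.run_many(5)
    for chat in chats:
        print(chat.last.content)

asyncio.run(main())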

Want more?

Examples

Documentation

docs.dreadnode.io has everything you need.
