gollm

Unified Go interface for Large Language Model (LLM) providers. Simplifies LLM integration with flexible prompt management and common task functions.

gollm is a Go package designed to simplify interactions with Large Language Models (LLMs) for AI engineers and developers. It offers a unified API for multiple LLM providers, easy provider and model switching, flexible configuration options, advanced prompt engineering, prompt optimization, memory retention, structured output with validation, provider comparison tools, high-level AI functions, robust error handling with retries, and an extensible architecture. With it, you can build AI-powered golems for tasks such as content creation workflows, complex reasoning, structured data generation, model performance analysis, prompt optimization, and mixture-of-agents setups.

README:

gollm - Go Large Language Model

Gophers building a robot by Renee French

gollm is a Go package designed to help you build your own AI golems. Just as the mystical golem of legend was brought to life with sacred words, gollm empowers you to breathe life into your AI creations using the power of Large Language Models (LLMs). This package simplifies and streamlines interactions with various LLM providers, offering a unified, flexible, and powerful interface for AI engineers and developers to craft their own digital servants.

Key Features

  • Unified API for Multiple LLM Providers: Shape your golem's mind using various providers, including OpenAI, Anthropic, Groq, and Ollama. Seamlessly switch between models like GPT-4, GPT-4o-mini, Claude, and Llama-3.1.
  • Easy Provider and Model Switching: Mold your golem's capabilities by configuring preferred providers and models with simple configuration options.
  • Flexible Configuration Options: Customize your golem's essence using environment variables, code-based configuration, or configuration files to suit your project's needs.
  • Advanced Prompt Engineering: Craft sophisticated instructions to guide your golem's responses effectively.
  • PromptOptimizer: Automatically refine and improve your prompts for better results, with support for custom metrics and different rating systems.
  • Memory Retention: Maintain context across multiple interactions for more coherent conversations.
  • Structured Output and Validation: Ensure your golem's outputs are consistent and reliable with JSON schema generation and validation.
  • Provider Comparison Tools: Test your golem's performance across different LLM providers and models for the same task.
  • High-Level AI Functions: Empower your golem with pre-built functions like ChainOfThought for complex reasoning tasks.
  • Robust Error Handling and Retries: Build resilience into your golem with built-in retry mechanisms to handle API rate limits and transient errors.
  • Extensible Architecture: Easily expand your golem's capabilities by extending support for new LLM providers and features.

Real-World Applications

Your gollm-powered golems can handle a wide range of AI-powered tasks, including:

  • Content Creation Workflows: Generate research summaries, article ideas, and refined paragraphs for writing projects.
  • Complex Reasoning Tasks: Use the ChainOfThought function to break down and analyze complex problems step-by-step.
  • Structured Data Generation: Create and validate complex data structures with customizable JSON schemas.
  • Model Performance Analysis: Compare different models' performance for specific tasks to optimize your AI pipeline.
  • Prompt Optimization: Automatically improve prompts for various tasks, from creative writing to technical documentation.
  • Mixture of Agents: Combine responses from multiple LLM providers to create diverse and robust AI agents.

Installation

go get github.com/teilomillet/gollm

Quick Start

Basic Usage

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	// Create a new LLM instance with configuration options
	llmInstance, err := gollm.NewLLM(
		gollm.WithProvider("openai"),
		gollm.WithModel("gpt-4o-mini"),
		gollm.WithAPIKey(os.Getenv("OPENAI_API_KEY")),
		gollm.WithMaxTokens(100),
	)
	if err != nil {
		log.Fatalf("Failed to create LLM: %v", err)
	}

	ctx := context.Background()

	// Create a new prompt
	prompt := gollm.NewPrompt("Tell me a short joke about programming.")

	// Generate a response
	response, err := llmInstance.Generate(ctx, prompt)
	if err != nil {
		log.Fatalf("Failed to generate text: %v", err)
	}
	fmt.Printf("Response: %s\n", response)
}

For more advanced usage, including research and content refinement, check out the examples directory.

Quick Reference

Here's a quick reference guide for the most commonly used functions and options in the gollm package:

LLM Creation and Configuration

llmInstance, err := gollm.NewLLM(
    gollm.WithProvider("openai"),
    gollm.WithModel("gpt-4"),
    gollm.WithAPIKey("your-api-key"),
    gollm.WithMaxTokens(100),
    gollm.WithTemperature(0.7),
    gollm.WithMemory(4096),
)

Prompt Creation

prompt := gollm.NewPrompt("Your prompt text here",
    gollm.WithContext("Additional context"),
    gollm.WithDirectives("Be concise", "Use examples"),
    gollm.WithOutput("Expected output format"),
    gollm.WithMaxLength(300),
)

Generate Response

response, err := llmInstance.Generate(ctx, prompt)

Chain of Thought

response, err := gollm.ChainOfThought(ctx, llmInstance, "Your question here")

Prompt Optimization

optimizer := gollm.NewPromptOptimizer(llmInstance, initialPrompt, taskDescription,
    gollm.WithCustomMetrics(/* custom metrics */),
    gollm.WithRatingSystem("numerical"),
    gollm.WithThreshold(0.8),
)
optimizedPrompt, err := optimizer.OptimizePrompt(ctx)

Model Comparison

results, err := gollm.CompareModels(ctx, prompt, validateFunc, config1, config2, config3)

Advanced Usage

The gollm package offers a range of advanced features to enhance your AI applications:

  • Prompt Engineering
  • Pre-built Functions (e.g., Chain of Thought)
  • Working with Examples
  • Prompt Templates
  • Structured Output (JSON output validation)
  • Prompt Optimizer
  • Model Comparison
  • Memory Retention

Here are examples of how to use these advanced features:

Prompt Engineering

Create sophisticated prompts with multiple components:

prompt := gollm.NewPrompt("Explain the concept of recursion in programming.",
    gollm.WithContext("The audience is beginner programmers."),
    gollm.WithDirectives(
        "Use simple language and avoid jargon.",
        "Provide a practical example.",
        "Explain potential pitfalls and how to avoid them.",
    ),
    gollm.WithOutput("Structure your response with sections: Definition, Example, Pitfalls, Best Practices."),
    gollm.WithMaxLength(300),
)

response, err := llmInstance.Generate(ctx, prompt)
if err != nil {
    log.Fatalf("Failed to generate explanation: %v", err)
}

fmt.Printf("Explanation of Recursion:\n%s\n", response)

Pre-built Functions (Chain of Thought)

Use the ChainOfThought function for step-by-step reasoning:

question := "What is the result of 15 * 7 + 22?"
response, err := gollm.ChainOfThought(ctx, llmInstance, question)
if err != nil {
    log.Fatalf("Failed to perform chain of thought: %v", err)
}
fmt.Printf("Chain of Thought:\n%s\n", response)

Working with Examples

Load examples directly from files:

examples, err := gollm.ReadExamplesFromFile("examples.txt")
if err != nil {
    log.Fatalf("Failed to read examples: %v", err)
}

prompt := gollm.NewPrompt("Generate a similar example:",
    gollm.WithExamples(examples...),
)

response, err := llmInstance.Generate(ctx, prompt)
if err != nil {
    log.Fatalf("Failed to generate example: %v", err)
}
fmt.Printf("Generated Example:\n%s\n", response)

Prompt Templates

Create reusable prompt templates for consistent prompt generation:

// Create a new prompt template
template := gollm.NewPromptTemplate(
    "AnalysisTemplate",
    "A template for analyzing topics",
    "Provide a comprehensive analysis of {{.Topic}}. Consider the following aspects:\n" +
    "1. Historical context\n" +
    "2. Current relevance\n" +
    "3. Future implications",
    gollm.WithPromptOptions(
        gollm.WithDirectives(
            "Use clear and concise language",
            "Provide specific examples where appropriate",
        ),
        gollm.WithOutput("Structure your analysis with clear headings for each aspect."),
    ),
)

// Use the template to create a prompt
data := map[string]interface{}{
    "Topic": "artificial intelligence in healthcare",
}
prompt, err := template.Execute(data)
if err != nil {
    log.Fatalf("Failed to execute template: %v", err)
}

// Generate a response using the created prompt
response, err := llmInstance.Generate(ctx, prompt)
if err != nil {
    log.Fatalf("Failed to generate response: %v", err)
}

fmt.Printf("Analysis:\n%s\n", response)
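The {{.Topic}} placeholder above follows Go's standard text/template syntax, which gollm's prompt templates build on. As a self-contained sketch of the underlying substitution (the helper name here is illustrative, not part of gollm's API):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderTemplate expands a {{.Topic}} placeholder using the same
// text/template syntax that gollm prompt templates are built on.
func renderTemplate(topic string) string {
	tmpl := template.Must(template.New("analysis").Parse(
		"Provide a comprehensive analysis of {{.Topic}}."))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]interface{}{"Topic": topic}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(renderTemplate("artificial intelligence in healthcare"))
	// Prints: Provide a comprehensive analysis of artificial intelligence in healthcare.
}
```

Because template.Execute reports an error for missing or malformed data, this is also a convenient place to catch templating mistakes before any tokens are spent on an LLM call.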

Structured Output (JSON Output Validation)

Ensure your LLM outputs are in a valid JSON format:

prompt := gollm.NewPrompt("Analyze the pros and cons of remote work.",
    gollm.WithOutput("Respond in JSON format with 'topic', 'pros', 'cons', and 'conclusion' fields."),
)

response, err := llmInstance.Generate(ctx, prompt, gollm.WithJSONSchemaValidation())
if err != nil {
    log.Fatalf("Failed to generate valid analysis: %v", err)
}

// AnalysisResult is a struct you define to match the requested
// fields (topic, pros, cons, conclusion) with JSON tags.
var result AnalysisResult
if err := json.Unmarshal([]byte(response), &result); err != nil {
    log.Fatalf("Failed to parse response: %v", err)
}

fmt.Printf("Analysis: %+v\n", result)

Prompt Optimizer

Use the PromptOptimizer to automatically refine and improve your prompts:

initialPrompt := "Write a short story about a robot learning to love."
taskDescription := "Generate a compelling short story that explores the theme of artificial intelligence developing emotions."

optimizer := gollm.NewPromptOptimizer(llmInstance, initialPrompt, taskDescription,
    gollm.WithCustomMetrics(
        gollm.Metric{Name: "Creativity", Description: "How original and imaginative the story is"},
        gollm.Metric{Name: "Emotional Impact", Description: "How well the story evokes feelings in the reader"},
    ),
    gollm.WithRatingSystem("numerical"),
    gollm.WithThreshold(0.8),
    gollm.WithVerbose(),
)

optimizedPrompt, err := optimizer.OptimizePrompt(ctx)
if err != nil {
    log.Fatalf("Optimization failed: %v", err)
}

fmt.Printf("Optimized Prompt: %s\n", optimizedPrompt)

Model Comparison

Compare responses from different LLM providers or models:

configs := []*gollm.Config{
    gollm.NewConfig(
        gollm.WithProvider("openai"),
        gollm.WithModel("gpt-4o-mini"),
        gollm.WithAPIKey("your-openai-api-key"),
    ),
    gollm.NewConfig(
        gollm.WithProvider("anthropic"),
        gollm.WithModel("claude-3-5-sonnet-20240620"),
        gollm.WithAPIKey("your-anthropic-api-key"),
    ),
    gollm.NewConfig(
        gollm.WithProvider("groq"),
        gollm.WithModel("llama-3.1-70b-versatile"),
        gollm.WithAPIKey("your-groq-api-key"),
    ),
}

prompt := "Tell me a joke about programming. Respond in JSON format with 'setup' and 'punchline' fields."

// validateJoke is a user-defined validation function (e.g. it checks
// that the response contains the 'setup' and 'punchline' fields).
results, err := gollm.CompareModels(context.Background(), prompt, validateJoke, configs...)
if err != nil {
    log.Fatalf("Error comparing models: %v", err)
}

fmt.Println(gollm.AnalyzeComparisonResults(results))
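The call above takes a user-supplied validateJoke function. Conceptually it just checks each response for the JSON fields the prompt requested; a standalone sketch of that check (the func(string) error signature is illustrative, not gollm's exact validator type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateJoke checks that a response is valid JSON containing the
// 'setup' and 'punchline' fields the comparison prompt asked for.
// The func(string) error signature is illustrative only.
func validateJoke(response string) error {
	var joke struct {
		Setup     string `json:"setup"`
		Punchline string `json:"punchline"`
	}
	if err := json.Unmarshal([]byte(response), &joke); err != nil {
		return fmt.Errorf("invalid JSON: %w", err)
	}
	if joke.Setup == "" || joke.Punchline == "" {
		return fmt.Errorf("missing setup or punchline field")
	}
	return nil
}

func main() {
	fmt.Println(validateJoke(`{"setup":"Why do programmers prefer dark mode?","punchline":"Because light attracts bugs."}`))
	// Prints: <nil>
}
```

A shared validator like this keeps the comparison fair: every provider's output is held to the same structural standard before its quality is judged.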

Memory Retention

Enable memory to maintain context across multiple interactions:

llmInstance, err := gollm.NewLLM(
    gollm.WithProvider("openai"),
    gollm.WithModel("gpt-3.5-turbo"),
    gollm.WithAPIKey(os.Getenv("OPENAI_API_KEY")),
    gollm.WithMemory(4096), // Enable memory with a 4096 token limit
)
if err != nil {
    log.Fatalf("Failed to create LLM: %v", err)
}

ctx := context.Background()

// First interaction
prompt1 := gollm.NewPrompt("What's the capital of France?")
response1, err := llmInstance.Generate(ctx, prompt1)
if err != nil {
    log.Fatalf("Failed to generate response: %v", err)
}
fmt.Printf("Response 1: %s\n", response1)

// Second interaction, referencing the first
prompt2 := gollm.NewPrompt("What's the population of that city?")
response2, err := llmInstance.Generate(ctx, prompt2)
if err != nil {
    log.Fatalf("Failed to generate response: %v", err)
}
fmt.Printf("Response 2: %s\n", response2)

Best Practices

  1. Prompt Engineering:

    • Use the NewPrompt() function with options like WithContext(), WithDirectives(), and WithOutput() to create well-structured prompts.
    • Example:
      prompt := gollm.NewPrompt("Your main prompt here",
          gollm.WithContext("Provide relevant context"),
          gollm.WithDirectives("Be concise", "Use examples"),
          gollm.WithOutput("Specify expected output format"),
      )
  2. Utilize Prompt Templates:

    • For consistent prompt generation, create and use PromptTemplate objects.
    • Example:
      template := gollm.NewPromptTemplate(
          "CustomTemplate",
          "A template for custom prompts",
          "Generate a {{.Type}} about {{.Topic}}",
          gollm.WithPromptOptions(
              gollm.WithDirectives("Be creative", "Use vivid language"),
              gollm.WithOutput("Your {{.Type}}:"),
          ),
      )
  3. Leverage Pre-built Functions:

    • Use provided functions like ChainOfThought() for complex reasoning tasks.
    • Example:
      response, err := gollm.ChainOfThought(ctx, llmInstance, "Your complex question here")
  4. Work with Examples:

    • Use the ReadExamplesFromFile() function to load examples from files for more consistent and varied outputs.
    • Example:
      examples, err := gollm.ReadExamplesFromFile("examples.txt")
      if err != nil {
          log.Fatalf("Failed to read examples: %v", err)
      }
  5. Implement Structured Output:

    • Use the WithJSONSchemaValidation() option when generating responses to ensure valid JSON outputs.
    • Example:
      response, err := llmInstance.Generate(ctx, prompt, gollm.WithJSONSchemaValidation())
  6. Optimize Prompts:

    • Utilize the PromptOptimizer to refine and improve your prompts automatically.
    • Example:
      optimizer := gollm.NewPromptOptimizer(llmInstance, initialPrompt, taskDescription,
          gollm.WithCustomMetrics(
              gollm.Metric{Name: "Relevance", Description: "How relevant the response is to the task"},
          ),
          gollm.WithRatingSystem("numerical"),
          gollm.WithThreshold(0.8),
      )
      optimizedPrompt, err := optimizer.OptimizePrompt(ctx)
  7. Compare Model Performances:

    • Use the CompareModels() function to evaluate different models or providers for specific tasks.
    • Example:
      results, err := gollm.CompareModels(ctx, prompt, validateFunc, config1, config2, config3)
  8. Implement Memory for Contextual Interactions:

    • Enable memory retention for maintaining context across multiple interactions.
    • Example:
      llmInstance, err := gollm.NewLLM(
          gollm.WithProvider("openai"),
          gollm.WithModel("gpt-3.5-turbo"),
          gollm.WithMemory(4096), // Enable memory with a 4096 token limit
      )
  9. Error Handling and Retries:

    • Always check for errors returned by gollm functions.
    • Configure retry mechanisms to handle transient errors and rate limits.
    • Example:
      llmInstance, err := gollm.NewLLM(
          gollm.WithMaxRetries(3),
          gollm.WithRetryDelay(time.Second * 2),
      )
  10. Secure API Key Handling:

    • Use environment variables or secure configuration management to handle API keys.
    • Example:
      llmInstance, err := gollm.NewLLM(
          gollm.WithAPIKey(os.Getenv("OPENAI_API_KEY")),
      )

By following these best practices, you can make the most effective use of the gollm package, creating more robust, efficient, and maintainable AI-powered applications.

Examples and Tutorials

Check out our examples directory for more usage examples, including:

  • Basic usage
  • Different prompt types
  • Comparing providers
  • Advanced prompt templates
  • Prompt optimization
  • JSON output validation
  • Mixture of Agents

Project Status

gollm is actively maintained and under continuous development. With the recent refactoring in version 0.1.0, we've streamlined the codebase to make it simpler and more accessible for new contributors. We welcome contributions and feedback from the community.

Philosophy

gollm is built on a philosophy of pragmatic minimalism and forward-thinking simplicity:

  1. Build what's necessary: We add features and capabilities as they become needed, avoiding speculative development.

  2. Simplicity first: Every addition should be as simple and straightforward as possible while fulfilling its purpose.

  3. Future-compatible: While we don't build for hypothetical future needs, we carefully consider how current changes might impact future development.

  4. Readability counts: Code should be clear and self-explanatory. If it's not, we improve it or document it thoroughly.

  5. Modular design: Each component should do one thing well, allowing for easy understanding, testing, and future modification.

This philosophy guides our development process and helps us maintain a lean, efficient, and adaptable codebase. We encourage all contributors to keep these principles in mind when proposing changes or new features.

Contributing

We welcome contributions that align with our philosophy! Whether you're fixing a bug, improving documentation, or proposing new features, your efforts are appreciated.

To get started:

  1. Familiarize yourself with our philosophy and development approach.
  2. Check out our CONTRIBUTING.md for guidelines on how to contribute.
  3. Look through our issues for something that interests you.
  4. Fork the repository, make your changes, and submit a pull request.

Remember, the best contributions are those that adhere to our philosophy of pragmatic minimalism and readability. We encourage you to include examples and clear comments with your code.

If you have any questions or ideas, feel free to open an issue or join our community chat. We're always excited to discuss how we can improve gollm while staying true to our core principles.

Thank you for helping make gollm better!

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
