sgr-agent-core

An agentic system design based on Schema-Guided Reasoning (SGR), created by the neuraldeep community

Stars: 995

SGR Agent Core is an open-source agentic framework for building intelligent research agents using Schema-Guided Reasoning. It provides a core library with an extendable BaseAgent interface implementing a two-phase architecture and multiple ready-to-use research agent implementations. The library includes tools for search, reasoning, and clarification, real-time streaming responses, and an OpenAI-compatible REST API. It works with any OpenAI-compatible LLM, including local models for fully private research. The framework is production-ready, with comprehensive test coverage and Docker support.

README:

SGR Agent Core — the first open-source agentic framework for Schema-Guided Reasoning (SGR)

SGR Concept Architecture

Open-source agentic framework for building intelligent research agents using Schema-Guided Reasoning. The project provides a core library with an extendable BaseAgent interface implementing a two-phase architecture and multiple ready-to-use research agent implementations built on top of it.

The library includes extensible tools for search, reasoning, and clarification, real-time streaming responses, and an OpenAI-compatible REST API. It works with any OpenAI-compatible LLM, including local models for fully private research.

Why use SGR Agent Core?

  • Schema-Guided Reasoning — SGR combines structured reasoning with flexible tool selection
  • Multiple Agent Types — Choose from SGRAgent, ToolCallingAgent, or SGRToolCallingAgent
  • Extensible Architecture — Easy to create custom agents and tools
  • OpenAI-Compatible API — Drop-in replacement for OpenAI API endpoints
  • Real-time Streaming — Built-in support for streaming responses via SSE
  • Production Ready — Battle-tested with comprehensive test coverage and Docker support

Documentation

Get started quickly with our documentation:

Quick Start

Running with Docker

The fastest way to get started is using Docker:

# Clone the repository
git clone https://github.com/vamplabai/sgr-agent-core.git
cd sgr-agent-core

# Create directories with write permissions for all
sudo mkdir -p logs reports
sudo chmod 777 logs reports

# Copy and edit the configuration file
cp examples/sgr_deep_research/config.yaml.example examples/sgr_deep_research/config.yaml
# Edit examples/sgr_deep_research/config.yaml and set your API keys:
# - llm.api_key: Your OpenAI API key
# - search.tavily_api_key: Your Tavily API key (optional)

# Run the container
docker run --rm -i \
  --name sgr-agent \
  -p 8010:8010 \
  -v $(pwd)/examples/sgr_deep_research:/app/examples/sgr_deep_research:ro \
  -v $(pwd)/logs:/app/logs \
  -v $(pwd)/reports:/app/reports \
  ghcr.io/vamplabai/sgr-agent-core:latest \
  --config-file /app/examples/sgr_deep_research/config.yaml \
  --host 0.0.0.0 \
  --port 8010

The API server will be available at http://localhost:8010 with OpenAI-compatible API endpoints. Interactive API documentation (Swagger UI) is available at http://localhost:8010/docs.
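
Once the container is up, any OpenAI-compatible client can talk to the server. The snippet below is a minimal sketch using the official openai Python package: the /v1 base path, the placeholder API key, and passing an agent name (sgr_agent) as the model are assumptions made for illustration, so check the Swagger UI at /docs for the exact routes and agent names exposed by your configuration.

# Minimal Python client sketch against the local server.
# Assumptions: the /v1 base path and "sgr_agent" as the model name;
# verify both at http://localhost:8010/docs for your configuration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8010/v1", api_key="not-needed-for-local")

# Stream the agent's answer as it is generated (served via SSE)
stream = client.chat.completions.create(
    model="sgr_agent",  # assumed agent name; see the agent examples later in this README
    messages=[{"role": "user", "content": "What is Schema-Guided Reasoning?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()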

Installation

If you want to use SGR Agent Core as a Python library (framework):

pip install sgr-agent-core

See the Installation Guide for detailed instructions and the Using as Library guide to get started.

Running Research Agents

The project includes example research agent configurations in the examples/ directory. To get started with deep research agents:

  1. Copy and configure the config file (a minimal config sketch is shown after these steps):
cp examples/sgr_deep_research/config.yaml.example examples/sgr_deep_research/config.yaml
# Edit examples/sgr_deep_research/config.yaml and set your API keys:
# - llm.api_key: Your OpenAI API key
# - search.tavily_api_key: Your Tavily API key (optional)
  2. Run the API server using the sgr utility:
sgr --config-file examples/sgr_deep_research/config.yaml
# or use short option
sgr -c examples/sgr_deep_research/config.yaml

Note: You can also run the server directly with Python:

python -m sgr_agent_core.server --config-file examples/sgr_deep_research/config.yaml
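
For reference, a minimal config.yaml might look roughly like the sketch below. The nesting under llm and search is only inferred from the dotted key names referenced above (llm.api_key, search.tavily_api_key); examples/sgr_deep_research/config.yaml.example remains the authoritative template.

# Minimal config sketch -- structure inferred from the keys above,
# not the full schema; start from config.yaml.example in practice.
llm:
  api_key: "sk-..."             # your OpenAI-compatible API key
search:
  tavily_api_key: "tvly-..."    # optional, enables web search via Tavily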

Using the CLI Tool (sgrsh)

For interactive command-line usage, you can use the sgrsh utility:

# Single query mode
sgrsh "Find the price of bitcoin"

# With agent selection (e.g. sgr_agent, dialog_agent)
sgrsh --agent sgr_agent "What is AI?"

# With custom config file
sgrsh -c config.yaml -a sgr_agent "Your query"

# Interactive chat mode (no query argument)
sgrsh
sgrsh -a sgr_agent

The sgrsh command:

  • Automatically looks for config.yaml in the current directory
  • Supports interactive chat mode for multiple queries
  • Handles clarification and dialog (intermediate results) requests from agents
  • Works with any agent defined in your configuration (e.g. sgr_agent, dialog_agent)

For more examples and detailed usage instructions, see the examples/ directory.

Benchmarking

SimpleQA Benchmark Comparison

Performance Metrics on gpt-4.1-mini:

  • Accuracy: 86.08%
  • Correct: 3,724 answers
  • Incorrect: 554 answers
  • Not Attempted: 48 answers
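
For context, the accuracy figure corresponds to correct answers over all 4,326 questions: 3,724 / 4,326 ≈ 86.08%.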

More detailed benchmark results are available here.

Open-Source Development Team

All development is driven by pure enthusiasm and open-source community collaboration. We welcome contributors of all skill levels!

If you have any questions, feel free to join our community chat↗️ or reach out to Valerii Kovalskii↗️.

Special Thanks To:

This project is developed by the neuraldeep community. It is inspired by the Schema-Guided Reasoning (SGR) work and the SGR Agent Demo↗️ delivered by the "LLM Under the Hood" community and the AI R&D Hub of TIMETOACT GROUP Österreich↗️.

This project is supported by the AI R&D team at red_mad_robot, providing research capacity, engineering expertise, infrastructure, and operational support.

Learn more about red_mad_robot: redmadrobot.ai↗️ habr↗️ telegram↗️

Star History

Star History Chart
