
LLM-in-Sandbox

🌐 Project Page · 📄 Paper · 🤗 Huggingface · 📦 Model and Data · 🎬 YouTube Video · 💻 LLM-in-Sandbox-RL

Give your LLM a computer, unlocking general agentic intelligence

As vibe coding becomes common and 🦞 OpenClaw draws widespread attention, we present the first systematic study showing that placing an LLM inside a code sandbox with basic computer capabilities lets it significantly outperform standalone LLMs across chemistry, physics, math, biomedicine, long-context understanding, and instruction-following with no extra training. RL further amplifies the gains.

  • 📈 Consistent improvements across diverse non-code domains
  • 🧠 File system as long-term memory, up to 8× token savings
  • 🐳 Docker isolation for security (vs. unrestricted setups like 🦞 OpenClaw)
  • 🔌 Works with OpenAI, Anthropic, vLLM, SGLang, etc.
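
The "file system as long-term memory" point can be illustrated with a small sketch. This is a hypothetical illustration of the idea, not the project's actual mechanism: rather than re-sending a long document in every prompt, an agent inside a sandbox can store it once and read back only the lines a later step needs.

```python
# Hypothetical illustration of "file system as long-term memory": store a
# long document once, then retrieve only the relevant lines later instead
# of carrying the whole text in every prompt.
from pathlib import Path

doc = "\n".join(f"fact {i}: filler" for i in range(1000)) + "\nfact 1000: key result"
Path("memory.txt").write_text(doc)

# A later step searches the file rather than re-reading everything into context.
hits = [ln for ln in Path("memory.txt").read_text().splitlines() if "key result" in ln]
print(hits[0])                    # fact 1000: key result
print(len(doc) // len(hits[0]))   # rough factor saved by retrieving one line
```

The token savings come from the retrieval step: only the matching line re-enters the context, not the full document.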

Feel free to open an issue if you have any questions or run into any problems. We'd be happy to help!

Experiment Results

Demo Video
▶️ Click to watch the demo video

News

  • [2026-02-12] Released code, data, and wandb log for LLM-in-Sandbox Reinforcement Learning at LLM-in-Sandbox-RL.
  • [2026-02-11] v0.2.0: Added benchmark module for reproducing paper results, evaluating any LLM, and adding your own tasks.

Table of Contents

  • Installation
  • Docker Image
  • Quick Start
  • Parameters (Common)
  • Output
  • More Examples
  • Benchmark and Reproduction
  • Contact Us
  • Acknowledgment
  • Citation

Installation

Requirements: Python 3.10+, Docker

pip install llm-in-sandbox

Or install from source:

git clone https://github.com/llm-in-sandbox/llm-in-sandbox.git
cd llm-in-sandbox
pip install -e .

Docker Image

The default Docker image (cdx123/llm-in-sandbox:v0.1) will be automatically pulled when you first run the agent. The first run may take a minute to download the image (~400MB), but subsequent runs will start instantly.

Advanced: Build your own image

Modify Dockerfile and build your own image:

llm-in-sandbox build

Quick Start

LLM-in-Sandbox works with various LLM providers including OpenAI, Anthropic, and self-hosted servers (vLLM, SGLang, etc.).

Option 1: Cloud / API Services

llm-in-sandbox run \
    --query "write a hello world in python" \
    --llm_name "openai/gpt-5" \
    --llm_base_url "http://your-api-server/v1" \
    --api_key "your-api-key"

Option 2: Self-Hosted Models

Using local vLLM server for Qwen3-Coder-30B-A3B-Instruct

1. Start vLLM server:

vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct \
    --served-model-name qwen3_coder \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder \
    --tensor-parallel-size 8  \
    --enable-prefix-caching

2. Run agent (in a new terminal once server is ready):

llm-in-sandbox run \
    --query "write a hello world in python" \
    --llm_name qwen3_coder \
    --llm_base_url "http://localhost:8000/v1"  \
    --temperature 0.7

Using local SGLang server for DeepSeek-V3.2-Thinking

1. Start SGLang server:

python3 -m sglang.launch_server \
    --model-path "deepseek-ai/DeepSeek-V3.2" \
    --served-model-name "DeepSeek-V3.2" \
    --trust-remote-code \
    --tp-size 8 \
    --tool-call-parser deepseekv32 \
    --reasoning-parser deepseek-v3 \
    --host 0.0.0.0 \
    --port 5678

2. Run agent (in a new terminal once server is ready):

llm-in-sandbox run \
    --query "write a hello world in python" \
    --llm_name DeepSeek-V3.2 \
    --llm_base_url "http://0.0.0.0:5678/v1" \
    --extra_body '{"chat_template_kwargs": {"thinking": true}}'
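
Note that --extra_body is parsed as JSON, so booleans must use the JSON-style lowercase spelling (true), not the Python-style one (True). If you build the string programmatically, json.dumps gets this right:

```python
# --extra_body takes a JSON string; json.dumps emits valid JSON booleans
# (lowercase true), avoiding the common Python-style "True" mistake.
import json

extra_body = json.dumps({"chat_template_kwargs": {"thinking": True}})
print(extra_body)  # {"chat_template_kwargs": {"thinking": true}}

# json.loads rejects the Python-style spelling outright:
try:
    json.loads("{'thinking': True}")
except json.JSONDecodeError:
    print("not valid JSON")
```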

Parameters (Common)

| Parameter | Description | Default |
|---|---|---|
| `--query` | Task for the agent | required |
| `--llm_name` | Model name | required |
| `--llm_base_url` | API endpoint URL | from `LLM_BASE_URL` env var |
| `--api_key` | API key (not needed for local server) | from `OPENAI_API_KEY` env var |
| `--input_dir` | Input files folder to mount (optional) | None |
| `--output_dir` | Output folder for results | `./output` |
| `--docker_image` | Docker image to use | `cdx123/llm-in-sandbox:v0.1` |
| `--prompt_config` | Path to prompt template | `./config/general.yaml` |
| `--temperature` | Sampling temperature | `1.0` |
| `--max_steps` | Max conversation turns | `100` |
| `--extra_body` | Extra JSON body for LLM API calls | None |

Run llm-in-sandbox run --help for all available parameters.
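
Conceptually, --max_steps bounds an LLM-propose / sandbox-execute loop. The sketch below is a hypothetical, simplified version of such a loop; the function names and stopping rule are illustrative, not the project's actual implementation:

```python
# Hypothetical sketch of an agent loop: the LLM proposes a shell command,
# the sandbox runs it, the observation is fed back, and the loop stops at
# max_steps or when the model signals it is done (returns None here).
import subprocess

def run_agent(propose_command, max_steps=100):
    history = []
    for _ in range(max_steps):
        cmd = propose_command(history)
        if cmd is None:  # model decides the task is finished
            break
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        history.append((cmd, result.stdout))  # feed the observation back
    return history

# Toy stand-in for the LLM: run one command, then stop.
history = run_agent(lambda h: "echo hello from the sandbox" if not h else None)
print(history[0][1].strip())  # hello from the sandbox
```

In the real tool the command execution happens inside the Docker container, which is what makes the loop safe to run on arbitrary model output.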

Output

Each run creates a timestamped folder:

output/2026-01-16_14-30-00/
├── files/
│   ├── answer.txt      # Final answer
│   └── hello_world.py  # Output file
└── trajectory.json     # Execution history
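
A run folder can also be inspected programmatically. The exact schema of trajectory.json is not documented in this README, so the sketch below assumes it is a JSON list of step records, and builds a toy run folder first so it is self-contained:

```python
# Hypothetical reader for a run folder. Assumes trajectory.json is a JSON
# list of step records (an assumption, not the documented schema); a toy
# folder is created first so the sketch runs on its own.
import json
from pathlib import Path

run = Path("output/2026-01-16_14-30-00")
(run / "files").mkdir(parents=True, exist_ok=True)
(run / "files" / "answer.txt").write_text("hello world\n")
(run / "trajectory.json").write_text(json.dumps([{"role": "user"}, {"role": "assistant"}]))

steps = json.loads((run / "trajectory.json").read_text())
answer = (run / "files" / "answer.txt").read_text().strip()
print(f"{len(steps)} steps, final answer: {answer}")  # 2 steps, final answer: hello world
```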

More Examples

We provide examples across diverse non-coding domains: scientific reasoning, long-context understanding, instruction following, travel planning, video production, music composition, poster design, and more.

👉 See examples/README.md for the full list.

Benchmark and Reproduction

Reproduce our paper results, evaluate any LLM in the sandbox, or add your own tasks.

👉 See llm_in_sandbox/benchmark/README.md

Contact Us

Feel free to open an issue if you have any questions or run into any problems; we'd be happy to help! You can also reach us directly at [email protected] and [email protected].

Acknowledgment

We learned from the design of, and reused code from, R2E-Gym. Thanks for the great work!

Citation

If you find our work helpful, please cite us:

@article{cheng2026llm,
  title={LLM-in-Sandbox Elicits General Agentic Intelligence},
  author={Cheng, Daixuan and Huang, Shaohan and Gu, Yuxian and Song, Huatong and Chen, Guoxin and Dong, Li and Zhao, Wayne Xin and Wen, Ji-Rong and Wei, Furu},
  journal={arXiv preprint arXiv:2601.16206},
  year={2026}
}
