
hud-python
OSS RL environment + evals toolkit
Stars: 179

hud-python is an open-source toolkit for building reinforcement learning environments and running agent evaluations. It wraps software as MCP (Model Context Protocol) environments that any agent can call, streams live telemetry for every tool call, observation, and reward, ships public benchmarks such as OSWorld-Verified and SheetBench-50, and includes built-in RL training pipelines that run locally or at scale in the cloud.
README:
OSS RL environment + evals toolkit. Wrap software as environments, run benchmarks, and train with RL, locally or at scale.
Hop on a call or email [email protected]
- MCP environment skeleton: any agent can call any environment.
- Live telemetry: inspect every tool call, observation, and reward in real time.
- Public benchmarks: OSWorld-Verified, SheetBench-50, and more.
- Reinforcement learning built-in: Verifiers gym pipelines for GRPO on any environment.
- Cloud browsers: AnchorBrowser, Steel, and BrowserBase integrations for browser automation.
- Hot-reload dev loop: `hud dev` for iterating on environments without rebuilds.

We welcome contributors and feature requests: open an issue or hop on a call to discuss improvements!
# Core installation - MCP servers, telemetry, basic tools for environment design
pip install hud-python

# Agent installation - adds AI providers, datasets
pip install "hud-python[agent]"

# CLI utilities
uv tool install hud-python
# uv tool update-shell

# From source (latest)
git clone https://github.com/hud-evals/hud-python
pip install -e "hud-python[dev]"
See docs.hud.so, or add the docs to any MCP client:

claude mcp add --transport http docs-hud https://docs.hud.so/mcp
For a tutorial that explains the agent and evaluation design, run (see the quickstart docs):

uvx hud-python quickstart
Or just write your own agent loop (more examples here).
import asyncio

import hud
from hud.settings import settings
from hud.clients import MCPClient
from hud.agents import ClaudeAgent
from hud.datasets import Task  # See docs: https://docs.hud.so/reference/tasks

async def main() -> None:
    with hud.trace("Quick Start 2048"):  # All telemetry works for any MCP-based agent (see https://app.hud.so)
        task = {
            "prompt": "Reach 64 in 2048.",
            "mcp_config": {
                "hud": {
                    "url": "https://mcp.hud.so/v3/mcp",  # HUD's cloud MCP server (see https://docs.hud.so/core-concepts/architecture)
                    "headers": {
                        "Authorization": f"Bearer {settings.api_key}",  # Get your key at https://app.hud.so
                        "Mcp-Image": "hudpython/hud-text-2048:v1.2",  # Docker image from https://hub.docker.com/u/hudpython
                    },
                }
            },
            "evaluate_tool": {
                "name": "evaluate",
                "arguments": {"name": "max_number", "arguments": {"target": 64}},
            },
        }
        task = Task(**task)

        # 1. Define the client explicitly:
        client = MCPClient(mcp_config=task.mcp_config)
        agent = ClaudeAgent(
            mcp_client=client,
            model="claude-sonnet-4-20250514",  # requires ANTHROPIC_API_KEY
        )
        result = await agent.run(task)
        # 2. Or just:
        # result = await ClaudeAgent().run(task)

        print(f"Reward: {result.reward}")
        await client.shutdown()

asyncio.run(main())
The above example lets the agent play 2048 (see the replay).
This is a Qwen2.5-VL-3B agent training a policy on the 2048-basic browser environment:
Train with the new interactive `hud rl` flow:

# Install CLI with RL extras
uv tool install "hud-python[rl]"

# Option A: Run directly from a HuggingFace dataset
hud rl hud-evals/basic-2048

# Option B: Download first, modify, then train
hud get hud-evals/basic-2048
hud rl basic-2048.jsonl

# Optional: baseline evaluation
hud eval basic-2048.jsonl
Supports multi-turn RL for both:
- Language-only models (e.g., `Qwen/Qwen2.5-7B-Instruct`)
- Vision-Language models (e.g., `Qwen/Qwen2.5-VL-3B-Instruct`)
By default, `hud rl` provisions a persistent server and trainer in the cloud, streams telemetry to app.hud.so, and lets you monitor and manage models at app.hud.so/models. Use `--local` to run entirely on your own machines (typically 2+ GPUs: one for vLLM, the rest for training).
Any HUD MCP environment and evaluation works with our RL pipeline (including remote configurations). See the guided docs: https://docs.hud.so/train-agents/quickstart.
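For reference, after `hud get`, each line of `basic-2048.jsonl` is one task object with the same fields as the `Task` example above. Below is an illustrative sketch of a single entry (pretty-printed here for readability; the real file keeps each object on one line, and the exact field values are assumptions, with the API key shown as a placeholder):

{
  "prompt": "Reach 128 in 2048.",
  "mcp_config": {
    "hud": {
      "url": "https://mcp.hud.so/v3/mcp",
      "headers": {
        "Authorization": "Bearer <HUD_API_KEY>",
        "Mcp-Image": "hudpython/hud-text-2048:v1.2"
      }
    }
  },
  "evaluate_tool": {
    "name": "evaluate",
    "arguments": {"name": "max_number", "arguments": {"target": 128}}
  }
}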
This is Claude Computer Use running on our proprietary financial analyst benchmark SheetBench-50:
This example runs the full dataset (it takes only ~20 minutes) using `run_evaluation.py`:

python examples/run_evaluation.py hud-evals/SheetBench-50 --full --agent claude
Or in code:

import asyncio

from hud.datasets import run_dataset
from hud.agents import ClaudeAgent

async def main() -> None:
    results = await run_dataset(
        name="My SheetBench-50 Evaluation",
        dataset="hud-evals/SheetBench-50",  # <-- HuggingFace dataset
        agent_class=ClaudeAgent,  # <-- Your custom agent can replace this (see https://docs.hud.so/evaluate-agents/create-agents)
        agent_config={"model": "claude-sonnet-4-20250514"},
        max_concurrent=50,
        max_steps=30,
    )
    print(f"Average reward: {sum(r.reward for r in results) / len(results):.2f}")

asyncio.run(main())
Running a dataset creates a job and streams results to the app.hud.so platform for analysis and leaderboard submission.
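Since each returned result carries its own reward (as used in the average above), you can also compute simple aggregates locally before looking at the dashboard. A minimal sketch, assuming only the `.reward` attribute shown in the example:

# Summarize rewards from run_dataset results (assumes each result exposes
# a numeric .reward attribute, as in the snippet above).
def summarize(results, threshold: float = 1.0) -> dict:
    rewards = [r.reward for r in results]
    return {
        "tasks": len(rewards),
        "mean_reward": sum(rewards) / len(rewards),
        "solved": sum(1 for x in rewards if x >= threshold),  # rewards at or above threshold
    }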
This is how you can turn any piece of software into an interactive environment in 5 steps:
- Define the MCP server layer using `MCPServer`:

from hud.server import MCPServer
from hud.tools import HudComputerTool

mcp = MCPServer("My Environment")

# Add hud tools (see all tools: https://docs.hud.so/reference/tools)
mcp.add_tool(HudComputerTool())

# Or custom tools (see https://docs.hud.so/build-environments/adapting-software)
@mcp.tool("launch_app")
def launch_app(name: str = "Gmail"):
    ...

if __name__ == "__main__":
    mcp.run()
- Write a simple Dockerfile that installs packages and runs (a fuller sketch follows below):

CMD ["python", "-m", "hud_controller.server"]

And build the image:

hud build  # runs docker build under the hood

Or run it in hot-reload mode:

hud dev
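For concreteness, here is a minimal sketch of such a Dockerfile, assuming a Python package named `hud_controller` with a `server` module; the base image and file layout are illustrative, not prescribed by hud:

# Illustrative Dockerfile; adjust the base image and dependencies to your environment.
FROM python:3.11-slim

WORKDIR /app

# Install the environment's package and its dependencies
COPY pyproject.toml ./
COPY hud_controller/ ./hud_controller/
RUN pip install --no-cache-dir .

# Launch the MCP server (the command from the step above)
CMD ["python", "-m", "hud_controller.server"]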
- Debug it with the CLI to see if it launches:

$ hud debug my-name/my-environment:latest
✓ Phase 1: Docker image exists
✓ Phase 2: MCP server responds to initialize
✓ Phase 3: Tools are discoverable
✓ Phase 4: Basic tool execution works
✓ Phase 5: Parallel performance is good
Progress: [████████████████████] 5/5 phases (100%)
✓ All phases completed successfully!
Analyze it to see if all tools appear:

$ hud analyze hudpython/hud-remote-browser:latest
✓ Analysis complete
...
Tools
├── Regular Tools
│   └── computer
│       └── Control computer with mouse, keyboard, and screenshots
...
└── Hub Tools
    ├── setup
    │   ├── navigate_to_url
    │   ├── set_cookies
    │   └── ...
    └── evaluate
        ├── url_match
        ├── page_contains
        ├── cookie_exists
        └── ...

Telemetry Data
Live URL  https://live.anchorbrowser.io?sessionId=abc123def456
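The setup and evaluate entries above are hub tools exposed by the environment image; a task's `evaluate_tool` (as in the 2048 quickstart) selects one of them by name and passes arguments, and the tool's result supplies the reward. As a rough sketch of the idea only, reusing the `@mcp.tool` decorator from step 1 (real hud environments may register setup/evaluate tools through a different hub mechanism, and `get_current_url` is a hypothetical helper):

# Hypothetical evaluator sketch, not the canonical hud API.
@mcp.tool("url_match")
def url_match(target: str) -> dict:
    current = get_current_url()  # hypothetical helper from your environment code
    return {"reward": 1.0 if current == target else 0.0}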
- When the tests pass, push it up to the Docker registry:

hud push  # needs docker login, hud api key
- Now you can use mcp.hud.so to launch 100s of instances of this environment in parallel with any agent, and see everything live on app.hud.so:

import asyncio
import os

from hud.agents import ClaudeAgent

async def main() -> None:
    result = await ClaudeAgent().run({  # See all agents: https://docs.hud.so/reference/agents
        "prompt": "Please explore this environment",
        "mcp_config": {
            "my-environment": {
                "url": "https://mcp.hud.so/v3/mcp",
                "headers": {
                    "Authorization": f"Bearer {os.getenv('HUD_API_KEY')}",
                    "Mcp-Image": "my-name/my-environment:latest",
                },
            }
            # "my-environment": {  # or use hud run, which wraps local and remote running
            #     "cmd": "hud",
            #     "args": ["run", "my-name/my-environment:latest"],
            # }
        },
    })
    print(f"Reward: {result.reward}")

asyncio.run(main())
See the full environment design guide and common pitfalls in environments/README.md.
All leaderboards are publicly available on app.hud.so/leaderboards (see docs).
For the most consistent results across jobs, we highly suggest running each dataset 3-5 times.
Using the `run_dataset` function with a HuggingFace dataset automatically assigns your job to that dataset's leaderboard page and lets you create a scorecard from it:
%%{init: {"theme": "neutral", "themeVariables": {"fontSize": "14px"}} }%%
graph LR
    subgraph "Platform"
        Dashboard["app.hud.so"]
        API["mcp.hud.so"]
    end
    subgraph "hud"
        Agent["Agent"]
        Task["Task"]
        SDK["SDK"]
    end
    subgraph "Environments"
        LocalEnv["Local Docker<br/>(Development)"]
        RemoteEnv["Remote Docker<br/>(100s Parallel)"]
    end
    subgraph "otel"
        Trace["Traces & Metrics"]
    end
    Dataset["Dataset<br/>(HuggingFace)"]
    AnyMCP["Any MCP Client<br/>(Cursor, Claude, Custom)"]

    Agent <--> SDK
    Task --> SDK
    Dataset <-.-> Task
    SDK <-->|"MCP"| LocalEnv
    SDK <-->|"MCP"| API
    API <-->|"MCP"| RemoteEnv
    SDK --> Trace
    Trace --> Dashboard
    AnyMCP -->|"MCP"| API
| Command | Purpose |
|---|---|
| `hud init` | Create new environment with boilerplate. |
| `hud dev` | Hot-reload development with Docker. |
| `hud build` | Build image and generate lock file. |
| `hud push` | Share environment to registry. |
| `hud pull <target>` | Get environment from registry. |
| `hud analyze <image>` | Discover tools, resources, and metadata. |
| `hud debug <image>` | Five-phase health check of an environment. |
| `hud run <image>` | Run MCP server locally or remotely. |

Each command is documented in the CLI reference on docs.hud.so.
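Putting the table together, a typical environment lifecycle might look like the following; the exact arguments are illustrative, so check each command's help for specifics:

hud init                                   # scaffold a new environment with boilerplate
hud dev                                    # iterate with hot-reload while editing tools
hud build                                  # build the image and generate the lock file
hud debug my-name/my-environment:latest    # run the five-phase health check
hud analyze my-name/my-environment:latest  # confirm tools and metadata are discoverable
hud push                                   # share the environment to the registry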
- Merging our forks into the main mcp, mcp_use, and verifiers repositories
- Helpers for building new environments (see the current guide)
- Integrations with every major agent framework
- Evaluation environment registry
- Native RL training for hud environments (see current RL support)
- MCP opentelemetry standard
We welcome contributions! See CONTRIBUTING.md for guidelines.
Key areas:
- Environment examples - Add new MCP environments
- Agent implementations - Add support for new LLM providers
- Tool library - Extend the built-in tool collection
- RL training - Improve reinforcement learning pipelines
Thanks to all our contributors!
@software{hud2025agentevalplatform,
  author = {HUD and Jay Ram and Lorenss Martinsons and Parth Patel and Oskars Putans and Govind Pimpale and Mayank Singamreddy and Nguyen Nhat Minh},
  title = {HUD: An Evaluation Platform for Agents},
  date = {2025-04},
  url = {https://github.com/hud-evals/hud-python},
  langid = {en}
}
License: HUD is released under the MIT License; see the LICENSE file for details.
Alternative AI tools for hud-python
Similar Open Source Tools


LocalAGI
LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code. It provides a complete drop-in replacement for OpenAI's Responses APIs with advanced agentic capabilities. With LocalAGI, you can create customizable AI assistants, automations, chat bots, and agents that run 100% locally, without the need for cloud services or API keys. The platform offers features like no-code agents, web-based interface, advanced agent teaming, connectors for various platforms, comprehensive REST API, short & long-term memory capabilities, planning & reasoning, periodic tasks scheduling, memory management, multimodal support, extensible custom actions, fully customizable models, observability, and more.

mcphub.nvim
MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.

e2m
E2M is a Python library that can parse and convert various file types into Markdown format. It supports the conversion of multiple file formats, including doc, docx, epub, html, htm, url, pdf, ppt, pptx, mp3, and m4a. The ultimate goal of the E2M project is to provide high-quality data for Retrieval-Augmented Generation (RAG) and model training or fine-tuning. The core architecture consists of a Parser responsible for parsing various file types into text or image data, and a Converter responsible for converting text or image data into Markdown format.

client-python
The Mistral Python Client is a tool inspired by cohere-python that allows users to interact with the Mistral AI API. It provides functionalities to access and utilize the AI capabilities offered by Mistral. Users can easily install the client using pip and manage dependencies using poetry. The client includes examples demonstrating how to use the API for various tasks, such as chat interactions. To get started, users need to obtain a Mistral API Key and set it as an environment variable. Overall, the Mistral Python Client simplifies the integration of Mistral AI services into Python applications.

prometheus-mcp-server
Prometheus MCP Server is a Model Context Protocol (MCP) server that provides access to Prometheus metrics and queries through standardized interfaces. It allows AI assistants to execute PromQL queries and analyze metrics data. The server supports executing queries, exploring metrics, listing available metrics, viewing query results, and authentication. It offers interactive tools for AI assistants and can be configured to choose specific tools. Installation methods include using Docker Desktop, MCP-compatible clients like Claude Desktop, VS Code, Cursor, and Windsurf, and manual Docker setup. Configuration options include setting Prometheus server URL, authentication credentials, organization ID, transport mode, and bind host/port. Contributions are welcome, and the project uses `uv` for managing dependencies and includes a comprehensive test suite for functionality testing.

MHA2MLA
This repository contains the code for the paper 'Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs'. It provides tools for fine-tuning and evaluating Llama models, converting models between different frameworks, processing datasets, and performing specific model training tasks like Partial-RoPE Fine-Tuning and Multiple-Head Latent Attention Fine-Tuning. The repository also includes commands for model evaluation using Lighteval and LongBench, along with necessary environment setup instructions.

pocketgroq
PocketGroq is a tool that provides advanced functionalities for text generation, web scraping, web search, and AI response evaluation. It includes features like an Autonomous Agent for answering questions, web crawling and scraping capabilities, enhanced web search functionality, and flexible integration with Ollama server. Users can customize the agent's behavior, evaluate responses using AI, and utilize various methods for text generation, conversation management, and Chain of Thought reasoning. The tool offers comprehensive methods for different tasks, such as initializing RAG, error handling, and tool management. PocketGroq is designed to enhance development processes and enable the creation of AI-powered applications with ease.

cua
Cua is a tool for creating and running high-performance macOS and Linux virtual machines on Apple Silicon, with built-in support for AI agents. It provides libraries like Lume for running VMs with near-native performance, Computer for interacting with sandboxes, and Agent for running agentic workflows. Users can refer to the documentation for onboarding, explore demos showcasing AI-Gradio and GitHub issue fixing, and utilize accessory libraries like Core, PyLume, Computer Server, and SOM. Contributions are welcome, and the tool is open-sourced under the MIT License.

LLMVoX
LLMVoX is a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming Text-to-Speech (TTS) system designed to convert text outputs from Large Language Models into high-fidelity streaming speech with low latency. It achieves significantly lower Word Error Rate compared to speech-enabled LLMs while operating at comparable latency and speech quality. Key features include being lightweight & fast with only 30M parameters, LLM-agnostic for easy integration with existing models, multi-queue streaming for continuous speech generation, and multilingual support for easy adaptation to new languages.

mcp
Semgrep MCP Server is a beta server under active development for using Semgrep to scan code for security vulnerabilities. It provides a Model Context Protocol (MCP) for various coding tools to get specialized help in tasks. Users can connect to Semgrep AppSec Platform, scan code for vulnerabilities, customize Semgrep rules, analyze and filter scan results, and compare results. The tool is published on PyPI as semgrep-mcp and can be installed using pip, pipx, uv, poetry, or other methods. It supports CLI and Docker environments for running the server. Integration with VS Code is also available for quick installation. The project welcomes contributions and is inspired by core technologies like Semgrep and MCP, as well as related community projects and tools.

ai00_server
AI00 RWKV Server is an inference API server for the RWKV language model based upon the web-rwkv inference engine. It supports VULKAN parallel and concurrent batched inference and can run on all GPUs that support VULKAN. No need for Nvidia cards!!! AMD cards and even integrated graphics can be accelerated!!! No need for bulky pytorch, CUDA and other runtime environments, it's compact and ready to use out of the box! Compatible with OpenAI's ChatGPT API interface. 100% open source and commercially usable, under the MIT license. If you are looking for a fast, efficient, and easy-to-use LLM API server, then AI00 RWKV Server is your best choice. It can be used for various tasks, including chatbots, text generation, translation, and Q&A.

LTEngine
LTEngine is a free and open-source local AI machine translation API written in Rust. It is self-hosted and compatible with LibreTranslate. LTEngine utilizes large language models (LLMs) via llama.cpp, offering high-quality translations that rival or surpass DeepL for certain languages. It supports various accelerators like CUDA, Metal, and Vulkan, with the largest model 'gemma3-27b' fitting on a single consumer RTX 3090. LTEngine is actively developed, with a roadmap outlining future enhancements and features.

candle-vllm
Candle-vllm is an efficient and easy-to-use platform designed for inference and serving local LLMs, featuring an OpenAI compatible API server. It offers a highly extensible trait-based system for rapid implementation of new module pipelines, streaming support in generation, efficient management of key-value cache with PagedAttention, and continuous batching. The tool supports chat serving for various models and provides a seamless experience for users to interact with LLMs through different interfaces.

client-ts
Mistral Typescript Client is an SDK for Mistral AI API, providing Chat Completion and Embeddings APIs. It allows users to create chat completions, upload files, create agent completions, create embedding requests, and more. The SDK supports various JavaScript runtimes and provides detailed documentation on installation, requirements, API key setup, example usage, error handling, server selection, custom HTTP client, authentication, providers support, standalone functions, debugging, and contributions.
For similar tasks


Genshin-Party-Builder
Party Builder for Genshin Impact is an AI-assisted team creation tool that helps players assemble well-rounded teams by analyzing characters' attributes, constellation levels, weapon types, elemental reactions, roles, and community scores. It allows users to optimize their team compositions for better gameplay experiences. The tool provides a user-friendly interface for easy team customization and strategy planning, enhancing the overall gaming experience for Genshin Impact players.

GTA5-Stand-LuaAIO
GTA5-Stand-LuaAIO is a comprehensive Lua script for Grand Theft Auto V that enhances gameplay by providing various features and functionalities. It is designed to streamline the gaming experience and offer players a wide range of customization options. The script includes features such as vehicle spawning, teleportation, weather control, and more, making it a versatile tool for GTA V players looking to enhance their gameplay.

Wave-Executor
Wave Executor is a robust Windows-based script executor tailored for Roblox enthusiasts. It boasts AI integration for seamless script development, ad-free premium features, and 24/7 support, ensuring an unparalleled user experience and elevating gameplay to new heights.

Wave-Executor
Wave Executor is a cutting-edge Roblox script executor designed for advanced script execution, optimized performance, and seamless user experience. Fully compatible with the latest Roblox updates, it is secure, easy to use, and perfect for gamers, developers, and modding enthusiasts looking to enhance their Roblox gameplay.

delta-executor
Delta Executor is a high-performance Roblox script executor designed for smooth script execution, enhanced gameplay, and seamless usability. It is built with security in mind, fully compatible with the latest Roblox updates, offering a stable and efficient experience for gamers, developers, and modding enthusiasts.

Luna-Executor
Luna Executor is a high-performance Roblox script executor designed for smooth script execution, enhanced gameplay, and seamless usability. Built with security in mind, it remains fully compatible with the latest Roblox updates, offering a stable and efficient experience for gamers, developers, and modding enthusiasts.

OpenDevin
OpenDevin is an open-source project aiming to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects. The project aspires to enhance and innovate upon Devin through the power of the open-source community. Users can contribute to the project by developing core functionalities, frontend interface, or sandboxing solutions, participating in research and evaluation of LLMs in software engineering, and providing feedback and testing on the OpenDevin toolset.
For similar jobs

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.

MahjongCopilot
Mahjong Copilot is an AI assistant for the game Mahjong, based on the mjai (Mortal model) bot implementation. It provides step-by-step guidance for each move in the game, and can also be used to automatically play and join games. Mahjong Copilot supports both 3-person and 4-person Mahjong games, and is available in multiple languages.

DotRecast
DotRecast is a C# port of Recast & Detour, a navigation library used in many AAA and indie games and engines. It provides automatic navmesh generation, fast turnaround times, detailed customization options, and is dependency-free. Recast Navigation is divided into multiple modules, each contained in its own folder: - DotRecast.Core: Core utils - DotRecast.Recast: Navmesh generation - DotRecast.Detour: Runtime loading of navmesh data, pathfinding, navmesh queries - DotRecast.Detour.TileCache: Navmesh streaming. Useful for large levels and open-world games - DotRecast.Detour.Crowd: Agent movement, collision avoidance, and crowd simulation - DotRecast.Detour.Dynamic: Robust support for dynamic nav meshes combining pre-built voxels with dynamic objects which can be freely added and removed - DotRecast.Detour.Extras: Simple tool to import navmeshes created with A* Pathfinding Project - DotRecast.Recast.Toolset: All modules - DotRecast.Recast.Demo: Standalone, comprehensive demo app showcasing all aspects of Recast & Detour's functionality - Tests: Unit tests Recast constructs a navmesh through a multi-step mesh rasterization process: 1. First Recast rasterizes the input triangle meshes into voxels. 2. Voxels in areas where agents would not be able to move are filtered and removed. 3. The walkable areas described by the voxel grid are then divided into sets of polygonal regions. 4. The navigation polygons are generated by re-triangulating the generated polygonal regions into a navmesh. You can use Recast to build a single navmesh, or a tiled navmesh. Single meshes are suitable for many simple, static cases and are easy to work with. Tiled navmeshes are more complex to work with but better support larger, more dynamic environments. Tiled meshes enable advanced Detour features like re-baking, hierarchical path-planning, and navmesh data-streaming.

better-genshin-impact
BetterGI is a project based on computer vision technology, which aims to make Genshin Impact better. It can automatically pick up items, skip dialogues, automatically select options, automatically submit items, close pop-up pages, etc. When talking to Katherine, it can automatically receive the "Daily Commission" rewards and automatically re-dispatch. When the automatic plot function is turned on, this function will take effect, and the invitation options will be automatically selected. AI recognizes automatic casting, automatically reels in when the fish is hooked, and automatically completes the fishing progress. Help you easily complete the Seven Saint Summoning character invitation, weekly visitor challenge and other PVE content. Automatically use the "King Tree Blessing" with the `Z` key, and use the principle of refreshing wood by going online and offline to hang up a backpack full of wood. Write combat scripts to let the team fight automatically according to your strategy. Fully automatic secret realm hangs up to restore physical strength, automatically enters the secret realm to open the key, fight, walk to the ancient tree and receive rewards. Click the teleportation point on the map, or if there is a teleportation point in the list that appears after clicking, it will automatically click the teleportation point and teleport. Set a shortcut key, and long press to continuously rotate the perspective horizontally (of course you can also use it to rotate the grass god). Quickly switch between "Details" and "Enhance" pages to skip the display of holy relic enhancement results and quickly +20. You can quickly purchase items in the store in full quantity, which is suitable for quickly clearing event redemptions, Serenitea Pot store redemptions, etc.

Egaroucid
Egaroucid is one of the strongest Othello AI applications in the world. It is available as a GUI application for Windows, a console application for Windows, MacOS, and Linux, and a web application. Egaroucid is free to use and open source under the GPL 3.0 license. It is highly customizable and can be used for a variety of purposes, including playing Othello against a computer opponent, analyzing Othello games, and developing Othello AI algorithms.

emgucv
Emgu CV is a cross-platform .Net wrapper for the OpenCV image-processing library. It allows OpenCV functions to be called from .NET compatible languages. The wrapper can be compiled by Visual Studio, Unity, and "dotnet" command, and it can run on Windows, Mac OS, Linux, iOS, and Android.

ai-game-development-tools
Here we will keep track of AI game development tools, including LLM, Agent, Code, Framework, Writer, Image, Texture, Shader, 3D Model, Avatar, Animation, Video, Audio, Music, Singing Voice, Speech, Analytics, and Video Tool.