
wikipedia-mcp
A Model Context Protocol (MCP) server that retrieves information from Wikipedia to provide context to LLMs.
Stars: 99

The Wikipedia MCP Server is a Model Context Protocol (MCP) server that provides real-time access to Wikipedia information for Large Language Models (LLMs). It allows AI assistants to retrieve accurate and up-to-date information from Wikipedia to enhance their responses. The server offers features such as searching Wikipedia, retrieving article content, getting article summaries, extracting specific sections, discovering links within articles, finding related topics, supporting multiple languages and country codes, optional caching for improved performance, and compatibility with Google ADK agents and other AI frameworks. Users can install the server using pipx, Smithery, PyPI, virtual environment, or from source. The server can be run with various options for transport protocol, language, country/locale, caching, access token, and more. It also supports Docker and Kubernetes deployment. The server provides MCP tools for interacting with Wikipedia, such as searching articles, getting article content, summaries, sections, links, coordinates, related topics, and extracting key facts. It also supports country/locale codes and language variants for languages like Chinese, Serbian, Kurdish, and Norwegian. The server includes example prompts for querying Wikipedia and provides MCP resources for interacting with Wikipedia through MCP endpoints. The project structure includes main packages, API implementation, core functionality, utility functions, and a comprehensive test suite for reliability and functionality testing.
README:
A Model Context Protocol (MCP) server that retrieves information from Wikipedia to provide context to Large Language Models (LLMs). This tool helps AI assistants access factual information from Wikipedia to ground their responses in reliable sources.
The Wikipedia MCP server provides real-time access to Wikipedia information through a standardized Model Context Protocol interface. This allows LLMs to retrieve accurate and up-to-date information directly from Wikipedia to enhance their responses.
- Search Wikipedia: Find articles matching specific queries
- Retrieve Article Content: Get full article text with all information
- Article Summaries: Get concise summaries of articles
- Section Extraction: Retrieve specific sections from articles
- Link Discovery: Find links within articles to related topics
- Related Topics: Discover topics related to a specific article
- Multi-language Support: Access Wikipedia in different languages by specifying the --language or -l argument when running the server (e.g., wikipedia-mcp --language ta for Tamil)
- Country/Locale Support: Use intuitive country codes like --country US, --country China, or --country TW instead of language codes; these are automatically mapped to the appropriate Wikipedia language variants
- Language Variant Support: Language variants such as Chinese traditional/simplified (e.g., zh-hans for Simplified Chinese, zh-tw for Traditional Chinese), Serbian scripts (sr-latn, sr-cyrl), and other regional variants
- Optional Caching: Cache API responses for improved performance using --enable-cache
- Google ADK Compatibility: Fully compatible with Google ADK agents and other AI frameworks that use strict function calling schemas
The best way to install for Claude Desktop usage is with pipx, which installs the command globally:
# Install pipx if you don't have it
pip install pipx
pipx ensurepath
# Install the Wikipedia MCP server
pipx install wikipedia-mcp
This ensures the wikipedia-mcp command is available in Claude Desktop's PATH.
To install wikipedia-mcp for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @Rudra-ravi/wikipedia-mcp --client claude
You can also install directly from PyPI:
pip install wikipedia-mcp
Note: If you use this method and encounter connection issues with Claude Desktop, you may need to use the full path to the command in your configuration. See the Configuration section for details.
# Create a virtual environment
python3 -m venv venv
# Activate the virtual environment
source venv/bin/activate
# Install the package
pip install git+https://github.com/rudra-ravi/wikipedia-mcp.git
# Clone the repository
git clone https://github.com/rudra-ravi/wikipedia-mcp.git
cd wikipedia-mcp
# Create a virtual environment
python3 -m venv wikipedia-mcp-env
source wikipedia-mcp-env/bin/activate
# Install in development mode
pip install -e .
# If installed with pipx
wikipedia-mcp
# If installed in a virtual environment
source venv/bin/activate
wikipedia-mcp
# Specify transport protocol (default: stdio)
wikipedia-mcp --transport stdio # For Claude Desktop
wikipedia-mcp --transport sse # For HTTP streaming
# Specify language (default: en for English)
wikipedia-mcp --language ja # Example for Japanese
wikipedia-mcp --language zh-hans # Example for Simplified Chinese
wikipedia-mcp --language zh-tw # Example for Traditional Chinese (Taiwan)
wikipedia-mcp --language sr-latn # Example for Serbian Latin script
# Specify country/locale (alternative to language codes)
wikipedia-mcp --country US # English (United States)
wikipedia-mcp --country China # Chinese Simplified
wikipedia-mcp --country Taiwan # Chinese Traditional (Taiwan)
wikipedia-mcp --country Japan # Japanese
wikipedia-mcp --country Germany # German
wikipedia-mcp --country france # French (case insensitive)
# List all supported countries
wikipedia-mcp --list-countries
# Optional: Specify host/port for SSE (use 0.0.0.0 for containers)
wikipedia-mcp --transport sse --host 0.0.0.0 --port 8080
# Optional: Enable caching
wikipedia-mcp --enable-cache
# Optional: Use Personal Access Token to avoid rate limiting (403 errors)
wikipedia-mcp --access-token your_wikipedia_token_here
# Or set via environment variable
export WIKIPEDIA_ACCESS_TOKEN=your_wikipedia_token_here
wikipedia-mcp
# Combine options
wikipedia-mcp --country Taiwan --enable-cache --access-token your_token --transport sse --port 8080
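The --enable-cache flag keeps recent Wikipedia API responses in memory so repeated queries do not hit the API again. As a rough illustration of the idea (a minimal Python sketch, not the project's actual cache implementation), a TTL-based response cache might look like this:

```python
import time


class ResponseCache:
    """Illustrative TTL cache for API responses (not the project's actual implementation)."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.time() > expires_at:
            # Entry is stale: drop it and report a miss.
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.time() + self.ttl, value)


cache = ResponseCache(ttl_seconds=300)
cache.set(("summary", "Python (programming language)"), {"extract": "..."})
print(cache.get(("summary", "Python (programming language)")))
```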
### Docker/Kubernetes
When running inside containers, bind the SSE server to all interfaces and map
the container port to the host or service:
```bash
# Build and run with Docker
docker build -t wikipedia-mcp .
docker run --rm -p 8080:8080 wikipedia-mcp --transport sse --host 0.0.0.0 --port 8080
```

Kubernetes example (minimal):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wikipedia-mcp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wikipedia-mcp
  template:
    metadata:
      labels:
        app: wikipedia-mcp
    spec:
      containers:
        - name: server
          image: your-repo/wikipedia-mcp:latest
          args: ["--transport", "sse", "--host", "0.0.0.0", "--port", "8080"]
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: wikipedia-mcp
spec:
  selector:
    app: wikipedia-mcp
  ports:
    - name: http
      port: 8080
      targetPort: 8080
```
Add the following to your Claude Desktop configuration file:
Option 1: Using command name (requires wikipedia-mcp to be in PATH)
{
"mcpServers": {
"wikipedia": {
"command": "wikipedia-mcp"
}
}
}
Option 2: Using full path (recommended if you get connection errors)
{
"mcpServers": {
"wikipedia": {
"command": "/full/path/to/wikipedia-mcp"
}
}
}
Option 3: With country/language specification
{
"mcpServers": {
"wikipedia-us": {
"command": "wikipedia-mcp",
"args": ["--country", "US"]
},
"wikipedia-taiwan": {
"command": "wikipedia-mcp",
"args": ["--country", "TW"]
},
"wikipedia-japan": {
"command": "wikipedia-mcp",
"args": ["--country", "Japan"]
}
}
}
To find the full path, run: which wikipedia-mcp
Configuration file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%/Claude/claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Note: If you encounter connection errors, see the Troubleshooting section for solutions.
The Wikipedia MCP server provides the following tools for LLMs to interact with Wikipedia:
Search Wikipedia for articles matching a query.
Parameters:
- query (string): The search term
- limit (integer, optional): Maximum number of results to return (default: 10)
Returns:
- A list of search results with titles, snippets, and metadata
Get the full content of a Wikipedia article.
Parameters:
- title (string): The title of the Wikipedia article
Returns:
- Article content including text, summary, sections, links, and categories
Get a concise summary of a Wikipedia article.
Parameters:
- title (string): The title of the Wikipedia article
Returns:
- A text summary of the article
Get the sections of a Wikipedia article.
Parameters:
- title (string): The title of the Wikipedia article
Returns:
- A structured list of article sections with their content
Get the links contained within a Wikipedia article.
Parameters:
- title (string): The title of the Wikipedia article
Returns:
- A list of links to other Wikipedia articles
Get the coordinates of a Wikipedia article.
Parameters:
- title (string): The title of the Wikipedia article
Returns:
- A dictionary containing coordinate information including:
  - title: The article title
  - pageid: The page ID
  - coordinates: List of coordinate objects with latitude, longitude, and metadata
  - exists: Whether the article exists
  - error: Any error message if retrieval failed
Get topics related to a Wikipedia article based on links and categories.
Parameters:
- title (string): The title of the Wikipedia article
- limit (integer, optional): Maximum number of related topics (default: 10)
Returns:
- A list of related topics with relevance information
Get a summary of a Wikipedia article tailored to a specific query.
Parameters:
- title (string): The title of the Wikipedia article
- query (string): The query to focus the summary on
- max_length (integer, optional): Maximum length of the summary (default: 250)
Returns:
- A dictionary containing the title, query, and the focused summary
Get a summary of a specific section of a Wikipedia article.
Parameters:
- title (string): The title of the Wikipedia article
- section_title (string): The title of the section to summarize
- max_length (integer, optional): Maximum length of the summary (default: 150)
Returns:
- A dictionary containing the title, section title, and the section summary
Extract key facts from a Wikipedia article, optionally focused on a specific topic within the article.
Parameters:
- title (string): The title of the Wikipedia article
- topic_within_article (string, optional): A specific topic within the article to focus fact extraction on
- count (integer, optional): Number of key facts to extract (default: 5)
Returns:
- A dictionary containing the title, topic, and a list of extracted facts
The Wikipedia MCP server supports intuitive country and region codes as an alternative to language codes. This makes it easier to access region-specific Wikipedia content without needing to know language codes.
Use --list-countries to see all supported countries:
wikipedia-mcp --list-countries
This will display countries organized by language, for example:
Supported Country/Locale Codes:
========================================
en: US, USA, United States, UK, GB, Canada, Australia, ...
zh-hans: CN, China
zh-tw: TW, Taiwan
ja: JP, Japan
de: DE, Germany
fr: FR, France
es: ES, Spain, MX, Mexico, AR, Argentina, ...
pt: PT, Portugal, BR, Brazil
ru: RU, Russia
ar: SA, Saudi Arabia, AE, UAE, EG, Egypt, ...
# Major countries by code
wikipedia-mcp --country US # United States (English)
wikipedia-mcp --country CN # China (Simplified Chinese)
wikipedia-mcp --country TW # Taiwan (Traditional Chinese)
wikipedia-mcp --country JP # Japan (Japanese)
wikipedia-mcp --country DE # Germany (German)
wikipedia-mcp --country FR # France (French)
wikipedia-mcp --country BR # Brazil (Portuguese)
wikipedia-mcp --country RU # Russia (Russian)
# Countries by full name (case insensitive)
wikipedia-mcp --country "United States"
wikipedia-mcp --country China
wikipedia-mcp --country Taiwan
wikipedia-mcp --country Japan
wikipedia-mcp --country Germany
wikipedia-mcp --country france # Case insensitive
# Regional variants
wikipedia-mcp --country HK # Hong Kong (Traditional Chinese)
wikipedia-mcp --country SG # Singapore (Simplified Chinese)
wikipedia-mcp --country "Saudi Arabia" # Arabic
wikipedia-mcp --country Mexico # Spanish
The server automatically maps country codes to appropriate Wikipedia language editions:
- English-speaking: US, UK, Canada, Australia, New Zealand, Ireland, South Africa → en
- Chinese regions:
  - CN, China → zh-hans (Simplified Chinese)
  - TW, Taiwan → zh-tw (Traditional Chinese - Taiwan)
  - HK, Hong Kong → zh-hk (Traditional Chinese - Hong Kong)
  - SG, Singapore → zh-sg (Simplified Chinese - Singapore)
- Major languages: JP → ja, DE → de, FR → fr, ES → es, IT → it, RU → ru, etc.
- Regional variants: Supports 140+ countries and regions
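Conceptually, country resolution is a case-insensitive lookup from a country code or name to a Wikipedia language code. A minimal sketch of that idea follows (the table is a hypothetical subset for illustration; the real server covers 140+ entries):

```python
# Hypothetical subset of the country -> Wikipedia language mapping described above.
COUNTRY_TO_LANGUAGE = {
    "US": "en", "USA": "en", "UNITED STATES": "en", "UK": "en", "GB": "en",
    "CN": "zh-hans", "CHINA": "zh-hans",
    "TW": "zh-tw", "TAIWAN": "zh-tw",
    "HK": "zh-hk", "HONG KONG": "zh-hk",
    "JP": "ja", "JAPAN": "ja",
    "DE": "de", "GERMANY": "de",
    "FR": "fr", "FRANCE": "fr",
}


def resolve_country(country: str) -> str:
    """Map a country code or name (case-insensitive) to a Wikipedia language code."""
    key = country.strip().upper()
    try:
        return COUNTRY_TO_LANGUAGE[key]
    except KeyError:
        raise ValueError(
            f"Unsupported country/locale: '{country}'. "
            "Use --list-countries to see supported country codes."
        )


print(resolve_country("Taiwan"))  # -> zh-tw
```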
If you specify an unsupported country, you'll get a helpful error message:
$ wikipedia-mcp --country INVALID
Error: Unsupported country/locale: 'INVALID'.
Supported country codes include: US, USA, UK, GB, CA, AU, NZ, IE, ZA, CN.
Use --language parameter for direct language codes instead.
Use --list-countries to see supported country codes.
The Wikipedia MCP server supports language variants for languages that have multiple writing systems or regional variations. This feature is particularly useful for Chinese, Serbian, Kurdish, and other languages with multiple scripts or regional differences.
Chinese variants:
- zh-hans - Simplified Chinese
- zh-hant - Traditional Chinese
- zh-tw - Traditional Chinese (Taiwan)
- zh-hk - Traditional Chinese (Hong Kong)
- zh-mo - Traditional Chinese (Macau)
- zh-cn - Simplified Chinese (China)
- zh-sg - Simplified Chinese (Singapore)
- zh-my - Simplified Chinese (Malaysia)
Serbian variants:
- sr-latn - Serbian Latin script
- sr-cyrl - Serbian Cyrillic script
Kurdish variants:
- ku-latn - Kurdish Latin script
- ku-arab - Kurdish Arabic script
Norwegian:
- no - Norwegian (automatically mapped to Bokmål)
# Access Simplified Chinese Wikipedia
wikipedia-mcp --language zh-hans
# Access Traditional Chinese Wikipedia (Taiwan)
wikipedia-mcp --language zh-tw
# Access Serbian Wikipedia in Latin script
wikipedia-mcp --language sr-latn
# Access Serbian Wikipedia in Cyrillic script
wikipedia-mcp --language sr-cyrl
When you specify a language variant like zh-hans, the server:
- Maps the variant to the base Wikipedia language (e.g., zh for Chinese variants)
- Uses the base language for API connections to the Wikipedia servers
- Includes the variant parameter in API requests to get content in the specific variant
- Returns content formatted according to the specified variant's conventions
This approach ensures optimal compatibility with Wikipedia's API while providing access to variant-specific content and formatting.
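As a concrete illustration of that flow (a sketch that calls the public MediaWiki API directly with the requests library; it is not the server's actual code), fetching an article intro in a specific variant might look like this:

```python
import requests

# Hypothetical subset of the variant -> base-language mapping described above.
VARIANT_BASES = {
    "zh-hans": "zh", "zh-hant": "zh", "zh-tw": "zh", "zh-hk": "zh",
    "sr-latn": "sr", "sr-cyrl": "sr",
    "ku-latn": "ku", "ku-arab": "ku",
}


def fetch_summary(title: str, language: str) -> str:
    """Fetch an article intro, honoring a language variant such as zh-tw."""
    base = VARIANT_BASES.get(language, language)
    params = {
        "action": "query",
        "prop": "extracts",
        "exintro": 1,
        "explaintext": 1,
        "titles": title,
        "format": "json",
    }
    if base != language:
        # Ask the API to render content in the requested script/regional variant.
        params["variant"] = language
    resp = requests.get(f"https://{base}.wikipedia.org/w/api.php", params=params, timeout=10)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")


print(fetch_summary("臺灣", "zh-tw")[:200])
```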
Once the server is running and configured with Claude Desktop, you can use prompts like:
- "Tell me about quantum computing using the Wikipedia information."
- "Summarize the history of artificial intelligence based on Wikipedia."
- "What does Wikipedia say about climate change?"
- "Find Wikipedia articles related to machine learning."
- "Get me the introduction section of the article on neural networks from Wikipedia."
- "What are the coordinates of the Eiffel Tower?"
- "Find the latitude and longitude of Mount Everest from Wikipedia."
- "Get coordinate information for famous landmarks in Paris."
- "Search Wikipedia China for information about the Great Wall." (uses Chinese Wikipedia)
- "Tell me about Tokyo from Japanese Wikipedia sources."
- "What does German Wikipedia say about the Berlin Wall?"
- "Find information about the Eiffel Tower from French Wikipedia."
- "Get Taiwan Wikipedia's article about Taiwanese cuisine."
- "Search Traditional Chinese Wikipedia for information about Taiwan."
- "Find Simplified Chinese articles about modern China."
- "Get information from Serbian Latin Wikipedia about Belgrade."
The server also provides MCP resources (similar to HTTP endpoints but for MCP):
- search/{query}: Search Wikipedia for articles matching the query
- article/{title}: Get the full content of a Wikipedia article
- summary/{title}: Get a summary of a Wikipedia article
- sections/{title}: Get the sections of a Wikipedia article
- links/{title}: Get the links in a Wikipedia article
- coordinates/{title}: Get the coordinates of a Wikipedia article
- summary/{title}/query/{query}/length/{max_length}: Get a query-focused summary of an article
- summary/{title}/section/{section_title}/length/{max_length}: Get a summary of a specific article section
- facts/{title}/topic/{topic_within_article}/count/{count}: Extract key facts from an article
# Clone the repository
git clone https://github.com/rudra-ravi/wikipedia-mcp.git
cd wikipedia-mcp
# Create a virtual environment
python3 -m venv venv
source venv/bin/activate
# Install the package in development mode
pip install -e .
# Install development and test dependencies
pip install -r requirements-dev.txt
# Run the server
wikipedia-mcp
- wikipedia_mcp/: Main package
  - __main__.py: Entry point for the package
  - server.py: MCP server implementation
  - wikipedia_client.py: Wikipedia API client
  - api/: API implementation
  - core/: Core functionality
  - utils/: Utility functions
- tests/: Test suite
  - test_basic.py: Basic package tests
  - test_cli.py: Command-line interface tests
  - test_server_tools.py: Comprehensive server and tool tests
The project includes a comprehensive test suite to ensure reliability and functionality.
The test suite is organized in the tests/ directory with the following test files:
- test_basic.py: Basic package functionality tests
- test_cli.py: Command-line interface and transport tests
- test_server_tools.py: Comprehensive tests for all MCP tools and Wikipedia client functionality
# Install test dependencies
pip install -r requirements-dev.txt
# Run all tests
python -m pytest tests/ -v
# Run tests with coverage
python -m pytest tests/ --cov=wikipedia_mcp --cov-report=html
# Run only unit tests (excludes integration tests)
python -m pytest tests/ -v -m "not integration"
# Run only integration tests (requires internet connection)
python -m pytest tests/ -v -m "integration"
# Run specific test file
python -m pytest tests/test_server_tools.py -v
- WikipediaClient Tests: Mock-based tests for all client methods
  - Search functionality
  - Article retrieval
  - Summary extraction
  - Section parsing
  - Link extraction
  - Related topics discovery
- Server Tests: MCP server creation and tool registration
- CLI Tests: Command-line interface functionality
- Real API Tests: Tests that make actual calls to Wikipedia API
- End-to-End Tests: Complete workflow testing
The project uses pytest.ini for test configuration:
[pytest]
markers =
    integration: marks tests as integration tests (may require network access)
    slow: marks tests as slow running
testpaths = tests
addopts = -v --tb=short
All tests are designed to:
- Run reliably in CI/CD environments
- Handle network failures gracefully
- Provide clear error messages
- Cover edge cases and error conditions
When contributing new features:
- Add unit tests for new functionality
- Include both success and failure scenarios
- Mock external dependencies (Wikipedia API)
- Add integration tests for end-to-end validation
- Follow existing test patterns and naming conventions
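Following those guidelines, a unit test for new client functionality can mock the HTTP layer instead of calling Wikipedia. The sketch below illustrates the pattern; the patch target and the search method name are hypothetical and should be adapted to the real WikipediaClient interface:

```python
from unittest.mock import MagicMock, patch

from wikipedia_mcp.wikipedia_client import WikipediaClient


def test_search_returns_results_from_mocked_api():
    client = WikipediaClient()  # constructor arguments may differ in the real client

    fake_response = MagicMock()
    fake_response.raise_for_status.return_value = None
    fake_response.json.return_value = {
        "query": {"search": [{"title": "Python (programming language)", "snippet": "..."}]}
    }

    # Hypothetical patch target: patch whatever HTTP call the client actually makes.
    with patch("wikipedia_mcp.wikipedia_client.requests.get", return_value=fake_response):
        results = client.search("python", limit=1)  # hypothetical method name

    assert results  # expect at least one result when the mocked API returns a hit
```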
If you encounter 403 errors or rate limiting issues when making requests to Wikipedia, you can use a Personal Access Token to increase your rate limits.
- Go to Wikimedia API Portal
- Create an account or log in
- Navigate to the "Personal API tokens" section
- Generate a new token with appropriate permissions
You can provide your token in two ways:
# Option 1: Command-line argument
wikipedia-mcp --access-token your_wikipedia_token_here
# Option 2: Environment variable
export WIKIPEDIA_ACCESS_TOKEN=your_wikipedia_token_here
wikipedia-mcp
Claude Desktop with Access Token:
{
"mcpServers": {
"wikipedia": {
"command": "wikipedia-mcp",
"args": ["--access-token", "your_token_here"]
}
}
}
Claude Desktop with Environment Variable:
{
"mcpServers": {
"wikipedia": {
"command": "wikipedia-mcp",
"env": {
"WIKIPEDIA_ACCESS_TOKEN": "your_token_here"
}
}
}
}
With Multiple Options:
wikipedia-mcp --country US --access-token your_token --enable-cache --transport sse
- Keep your access token secure and never commit it to version control
- Use environment variables in production environments
- The token is automatically included in API requests using Bearer authentication
- Tokens are not logged or exposed in error messages for security
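For reference, attaching the token is standard Bearer authentication on each Wikipedia API request. A minimal sketch (illustrative only, not the server's exact request code):

```python
import os

import requests

token = os.environ.get("WIKIPEDIA_ACCESS_TOKEN")
headers = {"User-Agent": "wikipedia-mcp-example/0.1"}
if token:
    # Send the token as a standard Bearer credential; never hard-code it in source control.
    headers["Authorization"] = f"Bearer {token}"

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={"action": "query", "list": "search", "srsearch": "Eiffel Tower", "format": "json"},
    headers=headers,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["query"]["search"][0]["title"])
```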
Problem: Claude Desktop shows errors like spawn wikipedia-mcp ENOENT or cannot find the command.
Cause: This occurs when the wikipedia-mcp command is installed in a user-specific location (like ~/.local/bin/) that's not in Claude Desktop's PATH.
Solutions:
- Use the full path to the command (recommended):
  { "mcpServers": { "wikipedia": { "command": "/home/username/.local/bin/wikipedia-mcp" } } }
  To find your exact path, run: which wikipedia-mcp
- Install with pipx for global access:
  pipx install wikipedia-mcp
  Then use the standard configuration:
  { "mcpServers": { "wikipedia": { "command": "wikipedia-mcp" } } }
- Create a symlink to a global location:
  sudo ln -s ~/.local/bin/wikipedia-mcp /usr/local/bin/wikipedia-mcp
- Article Not Found: Check the exact spelling of article titles
- Rate Limiting / 403 Errors: Use a Personal Access Token to increase rate limits (see Personal Access Tokens section)
- Large Articles: Some Wikipedia articles are very large and may exceed token limits
The Model Context Protocol (MCP) is not a traditional HTTP API but a specialized protocol for communication between LLMs and external tools. Key characteristics:
- Uses stdio (standard input/output) or SSE (Server-Sent Events) for communication
- Designed specifically for AI model interaction
- Provides standardized formats for tools, resources, and prompts
- Integrates directly with Claude and other MCP-compatible AI systems
Claude Desktop acts as the MCP client, while this server provides the tools and resources that Claude can use to access Wikipedia information.
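To make the client/server relationship concrete, the sketch below connects to the server over stdio, lists its tools, and calls one of them. It assumes the official MCP Python SDK (the mcp package); the tool name passed to call_tool is a placeholder, so discover the real names from list_tools first:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    # Launch the server as a subprocess and talk to it over stdin/stdout.
    params = StdioServerParameters(command="wikipedia-mcp", args=["--language", "en"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])
            # "search_wikipedia" is a placeholder; use one of the names printed above.
            result = await session.call_tool("search_wikipedia", {"query": "Eiffel Tower", "limit": 3})
            print(result.content)


asyncio.run(main())
```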
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
- 🌐 Portfolio: ravikumar-dev.me
- 📝 Blog: Medium
- 💼 LinkedIn: in/ravi-kumar-e
- 🐦 Twitter: @Ravikumar_d3v
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.