
docutranslate
Document (novel, thesis, subtitle) translation tool (supports pdf/word/excel/json/epub/srt...)
Stars: 87

DocuTranslate is a lightweight, LLM-based tool for translating local documents. It supports many file formats (PDF, Word, Excel, JSON, EPUB, SRT, and more), preserves document formatting where possible, and works with most AI platforms. It can be used through a Python API, an interactive web interface, or a RESTful API.
README:
A lightweight local file translation tool based on Large Language Models
- ✅ Multiple Format Support: translates various files including `pdf`, `docx`, `xlsx`, `md`, `txt`, `json`, `epub`, `srt`, `ass`, and more.
- ✅ Automatic Glossary Generation: supports automatic generation of glossaries for term alignment.
- ✅ PDF Table, Formula, and Code Recognition: recognizes and translates the tables, formulas, and code often found in academic papers, powered by the `docling` and `mineru` PDF parsing engines.
- ✅ JSON Translation: supports selecting the values to translate in JSON via JSON paths (following `jsonpath-ng` syntax).
- ✅ Word/Excel Format Preservation: translates `docx` and `xlsx` files while preserving their original formatting (`doc` and `xls` files are not yet supported).
- ✅ Multi-AI Platform Support: compatible with most AI platforms, enabling high-performance, concurrent AI translation with custom prompts.
- ✅ Asynchronous Support: designed for high-performance scenarios with full asynchronous support, offering service interfaces for parallel tasks.
- ✅ LAN and Multi-user Support: can be used by multiple people simultaneously on a local area network.
- ✅ Interactive Web Interface: provides an out-of-the-box Web UI and RESTful API for easy integration and use.
- ✅ Small, Multi-platform Standalone Packages: Windows and Mac standalone packages under 40 MB (for versions that do not bundle the `docling` local PDF parser).
QQ Discussion Group: 1047781902
For users who want to get started quickly, we provide all-in-one packages on GitHub Releases. Simply download, unzip, and enter your AI platform API Key to begin.
- DocuTranslate: the standard version, which uses the online minerU engine to parse PDF documents. Choose this version if you don't need local PDF parsing (recommended).
- DocuTranslate_full: the full version, which bundles the `docling` local PDF parsing engine. Choose this version if you need local PDF parsing.
Install with pip:

```bash
# Basic installation
pip install docutranslate

# To use docling for local PDF parsing
pip install docutranslate[docling]
```

Install with uv:

```bash
# Initialize the environment
uv init

# Basic installation
uv add docutranslate

# Install the docling extension
uv add docutranslate[docling]
```

Install from source:

```bash
git clone https://github.com/xunbu/docutranslate.git
cd docutranslate
uv sync
```
The core of the new DocuTranslate is the Workflow. Each workflow is a complete, end-to-end translation pipeline designed for a specific file type. Instead of interacting with a single large class, you select and configure a workflow based on your file type.
The basic usage flow is as follows:
1. Select a workflow: choose a workflow based on your input file type (e.g., PDF/Word or TXT), such as `MarkdownBasedWorkflow` or `TXTWorkflow`.
2. Build the configuration: create the corresponding configuration object for the selected workflow (e.g., `MarkdownBasedWorkflowConfig`). This object contains all necessary sub-configurations, such as:
   - Converter config: defines how to convert the original file (like a PDF) to Markdown.
   - Translator config: defines which LLM, API key, target language, etc. to use.
   - Exporter config: defines options for the output format (like HTML).
3. Instantiate the workflow: create an instance of the workflow using the configuration object.
4. Execute the translation: call the workflow's `.read_*()` and `.translate()` / `.translate_async()` methods.
5. Export/save the results: call the `.export_to_*()` or `.save_as_*()` methods to get or save the translation results.
| Workflow | Use Case | Input Formats | Output Formats | Core Config Class |
|---|---|---|---|---|
| `MarkdownBasedWorkflow` | Processes rich-text documents like PDF, Word, and images. Flow: file -> Markdown -> translate -> export. | `.pdf`, `.docx`, `.md`, `.png`, `.jpg`, etc. | `.md`, `.zip`, `.html` | `MarkdownBasedWorkflowConfig` |
| `TXTWorkflow` | Processes plain-text documents. Flow: txt -> translate -> export. | `.txt` and other plain-text formats | `.txt`, `.html` | `TXTWorkflowConfig` |
| `JsonWorkflow` | Processes JSON files. Flow: json -> translate -> export. | `.json` | `.json`, `.html` | `JsonWorkflowConfig` |
| `DocxWorkflow` | Processes docx files. Flow: docx -> translate -> export. | `.docx` | `.docx`, `.html` | `DocxWorkflowConfig` |
| `XlsxWorkflow` | Processes xlsx files. Flow: xlsx -> translate -> export. | `.xlsx`, `.csv` | `.xlsx`, `.html` | `XlsxWorkflowConfig` |
| `SrtWorkflow` | Processes srt subtitle files. Flow: srt -> translate -> export. | `.srt` | `.srt`, `.html` | `SrtWorkflowConfig` |
| `EpubWorkflow` | Processes epub files. Flow: epub -> translate -> export. | `.epub` | `.epub`, `.html` | `EpubWorkflowConfig` |
| `HtmlWorkflow` | Processes html files. Flow: html -> translate -> export. | `.html`, `.htm` | `.html` | `HtmlWorkflowConfig` |
You can export to PDF format in the interactive interface.
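If you dispatch files to workflows programmatically, a small lookup table over the classes above does the job. A minimal sketch, not part of the library; the import paths are the ones used in the examples below, and the remaining workflows are omitted because their module paths are not shown in this README:

```python
from pathlib import Path

# Import paths as used in the examples later in this document.
from docutranslate.workflow.md_based_workflow import MarkdownBasedWorkflow
from docutranslate.workflow.txt_workflow import TXTWorkflow
from docutranslate.workflow.json_workflow import JsonWorkflow
from docutranslate.workflow.docx_workflow import DocxWorkflow
from docutranslate.workflow.xlsx_workflow import XlsxWorkflow

# Hypothetical helper mapping a file suffix to a workflow class from the table.
WORKFLOW_BY_SUFFIX = {
    ".pdf": MarkdownBasedWorkflow,
    ".md": MarkdownBasedWorkflow,
    ".txt": TXTWorkflow,
    ".json": JsonWorkflow,
    ".docx": DocxWorkflow,
    ".xlsx": XlsxWorkflow,
}

def workflow_class_for(path: str):
    """Return the workflow class for a file path, based on its suffix."""
    suffix = Path(path).suffix.lower()
    if suffix not in WORKFLOW_BY_SUFFIX:
        raise ValueError(f"No workflow registered for {suffix!r}")
    return WORKFLOW_BY_SUFFIX[suffix]
```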
For ease of use, DocuTranslate provides a full-featured Web interface and RESTful API.
Start the service:
```bash
# Start the service, listening on port 8010 by default
docutranslate -i

# Start on a specific port
docutranslate -i -p 8011

# You can also specify the port via an environment variable
export DOCUTRANSLATE_PORT=8011
docutranslate -i
```
- Interactive interface: after starting the service, visit http://127.0.0.1:8010 (or your specified port) in your browser.
- API documentation: the complete API documentation (Swagger UI) is available at http://127.0.0.1:8010/docs.
This is the most common use case. We will use the `minerU` engine to convert the PDF to Markdown and then use an LLM for translation. This example uses the asynchronous method.
```python
import asyncio

from docutranslate.workflow.md_based_workflow import MarkdownBasedWorkflow, MarkdownBasedWorkflowConfig
from docutranslate.converter.x2md.converter_mineru import ConverterMineruConfig
from docutranslate.translator.ai_translator.md_translator import MDTranslatorConfig
from docutranslate.exporter.md.md2html_exporter import MD2HTMLExporterConfig


async def main():
    # 1. Build the translator configuration
    translator_config = MDTranslatorConfig(
        base_url="https://open.bigmodel.cn/api/paas/v4",  # AI platform base URL
        api_key="YOUR_ZHIPU_API_KEY",  # AI platform API key
        model_id="glm-4-air",  # Model ID
        to_lang="English",  # Target language
        chunk_size=3000,  # Text chunk size
        concurrent=10,  # Concurrency level
        # glossary_generate_enable=True,  # Enable automatic glossary generation
        # glossary_dict={"Jobs": "乔布斯"},  # Pass in a glossary
        # system_proxy_enable=True,  # Enable the system proxy
    )

    # 2. Build the converter configuration (using minerU)
    converter_config = ConverterMineruConfig(
        mineru_token="YOUR_MINERU_TOKEN",  # Your minerU token
        formula_ocr=True  # Enable formula recognition
    )

    # 3. Build the main workflow configuration
    workflow_config = MarkdownBasedWorkflowConfig(
        convert_engine="mineru",  # Specify the parsing engine
        converter_config=converter_config,  # Pass the converter config
        translator_config=translator_config,  # Pass the translator config
        html_exporter_config=MD2HTMLExporterConfig(cdn=True)  # HTML export configuration
    )

    # 4. Instantiate the workflow
    workflow = MarkdownBasedWorkflow(config=workflow_config)

    # 5. Read the file and run the translation
    print("Reading and translating the file...")
    workflow.read_path("path/to/your/document.pdf")
    await workflow.translate_async()
    # Or use the synchronous method:
    # workflow.translate()
    print("Translation complete!")

    # 6. Save the results
    workflow.save_as_html(name="translated_document.html")
    workflow.save_as_markdown_zip(name="translated_document.zip")
    workflow.save_as_markdown(name="translated_document.md")  # Markdown with embedded images
    print("Files saved to the ./output folder.")

    # Or get the content as strings directly
    html_content = workflow.export_to_html()
    markdown_content = workflow.export_to_markdown()
    # print(html_content)


if __name__ == "__main__":
    asyncio.run(main())
```
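The commented-out fields in `MDTranslatorConfig` above control the glossary feature. A minimal sketch that enables it, assuming the fields behave as the comments describe (the seed pair is copied from the example above):

```python
# Sketch: the field names come from the example above; their exact
# semantics beyond the original comments are assumptions.
translator_config = MDTranslatorConfig(
    base_url="https://open.bigmodel.cn/api/paas/v4",
    api_key="YOUR_ZHIPU_API_KEY",
    model_id="glm-4-air",
    to_lang="English",
    glossary_generate_enable=True,    # Auto-generate a glossary for term alignment
    glossary_dict={"Jobs": "乔布斯"},  # Seed terms that should translate consistently
)
```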
For plain text files, the process is simpler as it doesn't require a document parsing (conversion) step. This example uses the asynchronous method.
```python
import asyncio

from docutranslate.workflow.txt_workflow import TXTWorkflow, TXTWorkflowConfig
from docutranslate.translator.ai_translator.txt_translator import TXTTranslatorConfig
from docutranslate.exporter.txt.txt2html_exporter import TXT2HTMLExporterConfig


async def main():
    # 1. Build the translator configuration
    translator_config = TXTTranslatorConfig(
        base_url="https://api.openai.com/v1/",
        api_key="YOUR_OPENAI_API_KEY",
        model_id="gpt-4o",
        to_lang="Chinese",
    )

    # 2. Build the main workflow configuration
    workflow_config = TXTWorkflowConfig(
        translator_config=translator_config,
        html_exporter_config=TXT2HTMLExporterConfig(cdn=True)
    )

    # 3. Instantiate the workflow
    workflow = TXTWorkflow(config=workflow_config)

    # 4. Read the file and run the translation
    workflow.read_path("path/to/your/notes.txt")
    await workflow.translate_async()
    # Or use the synchronous method:
    # workflow.translate()

    # 5. Save the result
    workflow.save_as_txt(name="translated_notes.txt")
    print("TXT file saved.")

    # You can also export the translated plain text
    text = workflow.export_to_txt()


if __name__ == "__main__":
    asyncio.run(main())
```
This example uses the asynchronous method. The `json_paths` field in `JsonTranslatorConfig` specifies the JSON paths to translate (conforming to `jsonpath-ng` syntax); only values matching these paths are translated.
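If you want to preview which values an expression will match before translating, you can evaluate it with the `jsonpath-ng` library directly; a small sketch with made-up sample data:

```python
from jsonpath_ng import parse

data = {"name": "hello", "meta": {"author": "x"}}

# "$.name" matches only the top-level "name" value.
print([m.value for m in parse("$.name").find(data)])  # ['hello']

# "$.*" matches every top-level value.
print([m.value for m in parse("$.*").find(data)])     # ['hello', {'author': 'x'}]
```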
```python
import asyncio

from docutranslate.exporter.js.json2html_exporter import Json2HTMLExporterConfig
from docutranslate.translator.ai_translator.json_translator import JsonTranslatorConfig
from docutranslate.workflow.json_workflow import JsonWorkflowConfig, JsonWorkflow


async def main():
    # 1. Build the translator configuration
    translator_config = JsonTranslatorConfig(
        base_url="https://api.openai.com/v1/",
        api_key="YOUR_OPENAI_API_KEY",
        model_id="gpt-4o",
        to_lang="Chinese",
        json_paths=["$.*", "$.name"]  # jsonpath-ng syntax; values at matching paths are translated
    )

    # 2. Build the main workflow configuration
    workflow_config = JsonWorkflowConfig(
        translator_config=translator_config,
        html_exporter_config=Json2HTMLExporterConfig(cdn=True)
    )

    # 3. Instantiate the workflow
    workflow = JsonWorkflow(config=workflow_config)

    # 4. Read the file and run the translation
    workflow.read_path("path/to/your/notes.json")
    await workflow.translate_async()
    # Or use the synchronous method:
    # workflow.translate()

    # 5. Save the result
    workflow.save_as_json(name="translated_notes.json")
    print("JSON file saved.")

    # You can also export the translated JSON text
    text = workflow.export_to_json()


if __name__ == "__main__":
    asyncio.run(main())
```
This example uses the asynchronous method.
```python
import asyncio

from docutranslate.exporter.docx.docx2html_exporter import Docx2HTMLExporterConfig
from docutranslate.translator.ai_translator.docx_translator import DocxTranslatorConfig
from docutranslate.workflow.docx_workflow import DocxWorkflowConfig, DocxWorkflow


async def main():
    # 1. Build the translator configuration
    translator_config = DocxTranslatorConfig(
        base_url="https://api.openai.com/v1/",
        api_key="YOUR_OPENAI_API_KEY",
        model_id="gpt-4o",
        to_lang="Chinese",
        insert_mode="replace",  # Options: "replace", "append", "prepend"
        separator="\n",  # Separator used in "append" and "prepend" modes
    )

    # 2. Build the main workflow configuration
    workflow_config = DocxWorkflowConfig(
        translator_config=translator_config,
        html_exporter_config=Docx2HTMLExporterConfig(cdn=True)
    )

    # 3. Instantiate the workflow
    workflow = DocxWorkflow(config=workflow_config)

    # 4. Read the file and run the translation
    workflow.read_path("path/to/your/notes.docx")
    await workflow.translate_async()
    # Or use the synchronous method:
    # workflow.translate()

    # 5. Save the result
    workflow.save_as_docx(name="translated_notes.docx")
    print("DOCX file saved.")

    # You can also export the translated DOCX as bytes
    docx_bytes = workflow.export_to_docx()


if __name__ == "__main__":
    asyncio.run(main())
```
This example uses the asynchronous method.
```python
import asyncio

from docutranslate.exporter.xlsx.xlsx2html_exporter import Xlsx2HTMLExporterConfig
from docutranslate.translator.ai_translator.xlsx_translator import XlsxTranslatorConfig
from docutranslate.workflow.xlsx_workflow import XlsxWorkflowConfig, XlsxWorkflow


async def main():
    # 1. Build the translator configuration
    translator_config = XlsxTranslatorConfig(
        base_url="https://api.openai.com/v1/",
        api_key="YOUR_OPENAI_API_KEY",
        model_id="gpt-4o",
        to_lang="Chinese",
        insert_mode="replace",  # Options: "replace", "append", "prepend"
        separator="\n",  # Separator used in "append" and "prepend" modes
    )

    # 2. Build the main workflow configuration
    workflow_config = XlsxWorkflowConfig(
        translator_config=translator_config,
        html_exporter_config=Xlsx2HTMLExporterConfig(cdn=True)
    )

    # 3. Instantiate the workflow
    workflow = XlsxWorkflow(config=workflow_config)

    # 4. Read the file and run the translation
    workflow.read_path("path/to/your/notes.xlsx")
    await workflow.translate_async()
    # Or use the synchronous method:
    # workflow.translate()

    # 5. Save the result
    workflow.save_as_xlsx(name="translated_notes.xlsx")
    print("XLSX file saved.")

    # You can also export the translated XLSX as bytes
    xlsx_bytes = workflow.export_to_xlsx()


if __name__ == "__main__":
    asyncio.run(main())
```
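Both `DocxTranslatorConfig` and `XlsxTranslatorConfig` accept `insert_mode` with the options "replace", "append", and "prepend". Presumably "append" keeps the original text and adds the translation after it, joined by `separator`, which would yield a bilingual document; a sketch under that assumption:

```python
# Assumption: "append" keeps the original text and appends the translation,
# joined by `separator`, producing bilingual output.
translator_config = DocxTranslatorConfig(
    base_url="https://api.openai.com/v1/",
    api_key="YOUR_OPENAI_API_KEY",
    model_id="gpt-4o",
    to_lang="Chinese",
    insert_mode="append",  # Keep the original and add the translation after it
    separator="\n",        # Inserted between original and translated text
)
```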
The translation feature relies on large language models. You need to obtain a `base_url`, `api_key`, and `model_id` from the respective AI platform.

Recommended models: Volcengine's `doubao-seed-1-6-flash` and `doubao-seed-1-6` series, Zhipu's `glm-4-flash`, Alibaba Cloud's `qwen-plus` and `qwen-flash`, DeepSeek's `deepseek-chat`, etc.
| Platform Name | Get API Key | Base URL |
|---|---|---|
| ollama | | http://127.0.0.1:11434/v1 |
| lm studio | | http://127.0.0.1:1234/v1 |
| openrouter | Click to get | https://openrouter.ai/api/v1 |
| openai | Click to get | https://api.openai.com/v1/ |
| gemini | Click to get | https://generativelanguage.googleapis.com/v1beta/openai/ |
| deepseek | Click to get | https://api.deepseek.com/v1 |
| Zhipu AI (智谱AI) | Click to get | https://open.bigmodel.cn/api/paas/v4 |
| Tencent Hunyuan (腾讯混元) | Click to get | https://api.hunyuan.cloud.tencent.com/v1 |
| Alibaba Cloud Bailian (阿里云百炼) | Click to get | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Volcengine (火山引擎) | Click to get | https://ark.cn-beijing.volces.com/api/v3 |
| SiliconFlow (硅基流动) | Click to get | https://api.siliconflow.cn/v1 |
| DMXAPI | Click to get | https://www.dmxapi.cn/v1 |
| Juguang AI (聚光AI) | Click to get | https://ai.juguang.chat/v1 |
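For a fully local setup, you can point any of the translator configs at an OpenAI-compatible local server from the table above. A sketch using ollama; the model ID is a placeholder for whatever model you have pulled, and ollama typically ignores the API key even though the field must be filled:

```python
# Local translation via ollama's OpenAI-compatible endpoint
# (base URL taken from the table above).
translator_config = TXTTranslatorConfig(
    base_url="http://127.0.0.1:11434/v1",
    api_key="ollama",    # Usually ignored by ollama, but the field is required
    model_id="qwen2.5",  # Placeholder: use a model you have pulled locally
    to_lang="Chinese",
)
```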
If you choose `mineru` as your document parsing engine (`convert_engine="mineru"`), you need to apply for a free token.

- Visit the minerU official website to register and apply for API access.
- Create a new API token in the API Token Management interface.

Note: minerU tokens are valid for 14 days. Please create a new one after expiration.
If you choose `docling` as your document parsing engine (`convert_engine="docling"`), it will download the required models from Hugging Face on first use. A better option is to download `docling_artifact.zip` from GitHub Releases and extract it into your working directory.

Solutions for network issues when downloading `docling` models:

1. Set a Hugging Face mirror (recommended):

   - Method A (environment variable): set the system environment variable `HF_ENDPOINT` and restart your IDE or terminal.

     ```bash
     HF_ENDPOINT=https://hf-mirror.com
     ```

   - Method B (in code): add the following at the beginning of your Python script.

     ```python
     import os
     os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
     ```

2. Offline usage (download the model package in advance):

   - Download `docling_artifact.zip` from GitHub Releases and extract it into your project directory.
   - Specify the model path in your configuration (if the model folder is not in the same directory as the script):

     ```python
     from docutranslate.converter.x2md.converter_docling import ConverterDoclingConfig

     converter_config = ConverterDoclingConfig(
         artifact="./docling_artifact",  # Path to the extracted folder
         code_ocr=True,
         formula_ocr=True
     )
     ```
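To use this converter end to end, wire it into `MarkdownBasedWorkflowConfig` with `convert_engine="docling"`, mirroring the minerU example earlier; a sketch, reusing a `translator_config` built as in that example:

```python
# Same structure as the minerU example, but with the local docling engine.
workflow_config = MarkdownBasedWorkflowConfig(
    convert_engine="docling",             # Use the local docling parser
    converter_config=converter_config,    # ConverterDoclingConfig from above
    translator_config=translator_config,  # Built as in the earlier examples
    html_exporter_config=MD2HTMLExporterConfig(cdn=True),
)
workflow = MarkdownBasedWorkflow(config=workflow_config)
```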
Q: Why is the translated text still in the original language?
A: Check the logs for errors. It's usually due to an overdue payment on the AI platform or network issues (check if you need to enable the system proxy).
Q: Port 8010 is already in use. What should I do?
A: Use the `-p` parameter to specify a new port, or set the `DOCUTRANSLATE_PORT` environment variable.
Q: Does it support translating scanned PDFs?
A: Yes. Use the `mineru` parsing engine, which has powerful OCR capabilities.
Q: Why is the first PDF translation very slow?
A: If you are using the `docling` engine, it needs to download models from Hugging Face on its first run. See the network-issue solutions above to speed this up.
Q: How can I use it in an intranet (offline) environment?
A: Yes, you can. You need to meet the following conditions:

- Local LLM: deploy a language model locally using tools like ollama or LM Studio, and fill in the local model's `base_url` in `TranslatorConfig`.
- Local PDF parsing engine (only needed for parsing PDFs): use the `docling` engine and download the model package in advance, as described in the "Offline usage" section above.
Q: How does the PDF parsing cache mechanism work?
A: `MarkdownBasedWorkflow` automatically caches the results of document parsing (file-to-Markdown conversion) to avoid repeated, time-consuming parsing. The cache lives in memory and holds the last 10 parses by default. You can change the cache size with the `DOCUTRANSLATE_CACHE_NUM` environment variable.
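For example, to keep the last 50 parses instead (50 is an arbitrary example value):

```bash
export DOCUTRANSLATE_CACHE_NUM=50
docutranslate -i
```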
Q: How can I make the software use a proxy?
A: By default, the software does not use the system proxy. You can enable it by setting `system_proxy_enable=True` in `TranslatorConfig`.
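This is a per-translator setting, matching the commented-out option in the first example; a sketch:

```python
# Enable routing requests through the system proxy (the field appears,
# commented out, in the first example above).
translator_config = MDTranslatorConfig(
    base_url="https://open.bigmodel.cn/api/paas/v4",
    api_key="YOUR_API_KEY",
    model_id="glm-4-air",
    to_lang="English",
    system_proxy_enable=True,
)
```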
Your support is welcome! Please mention the reason for your donation in the memo.