manga-image-translator
Translate manga/image 一键翻译各类图片内文字 https://cotrans.touhou.ai/
Stars: 5111
README:
Translate texts in manga/images.
中文说明 | Change Log
Join us on discord https://discord.gg/Ak8APNy4vb
Some manga/images will never be translated, which is why this project was born.
- Image/Manga Translator
Please note that the samples may not always be up to date and may not represent the current main branch version.
(Sample images omitted here. The samples show original vs. translated pages, plus the generated text masks, for sources @09ra_19ra, @VERTIGRIS_ART (translated with --detector ctd), @hiduki_yayoi (translated with --translator none), and @rikak.)
Official Demo (by zyddnys): https://touhou.ai/imgtrans/
Browser Userscript (by QiroNT): https://greasyfork.org/scripts/437569
- Note this may not work sometimes because Google GCP keeps restarting my instance. In that case you can wait for me to restart the service, which may take up to 24 hrs.
- Note this online demo is using the current main branch version.
Successor to MMDOCR-HighPerformance.
This is a hobby project, you are welcome to contribute!
Currently this is only a simple demo; many imperfections exist, and we need your support to make this project better!
Primarily designed for translating Japanese text, but also supports Chinese, English and Korean.
Supports inpainting, text rendering and colorization.
# First, you need to have Python(>=3.8) installed on your system
# The latest version often does not work with some pytorch libraries yet
$ python --version
Python 3.10.6
# Clone this repo
$ git clone https://github.com/zyddnys/manga-image-translator.git
# Create venv
$ python -m venv venv
# Activate venv
$ source venv/bin/activate
# For --use-gpu option go to https://pytorch.org/ and follow
# pytorch installation instructions. Add `--upgrade --force-reinstall`
# to the pip command to overwrite the currently installed pytorch version.
# Install the dependencies
$ pip install -r requirements.txt
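If you plan to use the --use-gpu option, the exact pytorch install command depends on your CUDA version. The lines below are only a sketch assuming CUDA 12.1 wheels; check https://pytorch.org/ for the command matching your setup.
# Example only (assumption: CUDA 12.1 wheels): reinstall pytorch with GPU support
$ pip install --upgrade --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu121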
The models will be downloaded into ./models at runtime.
Before you start the pip install, first install Microsoft C++ Build Tools (Download, Instructions) as some pip dependencies will not compile without it. (See #114).
To use cuda on windows install the correct pytorch version as instructed on https://pytorch.org/.
Requirements:
- Docker (version 19.03+ required for CUDA / GPU acceleration)
- Docker Compose (optional, if you want to use the files in the demo/doc folder)
- Nvidia Container Runtime (optional, if you want to use CUDA)
This project has docker support under the zyddnys/manga-image-translator:main image.
This docker image contains all required dependencies / models for the project.
It should be noted that this image is fairly large (~ 15GB).
The web server can be hosted using (For CPU)
docker run -p 5003:5003 -v result:/app/result --ipc=host --rm zyddnys/manga-image-translator:main -l ENG --manga2eng -v --mode web --host=0.0.0.0 --port=5003
or
docker-compose -f demo/doc/docker-compose-web-with-cpu.yml up
depending on which you prefer. The web server should start on port 5003, and translated images will appear in the /result folder.
To use docker with the CLI (i.e. in batch mode):
docker run -v <targetFolder>:/app/<targetFolder> -v <targetFolder>-translated:/app/<targetFolder>-translated --ipc=host --rm zyddnys/manga-image-translator:main --mode=batch -i=/app/<targetFolder> <cli flags>
Note: In the event you need to reference files on your host machine, you will need to mount the associated files as volumes into the /app folder inside the container. Paths for the CLI will need to be the internal docker path /app/... instead of the paths on your host machine.
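As a concrete sketch, assuming a host folder named manga in the current directory (the folder name, translator and language below are placeholders):
# Example only: translate every image in ./manga on the host
docker run -v $(pwd)/manga:/app/manga -v $(pwd)/manga-translated:/app/manga-translated --ipc=host --rm zyddnys/manga-image-translator:main --mode=batch -i=/app/manga --translator=google -l ENG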
Some translation services require API keys to function. To set these, pass them as environment variables into the docker container. For example:
docker run --env="DEEPL_AUTH_KEY=xxx" --ipc=host --rm zyddnys/manga-image-translator:main <cli flags>
To use with a supported GPU, please first read the initial Docker section above; there are some special dependencies you will need. Run the container with the following flags set:
docker run ... --gpus=all ... zyddnys/manga-image-translator:main ... --use-gpu
Or (For the web server + GPU)
docker-compose -f demo/doc/docker-compose-web-with-gpu.yml up
To build the docker image locally you can run (you will need make installed on your machine):
make build-image
Then to test the built image run
make run-web-server
# use `--use-gpu` for speedup if you have a compatible NVIDIA GPU.
# use `--target-lang <language_code>` to specify a target language.
# use `--inpainter=none` to disable inpainting.
# use `--translator=none` if you only want to use inpainting (blank bubbles)
# replace <path> with the path to the image folder or file.
$ python -m manga_translator -v --translator=google -l ENG -i <path>
# results can be found under `<path_to_image_folder>-translated`.
# saves singular image into /result folder for demonstration purposes
# use `--mode demo` to enable demo translation.
# replace <path> with the path to the image file.
$ python -m manga_translator --mode demo -v --translator=google -l ENG -i <path>
# result can be found in `result/`.
# use `--mode web` to start a web server.
$ python -m manga_translator -v --mode web --use-gpu
# the demo will be serving on http://127.0.0.1:5003
# use `--mode api` to start an API server.
$ python -m manga_translator -v --mode api --use-gpu
# the API will be serving on http://127.0.0.1:5003
GUI implementation: BallonsTranslator
Detector:
- ENG: ??
- JPN: ??
- CHS: ??
- KOR: ??
- Using --detector ctd can increase the number of text lines detected
OCR:
- ENG: ??
- JPN: ??
- CHS: ??
- KOR: 48px
Translator:
- JPN -> ENG: Sugoi
- CHS -> ENG: ??
- CHS -> JPN: ??
- JPN -> CHS: ??
- ENG -> JPN: ??
- ENG -> CHS: ??
Inpainter: ??
Colorizer: mc2
- Small resolutions can sometimes trip up the detector, which is not so good at picking up irregular text sizes. To circumvent this you can use an upscaler by specifying --upscale-ratio 2 or any other value.
- If the text being rendered is too small to read, specify --font-size-minimum 30 for instance, or use the --manga2eng renderer that will try to adapt to detected text bubbles.
- Specify a font with --font-path fonts/anime_ace_3.ttf for example (a combined example follows below).
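Putting these together, a hypothetical invocation combining the tips above (translator, language and path are placeholders):
$ python -m manga_translator -v --translator=google -l ENG --upscale-ratio 2 --font-size-minimum 30 --font-path fonts/anime_ace_3.ttf -i <path>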
-h, --help show this help message and exit
-m, --mode {demo,batch,web,web_client,ws,api}
Run demo in single image demo mode (demo), batch
translation mode (batch), web service mode (web)
-i, --input INPUT [INPUT ...] Path to an image file if using demo mode, or path to an
image folder if using batch mode
-o, --dest DEST Path to the destination folder for translated images in
batch mode
-l, --target-lang {CHS,CHT,CSY,NLD,ENG,FRA,DEU,HUN,ITA,JPN,KOR,PLK,PTB,ROM,RUS,ESP,TRK,UKR,VIN,ARA,CNR,SRP,HRV,THA,IND,FIL}
Destination language
-v, --verbose Print debug info and save intermediate images in result
folder
-f, --format {png,webp,jpg,xcf,psd,pdf} Output format of the translation.
--attempts ATTEMPTS Retry attempts on encountered error. -1 means infinite
times.
--ignore-errors Skip image on encountered error.
--overwrite Overwrite already translated images in batch mode.
--skip-no-text Skip image without text (Will not be saved).
--model-dir MODEL_DIR Model directory (by default ./models in project root)
--use-gpu Turn on/off gpu
--use-gpu-limited Turn on/off gpu (excluding offline translator)
--detector {default,ctd,craft,none} Text detector used for creating a text mask from an
image, DO NOT use craft for manga, it's not designed
for it
--ocr {32px,48px,48px_ctc,mocr} Optical character recognition (OCR) model to use
--use-mocr-merge Use bbox merge when Manga OCR inference.
--inpainter {default,lama_large,lama_mpe,sd,none,original}
Inpainting model to use
--upscaler {waifu2x,esrgan,4xultrasharp} Upscaler to use. --upscale-ratio has to be set for it
to take effect
--upscale-ratio UPSCALE_RATIO Image upscale ratio applied before detection. Can
improve text detection.
--colorizer {mc2} Colorization model to use.
--translator {google,youdao,baidu,deepl,papago,caiyun,gpt3,gpt3.5,gpt4,none,original,offline,nllb,nllb_big,sugoi,jparacrawl,jparacrawl_big,m2m100,m2m100_big,sakura}
Language translator to use
--translator-chain TRANSLATOR_CHAIN Output of one translator goes in another. Example:
--translator-chain "google:JPN;sugoi:ENG".
--selective-translation SELECTIVE_TRANSLATION
Select a translator based on detected language in
image. Note the first translation service acts as
default if the language isn't defined. Example:
--translator-chain "google:JPN;sugoi:ENG".
--revert-upscaling Downscales the previously upscaled image after
translation back to original size (Use with --upscale-
ratio).
--detection-size DETECTION_SIZE Size of image used for detection
--det-rotate Rotate the image for detection. Might improve
detection.
--det-auto-rotate Rotate the image for detection to prefer vertical
textlines. Might improve detection.
--det-invert Invert the image colors for detection. Might improve
detection.
--det-gamma-correct Applies gamma correction for detection. Might improve
detection.
--unclip-ratio UNCLIP_RATIO How much to extend text skeleton to form bounding box
--box-threshold BOX_THRESHOLD Threshold for bbox generation
--text-threshold TEXT_THRESHOLD Threshold for text detection
--min-text-length MIN_TEXT_LENGTH Minimum text length of a text region
--no-text-lang-skip Don't skip text that is seemingly already in the target
language.
--inpainting-size INPAINTING_SIZE Size of image used for inpainting (too large will
result in OOM)
--inpainting-precision {fp32,fp16,bf16} Inpainting precision for lama, use bf16 while you can.
--colorization-size COLORIZATION_SIZE Size of image used for colorization. Set to -1 to use
full image size
--denoise-sigma DENOISE_SIGMA Used by colorizer and affects color strength, range
from 0 to 255 (default 30). -1 turns it off.
--mask-dilation-offset MASK_DILATION_OFFSET By how much to extend the text mask to remove left-over
text pixels of the original image.
--font-size FONT_SIZE Use fixed font size for rendering
--font-size-offset FONT_SIZE_OFFSET Offset font size by a given amount, positive number
increase font size and vice versa
--font-size-minimum FONT_SIZE_MINIMUM Minimum output font size. Default is
image_sides_sum/200
--font-color FONT_COLOR Overwrite the text fg/bg color detected by the OCR
model. Use hex string without the "#" such as FFFFFF
for a white foreground or FFFFFF:000000 to also have a
black background around the text.
--line-spacing LINE_SPACING Line spacing is font_size * this value. Default is 0.01
for horizontal text and 0.2 for vertical.
--force-horizontal Force text to be rendered horizontally
--force-vertical Force text to be rendered vertically
--align-left Align rendered text left
--align-center Align rendered text centered
--align-right Align rendered text right
--uppercase Change text to uppercase
--lowercase Change text to lowercase
--no-hyphenation Disable splitting up words with a hyphen character (-)
when rendering
--manga2eng Render english text translated from manga with some
additional typesetting. Ignores some other argument
options
--gpt-config GPT_CONFIG Path to GPT config file, more info in README
--use-mtpe Turn on/off machine translation post editing (MTPE) on
the command line (works only on linux right now)
--save-text Save extracted text and translations into a text file.
--save-text-file SAVE_TEXT_FILE Like --save-text but with a specified file path.
--filter-text FILTER_TEXT Filter regions by their text with a regex. Example
usage: --filter-text ".*badtext.*"
--skip-lang Skip translation if source image is one of the provided languages,
use comma to separate multiple languages. Example: JPN,ENG
--prep-manual Prepare for manual typesetting by outputting blank,
inpainted images, plus copies of the original for
reference
--font-path FONT_PATH Path to font file
--gimp-font GIMP_FONT Font family to use for gimp rendering.
--host HOST Used by web module to decide which host to attach to
--port PORT Used by web module to decide which port to attach to
--nonce NONCE Used by web module as secret for securing internal web
server communication
--ws-url WS_URL Server URL for WebSocket mode
--save-quality SAVE_QUALITY Quality of saved JPEG image, range from 0 to 100 with
100 being best
--ignore-bubble IGNORE_BUBBLE The threshold for ignoring text in non bubble areas,
with valid values ranging from 1 to 50, does not ignore
others. Recommendation 5 to 10. If it is too low,
normal bubble areas may be ignored, and if it is too
large, non bubble areas may be considered normal
bubbles
Used by the --target-lang or -l argument.
CHS: Chinese (Simplified)
CHT: Chinese (Traditional)
CSY: Czech
NLD: Dutch
ENG: English
FRA: French
DEU: German
HUN: Hungarian
ITA: Italian
JPN: Japanese
KOR: Korean
PLK: Polish
PTB: Portuguese (Brazil)
ROM: Romanian
RUS: Russian
ESP: Spanish
TRK: Turkish
UKR: Ukrainian
VIN: Vietnamese
ARA: Arabic
CNR: Montenegrin
SRP: Serbian
HRV: Croatian
THA: Thai
IND: Indonesian
FIL: Filipino (Tagalog)
Name | API Key | Offline | Note |
---|---|---|---|
| | | Disabled temporarily |
youdao | ✔️ | | Requires YOUDAO_APP_KEY and YOUDAO_SECRET_KEY |
baidu | ✔️ | | Requires BAIDU_APP_ID and BAIDU_SECRET_KEY |
deepl | ✔️ | | Requires DEEPL_AUTH_KEY |
caiyun | ✔️ | | Requires CAIYUN_TOKEN |
gpt3 | ✔️ | | Implements text-davinci-003. Requires OPENAI_API_KEY |
gpt3.5 | ✔️ | | Implements gpt-3.5-turbo. Requires OPENAI_API_KEY |
gpt4 | ✔️ | | Implements gpt-4. Requires OPENAI_API_KEY |
papago | | | |
sakura | | | Requires SAKURA_API_BASE |
offline | | ✔️ | Chooses most suitable offline translator for language |
sugoi | | ✔️ | Sugoi V4.0 Models |
m2m100 | | ✔️ | Supports every language |
m2m100_big | | ✔️ | |
none | | ✔️ | Translate to empty texts |
original | | ✔️ | Keep original texts |
- API Key: Whether the translator requires an API key to be set as environment variable. For this you can create a .env file in the project root directory containing your api keys like so:
OPENAI_API_KEY=sk-xxxxxxx...
DEEPL_AUTH_KEY=xxxxxxxx...
- Offline: Whether the translator can be used offline.
- Sugoi is created by mingshiba, please support him on https://www.patreon.com/mingshiba
Used by the --gpt-config argument.
# The prompt being fed into GPT before the text to translate.
# Use {to_lang} to indicate where the target language name should be inserted.
# Note: ChatGPT models don't use this prompt.
prompt_template: >
Please help me to translate the following text from a manga to {to_lang}
(if it's already in {to_lang} or looks like gibberish you have to output it as it is instead):\n
# What sampling temperature to use, between 0 and 2.
# Higher values like 0.8 will make the output more random,
# while lower values like 0.2 will make it more focused and deterministic.
temperature: 0.5
# An alternative to sampling with temperature, called nucleus sampling,
# where the model considers the results of the tokens with top_p probability mass.
# So 0.1 means only the tokens comprising the top 10% probability mass are considered.
top_p: 1
# The prompt being fed into ChatGPT before the text to translate.
# Use {to_lang} to indicate where the target language name should be inserted.
# Tokens used in this example: 57+
chat_system_template: >
You are a professional translation engine,
please translate the story into a colloquial,
elegant and fluent content,
without referencing machine translations.
You must only translate the story, never interpret it.
If there is any issue in the text, output it as is.
Translate to {to_lang}.
# Samples being fed into ChatGPT to show an example conversation.
# In a [prompt, response] format, keyed by the target language name.
#
# Generally, samples should include some examples of translation preferences, and ideally
# some names of characters it's likely to encounter.
#
# If you'd like to disable this feature, just set this to an empty list.
chat_sample:
Simplified Chinese: # Tokens used in this example: 88 + 84
- <|1|>恥ずかしい… 目立ちたくない… 私が消えたい…
<|2|>きみ… 大丈夫⁉
<|3|>なんだこいつ 空気読めて ないのか…?
- <|1|>好尴尬…我不想引人注目…我想消失…
<|2|>你…没事吧⁉
<|3|>这家伙怎么看不懂气氛的…?
# Overwrite configs for a specific model.
# For now the list is: gpt3, gpt35, gpt4
gpt35:
temperature: 0.3
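Assuming the config above is saved as gpt_config.yaml (the file name and translator choice are illustrative), it can be passed to a GPT translator like so:
$ python -m manga_translator -v --translator=gpt3.5 -l ENG --gpt-config gpt_config.yaml -i <path>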
When setting the output format to {xcf, psd, pdf}, Gimp will be used to generate the file. On Windows this assumes Gimp 2.x to be installed to C:\Users\<Username>\AppData\Local\Programs\Gimp 2.
The resulting .xcf file contains the original image as the lowest layer and the inpainting as a separate layer. The translated textboxes have their own layers with the original text as the layer name for easy access.
Limitations:
- Gimp will turn text layers to regular images when saving .psd files.
- Rotated text isn't handled well in Gimp. When editing a rotated textbox it'll also show a popup that it was modified by an outside program.
- Font family is controlled separately, with the --gimp-font argument.
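For example, a hypothetical invocation producing an .xcf file with a specific Gimp font (font name is a placeholder):
$ python -m manga_translator -v --translator=google -l ENG -f xcf --gimp-font "Anime Ace" -i <path>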
API V2
# use `--mode api` to start a web server.
$ python -m manga_translator -v --mode api --use-gpu
# the api will be serving on http://127.0.0.1:5003
The API accepts JSON (POST) and multipart requests.
API endpoints are /colorize_translate, /inpaint_translate, /translate and /get_text.
Valid arguments for the api are:
// These are taken from args.py. For more info see README.md
detector: String
ocr: String
inpainter: String
upscaler: String
translator: String
target_language: String
upscale_ratio: Integer
translator_chain: String
selective_translation: String
attempts: Integer
detection_size: Integer // 1024 => 'S', 1536 => 'M', 2048 => 'L', 2560 => 'X'
text_threshold: Float
box_threshold: Float
unclip_ratio: Float
inpainting_size: Integer
det_rotate: Bool
det_auto_rotate: Bool
det_invert: Bool
det_gamma_correct: Bool
min_text_length: Integer
colorization_size: Integer
denoise_sigma: Integer
mask_dilation_offset: Integer
ignore_bubble: Integer
gpt_config: String
filter_text: String
overlay_type: String
// These are api specific args
direction: String // {'auto', 'h', 'v'}
base64Images: String //Image in base64 format
image: Multipart // image upload from multipart
url: String // an url string
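As an illustrative sketch only (the field layout is an assumption based on the argument list above, so adjust it to the actual schema):
# hypothetical JSON request against the /translate endpoint
curl -X POST http://127.0.0.1:5003/translate -H "Content-Type: application/json" -d '{"translator": "google", "target_language": "ENG", "url": "https://example.com/page.png"}'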
Manual translation replaces machine translation with human translators. Basic manual translation demo can be found at http://127.0.0.1:5003/manual when using web mode.
API
Two modes of translation service are provided by the demo: synchronous mode and asynchronous mode.
In synchronous mode your HTTP POST request will finish once the translation task is finished.
In asynchronous mode your HTTP POST request will respond with a task_id immediately; you can use this task_id to poll for the translation task state.
- POST a form request with form data file:<content-of-image> to http://127.0.0.1:5003/run
- Wait for response
- Use the resultant task_id to find the translation result in the result/ directory, e.g. using Nginx to expose result/ (see the curl sketch below)
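A minimal curl sketch of the synchronous flow, assuming the web server is running locally (the file name is a placeholder):
curl -F "file=@image.png" http://127.0.0.1:5003/run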
- POST a form request with form data file:<content-of-image> to http://127.0.0.1:5003/submit
- Acquire translation task_id
- Poll for translation task state by posting JSON {"taskid": <task-id>} to http://127.0.0.1:5003/task-state
- Translation is finished when the resultant state is either finished, error or error-lang
- Find the translation result in the result/ directory, e.g. using Nginx to expose result/ (see the curl sketch below)
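The same asynchronous flow as a hedged curl sketch (file name and task id are placeholders):
curl -F "file=@image.png" http://127.0.0.1:5003/submit
curl -X POST -H "Content-Type: application/json" -d '{"taskid": "<task-id>"}' http://127.0.0.1:5003/task-state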
POST a form request with form data file:<content-of-image> to http://127.0.0.1:5003/manual-translate and wait for the response.
You will obtain a JSON response like this:
{
"task_id": "12c779c9431f954971cae720eb104499",
"status": "pending",
"trans_result": [
{
"s": "☆上司来ちゃった……",
"t": ""
}
]
}
Fill in translated texts:
{
"task_id": "12c779c9431f954971cae720eb104499",
"status": "pending",
"trans_result": [
{
"s": "☆上司来ちゃった……",
"t": "☆Boss is here..."
}
]
}
Post translated JSON to http://127.0.0.1:5003/post-manual-result and wait for response.
Then you can find the translation result in the result/ directory, e.g. using Nginx to expose result/.
A list of what needs to be done next; you're welcome to contribute.
- Use diffusion model based inpainting to achieve near perfect results, but this could be much slower.
- IMPORTANT!!! HELP NEEDED!!! The current text rendering engine is barely usable, we need your help to improve text rendering!
- Text rendering area is determined by detected text lines, not speech bubbles. This works for images without speech bubbles, but makes it impossible to decide where to put translated English text. I have no idea how to solve this.
- Ryota et al. proposed using multimodal machine translation; maybe we can add ViT features for building custom NMT models.
- Make this project work for video (rewrite code in C++ and use GPU/other hardware NN accelerator). Used for detecting hard subtitles in videos, generating an ass file and removing them completely.
- Mask refinement based on non deep learning algorithms; I am currently testing out a CRF based algorithm.
- Angled text region merge is not currently supported.
- Create pip repository.
GPU servers are not cheap; please consider donating to us.
- Ko-fi: https://ko-fi.com/voilelabs
- Patreon: https://www.patreon.com/voilelabs
- 爱发电: https://afdian.net/@voilelabs