json-translator
jsontt 💡 - AI JSON Translator with GPT + other FREE translation modules to translate your json/yaml files into other languages - Check Readme - Supports GPT / DeepL / Google / Bing / Libre / Argos
Stars: 466
The json-translator repository provides a free tool to translate JSON/YAML files or JSON objects into different languages using various translation modules. It can be used as a CLI or as a package, letting users translate words, sentences, JSON objects, and JSON files. The tool also offers multi-language translation, the ability to ignore specific words, and safe translation checks. Users can contribute to the project by updating the CLI, translation functions, JSON operations, and more. The roadmap includes Libre, Argos, and Bing Translate options for the code package, plus support for additional translation modules.
README:
- Contact me on Twitter to advertise your project on the jsontt CLI
✨ Sponsored by fotogram.ai - Transform Your Selfies into Masterpieces with AI ✨
✨ https://fotogram.ai ✨
AI / FREE JSON & YAML TRANSLATOR
This package lets you translate your JSON/YAML files or JSON objects into different languages for FREE.
Translation Module | Support | FREE |
---|---|---|
Google Translate | ✔ | ✅ FREE |
Google Translate 2 | ✔ | ✅ FREE |
Microsoft Bing Translate | ✔ | ✅ FREE |
Libre Translate | ✔ | ✅ FREE |
Argos Translate | ✔ | ✅ FREE |
DeepL Translate | ✔ | requires API KEY (DEEPL_API_KEY as env) |
gpt-4o | ✔ | requires API KEY (OPENAI_API_KEY as env) |
gpt-3.5-turbo | ✔ | requires API KEY (OPENAI_API_KEY as env) |
gpt-4 | ✔ | requires API KEY (OPENAI_API_KEY as env) |
gpt-4o-mini | ✔ | requires API KEY (OPENAI_API_KEY as env) |
Browser support will come soon...
npm i @parvineyvazov/json-translator
- OR install it globally (if you want to use the CLI)
npm i -g @parvineyvazov/json-translator
jsontt <your/path/to/file.json>
or
jsontt <your/path/to/file.yaml/yml>
- [path]: required JSON/YAML file path <your/path/to/file.json>
- [path]: optional proxy list txt file path <your/path/to/proxy_list.txt>
-V, --version output the version number
-m, --module <Module> specify the translation module
-f, --from <Language> source language
-t, --to <Languages...> target languages
-n, --name <string> optional | output filename
-fb, --fallback <string> optional | fallback logic, try other translation modules on fail | yes, no | default: no
-cl, --concurrencylimit <number> optional | set max concurrency limit (higher is faster, but easier to get banned) | default: 3
-h, --help display help for command
Translate a JSON file using Google Translate:
jsontt <your/path/to/file.json> --module google --from en --to ar fr zh-CN
- with output name
jsontt <your/path/to/file.json> --module google --from en --to ar fr zh-CN --name myFiles
- with fallback logic (try other possible translation modules on fail)
jsontt <your/path/to/file.json> --module google --from en --to ar fr zh-CN --name myFiles --fallback yes
- set concurrency limit (higher is faster, but easier to get banned | default: 3)
jsontt <your/path/to/file.json> --module google --from en --to ar fr zh-CN --name myFiles --fallback yes --concurrencylimit 10
- translate (json/yaml)
jsontt file.json
jsontt folder/file.json
jsontt "folder\file.json"
jsontt "C:\folder1\folder\en.json"
- with proxy (only Google Translate module)
jsontt file.json proxy.txt
Result will be in the same folder as the original JSON/YAML file.
- help
jsontt -h
jsontt --help
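For the DeepL and GPT modules, the API key is read from the environment variable listed in the table above (DEEPL_API_KEY or OPENAI_API_KEY). A minimal sketch; the exact --module value for the GPT models is an assumption based on the table, so check jsontt --help for the accepted module names:
export OPENAI_API_KEY=sk-...
jsontt <your/path/to/file.json> --module gpt-4o --from en --to de fr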
- Import the library into your code.
For JavaScript
const translator = require('@parvineyvazov/json-translator');
For TypeScript:
import * as translator from '@parvineyvazov/json-translator';
// Let's translate the string `Home sweet home!` from English to Chinese
const my_str = await translator.translateWord(
'Home sweet home!',
translator.languages.English,
translator.languages.Chinese_Simplified
);
// my_str: 家，甜蜜的家！
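Since translateWord returns a Promise, the call above has to run in an async context (or with top-level await). A minimal self-contained sketch, assuming the same import as above:

import * as translator from '@parvineyvazov/json-translator';

async function main() {
  // Translate a single sentence from English to Simplified Chinese
  const my_str = await translator.translateWord(
    'Home sweet home!',
    translator.languages.English,
    translator.languages.Chinese_Simplified
  );
  console.log(my_str); // e.g. 家，甜蜜的家！
}

main();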
- Import the library into your code
For JavaScript
const translator = require('@parvineyvazov/json-translator');
For TypeScript:
import * as translator from '@parvineyvazov/json-translator';
/*
Let's translate our deep object from English to Spanish
*/
const en_lang: translator.translatedObject = {
login: {
title: 'Login {{name}}',
email: 'Please, enter your email',
failure: 'Failed',
},
homepage: {
welcoming: 'Welcome!',
title: 'Live long, live healthily!',
},
profile: {
edit_screen: {
edit: 'Edit your informations',
edit_age: 'Edit your age',
number_editor: [
{
title: 'Edit number 1',
button: 'Edit 1',
},
{
title: 'Edit number 2',
button: 'Edit 2',
},
],
},
},
};
/*
FOR JavaScript don't use translator.translatedObject (no need to annotate its type)
*/
let es_lang = await translator.translateObject(
en_lang,
translator.languages.English,
translator.languages.Spanish
);
/*
es_lang:
{
"login": {
"title": "Acceso {{name}}",
"email": "Por favor introduzca su correo electrรณnico",
"failure": "Fallida"
},
"homepage": {
"welcoming": "ยกBienvenidas!",
"title": "ยกVive mucho tiempo, vivo saludable!"
},
"profile": {
"edit_screen": {
"edit": "Edita tus informaciones",
"edit_age": "Editar tu edad",
"number_editor": [
{
"title": "Editar nรบmero 1",
"button": "Editar 1"
},
{
"title": "Editar nรบmero 2",
"button": "Editar 2"
}
]
}
}
}
*/
- Import the library into your code
For JavaScript
const translator = require('@parvineyvazov/json-translator');
For TypeScript:
import * as translator from '@parvineyvazov/json-translator';
/*
Let's translate our object from English to French, Georgian and Japanese at the same time:
*/
const en_lang: translator.translatedObject = {
login: {
title: 'Login',
email: 'Please, enter your email',
failure: 'Failed',
},
edit_screen: {
edit: 'Edit your informations',
number_editor: [
{
title: 'Edit number 1',
button: 'Edit 1',
},
],
},
};
/*
FOR JavaScript don't use translator.translatedObject (no need to annotate its type)
*/
const [french, georgian, japanese] = (await translator.translateObject(
en_lang,
translator.languages.Automatic,
[
translator.languages.French,
translator.languages.Georgian,
translator.languages.Japanese,
]
)) as Array<translator.translatedObject>; // FOR JAVASCRIPT YOU DO NOT NEED TO SPECIFY THE TYPE
/*
french:
{
"login": {
"title": "Connexion",
"email": "S'il vous plaรฎt, entrez votre email",
"failure": "Manquรฉe"
},
"edit_screen": {
"edit": "Modifier vos informations",
"number_editor": [
{
"title": "Modifier le numรฉro 1",
"button": "รditer 1"
}
]
}
}
georgian:
{
"login": {
"title": "แฒจแแกแแแ",
"email": "แแแฎแแแ, แจแแแงแแแแแ แแฅแแแแ แแ",
"failure": "แแชแแแแแแ"
},
"edit_screen": {
"edit": "แแฅแแแแ แแแคแแ แแแชแแแแ แ แแแแฅแขแแ แแแ",
"number_editor": [
{
"title": "แ แแแแฅแขแแ แแแแก แแแแแ แ 1",
"button": "แ แแแแฅแขแแ แแแ 1"
}
]
}
}
japanese:
{
"login": {
"title": "ใญใฐใคใณ",
"email": "ใใชใใฎใกใผใซใขใใฌในใๅ
ฅๅใใฆใใ ใใ",
"failure": "ๅคฑๆใใ"
},
"edit_screen": {
"edit": "ใใชใใฎๆ
ๅ ฑใ็ทจ้ใใพใ",
"number_editor": [
{
"title": "็ชๅท1ใ็ทจ้ใใพใ",
"button": "็ทจ้1ใ็ทจ้ใใพใ"
}
]
}
}
*/
- Import the library into your code.
For JavaScript
const translator = require('@parvineyvazov/json-translator');
For TypeScript:
import * as translator from '@parvineyvazov/json-translator';
/*
Let's translate our JSON file into another language and save it in the same folder as en.json
*/
let path = 'C:/files/en.json'; // PATH OF YOUR JSON FILE (includes file name)
await translator.translateFile(path, translator.languages.English, [
translator.languages.German,
]);
files
├── en.json
└── de.json
- Import the library into your code.
For JavaScript
const translator = require('@parvineyvazov/json-translator');
For TypeScript:
import * as translator from '@parvineyvazov/json-translator';
/*
Let's translate our JSON file into multiple languages and save them in the same folder as en.json
*/
let path = 'C:/files/en.json'; // PATH OF YOUR JSON FILE (includes file name)
await translator.translateFile(path, translator.languages.English, [
translator.languages.Cebuano,
translator.languages.French,
translator.languages.German,
translator.languages.Hungarian,
translator.languages.Japanese,
]);
files
├── en.json
├── ceb.json
├── fr.json
├── de.json
├── hu.json
└── ja.json
To keep specific words out of the translation, use the {{word}} or {word} style in your object.
{
"one": "Welcome {{name}}",
"two": "Welcome {name}",
"three": "I am {name} {{surname}}"
}
...translating to Spanish
{
"one": "Bienvenido {{name}}",
"two": "Bienvenido {name}",
"three": "Soy {name} {{surname}}"
}
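The same placeholders survive when you translate through the package API. A minimal sketch using translateObject as in the earlier examples; the translated output in the comment is illustrative, not guaranteed word-for-word:

import * as translator from '@parvineyvazov/json-translator';

const en = {
  greeting: 'Welcome {{name}}, you have {count} new messages',
};

const es = await translator.translateObject(
  en,
  translator.languages.English,
  translator.languages.Spanish
);

// {{name}} and {count} are left untouched, e.g.:
// { "greeting": "Bienvenido {{name}}, tienes {count} mensajes nuevos" }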
- jsontt also ignores URLs in the text. Translation can otherwise mangle a URL inside the string being translated, so jsontt skips URLs while translating.
- You don't need to do anything special for this; URLs are ignored automatically.
{
"text": "this is a puppy https://shorturl.at/lvPY5"
}
...translating to German
{
"text": "das ist ein welpe https://shorturl.at/lvPY5"
}
- Clone it
git clone https://github.com/mololab/json-translator.git
- Install dependencies (using yarn; install yarn if you don't have it)
yarn
- Show the magic:
- Update CLI: go to file src/cli/cli.ts
- Update translation: go to file src/modules/functions.ts
- Update JSON operations (deep dive, send translation request): go to file src/core/json_object.ts
- Update JSON file read/write operations: go to file src/core/json_file.ts
- Update ignoring values in translation (map/unmap): go to file src/core/ignorer.ts (see the illustrative sketch after this section)
- Check CLI locally
To check the CLI locally, link the package using npm:
npm link
Or run all the steps using make:
make run-only-cli
Make sure your terminal has admin access while running these commands to prevent any access issues.
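To illustrate the map/unmap idea behind src/core/ignorer.ts, here is a minimal, hypothetical sketch; it is not the repository's actual code, and the names map, unmap, and the placeholder format are assumptions for illustration only. The idea: values such as {{name}} or {name} are swapped for neutral placeholders before translation and restored afterwards.

// Hypothetical illustration of the map/unmap approach, not the actual src/core/ignorer.ts
const PATTERN = /\{\{?[^{}]+\}?\}/g; // matches {{name}} and {name}

function map(text: string): { masked: string; values: string[] } {
  const values: string[] = [];
  const masked = text.replace(PATTERN, match => {
    values.push(match);
    return `__IGNORE_${values.length - 1}__`; // placeholder the translator leaves alone
  });
  return { masked, values };
}

function unmap(masked: string, values: string[]): string {
  // put the original {{...}} / {...} values back after translation
  return masked.replace(/__IGNORE_(\d+)__/g, (_m: string, i: string) => values[Number(i)]);
}

// usage sketch
const { masked, values } = map('Welcome {{name}}, you have {count} messages');
// ...send `masked` to the translation module here...
console.log(unmap(masked, values));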
- [x] Translate a word | sentence
- for JSON objects
- [x] Translate JSON object
- [x] Translate deep JSON object
- [x] Multi language translate for JSON object
- [ ] Translate JSON object with extracting OR filtering some of its fields
- for JSON files
- [x] Translate JSON file
- [x] Translate deep JSON file
- [x] Multi language translate for JSON file
- [ ] Translate JSON file with extracting OR filtering some of its fields
- General
- [x] CLI support
- [x] Safe translation (checking undefined, long, or empty values)
- [x] Queue support for big translations
- [x] Informing the user about the translation process (number of completed items, total number of lines, etc.)
- [x] Ignore value words in translation (such as ignoring {{name}} OR {name} on translation)
- [x] Libre Translate option (CLI)
- [x] Argos Translate option (CLI)
- [x] Bing Translate option (CLI)
- [x] Ignore URL translation in a given string
- [x] CLI options for language & source selection
- [x] Define output file names on the CLI (optional command for the CLI)
- [x] YAML file translation
- [x] Fallback translation (try another module on fail)
- [x] Can set concurrency limit manually
- [ ] Libre Translate option (in code package)
- [ ] Argos Translate option (in code package)
- [ ] Bing Translate option (in code package)
- [ ] Openrouter Translate module
- [ ] Cohere Translate module
- [ ] Anthropic/Claude Translate module
- [ ] Together ai Translate module
- [ ] llamacpp Translate module
- [ ] Google Gemini API Translate module
- [x] ChatGPT support
- [ ] Sync translation
- [ ] Browser support
- [ ] Translation option for own LibreTranslate instance
- [ ] Make "--" dynamically adjustable (the placeholder for values that were not translated).
@parvineyvazov/json-translator will be available under the MIT license.
Similar Open Source Tools
nextlint
Nextlint is a rich text editor (WYSIWYG) written in Svelte, using MeltUI headless UI and tailwindcss CSS framework. It is built on top of tiptap editor (headless editor) and prosemirror. Nextlint is easy to use, develop, and maintain. It has a prompt engine that helps to integrate with any AI API and enhance the writing experience. Dark/Light theme is supported and customizable.
freeGPT
freeGPT provides free access to text and image generation models. It supports various models, including gpt3, gpt4, alpaca_7b, falcon_40b, prodia, and pollinations. The tool offers both asynchronous and non-asynchronous interfaces for text completion and image generation. It also features an interactive Discord bot that provides access to all the models in the repository. The tool is easy to use and can be integrated into various applications.
agentic_security
Agentic Security is an open-source vulnerability scanner designed for safety scanning, offering customizable rule sets and agent-based attacks. It provides comprehensive fuzzing for any LLMs, LLM API integration, and stress testing with a wide range of fuzzing and attack techniques. The tool is not a foolproof solution but aims to enhance security measures against potential threats. It offers installation via pip and supports quick start commands for easy setup. Users can utilize the tool for LLM integration, adding custom datasets, running CI checks, extending dataset collections, and dynamic datasets with mutations. The tool also includes a probe endpoint for integration testing. The roadmap includes expanding dataset variety, introducing new attack vectors, developing an attacker LLM, and integrating OWASP Top 10 classification.
fittencode.nvim
Fitten Code AI Programming Assistant for Neovim provides fast completion using AI, asynchronous I/O, and support for various actions like document code, edit code, explain code, find bugs, generate unit test, implement features, optimize code, refactor code, start chat, and more. It offers features like accepting suggestions with Tab, accepting line with Ctrl + Down, accepting word with Ctrl + Right, undoing accepted text, automatic scrolling, and multiple HTTP/REST backends. It can run as a coc.nvim source or nvim-cmp source.
openlrc
Open-Lyrics is a Python library that transcribes voice files using faster-whisper and translates/polishes the resulting text into `.lrc` files in the desired language using LLM, e.g. OpenAI-GPT, Anthropic-Claude. It offers well preprocessed audio to reduce hallucination and context-aware translation to improve translation quality. Users can install the library from PyPI or GitHub and follow the installation steps to set up the environment. The tool supports GUI usage and provides Python code examples for transcription and translation tasks. It also includes features like utilizing context and glossary for translation enhancement, pricing information for different models, and a list of todo tasks for future improvements.
evalplus
EvalPlus is a rigorous evaluation framework for LLM4Code, providing HumanEval+ and MBPP+ tests to evaluate large language models on code generation tasks. It offers precise evaluation and ranking, coding rigorousness analysis, and pre-generated code samples. Users can use EvalPlus to generate code solutions, post-process code, and evaluate code quality. The tool includes tools for code generation and test input generation using various backends.
Groq2API
Groq2API is a REST API wrapper around the Groq2 model, a large language model trained by Google. The API allows you to send text prompts to the model and receive generated text responses. The API is easy to use and can be integrated into a variety of applications.
llm.nvim
llm.nvim is a universal plugin for a large language model (LLM) designed to enable users to interact with LLM within neovim. Users can customize various LLMs such as gpt, glm, kimi, and local LLM. The plugin provides tools for optimizing code, comparing code, translating text, and more. It also supports integration with free models from Cloudflare, Github models, siliconflow, and others. Users can customize tools, chat with LLM, quickly translate text, and explain code snippets. The plugin offers a flexible window interface for easy interaction and customization.
PraisonAI
Praison AI is a low-code, centralised framework that simplifies the creation and orchestration of multi-agent systems for various LLM applications. It emphasizes ease of use, customization, and human-agent interaction. The tool leverages AutoGen and CrewAI frameworks to facilitate the development of AI-generated scripts and movie concepts. Users can easily create, run, test, and deploy agents for scriptwriting and movie concept development. Praison AI also provides options for full automatic mode and integration with OpenAI models for enhanced AI capabilities.
ai00_server
AI00 RWKV Server is an inference API server for the RWKV language model based upon the web-rwkv inference engine. It supports VULKAN parallel and concurrent batched inference and can run on all GPUs that support VULKAN. No need for Nvidia cards!!! AMD cards and even integrated graphics can be accelerated!!! No need for bulky pytorch, CUDA and other runtime environments, it's compact and ready to use out of the box! Compatible with OpenAI's ChatGPT API interface. 100% open source and commercially usable, under the MIT license. If you are looking for a fast, efficient, and easy-to-use LLM API server, then AI00 RWKV Server is your best choice. It can be used for various tasks, including chatbots, text generation, translation, and Q&A.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
bce-qianfan-sdk
The Qianfan SDK provides best practices for large model toolchains, allowing AI workflows and AI-native applications to access the Qianfan large model platform elegantly and conveniently. The core capabilities of the SDK include three parts: large model reasoning, large model training, and general and extension: * `Large model reasoning`: Implements interface encapsulation for reasoning of Yuyan (ERNIE-Bot) series, open source large models, etc., supporting dialogue, completion, Embedding, etc. * `Large model training`: Based on platform capabilities, it supports end-to-end large model training process, including training data, fine-tuning/pre-training, and model services. * `General and extension`: General capabilities include common AI development tools such as Prompt/Debug/Client. The extension capability is based on the characteristics of Qianfan to adapt to common middleware frameworks.
LightRAG
LightRAG is a repository hosting the code for LightRAG, a system that supports seamless integration of custom knowledge graphs, Oracle Database 23ai, Neo4J for storage, and multiple file types. It includes features like entity deletion, batch insert, incremental insert, and graph visualization. LightRAG provides an API server implementation for RESTful API access to RAG operations, allowing users to interact with it through HTTP requests. The repository also includes evaluation scripts, code for reproducing results, and a comprehensive code structure.
airtable
A simple Golang package to access the Airtable API. It provides functionalities to interact with Airtable such as initializing client, getting tables, listing records, adding records, updating records, deleting records, and bulk deleting records. The package is compatible with Go 1.13 and above.
For similar tasks
hass-ollama-conversation
The Ollama Conversation integration adds a conversation agent powered by Ollama in Home Assistant. This agent can be used in automations to query information provided by Home Assistant about your house, including areas, devices, and their states. Users can install the integration via HACS and configure settings such as API timeout, model selection, context size, maximum tokens, and other parameters to fine-tune the responses generated by the AI language model. Contributions to the project are welcome, and discussions can be held on the Home Assistant Community platform.
rclip
rclip is a command-line photo search tool powered by the OpenAI's CLIP neural network. It allows users to search for images using text queries, similar image search, and combining multiple queries. The tool extracts features from photos to enable searching and indexing, with options for previewing results in supported terminals or custom viewers. Users can install rclip on Linux, macOS, and Windows using different installation methods. The repository follows the Conventional Commits standard and welcomes contributions from the community.
honcho
Honcho is a platform for creating personalized AI agents and LLM powered applications for end users. The repository is a monorepo containing the server/API for managing database interactions and storing application state, along with a Python SDK. It utilizes FastAPI for user context management and Poetry for dependency management. The API can be run using Docker or manually by setting environment variables. The client SDK can be installed using pip or Poetry. The project is open source and welcomes contributions, following a fork and PR workflow. Honcho is licensed under the AGPL-3.0 License.
core
OpenSumi is a framework designed to help users quickly build AI Native IDE products. It provides a set of tools and templates for creating Cloud IDEs, Desktop IDEs based on Electron, CodeBlitz web IDE Framework, Lite Web IDE on the Browser, and Mini-App liked IDE. The framework also offers documentation for users to refer to and a detailed guide on contributing to the project. OpenSumi encourages contributions from the community and provides a platform for users to report bugs, contribute code, or improve documentation. The project is licensed under the MIT license and contains third-party code under other open source licenses.
yolo-ios-app
The Ultralytics YOLO iOS App GitHub repository offers an advanced object detection tool leveraging YOLOv8 models for iOS devices. Users can transform their devices into intelligent detection tools to explore the world in a new and exciting way. The app provides real-time detection capabilities with multiple AI models to choose from, ranging from 'nano' to 'x-large'. Contributors are welcome to participate in this open-source project, and licensing options include AGPL-3.0 for open-source use and an Enterprise License for commercial integration. Users can easily set up the app by following the provided steps, including cloning the repository, adding YOLOv8 models, and running the app on their iOS devices.
PyAirbyte
PyAirbyte brings the power of Airbyte to every Python developer by providing a set of utilities to use Airbyte connectors in Python. It enables users to easily manage secrets, work with various connectors like GitHub, Shopify, and Postgres, and contribute to the project. PyAirbyte is not a replacement for Airbyte but complements it, supporting data orchestration frameworks like Airflow and Snowpark. Users can develop ETL pipelines and import connectors from local directories. The tool simplifies data integration tasks for Python developers.
md-agent
MD-Agent is a LLM-agent based toolset for Molecular Dynamics. It uses Langchain and a collection of tools to set up and execute molecular dynamics simulations, particularly in OpenMM. The tool assists in environment setup, installation, and usage by providing detailed steps. It also requires API keys for certain functionalities, such as OpenAI and paper-qa for literature searches. Contributions to the project are welcome, with a detailed Contributor's Guide available for interested individuals.
flowgen
FlowGen is a tool built for AutoGen, a great agent framework from Microsoft and a lot of contributors. It provides intuitive visual tools that streamline the construction and oversight of complex agent-based workflows, simplifying the process for creators and developers. Users can create Autoflows, chat with agents, and share flow templates. The tool is fully dockerized and supports deployment on Railway.app. Contributions to the project are welcome, and the platform uses semantic-release for versioning and releases.
For similar jobs
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
daily-poetry-image
Daily Chinese ancient poetry and AI-generated images powered by Bing DALL-E-3. GitHub Action triggers the process automatically. Poetry is provided by Today's Poem API. The website is built with Astro.
exif-photo-blog
EXIF Photo Blog is a full-stack photo blog application built with Next.js, Vercel, and Postgres. It features built-in authentication, photo upload with EXIF extraction, photo organization by tag, infinite scroll, light/dark mode, automatic OG image generation, a CMD-K menu with photo search, experimental support for AI-generated descriptions, and support for Fujifilm simulations. The application is easy to deploy to Vercel with just a few clicks and can be customized with a variety of environment variables.
SillyTavern
SillyTavern is a user interface you can install on your computer (and Android phones) that allows you to interact with text generation AIs and chat/roleplay with characters you or the community create. SillyTavern is a fork of TavernAI 1.2.8 which is under more active development and has added many major features. At this point, they can be thought of as completely independent programs.
Twitter-Insight-LLM
This project enables you to fetch liked tweets from Twitter (using Selenium), save it to JSON and Excel files, and perform initial data analysis and image captions. This is part of the initial steps for a larger personal project involving Large Language Models (LLMs).
AISuperDomain
Aila Desktop Application is a powerful tool that integrates multiple leading AI models into a single desktop application. It allows users to interact with various AI models simultaneously, providing diverse responses and insights to their inquiries. With its user-friendly interface and customizable features, Aila empowers users to engage with AI seamlessly and efficiently. Whether you're a researcher, student, or professional, Aila can enhance your AI interactions and streamline your workflow.
ChatGPT-On-CS
This project is an intelligent dialogue customer service tool based on a large model, which supports access to platforms such as WeChat, Qianniu, Bilibili, Douyin Enterprise, Douyin, Doudian, Weibo chat, Xiaohongshu professional account operation, Xiaohongshu, Zhihu, etc. You can choose GPT3.5/GPT4.0/ Lazy Treasure Box (more platforms will be supported in the future), which can process text, voice and pictures, and access external resources such as operating systems and the Internet through plug-ins, and support enterprise AI applications customized based on their own knowledge base.
obs-localvocal
LocalVocal is a live-streaming AI assistant plugin for OBS that allows you to transcribe audio speech into text and perform various language processing functions on the text using AI / LLMs (Large Language Models). It's privacy-first, with all data staying on your machine, and requires no GPU, cloud costs, network, or downtime.