
mangaba_ai
Minimalist repository for creating intelligent and versatile AI agents with the A2A (Agent-to-Agent) and MCP (Model Context Protocol) protocols.
Stars: 166

Mangaba AI is a minimalist repository for creating intelligent and versatile AI agents with A2A (Agent-to-Agent) and MCP (Model Context Protocol) protocols. It supports any AI provider, facilitates communication between agents, manages context effectively, and offers integrated functionalities like chat, analysis, and translation. The setup is straightforward with only 2 steps to get started. The repository includes scripts for automated configuration, manual setup, and environment validation. Users can easily chat with context, analyze text, translate, and interact with multiple agents using A2A protocol. The MCP protocol handles advanced context management automatically, categorizing context types and priorities. The repository also provides examples, documentation, and a comprehensive wiki in Brazilian Portuguese for beginners and developers.
README:
Minimalist repository for creating intelligent and versatile AI agents with the A2A (Agent-to-Agent) and MCP (Model Context Protocol) protocols.
📚 ADVANCED WIKI - Complete documentation in Brazilian Portuguese
📋 FULL INDEX - Quick navigation across the entire repository
- 🤖 Versatile AI Agent: Support for any AI provider
- 🔗 A2A Protocol: Communication between agents
- 🧠 MCP Protocol: Advanced context management
- 📝 Built-in Features: Chat, analysis, translation, and more
- ⚡ Simple Setup: Just 2 steps to get started
# Complete setup in a single command
python quick_setup.py
# 1. Install dependencies
pip install -r requirements.txt
# 2. Configure the environment
copy .env.template .env
# Edit the .env file with your settings
# 3. Validate the installation
python validate_env.py
The quick_setup.py script automates the whole process:
- ✅ Creates the virtual environment
- ✅ Installs the dependencies
- ✅ Configures the .env file
- ✅ Validates the installation
- Configure the .env file (copy it from .env.template); a sketch of how these values can be read in code follows these steps:
# Required
GOOGLE_API_KEY=your_google_api_key_here
# Optional (default values shown)
MODEL_NAME=gemini-2.5-flash
AGENT_NAME=MangabaAgent
LOG_LEVEL=INFO
- Get your Google API Key:
  - Go to: https://makersuite.google.com/app/apikey
  - Create a new key
  - Paste it into the .env file
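How these values are consumed is handled by config.py in the repository; purely as an illustration, and assuming the standard python-dotenv package is available, the settings above could be read like this (a rough sketch, not the project's actual loading code):

import os
from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # load variables from .env into the process environment

google_api_key = os.getenv("GOOGLE_API_KEY")              # required
model_name = os.getenv("MODEL_NAME", "gemini-2.5-flash")  # optional, default shown
agent_name = os.getenv("AGENT_NAME", "MangabaAgent")      # optional
log_level = os.getenv("LOG_LEVEL", "INFO")                # optional

if not google_api_key:
    raise RuntimeError("GOOGLE_API_KEY is not set; edit your .env file")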
# Checks that everything is configured correctly
python validate_env.py
# Saves a detailed report
python validate_env.py --save-report
from mangaba_ai import MangabaAgent
# Initialize with the A2A and MCP protocols enabled
agent = MangabaAgent()
# Chat with automatic context
resposta = agent.chat("Olá! Como você pode me ajudar?")
print(resposta)
from mangaba_ai import MangabaAgent
agent = MangabaAgent()
# Context is maintained automatically
print(agent.chat("Meu nome é João"))
print(agent.chat("Qual é o meu nome?"))  # Remembers the previous context
agent = MangabaAgent()
text = "A inteligência artificial está transformando o mundo."
analysis = agent.analyze_text(text, "Faça uma análise detalhada")
print(analysis)
agent = MangabaAgent()
translation = agent.translate("Hello, how are you?", "português")
print(translation)
agent = MangabaAgent()
# After a few interactions...
summary = agent.get_context_summary()
print(summary)
The A2A protocol enables communication between multiple agents:
# Create two agents
agent1 = MangabaAgent()
agent2 = MangabaAgent()
# Send a request from one agent to another
result = agent1.send_agent_request(
    target_agent_id=agent2.agent_id,
    action="chat",
    params={"message": "Olá do Agent 1!"}
)
agent = MangabaAgent()
# Send a message to all connected agents
result = agent.broadcast_message(
    message="Olá a todos!",
    tags=["general", "announcement"]
)
Supported message types (an illustrative envelope sketch follows this list):
- REQUEST: Requests between agents
- RESPONSE: Responses to requests
- BROADCAST: Messages to multiple agents
- NOTIFICATION: Asynchronous notifications
- ERROR: Error messages
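The concrete message schema lives in protocols/a2a_protocol.py and is not spelled out in this README; the snippet below is only an illustrative sketch of the kind of envelope those message types imply, and every field name here is an assumption:

from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List, Optional

class MessageType(Enum):  # mirrors the five types listed above
    REQUEST = "request"
    RESPONSE = "response"
    BROADCAST = "broadcast"
    NOTIFICATION = "notification"
    ERROR = "error"

@dataclass
class A2AMessage:  # hypothetical envelope, not the library's real schema
    type: MessageType
    sender_id: str
    target_id: Optional[str] = None                # None for broadcasts
    action: Optional[str] = None                   # e.g. "chat"
    params: Dict[str, Any] = field(default_factory=dict)
    tags: List[str] = field(default_factory=list)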
The MCP protocol manages advanced context automatically. Context types:
- CONVERSATION: Conversations and dialogues
- TASK: Specific tasks and operations
- MEMORY: Long-term memories
- SYSTEM: System information
Context priorities (see the sketch after this list):
- HIGH: Critical context (always preserved)
- MEDIUM: Important context
- LOW: Optional context
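The actual bookkeeping is implemented in protocols/mcp_protocol.py; the following is only a sketch of how types and priorities could be combined when trimming context, with all names below being assumptions rather than the library's API:

from dataclasses import dataclass
from enum import Enum
from typing import List

class ContextType(Enum):      # the four types listed above
    CONVERSATION = "conversation"
    TASK = "task"
    MEMORY = "memory"
    SYSTEM = "system"

class ContextPriority(Enum):  # HIGH is preserved, LOW is dropped first
    HIGH = 3
    MEDIUM = 2
    LOW = 1

@dataclass
class ContextEntry:           # hypothetical entry, for illustration only
    content: str
    type: ContextType
    priority: ContextPriority

def trim_context(entries: List[ContextEntry], max_items: int) -> List[ContextEntry]:
    # Keep the highest-priority entries when the context grows past the budget
    ranked = sorted(entries, key=lambda e: e.priority.value, reverse=True)
    return ranked[:max_items]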
agent = MangabaAgent()
# Chat with automatic context
response = agent.chat("Mensagem", use_context=True)
# Chat without context
response = agent.chat("Mensagem", use_context=False)
# Get a summary of the current context
summary = agent.get_context_summary()
from mangaba_ai import MangabaAgent

def demo_completa():
    # Create an agent with the protocols enabled
    agent = MangabaAgent()
    print(f"Agent ID: {agent.agent_id}")
    print(f"MCP Habilitado: {agent.mcp_enabled}")
    # Sequence of interactions with context
    agent.chat("Olá, meu nome é Maria")
    agent.chat("Eu trabalho com programação")
    # Analysis with the context preserved
    analysis = agent.analyze_text(
        "Python é uma linguagem versátil",
        "Analise considerando meu perfil profissional"
    )
    # Translation
    translation = agent.translate("Good morning", "português")
    # Summary of the accumulated context
    context = agent.get_context_summary()
    print("Contexto atual:", context)
    # A2A communication
    agent.broadcast_message("Demonstração concluída!")

if __name__ == "__main__":
    demo_completa()
Run the interactive example:
python examples/basic_example.py
Available commands (a sketch of a matching command loop follows this list):
- /analyze <text> - Analyzes text
- /translate <text> - Translates text
- /context - Shows the current context
- /broadcast <message> - Sends a broadcast
- /request <agent_id> <action> - Sends a request to another agent
- /help - Help
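These commands map directly onto the agent methods documented in this README. The loop below is a hypothetical sketch of such a dispatcher, not the actual code of examples/basic_example.py:

from mangaba_ai import MangabaAgent

def run_cli():
    # Hypothetical command loop; illustrates the mapping, not the shipped example
    agent = MangabaAgent()
    while True:
        line = input("> ").strip()
        if line.startswith("/analyze "):
            print(agent.analyze_text(line[len("/analyze "):], "Resuma os pontos principais"))
        elif line.startswith("/translate "):
            print(agent.translate(line[len("/translate "):], "português"))
        elif line == "/context":
            print(agent.get_context_summary())
        elif line.startswith("/broadcast "):
            print(agent.broadcast_message(line[len("/broadcast "):]))
        elif line.startswith("/request "):
            _, target, action = line.split(maxsplit=2)
            print(agent.send_agent_request(target_agent_id=target, action=action, params={}))
        elif line == "/help":
            print("/analyze, /translate, /context, /broadcast, /request, /help")
        else:
            print(agent.chat(line))  # anything else is treated as normal chat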
To see a complete demonstration of the A2A and MCP protocols:
python examples/basic_example.py --demo
- chat(message, use_context=True) - Chat with or without context
- analyze_text(text, instruction) - Text analysis
- translate(text, target_language) - Translation
- get_context_summary() - Context summary
- send_agent_request(agent_id, action, params) - A2A request
- broadcast_message(message, tags) - A2A broadcast
- A2A Protocol: Communication between agents
- MCP Protocol: Context management
- Custom Handlers: For specific requests (see the hypothetical registration sketch below)
- MCP Sessions: Context isolated per session
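The README does not document how custom handlers are registered, so the following is purely a hypothetical sketch built around the a2a_protocol object exposed further below; the register_handler method name and its signature are assumptions, not a confirmed API:

from mangaba_ai import MangabaAgent

agent = MangabaAgent()

def handle_summarize(params):
    # Custom behaviour for a hypothetical "summarize" A2A action
    text = params.get("text", "")
    return agent.analyze_text(text, "Resuma os pontos principais")

# Assumed registration hook (not confirmed by the documentation)
agent.a2a_protocol.register_handler("summarize", handle_summarize)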
API_KEY=your_api_key_here    # Required
MODEL=desired_model          # Optional
LOG_LEVEL=INFO               # Optional (DEBUG, INFO, WARNING, ERROR)
# Agent with custom settings
agent = MangabaAgent()
# Access the protocols directly
a2a = agent.a2a_protocol
mcp = agent.mcp
# The agent's unique ID
print(f"Agent ID: {agent.agent_id}")
# Current MCP session
print(f"Session ID: {agent.current_session_id}")
agent = MangabaAgent()
resposta = agent.chat_with_context(
    context="Você é um tutor de programação",
    message="Como criar uma lista em Python?"
)
print(resposta)
### Text Analysis
from mangaba_ai import MangabaAgent
agent = MangabaAgent()
texto = "Este é um texto para analisar..."
analise = agent.analyze_text(texto, "Resuma os pontos principais")
print(analise)
To use a different model, just change it in the .env file:
MODEL=modelo-avancado     # More advanced model
MODEL=modelo-multimodal   # For different input types
🔧 All scripts are organized in the scripts/ folder
- validate_env.py - Validates the environment configuration
- quick_setup.py - Automated quick setup
- example_env_usage.py - Example of using the configuration
- exemplo_curso_basico.py - Practical examples from the basic course
- setup_env.py - Detailed manual setup
mangaba_ai/
├── 📁 docs/                    # 📚 Documentation
│   ├── CURSO_BASICO.md         # Complete basic course
│   ├── SETUP.md                # Setup guide
│   ├── PROTOCOLS.md            # Protocol documentation
│   ├── CHANGELOG.md            # Change history
│   ├── SCRIPTS.md              # Script documentation
│   └── README.md               # Documentation index
├── 📁 scripts/                 # 🔧 Setup scripts
│   ├── validate_env.py         # Environment validation
│   ├── quick_setup.py          # Automated quick setup
│   ├── example_env_usage.py    # Usage example
│   ├── exemplo_curso_basico.py # Course examples
│   ├── setup_env.py            # Detailed manual setup
│   └── README.md               # Script documentation
├── 📁 protocols/               # 🌐 Communication protocols
│   ├── mcp_protocol.py         # Model Context Protocol
│   └── a2a_protocol.py         # Agent-to-Agent Protocol
├── 📁 examples/                # 📖 Usage examples
│   └── basic_example.py        # Complete basic example
├── 📁 utils/                   # 🛠️ Utilities
│   ├── __init__.py
│   └── logger.py               # Logging system
├── mangaba_agent.py            # 🤖 Main agent
├── config.py                   # ⚙️ System configuration
├── ESTRUTURA.md                # 📁 Repository layout
├── .env.example                # 🔐 Example configuration
├── requirements.txt            # 📦 Python dependencies
└── README.md                   # 📖 This file
📋 For full details of the structure, see ESTRUTURA.md
# 1. Quick setup
python scripts/quick_setup.py
# 2. Validate the environment
python scripts/validate_env.py
# 3. Try an example
python scripts/example_env_usage.py
# 4. Basic course examples
python scripts/exemplo_curso_basico.py
# 5. Interactive example
python examples/basic_example.py
🌟 📖 COMPLETE WIKI - Main Documentation Portal
The Mangaba AI Advanced Wiki offers comprehensive documentation in Brazilian Portuguese for all levels:
- 🚀 Project Overview - What it is and what it is for
- 🎓 Complete Basic Course - Step-by-step tutorial
- ⚙️ Installation and Configuration - Detailed setup guide
- ❓ FAQ - Frequently Asked Questions - Common questions and solutions
- 🌐 A2A and MCP Protocols - Complete technical documentation
- ⭐ Best Practices - Guide to good practices
- 🤝 How to Contribute - Contribution guidelines
- 📝 Glossary of Terms - Technical definitions
- 🔧 Scripts and Automation - Script documentation
- 📊 Change History - Complete changelog
- 📁 Project Structure - Repository organization
🎯 Start with the Main Wiki - it is your gateway to all the documentation!
Thank you for your interest in contributing! See our Complete Contribution Guide for detailed information.
- 📚 Read the Contribution Guidelines
- 🍴 Fork the project
- 🔧 Set up the development environment
- ⭐ Follow the Best Practices
- 🧪 Run the tests
- 📤 Open a Pull Request
- 🐛 Bug fixes
- ✨ New features
- 📚 Documentation improvements
- 🧪 Adding tests
- 🌐 Translation into other languages
📖 First contribution? Look for issues labeled good first issue!
MIT License
Mangaba AI - Simple and effective AI agents! 🤖✨
Similar Open Source Tools

AivisSpeech
AivisSpeech is a Japanese text-to-speech software based on the VOICEVOX editor UI. It incorporates the AivisSpeech Engine for generating emotionally rich voices easily. It supports AIVMX format voice synthesis model files and specific model architectures like Style-Bert-VITS2. Users can download AivisSpeech and AivisSpeech Engine for Windows and macOS PCs, with minimum memory requirements specified. The development follows the latest version of VOICEVOX, focusing on minimal modifications, rebranding only where necessary, and avoiding refactoring. The project does not update documentation, maintain test code, or refactor unused features to prevent conflicts with VOICEVOX.

AIstudioProxyAPI
AI Studio Proxy API is a Python-based proxy server that converts the Google AI Studio web interface into an OpenAI-compatible API. It provides stable API access through Camoufox (anti-fingerprint detection Firefox) and Playwright automation. The project offers an OpenAI-compatible API endpoint, a three-layer streaming response mechanism, dynamic model switching, complete parameter control, anti-fingerprint detection, script injection functionality, modern web UI, graphical interface launcher, flexible authentication system, modular architecture, unified configuration management, and modern development tools.

AivisSpeech-Engine
AivisSpeech-Engine is a powerful open-source tool for speech recognition and synthesis. It provides state-of-the-art algorithms for converting speech to text and text to speech. The tool is designed to be user-friendly and customizable, allowing developers to easily integrate speech capabilities into their applications. With AivisSpeech-Engine, users can transcribe audio recordings, create voice-controlled interfaces, and generate natural-sounding speech output. Whether you are building a virtual assistant, developing a speech-to-text application, or experimenting with voice technology, AivisSpeech-Engine offers a comprehensive solution for all your speech processing needs.

prism-insight
PRISM-INSIGHT is a comprehensive stock analysis and trading simulation system based on AI agents. It automatically captures daily surging stocks via Telegram channel, generates expert-level analyst reports, and performs trading simulations. The system utilizes OpenAI GPT-4.1 for in-depth stock analysis and GPT-5 for investment strategy simulation. It also interacts with users via Anthropic Claude for Telegram conversations. The system architecture includes AI analysis agents, stock tracking, PDF conversion, and Telegram bot functionalities. Users can customize criteria for identifying surging stocks, modify AI prompts, and adjust chart styles. The project is open-source under the MIT license, and all investment decisions based on the analysis are the responsibility of the user.

chatgpt-on-wechat
This project is a smart chatbot based on a large model, supporting WeChat, WeChat Official Account, Feishu, and DingTalk access. You can choose from GPT3.5/GPT4.0/Claude/Wenxin Yanyi/Xunfei Xinghuo/Tongyi Qianwen/Gemini/LinkAI/ZhipuAI, which can process text, voice, and images, and access external resources such as operating systems and the Internet through plugins, supporting the development of enterprise AI applications based on proprietary knowledge bases.

FinanceMCP
FinanceMCP is a professional financial data server based on the MCP protocol, integrating the Tushare API to provide real-time financial data and technical indicator analysis for AI assistants like Claude. It offers various free public cloud services, including a web-based experience version and desktop configuration for production environments. The core features include an intelligent technical indicator system with 5 core indicators, comprehensive market coverage across 10 markets, tools for stock, index, company, macroeconomic, and fund data analysis, as well as specific modules for analyzing US and Hong Kong stock companies. The tool supports tasks like stock technical analysis, comprehensive analysis, news and macroeconomic analysis, fund and bond data queries, among others. It can be locally deployed using Streamable HTTP or SSE modes, with detailed installation and configuration instructions provided.

ailab
The 'ailab' project is an experimental ground for code generation combining AI (especially coding agents) and Deno. It aims to manage configuration files defining coding rules and modes in Deno projects, enhancing the quality and efficiency of code generation by AI. The project focuses on defining clear rules and modes for AI coding agents, establishing best practices in Deno projects, providing mechanisms for type-safe code generation and validation, applying test-driven development (TDD) workflow to AI coding, and offering implementation examples utilizing design patterns like adapter pattern.

MarkMap-OpenAi-ChatGpt
MarkMap-OpenAi-ChatGpt is a Vue.js-based mind map generation tool that allows users to generate mind maps by entering titles or content. The application integrates the markmap-lib and markmap-view libraries, supports visualizing mind maps, and provides functions for zooming and adapting the map to the screen. Users can also export the generated mind map in PNG, SVG, JPEG, and other formats. This project is suitable for quickly organizing ideas, study notes, project planning, etc. By simply entering content, users can get an intuitive mind map that can be continuously expanded, downloaded, and shared.

rime_wanxiang
Rime Wanxiang is a pinyin input method based on deep optimized lexicon and language model. It features a lexicon with tones, AI and large corpus filtering, and frequency addition to provide more accurate sentence output. The tool supports various input methods and customization options, aiming to enhance user experience through lexicon and transcription. Users can also refresh the lexicon with different types of auxiliary codes using the LMDG toolkit package. Wanxiang offers core features like tone-marked pinyin annotations, phrase composition, and word frequency, with customizable functionalities. The tool is designed to provide a seamless input experience based on lexicon and transcription.

Code-Interpreter-Api
Code Interpreter API is a project that combines a scheduling center with a sandbox environment, dedicated to creating the world's best code interpreter. It aims to provide a secure, reliable API interface for remotely running code and obtaining execution results, accelerating the development of various AI agents, and being a boon to many AI enthusiasts. The project innovatively combines Docker container technology to achieve secure isolation and execution of Python code. Additionally, the project supports storing generated image data in a PostgreSQL database and accessing it through API endpoints, providing rich data processing and storage capabilities.

AI-CloudOps
AI+CloudOps is a cloud-native operations management platform designed for enterprises. It aims to integrate artificial intelligence technology with cloud-native practices to significantly improve the efficiency and level of operations work. The platform offers features such as AIOps for monitoring data analysis and alerts, multi-dimensional permission management, visual CMDB for resource management, efficient ticketing system, deep integration with Prometheus for real-time monitoring, and unified Kubernetes management for cluster optimization.

MINI_LLM
This project is a personal implementation and reproduction of a small-parameter Chinese LLM. It mainly refers to these two open source projects: https://github.com/charent/Phi2-mini-Chinese and https://github.com/DLLXW/baby-llama2-chinese. It includes the complete process of pre-training, SFT instruction fine-tuning, DPO, and PPO (to be done). I hope to share it with everyone and hope that everyone can work together to improve it!

wechat-robot-client
The Wechat Robot Client is an intelligent robot management system that provides rich interactive experiences. It includes features such as AI chat, drawing, voice, group chat functionalities, song requests, daily summaries, friend circle viewing, friend adding, group chat management, file messaging, multiple login methods support, and more. The system also supports features like sending files, various login methods, and integration with other apps like '王者荣耀' and '吃鸡'. It offers a comprehensive solution for managing Wechat interactions and automating various tasks.

AirPower4T
AirPower4T is a development base library based on Vue3 TypeScript Element Plus Vite, using decorators, object-oriented, Hook and other front-end development methods. It provides many common components and some feedback components commonly used in background management systems, and provides a lot of enums and decorators.
For similar tasks

phospho
Phospho is a text analytics platform for LLM apps. It helps you detect issues and extract insights from text messages of your users or your app. You can gather user feedback, measure success, and iterate on your app to create the best conversational experience for your users.

OpenFactVerification
Loki is an open-source tool designed to automate the process of verifying the factuality of information. It provides a comprehensive pipeline for dissecting long texts into individual claims, assessing their worthiness for verification, generating queries for evidence search, crawling for evidence, and ultimately verifying the claims. This tool is especially useful for journalists, researchers, and anyone interested in the factuality of information.

open-parse
Open Parse is a Python library for visually discerning document layouts and chunking them effectively. It is designed to fill the gap in open-source libraries for handling complex documents. Unlike text splitting, which converts a file to raw text and slices it up, Open Parse visually analyzes documents for superior LLM input. It also supports basic markdown for parsing headings, bold, and italics, and has high-precision table support, extracting tables into clean Markdown formats with accuracy that surpasses traditional tools. Open Parse is extensible, allowing users to easily implement their own post-processing steps. It is also intuitive, with great editor support and completion everywhere, making it easy to use and learn.

spaCy
spaCy is an industrial-strength Natural Language Processing (NLP) library in Python and Cython. It incorporates the latest research and is designed for real-world applications. The library offers pretrained pipelines supporting 70+ languages, with advanced neural network models for tasks such as tagging, parsing, named entity recognition, and text classification. It also facilitates multi-task learning with pretrained transformers like BERT, along with a production-ready training system and streamlined model packaging, deployment, and workflow management. spaCy is commercial open-source software released under the MIT license.

NanoLLM
NanoLLM is a tool designed for optimized local inference for Large Language Models (LLMs) using HuggingFace-like APIs. It supports quantization, vision/language models, multimodal agents, speech, vector DB, and RAG. The tool aims to provide efficient and effective processing for LLMs on local devices, enhancing performance and usability for various AI applications.

ontogpt
OntoGPT is a Python package for extracting structured information from text using large language models, instruction prompts, and ontology-based grounding. It provides a command line interface and a minimal web app for easy usage. The tool has been evaluated on test data and is used in related projects like TALISMAN for gene set analysis. OntoGPT enables users to extract information from text by specifying relevant terms and provides the extracted objects as output.

lima
LIMA is a multilingual linguistic analyzer developed by the CEA LIST, LASTI laboratory. It is Free Software available under the MIT license. LIMA has state-of-the-art performance for more than 60 languages using deep learning modules. It also includes a powerful rules-based mechanism called ModEx for extracting information in new domains without annotated data.

liboai
liboai is a simple C++17 library for the OpenAI API, providing developers with access to OpenAI endpoints through a collection of methods and classes. It serves as a spiritual port of OpenAI's Python library, 'openai', with similar structure and features. The library supports various functionalities such as ChatGPT, Audio, Azure, Functions, Image DALL·E, Models, Completions, Edit, Embeddings, Files, Fine-tunes, Moderation, and Asynchronous Support. Users can easily integrate the library into their C++ projects to interact with OpenAI services.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.

AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.