
All-Model-Chat
Stars: 297

All Model Chat is a feature-rich, highly customizable web chat application designed specifically for the Google Gemini API family. It integrates dynamic model selection, multimodal file input, streaming responses, comprehensive chat history management, and extensive customization options to provide an unparalleled AI interactive experience.
README:
All Model Chat is a powerful, multimodal chatbot interface designed for seamless interaction with the Google Gemini API family. It combines dynamic model selection, multimodal file input, streaming responses, comprehensive chat history management, and extensive customization options to deliver an unparalleled AI experience.
- 🤖 Broad model support: Native support for the Gemini series (2.5 Pro, Flash, Flash Lite), the Imagen series (3.0, 4.0) of image generation models, and text-to-speech (TTS) models, making this a truly multimodal AI application platform.
- 🛠️ Powerful toolset: Seamlessly integrates Google's tools to extend model capabilities:
  - 🌐 Web search: Lets the model access real-time information to answer questions about current events, with cited sources.
  - 💻 Code execution: Lets the model run code to solve computational problems and analyze data.
  - 🔗 URL context: Lets the model read and understand the content of URLs you provide.
- ⚙️ Advanced AI parameter control: Fine-tune Temperature and Top-P to balance the creativity and determinism of responses, and set a custom system instruction (system prompt) for any conversation to shape the AI's personality and behavior (see the sketch after this feature list).
- 🤔 "Thinking process" display: Inspect the intermediate reasoning steps that models such as Gemini 2.5 Flash/Pro take before producing an answer. This is ideal for debugging and understanding the AI's reasoning, and you can configure a "thinking budget" to trade quality against speed.
- 🎙️ Speech-to-text (STT): Transcribe your voice into text input in real time with a Gemini model, with accuracy well beyond the standard browser API. You can even choose which Gemini model handles transcription in the settings.
- 🔊 Text-to-speech (TTS): Convert the model's text replies into fluent speech with one click, with a choice of several high-quality voices, so you can listen to the AI.
- 🎨 Canvas Assistant: A purpose-built system instruction that turns the AI into a front-end development assistant, generating rich, interactive HTML/SVG content such as charts with ECharts or flow diagrams with Graphviz.
- 📎 Rich file support: Easily upload and process many file types, including images, videos, audio, PDF documents, and all kinds of code and text files.
- 🖐️ Versatile upload methods: Upload files by drag-and-drop, pasting from the clipboard, or the file picker, or capture input directly from the camera or microphone.
- ✍️ Instant text file creation: Quickly create and edit text files inside the app and submit them to the model as context, without ever leaving the app.
- 🆔 Reference by file ID: Advanced users can reference files already uploaded to the Gemini API (by their files/... IDs) without re-uploading them, saving time and bandwidth.
- 🖼️ Interactive previews: Zoom and pan uploaded images directly in the app, preview AI-generated HTML in an interactive modal, or even switch to true fullscreen mode.
- 📊 Smart file management: Real-time upload progress bars, cancellable in-progress uploads, and clear error messages keep file handling under your control at all times.
- 📚 Persistent chat history: All conversations are saved automatically in your browser's local storage (localStorage), preserving privacy and letting you revisit past exchanges at any time.
- 📂 Conversation groups: Organize chat sessions into collapsible groups for easier management and retrieval.
- 🎭 Scenario management: Create, save, import, and export chat templates, so you can quickly set up complex conversational contexts (such as programming problems or role-play) and communicate far more efficiently.
- ✏️ Full message control: Edit, delete, or retry any message. Smart editing (editing a user prompt) truncates the conversation from that point and resubmits it, so context stays correct.
- 📥 Export conversations and messages: Export an entire conversation as a PNG image, HTML file, or TXT file, or export a single model reply as PNG or HTML.
- ⌨️ Keyboard shortcuts: Built for efficiency, with shortcuts for starting a new chat, switching models, opening the logs, and more.
- 🛠️ Log viewer and debugging tools: A built-in log viewer gives advanced users insight into the app's internal behavior, API call details, and API key usage (when multiple keys are provided).
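To make the parameter, tool, and file options above concrete, here is a minimal sketch of how a single request might look with the @google/genai SDK the app is built on. This is illustrative only, not the app's actual code: the model name, prompt, and file URI are placeholders, and option names follow the current JS SDK and may differ between versions.

```ts
import { GoogleGenAI } from "@google/genai";

// Placeholder API key and file URI; in All Model Chat these come from the
// settings panel and from a previously uploaded file's files/... ID.
const ai = new GoogleGenAI({ apiKey: "YOUR_GEMINI_API_KEY" });

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: [
    {
      fileData: {
        // URI of an already-uploaded file (placeholder)
        fileUri: "https://generativelanguage.googleapis.com/v1beta/files/abc123",
        mimeType: "application/pdf",
      },
    },
    { text: "Summarize this document and relate it to current news." },
  ],
  config: {
    systemInstruction: "You are a concise research assistant.", // custom system prompt
    temperature: 0.7,                          // creativity vs. determinism
    topP: 0.95,
    tools: [{ googleSearch: {} }],             // web search grounding
    thinkingConfig: { thinkingBudget: 1024 },  // cap on "thinking" tokens
  },
});

console.log(response.text);
```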
This application runs directly in the browser, with no backend, installation, or configuration required.
- Open the app: Visit all-model-chat.pages.dev.
- Open the settings: Click the gear icon (⚙️) in the top-right corner of the page.
- Enable custom configuration: In the "API Configuration" section, turn on the "Use custom API configuration" switch.
- Enter your API key: Paste your Google Gemini API key into the text box. You can obtain a key from Google AI Studio. Enter one key per line to rotate across multiple keys (see the sketch after these steps).
- Save and start chatting: Click "Save". Your key is stored securely in your browser's localStorage and is never sent anywhere else.
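As a rough illustration of the multi-key rotation mentioned above, keys entered one per line could be cycled like the minimal sketch below. This is an assumption-laden sketch, not the app's real code: the localStorage key name, the session counter, and the round-robin strategy are all hypothetical.

```ts
// Hypothetical helper: "app-api-keys" and "api-key-counter" are assumed names,
// not All Model Chat's actual storage schema.
function getNextApiKey(): string | undefined {
  const raw = localStorage.getItem("app-api-keys") ?? ""; // one key per line
  const keys = raw
    .split("\n")
    .map((k) => k.trim())
    .filter(Boolean);
  if (keys.length === 0) return undefined;

  // Rotate round-robin across requests within the current tab session.
  const counter = Number(sessionStorage.getItem("api-key-counter") ?? "0");
  sessionStorage.setItem("api-key-counter", String(counter + 1));
  return keys[counter % keys.length];
}
```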
- Framework: React 19 & TypeScript
- AI SDK: @google/genai
- Styling: Tailwind CSS (via CDN) & CSS variables (for theming)
- Markdown & rendering: react-markdown, remark-gfm, remark-math, rehype-highlight, rehype-katex, highlight.js, DOMPurify, mermaid, viz.js (see the sketch below)
- Image export: html2canvas
- Module loading: modern ES modules & import maps (via esm.sh)
- Icons: Lucide React
- Offline support: Service Worker (sw.js) for caching the app shell
All-Model-Chat/
├── public/                  # Static assets (manifest.json, sw.js)
├── src/
│   ├── components/          # React UI components (header, chat input, modals, etc.)
│   │   ├── chat/            # Chat input sub-components
│   │   ├── layout/          # Layout components
│   │   ├── message/         # Message rendering sub-components (code blocks, charts)
│   │   ├── modals/          # App-level modals
│   │   ├── shared/          # Reusable shared components
│   │   └── settings/        # Settings panel modules
│   ├── constants/           # App-wide constants (app, themes, files, models)
│   ├── hooks/               # ✨ Where the core application logic lives
│   │   ├── useChat.ts       # Main hook that orchestrates all features
│   │   ├── useAppSettings.ts # Manages global settings, theme, and language
│   │   └── ... (other custom hooks)
│   ├── services/            # Wrappers around external services
│   │   ├── api/             # Modular API call functions
│   │   ├── geminiService.ts # Wraps all calls to the Google GenAI API
│   │   └── logService.ts    # In-app logging service for the log viewer
│   ├── utils/               # Utility functions
│   │   ├── translations/    # Language translation files
│   │   └── ... (API, domain, and UI utilities)
│   ├── App.tsx              # Root application component
│   ├── index.tsx            # React application entry point
│   └── types.ts             # Core TypeScript type definitions
│
├── index.html               # Main HTML file with import maps and core styles
└── README.md
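As a hypothetical sketch of the kind of thin wrapper a module like src/services/geminiService.ts provides (the real implementation also covers logging, aborts, key rotation, files, and TTS; the function name and signature below are assumptions), streaming a chat turn through @google/genai might look like this:

```ts
import { GoogleGenAI, type Content } from "@google/genai";

// Hypothetical service function, not the repo's actual API.
export async function* streamChat(
  apiKey: string,
  model: string,
  history: Content[],
  systemInstruction?: string,
): AsyncGenerator<string> {
  const ai = new GoogleGenAI({ apiKey });
  const stream = await ai.models.generateContentStream({
    model,
    contents: history,
    config: { systemInstruction },
  });
  for await (const chunk of stream) {
    if (chunk.text) yield chunk.text; // forward text deltas to the UI as they arrive
  }
}
```

A hook such as useChat.ts could then consume this generator and append each yielded delta to the message being rendered.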
Alternative AI tools for All-Model-Chat
Similar Open Source Tools

Rodel.Agent
Rodel Agent is a Windows desktop application that integrates chat, text-to-image, text-to-speech, and machine translation services, providing users with a comprehensive desktop AI experience. The application supports mainstream AI services and aims to enhance user interaction through various AI functionalities.

AstrBot
AstrBot is an open-source one-stop Agentic chatbot platform and development framework. It supports large model conversations, multiple messaging platforms, Agent capabilities, plugin extensions, and WebUI for visual configuration and management of the chatbot.

memfree
MemFree is an open-source hybrid AI search engine that allows users to simultaneously search their personal knowledge base (bookmarks, notes, documents, etc.) and the Internet. It features a self-hosted super fast serverless vector database, local embedding and rerank service, one-click Chrome bookmarks index, and full code open source. Users can contribute by opening issues for bugs or making pull requests for new features or improvements.

nvim-aider
Nvim-aider is a plugin for Neovim that provides additional functionality and key mappings to enhance the user's editing experience. It offers features such as code navigation, quick access to commonly used commands, and improved text manipulation tools. With Nvim-aider, users can streamline their workflow and increase productivity while working with Neovim.

free-chat
Free Chat is a forked project from chatgpt-demo that allows users to deploy a chat application with various features. It provides branches for different functionalities like token-based message list trimming and usage demonstration of 'promplate'. Users can control the website through environment variables, including setting OpenAI API key, temperature parameter, proxy, base URL, and more. The project welcomes contributions and acknowledges supporters. It is licensed under MIT by Muspi Merol.

xlings
Xlings is a developer tool for programming learning, development, and course building. It provides features such as software installation, one-click environment setup, project dependency management, and cross-platform language package management. Additionally, it offers real-time compilation and running, AI code suggestions, tutorial project creation, automatic code checking for practice, and demo examples collection.

azure-ai-docs
Azure AI Docs is a repository that provides detailed documentation and resources for developers looking to leverage Microsoft's AI services on the Azure platform. The repository covers a wide range of topics including machine learning, natural language processing, computer vision, and more. Developers can find tutorials, code samples, best practices, and guidelines to help them integrate AI capabilities into their applications seamlessly.

jadx-mcp-server
JADX-MCP-SERVER is a standalone Python server that interacts with JADX-AI-MCP Plugin to analyze Android APKs using LLMs like Claude. It enables live communication with decompiled Android app context, uncovering vulnerabilities, parsing manifests, and facilitating reverse engineering effortlessly. The tool combines JADX-AI-MCP and JADX MCP SERVER to provide real-time reverse engineering support with LLMs, offering features like quick analysis, vulnerability detection, AI code modification, static analysis, and reverse engineering helpers. It supports various MCP tools for fetching class information, text, methods, fields, smali code, AndroidManifest.xml content, strings.xml file, resource files, and more. Tested on Claude Desktop, it aims to support other LLMs in the future, enhancing Android reverse engineering and APK modification tools connectivity for easier reverse engineering purely from vibes.

mistral.rs
Mistral.rs is a fast LLM inference platform written in Rust. It supports inference on a variety of devices, quantization, and an easy-to-use application with an OpenAI-API-compatible HTTP server and Python bindings.

ai-app-lab
The ai-app-lab is a high-code Python SDK Arkitect designed for enterprise developers with professional development capabilities. It provides a toolset and workflow set for developing large model applications tailored to specific business scenarios. The SDK offers highly customizable application orchestration, quality business tools, one-stop development and hosting services, security enhancements, and AI prototype application code examples. It caters to complex enterprise development scenarios, enabling the creation of highly customized intelligent applications for various industries.

awesome-ai-apps
This repository is a comprehensive collection of practical examples, tutorials, and recipes for building powerful LLM-powered applications. From simple chatbots to advanced AI agents, these projects serve as a guide for developers working with various AI frameworks and tools. Powered by Nebius AI Studio - your one-stop platform for building and deploying AI applications.

nodetool
NodeTool is a platform designed for AI enthusiasts, developers, and creators, providing a visual interface to access a variety of AI tools and models. It simplifies access to advanced AI technologies, offering resources for content creation, data analysis, automation, and more. With features like a visual editor, seamless integration with leading AI platforms, model manager, and API integration, NodeTool caters to both newcomers and experienced users in the AI field.

omnichain
OmniChain is a tool for building efficient self-updating visual workflows using AI language models, enabling users to automate tasks, create chatbots, agents, and integrate with existing frameworks. It allows users to create custom workflows guided by logic processes, store and recall information, and make decisions based on that information. The tool enables users to create tireless robot employees that operate 24/7, access the underlying operating system, generate and run NodeJS code snippets, and create custom agents and logic chains. OmniChain is self-hosted, open-source, and available for commercial use under the MIT license, with no coding skills required.

req_llm
ReqLLM is a Req-based library for LLM interactions, offering a unified interface to AI providers through a plugin-based architecture. It brings composability and middleware advantages to LLM interactions, with features like auto-synced providers/models, typed data structures, ergonomic helpers, streaming capabilities, usage & cost extraction, and a plugin-based provider system. Users can easily generate text, structured data, embeddings, and track usage costs. The tool supports various AI providers like Anthropic, OpenAI, Groq, Google, and xAI, and allows for easy addition of new providers. ReqLLM also provides API key management, detailed documentation, and a roadmap for future enhancements.

nexa-sdk
Nexa SDK is a comprehensive toolkit supporting ONNX and GGML models for text generation, image generation, vision-language models (VLM), and text-to-speech (TTS) capabilities. It offers an OpenAI-compatible API server with JSON schema mode and streaming support, along with a user-friendly Streamlit UI. Users can run Nexa SDK on any device with Python environment, with GPU acceleration supported. The toolkit provides model support, conversion engine, inference engine for various tasks, and differentiating features from other tools.
For similar tasks

ai-chatbot
Next.js AI Chatbot is an open-source app template for building AI chatbots using Next.js, Vercel AI SDK, OpenAI, and Vercel KV. It includes features like Next.js App Router, React Server Components, Vercel AI SDK for streaming chat UI, support for various AI models, Tailwind CSS styling, Radix UI for headless components, chat history management, rate limiting, session storage with Vercel KV, and authentication with NextAuth.js. The template allows easy deployment to Vercel and customization of AI model providers.

chatty
Chatty is a private AI tool that runs large language models natively and privately in the browser, ensuring in-browser privacy and offline usability. It supports chat history management, open-source models like Gemma and Llama2, responsive design, intuitive UI, markdown & code highlight, chat with files locally, custom memory support, export chat messages, voice input support, response regeneration, and light & dark mode. It aims to bring popular AI interfaces like ChatGPT and Gemini into an in-browser experience.

ollama-gui
Ollama GUI is a web interface for ollama.ai, a tool that enables running Large Language Models (LLMs) on your local machine. It provides a user-friendly platform for chatting with LLMs and accessing various models for text generation. Users can easily interact with different models, manage chat history, and explore available models through the web interface. The tool is built with Vue.js, Vite, and Tailwind CSS, offering a modern and responsive design for seamless user experience.

CodeFuse-muAgent
CodeFuse-muAgent is a Multi-Agent framework designed to streamline Standard Operating Procedure (SOP) orchestration for agents. It integrates toolkits, code libraries, knowledge bases, and sandbox environments for rapid construction of complex Multi-Agent interactive applications. The framework enables efficient execution and handling of multi-layered and multi-dimensional tasks.

pyqt-openai
VividNode is a cross-platform AI desktop chatbot application for LLM such as GPT, Claude, Gemini, Llama chatbot interaction and image generation. It offers customizable features, local chat history, and enhanced performance without requiring a browser. The application is powered by GPT4Free and allows users to interact with chatbots and generate images seamlessly. VividNode supports Windows, Mac, and Linux, securely stores chat history locally, and provides features like chat interface customization, image generation, focus and accessibility modes, and extensive customization options with keyboard shortcuts for efficient operations.

LLamaWorker
LLamaWorker is a HTTP API server developed to provide an OpenAI-compatible API for integrating Large Language Models (LLM) into applications. It supports multi-model configuration, streaming responses, text embedding, chat templates, automatic model release, function calls, API key authentication, and test UI. Users can switch models, complete chats and prompts, manage chat history, and generate tokens through the test UI. Additionally, LLamaWorker offers a Vulkan compiled version for download and provides function call templates for testing. The tool supports various backends and provides API endpoints for chat completion, prompt completion, embeddings, model information, model configuration, and model switching. A Gradio UI demo is also available for testing.

lmstudio-python
LM Studio Python SDK provides a convenient API for interacting with LM Studio instance, including text completion and chat response functionalities. The SDK allows users to manage websocket connections and chat history easily. It also offers tools for code consistency checks, automated testing, and expanding the API.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our review of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, evaluation and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the trustworthiness of your LLM more quickly. For more details about TrustLLM, please refer to the project website.

AI-YinMei
AI-YinMei is an AI virtual anchor (VTuber) development tool (NVIDIA GPU version). It supports fastgpt knowledge-base chat and a complete LLM stack ([fastgpt] + [one-api] + [Xinference]); replying to bilibili live-stream danmaku and greeting viewers who enter the stream; speech synthesis via Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS; expression control through Vtuber Studio; sending stable-diffusion-webui painting output to an OBS live room; NSFW image filtering (public-NSFW-y-distinguish); image search via duckduckgo (requires a proxy) and Baidu image search (no proxy needed); an AI reply chat box [HTML plug-in]; AI singing (Auto-Convert-Music) with a playlist [HTML plug-in]; dancing, expression-video playback, head-pat and gift-smashing actions, and automatically starting to dance while singing; idle swaying motions during chat and singing; multi-scene switching, background-music switching, and automatic day/night scene changes; and open-ended singing and painting, with the AI deciding the content automatically.