YesImBot
A machine shell with a human heart.
Stars: 52
YesImBot, also known as Athena, is a Koishi plugin designed to let large AI models participate in group chat discussions. It offers easy customization of the bot's name, personality, emotions, and other supplementary messages. The plugin supports load balancing across multiple large-model API endpoints, provides immersive context awareness, blocks potential prompt-injection messages, and automatically fetches high-quality prompts. Users can adjust various bot settings and customize the system prompt. The ultimate goal is to integrate the bot into group chats so seamlessly that it goes undetected, with ongoing improvements such as @-message recognition, emoji sending, multimodal image support, and more.
README:
YesImBot / Athena is a Koishi plugin that lets large AI models take part in group chat discussions.
The new documentation site is live: https://athena.mkc.icu/
- Easy customization: the bot's name, personality, emotions, and other supplementary messages can all be changed easily in the plugin configuration.
- Load balancing: you can configure multiple large-model API endpoints, and Athena will spread calls evenly across them.
- Immersive awareness: the model perceives contextual information such as the date and time, the group chat's name, and @-mentions.
- Prompt-injection protection: Athena filters messages that could inject prompts into the model, preventing the bot from being hijacked by others.
- Automatic prompt fetching: no manual setup needed; several high-quality prompts work out of the box.
- AND MORE...
[!IMPORTANT] Before continuing, make sure you are running the latest version of Athena.
[!CAUTION] Please read this section carefully; it is important.
The following walks through how the configuration file works:
```yaml
# Session settings
MemorySlot:
  # Memory slots. Each slot can hold one or more session ids (a group number, or private:<account id> for private chats).
  # Session ids in the same slot share context.
  SlotContains:
    - 114514 # When a message from 114514 arrives, this slot is used first, meaning the bot has no memory of other sessions in this group
    - 114514, private:1919810 # When a private message from 1919810 arrives, this slot is used first, meaning the bot now holds the memory of two sessions
    - private:1919810, 12085141, 2551991321520
  # How many context messages the bot is allowed to read
  SlotSize: 100
  # Number of messages required before the bot speaks for the first time in a session, i.e. the initial trigger count
  FirstTriggerCount: 2
  # The next two values bound the cooldown (in messages) after each bot reply; the actual value is chosen by the LLM or drawn at random from this range
  # Maximum cooldown, in messages
  MaxTriggerCount: 4
  # Minimum cooldown, in messages
  MinTriggerCount: 2
  # When this much time has passed since the session's last message, a bot reply is triggered proactively; set to 0 to disable this feature
  MaxTriggerTime: 0
  # Cooldown for a single trigger (milliseconds). If another reply is triggered during the cooldown, the new trigger is handled and this one is skipped
  MinTriggerTime: 1000
  # Probability that the bot starts replying immediately each time it receives an @ message. Range: [0, 1]
  AtReactPossibility: 0.50
  # Filtered messages. Messages containing these keywords will not be added to the context.
  # This is mainly to protect the bot against prompt-injection attacks.
  Filter:
    - You are
    - 呢   # sentence-final particle "ne"
    - 大家 # "everyone"

# LLM API settings
API:
  # This is a list: you can configure multiple APIs to achieve load balancing.
  APIList:
    # API response format type; one of OpenAI / Cloudflare / Ollama / Custom
    - APIType: OpenAI
      # API base URL; OpenAI is used as the example here
      # If you are on Cloudflare, use https://api.cloudflare.com/client/v4
      BaseURL: https://api.openai.com/
      # Your API token
      APIKey: sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXX
      # Model
      AIModel: gpt-4o-mini
      # If you are using Cloudflare, do not forget the setting below
      # Cloudflare Account ID; if unsure, check the URL of your Cloudflare dashboard
      UID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Bot persona settings
Bot:
  # Name
  BotName: 胡梨
  # Genshin Impact mode (wha?)
  CuteMode: true
  # Download URLs or file names of the prompt files. If the download fails, download the file manually and place it in the directory containing koishi.yml
  # Very important! If you do not understand what this is, do not change it
  PromptFileUrl:
    - "https://raw.githubusercontent.com/HydroGest/promptHosting/main/src/prompt.mdt" # Generation-1 prompt, works with all AI models
    - "https://raw.githubusercontent.com/HydroGest/promptHosting/main/src/prompt-next.mdt" # Next-generation prompt, best results; recommended if you can afford Claude 3.5 / GPT-4 and the like
    - "https://raw.githubusercontent.com/HydroGest/promptHosting/main/src/prompt-next-short.mdt" # Trimmed-down next-generation prompt, suited to lighter models such as GPT-4o-mini
  # Index of the currently selected prompt, starting at 0
  PromptFileSelected: 2
  # The bot's self-image
  WhoAmI: 一个普通群友
  # The bot's personality
  BotPersonality: 冷漠/高傲/网络女神
  # Suppress other commands (experimental)
  SendDirectly: true
  # The bot's habits; you can also put other little reminders here
  BotHabbits: 辩论
  # The bot's background
  BotBackground: 校辩论队选手
  ... # Other persona settings applied to the prompt. Settings that are not written into the prompt file have no effect
  # Post-processing of bot messages: replaces content at the last moment before a message is sent; regular expressions are supported
  BotSentencePostProcess:
    - replacethis: 。$
      tothis: ''
    - replacethis: 哈哈哈哈
      tothis: 嘎哈哈哈
  # The bot's typing speed
  WordsPerSecond: 30 # 30 characters per second
... # See the documentation site for the remaining settings
```
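The BotSentencePostProcess rules above are plain pattern-to-replacement pairs applied to the outgoing message right before it is sent. As a rough illustration of how such rules behave (a hypothetical sketch only, not the plugin's actual code), applying them with JavaScript-style regular expressions looks like this:

```typescript
// Hypothetical sketch: applying the BotSentencePostProcess rules from the config above.
// The rule list mirrors the YAML; the real plugin's implementation may differ.
const rules = [
  { replacethis: "。$", tothis: "" },           // strip a trailing Chinese full stop
  { replacethis: "哈哈哈哈", tothis: "嘎哈哈哈" }, // swap one style of laughter for another
];

function postProcess(message: string): string {
  // Apply each rule in order; patterns are treated as regular expressions.
  return rules.reduce(
    (text, rule) => text.replace(new RegExp(rule.replacethis, "g"), rule.tothis),
    message,
  );
}

console.log(postProcess("今天也要加油哈哈哈哈。")); // -> 今天也要加油嘎哈哈哈
```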
Next, add the bot to the target group. The bot will lurk for a while at first, depending on the MemorySlot.FirstTriggerCount setting. Once the number of new messages reaches that value, the bot starts joining the discussion (which is quite faithful to how real humans behave, isn't it?).
[!TIP] If you find the bot too active, you can also raise the MemorySlot.MinTriggerCount value.
Next, adjust the options in the bot persona settings to fit your situation; feel free to improvise here. Note, however, that if you use Cloudflare Workers AI you may find your bot spouting nonsense: Cloudflare Workers AI's free models are simply not good enough, and their Chinese training data is weak. If you want an economical model that still keeps reply quality acceptable, GPT-4o-mini may be the wise choice. Of course, you are not required to use OpenAI's official API; Athena supports any API endpoint that follows the official OpenAI format (a quick way to sanity-check such an endpoint is sketched after the note below).
[!NOTE] In our testing, the Claude 3.5 model performs best in this scenario.
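Because Athena only needs the standard OpenAI request/response shape, one way to sanity-check a third-party endpoint before adding it to APIList is to call its chat-completions route directly. The sketch below is a hypothetical check, not part of the plugin; the BaseURL, APIKey, and AIModel values are placeholders, and the endpoint is assumed to expose the usual /v1/chat/completions path:

```typescript
// Hypothetical sketch: verify that an OpenAI-format endpoint answers chat completions.
// Substitute the BaseURL / APIKey / AIModel values from your own APIList entry.
const BaseURL = "https://api.openai.com";
const APIKey = "sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXX";
const AIModel = "gpt-4o-mini";

async function checkEndpoint(): Promise<void> {
  const res = await fetch(`${BaseURL}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${APIKey}`,
    },
    body: JSON.stringify({
      model: AIModel,
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  const data = await res.json();
  // An OpenAI-compatible endpoint returns the reply under choices[0].message.content.
  console.log(res.status, data.choices?.[0]?.message?.content);
}

checkEndpoint();
```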
After downloading the prompt.mdt file locally, you may want to customize it, whether because you think our version falls short or because you have fresh ideas of your own. Here is how.
First, in the plugin configuration, turn off the option 每次启动时尝试更新 Prompt 文件 ("try to update the prompt file on every startup"); it sits in the debugging-tools section at the bottom of the configuration page. You can then find the prompt.mdt file in Koishi's file explorer and edit it freely in Koishi's built-in editor. A few things to keep in mind:
- Some fields will be substituted; all substituted fields are listed below (a hypothetical fragment showing them in use appears after this list):
${BotName} -> the bot's name
${BotSelfId} -> the bot's account id
${config.Bot.WhoAmI} -> the bot's self-image
${config.Bot.BotHometown} -> the bot's hometown
${config.Bot.BotYearold} -> the bot's age
${config.Bot.BotPersonality} -> the bot's personality
${config.Bot.BotGender} -> the bot's gender
${config.Bot.BotHabbits} -> the bot's habits
${config.Bot.BotBackground} -> the bot's background
${config.Bot.CuteMode} -> 开启|关闭 (on|off)
${curDate} -> the current time # e.g. 2024年12月3日星期二17:34:00
${curGroupId} -> the id of the current group
${outputSchema} -> the output format template the LLM is expected to follow
${coreMemory} -> memory to be appended for the LLM
- Currently, the message queue is presented to the LLM in this format:
[messageId][{date} from_guild:{channelId}] {senderName}<{senderId}> 说: {userContent}
- Currently, Athena expects the LLM to reply in this format (a filled-in example follows this list):
```jsonc
{
  "status": "success", // "success", "skip" (skip replying), or "function" (run a tool)
  "replyTo": "123456789", // id of the session that finReply should be sent to
  "nextReplyIn": 2, // cooldown (in messages) before the next reply, letting the LLM help control how often it speaks
  "quote": "", // id of the message to quote in the reply
  "logic": "", // the LLM's reasoning process
  "reply": "", // the draft reply
  "check": "", // the checking logic used to verify that the draft reply complies with the "message generation rules"
  "finReply": "", // the final reply
  "functions": [] // list of commands to run
}
```
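To make the two formats above concrete, here is a hedged illustration. The prompt fragment is hypothetical (it is not excerpted from the hosted prompt.mdt files) and only shows how the substituted fields from the list above might be used; the JSON underneath is a filled-in instance of the expected reply schema, with made-up values.

```text
You are ${BotName} (account <${BotSelfId}>), ${config.Bot.WhoAmI}.
Personality: ${config.Bot.BotPersonality}. Habits: ${config.Bot.BotHabbits}. Background: ${config.Bot.BotBackground}.
It is now ${curDate}, and you are chatting in group ${curGroupId}.
Reply strictly in the following format:
${outputSchema}
```

```json
{
  "status": "success",
  "replyTo": "114514",
  "nextReplyIn": 3,
  "quote": "",
  "logic": "Someone asked about today's debate topic; as 胡梨 I should answer briefly and stay in character.",
  "reply": "这题正方其实不难打",
  "check": "Short, in character, follows the message generation rules.",
  "finReply": "这题正方其实不难打",
  "functions": []
}
```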
[!NOTE] When modifying the prompt yourself, make sure the LLM's replies follow the required JSON format.
Although leaving out some of the entries seems to work anyway? Σ(っ °Д °;)っ
[!NOTE] Because the image-viewer configuration has changed, this configuration item can no longer be expanded properly in the console. If you are upgrading from v1.7.x to v2, delete this plugin's ImageViewer section from koishi.yml yourself (a rough sketch of the section to remove follows).
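As a rough guide only (this is a hypothetical sketch: the exact key under which this plugin appears in your koishi.yml depends on your installation, and the fields inside ImageViewer are placeholders), the block to delete sits under the plugin's entry in the plugins section:

```yaml
# Hypothetical koishi.yml excerpt; only the ImageViewer block under this plugin's entry should be removed.
plugins:
  yesimbot:            # the key name may differ in your setup
    # ...keep the rest of the plugin's settings untouched...
    ImageViewer:       # delete this whole block when upgrading from v1.7.x to v2
      # ...old image-viewer settings...
```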
We strongly recommend using an API that is not billed per token, because the prompt that Athena prepends to every conversation consumes a large number of tokens by itself. APIs that bill per call are a good alternative.
Our ultimate goal is that, even if your account is hooked up to Athena one day, your group mates will not be able to spot anything amiss; every improvement we make works toward that.
- [x] @-message recognition
- [x] Emoji / sticker sending
- [x] Image multimodality, plus pseudo-multimodality based on image recognition
- [ ] Picking up forwarded messages
- [ ] TTS/STT (text-to-speech / speech-to-text)
- [ ] RAG memory store
- [ ] File reading
- [x] Tool calling
Thanks to the contributors: you are the ones who make Athena possible.
Feel free to open an issue, or join the official Athena discussion & testing group: 857518324. You are always welcome!