petercat
A conversational Q&A agent configuration system, self-hosted deployment solutions, and a convenient all-in-one application SDK, allowing you to create intelligent Q&A bots for your GitHub repositories
Stars: 1083
Peter Cat is an intelligent Q&A chatbot solution designed for community maintainers and developers. It provides a conversational Q&A agent configuration system, self-hosting deployment solutions, and a convenient integrated application SDK. Users can easily create intelligent Q&A chatbots for their GitHub repositories and quickly integrate them into various official websites or projects to provide more efficient technical support for the community.
README:
We provide a conversational Q&A agent configuration system, self-hosted deployment options, and a convenient all-in-one application SDK, so you can create an intelligent Q&A bot for your GitHub repository with one click and quickly integrate it into websites or projects, building a more efficient technical-support ecosystem for your community.
- Just provide your repository URL or name, and PeterCat automatically completes the entire bot-creation process.
- Once the bot is created, all related GitHub documentation and issues are ingested automatically as the knowledge base that grounds the bot's answers.
- Multiple integration options are available, such as embedding the chat SDK into your website or installing the GitHub App into a repository with one click.
Feature showcase (screenshots omitted): project information lookup, Discussion replies, PR summaries, code review, issue search, issue filing, and issue replies.
We preconfigure a "bot that creates bots" for PeterCat. When it receives a user's GitHub repository URL or name, it invokes the creation tool to generate the repository's Q&A bot configuration (prompt, name, avatar, opening message, guided questions, tool set, and so on), and at the same time triggers ingestion tasks for the repository's issues and Markdown files. These tasks are split into multiple subtasks that take all resolved issues, highly upvoted replies, and every Markdown file through a load -> split -> embed -> store pipeline to build the knowledge base that grounds the bot's answers.
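As a rough illustration of that load -> split -> embed -> store flow, here is a minimal Python sketch. It is not petercat's actual ingestion code; the loaders, embedding function, and vector store passed in (fetch_markdown_files, fetch_resolved_issues, embed_texts, store) are hypothetical stand-ins for the real GitHub loaders, embedding model, and vector database.

```python
# Illustrative sketch only -- not PeterCat's actual ingestion code.
# Each hypothetical loader is assumed to return (source, text) pairs.
from dataclasses import dataclass


@dataclass
class Chunk:
    source: str  # file path or issue URL
    text: str    # chunk content


def split(source: str, text: str, size: int = 800, overlap: int = 100) -> list[Chunk]:
    """Split one document into overlapping chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(Chunk(source, text[start:start + size]))
        start += size - overlap
    return chunks


def ingest(repo: str, store, embed_texts, fetch_markdown_files, fetch_resolved_issues):
    """load -> split -> embed -> store for a single repository."""
    documents = fetch_markdown_files(repo) + fetch_resolved_issues(repo)   # load
    chunks = [c for src, text in documents for c in split(src, text)]      # split
    vectors = embed_texts([c.text for c in chunks])                        # embed
    store.add(chunks, vectors)                                             # store
```

In the real project these subtasks are dispatched through a task queue (see SQS_QUEUE_URL below) rather than run inline.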
You can find the complete design here:
This project requires the following environment variables to be configured:
.env.local
| Environment Variable | Required | Description | Example |
| --- | --- | --- | --- |
| NEXT_PUBLIC_API_DOMAIN | Required | API domain of the backend service. | https://api.petercat.ai |
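For reference, a minimal .env.local for local development would then contain a single line, using the example value from the table above:

```
NEXT_PUBLIC_API_DOMAIN=https://api.petercat.ai
```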
.env
| Environment Variable | Required | Description | Example |
| --- | --- | --- | --- |
| Application basics | | | |
| API_URL | Required | API domain of the backend service | https://api.petercat.ai |
| WEB_URL | Required | Domain of the frontend web service | https://petercat.ai |
| STATIC_URL | Required | Domain for static assets | https://static.petercat.ai |
| AWS | | | |
| X_GITHUB_SECRET_NAME | Required | Name of the GitHub App private key (PEM) hosted in AWS | prod/githubapp/petercat/pem |
| STATIC_SECRET_NAME | Optional | Name of the CloudFront signing private key hosted in AWS. If set, CloudFront signed URLs are used to protect your assets; see the AWS documentation for details. | prod/petercat/static |
| LLM_TOKEN_SECRET_NAME | Optional | Name of the LLM signing private key hosted in AWS. If set, petercat stores users' LLM tokens encrypted with RSA. | prod/petercat/llm |
| LLM_TOKEN_PUBLIC_NAME | Optional | Name of the LLM signing public key hosted in AWS. If set, petercat stores users' LLM tokens encrypted with RSA. | prod/petercat/llm/pub |
| STATIC_KEYPAIR_ID | Optional | AWS CloudFront Key Pair ID. If set, CloudFront signed URLs are used to protect your assets; see the AWS documentation for details. | APKxxxxxxxx |
| S3_TEMP_BUCKET_NAME | Optional | S3 bucket for temporary image files | xxx-temp |
| SQS_QUEUE_URL | Required | AWS SQS message queue URL | https://sqs.ap-northeast-1.amazonaws.com/xxx/petercat-task-queue |
| Supabase | | | |
| SUPABASE_URL | Required | URL of your Supabase project, available in the Supabase dashboard | https://***.supabase.co |
| SUPABASE_SERVICE_KEY | Required | Supabase service key, available in the Supabase dashboard | {{SUPABASE_SERVICE_KEY}} |
| Auth0 | | | |
| AUTH0_DOMAIN | Required | Auth0 service domain, found under Auth0 / Application / Basic Information | petercat.us.auth0.com |
| AUTH0_CLIENT_ID | Required | Auth0 client ID, found under Auth0 / Application / Basic Information | artfiUxxxx |
| AUTH0_CLIENT_SECRET | Required | Auth0 client secret, found under Auth0 / Application / Basic Information | xxxx-xxxx-xxx |
| API_IDENTIFIER | Required | Auth0 API Identifier | https://petercat.us.auth0.com/api/v2/ |
| LLM | | | |
| OPENAI_API_KEY | Required | OpenAI API key | sk-xxxx |
| OPENAI_BASE_URL | Optional | Base URL for API requests; set only when using a proxy or a service emulator | https://api.openai.com/v1 |
| GEMINI_API_KEY | Optional | Gemini API key | xxxx |
| TAVILY_API_KEY | Required | Tavily API key | tvly-xxxxx |
| GitHub App registration | | | |
| X_GITHUB_APP_ID | Optional | App ID when registered as a GitHub App | 123456 |
| X_GITHUB_APPS_CLIENT_ID | Optional | Client ID of the registered GitHub App | Iv1.xxxxxxx |
| X_GITHUB_APPS_CLIENT_SECRET | Optional | Client secret of the registered GitHub App | xxxxxxxx |
| Rate limiting | | | |
| RATE_LIMIT_ENABLED | Optional | Whether rate limiting is enabled | True |
| RATE_LIMIT_REQUESTS | Optional | Number of requests allowed per window | 100 |
| RATE_LIMIT_DURATION | Optional | Length of the rate-limit window, in minutes | 1 |
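To make the contract above concrete, here is a minimal, hypothetical Python sketch (not taken from the petercat codebase) of how a backend service could fail fast on missing required variables and apply the fixed-window rate limit described by RATE_LIMIT_ENABLED, RATE_LIMIT_REQUESTS, and RATE_LIMIT_DURATION. The names REQUIRED_VARS, check_required_env, and FixedWindowRateLimiter are illustrative, not part of the project.

```python
# Hypothetical sketch -- not petercat's actual configuration or rate-limiting code.
import os
import time

REQUIRED_VARS = [
    "API_URL", "WEB_URL", "STATIC_URL",
    "X_GITHUB_SECRET_NAME", "SQS_QUEUE_URL",
    "SUPABASE_URL", "SUPABASE_SERVICE_KEY",
    "AUTH0_DOMAIN", "AUTH0_CLIENT_ID", "AUTH0_CLIENT_SECRET", "API_IDENTIFIER",
    "OPENAI_API_KEY", "TAVILY_API_KEY",
]


def check_required_env() -> None:
    """Fail fast if any required variable from the table above is missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")


class FixedWindowRateLimiter:
    """Allow RATE_LIMIT_REQUESTS requests per RATE_LIMIT_DURATION minutes, per key."""

    def __init__(self) -> None:
        self.enabled = os.environ.get("RATE_LIMIT_ENABLED", "False") == "True"
        self.limit = int(os.environ.get("RATE_LIMIT_REQUESTS", "100"))
        self.window = int(os.environ.get("RATE_LIMIT_DURATION", "1")) * 60  # seconds
        self._counters: dict[tuple[str, int], int] = {}  # old windows not evicted in this sketch

    def allow(self, key: str) -> bool:
        """Return True if the request for `key` fits inside the current window."""
        if not self.enabled:
            return True
        window_id = int(time.time()) // self.window
        count = self._counters.get((key, window_id), 0) + 1
        self._counters[(key, window_id)] = count
        return count <= self.limit
```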
PeterCat uses yarn as its package manager.

```bash
git clone https://github.com/petercat-ai/petercat.git

# Install dependencies
yarn run bootstrap

# Debug the client
yarn run client

# Debug the assistant
yarn run assistant

# Debug the server
yarn run server

# Start the website locally
yarn run client:server

# Start the assistant component locally
yarn run assistant:server

# Build the assistant
cd assistant
yarn run build
npm publish

# Docker build
yarn run build:docker

# PyPI build
yarn run build:pypi
yarn run publish:pypi
```
Please send your project URL, usage scenario, usage frequency, and other details to [email protected]
The cat is still growing up and occasionally has a "temper", so please be patient when something goes wrong. You can let its keepers know through either of the following two channels:
MIT@PeterCat
Similar Open Source Tools
jiwu-mall-chat-tauri
Jiwu Chat Tauri APP is a desktop chat application based on Nuxt3 + Tauri + Element Plus framework. It provides a beautiful user interface with integrated chat and social functions. It also supports AI shopping chat and global dark mode. Users can engage in real-time chat, share updates, and interact with AI customer service through this application.
XiaoXinAir14IML_2019_hackintosh
XiaoXinAir14IML_2019_hackintosh is a repository dedicated to enabling macOS installation on Lenovo XiaoXin Air-14 IML 2019 laptops. The repository provides detailed information on the hardware specifications, supported systems, BIOS versions, related models, installation methods, updates, patches, and recommended settings. It also includes tools and guides for BIOS modifications, enabling high-resolution display settings, Bluetooth synchronization between macOS and Windows 10, voltage adjustments for efficiency, and experimental support for YogaSMC. The repository offers solutions for various issues like sleep support, sound card emulation, and battery information. It acknowledges the contributions of developers and tools like OpenCore, itlwm, VoodooI2C, and ALCPlugFix.
Langchain-Chatchat
LangChain-Chatchat is an open-source, offline-deployable retrieval-enhanced generation (RAG) large model knowledge base project based on large language models such as ChatGLM and application frameworks such as Langchain. It aims to establish a knowledge base Q&A solution that is friendly to Chinese scenarios, supports open-source models, and can run offline.
aidea-server
AIdea Server is an open-source Golang-based server that integrates mainstream large language models and drawing models. It supports various functionalities including OpenAI's GPT-3.5 and GPT-4, Anthropic's Claude instant and Claude 2.1, Google's Gemini Pro, as well as Chinese models like Tongyi Qianwen, Wenxin Yiyuan, and more. It also supports open-source large models like Yi 34B, Llama2, and AquilaChat 7B. Additionally, it provides features for text-to-image, super-resolution, coloring black and white images, generating art fonts and QR codes, among others.
DownEdit
DownEdit is a powerful program that allows you to download videos from various social media platforms such as TikTok, Douyin, Kuaishou, and more. With DownEdit, you can easily download videos from user profiles and edit them in bulk. You have the option to flip the videos horizontally or vertically throughout the entire directory with just a single click. Stay tuned for more exciting features coming soon!
search2ai
S2A allows your large model API to support networking, searching, news, and web page summarization. It currently supports OpenAI, Gemini, and Moonshot (non-streaming). The large model will determine whether to connect to the network based on your input, and it will not connect to the network for searching every time. You don't need to install any plugins or replace keys. You can directly replace the custom address in your commonly used third-party client. You can also deploy it yourself, which will not affect other functions you use, such as drawing and voice.
airbyte-connectors
This repository contains Airbyte connectors used in Faros and Faros Community Edition platforms as well as Airbyte Connector Development Kit (CDK) for JavaScript/TypeScript.
chatluna
Chatluna is a machine learning model plugin that provides chat services with large language models. It is highly extensible, supports multiple output formats, and offers features like custom conversation presets, rate limiting, and context awareness. Users can deploy Chatluna under Koishi without additional configuration. The plugin supports various models/platforms like OpenAI, Azure OpenAI, Google Gemini, and more. It also provides preset customization using YAML files and allows for easy forking and development within Koishi projects. However, the project lacks web UI, HTTP server, and project documentation, inviting contributions from the community.
VoiceBench
VoiceBench is a repository containing code and data for benchmarking LLM-Based Voice Assistants. It includes a leaderboard with rankings of various voice assistant models based on different evaluation metrics. The repository provides setup instructions, datasets, evaluation procedures, and a curated list of awesome voice assistants. Users can submit new voice assistant results through the issue tracker for updates on the ranking list.
video-subtitle-remover
Video-subtitle-remover (VSR) is a software based on AI technology that removes hard subtitles from videos. It achieves the following functions: - Lossless resolution: Remove hard subtitles from videos, generate files with subtitles removed - Fill the region of removed subtitles using a powerful AI algorithm model (non-adjacent pixel filling and mosaic removal) - Support custom subtitle positions, only remove subtitles in defined positions (input position) - Support automatic removal of all text in the entire video (no input position required) - Support batch removal of watermark text from multiple images.
agentica
Agentica is a human-centric framework for building large language model agents. It provides functionalities for planning, memory management, tool usage, and supports features like reflection, planning and execution, RAG, multi-agent, multi-role, and workflow. The tool allows users to quickly code and orchestrate agents, customize prompts, and make API calls to various services. It supports API calls to OpenAI, Azure, Deepseek, Moonshot, Claude, Ollama, and Together. Agentica aims to simplify the process of building AI agents by providing a user-friendly interface and a range of functionalities for agent development.
xiaogpt
xiaogpt is a tool that allows you to play ChatGPT and other LLMs with Xiaomi AI Speaker. It supports ChatGPT, New Bing, ChatGLM, Gemini, Doubao, and Tongyi Qianwen. You can use it to ask questions, get answers, and have conversations with AI assistants. xiaogpt is easy to use and can be set up in a few minutes. It is a great way to experience the power of AI and have fun with your Xiaomi AI Speaker.
AIO-Firebog-Blocklists
AIO-Firebog-Blocklists is a comprehensive tool that combines various sources into a single, cohesive blocklist. It offers customizable options to suit individual preferences and needs, ensuring regular updates to stay up-to-date with the latest threats. The tool focuses on performance optimization to minimize impact while maintaining effective filtering. It is designed to help users with ad blocking, malware protection, tracker prevention, and content filtering.
go-cyber
Cyber is a superintelligence protocol that aims to create a decentralized and censorship-resistant internet. It uses a novel consensus mechanism called CometBFT and a knowledge graph to store and process information. Cyber is designed to be scalable, secure, and efficient, and it has the potential to revolutionize the way we interact with the internet.
Chinese-Mixtral-8x7B
Chinese-Mixtral-8x7B is an open-source project based on Mistral's Mixtral-8x7B model for incremental pre-training of Chinese vocabulary, aiming to advance research on MoE models in the Chinese natural language processing community. The expanded vocabulary significantly improves the model's encoding and decoding efficiency for Chinese, and the model is pre-trained incrementally on a large-scale open-source corpus, enabling it with powerful Chinese generation and comprehension capabilities. The project includes a large model with expanded Chinese vocabulary and incremental pre-training code.
For similar tasks
promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
unsloth
Unsloth is a tool that allows users to fine-tune large language models (LLMs) 2-5x faster with 80% less memory. It is a free and open-source tool that can be used to fine-tune LLMs such as Gemma, Mistral, Llama 2-5, TinyLlama, and CodeLlama 34b. Unsloth supports 4-bit and 16-bit QLoRA / LoRA fine-tuning via bitsandbytes. It also supports DPO (Direct Preference Optimization), PPO, and Reward Modelling. Unsloth is compatible with Hugging Face's TRL, Trainer, Seq2SeqTrainer, and Pytorch code. It is also compatible with NVIDIA GPUs since 2018+ (minimum CUDA Capability 7.0).
beyondllm
Beyond LLM offers an all-in-one toolkit for experimentation, evaluation, and deployment of Retrieval-Augmented Generation (RAG) systems. It simplifies the process with automated integration, customizable evaluation metrics, and support for various Large Language Models (LLMs) tailored to specific needs. The aim is to reduce LLM hallucination risks and enhance reliability.
aiwechat-vercel
aiwechat-vercel is a tool that integrates AI capabilities into WeChat public accounts using Vercel functions. It requires minimal server setup, low entry barriers, and only needs a domain name that can be bound to Vercel, with almost zero cost. The tool supports various AI models, continuous Q&A sessions, chat functionality, system prompts, and custom commands. It aims to provide a platform for learning and experimentation with AI integration in WeChat public accounts.
hugging-chat-api
Unofficial HuggingChat Python API for creating chatbots, supporting features like image generation, web search, memorizing context, and changing LLMs. Users can log in, chat with the ChatBot, perform web searches, create new conversations, manage conversations, switch models, get conversation info, use assistants, and delete conversations. The API also includes a CLI mode with various commands for interacting with the tool. Users are advised not to use the application for high-stakes decisions or advice and to avoid high-frequency requests to preserve server resources.
microchain
Microchain is a function calling-based LLM agents tool with no bloat. It allows users to define LLM and templates, use various functions like Sum and Product, and create LLM agents for specific tasks. The tool provides a simple and efficient way to interact with OpenAI models and create conversational agents for various applications.
embedchain
Embedchain is an Open Source Framework for personalizing LLM responses. It simplifies the creation and deployment of personalized AI applications by efficiently managing unstructured data, generating relevant embeddings, and storing them in a vector database. With diverse APIs, users can extract contextual information, find precise answers, and engage in interactive chat conversations tailored to their data. The framework follows the design principle of being 'Conventional but Configurable' to cater to both software engineers and machine learning engineers.
For similar jobs
ChatFAQ
ChatFAQ is an open-source comprehensive platform for creating a wide variety of chatbots: generic ones, business-trained, or even capable of redirecting requests to human operators. It includes a specialized NLP/NLG engine based on a RAG architecture and customized chat widgets, ensuring a tailored experience for users and avoiding vendor lock-in.
agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.
anything-llm
AnythingLLM is a full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use as well as supporting multi-user management and permissions.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.
glide
Glide is a cloud-native LLM gateway that provides a unified REST API for accessing various large language models (LLMs) from different providers. It handles LLMOps tasks such as model failover, caching, key management, and more, making it easy to integrate LLMs into applications. Glide supports popular LLM providers like OpenAI, Anthropic, Azure OpenAI, AWS Bedrock (Titan), Cohere, Google Gemini, OctoML, and Ollama. It offers high availability, performance, and observability, and provides SDKs for Python and NodeJS to simplify integration.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
onnxruntime-genai
ONNX Runtime Generative AI is a library that provides the generative AI loop for ONNX models, including inference with ONNX Runtime, logits processing, search and sampling, and KV cache management. Users can call a high level `generate()` method, or run each iteration of the model in a loop. It supports greedy/beam search and TopP, TopK sampling to generate token sequences, has built in logits processing like repetition penalties, and allows for easy custom scoring.