jimeng-free-api-all
The Jimeng AI Free service exposes Jimeng's image and video generation capabilities, including the jimeng-4.0 text-to-image model and several others, offering text-to-image, image-to-image, and video generation (the official site grants 66 credits per day, enough for 66 generations), with zero-configuration deployment and multi-token support. The interface is fully OpenAI-compatible: obtain a sessionid from the Jimeng website and use it as the Bearer token in the Authorization header; multiple accounts are supported. Docker deployment and a Docker Hub image are provided, and the service can be called through several endpoints, including chat completions, video generation, and image generation (text-to-image and image-to-image), covering a wide range of generation needs.
Stars: 263
Jimeng AI Free API is a reverse-engineered API server that encapsulates Jimeng AI's image and video generation capabilities behind OpenAI-compatible API endpoints. It supports the latest jimeng-5.0-preview and jimeng-4.6 text-to-image models, Seedance 2.0 multi-image intelligent video generation, zero-configuration deployment, and multi-token support. The API is fully compatible with the OpenAI API format, integrates seamlessly with existing clients, and supports multiple session IDs used in rotation.
README:
Jimeng AI Free API Service - OpenAI-compatible interface for text-to-image, image-to-image, and video generation
🎨 Expose Jimeng AI's image and video generation capabilities to developers through an OpenAI-compatible API
Jimeng AI Free API is a reverse-engineered API server that wraps Jimeng AI's image and video generation capabilities in OpenAI-compatible API endpoints. It supports the latest jimeng-5.0-preview and jimeng-4.6 text-to-image models, Seedance 2.0 multi-image intelligent video generation, zero-configuration deployment, and multiple tokens.
- 🖼️ Text-to-image: supports jimeng-5.0-preview, jimeng-4.6, jimeng-4.5, and more, up to 4K resolution
- 🎭 Image-to-image: multi-image composition with 1-10 input images
- 🎬 Video generation: jimeng-video-3.5-pro and other models, with first-frame/last-frame control
- 🌊 Seedance 2.0: multi-image intelligent video generation, with @1/@2 placeholders to reference images
- 🔗 OpenAI compatible: fully compatible with the OpenAI API format, drops into existing clients
- 🔄 Multi-account support: multiple sessionids used in rotation
| Technology | Version | Purpose |
|---|---|---|
| Node.js | ≥16.0.0 | Runtime |
| TypeScript | ^5.0.0 | Development language |
| Koa | ^2.15.0 | Web framework |
| Docker | latest | Containerized deployment |
| Feature | Description | Models | Status |
|---|---|---|---|
| Text-to-image | Generate images from a text prompt | jimeng-5.0-preview, jimeng-4.6, jimeng-4.5, jimeng-4.1, etc. | ✅ Available |
| Image-to-image | Compose a new image from multiple input images | jimeng-5.0-preview, jimeng-4.6, jimeng-4.5, etc. | ✅ Available |
| Text-to-video | Generate a video from a text prompt | jimeng-video-3.5-pro, etc. | ✅ Available |
| Image-to-video | Generate a video from first/last frame images | jimeng-video-3.0, etc. | ✅ Available |
| Multi-image intelligent video | Seedance 2.0 blends multiple images into a video | seedance-2.0, seedance-2.0-pro | ✅ Available |
| Chat API | OpenAI-compatible chat interface | All models | ✅ Available |
⚠️ Important notes
Reverse-engineered APIs are unstable. You are encouraged to use the official Jimeng AI site at https://jimeng.jianying.com/ instead, to avoid the risk of account bans.
This organization and its members accept no donations or payments of any kind; this project exists purely for research, exchange, and learning.
For personal use only. Providing it as a public service or using it commercially is prohibited, to avoid putting load on the official service; otherwise you bear the risk yourself.
- Node.js 16+
- npm or yarn
- Docker (optional)
Use the Docker Hub image:
# Pull the image
docker pull wwwzhouhui569/jimeng-free-api-all:latest
# Start the container
docker run -it -d --init --name jimeng-free-api-all \
-p 8000:8000 \
-e TZ=Asia/Shanghai \
 wwwzhouhui569/jimeng-free-api-all:latest
Build from source:
# Clone the repository
git clone https://github.com/wwwzhouhui/jimeng-free-api-all.git
# Enter the directory
cd jimeng-free-api-all
# Build the image
docker build -t jimeng-free-api-all:latest .
# Start the container
docker run -it -d --init --name jimeng-free-api-all \
-p 8000:8000 \
-e TZ=Asia/Shanghai \
 jimeng-free-api-all:latest
# Clone the repository
git clone https://github.com/wwwzhouhui/jimeng-free-api-all.git
# Enter the directory
cd jimeng-free-api-all
# Install dependencies
npm install
# Development mode
npm run dev
# Production mode
npm run build && npm start
Obtaining a sessionid:
- Visit Jimeng AI and sign in to your account
- Press F12 to open the developer tools
- Go to Application > Cookies
- Find the value of sessionid
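To quickly check that the sessionid works, you can call the /v1/models endpoint listed in the API table below. This is only a sketch for a sanity check: the response format is not documented here, and the Authorization header may not be strictly required for this endpoint.
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your_sessionid"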
Multiple accounts are supported; separate their sessionids with commas:
Authorization: Bearer sessionid1,sessionid2,sessionid3
Each request randomly picks one of them to use.
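For example, a text-to-image request carrying three sessionids might look like the sketch below; sessionid1/sessionid2/sessionid3 and the prompt are placeholder values, and the server picks one token at random per request.
curl -X POST http://localhost:8000/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sessionid1,sessionid2,sessionid3" \
  -d '{"model": "jimeng-4.5", "prompt": "a lakeside cabin at sunset"}'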
| Endpoint | Method | Description |
|---|---|---|
| /v1/chat/completions | POST | OpenAI-compatible chat interface (example below) |
| /v1/images/generations | POST | Text-to-image |
| /v1/images/compositions | POST | Image-to-image |
| /v1/videos/generations | POST | Video generation |
| /v1/models | GET | List available models |
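The examples below cover only image and video generation, so here is a minimal chat request sketch, assuming the endpoint accepts the standard OpenAI messages format it advertises compatibility with; the model name and prompt are illustrative placeholders.
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_sessionid" \
  -d '{
    "model": "jimeng-4.5",
    "messages": [
      {"role": "user", "content": "A lakeside cabin at sunset, 16:9"}
    ]
  }'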
Text-to-image example:
curl -X POST http://localhost:8000/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_sessionid" \
-d '{
"model": "jimeng-4.5",
"prompt": "美丽的日落风景,湖边的小屋",
"ratio": "16:9",
"resolution": "2k"
}'
Video generation example:
curl -X POST http://localhost:8000/v1/videos/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_sessionid" \
-d '{
"model": "jimeng-video-3.5-pro",
"prompt": "一只可爱的小猫在草地上玩耍",
"ratio": "16:9",
"resolution": "720p",
"duration": 5
}'
Seedance 2.0 multi-image video example:
curl -X POST http://localhost:8000/v1/videos/generations \
-H "Authorization: Bearer your_sessionid" \
-F "model=seedance-2.0" \
-F "prompt=@1 和 @2 两人开始跳舞" \
-F "ratio=4:3" \
-F "duration=4" \
-F "files=@/path/to/image1.jpg" \
-F "files=@/path/to/image2.jpg"jimeng-free-api-all/
├── src/
│ ├── index.ts # Application entry point
│ ├── daemon.ts # Daemon process management
│ ├── api/
│ │ ├── controllers/ # Business logic controllers
│ │ │ ├── core.ts # Core utilities (token handling, etc.)
│ │ │ ├── images.ts # Image generation logic
│ │ │ ├── videos.ts # Video generation logic
│ │ │ └── chat.ts # Chat completion logic
│ │ ├── routes/ # API route definitions
│ │ │ ├── index.ts # Route aggregation
│ │ │ ├── images.ts # /v1/images/* endpoints
│ │ │ ├── videos.ts # /v1/videos/* endpoints
│ │ │ ├── chat.ts # /v1/chat/* endpoints
│ │ │ └── models.ts # /v1/models endpoint
│ │ └── consts/ # API constants and exceptions
│ └── lib/
│ ├── server.ts # Koa server configuration
│ ├── config.ts # Configuration management
│ ├── logger.ts # Logging utilities
│ ├── util.ts # Helper utilities
│ ├── request/ # Request handling classes
│ ├── response/ # Response handling classes
│ ├── exceptions/ # Exception classes
│ └── configs/ # Configuration schemas
├── configs/ # Configuration files
├── doc/ # Documentation assets
├── Dockerfile # Docker build file
├── package.json # Project manifest
└── tsconfig.json # TypeScript configuration
Text-to-image models:
| User-facing model name | Internal model name | Notes |
|---|---|---|
| jimeng-5.0-preview | high_aes_general_v50 | 5.0 preview, newest model |
| jimeng-4.6 | high_aes_general_v42 | Latest model, recommended |
| jimeng-4.5 | high_aes_general_v40l | High-quality model |
| jimeng-4.1 | high_aes_general_v41 | High-quality model |
| jimeng-4.0 | high_aes_general_v40 | Stable release |
| jimeng-3.1 | high_aes_general_v30l_art_fangzhou | Artistic style |
| jimeng-3.0 | high_aes_general_v30l | General-purpose model |
| jimeng-2.1 | - | Legacy model |
| jimeng-2.0-pro | - | Legacy pro model |
| jimeng-2.0 | - | Legacy model |
| jimeng-1.4 | - | Early model |
| jimeng-xl-pro | - | XL pro model |
Video models:
| User-facing model name | Internal model name | Notes |
|---|---|---|
| jimeng-video-3.5-pro | dreamina_ic_generate_video_model_vgfm_3.5_pro | Latest video model |
| jimeng-video-3.0 | - | Video generation 3.0 |
| jimeng-video-3.0-pro | - | Video generation 3.0 Pro |
| jimeng-video-2.0 | - | Video generation 2.0 |
| jimeng-video-2.0-pro | - | Video generation 2.0 Pro |
| seedance-2.0 | dreamina_seedance_40_pro | Multi-image intelligent video generation |
| seedance-2.0-pro | dreamina_seedance_40_pro | Multi-image intelligent video generation (Pro) |
Image resolution presets:
| Resolution | 1:1 | 4:3 | 3:4 | 16:9 | 9:16 | 3:2 | 2:3 | 21:9 |
|---|---|---|---|---|---|---|---|---|
| 1k | 1024×1024 | 768×1024 | 1024×768 | 1024×576 | 576×1024 | 1024×682 | 682×1024 | 1195×512 |
| 2k | 2048×2048 | 2304×1728 | 1728×2304 | 2560×1440 | 1440×2560 | 2496×1664 | 1664×2496 | 3024×1296 |
| 4k | 4096×4096 | 4608×3456 | 3456×4608 | 5120×2880 | 2880×5120 | 4992×3328 | 3328×4992 | 6048×2592 |
Video resolution presets:
| Resolution | 1:1 | 4:3 | 3:4 | 16:9 | 9:16 |
|---|---|---|---|---|---|
| 480p | 480×480 | 640×480 | 480×640 | 854×480 | 480×854 |
| 720p | 720×720 | 960×720 | 720×960 | 1280×720 | 720×1280 |
| 1080p | 1080×1080 | 1440×1080 | 1080×1440 | 1920×1080 | 1080×1920 |
POST /v1/images/generations
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | jimeng-4.5 | Model name |
| prompt | string | Yes | - | Prompt; multi-image generation supported |
| negative_prompt | string | No | "" | Negative prompt |
| ratio | string | No | 1:1 | Aspect ratio |
| resolution | string | No | 2k | Resolution: 1k, 2k, 4k |
| sample_strength | number | No | 0.5 | Detail strength, 0-1 |
| response_format | string | No | url | url or b64_json (example below) |
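As a variant of the earlier text-to-image example, response_format can be set to b64_json to receive base64-encoded image data instead of URLs. This is a sketch with placeholder model, prompt, and size values; the exact response schema is not reproduced here.
curl -X POST http://localhost:8000/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_sessionid" \
  -d '{
    "model": "jimeng-4.6",
    "prompt": "a snowy mountain at dawn",
    "ratio": "1:1",
    "resolution": "1k",
    "response_format": "b64_json"
  }'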
POST /v1/images/compositions
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | jimeng-4.5 | Model name |
| prompt | string | Yes | - | Prompt |
| images | array | Yes | - | Array of image URLs, 1-10 images |
| ratio | string | No | 1:1 | Aspect ratio |
| resolution | string | No | 2k | Resolution |
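The README does not include a request example for this endpoint; a sketch assembled from the parameter table above might look like the following, where the prompt and image URLs are placeholders.
curl -X POST http://localhost:8000/v1/images/compositions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_sessionid" \
  -d '{
    "model": "jimeng-4.5",
    "prompt": "blend the two photos into a single watercolor scene",
    "images": ["https://example.com/a.jpg", "https://example.com/b.jpg"],
    "ratio": "4:3",
    "resolution": "2k"
  }'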
POST /v1/videos/generations
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | No | jimeng-video-3.0 | Model name |
| prompt | string | Yes | - | Video description |
| ratio | string | No | 1:1 | Aspect ratio |
| resolution | string | No | 720p | Resolution: 480p, 720p, 1080p |
| duration | number | No | 5 | Duration: 5 or 10 seconds |
| file_paths | array | No | [] | First/last frame image URLs (see the sketch below) |
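The earlier video example is text-to-video only. For image-to-video, the table above says file_paths carries the first/last frame image URLs, so an image-to-video request might look like this sketch; the URLs and prompt are placeholders.
curl -X POST http://localhost:8000/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_sessionid" \
  -d '{
    "model": "jimeng-video-3.0",
    "prompt": "the camera slowly pans from the first frame to the last",
    "ratio": "16:9",
    "resolution": "720p",
    "duration": 5,
    "file_paths": ["https://example.com/first.jpg", "https://example.com/last.jpg"]
  }'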
POST /v1/videos/generations
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | - | seedance-2.0 or seedance-2.0-pro |
| prompt | string | No | - | Prompt; use @1, @2 to reference images |
| ratio | string | No | 4:3 | Aspect ratio |
| duration | number | No | 4 | Video duration (seconds) |
| files | file[] | Yes* | - | Uploaded images (multipart) |
| file_paths | array | Yes* | - | Array of image URLs (JSON; example below) |
Prompt placeholders:
- @1 / @图1 / @image1 - references the first image
- @2 / @图2 / @image2 - references the second image
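A multipart example appears earlier; for the JSON variant that passes file_paths instead of uploaded files, a sketch based on the parameter table might look like this, with placeholder image URLs.
curl -X POST http://localhost:8000/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_sessionid" \
  -d '{
    "model": "seedance-2.0",
    "prompt": "@1 and @2 start dancing together",
    "ratio": "4:3",
    "duration": 4,
    "file_paths": ["https://example.com/image1.jpg", "https://example.com/image2.jpg"]
  }'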
# Clone the repository
git clone https://github.com/wwwzhouhui/jimeng-free-api-all.git
cd jimeng-free-api-all
# Install dependencies
npm install
# Development mode (hot reload)
npm run dev
# Build for production
npm run build
# Start the production server
npm start
- Fork this repository
- Create a feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
How do I get a sessionid?
- Visit Jimeng AI and sign in
- Press F12 to open the developer tools
- Go to Application > Cookies
- Copy the value of sessionid
What should I do when a sessionid expires?
A sessionid has a limited lifetime. When it expires, sign in to the Jimeng website again to obtain a new one. Configuring multiple accounts is recommended to improve availability.
How do I configure multiple accounts?
Separate multiple sessionids with commas in the Authorization header:
Authorization: Bearer sessionid1,sessionid2,sessionid3
The Docker container won't start?
- Check whether port 8000 is already in use
- Make sure the Docker service is running
- Check the container logs: docker logs jimeng-free-api-all
Generation requests fail with an error?
- Check that the sessionid is still valid
- Make sure the account has enough credits
- Verify that the request parameters are correct
- Check the server logs for detailed error information
- ✨ Added the jimeng-5.0-preview model: Jimeng AI's latest 5.0 preview image generation model (internal model high_aes_general_v50), supporting text-to-image, image-to-image, and multi-image generation
- ✨ Added the jimeng-4.6 model: Jimeng AI's 4.6 image generation model (internal model high_aes_general_v42), supporting text-to-image, image-to-image, and multi-image generation
- ⚡ Upgraded the draft version: jimeng-5.0-preview and jimeng-4.6 now use version 3.3.9
- 🔧 Extended multi-image generation support: the multi-image detection regex now covers the jimeng-5.x model series
- 🐛 Fixed videos returning low-bitrate preview URLs: the video generation endpoints (including Seedance 2.0) previously returned vlabvod.com low-bitrate preview URLs (bitrate ~1152); they now fetch the dreamnia.jimeng.com high-bitrate download URL (bitrate ~6297+) via the get_local_item_list API
- 🐛 Fixed Seedance polling response parsing: the get_history_by_ids API keys its results by historyId (e.g. result["8918159809292"]) rather than returning a result.history_list array, so the polling loop could not parse the response and clients got nothing back after a video finished
- 🐛 Fixed regular video polling response parsing: the generateVideo function now falls back to result[historyId], handling API responses keyed by historyId
- 🐛 Fixed the item_id extraction field: the video item ID returned by the API lives in the common_attr.id field, which is now included in the extraction chain
- ✨ Added the Seedance 2.0 model: blends multiple input images into one video
- ✨ Multi-image prompts: placeholders such as @1 and @2 reference the input images
- 🐛 Fixed multipart file uploads: tuned the koa-body configuration
- 🔒 Security fixes: upgraded dependencies to close 19 vulnerabilities
- ⚡ Improved parameter validation: the prompt parameter is now optional
- ✨ Added the jimeng-video-3.5-pro model
- ⚡ Upgraded the draft version to 3.3.4
- 🔧 Dynamic version management: the draft version is selected automatically per model
- 🔄 Unified parameter format: ratio and resolution replace width/height
- 📤 multipart/form-data support: image-to-image and video generation accept direct file uploads
- ⚡ Improved error messages
- 🐛 Fixed credit deduction: tuned request parameters so generation no longer consumes credits
- 🔧 Updated the browser fingerprint: Chrome version bumped to 142
- 🐛 Fixed the jimeng-4.5 model: corrected its model mapping name
- ⬆️ Version bump: DRAFT_VERSION upgraded to 3.3.4
- ✨ Extended resolution support: 1k/2k/4k resolutions
Join the technical discussion group to share tips and experience:
- WeChat: laohaibao2025
- Email: [email protected]
If this project helps you, you are welcome to buy me a coffee ☕
WeChat Pay
Thanks to the following projects:
If you find the project useful, please give it a Star ⭐
Similar Open Source Tools
lingti-bot
lingti-bot is an AI Bot platform that integrates MCP Server, multi-platform message gateway, rich toolset, intelligent conversation, and voice interaction. It offers core advantages like zero-dependency deployment with a single 30MB binary file, cloud relay support for quick integration with enterprise WeChat/WeChat Official Account, built-in browser automation with CDP protocol control, 75+ MCP tools covering various scenarios, native support for Chinese platforms like DingTalk, Feishu, enterprise WeChat, WeChat Official Account, and more. It is embeddable, supports multiple AI backends like Claude, DeepSeek, Kimi, MiniMax, and Gemini, and allows access from platforms like DingTalk, Feishu, enterprise WeChat, WeChat Official Account, Slack, Telegram, and Discord. The bot is designed with simplicity as the highest design principle, focusing on zero-dependency deployment, embeddability, plain text output, code restraint, and cloud relay support.
XiaoXinAir14IML_2019_hackintosh
XiaoXinAir14IML_2019_hackintosh is a repository dedicated to enabling macOS installation on Lenovo XiaoXin Air-14 IML 2019 laptops. The repository provides detailed information on the hardware specifications, supported systems, BIOS versions, related models, installation methods, updates, patches, and recommended settings. It also includes tools and guides for BIOS modifications, enabling high-resolution display settings, Bluetooth synchronization between macOS and Windows 10, voltage adjustments for efficiency, and experimental support for YogaSMC. The repository offers solutions for various issues like sleep support, sound card emulation, and battery information. It acknowledges the contributions of developers and tools like OpenCore, itlwm, VoodooI2C, and ALCPlugFix.
daily_stock_analysis
The daily_stock_analysis repository is an intelligent stock analysis system based on AI large models for A-share/Hong Kong stock/US stock selection. It automatically analyzes and pushes a 'decision dashboard' to WeChat Work/Feishu/Telegram/email daily. The system features multi-dimensional analysis, global market support, market review, AI backtesting validation, multi-channel notifications, and scheduled execution using GitHub Actions. It utilizes AI models like Gemini, OpenAI, DeepSeek, and data sources like AkShare, Tushare, Pytdx, Baostock, YFinance for analysis. The system includes built-in trading disciplines like risk warning, trend trading, precise entry/exit points, and checklist marking for conditions.
gpt_server
The GPT Server project leverages the basic capabilities of FastChat to provide the capabilities of an openai server. It perfectly adapts more models, optimizes models with poor compatibility in FastChat, and supports loading vllm, LMDeploy, and hf in various ways. It also supports all sentence_transformers compatible semantic vector models, including Chat templates with function roles, Function Calling (Tools) capability, and multi-modal large models. The project aims to reduce the difficulty of model adaptation and project usage, making it easier to deploy the latest models with minimal code changes.
ai-app
The 'ai-app' repository is a comprehensive collection of tools and resources related to artificial intelligence, focusing on topics such as server environment setup, PyCharm and Anaconda installation, large model deployment and training, Transformer principles, RAG technology, vector databases, AI image, voice, and music generation, and AI Agent frameworks. It also includes practical guides and tutorials on implementing various AI applications. The repository serves as a valuable resource for individuals interested in exploring different aspects of AI technology.
DeepAudit
DeepAudit is an AI audit team accessible to everyone, making vulnerability discovery within reach. It is a next-generation code security audit platform based on Multi-Agent collaborative architecture. It simulates the thinking mode of security experts, achieving deep code understanding, vulnerability discovery, and automated sandbox PoC verification through multiple intelligent agents (Orchestrator, Recon, Analysis, Verification). DeepAudit aims to address the three major pain points of traditional SAST tools: high false positive rate, blind spots in business logic, and lack of verification means. Users only need to import the project, and DeepAudit automatically starts working: identifying the technology stack, analyzing potential risks, generating scripts, sandbox verification, and generating reports, ultimately outputting a professional audit report. The core concept is to let AI attack like a hacker and defend like an expert.
Llama-Chinese
The Llama Chinese community is an advanced technical community focused on optimizing Llama models for Chinese and building applications on top of them. **Based on large-scale Chinese data, the Llama2 model has been continuously upgraded for Chinese capability starting from pretraining [Done]**. **The same continuous upgrade is now underway for the Llama3 model [Doing]**. Developers and researchers passionate about large language models are warmly welcome to join.
Feishu-MCP
Feishu-MCP is a server that provides access, editing, and structured processing capabilities for Feishu documents for Cursor, Windsurf, Cline, and other AI-driven coding tools, based on the Model Context Protocol server. This project enables AI coding tools to directly access and understand the structured content of Feishu documents, significantly improving the intelligence and efficiency of document processing. It covers the real usage process of Feishu documents, allowing efficient utilization of document resources, including folder directory retrieval, content retrieval and understanding, smart creation and editing, efficient search and retrieval, and more. It enhances the intelligent access, editing, and searching of Feishu documents in daily usage, improving content processing efficiency and experience.
MiniCPM
MiniCPM is a series of open-source large models on the client side jointly developed by Face Intelligence and Tsinghua University Natural Language Processing Laboratory. The main language model MiniCPM-2B has only 2.4 billion (2.4B) non-word embedding parameters, with a total of 2.7B parameters. - After SFT, MiniCPM-2B performs similarly to Mistral-7B on public comprehensive evaluation sets (better in Chinese, mathematics, and code capabilities), and outperforms models such as Llama2-13B, MPT-30B, and Falcon-40B overall. - After DPO, MiniCPM-2B also surpasses many representative open-source large models such as Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha on the current evaluation set MTBench, which is closest to the user experience. - Based on MiniCPM-2B, a multi-modal large model MiniCPM-V 2.0 on the client side is constructed, which achieves the best performance of models below 7B in multiple test benchmarks, and surpasses larger parameter scale models such as Qwen-VL-Chat 9.6B, CogVLM-Chat 17.4B, and Yi-VL 34B on the OpenCompass leaderboard. MiniCPM-V 2.0 also demonstrates leading OCR capabilities, approaching Gemini Pro in scene text recognition capabilities. - After Int4 quantization, MiniCPM can be deployed and inferred on mobile phones, with a streaming output speed slightly higher than human speech speed. MiniCPM-V also directly runs through the deployment of multi-modal large models on mobile phones. - A single 1080/2080 can efficiently fine-tune parameters, and a single 3090/4090 can fully fine-tune parameters. A single machine can continuously train MiniCPM, and the secondary development cost is relatively low.
DISC-LawLLM
DISC-LawLLM is a legal domain large model that aims to provide professional, intelligent, and comprehensive **legal services** to users. It is developed and open-sourced by the Data Intelligence and Social Computing Lab (Fudan-DISC) at Fudan University.
MedicalGPT
MedicalGPT is a training medical GPT model with ChatGPT training pipeline, implement of Pretraining, Supervised Finetuning, RLHF(Reward Modeling and Reinforcement Learning) and DPO(Direct Preference Optimization).
BlueLM
BlueLM is a large-scale pre-trained language model developed by vivo AI Global Research Institute, featuring 7B base and chat models. It includes high-quality training data with a token scale of 26 trillion, supporting both Chinese and English languages. BlueLM-7B-Chat excels in C-Eval and CMMLU evaluations, providing strong competition among open-source models of similar size. The models support 32K long texts for better context understanding while maintaining base capabilities. BlueLM welcomes developers for academic research and commercial applications.
petercat
Peter Cat is an intelligent Q&A chatbot solution designed for community maintainers and developers. It provides a conversational Q&A agent configuration system, self-hosting deployment solutions, and a convenient integrated application SDK. Users can easily create intelligent Q&A chatbots for their GitHub repositories and quickly integrate them into various official websites or projects to provide more efficient technical support for the community.
GodHook
GodHook is an Xposed module that integrates various fun features, including automatic replies with support for multiple AI language models, subscription functionality for daily news, inspirational quotes, and weather updates, as well as interface functions to execute host app message functions for operations alerts and data push scenarios. It also offers various other features waiting to be explored. The module is designed for learning and communication purposes only and should not be used for malicious purposes. It requires technical knowledge to configure API model information and aims to lower the technical barrier for wider usage in the future.
pmhub
PmHub is a smart project management system based on SpringCloud, SpringCloud Alibaba, and LLM. It aims to help students quickly grasp the architecture design and development process of microservices/distributed projects. PmHub provides a platform for students to experience the transformation from monolithic to microservices architecture, understand the pros and cons of both architectures, and prepare for job interviews. It offers popular technologies like SpringCloud-Gateway, Nacos, Sentinel, and provides high-quality code, continuous integration, product design documents, and an enterprise workflow system. PmHub is suitable for beginners and advanced learners who want to master core knowledge of microservices/distributed projects.
For similar tasks
eca
ECA (Editor Code Assistant) is a free and open-source editor-agnostic tool designed to link Language Model Machines (LLMs) with editors for AI pair programming. It provides a protocol for any editor to integrate, offering a seamless user experience. The tool allows for single configuration across different editors, features a chat interface for collaboration, supports multiple LLM models, and enhances code editing with context details. ECA aims to simplify the integration of LLMs with editors, focusing on improving the user experience and productivity in coding tasks.
lollms-webui
LoLLMs WebUI (Lord of Large Language Multimodal Systems: One tool to rule them all) is a user-friendly interface to access and utilize various LLM (Large Language Models) and other AI models for a wide range of tasks. With over 500 AI expert conditionings across diverse domains and more than 2500 fine tuned models over multiple domains, LoLLMs WebUI provides an immediate resource for any problem, from car repair to coding assistance, legal matters, medical diagnosis, entertainment, and more. The easy-to-use UI with light and dark mode options, integration with GitHub repository, support for different personalities, and features like thumb up/down rating, copy, edit, and remove messages, local database storage, search, export, and delete multiple discussions, make LoLLMs WebUI a powerful and versatile tool.
daily-poetry-image
Daily Chinese ancient poetry and AI-generated images powered by Bing DALL-E-3. GitHub Action triggers the process automatically. Poetry is provided by Today's Poem API. The website is built with Astro.
InvokeAI
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
LocalAI
LocalAI is a free and open-source OpenAI alternative that acts as a drop-in replacement REST API compatible with OpenAI (Elevenlabs, Anthropic, etc.) API specifications for local AI inferencing. It allows users to run LLMs, generate images, audio, and more locally or on-premises with consumer-grade hardware, supporting multiple model families and not requiring a GPU. LocalAI offers features such as text generation with GPTs, text-to-audio, audio-to-text transcription, image generation with stable diffusion, OpenAI functions, embeddings generation for vector databases, constrained grammars, downloading models directly from Huggingface, and a Vision API. It provides a detailed step-by-step introduction in its Getting Started guide and supports community integrations such as custom containers, WebUIs, model galleries, and various bots for Discord, Slack, and Telegram. LocalAI also offers resources like an LLM fine-tuning guide, instructions for local building and Kubernetes installation, projects integrating LocalAI, and a how-tos section curated by the community. It encourages users to cite the repository when utilizing it in downstream projects and acknowledges the contributions of various software from the community.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
StableSwarmUI
StableSwarmUI is a modular Stable Diffusion web user interface that emphasizes making power tools easily accessible, high performance, and extensible. It is designed to be a one-stop-shop for all things Stable Diffusion, providing a wide range of features and capabilities to enhance the user experience.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.








