
ai-app-lab
Stars: 120

ai-app-lab provides Arkitect, a high-code Python SDK for enterprise developers with professional development capabilities. It offers the toolset and workflows needed to build large-model applications tailored to specific business scenarios: highly customizable application orchestration, quality business tools, one-stop development and hosting services, security hardening, and AI prototype application code examples. It targets complex enterprise development scenarios, enabling highly customized intelligent applications across industries.
README:
Arkitect is a high-code Python SDK for enterprise developers with professional development capabilities, providing the toolset and workflows needed to build large-model applications. With the high-code SDK Arkitect and the AI prototype application code examples, you can quickly develop and extend large-model applications that match your business scenarios.
- Highly customizable: a high-code approach to agent application orchestration that flexibly serves highly customized and bespoke requirements.
- Rich, high-quality business tools: high-quality, reliable business tools for enterprise customers, including a rich library of business plugins and tool chains that can be combined with advanced large models to build agent applications that solve problems end to end.
- One-stop development and hosting: simplifies deploying and managing agent applications and improves system stability.
- Secure and reliable: ships with Ark's security-hardening practices, strengthening the security and confidentiality of business data and reducing the risk of data leakage or theft.
- AI prototype application code examples: samples that developers can quickly pick up and extend; you can build customized development on top of them as needed.
Built for complex enterprise development scenarios, Arkitect lets you assemble highly customized agent applications, helping large models land in real industry scenarios and powering enterprise intelligent upgrades.
- Smart cockpit: in-vehicle intelligent interaction for the automotive industry, orchestrating role play, chat, online queries (weather, video, news, etc.), and invocation of in-car capabilities.
- Financial services: robo-advisory, risk assessment, and other services for the finance industry, improving service efficiency and customer satisfaction.
- E-commerce inventory management: efficient inventory management for e-commerce, including stock management and queries plus demand analysis and forecasting, keeping supply-chain operations smooth and efficient.
- Office assistant: supports enterprise customers with document writing, meeting management, data analysis, and other office-scenario needs.
- Industry large-model applications: enterprises can customize and extend for their own business and goals, including but not limited to internet, industrial, government, transportation, automotive, finance, and other industry scenarios.
| Feature | Description |
|---|---|
| Prompt rendering and model calling | Simplifies prompt rendering and the handling of model-call results when invoking a model. |
| Plugin calling | Supports local plugin registration, plugin management, and automated invocation through function-calling (FC) models. |
| Trace monitoring | Supports OTel-protocol trace management and reporting. |
| Application | Description |
|---|---|
| Interactive bilingual video generator | Enter a single topic and it generates an engaging, meaningful bilingual video for you. |
| Real-time video understanding | Multimodal insight: real-time visual and speech understanding built on the Doubao vision-understanding model. |
| Real-time voice call - Qingqing | "Hi, I'm your friend Qiao Qingqing, come have a voice call with me!" |
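The two quickstarts below exercise the features in the feature table above end to end. As a quick orientation, this non-authoritative sketch pulls together the identifiers those examples use (the `@task()` trace decorator, `ToolPool` plugin registration, and `BaseChatLanguageModel` for prompt rendering and model calling):

```python
# Orientation sketch only; every identifier here is taken from the quickstart examples below.
from arkitect.core.component.llm import BaseChatLanguageModel
from arkitect.core.component.tool import Calculator, ToolPool
from arkitect.telemetry.trace import task  # trace monitoring: spans are reported via OTel

ToolPool.register(Calculator())  # plugin calling: register a local tool


@task()  # decorated functions participate in trace management and reporting
async def answer(endpoint_id, messages, parameters):
    llm = BaseChatLanguageModel(
        endpoint_id=endpoint_id, messages=messages, parameters=parameters
    )
    # Prompt rendering + model calling, with the registered tools exposed to the model.
    return await llm.arun(functions=ToolPool.all())
```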
- Install arkitect:

```bash
pip install arkitect --index-url https://pypi.org/simple
```

- Create `main.py` and replace `endpoint_id` in the file with the ID of your newly created inference endpoint:
"""
默认llm逻辑
"""
import os
from typing import AsyncIterable, Union
from arkitect.core.component.llm import BaseChatLanguageModel
from arkitect.core.component.llm.model import (
ArkChatCompletionChunk,
ArkChatParameters,
ArkChatRequest,
ArkChatResponse,
Response,
)
from arkitect.launcher.local.serve import launch_serve
from arkitect.telemetry.trace import task
endpoint_id = "<YOUR ENDPOINT ID>"
@task()
async def default_model_calling(
request: ArkChatRequest,
) -> AsyncIterable[Union[ArkChatCompletionChunk, ArkChatResponse]]:
parameters = ArkChatParameters(**request.__dict__)
llm = BaseChatLanguageModel(
endpoint_id=endpoint_id,
messages=request.messages,
parameters=parameters,
)
if request.stream:
async for resp in llm.astream():
yield resp
else:
yield await llm.arun()
@task()
async def main(request: ArkChatRequest) -> AsyncIterable[Response]:
async for resp in default_model_calling(request):
yield resp
if __name__ == "__main__":
port = os.getenv("_FAAS_RUNTIME_PORT")
launch_serve(
package_path="main",
port=int(port) if port else 8080,
health_check_path="/v1/ping",
endpoint_path="/api/v3/bots/chat/completions",
clients={},
)
- Set the API key and start the backend:

```bash
export ARK_API_KEY=<YOUR APIKEY>
python3 main.py
```

- Send a request:

```bash
curl --location 'http://localhost:8080/api/v3/bots/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "my-bot",
    "messages": [
        {
            "role": "user",
            "content": "介绍你自己啊"
        }
    ]
}'
```
The expected response looks like this:

```json
{
    "error": null,
    "id": "02173*************************************",
    "choices": [
        {
            "finish_reason": "stop",
            "moderation_hit_type": null,
            "index": 0,
            "logprobs": null,
            "message": {
                "content": "我是豆包,由字节跳动公司开发。我能陪你谈天说地,无论是解答各种知识疑问,比如科学原理、历史事件;还是探讨文化艺术、娱乐八卦;亦或是在生活问题上给你提供建议和思路,像制定旅行计划、规划健身安排、分享美食烹饪方法等,我都很在行。随时都可以和我交流,我时刻准备着为你排忧解难、畅聊想法! ",
                "role": "assistant",
                "function_call": null,
                "tool_calls": null,
                "audio": null
            }
        }
    ],
    "created": 1736847939,
    "model": "doubao-pro-32k-241215",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 95,
        "prompt_tokens": 12,
        "total_tokens": 107,
        "prompt_tokens_details": {
            "cached_tokens": 0
        }
    },
    "metadata": null
}
```
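If you prefer Python over curl, the same request can be sent with the `requests` package. This is a minimal sketch; it assumes the backend started above is listening on port 8080, and sends no auth header because the local launcher reads `ARK_API_KEY` from the environment:

```python
# Minimal sketch: call the locally served endpoint from Python instead of curl.
import requests

resp = requests.post(
    "http://localhost:8080/api/v3/bots/chat/completions",
    json={
        "model": "my-bot",
        "messages": [{"role": "user", "content": "介绍你自己啊"}],
    },
    timeout=60,
)
resp.raise_for_status()
body = resp.json()
# The reply text sits in the first choice's message, as in the sample response above.
print(body["choices"][0]["message"]["content"])
```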
- Install arkitect:

```bash
pip install arkitect --index-url https://pypi.org/simple
```

- Log in to the Ark console and create an inference endpoint. Choose a model with function-call capability; doubao-pro-32k functioncall-241028 is recommended (see the reference documentation).
- Create `main.py` and replace `endpoint_id` in the file with the ID of your newly created inference endpoint:
"""
fc+llm
"""
import os
from typing import AsyncIterable, Union
from arkitect.core.component.llm import BaseChatLanguageModel
from arkitect.core.component.llm.model import (
ArkChatCompletionChunk,
ArkChatParameters,
ArkChatRequest,
ArkChatResponse,
Response,
)
from arkitect.core.component.tool import Calculator, ToolPool
from arkitect.launcher.local.serve import launch_serve
from arkitect.telemetry.trace import task
endpoint_id = "<YOUR ENDPOINT ID>"
@task()
async def default_model_calling(
request: ArkChatRequest,
) -> AsyncIterable[Union[ArkChatCompletionChunk, ArkChatResponse]]:
parameters = ArkChatParameters(**request.__dict__)
ToolPool.register(Calculator())
llm = BaseChatLanguageModel(
endpoint_id=endpoint_id,
messages=request.messages,
parameters=parameters,
)
if request.stream:
async for resp in llm.astream(functions=ToolPool.all()):
yield resp
else:
yield await llm.arun(functions=ToolPool.all())
@task()
async def main(request: ArkChatRequest) -> AsyncIterable[Response]:
async for resp in default_model_calling(request):
yield resp
if __name__ == "__main__":
port = os.getenv("_FAAS_RUNTIME_PORT")
launch_serve(
package_path="main",
port=int(port) if port else 8080,
health_check_path="/v1/ping",
endpoint_path="/api/v3/bots/chat/completions",
clients={},
)
- Set the API key and start the backend:

```bash
export ARK_API_KEY=<YOUR APIKEY>
python3 main.py
```

- Send a request:

```bash
curl --location 'http://localhost:8080/api/v3/bots/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "my-bot",
    "messages": [
        {
            "role": "user",
            "content": "老王要养马,他有这样一池水:如果养马30匹,8天可以把水喝光;如果养马25匹,12天把水喝光。老王要养马23匹,那么几天后他要为马找水喝?"
        }
    ]
}'
```
The expected response looks like this:

```json
{
    "error": null,
    "id": "0xxxxxxxxx",
    "choices": [
        {
            "finish_reason": "stop",
            "moderation_hit_type": null,
            "index": 0,
            "logprobs": null,
            "message": {
                "content": "\n首先计算出每天新增的水量,再算出池中原有的水量,最后根据养马数量计算水可以喝的天数,调用 `Calculator/Calculator` 工具进行计算。\n\n假设每匹马每天的饮水量为\\(1\\)份,我们先来求出每天新增的水量。\n\n\n\n假设每匹马每天的饮水量为1份。30匹马8天的饮水量为$30\\times8=240$份,25匹马12天的饮水量为$25\\times12=300$份。那么12天的总饮水量比8天的总饮水量多了$300-240=60$份,这60份水是$12-8=4$天新增加的水量,所以每天新增加的水量为$60\\div4=15$份。则水池原有的水量为$30\\times8-15\\times8=120$份。如果养23匹马,每天实际消耗原水池的水量为$23-15=8$份,所以喝完水池里的水需要$120\\div8=15$天\n15天后他要为马找水喝。",
                "role": "assistant",
                "function_call": null,
                "tool_calls": null,
                "audio": null
            }
        }
    ],
    "created": 1737022804,
    "model": "doubao-pro-32k-241215",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 558,
        "prompt_tokens": 1361,
        "total_tokens": 1919,
        "prompt_tokens_details": {
            "cached_tokens": 0
        }
    },
    "metadata": null
}
```
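The arithmetic quoted in the sample reply can be checked with a few lines of plain Python (this is only a sanity check of the numbers in the response above, not part of the SDK):

```python
# Verify the arithmetic quoted in the sample response above (1 "share" = one horse's daily intake).
drunk_in_8_days = 30 * 8                                  # 240 shares drunk by 30 horses in 8 days
drunk_in_12_days = 25 * 12                                # 300 shares drunk by 25 horses in 12 days
daily_inflow = (drunk_in_12_days - drunk_in_8_days) // 4  # 15 shares of new water per day
initial_water = drunk_in_8_days - daily_inflow * 8        # 120 shares originally in the pool
days = initial_water // (23 - daily_inflow)               # 23 horses drain 8 net shares per day
print(days)  # 15 -> matches the "15 days" answer in the response
```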
- arkitect is Ark's high-code agent SDK, aimed at enterprise developers with professional development capabilities, providing the toolset and workflows needed to build agent applications.
- volcenginesdkarkruntime wraps the Ark API, making it easy to create, manage, and call large-model services through the API (see the sketch below).
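As a non-authoritative sketch of the second package, the snippet below assumes volcenginesdkarkruntime exposes the commonly used `Ark` client with an OpenAI-style `chat.completions.create` interface; check that SDK's own documentation for the definitive API:

```python
# Assumption: volcenginesdkarkruntime provides an Ark client with an
# OpenAI-style chat.completions.create() method; verify against its docs.
import os

from volcenginesdkarkruntime import Ark

client = Ark(api_key=os.environ["ARK_API_KEY"])
completion = client.chat.completions.create(
    model="<YOUR ENDPOINT ID>",  # the inference endpoint ID created in the Ark console
    messages=[{"role": "user", "content": "介绍你自己啊"}],
)
print(completion.choices[0].message.content)
```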
- Code under the ./arkitect directory is licensed under Apache 2.0.
- Code under the ./demohouse directory is licensed under the 【Volcano Ark】 Prototype Application Software Self-Use License Agreement.
Similar Open Source Tools


qwen-free-api
Qwen AI Free service supports high-speed streaming output, multi-turn dialogue, watermark-free AI drawing, long document interpretation, image parsing, zero-configuration deployment, multi-token support, automatic session trace cleaning. It is fully compatible with the ChatGPT interface. The repository provides various free APIs for different AI services. Users can access the service through different deployment methods like Docker, Docker-compose, Render, Vercel, and native deployment. It offers interfaces for chat completions, AI drawing, document interpretation, image parsing, and token checking. Users need to provide 'login_tongyi_ticket' for authorization. The project emphasizes research, learning, and personal use only, discouraging commercial use to avoid service pressure on the official platform.

glm-free-api
GLM AI Free 服务 provides high-speed streaming output, multi-turn dialogue support, intelligent agent dialogue support, AI drawing support, online search support, long document interpretation support, image parsing support. It offers zero-configuration deployment, multi-token support, and automatic session trace cleaning. It is fully compatible with the ChatGPT interface. The repository also includes six other free APIs for various services like Moonshot AI, StepChat, Qwen, Metaso, Spark, and Emohaa. The tool supports tasks such as chat completions, AI drawing, document interpretation, image parsing, and refresh token survival check.

spark-free-api
Spark AI Free 服务 provides high-speed streaming output, multi-turn dialogue support, AI drawing support, long document interpretation, and image parsing. It offers zero-configuration deployment, multi-token support, and automatic session trace cleaning. It is fully compatible with the ChatGPT interface. The repository includes multiple free-api projects for various AI services. Users can access the API for tasks such as chat completions, AI drawing, document interpretation, image analysis, and ssoSessionId live checking. The project also provides guidelines for deployment using Docker, Docker-compose, Render, Vercel, and native deployment methods. It recommends using custom clients for faster and simpler access to the free-api series projects.

step-free-api
The StepChat Free service provides high-speed streaming output, multi-turn dialogue support, online search support, long document interpretation, and image parsing. It offers zero-configuration deployment, multi-token support, and automatic session trace cleaning. It is fully compatible with the ChatGPT interface. Additionally, it provides seven other free APIs for various services. The repository includes a disclaimer about using reverse APIs and encourages users to avoid commercial use to prevent service pressure on the official platform. It offers online testing links, showcases different demos, and provides deployment guides for Docker, Docker-compose, Render, Vercel, and native deployments. The repository also includes information on using multiple accounts, optimizing Nginx reverse proxy, and checking the liveliness of refresh tokens.

kimi-free-api
KIMI AI Free service supports high-speed streaming output, multi-turn dialogue, online search, long-document interpretation, and image parsing, with zero-configuration deployment, multi-token support, and automatic session trace cleaning. It is fully compatible with the ChatGPT interface. Five other free-api projects are also available: StepChat (step-free-api), Alibaba Qwen (qwen-free-api), ZhipuAI GLM (glm-free-api), Metaso AI (metaso-free-api), and Emohaa (emohaa-free-api).

midjourney-proxy
Midjourney Proxy is an open-source project that acts as a proxy for the Midjourney Discord channel, allowing API-based AI drawing calls for charitable purposes. It provides drawing API for free use, ensuring full functionality, security, and minimal memory usage. The project supports various commands and actions related to Imagine, Blend, Describe, and more. It also offers real-time progress tracking, Chinese prompt translation, sensitive word pre-detection, user-token connection via wss for error information retrieval, and various account configuration options. Additionally, it includes features like image zooming, seed value retrieval, account-specific speed mode settings, multiple account configurations, and more. The project aims to support mainstream drawing clients and API calls, with features like task hierarchy, Remix mode, image saving, and CDN acceleration, among others.

illufly
illufly is an Agent framework with self-evolution capabilities, aiming to quickly create value based on self-evolution. It is designed to have self-evolution capabilities in various scenarios such as intent guessing, Q&A experience, data recall rate, and tool planning ability. The framework supports continuous dialogue, built-in RAG support, and self-evolution during conversations. It also provides tools for managing experience data and supports multiple agents collaboration.

Gensokyo-llm
Gensokyo-llm is a tool designed for Gensokyo and Onebotv11, providing a one-click solution for large models. It supports various Onebotv11 standard frameworks, HTTP-API, and reverse WS. The tool is lightweight, with built-in SQLite for context maintenance and proxy support. It allows easy integration with the Gensokyo framework by configuring reverse HTTP and forward HTTP addresses. Users can set system settings, role cards, and context length. Additionally, it offers an openai original flavor API with automatic context. The tool can be used as an API or integrated with QQ channel robots. It supports converting GPT's SSE type and ensures memory safety in concurrent SSE environments. The tool also supports multiple users simultaneously transmitting SSE bidirectionally.

get_jobs
Get Jobs is a tool designed to help users find and apply for job positions on various recruitment platforms in China. It features AI job matching, automatic cover letter generation, multi-platform job application, automated filtering of inactive HR and headhunter positions, real-time WeChat message notifications, blacklisted company updates, driver adaptation for Win11, centralized configuration, long-lasting cookie login, XPathHelper plugin, global logging, and more. The tool supports platforms like Boss直聘, 猎聘, 拉勾, 51job, and 智联招聘. Users can configure the tool for customized job searches and applications.

RagaAI-Catalyst
RagaAI Catalyst is a comprehensive platform designed to enhance the management and optimization of LLM projects. It offers features such as project management, dataset management, evaluation management, trace management, prompt management, synthetic data generation, and guardrail management. These functionalities enable efficient evaluation and safeguarding of LLM applications.

vlmrun-hub
VLMRun Hub is a versatile tool for managing and running virtual machines in a centralized manner. It provides a user-friendly interface to easily create, start, stop, and monitor virtual machines across multiple hosts. With VLMRun Hub, users can efficiently manage their virtualized environments and streamline their workflow. The tool offers flexibility and scalability, making it suitable for both small-scale personal projects and large-scale enterprise deployments.

Senparc.AI
Senparc.AI is an AI extension package for the Senparc ecosystem, focusing on LLM (Large Language Models) interaction. It provides modules for standard interfaces and basic functionalities, as well as interfaces using SemanticKernel for plug-and-play capabilities. The package also includes a library for supporting the 'PromptRange' ecosystem, compatible with various systems and frameworks. Users can configure different AI platforms and models, define AI interface parameters, and run AI functions easily. The package offers examples and commands for dialogue, embedding, and DallE drawing operations.

aigcpanel
AigcPanel is a simple and easy-to-use all-in-one AI digital human system that even beginners can use. It supports video synthesis, voice synthesis, voice cloning, simplifies local model management, and allows one-click import and use of AI models. It prohibits the use of this product for illegal activities and users must comply with the laws and regulations of the People's Republic of China.

moon-bot
Moon Bot is a free script that utilizes the AlyaChan-APIs. It requires a server with specific specifications, NodeJS, FFMPEG, WhatsApp, and an API key. The script can be deployed on platforms like Heroku, VPS/RDP DigitalOcean, VPS NAT HostData, and Panel Optiklink. It supports databases like MongoDB, PostgreSQL Supabase, and PostgreSQL/MongoDB Railway for testing. Users can configure the script through .env, config.json, and config.js files. Installation and running instructions are provided for different environments. The script supports plugins and events, and external session management is possible. Moon Bot is under development and receives regular updates.
For similar tasks


db2rest
DB2Rest is a modern low-code REST DATA API platform that simplifies the development of intelligent applications. It seamlessly integrates existing and new databases with language models (LMs/LLMs) and vector stores, enabling the rapid delivery of context-aware, reasoning applications without vendor lock-in.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.

AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.