agentica
Agentica: Effortlessly Build Intelligent, Reflective, and Collaborative Multimodal AI Agents!
Stars: 108
Agentica is a human-centric framework for building large language model agents. It provides functionalities for planning, memory management, tool usage, and supports features like reflection, planning and execution, RAG, multi-agent, multi-role, and workflow. The tool allows users to quickly code and orchestrate agents, customize prompts, and make API calls to various services. It supports API calls to OpenAI, Azure, Deepseek, Moonshot, Claude, Ollama, and Together. Agentica aims to simplify the process of building AI agents by providing a user-friendly interface and a range of functionalities for agent development.
README:
Agentica: effortlessly build intelligent, reflective, and collaborative multimodal AI agents.
Agentica builds AI agents from components such as planning, memory, tool use, and execution.
- Planning: task decomposition, plan generation, and reflection
- Memory: short-term memory (implemented via the prompt) and long-term memory (implemented via RAG)
- Tool use: function-calling capability for invoking external APIs to obtain outside information, such as the current date, calendars, code execution, and access to specialized information sources (a minimal tool sketch follows)
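As an illustration of the tool-use component, below is a minimal, framework-independent sketch of a tool written as a plain Python function. How agentica actually registers tools is not shown here, so treat the registration step as an assumption and see SearchSerperTool or examples/14_custom_tool_demo.py for the real mechanism.

```python
from datetime import date

def get_current_date() -> str:
    """Return today's date in ISO format (YYYY-MM-DD)."""
    # A function-calling LLM can invoke this to ground its answer in the current date.
    return date.today().isoformat()

# Conceptually, the function's name, signature, and docstring are exposed to the model
# as a tool schema; the model decides when to call it and receives the return value.
```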
Agentica can also build multi-agent systems and workflows.
- Planner: has the LLM generate a multi-step plan for completing a complex task, producing a "chained plan" of interdependent steps in which each step declares which previous step's output it depends on
- Worker: takes the chained plan, loops over each subtask in it, and calls tools to complete them, reflecting on and correcting errors automatically along the way
- Solver: merges all of these outputs into the final answer (a sketch of the chained-plan idea follows)
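The chained-plan idea can be pictured with a small, framework-independent sketch; the step format and helper names below are illustrative only and are not agentica's actual Planner/Worker/Solver API.

```python
# Illustrative chained plan: each step may reference the output of an earlier step.
plan = [
    {"id": "step1", "task": "Search for the host city of the next Olympic Games", "depends_on": None},
    {"id": "step2", "task": "Summarize travel options to {step1}", "depends_on": "step1"},
]

def run_step(task: str) -> str:
    """Placeholder for an LLM + tool call; a real Worker would invoke an agent here."""
    return f"<result of: {task}>"

def worker(plan: list[dict]) -> dict:
    outputs = {}
    for step in plan:
        task = step["task"]
        if step["depends_on"]:
            # Inject the dependent step's output into the current task description.
            task = task.replace("{" + step["depends_on"] + "}", outputs[step["depends_on"]])
        outputs[step["id"]] = run_step(task)
    return outputs

def solver(outputs: dict) -> str:
    # Merge all intermediate results into a single final answer.
    return "\n".join(outputs.values())

print(solver(worker(plan)))
```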
[2024/12/29] v0.2.3: added support for the ZhipuAI API, including its free models and tool use; see Release-v0.2.3
[2024/12/25] v0.2.0: added multimodal model support (text, image, audio, and video inputs), upgraded Assistant to Agent, and enabled Workflow to decompose and carry out complex tasks; see Release-v0.2.0
[2024/07/02] v0.1.0: implemented an LLM-based Assistant for quickly building large language model assistants with function calling; see Release-v0.1.0
Agentica is a tool for building agents, with the following capabilities:
- Agent orchestration: quickly orchestrate agents with simple code, with support for Reflection, Plan and Solve, RAG, Agent, Multi-Agent, Team, and Workflow
- Custom prompts: agents support custom prompts and a variety of tool calls (tool_calls)
- LLM integration: supports APIs from OpenAI, Azure, Deepseek, Moonshot, Anthropic, ZhipuAI, Ollama, Together, and other model providers
- Memory: both short-term and long-term memory
- Multi-agent collaboration: teamwork among multiple agents with task delegation (Team)
- Workflow: decomposes a complex task across multiple agents and completes it step by step in an automated serial workflow, e.g. investment research, news-article writing, and technical-tutorial creation
- Self-evolving agents: agents with reflection and memory-augmentation capabilities
- Web UI: compatible with ChatPilot for web-based interaction, supporting mainstream front-end frameworks such as open-webui, streamlit, and gradio
pip install -U agentica
or
git clone https://github.com/shibing624/agentica.git
cd agentica
pip install .
# Copy the required .env file and fill in the LLM API key
cp .env.example ~/.agentica/.env
cd examples
python web_search_moonshot_demo.py
- Copy .env.example to ~/.agentica/.env and fill in an LLM API key (any one of DEEPSEEK_API_KEY, MOONSHOT_API_KEY, OPENAI_API_KEY, ZHIPUAI_API_KEY, etc. is enough). Alternatively, set environment variables with export:
export MOONSHOT_API_KEY=your_moonshot_api_key
export SERPER_API_KEY=your_serper_api_key
- Build and run an agent with agentica. The agent below automatically calls the Google search tool; see examples/12_web_search_moonshot_demo.py:
from agentica import Agent, MoonshotChat, SearchSerperTool

# Moonshot model plus a Serper-backed Google search tool; the current datetime is added to the instructions.
m = Agent(model=MoonshotChat(), tools=[SearchSerperTool()], add_datetime_to_instructions=True)
r = m.run("下一届奥运会在哪里举办")  # "Where will the next Olympic Games be held?"
print(r)
shibing624/ChatPilot is compatible with agentica and lets you interact with it through a Web UI.
Web Demo: https://chat.mulanai.com
git clone https://github.com/shibing624/ChatPilot.git
cd ChatPilot
pip install -r requirements.txt
cp .env.example .env
bash start.sh
Example | Description |
---|---|
examples/01_llm_demo.py | LLM Q&A demo |
examples/02_user_prompt_demo.py | Demo of a custom user prompt |
examples/03_user_messages_demo.py | Demo of custom user message input |
examples/04_memory_demo.py | Agent memory demo |
examples/05_response_model_demo.py | Demo of replying in a specified format (a pydantic BaseModel) |
examples/06_calc_with_csv_file_demo.py | Demo where the LLM loads a CSV file and runs calculations to answer |
examples/07_create_image_tool_demo.py | Demo implementing an image-creation tool |
examples/08_ocr_tool_demo.py | Demo implementing an OCR tool |
examples/09_remove_image_background_tool_demo.py | Implements automatic image background removal, including installing the required library via pip and calling it to remove the background |
examples/10_vision_demo.py | Vision understanding demo |
examples/11_web_search_openai_demo.py | Web search demo using OpenAI function calling |
examples/12_web_search_moonshot_demo.py | Web search demo using Moonshot function calling |
examples/13_storage_demo.py | Agent storage demo |
examples/14_custom_tool_demo.py | Demo of a custom tool that the LLM chooses to call on its own |
examples/15_crawl_webpage_demo.py | Implements a web-page analysis workflow: crawl funding news from a URL, analyze the page content and format, extract the key information, and save a summary as a Markdown file |
examples/16_get_top_papers_demo.py | Demo that parses the daily papers and saves them in JSON format |
examples/17_find_paper_from_arxiv_demo.py | Paper-recommendation demo: automatically search arxiv for several sets of papers, deduplicate similar papers, extract the key paper information, and save it to a CSV file |
examples/18_agent_input_is_list.py | Demo showing that an Agent's message can be a list |
examples/19_naive_rag_demo.py | Implements basic RAG, answering questions from a TXT document |
examples/20_advanced_rag_demo.py | Implements advanced RAG, answering questions from a PDF document, with PDF parsing, query rewriting, hybrid lexical + semantic multi-path recall, and reranking of recalled results |
examples/21_memorydb_rag_demo.py | Demo of the traditional RAG approach of putting the reference material directly into the prompt |
examples/22_chat_pdf_app_demo.py | Demo of in-depth conversation over a PDF document |
examples/23_python_agent_memory_demo.py | Implements a Code Interpreter with memory: automatically generates and executes Python code, and fetches the result from memory on the next run |
examples/24_context_demo.py | Demo of passing context into a conversation |
examples/25_tools_with_context_demo.py | Demo of tools that take context as a parameter |
examples/26_complex_translate_demo.py | Implements a complex translation demo |
examples/27_research_agent_demo.py | Implements a Research capability: automatically calls search tools, aggregates the information, and writes a technology report |
examples/28_rag_integrated_langchain_demo.py | RAG demo integrated with LangChain |
examples/29_rag_integrated_llamaindex_demo.py | RAG demo integrated with LlamaIndex |
examples/30_text_classification_demo.py | Implements an agent that automatically trains a classification model: read the training-set file and understand its format, search Google for the pytextclassifier library, crawl its GitHub page to learn how to call it, write and run code to train a fasttext model, and check the trained model's predictions |
examples/31_team_news_article_demo.py | Team implementation: team collaboration for writing a news article, a multi-role setup that delegates tasks to different roles: a researcher retrieves and analyzes articles, a writer drafts the article according to the layout, and the outputs of all roles are merged into the final result |
examples/32_team_debate_demo.py | Team implementation: a two-person debate based on delegation, with Trump debating Biden |
examples/33_self_evolving_agent_demo.py | Demo of the self-evolving agent |
examples/34_llm_os_demo.py | Implements a first version of the LLM OS design: an LLM-based operating system that can call tools such as RAG, a code executor, and a shell, coordinating a code interpreter, research assistant, investment assistant, and others to solve problems |
examples/35_workflow_investment_demo.py | Implements an investment-research workflow with multiple tasks: collect stock information, analyze the stock, write the analysis report, and review the report |
examples/36_workflow_news_article_demo.py | Implements a news-article-writing workflow, a multi-agent implementation that calls search tools several times and produces a well-formatted news article |
examples/37_workflow_write_novel_demo.py | Implements a novel-writing workflow: outline the novel, search Google and reflect on the outline, write the novel, and save it as a Markdown file |
examples/38_workflow_write_tutorial_demo.py | Implements a technical-tutorial-writing workflow: outline the tutorial, reflect on the outline, write the tutorial content, and save it as a Markdown file |
examples/39_audio_multi_turn_demo.py | Demo of multi-turn audio conversation using OpenAI's speech API |
examples/40_web_search_zhipuai_demo.py | Web search demo using the ZhipuAI API, with the free glm-4-flash model and the free web-search-pro search tool |
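To make the basic RAG examples above (19 and 21) concrete, here is a framework-independent sketch of "retrieve chunks, then stuff them into the prompt". The keyword-overlap retrieval and the function names are simplifications of my own, not agentica's implementation, which adds embeddings, query rewriting, and reranking in the advanced demo.

```python
def load_chunks(path: str, chunk_size: int = 500) -> list[str]:
    """Split a text file into fixed-size character chunks."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by naive keyword overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Assemble the reference material and the question into a single prompt."""
    context = "\n---\n".join(context_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Usage (hypothetical file name):
# prompt = build_prompt(q, top_chunks(q, load_chunks("doc.txt")))
# The assembled prompt would then be passed to Agent.run(prompt) or any LLM call.
```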
The self-evolving agent design:
Self-evolving Agents with Reflective and Memory-augmented Abilities (SAGE)
Implementation:
- Use PythonAgent as the SAGE agent and AzureOpenAIChat as the LLM; it has code-interpreter capability, so it can execute Python code and correct its own errors.
- Use CsvMemoryDb as the SAGE agent's memory to store users' questions and answers; when a similar question comes up later, the stored answer can be returned directly (a conceptual sketch follows).
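The loop described above can be sketched conceptually as follows. The CSV layout and the helper names (lookup_similar, remember, code_interpreter_answer) are hypothetical stand-ins, not the PythonAgent/CsvMemoryDb API; see examples/33_self_evolving_agent_demo.py for the real implementation.

```python
import csv
import os
from difflib import SequenceMatcher

MEMORY_FILE = "memory.csv"  # hypothetical stand-in for CsvMemoryDb

def lookup_similar(question: str, threshold: float = 0.85):
    """Return a stored answer whose question is similar enough to the new one, if any."""
    if not os.path.exists(MEMORY_FILE):
        return None
    with open(MEMORY_FILE, newline="", encoding="utf-8") as f:
        for past_question, past_answer in csv.reader(f):
            if SequenceMatcher(None, question, past_question).ratio() >= threshold:
                return past_answer
    return None

def remember(question: str, answer_text: str) -> None:
    """Append the new question/answer pair to the memory file."""
    with open(MEMORY_FILE, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([question, answer_text])

def code_interpreter_answer(question: str) -> str:
    """Placeholder for the PythonAgent call that writes and runs code to solve the task."""
    raise NotImplementedError

def answer(question: str) -> str:
    cached = lookup_similar(question)
    if cached is not None:
        return cached  # reuse the stored answer for a similar question
    result = code_interpreter_answer(question)
    remember(question, result)
    return result
```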
cd examples
streamlit run 33_self_evolving_agent_demo.py
The LLM OS design:
cd examples
streamlit run 34_llm_os_demo.py
Agents can also be set up and tried out quickly from the terminal command line.
code: cli.py
> agentica -h
usage: cli.py [-h] [--query QUERY]
[--model_provider {openai,azure,moonshot,zhipuai,deepseek,yi}]
[--model_name MODEL_NAME] [--api_base API_BASE]
[--api_key API_KEY] [--max_tokens MAX_TOKENS]
[--temperature TEMPERATURE] [--verbose VERBOSE]
[--tools [{search_serper,file_tool,shell_tool,yfinance_tool,web_search_pro,cogview,cogvideo,jina,wikipedia} ...]]
CLI for agentica
options:
-h, --help show this help message and exit
--query QUERY Question to ask the LLM
--model_provider {openai,azure,moonshot,zhipuai,deepseek,yi}
LLM model provider
--model_name MODEL_NAME
LLM model name to use, can be
gpt-4o/glm-4-flash/deepseek-chat/yi-lightning/...
--api_base API_BASE API base URL for the LLM
--api_key API_KEY API key for the LLM
--max_tokens MAX_TOKENS
Maximum number of tokens for the LLM
--temperature TEMPERATURE
Temperature for the LLM
--verbose VERBOSE enable verbose mode
--tools [{search_serper,file_tool,shell_tool,yfinance_tool,web_search_pro,cogview,cogvideo,jina,wikipedia} ...]
Tools to enable
run:
pip install agentica -U
# Single call: pass the `--query` argument
agentica --query "下一届奥运会在哪里举办" --model_provider zhipuai --model_name glm-4-flash --tools web_search_pro
# Multi-turn conversation across repeated calls: omit the `--query` argument
agentica --model_provider zhipuai --model_name glm-4-flash --tools web_search_pro cogview --verbose 1
output:
2024-12-30 21:59:15,000 - agentica - INFO - Agentica CLI
>>> Draw me a picture of an elephant on the moon
> I've drawn a picture of an elephant on the moon for you; it looks both comical and cute. The elephant is wearing a spacesuit and standing on the lunar surface, with a vast starry sky and the Earth in the background. The image is brightly colored, richly detailed, and cartoon-styled. You can view and download it via the link below:
![An elephant on the moon](https://aigc-files.bigmodel.cn/api/cogview/20241230215915cfa22f46425e4cb0_0.png)
- GitHub Issues (suggestions):
- Email me: xuming: [email protected]
- WeChat: add me at xuming624 with the note "name-company-NLP" to join the NLP discussion group.
If you use agentica in your research, please cite it in the following format:
APA:
Xu, M. agentica: A Human-Centric Framework for Large Language Model Agent Workflows (Version 0.0.2) [Computer software]. https://github.com/shibing624/agentica
BibTeX:
@misc{Xu_agentica,
title={agentica: A Human-Centric Framework for Large Language Model Agent Workflows},
author={Xu Ming},
year={2024},
howpublished={\url{https://github.com/shibing624/agentica}},
}
The license is the Apache License 2.0, which is free for commercial use. Please include a link to agentica and the license in your product documentation.
The project code is still rough. If you have improvements to it, you are welcome to contribute them back to this project. Before submitting, please note two points:
- Add the corresponding unit tests under tests
- Run all unit tests with python -m pytest and make sure they all pass
Then you can submit the PR.
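For contributors unfamiliar with the layout, a minimal test file might look like the sketch below; the word_count helper is purely hypothetical and stands in for whatever function or class your PR touches.

```python
# tests/test_example_tool.py -- a minimal illustration of the expected test layout.

def word_count(text: str) -> int:
    """Hypothetical helper under test; replace with the real code your PR changes."""
    return len(text.split())

def test_word_count_basic():
    assert word_count("hello agentica world") == 3

def test_word_count_empty():
    assert word_count("") == 0
```

Running python -m pytest from the repository root will pick up any test_*.py files under tests/.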
Thanks for their great work!
Similar Open Source Tools
chatluna
Chatluna is a machine learning model plugin that provides chat services with large language models. It is highly extensible, supports multiple output formats, and offers features like custom conversation presets, rate limiting, and context awareness. Users can deploy Chatluna under Koishi without additional configuration. The plugin supports various models/platforms like OpenAI, Azure OpenAI, Google Gemini, and more. It also provides preset customization using YAML files and allows for easy forking and development within Koishi projects. However, the project lacks web UI, HTTP server, and project documentation, inviting contributions from the community.
HivisionIDPhotos
HivisionIDPhoto is a practical algorithm for intelligent ID photo creation. It utilizes a comprehensive model workflow to recognize, cut out, and generate ID photos for various user photo scenarios. The tool offers lightweight cutting, standard ID photo generation based on different size specifications, six-inch layout photo generation, beauty enhancement (coming soon), and intelligent outfit swapping (coming soon). It aims to solve emergency ID photo creation issues.
AI0x0.com
AI 0x0 is a versatile AI query generation desktop floating assistant application that supports MacOS and Windows. It allows users to utilize AI capabilities in any desktop software to query and generate text, images, audio, and video data, helping them work more efficiently. The application features a dynamic desktop floating ball, floating dialogue bubbles, customizable presets, conversation bookmarking, preset packages, network acceleration, query mode, input mode, mouse navigation, deep customization of ChatGPT Next Web, support for full-format libraries, online search, voice broadcasting, voice recognition, voice assistant, application plugins, multi-model support, online text and image generation, image recognition, frosted glass interface, light and dark theme adaptation for each language model, and free access to all language models except Chat0x0 with a key.
ChuanhuChatGPT
Chuanhu Chat is a user-friendly web graphical interface that provides various additional features for ChatGPT and other language models. It supports GPT-4, file-based question answering, local deployment of language models, online search, agent assistant, and fine-tuning. The tool offers a range of functionalities including auto-solving questions, online searching with network support, knowledge base for quick reading, local deployment of language models, GPT 3.5 fine-tuning, and custom model integration. It also features system prompts for effective role-playing, basic conversation capabilities with options to regenerate or delete dialogues, conversation history management with auto-saving and search functionalities, and a visually appealing user experience with themes, dark mode, LaTeX rendering, and PWA application support.
ipex-llm
IPEX-LLM is a PyTorch library for running Large Language Models (LLMs) on Intel CPUs and GPUs with very low latency. It provides seamless integration with various LLM frameworks and tools, including llama.cpp, ollama, Text-Generation-WebUI, HuggingFace transformers, and more. IPEX-LLM has been optimized and verified on over 50 LLM models, including LLaMA, Mistral, Mixtral, Gemma, LLaVA, Whisper, ChatGLM, Baichuan, Qwen, and RWKV. It supports a range of low-bit inference formats, including INT4, FP8, FP4, INT8, INT2, FP16, and BF16, as well as finetuning capabilities for LoRA, QLoRA, DPO, QA-LoRA, and ReLoRA. IPEX-LLM is actively maintained and updated with new features and optimizations, making it a valuable tool for researchers, developers, and anyone interested in exploring and utilizing LLMs.
Langchain-Chatchat
LangChain-Chatchat is an open-source, offline-deployable retrieval-enhanced generation (RAG) large model knowledge base project based on large language models such as ChatGLM and application frameworks such as Langchain. It aims to establish a knowledge base Q&A solution that is friendly to Chinese scenarios, supports open-source models, and can run offline.
video-subtitle-remover
Video-subtitle-remover (VSR) is AI-based software that removes hard subtitles from videos. It provides the following functions:
- Remove hard subtitles and generate output files with the subtitles removed, without losing resolution
- Fill the region of removed subtitles using a powerful AI algorithm model (non-adjacent pixel filling and mosaic removal)
- Support custom subtitle positions, removing only subtitles at the specified positions (positions given as input)
- Support automatic removal of all text in the entire video (no position input required)
- Support batch removal of watermark text from multiple images
DownEdit
DownEdit is a powerful program that allows you to download videos from various social media platforms such as TikTok, Douyin, Kuaishou, and more. With DownEdit, you can easily download videos from user profiles and edit them in bulk. You have the option to flip the videos horizontally or vertically throughout the entire directory with just a single click. Stay tuned for more exciting features coming soon!
Awesome-ChatTTS
Awesome-ChatTTS is an official recommended guide for ChatTTS beginners, compiling common questions and related resources. It provides a comprehensive overview of the project, including official introduction, quick experience options, popular branches, parameter explanations, voice seed details, installation guides, FAQs, and error troubleshooting. The repository also includes video tutorials, discussion community links, and project trends analysis. Users can explore various branches for different functionalities and enhancements related to ChatTTS.
HaE
HaE is a framework project in the field of network security (data security) that combines artificial intelligence (AI) large models to achieve highlighting and information extraction of HTTP messages (including WebSocket). It aims to reduce testing time, focus on valuable and meaningful messages, and improve vulnerability discovery efficiency. The project provides a clear and visual interface design, simple interface interaction, and centralized data panel for querying and extracting information. It also features built-in color upgrade algorithm, one-click export/import of data, and integration of AI large models API for optimized data processing.
Speech-AI-Forge
Speech-AI-Forge is a project developed around TTS generation models, implementing an API Server and a WebUI based on Gradio. The project offers various ways to experience and deploy Speech-AI-Forge, including online experience on HuggingFace Spaces, one-click launch on Colab, container deployment with Docker, and local deployment. The WebUI features include TTS model functionality, speaker switch for changing voices, style control, long text support with automatic text segmentation, refiner for ChatTTS native text refinement, various tools for voice control and enhancement, support for multiple TTS models, SSML synthesis control, podcast creation tools, voice creation, voice testing, ASR tools, and post-processing tools. The API Server can be launched separately for higher API throughput. The project roadmap includes support for various TTS models, ASR models, voice clone models, and enhancer models. Model downloads can be manually initiated using provided scripts. The project aims to provide inference services and may include training-related functionalities in the future.
VideoCaptioner
VideoCaptioner is a video subtitle processing assistant based on a large language model (LLM), supporting speech recognition, subtitle segmentation, optimization, translation, and full-process handling. It is user-friendly and does not require high configuration, supporting both network calls and local offline (GPU-enabled) speech recognition. It utilizes a large language model for intelligent subtitle segmentation, correction, and translation, providing stunning subtitles for videos. The tool offers features such as accurate subtitle generation without GPU, intelligent segmentation and sentence splitting based on LLM, AI subtitle optimization and translation, batch video subtitle synthesis, intuitive subtitle editing interface with real-time preview and quick editing, and low model token consumption with built-in basic LLM model for easy use.
build_MiniLLM_from_scratch
This repository aims to build a low-parameter LLM through pretraining, fine-tuning, reward modeling, and reinforcement learning stages to create a chat model capable of simple conversation tasks. It is built on the bert4torch training framework, integrates seamlessly with the transformers package for inference, optimizes file reading during training to reduce memory usage, provides complete training logs for reproducibility, and allows customizing the robot's attributes. The chat model supports multi-turn conversations. The trained model currently only supports basic chat functionality due to limitations in corpus size, model scale, and SFT corpus size and quality.
END-TO-END-GENERATIVE-AI-PROJECTS
The 'END TO END GENERATIVE AI PROJECTS' repository is a collection of awesome industry projects utilizing Large Language Models (LLM) for various tasks such as chat applications with PDFs, image to speech generation, video transcribing and summarizing, resume tracking, text to SQL conversion, invoice extraction, medical chatbot, financial stock analysis, and more. The projects showcase the deployment of LLM models like Google Gemini Pro, HuggingFace Models, OpenAI GPT, and technologies such as Langchain, Streamlit, LLaMA2, LLaMAindex, and more. The repository aims to provide end-to-end solutions for different AI applications.
Native-LLM-for-Android
This repository provides a demonstration of running a native Large Language Model (LLM) on Android devices. It supports various models such as Qwen2.5-Instruct, MiniCPM-DPO/SFT, Yuan2.0, Gemma2-it, StableLM2-Chat/Zephyr, and Phi3.5-mini-instruct. The demo models are optimized for extreme execution speed after being converted from HuggingFace or ModelScope. Users can download the demo models from the provided drive link, place them in the assets folder, and follow specific instructions for decompression and model export. The repository also includes information on quantization methods and performance benchmarks for different models on various devices.
For similar tasks
agentUniverse
agentUniverse is a framework for developing multi-agent applications powered by large language models. It provides essential components for building single agents and a multi-agent collaboration mechanism with customizable collaboration patterns. Developers can easily construct multi-agent applications and share pattern practices from different fields. The framework includes pre-installed collaboration patterns like PEER and DOE for complex task breakdown and data-intensive tasks.
ASTRA.ai
Astra.ai is a multimodal agent powered by TEN, showcasing its capabilities in speech, vision, and reasoning through RAG from local documentation. It provides a platform for developing AI agents with features like RTC transportation, extension store, workflow builder, and local deployment. Users can build and test agents locally using Docker and Node.js, with prerequisites including Agora App ID, Azure's speech-to-text and text-to-speech API keys, and OpenAI API key. The platform offers advanced customization options through config files and API keys setup, enabling users to create and deploy their AI agents for various tasks.
CEO-Agentic-AI-Framework
CEO-Agentic-AI-Framework is an ultra-lightweight Agentic AI framework based on the ReAct paradigm. It supports mainstream LLMs and is stronger than Swarm. The framework allows users to build their own agents, assign tasks, and interact with them through a set of predefined abilities. Users can customize agent personalities, grant and deprive abilities, and assign queries for specific tasks. CEO also supports multi-agent collaboration scenarios, where different agents with distinct capabilities can work together to achieve complex tasks. The framework provides a quick start guide, examples, and detailed documentation for seamless integration into research projects.
ain
Ain is a terminal HTTP API client designed for scripting input and processing output via pipes. It allows flexible organization of APIs using files and folders, supports shell-scripts and executables for common tasks, handles url-encoding, and enables sharing the resulting curl, wget, or httpie command-line. Users can put things that change in environment variables or .env-files, and pipe the API output for further processing. Ain targets users who work with many APIs using a simple file format and uses curl, wget, or httpie to make the actual calls.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per user and per organization basis
- Block or redact requests containing PIIs
- Improve LLM reliability with failovers, retries and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.