PraisonAI
PraisonAI is an AI Agents Framework with Self Reflection. The PraisonAI application combines PraisonAI Agents, AutoGen, and CrewAI into a low-code solution for building and managing multi-agent LLM systems, focusing on simplicity, customisation, and efficient human-agent collaboration.
Stars: 3012
Praison AI is a low-code, centralised framework that simplifies the creation and orchestration of multi-agent systems for various LLM applications. It emphasises ease of use, customisation, and human-agent interaction. The tool builds on the AutoGen and CrewAI frameworks; for example, users can create, run, test, and deploy agents for scriptwriting and movie-concept development. Praison AI also offers a full automatic mode and integration with OpenAI models for enhanced AI capabilities.
README:
PraisonAI is an AI Agents Framework with Self Reflection. The PraisonAI application combines PraisonAI Agents, AutoGen, and CrewAI into a low-code solution for building and managing multi-agent LLM systems, focusing on simplicity, customisation, and efficient human-agent collaboration.
- Automated AI Agents Creation
- Self Reflection AI Agents
- Reasoning AI Agents
- Multi Modal AI Agents
- Multi Agent Collaboration
- AI Agent Workflow
- Use CrewAI or AutoGen Framework
- 100+ LLM Support
- Chat with ENTIRE Codebase
- Interactive UIs
- YAML-based Configuration
- Custom Tool Integration
- Internet Search Capability (using Crawl4AI and Tavily)
- Vision Language Model (VLM) Support
- Real-time Voice Interaction
# Full automatic mode
pip install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
praisonai --auto create a movie script about Robots in Mars

# Initialise a playbook, then run it
pip install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
praisonai --init create a movie script about Robots in Mars
praisonai
A lightweight package dedicated to coding:
pip install praisonaiagents
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
Create an app.py file and add the code below:
from praisonaiagents import Agent, Task, PraisonAIAgents

# 1. Create agents
researcher = Agent(
    name="Researcher",
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="""You are an expert at a technology research group,
    skilled in identifying trends and analyzing complex data.""",
    verbose=True,
    llm="gpt-4o",
    markdown=True
)

writer = Agent(
    name="Writer",
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory="""You are a content strategist known for
    making complex tech topics interesting and easy to understand.""",
    llm="gpt-4o",
    markdown=True
)

# 2. Define Tasks
task1 = Task(
    name="research_task",
    description="""Analyze 2024's AI advancements.
    Find major trends, new technologies, and their effects.""",
    expected_output="""A detailed report on 2024 AI advancements""",
    agent=researcher
)

task2 = Task(
    name="writing_task",
    description="""Create a blog post about major AI advancements using the insights you have.
    Make it interesting, clear, and suited for tech enthusiasts.
    It should be at least 4 paragraphs long.""",
    expected_output="A blog post of at least 4 paragraphs",
    agent=writer
)

# 3. Create and start the agents with a hierarchical process
agents = PraisonAIAgents(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=False,
    process="hierarchical",
    manager_llm="gpt-4o"
)

result = agents.start()
Run:
python app.py
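The example above runs a hierarchical process in which a manager LLM coordinates the two agents. If you just want the tasks executed in the order they are listed, a sequential process needs no manager. A minimal sketch, assuming process="sequential" is supported as the simpler alternative:

agents = PraisonAIAgents(
    agents=[researcher, writer],
    tasks=[task1, task2],
    process="sequential"  # run task1 then task2 in listed order; no manager_llm needed
)
result = agents.start()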
To use a local Ollama server:
export OPENAI_BASE_URL=http://localhost:11434/v1
To use Groq, replace xxxx with your Groq API key:
export OPENAI_API_KEY=xxxxxxxxxxx
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
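The same provider switch can be made from Python before any agents are constructed. A minimal sketch, assuming the package reads the standard OpenAI-compatible environment variables and accepts a provider model name via the llm parameter (the model name below is illustrative):

import os

# Point the OpenAI-compatible client at Groq; replace with your own key.
os.environ["OPENAI_API_KEY"] = "your-groq-api-key"
os.environ["OPENAI_BASE_URL"] = "https://api.groq.com/openai/v1"

from praisonaiagents import Agent

assistant = Agent(
    name="Assistant",
    role="Helpful assistant",
    goal="Answer questions concisely",
    llm="llama-3.1-8b-instant",  # illustrative Groq-hosted model name
)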
Set the log level to info:
export LOGLEVEL=info
For advanced (debug) logging:
export LOGLEVEL=debug
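If you drive everything from Python rather than the shell, the same variable can be set in code. A minimal sketch, assuming the package reads LOGLEVEL when it is imported:

import os

# Equivalent of `export LOGLEVEL=debug`; set it before importing praisonai
# so the package can pick it up on import (assumption).
os.environ["LOGLEVEL"] = "debug"

from praisonai import PraisonAI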
| Interface | Description | URL |
|---|---|---|
| UI | Multi Agents such as CrewAI or AutoGen | https://docs.praison.ai/ui/ui |
| Chat | Chat with 100+ LLMs, single AI Agent | https://docs.praison.ai/ui/chat |
| Code | Chat with entire Codebase, single AI Agent | https://docs.praison.ai/ui/code |
| Realtime | Real-time voice interaction with AI | https://docs.praison.ai/ui/realtime |
| Other Features | Description | Docs |
|---|---|---|
| Train | Fine-tune LLMs using your custom data | https://docs.praison.ai/train |
| Cookbook | Open in Colab |
|---|---|
| Basic | PraisonAI |
| Include Tools | PraisonAI Tools |
# Install the core package
pip install praisonai
# Install with CrewAI support
pip install "praisonai[crewai]"
# Install with AutoGen support
pip install "praisonai[autogen]"
# Install with both frameworks
pip install "praisonai[crewai,autogen]"
# Install UI support
pip install "praisonai[ui]"
# Install Chat interface
pip install "praisonai[chat]"
# Install Code interface
pip install "praisonai[code]"
# Install Realtime voice interaction
pip install "praisonai[realtime]"
# Install Call feature
pip install "praisonai[call]"
# Set your OpenAI API key
export OPENAI_API_KEY="Enter your API key"
# Initialize with CrewAI (default)
praisonai --init "create a movie script about dog in moon"
# Or initialize with AutoGen
praisonai --framework autogen --init "create a movie script about dog in moon"
# Run the agents
praisonai
# With CrewAI (default)
praisonai --auto "create a movie script about Dog in Moon"
# With AutoGen
praisonai --framework autogen --auto "create a movie script about Dog in Moon"
When installing with pip install "praisonai[crewai]", you get:
- CrewAI framework support
- PraisonAI tools integration
- Task delegation capabilities
- Sequential and parallel task execution
When installing with pip install "praisonai[autogen]", you get:
- AutoGen framework support
- PraisonAI tools integration
- Multi-agent conversation capabilities
- Code execution environment
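Framework selection is also available programmatically. A minimal sketch, assuming the PraisonAI class accepts a framework argument mirroring the --framework CLI flag:

from praisonai import PraisonAI

# Run an existing playbook with AutoGen instead of the default CrewAI.
# framework= is assumed to mirror the --framework CLI flag.
praisonai = PraisonAI(agent_file="agents.yaml", framework="autogen")
praisonai.run()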
pip install praisonai
export OPENAI_API_KEY="Enter your API key"
praisonai --init create a movie script about dog in moon
praisonai
- Installation
- Initialise
- Run
- Full Automatic Mode
- User Interface
- Praison AI Chat
- Create Custom Tools
- Agents Playbook
- Include praisonai package in your project
- Commands to Install Dev Dependencies
- Other Models
- Contributing
- Star History
pip install praisonai
export OPENAI_API_KEY="Enter your API key"
Generate your OpenAI API key here: https://platform.openai.com/api-keys
Note: You can use other providers such as Ollama, Mistral, etc.; details are provided at the bottom.
praisonai --init create a movie script about dog in moon
This will automatically create an agents.yaml file in the current directory.
To initialise with AutoGen instead:
praisonai --framework autogen --init create movie script about cat in mars
praisonai
or
python -m praisonai
To run with the AutoGen framework:
praisonai --framework autogen
Full automatic mode:
praisonai --auto create a movie script about Dog in Moon
| Interface | Description | URL |
|---|---|---|
| UI | Multi Agents such as CrewAI or AutoGen | https://docs.praisonai.com/ui/ui |
| Chat | Chat with 100+ LLMs, single AI Agent | https://docs.praisonai.com/ui/chat |
| Code | Chat with entire Codebase, single AI Agent | https://docs.praisonai.com/ui/code |
pip install -U "praisonai[ui]"
export OPENAI_API_KEY="Enter your API key"
# Generate an auth secret, then export the value it prints
chainlit create-secret
export CHAINLIT_AUTH_SECRET=xxxxxxxx
praisonai ui
or
python -m praisonai ui
pip install "praisonai[chat]"
export OPENAI_API_KEY="Enter your API key"
praisonai chat
Praison AI Chat and Praison AI Code now include internet search capabilities using Crawl4AI and Tavily, allowing you to retrieve up-to-date information during your conversations.
You can now upload images and ask questions based on them using Vision Language Models. This feature enables visual understanding and analysis within your chat sessions.
pip install "praisonai[code]"
export OPENAI_API_KEY="Enter your API key"
praisonai code
Praison AI Code also includes internet search functionality, enabling you to find relevant code snippets and programming information online.
framework: crewai
topic: Artificial Intelligence
roles:
  screenwriter:
    backstory: "Skilled in crafting scripts with engaging dialogue about {topic}."
    goal: Create scripts from concepts.
    role: Screenwriter
    tasks:
      scriptwriting_task:
        description: "Develop scripts with compelling characters and dialogue about {topic}."
        expected_output: "Complete script ready for production."
from praisonai import PraisonAI

# Example agent_yaml content
agent_yaml = """
framework: "crewai"
topic: "Space Exploration"
roles:
  astronomer:
    role: "Space Researcher"
    goal: "Discover new insights about {topic}"
    backstory: "You are a curious and dedicated astronomer with a passion for unraveling the mysteries of the cosmos."
    tasks:
      investigate_exoplanets:
        description: "Research and compile information about exoplanets discovered in the last decade."
        expected_output: "A summarized report on exoplanet discoveries, including their size, potential habitability, and distance from Earth."
"""

# Create a PraisonAI instance with the agent_yaml content
praisonai = PraisonAI(agent_yaml=agent_yaml)

# Run PraisonAI
result = praisonai.run()

# Print the result
print(result)
Note: Please create the agents.yaml file beforehand.
from praisonai import PraisonAI

def basic():  # Basic Mode
    praisonai = PraisonAI(agent_file="agents.yaml")
    praisonai.run()

if __name__ == "__main__":
    basic()
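Full automatic mode can be used from code as well. A minimal sketch, assuming PraisonAI accepts an auto argument equivalent to the --auto CLI flag:

from praisonai import PraisonAI

def auto():  # Full Automatic Mode
    # auto= is assumed to mirror `praisonai --auto ...`
    praisonai = PraisonAI(auto="create a movie script about dog in moon")
    praisonai.run()

if __name__ == "__main__":
    auto()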
# Install uv if you haven't already
pip install uv
# Install from requirements
uv pip install -r pyproject.toml
# Install with extras
uv pip install -r pyproject.toml --extra code
uv pip install -r pyproject.toml --extra "crewai,autogen"
- Fork on GitHub: Use the "Fork" button on the repository page.
- Clone your fork:
git clone https://github.com/yourusername/praisonAI.git
- Create a branch:
git checkout -b new-feature
- Make changes and commit:
git commit -am "Add some feature"
- Push to your fork:
git push origin new-feature
- Submit a pull request via GitHub's web interface.
- Await feedback from project maintainers.
Praison AI is open-source software licensed under the MIT license.
To facilitate local development with live reload, you can use Docker. Follow the steps below:
- Create a Dockerfile.dev:

FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install flask praisonai==2.0.18 watchdog
EXPOSE 5555
ENV FLASK_ENV=development
CMD ["flask", "run", "--host=0.0.0.0"]

- Create a docker-compose.yml:

version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
    ports:
      - "5555:5555"
    environment:
      FLASK_ENV: development
    command: flask run --host=0.0.0.0
  watch:
    image: alpine:latest
    volumes:
      - .:/app
    command: sh -c "apk add --no-cache inotify-tools && while inotifywait -r -e modify,create,delete /app; do kill -HUP 1; done"

- Run Docker Compose:

docker-compose up
This setup will allow you to develop locally with live reload, making it easier to test and iterate on your code.
Alternative AI tools for PraisonAI
Similar Open Source Tools
agentscope
AgentScope is a multi-agent platform designed to empower developers to build multi-agent applications with large-scale models. It features three high-level capabilities: Easy-to-Use, High Robustness, and Actor-Based Distribution. AgentScope provides a list of `ModelWrapper` to support both local model services and third-party model APIs, including OpenAI API, DashScope API, Gemini API, and ollama. It also enables developers to rapidly deploy local model services using libraries such as ollama (CPU inference), Flask + Transformers, Flask + ModelScope, FastChat, and vllm. AgentScope supports various services, including Web Search, Data Query, Retrieval, Code Execution, File Operation, and Text Processing. Example applications include Conversation, Game, and Distribution. AgentScope is released under Apache License 2.0 and welcomes contributions.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
pr-pilot
PR Pilot is an AI-powered tool designed to assist users in their daily workflow by delegating routine work to AI with confidence and predictability. It integrates seamlessly with popular development tools and allows users to interact with it through a Command-Line Interface, Python SDK, REST API, and Smart Workflows. Users can automate tasks such as generating PR titles and descriptions, summarizing and posting issues, and formatting README files. The tool aims to save time and enhance productivity by providing AI-powered solutions for common development tasks.
LightRAG
LightRAG is a repository hosting the code for LightRAG, a system that supports seamless integration of custom knowledge graphs, Oracle Database 23ai, Neo4J for storage, and multiple file types. It includes features like entity deletion, batch insert, incremental insert, and graph visualization. LightRAG provides an API server implementation for RESTful API access to RAG operations, allowing users to interact with it through HTTP requests. The repository also includes evaluation scripts, code for reproducing results, and a comprehensive code structure.
agentops
AgentOps is a toolkit for evaluating and developing robust and reliable AI agents. It provides benchmarks, observability, and replay analytics to help developers build better agents. AgentOps is in open beta and can be signed up for here. Key features of AgentOps include: - Session replays in 3 lines of code: Initialize the AgentOps client and automatically get analytics on every LLM call. - Time travel debugging: (coming soon!) - Agent Arena: (coming soon!) - Callback handlers: AgentOps works seamlessly with applications built using Langchain and LlamaIndex.
e2m
E2M is a Python library that can parse and convert various file types into Markdown format. It supports the conversion of multiple file formats, including doc, docx, epub, html, htm, url, pdf, ppt, pptx, mp3, and m4a. The ultimate goal of the E2M project is to provide high-quality data for Retrieval-Augmented Generation (RAG) and model training or fine-tuning. The core architecture consists of a Parser responsible for parsing various file types into text or image data, and a Converter responsible for converting text or image data into Markdown format.
gemini-openai-proxy
Gemini-OpenAI-Proxy is a proxy software designed to convert OpenAI API protocol calls into Google Gemini Pro protocol, allowing software using OpenAI protocol to utilize Gemini Pro models seamlessly. It provides an easy integration of Gemini Pro's powerful features without the need for complex development work.
freeGPT
freeGPT provides free access to text and image generation models. It supports various models, including gpt3, gpt4, alpaca_7b, falcon_40b, prodia, and pollinations. The tool offers both asynchronous and non-asynchronous interfaces for text completion and image generation. It also features an interactive Discord bot that provides access to all the models in the repository. The tool is easy to use and can be integrated into various applications.
gollama
Gollama is a delightful tool that brings Ollama, your offline conversational AI companion, directly into your terminal. It provides a fun and interactive way to generate responses from various models without needing internet connectivity. Whether you're brainstorming ideas, exploring creative writing, or just looking for inspiration, Gollama is here to assist you. The tool offers an interactive interface, customizable prompts, multiple models selection, and visual feedback to enhance user experience. It can be installed via different methods like downloading the latest release, using Go, running with Docker, or building from source. Users can interact with Gollama through various options like specifying a custom base URL, prompt, model, and enabling raw output mode. The tool supports different modes like interactive, piped, CLI with image, and TUI with image. Gollama relies on third-party packages like bubbletea, glamour, huh, and lipgloss. The roadmap includes implementing piped mode, support for extracting codeblocks, copying responses/codeblocks to clipboard, GitHub Actions for automated releases, and downloading models directly from Ollama using the rest API. Contributions are welcome, and the project is licensed under the MIT License.
rpaframework
RPA Framework is an open-source collection of libraries and tools for Robotic Process Automation (RPA), designed to be used with Robot Framework and Python. It offers well-documented core libraries for Software Robot Developers, optimized for Robocorp Control Room and Developer Tools, and accepts external contributions. The project includes various libraries for tasks like archiving, browser automation, date/time manipulations, cloud services integration, encryption operations, database interactions, desktop automation, document processing, email operations, Excel manipulation, file system operations, FTP interactions, web API interactions, image manipulation, AI services, and more. The development of the repository is Python-based and requires Python version 3.8+, with tooling based on poetry and invoke for compiling, building, and running the package. The project is licensed under the Apache License 2.0.
retro-aim-server
Retro AIM Server is an instant messaging server that revives AOL Instant Messenger clients from the 2000s. It supports Windows AIM client versions 5.0-5.9, away messages, buddy icons, buddy list, chat rooms, instant messaging, user profiles, blocking/visibility toggle/idle notification, and warning. The Management API provides functionality for administering the server, including listing users, creating users, changing passwords, and listing active sessions.
cortex.cpp
Cortex is a C++ AI engine with a Docker-like command-line interface and client libraries. It supports running AI models using ONNX, TensorRT-LLM, and llama.cpp engines. Cortex can function as a standalone server or be integrated as a library. The tool provides support for various engines and models, allowing users to easily deploy and interact with AI models. It offers a range of CLI commands for managing models, embeddings, and engines, as well as a REST API for interacting with models. Cortex is designed to simplify the deployment and usage of AI models in C++ applications.
vnc-lm
vnc-lm is a Discord bot designed for messaging with language models. Users can configure model parameters, branch conversations, and edit prompts to enhance responses. The bot supports various providers like OpenAI, Huggingface, and Cloudflare Workers AI. It integrates with ollama and LiteLLM, allowing users to access a wide range of language model APIs through a single interface. Users can manage models, switch between models, split long messages, and create conversation branches. LiteLLM integration enables support for OpenAI-compatible APIs and local LLM services. The bot requires Docker for installation and can be configured through environment variables. Troubleshooting tips are provided for common issues like context window problems, Discord API errors, and LiteLLM issues.
Groq2API
Groq2API is a REST API wrapper around the Groq2 model, a large language model trained by Google. The API allows you to send text prompts to the model and receive generated text responses. The API is easy to use and can be integrated into a variety of applications.
LongLLaVA
LongLLaVA is a tool for scaling multi-modal LLMs to 1000 images efficiently via hybrid architecture. It includes stages for single-image alignment, instruction-tuning, and multi-image instruction-tuning, with evaluation through a command line interface and model inference. The tool aims to achieve GPT-4V level capabilities and beyond, providing reproducibility of results and benchmarks for efficiency and performance.
For similar tasks
devchat
DevChat is an open-source workflow engine that enables developers to create intelligent, automated workflows for engaging with users through a chat panel within their IDEs. It combines script writing flexibility, latest AI models, and an intuitive chat GUI to enhance user experience and productivity. DevChat simplifies the integration of AI in software development, unlocking new possibilities for developers.
Wave-Executor
Wave Executor is a cutting-edge Roblox script executor designed for advanced script execution, optimized performance, and seamless user experience. Fully compatible with the latest Roblox updates, it is secure, easy to use, and perfect for gamers, developers, and modding enthusiasts looking to enhance their Roblox gameplay.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.
zep-python
Zep is an open-source platform for building and deploying large language model (LLM) applications. It provides a suite of tools and services that make it easy to integrate LLMs into your applications, including chat history memory, embedding, vector search, and data enrichment. Zep is designed to be scalable, reliable, and easy to use, making it a great choice for developers who want to build LLM-powered applications quickly and easily.
AI-in-a-Box
AI-in-a-Box is a curated collection of solution accelerators that can help engineers establish their AI/ML environments and solutions rapidly and with minimal friction, while maintaining the highest standards of quality and efficiency. It provides essential guidance on the responsible use of AI and LLM technologies, specific security guidance for Generative AI (GenAI) applications, and best practices for scaling OpenAI applications within Azure. The available accelerators include: Azure ML Operationalization in-a-box, Edge AI in-a-box, Doc Intelligence in-a-box, Image and Video Analysis in-a-box, Cognitive Services Landing Zone in-a-box, Semantic Kernel Bot in-a-box, NLP to SQL in-a-box, Assistants API in-a-box, and Assistants API Bot in-a-box.
NeMo
NeMo Framework is a generative AI framework built for researchers and pytorch developers working on large language models (LLMs), multimodal models (MM), automatic speech recognition (ASR), and text-to-speech synthesis (TTS). The primary objective of NeMo is to provide a scalable framework for researchers and developers from industry and academia to more easily implement and design new generative AI models by being able to leverage existing code and pretrained models.
For similar jobs
promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.
MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".
leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.
llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.
carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.
TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.
AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.