dbgpts
Intelligent data apps and assets with LLMs
Stars: 83
The dbgpts repository contains data apps, AWEL operators, AWEL workflow templates, and agents that are built upon DB-GPT. Users can install and manage these components within their DB-GPT environment. The repository offers functionalities such as listing available flows, installing dbgpts from the official repository, viewing installed dbgpts, running flows, and managing repositories. Users can create new workflow templates and operators using the provided commands. The repository aims to enhance the capabilities of DB-GPT by providing a collection of useful tools and resources for data processing and workflow management.
README:
This repo contains data apps, AWEL operators, AWEL workflow templates, and agents built upon DB-GPT.
First, you need to install the DB-GPT project.
Below we show how to install a dbgpts from the official repository into your local DB-GPT environment.
Change to your DB-GPT project directory and run the following command to activate your virtual environment:
conda activate dbgpt_env
Make sure you have installed the required packages:
pip install poetry
dbgpt app list-remote
# These workflows can be installed.
dbgpts In All Repos
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Repository       ┃ Type      ┃ Name                            ┃
┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ eosphoros/dbgpts │ operators │ awel-simple-operator            │
│ eosphoros/dbgpts │ workflow  │ awel-flow-example-chat          │
│ eosphoros/dbgpts │ workflow  │ awel-flow-simple-streaming-chat │
│ eosphoros/dbgpts │ workflow  │ awel-flow-web-info-search       │
│ fangyinc/dbgpts  │ workflow  │ awel-flow-example-chat          │
│ fangyinc/dbgpts  │ workflow  │ awel-flow-simple-streaming-chat │
│ local/dbgpts     │ operators │ awel-simple-operator            │
│ local/dbgpts     │ workflow  │ awel-flow-example-chat          │
│ local/dbgpts     │ workflow  │ awel-flow-simple-streaming-chat │
│ local/dbgpts     │ workflow  │ awel-flow-web-info-search       │
│ local/dbgpts     │ workflow  │ awel-simple-example-chat        │
│ local/dbgpts     │ workflow  │ rag-save-url-to-vstore          │
│ local/dbgpts     │ workflow  │ rag-url-knowledge-example       │
└──────────────────┴───────────┴─────────────────────────────────┘
dbgpt app list
Installed dbgpts
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Name                            ┃ Type     ┃ Repository       ┃ Path                                                                                ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ awel-flow-example-chat          │ flow     │ aries-ckt/dbgpts │ ~/.dbgpts/packages/b8bc19cefb00ae87d6586109725f15a1/awel-flow-example-chat          │
│ awel-flow-rag-chat-example      │ flow     │ aries-ckt/dbgpts │ ~/.dbgpts/packages/b8bc19cefb00ae87d6586109725f15a1/awel-flow-rag-chat-example      │
│ awel-flow-simple-streaming-chat │ flow     │ eosphoros/dbgpts │ ~/.dbgpts/packages/b8bc19cefb00ae87d6586109725f15a1/awel-flow-simple-streaming-chat │
│ awel-flow-web-info-search       │ flow     │ eosphoros/dbgpts │ ~/.dbgpts/packages/b8bc19cefb00ae87d6586109725f15a1/awel-flow-web-info-search       │
│ awel-list-to-string-operator    │ operator │ local/dbgpts     │ ~/.dbgpts/packages/b8bc19cefb00ae87d6586109725f15a1/awel-list-to-string-operator    │
│ rag-url-knowledge-example       │ flow     │ local/dbgpts     │ ~/.dbgpts/packages/b8bc19cefb00ae87d6586109725f15a1/rag-url-knowledge-example       │
└─────────────────────────────────┴──────────┴──────────────────┴─────────────────────────────────────────────────────────────────────────────────────┘
Install a dbgpts:
dbgpt app install awel-flow-simple-streaming-chat -U
Wait about 10 seconds, then open the DB-GPT web page; you will see the new AWEL flow there.
You can also chat with the flow from the command line:
dbgpt run flow chat -n awel_flow_simple_streaming_chat \
--model "chatgpt_proxyllm" \
--stream \
--messages 'Write a quick sort algorithm in Python.'
Output:
You: Write a quick sort algorithm in Python.
Chat stream started
JSON data: {"model": "chatgpt_proxyllm", "stream": true, "messages": "Write a quick sort algorithm in Python.", "chat_param": "1ecd35d4-a60a-420b-8943-8fc44f7f054a", "chat_mode": "chat_flow"}
Bot:
Sure! Here is an implementation of the Quicksort algorithm in Python:
```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        less = [x for x in arr[1:] if x <= pivot]
        greater = [x for x in arr[1:] if x > pivot]
        return quicksort(less) + [pivot] + quicksort(greater)

# Test the algorithm with a sample list
arr = [8, 3, 1, 5, 9, 4, 7, 2, 6]
sorted_arr = quicksort(arr)
print(sorted_arr)
```
This code defines a `quicksort` function that recursively partitions the input list into two sublists based on a pivot element, and then joins the sorted sublists with the pivot element to produce a fully sorted list.
🎉 Chat stream finished, timecost: 5.27 s
Note: only AWEL flows (workflows) can be run from the command line for now.
To uninstall it, run:
dbgpt app uninstall awel-flow-simple-streaming-chat
You can run dbgpt app --help to see more commands. The output will be like this:
Usage: dbgpt app [OPTIONS] COMMAND [ARGS]...

  Manage your apps(dbgpts).

Options:
  --help  Show this message and exit.

Commands:
  install      Install your dbgpts(operators,agents,workflows or apps)
  list         List all installed dbgpts
  list-remote  List all available dbgpts
  uninstall    Uninstall your dbgpts(operators,agents,workflows or apps)
Run dbgpt run flow chat --help to see the options for running flows. The output will be like this:
Usage: dbgpt run flow [OPTIONS]

  Run a AWEL flow.

Options:
  -n, --name TEXT           The name of the AWEL flow
  --uid TEXT                The uid of the AWEL flow
  -m, --messages TEXT       The messages to run AWEL flow
  --model TEXT              The model name of AWEL flow
  -s, --stream              Whether use stream mode to run AWEL flow
  -t, --temperature FLOAT   The temperature to run AWEL flow
  --max_new_tokens INTEGER  The max new tokens to run AWEL flow
  --conv_uid TEXT           The conversation id of the AWEL flow
  -d, --data TEXT           The json data to run AWEL flow, if set, will
                            overwrite other options
  -e, --extra TEXT          The extra json data to run AWEL flow.
  -i, --interactive         Whether use interactive mode to run AWEL flow
  --help                    Show this message and exit.
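The -d/--data option accepts a single JSON payload in place of the individual options. As a hedged sketch based on the "JSON data:" line printed in the chat output above (the field names are taken from that output, not from separate documentation), an equivalent invocation might look like:
dbgpt run flow chat -n awel_flow_simple_streaming_chat \
    -d '{"model": "chatgpt_proxyllm", "stream": true, "messages": "Write a quick sort algorithm in Python."}'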
Run dbgpt repo --help to see more commands for managing repositories. The output will be like this:
Usage: dbgpt repo [OPTIONS] COMMAND [ARGS]...

  The repository to install the dbgpts from.

Options:
  --help  Show this message and exit.

Commands:
  add     Add a new repo
  list    List all repos
  remove  Remove the specified repo
  update  Update the specified repo
A repository is a collection of dbgpts. The dbgpts can be managed by multiple repositories; the official repository is eosphoros/dbgpts. You can add your own repository with dbgpt repo add --repo <repo_name> --url <repo_url>, for example:
- Your git repo:
dbgpt repo add --repo fangyinc/dbgpts --url https://github.com/fangyinc/dbgpts.git
- Your local repo:
dbgpt repo add --repo local/dbgpts --url /path/to/your/repo
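After adding a repository, you can verify it is registered with the list command shown in the help output above:
dbgpt repo list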
To create your own dbgpts, first prepare a Python environment:
conda create -n dbgpts python=3.10
conda activate dbgpts
pip install poetry
pip install dbgpt
Create a new flow (workflow) template:
dbgpt new app -n my-awel-flow-example-chat
Create a new operator template:
dbgpt new app -t operator -n my-awel-operator-example
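These commands scaffold a new dbgpts package. To give a sense of what goes inside such a package, here is a minimal operator sketch. It assumes DB-GPT's dbgpt.core.awel module provides DAG and MapOperator, as in DB-GPT's AWEL examples; DoubleOperator and its doubling logic are hypothetical placeholders for your own code:

```python
# Minimal AWEL operator sketch (assumption: dbgpt.core.awel exposes DAG and
# MapOperator as in DB-GPT's AWEL examples; DoubleOperator is a hypothetical
# placeholder for your own operator).
import asyncio

from dbgpt.core.awel import DAG, MapOperator


class DoubleOperator(MapOperator[int, int]):
    """Map each input value to its double."""

    async def map(self, x: int) -> int:
        return x * 2


with DAG("my_awel_operator_example") as dag:
    task = DoubleOperator()

# Run the single-node DAG directly; call_data carries the input value.
print(asyncio.run(task.call(call_data=21)))  # -> 42
```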
Similar Open Source Tools
gpt-all-star
GPT-All-Star is an AI-powered code generation tool designed for scratch development of web applications with team collaboration of autonomous AI agents. The primary focus of this research project is to explore the potential of autonomous AI agents in software development. Users can organize their team, choose leaders for each step, create action plans, and work together to complete tasks. The tool supports various endpoints like OpenAI, Azure, and Anthropic, and provides functionalities for project management, code generation, and team collaboration.
text-embeddings-inference
Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for popular models like FlagEmbedding, Ember, GTE, and E5. It implements features such as no model graph compilation step, Metal support for local execution on Macs, small docker images with fast boot times, token-based dynamic batching, optimized transformers code for inference using Flash Attention, Candle, and cuBLASLt, Safetensors weight loading, and production-ready features like distributed tracing with Open Telemetry and Prometheus metrics.
files-to-prompt
files-to-prompt is a tool that concatenates a directory full of files into a single prompt for use with Language Models (LLMs). It allows users to provide the path to one or more files or directories for processing, outputting the contents of each file with relative paths and separators. The tool offers options to include hidden files, ignore specific patterns, and exclude files specified in .gitignore. It is designed to streamline the process of preparing text data for LLMs by simplifying file concatenation and customization.
iree-amd-aie
This repository contains an early-phase IREE compiler and runtime plugin for interfacing the AMD AIE accelerator to IREE. It provides architectural overview, developer setup instructions, building guidelines, and runtime driver setup details. The repository focuses on enabling the integration of the AMD AIE accelerator with IREE, offering developers the tools and resources needed to build and run applications leveraging this technology.
docker-cups-airprint
This repository provides a Docker image that acts as an AirPrint bridge for local printers, allowing them to be exposed to iOS/macOS devices. It runs a container with CUPS and Avahi to facilitate this functionality. Users must have CUPS drivers available for their printers. The tool requires a Linux host and a dedicated IP for the container to avoid interference with other services. It supports setting up printers through environment variables and offers options for automated configuration via command line, web interface, or files. The repository includes detailed instructions on setting up and testing the AirPrint bridge.
starcoder2-self-align
StarCoder2-Instruct is an open-source pipeline that introduces StarCoder2-15B-Instruct-v0.1, a self-aligned code Large Language Model (LLM) trained with a fully permissive and transparent pipeline. It generates instruction-response pairs to fine-tune StarCoder-15B without human annotations or data from proprietary LLMs. The tool is primarily finetuned for Python code generation tasks that can be verified through execution, with potential biases and limitations. Users can provide response prefixes or one-shot examples to guide the model's output. The model may have limitations with other programming languages and out-of-domain coding tasks.
elyra
Elyra is a set of AI-centric extensions to JupyterLab Notebooks that includes features like Visual Pipeline Editor, running notebooks/scripts as batch jobs, reusable code snippets, hybrid runtime support, script editors with execution capabilities, debugger, version control using Git, and more. It provides a comprehensive environment for data scientists and AI practitioners to develop, test, and deploy machine learning models and workflows efficiently.
gpt-translate
Markdown Translation BOT is a GitHub action that translates markdown files into multiple languages using various AI models. It supports markdown, markdown-jsx, and json files only. The action can be executed by individuals with write permissions to the repository, preventing API abuse by non-trusted parties. Users can set up the action by providing their API key and configuring the workflow settings. The tool allows users to create comments with specific commands to trigger translations and automatically generate pull requests or add translated files to existing pull requests. It supports multiple file translations and can interpret any language supported by GPT-4 or GPT-3.5.
ppl.llm.serving
ppl.llm.serving is a serving component for Large Language Models (LLMs) within the PPL.LLM system. It provides a server based on gRPC and supports inference for LLaMA. The repository includes instructions for prerequisites, quick start guide, model exporting, server setup, client usage, benchmarking, and offline inference. Users can refer to the LLaMA Guide for more details on using this serving component.
react-native-fast-tflite
A high-performance TensorFlow Lite library for React Native that utilizes JSI for power, zero-copy ArrayBuffers for efficiency, and low-level C/C++ TensorFlow Lite core API for direct memory access. It supports swapping out TensorFlow Models at runtime and GPU-accelerated delegates like CoreML/Metal/OpenGL. Easy VisionCamera integration allows for seamless usage. Users can load TensorFlow Lite models, interpret input and output data, and utilize GPU Delegates for faster computation. The library is suitable for real-time object detection, image classification, and other machine learning tasks in React Native applications.
zsh_codex
Zsh Codex is a ZSH plugin that enables AI-powered code completion in the command line. It supports both OpenAI's Codex and Google's Generative AI (Gemini), providing advanced language model capabilities for coding tasks directly in the terminal. Users can easily install the plugin and configure it to enhance their coding experience with AI assistance.
mLoRA
mLoRA (Multi-LoRA Fine-Tune) is an open-source framework for efficient fine-tuning of multiple Large Language Models (LLMs) using LoRA and its variants. It allows concurrent fine-tuning of multiple LoRA adapters with a shared base model, efficient pipeline parallelism algorithm, support for various LoRA variant algorithms, and reinforcement learning preference alignment algorithms. mLoRA helps save computational and memory resources when training multiple adapters simultaneously, achieving high performance on consumer hardware.
ragflow
RAGFlow is an open-source Retrieval-Augmented Generation (RAG) engine that combines deep document understanding with Large Language Models (LLMs) to provide accurate question-answering capabilities. It offers a streamlined RAG workflow for businesses of all sizes, enabling them to extract knowledge from unstructured data in various formats, including Word documents, slides, Excel files, images, and more. RAGFlow's key features include deep document understanding, template-based chunking, grounded citations with reduced hallucinations, compatibility with heterogeneous data sources, and an automated and effortless RAG workflow. It supports multiple recall paired with fused re-ranking, configurable LLMs and embedding models, and intuitive APIs for seamless integration with business applications.
shell_gpt
ShellGPT is a command-line productivity tool powered by AI large language models (LLMs). It offers streamlined generation of shell commands, code snippets, and documentation, eliminating the need for external resources (like Google search). It supports Linux, macOS, and Windows, and is compatible with all major shells like PowerShell, CMD, Bash, Zsh, etc.
melodisco
Melodisco is an AI music player that allows users to listen to music and manage playlists. It provides a user-friendly interface for music playback and organization. Users can deploy Melodisco with Vercel or Docker for easy setup. Local development instructions are provided for setting up the project environment. The project credits various tools and libraries used in its development, such as Next.js, Tailwind CSS, and Stripe. Melodisco is a versatile tool for music enthusiasts looking for an AI-powered music player with features like authentication, payment integration, and multi-language support.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: it is self-contained, with no need for a DBMS or cloud service; it exposes an OpenAPI interface that is easy to integrate with existing infrastructure (e.g., a cloud IDE); and it supports consumer-grade GPUs.
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.