
cb-tumblebug
Cloud-Barista Multi-Cloud Infra Management Framework
Stars: 67

CB-Tumblebug (CB-TB) is a system for managing multi-cloud infrastructure consisting of resources from multiple cloud service providers. It provides an overview, features, and architecture. The tool supports various cloud providers and resource types, with ongoing development and localization efforts. Users can deploy a multi-cloud infra with GPUs, enjoy multiple LLMs in parallel, and utilize LLM-related scripts. The tool requires Linux, Docker, Docker Compose, and Golang for building the source. Users can run CB-TB with Docker Compose or from the Makefile, set up prerequisites, contribute to the project, and view a list of contributors. The tool is licensed under an open-source license.
README:
CB-Tumblebug (CB-TB) is an advanced multi-cloud infrastructure management system that enables seamless provisioning, management, and orchestration of resources across multiple cloud service providers. Part of the Cloud-Barista project, CB-TB abstracts the complexity of multi-cloud environments into a unified, intuitive interface.
- Multi-Cloud Orchestration: Manage AWS, Azure, GCP, Alibaba Cloud, and more from a single platform
- Auto-provisioning: Intelligent resource recommendations and automated deployment
- Secure Operations: Encrypted credential management and hybrid encryption protocols
- Visual Infrastructure Map: Interactive GUI for infrastructure visualization and management
- AI-Powered Management: NEW! Control infrastructure using natural language via our MCP Server

Supported Cloud Providers & Resources
Note: Reference only - functionality not guaranteed. Regular updates are made.
Kubernetes support is currently WIP with limited features available.

Development Status & Contributing Notes
CB-TB has not reached version 1.0 yet. We welcome any new suggestions, issues, opinions, and contributors! Please note that the functionalities of Cloud-Barista are not yet stable or secure. Be cautious if you plan to use the current release in a production environment. If you encounter any difficulties using Cloud-Barista, please let us know by opening an issue or joining the Cloud-Barista Slack.
As an open-source project initiated by Korean members, we aim to encourage participation from Korean contributors during the initial stages of this project. Therefore, the CB-TB repository will accept the use of the Korean language in its early stages. However, we hope this project will thrive regardless of contributors' countries in the long run. To facilitate this, the maintainers recommend using English at least for the titles of Issues, Pull Requests, and Commits, while accommodating local languages in the contents.
NEW: AI-Powered Multi-Cloud Management
- Control CB-Tumblebug through AI assistants like Claude and VS Code
- Natural language interface for infrastructure provisioning and management using MCP (Model Context Protocol)
- Streamable HTTP transport for modern MCP compatibility
- MCP Server Guide | Quick Start

GPU-Powered Multi-Cloud LLM Deployment
- Deploy GPU instances across multiple clouds for AI/ML workloads
- LLM Scripts & Examples

Contents:
- Quick Start
- Prerequisites
- Installation & Setup
- How to Use
- Development
- Contributing
Get CB-Tumblebug running in under 5 minutes:

```shell
# 1. Automated setup (recommended for new users)
curl -sSL https://raw.githubusercontent.com/cloud-barista/cb-tumblebug/main/scripts/set-tb.sh | bash

# 2. Start all services
cd ~/go/src/github.com/cloud-barista/cb-tumblebug
make compose

# 3. Configure credentials (see detailed setup below)
./init/genCredential.sh
# Edit ~/.cloud-barista/credentials.yaml with your cloud credentials
./init/encCredential.sh
./init/init.sh

# 4. Access services
# - API: http://localhost:1323/tumblebug/api
# - MapUI: http://localhost:1324
# - MCP Server: http://localhost:8000/mcp (if enabled)
```
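To confirm the services above are actually reachable after startup, a quick probe can help. This is a minimal sketch: the `/tumblebug/readyz` path is an assumption based on common health-check conventions, so adjust the URLs to your deployment.

```python
# Probe the default local CB-TB endpoints (illustrative sketch; the
# "/tumblebug/readyz" path is an assumed health-check route, not confirmed
# by this README - adjust to your deployment).
from urllib.request import Request, urlopen
from urllib.error import URLError


def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with any non-5xx HTTP status."""
    try:
        with urlopen(Request(url), timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError):
        return False


if __name__ == "__main__":
    for name, url in [
        ("API", "http://localhost:1323/tumblebug/readyz"),
        ("MapUI", "http://localhost:1324"),
    ]:
        print(f"{name}: {'up' if is_up(url) else 'down'}")
```

An unreachable endpoint simply reports `down` rather than raising, so the probe is safe to run before the compose stack is fully started.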
New to CB-Tumblebug? Follow the detailed setup guide below for comprehensive instructions.
| Component | Minimum Specification | Recommended |
|---|---|---|
| OS | Linux (Ubuntu 22.04+) | Ubuntu 22.04 LTS |
| CPU | 4 cores | 8+ cores |
| Memory | 6 GiB | 16+ GiB |
| Storage | 20 GiB free space | 50+ GiB SSD |
| Example | AWS c5a.xlarge | AWS c5a.2xlarge |

Performance Note: Lower specifications may cause initialization failures or performance degradation.
- Docker & Docker Compose (latest stable)
- Go 1.23.0+ (for building from source)
- Git (for cloning repository)
- View Dependencies
- Software Bill of Materials (SBOM)
For new users on clean Linux systems:

```shell
# Download and run automated setup script
curl -sSL https://raw.githubusercontent.com/cloud-barista/cb-tumblebug/main/scripts/set-tb.sh | bash
```

Post-installation: After the script finishes, log out and back in to activate Docker permissions and aliases. If you'd prefer to install dependencies and clone the repository manually, follow the steps below.
- Clone the CB-Tumblebug repository:

  ```shell
  git clone https://github.com/cloud-barista/cb-tumblebug.git $HOME/go/src/github.com/cloud-barista/cb-tumblebug
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug
  ```

  Optionally, you can register aliases for the CB-Tumblebug directory to simplify navigation:

  ```shell
  echo "alias cdtb='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug'" >> ~/.bashrc
  echo "alias cdtbsrc='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src'" >> ~/.bashrc
  echo "alias cdtbtest='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src/testclient/scripts'" >> ~/.bashrc
  source ~/.bashrc
  ```
- Check Docker Compose Installation:

  Ensure that Docker Engine and Docker Compose are installed on your system. If not, you can use the following script to install them (note: this script is not intended for production environments):

  ```shell
  # download and install docker with docker compose
  curl -sSL get.docker.com | sh

  # optional: add user to the docker group
  sudo groupadd docker
  sudo usermod -aG docker ${USER}
  newgrp docker

  # test that docker works
  docker run hello-world
  ```
- Start All Components Using Docker Compose:

  To run all components, use the following command:

  ```shell
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug
  docker compose up
  ```

  This command will start all components as defined in the preconfigured docker-compose.yaml file. For configuration customization, please refer to the guide.
The following components will be started:
- ETCD: CB-Tumblebug KeyValue DB
- CB-Spider: a Cloud API controller
- CB-MapUI: a simple Map-based GUI web server
- CB-Tumblebug: the system with API server
- CB-Tumblebug MCP Server: AI assistant interface (if enabled)
- PostgreSQL: Specs and Images storage
- Traefik: Reverse proxy for secure access
Container Architecture Overview:

```mermaid
graph TB
  subgraph "External Access"
    User[User]
    AI[AI Assistant<br/>Claude/VS Code]
  end
  subgraph "Docker Compose Environment"
    subgraph "Frontend & Interfaces"
      UI[CB-MapUI<br/>:1324]
      MCP[TB-MCP Server<br/>:8000]
      Proxy[Traefik Proxy<br/>:80/:443]
    end
    subgraph "Backend Services"
      TB[CB-Tumblebug<br/>:1323<br/>Multi-Cloud Management]
      Spider[CB-Spider<br/>:1024<br/>Cloud API Abstraction]
      ETCD[ETCD<br/>:2379<br/>Metadata Store]
      PG[PostgreSQL<br/>:5432<br/>Specs/Images DB]
    end
  end
  subgraph "Cloud Providers"
    AWS[AWS]
    Azure[Azure]
    GCP[GCP]
    Others[Others...]
  end
  %% User connections
  User -->|HTTP/HTTPS| Proxy
  User -->|HTTP| UI
  User -->|HTTP| TB
  AI -->|MCP HTTP| MCP
  %% Proxy routing
  Proxy -->|Route| UI
  %% Internal service connections
  UI -.->|API calls| TB
  MCP -->|REST API| TB
  TB -->|REST API| Spider
  TB -->|gRPC| ETCD
  TB -->|SQL| PG
  %% Cloud connections
  Spider -->|Cloud APIs| AWS
  Spider -->|Cloud APIs| Azure
  Spider -->|Cloud APIs| GCP
  Spider -->|Cloud APIs| Others
  %% Styling
  classDef frontend fill:#e3f2fd,stroke:#1976d2
  classDef backend fill:#f3e5f5,stroke:#7b1fa2
  classDef storage fill:#e8f5e8,stroke:#388e3c
  classDef cloud fill:#fff3e0,stroke:#f57c00
  class UI,MCP,Proxy frontend
  class TB,Spider,ETCD,PG backend
  class AWS,Azure,GCP,Others cloud
```
Once the containers are running, the following service endpoints are available:
- CB-Tumblebug API: http://localhost:1323/tumblebug/api
- CB-MapUI: http://localhost:1324 (direct) or https://cb-mapui.localhost (via Traefik with SSL)
- MCP Server: http://localhost:8000/mcp (if enabled)
- Traefik Dashboard: http://localhost:8080 (reverse proxy monitoring)
Note: Before using CB-Tumblebug, you need to initialize it.
To provision multi-cloud infrastructures with CB-TB, it is necessary to register the connection information (credentials) for clouds, as well as commonly used images and specifications.
- Create `credentials.yaml` file and input your cloud credentials
  - Overview
    - `credentials.yaml` is a file that includes multiple credentials to use the APIs of clouds supported by CB-TB (AWS, GCP, AZURE, ALIBABA, etc.)
    - It should be located in the `~/.cloud-barista/` directory and securely managed.
    - Refer to `template.credentials.yaml` for the template.
  - Create the `credentials.yaml` file
    - Automatically generate the `credentials.yaml` file in the `~/.cloud-barista/` directory using the CB-TB script:

      ```shell
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./init/genCredential.sh
      ```
  - Input credential data
    - Put credential data into `~/.cloud-barista/credentials.yaml` (Reference: How to obtain a credential for each CSP)

      ```yaml
      ### Cloud credentials for credential holders (default: admin)
      credentialholder:
        admin:
          alibaba:
            # ClientId(ClientId): client ID of the EIAM application
            # Example: app_mkv7rgt4d7i4u7zqtzev2mxxxx
            ClientId:
            # ClientSecret(ClientSecret): client secret of the EIAM application
            # Example: CSEHDcHcrUKHw1CuxkJEHPveWRXBGqVqRsxxxx
            ClientSecret:
          aws:
            # ClientId(aws_access_key_id)
            # ex: AKIASSSSSSSSSSS56DJH
            ClientId:
            # ClientSecret(aws_secret_access_key)
            # ex: jrcy9y0Psejjfeosifj3/yxYcgadklwihjdljMIQ0
            ClientSecret:
          ...
      ```
- Encrypt `credentials.yaml` into `credentials.yaml.enc`
  - To protect sensitive information, `credentials.yaml` is not used directly. Instead, it must be encrypted using `encCredential.sh`. The encrypted file `credentials.yaml.enc` is then used by `init.py`. This approach ensures that sensitive credentials are not stored in plain text.
  - Encrypting credentials:

    ```shell
    init/encCredential.sh
    ```

    When executing the script, you have two options: 1) enter your password or 2) let the system generate a random passkey.
    - Option 1: Entering your password
    - Option 2: Letting the system generate a random passkey, which MUST be securely stored in a safe location
  - If you need to update your credentials, decrypt the encrypted file using `decCredential.sh`, make the necessary changes to `credentials.yaml`, and then re-encrypt it.
- (INIT) Register all multi-cloud connection information and common resources
  - How to register
    - Refer to the README.md for `init.py`, and execute the init script (enter 'y' for confirmation prompts):

      ```shell
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./init/init.sh
      ```
  - The credentials in `~/.cloud-barista/credentials.yaml.enc` (the encrypted file from `credentials.yaml`) will be automatically registered (all CSP and region information recorded in `cloudinfo.yaml` will be automatically registered in the system)
    - Note: You can check the latest regions and zones of CSPs using `update-cloudinfo.py` and review the file for updates. (contributions to updates are welcome)
  - Common images and specifications recorded in the `cloudimage.csv` and `cloudspec.csv` files in the `assets` directory will be automatically registered.
  - `init.py` applies hybrid encryption for secure transmission of credentials:
    - Retrieve RSA public key: use the `/credential/publicKey` API to get the public key.
    - Encrypt credentials: encrypt the credentials with a randomly generated AES key, then encrypt the AES key with the RSA public key.
    - Transmit encrypted data: send the encrypted credentials and the encrypted AES key to the server. The server decrypts the AES key and uses it to decrypt the credentials.
  - This method ensures your credentials are securely transmitted and protected during registration. See `init.py` for a Python implementation. In detail, check out the Secure Credential Registration Guide (How to use the credential APIs).
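The three hybrid-encryption steps above can be sketched in Python. This is a minimal illustration, not the actual `init.py` code: it assumes the `cryptography` package, uses a locally generated RSA key pair to stand in for the key served by `/credential/publicKey`, and picks AES-GCM as the symmetric cipher for demonstration.

```python
# Sketch of the hybrid-encryption flow (illustrative; init.py is the real
# implementation). A local RSA key pair stands in for the server's key,
# and AES-GCM is an assumed choice of symmetric cipher.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# "Server side": RSA key pair; the client only ever sees the public key.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = server_key.public_key()

credentials = b"ClientId: AKIA...\nClientSecret: ..."

# 1) Encrypt credentials with a randomly generated AES key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
enc_credentials = AESGCM(aes_key).encrypt(nonce, credentials, None)

# 2) Encrypt the AES key with the RSA public key (OAEP padding).
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
enc_aes_key = public_key.encrypt(aes_key, oaep)

# 3) Transmit enc_credentials + enc_aes_key; the server decrypts the AES
#    key with its private key, then decrypts the credentials.
dec_aes_key = server_key.decrypt(enc_aes_key, oaep)
dec_credentials = AESGCM(dec_aes_key).decrypt(nonce, enc_credentials, None)
assert dec_credentials == credentials
```

The point of the scheme is that the bulky credential payload is encrypted with a fast symmetric key, while only the small AES key needs the expensive RSA operation.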
- Shutting down CB-TB and related components
  - Stop all containers with `Ctrl+C` or by running `sudo docker compose stop` / `sudo docker compose down` (when a shutdown event occurs, CB-TB shuts down gracefully: API requests that can be processed within 10 seconds will be completed)
  - In case cleanup is needed due to internal system errors:
    - Check and delete resources created through CB-TB
    - Delete CB-TB & CB-Spider metadata using the provided script:

      ```shell
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./init/cleanDB.sh
      ```
- Upgrading the CB-TB & CB-Spider versions
  - The following cleanup steps are unnecessary if you clearly understand the impact of the upgrade:
    - Check and delete resources created through CB-TB
    - Delete CB-TB & CB-Spider metadata:

      ```shell
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./init/cleanDB.sh
      ```
    - Restart with the upgraded version
- Using CB-TB MCP Server (AI Assistant Interface) (NEW!)
- Using CB-TB MapUI (recommended)
- Using CB-TB REST API (recommended)

NEW: Control CB-Tumblebug with AI assistants like Claude!
The Model Context Protocol (MCP) Server enables natural language interaction with CB-Tumblebug through AI assistants:
- AI-Powered Infrastructure Management: Deploy and manage multi-cloud resources using natural language commands
- Seamless Integration: Works with Claude Desktop (via proxy), VS Code (direct), and other MCP-compatible clients
- Modern Protocol: Uses Streamable HTTP transport (current MCP standard)
- Quick Start: Uncomment the MCP service in docker-compose.yaml and enable it with `make compose`

```shell
# Enable MCP Server (Proof of Concept)
# 1. Uncomment cb-tumblebug-mcp-server in docker-compose.yaml
# 2. Launch with Docker Compose
make compose

# Access MCP server at http://localhost:8000/mcp
```

Complete MCP Server Guide
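Once the server is up, an MCP client can be pointed at the Streamable HTTP endpoint with a configuration along these lines. This is a sketch in the style of VS Code's `mcp.json`; exact keys vary by client, so check the MCP Server Guide and your client's documentation.

```json
{
  "servers": {
    "cb-tumblebug": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}
```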
Visual Infrastructure Management with Interactive Maps
CB-MapUI provides an intuitive, map-based interface for managing multi-cloud infrastructure:
- Geographic Visualization: See your infrastructure deployed across the globe
- Real-time Monitoring: Monitor resource status and performance
- Interactive Control: Create, manage, and control resources visually
- Multi-Cloud View: Unified view across all cloud providers

```shell
# Access CB-MapUI (auto-started with Docker Compose)
open http://localhost:1324

# Or run standalone MapUI container
./scripts/runMapUI.sh
```
Features:
- Drag-and-drop resource creation
- Real-time infrastructure mapping
- Cross-cloud resource relationships
- Performance metrics overlay
Learn More: CB-MapUI Repository
Programmatic Multi-Cloud Infrastructure Management
CB-Tumblebug provides a comprehensive REST API for automated infrastructure management:
API Dashboard & Documentation
- Interactive API Explorer: http://localhost:1323/tumblebug/api
- Live Documentation

Authentication: CB-TB uses Basic Authentication (development phase - not production-ready):

```
# Include base64 encoded credentials in request headers
Authorization: Basic <base64(username:password)>
```
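For clients that build the header by hand, the encoding looks like this. A minimal sketch: `default:default` is an illustrative placeholder, not the actual preset credentials, which are set in your deployment configuration.

```python
# Build the Basic Authentication header for CB-TB API calls.
# "default:default" is an illustrative placeholder, not a real preset.
import base64


def basic_auth_header(username: str, password: str) -> dict:
    """Return a headers dict with the base64-encoded Basic auth token."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}


headers = basic_auth_header("default", "default")
print(headers["Authorization"])  # Basic ZGVmYXVsdDpkZWZhdWx0
```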
Quick Infrastructure Creation, following the Quick MCI Creation Guide:

```shell
# 1. Create VM specification
curl -X POST "http://localhost:1323/tumblebug/ns/default/resources/spec" \
  -H "Authorization: Basic <credentials>" \
  -d '{"name": "web-spec", "connectionName": "aws-ap-northeast-2"}'

# 2. Create VM image
curl -X POST "http://localhost:1323/tumblebug/ns/default/resources/image" \
  -H "Authorization: Basic <credentials>" \
  -d '{"name": "ubuntu-image", "connectionName": "aws-ap-northeast-2"}'

# 3. Create Multi-Cloud Infrastructure
curl -X POST "http://localhost:1323/tumblebug/ns/default/mci" \
  -H "Authorization: Basic <credentials>" \
  -d @mci-config.json
```
Core API Categories
- Infrastructure Resources: VM specs, images, networks, security groups
- Multi-Cloud Infrastructure (MCI): Provision and manage distributed infrastructure
- Monitoring & Control: Performance metrics, scaling, lifecycle management
- Credentials & Connections: Secure cloud provider configuration
- Create access key object
- Create, view, control, execute remote commands, shut down, and delete MCI using the MCI (Multi-Cloud Infrastructure) management APIs
- CB-TB optimal and dynamic provisioning
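For script-driven use, the request bodies from the curl examples can be assembled programmatically before posting. A minimal sketch: the `mciDynamic` endpoint name and the `commonSpec`/`commonImage` fields are assumptions based on the dynamic-provisioning API, and the identifiers below are illustrative placeholders, not registered resources.

```python
# Assemble a minimal dynamic-MCI request body (illustrative sketch; field
# names follow the assumed dynamic-provisioning API, and the spec/image
# identifiers are placeholders, not registered resources).
import json


def build_mci_request(name: str, spec_id: str, image_id: str) -> dict:
    """Return a minimal MCI request body with a single VM group."""
    return {
        "name": name,
        "vm": [{"commonSpec": spec_id, "commonImage": image_id}],
    }


body = build_mci_request("cb-demo", "aws+ap-northeast-2+t3.small", "ubuntu22.04")
print(json.dumps(body, indent=2))
# POST the body (requires a running CB-TB and registered credentials), e.g.:
#   curl -X POST "http://localhost:1323/tumblebug/ns/default/mciDynamic" \
#     -H "Authorization: Basic <credentials>" \
#     -H "Content-Type: application/json" -d "$BODY"
```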
- Setup required tools
  - Install git, gcc, make:

    ```shell
    sudo apt update
    sudo apt install make gcc git
    ```
  - Install Golang:
    - Check https://golang.org/dl/ and set up Go
    - Download:

      ```shell
      wget https://go.dev/dl/go1.23.0.linux-amd64.tar.gz
      sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz
      ```
    - Set up the environment:

      ```shell
      echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
      echo 'export GOPATH=$HOME/go' >> ~/.bashrc
      source ~/.bashrc
      echo $GOPATH
      go env
      go version
      ```
- Run Docker Compose with the build option

  To build the current CB-Tumblebug source code into a container image and run it along with the other containers, use the following command:

  ```shell
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug
  sudo DOCKER_BUILDKIT=1 docker compose up --build
  ```

  This command will automatically build CB-Tumblebug from the local source code and start it within a Docker container, along with any other necessary services as defined in the docker-compose.yaml file. The DOCKER_BUILDKIT=1 setting speeds up the build by reusing the Go build cache.
- Build the Golang source code using the Makefile

  ```shell
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
  make
  ```

  All dependencies will be downloaded automatically by Go. The initial build will take some time, but subsequent builds will be faster thanks to the Go build cache.

  Note: To update the Swagger API documentation, run `make swag`
  - The API documentation file will be generated at cb-tumblebug/src/interface/rest/docs/swagger.yaml
  - API documentation can be viewed in a web browser at http://localhost:1323/tumblebug/api (provided when CB-TB is running)
  - Detailed information on how to update the API
- Set the environment variables required to run CB-TB (in another tab)
  - Check and configure the contents of cb-tumblebug/conf/setup.env (CB-TB environment variables; modify as needed)
  - Apply the environment variables to the system:

    ```shell
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    source conf/setup.env
    ```
  - (Optional) Automatically set the TB_SELF_ENDPOINT environment variable (an externally accessible address) using a script if needed
    - This is necessary if you want to access and control the Swagger API Dashboard from outside while CB-TB is running

      ```shell
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      source ./scripts/setPublicIP.sh
      ```
- Execute the built cb-tumblebug binary by using `make run`:

  ```shell
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
  make run
  ```
CB-TB welcomes improvements from both new and experienced contributors!
Check out CONTRIBUTING.
Thanks goes to these wonderful people (emoji key):
Similar Open Source Tools
robustmq
RobustMQ is a next-generation, high-performance, multi-protocol message queue built in Rust. It aims to create a unified messaging infrastructure tailored for modern cloud-native and AI systems. With features like high performance, distributed architecture, multi-protocol support, pluggable storage, cloud-native readiness, multi-tenancy, security features, observability, and user-friendliness, RobustMQ is designed to be production-ready and become a top-level Apache project in the message queue ecosystem by the second half of 2025.
spec-workflow-mcp
Spec Workflow MCP is a Model Context Protocol (MCP) server that offers structured spec-driven development workflow tools for AI-assisted software development. It includes a real-time web dashboard and a VSCode extension for monitoring and managing project progress directly in the development environment. The tool supports sequential spec creation, real-time monitoring of specs and tasks, document management, archive system, task progress tracking, approval workflow, bug reporting, template system, and works on Windows, macOS, and Linux.
redb-open
reDB Node is a distributed, policy-driven data mesh platform that enables True Data Portability across various databases, warehouses, clouds, and environments. It unifies data access, data mobility, and schema transformation into one open platform. Built for developers, architects, and AI systems, reDB addresses the challenges of fragmented data ecosystems in modern enterprises by providing multi-database interoperability, automated schema versioning, zero-downtime migration, real-time developer data environments with obfuscation, quantum-resistant encryption, and policy-based access control. The project aims to build a foundation for future-proof data infrastructure.
deepflow
DeepFlow is an open-source project that provides deep observability for complex cloud-native and AI applications. It offers Zero Code data collection with eBPF for metrics, distributed tracing, request logs, and function profiling. DeepFlow is integrated with SmartEncoding to achieve Full Stack correlation and efficient access to all observability data. With DeepFlow, cloud-native and AI applications automatically gain deep observability, removing the burden of developers continually instrumenting code and providing monitoring and diagnostic capabilities covering everything from code to infrastructure for DevOps/SRE teams.
Hexabot
Hexabot Community Edition is an open-source chatbot solution designed for flexibility and customization, offering powerful text-to-action capabilities. It allows users to create and manage AI-powered, multi-channel, and multilingual chatbots with ease. The platform features an analytics dashboard, multi-channel support, visual editor, plugin system, NLP/NLU management, multi-lingual support, CMS integration, user roles & permissions, contextual data, subscribers & labels, and inbox & handover functionalities. The directory structure includes frontend, API, widget, NLU, and docker components. Prerequisites for running Hexabot include Docker and Node.js. The installation process involves cloning the repository, setting up the environment, and running the application. Users can access the UI admin panel and live chat widget for interaction. Various commands are available for managing the Docker services. Detailed documentation and contribution guidelines are provided for users interested in contributing to the project.
dagger
Dagger is an open-source runtime for composable workflows, ideal for systems requiring repeatability, modularity, observability, and cross-platform support. It features a reproducible execution engine, a universal type system, a powerful data layer, native SDKs for multiple languages, an open ecosystem, an interactive command-line environment, batteries-included observability, and seamless integration with various platforms and frameworks. It also offers LLM augmentation for connecting to LLM endpoints. Dagger is suitable for AI agents and CI/CD workflows.
holmesgpt
HolmesGPT is an open-source DevOps assistant powered by OpenAI or any tool-calling LLM of your choice. It helps in troubleshooting Kubernetes, incident response, ticket management, automated investigation, and runbook automation in plain English. The tool connects to existing observability data, is compliance-friendly, provides transparent results, supports extensible data sources, runbook automation, and integrates with existing workflows. Users can install HolmesGPT using Brew, prebuilt Docker container, Python Poetry, or Docker. The tool requires an API key for functioning and supports OpenAI, Azure AI, and self-hosted LLMs.
mcp-server-chart
mcp-server-chart is a Model Context Protocol (MCP) server from the AntV project for generating charts and visualizations. It lets MCP-compatible AI assistants produce a wide range of chart types (line, bar, pie, maps, and more) from structured data, returning rendered chart images that can be embedded in conversations or documents.
context-portal
Context-portal (ConPort) is a memory bank MCP server that builds a project-specific knowledge base for AI assistants. It stores structured project context such as decisions, progress, and architectural information in a queryable database, enabling AI coding assistants in IDEs to retrieve relevant project knowledge instead of relying on the chat history alone.
infra
E2B Infra is a cloud runtime for AI agents. It provides SDKs and CLI to customize and manage environments and run AI agents in the cloud. The infrastructure is deployed using Terraform and is currently only deployable on GCP. The main components of the infrastructure are the API server, daemon running inside instances (sandboxes), Nomad driver for managing instances (sandboxes), and Nomad driver for building environments (templates).
cedar-OS
Cedar OS is an open-source framework that bridges the gap between AI agents and React applications, enabling the creation of AI-native applications where agents can interact with the application state like users. It focuses on providing intuitive and powerful ways for humans to interact with AI through features like full state integration, real-time streaming, voice-first design, and flexible architecture. Cedar OS offers production-ready chat components, agentic state management, context-aware mentions, voice integration, spells & quick actions, and fully customizable UI. It differentiates itself by offering a true AI-native architecture, developer-first experience, production-ready features, and extensibility. Built with TypeScript support, Cedar OS is designed for developers working on ambitious AI-native applications.
LMForge-End-to-End-LLMOps-Platform-for-Multi-Model-Agents
LMForge is an end-to-end LLMOps platform designed for multi-model agents. It provides a comprehensive solution for managing and deploying large language models efficiently. The platform offers tools for training, fine-tuning, and deploying various types of language models, enabling users to streamline the development and deployment process. With LMForge, users can easily experiment with different model architectures, optimize hyperparameters, and scale their models to meet specific requirements. The platform also includes features for monitoring model performance, managing datasets, and collaborating with team members, making it a versatile tool for researchers and developers working with language models.
hyper-mcp
hyper-mcp is a fast and secure MCP server that enables adding AI capabilities to applications through WebAssembly plugins. It supports writing plugins in various languages, distributing them via standard OCI registries, and running them in resource-constrained environments. The tool offers sandboxing with WASM for limiting access, cross-platform compatibility, and deployment flexibility. Security features include sandboxed plugins, memory-safe execution, secure plugin distribution, and fine-grained access control. Users can configure the tool for global or project-specific use, start the server with different transport options, and utilize available plugins for tasks like time calculations, QR code generation, hash generation, IP retrieval, and webpage fetching.
TuyaOpen
TuyaOpen is an open source AI+IoT development framework supporting cross-chip platforms and operating systems. It provides core functionalities for AI+IoT development, including pairing, activation, control, and upgrading. The SDK offers robust security and compliance capabilities, meeting data compliance requirements globally. TuyaOpen enables the development of AI+IoT products that can leverage the Tuya APP ecosystem and cloud services. It continues to expand with more cloud platform integration features and capabilities like voice, video, and facial recognition.
AIaW
AIaW is a next-generation LLM client with full functionality, lightweight, and extensible. It supports various basic functions such as streaming transfer, image uploading, and latex formulas. The tool is cross-platform with a responsive interface design. It supports multiple service providers like OpenAI, Anthropic, and Google. Users can modify questions, regenerate in a forked manner, and visualize conversations in a tree structure. Additionally, it offers features like file parsing, video parsing, plugin system, assistant market, local storage with real-time cloud sync, and customizable interface themes. Users can create multiple workspaces, use dynamic prompt word variables, extend plugins, and benefit from detailed design elements like real-time content preview, optimized code pasting, and support for various file types.
For similar tasks
pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently** , resolving issues such as the challenging deployment of AI applications and open-source models. Developers are able to write applications in familiar programming languages like **Python and TypeScript** , **directly defining and utilizing the cloud resources necessary for the application within their code base** , such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying the resources creation and application deployment process**.
dataengineering-roadmap
A repository providing basic concepts, technical challenges, and resources on data engineering in Spanish. It is a curated list of free, Spanish-language materials found on the internet to facilitate the study of data engineering enthusiasts. The repository covers programming fundamentals, programming languages like Python, version control with Git, database fundamentals, SQL, design concepts, Big Data, analytics, cloud computing, data processing, and job search tips in the IT field.

awesome-hosting
awesome-hosting is a curated list of hosting services sorted by minimal plan price. It includes various categories such as Web Services Platform, Backend-as-a-Service, Lambda, Node.js, Static site hosting, WordPress hosting, VPS providers, managed databases, GPU cloud services, and LLM/Inference API providers. Each category lists multiple service providers along with details on their minimal plan, trial options, free tier availability, open-source support, and specific features. The repository aims to help users find suitable hosting solutions based on their budget and requirements.

ubicloud
Ubicloud is an open source cloud platform that provides Infrastructure as a Service (IaaS) features on bare metal providers like Hetzner, Leaseweb, and AWS Bare Metal. Users can either set it up themselves on these providers or use the managed service offered by Ubicloud. The platform allows users to cloudify bare metal Linux machines, provision and manage cloud resources, and offers an open source alternative to traditional cloud providers, reducing costs and returning control of infrastructure to the users.

azhpc-images
This repository contains scripts for installing HPC and AI libraries and tools to build Azure HPC/AI images. It streamlines the process of provisioning compute-intensive workloads and crafting advanced AI models in the cloud, ensuring efficiency and reliability in deployments.
For similar jobs

cb-tumblebug
CB-Tumblebug (CB-TB) is a system for managing multi-cloud infrastructure composed of resources from multiple cloud service providers. Its documentation covers an overview, features, and architecture. The tool supports various cloud providers and resource types, with ongoing development and localization efforts. Users can deploy multi-cloud infrastructure with GPUs, run multiple LLMs in parallel, and use LLM-related scripts. Building from source requires Linux, Docker, Docker Compose, and Golang. Users can run CB-TB with Docker Compose or via the Makefile, set up prerequisites, contribute to the project, and view the list of contributors. The tool is released under an open-source license.

robusta
Robusta is a tool designed to enhance Prometheus notifications for Kubernetes environments. It offers features such as smart grouping to reduce notification spam, AI investigation for alert analysis, alert enrichment with additional data like pod logs, self-healing capabilities for defining auto-remediation rules, advanced routing options, problem detection without PromQL, change-tracking for Kubernetes resources, auto-resolve functionality, and integration with various external systems like Slack, Teams, and Jira. Users can utilize Robusta with or without Prometheus, and it can be installed alongside existing Prometheus setups or as part of an all-in-one Kubernetes observability stack.
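The "smart grouping" idea can be sketched in a few lines. This is a simplified illustration rather than Robusta's code, and the label names (`namespace`, `alertname`) are assumptions modeled on common Prometheus alert labels:

```python
from collections import defaultdict

def group_alerts(alerts: list[dict]) -> dict[tuple, list[dict]]:
    """Bundle alerts that share the same namespace and alert name,
    so one notification can summarize many firing instances."""
    groups = defaultdict(list)
    for alert in alerts:
        key = (alert["namespace"], alert["alertname"])
        groups[key].append(alert)
    return dict(groups)

alerts = [
    {"namespace": "prod", "alertname": "CrashLoopBackOff", "pod": "api-1"},
    {"namespace": "prod", "alertname": "CrashLoopBackOff", "pod": "api-2"},
    {"namespace": "dev", "alertname": "HighMemory", "pod": "worker-1"},
]
grouped = group_alerts(alerts)
print(len(grouped))  # → 2: three raw alerts collapse into two notifications
```

Grouping by a stable key like this is what turns a storm of per-pod alerts into a single digestible message per underlying problem.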

AI-CloudOps
AI+CloudOps is a cloud-native operations management platform designed for enterprises. It aims to integrate artificial intelligence technology with cloud-native practices to significantly improve the efficiency and level of operations work. The platform offers features such as AIOps for monitoring data analysis and alerts, multi-dimensional permission management, visual CMDB for resource management, efficient ticketing system, deep integration with Prometheus for real-time monitoring, and unified Kubernetes management for cluster optimization.

serverless-pdf-chat
The serverless-pdf-chat repository contains a sample application that allows users to ask natural language questions of any PDF document they upload. It leverages serverless services like Amazon Bedrock, AWS Lambda, and Amazon DynamoDB to provide text generation and analysis capabilities. The application architecture involves uploading a PDF document to an S3 bucket, extracting metadata, converting the text to vectors, and using LangChain to search for information related to user prompts. The application is not intended for production use and serves as a demonstration and educational tool.
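The retrieval step in that architecture works roughly like the following sketch. It is a toy illustration using term-frequency vectors and cosine similarity, not the application's actual Bedrock embeddings or LangChain code:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Document chunks extracted from an uploaded PDF (hypothetical content).
chunks = [
    "the invoice total is due within thirty days",
    "shipping is handled by the logistics partner",
]
query = "when is the invoice due"
best = max(chunks, key=lambda c: cosine(embed(query), embed(c)))
print(best)  # → the invoice total is due within thirty days
```

The real application replaces `embed` with a model-generated dense embedding and stores the vectors so each question only searches, rather than re-reads, the document.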

generative-bi-using-rag
Generative BI using RAG on AWS is a comprehensive framework designed to enable Generative BI capabilities on customized data sources hosted on AWS. It offers features such as Text-to-SQL functionality for querying data sources using natural language, user-friendly interface for managing data sources, performance enhancement through historical question-answer ranking, and entity recognition. It also allows customization of business information, handling complex attribution analysis problems, and provides an intuitive question-answering UI with a conversational approach for complex queries.

azure-functions-openai-extension
Azure Functions OpenAI Extension is a project that adds support for OpenAI LLM (GPT-3.5-turbo, GPT-4) bindings in Azure Functions. It provides NuGet packages for various functionalities like text completions, chat completions, assistants, embeddings generators, and semantic search. The project requires .NET 6 SDK or greater, Azure Functions Core Tools v4.x, and specific settings in Azure Function or local settings for development. It offers features like text completions, chat completion, assistants with custom skills, embeddings generators for text relatedness, and semantic search using vector databases. The project also includes examples in C# and Python for different functionalities.

edge2ai-workshop
The edge2ai-workshop repository provides a hands-on workshop for building an IoT Predictive Maintenance workflow. It includes lab exercises for setting up components like NiFi, Streams Processing, Data Visualization, and more on a single host. The repository also covers use cases such as credit card fraud detection. Users can follow detailed instructions, prerequisites, and connectivity guidelines to connect to their cluster and explore various services. Additionally, troubleshooting tips are provided for common issues like MiNiFi not sending messages or CEM not picking up new NARs.

yu-picture
The 'yu-picture' project is an educational project offering complete video tutorials, text tutorials, resume guidance, interview question solutions, and Q&A support to help learners improve their project skills and strengthen their resumes. It is an enterprise-grade intelligent collaborative cloud image library platform built on Vue 3 + Spring Boot + COS + WebSocket. The platform supports a wide range of use cases, including public image uploading and retrieval, image analysis for administrators, private image management for individual users, and real-time collaborative image editing for enterprises. The project covers file management, content retrieval, permission control, and real-time collaboration, applying various programming concepts, architectural design methods, and optimization strategies to support rapid iteration and stable operation.