
cb-tumblebug
Cloud-Barista Multi-Cloud Infra Management Framework
Stars: 67

CB-Tumblebug (CB-TB) is a system for managing multi-cloud infrastructure composed of resources from multiple cloud service providers. It supports a wide range of cloud providers and resource types, with development and localization efforts ongoing. Users can deploy a multi-cloud infrastructure with GPUs, run multiple LLMs in parallel, and use the provided LLM-related scripts. Running CB-TB requires Linux, Docker, and Docker Compose; building from source additionally requires Golang. CB-TB can be run with Docker Compose or built via the Makefile, and the project welcomes contributions and lists its contributors. It is released under an open-source license.
README:
CB-Tumblebug (CB-TB) is an advanced multi-cloud infrastructure management system that enables seamless provisioning, management, and orchestration of resources across multiple cloud service providers. Part of the Cloud-Barista project, CB-TB abstracts the complexity of multi-cloud environments into a unified, intuitive interface.
- Multi-Cloud Orchestration: Manage AWS, Azure, GCP, Alibaba Cloud, and more from a single platform
- Auto-provisioning: Intelligent resource recommendations and automated deployment
- Secure Operations: Encrypted credential management and hybrid encryption protocols
- Visual Infrastructure Map: Interactive GUI for infrastructure visualization and management
- AI-Powered Management: NEW! Control infrastructure using natural language via our MCP Server
Supported Cloud Providers & Resources
Note: Reference only - functionality not guaranteed. Regular updates are made.
Kubernetes support is currently WIP with limited features available.
Development Status & Contributing Notes
CB-TB has not reached version 1.0 yet. We welcome any new suggestions, issues, opinions, and contributors! Please note that the functionalities of Cloud-Barista are not yet stable or secure. Be cautious if you plan to use the current release in a production environment. If you encounter any difficulties using Cloud-Barista, please let us know by opening an issue or joining the Cloud-Barista Slack.
As an open-source project initiated by Korean members, we aim to encourage participation from Korean contributors during the initial stages of this project. Therefore, the CB-TB repository will accept the use of the Korean language in its early stages. However, we hope this project will thrive regardless of contributors' countries in the long run. To facilitate this, the maintainers recommend using English at least for the titles of Issues, Pull Requests, and Commits, while accommodating local languages in the contents.
NEW: AI-Powered Multi-Cloud Management
- Control CB-Tumblebug through AI assistants like Claude and VS Code
- Natural language interface for infrastructure provisioning and management using MCP (Model Context Protocol)
- Streamable HTTP transport for modern MCP compatibility
- MCP Server Guide | Quick Start
GPU-Powered Multi-Cloud LLM Deployment
- Deploy GPU instances across multiple clouds for AI/ML workloads
- LLM Scripts & Examples
Contents:
- Quick Start
- Prerequisites
- Installation & Setup
- How to Use
- Development
- Contributing
Get CB-Tumblebug running in under 5 minutes:
# 1. Automated setup (recommended for new users)
curl -sSL https://raw.githubusercontent.com/cloud-barista/cb-tumblebug/main/scripts/set-tb.sh | bash
# 2. Start all services
cd ~/go/src/github.com/cloud-barista/cb-tumblebug
make compose
# 3. Configure credentials (see detailed setup below)
./init/genCredential.sh
# Edit ~/.cloud-barista/credentials.yaml with your cloud credentials
./init/encCredential.sh
./init/init.sh
# 4. Access services
# - API: http://localhost:1323/tumblebug/api
# - MapUI: http://localhost:1324
# - MCP Server: http://localhost:8000/mcp (if enabled)
New to CB-Tumblebug? Follow the detailed setup guide below for comprehensive instructions.
Component | Minimum Specification | Recommended |
---|---|---|
OS | Linux (Ubuntu 22.04+) | Ubuntu 22.04 LTS |
CPU | 4 cores | 8+ cores |
Memory | 6 GiB | 16+ GiB |
Storage | 20 GiB free space | 50+ GiB SSD |
Example | AWS c5a.xlarge | AWS c5a.2xlarge |

Performance Note: Lower specifications may cause initialization failures or performance degradation.
- Docker & Docker Compose (latest stable)
- Go 1.23.0+ (for building from source)
- Git (for cloning repository)
- View Dependencies
- Software Bill of Materials (SBOM)
For new users on clean Linux systems:
# Download and run automated setup script
curl -sSL https://raw.githubusercontent.com/cloud-barista/cb-tumblebug/main/scripts/set-tb.sh | bash
Post-installation: After the script finishes, log out and back in to activate Docker permissions and aliases. If you'd prefer to install dependencies and clone the repository manually, follow the steps below.
Clone the CB-Tumblebug repository:
git clone https://github.com/cloud-barista/cb-tumblebug.git $HOME/go/src/github.com/cloud-barista/cb-tumblebug
cd ~/go/src/github.com/cloud-barista/cb-tumblebug
Optionally, you can register aliases for the CB-Tumblebug directory to simplify navigation:
echo "alias cdtb='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug'" >> ~/.bashrc
echo "alias cdtbsrc='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src'" >> ~/.bashrc
echo "alias cdtbtest='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src/testclient/scripts'" >> ~/.bashrc
source ~/.bashrc
Check Docker Compose Installation:
Ensure that Docker Engine and Docker Compose are installed on your system. If not, you can use the following script to install them (note: this script is not intended for production environments):
# download and install docker with docker compose
curl -sSL get.docker.com | sh
# optional: add the current user to the docker group
sudo groupadd docker
sudo usermod -aG docker ${USER}
newgrp docker
# test that docker works
docker run hello-world
Start All Components Using Docker Compose:
To run all components, use the following command:
cd ~/go/src/github.com/cloud-barista/cb-tumblebug
docker compose up
This command will start all components as defined in the preconfigured docker-compose.yaml file. For configuration customization, please refer to the guide.
The following components will be started:
- ETCD: CB-Tumblebug KeyValue DB
- CB-Spider: a Cloud API controller
- CB-MapUI: a simple Map-based GUI web server
- CB-Tumblebug: the system with API server
- CB-Tumblebug MCP Server: AI assistant interface (if enabled)
- PostgreSQL: Specs and Images storage
- Traefik: Reverse proxy for secure access
Container Architecture Overview:
graph TB
  subgraph "External Access"
    User[User]
    AI[AI Assistant<br/>Claude/VS Code]
  end
  subgraph "Docker Compose Environment"
    subgraph "Frontend & Interfaces"
      UI[CB-MapUI<br/>:1324]
      MCP[TB-MCP Server<br/>:8000]
      Proxy[Traefik Proxy<br/>:80/:443]
    end
    subgraph "Backend Services"
      TB[CB-Tumblebug<br/>:1323<br/>Multi-Cloud Management]
      Spider[CB-Spider<br/>:1024<br/>Cloud API Abstraction]
      ETCD[ETCD<br/>:2379<br/>Metadata Store]
      PG[PostgreSQL<br/>:5432<br/>Specs/Images DB]
    end
  end
  subgraph "Cloud Providers"
    AWS[AWS]
    Azure[Azure]
    GCP[GCP]
    Others[Others...]
  end

  %% User connections
  User -->|HTTP/HTTPS| Proxy
  User -->|HTTP| UI
  User -->|HTTP| TB
  AI -->|MCP HTTP| MCP

  %% Proxy routing
  Proxy -->|Route| UI

  %% Internal service connections
  UI -.->|API calls| TB
  MCP -->|REST API| TB
  TB -->|REST API| Spider
  TB -->|gRPC| ETCD
  TB -->|SQL| PG

  %% Cloud connections
  Spider -->|Cloud APIs| AWS
  Spider -->|Cloud APIs| Azure
  Spider -->|Cloud APIs| GCP
  Spider -->|Cloud APIs| Others

  %% Styling
  classDef frontend fill:#e3f2fd,stroke:#1976d2
  classDef backend fill:#f3e5f5,stroke:#7b1fa2
  classDef storage fill:#e8f5e8,stroke:#388e3c
  classDef cloud fill:#fff3e0,stroke:#f57c00
  class UI,MCP,Proxy frontend
  class TB,Spider,ETCD,PG backend
  class AWS,Azure,GCP,Others cloud
After running the command, the services start and expose the endpoints listed below.
Service Endpoints:
- CB-Tumblebug API: http://localhost:1323/tumblebug/api
- CB-MapUI: http://localhost:1324 (direct) or https://cb-mapui.localhost (via Traefik with SSL)
- MCP Server: http://localhost:8000/mcp (if enabled)
- Traefik Dashboard: http://localhost:8080 (reverse proxy monitoring)
Note: Before using CB-Tumblebug, you need to initialize it.
To provision multi-cloud infrastructures with CB-TB, you first need to register the connection information (credentials) for each cloud, as well as commonly used images and specifications.
Create the credentials.yaml file and input your cloud credentials
- Overview
  - credentials.yaml is a file that includes multiple credentials to use the APIs of clouds supported by CB-TB (AWS, GCP, AZURE, ALIBABA, etc.)
  - It should be located in the ~/.cloud-barista/ directory and securely managed.
  - Refer to template.credentials.yaml for the template
- Create the credentials.yaml file
  - Automatically generate the credentials.yaml file in the ~/.cloud-barista/ directory using the CB-TB script:
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./init/genCredential.sh
- Input credential data
  - Put credential data into ~/.cloud-barista/credentials.yaml (Reference: How to obtain a credential for each CSP)
    ### Cloud credentials for credential holders (default: admin)
    credentialholder:
      admin:
        alibaba:
          # ClientId(ClientId): client ID of the EIAM application
          # Example: app_mkv7rgt4d7i4u7zqtzev2mxxxx
          ClientId:
          # ClientSecret(ClientSecret): client secret of the EIAM application
          # Example: CSEHDcHcrUKHw1CuxkJEHPveWRXBGqVqRsxxxx
          ClientSecret:
        aws:
          # ClientId(aws_access_key_id)
          # ex: AKIASSSSSSSSSSS56DJH
          ClientId:
          # ClientSecret(aws_secret_access_key)
          # ex: jrcy9y0Psejjfeosifj3/yxYcgadklwihjdljMIQ0
          ClientSecret:
        ...
Encrypt credentials.yaml into credentials.yaml.enc
- To protect sensitive information, credentials.yaml is not used directly. Instead, it must be encrypted using encCredential.sh. The encrypted file credentials.yaml.enc is then used by init.py. This approach ensures that sensitive credentials are not stored in plain text.
- Encrypting credentials:
  init/encCredential.sh
  When executing the script, you have two options: 1) enter your own password, or 2) let the system generate a random passkey.
  - Option 1: Enter your own password.
  - Option 2: Let the system generate a random passkey, which MUST be securely stored in a safe location.
- If you need to update your credentials, decrypt the encrypted file using decCredential.sh, make the necessary changes to credentials.yaml, and then re-encrypt it.
(INIT) Register all multi-cloud connection information and common resources
- How to register
  - Refer to the README.md for init.py, and execute the init.py script (enter 'y' at the confirmation prompts):
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./init/init.sh
  - The credentials in ~/.cloud-barista/credentials.yaml.enc (the encrypted form of credentials.yaml) will be automatically registered, along with all CSP and region information recorded in cloudinfo.yaml.
    - Note: You can check the latest regions and zones of each CSP using update-cloudinfo.py and review the file for updates (contributions to updates are welcome).
  - Common images and specifications recorded in the cloudimage.csv and cloudspec.csv files in the assets directory will be automatically registered.
  - init.py applies hybrid encryption for secure transmission of credentials:
    - Retrieve RSA public key: use the /credential/publicKey API to get the public key.
    - Encrypt credentials: encrypt the credentials with a randomly generated AES key, then encrypt the AES key with the RSA public key.
    - Transmit encrypted data: send the encrypted credentials and encrypted AES key to the server. The server decrypts the AES key and uses it to decrypt the credentials.
  - This method ensures your credentials are securely transmitted and protected during registration. See init.py for a Python implementation (a minimal sketch is shown below), and check out the Secure Credential Registration Guide (How to use the credential APIs) for details.
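The hybrid flow described above can be sketched in Python roughly as follows. This is a minimal illustration only: the Basic Auth pair, the JSON field names, and the final registration endpoint are assumptions, and only the /credential/publicKey path comes from this README; init.py remains the authoritative implementation.

# Minimal sketch of the hybrid credential encryption flow (see assumptions above).
import base64
import os

import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

TB = "http://localhost:1323/tumblebug"
AUTH = ("username", "password")  # placeholder Basic Auth pair; use your own

# 1. Retrieve the RSA public key (response field name is an assumption)
pem = requests.get(f"{TB}/credential/publicKey", auth=AUTH).json()["publicKey"]
rsa_key = serialization.load_pem_public_key(pem.encode())

# 2. Encrypt the credential data with a randomly generated AES key
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
with open("credentials.yaml", "rb") as f:  # illustrative input file
    ciphertext = AESGCM(aes_key).encrypt(nonce, f.read(), None)

# 3. Encrypt (wrap) the AES key with the RSA public key
wrapped_key = rsa_key.encrypt(
    aes_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# 4. Transmit both to the server; payload shape and target endpoint are assumptions
payload = {
    "encryptedCredentials": base64.b64encode(nonce + ciphertext).decode(),
    "encryptedAesKey": base64.b64encode(wrapped_key).decode(),
}
# requests.post(f"{TB}/credential", json=payload, auth=AUTH)  # assumed endpoint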
Shutting down CB-TB and related components
- Stop all containers with Ctrl+C or by running sudo docker compose stop / sudo docker compose down. (When a shutdown event occurs, CB-TB shuts down gracefully: API requests that can be processed within 10 seconds will be completed.)
- In case cleanup is needed due to internal system errors:
  - Check and delete resources created through CB-TB
  - Delete CB-TB & CB-Spider metadata using the provided script:
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./init/cleanDB.sh
Upgrading the CB-TB & CB-Spider versions
The following cleanup steps are unnecessary if you clearly understand the impact of the upgrade:
- Check and delete resources created through CB-TB
- Delete CB-TB & CB-Spider metadata:
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug
  ./init/cleanDB.sh
- Restart with the upgraded version
- Using CB-TB MCP Server (AI Assistant Interface) (NEW!)
- Using CB-TB MapUI (recommended)
- Using CB-TB REST API (recommended)
NEW: Control CB-Tumblebug with AI assistants like Claude!
The Model Context Protocol (MCP) Server enables natural language interaction with CB-Tumblebug through AI assistants:
- AI-Powered Infrastructure Management: Deploy and manage multi-cloud resources using natural language commands
- Seamless Integration: Works with Claude Desktop (via proxy), VS Code (direct), and other MCP-compatible clients
- Modern Protocol: Uses Streamable HTTP transport (current MCP standard)
- Quick Start: Enable with make compose and uncomment the MCP service in docker-compose.yaml
# Enable MCP Server (Proof of Concept)
# 1. Uncomment cb-tumblebug-mcp-server in docker-compose.yaml
# 2. Launch with Docker Compose
make compose
# Access MCP server at http://localhost:8000/mcp
Complete MCP Server Guide
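For programmatic clients, a connection to this endpoint could be sketched with the MCP Python SDK roughly as follows; the module and call names follow the SDK at the time of writing and should be treated as assumptions to verify against the SDK documentation, while the URL is the endpoint listed above.

# Minimal sketch: list the tools exposed by the CB-TB MCP server (assumptions above).
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Streamable HTTP transport, matching the endpoint exposed by docker compose
    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())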
Visual Infrastructure Management with Interactive Maps
CB-MapUI provides an intuitive, map-based interface for managing multi-cloud infrastructure:
- Geographic Visualization: See your infrastructure deployed across the globe
- Real-time Monitoring: Monitor resource status and performance
- Interactive Control: Create, manage, and control resources visually
- Multi-Cloud View: Unified view across all cloud providers
# Access CB-MapUI (auto-started with Docker Compose)
open http://localhost:1324
# Or run standalone MapUI container
./scripts/runMapUI.sh
Features:
- Drag-and-drop resource creation
- Real-time infrastructure mapping
- Cross-cloud resource relationships
- Performance metrics overlay
Learn More: CB-MapUI Repository
Programmatic Multi-Cloud Infrastructure Management
CB-Tumblebug provides a comprehensive REST API for automated infrastructure management:
API Dashboard & Documentation
- Interactive API Explorer: http://localhost:1323/tumblebug/api
- Live Documentation:
Authentication: CB-TB uses Basic Authentication (development phase - not production-ready):
# Include base64 encoded credentials in request headers
Authorization: Basic <base64(username:password)>
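For example, an authenticated call in Python might look like the sketch below; the username/password pair is a placeholder, and the namespace-listing route is an assumption based on the API base URL above (check the Swagger documentation at /tumblebug/api for the exact routes).

# Minimal sketch: call the CB-TB REST API with Basic Authentication (assumptions above).
import requests

resp = requests.get(
    "http://localhost:1323/tumblebug/ns",  # assumed route: list namespaces
    auth=("username", "password"),         # requests encodes this as the Basic header
)
resp.raise_for_status()
print(resp.json())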
Quick Infrastructure Creation, following the Quick MCI Creation Guide:
# 1. Create VM specification
curl -X POST "http://localhost:1323/tumblebug/ns/default/resources/spec" \
-H "Authorization: Basic <credentials>" \
-d '{"name": "web-spec", "connectionName": "aws-ap-northeast-2"}'
# 2. Create VM image
curl -X POST "http://localhost:1323/tumblebug/ns/default/resources/image" \
-H "Authorization: Basic <credentials>" \
-d '{"name": "ubuntu-image", "connectionName": "aws-ap-northeast-2"}'
# 3. Create Multi-Cloud Infrastructure
curl -X POST "http://localhost:1323/tumblebug/ns/default/mci" \
-H "Authorization: Basic <credentials>" \
-d @mci-config.json
Core API Categories
- Infrastructure Resources: VM specs, images, networks, security groups
- Multi-Cloud Infrastructure (MCI): Provision and manage distributed infrastructure
- Monitoring & Control: Performance metrics, scaling, lifecycle management
- Credentials & Connections: Secure cloud provider configuration
- Create access key object
- Create, view, control, execute remote commands, shut down, and delete MCI using the MCI (multi-cloud infrastructure) management APIs
- CB-TB optimal and dynamic provisioning
Set up the required tools
- Install git, gcc, and make:
  sudo apt update
  sudo apt install make gcc git
- Install Golang:
  - Check https://golang.org/dl/ and set up Go
  - Download:
    wget https://go.dev/dl/go1.23.0.linux-amd64.tar.gz
    sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz
  - Set up the environment:
    echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
    echo 'export GOPATH=$HOME/go' >> ~/.bashrc
    source ~/.bashrc
    echo $GOPATH
    go env
    go version
Run Docker Compose with the build option
To build the current CB-Tumblebug source code into a container image and run it along with the other containers, use the following command:
cd ~/go/src/github.com/cloud-barista/cb-tumblebug
sudo DOCKER_BUILDKIT=1 docker compose up --build
This command will automatically build CB-Tumblebug from the local source code and start it within a Docker container, along with any other necessary services as defined in the docker-compose.yaml file. The DOCKER_BUILDKIT=1 setting speeds up the build by using the Go build cache.
Build the Golang source code using the Makefile:
cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
make
All dependencies will be downloaded automatically by Go.
The initial build will take some time, but subsequent builds will be faster thanks to the Go build cache.
Note: To update the Swagger API documentation, run make swag
- The API documentation file will be generated at cb-tumblebug/src/interface/rest/docs/swagger.yaml
- The API documentation can be viewed in a web browser at http://localhost:1323/tumblebug/api (provided when CB-TB is running)
- Detailed information on how to update the API
Set the environment variables required to run CB-TB (in another tab):
- Check and configure the contents of cb-tumblebug/conf/setup.env (CB-TB environment variables; modify as needed)
- Apply the environment variables to the system:
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug
  source conf/setup.env
- (Optional) Automatically set the TB_SELF_ENDPOINT environment variable (an externally accessible address) using a script if needed
  - This is necessary if you want to access and control the Swagger API Dashboard from outside while CB-TB is running:
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    source ./scripts/setPublicIP.sh
Execute the built cb-tumblebug binary with make run:
cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
make run
CB-TB welcomes improvements from both new and experienced contributors!
Check out CONTRIBUTING.
Thanks goes to these wonderful people (emoji key):
Similar Open Source Tools
genkit-plugins
Community plugins repository for Google Firebase Genkit, containing various plugins for AI APIs and Vector Stores. Developed by The Fire Company, this repository offers plugins like genkitx-anthropic, genkitx-cohere, genkitx-groq, genkitx-mistral, genkitx-openai, genkitx-convex, and genkitx-hnsw. Users can easily install and use these plugins in their projects, with examples provided in the documentation. The repository also showcases products like Fireview and Giftit built using these plugins, and welcomes contributions from the community.
local-deep-research
Local Deep Research is a powerful AI-powered research assistant that performs deep, iterative analysis using multiple LLMs and web searches. It can be run locally for privacy or configured to use cloud-based LLMs for enhanced capabilities. The tool offers advanced research capabilities, flexible LLM support, rich output options, privacy-focused operation, enhanced search integration, and academic & scientific integration. It also provides a web interface, command line interface, and supports multiple LLM providers and search engines. Users can configure AI models, search engines, and research parameters for customized research experiences.
robustmq
RobustMQ is a next-generation, high-performance, multi-protocol message queue built in Rust. It aims to create a unified messaging infrastructure tailored for modern cloud-native and AI systems. With features like high performance, distributed architecture, multi-protocol support, pluggable storage, cloud-native readiness, multi-tenancy, security features, observability, and user-friendliness, RobustMQ is designed to be production-ready and become a top-level Apache project in the message queue ecosystem by the second half of 2025.
inspector
A developer tool for testing and debugging Model Context Protocol (MCP) servers. It allows users to test the compliance of their MCP servers with the latest MCP specs, supports various transports like STDIO, SSE, and Streamable HTTP, features an LLM Playground for testing server behavior against different models, provides comprehensive logging and error reporting for MCP server development, and offers a modern developer experience with multiple server connections and saved configurations. The tool is built using Next.js and integrates MCP capabilities, AI SDKs from OpenAI, Anthropic, and Ollama, and various technologies like Node.js, TypeScript, and Next.js.
smriti-ai
Smriti AI is an intelligent learning assistant that helps users organize, understand, and retain study materials. It transforms passive content into active learning tools by capturing resources, converting them into summaries and quizzes, providing spaced revision with reminders, tracking progress, and offering a multimodal interface. Suitable for students, self-learners, professionals, educators, and coaching institutes.
tappas
Hailo TAPPAS is a set of full application examples that implement pipeline elements and pre-trained AI tasks. It demonstrates Hailo's system integration scenarios on predefined systems, aiming to accelerate time to market, simplify integration with Hailo's runtime SW stack, and provide a starting point for customers to fine-tune their applications. The tool supports both Hailo-15 and Hailo-8, offering various example applications optimized for different common hosts. TAPPAS includes pipelines for single network, two network, and multi-stream processing, as well as high-resolution processing via tiling. It also provides example use case pipelines like License Plate Recognition and Multi-Person Multi-Camera Tracking. The tool is regularly updated with new features, bug fixes, and platform support.
Automodel
Automodel is a Python library for automating the process of building and evaluating machine learning models. It provides a set of tools and utilities to streamline the model development workflow, from data preprocessing to model selection and evaluation. With Automodel, users can easily experiment with different algorithms, hyperparameters, and feature engineering techniques to find the best model for their dataset. The library is designed to be user-friendly and customizable, allowing users to define their own pipelines and workflows. Automodel is suitable for data scientists, machine learning engineers, and anyone looking to quickly build and test machine learning models without the need for manual intervention.
db2rest
DB2Rest is a modern low-code REST DATA API platform that simplifies the development of intelligent applications. It seamlessly integrates existing and new databases with language models (LMs/LLMs) and vector stores, enabling the rapid delivery of context-aware, reasoning applications without vendor lock-in.
Awesome-Lists
Awesome-Lists is a curated list of awesome lists across various domains of computer science and beyond, including programming languages, web development, data science, and more. It provides a comprehensive index of articles, books, courses, open source projects, and other resources. The lists are organized by topic and subtopic, making it easy to find the information you need. Awesome-Lists is a valuable resource for anyone looking to learn more about a particular topic or to stay up-to-date on the latest developments in the field.
Awesome-Lists-and-CheatSheets
Awesome-Lists is a curated index of selected resources spanning various fields including programming languages and theories, web and frontend development, server-side development and infrastructure, cloud computing and big data, data science and artificial intelligence, product design, etc. It includes articles, books, courses, examples, open-source projects, and more. The repository categorizes resources according to the knowledge system of different domains, aiming to provide valuable and concise material indexes for readers. Users can explore and learn from a wide range of high-quality resources in a systematic way.
pennywiseai-tracker
PennyWise AI Tracker is a free and open-source expense tracker that uses on-device AI to turn bank SMS into a clean and searchable money timeline. It offers smart SMS parsing, clear insights, subscription tracking, on-device AI assistant, auto-categorization, data export, and supports major Indian banks. All processing happens on the user's device for privacy. The tool is designed for Android users in India who want automatic expense tracking from bank SMS, with clean categories, subscription detection, and clear insights.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. Key features: it is self-contained, with no need for a DBMS or cloud service; it exposes an OpenAPI interface that is easy to integrate with existing infrastructure (e.g., a cloud IDE); and it supports consumer-grade GPUs.
AIPex
AIPex is a revolutionary Chrome extension that transforms your browser into an intelligent automation platform. Using natural language commands and AI-powered intelligence, AIPex can automate virtually any browser task - from complex multi-step workflows to simple repetitive actions. It offers features like natural language control, AI-powered intelligence, multi-step automation, universal compatibility, smart data extraction, precision actions, form automation, visual understanding, developer-friendly with extensive API, and lightning-fast execution of automation tasks.
R2R
R2R (RAG to Riches) is a fast and efficient framework for serving high-quality Retrieval-Augmented Generation (RAG) to end users. The framework is designed with customizable pipelines and a feature-rich FastAPI implementation, enabling developers to quickly deploy and scale RAG-based applications. R2R was conceived to bridge the gap between local LLM experimentation and scalable production solutions. R2R is to LangChain/LlamaIndex what NextJS is to React. A JavaScript client for R2R deployments can be found here. Key features: Deploy: instantly launch production-ready RAG pipelines with streaming capabilities; Customize: tailor your pipeline with intuitive configuration files; Extend: enhance your pipeline with custom code integrations; Autoscale: scale your pipeline effortlessly in the cloud using SciPhi; OSS: benefit from a framework developed by the open-source community, designed to simplify RAG deployment.
neuropilot
NeuroPilot is an open-source AI-powered education platform that transforms study materials into interactive learning resources. It provides tools like contextual chat, smart notes, flashcards, quizzes, and AI podcasts. Supported by various AI models and embedding providers, it offers features like WebSocket streaming, JSON or vector database support, file-based storage, and configurable multi-provider setup for LLMs and TTS engines. The technology stack includes Node.js, TypeScript, Vite, React, TailwindCSS, JSON database, multiple LLM providers, and Docker for deployment. Users can contribute to the project by integrating AI models, adding mobile app support, improving performance, enhancing accessibility features, and creating documentation and tutorials.
For similar tasks
pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently** , resolving issues such as the challenging deployment of AI applications and open-source models. Developers are able to write applications in familiar programming languages like **Python and TypeScript** , **directly defining and utilizing the cloud resources necessary for the application within their code base** , such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying the resources creation and application deployment process**.
dataengineering-roadmap
A repository providing basic concepts, technical challenges, and resources on data engineering in Spanish. It is a curated list of free, Spanish-language materials found on the internet to facilitate the study of data engineering enthusiasts. The repository covers programming fundamentals, programming languages like Python, version control with Git, database fundamentals, SQL, design concepts, Big Data, analytics, cloud computing, data processing, and job search tips in the IT field.
awesome-hosting
awesome-hosting is a curated list of hosting services sorted by minimal plan price. It includes various categories such as Web Services Platform, Backend-as-a-Service, Lambda, Node.js, Static site hosting, WordPress hosting, VPS providers, managed databases, GPU cloud services, and LLM/Inference API providers. Each category lists multiple service providers along with details on their minimal plan, trial options, free tier availability, open-source support, and specific features. The repository aims to help users find suitable hosting solutions based on their budget and requirements.
ubicloud
Ubicloud is an open source cloud platform that provides Infrastructure as a Service (IaaS) features on bare metal providers like Hetzner, Leaseweb, and AWS Bare Metal. Users can either set it up themselves on these providers or use the managed service offered by Ubicloud. The platform allows users to cloudify bare metal Linux machines, provision and manage cloud resources, and offers an open source alternative to traditional cloud providers, reducing costs and returning control of infrastructure to the users.
azhpc-images
This repository contains scripts for installing HPC and AI libraries and tools to build Azure HPC/AI images. It streamlines the process of provisioning compute-intensive workloads and crafting advanced AI models in the cloud, ensuring efficiency and reliability in deployments.
For similar jobs
robusta
Robusta is a tool designed to enhance Prometheus notifications for Kubernetes environments. It offers features such as smart grouping to reduce notification spam, AI investigation for alert analysis, alert enrichment with additional data like pod logs, self-healing capabilities for defining auto-remediation rules, advanced routing options, problem detection without PromQL, change-tracking for Kubernetes resources, auto-resolve functionality, and integration with various external systems like Slack, Teams, and Jira. Users can utilize Robusta with or without Prometheus, and it can be installed alongside existing Prometheus setups or as part of an all-in-one Kubernetes observability stack.
AI-CloudOps
AI+CloudOps is a cloud-native operations management platform designed for enterprises. It aims to integrate artificial intelligence technology with cloud-native practices to significantly improve the efficiency and level of operations work. The platform offers features such as AIOps for monitoring data analysis and alerts, multi-dimensional permission management, visual CMDB for resource management, efficient ticketing system, deep integration with Prometheus for real-time monitoring, and unified Kubernetes management for cluster optimization.
serverless-pdf-chat
The serverless-pdf-chat repository contains a sample application that allows users to ask natural language questions of any PDF document they upload. It leverages serverless services like Amazon Bedrock, AWS Lambda, and Amazon DynamoDB to provide text generation and analysis capabilities. The application architecture involves uploading a PDF document to an S3 bucket, extracting metadata, converting text to vectors, and using a LangChain to search for information related to user prompts. The application is not intended for production use and serves as a demonstration and educational tool.
generative-bi-using-rag
Generative BI using RAG on AWS is a comprehensive framework designed to enable Generative BI capabilities on customized data sources hosted on AWS. It offers features such as Text-to-SQL functionality for querying data sources using natural language, user-friendly interface for managing data sources, performance enhancement through historical question-answer ranking, and entity recognition. It also allows customization of business information, handling complex attribution analysis problems, and provides an intuitive question-answering UI with a conversational approach for complex queries.
azure-functions-openai-extension
Azure Functions OpenAI Extension is a project that adds support for OpenAI LLM (GPT-3.5-turbo, GPT-4) bindings in Azure Functions. It provides NuGet packages for various functionalities like text completions, chat completions, assistants, embeddings generators, and semantic search. The project requires .NET 6 SDK or greater, Azure Functions Core Tools v4.x, and specific settings in Azure Function or local settings for development. It offers features like text completions, chat completion, assistants with custom skills, embeddings generators for text relatedness, and semantic search using vector databases. The project also includes examples in C# and Python for different functionalities.
edge2ai-workshop
The edge2ai-workshop repository provides a hands-on workshop for building an IoT Predictive Maintenance workflow. It includes lab exercises for setting up components like NiFi, Streams Processing, Data Visualization, and more on a single host. The repository also covers use cases such as credit card fraud detection. Users can follow detailed instructions, prerequisites, and connectivity guidelines to connect to their cluster and explore various services. Additionally, troubleshooting tips are provided for common issues like MiNiFi not sending messages or CEM not picking up new NARs.
yu-picture
The 'yu-picture' project is an educational project that provides complete video tutorials, text tutorials, resume writing, interview question solutions, and Q&A services to help you improve your project skills and enhance your resume. It is an enterprise-level intelligent collaborative cloud image library platform based on Vue 3 + Spring Boot + COS + WebSocket. The platform has a wide range of applications, including public image uploading and retrieval, image analysis for administrators, private image management for individual users, and real-time collaborative image editing for enterprises. The project covers file management, content retrieval, permission control, and real-time collaboration, using various programming concepts, architectural design methods, and optimization strategies to ensure high-speed iteration and stable operation.