
miner-release
Stable Diffusion and LLM miner for Heurist

Heurist Miner is a tool that allows users to contribute their GPU for AI inference tasks on the Heurist network. It supports dual mining capabilities for image generation models and Large Language Models, offers flexible setup on Windows or Linux with multiple GPUs, ensures secure rewards through a dual-wallet system, and is fully open source. Users can earn rewards by hosting AI models and supporting applications in the Heurist ecosystem.
README:
- Introduction
- System Requirements
- Quick Start Guide
- Detailed Setup Instructions
- Advanced Configuration
- Troubleshooting
- FAQ
- Support and Community
Welcome to Heurist Miner, your gateway to decentralized generative AI. Whether you have a high-end gaming PC with an NVIDIA GPU or you're a datacenter owner ready to explore the world of AI and cryptocurrency, this guide will help you get started on an exciting journey!
Heurist Miner allows you to contribute your GPU to perform AI inference tasks on the Heurist network. By running this miner, you'll earn rewards for hosting AI models and supporting various applications in the Heurist ecosystem.
- 🖼️ Dual Mining Capabilities: Support for both image generation models and Large Language Models.
- 🖥️ Flexible Setup: Run on Windows or Linux, with support for multiple GPUs.
- 🔐 Secure Rewards: Utilizes a dual-wallet system for enhanced security.
- 🌐 Open Source: The code is fully open and transparent. Download and run with ease.
Before you begin, ensure your system meets the following requirements:
- GPU: NVIDIA GPU with at least 12GB VRAM (24GB+ recommended for optimal performance)
- CPU: Multi-core processor (4+ cores recommended)
- RAM: 16GB+ system RAM
- Storage: At least 50GB free space (NVMe recommended for faster model loading)
- Operating System:
- Windows 10/11 (64-bit)
- Linux (Ubuntu 20.04 LTS or later recommended)
- CUDA: Version 12.1 or 12.2
- Python: Version 3.10 or 3.11
- Git: For cloning the repository
- Stable internet connection (100 Mbps+ recommended)
- Ability to access HuggingFace and GitHub repositories
- Some models (especially larger LLMs) may require more VRAM. Check the model-specific requirements in the detailed setup sections.
- Ensure your system is up-to-date with the latest NVIDIA GPU drivers.
- Stable Diffusion models need at least 8-10GB VRAM, while LLMs can require 16GB to 40GB+ depending on the model size.
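Not sure how much VRAM your card has? A quick query with the standard nvidia-smi tool (no miner-specific tooling needed) will tell you:
# List each GPU's name and total VRAM
nvidia-smi --query-gpu=name,memory.total --format=csv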
For experienced users, here's a quick overview to get you mining:
- Clone the Repository
git clone https://github.com/heurist-network/miner-release.git
cd miner-release
- Set Up Environment
- Install Miniconda (if not already installed)
- Create and activate a new conda environment:
conda create --name heurist-miner python=3.11
conda activate heurist-miner
- Install Dependencies
pip install -r requirements.txt
- Configure Miner ID
- Create a .env file in the root directory
- Add your Ethereum wallet address:
MINER_ID_0=0xYourWalletAddressHere
Follow the "Multiple GPU Configuration" section if you have multiple GPUs.
- Choose Your Miner
- For Stable Diffusion:
python sd-miner.py
- For LLM:
./llm-miner-starter.sh <model_id>
For detailed instructions, troubleshooting, and advanced configuration, please refer to the sections below.
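For copy-paste convenience, the entire Linux quick start condenses to the following (the wallet address is a placeholder you must replace):
# Clone, set up the environment, configure a miner ID, and start the SD miner
git clone https://github.com/heurist-network/miner-release.git
cd miner-release
conda create -y --name heurist-miner python=3.11
conda activate heurist-miner
pip install -r requirements.txt
echo "MINER_ID_0=0xYourWalletAddressHere" > .env
python sd-miner.py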
For users who prefer Docker (the current version only supports the FLUX model), follow these steps:
- Build the Docker Image
docker build -t heurist-miner:latest .
- Run the Docker Container
Single GPU:
sudo docker run -d --gpus all \
-e MINER_ID_0=0xYourWalletAddressHere \
-e LOG_LEVEL=INFO \
-v $HOME/.cache/heurist:/app/.cache/heurist \
heurist-miner:latest
Replace 0xYourWalletAddressHere with your wallet address to receive rewards.
Multiple GPUs:
sudo docker run -d --gpus all \
-e MINER_ID_0=0xYourFirstWalletAddressHere \
-e MINER_ID_1=0xYourSecondWalletAddressHere \
-e MINER_ID_2=0xYourThirdWalletAddressHere \
-e LOG_LEVEL=INFO \
-v $HOME/.cache/heurist:/app/.cache/heurist \
heurist-miner:latest
Replace 0xYourFirstWalletAddressHere, 0xYourSecondWalletAddressHere, and 0xYourThirdWalletAddressHere with your actual wallet addresses.
This command:
- Runs the container in detached mode (-d)
- Allows access to all GPUs (--gpus all)
- Sets environment variables for miner IDs and log level
- Mounts a volume for persistent cache storage
- Uses the image we just built (heurist-miner:latest)
Note: Ensure you have the NVIDIA Container Toolkit installed for GPU support in Docker.
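If you're unsure whether Docker can see your GPUs, a quick sanity check is to run nvidia-smi inside a CUDA base container (the image tag below is an example; any CUDA-tagged image you have works):
# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi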
Heurist Miner uses a dual-wallet system for security and reward distribution:
- Identity Wallet: Used for authentication, stored locally. Do not store funds here.
- Reward Wallet (Miner ID): Receives points, Heurist Token rewards, potential ecosystem benefits.
- Create a .env file in the root directory of your miner installation.
- Add your Ethereum wallet address(es) as Miner ID(s):
MINER_ID_0=0xYourFirstWalletAddressHere
MINER_ID_1=0xYourSecondWalletAddressHere
- (Optional) Add custom tags for tracking:
MINER_ID_0=0xYourFirstWalletAddressHere-GamingPC4090
MINER_ID_1=0xYourSecondWalletAddressHere-GoogleCloudT4
- Generate or import identity wallets:
python3 ./auth/generator.py
Follow the prompts to create new wallets or import existing ones.
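As a shortcut, you can write the .env file directly from the shell (the addresses and tags below are placeholders):
# Create .env with two tagged miner IDs in the miner root directory
cat > .env <<'EOF'
MINER_ID_0=0xYourFirstWalletAddressHere-GamingPC4090
MINER_ID_1=0xYourSecondWalletAddressHere-GoogleCloudT4
EOF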
- Install Miniconda:
- Download from the Miniconda website
- Choose the latest Windows 64-bit version for Python 3.11
- Create Conda Environment:
conda create --name heurist-miner python=3.11
conda activate heurist-miner
- Install CUDA Toolkit:
- Download CUDA 12.1 from the NVIDIA website
- Follow the installation prompts
- Install PyTorch with GPU Support:
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
- Clone Miner Repository and Install Dependencies:
git clone https://github.com/heurist-network/miner-release
cd miner-release
pip install -r requirements.txt
- Run the Miner:
python3 sd-miner.py
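Before starting the miner on either OS, it's worth confirming that PyTorch can actually see your GPU; this one-liner uses only standard PyTorch calls:
# Prints True and the number of visible CUDA devices if the install is healthy
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"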
- Update GPU Drivers (if necessary):
sudo apt update
sudo ubuntu-drivers autoinstall
- Install Miniconda:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
- Create Conda Environment:
conda create --name heurist-miner python=3.11
conda activate heurist-miner
- Install CUDA Toolkit:
- Follow the instructions in the NVIDIA CUDA Installation Guide
- Install PyTorch with GPU Support:
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
- Clone Miner Repository and Install Dependencies:
git clone https://github.com/heurist-network/miner-release
cd miner-release
pip install -r requirements.txt
- Run the Miner:
python3 sd-miner.py
- Ensure CUDA Driver is Installed:
- Check with nvidia-smi
- Select a Model ID:
- Choose based on your GPU's VRAM capacity
- Example models:
- dolphin-2.9-llama3-8b (24GB VRAM)
- openhermes-mixtral-8x7b-gptq (40GB VRAM)
- Run the Setup Script:
chmod +x llm-miner-starter.sh
./llm-miner-starter.sh <model_id> --miner-id-index 0 --port 8000 --gpu-ids 0
Options:
- --miner-id-index: Index of the miner_id in .env (default: 0)
- --port: Port for the vLLM process (default: 8000)
- --gpu-ids: GPU ID(s) to use (default: 0)
- Wait for Model Download:
- The first run will download the model (this can take time)
- Models are saved in $HOME/.cache/huggingface
Note: 8x7b, 34b, and 70b models may take up to an hour to load on some devices.
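To see how much disk space downloaded models are consuming, a standard du query against the cache directory works:
# Total size of the Hugging Face model cache
du -sh $HOME/.cache/huggingface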
When running the SD miner, you can use various CLI options to customize its behavior. You can combine multiple flags.
- Log Level
- Set the verbosity of log messages:
python3 sd-miner.py --log-level DEBUG
- Options: DEBUG, INFO, WARNING, ERROR, CRITICAL (default: INFO)
- Auto-Confirm
- Automatically confirm model downloads:
python3 sd-miner.py --auto-confirm yes
- Options: yes, no (default: no)
- Exclude SDXL
- Exclude SDXL models to reduce VRAM usage:
python3 sd-miner.py --exclude-sdxl
- Specify Model ID
- Run the miner with a specific model:
python3 sd-miner.py --model-id <model_id>
- For example, run the FLUX.1-dev model with:
python3 sd-miner.py --model-id FLUX.1-dev --skip-checksum
- CUDA Device ID
- Specify which GPU to use:
python3 sd-miner.py --cuda-device-id 0
- Skip Checksum
- Skip model-file checksum validation to speed up miner start-up:
python3 sd-miner.py --skip-checksum
- This skips checking the validity of model files. However, if incomplete files are present on disk, the miner process will crash without this check.
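If you suspect an incomplete download, you can hash the model files by hand before deciding whether to skip the check (an illustrative sketch only; the cache path is borrowed from the Docker examples above, and the miner's internal checksum logic may differ):
# Compute SHA-256 digests of cached model weights for manual comparison
find $HOME/.cache/heurist -type f \( -name '*.safetensors' -o -name '*.bin' \) -exec sha256sum {} \;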
For LLM miner, use the following CLI options to customize its behavior:
- Specify Model ID
- Run the miner with a specific model (mandatory):
./llm-miner-starter.sh <model_id>
- Example: dolphin-2.9-llama3-8b (requires 24GB VRAM)
- Miner ID Index
- Specify which miner ID from the .env file to use:
./llm-miner-starter.sh <model_id> --miner-id-index 1
- Default: 0 (uses the first address configured)
- Port
- Set the port for communication with the vLLM process:
./llm-miner-starter.sh <model_id> --port 8001
- Default: 8000
- GPU IDs
- Specify which GPU(s) to use:
./llm-miner-starter.sh <model_id> --gpu-ids 1
- Default: 0
- Example combining multiple options:
./llm-miner-starter.sh dolphin-2.9-llama3-8b --miner-id-index 1 --port 8001 --gpu-ids 1
- Advanced usage, to deploy large models across multiple GPUs on the same machine:
./llm-miner-starter.sh openhermes-mixtral-8x7b-gptq --miner-id-index 0 --port 8000 --gpu-ids 0,1
To utilize multiple GPUs:
- Assign unique Miner IDs in your .env file:
MINER_ID_0=0xWalletAddress1
MINER_ID_1=0xWalletAddress2
- Set num_cuda_devices in config.toml:
[system]
num_cuda_devices = 2
- Run the miner without specifying a CUDA device ID to use all available GPUs.
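With both files in place, a single launch then drives every configured GPU (as described above):
# No --cuda-device-id flag, so the miner uses all num_cuda_devices GPUs
python3 sd-miner.py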
Running into issues? Don't worry, we've got you covered! Here are some common problems and their solutions:
- 🚨 CUDA not found
- Ensure CUDA is properly installed
- Check if the CUDA version matches PyTorch requirements
✅ Solution: Reinstall CUDA or update PyTorch to match your CUDA version
- 🚨 Dependency installation fails
- Check your Python version (should be 3.10 or 3.11)
- Ensure you're in the correct Conda environment
✅ Solution: Create a new Conda environment and reinstall dependencies
- 🚨 CUDA out of memory error
- Check available GPU memory using nvidia-smi
- Stop other programs occupying VRAM, or use a smaller model
✅ Solution: Add the --exclude-sdxl flag for the SD miner or choose a smaller LLM
- 🚨 Miner not receiving tasks
- Check your internet connection
- Verify your Miner ID is correctly set in the .env file
✅ Solution: Restart the miner and check logs for connection issues
- 🚨 Model loading takes too long
- This is normal for large models, especially on first run
- Check disk space and internet speed
✅ Solution: Be patient (grab a coffee! ☕), or choose a smaller model
- 🔍 Always check the console output for specific error messages
- 🔄 Ensure you're using the latest version of the miner software
- 💬 If problems persist, don't hesitate to ask for help in our Discord community!
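For the memory errors above, it also helps to watch GPU usage live while the miner runs (standard tooling, nothing miner-specific):
# Refresh utilization and VRAM usage every second
watch -n 1 nvidia-smi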
Got questions? We've got answers!
1️⃣ Can I run both SD and LLM miners simultaneously? 🖥️🖥️
2️⃣ How do I know if I'm earning rewards? 💰
3️⃣ What's the difference between Identity Wallet and Reward Wallet? 🎭💼
4️⃣ Can I use my gaming PC for mining when I'm not gaming? 🎮➡️💻
5️⃣ How often should I update the miner software? 🔄
Join our lively community on Discord - it's where all the cool miners hang out! 🔗 Heurist Discord #dev-chat channel
- 📚 Check our Troubleshooting guide and FAQ - you might find a quick fix!
- 🆘 Still stuck? Head over to our GitHub Issues page: 🔗 Heurist Miner Issues
- 📝 When reporting, remember to include:
- Miner version
- Model
- Operating System
- Console error messages and log files
- Steps to reproduce
Keep up with the latest Heurist happenings:
- 📖 Medium: Heurist Blogs
- 📣 Discord: Tune into our #miner-announcements channel
- 🐦 X/Twitter: Follow Heurist for the latest updates