
miner-release
Stable Diffusion and LLM miner for Heurist
Stars: 73

Heurist Miner is a tool that allows users to contribute their GPU for AI inference tasks on the Heurist network. It supports dual mining capabilities for image generation models and Large Language Models, offers flexible setup on Windows or Linux with multiple GPUs, ensures secure rewards through a dual-wallet system, and is fully open source. Users can earn rewards by hosting AI models and supporting applications in the Heurist ecosystem.
README:
- Introduction
- System Requirements
- Quick Start Guide
- Detailed Setup Instructions
- Advanced Configuration
- Troubleshooting
- FAQ
- Support and Community
Welcome to Heurist Miner, your entrance to decentralized generative AI. Whether you have a high-end gaming PC with an NVIDIA GPU or you're a datacenter owner ready to explore the world of AI and cryptocurrency, this guide will help you get started on an exciting journey!
Heurist Miner allows you to contribute your GPU to perform AI inference tasks on the Heurist network. By running this miner, you'll earn rewards for hosting AI models and supporting various applications in the Heurist ecosystem.
- 🖼️ Dual Mining Capabilities: Support for both image generation models and Large Language Models.
- 🖥️ Flexible Setup: Run on Windows or Linux, with support for multiple GPUs.
- 🔐 Secure Rewards: Utilizes a dual-wallet system for enhanced security.
- 🌐 Open Source: The code is fully open and transparent. Download and run with ease.
Before you begin, ensure your system meets the following requirements:
- GPU: NVIDIA GPU with at least 12GB VRAM (24GB+ recommended for optimal performance)
- CPU: Multi-core processor (4+ cores recommended)
- RAM: 16GB+ system RAM
- Storage: At least 50GB free space (NVMe recommended for faster model loading)
- Operating System:
- Windows 10/11 (64-bit)
- Linux (Ubuntu 20.04 LTS or later recommended)
- CUDA: Version 12.1 or 12.2
- Python: Version 3.10 or 3.11
- Git: For cloning the repository
- Stable internet connection (100 Mbps+ recommended)
- Ability to access HuggingFace and GitHub repositories
- Some models (especially larger LLMs) may require more VRAM. Check the model-specific requirements in the detailed setup sections.
- Ensure your system is up-to-date with the latest NVIDIA GPU drivers.
- Stable Diffusion models need at least 8-10GB VRAM, while LLMs can require 16GB to 40GB+ depending on the model size.
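To check your GPU model and VRAM before picking models, you can query the driver directly (assuming NVIDIA drivers are installed; the PyTorch one-liner works once the environment from the setup sections below is in place):
nvidia-smi --query-gpu=name,memory.total --format=csv
python3 -c "import torch; p = torch.cuda.get_device_properties(0); print(p.name, round(p.total_memory / 1e9), 'GB')"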
For experienced users, here's a quick overview to get you mining:
- Clone the Repository
git clone https://github.com/heurist-network/miner-release.git
cd miner-release
- Set Up Environment
- Install Miniconda (if not already installed)
- Create and activate a new conda environment:
conda create --name heurist-miner python=3.11
conda activate heurist-miner
- Install Dependencies
pip install -r requirements.txt
- Configure Miner ID
- Create a .env file in the root directory
- Add your Ethereum wallet address:
MINER_ID_0=0xYourWalletAddressHere
Follow the "Multiple GPU Configuration" section if you have multiple GPUs.
- Choose Your Miner
- For Stable Diffusion:
python sd-miner.py
- For LLM:
./llm-miner-starter.sh <model_id>
For detailed instructions, troubleshooting, and advanced configuration, please refer to the sections below.
For users who prefer Docker (the current version of the Docker setup only supports the FLUX model), follow these steps:
- Build the Docker Image
docker build -t heurist-miner:latest .
- Run the Docker Container
Single GPU:
sudo docker run -d --gpus all \
-e MINER_ID_0=0xYourWalletAddressHere \
-e LOG_LEVEL=INFO \
-v $HOME/.cache/heurist:/app/.cache/heurist \
heurist-miner:latest
Replace 0xYourWalletAddressHere with your wallet address to receive rewards.
Multiple GPUs:
sudo docker run -d --gpus all \
-e MINER_ID_0=0xYourFirstWalletAddressHere \
-e MINER_ID_1=0xYourSecondWalletAddressHere \
-e MINER_ID_2=0xYourThirdWalletAddressHere \
-e LOG_LEVEL=INFO \
-v $HOME/.cache/heurist:/app/.cache/heurist \
heurist-miner:latest
Replace 0xYourFirstWalletAddressHere, 0xYourSecondWalletAddressHere, and 0xYourThirdWalletAddressHere with your actual wallet addresses.
This command:
- Runs the container in detached mode (-d)
- Allows access to all GPUs (--gpus all)
- Sets environment variables for miner IDs and log level
- Mounts a volume for persistent cache storage
- Uses the image we just built (heurist-miner:latest)
Note: Ensure you have the NVIDIA Container Toolkit installed for GPU support in Docker.
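If you prefer Docker Compose for managing the container, an equivalent setup might look like this (a sketch under the same assumptions: the heurist-miner:latest image built above and the NVIDIA Container Toolkit installed; the file and service names are our own choices):
services:
  miner:
    image: heurist-miner:latest
    restart: unless-stopped
    environment:
      - MINER_ID_0=0xYourWalletAddressHere
      - LOG_LEVEL=INFO
    volumes:
      - $HOME/.cache/heurist:/app/.cache/heurist
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
Start it with docker compose up -d and follow the logs with docker compose logs -f miner.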
Heurist Miner uses a dual-wallet system for security and reward distribution:
- Identity Wallet: Used for authentication, stored locally. Do not store funds here.
- Reward Wallet (Miner ID): Receives points, Heurist Token rewards, and potential ecosystem benefits.
- Create a .env file in the root directory of your miner installation.
- Add your Ethereum wallet address(es) as Miner ID(s):
MINER_ID_0=0xYourFirstWalletAddressHere
MINER_ID_1=0xYourSecondWalletAddressHere
- (Optional) Add custom tags for tracking:
MINER_ID_0=0xYourFirstWalletAddressHere-GamingPC4090
MINER_ID_1=0xYourSecondWalletAddressHere-GoogleCloudT4
- Generate or import identity wallets:
python3 ./auth/generator.py
Follow the prompts to create new wallets or import existing ones.
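For context, an identity wallet is just a standard Ethereum keypair; conceptually, generating one looks like the one-liner below (illustrative only, using the eth-account library if it is available in your environment; use the provided generator.py for actual setup so keys are stored where the miner expects them):
python3 -c "from eth_account import Account; acct = Account.create(); print('address:', acct.address)"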
Windows Setup:
- Install Miniconda:
- Download from the Miniconda website
- Choose the latest Windows 64-bit version for Python 3.11
- Create Conda Environment:
conda create --name heurist-miner python=3.11
conda activate heurist-miner
- Install CUDA Toolkit:
- Download CUDA 12.1 from the NVIDIA website
- Follow the installation prompts
- Install PyTorch with GPU Support:
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
- Clone Miner Repository and Install Dependencies:
git clone https://github.com/heurist-network/miner-release
cd miner-release
pip install -r requirements.txt
- Run the Miner:
python3 sd-miner.py
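If the miner exits with a CUDA error, a quick sanity check that PyTorch can see your GPU (a minimal check, assuming the heurist-miner environment is active):
python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
True followed by a 12.x version means the PyTorch/CUDA pairing is good; False usually points to a driver or version mismatch.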
Linux Setup:
- Update GPU Drivers (if necessary):
sudo apt update
sudo ubuntu-drivers autoinstall
- Install Miniconda:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
- Create Conda Environment:
conda create --name heurist-miner python=3.11
conda activate heurist-miner
- Install CUDA Toolkit:
- Follow the instructions in the NVIDIA CUDA Installation Guide
- Install PyTorch with GPU Support:
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
- Clone Miner Repository and Install Dependencies:
git clone https://github.com/heurist-network/miner-release
cd miner-release
pip install -r requirements.txt
- Run the Miner:
python3 sd-miner.py
LLM Miner Setup:
- Ensure CUDA Driver is Installed:
- Check with nvidia-smi
- Select a Model ID:
- Choose based on your GPU's VRAM capacity
- Example models:
- dolphin-2.9-llama3-8b (24GB VRAM)
- openhermes-mixtral-8x7b-gptq (40GB VRAM)
- Run the Setup Script:
chmod +x llm-miner-starter.sh
./llm-miner-starter.sh <model_id> --miner-id-index 0 --port 8000 --gpu-ids 0
Options:
- --miner-id-index: Index of the miner_id in .env (default: 0)
- --port: Port for communication with the vLLM process (default: 8000)
- --gpu-ids: GPU ID(s) to use (default: 0)
- Wait for Model Download:
- First run will download the model (can take time)
- Models are saved in $HOME/.cache/huggingface
Note: 8x7b, 34b, and 70b models may take up to an hour to load on some devices.
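Since model weights are large, it's worth keeping an eye on the cache directory and your remaining disk space (standard shell commands; paths assume the default cache location mentioned above):
du -sh $HOME/.cache/huggingface
df -h $HOME
The second command confirms you still have the recommended 50GB+ free.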
When running the SD miner, you can use various CLI options to customize its behavior. Multiple flags can be combined (see the example after this list).
- Log Level
- Set the verbosity of log messages:
python3 sd-miner.py --log-level DEBUG
- Options: DEBUG, INFO, WARNING, ERROR, CRITICAL (default: INFO)
- Auto-Confirm
- Automatically confirm model downloads:
python3 sd-miner.py --auto-confirm yes
- Options: yes, no (default: no)
- Exclude SDXL
- Exclude SDXL models to reduce VRAM usage:
python3 sd-miner.py --exclude-sdxl
- Specify Model ID
- Run the miner with a specific model:
python3 sd-miner.py --model-id <model_id>
- For example, run the FLUX.1-dev model with:
python3 sd-miner.py --model-id FLUX.1-dev --skip-checksum
- CUDA Device ID
- Specify which GPU to use:
python3 sd-miner.py --cuda-device-id 0
- Skip Checksum
- Speeds up miner start-up by skipping validation of model files. However, if incomplete files are present on disk, the miner process will crash when this check is skipped.
python3 sd-miner.py --skip-checksum
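Since flags can be combined, a typical debugging run on the first GPU that skips SDXL models and auto-confirms downloads could look like this (an illustrative combination of the flags documented above):
python3 sd-miner.py --log-level DEBUG --auto-confirm yes --exclude-sdxl --cuda-device-id 0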
For the LLM miner, use the following CLI options to customize its behavior:
- Specify Model ID
- Run the miner with a specific model (mandatory):
./llm-miner-starter.sh <model_id>
- Example: dolphin-2.9-llama3-8b (requires 24GB VRAM)
- Miner ID Index
- Specify which miner ID from the .env file to use:
./llm-miner-starter.sh <model_id> --miner-id-index 1
- Default: 0 (uses the first address configured)
- Port
- Set the port for communication with the vLLM process:
./llm-miner-starter.sh <model_id> --port 8001
- Default: 8000
- GPU IDs
- Specify which GPU(s) to use:
./llm-miner-starter.sh <model_id> --gpu-ids 1
- Default: 0
- Example combining multiple options:
./llm-miner-starter.sh dolphin-2.9-llama3-8b --miner-id-index 1 --port 8001 --gpu-ids 1
- Advanced usage: deploy large models across multiple GPUs on the same machine:
./llm-miner-starter.sh openhermes-mixtral-8x7b-gptq --miner-id-index 0 --port 8000 --gpu-ids 0,1
To utilize multiple GPUs:
- Assign unique Miner IDs in your .env file:
MINER_ID_0=0xWalletAddress1
MINER_ID_1=0xWalletAddress2
- Set num_cuda_devices in config.toml:
[system]
num_cuda_devices = 2
- Run the miner without specifying a CUDA device ID to use all available GPUs.
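Alternatively, using the --cuda-device-id flag documented above, you can pin one SD miner process to each GPU (a sketch, one terminal per process; check each process's logs to confirm which miner ID it picks up):
python3 sd-miner.py --cuda-device-id 0
python3 sd-miner.py --cuda-device-id 1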
Running into issues? Don't worry, we've got you covered! Here are some common problems and their solutions:
- 🚨 CUDA not found
- Ensure CUDA is properly installed
- Check if the CUDA version matches PyTorch requirements
✅ Solution: Reinstall CUDA or update PyTorch to match your CUDA version
- 🚨 Dependency installation fails
- Check your Python version (should be 3.10 or 3.11)
- Ensure you're in the correct Conda environment
✅ Solution: Create a new Conda environment and reinstall dependencies
- 🚨 CUDA out of memory error
- Check available GPU memory using nvidia-smi
- Stop other programs occupying VRAM, or use a smaller model
✅ Solution: Add the --exclude-sdxl flag for the SD miner or choose a smaller LLM
- 🚨 Miner not receiving tasks
- Check your internet connection
- Verify your Miner ID is correctly set in the .env file
✅ Solution: Restart the miner and check logs for connection issues
- 🚨 Model loading takes too long
- This is normal for large models, especially on first run
- Check disk space and internet speed
✅ Solution: Be patient (grab a coffee! ☕), or choose a smaller model
- 🔍 Always check the console output for specific error messages
- 🔄 Ensure you're using the latest version of the miner software
- 💬 If problems persist, don't hesitate to ask for help in our Discord community!
Got questions? We've got answers!
1️⃣ Can I run both SD and LLM miners simultaneously? 🖥️🖥️
2️⃣ How do I know if I'm earning rewards? 💰
3️⃣ What's the difference between Identity Wallet and Reward Wallet? 🎭💼
4️⃣ Can I use my gaming PC for mining when I'm not gaming? 🎮➡️💻
5️⃣ How often should I update the miner software? 🔄
Join our lively community on Discord - it's where all the cool miners hang out! 🔗 Heurist Discord #dev-chat channel
- 📚 Check our Troubleshooting guide and FAQ - you might find a quick fix!
- 🆘 Still stuck? Head over to our GitHub Issues page: 🔗 Heurist Miner Issues
- 📝 When reporting, remember to include:
- Miner version
- Model
- Operating System
- Console error messages and log files
- Steps to reproduce
Keep up with the latest Heurist happenings:
- 📖 Medium: Heurist Blogs
- 📣 Discord: Tune into our #miner-announcements channel
- 🐦 X/Twitter: Follow Heurist for the latest updates
Similar Open Source Tools


WatermarkRemover-AI
WatermarkRemover-AI is an advanced application that utilizes AI models for precise watermark detection and seamless removal. It leverages Florence-2 for watermark identification and LaMA for inpainting. The tool offers both a command-line interface (CLI) and a PyQt6-based graphical user interface (GUI), making it accessible to users of all levels. It supports dual modes for processing images, advanced watermark detection, seamless inpainting, customizable output settings, real-time progress tracking, dark mode support, and efficient GPU acceleration using CUDA.

CrewAI-Studio
CrewAI Studio is an application with a user-friendly interface for interacting with CrewAI, offering support for multiple platforms and various backend providers. It allows users to run crews in the background, export single-page apps, and use custom tools for APIs and file writing. The roadmap includes features like better import/export, human input, chat functionality, automatic crew creation, and multiuser environment support.

trendFinder
Trend Finder is a tool designed to help users stay updated on trending topics on social media by collecting and analyzing posts from key influencers. It sends Slack notifications when new trends or product launches are detected, saving time, keeping users informed, and enabling quick responses to emerging opportunities. The tool features AI-powered trend analysis, social media and website monitoring, instant Slack notifications, and scheduled monitoring using cron jobs. Built with Node.js and Express.js, Trend Finder integrates with Together AI, Twitter/X API, Firecrawl, and Slack Webhooks for notifications.

efficient-recorder
Efficient Recorder is a battery-life friendly tool designed to stream video, screen, mic, and system audio to any S3-compatible cloud storage service. It captures audio, screenshots, and webcam photos at configurable fps, utilizing low-energy volume detection for audio recording. The tool streams data to a configurable S3 endpoint or a custom server using MinIO. It aims to be storage and battery efficient, providing queued upload processing and minimal system resource overhead. The tool requires SoX for audio recording and webcam capture tools for operation. Users can specify various command line options for customization, such as enabling screenshot and webcam capture with specific intervals and image quality settings.

minefield
BitBom Minefield is a tool that uses roaring bit maps to graph Software Bill of Materials (SBOMs) with a focus on speed, air-gapped operation, scalability, and customizability. It is optimized for rapid data processing, operates securely in isolated environments, supports millions of nodes effortlessly, and allows users to extend the project without relying on upstream changes. The tool enables users to manage and explore software dependencies within isolated environments by offline processing and analyzing SBOMs.

Hacx-GPT
Hacx GPT is a cutting-edge AI tool developed by BlackTechX, inspired by WormGPT, designed to push the boundaries of natural language processing. It is an advanced broken AI model that facilitates seamless and powerful interactions, allowing users to ask questions and perform various tasks. The tool has been rigorously tested on platforms like Kali Linux, Termux, and Ubuntu, offering powerful AI conversations and the ability to do anything the user wants. Users can easily install and run Hacx GPT on their preferred platform to explore its vast capabilities.

blum-airdrop-bot
Blum Airdrop Bot automates interactions with the Blum airdrop platform, allowing users to claim rewards, manage farming sessions, complete tasks, and play games automatically. It includes features like claiming farm rewards, starting farming sessions, auto-completing tasks, auto-playing games, and claiming daily rewards. Users can choose between Default Flow for manual task selection and One-time Flow for continuous automated tasks. The setup requires Node.js, npm, and setting up a `.env` file with `QUERY_ID`. The bot can be started with `npm start` and supports donations in Solana, EVM, and BTC.

meeting-minutes
An open-source AI assistant for taking meeting notes that captures live meeting audio, transcribes it in real-time, and generates summaries while ensuring user privacy. Perfect for teams to focus on discussions while automatically capturing and organizing meeting content without external servers or complex infrastructure. Features include modern UI, real-time audio capture, speaker diarization, local processing for privacy, and more. The tool also offers a Rust-based implementation for better performance and native integration, with features like live transcription, speaker diarization, and a rich text editor for notes. Future plans include database connection for saving meeting minutes, improving summarization quality, and adding download options for meeting transcriptions and summaries. The backend supports multiple LLM providers through a unified interface, with configurations for Anthropic, Groq, and Ollama models. System architecture includes core components like audio capture service, transcription engine, LLM orchestrator, data services, and API layer. Prerequisites for setup include Node.js, Python, FFmpeg, and Rust. Development guidelines emphasize project structure, testing, documentation, type hints, and ESLint configuration. Contributions are welcome under the MIT License.

swift-ocr-llm-powered-pdf-to-markdown
Swift OCR is a powerful tool for extracting text from PDF files using OpenAI's GPT-4 Turbo with Vision model. It offers flexible input options, advanced OCR processing, performance optimizations, structured output, robust error handling, and scalable architecture. The tool ensures accurate text extraction, resilience against failures, and efficient handling of multiple requests.

farfalle
Farfalle is an open-source AI-powered search engine that allows users to run their own local LLM or utilize the cloud. It provides a tech stack including Next.js for frontend, FastAPI for backend, Tavily for search API, Logfire for logging, and Redis for rate limiting. Users can get started by setting up prerequisites like Docker and Ollama, and obtaining API keys for Tavily, OpenAI, and Groq. The tool supports models like llama3, mistral, and gemma. Users can clone the repository, set environment variables, run containers using Docker Compose, and deploy the backend and frontend using services like Render and Vercel.

mycoder
An open-source mono-repository containing the MyCoder agent and CLI. It leverages Anthropic's Claude API for intelligent decision making, has a modular architecture with various tool categories, supports parallel execution with sub-agents, can modify code by writing itself, features a smart logging system for clear output, and is human-compatible using README.md, project files, and shell commands to build its own context.

pyspur
PySpur is a graph-based editor designed for LLM (Large Language Models) workflows. It offers modular building blocks, node-level debugging, and performance evaluation. The tool is easy to hack, supports JSON configs for workflow graphs, and is lightweight with minimal dependencies. Users can quickly set up PySpur by cloning the repository, creating a .env file, starting docker services, and accessing the portal. PySpur can also work with local models served using Ollama, with steps provided for configuration. The roadmap includes features like canvas, async/batch execution, support for Ollama, new nodes, pipeline optimization, templates, code compilation, multimodal support, and more.

R2R
R2R (RAG to Riches) is a fast and efficient framework for serving high-quality Retrieval-Augmented Generation (RAG) to end users. The framework is designed with customizable pipelines and a feature-rich FastAPI implementation, enabling developers to quickly deploy and scale RAG-based applications. R2R was conceived to bridge the gap between local LLM experimentation and scalable production solutions. R2R is to LangChain/LlamaIndex what NextJS is to React. A JavaScript client for R2R deployments can be found here. Key features: 🚀 Deploy: instantly launch production-ready RAG pipelines with streaming capabilities. 🧩 Customize: tailor your pipeline with intuitive configuration files. 🔌 Extend: enhance your pipeline with custom code integrations. ⚖️ Autoscale: scale your pipeline effortlessly in the cloud using SciPhi. 🤖 OSS: benefit from a framework developed by the open-source community, designed to simplify RAG deployment.

Visionatrix
Visionatrix is a project aimed at providing easy use of ComfyUI workflows. It offers simplified setup and update processes, a minimalistic UI for daily workflow use, stable workflows with versioning and update support, scalability for multiple instances and task workers, multiple user support with integration of different user backends, LLM power for integration with Ollama/Gemini, and seamless integration as a service with backend endpoints and webhook support. The project is approaching version 1.0 release and welcomes new ideas for further implementation.

Devon
Devon is an open-source pair programmer tool designed to facilitate collaborative coding sessions. It provides features such as multi-file editing, codebase exploration, test writing, bug fixing, and architecture exploration. The tool supports Anthropic, OpenAI, and Groq APIs, with plans to add more models in the future. Devon is community-driven, with ongoing development goals including multi-model support, plugin system for tool builders, self-hostable Electron app, and setting SOTA on SWE-bench Lite. Users can contribute to the project by developing core functionality, conducting research on agent performance, providing feedback, and testing the tool.
For similar tasks


Stake-auto-bot
Stake-auto-bot is a tool designed for automated staking in the cryptocurrency space. It allows users to set up automated processes for staking their digital assets, providing a convenient way to earn rewards and secure networks. The tool simplifies the staking process by automating the necessary steps, such as selecting validators, delegating tokens, and monitoring rewards. With Stake-auto-bot, users can optimize their staking strategies and maximize their returns with minimal effort.

airflow
Apache Airflow (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.

tt-metal
TT-NN is a python & C++ Neural Network OP library. It provides a low-level programming model, TT-Metalium, enabling kernel development for Tenstorrent hardware.

Forza-Mods-AIO
Forza Mods AIO is a free and open-source tool that enhances the gaming experience in Forza Horizon 4 and 5. It offers a range of time-saving and quality-of-life features, making gameplay more enjoyable and efficient. The tool is designed to streamline various aspects of the game, improving user satisfaction and overall enjoyment.

openssa
OpenSSA is an open-source framework for creating efficient, domain-specific AI agents. It enables the development of Small Specialist Agents (SSAs) that solve complex problems in specific domains. SSAs tackle multi-step problems that require planning and reasoning beyond traditional language models. They apply OODA for deliberative reasoning (OODAR) and iterative, hierarchical task planning (HTP). This "System-2 Intelligence" breaks down complex tasks into manageable steps. SSAs make informed decisions based on domain-specific knowledge. With OpenSSA, users can create agents that process, generate, and reason about information, making them more effective and efficient in solving real-world challenges.

pezzo
Pezzo is a fully cloud-native and open-source LLMOps platform that allows users to observe and monitor AI operations, troubleshoot issues, save costs and latency, collaborate, manage prompts, and deliver AI changes instantly. It supports various clients for prompt management, observability, and caching. Users can run the full Pezzo stack locally using Docker Compose, with prerequisites including Node.js 18+, Docker, and a GraphQL Language Feature Support VSCode Extension. Contributions are welcome, and the source code is available under the Apache 2.0 License.

llm_qlora
LLM_QLoRA is a repository for fine-tuning Large Language Models (LLMs) using QLoRA methodology. It provides scripts for training LLMs on custom datasets, pushing models to HuggingFace Hub, and performing inference. Additionally, it includes models trained on HuggingFace Hub, a blog post detailing the QLoRA fine-tuning process, and instructions for converting and quantizing models. The repository also addresses troubleshooting issues related to Python versions and dependencies.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.