![discord-ai-bot](/statics/github-mark.png)
discord-ai-bot
Discord AI chatbot using Ollama
Stars: 210
![screenshot](/screenshots_githubs/mekb-turtle-discord-ai-bot.jpg)
Discord AI Bot is a chatbot designed to interact with Ollama and AUTOMATIC1111 Stable Diffusion on Discord. The project is now archived due to lack of maintenance. Users can set up the bot by installing Node.js, Ollama, and a model, creating a Discord bot, and starting the bot with the necessary configuration. Docker setup instructions are also provided for easy deployment. Users interact with the bot by @mentioning it in Discord messages.
README:
This repository is no longer being developed by me, mekb-turtle.
Set-up instructions:
1. Install Node.js (if you have a package manager, use that to install it instead)
   - Make sure to install at least v14 of Node.js
2. Install Ollama (ditto)
3. Pull (download) a model, e.g. `ollama pull orca` or `ollama pull llama2`
4. Start Ollama by running `ollama serve`
5. Create a Discord bot
   - Under Application » Bot
     - Enable Message Content Intent
     - Enable Server Members Intent (for replacing user mentions with the username)
   - Invite the bot to a server
     - Go to Application » OAuth2 » URL Generator
     - Enable the `bot` scope
     - Enable Send Messages, Read Messages/View Channels, and Read Message History
     - Under Generated URL, click Copy and paste the URL into your browser
6. Rename `.env.example` to `.env` and edit the `.env` file
   - You can get the token from Application » Bot » Token; never share this with anyone
   - Make sure to change the model if you aren't using `orca`
   - The Ollama URL can be kept the same unless you have changed the port
   - You can use multiple Ollama servers at the same time by separating the URLs with commas
   - Set the channels to the channel IDs, comma separated
     - In Discord, go to User Settings » Advanced and enable Developer Mode
     - Right-click a channel you want to use and click Copy Channel ID
   - You can edit the system message the bot uses, or disable it entirely
7. Install the required dependencies with `npm i`
8. Start the bot with `npm start`
9. You can interact with the bot by @mentioning it with your message (a minimal sketch of this flow follows the list)
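To make the mention-to-reply flow concrete, here is a minimal sketch of how a bot like this could forward an @mention to Ollama using discord.js and Node's built-in fetch. The environment variable names (DISCORD_TOKEN, MODEL, OLLAMA, CHANNELS) and the single-server handling are assumptions for illustration, not the repository's actual `.env` keys or code.

```js
// Hypothetical sketch: reply to @mentions using an Ollama server.
// The env var names below are illustrative, not the repo's actual .env keys.
import { Client, GatewayIntentBits } from "discord.js";

const { DISCORD_TOKEN, MODEL = "orca", OLLAMA = "http://localhost:11434", CHANNELS = "" } = process.env;
const allowedChannels = CHANNELS.split(",").map((id) => id.trim()).filter(Boolean);
// Multiple Ollama servers can be listed comma-separated; this sketch just uses the first one.
const ollamaUrl = OLLAMA.split(",")[0].trim();

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.GuildMembers,
    GatewayIntentBits.MessageContent,
  ],
});

client.on("messageCreate", async (message) => {
  if (message.author.bot) return;
  if (!allowedChannels.includes(message.channel.id)) return;
  if (!message.mentions.has(client.user)) return;

  // Strip the bot mention to get the actual prompt text.
  const prompt = message.content.replaceAll(`<@${client.user.id}>`, "").trim();

  // Ollama's generate endpoint returns { response: "..." } when stream is false.
  const res = await fetch(`${ollamaUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: MODEL, prompt, stream: false }),
  });
  const data = await res.json();

  // Discord messages are capped at 2000 characters.
  await message.reply(data.response?.slice(0, 2000) || "(no response)");
});

client.login(DISCORD_TOKEN);
```

Splitting CHANNELS and OLLAMA on commas mirrors the comma-separated configuration described above; the actual bot may handle streaming responses, message splitting, and multiple servers differently.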
Set-up with Stable Diffusion:
- Install AUTOMATIC1111 Stable Diffusion web UI
- Run the script with the API enabled: `./webui.sh --api --listen` (a sketch of calling this API follows)
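The `--api` flag exposes AUTOMATIC1111's HTTP API (by default on port 7860). As a rough, hedged sketch of how a bot could request an image through it, the snippet below posts a prompt to the txt2img endpoint and replies with the decoded PNG; the payload fields beyond `prompt`, the SD_URL variable, and the attachment handling are illustrative assumptions rather than the repository's implementation.

```js
// Hypothetical sketch: generate an image via AUTOMATIC1111's txt2img API
// and send it back as a Discord attachment.
import { AttachmentBuilder } from "discord.js";

// Default web UI address when started with `./webui.sh --api --listen`.
const SD_URL = process.env.SD_URL || "http://127.0.0.1:7860";

export async function generateImage(message, prompt) {
  const res = await fetch(`${SD_URL}/sdapi/v1/txt2img`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, steps: 20, width: 512, height: 512 }),
  });
  const data = await res.json();

  // The API returns base64-encoded PNGs in data.images.
  const image = Buffer.from(data.images[0], "base64");
  await message.reply({ files: [new AttachmentBuilder(image, { name: "result.png" })] });
}
```

The web UI listens on 127.0.0.1:7860 by default; `--listen` binds it to all interfaces so a bot running elsewhere (for example in Docker) can reach it.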
Set-up with Docker:
- Install Docker
  - Your Docker install should be at least compatible with version 3 of Compose (Docker Engine 1.13.0+)
- Repeat steps 2-7 from the other setup instructions
- Start the bot with `make compose-up` if you have Make installed
  - Otherwise, try `docker compose -p discord-ai up` instead
- You can interact with the bot by @mentioning it with your message
Alternative AI tools for discord-ai-bot
Similar Open Source Tools
![discord-ai-bot Screenshot](/screenshots_githubs/238SAMIxD-discord-ai-bot.jpg)
discord-ai-bot
Discord AI Bot is a chatbot tool designed to interact with Ollama and AUTOMATIC1111 Stable Diffusion on Discord. The bot allows users to set up and configure a Discord bot to communicate with the mentioned AI models. Users can follow step-by-step instructions to install Node.js, Ollama, and the required dependencies, create a Discord bot, and interact with the bot by mentioning it in messages. Additionally, the tool provides set-up instructions for Docker users to easily deploy the bot using Docker containers. Overall, Discord AI Bot simplifies the process of integrating AI chatbots into Discord servers for interactive communication.
![minimal-llm-ui Screenshot](/screenshots_githubs/richawo-minimal-llm-ui.jpg)
minimal-llm-ui
This minimalistic UI serves as a simple interface for Ollama models, enabling real-time interaction with Local Language Models (LLMs). Users can chat with models, switch between different LLMs, save conversations, and create parameter-driven prompt templates. The tool is built using React, Next.js, and Tailwind CSS, with seamless integration with LangchainJs and Ollama for efficient model switching and context storage.
![azure-search-openai-javascript Screenshot](/screenshots_githubs/Azure-Samples-azure-search-openai-javascript.jpg)
azure-search-openai-javascript
This sample demonstrates a few approaches for creating ChatGPT-like experiences over your own data using the Retrieval Augmented Generation pattern. It uses Azure OpenAI Service to access the ChatGPT model (gpt-35-turbo), and Azure AI Search for data indexing and retrieval.
![obsidian-bmo-chatbot Screenshot](/screenshots_githubs/longy2k-obsidian-bmo-chatbot.jpg)
obsidian-bmo-chatbot
Obsidian BMO Chatbot is a plugin that allows users to generate and brainstorm ideas while creating notes using Large Language Models (LLMs) from various providers like Ollama, LM Studio, Anthropic, OpenAI, Mistral AI, and more. Users can interact with self-hosted LLMs, create chatbots with specific knowledge and personalities, chat from anywhere within Obsidian, and receive formatted responses in Obsidian Markdown. The plugin also offers features like customizable bot name, prompt selection, saving chat history as markdown, and more. Users can activate the plugin through Obsidian Community plugins or by installing it manually. Supported models include Ollama, LM Studio, Anthropic, Mistral AI, Google Gemini Pro, OpenAI, and Openrouter provided models.
![python-sc2 Screenshot](/screenshots_githubs/BurnySc2-python-sc2.jpg)
python-sc2
python-sc2 is an easy-to-use library for writing AI Bots for StarCraft II in Python 3. It aims for simplicity and ease of use while providing both high and low level abstractions. The library covers only the raw scripted interface and intends to help new bot authors with added functions. Users can install the library using pip and need a StarCraft II executable to run bots. The API configuration options allow users to customize bot behavior and performance. The community provides support through Discord servers, and users can contribute to the project by creating new issues or pull requests following style guidelines.
![buildware-ai Screenshot](/screenshots_githubs/mckaywrigley-buildware-ai.jpg)
buildware-ai
Buildware is a tool designed to help developers accelerate their code shipping process by leveraging AI technology. Users can build a code instruction system, submit an issue, and receive an AI-generated pull request. The tool is created by Mckay Wrigley and Tyler Bruno at Takeoff AI. Buildware offers a simple setup process involving cloning the repository, installing dependencies, setting up environment variables, configuring a database, and obtaining a GitHub Personal Access Token (PAT). The tool is currently being updated to include advanced features such as Linear integration, local codebase mode, and team support.
![h2o-llmstudio Screenshot](/screenshots_githubs/h2oai-h2o-llmstudio.jpg)
h2o-llmstudio
H2O LLM Studio is a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). With H2O LLM Studio, you can easily and effectively fine-tune LLMs without the need for any coding experience. The GUI is specially designed for large language models, and you can finetune any LLM using a large variety of hyperparameters. You can also use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint. Additionally, you can use Reinforcement Learning (RL) to finetune your model (experimental), use advanced evaluation metrics to judge generated answers by the model, track and compare your model performance visually, and easily export your model to the Hugging Face Hub and share it with the community.
![CyberScraper-2077 Screenshot](/screenshots_githubs/itsOwen-CyberScraper-2077.jpg)
CyberScraper-2077
CyberScraper 2077 is an advanced web scraping tool powered by AI, designed to extract data from websites with precision and style. It offers a user-friendly interface, supports multiple data export formats, operates in stealth mode to avoid detection, and promises lightning-fast scraping. The tool respects ethical scraping practices, including robots.txt and site policies. With upcoming features like proxy support and page navigation, CyberScraper 2077 is a futuristic solution for data extraction in the digital realm.
![open-source-slack-ai Screenshot](/screenshots_githubs/meetbryce-open-source-slack-ai.jpg)
open-source-slack-ai
This repository provides a ready-to-run basic Slack AI solution that allows users to summarize threads and channels using OpenAI. Users can generate thread summaries, channel overviews, channel summaries since a specific time, and full channel summaries. The tool is powered by GPT-3.5-Turbo and an ensemble of NLP models. It requires Python 3.8 or higher, an OpenAI API key, Slack App with associated API tokens, Poetry package manager, and ngrok for local development. Users can customize channel and thread summaries, run tests with coverage using pytest, and contribute to the project for future enhancements.
![quivr-mobile Screenshot](/screenshots_githubs/adityanandanx-quivr-mobile.jpg)
quivr-mobile
Quivr-Mobile is a React Native mobile application that allows users to upload files and engage in chat conversations using the Quivr backend API. It supports features like file upload and chatting with a language model about uploaded data. The project uses technologies like React Native, React Native Paper, and React Native Navigation. Users can follow the installation steps to set up the client and contribute to the project by opening issues or submitting pull requests following the existing coding style.
![generative-ai-js Screenshot](/screenshots_githubs/google-gemini-generative-ai-js.jpg)
generative-ai-js
Generative AI JS is the Google AI SDK for JavaScript (the @google/generative-ai package), which lets developers use Gemini models from the Gemini API in Node.js and web applications. It supports text generation, multi-turn chat, multimodal prompting with text and images, embeddings, and streaming responses, making it straightforward to add Gemini-powered features to JavaScript and TypeScript projects.
![grafana-llm-app Screenshot](/screenshots_githubs/grafana-grafana-llm-app.jpg)
grafana-llm-app
This repository contains separate packages for Grafana LLM Plugin and the @grafana/llm package for interfacing with it. The packages are tightly coupled and developed together with identical dependencies. The repository provides instructions for developing the packages, including backend and frontend development, testing, and release processes.
![dream-team Screenshot](/screenshots_githubs/Azure-Samples-dream-team.jpg)
dream-team
dream-team ("Build your dream team with Autogen") is a repository that leverages Microsoft AutoGen 0.4, Azure OpenAI, and Streamlit to create an end-to-end multi-agent application. It provides an advanced multi-agent framework based on Magentic-One, with features such as a friendly UI, single-line deployment, secure code execution, managed identities, and observability & debugging tools. Users can deploy Azure resources and the app with simple commands, work locally with virtual environments, install dependencies, update configurations, and run the application. The repository also offers resources for learning more about building applications with AutoGen.
![contoso-chat Screenshot](/screenshots_githubs/Azure-Samples-contoso-chat.jpg)
contoso-chat
Contoso Chat is a Python sample demonstrating how to build, evaluate, and deploy a retail copilot application with Azure AI Studio using Promptflow with Prompty assets. The sample implements a Retrieval Augmented Generation approach to answer customer queries based on the company's product catalog and customer purchase history. It utilizes Azure AI Search, Azure Cosmos DB, Azure OpenAI, text-embeddings-ada-002, and GPT models for vectorizing user queries, AI-assisted evaluation, and generating chat responses. By exploring this sample, users can learn to build a retail copilot application, define prompts using Prompty, design, run & evaluate a copilot using Promptflow, provision and deploy the solution to Azure using the Azure Developer CLI, and understand Responsible AI practices for evaluation and content safety.
![civitai Screenshot](/screenshots_githubs/civitai-civitai.jpg)
civitai
Civitai is a platform where people can share their stable diffusion models (textual inversions, hypernetworks, aesthetic gradients, VAEs, and any other crazy stuff people do to customize their AI generations), collaborate with others to improve them, and learn from each other's work. The platform allows users to create an account, upload their models, and browse models that have been shared by others. Users can also leave comments and feedback on each other's models to facilitate collaboration and knowledge sharing.
For similar tasks
![MaxKB Screenshot](/screenshots_githubs/1Panel-dev-MaxKB.jpg)
MaxKB
MaxKB is a knowledge base question-answering system built on large language models (LLMs). MaxKB = Max Knowledge Base; it aims to become the most powerful brain of the enterprise.
![Large-Language-Models Screenshot](/screenshots_githubs/VincentGranville-Large-Language-Models.jpg)
Large-Language-Models
Large Language Models (LLMs) are used to browse the Wolfram directory and associated URLs to create the category structure and good word embeddings. The goal is to generate enriched prompts for GPT, Wikipedia, Arxiv, Google Scholar, Stack Exchange, or Google search. The focus is on one subdirectory: Probability & Statistics. Documentation is in the project textbook `Projects4.pdf`, which is available in the folder. It is recommended to download the document and browse your local copy with Chrome, Edge, or another viewer; unlike on GitHub, you will be able to click on all the links and use the internal navigation features. Look for projects related to NLP and LLM / xLLM. The best starting point is project 7.2.2, which is the core project on this topic, with references to all satellite projects. The project textbook (with solutions to all projects) is the core document needed to participate in the free course (deep tech dive) called **GenAI Fellowship**. For details about the fellowship, follow the link provided. An uncompressed version of `crawl_final_stats.txt.gz` is available on Google Drive; it contains all the crawled data needed as input to the Python scripts in the XLLM5 and XLLM6 folders.
![BlossomLM Screenshot](/screenshots_githubs/Azure99-BlossomLM.jpg)
BlossomLM
BlossomLM is a series of open-source conversational large language models. This project aims to provide a high-quality general-purpose SFT dataset in both Chinese and English, making fine-tuning accessible while also providing pre-trained model weights. **Hint**: BlossomLM is a personal non-commercial project.
![InternLM Screenshot](/screenshots_githubs/InternLM-InternLM.jpg)
InternLM
InternLM is a powerful language model series with features such as a 200K context window for long-context tasks, outstanding comprehensive performance in reasoning, math, code, chat experience, instruction following, and creative writing, code interpreter & data analysis capabilities, and stronger tool utilization capabilities. It offers models in sizes of 7B and 20B, suitable for research and complex scenarios. The models are recommended for various applications and exhibit better performance than previous generations. InternLM models may match or even surpass proprietary models like ChatGPT. The tool has been evaluated on various datasets and has shown superior performance in multiple tasks. It requires Python >= 3.8, PyTorch >= 1.12.0, and Transformers >= 4.34 for usage. InternLM can be used for tasks like chat, agent applications, fine-tuning, deployment, and long-context inference.
![whatsapp-chatgpt Screenshot](/screenshots_githubs/askrella-whatsapp-chatgpt.jpg)
whatsapp-chatgpt
This repository contains a WhatsApp bot that utilizes OpenAI's GPT and DALL-E 2 to respond to user inputs. Users can interact with the bot through voice messages, which are transcribed and responded to. The bot requires Node.js, npm, an OpenAI API key, and a WhatsApp account. It uses Puppeteer to run a real instance of WhatsApp Web to avoid being blocked. However, there is still a risk of being blocked by WhatsApp, as it does not allow bots or unofficial clients on its platform. The bot is not free to use, and users will be charged by OpenAI for each request made.
![J.A.R.V.I.S Screenshot](/screenshots_githubs/clevaway-J.A.R.V.I.S.jpg)
J.A.R.V.I.S
J.A.R.V.I.S. is an offline large language model fine-tuned on custom and open datasets to mimic Jarvis's dialog with Stark. It prioritizes privacy by running locally and excels in responding like Jarvis with a similar tone. Current features include time/date queries, web searches, playing YouTube videos, and webcam image descriptions. Users can interact with Jarvis via command line after installing the model locally using Ollama. Future plans involve voice cloning, voice-to-text input, and deploying the voice model as an API.
![airdcpp-webclient Screenshot](/screenshots_githubs/airdcpp-web-airdcpp-webclient.jpg)
airdcpp-webclient
AirDC++ Web Client is a locally installed application designed for flexible file sharing within groups over a local network or the internet. It utilizes the Advanced Direct Connect protocol to create file sharing communities with thousands of users. The application offers a responsive web user interface, allows sharing local directories, searching for shared files, saving files, chatting capabilities, browsing shared directories, extension support, and a web API for HTTP REST and WebSockets.
For similar jobs
![sweep Screenshot](/screenshots_githubs/sweepai-sweep.jpg)
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
![teams-ai Screenshot](/screenshots_githubs/microsoft-teams-ai.jpg)
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
![ai-guide Screenshot](/screenshots_githubs/Crataco-ai-guide.jpg)
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
![classifai Screenshot](/screenshots_githubs/10up-classifai.jpg)
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
![chatbot-ui Screenshot](/screenshots_githubs/mckaywrigley-chatbot-ui.jpg)
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
![BricksLLM Screenshot](/screenshots_githubs/bricks-cloud-BricksLLM.jpg)
BricksLLM
BricksLLM is a cloud-native AI gateway written in Go. It currently provides native support for OpenAI, Anthropic, Azure OpenAI, and vLLM, and aims to provide enterprise-level infrastructure that can power any LLM production use case. Typical use cases include setting LLM usage limits for users on different pricing tiers, tracking LLM usage on a per-user and per-organization basis, blocking or redacting requests containing PII, improving LLM reliability with failovers, retries, and caching, and distributing API keys with rate limits and cost limits for internal development, production, or student use.
![uAgents Screenshot](/screenshots_githubs/fetchai-uAgents.jpg)
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
![griptape Screenshot](/screenshots_githubs/griptape-ai-griptape.jpg)
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.