
aomail-app
Aomail is an AI interface that connects to Gmail, Outlook, or any IMAP service. It leverages LLMs to categorize, summarize, prioritize, and help write & reply to emails faster. Google-verified, TAC Security-assessed.
Stars: 63

README:
An intelligent, open-source email management platform with AI capabilities. Use our hosted version or self-host for complete control.
Website: https://aomail.ai
Support: [email protected]
Email Provider Integration
- Labels are replicated on Gmail & Outlook
- Link multiple accounts (premium plan)
AI-Powered Tools
- Smart categorization with custom rules
- AI chat assistant for composition and replies
- Customizable AI agents
- Smart email categorization with summaries
- Search emails or ask AI questions (beta)
Analytics & Management
- Usage analytics and insights
- Multi-account dashboard
- AI Custom Rules: Automatic forwarding and smart replies
- Platform Integration: Discord & Slack connectivity with smart summaries
- LLM Choice: Support for OpenAI, Anthropic, Llama, Mistral
Try Aomail for free today and experience the future of email management.
No credit card required.
Required Services:
- Gemini API
- Google OAuth
Optional Services:
- Google PubSub
- Microsoft Azure
- Stripe
- Clone and Install:
git clone https://github.com/aomail-ai/aomail-app
cd aomail-app
cd frontend && npm install
cd .. && cp backend/.env.example backend/.env
Google Project Setup:
1. Generate a Gemini API key and enable the Gemini API.
2. Create a Google Cloud Console project.
3. Configure the OAuth consent screen with the scopes listed in /backend/aomail/constants.py.
4. Set the authorized origin: http://localhost
5. Set the redirect URIs: http://localhost/signup-link and http://localhost/settings
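If you prefer the CLI, the Gemini API (Generative Language API) can also be enabled with gcloud; this sketch assumes gcloud is already authenticated against the project you created:
gcloud services enable generativelanguage.googleapis.com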
PubSub Setup (Optional):
For Local Development:
- Install and run Google Cloud PubSub emulator
- Configure local webhook endpoint
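As a minimal local sketch (the project ID is a placeholder; PUBSUB_EMULATOR_HOST is the standard variable the Google Pub/Sub client libraries read):
gcloud beta emulators pubsub start --project=local-aomail --host-port=localhost:8085
export PUBSUB_EMULATOR_HOST=localhost:8085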
For Production:
- Create a new PubSub topic in the Google Cloud Console
- Configure webhook URL:
https://your-domain/google/receive_mail_notifications/
- Set up push subscription with your webhook
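For example, the topic and push subscription can be created with gcloud (the topic and subscription names below are placeholders; the push endpoint matches the webhook URL above):
gcloud pubsub topics create mail-notifications
gcloud pubsub subscriptions create mail-notifications-push --topic=mail-notifications --push-endpoint=https://your-domain/google/receive_mail_notifications/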
Configure Environment: required variables in .env:
# LLM API KEYS
GEMINI_API_KEY=""
# ENCRYPTION KEYS
SOCIAL_API_REFRESH_TOKEN_KEY=""
EMAIL_ONE_LINE_SUMMARY_KEY=""
EMAIL_SHORT_SUMMARY_KEY=""
EMAIL_HTML_CONTENT_KEY=""
# DJANGO CREDENTIALS
DJANGO_SECRET_KEY=""
# Google Configuration (if using Gmail)
GOOGLE_PROJECT_ID=""
GOOGLE_CLIENT_ID=""
GOOGLE_CLIENT_SECRET=""
# Microsoft Configuration (if using Outlook)
MICROSOFT_CLIENT_ID=""
MICROSOFT_CLIENT_SECRET=""
MICROSOFT_TENANT_ID=""
MICROSOFT_CLIENT_STATE=""
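One way to generate the Django and encryption keys (a sketch that assumes the encryption variables expect Fernet keys and that the django and cryptography packages are installed; check the backend code for the exact format expected):
python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"  # DJANGO_SECRET_KEY
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"  # run once per encryption key variable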
- Launch Application:
chmod +x start.sh
./start.sh
Access at http://localhost:8090/
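If the page does not load, check that the containers started and tail the backend logs (the container name below is the development backend container referenced in the troubleshooting commands; adjust it to whatever docker ps reports):
docker ps
docker logs -f aomail_project-backend_dev-1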
If you encounter database migration problems, run these commands:
sudo rm -fr backend/aomail/migrations
docker exec -it aomail_project-backend_dev-1 python manage.py makemigrations --empty aomail
./start.sh
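If the schema still looks out of date after regenerating migrations, applying them explicitly inside the backend container may help (standard Django command, same container as above):
docker exec -it aomail_project-backend_dev-1 python manage.py migrate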
Common port conflict issues:
- Check for any running containers using the same ports
- Look for conflicts between production/development containers
- Try updating ports in start.sh if needed
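To see what is holding a port (8090 is the frontend port used above; substitute any port from start.sh):
docker ps --filter "publish=8090"
sudo lsof -i :8090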
To serve the app from a subdomain, follow these steps in order:
- Configure your DNS record
- Update the reverse proxy settings
- Open required port:
sudo ufw allow PORT_NUMBER
- Add the subdomain to ALLOWED_HOSTS in start.sh
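A few commands that cover the DNS, firewall, and reverse-proxy steps above, assuming nginx as the reverse proxy and app.your-domain as the subdomain (both illustrative; use your actual names and PORT_NUMBER):
dig +short app.your-domain   # confirm the DNS record resolves to this server
sudo ufw allow 443   # open HTTPS in addition to the app port
sudo nginx -t && sudo systemctl reload nginx   # validate and reload the proxy config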
Q: How do you ensure email security?
A: We take security seriously:
- All emails are encrypted and stored in a secure database
- Our code is open source and publicly auditable
- We've received a 9.7/10 security rating from TAC Security
- No AI training is performed on your data
- The full security assessment report is available for download on our website
Q: How do you handle AI and data privacy?
A: We prioritize your privacy:
- No training is performed on your emails
- We use stateless API calls to LLM providers
- You can choose your preferred LLM provider (OpenAI, Anthropic, Llama, or Mistral)
Q: How long is the free trial?
A: We offer a 14-day free trial.
Q: Do I need to provide credit card information?
A: No, you can start your free trial without entering any payment information.
Q: Which email providers are supported?
A: Currently, we support:
- Gmail
- Outlook (beta)
Q: How does mailbox linking work?
A: We use industry-standard OAuth 2.0 for secure mailbox integration.
Q: How do I get unlimited access to Aomail?
A: You'll need to set up the admin dashboard to give yourself unlimited access. Check out our admin dashboard repository for setup instructions.
- Set up development environment (recommended):
python3 -m venv py_env
source py_env/bin/activate
pip install -r requirements.txt
- Fork repository
- Create feature branch
- Submit pull request
- Feature Requests: use our feature request template with the enhancement + backend/frontend labels
- Bug Reports: use our bug report template with the bug + backend/frontend labels
Similar Open Source Tools

lyraios
LYRAIOS (LLM-based Your Reliable AI Operating System) is an advanced AI assistant platform built with FastAPI and Streamlit, designed to serve as an operating system for AI applications. It offers core features such as AI process management, memory system, and I/O system. The platform includes built-in tools like Calculator, Web Search, Financial Analysis, File Management, and Research Tools. It also provides specialized assistant teams for Python and research tasks. LYRAIOS is built on a technical architecture comprising FastAPI backend, Streamlit frontend, Vector Database, PostgreSQL storage, and Docker support. It offers features like knowledge management, process control, and security & access control. The roadmap includes enhancements in core platform, AI process management, memory system, tools & integrations, security & access control, open protocol architecture, multi-agent collaboration, and cross-platform support.

Hacx-GPT
Hacx GPT is a cutting-edge AI tool developed by BlackTechX, inspired by WormGPT, designed to push the boundaries of natural language processing. It is an advanced broken AI model that facilitates seamless and powerful interactions, allowing users to ask questions and perform various tasks. The tool has been rigorously tested on platforms like Kali Linux, Termux, and Ubuntu, offering powerful AI conversations and the ability to do anything the user wants. Users can easily install and run Hacx GPT on their preferred platform to explore its vast capabilities.

rkllama
RKLLama is a server and client tool designed for running and interacting with LLM models optimized for Rockchip RK3588(S) and RK3576 platforms. It allows models to run on the NPU, with features such as running models on NPU, partial Ollama API compatibility, pulling models from Huggingface, API REST with documentation, dynamic loading/unloading of models, inference requests with streaming modes, simplified model naming, CPU model auto-detection, and optional debug mode. The tool supports Python 3.8 to 3.12 and has been tested on Orange Pi 5 Pro and Orange Pi 5 Plus with specific OS versions.

CrewAI-Studio
CrewAI Studio is an application with a user-friendly interface for interacting with CrewAI, offering support for multiple platforms and various backend providers. It allows users to run crews in the background, export single-page apps, and use custom tools for APIs and file writing. The roadmap includes features like better import/export, human input, chat functionality, automatic crew creation, and multiuser environment support.

swift-ocr-llm-powered-pdf-to-markdown
Swift OCR is a powerful tool for extracting text from PDF files using OpenAI's GPT-4 Turbo with Vision model. It offers flexible input options, advanced OCR processing, performance optimizations, structured output, robust error handling, and scalable architecture. The tool ensures accurate text extraction, resilience against failures, and efficient handling of multiple requests.

farfalle
Farfalle is an open-source AI-powered search engine that allows users to run their own local LLM or utilize the cloud. It provides a tech stack including Next.js for frontend, FastAPI for backend, Tavily for search API, Logfire for logging, and Redis for rate limiting. Users can get started by setting up prerequisites like Docker and Ollama, and obtaining API keys for Tavily, OpenAI, and Groq. The tool supports models like llama3, mistral, and gemma. Users can clone the repository, set environment variables, run containers using Docker Compose, and deploy the backend and frontend using services like Render and Vercel.

DeepSeekAI
DeepSeekAI is a browser extension plugin that allows users to interact with AI by selecting text on web pages and invoking the DeepSeek large model to provide AI responses. The extension enhances browsing experience by enabling users to get summaries or answers for selected text directly on the webpage. It features context text selection, API key integration, draggable and resizable window, AI streaming replies, Markdown rendering, one-click copy, re-answer option, code copy functionality, language switching, and multi-turn dialogue support. Users can install the extension from Chrome Web Store or Edge Add-ons, or manually clone the repository, install dependencies, and build the extension. Configuration involves entering the DeepSeek API key in the extension popup window to start using the AI-driven responses.

Zero
Zero is an open-source AI email solution that allows users to self-host their email app while integrating external services like Gmail. It aims to modernize and enhance emails through AI agents, offering features like open-source transparency, AI-driven enhancements, data privacy, self-hosting freedom, unified inbox, customizable UI, and developer-friendly extensibility. Built with modern technologies, Zero provides a reliable tech stack including Next.js, React, TypeScript, TailwindCSS, Node.js, Drizzle ORM, and PostgreSQL. Users can set up Zero using standard setup or Dev Container setup for VS Code users, with detailed environment setup instructions for Better Auth, Google OAuth, and optional GitHub OAuth. Database setup involves starting a local PostgreSQL instance, setting up database connection, and executing database commands for dependencies, tables, migrations, and content viewing.

Visionatrix
Visionatrix is a project aimed at providing easy use of ComfyUI workflows. It offers simplified setup and update processes, a minimalistic UI for daily workflow use, stable workflows with versioning and update support, scalability for multiple instances and task workers, multiple user support with integration of different user backends, LLM power for integration with Ollama/Gemini, and seamless integration as a service with backend endpoints and webhook support. The project is approaching version 1.0 release and welcomes new ideas for further implementation.

one
ONE is a modern web and AI agent development toolkit that empowers developers to build AI-powered applications with high performance, beautiful UI, AI integration, responsive design, type safety, and great developer experience. It is perfect for building modern web applications, from simple landing pages to complex AI-powered platforms.

probe
Probe is an AI-friendly, fully local, semantic code search tool designed to power the next generation of AI coding assistants. It combines the speed of ripgrep with the code-aware parsing of tree-sitter to deliver precise results with complete code blocks, making it perfect for large codebases and AI-driven development workflows. Probe is fully local, keeping code on the user's machine without relying on external APIs. It supports multiple languages, offers various search options, and can be used in CLI mode, MCP server mode, AI chat mode, and web interface. The tool is designed to be flexible, fast, and accurate, providing developers and AI models with full context and relevant code blocks for efficient code exploration and understanding.

CrewAI-GUI
CrewAI-GUI is a Node-Based Frontend tool designed to revolutionize AI workflow creation. It empowers users to design complex AI agent interactions through an intuitive drag-and-drop interface, export designs to JSON for modularity and reusability, and supports both GPT-4 API and Ollama for flexible AI backend. The tool ensures cross-platform compatibility, allowing users to create AI workflows on Windows, Linux, or macOS efficiently.