WritingTools
The world's smartest system-wide grammar assistant; a better version of the Apple Intelligence Writing Tools. Works on Windows, Linux, & macOS, with the free Gemini API, local LLMs, & more.
Stars: 672
Writing Tools is an Apple Intelligence-inspired application for Windows, Linux, and macOS that supercharges your writing with an AI LLM. It allows users to instantly proofread, optimize text, and summarize content from webpages, YouTube videos, documents, etc. The tool is privacy-focused, open-source, and supports multiple languages. It offers powerful features like grammar correction, content summarization, and LLM chat mode, making it a versatile writing assistant for various tasks.
README:
Instantly proofread and optimize your writing system-wide with AI:
https://github.com/user-attachments/assets/d3ce4694-b593-45ff-ae9a-892ce94b1dc8
Summarize content (webpages, YouTube videos, documents...) in a click:
https://github.com/user-attachments/assets/ad9729ab-75fb-404d-a2cf-a2c7b94351c3
Writing Tools is an Apple Intelligence-inspired application for Windows, Linux, and macOS that supercharges your writing with an AI LLM (cloud-based or local).
With one hotkey press system-wide, it lets you fix grammar, optimize text according to your instructions, summarize content (webpages, YouTube videos, etc.), and more.
It's currently the world's most intelligent system-wide grammar assistant and works in almost any language!
- Select any text on your PC and invoke Writing Tools with ctrl+space.
- Choose Proofread, Rewrite, Friendly, Professional, Concise, or even enter custom instructions (e.g., "add comments to this code", "make it title case", "translate to French").
- Your text will instantly be replaced with the AI-optimized version. Use ctrl+z to revert.
- Select all text in any webpage, document, email, etc., with ctrl+a, or select the transcript of a YouTube video (from its description).
- Choose Summary, Key Points, or Table after invoking Writing Tools.
- Get a pop-up summary with clear and beautiful formatting (with Markdown rendering), saving you hours.
- Press ctrl+space without selecting text to open a tiny prompt box and ask the LLM anything (e.g., "give me a template for my LLM notes"). The response will be typed into your textbox. (A brief sketch of how such a system-wide hotkey can be hooked follows this list.)
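For the technically curious, here is a minimal sketch of how a system-wide hotkey like ctrl+space can be registered in Python with pynput (the keyboard library credited in the contributors section). The handler below is a placeholder for illustration, not Writing Tools' actual code:
# Minimal sketch: register a global ctrl+space hotkey with pynput.
# on_activate is a placeholder; the real app would open its prompt window here.
from pynput import keyboard

def on_activate():
    print("Hotkey pressed - Writing Tools would show its popup here")

# GlobalHotKeys listens system-wide and blocks until the listener stops.
with keyboard.GlobalHotKeys({'<ctrl>+<space>': on_activate}) as listener:
    listener.join()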
Aside from being the only Windows/Linux program like Apple's Writing Tools, and the only way to use them on an Intel Mac:
- More intelligent than Apple's Writing Tools and Grammarly Premium: Apple uses a tiny 3B parameter model, while Writing Tools lets you use much more advanced models for free (e.g., Gemini 1.5 Flash [~25B]). Grammarly's rule-based NLP can't compete with LLMs.
- Completely free and open-source: No subscriptions or hidden costs. Bloat-free and uses 0% of your CPU when idle.
- Versatile AI LLM support: Jump in quickly with the free Gemini API, or use an extensive range of local LLMs (via Ollama [instructions], llama.cpp, KoboldCPP, TabbyAPI, vLLM, etc.) or cloud-based LLMs (ChatGPT, Mistral AI, etc.) through Writing Tools' OpenAI-API-compatibility (see the illustrative sketch after this list).
- Does not mess with your clipboard, and works system-wide.
- Privacy-focused: Your API key and config files stay on your device. NO logging, diagnostic collection, tracking, or ads. Invoked only on your command. Local LLMs keep your data on your device & work without the internet.
- Supports multiple languages: Works with any language and translates text better than Google Translate (type "translate to [language]" in the "Describe your change..." box).
- Code support: Fix, improve, translate, or add comments to code with the "Describe your change..." box.
- Themes, Dark Mode, & Customization: Choose between 2 themes: a blurry gradient theme and a plain theme that resembles the Windows + V pop-up! Also has full dark mode support. Set your own hotkey for quick access.
Writing Tools has been featured on Beebom, XDA, Neowin, and numerous others!
- Go to the Releases page and download the latest Writing.Tools.zip file.
- Extract it to your desired location, run Writing Tools.exe, and enjoy! :D
Note: Writing Tools is a portable app. If you extract it into a protected folder (e.g., Program Files), run it as administrator on first launch so it can create/edit its config file (in the same folder as its exe).
- To auto-start Writing Tools on boot, add a shortcut of Writing Tools.exe to the Windows Start-Up folder (type shell:startup in Run to get there).
On Linux, run it from the source code (instructions below).
The macOS version is a native Swift port, developed by Aryamirsepasi. View the README inside the macOS folder to learn more.
To install it:
- Go to the Releases page and download the latest .dmg file.
- Open the .dmg file and drag the writing-tools.app into the Applications folder. That's it!
- Proofread: The smartest grammar & spelling corrector. Sorry not sorry, Grammarly Premium.
- Rewrite: Improve the phrasing of your text.
- Make Friendly/Professional: Adjust the tone of your text.
- Custom Instructions: Tailor your request (e.g., "Translate to French") through the "Describe your change..." box.
The following options respond in a pop-up window (with Markdown rendering, selectable text, and a zoom level that is remembered across app restarts):
- Summarize: Create clear and concise summaries.
- Extract Key Points: Highlight the most important points.
- Create Tables: Convert text into a formatted table. PS: You can copy & paste the table into MS Word.
I believe strongly in protecting your privacy. Writing Tools:
- Does not collect or store any of your writing data by itself. It doesn't even collect general logs, so it's super light and privacy-friendly.
- Lets you use local LLMs to process your text entirely on-device.
- Only sends text to the chosen AI provider (encrypted) when you explicitly use one of the options.
- Only stores your API key locally on your device.
Note: If you choose to use a cloud-based LLM, refer to the AI provider's privacy policy and terms of service.
- Download and install Ollama.
- Choose an LLM from here. Recommended: Llama 3.1 8B (~8GB RAM or VRAM required).
- Run ollama run llama3.1:8b in your terminal to download and launch Llama 3.1.
- In Writing Tools, set the OpenAI-Compatible provider with:
  - API Key: ollama
  - API Base URL: http://localhost:11434/v1
  - API Model: llama3.1:8b
  (You can sanity-check this endpoint with the sketch below.)
- That's it! Enjoy Writing Tools with absolute privacy and no internet connection! 🎉 From now on, you'll simply need to launch Ollama and Writing Tools into the background for it to work.
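If you'd like to verify the local endpoint before pointing Writing Tools at it, here is a minimal sketch using the openai Python package with the same base URL, key, and model as the config above (the prompt is just a placeholder):
# Quick sanity check of Ollama's OpenAI-compatible endpoint.
# Uses the same values as the Writing Tools config above; the prompt is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
response = client.chat.completions.create(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Proofread: Their going too the park tomorow."}],
)
print(response.choices[0].message.content)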
- (Being investigated) On some devices, Writing Tools does not work correctly with the default hotkey. To fix it, simply change the hotkey to ctrl+` or ctrl+j and restart Writing Tools. PS: If a hotkey is already in use by a program or background process, Writing Tools may not be able to intercept it. The above hotkeys are usually unused.
- The initial launch of Writing Tools.exe might take unusually long. This seems to be because AV software extensively scans the new executable before letting it run. Once it launches into the background in RAM, it works instantly as usual.
If you prefer to run the program directly from the main.py file, follow these OS-specific instructions.
1. Download the Code
- Click the green <> Code ▼ button toward the very top of this page, and click Download ZIP.
2. Install Dependencies
After extracting the folder, open your Terminal (or Command Prompt) in the relevant directory.
- Windows:
  cd path\to\Windows_and_Linux
  pip install -r requirements.txt
- Linux:
  cd /path/to/Windows_and_Linux
  pip3 install -r requirements.txt
Of course, you'll need to have Python installed!
3. Run the Program
- Windows:
  pythonw main.py
- Linux:
  python3 main.py
Here's how to compile it with PyInstaller and a virtual environment:
- First, create and activate a virtual environment:
# Install virtualenv if you haven't already
pip install virtualenv
# Create a new virtual environment
virtualenv myvenv
# Activate it
# On Windows:
myvenv\Scripts\activate
# On Linux:
source myvenv/bin/activate
- Once activated, install the required packages:
pip install -r requirements.txt
- Build Writing Tools:
python pyinstaller-build-script.py
macOS Version (by Aryamirsepasi) build instructions:
- Install Xcode
  - Ensure you have Xcode installed on your macOS system.
  - Download it from the Mac App Store.
- Clone the Repository to your local machine
git clone https://github.com/theJayTea/WritingTools.git
cd WritingTools
- Open the Project in Xcode
  - Open Xcode.
  - Select File > Open from the menu bar.
  - Navigate to the macOS folder and select it.
- Generate the Project File
  Run the following command to generate the .xcodeproj file:
  swift package generate-xcodeproj
- Build the Project
  - Select your target device as My Mac in Xcode.
  - Build the project by clicking the Play button (or pressing Command + R).
- Run the App
  - After the build is successful, the app will launch automatically.
Writing Tools would not be where it is today without its amazing contributors:
1. Cameron Redmore (CameronRedmore):
Extensively refactored Writing Tools and added OpenAI Compatible API support, streamed responses, and the chat mode when no text is selected.
2. momokrono:
Added Linux support and switched to the pynput API to improve Windows stability. Fixed misc. bugs, such as handling quitting onboarding without completing it. @momokrono has been super kind and helpful, and I'm very grateful to have them as a contributor - Jesai.
3. Disneyhockey40 (Soszust40):
Helped add dark mode, the plain theme, tray menu fixes, and UI improvements.
4. Helped improve the reliability of text selection.
5. raghavdhingra24:
Made the rounded corners anti-aliased & prettier.
6. ErrorCatDev:
Significantly improved the About window, making it scrollable and cleaning things up. Also improved our .gitignore & requirements.txt.
7. Vadim Karpenko:
Helped add the start-on-boot setting!
A native Swift port created entirely by Aryamirsepasi! This was a big endeavour and they've done an amazing job. We're grateful to have them as a contributor. 🫡
I welcome contributions! :D
If you'd like to improve Writing Tools, please feel free to open a Pull Request or get in touch with me.
If there are major changes on your mind, it may be a good idea to get in touch before working on it.
Email: [email protected]
Made with ❤️ by a high school student. Check out my other app, Bliss AI, a free AI tutor!
Distributed under the GNU General Public License v3.0.
Alternative AI tools for WritingTools
Similar Open Source Tools
llm-autoeval
LLM AutoEval is a tool that simplifies the process of evaluating Large Language Models (LLMs) using a convenient Colab notebook. It automates the setup and execution of evaluations using RunPod, allowing users to customize evaluation parameters and generate summaries that can be uploaded to GitHub Gist for easy sharing and reference. LLM AutoEval supports various benchmark suites, including Nous, Lighteval, and Open LLM, enabling users to compare their results with existing models and leaderboards.
easydiffusion
Easy Diffusion 3.0 is a user-friendly tool for installing and using Stable Diffusion on your computer. It offers hassle-free installation, clutter-free UI, task queue, intelligent model detection, live preview, image modifiers, multiple prompts file, saving generated images, UI themes, searchable models dropdown, and supports various image generation tasks like 'Text to Image', 'Image to Image', and 'InPainting'. The tool also provides advanced features such as custom models, merge models, custom VAE models, multi-GPU support, auto-updater, developer console, and more. It is designed for both new users and advanced users looking for powerful AI image generation capabilities.
LLMstudio
LLMstudio by TensorOps is a platform that offers prompt engineering tools for accessing models from providers like OpenAI, VertexAI, and Bedrock. It provides features such as Python Client Gateway, Prompt Editing UI, History Management, and Context Limit Adaptability. Users can track past runs, log costs and latency, and export history to CSV. The tool also supports automatic switching to larger-context models when needed. Coming soon features include side-by-side comparison of LLMs, automated testing, API key administration, project organization, and resilience against rate limits. LLMstudio aims to streamline prompt engineering, provide execution history tracking, and enable effortless data export, offering an evolving environment for teams to experiment with advanced language models.
Local-File-Organizer
The Local File Organizer is an AI-powered tool designed to help users organize their digital files efficiently and securely on their local device. By leveraging advanced AI models for text and visual content analysis, the tool automatically scans and categorizes files, generates relevant descriptions and filenames, and organizes them into a new directory structure. All AI processing occurs locally using the Nexa SDK, ensuring privacy and security. With support for multiple file types and customizable prompts, this tool aims to simplify file management and bring order to users' digital lives.
t3rn-airdrop-bot
A bot designed to automate transactions and bridge assets on the t3rn network, making the process seamless and efficient. It supports multiple wallets through a JSON file containing private keys, with robust error handling and retry mechanisms. The tool is user-friendly, easy to set up, and supports bridging from Optimism Sepolia and Arbitrum Sepolia.
maige
Maige is a tool designed to simplify repository maintenance by automating the handling of issue labels. Users can quickly set up Maige to let AI manage their issue labels effortlessly. The tool provides guidance on self-hosting, GitHub app integration, environment variables setup, and offers commands for streamlined issue management. Maige aims to streamline the process of managing issues in a repository, making it easier for users to handle tasks related to labeling and tracking issues.
llm-answer-engine
This repository contains the code and instructions needed to build a sophisticated answer engine that leverages the capabilities of Groq, Mistral AI's Mixtral, Langchain.JS, Brave Search, Serper API, and OpenAI. Designed to efficiently return sources, answers, images, videos, and follow-up questions based on user queries, this project is an ideal starting point for developers interested in natural language processing and search technologies.
kollektiv
Kollektiv is a Retrieval-Augmented Generation (RAG) system designed to enable users to chat with their favorite documentation easily. It aims to provide LLMs with access to the most up-to-date knowledge, reducing inaccuracies and improving productivity. The system utilizes intelligent web crawling, advanced document processing, vector search, multi-query expansion, smart re-ranking, AI-powered responses, and dynamic system prompts. The technical stack includes Python/FastAPI for backend, Supabase, ChromaDB, and Redis for storage, OpenAI and Anthropic Claude 3.5 Sonnet for AI/ML, and Chainlit for UI. Kollektiv is licensed under a modified version of the Apache License 2.0, allowing free use for non-commercial purposes.
transcriptionstream
Transcription Stream is a self-hosted diarization service that works offline, allowing users to easily transcribe and summarize audio files. It includes a web interface for file management, Ollama for complex operations on transcriptions, and Meilisearch for fast full-text search. Users can upload files via SSH or web interface, with output stored in named folders. The tool requires a NVIDIA GPU and provides various scripts for installation and running. Ports for SSH, HTTP, Ollama, and Meilisearch are specified, along with access details for SSH server and web interface. Customization options and troubleshooting tips are provided in the documentation.
restai
RestAI is an AIaaS (AI as a Service) platform that allows users to create and consume AI agents (projects) using a simple REST API. It supports various types of agents, including RAG (Retrieval-Augmented Generation), RAGSQL (RAG for SQL), inference, vision, and router. RestAI features automatic VRAM management, support for any public LLM supported by LlamaIndex or any local LLM supported by Ollama, a user-friendly API with Swagger documentation, and a frontend for easy access. It also provides evaluation capabilities for RAG agents using deepeval.
gemini_multipdf_chat
Gemini PDF Chatbot is a Streamlit-based application that allows users to chat with a conversational AI model trained on PDF documents. The chatbot extracts information from uploaded PDF files and answers user questions based on the provided context. It features PDF upload, text extraction, conversational AI using the Gemini model, and a chat interface. Users can deploy the application locally or to the cloud, and the project structure includes main application script, environment variable file, requirements, and documentation. Dependencies include PyPDF2, langchain, Streamlit, google.generativeai, and dotenv.
LocalAIVoiceChat
LocalAIVoiceChat is an experimental alpha software that enables real-time voice chat with a customizable AI personality and voice on your PC. It integrates Zephyr 7B language model with speech-to-text and text-to-speech libraries. The tool is designed for users interested in state-of-the-art voice solutions and provides an early version of a local real-time chatbot.
superduper
superduper.io is a Python framework that integrates AI models, APIs, and vector search engines directly with existing databases. It allows hosting of models, streaming inference, and scalable model training/fine-tuning. Key features include integration of AI with data infrastructure, inference via change-data-capture, scalable model training, model chaining, simple Python interface, Python-first approach, working with difficult data types, feature storing, and vector search capabilities. The tool enables users to turn their existing databases into centralized repositories for managing AI model inputs and outputs, as well as conducting vector searches without the need for specialized databases.
omniscient
Omniscient is an advanced AI Platform offered as a SaaS, empowering projects with cutting-edge artificial intelligence capabilities. Seamlessly integrating with Next.js 14, React, Typescript, and APIs like OpenAI and Replicate, it provides solutions for code generation, conversation simulation, image creation, music composition, and video generation.
bittensor
Bittensor is an internet-scale neural network that incentivizes computers to provide access to machine learning models in a decentralized and censorship-resistant manner. It operates through a token-based mechanism where miners host, train, and procure machine learning systems to fulfill verification problems defined by validators. The network rewards miners and validators for their contributions, ensuring continuous improvement in knowledge output. Bittensor allows anyone to participate, extract value, and govern the network without centralized control. It supports tasks such as generating text, audio, images, and extracting numerical representations.
For similar tasks
generative-ai-use-cases-jp
Generative AI (生成 AI) brings revolutionary potential to transform businesses. This repository demonstrates business use cases leveraging Generative AI.
AlwaysReddy
AlwaysReddy is a simple LLM assistant with no UI that you interact with entirely using hotkeys. It can easily read from or write to your clipboard, and voice chat with you via TTS and STT. Here are some of the things you can use AlwaysReddy for: - Explain a new concept to AlwaysReddy and have it save the concept (in roughly your words) into a note. - Ask AlwaysReddy "What is X called?" when you know how to roughly describe something but can't remember what it is called. - Have AlwaysReddy proofread the text in your clipboard before you send it. - Ask AlwaysReddy "From the comments in my clipboard, what do the r/LocalLLaMA users think of X?" - Quickly list what you have done today and get AlwaysReddy to write a journal entry to your clipboard before you shutdown the computer for the day.
nlp-llms-resources
The 'nlp-llms-resources' repository is a comprehensive resource list for Natural Language Processing (NLP) and Large Language Models (LLMs). It covers a wide range of topics including traditional NLP datasets, data acquisition, libraries for NLP, neural networks, sentiment analysis, optical character recognition, information extraction, semantics, topic modeling, multilingual NLP, domain-specific LLMs, vector databases, ethics, costing, books, courses, surveys, aggregators, newsletters, papers, conferences, and societies. The repository provides valuable information and resources for individuals interested in NLP and LLMs.
Awesome-Segment-Anything
Awesome-Segment-Anything is a powerful tool for segmenting and extracting information from various types of data. It provides a user-friendly interface to easily define segmentation rules and apply them to text, images, and other data formats. The tool supports both supervised and unsupervised segmentation methods, allowing users to customize the segmentation process based on their specific needs. With its versatile functionality and intuitive design, Awesome-Segment-Anything is ideal for data analysts, researchers, content creators, and anyone looking to efficiently extract valuable insights from complex datasets.
fairseq
Fairseq is a sequence modeling toolkit that enables researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. It provides reference implementations of various sequence modeling papers covering CNN, LSTM networks, Transformer networks, LightConv, DynamicConv models, Non-autoregressive Transformers, Finetuning, and more. The toolkit supports multi-GPU training, fast generation on CPU and GPU, mixed precision training, extensibility, flexible configuration based on Hydra, and full parameter and optimizer state sharding. Pre-trained models are available for translation and language modeling with a torch.hub interface. Fairseq also offers pre-trained models and examples for tasks like XLS-R, cross-lingual retrieval, wav2vec 2.0, unsupervised quality estimation, and more.
transcriptionstream
Transcription Stream is a self-hosted diarization service that works offline, allowing users to easily transcribe and summarize audio files. It includes a web interface for file management, Ollama for complex operations on transcriptions, and Meilisearch for fast full-text search. Users can upload files via SSH or web interface, with output stored in named folders. The tool requires a NVIDIA GPU and provides various scripts for installation and running. Ports for SSH, HTTP, Ollama, and Meilisearch are specified, along with access details for SSH server and web interface. Customization options and troubleshooting tips are provided in the documentation.
obsidian-textgenerator-plugin
Text Generator is an open-source AI Assistant Tool that leverages Generative Artificial Intelligence to enhance knowledge creation and organization in Obsidian. It allows users to generate ideas, titles, summaries, outlines, and paragraphs based on their knowledge database, offering endless possibilities. The plugin is free and open source, compatible with Obsidian for a powerful Personal Knowledge Management system. It provides flexible prompts, template engine for repetitive tasks, community templates for shared use cases, and highly flexible configuration with services like Google Generative AI, OpenAI, and HuggingFace.
For similar jobs
ChatFAQ
ChatFAQ is an open-source comprehensive platform for creating a wide variety of chatbots: generic ones, business-trained, or even capable of redirecting requests to human operators. It includes a specialized NLP/NLG engine based on a RAG architecture and customized chat widgets, ensuring a tailored experience for users and avoiding vendor lock-in.
anything-llm
AnythingLLM is a full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use as well as supporting multi-user management and permissions.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
mikupad
mikupad is a lightweight and efficient language model front-end powered by ReactJS, all packed into a single HTML file. Inspired by the likes of NovelAI, it provides a simple yet powerful interface for generating text with the help of various backends.
glide
Glide is a cloud-native LLM gateway that provides a unified REST API for accessing various large language models (LLMs) from different providers. It handles LLMOps tasks such as model failover, caching, key management, and more, making it easy to integrate LLMs into applications. Glide supports popular LLM providers like OpenAI, Anthropic, Azure OpenAI, AWS Bedrock (Titan), Cohere, Google Gemini, OctoML, and Ollama. It offers high availability, performance, and observability, and provides SDKs for Python and NodeJS to simplify integration.
onnxruntime-genai
ONNX Runtime Generative AI is a library that provides the generative AI loop for ONNX models, including inference with ONNX Runtime, logits processing, search and sampling, and KV cache management. Users can call a high level `generate()` method, or run each iteration of the model in a loop. It supports greedy/beam search and TopP, TopK sampling to generate token sequences, has built in logits processing like repetition penalties, and allows for easy custom scoring.
firecrawl
Firecrawl is an API service that takes a URL, crawls it, and converts it into clean markdown. It crawls all accessible subpages and provides clean markdown for each, without requiring a sitemap. The API is easy to use and can be self-hosted. It also integrates with Langchain and Llama Index. The Python SDK makes it easy to crawl and scrape websites in Python code.