grammar-llm

AI-powered grammar checker using fine-tuned language models to fix grammatical errors in text.

GrammarLLM is an AI-powered grammar correction tool that utilizes fine-tuned language models to fix grammatical errors in text. It offers real-time grammar and spelling correction with individual suggestion acceptance. The tool features a clean and responsive web interface, a FastAPI backend integrated with llama.cpp, and support for multiple grammar models. Users can easily deploy the tool using Docker Compose and interact with it through a web interface or REST API. The default model, GRMR-V3-G4B-Q8_0, provides grammar correction, spelling correction, punctuation fixes, and style improvements without requiring a GPU. The tool also includes endpoints for applying single or multiple suggestions to text, a health check endpoint, and detailed documentation for functionality and model details. Testing and verification steps are provided for manual and Docker testing, along with community guidelines for contributing, reporting issues, and getting support.

README:

GrammarLLM

GrammarLLM is an open-source framework for automated grammar correction, writing quality assessment, and structured feedback generation using fine-tuned large language models. It performs sentence-level error detection and correction, computes quantitative writing quality scores based on detected error spans, and generates detailed, downloadable PDF reports with highlighted differences between original and corrected text. Designed for reproducible experimentation and evaluation, GrammarLLM provides a REST API and web interface for integration into research workflows and supports CPU-based execution without requiring GPU resources.

Features

  • Real-time grammar and spelling correction
  • AI-powered suggestions using fine-tuned LLMs
  • Writing quality scoring (0–100) based on error-to-word ratio
  • PDF report generation with visually highlighted original and corrected sentences
  • Individual suggestion acceptance
  • Clean, responsive web interface
  • FastAPI backend with llama.cpp integration
  • Support for multiple grammar models
  • Doesn't require a GPU
  • REST API for programmatic access

Docker Deployment

Using Docker Compose (Recommended)

docker-compose up -d

Installation

  1. Clone the repository:
git clone https://github.com/whiteh4cker-tr/grammar-llm.git
cd grammar-llm
  2. Create a virtual environment (recommended):
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:
pip install -r requirements.txt

Usage

  1. Start the application:
uvicorn main:app --reload --host 0.0.0.0 --port 8000
  2. Open your browser and navigate to:
http://localhost:8000

Example Usage

Web Interface

Simply paste or type your text in the editor and click "Check Grammar". The application will:

  1. Analyze your text and display suggestions with highlighted differences
  2. Calculate and display a writing quality score (0–100) based on the ratio of errors to total words (see the sketch after this list)
  3. Provide a "Download Report" button to generate a PDF report containing:
    • Writing quality score
    • All suggestions with original and corrected sentences
    • Visual highlighting of error words (red) and corrections (green)
    • WCAG 2.0 AA compliant color contrast for accessibility
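
The exact scoring formula lives in the application code; as a rough illustration, a score derived from the error-to-word ratio could be computed like the sketch below. The clamping and rounding here are assumptions, not necessarily the project's exact implementation.

# Hypothetical sketch of an error-to-word-ratio score on a 0-100 scale.
# GrammarLLM's actual formula may differ in details such as rounding.
def quality_score(error_count: int, word_count: int) -> int:
    if word_count == 0:
        return 100  # nothing to grade
    ratio = error_count / word_count
    return round(max(0.0, 1.0 - ratio) * 100)

print(quality_score(2, 10))  # -> 80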

API Usage

The application exposes a REST API for programmatic access:

# Send text for correction
curl -X POST "http://localhost:8000/correct" \
  -H "Content-Type: application/json" \
  -d '{"text": "your text here"}'

Using Python

import requests, json

URL = "http://localhost:8000/correct"
payload = {"text": "She dont like the apples. this is a bad sentence"}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.status_code)
print(json.dumps(resp.json(), indent=2, ensure_ascii=False))

📦 Using Postman

  • Method: POST
  • URL: http://localhost:8000/correct
  • Headers: Content-Type: application/json
  • Body (raw → JSON): {"text": "She dont like the apples. this is a bad sentence"}

Output (when corrections are suggested)

{
    "suggestions": [
        {
            "original": "She dont like the apples. this is a bad sentence",
            "corrected": "She doesn't like the apples. This is a bad sentence",
            "sentence": "Sentence 1",
            "start_index": 0,
            "end_index": 48,
            "original_highlighted": "She <span class=\"error-word\">dont</span> like the apples. <span class=\"error-word\">this</span> is a bad sentence",
            "corrected_highlighted": "She <span class=\"corrected-word\">doesn</span><span class=\"corrected-word\">'</span><span class=\"corrected-word\">t</span> like the apples. <span class=\"corrected-word\">This</span> is a bad sentence"
        }
    ],
    "corrected_text": "She doesn't like the apples. This is a bad sentence"
}

Output (when input is already correct)

{
    "suggestions": [],
    "corrected_text": "This is a good sentence. This is another good sentence."
}

Configuration

The application uses the GRMR-V3-G4B-Q8_0 model by default. The model will be automatically downloaded on first run (approx. 4.13GB).
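
If you want to fetch the GGUF file ahead of time (for example, for an offline deployment), a tool such as huggingface_hub can pre-populate the local cache. The repository ID and filename below are placeholders, not confirmed values; use whatever the application's configuration actually points to.

# Optional pre-download sketch. repo_id and filename are PLACEHOLDERS;
# substitute the values GrammarLLM is configured to use.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-org/GRMR-V3-G4B-GGUF",
    filename="GRMR-V3-G4B-Q8_0.gguf",
)
print("Model cached at:", path)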

Functionality Documentation

Core Features

Grammar Correction Endpoint

  • Endpoint: POST /correct
  • Request Body: {"text": "your text here"}
  • Response: Returns a CorrectionResponse object containing:
    • suggestions (List[Suggestion]): a list of per-sentence suggestion objects. Each suggestion includes original, corrected, sentence, start_index, end_index, and HTML-highlighted fields (original_highlighted, corrected_highlighted). Only sentences with a meaningful correction are included; suggestions may be empty.
    • corrected_text: The fully corrected version of the input text (this field is always returned)
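
For example, a client can walk the suggestions list and print each correction alongside its position in the input, using the fields documented above:

import requests

resp = requests.post(
    "http://localhost:8000/correct",
    json={"text": "She dont like the apples. this is a bad sentence"},
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

for s in data["suggestions"]:
    print(f'{s["sentence"]} [{s["start_index"]}:{s["end_index"]}]')
    print("  original :", s["original"])
    print("  corrected:", s["corrected"])

print("Corrected text:", data["corrected_text"])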

Apply Suggestion Endpoint

  • Endpoint: POST /apply-suggestion
  • Use Case: Apply a single suggestion to the original text
  • Request Parameters: Original text, suggestion index, and suggestions list
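
A rough request sketch is shown below; the field names (text, suggestion_index, suggestions) are assumptions, so consult the auto-generated FastAPI docs at http://localhost:8000/docs for the authoritative schema.

import requests

BASE = "http://localhost:8000"
text = "She dont like the apples."

# First obtain suggestions from /correct, then apply one of them.
suggestions = requests.post(f"{BASE}/correct", json={"text": text}, timeout=60).json()["suggestions"]

# Field names in this body are assumptions, not confirmed by the README.
resp = requests.post(
    f"{BASE}/apply-suggestion",
    json={"text": text, "suggestion_index": 0, "suggestions": suggestions},
    timeout=60,
)
print(resp.json())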

Apply Multiple Suggestions Endpoint

  • Endpoint: POST /apply-suggestions
  • Use Case: Apply multiple suggestions to the original text at once
  • Features: Handles overlapping suggestions intelligently by keeping the rightmost replacement
  • Note: This endpoint is available for programmatic API clients. The web frontend applies suggestions one at a time using the /apply-suggestion endpoint instead.
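
A similar sketch for applying several suggestions in one call might look like the following; again, the field names are assumptions and the /docs page documents the real schema.

import requests

BASE = "http://localhost:8000"
text = "She dont like the apples. this is a bad sentence"

suggestions = requests.post(f"{BASE}/correct", json={"text": text}, timeout=60).json()["suggestions"]

# Hypothetical request body; field names are assumptions.
resp = requests.post(
    f"{BASE}/apply-suggestions",
    json={
        "text": text,
        "suggestion_indices": list(range(len(suggestions))),  # apply all suggestions
        "suggestions": suggestions,
    },
    timeout=60,
)
print(resp.json())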

Health Check Endpoint

  • Endpoint: GET /health
  • Response: Returns status of the application

Model Details

  • Model: GRMR-V3-G4B (Quantized to 8-bit)
  • Context Window: 4096 tokens
  • Capabilities: Grammar correction, spelling correction, punctuation fixes, and style improvements
  • GPU Required: No - runs on CPU with llama.cpp
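
For reference, loading a GGUF model of this kind for CPU inference with llama-cpp-python typically looks like the sketch below; the model path is a placeholder, and GrammarLLM's own startup code may configure things differently.

# Minimal llama-cpp-python sketch: CPU-only, 4096-token context window.
from llama_cpp import Llama

llm = Llama(
    model_path="models/GRMR-V3-G4B-Q8_0.gguf",  # placeholder path
    n_ctx=4096,       # matches the documented context window
    n_gpu_layers=0,   # keep everything on the CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "She dont like the apples."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])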

Testing & Verification

Manual Testing Steps

  1. Verify Application Start

    uvicorn main:app --reload --host 0.0.0.0 --port 8000

    Expected console output:

    ============================================================
    GrammarLLM
    ============================================================
    Server starting on http://localhost:8000
    (Also accessible on http://127.0.0.1:8000)
    ============================================================
    
  2. Test Health Check

    curl http://localhost:8000/health

    Expected response: {"status":"healthy","model_loaded":true}
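
These checks can also be scripted; a minimal smoke test, assuming the server is already running on localhost:8000, might look like:

import requests

BASE = "http://localhost:8000"

# Health check: expect {"status": "healthy", "model_loaded": true}
health = requests.get(f"{BASE}/health", timeout=10).json()
assert health["status"] == "healthy", health
assert health["model_loaded"] is True, health

# Round-trip a simple correction request.
resp = requests.post(f"{BASE}/correct", json={"text": "She dont like it."}, timeout=120)
assert resp.status_code == 200, resp.text
assert "corrected_text" in resp.json()

print("Smoke test passed.")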

Docker Testing

docker-compose up
curl http://localhost:8000/health

Expected: Application is accessible and responsive

Community Guidelines

Contributing

We welcome contributions from the community! Here's how you can help:

  1. Fork the Repository

    git clone https://github.com/whiteh4cker-tr/grammar-llm.git
    cd grammar-llm
  2. Create a Feature Branch

    git checkout -b your-feature-name
  3. Make Your Changes

    • Ensure your code follows the existing style
    • Test your changes thoroughly
    • Update documentation as needed
  4. Submit a Pull Request

    • Push your changes to your fork
    • Open a pull request describing your changes
    • Link any related issues
    • Wait for review and feedback

Reporting Issues

Found a bug or have a feature request? Please open an issue on GitHub:

  1. Check existing issues to avoid duplicates

  2. Provide detailed information:

    • Description of the problem
    • Steps to reproduce
    • Expected vs. actual behavior
    • System information (OS, Python version, etc.)
    • Console output or error messages
  3. Use clear titles and descriptions

Getting Support

  • GitHub Issues: For bug reports and feature requests
  • Documentation: Check the README and code comments for detailed information
  • Discussions: Use GitHub Discussions for general questions and support

Code of Conduct

Please be respectful and constructive in all interactions with other community members.
