readme-ai
README file generator, powered by AI.
Stars: 1491
README-AI is a developer tool that auto-generates README.md files using a combination of data extraction and generative AI. It streamlines documentation creation and maintenance, enhancing developer productivity. This project aims to enable developers of all skill levels, across all domains, to better understand, use, and contribute to open-source software. It offers flexible README generation, supports multiple large language models (LLMs), provides customizable output options, works with various programming languages and project types, and includes an offline mode for generating boilerplate README files without external API calls.
Designed for simplicity, customization, and developer productivity.
- Introduction
- Demo
- Features
- Quickstart
- Configuration
- Examples
- Contributing
[!IMPORTANT] ✨ See the Official Documentation for more details.
Objective
README-AI is a developer tool for automatically generating README markdown files using a robust repository processor engine and generative AI. Simply provide a repository URL or local path to your codebase, and a well-structured and detailed README file will be generated for you.
Motivation
This project aims to streamline the documentation process for developers, ensuring projects are properly documented and easy to understand. Whether you're working on an open-source project, enterprise software, or a personal project, README-AI is here to help you create high-quality documentation quickly and efficiently.
Running from the command line:
Running directly in your browser:
- Automated Documentation: Synchronizes data from third-party sources and generates documentation automatically.
- Customizable Output: Dozens of options for styling/formatting, badges, header designs, and more.
- Language Agnostic: Works across a wide range of programming languages and project types.
- Multi-LLM Support: Compatible with OpenAI, Ollama, Anthropic, Google Gemini, and Offline Mode.
- Offline Mode: Generate a boilerplate README without calling an external API.
- Markdown Best Practices: Leverage best practices in Markdown formatting for clean, professional-looking docs.
A few combinations of README styles and configurations:
See the Configuration section for a complete list of CLI options.
Overview
Overview: A high-level introduction to the project, focused on the value proposition and use cases rather than technical aspects.
Features
Features Table: A generated markdown table that highlights the key technical features and components of the codebase. This table is generated using a structured prompt template.
Codebase Documentation
Directory Tree: The project's directory structure is generated using pure Python and embedded in the README. See readmeai.generators.tree for more details.
File Summaries: Summarizes key modules of the project, which are also used as context for downstream prompts.
Quickstart Instructions
Getting Started Guides: Prerequisites and system requirements are extracted from the codebase during preprocessing. The parsers handle the majority of this logic currently.
Installation Guide
Contributing Guidelines
System Requirements:
- Python 3.9+
- Package Manager/Container: pip, pipx, or docker
- LLM API Service: OpenAI, Ollama, Anthropic, Google Gemini, or Offline Mode
Repository URL or Path:
Make sure to have a repository URL or local directory path ready for the CLI.
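Both forms work the same way from the CLI. A minimal sketch (assuming readme-ai is installed; the local path below is a placeholder for your own project):
# Remote repository
❯ readmeai -r https://github.com/eli64s/readme-ai
# Local directory
❯ readmeai -r ./readme-ai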
LLM API Service:
- OpenAI: Recommended, requires an account setup and API key.
- Ollama: Free and open-source, potentially slower and more resource-intensive.
- Anthropic: Requires an Anthropic account and API key.
- Google Gemini: Requires a Google Cloud account and API key.
- Offline Mode: Generates a boilerplate README without making API calls.
Install readme-ai using your preferred package manager, container, or directly from the source.
❯ pip install readmeai
❯ pipx install readmeai
[!TIP]
Use pipx to install and run Python command-line applications without causing dependency conflicts with other packages!
Pull the latest Docker image from the Docker Hub repository.
❯ docker pull zeroxeli/readme-ai:latest
Build readme-ai from source:
❯ bash setup/setup.sh
- Clone the repository:
❯ git clone https://github.com/eli64s/readme-ai
- Navigate to the readme-ai directory:
❯ cd readme-ai
- Install dependencies using poetry:
❯ poetry install
- Enter the poetry shell environment:
❯ poetry shell
To use the Anthropic and Google Gemini clients, install the optional dependencies.
Anthropic:
❯ pip install readmeai[anthropic]
Google Gemini:
❯ pip install readmeai[gemini]
OpenAI
Generate an OpenAI API key and set it as the OPENAI_API_KEY environment variable.
# Using Linux or macOS
❯ export OPENAI_API_KEY=<your_api_key>
# Using Windows
❯ set OPENAI_API_KEY=<your_api_key>
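To persist the key across sessions, one option is to append the export to your shell profile (a sketch assuming bash; adjust the file for zsh or fish):
❯ echo 'export OPENAI_API_KEY=<your_api_key>' >> ~/.bashrc
❯ source ~/.bashrc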
Ollama
Pull your model of choice from the Ollama repository:
❯ ollama pull mistral:latest
Start the Ollama server:
❯ export OLLAMA_HOST=127.0.0.1 && ollama serve
See all available models from Ollama here.
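Before generating, it can help to confirm the server is reachable. A quick check, assuming Ollama's standard default port of 11434:
❯ curl http://127.0.0.1:11434
# A running server replies with "Ollama is running"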
Anthropic
Generate an Anthropic API key and set the following environment variable:
❯ export ANTHROPIC_API_KEY=<your_api_key>
Google Gemini
Generate a Google API key and set the following environment variable:
❯ export GOOGLE_API_KEY=<your_api_key>
With OpenAI:
❯ readmeai --api openai --repository https://github.com/eli64s/readme-ai
[!IMPORTANT] By default, the gpt-3.5-turbo model is used. Higher costs may be incurred when using more advanced models.
With Ollama:
❯ readmeai --api ollama --model llama3 --repository https://github.com/eli64s/readme-ai
With Anthropic:
❯ readmeai --api anthropic -m claude-3-5-sonnet-20240620 -r https://github.com/eli64s/readme-ai
With Gemini:
❯ readmeai --api gemini -m gemini-1.5-flash -r https://github.com/eli64s/readme-ai
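With Offline Mode (no API key required; a minimal sketch using the offline provider described above):
❯ readmeai --api offline -r https://github.com/eli64s/readme-ai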
Adding more customization options:
❯ readmeai --repository https://github.com/eli64s/readme-ai \
--output readmeai.md \
--api openai \
--model gpt-4 \
--badge-color A931EC \
--badge-style flat-square \
--header-style compact \
--toc-style fold \
--temperature 0.9 \
--tree-depth 2 \
--image LLM \
--emojis
Running the Docker container with the OpenAI API:
❯ docker run -it \
-e OPENAI_API_KEY=$OPENAI_API_KEY \
-v "$(pwd)":/app zeroxeli/readme-ai:latest \
-r https://github.com/eli64s/readme-ai
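To document a local project with Docker instead of a remote URL, one sketch is to point -r at the volume mounted above (treating /app as the repository path inside the container is an assumption about the image's layout):
❯ docker run -it \
-e OPENAI_API_KEY=$OPENAI_API_KEY \
-v "$(pwd)":/app zeroxeli/readme-ai:latest \
-r /app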
Try readme-ai directly in your browser, no installation required. See the readme-ai-streamlit repository for more details.
Using readme-ai
❯ conda activate readmeai
❯ python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
❯ poetry shell
❯ poetry run python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
The pytest framework and nox automation tool are used for testing the application.
❯ make test
❯ make test-nox
[!TIP] Use nox to test the application against multiple Python environments and dependencies!
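If the make targets are unavailable on your system, invoking the test runners directly is an option (a sketch assuming the poetry environment from the source install):
❯ poetry run pytest
❯ poetry run nox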
Customize your README generation using these CLI options:
Option | Description | Default |
---|---|---|
--align | Text align in header | center |
--api | LLM API service provider | offline |
--badge-color | Badge color name or hex code | 0080ff |
--badge-style | Badge icon style type | flat |
--base-url | Base URL for the repository | v1/chat/completions |
--context-window | Maximum context window of the LLM API | 3900 |
--emojis | Adds emojis to the README header sections | False |
--header-style | Header template style | classic |
--image | Project logo image | blue |
--model | Specific LLM model to use | gpt-3.5-turbo |
--output | Output filename | readme-ai.md |
--rate-limit | Maximum API requests per minute | 10 |
--repository | Repository URL or local directory path | None |
--temperature | Creativity level for content generation | 0.1 |
--toc-style | Table of contents template style | bullet |
--top-p | Probability of the top-p sampling method | 0.9 |
--tree-depth | Maximum depth of the directory tree structure | 2 |
[!TIP] For a full list of options, run readmeai --help in your terminal.
To see the full list of customization options, check out the Configuration section in the official documentation. This section provides a detailed overview of all available CLI options and how to use them, including badge styles, header templates, and more.
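For instance, generation creativity can be tuned independently of styling via --temperature and --top-p (the repository path below is a placeholder and the values are illustrative, not recommended defaults):
❯ readmeai -r ./my-project \
--temperature 0.4 \
--top-p 0.95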
Language/Framework | Output File | Input Repository | Description |
---|---|---|---|
Python | readme-python.md | readme-ai | Core readme-ai project |
TypeScript & React | readme-typescript.md | ChatGPT App | React Native ChatGPT app |
PostgreSQL & DuckDB | readme-postgres.md | Buenavista | Postgres proxy server |
Kotlin & Android | readme-kotlin.md | file.io Client | Android file sharing app |
Streamlit | readme-streamlit.md | readme-ai-streamlit | Streamlit UI for readme-ai app |
Rust & C | readme-rust-c.md | CallMon | System call monitoring tool |
Docker & Go | readme-go.md | docker-gs-ping | Dockerized Go app |
Java | readme-java.md | Minimal-Todo | Minimalist todo Java app |
FastAPI & Redis | readme-fastapi-redis.md | async-ml-inference | Async ML inference service |
Jupyter Notebook | readme-mlops.md | mlops-course | MLOps course repository |
Apache Flink | readme-local.md | Local Directory | Example using a local directory |
See additional README files generated by readme-ai here.
- [ ] Release readmeai 1.0.0 with enhanced documentation management features.
- [ ] Develop a VS Code extension to generate README files directly in the editor.
- [ ] Develop GitHub Actions to automate documentation updates.
- [ ] Add badge packs to provide additional badge styles and options.
  - [ ] Code coverage, CI/CD status, project version, and more.
Contributions are welcome and encouraged! If interested, please begin by reviewing the resources below:
- Contributing Guide: Learn about our contribution process, coding standards, and how to submit your ideas.
- Start a Discussion: Have questions or suggestions? Join our community discussions to share your thoughts and engage with others.
- Report an Issue: Found a bug or have a feature request? Let us know by opening an issue so we can address it promptly.
Alternative AI tools for readme-ai
Similar Open Source Tools
Noi
Noi is an AI-enhanced customizable browser designed to streamline digital experiences. It includes curated AI websites, allows adding any URL, offers prompts management, Noi Ask for batch messaging, various themes, Noi Cache Mode for quick link access, cookie data isolation, and more. Users can explore, extend, and empower their browsing experience with Noi.
cortex.cpp
Cortex is a C++ AI engine with a Docker-like command-line interface and client libraries. It supports running AI models using ONNX, TensorRT-LLM, and llama.cpp engines. Cortex can function as a standalone server or be integrated as a library. The tool provides support for various engines and models, allowing users to easily deploy and interact with AI models. It offers a range of CLI commands for managing models, embeddings, and engines, as well as a REST API for interacting with models. Cortex is designed to simplify the deployment and usage of AI models in C++ applications.
jan
Jan is an open-source ChatGPT alternative that runs 100% offline on your computer. It supports universal architectures, including Nvidia GPUs, Apple M-series, Apple Intel, Linux Debian, and Windows x64. Jan is currently in development, so expect breaking changes and bugs. It is lightweight and embeddable, and can be used on its own within your own projects.
chatglm.cpp
ChatGLM.cpp is a C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3-6B and more LLMs for real-time chatting on your MacBook. It is based on ggml, working in the same way as llama.cpp. ChatGLM.cpp features accelerated memory-efficient CPU inference with int4/int8 quantization, optimized KV cache and parallel computing. It also supports P-Tuning v2 and LoRA finetuned models, streaming generation with typewriter effect, Python binding, web demo, api servers and more possibilities.
DownEdit
DownEdit is a powerful program that allows you to download videos from various social media platforms such as TikTok, Douyin, Kuaishou, and more. With DownEdit, you can easily download videos from user profiles and edit them in bulk. You have the option to flip the videos horizontally or vertically throughout the entire directory with just a single click. Stay tuned for more exciting features coming soon!
onnxruntime-server
ONNX Runtime Server is a server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference. It aims to offer simple, high-performance ML inference and a good developer experience. Users can provide inference APIs for ONNX models without writing additional code by placing the models in the directory structure. Each session can choose between CPU or CUDA, analyze input/output, and provide Swagger API documentation for easy testing. Ready-to-run Docker images are available, making it convenient to deploy the server.
MooER
MooER (摩耳) is an LLM-based speech recognition and translation model developed by Moore Threads. It allows users to transcribe speech into text (ASR) and translate speech into other languages (AST) in an end-to-end manner. The model was trained using 5K hours of data and is now also available with an 80K hours version. MooER is the first LLM-based speech model trained and inferred using domestic GPUs. The repository includes pretrained models, inference code, and a Gradio demo for a better user experience.
ollama4j
Ollama4j is a Java library that serves as a wrapper or binding for the Ollama server. It allows users to communicate with the Ollama server and manage models for various deployment scenarios. The library provides APIs for interacting with Ollama, generating fake data, testing UI interactions, translating messages, and building web UIs. Users can easily integrate Ollama4j into their Java projects to leverage the functionalities offered by the Ollama server.
wzry_ai
This is an open-source project for playing the game King of Glory with an artificial intelligence model. The first phase of the project has been completed, and future upgrades will be built upon this foundation. The second phase of the project has started, and progress is expected to proceed according to plan. For any questions, feel free to join the QQ exchange group: 687853827. The project aims to learn artificial intelligence and strictly prohibits cheating. Detailed installation instructions are available in the doc/README.md file. Environment installation video: (bilibili) Welcome to follow, like, tip, comment, and provide your suggestions.
gollama
Gollama is a delightful tool that brings Ollama, your offline conversational AI companion, directly into your terminal. It provides a fun and interactive way to generate responses from various models without needing internet connectivity. Whether you're brainstorming ideas, exploring creative writing, or just looking for inspiration, Gollama is here to assist you. The tool offers an interactive interface, customizable prompts, multiple models selection, and visual feedback to enhance user experience. It can be installed via different methods like downloading the latest release, using Go, running with Docker, or building from source. Users can interact with Gollama through various options like specifying a custom base URL, prompt, model, and enabling raw output mode. The tool supports different modes like interactive, piped, CLI with image, and TUI with image. Gollama relies on third-party packages like bubbletea, glamour, huh, and lipgloss. The roadmap includes implementing piped mode, support for extracting codeblocks, copying responses/codeblocks to clipboard, GitHub Actions for automated releases, and downloading models directly from Ollama using the rest API. Contributions are welcome, and the project is licensed under the MIT License.
skpro
skpro is a library for supervised probabilistic prediction in Python. It provides scikit-learn-like, scikit-base compatible interfaces to: tabular supervised regressors for probabilistic prediction (interval, quantile, and distribution predictions); tabular probabilistic time-to-event and survival prediction (instance-individual survival distributions); metrics to evaluate probabilistic predictions, e.g., pinball loss, empirical coverage, CRPS, and survival losses; reductions to turn scikit-learn regressors into probabilistic skpro regressors, such as bootstrap or conformal; building pipelines and composite models, including tuning via probabilistic performance metrics; and symbolic probability distributions with a pandas.DataFrame value domain and pandas-like interface.
XLICON-V2-MD
XLICON-V2-MD is a versatile Multi-Device WhatsApp bot developed by Salman Ahamed. It offers a wide range of features, making it an advanced and user-friendly bot for various purposes. The bot supports multi-device operation, AI photo enhancement, downloader commands, hidden NSFW commands, logo generation, anime exploration, economic activities, games, and audio/video editing. Users can deploy the bot on platforms like Heroku, Replit, Codespace, Okteto, Railway, Mongenius, Coolify, and Render. The bot is maintained by Salman Ahamed and Abraham Dwamena, with contributions from various developers and testers. Misusing the bot may result in a ban from WhatsApp, so users are advised to use it at their own risk.
RTXZY-MD
RTXZY-MD is a bot tool that supports file hosting, QR code, pairing code, and RestApi features. Users must fill in the Apikey for the bot to function properly. It is not recommended to install the bot on platforms lacking ffmpeg, imagemagick, webp, or express.js support. The tool allows for 95% implementation of website api and supports free and premium ApiKeys. Users can join group bots and get support from Sociabuzz. The tool can be run on Heroku with specific buildpacks and is suitable for Windows/VPS/RDP users who need Git, NodeJS, FFmpeg, and ImageMagick installations.
FalkorDB
FalkorDB is the first queryable Property Graph database to use sparse matrices to represent the adjacency matrix in graphs and linear algebra to query the graph. Primary features: it adopts the Property Graph Model, with nodes (vertices) and relationships (edges) that may have attributes; nodes can have multiple labels; relationships have a relationship type; graphs are represented as sparse adjacency matrices; OpenCypher with proprietary extensions serves as the query language; and queries are translated into linear algebra expressions.
ovos-installer
The ovos-installer is a simple and multilingual tool designed to install Open Voice OS and HiveMind using Bash, Whiptail, and Ansible. It supports various Linux distributions and provides an automated installation process. Users can easily start and stop services, update their Open Voice OS instance, and uninstall the tool if needed. The installer also allows for non-interactive installation through scenario files. It offers a user-friendly way to set up Open Voice OS on different systems.
For similar tasks
devchat
DevChat is an open-source workflow engine that enables developers to create intelligent, automated workflows for engaging with users through a chat panel within their IDEs. It combines script writing flexibility, latest AI models, and an intuitive chat GUI to enhance user experience and productivity. DevChat simplifies the integration of AI in software development, unlocking new possibilities for developers.
lowcode-vscode
This repository is a low-code tool that supports ChatGPT and other LLM models. It provides functionalities such as OCR translation, generating specified-format JSON, translating Chinese to camel case, translating the current directory to English, and quickly creating code templates. Users can also generate CRUD operations for managing backend list pages. The tool allows users to select templates, initialize query form configurations using OCR, initialize table configurations using OCR, translate Chinese fields using ChatGPT, and generate code without writing a single line. It aims to enhance productivity by simplifying code generation and development processes.
AI-Prompt-Genius
AI Prompt Genius is a Chrome extension that allows you to curate a custom library of AI prompts. It is built as a React web app styled with Tailwind CSS and DaisyUI components. The extension enables users to create and manage AI prompts for various purposes. It provides a user-friendly interface for organizing and accessing AI prompts efficiently. AI Prompt Genius is designed to enhance productivity and creativity by offering a personalized collection of prompts tailored to individual needs. Users can easily install the extension from the Chrome Web Store and start using it to generate AI prompts for different tasks.
second-brain-agent
The Second Brain AI Agent Project is a tool designed to empower personal knowledge management by automatically indexing markdown files and links, providing a smart search engine powered by OpenAI, integrating seamlessly with different note-taking methods, and enhancing productivity by accessing information efficiently. The system is built on LangChain framework and ChromaDB vector store, utilizing a pipeline to process markdown files and extract text and links for indexing. It employs a Retrieval-augmented generation (RAG) process to provide context for asking questions to the large language model. The tool is beneficial for professionals, students, researchers, and creatives looking to streamline workflows, improve study sessions, delve deep into research, and organize thoughts and ideas effortlessly.
AI-scripts
AI-scripts is a repository containing various AI scripts used for daily tasks. It includes tools like 'holefill' for filling code snippets in VIM, 'aiemu' for emulation purposes, and 'chatsh [model]' for terminal-based ChatGPT functionality. The repository aims to streamline AI-related workflows and enhance productivity by providing convenient scripts for common tasks.
magic-cli
Magic CLI is a command line utility that leverages Large Language Models (LLMs) to enhance command line efficiency. It is inspired by projects like Amazon Q and GitHub Copilot for CLI. The tool allows users to suggest commands, search across command history, and generate commands for specific tasks using local or remote LLM providers. Magic CLI also provides configuration options for LLM selection and response generation. The project is still in early development, so users should expect breaking changes and bugs.
obsidian-github-copilot
Obsidian Github Copilot Plugin is a tool that enables users to utilize Github Copilot within the Obsidian editor. It acts as a bridge between Obsidian and the Github Copilot service, allowing for enhanced code completion and suggestion features. Users can configure various settings such as suggestion generation delay, key bindings, and visibility of suggestions. The plugin requires a Github Copilot subscription, Node.js 18 or later, and a network connection to interact with the Copilot service. It simplifies the process of writing code by providing helpful completions and suggestions directly within the Obsidian editor.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud-native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI, and vLLM. BricksLLM aims to provide enterprise-level infrastructure that can power any LLM production use case. Some use cases for BricksLLM: set LLM usage limits for users on different pricing tiers; track LLM usage on a per-user and per-organization basis; block or redact requests containing PII; improve LLM reliability with failovers, retries, and caching; and distribute API keys with rate limits and cost limits for internal development, production, or student use.
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.