
QodeAssist
QodeAssist is an AI-powered coding assistant plugin for Qt Creator. It provides intelligent code completion and suggestions for C++ and QML, leveraging large language models through local providers like Ollama. Enhance your coding productivity with context-aware AI assistance directly in your Qt development environment.
When using paid providers like Claude, OpenRouter or OpenAI-compatible services:
- These services will consume API tokens which may result in charges to your account
- The QodeAssist developer bears no responsibility for any charges incurred
- Please carefully review the provider's pricing and your account settings before use
The QodeAssist developer offers commercial services for:
- Adapting the plugin for specific Qt Creator versions
- Custom development for particular operating systems
- Integration with specific language models
- Implementing custom features and modifications
For commercial inquiries, please contact: [email protected]
- Overview
- Install plugin to QtCreator
- Configure for Anthropic Claude
- Configure for OpenAI
- Configure for Mistral AI
- Configure for Google AI
- Configure for Ollama
- Configure for llama.cpp
- System Prompt Configuration
- File Context Features
- QtCreator Version Compatibility
- Development Progress
- Hotkeys
- Troubleshooting
- Support the Development
- How to Build
- AI-powered code completion
- Chat functionality:
- Side and Bottom panels
- Chat history autosave and restore
- Token usage monitoring and management
- Attach files for one-time code analysis
- Link files for persistent context with auto update in conversations
- Automatic syncing with open editor files (optional)
- Support for multiple LLM providers:
- Ollama
- llama.cpp
- OpenAI
- Anthropic Claude
- LM Studio
- Mistral AI
- Google AI
- OpenAI-compatible providers (e.g., llama.cpp, https://openrouter.ai)
- Extensive library of model-specific templates
- Custom template support
- Easy configuration and model selection
Join our Discord Community: Have questions or want to discuss QodeAssist? Join our Discord server to connect with other users and get support!
- Install Latest Qt Creator
- Download the QodeAssist plugin for your Qt Creator
- Remove the old plugin version if one was already installed:
- on macOS for QtCreator 16: ~/Library/Application Support/QtProject/Qt Creator/plugins/16.0.0/petrmironychev.qodeassist
- on Windows for QtCreator 16: C:\Users\<user>\AppData\Local\QtProject\qtcreator\plugins\16.0.0\petrmironychev.qodeassist\lib\qtcreator\plugins
- Launch Qt Creator and install the plugin:
- Go to:
- macOS: Qt Creator -> About Plugins...
- Windows/Linux: Help -> About Plugins...
- Click on "Install Plugin..."
- Select the downloaded QodeAssist plugin archive file
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to the Provider Settings tab and configure the Claude API key
- Return to the General tab and configure:
- Set "Claude" as the provider for code completion and/or chat assistant
- Set the Claude URL (https://api.anthropic.com)
- Select your preferred model (e.g., claude-3-5-sonnet-20241022)
- Choose the Claude template for code completion and/or chat
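To confirm that the Claude API key works outside Qt Creator, you can query Anthropic's model list from a terminal (a quick sanity check; replace $ANTHROPIC_API_KEY with your key):
curl https://api.anthropic.com/v1/models -H "x-api-key: $ANTHROPIC_API_KEY" -H "anthropic-version: 2023-06-01"
If the key is valid, the response is a JSON list of available models.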
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to the Provider Settings tab and configure the OpenAI API key
- Return to the General tab and configure:
- Set "OpenAI" as the provider for code completion and/or chat assistant
- Set the OpenAI URL (https://api.openai.com)
- Select your preferred model (e.g., gpt-4o)
- Choose the OpenAI template for code completion and/or chat
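Similarly, you can verify the OpenAI API key with a terminal request (replace $OPENAI_API_KEY with your key):
curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"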
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to the Provider Settings tab and configure the Mistral AI API key
- Return to the General tab and configure:
- Set "Mistral AI" as the provider for code completion and/or chat assistant
- Set the Mistral AI URL (https://api.mistral.ai)
- Select your preferred model (e.g., mistral-large-latest)
- Choose the Mistral AI template for code completion and/or chat
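As with the other providers, a quick terminal request can confirm the Mistral AI key (replace $MISTRAL_API_KEY with your key):
curl https://api.mistral.ai/v1/models -H "Authorization: Bearer $MISTRAL_API_KEY"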
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to the Provider Settings tab and configure the Google AI API key
- Return to the General tab and configure:
- Set "Google AI" as the provider for code completion and/or chat assistant
- Set the Google AI URL (https://generativelanguage.googleapis.com/v1beta)
- Select your preferred model (e.g., gemini-2.0-flash)
- Choose the Google AI template
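You can likewise verify the Google AI key from a terminal (replace $GOOGLE_API_KEY with your key):
curl "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"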
- Install Ollama. Make sure to review the system requirements before installation.
- Install a language model in Ollama via the terminal. For example, you can run:
For standard computers (minimum 8GB RAM):
ollama run qwen2.5-coder:7b
For better performance (16GB+ RAM):
ollama run qwen2.5-coder:14b
For high-end systems (32GB+ RAM):
ollama run qwen2.5-coder:32b
- Open Qt Creator settings (Edit > Preferences on Linux/Windows, Qt Creator > Preferences on macOS)
- Navigate to the "QodeAssist" tab
- On the "General" page, verify:
- Ollama is selected as your LLM provider
- The URL is set to http://localhost:11434
- Your installed model appears in the model selection
- The prompt template is Ollama Auto FIM, or Ollama Auto Chat for chat assistance. You can specify a template manually if the automatic one does not work correctly
- Click Apply if you made any changes
You're all set! QodeAssist is now ready to use in Qt Creator.
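If your model does not appear in the model selection, you can list the models installed in Ollama from a terminal:
ollama list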
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to General tab and configure:
- Set "llama.cpp" as the provider for code completion or/and chat assistant
- Set the llama.cpp URL (e.g. http://localhost:8080)
- Fill in model name
- Choose template for model(e.g. llama.cpp FIM for any model with FIM support)
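For reference, a llama.cpp server can be started locally like this (a sketch: the model file is a placeholder, and older llama.cpp builds name the binary server instead of llama-server):
llama-server -m qwen2.5-coder-7b-instruct-q4_k_m.gguf --port 8080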
The plugin comes with default system prompts optimized for chat and instruct models, as these currently provide better results for code assistance. If you prefer using FIM (Fill-in-Middle) models, you can easily customize the system prompt in the settings.
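For context, a FIM model completes code between a given prefix and suffix instead of continuing a conversation. As an illustration, Qwen2.5-coder expects raw prompts shaped like the line below (this is the model's own token format, not the plugin's template syntax), and the model generates only the missing middle part:
<|fim_prefix|>int max(int a, int b) {<|fim_suffix|>}<|fim_middle|>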
QodeAssist provides two powerful ways to include source code files in your chat conversations: Attachments and Linked Files. Each serves a distinct purpose and helps provide better context for the AI assistant.
Attachments are designed for one-time code analysis and specific queries:
- Files are included only in the current message
- Content is discarded after the message is processed
- Ideal for:
- Getting specific feedback on code changes
- Code review requests
- Analyzing isolated code segments
- Quick implementation questions
- Files can be attached using the paperclip icon in the chat interface
- Multiple files can be attached to a single message
Linked files provide persistent context throughout the conversation:
- Files remain accessible for the entire chat session
- Content is included in every message exchange
- Files are automatically refreshed - always using latest content from disk
- Perfect for:
- Long-term refactoring discussions
- Complex architectural changes
- Multi-file implementations
- Maintaining context across related questions
- Can be managed using the link icon in the chat interface
- Supports automatic syncing with open editor files (can be enabled in settings)
- Files can be added/removed at any time during the conversation
- QtCreator 16.0.0: plugin versions 0.5.2 - 0.5.x
- QtCreator 15.0.1: plugin versions 0.4.8 - 0.5.1
- QtCreator 15.0.0: plugin versions 0.4.0 - 0.4.7
- QtCreator 14.0.2: plugin versions 0.2.3 - 0.3.x
- QtCreator 14.0.1: plugin version 0.2.2 and below
- [x] Basic plugin with code autocomplete functionality
- [x] Improve and automate settings
- [x] Add chat functionality
- [x] Sharing diff with model
- [ ] Sharing project source with model
- [ ] Support for more providers and models
- To manually request a suggestion, you can use the following shortcut (or change it in the settings):
- on Mac: Option + Command + Q
- on Windows: Ctrl + Alt + Q
- on Linux with KDE Plasma: Ctrl + Alt + Q
- To insert the full suggestion, you can use the TAB key
- To insert the next word of the suggestion, you can use Alt + Right Arrow on Windows/Linux, or Option + Right Arrow on macOS
If QodeAssist is having problems connecting to the LLM provider, please check the following:
- Verify the IP address and port (see the quick checks after this list):
- For Ollama, the default is usually http://localhost:11434
- For LM Studio, the default is usually http://localhost:1234
- Confirm that the selected model and template are compatible:
- Ensure you've chosen the correct model in the "Select Models" option
- Verify that the selected prompt template matches the model you're using
- On Linux, the prebuilt binaries support only Ubuntu 22.04+ or similar distributions. If you need compatibility with another OS, you have to build the plugin manually. You can check our experiments and resolutions here: https://github.com/Palm1r/QodeAssist/issues/48
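A quick way to check from a terminal whether a local provider is reachable (assuming the default ports above; adjust them if you changed the configuration):
curl http://localhost:11434
curl http://localhost:1234/v1/models
If a request fails or hangs, the corresponding server is not running or is listening on a different address.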
If you're still experiencing issues with QodeAssist, you can try resetting the settings to their default values:
- Open Qt Creator settings
- Navigate to the "QodeAssist" tab
- Pick the settings page to reset
- Click on the "Reset Page to Defaults" button
- The API key will not be reset
- Select a model again after the reset
If you find QodeAssist helpful, there are several ways you can support the project:
- Report Issues: If you encounter any bugs or have suggestions for improvements, please open an issue on our GitHub repository.
- Contribute: Feel free to submit pull requests with bug fixes or new features.
- Spread the Word: Star our GitHub repository and share QodeAssist with your fellow developers.
- Financial Support: If you'd like to support the development financially, you can make a donation using one of the following:
- Bitcoin (BTC):
bc1qndq7f0mpnlya48vk7kugvyqj5w89xrg4wzg68t
- Ethereum (ETH):
0xA5e8c37c94b24e25F9f1f292a01AF55F03099D8D
- Litecoin (LTC):
ltc1qlrxnk30s2pcjchzx4qrxvdjt5gzuervy5mv0vy
- USDT (TRC20):
THdZrE7d6epW6ry98GA3MLXRjha1DjKtUx
Every contribution, no matter how small, is greatly appreciated and helps keep the project alive!
Create a build directory and run
cmake -DCMAKE_PREFIX_PATH=<path_to_qtcreator> -DCMAKE_BUILD_TYPE=RelWithDebInfo <path_to_plugin_source>
cmake --build .
where <path_to_qtcreator> is the relative or absolute path to a Qt Creator build directory, or to a combined binary and development package (Windows / Linux), or to the Qt Creator.app/Contents/Resources/ directory of a combined binary and development package (macOS), and <path_to_plugin_source> is the relative or absolute path to this plugin directory.
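For example, on a typical setup (the paths below are placeholders; substitute your own Qt Creator and plugin source locations):
mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=$HOME/qtcreator-dev -DCMAKE_BUILD_TYPE=RelWithDebInfo ../QodeAssist
cmake --build .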
QML code style: preferably follow the guidelines at https://github.com/Furkanzmc/QML-Coding-Guide (thank you @Furkanzmc for collecting them).
C++ code style: use the .clang-format file in the project.