QodeAssist
QodeAssist is an AI-powered coding assistant plugin for Qt Creator. It provides intelligent code completion and suggestions for C++ and QML, leveraging large language models through local providers like Ollama. Enhance your coding productivity with context-aware AI assistance directly in your Qt development environment.
When using paid providers such as Claude, OpenRouter, or other OpenAI-compatible services:
- These services will consume API tokens which may result in charges to your account
- The QodeAssist developer bears no responsibility for any charges incurred
- Please carefully review the provider's pricing and your account settings before use
- Overview
- Install the plugin in Qt Creator
- Configure for Anthropic Claude
- Configure for OpenAI
- Configure for using Ollama
- System Prompt Configuration
- Template-Model Compatibility
- QtCreator Version Compatibility
- Development Progress
- Hotkeys
- Troubleshooting
- Support the Development
- How to Build
- AI-powered code completion
- Chat functionality:
- Side and Bottom panels
- Support for multiple LLM providers:
- Ollama
- OpenAI
- Anthropic Claude
- LM Studio
- OpenAI-compatible providers (e.g., https://openrouter.ai)
- Extensive library of model-specific templates
- Custom template support
- Easy configuration and model selection
- Install the latest Qt Creator
- Download the QodeAssist plugin for your Qt Creator version
- Launch Qt Creator and install the plugin:
- Go to:
- macOS: Qt Creator -> About Plugins...
- Windows/Linux: Help -> About Plugins...
- Click on "Install Plugin..."
- Select the downloaded QodeAssist plugin archive file
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to the Provider Settings tab and enter your Claude API key
- Return to General tab and configure:
- Set "Claude" as the provider for code completion and/or the chat assistant
- Set the Claude URL (https://api.anthropic.com)
- Select your preferred model (e.g., claude-3-5-sonnet-20241022)
- Choose the Claude template for code completion and/or chat
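If completions fail, you can sanity-check the key outside Qt Creator first. A minimal sketch using the public Anthropic Messages API (YOUR_API_KEY is a placeholder):

```
# Minimal request to verify the Claude API key works (placeholder key)
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20241022", "max_tokens": 32,
       "messages": [{"role": "user", "content": "Say hello"}]}'
```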
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to the Provider Settings tab and enter your OpenAI API key
- Return to General tab and configure:
- Set "OpenAI" as the provider for code completion and/or the chat assistant
- Set the OpenAI URL (https://api.openai.com)
- Select your preferred model (e.g., gpt-4o)
- Choose the OpenAI template for code completion and/or chat
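As with Claude, you can verify the key from the terminal before configuring the plugin. A minimal sketch against the standard Chat Completions endpoint (YOUR_API_KEY is a placeholder):

```
# Minimal request to verify the OpenAI API key works (placeholder key)
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Say hello"}]}'
```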
- Install Ollama. Make sure to review the system requirements before installation.
- Install a language model in Ollama via the terminal. For example, you can run one of the following:
```
# For standard computers (minimum 8GB RAM):
ollama run qwen2.5-coder:7b

# For better performance (16GB+ RAM):
ollama run qwen2.5-coder:14b

# For high-end systems (32GB+ RAM):
ollama run qwen2.5-coder:32b
```
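To confirm the model finished downloading and is available to the plugin, you can list your local models:

```
# Show models installed in the local Ollama instance
ollama list
```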
- Open Qt Creator settings (Edit > Preferences on Linux/Windows, Qt Creator > Preferences on macOS)
- Navigate to the "Qode Assist" tab
- On the "General" page, verify:
- Ollama is selected as your LLM provider
- The URL is set to http://localhost:11434
- Your installed model appears in the model selection
- The prompt template is Ollama Auto FIM for code completion, or Ollama Auto Chat for chat assistance. You can select a model-specific template if the automatic one does not work correctly
- Click Apply if you made any changes
You're all set! QodeAssist is now ready to use in Qt Creator.
The plugin comes with default system prompts optimized for chat and instruct models, as these currently provide better results for code assistance. If you prefer using FIM (Fill-in-Middle) models, you can easily customize the system prompt in the settings.
| Template | Compatible Models | Purpose |
|---|---|---|
| CodeLlama FIM | codellama:code | Code completion |
| DeepSeekCoder FIM | deepseek-coder-v2, deepseek-v2.5 | Code completion |
| Ollama Auto FIM | Any Ollama base/FIM models | Code completion |
| Qwen FIM | Qwen 2.5 models (excluding instruct) | Code completion |
| StarCoder2 FIM | starcoder2 base model | Code completion |
| Alpaca | starcoder2:instruct | Chat assistance |
| Basic Chat | Messages without tokens | Chat assistance |
| ChatML | Qwen 2.5 models (excluding base models) | Chat assistance |
| Llama2 | llama2 model family, codellama:instruct | Chat assistance |
| Llama3 | llama3 model family | Chat assistance |
| Ollama Auto Chat | Any Ollama chat/instruct models | Chat assistance |
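For example, to set up one completion/chat pairing from the table above, you could pull a matching pair of models in Ollama (exact tags may vary; check the Ollama library for availability):

```
# Base model for the CodeLlama FIM code-completion template
ollama pull codellama:code

# Instruct-style model for the Llama3 chat template
ollama pull llama3
```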
- Qt Creator 15.0.1: plugin versions 0.4.8 - 0.4.x
- Qt Creator 15.0.0: plugin versions 0.4.0 - 0.4.7
- Qt Creator 14.0.2: plugin versions 0.2.3 - 0.3.x
- Qt Creator 14.0.1: plugin version 0.2.2 and below
- [x] Basic plugin with code autocomplete functionality
- [x] Improve and automate settings
- [x] Add chat functionality
- [x] Sharing diff with model
- [ ] Sharing project source with model
- [ ] Support for more providers and models
- To manually request a suggestion, use the shortcut below (you can change it in the settings):
  - on Mac: Option + Command + Q
  - on Windows: Ctrl + Alt + Q
- To insert the full suggestion, press the TAB key
- To insert the next word of the suggestion, press Alt + Right Arrow on Windows/Linux, or Option + Right Arrow on Mac
If QodeAssist is having problems connecting to the LLM provider, please check the following:
- Verify the IP address and port:
  - For Ollama, the default is usually http://localhost:11434
  - For LM Studio, the default is usually http://localhost:1234
- Check the endpoint. Make sure the endpoint in the settings matches the one required by your provider:
  - For Ollama, it should be /api/generate
  - For LM Studio and OpenAI-compatible providers, it is usually /v1/chat/completions
- Confirm that the selected model and template are compatible:
  - Ensure you've chosen the correct model in the "Select Models" option
  - Verify that the selected prompt template matches the model you're using
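You can also test the connection outside Qt Creator. A quick sketch for a local Ollama install, assuming the default port and the qwen2.5-coder:7b model pulled earlier:

```
# Confirm Ollama is reachable and see which models it serves
curl http://localhost:11434/api/tags

# Exercise the /api/generate endpoint the plugin uses for completions
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:7b",
  "prompt": "int main() {",
  "stream": false
}'
```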
If you're still experiencing issues with QodeAssist, you can try resetting the settings to their default values:
- Open Qt Creator settings
- Navigate to the "Qode Assist" tab
- Select the settings page you want to reset
- Click the "Reset Page to Defaults" button
- Note that API keys will not be reset
- Select your model again after the reset
If you find QodeAssist helpful, there are several ways you can support the project:
- Report Issues: If you encounter any bugs or have suggestions for improvements, please open an issue on our GitHub repository.
- Contribute: Feel free to submit pull requests with bug fixes or new features.
- Spread the Word: Star our GitHub repository and share QodeAssist with your fellow developers.
- Financial Support: If you'd like to support the development financially, you can make a donation using one of the following:
  - Bitcoin (BTC): bc1qndq7f0mpnlya48vk7kugvyqj5w89xrg4wzg68t
  - Ethereum (ETH): 0xA5e8c37c94b24e25F9f1f292a01AF55F03099D8D
  - Litecoin (LTC): ltc1qlrxnk30s2pcjchzx4qrxvdjt5gzuervy5mv0vy
  - USDT (TRC20): THdZrE7d6epW6ry98GA3MLXRjha1DjKtUx
Every contribution, no matter how small, is greatly appreciated and helps keep the project alive!
Create a build directory and run:

```
cmake -DCMAKE_PREFIX_PATH=<path_to_qtcreator> -DCMAKE_BUILD_TYPE=RelWithDebInfo <path_to_plugin_source>
cmake --build .
```

where <path_to_qtcreator> is the relative or absolute path to a Qt Creator build directory, or to a combined binary and development package (Windows/Linux), or to the Qt Creator.app/Contents/Resources/ directory of a combined binary and development package (macOS), and <path_to_plugin_source> is the relative or absolute path to this plugin directory.
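For instance, a hypothetical Linux build might look like this (both paths are placeholders; substitute your own Qt Creator package and plugin checkout):

```
# Out-of-source build with placeholder paths
mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=$HOME/qtcreator-15.0.1 \
      -DCMAKE_BUILD_TYPE=RelWithDebInfo \
      $HOME/src/QodeAssist
cmake --build .
```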
QML code style: preferably follow the guidelines at https://github.com/Furkanzmc/QML-Coding-Guide (thank you @Furkanzmc for collecting them).
C++ code style: use the .clang-format file in the project.
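To apply the project's .clang-format from the command line, you can run clang-format in place on the files you touch (the file path below is a placeholder):

```
# Reformat a source file in place using the repository's .clang-format
clang-format -i src/SomeFile.cpp
```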