minuet-ai.nvim
💃 Dance with Intelligence in Your Code. Minuet AI integrates with nvim-cmp, offering AI completion from popular LLMs including OpenAI, Gemini, Claude, and more.
Minuet AI is a Neovim plugin that integrates with nvim-cmp to provide AI-powered code completion using multiple AI providers such as OpenAI, Claude, Gemini, Codestral, and Huggingface. It offers customizable configuration options and streaming support for completion delivery. Users can manually invoke completion or use cost-effective models for auto-completion. The plugin requires API keys for supported AI providers and allows customization of system prompts. Minuet AI also supports changing providers, toggling auto-completion, and provides solutions for input delay issues. Integration with lazyvim is possible, and future plans include implementing RAG on the codebase and virtual text UI support.
README:
- Minuet AI
- Features
- Requirements
- Installation
- Configuration
- API Keys
- System Prompt
- Providers
- Commands
- FAQ
- TODO
- Contributing
- Acknowledgement
Minuet AI: Dance with Intelligence in Your Code 💃.
Minuet-ai integrates with nvim-cmp and brings the grace and harmony of a minuet to your coding process, just as dancers move through a minuet.
Features:
- AI-powered code completion
- Support for multiple AI providers (OpenAI, Claude, Gemini, Codestral, Huggingface, and OpenAI-compatible services)
- Customizable configuration options
- Streaming support to enable completion delivery even with slower LLMs
Requirements:
- Neovim 0.10+.
- plenary.nvim
- nvim-cmp
- An API key for at least one of the supported AI providers
Installation with Lazy:
specs = {
{
'milanglacier/minuet-ai.nvim',
config = function()
require('minuet').setup {
-- Your configuration options here
}
end
},
{ 'nvim-lua/plenary.nvim' },
{ 'hrsh7th/nvim-cmp' },
}
-- If you wish to invoke completion manually, the following
-- configuration binds the `<A-y>` key to invoke minuet completion.
require('cmp').setup {
mapping = {
["<A-y>"] = require('minuet').make_cmp_map()
-- and your other keymappings
},
}
Given the response speed and rate limits of LLM services, we recommend you either invoke minuet completion manually or use a cost-effective model for auto-completion. The author recommends gemini-1.5-flash for auto-completion.
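For instance, a minimal sketch of selecting Gemini as the provider, following this recommendation (the field names match the defaults shown in the Configuration section below):

require('minuet').setup {
    -- 'gemini' is one of the built-in providers; its default model is
    -- gemini-1.5-flash-latest, as shown in the provider options below
    provider = 'gemini',
}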
You need to add minuet to your nvim-cmp sources for auto-completion to work.
require('cmp').setup {
sources = {
    { name = 'minuet' },
    -- and your other sources
},
performance = {
-- It is recommended to increase the timeout duration due to
-- the typically slower response speed of LLMs compared to
-- other completion sources. This is not needed when you only
-- need manual completion.
fetching_timeout = 2000,
},
}
If you are using a distribution like lazyvim, see the FAQ section for how to configure minuet with it. Note that the author does not use lazyvim, so the FAQ section is not guaranteed to work; PRs are welcome to fix any problems that exist.
Minuet AI comes with the following defaults:
default_config = {
-- Enable or disable auto-completion. Note that you still need to add
-- Minuet to your cmp sources. This option controls whether cmp will
-- attempt to invoke minuet when minuet is included in cmp sources. This
-- setting has no effect on manual completion; Minuet will always be
-- enabled when invoked manually.
enabled = true,
provider = 'codestral',
-- the maximum total characters of the context before and after the cursor
-- 12,800 characters typically equate to approximately 4,000 tokens for
-- LLMs.
context_window = 12800,
-- when the total characters exceed the context window, this ratio controls
-- how much context before versus after the cursor is kept: the larger the
-- ratio, the more context before the cursor is used. This option should be
-- between 0 and 1; context_ratio = 0.75 means the ratio will be 3:1.
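-- For example, with context_window = 12800 and context_ratio = 0.75,
-- up to 9600 characters before the cursor and 3200 after it are used.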
context_ratio = 0.75,
throttle = 1000, -- only send the request every x milliseconds, use 0 to disable throttle.
-- debounce the request in x milliseconds, set to 0 to disable debounce
debounce = 400,
-- show notification when request is sent or request fails. options:
-- false to disable notification. Note that you should use false, not "false".
-- "verbose" for all notifications.
-- "warn" for warnings and above.
-- "error" just errors.
notify = 'verbose',
-- The request timeout, measured in seconds. When streaming is enabled
-- (stream = true), setting a shorter request_timeout allows for faster
-- retrieval of completion items, albeit potentially incomplete.
-- Conversely, with streaming disabled (stream = false), a timeout
-- occurring before the LLM returns results will yield no completion items.
request_timeout = 3,
-- if completion item has multiple lines, create another completion item only containing its first line.
add_single_line_entry = true,
-- The number of completion items (encoded as part of the prompt for the
-- chat LLM) requested from the language model. It's important to note that
-- when 'add_single_line_entry' is set to true, the actual number of
-- returned items may exceed this value. Additionally, the LLM cannot
-- guarantee the exact number of completion items specified, as this
-- parameter serves only as a prompt guideline.
n_completions = 3,
-- Defines the length of non-whitespace context after the cursor used to
-- filter completion text. Set to 0 to disable filtering.
--
-- Example: With after_cursor_filter_length = 3 and context:
--
-- "def fib(n):\n|\n\nfib(5)" (where | represents cursor position),
--
-- if the completion text contains "fib", then "fib" and subsequent text
-- will be removed. This setting filters repeated text generated by the
-- LLM. A large value (e.g., 15) is recommended to avoid false positives.
after_cursor_filter_length = 15,
-- proxy port to use
proxy = nil,
provider_options = {
-- see the documentation in each provider in the following part.
},
-- see the documentation in the `System Prompt` section
default_template = {
template = '...',
prompt = '...',
guidelines = '...',
n_completion_template = '...',
},
default_few_shots = { '...' },
}
Minuet AI requires API keys to function. Set the following environment variables:
- OPENAI_API_KEY for OpenAI
- GEMINI_API_KEY for Gemini
- ANTHROPIC_API_KEY for Claude
- CODESTRAL_API_KEY for Codestral
- HF_API_KEY for Huggingface
- A custom environment variable for OpenAI-compatible services (as specified in your configuration)
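If you prefer not to export keys in your shell profile, one option (a sketch only; the file path is a hypothetical example) is to populate the variable from your Neovim Lua config before calling setup, using the standard vim.env and vim.fn APIs:

-- read the key from a private file instead of hard-coding it in your config
local key_file = vim.fn.expand('~/.config/secrets/gemini_api_key') -- hypothetical path
if vim.fn.filereadable(key_file) == 1 then
    vim.env.GEMINI_API_KEY = vim.fn.readfile(key_file)[1]
end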
See prompt for the default system prompt used by minuet.
You can customize the template
by encoding placeholders within triple braces.
These placeholders will be interpolated using the corresponding key-value pairs
from the table. The value can be either a string or a function that takes no
arguments and returns a string.
Here's a simplified example for illustrative purposes (not intended for actual configuration):
system = {
template = '{{{assistant}}}\n{{{role}}}',
assistant = function() return 'you are a helpful assistant' end,
role = "you are also a code expert.",
}
Note that n_completion_template is a special placeholder: it contains one %d, which will be encoded with config.n_completions. If you want to customize this template, make sure your prompt also contains exactly one %d.
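As an illustration, a hedged sketch of overriding only this template, assuming the top-level default_template table from the defaults above is the place to do so; the wording is arbitrary, and the only requirement is a single %d:

require('minuet').setup {
    default_template = {
        -- exactly one %d, which is encoded with config.n_completions
        n_completion_template = 'Provide at most %d completion candidates.',
    },
}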
Similarly, few_shots can be a table in the following form, or a function that takes no arguments and returns a table in the following form:
{
{ role = "user", content = "something" },
{ role = "assistant", content = "something" }
-- ...
-- You can pass as many turns as you want
}
Below is an example to configure the prompt based on filetype:
require('minuet').setup {
provider_options = {
openai = {
system = {
prompt = function()
if vim.bo.ft == 'tex' then
return [[your prompt for completing prose.]]
else
return require('minuet.config').default_template.prompt
end
end,
},
few_shots = function()
if vim.bo.ft == 'tex' then
return {
-- your few shots examples for prose
}
else
return require('minuet.config').default_few_shots
end
end,
},
},
}
There's no need to replicate unchanged fields. The system will automatically merge modified fields with default values using the tbl_deep_extend function.
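For reference, a small sketch of how vim.tbl_deep_extend (a standard Neovim API) merges nested tables; the tables here are illustrative rather than minuet's actual internals:

local defaults = {
    provider = 'codestral',
    provider_options = { openai = { model = 'gpt-4o-mini', stream = true } },
}
local user = {
    provider_options = { openai = { model = 'gpt-4o' } },
}
local merged = vim.tbl_deep_extend('force', defaults, user)
-- merged.provider_options.openai.model == 'gpt-4o'  (overridden)
-- merged.provider_options.openai.stream == true     (unchanged field kept)
-- merged.provider == 'codestral'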
The following is the default configuration for OpenAI:
provider_options = {
openai = {
model = 'gpt-4o-mini',
system = system,
few_shots = default_few_shots,
stream = true,
optional = {
-- pass any additional parameters you want to send to OpenAI request,
-- e.g.
-- stop = { 'end' },
-- max_tokens = 256,
-- top_p = 0.9,
},
},
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by outputting too many tokens.
provider_options = {
openai = {
optional = {
max_tokens = 256,
},
},
}
The following is the default configuration for Claude:
provider_options = {
claude = {
max_tokens = 512,
model = 'claude-3-5-sonnet-20240620',
system = system,
few_shots = default_few_shots,
stream = true,
optional = {
-- pass any additional parameters you want to send to claude request,
-- e.g.
-- stop_sequences = nil,
},
},
}
Codestral is a text completion model, not a chat model, so the system prompt and few-shot examples do not apply. Note that you should use the CODESTRAL_API_KEY, not the MISTRAL_API_KEY, as they use different endpoints. To use the Mistral endpoint, simply modify the end_point and api_key parameters in the configuration (see the sketch after the Codestral examples below).
The following is the default configuration for Codestral:
provider_options = {
codestral = {
model = 'codestral-latest',
end_point = 'https://codestral.mistral.ai/v1/fim/completions',
api_key = 'CODESTRAL_API_KEY',
stream = true,
optional = {
stop = nil, -- the identifier to stop the completion generation
max_tokens = nil,
},
},
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by outputting too many tokens.
provider_options = {
codestral = {
optional = {
max_tokens = 256,
stop = { '\n\n' },
},
},
}
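Following the note above, a sketch of pointing the codestral provider at the Mistral endpoint instead; the endpoint URL is an assumption and should be verified against Mistral's API documentation:

provider_options = {
    codestral = {
        -- assumed Mistral FIM endpoint; verify against the Mistral API docs
        end_point = 'https://api.mistral.ai/v1/fim/completions',
        api_key = 'MISTRAL_API_KEY',
    },
}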
The following is the default configuration for Gemini:
provider_options = {
gemini = {
model = 'gemini-1.5-flash-latest',
system = system,
few_shots = default_few_shots,
stream = true,
optional = {},
},
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by outputting too many tokens. You can also adjust the safety settings following the example:
provider_options = {
gemini = {
optional = {
generationConfig = {
maxOutputTokens = 256,
},
safetySettings = {
{
-- HARM_CATEGORY_HATE_SPEECH,
-- HARM_CATEGORY_HARASSMENT
-- HARM_CATEGORY_SEXUALLY_EXPLICIT
category = 'HARM_CATEGORY_DANGEROUS_CONTENT',
-- BLOCK_NONE
threshold = 'BLOCK_ONLY_HIGH',
},
},
},
},
}
Use any provider compatible with OpenAI's chat completion API.
For example, you can set the end_point to http://localhost:11434/v1/chat/completions to use ollama.
Note that not all OpenAI-compatible services support streaming; set stream = false to disable streaming if your service does not support it.
The following config is the default (it uses Groq as the example service):
provider_options = {
openai_compatible = {
model = 'llama-3.1-70b-versatile',
system = default_system,
few_shots = default_few_shots,
end_point = 'https://api.groq.com/openai/v1/chat/completions',
api_key = 'GROQ_API_KEY',
name = 'Groq',
stream = true,
optional = {
stop = nil,
max_tokens = nil,
},
}
}
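As an illustration, a hedged sketch of pointing openai_compatible at a local ollama server. The model name is only an example of something you might have pulled into ollama, and OLLAMA_API_KEY is a placeholder environment variable: ollama itself does not check the key, but the plugin reads one from the variable named here, so it would need to be set to some non-empty value.

provider_options = {
    openai_compatible = {
        model = 'llama3.1', -- example local model
        end_point = 'http://localhost:11434/v1/chat/completions',
        api_key = 'OLLAMA_API_KEY', -- placeholder environment variable
        name = 'Ollama',
        stream = true,
    },
}

The same pattern applies to the openai_fim_compatible provider described below, with end_point set to http://localhost:11434/v1/completions.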
Use any provider compatible with OpenAI's completion API. This request uses the text completion API, not chat completion, so system prompts and few-shot examples are not applicable.
For example, you can set the end_point to http://localhost:11434/v1/completions to use ollama.
Refer to the Completions Legacy section of the OpenAI documentation for details.
Note that not all OpenAI-compatible services support streaming; set stream = false to disable streaming if your service does not support it.
The following config is the default:
provider_options = {
openai_fim_compatible = {
model = 'deepseek-coder',
end_point = 'https://api.deepseek.com/beta/completions',
api_key = 'DEEPSEEK_API_KEY',
name = 'Deepseek',
stream = true,
optional = {
stop = nil,
max_tokens = nil,
},
}
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by outputting too many tokens.
provider_options = {
openai_fim_compatible = {
optional = {
max_tokens = 256,
stop = { '\n\n' },
},
},
}
Currently, only text completion models on Huggingface are supported, so the system prompt and few-shot examples do not apply. The following config is the default:
provider_options = {
huggingface = {
end_point = 'https://api-inference.huggingface.co/models/bigcode/starcoder2-3b',
type = 'completion',
strategies = {
completion = {
markers = {
prefix = '<fim_prefix>',
suffix = '<fim_suffix>',
middle = '<fim_middle>',
},
strategy = 'PSM', -- PSM, SPM or PM
},
},
optional = {
parameters = {
-- The parameter specifications for different LLMs may vary.
-- Ensure you specify the parameters after reading the API
-- documentation.
stop = nil,
max_tokens = nil,
do_sample = nil,
},
},
},
}
Minuet provides a command to change the provider after Minuet has been set up.
A separate command enables or disables auto-completion. Note that you still need to add Minuet to your cmp sources; this command controls whether cmp will attempt to invoke minuet when minuet is included in cmp sources. It has no effect on manual completion; Minuet will always be enabled when invoked manually.
You can configure the icons of minuet by using the following snippet (referenced from cmp's wiki):
local cmp = require('cmp')
cmp.setup {
formatting = {
format = function(entry, vim_item)
-- Kind icons
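-- `kind_icons` is assumed to be a table mapping completion-kind names to
-- icons, defined earlier in your config as in the nvim-cmp wiki example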
vim_item.kind = string.format('%s %s', kind_icons[vim_item.kind], vim_item.kind) -- This concatenates the icons with the name of the item kind
-- Source
vim_item.menu = ({
minuet = "[minuet]" -- or your preferred icon
})[entry.source.name]
return vim_item
end
},
}
When using Minuet with auto-complete enabled, you may occasionally experience a
noticeable delay when pressing <CR>
to move to the next line. This occurs
because Minuet triggers autocompletion at the start of a new line, while cmp
blocks the <CR>
key, awaiting Minuet's response.
To address this issue, consider the following solutions:
- Unbind the <CR> key from your cmp keymap.
- Utilize cmp's internal API to avoid blocking calls, though be aware that this API may change without prior notice.
Here's an example of the second approach using Lua:
local cmp = require 'cmp'
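-- `opts` below refers to your existing cmp configuration table (for example,
-- the `opts` table of a cmp plugin spec in lazyvim)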
opts.mapping = {
['<CR>'] = cmp.mapping(function(fallback)
-- use the internal non-blocking call to check if cmp is visible
if cmp.core.view:visible() then
cmp.confirm { select = true }
else
fallback()
end
end),
}
Below is an example of configuring minuet with lazyvim:
{
'milanglacier/minuet-ai.nvim',
config = function()
require('minuet').setup {
-- Your configuration options here
}
end
},
{ 'nvim-lua/plenary.nvim' },
{
'nvim-cmp',
opts = function(_, opts)
-- if you wish to use autocomplete
table.insert(opts.sources, 1, {
name = 'minuet',
group_index = 1,
priority = 100,
})
opts.performance = {
-- It is recommended to increase the timeout duration due to
-- the typically slower response speed of LLMs compared to
-- other completion sources. This is not needed when you only
-- need manual completion.
fetching_timeout = 2000,
}
opts.mapping = vim.tbl_deep_extend('force', opts.mapping or {}, {
-- if you wish to use manual complete
['<A-y>'] = require('minuet').make_cmp_map(),
-- You don't need to worry about <CR> delay because lazyvim handles this situation for you.
['<CR>'] = nil,
})
end,
}
- Implement RAG on the codebase and encode the codebase information into the request to the LLM.
- Virtual text UI support.
Contributions are welcome! Please feel free to submit a Pull Request.
- cmp-ai: A large piece of the codebase is based on this plugin.
- continue.dev: not a Neovim plugin, but I found a lot of LLM models from it.