ain
An HTTP API client for the terminal
Stars: 592
Ain is a terminal HTTP API client designed for scripting input and processing output via pipes. It allows flexible organization of APIs using files and folders, supports shell-scripts and executables for common tasks, handles url-encoding, and enables sharing the resulting curl, wget, or httpie command-line. Users can put things that change in environment variables or .env-files, and pipe the API output for further processing. Ain targets users who work with many APIs using a simple file format and uses curl, wget, or httpie to make the actual calls.
README:
Ain is a terminal HTTP API client. It's an alternative to Postman, Paw or Insomnia.
- Flexible organization of APIs using files and folders.
- Use shell-scripts and executables for common tasks.
- Put things that change in environment variables or .env-files.
- Handles url-encoding.
- Share the resulting curl, wget or httpie command-line.
- Pipe the API output for further processing.
- Tries hard to be helpful when there are errors.
Ain was built to enable scripting of input and further processing of output via pipes. It targets users who work with many APIs using a simple file format. It uses curl, wget or httpie to make the actual calls.
⭐ Please leave a star if you find it useful! ⭐
- Pre-requisites
- Installation
- Quick start
- Longer start
- Important concepts
- Templates
- Running ain
- Supported sections
- Environment variables
- Executables
- Fatals
- Quoting
- Escaping
- URL-encoding
- Sharing is caring
- Handling line endings
- Troubleshooting
- Ain in a bigger context
- Contributing
You need curl, wget or httpie installed and available on your $PATH. The easiest way to test this is to run ain -b. This will generate a basic starter template listing what backends you have available on your system in the [Backend] section. It will select one and leave the others commented out.
You can also check manually what backends you have installed by opening up a shell and typing curl, wget or http (add the suffix .exe to those commands if you're on windows). If there's any output from the command itself you're good to go.
On Linux or Mac one of the three above is very likely to be installed on your box already. The others are available via your package manager or Homebrew.
If you're on Windows, curl.exe is already installed on Windows 10 build 17063 or higher. Otherwise you can get the binaries via scoop, chocolatey or download them yourself. Ain uses curl.exe and cannot use the curl cmdlet built into PowerShell.
You need Go 1.13 or higher. Using go install:
go install github.com/jonaslu/ain/cmd/ain@latest
Using the package manager Homebrew:
brew install ain
Using the Windows package manager scoop:
scoop bucket add jonaslu_tools https://github.com/jonaslu/scoop-tools.git
scoop install ain
From the Arch User Repository using yay:
yay -S ain-bin
Download a release binary and install it so it's available on your $PATH:
https://github.com/jonaslu/ain/releases
Ain comes with a built-in basic template that you can use as a starting point. Ain also checks what backends (that's curl, wget or httpie) are available on your system and inserts them into the [Backend] section of the generated template. One will be selected and the rest commented out so the template is runnable directly.
Run:
ain -b basic_template.ain
The command above will output a starter-template to the file basic_template.ain.
The basic template contains a common scenario: calling GET on localhost with the header Content-Type: application/json.
Run the generated template by specifying a PORT environment variable:
PORT=8080 ain basic_template.ain
Ain uses sections in square brackets to specify how to call an API.
Start by putting things common to all APIs for a service in a file (let's call it base.ain):
$> cat base.ain
[Host]
http://localhost:8080
[Headers]
Content-Type: application/json
[Backend]
curl
[BackendOptions]
-sS
Then add another file for a specific URL:
$> cat create-blog-post.ain
[Host]
/api/blog/create
[Method]
POST
[Body]
{
"title": "Million dollar idea",
"text": "A dating service. With music."
}
Run ain to combine them into a single API call and print the result:
$> ain base.ain create-blog-post.ain
{
"status": "ok"
}
See the help for all options ain supports: ain -h
- Templates: Files containing what, how and where to make the API call. By convention they have the file ending .ain.
- Sections: Headings in a template file.
- Environment variables: Enables variables in a template file.
- Executables: Enables using the results of another command in a template file.
- Backends: The thing that makes the API call (curl, wget or httpie).
- Fatals: Errors in parsing the template files (it's your fault).
Ain reads sections from template-files. Here's a full example:
[Host]
http://localhost:${PORT}/api/blog/post
[Query]
id=2e79870c-6504-4ac6-a2b7-01da7a6532f1
[Headers]
Authorization: Bearer $(./get-jwt-token.sh)
Content-Type: application/json
[Method]
POST
[Body]
{
"title": "Reaping death",
"content": "There is a place beyond the dreamworlds past the womb of night."
}
[Config]
Timeout=10
[Backend]
curl
[BackendOptions]
-sS # Comments are ignored.
# This too.
The template files can be named anything, but a unique file ending such as .ain is recommended so you can find them easily.
Ain understands eight [Sections] (the things in square brackets). Each section is described in detail below.
Sections either combine or overwrite across all the template files given to ain.
Anything after a pound sign (#) is a comment and will be ignored.
ain [options] <template-files...>[!]
Ain accepts one or more template files as a mandatory parameter. As sections combine or overwrite where it makes sense, you can organize API calls into hierarchical structures with increasing specificity.
An example would be setting the [Headers], [Backend] and [BackendOptions] in a base template file and then specifying the specific [Host], [Method] and [Body] in several template files, one for each API-endpoint. You can even use an alias for things you will always set.
Adding an exclamation mark (!) at the end of the template file name makes ain open the file in your $VISUAL or $EDITOR editor (falling back to vim, in that order) so you can edit the template file. Any changes are not stored back into the template file and are used only for this invocation.
Example:
ain templates/get-blog-post.ain!
Ain waits for the editor command to exit. Any terminal editor such as vim, emacs, nano etc will be fine. If your editor of choice forks (e g vscode does by default) check if there's a flag stopping it from forking. For example, to stop vscode from forking use the --wait flag:
export EDITOR="code --wait"
If ain is connected to a pipe it will try to read template file names off that pipe. This enables you to use find and a selector such as fzf to keep track of the template-files:
$> find . -name *.ain | fzf -m | ain
Template file names specified on the command line are read before any names from a pipe. This means that echo create-blog-post.ain | ain base.ain is the same as ain base.ain create-blog-post.ain.
Ain behaves like bash when it comes to file names: if they contain whitespace the name must be quoted.
When making the call, ain mimics how data is returned by the backend. After printing any internal errors of its own, ain echoes back output from the backend: first the standard error (stderr) and then the standard out (stdout). It then returns the exit code from the backend command as its own, unless there are errors specific to ain, in which case it returns status 1.
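The relay behavior above can be sketched in Python (an illustration of the documented behavior, not ain's actual Go implementation):

```python
import subprocess
import sys

def run_backend(cmd):
    """Run a backend command, relaying stderr first, then stdout,
    and return the backend's own exit code (mimicking ain)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    sys.stderr.write(result.stderr)   # backend errors first
    sys.stdout.write(result.stdout)   # then the payload
    return result.returncode          # backend's exit code becomes ours
```

This ordering is what lets you pipe the payload onward for further processing while still seeing errors on the terminal.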
Section names are case-insensitive and surrounding whitespace is ignored, but by convention they use CamelCase and are not indented. A section cannot be defined twice in a file. A section ends where the next begins or the file ends.
See escaping if you need a literal supported section heading on a new line.
Contains the URL to the API. This section appends the lines from one template file to the next. This neat little feature allows you to specify a base-url in one file (e g base.ain) as such: http://localhost:3000 and in the next template file specify the endpoint (e g login.ain): /api/auth/login.
It's recommended that you use the [Query] section below for query-parameters as it handles joining with delimiters and trimming whitespace. You can however put raw query-parameters in the [Host] section too.
Any query-parameters added in the [Query] section are appended last to the URL. The whole URL is properly url-encoded before being passed to the backend. The [Host] section must combine to one and only one valid URL; multiple URLs are not supported.
Ain performs no validation on the URL (as backends differ on what a valid URL looks like). If your call does not go through, use ain -p as mentioned in troubleshooting and feed that command directly to the backend to see what it makes of the URL.
The [Host] section is mandatory and appends across template files.
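The append behavior can be illustrated with a small Python sketch (an assumed model of the mechanics, not ain's source):

```python
def combine_host(*host_sections):
    """Concatenate the [Host] lines from each template file, in the
    order the files were given, into one URL string."""
    return "".join(
        line.strip()
        for section in host_sections
        for line in section
    )

# base.ain contributes the base URL, login.ain the endpoint:
url = combine_host(["http://localhost:3000"], ["/api/auth/login"])
```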
All lines in the [Query] section are appended to the URL after the complete URL has been assembled. This means that you can specify query-parameters that apply to many endpoints in one file instead of having to include the same parameter in all endpoints.
An example is an API_KEY=<secret> query-parameter that applies to several endpoints. You can define this query-parameter in a base-file and simply have the specific endpoint URL and possible extra query-parameters in their own files.
Example - base.ain:
[Host]
http://localhost:8080/api
[Query]
API_KEY=a922be9f-1aaf-47ef-b70b-b400a3aa386e
get-post.ain:
[Host]
/blog/post
[Query]
id=1
This will result in the url:
http://localhost:8080/api/blog/post?API_KEY=a922be9f-1aaf-47ef-b70b-b400a3aa386e&id=1
To avoid the common bash-ism error of having spaces around the equals sign, the whitespace in a query key / value is only significant within the string. This means that page=3 and page = 3 will become the same query parameter, and page = the next one will become page=the+next+one when processed. If you need actual spaces between the equals sign and the key / value strings you need to encode them yourself (e g page+=+3) or put that key-value pair in the [Host] section where whitespace is significant.
Each line under the [Query] section is appended with a delimiter. Ain defaults to the query-string delimiter &. See the [Config] section for setting a custom delimiter.
All query-parameters are properly url-encoded. See url-encoding.
The [Query] section appends across template files.
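The join-and-trim behavior described above can be sketched in Python (illustrative only; the trimming rules follow the documentation, and the encoding of inner spaces into + happens in the later url-encoding step):

```python
def build_query(lines, delim="&"):
    """Join [Query] lines with a delimiter. Whitespace around the
    equals sign is trimmed; spaces inside a value are kept (they are
    turned into '+' when the URL is url-encoded later)."""
    params = []
    for line in lines:
        key, sep, value = line.partition("=")
        params.append(f"{key.strip()}={value.strip()}" if sep else line.strip())
    return delim.join(params)
```

The delim parameter mirrors the QueryDelim config: build_query(lines, delim=";") joins with a semicolon instead of the default &.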
Headers to include in the API call.
Example:
[Headers]
Authorization: Bearer 888e90f2-319f-40a0-b422-d78bb95f229e
Content-Type: application/json
The [Headers] section appends across template files.
What HTTP method to use in the API call (e g GET, POST, PATCH). If omitted the backend default is used (GET in curl, wget and httpie alike).
Example:
[Method]
POST
The [Method] section is overridden by later template files.
If the API call needs a body (as in the POST or PATCH http methods) the content of this section is passed as a file to the backend with the formatting retained from the [Body] section. Ain uses files to pass the [Body] contents because white-space may be important (e g yaml) and this section tends to be long.
The file passed to the backend is removed after the API call unless you pass the -l (as in leave) flag. Ain places the file in the $TMPDIR directory (usually /tmp on your box). You can override this in your shell by explicitly setting $TMPDIR if you'd like the files elsewhere.
Passing the -p (as in print) flag will cause ain to write out a file named ain-body in the directory where ain is invoked (cwd) and leave the file after completion. The -p flag is for sharing and for troubleshooting. Leaving the body file makes the resulting printed command shareable and runnable.
The [Body] section removes any trailing whitespace and keeps empty newlines between the first and last non-empty line.
Example:
[Body]
{
"some": "json", # ain removes comments
"more": "jayson"
}
It is passed like this in the tmp-file:
{
"some": "json",
"more": "jayson"
}
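The whitespace rule for [Body] can be sketched like this (assumed semantics, derived from the description above):

```python
def normalize_body(lines):
    """Strip trailing whitespace from each line and drop empty lines
    before the first and after the last non-empty line; empty lines
    in between are kept."""
    lines = [line.rstrip() for line in lines]
    while lines and not lines[0]:
        lines.pop(0)
    while lines and not lines[-1]:
        lines.pop()
    return "\n".join(lines)
```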
The [Body] section is overridden by later template files.
This section contains config for ain. All config parameters are case-insensitive and any whitespace is ignored. Parameters for backends themselves are passed via the [BackendOptions] section.
Full config example:
[Config]
Timeout=3
QueryDelim=;
The [Config] section is overridden by later template files.
Config format: Timeout=<timeout in seconds>
The timeout is enforced during the whole execution of ain (both running executables and the actual API call). If omitted defaults to no timeout. This is the only section where executables cannot be used, since the timeout needs to be known before the executables are invoked.
Config format: QueryDelim=<text>
This is the delimiter used when concatenating the lines under the [Query] section to form the query-string of a URL. It can be any text that does not contain a space, including the empty string.
It defaults to &.
The [Backend] section specifies what command should be used to run the actual API call.
Valid options are curl, wget or httpie.
Example:
[Backend]
curl
The [Backend] section is mandatory and is overridden by later template files.
Backend specific options that are passed on to the backend command invocation.
Example:
[Backend]
curl
[BackendOptions]
-sS # Makes curl disable its progress meter in a pipe
The [BackendOptions] section appends across template files.
Anything inside ${} in a template is replaced with the value found in the environment.
Ain also reads any .env files in the folder from where it's run. You can pass a custom .env file via the -e flag. Only new variables are set: any already existing environment variable is not modified.
This enables you to specify things that vary across API calls either permanently in the .env file or one-shot via the command-line. Example:
PORT=5000 ain base.ain create-blog-post.ain
Environment variables are expanded first and can be used within any executable. Example: $(cat ${ENV}/token.json).
Ain uses envparse for parsing environment variables.
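The only-new-variables rule can be sketched like this (illustrative; ain's actual parsing is done by envparse and handles more syntax):

```python
import os

def load_dotenv(path, environ=os.environ):
    """Read KEY=value lines from a .env file, setting only variables
    that are not already present in the environment."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            environ.setdefault(key.strip(), value.strip())  # existing vars win
```

This is why PORT=5000 ain base.ain on the command line beats a PORT entry in the .env file.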
An executable expression (i e $(command arg1 arg2)) will be replaced by running the command with any arguments and substituting the expression with the output (stdout). For example $(echo 1) will be replaced by 1 when processing the template.
A more real-world example is getting JWT tokens from a separate script and sharing them across templates:
[Headers]
Authorization: Bearer $(bash -c "./get-login.sh | jq -r '.token'")
If shell features such as pipes are needed, this can be done via a command string (e g bash -c in bash).
If parentheses are needed as arguments they must be within quotes (e g $(node -e 'console.log("Hi")')) so they don't end the executable expression.
Ain expects the first word in an executable to be on your $PATH and the rest to be arguments (hence the need for quotes around the argument to bash -c, as it is passed as one argument).
Executables are captured and replaced in the template after any environment-variables are expanded. This means that anything the executable returns is inserted directly even if it's a valid environment variable name.
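The two-pass order can be sketched in Python (a hypothetical model, not ain's code; note the naive whitespace split, whereas ain also understands quoted arguments):

```python
import os
import re
import subprocess

def expand(template, environ=os.environ):
    """Pass 1: replace ${VAR} with environment values.
    Pass 2: replace $(cmd args...) with the command's stdout.
    Pass 2 output is inserted verbatim and never re-expanded."""
    expanded = re.sub(
        r"\$\{(\w+)\}", lambda m: environ[m.group(1)], template
    )
    return re.sub(
        r"\$\(([^)]*)\)",
        lambda m: subprocess.run(
            m.group(1).split(),  # naive split; ain also handles quoting
            capture_output=True, text=True
        ).stdout.strip(),
        expanded,
    )
```

Because the passes never repeat, an executable that prints ${GOAT} leaves that text in the output untouched.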
Ain has two types of errors: fatals and errors. Errors are things internal to ain (it's not your fault) such as not finding the backend-binary.
Fatals are errors in the template (it's your fault). Ain will try to parse as much of the templates as possible aggregating fatals before reporting back to you. Fatals include the template file name where the fatal occurred, the line-number and a small context of the template:
$ ain templates/example.ain
Fatal error in file: templates/example.ain
Cannot find value for variable PORT on line 2:
1 [Host]
2 > http://localhost:${PORT}
3
Fatals can be hard to understand if environment variables or executables substitute for values in the template. If the line with the fatal contains any substituted value a separate expanded context is printed. It contains up to three lines with the resulting substitution and a row number into the original template:
$ TIMEOUT=-1 ain templates/example.ain
Fatal error in file: templates/example.ain
Timeout interval must be greater than 0 on line 10:
9 [Config]
10 > Timeout=${TIMEOUT}
11
Expanded context:
10 > Timeout=-1
Quoting in bash is hard and therefore ain tries to avoid it. There are four places where it might be necessary: arguments to executables, backend options, invoking the $VISUAL or $EDITOR command, and passing template names via a pipe into ain. All for the same reason as in bash: a word is an argument to something and whitespace is the delimiter to the next argument. If whitespace is part of the argument it must be explicit.
The canonical example of when quoting is needed is doing more complex things involving pipes, e g $(sh -c 'find . | fzf -m | xargs echo').
Quoting is kept simple: you can use ' or ". There is only one escape sequence (\' and \" respectively) to insert a quote inside a quoted string of the same type. When possible you can avoid escaping by selecting the other quote character (e g 'I need a " inside this string').
TL;DR: To escape a comment # precede it with a backtick: `#.
Escaping is hard and therefore ain tries to avoid it. These symbols have special meaning to ain:
Symbol -> meaning
# -> comment
${ -> environment variable
$( -> executable
If you need these symbols literally in your output, escape with a backtick:
Symbol -> output
`# -> #
`${ -> ${
`$( -> $(
If you need a literal backtick just before a symbol, you escape the escaping with a slash:
\`#
\`${
\`$(
If you need a literal } in an environment variable you escape it with a backtick:
Template -> Environment variable
${VA`}RZ} -> VA}RZ
If you need a literal ) in an executable, either escape it with a backtick or enclose it in quotes.
These two examples are equivalent and insert the string Hi:
$(node -e console.log('Hi'`))
$(node -e 'console.log("Hi")')
If you need a literal backtick right before closing the envvar or executable you escape the backtick with a slash:
$(echo \`)
${VAR\`}
Since environment variables are only expanded once, ${ doesn't need escaping when returned from an environment variable. E g with VAR='${GOAT}', ${GOAT} is passed literally to the output. Same for executables: any returned value containing ${ does not need escaping. E g with $(echo $(yo ), $(yo is passed literally to the output.
Comments need escaping when returned from both environment variables and executables.
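The `# escape for comments can be sketched like this (illustrative only; ain's parser also handles the \` case described above):

```python
def strip_comment(line):
    """Return the line with any unescaped '# comment' removed;
    the sequence `# produces a literal '#'."""
    out = []
    i = 0
    while i < len(line):
        if line.startswith("`#", i):   # escaped: emit a literal '#'
            out.append("#")
            i += 2
        elif line[i] == "#":           # unescaped: comment starts here
            break
        else:
            out.append(line[i])
            i += 1
    return "".join(out)
```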
A section header (one of the eight listed under supported sections) needs escaping if it's the only text on a separate line. It is escaped with a backtick. Example:
[Body]
I'm part of the
`[Body]
and included in the output.
If you need a literal backtick followed by a valid section heading you escape that backtick with a slash. Example:
[Body]
This text is outputted as
\`[Body]
backtick [Body].
URL-encoding is something ain tries hard to take care of for you. Both the path and the query section of a URL are scanned and any non-valid characters are encoded, while already legal encodings (the format %<hex><hex>, and + for the query string) are kept as is.
This means that you can mix url-encoded text, half encoded text or unencoded text and ain will convert them all into a properly url-encoded URL.
Example:
[Host]
https://localhost:8080/api/finance/ca$h
[Query]
account=full of ca%24h
Will result in the URL:
https://localhost:8080/api/finance/ca%24h?account=full+of+ca%24h
The only caveats are that ain cannot know if a plus sign (+) is an encoded space or an actual plus sign; in this case ain leaves the plus sign as is. Likewise it cannot know if you actually meant % instead of an encoded character. In both cases you need to manually escape the plus (%2B) and percent sign (%25) yourself.
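Python's urllib can mimic this keep-legal-escapes behavior; a minimal sketch assuming '%' and '+' are simply left alone (which reproduces the caveats above), not ain's actual encoder:

```python
from urllib.parse import quote

def encode_path(path):
    """Percent-encode a URL path but leave '%' alone so existing
    %XX escapes survive (a bare '%' must be pre-encoded as %25)."""
    return quote(path, safe="/%")

def encode_query(query):
    """Encode a query string, keeping '%', '+', '=' and '&' literal
    and turning spaces into '+'."""
    return quote(query, safe="%+=&").replace("%20", "+")
```

Applied to the example above, encode_path("/api/finance/ca$h") and encode_query("account=full of ca%24h") reproduce the documented URL.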
Ain can print out the command instead of running it via the -p flag. This enables you to inspect how the curl, wget or httpie API call would look, or to share the command:
ain -p base.ain create-blog-post.ain > share-me.sh
Piping it into bash is equivalent to running the command without -p:
ain -p base.ain create-blog-post.ain | bash
Any content within the [Body] section will, when passing the -p flag, be written to a file in the current working directory where ain is invoked. The file is not removed after ain completes. See [Body] for details.
A note on line endings. Ain uses line-feed (\n) when printing its output. If you're on Windows and storing ain's result to a file, this may cause trouble. Instead of trying to guess what line ending you're on (WSL, docker, cygwin etc makes this a wild goose chase), you'll have to manually convert them if the receiving program complains.
Instructions here: https://stackoverflow.com/a/19914445/1574968
If the templates are valid but the actual backend call fails, passing the -p flag will show you the command ain tries to run. Invoking this yourself in a terminal might give you more clues to what's wrong.
But wait! There's more!
With ain being terminal friendly there are a few neat tricks in the wiki.
I'd love it if you want to get your hands dirty and improve ain!
If you look closely there are almost* no tests. There's even a commit wiping all tests that once were. Why is a good question. WTF is also a valid response.
It's an experiment, you see. I've blogged about atomic literate commits paired with a thing called a test plan. This means you make the commit solve one problem, write in plain English what the problem is, how the commit solves it and how you verified that it works. All of that in the commit messages. For a TL;DR, do a git log and see for yourself.
I'll ask you to do the same and we'll experiment together. See it as an opportunity to try something new.
* Except for where it does make sense to have a unit test: to exercise a well-known algo and prove it's correct, as done in utils_test.go. Doing this by hand would be hard, time-consuming and error prone.
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for ain
Similar Open Source Tools
ain
Ain is a terminal HTTP API client designed for scripting input and processing output via pipes. It allows flexible organization of APIs using files and folders, supports shell-scripts and executables for common tasks, handles url-encoding, and enables sharing the resulting curl, wget, or httpie command-line. Users can put things that change in environment variables or .env-files, and pipe the API output for further processing. Ain targets users who work with many APIs using a simple file format and uses curl, wget, or httpie to make the actual calls.
blinkid-ios
BlinkID iOS is a mobile SDK that enables developers to easily integrate ID scanning and data extraction capabilities into their iOS applications. The SDK supports scanning and processing various types of identity documents, such as passports, driver's licenses, and ID cards. It provides accurate and fast data extraction, including personal information and document details. With BlinkID iOS, developers can enhance their apps with secure and reliable ID verification functionality, improving user experience and streamlining identity verification processes.
prelude
Prelude is a simple tool for creating context prompts for LLMs with long context windows. It helps improve code distributed over multiple files by generating prompts with file tree and concatenated file contents. The prompt is copied to clipboard and can be saved to a file. It excludes files listed in .preludeignore and .gitignore files. The tool requires the `tree` command to be installed on the system for functionality.
Open-LLM-VTuber
Open-LLM-VTuber is a project in early stages of development that allows users to interact with Large Language Models (LLM) using voice commands and receive responses through a Live2D talking face. The project aims to provide a minimum viable prototype for offline use on macOS, Linux, and Windows, with features like long-term memory using MemGPT, customizable LLM backends, speech recognition, and text-to-speech providers. Users can configure the project to chat with LLMs, choose different backend services, and utilize Live2D models for visual representation. The project supports perpetual chat, offline operation, and GPU acceleration on macOS, addressing limitations of existing solutions on macOS.
llamafile
llamafile is a tool that enables users to distribute and run Large Language Models (LLMs) with a single file. It combines llama.cpp with Cosmopolitan Libc to create a framework that simplifies the complexity of LLMs into a single-file executable called a 'llamafile'. Users can run these executable files locally on most computers without the need for installation, making open LLMs more accessible to developers and end users. llamafile also provides example llamafiles for various LLM models, allowing users to try out different LLMs locally. The tool supports multiple CPU microarchitectures, CPU architectures, and operating systems, making it versatile and easy to use.
smartcat
Smartcat is a CLI interface that brings language models into the Unix ecosystem, allowing power users to leverage the capabilities of LLMs in their daily workflows. It features a minimalist design, seamless integration with terminal and editor workflows, and customizable prompts for specific tasks. Smartcat currently supports OpenAI, Mistral AI, and Anthropic APIs, providing access to a range of language models. With its ability to manipulate file and text streams, integrate with editors, and offer configurable settings, Smartcat empowers users to automate tasks, enhance code quality, and explore creative possibilities.
llamabot
LlamaBot is a Pythonic bot interface to Large Language Models (LLMs), providing an easy way to experiment with LLMs in Jupyter notebooks and build Python apps utilizing LLMs. It supports all models available in LiteLLM. Users can access LLMs either through local models with Ollama or by using API providers like OpenAI and Mistral. LlamaBot offers different bot interfaces like SimpleBot, ChatBot, QueryBot, and ImageBot for various tasks such as rephrasing text, maintaining chat history, querying documents, and generating images. The tool also includes CLI demos showcasing its capabilities and supports contributions for new features and bug reports from the community.
SirChatalot
A Telegram bot that proves you don't need a body to have a personality. It can use various text and image generation APIs to generate responses to user messages. For text generation, the bot can use: * OpenAI's ChatGPT API (or other compatible API). Vision capabilities can be used with GPT-4 models. Function calling can be used with Function calling. * Anthropic's Claude API. Vision capabilities can be used with Claude 3 models. Function calling can be used with tool use. * YandexGPT API Bot can also generate images with: * OpenAI's DALL-E * Stability AI * Yandex ART This bot can also be used to generate responses to voice messages. Bot will convert the voice message to text and will then generate a response. Speech recognition can be done using the OpenAI's Whisper model. To use this feature, you need to install the ffmpeg library. This bot is also support working with files, see Files section for more details. If function calling is enabled, bot can generate images and search the web (limited).
aiac
AIAC is a library and command line tool to generate Infrastructure as Code (IaC) templates, configurations, utilities, queries, and more via LLM providers such as OpenAI, Amazon Bedrock, and Ollama. Users can define multiple 'backends' targeting different LLM providers and environments using a simple configuration file. The tool allows users to ask a model to generate templates for different scenarios and composes an appropriate request to the selected provider, storing the resulting code to a file and/or printing it to standard output.
eval-dev-quality
DevQualityEval is an evaluation benchmark and framework designed to compare and improve the quality of code generation of Language Model Models (LLMs). It provides developers with a standardized benchmark to enhance real-world usage in software development and offers users metrics and comparisons to assess the usefulness of LLMs for their tasks. The tool evaluates LLMs' performance in solving software development tasks and measures the quality of their results through a point-based system. Users can run specific tasks, such as test generation, across different programming languages to evaluate LLMs' language understanding and code generation capabilities.
gpt-subtrans
GPT-Subtrans is an open-source subtitle translator that utilizes large language models (LLMs) as translation services. It supports translation between any language pairs that the language model supports. Note that GPT-Subtrans requires an active internet connection, as subtitles are sent to the provider's servers for translation, and their privacy policy applies.
RAGMeUp
RAG Me Up is a generic framework that enables users to perform Retrieve and Generate (RAG) on their own dataset easily. It consists of a small server and UIs for communication. Best run on GPU with 16GB vRAM. Users can combine RAG with fine-tuning using LLaMa2Lang repository. The tool allows configuration for LLM, data, LLM parameters, prompt, and document splitting. Funding is sought to democratize AI and advance its applications.
reader
Reader is a tool that converts any URL to an LLM-friendly input with a simple prefix `https://r.jina.ai/`. It improves the output for your agent and RAG systems at no cost. Reader supports image reading, captioning all images at the specified URL and adding `Image [idx]: [caption]` as an alt tag. This enables downstream LLMs to interact with the images in reasoning, summarizing, etc. Reader offers a streaming mode, useful when the standard mode provides an incomplete result. In streaming mode, Reader waits a bit longer until the page is fully rendered, providing more complete information. Reader also supports a JSON mode, which contains three fields: `url`, `title`, and `content`. Reader is backed by Jina AI and licensed under Apache-2.0.
vectorflow
VectorFlow is an open source, high throughput, fault tolerant vector embedding pipeline. It provides a simple API endpoint for ingesting large volumes of raw data, processing, and storing or returning the vectors quickly and reliably. The tool supports text-based files like TXT, PDF, HTML, and DOCX, and can be run locally with Kubernetes in production. VectorFlow offers functionalities like embedding documents, running chunking schemas, custom chunking, and integrating with vector databases like Pinecone, Qdrant, and Weaviate. It enforces a standardized schema for uploading data to a vector store and supports features like raw embeddings webhook, chunk validation webhook, S3 endpoint, and telemetry. The tool can be used with the Python client and provides detailed instructions for running and testing the functionalities.
redbox-copilot
Redbox Copilot is a retrieval augmented generation (RAG) app that uses GenAI to chat with and summarise civil service documents. It increases organisational memory by indexing documents and can summarise reports read months ago, supplement them with current work, and produce a first draft that lets civil servants focus on what they do best. The project uses a microservice architecture with each microservice running in its own container defined by a Dockerfile. Dependencies are managed using Python Poetry. Contributions are welcome, and the project is licensed under the MIT License.
For similar tasks
ain
Ain is a terminal HTTP API client designed for scripting input and processing output via pipes. It allows flexible organization of APIs using files and folders, supports shell-scripts and executables for common tasks, handles url-encoding, and enables sharing the resulting curl, wget, or httpie command-line. Users can put things that change in environment variables or .env-files, and pipe the API output for further processing. Ain targets users who work with many APIs using a simple file format and uses curl, wget, or httpie to make the actual calls.
agentica
Agentica is a human-centric framework for building large language model agents. It provides functionality for planning, memory management, and tool usage, and supports features such as reflection, planning and execution, RAG, multi-agent and multi-role setups, and workflows. The tool allows users to quickly code and orchestrate agents, customize prompts, and make API calls to services including OpenAI, Azure, DeepSeek, Moonshot, Claude, Ollama, and Together. Agentica aims to simplify building AI agents by providing a user-friendly interface and a range of agent-development functionality.
For similar jobs
google.aip.dev
API Improvement Proposals (AIPs) are design documents that provide high-level, concise documentation for API development at Google. The goal of AIPs is to serve as the source of truth for API-related documentation and to facilitate discussion and consensus among API teams. AIPs are similar to Python Enhancement Proposals (PEPs) and are organized into different areas within Google to accommodate historical differences in customs, styles, and guidance.
kong
Kong, or Kong API Gateway, is a cloud-native, platform-agnostic, scalable API Gateway distinguished for its high performance and extensibility via plugins. It also provides advanced AI capabilities with multi-LLM support. By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease. Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.
speakeasy
Speakeasy is a tool that helps developers create production-quality SDKs, Terraform providers, documentation, and more from OpenAPI specifications. It supports a wide range of languages, including Go, Python, TypeScript, Java, and C#, and provides features such as automatic maintenance, type safety, and fault tolerance. Speakeasy also integrates with popular package managers like npm, PyPI, Maven, and Terraform Registry for easy distribution.
apicat
ApiCat is an API documentation management tool that is fully compatible with the OpenAPI specification. With ApiCat, you can freely and efficiently manage your APIs. It integrates LLM capabilities, which not only help you automatically generate API documentation and data models but also create corresponding test cases based on the API content. Using ApiCat, you can quickly take care of everything outside of the code itself (documentation, data models, and test cases), letting you focus your energy on writing the code.
aiohttp-pydantic
Aiohttp pydantic is an aiohttp view that makes it easy to parse and validate requests. You use function annotations to declare what your HTTP-verb handler methods expect, and aiohttp-pydantic parses the HTTP request for you, validates the data, and injects the parameters you want. It provides validation of the query string, request body, URL path, and HTTP headers, as well as OpenAPI Specification generation.
OllamaKit
OllamaKit is a Swift library designed to simplify interactions with the Ollama API. It handles network communication and data processing, offering an efficient interface for Swift applications to communicate with the Ollama API. The library is optimized for use within Ollamac, a macOS app for interacting with Ollama models.
ollama4j
Ollama4j is a Java library that serves as a wrapper, or binding, for the Ollama server, facilitating communication with it and access to its models. The library requires Java 11 or higher, and the Ollama server it talks to can run locally or via Docker. Users can integrate Ollama4j into Maven projects by adding the specified dependency. The project offers API specifications and supports common development tasks such as building, running unit tests, and running integration tests, with releases automated through a GitHub Actions CI workflow. Noted areas for improvement include adhering to Java naming conventions, updating deprecated code, implementing logging, using Lombok, and enhancing request-body creation. Contributions are encouraged, whether reporting bugs, suggesting enhancements, or contributing code.