
client-python
Python client library for Mistral AI platform
Stars: 570

The Mistral Python Client is a tool inspired by cohere-python that allows users to interact with the Mistral AI API. It provides functionalities to access and utilize the AI capabilities offered by Mistral. Users can easily install the client using pip and manage dependencies using poetry. The client includes examples demonstrating how to use the API for various tasks, such as chat interactions. To get started, users need to obtain a Mistral API Key and set it as an environment variable. Overall, the Mistral Python Client simplifies the integration of Mistral AI services into Python applications.
README:
This documentation is for Mistral AI SDK v1. You can find more details on how to migrate from v0 to v1 here
Before you begin, you will need a Mistral AI API key.
- Get your own Mistral API Key: https://docs.mistral.ai/#api-access
- Set your Mistral API Key as an environment variable. You only need to do this once.
# set Mistral API Key (using zsh for example)
$ echo 'export MISTRAL_API_KEY=[your_key_here]' >> ~/.zshenv
# reload the environment (or just quit and open a new terminal)
$ source ~/.zshenv
Mistral AI API: Our Chat Completion and Embeddings APIs specification. Create your account on La Plateforme to get access and read the docs to learn how to use it.
[!NOTE] Python version upgrade policy
Once a Python version reaches its official end-of-life date, a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum Python version supported in the SDK will be updated.
The SDK can be installed with either pip or poetry package managers.
PIP is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
pip install mistralai
Poetry is a modern tool that simplifies dependency management and package publishing by using a single pyproject.toml file to handle project metadata and dependencies.
poetry add mistralai
You can use this SDK in a Python shell with uv and the uvx command that comes with it like so:
uvx --from mistralai python
It's also possible to write a standalone Python script without needing to set up a whole project like so:
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = [
# "mistralai",
# ]
# ///
from mistralai import Mistral
sdk = Mistral(
    # SDK arguments
)
# Rest of script here...
Once that is saved to a file, you can run it with uv run script.py, where script.py can be replaced with the actual file name.
This example shows how to create chat completions.
# Synchronous Example
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.chat.complete(model="mistral-small-latest", messages=[
        {
            "content": "Who is the best French painter? Answer in one short sentence.",
            "role": "user",
        },
    ])
    # Handle response
    print(res)
The same SDK client can also be used to make asynchronous requests by importing asyncio.
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os
async def main():
    async with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        res = await mistral.chat.complete_async(model="mistral-small-latest", messages=[
            {
                "content": "Who is the best French painter? Answer in one short sentence.",
                "role": "user",
            },
        ])
        # Handle response
        print(res)
asyncio.run(main())
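As a rough sketch of handling the response (the attribute names follow the common chat-completion shape and are assumptions; adjust for your SDK version):
# Usage sketch (hedged): pull the generated text out of the response
if res is not None and res.choices:
    answer = res.choices[0].message.content
    print(answer)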
This example shows how to upload a file.
# Synchronous Example
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.files.upload(file={
        "file_name": "example.file",
        "content": open("example.file", "rb"),
    })
    # Handle response
    print(res)
The same SDK client can also be used to make asynchronous requests by importing asyncio.
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os
async def main():
    async with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        res = await mistral.files.upload_async(file={
            "file_name": "example.file",
            "content": open("example.file", "rb"),
        })
        # Handle response
        print(res)
asyncio.run(main())
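The upload call returns metadata about the stored file. As a hedged sketch (the id attribute and the file_id parameter name are assumptions based on the Files methods listed further below), you could fetch the record back right away:
# Hedged sketch: look the uploaded file up again by its id (names are assumptions)
uploaded = mistral.files.retrieve(file_id=res.id)
print(uploaded)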
This example shows how to create agent completions.
# Synchronous Example
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.agents.complete(messages=[
        {
            "content": "Who is the best French painter? Answer in one short sentence.",
            "role": "user",
        },
    ], agent_id="<id>")
    # Handle response
    print(res)
The same SDK client can also be used to make asynchronous requests by importing asyncio.
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os
async def main():
    async with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        res = await mistral.agents.complete_async(messages=[
            {
                "content": "Who is the best French painter? Answer in one short sentence.",
                "role": "user",
            },
        ], agent_id="<id>")
        # Handle response
        print(res)
asyncio.run(main())
This example shows how to create an embeddings request.
# Synchronous Example
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.embeddings.create(model="mistral-embed", inputs=[
        "Embed this sentence.",
        "As well as this one.",
    ])
    # Handle response
    print(res)
The same SDK client can also be used to make asynchronous requests by importing asyncio.
# Asynchronous Example
import asyncio
from mistralai import Mistral
import os
async def main():
    async with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        res = await mistral.embeddings.create_async(model="mistral-embed", inputs=[
            "Embed this sentence.",
            "As well as this one.",
        ])
        # Handle response
        print(res)
asyncio.run(main())
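To work with the returned vectors, here is a minimal sketch assuming the response exposes a data list whose entries carry an embedding field (an OpenAI-style shape; the names are assumptions):
# Hedged sketch: iterate over the embedding vectors in the response
for item in res.data:
    vector = item.embedding  # expected to be a list of floats
    print(len(vector))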
You can run the examples in the examples/ directory using poetry run, or by entering the virtual environment using poetry shell.
Prerequisites
Before you begin, ensure you have an AZURE_ENDPOINT and an AZURE_API_KEY. To obtain these, you will need to deploy Mistral on Azure AI. See instructions for deploying Mistral on Azure AI here.
Here's a basic example to get you started. You can also run the example in the examples directory.
import asyncio
import os
from mistralai_azure import MistralAzure
client = MistralAzure(
    azure_api_key=os.getenv("AZURE_API_KEY", ""),
    azure_endpoint=os.getenv("AZURE_ENDPOINT", "")
)
async def main() -> None:
    res = await client.chat.complete_async(
        max_tokens=100,
        temperature=0.5,
        messages=[
            {
                "content": "Hello there!",
                "role": "user"
            }
        ]
    )
    print(res)
asyncio.run(main())
The documentation for the Azure SDK is available here.
Prerequisites
Before you begin, you will need to create a Google Cloud project and enable the Mistral API. To do this, follow the instructions here.
To run this locally you will also need to ensure you are authenticated with Google Cloud. You can do this by running
gcloud auth application-default login
Step 1: Install
Install the extras dependencies specific to Google Cloud:
pip install mistralai[gcp]
Step 2: Example Usage
Here's a basic example to get you started.
import asyncio
from mistralai_gcp import MistralGoogleCloud
client = MistralGoogleCloud()
async def main() -> None:
    res = await client.chat.complete_async(
        model="mistral-small-2402",
        messages=[
            {
                "content": "Hello there!",
                "role": "user"
            }
        ]
    )
    print(res)
asyncio.run(main())
The documentation for the GCP SDK is available here.
Available methods
The SDK groups these operations by resource; a short, hedged usage sketch follows the list.
Moderations
- moderate - Moderations
- moderate_chat - Chat Moderations
Embeddings
- create - Embeddings
Files
- upload - Upload File
- list - List Files
- retrieve - Retrieve File
- delete - Delete File
- download - Download File
- get_signed_url - Get Signed Url
Fine-tuning jobs
- list - Get Fine Tuning Jobs
- create - Create Fine Tuning Job
- get - Get Fine Tuning Job
- cancel - Cancel Fine Tuning Job
- start - Start Fine Tuning Job
Models
- list - List Models
- retrieve - Retrieve Model
- delete - Delete Model
- update - Update Fine Tuned Model
- archive - Archive Fine Tuned Model
- unarchive - Unarchive Fine Tuned Model
OCR
- process - OCR
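As a minimal, hedged sketch of how these operations map onto the client (the resource namespaces are assumed to mirror the groups above, and the model_id parameter name is an assumption):
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    # List models, then fetch a single model by id (parameter name is an assumption)
    listing = mistral.models.list()
    print(listing)
    model = mistral.models.retrieve(model_id="mistral-small-latest")
    print(model)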
Server-sent events are used to stream content from certain operations. These operations expose the stream as a Generator that can be consumed using a simple for loop. The loop terminates when the server no longer has any events to send and closes the underlying connection.
The stream is also a Context Manager: it can be used with the with statement and will close the underlying connection when the context is exited.
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.chat.stream(model="mistral-small-latest", messages=[
        {
            "content": "Who is the best French painter? Answer in one short sentence.",
            "role": "user",
        },
    ])
    with res as event_stream:
        for event in event_stream:
            # handle event
            print(event, flush=True)
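If you only want the generated text, a variant of the loop above along these lines usually works. It assumes each event wraps a chat-completion chunk under data with a choices/delta structure (an assumption; adjust to your SDK version):
# Hedged sketch: print only the streamed text deltas
with res as event_stream:
    for event in event_stream:
        delta = event.data.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)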
Certain SDK methods accept file objects as part of a request body or multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
[!TIP]
For endpoints that handle file uploads, byte arrays can also be used. However, using streams is recommended for large files.
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.files.upload(file={
        "file_name": "example.file",
        "content": open("example.file", "rb"),
    })
    # Handle response
    print(res)
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a RetryConfig object to the call:
from mistralai import Mistral
from mistralai.utils import BackoffStrategy, RetryConfig
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list(
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))
    # Handle response
    print(res)
If you'd like to override the default retry strategy for all operations that support retries, you can use the retry_config optional parameter when initializing the SDK:
from mistralai import Mistral
from mistralai.utils import BackoffStrategy, RetryConfig
import os
with Mistral(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
Handling errors in this SDK should largely match your expectations. All operations return a response object or raise an exception.
By default, an API error will raise a models.SDKError exception, which has the following properties:
Property | Type | Description |
---|---|---|
.status_code | int | The HTTP status code |
.message | str | The error message |
.raw_response | httpx.Response | The raw HTTP response |
.body | str | The response content |
When custom error responses are specified for an operation, the SDK may also raise their associated exceptions. You can refer to the respective Errors tables in the SDK docs for more details on possible exception types for each operation. For example, the list_async method may raise the following exceptions:
Error Type | Status Code | Content Type |
---|---|---|
models.HTTPValidationError | 422 | application/json |
models.SDKError | 4XX, 5XX | */* |
from mistralai import Mistral, models
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = None
    try:
        res = mistral.models.list()
        # Handle response
        print(res)
    except models.HTTPValidationError as e:
        # handle e.data: models.HTTPValidationErrorData
        raise e
    except models.SDKError as e:
        # handle exception
        raise e
You can override the default server globally by passing a server name to the server: str optional parameter when initializing the SDK client instance. The selected server will then be used as the default on the operations that use it. This table lists the names associated with the available servers:
Name | Server | Description |
---|---|---|
eu | https://api.mistral.ai | EU Production server |
from mistralai import Mistral
import os
with Mistral(
    server="eu",
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
The default server can also be overridden globally by passing a URL to the server_url: str optional parameter when initializing the SDK client instance. For example:
from mistralai import Mistral
import os
with Mistral(
    server_url="https://api.mistral.ai",
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
The Python SDK makes API calls using the httpx HTTP library. In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance.
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of HttpClient or AsyncHttpClient respectively; these are Protocols ensuring that the client has the necessary methods to make API calls. This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can just pass an instance of httpx.Client or httpx.AsyncClient directly.
For example, you could specify a header for every request that this SDK makes as follows:
from mistralai import Mistral
import httpx
http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = Mistral(client=http_client)
or you could wrap the client with your own custom logic:
from mistralai import Mistral
from mistralai.httpclient import AsyncHttpClient
from typing import Any, Optional, Union
import httpx

class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        # Add a header to every outgoing request before delegating to the wrapped client
        request.headers["Client-Level-Header"] = "added by client"
        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )

s = Mistral(async_client=CustomClient(httpx.AsyncClient()))
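If all you need is the low-level configuration mentioned at the start of this section (timeouts, custom headers, and so on), a simpler sketch is to pass a pre-configured httpx.Client directly; the timeout and header values below are just illustrative:
from mistralai import Mistral
import httpx
# Illustrative values: a 30-second overall timeout and a custom header on every request
http_client = httpx.Client(
    timeout=httpx.Timeout(30.0),
    headers={"x-custom-header": "someValue"},
)
s = Mistral(client=http_client)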
This SDK supports the following security scheme globally:
Name | Type | Scheme | Environment Variable |
---|---|---|---|
api_key | http | HTTP Bearer | MISTRAL_API_KEY |
To authenticate with the API, the api_key parameter must be set when initializing the SDK client instance. For example:
from mistralai import Mistral
import os
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:
    res = mistral.models.list()
    # Handle response
    print(res)
The Mistral class implements the context manager protocol and registers a finalizer function to close the underlying sync and async HTTPX clients it uses under the hood. This will close HTTP connections, release memory, and free up other resources held by the SDK. In short-lived Python programs and notebooks that make a few SDK method calls, resource management may not be a concern. However, in longer-lived programs, it is beneficial to create a single SDK instance via a context manager and reuse it across the application.
from mistralai import Mistral
import os
def main():
    with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        ...  # Rest of application here...

# Or when using async:
async def amain():
    async with Mistral(
        api_key=os.getenv("MISTRAL_API_KEY", ""),
    ) as mistral:
        ...  # Rest of application here...
You can set up your SDK to emit debug logs for SDK requests and responses.
You can pass your own logger class directly into your SDK.
from mistralai import Mistral
import logging
logging.basicConfig(level=logging.DEBUG)
s = Mistral(debug_logger=logging.getLogger("mistralai"))
You can also enable a default debug logger by setting the environment variable MISTRAL_DEBUG to true.
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.
While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation. We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.
Similar Open Source Tools

mcphub.nvim
MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.

clarifai-python
The Clarifai Python SDK offers a comprehensive set of tools to integrate Clarifai's AI platform and leverage computer vision capabilities like classification, detection, and segmentation, as well as natural language capabilities like classification, summarisation, generation, and Q&A, into your applications. With just a few lines of code, you can leverage cutting-edge artificial intelligence to unlock valuable insights from visual and textual content.

instructor
Instructor is a Python library that makes it a breeze to work with structured outputs from large language models (LLMs). Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows!

ai21-python
The AI21 Labs Python SDK is a comprehensive tool for interacting with the AI21 API. It provides functionalities for chat completions, conversational RAG, token counting, error handling, and support for various cloud providers like AWS, Azure, and Vertex. The SDK offers both synchronous and asynchronous usage, along with detailed examples and documentation. Users can quickly get started with the SDK to leverage AI21's powerful models for various natural language processing tasks.

aioshelly
Aioshelly is an asynchronous library designed to control Shelly devices. It is currently under development and requires Python version 3.11 or higher, along with dependencies like bluetooth-data-tools, aiohttp, and orjson. The library provides examples for interacting with Gen1 devices using CoAP protocol and Gen2/Gen3 devices using RPC and WebSocket protocols. Users can easily connect to Shelly devices, retrieve status information, and perform various actions through the provided APIs. The repository also includes example scripts for quick testing and usage guidelines for contributors to maintain consistency with the Shelly API.

obsei
Obsei is an open-source, low-code, AI powered automation tool that consists of an Observer to collect unstructured data from various sources, an Analyzer to analyze the collected data with various AI tasks, and an Informer to send analyzed data to various destinations. The tool is suitable for scheduled jobs or serverless applications as all Observers can store their state in databases. Obsei is still in alpha stage, so caution is advised when using it in production. The tool can be used for social listening, alerting/notification, automatic customer issue creation, extraction of deeper insights from feedbacks, market research, dataset creation for various AI tasks, and more based on creativity.

langchainrb
Langchain.rb is a Ruby library that makes it easy to build LLM-powered applications. It provides a unified interface to a variety of LLMs, vector search databases, and other tools, making it easy to build and deploy RAG (Retrieval Augmented Generation) systems and assistants. Langchain.rb is open source and available under the MIT License.

mistral-inference
Mistral Inference repository contains minimal code to run 7B, 8x7B, and 8x22B models. It provides model download links, installation instructions, and usage guidelines for running models via CLI or Python. The repository also includes information on guardrailing, model platforms, deployment, and references. Users can interact with models through commands like mistral-demo, mistral-chat, and mistral-common. Mistral AI models support function calling and chat interactions for tasks like testing models, chatting with models, and using Codestral as a coding assistant. The repository offers detailed documentation and links to blogs for further information.

curator
Bespoke Curator is an open-source tool for data curation and structured data extraction. It provides a Python library for generating synthetic data at scale, with features like programmability, performance optimization, caching, and integration with HuggingFace Datasets. The tool includes a Curator Viewer for dataset visualization and offers a rich set of functionalities for creating and refining data generation strategies.

ai00_server
AI00 RWKV Server is an inference API server for the RWKV language model based upon the web-rwkv inference engine. It supports VULKAN parallel and concurrent batched inference and can run on all GPUs that support VULKAN. No need for Nvidia cards!!! AMD cards and even integrated graphics can be accelerated!!! No need for bulky pytorch, CUDA and other runtime environments, it's compact and ready to use out of the box! Compatible with OpenAI's ChatGPT API interface. 100% open source and commercially usable, under the MIT license. If you are looking for a fast, efficient, and easy-to-use LLM API server, then AI00 RWKV Server is your best choice. It can be used for various tasks, including chatbots, text generation, translation, and Q&A.

openvino.genai
The GenAI repository contains pipelines that implement image and text generation tasks. The implementation uses OpenVINO capabilities to optimize the pipelines. Each sample covers a family of models and suggests certain modifications to adapt the code to specific needs. It includes the following pipelines: 1. Benchmarking script for large language models 2. Text generation C++ samples that support most popular models like LLaMA 2 3. Stable Diffusion (with LoRA) C++ image generation pipeline 4. Latent Consistency Model (with LoRA) C++ image generation pipeline

ai-gateway
LangDB AI Gateway is an open-source enterprise AI gateway built in Rust. It provides a unified interface to all LLMs using the OpenAI API format, focusing on high performance, enterprise readiness, and data control. The gateway offers features like comprehensive usage analytics, cost tracking, rate limiting, data ownership, and detailed logging. It supports various LLM providers and provides OpenAI-compatible endpoints for chat completions, model listing, embeddings generation, and image generation. Users can configure advanced settings, such as rate limiting, cost control, dynamic model routing, and observability with OpenTelemetry tracing. The gateway can be run with Docker Compose and integrated with MCP tools for server communication.

pocketgroq
PocketGroq is a tool that provides advanced functionalities for text generation, web scraping, web search, and AI response evaluation. It includes features like an Autonomous Agent for answering questions, web crawling and scraping capabilities, enhanced web search functionality, and flexible integration with Ollama server. Users can customize the agent's behavior, evaluate responses using AI, and utilize various methods for text generation, conversation management, and Chain of Thought reasoning. The tool offers comprehensive methods for different tasks, such as initializing RAG, error handling, and tool management. PocketGroq is designed to enhance development processes and enable the creation of AI-powered applications with ease.

fastrtc
FastRTC is a real-time communication library for Python that allows users to turn any Python function into a real-time audio and video stream over WebRTC or WebSockets. It provides features like automatic voice detection, UI launching, WebRTC support, WebSocket support, telephone support, and customizable backend for production applications. The library offers various examples and usage scenarios for audio and video streaming, object detection, voice APIs, chat applications, and more.

llm
llm.rb is a zero-dependency Ruby toolkit for Large Language Models that includes OpenAI, Gemini, Anthropic, xAI (Grok), DeepSeek, Ollama, and LlamaCpp. The toolkit provides full support for chat, streaming, tool calling, audio, images, files, and structured outputs (JSON Schema). It offers a single unified interface for multiple providers, zero dependencies outside Ruby's standard library, smart API design, and optional per-provider process-wide connection pool. Features include chat, agents, media support (text-to-speech, transcription, translation, image generation, editing), embeddings, model management, and more.
For similar tasks

Bavarder
Bavarder is an AI-powered chit-chat tool designed for informal conversations about unimportant matters. Users can engage in light-hearted discussions with the AI, simulating casual chit-chat scenarios. The tool provides a platform for users to interact with AI in a fun and entertaining way, offering a unique experience of engaging with artificial intelligence in a conversational manner.

ChaKt-KMP
ChaKt is a multiplatform app built using Kotlin and Compose Multiplatform to demonstrate the use of Generative AI SDK for Kotlin Multiplatform to generate content using Google's Generative AI models. It features a simple chat based user interface and experience to interact with AI. The app supports mobile, desktop, and web platforms, and is built with Kotlin Multiplatform, Kotlin Coroutines, Compose Multiplatform, Generative AI SDK, Calf - File picker, and BuildKonfig. Users can contribute to the project by following the guidelines in CONTRIBUTING.md. The app is licensed under the MIT License.

Neurite
Neurite is an innovative project that combines chaos theory and graph theory to create a digital interface that explores hidden patterns and connections for creative thinking. It offers a unique workspace blending fractals with mind mapping techniques, allowing users to navigate the Mandelbrot set in real-time. Nodes in Neurite represent various content types like text, images, videos, code, and AI agents, enabling users to create personalized microcosms of thoughts and inspirations. The tool supports synchronized knowledge management through bi-directional synchronization between mind-mapping and text-based hyperlinking. Neurite also features FractalGPT for modular conversation with AI, local AI capabilities for multi-agent chat networks, and a Neural API for executing code and sequencing animations. The project is actively developed with plans for deeper fractal zoom, advanced control over node placement, and experimental features.

weixin-dyh-ai
WeiXin-Dyh-AI is a backend management system that supports integrating WeChat subscription accounts with AI services. It currently supports integration with Ali AI, Moonshot, and Tencent Hyunyuan. Users can configure different AI models to simulate and interact with AI in multiple modes: text-based knowledge Q&A, text-to-image drawing, image description, text-to-voice conversion, enabling human-AI conversations on WeChat. The system allows hierarchical AI prompt settings at system, subscription account, and WeChat user levels. Users can configure AI model types, providers, and specific instances. The system also supports rules for allocating models and keys at different levels. It addresses limitations of WeChat's messaging system and offers features like text-based commands and voice support for interactions with AI.

Senparc.AI
Senparc.AI is an AI extension package for the Senparc ecosystem, focusing on LLM (Large Language Models) interaction. It provides modules for standard interfaces and basic functionalities, as well as interfaces using SemanticKernel for plug-and-play capabilities. The package also includes a library for supporting the 'PromptRange' ecosystem, compatible with various systems and frameworks. Users can configure different AI platforms and models, define AI interface parameters, and run AI functions easily. The package offers examples and commands for dialogue, embedding, and DallE drawing operations.

catai
CatAI is a tool that allows users to run GGUF models on their computer with a chat UI. It serves as a local AI assistant inspired by Node-Llama-Cpp and Llama.cpp. The tool provides features such as auto-detecting programming language, showing original messages by clicking on user icons, real-time text streaming, and fast model downloads. Users can interact with the tool through a CLI that supports commands for installing, listing, setting, serving, updating, and removing models. CatAI is cross-platform and supports Windows, Linux, and Mac. It utilizes node-llama-cpp and offers a simple API for asking model questions. Additionally, developers can integrate the tool with node-llama-cpp@beta for model management and chatting. The configuration can be edited via the web UI, and contributions to the project are welcome. The tool is licensed under Llama.cpp's license.

Wa-OpenAI
Wa-OpenAI is a WhatsApp chatbot powered by OpenAI's ChatGPT and DALL-E models, allowing users to interact with AI for text generation and image creation. Users can easily integrate the bot into their WhatsApp conversations using commands like '/ai' and '/img'. The tool requires setting up an OpenAI API key and can be installed on RDP/Windows or Termux environments. It provides a convenient way to leverage AI capabilities within WhatsApp chats, offering a seamless experience for generating text and images.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.