ai-microcore

A handy library for smooth interaction with large language models (LLMs) and for crafting AI applications.

AI MicroCore: A Minimalistic Foundation for AI Applications

MicroCore is a collection of Python adapters for Large Language Models and Vector Databases / Semantic Search APIs that lets you communicate with these services conveniently, switch between them easily, and keep business logic separate from implementation details.

It defines interfaces for the features typically used in AI applications, which lets you keep your application as simple as possible and try various models & services without changing your application code.

You can even switch between text completion and chat completion models through configuration alone.

Thanks to LLM-agnostic MCP integration, MicroCore can easily connect MCP tools to any language model, whether it is accessed through an API provider that does not support MCP, run locally via PyTorch, or served by an arbitrary Python function.

A basic usage example:

from microcore import llm

while user_msg := input('Enter message: '):
    print('AI: ' + llm(user_msg))

🔗 Links

💻 Installation

Install as PyPi package:

pip install ai-microcore

Alternatively, you may simply copy the microcore folder into your project's source root.

git clone git@github.com:Nayjest/ai-microcore.git && mv ai-microcore/microcore ./ && rm -rf ai-microcore

📋 Requirements

Python 3.10 / 3.11 / 3.12 / 3.13 / 3.14

⚙️ Configuring

Minimal Configuration

Having OPENAI_API_KEY in OS environment variables is enough for basic usage.
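
For example, a minimal sketch (assuming the key is already exported in your shell):

# Assumes OPENAI_API_KEY is present in the environment, e.g.:
#   export OPENAI_API_KEY="sk-..."
from microcore import llm

print(llm('Hello!'))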

Similarity search features will work out of the box if you have the chromadb pip package installed.

Configuration Methods

MicroCore can be configured in several ways: programmatically via microcore.configure(), through a .env configuration file, or with OS environment variables (see the sketch below).

For the full list of available configuration options, you may also check microcore/configuration.py.
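
A brief sketch of the typical approaches (only options already mentioned in this README are used):

# 1) Programmatically, via microcore.configure() (highest priority):
from microcore import configure
configure(PROMPT_TEMPLATES_PATH='my_templates_folder')

# 2) Via a configuration file, .env by default (or the file named in DOT_ENV_FILE):
#      OPENAI_API_KEY=sk-...
#      PROMPT_TEMPLATES_PATH=my_templates_folder

# 3) Via OS environment variables (lowest priority):
#      export OPENAI_API_KEY=sk-...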

Installing vendor-specific packages

For models that are not accessed via the OpenAI API, you may need to install additional packages:

Anthropic Claude

pip install anthropic

Google Gemini via AI Studio or Vertex AI

pip install google-genai

Local language models via Hugging Face Transformers

You will need to install transformers and a deep learning library of your choice (PyTorch, TensorFlow, Flax, etc).

See transformers installation.
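
For example, assuming PyTorch as the backend:

pip install transformers torch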

Priority of Configuration Sources

  1. Configuration options passed as arguments to microcore.configure() have the highest priority.
  2. The priority of configuration file options (.env by default, or the file named in DOT_ENV_FILE) is higher than that of OS environment variables.
    💡 Setting USE_DOT_ENV to false disables reading configuration files.
  3. OS environment variables have the lowest priority.

Vector Databases

Vector database functions are available via microcore.texts.

ChromaDB

The default vector database is Chroma. In order to use vector database functions with ChromaDB, you need to install the chromadb package:

pip install chromadb

By default, MicroCore will use ChromaDB PersistentClient (if the corresponding package is installed). Alternatively, you can run Chroma as a separate service and configure MicroCore to use HttpClient:

from microcore import configure
configure(
    EMBEDDING_DB_HOST = 'localhost',
    EMBEDDING_DB_PORT = 8000,
)
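
For example, a standalone Chroma server can be started with the chromadb CLI (a hedged sketch; the exact flags depend on your chromadb version):

chroma run --path ./chroma_data --port 8000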

Qdrant

In order to use vector database functions with Qdrant, you need to install the qdrant-client package:

pip install qdrant-client

Configuration example

from microcore import configure, EmbeddingDbType
from sentence_transformers import SentenceTransformer

configure(
    EMBEDDING_DB_TYPE=EmbeddingDbType.QDRANT,
    EMBEDDING_DB_HOST="localhost",
    EMBEDDING_DB_PORT="6333",
    EMBEDDING_DB_SIZE=384,  # number of dimensions in the SentenceTransformer model
    EMBEDDING_DB_FUNCTION=SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2"),
)
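
The example above also requires the sentence-transformers package (pip install sentence-transformers). A local Qdrant instance can be started with Docker, for example:

docker run -p 6333:6333 qdrant/qdrant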

🌟 Core Functions

llm(prompt: str, **kwargs) → str

Performs a request to a large language model (LLM).

Asynchronous variant: allm(prompt: str, **kwargs)
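
A minimal async sketch (allm accepts the same arguments as llm):

import asyncio
from microcore import allm

async def main():
    # Await the asynchronous variant inside an event loop
    print(await allm('What is your model name?'))

asyncio.run(main())

More usage examples: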

from microcore import *

# Will print all requests and responses to console
use_logging()

# Basic usage
ai_response = llm('What is your model name?')

# You may also pass a list of strings as prompt
# - For chat completion models elements are treated as separate messages
# - For completion LLMs elements are treated as text lines
llm(['1+2', '='])
llm('1+2=', model='gpt-5.2')

# To specify a message role, you can use dictionary or classes
llm(dict(role='system', content='1+2='))
# equivalent
llm(SysMsg('1+2='))

# The returned value is a string
assert '7' == llm([
    SysMsg('You are a calculator'),
    UserMsg('1+2='),
    AssistantMsg('3'),
    UserMsg('3+4='),
]).strip()

# But it contains all fields of the LLM response in additional attributes
for i in llm('1+2=?', n=3, temperature=2).choices:
    print('RESPONSE:', i.message.content)

# To use response streaming you may specify the callback function:
llm('Hi there', callback=lambda x: print(x, end=''))

# Or multiple callbacks:
output = []
llm('Hi there', callbacks=[
    lambda x: print(x, end=''),
    lambda x: output.append(x),
])

tpl(file_path, **params) → str

Renders prompt template with params.

Full-featured Jinja2 templates are used by default.

Related configuration options:

from microcore import configure
configure(
    # 'tpl' folder in current working directory by default
    PROMPT_TEMPLATES_PATH = 'my_templates_folder'
)
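
A minimal usage sketch (the template file name and its contents are hypothetical; by default templates are looked up in the 'tpl' folder):

from microcore import llm, tpl

# Assuming a Jinja2 template exists at tpl/review.j2 with content like:
#   Please review the following code:
#   {{ code }}
prompt = tpl('review.j2', code='print("hello")')
print(llm(prompt))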

texts.search(collection: str, query: str | list, n_results: int = 5, where: dict = None, **kwargs) → list[str]

Similarity search

texts.find_one(collection: str, query: str | list) → str | None

Find most similar text

texts.get_all(collection: str) → list[str]

Return collection of texts

texts.save(collection: str, text: str, metadata: dict = None)

Store text and related metadata in embeddings database

texts.save_many(collection: str, items: list[tuple[str, dict] | str])

Store multiple texts and related metadata in the embeddings database

texts.clear(collection: str)

Clear collection
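
A small end-to-end sketch of the texts API (the collection name and documents are illustrative; a vector database backend such as chromadb must be installed):

from microcore import texts

# Store documents (optionally with metadata) in a collection
texts.save('articles', 'Python is a popular programming language.')
texts.save_many('articles', [
    ('Cats are small domesticated felines.', {'topic': 'animals'}),
    'Qdrant and Chroma are vector databases.',
])

# Similarity search over the stored texts
for doc in texts.search('articles', 'programming languages', n_results=2):
    print(doc)

# Find the single most similar text
print(texts.find_one('articles', 'vector databases'))

# Remove everything from the collection
texts.clear('articles')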

API providers and models support

AI MicroCore supports major API providers via various chat completion / text completion APIs.

Tested with the following services:

And more via Google / Anthropic / OpenAI API.

Supported local language model APIs:

  • HuggingFace Transformers (see configuration examples here).
  • Custom local models: provide your own function for chat / text completion, with sync or async inference.

🖼️ Examples

Performs an LLM code review of changes in git .patch files, in any programming language.

Image analysis (Google Colab)

Determines the number of petals and the color of a flower from a photo (gpt-4-turbo).

Benchmarks the accuracy of 20+ state-of-the-art models on olympiad math problems; runs local language models via HuggingFace Transformers with parallel inference.

A simple example demonstrating image generation using the OpenAI GPT Image model.

Text generation using HF/Transformers model locally (example with Qwen 3 0.6B).

📚 Guides & Reference

For more detailed information, check out these articles:

Python functions as AI tools

Usage Example:

from microcore.ai_func import ai_func

@ai_func
def search_products(
    query: str,
    category: str = "all",
    max_results: int = 10,
    in_stock_only: bool = False
):
    """
    Search for products in the catalog.

    Args:
        query: Search terms to find matching products
        category: Product category to filter by (e.g., "electronics", "clothing")
        max_results: Maximum number of results to return
        in_stock_only: If True, only return products currently in stock

    Returns:
        List of matching products with name, price, and availability
    """
    # Implementation would go here
    pass

Output:

# Search for products in the catalog.

Args:
    query: Search terms to find matching products
    category: Product category to filter by (e.g., "electronics", "clothing")
    max_results: Maximum number of results to return
    in_stock_only: If True, only return products currently in stock

Returns:
    List of matching products with name, price, and availability
{
  "call": "search_products",
  "query": <str>,
  "category": <str> (default = "all"),
  "max_results": <int> (default = 10),
  "in_stock_only": <bool> (default = False)
}

🤖 AI Modules

This is an experimental feature.

Tweaks the Python import system to provide automatic setup of the MicroCore environment based on metadata in module docstrings.

Usage:

import microcore.ai_modules

Features:

  • Automatically registers template folders of AI modules in Jinja2 environment

🛠️ Contributing

Please see CONTRIBUTING for details.

📝 License

Licensed under the MIT License © 2023–2026 Vitalii Stepanenko
