
openinference
OpenTelemetry Instrumentation for AI Observability
Stars: 598

OpenInference is a set of conventions and plugins that complement OpenTelemetry to enable tracing of AI applications. It provides a way to capture and analyze the performance and behavior of AI models, including their interactions with other components of the application. OpenInference is designed to be language-agnostic and can be used with any OpenTelemetry-compatible backend. It includes a set of instrumentations for popular machine learning SDKs and frameworks, making it easy to add tracing to your AI applications.
README:
OpenInference is a set of conventions and plugins that is complementary to OpenTelemetry, enabling tracing of AI applications. OpenInference is natively supported by arize-phoenix, but can be used with any OpenTelemetry-compatible backend.
The OpenInference specification is edited in markdown files found in the spec directory. It's designed to provide insight into the invocation of LLMs and the surrounding application context such as retrieval from vector stores and the usage of external tools such as search engines or APIs. The specification is transport and file-format agnostic, and is intended to be used in conjunction with other specifications such as JSON, ProtoBuf, and DataFrames.
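To make the conventions concrete, here is a sketch of the attributes an OpenInference LLM span might carry. The keys follow the OpenInference semantic conventions (`openinference.span.kind`, `llm.model_name`, `input.value`, and so on); the values are invented for illustration, and the exact attribute set varies by instrumentation.

```python
import json

# Illustrative attributes for a single LLM-invocation span, keyed by
# OpenInference semantic-convention names (values are made up here).
llm_span_attributes = {
    "openinference.span.kind": "LLM",  # other kinds include CHAIN, RETRIEVER, TOOL
    "llm.model_name": "gpt-4o-mini",
    "input.value": json.dumps({"messages": [{"role": "user", "content": "Hi"}]}),
    "input.mime_type": "application/json",
    "output.value": "Hello! How can I help?",
    "llm.token_count.prompt": 8,
    "llm.token_count.completion": 6,
    "llm.token_count.total": 14,
}

print(json.dumps(llm_span_attributes, indent=2))
```

Because the spec is transport and file-format agnostic, the same attribute payload can be serialized as JSON, ProtoBuf, or rows in a DataFrame.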
OpenInference provides a set of instrumentations for popular machine learning SDKs and frameworks in a variety of languages.
Python packages:

Package | Description
---|---
openinference-semantic-conventions | Semantic conventions for tracing of LLM Apps.
openinference-instrumentation | Reusable utilities, decorators, configurations, and helpers for instrumentation.
openinference-instrumentation-agno | OpenInference Instrumentation for Agno Agents.
openinference-instrumentation-openai | OpenInference Instrumentation for OpenAI SDK.
openinference-instrumentation-openai-agents | OpenInference Instrumentation for OpenAI Agents SDK.
openinference-instrumentation-llama-index | OpenInference Instrumentation for LlamaIndex.
openinference-instrumentation-dspy | OpenInference Instrumentation for DSPy.
openinference-instrumentation-bedrock | OpenInference Instrumentation for AWS Bedrock.
openinference-instrumentation-langchain | OpenInference Instrumentation for LangChain.
openinference-instrumentation-mcp | OpenInference Instrumentation for MCP.
openinference-instrumentation-mistralai | OpenInference Instrumentation for MistralAI.
openinference-instrumentation-portkey | OpenInference Instrumentation for Portkey.
openinference-instrumentation-guardrails | OpenInference Instrumentation for Guardrails.
openinference-instrumentation-vertexai | OpenInference Instrumentation for VertexAI.
openinference-instrumentation-crewai | OpenInference Instrumentation for CrewAI.
openinference-instrumentation-haystack | OpenInference Instrumentation for Haystack.
openinference-instrumentation-litellm | OpenInference Instrumentation for LiteLLM.
openinference-instrumentation-groq | OpenInference Instrumentation for Groq.
openinference-instrumentation-instructor | OpenInference Instrumentation for Instructor.
openinference-instrumentation-anthropic | OpenInference Instrumentation for Anthropic.
openinference-instrumentation-beeai | OpenInference Instrumentation for BeeAI.
openinference-instrumentation-google-genai | OpenInference Instrumentation for Google GenAI.
openinference-instrumentation-google-adk | OpenInference Instrumentation for Google ADK.
openinference-instrumentation-autogen-agentchat | OpenInference Instrumentation for Microsoft Autogen AgentChat.
openinference-instrumentation-pydantic-ai | OpenInference Instrumentation for PydanticAI.
openinference-instrumentation-smolagents | OpenInference Instrumentation for smolagents.
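As a sketch of how the Python instrumentations above are typically enabled, the snippet below wires the OpenAI instrumentor into an OpenTelemetry tracer provider and exports spans over OTLP. It requires the `openinference-instrumentation-openai`, `opentelemetry-sdk`, and `opentelemetry-exporter-otlp` packages; the endpoint shown is Phoenix's default collector address and is an assumption of this sketch, so substitute any OTLP-compatible backend.

```python
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Export spans to an OTLP-compatible collector. The endpoint below is
# Phoenix's default and is an assumption for this sketch.
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter("http://localhost:6006/v1/traces"))
)

# After this call, OpenAI SDK calls made anywhere in the process emit
# OpenInference-conformant spans automatically.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

The other instrumentors in the table follow the same pattern: construct the instrumentor and call `instrument()` with a tracer provider.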
Span processors are also available to normalize and convert data emitted by other instrumentation libraries into the OpenInference format.
Package | Description
---|---
openinference-instrumentation-openlit | OpenInference Span Processor for OpenLIT traces.
openinference-instrumentation-openllmetry | OpenInference Span Processor for OpenLLMetry (Traceloop) traces.
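Conceptually, these processors rename span attributes from a foreign convention to their OpenInference equivalents. The sketch below is hypothetical, not the libraries' actual mapping table; it translates a few OpenTelemetry `gen_ai.*` attributes to OpenInference `llm.*` keys.

```python
# Hypothetical mapping from OpenTelemetry GenAI attribute names to
# OpenInference equivalents; the real processors cover far more keys.
GENAI_TO_OPENINFERENCE = {
    "gen_ai.request.model": "llm.model_name",
    "gen_ai.usage.input_tokens": "llm.token_count.prompt",
    "gen_ai.usage.output_tokens": "llm.token_count.completion",
}

def normalize(attributes: dict) -> dict:
    """Return a copy with known foreign keys renamed to OpenInference keys."""
    return {GENAI_TO_OPENINFERENCE.get(k, k): v for k, v in attributes.items()}

span_attrs = {"gen_ai.request.model": "gpt-4o", "gen_ai.usage.input_tokens": 12}
print(normalize(span_attrs))
```

In practice this logic runs inside an OpenTelemetry span processor, so spans are rewritten before they reach the exporter.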
Python examples:

Name | Description | Complexity Level
---|---|---
Agno | Agno agent examples | Beginner |
OpenAI SDK | OpenAI Python SDK, including chat completions and embeddings | Beginner |
MistralAI SDK | MistralAI Python SDK | Beginner |
VertexAI SDK | VertexAI Python SDK | Beginner |
LlamaIndex | LlamaIndex query engines | Beginner |
DSPy | DSPy primitives and custom RAG modules | Beginner |
Boto3 Bedrock Client | Boto3 Bedrock client | Beginner |
LangChain | LangChain primitives and simple chains | Beginner |
LiteLLM | Chat completions via the lightweight LiteLLM framework | Beginner
LiteLLM Proxy | LiteLLM Proxy to log OpenAI, Azure, Vertex, Bedrock | Beginner |
Groq | Groq and AsyncGroq chat completions | Beginner |
Anthropic | Anthropic Messages client | Beginner |
BeeAI | Agentic instrumentation in the BeeAI framework | Beginner |
LlamaIndex + Next.js Chatbot | A fully functional chatbot using Next.js and a LlamaIndex FastAPI backend | Intermediate |
LangServe | A LangChain application deployed with LangServe using custom metadata on a per-request basis | Intermediate |
DSPy | A DSPy RAG application using FastAPI, Weaviate, and Cohere | Intermediate |
Haystack | A Haystack QA RAG application | Intermediate |
OpenAI Agents | OpenAI Agents with handoffs | Intermediate |
Autogen AgentChat | Microsoft Autogen Assistant Agent and Team Chat | Intermediate |
PydanticAI | PydanticAI agent examples | Intermediate |
JavaScript packages:

Package | Description
---|---
@arizeai/openinference-semantic-conventions | Semantic conventions for tracing of LLM Apps.
@arizeai/openinference-core | Reusable utilities, configuration, and helpers for instrumentation.
@arizeai/openinference-instrumentation-bedrock | OpenInference Instrumentation for AWS Bedrock.
@arizeai/openinference-instrumentation-bedrock-agent-runtime | OpenInference Instrumentation for AWS Bedrock Agent Runtime.
@arizeai/openinference-instrumentation-beeai | OpenInference Instrumentation for BeeAI.
@arizeai/openinference-instrumentation-langchain | OpenInference Instrumentation for LangChain.js.
@arizeai/openinference-instrumentation-mcp | OpenInference Instrumentation for MCP.
@arizeai/openinference-instrumentation-openai | OpenInference Instrumentation for OpenAI SDK.
@arizeai/openinference-vercel | OpenInference Support for Vercel AI SDK.
@arizeai/openinference-mastra | OpenInference Support for Mastra.
JavaScript examples:

Name | Description | Complexity Level
---|---|---
OpenAI SDK | OpenAI Node.js client | Beginner
BeeAI framework - ReAct agent | Agentic ReActAgent instrumentation in the BeeAI framework | Beginner
BeeAI framework - ToolCalling agent | Agentic ToolCallingAgent instrumentation in the BeeAI framework | Beginner
BeeAI framework - LLM | How to run instrumentation only for a specific LLM module in the BeeAI framework | Beginner
LlamaIndex Express App | A fully functional LlamaIndex chatbot with a Next.js frontend and a LlamaIndex Express backend, instrumented using openinference-instrumentation-openai | Intermediate
LangChain OpenAI | A simple script to call OpenAI via LangChain, instrumented using openinference-instrumentation-langchain | Beginner
LangChain RAG Express App | A fully functional LangChain chatbot that uses RAG to answer user questions, with a Next.js frontend and a LangChain Express backend, instrumented using openinference-instrumentation-langchain | Intermediate
Next.js + OpenAI | A Next.js 13 project bootstrapped with create-next-app that uses OpenAI to generate text | Beginner
Java packages:

Package | Description
---|---
openinference-semantic-conventions | Semantic conventions for tracing of LLM Apps.
openinference-instrumentation | Base instrumentation utilities.
openinference-instrumentation-langchain4j | OpenInference Instrumentation for LangChain4j.
openinference-instrumentation-springAI | OpenInference Instrumentation for Spring AI.
Java examples:

Name | Description | Complexity Level
---|---|---
LangChain4j Example | Simple example using LangChain4j with OpenAI | Beginner |
Spring AI Example | Spring AI example with OpenAI and tool calling | Beginner |
OpenInference supports the following destinations as span collectors.
- ✅ Arize-Phoenix
- ✅ Arize
- ✅ Any OTEL-compatible collector
Join our community to connect with thousands of machine learning practitioners and LLM observability enthusiasts!
- Join our Slack community.
- Ask questions and provide feedback in the #phoenix-support channel.
- Leave a star on our GitHub.
- Report bugs with GitHub Issues.
- Follow us on X.
- Check out our roadmap to see where we're heading next.
Similar Open Source Tools


ai
This repository contains a collection of AI algorithms and models for various machine learning tasks. It provides implementations of popular algorithms such as neural networks, decision trees, and support vector machines. The code is well-documented and easy to understand, making it suitable for both beginners and experienced developers. The repository also includes example datasets and tutorials to help users get started with building and training AI models. Whether you are a student learning about AI or a professional working on machine learning projects, this repository can be a valuable resource for your development journey.

chatmcp
Chatmcp is a cross-platform AI chat client built around the Model Context Protocol (MCP). It lets users converse with LLMs while connecting to MCP servers, so the assistant can invoke external tools and access data sources exposed through the protocol. By handling MCP server configuration and tool invocation inside a familiar chat interface, it makes it straightforward to try out MCP-based integrations without writing glue code.

BaseAI
BaseAI is an AI framework designed for creating declarative and composable AI-powered LLM products. It enables the development of AI agent pipes locally, incorporating agentic tools and memory (RAG). The framework offers a learn guide for beginners to kickstart their journey with BaseAI. For detailed documentation, users can visit baseai.dev/docs. Contributions to BaseAI are encouraged, and interested individuals can refer to the Contributing Guide. The original authors of BaseAI include Ahmad Awais, Ashar Irfan, Saqib Ameen, Saad Irfan, and Ahmad Bilal. Security vulnerabilities can be reported privately via email to [email protected]. BaseAI aims to provide resources for learning AI agent development, utilizing agentic tools and memory.

MeeseeksAI
MeeseeksAI is a framework designed to orchestrate AI agents using a mermaid graph and networkx. It provides a structured approach to managing and coordinating multiple AI agents within a system. The framework allows users to define the interactions and dependencies between agents through a visual representation, making it easier to understand and modify the behavior of the AI system. By leveraging the power of networkx, MeeseeksAI enables efficient graph-based computations and optimizations, enhancing the overall performance of AI workflows. With its intuitive design and flexible architecture, MeeseeksAI simplifies the process of building and deploying complex AI systems, empowering users to create sophisticated agent interactions with ease.

PurpleLlama
Purple Llama is an umbrella project that aims to provide tools and evaluations to support responsible development and usage of generative AI models. It encompasses components for cybersecurity and input/output safeguards, with plans to expand in the future. The project emphasizes a collaborative approach, borrowing the concept of purple teaming from cybersecurity, to address potential risks and challenges posed by generative AI. Components within Purple Llama are licensed permissively to foster community collaboration and standardize the development of trust and safety tools for generative AI.

generative-ai-design-patterns
A catalog of design patterns for building generative AI applications, capturing current best practices in the field. The repository serves as a living catalog on GitHub to help practitioners navigate through the noise and identify areas for improvement. It is too early for a book due to the evolving nature of generative AI in production and the lack of concrete evidence to support certain claims.

LazyLLM
LazyLLM is a low-code development tool for building complex AI applications with multiple agents. It assists developers in building AI applications at a low cost and continuously optimizing their performance. The tool provides a convenient workflow for application development and offers standard processes and tools for various stages of application development. Users can quickly prototype applications with LazyLLM, analyze bad cases with scenario task data, and iteratively optimize key components to enhance the overall application performance. LazyLLM aims to simplify the AI application development process and provide flexibility for both beginners and experts to create high-quality applications.

BentoVLLM
BentoVLLM is an example project demonstrating how to serve and deploy open-source Large Language Models using vLLM, a high-throughput and memory-efficient inference engine. It provides a basis for advanced code customization, such as custom models, inference logic, or vLLM options. The project allows for simple LLM hosting with OpenAI compatible endpoints without the need to write any code. Users can interact with the server using Swagger UI or other methods, and the service can be deployed to BentoCloud for better management and scalability. Additionally, the repository includes integration examples for different LLM models and tools.

AI_Spectrum
AI_Spectrum is a versatile machine learning library that provides a wide range of tools and algorithms for building and deploying AI models. It offers a user-friendly interface for data preprocessing, model training, and evaluation. With AI_Spectrum, users can easily experiment with different machine learning techniques and optimize their models for various tasks. The library is designed to be flexible and scalable, making it suitable for both beginners and experienced data scientists.

copilot
OpenCopilot is a tool that allows users to create their own AI copilot for their products. It integrates with APIs to execute calls as needed, using LLMs to determine the appropriate endpoint and payload. Users can define API actions, validate schemas, and integrate a user-friendly chat bubble into their SaaS app. The tool is capable of calling APIs, transforming responses, and populating request fields based on context. It is not suitable for handling large APIs without JSON transformers. Users can teach the copilot via flows and embed it in their app with minimal code.

LightLLM
LightLLM is a Python-based framework for LLM inference and serving, designed to be lightweight, easily scalable, and fast. It supports a wide range of popular open-source models and uses techniques such as token-level KV cache management and asynchronous request scheduling to achieve high throughput, making it suitable for both experimentation and production serving.

duckduckgo-ai-chat
This repository contains a chatbot tool powered by AI technology. The chatbot is designed to interact with users in a conversational manner, providing information and assistance on various topics. Users can engage with the chatbot to ask questions, seek recommendations, or simply have a casual conversation. The AI technology behind the chatbot enables it to understand natural language inputs and provide relevant responses, making the interaction more intuitive and engaging. The tool is versatile and can be customized for different use cases, such as customer support, information retrieval, or entertainment purposes. Overall, the chatbot offers a user-friendly and interactive experience, leveraging AI to enhance communication and engagement.

ai-demos
The 'ai-demos' repository is a collection of example code from presentations focusing on building with AI and LLMs. It serves as a resource for developers looking to explore practical applications of artificial intelligence in their projects. The code snippets showcase various techniques and approaches to leverage AI technologies effectively. The repository aims to inspire and educate developers on integrating AI solutions into their applications.

AI-CryptoTrader
AI-CryptoTrader is a state-of-the-art cryptocurrency trading bot that uses ensemble methods to combine the predictions of multiple algorithms. Written in Python, it connects to the Binance trading platform and integrates with Azure for efficiency and scalability. The bot uses technical indicators and machine learning algorithms to generate predictions for buy and sell orders, adjusting to market conditions. While robust, users should be cautious due to cryptocurrency market volatility.

deepteam
Deepteam is an open-source framework for red teaming LLM applications. It probes LLM systems for vulnerabilities such as prompt injection, jailbreaking, bias, and data leakage by simulating adversarial attacks, helping developers identify and fix safety issues before deployment.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models, from image generation to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features:
- Self-contained, with no need for a DBMS or cloud service.
- OpenAPI interface, easy to integrate with existing infrastructure (e.g. Cloud IDE).
- Supports consumer-grade GPUs.

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.