
monoscope
Monoscope lets you ingest and explore your logs, traces, and metrics, stored in S3-compatible buckets, and query them in natural language via LLMs.
Stars: 201

Monoscope is an open-source monitoring and observability platform that uses artificial intelligence to understand and monitor systems automatically. It allows users to ingest and explore logs, traces, and metrics in S3 buckets, query in natural language via LLMs, and create AI agents to detect anomalies. Key capabilities include universal data ingestion, AI-powered understanding, natural language interface, cost-effective storage, and zero configuration. Monoscope is designed to reduce alert fatigue, catch issues before they impact users, and provide visibility across complex systems.
README:
Monoscope lets you ingest and explore your logs, traces, and metrics in S3 buckets, and query them in natural language via LLMs. Monoscope also lets you create AI agents that run at an interval to automatically detect anomalies in your logs, metrics, and traces. The most important actions, logs, and insights are sent as reports to your email daily or weekly.
Website • Discord • Twitter • Changelog • Documentation
🤖 AI Anomaly Detection • 💬 Natural Language Search • ⭐ Star Us • 🤝 Contributing
Monoscope automatically detects anomalies in your logs, metrics, and traces using AI, with no configuration required.
Monoscope is an open-source observability platform that uses artificial intelligence to understand and monitor your systems automatically. Unlike traditional monitoring tools that require extensive configuration and generate overwhelming alerts, Monoscope learns your system's normal behavior and only alerts you when something is genuinely wrong.
- Universal Data Ingestion: Native support for OpenTelemetry means compatibility with 750+ integrations out of the box
- AI-Powered Understanding: Our LLM engine understands context, not just thresholds
- Natural Language Interface: Query your data in plain English
- Cost-Effective Storage: Store years of data affordably with S3-compatible object storage
- Zero Configuration: Start getting insights immediately without complex setup
# Run with Docker (recommended)
docker run -p 8080:8080 monoscope/monoscope:latest
# Or clone and run locally
git clone https://github.com/monoscope-tech/monoscope.git
cd monoscope
docker-compose up
Visit http://localhost:8080 to access Monoscope. Full installation guide →
Monoscope is built on OpenTelemetry, the industry-standard observability framework. This means you get instant compatibility with 750+ integrations including all major languages, frameworks, and infrastructure components.
- Logs: Application logs, system logs, audit trails
- Metrics: Performance counters, business KPIs, custom metrics
- Traces: Distributed request flows, latency tracking, dependency mapping
# For Python applications
pip install opentelemetry-api opentelemetry-sdk
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:8080"
# For Node.js applications
npm install @opentelemetry/api @opentelemetry/sdk-node
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:8080"
# For Kubernetes clusters
helm install opentelemetry-collector open-telemetry/opentelemetry-collector \
--set config.exporters.otlp.endpoint="monoscope:8080"
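For reference, the Helm values above correspond to a Collector configuration roughly like the following sketch. The `monoscope:8080` endpoint and the `tls.insecure` setting are assumptions based on the snippet above; check the installation guide for the exact values:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: monoscope:8080   # assumed in-cluster service name, per the Helm flag above
    tls:
      insecure: true           # assumption: plaintext OTLP inside the cluster

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      exporters: [otlp]
```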
Monoscope automatically correlates logs, metrics, and traces from the same service, giving you a complete picture of your system's behavior. No manual correlation or configuration required.
Monoscope's AI engine continuously learns your system's normal behavior patterns and automatically alerts you to genuine issues:
- Context-Aware Detection: Understands that high CPU during deployments is normal, but high CPU at 3 AM is not
- Seasonal Pattern Recognition: Learns daily, weekly, and monthly patterns in your data
- Cross-Signal Correlation: Detects anomalies by analyzing logs, metrics, and traces together
- Noise Reduction: Reduces alert fatigue by 90% compared to threshold-based monitoring
The AI runs continuously in the background, requiring no configuration or training from you.
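To make the seasonal-baseline idea concrete, here is a minimal sketch in Python. This is not Monoscope's actual engine; the hour-of-week bucketing and the z-score threshold are assumptions chosen for the example:

```python
# Sketch: flag a value as anomalous when it deviates strongly from the
# mean/stddev of past values seen in the same hour-of-week bucket.
from collections import defaultdict
from statistics import mean, stdev

def is_anomalous(history, hour_of_week, value, threshold=3.0):
    """history maps an hour-of-week bucket (0-167) to past values seen there."""
    past = history[hour_of_week]
    if len(past) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(past), stdev(past)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

history = defaultdict(list)
history[3] = [5, 6, 5, 7, 6, 5]  # normal CPU% readings for the 3 AM bucket

print(is_anomalous(history, 3, 6))   # prints False: within the learned baseline
print(is_anomalous(history, 3, 95))  # prints True: a genuine 3 AM spike
```

This captures why "high CPU at 3 AM" can alert while the same reading during a busy hour would not: each bucket carries its own baseline.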
Query your observability data using plain English instead of complex query languages:
- "Show me all errors in the payment service in the last hour"
- "What caused the spike in response time yesterday at 3 PM?"
- "Which services are consuming the most memory?"
- "Find all database queries taking longer than 1 second"
Monoscope translates your natural language into optimized queries across logs, metrics, and traces, returning relevant results with explanations.
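To illustrate the shape of that translation, here is a toy rule-based sketch. Monoscope uses an LLM for this step, and the query schema and regexes below are purely hypothetical:

```python
# Toy translation of an English question into a structured filter.
import re

def translate(question):
    q = question.lower()
    query = {"signal": "logs", "filters": [], "range": "1h"}
    if "error" in q:
        query["filters"].append(("level", "=", "error"))
    m = re.search(r"in the (\w+) service", q)
    if m:
        query["filters"].append(("service.name", "=", m.group(1)))
    m = re.search(r"last ((?:\d+\s*)?(?:hour|minute|day)s?)", q)
    if m:
        query["range"] = m.group(1)
    return query

q = translate("Show me all errors in the payment service in the last hour")
# q["filters"] now contains level=error and service.name=payment,
# and q["range"] is "hour"
```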
- AI-Powered Anomaly Detection: LLM-based engine that understands context and identifies real issues, not just threshold violations
- Natural Language Search: Search logs and metrics using plain English, no complex query languages required
- High Performance: Handle millions of events/sec with our custom TimeFusion storage engine
- Cost-Effective Storage: Store years of data affordably with S3-compatible object storage
Screenshots: Log Explorer (main and detailed views), Dashboard Analytics.
Monoscope combines high-performance data ingestion with intelligent AI analysis:
graph LR
A[Your Apps] -->|Logs/Metrics| B[Ingestion API]
B --> C[TimeFusion Engine]
C --> D[S3 Storage]
C --> E[LLM Pipeline]
E --> F[Anomaly Detection]
F --> G[Alerts & Dashboard]
- Language: Built in Haskell for reliability and performance
- Storage: S3-compatible object storage for cost-effective retention
- AI Engine: State-of-the-art LLMs for intelligent analysis
- Scale: Horizontally scalable architecture
Traditional monitoring tools require extensive configuration, generate overwhelming alerts, and still miss critical issues. You spend more time managing your monitoring than actually using it.
Monoscope uses AI to understand your system's behavior, automatically detect anomalies, and provide actionable insights - all without complex configuration.
- DevOps Teams reducing alert fatigue by 90%
- SREs catching issues before they impact users
- Engineering Leaders getting visibility across complex systems
- Startups implementing enterprise-grade observability on a budget
- Features
- Getting Started
- Prerequisites
- Installation
- Development Setup
- Testing
- Useful Links
- Contributing
- License
- 🤖 AI-Powered Anomaly Detection: Leverages LLMs to automatically identify and alert on unusual patterns
- ☁️ S3-Compatible Storage: Store logs, metrics, and traces in any S3-compatible object storage
- 🚀 High Performance: Written in Haskell and Rust for reliability and performance
- 📊 Real-Time Analytics: Monitor your systems with minimal latency
- 🔌 Extensible: Easy to integrate with existing monitoring infrastructure
Prerequisites
Before installing Monoscope, ensure you have the following dependencies:
- Haskell: Install via GHCup
- PostgreSQL with TimescaleDB: For time-series data storage
- LLVM: Required for compilation
- Google Cloud SDK: For GCP integration (if using GCP)
- Install Haskell via GHCup
curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh
- Clone the Repository
git clone https://github.com/monoscope-tech/monoscope.git
cd monoscope
- Install System Dependencies
For macOS:
# Install LLVM
brew install llvm
# Install PostgreSQL with TimescaleDB
brew install postgresql
brew install timescaledb
# Install libpq
brew install libpq
For Linux (Ubuntu/Debian):
# Install LLVM
sudo apt-get install llvm
# Install PostgreSQL and TimescaleDB
# Follow instructions at: https://docs.timescale.com/install/latest/
# Install libpq
sudo apt-get install libpq-dev
- Configure Google Cloud (Optional)
If using Google Cloud integration:
gcloud auth application-default login
- Run Monoscope
stack run
Development Setup
- Create a Docker volume for PostgreSQL data:
docker volume create pgdata
- Run TimescaleDB in Docker:
make timescaledb-docker
- Configure pg_cron extension:
Add the following to your PostgreSQL configuration:
ALTER SYSTEM SET cron.database_name = 'monoscope';
ALTER SYSTEM SET shared_preload_libraries = 'pg_cron';
Then restart the TimescaleDB Docker container.
Install code formatting and linting tools:
# Code formatter
brew install ormolu
# Linter
brew install hlint
Useful commands:
# Format code
make fmt
# Run linter
make lint
💡 Tip: For better IDE support, compile Haskell Language Server locally to avoid crashes, especially on macOS. See issue #2391.
To build the service worker:
workbox generateSW workbox-config.js
Running Tests
make test
# OR
stack test --ghc-options=-w
Unit tests don't require a database connection and run much faster. They include doctests and pure function tests.
make test-unit
# OR
stack test monoscope-server:unit-tests --ghc-options=-w
To re-run unit tests automatically whenever a file changes:
make live-test-unit
# OR
stack test monoscope-server:unit-tests --ghc-options=-w --file-watch
To run only the tests matching a pattern (e.g. SeedingConfig):
stack test --test-arguments "--match=SeedingConfig" monoscope-server:tests
# OR
stack test --ta "--match=SeedingConfig" monoscope-server:tests
- 💬 Discord - Chat with users and contributors
- 🐛 Issues - Report bugs or request features
- 🐦 Twitter - Follow for updates
- 📝 Blog - Tutorials and case studies
We welcome contributions to Monoscope! Please feel free to:
- Report bugs and request features via GitHub Issues
- Submit pull requests for bug fixes and new features
- Improve documentation and examples
- Share your use cases and feedback
Before contributing, please read our contributing guidelines and ensure your code passes all tests and linting checks.
Monoscope is open source software. Please see the LICENSE file for details.
- [ ] Kubernetes Operator
- [ ] Terraform Provider
- [ ] Mobile App
- [ ] Distributed Tracing Support
- [ ] Custom ML Model Training
See our public roadmap for more details.
| Feature | Monoscope | Datadog | Elastic | Prometheus |
|---|---|---|---|---|
| AI Anomaly Detection | ✅ Built-in | ⚠️ Add-on | ❌ | ❌ |
| Natural Language Search | ✅ | ❌ | ❌ | ❌ |
| Cost-Effective Storage | ✅ S3 | ❌ Proprietary | ❌ | ❌ |
| No Configuration Alerts | ✅ | ❌ | ❌ | ❌ |
| Open Source | ✅ | ❌ | ✅ | ✅ |