# Middleware
✨ Open-source DORA metrics platform for engineering teams ✨
Middleware is an open-source engineering management tool that helps engineering leaders measure and analyze team effectiveness using DORA metrics. It integrates with CI/CD tools, automates DORA metric collection and analysis, visualizes key performance indicators, provides customizable reports and dashboards, and integrates with project management platforms. Users can set up Middleware using Docker or manually, generate encryption keys, set up backend and web servers, and access the application to view DORA metrics. The tool calculates DORA metrics using GitHub data, including Deployment Frequency, Lead Time for Changes, Mean Time to Restore, and Change Failure Rate. Middleware aims to provide DORA metrics to users based on their Git data, simplifying the process of tracking software delivery performance and operational efficiency.
Open-source engineering management that unlocks developer potential
Join our Open Source Community
Middleware is an open-source tool designed to help engineering leaders measure and analyze the effectiveness of their teams using the DORA metrics. The DORA metrics are a set of four key values that provide insights into software delivery performance and operational efficiency.
They are:
- Deployment Frequency: The frequency of code deployments to production or an operational environment.
- Lead Time for Changes: The time it takes for a commit to make it into production.
- Mean Time to Restore: The time it takes to restore service after an incident or failure.
- Change Failure Rate: The percentage of deployments that result in failures or require remediation.
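To make the four definitions concrete, here is a minimal sketch (invented sample data, not Middleware's implementation) showing how each metric falls out of simple deployment and incident records:

```python
# Minimal sketch of the four DORA metrics, computed from invented sample data.
from datetime import datetime

# (deployed_at, earliest_commit_at, caused_failure)
deployments = [
    (datetime(2024, 5, 1), datetime(2024, 4, 29), False),
    (datetime(2024, 5, 3), datetime(2024, 5, 2), True),
    (datetime(2024, 5, 7), datetime(2024, 5, 5), False),
]
# (incident_started_at, incident_resolved_at)
incidents = [(datetime(2024, 5, 3, 10), datetime(2024, 5, 3, 16))]

window_days = 7
deployment_frequency = len(deployments) / window_days  # deployments per day
lead_time_hours = sum(
    (deployed - committed).total_seconds() / 3600
    for deployed, committed, _ in deployments
) / len(deployments)
mttr_hours = sum(
    (resolved - started).total_seconds() / 3600 for started, resolved in incidents
) / len(incidents)
change_failure_rate = sum(failed for _, _, failed in deployments) / len(deployments)

print(f"Deployment Frequency:  {deployment_frequency:.2f} per day")
print(f"Lead Time for Changes: {lead_time_hours:.1f} hours")
print(f"Mean Time to Restore:  {mttr_hours:.1f} hours")
print(f"Change Failure Rate:   {change_failure_rate:.0%}")
```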
## Features
- Integration with various CI/CD tools
- Automated collection and analysis of DORA metrics
- Visualization of key performance indicators
- Customizable reports and dashboards
- Integration with popular project management platforms
## Quick Start
- Ensure that you have Docker installed and running.
- Open the terminal and run the following commands:
  ```bash
  docker volume create middleware_postgres_data
  docker volume create middleware_keys

  docker run --name middleware \
    -p 3333:3333 \
    -p 9696:9696 \
    -p 9697:9697 \
    -v middleware_postgres_data:/var/lib/postgresql/data \
    -v middleware_keys:/app/keys \
    -d middlewareeng/middleware:latest

  docker logs -f middleware
  ```
- Wait for some time for the services to come up.
- The app will be available on your host at http://localhost:3333.
- In case you want to stop the container, run the following command:
  ```bash
  docker stop middleware
  ```
- In order to fetch the latest version from the remote and then start the system, use the following commands:
  ```bash
  docker pull middlewareeng/middleware:latest
  docker rm -f middleware || true
  docker run --name middleware \
    -p 3333:3333 \
    -v middleware_postgres_data:/var/lib/postgresql/data \
    -v middleware_keys:/app/keys \
    -d middlewareeng/middleware:latest
  docker logs -f middleware
  ```
- If you see an error like `Conflict. The container name "/middleware" is already in use by container.`, run the following command before running the container again:
  ```bash
  docker rm -f middleware
  ```
- If you wish to delete all the data saved in the container, delete the volumes created earlier by running the following command:
  ```bash
  docker volume rm middleware_postgres_data middleware_keys
  ```
## Developer Setup
### Using Gitpod
Gitpod enables development on remote machines and helps you get started with Middleware if your machine does not support running the project locally.
If you want to run the project locally, you can set it up using Docker or set everything up manually.
- Click the button below to open this project in Gitpod.
- This will open a fully configured workspace in your browser with all the necessary dependencies already installed.
  After initialization, you can access the server at port 3333 of the Gitpod instance.
> [!IMPORTANT]
> We recommend a minimum of 16GB RAM when developing locally.

### Using Docker
If you don't have Docker installed, please install Docker over here. Make sure Docker is running.
- Clone the Repository:
  ```bash
  git clone https://github.com/middlewarehq/middleware
  ```
- Navigate to the Project Directory:
  ```bash
  cd middleware
  ```
- Run the `dev.sh` script in the project root 🪄
  `./dev.sh` creates a `.env` file with the required development environment variables and runs a CLI that does all the heavy lifting, from tracking the container with `docker compose watch` to providing you with logs from the different services.
  The usage is as follows:
  ```bash
  ./dev.sh
  ```
  You may update `env.example` and set `ENVIRONMENT=prod` to run it in a production setup.
  Further, if any changes need to be made to the ports, you may update the `docker-compose.yml` file accordingly.
- Access the Application: Once the project is running, access the application through your web browser at http://localhost:3333. Further, other services can be accessed at:
  - The analytics server is available at http://localhost:9696.
  - The sync server can be accessed at http://localhost:9697.
  - The Postgres database can be accessed at host: `localhost`, port: `5434`, username: `postgres`, password: `postgres`, db name: `mhq-oss`.
  - The Redis server can be accessed at host: `localhost`, port: `6385`.
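  If you want to sanity-check these services from your host, a quick sketch like the following works, assuming the `psycopg2-binary` and `redis` Python packages are installed locally (`pip install psycopg2-binary redis`); they are not required to run Middleware itself:
  ```python
  # Quick connectivity check against the ports and credentials listed above.
  import psycopg2
  import redis

  pg = psycopg2.connect(
      host="localhost", port=5434,
      user="postgres", password="postgres", dbname="mhq-oss",
  )
  print("postgres reachable:", pg.closed == 0)
  pg.close()

  r = redis.Redis(host="localhost", port=6385)
  print("redis reachable:", r.ping())
  ```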
- View the logs: Although the CLI tracks all logs, the logs of services running inside the container can be viewed in separate terminals using the following commands:
  Frontend logs:
  ```bash
  docker exec -it middleware-dev tail --lines 500 -f /var/log/web-server/web-server.log
  ```
  Backend API server logs:
  ```bash
  docker exec -it middleware-dev tail --lines 500 -f /var/log/apiserver/apiserver.log
  ```
  Backend sync server logs:
  ```bash
  docker exec -it middleware-dev tail --lines 500 -f /var/log/sync_server/sync_server.log
  ```
  Redis logs:
  ```bash
  docker exec -it middleware-dev tail --lines 500 -f /var/log/redis/redis.log
  ```
  Postgres logs:
  ```bash
  docker exec -it middleware-dev tail --lines 500 -f /var/log/postgres/postgres.log
  ```
### Manual Setup
To set up Middleware locally, follow these steps:
- Clone the Repository:
  ```bash
  git clone https://github.com/middlewarehq/middleware.git
  ```
- Navigate to the Project Directory:
  ```bash
  cd middleware
  ```
- Run Redis and Postgres Containers:
  If you don't have Docker installed, please install Docker over here.
  Run the following command to start Postgres and Redis using Docker:
  ```bash
  cd database-docker && docker-compose up -d
  ```
  If you don't prefer Docker, you can choose to install Postgres and Redis manually.
  Once you are done using or developing Middleware, you can shut these containers down. (NOTE: Don't do this if you are following this document and trying to run Middleware.)
  ```bash
  cd database-docker/
  docker-compose down -v
  ```
- Generate Encryption Keys:
  Generate encryption keys for the project by running the following command in the project root directory:
  ```bash
  cd setup_utils && . ./generate_config_ini.sh && cd ..
  ```
- Backend Server Setup
  - Install Python version `3.11.6`.
    - You can install Python from over here if you don't have it on your machine.
    - Install pyenv:
      ```bash
      git clone https://github.com/pyenv/pyenv.git ~/.pyenv
      ```
    - Add pyenv to your shell's configuration file (.bashrc, .bash_profile, .zshrc, etc.):
      ```bash
      echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
      echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
      ```
    - Reload your shell:
      ```bash
      source ~/.bashrc
      ```
  - Move to the backend directory and create a virtual environment:
    ```bash
    cd backend
    python -m venv venv
    ```
  - Activate the virtual environment:
    ```bash
    . venv/bin/activate
    ```
  - Install Dependencies:
    ```bash
    pip install -r requirements.txt -r dev-requirements.txt
    ```
  - Create a `.env` file in the root directory and add the following environment variables, replacing the values with your own if needed:
    ```
    DB_HOST=localhost
    DB_NAME=mhq-oss
    DB_PASS=postgres
    DB_PORT=5434
    DB_USER=postgres
    REDIS_HOST=localhost
    REDIS_PORT=6385
    ANALYTICS_SERVER_PORT=9696
    SYNC_SERVER_PORT=9697
    DEFAULT_SYNC_DAYS=31
    ```
- Start the backend servers:
  - Change directory to analytics_server:
    ```bash
    cd analytics_server
    ```
  - For the backend analytics server:
    ```bash
    flask --app app --debug run --port 9696
    ```
  - For the backend sync server:
    ```bash
    flask --app sync_app --debug run --port 9697
    ```
    NOTE: Run the sync server in a new terminal window, with the virtual environment activated, and only after the analytics server has started.
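  Once both are running, a simple port check (just a sketch; this assumes nothing about the servers' endpoints) confirms that they accept connections:
  ```python
  # Simple TCP port check for the two backend servers started above.
  import socket

  for name, port in [("analytics server", 9696), ("sync server", 9697)]:
      with socket.socket() as s:
          s.settimeout(2)
          up = s.connect_ex(("localhost", port)) == 0
          print(f"{name} on :{port}:", "up" if up else "down")
  ```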
- Web Server Setup
- Access the Application: Once the project is running, access the application through your web browser at http://localhost:3333. Additionally:
  - The analytics server is available at http://localhost:9696.
  - The sync server can be accessed at http://localhost:9697.
## Usage
- Set up the project by following the steps mentioned above.
- Generate and add your PAT token from your code provider.
- Create a team and select repositories for the team.
- See DORA metrics for your team.
- Update settings related to incident filters, excluded pull requests, prod branches, etc. to get more accurate data.
## How Middleware computes DORA
Middleware can display DORA metrics using exclusively pull-request-based data. The aim is to provide DORA metrics to anyone and everyone using their Git data, regardless of other integrations, in as few steps as possible. A rough illustration of the idea follows below.
In its totality, DORA metrics are derived from pull requests, deployments, and incidents.
To learn more about how it's done, look at our documentation here.
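As a simplified sketch (not Middleware's actual algorithm; the branch names and data below are invented), PRs merged into a production branch can stand in for deployment events:

```python
# Sketch: derive deployment events from merged PRs, then compute frequency.
from datetime import datetime

# (merged_at, base_branch) for hypothetical merged pull requests
merged_prs = [
    (datetime(2024, 5, 1), "main"),
    (datetime(2024, 5, 2), "develop"),
    (datetime(2024, 5, 6), "main"),
    (datetime(2024, 5, 8), "main"),
]
prod_branches = {"main"}  # prod branches are configurable in settings

# Each PR merged into a production branch counts as one deployment event.
deploy_events = sorted(t for t, base in merged_prs if base in prod_branches)

window_days = 7
print(f"Deployment Frequency: {len(deploy_events) / window_days:.2f} per day")
```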
## Roadmap
Coming Soon!
## Contributing
To get started contributing to Middleware, check out our CONTRIBUTING.md.

> [!IMPORTANT]
> ✨ We offer SWAG for solving issues labelled `advanced`! ✨
> Please confirm with our team on the respective issues before proceeding.

> [!IMPORTANT]
> 👩‍💻 When new hiring positions open, we look at our open source contributors first! 👨‍💻
> Join our Slack so we can reach out to you.

We appreciate your contributions and look forward to working together to make Middleware even better!
## Dev Scripts
This section contains some automation scripts that can generate boilerplate code to extend certain features and ship faster 🚀
- Context: Initially, adding a new setting required context on the settings system, changes across several files, and writing adapters and defaults based on the new setting class structure.
- This can now be done by running the `make_new_setting.py` script in the `./backend/dev_scripts` directory.
  If you are in the root directory, you can run:
  ```bash
  python ./backend/dev_scripts/make_new_setting.py
  ```
- Enter the setting name in the consistent format.
- Add the required keys and their types. Enter `done` once you have added all the fields.
- Update imports and linting.
- You are good to go :tada:
- Note: For non-primitive types in the setting, such as UUIDs, enums, etc., you will have to make changes to the generated adapters.

https://github.com/middlewarehq/middleware/assets/70485812/f0529fa7-a2cb-44b1-ae07-2a7c97f56bef
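To illustrate the note about non-primitive types, here is a hypothetical example (the names and the generated code's actual shape are invented for illustration): an enum field needs an explicit conversion in the adapter, whereas primitive fields round-trip as-is.

```python
# Hypothetical setting class and adapter; names are invented for illustration.
from dataclasses import dataclass
from enum import Enum


class IncidentSource(Enum):
    GIT = "GIT"
    JIRA = "JIRA"


@dataclass
class MyNewSetting:
    max_sync_days: int        # primitive: the generated adapter handles it
    source: IncidentSource    # non-primitive: needs a manual adapter change


def my_new_setting_from_db(data: dict) -> MyNewSetting:
    return MyNewSetting(
        max_sync_days=data["max_sync_days"],
        source=IncidentSource(data["source"]),  # the manual conversion step
    )
```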
## Security
To learn how to report vulnerabilities and help keep Middleware secure, check out our SECURITY.md.
We look forward to your part in keeping Middleware secure!
## License
This project is licensed under the Apache 2.0 License - see the LICENSE.md file for details.