middleware
✨ Open-source DORA metrics platform for engineering teams ✨
Stars: 1099
Middleware is an open-source engineering management tool that helps engineering leaders measure and analyze team effectiveness using DORA metrics. It integrates with CI/CD tools, automates DORA metric collection and analysis, visualizes key performance indicators, provides customizable reports and dashboards, and integrates with project management platforms. Users can set up Middleware using Docker or manually, generate encryption keys, set up backend and web servers, and access the application to view DORA metrics. The tool calculates DORA metrics using GitHub data, including Deployment Frequency, Lead Time for Changes, Mean Time to Restore, and Change Failure Rate. Middleware aims to provide DORA metrics to users based on their Git data, simplifying the process of tracking software delivery performance and operational efficiency.
README:
Open-source engineering management that unlocks developer potential
Join our Open Source Community
Middleware is an open-source tool designed to help engineering leaders measure and analyze the effectiveness of their teams using DORA metrics. The DORA metrics are a set of four key measurements that provide insights into software delivery performance and operational efficiency.
They are:
- Deployment Frequency: The frequency of code deployments to production or an operational environment.
- Lead Time for Changes: The time it takes for a commit to make it into production.
- Mean Time to Restore: The time it takes to restore service after an incident or failure.
- Change Failure Rate: The percentage of deployments that result in failures or require remediation.
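As a worked example: if a team ships 40 deployments in a month and 2 of them cause incidents, its Change Failure Rate is 2/40 = 5%; if those two incidents take 2 hours and 4 hours to resolve, its Mean Time to Restore is 3 hours.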
Features
- Integration with various CI/CD tools
- Automated collection and analysis of DORA metrics
- Visualization of key performance indicators
- Customizable reports and dashboards
- Integration with popular project management platforms
- Ensure that you have Docker installed and running.
- Open the terminal and run the following commands:
docker volume create middleware_postgres_data
docker volume create middleware_keys
docker run --name middleware \
  -p 3333:3333 \
  -p 9696:9696 \
  -p 9697:9697 \
  -v middleware_postgres_data:/var/lib/postgresql/data \
  -v middleware_keys:/app/keys \
  -d middlewareeng/middleware:latest
docker logs -f middleware
- Wait for some time for the services to come up.
- The app will be available on your host at http://localhost:3333; a quick sanity check follows this list.
- In case you want to stop the container, run the following command:
docker stop middleware
- In order to fetch the latest version from the remote and then start the system, use the following commands:
docker pull middlewareeng/middleware:latest
docker rm -f middleware || true
docker run --name middleware \
  -p 3333:3333 \
  -v middleware_postgres_data:/var/lib/postgresql/data \
  -v middleware_keys:/app/keys \
  -d middlewareeng/middleware:latest
docker logs -f middleware
- If you see an error like:
Conflict. The container name "/middleware" is already in use by container.
Then run the following command before running the container again:
docker rm -f middleware
- If you wish to delete all the data saved in the container, delete the volumes created earlier by running the following command:
docker volume rm middleware_postgres_data middleware_keys
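Once the logs show the services are up, a minimal sanity check (assuming the default ports from the run command above) is:
docker ps --filter name=middleware
curl -I http://localhost:3333
The first command should list the container as "Up"; the second should get an HTTP response back from the web UI.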
Gitpod enables development on remote machines and helps you get started with Middleware if your machine does not support running the project locally.
If you want to run the project locally, you can set it up using Docker or set everything up manually.
- Click the button below to open this project in Gitpod.
- This will open a fully configured workspace in your browser with all the necessary dependencies already installed.
After initialization, you can access the server at port 3333 of the Gitpod instance.
[!IMPORTANT] We recommend a minimum of 16 GB RAM when developing locally.
If you don't have Docker installed, please install Docker over here. Make sure Docker is running.
- Clone the Repository:
git clone https://github.com/middlewarehq/middleware
- Navigate to the Project Directory:
cd middleware
- Run the dev.sh script in the project root 🪄
./dev.sh creates a .env file with the required development environment variables and runs a CLI that does all the heavy lifting, from tracking the container with docker compose watch to providing you with logs from different services.
The usage is as follows:
./dev.sh
You may update the env.example and set ENVIRONMENT=prod to run it in a production setup.
Further, if any changes are required to be made to ports, you may update the docker-compose.yml file accordingly.
- Access the Application: Once the project is running, access the application through your web browser at http://localhost:3333. Further, other services can be accessed at:
- The analytics server is available at http://localhost:9696.
- The sync server can be accessed at http://localhost:9697.
- The Postgres database can be accessed at host: localhost, port: 5434, username: postgres, password: postgres, db name: mhq-oss.
- The Redis server can be accessed at host: localhost, port: 6385.
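If you want to inspect these services directly, the standard clients work with the credentials above (a quick sketch, assuming psql and redis-cli are installed on your machine):
psql -h localhost -p 5434 -U postgres -d mhq-oss
redis-cli -p 6385 ping
The psql command prompts for the password (postgres); the redis-cli ping should reply PONG.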
- View the logs: Although the CLI tracks all logs, the logs of services running inside the container can be viewed in different terminals using the following commands:
Frontend logs
docker exec -it middleware-dev tail --lines 500 -f /var/log/web-server/web-server.log
Backend api server logs
docker exec -it middleware-dev tail --lines 500 -f /var/log/apiserver/apiserver.log
Backend sync server logs
docker exec -it middleware-dev tail --lines 500 -f /var/log/sync_server/sync_server.log
Redis logs
docker exec -it middleware-dev tail --lines 500 -f /var/log/redis/redis.log
Postgres logs
docker exec -it middleware-dev tail --lines 500 -f /var/log/postgres/postgres.log
To set up Middleware locally, follow these steps:
- Clone the Repository:
git clone https://github.com/middlewarehq/middleware.git
- Navigate to the Project Directory:
cd middleware
- Run Redis and Postgres Containers:
If you don't have Docker installed, please install Docker over here.
Run the following command to start Postgres and Redis using Docker:
cd database-docker && docker-compose up -d
If you'd rather not use Docker, you can install Postgres and Redis manually.
Once you are done using or developing Middleware, you can stop these running containers. (NOTE: Don't do this if you are following this document and trying to run Middleware.)
cd database-docker/
docker-compose down -v
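Before moving on, you can confirm both services are reachable on the ports used later in this guide (an optional check; pg_isready ships with the Postgres client tools):
pg_isready -h localhost -p 5434
redis-cli -p 6385 ping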
- Generate Encryption Keys:
Generate encryption keys for the project by running the following command in the project root directory:
cd setup_utils && . ./generate_config_ini.sh && cd ..
- Backend Server Setup
- Install Python version 3.11.6.
For this, you can install Python from over here if you don't have it on your machine, or use pyenv:
- Install pyenv:
git clone https://github.com/pyenv/pyenv.git ~/.pyenv
- Add pyenv to your shell's configuration file (.bashrc, .bash_profile, .zshrc, etc.):
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
- Reload your shell:
source ~/.bashrc
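With pyenv available, you can then install and pin the required Python version (standard pyenv commands, not part of the original guide; the eval line enables pyenv's shims in the current shell):
eval "$(pyenv init -)"
pyenv install 3.11.6
pyenv local 3.11.6
python --version
The last command should print Python 3.11.6.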
- Move to the backend directory and create a virtual environment:
cd backend
python -m venv venv
- Activate the virtual environment:
. venv/bin/activate
- Install Dependencies:
pip install -r requirements.txt -r dev-requirements.txt
- Create a .env file in the root directory and add the following environment variables, replacing the values with your own if needed:
DB_HOST=localhost
DB_NAME=mhq-oss
DB_PASS=postgres
DB_PORT=5434
DB_USER=postgres
REDIS_HOST=localhost
REDIS_PORT=6385
ANALYTICS_SERVER_PORT=9696
SYNC_SERVER_PORT=9697
DEFAULT_SYNC_DAYS=31
- Start the backend servers:
- Change directory to analytics_server:
cd analytics_server
- For the backend analytics server:
flask --app app --debug run --port 9696
- For the backend sync server:
flask --app sync_app --debug run --port 9697
NOTE: Open this sync server in a new terminal window, after activating the virtual environment, and only after starting the analytics server.
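To confirm both servers are listening, you can probe their ports (a rough smoke test; the exact routes and responses depend on the app, so any HTTP status code here just means the server is up):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9696/
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9697/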
Web Server Setup
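The concrete web-server commands did not survive in this copy of the README. As a rough sketch only, assuming a Node-based app in a web-server/ directory (suggested by the /var/log/web-server/web-server.log path above; check the repository for the actual steps):
cd web-server
yarn install
yarn dev
Treat these yarn commands as hypothetical placeholders; the repository documents the real ones.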
- Access the Application: Once the project is running, access the application through your web browser at http://localhost:3333. Additionally:
- The analytics server is available at http://localhost:9696.
- The sync server can be accessed at http://localhost:9697.
- Set up the project by following the steps mentioned above.
- Generate and add your PAT token from your code provider.
- Create a team and select repositories for the team.
- See DORA metrics for your team.
- Update settings related to incident filters, excluded pull requests, production branches, etc. to get more accurate data.
Middleware can display DORA metrics using exclusively pull-request-based data. The aim is to provide DORA metrics to anyone and everyone using their Git data, regardless of other integrations, in as few steps as possible.
In their totality, DORA metrics are derived from pull requests, deployments, and incidents.
To learn more about how it's done, look at our documentation here.
Coming Soon!
To get started contributing to Middleware, check out our CONTRIBUTING.md.
[!IMPORTANT] ✨ We offer SWAG for solving issues labelled advanced! ✨ Please confirm with our team on the respective issues before proceeding.
[!IMPORTANT] 👩‍💻 When new hiring positions open, we look at our open source contributors first! 👨‍💻 Join our Slack so we can reach out to you.
We appreciate your contributions and look forward to working together to make Middleware even better!
This section contains some automation scripts that can generate boilerplate code to extend certain features and ship faster 🚀
- Context: Initially, adding a new setting required context of the settings system, changes across several files, and making adapters and defaults based on the new setting class structure.
- This can now be done by running the make_new_setting.py script in the ./backend/dev_scripts directory:
python make_new_setting.py
If you are in the root directory, you can run:
python ./backend/dev_scripts/make_new_setting.py
- Enter the setting name in the consistent format.
- Add the required keys and their types. Enter done once you have added all the fields.
- Update imports and linting.
- You are good to go :tada:
- Note: For non-primitive types in the setting, such as UUIDs, enums, etc., you will have to make changes to the generated adapters.
https://github.com/middlewarehq/middleware/assets/70485812/f0529fa7-a2cb-44b1-ae07-2a7c97f56bef
For details on reporting vulnerabilities and our security policy, check out our SECURITY.md.
We look forward to your part in keeping Middleware secure!
This project is licensed under the Apache 2.0 License - see the LICENSE.md file for details.
Alternative AI tools for middleware
Similar Open Source Tools
vasttools
This repository contains a collection of tools that can be used with vastai. The tools are free to use, modify, and distribute. If you find this useful and wish to donate, you're welcome to send your donations to the following wallets. BTC 15qkQSYXP2BvpqJkbj2qsNFb6nd7FyVcou XMR 897VkA8sG6gh7yvrKrtvWningikPteojfSgGff3JAUs3cu7jxPDjhiAZRdcQSYPE2VGFVHAdirHqRZEpZsWyPiNK6XPQKAg RVN RSgWs9Co8nQeyPqQAAqHkHhc5ykXyoMDUp USDT(ETH ERC20) 0xa5955cf9fe7af53bcaa1d2404e2b17a1f28aac4f Paypal PayPal.Me/cryptolabsZA
open-webui
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.
Local-File-Organizer
The Local File Organizer is an AI-powered tool designed to help users organize their digital files efficiently and securely on their local device. By leveraging advanced AI models for text and visual content analysis, the tool automatically scans and categorizes files, generates relevant descriptions and filenames, and organizes them into a new directory structure. All AI processing occurs locally using the Nexa SDK, ensuring privacy and security. With support for multiple file types and customizable prompts, this tool aims to simplify file management and bring order to users' digital lives.
patchwork
PatchWork is an open-source framework designed for automating development tasks using large language models. It enables users to automate workflows such as PR reviews, bug fixing, security patching, and more through a self-hosted CLI agent and preferred LLMs. The framework consists of reusable atomic actions called Steps, customizable LLM prompts known as Prompt Templates, and LLM-assisted automations called Patchflows. Users can run Patchflows locally in their CLI/IDE or as part of CI/CD pipelines. PatchWork offers predefined patchflows like AutoFix, PRReview, GenerateREADME, DependencyUpgrade, and ResolveIssue, with the flexibility to create custom patchflows. Prompt templates are used to pass queries to LLMs and can be customized. Contributions to new patchflows, steps, and the core framework are encouraged, with chat assistants available to aid in the process. The roadmap includes expanding the patchflow library, introducing a debugger and validation module, supporting large-scale code embeddings, parallelization, fine-tuned models, and an open-source GUI. PatchWork is licensed under AGPL-3.0 terms, while custom patchflows and steps can be shared using the Apache-2.0 licensed patchwork template repository.
quivr
Quivr is a personal assistant powered by Generative AI, designed to be a second brain for users. It offers fast and efficient access to data, ensuring security and compatibility with various file formats. Quivr is open source and free to use, allowing users to share their brains publicly or keep them private. The marketplace feature enables users to share and utilize brains created by others, boosting productivity. Quivr's offline mode provides anytime, anywhere access to data. Key features include speed, security, OS compatibility, file compatibility, open source nature, public/private sharing options, a marketplace, and offline mode.
nlux
nlux is an open-source Javascript and React JS library that makes it super simple to integrate powerful large language models (LLMs) like ChatGPT into your web app or website. With just a few lines of code, you can add conversational AI capabilities and interact with your favourite LLM.
PentestGPT
PentestGPT is a penetration testing tool empowered by ChatGPT, designed to automate the penetration testing process. It operates interactively to guide penetration testers in overall progress and specific operations. The tool supports solving easy to medium HackTheBox machines and other CTF challenges. Users can use PentestGPT to perform tasks like testing connections, using different reasoning models, discussing with the tool, searching on Google, and generating reports. It also supports local LLMs with custom parsers for advanced users.
NeoGPT
NeoGPT is an AI assistant that transforms your local workspace into a powerhouse of productivity from your CLI. With features like code interpretation, multi-RAG support, vision models, and LLM integration, NeoGPT redefines how you work and create. It supports executing code seamlessly, multiple RAG techniques, vision models, and interacting with various language models. Users can run the CLI to start using NeoGPT and access features like Code Interpreter, building vector database, running Streamlit UI, and changing LLM models. The tool also offers magic commands for chat sessions, such as resetting chat history, saving conversations, exporting settings, and more. Join the NeoGPT community to experience a new era of efficiency and contribute to its evolution.
NoLabs
NoLabs is an open-source biolab that provides easy access to state-of-the-art models for bio research. It supports various tasks, including drug discovery, protein analysis, and small molecule design. NoLabs aims to accelerate bio research by making inference models accessible to everyone.
clearml-server
ClearML Server is a backend service infrastructure for ClearML, facilitating collaboration and experiment management. It includes a web app, RESTful API, and file server for storing images and models. Users can deploy ClearML Server using Docker, AWS EC2 AMI, or Kubernetes. The system design supports single IP or sub-domain configurations with specific open ports. ClearML-Agent Services container allows launching long-lasting jobs and various use cases like auto-scaler service, controllers, optimizer, and applications. Advanced functionality includes web login authentication and non-responsive experiments watchdog. Upgrading ClearML Server involves stopping containers, backing up data, downloading the latest docker-compose.yml file, configuring ClearML-Agent Services, and spinning up docker containers. Community support is available through ClearML FAQ, Stack Overflow, GitHub issues, and email contact.
robocorp
Robocorp is a platform that allows users to create, deploy, and operate Python automations and AI actions. It provides an easy way to extend the capabilities of AI agents, assistants, and copilots with custom actions written in Python. Users can create and deploy tools, skills, loaders, and plugins that securely connect any AI Assistant platform to their data and applications. The Robocorp Action Server makes Python scripts compatible with ChatGPT and LangChain by automatically creating and exposing an API based on function declaration, type hints, and docstrings. It simplifies the process of developing and deploying AI actions, enabling users to interact with AI frameworks effortlessly.
GraphRAG-Local-UI
GraphRAG Local with Interactive UI is an adaptation of Microsoft's GraphRAG, tailored to support local models and featuring a comprehensive interactive user interface. It allows users to leverage local models for LLM and embeddings, visualize knowledge graphs in 2D or 3D, manage files, settings, and queries, and explore indexing outputs. The tool aims to be cost-effective by eliminating dependency on costly cloud-based models and offers flexible querying options for global, local, and direct chat queries.
chatgpt-vscode
ChatGPT-VSCode is a Visual Studio Code integration that allows users to prompt OpenAI's GPT-4, GPT-3.5, GPT-3, and Codex models within the editor. It offers features like using improved models via OpenAI API Key, Azure OpenAI Service deployments, generating commit messages, storing conversation history, explaining and suggesting fixes for compile-time errors, viewing code differences, and more. Users can customize prompts, quick fix problems, save conversations, and export conversation history. The extension is designed to enhance developer experience by providing AI-powered assistance directly within VS Code.
nlux
NLUX is an open-source JavaScript and React JS library that simplifies the integration of powerful large language models (LLMs) like ChatGPT into web apps or websites. With just a few lines of code, users can add conversational AI capabilities and interact with their favorite LLM. The library offers features such as building AI chat interfaces in minutes, React components and hooks for easy integration, LLM adapters for various APIs, customizable assistant and user personas, streaming LLM output, custom renderers, high customizability, and zero dependencies. NLUX is designed with principles of intuitiveness, performance, accessibility, and developer experience in mind. The mission of NLUX is to enable developers to build outstanding LLM front-ends and applications with a focus on performance and usability.
atidraw
Atidraw is a web application that allows users to create, enhance, and share drawings using Cloudflare R2 and Cloudflare AI. It features intuitive drawing with signature_pad, AI-powered enhancements such as alt text generation and image generation with Stable Diffusion, global storage on Cloudflare R2, flexible authentication options, and high-performance server-side rendering on Cloudflare Pages. Users can deploy Atidraw with zero configuration on their Cloudflare account using NuxtHub.
For similar tasks
ciso-assistant-community
CISO Assistant is a tool that helps organizations manage their cybersecurity posture and compliance. It provides a centralized platform for managing security controls, threats, and risks. CISO Assistant also includes a library of pre-built frameworks and tools to help organizations quickly and easily implement best practices.
supersonic
SuperSonic is a next-generation BI platform that integrates Chat BI (powered by LLM) and Headless BI (powered by semantic layer) paradigms. This integration ensures that Chat BI has access to the same curated and governed semantic data models as traditional BI. Furthermore, the implementation of both paradigms benefits from the integration: * Chat BI's Text2SQL gets augmented with context-retrieval from semantic models. * Headless BI's query interface gets extended with natural language API. SuperSonic provides a Chat BI interface that empowers users to query data using natural language and visualize the results with suitable charts. To enable such experience, the only thing necessary is to build logical semantic models (definition of metric/dimension/tag, along with their meaning and relationships) through a Headless BI interface. Meanwhile, SuperSonic is designed to be extensible and composable, allowing custom implementations to be added and configured with Java SPI. The integration of Chat BI and Headless BI has the potential to enhance the Text2SQL generation in two dimensions: 1. Incorporate data semantics (such as business terms, column values, etc.) into the prompt, enabling LLM to better understand the semantics and reduce hallucination. 2. Offload the generation of advanced SQL syntax (such as join, formula, etc.) from LLM to the semantic layer to reduce complexity. With these ideas in mind, we develop SuperSonic as a practical reference implementation and use it to power our real-world products. Additionally, to facilitate further development we decide to open source SuperSonic as an extensible framework.
DB-GPT
DB-GPT is an open source AI native data app development framework with AWEL(Agentic Workflow Expression Language) and agents. It aims to build infrastructure in the field of large models, through the development of multiple technical capabilities such as multi-model management (SMMF), Text2SQL effect optimization, RAG framework and optimization, Multi-Agents framework collaboration, AWEL (agent workflow orchestration), etc. Which makes large model applications with data simpler and more convenient.
Chat2DB
Chat2DB is an AI-driven data development and analysis platform that enables users to communicate with databases using natural language. It supports a wide range of databases, including MySQL, PostgreSQL, Oracle, SQLServer, SQLite, MariaDB, ClickHouse, DM, Presto, DB2, OceanBase, Hive, KingBase, MongoDB, Redis, and Snowflake. Chat2DB provides a user-friendly interface that allows users to query databases, generate reports, and explore data using natural language commands. It also offers a variety of features to help users improve their productivity, such as auto-completion, syntax highlighting, and error checking.
aide
AIDE (Advanced Intrusion Detection Environment) is a tool for monitoring file system changes. It can be used to detect unauthorized changes to monitored files and directories. AIDE was written to be a simple and free alternative to Tripwire. Features currently included in AIDE are as follows: o File attributes monitored: permissions, inode, user, group file size, mtime, atime, ctime, links and growing size. o Checksums and hashes supported: SHA1, MD5, RMD160, and TIGER. CRC32, HAVAL and GOST if Mhash support is compiled in. o Plain text configuration files and database for simplicity. o Rules, variables and macros that can be customized to local site or system policies. o Powerful regular expression support to selectively include or exclude files and directories to be monitored. o gzip database compression if zlib support is compiled in. o Free software licensed under the GNU General Public License v2.
OpsPilot
OpsPilot is an AI-powered operations navigator developed by the WeOps team. It leverages deep learning and LLM technologies to make operations plans interactive and generalize and reason about local operations knowledge. OpsPilot can be integrated with web applications in the form of a chatbot and primarily provides the following capabilities: 1. Operations capability precipitation: By depositing operations knowledge, operations skills, and troubleshooting actions, when solving problems, it acts as a navigator and guides users to solve operations problems through dialogue. 2. Local knowledge Q&A: By indexing local knowledge and Internet knowledge and combining the capabilities of LLM, it answers users' various operations questions. 3. LLM chat: When the problem is beyond the scope of OpsPilot's ability to handle, it uses LLM's capabilities to solve various long-tail problems.
aimeos-typo3
Aimeos is a professional, full-featured, and high-performance e-commerce extension for TYPO3. It can be installed in an existing TYPO3 website within 5 minutes and can be adapted, extended, overwritten, and customized to meet specific needs.
For similar jobs
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
ai-on-gke
This repository contains assets related to AI/ML workloads on Google Kubernetes Engine (GKE). Run optimized AI/ML workloads with Google Kubernetes Engine (GKE) platform orchestration capabilities. A robust AI/ML platform considers the following layers: Infrastructure orchestration that support GPUs and TPUs for training and serving workloads at scale Flexible integration with distributed computing and data processing frameworks Support for multiple teams on the same infrastructure to maximize utilization of resources
tidb
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.
nvidia_gpu_exporter
Nvidia GPU exporter for prometheus, using `nvidia-smi` binary to gather metrics.
tracecat
Tracecat is an open-source automation platform for security teams. It's designed to be simple but powerful, with a focus on AI features and a practitioner-obsessed UI/UX. Tracecat can be used to automate a variety of tasks, including phishing email investigation, evidence collection, and remediation plan generation.
openinference
OpenInference is a set of conventions and plugins that complement OpenTelemetry to enable tracing of AI applications. It provides a way to capture and analyze the performance and behavior of AI models, including their interactions with other components of the application. OpenInference is designed to be language-agnostic and can be used with any OpenTelemetry-compatible backend. It includes a set of instrumentations for popular machine learning SDKs and frameworks, making it easy to add tracing to your AI applications.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
kong
Kong, or Kong API Gateway, is a cloud-native, platform-agnostic, scalable API Gateway distinguished for its high performance and extensibility via plugins. It also provides advanced AI capabilities with multi-LLM support. By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease. Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.