
mcp-ts-template
A production-grade TypeScript template for building robust Model Context Protocol (MCP) servers, featuring built-in observability with OpenTelemetry, advanced error handling, comprehensive utilities, and a modular architecture.
Stars: 69

The MCP TypeScript Server Template is a production-grade framework for building powerful and scalable Model Context Protocol servers with TypeScript. It features built-in observability, declarative tooling, robust error handling, and a modular, DI-driven architecture. The template is designed to be AI-agent-friendly, providing detailed rules and guidance for developers to adhere to best practices. It enforces architectural principles like 'Logic Throws, Handler Catches' pattern, full-stack observability, declarative components, and dependency injection for decoupling. The project structure includes directories for configuration, container setup, server resources, services, storage, utilities, tests, and more. Configuration is done via environment variables, and key scripts are available for development, testing, and publishing to the MCP Registry.
README:

The definitive, production-grade template for building powerful and scalable Model Context Protocol servers with TypeScript, featuring built-in observability (OpenTelemetry), declarative tooling, robust error handling, and a modular, DI-driven architecture.
`mcp-ts-template` is more than just a template; it's a feature-rich, production-ready framework for building robust, observable, and secure MCP servers, providing a solid architectural foundation so you can focus entirely on creating powerful tools and resources for AI agents.
This project is designed to be AI-agent-friendly, providing an LLM-optimized AGENTS.md and detailed rules in .clinerules/clinerules.md to ensure your coding agents adhere to best practices from the start.
This template is packed with production-grade features designed for high-performance, secure, and maintainable MCP servers.
Feature | Description |
---|---|
Declarative Tooling | Define tools in a single, self-contained file (`*.tool.ts`). The framework handles registration, validation, error handling, and performance metrics automatically. |
Full Observability | Zero-configuration OpenTelemetry integration. Get distributed traces and metrics out-of-the-box for all your tools and underlying dependencies (HTTP, DNS). |
Pluggable Auth | Built-in authentication middleware supporting JWT and OAuth 2.1. Easily toggle auth modes or extend with new strategies via the `AuthStrategy` interface. |
Stateful & Stateless Transports | Choose between stdio or HTTP transports. The HTTP transport intelligently supports both persistent, stateful sessions and ephemeral, stateless requests. |
Robust Error Handling | A centralized `ErrorHandler` maps all exceptions to standardized `JsonRpcErrorCode`s and automatically correlates them with OpenTelemetry traces for easy debugging. |
Type-Safe & Validated | Zod is used everywhere for rigorous schema validation of configuration, tool inputs/outputs, and API boundaries, preventing invalid data at the source. |
Abstracted Storage Layer | A flexible, provider-based storage service (`IStorageProvider`) with ready-to-use backends for In-Memory, Filesystem, and Supabase. |
Comprehensive Utilities | A rich set of internal utilities for logging (Winston), rate-limiting, security sanitization, ID generation, cron scheduling, and network requests. |
Integration-First Testing | Pre-configured with Vitest and msw for writing meaningful integration tests that reflect real-world usage, ensuring reliability from end to end. |
Agent-Ready Design | Includes detailed guidance in AGENTS.md and .clinerules/ to direct developer LLM agents, ensuring they adhere to the project's architectural standards. |
Prerequisites:

- Bun (v1.2.0 or higher)

1. Clone the Repository:

   git clone https://github.com/cyanheads/mcp-ts-template.git
   cd mcp-ts-template

2. Install Dependencies:

   bun install

3. Build the Project:

   bun build # or bun rebuild
You can run the server in several modes for development and production.
- STDIO Transport: Ideal for local development or when the server is a child process.

  bun run start:stdio

- HTTP Transport: For network-accessible deployments.

  bun run start:http # Server now running at http://127.0.0.1:3010
This template enforces a set of non-negotiable architectural principles to ensure every server built from it is robust, maintainable, and debuggable.
The "Logic Throws, Handler Catches" pattern is the cornerstone of control flow and error handling. It creates a complete separation between pure business logic and the surrounding infrastructure concerns.
- Core Logic (`logic`): Defined within your `ToolDefinition`, this is a pure, stateless `async` function. It contains only the business logic for the tool. If an operational or validation error occurs, it must terminate by `throw`ing a structured `McpError`. It never contains a `try...catch` block.
- Handler (Auto-Generated): The `toolHandlerFactory` automatically wraps your `logic` function in a robust `try...catch` block at runtime. This factory-generated handler is responsible for creating the `RequestContext`, measuring performance with OpenTelemetry, invoking your logic, and catching any thrown errors. It is the only place where errors are caught and formatted into a final `CallToolResult`.
This pattern allows you to write clean, focused business logic while the framework guarantees it's executed within a fully instrumented, safe, and observable context.
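A minimal sketch of a tool definition written in this style is shown below; the exact `ToolDefinition` field names, import paths, and error-code identifiers are assumptions for illustration, not the template's verbatim API:

```typescript
// my-echo.tool.ts — illustrative only; the template's real ToolDefinition
// shape, import paths, and error codes may differ.
import { z } from 'zod';
import { McpError, JsonRpcErrorCode } from '../../types-global/errors.js'; // assumed path

const InputSchema = z.object({
  message: z.string().min(1).describe('Text to echo back'),
});
const OutputSchema = z.object({
  echoed: z.string(),
});

export const echoTool = {
  name: 'echo_message',
  description: 'Echoes a message back to the caller.',
  inputSchema: InputSchema,
  outputSchema: OutputSchema,

  // Pure, stateless logic: no try...catch here. Operational failures are
  // signalled by throwing a structured McpError; the factory-generated
  // handler catches it, records it on the active OTel span, and formats
  // the final CallToolResult.
  logic: async (
    input: z.infer<typeof InputSchema>,
  ): Promise<z.infer<typeof OutputSchema>> => {
    if (input.message.trim().length === 0) {
      throw new McpError(JsonRpcErrorCode.InvalidParams, 'message must not be blank');
    }
    return { echoed: input.message };
  },
};
```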
Every operation is traceable from end to end without any manual setup.
- OpenTelemetry SDK: Initialized in `src/utils/telemetry/instrumentation.ts` before any other module, it automatically instruments supported I/O operations (HTTP, DNS, etc.).
- Trace-Aware Context: The `requestContextService` automatically injects the active `traceId` and `spanId` into every `RequestContext`.
- Error-Trace Correlation: The central `ErrorHandler` records every handled exception on the active OTel span and sets its status to `ERROR`, ensuring every failure is visible and searchable in your tracing backend.
- Performance Spans: The `measureToolExecution` utility wraps every tool call in a dedicated span, capturing duration, status, and input/output sizes as attributes.
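To illustrate the span-per-tool-call idea (this is not the template's actual `measureToolExecution` code), a wrapper built on the standard `@opentelemetry/api` package might look like:

```typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('mcp-server');

// Hypothetical helper in the spirit of measureToolExecution: runs a tool's
// logic inside a dedicated span and records duration, status, and errors.
export async function withToolSpan<T>(
  toolName: string,
  fn: () => Promise<T>,
): Promise<T> {
  return tracer.startActiveSpan(`tool:${toolName}`, async (span) => {
    const startedAt = Date.now();
    try {
      const result = await fn();
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR, message: (err as Error).message });
      throw err;
    } finally {
      span.setAttribute('tool.duration_ms', Date.now() - startedAt);
      span.end();
    }
  });
}
```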
Tools and resources are defined declaratively in single, self-contained files. This makes the system highly modular and easy to reason about.
The entire architecture is built around a Dependency Injection (DI) container (`tsyringe`).
- Centralized Container: All services, providers, and managers are registered in a central DI container, configured in `src/container/`.
- Inversion of Control: Components never create their own dependencies. Instead, they receive them via constructor injection, making them highly testable and decoupled.
- Auto-Registration: Tool and resource definitions are automatically discovered and registered with the container from barrel exports, eliminating manual wiring.
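For readers unfamiliar with `tsyringe`, constructor injection with the container looks roughly like this; the class names and tokens below are illustrative, not the template's actual registrations:

```typescript
import 'reflect-metadata';
import { injectable, inject, container } from 'tsyringe';

// Illustrative contract and token; the template defines its own.
interface IStorageProvider {
  get(key: string): Promise<string | undefined>;
}

@injectable()
class GreetingService {
  // The container supplies the dependency; the class never constructs it.
  constructor(
    @inject('IStorageProvider') private readonly storage: IStorageProvider,
  ) {}

  async greet(userKey: string): Promise<string> {
    const name = (await this.storage.get(userKey)) ?? 'stranger';
    return `Hello, ${name}!`;
  }
}

// Registration typically happens once, in a central place such as src/container/.
container.register<IStorageProvider>('IStorageProvider', {
  useValue: { get: async () => 'Ada' }, // in-memory stub for this example
});

// Resolution anywhere else — tsyringe wires the constructor arguments.
const service = container.resolve(GreetingService);
void service.greet('user:123');
```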
.
├── .clinerules/          # --> Rules and mandates for LLM-based development agents.
├── .github/              # --> GitHub Actions workflows (e.g., CI/CD).
├── scripts/              # --> Helper scripts for development (cleaning, docs, etc.).
├── src/
│   ├── config/           # --> Application configuration (Zod schemas, loader).
│   ├── container/        # --> Dependency Injection container setup and registrations.
│   ├── mcp-server/
│   │   ├── resources/    # --> Declarative resource definitions (*.resource.ts).
│   │   ├── tools/        # --> Declarative tool definitions (*.tool.ts).
│   │   ├── transports/   # --> HTTP and STDIO transport layers, including auth.
│   │   └── server.ts     # --> Core McpServer setup (resolves components from DI).
│   ├── services/         # --> Clients for external services (e.g., LLM providers).
│   ├── storage/          # --> Abstracted storage layer and providers.
│   ├── types-global/     # --> Global TypeScript types (e.g., McpError).
│   └── utils/            # --> Core utilities (logger, error handler, security).
├── tests/                # --> Vitest integration and unit tests.
├── .env.example          # --> Example environment variables.
├── AGENTS.md             # --> Detailed architectural guide for LLM agents.
└── Dockerfile            # --> For building and running the server in a container.
1. Create the Definition: Create a new file at `src/mcp-server/tools/definitions/my-new-tool.tool.ts`. Use an existing tool as a template.
2. Define the Tool: Export a single `const` of type `ToolDefinition` containing the name, Zod schemas, and pure business logic.
3. Register via Barrel Export: Open `src/mcp-server/tools/definitions/index.ts` and add your new tool definition to the `allToolDefinitions` array.
// src/mcp-server/tools/definitions/index.ts
import { myNewTool } from './my-new-tool.tool.js';
// ... other imports
export const allToolDefinitions = [
// ... other tools
myNewTool,
];
That's it. The DI container automatically discovers and registers all tools from this array at startup.
1. Create Provider: Create a new class under `src/storage/providers/` that implements the `IStorageProvider` interface.
2. Add to Factory: Open `src/storage/core/storageFactory.ts`. Add a case to the `switch` statement to instantiate your new provider based on the `STORAGE_PROVIDER_TYPE` from the config.
3. Update Config Schema: Add your new provider's name to the `StorageProviderType` enum in `src/config/index.ts`.
4. Set Environment Variable: In your `.env` file, set `STORAGE_PROVIDER_TYPE` to your new provider's name.
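As a rough sketch of step 1 (the method set on the real `IStorageProvider` interface may differ), a simple provider could look like:

```typescript
// src/storage/providers/example/exampleKeyValueProvider.ts — hypothetical;
// the actual IStorageProvider contract in the template may declare other methods.

// Assumed shape of the storage contract for this sketch.
interface IStorageProvider {
  get(key: string): Promise<unknown | undefined>;
  set(key: string, value: unknown): Promise<void>;
  delete(key: string): Promise<void>;
}

export class ExampleKeyValueProvider implements IStorageProvider {
  private readonly store = new Map<string, unknown>();

  async get(key: string): Promise<unknown | undefined> {
    return this.store.get(key);
  }

  async set(key: string, value: unknown): Promise<void> {
    this.store.set(key, value);
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```

The factory in `src/storage/core/storageFactory.ts` would then instantiate this class when `STORAGE_PROVIDER_TYPE` selects it.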
The server is configured via environment variables, loaded and validated by `src/config/index.ts`. Copy `.env.example` to `.env` and fill in the required values.
Variable | Description | Default |
---|---|---|
`MCP_TRANSPORT_TYPE` | Transport to use: `stdio` or `http`. | `http` |
`MCP_SESSION_MODE` | HTTP session mode: `stateless`, `stateful`, or `auto`. | `auto` |
`MCP_AUTH_MODE` | Authentication mode: `none`, `jwt`, or `oauth`. | `none` |
`MCP_LOG_LEVEL` | Minimum log level: `debug`, `info`, `warning`, `error`, etc. | `debug` |
`LOGS_DIR` | Directory for log files. | `logs/` |
`STORAGE_PROVIDER_TYPE` | Storage backend: `in-memory`, `filesystem`, `supabase`. | `filesystem` |
`STORAGE_FILESYSTEM_PATH` | Path for the filesystem storage provider. | `./.storage` |
`OPENROUTER_API_KEY` | API key for the OpenRouter LLM service. | |
`OTEL_ENABLED` | Set to `true` to enable OpenTelemetry. | `false` |
`MCP_AUTH_SECRET_KEY` | Secret key for signing JWTs (required for `jwt` auth mode). | |
`SUPABASE_URL` | URL for your Supabase project. | |
`SUPABASE_SERVICE_ROLE_KEY` | Service role key for Supabase admin tasks. | |
Refer to `.env.example` for a complete list of configurable options.
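Loading and validating that configuration typically follows the standard Zod pattern of parsing `process.env` against a schema; here is a simplified sketch (the actual schema in `src/config/index.ts` covers more variables and stricter rules):

```typescript
import { z } from 'zod';

// Simplified sketch of an environment schema; illustrative only.
const EnvSchema = z.object({
  MCP_TRANSPORT_TYPE: z.enum(['stdio', 'http']).default('http'),
  MCP_SESSION_MODE: z.enum(['stateless', 'stateful', 'auto']).default('auto'),
  MCP_AUTH_MODE: z.enum(['none', 'jwt', 'oauth']).default('none'),
  MCP_LOG_LEVEL: z.string().default('debug'),
  STORAGE_PROVIDER_TYPE: z
    .enum(['in-memory', 'filesystem', 'supabase'])
    .default('filesystem'),
  STORAGE_FILESYSTEM_PATH: z.string().default('./.storage'),
  // Env vars are strings, so booleans are parsed explicitly.
  OTEL_ENABLED: z
    .enum(['true', 'false'])
    .default('false')
    .transform((v) => v === 'true'),
});

// Fails fast at startup if the environment is invalid.
export const config = EnvSchema.parse(process.env);
```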
Key scripts available in `package.json`:
Script | Description |
---|---|
`bun run devdocs` | Generates a comprehensive development documentation prompt for AI analysis. |
`bun run rebuild` | Clears logs, cache, and compiles the TypeScript source code to JavaScript in `dist/`. |
`bun run start:http` | Starts the compiled server using the HTTP transport. |
`bun run start:stdio` | Starts the compiled server using the STDIO transport. |
`bun run test` | Runs all unit and integration tests with Vitest. |
`bun run test:coverage` | Runs all tests and generates a code coverage report. |
`bun run devcheck` | A comprehensive script that runs linting, type-checking, and formatting. |
`bun run publish-mcp` | (Recommended) An all-in-one script to sync, validate, commit, and publish your server to the MCP Registry. |
You can find these scripts in the `scripts/` directory.
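As an illustration of the integration-first testing style exercised by `bun run test`, a Vitest + msw test might look like the following (the endpoint and assertions are hypothetical):

```typescript
import { describe, it, expect, beforeAll, afterEach, afterAll } from 'vitest';
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';

// Hypothetical external API that a tool under test would call.
const mockApi = setupServer(
  http.get('https://api.example.com/status', () =>
    HttpResponse.json({ status: 'ok' }),
  ),
);

beforeAll(() => mockApi.listen());
afterEach(() => mockApi.resetHandlers());
afterAll(() => mockApi.close());

describe('status tool (illustrative)', () => {
  it('returns the upstream status without hitting the real network', async () => {
    const res = await fetch('https://api.example.com/status');
    const body = (await res.json()) as { status: string };
    expect(body.status).toBe('ok');
  });
});
```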
This template is configured for easy publishing to the public MCP Registry, making your server discoverable by any MCP-compatible client. The recommended method is to use the all-in-one publishing script.
For a complete walkthrough, including alternative methods and CI/CD automation, please refer to the detailed guide:
➡️ How to Publish Your MCP Server
This template includes a powerful script that automates the entire publishing workflow, from syncing versions and validating schemas to committing changes and publishing.
1. Ensure you are on the `main` branch with no uncommitted changes.
2. Run the script:

   bun run publish-mcp
The script will guide you through the process, including pausing for you to complete the GitHub browser login.
The script also supports flags for more granular control:
- `--validate-only`: Syncs metadata, validates `server.json`, then stops.
- `--no-commit`: Skips the automatic Git commit step.
- `--publish-only`: Skips local file changes and proceeds directly to publishing.
Example:
bun run publish-mcp --validate-only
This template also includes a GitHub Actions workflow (`.github/workflows/publish-mcp.yml`) that can be configured to automate this process whenever you push a new Git tag.
This is an open-source project. Contributions, issues, and feature requests are welcome. Please feel free to fork the repository, make changes, and open a pull request.
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
Similar Open Source Tools

mcp-ts-template
The MCP TypeScript Server Template is a production-grade framework for building powerful and scalable Model Context Protocol servers with TypeScript. It features built-in observability, declarative tooling, robust error handling, and a modular, DI-driven architecture. The template is designed to be AI-agent-friendly, providing detailed rules and guidance for developers to adhere to best practices. It enforces architectural principles like 'Logic Throws, Handler Catches' pattern, full-stack observability, declarative components, and dependency injection for decoupling. The project structure includes directories for configuration, container setup, server resources, services, storage, utilities, tests, and more. Configuration is done via environment variables, and key scripts are available for development, testing, and publishing to the MCP Registry.

backend.ai
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs. It allocates and isolates the underlying computing resources for multi-tenant computation sessions on-demand or in batches with customizable job schedulers with its own orchestrator. All its functions are exposed as REST/GraphQL/WebSocket APIs.

xFasterTransformer
xFasterTransformer is an optimized solution for Large Language Models (LLMs) on the X86 platform, providing high performance and scalability for inference on mainstream LLM models. It offers C++ and Python APIs for easy integration, along with example codes and benchmark scripts. Users can prepare models in a different format, convert them, and use the APIs for tasks like encoding input prompts, generating token ids, and serving inference requests. The tool supports various data types and models, and can run in single or multi-rank modes using MPI. A web demo based on Gradio is available for popular LLM models like ChatGLM and Llama2. Benchmark scripts help evaluate model inference performance quickly, and MLServer enables serving with REST and gRPC interfaces.

ps-fuzz
The Prompt Fuzzer is an open-source tool that helps you assess the security of your GenAI application's system prompt against various dynamic LLM-based attacks. It provides a security evaluation based on the outcome of these attack simulations, enabling you to strengthen your system prompt as needed. The Prompt Fuzzer dynamically tailors its tests to your application's unique configuration and domain. The Fuzzer also includes a Playground chat interface, giving you the chance to iteratively improve your system prompt, hardening it against a wide spectrum of generative AI attacks.

mcpd
mcpd is a tool developed by Mozilla AI to declaratively manage Model Context Protocol (MCP) servers, enabling consistent interface for defining and running tools across different environments. It bridges the gap between local development and enterprise deployment by providing secure secrets management, declarative configuration, and seamless environment promotion. mcpd simplifies the developer experience by offering zero-config tool setup, language-agnostic tooling, version-controlled configuration files, enterprise-ready secrets management, and smooth transition from local to production environments.

LLMTSCS
LLMLight is a novel framework that employs Large Language Models (LLMs) as decision-making agents for Traffic Signal Control (TSC). The framework leverages the advanced generalization capabilities of LLMs to engage in a reasoning and decision-making process akin to human intuition for effective traffic control. LLMLight has been demonstrated to be remarkably effective, generalizable, and interpretable against various transportation-based and RL-based baselines on nine real-world and synthetic datasets.

evalchemy
Evalchemy is a unified and easy-to-use toolkit for evaluating language models, focusing on post-trained models. It integrates multiple existing benchmarks such as RepoBench, AlpacaEval, and ZeroEval. Key features include unified installation, parallel evaluation, simplified usage, and results management. Users can run various benchmarks with a consistent command-line interface and track results locally or integrate with a database for systematic tracking and leaderboard submission.

chat
deco.chat is an open-source foundation for building AI-native software, providing developers, engineers, and AI enthusiasts with robust tools to rapidly prototype, develop, and deploy AI-powered applications. It empowers Vibecoders to prototype ideas and Agentic engineers to deploy scalable, secure, and sustainable production systems. The core capabilities include an open-source runtime for composing tools and workflows, MCP Mesh for secure integration of models and APIs, a unified TypeScript stack for backend logic and custom frontends, global modular infrastructure built on Cloudflare, and a visual workspace for building agents and orchestrating everything in code.

SageAttention
SageAttention is an official implementation of an accurate 8-bit attention mechanism for plug-and-play inference acceleration. It is optimized for RTX4090 and RTX3090 GPUs, providing performance improvements for specific GPU architectures. The tool offers a technique called 'smooth_k' to ensure accuracy in processing FP16/BF16 data. Users can easily replace 'scaled_dot_product_attention' with SageAttention for faster video processing.

aim
Aim is a command-line tool for downloading and uploading files with resume support. It supports various protocols including HTTP, FTP, SFTP, SSH, and S3. Aim features an interactive mode for easy navigation and selection of files, as well as the ability to share folders over HTTP for easy access from other devices. Additionally, it offers customizable progress indicators and output formats, and can be integrated with other commands through piping. Aim can be installed via pre-built binaries or by compiling from source, and is also available as a Docker image for platform-independent usage.

mistral.rs
Mistral.rs is a fast LLM inference platform written in Rust. We support inference on a variety of devices, quantization, and easy-to-use application with an Open-AI API compatible HTTP server and Python bindings.

twitter-automation-ai
Advanced Twitter Automation AI is a modular Python-based framework for automating Twitter at scale. It supports multiple accounts, robust Selenium automation with optional undetected Chrome + stealth, per-account proxies and rotation, structured LLM generation/analysis, community posting, and per-account metrics/logs. The tool allows seamless management and automation of multiple Twitter accounts, content scraping, publishing, LLM integration for generating and analyzing tweet content, engagement automation, configurable automation, browser automation using Selenium, modular design for easy extension, comprehensive logging, community posting, stealth mode for reduced fingerprinting, per-account proxies, LLM structured prompts, and per-account JSON summaries and event logs for observability.

LEANN
LEANN is an innovative vector database that democratizes personal AI, transforming your laptop into a powerful RAG system that can index and search through millions of documents using 97% less storage than traditional solutions without accuracy loss. It achieves this through graph-based selective recomputation and high-degree preserving pruning, computing embeddings on-demand instead of storing them all. LEANN allows semantic search of file system, emails, browser history, chat history, codebase, or external knowledge bases on your laptop with zero cloud costs and complete privacy. It is a drop-in semantic search MCP service fully compatible with Claude Code, enabling intelligent retrieval without changing your workflow.

llm
LLM is a Rust library that allows users to utilize multiple LLM backends (OpenAI, Anthropic, Ollama, DeepSeek, xAI, Phind, Groq, Google) in a single project. It provides a unified API and builder style for creating chat or text completion requests without the need for multiple structures and crates. Key features include multi-backend management, multi-step chains, templates for complex prompts, builder pattern for easy configuration, extensibility, validation, evaluation, parallel evaluation, function calling, REST API support, vision integration, and reasoning capabilities.

MockingBird
MockingBird is a toolbox designed for Mandarin speech synthesis using PyTorch. It supports multiple datasets such as aidatatang_200zh, magicdata, aishell3, and data_aishell. The toolbox can run on Windows, Linux, and M1 MacOS, providing easy and effective speech synthesis with pretrained encoder/vocoder models. It is webserver ready for remote calling. Users can train their own models or use existing ones for the encoder, synthesizer, and vocoder. The toolbox offers a demo video and detailed setup instructions for installation and model training.

pentagi
PentAGI is an innovative tool for automated security testing that leverages cutting-edge artificial intelligence technologies. It is designed for information security professionals, researchers, and enthusiasts who need a powerful and flexible solution for conducting penetration tests. The tool provides secure and isolated operations in a sandboxed Docker environment, fully autonomous AI-powered agent for penetration testing steps, a suite of 20+ professional security tools, smart memory system for storing research results, web intelligence for gathering information, integration with external search systems, team delegation system, comprehensive monitoring and reporting, modern interface, API integration, persistent storage, scalable architecture, self-hosted solution, flexible authentication, and quick deployment through Docker Compose.
For similar tasks

mcp-ts-template
The MCP TypeScript Server Template is a production-grade framework for building powerful and scalable Model Context Protocol servers with TypeScript. It features built-in observability, declarative tooling, robust error handling, and a modular, DI-driven architecture. The template is designed to be AI-agent-friendly, providing detailed rules and guidance for developers to adhere to best practices. It enforces architectural principles like 'Logic Throws, Handler Catches' pattern, full-stack observability, declarative components, and dependency injection for decoupling. The project structure includes directories for configuration, container setup, server resources, services, storage, utilities, tests, and more. Configuration is done via environment variables, and key scripts are available for development, testing, and publishing to the MCP Registry.

XLearning
XLearning is a scheduling platform for big data and artificial intelligence, supporting various machine learning and deep learning frameworks. It runs on Hadoop Yarn and integrates frameworks like TensorFlow, MXNet, Caffe, Theano, PyTorch, Keras, XGBoost. XLearning offers scalability, compatibility, multiple deep learning framework support, unified data management based on HDFS, visualization display, and compatibility with code at native frameworks. It provides functions for data input/output strategies, container management, TensorBoard service, and resource usage metrics display. XLearning requires JDK >= 1.7 and Maven >= 3.3 for compilation, and deployment on CentOS 7.2 with Java >= 1.7 and Hadoop 2.6, 2.7, 2.8.

parllama
PAR LLAMA is a Text UI application for managing and using LLMs, designed with Textual and Rich and PAR AI Core. It runs on major OS's including Windows, Windows WSL, Mac, and Linux. Supports Dark and Light mode, custom themes, and various workflows like Ollama chat, image chat, and OpenAI provider chat. Offers features like custom prompts, themes, environment variables configuration, and remote instance connection. Suitable for managing and using LLMs efficiently.
For similar jobs

llmops-promptflow-template
LLMOps with Prompt flow is a template and guidance for building LLM-infused apps using Prompt flow. It provides centralized code hosting, lifecycle management, variant and hyperparameter experimentation, A/B deployment, many-to-many dataset/flow relationships, multiple deployment targets, comprehensive reporting, BYOF capabilities, configuration-based development, local prompt experimentation and evaluation, endpoint testing, and optional Human-in-loop validation. The tool is customizable to suit various application needs.

azure-search-vector-samples
This repository provides code samples in Python, C#, REST, and JavaScript for vector support in Azure AI Search. It includes demos for various languages showcasing vectorization of data, creating indexes, and querying vector data. Additionally, it offers tools like Azure AI Search Lab for experimenting with AI-enabled search scenarios in Azure and templates for deploying custom chat-with-your-data solutions. The repository also features documentation on vector search, hybrid search, creating and querying vector indexes, and REST API references for Azure AI Search and Azure OpenAI Service.

geti-sdk
The Intel® Geti™ SDK is a python package that enables teams to rapidly develop AI models by easing the complexities of model development and enhancing collaboration between teams. It provides tools to interact with an Intel® Geti™ server via the REST API, allowing for project creation, downloading, uploading, deploying for local inference with OpenVINO, setting project and model configuration, launching and monitoring training jobs, and media upload and prediction. The SDK also includes tutorial-style Jupyter notebooks demonstrating its usage.

booster
Booster is a powerful inference accelerator designed for scaling large language models within production environments or for experimental purposes. It is built with performance and scaling in mind, supporting various CPUs and GPUs, including Nvidia CUDA, Apple Metal, and OpenCL cards. The tool can split large models across multiple GPUs, offering fast inference on machines with beefy GPUs. It supports both regular FP16/FP32 models and quantised versions, along with popular LLM architectures. Additionally, Booster features proprietary Janus Sampling for code generation and non-English languages.

xFasterTransformer
xFasterTransformer is an optimized solution for Large Language Models (LLMs) on the X86 platform, providing high performance and scalability for inference on mainstream LLM models. It offers C++ and Python APIs for easy integration, along with example codes and benchmark scripts. Users can prepare models in a different format, convert them, and use the APIs for tasks like encoding input prompts, generating token ids, and serving inference requests. The tool supports various data types and models, and can run in single or multi-rank modes using MPI. A web demo based on Gradio is available for popular LLM models like ChatGLM and Llama2. Benchmark scripts help evaluate model inference performance quickly, and MLServer enables serving with REST and gRPC interfaces.

amazon-transcribe-live-call-analytics
The Amazon Transcribe Live Call Analytics (LCA) with Agent Assist Sample Solution is designed to help contact centers assess and optimize caller experiences in real time. It leverages Amazon machine learning services like Amazon Transcribe, Amazon Comprehend, and Amazon SageMaker to transcribe and extract insights from contact center audio. The solution provides real-time supervisor and agent assist features, integrates with existing contact centers, and offers a scalable, cost-effective approach to improve customer interactions. The end-to-end architecture includes features like live call transcription, call summarization, AI-powered agent assistance, and real-time analytics. The solution is event-driven, ensuring low latency and seamless processing flow from ingested speech to live webpage updates.

ai-lab-recipes
This repository contains recipes for building and running containerized AI and LLM applications with Podman. It provides model servers that serve machine-learning models via an API, allowing developers to quickly prototype new AI applications locally. The recipes include components like model servers and AI applications for tasks such as chat, summarization, object detection, etc. Images for sample applications and models are available in `quay.io`, and bootable containers for AI training on Linux OS are enabled.

XLearning
XLearning is a scheduling platform for big data and artificial intelligence, supporting various machine learning and deep learning frameworks. It runs on Hadoop Yarn and integrates frameworks like TensorFlow, MXNet, Caffe, Theano, PyTorch, Keras, XGBoost. XLearning offers scalability, compatibility, multiple deep learning framework support, unified data management based on HDFS, visualization display, and compatibility with code at native frameworks. It provides functions for data input/output strategies, container management, TensorBoard service, and resource usage metrics display. XLearning requires JDK >= 1.7 and Maven >= 3.3 for compilation, and deployment on CentOS 7.2 with Java >= 1.7 and Hadoop 2.6, 2.7, 2.8.