nx_open
Network Optix open-source components used to build Powered-by-Nx products, including the Desktop Client for the Network Optix Video Management Platform.
The `nx_open` repository contains open-source components for the Network Optix Meta Platform, used to build products like Nx Witness Video Management System. It includes source code, specifications, and a Desktop Client. The repository is licensed under Mozilla Public License 2.0. Users can build the Desktop Client and customize it using a zip file. The build environment supports Windows, Linux, and macOS platforms with specific prerequisites. The repository provides scripts for building, signing executable files, and running the Desktop Client. Compatibility with VMS Server versions is crucial, and automatic VMS updates are disabled for the open-source Desktop Client.
README:
// Copyright 2018-present Network Optix, Inc. Licensed under MPL 2.0: www.mozilla.org/MPL/2.0/
This repository, `nx_open`, contains Network Optix Meta Platform open-source components - the source code and specifications which are used to build all Powered-by-Nx products, including Nx Witness Video Management System (VMS).

Currently, the main VMS component which can be built from this repository is the Desktop Client.

Other notable components which are parts of the Desktop Client, but can be useful independently, include the Nx Kit library (`artifacts/nx_kit/`) - see its `readme.md` for details.
Most of the source code and other files are licensed under the terms of Mozilla Public License 2.0 (unless specified otherwise in the files), which can be found in the `license_mpl2.md` file in the `licenses/` directory in the root directory of the repository.
ATTENTION: This document provides only brief information about the build process and its prerequisites, specific to the current branch. For the most up-to-date instructions on setting up the build environment, an explanation of the build system internals, and recommendations for using build and development tools, refer to the following document in the `master` branch of this repository: `build.md`.
The "Network Optix Meta Platform open-source components" software incorporates, depends upon, interacts with, or was developed using a number of free and open-source software components. The full list of such components can be found at OPEN SOURCE SOFTWARE DISCLOSURE. Please see the linked component websites for additional licensing, dependency, and use information, as well as the component source code.
Supported target platforms and architectures:
- Windows 10 x64 (Microsoft Visual Studio).
- Linux Ubuntu 18.04, 20.04, 22.04 (GCC or Clang) x64, ARM 32/64 (cross-compiling on Linux x64).
- macOS Monterey 12.6.3 (Xcode with Clang) x64, Apple M1/M2.
Build prerequisites:
- Python 3.8+ - should be available on `PATH` as `python`, and for macOS and Ubuntu also as `python3`.
- Pip - should be available on `PATH` as `pip` and be installed for the Python interpreter used by the build.
- CMake, Ninja, Conan - recommended to be installed via `pip` from `requirements.txt` of the `master` branch; you may see the required versions in this file.
- Linux: Install the build and runtime dependencies via CMake by specifying the `cmake` command-line argument `-DinstallSystemRequirements=ON` at the Generation stage (may ask for a `sudo` password); see the sketch after this list.
    - NOTE: The compiler is downloaded as a Conan artifact during the CMake Generation stage - compilers installed in the Linux system (if any) are not used.
- Windows: Microsoft Visual Studio 2022, Community Edition; select the components:
    - "The Workload" -> "Desktop development with C++"
    - "Individual components" -> "C++ CMake tools for Windows"
- macOS: Xcode Command Line Tools 14.2+; also install the following build dependencies:
    - For Apple M1/M2, install Rosetta 2: `/usr/sbin/softwareupdate --install-rosetta --agree-to-license`
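A minimal sketch of the Linux dependency installation mentioned above; the build scripts pass `-D` arguments through to `cmake`, as the usage examples below show:

```sh
# Generation stage with automatic installation of the system build/runtime
# dependencies; may ask for a sudo password.
./build.sh -DinstallSystemRequirements=ON
```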
In its GUI, the Client uses a collection of texts and graphics called a Customization Package, which defines the branding of the VMS. The Customization Package comes as a zip file. A default one is taken from Conan - the Client will be branded as Nx Meta and will show placeholders for such traits as the company name, website, and End-User License Agreement text. If you want to define these traits, create a "Custom Client" entity on the Nx Meta Developer Portal and download the generated Customization Package zip at https://meta.nxvms.com/developers/custom-clients/. Customization Packages with branding other than Nx Meta may be available there as well.
All the commands necessary to perform the CMake Configuration and Build stages are written in the scripts `build.sh` (for Linux and macOS) and `build.bat` (for Windows), located in the repository root. Please treat these scripts as a quick start aid, study their source, and feel free to use your favorite C++ development workflow instead.

The scripts create/use the build directory as a sibling to the repository root directory, with the `-build` suffix added. Here we assume the repository root is `nx_open/`, so the build directory will be `nx_open-build/`.

ATTENTION: If the generation fails for any reason, remove `CMakeCache.txt` manually before the next attempt at running the build script.
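For example, on Linux or macOS, assuming the directory layout described above:

```sh
# Remove the stale CMake cache after a failed generation, then retry;
# run from the nx_open/ repository root.
rm ../nx_open-build/CMakeCache.txt
./build.sh
```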
Below are the usage examples, where `<build>` is `./build.sh` for Linux and macOS, and `build.bat` for Windows.

- To make a clean Debug build, delete the build directory (if any), and run the command:
  `<build>`
  The built executables will be placed in `nx_open-build/bin/`.
- To make a clean Release build with the distribution package and unit test archive, delete the build directory (if any), and run the command:
  `<build> -DdeveloperBuild=OFF`
  The built distribution packages and unit test archive will be placed in `nx_open-build/distrib/`. To run the unit tests, unpack the unit test archive and run all the executables in it either one-by-one, or in parallel.
- To use the obtained Customization Package rather than the default one coming from Conan (Nx-Meta-branded with placeholders), add the following argument to the `<build>` script:
  `-DcustomizationPackageFile=<customization.zip>`
  NOTE: The value of the `"id":` field of `description.json` inside the specified zip file must match the one in the Server in order to be able to connect to it; see the session sketch after this list for a way to inspect it.
- To perform an incremental build after some changes, run the `<build>` script without arguments.
  - Note that there is no need to explicitly call the Generation stage after adding/deleting source files or altering the build system files, because `ninja_tool.py` properly handles such cases - the Generation stage will be called automatically when needed.
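A hypothetical Linux session putting these examples together; the paths and the zip file name are placeholders, and `unzip` and `jq` are assumed to be installed for inspecting the zip:

```sh
cd nx_open

# Clean Release build with the distribution packages and unit test archive.
rm -rf ../nx_open-build
./build.sh -DdeveloperBuild=OFF

# Inspect the "id" field of description.json inside a downloaded
# Customization Package; it must match the Server's customization.
unzip -p "$HOME/Downloads/customization.zip" description.json | jq '.id'

# Build using that Customization Package instead of the default one.
./build.sh -DcustomizationPackageFile="$HOME/Downloads/customization.zip"
```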
For cross-compiling on Linux or macOS, set the CMake variable `targetDevice`: add the argument `-DtargetDevice=<value>`, where `<value>` is one of the following:
- `linux_x64`
- `linux_arm64`
- `linux_arm32`
- `macos_x64`
- `macos_arm64`
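For example, a hypothetical clean cross-build for a 64-bit ARM Linux device:

```sh
# Cross-compile for linux_arm64; the toolchain comes from Conan.
rm -rf ../nx_open-build
./build.sh -DtargetDevice=linux_arm64
```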
Building and debugging in the Visual Studio IDE is also supported: run the Generation stage from the command line (it will create `CMakeSettings.json` and `launch.vs.json`), then open the project.
It is recommended to set the environment variable `NX_CONAN_DOWNLOAD_CACHE` to the full path of a directory that will be used to avoid re-downloading all the artifacts from the internet for every clean build; for example, create the directory `conan_cache/` next to the repository root and the build directories.
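A possible setup, assuming the repository is checked out at `~/develop/nx_open` (a hypothetical location):

```sh
# Create a shared artifact download cache next to the repository and the
# build directories, and point the build to it.
mkdir -p ~/develop/conan_cache
export NX_CONAN_DOWNLOAD_CACHE=~/develop/conan_cache
./build.sh
```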
- Windows:
  There is an option of signing the built executables (including the distribution file itself) with a software publisher certificate. To perform it, a valid certificate file in the PKCS#12 format is needed.
  Signing is performed by the `signtool.py` script, which is a wrapper around the native Windows `signtool.exe`. To enable signing, the following preparation steps must be done:
  - Save the publisher certificate file somewhere in your file system.
  - Create (preferably outside of the source tree) the configuration file. This file must contain the following fields:
    - `file`: the path to the publisher certificate file. It must be either an absolute path or a path relative to the directory where the configuration file resides.
    - `password`: the password protecting the publisher certificate file.
    - `timestamp_servers` (optional): a list of URLs of trusted timestamping servers. If this field is present in the configuration file, the signed file will be time-stamped using one of the listed servers. If this field is absent, the signed file will not be time-stamped.

    An example of a configuration file can be found in `build_utils/signtool/config/config.yaml`; a sketch is also shown after this section.
  - Add the CMake argument `-DsigntoolConfig=<configuration_file_path>` to the Generation stage. If this argument is missing, no signing will be performed.

  Also, you can sign any file manually by calling `signtool.py` directly:
  `python build_utils/signtool/signtool.py --config <configuration_file> --file <unsigned_file> --output <signed_file>`
  To test the signing procedure, you can use a self-signed certificate. To generate such a certificate, you can use the file `build_utils/signtool/genkey/genkey_signtool.bat`. When run, it creates the `certificate.p12` file and a couple of auxiliary `*.pem` files in the same directory where it is run. We recommend moving these files outside of the source directory to maintain the out-of-source build concept.
- Linux:
  Signing is not required; no tools or instructions are provided.
- macOS:
  A signing tool suitable for standalone use is being developed and will likely be provided in the future. For now, you can use the regular signing procedure that you employ for your other macOS development.
The VMS Desktop Client can be run directly from the build directory, without installing a distribution package.

After a successful build, the Desktop Client executable is located in `nx_open-build/bin/`; its name may depend on the Customization Package.

For Linux and macOS, just run the Desktop Client executable.

For Windows, before running the Desktop Client executable or any other built executable, run the following script (generated by Conan during the build) in the console; it properly sets PATH and some other environment variables:
`nx_open-build/activate_run.bat`
To restore the original variable values, including PATH, you may run the following script:
`nx_open-build/deactivate_run.bat`
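A hypothetical Windows console session; the Client executable name below is a placeholder, since the actual name depends on the Customization Package:

```bat
rem Set PATH and the other environment variables needed by the built binaries.
cd nx_open-build
call activate_run.bat

rem Run the Desktop Client (the executable name is a placeholder).
bin\desktop_client.exe

rem Restore the original environment variables, including PATH.
call deactivate_run.bat
```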
The Desktop Client built from the open-source repository can only connect to a compatible VMS Server. Because the VMS Server sources are not publicly available, such a Server can only be obtained from a public VMS release - either an official VMS release or one of the regular preview releases called Nx Meta VMS.

For any given public VMS release, compatibility is guaranteed only for the Client built from the same commit as the Server. The particular commit can be identified in the repository by its git tag. The public release tags look like `vms/4.2/12345_release_all` or `vms/5.0/34567_beta_meta_R2`.

Clients built from further commits in the same branch may retain compatibility with the publicly released Server for a while, but at some point may lose it because of changes introduced synchronously into the Client and Server parts of the source code. Thus, it is recommended to base Client modification branches on the tagged commits corresponding to the public releases, including Nx Meta VMS releases, and to rebase them as soon as the next public release from the branch is available.
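For example, to list the public release tags and base a modification branch on one of them (the branch name is hypothetical; the tag is taken from the examples above):

```sh
# List the release tags, then branch off a tagged commit.
git tag --list 'vms/*'
git checkout -b my_client_changes vms/5.0/34567_beta_meta_R2
```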
ATTENTION: Besides having compatible code, to be able to work together, the Client and the Server have to use Customization Packages with the same `<customization_id>` value.
During the Generation stage, the build system tries to determine the compatible Server version by checking the git tags. It searches for the first commit common to the current branch and one of the "protected" branches (corresponding to stable VMS versions), and checks whether it has a "release" tag of the form `vms/#.#/#####_...`. If no such tag is found, the build number is set to 0 and a warning is produced; otherwise, the build number is extracted from the tag. To bypass this algorithm, pass `-DbuildNumber=<custom_build_number>` to cmake; to get back to it, make a clean build or pass `-DbuildNumber=AUTO`.
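For example, to pin the build number manually (the value is a placeholder) and later return to the tag-based detection:

```sh
# Override the compatibility build number for this build...
./build.sh -DbuildNumber=12345
# ...and switch back to the automatic detection afterwards.
./build.sh -DbuildNumber=AUTO
```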
The VMS product includes comprehensive auto-update support, but this feature is turned off for the open-source Desktop Client, because it would simply overwrite a custom-built Desktop Client with the new version of the Desktop Client built by Nx. Note that the VMS admin still can force such an automatic update, with the mentioned consequences.

Technically, it is possible to specify a custom Update server in the VMS Server settings, deploy a custom Update server, and prepare the update packages and meta-information according to the VMS standard, so that automatic updates will work with a custom VMS built from open source. In the future, instructions and/or tools for this will likely be provided.