treds
Sorted Data Structure Server - Treds is a Data Structure Server which returns data in sorted order and is the fastest prefix search server. It also persists data on disk.
Stars: 63
Treds is a Radix Trie-based data structure server that stores keys in sorted order, ensuring fast and efficient retrieval. It offers various commands for a key/value store, sorted maps store, list store, set store, hash store, and more. Treds provides unique features like optimized querying for keys with common prefixes, sorted key/value pairs, and new commands like DELPREFIX, LNGPREFIX, and PPUBLISH. It is designed for high performance with a single-threaded architecture and an event loop, utilizing modified Radix trees and Doubly Linked Lists for quick lookup. Treds also supports PubSub functionality and vector store operations for vector search using the HNSW algorithm.
README:
Treds is a Radix Trie based data structure server that stores keys in sorted order, ensuring fast and efficient retrieval. A scan operation returns keys in their sorted sequence.
- Keys at the root level having a common prefix can be queried optimally
- `SCANKEYS`/`SCANKVS`/`KEYS`/`KVS` commands return results in sorted order
- Unlike Redis `KEYS`, Treds `KEYS` takes a cursor, matches any valid regex expression, and returns `count` items when matching data exists
- Unlike Redis `SCAN`, Treds `SCAN` always returns `count` items when matching data exists. Treds `SCAN` works on a prefix only
- Unlike Redis `ZRANGEBYLEX`, Treds `ZRANGELEX` always returns data irrespective of score; data across different scores is returned
- Unlike Redis `PSUBSCRIBE`, Treds `PSUBSCRIBE` is designed to work with channels having a common prefix
- It has Sorted Maps instead of Sorted Sets, so a sorted key/value pair can be created with an associated score
- New command - `DELPREFIX` - Deletes all keys having a common prefix and returns the number of keys deleted
- New command - `LNGPREFIX` - Returns the key/value pair whose key is the longest prefix of the given string
- New command - `PPUBLISH` - Publishes a message to all channels whose names have the given channel as their prefix
- Currently, it has only a Key/Value store, Sorted Maps store, List store, Set store, and Hash store, and only supports strings/numbers as values
It is single-threaded and has an event loop. It is implemented using modified Radix trees in which leaf nodes are connected by a Doubly Linked List, to facilitate quick lookup of keys/values in sorted order. The Doubly Linked List of leaf nodes is updated optimally on create, update, and delete of keys. This structure is similar to a Prefix Hash Tree, but for a Radix Tree and without converting keys to binary. The Tree Maps used to store score maps are also connected internally using a Doubly Linked List with similar logic. For more details, check out the medium article.
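Below is a minimal Go sketch of this idea, assuming a simplified store: it keeps only the sorted doubly linked list of leaves, while a real radix trie would also keep internal nodes so the first matching leaf is found in O(key length) rather than by linear scan. This is an illustration, not the Treds implementation.

```go
package main

import "fmt"

// leaf is a key/value pair; leaves are threaded through a doubly
// linked list in sorted key order, mirroring the idea described above.
type leaf struct {
	key, val   string
	prev, next *leaf
}

// store keeps leaves in a sorted doubly linked list. A real radix trie
// would locate the insertion point via the tree; this sketch scans
// linearly and only illustrates the sorted-traversal property.
type store struct {
	head *leaf
}

func (s *store) insert(key, val string) {
	n := &leaf{key: key, val: val}
	if s.head == nil || key < s.head.key {
		n.next = s.head
		if s.head != nil {
			s.head.prev = n
		}
		s.head = n
		return
	}
	cur := s.head
	for cur.next != nil && cur.next.key < key {
		cur = cur.next
	}
	n.next = cur.next
	n.prev = cur
	if cur.next != nil {
		cur.next.prev = n
	}
	cur.next = n
}

// scanPrefix collects keys sharing the prefix and stops as soon as the
// prefix range ends -- cheap because the leaves are already sorted.
func (s *store) scanPrefix(prefix string) []string {
	var out []string
	for cur := s.head; cur != nil; cur = cur.next {
		if len(cur.key) >= len(prefix) && cur.key[:len(prefix)] == prefix {
			out = append(out, cur.key)
		} else if len(out) > 0 {
			break // past the prefix range; list is sorted
		}
	}
	return out
}

func main() {
	s := &store{}
	for _, k := range []string{"user:3", "user:1", "account:9", "user:2"} {
		s.insert(k, "v")
	}
	fmt.Println(s.scanPrefix("user:")) // [user:1 user:2 user:3]
}
```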
For the benchmarks, both Treds and Redis are filled with 10 million keys in the Key/Value Store and 10 million keys in a Sorted Map/Set, respectively.
Each key is of the format `user:%d`, so every key has the prefix `user:`. The commands are run in a Golang program, redirecting the output to a file: `go run main.go > out`.
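The bench programs themselves live in the repos linked below; as an illustration of the shape of such a driver, a Go client using go-redis might look like the following (a hypothetical sketch; Treds speaks RESP, so any Redis client should work):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	// Treds speaks RESP, so a standard Redis client can talk to it.
	client := redis.NewClient(&redis.Options{Addr: "localhost:7997"})

	start := time.Now()
	// Very large count so that all matching keys come back in one call.
	res, err := client.Do(ctx, "scankeys", "0", "user:", "100000000000").Result()
	if err != nil {
		panic(err)
	}
	if items, ok := res.([]interface{}); ok {
		fmt.Printf("scankeys returned %d items in %v\n", len(items), time.Since(start))
	}
}
```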
For Redis setup see - Redis Prefix Bench Repo.
For Etcd setup see - Etcd Prefix Bench Repo.
Treds Command - `scankeys 0 prefix 100000000000`
Redis Command - `scan 0 match prefix count 100000000000`
This graph shows the performance comparison between Treds - ScanKeys and Redis - Scan:
Treds Command - `scankvs 0 prefix 1000`
Redis Command - `FT.SEARCH idx:user prefix SORTBY name LIMIT 0 1000`
The prefix in the Redis command can be replaced with "User*", "User1*", "User10*", etc.
This graph shows the performance comparison between Treds - ScanKVS and Redis FT.Search:
Treds Command - `zrangescorekeys key 0 max 0 100000000000 false`
Redis Command - `zrangebyscore key 0 max`
This graph shows the performance comparison between Treds - ZRangeScoreKeys and Redis - ZRangeByScore:
Treds Command - `scankeys 0 prefix 100000000000`
Etcd Command - `etcdctl get prefix --prefix --keys-only`
This graph shows the performance comparison between Treds - ScanKeys and Etcd get --prefix command:
- `PING` - Replies with a `PONG`
- `SET key value` - Sets a key/value pair
- `GET key` - Gets the value for a key
- `DEL key` - Deletes a key
- `MSET key1 value1 [key2 value2 key3 value3 ....]` - Sets values for multiple keys
- `MGET key1 [key2 key3 ....]` - Gets values for multiple keys
- `DELPREFIX prefix` - Deletes all keys having a common prefix. Returns the number of keys deleted
- `LNGPREFIX string` - Returns the key/value pair whose key is the longest prefix of the given string
- `DBSIZE` - Gets the number of keys in the db
- `SCANKEYS cursor prefix count` - Returns up to `count` keys matching `prefix` in lex order, starting from the cursor index; operates on the Key/Value Store only. The last element is the next cursor
- `SCANKVS cursor prefix count` - Returns up to `count` key/value pairs whose keys match `prefix` in lex order, starting from the cursor index; operates on the Key/Value Store only. The last element is the next cursor
- `KEYS cursor regex count` - Returns up to `count` keys matching a regex in lex order, starting at the cursor. `count` is optional. The last element is the next cursor
- `KVS cursor regex count` - Returns up to `count` key/value pairs whose keys match a regex in lex order, starting at the cursor. `count` is optional. The last element is the next cursor
- `EXPIRE key seconds` - Expires the key after the given number of seconds
- `TTL key` - Returns the time in seconds remaining before the key expires; -1 if the key has no expiry, -2 if the key is not present
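For example, an illustrative Key/Value session (hypothetical keys and values):

SET user:1 alice
SET user:12 bob
SCANKEYS 0 user: 100
LNGPREFIX user:123
DELPREFIX user:

Here `LNGPREFIX user:123` returns the `user:12` pair, since `user:12` is the longest stored key that is a prefix of `user:123`.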
- `KEYSZ cursor regex count` - Returns up to `count` keys in the Sorted Maps Store matching a regex in lex order, starting at the cursor. `count` is optional. The last element is the next cursor
- `ZADD key score member_key member_value [score member_key member_value ....]` - Adds `member_key` with `member_value` and the given score to the sorted map at `key`
- `ZREM key member [member ...]` - Removes members from the sorted map at `key`
- `ZCARD key` - Returns the count of key/value pairs in the sorted map at `key`
- `ZSCORE key member` - Returns the score of a member in the sorted map at `key`
- `ZRANGELEXKEYS key offset count withscore min max` - Returns up to `count` keys that are >= min and <= max, starting from `offset`, in lex order. `withscore` can be true or false
- `ZRANGELEXKVS key offset count withscore min max` - Returns up to `count` key/value pairs whose keys are >= min and <= max, starting from `offset`, in lex order. `withscore` can be true or false
- `ZRANGESCOREKEYS key min max offset count withscore` - Returns up to `count` keys with scores between min and max, in sorted order of score. `withscore` can be true or false
- `ZRANGESCOREKVS key min max offset count withscore` - Returns up to `count` key/value pairs with scores between min and max, in sorted order of score. `withscore` can be true or false
- `ZREVRANGELEXKEYS key offset count withscore min max` - Returns up to `count` keys that are >= min and <= max, starting from `offset`, in reverse lex order. `withscore` can be true or false
- `ZREVRANGELEXKVS key offset count withscore min max` - Returns up to `count` key/value pairs whose keys are >= min and <= max, starting from `offset`, in reverse lex order. `withscore` can be true or false
- `ZREVRANGESCOREKEYS key min max offset count withscore` - Returns up to `count` keys with scores between min and max, in reverse sorted order of score. `withscore` can be true or false
- `ZREVRANGESCOREKVS key min max offset count withscore` - Returns up to `count` key/value pairs with scores between min and max, in reverse sorted order of score. `withscore` can be true or false
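For example, an illustrative sorted-map session (hypothetical key, members, and scores):

ZADD leaderboard 100 alice a@example.com 200 bob b@example.com
ZSCORE leaderboard alice
ZRANGESCOREKEYS leaderboard 0 150 0 10 true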
- `KEYSL cursor regex count` - Returns up to `count` keys in the List Store matching a regex in lex order, starting at the cursor. `count` is optional. The last element is the next cursor
- `LPUSH key element [element ...]` - Adds elements to the left of the list at `key`
- `RPUSH key element [element ...]` - Adds elements to the right of the list at `key`
- `LPOP key count` - Removes `count` elements from the left of the list at `key` and returns the popped elements
- `RPOP key count` - Removes `count` elements from the right of the list at `key` and returns the popped elements
- `LREM key index` - Removes the element at `index` of the list at `key`
- `LSET key index element` - Sets an element at `index` of the list at `key`
- `LRANGE key start stop` - Returns the elements from `start` index to `stop` index of the list at `key`
- `LLEN key` - Returns the length of the list at `key`
- `LINDEX key index` - Returns the element at `index` of the list at `key`
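For example, an illustrative list session (hypothetical key and elements):

RPUSH queue job1 job2 job3
LRANGE queue 0 2
LPOP queue 1
LLEN queue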
- `KEYSS cursor regex count` - Returns up to `count` keys in the Set Store matching a regex in lex order, starting at the cursor. `count` is optional. The last element is the next cursor
- `SADD key member [member ...]` - Adds the members to the set at `key`
- `SREM key member [member ...]` - Removes the members from the set at `key`
- `SMEMBERS key` - Returns all members of the set at `key`
- `SISMEMBER key member` - Returns 1 if the member is present in the set at `key`, 0 otherwise
- `SCARD key` - Returns the size of the set at `key`
- `SUNION key [key ...]` - Returns the union of the sets at the given keys
- `SINTER key [key ...]` - Returns the intersection of the sets at the given keys
- `SDIFF key [key ...]` - Returns the difference between the first set and all the successive sets
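For example, an illustrative set session (hypothetical keys and members):

SADD admins alice bob
SADD devs bob carol
SINTER admins devs
SDIFF admins devs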
- `KEYSH cursor regex count` - Returns up to `count` keys in the Hash Store matching a regex in lex order, starting at the cursor. `count` is optional. The last element is the next cursor
- `HSET key field value [field value ...]` - Sets field/value pairs in the hash at `key`
- `HGET key field` - Returns the value at `field` inside the hash at `key`
- `HGETALL key` - Returns all field/value pairs inside the hash at `key`
- `HLEN key` - Returns the size of the hash at `key`
- `HDEL key field [field ...]` - Deletes the given fields inside the hash at `key`
- `HEXISTS key field` - Returns true or false depending on whether the field is present in the hash at `key`
- `HKEYS key` - Returns all fields present in the hash at `key`
- `HVALS key` - Returns all values present in the hash at `key`
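For example, an illustrative hash session (hypothetical key and fields):

HSET user:1 name alice age 30
HGET user:1 name
HGETALL user:1
HDEL user:1 age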
- `SNAPSHOT` - Persists the Key/Value Store data to disk immediately
- `RESTORE folder_path` - Restores a persisted snapshot from disk immediately
- `FLUSHALL` - Deletes all keys
- `MULTI` - Starts a transaction
- `EXEC` - Executes all commands in the transaction and closes the transaction
- `DISCARD` - Discards all commands in the transaction and closes the transaction
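For example, an illustrative transaction (hypothetical keys) in which the queued commands are applied together on `EXEC`:

MULTI
SET user:1 alice
SET user:2 bob
EXEC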
- `PUBLISH channel message` - Publishes a message to a channel
- `SUBSCRIBE channel [channel ...]` - Subscribes to channels
- `UNSUBSCRIBE channel [channel ...]` - Unsubscribes from channels
- `PSUBSCRIBE channel [channel ...]` - The subscription receives all messages published to channels whose names are prefixes of the given channels
- `PPUBLISH channel message` - Publishes the message to all channels whose names have the given channel as their prefix
- `PUBSUBCHANNELS prefix` - Returns all active channels (those having one or more subscribers) that share a common prefix with the given prefix. The prefix is optional
While `PUBLISH` and `SUBSCRIBE` are similar to Redis, `PSUBSCRIBE` and `PPUBLISH` are designed to work with channels having a common prefix. If a client subscribes to a channel named `NEWS-IND-KA-BLR` using `PSUBSCRIBE`, then the client will receive messages published via `PPUBLISH` to the channels `NEWS-IND-KA-BLR`, `NEWS-IND-KA`, `NEWS-IND`, and `NEWS`. In simple words, `PPUBLISH` publishes a message to all channels that have names with the given channel as their prefix, and `PSUBSCRIBE` receives all messages published to channels whose names are prefixes of the given channels.
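For example (illustrative, using two separate client connections and the channel names from above):

Client 1: PSUBSCRIBE NEWS-IND-KA-BLR
Client 2: PPUBLISH NEWS-IND hello

Client 1 receives "hello", because NEWS-IND is a prefix of NEWS-IND-KA-BLR.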
- `DCREATE collectionname schemajson indexjson` - Creates a collection with a schema and indexes
- `DDROP collectionname` - Drops a collection
- `DINSERT collectionname json` - Inserts a document into a collection
- `DQUERY collectionname json` - Queries a collection
- `DEXPLAIN collectionname json` - Explains a query. Returns the query plan, i.e. the index with which the query is executed

For example:
DCREATE users "{\"name\": {\"type\": \"string\"}, \"age\": {\"type\": \"float\", \"min\": 18}, \"salary\": {\"type\": \"float\"}}" "[{\"fields\": [\"age\"], \"type\": \"normal\"}, {\"fields\": [\"salary\"], \"type\": \"normal\"}]"
DCREATE users "{\"name\": {\"type\": \"string\"}, \"age\": {\"type\": \"float\", \"min\": 18}, \"salary\": {\"type\": \"float\"}}" "[{\"fields\": [\"age\", \"salary\"], \"type\": \"normal\"}]"
DINSERT users "{\"name\": \"Spiderman\", \"age\": 13, \"salary\": 500}"
DINSERT users "{\"name\": \"Heman\", \"age\": 14, \"salary\": 600}"
DINSERT users "{\"name\": \"Superman\", \"age\": 15, \"salary\": 300}"
DINSERT users "{\"name\": \"Batman\", \"age\": 18, \"salary\": 900}"
DINSERT users "{\"name\": \"Antman\", \"age\": 25, \"salary\": 800}"
DEXPLAIN users "{\"filters\":[{\"field\":\"age\",\"operator\":\"$gt\",\"value\":14},{\"field\":\"salary\",\"operator\":\"$lt\",\"value\":900}]}"
DQUERY users "{\"filters\":[{\"field\":\"age\",\"operator\":\"$gt\",\"value\":14},{\"field\":\"salary\",\"operator\":\"$lt\",\"value\":900}]}"
- `VCREATE vectorname maxNeighbor levelFactor efSearch` - Creates a vector store with `maxNeighbor`, `levelFactor`, and `efSearch`
- `VDROP vectorname` - Drops a vector store
- `VINSERT vectorname float [float...]` - Inserts a vector into a vector store
- `VSEARCH vectorname float [float...] k` - Searches for the k nearest neighbors of a vector in a vector store using the HNSW algorithm
- `VDELETE vectorname string` - Deletes a vector from a vector store; the input is the vector id returned by `VINSERT` or `VSEARCH`

For example:
VCREATE vec 6 0.5 100
VINSERT vec 1.0 2.0
VINSERT vec 2.0 3.0
VINSERT vec 3.0 4.0
VSEARCH vec 1.5 2.5 2 // Returns 2 nearest neighbors
To run the server, run the following command in the repository root:
export TREDS_PORT=7997
go run main.go -port 7997
Using docker:
docker run -p 7997:7997 absolutelightning/treds
The default port of Treds is 7997.
If the port is set both in the env variable and in the flag, the flag takes precedence.
To build the binary for the treds server, run the following command in the repo root. A binary named `treds` will be generated in the repo root:
make build
GOOS=linux GOARCH=arm64 make build
./treds
Treds encodes and decodes messages in RESP, so redis-cli can be used to interact with a Treds server.
redis-cli -p 7997
It is advised to run a Treds cluster in production. To bootstrap a 3-node cluster, say we have 3 servers:
Server 1, Server 2, and Server 3.
On Server 1 run
./treds -bind 0.0.0.0 -advertise ip-server-1 -servers 'uuid-server-2:ip-server-2:8300,uuid-server-3:ip-server-3:8300' -id uuid-server-1
On Server 2 run
./treds -bind 0.0.0.0 -advertise ip-server-2 -servers 'uuid-server-1:ip-server-1:8300,uuid-server-3:ip-server-3:8300' -id uuid-server-2
On Server 3 run
./treds -bind 0.0.0.0 -advertise ip-server-3 -servers 'uuid-server-1:ip-server-1:8300,uuid-server-2:ip-server-2:8300' -id uuid-server-3
- Currently only the KV Store gets persisted in a snapshot; add support for the other stores.
- Authentication.
- Tests
- More Commands ...