erlang-red
Visual Erlang Prompting for an AI world - inspired by Node-RED
Stars: 292
Erlang-Red is an experimental Erlang backend designed to replace Node-RED's existing NodeJS backend, aiming for 100% compatibility with existing Node-RED flow code. It brings the advantages of low-code visual flow-based programming to Erlang, a language designed for message passing and concurrency. The tool allows for creating data flows that describe concurrent processing with guaranteed concurrency and performance. Erlang-Red provides a visual flow editor for creating and testing flows, supporting various Node-RED core nodes and Erlang-specific nodes. The development process is flow-driven, with test flows ensuring correct node functionality. The tool can be deployed locally using Docker or on platforms like Fly.io and Heroku. Contributions in the form of Erlang code, Node-RED test flows, and Elixir code are welcome, with a focus on replicating Node-RED functionality in alternative programming languages.
README:
Experimental Erlang backend to replace Node-RED's existing NodeJS backend, aiming for 100% compatibility with existing Node-RED flow code.
Bring the advantages of low-code visual flow-based programming to a programming language that is designed for message passing and concurrency from the ground up - hence Erlang. More details are described in the corresponding blog post.
Node-RED is great for creating data flows that actually describe concurrent processing; it is just a shame that NodeJS is single threaded. So why not use something that is multi-process from the ground up? Concurrency is guaranteed and included.
Also, Erlang isn't the most approachable of programming languages - unless one has fallen into a cauldron of Prolog, spiced with Lisp.
So wouldn't it be great to have the simplicity of low-code visual flow-based programming and the performance (and concurrency) of Erlang?
Thanks to @mwmiller, Erlang-Red can be tested at ered.fly.dev or locally using docker:
docker run --interactive --tty --publish 8080:8080 gorenje/erlang-red:0.2.8
Starts Erlang-Red listening on localhost:8080/erlang-red and drops into an Eshell console for BEAM introspection.
For more details on the project, check out my interview at the TADS Blog - I go into why Erlang-Red exists, how it differs from Node-RED, and what influence Flow Based Programming has on both.
Erlang-Red by Example videos:
- Configuring a visual genserver behaviour in Erlang-Red - Flow
- Using the supervisor node in Erlang-Red - Flow
- Binary node and interpreting binary data in Erlang-Red - Flow and Flow
Sample MQTT Broker with explanation.
Many thanks to @joaohf, there is an Erlang-Red recipe for the Yocto project.
Also, I did a quick experiment with a Raspberry Pi 4 to get the I2C nodes working. That wasn't "true" embedding since Erlang-Red was running in a docker container on a Raspberry Pi running a Debian distribution!
Breadboards are prototyping devices found in electronics. Erlang-Red can be best thought of as a programming breadboard.
What are the tools for software prototyping, besides AI and VSCode? Software developers create prototypes, yet they rarely prototype software.
A telnet session flow describes how breadboard programming can be done using Erlang-Red. That flow prototypes a possible software solution, starting with a simple concurrent approach and iterating until a first complete approach is found. All solutions are testable and usable - instantly - and each solution builds on the previous ones: simply copy and paste the flows. That's prototyping.
Implementation of the MQTT specs to create an MQTT broker in Erlang-Red. The broker is created as a flow and, at the same time, a client is created using the Erlang-Red MQTT nodes so that the broker implementation can be tested. Again a breadboard: think of the MQTT nodes as an oscilloscope testing the voltage!
My development process is best described as flow driven development based around a set of test flows to ensure that node functionality is implemented correctly - meaning that it matches the existing Node-RED functionality.
Test flows are mirrored in a separate repository for better maintainability and also integration with existing Node-RED installations.
The Erlang-Red architecture is best described through various use cases:
- Deploying flows to Erlang-Red. Explains the start up process and how Erlang processes are started for nodes.
- Workings of a supervisor node supervising a function node.
- The challenges of the function node, which must support timeouts, sub-processes and being supervised by a supervisor.
- Inner workings of link nodes and how to deal with dynamic link calls.
This is an incomplete list of nodes that partially or completely work:
| Node | Comment | Example Flow |
|---|---|---|
| batch | Mark messages as belonging to a batch and buffer messages until batches are completed. | Flow |
| binary | parse and match binary data using packet definitions. | Flow |
| catch | catches exceptions of selected nodes and of entire flows, but not groups | Flow |
| change | supports many operators but not all. JSONata in basic form is also supported. | Flow |
| complete | is available and can be used on certain nodes, not all | Flow |
| csv | initial RFC4180 decoder working, supports only comma separator | Flow |
| debug | only debugs the entire message; individual msg properties aren't supported. msg count as status is supported. | Flow |
| delay | supports static delay, not dynamic delay set via msg.delay | Flow |
| exec | executing and killing commands is supported but only for commands in spawn mode and set on the node. Appending arguments to commands isn't supported. Timeouts are supported. Kill messages are also supported. | Flow |
| file | write and delete files anywhere on disk | TBD |
| file in | working for files located in /priv | Flow |
| filter | filter messages based on value changes | Flow |
| function | working for any Erlang code. Stop and start are also respected. Timeouts and more than one output port aren't supported. | Flow |
| http in | working for GET and POST, not for PUT, DELETE, etc. | Flow |
| http request | basic support for making requests; anything complex probably won't work | Flow |
| http response | working | Flow |
| i2c out | Very initial and very basic I2C out node | Flow |
| inject | working for most types except for flow, global ... | Flow |
| join | manual arrays of count X are working; parts isn't supported | Flow |
| json | working | Flow |
| junction | working | Flow |
| link call | working - dynamic & static calls and timeout is respected | Flow |
| link in | working | Flow |
| link out | working | Flow |
| markdown | working and supports whatever earmark supports. | Flow |
| mqtt in | should be working | Flow |
| mqtt out | should be working | Flow |
| noop | doing nothing is very much supported | Flow |
| range | range node is used to map between two different value ranges. | Flow |
| sort | basic sort function implemented | Flow |
| split | splitting arrays into individual messages is supported; strings, buffers and objects aren't. | Flow |
| status | working | Flow |
| switch | most operators work along with basic JSONata expressions | Flow |
| tcp in | Tcp in node supports starting a TCP/IP server listening on a specific port. | Flow |
| tcp out | Tcp out node that currently only supports the reply-to mode for responding to existing tcp in connections. | Flow |
| tcp request | Tcp request node for connecting and communicating with Tcp listeners. | Flow |
| template | mustache templating is working but parsing into JSON or YAML isn't supported | Flow |
| trigger | the default settings should work | Flow |
These nodes represent specific Erlang features as nodes and as such, could be implemented in NodeJS to provide Node-RED with the same functionality.
| Node | Comment | Example Flow |
|---|---|---|
| event handler | Erlang-Red node for the Erlang gen_event behaviour. Supports both dynamic and static configuration of the event handler. | Flow |
| module | Node for defining Erlang modules that can be used with the function, event handler and statemachine nodes. | Flow |
| supervisor | Erlang-only node that implements the supervisor behaviour. Supports supervising supervisors and ordering of processes (i.e., nodes) to ensure correct restart and shutdown sequences. | Flow |
| statemachine | Implements the gen_statem behaviour. Requires a module node to define the actions of the statemachine. | Flow |
| event handler | In conjunction with the module node, this node implements the gen_event behaviour. | Flow |
| generic server | Implements the gen_server behaviour. Requires a module node to define the actions of the server. | Flow |
These nodes can be installed using the corresponding Node-RED node package. In Node-RED these nodes are placebos, doing nothing.
Nodes for ensuring truth in unit test flows.
| Node | Comment | Example Flow |
|---|---|---|
| assert failure | Sending this node a message will cause a test failure. This node ensures certain pathways of a flow aren't reached by messages. | Flow |
| assert success | If this node isn't reached during a test run, then that test will fail. This node represents pathways that must be traversed. | Flow |
| assert debug | This node can be used to ensure that another node produces content for the debug panel. | Flow |
| assert status | Ensure that a node is assigned a specific status value. | Flow |
| assert values | Check specific values on the message object and ensure these are correct. | Flow |
These nodes can be installed using the corresponding Node-RED node package.
- Contexts are not supported, so there is no setting things on `flow`, `node` or `global`.
- JSONata has been partially implemented by the Erlang JSONata Parser.
Elixir helpers can be added to the erlang-red-elixir-helpers repository.
There is nothing stopping anyone from creating a complete node in Elixir, provided there is an Erlang "node-wrapper", i.e., a bit of Erlang code in the src/nodes directory that references the Elixir node.
The initial example is the markdown node: an Erlang node that references Elixir code. I also wrote an Elixir wrapper function, although I could just as easily have referenced Earmark directly from the Erlang code. That was a stylistic choice.
I intend to use Elixir for importing Elixir libraries into the project rather than for coding nodes in Elixir. I simply prefer Erlang syntax. But each to their own :)
$ rebar3 get-deps && rebar3 compile
$ rebar3 eunit
$ rebar3 shell --apps erlang_red
Open the Node-RED visual flow editor in a browser:
open -a Firefox http://localhost:9090/node-red
I use docker to develop this, so for me the following works:
git clone [email protected]:gorenje/erlang-red.git
cd erlang-red
docker run -it -v $(pwd):/code -v $(pwd)/data:/data -p 8080:8080 -w /code --rm erlang bash
## inside docker shell:
rebar3 shell --apps erlang_red
Then from the docker host machine, open a browser:
open -a Firefox http://localhost:8080/node-red
That should display the Node-RED visual editor.
A release can be bundled together:
$ rebar3 as prod release -n erlang_red
All static frontend code (for the Node-RED flow editor) and the test flow files in priv/testflows are bundled into the release.
The Cowboy server will start on port 8080 unless the PORT env variable is set.
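The port selection described above follows the usual environment-variable-with-default pattern; a minimal shell sketch of that behaviour (the variable name PORT comes from the text, the launch line itself is illustrative):

```shell
# Use $PORT if the environment sets it, otherwise fall back to the default of 8080.
PORT="${PORT:-8080}"
echo "Cowboy will listen on port ${PORT}"
```

Starting the release with, say, `PORT=3000` exported would therefore make Cowboy listen on 3000 instead.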
A sample Dockerfile Dockerfile.fly is provided to allow for easy launching of an instance as a fly application.
The provided shell script (fly_er.sh) sets some common expected parameters for the launch.
Advanced users may wish to examine the fly launch line therein and adjust for their requirements.
Using the container stack at heroku, deployment becomes a git push heroku after the usual heroku setup:
- heroku login
- heroku git:remote -a <app name>
- heroku stack:set container
- git push heroku
However, the Dockerfile.heroku does not start the flow editor; the image is designed to run a set of flows, in this case (at the time of writing) a simple website with a single page.
Basically this flow is the red-erik.org site.
The image does this by setting the following ENV variables:
- COMPUTEFLOW=499288ab4007ac6a - the flow to be used. This can also be a comma separated list of flows that are all started.
- DISABLE_FLOWEDITOR=YES - any value will do; if set, the flow editor is disabled.
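Since COMPUTEFLOW may carry several comma separated flow ids, the image presumably splits the value before starting each flow. A hedged shell sketch of that splitting (the first id is the one from the text; the second is a made-up placeholder):

```shell
# COMPUTEFLOW: one flow id, or a comma separated list of flow ids to start.
# The first id appears in the text above; the second is a hypothetical placeholder.
COMPUTEFLOW="499288ab4007ac6a,0000000000000001"
echo "$COMPUTEFLOW" | tr ',' '\n' | while read -r flow_id; do
  echo "starting flow $flow_id"
done
```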
Also be aware that Erlang-Red supports a PORT env variable for specifying the port on which Cowboy will listen for connections. The default is 8080.
Heroku uses this to tell a docker image which port to listen on so that its load balancer can route requests correctly.
What the gif shows is executing a simple flow using Erlang as a backend. The flow demonstrates the difference in the switch node of 'check all' or 'stop at first match'.
All nodes are processes - that is shown on the left in the terminal window.
This example is extremely trivial but it does lay the groundwork for expansion.
To create unit tests for this, the Node-RED frontend has been extended with a "Create Test Case" button on the export dialog:
Test flows are stored in the testflows directory and will be picked up the next time make eunit-test is called. In this way it is possible to create unit tests visually.
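For orientation, a test flow is just a standard Node-RED flow export: a JSON array of node objects linked via their wires arrays. A minimal sketch of the format (the node ids and the file path are hypothetical; real files are generated by the flow editor):

```shell
# Write a minimal two-node flow - an inject wired to a debug node -
# in the JSON array format the Node-RED editor exports.
cat > /tmp/example-testflow.json <<'EOF'
[
  { "id": "n1", "type": "inject", "wires": [["n2"]] },
  { "id": "n2", "type": "debug",  "wires": [] }
]
EOF
# Count the node entries in the flow file.
grep -c '"type"' /tmp/example-testflow.json
```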
Flow tests can also be tested within the flow editor, for more details see below.
The flow test suite is now maintained in a separate repository but is duplicated here.
To better support testing of flows, two new nodes have been created:
The "Assert Failed" node causes unit tests to fail if a message reaches it, regardless of any message values. It's basically the same as an assert(false) call. The intention is to ensure that specific parts of a flow aren't reached.
The second node (in green) is an equivalent to a change node, except it contains tests on attributes of the message object. Possible tests include 'equal', 'match', 'unset' and their respective inverses. Here the intention is that a message passing through is tested for specific values, else the unit test fails.
These nodes are necessary since there is no other way to test whether a flow is working or not.
Also remember these flow tests are designed to ensure the Erlang backend is correctly implementing node functionality. The purpose of these nodes is not to ensure that a flow is correct, rather that the functionality of implemented nodes works and continues to work correctly.
My plan is to create test flows that represent specific Node-RED functionality that needs to be implemented by Erlang-Red. This provides regression testing and todos for the implementation.
I created a keyboard shortcut for creating and storing these test flows directly from the flow editor. However, I was still using the terminal to execute tests (make eunit-test), which became painful. So instead I pulled this testing into Node-RED, as the gif demonstrates:
What the gif shows is my list of unit tests, which, at the press of a button, can all be executed. Notifications for each test show the result. In addition, the tree list shows which tests failed or succeeded (red 'x' or green check). Tests can also be executed individually so that failures can be checked one by one.
The best bit though is that all errors are pushed to the debug panel and from there I get directly to the node causing the error. Visual unit testing is completely integrated into Erlang-Red.
My intention is to create many small flows that represent functionality that needs to be implemented by Erlang-Red. These unit tests demonstrate compatibility with Node-RED more than the correctness of the Erlang code.
Contributions are very much welcome in the form of Erlang code or Node-RED test flows, ideally with the Erlang implementation. Elixir code is also welcome, only it has its own home.
Each test flow should test exactly one feature and use the assert nodes to check correctness of expected results. Tests can also be pending to indicate that the corresponding Erlang functionality is still missing.
An overview of the sibling projects for both the reader and me:
- Unit test flow suite provides visual unit tests that verify the functionality being implemented here is the same as in Node-RED. Those test flows are designed to be executed in both Node-RED and Erlang-Red. FlowHub.org maintains the repository and is used to synchronise flow tests between Erlang-Red and Node-RED. These tests can also be used for other projects that aim to replicate Node-RED functionality in an alternative programming language.
- Node-RED and Erlang-Red unit testing nodes are used to define and automatically ensure correct functionality. These nodes are embedded in test flows and ensure that test flows are correct. This makes testing repeatable, reliable and fast! As an aside, these nodes are maintained in a Node-RED flow.
- JSONata support for Erlang-Red is implemented by an Erlang parser with a grammar that covers most of the JSONata syntax, no guarantees made. Support of JSONata functionality is limited to what the test flows require. Nothing prevents others from extending the functionality themselves; it is not a priority of mine.
- Elixir helper library allows Elixir code to be also part of Erlang-Red. Erlang-Red is not intended to be a pure Erlang project, it is intended to be a pure BEAM project. Anything that compiles down to the BEAM VM, why not include it?
- Supervisor nodes and other Erlang behaviours as Node-RED nodes. The node package includes gen_statem and gen_event as nodes that can be used with Erlang-Red flows. These nodes can also be installed into Node-RED but there they do nothing.
- Type parsers for parsing specific Node-RED types such as Number or Buffer. Also for handling attribute access of maps and arrays.
Questions and Answers at either the Erlang Forum or the Node-RED Forum.
For more details, there was also a discussion on Hacker News.
Nick and Dave for bringing Node-RED to life - amazing quality and flexibility - and the entire Node-RED community.
Much thanks to
- @mwmiller for providing a fly server for running a live version of Erlang-Red,
- @joaohf for many tips on coding Erlang and structuring an Erlang project, and
- @Maria-12648430 for debugging my initial attempt to create a gen_server for nodes.
- @joergen7 for the Erlang insights and explaining dialyzer to me and the importance of clean code
- @vkatsuba for the great tips on using ETS tables for buffering messages
This offers a multi-licensing smorgasbord to pick the license that best meets your needs:
- if you wish to do evil and are not concerned with the impact of your behaviour (probably because you gain a financial reward from said behaviour), then you want to use the apache-2 license.
- if you're concerned about the impact of closed source software and the erosion of the commons of shared knowledge, then you might consider the gpl license.
- if you're planning to do good, for example for educational purposes, and provide others with the knowledge to make informed decisions, then you might want to consider the don't do evil license.
No Artificial Intelligence was harmed in the creation of this codebase. This codebase is old skool search engine (ddg), stackoverflow, blog posts and RTFM technology.
AI contributions can be made according to the rules defined in .aiignore.
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for erlang-red
Similar Open Source Tools
erlang-red
Erlang-Red is an experimental Erlang backend designed to replace Node-RED's existing NodeJS backend, aiming for 100% compatibility with existing Node-RED flow code. It brings the advantages of low-code visual flow-based programming to Erlang, a language designed for message passing and concurrency. The tool allows for creating data flows that describe concurrent processing with guaranteed concurrency and performance. Erlang-Red provides a visual flow editor for creating and testing flows, supporting various Node-RED core nodes and Erlang-specific nodes. The development process is flow-driven, with test flows ensuring correct node functionality. The tool can be deployed locally using Docker or on platforms like Fly.io and Heroku. Contributions in the form of Erlang code, Node-RED test flows, and Elixir code are welcome, with a focus on replicating Node-RED functionality in alternative programming languages.
air-script
AirScript is a domain-specific language for expressing AIR constraints for STARKs, with the goal of enabling writing and auditing constraints without the need to learn a specific programming language. It also aims to perform automated optimizations and output constraint evaluator code in multiple target languages. The project is organized into several crates including Parser, MIR, AIR, Winterfell code generator, ACE code generator, and AirScript CLI for transpiling AIRs to target languages.
aici
The Artificial Intelligence Controller Interface (AICI) lets you build Controllers that constrain and direct output of a Large Language Model (LLM) in real time. Controllers are flexible programs capable of implementing constrained decoding, dynamic editing of prompts and generated text, and coordinating execution across multiple, parallel generations. Controllers incorporate custom logic during the token-by-token decoding and maintain state during an LLM request. This allows diverse Controller strategies, from programmatic or query-based decoding to multi-agent conversations to execute efficiently in tight integration with the LLM itself.
ezkl
EZKL is a library and command-line tool for doing inference for deep learning models and other computational graphs in a zk-snark (ZKML). It enables the following workflow: 1. Define a computational graph, for instance a neural network (but really any arbitrary set of operations), as you would normally in pytorch or tensorflow. 2. Export the final graph of operations as an .onnx file and some sample inputs to a .json file. 3. Point ezkl to the .onnx and .json files to generate a ZK-SNARK circuit with which you can prove statements such as: > "I ran this publicly available neural network on some private data and it produced this output" > "I ran my private neural network on some public data and it produced this output" > "I correctly ran this publicly available neural network on some public data and it produced this output" In the backend we use the collaboratively-developed Halo2 as a proof system. The generated proofs can then be verified with much less computational resources, including on-chain (with the Ethereum Virtual Machine), in a browser, or on a device.
MegatronApp
MegatronApp is a toolchain built around the Megatron-LM training framework, offering performance tuning, slow-node detection, and training-process visualization. It includes modules like MegaScan for anomaly detection, MegaFBD for forward-backward decoupling, MegaDPP for dynamic pipeline planning, and MegaScope for visualization. The tool aims to enhance large-scale distributed training by providing valuable capabilities and insights.
project_alice
Alice is an agentic workflow framework that integrates task execution and intelligent chat capabilities. It provides a flexible environment for creating, managing, and deploying AI agents for various purposes, leveraging a microservices architecture with MongoDB for data persistence. The framework consists of components like APIs, agents, tasks, and chats that interact to produce outputs through files, messages, task results, and URL references. Users can create, test, and deploy agentic solutions in a human-language framework, making it easy to engage with by both users and agents. The tool offers an open-source option, user management, flexible model deployment, and programmatic access to tasks and chats.
ChatGPT-Telegram-Bot
The ChatGPT Telegram Bot is a powerful Telegram bot that utilizes various GPT models, including GPT3.5, GPT4, GPT4 Turbo, GPT4 Vision, DALL·E 3, Groq Mixtral-8x7b/LLaMA2-70b, and Claude2.1/Claude3 opus/sonnet API. It enables users to engage in efficient conversations and information searches on Telegram. The bot supports multiple AI models, online search with DuckDuckGo and Google, user-friendly interface, efficient message processing, document interaction, Markdown rendering, and convenient deployment options like Zeabur, Replit, and Docker. Users can set environment variables for configuration and deployment. The bot also provides Q&A functionality, supports model switching, and can be deployed in group chats with whitelisting. The project is open source under GPLv3 license.
Mapperatorinator
Mapperatorinator is a multi-model framework that uses spectrogram inputs to generate fully featured osu! beatmaps for all gamemodes and assist modding beatmaps. The project aims to automatically generate rankable quality osu! beatmaps from any song with a high degree of customizability. The tool is built upon osuT5 and osu-diffusion, utilizing GPU compute and instances on vast.ai for development. Users can responsibly use AI in their beatmaps with this tool, ensuring disclosure of AI usage. Installation instructions include cloning the repository, creating a virtual environment, and installing dependencies. The tool offers a Web GUI for user-friendly experience and a Command-Line Inference option for advanced configurations. Additionally, an Interactive CLI script is available for terminal-based workflow with guided setup. The tool provides generation tips and features MaiMod, an AI-driven modding tool for osu! beatmaps. Mapperatorinator tokenizes beatmaps, utilizes a model architecture based on HF Transformers Whisper model, and offers multitask training format for conditional generation. The tool ensures seamless long generation, refines coordinates with diffusion, and performs post-processing for improved beatmap quality. Super timing generator enhances timing accuracy, and LoRA fine-tuning allows adaptation to specific styles or gamemodes. The project acknowledges credits and related works in the osu! community.
airflow
Apache Airflow (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
sublayer
Sublayer is a model-agnostic Ruby AI Agent framework that provides base classes for building Generators, Actions, Tasks, and Agents to create AI-powered applications in Ruby. It supports various AI models and providers, such as OpenAI, Gemini, and Claude. Generators generate specific outputs, Actions perform operations, Agents are autonomous entities for tasks or monitoring, and Triggers decide when Agents are activated. The framework offers sample Generators and usage examples for building AI applications.
uTensor
uTensor is an extremely light-weight machine learning inference framework built on Tensorflow and optimized for Arm targets. It consists of a runtime library and an offline tool that handles most of the model translation work. The core runtime is only ~2KB. The workflow involves constructing and training a model in Tensorflow, then using uTensor to produce C++ code for inferencing. The runtime ensures system safety, guarantees RAM usage, and focuses on clear, concise, and debuggable code. The high-level API simplifies tensor handling and operator execution for embedded systems.
eureka-ml-insights
The Eureka ML Insights Framework is a repository containing code designed to help researchers and practitioners run reproducible evaluations of generative models efficiently. Users can define custom pipelines for data processing, inference, and evaluation, as well as utilize pre-defined evaluation pipelines for key benchmarks. The framework provides a structured approach to conducting experiments and analyzing model performance across various tasks and modalities.
AIlice
AIlice is a fully autonomous, general-purpose AI agent that aims to create a standalone artificial intelligence assistant, similar to JARVIS, based on the open-source LLM. AIlice achieves this goal by building a "text computer" that uses a Large Language Model (LLM) as its core processor. Currently, AIlice demonstrates proficiency in a range of tasks, including thematic research, coding, system management, literature reviews, and complex hybrid tasks that go beyond these basic capabilities. AIlice has reached near-perfect performance in everyday tasks using GPT-4 and is making strides towards practical application with the latest open-source models. We will ultimately achieve self-evolution of AI agents. That is, AI agents will autonomously build their own feature expansions and new types of agents, unleashing LLM's knowledge and reasoning capabilities into the real world seamlessly.
jaison-core
J.A.I.son is a Python project designed for generating responses using various components and applications. It requires specific plugins like STT, T2T, TTSG, and TTSC to function properly. Users can customize responses, voice, and configurations. The project provides a Discord bot, Twitch events and chat integration, and VTube Studio Animation Hotkeyer. It also offers features for managing conversation history, training AI models, and monitoring conversations.
mscclpp
MSCCL++ is a GPU-driven communication stack for scalable AI applications. It provides a highly efficient and customizable communication stack for distributed GPU applications. MSCCL++ redefines inter-GPU communication interfaces, delivering a highly efficient and customizable communication stack for distributed GPU applications. Its design is specifically tailored to accommodate diverse performance optimization scenarios often encountered in state-of-the-art AI applications. MSCCL++ provides communication abstractions at the lowest level close to hardware and at the highest level close to application API. The lowest level of abstraction is ultra light weight which enables a user to implement logics of data movement for a collective operation such as AllReduce inside a GPU kernel extremely efficiently without worrying about memory ordering of different ops. The modularity of MSCCL++ enables a user to construct the building blocks of MSCCL++ in a high level abstraction in Python and feed them to a CUDA kernel in order to facilitate the user's productivity. MSCCL++ provides fine-grained synchronous and asynchronous 0-copy 1-sided abstracts for communication primitives such as `put()`, `get()`, `signal()`, `flush()`, and `wait()`. The 1-sided abstractions allows a user to asynchronously `put()` their data on the remote GPU as soon as it is ready without requiring the remote side to issue any receive instruction. This enables users to easily implement flexible communication logics, such as overlapping communication with computation, or implementing customized collective communication algorithms without worrying about potential deadlocks. Additionally, the 0-copy capability enables MSCCL++ to directly transfer data between user's buffers without using intermediate internal buffers which saves GPU bandwidth and memory capacity. 
MSCCL++ provides consistent abstractions regardless of the location of the remote GPU (either on the local node or on a remote node) or the underlying link (either NVLink/xGMI or InfiniBand). This simplifies the code for inter-GPU communication, which is often complex due to memory ordering of GPU/CPU read/writes and therefore, is error-prone.
GhostOS
GhostOS is an AI Agent framework designed to replace JSON Schema with a Turing-complete code interaction interface (Moss Protocol). It aims to create intelligent entities capable of continuous learning and growth through code generation and project management. The framework supports various capabilities such as turning Python files into web agents, real-time voice conversation, body movements control, and emotion expression. GhostOS is still in early experimental development and focuses on out-of-the-box capabilities for AI agents.
For similar tasks
NaLLM
The NaLLM project repository explores the synergies between Neo4j and Large Language Models (LLMs) through three primary use cases: Natural Language Interface to a Knowledge Graph, Creating a Knowledge Graph from Unstructured Data, and Generating a Report using static and LLM data. The repository contains backend and frontend code organized for easy navigation. It includes blog posts, a demo database, instructions for running demos, and guidelines for contributing. The project aims to showcase the potential of Neo4j and LLMs in various applications.
lobe-icons
Lobe Icons is a collection of popular AI / LLM model brand SVG logos and icons. It features lightweight, highly optimized scalable vector graphics (SVG) for optimal performance. The collection is tree-shakable, allowing users to import only the icons they need and reduce the overall bundle size of their projects. Lobe Icons has an active community of designers and developers who can contribute and seek support on platforms like GitHub and Discord. The repository supports a wide range of brands across different models, providers, and applications, with more brands continuously being added through contributions. Users can easily install Lobe Icons with the provided commands and integrate it with Next.js for server-side rendering. Local development can be done using GitHub Codespaces or by cloning the repository. Contributions are welcome, and users can contribute code by checking out the GitHub Issues. The project is MIT licensed and maintained by LobeHub.
ibm-generative-ai
IBM Generative AI Python SDK is a tool designed for the Tech Preview program for IBM Foundation Models Studio. It brings IBM Generative AI (GenAI) into Python programs, offering various operations and types. Users can start a trial version or request a demo via the provided link. The SDK was recently rewritten and released under V2 in 2024, with a migration guide available. Contributors are welcome to participate in the open-source project by contributing documentation, tests, bug fixes, and new functionality.
ollama4j
Ollama4j is a Java library that serves as a wrapper or binding for the Ollama server. It facilitates communication with an Ollama server and access to the models it hosts. The tool requires Java 11 or higher, and the server can run locally or via Docker. Users can integrate Ollama4j into Maven projects by adding the specified dependency. The tool offers API specifications and supports various development tasks such as building, running unit tests, and integration tests. Releases are automated through a GitHub Actions CI workflow. Areas of improvement include adhering to Java naming conventions, updating deprecated code, implementing logging, using Lombok, and enhancing request-body creation. Contributions to the project are encouraged, whether reporting bugs, suggesting enhancements, or contributing code.
openkore
OpenKore is a custom client and intelligent automated assistant for Ragnarok Online. It is a free, open source, and cross-platform program (Linux, Windows, and MacOS are supported). To run OpenKore, you need to download and extract it or clone the repository using Git. Configure OpenKore according to the documentation and run openkore.pl to start. The tool provides a FAQ section for troubleshooting, guidelines for reporting issues, and information about botting status on official servers. OpenKore is developed by a global team, and contributions are welcome through pull requests. Various community resources are available for support and communication. Users are advised to comply with the GNU General Public License when using and distributing the software.
quivr-mobile
Quivr-Mobile is a React Native mobile application that allows users to upload files and engage in chat conversations using the Quivr backend API. It supports features like file upload and chatting with a language model about uploaded data. The project uses technologies like React Native, React Native Paper, and React Native Navigation. Users can follow the installation steps to set up the client and contribute to the project by opening issues or submitting pull requests following the existing coding style.
python-projects-2024
Welcome to `OPEN ODYSSEY 1.0` - an Open-source extravaganza for Python and AI/ML Projects. Collaborating with MLH (Major League Hacking), this repository welcomes contributions in the form of fixing outstanding issues, submitting bug reports or new feature requests, adding new projects, implementing new models, and encouraging creativity. Follow the instructions to contribute by forking the repository, cloning it to your PC, creating a new folder for your project, and making a pull request. The repository also features a special Leaderboard for top contributors and offers certificates for all participants and mentors. Follow `OPEN ODYSSEY 1.0` on social media for swift approval of your quest.
evalite
Evalite is a TypeScript-native, local-first tool designed for testing LLM-powered apps. It allows users to view documentation and join a Discord community. To contribute, users need to create a `.env` file with an `OPENAI_API_KEY`, run the dev command to check types, run tests, and start the UI dev server. Additionally, users can run `evalite watch` on examples in the `packages/example` directory. Note that running `pnpm build` in the root and `npm link` in `packages/evalite` may be necessary for the global `evalite` command to work.
For similar jobs
resonance
Resonance is a framework designed to facilitate interoperability and messaging between services in your infrastructure and beyond. It provides AI capabilities and takes full advantage of asynchronous PHP, built on top of Swoole. With Resonance, you can:

* Chat with Open-Source LLMs: Create prompt controllers to directly answer user prompts. The LLM takes care of determining the user's intention, so you can focus on taking appropriate action.
* Asynchronous Where it Matters: Respond asynchronously to incoming RPC or WebSocket messages (or both combined) with little overhead. You can set up all the asynchronous features using attributes; no elaborate configuration is needed.
* Simple Things Remain Simple: Writing HTTP controllers is similar to how it's done in synchronous code, and controllers gain new features that take advantage of the asynchronous environment.
* Consistency is Key: You can keep the same approach to writing software no matter the size of your project. There are no growing central configuration files or service-dependency registries; every relation between code modules is local to those modules.
* Promises in PHP: Resonance provides a partial implementation of the Promise/A+ spec to handle various asynchronous tasks.
* GraphQL Out of the Box: You can build elaborate GraphQL schemas using just PHP attributes. Resonance takes care of reusing SQL queries and optimizing resource usage, and all fields can be resolved asynchronously.
aiogram_bot_template
Aiogram bot template is a boilerplate for creating Telegram bots using Aiogram framework. It provides a solid foundation for building robust and scalable bots with a focus on code organization, database integration, and localization.
pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently**, resolving issues such as the challenging deployment of AI applications and open-source models. Developers are able to write applications in familiar programming languages like **Python and TypeScript**, **directly defining and utilizing the cloud resources necessary for the application within their code base**, such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying resource creation and application deployment**.
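The idea of deducing resource needs from code without running it can be sketched with the standard `ast` module. This is an illustration only; the resource names and the traversal are hypothetical and unrelated to Pluto's real implementation:

```python
import ast

# Toy sketch of static program analysis for resource inference (hypothetical
# names, not Pluto's implementation): walk the AST of application code and
# collect the cloud resources it constructs, without executing the program.

app_source = """
table = DynamoDB("users")
endpoint = SageMaker("my-model")
"""

def infer_resources(source: str) -> list[str]:
    resources = []
    for node in ast.walk(ast.parse(source)):
        # a call to a bare name like DynamoDB(...) is treated as a resource
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"DynamoDB", "SageMaker"}:
                resources.append(node.func.id)
    return resources

print(infer_resources(app_source))  # ['DynamoDB', 'SageMaker']
```

A real analysis would resolve imports and aliases, but the principle is the same: the deployment plan is read off the source, so no separate infrastructure configuration is needed.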
pinecone-ts-client
The official Node.js client for Pinecone, written in TypeScript. This client library provides a high-level interface for interacting with the Pinecone vector database service. With this client, you can create and manage indexes, upsert and query vector data, and perform other operations related to vector search and retrieval. The client is designed to be easy to use and provides a consistent and idiomatic experience for Node.js developers. It supports all the features and functionality of the Pinecone API, making it a comprehensive solution for building vector-powered applications in Node.js.
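What "query the index" computes can be shown with plain Python. This is an illustration of the vector-search operation only, not the Pinecone client's API (the real service computes this server-side at scale):

```python
import math

# Illustration of a top-k vector query by cosine similarity -- the operation a
# vector database performs when you query an index (not the Pinecone client API).

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# a tiny "index" of id -> embedding
index = {
    "doc-1": [1.0, 0.0],
    "doc-2": [0.7, 0.7],
    "doc-3": [0.0, 1.0],
}

def query(vector, top_k=2):
    scored = sorted(index.items(), key=lambda kv: cosine(vector, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

print(query([1.0, 0.1]))  # ['doc-1', 'doc-2']
```

Upsert, by analogy, is just inserting or overwriting an id's embedding in the index.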
aiohttp-pydantic
Aiohttp pydantic is an aiohttp view to easily parse and validate requests. Using function annotations, you declare what your HTTP-verb handler methods expect, and aiohttp-pydantic parses the HTTP request for you, validates the data, and injects the parameters you want. It provides features like query-string, request-body, URL-path, and HTTP-header validation, as well as OpenAPI Specification generation.
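The annotation-driven mechanism the library automates can be sketched with the standard library alone. The `Article` model, `handler`, and `dispatch` names are hypothetical; this shows the pattern (read annotations, build the model from raw data, inject it), not the library's code:

```python
import inspect
from dataclasses import dataclass

# Stdlib-only sketch of the mechanism aiohttp-pydantic automates: inspect a
# handler's annotations, construct the annotated model from the raw request
# body (raising on missing or unexpected fields), and inject it as a parameter.

@dataclass
class Article:
    name: str
    nb_page: int

def handler(article: Article) -> str:
    return f"{article.name}: {article.nb_page} pages"

def dispatch(func, raw_body: dict):
    kwargs = {}
    for name, param in inspect.signature(func).parameters.items():
        model = param.annotation
        kwargs[name] = model(**raw_body)  # TypeError if fields don't match
    return func(**kwargs)

print(dispatch(handler, {"name": "asyncio", "nb_page": 12}))  # asyncio: 12 pages
```

The real library layers pydantic's type coercion and error reporting on top, and covers query strings, path parameters, and headers as well as the body.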
gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.
aioconsole
aioconsole is a Python package that provides asynchronous console and interfaces for asyncio. It offers asynchronous equivalents to input, print, exec, and code.interact, an interactive loop running the asynchronous Python console, customization and running of command line interfaces using argparse, stream support to serve interfaces instead of using standard streams, and the apython script to access asyncio code at runtime without modifying the sources. The package requires Python version 3.8 or higher and can be installed from PyPI or GitHub. It allows users to run Python files or modules with a modified asyncio policy, replacing the default event loop with an interactive loop. aioconsole is useful for scenarios where users need to interact with asyncio code in a console environment.
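The core trick behind an asynchronous `input` can be sketched in a few lines. This is a stdlib-only illustration of the pattern, not aioconsole's implementation; the `_reader` parameter is an assumption added so the example is self-contained:

```python
import asyncio

# Sketch of the idea behind an async input(): run the blocking read in a
# worker thread so the event loop stays free for other coroutines
# (illustration only -- not aioconsole's actual implementation).

async def async_input(prompt: str = "", _reader=input) -> str:
    return await asyncio.to_thread(_reader, prompt)

async def main():
    # stub the reader so the example needs no terminal interaction
    reply = await async_input("name? ", _reader=lambda prompt: "erlang")
    print(reply)  # erlang

asyncio.run(main())
```

While the worker thread blocks on the read, the event loop continues to schedule other coroutines, which is what makes an interactive asyncio console possible.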
aiosqlite
aiosqlite is a Python library that provides a friendly, async interface to SQLite databases. It replicates the standard sqlite3 module but with async versions of all the standard connection and cursor methods, along with context managers for automatically closing connections and cursors. It allows interaction with SQLite databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. The library also replicates most of the advanced features of sqlite3, such as row factories and total changes tracking.
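The pattern aiosqlite implements can be sketched in miniature with the standard library. The `AsyncDB` class is a hypothetical illustration: aiosqlite actually runs all calls on a single dedicated thread per connection, whereas this sketch uses `asyncio.to_thread` for brevity:

```python
import asyncio
import sqlite3

# Miniature sketch of what an async SQLite wrapper does: run blocking sqlite3
# calls off the event loop so other coroutines keep running while a query is
# in flight. (Illustration only -- aiosqlite uses one dedicated thread per
# connection rather than asyncio.to_thread.)

class AsyncDB:
    def __init__(self, path: str):
        # check_same_thread=False because calls run on worker threads;
        # this sketch only ever issues one call at a time
        self._conn = sqlite3.connect(path, check_same_thread=False)

    async def execute(self, sql: str, params=()):
        return await asyncio.to_thread(self._conn.execute, sql, params)

    async def fetchone(self, sql: str, params=()):
        cur = await self.execute(sql, params)
        return await asyncio.to_thread(cur.fetchone)

async def main():
    db = AsyncDB(":memory:")
    await db.execute("CREATE TABLE t (x INTEGER)")
    await db.execute("INSERT INTO t VALUES (1), (2)")
    row = await db.fetchone("SELECT SUM(x) FROM t")
    print(row[0])  # 3

asyncio.run(main())
```

aiosqlite adds the async context managers, row factories, and the serialization guarantees that this sketch glosses over.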