
erlang-red
Visual Erlang Prompting for an AI world - inspired by Node-RED
Stars: 292

Erlang-Red is an experimental Erlang backend designed to replace Node-RED's existing NodeJS backend, aiming for 100% compatibility with existing Node-RED flow code. It brings the advantages of low-code visual flow-based programming to Erlang, a language designed for message passing and concurrency. The tool allows for creating data flows that describe concurrent processing with guaranteed concurrency and performance. Erlang-Red provides a visual flow editor for creating and testing flows, supporting various Node-RED core nodes and Erlang-specific nodes. The development process is flow-driven, with test flows ensuring correct node functionality. The tool can be deployed locally using Docker or on platforms like Fly.io and Heroku. Contributions in the form of Erlang code, Node-RED test flows, and Elixir code are welcome, with a focus on replicating Node-RED functionality in alternative programming languages.
README:
Experimental Erlang backend to replace Node-RED's existing NodeJS backend, aiming for 100% compatibility with existing Node-RED flow code.
The goal is to bring the advantages of low-code visual flow-based programming to a programming language that was designed from the ground up for message passing and concurrency, hence Erlang. More details are described in the corresponding blog post.
Node-RED is great for creating data flows that actually describe concurrent processing; it is just a shame that NodeJS is single-threaded. So why not use something that is multi-process from the ground up? Concurrency is guaranteed and included.
Also, Erlang isn't the most approachable of programming languages - unless one has fallen into a cauldron of Prolog, spiced with Lisp.
So wouldn't it be great to have the simplicity of low-code visual flow-based programming and the performance (and concurrency) of Erlang?
Thanks to @mwmiller, Erlang-Red can be tested at ered.fly.dev or locally using docker:
docker run --interactive --tty --publish 8080:8080 gorenje/erlang-red:0.2.8
Starts Erlang-Red listening on localhost:8080/erlang-red and drops into an Eshell console for BEAM introspection.
For more details on the project, check out my interview at the TADS Blog - I go into why Erlang-Red exists, how it differs from Node-RED, and what influence Flow Based Programming has on both.
Erlang-Red by Example videos:
- Configuring a visual genserver behaviour in Erlang-Red - Flow
- Using the supervisor node in Erlang-Red - Flow
- Binary node and interpreting binary data in Erlang-Red - Flow and Flow
Sample MQTT Broker with explanation.
Many thanks to @joaohf, there is an Erlang-Red recipe for the Yocto project.
Also I did a quick experiment with a Raspberry Pi 4 to get the I2C nodes working. That wasn't "true" embedding, since Erlang-Red was running in a docker container on a Raspberry Pi running a Debian distribution!
Breadboards are prototyping devices found in electronics. Erlang-Red can best be thought of as a programming breadboard.
What are some tools for software prototyping, besides AI and VSCode? Software developers create prototypes, but they don't prototype software.
A telnet session flow describes how breadboard programming can be done using Erlang-Red. That flow prototypes a possible software solution, starting with a simple concurrent approach and iterating until a first final approach is found. All solutions are testable and usable - instantly - and all solutions build on previous solutions: simply copy and paste the flows. That's prototyping.
Implementation of the MQTT specs to create an MQTT broker in Erlang-Red. The broker is created as a flow and, at the same time, a client is created using the Erlang-Red MQTT nodes so that the broker implementation can be tested. Again a breadboard: think of the MQTT nodes as an oscilloscope testing the voltage!
My development process is best described as flow-driven development, based around a set of test flows that ensure node functionality is implemented correctly - meaning that it matches the existing Node-RED functionality.
Test flows are mirrored in a separate repository for better maintainability and also integration with existing Node-RED installations.
The Erlang architecture is best described through various use cases:
- Deploying flows to Erlang-Red. Explains the start up process and how Erlang processes are started for nodes.
- Workings of a supervisor node supervising a function node.
- The challenges of the function node, which must support timeouts, sub-processes and being supervised by a supervisor.
- Inner workings of link nodes and how to deal with dynamic link calls.
This is an incomplete list of nodes that partially or completely work:
Node | Comment | Example Flow |
---|---|---|
batch | Mark messages as belonging to a batch and buffer messages until batches are completed. | Flow |
binary | parse and match binary data using Packet definitions. | Flow |
catch | catches exceptions of selected nodes and of entire flows, but not groups | Flow |
change | supports many operators but not all. JSONata in basic form is also supported. | Flow |
complete | is available and can be used on certain nodes, not all | Flow |
csv | initial RFC4180 decoder working, supports only comma separator | Flow |
debug | only debugs the entire message, individual msg properties aren't supported. msg count as status is supported. | Flow |
delay | supports static delay, not dynamic delay set via msg.delay | Flow |
exec | executing and killing commands is supported, but only for commands in spawn mode and set on the node. Appending arguments to commands isn't supported. Timeouts are supported. Kill messages are also supported. | Flow |
file | write and delete files anywhere on disk | TBD |
file in | working for files located in /priv | Flow |
filter | filter messages based on value changes | Flow |
function | working for any Erlang. Stop and start are also respected. Timeout and more than one output port aren't supported. | Flow |
http in | working for GET and POST, not available for PUT, DELETE etc. | Flow |
http request | basic support for doing requests, anything complex probably won't work | Flow |
http response | working | Flow |
i2c out | very initial and very basic I2C out node | Flow |
inject | working for most types except for flow, global ... | Flow |
join | manual arrays of count X is working, parts isn't supported | Flow |
json | working | Flow |
junction | working | Flow |
link call | working - dynamic & static calls, and timeout is respected | Flow |
link in | working | Flow |
link out | working | Flow |
markdown | working and supports whatever earmark supports. | Flow |
mqtt in | should be working | Flow |
mqtt out | should be working | Flow |
noop | doing nothing is very much supported | Flow |
range | range node is used to map between two different value ranges. | Flow |
sort | basic sort function implemented | Flow |
split | splitting arrays into individual messages is supported; strings, buffers and objects aren't. | Flow |
status | working | Flow |
switch | most operators work, along with basic JSONata expressions | Flow |
tcp in | Tcp in node supports starting a TCP/IP server listening on a specific port. | Flow |
tcp out | Tcp out node that currently only supports the reply-to mode to respond to existing tcp in connections. | Flow |
tcp request | Tcp request node for connecting and communicating with TCP listeners. | Flow |
template | mustache templating is working, but parsing into JSON or YAML isn't supported | Flow |
trigger | the default settings should work | Flow |
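The flows these nodes appear in are ordinary Node-RED flow JSON - the format Erlang-Red aims to be 100% compatible with. A minimal sketch of such a flow (the ids here are hypothetical, and property names follow the standard Node-RED export format) wires an inject node to a debug node:

```json
[
  { "id": "tab1", "type": "tab", "label": "Example Flow" },
  { "id": "n1", "type": "inject", "z": "tab1",
    "payload": "hello", "payloadType": "str",
    "wires": [ [ "n2" ] ] },
  { "id": "n2", "type": "debug", "z": "tab1",
    "complete": "true", "wires": [] }
]
```

In Erlang-Red each node object in such a flow becomes its own Erlang process, and the `wires` arrays describe where each node forwards its messages.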
These nodes represent specific Erlang features as nodes and as such, could be implemented in NodeJS to provide Node-RED with the same functionality.
Node | Comment | Example Flow |
---|---|---|
event handler | Erlang-Red node for the Erlang gen_event behaviour. Supports both dynamic and static configuration of the event handler. | Flow |
module | Erlang module node for defining Erlang modules that can be used with the function, event handler and statemachine nodes. | Flow |
supervisor | Erlang-only node that implements the supervisor behaviour. Supports supervising supervisors and ordering of processes (i.e. nodes) to ensure correct restart and shutdown sequences. | Flow |
statemachine | Implements the gen_statem behaviour. Requires a module node to define the actions of the statemachine. | Flow |
event handler | In conjunction with the module node, this node implements the gen_event behaviour. | Flow |
generic server | Implements the gen_server behaviour. Requires a module node to define the actions of the server. | Flow |
These nodes can be installed using the corresponding Node-RED node package. In Node-RED these nodes are placebos, doing nothing.
Nodes for ensuring truth in unit test flows.
Node | Comment | Example Flow |
---|---|---|
assert failure | Sending this node a message will cause the test to fail. This node ensures certain pathways of a flow aren't reached by messages. | Flow |
assert success | If this node isn't reached during a test run, then that test will fail. This node represents pathways that must be traversed. | Flow |
assert debug | This node can be used to ensure that another node produces content for the debug panel. | Flow |
assert status | Ensure that a node is assigned a specific status value. | Flow |
assert values | Check specific values on the message object and ensure these are correct. | Flow |
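A unit-test flow built from these nodes is, again, plain flow JSON. As an illustration only - the `type` strings and ids below are hypothetical placeholders, the real type names come from the assert node package - such a flow might wire an inject node to an assert node that checks the payload:

```json
[
  { "id": "t1", "type": "tab", "label": "Unit test: inject sets payload" },
  { "id": "i1", "type": "inject", "z": "t1",
    "payload": "42", "payloadType": "num", "wires": [ [ "a1" ] ] },
  { "id": "a1", "type": "assert-values", "z": "t1",
    "rules": [ { "t": "eql", "p": "payload", "to": "42" } ],
    "wires": [] }
]
```

Run in Node-RED, the assert node does nothing; run in Erlang-Red's test runner, it fails the test if the payload does not match.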
These nodes can be installed using the corresponding Node-RED node package.
- Contexts are not supported, so there is no setting things on flow, node or global.
- JSONata has been partially implemented by the Erlang JSONata Parser.
Elixir helpers can be added to erlang-red-elixir-helpers repository.
There is nothing stopping anyone from creating a complete node in Elixir, provided there is an Erlang "node wrapper", i.e., a bit of Erlang code in the src/nodes directory that references the Elixir node.
The initial example, the markdown node, is an Erlang node that references Elixir code. I also wrote an Elixir wrapper function, although I could have just as easily referenced Earmark directly from the Erlang code. That was a stylistic choice.
I intend to use Elixir code for importing Elixir libraries into the project, rather than for coding nodes in Elixir. I simply prefer Erlang syntax. But each to their own :)
$ rebar3 get-deps && rebar3 compile
$ rebar3 eunit
rebar3 shell --apps erlang_red
Open the Node-RED visual flow editor in a browser:
open -a Firefox http://localhost:9090/node-red
I use docker to develop this so for me, the following works:
git clone [email protected]:gorenje/erlang-red.git
cd erlang-red
docker run -it -v $(pwd)/erlang-red:/code -v $(pwd)/data:/data -p 8080:8080 -w /code --rm erlang bash
## inside docker shell:
rebar3 shell --apps erlang_red
Then from the docker host machine, open a browser:
open -a Firefox http://localhost:8080/node-red
That should display the Node-RED visual editor.
A release can be bundled together:
$ rebar3 as prod release -n erlang_red
All static frontend code (for the Node-RED flow editor) and the test flow files in priv/testflows are bundled into the release.
The Cowboy server will be started on port 8080 unless the PORT env variable is set.
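The port resolution can be sketched in shell terms (this is an illustration of the default-or-override behaviour described above, not code from the project):

```shell
# PORT overrides the default listen port of 8080 when set.
PORT="${PORT:-8080}"
echo "Cowboy listening on port ${PORT}"
```

To run the bundled release on another port, something like `PORT=3000 _build/prod/rel/erlang_red/bin/erlang_red foreground` should work - the script path assumes rebar3's default relx release layout, so adjust it to your release configuration.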
A sample Dockerfile, Dockerfile.fly, is provided to allow for easy launching of an instance as a fly application.
The provided shell script (fly_er.sh) sets some common expected parameters for the launch. Advanced users may wish to examine the fly launch line therein and adjust it for their requirements.
Using the container stack at Heroku, deployment becomes a git push heroku after the usual Heroku setup:
- heroku login
- heroku git:remote -a <app name>
- heroku stack:set container
- git push heroku
However, the Dockerfile.heroku does not start the flow editor; the image is designed to run a set of flows, in this case (at time of writing) a simple website with a single page.
Basically this flow is the red-erik.org site.
The image does this by setting the following ENV variables:
- COMPUTEFLOW=499288ab4007ac6a - flow to be used. This can also be a comma-separated list of flows that are all started.
- DISABLE_FLOWEDITOR=YES - any value will do; if set, the flow editor is disabled.
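As a sketch, these settings are plain environment variables (values taken from the README above):

```shell
# Flow(s) to start; a comma-separated list starts several flows at once.
export COMPUTEFLOW=499288ab4007ac6a
# Any value disables the flow editor.
export DISABLE_FLOWEDITOR=YES
echo "${COMPUTEFLOW} ${DISABLE_FLOWEDITOR}"
```

When running the image directly with docker rather than on Heroku, the same variables can be passed with `-e`, e.g. `docker run -e COMPUTEFLOW=... -e DISABLE_FLOWEDITOR=YES ...`.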
Also be aware that Erlang-Red supports a PORT env variable specifying the port on which Cowboy will listen for connections. The default is 8080.
Heroku uses this to tell the docker image which port to listen on, so that its load balancer can route requests correctly.
What the gif shows is executing a simple flow using Erlang as a backend. The flow demonstrates the difference in the switch node of 'check all' or 'stop at first match'.
All nodes are processes - that is shown on the left in the terminal window.
This example is extremely trivial but it does lay the groundwork for expansion.
To create unit tests for this, Node-RED frontend has been extended with a "Create Test Case" button on the export dialog:
Test flows are stored in the testflows directory and will be picked up the next time make eunit-test is called. In this way it is possible to create unit tests visually.
Flow tests can also be tested within the flow editor, for more details see below.
The flow test suite is now maintained in a separate repository but is duplicated here.
To better support testing of flows, two new nodes have been created:
The "Assert Failed" node causes unit tests to fail if a message reaches it, regardless of any message values. It's basically the same as an assert(false) call. The intention is to ensure that specific parts of a flow aren't reached.
The second node (in green) is the equivalent of a change node, except it contains tests on attributes of the message object. Possible tests include 'equal', 'match', 'unset' and their respective inverses. Here the intention is that a message passing through is tested for specific values, else the unit test fails.
These nodes are necessary since there is no other way to test whether a flow is working or not.
Also remember these flow tests are designed to ensure the Erlang backend is correctly implementing node functionality. The purpose of these nodes is not to ensure that a flow is correct, rather that the functionality of implemented nodes works and continues to work correctly.
My plan is to create test flows that represent specific Node-RED functionality that needs to be implemented by Erlang-Red. This provides regression testing and todos for the implementation.
I created a keyboard shortcut for creating and storing these test flows directly from the flow editor. However I still had to use the terminal to execute tests (make eunit-test) - which became painful. So instead I pulled this testing into Node-RED, as the gif demonstrates:
What the gif shows is my list of unit tests, all of which can be run at the press of a button. Notifications for each test show the result. In addition, the tree list shows which tests failed or succeeded (red 'x' or green check). Tests can also be executed individually so that failures can be checked one by one.
The best bit though is that all errors are pushed to the debug panel and from there I get directly to the node causing the error. Visual unit testing is completely integrated into Erlang-Red.
My intention is to create many small flows that represent functionality that needs to be implemented by Erlang-Red. These unit tests show compatibility with Node-RED, rather than the correctness of the Erlang code.
Contributions are very much welcome in the form of Erlang code or Node-RED test flows, ideally with the Erlang implementation. Elixir code is also welcome, only it has its own home.
Each test flow should test exactly one feature and use the assert nodes to check correctness of expected results. Tests can also be pending to indicate that the corresponding Erlang functionality is still missing.
An overview of the sibling projects for both the reader and me:
- Unit test flow suite provides visual unit tests that verify the functionality being implemented here is the same as in Node-RED. Those test flows are designed to be executed in both Node-RED and Erlang-Red. FlowHub.org maintains the repository and is used to synchronise flow tests between Erlang-Red and Node-RED. These tests can also be used for other projects that aim to replicate Node-RED functionality in an alternative programming language.
- Node-RED and Erlang-Red unit testing nodes are used to define and automatically ensure the correct functionality. These nodes are embedded in test flows and ensure that test flows are correct. This makes testing repeatable, reliable and fast! As an aside, these nodes are maintained in a Node-RED flow.
- JSONata support for Erlang-Red is implemented by an Erlang parser with a grammar that covers most of JSONata syntax, no guarantees made. Support of JSONata functionality is limited to what the test flows require. Nothing prevents others from extending the functionality themselves; it is not a priority of mine.
- Elixir helper library allows Elixir code to be also part of Erlang-Red. Erlang-Red is not intended to be a pure Erlang project, it is intended to be a pure BEAM project. Anything that compiles down to the BEAM VM, why not include it?
- Supervisor nodes and other Erlang behaviours as Node-RED nodes. The node package includes gen_statem and gen_event as nodes that can be used with Erlang-Red flows. These nodes can also be installed into Node-RED, but there they do nothing.
- Type parsers for parsing specific Node-RED types such as Number or Buffer. Also for handling attribute access of maps and arrays.
Questions and Answers at either the Erlang Forum or the Node-RED Forum.
For more details, there was also a discussion on Hacker News.
Nick and Dave for bringing Node-RED to life - amazing quality and flexibility - and the entire Node-RED community.
Many thanks to
- @mwmiller for providing a fly server for running a live version of Erlang-Red,
- @joaohf for many tips on coding Erlang and structuring an Erlang project, and
- @Maria-12648430 for debugging my initial attempt to create a gen_server for nodes.
- @joergen7 for the Erlang insights and explaining dialyzer to me and the importance of clean code
- @vkatsuba for the great tips on using ETS tables for buffering messages
This project offers a multi-licensing smorgasbord, so pick the license that best meets your needs:
- if you wish to do evil and are not concerned with the impact of your behaviour (probably because you gain a financial reward from said behaviour), then you want to use the apache-2 license.
- if you're concerned about the impact of closed source software and the erosion of the commons of shared knowledge, then you might consider the gpl license.
- if you're planning to do good, for example for educational purposes, and provide others with the knowledge to make informed decisions, then you might want to consider the don't do evil license.
No Artificial Intelligence was harmed in the creation of this codebase. This codebase is old skool search engine (ddg), stackoverflow, blog posts and RTFM technology.
AI contributions can be made according to the rules defined in .aiignore.
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for erlang-red
Similar Open Source Tools

erlang-red
Erlang-Red is an experimental Erlang backend designed to replace Node-RED's existing NodeJS backend, aiming for 100% compatibility with existing Node-RED flow code. It brings the advantages of low-code visual flow-based programming to Erlang, a language designed for message passing and concurrency. The tool allows for creating data flows that describe concurrent processing with guaranteed concurrency and performance. Erlang-Red provides a visual flow editor for creating and testing flows, supporting various Node-RED core nodes and Erlang-specific nodes. The development process is flow-driven, with test flows ensuring correct node functionality. The tool can be deployed locally using Docker or on platforms like Fly.io and Heroku. Contributions in the form of Erlang code, Node-RED test flows, and Elixir code are welcome, with a focus on replicating Node-RED functionality in alternative programming languages.

BeamNGpy
BeamNGpy is an official Python library providing an API to interact with BeamNG.tech, a video game focused on academia and industry. It allows remote control of vehicles, AI-controlled vehicles, dynamic sensor models, access to road network and scenario objects, and multiple clients. The library comes with low-level functions and higher-level interfaces for complex actions. BeamNGpy requires BeamNG.tech for usage and offers compatibility information for different versions. It also provides troubleshooting tips and encourages user contributions.

air-script
AirScript is a domain-specific language for expressing AIR constraints for STARKs, with the goal of enabling writing and auditing constraints without the need to learn a specific programming language. It also aims to perform automated optimizations and output constraint evaluator code in multiple target languages. The project is organized into several crates including Parser, MIR, AIR, Winterfell code generator, ACE code generator, and AirScript CLI for transpiling AIRs to target languages.

HybridAGI
HybridAGI is the first Programmable LLM-based Autonomous Agent that lets you program its behavior using a **graph-based prompt programming** approach. This state-of-the-art feature allows the AGI to efficiently use any tool while controlling the long-term behavior of the agent. Become the _first Prompt Programmers in history_ ; be a part of the AI revolution one node at a time! **Disclaimer: We are currently in the process of upgrading the codebase to integrate DSPy**

hof
Hof is a CLI tool that unifies data models, schemas, code generation, and a task engine. It allows users to augment data, config, and schemas with CUE to improve consistency, generate multiple Yaml and JSON files, explore data or config with a TUI, and run workflows with automatic task dependency inference. The tool uses CUE to power the DX and implementation, providing a language for specifying schemas, configuration, and writing declarative code. Hof offers core features like code generation, data model management, task engine, CUE cmds, creators, modules, TUI, and chat for better, scalable results.

hackingBuddyGPT
hackingBuddyGPT is a framework for testing LLM-based agents for security testing. It aims to create common ground truth by creating common security testbeds and benchmarks, evaluating multiple LLMs and techniques against those, and publishing prototypes and findings as open-source/open-access reports. The initial focus is on evaluating the efficiency of LLMs for Linux privilege escalation attacks, but the framework is being expanded to evaluate the use of LLMs for web penetration-testing and web API testing. hackingBuddyGPT is released as open-source to level the playing field for blue teams against APTs that have access to more sophisticated resources.

hashbrown
Hashbrown is a lightweight and efficient hashing library for Python, designed to provide easy-to-use cryptographic hashing functions for secure data storage and transmission. It supports a variety of hashing algorithms, including MD5, SHA-1, SHA-256, and SHA-512, allowing users to generate hash values for strings, files, and other data types. With Hashbrown, developers can quickly implement data integrity checks, password hashing, digital signatures, and other security features in their Python applications.

neutone_sdk
The Neutone SDK is a tool designed for researchers to wrap their own audio models and run them in a DAW using the Neutone Plugin. It simplifies the process by allowing models to be built using PyTorch and minimal Python code, eliminating the need for extensive C++ knowledge. The SDK provides support for buffering inputs and outputs, sample rate conversion, and profiling tools for model performance testing. It also offers examples, notebooks, and a submission process for sharing models with the community.

PulsarRPA
PulsarRPA is a high-performance, distributed, open-source Robotic Process Automation (RPA) framework designed to handle large-scale RPA tasks with ease. It provides a comprehensive solution for browser automation, web content understanding, and data extraction. PulsarRPA addresses challenges of browser automation and accurate web data extraction from complex and evolving websites. It incorporates innovative technologies like browser rendering, RPA, intelligent scraping, advanced DOM parsing, and distributed architecture to ensure efficient, accurate, and scalable web data extraction. The tool is open-source, customizable, and supports cutting-edge information extraction technology, making it a preferred solution for large-scale web data extraction.

dbrx
DBRX is a large language model trained by Databricks and made available under an open license. It is a Mixture-of-Experts (MoE) model with 132B total parameters and 36B live parameters, using 16 experts, of which 4 are active during training or inference. DBRX was pre-trained for 12T tokens of text and has a context length of 32K tokens. The model is available in two versions: a base model and an Instruct model, which is finetuned for instruction following. DBRX can be used for a variety of tasks, including text generation, question answering, summarization, and translation.

OpenBB
The OpenBB Platform is the first financial platform that is free and fully open source, offering access to equity, options, crypto, forex, macro economy, fixed income, and more. It provides a broad range of extensions to enhance the user experience according to their needs. Users can sign up to the OpenBB Hub to maximize the benefits of the OpenBB ecosystem. Additionally, the platform includes an AI-powered Research and Analytics Workspace for free. There is also an open source AI financial analyst agent available that can access all the data within OpenBB.

wandbot
Wandbot is a question-answering bot designed for Weights & Biases documentation. It employs Retrieval Augmented Generation with a ChromaDB backend for efficient responses. The bot features periodic data ingestion, integration with Discord and Slack, and performance monitoring through logging. It has a fallback mechanism for model selection and is evaluated based on retrieval accuracy and model-generated responses. The implementation includes creating document embeddings, constructing the Q&A RAGPipeline, model selection, deployment on FastAPI, Discord, and Slack, logging and analysis with Weights & Biases Tables, and performance evaluation.

Document-Knowledge-Mining-Solution-Accelerator
The Document Knowledge Mining Solution Accelerator leverages Azure OpenAI and Azure AI Document Intelligence to ingest, extract, and classify content from various assets, enabling chat-based insight discovery, analysis, and prompt guidance. It uses OCR and multi-modal LLM to extract information from documents like text, handwritten text, charts, graphs, tables, and form fields. Users can customize the technical architecture and data processing workflow. Key features include ingesting and extracting real-world entities, chat-based insights discovery, text and document data analysis, prompt suggestion guidance, and multi-modal information processing.

kdbai-samples
KDB.AI is a time-based vector database that allows developers to build scalable, reliable, and real-time applications by providing advanced search, recommendation, and personalization for Generative AI applications. It supports multiple index types, distance metrics, top-N and metadata filtered retrieval, as well as Python and REST interfaces. The repository contains samples demonstrating various use-cases such as temporal similarity search, document search, image search, recommendation systems, sentiment analysis, and more. KDB.AI integrates with platforms like ChatGPT, Langchain, and LlamaIndex. The setup steps require Unix terminal, Python 3.8+, and pip installed. Users can install necessary Python packages and run Jupyter notebooks to interact with the samples.

mastra
Mastra is an opinionated Typescript framework designed to help users quickly build AI applications and features. It provides primitives such as workflows, agents, RAG, integrations, syncs, and evals. Users can run Mastra locally or deploy it to a serverless cloud. The framework supports various LLM providers, offers tools for building language models, workflows, and accessing knowledge bases. It includes features like durable graph-based state machines, retrieval-augmented generation, integrations, syncs, and automated tests for evaluating LLM outputs.

LLaMa2lang
LLaMa2lang is a repository containing convenience scripts to finetune LLaMa3-8B (or any other foundation model) for chat towards any language that isn't English. The repository aims to improve the performance of LLaMa3 for non-English languages by combining fine-tuning with RAG. Users can translate datasets, extract threads, turn threads into prompts, and finetune models using QLoRA and PEFT. Additionally, the repository supports translation models like OPUS, M2M, MADLAD, and base datasets like OASST1 and OASST2. The process involves loading datasets, translating them, combining checkpoints, and running inference using the newly trained model. The repository also provides benchmarking scripts to choose the right translation model for a target language.
For similar tasks

NaLLM
The NaLLM project repository explores the synergies between Neo4j and Large Language Models (LLMs) through three primary use cases: Natural Language Interface to a Knowledge Graph, Creating a Knowledge Graph from Unstructured Data, and Generating a Report using static and LLM data. The repository contains backend and frontend code organized for easy navigation. It includes blog posts, a demo database, instructions for running demos, and guidelines for contributing. The project aims to showcase the potential of Neo4j and LLMs in various applications.

lobe-icons
Lobe Icons is a collection of popular AI / LLM Model Brand SVG logos and icons. It features lightweight and scalable icons designed with highly optimized scalable vector graphics (SVG) for optimal performance. The collection is tree-shakable, allowing users to import only the icons they need to reduce the overall bundle size of their projects. Lobe Icons has an active community of designers and developers who can contribute and seek support on platforms like GitHub and Discord. The repository supports a wide range of brands across different models, providers, and applications, with more brands continuously being added through contributions. Users can easily install Lobe UI with the provided commands and integrate it with NextJS for server-side rendering. Local development can be done using Github Codespaces or by cloning the repository. Contributions are welcome, and users can contribute code by checking out the GitHub Issues. The project is MIT licensed and maintained by LobeHub.

ibm-generative-ai
IBM Generative AI Python SDK is a tool designed for the Tech Preview program for IBM Foundation Models Studio. It brings IBM Generative AI (GenAI) into Python programs, offering various operations and types. Users can start a trial version or request a demo via the provided link. The SDK was recently rewritten and released under V2 in 2024, with a migration guide available. Contributors are welcome to participate in the open-source project by contributing documentation, tests, bug fixes, and new functionality.

ollama4j
Ollama4j is a Java library that serves as a wrapper or binding for the Ollama server. It facilitates communication with the Ollama server and provides models for deployment. The tool requires Java 11 or higher and can be installed locally or via Docker. Users can integrate Ollama4j into Maven projects by adding the specified dependency. The tool offers API specifications and supports various development tasks such as building, running unit tests, and integration tests. Releases are automated through GitHub Actions CI workflow. Areas of improvement include adhering to Java naming conventions, updating deprecated code, implementing logging, using lombok, and enhancing request body creation. Contributions to the project are encouraged, whether reporting bugs, suggesting enhancements, or contributing code.

openkore
OpenKore is a custom client and intelligent automated assistant for Ragnarok Online. It is a free, open-source, and cross-platform program (Linux, Windows, and macOS are supported). To run OpenKore, download and extract it or clone the repository using Git, configure it according to the documentation, and run openkore.pl to start. The tool provides a FAQ section for troubleshooting, guidelines for reporting issues, and information about botting status on official servers. OpenKore is developed by a global team, and contributions are welcome through pull requests. Various community resources are available for support and communication. Users are advised to comply with the GNU General Public License when using and distributing the software.

quivr-mobile
Quivr-Mobile is a React Native mobile application that allows users to upload files and engage in chat conversations using the Quivr backend API. It supports features like file upload and chatting with a language model about uploaded data. The project uses technologies like React Native, React Native Paper, and React Native Navigation. Users can follow the installation steps to set up the client and contribute to the project by opening issues or submitting pull requests following the existing coding style.

python-projects-2024
Welcome to `OPEN ODYSSEY 1.0` - an Open-source extravaganza for Python and AI/ML Projects. Collaborating with MLH (Major League Hacking), this repository welcomes contributions in the form of fixing outstanding issues, submitting bug reports or new feature requests, adding new projects, implementing new models, and encouraging creativity. Follow the instructions to contribute by forking the repository, cloning it to your PC, creating a new folder for your project, and making a pull request. The repository also features a special Leaderboard for top contributors and offers certificates for all participants and mentors. Follow `OPEN ODYSSEY 1.0` on social media for swift approval of your quest.

evalite
Evalite is a TypeScript-native, local-first tool designed for testing LLM-powered apps. It allows users to view documentation and join a Discord community. To contribute, users need to create a .env file with an OPENAI_API_KEY, run the dev command to check types, run tests, and start the UI dev server. Additionally, users can run 'evalite watch' on examples in the 'packages/example' directory. Note that running 'pnpm build' in the root and 'npm link' in 'packages/evalite' may be necessary for the global 'evalite' command to work.

resonance
Resonance is a framework designed to facilitate interoperability and messaging between services in your infrastructure and beyond. It provides AI capabilities and takes full advantage of asynchronous PHP, built on top of Swoole. With Resonance, you can:
* Chat with Open-Source LLMs: Create prompt controllers to directly answer users' prompts. The LLM takes care of determining the user's intention, so you can focus on taking appropriate action.
* Asynchronous Where it Matters: Respond asynchronously to incoming RPC or WebSocket messages (or both combined) with little overhead. You can set up all the asynchronous features using attributes; no elaborate configuration is needed.
* Simple Things Remain Simple: Writing HTTP controllers is similar to how it's done in synchronous code. Controllers have new features that take advantage of the asynchronous environment.
* Consistency is Key: You can keep the same approach to writing software no matter the size of your project. There are no growing central configuration files or service dependency registries; every relation between code modules is local to those modules.
* Promises in PHP: Resonance provides a partial implementation of the Promises/A+ spec to handle various asynchronous tasks.
* GraphQL Out of the Box: You can build elaborate GraphQL schemas using just PHP attributes. Resonance takes care of reusing SQL queries and optimizing resource usage. All fields can be resolved asynchronously.

aiogram_bot_template
Aiogram bot template is a boilerplate for creating Telegram bots using Aiogram framework. It provides a solid foundation for building robust and scalable bots with a focus on code organization, database integration, and localization.

pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently**, resolving issues such as the challenging deployment of AI applications and open-source models. Developers can write applications in familiar programming languages like **Python and TypeScript**, **directly defining and utilizing the cloud resources necessary for the application within their code base**, such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying resource creation and application deployment**.

pinecone-ts-client
The official Node.js client for Pinecone, written in TypeScript. This client library provides a high-level interface for interacting with the Pinecone vector database service. With this client, you can create and manage indexes, upsert and query vector data, and perform other operations related to vector search and retrieval. The client is designed to be easy to use and provides a consistent and idiomatic experience for Node.js developers. It supports all the features and functionality of the Pinecone API, making it a comprehensive solution for building vector-powered applications in Node.js.

aiohttp-pydantic
Aiohttp-pydantic is an aiohttp view that makes it easy to parse and validate requests. You use function annotations to declare what your HTTP-verb handler methods expect, and aiohttp-pydantic parses the HTTP request for you, validates the data, and injects the parameters you want. It provides validation of the query string, request body, URL path, and HTTP headers, as well as OpenAPI Specification generation.
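The annotation-driven injection described above can be illustrated with a stdlib-only sketch (the real library builds on aiohttp views and pydantic models; `parse_params` and `get_pets` here are hypothetical names, and coercion by calling the annotation is a simplification of pydantic's validation):

```python
import inspect

def parse_params(handler, raw_query):
    """Coerce raw string parameters to the types declared in the
    handler's annotations, then call the handler with them -- a
    simplified stand-in for aiohttp-pydantic's injection step."""
    sig = inspect.signature(handler)
    kwargs = {}
    for name, param in sig.parameters.items():
        # e.g. int("3") -> 3; raises ValueError on bad input,
        # where the real library would return a 400 response.
        kwargs[name] = param.annotation(raw_query[name])
    return handler(**kwargs)

def get_pets(age: int, name: str):
    return {"age": age, "name": name}

print(parse_params(get_pets, {"age": "3", "name": "rex"}))
# {'age': 3, 'name': 'rex'}
```

The query string arrives as text, but the handler receives a real `int`, which is the convenience the library provides on top of aiohttp.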

gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.

aioconsole
aioconsole is a Python package that provides asynchronous console and interfaces for asyncio. It offers asynchronous equivalents to input, print, exec, and code.interact, an interactive loop running the asynchronous Python console, customization and running of command line interfaces using argparse, stream support to serve interfaces instead of using standard streams, and the apython script to access asyncio code at runtime without modifying the sources. The package requires Python version 3.8 or higher and can be installed from PyPI or GitHub. It allows users to run Python files or modules with a modified asyncio policy, replacing the default event loop with an interactive loop. aioconsole is useful for scenarios where users need to interact with asyncio code in a console environment.
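The core idea behind an async `input` can be sketched with the stdlib alone: run the blocking call in a worker thread so the event loop stays responsive. This is only an illustration of the concept; `ainput` below is a hypothetical minimal version, and aioconsole's actual implementation and signatures differ.

```python
import asyncio

async def ainput(prompt="", reader=input):
    """Minimal async-input sketch: hand the blocking reader off to a
    worker thread so other coroutines keep running meanwhile."""
    return await asyncio.to_thread(reader, prompt)

async def main():
    # Use a stub reader so the example runs without a terminal attached.
    reply = await ainput("name? ", reader=lambda prompt: "erlang")
    print(reply)  # erlang

asyncio.run(main())
```

With the default `reader=input`, awaiting `ainput` would not block the loop the way a bare `input()` call inside a coroutine does.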

aiosqlite
aiosqlite is a Python library that provides a friendly, async interface to SQLite databases. It replicates the standard sqlite3 module but with async versions of all the standard connection and cursor methods, along with context managers for automatically closing connections and cursors. It allows interaction with SQLite databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. The library also replicates most of the advanced features of sqlite3, such as row factories and total changes tracking.
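The non-blocking behavior described above can be approximated with the stdlib `sqlite3` module by pushing each query onto a worker thread. This is a sketch of the idea only (aiosqlite actually dedicates one long-lived thread per connection and mirrors the full `sqlite3` API; `fetch_one` below is a hypothetical helper):

```python
import asyncio
import sqlite3

async def fetch_one(db, sql, params=()):
    """Run a blocking sqlite3 query in a worker thread so the event
    loop is free to schedule other coroutines while it executes."""
    return await asyncio.to_thread(lambda: db.execute(sql, params).fetchone())

async def main():
    # check_same_thread=False because the query runs off the main thread.
    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE langs (name TEXT)")
    db.execute("INSERT INTO langs VALUES ('erlang')")
    row = await fetch_one(db, "SELECT name FROM langs")
    print(row)  # ('erlang',)
    db.close()

asyncio.run(main())
```

aiosqlite wraps this pattern in familiar `async with` connection and cursor context managers, so existing `sqlite3` code ports over with few changes.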