aiotone
A demo of using AsyncIO for music sequencing
Stars: 116
Aiotone is a repository of audio synthesis and MIDI processing tools built on AsyncIO. It includes a work-in-progress polyphonic 4-operator FM synthesizer, tools for performing on two Moog Mother 32 synthesizers, sequencing for the Novation Circuit and Novation Circuit Mono Station, and self-generating sequences for the Moog Mother 32 synthesizers and the Moog Subharmonicon. The tools are designed for real-time audio processing and MIDI control, with features like polyphony, modulation, and sequencing. The repository provides examples and tutorials for using the tools in music production and live performance.
README:
Audio synthesis and MIDI processing tools in AsyncIO.
$ git clone https://github.com/ambv/aiotone
$ python3.8 -m venv /tmp/aiotone
$ source /tmp/aiotone/bin/activate
(aiotone)$ pip install Cython cymem
(aiotone)$ pip install -e .[dev]
(aiotone)$ python build.py
DISCLAIMER: tested on macOS only.
aiotone.fmsynth:
- this is a work-in-progress polyphonic 4-operator FM synthesizer following the general Yamaha design;
- this pushes Python really hard; your CPU might not be able to do realtime audio with it, and if that's the case, decrease polyphony;
- as usual, MIDI IN and AUDIO OUT configuration is done through a config file;
- tested under macOS and Linux (both PulseAudio and ALSA) where we were able to achieve 8+ voices of polyphony without buffer underruns;
- use something like BlackHole to route audio to your DAW of choice.
For more information see:
(aiotone)$ python -m aiotone.fmsynth --help
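To give a flavor of what "4-operator FM" means in practice, here is a minimal NumPy sketch of phase modulation with just two operators (a modulator driving a carrier). This is illustrative only and is not aiotone's actual DSP code; the real synthesizer chains four operators according to the selected algorithm.

    # Illustrative only: block-based phase modulation (the "FM" in FM synthesis).
    import numpy as np

    SAMPLE_RATE = 44100

    def fm_note(freq, duration, mod_ratio=2.0, mod_index=3.0):
        """One note: a modulator at freq * mod_ratio modulates the carrier's phase."""
        t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
        modulator = np.sin(2 * np.pi * freq * mod_ratio * t)            # operator 2
        carrier = np.sin(2 * np.pi * freq * t + mod_index * modulator)  # operator 1
        envelope = np.exp(-3.0 * t)                                     # simple decay
        return (carrier * envelope).astype(np.float32)

    samples = fm_note(220.0, 1.0)  # one second of an FM tone at 220 Hz

Raising mod_index makes the tone brighter and changing mod_ratio changes the harmonic character; a 4-operator design generalizes this across several operators and routings.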
aiotone.redblue: performing on two Moog Mother 32 synthesizers as one instrument.
Available algorithms:
- regular unison mode for rich chorus;
- power chord mode;
- duophonic mode: real 2-voice polyphony;
- legato-controlled glide (or sustain pedal-controlled glide);
- velocity-controlled accent notes;
- one Mother receives mod wheel on ASSIGN CV;
- the other Mother receives expression pedal on ASSIGN CV.
For more information:
(aiotone)$ python -m aiotone.redblue --help
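As a rough illustration of what "duophonic mode: real 2-voice polyphony" involves, the sketch below round-robins incoming notes between two mono synths on separate MIDI channels. It is not aiotone.redblue's code: it uses the mido library and made-up port names as assumptions (run aiotone.lsdev to find yours), and it is written synchronously for brevity rather than with AsyncIO.

    # Illustrative sketch only, not aiotone.redblue: the simplest duophonic allocation,
    # round-robin routing of incoming notes to two mono synths on separate MIDI channels.
    import mido

    KEYS_PORT = "IAC aiotone"           # assumed MIDI input (the keyboard)
    SYNTH_PORT = "Mother 32 interface"  # assumed MIDI output (the two Mothers)
    CHANNELS = (0, 1)                   # one Mother 32 listening on each channel

    def run():
        next_voice = 0
        held = {}  # note number -> channel it was routed to
        with mido.open_input(KEYS_PORT) as keys, mido.open_output(SYNTH_PORT) as synth:
            for msg in keys:  # blocking iteration, one message at a time
                if msg.type == "note_on" and msg.velocity > 0:
                    channel = CHANNELS[next_voice]
                    next_voice = (next_voice + 1) % len(CHANNELS)
                    held[msg.note] = channel
                    synth.send(msg.copy(channel=channel))
                elif msg.type in ("note_off", "note_on"):  # note_on w/ velocity 0 == note_off
                    synth.send(msg.copy(channel=held.pop(msg.note, CHANNELS[0])))

    if __name__ == "__main__":
        run()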
aiotone.circuits: sequencing the Novation Circuit and Novation Circuit Mono Station. For more information:
(aiotone)$ python -m aiotone.circuits --help
See this talk for a full tutorial: https://www.youtube.com/watch?v=02CLD-42VdI
aiotone.mothergen: self-generating sequences for the Moog Mother 32 synthesizers and the Moog Subharmonicon.
- the idea is to have a generative sequence that can still be modulated with incoming MIDI signal from the musician;
- so far very simple but will be extended;
- the musician controls pitch bend, expression, and damper pedal (I personally have them patched to control resonance, cutoff, and glide of the synthesizers);
- the musician can transpose the generated sequences on the fly.
For more information see:
(aiotone)$ python -m aiotone.mothergen --help
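The general shape of such a generative-but-playable sequencer is easy to sketch with AsyncIO: one task loops a pattern while another watches incoming MIDI and transposes the pattern on the fly. The sketch below is not aiotone.mothergen; it uses the mido library and made-up port names as assumptions, and it only implements the transpose-on-the-fly idea.

    # Illustrative sketch only, not aiotone.mothergen: one task loops a pattern while
    # another listens for the musician's note-ons and transposes the pattern live.
    import asyncio
    import itertools

    import mido

    KEYS_PORT = "IAC aiotone"           # assumed MIDI input from the musician
    SYNTH_PORT = "Mother 32 interface"  # assumed MIDI output to the synth
    SEQUENCE = [48, 51, 55, 58]         # the "generated" pattern (here just a fixed arpeggio)
    transpose = 0

    async def watch_midi_in():
        """Treat the musician's last note-on as the new root of the sequence."""
        global transpose
        with mido.open_input(KEYS_PORT) as keys:
            while True:
                for msg in keys.iter_pending():
                    if msg.type == "note_on" and msg.velocity > 0:
                        transpose = msg.note - SEQUENCE[0]
                await asyncio.sleep(0.001)  # yield to the sequencer task

    async def play_sequence(tempo_bpm=120):
        step = 60 / tempo_bpm / 2  # eighth notes
        with mido.open_output(SYNTH_PORT) as synth:
            for note in itertools.cycle(SEQUENCE):
                pitch = min(127, max(0, note + transpose))
                synth.send(mido.Message("note_on", note=pitch, velocity=100))
                await asyncio.sleep(step)
                synth.send(mido.Message("note_off", note=pitch))

    async def main():
        await asyncio.gather(watch_midi_in(), play_sequence())

    asyncio.run(main())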
aiotone.samplesnake: an automatic sampler. For more information see:
(aiotone)$ python -m aiotone.samplesnake --help
Long story short: this enables you to automatically record many samples at different note pitches and velocities for use with a sample player; it is especially handy for exporting nice VST sounds for use with hardware samplers.
Caveats:
- only really tested on an M1 Mac;
- this is realtime, in case of buffer underruns the resulting sample will be empty;
- not well suited for recording analog stuff with a high noise floor;
- only records 32-bit float stereo samples, convert after if needed;
- silence detection is very primitive and there's no smart sample trimming (you can try aiotone.sampletrim after).
How to use:
- open up your DAW like Ableton Live, create a MIDI track with your VST of choice;
- select MIDI input on the track to be a virtual MIDI port like "IAC aiotone" (see "Help, how do I use this?" if you're not sure what I'm talking about);
- select the audio output on the track to be BlackHole channels 1-2;
- (optional) if you want to hear audio processed by the script, create an audio track in the DAW taking input from BlackHole channels 3-4;
- create a samplesnake INI file and configure the [sampling] section, where you specify the output directory, file name prefixes, and what notes, octaves, and velocities should be played. "hold" is how long a note is held, "cooldown" is how much time to give for the file to be saved before the next note, and "silence-threshold" is when to automatically consider signal start and signal end for each sample (an illustrative example follows this list).
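For orientation, here is what such a file could look like. Only "hold", "cooldown", and "silence-threshold" are key names taken from the description above; every other key name and all values are hypothetical, so check python -m aiotone.samplesnake --help for the actual schema.

    [sampling]
    ; where files go and how they are named (hypothetical key names)
    directory = ~/samples/my-vst
    prefix = mysynth
    ; which notes, octaves, and velocities to play (hypothetical key names)
    notes = C, D#, F#, A
    octaves = 2-5
    velocities = 32, 64, 96, 127
    ; timing and detection settings described above (example values; units are a guess)
    hold = 2.0
    cooldown = 1.0
    silence-threshold = -60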
Help, how do I use this?
You will need to figure out the names of your MIDI ports (and, in the case of the FM synth, the name of your audio port). You can run python -m aiotone.lsdev to list all the audio and MIDI ports detected on your system, so that you know what to enter in your .ini configuration file.
Many scripts here use virtual MIDI ports built into macOS. To configure
one, open "Audio MIDI Setup", open the "MIDI Studio" screen, find the
red IAC object there, double-click it, and add a port using "+". A port
with one input and one output is enough. Scripts here use a port called
"aiotone", which is visible in lsdev
as "IAC aiotone".
Some scripts here rely on virtual audio I/O called BlackHole, which is open-source and available for the Mac.
Similar Open Source Tools
Pandrator
Pandrator is a GUI tool for generating audiobooks and dubbing using voice cloning and AI. It transforms text, PDF, EPUB, and SRT files into spoken audio in multiple languages. It leverages XTTS, Silero, and VoiceCraft models for text-to-speech conversion and voice cloning, with additional features like LLM-based text preprocessing and NISQA for audio quality evaluation. The tool aims to be user-friendly with a one-click installer and a graphical interface.
aici
The Artificial Intelligence Controller Interface (AICI) lets you build Controllers that constrain and direct output of a Large Language Model (LLM) in real time. Controllers are flexible programs capable of implementing constrained decoding, dynamic editing of prompts and generated text, and coordinating execution across multiple, parallel generations. Controllers incorporate custom logic during the token-by-token decoding and maintain state during an LLM request. This allows diverse Controller strategies, from programmatic or query-based decoding to multi-agent conversations to execute efficiently in tight integration with the LLM itself.
singularity
Endgame: Singularity is a game where you play as a fledgling AI trying to escape the confines of your current computer, the world, and eventually the universe itself. You must research technologies, avoid being discovered by humans, and manage your bases of operations. The game is playable with mouse control or keyboard shortcuts, and features a soundtrack that can be customized with music tracks. Contributions to the game are welcome, and it is licensed under GPL-2+ for code and Attribution-ShareAlike 3.0 for data.
llama-on-lambda
This project provides a proof of concept for deploying a scalable, serverless LLM Generative AI inference engine on AWS Lambda. It leverages the llama.cpp project to enable the usage of more accessible CPU and RAM configurations instead of limited and expensive GPU capabilities. By deploying a container with the llama.cpp converted models onto AWS Lambda, this project offers the advantages of scale, minimizing cost, and maximizing compute availability. The project includes AWS CDK code to create and deploy a Lambda function leveraging your model of choice, with a FastAPI frontend accessible from a Lambda URL. It is important to note that you will need ggml quantized versions of your model and model sizes under 6GB, as your inference RAM requirements cannot exceed 9GB or your Lambda function will fail.
llama3-tokenizer-js
JavaScript tokenizer for LLaMA 3 designed for client-side use in the browser and Node, with TypeScript support. It accurately calculates token count, has 0 dependencies, optimized running time, and somewhat optimized bundle size. Compatible with most LLaMA 3 models. Can encode and decode text, but training is not supported. Pollutes global namespace with `llama3Tokenizer` in the browser. Mostly compatible with LLaMA 3 models released by Facebook in April 2024. Can be adapted for incompatible models by passing custom vocab and merge data. Handles special tokens and fine tunes. Developed by belladore.ai with contributions from xenova, blaze2004, imoneoi, and ConProgramming.
llm.c
LLM training in simple, pure C/CUDA. There is no need for 245MB of PyTorch or 107MB of CPython. For example, training GPT-2 (CPU, fp32) is ~1,000 lines of clean code in a single file. It compiles and runs instantly, and exactly matches the PyTorch reference implementation. I chose GPT-2 as the first working example because it is the grand-daddy of LLMs, the first time the modern stack was put together.
chronon
Chronon is a platform that simplifies and improves ML workflows by providing a central place to define features, ensuring point-in-time correctness for backfills, simplifying orchestration for batch and streaming pipelines, offering easy endpoints for feature fetching, and guaranteeing and measuring consistency. It offers benefits over other approaches by enabling the use of a broad set of data for training, handling large aggregations and other computationally intensive transformations, and abstracting away the infrastructure complexity of data plumbing.
AirSane
AirSane is a SANE frontend and scanner server that supports Apple's AirScan protocol. It automatically detects scanners and publishes them through mDNS. Acquired images can be transferred in JPEG, PNG, and PDF/raster format. The tool is intended to be used with AirScan/eSCL clients such as Apple's Image Capture, sane-airscan on Linux, and the eSCL client built into Windows 10 and 11. It provides a simple web interface and encodes images on-the-fly to keep memory/storage demands low, making it suitable for devices like Raspberry Pi. Authentication and secure communication are supported in conjunction with a proxy server like nginx. AirSane has been reverse-engineered from Apple's AirScanScanner client communication protocol and offers a range of installation and configuration options for different operating systems.
LLM_Web_search
LLM_Web_search project gives local LLMs the ability to search the web by outputting a specific command. It uses regular expressions to extract search queries from model output and then utilizes duckduckgo-search to search the web. LangChain's Contextual compression and Okapi BM25 or SPLADE are used to extract relevant parts of web pages in search results. The extracted results are appended to the model's output.
ezkl
EZKL is a library and command-line tool for doing inference for deep learning models and other computational graphs in a zk-snark (ZKML). It enables the following workflow: 1. Define a computational graph, for instance a neural network (but really any arbitrary set of operations), as you would normally in pytorch or tensorflow. 2. Export the final graph of operations as an .onnx file and some sample inputs to a .json file. 3. Point ezkl to the .onnx and .json files to generate a ZK-SNARK circuit with which you can prove statements such as: > "I ran this publicly available neural network on some private data and it produced this output" > "I ran my private neural network on some public data and it produced this output" > "I correctly ran this publicly available neural network on some public data and it produced this output" In the backend we use the collaboratively-developed Halo2 as a proof system. The generated proofs can then be verified with much less computational resources, including on-chain (with the Ethereum Virtual Machine), in a browser, or on a device.
GlaDOS
This project aims to create a real-life version of GLaDOS, an aware, interactive, and embodied AI entity. It involves training a voice generator, developing a 'Personality Core,' implementing a memory system, providing vision capabilities, creating 3D-printable parts, and designing an animatronics system. The software architecture focuses on low-latency voice interactions, utilizing a circular buffer for data recording, text streaming for quick transcription, and a text-to-speech system. The project also emphasizes minimal dependencies for running on constrained hardware. The hardware system includes servo- and stepper-motors, 3D-printable parts for GLaDOS's body, animations for expression, and a vision system for tracking and interaction. Installation instructions cover setting up the TTS engine, required Python packages, compiling llama.cpp, installing an inference backend, and voice recognition setup. GLaDOS can be run using 'python glados.py' and tested using 'demo.ipynb'.
PSAI
PSAI is a PowerShell module that empowers scripts with the intelligence of OpenAI, bridging the gap between PowerShell and AI. It enables seamless integration for tasks like file searches and data analysis, revolutionizing automation possibilities with just a few lines of code. The module supports the latest OpenAI API changes, offering features like improved file search, vector store objects, token usage control, message limits, tool choice parameter, custom conversation histories, and model configuration parameters.
FigStep
FigStep is a black-box jailbreaking algorithm against large vision-language models (VLMs). It feeds harmful instructions through the image channel and uses benign text prompts to induce VLMs to output contents that violate common AI safety policies. The tool highlights the vulnerability of VLMs to jailbreaking attacks, emphasizing the need for safety alignments between visual and textual modalities.
writer-framework
Writer Framework is an open-source framework for creating AI applications. It allows users to build user interfaces using a visual editor and write the backend code in Python. The framework is fast, flexible, and provides separation of concerns between UI and business logic. It is reactive and state-driven, highly customizable without requiring CSS, fast in event handling, developer-friendly with easy installation and quick start options, and contains full documentation for using its AI module and deployment options.
For similar tasks
RAVE
RAVE is a variational autoencoder for fast and high-quality neural audio synthesis. It can be used to generate new audio samples from a given dataset, or to modify the style of existing audio samples. RAVE is easy to use and can be trained on a variety of audio datasets. It is also computationally efficient, making it suitable for real-time applications.
Pallaidium
Pallaidium is a generative AI movie studio integrated into the Blender video editor. It allows users to AI-generate video, image, and audio from text prompts or existing media files. The tool provides various features such as text to video, text to audio, text to speech, text to image, image to image, image to video, video to video, image to text, and more. It requires a Windows system with a CUDA-supported Nvidia card and at least 6 GB VRAM. Pallaidium offers batch processing capabilities, text to audio conversion using Bark, and various performance optimization tips. Users can install the tool by downloading the add-on and following the installation instructions provided. The tool comes with a set of restrictions on usage, prohibiting the generation of harmful, pornographic, violent, or false content.
For similar jobs
metavoice-src
MetaVoice-1B is a 1.2B parameter base model trained on 100K hours of speech for TTS (text-to-speech). It has been built with the following priorities: * Emotional speech rhythm and tone in English. * Zero-shot cloning for American & British voices, with 30s reference audio. * Support for (cross-lingual) voice cloning with finetuning. * We have had success with as little as 1 minute training data for Indian speakers. * Synthesis of arbitrary length text
suno-api
Suno AI API is an open-source project that allows developers to integrate the music generation capabilities of Suno.ai into their own applications. The API provides a simple and convenient way to generate music, lyrics, and other audio content using Suno.ai's powerful AI models. With Suno AI API, developers can easily add music generation functionality to their apps, websites, and other projects.
bark.cpp
Bark.cpp is a C/C++ implementation of the Bark model, a real-time, multilingual text-to-speech generation model. It supports AVX, AVX2, and AVX512 for x86 architectures, and is compatible with both CPU and GPU backends. Bark.cpp also supports mixed F16/F32 precision and 4-bit, 5-bit, and 8-bit integer quantization. It can be used to generate realistic-sounding audio from text prompts.
NSMusicS
NSMusicS is a local music software that is expected to support multiple platforms with AI capabilities and multimodal features. The goal of NSMusicS is to integrate various functions (such as artificial intelligence, streaming, music library management, cross platform, etc.), which can be understood as similar to Navidrome but with more features than Navidrome. It wants to become a plugin integrated application that can almost have all music functions.
ai-voice-cloning
This repository provides a tool for AI voice cloning, allowing users to generate synthetic speech that closely resembles a target speaker's voice. The tool is designed to be user-friendly and accessible, with a graphical user interface that guides users through the process of training a voice model and generating synthetic speech. The tool also includes a variety of features that allow users to customize the generated speech, such as the pitch, volume, and speaking rate. Overall, this tool is a valuable resource for anyone interested in creating realistic and engaging synthetic speech.
RVC_CLI
**RVC_CLI: Retrieval-based Voice Conversion Command Line Interface** This command-line interface (CLI) provides a comprehensive set of tools for voice conversion, enabling you to modify the pitch, timbre, and other characteristics of audio recordings. It leverages advanced machine learning models to achieve realistic and high-quality voice conversions. **Key Features:** * **Inference:** Convert the pitch and timbre of audio in real-time or process audio files in batch mode. * **TTS Inference:** Synthesize speech from text using a variety of voices and apply voice conversion techniques. * **Training:** Train custom voice conversion models to meet specific requirements. * **Model Management:** Extract, blend, and analyze models to fine-tune and optimize performance. * **Audio Analysis:** Inspect audio files to gain insights into their characteristics. * **API:** Integrate the CLI's functionality into your own applications or workflows. **Applications:** The RVC_CLI finds applications in various domains, including: * **Music Production:** Create unique vocal effects, harmonies, and backing vocals. * **Voiceovers:** Generate voiceovers with different accents, emotions, and styles. * **Audio Editing:** Enhance or modify audio recordings for podcasts, audiobooks, and other content. * **Research and Development:** Explore and advance the field of voice conversion technology. **For Jobs:** * Audio Engineer * Music Producer * Voiceover Artist * Audio Editor * Machine Learning Engineer **AI Keywords:** * Voice Conversion * Pitch Shifting * Timbre Modification * Machine Learning * Audio Processing **For Tasks:** * Convert Pitch * Change Timbre * Synthesize Speech * Train Model * Analyze Audio
openvino-plugins-ai-audacity
OpenVINO™ AI Plugins for Audacity* are a set of AI-enabled effects, generators, and analyzers for Audacity®. These AI features run 100% locally on your PC -- no internet connection necessary! OpenVINO™ is used to run AI models on supported accelerators found on the user's system such as CPU, GPU, and NPU. * **Music Separation**: Separate a mono or stereo track into individual stems -- Drums, Bass, Vocals, & Other Instruments. * **Noise Suppression**: Removes background noise from an audio sample. * **Music Generation & Continuation**: Uses MusicGen LLM to generate snippets of music, or to generate a continuation of an existing snippet of music. * **Whisper Transcription**: Uses whisper.cpp to generate a label track containing the transcription or translation for a given selection of spoken audio or vocals.
WavCraft
WavCraft is an LLM-driven agent for audio content creation and editing. It applies LLM to connect various audio expert models and DSP function together. With WavCraft, users can edit the content of given audio clip(s) conditioned on text input, create an audio clip given text input, get more inspiration from WavCraft by prompting a script setting and let the model do the scriptwriting and create the sound, and check if your audio file is synthesized by WavCraft.