NeuroSync_Player

The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation from audio input.


NeuroSync Player is a real-time AI endpoint server that combines text-to-speech with NeuroSync blendshape generation. It includes code for various AI endpoints such as speech-to-text, text-to-speech, embedding, and vision. The tool lets users connect their LLM to Twitch and YouTube, enabling an LLM-powered MetaHuman to respond to viewers in real time. It also offers push-to-talk, face animation integration, and support for blendshapes generated from audio input in Unreal Engine 5. Users can train and fine-tune their own models using NeuroSync Trainer Lite, which uses simplified loss functions and mixed precision for faster training, and supports data augmentation to help with fine-detail reproduction.

README:

NeuroSync Player

12/03/2025 Local Real-Time API Toy

A real-time AI endpoint server that combines TTS and NeuroSync generations is now available.

Includes code for various helpful AI endpoints (STT, TTS, embedding, vision) to use with the player or in your own projects. Be mindful of the licences for your use case.

21/02/2025 Scaling UP! | New 228M-parameter model + config added

A milestone has been reached: previous research has brought us to a point where scaling the model up is now possible, with much faster training and better overall quality.

Going from 4 layers and 4 heads to 8 layers and 16 heads means updating your code and model. Please ensure you have the latest versions of the API and player, as the new model requires some architectural changes.

Enjoy!

19/02/2025 Trainer updates

  • Trainer: Use NeuroSync Trainer Lite for training and fine-tuning.

  • Simplified loss: removed the second-order smoothness loss (the code is left in if you want to research the differences; mostly it just squeezes the end result, producing choppy animation unless smoothing is applied)

  • Mixed precision: less memory usage and faster training (see the sketch after this list)

  • Data augmentation: interpolate a slow set and a fast set of data from your dataset to help with fine-detail reproduction. This uses a lot of memory, so take care; generally, adding just the fast set is best, as adding the slow set oversaturates the data with slow, noisy samples (more work to do here, obviously!)

NEW: llm_to_face.py now has streaming and a queue for faster response times, as well as a local TTS option.

A toy demo of how one might talk to an AI using NeuroSync, with context added for multi-turn conversation.

Use a local LLM or the OpenAI API; just set the boolean and add your key.

Demo Build: Download the demo build to test NeuroSync with an Unreal project (aka, a free realistic AI companion when used with llm_to_face.py).

Talk to a NeuroSync prototype live on Twitch: Visit Mai

Overview

The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation from audio input.

Features:

  • Real-time facial animation
  • Integration with Unreal Engine 5 via LiveLink
  • Supports blendshapes generated from audio inputs

NeuroSync Model

To generate facial blendshapes from audio, you'll need the NeuroSync audio-to-face blendshape transformer model. You can:

  • Run the model locally through the NeuroSync Local API
  • Use the hosted alpha API (sign up at neurosync.info)

Switching Between Local and Non-Local API

The player can connect to either the local API or the alpha API depending on your needs. To switch between the two, simply change the boolean value in the utils/neurosync/neurosync_api_connect.py file:

Visit neurosync.info to sign up for alpha access.
