Sunshine-AIO
An all-in-one step-by-step guide to set up Sunshine with additional tools.
Stars: 54
Sunshine-AIO is an all-in-one step-by-step guide to set up Sunshine with all necessary tools for Windows users. It provides a dedicated display for game streaming, virtual monitor switching, automatic resolution adjustment, resource-saving features, game launcher integration, and stream management. The project aims to evolve into an AIO tool as it progresses, welcoming contributions from users.
README:
An all-in-one step-by-step guide to set up Sunshine with all the tools you need (Windows only at the moment).
(It's initially just a guide, but as it progresses, it will become more like an AIO tool.)
Contributions to this project are welcomed and highly appreciated.
There are several reasons:
- A dedicated display for your game stream will be created by the Virtual Display Driver.
- Sunshine Virtual Monitor lets you switch between your current desktop (or any number of displays you have) and the Virtual Display.
- It also automatically adjusts the resolution, quality, HDR option, and frame rate of the Virtual Display based on the client's (Moonlight) settings.
- To save resources for your gaming experience, it deactivates your current displays during the stream and restores your original setup once the stream is finished.
- Playnite lets you gather all your games from any platform (including games downloaded elsewhere) in one launcher for your convenience.
- Playnite Watcher lets you stop the stream when you close your game (Sunshine does not support this natively).
- It also lets you automatically import all your games into Sunshine with one click.
- Sunshine Installation
- Virtual Display Driver
- Sunshine Virtual Monitor
- Playnite Installation
- Playnite Watcher
- Enjoy
- Contributing
- License
- Acknowledgements
- Star History
Sunshine Installation
- Download Sunshine and install it on your computer. For Windows, download the sunshine-windows-installer.exe file.
  To stream remotely, make sure to open these ports in your router settings and redirect them to your PC (a sketch for the matching Windows Firewall rules follows these steps):
  - TCP: 47984, 47989, 47990, 48010
  - UDP: 47998, 47999, 48000
- Follow the installation steps, then come back here when done.
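Port forwarding itself happens in your router's admin interface, but if you also want to allow these ports through the Windows Firewall, a minimal sketch (run from an elevated PowerShell) could look like the lines below. The rule names are arbitrary examples, and the Sunshine installer may already create equivalent rules, so treat this as optional.
# Allow Sunshine's streaming ports through the local Windows Firewall (example rule names).
New-NetFirewallRule -DisplayName "Sunshine TCP" -Direction Inbound -Protocol TCP -LocalPort 47984, 47989, 47990, 48010 -Action Allow
New-NetFirewallRule -DisplayName "Sunshine UDP" -Direction Inbound -Protocol UDP -LocalPort 47998, 47999, 48000 -Action Allow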
Virtual Display Driver
- Disable the newly created display from Device Manager, or open a privileged terminal and run the command pnputil /disable-device /deviceid root\iddsampledriver.
If you plan to use Moonlight from a phone, make sure to add the correct resolutions of all your clients to the C:\IddSampleDriver\option.txt file if they are not already there.
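Both notes above can be handled from an elevated PowerShell if you ever need to do them by hand. This is only an illustrative sketch: the re-enable command simply mirrors the disable command from the guide, and the appended resolution (2560, 1440, 120) is a placeholder that assumes option.txt lists one "width, height, refresh rate" combination per line, so mirror whatever format your file already uses.
# Check the virtual display's current state (the /deviceid filter needs a recent Windows 10/11 build).
pnputil /enum-devices /deviceid root\iddsampledriver
# Re-enable the virtual display manually if needed (the sunshine-virtual-monitor scripts normally toggle it for you).
pnputil /enable-device /deviceid root\iddsampledriver
# Append a placeholder client resolution; copy the format of the entries already in the file.
Add-Content -Path 'C:\IddSampleDriver\option.txt' -Value '2560, 1440, 120'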
Sunshine Virtual Monitor
- Download Sunshine Virtual Monitor.
- Extract the sunshine-virtual-monitor-main.zip file to a secure location (if the folder is deleted, the tool will stop working) and open it.
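For example, a quick PowerShell sketch for this step, assuming the zip is in your Downloads folder and using C:\Tools as a purely illustrative destination:
# Extract sunshine-virtual-monitor-main.zip to a folder you will not delete later (C:\Tools is just an example).
Expand-Archive -Path "$env:USERPROFILE\Downloads\sunshine-virtual-monitor-main.zip" -DestinationPath 'C:\Tools'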
In the next steps, you can either follow these quick steps or follow the original steps from sunshine-virtual-monitor.
- Download MultiMonitorTool for Windows 64-bit (recommended) or MultiMonitorTool for Windows 32-bit (older computers).
- Extract the multimonitortool*.zip file to a multimonitortool-x64 folder and copy this folder into the sunshine-virtual-monitor-main folder.
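Assuming the same illustrative C:\Tools location as above and the 64-bit archive name, this step could be scripted roughly as follows (use the filename you actually downloaded):
# Extract MultiMonitorTool into its own folder, then copy that folder into sunshine-virtual-monitor-main.
Expand-Archive -Path "$env:USERPROFILE\Downloads\multimonitortool-x64.zip" -DestinationPath 'C:\Tools\multimonitortool-x64'
Copy-Item -Path 'C:\Tools\multimonitortool-x64' -Destination 'C:\Tools\sunshine-virtual-monitor-main' -Recurse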
- Open a privileged PowerShell: press the Windows key, type powershell, then press Ctrl + Shift + Enter.
- Install the WindowsDisplayManager module by typing the command:
  Install-Module -Name WindowsDisplayManager
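Depending on your setup, PowerShell may ask you to confirm installing from the PSGallery repository. If you prefer a per-user install, or want to verify the module afterwards, these optional variants work as well:
# Per-user install avoids machine-wide changes.
Install-Module -Name WindowsDisplayManager -Scope CurrentUser
# Confirm the module is now available.
Get-Module -ListAvailable -Name WindowsDisplayManager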
- To enable script execution, you need to change your execution policy from Default to RemoteSigned:
  Set-ExecutionPolicy RemoteSigned
  Source: PowerShell execution policies
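If you would rather not change the machine-wide policy, a scoped variant plus a quick check is sketched below. Note that the policy that matters is the one for the account that actually runs the scripts, so if Sunshine launches them under a different account, stick with the command from the guide.
# Limit the change to the current user instead of the whole machine.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Review the effective policy for each scope.
Get-ExecutionPolicy -List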
- Download vsync-toggle and copy the file to the sunshine-virtual-monitor-main folder.
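A one-line sketch for the copy, assuming the file landed in your Downloads folder and using the same illustrative C:\Tools location (the wildcard stands in for the actual release filename):
# Copy the downloaded vsync-toggle executable next to the sunshine-virtual-monitor scripts.
Copy-Item -Path "$env:USERPROFILE\Downloads\vsynctoggle*.exe" -Destination 'C:\Tools\sunshine-virtual-monitor-main'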
Follow the steps in Sunshine Setup.
(Tip) Copy and paste these commands into PowerShell to have the config.do_cmd and config.undo_cmd commands written for you:
# Locate the sunshine-virtual-monitor-main folder on the C: drive (adjust the drive letter below if you moved it elsewhere).
$folderName = "sunshine-virtual-monitor-main"
$folderPath = Get-ChildItem -Path "C:\" -Directory -Filter $folderName -Recurse -ErrorAction SilentlyContinue | Select-Object -First 1
# Build the paths to the setup/teardown scripts and the log file.
$setupPath = $folderPath.FullName + "\setup_sunvdm.ps1"
$teardownPath = $folderPath.FullName + "\teardown_sunvdm.ps1"
$sunvdmLogPath = $folderPath.FullName + "\sunvdm.log"
# Print the two command lines to paste into Sunshine's config.do_cmd and config.undo_cmd fields.
Write-Host "$(Clear-Host)config.do_cmd:`n`ncmd /C powershell.exe -File $setupPath %SUNSHINE_CLIENT_WIDTH% %SUNSHINE_CLIENT_HEIGHT% %SUNSHINE_CLIENT_FPS% %SUNSHINE_CLIENT_HDR% > $sunvdmLogPath 2>&1`n`n`n`nconfig.undo_cmd:`n`ncmd /C powershell.exe -File $teardownPath >> $sunvdmLogPath 2>&1`n`n`n`n"
If you relocated the sunshine-virtual-monitor-main folder to a different disk, change the drive letter in the -Path argument of Get-ChildItem to match the new one, for example "D:\".
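As an illustration, if the folder were found at C:\Tools\sunshine-virtual-monitor-main (a hypothetical location), the script above would print commands similar to the following, ready to paste into Sunshine's config.do_cmd and config.undo_cmd fields:
config.do_cmd:
cmd /C powershell.exe -File C:\Tools\sunshine-virtual-monitor-main\setup_sunvdm.ps1 %SUNSHINE_CLIENT_WIDTH% %SUNSHINE_CLIENT_HEIGHT% %SUNSHINE_CLIENT_FPS% %SUNSHINE_CLIENT_HDR% > C:\Tools\sunshine-virtual-monitor-main\sunvdm.log 2>&1
config.undo_cmd:
cmd /C powershell.exe -File C:\Tools\sunshine-virtual-monitor-main\teardown_sunvdm.ps1 >> C:\Tools\sunshine-virtual-monitor-main\sunvdm.log 2>&1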
Playnite Installation
Download Playnite, install it and add all of your games.
Playnite Watcher
Download Playnite Watcher and extract it to a secure location.
Make sure to follow these steps: PlayNite Watcher Script Guide.
Enjoy
Configure your Moonlight client to connect to Sunshine and enjoy optimized streaming :)
Contributing
Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (git checkout -b feature/NewFeature)
- Commit your Changes (git commit -m 'Add some NewFeature')
- Push to the Branch (git push origin feature/NewFeature)
- Open a Pull Request
Thanks to every contributor who has contributed to this project.
License
Distributed under the MIT License. See LICENSE for more information.
Acknowledgements
Shoutout to LizardByte for the Sunshine repo: https://github.com/LizardByte/Sunshine
Shoutout to itsmikethetech for the Virtual Display Driver repo: https://github.com/itsmikethetech/Virtual-Display-Driver
Thanks to Cynary for the Sunshine Virtual Monitor scripts: https://github.com/Cynary/sunshine-virtual-monitor
Shoutout to JosefNemec for Playnite: https://github.com/JosefNemec/Playnite
Shoutout to Nonary for the PlayNiteWatcher script: https://github.com/Nonary/PlayNiteWatcher
Author/Maintainer: Garoh | Discord: garohrl
Similar Open Source Tools
Sunshine-AIO
Sunshine-AIO is an all-in-one step-by-step guide to set up Sunshine with all necessary tools for Windows users. It provides a dedicated display for game streaming, virtual monitor switching, automatic resolution adjustment, resource-saving features, game launcher integration, and stream management. The project aims to evolve into an AIO tool as it progresses, welcoming contributions from users.
starcoder2-self-align
StarCoder2-Instruct is an open-source pipeline that introduces StarCoder2-15B-Instruct-v0.1, a self-aligned code Large Language Model (LLM) trained with a fully permissive and transparent pipeline. It generates instruction-response pairs to fine-tune StarCoder-15B without human annotations or data from proprietary LLMs. The tool is primarily finetuned for Python code generation tasks that can be verified through execution, with potential biases and limitations. Users can provide response prefixes or one-shot examples to guide the model's output. The model may have limitations with other programming languages and out-of-domain coding tasks.
aircrack-ng
Aircrack-ng is a comprehensive suite of tools designed to evaluate the security of WiFi networks. It covers various aspects of WiFi security, including monitoring, attacking (replay attacks, deauthentication, fake access points), testing WiFi cards and driver capabilities, and cracking WEP and WPA PSK. The tools are command line-based, allowing for extensive scripting and have been utilized by many GUIs. Aircrack-ng primarily works on Linux but also supports Windows, macOS, FreeBSD, OpenBSD, NetBSD, Solaris, and eComStation 2.
modelscope-agent
ModelScope-Agent is a customizable and scalable Agent framework. A single agent has abilities such as role-playing, LLM calling, tool usage, planning, and memory. It mainly has the following characteristics: - **Simple Agent Implementation Process**: Simply specify the role instruction, LLM name, and tool name list to implement an Agent application. The framework automatically arranges workflows for tool usage, planning, and memory. - **Rich models and tools**: The framework is equipped with rich LLM interfaces, such as Dashscope and Modelscope model interfaces, OpenAI model interfaces, etc. Built in rich tools, such as **code interpreter**, **weather query**, **text to image**, **web browsing**, etc., make it easy to customize exclusive agents. - **Unified interface and high scalability**: The framework has clear tools and LLM registration mechanism, making it convenient for users to expand more diverse Agent applications. - **Low coupling**: Developers can easily use built-in tools, LLM, memory, and other components without the need to bind higher-level agents.
obs-localvocal
LocalVocal is a Speech AI assistant OBS Plugin that enables users to transcribe speech into text and translate it into any language locally on their machine. The plugin runs OpenAI's Whisper for real-time speech processing and prediction. It supports features like transcribing audio in real-time, displaying captions on screen, sending captions to files, syncing captions with recordings, and translating captions to major languages. Users can bring their own Whisper model, filter or replace captions, and experience partial transcriptions for streaming. The plugin is privacy-focused, requiring no GPU, cloud costs, network, or downtime.
sunnypilot
Sunnypilot is a fork of comma.ai's openpilot, offering a unique driving experience for over 250+ supported car makes and models with modified behaviors of driving assist engagements. It complies with comma.ai's safety rules and provides features like Modified Assistive Driving Safety, Dynamic Lane Profile, Enhanced Speed Control, Gap Adjust Cruise, and more. Users can install it on supported devices and cars following detailed instructions, ensuring a safe and enhanced driving experience.
llama.cpp
llama.cpp is a C++ implementation of LLaMA, a large language model from Meta. It provides a command-line interface for inference and can be used for a variety of tasks, including text generation, translation, and question answering. llama.cpp is highly optimized for performance and can be run on a variety of hardware, including CPUs, GPUs, and TPUs.
quickvid
QuickVid is an open-source video summarization tool that uses AI to generate summaries of YouTube videos. It is built with Whisper, GPT, LangChain, and Supabase. QuickVid can be used to save time and get the essence of any YouTube video with intelligent summarization.
anything-llm
AnythingLLM is a full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use as well as supporting multi-user management and permissions.
infinity
Infinity is a high-throughput, low-latency REST API for serving vector embeddings, supporting all sentence-transformer models and frameworks. It is developed under the MIT License and powers inference behind Gradient.ai. The API allows users to deploy models from SentenceTransformers, offers fast inference backends utilizing various accelerators, dynamic batching for efficient processing, correct and tested implementation, and easy-to-use API built on FastAPI with Swagger documentation. Users can embed text, rerank documents, and perform text classification tasks using the tool. Infinity supports various models from Huggingface and provides flexibility in deployment via CLI, Docker, Python API, and cloud services like dstack. The tool is suitable for tasks like embedding, reranking, and text classification.
node-llama-cpp
node-llama-cpp is a tool that allows users to run AI models locally on their machines. It provides pre-built bindings with the option to build from source using cmake. Users can interact with text generation models, chat with models using a chat wrapper, and force models to generate output in a parseable format like JSON. The tool supports Metal and CUDA, offers CLI functionality for chatting with models without coding, and ensures up-to-date compatibility with the latest version of llama.cpp. Installation includes pre-built binaries for macOS, Linux, and Windows, with the option to build from source if binaries are not available for the platform.
obs-localvocal
LocalVocal is a live-streaming AI assistant plugin for OBS that allows you to transcribe audio speech into text and perform various language processing functions on the text using AI / LLMs (Large Language Models). It's privacy-first, with all data staying on your machine, and requires no GPU, cloud costs, network, or downtime.
openmeter
OpenMeter is a real-time and scalable usage metering tool for AI, usage-based billing, infrastructure, and IoT use cases. It provides a REST API for integrations and offers client SDKs in Node.js, Python, Go, and Web. OpenMeter is licensed under the Apache 2.0 License.
stable-diffusion.cpp
The stable-diffusion.cpp repository provides an implementation for inferring stable diffusion in pure C/C++. It offers features such as support for different versions of stable diffusion, lightweight and dependency-free implementation, various quantization support, memory-efficient CPU inference, GPU acceleration, and more. Users can download the built executable program or build it manually. The repository also includes instructions for downloading weights, building from scratch, using different acceleration methods, running the tool, converting weights, and utilizing various features like Flash Attention, ESRGAN upscaling, PhotoMaker support, and more. Additionally, it mentions future TODOs and provides information on memory requirements, bindings, UIs, contributors, and references.
clearml-serving
ClearML Serving is a command line utility for model deployment and orchestration, enabling model deployment including serving and preprocessing code to a Kubernetes cluster or custom container based solution. It supports machine learning models like Scikit Learn, XGBoost, LightGBM, and deep learning models like TensorFlow, PyTorch, ONNX. It provides a customizable RestAPI for serving, online model deployment, scalable solutions, multi-model per container, automatic deployment, canary A/B deployment, model monitoring, usage metric reporting, metric dashboard, and model performance metrics. ClearML Serving is modular, scalable, flexible, customizable, and open source.
scalene
Scalene is a high-performance CPU, GPU, and memory profiler for Python that provides detailed information and runs faster than many other profilers. It incorporates AI-powered proposed optimizations, allowing users to generate optimization suggestions by clicking on specific lines or regions of code. Scalene separates time spent in Python from native code, highlights hotspots, and identifies memory usage per line. It supports GPU profiling on NVIDIA-based systems and detects memory leaks. Users can generate reduced profiles, profile specific functions using decorators, and suspend/resume profiling for background processes. Scalene is available as a pip or conda package and works on various platforms. It offers features like profiling at the line level, memory trends, copy volume reporting, and leak detection.
For similar tasks
Sunshine-AIO
Sunshine-AIO is an all-in-one step-by-step guide to set up Sunshine with all necessary tools for Windows users. It provides a dedicated display for game streaming, virtual monitor switching, automatic resolution adjustment, resource-saving features, game launcher integration, and stream management. The project aims to evolve into an AIO tool as it progresses, welcoming contributions from users.
For similar jobs
Sunshine-AIO
Sunshine-AIO is an all-in-one step-by-step guide to set up Sunshine with all necessary tools for Windows users. It provides a dedicated display for game streaming, virtual monitor switching, automatic resolution adjustment, resource-saving features, game launcher integration, and stream management. The project aims to evolve into an AIO tool as it progresses, welcoming contributions from users.
better-genshin-impact
BetterGI is a project based on computer vision technology, which aims to make Genshin Impact better. It can automatically pick up items, skip dialogues, automatically select options, automatically submit items, close pop-up pages, etc. When talking to Katherine, it can automatically receive the "Daily Commission" rewards and automatically re-dispatch. When the automatic plot function is turned on, this function will take effect, and the invitation options will be automatically selected. AI recognizes automatic casting, automatically reels in when the fish is hooked, and automatically completes the fishing progress. Help you easily complete the Seven Saint Summoning character invitation, weekly visitor challenge and other PVE content. Automatically use the "King Tree Blessing" with the `Z` key, and use the principle of refreshing wood by going online and offline to hang up a backpack full of wood. Write combat scripts to let the team fight automatically according to your strategy. Fully automatic secret realm hangs up to restore physical strength, automatically enters the secret realm to open the key, fight, walk to the ancient tree and receive rewards. Click the teleportation point on the map, or if there is a teleportation point in the list that appears after clicking, it will automatically click the teleportation point and teleport. Set a shortcut key, and long press to continuously rotate the perspective horizontally (of course you can also use it to rotate the grass god). Quickly switch between "Details" and "Enhance" pages to skip the display of holy relic enhancement results and quickly +20. You can quickly purchase items in the store in full quantity, which is suitable for quickly clearing event redemptions,塵歌壺 store redemptions, etc.
vector_companion
Vector Companion is an AI tool designed to act as a virtual companion on your computer. It consists of two personalities, Axiom and Axis, who can engage in conversations based on what is happening on the screen. The tool can transcribe audio output and user microphone input, take screenshots, and read text via OCR to create lifelike interactions. It requires specific prerequisites to run on Windows and uses VB Cable to capture audio. Users can interact with Axiom and Axis by running the main script after installation and configuration.