
airdash
File-sharing Flutter WebRTC app for sending files to any device from anywhere
Stars: 533

AirDash is a file sharing tool that allows users to transfer photos and files securely between devices on different platforms. It offers maximum privacy and security by encrypting files and transferring them directly. Users can quickly start transfers using native mobile share sheet and drag and drop on desktop. The tool supports all major platforms and app stores, and automatically selects the best and fastest connection available. Key technologies include Flutter 3 for app development, WebRTC for file transfers, and Firebase services for signaling, config storage, and release automation. AirDash is free to use and suitable for sharing any number of files of any size.
README:
Transfer photos and files to any device.
Download:
- Apple App Store (iOS & macOS)
- Homebrew (macOS)
- Google Play (Android)
- Microsoft Store (Windows)
- Snap Store (Linux)
Features:
- Support for all major platforms and app stores (iOS, macOS, Windows, Linux and Android)
- Free forever to send any number of files of any size
- Maximum privacy and security by fully encrypting files and transferring them directly between devices
- Quickly start transfers using native mobile share sheet and drag and drop on desktop
- Send files anywhere (no need to be on the same network or nearby)
- Automatically uses the best and fastest connection available (wifi, mobile internet, ethernet etc)
Technologies:
- Flutter 3 (iOS, macOS, Android, Linux and Windows apps)
- WebRTC (file and data transfers; see the signaling sketch after this list)
- Firebase Firestore (WebRTC signaling and config storage)
- Firebase Functions (device pairing and config automation)
- Firebase Hosting (website and static files hosting)
- App Store Connect API and Microsoft Store submission API (release automation)
- Mixpanel (web and app analytics)
- Sentry (app monitoring and error tracking)
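To make the WebRTC-over-Firestore combination above more concrete, here is a minimal conceptual sketch of how an offer/answer exchange through a Firestore document can look in Flutter. This is not AirDash's actual implementation: the collection and field names are made up, ICE candidate exchange is omitted, and it assumes the flutter_webrtc and cloud_firestore packages plus an already initialized Firebase app.

```dart
// Conceptual sketch only (not AirDash's code): exchange a WebRTC offer/answer
// through a Firestore document. Collection and field names are illustrative,
// and ICE candidate exchange is omitted for brevity.
import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:flutter_webrtc/flutter_webrtc.dart';

Future<RTCPeerConnection> connectToPeer(String peerId) async {
  // Assumes Firebase.initializeApp() has already been called.
  final doc = FirebaseFirestore.instance.collection('signaling').doc(peerId);

  final pc = await createPeerConnection({
    'iceServers': [
      {'urls': 'stun:stun.l.google.com:19302'},
    ],
  });

  // Create an offer and publish it for the remote peer to pick up.
  final offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  await doc.set({
    'offer': {'sdp': offer.sdp, 'type': offer.type},
  });

  // Apply the answer once the remote peer writes it back to the same document.
  var answered = false;
  doc.snapshots().listen((snapshot) async {
    final answer = snapshot.data()?['answer'];
    if (answer != null && !answered) {
      answered = true;
      await pc.setRemoteDescription(
        RTCSessionDescription(answer['sdp'], answer['type']),
      );
    }
  });

  return pc;
}
```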
Development setup:
- Create a Firebase project (https://console.firebase.google.com) and enable Firestore and anonymous authentication
- Create a .env file by duplicating the .env.sample file
- Replace the Firebase project ID and web API key in the .env file with the ones for your project (Firebase console -> Project settings); a sketch of the expected shape follows these steps
- Run dart tools/scripts.dart app_env to generate an env.dart file
- Deploy the pairing backend function: cd functions && npm i && npx firebase deploy --only pairing
- Run the app from your editor or with flutter run
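The exact variable names are defined by .env.sample in the repository; as a rough sketch (the names below are illustrative and not guaranteed to match the sample file), the resulting .env holds the Firebase project ID and web API key:

```
# Illustrative placeholders only; copy .env.sample for the real variable names.
FIREBASE_PROJECT_ID=your-project-id
FIREBASE_WEB_API_KEY=your-web-api-key
```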
By default, a Google STUN server is used to connect peers. The simplest way to also enable TURN servers is to use https://www.twilio.com/stun-turn. Create a functions/.env file similar to the functions/.env-sample file and deploy the updateTwilioToken backend function.
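For reference, a hypothetical ICE server configuration passed to flutter_webrtc might combine the default Google STUN server with Twilio TURN credentials fetched from that backend function. The TURN URL, username and credential below are placeholders for the short-lived values Twilio returns; this is a sketch, not the app's actual code.

```dart
import 'package:flutter_webrtc/flutter_webrtc.dart';

// Hypothetical sketch: STUN plus Twilio-style TURN. The username/credential
// would come from the updateTwilioToken backend, not be hardcoded like this.
Future<RTCPeerConnection> createConnection(
    String turnUsername, String turnCredential) {
  return createPeerConnection({
    'iceServers': [
      {'urls': 'stun:stun.l.google.com:19302'},
      {
        'urls': 'turn:global.turn.twilio.com:3478?transport=udp',
        'username': turnUsername,
        'credential': turnCredential,
      },
    ],
  });
}
```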
Contributing:
Contributions are very much welcome on everything from bug reports to feature development. If you want to change something major, open an issue about it first to ensure it will be considered for merging.
Release process:
Prepare
- Update libraries
- flutter pub get
- cd ios && pod update
- cd macos && pod update
Update version
- Update changelog.md and version in pubspec.yaml and snapcraft.yaml
- git commit -am vX.X.X
- git tag "vX.X.X"
- git push && git push -f --tags
macOS
- flutter build macos
- In Xcode: Archive, then Distribute -> Export to App Store AND Distribute -> Direct Distribution (~/Downloads/airdash.app)
- npx appdmg appdmg.json ./build/airdash.dmg
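The appdmg step reads its layout from the appdmg.json shipped in the repository; for orientation, a minimal config of that shape (title, paths and coordinates here are illustrative, not the project's actual values) looks roughly like:

```json
{
  "title": "AirDash",
  "icon": "assets/icon.icns",
  "contents": [
    { "x": 448, "y": 344, "type": "link", "path": "/Applications" },
    { "x": 192, "y": 344, "type": "file", "path": "airdash.app" }
  ]
}
```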
iOS
- flutter build ipa
- Distribute with Transporter or Xcode (open build/ios/archive/MyApp.xcarchive)
Android
- flutter build appbundle
- cp build/app/outputs/bundle/release/app-release.aab build/airdash.aab
- open https://play.google.com/console/u/0/developers/6822011924129869646/app/4975414306006741094/tracks/production
- App Bundle Explorer -> Download signed apk -> build/airdash.apk
Windows
- Open Windows in VMware
- Open ~\Documents\airdash in VS Code
- git pull -r && flutter pub get
- dart run msix:create (see the configuration sketch after this list)
- Copy msix file to build/airdash.msix
- Create new update -> open https://partner.microsoft.com/en-us/dashboard/products/9NL9K7CSG30T
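The dart run msix:create step reads an msix_config section from pubspec.yaml. A hypothetical configuration is sketched below; all values are illustrative, and the real settings live in the repository's pubspec.yaml.

```yaml
# Illustrative values only.
msix_config:
  display_name: AirDash
  publisher_display_name: Example Publisher
  identity_name: com.example.airdash
  msix_version: 1.0.0.0
  logo_path: assets/icon.png
  capabilities: internetClient
```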
Linux
- Open Ubuntu in VMware
- Open ~/Documents/airdash in VS Code
- git pull -r && flutter pub get
- flutter build linux --release
- snapcraft clean --use-lxd # required
- snapcraft --output build/airdash.snap --use-lxd
- snapcraft upload --release=stable build/airdash.snap
- Copy ./build/airdash.snap to the Mac
Create GitHub release
- Download macOS app file
- npx appdmg appdmg.json ./build/airdash.dmg
- open https://github.com/simonbengtsson/airdash/releases/new
- Attach
- build/airdash.apk
- build/airdash.dmg
- build/airdash.msix
- build/airdash.snap
Similar Open Source Tools

Stable-Diffusion-Android
Stable Diffusion AI is an easy-to-use app for generating images from text or other images. It allows communication with servers powered by various AI technologies like AI Horde, Hugging Face Inference API, OpenAI, StabilityAI, and LocalDiffusion. The app supports Txt2Img and Img2Img modes, positive and negative prompts, dynamic size and sampling methods, unique seed input, and batch image generation. Users can also inpaint images, select faces from gallery or camera, and export images. The app offers settings for server URL, SD Model selection, auto-saving images, and clearing cache.

chatnio
Chat Nio is a next-generation AIGC one-stop business solution that combines the advantages of frontend-oriented lightweight deployment projects with powerful API distribution systems. It offers rich model support, beautiful UI design, complete Markdown support, multi-theme support, internationalization support, text-to-image support, powerful conversation sync, model market & preset system, rich file parsing, full model internet search, Progressive Web App (PWA) support, comprehensive backend management, multiple billing methods, innovative model caching, and additional features. The project aims to address limitations in conversation synchronization, billing, file parsing, conversation URL sharing, channel management, and API call support found in existing AIGC commercial sites, while also providing a user-friendly interface design and C-end features.

anything-llm
AnythingLLM is a full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use as well as supporting multi-user management and permissions.

clearml
ClearML is an auto-magical suite of tools designed to streamline AI workflows. It includes modules for experiment management, MLOps/LLMOps, data management, model serving, and more. ClearML offers features like experiment tracking, model serving, orchestration, and automation. It supports various ML/DL frameworks and integrates with Jupyter Notebook and PyCharm for remote debugging. ClearML aims to simplify collaboration, automate processes, and enhance visibility in AI projects.

clearml
ClearML is a suite of tools designed to streamline the machine learning workflow. It includes an experiment manager, MLOps/LLMOps, data management, and model serving capabilities. ClearML is open-source and offers a free tier hosting option. It supports various ML/DL frameworks and integrates with Jupyter Notebook and PyCharm. ClearML provides extensive logging capabilities, including source control info, execution environment, hyper-parameters, and experiment outputs. It also offers automation features, such as remote job execution and pipeline creation. ClearML is designed to be easy to integrate, requiring only two lines of code to add to existing scripts. It aims to improve collaboration, visibility, and data transparency within ML teams.

app
WebDB is a comprehensive and free database Integrated Development Environment (IDE) designed to maximize efficiency in database development and management. It simplifies and enhances database operations with features like DBMS discovery, query editor, time machine, NoSQL structure inferring, modern ERD visualization, and intelligent data generator. Developed with robust web technologies, WebDB is suitable for both novice and experienced database professionals.

paperless-ai
Paperless-AI is an automated document analyzer tool designed for Paperless-ngx users. It utilizes the OpenAI API and Ollama (Mistral, llama, phi 3, gemma 2) to automatically scan, analyze, and tag documents. The tool offers features such as automatic document scanning, AI-powered document analysis, automatic title and tag assignment, manual mode for analyzing documents, easy setup through a web interface, document processing dashboard, error handling, and Docker support. Users can configure the tool through a web interface and access a debug interface for monitoring and troubleshooting. Paperless-AI aims to streamline document organization and analysis processes for users with access to Paperless-ngx and AI capabilities.

docq
Docq is a private and secure GenAI tool designed to extract knowledge from business documents, enabling users to find answers independently. It allows data to stay within organizational boundaries, supports self-hosting with various cloud vendors, and offers multi-model and multi-modal capabilities. Docq is extensible, open-source (AGPLv3), and provides commercial licensing options. The tool aims to be a turnkey solution for organizations to adopt AI innovation safely, with plans for future features like more data ingestion options and model fine-tuning.

snd
Sales & Dungeons is a tool that utilizes thermal printers for creating customizable handouts, quick references, and more for Dungeons and Dragons sessions. It offers extensive templating and random generation systems, supports various connection methods, and allows importing/exporting templates and data sources. Users can access external data sources like Open5e, import data from CSV and other formats, and utilize AI prompt generation and translation. The tool supports cloud sync and is compatible with multiple operating systems and devices.

reachat
Reachat is a UI library designed for building chat experiences without the need for manual coding of components. Users can customize each component and theme using Tailwind. The library offers features such as console and companion modes, markdown rendering, code highlighting, tables, JSON support, math rendering, YouTube embeds, file uploads, message sources, animations, conversation pagination, keyboard shortcuts, responsive design, and more. Reachat is highly customizable and suitable for creating interactive chat interfaces.

DevoxxGenieIDEAPlugin
Devoxx Genie is a Java-based IntelliJ IDEA plugin that integrates with local and cloud-based LLM providers to aid in reviewing, testing, and explaining project code. It supports features like code highlighting, chat conversations, and adding files/code snippets to context. Users can modify REST endpoints and LLM parameters in settings, including support for cloud-based LLMs. The plugin requires IntelliJ version 2023.3.4 and JDK 17. Building and publishing the plugin is done using Gradle tasks. Users can select an LLM provider, choose code, and use commands like review, explain, or generate unit tests for code analysis.

swift-chat
SwiftChat is a fast and responsive AI chat application developed with React Native and powered by Amazon Bedrock. It offers real-time streaming conversations, AI image generation, multimodal support, conversation history management, and cross-platform compatibility across Android, iOS, and macOS. The app supports multiple AI models like Amazon Bedrock, Ollama, DeepSeek, and OpenAI, and features a customizable system prompt assistant. With a minimalist design philosophy and robust privacy protection, SwiftChat delivers a seamless chat experience with various features like rich Markdown support, comprehensive multimodal analysis, creative image suite, and quick access tools. The app prioritizes speed in launch, request, render, and storage, ensuring a fast and efficient user experience. SwiftChat also emphasizes app privacy and security by encrypting API key storage, minimal permission requirements, local-only data storage, and a privacy-first approach.

llm-workflow-engine
LLM Workflow Engine (LWE) is a powerful command-line interface (CLI) and workflow manager for large language models (LLMs) like ChatGPT and GPT4. It allows users to interact with LLMs directly from their terminal, making it easy to automate tasks and build complex workflows. LWE supports the official ChatGPT API, providing access to all supported models through your OpenAI account. Additionally, it features a simple plugin architecture that enables users to extend its functionality and integrate with other LLMs. LWE also offers a Python API for integrating LLM capabilities into Python scripts. Notable projects built using the original ChatGPT Wrapper, which LWE evolved from, include bookast, ChatGPT.el, ChatGPT Reddit Bot, Smarty GPT, ChatGPTify, and selection-to-chatgpt.

TaskingAI
TaskingAI brings Firebase's simplicity to **AI-native app development**. The platform enables the creation of GPTs-like multi-tenant applications using a wide range of LLMs from various providers. It features distinct, modular functions such as Inference, Retrieval, Assistant, and Tool, seamlessly integrated to enhance the development process. TaskingAI’s cohesive design ensures an efficient, intelligent, and user-friendly experience in AI application development.

minimal-chat
MinimalChat is a minimal and lightweight open-source chat application with full mobile PWA support that allows users to interact with various language models, including GPT-4 Omni, Claude Opus, and various Local/Custom Model Endpoints. It focuses on simplicity in setup and usage while being fully featured and highly responsive. The application supports features like fully voiced conversational interactions, multiple language models, markdown support, code syntax highlighting, DALL-E 3 integration, conversation importing/exporting, and responsive layout for mobile use.
For similar jobs

AirGo
AirGo is a front and rear end separation, multi user, multi protocol proxy service management system, simple and easy to use. It supports vless, vmess, shadowsocks, and hysteria2.

mosec
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backend and microservices. It bridges the gap between any machine learning models you just trained and the efficient online service API.
- **Highly performant**: web layer and task coordination built with Rust 🦀, which offers blazing speed in addition to efficient CPU utilization powered by async I/O
- **Ease of use**: user interface purely in Python 🐍, by which users can serve their models in an ML framework-agnostic manner using the same code as they do for offline testing
- **Dynamic batching**: aggregate requests from different users for batched inference and distribute results back
- **Pipelined stages**: spawn multiple processes for pipelined stages to handle CPU/GPU/IO mixed workloads
- **Cloud friendly**: designed to run in the cloud, with model warmup, graceful shutdown, and Prometheus monitoring metrics, easily managed by Kubernetes or any container orchestration system
- **Do one thing well**: focus on the online serving part, so users can pay attention to model optimization and business logic

llm-code-interpreter
The 'llm-code-interpreter' repository is a deprecated plugin that provides a code interpreter on steroids for ChatGPT by E2B. It gives ChatGPT access to a sandboxed cloud environment with capabilities like running any code, accessing Linux OS, installing programs, using filesystem, running processes, and accessing the internet. The plugin exposes commands to run shell commands, read files, and write files, enabling various possibilities such as running different languages, installing programs, starting servers, deploying websites, and more. It is powered by the E2B API and is designed for agents to freely experiment within a sandboxed environment.

pezzo
Pezzo is a fully cloud-native and open-source LLMOps platform that allows users to observe and monitor AI operations, troubleshoot issues, save costs and latency, collaborate, manage prompts, and deliver AI changes instantly. It supports various clients for prompt management, observability, and caching. Users can run the full Pezzo stack locally using Docker Compose, with prerequisites including Node.js 18+, Docker, and a GraphQL Language Feature Support VSCode Extension. Contributions are welcome, and the source code is available under the Apache 2.0 License.

learn-generative-ai
Learn Cloud Applied Generative AI Engineering (GenEng) is a course focusing on the application of generative AI technologies in various industries. The course covers topics such as the economic impact of generative AI, the role of developers in adopting and integrating generative AI technologies, and the future trends in generative AI. Students will learn about tools like OpenAI API, LangChain, and Pinecone, and how to build and deploy Large Language Models (LLMs) for different applications. The course also explores the convergence of generative AI with Web 3.0 and its potential implications for decentralized intelligence.

gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.

fluid
Fluid is an open source Kubernetes-native Distributed Dataset Orchestrator and Accelerator for data-intensive applications, such as big data and AI applications. It implements dataset abstraction, scalable cache runtime, automated data operations, elasticity and scheduling, and is runtime platform agnostic. Key concepts include Dataset and Runtime. Prerequisites include Kubernetes version > 1.16, Golang 1.18+, and Helm 3. The tool offers features like accelerating remote file accessing, machine learning, accelerating PVC, preloading dataset, and on-the-fly dataset cache scaling. Contributions are welcomed, and the project is under the Apache 2.0 license with a vendor-neutral approach.

aiges
AIGES is a core component of the Athena Serving Framework, designed as a universal encapsulation tool for AI developers to deploy AI algorithm models and engines quickly. By integrating AIGES, you can deploy AI algorithm models and engines rapidly and host them on the Athena Serving Framework, utilizing supporting auxiliary systems for networking, distribution strategies, data processing, etc. The Athena Serving Framework aims to accelerate the cloud service of AI algorithm models and engines, providing multiple guarantees for cloud service stability through cloud-native architecture. You can efficiently and securely deploy, upgrade, scale, operate, and monitor models and engines without focusing on underlying infrastructure and service-related development, governance, and operations.