![aidldemo](/statics/github-mark.png)
aidldemo
🔥 Cross-process bidirectional communication and large-file transfer using AIDL + anonymous shared memory.
Stars: 82
![screenshot](/screenshots_githubs/kongpf8848-aidldemo.jpg)
This repository demonstrates how to achieve cross-process bidirectional communication and large file transfer using AIDL and anonymous shared memory. AIDL is a way to implement Inter-Process Communication in Android, based on Binder. To overcome the data size limit of Binder, anonymous shared memory is used for large file transfer. Shared memory allows processes to share memory by mapping a common memory area into their respective process spaces. While efficient for transferring large data between processes, shared memory lacks synchronization mechanisms, requiring additional mechanisms like semaphores. Android's anonymous shared memory (Ashmem) is based on Linux shared memory and facilitates shared memory transfer using Binder and FileDescriptor. The repository provides practical examples of bidirectional communication and large file transfer between client and server using AIDL interfaces and MemoryFile in Android.
README:
Use AIDL and anonymous shared memory to implement cross-process bidirectional communication and large-file transfer.
AIDL is one way to implement Inter-Process Communication (IPC) on Android. Its transport mechanism is based on Binder, and Binder limits the size of a transaction: sending a file larger than about 1 MB throws an android.os.TransactionTooLargeException. One way around this is to transfer large files through anonymous shared memory.
Shared memory is a form of IPC in which processes map a common region of memory into their own address spaces. It is very efficient when large amounts of data must pass between processes, but it provides no synchronization: nothing automatically stops a second process from reading the region before the first process has finished writing it, so access usually has to be coordinated with another mechanism, such as a semaphore.
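The missing synchronization can be illustrated without Ashmem at all. The sketch below uses a plain java.util.concurrent.Semaphore to make a reader wait until a writer has finished filling a shared buffer; the buffer and thread structure are illustrative only, not part of the original project:

```kotlin
import java.util.concurrent.Semaphore
import kotlin.concurrent.thread

fun main() {
    val buffer = ByteArray(4)      // stands in for the shared memory region
    val dataReady = Semaphore(0)   // zero permits: the reader must wait

    val reader = thread {
        dataReady.acquire()        // blocks until the writer signals
        println(buffer.joinToString())
    }
    thread {
        byteArrayOf(1, 2, 3, 4).copyInto(buffer)
        dataReady.release()        // signal: the data is complete
    }
    reader.join()
}
```

Without the semaphore, the reader could observe a half-written buffer; Ashmem leaves this ordering problem to the caller in exactly the same way.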
Android's anonymous shared memory (Ashmem) is built on Linux shared memory and uses Binder plus a file descriptor (FileDescriptor) to pass the shared region between processes. It lets multiple processes operate on the same memory region, with no size limit other than physical memory. Compared with plain Linux shared memory, Ashmem manages memory at a finer granularity and adds a mutex. From the Java layer it is used through MemoryFile, which wraps the native code. The usual shared-memory flow on Android is:
- Process A creates the shared memory with MemoryFile and obtains its fd (FileDescriptor)
- Process A writes data into the shared memory through the fd
- Process A wraps the fd in a ParcelFileDescriptor object (ParcelFileDescriptor implements the Parcelable interface) and sends it to process B over Binder
- Process B takes the fd out of the ParcelFileDescriptor and reads the data from it
We first implement client-to-server large-file transfer, and then server-to-client.
- Define the AIDL interface
//IMyAidlInterface.aidl
interface IMyAidlInterface {
    void client2server(in ParcelFileDescriptor pfd);
}
- Implement the IMyAidlInterface interface
//AidlService.kt
class AidlService : Service() {

    private val mStub: IMyAidlInterface.Stub = object : IMyAidlInterface.Stub() {
        @Throws(RemoteException::class)
        override fun client2server(pfd: ParcelFileDescriptor) {
        }
    }

    override fun onBind(intent: Intent): IBinder {
        return mStub
    }
}
- Receive the data
//AidlService.kt
@Throws(RemoteException::class)
override fun client2server(pfd: ParcelFileDescriptor) {
    /**
     * Get the FileDescriptor from the ParcelFileDescriptor
     */
    val fileDescriptor = pfd.fileDescriptor
    /**
     * Build an InputStream from the FileDescriptor
     */
    val fis = FileInputStream(fileDescriptor)
    /**
     * Read the byte array from the InputStream
     */
    val data = fis.readBytes()
    ......
}
- Bind the service
  - Add the .aidl file to the project's src directory
  - Declare an IMyAidlInterface instance (generated from the AIDL)
  - Create a ServiceConnection instance, implementing the android.content.ServiceConnection interface
  - Call Context.bindService() to bind the service, passing in the ServiceConnection instance
  - In the onServiceConnected() implementation, call IMyAidlInterface.Stub.asInterface(binder) and convert the returned object to the IMyAidlInterface type
//MainActivity.kt
class MainActivity : AppCompatActivity() {

    private var mStub: IMyAidlInterface? = null

    private val serviceConnection = object : ServiceConnection {
        override fun onServiceConnected(name: ComponentName, binder: IBinder) {
            mStub = IMyAidlInterface.Stub.asInterface(binder)
        }
        override fun onServiceDisconnected(name: ComponentName) {
            mStub = null
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        button1.setOnClickListener {
            bindService()
        }
    }

    private fun bindService() {
        if (mStub != null) {
            return
        }
        val intent = Intent("com.example.aidl.server.AidlService")
        intent.setClassName("com.example.aidl.server", "com.example.aidl.server.AidlService")
        try {
            val bindSucc = bindService(intent, serviceConnection, Context.BIND_AUTO_CREATE)
            if (bindSucc) {
                Toast.makeText(this, "bind ok", Toast.LENGTH_SHORT).show()
            } else {
                Toast.makeText(this, "bind fail", Toast.LENGTH_SHORT).show()
            }
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

    override fun onDestroy() {
        if (mStub != null) {
            unbindService(serviceConnection)
        }
        super.onDestroy()
    }
}
- Send the data
  - Convert the file to send into a byte array (ByteArray)
  - Create a MemoryFile object
  - Write the byte array into the MemoryFile object
  - Get the FileDescriptor backing the MemoryFile
  - Create a ParcelFileDescriptor from the FileDescriptor
  - Call the IPC method to send the ParcelFileDescriptor object
//MainActivity.kt
private fun sendLargeData() {
    if (mStub == null) {
        return
    }
    try {
        /**
         * Open a file under the assets directory
         */
        val inputStream = assets.open("large.jpg")
        /**
         * Convert the inputStream into a byte array
         */
        val byteArray = inputStream.readBytes()
        /**
         * Create the MemoryFile
         */
        val memoryFile = MemoryFile("image", byteArray.size)
        /**
         * Write the byte array into the MemoryFile
         */
        memoryFile.writeBytes(byteArray, 0, 0, byteArray.size)
        /**
         * Get the FileDescriptor backing the MemoryFile
         */
        val fd = MemoryFileUtils.getFileDescriptor(memoryFile)
        /**
         * Create a ParcelFileDescriptor from the FileDescriptor
         */
        val pfd = ParcelFileDescriptor.dup(fd)
        /**
         * Send the data
         */
        mStub?.client2server(pfd)
    } catch (e: IOException) {
        e.printStackTrace()
    } catch (e: RemoteException) {
        e.printStackTrace()
    }
}
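Note that MemoryFileUtils is not an Android framework class: MemoryFile keeps its FileDescriptor private, so projects typically reach it through reflection. A minimal sketch of such a helper, assuming the hidden MemoryFile.getFileDescriptor() method (a non-SDK API that may be restricted on newer Android versions):

```kotlin
import android.os.MemoryFile
import java.io.FileDescriptor

object MemoryFileUtils {
    /**
     * Fetches the FileDescriptor backing a MemoryFile by invoking the
     * hidden MemoryFile.getFileDescriptor() method via reflection.
     * Because this is a non-SDK interface, it may throw on devices
     * that enforce hidden-API restrictions.
     */
    fun getFileDescriptor(memoryFile: MemoryFile): FileDescriptor {
        val method = MemoryFile::class.java.getDeclaredMethod("getFileDescriptor")
        method.isAccessible = true
        return method.invoke(memoryFile) as FileDescriptor
    }
}
```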
At this point client-to-server large-file transfer works; next we implement transfer in the other direction. The server pushes data to the client, so the client only needs to listen.
- Define the listener callback interface
//ICallbackInterface.aidl
package com.example.aidl.aidl;
interface ICallbackInterface {
    void server2client(in ParcelFileDescriptor pfd);
}
- Add register and unregister callback methods to IMyAidlInterface.aidl, as follows:
//IMyAidlInterface.aidl
import com.example.aidl.aidl.ICallbackInterface;
interface IMyAidlInterface {
    ......
    void registerCallback(ICallbackInterface callback);
    void unregisterCallback(ICallbackInterface callback);
}
- Implement the interface methods on the server
//AidlService.kt
private val callbacks = RemoteCallbackList<ICallbackInterface>()

private val mStub: IMyAidlInterface.Stub = object : IMyAidlInterface.Stub() {
    ......
    override fun registerCallback(callback: ICallbackInterface) {
        callbacks.register(callback)
    }
    override fun unregisterCallback(callback: ICallbackInterface) {
        callbacks.unregister(callback)
    }
}
- After binding the service, the client registers the callback
//MainActivity.kt
private val callback = object : ICallbackInterface.Stub() {
    override fun server2client(pfd: ParcelFileDescriptor) {
        val fileDescriptor = pfd.fileDescriptor
        val fis = FileInputStream(fileDescriptor)
        val bytes = fis.readBytes()
        if (bytes.isNotEmpty()) {
            ......
        }
    }
}
private val serviceConnection = object : ServiceConnection {
    override fun onServiceConnected(name: ComponentName, binder: IBinder) {
        mStub = IMyAidlInterface.Stub.asInterface(binder)
        mStub?.registerCallback(callback)
    }
    override fun onServiceDisconnected(name: ComponentName) {
        mStub = null
    }
}
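Symmetrically, the client should unregister the callback before unbinding. A sketch of how the activity's onDestroy() could handle this, assuming the mStub, callback, and serviceConnection fields above:

```kotlin
//MainActivity.kt
override fun onDestroy() {
    try {
        mStub?.unregisterCallback(callback)   // may throw if the server process already died
    } catch (e: RemoteException) {
        e.printStackTrace()
    }
    if (mStub != null) {
        unbindService(serviceConnection)
    }
    mStub = null
    super.onDestroy()
}
```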
- The server sends the file and delivers it to clients through the callback. Only the core code is shown:
//AidlService.kt
private fun server2client(pfd: ParcelFileDescriptor) {
    val n = callbacks.beginBroadcast()
    for (i in 0 until n) {
        val callback = callbacks.getBroadcastItem(i)
        if (callback != null) {
            try {
                callback.server2client(pfd)
            } catch (e: RemoteException) {
                e.printStackTrace()
            }
        }
    }
    callbacks.finishBroadcast()
}
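The ParcelFileDescriptor handed to server2client() can be built exactly as on the client side. A hypothetical trigger inside AidlService (the asset name reply.jpg is assumed, and MemoryFileUtils is the same reflection helper the client uses):

```kotlin
//AidlService.kt
private fun pushFileToClients() {
    /**
     * Mirror of the client's sendLargeData(): read the bytes, copy them
     * into a MemoryFile, dup the descriptor, and broadcast it to the
     * registered callbacks.
     */
    val bytes = assets.open("reply.jpg").readBytes()
    val memoryFile = MemoryFile("reply", bytes.size)
    memoryFile.writeBytes(bytes, 0, 0, bytes.size)
    val fd = MemoryFileUtils.getFileDescriptor(memoryFile)
    server2client(ParcelFileDescriptor.dup(fd))
}
```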
With that, client and server can communicate in both directions and transfer large files 😉😉😉