
a2a-java
Java SDK for the Agent2Agent (A2A) Protocol
Stars: 226

A2A Java SDK is a Java library that helps run agentic applications as A2AServers following the Agent2Agent (A2A) Protocol. It provides a Java server implementation of the A2A Protocol, allowing users to create A2A server agents and execute tasks. The SDK also includes a Java client implementation for communication with A2A servers using various transports like JSON-RPC 2.0, gRPC, and HTTP+JSON/REST. Users can configure different transport protocols, handle messages, tasks, push notifications, and interact with server agents. The SDK supports streaming and non-streaming responses, error handling, and task management functionalities.
README:
A Java library that helps run agentic applications as A2AServers following the Agent2Agent (A2A) Protocol.
You can build the A2A Java SDK using mvn:
mvn clean install
We copy https://github.com/a2aproject/A2A/blob/main/specification/grpc/a2a.proto to the spec-grpc/ project, and adjust the java_package option to be as follows:
option java_package = "io.a2a.grpc";
Then build the spec-grpc module with mvn clean install -Pproto-compile to regenerate the gRPC classes in the io.a2a.grpc package.
You can find examples of how to use the A2A Java SDK in the a2a-samples repository.
More examples will be added soon.
The A2A Java SDK provides a Java server implementation of the Agent2Agent (A2A) Protocol. To run your agentic Java application as an A2A server, simply follow the steps below.
- Add an A2A Java SDK Server Maven dependency to your project
- Add a class that creates an A2A Agent Card
- Add a class that creates an A2A Agent Executor
Adding a dependency on an A2A Java SDK Reference Server will provide access to the core classes that make up the A2A specification and allow you to run your agentic Java application as an A2A server agent.
The A2A Java SDK provides reference A2A server implementations based on Quarkus for use with our tests and examples. However, the project is designed in such a way that it is trivial to integrate with various Java runtimes.
Server Integrations contains a list of community contributed integrations of the server with various runtimes. You might be able to use one of these for your target runtime, or you can use them as inspiration to create your own.
The A2A Java SDK Reference Server implementations support the following transports:
- JSON-RPC 2.0
- gRPC
- HTTP+JSON/REST
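For orientation, a JSON-RPC 2.0 call to one of these servers is an HTTP POST carrying an envelope like the one below. This is an illustrative sketch of a message/send request, not taken from this README; the field values are made up, and the exact schema and method names are defined by the A2A Protocol specification:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{ "kind": "text", "text": "weather in LA, CA" }],
      "messageId": "msg-001"
    }
  }
}
```

The server replies with a JSON-RPC response whose result is a Message or Task object, or with an error object on failure.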
To use the reference implementation with the JSON-RPC protocol, add the following dependency to your project:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-reference-jsonrpc</artifactId>
    <!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
    <version>${io.a2a.sdk.version}</version>
</dependency>
To use the reference implementation with the gRPC protocol, add the following dependency to your project:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-reference-grpc</artifactId>
    <!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
    <version>${io.a2a.sdk.version}</version>
</dependency>
To use the reference implementation with the HTTP+JSON/REST protocol, add the following dependency to your project:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-reference-rest</artifactId>
    <!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
    <version>${io.a2a.sdk.version}</version>
</dependency>
Note that you can add more than one of the above dependencies to your project depending on the transports you'd like to support.
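For example, to expose both the JSON-RPC 2.0 and HTTP+JSON/REST endpoints from one server, the two corresponding dependencies shown above can be declared side by side:

```xml
<!-- JSON-RPC 2.0 transport -->
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-reference-jsonrpc</artifactId>
    <version>${io.a2a.sdk.version}</version>
</dependency>
<!-- HTTP+JSON/REST transport -->
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-reference-rest</artifactId>
    <version>${io.a2a.sdk.version}</version>
</dependency>
```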
import io.a2a.server.PublicAgentCard;
import io.a2a.spec.AgentCapabilities;
import io.a2a.spec.AgentCard;
import io.a2a.spec.AgentSkill;
...

@ApplicationScoped
public class WeatherAgentCardProducer {

    @Produces
    @PublicAgentCard
    public AgentCard agentCard() {
        return new AgentCard.Builder()
                .name("Weather Agent")
                .description("Helps with weather")
                .url("http://localhost:10001")
                .version("1.0.0")
                .capabilities(new AgentCapabilities.Builder()
                        .streaming(true)
                        .pushNotifications(false)
                        .stateTransitionHistory(false)
                        .build())
                .defaultInputModes(Collections.singletonList("text"))
                .defaultOutputModes(Collections.singletonList("text"))
                .skills(Collections.singletonList(new AgentSkill.Builder()
                        .id("weather_search")
                        .name("Search weather")
                        .description("Helps with weather in city, or states")
                        .tags(Collections.singletonList("weather"))
                        .examples(List.of("weather in LA, CA"))
                        .build()))
                .protocolVersion("0.3.0")
                .build();
    }
}
import io.a2a.server.agentexecution.AgentExecutor;
import io.a2a.server.agentexecution.RequestContext;
import io.a2a.server.events.EventQueue;
import io.a2a.server.tasks.TaskUpdater;
import io.a2a.spec.JSONRPCError;
import io.a2a.spec.Message;
import io.a2a.spec.Part;
import io.a2a.spec.Task;
import io.a2a.spec.TaskNotCancelableError;
import io.a2a.spec.TaskState;
import io.a2a.spec.TextPart;
...

@ApplicationScoped
public class WeatherAgentExecutorProducer {

    @Inject
    WeatherAgent weatherAgent;

    @Produces
    public AgentExecutor agentExecutor() {
        return new WeatherAgentExecutor(weatherAgent);
    }

    private static class WeatherAgentExecutor implements AgentExecutor {

        private final WeatherAgent weatherAgent;

        public WeatherAgentExecutor(WeatherAgent weatherAgent) {
            this.weatherAgent = weatherAgent;
        }

        @Override
        public void execute(RequestContext context, EventQueue eventQueue) throws JSONRPCError {
            TaskUpdater updater = new TaskUpdater(context, eventQueue);
            // mark the task as submitted and start working on it
            if (context.getTask() == null) {
                updater.submit();
            }
            updater.startWork();
            // extract the text from the message
            String userMessage = extractTextFromMessage(context.getMessage());
            // call the weather agent with the user's message
            String response = weatherAgent.chat(userMessage);
            // create the response part
            TextPart responsePart = new TextPart(response, null);
            List<Part<?>> parts = List.of(responsePart);
            // add the response as an artifact and complete the task
            updater.addArtifact(parts, null, null, null);
            updater.complete();
        }

        @Override
        public void cancel(RequestContext context, EventQueue eventQueue) throws JSONRPCError {
            Task task = context.getTask();
            if (task.getStatus().state() == TaskState.CANCELED) {
                // task already cancelled
                throw new TaskNotCancelableError();
            }
            if (task.getStatus().state() == TaskState.COMPLETED) {
                // task already completed
                throw new TaskNotCancelableError();
            }
            // cancel the task
            TaskUpdater updater = new TaskUpdater(context, eventQueue);
            updater.cancel();
        }

        private String extractTextFromMessage(Message message) {
            StringBuilder textBuilder = new StringBuilder();
            if (message.getParts() != null) {
                for (Part part : message.getParts()) {
                    if (part instanceof TextPart textPart) {
                        textBuilder.append(textPart.getText());
                    }
                }
            }
            return textBuilder.toString();
        }
    }
}
The A2A Java SDK provides a Java client implementation of the Agent2Agent (A2A) Protocol, allowing communication with A2A servers. The Java client implementation supports the following transports:
- JSON-RPC 2.0
- gRPC
- HTTP+JSON/REST
To make use of the Java Client:
Adding a dependency on a2a-java-sdk-client will provide access to a ClientBuilder that you can use to create your A2A Client.
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-client</artifactId>
    <!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
    <version>${io.a2a.sdk.version}</version>
</dependency>
By default, a2a-java-sdk-client includes the JSON-RPC transport dependency. Even so, you still need to register the transport with the Client, as described in the JSON-RPC Transport section.
If you want to use the gRPC transport, you'll need to add a relevant dependency:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-client-transport-grpc</artifactId>
    <!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
    <version>${io.a2a.sdk.version}</version>
</dependency>
If you want to use the HTTP+JSON/REST transport, you'll need to add a relevant dependency:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
    <groupId>io.github.a2asdk</groupId>
    <artifactId>a2a-java-sdk-client-transport-rest</artifactId>
    <!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
    <version>${io.a2a.sdk.version}</version>
</dependency>
// First, get the agent card for the A2A server agent you want to connect to
AgentCard agentCard = new A2ACardResolver("http://localhost:1234").getAgentCard();

// Specify configuration for the ClientBuilder
ClientConfig clientConfig = new ClientConfig.Builder()
        .setAcceptedOutputModes(List.of("text"))
        .build();

// Create event consumers to handle responses that will be received from the A2A server
// (these consumers will be used for both streaming and non-streaming responses)
List<BiConsumer<ClientEvent, AgentCard>> consumers = List.of(
        (event, card) -> {
            if (event instanceof MessageEvent messageEvent) {
                // handle the messageEvent.getMessage()
                ...
            } else if (event instanceof TaskEvent taskEvent) {
                // handle the taskEvent.getTask()
                ...
            } else if (event instanceof TaskUpdateEvent updateEvent) {
                // handle the updateEvent.getTask()
                ...
            }
        }
);

// Create a handler that will be used for any errors that occur during streaming
Consumer<Throwable> errorHandler = error -> {
    // handle the error.getMessage()
    ...
};

// Create the client using the builder
Client client = Client
        .builder(agentCard)
        .clientConfig(clientConfig)
        .withTransport(JSONRPCTransport.class, new JSONRPCTransportConfig())
        .addConsumers(consumers)
        .streamingErrorHandler(errorHandler)
        .build();
Different transport protocols can be configured with specific settings using ClientTransportConfig implementations. The A2A Java SDK provides JSONRPCTransportConfig for the JSON-RPC transport, GrpcTransportConfig for the gRPC transport, and RestTransportConfig for the HTTP+JSON/REST transport.
For the JSON-RPC transport, to use the default JdkA2AHttpClient, provide a JSONRPCTransportConfig created with its default constructor. To use a custom HTTP client implementation, simply create a JSONRPCTransportConfig as follows:
// Create a custom HTTP client
A2AHttpClient customHttpClient = ...

// Configure the client settings
ClientConfig clientConfig = new ClientConfig.Builder()
        .setAcceptedOutputModes(List.of("text"))
        .build();

Client client = Client
        .builder(agentCard)
        .clientConfig(clientConfig)
        .withTransport(JSONRPCTransport.class, new JSONRPCTransportConfig(customHttpClient))
        .build();
For the gRPC transport, you must configure a channel factory:
// Create a channel factory function that takes the agent URL and returns a Channel
Function<String, Channel> channelFactory = agentUrl -> {
    return ManagedChannelBuilder.forTarget(agentUrl)
            ...
            .build();
};

// Configure the client with transport-specific settings
ClientConfig clientConfig = new ClientConfig.Builder()
        .setAcceptedOutputModes(List.of("text"))
        .build();

Client client = Client
        .builder(agentCard)
        .clientConfig(clientConfig)
        .withTransport(GrpcTransport.class, new GrpcTransportConfig(channelFactory))
        .build();
For the HTTP+JSON/REST transport, if you'd like to use the default JdkA2AHttpClient, provide a RestTransportConfig created with its default constructor. To use a custom HTTP client implementation, simply create a RestTransportConfig as follows:
// Create a custom HTTP client
A2AHttpClient customHttpClient = ...

// Configure the client settings
ClientConfig clientConfig = new ClientConfig.Builder()
        .setAcceptedOutputModes(List.of("text"))
        .build();

Client client = Client
        .builder(agentCard)
        .clientConfig(clientConfig)
        .withTransport(RestTransport.class, new RestTransportConfig(customHttpClient))
        .build();
You can specify configuration for multiple transports; the appropriate configuration will be used based on the selected transport:
// Configure the JSON-RPC, gRPC, and REST transports
Client client = Client
        .builder(agentCard)
        .withTransport(GrpcTransport.class, new GrpcTransportConfig(channelFactory))
        .withTransport(JSONRPCTransport.class, new JSONRPCTransportConfig())
        .withTransport(RestTransport.class, new RestTransportConfig())
        .build();
// Send a text message to the A2A server agent
Message message = A2A.toUserMessage("tell me a joke");
// Send the message (uses configured consumers to handle responses)
// Streaming will automatically be used if supported by both client and server,
// otherwise the non-streaming send message method will be used automatically
client.sendMessage(message);
// You can also optionally specify a ClientCallContext with call-specific config to use
client.sendMessage(message, clientCallContext);
// Create custom consumers for this specific message
List<BiConsumer<ClientEvent, AgentCard>> customConsumers = List.of(
        (event, card) -> {
            // handle this specific message's responses
            ...
        }
);

// Create custom error handler
Consumer<Throwable> customErrorHandler = error -> {
    // handle the error
    ...
};
Message message = A2A.toUserMessage("tell me a joke");
client.sendMessage(message, customConsumers, customErrorHandler);
// Retrieve the task with id "task-1234"
Task task = client.getTask(new TaskQueryParams("task-1234"));
// You can also specify the maximum number of history items for the task
// to include in the response
Task task = client.getTask(new TaskQueryParams("task-1234", 10));
// You can also optionally specify a ClientCallContext with call-specific config to use
Task task = client.getTask(new TaskQueryParams("task-1234"), clientCallContext);
// Cancel the task we previously submitted with id "task-1234"
Task cancelledTask = client.cancelTask(new TaskIdParams("task-1234"));
// You can also specify additional properties using a map
Map<String, Object> metadata = Map.of("reason", "user_requested");
Task cancelledTask = client.cancelTask(new TaskIdParams("task-1234", metadata));
// You can also optionally specify a ClientCallContext with call-specific config to use
Task cancelledTask = client.cancelTask(new TaskIdParams("task-1234"), clientCallContext);
// Get task push notification configuration
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
        new GetTaskPushNotificationConfigParams("task-1234"));

// The push notification configuration ID can also be optionally specified
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
        new GetTaskPushNotificationConfigParams("task-1234", "config-4567"));

// Additional properties can be specified using a map
Map<String, Object> metadata = Map.of("source", "client");
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
        new GetTaskPushNotificationConfigParams("task-1234", "config-1234", metadata));

// You can also optionally specify a ClientCallContext with call-specific config to use
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
        new GetTaskPushNotificationConfigParams("task-1234"), clientCallContext);
// Set task push notification configuration
PushNotificationConfig pushNotificationConfig = new PushNotificationConfig.Builder()
        .url("https://example.com/callback")
        .authenticationInfo(new AuthenticationInfo(Collections.singletonList("jwt"), null))
        .build();
TaskPushNotificationConfig taskConfig = new TaskPushNotificationConfig.Builder()
        .taskId("task-1234")
        .pushNotificationConfig(pushNotificationConfig)
        .build();
TaskPushNotificationConfig result = client.setTaskPushNotificationConfiguration(taskConfig);
// You can also optionally specify a ClientCallContext with call-specific config to use
TaskPushNotificationConfig result = client.setTaskPushNotificationConfiguration(taskConfig, clientCallContext);
// List task push notification configurations
List<TaskPushNotificationConfig> configs = client.listTaskPushNotificationConfigurations(
        new ListTaskPushNotificationConfigParams("task-1234"));

// Additional properties can be specified using a map
Map<String, Object> metadata = Map.of("filter", "active");
List<TaskPushNotificationConfig> configs = client.listTaskPushNotificationConfigurations(
        new ListTaskPushNotificationConfigParams("task-1234", metadata));

// You can also optionally specify a ClientCallContext with call-specific config to use
List<TaskPushNotificationConfig> configs = client.listTaskPushNotificationConfigurations(
        new ListTaskPushNotificationConfigParams("task-1234"), clientCallContext);
// Delete a task push notification configuration
client.deleteTaskPushNotificationConfigurations(
        new DeleteTaskPushNotificationConfigParams("task-1234", "config-4567"));

// Additional properties can be specified using a map
Map<String, Object> metadata = Map.of("reason", "cleanup");
client.deleteTaskPushNotificationConfigurations(
        new DeleteTaskPushNotificationConfigParams("task-1234", "config-4567", metadata));

// You can also optionally specify a ClientCallContext with call-specific config to use
client.deleteTaskPushNotificationConfigurations(
        new DeleteTaskPushNotificationConfigParams("task-1234", "config-4567"), clientCallContext);
// Resubscribe to an ongoing task with id "task-1234" using configured consumers
TaskIdParams taskIdParams = new TaskIdParams("task-1234");
client.resubscribe(taskIdParams);
// Or resubscribe with custom consumers and error handler
List<BiConsumer<ClientEvent, AgentCard>> customConsumers = List.of(
        (event, card) -> System.out.println("Resubscribe event: " + event)
);
Consumer<Throwable> customErrorHandler = error ->
        System.err.println("Resubscribe error: " + error.getMessage());
client.resubscribe(taskIdParams, customConsumers, customErrorHandler);
// You can also optionally specify a ClientCallContext with call-specific config to use
client.resubscribe(taskIdParams, clientCallContext);
AgentCard serverAgentCard = client.getAgentCard();
A complete example of a Java A2A client communicating with a Python A2A server is available in the examples/helloworld/client directory. This example demonstrates:
- Setting up and using the A2A Java client
- Sending regular and streaming messages to a Python A2A server
- Receiving and processing responses from the Python A2A server
The example includes detailed instructions on how to run the Python A2A server and how to run the Java A2A client using JBang.
Check out the example's README for more information.
A complete example of a Python A2A client communicating with a Java A2A server is available in the examples/helloworld/server directory. This example demonstrates:
- A sample AgentCard producer
- A sample AgentExecutor producer
- A Java A2A server receiving regular and streaming messages from a Python A2A client
Check out the example's README for more information.
See COMMUNITY_ARTICLES.md for a list of community articles and videos.
This project is licensed under the terms of the Apache 2.0 License.
See CONTRIBUTING.md for contribution guidelines.
The following list contains community contributed integrations with various Java Runtimes.
To contribute an integration, please see CONTRIBUTING_INTEGRATIONS.md.
- reference/jsonrpc/README.md - JSON-RPC 2.0 Reference implementation, based on Quarkus.
- reference/grpc/README.md - gRPC Reference implementation, based on Quarkus.
- https://github.com/wildfly-extras/a2a-java-sdk-server-jakarta - This integration is based on Jakarta EE, and should work in all runtimes supporting the Jakarta EE Web Profile.
See the extras folder for extra functionality not provided by the SDK itself!