a2a-java
Official Java SDK for the Agent2Agent (A2A) Protocol
A2A Java SDK is a Java library that helps run agentic applications as A2AServers following the Agent2Agent (A2A) Protocol. It provides a Java server implementation of the A2A Protocol, allowing users to create A2A server agents and execute tasks. The SDK also includes a Java client implementation for communication with A2A servers using various transports like JSON-RPC 2.0, gRPC, and HTTP+JSON/REST. Users can configure different transport protocols, handle messages, tasks, push notifications, and interact with server agents. The SDK supports streaming and non-streaming responses, error handling, and task management functionalities.
README:
A Java library that helps run agentic applications as A2AServers following the Agent2Agent (A2A) Protocol.
You can build the A2A Java SDK using mvn:
mvn clean install

We copy https://github.com/a2aproject/A2A/blob/main/specification/grpc/a2a.proto to the spec-grpc/ project, and adjust the java_package option to be as follows:
option java_package = "io.a2a.grpc";
Then build the spec-grpc module with mvn clean install -Dskip.protobuf.generate=false to regenerate the gRPC classes in the io.a2a.grpc package.
You can find examples of how to use the A2A Java SDK in the a2a-samples repository.
More examples will be added soon.
The A2A Java SDK provides a Java server implementation of the Agent2Agent (A2A) Protocol. To run your agentic Java application as an A2A server, simply follow the steps below.
- Add an A2A Java SDK Server Maven dependency to your project
- Add a class that creates an A2A Agent Card
- Add a class that creates an A2A Agent Executor
Adding a dependency on an A2A Java SDK Server will provide access to the core classes that make up the A2A specification and allow you to run your agentic Java application as an A2A server agent.
The A2A Java SDK provides reference A2A server implementations based on Quarkus for use with our tests and examples. However, the project is designed in such a way that it is trivial to integrate with various Java runtimes.
Server Integrations contains a list of community-contributed integrations of the server with various runtimes. You might be able to use one of these for your target runtime, or use them as inspiration to create your own.
The A2A Java SDK Reference Server implementations support the following transports:
- JSON-RPC 2.0
- gRPC
- HTTP+JSON/REST
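For illustration, a message/send request carried over the JSON-RPC 2.0 transport might look like the following. Field and method names here follow the A2A specification; treat this as a sketch rather than a normative example, and the messageId value is made up:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{ "kind": "text", "text": "weather in LA, CA" }],
      "messageId": "msg-0001"
    }
  }
}
```

The gRPC and HTTP+JSON/REST transports carry the same protocol operations over their respective wire formats.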
To use the reference implementation with the JSON-RPC protocol, add the following dependency to your project:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
<groupId>io.github.a2asdk</groupId>
<artifactId>a2a-java-sdk-reference-jsonrpc</artifactId>
<!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
<version>${io.a2a.sdk.version}</version>
</dependency>

To use the reference implementation with the gRPC protocol, add the following dependency to your project:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
<groupId>io.github.a2asdk</groupId>
<artifactId>a2a-java-sdk-reference-grpc</artifactId>
<!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
<version>${io.a2a.sdk.version}</version>
</dependency>

To use the reference implementation with the HTTP+JSON/REST protocol, add the following dependency to your project:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
<groupId>io.github.a2asdk</groupId>
<artifactId>a2a-java-sdk-reference-rest</artifactId>
<!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
<version>${io.a2a.sdk.version}</version>
</dependency>

Note that you can add more than one of the above dependencies to your project depending on the transports you'd like to support.
import io.a2a.server.PublicAgentCard;
import io.a2a.spec.AgentCapabilities;
import io.a2a.spec.AgentCard;
import io.a2a.spec.AgentSkill;
...
@ApplicationScoped
public class WeatherAgentCardProducer {
@Produces
@PublicAgentCard
public AgentCard agentCard() {
return AgentCard.builder()
.name("Weather Agent")
.description("Helps with weather")
.url("http://localhost:10001")
.version("1.0.0")
.capabilities(AgentCapabilities.builder()
.streaming(true)
.pushNotifications(false)
.build())
.defaultInputModes(Collections.singletonList("text"))
.defaultOutputModes(Collections.singletonList("text"))
.skills(Collections.singletonList(AgentSkill.builder()
.id("weather_search")
.name("Search weather")
.description("Helps with weather in cities or states")
.tags(Collections.singletonList("weather"))
.examples(List.of("weather in LA, CA"))
.build()))
.protocolVersion(io.a2a.spec.AgentCard.CURRENT_PROTOCOL_VERSION)
.build();
}
}

import io.a2a.server.agentexecution.AgentExecutor;
import io.a2a.server.agentexecution.RequestContext;
import io.a2a.server.events.EventQueue;
import io.a2a.server.tasks.AgentEmitter;
import io.a2a.spec.JSONRPCError;
import io.a2a.spec.Message;
import io.a2a.spec.Part;
import io.a2a.spec.Task;
import io.a2a.spec.TaskNotCancelableError;
import io.a2a.spec.TaskState;
import io.a2a.spec.TextPart;
...
@ApplicationScoped
public class WeatherAgentExecutorProducer {
@Inject
WeatherAgent weatherAgent;
@Produces
public AgentExecutor agentExecutor() {
return new WeatherAgentExecutor(weatherAgent);
}
private static class WeatherAgentExecutor implements AgentExecutor {
private final WeatherAgent weatherAgent;
public WeatherAgentExecutor(WeatherAgent weatherAgent) {
this.weatherAgent = weatherAgent;
}
@Override
public void execute(RequestContext context, AgentEmitter agentEmitter) throws JSONRPCError {
// mark the task as submitted and start working on it
if (context.getTask() == null) {
agentEmitter.submit();
}
agentEmitter.startWork();
// extract the text from the message
String userMessage = extractTextFromMessage(context.getMessage());
// call the weather agent with the user's message
String response = weatherAgent.chat(userMessage);
// create the response part
TextPart responsePart = new TextPart(response);
List<Part<?>> parts = List.of(responsePart);
// add the response as an artifact and complete the task
agentEmitter.addArtifact(parts);
agentEmitter.complete();
}
@Override
public void cancel(RequestContext context, AgentEmitter agentEmitter) throws JSONRPCError {
Task task = context.getTask();
if (task.getStatus().state() == TaskState.CANCELED) {
// task already cancelled
throw new TaskNotCancelableError();
}
if (task.getStatus().state() == TaskState.COMPLETED) {
// task already completed
throw new TaskNotCancelableError();
}
// cancel the task
agentEmitter.cancel();
}
private String extractTextFromMessage(Message message) {
StringBuilder textBuilder = new StringBuilder();
if (message.getParts() != null) {
for (Part part : message.getParts()) {
if (part instanceof TextPart textPart) {
textBuilder.append(textPart.getText());
}
}
}
return textBuilder.toString();
}
}
}

The A2A Java SDK uses a flexible configuration system that works across different frameworks.
Default behavior: Configuration values come from META-INF/a2a-defaults.properties files on the classpath (provided by core modules and extras). These defaults work out of the box without any additional setup.
Customizing configuration:
- Quarkus/MicroProfile Config users: Add the microprofile-config integration to override defaults via application.properties, environment variables, or system properties
- Spring/other frameworks: See the integration module README for how to implement a custom A2AConfigProvider
- Reference implementations: Already include the MicroProfile Config integration
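To illustrate the provider idea for frameworks without MicroProfile Config, the sketch below backs a configuration lookup with JVM system properties. Note that the ConfigProvider interface here is a hypothetical stand-in, not the SDK's actual A2AConfigProvider SPI; see the integration module README for the real contract.

```java
import java.util.Optional;

// Hypothetical sketch only: the interface shape below is assumed for
// illustration and is NOT the SDK's actual A2AConfigProvider SPI.
public class ConfigProviderSketch {

    // Stand-in for a configuration-provider contract
    interface ConfigProvider {
        Optional<String> getValue(String key);
    }

    // Resolve configuration keys from JVM system properties,
    // returning empty when a key is unset
    static final ConfigProvider SYSTEM_PROPERTIES =
            key -> Optional.ofNullable(System.getProperty(key));

    public static void main(String[] args) {
        System.setProperty("a2a.executor.core-pool-size", "10");
        System.out.println(
                SYSTEM_PROPERTIES.getValue("a2a.executor.core-pool-size").orElse("unset"));
    }
}
```

A Spring-based integration, for instance, could back the same kind of contract with Spring's Environment abstraction instead of system properties.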
Executor Settings (Optional)
The SDK uses a dedicated executor for async operations like streaming. Default: 5 core threads, 50 max threads.
# Core thread pool size for the @Internal executor (default: 5)
a2a.executor.core-pool-size=5
# Maximum thread pool size (default: 50)
a2a.executor.max-pool-size=50
# Thread keep-alive time in seconds (default: 60)
a2a.executor.keep-alive-seconds=60

Blocking Call Timeouts (Optional)
# Timeout for agent execution in blocking calls (default: 30 seconds)
a2a.blocking.agent.timeout.seconds=30
# Timeout for event consumption in blocking calls (default: 5 seconds)
a2a.blocking.consumption.timeout.seconds=5

Why this matters:
- Streaming Performance: The executor handles streaming subscriptions. Too few threads can cause timeouts under concurrent load.
- Resource Management: The dedicated executor prevents streaming operations from competing with the ForkJoinPool.
- Concurrency: In production with high concurrent streaming, increase pool sizes accordingly.
- Agent Timeouts: LLM-based agents may need longer timeouts (60-120s) compared to simple agents.
Note: The reference server implementations (Quarkus-based) automatically include the MicroProfile Config integration, so properties work out of the box in application.properties.
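Putting the guidance above together, a deployment with heavy concurrent streaming and LLM-backed agents might override the defaults along these lines (illustrative values only; tune for your own workload):

```properties
# Larger pool for high concurrent streaming load
a2a.executor.core-pool-size=20
a2a.executor.max-pool-size=200
# Allow slower LLM-based agents more time in blocking calls
a2a.blocking.agent.timeout.seconds=120
```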
The A2A Java SDK provides a Java client implementation of the Agent2Agent (A2A) Protocol, allowing communication with A2A servers. The Java client implementation supports the following transports:
- JSON-RPC 2.0
- gRPC
- HTTP+JSON/REST
To make use of the Java Client:
Adding a dependency on a2a-java-sdk-client will provide access to a ClientBuilder
that you can use to create your A2A Client.
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
<groupId>io.github.a2asdk</groupId>
<artifactId>a2a-java-sdk-client</artifactId>
<!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
<version>${io.a2a.sdk.version}</version>
</dependency>

By default, the sdk-client artifact includes the JSON-RPC transport dependency. However, you must still explicitly configure this transport when building the Client as described in the JSON-RPC Transport section.
If you want to use the gRPC transport, you'll need to add a relevant dependency:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
<groupId>io.github.a2asdk</groupId>
<artifactId>a2a-java-sdk-client-transport-grpc</artifactId>
<!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
<version>${io.a2a.sdk.version}</version>
</dependency>

If you want to use the HTTP+JSON/REST transport, you'll need to add a relevant dependency:
⚠️ The io.github.a2asdk groupId below is temporary and will likely change for future releases.
<dependency>
<groupId>io.github.a2asdk</groupId>
<artifactId>a2a-java-sdk-client-transport-rest</artifactId>
<!-- Use a released version from https://github.com/a2aproject/a2a-java/releases -->
<version>${io.a2a.sdk.version}</version>
</dependency>

// First, get the agent card for the A2A server agent you want to connect to
AgentCard agentCard = new A2ACardResolver("http://localhost:1234").getAgentCard();
// Specify configuration for the ClientBuilder
ClientConfig clientConfig = new ClientConfig.Builder()
.setAcceptedOutputModes(List.of("text"))
.build();
// Create event consumers to handle responses that will be received from the A2A server
// (these consumers will be used for both streaming and non-streaming responses)
List<BiConsumer<ClientEvent, AgentCard>> consumers = List.of(
(event, card) -> {
if (event instanceof MessageEvent messageEvent) {
// handle the messageEvent.getMessage()
...
} else if (event instanceof TaskEvent taskEvent) {
// handle the taskEvent.getTask()
...
} else if (event instanceof TaskUpdateEvent updateEvent) {
// handle the updateEvent.getTask()
...
}
}
);
// Create a handler that will be used for any errors that occur during streaming
Consumer<Throwable> errorHandler = error -> {
// handle the error.getMessage()
...
};
// Create the client using the builder
Client client = Client
.builder(agentCard)
.clientConfig(clientConfig)
.withTransport(JSONRPCTransport.class, new JSONRPCTransportConfig())
.addConsumers(consumers)
.streamingErrorHandler(errorHandler)
    .build();

Different transport protocols can be configured with transport-specific settings using dedicated ClientTransportConfig implementations. The A2A Java SDK provides JSONRPCTransportConfig for the JSON-RPC transport, GrpcTransportConfig for the gRPC transport, and RestTransportConfig for the HTTP+JSON/REST transport.
For the JSON-RPC transport, to use the default JdkA2AHttpClient, provide a JSONRPCTransportConfig created with its default constructor.
To use a custom HTTP client implementation, simply create a JSONRPCTransportConfig as follows:
// Create a custom HTTP client
A2AHttpClient customHttpClient = ...
// Configure the client settings
ClientConfig clientConfig = new ClientConfig.Builder()
.setAcceptedOutputModes(List.of("text"))
.build();
Client client = Client
.builder(agentCard)
.clientConfig(clientConfig)
.withTransport(JSONRPCTransport.class, new JSONRPCTransportConfig(customHttpClient))
    .build();

For the gRPC transport, you must configure a channel factory:
// Create a channel factory function that takes the agent URL and returns a Channel
Function<String, Channel> channelFactory = agentUrl -> {
return ManagedChannelBuilder.forTarget(agentUrl)
...
.build();
};
// Configure the client with transport-specific settings
ClientConfig clientConfig = new ClientConfig.Builder()
.setAcceptedOutputModes(List.of("text"))
.build();
Client client = Client
.builder(agentCard)
.clientConfig(clientConfig)
.withTransport(GrpcTransport.class, new GrpcTransportConfig(channelFactory))
    .build();

For the HTTP+JSON/REST transport, if you'd like to use the default JdkA2AHttpClient, provide a RestTransportConfig created with its default constructor.
To use a custom HTTP client implementation, simply create a RestTransportConfig as follows:
// Create a custom HTTP client
A2AHttpClient customHttpClient = ...
// Configure the client settings
ClientConfig clientConfig = new ClientConfig.Builder()
.setAcceptedOutputModes(List.of("text"))
.build();
Client client = Client
.builder(agentCard)
.clientConfig(clientConfig)
.withTransport(RestTransport.class, new RestTransportConfig(customHttpClient))
    .build();

You can specify configuration for multiple transports; the appropriate configuration will be used based on the selected transport:
// Configure both JSON-RPC and gRPC transports
Client client = Client
.builder(agentCard)
.withTransport(GrpcTransport.class, new GrpcTransportConfig(channelFactory))
.withTransport(JSONRPCTransport.class, new JSONRPCTransportConfig())
.withTransport(RestTransport.class, new RestTransportConfig())
    .build();

// Send a text message to the A2A server agent
Message message = A2A.toUserMessage("tell me a joke");
// Send the message (uses configured consumers to handle responses)
// Streaming will automatically be used if supported by both client and server,
// otherwise the non-streaming send message method will be used automatically
client.sendMessage(message);
// You can also optionally specify a ClientCallContext with call-specific config to use
client.sendMessage(message, clientCallContext);

// Create custom consumers for this specific message
List<BiConsumer<ClientEvent, AgentCard>> customConsumers = List.of(
(event, card) -> {
// handle this specific message's responses
...
}
);
// Create custom error handler
Consumer<Throwable> customErrorHandler = error -> {
// handle the error
...
};
Message message = A2A.toUserMessage("tell me a joke");
client.sendMessage(message, customConsumers, customErrorHandler);

// Retrieve the task with id "task-1234"
Task task = client.getTask(new TaskQueryParams("task-1234"));
// You can also specify the maximum number of history items for the task
// to include in the response
Task task = client.getTask(new TaskQueryParams("task-1234", 10));
// You can also optionally specify a ClientCallContext with call-specific config to use
Task task = client.getTask(new TaskQueryParams("task-1234"), clientCallContext);

// Cancel the task we previously submitted with id "task-1234"
Task cancelledTask = client.cancelTask(new TaskIdParams("task-1234"));
// You can also specify additional properties using a map
Map<String, Object> metadata = Map.of("reason", "user_requested");
Task cancelledTask = client.cancelTask(new TaskIdParams("task-1234", metadata));
// You can also optionally specify a ClientCallContext with call-specific config to use
Task cancelledTask = client.cancelTask(new TaskIdParams("task-1234"), clientCallContext);

// Get task push notification configuration
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
new GetTaskPushNotificationConfigParams("task-1234"));
// The push notification configuration ID can also be optionally specified
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
new GetTaskPushNotificationConfigParams("task-1234", "config-4567"));
// Additional properties can be specified using a map
Map<String, Object> metadata = Map.of("source", "client");
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
new GetTaskPushNotificationConfigParams("task-1234", "config-1234", metadata));
// You can also optionally specify a ClientCallContext with call-specific config to use
TaskPushNotificationConfig config = client.getTaskPushNotificationConfiguration(
    new GetTaskPushNotificationConfigParams("task-1234"), clientCallContext);

// Set task push notification configuration
PushNotificationConfig pushNotificationConfig = PushNotificationConfig.builder()
.url("https://example.com/callback")
.authenticationInfo(new AuthenticationInfo(Collections.singletonList("jwt"), null))
.build();
TaskPushNotificationConfig taskConfig = TaskPushNotificationConfig.builder()
.taskId("task-1234")
.pushNotificationConfig(pushNotificationConfig)
.build();
TaskPushNotificationConfig result = client.createTaskPushNotificationConfiguration(taskConfig);
// You can also optionally specify a ClientCallContext with call-specific config to use
TaskPushNotificationConfig result = client.createTaskPushNotificationConfiguration(taskConfig, clientCallContext);

// List the push notification configurations for the task with id "task-1234"
List<TaskPushNotificationConfig> configs = client.listTaskPushNotificationConfigurations(
new ListTaskPushNotificationConfigParams("task-1234"));
// Additional properties can be specified using a map
Map<String, Object> metadata = Map.of("filter", "active");
List<TaskPushNotificationConfig> configs = client.listTaskPushNotificationConfigurations(
new ListTaskPushNotificationConfigParams("task-1234", metadata));
// You can also optionally specify a ClientCallContext with call-specific config to use
List<TaskPushNotificationConfig> configs = client.listTaskPushNotificationConfigurations(
    new ListTaskPushNotificationConfigParams("task-1234"), clientCallContext);

// Delete the task push notification configuration with id "config-4567"
client.deleteTaskPushNotificationConfigurations(
new DeleteTaskPushNotificationConfigParams("task-1234", "config-4567"));
// Additional properties can be specified using a map
Map<String, Object> metadata = Map.of("reason", "cleanup");
client.deleteTaskPushNotificationConfigurations(
new DeleteTaskPushNotificationConfigParams("task-1234", "config-4567", metadata));
// You can also optionally specify a ClientCallContext with call-specific config to use
client.deleteTaskPushNotificationConfigurations(
    new DeleteTaskPushNotificationConfigParams("task-1234", "config-4567"), clientCallContext);

// Subscribe to an ongoing task with id "task-1234" using configured consumers
TaskIdParams taskIdParams = new TaskIdParams("task-1234");
client.subscribeToTask(taskIdParams);
// Or subscribe with custom consumers and error handler
List<BiConsumer<ClientEvent, AgentCard>> customConsumers = List.of(
(event, card) -> System.out.println("Subscribe event: " + event)
);
Consumer<Throwable> customErrorHandler = error ->
System.err.println("Subscribe error: " + error.getMessage());
client.subscribeToTask(taskIdParams, customConsumers, customErrorHandler);
// You can also optionally specify a ClientCallContext with call-specific config to use
client.subscribeToTask(taskIdParams, clientCallContext);

// Retrieve the A2A server agent's card
AgentCard serverAgentCard = client.getAgentCard();

A complete example of a Java A2A client communicating with a Python A2A server is available in the examples/helloworld/client directory. This example demonstrates:
- Setting up and using the A2A Java client
- Sending regular and streaming messages to a Python A2A server
- Receiving and processing responses from the Python A2A server
The example includes detailed instructions on how to run the Python A2A server and how to run the Java A2A client using JBang.
Check out the example's README for more information.
A complete example of a Python A2A client communicating with a Java A2A server is available in the examples/helloworld/server directory. This example demonstrates:
- A sample AgentCard producer
- A sample AgentExecutor producer
- A Java A2A server receiving regular and streaming messages from a Python A2A client
Check out the example's README for more information.
See COMMUNITY_ARTICLES.md for a list of community articles and videos.
This project is licensed under the terms of the Apache 2.0 License.
See CONTRIBUTING.md for contribution guidelines.
The following list contains community-contributed integrations with various Java runtimes.
To contribute an integration, please see CONTRIBUTING_INTEGRATIONS.md.
- reference/jsonrpc/README.md - JSON-RPC 2.0 Reference implementation, based on Quarkus.
- reference/grpc/README.md - gRPC Reference implementation, based on Quarkus.
- https://github.com/wildfly-extras/a2a-java-sdk-server-jakarta - This integration is based on Jakarta EE, and should work in all runtimes supporting the Jakarta EE Web Profile.
See the extras folder for extra functionality not provided by the SDK itself!