llm4s
Agentic and LLM Programming in Scala
LLM4S provides a simple, robust, and scalable framework for building LLM applications in Scala. While most LLM work is done in Python, we believe that Scala offers a fundamentally better foundation for building reliable, maintainable AI-powered applications.
Note: This is a work in progress project and is likely to change significantly over time.
- Type Safety: Catch errors at compile time, not in production.
- Functional Programming: Immutable data and pure functions for predictable, maintainable systems.
- JVM Ecosystem: Access to mature, production-grade libraries and tooling.
- Concurrency: Advanced models for safe, efficient parallelism.
- Performance: JVM speed with functional elegance.
- Ecosystem Interoperability: Seamless integration with enterprise JVM systems and cloud-native tooling.
- Multi-Provider Support: Connect seamlessly to multiple LLM providers (OpenAI, Anthropic, Google Gemini, Azure, Ollama, DeepSeek).
- Execution Environments: Run LLM-driven operations in secure, containerized or non-containerized setups.
- Error Handling: Robust mechanisms to catch, log, and recover from failures gracefully.
- MCP Support: Integration with Model Context Protocol for richer context management.
- Agent Framework: Build single or multi-agent workflows with standardized interfaces.
- Multimodal Generation: Support for text, image, voice, and other LLM modalities.
- RAG (Retrieval-Augmented Generation): Built-in tools for search, embedding, retrieval workflows, and RAGAS evaluation with benchmarking harness.
- Observability: Detailed trace logging, monitoring, and analytics for debugging and performance insights.
```
┌───────────────────────────┐
│      LLM4S API Layer      │
└─────────────┬─────────────┘
              │
   Multi-Provider Connector
(OpenAI | Anthropic | DeepSeek | ...)
              │
┌─────────────┴─────────────┐
│     Execution Manager     │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│      Agent Framework      │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│   RAG Engine + Tooling    │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│    Observability Layer    │
└───────────────────────────┘
```
- modules/core: Core LLM4S framework
- modules/workspace: Workspace runner/client/shared
- modules/crossTest: Cross-version tests
- modules/samples: Usage examples
- docs: Documentation site and references
- hooks: Pre-commit hook installer
To get started with the LLM4S project, check out this teaser talk presented by Kannupriya Kalra at the Bay Area Scala Conference. This recording is essential for understanding where we’re headed:
🎥 Teaser Talk: https://www.youtube.com/watch?v=SXybj2P3_DE&ab_channel=SalarRahmanian
LLM4S was officially introduced at the Bay Area Scala Conference in San Francisco on February 25, 2025.
To ensure code quality, we use a Git pre-commit hook that automatically checks code formatting and runs tests before allowing commits:

```
# Install the pre-commit hook
./hooks/install.sh

# The hook will automatically:
# - Check code formatting with scalafmt
# - Compile code for both Scala 2.13 and 3
# - Run tests for both Scala versions

# To skip the hook temporarily (not recommended):
# git commit --no-verify
```

Prerequisites:

- JDK 21+
- SBT
- Docker

Verify your JDK with `java -version`, then set JAVA_HOME and update your PATH. On macOS with Homebrew:

```
brew install openjdk@21
echo 'export PATH="/opt/homebrew/opt/openjdk@21/bin:$PATH"' >> ~/.zshrc
```

Compile the project:

```
sbt compile

# For all supported Scala versions (2.13 and 3)
sbt +compile

# Build and test all versions
sbt buildAll
```

You will need an API key for either OpenAI (https://platform.openai.com/) or Anthropic (https://console.anthropic.com/); other LLMs may be supported in the future (see the backlog).
Set the environment variables:

```
LLM_MODEL=openai/gpt-4o
OPENAI_API_KEY=<your_openai_api_key>
```

or Anthropic:

```
LLM_MODEL=anthropic/claude-sonnet-4-5-latest
ANTHROPIC_API_KEY=<your_anthropic_api_key>
```

or OpenRouter:

```
LLM_MODEL=openai/gpt-4o
OPENAI_API_KEY=<your_openai_api_key>
OPENAI_BASE_URL=https://openrouter.ai/api/v1
```

or Z.ai:

```
LLM_MODEL=zai/GLM-4.7
ZAI_API_KEY=<your_zai_api_key>
ZAI_BASE_URL=https://api.z.ai/api/paas/v4
```

or DeepSeek:

```
LLM_MODEL=deepseek/deepseek-chat
DEEPSEEK_API_KEY=<your_deepseek_api_key>
# Optional: DEEPSEEK_BASE_URL defaults to https://api.deepseek.com
```
Migration Note: The `LLMProvider.DeepSeek` case has been added to the sealed `LLMProvider` ADT. If you have exhaustive pattern matches on `LLMProvider`, add a `case LLMProvider.DeepSeek => ...` handler, or use a wildcard `case _ => ...` to gracefully handle future providers.
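For illustration, a minimal sketch of such an exhaustive match. Only the `DeepSeek` case is taken from this note; the other case names, the package, and the function are hypothetical:

```scala
// Hypothetical mirror of the sealed LLMProvider ADT described above;
// exact case names and package may differ in your llm4s version.
sealed trait LLMProvider
object LLMProvider {
  case object OpenAI    extends LLMProvider
  case object Anthropic extends LLMProvider
  case object DeepSeek  extends LLMProvider // the newly added case
}

def providerSlug(p: LLMProvider): String = p match {
  case LLMProvider.OpenAI    => "openai"
  case LLMProvider.Anthropic => "anthropic"
  case LLMProvider.DeepSeek  => "deepseek" // add this to keep the match exhaustive
  // alternatively: case _ => "unknown"    // a wildcard absorbs future providers
}
```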
Or Cohere:

```
LLM_MODEL=cohere/command-r
COHERE_API_KEY=<your_cohere_api_key>
COHERE_BASE_URL=https://api.cohere.com
```
This will allow you to run the non-containerized examples.
```
# Using Scala 3
sbt "samples/runMain org.llm4s.samples.basic.BasicLLMCallingExample"

# Containerised workspace demo (publish the Docker image locally first)
sbt docker:publishLocal
sbt "workspaceSamples/runMain org.llm4s.samples.workspace.ContainerisedWorkspaceDemo"

# Using Scala 2.13
sbt ++2.13.16 "samples/runMain org.llm4s.samples.basic.BasicLLMCallingExample"
```

LLM4S supports Scala 2.13 and Scala 3.7.1. The build supports version-specific code through source directories when needed:
- `src/main/scala` - Common code for all versions
- `src/main/scala-2.13` - Scala 2.13 specific code (add when needed)
- `src/main/scala-3` - Scala 3 specific code (add when needed)

When you need to use version-specific features, place the code in the appropriate directory, as in the sketch below.
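For example, the two directories can provide the same API with version-appropriate syntax (a hypothetical illustration, not code from the llm4s tree):

```scala
// src/main/scala-3/example/Level.scala (Scala 3 syntax)
package example

enum Level:
  case Debug, Info, Warn
```

```scala
// src/main/scala-2.13/example/Level.scala (same API in Scala 2.13 syntax)
package example

sealed trait Level
object Level {
  case object Debug extends Level
  case object Info  extends Level
  case object Warn  extends Level
}
```

Common code under `src/main/scala` can then use `example.Level` regardless of which Scala version is being built.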
We've added convenient aliases for cross-compilation:

```
# Compile for all Scala versions
sbt compileAll

# Test all Scala versions
sbt testAll

# Both compile and test
sbt buildAll

# Publish for all versions
sbt publishAll
```

We use specialized test projects to verify cross-version compatibility against the published artifacts. These tests ensure that the library works correctly across different Scala versions by testing against actual published JARs rather than local target directories.

```
# Run tests for both Scala 2 and 3 against published JARs
sbt testCross

# Full clean, publish, and test verification
sbt fullCrossTest
```

Note: For detailed information about our cross-testing strategy and setup, see modules/crossTest/README.md
Our goal is to implement Scala equivalents of popular Python LLM frameworks, with multi-provider, multimodal, and observability-first design as core principles.
For the full roadmap including core framework features and agent phases, see the LLM4S Roadmap
The roadmap covers:
- Core Framework Features: Multi-provider LLM, image generation, speech, embeddings, tools, MCP
- Agent Framework Phases: Conversations, guardrails, handoffs, memory, streaming, built-in tools
- Production Pillars: Testing, API Stability, Performance, Security, Documentation, Observability
- Path to v1.0.0: Structured path to production release
- [x] Single API access to multiple LLM providers (like LiteLLM) - llmconnect ✅ Complete
- [ ] Comprehensive toolchain for building LLM apps (LangChain/LangGraph equivalent)
- [x] Tool calling ✅ Complete
- [x] RAG search & retrieval ✅ Complete (vector memory, embeddings, document Q&A)
- [x] RAG evaluation & benchmarking ✅ Complete (RAGAS metrics, systematic comparison)
- [x] Logging, tracking, and monitoring ✅ Complete
- [ ] Agentic framework (like PydanticAI, CrewAI)
- [x] Single-agent workflows ✅ Complete
- [x] Multi-agent handoffs ✅ Complete
- [x] Memory system (in-memory, SQLite, vector) ✅ Complete
- [x] Streaming events ✅ Complete
- [x] Built-in tools module ✅ Complete
- [ ] DAG-based orchestration 🚧 In Progress
- [x] Tokenization utilities (Scala port of tiktoken) ✅ Complete
- [x] Examples for all supported modalities and workflows ✅ Complete
- [ ] Stable platform with extensive test coverage 🚧 In Progress
- [ ] Scala Coding SWE Agent - perform SWE Bench–type tasks on Scala codebases
- [ ] Code maps, code generation, and library templates
Tool calling is a critical integration - designed to work seamlessly with multi-provider support and agent frameworks. We use ScalaMeta to auto-generate tool definitions, support dynamic mapping, and run in secure execution environments.
Tools can run:
- In containerized sandboxes for isolation and safety.
- In multi-modal pipelines where LLMs interact with text, images, and voice.
- With observability hooks for trace analysis.
Using ScalaMeta to automatically generate tool definitions from Scala methods:

```scala
/** My tool does some funky things with a & b...
 * @param a The first thing
 * @param b The second thing
 */
def myTool(a: Int, b: String): ToolResponse = {
  // Implementation
}
```

ScalaMeta extracts method parameters, types, and documentation to generate OpenAI-compatible tool definitions.
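As a rough illustration of that extraction step, a toy sketch (not llm4s's actual generator) assuming the scalameta 4.x API:

```scala
import scala.meta._

// Toy sketch: parse a method definition and pull out its name and parameters,
// the raw material for an OpenAI-compatible tool definition.
"def myTool(a: Int, b: String): ToolResponse = ???".parse[Stat] match {
  case Parsed.Success(d: Defn.Def) =>
    // .paramss is the scalameta 4.x accessor (List[List[Term.Param]])
    val params = d.paramss.flatten.map(p => s"${p.name.value}: ${p.decltpe.fold("?")(_.syntax)}")
    println(s"tool ${d.name.value}(${params.mkString(", ")})")
  case _ =>
    println("not a method definition")
}
```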
Mapping LLM tool call requests to actual method invocations through:
- Code generation
- Reflection-based approaches
- ScalaMeta-based parameter mapping
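To make the mapping concrete, here is a hand-rolled sketch of the dispatch idea behind these approaches. llm4s generates this plumbing for you; the `ujson` dependency, the `ToolResponse` stand-in, and the error handling are illustrative assumptions:

```scala
import ujson.Value // assumes the ujson JSON library for the argument payload

// Stand-in for llm4s's ToolResponse, for a self-contained sketch.
final case class ToolResponse(content: String)

def myTool(a: Int, b: String): ToolResponse =
  ToolResponse(s"a=$a, b=$b")

// Route a tool-call request (tool name + JSON arguments) to the Scala method.
def dispatch(name: String, args: Value): Either[String, ToolResponse] =
  name match {
    case "myTool" =>
      for {
        a <- args.obj.get("a").map(_.num.toInt).toRight("missing parameter: a")
        b <- args.obj.get("b").map(_.str).toRight("missing parameter: b")
      } yield myTool(a, b)
    case other => Left(s"unknown tool: $other")
  }

// dispatch("myTool", ujson.Obj("a" -> 1, "b" -> "hi")) == Right(ToolResponse("a=1, b=hi"))
```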
Tools run in a protected Docker container environment to prevent accidental system damage or data leakage.
Tracing isn’t just for debugging - it’s the backbone of understanding model behavior. LLM4S’s observability layer includes:
- Detailed token usage reporting
- Multi-backend trace output (Langfuse, console, none)
- Agent state visualization
- Integration with monitoring dashboards
Configure tracing behavior using the TRACING_MODE environment variable:

```
# Send traces to Langfuse (default)
TRACING_MODE=langfuse
LANGFUSE_PUBLIC_KEY=pk-lf-your-key
LANGFUSE_SECRET_KEY=sk-lf-your-secret

# Print detailed traces to console with colors and token usage
TRACING_MODE=print

# Disable tracing completely
TRACING_MODE=none
```

```scala
import org.llm4s.trace.{ EnhancedTracing, Tracing }

// Create tracer from environment (Result), fallback to console tracer
val tracer: Tracing = EnhancedTracing
  .createFromEnv()
  .fold(_ => Tracing.createFromEnhanced(new org.llm4s.trace.EnhancedConsoleTracing()), Tracing.createFromEnhanced)

// Trace events, completions, and token usage
tracer.traceEvent("Starting LLM operation")
tracer.traceCompletion(completion, completion.model) // prefer the model reported by the API
tracer.traceTokenUsage(tokenUsage, completion.model, "chat-completion")
tracer.traceAgentState(agentState)
```

Note: The LLM4S template has moved to its own repository for better maintainability and independent versioning.
The llm4s.g8 starter kit helps you quickly create AI-powered applications using llm4s.
It is a starter kit for building AI-powered applications using llm4s with improved SDK usability and developer ergonomics. You can now spin up a fully working Scala project with a single sbt command.
The starter kit comes pre-configured with best practices, prompt execution examples, CI, formatting hooks, unit testing, documentation, and cross-platform support.
Template Repository: github.com/llm4s/llm4s.g8
Using sbt, do:

```
# llm4s_version: 0.1.1 is the latest version at the time of writing
# scala_version: 2.x.x or 3.x.x
# munit_version: 1.1.1 is the latest version at the time of writing
sbt new llm4s/llm4s.g8 \
  --name=<your.project.name> \
  --package=<your.organization> \
  --version=0.1.0-SNAPSHOT \
  --llm4s_version=<llm4s.version> \
  --scala_version=<scala.version> \
  --munit_version=<munit.version> \
  --directory=<your.project.name> \
  --force
```

to create a new project. (The version notes are given as comments above the command, since a trailing comment after a backslash would break the shell line continuation.)
For more information about the template, including the compatibility matrix and comprehensive getting-started documentation, visit the template repository.
llm4s exposes a single configuration flow with sensible precedence:

- Precedence: `-D` system properties > `application.conf` (if your app provides it) > `reference.conf` defaults.
- Environment variables are wired via `${?ENV}` in `reference.conf` (no `.env` reader required).
Preferred typed entry points (PureConfig-backed via `Llm4sConfig`); a wiring sketch follows the list:

- Provider / model:
  - `Llm4sConfig.provider(): Result[ProviderConfig]` – returns the typed provider config (OpenAI/Azure/Anthropic/Ollama).
  - `LLMConnect.getClient(config: ProviderConfig): Result[LLMClient]` – builds a client from a typed config.
- Tracing:
  - `Llm4sConfig.tracing(): Result[TracingSettings]` – returns typed tracing settings.
  - `EnhancedTracing.create(settings: TracingSettings): EnhancedTracing` – builds an enhanced tracer from typed settings.
  - `Tracing.create(settings: TracingSettings): Tracing` – builds a legacy `Tracing` from typed settings.
- Embeddings:
  - `Llm4sConfig.embeddings(): Result[(String, EmbeddingProviderConfig)]` – returns `(provider, config)` with validation.
  - `EmbeddingClient.from(provider: String, cfg: EmbeddingProviderConfig): Result[EmbeddingClient]` – builds an embeddings client from typed config.
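A minimal wiring sketch: the signatures are as listed above, but the import paths are assumptions, and `Result` is treated as Either-like (the tracer example earlier folds over it):

```scala
// Import paths are assumptions; adjust to your llm4s version.
import org.llm4s.config.Llm4sConfig
import org.llm4s.llmconnect.LLMConnect

// Typed loaders chain in a for-comprehension over Result.
val client = for {
  cfg       <- Llm4sConfig.provider()      // -D props > application.conf > reference.conf
  llmClient <- LLMConnect.getClient(cfg)
} yield llmClient
```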
Recommended usage patterns:

- Model name for display: `Llm4sConfig.provider().map(_.model)`, or prefer `completion.model` from API responses.
- Tracing:
  - For enhanced tracing: `Llm4sConfig.tracing().map(EnhancedTracing.create)`.
  - For legacy `Tracing`: `Llm4sConfig.tracing().map(Tracing.create)`.
- Workspace (samples): `WorkspaceConfigSupport.load()` to get `workspaceDir`, `imageName`, `hostPort`, `traceLogPath`.
- Embeddings sample (samples): `EmbeddingUiSettings.loadFromEnv`, `EmbeddingTargets.loadFromEnv`, `EmbeddingQuery.loadFromEnv` (sample helpers backed by `Llm4sConfig`).
Use these loaders to convert flat keys and HOCON paths into typed, validated settings used by the code (an embeddings sketch follows the list):

- LLM model selection
  - Keys: `llm4s.llm.model` or `LLM_MODEL`
  - Type: `ProviderConfig` (with provider-specific subtypes)
  - Loader: `Llm4sConfig.provider()` + `LLMConnect.getClient(...)`
- Tracing configuration
  - Keys: `llm4s.tracing.mode` | `TRACING_MODE`, `LANGFUSE_URL`, `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, `LANGFUSE_ENV`, `LANGFUSE_RELEASE`, `LANGFUSE_VERSION`
  - Type: `TracingSettings`
  - Loader: `Llm4sConfig.tracing()` → then `EnhancedTracing.create` or `Tracing.create`
- Workspace settings (samples)
  - Keys: `llm4s.workspace.dir` | `WORKSPACE_DIR`, `llm4s.workspace.image` | `WORKSPACE_IMAGE`, `llm4s.workspace.port` | `WORKSPACE_PORT`, `llm4s.workspace.traceLogPath` | `WORKSPACE_TRACE_LOG`
  - Type: `WorkspaceSettings`
  - Loader: `WorkspaceConfigSupport.load()`
- Embeddings: inputs and UI (samples)
  - Input paths: `EMBEDDING_INPUT_PATHS` or `EMBEDDING_INPUT_PATH` → `EmbeddingTargets.loadFromEnv()` → `EmbeddingTargets`
  - Query: `EMBEDDING_QUERY` → `EmbeddingQuery.loadFromEnv()` → `EmbeddingQuery`
  - UI knobs: `MAX_ROWS_PER_FILE`, `TOP_DIMS_PER_ROW`, `GLOBAL_TOPK`, `SHOW_GLOBAL_TOP`, `COLOR`, `TABLE_WIDTH` → `EmbeddingUiSettings.loadFromEnv()` → `EmbeddingUiSettings`
- Embeddings: provider configuration
  - Key: `EMBEDDING_PROVIDER` or `llm4s.embeddings.provider` (required)
  - Supported providers: `openai`, `voyage`, `ollama`
  - Type: `(String, EmbeddingProviderConfig)`
  - Loader: `Llm4sConfig.embeddings()`
  - Provider-specific keys:
    - OpenAI: `OPENAI_EMBEDDING_BASE_URL`, `OPENAI_EMBEDDING_MODEL`, `OPENAI_API_KEY`
    - Voyage: `VOYAGE_EMBEDDING_BASE_URL`, `VOYAGE_EMBEDDING_MODEL`, `VOYAGE_API_KEY`
    - Ollama (local): `OLLAMA_EMBEDDING_BASE_URL` (default: `http://localhost:11434`), `OLLAMA_EMBEDDING_MODEL`
- Provider API keys and endpoints
  - Keys: `OPENAI_API_KEY`, `OPENAI_BASE_URL`, `ANTHROPIC_API_KEY`, `ANTHROPIC_BASE_URL`, `AZURE_API_BASE`, `AZURE_API_KEY`, `AZURE_API_VERSION`, `OLLAMA_BASE_URL`, `GEMINI_BASE_URL`, `GOOGLE_API_KEY`, `DEEPSEEK_API_KEY`, `DEEPSEEK_BASE_URL`, `COHERE_BASE_URL`, `COHERE_API_KEY`
  - Type: concrete `ProviderConfig` (e.g., `OpenAIConfig`, `AnthropicConfig`, `AzureConfig`, `OllamaConfig`, `GeminiConfig`, `DeepSeekConfig`, `CohereConfig`)
  - Loader: `Llm4sConfig.provider()` → then provider-specific config constructors
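The embeddings loader and client factory compose the same way. A sketch using only the two signatures shown earlier (imports assumed as in the provider sketch, plus `EmbeddingClient`):

```scala
// Sketch: build an embeddings client from the typed (provider, config) pair.
val embeddingClient = Llm4sConfig.embeddings().flatMap { case (provider, cfg) =>
  EmbeddingClient.from(provider, cfg)
}
```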
Tracing

- Configure mode via `llm4s.tracing.mode` (default: `console`). Supported: `langfuse`, `console`, `noop`.
- Override with env: `TRACING_MODE=langfuse` (or system property `-Dllm4s.tracing.mode=langfuse`).
- Build tracers:
  - Typed: `Llm4sConfig.tracing().map(EnhancedTracing.create)` → `Result[EnhancedTracing]`
  - Legacy bridge: `Llm4sConfig.tracing().map(Tracing.create)`
  - Low-level: `LangfuseTracing.fromEnv()` → `Result[LangfuseTracing]`
Example (no `application.conf` required):

```
sbt -Dllm4s.llm.model=openai/gpt-4o -Dllm4s.openai.apiKey=sk-... "samples/runMain org.llm4s.samples.basic.BasicLLMCallingExample"
```

Or with environment variables (picked up via `reference.conf`):

```
export LLM_MODEL=openai/gpt-4o
export OPENAI_API_KEY=sk-...
sbt "samples/runMain org.llm4s.samples.basic.BasicLLMCallingExample"
```
LLM4S uses GitHub Actions for continuous integration to ensure code quality and compatibility across different platforms and Scala versions.
Our unified CI workflow runs on every push and pull request to main/master branches:
- Quick Checks: Fast-failing checks for code formatting and compilation
- Cross-Platform Testing: Tests run on Ubuntu and Windows with Scala 2.13.16 and 3.7.1
- Template Validation: Verifies the g8 template works correctly
- Caching: Optimized caching strategy with Coursier for faster builds
Automated AI-powered code review for pull requests:
- Automatic Reviews: Trusted PRs get automatic Claude reviews
- Security: External PRs require manual trigger by maintainers
- Manual Trigger: Maintainers can request reviews with an `@claude` comment
Automated release process triggered by version tags (format: `v0.1.11`):

- Tag Format: Must use the `v` prefix (e.g., `v0.1.11`, not `0.1.11`)
- Pre-release Checks: Runs full CI suite before publishing
- GPG Signing: Artifacts are signed for security
- Maven Central: Publishes to Sonatype/Maven Central
See RELEASE.md for detailed release instructions.
You can run the same checks locally before pushing:

```
# Check formatting
sbt scalafmtCheckAll

# Compile all Scala versions
sbt +compile

# Run all tests
sbt +test

# Full build (compile + test)
sbt buildAll
```

Stay hands-on with LLM4S! Join us for interactive mob programming sessions, live debugging, and open-source collaboration. These events are great for developers, contributors, and anyone curious about Scala + GenAI.
🗓️ Weekly live coding and collaboration during LLM4S Dev Hour, join us every Sunday on Discord!
| Date | Session Title | Description | Location | Hosts | Details URL | Featured In |
|---|---|---|---|---|---|---|
| 20-Jul-2025 onwards (Weekly Sundays) | 🗓️ LLM4S Dev Hour - Weekly Live Coding & Collaboration | A weekly mob programming session where we code, debug, and learn together - open to all! 📌 Updates are shared by the host in the #llm4s-dev-hour Discord channel after each session. Weekly changing Luma invite link (for scheduling in your calendar) | Online, London, UK (9am local time) | Kannupriya Kalra, Rory Graves | LinkedIn , Reddit1 , Reddit2 , Bluesky , Mastodon , X/Twitter | Scala Times – Issue #537 |
See the talks being given by maintainers and open source developers globally and witness the engagement by developers around the world.
Stay updated with talks, workshops, and presentations about LLM4S happening globally. These sessions dive into the architecture, features, and future plans of the project.
Snapshots from LLM4S talks held around the world 🌍.
| Date | Event/Conference | Talk Title | Location | Speaker Name | Details URL | Recording Link URL | Featured In |
|---|---|---|---|---|---|---|---|
| 25-Feb-2025 | Bay Area Scala | Let's Teach LLMs to Write Great Scala! (Original version) | Tubi office, San Francisco, CA, USA 🇺🇸 | Kannupriya Kalra | Event Info , Reddit Discussion , Mastodon Post , Bluesky Post , X/Twitter Post , Meetup Event | Watch Recording | – |
| 20-Apr-2025 | Scala India | Let's Teach LLMs to Write Great Scala! (Updated from Feb 2025) | India 🇮🇳 | Kannupriya Kalra | Event Info , Reddit Discussion , X/Twitter Post | Watch Recording | – |
| 28-May-2025 | Functional World 2025 by Scalac | Let's Teach LLMs to Write Great Scala! (Updated from Apr 2025) | Gdansk, Poland 🇵🇱 | Kannupriya Kalra | LinkedIn Post 1 , LinkedIn Post 2 , Reddit Discussion , Meetup Link , X/Twitter Post | Watch Recording | Scalendar (May 2025) , Scala Times 1 , Scala Times 2 |
| 13-Jun-2025 | Dallas Scala Enthusiasts | Let's Teach LLMs to Write Great Scala! (Updated from May 2025) | Dallas, Texas, USA 🇺🇸 | Kannupriya Kalra | Meetup Event , LinkedIn Post , X/Twitter Post , Reddit Discussion , Bluesky Post , Mastodon Post | Watch Recording | Scalendar (June 2025) |
| 13-Aug-2025 | London Scala Users Group | Scala Meets GenAI: Build the Cool Stuff with LLM4S | The Trade Desk office, London, UK 🇬🇧 | Kannupriya Kalra, Rory Graves | Meetup Event , X/Twitter Post , Bluesky Post , LinkedIn Post | Recording will be posted once the event is done | Scalendar (August 2025) |
| 21-Aug-2025 | Scala Days 2025 | Scala Meets GenAI: Build the Cool Stuff with LLM4S | SwissTech Convention Center, EPFL campus, Lausanne, Switzerland 🇨🇭 | Kannupriya Kalra, Rory Graves | Talk Info , LinkedIn Post , X/Twitter Post , Reddit Discussion , Bluesky Post , Mastodon Post | Recording will be posted once the event is done | Scala Days 2025: August in Lausanne – Code, Community & Innovation , Scalendar (August 2025) , Scala Days 2025 LinkedIn Post , Scala Days 2025 Highlights , Scala Days 2025 Wrap , Scala Days 2025 Recap – A Scala Community Reunion , Xebia Scala days blog |
| 25-Aug-2025 | Zürich Scala Enthusiasts | Fork It Till You Make It: Career Building with Scala OSS | Rivero AG, ABB Historic Building, Elias-Canetti-Strasse 7, Zürich, Switzerland 🇨🇭 | Kannupriya Kalra | Meetup Event , LinkedIn Post , X/Twitter Post , Bluesky Post , Mastodon Post , Reddit Discussion | Recording will be posted once the event is done | Scalendar (August 2025) |
| 18-Sept-2025 | Scala Center Talks | Lightning Talks Powered by GSoC 2025 for Scala | EPFL campus, Lausanne, Switzerland 🇨🇭 | Kannupriya Kalra | Event Invite , LinkedIn Post , Scala Center's LinkedIn Post , X/Twitter Post , Mastodon Post , Bluesky Post , Reddit Discussion | Recording will be posted once the event is done, View LLM4S Slides 1, 2, 3, 4, Download LLM4S Slides 1, 2, 3, 4 | – |
| 12-18-Oct-2025 | ICFP/SPLASH 2025 (The Scala Workshop 2025) | Mentoring in the Scala Ecosystem: Insights from Google Summer of Code | Peony West, Marina Bay Sands Convention Center, Singapore 🇸🇬 | Kannupriya Kalra | ICFP/SPLASH 2025 Event Website , The Scala Workshop 2025 Schedule , LinkedIn Post , X/Twitter Post , Mastodon Post , Bluesky Post , Reddit Discussion | Watch Recording | – |
| 24-Oct-2025 | GEN AI London 2025 | Building Reliable AI systems: From Hype to Practical Toolkits | Queen Elizabeth II Center in the City of Westminster, London, UK 🇬🇧 | Kannupriya Kalra | GEN AI London Event Website , GEN AI London 2025 Schedule , LinkedIn Post 1 , LinkedIn Post 2 , LinkedIn Post 3 , X/Twitter Post , Mastodon Post , Bluesky Post , Reddit Discussion | Recording will be posted once the event is done, View Slides, Download Slides | – |
| 23-25-Oct-2025 | Google Summer Of Code Mentor Summit 2025 | LLM4S x GSoC 2025: Engineering GenAI Agents in Functional Scala | Google Office, Munich, Erika-Mann-Str. 33 · 80636 München, Germany 🇩🇪 | Kannupriya Kalra | Event Website | Recording will be posted once the event is done. View Scala Center Slides, Download Scala Center Slides, View GSoC Mentor Summit All Speakers Slides, Download GSoC Mentor Summit All Speakers Slides | – |
| 29-30-Nov-2025 | Oaisys Conf 2025: AI Practitioners Conference | LLM4S: Building Reliable AI Systems in the JVM Ecosystem | MCCIA, Pune, India 🇮🇳 | Kannupriya Kalra, Shubham Vishwakarma | Event Website , LinkedIn Post , X/Twitter Post , Mastodon Post , Bluesky Post , Reddit Post | Recording will be posted once the event recordings are available. View Slides, Download Slides | – |
| 10-Dec-2025 | AI Compute & Hardware Conference 2025 | Functional Intelligence: Building Scalable AI Systems for the Hardware Era | Samsung HQ, San Jose, California, USA 🇺🇸 | Kannupriya Kalra | Event Website , Event details on Meetup , LinkedIn Post , X/Twitter Post , Mastodon Post , Bluesky Post , Reddit Post | Recording will be posted once the event recordings are available. View Slides (coming soon), Download Slides (coming soon) | – |
📝 Want to invite us for a talk or workshop? Reach out via our respective emails or connect on Discord: https://discord.gg/4uvTPn6qww
- Build AI-powered applications in a statically typed, functional language designed for large systems.
- Help shape the Scala ecosystem’s future in the AI/LLM space.
- Learn modern LLM techniques like zero-shot prompting, tool calling, and agentic workflows.
- Collaborate with experienced Scala engineers and open-source contributors.
- Gain real-world experience working with Dockerized environments and multi-LLM providers.
- Contribute to a project that offers you the opportunity to become a mentor or contributor funded by Google through its Google Summer of Code (GSoC) program.
- Join a global developer community focused on type-safe, maintainable AI systems.
Interested in contributing? Start here:
LLM4S GitHub Issues: https://lnkd.in/eXrhwgWY
Want to be part of developing this and interact with other developers? Join our Discord community!
LLM4S Discord: https://lnkd.in/eb4ZFdtG
LLM4S was selected for GSoC 2025 under the Scala Center Organisation.
This project is also participating in Google Summer of Code (GSoC) 2025! If you're interested in contributing to the project as a contributor, check out the details here:
👉 Scala Center GSoC Ideas: https://lnkd.in/enXAepQ3
To know everything about GSoC and how it works, check out this talk:
🎥 GSoC Process Explained: https://lnkd.in/e_dM57bZ
To learn about the experience of GSoC contributors of LLM4S, check out their blogs in the section below.
📚 Explore Past GSoC Projects with Scala Center: https://www.gsocorganizations.dev/organization/scala-center/ This page includes detailed information on all GSoC projects with Scala Center from past years - including project descriptions, code repositories, contributor blogs, and mentor details.
Hello GSoCers and future GSoC aspirants! Here are some essential onboarding links to help you collaborate and stay organized within the LLM4S community.
- 🔗 LLM4S GSoC GitHub Team: You have been invited to join the LLM4S GitHub team for GSoC participants. Accepting this invite will grant you access to internal resources and coordination tools. 👉 https://github.com/orgs/llm4s/teams/gsoc/members
- 📌 Private GSoC Project Tracking Board: Once you're part of the team, you will have access to our private GSoC tracking board. This board helps you track tasks, timelines, and project deliverables throughout the GSoC period. 👉 https://github.com/orgs/llm4s/projects/3
- Contributor: Elvan Konukseven | GSoC Final Report URL
- LinkedIn: https://www.linkedin.com/in/elvan-konukseven/ | Email: [email protected] | Discord: `elvan_31441`
- Mentors: Kannupriya Kalra (Email: [email protected]), Rory Graves (Email: [email protected])
- Announcement: Official Acceptance Post | Volunteering at Scala Center 1, 2, 3 | Python Vs Scala
- Contributor Blogs: 📌 elvankonukseven.com/blog
- Work log: 📌 GitHub Project Board
- Contributor: Gopi Trinadh Maddikunta | GSoC Final Report URL
- LinkedIn: https://www.linkedin.com/in/gopitrinadhmaddikunta/ | Email: [email protected] | Discord: `g3nadh_58439`
- Mentors: Kannupriya Kalra (Email: [email protected]), Rory Graves (Email: [email protected]), Dmitry Mamonov (Email: [email protected])
- Announcement: Official Acceptance Post | Midterm evaluation post | Lightning talk post | GSoC Final Report post
- Contributor Blogs: 📌 Main Blog | 📌 Scala at Light Speed – Part 1 | 📌 Scala at Light Speed – Part 2
- Work log: 📌 Work Log → GitHub Project
- Contributor: Anshuman Awasthi | GSoC Final Report URL
- LinkedIn: https://www.linkedin.com/in/let-me-try-to-fork-your-responsibilities/ | Email: [email protected] | Discord: `anshuman23026`
- Mentors: Kannupriya Kalra (Email: [email protected]), Rory Graves (Email: [email protected])
- Announcement: Official Acceptance Post | Midterm evaluation post | Rock the JVM post | Lightning talk post | GSoC Final Report post
- Contributor Blogs: 📌 Anshuman's GSoC Journey
- Work Log: 📌 GitHub Project Board
- Contributor: Shubham Vishwakarma | GSoC Final Report URL
- LinkedIn: https://www.linkedin.com/in/shubham-vish/ | Email: [email protected] | Discord: `oxygen4076`
- Mentors: Kannupriya Kalra (Email: [email protected]), Rory Graves (Email: [email protected]), Dmitry Mamonov (Email: [email protected])
- Announcement: Official Acceptance Post | Midterm evaluation post | Midway journey post | Lightning talk post | Rock the JVM post
- Contributor Blogs: 📌 Cracking the Code: My GSoC 2025 Story
- Work log: 📌 GitHub Project Board
Feel free to reach out to the contributors or mentors listed for any guidance or questions related to GSoC 2026.
Contributors selected across the globe for GSoC 2025 program.
We’ve got exciting news to share - Scalac, one of the leading Scala development companies, has officially partnered with LLM4S for a dedicated AI-focused blog series!
This collaboration was initiated after our talk at Functional World 2025, and it’s now evolving into a full-fledged multi-part series and an upcoming eBook hosted on Scalac’s platform. The series will combine practical Scala code, GenAI architecture, and reflections from the LLM4S team - making it accessible for Scala developers everywhere who want to build with LLMs.
📝 The first post is already drafted and under review by the Scalac editorial team. We’re working together to ensure this content is both technically insightful and visually engaging.
🎉 Thanks to Matylda Kamińska, Rafał Kruczek, and the Scalac marketing team for this opportunity and collaboration!
Stay tuned - the series will be published soon on scalac.io/blog, and we’ll link it here as it goes live.
LLM4S blogs powered by Scalac.
Technical deep-dives, production stories, and insights from LLM4S contributors. These articles chronicle real-world implementations, architectural decisions, and lessons learned from building type-safe LLM infrastructure in Scala.
| Author | Title | Topics Covered | Part of Series | Link |
|---|---|---|---|---|
| Vitthal Mirji | llm4s: type-safe LLM infrastructure for Scala that stay 1-step ahead of everything | Introduction to llm4s, why type safety matters, runtime → compile-time errors, provider abstraction, agent framework overview | Building type-safe LLM infrastructure (Part 1/7) | Read article |
| Vitthal Mirji | Developer experience: How we turned 20-minute llm4s setup into 60 seconds | Giter8 template creation, onboarding friction elimination, starter kit design, 95% time savings (PR #101) | Building type-safe LLM infrastructure (Part 2/7) | Read article |
| Vitthal Mirji | Production error handling: When our LLM pipeline threw 'Unknown error' for everything | Type-safe error hierarchies, ADTs, Either-based error handling, 60% faster debugging (PR #137) | Building type-safe LLM infrastructure (Part 3/7) | Read article |
| Vitthal Mirji | Error hierarchy refinement: Smart constructors and the code we deleted | Smart constructors, trait-based error classification, eliminating boolean flags, -263 lines (PR #197) | Building type-safe LLM infrastructure (Part 4/7) | Read article |
| Vitthal Mirji | Type system upgrades: The 'asistant' typo that compiled and ran in production | String literals → MessageRole enum, 6 type classes, compile-time typo prevention, 43-file migration (PR #216) | Building type-safe LLM infrastructure (Part 5/7) | Read article |
| Vitthal Mirji | Safety refactor: The P1 streaming bug that showed wrong errors and 47 try-catch blocks | Eliminating 47 try-catch blocks, safety utilities, resource management, streaming bug fix, -260 net lines (PR #260) | Building type-safe LLM infrastructure (Part 6/7) | Read article |
| Vitthal Mirji | 5 Production patterns from building llm4s: What actually works | Pattern-based design, type-safe foundations, developer experience first, migration playbooks, production lessons learned | Building type-safe LLM infrastructure (Part 7/7) | Read article |
💡 You can contribute writing blogs! Share your LLM4S experience, architectural insights, or production lessons. Reach out to maintainers on Discord or create a PR updating this table.
Our Google Summer of Code (GSoC) 2025 contributors have actively documented their journeys, sharing insights and implementation deep-dives from their projects. These blog posts offer valuable perspectives on how LLM4S is evolving from a contributor-first lens.
| Contributor | Blog(s) | Project |
|---|---|---|
| Elvan Konukseven | elvankonukseven.com/blog | Agentic Toolkit for LLMs |
| Gopi Trinadh Maddikunta | Main Blog , Scala at Light Speed – Part 1 , Scala at Light Speed – Part 2 | RAG in a Box |
| Anshuman Awasthi | Anshuman's GSoC Journey | Multimodal LLM Support |
| Shubham Vishwakarma | Cracking the Code: My GSoC 2025 Story | Tracing and Observability |
💡 These blogs reflect first-hand experience in building real-world AI tools using Scala, and are great resources for future contributors and researchers alike.
- 🌐 Main Blog
- 📝 Articles:
- 🌐 Main Blog
- 📝 Articles:
- Spark of Curiosity
- The Hunt Begins
- Enter Scala Center
- Understanding the Mission
- Locking the Vision
- The Proposal Sprint
- The Acceptance Moment
- Stepping In
- Fuel for the Mission
- Scala at Light Speed – Part 1
- Scala at Light Speed – Part 2
- From Documents to Embeddings
- Universal Extraction + Embedding Client = One step closer to RAG
- Midterm Milestone- Modular RAG in Motion
- GSoC Final Report URL
- 🌐 Main Blog : Anshuman's GSoC Journey
- 📝 Articles:
- 🌐 Main Blog : Shubham's GSoC Journey
- 📝 Articles:
Want to connect with maintainers? The LLM4S project is maintained by:
- Rory Graves - https://www.linkedin.com/in/roryjgraves/ | Email: [email protected] | Discord: `rorybot1`
- Kannupriya Kalra - https://www.linkedin.com/in/kannupriyakalra/ | Email: [email protected] | Discord: `kannupriyakalra_46520`
This project is licensed under the MIT License - see the LICENSE file for details.