SpeziLLM


A module enabling the integration of Large Language Models (LLMs) with the Spezi Ecosystem




Spezi LLM


Overview

The Spezi LLM Swift Package includes modules that are helpful to integrate LLM-related functionality in your application. The package provides all necessary tools for local LLM execution, the usage of remote OpenAI-based LLMs, as well as LLMs running on Fog node resources within the local network.

Screenshots: the OpenAI LLM Chat View (SpeziLLMOpenAI), the Language Model Download View (SpeziLLMLocalDownload), and the Local LLM Chat View using a locally executed LLM (SpeziLLMLocal).

Setup

1. Add Spezi LLM as a Dependency

You need to add the SpeziLLM Swift package to your app in Xcode or add it as a dependency of your own Swift package.

[!IMPORTANT]
If your application is not yet configured to use Spezi, follow the Spezi setup article to set up the core Spezi infrastructure.
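If Spezi is not set up yet, a minimal SwiftUI entry point could look like the sketch below. It assumes an iOS app with a hypothetical ExampleAppDelegate (a SpeziAppDelegate subclass holding your Configuration) and a ContentView; refer to the Spezi setup article for the authoritative steps.

import Spezi
import SwiftUI

@main
struct ExampleApp: App {
    // `ExampleAppDelegate` is a hypothetical `SpeziAppDelegate` subclass holding your `Configuration`.
    @UIApplicationDelegateAdaptor(ExampleAppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
                .spezi(appDelegate) // Injects the configured Spezi modules into the SwiftUI environment.
        }
    }
}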

2. Follow the setup steps of the individual targets

As Spezi LLM contains a variety of different targets for specific LLM functionalities, please follow the additional setup guide in the respective target section of this README.

Targets

Spezi LLM provides a number of targets to help developers integrate LLMs in their Spezi-based applications:

  • SpeziLLM: Base infrastructure of LLM execution in the Spezi ecosystem.
  • SpeziLLMLocal: Local LLM execution capabilities directly on-device. Enables running open-source LLMs like Meta's Llama2 models.
  • SpeziLLMLocalDownload: Download and storage manager of local Language Models, including onboarding views.
  • SpeziLLMOpenAI: Integration with OpenAI's GPT models via OpenAI's API service.
  • SpeziLLMFog: Discover and dispatch LLM inference jobs to Fog node resources within the local network.

The section below highlights the setup and basic use of the SpeziLLMLocal, SpeziLLMOpenAI, and SpeziLLMFog targets in order to integrate Language Models in a Spezi-based application.

[!NOTE]
To learn more about the usage of the individual targets, please refer to the DocC documentation of the package.

Spezi LLM Local

The target enables developers to easily execute medium-size Large Language Models (LLMs) locally on-device via the llama.cpp framework. Building on top of the infrastructure of the SpeziLLM target, the module allows you to interact with the locally run LLM via purely Swift-based APIs; no interaction with low-level C or C++ code is necessary.

[!IMPORTANT] In order to use the LLM local target, you need to set build parameters in the consuming Xcode project or the consuming SPM package to enable the Swift / C++ interop introduced in Xcode 15 and Swift 5.9. Keep in mind that this also applies to nested dependencies: the configuration needs to be set recursively for the entire dependency tree up to the llama.cpp SPM package.

For Xcode projects:

  • Open your build settings in Xcode by selecting PROJECT_NAME > TARGET_NAME > Build Settings.
  • Within the Build Settings, search for the C++ and Objective-C Interoperability setting and set it to C++ / Objective-C++. This enables the project to use the C++ headers from llama.cpp.
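If you manage build settings through an xcconfig file instead, the same setting can be expressed there. The setting key below is an assumption based on Xcode 15; verify it against the "C++ and Objective-C Interoperability" entry in the Build Settings editor of your Xcode version.

// Example.xcconfig — sketch only; the key name is an assumption, verify it in Xcode's Build Settings editor.
SWIFT_OBJC_INTEROP_MODE = objcxx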

For SPM packages:

  • Open the Package.swift file of your SPM package
  • Within the package target that consumes the llama.cpp package, add the interoperabilityMode(_:) Swift build setting as follows:
/// Adds the dependency to the Spezi LLM SPM package
dependencies: [
    .package(url: "https://github.com/StanfordSpezi/SpeziLLM", .upToNextMinor(from: "0.6.0"))
],
targets: [
  .target(
      name: "ExampleConsumingTarget",
      /// States the dependency of the target on SpeziLLMLocal
      dependencies: [
          .product(name: "SpeziLLMLocal", package: "SpeziLLM")
      ],
      /// Important: Configure the `.interoperabilityMode(_:)` within the `swiftSettings`
      swiftSettings: [
          .interoperabilityMode(.Cxx)
      ]
  )
]

Setup

You can configure the Spezi Local LLM execution within the typical SpeziAppDelegate. In the example below, the LLMRunner from the SpeziLLM target, which is responsible for providing LLM functionality within the Spezi ecosystem, is configured with the LLMLocalPlatform from the SpeziLLMLocal target. This prepares the LLMRunner to locally execute Language Models.

import Spezi
import SpeziLLM
import SpeziLLMLocal

class TestAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMLocalPlatform()
            }
        }
    }
}

Usage

The code example below showcases the interaction with local LLMs through the SpeziLLM LLMRunner, which is injected into the SwiftUI Environment via the Configuration shown above.

The LLMLocalSchema defines the type and configurations of the to-be-executed LLMLocalSession. This transformation is done via the LLMRunner that uses the LLMLocalPlatform. The inference via LLMLocalSession/generate() returns an AsyncThrowingStream that yields all generated String pieces.

import SpeziLLM
import SpeziLLMLocal
import SwiftUI

struct LLMLocalDemoView: View {
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                // Instantiate the `LLMLocalSchema` to an `LLMLocalSession` via the `LLMRunner`.
                let llmSession: LLMLocalSession = runner(
                    with: LLMLocalSchema(
                        modelPath: URL(string: "URL to the local model file")!
                    )
                )

                do {
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
                }
            }
    }
}
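The modelPath URL in the example above is a placeholder. As a hedged illustration (the file name and the Documents directory location are assumptions, not necessarily where SpeziLLMLocalDownload stores downloaded models), such a file URL could be constructed like this:

import Foundation

/// Builds a file URL for a locally stored model file.
/// The default file name "llm.gguf" is purely illustrative.
func localModelURL(fileName: String = "llm.gguf") -> URL {
    let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    return documentsDirectory.appendingPathComponent(fileName)
}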

[!NOTE]
To learn more about the usage of SpeziLLMLocal, please refer to the DocC documentation.

Spezi LLM OpenAI

A module that allows you to interact with GPT-based Large Language Models (LLMs) from OpenAI within your Spezi application. SpeziLLMOpenAI provides a pure Swift-based API for interacting with the OpenAI GPT API, building on top of the infrastructure of the SpeziLLM target. In addition, SpeziLLMOpenAI provides developers with a declarative Domain Specific Language to utilize the OpenAI function calling mechanism. This enables structured, bidirectional, and reliable communication between the OpenAI LLMs and external tools, such as the Spezi ecosystem.
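As a rough sketch of that DSL (the WeatherFunction name, its parameter, and the returned string are illustrative assumptions; consult the DocC documentation for the authoritative LLMFunction and @Parameter API), an OpenAI function and its registration with an LLMOpenAISchema could look like this; the platform setup itself is described in the following section:

import SpeziLLMOpenAI

/// Illustrative OpenAI function definition exposed to the LLM via function calling.
struct WeatherFunction: LLMFunction {
    static let name: String = "get_current_weather"
    static let description: String = "Returns the current weather for a given location."

    @Parameter(description: "The city to look up, e.g. 'Stanford, CA'.")
    var location: String

    func execute() async throws -> String? {
        // In a real application, query a weather service here.
        "The weather in \(location) is sunny."
    }
}

// The function is registered with the LLM through the schema's function builder closure.
let schema = LLMOpenAISchema(
    parameters: .init(modelType: .gpt3_5Turbo)
) {
    WeatherFunction()
}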

Setup

In order to use OpenAI LLMs within the Spezi ecosystem, the SpeziLLM LLMRunner needs to be initialized in the Spezi Configuration with the LLMOpenAIPlatform. Only then can the LLMRunner be used for inference with OpenAI LLMs. See the SpeziLLM documentation for more details.

import Spezi
import SpeziLLM
import SpeziLLMOpenAI

class LLMOpenAIAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMOpenAIPlatform()
            }
        }
    }
}

[!IMPORTANT] If using SpeziLLMOpenAI on macOS, make sure to add the Keychain Access Groups entitlement to the enclosing Xcode project via PROJECT_NAME > Signing & Capabilities > + Capability. The array of keychain groups can be left empty; only the base entitlement is required.

Usage

The code example below showcases the interaction with an OpenAI LLM through the SpeziLLM LLMRunner, which is injected into the SwiftUI Environment via the Configuration shown above.

The LLMOpenAISchema defines the type and configurations of the to-be-executed LLMOpenAISession. This transformation is done via the LLMRunner that uses the LLMOpenAIPlatform. The inference via LLMOpenAISession/generate() returns an AsyncThrowingStream that yields all generated String pieces.

import SpeziLLM
import SpeziLLMOpenAI
import SwiftUI

struct LLMOpenAIDemoView: View {
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                // Instantiate the `LLMOpenAISchema` to an `LLMOpenAISession` via the `LLMRunner`.
                let llmSession: LLMOpenAISession = runner(
                    with: LLMOpenAISchema(
                        parameters: .init(
                            modelType: .gpt3_5Turbo,
                            systemPrompt: "You're a helpful assistant that answers questions from users.",
                            overwritingToken: "abc123"
                        )
                    )
                )

                do {
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
                }
            }
    }
}
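The error-handling comments in the examples refer to ViewState and viewStateAlert(state:) from SpeziViews. A minimal sketch of that pattern is shown below, assuming the SpeziViews API (a ViewState with .idle, .processing, and .error cases plus the viewStateAlert modifier) and a hypothetical LLMDemoError wrapper:

import SpeziLLM
import SpeziLLMOpenAI
import SpeziViews
import SwiftUI

/// Hypothetical error wrapper; `ViewState.error` expects a `LocalizedError`.
struct LLMDemoError: LocalizedError {
    var errorDescription: String? { "The LLM inference failed." }
}

struct LLMOpenAIViewStateDemoView: View {
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""
    @State var viewState: ViewState = .idle

    var body: some View {
        Text(responseText)
            // Presents an alert whenever `viewState` transitions to `.error`.
            .viewStateAlert(state: $viewState)
            .task {
                let llmSession: LLMOpenAISession = runner(
                    with: LLMOpenAISchema(parameters: .init(modelType: .gpt3_5Turbo))
                )

                do {
                    viewState = .processing
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                    viewState = .idle
                } catch {
                    viewState = .error(LLMDemoError())
                }
            }
    }
}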

[!NOTE]
To learn more about the usage of SpeziLLMOpenAI, please refer to the DocC documentation.

Spezi LLM Fog

The SpeziLLMFog target enables you to use LLMs running on Fog node computing resources within the local network. The fog nodes advertise their services via mDNS, enabling clients to discover all fog nodes serving a specific host within the local network. SpeziLLMFog then dispatches LLM inference jobs dynamically to a random fog node within the local network and streams the response to surface it to the user.

[!IMPORTANT] SpeziLLMFog requires a SpeziLLMFogNode within the local network hosted on some computing resource that actually performs the inference requests. SpeziLLMFog provides the SpeziLLMFogNode Docker-based package that enables an easy setup of these fog nodes. See the FogNode directory on the root level of the SPM package as well as the respective README.md for more details.

Setup

In order to use Fog LLMs within the Spezi ecosystem, the SpeziLLM LLMRunner needs to be initialized in the Spezi Configuration with the LLMFogPlatform. Only then can the LLMRunner be used for inference with Fog LLMs. See the SpeziLLM documentation for more details. The LLMFogPlatform needs to be initialized with the custom root CA certificate that was used to sign the fog node web service certificate (see the FogNode/README.md documentation for more information). Copy the root CA certificate from the fog node as a resource to the application using SpeziLLMFog and use it to initialize the LLMFogPlatform within the Spezi Configuration.

import Spezi
import SpeziLLM
import SpeziLLMFog

class LLMFogAppDelegate: SpeziAppDelegate {
    private nonisolated static var caCertificateUrl: URL {
        // Return the local file URL of the root CA certificate in the `.crt` format
    }

    override var configuration: Configuration {
        Configuration {
            // Set up the Fog platform with the custom CA certificate
            LLMRunner {
                LLMFogPlatform(configuration: .init(caCertificate: Self.caCertificateUrl))
            }
        }
    }
}

Usage

The code example below showcases the interaction with a Fog LLM through the SpeziLLM LLMRunner, which is injected into the SwiftUI Environment via the Configuration shown above.

The LLMFogSchema defines the type and configurations of the to-be-executed LLMFogSession. This transformation is done via the LLMRunner that uses the LLMFogPlatform. The inference via LLMFogSession/generate() returns an AsyncThrowingStream that yields all generated String pieces. Upon setup, the LLMFogSession automatically discovers all available LLM fog nodes within the local network and then dispatches the LLM inference jobs to the fog computing resource, streaming the response back and surfacing it to the user.

[!IMPORTANT]
The LLMFogSchema accepts a closure that returns an authorization token that is passed with every request to the Fog node in the Bearer HTTP field via the LLMFogParameters/init(modelType:systemPrompt:authToken:). The token is created via the closure upon every LLM inference request, as the LLMFogSession may be long-lasting and the token could therefore expire. Ensure that the closure appropriately caches the token in order to prevent unnecessary token refresh roundtrips to external systems.

import SpeziLLM
import SpeziLLMFog
import SwiftUI

struct LLMFogDemoView: View {
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                // Instantiate the `LLMFogSchema` to an `LLMFogSession` via the `LLMRunner`.
                let llmSession: LLMFogSession = runner(
                    with: LLMFogSchema(
                        parameters: .init(
                            modelType: .llama7B,
                            systemPrompt: "You're a helpful assistant that answers questions from users.",
                            authToken: {
                                // Return authorization token as `String` or `nil` if no token is required by the Fog node.
                            }
                        )
                    )
                )

                do {
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
                }
            }
    }
}
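The authToken closure in the example above should cache the token, as the earlier note suggests. A hedged sketch of such a cache follows; the CachedTokenProvider type, its refresh placeholder, and the one-hour lifetime are illustrative assumptions, not part of SpeziLLM.

import Foundation

/// Illustrative token cache: returns the cached token while it is still valid
/// and only refreshes it shortly before expiry.
final class CachedTokenProvider: @unchecked Sendable {
    private var token: String?
    private var expiry = Date.distantPast
    private let lock = NSLock()

    func currentToken() -> String? {
        lock.lock()
        defer { lock.unlock() }

        // Reuse the cached token if it is valid for at least another minute.
        if let token, expiry > Date().addingTimeInterval(60) {
            return token
        }

        // Placeholder refresh; replace with a request to your identity provider.
        let refreshed = "example-token"
        token = refreshed
        expiry = Date().addingTimeInterval(3600)
        return refreshed
    }
}

An instance of such a provider could then back the schema's closure, e.g. authToken: { tokenProvider.currentToken() }.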

[!NOTE]
To learn more about the usage of SpeziLLMFog, please refer to the DocC documentation.

Contributing

Contributions to this project are welcome. Please make sure to read the contribution guidelines and the contributor covenant code of conduct first.

License

This project is licensed under the MIT License. See Licenses for more information.

