palico-ai

Quickly Iterate on your LLM Development with an Integrated Workspace


Palico AI is a tech stack for rapid iteration on LLM applications. It lets you preview changes instantly, improve performance through experiments, debug issues with logs and tracing, deploy your application behind a REST API, and manage it from a UI control panel. You keep complete flexibility over how you build your application and can integrate with external tools and libraries. A feature called AppConfig makes it easy to swap models, prompts, and application logic. For production, you can deploy to any cloud provider or use managed hosting. Contributions are welcome; issues labeled 'good first issue' are an easy way to get involved.

README:

Palico AI - LLM Tech Stack for Rapid Iteration


Building an LLM application involves continuously trying out different ideas (models, prompts, architectures). Palico provides you with an integrated tech stack that helps you quickly iterate on your LLM development.

With Palico you can:

  • ✅  Build any application in code with complete flexibility (docs)
  • ✅  Integrate with any external libraries like LangChain, LlamaIndex, Portkey, and more (docs)
  • ✅  Preview changes instantly with hot-reload and Playground UI (docs)
  • ✅  Systematically improve performance with Experiments (docs)
  • ✅  Debug issues with comprehensive logs and tracing (docs)
  • ✅  Deploy your application behind a REST API (docs)
  • ✅  Manage your application with a UI control panel (docs)

[!TIP] ⭐️ Star this repo to get release notifications for new features.


⚡ Get started in seconds ⚡

npx palico init <project-name>

Check out our quickstart guide.

Overview of your Palico App

https://github.com/palico-ai/palico-main/assets/32821894/54f35583-41c1-48a3-9565-95c484a4909b

🛠️ Building your Application

Build your application with complete flexibility

With Palico, you have complete control over the implementation details of your LLM application. Building an LLM application with Palico just involves implementing the Agent interface. Here's an example:

import {
  Agent,
  AgentResponse,
  ConversationContext,
  ConversationRequestContent,
} from "@palico-ai/app";

class MyLLMApp implements Agent {
  async chat(
    content: ConversationRequestContent,
    context: ConversationContext
  ): Promise<AgentResponse> {
    // Your LLM application logic
    // 1. Pre-processing
    // 2. Build your prompt
    // 3. Call your LLM model
    // 4. Post-processing
    return {
      // 5. Return a response to caller
    }
  }
}

Learn more about building your application with Palico (docs).
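
For a more concrete picture, here's a rough sketch of a chat() implementation that calls OpenAI directly. The `userMessage` request field and the `message` response field are assumptions made for illustration, not Palico's confirmed API; check the docs above for the exact types.

import {
  Agent,
  AgentResponse,
  ConversationContext,
  ConversationRequestContent,
} from "@palico-ai/app";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

class MyOpenAIApp implements Agent {
  async chat(
    content: ConversationRequestContent,
    context: ConversationContext
  ): Promise<AgentResponse> {
    // ASSUMPTION: `userMessage` is an illustrative field name
    const userMessage = (content as { userMessage?: string }).userMessage ?? "";

    // Build the prompt and call the model
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: userMessage },
      ],
    });

    // ASSUMPTION: `message` is an illustrative response field
    return {
      message: completion.choices[0].message.content ?? "",
    } as AgentResponse;
  }
}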

Integrates with your favorite tools and libraries

Since you own the implementation details, you can use Palico with most other external tools and libraries.

Supported tools and libraries:

  • LangChain
  • LlamaIndex
  • Portkey
  • OpenAI
  • Anthropic
  • Cohere
  • Azure
  • AWS Bedrock
  • GCP Vertex
  • Pinecone
  • PG Vector
  • Chroma

Learn more from docs.
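
For instance, nothing stops you from calling LangChain inside your agent's chat() method. A minimal sketch (the model choice and prompts here are illustrative):

import { ChatOpenAI } from "@langchain/openai";

// Because a Palico agent is plain TypeScript, any client library
// can be called from inside chat().
async function answerWithLangChain(userMessage: string): Promise<string> {
  const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
  const response = await model.invoke([
    ["system", "You are a helpful assistant."],
    ["human", userMessage],
  ]);
  return String(response.content);
}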

Instantly preview your changes

Make a code change and instantly preview it locally in our Playground UI.

https://github.com/user-attachments/assets/c33ae53d-acf5-4c89-9c41-743ea1cb4722

Easily swap models, prompts, anything and everything

Working on an LLM application involves testing different variations of models, prompts, and application logic. Palico helps you build an interchangeable application layer using a feature-flag-like mechanism called AppConfig. Using AppConfig, you can easily swap models, prompts, or any logic in your application layer.

Learn more about AppConfig.
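
As a rough sketch of the idea (how an AppConfig reaches your code is simplified here; the shape below is an assumption for illustration, not the documented API):

// Hypothetical AppConfig shape -- an assumption for illustration.
interface MyAppConfig {
  model?: string;
  promptVariant?: "concise" | "verbose";
}

// Pick a system prompt based on the active config
function buildSystemPrompt(config: MyAppConfig): string {
  return config.promptVariant === "verbose"
    ? "You are a thorough assistant. Explain your reasoning step by step."
    : "You are a concise assistant. Answer in one or two sentences.";
}

// Pick a model the same way; the same application code can then run
// with different models and prompts by switching the active config.
function selectModel(config: MyAppConfig): string {
  return config.model ?? "gpt-4o-mini";
}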

🔄 Improving Performance with Experiments

Palico helps you create an iterative loop to systematically improve the performance of your LLM application using experiments.


With experiments, you can:

  1. Set up a list of test cases that model the behavior of your application
  2. Make a change to your application
  3. Run an evaluation to measure how well your application performed against your test-cases
  4. Iterate

Learn more about experiments.
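
Conceptually, an evaluation is just a loop over your test cases. The schematic below illustrates that loop only; it is not Palico's experiments API, whose actual interface is described in the docs.

// Schematic of the test-case -> run -> measure loop, NOT Palico's API.
interface TestCase {
  input: string;
  mustContain: string; // a simple pass/fail criterion for illustration
}

async function evaluate(
  runApp: (input: string) => Promise<string>,
  cases: TestCase[]
): Promise<number> {
  let passed = 0;
  for (const testCase of cases) {
    const output = await runApp(testCase.input);
    if (output.includes(testCase.mustContain)) passed += 1;
  }
  // Fraction of test cases passed -- the number to improve each iteration
  return passed / cases.length;
}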

🚀 Going to Production

You can deploy your Palico app to any cloud provider using Docker or use our managed hosting (coming soon). You can then use our ClientSDK or REST API to communicate with your LLM application.

Learn more from docs.
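
Once deployed, calling your application is an ordinary HTTP request. A sketch with a placeholder endpoint, payload, and auth header (the real routes and auth scheme are in the API docs):

// Placeholder endpoint, payload, and auth header -- not Palico's
// documented REST API; consult the docs for the real routes.
const response = await fetch("https://your-app.example.com/api/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <your-api-key>",
  },
  body: JSON.stringify({ userMessage: "Hello!" }),
});
console.log(await response.json());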

🤝 Contributing

The easiest way to contribute is to pick an issue with the good first issue tag 💪.

Bug Report? File here | Feature Request? File here
