
promptbook
Build responsible, controlled and transparent applications on top of LLM models!
Stars: 62

Promptbook is a library designed to build responsible, controlled, and transparent applications on top of large language models (LLMs). It helps users overcome limitations of LLMs like hallucinations, off-topic responses, and poor quality output by offering features such as fine-tuning models, prompt-engineering, and orchestrating multiple prompts in a pipeline. The library separates concerns, establishes a common format for prompt business logic, and handles low-level details like model selection and context size. It also provides tools for pipeline execution, caching, fine-tuning, anomaly detection, and versioning. Promptbook supports advanced techniques like Retrieval-Augmented Generation (RAG) and knowledge utilization to enhance output quality.
README:
Build responsible, controlled and transparent applications on top of LLM models!
- Support of the OpenAI o1 model
If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain weird responses. When this happens, you generally have three options:
- Fine-tune the model to your specifications or even train your own.
- Prompt-engineer the prompt to the best shape you can achieve.
- Orchestrate multiple prompts in a pipeline to get the best result.
In all of these situations, but especially in the third, the Promptbook library can make your life easier.
- Separates concerns between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
- Establishes a common format, .ptbk.md, that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
- Forget about low-level details like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. Just write your intent and the persona who should be responsible for the task, and let the library do the rest.
- Has built-in orchestration of pipeline execution and many tools to make the process easier, more reliable, and more efficient, such as caching, compilation+preparation, just-in-time fine-tuning, expectation-aware generation, agent adversary expectations, and more.
- Sometimes even the best prompts with the best framework like Promptbook :) can't avoid the problems. In this case, the library has built-in anomaly detection and logging to help you find and fix them.
- Promptbook has built-in versioning. You can test multiple A/B versions of pipelines and see which one works best.
- Promptbook is designed to do RAG (Retrieval-Augmented Generation) and other advanced techniques. You can use knowledge to improve the quality of the output.
A promptbook markdown file (or .ptbk.md file) is a document that describes a pipeline - a series of prompts chained together to form something like a recipe for transforming natural language input.
- Multiple pipelines form a collection, which holds the core know-how of your LLM application.
- These pipelines are designed so that they can be written by non-programmers.
File write-website-content.ptbk.md:

# Create website content

Instructions for creating web page content.

- PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
- INPUT PARAM {rawTitle} Automatically suggested a site name or empty text
- INPUT PARAM {rawAssigment} Automatically generated site entry from image recognition
- OUTPUT PARAM {websiteContent} Web content
- OUTPUT PARAM {keywords} Keywords

## Specifying the assigment

What is your web about?

- DIALOG TEMPLATE

{rawAssigment}

-> {assigment} Website assignment and specification

## Improving the title

- PERSONA Jane, Copywriter and Marketing Specialist.

As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.

A suggested name from a client: "{rawTitle}"

Assignment from customer:

> {assigment}

## Instructions:

- Write only one name suggestion
- The name will be used on the website, business cards, visuals, etc.

-> {enhancedTitle} Enhanced title

## Website title approval

Is the title for your website okay?

- DIALOG TEMPLATE

{enhancedTitle}

-> {title} Title for the website

## Cunning subtitle

- PERSONA Josh, a copywriter, tasked with creating a claim for the website.

As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.

A website assignment from a customer:

> {assigment}

## Instructions:

- Write only one name suggestion
- Claim will be used on website, business cards, visuals, etc.
- Claim should be punchy, funny, original

-> {claim} Claim for the web

## Keyword analysis

- PERSONA Paul, extremely creative SEO specialist.

As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".

Website assignment from the customer:

> {assigment}

## Instructions:

- Write a list of keywords
- Keywords are in basic form

## Example:

- Ice cream
- Olomouc
- Quality
- Family
- Tradition
- Italy
- Craft

-> {keywords} Keywords

## Combine the beginning

- SIMPLE TEMPLATE

# {title}

> {claim}

-> {contentBeginning} Beginning of web content

## Write the content

- PERSONA Jane

As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.

A website assignment from a customer:

> {assigment}

## Instructions:

- Text formatting is in Markdown
- Be concise and to the point
- Use keywords, but they should be naturally in the text
- This is the complete content of the page, so don't forget all the important information and elements the page should contain
- Use headings, bullets, text formatting

## Keywords:

{keywords}

## Web Content:

{contentBeginning}

-> {contentBody} Middle of the web content

## Combine the content

- SIMPLE TEMPLATE

{contentBeginning}

{contentBody}

-> {websiteContent} Web content
The following scheme shows how the promptbook above is executed:
%% Tip: Open this on GitHub or in the VSCode website to see the Mermaid graph visually
flowchart LR
  subgraph "Create website content"
      direction TB
      input((Input)):::input
      templateSpecifyingTheAssigment(Specifying the assigment)
      input--"{rawAssigment}"-->templateSpecifyingTheAssigment
      templateImprovingTheTitle(Improving the title)
      input--"{rawTitle}"-->templateImprovingTheTitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
      templateWebsiteTitleApproval(Website title approval)
      templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
      templateCunningSubtitle(Cunning subtitle)
      templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
      templateKeywordAnalysis(Keyword analysis)
      templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
      templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
      templateCombineTheBeginning(Combine the beginning)
      templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
      templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
      templateWriteTheContent(Write the content)
      templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
      templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
      templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
      templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
      templateCombineTheContent(Combine the content)
      templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
      templateWriteTheContent--"{contentBody}"-->templateCombineTheContent
      output((Output)):::output
      templateCombineTheContent--"{websiteContent}"-->output
      classDef input color: grey;
      classDef output color: grey;
  end;
Note: We are using postprocessing functions like unwrapResult to clean up the raw result.
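To give a feel for how such a pipeline is driven from code, here is a rough sketch of executing the file above from Node.js. The function names, the tools option shape, and the outputParameters field are assumptions made for illustration (based on the package list below), not verbatim Promptbook API, so treat it as pseudocode and consult the documentation.

```typescript
// Hypothetical usage sketch - NOT verbatim Promptbook API.
// Function names, option shapes and result fields are assumptions for illustration;
// check the official docs before relying on them.
// Assumes packages from the list below are installed, e.g.:
//   npm i @promptbook/core @promptbook/node @promptbook/openai @promptbook/execute-javascript

import { createPipelineExecutor } from '@promptbook/core';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { createCollectionFromDirectory } from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';

async function main(): Promise<void> {
    // Load every .ptbk.md file in ./promptbook into a pipeline collection
    const collection = await createCollectionFromDirectory('./promptbook');
    const pipeline = await collection.getPipelineByUrl(
        'https://promptbook.studio/webgpt/write-website-content.ptbk.md',
    );

    // Wire up an LLM provider and script tools; the prompts themselves stay in the .ptbk.md file
    const executor = createPipelineExecutor({
        pipeline,
        tools: {
            llm: new OpenAiExecutionTools({ apiKey: process.env.OPENAI_API_KEY! }),
            script: [new JavascriptExecutionTools()],
        },
    });

    // Pass the INPUT PARAMs declared by the pipeline and read its OUTPUT PARAMs
    const result = await executor({
        rawTitle: 'Ice cream from Olomouc',
        rawAssigment: 'Family ice cream business with Italian recipes',
    });

    console.info(result.outputParameters.websiteContent);
    console.info(result.outputParameters.keywords);
}

main().catch((error) => {
    console.error(error);
    process.exit(1);
});
```

Note how the separation of concerns works out: all prompt wording lives in write-website-content.ptbk.md, while the application code only chooses execution tools and passes parameters in and out.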
This library is divided into several packages, all published from a single monorepo. You can install all of them at once:
npm i ptbk
Or you can install them separately:
⭐ Marked packages are worth trying first
- ⭐ ptbk - Bundle of all packages, for when you want to install everything and don't care about the size
- promptbook - Same as ptbk
- @promptbook/core - Core of the library, it contains the main logic for promptbooks
- @promptbook/node - Core of the library for Node.js environment
- @promptbook/browser - Core of the library for browser environment
- ⭐ @promptbook/utils - Utility functions used in the library but also useful for individual use in preprocessing and postprocessing LLM inputs and outputs
- @promptbook/markdown-utils - Utility functions used for processing markdown
- (Not finished) @promptbook/wizzard - Wizard for creating and running promptbooks in a single line
- @promptbook/execute-javascript - Execution tools for JavaScript inside promptbooks
- @promptbook/openai - Execution tools for OpenAI API, wrapper around OpenAI SDK
- @promptbook/anthropic-claude - Execution tools for Anthropic Claude API, wrapper around Anthropic Claude SDK
- @promptbook/azure-openai - Execution tools for Azure OpenAI API
- @promptbook/langtail - Execution tools for Langtail API, wrapper around Langtail SDK
- @promptbook/fake-llm - Mocked execution tools for testing the library and saving the tokens
- @promptbook/remote-client - Remote client for remote execution of promptbooks
- @promptbook/remote-server - Remote server for remote execution of promptbooks
- @promptbook/types - Just typescript types used in the library
- @promptbook/cli - Command line interface utilities for promptbooks
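For example, a minimal Node.js setup that talks to OpenAI might pick just a few packages from the list above (an illustrative selection, not a prescribed one):
npm i @promptbook/core @promptbook/node @promptbook/openai @promptbook/execute-javascript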
The following glossary is used to clarify certain concepts:
- Collection of pipelines
- Pipeline
- Pipeline templates
- Personas
- Parameters
- Pipeline execution
- Expectations
- Postprocessing
- Words not tokens
- Separation of concerns
- Knowledge (Retrieval-augmented generation)
- Remote server
- Jokers (conditions)
- Metaprompting
- Linguistically typed languages
- Auto-Translations
- Images, audio, video, spreadsheets
- Expectation-aware generation
- Just-in-time fine-tuning
- Anomaly detection
- Agent adversary expectations
Use Promptbook:
- When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs, ...
- When you want to separate code from text prompts
- When you want to describe complex prompt pipelines and don't want to do it in the code
- When you want to orchestrate multiple prompts together
- When you want to reuse parts of prompts in multiple places
- When you want to version your prompts and test multiple versions
- When you want to log the execution of prompts and backtrace the issues
Do not use Promptbook:
- When you have already implemented a single simple prompt and it works fine for your job
- When OpenAI Assistant (GPTs) is enough for you
- When you need streaming (this may be implemented in the future, see discussion).
- When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
- When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
- When you need to use recursion (see the discussion)
If you have a question, start a discussion, open an issue, or write me an email.
- Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?
- How is it different from OpenAI's GPTs?
- How is it different from Langchain?
- How is it different from DSPy?
- How is it different from anything?
- Is Promptbook using RAG (Retrieval-Augmented Generation)?
- Is Promptbook using function calling?
See CHANGELOG.md
Promptbook by Pavol Hejný is licensed under CC BY 4.0
See TODO.md
I am open to pull requests, feedback, and suggestions. Or if you like this utility, you can buy me a coffee or donate via cryptocurrencies.
You can also ⭐ star the promptbook package, follow me on GitHub or various other social networks.
Similar Open Source Tools


AirConnect-Synology
AirConnect-Synology is a minimal Synology package that allows users to use AirPlay to stream to UPnP/Sonos & Chromecast devices that do not natively support AirPlay. It is compatible with DSM 7.0 and DSM 7.1, and provides detailed information on installation, configuration, supported devices, troubleshooting, and more. The package automates the installation and usage of AirConnect on Synology devices, ensuring compatibility with various architectures and firmware versions. Users can customize the configuration using the airconnect.conf file and adjust settings for specific speakers like Sonos, Bose SoundTouch, and Pioneer/Phorus/Play-Fi.

crawlee
Crawlee is a web scraping and browser automation library that helps you build reliable scrapers quickly. Your crawlers will appear human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data, and store it to disk or cloud while staying configurable to suit your project's needs.

easydiffusion
Easy Diffusion 3.0 is a user-friendly tool for installing and using Stable Diffusion on your computer. It offers hassle-free installation, clutter-free UI, task queue, intelligent model detection, live preview, image modifiers, multiple prompts file, saving generated images, UI themes, searchable models dropdown, and supports various image generation tasks like 'Text to Image', 'Image to Image', and 'InPainting'. The tool also provides advanced features such as custom models, merge models, custom VAE models, multi-GPU support, auto-updater, developer console, and more. It is designed for both new users and advanced users looking for powerful AI image generation capabilities.

Foxel
Foxel is a highly extensible private cloud storage solution for individuals and teams, featuring AI-powered semantic search. It offers unified file management, pluggable storage backends, semantic search capabilities, built-in file preview, permissions and sharing options, and a task processing center. Users can easily manage files, search content within unstructured data, preview various file types, share files, and process tasks asynchronously. Foxel is designed to centralize file management and enhance search capabilities for users.

UglyFeed
UglyFeed is a simple Python application designed to retrieve, aggregate, filter, rewrite, evaluate, and serve content (RSS feeds) written by a large language model. It provides features such as retrieving RSS feeds, aggregating feed items by similarity, rewriting content using various APIs, saving rewritten feeds to JSON files, converting JSON to valid RSS feed, serving XML feed via an HTTP server, deploying XML feed to GitHub or GitLab, and evaluating generated content. The tool can be used for smart content curation, dynamic blog generation, interactive educational tools, personalized reading experiences, brand monitoring, multilingual content delivery, enhanced RSS feeds, creative writing assistance, content repurposing, and fake news detection datasets. It is modular, extensible, and aims to empower users in content manipulation and delivery.

sec-parser
The `sec-parser` project simplifies extracting meaningful information from SEC EDGAR HTML documents by organizing them into semantic elements and a tree structure. It helps in parsing SEC filings for financial and regulatory analysis, analytics and data science, AI and machine learning, causal AI, and large language models. The tool is especially beneficial for AI, ML, and LLM applications by streamlining data pre-processing and feature extraction.

open-webui
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.

chonkie
Chonkie is a lightweight and fast RAG chunking library designed to efficiently split text for RAG (Retrieval-Augmented Generation) applications. It offers various chunking methods like TokenChunker, WordChunker, SentenceChunker, SemanticChunker, SDPMChunker, and an experimental LateChunker. Chonkie is feature-rich, easy to use, fast, supports multiple tokenizers, and comes with a cute pygmy hippo mascot. It aims to provide a no-nonsense solution for chunking text without the need to worry about dependencies or bloat.

ai_automation_suggester
An integration for Home Assistant that leverages AI models to understand your unique home environment and propose intelligent automations. By analyzing your entities, devices, areas, and existing automations, the AI Automation Suggester helps you discover new, context-aware use cases you might not have considered, ultimately streamlining your home management and improving efficiency, comfort, and convenience. The tool acts as a personal automation consultant, providing actionable YAML-based automations that can save energy, improve security, enhance comfort, and reduce manual intervention. It turns the complexity of a large Home Assistant environment into actionable insights and tangible benefits.

mobile-use
Mobile-use is an open-source AI agent that controls Android or iOS devices using natural language. It understands commands to perform tasks like sending messages and navigating apps. Features include natural language control, UI-aware automation, data scraping, and extensibility. Users can automate their mobile experience by setting up environment variables, customizing LLM configurations, and launching the tool via Docker or manually for development. The tool supports physical Android phones, Android simulators, and iOS simulators. Contributions are welcome, and the project is licensed under MIT.

AmigaGPT
AmigaGPT is a versatile ChatGPT client for AmigaOS 3.x, 4.1, and MorphOS. It brings the capabilities of OpenAI's GPT to Amiga systems, enabling text generation, question answering, and creative exploration. AmigaGPT can generate images using DALL-E, supports speech output, and seamlessly integrates with AmigaOS. Users can customize the UI, choose fonts and colors, and enjoy a native user experience. The tool requires specific system requirements and offers features like state-of-the-art language models, AI image generation, speech capability, and UI customization.

llmcord
llmcord is a Discord bot that transforms Discord into a collaborative LLM frontend, allowing users to interact with various LLM models. It features a reply-based chat system that enables branching conversations, supports remote and local LLM models, allows image and text file attachments, offers customizable personality settings, and provides streamed responses. The bot is fully asynchronous, efficient in managing message data, and offers hot reloading config. With just one Python file and around 200 lines of code, llmcord provides a seamless experience for engaging with LLMs on Discord.

ppl.llm.serving
PPL LLM Serving is a serving framework based on ppl.nn for various Large Language Models (LLMs). It provides inference support for LLaMA. Key features include high performance (optimized for fast and efficient inference on LLM models), scalability (distributed deployment across multiple GPUs or machines), flexibility (customization of model configurations and inference pipelines), and ease of use (a user-friendly interface for deploying and managing LLM models). It is suitable for tasks such as text generation, text summarization, question answering, language translation, and chatbot development.

Local-File-Organizer
The Local File Organizer is an AI-powered tool designed to help users organize their digital files efficiently and securely on their local device. By leveraging advanced AI models for text and visual content analysis, the tool automatically scans and categorizes files, generates relevant descriptions and filenames, and organizes them into a new directory structure. All AI processing occurs locally using the Nexa SDK, ensuring privacy and security. With support for multiple file types and customizable prompts, this tool aims to simplify file management and bring order to users' digital lives.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud-native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI, and vLLM. BricksLLM aims to provide enterprise-level infrastructure that can power any LLM production use case, for example: setting LLM usage limits for users on different pricing tiers, tracking LLM usage on a per-user and per-organization basis, blocking or redacting requests containing PII, improving LLM reliability with failovers, retries, and caching, and distributing API keys with rate limits and cost limits for internal development/production use or for students.

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.