promptbook
Build responsible, controlled and transparent applications on top of LLM models!
Stars: 62
Promptbook is a library designed to build responsible, controlled, and transparent applications on top of large language models (LLMs). It helps users overcome limitations of LLMs like hallucinations, off-topic responses, and poor quality output by offering features such as fine-tuning models, prompt-engineering, and orchestrating multiple prompts in a pipeline. The library separates concerns, establishes a common format for prompt business logic, and handles low-level details like model selection and context size. It also provides tools for pipeline execution, caching, fine-tuning, anomaly detection, and versioning. Promptbook supports advanced techniques like Retrieval-Augmented Generation (RAG) and knowledge utilization to enhance output quality.
README:
Build responsible, controlled and transparent applications on top of LLM models!
- ✨ Support of the OpenAI o1 model
If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain weird responses. When this happens, you generally have three options:
- Fine-tune the model to your specifications or even train your own.
- Prompt-engineer the prompt to the best shape you can achieve.
- Orchestrate multiple prompts in a pipeline to get the best result.
In all of these situations, and especially the third, the Promptbook library can make your life easier.
- Separates concerns between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
- Establishes a common format, .ptbk.md, that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
- Lets you forget about low-level details like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. Just write your intent and the persona who should be responsible for the task, and let the library do the rest.
- Has built-in orchestration of pipeline execution and many tools to make the process easier, more reliable, and more efficient, such as caching, compilation + preparation, just-in-time fine-tuning, expectation-aware generation, agent adversary expectations, and more.
- Sometimes even the best prompts with the best framework like Promptbook :) can't avoid the problems. For these cases, the library has built-in anomaly detection and logging to help you find and fix them.
- Promptbook has built-in versioning. You can test multiple A/B versions of pipelines and see which one works best.
- Promptbook is designed to do RAG (Retrieval-Augmented Generation) and other advanced techniques. You can use knowledge to improve the quality of the output.
A promptbook markdown file (or .ptbk.md file) is a document that describes a pipeline - a series of prompts chained together to form a kind of recipe for transforming natural language input.
- Multiple pipelines form a collection, which holds the core know-how of your LLM application.
- These pipelines are designed so that they can be written by non-programmers.
File write-website-content.ptbk.md:

```markdown
# Create website content

Instructions for creating web page content.

- PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
- INPUT PARAM {rawTitle} Automatically suggested a site name or empty text
- INPUT PARAM {rawAssigment} Automatically generated site entry from image recognition
- OUTPUT PARAM {websiteContent} Web content
- OUTPUT PARAM {keywords} Keywords

## Specifying the assigment

What is your web about?

- DIALOG TEMPLATE

{rawAssigment}

-> {assigment} Website assignment and specification

## Improving the title

- PERSONA Jane, Copywriter and Marketing Specialist.

As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.

A suggested name from a client: "{rawTitle}"

Assignment from customer:

> {assigment}

## Instructions:

- Write only one name suggestion
- The name will be used on the website, business cards, visuals, etc.

-> {enhancedTitle} Enhanced title

## Website title approval

Is the title for your website okay?

- DIALOG TEMPLATE

{enhancedTitle}

-> {title} Title for the website

## Cunning subtitle

- PERSONA Josh, a copywriter, tasked with creating a claim for the website.

As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.

A website assignment from a customer:

> {assigment}

## Instructions:

- Write only one name suggestion
- Claim will be used on website, business cards, visuals, etc.
- Claim should be punchy, funny, original

-> {claim} Claim for the web

## Keyword analysis

- PERSONA Paul, extremely creative SEO specialist.

As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".

Website assignment from the customer:

> {assigment}

## Instructions:

- Write a list of keywords
- Keywords are in basic form

## Example:

- Ice cream
- Olomouc
- Quality
- Family
- Tradition
- Italy
- Craft

-> {keywords} Keywords

## Combine the beginning

- SIMPLE TEMPLATE

# {title}

> {claim}

-> {contentBeginning} Beginning of web content

## Write the content

- PERSONA Jane

As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.

A website assignment from a customer:

> {assigment}

## Instructions:

- Text formatting is in Markdown
- Be concise and to the point
- Use keywords, but they should be naturally in the text
- This is the complete content of the page, so don't forget all the important information and elements the page should contain
- Use headings, bullets, text formatting

## Keywords:

{keywords}

## Web Content:

{contentBeginning}

-> {contentBody} Middle of the web content

## Combine the content

- SIMPLE TEMPLATE

{contentBeginning}

{contentBody}

-> {websiteContent}
```
The following scheme shows how the promptbook above is executed:
```mermaid
%% Tip: Open this on GitHub or in VS Code to see the Mermaid graph visually

flowchart LR
  subgraph "Create website content"

      direction TB

      input((Input)):::input
      templateSpecifyingTheAssigment(Specifying the assigment)
      input--"{rawAssigment}"-->templateSpecifyingTheAssigment
      templateImprovingTheTitle(Improving the title)
      input--"{rawTitle}"-->templateImprovingTheTitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
      templateWebsiteTitleApproval(Website title approval)
      templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
      templateCunningSubtitle(Cunning subtitle)
      templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
      templateKeywordAnalysis(Keyword analysis)
      templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
      templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
      templateCombineTheBeginning(Combine the beginning)
      templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
      templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
      templateWriteTheContent(Write the content)
      templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
      templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
      templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
      templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
      templateCombineTheContent(Combine the content)
      templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
      templateWriteTheContent--"{contentBody}"-->templateCombineTheContent
      templateCombineTheContent--"{websiteContent}"-->output
      output((Output)):::output

      classDef input color: grey;
      classDef output color: grey;

  end;
```
Note: We are using postprocessing functions like unwrapResult to clean up the raw results.
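For illustration, here is a minimal sketch of what such postprocessing can look like with the unwrapResult helper from @promptbook/utils. The sample strings are made up, and the exact unwrapping behaviour may differ between versions:

```typescript
// Minimal postprocessing sketch - the sample strings below are invented for illustration.
import { unwrapResult } from '@promptbook/utils';

// LLMs often wrap the actual answer in quotes or a short preamble;
// unwrapResult strips that wrapping so only the payload remains.
const rawCompletion = 'The suggested title is: "Gelato Olomouc"';
const title = unwrapResult(rawCompletion);

console.info(title); // -> Gelato Olomouc (assuming the wrapping is recognized)
```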
This library is divided into several packages, all published from a single monorepo. You can install all of them at once:
npm i ptbk
Or you can install them separately:

Packages marked with ⭐ are worth trying first.

- ⭐ ptbk - Bundle of all packages, for when you want to install everything and don't care about the size
- promptbook - Same as ptbk
- @promptbook/core - Core of the library, it contains the main logic for promptbooks
- @promptbook/node - Core of the library for Node.js environment
- @promptbook/browser - Core of the library for browser environment
- ⭐ @promptbook/utils - Utility functions used in the library but also useful for individual use in preprocessing and postprocessing LLM inputs and outputs
- @promptbook/markdown-utils - Utility functions used for processing markdown
- (Not finished) @promptbook/wizzard - Wizard for creating and running promptbooks in a single line
- @promptbook/execute-javascript - Execution tools for JavaScript inside promptbooks
- @promptbook/openai - Execution tools for OpenAI API, wrapper around OpenAI SDK
- @promptbook/anthropic-claude - Execution tools for Anthropic Claude API, wrapper around Anthropic Claude SDK
- @promptbook/azure-openai - Execution tools for Azure OpenAI API
- @promptbook/langtail - Execution tools for Langtail API, wrapper around Langtail SDK
- @promptbook/fake-llm - Mocked execution tools for testing the library and saving tokens
- @promptbook/remote-client - Remote client for remote execution of promptbooks
- @promptbook/remote-server - Remote server for remote execution of promptbooks
- @promptbook/types - Just TypeScript types used in the library
- @promptbook/cli - Command line interface utilities for promptbooks
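To give a feel for how these packages fit together, below is a hedged sketch of loading a collection of .ptbk.md files and running the pipeline above from Node.js. The export names and option shapes used here (createCollectionFromDirectory, createPipelineExecutor, OpenAiExecutionTools, JavascriptExecutionTools, and the result shape) are assumptions based on the package list above; consult each package's documentation for the current API.

```typescript
// Usage sketch only - the names and shapes below are assumptions, not a definitive API reference.
import { createPipelineExecutor } from '@promptbook/core';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { createCollectionFromDirectory } from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';

async function main() {
    // Load every .ptbk.md file from the ./promptbook directory into a collection
    const collection = await createCollectionFromDirectory('./promptbook');

    // Pick the pipeline by the PIPELINE URL declared inside the file
    const pipeline = await collection.getPipelineByUrl(
        'https://promptbook.studio/webgpt/write-website-content.ptbk.md',
    );

    // Wire up the LLM provider and the script tools used for postprocessing
    const pipelineExecutor = createPipelineExecutor({
        pipeline,
        tools: {
            llm: new OpenAiExecutionTools({ apiKey: process.env.OPENAI_API_KEY }),
            script: [new JavascriptExecutionTools()],
        },
    });

    // Run the pipeline with its INPUT PARAMs and read the OUTPUT PARAMs
    const result = await pipelineExecutor({
        rawTitle: 'Gelato Olomouc',
        rawAssigment: 'Website for a family ice cream shop in Olomouc',
    });

    console.info(result.outputParameters.websiteContent);
    console.info(result.outputParameters.keywords);
}

main().catch(console.error);
```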
The following glossary is used to clarify certain concepts:
- Collection of pipelines
- Pipeline
- Pipeline templates
- Personas
- Parameters
- Pipeline execution
- Expectations
- Postprocessing
- Words not tokens
- Separation of concerns
- Knowledge (Retrieval-augmented generation)
- Remote server
- Jokers (conditions)
- Metaprompting
- Linguistically typed languages
- Auto-Translations
- Images, audio, video, spreadsheets
- Expectation-aware generation
- Just-in-time fine-tuning
- Anomaly detection
- Agent adversary expectations
When to use Promptbook:
- When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs, ...
- When you want to separate code from text prompts
- When you want to describe complex prompt pipelines and don't want to do it in the code
- When you want to orchestrate multiple prompts together
- When you want to reuse parts of prompts in multiple places
- When you want to version your prompts and test multiple versions
- When you want to log the execution of prompts and backtrace the issues
When NOT to use Promptbook:
- When you have already implemented a single simple prompt and it works fine for your job
- When OpenAI Assistant (GPTs) is enough for you
- When you need streaming (this may be implemented in the future, see discussion).
- When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
- When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
- When you need to use recursion (see the discussion)
If you have a question, start a discussion, open an issue, or write me an email.
- Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?
- How is it different from OpenAI's GPTs?
- How is it different from LangChain?
- How is it different from DSPy?
- How is it different from anything else?
- Is Promptbook using RAG (Retrieval-Augmented Generation)?
- Is Promptbook using function calling?
See CHANGELOG.md
Promptbook by Pavol Hejný is licensed under CC BY 4.0
See TODO.md
I am open to pull requests, feedback, and suggestions. Or if you like this utility, you can buy me a coffee or donate via cryptocurrencies.
You can also ⭐ star the promptbook package, follow me on GitHub or various other social networks.
Similar Open Source Tools
AirConnect-Synology
AirConnect-Synology is a minimal Synology package that allows users to use AirPlay to stream to UPnP/Sonos & Chromecast devices that do not natively support AirPlay. It is compatible with DSM 7.0 and DSM 7.1, and provides detailed information on installation, configuration, supported devices, troubleshooting, and more. The package automates the installation and usage of AirConnect on Synology devices, ensuring compatibility with various architectures and firmware versions. Users can customize the configuration using the airconnect.conf file and adjust settings for specific speakers like Sonos, Bose SoundTouch, and Pioneer/Phorus/Play-Fi.
prompt-generator-comfyui
Custom AI prompt generator node for ComfyUI. With this node, you can use text generation models to generate prompts. Before using it, a text generation model has to be trained with a prompt dataset.
easydiffusion
Easy Diffusion 3.0 is a user-friendly tool for installing and using Stable Diffusion on your computer. It offers hassle-free installation, clutter-free UI, task queue, intelligent model detection, live preview, image modifiers, multiple prompts file, saving generated images, UI themes, searchable models dropdown, and supports various image generation tasks like 'Text to Image', 'Image to Image', and 'InPainting'. The tool also provides advanced features such as custom models, merge models, custom VAE models, multi-GPU support, auto-updater, developer console, and more. It is designed for both new users and advanced users looking for powerful AI image generation capabilities.
Starmoon
Starmoon is an affordable, compact AI-enabled device that can understand and respond to your emotions with empathy. It offers supportive conversations and personalized learning assistance. The device is cost-effective, voice-enabled, open-source, compact, and aims to reduce screen time. Users can assemble the device themselves using off-the-shelf components and deploy it locally for data privacy. Starmoon integrates various APIs for AI language models, speech-to-text, text-to-speech, and emotion intelligence. The hardware setup involves components like ESP32S3, microphone, amplifier, speaker, LED light, and button, along with software setup instructions for developers. The project also includes a web app, backend API, and background task dashboard for monitoring and management.
Mercury
Mercury is a code efficiency benchmark designed for code synthesis tasks. It includes 1,889 programming tasks of varying difficulty levels and provides test case generators for comprehensive evaluation. The benchmark aims to assess the efficiency of large language models in generating code solutions.
UglyFeed
UglyFeed is a simple Python application designed to retrieve, aggregate, filter, rewrite, evaluate, and serve content (RSS feeds) written by a large language model. It provides features such as retrieving RSS feeds, aggregating feed items by similarity, rewriting content using various APIs, saving rewritten feeds to JSON files, converting JSON to valid RSS feed, serving XML feed via an HTTP server, deploying XML feed to GitHub or GitLab, and evaluating generated content. The tool can be used for smart content curation, dynamic blog generation, interactive educational tools, personalized reading experiences, brand monitoring, multilingual content delivery, enhanced RSS feeds, creative writing assistance, content repurposing, and fake news detection datasets. It is modular, extensible, and aims to empower users in content manipulation and delivery.
polyfire-js
Polyfire is an all-in-one managed backend for AI apps that allows users to build AI applications directly from the frontend, eliminating the need for a separate backend. It simplifies the process by providing most backend services in just a few lines of code. With Polyfire, users can easily create chatbots, transcribe audio files, generate simple text, manage long-term memory, and generate images. The tool also offers starter guides and tutorials to help users get started quickly and efficiently.
crawlee
Crawlee is a web scraping and browser automation library that helps you build reliable scrapers quickly. Your crawlers will appear human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data, and store it to disk or cloud while staying configurable to suit your project's needs.
AiR
AiR is an AI tool built entirely in Rust that delivers blazing speed and efficiency. It features accurate translation and seamless text rewriting to supercharge productivity. AiR is designed to assist non-native speakers by automatically fixing errors and polishing language to sound like a native speaker. The tool is under heavy development with more features on the horizon.
Devon
Devon is an open-source pair programmer tool designed to facilitate collaborative coding sessions. It provides features such as multi-file editing, codebase exploration, test writing, bug fixing, and architecture exploration. The tool supports Anthropic, OpenAI, and Groq APIs, with plans to add more models in the future. Devon is community-driven, with ongoing development goals including multi-model support, plugin system for tool builders, self-hostable Electron app, and setting SOTA on SWE-bench Lite. Users can contribute to the project by developing core functionality, conducting research on agent performance, providing feedback, and testing the tool.
chatnio
Chat Nio is a next-generation AIGC one-stop business solution that combines the advantages of frontend-oriented lightweight deployment projects with powerful API distribution systems. It offers rich model support, beautiful UI design, complete Markdown support, multi-theme support, internationalization support, text-to-image support, powerful conversation sync, model market & preset system, rich file parsing, full model internet search, Progressive Web App (PWA) support, comprehensive backend management, multiple billing methods, innovative model caching, and additional features. The project aims to address limitations in conversation synchronization, billing, file parsing, conversation URL sharing, channel management, and API call support found in existing AIGC commercial sites, while also providing a user-friendly interface design and C-end features.
mindnlp
MindNLP is an open-source NLP library based on MindSpore. It provides a platform for solving natural language processing tasks, containing many common approaches in NLP. It can help researchers and developers to construct and train models more conveniently and rapidly. Key features of MindNLP include: * Comprehensive data processing: Several classical NLP datasets are packaged into a friendly module for easy use, such as Multi30k, SQuAD, CoNLL, etc. * Friendly NLP model toolset: MindNLP provides various configurable components. It is friendly to customize models using MindNLP. * Easy-to-use engine: MindNLP simplified complicated training process in MindSpore. It supports Trainer and Evaluator interfaces to train and evaluate models easily. MindNLP supports a wide range of NLP tasks, including: * Language modeling * Machine translation * Question answering * Sentiment analysis * Sequence labeling * Summarization MindNLP also supports industry-leading Large Language Models (LLMs), including Llama, GLM, RWKV, etc. For support related to large language models, including pre-training, fine-tuning, and inference demo examples, you can find them in the "llm" directory. To install MindNLP, you can either install it from Pypi, download the daily build wheel, or install it from source. The installation instructions are provided in the documentation. MindNLP is released under the Apache 2.0 license. If you find this project useful in your research, please consider citing the following paper: @misc{mindnlp2022, title={{MindNLP}: a MindSpore NLP library}, author={MindNLP Contributors}, howpublished = {\url{https://github.com/mindlab-ai/mindnlp}}, year={2022} }
transformerlab-app
Transformer Lab is an app that allows users to experiment with Large Language Models by providing features such as one-click download of popular models, finetuning across different hardware, RLHF and Preference Optimization, working with LLMs across different operating systems, chatting with models, using different inference engines, evaluating models, building datasets for training, calculating embeddings, providing a full REST API, running in the cloud, converting models across platforms, supporting plugins, embedded Monaco code editor, prompt editing, inference logs, all through a simple cross-platform GUI.
ppl.llm.serving
PPL LLM Serving is a serving based on ppl.nn for various Large Language Models (LLMs). It provides inference support for LLaMA. Key features include: * **High Performance:** Optimized for fast and efficient inference on LLM models. * **Scalability:** Supports distributed deployment across multiple GPUs or machines. * **Flexibility:** Allows for customization of model configurations and inference pipelines. * **Ease of Use:** Provides a user-friendly interface for deploying and managing LLM models. This tool is suitable for various tasks, including: * **Text Generation:** Generating text, stories, or code from scratch or based on a given prompt. * **Text Summarization:** Condensing long pieces of text into concise summaries. * **Question Answering:** Answering questions based on a given context or knowledge base. * **Language Translation:** Translating text between different languages. * **Chatbot Development:** Building conversational AI systems that can engage in natural language interactions. Keywords: llm, large language model, natural language processing, text generation, question answering, language translation, chatbot development
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.