
node-sdk
The ChatBotKit Node SDK offers a JavaScript-based platform for effortlessly building conversational AI bots and agents.
Stars: 52

The ChatBotKit Node SDK is a JavaScript-based platform for building conversational AI bots and agents. It offers easy setup, serverless compatibility, modern framework support, customizability, and multi-platform deployment. With capabilities like multi-modal and multi-language support, conversation management, chat history review, custom datasets, and various integrations, this SDK enables users to create advanced chatbots for websites, mobile apps, and messaging platforms.
README:
 .d8888b.  888888b.   888    d8P
d88P  Y88b 888  "88b  888   d8P
888    888 888  .88P  888  d8P
888        8888888K.  888d88K
888        888  "Y88b 8888888b
888    888 888    888 888  Y88b
Y88b  d88P 888   d88P 888   Y88b
 "Y8888P"  8888888P"  888    Y88b  .ai
Welcome to the ChatBotKit Node SDK. This SDK offers a JavaScript-based platform for effortlessly building conversational AI bots and agents. With ChatBotKit, you can swiftly develop and deploy AI bots capable of natural language interactions.
This is a meta repository for the ChatBotKit Node SDK. It contains the SDK packages for a number of popular platforms and frameworks such as React, Next.js, NextAuth and more.
The ChatBotKit Node SDK comprises the following packages:
Package | Version | Description |
---|---|---|
@chatbotkit/cli | | The ChatBotKit CLI. |
@chatbotkit/sdk | | The ChatBotKit API SDK. |
@chatbotkit/react | | The ChatBotKit React SDK. |
@chatbotkit/next | | The ChatBotKit Next.js SDK. |
@chatbotkit/nextauth | | The ChatBotKit NextAuth.js SDK. |
@chatbotkit/fetch | | The ChatBotKit isomorphic fetch implementation. |
This repository also contains the following tools:
Package | Version | Description |
---|---|---|
create-cbk-app | | A quick tool to create a new CBK application. |
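As a minimal sketch, create-* packages are conventionally run through npx; the project name below is illustrative, and any additional flags accepted by create-cbk-app are not documented here:
npx create-cbk-app my-cbk-app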
- Easy Setup: Quick and straightforward installation process.
- Serverless Compatibility: Works seamlessly with modern runtime environments like Serverless, Edge, Vercel, Netlify, Cloudflare Workers, Deno, AWS Lambda, and more.
- Modern Framework Support: Built-in support for CommonJS, ECMAScript Modules, async/await, streams, etc.
- Customizability: Tailor the chatbot's behavior and responses to fit specific use cases.
- Multi-Platform: Deploy chatbots on websites, mobile apps, and messaging platforms like Slack, Discord, and WhatsApp.
- Multi-Model: Support for a wide range of language models, including GPT-3, GPT-4, Claude, and more.
- Multi-modal Support: Support for various language and image models from vendors such as OpenAI, Anthropic, Mistral, AWS, Google, and others.
- Multi-language Support: Allows for easy customization and use in diverse linguistic contexts.
- Conversation Management: Manage complex conversation flows with ease.
- Chat History: Review and reference past conversations.
- Custom Datasets: Organize data for bot responses.
- Custom Skillset: Equip chatbots with unique abilities like image generation or web fetching.
- Document File Importing: Import various document file types into chatbot datasets.
- Media File Importing: Import a range of media file formats into chatbot datasets.
- Widget Integration: Seamlessly embed chatbots on websites with customizable options.
- Slack, Discord, WhatsApp Bot Integrations: Easy integration with popular messaging platforms.
- Sitemap Integration: Ingest website content into a searchable knowledge base.
- Streaming: Enable/disable streaming capabilities.
- Data Security: Robust measures to protect user data.
- Privacy Focus: Strong privacy controls to ensure responsible data handling.
- Content Moderation: Automatic scanning and flagging of abusive content.
- Simple Pricing: Transparent and straightforward pricing.
Follow these steps to start with ChatBotKit:
- Installation:
  npm install @chatbotkit/sdk
- Usage: Implement the SDK in your chatbot project.
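If you plan to use the React or Next.js helpers shown in the examples below, the companion packages listed in the table above can be installed alongside the core SDK (a sketch; install only the packages your project actually uses):
npm install @chatbotkit/sdk @chatbotkit/react @chatbotkit/next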
This example demonstrates streaming capabilities in Edge and Serverless environments:
import { ConversationClient } from '@chatbotkit/sdk/conversation/index.js'

const client = new ConversationClient(/* configuration */)

for await (const { type, data } of client
  .complete(null, { model: 'gpt-4', messages })
  .stream()) {
  if (type === 'token') {
    process.stdout.write(data.token)
  }
}
This example showcases how to build advanced conversational AI with streaming, function calls, server-side rendering and much more in a Next.js project:
// file: ./app/page.jsx
import ChatArea from '../components/ChatArea.jsx'

export default function Page() {
  return <ChatArea />
}

// file: ./components/ChatArea.jsx
'use client'

import { useContext } from 'react'

import { complete } from '../actions/conversation.jsx'

import { ChatInput, ConversationContext } from '@chatbotkit/react'
import ConversationManager from '@chatbotkit/react/components/ConversationManager'

export function ChatMessages() {
  const { thinking, text, setText, messages, submit } =
    useContext(ConversationContext)

  return (
    <div>
      <div>
        {messages.map(({ id, type, text, children }) => {
          switch (type) {
            case 'user':
              return (
                <div key={id}>
                  <div>
                    <strong>user:</strong> {text}
                  </div>
                </div>
              )

            case 'bot':
              return (
                <div key={id}>
                  <div>
                    <strong>bot:</strong> {text}
                  </div>
                  {children ? <div>{children}</div> : null}
                </div>
              )
          }
        })}
        {thinking ? (
          <div key="thinking">
            <strong>bot:</strong> thinking...
          </div>
        ) : null}
      </div>
      <ChatInput
        value={text}
        onChange={(e) => setText(e.target.value)}
        onSubmit={submit}
        placeholder="Type something..."
        style={{
          border: 0,
          outline: 'none',
          resize: 'none',
          width: '100%',
          marginTop: '10px',
        }}
      />
    </div>
  )
}

export default function ChatArea() {
  return (
    <ConversationManager endpoint={complete}>
      <ChatMessages />
    </ConversationManager>
  )
}
// file: ./actions/conversation.jsx
'use server'

import CalendarEvents from '../components/CalendarEvents.jsx'

import { streamComplete } from '@chatbotkit/react/actions/complete'
import { ChatBotKit } from '@chatbotkit/sdk'

const cbk = new ChatBotKit({
  secret: process.env.CHATBOTKIT_API_SECRET,
})

export async function complete({ messages }) {
  return streamComplete({
    client: cbk.conversation,
    messages,
    functions: [
      {
        name: 'getUserName',
        description: 'Get the authenticated user name',
        parameters: {},
        handler: async () => {
          return 'John Doe'
        },
      },
      {
        name: 'getCalendarEvents',
        description: 'Get a list of calendar events',
        parameters: {},
        handler: async () => {
          const events = [
            { id: 1, title: 'Meeting with Jane Doe' },
            { id: 2, title: 'Meeting with Jill Doe' },
          ]

          return {
            children: <CalendarEvents events={events} />,
            result: {
              events,
            },
          }
        },
      },
      {
        name: 'declineCalendarEvent',
        description: 'Decline a calendar event',
        parameters: {
          type: 'object',
          properties: {
            id: {
              type: 'number',
              description: 'The ID of the event to decline',
            },
          },
          required: ['id'],
        },
        handler: async ({ id }) => {
          return `You have declined the event with ID ${id}`
        },
      },
    ],
  })
}
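The CalendarEvents component imported above is not shown in this README. A minimal sketch, assuming it simply renders the { id, title } event objects returned by the getCalendarEvents handler, might look like this:
// file: ./components/CalendarEvents.jsx
// Minimal sketch of the component referenced above; the props shape is assumed
// from the { id, title } events returned by the getCalendarEvents handler.
export default function CalendarEvents({ events }) {
  return (
    <ul>
      {events.map(({ id, title }) => (
        <li key={id}>{title}</li>
      ))}
    </ul>
  )
}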
This quick example demonstrates how to use the SDK in a Next.js project:
// file: ./pages/index.js
import { AutoTextarea, useConversationManager } from '@chatbotkit/react'

export default function Index() {
  const { thinking, text, setText, messages, submit } = useConversationManager({
    endpoint: '/api/conversation/complete',
  })

  function handleOnKeyDown(event) {
    if (event.keyCode === 13) {
      event.preventDefault()
      submit()
    }
  }

  return (
    <div style={{ fontFamily: 'monospace', padding: '10px' }}>
      {messages.map(({ id, type, text }) => (
        <div key={id}>
          <strong>{type}:</strong> {text}
        </div>
      ))}
      {thinking && (
        <div key="thinking">
          <strong>bot:</strong> thinking...
        </div>
      )}
      <AutoTextarea
        value={text}
        onChange={(e) => setText(e.target.value)}
        onKeyDown={handleOnKeyDown}
        placeholder="Type something..."
        style={{
          border: 0,
          outline: 'none',
          resize: 'none',
          width: '100%',
          marginTop: '10px',
        }}
      />
    </div>
  )
}
// file: ./pages/api/conversation/complete.js
import { ChatBotKit } from '@chatbotkit/sdk'
import { stream } from '@chatbotkit/next/edge'

const cbk = new ChatBotKit({
  secret: process.env.CHATBOTKIT_API_SECRET,
})

export default async function handler(req) {
  const { messages } = await req.json()

  return stream(cbk.conversation.complete(null, { messages }))
}

export const config = {
  runtime: 'edge',
}
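Because /api/conversation/complete is an ordinary streaming HTTP endpoint, it can also be exercised without the React helpers. A rough sketch using the standard fetch and ReadableStream APIs follows; the { type, text } message shape is an assumption based on the React example above, and the exact encoding of the streamed chunks is not documented here, so this simply dumps the raw text:
// Hypothetical manual client for the streaming endpoint above.
async function ask(text) {
  const res = await fetch('/api/conversation/complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Assumed message shape, mirroring the React example.
    body: JSON.stringify({ messages: [{ type: 'user', text }] }),
  })

  const reader = res.body.getReader()
  const decoder = new TextDecoder()

  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    // Chunk format is not documented here; log the decoded text as-is.
    console.log(decoder.decode(value, { stream: true }))
  }
}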
Explore a wide range of examples here.
Some notable examples include:
Platform | Example | Description |
---|---|---|
Next.js | Stateless Chat (App Router + RSC + Functions + Function Request) | A stateless chatbot example, where the conversation is managed by the client and the server. This example uses the App Router and Server Actions as well as AI functions with function requests. This is a powerful example demonstrating the full capabilities of the ChatBotKit conversational AI platform. |
Next.js | Stateless Chat (App Router + RSC + Functions) | A stateless chatbot example, where the conversation is managed by the client and the server. This example uses the App Router and Server Actions as well as AI functions. |
Next.js | Stateless Chat (App Router + RSC) | A stateless chatbot example, where the conversation is managed by the client and the server. This example uses the App Router and Server Actions. |
Next.js | Stateless Chat (App Router) | A stateless chatbot example, where the conversation is managed by the client. This example uses the App Router. |
Next.js | Stateless Chat | A stateless chatbot example, where the conversation is managed by the client. |
Next.js | Basic Chat | A basic chatbot example, where the conversation is managed by ChatBotKit. |
Next.js | NextAuth Example | Shows how to combine NextAuth and ChatBotKit. |
Node | GPT4 Streaming AI chatbot | A simple streaming AI chatbot example. |
Cloudflare Workers | GPT4 AI chatbot | A streaming AI chatbot example for Cloudflare Workers. |
All SDK features are considered unstable unless explicitly marked as stable. Stability is indicated by the presence of a @stable tag in the documentation.
- Type Documentation: Detailed information on available types here.
- Platform Documentation: Comprehensive guide to ChatBotKit here.
- Platform Tutorials: Step-by-step tutorials for ChatBotKit here.
Encounter a bug or want to contribute? Open an issue or submit a pull request on our official GitHub repository.
The project is set up as a monorepo using pnpm and Turbo. Clone the repository and run the following commands to get started:
pnpm install
To perform type and lint checks, run:
pnpm check
pnpm lint
To build the project, run:
pnpm build
To build a specific package, run:
pnpm -F @chatbotkit/${PACKAGE_NAME} build
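For example, to build only the core SDK package:
pnpm -F @chatbotkit/sdk build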
Alternative AI tools for node-sdk
Similar Open Source Tools


pixeltable
Pixeltable is a Python library designed for ML Engineers and Data Scientists to focus on exploration, modeling, and app development without the need to handle data plumbing. It provides a declarative interface for working with text, images, embeddings, and video, enabling users to store, transform, index, and iterate on data within a single table interface. Pixeltable is persistent, acting as a database unlike in-memory Python libraries such as Pandas. It offers features like data storage and versioning, combined data and model lineage, indexing, orchestration of multimodal workloads, incremental updates, and automatic production-ready code generation. The tool emphasizes transparency, reproducibility, cost-saving through incremental data changes, and seamless integration with existing Python code and libraries.

x
Ant Design X is a tool for crafting AI-driven interfaces effortlessly. It is built on the best practices of enterprise-level AI products, offering flexible and diverse atomic components for various AI dialogue scenarios. The tool provides out-of-the-box model integration with inference services compatible with OpenAI standards. It also enables efficient management of conversation data flows, supports rich template options, complete TypeScript support, and advanced theme customization. Ant Design X is designed to enhance development efficiency and deliver exceptional AI interaction experiences.

superagentx
SuperAgentX is a lightweight open-source AI framework designed for multi-agent applications with Artificial General Intelligence (AGI) capabilities. It offers goal-oriented multi-agents with retry mechanisms, easy deployment through WebSocket, RESTful API, and IO console interfaces, streamlined architecture with no major dependencies, contextual memory using SQL + Vector databases, flexible LLM configuration supporting various Gen AI models, and extendable handlers for integration with diverse APIs and data sources. It aims to accelerate the development of AGI by providing a powerful platform for building autonomous AI agents capable of executing complex tasks with minimal human intervention.

AutoAudit
AutoAudit is an open-source large language model specifically designed for the field of network security. It aims to provide powerful natural language processing capabilities for security auditing and network defense, including analyzing malicious code, detecting network attacks, and predicting security vulnerabilities. By coupling AutoAudit with ClamAV, a security scanning platform has been created for practical security audit applications. The tool is intended to assist security professionals with accurate and fast analysis and predictions to combat evolving network threats.

Scrapegraph-ai
ScrapeGraphAI is a web scraping Python library that utilizes LLM and direct graph logic to create scraping pipelines for websites and local documents. It offers various standard scraping pipelines like SmartScraperGraph, SearchGraph, SpeechGraph, and ScriptCreatorGraph. Users can extract information by specifying prompts and input sources. The library supports different LLM APIs such as OpenAI, Groq, Azure, and Gemini, as well as local models using Ollama. ScrapeGraphAI is designed for data exploration and research purposes, providing a versatile tool for extracting information from web pages and generating outputs like Python scripts, audio summaries, and search results.

TempCompass
TempCompass is a benchmark designed to evaluate the temporal perception ability of Video LLMs. It encompasses a diverse set of temporal aspects and task formats to comprehensively assess the capability of Video LLMs in understanding videos. The benchmark includes conflicting videos to prevent models from relying on single-frame bias and language priors. Users can clone the repository, install required packages, prepare data, run inference using examples like Video-LLaVA and Gemini, and evaluate the performance of their models across different tasks such as Multi-Choice QA, Yes/No QA, Caption Matching, and Caption Generation.

unsloth
Unsloth is a tool that allows users to fine-tune large language models (LLMs) 2-5x faster with 80% less memory. It is a free and open-source tool that can be used to fine-tune LLMs such as Gemma, Mistral, Llama 2-5, TinyLlama, and CodeLlama 34b. Unsloth supports 4-bit and 16-bit QLoRA / LoRA fine-tuning via bitsandbytes. It also supports DPO (Direct Preference Optimization), PPO, and Reward Modelling. Unsloth is compatible with Hugging Face's TRL, Trainer, Seq2SeqTrainer, and Pytorch code. It is also compatible with NVIDIA GPUs since 2018+ (minimum CUDA Capability 7.0).

openlit
OpenLIT is an OpenTelemetry-native GenAI and LLM Application Observability tool. It's designed to make the integration process of observability into GenAI projects as easy as pie: literally, with just **a single line of code**. Whether you're working with popular LLM Libraries such as OpenAI and HuggingFace or leveraging vector databases like ChromaDB, OpenLIT ensures your applications are monitored seamlessly, providing critical insights to improve performance and reliability.

evalscope
Eval-Scope is a framework designed to support the evaluation of large language models (LLMs) by providing pre-configured benchmark datasets, common evaluation metrics, model integration, automatic evaluation for objective questions, complex task evaluation using expert models, reports generation, visualization tools, and model inference performance evaluation. It is lightweight, easy to customize, supports new dataset integration, model hosting on ModelScope, deployment of locally hosted models, and rich evaluation metrics. Eval-Scope also supports various evaluation modes like single mode, pairwise-baseline mode, and pairwise (all) mode, making it suitable for assessing and improving LLMs.

simba
Simba is an open source, portable Knowledge Management System (KMS) designed to seamlessly integrate with any Retrieval-Augmented Generation (RAG) system. It features a modern UI and modular architecture, allowing developers to focus on building advanced AI solutions without the complexities of knowledge management. Simba offers a user-friendly interface to visualize and modify document chunks, supports various vector stores and embedding models, and simplifies knowledge management for developers. It is community-driven, extensible, and aims to enhance AI functionality by providing a seamless integration with RAG-based systems.

LLM4Decompile
LLM4Decompile is an open-source large language model dedicated to decompilation of Linux x86_64 binaries, supporting GCC's O0 to O3 optimization levels. It focuses on assessing re-executability of decompiled code through HumanEval-Decompile benchmark. The tool includes models with sizes ranging from 1.3 billion to 33 billion parameters, available on Hugging Face. Users can preprocess C code into binary and assembly instructions, then decompile assembly instructions into C using LLM4Decompile. Ongoing efforts aim to expand capabilities to support more architectures and configurations, integrate with decompilation tools like Ghidra and Rizin, and enhance performance with larger training datasets.

duolingo-clone
Lingo is an interactive platform for language learning that provides a modern UI/UX experience. It offers features like courses, quests, and a shop for users to engage with. The tech stack includes React JS, Next JS, Typescript, Tailwind CSS, Vercel, and Postgresql. Users can contribute to the project by submitting changes via pull requests. The platform utilizes resources from CodeWithAntonio, Kenney Assets, Freesound, Elevenlabs AI, and Flagpack. Key dependencies include @clerk/nextjs, @neondatabase/serverless, @radix-ui/react-avatar, and more. Users can follow the project creator on GitHub and Twitter, as well as subscribe to their YouTube channel for updates. To learn more about Next.js, users can refer to the Next.js documentation and interactive tutorial.

agentscope
AgentScope is a multi-agent platform designed to empower developers to build multi-agent applications with large-scale models. It features three high-level capabilities: Easy-to-Use, High Robustness, and Actor-Based Distribution. AgentScope provides a list of `ModelWrapper` to support both local model services and third-party model APIs, including OpenAI API, DashScope API, Gemini API, and ollama. It also enables developers to rapidly deploy local model services using libraries such as ollama (CPU inference), Flask + Transformers, Flask + ModelScope, FastChat, and vllm. AgentScope supports various services, including Web Search, Data Query, Retrieval, Code Execution, File Operation, and Text Processing. Example applications include Conversation, Game, and Distribution. AgentScope is released under Apache License 2.0 and welcomes contributions.

vnve
VNVE is a Visual Novel Video Editor that allows users to create visual novel videos in their browser with AI-powered rapid creation. It offers a low-cost production solution for converting textual content into videos, creating interactive videos for gaming experiences, and making video teasers for novels and short video dramas. The tool is a pure front-end Typescript implementation powered by PixiJS + WebCodecs, and users can also create videos programmatically using the npm package. VNVE is tailored specifically for visual novels, focusing on text content and simplifying the video creation process for users.

eko
Eko is a lightweight and flexible command-line tool for managing environment variables in your projects. It allows you to easily set, get, and delete environment variables for different environments, making it simple to manage configurations across development, staging, and production environments. With Eko, you can streamline your workflow and ensure consistency in your application settings without the need for complex setup or configuration files.
For similar tasks

agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.

zep-python
Zep is an open-source platform for building and deploying large language model (LLM) applications. It provides a suite of tools and services that make it easy to integrate LLMs into your applications, including chat history memory, embedding, vector search, and data enrichment. Zep is designed to be scalable, reliable, and easy to use, making it a great choice for developers who want to build LLM-powered applications quickly and easily.

lollms
LoLLMs Server is a text generation server based on large language models. It provides a Flask-based API for generating text using various pre-trained language models. This server is designed to be easy to install and use, allowing developers to integrate powerful text generation capabilities into their applications.

LlamaIndexTS
LlamaIndex.TS is a data framework for your LLM application. Use your own data with large language models (LLMs, OpenAI ChatGPT and others) in Typescript and Javascript.

semantic-kernel
Semantic Kernel is an SDK that integrates Large Language Models (LLMs) like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C#, Python, and Java. Semantic Kernel achieves this by allowing you to define plugins that can be chained together in just a few lines of code. What makes Semantic Kernel _special_ , however, is its ability to _automatically_ orchestrate plugins with AI. With Semantic Kernel planners, you can ask an LLM to generate a plan that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user.

botpress
Botpress is a platform for building next-generation chatbots and assistants powered by OpenAI. It provides a range of tools and integrations to help developers quickly and easily create and deploy chatbots for various use cases.

BotSharp
BotSharp is an open-source machine learning framework for building AI bot platforms. It provides a comprehensive set of tools and components for developing and deploying intelligent virtual assistants. BotSharp is designed to be modular and extensible, allowing developers to easily integrate it with their existing systems and applications. With BotSharp, you can quickly and easily create AI-powered chatbots, virtual assistants, and other conversational AI applications.

qdrant
Qdrant is a vector similarity search engine and vector database. It is written in Rust, which makes it fast and reliable even under high load. Qdrant can be used for a variety of applications, including: * Semantic search * Image search * Product recommendations * Chatbots * Anomaly detection Qdrant offers a variety of features, including: * Payload storage and filtering * Hybrid search with sparse vectors * Vector quantization and on-disk storage * Distributed deployment * Highlighted features such as query planning, payload indexes, SIMD hardware acceleration, async I/O, and write-ahead logging Qdrant is available as a fully managed cloud service or as an open-source software that can be deployed on-premises.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.