
mcp
MCP Server for Snowflake including Cortex AI, object management, SQL orchestration, semantic view consumption, and more
Stars: 122

The Snowflake Cortex AI Model Context Protocol (MCP) Server provides tooling for Snowflake Cortex AI, object management, and SQL orchestration. It supports capabilities such as Cortex Search, Cortex Analyst, Cortex Agent, Object Management, SQL Execution, and Semantic View Querying. Users can connect to Snowflake using various authentication methods like username/password, key pair, OAuth, SSO, and MFA. The server is client-agnostic and works with MCP Clients like Claude Desktop, Cursor, fast-agent, Microsoft Visual Studio Code + GitHub Copilot, and Codex. It includes tools for Object Management (creating, dropping, describing, listing objects), SQL Execution (executing SQL statements), and Semantic View Querying (discovering, querying Semantic Views). Troubleshooting can be done using the MCP Inspector tool.
README:
This Snowflake MCP server provides tooling for Snowflake Cortex AI, object management, and SQL orchestration, bringing these capabilities to the MCP ecosystem. When connected to an MCP Client (e.g. Claude for Desktop, fast-agent, Agentic Orchestration Framework), users can leverage these features.
The MCP server currently supports the following capabilities:
- Cortex Search: Query unstructured data in Snowflake as commonly used in Retrieval Augmented Generation (RAG) applications.
- Cortex Analyst: Query structured data in Snowflake via rich semantic modeling.
- Cortex Agent: Agentic orchestrator across structured and unstructured data retrieval.
- Object Management: Perform basic operations against Snowflake's most common objects such as creation, dropping, updating, and more.
- SQL Execution: Run LLM-generated SQL managed by user-configured permissions.
- Semantic View Querying: Discover and query Snowflake Semantic Views.
A simple configuration file is used to drive all tooling. An example can be seen at services/configuration.yaml and a template is below. The path to this configuration file will be passed to the server and the contents used to create MCP server tools at startup.
Cortex Services
Multiple Cortex Agent, Search, and Analyst services can be added. Ideal descriptions are both highly descriptive and mutually exclusive. Only the explicitly listed Cortex services will be available as tools in the MCP client.
Other Services
Other services include tooling for object management, query execution, and semantic view usage.
These groups of tools can be enabled by setting them to True in the other_services section of the configuration file.
SQL Statement Permissions
The sql_statement_permissions section ensures that only approved statements are executed across any tools with access to change Snowflake objects.
The list contains SQL expression types. Types marked True are permitted; those marked False are blocked. Please see SQL Execution for examples of each expression type.
agent_services: # List all Cortex Agent services
  - service_name: "<service_name>"
    description: > # Describe contents of the agent service
      "<Agent service that ...>"
    database_name: "<database_name>"
    schema_name: "<schema_name>"
  - service_name: "<service_name>"
    description: > # Describe contents of the agent service
      "<Agent service that ...>"
    database_name: "<database_name>"
    schema_name: "<schema_name>"
search_services: # List all Cortex Search services
  - service_name: "<service_name>"
    description: > # Describe contents of the search service
      "<Search service that ...>"
    database_name: "<database_name>"
    schema_name: "<schema_name>"
  - service_name: "<service_name>"
    description: > # Describe contents of the search service
      "<Search service that ...>"
    database_name: "<database_name>"
    schema_name: "<schema_name>"
analyst_services: # List all Cortex Analyst semantic models/views
  - service_name: "<service_name>" # Create descriptive name for the service
    semantic_model: "<semantic_yaml_or_view>" # Fully-qualify semantic YAML model or Semantic View
    description: > # Describe contents of the analyst service
      "<Analyst service that ...>"
  - service_name: "<service_name>" # Create descriptive name for the service
    semantic_model: "<semantic_yaml_or_view>" # Fully-qualify semantic YAML model or Semantic View
    description: > # Describe contents of the analyst service
      "<Analyst service that ...>"
other_services: # Set desired tool groups to True to enable tools for that group
  object_manager: True # Perform basic operations against Snowflake's most common objects such as creation, dropping, updating, and more.
  query_manager: True # Run LLM-generated SQL managed by user-configured permissions.
  semantic_manager: True # Discover and query Snowflake Semantic Views and their components.
sql_statement_permissions: # List SQL statements to explicitly allow (True) or disallow (False).
  # - All: True # To allow everything, uncomment and set All: True.
  - Alter: True
  - Command: True
  - Comment: True
  - Commit: True
  - Create: True
  - Delete: True
  - Describe: True
  - Drop: True
  - Insert: True
  - Merge: True
  - Rollback: True
  - Select: True
  - Transaction: True
  - TruncateTable: True
  - Unknown: False # To allow unknown or unmapped statement types, set Unknown: True.
  - Update: True
  - Use: True
[!NOTE] Previous versions of the configuration file supported specifying explicit values for columns and limit for each Cortex Search service. Instead, these are now exclusively dynamic based on user prompt. If not specified, a search service's default search_columns will be returned with a limit of 10.
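For illustration, here is a minimal sketch of how a configuration file like the template above could be loaded and enumerated at startup, assuming PyYAML is installed; the server's actual loader may differ:

```python
import yaml

# The path is whatever is passed via --service-config-file;
# "tools_config.yaml" here is a placeholder.
with open("tools_config.yaml") as f:
    config = yaml.safe_load(f)

# Each explicitly listed Cortex service becomes one tool.
for section in ("agent_services", "search_services", "analyst_services"):
    for service in config.get(section) or []:
        print(f"{section}: {service['service_name']}")

# other_services toggles whole tool groups on or off.
enabled = [name for name, on in (config.get("other_services") or {}).items() if on]
print("enabled tool groups:", enabled)
```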
The MCP server uses the Snowflake Python Connector for all authentication and connection methods. Please refer to the official Snowflake documentation for comprehensive authentication options and best practices.
The MCP server honors the RBAC permissions assigned to the specified role (as passed in the connection parameters) or default role of the user (if no role is passed to connect).
Connection parameters can be passed as CLI arguments and/or environment variables. The server supports all authentication methods available in the Snowflake Python Connector, including:
- Username/password authentication
- Key pair authentication
- OAuth authentication
- Single Sign-On (SSO)
- Multi-factor authentication (MFA)
Connection parameters can be passed as CLI arguments and/or environment variables:
| Parameter | CLI Arguments | Environment Variable | Description |
|---|---|---|---|
| Account | --account | SNOWFLAKE_ACCOUNT | Account identifier (e.g. xy12345.us-east-1) |
| Host | --host | SNOWFLAKE_HOST | Snowflake host URL |
| User | --user, --username | SNOWFLAKE_USER | Username for authentication |
| Password | --password | SNOWFLAKE_PASSWORD | Password or programmatic access token |
| Role | --role | SNOWFLAKE_ROLE | Role to use for connection |
| Warehouse | --warehouse | SNOWFLAKE_WAREHOUSE | Warehouse to use for queries |
| Passcode in Password | --passcode-in-password | - | Whether passcode is embedded in password |
| Passcode | --passcode | SNOWFLAKE_PASSCODE | MFA passcode for authentication |
| Private Key | --private-key | SNOWFLAKE_PRIVATE_KEY | Private key for key pair authentication |
| Private Key File | --private-key-file | SNOWFLAKE_PRIVATE_KEY_FILE | Path to private key file |
| Private Key Password | --private-key-file-pwd | SNOWFLAKE_PRIVATE_KEY_FILE_PWD | Password for encrypted private key |
| Authenticator | --authenticator | - | Authentication type (default: snowflake) |
| Connection Name | --connection-name | - | Name of connection from connections.toml (or config.toml) file |
[!WARNING] Deprecation Notice: The CLI arguments --account-identifier and --pat, as well as the environment variable SNOWFLAKE_PAT, are deprecated and will be removed in a future release. Please use --account and --password (or SNOWFLAKE_ACCOUNT and SNOWFLAKE_PASSWORD) instead.
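These parameters mirror those of snowflake.connector.connect in the Snowflake Python Connector. Below is a minimal sketch of the mapping, assuming snowflake-connector-python is installed and using placeholder credentials; how the MCP server resolves these internally may differ:

```python
import os

import snowflake.connector

# Each environment variable corresponds to a row in the table above;
# values are placeholders, not real credentials.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],          # e.g. xy12345.us-east-1
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],        # password or programmatic access token
    role=os.environ.get("SNOWFLAKE_ROLE"),            # optional; defaults to the user's default role
    warehouse=os.environ.get("SNOWFLAKE_WAREHOUSE"),  # optional
)
with conn.cursor() as cur:
    cur.execute("SELECT CURRENT_USER(), CURRENT_ROLE()")
    print(cur.fetchone())
conn.close()
```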
The MCP server is client-agnostic and will work with most MCP Clients that support basic functionality for MCP tools and (optionally) resources. Below are some examples.
To integrate this server with Claude Desktop as the MCP Client, add the following to your app's server configuration. By default, this is located at
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Set the path to the service configuration file and configure your connection method.
{
  "mcpServers": {
    "mcp-server-snowflake": {
      "command": "uvx",
      "args": [
        "snowflake-labs-mcp",
        "--service-config-file",
        "<path to file>/tools_config.yaml",
        "--connection-name",
        "default"
      ]
    }
  }
}
Register the MCP server in Cursor by opening Cursor and navigating to Settings -> Cursor Settings -> MCP. Add the configuration below.
{
  "mcpServers": {
    "mcp-server-snowflake": {
      "command": "uvx",
      "args": [
        "snowflake-labs-mcp",
        "--service-config-file",
        "<path to file>/tools_config.yaml",
        "--connection-name",
        "default"
      ]
    }
  }
}
Add the MCP server as context in the chat.
For troubleshooting Cursor server issues, view the logs by opening the Output panel and selecting Cursor MCP from the dropdown menu.
Update the fastagent.config.yaml mcp server section with the configuration file path and connection name.
# MCP Servers
mcp:
  servers:
    mcp-server-snowflake:
      command: "uvx"
      args: ["snowflake-labs-mcp", "--service-config-file", "<path to file>/tools_config.yaml", "--connection-name", "default"]
For prerequisites, environment setup, step-by-step guide and instructions, please refer to this blog.
Register the MCP server in Codex by adding the following to ~/.codex/config.toml
[mcp_servers.mcp-server-snowflake]
command = "uvx"
args = [
    "snowflake-labs-mcp",
    "--service-config-file",
    "<path to file>/tools_config.yaml",
    "--connection-name",
    "default"
]
After editing, the Snowflake MCP server should appear in the output of codex mcp list run from the terminal.
Instances of Cortex Agent (in the agent_services section), Cortex Search (in the search_services section), and Cortex Analyst (in the analyst_services section) of the configuration file will be served as tools. Leave these sections blank to omit such tools.
Only Cortex Agent objects are supported in the MCP server. That is, only Cortex Agent objects pre-configured in Snowflake can be leveraged as tools. See Cortex Agent Run API for more details.
Ensure all services have accurate context names for service name, database, schema, etc. Ideal descriptions are both highly descriptive and mutually exclusive.
The semantic_model value in analyst services should be a fully-qualified Semantic View OR a semantic YAML file in a Snowflake stage:
- For a semantic view: MY_DATABASE.MY_SCHEMA.MY_SEMANTIC_VIEW
- For a semantic YAML file: @MY_DATABASE.MY_SCHEMA.MY_STAGE/my_semantic_file.yaml (Note the @.)
The MCP server includes dozens of tools narrowly scoped to fulfill basic object management. It is recommended to use Snowsight directly for advanced object management.
The MCP server currently supports creating, dropping, creating or altering, describing, and listing the below object types.
To enable these tools, set object_manager to True in the configuration file under other_services.
- Database
- Schema
- Table
- View
- Warehouse
- Compute Pool
- Role
- Stage
- User
- Image Repository
Please note that these tools are also governed by permissions captured in the configuration file under sql_statement_permissions. Object management tools to create and create or alter objects are governed by the Create permission. Object dropping is governed by the Drop permission.
It is likely that more actions and objects will be included in future releases.
The general SQL tool provides a way to execute generic SQL statements generated by the MCP client. Users have full control over the types of SQL statements that are approved in the configuration file.
Listed in the configuration file under sql_statement_permissions are sqlglot expression types. Those marked as False will be stopped before execution. Those marked with True will be executed (or prompt the user for execution based on the MCP client settings).
To enable the SQL execution tool, set query_manager to True in the configuration file under other_services.
To allow all SQL expressions to pass the additional validation, set All to True.
Not all Snowflake SQL commands are mapped in sqlglot, and you may find that some obscure commands have yet to be captured in the configuration file. Setting Unknown to True will allow these uncaptured commands to pass the additional validation. You may also add new expression types directly to honor specific ones.
Below are some examples of sqlglot expression types with accompanying Snowflake SQL command examples:
| SQLGlot Expression Type | SQL Command |
|---|---|
| Alter | ALTER TABLE my_table ADD COLUMN new_column VARCHAR(50); |
| Command | CALL my_procedure('param1_value', 123); GRANT ROLE analyst TO USER user1; SHOW TABLES IN SCHEMA my_database.my_schema; |
| Comment | COMMENT ON TABLE my_table IS 'This table stores customer data.'; |
| Commit | COMMIT; |
| Create | CREATE TABLE my_table ( id INT, name VARCHAR(255), email VARCHAR(255) ); CREATE OR ALTER VIEW my_schema.my_new_view AS SELECT id, name, created_at FROM my_schema.my_table WHERE created_at >= '2023-01-01'; |
| Delete | DELETE FROM my_table WHERE id = 101; |
| Describe | DESCRIBE TABLE my_table; |
| Drop | DROP TABLE my_table; |
| Error | COPY INTO my_table FROM @my_stage/data/customers.csv FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_DELIMITER = ','); REVOKE ROLE analyst FROM USER user1; UNDROP TABLE my_table; |
| Insert | INSERT INTO my_table (id, name, email) VALUES (102, 'Jane Doe', '[email protected]'); |
| Merge | MERGE INTO my_table AS target USING (SELECT 103 AS id, 'John Smith' AS name, '[email protected]' AS email) AS source ON target.id = source.id WHEN MATCHED THEN UPDATE SET target.name = source.name, target.email = source.email WHEN NOT MATCHED THEN INSERT (id, name, email) VALUES (source.id, source.name, source.email); |
| Rollback | ROLLBACK; |
| Select | SELECT id, name FROM my_table WHERE id < 200 ORDER BY name; |
| Transaction | BEGIN; |
| TruncateTable | TRUNCATE TABLE my_table; |
| Update | UPDATE my_table SET email = '[email protected]' WHERE name = 'Jane Doe'; |
| Use | USE DATABASE my_database; |
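To make the gating concrete, here is a minimal sketch of how sqlglot can classify a statement against an allow-list like the one in the configuration file. This is illustrative only, not the server's actual validation code:

```python
import sqlglot
import sqlglot.errors

# Illustrative subset of sql_statement_permissions.
PERMISSIONS = {"Select": True, "Describe": True, "Drop": False, "Unknown": False}

def is_permitted(sql: str) -> bool:
    try:
        expression = sqlglot.parse_one(sql, read="snowflake")
    except sqlglot.errors.ParseError:
        # Statements sqlglot cannot parse fall under the Unknown permission.
        return PERMISSIONS.get("Unknown", False)
    # The expression class name (Select, Drop, ...) matches the config keys.
    statement_type = type(expression).__name__
    return PERMISSIONS.get(statement_type, PERMISSIONS.get("Unknown", False))

print(is_permitted("SELECT id, name FROM my_table"))  # True
print(is_permitted("DROP TABLE my_table"))            # False
```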
Several tools support the discovery and querying of Snowflake Semantic Views and their components. Semantic Views can be listed and described. In addition, you can list their metrics and dimensions. Lastly, you can query Semantic Views directly.
To enable these tools, set semantic_manager to True in the configuration file under other_services.
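As a rough sketch of what a direct Semantic View query looks like, the snippet below runs Snowflake's SEMANTIC_VIEW syntax through the Python Connector. The view, metric, and dimension names are hypothetical, and the exact syntax is documented in Snowflake's Semantic View reference:

```python
import snowflake.connector

# connection_name reads credentials from connections.toml; "default" is a placeholder.
conn = snowflake.connector.connect(connection_name="default")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT * FROM SEMANTIC_VIEW(
            MY_DATABASE.MY_SCHEMA.MY_SEMANTIC_VIEW
                METRICS orders.order_count
                DIMENSIONS orders.order_year
        )
        """
    )
    for row in cur.fetchall():
        print(row)
conn.close()
```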
The MCP Inspector is suggested for troubleshooting the MCP server. Run the command below to launch the inspector.
npx @modelcontextprotocol/inspector uvx snowflake-labs-mcp --service-config-file "<path_to_file>/tools_config.yaml" --connection-name "default"
- The MCP server supports all connection methods supported by the Snowflake Python Connector. See Connecting to Snowflake with the Python Connector for more information.
- While LLMs' support for more tools will likely grow, you can hide tool groups by setting them to False in the configuration file. Likewise, only listed Cortex services will be made into tools.
- Yes. Pass it to the CLI flag --password or set it as the environment variable SNOWFLAKE_PASSWORD.
- The MCP server is intended to be used as one part of the MCP ecosystem. Think of it as a collection of tools. You'll need an MCP Client to act as an orchestrator. See the MCP Introduction for more information.
- All tools in this MCP server are managed services, accessible via REST API. No separate remote service deployment is necessary. Instead, the current version of the server is intended to be started by the MCP client, such as Claude Desktop, Cursor, fast-agent, etc. By configuring these MCP clients with the server, the application will spin up the server for you. Future versions of the MCP server may support deployment as a remote service.
- If using a Programmatic Access Token (PAT), note that PATs do not evaluate secondary roles. When creating one, please select a single role that has access to all services and their underlying objects OR select any role. A new PAT will need to be created to change this property.
- You may add multiple instances of both services. The MCP Client will determine the appropriate one(s) to use based on the user's prompt.
- If your account name contains underscores, try using the dashed version of the URL (see the sketch below):
  - Account identifier with underscores: acme-marketing_test_account
  - Account identifier with dashes: acme-marketing-test-account
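The conversion is simply underscores to dashes; a throwaway helper (hypothetical, shown only for illustration):

```python
def dashed_account(account: str) -> str:
    # Underscores in the account name become dashes in the URL form.
    return account.replace("_", "-")

print(dashed_account("acme-marketing_test_account"))  # acme-marketing-test-account
```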
Please add issues to the GitHub repository.
Similar Open Source Tools

mcp
The Snowflake Cortex AI Model Context Protocol (MCP) Server provides tooling for Snowflake Cortex AI, object management, and SQL orchestration. It supports capabilities such as Cortex Search, Cortex Analyst, Cortex Agent, Object Management, SQL Execution, and Semantic View Querying. Users can connect to Snowflake using various authentication methods like username/password, key pair, OAuth, SSO, and MFA. The server is client-agnostic and works with MCP Clients like Claude Desktop, Cursor, fast-agent, Microsoft Visual Studio Code + GitHub Copilot, and Codex. It includes tools for Object Management (creating, dropping, describing, listing objects), SQL Execution (executing SQL statements), and Semantic View Querying (discovering, querying Semantic Views). Troubleshooting can be done using the MCP Inspector tool.

call-center-ai
Call Center AI is an AI-powered call center solution leveraging Azure and OpenAI GPT. It allows for AI agent-initiated phone calls or direct calls to the bot from a configured phone number. The bot is customizable for various industries like insurance, IT support, and customer service, with features such as accessing claim information, conversation history, language change, SMS sending, and more. The project is a proof of concept showcasing the integration of Azure Communication Services, Azure Cognitive Services, and Azure OpenAI for an automated call center solution.

langserve
LangServe helps developers deploy `LangChain` runnables and chains as a REST API. This library is integrated with FastAPI and uses pydantic for data validation. In addition, it provides a client that can be used to call into runnables deployed on a server. A JavaScript client is available in LangChain.js.

curator
Bespoke Curator is an open-source tool for data curation and structured data extraction. It provides a Python library for generating synthetic data at scale, with features like programmability, performance optimization, caching, and integration with HuggingFace Datasets. The tool includes a Curator Viewer for dataset visualization and offers a rich set of functionalities for creating and refining data generation strategies.

syncode
SynCode is a novel framework for the grammar-guided generation of Large Language Models (LLMs) that ensures syntactically valid output based on a Context-Free Grammar (CFG). It supports various programming languages like Python, Go, SQL, Math, JSON, and more. Users can define custom grammars using EBNF syntax. SynCode offers fast generation, seamless integration with HuggingFace Language Models, and the ability to sample with different decoding strategies.

syncode
SynCode is a novel framework for the grammar-guided generation of Large Language Models (LLMs) that ensures syntactically valid output with respect to defined Context-Free Grammar (CFG) rules. It supports general-purpose programming languages like Python, Go, SQL, JSON, and more, allowing users to define custom grammars using EBNF syntax. The tool compares favorably to other constrained decoders and offers features like fast grammar-guided generation, compatibility with HuggingFace Language Models, and the ability to work with various decoding strategies.

redisvl
Redis Vector Library (RedisVL) is a Python client library for building AI applications on top of Redis. It provides a high-level interface for managing vector indexes, performing vector search, and integrating with popular embedding models and providers. RedisVL is designed to make it easy for developers to build and deploy AI applications that leverage the speed, flexibility, and reliability of Redis.

call-center-ai
Call Center AI is an AI-powered call center solution that leverages Azure and OpenAI GPT. It is a proof of concept demonstrating the integration of Azure Communication Services, Azure Cognitive Services, and Azure OpenAI to build an automated call center solution. The project showcases features like accessing claims on a public website, customer conversation history, language change during conversation, bot interaction via phone number, multiple voice tones, lexicon understanding, todo list creation, customizable prompts, content filtering, GPT-4 Turbo for customer requests, specific data schema for claims, documentation database access, SMS report sending, conversation resumption, and more. The system architecture includes components like RAG AI Search, SMS gateway, call gateway, moderation, Cosmos DB, event broker, GPT-4 Turbo, Redis cache, translation service, and more. The tool can be deployed remotely using GitHub Actions and locally with prerequisites like Azure environment setup, configuration file creation, and resource hosting. Advanced usage includes custom training data with AI Search, prompt customization, language customization, moderation level customization, claim data schema customization, OpenAI compatible model usage for the LLM, and Twilio integration for SMS.

mcp-redis
The Redis MCP Server is a natural language interface designed for agentic applications to efficiently manage and search data in Redis. It integrates seamlessly with MCP (Model Context Protocol) clients, enabling AI-driven workflows to interact with structured and unstructured data in Redis. The server supports natural language queries, seamless MCP integration, full Redis support for various data types, search and filtering capabilities, scalability, and lightweight design. It provides tools for managing data stored in Redis, such as string, hash, list, set, sorted set, pub/sub, streams, JSON, query engine, and server management. Installation can be done from PyPI or GitHub, with options for testing, development, and Docker deployment. Configuration can be via command line arguments or environment variables. Integrations include OpenAI Agents SDK, Augment, Claude Desktop, and VS Code with GitHub Copilot. Use cases include AI assistants, chatbots, data search & analytics, and event processing. Contributions are welcome under the MIT License.

claim-ai-phone-bot
AI-powered call center solution with Azure and OpenAI GPT. The bot can answer calls, understand the customer's request, and provide relevant information or assistance. It can also create a todo list of tasks to complete the claim, and send a report after the call. The bot is customizable, and can be used in multiple languages.

llm-client
LLMClient is a JavaScript/TypeScript library that simplifies working with large language models (LLMs) by providing an easy-to-use interface for building and composing efficient prompts using prompt signatures. These signatures enable the automatic generation of typed prompts, allowing developers to leverage advanced capabilities like reasoning, function calling, RAG, ReAcT, and Chain of Thought. The library supports various LLMs and vector databases, making it a versatile tool for a wide range of applications.

cake
cake is a pure Rust implementation of the llama3 LLM distributed inference based on Candle. The project aims to enable running large models on consumer hardware clusters of iOS, macOS, Linux, and Windows devices by sharding transformer blocks. It allows running inferences on models that wouldn't fit in a single device's GPU memory by batching contiguous transformer blocks on the same worker to minimize latency. The tool provides a way to optimize memory and disk space by splitting the model into smaller bundles for workers, ensuring they only have the necessary data. cake supports various OS, architectures, and accelerations, with different statuses for each configuration.

mcpdoc
The MCP LLMS-TXT Documentation Server is an open-source server that provides developers full control over tools used by applications like Cursor, Windsurf, and Claude Code/Desktop. It allows users to create a user-defined list of `llms.txt` files and use a `fetch_docs` tool to read URLs within these files, enabling auditing of tool calls and context returned. The server supports various applications and provides a way to connect to them, configure rules, and test tool calls for tasks related to documentation retrieval and processing.

mcp-victoriametrics
The VictoriaMetrics MCP Server is an implementation of Model Context Protocol (MCP) server for VictoriaMetrics. It provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. The server allows you to use almost all read-only APIs of VictoriaMetrics, enabling monitoring, observability, and debugging tasks related to your VictoriaMetrics instances. It also contains embedded up-to-date documentation and tools for exploring metrics, labels, alerts, and more. The server can be used for advanced automation and interaction capabilities for engineers and tools.

HippoRAG
HippoRAG is a novel retrieval augmented generation (RAG) framework inspired by the neurobiology of human long-term memory that enables Large Language Models (LLMs) to continuously integrate knowledge across external documents. It provides RAG systems with capabilities that usually require a costly and high-latency iterative LLM pipeline for only a fraction of the computational cost. The tool facilitates setting up retrieval corpus, indexing, and retrieval processes for LLMs, offering flexibility in choosing different online LLM APIs or offline LLM deployments through LangChain integration. Users can run retrieval on pre-defined queries or integrate directly with the HippoRAG API. The tool also supports reproducibility of experiments and provides data, baselines, and hyperparameter tuning scripts for research purposes.

LLMDebugger
This repository contains the code and dataset for LDB, a novel debugging framework that enables Large Language Models (LLMs) to refine their generated programs by tracking the values of intermediate variables throughout the runtime execution. LDB segments programs into basic blocks, allowing LLMs to concentrate on simpler code units, verify correctness block by block, and pinpoint errors efficiently. The tool provides APIs for debugging and generating code with debugging messages, mimicking how human developers debug programs.
For similar tasks

mcp
The Snowflake Cortex AI Model Context Protocol (MCP) Server provides tooling for Snowflake Cortex AI, object management, and SQL orchestration. It supports capabilities such as Cortex Search, Cortex Analyst, Cortex Agent, Object Management, SQL Execution, and Semantic View Querying. Users can connect to Snowflake using various authentication methods like username/password, key pair, OAuth, SSO, and MFA. The server is client-agnostic and works with MCP Clients like Claude Desktop, Cursor, fast-agent, Microsoft Visual Studio Code + GitHub Copilot, and Codex. It includes tools for Object Management (creating, dropping, describing, listing objects), SQL Execution (executing SQL statements), and Semantic View Querying (discovering, querying Semantic Views). Troubleshooting can be done using the MCP Inspector tool.

ryoma
Ryoma is an AI Powered Data Agent framework that offers a comprehensive solution for data analysis, engineering, and visualization. It leverages cutting-edge technologies like Langchain, Reflex, Apache Arrow, Jupyter Ai Magics, Amundsen, Ibis, and Feast to provide seamless integration of language models, build interactive web applications, handle in-memory data efficiently, work with AI models, and manage machine learning features in production. Ryoma also supports various data sources like Snowflake, Sqlite, BigQuery, Postgres, MySQL, and different engines like Apache Spark and Apache Flink. The tool enables users to connect to databases, run SQL queries, and interact with data and AI models through a user-friendly UI called Ryoma Lab.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per user and per organization basis
- Block or redact requests containing PIIs
- Improve LLM reliability with failovers, retries and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.