
voyant
Fact-Verified Travel AI Agent
Stars: 66

Voyant Travel Assistant is a meta-agent pipeline designed for fast, trustworthy answers with clear provenance and resilient I/O. It focuses on AI-first planning, strict JSON parsing, non-blocking async I/O, and a verification pipeline. The agent analyzes user messages, plans tool calls, executes them, and blends the results into a grounded reply. Built-in tools cover weather, country information, attractions, travel search, flight information, and policy extraction. Users can interact with it through a CLI, and its architecture provides observability through stored receipts and verification artifacts.
README:
Single meta‑agent pipeline that plans tool calls, executes them, writes receipts, and verifies before replying. Built for fast, trustworthy answers with clear provenance and resilient I/O.
Quick start: cd root && npm install && npm run build && npm run start. For CLI: cd root && npm run cli.
Minimal env: LLM_PROVIDER_BASEURL + LLM_API_KEY (or OPENROUTER_API_KEY), plus optional Amadeus/Vectara/Search keys when those tools are used.
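For illustration, a minimal sketch of how that env fallback could be resolved in TypeScript; resolveLlmConfig and the exact precedence are assumptions inferred from the README, not voyant's source:

```typescript
// Hedged sketch: resolve the minimal LLM env. The fallback semantics
// (OPENROUTER_API_KEY standing in for LLM_API_KEY) are inferred, not confirmed.
function resolveLlmConfig(): { baseUrl: string | undefined; apiKey: string } {
  const baseUrl = process.env.LLM_PROVIDER_BASEURL;
  const apiKey = process.env.LLM_API_KEY ?? process.env.OPENROUTER_API_KEY;
  if (!apiKey) {
    throw new Error("Set LLM_API_KEY (with LLM_PROVIDER_BASEURL) or OPENROUTER_API_KEY");
  }
  return { baseUrl, apiKey };
}
```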
Architecture focuses on AI‑first planning (OpenAI‑style tools), strict JSON parsing via Zod, non‑blocking async I/O with explicit timeouts/signals, and a verification pipeline that stores receipts and artifacts for /why.
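A short sketch of what strict Zod parsing plus signal-based timeouts look like in practice; the WeatherSchema shape and helper names are illustrative, not taken from voyant:

```typescript
import { z } from "zod";

// Fetch with an explicit deadline: the request is aborted via AbortSignal
// instead of hanging, so one slow tool cannot block the whole turn.
async function fetchJsonWithTimeout(url: string, ms = 5_000): Promise<unknown> {
  const res = await fetch(url, { signal: AbortSignal.timeout(ms) });
  if (!res.ok) throw new Error(`HTTP ${res.status} from ${url}`);
  return res.json();
}

// Hypothetical response shape for a weather tool. .strict() means unknown
// keys or wrong types reject the payload outright.
const WeatherSchema = z.object({
  tempC: z.number(),
  summary: z.string(),
}).strict();

export async function getWeatherExample(url: string) {
  return WeatherSchema.parse(await fetchJsonWithTimeout(url));
}
```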
Testing
- Layers
  - Unit: pure modules (schemas, parsers, helpers). No network.
  - Integration: tool adapters and API routes with HTTP mocked.
  - Golden: real meta‑agent conversations that persist receipts, then call the verifier (verify.md) via an LLM pass‑through; assertions read the stored verification artifact (no re‑verify on /why).
- Commands
  - cd root && npm run test:unit
  - cd root && npm run test:integration
  - cd root && VERIFY_LLM=1 AUTO_VERIFY_REPLIES=true npm run test:golden
- Golden prerequisites
  - Provide LLM_PROVIDER_BASEURL + LLM_API_KEY, or OPENROUTER_API_KEY.
  - Golden tests are skipped unless VERIFY_LLM=1 (see the sketch below).
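A hedged sketch of that VERIFY_LLM gate in a golden test, assuming a Vitest-style runner; runGoldenConversation and loadStoredVerification are hypothetical stand-ins for voyant's real test utilities:

```typescript
import { describe, it, expect } from "vitest";

// Golden tests are gated on the verifier LLM being enabled.
const verifyEnabled = process.env.VERIFY_LLM === "1";

// Hypothetical helpers standing in for voyant's test utilities.
declare function runGoldenConversation(message: string): Promise<{ threadId: string }>;
declare function loadStoredVerification(threadId: string): Promise<{ verdict: string }>;

describe.skipIf(!verifyEnabled)("golden: meta-agent conversation", () => {
  it("asserts against the stored verification artifact (no re-verify)", async () => {
    const { threadId } = await runGoldenConversation("Will it rain in Lisbon tomorrow?");
    // Read the persisted artifact rather than re-running the verifier.
    const verification = await loadStoredVerification(threadId);
    expect(verification.verdict).not.toBe("fail");
  });
});
```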
Agent Decision Flow
```mermaid
flowchart TD
  U["User message"] --> API["POST /chat\\nroot/src/api/routes.ts"]
  API --> HANDLE["handleChat()\\nroot/src/core/blend.ts"]
  HANDLE --> RUN["runMetaAgentTurn()\\nroot/src/agent/meta_agent.ts"]
  subgraph MetaAgent["Meta Agent\\nAnalyze → Plan → Act → Blend"]
    RUN --> LOAD["Load meta_agent.md\\nlog prompt hash/version"]
    LOAD --> PLAN["Analyze + Plan (LLM)\\nCONTROL JSON route/missing/calls"]
    PLAN --> ACT["chatWithToolsLLM()\\nexecute tool plan"]
    ACT --> BLEND["Blend (LLM) grounded reply"]
    subgraph Tools["Tools Registry\\nroot/src/agent/tools/index.ts"]
      ACT --> T1["weather / getCountry / getAttractions"]
      ACT --> T2["searchTravelInfo (Tavily/Brave)"]
      ACT --> T3["vectaraQuery (RAG locator)"]
      ACT --> T4["extractPolicyWithCrawlee / deepResearch"]
      ACT --> T5["Amadeus resolveCity / airports / flights"]
    end
    BLEND --> RECEIPTS["setLastReceipts()\\nslot_memory.ts"]
  end
  RECEIPTS --> MET["observeStage / addMeta* metrics\\nutil/metrics.ts"]
  MET --> AUTO{"AUTO_VERIFY_REPLIES=true?"}
  AUTO -->|Yes| VERIFY["verifyAnswer()\\ncore/verify.ts\\nctx: getContext + slots + intent"]
  VERIFY --> STORE["setLastVerification()\\nslot_memory.ts"]
  VERIFY --> VERDICT{"verdict = fail & revised answer?"}
  VERDICT -->|Yes| REPLACE["Use revised answer\\npushMessage(thread, revised)"]
  VERDICT -->|No| FINAL["Return meta reply"]
  STORE --> FINAL
  AUTO -->|No| FINAL
  REPLACE --> FINAL
  FINAL --> RESP["ChatOutput → caller"]
  RESP --> WHY["/why command\\nreads receipts + stored verification"]
```
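The verification branch above reads as simple control flow. A minimal sketch under assumed signatures (the real verifyAnswer, setLastVerification, and pushMessage live in core/verify.ts and slot_memory.ts and may differ):

```typescript
// Sketch of the AUTO_VERIFY_REPLIES branch after the blend step.
// All signatures are assumptions for illustration, not voyant's actual API.
type Verification = { verdict: string; revisedAnswer?: string };

declare function verifyAnswer(threadId: string, reply: string): Promise<Verification>;
declare function setLastVerification(threadId: string, v: Verification): Promise<void>;
declare function pushMessage(threadId: string, text: string): Promise<void>;

async function finalizeReply(threadId: string, reply: string): Promise<string> {
  if (process.env.AUTO_VERIFY_REPLIES !== "true") return reply;

  const verification = await verifyAnswer(threadId, reply);
  await setLastVerification(threadId, verification); // stored for /why

  // Only a failing verdict *with* a revised answer replaces the reply.
  if (verification.verdict === "fail" && verification.revisedAnswer) {
    await pushMessage(threadId, verification.revisedAnswer);
    return verification.revisedAnswer;
  }
  return reply;
}
```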
Prompts Flow
```mermaid
flowchart TD
  U["User message"] --> SYS["meta_agent.md<br/>System prompt"]
  SYS --> PLAN["Planning request (LLM)<br/>CONTROL JSON route/missing/calls"]
  PLAN --> ACT["chatWithToolsLLM<br/>Meta Agent execution"]
  ACT --> BLEND["Blend instructions<br/>within meta_agent.md"]
  BLEND --> RECEIPTS["Persist receipts\\nslot_memory.setLastReceipts"]
  RECEIPTS --> VERQ{"Auto-verify or /why?"}
  VERQ -->|Yes| VERIFY["verify.md<br/>STRICT JSON verdict"]
  VERIFY --> OUT["Reply + receipts"]
  VERQ -->|No| OUT
```
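Both diagrams reference the planner's CONTROL JSON (route/missing/calls) and the verifier's STRICT JSON verdict; a plausible pair of Zod guards for those payloads, where every field beyond route, missing, calls, and verdict is an assumption:

```typescript
import { z } from "zod";

// Planner output: which route to take, which slots are missing, which tools to call.
const ControlSchema = z.object({
  route: z.string(),
  missing: z.array(z.string()),
  calls: z.array(z.object({ tool: z.string(), args: z.record(z.unknown()) })),
}).strict();

// Verifier output: "verdict" is implied by the flow; "revisedAnswer" is an
// assumption based on the "fail & revised answer" branch in the decision flow.
const VerdictSchema = z.object({
  verdict: z.enum(["pass", "fail"]),
  revisedAnswer: z.string().optional(),
}).strict();

export const parseControl = (raw: string) => ControlSchema.parse(JSON.parse(raw));
export const parseVerdict = (raw: string) => VerdictSchema.parse(JSON.parse(raw));
```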
Docs: see docs/index.html for quick links to prompts/observability.
Similar Open Source Tools

hud-python
hud-python is a Python library for creating interactive heads-up displays (HUDs) in video games. It provides a simple and flexible way to overlay information on the screen, such as player health, score, and notifications. The library is designed to be easy to use and customizable, allowing game developers to enhance the user experience by adding dynamic elements to their games. With hud-python, developers can create engaging HUDs that improve gameplay and provide important feedback to players.

ai00_server
AI00 RWKV Server is an inference API server for the RWKV language model based upon the web-rwkv inference engine. It supports VULKAN parallel and concurrent batched inference and can run on all GPUs that support VULKAN. No need for Nvidia cards!!! AMD cards and even integrated graphics can be accelerated!!! No need for bulky pytorch, CUDA and other runtime environments, it's compact and ready to use out of the box! Compatible with OpenAI's ChatGPT API interface. 100% open source and commercially usable, under the MIT license. If you are looking for a fast, efficient, and easy-to-use LLM API server, then AI00 RWKV Server is your best choice. It can be used for various tasks, including chatbots, text generation, translation, and Q&A.

onnxruntime-server
ONNX Runtime Server is a server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference. It aims to offer simple, high-performance ML inference and a good developer experience. Users can provide inference APIs for ONNX models without writing additional code by placing the models in the directory structure. Each session can choose between CPU or CUDA, analyze input/output, and provide Swagger API documentation for easy testing. Ready-to-run Docker images are available, making it convenient to deploy the server.

mcphub.nvim
MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.

evalplus
EvalPlus is a rigorous evaluation framework for LLM4Code, providing HumanEval+ and MBPP+ tests to evaluate large language models on code generation tasks. It offers precise evaluation and ranking, coding rigorousness analysis, and pre-generated code samples. Users can use EvalPlus to generate code solutions, post-process code, and evaluate code quality. The tool includes tools for code generation and test input generation using various backends.

air
Air is a new web framework for Python web development, built with FastAPI, Starlette, and Pydantic. It provides intuitive shortcuts and optimizations to expedite coding HTML with FastAPI, easy HTML content generation using Python classes, and seamless integration with Jinja templates. Air also offers utilities for using HTMX, HTML form validation powered by pydantic, and well-documented features. It aims to combine sophisticated HTML pages and a REST API into one app, making it easy to use FastAPI and Air together.

obsei
Obsei is an open-source, low-code, AI powered automation tool that consists of an Observer to collect unstructured data from various sources, an Analyzer to analyze the collected data with various AI tasks, and an Informer to send analyzed data to various destinations. The tool is suitable for scheduled jobs or serverless applications as all Observers can store their state in databases. Obsei is still in alpha stage, so caution is advised when using it in production. The tool can be used for social listening, alerting/notification, automatic customer issue creation, extraction of deeper insights from feedbacks, market research, dataset creation for various AI tasks, and more based on creativity.

LocalAGI
LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code. It provides a complete drop-in replacement for OpenAI's Responses APIs with advanced agentic capabilities. With LocalAGI, you can create customizable AI assistants, automations, chat bots, and agents that run 100% locally, without the need for cloud services or API keys. The platform offers features like no-code agents, web-based interface, advanced agent teaming, connectors for various platforms, comprehensive REST API, short & long-term memory capabilities, planning & reasoning, periodic tasks scheduling, memory management, multimodal support, extensible custom actions, fully customizable models, observability, and more.

evalscope
Eval-Scope is a framework designed to support the evaluation of large language models (LLMs) by providing pre-configured benchmark datasets, common evaluation metrics, model integration, automatic evaluation for objective questions, complex task evaluation using expert models, reports generation, visualization tools, and model inference performance evaluation. It is lightweight, easy to customize, supports new dataset integration, model hosting on ModelScope, deployment of locally hosted models, and rich evaluation metrics. Eval-Scope also supports various evaluation modes like single mode, pairwise-baseline mode, and pairwise (all) mode, making it suitable for assessing and improving LLMs.

prometheus-mcp-server
Prometheus MCP Server is a Model Context Protocol (MCP) server that provides access to Prometheus metrics and queries through standardized interfaces. It allows AI assistants to execute PromQL queries and analyze metrics data. The server supports executing queries, exploring metrics, listing available metrics, viewing query results, and authentication. It offers interactive tools for AI assistants and can be configured to choose specific tools. Installation methods include using Docker Desktop, MCP-compatible clients like Claude Desktop, VS Code, Cursor, and Windsurf, and manual Docker setup. Configuration options include setting Prometheus server URL, authentication credentials, organization ID, transport mode, and bind host/port. Contributions are welcome, and the project uses `uv` for managing dependencies and includes a comprehensive test suite for functionality testing.

client-ts
Mistral Typescript Client is an SDK for Mistral AI API, providing Chat Completion and Embeddings APIs. It allows users to create chat completions, upload files, create agent completions, create embedding requests, and more. The SDK supports various JavaScript runtimes and provides detailed documentation on installation, requirements, API key setup, example usage, error handling, server selection, custom HTTP client, authentication, providers support, standalone functions, debugging, and contributions.

cua
Cua is a tool for creating and running high-performance macOS and Linux virtual machines on Apple Silicon, with built-in support for AI agents. It provides libraries like Lume for running VMs with near-native performance, Computer for interacting with sandboxes, and Agent for running agentic workflows. Users can refer to the documentation for onboarding, explore demos showcasing AI-Gradio and GitHub issue fixing, and utilize accessory libraries like Core, PyLume, Computer Server, and SOM. Contributions are welcome, and the tool is open-sourced under the MIT License.

client-python
The Mistral Python Client is a tool inspired by cohere-python that allows users to interact with the Mistral AI API. It provides functionalities to access and utilize the AI capabilities offered by Mistral. Users can easily install the client using pip and manage dependencies using poetry. The client includes examples demonstrating how to use the API for various tasks, such as chat interactions. To get started, users need to obtain a Mistral API Key and set it as an environment variable. Overall, the Mistral Python Client simplifies the integration of Mistral AI services into Python applications.

browser4
Browser4 is a lightning-fast, coroutine-safe browser designed for AI integration with large language models. It offers ultra-fast automation, deep web understanding, and powerful data extraction APIs. Users can automate the browser, extract data at scale, and perform tasks like summarizing products, extracting product details, and finding specific links. The tool is developer-friendly, supports AI-powered automation, and provides advanced features like X-SQL for precise data extraction. It also offers RPA capabilities, browser control, and complex data extraction with X-SQL. Browser4 is suitable for web scraping, data extraction, automation, and AI integration tasks.

kernel-memory
Kernel Memory (KM) is a multi-modal AI Service specialized in the efficient indexing of datasets through custom continuous data hybrid pipelines, with support for Retrieval Augmented Generation (RAG), synthetic memory, prompt engineering, and custom semantic memory processing. KM is available as a Web Service, as a Docker container, a Plugin for ChatGPT/Copilot/Semantic Kernel, and as a .NET library for embedded applications. Utilizing advanced embeddings and LLMs, the system enables Natural Language querying for obtaining answers from the indexed data, complete with citations and links to the original sources. Designed for seamless integration as a Plugin with Semantic Kernel, Microsoft Copilot and ChatGPT, Kernel Memory enhances data-driven features in applications built for most popular AI platforms.
For similar tasks

tribe
Tribe AI is a low code tool designed to rapidly build and coordinate multi-agent teams. It leverages the langgraph framework to customize and coordinate teams of agents, allowing tasks to be split among agents with different strengths for faster and better problem-solving. The tool supports persistent conversations, observability, tool calling, human-in-the-loop functionality, easy deployment with Docker, and multi-tenancy for managing multiple users and teams.

crewAI
CrewAI is a cutting-edge framework designed to orchestrate role-playing autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. It enables AI agents to assume roles, share goals, and operate in a cohesive unit, much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions. With features like role-based agent design, autonomous inter-agent delegation, flexible task management, and support for various LLMs, CrewAI offers a dynamic and adaptable solution for both development and production workflows.

aip-community-registry
AIP Community Registry is a collection of community-built applications and projects leveraging Palantir's AIP Platform. It showcases real-world implementations from developers using AIP in production. The registry features various solutions demonstrating practical implementations and integration patterns across different use cases.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud-native AI gateway written in Go. It currently provides native support for OpenAI, Anthropic, Azure OpenAI, and vLLM, and aims to provide enterprise-level infrastructure that can power any LLM production use case. Typical use cases include setting LLM usage limits for users on different pricing tiers, tracking LLM usage on a per-user and per-organization basis, blocking or redacting requests containing PII, improving LLM reliability with failovers, retries, and caching, and distributing API keys with rate limits and cost limits for internal development/production use or for students.

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.