
deer-flow
DeerFlow is a community-driven Deep Research framework, combining language models with tools like web search, crawling, and Python execution, while contributing back to the open-source community.
Stars: 16787

DeerFlow is a community-driven Deep Research framework that combines language models with specialized tools for tasks like web search, crawling, and Python code execution. It supports FaaS deployment and one-click deployment based on Volcengine. The framework includes core capabilities like LLM integration, search and retrieval, RAG integration, MCP seamless integration, human collaboration, report post-editing, and content creation. The architecture is based on a modular multi-agent system with components like Coordinator, Planner, Research Team, and Text-to-Speech integration. DeerFlow also supports interactive mode, human-in-the-loop mechanism, and command-line arguments for customization.
README:
English | 简体中文 | 日本語 | Deutsch | Español | Русский | Portuguese
Originated from Open Source, give back to Open Source.
DeerFlow (Deep Exploration and Efficient Research Flow) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.
Currently, DeerFlow has officially entered the FaaS Application Center of Volcengine. Users can experience it online through the experience link to intuitively feel its powerful functions and convenient operations. At the same time, to meet the deployment needs of different users, DeerFlow supports one-click deployment based on Volcengine. Click the deployment link to quickly complete the deployment process and start an efficient research journey.
Please visit our official website for more details.
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
In this demo, we showcase how to use DeerFlow to:
- Seamlessly integrate with MCP services
- Conduct the Deep Research process and produce a comprehensive report with images
- Create podcast audio based on the generated report
Replay examples:
- How tall is the Eiffel Tower compared to the tallest building?
- What are the top trending repositories on GitHub?
- Write an article about Nanjing's traditional dishes
- How to decorate a rental apartment?
Visit our official website to explore more replays.
- 🚀 Quick Start
- 🌟 Features
- 🏗️ Architecture
- 🛠️ Development
- 🐳 Docker
- 🗣️ Text-to-Speech Integration
- 📚 Examples
- ❓ FAQ
- 📜 License
- 💖 Acknowledgments
- ⭐ Star History
DeerFlow is developed in Python, and comes with a web UI written in Node.js. To ensure a smooth setup process, we recommend using the following tools:
- uv: Simplifies Python environment and dependency management. uv automatically creates a virtual environment in the root directory and installs all required packages for you, with no need to manually install Python environments.
- nvm: Manages multiple versions of the Node.js runtime effortlessly.
- pnpm: Installs and manages dependencies of the Node.js project.
Make sure your system meets the following minimum requirements:
# Clone the repository
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
# Install dependencies, uv will take care of the python interpreter and venv creation, and install the required packages
uv sync
# Configure .env with your API keys
# Tavily: https://app.tavily.com/home
# Brave_SEARCH: https://brave.com/search/api/
# volcengine TTS: Add your TTS credentials if you have them
cp .env.example .env
# See the 'Supported Search Engines' and 'Text-to-Speech Integration' sections below for all available options
# Configure conf.yaml for your LLM model and API keys
# Please refer to 'docs/configuration_guide.md' for more details
cp conf.yaml.example conf.yaml
# Install marp for ppt generation
# https://github.com/marp-team/marp-cli?tab=readme-ov-file#use-package-manager
brew install marp-cli
Optionally, install web UI dependencies via pnpm:
cd deer-flow/web
pnpm install
Please refer to the Configuration Guide for more details.
[!NOTE] Before you start the project, read the guide carefully, and update the configurations to match your specific settings and requirements.
The quickest way to run the project is to use the console UI.
# Run the project in a bash-like shell
uv run main.py
This project also includes a Web UI, offering a more dynamic and engaging interactive experience.
[!NOTE] You need to install the dependencies of web UI first.
# Run both the backend and frontend servers in development mode
# On macOS/Linux
./bootstrap.sh -d
# On Windows
bootstrap.bat -d
Open your browser and visit http://localhost:3000 to explore the web UI.
Explore more details in the web directory.
DeerFlow supports multiple search engines that can be configured in your .env file using the SEARCH_API variable:
- Tavily (default): A specialized search API for AI applications
  - Requires TAVILY_API_KEY in your .env file
  - Sign up at: https://app.tavily.com/home
- DuckDuckGo: Privacy-focused search engine
  - No API key required
- Brave Search: Privacy-focused search engine with advanced features
  - Requires BRAVE_SEARCH_API_KEY in your .env file
  - Sign up at: https://brave.com/search/api/
- Arxiv: Scientific paper search for academic research
  - No API key required
  - Specialized for scientific and academic papers
To configure your preferred search engine, set the SEARCH_API variable in your .env file:
# Choose one: tavily, duckduckgo, brave_search, arxiv
SEARCH_API=tavily
DeerFlow supports private knowledge bases such as RAGFlow and VikingDB, so you can use your private documents to answer questions.
- RAGFlow: open-source RAG engine

# examples in .env.example
RAG_PROVIDER=ragflow
RAGFLOW_API_URL="http://localhost:9388"
RAGFLOW_API_KEY="ragflow-xxx"
RAGFLOW_RETRIEVAL_SIZE=10
RAGFLOW_CROSS_LANGUAGES=English,Chinese,Spanish,French,German,Japanese,Korean
- 🤖 LLM Integration
  - Supports the integration of most models through litellm.
  - Supports open source models like Qwen; read the configuration guide for more details.
  - OpenAI-compatible API interface
  - Multi-tier LLM system for different task complexities
- 🔍 Search and Retrieval
  - Web search via Tavily, Brave Search, and more
  - Crawling with Jina
  - Advanced content extraction
  - Support for private knowledge bases
- 📃 RAG Integration
  - Supports mentioning files from RAGFlow within the input box; start a RAGFlow server to use it.
- 🔗 MCP Seamless Integration
  - Expands capabilities for private domain access, knowledge graphs, web browsing, and more
  - Facilitates integration of diverse research tools and methodologies
- 🧠 Human-in-the-loop
  - Supports interactive modification of research plans using natural language
  - Supports auto-acceptance of research plans
- 📝 Report Post-Editing
  - Supports Notion-like block editing
  - Allows AI refinements, including AI-assisted polishing, sentence shortening, and expansion
  - Powered by tiptap
- 🎙️ Podcast and Presentation Generation
  - AI-powered podcast script generation and audio synthesis
  - Automated creation of simple PowerPoint presentations
  - Customizable templates for tailored content
DeerFlow implements a modular multi-agent system architecture designed for automated research and code analysis. The system is built on LangGraph, enabling a flexible state-based workflow where components communicate through a well-defined message passing system.
See it live at deerflow.tech
The system employs a streamlined workflow with the following components:
- Coordinator: The entry point that manages the workflow lifecycle
  - Initiates the research process based on user input
  - Delegates tasks to the planner when appropriate
  - Acts as the primary interface between the user and the system
- Planner: Strategic component for task decomposition and planning
  - Analyzes research objectives and creates structured execution plans
  - Determines if enough context is available or if more research is needed
  - Manages the research flow and decides when to generate the final report
- Research Team: A collection of specialized agents that execute the plan
  - Researcher: Conducts web searches and information gathering using tools like web search engines, crawling, and even MCP services.
  - Coder: Handles code analysis, execution, and technical tasks using the Python REPL tool.
  - Each agent has access to specific tools optimized for its role and operates within the LangGraph framework.
- Reporter: Final-stage processor for research outputs
  - Aggregates findings from the research team
  - Processes and structures the collected information
  - Generates comprehensive research reports
A minimal code sketch of how such a graph can be wired in LangGraph follows below.
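The sketch below is illustrative only, not DeerFlow's actual implementation: the node bodies are stand-in stubs, and the real project adds conditional edges, tool access, and a richer state. It shows the general LangGraph pattern the architecture describes, where nodes communicate by returning partial state updates.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ResearchState(TypedDict):
    query: str
    plan: str
    findings: str
    report: str

# Stand-in node implementations; the real agents call LLMs and tools.
def coordinator(state: ResearchState) -> dict:
    return {}  # accepts the query and hands off to the planner

def planner(state: ResearchState) -> dict:
    return {"plan": f"1. research '{state['query']}'; 2. summarize"}

def researcher(state: ResearchState) -> dict:
    return {"findings": "stub findings from web search and crawling"}

def reporter(state: ResearchState) -> dict:
    return {"report": f"Report on {state['query']}: {state['findings']}"}

builder = StateGraph(ResearchState)
builder.add_node("coordinator", coordinator)
builder.add_node("planner", planner)
builder.add_node("researcher", researcher)
builder.add_node("reporter", reporter)
builder.add_edge(START, "coordinator")
builder.add_edge("coordinator", "planner")
builder.add_edge("planner", "researcher")
builder.add_edge("researcher", "reporter")
builder.add_edge("reporter", END)
graph = builder.compile()

print(graph.invoke({"query": "quantum computing", "plan": "", "findings": "", "report": ""}))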
DeerFlow now includes a Text-to-Speech (TTS) feature that allows you to convert research reports to speech. This feature uses the volcengine TTS API to generate high-quality audio from text. Features like speed, volume, and pitch are also customizable.
You can access the TTS functionality through the /api/tts endpoint:
# Example API call using curl
curl --location 'http://localhost:8000/api/tts' \
--header 'Content-Type: application/json' \
--data '{
"text": "This is a test of the text-to-speech functionality.",
"speed_ratio": 1.0,
"volume_ratio": 1.0,
"pitch_ratio": 1.0
}' \
--output speech.mp3
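The same request can be made from Python. Below is a minimal sketch using the requests library, assuming the backend is running locally on port 8000 as in the curl example; the synthesize helper is a hypothetical convenience wrapper, not part of DeerFlow's API.

import requests

# Hypothetical wrapper around the /api/tts endpoint shown above.
def synthesize(text: str, speed: float = 1.0, volume: float = 1.0, pitch: float = 1.0) -> bytes:
    response = requests.post(
        "http://localhost:8000/api/tts",
        json={
            "text": text,
            "speed_ratio": speed,
            "volume_ratio": volume,
            "pitch_ratio": pitch,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.content  # MP3 audio bytes

if __name__ == "__main__":
    audio = synthesize("This is a test of the text-to-speech functionality.")
    with open("speech.mp3", "wb") as f:
        f.write(audio)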
Run the test suite:
# Run all tests
make test
# Run specific test file
pytest tests/integration/test_workflow.py
# Run with coverage
make coverage
# Run linting
make lint
# Format code
make format
DeerFlow uses LangGraph for its workflow architecture. You can use LangGraph Studio to debug and visualize the workflow in real-time.
DeerFlow includes a langgraph.json configuration file that defines the graph structure and dependencies for LangGraph Studio. This file points to the workflow graphs defined in the project and automatically loads environment variables from the .env file.
# On macOS
# Install uv package manager if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install dependencies and start the LangGraph server
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.12 langgraph dev --allow-blocking
# On Windows / Linux
# Install dependencies
pip install -e .
pip install -U "langgraph-cli[inmem]"
# Start the LangGraph server
langgraph dev
After starting the LangGraph server, you'll see several URLs in the terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
Open the Studio UI link in your browser to access the debugging interface.
In the Studio UI, you can:
- Visualize the workflow graph and see how components connect
- Trace execution in real-time to see how data flows through the system
- Inspect the state at each step of the workflow
- Debug issues by examining inputs and outputs of each component
- Provide feedback during the planning phase to refine research plans
When you submit a research topic in the Studio UI, you'll be able to see the entire workflow execution, including:
- The planning phase where the research plan is created
- The feedback loop where you can modify the plan
- The research and writing phases for each section
- The final report generation
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
- Make sure your .env file has the following configurations (see .env.example):

  LANGSMITH_TRACING=true
  LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
  LANGSMITH_API_KEY="xxx"
  LANGSMITH_PROJECT="xxx"

- Start tracing and visualize the graph locally with LangSmith by running:

  langgraph dev
This will enable trace visualization in LangGraph Studio and send your traces to LangSmith for monitoring and analysis.
- Postgres and MongoDB implementations of the LangGraph checkpoint saver.
- An in-memory store caches the streaming messages before they are persisted to the database; when finish_reason is "stop" or "interrupt", persistence is triggered.
- Supports saving and loading checkpoints for workflow execution.
- Supports saving chat stream events for replaying conversations.
Note: The latest langgraph-checkpoint-postgres 2.0.23 has a checkpointing issue; see the open issue "TypeError: Object of type HumanMessage is not JSON serializable" (https://github.com/langchain-ai/langgraph/issues/5557).
To use the Postgres checkpoint saver, install langgraph-checkpoint-postgres 2.0.21 instead (e.g., pip install "langgraph-checkpoint-postgres==2.0.21").
The default database and collections will be created automatically if they do not exist:
- Default database: checkpoing_db
- Default collection: checkpoint_writes_aio (LangGraph checkpoint writes)
- Default collection: checkpoints_aio (LangGraph checkpoints)
- Default collection: chat_streams (chat stream events for replaying conversations)
You need to set the following environment variables in your .env file:
# Enable LangGraph checkpoint saver, supports MongoDB, Postgres
LANGGRAPH_CHECKPOINT_SAVER=true
# Set the database URL for saving checkpoints
LANGGRAPH_CHECKPOINT_DB_URL="mongodb://localhost:27017/"
#LANGGRAPH_CHECKPOINT_DB_URL=postgresql://localhost:5432/postgres
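As a rough illustration of what the checkpoint saver enables, the sketch below compiles a small LangGraph graph with LangGraph's Postgres checkpointer. This is the generic LangGraph pattern, not DeerFlow's internal wiring; it assumes the langgraph-checkpoint-postgres package (pinned to 2.0.21 per the note above) and a reachable database matching the URL example.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.postgres import PostgresSaver

class State(TypedDict):
    query: str
    report: str

def reporter(state: State) -> dict:
    return {"report": f"Report on {state['query']}"}

builder = StateGraph(State)
builder.add_node("reporter", reporter)
builder.add_edge(START, "reporter")
builder.add_edge("reporter", END)

# Matches the Postgres variant of LANGGRAPH_CHECKPOINT_DB_URL above.
DB_URL = "postgresql://localhost:5432/postgres"

with PostgresSaver.from_conn_string(DB_URL) as checkpointer:
    checkpointer.setup()  # creates the checkpoint tables on first run
    graph = builder.compile(checkpointer=checkpointer)
    # Checkpoints are keyed by thread_id, so a thread can be resumed later.
    config = {"configurable": {"thread_id": "my_thread_id"}}
    graph.invoke({"query": "quantum computing", "report": ""}, config)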
You can also run this project with Docker.
First, read the configuration guide and make sure the .env and conf.yaml files are ready.
Second, build a Docker image of your own web server:
docker build -t deer-flow-api .
Finally, start a Docker container running the web server:
# Replace deer-flow-api-app with your preferred container name
# Start the server then bind to localhost:8000
docker run -d -t -p 127.0.0.1:8000:8000 --env-file .env --name deer-flow-api-app deer-flow-api
# stop the server
docker stop deer-flow-api-app
DeerFlow provides a docker-compose setup to easily run both the backend and frontend together:
# building docker image
docker compose build
# start the server
docker compose up
[!WARNING] If you want to deploy DeerFlow into production environments, please add authentication to the website and evaluate the security of the MCP server and the Python REPL.
The following examples demonstrate the capabilities of DeerFlow:
- OpenAI Sora Report - Analysis of OpenAI's Sora AI tool
  - Discusses features, access, prompt engineering, limitations, and ethical considerations
  - View full report
- Google's Agent to Agent Protocol Report - Overview of Google's Agent to Agent (A2A) protocol
  - Discusses its role in AI agent communication and its relationship with Anthropic's Model Context Protocol (MCP)
  - View full report
- What is MCP? - A comprehensive analysis of the term "MCP" across multiple contexts
  - Explores Model Context Protocol in AI, Monocalcium Phosphate in chemistry, and Micro-channel Plate in electronics
  - View full report
- Bitcoin Price Fluctuations - Analysis of recent Bitcoin price movements
  - Examines market trends, regulatory influences, and technical indicators
  - Provides recommendations based on historical data
  - View full report
- What is LLM? - An in-depth exploration of Large Language Models
  - Discusses architecture, training, applications, and ethical considerations
  - View full report
- How to Use Claude for Deep Research? - Best practices and workflows for using Claude in deep research
  - Covers prompt engineering, data analysis, and integration with other tools
  - View full report
- AI Adoption in Healthcare: Influencing Factors - Analysis of factors driving AI adoption in healthcare
  - Discusses AI technologies, data quality, ethical considerations, economic evaluations, organizational readiness, and digital infrastructure
  - View full report
- Quantum Computing Impact on Cryptography - Analysis of quantum computing's impact on cryptography
  - Discusses vulnerabilities of classical cryptography, post-quantum cryptography, and quantum-resistant cryptographic solutions
  - View full report
- Cristiano Ronaldo's Performance Highlights - Analysis of Cristiano Ronaldo's performance highlights
  - Discusses his career achievements, international goals, and performance in various matches
  - View full report
To run these examples or create your own research reports, you can use the following commands:
# Run with a specific query
uv run main.py "What factors are influencing AI adoption in healthcare?"
# Run with custom planning parameters
uv run main.py --max_plan_iterations 3 "How does quantum computing impact cryptography?"
# Run in interactive mode with built-in questions
uv run main.py --interactive
# Or run with basic interactive prompt
uv run main.py
# View all available options
uv run main.py --help
The application now supports an interactive mode with built-in questions in both English and Chinese:
- Launch the interactive mode:
  uv run main.py --interactive
- Select your preferred language (English or 中文)
- Choose from a list of built-in questions or select the option to ask your own question
- The system will process your question and generate a comprehensive research report
DeerFlow includes a human in the loop mechanism that allows you to review, edit, and approve research plans before they are executed:
- Plan Review: When human-in-the-loop is enabled, the system will present the generated research plan for your review before execution
- Providing Feedback: You can:
  - Accept the plan by responding with [ACCEPTED]
  - Edit the plan by providing feedback (e.g., [EDIT PLAN] Add more steps about technical implementation); the system will incorporate your feedback and generate a revised plan
- Auto-acceptance: You can enable auto-acceptance to skip the review process:
  - Via API: Set auto_accepted_plan: true in your request
- API Integration: When using the API, you can provide feedback through the feedback parameter (a Python sketch of such a request follows below):

  {
    "messages": [{ "role": "user", "content": "What is quantum computing?" }],
    "thread_id": "my_thread_id",
    "auto_accepted_plan": false,
    "feedback": "[EDIT PLAN] Include more about quantum algorithms"
  }
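For reference, here is a minimal Python sketch that sends this payload from a script. It assumes a locally running backend on port 8000; the endpoint path used here (/api/chat/stream) is an assumption to verify against the backend's API docs.

import requests

# Endpoint path is an assumption; check the backend's API docs.
CHAT_URL = "http://localhost:8000/api/chat/stream"

payload = {
    "messages": [{"role": "user", "content": "What is quantum computing?"}],
    "thread_id": "my_thread_id",
    "auto_accepted_plan": False,
    "feedback": "[EDIT PLAN] Include more about quantum algorithms",
}

# The backend streams events, so read the response incrementally.
with requests.post(CHAT_URL, json=payload, stream=True, timeout=300) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if line:
            print(line)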
The application supports several command-line arguments to customize its behavior:
- query: The research query to process (can be multiple words)
- --interactive: Run in interactive mode with built-in questions
- --max_plan_iterations: Maximum number of planning cycles (default: 1)
- --max_step_num: Maximum number of steps in a research plan (default: 3)
- --debug: Enable detailed debug logging
Please refer to the FAQ.md for more details.
This project is open source and available under the MIT License.
DeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts have made DeerFlow possible. Truly, we stand on the shoulders of giants.
We would like to extend our sincere appreciation to the following projects for their invaluable contributions:
- LangChain: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.
- LangGraph: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow's sophisticated workflows.
- Novel: Their Notion-style WYSIWYG editor supports our report editing and AI-assisted rewriting.
- RAGFlow: We have achieved support for research on users' private knowledge bases through integration with RAGFlow.
These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.
A heartfelt thank you goes out to the core authors of DeerFlow, whose vision, passion, and dedication have brought this project to life:
Your unwavering commitment and expertise have been the driving force behind DeerFlow's success. We are honored to have you at the helm of this journey.
Alternative AI tools for deer-flow
Similar Open Source Tools

deer-flow
DeerFlow is a community-driven Deep Research framework that combines language models with specialized tools for tasks like web search, crawling, and Python code execution. It supports FaaS deployment and one-click deployment based on Volcengine. The framework includes core capabilities like LLM integration, search and retrieval, RAG integration, MCP seamless integration, human collaboration, report post-editing, and content creation. The architecture is based on a modular multi-agent system with components like Coordinator, Planner, Research Team, and Text-to-Speech integration. DeerFlow also supports interactive mode, human-in-the-loop mechanism, and command-line arguments for customization.

langmanus
LangManus is a community-driven AI automation framework that combines language models with specialized tools for tasks like web search, crawling, and Python code execution. It implements a hierarchical multi-agent system with agents like Coordinator, Planner, Supervisor, Researcher, Coder, Browser, and Reporter. The framework supports LLM integration, search and retrieval tools, Python integration, workflow management, and visualization. LangManus aims to give back to the open-source community and welcomes contributions in various forms.

cosdata
Cosdata is a cutting-edge AI data platform designed to power the next generation search pipelines. It features immutability, version control, and excels in semantic search, structured knowledge graphs, hybrid search capabilities, real-time search at scale, and ML pipeline integration. The platform is customizable, scalable, efficient, enterprise-grade, easy to use, and can manage multi-modal data. It offers high performance, indexing, low latency, and high requests per second. Cosdata is designed to meet the demands of modern search applications, empowering businesses to harness the full potential of their data.

trip_planner_agent
VacAIgent is an AI tool that automates and enhances trip planning by leveraging the CrewAI framework. It integrates a user-friendly Streamlit interface for interactive travel planning. Users can input preferences and receive tailored travel plans with the help of autonomous AI agents. The tool allows for collaborative decision-making on cities and crafting complete itineraries based on specified preferences, all accessible via a streamlined Streamlit user interface. VacAIgent can be customized to use different AI models like GPT-3.5 or local models like Ollama for enhanced privacy and customization.

next-ai-draw-io
Next AI Draw.io is a next.js web application that integrates AI capabilities with draw.io diagrams. It allows users to create, modify, and enhance diagrams through natural language commands and AI-assisted visualization. Features include LLM-Powered Diagram Creation, Image-Based Diagram Replication, Diagram History, Interactive Chat Interface, and Smart Editing. The application uses Next.js for frontend framework, @ai-sdk/react for chat interface and AI interactions, and react-drawio for diagram representation and manipulation. Diagrams are represented as XML that can be rendered in draw.io, with AI processing commands to generate or modify the XML accordingly.

Auto-Analyst
Auto-Analyst is an AI-driven data analytics agentic system designed to simplify and enhance the data science process. By integrating various specialized AI agents, this tool aims to make complex data analysis tasks more accessible and efficient for data analysts and scientists. Auto-Analyst provides a streamlined approach to data preprocessing, statistical analysis, machine learning, and visualization, all within an interactive Streamlit interface. It offers plug and play Streamlit UI, agents with data science speciality, complete automation, LLM agnostic operation, and is built using lightweight frameworks.

sd-webui-agent-scheduler
AgentScheduler is an Automatic/Vladmandic Stable Diffusion Web UI extension designed to enhance image generation workflows. It allows users to enqueue prompts, settings, and controlnets, manage queued tasks, prioritize, pause, resume, and delete tasks, view generation results, and more. The extension offers hidden features like queuing checkpoints, editing queued tasks, and custom checkpoint selection. Users can access the functionality through HTTP APIs and API callbacks. Troubleshooting steps are provided for common errors. The extension is compatible with latest versions of A1111 and Vladmandic. It is licensed under Apache License 2.0.

llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM models, execute structured function calls, and get structured output (objects). It provides a simple yet robust interface and supports llama-cpp-python and OpenAI endpoints with GBNF grammar support (like the llama-cpp-python server) and the llama.cpp backend server. It works by generating a formal GGML-BNF grammar of the user-defined structures and functions, which is then used by llama.cpp to generate text valid to that grammar. In contrast to most GBNF grammar generators, it also supports nested objects, dictionaries, enums, and lists of them.

DevDocs
DevDocs is a platform designed to simplify the process of digesting technical documentation for software engineers and developers. It automates the extraction and conversion of web content into markdown format, making it easier for users to access and understand the information. By crawling through child pages of a given URL, DevDocs provides a streamlined approach to gathering relevant data and integrating it into various tools for software development. The tool aims to save time and effort by eliminating the need for manual research and content extraction, ultimately enhancing productivity and efficiency in the development process.

GraphRAG-Local-UI
GraphRAG Local with Interactive UI is an adaptation of Microsoft's GraphRAG, tailored to support local models and featuring a comprehensive interactive user interface. It allows users to leverage local models for LLM and embeddings, visualize knowledge graphs in 2D or 3D, manage files, settings, and queries, and explore indexing outputs. The tool aims to be cost-effective by eliminating dependency on costly cloud-based models and offers flexible querying options for global, local, and direct chat queries.

LocalAIVoiceChat
LocalAIVoiceChat is an experimental alpha software that enables real-time voice chat with a customizable AI personality and voice on your PC. It integrates Zephyr 7B language model with speech-to-text and text-to-speech libraries. The tool is designed for users interested in state-of-the-art voice solutions and provides an early version of a local real-time chatbot.

mldl.study
MLDL.Study is a free interactive learning platform focused on simplifying Machine Learning (ML) and Deep Learning (DL) education for students and enthusiasts. It features curated roadmaps, videos, articles, and other learning materials. The platform aims to provide a comprehensive learning experience for Indian audiences, with easy-to-follow paths for ML and DL concepts, diverse resources including video tutorials and articles, and a growing community of over 6000 users. Contributors can add new resources following specific guidelines to maintain quality and relevance. Future plans include expanding content for global learners, introducing a Python programming roadmap, and creating roadmaps for fields like Generative AI and Reinforcement Learning.

pear-landing-page
PearAI Landing Page is an open-source AI-powered code editor managed by Nang and Pan. It is built with Next.js, Vercel, Tailwind CSS, and TypeScript. The project requires setting up environment variables for proper configuration. Users can run the project locally by starting the development server and visiting the specified URL in the browser. Recommended extensions include Prettier, ESLint, and JavaScript and TypeScript Nightly. Contributions to the project are welcomed and appreciated.

quick-start-guide-to-llms
This GitHub repository serves as the companion to the 'Quick Start Guide to Large Language Models - Second Edition' book. It contains code snippets and notebooks demonstrating various applications and advanced techniques in working with Transformer models and large language models (LLMs). The repository is structured into directories for notebooks, data, and images, with each notebook corresponding to a chapter in the book. Users can explore topics such as semantic search, prompt engineering, model fine-tuning, custom embeddings, advanced LLM usage, moving LLMs into production, and evaluating LLMs. The repository aims to provide practical examples and insights for working with LLMs in different contexts.

resume-job-matcher
Resume Job Matcher is a Python script that automates the process of matching resumes to a job description using AI. It leverages the Anthropic Claude API or OpenAI's GPT API to analyze resumes and provide a match score along with personalized email responses for candidates. The tool offers comprehensive resume processing, advanced AI-powered analysis, in-depth evaluation & scoring, comprehensive analytics & reporting, enhanced candidate profiling, and robust system management. Users can customize font presets, generate PDF versions of unified resumes, adjust logging level, change scoring model, modify AI provider, and adjust AI model. The final score for each resume is calculated based on AI-generated match score and resume quality score, ensuring content relevance and presentation quality are considered. Troubleshooting tips, best practices, contribution guidelines, and required Python packages are provided.

PSAI
PSAI is a PowerShell module that empowers scripts with the intelligence of OpenAI, bridging the gap between PowerShell and AI. It enables seamless integration for tasks like file searches and data analysis, revolutionizing automation possibilities with just a few lines of code. The module supports the latest OpenAI API changes, offering features like improved file search, vector store objects, token usage control, message limits, tool choice parameter, custom conversation histories, and model configuration parameters.
For similar tasks

stockbot-on-groq
StockBot Powered by Groq is an AI-powered chatbot that provides lightning-fast responses with live interactive stock charts, financial data, news, screeners, and more. Leveraging Groq's speed and Vercel's AI SDK, StockBot offers real-time conversation with natural language processing, interactive TradingView charts, adaptive interfaces, and multi-asset market coverage. It is designed for entertainment and instructional use, not for investment advice.

FinVeda
FinVeda is a dynamic financial literacy app that aims to solve the problem of low financial literacy rates in India by providing a platform for financial education. It features an AI chatbot, finance blogs, market trends analysis, SIP calculator, and finance quiz to help users learn finance with finesse. The app is free and open-source, licensed under the GNU General Public License v3.0. FinVeda was developed at IIT Jammu's Udyamitsav'24 Hackathon, where it won first place in the GenAI track and third place overall.

solana-trading-bot
Solana AI Trade Bot is an advanced trading tool specifically designed for meme token trading on the Solana blockchain. It leverages AI technology powered by GPT-4.0 to automate trades, identify low-risk/high-potential tokens, and assist in token creation and management. The bot offers cross-platform compatibility and a range of configurable settings for buying, selling, and filtering tokens. Users can benefit from real-time AI support and enhance their trading experience with features like automatic selling, slippage management, and profit/loss calculations. To optimize performance, it is recommended to connect the bot to a private light node for efficient trading execution.

deer-flow
DeerFlow is a community-driven Deep Research framework that combines language models with specialized tools for tasks like web search, crawling, and Python code execution. It supports FaaS deployment and one-click deployment based on Volcengine. The framework includes core capabilities like LLM integration, search and retrieval, RAG integration, MCP seamless integration, human collaboration, report post-editing, and content creation. The architecture is based on a modular multi-agent system with components like Coordinator, Planner, Research Team, and Text-to-Speech integration. DeerFlow also supports interactive mode, human-in-the-loop mechanism, and command-line arguments for customization.

gpt-researcher
GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks. It can produce detailed, factual, and unbiased research reports with customization options. The tool addresses issues of speed, determinism, and reliability by leveraging parallelized agent work. The main idea involves running 'planner' and 'execution' agents to generate research questions, seek related information, and create research reports. GPT Researcher optimizes costs and completes tasks in around 3 minutes. Features include generating long research reports, aggregating web sources, an easy-to-use web interface, scraping web sources, and exporting reports to various formats.

awesome-ai-web-search
The 'awesome-ai-web-search' repository is a curated list of AI-powered web search software that focuses on the intersection of Large Language Models (LLMs) and web search capabilities. It contains a timeline of various software supporting web search with LLM summarization, chat capabilities, and agent-driven research. The repository showcases both open-source and closed-source tools, providing a comprehensive overview of AI web search solutions available in the market.

leettools
LeetTools is an AI search assistant that can perform highly customizable search workflows and generate customized format results based on both web and local knowledge bases. It provides an automated document pipeline for data ingestion, indexing, and storage, allowing users to focus on implementing workflows without worrying about infrastructure. LeetTools can run with minimal resource requirements on the command line with configurable LLM settings and supports different databases for various functions. Users can configure different functions in the same workflow to use different LLM providers and models.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.