
obsidian-systemsculpt-ai
Enhance your Obsidian App experience with AI-powered tools for note-taking, task management, and much, MUCH more.
Stars: 158

SystemSculpt AI is a comprehensive AI-powered plugin for Obsidian, integrating advanced AI capabilities into note-taking, task management, knowledge organization, and content creation. It offers modules for brain integration, chat conversations, audio recording and transcription, note templates, and task generation and management. Users can customize settings, utilize AI services like OpenAI and Groq, and access documentation for detailed guidance. The plugin prioritizes data privacy by storing sensitive information locally and offering the option to use local AI models for enhanced privacy.
README:
Turn your vault into an AI‑powered thinking partner. SystemSculpt brings fast, reliable chat, agent tools for your vault, semantic “Similar Notes,” rich context handling, and a refined Obsidian‑native experience on desktop and mobile.
Get Started • Documentation • Video Tutorials
Chat, your way
- Use OpenAI‑compatible providers (OpenAI, OpenRouter, Groq, local servers), Anthropic via adapter, or local models (LM Studio, Ollama)
- Streaming, reasoning blocks, mobile‑friendly UI
- Per‑chat model selection; saved chats to Markdown; chat history and resume
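If you are curious what "OpenAI-compatible" means in practice, here is a minimal sketch of a streaming chat request against such an endpoint, assuming a Node 18+ runtime. The base URL, model name, and API key are placeholders (not SystemSculpt's internal code), and the SSE parsing is deliberately naive.

```typescript
// Minimal sketch: streaming chat against any OpenAI-compatible endpoint.
// BASE_URL, MODEL, and API_KEY are placeholders, not the plugin's internals.
const BASE_URL = "https://api.openai.com/v1"; // or e.g. http://localhost:11434/v1 (Ollama)
const MODEL = "gpt-4o-mini";
const API_KEY = process.env.OPENAI_API_KEY ?? "";

async function streamChat(prompt: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_KEY}` },
    body: JSON.stringify({ model: MODEL, stream: true, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  // The body is a server-sent-event stream of "data: {...}" lines.
  // Naive parsing: assumes each event arrives in a single chunk.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      const data = line.replace(/^data: /, "").trim();
      if (!data || data === "[DONE]") continue;
      const delta = JSON.parse(data).choices?.[0]?.delta?.content;
      if (delta) process.stdout.write(delta);
    }
  }
}

streamChat("Summarize my latest meeting note in three bullets.").catch(console.error);
```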
Context-rich conversations
- Drag & drop notes; @‑mention files; paste large text smartly
- Paste or attach images; use any vision‑capable model your provider supports
- Clean rendering for code, tables, citations, and attachments
Agent Mode (MCP) with explicit approvals
- Built‑in vault tools exposed to the model with a one‑click safety approval flow
- Filesystem tools include: read, write, edit, create_folders, list_items, move, trash
- Search and context tools: find, search (grep), open (tabs/panes), context (manage chat context)
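For a sense of the general shape these tools take, here is a hypothetical, MCP-style definition of the read tool: a name, a description the model sees, and a JSON Schema for its arguments. The actual schema SystemSculpt registers is internal; this is only an illustration.

```typescript
// Hypothetical, MCP-style tool definition. Field contents are illustrative,
// not SystemSculpt's actual schema.
interface VaultTool {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>; // JSON Schema describing the arguments
}

const readTool: VaultTool = {
  name: "read",
  description: "Read the contents of a note or file inside the current vault.",
  inputSchema: {
    type: "object",
    properties: {
      path: { type: "string", description: "Vault-relative path, e.g. Projects/ideas.md" },
    },
    required: ["path"],
  },
};

// The model then proposes calls such as:
//   { tool: "read", arguments: { path: "Daily/2024-05-01.md" } }
// and each proposal is shown to you for approval before it runs.
```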
Semantic “Similar Notes”
- Embeddings‑powered vector search across your vault
- “Similar Notes” panel that updates for the active file or chat
- Exclusions (folders/files), progress UI, and an embeddings status bar
- Bring your own embeddings endpoint/model (OpenAI‑compatible), or pick a provider in settings
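Conceptually, "Similar Notes" boils down to comparing embedding vectors. Below is an illustrative sketch only (endpoint, model, and key are placeholders, and the real plugin additionally handles chunking, caching, and exclusions): embed the active note's text via an OpenAI-compatible /embeddings call, then rank previously embedded notes by cosine similarity.

```typescript
// Illustrative only: embeddings request plus cosine-similarity ranking.
// EMBED_URL, EMBED_MODEL, and the API key are placeholders.
const EMBED_URL = "https://api.openai.com/v1/embeddings";
const EMBED_MODEL = "text-embedding-3-small";

async function embed(text: string, apiKey: string): Promise<number[]> {
  const res = await fetch(EMBED_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model: EMBED_MODEL, input: text }),
  });
  const json = await res.json();
  return json.data[0].embedding as number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank previously embedded notes (path -> vector) against the active note's vector.
function rankSimilar(active: number[], index: Map<string, number[]>, topK = 5) {
  return Array.from(index.entries())
    .map(([path, vec]) => ({ path, score: cosine(active, vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```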
Models, prompts, templates, titles
- Unified model selection across providers; favorites and quick picks
- System prompt presets or custom prompts from your vault
- Template inserter for fast drafting
- One‑shot or automatic title generation for chats and notes
Web search integration
- Optional web search button in the chat toolbar when supported by the current provider
- Designed for OpenRouter and native provider endpoints that offer search plugins
Thoughtful details
- Polished Obsidian UI, optimized scrolling and rendering for long chats
- Touch‑friendly controls and responsive layout on mobile
- Clear errors with structured notices; handy debugging commands
Get started
- Open Settings → SystemSculpt AI → Models & Prompts
- Choose a provider (OpenAI, OpenRouter, Anthropic, LM Studio, Ollama, or any OpenAI‑compatible endpoint)
- Enter your endpoint and API key if required
- Start a chat
- Command palette → “Open SystemSculpt Chat”, or click the ribbon icon
- Pick a model in the header; type and send
- Add context
- Drag notes in, @‑mention files, or click the paperclip to attach
- Use the “Chat with File” command from any note to open chat preloaded with that file
- Try Agent Mode (optional)
- Click the vault icon in the chat toolbar to toggle Agent Mode
- Approve or deny tool calls; everything is explicit and reversible
- Enable Similar Notes (optional)
- Settings → Embeddings & Search → Enable, then pick a provider
- If using a custom endpoint, set API endpoint + key + model (for example: text-embedding-004)
- Click “Start Now” to process your vault; open the “Similar Notes” panel from the command palette
- Power‑ups
- Templates: Command palette → “Open Template Selection”
- Titles: “Change/Generate Title” from a chat or any Markdown file
- Web search: Globe button in chat toolbar (when supported by the provider)
- Toolbar: Agent Mode toggle, per‑chat settings, attach/context, web search, microphone, send
- Context manager: add/remove files and include your vault’s structure when helpful
- Rendering: unified assistant message layout, code highlighting, citations, images
- History: save chats to Markdown, open chat history, resume from a history file
- Shortcuts: configurable hotkeys; streamlined keyboard navigation
- Open “Similar Notes Panel” from the command palette or ribbon
- Results update as you switch files or as the chat evolves
- Drag similar results into chat for instant context
- Exclude chat history or specific folders/files; respect Obsidian’s own exclusions
- Status UI shows progress, counts, and completion while building embeddings
Settings → Embeddings & Search lets you:
- Enable/disable embeddings
- Choose provider: SystemSculpt or Custom (OpenAI‑compatible)
- Configure endpoint, API key, and model when using a custom provider
- Scan for local services (Ollama, LM Studio) and apply in one click
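As a rough idea of what "scan for local services" can amount to, the sketch below probes the ports Ollama and LM Studio listen on by default (11434 and 1234 respectively). Treat the paths and ports as common defaults rather than guarantees, and adjust them if you have changed your setup.

```typescript
// Rough sketch: check whether Ollama or LM Studio is serving locally.
// 11434 and 1234 are the usual default ports; /v1/models is the
// OpenAI-compatible model listing both expose in their default setups.
const candidates = [
  { name: "Ollama", url: "http://localhost:11434/v1/models" },
  { name: "LM Studio", url: "http://localhost:1234/v1/models" },
];

async function scanLocalServices(): Promise<string[]> {
  const found: string[] = [];
  for (const { name, url } of candidates) {
    try {
      const res = await fetch(url);
      if (res.ok) found.push(name);
    } catch {
      // Nothing listening on this port; skip it.
    }
  }
  return found;
}

scanLocalServices().then((names) => console.log("Local AI services found:", names));
```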
Agent Mode tools
When Agent Mode is on, the model can request tools that work inside your vault. You explicitly approve each call before it runs.
- Files: read, write, edit, create_folders, move, trash
- Listing and navigation: list_items, open
- Search: find (by name), search (grep)
- Context & analysis: context (manage included files)
All tools are scoped to your vault with built‑in content limits to keep the UI responsive.
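The approval flow is easiest to picture as a gate in front of every tool call. The sketch below is a simplified, hypothetical version (names and types are illustrative, not the plugin's API): each call the model proposes is shown to the user, and only approved calls are executed against the vault.

```typescript
// Simplified, hypothetical approval gate for agent tool calls.
// Names and signatures are illustrative, not SystemSculpt's actual API.
interface ToolCall {
  tool: string;                        // e.g. "read", "write", "search"
  arguments: Record<string, unknown>;  // e.g. { path: "Notes/idea.md" }
}

type Approver = (call: ToolCall) => Promise<boolean>; // e.g. a confirm dialog
type Executor = (call: ToolCall) => Promise<string>;  // runs the tool in the vault

async function runWithApproval(
  calls: ToolCall[],
  approve: Approver,
  execute: Executor,
): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    if (!(await approve(call))) {       // the user explicitly approves or denies
      results.push(`Denied: ${call.tool}`);
      continue;
    }
    results.push(await execute(call));  // only approved calls touch the vault
  }
  return results;
}
```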
Settings overview
- Overview & Setup: connect providers and API keys; activate license if you have one
- Models & Prompts: pick chat/title/post‑processing models; choose prompts; manage favorites
- Chat & Templates: chat defaults, agent mode defaults, template hotkeys
- Embeddings & Search: enable embeddings, provider and model selection, exclusions, processing controls
- Audio & Transcription: microphone selection, transcription options, post‑processing
- Files & Backup: directories for attachments, recordings, chats, extractions; automatic backups and restore
- Advanced: additional controls for power users
Commands
- Open SystemSculpt Chat
- Open SystemSculpt Chat History
- Chat with File (from the current note)
- Change Chat Model (current chat) / Set Default Chat Model
- Change/Generate Title
- Open Template Selection
- Open Similar Notes Panel
- Open SystemSculpt Search
- Open SystemSculpt AI Settings
Ribbon icons include Chat, Chat History, Janitor, Similar Notes, and Search.
- Designed for mobile and desktop with responsive UI and touch-friendly controls
- Local-first: your vault stays on your device
- Your API keys talk directly to your chosen providers
- Works offline when using local models
Under the hood
- A shared PlatformContext singleton now powers every mobile/desktop branch.
- Desktop defaults to native fetch + streaming; mobile and constrained endpoints (e.g., OpenRouter) automatically pivot to Obsidian's requestUrl with virtual SSE replay.
- UI components emit platform-ui-<variant> classes so styling and behavioral toggles stay in sync across chat, recorder, and transcription flows.
- Clear, actionable errors and optional debug tools
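A simplified illustration of that desktop/mobile branch, using Obsidian's Platform and requestUrl APIs; this is a sketch of the idea, not the plugin's actual PlatformContext code. Because requestUrl returns the whole response at once rather than a stream, the mobile path has to replay the response after the fact, which is the "virtual SSE replay" mentioned above.

```typescript
import { Platform, requestUrl } from "obsidian";

// Simplified sketch of the transport branch; not the plugin's actual code.
async function postJson(url: string, body: unknown, headers: Record<string, string>): Promise<string> {
  if (Platform.isMobileApp) {
    // Mobile (and some constrained endpoints): Obsidian's requestUrl avoids CORS issues,
    // but it buffers the whole response, so streaming must be replayed afterwards.
    const res = await requestUrl({
      url,
      method: "POST",
      headers: { "Content-Type": "application/json", ...headers },
      body: JSON.stringify(body),
    });
    return res.text;
  }
  // Desktop: native fetch can stream the response body incrementally.
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", ...headers },
    body: JSON.stringify(body),
  });
  return await res.text();
}
```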
Installation
- Open Obsidian Settings → Community Plugins
- Browse and search for “SystemSculpt AI”
- Click Install, then Enable
Or install manually from source:
cd /path/to/vault/.obsidian/plugins/
git clone https://github.com/SystemSculpt/obsidian-systemsculpt-plugin systemsculpt-ai
cd systemsculpt-ai
npm install
npm run build
Example workflows
📚 Research
Ask: “Summarize my notes on retrieval‑augmented generation and link the most similar notes.”
Use: drag notes + Similar Notes panel + agent tools for search and citations.
✍️ Writing
Ask: “Draft an outline for a blog post based on my productivity notes. Include citations.”
Use: attach context files + template inserter + title generator.
🖼️ Vision
Paste a diagram screenshot and ask questions using a vision‑capable model from your provider.
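For providers that follow the OpenAI chat format, a pasted image travels as part of the message content. A minimal sketch of the payload (the model name is a placeholder; pasted images are typically encoded as base64 data URLs):

```typescript
// Minimal sketch of a vision-style message in the OpenAI chat format.
// "gpt-4o-mini" is a placeholder; use any vision-capable model your provider offers.
const visionRequest = {
  model: "gpt-4o-mini",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What does this diagram show?" },
        {
          type: "image_url",
          image_url: { url: "data:image/png;base64,<base64-encoded screenshot>" },
        },
      ],
    },
  ],
};
// POST this to `${BASE_URL}/chat/completions` exactly like a text-only request.
```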
If you choose to add a license, you get:
- Document intelligence: PDF/Office → clean Markdown, with table and structure preservation
- Voice & audio intelligence: in‑app recording and robust transcription pipeline
- Unified SystemSculpt provider catalog for chat and embeddings
Learn more at https://systemsculpt.com/pricing.
MIT License – see LICENSE.
- Docs: https://systemsculpt.com
- Videos: https://youtube.com/@SystemSculpt
- Discord: https://discord.gg/3gNUZJWxnJ
- Email: [email protected]
Built with ❤️ by Mike for the Obsidian community.
Alternative AI tools for obsidian-systemsculpt-ai
Similar Open Source Tools


simplechat
The Simple Chat Application is a web-based platform that facilitates secure interactions with generative AI models, leveraging Azure OpenAI. It features Retrieval-Augmented Generation (RAG) for grounding conversations in user data. Users can upload personal or group documents processed using Azure AI Document Intelligence and Azure OpenAI Embeddings. The application offers optional features like Content Safety, Image Generation, Video and Audio processing, Document Classification, User Feedback, Conversation Archiving, Metadata Extraction, and Enhanced Citations. It uses Azure Cosmos DB for storage, Azure Active Directory for authentication, and runs on Azure App Service. Suitable for enterprise use, it supports knowledge discovery, content generation, and collaborative AI tasks in a secure, Azure-native framework.

Alice
Alice is an open-source AI companion designed to live on your desktop, providing voice interaction, intelligent context awareness, and powerful tooling. More than a chatbot, Alice is emotionally engaging and deeply useful, assisting with daily tasks and creative work. Key features include voice interaction with natural-sounding responses, memory and context management, vision and visual output capabilities, computer-use tools, function calling for web search and task scheduling, wake word support, a dedicated Chrome extension, and a flexible settings interface. Built with Vue.js, Electron, OpenAI, Go, hnswlib-node, and more, Alice is highly customizable.

swift-chat
SwiftChat is a fast and responsive AI chat application developed with React Native and powered by Amazon Bedrock. It offers real-time streaming conversations, AI image generation, multimodal support, conversation history management, and cross-platform compatibility across Android, iOS, and macOS. The app supports multiple AI models like Amazon Bedrock, Ollama, DeepSeek, and OpenAI, and features a customizable system prompt assistant. With a minimalist design philosophy and robust privacy protection, SwiftChat delivers a seamless chat experience with various features like rich Markdown support, comprehensive multimodal analysis, creative image suite, and quick access tools. The app prioritizes speed in launch, request, render, and storage, ensuring a fast and efficient user experience. SwiftChat also emphasizes app privacy and security by encrypting API key storage, minimal permission requirements, local-only data storage, and a privacy-first approach.

Mira
Mira is an agentic AI library designed for automating company research by gathering information from various sources like company websites, LinkedIn profiles, and Google Search. It utilizes a multi-agent architecture to collect and merge data points into a structured profile with confidence scores and clear source attribution. The core library is framework-agnostic and can be integrated into applications, pipelines, or custom workflows. Mira offers features such as real-time progress events, confidence scoring, company criteria matching, and built-in services for data gathering. The tool is suitable for users looking to streamline company research processes and enhance data collection efficiency.

Gemini-Discord-Bot
A Discord bot leveraging Google Gemini for advanced conversation, content understanding, image/video/audio recognition, and more. Features conversational AI, image/video/audio and file recognition, custom personalities, admin controls, downloadable conversation history, multiple AI tools, status monitoring, and slash command UI. Users can invite the bot to their Discord server, configure preferences, upload files for analysis, and use slash commands for various actions. Customizable through `config.js` for default personalities, activities, colors, and feature toggles. Admin commands restricted to server admins for security. Local storage for chat history and settings, with a reminder not to commit secrets in `.env` file. Licensed under MIT.

ApeRAG
ApeRAG is a production-ready platform for Retrieval-Augmented Generation (RAG) that combines Graph RAG, vector search, and full-text search with advanced AI agents. It is ideal for building Knowledge Graphs, Context Engineering, and deploying intelligent AI agents for autonomous search and reasoning across knowledge bases. The platform offers features like advanced index types, intelligent AI agents with MCP support, enhanced Graph RAG with entity normalization, multimodal processing, hybrid retrieval engine, MinerU integration for document parsing, production-grade deployment with Kubernetes, enterprise management features, MCP integration, and developer-friendly tools for customization and contribution.

ai-flow
AI Flow is an open-source, user-friendly UI application that empowers you to seamlessly connect multiple AI models together, leveraging the capabilities of multiple AI APIs such as OpenAI, StabilityAI and Replicate. In a nutshell, AI Flow provides a visual platform for crafting and managing AI-driven workflows, thereby facilitating diverse and dynamic AI interactions.

chunkhound
ChunkHound is a modern tool for transforming your codebase into a searchable knowledge base for AI assistants. It utilizes semantic search via the cAST algorithm and regex search, integrating with AI assistants through the Model Context Protocol (MCP). With features like cAST Algorithm, Multi-Hop Semantic Search, Regex search, and support for 22 languages, ChunkHound offers a local-first approach to code analysis and discovery. It provides intelligent code discovery, universal language support, and real-time indexing capabilities, making it a powerful tool for developers looking to enhance their coding experience.

chunkhound
ChunkHound is a tool that transforms your codebase into a searchable knowledge base for AI assistants using semantic and regex search. It integrates with AI assistants via the Model Context Protocol (MCP) and offers features such as cAST algorithm for semantic code chunking, multi-hop semantic search, natural language queries, regex search without API keys, support for 22 languages, and local-first architecture. It provides intelligent code discovery by following semantic relationships and discovering related implementations. ChunkHound is built on the cAST algorithm from Carnegie Mellon University, ensuring structure-aware chunking that preserves code meaning. It supports universal language parsing and offers efficient updates for large codebases.

better-chatbot
Better Chatbot is an open-source AI chatbot designed for individuals and teams, inspired by various AI models. It integrates major LLMs, offers powerful tools like MCP protocol and data visualization, supports automation with custom agents and visual workflows, enables collaboration by sharing configurations, provides a voice assistant feature, and ensures an intuitive user experience. The platform is built with Vercel AI SDK and Next.js, combining leading AI services into one platform for enhanced chatbot capabilities.

AIWritingCompanion
AIWritingCompanion is a lightweight and versatile browser extension designed to translate text within input fields. It offers universal compatibility, multiple activation methods, and support for various translation providers like Gemini, OpenAI, and WebAI to API. Users can install it via CRX file or Git, set API key, and use it for automatic translation or via shortcut. The tool is suitable for writers, translators, students, researchers, and bloggers. AI keywords include writing assistant, translation tool, browser extension, language translation, and text translator. Users can use it for tasks like translate text, assist in writing, simplify content, check language accuracy, and enhance communication.

crawl4ai
Crawl4AI is a powerful and free web crawling service that extracts valuable data from websites and provides LLM-friendly output formats. It supports crawling multiple URLs simultaneously, replaces media tags with ALT, and is completely free to use and open-source. Users can integrate Crawl4AI into Python projects as a library or run it as a standalone local server. The tool allows users to crawl and extract data from specified URLs using different providers and models, with options to include raw HTML content, force fresh crawls, and extract meaningful text blocks. Configuration settings can be adjusted in the `crawler/config.py` file to customize providers, API keys, chunk processing, and word thresholds. Contributions to Crawl4AI are welcome from the open-source community to enhance its value for AI enthusiasts and developers.

mcp-pointer
MCP Pointer is a local tool that combines an MCP Server with a Chrome Extension to allow users to visually select DOM elements in the browser and make textual context available to agentic coding tools like Claude Code. It bridges between the browser and AI tools via the Model Context Protocol, enabling real-time communication and compatibility with various AI tools. The tool extracts detailed information about selected elements, including text content, CSS properties, React component detection, and more, making it a valuable asset for developers working with AI-powered web development.

DesktopCommanderMCP
Desktop Commander MCP is a server that allows the Claude desktop app to execute long-running terminal commands on your computer and manage processes through Model Context Protocol (MCP). It is built on top of MCP Filesystem Server to provide additional search and replace file editing capabilities. The tool enables users to execute terminal commands with output streaming, manage processes, perform full filesystem operations, and edit code with surgical text replacements or full file rewrites. It also supports vscode-ripgrep based recursive code or text search in folders.

skypilot
SkyPilot is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution. SkyPilot abstracts away cloud infra burdens: launch jobs and clusters on any cloud, queue and run many jobs with automatic management, and get easy access to object stores (S3, GCS, R2). It maximizes GPU availability by provisioning in all zones, regions, and clouds you have access to (the "Sky"), with automatic failover. It cuts cloud costs with Managed Spot (3-6x savings using spot VMs, with auto-recovery from preemptions), an Optimizer (2x savings by auto-picking the cheapest VM/zone/region/cloud), and Autostop (hands-free cleanup of idle clusters). SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.
For similar tasks

hof
Hof is a CLI tool that unifies data models, schemas, code generation, and a task engine. It allows users to augment data, config, and schemas with CUE to improve consistency, generate multiple Yaml and JSON files, explore data or config with a TUI, and run workflows with automatic task dependency inference. The tool uses CUE to power the DX and implementation, providing a language for specifying schemas, configuration, and writing declarative code. Hof offers core features like code generation, data model management, task engine, CUE cmds, creators, modules, TUI, and chat for better, scalable results.

vast-python
This repository contains the open source python command line interface for vast.ai. The CLI has all the main functionality of the vast.ai website GUI and uses the same underlying REST API. The main functionality is self-contained in the script file vast.py, with additional invoice generating commands in vast_pdf.py. Users can interact with the vast.ai platform through the CLI to manage instances, create templates, manage teams, and perform various cloud-related tasks.


sdk
Smithery SDK is a tool that provides utilities to simplify the development and deployment of Model Context Protocols (MCPs) with Smithery. It offers functionalities for finding and connecting to MCP servers in the registry, building and deploying MCP servers, and creating fast MCP servers with Smithery session configuration support. Additionally, it includes a ready-to-use MCP server template. For more information and access to the MCP registry, visit https://smithery.ai/.

mushroom
MRCMS is a Java-based content management system built on a data model + template + plugin architecture, with built-in article publishing. Its goal is to make it quick to build small to medium websites.

flow-like
Flow-Like is an enterprise-grade workflow operating system built upon Rust for uncompromising performance, efficiency, and code safety. It offers a modular frontend for apps, a rich set of events, a node catalog, a powerful no-code workflow IDE, and tools to manage teams, templates, and projects within organizations. With typed workflows, users can create complex, large-scale workflows with clear data origins, transformations, and contracts. Flow-Like is designed to automate any process through seamless integration of LLM, ML-based, and deterministic decision-making instances.

note-gen
Note-gen is a simple tool for generating notes automatically based on user input. It uses natural language processing techniques to analyze text and extract key information to create structured notes. The tool is designed to save time and effort for users who need to summarize large amounts of text or generate notes quickly. With note-gen, users can easily create organized and concise notes for study, research, or any other purpose.

A-mem
A-MEM is a novel agentic memory system designed for Large Language Model (LLM) agents to dynamically organize memories in an agentic way. It introduces advanced memory organization capabilities, intelligent indexing, and linking of memories, comprehensive note generation, interconnected knowledge networks, continuous memory evolution, and agent-driven decision making for adaptive memory management. The system facilitates agent construction and enables dynamic memory operations and flexible agent-memory interactions.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out their overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions. Specifically, the authors first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, they establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics, and present a study evaluating 16 mainstream LLMs in TrustLLM across over 30 datasets. The documentation explains how to use the trustllm Python package to assess the trustworthiness of your LLM more quickly. For more details about TrustLLM, refer to the project website.

AI-YinMei
AI-YinMei is an AI virtual anchor (VTuber) development tool for NVIDIA GPUs. It supports fastgpt knowledge-base chat and a complete LLM stack ([fastgpt] + [one-api] + [Xinference]); replying to Bilibili live-stream danmaku and greeting viewers who join the stream; speech synthesis via Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS; expression control through Vtuber Studio; image generation with stable-diffusion-webui output to an OBS live room; NSFW image filtering; image search via DuckDuckGo (requires proxy access) and Baidu image search (no proxy needed); an AI reply chat box (HTML plug-in); AI singing with Auto-Convert-Music; a playlist (HTML plug-in); dancing, expression video playback, head-patting and gift-smashing reactions; automatic dancing when singing starts and cyclic swaying during chat and singing; multi-scene switching, background-music switching, and automatic day/night scene changes; and open-ended singing and painting where the AI decides the content on its own.