
open-health
OpenHealth, AI Health Assistant | Powered by Your Data
Stars: 3152

OpenHealth is an AI health assistant that helps users manage their health data by leveraging AI and personal health information. It allows users to consolidate health data, parse it smartly, and engage in contextual conversations with GPT-powered AI. The tool supports various data sources like blood test results, health checkup data, personal physical information, family history, and symptoms. OpenHealth aims to empower users to take control of their health by combining data and intelligence for actionable health management.
README:
AI Health Assistant | Powered by Your Data
📢 Now Available on Web!
We've made OpenHealth more accessible with two tailored options:
- Clinic - Quick and easy health consultations
- Full Platform - Advanced tools for comprehensive health management
English | Français | Deutsch | Español | 한국어 | 中文 | 日本語 | Українська | Русский | اردو
OpenHealth helps you take charge of your health data. By leveraging AI and your personal health information, OpenHealth provides a private assistant that helps you better understand and manage your health. You can run it completely locally for maximum privacy.
Core Features
- 📊 Centralized Health Data Input: Easily consolidate all your health data in one place.
- 🛠️ Smart Parsing: Automatically parses your health data and generates structured data files.
- 🤝 Contextual Conversations: Use the structured data as context for personalized interactions with GPT-powered AI.
| Data Sources You Can Add | Supported Language Models |
|---|---|
| • Blood Test Results<br>• Health Checkup Data<br>• Personal Physical Information<br>• Family History<br>• Symptoms | • LLaMA<br>• DeepSeek-V3<br>• GPT<br>• Claude<br>• Gemini |
- 💡 Your health is your responsibility.
- ✅ True health management combines your data + intelligence, turning insights into actionable plans.
- 🧠 AI acts as an unbiased tool to guide and support you in managing your long-term health effectively.
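To make the "Contextual Conversations" feature concrete, the sketch below shows structured health data being passed as context to a chat model. It assumes an OpenAI-compatible chat completions endpoint and a made-up healthContext object; it illustrates the concept only and is not OpenHealth's actual code.

```typescript
// Sketch: structured health data used as context for a chat model.
// Assumes an OpenAI-compatible /v1/chat/completions endpoint; the data
// shape below is hypothetical, not OpenHealth's actual schema.
const healthContext = {
  bloodTests: [{ date: "2024-11-02", marker: "LDL", value: 128, unit: "mg/dL" }],
  familyHistory: ["type 2 diabetes (father)"],
  symptoms: ["occasional fatigue"],
};

async function askWithContext(question: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        // The structured data file becomes part of the system prompt.
        {
          role: "system",
          content: `You are a health assistant. User data:\n${JSON.stringify(healthContext, null, 2)}`,
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// askWithContext("What do my latest LDL results suggest?").then(console.log);
```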
graph LR
subgraph Health Data Sources
A1[Clinical Records<br>Blood Tests/Diagnoses/<br>Prescriptions/Imaging]
A2[Health Platforms<br>Apple Health/Google Fit]
A3[Wearable Devices<br>Oura/Whoop/Garmin]
A4[Personal Records<br>Diet/Symptoms/<br>Family History]
end
subgraph Data Processing
B1[Data Parser & Standardization]
B2[Unified Health Data Format]
end
subgraph AI Integration
C1[LLM Processing<br>Commercial & Local Models]
C2[Interaction Methods<br>RAG/Cache/Agents]
end
A1 & A2 & A3 & A4 --> B1
B1 --> B2
B2 --> C1
C1 --> C2
style A1 fill:#e6b3cc,stroke:#cc6699,stroke-width:2px,color:#000
style A2 fill:#b3d9ff,stroke:#3399ff,stroke-width:2px,color:#000
style A3 fill:#c2d6d6,stroke:#669999,stroke-width:2px,color:#000
style A4 fill:#d9c3e6,stroke:#9966cc,stroke-width:2px,color:#000
style B1 fill:#c6ecd9,stroke:#66b399,stroke-width:2px,color:#000
style B2 fill:#c6ecd9,stroke:#66b399,stroke-width:2px,color:#000
style C1 fill:#ffe6cc,stroke:#ff9933,stroke-width:2px,color:#000
style C2 fill:#ffe6cc,stroke:#ff9933,stroke-width:2px,color:#000
classDef default color:#000
Note: The data parsing functionality is currently implemented in a separate Python server and is planned to be migrated to TypeScript in the future.
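The structured data files produced by the parser correspond to the "Unified Health Data Format" stage in the diagram above. As a rough illustration only, a record in such a format might look like the following TypeScript interface; the actual schema used by OpenHealth may differ.

```typescript
// Hypothetical unified health record, for illustration only -- the actual
// structured format produced by OpenHealth's parser may differ.
interface LabResult {
  marker: string;          // e.g. "Hemoglobin A1c"
  value: number;
  unit: string;            // e.g. "%"
  referenceRange?: string;
  collectedAt: string;     // ISO 8601 date
}

interface UnifiedHealthRecord {
  source: "clinical" | "health-platform" | "wearable" | "personal";
  labResults: LabResult[];
  conditions: string[];
  medications: string[];
  familyHistory: string[];
  symptoms: { description: string; onset?: string }[];
}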
Installation Instructions
- Clone the Repository:

  git clone https://github.com/OpenHealthForAll/open-health.git
  cd open-health

- Setup and Run:

  # Copy environment file
  cp .env.example .env

  # Start the application using Docker/Podman Compose
  docker/podman compose --env-file .env up

  For existing users, use:

  # Generate ENCRYPTION_KEY for the .env file:
  # Run the command below and add the output to ENCRYPTION_KEY in .env
  echo $(head -c 32 /dev/urandom | base64)

  # Rebuild and start the application
  docker/podman compose --env-file .env up --build

  to rebuild the image. Run this also if you make any modifications to the .env file.

- Access OpenHealth: Open your browser and navigate to http://localhost:3000 to begin using OpenHealth.
Note: The system consists of two main components: parsing and LLM. For parsing, you can use docling for full local execution, while the LLM component can run fully locally using Ollama.
Note: If you're using Ollama with Docker, make sure to set the Ollama API endpoint to http://docker.for.mac.localhost:11434 on a Mac or http://host.docker.internal:11434 on Windows.
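If the endpoint is misconfigured, requests from the app container will simply fail. A quick sanity check is to query Ollama's /api/tags route, which lists locally installed models. A minimal sketch, assuming Node 18+ with built-in fetch and the Windows endpoint from the note above:

```typescript
// Quick connectivity check against a local Ollama instance from inside Docker.
// Adjust the host per the note above (docker.for.mac.localhost on macOS,
// host.docker.internal on Windows).
const OLLAMA_URL = "http://host.docker.internal:11434";

async function checkOllama(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`); // lists installed models
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const { models } = await res.json();
  console.log(
    "Ollama is reachable; installed models:",
    models.map((m: { name: string }) => m.name),
  );
}

checkOllama().catch(console.error);
```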
Alternative AI tools for open-health
Similar Open Source Tools

ramparts
Ramparts is a fast, lightweight security scanner designed for the Model Context Protocol (MCP) ecosystem. It scans MCP servers to identify vulnerabilities and provides security features such as discovering capabilities, multi-transport support, session management, static analysis, cross-origin analysis, LLM-powered analysis, and risk assessment. The tool is suitable for developers, MCP users, and MCP developers to ensure the security of their connections. It can be used for security audits, development testing, CI/CD integration, and compliance with security requirements for AI agent deployments.

panko-gpt
PankoGPT is an AI companion platform that allows users to easily create and deploy custom AI companions on messaging platforms like WhatsApp, Discord, and Telegram. Users can customize companion behavior, configure settings, and equip companions with various tools without the need for coding. The platform aims to provide contextual understanding and a user-friendly interface for creating companions that respond based on context and offer configurable tools for enhanced capabilities. Planned features include expanded functionality, pre-built skills, and optimization for better performance.

ai-marketplace-monitor
An intelligent tool that monitors Facebook Marketplace listings using AI to help users find the best deals. It provides instant notifications when items matching specific criteria are posted, along with AI-powered analysis of each listing. The tool offers smart search capabilities, AI-powered listing evaluation and recommendations, various notification options, support for multiple locations, and customizable search parameters. Users can configure the tool to search for specific products, filter by price and location, and receive notifications through different channels. The tool also supports AI service providers and offers a self-hosted model option.

WebMasterLog
WebMasterLog is a comprehensive repository showcasing various web development projects built with front-end and back-end technologies. It highlights interactive user interfaces, dynamic web applications, and a spectrum of web development solutions. The repository encourages contributions in areas such as adding new projects, improving existing projects, updating documentation, fixing bugs, implementing responsive design, enhancing code readability, and optimizing project functionalities. Contributors are guided to follow specific guidelines for project submissions, including directory naming conventions, README file inclusion, project screenshots, and commit practices. Pull requests are reviewed based on criteria such as proper PR template completion, originality of work, code comments for clarity, and sharing screenshots for frontend updates. The repository also participates in various open-source programs like JWOC, GSSoC, Hacktoberfest, KWOC, 24 Pull Requests, IWOC, SWOC, and DWOC, welcoming valuable contributors.

Starmoon
Starmoon is an affordable, compact AI-enabled device that can understand and respond to your emotions with empathy. It offers supportive conversations and personalized learning assistance. The device is cost-effective, voice-enabled, open-source, compact, and aims to reduce screen time. Users can assemble the device themselves using off-the-shelf components and deploy it locally for data privacy. Starmoon integrates various APIs for AI language models, speech-to-text, text-to-speech, and emotion intelligence. The hardware setup involves components like ESP32S3, microphone, amplifier, speaker, LED light, and button, along with software setup instructions for developers. The project also includes a web app, backend API, and background task dashboard for monitoring and management.

biochatter
Generative AI models have shown tremendous usefulness in increasing accessibility and automation of a wide range of tasks. This repository contains the `biochatter` Python package, a generic backend library for the connection of biomedical applications to conversational AI. It aims to provide a common framework for deploying, testing, and evaluating diverse models and auxiliary technologies in the biomedical domain. BioChatter is part of the BioCypher ecosystem, connecting natively to BioCypher knowledge graphs.

pipecat
Pipecat is an open-source framework designed for building generative AI voice bots and multimodal assistants. It provides code building blocks for interacting with AI services, creating low-latency data pipelines, and transporting audio, video, and events over the Internet. Pipecat supports various AI services like speech-to-text, text-to-speech, image generation, and vision models. Users can implement new services and contribute to the framework. Pipecat aims to simplify the development of applications like personal coaches, meeting assistants, customer support bots, and more by providing a complete framework for integrating AI services.

local-deep-research
Local Deep Research is a powerful AI-powered research assistant that performs deep, iterative analysis using multiple LLMs and web searches. It can be run locally for privacy or configured to use cloud-based LLMs for enhanced capabilities. The tool offers advanced research capabilities, flexible LLM support, rich output options, privacy-focused operation, enhanced search integration, and academic & scientific integration. It also provides a web interface, command line interface, and supports multiple LLM providers and search engines. Users can configure AI models, search engines, and research parameters for customized research experiences.

cognee
Cognee is an open-source framework designed for creating self-improving deterministic outputs for Large Language Models (LLMs) using graphs, LLMs, and vector retrieval. It provides a platform for AI engineers to enhance their models and generate more accurate results. Users can leverage Cognee to add new information, utilize LLMs for knowledge creation, and query the system for relevant knowledge. The tool supports various LLM providers and offers flexibility in adding different data types, such as text files or directories. Cognee aims to streamline the process of working with LLMs and improving AI models for better performance and efficiency.

db2rest
DB2Rest is a modern low code REST DATA API platform that enables the rapid development of intelligent applications by combining databases, language models, and vector stores. It facilitates context-aware, reasoning applications without vendor lock-in. The tool accelerates application delivery, fosters faster innovation with AI, serves as a secure database gateway, and simplifies integration. It supports various databases like PostgreSQL, MySQL, MS SQL Server, Oracle, MongoDB, and more, with planned support for additional databases. Users can connect on Discord for support and contact [email protected] for inquiries.

duolingo-clone
Lingo is an interactive platform for language learning that provides a modern UI/UX experience. It offers features like courses, quests, and a shop for users to engage with. The tech stack includes React JS, Next JS, Typescript, Tailwind CSS, Vercel, and Postgresql. Users can contribute to the project by submitting changes via pull requests. The platform utilizes resources from CodeWithAntonio, Kenney Assets, Freesound, Elevenlabs AI, and Flagpack. Key dependencies include @clerk/nextjs, @neondatabase/serverless, @radix-ui/react-avatar, and more. Users can follow the project creator on GitHub and Twitter, as well as subscribe to their YouTube channel for updates. To learn more about Next.js, users can refer to the Next.js documentation and interactive tutorial.

easy-dataset
Easy Dataset is a specialized application designed to streamline the creation of fine-tuning datasets for Large Language Models (LLMs). It offers an intuitive interface for uploading domain-specific files, intelligently splitting content, generating questions, and producing high-quality training data for model fine-tuning. With Easy Dataset, users can transform domain knowledge into structured datasets compatible with all OpenAI-format compatible LLM APIs, making the fine-tuning process accessible and efficient.

growi
GROWI is a collaborative wiki platform that allows users to create hierarchical pages with markdown, edit simultaneously with multiple people, and support authentication with LDAP/Active Directory, OAuth, and SAML. It also integrates with Slack/Mattermost, IFTTT, and allows for plugin customization. GROWI is Docker and Docker Compose ready, supports multiple sites, HTTPS with Let's Encrypt proxy integration, and offers migration guides for on-premise installations. The tool is built with Node.js, npm, pnpm, Turborepo, and requires MongoDB, with optional dependencies on Redis and ElasticSearch for full-text search functionality.

deepchat
DeepChat is a versatile chat tool that supports multiple model cloud services and local model deployment. It offers multi-channel chat concurrency support, platform compatibility, complete Markdown rendering, and easy usability with a comprehensive guide. The tool aims to enhance chat experiences by leveraging various AI models and ensuring efficient conversation management.

fast-llm-security-guardrails
ZenGuard AI enables AI developers to integrate production-level, low-code LLM (Large Language Model) guardrails into their generative AI applications effortlessly. With ZenGuard AI, ensure your application operates within trusted boundaries, is protected from prompt injections, and maintains user privacy without compromising on performance.
For similar tasks

llmperf
LLMPerf is a tool designed for evaluating the performance of Language Model APIs. It provides functionalities for conducting load tests to measure inter-token latency and generation throughput, as well as correctness tests to verify the responses. The tool supports various LLM APIs including OpenAI, Anthropic, TogetherAI, Hugging Face, LiteLLM, Vertex AI, and SageMaker. Users can set different parameters for the tests and analyze the results to assess the performance of the LLM APIs. LLMPerf aims to standardize prompts across different APIs and provide consistent evaluation metrics for comparison.

For similar jobs

IvyGPT
IvyGPT is a medical large language model that aims to generate the most realistic doctor consultation effects. It has been fine-tuned on high-quality medical Q&A data and trained using human feedback reinforcement learning. The project features full-process training on medical Q&A LLM, multiple fine-tuning methods support, efficient dataset creation tools, and a dataset of over 300,000 high-quality doctor-patient dialogues for training.