
easy-learn-ai
Easy-to-understand AI learning resources for beginners.
Stars: 143

Easy AI is a modern web application platform focused on AI education, aiming to help users understand complex artificial intelligence concepts through a concise and intuitive approach. The platform integrates multiple learning modules, providing a comprehensive AI knowledge system from basic concepts to practical applications.
README:
Easy AI is a modern web application platform focused on AI education, aiming to help users understand complex artificial intelligence concepts in a concise and intuitive way. The platform integrates multiple learning modules, providing a comprehensive AI knowledge system from basic concepts to practical applications.
Category | Topic | Status |
---|---|---|
Model Basics | Easy Guide to NLP - the branch of AI that processes natural language | ✅ |
Model Basics | Easy Guide to Transformer - the self-attention architecture that handles long text efficiently | ✅ |
Model Basics | Easy Guide to LLM - a revolutionary AI technology that redefines what machines can understand | ✅ |
Model Basics | Easy Guide to Model Distillation - compressing the knowledge of a large, complex model into a lightweight small model | ✅ |
Model Basics | Easy Guide to Model Quantization - converting model weights to lower-precision representations | ✅ |
Model Basics | Easy Guide to Model Hallucination - untrue or implausible content a model produces when generating text | ✅ |
Model Basics | Easy Guide to Tokens - the smallest unit a model generates text with; each token is a word or part of a word | ✅ |
Model Basics | Easy Guide to BERT - a pretrained language model built on an Encoder-Only architecture | ✅ |
Model Basics | Easy Guide to Multimodality - letting AI understand and generate images, video, audio, and other modalities | ✅ |
Model Basics | Easy Guide to T5 - a pretrained language model built on an Encoder-Decoder architecture | ✅ |
Model Basics | Easy Guide to GPT - a pretrained language model built on a Decoder-Only PLM architecture | ✅ |
Model Basics | Easy Guide to LLaMA - a pretrained language model built on a Decoder-Only architecture | ✅ |
Model Basics | Easy Guide to DeepSeek R1 - innovative algorithms that give large language models strong reasoning ability | ✅ |
Model Basics | Easy Guide to GGUF - a format for more efficient model storage, loading, and deployment | ✅ |
Model Basics | Easy Guide to MoE - coming soon | 👷 |
Model Deployment | Easy Guide to Model Deployment - comparing Ollama and vLLM, the two mainstream local deployment options | ✅ |
Model Training | Easy Guide to Pretraining - the first stage of large language model training | ✅ |
Model Fine-Tuning | Easy Guide to Why Fine-Tune - comparing long context, knowledge bases, and fine-tuning | ✅ |
Model Fine-Tuning | Easy Guide to Fine-Tuning Methods - full-parameter vs. LoRA vs. freeze fine-tuning | ✅ |
Model Fine-Tuning | Easy Guide to SFT - the key step that turns a pretrained model into a practical AI assistant | ✅ |
Model Fine-Tuning | Easy Guide to LoRA - one of the most popular parameter-efficient fine-tuning methods for large models (see the sketch after this table) | ✅ |
Model Fine-Tuning | Easy Guide to RLHF - using reinforcement learning to turn subjective human preferences into an objective optimization target for the model | ✅ |
Model Fine-Tuning | Easy Guide to Fine-Tuning Parameters: Learning Rate - the key parameter deciding how much model parameters are adjusted | ✅ |
Model Fine-Tuning | Easy Guide to Fine-Tuning Parameters: Epochs - how many times the model passes through the full training dataset | ✅ |
Model Fine-Tuning | Easy Guide to Fine-Tuning Parameters: Batch Size - the number of samples used for each parameter update | ✅ |
Model Fine-Tuning | Easy Guide to Fine-Tuning Parameters: LoRA Rank - the key parameter deciding the adapter's expressive capacity during fine-tuning | ✅ |
Model Fine-Tuning | Easy Guide to DeepSpeed - a deep learning optimization library that simplifies distributed training and inference | ✅ |
Model Fine-Tuning | Easy Guide to Loss - the metric that measures the gap between predictions and ground truth during training | ✅ |
Model Evaluation | Easy Guide to Model Evaluation - coming soon | 👷 |
Data Augmentation | Easy Guide to MGA - a lightweight framework that systematically rewrites an existing corpus into diverse variants | ✅ |
Prompts | Easy Guide to Prompt Engineering - coming soon | 👷 |
Agent | Easy Guide to Agents - turning AI from an answer machine into an agent that gets things done | ✅ |
Agent | Easy Guide to Function Calling - an important way for large language models to interact with external data sources and tools | ✅ |
Agent | Easy Guide to MCP - an open standard protocol for connecting AI models to external data sources | ✅ |
RAG | Easy Guide to RAG - retrieval-augmented generation, which addresses the factuality problems of large language models | ✅ |
RAG | Easy Guide to Vector Embeddings - coming soon | 👷 |
RAG | Easy Guide to Knowledge Graphs - coming soon | 👷 |
💡 Continuously updated ...
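To make one of the topics above concrete, the LoRA row in the table points to the sketch below: a minimal, illustrative example of the low-rank idea behind LoRA. The dimensions, rank, and initialization values are assumptions chosen for illustration and are not taken from the platform's course materials.

```python
# Minimal LoRA sketch (illustrative only): freeze the pretrained weight W and
# train only two small factors B (d x r) and A (r x k); the effective weight
# used in the forward pass becomes W + B @ A.
import numpy as np

d, k, r = 768, 768, 8                 # r is the "LoRA rank" listed in the table
W = np.random.randn(d, k)             # frozen pretrained weight (never updated)
B = np.zeros((d, r))                  # LoRA factor, initialized to zero
A = np.random.randn(r, k) * 0.01      # LoRA factor, small random init

def forward(x):
    # During fine-tuning, only A and B would receive gradient updates.
    return x @ (W + B @ A).T

x = np.random.randn(1, k)
print(forward(x).shape)               # (1, 768)

# Trainable parameters: r * (d + k) = 12,288 vs. d * k = 589,824 for fully
# fine-tuning this single matrix -- which is why the LoRA rank trades
# expressive capacity against memory, as the parameter rows above describe.
```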
Category | Topic | Status |
---|---|---|
AI Basics | Building an overall picture of AI - how has AI technology evolved? | ✅ |
Model Deployment | Build a fully local, web-connected, private DeepSeek with its own local knowledge base (see the sketch after this table) | ✅ |
Model Fine-Tuning | How to fine-tune your DeepSeek-R1 into a domain expert? (Theory) | ✅ |
Model Fine-Tuning | How to fine-tune your DeepSeek-R1 into a domain expert? (Hands-on) | ✅ |
Model Fine-Tuning | LLaMA Factory Fine-Tuning Tutorial (Part 1): Getting Started, Installation, and Usage | ✅ |
Model Fine-Tuning | LLaMA Factory Fine-Tuning Tutorial (Part 2): How to Build a High-Quality Dataset | ✅ |
Model Fine-Tuning | LLaMA Factory Fine-Tuning Tutorial (Part 3): How to Tune Fine-Tuning Parameters and GPU Memory Usage | ✅ |
Model Fine-Tuning | LLaMA Factory Fine-Tuning Tutorial (Part 4): How to Monitor Fine-Tuning and Export the Model | ✅ |
Model Fine-Tuning | LLaMA Factory Fine-Tuning Tutorial (Complete Edition): Fine-Tune a Domain-Specific Large Model from Scratch | ✅ |
Model Evaluation | How to evaluate a model's performance after fine-tuning? | 👷 |
Datasets | You want to fine-tune a domain-specific model - how do you actually build the dataset? | ✅ |
Datasets | How to batch-convert domain literature into a dataset usable for fine-tuning? | ✅ |
Datasets | A hands-on tutorial on building datasets with Easy Dataset | ✅ |
Agent | MCP + databases: a new way to improve retrieval precision on structured data | ✅ |
Agent | The most detailed walkthrough on the web - read it and you will understand the core principles of MCP! | ✅ |
Agent | MCP faces greater security threats than traditional applications! | ✅ |
RAG | How to effectively improve RAG retrieval precision | 👷 |
💡 Continuously updated ...
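As a small companion to the local-deployment tutorial listed above (referenced in its table row), here is a minimal sketch of chatting with a locally served model through the ollama Python client. It assumes the Ollama service is running on the local machine and that a DeepSeek-R1 model tag has already been pulled (for example with `ollama pull deepseek-r1`); the exact model tag is an assumption, and the knowledge-base and web-search parts of the tutorial are not covered here.

```python
# Minimal local-chat sketch using the ollama Python client (pip install ollama).
# Assumes the Ollama server is running locally and the model tag below exists.
import ollama

response = ollama.chat(
    model="deepseek-r1",  # assumed tag; substitute whatever tag you pulled
    messages=[{"role": "user", "content": "Explain LoRA in one sentence."}],
)
print(response["message"]["content"])
```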
A curated collection of high-quality prompts from major AI platforms - learn the essence of AI prompting.
Manus | Cluely | Cursor | Lovable | Devin |
---|---|---|---|---|
dia | Junie | Bolt | Cline | Codex CLI |
Replit | RooCode | Same.dev | Spawn | Trae |
v0 | VSCode | Warp.dev | Xcode | Windsurf |
A curated directory of high-quality AI tools, organized by category for precise navigation, to boost productivity in work and creation.
Category | Tool Count | Category | Tool Count |
---|---|---|---|
All Tools | 878+ | AI Writing Tools | 100+ |
AI Video Tools | 100+ | AI Image Tools | 69+ |
AI Design Tools | 78+ | AI Audio Tools | 75+ |
AI Chat | 72+ | AI Coding Tools | 65+ |
AI Model Training | 49+ | AI Development Platforms | 43+ |
AI Search Engines | 40+ | AI Slides | 36+ |
AI Office Tools | 30+ | AI Agents | 19+ |
AI Translation | 19+ | AI Content Detection | 16+ |
AI Legal Assistants | 8+ | | |
💡 Tip: click a category name to jump directly to its tools page; specific categories can be shared and bookmarked via URL.
First-hand AI news extracted by AI from multiple channels, summarized into a daily report.
We welcome contributions of all kinds, including but not limited to:
- 🐛 Reporting bugs
- 💡 Suggesting new features
- 📖 Improving documentation
- 🔧 Submitting code fixes
For any questions or suggestions, feel free to reach out via:
- 🌐 Project homepage: Easy AI
- 📧 Feedback: submit an issue on the Issues page
🎯 Make AI learning simple | 🚀 Make knowledge spread more efficiently
Made with ❤️ for the AI learning community
Alternative AI tools for easy-learn-ai
Similar Open Source Tools

AISystem
This open-source project, also known as **Deep Learning System** or **AI System (AISys)**, aims to explore and learn about the system design of artificial intelligence and deep learning. The project is centered around the full-stack content of AI systems that ZOMI has accumulated, organized, and built during his work. The goal is to collaborate with all friends who are interested in AI open-source projects to jointly promote learning and discussion.

ai-demos
The 'ai-demos' repository is a collection of example code from presentations focusing on building with AI and LLMs. It serves as a resource for developers looking to explore practical applications of artificial intelligence in their projects. The code snippets showcase various techniques and approaches to leverage AI technologies effectively. The repository aims to inspire and educate developers on integrating AI solutions into their applications.

BrainX
BrainX is a tool designed for AI enthusiasts to explore and experiment with various machine learning algorithms and models. It provides a user-friendly interface for building, training, and evaluating AI models. The tool aims to simplify the process of developing AI applications and enable users to quickly prototype and test their ideas.

awesome-ai-apps
This repository is a comprehensive collection of practical examples, tutorials, and recipes for building powerful LLM-powered applications. From simple chatbots to advanced AI agents, these projects serve as a guide for developers working with various AI frameworks and tools. Powered by Nebius AI Studio - your one-stop platform for building and deploying AI applications.

mslearn-ai-vision
The 'mslearn-ai-vision' repository contains lab files for Azure AI Vision modules. It provides hands-on exercises and resources for learning about AI vision capabilities on the Azure platform. The labs cover topics such as image recognition, object detection, and image classification using Azure's AI services. By following the lab exercises, users can gain practical experience in building and deploying AI vision solutions in the cloud.

BaseAI
BaseAI is an AI framework designed for creating declarative and composable AI-powered LLM products. It enables the development of AI agent pipes locally, incorporating agentic tools and memory (RAG). The framework offers a learn guide for beginners to kickstart their journey with BaseAI. For detailed documentation, users can visit baseai.dev/docs. Contributions to BaseAI are encouraged, and interested individuals can refer to the Contributing Guide. The original authors of BaseAI include Ahmad Awais, Ashar Irfan, Saqib Ameen, Saad Irfan, and Ahmad Bilal. Security vulnerabilities can be reported privately via email to [email protected]. BaseAI aims to provide resources for learning AI agent development, utilizing agentic tools and memory.

GenerativeAIExamples
NVIDIA Generative AI Examples are state-of-the-art examples that are easy to deploy, test, and extend. All examples run on the high performance NVIDIA CUDA-X software stack and NVIDIA GPUs. These examples showcase the capabilities of NVIDIA's Generative AI platform, which includes tools, frameworks, and models for building and deploying generative AI applications.

azure-ai-docs
Azure AI Docs is a repository that provides detailed documentation and resources for developers looking to leverage Microsoft's AI services on the Azure platform. The repository covers a wide range of topics including machine learning, natural language processing, computer vision, and more. Developers can find tutorials, code samples, best practices, and guidelines to help them integrate AI capabilities into their applications seamlessly.

Disciplined-AI-Software-Development
Disciplined AI Software Development is a comprehensive repository that provides guidelines and best practices for developing AI software in a disciplined manner. It covers topics such as project organization, code structure, documentation, testing, and deployment strategies to ensure the reliability, scalability, and maintainability of AI applications. The repository aims to help developers and teams navigate the complexities of AI development by offering practical advice and examples to follow.

ai-app-lab
The ai-app-lab is a high-code Python SDK Arkitect designed for enterprise developers with professional development capabilities. It provides a toolset and workflow set for developing large model applications tailored to specific business scenarios. The SDK offers highly customizable application orchestration, quality business tools, one-stop development and hosting services, security enhancements, and AI prototype application code examples. It caters to complex enterprise development scenarios, enabling the creation of highly customized intelligent applications for various industries.

spring-ai-examples
Spring AI Examples is a repository containing various examples of integrating artificial intelligence capabilities into Spring applications. The examples cover a wide range of AI technologies such as machine learning, natural language processing, computer vision, and more. These examples serve as a practical guide for developers looking to incorporate AI functionalities into their Spring projects.

ai_agents_cookbooks
The 'ai_agents_cookbooks' repository contains cookbooks for AI agents, which are AI systems capable of using other software as tools. It provides resources for learning more about AI through events and requires Python 3.10 or higher as a prerequisite.

lmnr
Laminar is an all-in-one open-source platform designed for engineering AI products. It allows users to trace, evaluate, label, and analyze LLM data efficiently. The platform offers features such as automatic tracing of common AI frameworks and SDKs, local and online evaluations, simple UI for data labeling, dataset management, and scalability with gRPC communication. Laminar is built with a modern open-source stack including RabbitMQ, Postgres, Clickhouse, and Qdrant for semantic similarity search. It provides fast and beautiful dashboards for traces, evaluations, and labels, making it a comprehensive tool for AI product development.

spring-ai-alibaba-examples
This repository contains examples showcasing various uses of Spring AI Alibaba, from basic to advanced, and best practices for AI projects. It welcomes contributions related to Spring AI Alibaba usage examples, API usage, Spring AI usage examples, and best practices for AI projects. The project structure is designed to modularize functions for easy access and use.

simple-ai
Simple AI is a lightweight Python library for implementing basic artificial intelligence algorithms. It provides easy-to-use functions and classes for tasks such as machine learning, natural language processing, and computer vision. With Simple AI, users can quickly prototype and deploy AI solutions without the complexity of larger frameworks.
For similar tasks

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.

zep-python
Zep is an open-source platform for building and deploying large language model (LLM) applications. It provides a suite of tools and services that make it easy to integrate LLMs into your applications, including chat history memory, embedding, vector search, and data enrichment. Zep is designed to be scalable, reliable, and easy to use, making it a great choice for developers who want to build LLM-powered applications quickly and easily.

AI-in-a-Box
AI-in-a-Box is a curated collection of solution accelerators that can help engineers establish their AI/ML environments and solutions rapidly and with minimal friction, while maintaining the highest standards of quality and efficiency. It provides essential guidance on the responsible use of AI and LLM technologies, specific security guidance for Generative AI (GenAI) applications, and best practices for scaling OpenAI applications within Azure. The available accelerators include: Azure ML Operationalization in-a-box, Edge AI in-a-box, Doc Intelligence in-a-box, Image and Video Analysis in-a-box, Cognitive Services Landing Zone in-a-box, Semantic Kernel Bot in-a-box, NLP to SQL in-a-box, Assistants API in-a-box, and Assistants API Bot in-a-box.

NeMo
NeMo Framework is a generative AI framework built for researchers and pytorch developers working on large language models (LLMs), multimodal models (MM), automatic speech recognition (ASR), and text-to-speech synthesis (TTS). The primary objective of NeMo is to provide a scalable framework for researchers and developers from industry and academia to more easily implement and design new generative AI models by being able to leverage existing code and pretrained models.

E2B
E2B Sandbox is a secure sandboxed cloud environment made for AI agents and AI apps. Sandboxes allow AI agents and apps to have long-running, secure cloud environments in which large language models can use the same tools as humans do, for example: cloud browsers; GitHub repositories and CLIs; coding tools like linters, autocomplete, and "go-to definition"; running LLM-generated code; and audio & video editing. The E2B sandbox can be connected to any LLM and any AI agent or app.

floneum
Floneum is a graph editor that makes it easy to develop your own AI workflows. It uses large language models (LLMs) to run AI models locally, without any external dependencies or even a GPU. This makes it easy to use LLMs with your own data, without worrying about privacy. Floneum also has a plugin system that allows you to improve the performance of LLMs and make them work better for your specific use case. Plugins can be used in any language that supports web assembly, and they can control the output of LLMs with a process similar to JSONformer or guidance.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, evaluation and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to assess your LLM's trustworthiness more quickly. For more details about TrustLLM, please refer to the project website.

AI-YinMei
AI-YinMei is an AI virtual anchor (VTuber) development tool (NVIDIA GPU version). It supports FastGPT knowledge-base chat and a complete LLM stack ([fastgpt] + [one-api] + [Xinference]); replying to bilibili live-stream danmaku and greeting viewers as they enter the room; speech synthesis with Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS; expression control via Vtuber Studio; image generation with stable-diffusion-webui streamed to an OBS live room, with public-NSFW-y-distinguish filtering of NSFW images; web and image search via duckduckgo (requires proxy access) and Baidu image search (no proxy needed); an AI reply chat box [html plug-in]; AI singing with Auto-Convert-Music and a playlist [html plug-in]; dancing, expression video playback, head-pat and gift-smash actions, automatically starting a dance while singing, and cycling idle sway motions during chat and singing; multi-scene switching, background-music switching, and automatic day/night scene changes; and an open singing-and-painting mode in which the AI decides the content on its own.