AI-Practices
🎓 Hands-on Machine Learning and Deep Learning Tutorials | Comprehensive ML & DL Tutorial with Jupyter Notebooks | Complete tutorials covering linear regression, neural networks, CNNs, RNNs, and more
Stars: 320
AI-Practices is a systematic platform for learning and practicing artificial intelligence, covering topics from foundational machine learning through advanced deep learning, reinforcement learning, generative models, large language models, multimodal learning, deployment optimization, distributed training, and agent reasoning. It provides a structured learning path that combines theory with practical implementation, follows industrial code standards, and includes Kaggle competition solutions. The platform is built on popular deep learning frameworks such as PyTorch, TensorFlow, and Keras, along with essential data science tools like NumPy, Pandas, and Scikit-Learn.
README:
| 500+ Python files | 280+ notebooks | 14 core modules | 100+ unit tests | 2 Kaggle gold medals |
|---|---|---|---|---|
| Production-grade code | Interactive learning | Systematic knowledge base | Code quality assurance | Competition-proven |
- Systematic learning path: 14 modules progressing from foundational math to cutting-edge techniques
- Theory meets practice: every concept comes with a mathematical derivation and a code implementation
- Engineering standards: follows industrial coding conventions and ships with full tests
- Competition-grade solutions: includes Kaggle Top 1% gold-medal solutions
AI-Practices/
│
├── Stage 1: Machine Learning Foundations
│ └── 01-foundations/ # Linear models, SVM, decision trees, ensemble learning, dimensionality reduction, clustering
│
├── Stage 2: Deep Learning Core
│ ├── 02-neural-networks/ # Neural network basics, optimizers, regularization
│ ├── 03-computer-vision/ # CNN architectures, transfer learning, object detection
│ └── 04-sequence-models/ # RNN/LSTM, attention, Transformer
│
├── Stage 3: Advanced Topics
│ ├── 05-advanced-topics/ # Functional API, callbacks, model optimization
│ ├── 06-generative-models/ # VAE, GAN, diffusion models
│ └── 07-reinforcement-learning/ # DQN, PPO, SAC, Actor-Critic
│
├── Stage 4: Large Models and Multimodality
│ ├── 10-large-language-models/ # Transformer, GPT/LLaMA, LoRA, RAG, agents
│ └── 11-multimodal-learning/ # CLIP, Stable Diffusion, Whisper, TTS
│
├── Stage 5: Engineering and Deployment
│ ├── 12-deployment-optimization/ # Quantization and pruning, TensorRT, FastAPI, MLOps
│ └── 13-distributed-training/ # DDP, FSDP, ZeRO, mixed-precision training
│
├── Stage 6: Agent Systems
│ └── 14-agents-reasoning/ # Tool use, reasoning strategies, multi-agent systems, autonomous agents
│
├── Theory Reference
│ └── 08-theory-notes/ # Activation functions, loss functions, architecture cheat sheets
│
└── Practical Projects
└── 09-practical-projects/ # Kaggle competitions, game AI, cross-module integrated systems
01-foundations
| Submodule | Core Content | Key Algorithms |
|---|---|---|
| 01-training-models | Model training fundamentals | Linear regression, gradient descent, regularization |
| 02-classification | Classification algorithms | Logistic regression, MNIST hands-on |
| 03-support-vector-machines | Support vector machines | Kernel trick, soft margin, SVM regression |
| 04-decision-trees | Decision trees | CART, pruning strategies |
| 05-ensemble-learning | Ensemble learning | Bagging, Boosting, XGBoost, Stacking |
| 06-dimensionality-reduction | Dimensionality reduction | PCA, t-SNE, LLE, UMAP |
| 07-unsupervised-learning | Unsupervised learning | K-Means, DBSCAN, GMM |
| 08-end-to-end-project | End-to-end ML project | California housing price prediction |
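As a flavor of what 01-training-models covers, here is a minimal, self-contained sketch of linear regression fit by gradient descent with an L2 penalty. It is illustrative only (not code from the repository); the synthetic data, learning rate, and penalty strength are arbitrary choices.

```python
import numpy as np

# Synthetic data: y = 3x + 2 + noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + 2 + 0.1 * rng.normal(size=200)

# Append a bias column so the intercept is learned jointly with the weight.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])

w = np.zeros(Xb.shape[1])
lr, lam = 0.1, 1e-2                               # learning rate and L2 penalty
for _ in range(500):
    grad = Xb.T @ (Xb @ w - y) / len(y) + lam * w # MSE gradient plus L2 term (bias penalized too, for brevity)
    w -= lr * grad

print("learned [weight, bias]:", w)               # should be close to [3, 2]
```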
02-neural-networks
| Submodule | Core Content |
|---|---|
| 01-keras-introduction | Keras introduction, Sequential/Functional API |
| 02-training-deep-networks | BatchNorm, Dropout, initialization strategies |
| 03-custom-models-training | Custom layers, training loops, low-level TensorFlow |
| 04-data-loading-preprocessing | Data pipelines, TFRecord, preprocessing |
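The workflow introduced in 01-keras-introduction and 02-training-deep-networks can be sketched with the Sequential API as below. This is an illustrative snippet rather than the repository's notebook code; the layer sizes and the tiny epoch count are arbitrary.

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
```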
03-computer-vision
| Submodule | Core Content |
|---|---|
| 01-cnn-basics | CNN fundamentals, pooling layers, ResNet implementation |
| 02-classic-architectures | Evolution of classic architectures |
| 03-transfer-learning | Transfer learning, cats-vs-dogs classification hands-on |
| 04-visualization | Feature visualization, intermediate-layer activations |
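A typical transfer-learning setup like the one in 03-transfer-learning (cats vs. dogs) might look like the sketch below: freeze an ImageNet backbone and train only a small head. It is illustrative, and `train_ds`/`val_ds` are placeholders for `tf.data` datasets you would build yourself.

```python
import tensorflow as tf

# Frozen ImageNet backbone plus a small trainable head for binary classification.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                                 # only the head is updated

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # supply your own tf.data pipelines
```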
04-sequence-models
| Submodule | Core Content |
|---|---|
| 01-rnn-basics | RNN fundamentals, LSTM, time-series forecasting |
| 02-lstm-gru | Advanced LSTM/GRU usage |
| 03-text-processing | Word embeddings, one-hot encoding |
| 04-cnn-for-sequences | 1-D convolutions for sequence data |
| 05-transformer | Self-attention, multi-head attention, BERT/GPT fundamentals |
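The operation at the heart of 05-transformer is scaled dot-product attention. A compact PyTorch sketch (illustrative, not the repository's implementation):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v, weights

# Toy example: batch of 2 sequences, 5 tokens each, 16-dimensional heads.
q = k = v = torch.randn(2, 5, 16)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)   # torch.Size([2, 5, 16]) torch.Size([2, 5, 5])
```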
05-advanced-topics
| Submodule | Core Content |
|---|---|
| 01-functional-api | Multi-input/multi-output models, residual connections, Inception |
| 02-callbacks-tensorboard | Callbacks, TensorBoard visualization |
| 03-model-optimization | Quantization, pruning, knowledge distillation, deployment |
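For 02-callbacks-tensorboard, a typical Keras callback setup looks roughly like this; the snippet is illustrative, and the checkpoint file name, log directory, and patience value are placeholder choices.

```python
import tensorflow as tf

callbacks = [
    # Stop when validation loss stops improving and roll back to the best weights.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True),
    # Keep only the best model seen so far.
    tf.keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
    # Write scalars and histograms for TensorBoard.
    tf.keras.callbacks.TensorBoard(log_dir="logs/run1"),
]
# model.fit(x_train, y_train, validation_split=0.1, epochs=50, callbacks=callbacks)
# Inspect the training curves with:  tensorboard --logdir logs
```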
06-generative-models
| Submodule | Core Content |
|---|---|
| 01-vae | Vanilla AE, VAE, VQ-VAE |
| 02-gans | GAN, DCGAN, WGAN-GP |
| 03-diffusion | DDPM theory and implementation |
| 04-text-generation | Character-level LSTM text generation |
| 05-deepdream | DeepDream artistic image generation |
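A small PyTorch sketch of the reparameterization trick and KL term that sit at the core of 01-vae (illustrative only; a complete VAE also needs an encoder, a decoder, and a reconstruction loss):

```python
import torch

def reparameterize(mu, logvar):
    """z = mu + sigma * eps with eps ~ N(0, I); sampling stays differentiable."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)), summed over the latent dimensions."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

mu, logvar = torch.zeros(4, 8), torch.zeros(4, 8)   # batch of 4, latent dim 8
z = reparameterize(mu, logvar)
print(z.shape, kl_divergence(mu, logvar))           # KL is 0 when q is exactly N(0, I)
```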
07-reinforcement-learning
| Submodule | Core Content | Test Coverage |
|---|---|---|
| 01-mdp-basics | MDPs, value iteration, policy iteration | ✅ |
| 02-temporal-difference | TD learning, SARSA | ✅ |
| 03-q-learning | Q-learning, exploration strategies | ✅ |
| 04-deep-q-learning | DQN, Double DQN, Dueling DQN, Rainbow | ✅ |
| 05-policy-gradient | REINFORCE, baseline methods | ✅ |
| 06-actor-critic | A2C, PPO | ✅ |
| 07-advanced-algorithms | SAC, TD3, DDPG | ✅ |
| 08-reward-optimization | Reward shaping, curiosity-driven exploration, inverse reinforcement learning | ✅ |
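To illustrate the tabular side of 03-q-learning, here is a self-contained NumPy sketch on a toy five-state chain. The environment and hyperparameters are made up for the example; it is not the repository's code.

```python
import numpy as np

# Toy 5-state chain: action 0 moves left, action 1 moves right;
# reaching the rightmost state ends the episode with reward 1.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.2
rng = np.random.default_rng(0)

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for _ in range(300):                                  # episodes
    s = int(rng.integers(n_states - 1))               # random non-terminal start state
    for _ in range(100):                              # cap the episode length
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if done:
            break

print(Q.argmax(axis=1)[:-1])   # greedy policy for non-terminal states: all 1 (move right)
```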
08-theory-notes
A quick-reference handbook covering:
- Activation function comparison and selection
- Loss functions explained
- Architecture cheat sheets (CNN, RNN, Dense)
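A few of the activation functions compared in the notes, written out in NumPy for quick reference (the GELU below uses the common tanh approximation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def gelu(x):
    # tanh approximation used by many Transformer implementations
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-4, 4, 9)
print(np.round(sigmoid(x), 3))   # saturates toward 0 and 1
print(np.round(relu(x), 3))      # zero for negative inputs
print(np.round(gelu(x), 3))      # smooth, slightly negative near zero
```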
09-practical-projects
| Submodule | Projects |
|---|---|
| 01-ml-basics | Titanic survival prediction, Otto product classification, SVM text classification, advanced XGBoost |
| 02-computer-vision | MNIST CNN classification |
| 03-nlp | LSTM sentiment analysis, Transformer text classification, NER, machine translation |
| 04-time-series | Temperature forecasting, LSTM stock prediction |
| 05-kaggle-competitions | 4 Kaggle competition solutions (including 2 gold medals) |
| 06-reinforcement-learning | Flappy Bird DQN, Dino Run, stock-trading RL |
| 07-integrated-systems | Multimodal retrieval, visual question answering agent, code assistant (109 tests) |
Kaggle competition results
| Competition | Ranking | Medal |
|---|---|---|
| Feedback Prize - ELL | Top 1% | 🥇 Gold |
| RSNA Abdominal Trauma | Top 1% | 🥇 Gold |
| American Express Default | Top 5% | 🥈 Silver |
| RSNA Lumbar Spine | Top 10% | 🥉 Bronze |
10-large-language-models
| Submodule | Core Content | Test Coverage |
|---|---|---|
| 01-llm-fundamentals | Transformer architecture, tokenizers | ✅ |
| 02-pretrained-models | GPT and LLaMA implemented from scratch | ✅ |
| 03-fine-tuning | LoRA, QLoRA parameter-efficient fine-tuning | ✅ |
| 04-prompt-engineering | Few-shot, Chain-of-Thought | ✅ |
| 05-rag | Vector databases, retrieval-augmented generation | ✅ |
| 06-agents | Tool calling, memory management | ✅ |
| 07-alignment | RLHF, DPO alignment training | ✅ |
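The core idea behind 03-fine-tuning can be condensed into a minimal LoRA layer: freeze the pretrained weight and learn a low-rank update. The PyTorch sketch below is illustrative and is neither the repository's nor the peft library's implementation; the rank and alpha values are arbitrary.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze the pretrained weights
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # 8192: only the low-rank A and B matrices are trainable
```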
11-multimodal-learning
| Submodule | Core Content | Test Coverage |
|---|---|---|
| 01-vision-language | CLIP, BLIP, LLaVA | ✅ |
| 02-image-generation | VAE, Diffusion, ControlNet | ✅ |
| 03-audio-models | Whisper speech recognition, TTS speech synthesis | ✅ |
12-deployment-optimization
| Submodule | Core Content | Test Coverage |
|---|---|---|
| 01-model-optimization | Quantization, pruning, distillation, ONNX export | ✅ |
| 02-inference-engines | TensorRT, vLLM, ONNX Runtime | ✅ |
| 03-serving-systems | FastAPI, Triton, load balancing | ✅ |
| 04-mlops | Experiment tracking, model registry, monitoring and alerting | ✅ |
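A minimal serving sketch in the spirit of 03-serving-systems: a FastAPI endpoint wrapping a small scikit-learn model. This is illustrative only; the module name `serve.py`, the iris stand-in model, and the request schema are assumptions, and a real deployment would load a trained artifact instead. Run with `uvicorn serve:app --port 8000`.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Stand-in for a real trained model loaded from a registry or artifact store.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    values: list[float]   # the four iris features

@app.post("/predict")
def predict(features: Features):
    pred = model.predict([features.values])[0]
    return {"class_id": int(pred)}
```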
13-distributed-training
| Submodule | Core Content | Test Coverage |
|---|---|---|
| 01-data-parallel | DDP, FSDP, ZeRO | ✅ |
| 02-model-parallel | Tensor parallelism, pipeline parallelism, sequence parallelism | ✅ |
| 03-mixed-precision | AMP, BF16, gradient scaling | ✅ |
| 04-large-scale-training | DeepSpeed, Megatron-LM | ✅ |
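A bare-bones PyTorch DDP skeleton of the kind covered in 01-data-parallel. It is illustrative rather than the repository's training script, assumes a multi-GPU host, and uses the hypothetical file name `train_ddp.py`; launch it with `torchrun --nproc_per_node=<num_gpus> train_ddp.py`.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")             # reads env vars set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device(f"cuda:{local_rank}")

    model = nn.Linear(128, 10).to(device)
    model = DDP(model, device_ids=[local_rank])          # gradients are all-reduced automatically
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for _ in range(100):                                 # dummy training loop on random data
        x = torch.randn(32, 128, device=device)
        y = torch.randint(0, 10, (32,), device=device)
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```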
14-agents-reasoning
| Submodule | Core Content | Test Coverage |
|---|---|---|
| 01-tool-use | Function calling, tool registration, structured output | ✅ |
| 02-reasoning | CoT, ReAct, ToT, self-consistency, reflection | ✅ |
| 03-memory-systems | Short-term memory, long-term memory, vector retrieval | ✅ |
| 04-planning | Task decomposition, plan generation, dynamic replanning | ✅ |
| 05-multi-agent | Debate-style reasoning, collaborative reasoning, consensus building | ✅ |
| 06-autonomous-agent | Goal management, action execution, autonomous loops | ✅ |
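The plumbing behind 01-tool-use can be sketched without any LLM at all: register plain Python functions and dispatch structured calls to them. The snippet is illustrative only; in a real agent, the JSON call would come from the model's function-calling output.

```python
import json
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function so the agent can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: float, b: float) -> float:
    return a + b

@tool
def word_count(text: str) -> int:
    return len(text.split())

def dispatch(call_json: str):
    """Execute a structured tool call of the form {"name": ..., "arguments": {...}}."""
    call = json.loads(call_json)
    return TOOLS[call["name"]](**call["arguments"])

print(dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}'))                  # 5
print(dispatch('{"name": "word_count", "arguments": {"text": "hello agent"}}'))    # 2
```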
Machine learning
Linear models: OLS, Ridge, Lasso, ElasticNet
Classification: Logistic Regression, SVM, KNN
Tree models: Decision Tree, Random Forest, GBDT
Ensemble learning: Bagging, Boosting, Stacking, XGBoost, LightGBM
Dimensionality reduction and clustering: PCA, t-SNE, UMAP, K-Means, DBSCAN

Deep learning
Optimizers: SGD, Momentum, Adam, AdamW, LAMB
Regularization: Dropout, BatchNorm, LayerNorm, Weight Decay
CNN architectures: LeNet → AlexNet → VGG → ResNet → EfficientNet → ViT
Sequence models: RNN → LSTM → GRU → Transformer → BERT → GPT

Reinforcement learning
Value-based methods: Q-Learning, DQN, Double DQN, Dueling DQN, Rainbow
Policy gradient: REINFORCE, PPO, TRPO
Actor-Critic: A2C, A3C, SAC, TD3

Generative models
Autoencoders: AE, VAE, VQ-VAE
Adversarial networks: GAN, DCGAN, WGAN-GP, StyleGAN
Diffusion models: DDPM, Stable Diffusion

Large models
Architectures: Transformer, GPT, LLaMA
Fine-tuning: LoRA, QLoRA, Adapter
Reasoning: RAG, CoT, ReAct, ToT
Alignment: RLHF, DPO
| Deep Learning Frameworks | Data Science | Development Tools |
|---|---|---|
| PyTorch 2.x | NumPy | Python 3.10+ |
| TensorFlow 2.13+ | Pandas | Jupyter Lab |
| Keras 3.x | Scikit-Learn | Docker |
# Clone the repository
git clone https://github.com/zimingttkx/AI-Practices.git
cd AI-Practices
# Create the environment
conda create -n ai-practices python=3.10 -y
conda activate ai-practices
# Install dependencies
pip install -r requirements.txt
# Launch Jupyter
jupyter lab

Hardware requirements
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| RAM | 8 GB | 32 GB |
| GPU | GTX 1060 | RTX 3080+ |
| Storage | 50 GB | 200 GB SSD |
Beginner stage (1-2 months)
├── 01-foundations # Machine learning fundamentals
├── 02-neural-networks # Introduction to neural networks
└── 08-theory-notes # Theory reference
Intermediate stage (2-3 months)
├── 03-computer-vision # Computer vision
├── 04-sequence-models # Sequence models
├── 05-advanced-topics # Advanced topics
└── 06-generative-models # Generative models
Advanced stage (2-3 months)
├── 07-reinforcement-learning # Reinforcement learning
├── 10-large-language-models # Large language models
└── 11-multimodal-learning # Multimodal learning
Engineering stage (1-2 months)
├── 12-deployment-optimization # Deployment optimization
├── 13-distributed-training # Distributed training
└── 14-agents-reasoning # Agent systems
Hands-on stage (ongoing)
└── 09-practical-projects # Practical projects
@misc{ai-practices2024,
author = {zimingttkx},
title = {AI-Practices: A Systematic AI Learning and Practice Platform},
year = {2024},
publisher = {GitHub},
howpublished = {\url{https://github.com/zimingttkx/AI-Practices}}
}

This project is released under the MIT License. See LICENSE for details.