
cohort_structure
This repository contains detailed information about the structure of the AI Saturdays Lagos Cohort.
Stars: 75

The Machine Learning (ML) Flipped Cohort is a 12-week structured program designed for beginners to gain foundational to intermediate ML knowledge. Participants consume pre-recorded content during the week and engage in weekly community discussions. The program covers topics such as Python, data science foundations, databases, math for ML, text processing, linear regression, non-linear modeling, deep learning basics, and more. Participants work on capstone projects and are assessed through Google Forms. Certification requires minimum attendance, assessment scores, and participation in the final project. The cohort provides a supportive learning environment with mentorship and community interaction.
README:
The Machine Learning (ML) Flipped Cohort is a structured, community-driven, 12-week Data Science and Machine Learning cohort designed for beginners. The goal is to equip individuals with foundational to intermediate ML knowledge using a flipped classroom model: learners independently consume pre-recorded content during the week, then attend a weekly community call to discuss, explore, and ask questions about what they’ve learned.
We follow a flipped classroom model where:
- Participants watch curated pre-recorded lectures and complete labs during the week.
- Every Saturday, participants attend a community call to engage with one of the organizers.
Each week you will:
- Be assigned selected videos (from a curated playlist of lectures and labs)
- Receive supporting materials such as Jupyter notebooks, slides, and assessments
- Join a live Zoom session on weekends to engage with instructors and peers
- Interact daily on Discord for Q&A, collaboration, and accountability
By the end of the cohort, you will:
- Participate in capstone projects and present your solution to demonstrate real-world understanding
- Earn a certificate if all conditions are met (see below)
This cohort is ideal for:
- Students and recent graduates exploring data science or ML
- Career switchers with programming experience aiming to enter ML roles
- Self-learners seeking structure, mentorship, and a community
- You!
Prerequisite: Basic Python knowledge is expected
We’ll provide beginner-friendly Python resources during Week 1 for anyone needing a refresher.
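As a rough self-check (illustrative only, not part of the official cohort materials): if you can read the short snippet below and predict its output, you already have the level of Python assumed at the start of Week 1.

```python
# Illustrative self-check (not an official assessment): if you can predict
# this script's output, you have the "basic Python" the cohort assumes.

def weekly_hours(schedule):
    """Sum the study hours recorded for a week."""
    return sum(schedule.values())

week_1 = {"Mon": 1.5, "Wed": 1.5, "Sat": 2.0}  # hypothetical study log
focus_days = [day for day, hours in week_1.items() if hours >= 1.5]

print(f"Total hours: {weekly_hours(week_1)}")   # Total hours: 5.0
print(f"Focus days: {', '.join(focus_days)}")   # Focus days: Mon, Wed, Sat
```

If any of this looks unfamiliar, the Week 1 refresher resources will cover it.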
The cohort will run for 10–12 weeks, broken down into:
- 10 weeks of structured learning
- Capstone projects
Important Dates
- Cohort Start Date: July 26, 2025
- Cohort End Date: October 18, 2025
| Tool | Purpose | Link |
|---|---|---|
| GitHub | All materials, assignments, and resources | Cohort Repository |
| Gmail Group | Announcements & notifications | AI6 Lagos Group |
| Zoom | Weekly community sessions & project demos | Link shared weekly |
| Discord | Daily interaction, Q&A, accountability & support | Join Discord |
| YouTube | Pre-recorded lectures & community session recordings | Pre-recorded Lectures & Lab, C9 - Weekly Community Sessions |
Each week will follow this schedule:
- Sundays: An email with the week's videos, labs, notebooks, and slides is sent to participants
- Saturdays: Complete and submit assessments by 9:00 AM WAT
- Saturdays: Attend a 2-hour community discussion via Zoom (10:00 AM – 12:00 PM WAT)
There will be an onboarding session on July 26 at 10:00 AM WAT.
| Week | Dates | Topics | Lectures | Labs | Assessment | Suggested Weekly Schedule |
|---|---|---|---|---|---|---|
| 0 | Jul 26 | Onboarding & Kickoff | - | - | - | - |
| 1 | Jul 27 – Aug 2 | Python & Numerical Computing | ☘️ Python Refresher: Lecture Video, Lecture Notebook <br> ☘️ Numerical Computing with Python and NumPy: Lecture Video, Lecture Notebook | - | Link | Mon: Python Refresher Lecture <br> Wed: NumPy Lecture |
| 2 | Aug 3 – Aug 9 | Data Science Foundations | ☘️ Introduction to Data Science: Lecture Video, Lecture Slides <br> ☘️ Data Collection and Scraping: Lecture Video, Lecture Slides | 🍒 Introduction to Git and GitHub: Lab Video, Lab Slides <br> 🍒 Data Collection and Scraping: Lab Video, Lab Notebook | Link | Mon: Intro to DS Lecture <br> Tue: Intro + Git/GitHub Lab <br> Wed: Data Collection Lecture <br> Thu: Data Collection Lab |
| 3 | Aug 10 – Aug 16 | Databases, SQL & Exploratory Data Analysis | ☘️ Relational Data: Lecture Video, Lecture Slides <br> ☘️ Visualization and Data Exploration: Lecture Video, Lecture Slides | 🍒 Relational Data and SQL: Lab Video, Lab Notebook <br> 🍒 Data Exploration and Visualization: Lab Video, Lab Notebook | Link | Mon: Relational Data Lecture <br> Tue: Relational Data Lab <br> Wed: Data Exploration Lecture <br> Thu: Data Exploration Lab |
| 4 | Aug 17 – Aug 23 | Math for ML | ☘️ Linear Algebra: Lecture Video, Lecture Notebook, Lecture Slides | - | TBD | Mon: Linear Algebra Lecture <br> Wed: Linear Algebra Notebook |
| 5 | Aug 24 – Aug 30 | Text Processing | ☘️ Free Text and Natural Language Processing: Lecture Video, Lecture Slides | 🍒 Text Processing: Lab Video, Lab Notebook | | Mon: Free Text & NLP Lecture <br> Wed: Text Processing Lab |
| Project Checkpoint | | | | | | |
| 6 | Aug 31 – Sep 6 | Linear Regression & Classification Models | ☘️ Introduction to Machine Learning & Linear Regression: Lecture Video, Lecture Slides <br> ☘️ Linear Classification: Lecture Video, Lecture Slides | 🍒 Linear Regression and Classification: Lab Video, Lab Notebook | TBD | Mon: Introduction to ML Lecture <br> Wed: Linear Classification Lecture <br> Thu: Linear Regression & Classification Lab |
| 7 | Sep 7 – Sep 13 | Non-Linear Modeling & Interpretable ML | ☘️ Nonlinear Modeling, Cross-Validation: Lecture Video, Lecture Slides <br> ☘️ Decision Trees, Interpretable Models: Lecture Video, Lecture Slides | 🍒 Nonlinear Modeling: Lab Video, Lab Notebook | TBD | Mon: Nonlinear Modeling Lecture <br> Tue: Nonlinear Modeling Lab <br> Wed: Decision Trees Lecture |
| 8 | Sep 14 – Sep 20 | Probabilistic Models | ☘️ Basics of Probability: Lecture Video, Lecture Slides <br> ☘️ Maximum Likelihood Estimation, Naive Bayes: Lecture Video, Lecture Slides | - | TBD | Mon: Basics of Probability Lecture <br> Wed: MLE, Naive Bayes Lecture |
| 9 | Sep 21 – Sep 27 | Unsupervised Learning & Recommendation Systems | ☘️ Unsupervised Learning: Lecture Video, Lecture Slides <br> ☘️ Recommendation Systems: Lecture Video, Lecture Slides | 🍒 Unsupervised Learning: Lab Video, Lab Notebook <br> 🍒 Recommendation Systems: Lab Notebook | TBD | Mon: Unsupervised Learning Lecture <br> Tue: Unsupervised Learning Lab <br> Wed: Recommendation Systems Lecture <br> Thu: Recommendation Systems Lab |
| 10 | Sep 28 – Oct 4 | Deep Learning Basics | ☘️ Introduction to Deep Learning: Lecture Video, Lecture Slides | 🍒 Neural Networks: Lab Video, Lab Notebook | TBD | Mon: Deep Learning Lecture <br> Wed: Neural Network Lab |
| Capstone Project Submission | | | | | | |
| 12 | Oct 18 | Project Presentations | - | - | - | - |
- Submitted via Google Forms
- Deadline: 1 hour before the community call on Saturdays
- Reviewed live during the discussion
TBD
To receive a Certificate of Completion:
- 60% minimum attendance at community calls (tracked via Google Forms)
- 40% average assessment score
- 100% participation in the final project (submission required)
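For illustration only, the three conditions above combine as a simple all-of rule. Here is a minimal sketch of that logic (a hypothetical helper, not the organizers' actual tracking, which happens through Google Forms):

```python
# Hypothetical sketch of the certification rule above; actual tracking is
# done by the organizers via Google Forms, not by this script.

def eligible_for_certificate(attendance_pct: float,
                             avg_assessment_pct: float,
                             submitted_final_project: bool) -> bool:
    """Return True only if all three certification conditions are met."""
    return (attendance_pct >= 60
            and avg_assessment_pct >= 40
            and submitted_final_project)

print(eligible_for_certificate(75, 55, True))   # True
print(eligible_for_certificate(50, 80, True))   # False: attendance below 60%
```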
TBD
You are encouraged to explore the following:
- ML Zoomcamp – DataTalksClub
- CMU Data Science Course
- Stanford ML Course – Andrew Ng
- Machine Learning @ VU Amsterdam
This cohort is built on the foundation laid by the incredible work from Cohort 8 (C8) — its lectures, labs, and community contributions. We are deeply grateful to the selfless volunteers who made it all possible: class instructors, lab facilitators, mentors, and countless others who gave their time and expertise.
Our community is fortunate to be supported by such a generous, talented, and inspiring group of individuals. Thank you for your continued impact.
- Afolabi Animashaun
- Akintayo Jabar
- Allen Akinkunle
- Aseda Addai-Deseh
- Deborah Kanubala
- Ejiro Onose
- Emefa Duah
- Femi Ogunbode
- Fortune Adekogbe
- Foutse Yuehgoh
- Funmito Adeyemi
- Joscha Cüppers
- Khadija Iddrisu
- Kenechi Dukor
- Lawrence Francis
- Olumide Okubadejo
- Oluwaseun Ajayi
- Oluwatoyin Yetunde Sanni
- Sandra Oriji
- Steven Kolawole
- Tejumade Afonja
- Wuraola Oyewusi
This effort is brought to you by our amazing team of volunteers — thank you for your time, dedication, and leadership.
- Adetola Adetunji
- Ibrahim Gana
- Jesuyanmife Egbewale (cohort lead)
- Kenechi Dukor
- Oluwafemi Azeez
- Sharon Alawode
- Simon Ubi
- Tejumade Afonja