
dataengineering-roadmap
Yet another repository with basic concepts, technical challenges, and resources on data engineering in Spanish 🧙✨
Stars: 574

A repository providing basic concepts, technical challenges, and resources on data engineering in Spanish. It is a curated list of free, Spanish-language materials found on the internet, intended to make it easier for data engineering enthusiasts to study the field. The repository covers programming fundamentals, programming languages like Python, version control with Git, database fundamentals, SQL, design concepts, Big Data, analytics, cloud computing, data processing, and job-search tips for the IT field.
README:
Yet another repository with basic concepts, technical challenges, and resources on data engineering in Spanish 🧙✨
Would you like to contribute to the repository? Check out the contribution guide
Note: the following learning path is laid out according to my own judgment, with the goal of making it easier for anyone interested in data engineering to study from free, openly available, Spanish-language material I found on the internet. It is not a definitive guide or a course; it is a list of resources that can be improved over time through community contributions.
📚 Data engineering books in English
📖 Design Patterns for DE in English
We start by understanding the fundamental concepts of programming and logic. This section can be worked through at the same time as you learn the programming language of your choice (a small sketch of this kind of exercise follows the list below).
- Course: Basic Programming by Platzi
- Videos: Introduction to Algorithms and Programming by TodoCode
- Videos: Pseudocode Exercises by TodoCode
- Videos: Command Line by Datademia
- Videos: Bash scripting by Fazt
- Reading: Introduction to the Linux command line and the shell by Microsoft Learn
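To give a flavor of the kind of exercise these fundamentals resources cover, here is a minimal Python sketch of turning pseudocode ("read a list of grades, compute the average, report pass or fail") into working code; the grades and the passing threshold are invented for the example.

```python
# Minimal sketch of basic programming logic: a function, an aggregate, a conditional.
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

grades = [7, 4, 9, 6, 8]                 # made-up sample data
avg = average(grades)
status = "pass" if avg >= 6 else "fail"  # threshold chosen only for illustration

print(f"Average grade: {avg:.2f} -> {status}")
```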
I recommend starting with Python because of its friendly learning curve and its prevalence in today's industry. That said, data processing can also be done with R, Java, Scala, Julia, and others (a short sketch tying a few of these topics together follows the list below).
- Videos: Python from Scratch by PildorasInformáticas
- Course: Scientific Computing with Python by FreeCodeCamp
- Course: College Algebra with Python by FreeCodeCamp
- Course: Harvard CS50's Introduction to Programming with Python (subtitled) by FreeCodeCamp
- Course: Intermediate Python (subtitled) by FreeCodeCamp
- Course: Pandas by Kaggle
- Videos: Regular Expressions by Ada Lovecode
- Video: Principles of Object-Oriented Programming by BettaTech
- Videos: Object-Oriented Programming Explained with Minecraft by Absolute
- Course: Julia for People in a Hurry by Miguel Raz
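As a small taste of what the regular-expressions and OOP resources above teach, here is a minimal sketch that wraps a regex inside a tiny class; the product codes and prices are invented for illustration.

```python
import re

class Product:
    """Tiny example class, in the spirit of the OOP videos listed above."""

    def __init__(self, code: str, price: float):
        self.code = code
        self.price = price

    def sku_number(self) -> int:
        """Extract the numeric part of a code like 'SKU-042' with a regular expression."""
        match = re.search(r"SKU-(\d+)", self.code)
        if match is None:
            raise ValueError(f"Unrecognized code: {self.code}")
        return int(match.group(1))

products = [Product("SKU-001", 10.0), Product("SKU-042", 25.5)]  # made-up data
for p in products:
    print(p.sku_number(), p.price)
```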
Learning version control is valuable not only when working in teams; it also gives us the ability to track, understand, and manage the changes made to our project, keeping development efficient and collaborative.
- Video: What is version control and why is it so important for programming? by Datademia
- Course: Git and GitHub by MoureDev
- Videos: Git and GitHub by TodoCode
- Reading: Use Git correctly by Atlassian
- Game: Learn Git Branching
- Notebooks: Google Colab, Jupyter, or Deepnote
- IDE: VSCode or Spyder
At this point it's time to learn about databases. The choice of database management system is up to you, although I personally recommend PostgreSQL for structured data and MongoDB for unstructured data. There are, however, many other options: MySQL, SQLite, and so on.
- Videos: Introduction to Databases by TodoCode
- Reading: Differences between DDL, DML, and DCL by TodoPostgreSQL
- Video: Stored Procedures #1 by Héctor de León
- Video: Stored Procedures #2 by Héctor de León
- Video: MongoDB by Fazt
- Videos: MongoDB by MitoCode
You will also learn SQL, a query language for managing and manipulating relational databases; a minimal sketch of the DDL/DML statements mentioned above follows.
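As a minimal illustration of the DDL/DML distinction covered in the reading above, the sketch below uses Python's built-in sqlite3 module; the table and rows are invented, and the same statements apply, with small dialect differences, to PostgreSQL or MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# DDL (Data Definition Language): define the structure.
cur.execute("""
    CREATE TABLE customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        city TEXT
    )
""")

# DML (Data Manipulation Language): insert and query rows.
cur.executemany(
    "INSERT INTO customers (name, city) VALUES (?, ?)",
    [("Ana", "Córdoba"), ("Luis", "Montevideo"), ("Sofía", "Bogotá")],
)
cur.execute("SELECT city, COUNT(*) FROM customers GROUP BY city ORDER BY city")
print(cur.fetchall())

conn.close()
```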
Next we continue with more advanced concepts that will help us design databases, data lakes, data warehouses, schemas, and so on (a short SQL-vs-NoSQL modeling sketch follows the list below).
- Video: When should you use SQL and when NoSQL? by Héctor de León
- Video: How are NoSQL databases modeled? by HolaMundo
- Reading: Graph databases by Oracle
- Video: Graph Databases, Fundamentals and Practice by Datahack
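To make the SQL-vs-NoSQL modeling discussion concrete, here is a small sketch contrasting normalized relational rows with a nested, MongoDB-style document; the order data is invented, and the pymongo call is commented out because it assumes a MongoDB server running locally.

```python
# Relational modeling: orders and order items live in separate, normalized tables
# that are joined at query time.
order_row = {"order_id": 1, "customer_id": 42, "total": 35.5}
order_item_rows = [
    {"order_id": 1, "product": "keyboard", "qty": 1},
    {"order_id": 1, "product": "mouse", "qty": 2},
]

# Document modeling (MongoDB-style): the order embeds its items in one document,
# optimized for reading the whole order at once.
order_document = {
    "order_id": 1,
    "customer": {"id": 42, "name": "Ana"},
    "items": [
        {"product": "keyboard", "qty": 1},
        {"product": "mouse", "qty": 2},
    ],
    "total": 35.5,
}

# With a local MongoDB instance you could persist it like this (requires pymongo):
# from pymongo import MongoClient
# client = MongoClient("mongodb://localhost:27017")
# client["shop"]["orders"].insert_one(order_document)
print(order_document)
```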
The next step is to understand some Big Data concepts. It is also worth picking up basic knowledge of artificial intelligence, business intelligence, and data analysis, without needing to go too deep (a short exploratory-analysis sketch follows the list below).
- Video: Big Data for Dummies by Datahack
- Reading: Big Data: what is it and how does it help my business? by Salesforce
- Certification: Design and program IoT solutions using Big Data by Universidad del Rosario
- Certification: Big Data by University of California San Diego
- Video: Big data and privacy by Databits
- Videos: Data Governance by Smart Data
- Video: How to Get Started with Data Governance without Breaking the Budget by Software Gurú
- Certification: Professional foundations of data analysis, by Microsoft and LinkedIn
- Certification: Google Data Analytics Professional Certificate
- Certification: IBM Data Analyst Professional Certificate
- Course: Data Analysis with Python by FreeCodeCamp
- Video: Storytelling: how do you turn your content into a story? by Coderhouse
- Course: Machine Learning with Python by FreeCodeCamp
- Channel: AprendeIA with Ligdi Gonzalez
- Videos: Learn Artificial Intelligence by Dot CSV
- Video: How to use ChatGPT in data engineering by Datalytics
- Course: Artificial Intelligence (subtitled) by Columbia University
- Videos: Google Business Intelligence Certificate (subtitled) by Google Career
- Videos: Business Intelligence for Everyone! by PEALCALA
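As a small taste of the data-analysis material above, here is a minimal exploratory sketch with pandas (the sales figures are invented): group, aggregate, and sort, the bread and butter of descriptive analytics.

```python
# Requires: pip install pandas
import pandas as pd

# Made-up monthly sales data for illustration.
sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "month":   ["Jan", "Feb", "Jan", "Feb", "Mar"],
    "revenue": [1200, 1350, 800, 950, 1100],
})

summary = (
    sales.groupby("region")["revenue"]
         .agg(["sum", "mean"])
         .sort_values("sum", ascending=False)
)
print(summary)
```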
This section is the heart of data engineering: we will look at what data pipelines are, what an ETL is, orchestrators, and more. I also include a list of key concepts that I will keep updating with their respective resources; if you want to learn them in detail, you can look them up in the books uploaded to the repository. A few minimal sketches illustrating some of these ideas follow the list below.
- Video: Data engineering: a journey to the heart of data projects by RockingData
- Video: How to become a true Data Engineer? by Databits
- Videos: Data Preprocessing in Python by Rocio Chavez
- Videos: Data Preprocessing in R by Rocio Chavez
- Video: A/B testing: data, not opinions by SantanDev
- Incremental loads
- Message queues
- Cron expressions
- Relational model
- Dimensional model
- Facts and dimensions
- Data lake, data mart, data warehouse, and data cube
- Column-oriented vs. row-oriented design
- Star and snowflake schemas
- Schema-on-read and schema-on-write
- Videos: Airflow by Data Engineering LATAM
- Video: Automating ideas with Apache Airflow - Yesi Díaz by Software Gurú
- Videos: Pentaho Spoon by LEARNING-BI
- Videos: Luigi (subtitled) by Seattle Data Guy
- Reading: Azure Data Factory by Microsoft
- Batch data processing
- Real-time (streaming) processing
- Lambda and kappa architectures
- Reading: Key differences between OLAP and OLTP by AWS
- Video: Build batch and streaming ETL with Spark by Databits
- Reading: Containers vs. virtual machines by Atlassian
- Videos: Docker by Pelado Nerd
- Videos: Kubernetes by Pelado Nerd
- Reading: What is a distributed system? by Atlassian
- Videos: Spark by Data Engineering LATAM
- Video: Infrastructure as code for data engineering by Spark México
- Videos: Apache Spark by NullSafe Architect
- Videos: Apache Kafka by NullSafe Architect
- Video: Great Expectations: Validating Data Pipelines like a Pro by CodingEric at PyConAr 2020
- Video: ETL Testing and Its Automation with Python by Patricio Miner at #QSConf 2023
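To make the orchestration resources above more concrete, here is a minimal Airflow DAG sketch: three Python tasks chained as extract → transform → load, scheduled with a cron expression. It assumes Airflow 2.4 or newer is installed, and the task bodies, DAG id, and schedule are invented for illustration; it is not taken from any of the listed courses.

```python
# Minimal Airflow DAG sketch (assumes: pip install apache-airflow, version 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract: pull raw data from the source")


def transform():
    print("transform: clean and enrich the data")


def load():
    print("load: write the result to the warehouse")


with DAG(
    dag_id="example_etl",                 # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",                 # cron expression: every day at 06:00
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load    # dependencies: extract -> transform -> load
```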
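The batch-processing and Spark resources boil down to a read-transform-write pattern; here is a minimal PySpark sketch of it. It assumes pyspark is installed, and the file paths and column names are placeholders, not real datasets.

```python
# Minimal PySpark batch job sketch (assumes: pip install pyspark).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch_etl_example").getOrCreate()

# Extract: read a raw CSV file (placeholder path).
raw = spark.read.csv("data/raw/sales.csv", header=True, inferSchema=True)

# Transform: keep valid rows and aggregate revenue per region.
clean = raw.filter(F.col("amount") > 0)
summary = clean.groupBy("region").agg(F.sum("amount").alias("total_amount"))

# Load: write the result as Parquet (placeholder path).
summary.write.mode("overwrite").parquet("data/processed/sales_by_region")

spark.stop()
```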
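The last two talks are about validating pipelines; without bringing in Great Expectations itself, the core idea can be sketched with plain pandas assertions that could run in CI or as a pipeline step. The checks, column names, and sample rows are invented for illustration.

```python
# Minimal data-quality checks with plain pandas (not Great Expectations itself).
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount":   [10.0, 25.5, 7.25],
    "country":  ["AR", "UY", "CO"],
})

# Expectations: adjust them to your own schema.
assert df["order_id"].is_unique, "order_id must be unique"
assert df["amount"].notna().all(), "amount must not contain nulls"
assert (df["amount"] > 0).all(), "amount must be positive"
assert df["country"].isin(["AR", "UY", "CO", "MX"]).all(), "unexpected country code"

print("All data-quality checks passed ✅")
```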
It is useful to have some cloud computing knowledge. At this point, I would recommend considering preparing for official certifications. Although these exams usually have a cost, you can find free, official preparation resources from the best-known providers in the industry (a tiny object-storage sketch follows the list below).
- Video: Cloud Computing Fundamentals by Datahack
- Reading: Discover the advantages and disadvantages of the cloud by Platzi
- Reading: Architecture for Big Data in the Cloud by Platzi
- Data engineering on Google Cloud
- Data engineering on Microsoft Azure
- Data engineering on AWS (coming soon)
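As a tiny example of the kind of cloud interaction a data engineer automates, here is a sketch that uploads a local file to object storage with boto3. The bucket name and key prefixes are placeholders, and it assumes AWS credentials are already configured on the machine.

```python
# Minimal cloud object-storage sketch (assumes: pip install boto3 and configured AWS credentials).
import boto3

s3 = boto3.client("s3")

# Upload a local CSV into the "raw" zone of a hypothetical data lake bucket.
s3.upload_file("sales.csv", "my-datalake-bucket", "raw/sales/sales.csv")

# List what is stored under that prefix.
response = s3.list_objects_v2(Bucket="my-datalake-bucket", Prefix="raw/sales/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```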
Finally, here are some readings and videos with advice and experiences related to job hunting in the IT field. Technical challenges and other resources on the topic will be added later.
- Video: How to land your first job in data engineering? by Spark México
- Videos: Career Advice for the IT World by TodoCode
- Videos: Essentials for Getting Started in the IT World by Maxi Programa
- Thread: Tips for completing your LinkedIn profile by @natayadev
- Thread: Tips for landing a remote IT job by @natayadev
- Thread: How to create a tidy, readable CV by @iamdoomling
- Thread: Tips for surviving interviews with HR by @iamdoomling
- Video: Programming at companies, startups, or freelance: which is better? by @iamdoomling
- Video: I finished the programming bootcamp, now what? by @iamdoomling
- Video: Working as a contractor from Argentina by @iamdoomling
- Podcast: DevRock by Jonatan Ariste
- (2023) Repository: Code challenges from the MoureDev community
- (2024) Repository: Programming challenges roadmap from the MoureDev community
In progress 😊
If you found this repository useful, give it a star ⭐
Alternative AI tools for dataengineering-roadmap
Similar Open Source Tools


Ai-Hoshino
Ai Hoshino - MD is a WhatsApp bot tool with features like voice and text interaction, group configuration, anti-delete, anti-link, personalized welcome messages, chatbot functionality, sticker creation, sub-bot integration, RPG game, YouTube music and video downloads, and more. The tool is actively maintained by Starlights Team and offers a range of functionalities for WhatsApp users.

AI-LLM-ML-CS-Quant-Overview
AI-LLM-ML-CS-Quant-Overview is a repository providing overview notes on AI, Large Language Models (LLM), Machine Learning (ML), Computer Science (CS), and Quantitative Finance. It covers various topics such as LangGraph & Cursor AI, DeepSeek, MoE (Mixture of Experts), NVIDIA GTC, LLM Essentials, System Design, Computer Systems, Big Data and AI in Finance, Econometrics and Statistics Conference, C++ Design Patterns and Derivatives Pricing, High-Frequency Finance, Machine Learning for Algorithmic Trading, Stochastic Volatility Modeling, Quant Job Interview Questions, Distributed Systems, Language Models, Designing Machine Learning Systems, Designing Data-Intensive Applications (DDIA), Distributed Machine Learning, and The Elements of Quantitative Investing.

Awesome-Graph-LLM
Awesome-Graph-LLM is a curated collection of research papers exploring the intersection of graph-based techniques with Large Language Models (LLMs). The repository aims to bridge the gap between LLMs and graph structures prevalent in real-world applications by providing a comprehensive list of papers covering various aspects of graph reasoning, node classification, graph classification/regression, knowledge graphs, multimodal models, applications, and tools. It serves as a valuable resource for researchers and practitioners interested in leveraging LLMs for graph-related tasks.

AI-LLM-ML-CS-Quant-Review
This repository provides an in-depth review of industry trends in AI, Large Language Models (LLMs), Machine Learning, Computer Science, and Quantitative Finance. It covers various topics such as NVIDIA GTC conferences, DeepSeek theory and applications, LangGraph & Cursor AI, LLM essentials, system design, computer systems, big data and AI in finance, C++ design patterns, high-frequency finance, machine learning for algorithmic trading, stochastic volatility modeling, and quant job interview questions.

ERNIE-SDK
ERNIE SDK repository contains two projects: ERNIE Bot Agent and ERNIE Bot. ERNIE Bot Agent is a large model intelligent agent development framework based on the Wenxin large model orchestration capability introduced by Baidu PaddlePaddle, combined with the rich preset platform functions of the PaddlePaddle Star River community. ERNIE Bot provides developers with convenient interfaces to easily call the Wenxin large model for text creation, general conversation, semantic vectors, and AI drawing basic functions.

Embodied-AI-Guide
Embodied-AI-Guide is a comprehensive guide for beginners to understand Embodied AI, focusing on the path of entry and useful information in the field. It covers topics such as Reinforcement Learning, Imitation Learning, Large Language Model for Robotics, 3D Vision, Control, Benchmarks, and provides resources for building cognitive understanding. The repository aims to help newcomers quickly establish knowledge in the field of Embodied AI.

bce-qianfan-sdk
The Qianfan SDK provides best practices for large model toolchains, allowing AI workflows and AI-native applications to access the Qianfan large model platform elegantly and conveniently. The core capabilities of the SDK include three parts:
* `Large model reasoning`: implements interface encapsulation for reasoning with the Yuyan (ERNIE-Bot) series, open-source large models, etc., supporting dialogue, completion, Embedding, and more.
* `Large model training`: based on platform capabilities, it supports the end-to-end large model training process, including training data, fine-tuning/pre-training, and model services.
* `General and extension`: general capabilities include common AI development tools such as Prompt/Debug/Client; the extension capability is based on the characteristics of Qianfan to adapt to common middleware frameworks.

meet-libai
The 'meet-libai' project aims to promote and popularize the cultural heritage of the Chinese poet Li Bai by constructing a knowledge graph of Li Bai and training a professional AI intelligent body using large models. The project includes features such as data preprocessing, knowledge graph construction, question-answering system development, and visualization exploration of the graph structure. It also provides code implementations for large models and RAG retrieval enhancement.

nekro-agent
Nekro Agent is an AI chat plugin and proxy execution bot that is highly scalable, offers high freedom, and has minimal deployment requirements. It features context-aware chat for group/private chats, custom character settings, sandboxed execution environment, interactive image resource handling, customizable extension development interface, easy deployment with docker-compose, integration with Stable Diffusion for AI drawing capabilities, support for various file types interaction, hot configuration updates and command control, native multimodal understanding, visual application management control panel, CoT (Chain of Thought) support, self-triggered timers and holiday greetings, event notification understanding, and more. It allows for third-party extensions and AI-generated extensions, and includes features like automatic context trigger based on LLM, and a variety of basic commands for bot administrators.

ai-chat-local
The ai-chat-local repository is a project that implements a local model customization using ollama, allowing data privacy and bypassing detection. It integrates with the esp32 development board to create a simple chat assistant. The project includes features like database integration for chat persistence, user recognition, and AI-based document processing. It also offers a graphical configuration tool for Dify.AI implementation.

gpt_academic
GPT Academic is a powerful tool that leverages the capabilities of large language models (LLMs) to enhance academic research and writing. It provides a user-friendly interface that allows researchers, students, and professionals to interact with LLMs and utilize their abilities for various academic tasks. With GPT Academic, users can access a wide range of features and functionalities, including:
* **Summarization and Paraphrasing:** GPT Academic can summarize complex texts, articles, and research papers into concise and informative summaries. It can also paraphrase text to improve clarity and readability.
* **Question Answering:** Users can ask GPT Academic questions related to their research or studies, and the tool will provide comprehensive and well-informed answers based on its knowledge and understanding of the relevant literature.
* **Code Generation and Explanation:** GPT Academic can generate code snippets and provide explanations for complex coding concepts. It can also help debug code and suggest improvements.
* **Translation:** GPT Academic supports translation of text between multiple languages, making it a valuable tool for researchers working with international collaborations or accessing resources in different languages.
* **Citation and Reference Management:** GPT Academic can help users manage their citations and references by automatically generating citations in various formats and providing suggestions for relevant references based on the user's research topic.
* **Collaboration and Note-Taking:** GPT Academic allows users to collaborate on projects and take notes within the tool. They can share their work with others and access a shared workspace for real-time collaboration.
* **Customizable Interface:** GPT Academic offers a customizable interface that allows users to tailor the tool to their specific needs and preferences. They can choose from a variety of themes, adjust the layout, and add or remove features to create a personalized workspace.
Overall, GPT Academic is a versatile and powerful tool that can significantly enhance the productivity and efficiency of academic research and writing. It empowers users to leverage the capabilities of LLMs and unlock new possibilities for academic exploration and knowledge creation.

langchat
LangChat is an enterprise AIGC project solution in the Java ecosystem. It integrates AIGC large model functionality on top of the RBAC permission system to help enterprises quickly customize AI knowledge bases and enterprise AI robots. It supports integration with various large models such as OpenAI, Gemini, Ollama, Azure, Zhifu, Alibaba Tongyi, Baidu Qianfan, etc. The project is developed solely by TyCoding and is continuously evolving. It features multi-modality, dynamic configuration, knowledge base support, advanced RAG capabilities, function call customization, multi-channel deployment, workflows visualization, AIGC client application, and more.

HuaTuoAI
HuaTuoAI is an artificial intelligence image classification system specifically designed for traditional Chinese medicine. It utilizes deep learning techniques, such as Convolutional Neural Networks (CNN), to accurately classify Chinese herbs and ingredients based on input images. The project aims to unlock the secrets of plants, depict the unknown realm of Chinese medicine using technology and intelligence, and perpetuate ancient cultural heritage.

generative-ai-use-cases-jp
Generative AI (生成 AI) brings revolutionary potential to transform businesses. This repository demonstrates business use cases leveraging Generative AI.
For similar tasks

Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (containing a demo web application, Power BI reports, Synapse resources, AML Notebooks, etc.) that can be deployed in a customer's subscription using the CAPE tool within a matter of a few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.

sorrentum
Sorrentum is an open-source project that aims to combine open-source development, startups, and brilliant students to build machine learning, AI, and Web3 / DeFi protocols geared towards finance and economics. The project provides opportunities for internships, research assistantships, and development grants, as well as the chance to work on cutting-edge problems, learn about startups, write academic papers, and get internships and full-time positions at companies working on Sorrentum applications.

tidb
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.

zep-python
Zep is an open-source platform for building and deploying large language model (LLM) applications. It provides a suite of tools and services that make it easy to integrate LLMs into your applications, including chat history memory, embedding, vector search, and data enrichment. Zep is designed to be scalable, reliable, and easy to use, making it a great choice for developers who want to build LLM-powered applications quickly and easily.

telemetry-airflow
This repository codifies the Airflow cluster that is deployed at workflow.telemetry.mozilla.org (behind SSO) and commonly referred to as "WTMO" or simply "Airflow". Some links relevant to users and developers of WTMO: * The `dags` directory in this repository contains some custom DAG definitions * Many of the DAGs registered with WTMO don't live in this repository, but are instead generated from ETL task definitions in bigquery-etl * The Data SRE team maintains a WTMO Developer Guide (behind SSO)

mojo
Mojo is a new programming language that bridges the gap between research and production by combining Python syntax and ecosystem with systems programming and metaprogramming features. Mojo is still young, but it is designed to become a superset of Python over time.

pandas-ai
PandasAI is a Python library that makes it easy to ask questions to your data in natural language. It helps you to explore, clean, and analyze your data using generative AI.

databend
Databend is an open-source cloud data warehouse that serves as a cost-effective alternative to Snowflake. With its focus on fast query execution and data ingestion, it's designed for complex analysis of the world's largest datasets.
For similar jobs

AirGo
AirGo is a multi-user, multi-protocol proxy service management system with separated front end and back end that is simple and easy to use. It supports vless, vmess, shadowsocks, and hysteria2.

mosec
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backends and microservices. It bridges the gap between any machine learning model you just trained and an efficient online service API.
* **Highly performant**: web layer and task coordination built with Rust 🦀, which offers blazing speed in addition to efficient CPU utilization powered by async I/O
* **Ease of use**: user interface purely in Python 🐍, by which users can serve their models in an ML framework-agnostic manner using the same code as they do for offline testing
* **Dynamic batching**: aggregate requests from different users for batched inference and distribute results back
* **Pipelined stages**: spawn multiple processes for pipelined stages to handle CPU/GPU/IO mixed workloads
* **Cloud friendly**: designed to run in the cloud, with model warmup, graceful shutdown, and Prometheus monitoring metrics, easily managed by Kubernetes or any container orchestration system
* **Do one thing well**: focus on the online serving part, so users can pay attention to model optimization and business logic

llm-code-interpreter
The 'llm-code-interpreter' repository is a deprecated plugin that provides a code interpreter on steroids for ChatGPT by E2B. It gives ChatGPT access to a sandboxed cloud environment with capabilities like running any code, accessing Linux OS, installing programs, using filesystem, running processes, and accessing the internet. The plugin exposes commands to run shell commands, read files, and write files, enabling various possibilities such as running different languages, installing programs, starting servers, deploying websites, and more. It is powered by the E2B API and is designed for agents to freely experiment within a sandboxed environment.

pezzo
Pezzo is a fully cloud-native and open-source LLMOps platform that allows users to observe and monitor AI operations, troubleshoot issues, save costs and latency, collaborate, manage prompts, and deliver AI changes instantly. It supports various clients for prompt management, observability, and caching. Users can run the full Pezzo stack locally using Docker Compose, with prerequisites including Node.js 18+, Docker, and a GraphQL Language Feature Support VSCode Extension. Contributions are welcome, and the source code is available under the Apache 2.0 License.

learn-generative-ai
Learn Cloud Applied Generative AI Engineering (GenEng) is a course focusing on the application of generative AI technologies in various industries. The course covers topics such as the economic impact of generative AI, the role of developers in adopting and integrating generative AI technologies, and the future trends in generative AI. Students will learn about tools like OpenAI API, LangChain, and Pinecone, and how to build and deploy Large Language Models (LLMs) for different applications. The course also explores the convergence of generative AI with Web 3.0 and its potential implications for decentralized intelligence.

gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.

fluid
Fluid is an open source Kubernetes-native Distributed Dataset Orchestrator and Accelerator for data-intensive applications, such as big data and AI applications. It implements dataset abstraction, scalable cache runtime, automated data operations, elasticity and scheduling, and is runtime platform agnostic. Key concepts include Dataset and Runtime. Prerequisites include Kubernetes version > 1.16, Golang 1.18+, and Helm 3. The tool offers features like accelerating remote file accessing, machine learning, accelerating PVC, preloading dataset, and on-the-fly dataset cache scaling. Contributions are welcomed, and the project is under the Apache 2.0 license with a vendor-neutral approach.

aiges
AIGES is a core component of the Athena Serving Framework, designed as a universal encapsulation tool for AI developers to deploy AI algorithm models and engines quickly. By integrating AIGES, you can deploy AI algorithm models and engines rapidly and host them on the Athena Serving Framework, utilizing supporting auxiliary systems for networking, distribution strategies, data processing, etc. The Athena Serving Framework aims to accelerate the cloud service of AI algorithm models and engines, providing multiple guarantees for cloud service stability through cloud-native architecture. You can efficiently and securely deploy, upgrade, scale, operate, and monitor models and engines without focusing on underlying infrastructure and service-related development, governance, and operations.