
awesome-green-ai
A curated list of awesome Green AI resources and tools to assess and reduce the environmental impacts of using and deploying AI.

Awesome Green AI is a curated list of resources and tools aimed at assessing and reducing the environmental impacts of using and deploying AI. It covers the carbon footprint of the ICT sector and stresses that the environmental impacts of AI extend beyond GHG emissions and electricity consumption. The tools listed include code-based tools for measuring environmental impacts, monitoring tools for power consumption, optimization tools for energy efficiency, and calculation tools for estimating the environmental impacts of algorithms and models. The repository also includes leaderboards, papers, survey papers, and reports related to green AI and environmental sustainability in the AI sector.
In 2020, the Information and Communications Technology (ICT) sector's carbon footprint was estimated at 2.1-3.9% of total global greenhouse gas emissions. The ICT sector continues to grow faster than most other industries, and its carbon footprint is estimated to double to 6-8% by 2025. For the ICT sector to remain compliant with the Paris Agreement, the industry must cut its GHG emissions by 45% between 2020 and 2030 and reach net zero by 2050 (Freitag et al., 2021).
AI is one of the fastest-growing sectors and is disrupting many other industries (AI Market Size Report, 2022). It therefore has an important role to play in reducing the global carbon footprint. The impacts of ICT, and therefore of AI, are not limited to GHG emissions and electricity consumption: all major impacts (abiotic resource depletion, primary energy consumption, water usage, etc.) need to be taken into account using Life Cycle Assessment (LCA) (Arushanyan et al., 2013).
AI sobriety means not only optimizing energy consumption and reducing impacts, but also studying the indirect impacts and rebound effects that can negate all efforts to reduce the environmental footprint (Willenbacher et al., 2021). It is therefore imperative to question the use of AI before launching a project, in order to avoid indirect impacts and rebound effects later on.
All contributions are welcome. Add links through pull requests or create an issue to start a discussion.
Tools to measure and compute the environmental impacts of AI (a minimal usage sketch follows below):
- CodeCarbon - Track emissions from Compute and recommend ways to reduce their impact on the environment.
- carbontracker - Track and predict the energy consumption and carbon footprint of training deep learning models.
- Eco2AI - A Python library which accumulates statistics about power consumption and CO2 emissions during code execution.
- Zeus - A framework for deep learning energy measurement and optimization.
- EcoLogits - Estimates the energy consumption and environmental footprint of LLM inference through APIs.
- Tracarbon - Tracks your device's energy consumption and calculates your carbon emissions using your location.
- AIPowerMeter - Easily monitor energy usage of machine learning programs.
⚠️ No longer maintained:
- carbonai - Python package to monitor the power consumption of any algorithm.
- experiment-impact-tracker - A simple drop-in method to track energy usage, carbon emissions, and compute utilization of your system.
- GATorch - An Energy-Aware PyTorch Extension.
- GPU Meter - Power Consumption Meter for NVIDIA GPUs.
- PyJoules - A Python library to capture the energy consumption of code snippets.
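To give a feel for how these trackers are used, here is a minimal sketch with CodeCarbon's EmissionsTracker (the training function is a placeholder workload):

```python
from codecarbon import EmissionsTracker

def train_model():
    # Placeholder for a real training loop.
    total = 0
    for i in range(1_000_000):
        total += i
    return total

# Everything executed between start() and stop() is measured.
tracker = EmissionsTracker(project_name="green-ai-demo")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kgCO2eq

print(f"Estimated emissions: {emissions_kg:.6f} kgCO2eq")
```

Most of the other trackers above follow the same start/measure/stop pattern, differing mainly in how they attribute energy to processes and convert it to CO2eq.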
Tools to monitor power consumption and environmental impacts (a sketch of the underlying RAPL counters follows below):
- Scaphandre - A metrology agent dedicated to electrical power consumption metrics.
- CodeCarbon - Track emissions from Compute and recommend ways to reduce their impact on the environment.
- PowerJoular - Monitor power consumption of multiple platforms and processes.
- ALUMET - A modular and efficient software measurement tool.
- cardamon - A tool for measuring the power consumption and carbon footprint of your software.
- Boagent - Local API and monitoring agent focused on the environmental impacts of the host.
- PowerLetrics - A framework designed to monitor and analyze power consumption metrics at the process level on Linux.
⚠️ No longer maintained:
- vJoule - A tool to estimate the energy consumption of your processes.
- jupyter-power-usage - Jupyter extension to display CPU and GPU power usage and carbon emissions.
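On Linux, several of the tools above ultimately read the Intel RAPL energy counters exposed through the powercap sysfs interface. A minimal sketch of that mechanism, assuming a machine that exposes /sys/class/powercap/intel-rapl:0 (the path and its availability vary by hardware and permissions):

```python
import time

# RAPL exposes a monotonically increasing energy counter in microjoules.
# intel-rapl:0 is typically the first CPU package on Intel machines.
RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj() -> int:
    with open(RAPL_ENERGY_FILE) as f:
        return int(f.read().strip())

# Average package power over a one-second window.
start = read_energy_uj()
time.sleep(1.0)
end = read_energy_uj()

# Note: the counter wraps around; a robust tool would handle overflow
# using the adjacent max_energy_range_uj file.
watts = (end - start) / 1e6
print(f"Average CPU package power: {watts:.2f} W")
```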
Tools to optimize energy consumption or environmental impacts (a Zeus measurement sketch follows below):
- Zeus - A framework for deep learning energy measurement and optimization.
- GEOPM - A framework to enable efficient power management and performance optimizations.
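As an example of the measurement side, below is a sketch based on Zeus's ZeusMonitor as documented upstream; the window name and GPU index are arbitrary, and the workload is a placeholder:

```python
import time
from zeus.monitor import ZeusMonitor

def run_training_epoch():
    # Placeholder for a real GPU workload.
    time.sleep(1.0)

# Measure energy on GPU 0; multi-GPU windows are also supported.
monitor = ZeusMonitor(gpu_indices=[0])

monitor.begin_window("training")
run_training_epoch()
measurement = monitor.end_window("training")

print(f"Time: {measurement.time:.1f} s, energy: {measurement.total_energy:.1f} J")
```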
Tools to estimate the environmental impacts of algorithms, models, and compute resources (a back-of-the-envelope sketch follows below):
- Green Algorithms - A tool to easily estimate the carbon footprint of a project.
- ML CO2 Impact - Compute model emissions and add the results to your paper with our generated LaTeX template.
- EcoLogits Calculator - Estimate energy consumption and environmental impacts of LLM inference.
- AI Carbon - Estimate your AI model's carbon footprint.
- MLCarbon - End-to-end carbon footprint modeling tool.
- GenAI Carbon Footprint - A tool to estimate energy use (kWh) and carbon emissions (gCO2eq) from LLM usage.
- Carbon footprint modeling tool - A data model and a viewer for carbon footprint scenarios.
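For intuition, these calculators combine runtime, hardware power draw, datacenter overhead (PUE), and grid carbon intensity. Below is a back-of-the-envelope sketch in the spirit of the Green Algorithms methodology; the constants are illustrative assumptions, not any tool's exact defaults:

```python
# Rough carbon-footprint estimate for a compute job, in the spirit of
# Green Algorithms (Lannelongue et al., 2021). All constants are illustrative.

runtime_h = 24.0          # wall-clock runtime (hours)
n_cores = 8               # CPU cores used
power_per_core_w = 12.0   # assumed per-core power draw (W)
core_usage = 0.9          # assumed average core utilization
memory_gb = 64.0          # memory provisioned (GB)
power_per_gb_w = 0.3725   # assumed memory power factor (W/GB)
pue = 1.67                # datacenter Power Usage Effectiveness
carbon_intensity = 475.0  # grid carbon intensity (gCO2eq/kWh)

power_w = n_cores * power_per_core_w * core_usage + memory_gb * power_per_gb_w
energy_kwh = runtime_h * power_w * pue / 1000.0
footprint_kg = energy_kwh * carbon_intensity / 1000.0

print(f"Energy: {energy_kwh:.2f} kWh, footprint: {footprint_kg:.2f} kgCO2eq")
```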
Generic tools (a hedged Boaviztapi query sketch follows below):
- Boaviztapi - Multi-criteria impacts of compute resources, taking into account manufacturing and usage.
- Datavizta - Compute resources data explorer, not limited to AI.
- EcoDiag - Compute the carbon footprint of IT resources, taking into account manufacturing and usage (🇫🇷 only).
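Boaviztapi is exposed as a REST API, so impacts can be queried programmatically. The sketch below targets the public demo instance; the base URL, endpoint path, and parameter names are assumptions drawn from the project's documentation and may differ between versions:

```python
import requests

# Hypothetical query against Boaviztapi's public demo instance;
# endpoint and parameters are assumptions, consult the API docs.
BASE_URL = "https://api.boavizta.org"

resp = requests.get(
    f"{BASE_URL}/v1/cloud/instance",
    params={"provider": "aws", "instance_type": "a1.4xlarge"},
    timeout=30,
)
resp.raise_for_status()

# The response reports multi-criteria impacts (e.g. GWP, primary energy)
# split between the manufacturing (embedded) and usage phases.
print(resp.json())
```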
Leaderboards:
- LLM Perf Leaderboard - Benchmarking LLMs on performance and energy.
- ML.Energy Leaderboard - Energy consumption of GenAI models at inference.
- AI Energy Score Leaderboard - Energy efficiency ratings for AI models.
Papers:
- Energy and Policy Considerations for Deep Learning in NLP - Strubell et al. (2019)
- Quantifying the Carbon Emissions of Machine Learning - Lacoste et al. (2019)
- Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models - Anthony et al. (2020)
- The carbon impact of artificial intelligence - Dhar (2020)
- Green AI - Schwartz et al. (2020)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning - Henderson et al. (2020)
- The Energy and Carbon Footprint of Training End-to-End Speech Recognizers - Parcollet et al. (2021)
- Carbon Emissions and Large Neural Network Training - Patterson et al. (2021)
- Green Algorithms: Quantifying the Carbon Footprint of Computation - Lannelongue et al. (2021)
- Aligning artificial intelligence with climate change mitigation - Kaack et al. (2021)
- A Practical Guide to Quantifying Carbon Emissions for Machine Learning researchers and practitioners - Ligozat et al. (2021)
- Unraveling the Hidden Environmental Impacts of AI Solutions for Environment: Life Cycle Assessment of AI Solutions - Ligozat et al. (2022)
- Measuring the Carbon Intensity of AI in Cloud Instances - Dodge et al. (2022)
- Green AI: do deep learning frameworks have different costs? - Georgiou et al. (2022)
- Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model - Luccioni et al. (2022)
- Bridging Fairness and Environmental Sustainability in Natural Language Processing - Hessenthaler et al. (2022)
- Eco2AI: carbon emissions tracking of machine learning models as the first step towards sustainable AI - Budennyy et al. (2022)
- Environmental assessment of projects involving AI methods - Lefèvre et al. (2022)
- Sustainable AI: Environmental Implications, Challenges and Opportunities - Wu et al. (2022)
- The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink - Patterson et al. (2022)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning - Henderson et al. (2022)
- Towards Sustainable Artificial Intelligence: An Overview of Environmental Protection Uses and Issues - Pachot et al. (2022)
- Method and evaluations of the effective gain of artificial intelligence models for reducing CO2 emissions - Delanoë et al. (2023)
- Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models - Li et al. (2023)
- Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training - You et al. (2023)
- Trends in AI inference energy consumption: Beyond the performance-vs-parameter laws of deep learning - Desislavov et al. (2023)
- Chasing Low-Carbon Electricity for Practical and Sustainable DNN Training - Yang et al. (2023)
- Toward Sustainable HPC: Carbon Footprint Estimation and Environmental Implications of HPC Systems - Li et al. (2023)
- Reducing the Carbon Impact of Generative AI Inference (today and in 2035) - Chien et al. (2023)
- LLMCarbon: Modeling the End-To-End Carbon Footprint of Large Language Models - Faiz et al. (2023)
- The growing energy footprint of artificial intelligence - De Vries (2023)
- Exploring the Carbon Footprint of Hugging Face's ML Models: A Repository Mining Study - Castaño et al. (2023)
- Exploding AI Power Use: an Opportunity to Rethink Grid Planning and Management - Lin et al. (2023)
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? - Luccioni et al. (2023)
- Perseus: Removing Energy Bloat from Large Model Training - Chung et al. (2023)
- Timeshifting strategies for carbon-efficient long-running large language model training - Jagannadharao et al. (2023)
- From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference - Samsi et al. (2023)
- Estimating the environmental impact of Generative-AI services using an LCA-based methodology - Berthelot et al. (2024)
- Towards Greener LLMs: Bringing Energy-Efficiency to the Forefront of LLM Inference - Stojkovic et al. (2024)
- Green AI: Exploring Carbon Footprints, Mitigation Strategies, and Trade Offs in Large Language Model Training - Liu et al. (2024)
- Engineering Carbon Emission-aware Machine Learning Pipelines - Husom et al. (2024)
- A simplified machine learning product carbon footprint evaluation tool - Lang et al. (2024)
- Beyond Efficiency: Scaling AI Sustainably - Wu et al. (2024)
- The Price of Prompting: Profiling Energy Use in Large Language Models Inference - Husom et al. (2024)
- MLCA: a tool for Machine Learning Life Cycle Assessment - Morand et al. (2024)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI - Varoquaux et al. (2024)
- Addition is All You Need for Energy-efficient Language Models - Luo et al. (2024)
- E-waste challenges of generative artificial intelligence - Wang et al. (2024)
- From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate - Luccioni et al. (2025)
- Unveiling Environmental Impacts of Large Language Model Serving: A Functional Unit View - Wu et al. (2025)
Survey papers:
- Evaluating the carbon footprint of NLP methods: a survey and analysis of existing tools - Bannour et al. (2021)
- A Survey on Green Deep Learning - Xu et al. (2021)
- A Systematic Review of Green AI - Verdecchia et al. (2023)
- Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning - Luccioni et al. (2023)
- How to estimate carbon footprint when training deep learning models? A guide and review - Bouza et al. (2023)
- Towards Efficient Generative Large Language Model Serving: A Survey from Algorithms to Systems - Miao et al. (2023)
Reports:
- The great challenges of generative AI (🇫🇷 only) - Data For Good 2023
- General framework for frugal AI - AFNOR 2024
- Powering Up Europe: AI Datacenters and Electrification to Drive +c.40%-50% Growth in Electricity Consumption - Goldman Sachs 2024
- Generational Growth – AI/data centers' global power surge and the sustainability impact - Goldman Sachs 2024
- AI and the Environment - International Standards for AI and the Environment - ITU 2024
- Powering artificial intelligence: a study of AI's footprint – today and tomorrow - Deloitte 2024
- Artificial Intelligence and Electricity: A System Dynamics Approach - Schneider Electric 2024
- Developing sustainable Gen AI - Capgemini 2025
- Exploring the sustainable scaling of AI dilemma: A projective study of corporations' AI environmental impacts - Capgemini Invent 2025