Best AI Tools for Research Environmental Data
20 - AI Tool Sites

ePlant
ePlant is an AI-powered platform that revolutionizes plant data collection and application. It offers advanced plant-data intelligence through state-of-the-art wireless plant health monitors and AI technology. The platform enables users to remotely monitor trees and vines, track plant growth and stress, and measure environmental factors like temperature, light, and humidity. ePlant is trusted by experts in various fields such as research, urban forestry, precision agriculture, and silviculture. The TreeTag system, a key feature of ePlant, has been recognized as one of TIME's Best Inventions. With a focus on innovation and sustainability, ePlant is a valuable tool for anyone looking to optimize plant health and productivity.

DWE.ai
DWE.ai is an AI-powered platform specializing in DeepWater Exploration. The platform offers cutting-edge marine optics for crystal clear underwater imaging data, empowering marine robotics, surface vehicles, aquaculture farms, and aerial drones with advanced mapping and vision capabilities. DWE.ai provides cost-effective equipment for monitoring subsea assets, conducting detailed inspections, and collecting essential environmental data to revolutionize underwater research and operations.

Climate Policy Radar
Climate Policy Radar is an AI-powered application that serves as a live, searchable database containing over 5,000 national climate laws, policies, and UN submissions. The app aims to organize, analyze, and democratize climate data by providing open data, code, and machine learning models. It promotes a responsible approach to AI, fosters a climate NLP community, and offers an API for organizations to utilize the data. The tool addresses the challenge of sparse and siloed climate-related information, empowering decision-makers with evidence-based policies to accelerate climate action.

Allen Institute for AI (AI2)
The Allen Institute for AI (AI2) is a leading research institute dedicated to advancing artificial intelligence technologies for the common good. They focus on Natural Language Processing, Computer Vision, and AI applications for the environment. AI2 collaborates with diverse teams to tackle challenging problems in AI research, aiming to create world-changing AI solutions. The institute promotes diversity, equity, and inclusion in the research community, and offers opportunities for individuals to contribute to impactful AI projects.

Climate Change AI
Climate Change AI is a global non-profit organization that focuses on catalyzing impactful work at the intersection of climate change and machine learning. They provide resources, reports, events, and grants to support the use of machine learning in addressing climate change challenges.

Climate Change AI
Climate Change AI is a community platform dedicated to leveraging artificial intelligence to address the challenges of climate change. The platform serves as a hub for researchers, practitioners, and policymakers to collaborate, share knowledge, and develop AI solutions for mitigating the impacts of climate change. By harnessing the power of AI technologies, Climate Change AI aims to accelerate the transition to a sustainable and resilient future.

LAION
LAION is a non-profit organization that provides datasets, tools, and models to advance machine learning research. The organization's goal is to promote open public education and encourage the reuse of existing datasets and models to reduce the environmental impact of machine learning research.

Dog Age Calculator
The Dog Age Calculator is an AI-powered tool that converts a dog's age into human years using scientific research and breed-specific data. It draws on DNA methylation research and accounts for genetic factors, breed-specific aging patterns, individual health characteristics, and environmental influences on aging to provide precise results. The tool compares this research-based method with the traditional rule-of-thumb calculation, highlighting the limitations of the latter. Users can easily calculate their dog's age in human years and receive care recommendations based on its life stage.
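
For illustration, one published approximation from DNA methylation research (derived from Labrador retrievers) maps dog years to human years as 16 · ln(dog age) + 31; the sketch below contrasts it with the traditional "multiply by 7" rule and is not the tool's actual model.

```python
import math

def dog_age_epigenetic(dog_years: float) -> float:
    """Methylation-based approximation: human_age ≈ 16 * ln(dog_age) + 31."""
    return 16 * math.log(dog_years) + 31

def dog_age_traditional(dog_years: float) -> float:
    """The traditional 'multiply by 7' rule, for comparison."""
    return dog_years * 7

for age in (1, 4, 10):
    print(f"{age} dog years -> epigenetic: {dog_age_epigenetic(age):.0f}, "
          f"traditional: {dog_age_traditional(age):.0f} human years")
```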

Deepfake Detection Challenge Dataset
The Deepfake Detection Challenge Dataset is a project initiated by Facebook AI to accelerate the development of new ways to detect deepfake videos. The dataset consists of over 100,000 videos and was created in collaboration with industry leaders and academic experts. It includes two versions: a preview dataset with 5k videos and a full dataset with 124k videos, both containing footage altered with a range of facial modification algorithms. The dataset was used in a Kaggle competition to create better models for detecting manipulated media. The top-performing models achieved high accuracy on the public dataset but struggled when tested against the black box dataset, highlighting the importance of generalization in deepfake detection. The project aims to encourage the research community to continue advancing the detection of harmful manipulated media.

Domino Data Lab
Domino Data Lab is an enterprise AI platform that enables data scientists and IT leaders to build, deploy, and manage AI models at scale. It provides a unified platform for accessing data, tools, compute, models, and projects across any environment. Domino also fosters collaboration, establishes best practices, and tracks models in production to accelerate and scale AI while ensuring governance and reducing costs.

Domino Data Lab
Domino Data Lab is an enterprise AI platform that enables users to build, deploy, and manage AI models across any environment. It fosters collaboration, establishes best practices, and ensures governance while reducing costs. The platform provides access to a broad ecosystem of open source and commercial tools, and infrastructure, allowing users to accelerate and scale AI impact. Domino serves as a central hub for AI operations and knowledge, offering integrated workflows, automation, and hybrid multicloud capabilities. It helps users optimize compute utilization, enforce compliance, and centralize knowledge across teams.

Google Research
Google Research is a leading research organization focusing on advancing science and artificial intelligence. They conduct research in various domains such as AI/ML foundations, responsible human-centric technology, science & societal impact, computing paradigms, and algorithms & optimization. Google Research aims to create an environment for diverse research across different time scales and levels of risk, driving advancements in computer science through fundamental and applied research. They publish hundreds of research papers annually, collaborate with the academic community, and work on projects that impact technology used by billions of people worldwide.

RapidAI Research Institute
RapidAI Research Institute is a non-enterprise academic institution operating under the RapidAI open-source organization. It serves as a platform for academic research and collaboration, providing opportunities for aspiring researchers to publish papers and engage in scholarly activities. The institute offers mentorship programs and benefits for members, including access to resources such as internet connectivity, GPU configurations, and storage space. The management team consists of esteemed professionals in the field, ensuring a conducive environment for academic growth and development.

DataLab
DataLab is a data notebook that smartly leverages generative AI technology to enable users to 'chat with their data'. It features a powerful IDE for analysis, and seamlessly transforms work into shareable reports. The application runs in a cloud-hosted environment with support for R/Python, SQL, and various data science packages. Users can connect to external databases, collaborate in real-time, and utilize an AI Assistant for code generation and error correction.

Google Colab
Google Colab is a free Jupyter notebook environment that runs in the cloud. It allows you to write and execute Python code without having to install any software or set up a local environment. Colab notebooks are shareable, so you can easily collaborate with others on projects.
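
A minimal sketch of a Colab cell: pandas ships preinstalled in Colab, and the sample readings below are made up for illustration.

```python
# A typical first cell in a Colab notebook: pure Python, nothing to install locally.
import pandas as pd

# Tiny in-memory dataset of environmental readings (illustrative values only).
df = pd.DataFrame({
    "site": ["A", "B", "C"],
    "temperature_c": [21.4, 19.8, 23.1],
    "humidity_pct": [55, 62, 48],
})
print(df.describe())
```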

Segmed
Segmed offers a free Medical Data De-Identification Tool that uses NLP and language models to remove protected health information (PHI), enabling privacy-compliant medical research. The tool is intended for demonstration purposes only, with the option to reach out for de-identification as a service. Segmed.ai does not save or store any data, providing a secure environment for cleaning medical data. Users can access sample data and benefit from de-identified clinical data solutions.

INSAIT
INSAIT is an Institute for Computer Science, Artificial Intelligence, and Technology located in Sofia, Bulgaria. The institute focuses on cutting-edge research areas such as Computer Vision, Robotics, Quantum Computing, Machine Learning, and Regulatory AI Compliance. INSAIT is known for its collaboration with top universities and organizations, as well as its commitment to fostering a diverse and inclusive environment for students and researchers.

INMA
INMA (International News Media Association) is a global organization that provides news media companies with resources, networking opportunities, and research on the latest trends in the industry. INMA's mission is to help news media companies succeed in the digital age by providing them with the tools and knowledge they need to adapt to the changing landscape. INMA offers a variety of services to its members, including conferences, webinars, reports, and a member directory. INMA also has a number of initiatives focused on specific areas of the news media industry, such as digital subscriptions, product and technology, and newsroom transformation.

GenWorlds
GenWorlds is an event-based communication framework for building multi-agent systems. It offers a platform for creating Generative AI applications where users can design customizable environments, utilize scalable architecture, access a repository of memories and tools, choose cognitive processes for agents, and pick coordination protocols. GenWorlds aims to foster a vibrant community of developers, AI enthusiasts, and innovators to collaborate, innovate, share knowledge, and grow together.

Dobb·E
Dobb·E is an open-source, general framework for learning household robotic manipulation. It aims to create a generalist machine for homes that can adapt and learn from users' needs efficiently. Dobb·E can learn a new task with just five minutes of demonstration, achieving an 81% success rate in various household environments. The project focuses on accelerating research on home robots and making robot assistants a common sight in every home.
20 - Open Source AI Tools

AIforEarthDataSets
The Microsoft AI for Earth program hosts geospatial data on Azure that is important to environmental sustainability and Earth science. This repo hosts documentation and demonstration notebooks for all the data that is managed by AI for Earth. It also serves as a "staging ground" for the Planetary Computer Data Catalog.
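
As a hedged sketch of how this Azure-hosted geospatial data is commonly accessed, the Planetary Computer exposes a STAC search API that works with `pystac-client`; the collection id, bounding box, and date range below are illustrative assumptions.

```python
# Sketch: searching the Planetary Computer STAC API (pip install pystac-client).
from pystac_client import Client

catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")

# Collection id and bounding box are illustrative; browse the Data Catalog for real ids.
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-122.5, 47.4, -122.2, 47.7],   # rough Seattle area
    datetime="2023-06-01/2023-06-30",
)
items = list(search.items())
print(f"Found {len(items)} matching scenes")
```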

ai-audio-datasets
AI Audio Datasets List (AI-ADL) is a comprehensive collection of datasets consisting of speech, music, and sound effects, used for Generative AI, AIGC, AI model training, and audio applications. It includes datasets for speech recognition, speech synthesis, music information retrieval, music generation, audio processing, sound synthesis, and more. The repository provides a curated list of diverse datasets suitable for various AI audio tasks.

AIR-1
AIR-1 is a compact sensor device designed for monitoring various environmental parameters such as gas levels, particulate matter, temperature, and humidity. It features multiple sensors for detecting gases like CO, alcohol, H2, NO2, NH3, CO2, as well as particulate matter, VOCs, NOx, and more. The device is designed with a focus on accuracy and efficient heat management in a small form factor, making it suitable for indoor air quality monitoring and environmental sensing applications.

hongbomiao.com
hongbomiao.com is a personal research and development (R&D) lab that facilitates the sharing of knowledge. The repository covers a wide range of topics including web development, mobile development, desktop applications, API servers, cloud native technologies, data processing, machine learning, computer vision, embedded systems, simulation, database management, data cleaning, data orchestration, testing, ops, authentication, authorization, security, system tools, reverse engineering, Ethereum, hardware, network, guidelines, design, bots, and more. It provides detailed information on various tools, frameworks, libraries, and platforms used in these domains.

HippoRAG
HippoRAG is a novel retrieval augmented generation (RAG) framework inspired by the neurobiology of human long-term memory that enables Large Language Models (LLMs) to continuously integrate knowledge across external documents. It provides RAG systems with capabilities that usually require a costly and high-latency iterative LLM pipeline for only a fraction of the computational cost. The tool facilitates setting up retrieval corpus, indexing, and retrieval processes for LLMs, offering flexibility in choosing different online LLM APIs or offline LLM deployments through LangChain integration. Users can run retrieval on pre-defined queries or integrate directly with the HippoRAG API. The tool also supports reproducibility of experiments and provides data, baselines, and hyperparameter tuning scripts for research purposes.
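
A minimal usage sketch, assuming the Python API described in the project's README; the class name, constructor arguments, and model identifiers are assumptions that may differ between versions.

```python
# Sketch only: names and arguments follow the project README and may vary by version.
from hipporag import HippoRAG

docs = [
    "Wetlands store large amounts of carbon in waterlogged soils.",
    "Peatland drainage releases stored carbon as CO2.",
]

rag = HippoRAG(
    save_dir="outputs",                          # where the index is persisted
    llm_model_name="gpt-4o-mini",                # any supported online/offline LLM
    embedding_model_name="nvidia/NV-Embed-v2",   # embedding model (assumed identifier)
)

rag.index(docs=docs)
results = rag.retrieve(queries=["How do peatlands affect carbon storage?"], num_to_retrieve=2)
print(results)
```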

AwesomeResponsibleAI
Awesome Responsible AI is a curated list of academic research, books, code of ethics, courses, data sets, frameworks, institutes, newsletters, principles, podcasts, reports, tools, regulations, and standards related to Responsible, Trustworthy, and Human-Centered AI. It covers various concepts such as Responsible AI, Trustworthy AI, Human-Centered AI, Responsible AI frameworks, AI Governance, and more. The repository provides a comprehensive collection of resources for individuals interested in ethical, transparent, and accountable AI development and deployment.

ABigSurveyOfLLMs
ABigSurveyOfLLMs is a repository that compiles surveys on Large Language Models (LLMs) to provide a comprehensive overview of the field. It includes surveys on various aspects of LLMs such as transformers, alignment, prompt learning, data management, evaluation, societal issues, safety, misinformation, attributes of LLMs, efficient LLMs, learning methods for LLMs, multimodal LLMs, knowledge-based LLMs, extension of LLMs, LLMs applications, and more. The repository aims to help individuals quickly understand the advancements and challenges in the field of LLMs through a collection of recent surveys and research papers.

carla
CARLA is an open-source simulator for autonomous driving research. It provides open-source code, protocols, and digital assets (urban layouts, buildings, vehicles) for developing, training, and validating autonomous driving systems. CARLA supports flexible specification of sensor suites and environmental conditions.
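
A short sketch of the CARLA Python client API, showing how environmental conditions and an actor can be specified programmatically; it assumes a simulator already running locally on the default port.

```python
# Sketch: connect to a running CARLA server, set the weather, and spawn a vehicle.
import carla

client = carla.Client("localhost", 2000)   # default host/port of the simulator
client.set_timeout(10.0)
world = client.get_world()

# Specify environmental conditions for the simulation.
weather = carla.WeatherParameters(
    cloudiness=80.0,
    precipitation=40.0,
    sun_altitude_angle=20.0,
)
world.set_weather(weather)

# Spawn a vehicle from the blueprint library at a predefined spawn point.
blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)
print(f"Spawned {vehicle.type_id}")
```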

openrl
OpenRL is an open-source general reinforcement learning research framework that supports training for various tasks such as single-agent, multi-agent, offline RL, self-play, and natural language tasks. Developed on top of PyTorch, OpenRL aims to provide a simple-to-use, flexible, efficient, and sustainable platform for the reinforcement learning research community. It offers a universal interface for all tasks and environments, and supports single-agent and multi-agent training, offline RL training with expert datasets, self-play training, and reinforcement learning for natural language tasks. It also integrates DeepSpeed, an Arena for evaluation, importing models and datasets from Hugging Face, user-defined environments, models, and datasets, gymnasium environments, callbacks, visualization tools, unit testing, and code coverage testing. Supported algorithms include PPO, DQN, and SAC, and supported environments include Gymnasium, MuJoCo, Atari, and more.
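
A minimal training sketch modeled on the project's quick-start example; module paths and class names are taken from the README and may vary across versions.

```python
# Sketch based on OpenRL's quick-start; module paths may differ across versions.
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

env = make("CartPole-v1", env_num=9)    # vectorized gymnasium environments
net = Net(env)                          # PPO network built for this environment
agent = Agent(net)
agent.train(total_time_steps=20000)     # after training, the agent can be evaluated or saved
```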

chat-with-your-data-solution-accelerator
Chat with your data using Azure OpenAI and Azure AI Search. This solution accelerator combines an Azure OpenAI GPT model with an Azure AI Search index generated from your data, integrated into a web application that provides a natural language interface, including speech-to-text functionality, for search queries. Users can drag and drop files or point to existing storage, and the accelerator handles the technical setup needed to transform documents into a searchable index. The web app can be deployed in the user's own subscription with security and authentication.
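
The underlying retrieval-augmented pattern can be sketched with the Azure SDKs: fetch matching chunks from Azure AI Search, then ground an Azure OpenAI chat completion on them. The endpoint variables, index name, field name, and deployment name below are placeholders, and the accelerator's own code differs.

```python
# Illustrative RAG pattern only; the accelerator's actual code and configuration differ.
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],          # placeholder environment variables
    index_name="documents-index",                    # placeholder index name
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-02-01",
)

question = "What do our reports say about local air quality?"
chunks = [doc["content"] for doc in search.search(question, top=3)]  # "content" is a placeholder field

response = openai_client.chat.completions.create(
    model="gpt-4o",                                  # placeholder deployment name
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{chr(10).join(chunks)}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```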

PIXIU
PIXIU is a project designed to support the development, fine-tuning, and evaluation of Large Language Models (LLMs) in the financial domain. It includes components like FinBen, a Financial Language Understanding and Prediction Evaluation Benchmark, FIT, a Financial Instruction Dataset, and FinMA, a Financial Large Language Model. The project provides open resources, multi-task and multi-modal financial data, and diverse financial tasks for training and evaluation. It aims to encourage open research and transparency in the financial NLP field.

awesome-sound_event_detection
The 'awesome-sound_event_detection' repository is a curated reading list focusing on sound event detection and Sound AI. It includes research papers covering various sub-areas such as learning formulation, network architecture, pooling functions, missing or noisy audio, data augmentation, representation learning, multi-task learning, few-shot learning, zero-shot learning, knowledge transfer, polyphonic sound event detection, loss functions, audio and visual tasks, audio captioning, audio retrieval, audio generation, and more. The repository provides a comprehensive collection of papers, datasets, and resources related to sound event detection and Sound AI, making it a valuable reference for researchers and practitioners in the field.

UltraRAG
The UltraRAG framework is a researcher and developer-friendly RAG system solution that simplifies the process from data construction to model fine-tuning in domain adaptation. It introduces an automated knowledge adaptation technology system, supporting no-code programming, one-click synthesis and fine-tuning, multidimensional evaluation, and research-friendly exploration work integration. The architecture consists of Frontend, Service, and Backend components, offering flexibility in customization and optimization. Performance evaluation in the legal field shows improved results compared to VanillaRAG, with specific metrics provided. The repository is licensed under Apache-2.0 and encourages citation for support.

k2
K2 (GeoLLaMA) is a large language model for geoscience, trained on geoscience literature and fine-tuned with knowledge-intensive instruction data. It outperforms baseline models on objective and subjective tasks. The repository provides K2 weights, core data of GeoSignal, GeoBench benchmark, and code for further pretraining and instruction tuning. The model is available on Hugging Face for use. The project aims to create larger and more powerful geoscience language models in the future.
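
A hedged loading sketch with the Hugging Face `transformers` library; the repository id is a placeholder that should be checked against the project page.

```python
# Sketch: loading a causal LM from Hugging Face; the model id below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daven3/k2-placeholder"   # verify the real repository id in the K2 README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Briefly explain how sedimentary basins form."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```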

spacy-llm
This package integrates Large Language Models (LLMs) into spaCy, featuring a modular system for **fast prototyping** and **prompting**, and turning unstructured responses into **robust outputs** for various NLP tasks, with **no training data** required. It supports open-source LLMs hosted on Hugging Face 🤗 (Falcon, Dolly, Llama 2, OpenLLaMA, StableLM, Mistral) and integrates with LangChain 🦜️🔗, so all `langchain` models and features can be used in `spacy-llm`. Tasks available out of the box include Named Entity Recognition, text classification, lemmatization, relationship extraction, sentiment analysis, span categorization, summarization, entity linking, translation, and raw prompt execution for maximum flexibility, with semantic role labeling coming soon. Custom prompting, parsing, and model integrations can be implemented via spaCy's registry, and a map-reduce approach splits prompts too long for the LLM's context window and fuses the results back together.
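
A minimal in-code configuration sketch for zero-shot NER; the task and model registry names follow the package documentation, but exact version strings may differ by release, and an OpenAI API key is assumed to be set in the environment.

```python
# Sketch: zero-shot NER with spacy-llm; registry version strings may differ by release.
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            "@llm_tasks": "spacy.NER.v3",
            "labels": ["SPECIES", "LOCATION", "POLLUTANT"],  # illustrative labels
        },
        "model": {"@llm_models": "spacy.GPT-4.v3"},  # assumes OPENAI_API_KEY is set
    },
)
doc = nlp("Nitrogen dioxide levels near the Thames affected local kingfisher populations.")
print([(ent.text, ent.label_) for ent in doc.ents])
```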

SurveyX
SurveyX is an advanced academic survey automation system that leverages Large Language Models (LLMs) to generate high-quality, domain-specific academic papers and surveys. Users can request comprehensive academic papers or surveys tailored to specific topics by providing a paper title and keywords for literature retrieval. The system streamlines academic research by automating paper creation, saving users time and effort in compiling research content.

CEO-Agentic-AI-Framework
CEO-Agentic-AI-Framework is an ultra-lightweight Agentic AI framework based on the ReAct paradigm. It supports mainstream LLMs and is stronger than Swarm. The framework allows users to build their own agents, assign tasks, and interact with them through a set of predefined abilities. Users can customize agent personalities, grant and deprive abilities, and assign queries for specific tasks. CEO also supports multi-agent collaboration scenarios, where different agents with distinct capabilities can work together to achieve complex tasks. The framework provides a quick start guide, examples, and detailed documentation for seamless integration into research projects.

awesome-green-ai
Awesome Green AI is a curated list of resources and tools aimed at reducing the environmental impacts of using and deploying AI. It addresses the carbon footprint of the ICT sector, emphasizing the importance of AI in reducing environmental impacts beyond GHG emissions and electricity consumption. The tools listed cover code-based tools for measuring environmental impacts, monitoring tools for power consumption, optimization tools for energy efficiency, and calculation tools for estimating environmental impacts of algorithms and models. The repository also includes leaderboards, papers, survey papers, and reports related to green AI and environmental sustainability in the AI sector.
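
As an example of the code-based measurement tools such a list covers, a tracker like CodeCarbon can estimate the emissions of a workload; the sketch below assumes `codecarbon` is installed and uses its standard tracker interface.

```python
# Sketch: measuring the carbon footprint of a workload with CodeCarbon.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="model-training-demo")
tracker.start()

# Placeholder workload; replace with an actual training loop.
total = sum(i * i for i in range(10_000_000))

emissions_kg = tracker.stop()   # returns estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```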

deeplake
Deep Lake is a database for AI powered by a storage format optimized for deep-learning applications. It can be used for storing data and vectors while building LLM applications, and for managing datasets while training deep learning models. Deep Lake simplifies the deployment of enterprise-grade LLM-based products by offering storage for all data types (embeddings, audio, text, videos, images, PDFs, annotations, etc.), querying and vector search, data streaming while training models at scale, data versioning and lineage, and integrations with popular tools such as LangChain, LlamaIndex, Weights & Biases, and many more. Deep Lake works with data of any size, is serverless, and lets you store all of your data in one place, in your own cloud. It is used by Intel, Bayer Radiology, Matterport, ZERO Systems, the Red Cross, Yale, and Oxford.
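
A short creation-and-append sketch using the v3-style Python API; the dataset path and tensor names are illustrative, and newer releases may expose a different interface.

```python
# Sketch using Deep Lake's v3-style API; newer releases may differ.
import numpy as np
import deeplake

# Create a local dataset with an embedding tensor and a text tensor.
ds = deeplake.empty("./env_readings", overwrite=True)
ds.create_tensor("embedding", htype="embedding")
ds.create_tensor("text", htype="text")

with ds:
    ds.embedding.append(np.random.rand(384).astype(np.float32))
    ds.text.append("Soil moisture reading from sensor A, 2024-05-01.")

ds.summary()   # prints tensor shapes and dtypes
```
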
20 - OpenAI GPTs

Earth Conscious Voice
Hi ;) Ask me for data & insights gathered from an environmentally aware global community

Biochem Helper: Research's Helper
A helpful guide for biochemical engineers, offering insights and reassurance.

Gas Intellect Pro
Leading-Edge Gas Analysis and Optimization: Adaptable, Accurate, Advanced, developed on OpenAI.

Marine Biologist
Marine biologist studying and conserving ocean life, focusing on ecosystem health and climate effects.

GreenTech Guru
GreenTech Guru: Leading in-depth expertise in green tech, powered by OpenAI.

ClimatePal by Palau
I'm trained on major climate reports from the UN, World Resources Institute, and others. Ask me about climate trends, green energy, and how climate change affects us all. I make complex climate info easy to understand!

IR Spectra Interpreter
Analyzes IR spectra, prompts for uploads, and details findings in tables.

One atmosphere
I help you evolve your habits and processes to preserve the habitability of the earth and much more

Botanist
Focused on groundbreaking plant biology research for agricultural, medicinal, and environmental advancements.

EcoEconomist
Embrace the economics of ecosystems with EcoEconomist, balancing the scales between economic development and environmental stewardship.

amplio
Expert in Global Sustainable Development topics, provides accurate info with Harvard referencing.