ai-deadlines
:alarm_clock: AI conference deadline countdowns
Stars: 5580
![screenshot](/screenshots_githubs/paperswithcode-ai-deadlines.jpg)
Countdown timers to keep track of a bunch of CV/NLP/ML/RO conference deadlines.
README:
Countdown timers to keep track of a bunch of CV/NLP/ML/RO conference deadlines.
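Conceptually, each countdown pairs a deadline timestamp with an IANA timezone and measures the gap to the current instant. A minimal sketch using Python's standard-library `zoneinfo` (the `time_remaining` helper and the sample values are illustrative, not from the repo; the repo's `YYYY-MM-DD HH:SS` deadline format is read here as hours and minutes):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def time_remaining(deadline, tz, now=None):
    """Seconds until a naive 'YYYY-MM-DD HH:MM' deadline in IANA timezone tz."""
    target = datetime.strptime(deadline, "%Y-%m-%d %H:%M").replace(tzinfo=ZoneInfo(tz))
    now = now or datetime.now(ZoneInfo("UTC"))
    return (target - now).total_seconds()

# 18:00 in Asia/Seoul (UTC+9) is the same instant as 09:00 UTC,
# so the countdown relative to that reference instant is zero.
ref = datetime(2022, 5, 1, 9, 0, tzinfo=ZoneInfo("UTC"))
print(time_remaining("2022-05-01 18:00", "Asia/Seoul", now=ref))  # → 0.0
```

Rendering the countdown is then just repeated re-evaluation of this difference against the current time.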
Contributions are very welcome!
To keep things minimal, I'm only looking to list top-tier conferences in AI as per conferenceranks.com and my judgement calls. Please feel free to maintain a separate fork if you don't see your sub-field or conference of interest listed.
To add or update a deadline:
- Fork the repository
- Update `_data/conferences.yml`
- Make sure it has the `title`, `year`, `id`, `link`, `deadline`, `timezone`, `date`, `place`, `sub` attributes
  - See available timezone strings here.
- Optionally add a `note` and `abstract_deadline` in case the conference has a separate mandatory abstract deadline
- Optionally add `hindex` (refers to h5-index from here)
- Example:

```yaml
- title: BestConf
  year: 2022
  id: bestconf22  # title as lower case + last two digits of year
  full_name: Best Conference for Anything  # full conference name
  link: link-to-website.com
  deadline: YYYY-MM-DD HH:SS
  abstract_deadline: YYYY-MM-DD HH:SS
  timezone: Asia/Seoul
  place: Incheon, South Korea
  date: September, 18-22, 2022
  start: YYYY-MM-DD
  end: YYYY-MM-DD
  paperslink: link-to-full-paper-list.com
  pwclink: link-to-papers-with-code.com
  hindex: 100.0
  sub: SP
  note: Important
```

- Send a pull request
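Before opening the pull request, the required attributes can be sanity-checked mechanically. A small sketch under stated assumptions: the required attribute names come from the list above, the sample entry values and the `check_entry` helper are hypothetical, and the deadline is interpreted as hours and minutes.

```python
from datetime import datetime

# Hypothetical entry mirroring the fields of the YAML example above.
entry = {
    "title": "BestConf",
    "year": 2022,
    "id": "bestconf22",
    "link": "link-to-website.com",
    "deadline": "2022-05-17 23:59",
    "timezone": "Asia/Seoul",
    "date": "September, 18-22, 2022",
    "place": "Incheon, South Korea",
    "sub": "SP",
}

REQUIRED = ["title", "year", "id", "link", "deadline",
            "timezone", "date", "place", "sub"]

def check_entry(entry):
    """Return a list of problems found in a single conference entry."""
    problems = [f"missing attribute: {k}" for k in REQUIRED if k not in entry]
    # The deadline must parse as a YYYY-MM-DD HH:MM timestamp.
    try:
        datetime.strptime(entry.get("deadline", ""), "%Y-%m-%d %H:%M")
    except ValueError:
        problems.append("deadline is not in YYYY-MM-DD HH:MM format")
    # Convention from the example: id is the lower-case title plus
    # the last two digits of the year.
    expected_id = entry.get("title", "").lower() + str(entry.get("year", ""))[-2:]
    if entry.get("id") != expected_id:
        problems.append(f"id should be {expected_id!r}")
    return problems

print(check_entry(entry))  # → []
```

A check like this catches the most common review comments (missing attributes, malformed deadlines, mismatched ids) before a maintainer has to.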
Related deadline listings:
- geodeadlin.es by @LukasMosser
- neuro-deadlines by @tbryn
- ai-challenge-deadlines by @dieg0as
- CV-oriented ai-deadlines (with an emphasis on medical images) by @duducheng
- es-deadlines (Embedded Systems, Computer Architecture, and Cyber-physical Systems) by @AlexVonB and @k0nze
- 2019-2020 International Conferences in AI, CV, DM, NLP and Robotics by @JackieTseng
- ccf-deadlines by @ccfddl
- networking-deadlines (Computer Networking, Measurement) by @andrewcchu
- ad-deadlines.com by @daniel-bogdoll
- sec-deadlines.github.io/ (Security and Privacy) by @clementfung
- pythondeadlin.es by @jesperdramsch
- deadlines.openlifescience.ai (Healthcare domain conferences and workshops) by @monk1337
- hci-deadlines.github.io (Human-Computer Interaction conferences) by @makinteract
- ds-deadlines.github.io (Distributed Systems, Event-based Systems, Performance, and Software Engineering conferences) by @ds-deadlines
- https://deadlines.cpusec.org/ (Computer Architecture-Security conferences) by @hoseinyavarzadeh
- se-deadlines.github.io (Software engineering conferences) by @sivanahamer and @imranur-rahman
- awesome-mlss (Machine Learning Summer Schools) by @sshkhr and @gmberton
This project is licensed under MIT.
Alternative AI tools for ai-deadlines
Similar Open Source Tools
![MaxKB Screenshot](/screenshots_githubs/1Panel-dev-MaxKB.jpg)
MaxKB
MaxKB is a knowledge base Q&A system built on large language models (LLMs). MaxKB stands for "Max Knowledge Base" and aims to become the most powerful brain of the enterprise.
![Apollo Screenshot](/screenshots_githubs/FreedomIntelligence-Apollo.jpg)
Apollo
Apollo is a multilingual medical LLM covering English, Chinese, French, Hindi, Spanish, and Arabic. It is designed to democratize medical AI for 6B people. Apollo has achieved state-of-the-art results on a variety of medical NLP tasks, including question answering, medical dialogue generation, and medical text classification. It is easy to use and can be integrated into a variety of applications, making it a valuable tool for healthcare professionals and researchers.
![infinity Screenshot](/screenshots_githubs/michaelfeil-infinity.jpg)
infinity
Infinity is a high-throughput, low-latency REST API for serving vector embeddings, supporting all sentence-transformer models and frameworks. It is developed under the MIT License and powers inference behind Gradient.ai. The API allows users to deploy models from SentenceTransformers, offers fast inference backends utilizing various accelerators, dynamic batching for efficient processing, correct and tested implementation, and easy-to-use API built on FastAPI with Swagger documentation. Users can embed text, rerank documents, and perform text classification tasks using the tool. Infinity supports various models from Huggingface and provides flexibility in deployment via CLI, Docker, Python API, and cloud services like dstack. The tool is suitable for tasks like embedding, reranking, and text classification.
![ms-swift Screenshot](/screenshots_githubs/modelscope-ms-swift.jpg)
ms-swift
ms-swift is an official framework provided by the ModelScope community for fine-tuning and deploying large language models and multi-modal large models. It supports training, inference, evaluation, quantization, and deployment of over 400 large models and 100+ multi-modal large models. The framework includes various training technologies and accelerates inference, evaluation, and deployment modules. It offers a Gradio-based Web-UI interface and best practices for easy application of large models. ms-swift supports a wide range of model types, dataset types, hardware support, lightweight training methods, distributed training techniques, quantization training, RLHF training, multi-modal training, interface training, plugin and extension support, inference acceleration engines, model evaluation, and model quantization.
![luna-ai Screenshot](/screenshots_githubs/0x648-luna-ai.jpg)
luna-ai
Luna AI is a virtual streamer driven by a 'brain' composed of ChatterBot, GPT, Claude, LangChain, ChatGLM, text-generation-webui, iFLYTEK Spark (讯飞星火), and Zhipu AI (智谱AI). It can interact with viewers in real time during live streams on platforms like Bilibili, Douyin, Kuaishou, and Douyu, or chat with you locally. Luna AI uses natural language processing and text-to-speech technologies such as Edge-TTS, VITS-Fast, ElevenLabs, bark-gui, and VALL-E-X to generate responses to viewer questions, and it can change its voice using so-vits-svc or DDSP-SVC. It can also collaborate with Stable Diffusion to display drawings, and it can loop custom texts. This project is completely free; any identical copies sold for profit are pirated, so please report them promptly.
![cb-tumblebug Screenshot](/screenshots_githubs/cloud-barista-cb-tumblebug.jpg)
cb-tumblebug
CB-Tumblebug (CB-TB) is a system for managing multi-cloud infrastructure composed of resources from multiple cloud service providers. It supports various cloud providers and resource types, with ongoing development and localization efforts. Users can deploy multi-cloud infrastructure with GPUs, run multiple LLMs in parallel, and use the LLM-related scripts. Building from source requires Linux, Docker, Docker Compose, and Golang. Users can run CB-TB with Docker Compose or from the Makefile, set up the prerequisites, contribute to the project, and view a list of contributors. The tool is released under an open-source license.
![MaskLLM Screenshot](/screenshots_githubs/NVlabs-MaskLLM.jpg)
MaskLLM
MaskLLM is a learnable pruning method that establishes Semi-structured Sparsity in Large Language Models (LLMs) to reduce computational overhead during inference. It is scalable and benefits from larger training datasets. The tool provides examples for running MaskLLM with Megatron-LM, preparing LLaMA checkpoints, pre-tokenizing C4 data for Megatron, generating prior masks, training MaskLLM, and evaluating the model. It also includes instructions for exporting sparse models to Huggingface.
![FlagEmbedding Screenshot](/screenshots_githubs/FlagOpen-FlagEmbedding.jpg)
FlagEmbedding
FlagEmbedding focuses on retrieval-augmented LLMs, currently consisting of the following projects:
- **Long-Context LLM**: Activation Beacon
- **Fine-tuning of LM**: LM-Cocktail
- **Embedding Model**: Visualized-BGE, BGE-M3, LLM Embedder, BGE Embedding
- **Reranker Model**: LLM rerankers, BGE Reranker
- **Benchmark**: C-MTEB
![OutofFocus Screenshot](/screenshots_githubs/OutofAi-OutofFocus.jpg)
OutofFocus
Out of Focus v1.0 is a flexible Gradio tool for image manipulation: it reconstructs an image through a diffusion inversion process and lets users modify the result by editing the prompt. This is the first version of the image modification tool by Out of AI.
![GPTQModel Screenshot](/screenshots_githubs/ModelCloud-GPTQModel.jpg)
GPTQModel
GPTQModel is an easy-to-use LLM quantization and inference toolkit based on the GPTQ algorithm. It provides support for weight-only quantization and offers features such as dynamic per layer/module flexible quantization, sharding support, and auto-heal quantization errors. The toolkit aims to ensure inference compatibility with HF Transformers, vLLM, and SGLang. It offers various model supports, faster quant inference, better quality quants, and security features like hash check of model weights. GPTQModel also focuses on faster quantization, improved quant quality as measured by PPL, and backports bug fixes from AutoGPTQ.
![langfuse-python Screenshot](/screenshots_githubs/langfuse-langfuse-python.jpg)
langfuse-python
Langfuse Python SDK is a software development kit that provides tools and functionalities for integrating with Langfuse's language processing services. It offers decorators for observing code behavior, low-level SDK for tracing, and wrappers for accessing Langfuse's public API. The SDK was recently rewritten in version 2, released on December 17, 2023, with detailed documentation available on the official website. It also supports integrations with OpenAI SDK, LlamaIndex, and LangChain for enhanced language processing capabilities.
![aichat Screenshot](/screenshots_githubs/sigoden-aichat.jpg)
aichat
Aichat is an AI-powered CLI chat and copilot tool that seamlessly integrates with over 10 leading AI platforms, providing a powerful combination of chat-based interaction, context-aware conversations, and AI-assisted shell capabilities, all within a customizable and user-friendly environment.
![pgx Screenshot](/screenshots_githubs/sotetsuk-pgx.jpg)
pgx
Pgx is a collection of GPU/TPU-accelerated parallel game simulators for reinforcement learning (RL). It provides JAX-native game simulators for various games like Backgammon, Chess, Shogi, and Go, offering super fast parallel execution on accelerators and beautiful visualization in SVG format. Pgx focuses on faster implementations while also being sufficiently general, allowing environments to be converted to the AEC API of PettingZoo for running Pgx environments through the PettingZoo API.
![rtp-llm Screenshot](/screenshots_githubs/alibaba-rtp-llm.jpg)
rtp-llm
**rtp-llm** is a Large Language Model (LLM) inference acceleration engine developed by Alibaba's Foundation Model Inference Team. It is widely used within Alibaba Group, supporting LLM services across multiple business units including Taobao, Tmall, Idlefish, Cainiao, Amap, Ele.me, AE, and Lazada. The rtp-llm project is a sub-project of havenask.
![TempCompass Screenshot](/screenshots_githubs/llyx97-TempCompass.jpg)
TempCompass
TempCompass is a benchmark designed to evaluate the temporal perception ability of Video LLMs. It encompasses a diverse set of temporal aspects and task formats to comprehensively assess the capability of Video LLMs in understanding videos. The benchmark includes conflicting videos to prevent models from relying on single-frame bias and language priors. Users can clone the repository, install required packages, prepare data, run inference using examples like Video-LLaVA and Gemini, and evaluate the performance of their models across different tasks such as Multi-Choice QA, Yes/No QA, Caption Matching, and Caption Generation.
For similar jobs
![spear Screenshot](/screenshots_githubs/isl-org-spear.jpg)
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
![openvino Screenshot](/screenshots_githubs/openvinotoolkit-openvino.jpg)
openvino
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. It provides a common API to deliver inference solutions on various platforms, including CPU, GPU, NPU, and heterogeneous devices. OpenVINO™ supports pre-trained models from Open Model Zoo and popular frameworks like TensorFlow, PyTorch, and ONNX. Key components of OpenVINO™ include the OpenVINO™ Runtime, plugins for different hardware devices, frontends for reading models from native framework formats, and the OpenVINO Model Converter (OVC) for adjusting models for optimal execution on target devices.
![peft Screenshot](/screenshots_githubs/huggingface-peft.jpg)
peft
PEFT (Parameter-Efficient Fine-Tuning) is a collection of state-of-the-art methods that enable efficient adaptation of large pretrained models to various downstream applications. By only fine-tuning a small number of extra model parameters instead of all the model's parameters, PEFT significantly decreases the computational and storage costs while achieving performance comparable to fully fine-tuned models.
![jetson-generative-ai-playground Screenshot](/screenshots_githubs/NVIDIA-AI-IOT-jetson-generative-ai-playground.jpg)
jetson-generative-ai-playground
This repo hosts tutorial documentation for running generative AI models on NVIDIA Jetson devices. The documentation is auto-generated and hosted on GitHub Pages using their CI/CD feature to automatically generate/update the HTML documentation site upon new commits.
![emgucv Screenshot](/screenshots_githubs/emgucv-emgucv.jpg)
emgucv
Emgu CV is a cross-platform .Net wrapper for the OpenCV image-processing library. It allows OpenCV functions to be called from .NET compatible languages. The wrapper can be compiled by Visual Studio, Unity, and "dotnet" command, and it can run on Windows, Mac OS, Linux, iOS, and Android.
![MMStar Screenshot](/screenshots_githubs/MMStar-Benchmark-MMStar.jpg)
MMStar
MMStar is an elite vision-indispensable multi-modal benchmark comprising 1,500 challenge samples meticulously selected by humans. It addresses two key issues in current LLM evaluation: the unnecessary use of visual content in many samples and the existence of unintentional data leakage in LLM and LVLM training. MMStar evaluates 6 core capabilities across 18 detailed axes, ensuring a balanced distribution of samples across all dimensions.
![VLMEvalKit Screenshot](/screenshots_githubs/open-compass-VLMEvalKit.jpg)
VLMEvalKit
VLMEvalKit is an open-source evaluation toolkit of large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of data preparation under multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs, and provide the evaluation results obtained with both exact matching and LLM-based answer extraction.
![llava-docker Screenshot](/screenshots_githubs/ashleykleynhans-llava-docker.jpg)
llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.