AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy
Stars: 124
AI-Security-and-Privacy-Events is a curated list of academic events focusing on AI security and privacy. It includes seminars, conferences, workshops, tutorials, special sessions, and covers various topics such as NLP & LLM Security, Privacy and Security in ML, Machine Learning Security, AI System with Confidential Computing, Adversarial Machine Learning, and more.
README:
- NLP & LLM Security
- Privacy and Security in ML (PriSec-ML)
- Machine Learning Security (MLSec)
- Seminars on Security & Privacy in Machine Learning (ML S&P)
- AI Security and Privacy (AISP) (in Chinese)
- IEEE Conference on Secure and Trustworthy Machine Learning (2022-)
- The Conference on Applied Machine Learning in Information Security (2017-)
-
- Red Teaming GenAI: What Can We Learn from Adversaries? (NeurIPS 2024)
- Safe Generative AI (NeurIPS 2024)
- Towards Safe & Trustworthy Agents (NeurIPS 2024)
- Socially Responsible Language Modelling Research (NeurIPS 2024)
- Next Generation of AI Safety (ICML 2024)
- Trustworthy Multi-modal Foundation Models and AI Agents (ICML 2024)
- Secure and Trustworthy Large Language Models (ICLR 2024)
- Reliable and Responsible Foundation Models (ICLR 2024)
- Privacy Regulation and Protection in Machine Learning (ICLR 2024)
- Responsible Language Models (AAAI 2024)
- Privacy-Preserving Artificial Intelligence (AAAI 2020-2024)
- Practical Deep Learning in the Wild (CAI 2024, AAAI 2022-2023)
- Backdoors in Deep Learning: The Good, the Bad, and the Ugly (NeurIPS 2023)
- Trustworthy and Reliable Large-Scale Machine Learning Models (ICLR 2023)
- Backdoor Attacks and Defenses in Machine Learning (ICLR 2023)
- Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data (ICLR 2022)
- Security and Safety in Machine Learning Systems (ICLR 2021)
- Robust and Reliable Machine Learning in the Real World (ICLR 2021)
- Towards Trustworthy ML: Rethinking Security and Privacy for ML (ICLR 2020)
- Safe Machine Learning: Specification, Robustness and Assurance (ICLR 2019)
- New Frontiers in Adversarial Machine Learning (ICML 2022-2023)
- Theory and Practice of Differential Privacy (ICML 2021-2022)
- Uncertainty & Robustness in Deep Learning (ICML 2020-2021)
- A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning (ICML 2021)
- Security and Privacy of Machine Learning (ICML 2019)
- Socially Responsible Machine Learning (NeurIPS 2022, ICLR 2022, ICML 2021)
- ML Safety (NeurIPS 2022)
- Privacy in Machine Learning (NeurIPS 2021)
- Dataset Curation and Security (NeurIPS 2020)
- Security in Machine Learning (NeurIPS 2018)
- Machine Learning and Computer Security (NeurIPS 2017)
- Adversarial Training (NeurIPS 2016)
- Reliable Machine Learning in the Wild (NeurIPS 2016)
- Adversarial Learning Methods for Machine Learning and Data Mining (KDD 2019-2022)
- Privacy Preserving Machine Learning (FOCS 2022, CCS 2021, NeurIPS 2020, CCS 2019, NeurIPS 2018)
- SafeAI (AAAI 2019-2022)
- Adversarial Machine Learning and Beyond (AAAI 2022)
- Towards Robust, Secure and Efficient Machine Learning (AAAI2021)
- AISafety (IJCAI 2019-2022)
-
- The Dark Side of Generative AIs and Beyond (ECCV 2024)
- Trust What You learN (ECCV 2024)
- Privacy for Vision & Imaging (ECCV 2024)
- Adversarial Machine Learning on Computer Vision (CVPR 2024, CVPR 2023, CVPR 2022, CVPR 2020)
- Secure and Safe Autonomous Driving (CVPR 2023)
- Adversarial Robustness in the Real World (ICCV 2023, ECCV 2022, ICCV 2021, CVPR 2021, ECCV 2020, CVPR 2020, CVPR 2019)
- The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CVPR 2021, ECCV 2020, CVPR 2019, CVPR 2018, CVPR 2017)
- Responsible Computer Vision (ECCV 2022)
- Safe Artificial Intelligence for Automated Driving (ECCV 2022)
- Adversarial Learning for Multimedia (ACMMM 2021)
- Adversarial Machine Learning towards Advanced Vision Systems (ACCV 2022)
-
- Trustworthy Natural Language Processing (2021-2024)
- Privacy in Natural Language Processing (ACL 2024, NAACL 2022, NAACL 2021, EMNLP 2020, WSDM 2020)
- BlackboxNLP (2018-2024)
-
- Online Misinformation- and Harm-Aware Recommender Systems (RecSys 2021, RecSys 2020)
- Adversarial Machine Learning for Recommendation and Search (CIKM 2021)
-
- Quantitative Reasoning About Data Privacy in Machine Learning (ICML 2022)
- Foundational Robustness of Foundation Models (NeurIPS 2022)
- Adversarial Robustness - Theory and Practice (NeurIPS 2018)
- Towards Adversarial Learning: from Evasion Attacks to Poisoning Attacks (KDD 2022)
- Adversarial Robustness in Deep Learning: From Practices to Theories (KDD 2021)
- Adversarial Attacks and Defenses: Frontiers, Advances and Practice (KDD 2020)
- Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications (ICDM 2020)
- Adversarial Machine Learning for Good (AAAI 2022)
- Adversarial Machine Learning (AAAI 2018)
-
- Adversarial Machine Learning in Computer Vision (CVPR 2021)
- Practical Adversarial Robustness in Deep Learning: Problems and Solutions (CVPR 2021)
- Adversarial Robustness of Deep Learning Models (ECCV 2020)
- Deep Learning for Privacy in Multimedia (ACMMM 2020)
-
- Vulnerabilities of Large Language Models to Adversarial Attacks (ACL 2024)
- Robustness and Adversarial Examples in Natural Language Processing (EMNLP 2021)
- Deep Adversarial Learning for NLP (NAACL 2019)
-
- Adversarial Machine Learning in Recommender Systems (ECIR 2021, RecSys 2020, WSDM 2020)
- Special Track on Safe and Robust AI (AAAI 2023)
- Special Session on Adversarial Learning for Multimedia Understanding and Retrieval (ICMR 2022)
- Special Session on Adversarial Attack and Defense (APSIPA 2022)
- Special Session on Information Security meets Adversarial Examples (WIFS 2019)
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for AI-Security-and-Privacy-Events
Similar Open Source Tools
AI-Security-and-Privacy-Events
AI-Security-and-Privacy-Events is a curated list of academic events focusing on AI security and privacy. It includes seminars, conferences, workshops, tutorials, special sessions, and covers various topics such as NLP & LLM Security, Privacy and Security in ML, Machine Learning Security, AI System with Confidential Computing, Adversarial Machine Learning, and more.
data-scientist-roadmap2024
The Data Scientist Roadmap2024 provides a comprehensive guide to mastering essential tools for data science success. It includes programming languages, machine learning libraries, cloud platforms, and concepts categorized by difficulty. The roadmap covers a wide range of topics from programming languages to machine learning techniques, data visualization tools, and DevOps/MLOps tools. It also includes web development frameworks and specific concepts like supervised and unsupervised learning, NLP, deep learning, reinforcement learning, and statistics. Additionally, it delves into DevOps tools like Airflow and MLFlow, data visualization tools like Tableau and Matplotlib, and other topics such as ETL processes, optimization algorithms, and financial modeling.
Anim
Anim v0.1.0 is an animation tool that allows users to convert videos to animations using mixamorig characters. It features FK animation editing, object selection, embedded Python support (only on Windows), and the ability to export to glTF and FBX formats. Users can also utilize Mediapipe to create animations. The tool is designed to assist users in creating animations with ease and flexibility.
NeuroAI_Course
Neuromatch Academy NeuroAI Course Syllabus is a repository that contains the schedule and licensing information for the NeuroAI course. The course is designed to provide participants with a comprehensive understanding of artificial intelligence in neuroscience. It covers various topics related to AI applications in neuroscience, including machine learning, data analysis, and computational modeling. The content is primarily accessed from the ebook provided in the repository, and the course is scheduled for July 15-26, 2024. The repository is shared under a Creative Commons Attribution 4.0 International License and software elements are additionally licensed under the BSD (3-Clause) License. Contributors to the project are acknowledged and welcomed to contribute further.
chatwiki
ChatWiki is an open-source knowledge base AI question-answering system. It is built on large language models (LLM) and retrieval-augmented generation (RAG) technologies, providing out-of-the-box data processing, model invocation capabilities, and helping enterprises quickly build their own knowledge base AI question-answering systems. It offers exclusive AI question-answering system, easy integration of models, data preprocessing, simple user interface design, and adaptability to different business scenarios.
awesome-llm-plaza
Awesome LLM plaza is a curated list of awesome LLM papers, projects, and resources. It is updated daily and includes resources from a variety of sources, including huggingface daily papers, twitter, github trending, paper with code, weixin, etc.
agenta
Agenta is an open-source LLM developer platform for prompt engineering, evaluation, human feedback, and deployment of complex LLM applications. It provides tools for prompt engineering and management, evaluation, human annotation, and deployment, all without imposing any restrictions on your choice of framework, library, or model. Agenta allows developers and product teams to collaborate in building production-grade LLM-powered applications in less time.
OpenRedTeaming
OpenRedTeaming is a repository focused on red teaming for generative models, specifically large language models (LLMs). The repository provides a comprehensive survey on potential attacks on GenAI and robust safeguards. It covers attack strategies, evaluation metrics, benchmarks, and defensive approaches. The repository also implements over 30 auto red teaming methods. It includes surveys, taxonomies, attack strategies, and risks related to LLMs. The goal is to understand vulnerabilities and develop defenses against adversarial attacks on large language models.
llm_interview_note
This repository provides a comprehensive overview of large language models (LLMs), covering various aspects such as their history, types, underlying architecture, training techniques, and applications. It includes detailed explanations of key concepts like Transformer models, distributed training, fine-tuning, and reinforcement learning. The repository also discusses the evaluation and limitations of LLMs, including the phenomenon of hallucinations. Additionally, it provides a list of related courses and references for further exploration.
higress
Higress is an open-source cloud-native API gateway built on the core of Istio and Envoy, based on Alibaba's internal practice of Envoy Gateway. It is designed for AI-native API gateway, serving AI businesses such as Tongyi Qianwen APP, Bailian Big Model API, and Machine Learning PAI platform. Higress provides capabilities to interface with LLM model vendors, AI observability, multi-model load balancing/fallback, AI token flow control, and AI caching. It offers features for AI gateway, Kubernetes Ingress gateway, microservices gateway, and security protection gateway, with advantages in production-level scalability, stream processing, extensibility, and ease of use.
activepieces
Activepieces is an open source replacement for Zapier, designed to be extensible through a type-safe pieces framework written in Typescript. It features a user-friendly Workflow Builder with support for Branches, Loops, and Drag and Drop. Activepieces integrates with Google Sheets, OpenAI, Discord, and RSS, along with 80+ other integrations. The list of supported integrations continues to grow rapidly, thanks to valuable contributions from the community. Activepieces is an open ecosystem; all piece source code is available in the repository, and they are versioned and published directly to npmjs.com upon contributions. If you cannot find a specific piece on the pieces roadmap, please submit a request by visiting the following link: Request Piece Alternatively, if you are a developer, you can quickly build your own piece using our TypeScript framework. For guidance, please refer to the following guide: Contributor's Guide
unilm
The 'unilm' repository is a collection of tools, models, and architectures for Foundation Models and General AI, focusing on tasks such as NLP, MT, Speech, Document AI, and Multimodal AI. It includes various pre-trained models, such as UniLM, InfoXLM, DeltaLM, MiniLM, AdaLM, BEiT, LayoutLM, WavLM, VALL-E, and more, designed for tasks like language understanding, generation, translation, vision, speech, and multimodal processing. The repository also features toolkits like s2s-ft for sequence-to-sequence fine-tuning and Aggressive Decoding for efficient sequence-to-sequence decoding. Additionally, it offers applications like TrOCR for OCR, LayoutReader for reading order detection, and XLM-T for multilingual NMT.
Awesome-Embodied-Agent-with-LLMs
This repository, named Awesome-Embodied-Agent-with-LLMs, is a curated list of research related to Embodied AI or agents with Large Language Models. It includes various papers, surveys, and projects focusing on topics such as self-evolving agents, advanced agent applications, LLMs with RL or world models, planning and manipulation, multi-agent learning and coordination, vision and language navigation, detection, 3D grounding, interactive embodied learning, rearrangement, benchmarks, simulators, and more. The repository provides a comprehensive collection of resources for individuals interested in exploring the intersection of embodied agents and large language models.
codemod
Codemod platform is a tool that helps developers create, distribute, and run codemods in codebases of any size. The AI-powered, community-led codemods enable automation of framework upgrades, large refactoring, and boilerplate programming with speed and developer experience. It aims to make dream migrations a reality for developers by providing a platform for seamless codemod operations.
how-to-optim-algorithm-in-cuda
This repository documents how to optimize common algorithms based on CUDA. It includes subdirectories with code implementations for specific optimizations. The optimizations cover topics such as compiling PyTorch from source, NVIDIA's reduce optimization, OneFlow's elementwise template, fast atomic add for half data types, upsample nearest2d optimization in OneFlow, optimized indexing in PyTorch, OneFlow's softmax kernel, linear attention optimization, and more. The repository also includes learning resources related to deep learning frameworks, compilers, and optimization techniques.
CVPR2024-Papers-with-Code-Demo
This repository contains a collection of papers and code for the CVPR 2024 conference. The papers cover a wide range of topics in computer vision, including object detection, image segmentation, image generation, and video analysis. The code provides implementations of the algorithms described in the papers, making it easy for researchers and practitioners to reproduce the results and build upon the work of others. The repository is maintained by a team of researchers at the University of California, Berkeley.
For similar tasks
AI-Security-and-Privacy-Events
AI-Security-and-Privacy-Events is a curated list of academic events focusing on AI security and privacy. It includes seminars, conferences, workshops, tutorials, special sessions, and covers various topics such as NLP & LLM Security, Privacy and Security in ML, Machine Learning Security, AI System with Confidential Computing, Adversarial Machine Learning, and more.
qdrant
Qdrant is a vector similarity search engine and vector database. It is written in Rust, which makes it fast and reliable even under high load. Qdrant can be used for a variety of applications, including: * Semantic search * Image search * Product recommendations * Chatbots * Anomaly detection Qdrant offers a variety of features, including: * Payload storage and filtering * Hybrid search with sparse vectors * Vector quantization and on-disk storage * Distributed deployment * Highlighted features such as query planning, payload indexes, SIMD hardware acceleration, async I/O, and write-ahead logging Qdrant is available as a fully managed cloud service or as an open-source software that can be deployed on-premises.
SynapseML
SynapseML (previously known as MMLSpark) is an open-source library that simplifies the creation of massively scalable machine learning (ML) pipelines. It provides simple, composable, and distributed APIs for various machine learning tasks such as text analytics, vision, anomaly detection, and more. Built on Apache Spark, SynapseML allows seamless integration of models into existing workflows. It supports training and evaluation on single-node, multi-node, and resizable clusters, enabling scalability without resource wastage. Compatible with Python, R, Scala, Java, and .NET, SynapseML abstracts over different data sources for easy experimentation. Requires Scala 2.12, Spark 3.4+, and Python 3.8+.
mlx-vlm
MLX-VLM is a package designed for running Vision LLMs on Mac systems using MLX. It provides a convenient way to install and utilize the package for processing large language models related to vision tasks. The tool simplifies the process of running LLMs on Mac computers, offering a seamless experience for users interested in leveraging MLX for vision-related projects.
Java-AI-Book-Code
The Java-AI-Book-Code repository contains code examples for the 2020 edition of 'Practical Artificial Intelligence With Java'. It is a comprehensive update of the previous 2013 edition, featuring new content on deep learning, knowledge graphs, anomaly detection, linked data, genetic algorithms, search algorithms, and more. The repository serves as a valuable resource for Java developers interested in AI applications and provides practical implementations of various AI techniques and algorithms.
Awesome-AI-Data-Guided-Projects
A curated list of data science & AI guided projects to start building your portfolio. The repository contains guided projects covering various topics such as large language models, time series analysis, computer vision, natural language processing (NLP), and data science. Each project provides detailed instructions on how to implement specific tasks using different tools and technologies.
awesome-AIOps
awesome-AIOps is a curated list of academic researches and industrial materials related to Artificial Intelligence for IT Operations (AIOps). It includes resources such as competitions, white papers, blogs, tutorials, benchmarks, tools, companies, academic materials, talks, workshops, papers, and courses covering various aspects of AIOps like anomaly detection, root cause analysis, incident management, microservices, dependency tracing, and more.
AI_Spectrum
AI_Spectrum is a versatile machine learning library that provides a wide range of tools and algorithms for building and deploying AI models. It offers a user-friendly interface for data preprocessing, model training, and evaluation. With AI_Spectrum, users can easily experiment with different machine learning techniques and optimize their models for various tasks. The library is designed to be flexible and scalable, making it suitable for both beginners and experienced data scientists.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g Cloud IDE). * Supports consumer-grade GPUs.
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.