Awesome-LLM4EDA
Stars: 63
LLM4EDA is a repository dedicated to showcasing the emerging progress in utilizing Large Language Models for Electronic Design Automation. The repository includes resources, papers, and tools that leverage LLMs to solve problems in EDA. It covers a wide range of applications such as knowledge acquisition, code generation, code analysis, verification, and large circuit models. The goal is to provide a comprehensive understanding of how LLMs can revolutionize the EDA industry by offering innovative solutions and new interaction paradigms.
README:
- We would like to maintain a list of resources that utilize Large Language Models to solve problems in Electronic Design Automation
- LLM4EDA Paper Link
- Also see the companion list we maintain: Awesome Artificial Intelligence for Electronic Design Automation
- Maintained by members of SJTU-Thinklab: Ruizhe Zhong, Xingbo Du
- Users can interact with LLMs for knowledge acquisition and Q&A: LLMs provide a user-friendly, easy-to-interact-with assistant chatbot and bring a new interaction paradigm to EDA software (a minimal Q&A-assistant sketch follows this list).
- ChipNeMo: Domain-Adapted LLMs for Chip Design
- New Interaction Paradigm for Complex EDA Software Leveraging GPT
- From English to PCSEL: LLM helps design and optimize photonic crystal surface emitting lasers
- RapidGPT: Your Ultimate HDL Pair-Designer
- EDA Corpus: A Large Language Model Dataset for Enhanced Interaction with OpenROAD
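As a minimal illustration of this chatbot-style interaction (not the implementation of any paper above), the sketch below wraps a single EDA question in a chat-completion call. The `openai` client and the `gpt-4o-mini` model name are assumptions; any instruction-tuned LLM could be substituted.

```python
# Minimal sketch of an EDA Q&A assistant, assuming an OpenAI-compatible chat API
# via the `openai` Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an EDA assistant. Answer questions about RTL design, synthesis, "
    "place-and-route, and tool usage concisely, and say so when you are unsure."
)

def ask_eda_assistant(question: str, model: str = "gpt-4o-mini") -> str:
    """Send a single EDA question to the chat model and return its answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_eda_assistant("What does negative slack mean in a timing report?"))
```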
- Given natural-language specifications and requirements, LLMs can generate RTL code and EDA tool-control scripts (a minimal generate-and-check sketch follows the paper list below).
- How to evaluate the quality of the generated code also remains an open research question, covering syntax correctness, functional equivalence, PPA (power, performance, area), and security.
- ChatEDA: A Large Language Model Powered Autonomous Agent for EDA
- ChipNeMo: Domain-Adapted LLMs for Chip Design
- ChipGPT: How far are we from natural language hardware design
- CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis
- An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation
- RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model
- GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models
- AutoChip: Automating HDL Generation Using LLM Feedback
- Chip-Chat: Challenges and Opportunities in Conversational Hardware Design
- VeriGen: A Large Language Model for Verilog Code Generation
- Generating Secure Hardware using ChatGPT Resistant to CWEs
- The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platform
- A Deep Learning Framework for Verilog Autocompletion Towards Design and Verification Automation
- RTLCoder: Outperforming GPT-3.5 in Design RTL Generation with Our Open-Source Dataset and Lightweight Solution
- VerilogEval: Evaluating Large Language Models for Verilog Code Generation
- Benchmarking Large Language Models for Automated Verilog RTL Code Generation
- SpecLLM: Exploring Generation and Review of VLSI Design Specification with Large Language Model
- Zero-Shot RTL Code Generation with Attention Sink Augmented Large Language Models
- Make Every Move Count: LLM-based High-Quality RTL Code Generation Using MCTS
- From English to ASIC Hardware Implementation with Large Language Model
- EDA Corpus: A Large Language Model Dataset for Enhanced Interaction with OpenROAD
- CreativEval: Evaluating Creativity of LLM-Based Hardware Code Generation
- Evaluating LLMs for Hardware Design and Test
- AnalogCoder: Analog Circuit Design via Training-Free Code Generation
- Data is all you need: Finetuning LLMs for Chip Design via an Automated design-data augmentation framework
- SynthAI: A Multi Agent Generative AI Framework for Automated Modular HLS Design Generation
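The generate-and-check loop referenced above can be sketched as follows. This is an illustrative flow, not the pipeline of any cited paper: the model name, the prompt wording, and the use of Icarus Verilog (`iverilog -t null`) as a syntax checker are assumptions, and real evaluations additionally check functional equivalence, PPA, and security.

```python
# Hedged sketch: prompt an LLM for Verilog from a natural-language spec, then
# run a crude syntax check with Icarus Verilog (assumed to be installed).
import subprocess
import tempfile
from openai import OpenAI

client = OpenAI()

def generate_rtl(spec: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for a single self-contained Verilog module implementing `spec`."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Reply with synthesizable Verilog-2001 code only, no prose."},
            {"role": "user", "content": f"Write a Verilog module for: {spec}"},
        ],
    )
    code = response.choices[0].message.content
    # Strip any markdown fences the model may wrap around the code.
    return code.replace("```verilog", "").replace("```", "").strip()

def syntax_ok(verilog_source: str) -> bool:
    """Syntax check only: compile with Icarus Verilog's no-output (null) target."""
    with tempfile.NamedTemporaryFile("w", suffix=".v", delete=False) as f:
        f.write(verilog_source)
        path = f.name
    result = subprocess.run(["iverilog", "-t", "null", path], capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    rtl = generate_rtl("an 8-bit synchronous up-counter with active-high reset")
    print("syntax OK" if syntax_ok(rtl) else "syntax errors found")
```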
- We also survey LLMs' wide application in code analysis, such as bug detection and fixing, code summarization, and security checking.
- LLMs have also demonstrated strong capability for verification, e.g., assertion-based verification (a minimal assertion-generation sketch follows this list).
- ChipNeMo: Domain-Adapted LLMs for Chip Design
- LLM4SecHW: Leveraging Domain-Specific Large Language Model for Hardware Debugging
- Unlocking Hardware Security Assurance: The Potential of LLMs
- RTLFixer: Automatically Fixing RTL Syntax Errors with Large Language Models
- LLM-assisted Generation of Hardware Assertions
- Using LLMs to Facilitate Formal Verification of RTL
- DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection
- Fixing Hardware Security Bugs with Large Language Models (On Hardware Security Bug Code Fixes By Prompting Large Language Models)
- LLM for SoC Security: A Paradigm Shift
- The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platform
- A Deep Learning Framework for Verilog Autocompletion Towards Design and Verification Automation
- SpecLLM: Exploring Generation and Review of VLSI Design Specification with Large Language Model
- AssertLLM: Generating and Evaluating Hardware Verification Assertions from Design Specifications via Multi-LLMs
- Self-HWDebug: Automation of LLM Self-Instructing for Hardware Security Verification
- Data is all you need: Finetuning LLMs for Chip Design via an Automated design-data augmentation framework
- LLMs for Hardware Security: Boon or Bane?
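A minimal sketch of LLM-assisted assertion generation is shown below. The prompt template and model name are assumptions; this is not the flow of AssertLLM or any other cited work, which add specification extraction and formal checking on top of a step like this.

```python
# Hedged sketch: ask an LLM to propose SystemVerilog assertions for a given module.
from openai import OpenAI

client = OpenAI()

ASSERTION_PROMPT = """You are a hardware verification engineer.
Given the Verilog module below, write SystemVerilog assertions (SVA) that
check its key safety properties. Reply with assertion code only.

{rtl}
"""

def generate_assertions(rtl: str, model: str = "gpt-4o-mini") -> str:
    """Return candidate SVA assertions for the given RTL source."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": ASSERTION_PROMPT.format(rtl=rtl)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    fifo_rtl = open("fifo.v").read()  # hypothetical design under verification
    print(generate_assertions(fifo_rtl))
```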
- Large circuit models: a multimodal circuit representation learning technique, poised to provide a comprehensive understanding by harmonizing and extracting insights from varied data sources, such as functional specifications, RTL designs, circuit netlists, and physical layouts (a toy sketch follows the reference below).
- The Dawn of AI-Native EDA: Promises and Challenges of Large Circuit Models
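As a toy illustration of the multimodal idea (and explicitly not the method of the paper above), the sketch below pairs a text embedding of the specification with a few hand-picked structural features of the netlist graph. The embedding model, the `networkx` features, and the simple concatenation are all illustrative assumptions.

```python
# Toy sketch: fuse a specification-text embedding with crude netlist-graph features.
import networkx as nx
import numpy as np
from openai import OpenAI

client = OpenAI()

def spec_embedding(spec_text: str) -> np.ndarray:
    """Embed the natural-language specification (embedding model is a placeholder)."""
    response = client.embeddings.create(model="text-embedding-3-small", input=spec_text)
    return np.array(response.data[0].embedding)

def netlist_features(edges: list[tuple[str, str]]) -> np.ndarray:
    """A few structural features of the gate-level netlist viewed as a directed graph."""
    g = nx.DiGraph(edges)
    longest_path = nx.dag_longest_path_length(g) if nx.is_directed_acyclic_graph(g) else -1
    return np.array([g.number_of_nodes(), g.number_of_edges(), longest_path], dtype=float)

def circuit_representation(spec_text: str, edges: list[tuple[str, str]]) -> np.ndarray:
    """Concatenate the two modalities into one joint feature vector."""
    return np.concatenate([spec_embedding(spec_text), netlist_features(edges)])

if __name__ == "__main__":
    toy_netlist = [("a", "and1"), ("b", "and1"), ("and1", "or1"), ("c", "or1"), ("or1", "y")]
    vec = circuit_representation("2-input AND feeding a 2-input OR", toy_netlist)
    print(vec.shape)
```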
LLM4EDA: Emerging Progress in Large Language Models for Electronic Design Automation
If you find this repo useful, please cite our paper.
@article{zhong2023llm4eda,
title={LLM4EDA: Emerging Progress in Large Language Models for Electronic Design Automation},
author={Zhong, Ruizhe and Du, Xingbo and Kai, Shixiong and Tang, Zhentao and Xu, Siyuan and Zhen, Hui-Ling and Hao, Jianye and Xu, Qiang and Yuan, Mingxuan and Yan, Junchi},
journal={arXiv preprint arXiv:2401.12224},
year={2023}
}
Alternative AI tools for Awesome-LLM4EDA
Similar Open Source Tools
LLM-PLSE-paper
LLM-PLSE-paper is a repository focused on the applications of Large Language Models (LLMs) in Programming Language and Software Engineering (PL/SE) domains. It covers a wide range of topics including bug detection, specification inference and verification, code generation, fuzzing and testing, code model and reasoning, code understanding, IDE technologies, prompting for reasoning tasks, and agent/tool usage and planning. The repository provides a comprehensive collection of research papers, benchmarks, empirical studies, and frameworks related to the capabilities of LLMs in various PL/SE tasks.
awesome-generative-ai
A curated list of Generative AI projects, tools, artworks, and models
awesome-gpt-prompt-engineering
Awesome GPT Prompt Engineering is a curated list of resources, tools, and shiny things for GPT prompt engineering. It includes roadmaps, guides, techniques, prompt collections, papers, books, communities, prompt generators, Auto-GPT related tools, prompt injection information, ChatGPT plug-ins, prompt engineering job offers, and AI links directories. The repository aims to provide a comprehensive guide for prompt engineering enthusiasts, covering various aspects of working with GPT models and improving communication with AI tools.
LLMEvaluation
The LLMEvaluation repository is a comprehensive compendium of evaluation methods for Large Language Models (LLMs) and LLM-based systems. It aims to assist academics and industry professionals in creating effective evaluation suites tailored to their specific needs by reviewing industry practices for assessing LLMs and their applications. The repository covers a wide range of evaluation techniques, benchmarks, and studies related to LLMs, including areas such as embeddings, question answering, multi-turn dialogues, reasoning, multi-lingual tasks, ethical AI, biases, safe AI, code generation, summarization, software performance, agent LLM architectures, long text generation, graph understanding, and various unclassified tasks. It also includes evaluations for LLM systems in conversational systems, copilots, search and recommendation engines, task utility, and verticals like healthcare, law, science, financial, and others. The repository provides a wealth of resources for evaluating and understanding the capabilities of LLMs in different domains.
edgeai
Embedded inference of Deep Learning models is quite challenging due to high compute requirements. TI’s Edge AI software product helps optimize and accelerate inference on TI’s embedded devices. It supports heterogeneous execution of DNNs across cortex-A based MPUs, TI’s latest generation C7x DSP, and DNN accelerator (MMA). The solution simplifies the product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries.
Awesome-Papers-Autonomous-Agent
Awesome-Papers-Autonomous-Agent is a curated collection of recent papers focusing on autonomous agents, specifically interested in RL-based agents and LLM-based agents. The repository aims to provide a comprehensive resource for researchers and practitioners interested in intelligent agents that can achieve goals, acquire knowledge, and continually improve. The collection includes papers on various topics such as instruction following, building agents based on world models, using language as knowledge, leveraging LLMs as a tool, generalization across tasks, continual learning, combining RL and LLM, transformer-based policies, trajectory to language, trajectory prediction, multimodal agents, training LLMs for generalization and adaptation, task-specific designing, multi-agent systems, experimental analysis, benchmarking, applications, algorithm design, and combining with RL.
Reflection_Tuning
Reflection-Tuning is a project focused on improving the quality of instruction-tuning data through a reflection-based method. It introduces Selective Reflection-Tuning, where the student model can decide whether to accept the improvements made by the teacher model. The project aims to generate high-quality instruction-response pairs by defining specific criteria for the oracle model to follow and respond to. It also evaluates the efficacy and relevance of instruction-response pairs using the r-IFD metric. The project provides code for reflection and selection processes, along with data and model weights for both V1 and V2 methods.
repromodel
ReproModel is an open-source toolbox designed to boost AI research efficiency by enabling researchers to reproduce, compare, train, and test AI models faster. It provides standardized models, dataloaders, and processing procedures, allowing researchers to focus on new datasets and model development. With a no-code solution, users can access benchmark and SOTA models and datasets, utilize training visualizations, extract code for publication, and leverage an LLM-powered automated methodology description writer. The toolbox helps researchers modularize development, compare pipeline performance reproducibly, and reduce time for model development, computation, and writing. Future versions aim to facilitate building upon state-of-the-art research by loading previously published study IDs with verified code, experiments, and results stored in the system.
Open-Medical-Reasoning-Tasks
Open Life Science AI: Medical Reasoning Tasks is a collaborative hub for developing cutting-edge reasoning tasks for Large Language Models (LLMs) in the medical, healthcare, and clinical domains. The repository aims to advance AI capabilities in healthcare by fostering accurate diagnoses, personalized treatments, and improved patient outcomes. It offers a diverse range of medical reasoning challenges such as Diagnostic Reasoning, Treatment Planning, Medical Image Analysis, Clinical Data Interpretation, Patient History Analysis, Ethical Decision Making, Medical Literature Comprehension, and Drug Interaction Assessment. Contributors can join the community of healthcare professionals, AI researchers, and enthusiasts to contribute to the repository by creating new tasks or improvements following the provided guidelines. The repository also provides resources including a task list, evaluation metrics, medical AI papers, and healthcare datasets for training and evaluation.
interpret
InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior, or understand the reasons behind individual predictions. Interpretability is essential for:
- Model debugging: Why did my model make this mistake?
- Feature engineering: How can I improve my model?
- Detecting fairness issues: Does my model discriminate?
- Human-AI cooperation: How can I understand and trust the model's decisions?
- Regulatory compliance: Does my model satisfy legal requirements?
- High-risk applications: Healthcare, finance, judicial, ...
PurpleWave
PurpleWave is a tournament-winning AI player for StarCraft: Brood War written in Scala. It has won multiple competitions and is capable of playing all three races with a variety of professional-style strategies. PurpleWave has ranked #1 on various ladders and credits several individuals and communities for its development and success. The tool can be built using specific steps outlined in the readme and run either from IntelliJ IDEA or as a JAR file in the StarCraft directory. PurpleWave is published under the MIT License, encouraging users to use it as a starting point for their own creations.
openvino-plugins-ai-audacity
OpenVINO™ AI Plugins for Audacity* are a set of AI-enabled effects, generators, and analyzers for Audacity®. These AI features run 100% locally on your PC -- no internet connection necessary! OpenVINO™ is used to run AI models on supported accelerators found on the user's system such as CPU, GPU, and NPU.
- **Music Separation**: Separate a mono or stereo track into individual stems -- Drums, Bass, Vocals, & Other Instruments.
- **Noise Suppression**: Removes background noise from an audio sample.
- **Music Generation & Continuation**: Uses MusicGen LLM to generate snippets of music, or to generate a continuation of an existing snippet of music.
- **Whisper Transcription**: Uses whisper.cpp to generate a label track containing the transcription or translation for a given selection of spoken audio or vocals.
AGI-Papers
This repository contains a collection of papers and resources related to Large Language Models (LLMs), including their applications in various domains such as text generation, translation, question answering, and dialogue systems, as well as discussions on the ethical and societal implications of LLMs.
FinRobot
FinRobot is an open-source AI agent platform designed for financial applications using large language models. It transcends the scope of FinGPT, offering a comprehensive solution that integrates a diverse array of AI technologies. The platform's versatility and adaptability cater to the multifaceted needs of the financial industry. FinRobot's ecosystem is organized into four layers, including Financial AI Agents Layer, Financial LLMs Algorithms Layer, LLMOps and DataOps Layers, and Multi-source LLM Foundation Models Layer. The platform's agent workflow involves Perception, Brain, and Action modules to capture, process, and execute financial data and insights. The Smart Scheduler optimizes model diversity and selection for tasks, managed by components like Director Agent, Agent Registration, Agent Adaptor, and Task Manager. The tool provides a structured file organization with subfolders for agents, data sources, and functional modules, along with installation instructions and hands-on tutorials.
For similar tasks
DeGPT
DeGPT is a tool designed to optimize decompiler output using Large Language Models (LLMs). It requires manual installation of specific packages and an OpenAI API key. Optimization of decompiler output is performed by running the provided scripts.
code2prompt
Code2Prompt is a powerful command-line tool that generates comprehensive prompts from codebases, designed to streamline interactions between developers and Large Language Models (LLMs) for code analysis, documentation, and improvement tasks. It bridges the gap between codebases and LLMs by converting projects into AI-friendly prompts, enabling users to leverage AI for various software development tasks. The tool offers features like holistic codebase representation, intelligent source tree generation, customizable prompt templates, smart token management, Gitignore integration, flexible file handling, clipboard-ready output, multiple output options, and enhanced code readability.
For similar jobs
ztachip
ztachip is a RISCV accelerator designed for vision and AI edge applications, offering up to 20-50x acceleration compared to non-accelerated RISCV implementations. It features an innovative tensor processor hardware to accelerate various vision tasks and TensorFlow AI models. ztachip introduces a new tensor programming paradigm for massive processing/data parallelism. The repository includes technical documentation, code structure, build procedures, and reference design examples for running vision/AI applications on FPGA devices. Users can build ztachip as a standalone executable or a micropython port, and run various AI/vision applications like image classification, object detection, edge detection, motion detection, and multi-tasking on supported hardware.