Awesome-LLM4RS-Papers
Large Language Model-enhanced Recommender System Papers
Stars: 480
README:
This is a paper list about Large Language Model-enhanced Recommender System. It also contains some related works.
Keywords: recommendation system, large language models
Welcome to open an issue or make a pull request!
- A Survey on Large Language Models for Recommendation, arxiv 2023, [paper].
- How Can Recommender Systems Benefit from Large Language Models: A Survey, arxiv 2023, [paper].
- Recommender Systems in the Era of Large Language Models (LLMs), arxiv 2023, [paper].
- Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender System, arxiv 2023, [paper].
- GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation, arxiv 2023, [paper].
- TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation, RecSys 2023 Short Paper, [paper], [code].
- Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models, arxiv 2023, [paper].
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach, arxiv 2023, [paper].
- A First Look at LLM-Powered Generative News Recommendation, arxiv 2023, [paper].
- Sparks of Artificial General Recommender (AGR): Early Experiments with ChatGPT, arxiv 2023, [paper].
- Zero-Shot Next-Item Recommendation using Large Pretrained Language Models, arxiv 2023, [paper], [code].
- Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction, arxiv 2023, [paper].
- Large Language Models are Zero-Shot Rankers for Recommender Systems, arxiv 2023, [paper], [code].
- Leveraging Large Language Models in Conversational Recommender Systems, arxiv 2023, [paper].
- Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models, arxiv 2023, [paper], [code].
- PALR: Personalization Aware LLMs for Recommendation, arxiv 2023, [paper].
- Prompt Tuning Large Language Models on Personalized Aspect Extraction for Recommendations, arxiv 2023, [paper].
- A Preliminary Study of ChatGPT on News Recommendation: Personalization, Provider Fairness, Fake News, arxiv 2023, [paper].
- GenRec: Large Language Model for Generative Recommendation, arxiv 2023, [paper].
- Generative Job Recommendations with Large Language Model, arxiv 2023, [paper].
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations, arxiv 2023, [paper].
- LLM-Rec: Personalized Recommendation via Prompting Large Language Models, arxiv 2023, [paper].
- A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems, arxiv 2023, [paper].
- LLMRec: Benchmarking Large Language Models on Recommendation Task, arxiv 2023, [paper], [code].
- Zero-Shot Recommendations with Pre-Trained Large Language Models for Multimodal Nudging, arxiv 2023, [paper].
- Prompt Distillation for Efficient LLM-based Recommendation, CIKM 2023, [paper], [code].
- Large Language Models as Zero-Shot Conversational Recommenders, CIKM 2023, [paper], [code].
- Leveraging Large Language Models (LLMs) to Empower Training-Free Dataset Condensation for Content-Based Recommendation, arxiv 2023, [paper].
- LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking, arxiv 2023, [paper], [code].
- Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences, RecSys 2023, [paper].
- CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation, arxiv 2023, [paper].
- Large Language Model Augmented Narrative Driven Recommendations, RecSys 2023 Short Paper, [paper].
- Leveraging Large Language Models for Sequential Recommendation, RecSys 2023 LBR, [paper], [code].
- ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models, WSDM 2024, [paper], [code].
- LLaRA: Aligning Large Language Models with Sequential Recommenders, arxiv 2023, [paper], [code].
- LLM4Vis: Explainable Visualization Recommendation using ChatGPT, arxiv 2023, [paper], [code].
- E4SRec: An Elegant Effective Efficient Extensible Solution of Large Language Models for Sequential Recommendation, arxiv 2023, [paper], [code].
- Adapting Large Language Models by Integrating Collaborative Semantics for Recommendation, arxiv 2023, [paper], [code].
- Representation Learning with Large Language Models for Recommendation, WWW 2024, [paper], [code].
- Stealthy Attack on Large Language Model based Recommendation, arxiv 2024, [paper].
- ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation, arxiv 2024, [paper], [code].
- Wukong: Towards a Scaling Law for Large-Scale Recommendation, arxiv 2024, [paper].
- A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation, arxiv 2024, [paper], [code].
- Harnessing Large Language Models for Text-Rich Sequential Recommendation, arxiv 2024, [paper].
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations, arxiv 2024, [paper], [code].
- LLMRG: Improving Recommendations through Large Language Model Reasoning Graphs, arxiv 2024, [paper].
- Enhancing Job Recommendation through LLM-based Generative Adversarial Networks, AAAI 2024, [paper].
- LLM-Guided Multi-View Hypergraph Learning for Human-Centric Explainable Recommendation, arxiv 2024, [paper].
- Sequential Recommendation with Latent Relations based on Large Language Model, SIGIR 2024, [paper], [code].
- Common Sense Enhanced Knowledge-based Recommendation with Large Language Model, arxiv 2024, [paper], [code].
- Re2LLM: Reflective Reinforcement Large Language Model for Session-based Recommendation, arxiv 2024, [paper].
- Enhancing Content-based Recommendation via Large Language Model, arxiv 2024, [paper].
- Aligning Large Language Models with Recommendation Knowledge, arxiv 2024, [paper].
- Where to Move Next: Zero-shot Generalization of LLMs for Next POI Recommendation, arxiv 2024, [paper].
- DRE: Generating Recommendation Explanations by Aligning Large Language Models at Data-level, arxiv 2024, [paper].
- Behavior Alignment: A New Perspective of Evaluating LLM-based Conversational Recommendation Systems, SIGIR 2024, [paper], [code].
- Exact and Efficient Unlearning for Large Language Model-based Recommendation, arxiv 2024, [paper].
- Large Language Models for Intent-Driven Session Recommendations, SIGIR 2024, [paper].
- Reinforcement Learning-based Recommender Systems with Large Language Models for State Reward and Action Modeling, SIGIR 2024, [paper].
- Enhancing Long-Term Recommendation with Bi-level Learnable Large Language Model Planning, SIGIR 2024, [paper].
- LoRec: Large Language Model for Robust Sequential Recommendation against Poisoning Attacks, SIGIR 2024, [paper].
- Data-efficient Fine-tuning for LLM-based Recommendation, SIGIR 2024, [paper].
- Towards LLM-RecSys Alignment with Textual ID Learning, SIGIR 2024, [paper].
- Breaking the Length Barrier: LLM-Enhanced CTR Prediction in Long Textual User Behaviors, SIGIR 2024, [paper].
- RecGPT: Generative Personalized Prompts for Sequential Recommendation via ChatGPT Training Paradigm, arxiv 2024, [paper]
- Efficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations, arxiv 2024, [paper].
- Large Language Models for Next Point-of-Interest Recommendation, arxiv 2024, [paper].
- Distillation Matters: Empowering Sequential Recommenders to Match the Performance of Large Language Model, arxiv 2024, [paper].
- Large Language Models as Conversational Movie Recommenders: A User Study, arxiv 2024, [paper].
- CALRec: Contrastive Alignment of Generative LLMs for Sequential Recommendation, arxiv 2024, [paper].
- Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward, AAAI 2024, [paper].
- Breaking the Barrier: Utilizing Large Language Models for Industrial Recommendation Systems through an Inferential Knowledge Graph, arxiv 2024, [paper].
- RDRec: Rationale Distillation for LLM-based Recommendation, ACL 2024 Main (short), [paper], [code].
- Reinforced Prompt Personalization for Recommendation with Large Language Models, arxiv 2024, [paper], [code]
- Semantic Understanding and Data Imputation using Large Language Model to Accelerate Recommendation System, arxiv 2024, [paper]
- A Systematic Survey and Critical Review on Evaluating Large Language Models: Challenges, Limitations, and Recommendations, arxiv 2024, [paper]
- LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation, arxiv 2024, [paper]
- Optimizing Novelty of Top-k Recommendations using Large Language Models and Reinforcement Learning, arxiv 2024, [paper]
- "You Gotta be a Doctor, Lin": An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations, arxiv 2024, [paper]
- Multi-Layer Ranking with Large Language Models for News Source Recommendation, arxiv 2024, [paper]
- Large Language Models as Evaluators for Recommendation Explanations, arxiv 2024, [paper]
- Text-like Encoding of Collaborative Information in Large Language Models for Recommendation, arxiv 2024, [paper]
- Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation, arxiv 2024, [paper]
- XRec: Large Language Models for Explainable Recommendation, arxiv 2024, [paper], [code].
- Large Language Models Enhanced Sequential Recommendation for Long-tail User and Item, arxiv 2024, [paper], [code]
- Keyword-driven Retrieval-Augmented Large Language Models for Cold-start User Recommendations, arxiv 2024, [paper]
- News Recommendation with Category Description by a Large Language Model, arxiv 2024, [paper]
- Learning Structure and Knowledge Aware Representation with Large Language Models for Concept Recommendation, arxiv 2024, [paper]
- Reindex-Then-Adapt: Improving Large Language Models for Conversational Recommendation, arxiv 2024, [paper]
- EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations, arxiv 2024, [paper]
- DynLLM: When Large Language Models Meet Dynamic Graph Recommendation, arxiv 2024, [paper]
- Conversational Topic Recommendation in Counseling and Psychotherapy with Decision Transformer and Large Language Models, arxiv 2024, [paper]
- OpenP5: An Open-Source Platform for Developing, Training, and Evaluating LLM-based Recommender Systems, SIGIR 2024, [paper], [code].
- When Large Language Model based Agent Meets User Behavior Analysis: A Novel User Simulation Paradigm, arxiv 2023, [paper].
- RecMind: Large Language Model Powered Agent For Recommendation, arxiv 2023, [paper].
- On Generative Agents in Recommendation, arxiv 2023, [paper], [code].
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems, arxiv 2023, [paper].
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [link]
- Balancing Information Perception with Yin-Yang: Agent-Based Information Neutrality Model for Recommendation Systems, arxiv 2024, [paper]
- Lending Interaction Wings to Recommender Systems with Conversational Agents, NeurIPS 2023, [paper].
- A Conceptual Framework for Conversational Search and Recommendation: Conceptualizing Agent-Human Interactions During the Conversational Search Process, arxiv 2024, [paper].
- Enhancing Recommender Systems with Large Language Model Reasoning Graphs, arxiv 2023, [paper].
- Towards Open-World Recommendation with Knowledge Augmentation from Large Language Models, arxiv 2023, [paper], [code].
- LLMRec: Large Language Models with Graph Augmentation for Recommendation, WSDM 2024, [paper], [code], [blog in Chinese].
- Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application, arxiv 2024, [paper].
- Language models as recommender systems: Evaluations and limitations, NeurIPS Workshop 2021, [paper].
- Generative Recommendation: Towards Next-generation Recommender Paradigm, arxiv 2023, [paper].
- Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited, SIGIR 2023, [paper], [code].
- Exploring the Upper Limits of Text-Based Collaborative Filtering Using Large Language Models: Discoveries and Insights, arxiv 2023, [paper].
- Exploring Adapter-based Transfer Learning for Recommender Systems: Empirical Studies and Practical Insights, arxiv 2023, [paper].
- Is ChatGPT a Good Recommender? A Preliminary Study, arxiv 2023, [paper].
- Evaluating ChatGPT as a Recommender System: A Rigorous Approach, arxiv 2023, [paper].
- Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences, RecSys 2023 Short Paper, [paper].
- Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation, RecSys 2023 Short Paper, [paper], [code].
- Uncovering ChatGPT's Capabilities in Recommender Systems, RecSys 2023 LBR, [paper], [code].
GitHub repository: "Universal_user_representations for recommendation" [link].
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation, SIGIR 2020, [paper], [code]
- One Person, One Model, One World: Learning Continual User Representation without Forgetting, SIGIR 2021, [paper], [code]
- ID-Agnostic User Behavior Pre-training for Sequential Recommendation, CCIR 2022, [paper].
- Towards Universal Sequence Representation Learning for Recommender Systems, KDD 2022, [paper], [code].
- TransRec: Learning Transferable Recommendation from Mixture-of-Modality Feedback, arxiv 2022, [paper].
- Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders, WWW 2023, [paper], [code].
- One4all User Representation for Recommender Systems in E-commerce, arxiv 2021, [paper].
- Text Is All You Need: Learning Language Representations for Sequential Recommendation, KDD 2023, [paper].
- Collaborative Large Language Model for Recommender Systems, arxiv 2023, [paper], [code].
- A Simple Convolutional Generative Network for Next Item Recommendation, WSDM 2018, [paper], [code].
- Future Data Helps Training: Modeling Future Contexts for Session-based Recommendation, WWW 2020, [paper], [code].
- Recommender Systems with Generative Retrieval, arxiv 2023, [paper].
- Generative Sequential Recommendation with GPTRec, SIGIR 2023 workshop, [paper].
- Enhanced Generative Recommendation via Content and Collaboration Integration, arxiv 2024, [paper].
- Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems, arxiv 2023, [paper].
- Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5), arxiv 2022, [paper], [code].
- Rethinking Reinforcement Learning for Recommendation: A Prompt Perspective, SIGIR 2022, [paper].
- M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems, arxiv 2022, [paper].
- Personalized Prompt for Sequential Recommendation, arxiv 2022, [paper].
- Knowledge Prompt-tuning for Sequential Recommendation, ACM MM 2023, [paper], [code].
- Amazon-M2: A Multilingual Multi-locale Shopping Session Dataset for Recommendation and Text Generation, arxiv 2023, [paper], [KDD Cup 2023].
- PixelRec: An Image Dataset for Benchmarking Recommender Systems with Raw Pixels, arxiv 2023, [paper], [link].
- NineRec: A Benchmark Dataset Suite for Evaluating Transferable Recommendation, arxiv 2023, [paper], [link].
- A Content-Driven Micro-Video Recommendation Dataset at Scale, arxiv 2023, [paper], [link].
- EEG-SVRec: An EEG Dataset with User Multidimensional Affective Engagement Labels in Short Video Recommendation, arxiv 2024, [paper], [link].
- MealRec: A Meal Recommendation Dataset with Meal-Course Affiliation for Personalization and Healthiness, arxiv 2024, [paper].
- MIND Your Language: A Multilingual Dataset for Cross-lingual News Recommendation, SIGIR 2024, [paper], [link].