awesome-llm-courses
A curated list of awesome online courses about Large Language Models (LLMs)
Stars: 87
Awesome LLM Courses is a curated list of online courses focused on Large Language Models (LLMs). The repository aims to provide a comprehensive collection of freely available courses covering various aspects of LLMs, including fundamentals, engineering, and applications. The courses suit anyone interested in natural language processing, AI development, and machine learning. The list includes courses from reputable platforms such as Hugging Face, Udacity, DeepLearning.AI, Cohere, DataCamp, and more, covering a wide range of topics from pretraining LLMs to building AI applications with them. Whether you are a beginner looking to understand the basics of LLMs or an intermediate developer interested in advanced topics like prompt engineering and generative AI, this repository has something for everyone.
README:
A curated list of awesome online courses about Large Language Models (LLMs).
We try to monitor freely available online courses about LLMs. Please open a PR or an issue if you want to suggest a list update 🤗
- 🤗 Hugging Face
- CodeSignal – Selected LLM/NLP course paths with Cosmo, the AI tutor 🐶✨
  - Understanding LLMs and Basic Prompting Techniques – 5 lessons – 15 practices – Intermediate
  - Introduction to Natural Language Processing – 4 courses – 78 practices – Intermediate
  - Text Classification with Natural Language Processing – 4 courses – 110 practices – Advanced
- 🗣️ Large Language Model Course – Maxime Labonne
- Udacity
  - Introduction to Large Language Models with Google Cloud – 45 Minutes – Beginner
  - Introduction to Gen AI Studio with Google Cloud – 20 Hours – Beginner
  - Introduction to Gemini for Google Workspace – 1 Day – Beginner
  - Introduction to Image Generation with Google Cloud – 1 Day – Intermediate
  - Generative AI Fundamentals with Google Cloud – 4 Days – Beginner
  - Gemini in Gmail – 1 Day – Beginner
  - Gemini in Google Docs – 1 Day – Beginner
  - Gemini in Google Meet – 1 Day – Beginner
  - Gemini in Google Sheets – 1 Day – Beginner
  - Gemini in Google Slides – 1 Day – Beginner
  - Gemini API by Google – 3 Days – Intermediate
  - LLMOps: Building Real-World Applications With Large Language Models – 11 Hours – Intermediate
  - Transformer Models and BERT Model with Google Cloud – 1 Day – Beginner
- DeepLearning.AI – Short Courses
  - Multimodal RAG: Chat with Videos – Intel – 1 Hour – Intermediate
  - AI Python for Beginners – 4-5 Hours – Beginner
  - Large Multimodal Model Prompting with Gemini – Google Cloud – 2 Hours – Beginner
  - Building AI Applications with Haystack – 1 Hour – Intermediate
  - Improving Accuracy of LLM Applications – Lamini and Meta – 1 Hour – Intermediate
  - Embedding Models: From Architecture to Implementation – Vectara – 1 Hour – Beginner
  - Federated Learning – Flower – 2 Hours – Beginner to Intermediate
  - Pretraining LLMs – Upstage – 1 Hour – Beginner
  - Prompt Compression and Query Optimization – MongoDB – 1 Hour – Intermediate
  - Carbon Aware Computing for GenAI Developers – Google Cloud – 1 Hour – Beginner
  - Function-Calling and Data Extraction with LLMs – Nexusflow – 1 Hour – Intermediate
  - Building Your Own Database Agent – Microsoft – 1 Hour – Beginner
  - AI Agents in LangGraph – LangChain, Tavily – 1 Hour – Intermediate
  - AI Agentic Design Patterns with AutoGen – Microsoft, Penn State University – 1 Hour – Beginner
  - Introduction to On-Device AI – Qualcomm – 1 Hour – Beginner
  - Multi AI Agent Systems with crewAI – crewAI – 1 Hour – Beginner
  - Building Multimodal Search and RAG – Weaviate – 1 Hour – Intermediate
  - Building Agentic RAG with LlamaIndex – LlamaIndex – 1 Hour – Beginner
  - Quantization in Depth – Hugging Face – 1 Hour – Intermediate
  - Prompt Engineering for Vision Models – Comet – 1 Hour – Beginner
  - Getting Started With Mistral – Mistral AI – 1 Hour – Beginner
  - Quantization Fundamentals with Hugging Face – Hugging Face – 1 Hour – Beginner
  - Preprocessing Unstructured Data for LLM Applications – Unstructured – 1 Hour – Beginner
  - Open Source Models with Hugging Face – Hugging Face – 1 Hour – Beginner
  - Prompt Engineering with Llama 2 & 3 – Meta – 1 Hour – Beginner
  - Red Teaming LLM Applications – Giskard – 1 Hour – Beginner
  - JavaScript RAG Web Apps with LlamaIndex – 1 Hour – Beginner
  - Efficiently Serving LLMs – Predibase – 1 Hour – Intermediate
  - Knowledge Graphs for RAG – Neo4j – 1 Hour – Intermediate
  - Serverless LLM apps with Amazon Bedrock – AWS – 1 Hour – Intermediate
  - ChatGPT Prompt Engineering for Developers – OpenAI – 1 Hour – Beginner to Advanced
  - Building Systems with the ChatGPT API – OpenAI – 1 Hour – Beginner to Advanced
  - LangChain for LLM Application Development – LangChain – 1 Hour – Beginner
  - LangChain: Chat with Your Data – LangChain – 1 Hour – Beginner
  - Finetuning Large Language Models – Lamini – 1 Hour – Intermediate
  - Large Language Models with Semantic Search – Cohere – 1 Hour – Beginner
  - Building Generative AI Applications with Gradio – Hugging Face – 1 Hour – Beginner
  - Evaluating and Debugging Generative AI Models Using Weights and Biases – W&B – 1 Hour – Intermediate
  - How Diffusion Models Work – 1 Hour – Intermediate
  - Building Applications with Vector Databases – Pinecone – 1 Hour – Beginner
  - Automated Testing for LLMOps – CircleCI – 1 Hour – Intermediate
  - LLMOps – Google Cloud – 1 Hour – Beginner
  - Build LLM Apps with LangChain.js – LangChain – 1 Hour – Intermediate
  - Advanced Retrieval for AI with Chroma – Chroma – 1 Hour – Intermediate
  - Reinforcement Learning from Human Feedback – Google Cloud – 1 Hour – Intermediate
  - Building and Evaluating Advanced RAG Applications – LlamaIndex – 1 Hour – Beginner
  - Quality and Safety for LLM Applications – WhyLabs – 1 Hour – Beginner
  - Vector Databases: from Embeddings to Applications – Weaviate – 1 Hour – Intermediate
  - Functions, Tools and Agents with LangChain – LangChain – 1 Hour – Intermediate
  - Pair Programming with a Large Language Model – Google – 1 Hour – Beginner
  - Understanding and Applying Text Embeddings – Google Cloud – 1 Hour – Beginner
  - How Business Thinkers Can Start Building AI Plugins With Semantic Kernel – Microsoft – 1 Hour – Beginner
- 🦜🔗 LangChain Academy
  - Introduction to LangGraph – 40 lessons – 4 hours of video content
- Cohere
- Become an AI Developer – DataCamp
  - Introduction to Large Language Models with GPT & LangChain
  - Prompt Engineering with GPT & LangChain
  - Building Multimodal AI Applications with LangChain & the OpenAI API
  - Semantic Search with Pinecone
  - Retrieval Augmented Generation with OpenAI API & Pinecone
  - Building Chatbots with the OpenAI API and Pinecone
  - Using Open Source AI Models with Hugging Face
  - Building NLP Applications with Hugging Face
  - Image Classification with Hugging Face
- EdX
  - Databricks: Large Language Models: Application through Production – 6 weeks – 4-10 hours per week
  - Databricks: Large Language Models: Foundation Models from the Ground Up – 4 weeks – 4-8 hours per week
  - IBM: Introduction to Generative AI
  - IBM: Introduction to Prompt Engineering – 3 weeks – 1-3 hours per week
  - IBM: Models and Platforms for Generative AI – 3 weeks – 1-3 hours per week
  - IBM: Developing Generative AI Applications with Python – 6 weeks – 1-2 hours per week
- Coursera
  - Introduction to Large Language Models – Google Cloud – Approx. 1 hour – Beginner
  - Encoder-Decoder Architecture – Google Cloud – Approx. 1 hour – Advanced
  - Build a Chat Application using the PaLM 2 API on Cloud Run – Google Cloud – Project – 90 minutes – Intermediate
  - Generative AI with Large Language Models – AWS – Approx. 16 hours – Intermediate
- Scrimba Courses Library – Artificial Intelligence
  - Build AI Apps with ChatGPT, DALL-E and GPT-4 – 4.6 Hours – Intermediate
  - Deploy AI apps with Cloudflare – 50 Minutes – Intermediate
  - Intro to AI Engineering – 90 Minutes – Intermediate
  - Intro to Mistral AI – 84 Minutes – Intermediate
  - Learn LangChain.js – 94 Minutes – Intermediate
  - Learn OpenAI's Assistants API – 30 Minutes – Intermediate
  - Learn to code with AI – 4.5 Hours – Beginner
  - Prompt Engineering for Web Developers – 3.1 Hours – Intermediate
- W&B AI Academy
  - RAG++: From POC to Production – 75 lessons – 2 hours of video content
  - Developer's guide to LLM prompting – 25 lessons – 1 hour of video content
  - LLM Engineering: Structured Outputs – 34 lessons – 1 hour of video content
  - Building LLM-Powered Apps – 31 lessons – 2 hours of video content
  - Training and Fine-tuning Large Language Models (LLMs) – 37 lessons – 4 hours of video content
  - Enterprise Model Management – Covers the end-to-end model lifecycle, including an LLM case study – 25 lessons – 2.5 hours of video content
- Google Cloud Skills Boost
  - Introduction to Generative AI Learning Path
    - 01 Introduction to Generative AI – Introductory
    - 02 Introduction to Large Language Models – 8 hours – Introductory
    - 03 Introduction to Responsible AI – 8 hours – Introductory
    - 04 Generative AI Fundamentals – 8 hours – Introductory
    - 05 Responsible AI: Applying AI Principles with Google Cloud – 8 hours – Introductory
  - Generative AI for Developers Learning Path
    - 01 Introduction to Image Generation – 8 hours – Introductory
    - 02 Attention Mechanism – 8 hours – Intermediate
    - 03 Encoder-Decoder Architecture – 8 hours – Intermediate
    - 04 Transformer Models and BERT Model – 8 hours – Introductory
    - 05 Create Image Captioning Models – 8 hours – Intermediate
    - 06 Introduction to Generative AI Studio – 8 hours – Introductory
    - 07 Generative AI Explorer - Vertex AI – 4 hours 15 minutes – Introductory
    - 08 Explore and Evaluate Models using Model Garden – 1 hour – Intermediate
    - 09 Prompt Design using PaLM – 1 hour 30 minutes – Introductory
- Activeloop
  - LangChain & Vector Databases in Production – 40 hours of learning content
  - Retrieval Augmented Generation for Production with LangChain & LlamaIndex – 1 hour of high-level video content – 25 hours of learning content
  - Training & Fine-Tuning LLMs for Production – 1.5 hours of high-level video content – 40 hours of learning content
- Full Stack LLM Bootcamp (Spring 2023)
- freeCodeCamp
  - Learn LangChain.js – Build LLM apps with JavaScript and OpenAI (YouTube) – Approx. 1 hour 30 minutes
- DAIR.AI
- The Chinese University of Hong Kong, Shenzhen
  - CSC 6201/CIE 6021 Large Language Models – Slides from 10 lectures
- NVIDIA – Self-Paced Courses
  - Generative AI Explained – 2 Hours – Technical - Beginner
  - Augmenting LLMs using Retrieval Augmented Generation – 1 Hour – Technical - Beginner
  - Building RAG Agents for LLMs – 8 Hours – Technical - Intermediate
- Weaviate Academy
  - PY_101T: Text data with Weaviate – Python – Project-based
  - PY_101V: Your own vectors with Weaviate – Python – Project-based
  - PY_101M: Multimodal data with Weaviate – Python – Project-based
  - PY_220: Flexible data representation: Named vectors – Python – Project-based
  - PY_230: Vector indexes – Python
  - PY_250: Vector compression for improved efficiency – Python
  - PY_275: Text tokenization – Python
  - PY_280: Multi-tenancy – Python
  - TS_100: Intro to Weaviate with TypeScript (or JavaScript) – TypeScript – Project-based
- Web Security Academy by PortSwigger (the creators of Burp Suite)
  - Web LLM attacks – Short course + 4 labs
- Neo4j Generative AI Courses
  - Neo4j & LLM Fundamentals – 4 Hours
  - Introduction to Vector Indexes and Unstructured Data – 2 Hours
  - Build a Neo4j-backed Chatbot using Python – 2 Hours – Feat. LangChain and Streamlit
  - Build a Neo4j-backed Chatbot with TypeScript – 6 Hours – Feat. LangChain and Next.js
  - Building Knowledge Graphs with LLMs – 2 Hours
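Prompt engineering and retrieval-augmented generation (RAG) are the themes that recur most often in the list above. The two short sketches below are illustrative only (they are not taken from any of the listed courses) and are meant as a rough picture of the kind of code those topics revolve around.

A minimal prompted chat-completion call, in the spirit of the introductory prompt-engineering courses. It assumes the `openai` Python SDK (v1+) is installed and an `OPENAI_API_KEY` is set in the environment; the model name is just an example.

```python
# Illustrative sketch only; not code from any listed course.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

text = "Large Language Models are neural networks trained on large text corpora."

# A pattern taught in most prompt-engineering intros: a system message that pins
# the role and output format, plus clearly delimited user data.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a concise technical summarizer."},
        {"role": "user", "content": f"Summarize the text between #### in one sentence.\n####{text}####"},
    ],
    temperature=0,  # keep outputs stable while iterating on the prompt
)
print(response.choices[0].message.content)
```

And a toy version of the retrieval step behind the many RAG courses listed above. Real systems use an embedding model plus a vector database (Pinecone, Weaviate, Chroma, ...); here random vectors stand in for embeddings just to show the similarity math.

```python
# Toy cosine-similarity retrieval (illustrative only; embeddings are faked).
import numpy as np

rng = np.random.default_rng(0)
docs = [
    "LLMs are trained on large text corpora.",
    "Vector databases store embeddings for similarity search.",
    "Diffusion models generate images from noise.",
]
doc_vecs = rng.normal(size=(len(docs), 8))          # stand-in document embeddings
query_vec = doc_vecs[1] + 0.1 * rng.normal(size=8)  # stand-in query embedding

def normalize(x):
    # Cosine similarity is the dot product of L2-normalized vectors.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = normalize(doc_vecs) @ normalize(query_vec)
best = int(np.argmax(scores))
print(f"Retrieved context: {docs[best]!r} (score={scores[best]:.2f})")
# The retrieved text would then be prepended to the LLM prompt ("augmented generation").
```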