Large-Language-Models

Large Language Models (LLM)

Stars: 233

Large Language Models (LLM): this project browses the Wolfram directory and its associated URLs to create a category structure and good word embeddings. The goal is to generate enriched prompts for GPT, Wikipedia, Arxiv, Google Scholar, Stack Exchange, or Google search. The focus is on one subdirectory: Probability & Statistics. Documentation is in the project textbook `Projects4.pdf`, available in this folder. It is recommended to download the document and browse your local copy with Chrome, Edge, or another viewer; unlike on GitHub, you will be able to click on all the links and use the internal navigation features. Look for projects related to NLP and LLM / xLLM. The best starting point is project 7.2.2, the core project on this topic, with references to all satellite projects. The project textbook (with solutions to all projects) is the core document needed to participate in the free course (deep tech dive) called **GenAI Fellowship**. For details about the fellowship, follow the link provided. An uncompressed version of `crawl_final_stats.txt.gz` is available on Google Drive; it contains all the crawled data needed as input to the Python scripts in the XLLM5 and XLLM6 folders.
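The crawling step can be pictured with the minimal sketch below. It is not the project's actual crawler: the seed URL, the `/topics/` link filter, and the page handling are assumptions, and the real scripts in the XLLM5 and XLLM6 folders may work differently.

```python
# Minimal sketch of a directory crawler (hypothetical seed URL and link filter).
# The actual crawler behind crawl_final_stats.txt.gz may differ.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SEED = "https://mathworld.wolfram.com/topics/ProbabilityandStatistics.html"  # assumed seed

def crawl(seed, max_pages=50):
    """Breadth-first crawl of directory pages, collecting page text keyed by URL."""
    seen, queue, pages = set(), [seed], {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        pages[url] = soup.get_text(" ", strip=True)   # raw text, reused later for embeddings
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if "/topics/" in link:                    # assumed filter: stay inside the directory
                queue.append(link)
    return pages

if __name__ == "__main__":
    corpus = crawl(SEED, max_pages=10)
    print(f"Crawled {len(corpus)} pages")
```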

README:

LLM

Large Language Models (LLM). Browse the Wolfram directory and associated URLs (directory and content pages) to create the category structure and good word embeddings. The goal is to generate enriched prompts for GPT, Wikipedia, ArXiv, Google Scholar, Stack Exchange, or Google search. The focus is on one subdirectory: Probability & Statistics.
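The sketch below shows one way to turn crawled page text into simple word embeddings and an enriched prompt. It assumes co-occurrence counts as a stand-in for the project's embeddings; the tokenizer, window size, and prompt template are all assumptions, not the method used in XLLM5/XLLM6.

```python
# Sketch: word "embeddings" from co-occurrence counts, then an enriched prompt.
from collections import Counter, defaultdict
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def cooccurrence_embeddings(docs, window=5):
    """Map each word to a Counter of words seen within `window` tokens of it."""
    emb = defaultdict(Counter)
    for doc in docs:
        toks = tokenize(doc)
        for i, w in enumerate(toks):
            for v in toks[max(0, i - window): i + window + 1]:
                if v != w:
                    emb[w][v] += 1
    return emb

def enrich_prompt(query, emb, top_k=8):
    """Append the strongest co-occurring terms to the query (e.g. for GPT or search)."""
    related = Counter()
    for w in tokenize(query):
        related.update(emb.get(w, Counter()))
    context = ", ".join(t for t, _ in related.most_common(top_k))
    return f"{query}\nRelated terms from the Probability & Statistics directory: {context}"

# Example usage with a toy corpus standing in for the crawled pages:
docs = ["The normal distribution has mean and variance parameters.",
        "Variance measures dispersion around the mean of a distribution."]
emb = cooccurrence_embeddings(docs)
print(enrich_prompt("What is variance?", emb))
```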

Documentation is in my project textbook Projects4.pdf, here in this folder. I strongly encourage you to download the document and browse your local copy with Chrome, Edge, or other viewers. Unlike on GitHub, you will be able to click on all the links and follow the internal navigation features. Look for projects related to NLP and LLM / xLLM. The best starting point is project 7.2.2. It's the core project on this topic, with references to all satellite projects.

The project textbook (with my solutions to all projects) is the core document needed to participate in the free course (deep tech dive) called GenAI Fellowship. For details about the fellowship, follow this link.

Note: An uncompressed version of crawl_final_stats.txt.gz is available on my Google Drive, here. This file contains all the crawled data needed as input to the Python scripts in the XLLM5 and XLLM6 folders.
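Reading the compressed file from Python might look like the sketch below. It assumes plain text with one record per line; the exact field layout is documented in the project textbook, not here.

```python
# Sketch: inspect crawl_final_stats.txt.gz (assumed one text record per line).
import gzip

path = "crawl_final_stats.txt.gz"   # or the uncompressed .txt copy from Google Drive

n = 0
with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
    for line in f:
        if n < 5:
            print(line.rstrip())    # peek at the first few records
        n += 1
print(f"{n} lines read")
```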
