LLMFarm


LLaMA and other large language models on iOS and MacOS, offline, using the GGML library.


LLMFarm is an iOS and MacOS app designed to work with large language models (LLMs). It allows users to load different LLMs with specific parameters, test the performance of various LLMs on iOS and macOS, and identify the most suitable model for their projects. The tool is based on ggml and llama.cpp by Georgi Gerganov and incorporates sources from rwkv.cpp by saharNooby, Mia by byroneverson, and LlamaChat by alexrozanski. LLMFarm supports MacOS (13+) and iOS (16+), various inference and sampling methods, Metal (not supported on Intel Macs), model setting templates, LoRA adapters, LoRA finetuning, LoRA export as a model, and more. It offers a range of inferences including LLaMA, GPTNeoX, Replit, GPT2, Starcoder, RWKV, Falcon, MPT, Bloom, and others, and also supports multimodal models such as LLaVA, Obsidian, and MobileVLM. Users can customize inference options through JSON files and access supported models for download.

README:

LLMFarm


LLMFarm is an iOS and MacOS app to work with large language models (LLMs). It allows you to load different LLMs with certain parameters. With LLMFarm, you can test the performance of different LLMs on iOS and macOS and find the most suitable model for your project.
Based on ggml and llama.cpp by Georgi Gerganov.

Features

Inferences

See full list here.

Multimodal

Note: For Falcon, Alpaca, GPT4All, Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2, Vigogne (French), Vicuna, Koala, OpenBuddy (Multilingual), Pygmalion/Metharme, WizardLM, Baichuan 1 & 2 + derivations, Aquila 1 & 2, Mistral AI v0.1, Refact, Persimmon 8B, MPT, and Bloom, select the LLaMA inference in the model settings.

Sampling methods

Getting Started

You can find answers to some questions in the FAQ section.

Inference options

When you create a chat, a JSON file is generated in which you can specify additional inference options. Chat files are located in the "chats" directory. You can see all inference options here.
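As a rough illustration, such a chat file might carry sampling and context settings alongside the model reference. The keys below are hypothetical examples based on common llama.cpp-style parameters, not LLMFarm's exact schema; consult the inference options list for the real field names:

```json
{
  "title": "My Chat",
  "model": "llama-2-7b-chat.Q4_K_M.gguf",
  "inference": "llama",
  "context": 2048,
  "temp": 0.7,
  "top_k": 40,
  "top_p": 0.95,
  "repeat_penalty": 1.1
}
```

Editing the file lets you re-run the same prompts under different sampling settings when comparing models.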

Models

You can download some of the supported models here.

Development

llmfarm_core has been moved to a separate repository. To build LLMFarm, you need to clone this repository recursively:

git clone --recurse-submodules https://github.com/guinmoon/LLMFarm
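If you have already cloned the repository without `--recurse-submodules`, the submodules can be fetched afterwards with a standard Git command (generic Git usage, not an LLMFarm-specific step):

```shell
# From inside the existing clone: initialize and fetch all
# submodules (including nested ones) recorded in .gitmodules.
git submodule update --init --recursive
```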

Also used sources from:

- rwkv.cpp by saharNooby
- Mia by byroneverson
- LlamaChat by alexrozanski
