SiriLLama

Use locally running LLMs directly from Siri 🦙🟣



Siri LLama is an Apple shortcut that accesses locally running LLMs through Siri or the shortcut UI on any Apple device connected to the same network as your host machine. It uses LangChain 🦜🔗 and supports open source models from both Ollama 🦙 and Fireworks AI 🎆

Download the shortcut from HERE

🟣 Simple Chat Video🎬

🟣 Multimodal Video 🎬

🟣 RAG Video 🎬

Getting Started

Requirements

pip install -r requirements.txt

Ollama Installation 🦙

  1. Install Ollama on your machine. You have to run ollama serve in the terminal to start the server

  2. Pull the models you want to use, for example:

ollama run llama3 # chat model
ollama run llava # multimodal
  3. In config.py, set OLLAMA_CHAT, OLLAMA_VISUAL_CHAT, and OLLAMA_EMBEDDINGS_MODEL to the models you pulled from Ollama
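
A minimal sketch of the Ollama section of config.py, assuming the variable names above; the embeddings model name is only an example, use whatever you pulled:

# Ollama models in config.py (sketch -- values are examples, not the repo's defaults)
OLLAMA_CHAT = "llama3"                        # chat model pulled above
OLLAMA_VISUAL_CHAT = "llava"                  # multimodal model pulled above
OLLAMA_EMBEDDINGS_MODEL = "nomic-embed-text"  # example embeddings model (assumption)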

Fireworks AI Installation 🎆

  1. Get your Fireworks API key and put it in fireworks_models.py

  2. In config.py, set FIREWORKS_CHAT, FIREWORKS_VISUAL_CHAT, and FIREWORKS_EMBEDDINGS_MODEL to the models you want to use from Fireworks AI, and set your FIREWORKS_API_KEY
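
A hedged sketch of the Fireworks section of config.py; the model IDs below are illustrative assumptions, pick yours from the Fireworks catalog:

# Fireworks AI settings in config.py (sketch -- model IDs are examples)
FIREWORKS_API_KEY = "fw-..."  # your Fireworks API key
FIREWORKS_CHAT = "accounts/fireworks/models/llama-v3-8b-instruct"  # example ID
FIREWORKS_VISUAL_CHAT = "accounts/fireworks/models/firellava-13b"  # example ID
FIREWORKS_EMBEDDINGS_MODEL = "nomic-ai/nomic-embed-text-v1.5"      # example ID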

Config

In config.py, set MEMORY_SIZE (how many previous messages to remember) and ANSWER_SIZE_WORDS (roughly how many words to generate in each answer).
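
For example (the values here are arbitrary, tune them to taste):

# Conversation settings in config.py (sketch)
MEMORY_SIZE = 5          # remember the last 5 messages
ANSWER_SIZE_WORDS = 50   # aim for answers of about 50 words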

Running SiriLLama 🟣🦙

  1. Download or clone the repo

  2. Set the provider (Ollama / Fireworks) in app.py (a rough sketch of app.py's shape follows this list)

  3. Run the Flask app using:

python3 app.py

  4. On your Apple device, download the shortcut from here. Note that you must run the shortcut through Siri to "talk" to it; otherwise it will prompt you to type text.

  5. Run the shortcut through Siri or the shortcut UI. The first time you run the shortcut, you will be asked to enter your IP address and the port number shown in the terminal:

python app.py
...
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5001
 * Running on http://192.168.1.134:5001
Press CTRL+C to quit

In the example above, the IP address is 192.168.1.134 and the port is 5001 (the port is set in app.py; change it there if needed)

  6. If you are using Siri to interact with the shortcut, saying "Good Bye" will stop Siri.
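
For reference, here is a rough, hypothetical sketch of the shape of app.py; the module names, route, and provider switch below are illustrative assumptions, not the repo's actual code:

# app.py (hypothetical sketch -- names and route are assumptions)
from flask import Flask, request

from ollama_models import chat_model        # hypothetical provider module
# from fireworks_models import chat_model   # swap the provider by changing this import

app = Flask(__name__)

@app.route("/", methods=["POST"])
def generate():
    prompt = request.form.get("prompt", "")  # the shortcut POSTs the user's message
    answer = chat_model.invoke(prompt)       # LangChain chat models expose .invoke()
    return str(answer.content)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so other devices on the LAN (your iPhone) can reach the server
    app.run(host="0.0.0.0", port=5001)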

Common Issues 🐞

  • Even though we access the Flask app (not the Ollama server directly), some Windows users who installed Ollama under WSL have to make sure the Ollama server is exposed to the network; check this issue for more details. One possible fix is shown after this list.
  • When running the shortcut for the first time from Siri, it should ask for permission to send data to the Flask server. If it doesn't work (especially on iOS 17.4), first try running the shortcut + sending a message from the iOS Shortcuts app to trigger the permissions dialog, then try running it through Siri again.
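
For the WSL case, one possible fix (an assumption based on Ollama's documented OLLAMA_HOST environment variable, not something this repo prescribes) is to bind the Ollama server to all interfaces before starting it:

OLLAMA_HOST=0.0.0.0 ollama serve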

Other LLM Providers 🤖🤖

Supposedly, SiriLLama should work with any LLM provider, including OpenAI, Claude, etc., but first make sure you have installed the corresponding LangChain packages and set the models in config.py
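
As a hedged example for OpenAI (assuming app.py builds its chat model through LangChain; the model name below is illustrative): install the integration with pip install langchain-openai, then construct the model like this:

# Hypothetical example: using OpenAI through LangChain (not this repo's actual wiring)
from langchain_openai import ChatOpenAI

chat_model = ChatOpenAI(model="gpt-4o-mini", api_key="sk-...")  # example model name
print(chat_model.invoke("Hello from SiriLLama").content)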
