farfalle

🔍 AI search engine - self-host with local or cloud LLMs

Stars: 2085

Farfalle is an open-source AI-powered search engine that lets you run your own local LLM or use cloud models. Its stack includes Next.js for the frontend, FastAPI for the backend, Tavily for the search API, Logfire for logging, and Redis for rate limiting. To get started, set up prerequisites such as Docker and Ollama, and obtain API keys for Tavily, OpenAI, and Groq as needed. Supported local models include llama3, mistral, gemma, and phi3. You can clone the repository, set environment variables, run the containers with Docker Compose, and deploy the backend and frontend with services like Render and Vercel.

README:

Farfalle

Open-source AI-powered search engine. (Perplexity Clone)

Run local LLMs (llama3, gemma, mistral, phi3), custom LLMs through LiteLLM, or use cloud models (Groq/Llama3, OpenAI/gpt-4o)

Demo answering questions with phi3 on my M1 MacBook Pro:

https://github.com/rashadphz/farfalle/assets/20783686/9cda83b8-0d3c-4a81-83ee-ff8cce323fee

Please feel free to contact me on Twitter or create an issue if you have any questions.

💻 Live Demo

farfalle.dev (Cloud models only)

📖 Overview

🛣️ Roadmap

  • [x] Add support for local LLMs through Ollama
  • [x] Docker deployment setup
  • [x] Add support for SearXNG. Eliminates the need for external dependencies.
  • [x] Create a pre-built Docker Image
  • [x] Add support for custom LLMs through LiteLLM
  • [ ] Chat History
  • [ ] Chat with local files

🛠️ Tech Stack

  • Frontend: Next.js
  • Backend: FastAPI
  • Search API: Tavily, SearXNG, Serper, or Bing
  • Logging: Logfire
  • Rate limiting: Redis

Features

  • Search with multiple search providers (Tavily, SearXNG, Serper, Bing)
  • Answer questions with cloud models (OpenAI/gpt-4o, OpenAI/gpt-3.5-turbo, Groq/Llama3)
  • Answer questions with local models (llama3, mistral, gemma, phi3)
  • Answer questions with any custom LLM through LiteLLM

🏃🏿‍♂️ Getting Started Locally

Prerequisites

  • Docker
  • Ollama (if running local models)
    • Download any of the supported models: llama3, mistral, gemma, phi3
    • Start the Ollama server with ollama serve (see the example below)
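
For example, to pull phi3 and start the server (a minimal sketch; any of the supported models works the same way):

ollama pull phi3    # download the model
ollama serve        # start the Ollama server (defaults to port 11434)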

Get API Keys

Depending on your configuration, you may need keys for Tavily, OpenAI, and/or Groq; see the optional environment variables below.

Quick Start:

# Ports: 3000 = Next.js frontend, 8000 = FastAPI backend, 8080 = SearXNG (assumed port roles)
docker run \
    -p 8000:8000 -p 3000:3000 -p 8080:8080 \
    --add-host=host.docker.internal:host-gateway \
    ghcr.io/rashadphz/farfalle:main

Optional

  • OPENAI_API_KEY: Your OpenAI API key. Not required if you are using Ollama.
  • SEARCH_PROVIDER: The search provider to use. Can be tavily, serper, bing, or searxng.
  • TAVILY_API_KEY: Your Tavily API key.
  • SERPER_API_KEY: Your Serper API key.
  • BING_API_KEY: Your Bing API key.
  • GROQ_API_KEY: Your Groq API key.
  • SEARXNG_BASE_URL: The base URL for the SearXNG instance.

Add any env variable to the docker run command like so:

docker run \
    -e ENV_VAR_NAME1='YOUR_ENV_VAR_VALUE1' \
    -e ENV_VAR_NAME2='YOUR_ENV_VAR_VALUE2' \
    -p 8000:8000 -p 3000:3000 -p 8080:8080 \
    --add-host=host.docker.internal:host-gateway \
    ghcr.io/rashadphz/farfalle:main
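
For instance, to use SearXNG as the search provider (no API key required; just one possible configuration):

docker run \
    -e SEARCH_PROVIDER='searxng' \
    -p 8000:8000 -p 3000:3000 -p 8080:8080 \
    --add-host=host.docker.internal:host-gateway \
    ghcr.io/rashadphz/farfalle:main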

Wait for the app to start, then visit http://localhost:3000.

Or, follow the instructions below to clone the repo and run the app locally:

1. Clone the Repo

git clone [email protected]:rashadphz/farfalle.git
cd farfalle

2. Add Environment Variables

touch .env

Add the following variables to the .env file:

Search Provider

You can use Tavily, SearXNG, Serper, or Bing as the search provider.

SearXNG (No API Key Required)

SEARCH_PROVIDER=searxng

Tavily (Requires API Key)

TAVILY_API_KEY=...
SEARCH_PROVIDER=tavily

Serper (Requires API Key)

SERPER_API_KEY=...
SEARCH_PROVIDER=serper

Bing (Requires API Key)

BING_API_KEY=...
SEARCH_PROVIDER=bing

Optional

# Cloud Models
OPENAI_API_KEY=...
GROQ_API_KEY=...

# See https://litellm.vercel.app/docs/providers for the full list of supported models
CUSTOM_MODEL=...
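
For example, to route requests through LiteLLM to a model served by Ollama (an illustration, not the only option; LiteLLM model strings take the form provider/model-name, so check the providers list for your provider's exact format):

# Hypothetical example: a llama3 model served by Ollama via LiteLLM
CUSTOM_MODEL=ollama/llama3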

3. Run Containers

This requires Docker Compose version 2.22.0 or later.

docker-compose -f docker-compose.dev.yaml up -d

Visit http://localhost:3000 to view the app.
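
If the app doesn't come up, you can follow the container logs while everything starts (standard Docker Compose usage, nothing farfalle-specific):

docker-compose -f docker-compose.dev.yaml logs -f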

For custom setup instructions, see custom-setup-instructions.md

🚀 Deploy

Backend

Deploy to Render

After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://some-service-name.onrender.com.

Frontend

Use the copied backend URL in the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.
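
For example, with the Vercel CLI (one way to set it; the Vercel dashboard works too):

vercel env add NEXT_PUBLIC_API_URL
# when prompted, paste the backend URL copied above, e.g. https://some-service-name.onrender.com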

Deploy with Vercel

And you're done! 🥳

Use Farfalle as a Search Engine

To use Farfalle as your default search engine, follow these steps:

  1. Open your browser's settings
  2. Go to 'Search Engines'
  3. Create a new search engine entry using this URL: http://localhost:3000/?q=%s (your browser substitutes the query for %s, so searching "ollama" opens http://localhost:3000/?q=ollama)
  4. Add the search engine
