smartcat
Putting a brain behind `cat`🐈⬛: integrating language models into the Unix commands ecosystem through text streams.
Smartcat is a CLI interface that brings language models into the Unix ecosystem, allowing power users to leverage the capabilities of LLMs in their daily workflows. It features a minimalist design, seamless integration with terminal and editor workflows, and customizable prompts for specific tasks. Smartcat currently supports local models through Ollama as well as the OpenAI, Mistral AI, Anthropic, and Groq APIs, providing access to a range of language models. With its ability to manipulate file and text streams, integrate with editors, and offer configurable settings, Smartcat empowers users to automate tasks, enhance code quality, and explore creative possibilities.
README:
Puts a brain behind `cat`.
CLI interface to bring language models into the Unix ecosystem and allow terminal power users to make the most out of LLMs while keeping full control.
What makes it special:
- made for power users: tailor the config to reduce overhead on your most frequent tasks;
- minimalist, built according to the Unix philosophy with terminal and editor integration in mind;
- good I/O handling to insert user input in prompts and use the result in CLI-based workflows;
- built-in partial prompt to make the model play nice as a CLI tool;
- full configurability over which API, LLM version, and temperature you use;
- write and save your own prompt templates for faster recurring tasks (simplify, optimize, tests, etc.);
- conversation support;
- glob expressions to include context files.
Currently supports the following APIs:
- Local runs with Ollama or any server compliant with its format; see the Ollama setup section for the free and easiest way to get started! Answers might be slow depending on your setup; you may want to try the third-party APIs for an optimal workflow.
- OpenAI, Mistral AI, Anthropic, Groq.
- Installation
- Usage
- A few examples to get started 🐈⬛
- Configuration ← please read this carefully
- Ollama setup ← easiest way to get running for free
- Voice
- How to help?
On the first run (`sc`), it will ask you to generate some default configuration files and give pointers on how to finalize the install (see the configuration section).
The minimum config requirement is a `default` prompt calling a configured API (either remote with an API key, or local with Ollama).
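As a reference point, here is a minimal sketch of that setup via the local Ollama route (the generated defaults are richer; see the configuration section for the full files):

```toml
# .api_configs.toml: one configured provider is enough
[ollama]
url = "http://localhost:11434/api/chat"
default_model = "phi3"
```

```toml
# prompts.toml: an empty `default` prompt is the bare minimum
[default]
api = "ollama"
messages = []
```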
Now on how to get it.
Only tested on Linux so far; any help from users of other platforms is appreciated.
- On Linux, make sure `arecord` is installed and that `arecord --quiet audio.wav` records your audio until you ctrl-c it, producing no output on stdout.
- On Mac, make sure `sox` is installed and that `sox -t waveaudio 0 audio.wav` records your audio until you ctrl-c it, producing no output on stdout.
- On Windows, make sure `sox` is installed and that `sox -t waveaudio 0 audio.wav` records your audio until you ctrl-c it, producing no output on stdout.
If it doesn't work, please open an issue.
With an up-to-date rust and cargo setup (you might consider running `rustup update`):
cargo install smartcat
Run this command again to update `smartcat`.
Choose the one compiled for your platform on the release page.
Usage: sc [OPTIONS] [INPUT_OR_TEMPLATE_REF] [INPUT_IF_TEMPLATE_REF]
Arguments:
[INPUT_OR_TEMPLATE_REF] ref to a prompt template from config or straight input (will use `default` prompt template if input)
[INPUT_IF_TEMPLATE_REF] if the first arg matches a config template, the second will be used as input
Options:
-e, --extend-conversation whether to extend the previous conversation or start a new one
-r, --repeat-input whether to repeat the input before the output, useful to extend instead of replacing
-v, --voice whether to use voice for input
--api <API> overrides which api to hit [possible values: another-api-for-tests, ollama, anthropic, groq, mistral, openai]
-m, --model <MODEL> overrides which model (of the api) to use
-t, --temperature <TEMPERATURE> temperature higher means answer further from the average
-l, --char-limit <CHAR_LIMIT> max number of chars to include, ask for user approval if more, 0 = no limit
-c, --context <CONTEXT>... glob patterns or list of files to use the content as context
make sure it's the last arg.
-h, --help Print help
-V, --version Print version
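For instance, the override flags compose freely; a sketch (the model name is illustrative and must exist on the API you target):

```sh
sc "explain this error" --api openai -m gpt-4-turbo-preview -t 0.2 -c src/**/*.rs
```

Note that `-c` is kept as the last argument, as the help above requires.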
You can use it to accomplish tasks in the CLI but also in your editors (if they are good unix citizens, i.e. work with shell commands and text streams) to complete, refactor, write tests... anything!
The key to make this work seamlessly is a good default prompt that tells the model to behave like a CLI tool and not write any unwanted text like markdown formatting or explanations.
sc "say hi" # just ask (uses default prompt template)
sc -v # use your voice to ask (then press <space> to stop the recording)
sc test # use templated prompts
sc test "and parametrize them" # extend them on the fly
sc "explain how to use this program" -c **/*.md main.py # use files as context
git diff | sc "summarize the changes" # pipe data in
cat en.md | sc "translate in french" >> fr.md # write data out
sc -e "use a more informal tone" -t 2 >> fr.md # extend the conversation and raise the temprature
The key for a good integration in editors is a good default prompt (or a set of them) combined with an input argument specifying the task at hand.
The `-r` flag can be used to decide whether to replace or extend the selection.
Start by selecting some text, then press `:`. You can then pipe the selection content to `smartcat`.
:'<,'>!sc "replace the versions with wildcards"
:'<,'>!sc "fix this function"
will overwrite the current selection with the same text transformed by the language model.
:'<,'>!sc -r test
will repeat the input, effectively appending at the end of the current selection the result of the language model.
Add the following remap to your vimrc for easy access:
nnoremap <leader>sc :'<,'>!sc
Same concept, different shortcut: simply press the pipe key to redirect the selection to `smartcat`.
pipe:sc test -r
With some remapping, you can have your most recurrent actions attached to a few keystrokes, e.g. `<leader>wt`!
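A sketch of such a remap in Helix's `config.toml` (the key choice and select-mode placement are assumptions, adjust to taste):

```toml
# ~/.config/helix/config.toml
[keys.select.space]
t = ":pipe sc test -r"
```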
For quick questions:
sc "my quick question"
which will likely be your fastest path to an answer: a shortcut to open your terminal (if you're not in it already), `sc`, and you're set. No tab finding, no logins, no redirects, etc.
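If even that feels long, a shell alias shortens the loop further (the alias name is arbitrary):

```sh
# in your .bashrc or .zshrc
alias q='sc'
# then:
q "my quick question"
```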
To enhance coding:
select a struct
:'<,'>!sc "implement the traits FromStr and ToString for this struct"
select the generated impl block
:'<,'>!sc -e "can you make it more concise?"
put the cursor at the bottom of the file and give example usage as input
:'<,'>!sc -e "now write tests for it knowing it's used like this" -c src/main.rs
...
To have a full conversation with an LLM from a markdown file:
vim problem_solving.md
> write your question as a comment in the markdown file, then select your question
> and send it to smartcat using the aforementioned trick; use `-r` to repeat the input.
> If you want to continue the conversation, write your new question as a comment and repeat
> the previous step with `-e -r`.
> This allows you to keep track of your questions and make a nice reusable document.
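Concretely, from inside vim the loop could look like this (a sketch using the flags above; the selection is the piped input):

```vim
" first question: select the comment, then
:'<,'>!sc -r
" each follow-up: select the new comment, then
:'<,'>!sc -e -r
```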
Configuration:
- by default lives at `$HOME/.config/smartcat`
- the directory can be set using the `SMARTCAT_CONFIG_PATH` environment variable (see the example after this list)
- use `#[<input>]` as the placeholder for input when writing prompts; if none is provided, it will be automatically added at the end of the last user message
- the default model is a local `phi3` run with Ollama, but I recommend trying the latest ones to see which one works best for you
- the prompt named `default` will be the one used by default
- you can play with the temperature and set a default for each prompt depending on its use case
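For example, to keep the config in a dotfiles repo (the path is illustrative):

```sh
export SMARTCAT_CONFIG_PATH="$HOME/dotfiles/smartcat"
```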
Three files are used:
- `.api_configs.toml` stores your credentials; you need at least one provider with an API key or a local Ollama setup
- `prompts.toml` stores your prompt templates; you need at least the `default` prompt
- `conversation.toml` stores the latest chat if you need to continue it; it's automanaged, but you can make backups if you want
.api_configs.toml
[ollama] # local API, no key required
url = "http://localhost:11434/api/chat"
default_model = "phi3"
timeout_seconds = 180 # default timeout if not specified
[openai] # each supported api has its own config section with api key and url
api_key = "<your_api_key>"
default_model = "gpt-4-turbo-preview"
url = "https://api.openai.com/v1/chat/completions"
[mistral]
api_key_command = "pass mistral/api_key" # you can use a command to grab the key
default_model = "mistral-medium"
url = "https://api.mistral.ai/v1/chat/completions"
[groq]
api_key_command = "pass groq/api_key"
default_model = "llama3-70b-8192"
url = "https://api.groq.com/openai/v1/chat/completions"
[anthropic]
api_key = "<yet_another_api_key>"
url = "https://api.anthropic.com/v1/messages"
default_model = "claude-3-opus-20240229"
version = "2023-06-01"
prompts.toml
[default] # a prompt is a section
api = "ollama" # must refer to an entry in the `.api_configs.toml` file
model = "phi3" # each prompt may define its own model
[[default.messages]] # then you can list messages
role = "system"
content = """\
You are an extremely skilled programmer with a keen eye for detail and an emphasis on readable code. \
You have been tasked with acting as a smart version of the cat unix program. You take text and a prompt in and write text out. \
For that reason, it is of crucial importance to just write the desired output. Do not under any circumstance write any comment or thought \
as your output will be piped into other programs. Do not write the markdown delimiters for code either. \
Sometimes you will be asked to implement or extend some input code. Same thing goes here, write only what was asked because what you write will \
be directly added to the user's editor. \
Never ever write ``` around the code. \
"""
[empty] # always nice to have an empty prompt available
api = "openai"
# not mentioning the model will use the default from the api config
messages = []
[test]
api = "anthropic"
temperature = 0.0
[[test.messages]]
role = "system"
content = """\
You are an extremely skilled programmer with a keen eye for detail and an emphasis on readable code. \
You have been tasked with acting as a smart version of the cat unix program. You take text and a prompt in and write text out. \
For that reason, it is of crucial importance to just write the desired output. Do not under any circumstance write any comment or thought \
as your output will be piped into other programs. Do not write the markdown delimiters for code either. \
Sometimes you will be asked to implement or extend some input code. Same thing goes here, write only what was asked because what you write will \
be directly added to the user's editor. \
Never ever write ``` around the code. \
"""
[[test.messages]]
role = "user"
# the following placeholder string #[<input>] will be replaced by the input
# each message seeks it and replaces it
content ='''Write tests using pytest for the following code. Parametrize it if appropriate.
#[<input>]
'''
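With the `test` template above in place, generating tests for a file could look like this (file names are illustrative, and this assumes piped text fills the `#[<input>]` placeholder as in the earlier examples):

```sh
cat my_module.py | sc test > test_my_module.py
```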
url = "https://api.openai.com/v1/audio/transcriptions"
# make sure this command fit you OS and works on its own
recording_command = "arecord -f S16_LE --quiet <audio_file_path_placeholder>"
model = "whisper-1"
api = "openai"
See the config setup file for more details.
- Install Ollama
- Pull the model you plan on using
ollama pull phi3
- Test the model
ollama run phi3 "say hi"
- Make sure the server is available
curl http://localhost:11434
which should say "Ollama is running"; otherwise you might need to run `ollama serve`
- `smartcat` will now be able to reach your local Ollama, enjoy!
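At that point a round-trip through smartcat should work; forcing the API explicitly in case your default differs:

```sh
sc "say hi" --api ollama
```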
Use the `-v` flag to ask for voice input, then press space to end it. It will replace the prompt customization arg.
- uses OpenAI Whisper
- make sure your `recording_command` field works in your terminal; it should create a wav file
- requires you to have an OpenAI key in your `.api_configs.toml`
- you can still use any prompt template or text model to get your output
sc -v
sc test -v
sc test -v -c src/**/*
This could be a good accessibility feature, but I personally never use it, and given its current state I am considering removing it.
`smartcat` calls an external program that handles the voice recording and instructs it to save the result in a wav file. It then listens for keyboard input and stops the recording when space is pressed.
The recording is then sent to a speech-to-text model; the resulting transcript is added to the prompt and sent to the text model to get an answer.
On Linux: TODO
On Mac: TODO
On Windows: TODO
To debug, you can check the `conversation.toml` file or listen to `audio.wav` in the smartcat config home to see what the model heard and transcribed.
This feature should be offered as an extra down the road, totally optional on install. PRs are welcome!
See CONTRIBUTING.md.