AnnA_Anki_neuronal_Appendix
Using machine learning on your anki collection to enhance the scheduling via semantic clustering and semantic similarity
AnnA is a Python script designed to create filtered decks in optimal review order for Anki flashcards. It uses Machine Learning / AI to ensure semantically linked cards are reviewed far apart. The script helps users manage their daily reviews by creating special filtered decks that prioritize reviewing cards that are most different from the rest. It also allows users to reduce the number of daily reviews while increasing retention and automatically identifies semantic neighbors for each note.
README:
Tired of dealing with anki flashcards that are too similar when grinding through your backlog? This python script creates filtered decks in optimal review order. It uses Machine Learning / AI to keep semantically linked cards far from one another.
- One sentence summary
- Note to readers
- Other features
- FAQ
- Getting started
- Usage and arguments
- TODO list
- Credits and links that were helpful
- Crazy Ideas
Here are different ways of looking at what AnnA can do for you in a few words:
- When you don't have the time to complete all your daily reviews, use this to create a special filtered deck that makes sure you will only review the cards that are most different from the rest of your reviews.
- When you have too many learning cards and fear that some of them are too similar, use this to automatically review a subset of them.
- AnnA helps to avoid reviewing similar cards on the same day.
- AnnA lets you reduce the number of daily reviews while increasing (not merely keeping the same) retention.
- AnnA can automatically write in each note which other notes are its semantic neighbours.
- The dev branch is usually less outdated than the main branch.
- I would really like to integrate this into anki somehow but am not knowledgeable enough about how to do it, how to manage anki versions, how to handle different platforms etc. All help is much appreciated!
- This project communicates with anki using a fork of the addon AnkiConnect called AnnA-companion. Note that AnnA-companion was tested from anki 2.1.44 to 2.1.54 only.
- Although I've been using it daily for months, I am still changing the code base almost every day; if you tried AnnA and were disappointed, maybe try it again later. Major improvements are regularly made.
- In the past I implemented several vectorization methods. I now only keep subword TF_IDF and sentence embeddings. TF_IDF is known to be reliable, fast and very general (it does not assume anything about your cards and will work for just about any language, format, phrasing etc). TF_IDF works very well if you have a large number of cards. Sentence embeddings are more AI than ML in a way: aligned vectors are supposed to have a similar meaning / topic, but they take much longer to compute than TF_IDF, so I implemented a caching mechanism that checks whether a note has changed since last time and avoids unnecessary recomputation.
- If you want to know how I'm using this, take a look at authors_routine.md
- If you like this, another project of mine can be used to do semantic search on your anki collection using AI: I call it Anki SemSearch
- Another project you might be interested in is anki_PrioriTag. It automatically creates a filtered deck to review the most forgotten tags.
- Code is PEP8 compliant, dynamically typed, and all functions have detailed docstrings. Contributions are welcome, opening issues is encouraged and appreciated.
- Keeps the OCR data of pictures in your cards, if you analyzed them beforehand using AnkiOCR.
- Can automatically replace acronyms in your cards (e.g. 'PNO' can be replaced with "pneumothorax" if you tell it to); regexps are supported (if you want access to my homemade list of french medical acronyms, open an issue)
- Can attribute more importance to some fields of some cards if needed.
- Can be used to optimize the order of a deck of new cards, read this thread to know more
- Previous features like clustering, plotting and searching have been removed as I don't expect them to be useful to users. But the code was clean, so if you need them for some reason don't hesitate to open an issue.
- This looks awesome! Are you working on other things like that? Yes, and thank you! Another project of mine aims to do semantic search on your anki collection using AI: I call it Anki SemSearch
- How does it work? (Layman version) AnnA connects to its companion addon to access your anki collection. This allows it to use any python library without having to go through the trouble of packaging those libraries into an addon. It uses a vectorizer to assign numbers (vectors) to each card. If the numbers of two cards are very close, then the cards have similar content and should not be reviewed too close to each other.
- And in more details? The vectorization method AnnA uses is either sentence-transformers or subword TF_IDF. The latter is a way to count words in a document (anki cards in this case) and understand which are more important. "Subword" here means that I used LLM tokenization (i.e. splitting "hypernatremia" into "hyper na tremia", which can be linked to cards dealing with "Na Cl" for example). TF_IDF is generally considered part of machine learning, and sentence-transformers part of AI. AnnA leverages this information to make sure you won't review cards on the same day that are too similar. This is very useful when you have too many cards to review in a day. I initially tried to combine several vectorizers into a common scoring but it proved unreliable. I also experimented with fastText (only one language at a time, can't be packaged, large RAM usage) and sentence-BERT (too large and depends too much on phrasing to be reliable). So I decided to keep it simple and provide only TF_IDF. The goal is to review only the most useful cards, kind of like a Pareto distribution (i.e. review fewer cards, but review the right ones, and you should be able to stay afloat in med school). The code is mainly composed of a python class called AnnA. When you instantiate this class, you have to supply the name of the deck you want to filter from. It will then automatically fetch the cards from your collection, use TF_IDF to assign vectors to each card, compute the distance matrix of the cards and create a filtered deck containing the cards in the optimal order (or bury the cards you don't have to review today). Note that cards of the same deck rated in the last X days are used as a reference, to avoid cards that are too similar to yesterday's reviews as well. If you want to know more, either open an issue or read the docstrings in the code (a minimal sketch of the vectorization step is shown after this FAQ).
- Will this change anything to my anki collection? Apart from adding tags or filling the field "KNN_neighbours", AnnA should not modify anything; if you delete the filtered deck, everything will be back to normal. That being said, the project is young and errors might still be present.
- What is the KNN_neighbours field? A field that, if manually added to the notetype, will contain a query that, when entered in the anki browser, shows you which cards of the deck AnnA deems most similar. The number of neighbours is by default 0.1% of the size of the deck, with a minimum of 5.
- Does it work if I have learning steps over multiple days? Yes, that's my use case. Depending on the chosen task, AnnA either only deals with the review queue and ignores learning and new cards, or only buries the part of your learning cards that are too similar (ignoring due cards). You can use both one after the other every morning like I do. If you have learning cards in your filtered deck, it's because you lapsed those cards yesterday.
- Does this only work for English cards? No! You can use either a multilingual sentence-transformer or TF_IDF, which uses a multilingual LLM tokenization, so it should work on most languages (even if you have several different languages in the same deck).
- Can I use this if I don't know python? Yes! Installing the thing might not be easy but it's absolutely doable. And you don't need to know python to run AnnA. I tried my best to make it accessible and help is welcome.
- What do you call "optimal review order"? The order that minimizes the chance of reviewing similar cards in a day. You see, Anki has no knowledge of the content of cards and only cares about their interval and ease. Its built-in system of "siblings" is useful but I think we can do better. AnnA was made to create filtered decks sorted by "relative_overdueness" (or other orders) BUT in a way that keeps semantic siblings far from each other.
- When should I use this? It seems good for dealing with the huge backlog you get in medical school, or just every day to reduce the workload. If you have 2 000 reviews to do but can only do 500 in a day, AnnA makes sure that you will get the most out of those 500.
- How do you use this? I described my routine in a separate file called `docs/authors_routine.md`.
- Can I use AnnA to optimize a list of new cards? I never did it personally but helped a user do it: see the related thread.
- What are the power requirements to run this? I wanted to make it as efficient as possible but am still improving the code. Computing the distance matrix can take long if you do this on a very large number of cards, but this step is done in parallel on all cores so it should not be the bottleneck. Let me know if some steps are unusually slow and I will try to optimize them. There are ways to make it much easier on the CPU, see the arguments `low_power_mode` and `TFIDF_dim`.
- How long does it take to run? Is it fast? It takes about 1 min to run for a deck of less than 500 due cards; I've seen it take as long as 4 minutes on a deck with 10 000 due cards. This is on a powerful laptop with no GPU, on Linux.
- Why is creating the queue taking longer and longer? Each card is added to the queue after having been compared to the cards rated in the last few days and to the current queue. As the queue grows, more computation has to be done. If this is an issue, consider creating decks that are only as big as you think you can review in a day. With recent improvements in the code, speed should really not be an issue.
- Does this work only on Linux? It should work on all platforms, provided you have anki installed and AnnA-companion enabled. But the script (not the addon) uses libraries that might only work on some CPU architectures, so I'm guessing ARM systems would not work, but please tell me if you have tried.
- What is the current status of this project? I use it daily but am still working on the code. You can expect breaking changes. I intend to keep developing until I have no free time left. Take a look at the TODO list of the dev branch if you want to know what's in the works. When in doubt, open an issue.
- Do you mind if I open an issue? Not at all! It is very much encouraged, even just for a typo. That will at least make me happy. Of course PRs are always welcome too.
- Can this be made into an anki addon instead of a python script? I have never packaged things into anki addons so I'm not so sure. I heard that packaging complex modules into anki is a pain, and cross platform will be an issue. If you'd like to make this a reality, show yourself by opening an issue! I would really like this to be implemented into anki, and the search function would be pretty nice :)
- What version of anki does this run on? I've been using AnnA from anki 2.1.44 and am currently on 24.04. Please tell me if you run into issues.
- If I create a filtered deck using AnnA, how can I rebuild it? You can't rebuild it or empty it through anki directly, as that would leave you with anki's order and not AnnA's. You have to delete the filtered deck then run the script again. Hence, I suggest creating large filtered decks in advance.
- I don't think the reviews I do on AnnA's filtered decks are saved, wtf? It might be because you're using multiple devices and are deleting the filtered deck on one of the devices without syncing first.
- What is subword TF_IDF? Short for "subword term frequency–inverse document frequency". It's a clever way to split words into subparts, then count the parts to figure out which cards are related.
- Does it work with images? Not currently, but sBERT can be used with CLIP models so I could pretty simply implement this. If you think you would find it useful, I can implement it :)
- What are the supported languages when using TF_IDF? TF_IDF is language agnostic, but the language model used to split the words was trained on the 102 largest wikipedia corpora. If your language is very unusual, non-unicode or non-standard in some way, you might have issues; don't hesitate to open an issue as I would gladly take a look.
- What is the field_mapping.py file? It's a file with a single python dictionary used by AnnA to figure out which fields of which cards to keep. Using it is optional. By default, only the first field of each notetype is kept. But if you use this file you can keep multiple fields. Due to how TF_IDF works, you can add a field multiple times to give it more importance relative to other fields.
- What is "XXX - AnnA Optideck"? The default name for the filtered deck created by AnnA. It contains the reviews in the best order for you.
- Why are there only reviews and no learning cards in the filtered decks? When the task is set to `filter_review_cards`, AnnA will fetch only review cards and not deal with learning cards. This is because I'm afraid of some weird behavior that would arise if I changed the due order of learning cards, whereas I can change it just fine using review cards.
- Why doesn't the progress bar of "Computing optimal review order" always start at 0? It's just cosmetic. At each step of the loop, the next card to review is computed by comparing it against the previously added cards and the cards rated in the last few days. This means that each turn is slower than the last. So the progress bar begins at the number of cards rated in the past to indicate that. I took great care to optimize the loop, so it should not really be an issue (a toy sketch of this loop is shown after this FAQ).
- Does AnnA take into account my cards' tags? If you set the argument `append_tags` to True, the 2 deepest levels of each tag are appended at the end of the text and used as if they were part of the content of the card. Note that "_", "-" and "/" are replaced by a space. For example, `a::b::c::d some::thing` will be turned into `c d some thing`.
- Why does the task "bury_excess_learning_cards" seem to ignore a few cards? I decided to have this task not take into account cards that were failed today or the day before; those are usually top priority and I don't want AnnA to bury them.
- How can I know AnnA is actually working and not burying random cards? Good question. I tried to implement a metric called the improvement ratio, which displays a score at the end of the run. It's a sort of sum of the distances between all cards in the new queue over the sum of the distances between all cards in what would have been the queue. It's not really indicative of anything beyond the fact that it's over 1 (= it helped) or under 1 (= it was detrimental). But the latter is kinda hard to get right because it depends on the reliability of `reference_order` itself. I am especially doubtful of the meaning of the ratio when dealing with learning cards, but so far it seems to work ok. Consider it a work in progress (a toy version of this ratio appears in the sketch after this FAQ).
- I have underlined or put in bold the most important parts of my cards, does it matter? Words that are put in bold or underlined are actually duplicated in the text, which gives them twice as much importance.
- What scheduler should I use with AnnA? Can I use the V3 scheduler? I strongly suggest moving away from V1 as it messes with learning cards quite a lot. I personally use V2 and developed AnnA while using V2. I don't fully understand the changes in V3 and am not really certain that it works correctly with AnnA, so I can't really say. I plan to investigate switching to V3 around October 2022.
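To make the vectorization and similarity ideas above a bit more concrete, here is a minimal, hypothetical sketch (this is NOT AnnA's actual code): it builds subword TF_IDF vectors with a multilingual tokenizer, computes a cosine distance matrix, and looks up the nearest neighbours of one card. It assumes scikit-learn and transformers are installed and uses `bert-base-multilingual-cased` purely as an example tokenizer:

```python
# Minimal sketch of subword TF-IDF + card similarity; NOT AnnA's actual implementation.
from transformers import AutoTokenizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

# Example card texts (in practice AnnA fetches them from anki via AnnA-companion)
cards = [
    "Hypernatremia: serum Na over 145 mmol/L",
    "Hyponatremia: serum Na under 135 mmol/L",
    "Pneumothorax: air in the pleural space",
]

# "Subword" tokenization: splits rare words like 'hypernatremia' into pieces
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# TF-IDF computed over subword tokens instead of whole words
vectorizer = TfidfVectorizer(tokenizer=tokenizer.tokenize, lowercase=False)
vectors = vectorizer.fit_transform(cards)

# Pairwise distance matrix: a small distance means semantically close cards
dist = cosine_distances(vectors)

# "KNN_neighbours"-style lookup: the two closest other cards for card 0
neighbours = dist[0].argsort()[1:3]
print("Closest cards to card 0:", neighbours, dist[0][neighbours])
```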
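Similarly, here is a toy illustration of the greedy "optimal review order" loop and of a crude improvement ratio: at each step, pick the due card that is farthest on average from the cards already queued and from recently rated cards. This is only a sketch of the principle, with random stand-in vectors; it is not the score formula AnnA actually uses.

```python
# Toy sketch of the greedy "optimal review order" loop; NOT AnnA's actual scoring.
import numpy as np

rng = np.random.default_rng(0)
n_due, n_rated = 20, 5
vectors = rng.normal(size=(n_due + n_rated, 8))                    # stand-in card vectors
dist = np.linalg.norm(vectors[:, None] - vectors[None], axis=-1)   # distance matrix

due = list(range(n_due))                      # indices of due cards
rated = list(range(n_due, n_due + n_rated))   # cards rated in the last few days

queue, remaining = [], due.copy()
while remaining:
    # the reference set grows each turn, which is why later steps are slower
    reference = rated + queue
    # pick the remaining card that is farthest on average from the reference set
    best = max(remaining, key=lambda c: dist[c, reference].mean())
    queue.append(best)
    remaining.remove(best)

def mean_adjacent_distance(order):
    return np.mean([dist[a, b] for a, b in zip(order, order[1:])])

# crude "improvement ratio": spacing of the new queue vs. the original due order
ratio = mean_adjacent_distance(queue) / mean_adjacent_distance(due)
print(f"queue: {queue}\nimprovement ratio: {ratio:.2f}")
```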
- First, read this page in its entirety, this is a complicated piece of software and you don't want to use it irresponsibly on your cards. The usage section is especially useful.
- Install the addon AnnA Companion (Anki neuronal Appendix) - do LESS reviews with MORE retention!
- Clone this repository (for example with `git clone https://github.com/thiswillbeyourgithub/AnnA_Anki_neuronal_Appendix`)
- Install the required python libraries: `pip install -r requirements.txt` (in case of errors, try using python 3.9)
- Edit the file `utils/field_mapping.py`: it contains a dictionary where the keys are notetypes and the values are lists of the fields to take into account (a hypothetical example is shown after this list).
- Edit the file `utils/acronym_example.py`: it contains dictionaries where the keys are words to replace and the values are what the words should be replaced with.
- There are two ways to run AnnA:
  - Either in a Python console: `from AnnA import * ; AnnA(YOUR_ARGUMENTS)`
  - Or directly in the terminal: `python3 AnnA.py --help` (note that the terminal mode was added after the python console and might still contain errors when parsing arguments)
- If you want to run AnnA on several decks in a row like I do, edit the file `utils/autorun_example.py`, move it one folder up, then execute it with `python3 ./autorun.py`
- Note: some embedding models need a specific argument for sentence_transformers called `trust_remote_code`. To enable it, modify the start of your command like so: `ANNA_TRUST_REMOTE_CODE='anything' python [...args]`
- Open an issue telling me your remarks and suggestions
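As referenced in the configuration steps above, here is a purely illustrative guess at what the two dictionaries could look like; the actual variable names and structure should be checked against the bundled `utils/field_mapping.py` and `utils/acronym_example.py`:

```python
# Hypothetical contents for utils/field_mapping.py (check the bundled file for the
# real variable name and structure): keys are notetype names, values are lists of
# fields to keep. Repeating a field gives it more weight with TF_IDF.
field_dic = {
    "Basic": ["Front"],
    "Clinical cases": ["Header", "Body", "Body"],  # "Body" counted twice on purpose
}

# Hypothetical contents for utils/acronym_example.py: keys are the words (or regexps)
# to replace and values are what they should be replaced with.
acronym_dict = {
    "PNO": "pneumothorax",
    r"\bECG\b": "electrocardiogram",
}
```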
AnnA was made with customizability in mind. All the settings you might want to edit are arguments of the AnnA class. Don't be frightened, many of those settings are rarely used and the default values should be good for almost anyone. Here's a link to the `--help` page with detailed explanations: the usage page. Otherwise you can display it yourself with `python ./AnnA.py --help`.
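For illustration, a call could look roughly like the following. The argument names below are taken from elsewhere in this README where possible, but `deckname` is a guess, and the exact spelling and the full list of arguments should be checked against `python3 AnnA.py --help`:

```python
from AnnA import *

# Hypothetical invocation; verify argument names against --help before relying on it.
anna = AnnA(
    deckname="Medicine::Cardiology",         # deck to filter from (argument name is a guess)
    task="filter_review_cards",              # or "bury_excess_learning_cards"
    reference_order="relative_overdueness",  # base order before semantic spacing
    append_tags=True,                        # append the 2 deepest tag levels to the text
    low_power_mode=False,                    # lighter CPU usage if True
)
```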
AnnA includes built-in methods you can run after instantiating the class. Note that methods beginning with a "_" are not supposed to be called by the user and are reserved for backend use. Here's a list of sometimes useful methods:
Note: to call a python method after running AnnA outside of a console, use the argument `--keep_console_open` and access the variable `anna`. For example: `anna.display_best_review_order()`
- `display_best_review_order`: used as a debugging tool; it only displays the order, allowing you to check whether the order seems correct without having to create a filtered deck.
- `save_df`: saves the dataframe containing the cards and all the other info needed by AnnA as a pickle file. Used mainly for debugging. Files will be saved to the folder `DF_backups`.
- see dev branch
- Corentin Sautier for his many many insights and suggestions on ML and python. He was also instrumental in devising the score formula used to order the filtered deck.
- A post about class based tf idf by Maarten Grootendorst on Towardsdatascience
- The authors of sentence-bert and their very informative website
- The author of the addon anki-connect, as this project was indispensable when developing this addon. The companion addon is a reduced fork from anki-connect.
- This plotly script that leverages NetworkX, by github user roholazandie, and the PR to make it compatible with recent plotly versions by anthng
The following is kept as legacy but was made while working on the ancestor of AnnA, don't judge please.
Disclaimer: I'm a medical student extremely interested in AI but who has trouble devoting time to this passion. This project is a way to get to know machine learning tools, but it can have practical applications. I like to have crazy ideas to motivate my projects and they are listed below. Don't laugh. Don't hesitate to contribute either.
- Scheduling: Using AnnA to do more varied reviews might, somehow, increase the likelihood of eureka moments where your brain creates new neural paths. That would supposedly help to remember and master a field.
- Optimize learning via cues: A weird idea of mine is based on the notion that hearing a tone while you're learning something will increase your recall if the same tone is playing while you take the test. So maybe it would be useful to assign a tone that increases in pitch as you advance through the queue of due cards? I told you these were crazy ideas... Also, during random reviews, this could play the tone assigned to the card's cluster.
- Mental Walk (credit goes to a discussion with Paul Bricman): integrating AnnA with VR by letting the user walk in an imaginary and procedurally generated world with the reviews materialized at precise locations. The user would have to walk to the flashcard's location and then do the review. The cards would be automatically grouped by cluster into rooms or forests or buildings etc., meaning you wouldn't have to create the mental palace yourself, just the flashcards.
- a possibility would be to do a dimension reduction to 5 dimensions. Use the first 2 to get the spatial location where the cluster would be assigned. The cards would then be spread across the room, and the 3 remaining dimensions could be used to derive something about the room, like wall whiteness, floor whiteness and the tempo of music playing in the background.
- we could ensure that the clusters would always stay in the same room, even after adding many cards or even across users, by querying a large language model for the vectors associated with the main descriptors of the cluster.
- Put it on street view for automatic mental palace?
- Source Walking: it would be interesting to do your anki reviews in VR where the floor would be your source (pdf, epub, ppt, whatever). The cards would be spread over the document, approximately located above the actual source sentence, hence leveraging the power of the mental palace while doing your reviews: accessing the big picture AND the small picture.