Best AI tools for Decode Tokens
20 - AI Tool Sites
Phenaki
Phenaki is a model capable of generating realistic videos from a sequence of textual prompts. It is particularly challenging to generate videos from text due to the computational cost, limited quantities of high-quality text-video data, and variable length of videos. To address these issues, Phenaki introduces a new causal model for learning video representation, which compresses the video to a small representation of discrete tokens. This tokenizer uses causal attention in time, which allows it to work with variable-length videos. To generate video tokens from text, Phenaki uses a bidirectional masked transformer conditioned on pre-computed text tokens. The generated video tokens are subsequently de-tokenized to create the actual video. To address data issues, Phenaki demonstrates how joint training on a large corpus of image-text pairs as well as a smaller number of video-text examples can result in generalization beyond what is available in the video datasets. Compared to previous video generation methods, Phenaki can generate arbitrarily long videos conditioned on a sequence of prompts (i.e., time-variable text or a story) in an open domain. According to the authors, this is the first paper to study generating videos from time-variable prompts. In addition, the proposed video encoder-decoder outperforms all per-frame baselines currently used in the literature in terms of spatio-temporal quality and the number of tokens per video.
Decode Investing
Decode Investing is a platform that helps users discover wonderful businesses and great investment ideas. It provides tools such as a stock market analyst, SEC filings, earnings calls, and a stock screener to assist users in making informed investment decisions. The platform also offers a community for users to engage with each other and share insights. Decode Investing is designed to empower individuals in their investment journey by providing valuable resources and analysis.
Mendel AI
Mendel AI is an advanced clinical AI tool that deciphers clinical data with clinician-like logic. It offers a fully integrated suite of clinical-specific data processing products, combining OCR, de-identification, and clinical reasoning to interpret medical records. Users can ask questions in plain English and receive accurate answers from health records in seconds. Mendel's technology goes beyond traditional AI by understanding patient-level data and ensuring consistency and explainability of results in healthcare.
MeowTalk
MeowTalk is an AI tool that allows users to decode their cat's meows and understand what their feline friends are trying to communicate. By analyzing the sound patterns of your cat's meows, MeowTalk translates them into human language, providing insights into your cat's thoughts and feelings. With MeowTalk, you can bridge the communication gap between you and your cat, leading to a deeper understanding and stronger bond.
Insitro
Insitro is a drug discovery and development company that uses machine learning and data to identify and develop new medicines. The company's platform integrates in vitro cellular data produced in its labs with human clinical data to help redefine disease. Insitro's pipeline includes wholly-owned and partnered therapeutic programs in metabolism, oncology, and neuroscience.
AI Synapse
AI Synapse is a GTM platform designed for AI workers to enhance outbound conversion rates and sales efficiency. It leverages AI-driven research, personalization, and automation to optimize sales processes, reduce time spent on sales tools, and achieve significant improvements in open, click, and reply rates. The platform enables users to achieve the output of a 30-person sales team in just 4-6 hours, leading to increased productivity and revenue generation. AI Synapse offers scalability, cost efficiency, advanced personalization, time savings, enhanced conversion rates, and predictable lead flow, making it a valuable tool for sales teams and businesses looking to streamline their outbound strategies.
CEBRA
CEBRA is a machine-learning method that compresses time series data to reveal hidden structures in the variability of the data. It excels in analyzing behavioral and neural data simultaneously, allowing for the decoding of activity from the visual cortex of the mouse brain to reconstruct viewed videos. CEBRA is a novel encoding method that leverages both behavioral and neural data to produce consistent and high-performance latent spaces, enabling the mapping of space, uncovering complex kinematic features, and providing rapid, high-accuracy decoding of natural movies from the visual cortex.
BabySleepBot™
BabySleepBot™ is an AI-powered online DIY program designed to help parents teach their babies to sleep through the night and take longer day naps. The program offers personalized training tailored to different parenting styles and babies' individual needs. It includes audio clips, personalized training, companion guide, education on decoding baby's tired cues, custom routines, and access to results within three weeks. The program is led by Jennifer, Australia's leading baby sleep consultant with 22+ years of experience and a proven track record of helping thousands of families achieve successful sleep outcomes.
Vexa
Vexa is a real-time AI meeting assistant designed to empower users to maintain focus, grasp context, decode industry-specific terms, and capture critical information effortlessly during business meetings. It offers features such as instant context recovery, flawless project execution, industry terminology decoding, enhanced focus and productivity, and ADHD-friendly meeting assistance. Vexa helps users stay sharp in long meetings, record agreements accurately, clarify industry jargon, and manage time-sensitive information effectively. It integrates with Google Meet and Zoom, supports various functionalities using the GPT-4 Chat API, and ensures privacy through end-to-end encryption and data protection measures.
Hint
Hint is a hyper-personalized astrology app that combines NASA data with guidance from professional astrologers to provide personalized insights. It offers 1-on-1 guidance, horoscopes, compatibility reports, and chart decoding. Hint has become a recognized leader in digital astrological services and is trusted by the world's leading companies.
Mind-Video
Mind-Video is an AI tool that focuses on high-quality video reconstruction from brain activity data obtained through fMRI scans. The tool aims to bridge the gap between image and video brain decoding by leveraging masked brain modeling, multimodal contrastive learning, spatiotemporal attention, and co-training with an augmented Stable Diffusion model. It is designed to enhance the generation consistency and accuracy of reconstructing continuous visual experiences from brain activities, ultimately contributing to a deeper understanding of human cognitive processes.
Easy Islamic Dream Interpretation
Easy Islamic Dream Interpretation is an AI tool that offers dream interpretations based on Quranic verses, Hadith, and established Islamic principles. The tool draws insights from classical Islamic scholars like Ibn Sirin, Al-Nabulsi, and Ibn Kathir to provide accurate and meaningful interpretations of dreams. Users can gain a deeper understanding of their dreams through the lens of Islamic teachings and traditions.
Iris Dating
Iris Dating is an AI dating application that leverages artificial intelligence to help users match and date online. The app uses AI to understand users' preferences and present them with matches based on mutual attraction. By decoding the science of attraction, Iris Dating aims to revolutionize online dating and help users build more meaningful and successful relationships.
Typly
Typly is an advanced AI writing assistant that leverages Large Language Models like GPT, ChatGPT, LLaMA, and others to automatically generate suggestions that match the context of the conversation. It offers features such as generating AI response suggestions, crafting unique responses effortlessly, decoding complex text, summarizing articles, perfecting emails, and boosting conversations with sentence bundles from various sources. Typly aims to revolutionize the way people communicate by providing innovative solutions powered by artificial intelligence.
Dataku.ai
Dataku.ai is an advanced data extraction and analysis tool powered by AI technology. It offers seamless extraction of valuable insights from documents and texts, transforming unstructured data into structured, actionable information. The tool provides tailored data extraction solutions for various needs, such as resume extraction for streamlined recruitment processes, review insights for decoding customer sentiments, and leveraging customer data to personalize experiences. With features like market trend analysis and financial document analysis, Dataku.ai empowers users to make strategic decisions based on accurate data. The tool ensures precision, efficiency, and scalability in data processing, offering different pricing plans to cater to different user needs.
Shib GPT
Shib GPT is an advanced AI-driven platform tailored for real-time crypto market analysis. It revolutionizes the realm of cryptocurrency by leveraging sophisticated algorithms to provide comprehensive insights into pricing, trends, and exchange dynamics. The platform empowers investors and traders with unparalleled accuracy and depth in navigating both decentralized and centralized exchanges, enabling informed decision-making with speed and precision. Shib GPT also offers a chat platform for dynamic interaction with financial markets, generating multimedia content powered by cutting-edge Large Language Models (LLMs).
DreamOracle
DreamOracle is an AI-powered dream interpretation guide that helps users understand the meanings behind their dreams. By leveraging advanced algorithms and natural language processing, DreamOracle provides insightful interpretations to decode the symbolism in dreams. Users can explore the hidden messages and subconscious thoughts embedded in their dreams, gaining valuable insights into their inner selves. With DreamOracle, users can unlock the mysteries of their dreams and gain a deeper understanding of their emotions, fears, and desires.
THE DECODER
THE DECODER is an AI tool that provides news, insights, and updates on artificial intelligence across various domains such as business, research, and society. It covers the latest advancements in AI technologies, applications, and their impact on different industries. THE DECODER aims to keep its audience informed about the rapidly evolving field of artificial intelligence.
Legalese Decoder
Legalese Decoder is a web application that utilizes AI, natural language processing (NLP), and machine learning (ML) to analyze legal documents and provide a plain language version of the document. It is designed to take complex legal documents as input and output a simplified, easily understandable version by identifying key terms and concepts, providing definitions, and rephrasing the content. The application aims to assist individuals in comprehending legal documents without the need for professional legal advice.
Impact Stack
Impact Stack is an AI-powered research platform that enables users to conduct evidence-based research with the help of specialized AI tools and expertise. It allows users to ask questions about companies, industries, or research topics in natural language, providing rigorously researched answers quickly. The platform eliminates the need for manual data entry by allowing users to interface with AI agents equipped with up-to-date knowledge bases. Impact Stack also offers the ability to search and analyze multiple databases simultaneously, streamlining the research process. Users can stay informed with the latest articles and feature updates through subscription.
20 - Open Source AI Tools
minbpe
This repository contains a minimal, clean code implementation of the Byte Pair Encoding (BPE) algorithm, commonly used in LLM tokenization. The BPE algorithm is "byte-level" because it runs on UTF-8 encoded strings. This algorithm was popularized for LLMs by the GPT-2 paper and the associated GPT-2 code release from OpenAI. Sennrich et al. 2015 is cited as the original reference for the use of BPE in NLP applications. Today, all modern LLMs (e.g. GPT, Llama, Mistral) use this algorithm to train their tokenizers. There are two Tokenizers in this repository, both of which can perform the 3 primary functions of a Tokenizer: 1) train the tokenizer vocabulary and merges on a given text, 2) encode from text to tokens, 3) decode from tokens to text. The files of the repo are as follows: 1. minbpe/base.py: implements the `Tokenizer` base class, which contains the `train`, `encode`, and `decode` stubs, save/load functionality, and a few common utility functions; this class is not meant to be used directly, but rather to be inherited from. 2. minbpe/basic.py: implements the `BasicTokenizer`, the simplest implementation of the BPE algorithm that runs directly on text. 3. minbpe/regex.py: implements the `RegexTokenizer`, which further splits the input text by a regex pattern, a preprocessing stage that separates the input by categories (think: letters, numbers, punctuation) before tokenization so that no merges happen across category boundaries. This was introduced in the GPT-2 paper and continues to be in use as of GPT-4. This class also handles special tokens, if any. 4. minbpe/gpt4.py: implements the `GPT4Tokenizer`, a light wrapper around the `RegexTokenizer` that exactly reproduces the tokenization of GPT-4 in the tiktoken library; the wrapping handles some details around recovering the exact merges in the tokenizer, and the handling of some unfortunate (and likely historical) 1-byte token permutations. Finally, the script train.py trains the two major tokenizers on the input text tests/taylorswift.txt (the Wikipedia entry for Taylor Swift) and saves the vocab to disk for visualization. This script runs in about 25 seconds on an M1 MacBook. All of the files above are very short, thoroughly commented, and contain a usage example at the bottom of the file.
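The train/encode/decode cycle described above can be sketched in a few lines of pure Python. This is a simplified, hedged illustration of byte-level BPE, not the repo's actual code; the function names merely mirror the three operations the README describes:

```python
# Minimal byte-level BPE sketch (illustrative only, not the minbpe source).

def get_pair_counts(ids):
    """Count occurrences of each adjacent token pair."""
    counts = {}
    for a, b in zip(ids, ids[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train(text, vocab_size):
    ids = list(text.encode("utf-8"))        # start from raw bytes (0..255)
    merges = {}
    for new_id in range(256, vocab_size):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = max(counts, key=counts.get)  # most frequent adjacent pair
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges

def encode(text, merges):
    ids = list(text.encode("utf-8"))
    for pair, new_id in merges.items():     # dicts preserve insertion order
        ids = merge(ids, pair, new_id)
    return ids

def decode(ids, merges):
    vocab = {i: bytes([i]) for i in range(256)}
    for (a, b), new_id in merges.items():
        vocab[new_id] = vocab[a] + vocab[b]
    return b"".join(vocab[i] for i in ids).decode("utf-8")

merges = train("aaabdaaabac", 260)
tokens = encode("aaabdaaabac", merges)
roundtrip = decode(tokens, merges)
```

Encoding here simply replays the learned merges in training order, which is enough for the sketch; the real implementations apply the lowest-rank eligible merge iteratively and handle edge cases like invalid UTF-8 on decode.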
Tiktoken
Tiktoken is a high-performance implementation focused on token count operations. It provides various encodings like o200k_base, cl100k_base, r50k_base, p50k_base, and p50k_edit. Users can easily encode and decode text using the provided API. The repository also includes a benchmark console app for performance tracking. Contributions in the form of PRs are welcome.
Qwen
Qwen is a series of large language models developed by Alibaba DAMO Academy. Qwen models outperform baseline models of similar size on a series of benchmark datasets, e.g., MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, etc., which evaluate the models' capabilities in natural language understanding, mathematical problem solving, coding, and more. Qwen-72B achieves better performance than LLaMA2-70B on all tasks and outperforms GPT-3.5 on 7 out of 10 tasks.
llm-structured-output
This repository contains a library for constraining LLM generation to structured output, enforcing a JSON schema for precise data types and property names. It includes an acceptor/state machine framework, JSON acceptor, and JSON schema acceptor for guiding decoding in LLMs. The library provides reference implementations using Apple's MLX library and examples for function calling tasks. The tool aims to improve LLM output quality by ensuring adherence to a schema, reducing unnecessary output, and enhancing performance through pre-emptive decoding. Evaluations show performance benchmarks and comparisons with and without schema constraints.
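The acceptor idea described above can be illustrated with a toy character-level version (a hypothetical sketch, not the library's actual API): at each decoding step, candidate tokens are masked so the generated text can only ever extend to a schema-valid JSON literal.

```python
# Toy schema-constrained decoding sketch (hypothetical, illustrative only).
# The "schema" is reduced to an enum of two allowed JSON strings.

ALLOWED = ['{"unit": "celsius"}', '{"unit": "fahrenheit"}']

def valid_next_tokens(prefix, vocab):
    """Return the vocab tokens that keep `prefix` extendable to an allowed string."""
    ok = []
    for tok in vocab:
        cand = prefix + tok
        if any(s.startswith(cand) for s in ALLOWED):
            ok.append(tok)
    return ok

def constrained_greedy(vocab, score):
    """Greedy decoding, restricted to tokens the acceptor permits."""
    out = ""
    while out not in ALLOWED:
        choices = valid_next_tokens(out, vocab)
        out += max(choices, key=score)  # highest-scoring legal token
    return out

vocab = ['{"unit": "', 'celsius', 'fahrenheit', '"}', 'banana']
# A toy "model" that always prefers longer tokens; the mask keeps it legal,
# so 'banana' can never be emitted.
result = constrained_greedy(vocab, score=len)
```

A real implementation works on the model's token IDs and masks logits before sampling (and, as the README notes, can pre-emptively decode tokens the schema makes inevitable), but the state-machine principle is the same.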
mistral-inference
Mistral Inference repository contains minimal code to run 7B, 8x7B, and 8x22B models. It provides model download links, installation instructions, and usage guidelines for running models via CLI or Python. The repository also includes information on guardrailing, model platforms, deployment, and references. Users can interact with models through commands like mistral-demo, mistral-chat, and mistral-common. Mistral AI models support function calling and chat interactions for tasks like testing models, chatting with models, and using Codestral as a coding assistant. The repository offers detailed documentation and links to blogs for further information.
llama3-tokenizer-js
JavaScript tokenizer for LLaMA 3 designed for client-side use in the browser and Node, with TypeScript support. It accurately calculates token counts, has zero dependencies, optimized running time, and a somewhat optimized bundle size. It can encode and decode text, but training is not supported. Note that it pollutes the global namespace with `llama3Tokenizer` in the browser. It is mostly compatible with the LLaMA 3 models released by Facebook in April 2024, and can be adapted for incompatible models by passing custom vocab and merge data. It handles special tokens and fine-tuned models. Developed by belladore.ai with contributions from xenova, blaze2004, imoneoi, and ConProgramming.
llumnix
Llumnix is a cross-instance request scheduling layer built on top of LLM inference engines such as vLLM, providing optimized multi-instance serving performance with low latency, reduced time-to-first-token (TTFT) and queuing delays, reduced time-between-tokens (TBT) and preemption stalls, and high throughput. It achieves this through dynamic, fine-grained, KV-cache-aware scheduling, continuous rescheduling across instances, KV cache migration mechanism, and seamless integration with existing multi-instance deployment platforms. Llumnix is easy to use, fault-tolerant, elastic, and extensible to more inference engines and scheduling policies.
PhoGPT
PhoGPT is an open-source 4B-parameter generative model series for Vietnamese, including the base pre-trained monolingual model PhoGPT-4B and its chat variant, PhoGPT-4B-Chat. PhoGPT-4B is pre-trained from scratch on a Vietnamese corpus of 102B tokens, with an 8192 context length and a vocabulary of 20K token types. PhoGPT-4B-Chat is fine-tuned on instructional prompts and conversations, demonstrating superior performance. Users can run the model with inference engines like vLLM and Text Generation Inference, and fine-tune it using llm-foundry. However, PhoGPT has limitations in reasoning, coding, and mathematics tasks, and may generate harmful or biased responses.
Gemini
Gemini is an open-source model designed to handle multiple modalities such as text, audio, images, and videos. It utilizes a transformer architecture with special decoders for text and image generation. The model processes input sequences by transforming them into tokens and then decoding them to generate image outputs. Gemini differs from other models by directly feeding image embeddings into the transformer instead of using a visual transformer encoder. The model also includes a component called Codi for conditional generation. Gemini aims to effectively integrate image, audio, and video embeddings to enhance its performance.
paxml
Pax is a framework for configuring and running machine learning experiments on top of JAX.
petals
Petals is a tool that allows users to run large language models at home in a BitTorrent-style manner. It enables fine-tuning and inference up to 10x faster than offloading. Users can generate text with distributed models like Llama 2, Falcon, and BLOOM, and fine-tune them for specific tasks directly from their desktop computer or Google Colab. Petals is a community-run system that relies on people sharing their GPUs to increase its capacity and offer a distributed network for hosting model layers.
onnxruntime-genai
ONNX Runtime Generative AI is a library that provides the generative AI loop for ONNX models, including inference with ONNX Runtime, logits processing, search and sampling, and KV cache management. Users can call a high level `generate()` method, or run each iteration of the model in a loop. It supports greedy/beam search and TopP, TopK sampling to generate token sequences, has built in logits processing like repetition penalties, and allows for easy custom scoring.
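The generation loop described above can be sketched generically (this is illustrative Python with a stand-in model, not the onnxruntime-genai API): run the model one step at a time, post-process the logits with a repetition penalty, then pick the next token greedily until an end-of-sequence token appears.

```python
# Generic generate() loop sketch: logits processing + greedy search.
# `fake_model_step` stands in for a real ONNX model call.

EOS = 0

def fake_model_step(tokens):
    """Pretend model: fixed logits over a 5-token vocab, asking to stop at length 6."""
    logits = [0.1, 0.2, 0.3, 2.0, 1.0]
    if len(tokens) >= 6:
        logits[EOS] = 10.0
    return logits

def apply_repetition_penalty(logits, tokens, penalty=1.3):
    """Dampen logits of tokens already generated (a common logits processor)."""
    out = list(logits)
    for t in set(tokens):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = fake_model_step(tokens)
        logits = apply_repetition_penalty(logits, tokens)
        next_tok = max(range(len(logits)), key=logits.__getitem__)  # greedy
        tokens.append(next_tok)
        if next_tok == EOS:
            break
    return tokens

out = generate([3, 4])
```

Beam search, TopP/TopK sampling, and KV cache reuse slot into the same loop in place of the greedy `max`; the library's high-level `generate()` wraps exactly this iteration.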
flashinfer
FlashInfer is a library for Large Language Models that provides high-performance implementations of LLM GPU kernels such as FlashAttention, PagedAttention, and LoRA. FlashInfer focuses on LLM serving and inference, and delivers state-of-the-art performance across diverse scenarios.
AutoGPTQ
AutoGPTQ is an easy-to-use LLM quantization package with user-friendly APIs, based on GPTQ algorithm (weight-only quantization). It provides a simple and efficient way to quantize large language models (LLMs) to reduce their size and computational cost while maintaining their performance. AutoGPTQ supports a wide range of LLM models, including GPT-2, GPT-J, OPT, and BLOOM. It also supports various evaluation tasks, such as language modeling, sequence classification, and text summarization. With AutoGPTQ, users can easily quantize their LLM models and deploy them on resource-constrained devices, such as mobile phones and embedded systems.
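The core storage idea behind weight-only quantization can be shown with a toy round-to-nearest int8 scheme (illustrative only; GPTQ itself is more sophisticated and minimizes layer output error using second-order information):

```python
# Toy weight-only quantization: int8 values plus one float scale per row.

def quantize_row(weights):
    """Map float weights to int8 codes and a single per-row scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_row(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

row = [0.12, -0.5, 0.03, 0.44]
q, scale = quantize_row(row)
restored = dequantize_row(q, scale)
max_err = max(abs(a - b) for a, b in zip(row, restored))
```

Each row shrinks from 4-byte floats to 1-byte integers plus one scale, which is where the size and bandwidth savings come from; the reconstruction error stays bounded by about half the scale.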
amber-train
Amber is the first model in the LLM360 family, an initiative for comprehensive and fully open-sourced LLMs. It is a 7B English language model with the LLaMA architecture, licensed under Apache 2.0. Available resources include training code, data preparation, metrics, and fully processed Amber pretraining data. The model has been trained on datasets such as Arxiv, Book, C4, Refined-Web, StarCoder, StackExchange, and Wikipedia. The hyperparameters include a total of 6.7B parameters, hidden size of 4096, intermediate size of 11008, 32 attention heads, 32 hidden layers, RMSNorm ε of 1e-6, max sequence length of 2048, and a vocabulary size of 32000.
kan-gpt
The KAN-GPT repository is a PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling. It provides a model for generating text based on prompts, with a focus on improving performance compared to traditional MLP-GPT models. The repository includes scripts for training the model, downloading datasets, and evaluating model performance. Development tasks include integrating with other libraries, testing, and documentation.
rknn-llm
RKLLM software stack is a toolkit designed to help users quickly deploy AI models to Rockchip chips. It consists of RKLLM-Toolkit for model conversion and quantization, RKLLM Runtime for deploying models on Rockchip NPU platform, and RKNPU kernel driver for hardware interaction. The toolkit supports RK3588 and RK3576 series chips and various models like TinyLLAMA, Qwen, Phi, ChatGLM3, Gemma, InternLM2, and MiniCPM. Users can download packages, docker images, examples, and docs from RKLLM_SDK. Additionally, RKNN-Toolkit2 SDK is available for deploying additional AI models.
tiny-llm-zh
Tiny LLM zh is a project aimed at building a small-parameter Chinese large language model as a quick entry point for learning about large models. The project implements a two-stage training process followed by human alignment, covering tokenization, pre-training, instruction fine-tuning, human alignment, evaluation, and deployment. It is deployed on the ModelScope Tiny LLM website and features open access to all data and code, including pre-training data and the tokenizer. The project trains a tokenizer on 10GB of Chinese encyclopedia text to build the Tiny LLM vocabulary. It supports training with Transformers and DeepSpeed, multi-machine multi-GPU setups, and ZeRO optimization techniques. The project has three main branches: llama2_torch, main tiny_llm, and tiny_llm_moe, each with specific modifications and features.
dingllm.nvim
dingllm.nvim is a lightweight configuration for Neovim that provides scripts for invoking various AI models for text generation. It offers functionalities to interact with APIs from OpenAI, Groq, and Anthropic for generating text completions. The configuration is designed to be simple and easy to understand, allowing users to quickly set up and use the provided AI models for text generation tasks.
20 - OpenAI GPTs
OGAA (Oil and Gas Acronym Assistant)
I decode acronyms from the oil & gas industry, asking for context if needed.
Paper Interpreter (international)
Automatically structure and decode academic papers with ease - simply upload a PDF!
Emoji GPT
🌟 Discover the Charm of EmojiGPT! 🤖💬🎉 Dive into a world where emojis reign supreme with EmojiGPT, your whimsical AI companion that speaks the universal language of emojis. Get ready to decode delightful emoji messages, laugh at clever combinations, and express yourself like never before! 🤔
🧬GenoCode Wizard🔬
Unlock the secrets of DNA with 🧬GenoCode Wizard🔬! Dive into genetic analysis, decode sequences, and explore bioinformatics with ease. Perfect for researchers and students!
Social Navigator
A specialist in explaining social cues and cultural norms for clarity in conversations
N.A.R.C. Bott
This app decodes texts from narcissists, advising across all life scenarios. Navigate. Analyze. Recognize. Communicate.
What a Girl Says Translator
Simply tell me what the girl texted you or said to you, and I will respond with what she means. 💋