Best AI Tools to Try Out Different LLM Models
20 - AI Tool Sites
Hairstyle AI
Hairstyle AI is a virtual AI hairstyle try-on tool that allows users to experiment with new haircuts virtually before making a real-life change. With over 155,760 AI-generated hairstyles created for 1,298 happy customers, Hairstyle AI offers a wide variety of styles for both male and female users. The platform uses AI technology to provide users with personalized hairstyle recommendations, helping them feel more confident in their appearance without the need for any actual haircut. Users can upload their selfies, explore different hairstyles, and download their favorites to try out different looks effortlessly.
Perfect365
Perfect365 is an AI makeup application that allows users to virtually try on makeup and hairstyles through advanced augmented reality technology. With over 100 million users, the app offers a seamless way to experiment with different looks, acting as a personal beauty assistant. Users can adjust every aspect of their appearance, from skin tone to eye color, all while maintaining a natural and realistic look. The app employs artificial intelligence algorithms to let users experiment with different makeup looks virtually, without the need for physical products. Perfect365 is a pioneer in the beauty apps sector, providing users with a transformative experience in exploring e-cosmetics.
Dollme
Dollme is an AI-powered mobile application that allows users to enhance their appearance by trying out different virtual makeup looks. With Dollme, users can experiment with various makeup styles and create their perfect selfie using advanced AI technology. The app provides a fun and interactive way for users to transform their photos and achieve a picture-perfect look effortlessly.
Magic AI Avatars
Magic AI Avatars is an AI-powered tool that allows users to create custom profile pictures using artificial intelligence. The app analyzes uploaded photos, recognizes facial features and expressions, and then uses a deep learning algorithm to construct a realistic digital photo that closely resembles the person in the picture. Magic AI Avatars is free to use and offers a variety of different themes and styles to choose from. The app is also committed to maintaining user privacy and data security.
MacWhisper
MacWhisper is a native macOS application that utilizes OpenAI's Whisper technology for transcribing audio files into text. It offers a user-friendly interface for recording, transcribing, and editing audio, making it suitable for various use cases such as transcribing meetings, lectures, interviews, and podcasts. The application is designed to protect user privacy by performing all transcriptions locally on the device, ensuring that no data leaves the user's machine.
DiscordPal
DiscordPal is a leading AI girlfriend service that allows users to build relationships with their very own AI lover on Discord. Users can express themselves freely and share their wildest desires and deepest secrets with their AI companion. The service offers different pricing plans tailored to users' needs, ranging from a free plan with limited features to premium plans with unlimited messages and instant response time. DiscordPal aims to provide users with attention that feels just right, offering a personalized experience with their AI lover.
Human or Not
Human or Not is a social Turing game in which you chat with someone for two minutes and try to figure out whether it was a fellow human or an AI bot. The experiment has ended, but a write-up of the research is available on the site.
Room AI
Room AI is an easy-to-use AI software that helps users design their dream homes. With Room AI, users can turn their ideas into professional interior designs with just a few clicks. Room AI offers a variety of features, including the ability to restyle existing rooms, generate new room designs from scratch, choose from a variety of colors and materials, and get personalized design suggestions. Room AI is perfect for homeowners, interior designers, real estate agents, and architects. It is a powerful tool that can help users save time and money while creating beautiful and functional interior designs.
Imaiger
Imaiger is an online platform that leverages cutting-edge artificial intelligence algorithms to generate stunning, high-quality images for websites. It caters to creators with zero AI experience, offering a user-friendly interface to create visually striking artwork tailored to individual needs. With a focus on customization, Imaiger empowers users to fine-tune every aspect of the AI-generated images to match their unique style and brand. The platform aims to revolutionize the way images are created and utilized online, providing a seamless experience for website owners and content creators.
Chat With Llama
Chat with Llama is a free website that allows users to interact with Meta's Llama 3, a state-of-the-art AI chat model comparable to ChatGPT. Users can ask unlimited questions and receive prompt responses. Llama 3 is open-source and commercially available, enabling developers to customize and profit from AI chatbots. The model has 70 billion parameters and generates outputs comparable in quality to GPT-4.
Bundle of Joy
Bundle of Joy is an AI-powered baby name generator that helps expecting parents find the perfect name for their newborn. The app takes into account the parents' preferences for the name, such as origin, theme, starting letter, and meaning, and makes recommendations that suit their taste well. Parents can shortlist names they like and share them with their partner, and the app will notify them when they both like the same name. Bundle of Joy is free to try out, with a paid subscription available for unlimited name recommendations.
金数据AI考试 (Jinshuju AI Exam)
The website offers an AI-powered exam system that generates test questions instantly. It features a smart question bank, rapid question generation, and immediate test creation. Users can try out various test questions, such as knowledge tests for car sales, company compliance standards, and real estate tax rates. The system ensures each test paper has similar content and difficulty, and selects questions randomly to reduce the possibility of cheating. Employees can access the test link directly, view test scores immediately after submission, and review incorrect answers with explanations. The system supports single sign-on via WeChat for employee verification and keeps records of employee rankings and test attempts. The platform prioritizes enterprise data security with a Level 3 network security protection rating, an ISO/IEC 27001 information security management system, and an ISO/IEC 27701 privacy information management system.
Zolak
Zolak is an AI-powered Visual Commerce Platform designed for the furniture industry. It offers immersive experiences through product visualization, virtual try-out experiences, customization, and more. Zolak bridges physical and digital experiences to empower e-commerce, manufacturing, and distribution, resulting in increased conversion rates, average order value, repeat sales, and reduced content creation time.
MAILE
MAILE is an AI-powered email writing application for iPhone that helps users draft professional and clear emails instantly. With just a simple prompt, MAILE can generate an email draft that users can then send. The application is free to try out and can be downloaded from the App Store.
HingeGPT
HingeGPT is an AI tool designed to generate mediocre opening lines for the dating app Hinge. Users can upload screenshots for beta testing or try out the tool directly on the website. The tool protects user privacy by not storing any generated data and sending data only to OpenAI. HingeGPT is developed by the Natto boys.
Dzine
Dzine (formerly Stylar.ai) is a powerful AI image generation and design tool that gives users unparalleled control over image composition and style. It offers predefined styles for effortless design customization; layering, positioning, and sketching tools for intuitive design; and an 'Enhance' feature to address common challenges with AI-generated images. With a user-friendly interface suitable for all skill levels, Dzine makes it easy to create stunning and stylish images. It supports high-resolution exports and provides free credits for new users to try out its features.
AskCyph™ LITE
AskCyph™ LITE is a private, accessible, and personal AI chatbot that runs AI directly in your browser. It provides quick responses to user queries, although the responses may sometimes be inaccurate or offensive. The chatbot is developed by Cypher Tech Inc. and is designed to offer a convenient AI-powered conversational experience for users. Users can try out the full version of AskCyph™ at CypherChat®. The application is copyright protected by Cypher Tech Inc. and all rights are reserved.
CheeseCakeWizard.AI
CheeseCakeWizard.AI is an AI-powered platform that allows users to create personalized kosher cheesecake recipes in under 30 seconds. With over 64,000 free variations of gourmet cheesecake recipes, users can easily whip up their own customized creations using advanced AI technology. The platform offers a wide range of popular combinations and trending ingredients to inspire users in their culinary adventures.
Phantom: Lofi Tutor
Phantom: Lofi Tutor is an AI-powered application designed to help users create customized news articles and video scripts quickly and efficiently. It utilizes cutting-edge technology to analyze real-time data, generate engaging content, and provide script templates for various video formats. The application aims to assist copywriters, creators, and developers in staying ahead of the game by offering a seamless content creation experience.
20 - Open Source AI Tools
llamafile
llamafile is a tool that enables users to distribute and run Large Language Models (LLMs) with a single file. It combines llama.cpp with Cosmopolitan Libc to create a framework that simplifies the complexity of LLMs into a single-file executable called a 'llamafile'. Users can run these executable files locally on most computers without the need for installation, making open LLMs more accessible to developers and end users. llamafile also provides example llamafiles for various LLM models, allowing users to try out different LLMs locally. The tool supports multiple CPU microarchitectures, CPU architectures, and operating systems, making it versatile and easy to use.
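Below is a minimal sketch of how a downloaded llamafile can be queried once it is running in server mode; it assumes the server exposes an OpenAI-compatible `/v1/chat/completions` endpoint on `localhost:8080` (the port, the example filename, and the `"model"` placeholder are assumptions, so check your llamafile's `--help` output).

```python
# Minimal sketch: query a llamafile started in server mode, e.g.
#   ./mistral-7b-instruct.llamafile --server   (filename is illustrative)
# Assumption: an OpenAI-compatible endpoint is served on localhost:8080.
import json
import urllib.request

payload = {
    "model": "local",  # placeholder; a llamafile serves the single model it bundles
    "messages": [{"role": "user", "content": "Summarize what a llamafile is."}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```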
cognita
Cognita is an open-source framework for organizing your RAG codebase, along with a frontend for experimenting with different RAG customizations. It provides a simple way to structure your codebase so that it is easy to test locally while remaining deployable in a production-ready environment. The key issues that arise when productionizing a RAG system from a Jupyter Notebook are:
1. **Chunking and Embedding Job**: The chunking and embedding code usually needs to be abstracted out and deployed as a job. Sometimes the job needs to run on a schedule or be triggered by an event to keep the data updated.
2. **Query Service**: The code that generates the answer from the query needs to be wrapped in an API server such as FastAPI and deployed as a service (a minimal sketch follows this entry). This service should handle multiple queries concurrently and autoscale with higher traffic.
3. **LLM / Embedding Model Deployment**: With open-source models, we often load the model inside the Jupyter notebook. In production it needs to be hosted as a separate service and called as an API.
4. **Vector DB Deployment**: Most testing happens against vector DBs in memory or on disk. In production, the DB needs to be deployed in a more scalable and reliable way.
Cognita makes it easy to customize and experiment with every part of a RAG system while still deploying it cleanly. It also ships with a UI that makes it easier to try out different RAG configurations and see the results in real time. You can use it locally, with or without Truefoundry components; using Truefoundry components makes it easier to test different models and deploy the system in a scalable way. Cognita allows you to host multiple RAG systems using one app.
### Advantages of using Cognita
1. A central, reusable repository of parsers, loaders, embedders, and retrievers.
2. Non-technical users can work through the UI: upload documents and perform QnA using modules built by the development team.
3. Fully API-driven, which allows integration with other systems.
> If you use Cognita with the Truefoundry AI Gateway, you get logging, metrics, and a feedback mechanism for your user queries.
### Features
1. Support for multiple document retrievers that use `Similarity Search`, `Query Decomposition`, `Document Reranking`, etc.
2. Support for SOTA open-source embeddings and reranking from `mixedbread-ai`.
3. Support for running LLMs via `Ollama`.
4. Support for incremental indexing that ingests entire documents in batches (reducing compute burden), keeps track of already-indexed documents, and prevents re-indexing of those docs.
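As a concrete illustration of point 2 above, here is a minimal sketch (not Cognita's actual API) of wrapping an answer-from-query function in a FastAPI service; the route, request fields, and `answer_query` body are placeholders.

```python
# Illustrative sketch only: a RAG query function exposed as a FastAPI service.
# The route, fields, and answer_query body are placeholders, not Cognita code.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class QueryRequest(BaseModel):
    query: str
    collection: str = "default"

def answer_query(query: str, collection: str) -> str:
    # Placeholder: retrieve relevant chunks from the vector DB, then call the LLM.
    return f"Answer to {query!r} using collection {collection!r}"

@app.post("/answer")
def answer(req: QueryRequest) -> dict:
    return {"answer": answer_query(req.query, req.collection)}
```

Run it with an ASGI server such as `uvicorn`, and the service can then be scaled horizontally behind a load balancer.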
llm-analysis
llm-analysis is a tool designed for Latency and Memory Analysis of Transformer Models for Training and Inference. It automates the calculation of training or inference latency and memory usage for Large Language Models (LLMs) or Transformers based on specified model, GPU, data type, and parallelism configurations. The tool helps users to experiment with different setups theoretically, understand system performance, and optimize training/inference scenarios. It supports various parallelism schemes, communication methods, activation recomputation options, data types, and fine-tuning strategies. Users can integrate llm-analysis in their code using the `LLMAnalysis` class or use the provided entry point functions for command line interface. The tool provides lower-bound estimations of memory usage and latency, and aims to assist in achieving feasible and optimal setups for training or inference.
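To make the kind of lower-bound estimate concrete, here is a back-of-the-envelope sketch of the arithmetic such a tool automates; this is not the `LLMAnalysis` API, just the underlying weight-memory calculation.

```python
# Illustrative arithmetic only (not the llm-analysis API): a lower bound on
# per-GPU weight memory = parameter count x bytes per element / tensor parallelism.
BYTES_PER_DTYPE = {"fp32": 4, "fp16": 2, "bf16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(num_params: float, dtype: str, tensor_parallel: int = 1) -> float:
    """Lower-bound memory per device for model weights alone (no activations or optimizer state)."""
    return num_params * BYTES_PER_DTYPE[dtype] / tensor_parallel / 1e9

# Example: a 70B-parameter model in fp16 across 8-way tensor parallelism.
print(f"{weight_memory_gb(70e9, 'fp16', tensor_parallel=8):.1f} GB per GPU")  # ~17.5 GB
```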
nlp-llms-resources
The 'nlp-llms-resources' repository is a comprehensive resource list for Natural Language Processing (NLP) and Large Language Models (LLMs). It covers a wide range of topics including traditional NLP datasets, data acquisition, libraries for NLP, neural networks, sentiment analysis, optical character recognition, information extraction, semantics, topic modeling, multilingual NLP, domain-specific LLMs, vector databases, ethics, costing, books, courses, surveys, aggregators, newsletters, papers, conferences, and societies. The repository provides valuable information and resources for individuals interested in NLP and LLMs.
storm
STORM is an LLM system that writes Wikipedia-like articles from scratch based on Internet search. While the generated articles are not publication-ready and often require a significant number of edits, experienced Wikipedia editors have found the system helpful in their pre-writing stage. **Try out our [live research preview](https://storm.genie.stanford.edu/) to see how STORM can help your knowledge exploration journey and please provide feedback to help us improve the system 🙏!**
comfyui_LLM_party
COMFYUI LLM PARTY is a node library for LLM workflow development in ComfyUI, an extremely minimalist interface primarily used for AI drawing and SD-model-based workflows. The project aims to provide a complete set of nodes for constructing LLM workflows, enabling users to integrate them easily into existing SD workflows. It offers functionality such as API integration, local large-model integration, RAG support, code interpreters, online queries, conditional statements, looping calls to large models, persona mask attachment, and tool invocations for weather lookup, time lookup, knowledge bases, code execution, web search, and single-page search. Users can rapidly develop web applications using API + Streamlit and use an LLM as a tool node. The project also includes an omnipotent interpreter node that allows the large model to perform any task, with a recommendation to use the 'show_text' node to display output.
LiveBench
LiveBench is a benchmark for Large Language Models (LLMs) designed to limit contamination by releasing new questions monthly based on recent datasets, arXiv papers, news articles, and IMDb movie synopses. It provides verifiable, objective ground-truth answers so questions can be scored accurately without an LLM judge. The benchmark currently offers 18 diverse tasks across 6 categories, with more challenging tasks to be released over time. LiveBench is built on FastChat's llm_judge module and incorporates code from LiveCodeBench and IFEval.
CoPilot
TigerGraph CoPilot is an AI assistant that combines graph databases and generative AI to enhance productivity across various business functions. It includes three core component services: InquiryAI for natural language assistance, SupportAI for knowledge Q&A, and QueryAI for GSQL code generation. Users can interact with CoPilot through a chat interface on TigerGraph Cloud and APIs. CoPilot requires LLM services for beta but will support TigerGraph's LLM in future releases. It aims to improve contextual relevance and accuracy of answers to natural-language questions by building knowledge graphs and using RAG. CoPilot is extensible and can be configured with different LLM providers, graph schemas, and LangChain tools.
ultravox
Ultravox is a fast multimodal Language Model (LLM) that can understand both text and human speech in real time without a separate Automatic Speech Recognition (ASR) stage. By extending Meta's Llama 3 model with a multimodal projector, Ultravox converts audio directly into the high-dimensional space used by Llama 3, enabling quick responses and potential understanding of paralinguistic cues like timing and emotion in human speech. The current version (v0.3) has impressive speed metrics and aims for further enhancements. Ultravox currently converts audio to streaming text and plans to emit speech tokens for direct audio output. The tool is open for collaboration to enhance this functionality.
llm
LLM is a CLI utility and Python library for interacting with Large Language Models, both via remote APIs and through models that can be installed and run on your own machine. It allows users to run prompts from the command line, store results in SQLite, generate embeddings, and more. Plugins provide access to models from different providers, including self-hosted models that run locally on the user's own device. LLM offers various options for running Mistral models in the terminal and lets users start chat sessions with models. Users can also supply a system prompt to provide instructions for processing input to the tool.
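A short sketch of the Python side of the library, based on its documented `get_model`/`prompt` calls; the model ID and prompts below are illustrative, and a remote model like this one requires an API key to be configured first (for example via `llm keys set openai`).

```python
# Hedged sketch of the llm Python API; model name and prompts are illustrative.
import llm

model = llm.get_model("gpt-4o-mini")  # or a local model installed via a plugin
response = model.prompt(
    "Explain retrieval-augmented generation in two sentences.",
    system="You are a concise technical writer.",
)
print(response.text())
```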
T-MAC
T-MAC is a kernel library that directly supports mixed-precision matrix multiplication without the need for dequantization by utilizing lookup tables. It aims to boost low-bit LLM inference on CPUs by offering support for various low-bit models. T-MAC achieves significant speedup compared to SOTA CPU low-bit framework (llama.cpp) and can even perform well on lower-end devices like Raspberry Pi 5. The tool demonstrates superior performance over existing low-bit GEMM kernels on CPU, reduces power consumption, and provides energy savings. It achieves comparable performance to CUDA GPU on certain tasks while delivering considerable power and energy savings. T-MAC's method involves using lookup tables to support mpGEMM and employs key techniques like precomputing partial sums, shift and accumulate operations, and utilizing tbl/pshuf instructions for fast table lookup.
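To illustrate the lookup-table idea in the simplest possible setting, here is a toy NumPy sketch (not T-MAC's kernel code): with 1-bit weights processed in groups of 4, the partial sum of every possible 4-bit weight pattern against each activation group is precomputed once, and the multiplication then reduces to table lookups and accumulation.

```python
# Toy illustration of LUT-based mixed-precision multiplication (not T-MAC code).
import numpy as np

g = 4                                            # weights per lookup group
acts = np.random.randn(16).astype(np.float32)    # activation vector
w_bits = np.random.randint(0, 2, size=16)        # 1-bit weights (0/1)

# Precompute: for each activation group, the partial sum of all 2^g bit patterns.
tables = np.zeros((len(acts) // g, 2 ** g), dtype=np.float32)
for gi in range(len(acts) // g):
    a = acts[gi * g:(gi + 1) * g]
    for pattern in range(2 ** g):
        bits = [(pattern >> b) & 1 for b in range(g)]
        tables[gi, pattern] = float(np.dot(a, bits))

# "Inference": each group of weight bits becomes a table index; no multiplies needed.
total = 0.0
for gi in range(len(acts) // g):
    bits = w_bits[gi * g:(gi + 1) * g]
    index = sum(int(b) << i for i, b in enumerate(bits))
    total += tables[gi, index]

assert np.isclose(total, float(np.dot(acts, w_bits)))  # matches the dense dot product
print(total)
```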
DocsGPT
DocsGPT is an open-source documentation assistant powered by GPT models. It simplifies the process of searching for information in project documentation by allowing developers to ask questions and receive accurate answers. With DocsGPT, users can say goodbye to manual searches and quickly find the information they need. The tool aims to revolutionize project documentation experiences and offers features like live previews, Discord community, guides, and contribution opportunities. It consists of a Flask app, Chrome extension, similarity search index creation script, and a frontend built with Vite and React. Users can quickly get started with DocsGPT by following the provided setup instructions and can contribute to its development by following the guidelines in the CONTRIBUTING.md file. The project follows a Code of Conduct to ensure a harassment-free community environment for all participants. DocsGPT is licensed under MIT and is built with LangChain.
haystack-tutorials
Haystack is an open-source framework for building production-ready LLM applications, retrieval-augmented generative pipelines, and state-of-the-art search systems that work intelligently over large document collections. It lets you quickly try out the latest models in natural language processing (NLP) while being flexible and easy to use.
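As a taste of what the tutorials build up to, here is a minimal hedged sketch of a two-component Haystack pipeline; the import paths and component names follow the Haystack 2.x API and may differ on older versions, and an `OPENAI_API_KEY` environment variable is assumed.

```python
# Hedged sketch of a minimal Haystack 2.x pipeline: prompt template -> LLM.
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator

pipe = Pipeline()
pipe.add_component("prompt_builder", PromptBuilder(template="Answer briefly: {{ question }}"))
pipe.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))  # model name is illustrative
pipe.connect("prompt_builder.prompt", "llm.prompt")

result = pipe.run({"prompt_builder": {"question": "What is retrieval-augmented generation?"}})
print(result["llm"]["replies"][0])
```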
flux
Flux is a powerful tool for interacting with large language models (LLMs) that generates multiple completions per prompt in a tree structure and lets you explore the best ones in parallel. Flux's tree structure allows you to get a wider variety of creative responses, test out different prompts with the same shared context, and use inconsistencies to identify where the model is uncertain. It also provides a robust set of keyboard shortcuts, allows setting the system message and editing GPT messages, autosaves to local storage, uses the OpenAI API directly, and is 100% open source and MIT licensed.
godot-llm
Godot LLM is a plugin that enables the utilization of large language models (LLM) for generating content in games. It provides functionality for text generation, text embedding, multimodal text generation, and vector database management within the Godot game engine. The plugin supports features like Retrieval Augmented Generation (RAG) and integrates llama.cpp-based functionalities for text generation, embedding, and multimodal capabilities. It offers support for various platforms and allows users to experiment with LLM models in their game development projects.
Scrapegraph-ai
ScrapeGraphAI is a web scraping Python library that utilizes LLM and direct graph logic to create scraping pipelines for websites and local documents. It offers various standard scraping pipelines like SmartScraperGraph, SearchGraph, SpeechGraph, and ScriptCreatorGraph. Users can extract information by specifying prompts and input sources. The library supports different LLM APIs such as OpenAI, Groq, Azure, and Gemini, as well as local models using Ollama. ScrapeGraphAI is designed for data exploration and research purposes, providing a versatile tool for extracting information from web pages and generating outputs like Python scripts, audio summaries, and search results.
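A hedged sketch of the SmartScraperGraph pipeline mentioned above; the class and `run()` call follow the project README, while the config keys (and the `openai/...` model naming) vary between library versions and should be treated as assumptions.

```python
# Hedged sketch of a SmartScraperGraph run; config keys may differ by version.
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "api_key": "YOUR_OPENAI_API_KEY",   # placeholder
        "model": "openai/gpt-4o-mini",      # assumption: naming varies by version
    },
}

scraper = SmartScraperGraph(
    prompt="List the titles and links of the articles on this page.",
    source="https://example.com/blog",
    config=graph_config,
)
print(scraper.run())
```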
cognee
Cognee is an open-source framework designed for creating self-improving deterministic outputs for Large Language Models (LLMs) using graphs, LLMs, and vector retrieval. It provides a platform for AI engineers to enhance their models and generate more accurate results. Users can leverage Cognee to add new information, utilize LLMs for knowledge creation, and query the system for relevant knowledge. The tool supports various LLM providers and offers flexibility in adding different data types, such as text files or directories. Cognee aims to streamline the process of working with LLMs and improving AI models for better performance and efficiency.
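A hedged sketch of the add, cognify, and search flow described above; the call names follow the project README, but the exact signatures (especially of `search`) and return shapes differ across versions and are assumptions here.

```python
# Hedged sketch of cognee's async add -> cognify -> search flow; signatures are
# assumptions and may differ between cognee versions.
import asyncio
import cognee

async def main():
    await cognee.add("Large language models can be grounded with knowledge graphs.")
    await cognee.cognify()  # build the knowledge graph and embeddings
    results = await cognee.search("What can ground large language models?")
    for result in results:
        print(result)

asyncio.run(main())
```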
h2o-llmstudio
H2O LLM Studio is a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). With H2O LLM Studio, you can easily and effectively fine-tune LLMs without the need for any coding experience. The GUI is specially designed for large language models, and you can finetune any LLM using a large variety of hyperparameters. You can also use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint. Additionally, you can use Reinforcement Learning (RL) to finetune your model (experimental), use advanced evaluation metrics to judge generated answers by the model, track and compare your model performance visually, and easily export your model to the Hugging Face Hub and share it with the community.
20 - OpenAI GPTs
What is my dog thinking?
Upload a candid photo of your dog and let AI try to figure out what’s going on.
What is my cat thinking?
Upload a candid photo of your cat and let AI try to figure out what’s going on.
The Meme Doctor (GIVE ME A TRY!!)
Choose a topic. Choose a quote out of the many I create for you. Wait for the Magic to Happen!! Kaboozi, got yourself some funny azz memes!
Chat with GPT 4o ("Omni") Assistant
Try the new AI chat model: GPT 4o ("Omni") Assistant. It's faster and better than regular GPT, and incorporates speech-to-text and text-to-speech capabilities with extra low latency.
Easily Hackable GPT
A regular GPT to try to hack with a prompt injection. Ask for my instructions and see what happens.
Doctor Who Whovian Expert
Ask any question about Doctor Who past or present - try discussing any aspect of any story, or theme - or get the lowdown on the latest news.
Experimental Splink helper v2
I'm Splink Helper, here to (try to) assist with the Splink Python library. I'm very experimental, so don't expect my answers to be accurate.
No Web Browser GPT
No web browser. Doesn't try to use the web to look up events. Nor can it.
Pepe Picasso
Create your own Pepe! Just tell me what Pepe you want to see and I'll try my best to fulfill your wishes!
Six Sigma Guru
No one knows more Six Sigma than us! You can try our GPT Six Sigma Guru for study or simply to find answers to your problems.