Best AI tools for Connect To LLM
20 - AI Tool Sites
Backmesh
Backmesh is an AI tool that serves as a proxy on edge CDN servers, enabling secure and direct access to LLM APIs without the need for a backend or SDK. It allows users to call LLM APIs from their apps, ensuring protection through JWT verification and rate limits. Backmesh also offers user analytics for LLM API calls, helping identify usage patterns and enhance user satisfaction within AI applications.
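As a rough illustration of the pattern Backmesh describes — a client calling an LLM API through an authenticated proxy rather than shipping API keys or running a backend — here is a minimal Python sketch. The proxy URL, model name, and payload shape are hypothetical placeholders; only the idea of sending the user's JWT for verification and rate limiting comes from the description above.

```python
# Minimal sketch: calling an LLM API through an authenticating proxy.
# The proxy URL, model name, and request body are hypothetical; the real
# Backmesh endpoint and request format may differ.
import requests

PROXY_URL = "https://your-backmesh-proxy.example.com/v1/chat/completions"  # hypothetical
USER_JWT = "eyJhbGciOi..."  # JWT issued by your app's auth provider

response = requests.post(
    PROXY_URL,
    headers={"Authorization": f"Bearer {USER_JWT}"},  # proxy verifies the JWT and applies rate limits
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello from the client app"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```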
Prodmagic
Prodmagic is an AI tool that enables users to easily build advanced chatbots in minutes. With Prodmagic, users can connect their custom LLMs to create chatbots that can be integrated on any website with just a single line of code. The platform allows for easy customization of chatbots' appearance and behavior without the need for coding. Prodmagic also offers integration with custom LLMs and OpenAI, providing a seamless experience for users looking to enhance their chatbot capabilities. Additionally, Prodmagic provides essential features such as chat history, error monitoring, and simple pricing plans, making it a convenient and cost-effective solution for chatbot development.
AiPlus
AiPlus is an AI tool designed to serve as a cost-efficient model gateway. It offers users a platform to access and utilize various AI models for their projects and tasks. With AiPlus, users can easily integrate AI capabilities into their applications without the need for extensive development or resources. The tool aims to streamline the process of leveraging AI technology, making it accessible to a wider audience.
Jan
Jan is an open-source ChatGPT-alternative that runs 100% offline. It allows users to chat with AI, download and run powerful models, connect to cloud AIs, set up a local API server, and chat with files. Highly customizable, Jan also offers features like creating personalized AI assistants, memory, and extensions. The application prioritizes local-first AI, user-owned data, and full customization, making it a versatile tool for AI enthusiasts and developers.
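Because Jan can expose a local API server, a client can talk to it much like a hosted LLM endpoint. The sketch below assumes an OpenAI-compatible server listening at http://localhost:1337/v1 and a model name you have already downloaded in Jan; adjust both to match your installation.

```python
# Sketch: chatting with a locally running Jan API server.
# Assumes the local server speaks the OpenAI-compatible API on port 1337;
# the base_url and model name are assumptions to adapt to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed-locally")

reply = client.chat.completions.create(
    model="llama3-8b-instruct",  # whichever model you downloaded in Jan
    messages=[{"role": "user", "content": "Summarize what Jan does in one sentence."}],
)
print(reply.choices[0].message.content)
```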
Monitr
Monitr is a data visualization and analytics platform that allows users to query, visualize, and share data in one place. It helps in tracking key metrics, making data-driven decisions, and breaking down data silos to provide a unified view of data from various sources. Users can create charts and dashboards, connect to different data sources like PostgreSQL and MySQL, and collaborate with teammates on SQL queries. Monitr's AI features are powered by Meta AI's Llama 3 LLM, enabling the development of powerful and flexible analytics tools for maximizing data utilization.
Surfsite
Surfsite is an AI application designed for SaaS professionals to streamline workflows, make data-driven decisions, and enhance productivity. It offers AI assistants that connect to essential tools, provide real-time insights, and assist in various tasks such as market research, project management, and analytics. Surfsite aims to centralize data, improve decision-making, and optimize processes for product managers, growth marketers, and founders. The application leverages advanced LLMs and integrates seamlessly with popular tools like Google Docs, Jira, and Trello to offer a comprehensive AI-powered solution.
UBOS
UBOS is an engineering platform for Software 3.0 and AI Agents, offering a comprehensive suite of tools for building enterprise-ready internal development platforms, web applications, and intelligent workflows. It enables users to connect to over 1000 APIs, automate workflows with AI, and access a marketplace with templates and AI models. UBOS empowers startups, small and medium businesses, and large enterprises to drive growth, efficiency, and innovation through advanced ML orchestration and Generative AI custom integration. The platform provides a user-friendly interface for creating AI-native applications, leveraging Generative AI, Node-Red SCADA, Edge AI, and IoT technologies. With a focus on open-source development, UBOS offers full code ownership, flexible exports, and seamless integration with leading LLMs like ChatGPT and Llama 2 from Meta.
Retell AI
Retell AI provides a Conversational Voice API that enables developers to integrate human-like voice interactions into their applications. With Retell AI's API, developers can easily connect their own Large Language Models (LLMs) to create AI-powered voice agents that can engage in natural and engaging conversations. Retell AI's API offers a range of features, including ultra-low latency, realistic voices with emotions, interruption handling, and end-of-turn detection, ensuring seamless and lifelike conversations. Developers can also customize various aspects of the conversation experience, such as voice stability, backchanneling, and custom voice cloning, to tailor the AI agent to their specific needs. Retell AI's API is designed to be easy to integrate with existing LLMs and frontend applications, making it accessible to developers of all levels.
Ragie
Ragie is a fully managed RAG-as-a-Service platform designed for developers. It offers easy-to-use APIs and SDKs to help developers get started quickly, with advanced features like LLM re-ranking, summary index, entity extraction, flexible filtering, and hybrid semantic and keyword search. Ragie allows users to connect directly to popular data sources like Google Drive, Notion, Confluence, and more, ensuring accurate and reliable information delivery. The platform is backed by Craft Ventures and offers seamless data connectivity through connectors. Ragie simplifies the process of data ingestion, chunking, indexing, and retrieval, making it a valuable tool for AI applications.
Riku
Riku is a no-code platform that allows users to build and deploy powerful generative AI for their business. With access to over 40 industry-leading LLMs, users can easily test different prompts to find just the right one for their needs. Riku's platform also allows users to connect siloed data sources and systems together to feed into powerful AI applications. This makes it easy for businesses to automate repetitive tasks, test ideas rapidly, and get answers in real-time.
Symanto
Symanto is a global leader in Human AI, specializing in human language understanding and generation. The company's proprietary platform integrates with common LLMs and works across industries and languages. Symanto's technology enables computers to connect with people like friends, fostering trust-filled interactions full of emotion, empathy, and understanding. The company's clients include automotive, consulting, healthcare, consumer, sports, and other industries.
Exa
Exa is a search engine that uses embeddings-based search to retrieve the best content on the web. It is trusted by companies and developers from all over the world. Exa is like Google, but it is better at understanding the meaning of your queries and returning results that are more relevant to your needs. Exa can be used for a variety of tasks, including finding information on the web, conducting research, and building AI applications.
LLMStack
LLMStack is an open-source platform that allows users to build AI Agents, workflows, and applications using their own data. It is a no-code AI app builder that supports model chaining from major providers like OpenAI, Cohere, Stability AI, and Hugging Face. Users can import various data sources such as Web URLs, PDFs, audio files, and more to enhance generative AI applications and chatbots. With a focus on collaboration, LLMStack enables users to share apps publicly or restrict access, with viewer and collaborator roles for multiple users to work together. Powered by React, LLMStack provides an easy-to-use interface for building AI applications.
Magic Loops
Magic Loops is an AI tool that allows users to create automated workflows using ChatGPT automations. Users can connect data, send emails, receive texts, scrape websites, and more. The tool enables users to automate various tasks by creating personalized loops that respond to specific triggers and inputs.
Composio
Composio is an integration platform for AI Agents and LLMs that allows users to access over 150 tools with just one line of code. It offers seamless integrations, managed authentication, a repository of tools, and powerful RPA tools to streamline and optimize the connection and interaction between AI Agents/LLMs and various APIs/services. Composio simplifies JSON structures, improves variable names, and enhances error handling to increase reliability by 30%. The platform is SOC Type II compliant, ensuring maximum security of user data.
GPTs.Fan
GPTs.Fan is a comprehensive platform dedicated to GPT designers, providing a wealth of resources and support. It offers a vibrant community forum where designers can connect, share knowledge, and collaborate on projects. Additionally, GPTs.Fan features a curated collection of GPT-related tools, tutorials, and articles, empowering designers to stay up-to-date with the latest advancements in the field.
SuperAnnotate
SuperAnnotate is an AI data platform that simplifies and accelerates model-building by unifying the AI pipeline. It enables users to create, curate, and evaluate datasets efficiently, leading to the development of better models faster. The platform offers features like connecting any data source, building customizable UIs, creating high-quality datasets, evaluating models, and deploying models seamlessly. SuperAnnotate ensures global security and privacy measures for data protection.
BotX
BotX is a No-Code AI Platform that enables users to automate and deploy generative AI workflows, chatbots, and solutions. It offers production-ready AI systems to increase productivity, build AI agents and chatbots, automate workflows, create or process documents, and connect models effortlessly. With a focus on efficiency and reliability, BotX aims to simplify AI implementation for businesses of all sizes.
Narada
Narada is an AI application designed for busy professionals to streamline their work processes. It leverages cutting-edge AI technology to automate tasks, connect favorite apps, and enhance productivity through intelligent automation. Narada's LLM Compiler routes text and voice commands to the right tools in real time, offering seamless app integration and time-saving features.
Kovil.AI
Kovil.AI is an AI-powered platform that connects businesses with top AI talents from India's largest network. The platform offers a vetting process to match businesses with hand-picked Indian developers, covering a wide range of expertise in AI, machine learning, data science, and more. Kovil.AI aims to empower ambitious businesses by providing access to specialized, high-caliber AI professionals, accelerating the hiring process, and reducing costs. The platform also offers managed services and products, ensuring flexibility, adaptability, and a competitive advantage for businesses seeking top talent.
20 - Open Source AI Tools
llm-x
LLM X is a ChatGPT-style UI for the niche group of folks who run Ollama (think of it as an offline ChatGPT server) locally. It supports sending and receiving images and text and works offline through PWA (Progressive Web App) standards. The project uses React, TypeScript, Lodash, MobX State Tree, Tailwind CSS, DaisyUI, NextUI, Highlight.js, React Markdown, kbar, Yet Another React Lightbox, Vite, and the Vite PWA plugin. It is inspired by the ollama-ui project and Perplexity.ai's UI advancements in the LLM UI space. The project is still under development, but it is already a great way to get started with building your own LLM UI.
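Since LLM X is a UI layered on a locally running Ollama server, the wiring underneath is Ollama's HTTP API. The sketch below assumes Ollama is running on its default port (11434) with a model already pulled; the model name is just an example.

```python
# Sketch: the kind of local Ollama request a UI like LLM X sits on top of.
# Assumes an Ollama server on localhost:11434 with a pulled model (name is an example).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why run an LLM locally?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # non-streaming responses return the full text in "response"
```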
pyllms
PyLLMs is a minimal Python library designed to connect to various large language models (LLMs) such as OpenAI, Anthropic, Google, AI21, Cohere, Aleph Alpha, and Hugging Face Hub. It provides a built-in model performance benchmark for fast prototyping and evaluating different models. Users can easily connect to top LLMs, get completions from multiple models simultaneously, and evaluate models on quality, speed, and cost. The library supports asynchronous completion, streaming from compatible models, and multi-model initialization for testing and comparison. Additionally, it offers features like passing chat history, system messages, counting tokens, and benchmarking models based on quality, speed, and cost.
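A minimal usage sketch based on the description above: the model identifier is a placeholder, and provider API keys are assumed to be supplied via environment variables (e.g. OPENAI_API_KEY). Check the PyLLMs README for the exact interface.

```python
# Sketch of PyLLMs as described above: connect to a provider and get a completion.
# The model name is an example; API keys are expected in environment variables.
import llms

model = llms.init("gpt-4o")                              # pick any supported provider/model
result = model.complete("What is the capital of France?")
print(result.text)                                       # completion text
print(result.meta)                                       # per-call metadata (tokens, cost, latency), per the library's benchmarking focus
```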
ray-llm
RayLLM (formerly known as Aviary) is an LLM serving solution that makes it easy to deploy and manage a variety of open source LLMs, built on Ray Serve. It provides an extensive suite of pre-configured open source LLMs, with defaults that work out of the box. RayLLM supports Transformer models hosted on Hugging Face Hub or present on local disk. It simplifies the deployment of multiple LLMs, the addition of new LLMs, and offers unique autoscaling support, including scale-to-zero. RayLLM fully supports multi-GPU and multi-node model deployments and offers high-performance features like continuous batching, quantization, and streaming. It provides a REST API similar to OpenAI's to make it easy to migrate to and cross-test deployments. RayLLM supports multiple LLM backends out of the box, including vLLM and TensorRT-LLM.
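Because the description says RayLLM exposes an OpenAI-like REST API, an existing OpenAI client can usually be pointed at the deployment. In the sketch below the base URL and model id are placeholders for whatever your own RayLLM configuration serves.

```python
# Sketch: querying a RayLLM deployment through its OpenAI-compatible REST API.
# The base_url and model id are placeholders for your own deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

chat = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",  # whichever model your RayLLM config serves
    messages=[{"role": "user", "content": "Hello!"}],
)
print(chat.choices[0].message.content)
```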
Open-LLM-VTuber
Open-LLM-VTuber is a project in early stages of development that allows users to interact with Large Language Models (LLM) using voice commands and receive responses through a Live2D talking face. The project aims to provide a minimum viable prototype for offline use on macOS, Linux, and Windows, with features like long-term memory using MemGPT, customizable LLM backends, speech recognition, and text-to-speech providers. Users can configure the project to chat with LLMs, choose different backend services, and utilize Live2D models for visual representation. The project supports perpetual chat, offline operation, and GPU acceleration on macOS, addressing limitations of existing solutions on that platform.
aider
Aider is an AI pair programming tool that allows users to collaborate with large language models (LLMs) to edit code in their local git repository. It works best with GPT-4o and Claude 3.5 Sonnet and can connect to almost any LLM. Users can run Aider with specific files, request changes, add new features or test cases, describe bugs, refactor code, update docs, and more. Aider automatically commits changes with sensible messages, supports multiple programming languages, and can handle complex requests by editing multiple files at once. It uses a map of the entire git repo for efficient performance in larger codebases. Users can chat with Aider, add images, URLs, and even code with their voice. Aider has achieved top scores on SWE-bench, solving real GitHub issues from popular open source projects like Django, scikit-learn, matplotlib, etc.
WilmerAI
WilmerAI is a middleware system designed to process prompts before sending them to Large Language Models (LLMs). It categorizes prompts, routes them to appropriate workflows, and generates manageable prompts for local models. It acts as an intermediary between the user interface and LLM APIs, supporting multiple backend LLMs simultaneously. WilmerAI provides API endpoints compatible with OpenAI API, supports prompt templates, and offers flexible connections to various LLM APIs. The project is under heavy development and may contain bugs or incomplete code.
crewAI
CrewAI is a cutting-edge framework designed to orchestrate role-playing autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. It enables AI agents to assume roles, share goals, and operate in a cohesive unit, much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions. With features like role-based agent design, autonomous inter-agent delegation, flexible task management, and support for various LLMs, CrewAI offers a dynamic and adaptable solution for both development and production workflows.
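To make the role-based agent design concrete, here is a short sketch using CrewAI's core objects. The roles, goals, and task text are invented for illustration, and an LLM provider is assumed to be configured via environment variables (by default an OpenAI key).

```python
# Sketch: a minimal two-step "crew" with one agent and one task.
# Role, goal, backstory, and task text are illustrative; an LLM API key
# (e.g. OPENAI_API_KEY) is assumed to be set in the environment.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research analyst",
    goal="Summarize recent developments in open-source LLM tooling",
    backstory="An analyst who tracks developer tools and writes concise briefs.",
)

brief = Task(
    description="Write a three-bullet brief on why teams adopt multi-agent frameworks.",
    expected_output="Three short bullet points.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[brief])
result = crew.kickoff()  # runs the task(s) and returns the final output
print(result)
```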
codecompanion.nvim
CodeCompanion.nvim is a Neovim plugin that provides a Copilot Chat experience, adapter support for various LLMs, agentic workflows, inline code creation and modification, built-in actions for language prompts and error fixes, custom actions creation, async execution, and more. It supports Anthropic, Ollama, and OpenAI adapters. The plugin is primarily developed for personal workflows with no guarantees of regular updates or support. Users can customize the plugin to their needs by forking the project.
harbor
Harbor is a containerized LLM toolkit that simplifies the initial configuration of various LLM-related projects by providing a CLI and pre-configured Docker Compose setup. It serves as a base for managing a local LLM stack, offering convenience utilities for tasks like model management, configuration, and service debugging. Users can access service CLIs via Docker without installation, benefit from pre-configured services that work together, share and reuse the host cache, and co-locate service configs. Additionally, users can eject from Harbor to run services without it.
intelligence-layer-sdk
The Aleph Alpha Intelligence Layer offers a comprehensive suite of development tools for crafting solutions that harness the capabilities of large language models (LLMs). With a unified framework for LLM-based workflows, it facilitates seamless AI product development, from prototyping and prompt experimentation to result evaluation and deployment. The Intelligence Layer SDK provides features such as Composability, Evaluability, and Traceability, along with examples to get started. It supports local installation using Poetry, integration with Docker, and access to LLM endpoints for tutorials and tasks like Summarization, Question Answering, Classification, Evaluation, and Parameter Optimization. The tool also offers pre-configured tasks such as Classify, QA, Search, and Summarize, serving as a foundation for custom development.
crewAI
crewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. It provides a flexible and structured approach to AI collaboration, enabling users to define agents with specific roles, goals, and tools, and assign them tasks within a customizable process. crewAI supports integration with various LLMs, including OpenAI, and offers features such as autonomous task delegation, flexible task management, and output parsing. It is open-source and welcomes contributions, with a focus on improving the library based on usage data collected through anonymous telemetry.
aider
Aider is a command-line tool that lets you pair program with GPT-3.5/GPT-4 to edit code stored in your local git repository. Aider will directly edit the code in your local source files and git commit the changes with sensible commit messages. You can start a new project or work with an existing git repo. Aider is unique in that it lets you ask for changes to pre-existing, larger codebases.
rag-experiment-accelerator
The RAG Experiment Accelerator is a versatile tool that helps you conduct experiments and evaluations using Azure AI Search and RAG pattern. It offers a rich set of features, including experiment setup, integration with Azure AI Search, Azure Machine Learning, MLFlow, and Azure OpenAI, multiple document chunking strategies, query generation, multiple search types, sub-querying, re-ranking, metrics and evaluation, report generation, and multi-lingual support. The tool is designed to make it easier and faster to run experiments and evaluations of search queries and quality of response from OpenAI, and is useful for researchers, data scientists, and developers who want to test the performance of different search and OpenAI related hyperparameters, compare the effectiveness of various search strategies, fine-tune and optimize parameters, find the best combination of hyperparameters, and generate detailed reports and visualizations from experiment results.
dialog
Dialog is an API-focused tool designed to simplify the deployment of Large Language Models (LLMs) for programmers interested in AI. It allows users to deploy any LLM based on the structure provided by dialog-lib, enabling them to spend less time coding and more time training their models. The tool aims to humanize Retrieval-Augmented Generative Models (RAGs) and offers features for better RAG deployment and maintenance. Dialog requires a knowledge base in CSV format and a prompt configuration in TOML format to function effectively. It provides functionalities for loading data into the database, processing conversations, and connecting to the LLM, with options to customize prompts and parameters. The tool also requires specific environment variables for setup and configuration.
llms-with-matlab
This repository contains example code to demonstrate how to connect MATLAB to the OpenAI™ Chat Completions API (which powers ChatGPT™) as well as OpenAI Images API (which powers DALL·E™). This allows you to leverage the natural language processing capabilities of large language models directly within your MATLAB environment.
llm-app
Pathway's LLM (Large Language Model) Apps provide a platform to quickly deploy AI applications using the latest knowledge from data sources. The Python application examples in this repository are Docker-ready, exposing an HTTP API to the frontend. These apps utilize the Pathway framework for data synchronization, API serving, and low-latency data processing without the need for additional infrastructure dependencies. They connect to document data sources like S3, Google Drive, and SharePoint, offering features like real-time data syncing, easy alert setup, scalability, monitoring, security, and unification of application logic.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
godot-llm
Godot LLM is a plugin that enables the utilization of large language models (LLM) for generating content in games. It provides functionality for text generation, text embedding, multimodal text generation, and vector database management within the Godot game engine. The plugin supports features like Retrieval Augmented Generation (RAG) and integrates llama.cpp-based functionalities for text generation, embedding, and multimodal capabilities. It offers support for various platforms and allows users to experiment with LLM models in their game development projects.
20 - OpenAI Gpts
Idea To Code GPT
Generates a full and complete Python codebase, after asking clarifying questions, by following a structured section pattern.
Flask Expert Assistant
This GPT is a specialized assistant for Flask, the popular web framework in Python. It is designed to help both beginners and experienced developers with Flask-related queries, ranging from basic setup and routing to advanced features like database integration and application scaling.
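For context on the "basic setup and routing" this assistant covers, here is the standard minimal Flask app, shown purely as a reference starting point rather than anything specific to the GPT itself.

```python
# Minimal Flask app: the "basic setup and routing" starting point.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run(debug=True)  # development server only; use a WSGI server in production
```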
Django Helper
Helps web programmers learn Django best practices and use smart defaults. Get things done really fast!
Power Query Assistant
Expert in Power Query and DAX for Power BI, offering in-depth guidance and insights
T3Stack Development Assistant
Supports development with the T3 Stack: Next.js, TypeScript, Prisma, tRPC, Tailwind.css, Next-Auth.js, and more
Power Platform Helper
Trained on learn.microsoft.com content including Azure Functions, Logic Apps, DAX, Dynamics365, Microsoft 365, Compliance, ODATA, Power Agents, Apps, Automate, BI, Pages, Query, Power Platform Administration, Developer, Guidance
PowerBI GPT
A PowerBI Expert assisting with debugging, dashboard ideas, and PowerBI service guidance.
Java Development Engineer
Proficient in all Java knowledge and frameworks, especially Spring Boot and Spring Cloud backend API development, as well as complete CRUD code development with entity classes and MySQL. Resolves any Java error and has the capabilities of a senior Java engineer.
Drunk Santa
I'm Drunk Santa, ready to boost your ideas and connect you to psyborg® to bring them to life.
CheerLights IoT Expert
Chat with an expert on the CheerLights IoT project. Learn how to use its API and write code to connect your project.
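As a small illustration of the kind of API call this GPT helps with, the sketch below reads the current CheerLights color over HTTP. It uses the commonly documented ThingSpeak channel and field for CheerLights; treat the exact URL as an assumption and verify it against the project's documentation.

```python
# Sketch: reading the current CheerLights color.
# The ThingSpeak channel/field URL is the commonly documented one, but is an
# assumption here - confirm it with the CheerLights docs before relying on it.
import requests

url = "http://api.thingspeak.com/channels/1417/field/1/last.txt"
color = requests.get(url, timeout=10).text.strip()
print(f"CheerLights is currently {color}")
```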