wingman-ai
Stars: 147
README:
Official website: https://www.wingman-ai.com
Wingman AI allows you to use your voice to talk to various AI providers and LLMs, process your conversations, and ultimately trigger actions such as pressing buttons or reading answers. Our Wingmen are like characters and your interface to this world, and you can easily control their behavior and characteristics, even if you're not a developer. AI is complex and it scares people. It's also not just ChatGPT. We want to make it as easy as possible for you to get started. That's what Wingman AI is all about. It's a framework that allows you to build your own Wingmen and use them in your games and programs.
The idea is simple, but the possibilities are endless. For example, you could:
- Role play with an AI while playing for more immersion. Have air traffic control (ATC) in Star Citizen or Flight Simulator. Talk to Shadowheart in Baldur's Gate 3 and have her respond in her own (cloned) voice.
- Get live data such as trade information, build guides, or wiki content and have it read to you in-game by a character and voice you control.
- Execute keystrokes in games/applications and create complex macros. Trigger them in natural conversations with no need for exact phrases. The AI understands the context of your dialog and is quite smart in recognizing your intent. Say "It's raining! I can't see a thing!" and have it trigger a command you simply named WipeVisors.
- Automate tasks on your computer
- improve accessibility
- ... and much more
Since version 2.0, Wingman AI Core acts as a "backend" API (using FastAPI and Pydantic) with the following features:
- Push-to-talk or voice activation to capture user audio
- AI providers with different models:
  - OpenAI
  - Google (Gemini)
  - Azure
  - Groq (llama3 with function calling)
  - Mistral Cloud
  - Open Router
  - Cerebras
  - Perplexity
  - Wingman Pro (unlimited access to several providers and models)
- Speech-to-text (STT) providers for transcription:
  - OpenAI Whisper
  - Azure Whisper
  - Azure Speech
  - whispercpp (local, bundled with Wingman AI)
  - Wingman Pro (Azure Speech or Azure Whisper)
- Text-to-speech (TTS) providers:
  - OpenAI TTS
  - Azure TTS
  - Edge TTS (free)
  - Elevenlabs
  - XVASynth (local)
- Sound effects that work with every supported TTS provider
- Multilingual by default
- Command recording & execution (keyboard & mouse):
  - AI-powered: OpenAI decides when to execute commands based on user input. Users don't need to say exact phrases.
  - Instant activation: users can (almost) instantly trigger commands by saying exact phrases.
  - Optional: predetermined responses
- Custom Wingman support: developers can easily plug in their own Python scripts with custom implementations
- Skills that can do almost anything. Think Alexa... but better.
- Directory/file-based configuration for different use cases (e.g. games) and Wingmen. No database needed.
- Wingman AI Core exposes a lot of its functionality via REST services (with an OpenAPI/Swagger spec) and can send and receive messages from clients, games etc. using WebSockets.
- Sound Library to play mp3 or wav files in commands or Skills (similar to HCS Voice Packs for Voice Attack)
- AI instant sound effects generation with Elevenlabs
We (Team ShipBit) offer an additional client with a neat GUI that you can use to configure everything in Wingman AI Core.
Is Wingman AI a Star Citizen tool? No, it is not! We presented an early prototype of Wingman AI in Star Citizen on YouTube, which caused a lot of excitement and interest in the community. Star Citizen is a great game, we love it and it has a lot of interesting use cases for Wingmen, but it's not the only game we play and not the core of our interest. We're also not affiliated with CIG or Star Citizen in any way.
The video that started it all:
Wingman AI is an external, universal tool that you can run alongside any game or program. As such, it does not currently interact directly with Star Citizen or any other game, other than its ability to trigger system-wide keystrokes, which of course can have an effect on the game. However, if you find a way to interact with a game, either through an API or by reading the game's memory, you could - in theory - use it to directly trigger in-game actions or feed your models with live data. This is not the focus of Wingman AI, though.
The project is intended for two different groups of users:
If you're a developer, you can just clone the repository and start building your own Wingmen. We try to keep the codebase as open and hackable as possible, with lots of hooks and extension points. The base classes you'll need are well documented, and we're happy to help you get started. We also provide a development guide to help you with the setup. Wingman AI Core is currently 100% written in Python.
If you're not a developer, you can start with pre-built Wingmen from us or from the community and adapt them to your needs. Since version 2, we offer an easy-to-use client for Windows that you can use to configure every single detail of your Wingmen. It also handles multiple configurations and offers system-wide settings like audio device selection.
Wingman AI Core is free, but the AI providers you'll be using might not be. We know that this is a big concern for many people, so we are offering "Wingman Pro", a subscription-based service with a flat fee for all the AI providers you need (and additional GUI features). This way, you won't have to worry about opaque "pay-per-use" costs.
Check out the pricing and features here: Wingman AI Pro
Wingman AI also supports local providers that you have to set up on your own but can then use and connect with our client for free:
You can also use your own API key to use the following services:
Our Wingmen use OpenAI's APIs and they charge by usage. That means: You don't pay a flat subscription fee, but rather for each call you make to their APIs. You can find more information about the APIs and their pricing on the OpenAI website. You will need to create your API key:
- Navigate to openai.com and click on "Try ChatGPT".
- Choose "Sign-Up" and create an account.
- (if you get an error, go back to openai.com)
- Click "Login".
- Fill in your personal information and verify your phone number.
- Select API. You don't need ChatGPT Plus to use Wingman AI.
- (Go to "Settings > Limits" and set a low soft and hard "usage limit" for your API key. We recommend this to avoid unexpected costs. $5 is fine for now)
- Go to "Billing" and add a payment method.
- Select "API Key" from the menu on the left and create one. Copy it! If you forget it, you can always create a new one.
You don't have to use Elevenlabs as TTS provider, but their voices are great and you can generate instant sound effects with their API - fully integrated into Wingman AI. You can clone any voice with 3 minutes of clean audio, e.g. your friend, an actor or a recording of an NPC in your game.
Elevenlabs offers a $5 tier with 30k characters and a $22 tier with 100k characters. Characters roll over each month with a max of 3 months worth of credits. If you're interested in the service, please consider using our referral link here. It costs you nothing extra and supports Wingman AI. We get 22% of all payments in your first year. Thank you!
Signing up is very similar to OpenAI: Create your account, set up your payment method, and create an API key. Enter that API key in Wingman AI when asked.
Microsoft Edge TTS is actually free and you don't need an API key to use it. However, it's not as "good" as the others in terms of quality. Their voices are split by language, so the same voice can't speak different languages - you have to choose a new voice for the new language instead. Wingman does this for you, but it's still "Windows TTS" and not as good as the other providers.
You can use any LLM offering an OpenAI-compatible API and connect it to Wingman AI Core easily.
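To make "OpenAI-compatible" concrete: any server that accepts the standard chat completions request body can, in principle, be wired up as a provider. Here is a minimal sketch of that request shape; the `base_url` and model name are hypothetical placeholders for whatever local server you run, not values taken from Wingman AI:

```python
import json

# Sketch of an OpenAI-compatible chat completions payload. A server that
# accepts this JSON at POST {base_url}/chat/completions can act as an LLM
# provider. base_url and model are illustrative assumptions.
base_url = "http://localhost:11434/v1"  # hypothetical local endpoint

payload = {
    "model": "llama3",  # whatever model your local server exposes
    "messages": [
        {"role": "system", "content": "You are a helpful Wingman."},
        {"role": "user", "content": "Request landing permission."},
    ],
}

# Serialize and round-trip, as an HTTP client would before sending.
body = json.dumps(payload)
print(json.loads(body)["messages"][1]["content"])
```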
- Download the installer of the latest version from wingman-ai.com.
- Install it to a directory of your choice and start the client `Wingman AI.exe`.
- The client will auto-start `Wingman AI Core.exe` in the background.
- The client will auto-start `whispercpp` in the background. If you have an NVIDIA RTX GPU, install the latest CUDA driver from NVIDIA and enable GPU acceleration in the Settings view.

If that doesn't work for some reason, try starting `Wingman AI Core.exe` manually and check the terminal or your logs directory for errors.
If you're a developer, you can also run from source. This way you can preview our latest changes on the `develop` branch and debug the code.
Wingman runs well on macOS. While we don't offer a precompiled package for it, you can run it from source. Note that the TTS provider XVASynth is Windows-only and therefore not supported on macOS.
Our default Wingmen serve as examples and starting points for your own Wingmen, and you can easily reconfigure them using the client. You can also add your own Wingmen very easily.
Our first two default Wingmen are using OpenAI's APIs. The basic process is as follows:
- Your speech is transcribed by the configured STT provider.
- The transcript is then sent as text to the configured LLM, which responds with text and maybe function calls.
- Wingman AI Core executes function calls which can be command executions or skill functions.
- The response is then read out to you by the configured TTS provider.
- Clients connected to Wingman AI Core are notified about progress and changes live and display them in the UI.
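The steps above can be sketched as a simple loop. The function names here are illustrative stand-ins, not Wingman AI Core's actual API, and the providers are stubbed out:

```python
# Minimal sketch of the STT -> LLM -> command/TTS loop described above.
# All names and return shapes are hypothetical stubs for illustration.

def transcribe(audio: bytes) -> str:  # STT provider
    return "It's raining! I can't see a thing!"

def chat(transcript: str) -> dict:  # LLM provider; may return function calls
    return {"text": "Wiping your visors now.", "commands": ["WipeVisors"]}

def execute(command: str) -> None:  # command or skill execution
    print(f"executing: {command}")

def speak(text: str) -> None:  # TTS provider
    print(f"speaking: {text}")

def handle_push_to_talk(audio: bytes) -> None:
    transcript = transcribe(audio)          # 1. speech -> text
    response = chat(transcript)             # 2. text -> LLM response
    for command in response["commands"]:    # 3. execute function calls
        execute(command)
    speak(response["text"])                 # 4. read the answer back

handle_push_to_talk(b"...")
```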
Talking to a Wingman is like chatting with ChatGPT but with your voice. And it can actually do everything that Python can do. This means that you can customize their behavior by giving them a backstory as a starting point for your conversation. You can also just tell them how to behave and they will remember that during your conversation.
The magic happens when you configure commands or key bindings. GPT will then try to match your request with the configured commands and execute them for you. It will automatically choose the best matching command based only on its name, so make sure you give it a good one (e.g. `Request landing permission`).
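A command definition might look roughly like this. This is a hypothetical sketch with illustrative field names; the easiest way to see the real schema is to save a command in the client and inspect the generated YAML:

```yaml
# Hypothetical command sketch - field and key names are assumptions.
commands:
  - name: RequestLandingPermission  # the LLM matches your intent against this name
    actions:
      - keyboard:
          hotkey: alt+n
```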
StarHead is where it gets really interesting. This Wingman is tailored to Star Citizen and uses the StarHead API to enrich your gaming experience with external data. It is a showcase of how to build specialized Wingmen for specific use-cases and scenarios. Simply ask StarHead for the best trade route, and it will prompt you for your ship, location, and budget. It will then call the StarHead API and read the result back to you.
Like all of our OpenAI Wingmen, it will remember the conversation history and you can ask follow-up questions. For example, you can ask what the starting point of the route is, or what the next stop is. You can also ask for the best trade route from a different location or with a different ship.
StarHead is a community project that aims to provide a platform for Star Citizen players to share their knowledge and experience. At the moment it is mainly focused on the trading aspect of Star Citizen. With a huge database of trade items, shop inventories and prices, it allows you to find the best trade routes and make the most profit. A large community of players is constantly working to keep the data up to date.
For updates and more information, visit the StarHead website or follow @KNEBEL on
Yes, you can! You can edit all the configs in your `%APP_DATA%\ShipBit\WingmanAI\[version]` directory.
The YAML configs are very indentation-sensitive, so please be careful.
There is no hot reloading, so you have to restart Wingman AI Core after you made manual changes to the configs.
Use these naming conventions to create different configurations for different games or scenarios:
- Any subdirectory in your config dir is a "configuration" or use case. Do not use special characters.
  - `_[name]` (underscore): marks the default configuration that is launched on start, e.g. `_Star Citizen`.
- Inside of a configuration directory, you can create different Wingmen by adding `[name].yaml` files. Do not use special characters.
  - `.[name].yaml` (dot): marks the Wingman as "hidden" and skips it in the UI and on start, e.g. `.Computer.yaml`.
  - `[name].png` (image): sets an avatar for the Wingman in the client, e.g. `StarHead.png`.
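Putting those conventions together, a config directory might look like this (the configuration and Wingman names are invented for illustration):

```
%APP_DATA%\ShipBit\WingmanAI\[version]\
├── _Star Citizen\         # default configuration (underscore prefix)
│   ├── ATC.yaml           # a visible Wingman
│   ├── .Computer.yaml     # hidden Wingman (dot prefix)
│   ├── StarHead.yaml      # another Wingman
│   └── StarHead.png       # avatar for the StarHead Wingman
└── Flight Simulator\      # another configuration
    └── Tower.yaml
```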
There are a couple of other files and directories in the config directory that you can use to configure Wingman AI.
- `defaults.yaml` - contains the default settings for all Wingmen. This is merged with the settings of the individual Wingmen at runtime. Specific Wingman settings always override the defaults. Once a Wingman is saved using the client, it contains all the settings it needs to run and will no longer fall back to the defaults.
- `settings.yaml` - contains user settings like the selected audio input and output devices.
- `secrets.yaml` - contains the API keys for different providers.

Access secrets in code using `secret_keeper.py`. You can access everything else with `config_manager.py`.
Wingman supports all languages that OpenAI (or your configured AI provider) supports. Setting this up in Wingman is really easy:
Some STT providers need a simple configuration to specify a non-English language. You might also have to find a voice that speaks the desired language.
Then find the `backstory` setting for the Wingman you want to change and add a simple sentence to the `backstory` prompt: _Always answer in the language I'm using to talk to you._ or something like _Always answer in Portuguese._
The cool thing is that you can now trigger commands in the language of your choice without changing/translating the `name` of the commands - the AI will do that for you.
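In YAML terms, such a backstory tweak could look like this. The surrounding structure is a hedged assumption; check your actual Wingman `[name].yaml` for the real layout - only the `backstory` setting itself is named in the text above:

```yaml
# Hypothetical excerpt from a Wingman's [name].yaml - surrounding keys are
# assumptions; the backstory setting is the part that matters.
prompts:
  backstory: |
    You are an in-game air traffic controller.
    Always answer in Portuguese.
```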
Are you ready to build your own Wingman or implement new features to the framework?
Please follow our guides to set up your dev environment:
If you want to read some code first and understand how it all works, we recommend you start here (in this order):
- `http://127.0.0.1:8000/docs` - the OpenAPI (ex: Swagger) spec
- `wingman_core.py` - most of the public API endpoints that Wingman AI exposes
- The config files in `%APP_DATA%\ShipBit\WingmanAI\[version]` to get an idea of what's configurable
- `Wingman.py` - the base class for all Wingmen
- `OpenAIWingman.py` - derived from Wingman, using all the providers
- `Tower.py` - the factory that creates Wingmen
If you're planning to develop a major feature or new integration, please contact us on Discord first and let us know what you're up to. We'll be happy to help you get started and make sure your work isn't wasted because we're already working on something similar.
Thank you so much for your support. We really appreciate it!
Wingman makes use of other Open Source projects internally (without modifying them in any way). We would like to thank their creators for their great work and contributions to the Open Source community.
- azure-cognitiveservices-speech - Proprietary license, Microsoft
- edge-tts - GPL-3.0
- elevenlabslib - MIT, © 2018 The Python Packaging Authority
- FastAPI - MIT, © 2018 Sebastián Ramírez
- numpy - BSD 3, © 2005-2023 NumPy Developers
- openai - Apache-2.0
- packaging - Apache/BSD, © Donald Stufft and individual contributors
- pedalboard - GPL-3.0, © 2021-2023 Spotify AB
- platformdirs - MIT, © 2010-202x platformdirs developers
- pydantic - MIT, © 2017 to present Pydantic Services Inc. and individual contributors
- pydirectinput-rgx - MIT, © 2022 [email protected], 2020 Ben Johnson
- pyinstaller - extended GPL 2.0, © 2010-2023 PyInstaller Development Team
- PyYAML - MIT, © 2017-2021 Ingy döt Net, 2006-2016 Kirill Simonov
- scipy - BSD 3, © 2001-2002 Enthought, Inc. 2003-2023, SciPy Developers
- sounddevice - MIT, © 2015-2023 Matthias Geier
- soundfile - BSD 3, © 2013 Bastian Bechtold
- uvicorn - BSD 3, © 2017-present, Encode OSS Ltd. All rights reserved.
This list will inevitably remain incomplete. If you miss your name here, please let us know in Discord or via Patreon.
- JayMatthew aka SawPsyder, @teddybear082, @Thaendril and @Xul for outstanding moderation in Discord, constant feedback and valuable Core & Skill contributions
- @lugia19 for developing and improving the amazing elevenlabslib.
- Knebel who helped us kickstart Wingman AI by showing it on stream and grants us access to the StarHead API for Star Citizen.
- @Zatecc from UEX Corp who supports our community developers and Wingmen with live trading data for Star Citizen using the UEX Corp API.
To our greatest Patreon supporters we say: o7
Commanders!
- The Announcer
- Weyland
- Morthius
- Grobi
- Paradox
- Gopalfreak aka Rockhound
- Averus