feedgen
Optimise Shopping feeds with Generative AI
Stars: 158
FeedGen is an open-source tool that uses Google Cloud's state-of-the-art Large Language Models (LLMs) to improve product titles, generate more comprehensive descriptions, and fill missing attributes in product feeds. It helps merchants and advertisers surface and fix quality issues in their feeds using Generative AI in a simple and configurable way. The tool relies on GCP's Vertex AI API to provide both zero-shot and few-shot inference capabilities on GCP's foundational LLMs. With few-shot prompting, users can customize the model's responses towards their own data, achieving higher quality and more consistent output. FeedGen is an Apps Script based application that runs as an HTML sidebar in Google Sheets, allowing users to optimize their feeds with ease.
README:
Disclaimer: This is not an official Google product.
FeedGen works best for up to 30k items. Looking to scale further? Onboard with Product Studio API alpha (reach out to [email protected]) or consider processing your feed in BigQuery.
Overview • Get started • What it solves • How it works • How to Contribute • Community Spotlight
- [July 2024]: Added guide to feed optimisation using BigQuery.
- [May 2024]: Added support for `gemini-1.5-pro` and `gemini-1.5-flash`
- [April 2024]
  - IMPORTANT: As of April 9, and as per the updated Merchant Center product data specification, please use `structured_title` and `structured_description` when importing FeedGen's output into Merchant Center instead of `title` and `description`, respectively. Refer to these instructions for details.
  - Added support for Gemini 1.5 Pro (preview): `gemini-1.5-pro-preview-0409`. Please note that the model name may (breakingly) change in the future.
- [March 2024]
  - Renamed Gemini models to `gemini-1.0-pro` and `gemini-1.0-pro-vision`
  - Added support for retrieving JSON web pages
- [January 2024]: Added support for fetching product web page information and using it for higher quality title and description generation
- [December 2023]
  - Added support for Gemini models (`gemini-pro` and `gemini-pro-vision`)
  - Unified description generation and validation - now handled by a single prompt
  - Added support for image understanding for higher quality title and description generation (only available with `gemini-pro-vision`)
  - Added LLM-generated titles, which should avoid duplicate values at the possible loss of some attribute information
- [November 2023]: Added description validation as a separate component
- [October 2023]: Made title and description generation optional
- [August 2023]: Added support for `text-bison-32k`
- [June 2023]: Moved Colab variant to `v1` and switched to JS/TS on `main`
FeedGen is an open-source tool that uses Google Cloud's state-of-the-art Large Language Models (LLMs) to improve product titles, generate more comprehensive descriptions, and fill missing attributes in product feeds. It helps merchants and advertisers surface and fix quality issues in their feeds using Generative AI in a simple and configurable way.
The tool relies on GCP's Vertex AI API to provide both zero-shot and few-shot inference capabilities on GCP's foundational LLMs. With few-shot prompting, you use the best 3-10 samples from your own Shopping feed to customise the model's responses towards your own data, thus achieving higher quality and more consistent output. This can be optimised further by fine-tuning the foundational models with your own proprietary data. Find out how to fine-tune models with Vertex AI, along with the benefits of doing so, in this guide.
Note: Please check if your target feed language is one of the Vertex AI supported languages before using FeedGen, and reach out to your Google Cloud or Account representatives if not.
To get started with FeedGen:
- Make a copy of the Google Sheets spreadsheet template
- Follow the instructions detailed in the Getting Started worksheet
Optimising Shopping feeds is a goal for every advertiser working with Google Merchant Center (MC), in order to improve query matching, increase coverage, and ultimately boost click-through rates (CTR). However, it is cumbersome to sift through product disapprovals in MC or to manually fix quality issues.
FeedGen tackles this using Generative AI - allowing users to surface and fix quality issues, and fill attribute gaps in their feeds, in an automated fashion.
FeedGen is an Apps Script based application that runs as an HTML sidebar (see HtmlService for details) in Google Sheets. The associated Google Sheets spreadsheet template is where all the magic happens; it holds the input feed that needs optimisation, along with specific configuration values that control how content is generated. The spreadsheet is also used for both (optional) human validation and setting up a supplemental feed in Google Merchant Center (MC).
Generative Language in Vertex AI, and in general, is a nascent feature / technology. We highly recommend manually reviewing and verifying the generated titles and descriptions. FeedGen helps users expedite this process by providing a score for both titles and descriptions (along with detailed components) that represents how "good" the generated content is, along with a Sheets-native way for bulk-approving generated content via data filters.
First, make a copy of the template spreadsheet and follow the instructions defined in the Getting Started section. The first step is to authenticate yourself to the Apps Script environment via the Initialise button as shown below.
Afterwards, navigate to the Config worksheet to configure feed settings, Vertex AI API settings (including an estimation of the costs that will be incurred), and settings to control the content generation.
Description generation works by taking the prompt prefix given in the Config sheet, appending a row of data from Input and sending the result as a prompt to the LLM. This gives you great flexibility in shaping the wording, style and other requirements you might have. All data from Input Feed will be provided as part of the prompt.
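For illustration, here is a minimal sketch of how such a prompt could be assembled from the configured prefix and a single feed row. The helper and column names below are hypothetical and not FeedGen's actual code:

```ts
// Hypothetical sketch: build a description prompt from the configured
// prompt prefix and one row of the Input Feed (headers + values).
function buildDescriptionPrompt(
  promptPrefix: string,
  headers: string[],
  row: string[]
): string {
  // Serialise the feed row as "Attribute: Value" lines so the LLM
  // sees every available attribute for the item.
  const itemContext = headers
    .map((header, i) => `${header}: ${row[i] ?? ''}`)
    .join('\n');
  return `${promptPrefix}\n\nProduct data:\n${itemContext}`;
}

// Example usage with made-up columns:
const prompt = buildDescriptionPrompt(
  'Write a compelling product description for the following item.',
  ['Title', 'Brand', 'Color'],
  ["2XU Men's Swim Compression Long Sleeve Top", '2XU', 'Black']
);
```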
If a web page link is provided in the input feed, you may also check the `Use Landing Page Information` checkbox to load and pass sanitised content of the product's web page into the prompt. All `span` and `p` tags are extracted from the fetched HTML content and concatenated together to form an additional paragraph of information that is passed to the LLM in the prompt, along with dedicated instructions on how to use this additional information. JSON web responses will be used as-is without additional parsing. Furthermore, fetched web page information is cached using Apps Script's CacheService for a period of 60 seconds in order to avoid refetching and reparsing the content for the generation of titles (which is a separate call to the Vertex AI API).
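As a rough sketch of that flow (not FeedGen's actual implementation), the fetching, naive tag extraction and 60-second caching could look roughly like this in Apps Script, using the `UrlFetchApp` and `CacheService` services:

```ts
// Sketch only: fetch a product page, keep the text of <span> and <p> tags,
// and cache the result for 60 seconds so title generation can reuse it.
function fetchLandingPageText(url: string): string {
  const cache = CacheService.getScriptCache();
  const cached = cache.get(url);
  if (cached !== null) {
    return cached;
  }
  const response = UrlFetchApp.fetch(url, {muteHttpExceptions: true});
  const body = response.getContentText();
  let text: string;
  if (body.trim().startsWith('{') || body.trim().startsWith('[')) {
    // JSON web responses are used as-is without additional parsing.
    text = body;
  } else {
    // Very naive tag extraction, for illustration purposes only.
    const matches = body.match(/<(span|p)[^>]*>(.*?)<\/\1>/gis) ?? [];
    text = matches
      .map(tag => tag.replace(/<[^>]+>/g, ' ').trim())
      .join(' ');
  }
  cache.put(url, text, 60); // 60-second cache, as described above.
  return text;
}
```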
Optional: You can also provide examples of descriptions in the Few-shot examples section (see below). Those will be appended to the prompt prefix as well and show the model what good descriptions look like.
The result is directly output as the Generated Description.
Since LLMs have a tendency to hallucinate, there is an option to ask the model (in follow-up instructions within the same prompt) if the generated description meets your criteria. The model evaluates the description it just generated and responds with a numerical score as well as reasoning. Example validation criteria and scoring are provided to give some hints on how to instruct the model to evaluate descriptions - e.g. it includes criteria as well as example score values.
Titles use few-shot prompting, a technique where you select samples from your own input feed, as shown below, to customise the model's responses towards your data. To help with this process, FeedGen provides a utility Google Sheets formula:
=FEEDGEN_CREATE_CONTEXT_JSON('Input Feed'!A2)
This formula can be used to fill the "Context" information field in the few-shot prompt examples table by dragging it down, just as with other Sheets formulas. The "Context" represents the entire row of data from the input feed for this item, and it will be sent as part of the prompt to the Vertex AI API.
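To illustrate what that "Context" JSON looks like, here is a hypothetical, simplified custom function along the same lines. Note that FeedGen's real formula only needs a reference to the item's cell; this sketch takes the header row and item row explicitly:

```ts
// Hypothetical sketch of a Sheets custom function that turns one feed row
// into a JSON object keyed by the Input Feed column headers.
// Example usage in a cell:
//   =FEEDGEN_CREATE_CONTEXT_JSON_SKETCH('Input Feed'!A1:Z1, 'Input Feed'!A2:Z2)
function FEEDGEN_CREATE_CONTEXT_JSON_SKETCH(
  headers: string[][],
  row: string[][]
): string {
  const context: Record<string, string> = {};
  headers[0].forEach((header, i) => {
    context[String(header)] = String(row[0][i] ?? '');
  });
  return JSON.stringify(context);
}
```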
Afterwards, you must manually fill in the remaining columns of the few-shot prompt examples table, which define the expected output by the LLM. These examples are very important as they provide the basis upon which the LLM will learn how it should generate content for the rest of the input feed. The best examples to pick are products where:
- You can identify attributes in the existing title that match column names from your input feed.
- You can propose a new title that is better than the existing one (e.g. contains more attributes).
- The proposed title has a structure that can be reused for other products within your feed.
We would recommend adding at least one example per unique category within your feed, especially if the ideal title composition would differ.
FeedGen defaults to using attributes from the input feed instead of generated attribute values for composing the title, to avoid LLM hallucinations and ensure consistency. For example, the value `Blue` from the input feed attribute `Color` for a specific feed item will be used for its corresponding title instead of, say, a generated value `Navy`. This behaviour can be overridden with the `Prefer Generated Values` checkbox in the Advanced Settings section of the Title Prompt Settings, and is useful whenever the input feed itself contains erroneous or poor quality data.
Within this same section you can also specify a list of safe words that can be output in generated titles even if they did not exist beforehand in your feed. For example, you can add the word "Size" to this list if you would like to prefix all values of the `Size` attribute with it (i.e. "Size M" instead of "M").
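A minimal sketch of how these two settings might interact when picking the value used for one attribute in a title follows. The function and parameter names are illustrative, not FeedGen's actual code:

```ts
// Sketch: choose the title value for one attribute, honouring the
// "Prefer Generated Values" setting and the list of safe words.
function pickTitleValue(
  attributeName: string,
  inputValue: string,
  generatedValue: string,
  preferGeneratedValues: boolean,
  safeWords: string[]
): string {
  // Default: trust the input feed (e.g. "Blue"), not the generated
  // value (e.g. "Navy"), unless the checkbox overrides this.
  let value =
    preferGeneratedValues || !inputValue ? generatedValue : inputValue;
  // Safe words may be prepended even though they never appeared in the
  // feed, e.g. "Size" + "M" -> "Size M".
  if (safeWords.includes(attributeName) && !value.startsWith(attributeName)) {
    value = `${attributeName} ${value}`;
  }
  return value;
}
```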
Finally, you can also specify whether you would like the LLM to generate titles for you using the `Use LLM-generated Titles` checkbox. This allows the LLM to inspect the generated attribute values and select which ones to concatenate together - avoiding duplicates - instead of the default logic where all attribute values are stitched together. This feature should work better with Gemini models than PaLM 2, as Gemini models have better reasoning capabilities that allow them to stick to prompt instructions more closely than PaLM 2 models. Furthermore, LLM-generated titles allow you to specify the desired length for titles in the prompt (max 150 characters for Merchant Center), which was not possible previously.
Like descriptions, you may also choose to load information from the provided web page link and pass it to the LLM for the generation of higher quality titles. This can be done via the `Use Landing Page Information` checkbox; when checked, all features extracted from the web page data will be listed under a new attribute called Website Features. New words that were not covered by existing attributes will then be added to the generated title.
Now you are done with the configuration and ready to optimise your feed. Use the top navigation menu to launch the FeedGen sidebar and start generating and validating content in the Generated Content Validation worksheet.
You would typically work within this view to understand, approve and/or regenerate content for each feed item as follows:
- The Generate button controls the generation process, and will first
regenerate all columns with an empty or failed status before continuing on
with the rest of the feed. For example, clearing row 7 above and clicking
Generate will start the generation process at row 7 first, before continuing
on with the next feed item.
- This means that you may clear the value of the Status column for any feed item in order to regenerate it.
- To start from scratch, click Clear Generated Data first before Generate.
- If an error occurs, it will be reflected in the Status column as "Failed".
- Approval can be done in bulk via filtering the view and using the Approve Filtered button, or individually using the checkboxes in the Approval column. All entries with a score above 0 will already be pre-approved (read more about FeedGen's Scoring / Evaluation system below).
- Additional columns for titles and descriptions are grouped, so that you may expand the group you are interested in examining.
Once you have completed all the necessary approvals and are satisfied with the output, click Export to Output Feed to transfer all approved feed items to the Output Feed worksheet.
The last step is to connect the spreadsheet to MC as a supplemental feed; this can be done as described in this Help Center article for standard MC accounts, and in this Help Center article for multi-client accounts (MCA).
Notice that there is an `att-p-feedgen` column in the output feed. This column name is completely flexible and can be changed directly in the output worksheet. It adds a custom attribute to the supplemental feed for reporting and performance measurement purposes.
As Gemini (`gemini-pro-vision`) is a multimodal model, we are able to additionally examine product images and use them to generate higher quality titles and descriptions. This is done by adding additional instructions to the existing title and description generation prompts for extracting visible product features from the provided image.
For titles, extracted features are used in 2 ways:
- As a quality check for existing feed attributes. For example, if the given product feed references a white color, yet the provided image shows a black product, the title and feed attribute are going to be adjusted accordingly.
- To enhance the generated title via a new attribute called Image Features. This attribute lists all visible features the model was able to extract from the provided image. All new words that were not covered by existing attributes will then be added to the generated title.
For descriptions, extracted features are used by the model to generate a more comprehensive description that highlights the visual aspects of the product. This is particularly relevant for domains where visual appeal is paramount; where the product's key details are visually conveyed rather than in a structured text format within the feed. This includes fashion, home decor and furniture, and perfumery and jewelry to name a few.
Finally, it is important to note the following restrictions (this information is valid during the Public Preview of Gemini):
- You can specify either web images and/or Google Cloud Storage (GCS) file URIs in the `Image Link` column of the Input Feed worksheet. GCS URIs are passed as-is to Gemini (as they are supported by the model itself), while web images are first downloaded and provided inline as part of the input to the model (see the sketch after this list).
- Regardless of the source, only `image/png` and `image/jpeg` MIME types are supported.
- GCS URIs must also point to a bucket that is within the same Google Cloud project that is sending the request (otherwise, it will be discarded by Gemini).
- Pricing is affected as well - an additional charge will be incurred per image. This has already been taken into account in FeedGen's price estimator.
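A rough sketch of how an image reference might be prepared under these rules is shown below. The request field names follow the public Gemini API format and are an assumption here, not FeedGen's exact payload:

```ts
// Sketch: turn an Image Link value into a Gemini request "part".
// GCS URIs are referenced directly; web images are downloaded and inlined.
function toImagePart(imageLink: string): object {
  const mimeType = imageLink.toLowerCase().endsWith('.png')
    ? 'image/png'
    : 'image/jpeg'; // Only image/png and image/jpeg are supported.
  if (imageLink.startsWith('gs://')) {
    // Must live in the same Google Cloud project that sends the request.
    return {fileData: {mimeType: mimeType, fileUri: imageLink}};
  }
  // Web image: download and provide inline as base64.
  const response = UrlFetchApp.fetch(imageLink);
  const base64 = Utilities.base64Encode(response.getContent());
  return {inlineData: {mimeType: mimeType, data: base64}};
}
```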
Descriptions with a score below `Min. Evaluation Approval Score` will not be pre-approved. You can re-generate those by filtering on Description Score and removing the Status value in the Generation Validation tab.
FeedGen provides a score for generated titles between -1 and 1 that acts as a quality indicator. Positive scores indicate varying degrees of good quality, while negative scores represent uncertainty over the generated content. Like descriptions, you may specify a minimum score (defaults to 0) that you would like FeedGen to pre-approve.
Let's take a closer look with some fictitious examples to better understand the scoring for titles:
- Original Title: 2XU Men's Swim Compression Long Sleeve Top
- Original Description: 2XU Men's Swim Compression Long Sleeve Top, lightweight, breathable PWX fabric, UPF 50+ protects you from the sun.
- Generated Title: 2XU Men's Swim Compression Long Sleeve Top Black Size M PWX Fabric UV Protection
- Score: -1
- Reasoning: New words, namely "UV Protection", were added that may have been hallucinated by the language model. Indeed, "UV Protection" is not explicitly mentioned anywhere in the input feed. Examining the feed item more closely, however, surfaces that the description contains the value UPF 50+, so the addition of UV Protection is actually a positive thing; but since we have no way of assessing this (without applying a more granular semantic text analysis) we default to penalising the score.
Let's look at another example for the same product:
- Original Title: 2XU Men's Swim Compression Long Sleeve Top
- Generated Title: 2XU Men's Swim Compression Top Size M
- Score: -0.5
- Reasoning: Key information was removed from the title, namely "Long Sleeve", which describes the type of the product. FeedGen identifies this information by first examining the structure of titles via what we refer to as templates in our ubiquitous language, before diving deeper and comparing the individual words that compose the titles. Let's check the templates for our example:
  - Original Title Template: `<Brand> <Gender> <Category> <Product Type>`
  - Generated Title Template: `<Brand> <Gender> <Category> <Product Type> <Size>`
  - As you can see, no attributes were actually removed, but rather the components of the `Product Type` attribute changed in a worse way, hence the negative score.
FeedGen is conservative in its scoring; it will assign a score of -0.5 whenever any words get removed, even if those words were promotional phrases such as "get yours now" or "while stocks last", which should not be part of titles as per the Best Practices outlined by Merchant Center (MC).
Alright, so what makes a good title? Let's look at another example:
- Original Title: 2XU Men's Swim Compression Long Sleeve Top
- Generated Title: 2XU Men's Swim Compression Long Sleeve Top Size M
- Score: 0.5
- Reasoning: Nothing was changed or lost in the original title, and we added a new attribute, "Size". If the product was being offered in different sizes, this new title would now be vital in preventing all feed items for the product from getting rejected by MC (due to their titles being duplicates).
Finally, what's the ideal case? Let's take a look at one last example:
- Original Title: 2XU Men's Swim Compression Long Sleeve Top
- Input - Color: missing
- Generated Title: 2XU Men's Swim Compression Long Sleeve Top Black Size M
- Output - Color: Black
- Score: 1
- Reasoning: This is the best possible scenario; we optimised the title and filled feed attribute gaps along the way, a score of 1 is definitely well-deserved.
So, in summary, the scoring system works as follows (a sketch of this logic follows the table):
Are there hallucinations? | Have we removed any words? | No change at all? | Have we optimised the title? | Did we fill in missing gaps or extract new attributes? |
---|---|---|---|---|
-1 | -0.5 | 0 | Add 0.5 | Add 0.5 |
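Expressed as code, a minimal sketch of the table's logic might look like the following. The boolean flags are illustrative inputs; the real score components are computed by FeedGen's title analysis:

```ts
// Sketch of the title scoring table above.
function scoreTitle(checks: {
  hasHallucinations: boolean;
  removedWords: boolean;
  optimisedTitle: boolean;
  filledGapsOrNewAttributes: boolean;
}): number {
  if (checks.hasHallucinations) return -1;
  if (checks.removedWords) return -0.5;
  let score = 0; // No change at all.
  if (checks.optimisedTitle) score += 0.5;
  if (checks.filledGapsOrNewAttributes) score += 0.5;
  return score;
}

// Example: an optimised title that also filled a missing Color attribute
// scores 1, matching the "ideal case" example above.
const idealScore = scoreTitle({
  hasHallucinations: false,
  removedWords: false,
  optimisedTitle: true,
  filledGapsOrNewAttributes: true,
}); // 1
```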
FeedGen also applies some basic MC compliance checks, such as ensuring that titles and descriptions do not exceed 150 and 5000 characters, respectively. If the generated content fails these checks, the value `Failed compliance checks` will be output in the Status column. As mentioned above, FeedGen will attempt to regenerate Failed items first whenever the Generate button is clicked.
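The length checks mentioned above are simple to express; a sketch with a hypothetical helper name:

```ts
// Sketch: basic Merchant Center length compliance checks.
const MAX_TITLE_LENGTH = 150;
const MAX_DESCRIPTION_LENGTH = 5000;

function passesComplianceChecks(title: string, description: string): boolean {
  return (
    title.length <= MAX_TITLE_LENGTH &&
    description.length <= MAX_DESCRIPTION_LENGTH
  );
}
// Items failing this check get "Failed compliance checks" in the Status
// column and are picked up first on the next Generate run.
```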
FeedGen does not just fill gaps in your feed, but might also create completely new attributes that were not provided in the Input Feed. This is controllable via the few-shot prompting examples in the Config sheet; by providing "new" attributes that do not exist in the input within those examples, FeedGen will attempt to extract values for those new attributes from other values in the input feed. Let's take a look at an example:
Original Title | Product Attributes in Original Title | Product Attributes in Generated Title | Generated Attribute Values |
---|---|---|---|
ASICS Women's Performance Running Capri Tight | Brand, Gender, Product Type | Brand, Gender, Product Type, Fit | ASICS, Women's Performance, Running Capri, Tight |
Notice here how the Fit attribute was extracted out of Product Type. FeedGen would now attempt to do the same for all other products in the feed, so for example it will extract the value `Relaxed` as Fit from the title "Agave Men's Jeans Waterman Relaxed". If you do not want those attributes to be created, make sure you only use attributes that exist in the input feed for your few-shot prompting examples. Furthermore, those completely new feed attributes will be prefixed with `feedgen-` in the Output Feed (e.g. `feedgen-Fit`) and will be sorted to the end of the sheet to make it easier for you to locate and delete should you not want to use them.
We recommend the following patterns for titles according to your business domain:
Domain | Recommended title structure | Example |
---|---|---|
Apparel | Brand + Gender + Product Type + Attributes (Color, Size, Material) | Ann Taylor Women’s Sweater, Black (Size 6) |
Consumable | Brand + Product Type + Attributes (Weight, Count) | TwinLab Mega CoQ10, 50 mg, 60 caps |
Hard Goods | Brand + Product + Attributes (Size, Weight, Quantity) | Frontgate Wicker Patio Chair Set, Brown, 4-Piece |
Electronics | Brand + Attribute + Product Type | Samsung 88” Smart LED TV with 4K 3D Curved Screen |
Books | Title + Type + Format (Hardcover, eBook) + Author | 1,000 Italian Recipe Cookbook, Hardcover by Michele Scicolone |
You can rely on these patterns to generate the few-shot prompting examples defined in the FeedGen `Config` worksheet, which will accordingly steer the values generated by the model.
We also suggest the following:
- Provide as many product attributes as possible for enriching description generation.
- Use size, color, and gender / age group for title generation, if available.
- Do NOT use model numbers, prices or promotional text in titles.
Please refer to the Vertex AI Pricing and Quotas and Limits guides for more information.
As of April 9, 2024 and as per the updated Merchant Center product data specification, users need to disclose whether generative AI was used to curate the text content for titles and descriptions.
The main challenge with this is that users cannot send both `title` and `structured_title`, or `description` and `structured_description`, in the same feed, as the original column values will always trump the `structured_` variants.
Therefore, users need to perform an additional series of steps after exporting the approved generations into FeedGen's Output Feed tab:
1. Connect the FeedGen spreadsheet to Merchant Center as a supplemental feed.
2. Create feed rules to clear the title and description values along with the FeedGen supplemental feed. You have to create 2 distinct feed rules as shown below.
3. Rename the `title` and `description` columns in the Output Feed tab of FeedGen to `structured_title` and `structured_description`, respectively.
4. Add the prefix `trained_algorithmic_media:` to all generated content (see the sketch after this list).
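Until steps #3 and #4 are automated, here is a sketch of the transformation they describe, using a hypothetical helper that operates on exported rows represented as plain objects:

```ts
// Sketch: rename title/description to their structured_ variants and add
// the required "trained_algorithmic_media:" prefix to generated content.
function toStructuredColumns(
  row: Record<string, string>
): Record<string, string> {
  const out: Record<string, string> = {...row};
  if (out['title'] !== undefined) {
    out['structured_title'] = `trained_algorithmic_media:${out['title']}`;
    delete out['title'];
  }
  if (out['description'] !== undefined) {
    out['structured_description'] =
      `trained_algorithmic_media:${out['description']}`;
    delete out['description'];
  }
  return out;
}
```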
Refer to the detailed structured_title and structured_description attribute specs for more information.
We will be automating Steps #3 and #4 for you soon - stay tuned!
Credits to Glen Wilson and the team at Solutions-8 for the details and images.
Beyond the information outlined in our Contributing Guide, you would need to follow these additional steps to build FeedGen locally:
- Make sure your system has an up-to-date installation of Node.js and npm.
- Navigate to the directory where the FeedGen source code lives.
- Run `npm install`.
- Run `npx @google/aside init` and click through the prompts.
  - Input the Apps Script `Script ID` associated with your target Google Sheets spreadsheet. You can find out this value by clicking on `Extensions > Apps Script` in the top navigation menu of your target sheet, then navigating to `Project Settings` (the gear icon) in the resulting Apps Script view.
- Run `npm run deploy` to build, test and deploy (via clasp) all code to the target spreadsheet / Apps Script project.
- Unlocking the Power of AI for Google Shopping Feed Optimization by Krisztián Korpa.
- AI-driven success: how to leverage the potential of Google FeedGen in PPC campaigns by Alex van de Pol.
- (German) Generative KI: Home24 verbessert mit FeedGen Reichweite und Performance von Shopping Ads - Think with Google.
Similar Open Source Tools
aici
The Artificial Intelligence Controller Interface (AICI) lets you build Controllers that constrain and direct output of a Large Language Model (LLM) in real time. Controllers are flexible programs capable of implementing constrained decoding, dynamic editing of prompts and generated text, and coordinating execution across multiple, parallel generations. Controllers incorporate custom logic during the token-by-token decoding and maintain state during an LLM request. This allows diverse Controller strategies, from programmatic or query-based decoding to multi-agent conversations to execute efficiently in tight integration with the LLM itself.
airbroke
Airbroke is an open-source error catcher tool designed for modern web applications. It provides a PostgreSQL-based backend with an Airbrake-compatible HTTP collector endpoint and a React-based frontend for error management. The tool focuses on simplicity, maintaining a small database footprint even under heavy data ingestion. Users can ask AI about issues, replay HTTP exceptions, and save/manage bookmarks for important occurrences. Airbroke supports multiple OAuth providers for secure user authentication and offers occurrence charts for better insights into error occurrences. The tool can be deployed in various ways, including building from source, using Docker images, deploying on Vercel, Render.com, Kubernetes with Helm, or Docker Compose. It requires Node.js, PostgreSQL, and specific system resources for deployment.
EdgeChains
EdgeChains is an open-source chain-of-thought engineering framework tailored for Large Language Models (LLMs)- like OpenAI GPT, LLama2, Falcon, etc. - With a focus on enterprise-grade deployability and scalability. EdgeChains is specifically designed to **orchestrate** such applications. At EdgeChains, we take a unique approach to Generative AI - we think Generative AI is a deployment and configuration management challenge rather than a UI and library design pattern challenge. We build on top of a tech that has solved this problem in a different domain - Kubernetes Config Management - and bring that to Generative AI. Edgechains is built on top of jsonnet, originally built by Google based on their experience managing a vast amount of configuration code in the Borg infrastructure.
alignment-handbook
The Alignment Handbook provides robust training recipes for continuing pretraining and aligning language models with human and AI preferences. It includes techniques such as continued pretraining, supervised fine-tuning, reward modeling, rejection sampling, and direct preference optimization (DPO). The handbook aims to fill the gap in public resources on training these models, collecting data, and measuring metrics for optimal downstream performance.
generative-ai-application-builder-on-aws
The Generative AI Application Builder on AWS (GAAB) is a solution that provides a web-based management dashboard for deploying customizable Generative AI (Gen AI) use cases. Users can experiment with and compare different combinations of Large Language Model (LLM) use cases, configure and optimize their use cases, and integrate them into their applications for production. The solution is targeted at novice to experienced users who want to experiment and productionize different Gen AI use cases. It uses LangChain open-source software to configure connections to Large Language Models (LLMs) for various use cases, with the ability to deploy chat use cases that allow querying over users' enterprise data in a chatbot-style User Interface (UI) and support custom end-user implementations through an API.
aigt
AIGT is a repository containing scripts for deep learning in guided medical interventions, focusing on ultrasound imaging. It provides a complete workflow from formatting and annotations to real-time model deployment. Users can set up an Anaconda environment, run Slicer notebooks, acquire tracked ultrasound data, and process exported data for training. The repository includes tools for segmentation, image export, and annotation creation.
airflow
Apache Airflow (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
ai-goat
AI Goat is a tool designed to help users learn about AI security through a series of vulnerable LLM CTF challenges. It allows users to run everything locally on their system without the need for sign-ups or cloud fees. The tool focuses on exploring security risks associated with large language models (LLMs) like ChatGPT, providing practical experience for security researchers to understand vulnerabilities and exploitation techniques. AI Goat uses the Vicuna LLM, derived from Meta's LLaMA and ChatGPT's response data, to create challenges that involve prompt injections, insecure output handling, and other LLM security threats. The tool also includes a prebuilt Docker image, ai-base, containing all necessary libraries to run the LLM and challenges, along with an optional CTFd container for challenge management and flag submission.
AntSK
AntSK is an AI knowledge base/agent built with .Net8+Blazor+SemanticKernel. It features a semantic kernel for accurate natural language processing, a memory kernel for continuous learning and knowledge storage, a knowledge base for importing and querying knowledge from various document formats, a text-to-image generator integrated with StableDiffusion, GPTs generation for creating personalized GPT models, API interfaces for integrating AntSK into other applications, an open API plugin system for extending functionality, a .Net plugin system for integrating business functions, real-time information retrieval from the internet, model management for adapting and managing different models from different vendors, support for domestic models and databases for operation in a trusted environment, and planned model fine-tuning based on llamafactory.
bugbug
Bugbug is a tool developed by Mozilla that leverages machine learning techniques to assist with bug and quality management, as well as other software engineering tasks like test selection and defect prediction. It provides various classifiers to suggest assignees, detect patches likely to be backed-out, classify bugs, assign product/components, distinguish between bugs and feature requests, detect bugs needing documentation, identify invalid issues, verify bugs needing QA, detect regressions, select relevant tests, track bugs, and more. Bugbug can be trained and tested using Python scripts, and it offers the ability to run model training tasks on Taskcluster. The project structure includes modules for data mining, bug/commit feature extraction, model implementations, NLP utilities, label handling, bug history playback, and GitHub issue retrieval.
chronon
Chronon is a platform that simplifies and improves ML workflows by providing a central place to define features, ensuring point-in-time correctness for backfills, simplifying orchestration for batch and streaming pipelines, offering easy endpoints for feature fetching, and guaranteeing and measuring consistency. It offers benefits over other approaches by enabling the use of a broad set of data for training, handling large aggregations and other computationally intensive transformations, and abstracting away the infrastructure complexity of data plumbing.
serverless-pdf-chat
The serverless-pdf-chat repository contains a sample application that allows users to ask natural language questions of any PDF document they upload. It leverages serverless services like Amazon Bedrock, AWS Lambda, and Amazon DynamoDB to provide text generation and analysis capabilities. The application architecture involves uploading a PDF document to an S3 bucket, extracting metadata, converting text to vectors, and using a LangChain to search for information related to user prompts. The application is not intended for production use and serves as a demonstration and educational tool.
STMP
SillyTavern MultiPlayer (STMP) is an LLM chat interface that enables multiple users to chat with an AI. It features a sidebar chat for users, tools for the Host to manage the AI's behavior and moderate users. Users can change display names, chat in different windows, and the Host can control AI settings. STMP supports Text Completions, Chat Completions, and HordeAI. Users can add/edit APIs, manage past chats, view user lists, and control delays. Hosts have access to various controls, including AI configuration, adding presets, and managing characters. Planned features include smarter retry logic, host controls enhancements, and quality of life improvements like user list fading and highlighting exact usernames in AI responses.
project_alice
Alice is an agentic workflow framework that integrates task execution and intelligent chat capabilities. It provides a flexible environment for creating, managing, and deploying AI agents for various purposes, leveraging a microservices architecture with MongoDB for data persistence. The framework consists of components like APIs, agents, tasks, and chats that interact to produce outputs through files, messages, task results, and URL references. Users can create, test, and deploy agentic solutions in a human-language framework, making it easy to engage with by both users and agents. The tool offers an open-source option, user management, flexible model deployment, and programmatic access to tasks and chats.
nerve
Nerve is a tool that allows creating stateful agents with any LLM of your choice without writing code. It provides a framework of functionalities for planning, saving, or recalling memories by dynamically adapting the prompt. Nerve is experimental and subject to changes. It is valuable for learning and experimenting but not recommended for production environments. The tool aims to instrument smart agents without code, inspired by projects like Dreadnode's Rigging framework.
For similar jobs
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
daily-poetry-image
Daily Chinese ancient poetry and AI-generated images powered by Bing DALL-E-3. GitHub Action triggers the process automatically. Poetry is provided by Today's Poem API. The website is built with Astro.
exif-photo-blog
EXIF Photo Blog is a full-stack photo blog application built with Next.js, Vercel, and Postgres. It features built-in authentication, photo upload with EXIF extraction, photo organization by tag, infinite scroll, light/dark mode, automatic OG image generation, a CMD-K menu with photo search, experimental support for AI-generated descriptions, and support for Fujifilm simulations. The application is easy to deploy to Vercel with just a few clicks and can be customized with a variety of environment variables.
SillyTavern
SillyTavern is a user interface you can install on your computer (and Android phones) that allows you to interact with text generation AIs and chat/roleplay with characters you or the community create. SillyTavern is a fork of TavernAI 1.2.8 which is under more active development and has added many major features. At this point, they can be thought of as completely independent programs.
Twitter-Insight-LLM
This project enables you to fetch liked tweets from Twitter (using Selenium), save it to JSON and Excel files, and perform initial data analysis and image captions. This is part of the initial steps for a larger personal project involving Large Language Models (LLMs).
AISuperDomain
Aila Desktop Application is a powerful tool that integrates multiple leading AI models into a single desktop application. It allows users to interact with various AI models simultaneously, providing diverse responses and insights to their inquiries. With its user-friendly interface and customizable features, Aila empowers users to engage with AI seamlessly and efficiently. Whether you're a researcher, student, or professional, Aila can enhance your AI interactions and streamline your workflow.
ChatGPT-On-CS
This project is an intelligent dialogue customer service tool based on a large model, which supports access to platforms such as WeChat, Qianniu, Bilibili, Douyin Enterprise, Douyin, Doudian, Weibo chat, Xiaohongshu professional account operation, Xiaohongshu, Zhihu, etc. You can choose GPT3.5/GPT4.0/ Lazy Treasure Box (more platforms will be supported in the future), which can process text, voice and pictures, and access external resources such as operating systems and the Internet through plug-ins, and support enterprise AI applications customized based on their own knowledge base.
obs-localvocal
LocalVocal is a live-streaming AI assistant plugin for OBS that allows you to transcribe audio speech into text and perform various language processing functions on the text using AI / LLMs (Large Language Models). It's privacy-first, with all data staying on your machine, and requires no GPU, cloud costs, network, or downtime.