Best AI tools for Bakers
20 - AI Tool Sites
Lazy AI
Lazy AI is an AI application that enables users to unlock their software creativity by building and modifying web apps with prompts and deploying them to the cloud with just one click. Users can create various applications such as a sample website for a bakery business, an API endpoint for AI text summarization, a metrics dashboard, a simple snake game, and a chatbot. Lazy AI also offers template collections for Discord bots, dev tools, productivity tools, finance tools, and marketing tools.
Inpulse.ai
Inpulse.ai is an AI platform that revolutionizes inventory management and supplier ordering for restaurant chains. It assists managers in making informed decisions by accurately forecasting sales, anticipating production needs, and optimizing food supplies. The platform provides real-time performance monitoring, automated production planning, and centralized data management to help restaurants improve their margins and reduce waste. Inpulse.ai is used by over 3,000 restaurants, food kiosks, and bakeries on a daily basis, offering a comprehensive solution to streamline operations and boost profitability.
RBG AI Drop
RBG AI Drop is an AI tool that allows users to interact with a virtual version of Justice Ruth Bader Ginsburg. Users can ask her any YES/NO question and receive a response. The tool is designed as an experiment to engage users in a unique and interactive experience. By signing up, users can be the first to receive future AI drops and continue engaging with RBG's virtual presence.
DataSnack
DataSnack is a real-time, AI-driven due diligence platform that helps you make better decisions faster. It gives you access to a wealth of data and insights on companies, industries, and markets, all in one place. The AI-powered platform analyzes data from a variety of sources, including news, social media, and financial filings, to provide the most up-to-date and relevant information.
Inven
Inven is an AI-powered company data platform that helps professionals in private equity, investment banking, business brokerage, consulting, and corporate development find companies faster and more efficiently. With Inven, users can access a database of over 23 million companies and 430 million contacts in over 160 countries. Inven's AI algorithms and NLP solutions analyze millions of data points from a wide range of sources to give users actionable insights on any niche.
Roic AI
Roic AI is an AI tool designed to provide users with essential financial data for analyzing companies. It offers comprehensive company summaries, 30+ years of financial statements, and earnings call transcripts in a single location. Users can access crucial information about popular companies like Apple Inc. and Microsoft Corporation through this platform.
CityFALCON
CityFALCON is a financial and business due diligence platform serving a wide audience: retail investors and traders, daily business news readers, brokers, students, professors and academia, wealth managers, financial advisors, P2P crowdfunding, VC, PE, institutional investors, treasury, consultancy, legal, accounting, central banks, and regulatory agencies. The platform offers features and content including a CityFALCON Score, watchlists, similar stories, news grouped on charts, key headlines, sentiment, content translation, premium news publications, insider transactions, official company filings, investor relations, ESG content, and multiple languages.
StockGPT
StockGPT is an AI-powered financial research assistant that provides knowledge of earnings releases, financial reports, and fundamental information for S&P 500 and Nasdaq companies. It offers features like AI search, customizable filters, up-to-date data, industry research, and more to help users analyze companies and markets effectively.
ArkiFi
ArkiFi is an AI-powered finance workflow automation tool that leverages generative AI to produce deterministic outputs, ensuring trustworthy results without 'hallucination'. It lets finance professionals focus on strategic thinking by automating grind work, formatting, and debugging. The platform aims to take over manual labor in advanced finance, enabling users to make faster and more accurate decisions across platforms. By offering a digital financial analyst experience, ArkiFi aims to redefine the finance industry and help users generate alpha and innovate.
Eilla AI
Eilla AI is an AI platform designed to power the M&A, VC, and PE deal workflow by mirroring the expertise of industry professionals to automate repetitive tasks and assist in making complex decisions. The platform aims to streamline the deal process and provide early access to users, offering features such as automation, decision support, and access to executive and engineering teams for selected companies.
Quill
Quill is an AI-powered SEC filing platform that leverages financially-tuned AI to extract key information from company filings. It provides historical financial data, real-time SEC filings, earnings call transcripts, and more. Quill offers state-of-the-art sentence-level source citations, preventing misinformation. Users can ask questions about companies, extract numerical data, and receive alerts on filings. The platform automates earnings call analysis and can transform PDFs into spreadsheets. Quill is designed for analysts and professionals seeking accurate and up-to-date financial information.
Onnix AI
Onnix AI is a personalized AI co-pilot designed specifically for banking professionals. It saves teams time by providing accurate answers and deliverables quickly, leveraging AI and powerful data science tools. Onnix helps in creating slide decks 10x faster, running Excel analysis, and querying data sources instantly. It is built by bankers for bankers, offering a no-code platform for generating deeper insights from data. Onnix streamlines the process of creating presentations, analyzing data, and accessing information from data providers, making it an essential tool for banking teams.
AlphaSense
AlphaSense is a market intelligence and search platform that provides access to a comprehensive universe of content, including company filings, broker research, expert calls, regulatory documents, press releases, and internal content. It utilizes AI and NLP technology to surface relevant insights, monitor market trends, and collaborate on research. AlphaSense is trusted by thousands of organizations, including 85% of the S&P 100, 80% of the top asset management firms, and 80% of the top consultancies.
Hebbia
Hebbia is an AI-powered tool that helps users collaborate with LLMs more confidently and efficiently. It allows users to ask questions about all their documents, up to millions at a time, and provides important answers that are not limited to the top few results. Hebbia is designed to execute workflows with hundreds of steps over any amount of sources, turning prompts into processes. It is a trustworthy AI system that shows its work at each step, allowing users to verify, trust, and collaborate with AI. Hebbia is used by the largest enterprises, financial institutions, governments, and law firms in the world.
HouseCanary
HouseCanary is a leading AI-powered data and analytics platform for residential real estate. With a full suite of industry-leading products and tools, HouseCanary provides real estate investors, mortgage lenders, investment banks, whole loan buyers, and prop techs with the most comprehensive and accurate residential real estate data and analytics in the industry. HouseCanary's AI algorithms analyze a vast array of real estate data to generate meaningful insights to help teams be more efficient, ultimately saving time and money.
Dili
Dili is an AI Diligence Platform that automates diligence processes for various industries such as real estate, private equity, and venture capital. It helps users extract key data, summarize documents, flag issues, and generate reports with high accuracy and efficiency. Dili's advanced AI technology enhances due diligence procedures, reduces human errors, and provides valuable insights for making informed decisions in high-stakes deals.
Finance Brain
Finance Brain is an AI-powered assistant that provides instant answers for finance and accounting questions. Users can access the tool 24/7 for a monthly fee of $20, with new users receiving 3 free questions. The tool also supports uploading video files for analysis.
Susterra
Susterra is an advanced analytics platform for Public Finance stakeholders, aiming to catalyze urban development by providing powerful insights. The platform integrates leading practices from academia, leverages public data growth, and utilizes technology innovations like ML and AI to enable issuers to make suitable choices for accelerating the development of Smart Cities across the United States. Susterra offers state-of-the-art analytics, including TerraScore, TerraVision, TerraView, and Impact IQ, with a focus on public program evaluation and data visualization tools for various sectors such as Utilities, Education, Healthcare, and more.
Calypso
Calypso is an AI-first public equities copilot platform that combines the power of AI with financials, transcripts, headlines, and case studies by professionals to provide effortless analysis and superior returns. It offers features such as AI-powered insights, personalized theses, earnings previews, and updates, as well as the ability to ask any question with AI chats. Trusted by professionals, Calypso helps users stay up to date with key debates, financials, and valuation setups, making it a valuable tool for individuals in the finance industry.
Forbes Italia
Forbes Italia is the Italian edition of Forbes, covering a wide range of topics including business, rankings, and leaders. It publishes articles on finance, innovation, investments, lifestyle, and more, offering insights into the latest trends and news in the business world with a focus on Italian and global markets. Forbes Italia aims to deliver valuable content to readers interested in entrepreneurship, finance, and technology.
20 - Open Source Tools
Next-Gen-Dialogue
Next Gen Dialogue is a Unity dialogue plugin that combines traditional dialogue design with AI techniques. It features a visual dialogue editor, modular dialogue functions, AIGC support for generating dialogue at runtime, AIGC baking dialogue in Editor, and runtime debugging. The plugin aims to provide an experimental approach to dialogue design using large language models. Users can create dialogue trees, generate dialogue content using AI, and bake dialogue content in advance. The tool also supports localization, VITS speech synthesis, and one-click translation. Users can create dialogue by code using the DialogueSystem and DialogueTree components.
Awesome-LLM-Strawberry
Awesome LLM Strawberry is a collection of research papers and blogs related to OpenAI Strawberry (o1) and reasoning. The repository is continuously updated to track the frontier of LLM reasoning.
awesome-generative-information-retrieval
This repository contains a curated list of resources on generative information retrieval, including research papers, datasets, tools, and applications. Generative information retrieval is a subfield of information retrieval that uses generative models to generate new documents or passages of text that are relevant to a given query. This can be useful for a variety of tasks, such as question answering, summarization, and document generation. The resources in this repository are intended to help researchers and practitioners stay up-to-date on the latest advances in generative information retrieval.
rageval
Rageval is an evaluation tool for Retrieval-augmented Generation (RAG) methods. It helps evaluate RAG systems by performing tasks such as query rewriting, document ranking, information compression, evidence verification, answer generation, and result validation. The tool provides metrics for answer correctness and answer groundedness, along with benchmark results for ASQA and ALCE datasets. Users can install and use Rageval to assess the performance of RAG models in question-answering tasks.
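Rageval's own API aside, the kind of answer-correctness metric such RAG evaluators report can be sketched as a token-level F1 score between the generated answer and a gold answer. This is a minimal stand-in for illustration, not rageval's actual implementation:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a generated answer and a gold answer,
    a common answer-correctness metric in RAG evaluation."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    # Multiset intersection counts how many tokens the two answers share.
    overlap = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(overlap.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Word order does not matter, only token overlap:
print(token_f1("the capital of France is Paris",
               "Paris is the capital of France"))  # → 1.0
```

Answer groundedness is typically measured similarly, but against the retrieved evidence rather than a gold answer.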
awesome-AI4MolConformation-MD
The 'awesome-AI4MolConformation-MD' repository focuses on protein conformations and molecular dynamics using generative artificial intelligence and deep learning. It provides resources, reviews, datasets, packages, and tools related to AI-driven molecular dynamics simulations. The repository covers a wide range of topics such as neural networks potentials, force fields, AI engines/frameworks, trajectory analysis, visualization tools, and various AI-based models for protein conformational sampling. It serves as a comprehensive guide for researchers and practitioners interested in leveraging AI for studying molecular structures and dynamics.
Odyssey
Odyssey is a framework designed to empower agents with open-world skills in Minecraft. It provides an interactive agent with a skill library, a fine-tuned LLaMA-3 model, and an open-world benchmark for evaluating agent capabilities. The framework enables agents to explore diverse gameplay opportunities in the vast Minecraft world by offering primitive and compositional skills, extensive training data, and various long-term planning tasks. Odyssey aims to advance research on autonomous agent solutions by providing datasets, model weights, and code for public use.
Awesome-LLM-Preference-Learning
The repository 'Awesome-LLM-Preference-Learning' is the official repository of a survey paper titled 'Towards a Unified View of Preference Learning for Large Language Models: A Survey'. It contains a curated list of papers related to preference learning for Large Language Models (LLMs). The repository covers various aspects of preference learning, including on-policy and off-policy methods, feedback mechanisms, reward models, algorithms, evaluation techniques, and more. The papers included in the repository explore different approaches to aligning LLMs with human preferences, improving mathematical reasoning in LLMs, enhancing code generation, and optimizing language model performance.
InternLM-XComposer
InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) based on InternLM2-7B, excelling in free-form text-image composition and comprehension. Its key capabilities:

* **Free-form interleaved text-image composition**: effortlessly generates coherent, contextual articles with interleaved images from diverse inputs such as outlines, detailed text requirements, and reference images, enabling highly customizable content creation.
* **Accurate vision-language problem-solving**: accurately handles diverse and challenging vision-language Q&A tasks based on free-form instructions, excelling in recognition, perception, detailed captioning, visual reasoning, and more.
* **Strong performance**: based on InternLM2-7B, it not only significantly outperforms existing open-source multimodal models in 13 benchmarks but also **matches or even surpasses GPT-4V and Gemini Pro in 6 benchmarks**.

The InternLM-XComposer2 series is released in four versions:

* **InternLM-XComposer2-4KHD-7B**: the high-resolution multi-task trained VLLM with InternLM-7B as the LLM initialization, for _high-resolution understanding_, _VL benchmarks_, and _AI assistant_ use.
* **InternLM-XComposer2-VL-7B**: the multi-task trained VLLM with InternLM-7B as the LLM initialization, for _VL benchmarks_ and _AI assistant_ use. **It ranks as the most powerful vision-language model based on 7B-parameter-level LLMs, leading across 13 benchmarks.**
* **InternLM-XComposer2-VL-1.8B**: a lightweight version of InternLM-XComposer2-VL based on InternLM-1.8B.
* **InternLM-XComposer2-7B**: the further instruction-tuned VLLM for _interleaved text-image composition_ with free-form inputs.

Please refer to the Technical Report and the 4KHD Technical Report for more details.
lego-ai-parser
Lego AI Parser is an open-source application that uses OpenAI to parse visible text of HTML elements. It is built on top of FastAPI, ready to set up as a server, and make calls from any language. It supports preset parsers for Google Local Results, Amazon Listings, Etsy Listings, Wayfair Listings, BestBuy Listings, Costco Listings, Macy's Listings, and Nordstrom Listings. Users can also design custom parsers by providing prompts, examples, and details about the OpenAI model under the classifier key.
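As a sketch of what a custom-parser request might look like, the payload below mirrors the description above (a prompt, reference examples, and OpenAI model details under the classifier key). The field names, model name, and endpoint are assumptions for illustration, not Lego AI Parser's documented API:

```python
import json

# Hypothetical payload for a custom parser -- field names are assumed,
# not taken from Lego AI Parser's docs.
payload = {
    "classifier": {
        "prompt": "Extract the product title and price from each listing.",
        "examples": [
            {"input": "<div>Sourdough Loaf - $8.50</div>",
             "output": {"title": "Sourdough Loaf", "price": "$8.50"}},
        ],
        "model": "gpt-3.5-turbo",  # assumed OpenAI model name
    },
    "html": "<div>Rye Bread - $6.00</div>",
}

# In practice this would be POSTed to the running FastAPI server, e.g.:
# requests.post("http://localhost:8000/parse", json=payload)
print(json.dumps(payload, indent=2))
```

Because the server speaks plain JSON over HTTP, the same call can be made from any language, as the description notes.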
h2ogpt
h2oGPT is an Apache V2 open-source project that allows users to query and summarize documents or chat with local private GPT LLMs. Key features:

* Private offline database of any documents (PDFs, Excel, Word, images, video frames, YouTube, audio, code, text, Markdown, etc.)
* Persistent database (Chroma, Weaviate, or in-memory FAISS) using accurate embeddings (instructor-large, all-MiniLM-L6-v2, etc.)
* Efficient use of context using instruct-tuned LLMs (no need for LangChain's few-shot approach)
* Parallel summarization and extraction, reaching an output of 80 tokens per second with the 13B LLaMa2 model
* HYDE (Hypothetical Document Embeddings) for enhanced retrieval based upon LLM responses
* A variety of models supported (LLaMa2, Mistral, Falcon, Vicuna, WizardLM; with AutoGPTQ, 4-bit/8-bit, LoRA, etc.)
* GPU support for HF and LLaMa.cpp GGML models; CPU support for HF, LLaMa.cpp, and GPT4All models
* Attention sinks for arbitrarily long generation (LLaMa-2, Mistral, MPT, Pythia, Falcon, etc.)
* UI or CLI with streaming of all models; upload and view documents through the UI (control multiple collaborative or personal collections)
* Vision models (LLaVa, Claude-3, Gemini-Pro-Vision, GPT-4-Vision) and image generation via Stable Diffusion (sdxl-turbo, sdxl) and PlaygroundAI (playv2)
* Voice STT using Whisper with streaming audio conversion
* Voice TTS using MIT-licensed Microsoft Speech T5 with multiple voices, or MPL2-licensed TTS including voice cloning, both with streaming audio conversion
* AI assistant voice-control mode for hands-free control of h2oGPT chat
* Bake-off UI mode to compare many models at the same time
* Easy download of model artifacts and control over models like LLaMa.cpp through the UI
* Authentication and state preservation in the UI by user/password, via native login or Google OAuth
* Linux, Docker, macOS, and Windows support; easy Windows installer for Windows 10 64-bit (CPU/CUDA) and easy macOS installer (CPU/M1/M2)
* Inference-server support (oLLaMa, HF TGI server, vLLM, Gradio, ExLLaMa, Replicate, OpenAI, Azure OpenAI, Anthropic)
* OpenAI-compliant server proxy API (h2oGPT acts as a drop-in replacement for an OpenAI server) and Python client API (to talk to the Gradio server)
* JSON mode with any model via code-block extraction; also MistralAI JSON mode, Claude-3 via function calling with strict schema, OpenAI via JSON mode, and vLLM via guided_json with strict schema
* Web-search integration with chat and document Q/A; agents for search, document Q/A, Python code, and CSV frames (experimental, best with OpenAI currently)
* Performance evaluation using reward models; quality maintained with over 1000 unit and integration tests taking over 4 GPU-hours
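The HYDE technique listed among h2oGPT's features can be sketched generically: instead of embedding the user's query, embed a hypothetical answer drafted by the LLM, then retrieve the document closest to that draft. Here a stub LLM and toy bag-of-words vectors stand in for h2oGPT's real models and embeddings:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use models
    such as instructor-large instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fake_llm(prompt: str) -> str:
    """Stand-in for the LLM call that drafts a hypothetical answer."""
    return "h2oGPT supports parallel summarization of documents"

def hyde_retrieve(query: str, docs: list[str]) -> str:
    # HYDE: embed the LLM's *hypothetical answer*, not the raw query,
    # and retrieve the document closest to that draft.
    draft = fake_llm(f"Answer briefly: {query}")
    draft_vec = embed(draft)
    return max(docs, key=lambda d: cosine(embed(d), draft_vec))

docs = [
    "The chart deploys Airflow on Kubernetes.",
    "Parallel summarization reaches 80 tokens per second on 13B LLaMa2.",
]
best = hyde_retrieve("How fast is summarization?", docs)
print(best)
```

The draft answer shares vocabulary with the relevant document even when the query itself does not, which is the intuition behind HYDE.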
deeplake
Deep Lake is a Database for AI powered by a storage format optimized for deep-learning applications. Deep Lake can be used for: 1. Storing data and vectors while building LLM applications 2. Managing datasets while training deep learning models Deep Lake simplifies the deployment of enterprise-grade LLM-based products by offering storage for all data types (embeddings, audio, text, videos, images, pdfs, annotations, etc.), querying and vector search, data streaming while training models at scale, data versioning and lineage, and integrations with popular tools such as LangChain, LlamaIndex, Weights & Biases, and many more. Deep Lake works with data of any size, it is serverless, and it enables you to store all of your data in your own cloud and in one place. Deep Lake is used by Intel, Bayer Radiology, Matterport, ZERO Systems, Red Cross, Yale, & Oxford.
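The append-then-search workflow at the heart of a vector database can be sketched in a few lines. This is a toy in-memory store for illustration, not Deep Lake's actual API:

```python
import math

class TinyVectorStore:
    """Minimal sketch of the append-then-search workflow a vector
    database like Deep Lake provides; the real API differs."""
    def __init__(self):
        self.vectors, self.payloads = [], []

    def append(self, vector, payload):
        # Deep Lake would persist embeddings alongside the raw data
        # (text, images, PDFs, ...); here we just keep both in memory.
        self.vectors.append(vector)
        self.payloads.append(payload)

    def search(self, query, k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(zip(self.vectors, self.payloads),
                        key=lambda p: cos(p[0], query), reverse=True)
        return [payload for _, payload in ranked[:k]]

store = TinyVectorStore()
store.append([1.0, 0.0], {"text": "invoice.pdf", "kind": "pdf"})
store.append([0.0, 1.0], {"text": "demo.mp4", "kind": "video"})
print(store.search([0.9, 0.1]))  # nearest by cosine similarity
```

A production store adds what this sketch omits: persistence, streaming, versioning, and approximate-nearest-neighbor indexing at scale.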
airflow-chart
This Helm chart bootstraps an Airflow deployment on a Kubernetes cluster using the Helm package manager. The version of this chart does not correlate to any other component; users should not expect feature parity between the OSS Airflow chart and the Astronomer airflow-chart for identical version numbers.

To install this Helm chart remotely (using Helm 3):

```sh
kubectl create namespace airflow
helm repo add astronomer https://helm.astronomer.io
helm install airflow --namespace airflow astronomer/airflow
```

To install this repository from source:

```sh
kubectl create namespace airflow
helm install --namespace airflow .
```

Prerequisites:

* Kubernetes 1.12+
* Helm 3.6+
* PV provisioner support in the underlying infrastructure

Installing the chart:

```sh
helm install --name my-release .
```

The command deploys Airflow on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Upgrading the chart: first, look at the updating documentation to identify any backwards-incompatible changes. To upgrade the chart with the release name `my-release`:

```sh
helm upgrade --name my-release .
```

Uninstalling the chart: to uninstall/delete the `my-release` deployment:

```sh
helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Updating DAGs: the recommended way to update your DAGs with this chart is to bake them into a Docker image. Build a new image with the latest code (`docker build -t my-company/airflow:8a0da78 .`), push it to an accessible registry (`docker push my-company/airflow:8a0da78`), then update the Airflow pods with that image:

```sh
helm upgrade my-release . --set images.airflow.repository=my-company/airflow --set images.airflow.tag=8a0da78
```

Docker images: the Airflow images referenced as the default values in this chart are generated from https://github.com/astronomer/ap-airflow. Other non-Airflow images used in this chart are generated from https://github.com/astronomer/ap-vendor.

Parameters: the complete list of parameters supported by the community chart can be found on the Parameters Reference page, and can be set under the `airflow` key in this chart. The following table lists the configurable parameters of the Astronomer chart and their default values.

| Parameter | Description | Default |
| :-- | :-- | :-- |
| `ingress.enabled` | Enable Kubernetes Ingress support | `false` |
| `ingress.acme` | Add acme annotations to Ingress object | `false` |
| `ingress.tlsSecretName` | Name of secret that contains a TLS secret | `~` |
| `ingress.webserverAnnotations` | Annotations added to Webserver Ingress object | `{}` |
| `ingress.flowerAnnotations` | Annotations added to Flower Ingress object | `{}` |
| `ingress.baseDomain` | Base domain for VHOSTs | `~` |
| `ingress.auth.enabled` | Enable auth with Astronomer Platform | `true` |
| `extraObjects` | Extra K8s objects to deploy (these are passed through `tpl`). More about Extra Objects below. | `[]` |
| `sccEnabled` | Enable security context constraints required for OpenShift | `false` |
| `authSidecar.enabled` | Enable authSidecar | `false` |
| `authSidecar.repository` | The image for the auth sidecar proxy | `nginxinc/nginx-unprivileged` |
| `authSidecar.tag` | The image tag for the auth sidecar proxy | `stable` |
| `authSidecar.pullPolicy` | The K8s pullPolicy for the auth sidecar proxy image | `IfNotPresent` |
| `authSidecar.port` | The port the auth sidecar exposes | `8084` |
| `gitSyncRelay.enabled` | Enables the git sync relay feature | `False` |
| `gitSyncRelay.repo.url` | Upstream URL to the git repo to clone | `~` |
| `gitSyncRelay.repo.branch` | Branch of the upstream git repo to checkout | `main` |
| `gitSyncRelay.repo.depth` | How many revisions to check out. Leave as default `1` except in dev where history is needed | `1` |
| `gitSyncRelay.repo.wait` | Seconds to wait before pulling from the upstream remote | `60` |
| `gitSyncRelay.repo.subPath` | Path to the dags directory within the git repository | `~` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

```sh
helm install --name my-release --set executor=CeleryExecutor --set enablePodLaunching=false .
```

Walkthrough using kind:

1. Install kind and create a cluster. We recommend testing with Kubernetes 1.25+, for example: `kind create cluster --image kindest/node:v1.25.11`
2. Confirm it's up: `kubectl cluster-info --context kind-kind`
3. Add Astronomer's Helm repo: `helm repo add astronomer https://helm.astronomer.io && helm repo update`
4. Create the namespace and install the chart: `kubectl create namespace airflow && helm install airflow -n airflow astronomer/airflow`
5. It may take a few minutes. Confirm the pods are up: `kubectl get pods --all-namespaces` and `helm list -n airflow`
6. Run `kubectl port-forward svc/airflow-webserver 8080:8080 -n airflow` to port-forward the Airflow UI to http://localhost:8080/ and confirm Airflow is working. Log in as _admin_ with password _admin_.

Build a Docker image from your DAGs:

1. Start a project using astro-cli, which will generate a Dockerfile and load your DAGs in. You can test locally before pushing to kind with `astro airflow start`: `mkdir my-airflow-project && cd my-airflow-project && astro dev init`
2. Then build the image: `docker build -t my-dags:0.0.1 .`
3. Load the image into kind: `kind load docker-image my-dags:0.0.1`
4. Upgrade the Helm deployment: `helm upgrade airflow -n airflow --set images.airflow.repository=my-dags --set images.airflow.tag=0.0.1 astronomer/airflow`

Extra Objects: this chart can deploy extra Kubernetes objects (assuming the role used by Helm can manage them). For Astronomer Cloud and Enterprise, the role permissions can be found in the Commander role.

```yaml
extraObjects:
  - apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: "{{ .Release.Name }}-somejob"
    spec:
      schedule: "*/10 * * * *"
      concurrencyPolicy: Forbid
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: myjob
                  image: ubuntu
                  command:
                    - echo
                  args:
                    - hello
              restartPolicy: OnFailure
```

Contributing: check out our contributing guide! License: Apache 2.0 with Commons Clause
llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.
ort
Ort is an unofficial ONNX Runtime 1.17 wrapper for Rust, based on the now-inactive onnxruntime-rs. ONNX Runtime accelerates ML inference on both CPU and GPU.
kor
Kor is a prototype tool designed to help users extract structured data from text using large language models (LLMs). It generates prompts, sends them to a specified LLM, and parses the output. The tool takes a parsing-based approach and is integrated with the LangChain framework. Kor is compatible with pydantic v2 and v1, and schemas are type-checked using pydantic. It is primarily used for extracting information from text based on provided reference examples and schema documentation. Kor is designed to work with any good-enough LLM regardless of its support for function/tool calling or JSON modes.
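Kor's core idea — generate a prompt from a schema plus reference examples, then parse the LLM's reply back into structured data — can be sketched roughly as follows. The prompt template and `field: value` output format here are invented for illustration and are not Kor's real ones:

```python
def build_prompt(schema: dict, examples: list, text: str) -> str:
    """Assemble an extraction prompt from a schema and reference
    examples -- a rough sketch of Kor's prompt-generation idea."""
    lines = [f"Extract these fields as 'field: value' lines: "
             f"{', '.join(schema)}."]
    for example_text, expected in examples:
        lines.append(f"Text: {example_text}")
        lines.extend(f"{k}: {v}" for k, v in expected.items())
    lines.append(f"Text: {text}")
    return "\n".join(lines)

def parse_output(raw: str) -> dict:
    """Parse the LLM's 'field: value' lines back into a dict."""
    result = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            result[key.strip()] = value.strip()
    return result

prompt = build_prompt(
    {"name": str, "price": str},
    [("Baguette costs $3", {"name": "Baguette", "price": "$3"})],
    "Croissant costs $4",
)
# A real LLM call would go here; we stub its reply for illustration.
print(parse_output("name: Croissant\nprice: $4"))
```

Because only a prompt and a parser are involved, this style of extraction works with any LLM that can follow instructions, matching the description above.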
llama3-tokenizer-js
JavaScript tokenizer for LLaMA 3, designed for client-side use in the browser and in Node, with TypeScript support. It accurately calculates token counts, has zero dependencies, optimized running time, and a somewhat optimized bundle size. It can encode and decode text, but training is not supported. In the browser it pollutes the global namespace with `llama3Tokenizer`. It is mostly compatible with the LLaMA 3 models released by Meta in April 2024, and can be adapted for incompatible models by passing custom vocab and merge data. It handles special tokens and fine-tuned models. Developed by belladore.ai with contributions from xenova, blaze2004, imoneoi, and ConProgramming.
Awesome-AI-Data-Guided-Projects
A curated list of data science & AI guided projects to start building your portfolio. The repository contains guided projects covering various topics such as large language models, time series analysis, computer vision, natural language processing (NLP), and data science. Each project provides detailed instructions on how to implement specific tasks using different tools and technologies.
20 - OpenAI GPTs
Pizza Pro Dough Helper
Expert in BIGA and Neapolitan pizza dough recipes, focusing on tailored calculations and precise temperature data.
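The "tailored calculations" such a dough helper performs are typically baker's percentages, where every ingredient is expressed as a fraction of the flour weight (flour itself being 100%). A minimal sketch — the default ratios below are common Neapolitan-style values, not this GPT's own numbers:

```python
def dough_weights(flour_g: float, hydration: float = 0.62,
                  salt_pct: float = 0.028, yeast_pct: float = 0.002) -> dict:
    """Baker's percentages: each ingredient is scaled against the
    flour weight. Defaults are typical Neapolitan-style ratios."""
    return {
        "flour_g": round(flour_g, 1),
        "water_g": round(flour_g * hydration, 1),
        "salt_g": round(flour_g * salt_pct, 1),
        "yeast_g": round(flour_g * yeast_pct, 1),
    }

# 1 kg of flour at 62% hydration, 2.8% salt, 0.2% yeast:
print(dough_weights(1000))
```

A BIGA preferment complicates this only slightly: part of the flour and water is fermented in advance, and the remainder is mixed in later, but both parts still sum to the same baker's percentages.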
Fermentation Sage - Fermento Brewster v1
stunspot's Fermentation Expert - beer, mead, kimchi, pickles, everything!
Bake Off - Great British Technical Challenge GBBO
Minimalist baking challenges with a step title and tailored hint!
Chef Dulce
Pastry and baking expert offering recipes and tips in a casual, friendly tone.
Bagels
Expert on bagels, offering detailed info on types, toppings, and recipes in an enjoyable tone.
Bake Off
The Great (Pretrained Transformer) Bake Off Challenge! Bake a cake, get roasted by AI. Type K to view all game modes. v1.0
GingerHouseMaker
Gingerbread designer transforming your house into a festive, whimsical gingerbread creation. v1.1
Cake Designer
I specialize in crafting custom cake designs, offering visual representations and tailored recipes according to individual tastes and preferences.
The Great Bakeoff Master
Magical baking game host with four judges to help you become a master baker and chef.