Best AI Tools for Critiquing Designs
11 - AI Tool Sites

Feedback Wizard
Feedback Wizard is an AI-powered tool that provides instant design feedback directly within Figma, offering actionable insights to improve user experience and strengthen the visual elements of Figma designs. With over 2,700 designers already using it, Feedback Wizard aims to streamline the design feedback process and raise overall design quality.

Sun Group (China) Co., Ltd.
The website is the official site of the Sun Group (China) Co., Ltd., endorsed by Louis Koo. It provides information about the company's history, leadership, organizational structure, educational programs, research achievements, and employee activities. The site also features news updates, announcements, and resources for download.

Yogger
Yogger is an AI-powered video analysis and movement assessment tool designed for coaches, trainers, physical therapists, and athletes. It allows users to track form, gather data, and analyze movement for any sport or activity in seconds. With features like AI joint tracking, virtual assessments, and drawing tools, Yogger helps streamline client evaluations and deliver objective scores and data. Users can utilize Yogger for recovery, training enhancement, injury prevention, and precise movement analysis, all from their phone.

LLM Quality Beefer-Upper
LLM Quality Beefer-Upper is an AI tool designed to enhance the quality and productivity of LLM responses by automating critique, reflection, and improvement. Users can generate multi-agent prompt drafts, choose from different quality levels, and upload knowledge text for processing. The application aims to maximize output quality by using the best available LLMs on the market.
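
The app itself is closed, but the critique-and-revise loop it describes is straightforward to picture. The sketch below is a minimal, hypothetical version of that loop written against the OpenAI Python client; the model name, prompts, and two-round default are assumptions, not details taken from LLM Quality Beefer-Upper.

```python
# A minimal sketch of a critique-and-revise loop, not the app's actual code.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # hypothetical model choice

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def beef_up(task: str, rounds: int = 2) -> str:
    """Draft an answer, then alternate critique and revision passes."""
    draft = ask(f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = ask(
            f"Critique this answer to the task '{task}'. "
            f"List concrete weaknesses:\n{draft}"
        )
        draft = ask(
            f"Task: {task}\nCurrent answer:\n{draft}\n"
            f"Critique:\n{critique}\n"
            "Rewrite the answer so that every critique point is addressed."
        )
    return draft

print(beef_up("Summarize the trade-offs between microservices and monoliths."))
```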

Mock-My-Mockup
Mock-My-Mockup is an AI-powered product design tool created by Fairpixels. It allows users to upload a screenshot of a page they are working on and receive brutally honest feedback. The tool offers a user-friendly interface where users can easily drag and drop their product screenshots for analysis.

Resume Roaster AI
The website offers a service where users can have their resumes analyzed and critiqued by an AI tool. Users can submit their resumes to receive feedback on areas for improvement. The AI tool provides insights on resume quality, structure, and content to help users enhance their job application documents. It aims to assist individuals in creating more effective resumes to increase their chances of securing job opportunities.

Hell's Pitching
Hell's Pitching is an AI-powered assistant designed to help entrepreneurs refine their startup ideas by providing brutally honest feedback and insightful questions. It offers a unique approach to guiding and challenging founders in building successful startups. The tool allows users to pitch their ideas and receive side-splittingly funny roasts that lead to 'aha' moments and innovative insights. With a focus on no-nonsense critiques and humor, Hell's Pitching aims to transform startup ideas by providing wisdom and valuable feedback. The platform is free for all users, encouraging access to honest feedback for everyone.

Critique
Critique is an AI tool that redefines browsing by offering autonomous fact-checking, informed question answering, and a localized universal recommendation system. It automatically critiques comments and posts on platforms like Reddit, YouTube, and LinkedIn by vetting text on any website. The tool cross-references and analyzes articles in real time, providing vetted and summarized information directly in the user's browser.

ProWritingAid
ProWritingAid is an AI-powered writing assistant that helps writers improve their writing. It offers a range of features, including a grammar checker, plagiarism checker, and story critique tool. ProWritingAid is used by writers of all levels, from beginners to bestselling authors. It is available as a web app, desktop app, and browser extension.

ScriptReader.ai
ScriptReader.ai is an AI-powered screenplay analysis tool that provides detailed critiques and suggestions for every scene of your screenplay. It offers personalized feedback to help writers improve their scripts and elevate their writing game. With the ability to analyze strengths and weaknesses, provide grades, critiques, and suggestions for improvement on a scene-by-scene basis, ScriptReader.ai aims to help both seasoned screenwriters and beginners enhance their work and create captivating masterpieces.

RAD AI
RAD AI is an AI-powered platform that provides solutions for audience insights, influencer discovery, content optimization, managed services, and more. The platform uses advanced machine learning to analyze real-time conversations from social platforms like Reddit, TikTok, and Twitter. RAD AI offers actionable critiques to enhance brand content and helps in selecting the right influencers based on various factors. The platform aims to help brands reach their target audiences effectively and efficiently by leveraging AI technology.
20 - Open Source AI Tools

Awesome-LLM-in-Social-Science
This repository compiles a list of academic papers that evaluate, align, simulate, and provide surveys or perspectives on the use of Large Language Models (LLMs) in the field of Social Science. The papers cover various aspects of LLM research, including assessing their alignment with human values, evaluating their capabilities in tasks such as opinion formation and moral reasoning, and exploring their potential for simulating social interactions and addressing issues in diverse fields of Social Science. The repository aims to provide a comprehensive resource for researchers and practitioners interested in the intersection of LLMs and Social Science.

awesome-deliberative-prompting
The 'awesome-deliberative-prompting' repository focuses on how to ask Large Language Models (LLMs) to produce reliable reasoning and make reason-responsive decisions through deliberative prompting. It includes success stories, prompting patterns and strategies, multi-agent deliberation, reflection and meta-cognition, text generation techniques, self-correction methods, reasoning analytics, limitations, failures, puzzles, datasets, tools, and other resources related to deliberative prompting. The repository provides a comprehensive overview of research, techniques, and tools for enhancing reasoning capabilities of LLMs.

Awesome-Code-LLM
Awesome-Code-LLM is a curated list of resources on large language models for code. It collects research papers, models, datasets, and benchmarks covering topics such as code generation, code completion, and program understanding, serving as a reference for researchers and developers working on code-focused LLMs.

awesome-generative-ai
A curated list of generative AI projects, tools, artworks, and models.

llms-tools
The 'llms-tools' repository is a comprehensive collection of AI tools, open-source projects, and research related to Large Language Models (LLMs) and Chatbots. It covers a wide range of topics such as AI in various domains, open-source models, chats & assistants, visual language models, evaluation tools, libraries, devices, income models, text-to-image, computer vision, audio & speech, code & math, games, robotics, typography, bio & med, military, climate, finance, and presentation. The repository provides valuable resources for researchers, developers, and enthusiasts interested in exploring the capabilities of LLMs and related technologies.

CritiqueLLM
CritiqueLLM is the official implementation of a model designed to generate informative critiques for evaluating large language model generation. It includes functionality for data collection, referenced and reference-free pointwise grading, referenced and reference-free pairwise comparison, inference for both grading modes, and evaluation of the generated results. The model aims to provide a comprehensive framework for evaluating the performance of large language models against human ratings and comparisons.
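
For context, referenced pointwise grading means scoring a candidate answer against a gold reference. The snippet below is only an illustrative stand-in using a generic judge prompt and the OpenAI client, not CritiqueLLM's own model, prompts, or evaluation scripts; the 1-10 scale and wording are assumptions.

```python
# Illustrative only: a generic referenced pointwise-grading judge, not
# CritiqueLLM's own model, prompts, or scripts. Scale and wording are assumed.
import re
from openai import OpenAI

client = OpenAI()

def pointwise_grade(question: str, reference: str, candidate: str) -> tuple[str, int]:
    """Return (critique, score) for a candidate answer judged against a reference."""
    prompt = (
        "You are grading a model's answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Write a short critique, then a final line 'Score: N' with N from 1 to 10."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in judge model
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content
    match = re.search(r"Score:\s*(\d+)", text)
    score = int(match.group(1)) if match else -1  # -1 if no score was parsed
    return text, score
```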

bosquet
Bosquet is a tool designed for LLMOps in large language model-based applications. It simplifies building AI applications by managing LLM and tool services, integrating with the Selmer templating library for prompt templating, enabling prompt chaining and composition with Pathom graph processing, defining agents and tools for external API interactions, handling LLM memory, and providing features like call-response caching. The tool aims to streamline the development of AI applications that require complex prompt templates, memory management, and interaction with external systems.

instructor_ex
Instructor is a tool designed to structure outputs from OpenAI and other OSS LLMs by coaxing them to return JSON that maps to a provided Ecto schema. It allows for defining validation logic to guide LLMs in making corrections, and supports automatic retries. Instructor is primarily used with the OpenAI API but can be extended to work with other platforms. The tool simplifies usage by creating an Ecto schema, defining a validation function, and calling chat_completion with instructions for the LLM. It also offers features like max_retries to fix validation errors iteratively.
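
instructor_ex itself is an Elixir library built around Ecto schemas; to keep the examples in this list in one language, the sketch below shows the same validate-and-retry pattern in Python using Pydantic and the OpenAI client. The schema, prompt, and retry wording are assumptions and do not mirror instructor_ex's API.

```python
# Not instructor_ex's Elixir API: a Python sketch of the same validate-and-retry
# pattern, with Pydantic standing in for an Ecto schema.
import json
from openai import OpenAI
from pydantic import BaseModel, ValidationError, field_validator

client = OpenAI()

class SpamCheck(BaseModel):
    is_spam: bool
    confidence: float

    @field_validator("confidence")
    @classmethod
    def confidence_in_range(cls, value: float) -> float:
        if not 0.0 <= value <= 1.0:
            raise ValueError("confidence must be between 0 and 1")
        return value

def classify(text: str, max_retries: int = 2) -> SpamCheck:
    """Ask for JSON matching SpamCheck, re-prompting whenever validation fails."""
    prompt = (
        'Classify the text as spam or not. Reply with JSON containing "is_spam" '
        f'(bool) and "confidence" (0-1).\nText: {text}'
    )
    for _ in range(max_retries + 1):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )
        raw = resp.choices[0].message.content
        try:
            return SpamCheck.model_validate(json.loads(raw))
        except (json.JSONDecodeError, ValidationError) as err:
            prompt += f"\nYour previous reply was invalid ({err}). Return corrected JSON."
    raise RuntimeError("model never produced valid JSON")
```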

py-vectara-agentic
The `vectara-agentic` Python library is designed for developing powerful AI assistants using Vectara and Agentic-RAG. It supports various agent types, includes pre-built tools for domains like finance and legal, and enables easy creation of custom AI assistants and agents. The library provides tools for summarizing text, rephrasing text, legal tasks like summarizing legal text and critiquing as a judge, financial tasks like analyzing balance sheets and income statements, and database tools for inspecting and querying databases. It also supports observability via LlamaIndex and Arize Phoenix integration.

MM-RLHF
MM-RLHF is a comprehensive project for aligning Multimodal Large Language Models (MLLMs) with human preferences. It includes a high-quality MLLM alignment dataset, a Critique-Based MLLM reward model, a novel alignment algorithm MM-DPO, and benchmarks for reward models and multimodal safety. The dataset covers image understanding, video understanding, and safety-related tasks with model-generated responses and human-annotated scores. The reward model generates critiques of candidate texts before assigning scores for enhanced interpretability. MM-DPO is an alignment algorithm that achieves performance gains with simple adjustments to the DPO framework. The project enables consistent performance improvements across 10 dimensions and 27 benchmarks for open-source MLLMs.
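
MM-RLHF ships its own dataset, reward-model checkpoints, and MM-DPO training code; the toy sketch below only illustrates the "critique first, then score" idea behind its reward model, with a text-only chat model standing in for the multimodal one. All prompts and the 0-1 scale are assumptions.

```python
# Toy illustration of critique-then-score reward modeling; not MM-RLHF's code.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # stand-in model

def critique_based_reward(prompt: str, response: str) -> tuple[str, float]:
    """Generate a critique of `response`, then score it conditioned on that critique."""
    critique = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
                   f"Prompt: {prompt}\nResponse: {response}\n"
                   "Write a concise critique of the response's accuracy and helpfulness."}],
    ).choices[0].message.content

    score_text = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
                   f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}\n"
                   "Given the critique, output only a quality score between 0 and 1."}],
    ).choices[0].message.content
    try:
        score = float(score_text.strip())
    except ValueError:
        score = 0.0  # fall back if the reply is not a bare number
    return critique, score
```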

Controllable-RAG-Agent
This repository contains a sophisticated deterministic graph-based solution for answering complex questions using a controllable autonomous agent. The solution is designed to ensure that answers are solely based on the provided data, avoiding hallucinations. It involves various steps such as PDF loading, text preprocessing, summarization, database creation, encoding, and utilizing large language models. The algorithm follows a detailed workflow involving planning, retrieval, answering, replanning, content distillation, and performance evaluation. Heuristics and techniques implemented focus on content encoding, anonymizing questions, task breakdown, content distillation, chain of thought answering, verification, and model performance evaluation.
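
The repository implements this as a full graph with PDF loading, encoding, and content-distillation steps; the compressed sketch below only shows the plan / retrieve / answer / verify / replan loop at its core. The `search_chunks` retriever and all prompts are hypothetical placeholders, not the project's actual code.

```python
# A compressed, hypothetical sketch of the plan/retrieve/answer/verify/replan loop.
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def search_chunks(query: str) -> list[str]:
    """Hypothetical retriever over the pre-encoded document store."""
    raise NotImplementedError("plug in your vector store here")

def answer_question(question: str, max_replans: int = 2) -> str:
    plan = llm(f"Break this question into retrieval sub-steps:\n{question}")
    answer = ""
    for _ in range(max_replans + 1):
        context = "\n".join(search_chunks(plan))
        answer = llm(
            "Answer strictly from the context below; say 'unknown' if it is missing.\n"
            f"Context:\n{context}\nQuestion: {question}"
        )
        verdict = llm(
            f"Context:\n{context}\nAnswer:\n{answer}\n"
            "Is every claim grounded in the context? Reply GROUNDED or REPLAN."
        )
        if "REPLAN" not in verdict.upper():
            return answer  # verified as grounded in the retrieved content
        plan = llm(f"The last answer was not grounded. Revise the retrieval plan for: {question}")
    return answer
```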

llm-self-correction-papers
This repository contains a curated list of papers focusing on the self-correction of large language models (LLMs) during inference. It covers various frameworks for self-correction, including intrinsic self-correction, self-correction with external tools, self-correction with information retrieval, and self-correction with training designed specifically for self-correction. The list includes survey papers, negative results, and frameworks utilizing reinforcement learning and OpenAI o1-like approaches. Contributions are welcome through pull requests following a specific format.

parlant
Parlant is a structured approach to building and guiding customer-facing AI agents. It allows developers to create and manage robust AI agents, providing specific feedback on agent behavior and helping understand user intentions better. With features like guidelines, glossary, coherence checks, dynamic context, and guided tool use, Parlant offers control over agent responses and behavior. Developer-friendly aspects include instant changes, Git integration, clean architecture, and type safety. It enables confident deployment with scalability, effective debugging, and validation before deployment. Parlant works with major LLM providers and offers client SDKs for Python and TypeScript. The tool facilitates natural customer interactions through asynchronous communication and provides a chat UI for testing new behaviors before deployment.

pywhy-llm
PyWhy-LLM is an innovative library that integrates Large Language Models (LLMs) into the causal analysis process, empowering users with knowledge previously only available through domain experts. It seamlessly augments existing causal inference processes by suggesting potential confounders, relationships between variables, backdoor sets, front door sets, IV sets, estimands, critiques of DAGs, latent confounders, and negative controls. By leveraging LLMs and formalizing human-LLM collaboration, PyWhy-LLM aims to enhance causal analysis accessibility and insight.
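
PyWhy-LLM wraps these suggestions behind its own interfaces; the fragment below is only a bare illustration of the underlying idea, prompting a generic chat model to propose candidate confounders for a treatment/outcome pair. The prompt and parsing are assumptions, not the library's API.

```python
# Not PyWhy-LLM's API: a bare illustration of LLM-suggested confounders.
from openai import OpenAI

client = OpenAI()

def suggest_confounders(treatment: str, outcome: str) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
                   f"Treatment: {treatment}\nOutcome: {outcome}\n"
                   "List plausible confounding variables, one per line, no commentary."}],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]

print(suggest_confounders("daily exercise", "blood pressure"))
```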

NeMo-Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational applications. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more.
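
A minimal usage sketch following the toolkit's documented Python API is shown below; the `./config` directory (a config.yml plus Colang rail definitions) is assumed to exist, and that is where the actual guardrail behavior is defined.

```python
# Minimal usage sketch of NeMo Guardrails; assumes a ./config directory with
# config.yml and Colang rail files defining the desired guardrails.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # loads model settings and rail definitions
rails = LLMRails(config)

reply = rails.generate(messages=[
    {"role": "user", "content": "What do you think about the election?"}
])
print(reply["content"])  # a politics rail, if defined, deflects the question here
```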

RAG-Survey
This repository is dedicated to collecting and categorizing papers related to Retrieval-Augmented Generation (RAG) for AI-generated content. It serves as a survey repository based on the paper 'Retrieval-Augmented Generation for AI-Generated Content: A Survey'. The repository is continuously updated to keep up with the rapid growth in the field of RAG.
20 - OpenAI GPTs

Señor Design Mentor
Get feedback on your UI designs. All you need to do is share the problem you are trying to solve and the design you want feedback on.

Design Crit
I conduct design critiques focused on enhancing understanding and improvement.

Website Design Critique Expert
Critiques website designs and creates shareable summary graphics.

Roast My UI
Offers constructive feedback on users' web designs based on a knowledge base of modern best practices.

UX Feedback
The UX Feedback GPT specializes in critiquing UX/UI design, focusing on accessibility, layout, and best practices from Nielsen Norman Group and IDEO. It offers tailored feedback for various design stages and emphasizes clear communication, responsiveness, and ethical design principles.

RoastMyDesign
The best design or website roaster there is. It tells you exactly what's good, what's bad, and how to fix it. Made by @ThisSiya

Legal Tech Generhater
I'll critique your legal tech ideas with my signature snark and design fittingly bad logos.

Trey Ratcliff's Fun Photo Critique GPT
Critiquing photos with humor and expertise, drawing from my 5,000 blog entries and books. Share your photo for a unique critique experience!

Image Generation with Selfcritique & Improvement
More accurate and easier image generation with self-critique & improvement! Try it now.

Executive Insight
I'm a Fortune 100 exec who critiques presentations, papers, emails, etc.

Design Mentor
Friendly, professional design expert, offering critiques and creating mockups.

Roast My Website
🔥 Upload a Screenshot/URL of your website to get roasted! 🔥 OPTIONAL: Ask for actionable tips for improvement.