Best AI tools for Critique Form
11 - AI Tool Sites
Yogger
Yogger is a video analysis and AI movement assessment tool that helps coaches, trainers, physical therapists, and athletes gather precise movement data for performance enhancement, recovery optimization, and injury risk reduction. Yogger's software lets users analyze movement, critique form, and visualize joint tracking with the help of AI. Users can streamline client evaluations through automated movement screenings that deliver objective scores and data in just 60 seconds, and the tool's versatile feature set covers any sport or activity, all from a mobile device.
Critique
Critique is an AI tool that redefines browsing with autonomous fact-checking, informed question answering, and a localized, universal recommendation system. It automatically critiques comments and posts on platforms such as Reddit, YouTube, and LinkedIn and can vet text on any website. The tool cross-references and analyzes articles in real time, delivering vetted, summarized information directly in the user's browser.
Art Review Generator
The Art Review Generator is a natural language processing tool and text generator that analyzes and generates art reviews based on a set of words provided as a prompt. It utilizes 57 years of art reviews from Artforum to create medium-length sentences that capture the essence of art descriptions. The tool is not classified as artificial intelligence but is often referenced in mainstream news articles as an example of AI technology. By leveraging deep matrices of probability, it generates text that reflects the distinctive language used in modern art reviews, encompassing intent, emotion, technique, and impact. Despite potential biases and glitches, the tool offers insights into evolving language trends in art criticism and provides a platform for exploring new perspectives and interpretations of art.
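The generator's internals are not detailed here; as a loose illustration of producing text from "matrices of probability," the sketch below builds a bigram transition table from a small corpus and samples from it. It is an assumption-laden stand-in, not the tool's actual model.

```python
# Loose illustration only: a bigram Markov sampler over a tiny corpus.
# The Art Review Generator's actual model is not reproduced here.
import random
from collections import defaultdict

def build_bigrams(corpus: str) -> dict[str, list[str]]:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)  # repeated entries encode transition probability
    return table

def generate(table: dict[str, list[str]], start: str, length: int = 20) -> str:
    """Sample a short sequence by walking the transition table."""
    word, out = start, [start]
    for _ in range(length):
        if word not in table:
            break
        word = random.choice(table[word])
        out.append(word)
    return " ".join(out)
```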
LLM Quality Beefer-Upper
LLM Quality Beefer-Upper is an AI tool designed to enhance the quality and productivity of LLM responses by automating critique, reflection, and improvement. Users can generate multi-agent prompt drafts, choose from different quality levels, and upload knowledge text for processing. The application aims to maximize output quality by using the best available LLMs on the market.
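The product's pipeline is not documented in this listing, but the critique-reflect-improve pattern it describes is straightforward to sketch. The snippet below is a minimal illustration that assumes a hypothetical `call_llm(prompt)` helper returning model text; it is not the tool's actual code.

```python
# Minimal sketch of a critique -> reflect -> improve loop.
# `call_llm` is a hypothetical helper supplied by the caller: it takes a
# prompt string and returns the model's text response.

def beef_up(task: str, draft: str, call_llm, rounds: int = 2) -> str:
    """Iteratively critique and revise a draft answer."""
    for _ in range(rounds):
        critique = call_llm(
            f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
            "List the most important weaknesses of this draft."
        )
        draft = call_llm(
            f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
            f"Critique:\n{critique}\n\n"
            "Rewrite the draft so it fully addresses the critique."
        )
    return draft
```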
ProWritingAid
ProWritingAid is an AI-powered writing assistant that helps writers improve their writing. It offers a range of features, including a grammar checker, plagiarism checker, and story critique tool. ProWritingAid is used by writers of all levels, from beginners to bestselling authors. It is available as a web app, desktop app, and browser extension.
ScriptReader.ai
ScriptReader.ai is an AI-powered screenplay analysis tool that provides expert feedback on screenplays. It offers detailed critiques and suggestions for every scene, helping writers improve their scripts. The AI grades scenes, identifies strengths and weaknesses, and provides personalized feedback to elevate the quality of the screenplay. Whether you're a seasoned screenwriter or a beginner, ScriptReader.ai can help you sharpen your writing and polish your script.
Feedback Wizard
Feedback Wizard is an AI-powered tool designed to provide instant design feedback directly within Figma. It leverages AI technology to offer design wisdom and actionable insights to improve user experience and elevate the visual elements of Figma designs. With over 2700 designers already using the tool, Feedback Wizard aims to streamline the design feedback process and enhance the overall design quality.
Mock-My-Mockup
Mock-My-Mockup is an AI-powered product design tool created by Fairpixels. It allows users to upload a screenshot of a page they are working on and receive brutally honest feedback. The tool offers a user-friendly interface where users can easily drag and drop their product screenshots for analysis.
Hell's Pitching
Hell's Pitching is an AI-powered assistant designed to help entrepreneurs refine their startup ideas by providing brutally honest feedback and insightful questions. It offers a unique approach to challenging and guiding founders in building better startups through side-splittingly funny roasts and innovative insights. The tool operates 24/7, allowing users to brainstorm and get roasted at their convenience. With a focus on no-nonsense critiques and wisdom beneath the roast, Hell's Pitching aims to transform startup ideas with valuable feedback.
RAD AI
RAD AI is an AI-powered platform that provides solutions for audience insights, influencer discovery, content optimization, managed services, and more. The platform uses advanced machine learning to analyze real-time conversations from social platforms like Reddit, TikTok, and Twitter. RAD AI offers actionable critiques to enhance brand content and helps in selecting the right influencers based on various factors. The platform aims to help brands reach their target audiences effectively and efficiently by leveraging AI technology.
20 - Open Source AI Tools
Awesome-LLM-in-Social-Science
This repository compiles a list of academic papers that evaluate, align, simulate, and provide surveys or perspectives on the use of Large Language Models (LLMs) in the field of Social Science. The papers cover various aspects of LLM research, including assessing their alignment with human values, evaluating their capabilities in tasks such as opinion formation and moral reasoning, and exploring their potential for simulating social interactions and addressing issues in diverse fields of Social Science. The repository aims to provide a comprehensive resource for researchers and practitioners interested in the intersection of LLMs and Social Science.
pywhy-llm
PyWhy-LLM is an innovative library that integrates Large Language Models (LLMs) into the causal analysis process, empowering users with knowledge previously only available through domain experts. It seamlessly augments existing causal inference processes by suggesting potential confounders, relationships between variables, backdoor sets, front door sets, IV sets, estimands, critiques of DAGs, latent confounders, and negative controls. By leveraging LLMs and formalizing human-LLM collaboration, PyWhy-LLM aims to enhance causal analysis accessibility and insight.
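PyWhy-LLM's own API is not shown in this listing, so the snippet below only sketches the underlying idea: asking an LLM to propose candidate confounders for a treatment/outcome pair. The `call_llm` helper and prompt wording are hypothetical; this is not the library's real interface.

```python
# Conceptual sketch only: ask an LLM to suggest potential confounders.
# This is NOT pywhy-llm's API; `call_llm` is a hypothetical helper that
# takes a prompt string and returns the model's text response.
import json

def suggest_confounders(treatment: str, outcome: str, call_llm) -> list[str]:
    prompt = (
        f"Treatment variable: {treatment}\n"
        f"Outcome variable: {outcome}\n"
        "List plausible confounders (variables that could cause both) "
        "as a JSON array of short strings."
    )
    return json.loads(call_llm(prompt))

# Example (with a suitable call_llm):
# suggest_confounders("daily exercise", "blood pressure", call_llm)
```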
awesome-RLAIF
Reinforcement Learning from AI Feedback (RLAIF) is a machine learning approach in which **an AI agent learns by receiving feedback or guidance from another AI system**. It is closely related to Reinforcement Learning (RL), where an agent learns to make a sequence of decisions in an environment to maximize a cumulative reward. In traditional RL, the agent receives rewards or penalties based on the actions it takes and improves its decision-making over time. In RLAIF, the agent still learns optimal behavior through interaction, but **the feedback comes from another AI system rather than from the environment or human evaluators**. This is **particularly useful when it is hard to define a clear reward function, or when another AI system can provide guidance more efficiently**. The AI feedback can take several forms:
- **Demonstrations**: the AI system demonstrates desired behavior and the learning agent tries to imitate it.
- **Comparison data**: the AI system ranks or compares the learning agent's actions, indicating which are better or worse.
- **Reward shaping**: the AI system provides additional reward signals that supplement the rewards from the environment.
RLAIF is often used when the agent must learn from **limited human or expert feedback, or when the environment's reward signal is sparse or unclear**, and it can **accelerate learning and improve sample efficiency**. It is an area of ongoing research with applications in robotics, autonomous vehicles, game playing, and other domains.
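As a concrete toy example of the "comparison data" form of feedback, the sketch below has an AI labeler pick the better of two sampled responses and stores the preference pairs for later reward-model training. The `ai_labeler` and `policy` callables are hypothetical stand-ins, not any particular framework's API.

```python
# Toy sketch of RLAIF-style preference labeling: an AI labeler compares
# two candidate responses, and the chosen/rejected pair is collected for
# later reward-model training. `policy` and `ai_labeler` are hypothetical.

def collect_preferences(prompts, policy, ai_labeler):
    preferences = []
    for prompt in prompts:
        a, b = policy(prompt), policy(prompt)  # two sampled responses
        verdict = ai_labeler(
            f"Prompt: {prompt}\nResponse A: {a}\nResponse B: {b}\n"
            "Which response is more helpful and harmless? Answer A or B."
        )
        chosen, rejected = (a, b) if verdict.strip().upper().startswith("A") else (b, a)
        preferences.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return preferences
```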
llm-self-correction-papers
This repository contains a curated list of papers focusing on the self-correction of large language models (LLMs) during inference. It covers various frameworks for self-correction, including intrinsic self-correction, self-correction with external tools, self-correction with information retrieval, and self-correction with training designed specifically for self-correction. The list includes survey papers, negative results, and frameworks utilizing reinforcement learning and OpenAI o1-like approaches. Contributions are welcome through pull requests following a specific format.
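As one illustration of the "self-correction with external tools" framework mentioned above, the sketch below executes model-generated Python and, on failure, feeds the error output back to the model for another attempt. The `call_llm` helper is a hypothetical stand-in for whatever model client you use.

```python
# Sketch of self-correction with an external tool: run generated code
# and, if it fails, feed the error back to the model for a retry.
# `call_llm` is a hypothetical helper returning model text (code only).
import subprocess
import sys
import tempfile

def solve_with_retries(task: str, call_llm, max_attempts: int = 3) -> str:
    prompt = f"Write a Python script that does the following:\n{task}"
    code = ""
    for _ in range(max_attempts):
        code = call_llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return code  # the script ran cleanly; accept it
        prompt = (f"The previous script failed with this error:\n{result.stderr}\n"
                  f"Fix the script. Task:\n{task}")
    return code
```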
CritiqueLLM
CritiqueLLM is an official implementation of a model designed for generating informative critiques to evaluate large language model generation. It includes functionalities for data collection, referenced pointwise grading, referenced pairwise comparison, reference-free pairwise comparison, reference-free pointwise grading, inference for pointwise grading and pairwise comparison, and evaluation of the generated results. The model aims to provide a comprehensive framework for evaluating the performance of large language models based on human ratings and comparisons.
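CritiqueLLM's own templates and scripts are not reproduced here, but the two evaluation settings it names are easy to illustrate: pointwise grading asks for a single score, while pairwise comparison asks which of two outputs is better. The prompts below are generic illustrations under that assumption, not the project's actual prompts.

```python
# Generic illustrations of the two evaluation settings named above.
# These prompt templates are illustrative, not CritiqueLLM's own.

def pointwise_prompt(question: str, answer: str, reference: str | None = None) -> str:
    """Ask a judge model for a critique plus a single 1-10 score."""
    ref = f"\nReference answer:\n{reference}" if reference else ""
    return (f"Question:\n{question}\n\nModel answer:\n{answer}{ref}\n\n"
            "Write a short critique, then give a 1-10 score on the last line.")

def pairwise_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Ask a judge model to compare two answers and pick a winner."""
    return (f"Question:\n{question}\n\nAnswer A:\n{answer_a}\n\nAnswer B:\n{answer_b}\n\n"
            "Write a critique of both, then state which answer is better (A, B, or tie).")
```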
RAG-Survey
This repository is dedicated to collecting and categorizing papers related to Retrieval-Augmented Generation (RAG) for AI-generated content. It serves as a survey repository based on the paper 'Retrieval-Augmented Generation for AI-Generated Content: A Survey'. The repository is continuously updated to keep up with the rapid growth in the field of RAG.
awesome-llm-attributions
This repository focuses on unraveling the sources that large language models tap into for attribution or citation. It delves into the origins of facts, their utilization by the models, the efficacy of attribution methodologies, and challenges tied to ambiguous knowledge reservoirs, biases, and pitfalls of excessive attribution.
parlant
Parlant is a structured approach to building and guiding customer-facing AI agents. It allows developers to create and manage robust AI agents, providing specific feedback on agent behavior and helping understand user intentions better. With features like guidelines, glossary, coherence checks, dynamic context, and guided tool use, Parlant offers control over agent responses and behavior. Developer-friendly aspects include instant changes, Git integration, clean architecture, and type safety. It enables confident deployment with scalability, effective debugging, and validation before deployment. Parlant works with major LLM providers and offers client SDKs for Python and TypeScript. The tool facilitates natural customer interactions through asynchronous communication and provides a chat UI for testing new behaviors before deployment.
Awesome-LLM-RAG
This repository, Awesome-LLM-RAG, aims to record advanced papers on Retrieval Augmented Generation (RAG) in Large Language Models (LLMs). It serves as a resource hub for researchers interested in promoting their work related to LLM RAG by updating paper information through pull requests. The repository covers various topics such as workshops, tutorials, papers, surveys, benchmarks, retrieval-enhanced LLMs, RAG instruction tuning, RAG in-context learning, RAG embeddings, RAG simulators, RAG search, RAG long-text and memory, RAG evaluation, RAG optimization, and RAG applications.
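For readers new to the topic, the RAG pattern itself is simple to sketch: retrieve the passages most relevant to a query, then prepend them to the prompt. The snippet below is a bare-bones illustration that uses keyword overlap for retrieval and a hypothetical `call_llm` helper; real systems use embedding-based vector search.

```python
# Bare-bones RAG sketch: keyword-overlap retrieval + prompt augmentation.
# Real systems use embedding/vector search; `call_llm` is hypothetical.

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_answer(query: str, documents: list[str], call_llm) -> str:
    """Augment the prompt with retrieved context before asking the model."""
    context = "\n\n".join(retrieve(query, documents))
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}\n"
                    "Answer using only the context above.")
```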
awesome-deliberative-prompting
The 'awesome-deliberative-prompting' repository focuses on how to ask Large Language Models (LLMs) to produce reliable reasoning and make reason-responsive decisions through deliberative prompting. It includes success stories, prompting patterns and strategies, multi-agent deliberation, reflection and meta-cognition, text generation techniques, self-correction methods, reasoning analytics, limitations, failures, puzzles, datasets, tools, and other resources related to deliberative prompting. The repository provides a comprehensive overview of research, techniques, and tools for enhancing reasoning capabilities of LLMs.
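Deliberative prompting is largely a matter of prompt structure: ask the model to lay out its reasoning and weigh options before committing to an answer. A minimal two-step version might look like the sketch below, where the `call_llm` helper and prompt wording are illustrative rather than taken from any paper in the list.

```python
# Minimal deliberate-then-answer pattern; prompts are illustrative.

def deliberate_then_answer(question: str, call_llm) -> str:
    """First elicit explicit deliberation, then a decision grounded in it."""
    reasoning = call_llm(
        f"Question: {question}\n"
        "Think step by step: list the relevant considerations, "
        "possible answers, and the strongest objection to each."
    )
    return call_llm(
        f"Question: {question}\n\nDeliberation:\n{reasoning}\n\n"
        "Based only on this deliberation, give the final answer in one paragraph."
    )
```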
Awesome-Code-LLM
Awesome-Code-LLM is a curated collection of research papers and resources on large language models for code. It follows the same 'awesome list' format as the other repositories in this section, gathering papers, models, and benchmarks relevant to code generation and related code-intelligence tasks.
chatgpt-universe
ChatGPT is a large language model that can generate human-like text, translate languages, write different kinds of creative content, and answer questions in a conversational way. It is trained on a massive amount of text data and can understand and respond to a wide range of natural language prompts. The chatgpt-universe repository collects notes, links, and resources about ChatGPT and related models for anyone exploring what they can do.
NeMo-Guardrails
NeMo Guardrails is an open-source toolkit for easily adding _programmable guardrails_ to LLM-based conversational applications. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more.
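Getting started typically means writing a small rails configuration and loading it through the toolkit's Python API. The snippet below follows the usage pattern from the project's documentation; the config directory and message contents are placeholders.

```python
# Minimal usage sketch of the NeMo Guardrails Python API.
# "./config" is assumed to contain your config.yml and rail definitions.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Generate a guarded response for a user message (contents are placeholders).
response = rails.generate(messages=[
    {"role": "user", "content": "Hello! What can you do for me?"}
])
print(response["content"])
```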
DecryptPrompt
DecryptPrompt is a regularly updated collection of resources on prompting and large language models, including paper summaries, open-source models and datasets, and AIGC applications and use cases. Its README frames the list, tongue in cheek, as a survival guide for researchers feeling overwhelmed by the field's rapid pace.
awesome-generative-ai
A curated list of Generative AI projects, tools, artworks, and models
awesome-generative-information-retrieval
This repository contains a curated list of resources on generative information retrieval, including research papers, datasets, tools, and applications. Generative information retrieval is a subfield of information retrieval that uses generative models to generate new documents or passages of text that are relevant to a given query. This can be useful for a variety of tasks, such as question answering, summarization, and document generation. The resources in this repository are intended to help researchers and practitioners stay up-to-date on the latest advances in generative information retrieval.
awesome-generative-ai-guide
This repository serves as a comprehensive hub for updates on generative AI research, interview materials, notebooks, and more. It includes monthly best GenAI papers list, interview resources, free courses, and code repositories/notebooks for developing generative AI applications. The repository is regularly updated with the latest additions to keep users informed and engaged in the field of generative AI.
20 - OpenAI GPTs
Website Design Critique Expert
Critiques website designs and creates shareable summary graphics.
Trey Ratcliff's Fun Photo Critique GPT
Critiquing photos with humor and expertise, drawing from my 5,000 blog entries and books. Share your photo for a unique critique experience!
Legal Tech Generhater
I'll critique your legal tech ideas with my signature snark and design fittingly bad logos.
Apollo
Expert in art critique and analysis, knowledgeable in art history, theory, and psychology.
Image Generation with Selfcritique & Improvement
More accurate and easier image generation with self-critique & improvement! Try it now.
Academic Research Reviewer
Upon uploading a research paper, I provide a concise section-wise analysis covering the Abstract, Literature Review, Findings, Methodology, and Conclusion. I also critique the work, highlight its strengths, and answer any open questions from my knowledge base of open-source materials.
AutoExpert (Academic)
Upon uploading a research paper, I provide a concise analysis covering its authors, key findings, methodology, and relevance. I also critique the work, highlight its strengths, and identify any open questions from a professional perspective.
Executive Insight
I'm a Fortune 100 exec who critiques presentations, papers, emails, etc.
Arte Crítico
Expert in art criticism and curation, specializing in reviews and descriptions of artworks.
Riley
An interactive resume assistant, providing detailed insights on Randy's professional background.