
julius-gpt
Generate and publish your content from the command line with the help of AI (GPT) 🤯
Stars: 53

julius-gpt is a Node.js CLI and API tool that generates content such as blog posts and landing pages with Large Language Models (LLMs) from providers such as OpenAI. It can generate text in any language supported by the available LLMs. The tool offers different modes for content generation: automatic, interactive, or using a content template. Users can fine-tune the content generation process with completion parameters and create SEO-friendly content with post titles, descriptions, and slugs. Content can be published to WordPress, with image generation and RAG among the upcoming features. The tool also supports custom prompts for personalized content generation and offers various commands for WordPress-related tasks.
README:
This Node.js CLI and API gives you the ability to generate content (blog posts, landing pages, ...) with an LLM (OpenAI, ...). It can generate text in all languages supported by the available LLMs.
This project uses LangChain JS.
🔄 Different modes for generating content: automatic, interactive, or with a content template.
🧠 Supported LLMs: OpenAI (stable), Mistral (experimental), Claude (upcoming release), Groq (upcoming release).
🌍 All languages supported by the available LLMs.
🔥 SEO-friendly: generates the post title, description & slug.
✍️ Default or custom prompts.
⚙️ Fine-tuning with completion parameters.
📝 Publish content on WordPress.
🌐 API.
🔜 Upcoming features: image generation, RAG, publish on NextJS.
- Features
- How it Works?
- Warning
- Examples
- Installation
- CLI
- WordPress-related commands
- API
- Some Tools that can Help to Check Quality
- Credit
This component can be used in different modes:
- with the CLI (interactive mode, automatic mode, or with the help of a template).
- in your application with the API.
In interactive mode, the CLI will ask you for some parameters (topic/title, language, intent, audience, etc.).
In automatic mode, you need to supply all the necessary parameters to the command line. This mode of operation allows you to create a multitude of contents in series (for example, in a shell script).
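For example, a minimal shell script could drive the automatic mode over a list of topics (the topics below are illustrative, and each command is echoed for review; remove the `echo` to actually generate):

```shell
#!/bin/sh
# Batch sketch: one automatic-mode `julius post` run per topic.
# The topics are placeholders; each command is printed for review.
for topic in "Indoor gardening basics" "Composting at home"
do
  # Derive an output file name from the topic: lowercase, spaces -> dashes.
  slug=$(printf '%s' "$topic" | tr 'A-Z ' 'a-z-')
  echo julius post -tp "$topic" -f "./$slug"
done
```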
Both modes will use different predefined prompts to generate the content:
- Generate the outline of the post (with the SEO description, SEO title, the slug).
- Generate the introduction.
- Generate the content of the different headings of the outline.
- Generate the conclusion.
The final result is in Markdown and HTML.
A template contains a document structure with a series of prompts. Each prompt is executed in a specific order and replaced by the answer provided by the AI. It is possible to use different formats: Markdown, HTML, JSON, etc.
The main advantage of templates is the customisation of the output: you can use your own prompts. Templates are also useful if you want to produce different contents based on the same structure (product pages, landing pages, etc.).
One of the problems of AI content generation is the repetition of the main keywords.
This script also uses the temperature, frequency_penalty, and presence_penalty parameters to try to minimize this. See the OpenAI API documentation for more details.
When generating, the CLI gives you the ability to publish the content to your WordPress blog. Other CMSs, including headless CMSs, will be supported in the future.
This is an experimental project. You are welcome to suggest improvements, such as other prompts or other parameter values. You need an OpenAI API key to use this CLI, and the cost of the API calls is at your own expense. In all cases, you have to review the final output: AI can provide incorrect information.
Camping-cars écologiques ? Utopie ou réalité en 2024 ?
julius post -fp 1.5 -g -tp "5\ reasons\ to\ use\ AI\ for\ generating\ content" -f ./reasons-to-use-ai-content
Markdown result: 5 Reasons to Use AI for Generating Content
julius template-post -f ./dobermann -t ./template.md -i breed=dobermann -d
Template: template.md
Markdown result: dobermann.md
julius template-post -f ./dobermann -t ./template.html -i breed=dobermann -d
Template: template.html
HTML result: dobermann.html
The CLI and API are available as an NPM package.
# for the API
npm install julius-gpt -S
# for the CLI
npm install -g julius-gpt
The CLI has 4 groups of commands:
- prompt: custom prompt management.
- post: generate a post in interactive or automatic mode.
- template-post: generate content based on a content template.
- wp: WordPress-related commands: list, add, remove, and update WP sites & publish posts.
~ julius -h
Usage: julius [options] [command]
Generate and publish your content from the command line 🤯
Options:
-V, --version output the version number
-h, --help display help for command
Commands:
prompt Prompt related commands
post [options] Generate a post in interactive or automatic mode
template-post [options] Generate a post based on a content template
wp Wordpress related commands
You need to have an OpenAI API key to use this CLI.
You can specify your OpenAI API key with the -k option or with the environment variable OPENAI_API_KEY.
See the CLI help to get the list of the different options.
~ julius post -h
~ julius post -tp "5 reasons to use AI for generating content"
Use the other parameters to personalize content even further.
A more advanced command:
~ julius post -fp 1.5 -g -l french -tp "Emprunter\ avec\ un\ revenu\ de\ retraite\ :\ quelles\ sont\ les\ options\ \?" -f ./emprunter-argent-revenu-retraite -c Belgique -d
This command will generate a post in French, with a frequency penalty of 1.5, for an audience in Belgium. The topic (-tp argument) is written in French.
~ julius post -i
It is not necessary to use the other parameters. The CLI will ask you some questions about the topic, language, etc.
The template file can be in Markdown or HTML format. The template's extension is used to determine the output format.
~ julius template-post -t <file>.[md|html]
The CLI will execute all prompts mentioned in the template file. Each prompt short-code will be replaced by the output provided by the AI.
Template structure
Here is a simple example for the template file:
{{s:You are a prompt tester. You have to write your answers in a markdown code block.}}
{{c:your answer has to be "Content of prompt 1."}}
# Heading 1
{{c:your answer has to be "Content of prompt 2."}}
The prompt marked "s" is the system prompt. Prompts marked "c" are content prompts; they will be replaced by the output provided by the AI.
Like in LangChain, you can provide input variables in the template, like this one:
{{s:You are a prompt tester. You have to write your answers in a markdown code block, in language: {language}.}}
{{c:Quelle est la capitale de la France ?}}
# Heading 1
{{c:Quelle est la capitale de la Belgique ?}}
Now, you can execute this template with the following command:
~ julius template-post -t <template-file>.md -i language=french
This is an experimental feature and the template syntax will be modified in an upcoming release.
By default, the CLI uses the latest OpenAI model. We are working on support for the following ones:
Provider | Models | Status | .env variable API KEY |
---|---|---|---|
OpenAI | gpt-4, gpt-4-turbo-preview | Stable | OPENAI_API_KEY |
Mistral | mistral-small-latest, mistral-medium-latest, mistral-large-latest | Experimental | MISTRAL_API_KEY |
Anthropic | Claude | Next Release | NA |
Groq | Mistral, Llama | Next Release | NA |
All models require an API key. You can provide it either in the .env file or with the CLI parameter -k.
You can choose your model with the -m parameter:
~ julius post -m mistral-large-latest ....
Use the help to get the list of supported models:
~ julius post -h
or
~ julius template-post -h
Why custom prompts?
- The default prompts are too generic.
- Julius's default prompts are written in English. Custom prompts can be created for a specific language.
- Custom prompts make it possible to add a persona or writing style, remove the AI footprint, add a custom editorial brief, etc.
Julius uses a set of prompts for content generation that can be customized by creating a new version in a separate directory. Each prompt is stored in a different file.
File name | Description | Inputs |
---|---|---|
system.txt | Can be used as an editorial brief or to add important information such as personas, editorial style, objectives, ... | None |
audience-intent.txt | Used to generate the audience and intent based on the article's subject. | {language} {topic} |
outline.txt | Used to generate the article structure. | {language} {topic} {country} {audience} {intent} |
introduction.txt | Used to generate the article's introduction. | {language} {topic} |
conclusion.txt | Used to generate the article's conclusion. | {language} {topic} |
heading.txt | Used to generate the content of each heading. | {language} {headingTitle} {keywords} |
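As an illustration, a customized heading.txt could wrap the same input variables in a persona and style brief. The wording below is purely hypothetical; only the {language}, {headingTitle}, and {keywords} placeholders come from the table above:

```text
Write the section "{headingTitle}" of the article, in {language}.
Adopt the voice of a friendly domain expert and avoid generic filler phrases.
Use each of these keywords at most once, where it fits naturally: {keywords}
```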
1. Make a copy of the default prompts
~ julius prompt create [name] [folder]
e.g.:
~ julius prompt create discover ./my-prompts
This command will copy the default prompts into the folder ./my-prompts/discover.
2. Modify the prompts
Now, you can modify and/or translate the prompts in this folder.
3. Use your prompts in the CLI
In interactive mode, the CLI will ask you for the custom prompt path:
~ julius -i
You can also use the -pf CLI parameter to specify the folder path:
~ julius -pf ./my-prompts/discover ...
This command displays the list of all registered WordPress sites in the local file ~/.julius/wordpress.json.
The domain name or the id of the site can be used for the following commands.
~ julius wp ls
This command adds a new WordPress site to the local file ~/.julius/wordpress.json.
~ julius wp add www.domain.com:username:password
This command displays information about a registered WordPress site from the local file ~/.julius/wordpress.json.
~ julius wp info www.domain.com|id
This command removes a WordPress site from the local file ~/.julius/wordpress.json.
~ julius wp rm www.domain.com|id
This command exports the list of all registered WordPress sites from the local file ~/.julius/wordpress.json to the specified file.
~ julius wp export wordpress_sites.json
This command imports a list of WordPress sites into the local file ~/.julius/wordpress.json.
~ julius wp import wordpress_sites.json
This command displays the list of all categories of a WordPress site.
~ julius wp categories www.domain.com|id
This command creates a new post on a WordPress site. The JSON file must have the following structure:
{
"title": "The title of the post",
"slug": "the-slug-of-the-post",
"content": "The content of the post",
"seoTitle": "The SEO title of the post",
"seoDescription": "The SEO description of the post"
}
This JSON file can be generated with the command julius post or with the API.
By default, the WordPress REST API doesn't allow you to update the SEO title and description. This information is managed by different plugins, such as Yoast SEO. You can code a plugin for this.
A plugin example for Yoast can be found in this directory: julius-wp-plugin. You can create a zip and install it from the WordPress dashboard.
You can code something similar for other SEO plugins.
~ julius wp post www.domain.com|id categoryId post.json
- The first argument is the domain name or the ID of the site.
- The second argument is the ID of the category on this WordPress site. You can get the list of categories with the command julius wp categories www.domain.com|id.
- The third argument is a boolean indicating whether the WordPress site uses the Yoast SEO plugin. If true, the SEO title and description will be published.
- The fourth argument is the path to the JSON file containing the post.
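Putting it together, a sketch that writes a post.json with the structure shown above and prepares the publish command (the domain, category ID, and field values are placeholders; the final command is echoed for review rather than run):

```shell
#!/bin/sh
# Write a post.json with the structure expected by `julius wp post`.
cat > post.json <<'EOF'
{
  "title": "The title of the post",
  "slug": "the-slug-of-the-post",
  "content": "The content of the post",
  "seoTitle": "The SEO title of the post",
  "seoDescription": "The SEO description of the post"
}
EOF
# Category 12 is illustrative; list real IDs with `julius wp categories <site>`.
echo julius wp post www.domain.com 12 post.json
```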
This command updates a post on a WordPress site (title, content, SEO title & SEO description). The JSON file must have the following structure:
{
"title": "The title of the post",
"slug": "the-slug-of-the-post",
"content": "The content of the post",
"seoTitle": "The SEO title of the post",
"seoDescription": "The SEO description of the post"
}
This JSON file can be generated with the command julius post or with the API.
~ julius wp update www.domain.com|id slug post.json [-d, --update-date]
- The first argument is the domain name or the ID of the site.
- The second argument is the slug of the post to update.
- The third argument is the JSON file.
- The fourth argument (optional) indicates whether to update the publication date.
See the unit tests: tests/test-api.spec.ts
- Quillbot: an AI-powered paraphrasing tool that enhances your writing, with a grammar checker and plagiarism checker.
- Originality: an AI content detector and plagiarism checker.