
julius-gpt
Generate and publish your content from the command line with the help of AI (GPT) 🤯
Stars: 53

julius-gpt is a Node.js CLI and API tool that enables users to generate content such as blog posts and landing pages using Large Language Models (LLMs) like OpenAI. It supports generating text in multiple languages provided by the available LLMs. The tool offers different modes for content generation, including automatic, interactive, or using a content template. Users can fine-tune the content generation process with completion parameters and create SEO-friendly content with post titles, descriptions, and slugs. Additionally, users can publish content on WordPress and access upcoming features like image generation and RAG. The tool also supports custom prompts for personalized content generation and offers various commands for WordPress-related tasks.
README:
This Node.js CLI and API gives you the ability to generate content (blog posts, landing pages, ...) with an LLM (OpenAI, ...). It can generate text in all languages supported by the available LLMs.
This project uses LangChain JS.
🔄 Different modes for generating content: automatic, interactive, or with a content template.
🧠 Supported LLMs: OpenAI (stable), Mistral (experimental), Claude (upcoming release), Groq (upcoming release).
🌍 All languages supported by the available LLMs.
🔥 SEO friendly: generate post title, description & slug.
✍️ Default or custom prompts.
⚙️ Fine-tuning with completion parameters.
📝 Publish content on WordPress.
🌐 API.
🔜 Upcoming features: image generation, RAG, publish on NextJS.
- Features
- How it Works?
- Warning
- Examples
- Installation
- CLI
- Wordpress related commands
- API
- Some Tools that can Help to Check Quality
- Credit
This component can be used in different modes:
- with the CLI (interactive mode, automatic mode, or with the help of a template).
- In your application with the API.
In interactive mode, the CLI will ask you for some parameters (topic/title, language, intent, audience, etc.).
In automatic mode, you need to supply all the necessary parameters on the command line. This mode of operation lets you generate many pieces of content in series (for example, from a shell script).
Both modes will use different predefined prompts to generate the content:
- Generate the outline of the post (with the SEO title, SEO description, and slug).
- Generate the introduction.
- Generate the content of the different headings of the outline.
- Generate the conclusion.
The final result is in Markdown and HTML.
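To make the flow concrete, here is a minimal TypeScript sketch of that sequence of calls. It is illustrative only: the prompt texts and the `complete` helper are hypothetical, not julius-gpt's actual prompts or API.

```typescript
// Illustrative sketch of the generation pipeline (hypothetical prompts and helper,
// not julius-gpt's real implementation).
type Complete = (prompt: string) => Promise<string>;

async function generatePost(topic: string, language: string, complete: Complete): Promise<string> {
  // 1. Outline (the real CLI also produces the SEO title, description and slug here)
  const outline = await complete(
    `Generate the outline of a blog post about "${topic}" in ${language}. Return one heading per line.`
  );

  // 2. Introduction
  const introduction = await complete(
    `Write the introduction of a blog post about "${topic}" in ${language}.`
  );

  // 3. One completion per heading of the outline
  const headings = outline.split("\n").filter((h) => h.trim().length > 0);
  const sections: string[] = [];
  for (const heading of headings) {
    sections.push(await complete(`Write the content of the section "${heading}" in ${language}.`));
  }

  // 4. Conclusion
  const conclusion = await complete(
    `Write the conclusion of a blog post about "${topic}" in ${language}.`
  );

  // The real CLI renders both Markdown and HTML; this sketch only joins the Markdown parts.
  return [introduction, ...sections, conclusion].join("\n\n");
}
```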
A template contains a document structure with a series of prompts. Each prompt is executed in a specific order and replaced by the answer provided by the AI. Different formats can be used: Markdown, HTML, JSON, etc.
The main advantage of using a template is the customisation of the output. You can use your own prompts. Templates are also interesting if you want to produce different pieces of content based on the same structure (product pages, landing pages, etc.).
One of the problems of AI content generation is the repetition of the main keywords.
This script also uses the temperature, frequency_penalty, and presence_penalty parameters to try to minimize this.
See the OpenAI API documentation for more details.
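For illustration, here is how these parameters map onto a raw OpenAI chat completion call with the official Node SDK (a hand-written sketch, not julius-gpt's internal code, which goes through LangChain JS):

```typescript
import OpenAI from "openai";

// Sketch only: shows where temperature, frequency_penalty and presence_penalty go.
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await openai.chat.completions.create({
  model: "gpt-4-turbo-preview",
  temperature: 0.8,        // higher values -> more varied wording
  frequency_penalty: 1.5,  // penalizes tokens that repeat often (e.g. the main keyword)
  presence_penalty: 0.5,   // encourages the model to introduce new terms
  messages: [
    { role: "user", content: "Write a short paragraph about ecological campervans." },
  ],
});

console.log(completion.choices[0].message.content);
```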
When generating, the CLI gives you the ability to publish the content to your WordPress blog. Other CMSs will be supported in the future, in particular some headless CMSs.
This is an experimental project. You are welcome to suggest improvements, like other prompts and other values for the parameters. The cost of the API calls is not included in the price of the CLI. You need to have an OpenAI API key to use this CLI. In all cases, you have to review the final output. AI can provide incorrect information.
Camping-cars écologiques ? Utopie ou réalité en 2024 ?
julius post -fp 1.5 -g -tp "5\ reasons\ to\ use\ AI\ for\ generating\ content" -f ./reasons-to-use-ai-content
Markdown result: 5 Reasons to Use AI for Generating Content
julius template-post -f ./dobermann -t ./template.md -i breed=dobermann -d
Template: template.md
Markdown result: dobermann.md
julius template-post -f ./dobermann -t ./template.html -i breed=dobermann -d
Template: template.html
HTML result: dobermann.html
The CLI and API are available as an npm package.
# for the API
npm install julius-gpt -S
# for the CLI
npm install -g julius-gpt
The CLI has 4 groups of commands:
- prompt: custom prompt management.
- post: generate a post in interactive or auto mode.
- template-post: generate content based on a template.
- wp: WordPress-related commands: list, add, remove, and update WP sites, and publish posts.
~ julius -h
Usage: julius [options] [command]
Generate and publish your content from the command line 🤯
Options:
-V, --version output the version number
-h, --help display help for command
Commands:
prompt Prompt related commands
post [options] Generate a post in interactive or automatic mode
template-post [options] Generate a post based on a content template
wp Wordpress related commands. The
You need to have an OpenAI API key to use this CLI.
You can specify your OpenAI API key with the -k option or with the OPENAI_API_KEY environment variable.
See the CLI help to get the list of the different options.
~ julius post -h
~ julius post -tp "5 reasons to use AI for generating content"
Use the other parameters to personalize content even further.
A more advanced command
~ julius post -fp 1.5 -g -l french -tp "Emprunter\ avec\ un\ revenu\ de\ retraite\ :\ quelles\ sont\ les\ options\ \?" -f ./emprunter-argent-revenu-retraite -c Belgique -d
This command will generate a post in French, with a frequency penalty of 1.5, for an audience in Belgium. The topic (-tp argument) is written in French.
~ julius post -i
It is not necessary to use the other parameters. The CLI will ask you some questions about the topic, language, ...
The template file can be in Markdown or HTML format. The template's extension is used to determine the format of the final output.
~ julius template-post -t <file>.[md|html]
The CLI will execute all prompts mentioned in the template file. Each prompt short-code will be replaced by the output provided by the AI.
Template structure
Here is a simple example of a template file:
{{s:You are a prompt tester. You have to write your answers in a markdown code block.}}
{{c:your answer has to be "Content of prompt 1."}}
# Heading 1
{{c:your answer has to be "Content of prompt 2."}}
Prompt "s" is the system prompt Prompt with "c" are content prompt. they will be replaced by the output provided by the AI.
As in LangChain, you can provide input variables in the template, like this:
{{s:You are a prompt tester. You have to write your answers in a markdown code block, in the language: {language}.}}
{{c:Quelle est la capitale de la France ?}}
# Heading 1
{{c:Quelle est la capitale de la Belgique ?}}
Now, you can execute this template with the following command:
~ julius template-post -t <template-file>.md -i language=french
This is an experimental feature and the template syntax will be modified in an upcoming release.
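For the curious, here is a rough idea of how such {{s:...}} and {{c:...}} short-codes could be resolved. This is a hand-written illustration of the principle, with a hypothetical askLLM helper; it is not julius-gpt's actual template engine.

```typescript
// Illustration of the short-code principle (not the actual implementation).
async function renderTemplate(
  template: string,
  askLLM: (systemPrompt: string, prompt: string) => Promise<string>
): Promise<string> {
  // Extract the optional system prompt {{s:...}} and remove it from the output.
  const systemMatch = template.match(/\{\{s:([\s\S]*?)\}\}/);
  const systemPrompt = systemMatch ? systemMatch[1].trim() : "";
  let output = template.replace(/\{\{s:[\s\S]*?\}\}/g, "").trimStart();

  // Execute each content prompt {{c:...}} in order and splice the answer in place.
  for (const match of [...output.matchAll(/\{\{c:([\s\S]*?)\}\}/g)]) {
    const answer = await askLLM(systemPrompt, match[1].trim());
    output = output.replace(match[0], answer);
  }
  return output;
}
```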
By default, the CLI uses the latest OpenAI model. We are working on supporting the following ones:
| Provider | Models | Status | .env variable API KEY |
|---|---|---|---|
| OpenAI | gpt-4, gpt-4-turbo-preview | Stable | OPENAI_API_KEY |
| Mistral | mistral-small-latest, mistral-medium-latest, mistral-large-latest | Experimental | MISTRAL_API_KEY |
| Anthropic | Claude | Next Release | NA |
| Groq | Mistral, Llama | Next Release | NA |
All models require an API key. You can provide it either in the .env file or with the CLI parameter -k.
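For example, assuming the CLI picks up a .env file from the directory where you run it, it could contain the variables listed in the table above (the values below are placeholders):

```
OPENAI_API_KEY=sk-...
MISTRAL_API_KEY=...
```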
You can choose your model with the -m parameter:
~ julius post -m mistral-large-latest ....
Use the help to get the list of available models:
~ julius post -h
or
~ julius template-post -h
Why custom prompts?
- The default ones are too generic.
- Julius's default prompts are written in English. Custom prompts can be created for a specific language.
- They give you the possibility to add a persona or a writing style, remove the AI footprint, add a custom editorial brief, ....
Julius uses a set of prompts for content generation that can be customized by creating a new version in a separate directory. Each prompt is stored in a different file.
| File name | Description | Inputs |
|---|---|---|
| system.txt | Can be used as an editorial brief or to add important information such as personas, editorial style, objectives, ... | None |
| audience-intent.txt | Used to generate the audience and intent based on the article's subject. | {language} {topic} |
| outline.txt | Used to generate the article structure. | {language} {topic} {country} {audience} {intent} |
| introduction.txt | Used to generate the article's introduction. | {language} {topic} |
| conclusion.txt | Used to generate the article's conclusion. | {language} {topic} |
| heading.txt | Used to generate the content of each heading. | {language} {headingTitle} {keywords} |
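As an illustration, a customized heading.txt could look like the following (hypothetical prompt text; the placeholders are the inputs listed in the table above):

```
Write, in {language}, the content of the section "{headingTitle}".
Adopt a conversational style and naturally work in the following keywords: {keywords}.
Do not repeat the section title in the text.
```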
1. Make a copy of the default prompts
~ julius prompt create [name] [folder]
e.g.:
~ julius prompt create discover ./my-prompts
This command will copy the default prompts into the folder ./my-prompts/discover.
2. Modify the prompts
Now, you can modify and/or translate the prompts in this folder
3. Use your prompts in the CLI
In interactive mode, the CLI will ask you for the custom prompt path:
~ julius -i
You can also use the CLI parameter -pf to specify the folder path:
~ julius -pf ./my-prompts/discover ...
This command displays the list of all registered Wordpress sites in the local file ~/.julius/wordpress.json.
The domain name or the id of the site can be used for the following commands.
~ julius wp ls
This command adds a new Wordpress site to the local file ~/.julius/wordpress.json.
~ julius wp add www.domain.com:username:password
This command displays information about a registered WordPress site from the local file ~/.julius/wordpress.json.
~ julius wp info www.domain.com|id
This command removes a Wordpress site from the local file ~/.julius/wordpress.json.
~ julius wp rm www.domain.com|id
This command exports the list of all registered WordPress sites (stored in the local file ~/.julius/wordpress.json) to a file.
~ julius wp export wordpress_sites.json
This command imports a list of WordPress sites from a file into the local file ~/.julius/wordpress.json.
~ julius wp import wordpress_sites.json
This command displays the list of all categories of a Wordpress site.
~ julius wp categories www.domain.com|id
This command creates a new post on a WordPress site. The JSON file must have the following structure:
{
  "title": "The title of the post",
  "slug": "the-slug-of-the-post",
  "content": "The content of the post",
  "seoTitle": "The SEO title of the post",
  "seoDescription": "The SEO description of the post"
}
This JSON file can be generated with the command julius post or with the API.
By default, the Wordpress REST API doesn't allow you to update the SEO title and description. This information is managed by different plugins, such as Yoast SEO. You can code a plugin for this.
A plugin example for Yoast can be found in this directory: julius-wp-plugin. You can create a zip and install it from the WordPress dashboard.
You can code something similar for other SEO plugins.
~ julius wp post www.domain.com|id categoryId post.json
- The first argument is the domain name or the ID of the site.
- The second argument is the ID of the category on this WordPress site. You can get the list of categories with the command julius wp categories www.domain.com|id
- The third argument is a boolean to indicate if the WP used Yoast SEO plugin. If true, the SEO title and description will be published.
- The fourth argument is the path to the JSON file containing the post.
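For context, publishing a post through the standard WordPress REST API boils down to a call like the following sketch (illustrative only; julius-gpt performs this for you, and the stock /wp-json/wp/v2/posts route shown here does not cover the SEO fields discussed above):

```typescript
// Sketch of a direct call to the stock WordPress REST API (Node 18+, global fetch).
const site = "https://www.domain.com";
const auth = Buffer.from("username:application-password").toString("base64");

const response = await fetch(`${site}/wp-json/wp/v2/posts`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Basic ${auth}`,
  },
  body: JSON.stringify({
    title: "The title of the post",
    slug: "the-slug-of-the-post",
    content: "The content of the post",
    categories: [3], // category ID, e.g. obtained with `julius wp categories`
    status: "draft",
  }),
});

console.log(await response.json());
```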
This command updates a post on a WordPress site (title, content, SEO title & SEO description). The JSON file must have the following structure:
{
  "title": "The title of the post",
  "slug": "the-slug-of-the-post",
  "content": "The content of the post",
  "seoTitle": "The SEO title of the post",
  "seoDescription": "The SEO description of the post"
}
This JSON file can be generated with the command julius post or with the API.
~ julius wp update www.domain.com|id slug post.json [-d, --update-date]
- The first argument is the domain name or the ID of the site.
- The second argument is the slug of the post to update.
- The third argument is the JSON file.
- The fourth argument (optional, -d or --update-date) indicates whether the publication date should be updated.
See the unit tests: tests/test-api.spec.ts
- Quillbot: an AI-powered paraphrasing tool that will enhance your writing; it also includes a grammar checker and plagiarism checker.
- Originality: AI Content Detector and Plagiarism Checker.
Similar Open Source Tools

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

latex2ai
LaTeX2AI is a plugin for Adobe Illustrator that allows users to use editable text labels typeset in LaTeX inside an Illustrator document. It provides a seamless integration of LaTeX functionality within the Illustrator environment, enabling users to create and edit LaTeX labels, manage item scaling behavior, set global options, and save documents as PDF with included LaTeX labels. The tool simplifies the process of including LaTeX-generated content in Illustrator designs, ensuring accurate scaling and alignment with other elements in the document.

tribe
Tribe AI is a low code tool designed to rapidly build and coordinate multi-agent teams. It leverages the langgraph framework to customize and coordinate teams of agents, allowing tasks to be split among agents with different strengths for faster and better problem-solving. The tool supports persistent conversations, observability, tool calling, human-in-the-loop functionality, easy deployment with Docker, and multi-tenancy for managing multiple users and teams.

Whisper-TikTok
Discover Whisper-TikTok, an innovative AI-powered tool that leverages the prowess of Edge TTS, OpenAI-Whisper, and FFMPEG to craft captivating TikTok videos. Whisper-TikTok effortlessly generates accurate transcriptions from audio files and integrates Microsoft Edge Cloud Text-to-Speech API for vibrant voiceovers. The program orchestrates the synthesis of videos using a structured JSON dataset, generating mesmerizing TikTok content in minutes.

geti-sdk
The Intel® Geti™ SDK is a python package that enables teams to rapidly develop AI models by easing the complexities of model development and enhancing collaboration between teams. It provides tools to interact with an Intel® Geti™ server via the REST API, allowing for project creation, downloading, uploading, deploying for local inference with OpenVINO, setting project and model configuration, launching and monitoring training jobs, and media upload and prediction. The SDK also includes tutorial-style Jupyter notebooks demonstrating its usage.

OrionChat
Orion is a web-based chat interface that simplifies interactions with multiple AI model providers. It provides a unified platform for chatting and exploring various large language models (LLMs) such as Ollama, OpenAI (GPT model), Cohere (Command-r models), Google (Gemini models), Anthropic (Claude models), Groq Inc., Cerebras, and SambaNova. Users can easily navigate and assess different AI models through an intuitive, user-friendly interface. Orion offers features like browser-based access, code execution with Google Gemini, text-to-speech (TTS), speech-to-text (STT), seamless integration with multiple AI models, customizable system prompts, language translation tasks, document uploads for analysis, and more. API keys are stored locally, and requests are sent directly to official providers' APIs without external proxies.

SWELancer-Benchmark
SWE-Lancer is a benchmark repository containing datasets and code for the paper 'SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?'. It provides instructions for package management, building Docker images, configuring environment variables, and running evaluations. Users can use this tool to assess the performance of language models in real-world freelance software engineering tasks.

open-source-slack-ai
This repository provides a ready-to-run basic Slack AI solution that allows users to summarize threads and channels using OpenAI. Users can generate thread summaries, channel overviews, channel summaries since a specific time, and full channel summaries. The tool is powered by GPT-3.5-Turbo and an ensemble of NLP models. It requires Python 3.8 or higher, an OpenAI API key, Slack App with associated API tokens, Poetry package manager, and ngrok for local development. Users can customize channel and thread summaries, run tests with coverage using pytest, and contribute to the project for future enhancements.

lmql
LMQL is a programming language designed for large language models (LLMs) that offers a unique way of integrating traditional programming with LLM interaction. It allows users to write programs that combine algorithmic logic with LLM calls, enabling model reasoning capabilities within the context of the program. LMQL provides features such as Python syntax integration, rich control-flow options, advanced decoding techniques, powerful constraints via logit masking, runtime optimization, sync and async API support, multi-model compatibility, and extensive applications like JSON decoding and interactive chat interfaces. The tool also offers library integration, flexible tooling, and output streaming options for easy model output handling.

Tiger
Tiger is a community-driven project developing a reusable and integrated tool ecosystem for LLM Agent Revolution. It utilizes Upsonic for isolated tool storage, profiling, and automatic document generation. With Tiger, you can create a customized environment for your agents or leverage the robust and publicly maintained Tiger curated by the community itself.

MCP-Bridge
MCP-Bridge is a middleware tool designed to provide an openAI compatible endpoint for calling MCP tools. It acts as a bridge between the OpenAI API and MCP tools, allowing developers to leverage MCP tools through the OpenAI API interface. The tool facilitates the integration of MCP tools with the OpenAI API by providing endpoints for interaction. It supports non-streaming and streaming chat completions with MCP, as well as non-streaming completions without MCP. The tool is designed to work with inference engines that support tool call functionalities, such as vLLM and ollama. Installation can be done using Docker or manually, and the application can be run to interact with the OpenAI API. Configuration involves editing the config.json file to add new MCP servers. Contributions to the tool are welcome under the MIT License.

AI-Scientist
The AI Scientist is a comprehensive system for fully automatic scientific discovery, enabling Foundation Models to perform research independently. It aims to tackle the grand challenge of developing agents capable of conducting scientific research and discovering new knowledge. The tool generates papers on various topics using Large Language Models (LLMs) and provides a platform for exploring new research ideas. Users can create their own templates for specific areas of study and run experiments to generate papers. However, caution is advised as the codebase executes LLM-written code, which may pose risks such as the use of potentially dangerous packages and web access.

genai-for-marketing
This repository provides a deployment guide for utilizing Google Cloud's Generative AI tools in marketing scenarios. It includes step-by-step instructions, examples of crafting marketing materials, and supplementary Jupyter notebooks. The demos cover marketing insights, audience analysis, trendspotting, content search, content generation, and workspace integration. Users can access and visualize marketing data, analyze trends, improve search experience, and generate compelling content. The repository structure includes backend APIs, frontend code, sample notebooks, templates, and installation scripts.

testzeus-hercules
Hercules is the world’s first open-source testing agent designed to handle the toughest testing tasks for modern web applications. It turns simple Gherkin steps into fully automated end-to-end tests, making testing simple, reliable, and efficient. Hercules adapts to various platforms like Salesforce and is suitable for CI/CD pipelines. It aims to democratize and disrupt test automation, making top-tier testing accessible to everyone. The tool is transparent, reliable, and community-driven, empowering teams to deliver better software. Hercules offers multiple ways to get started, including using PyPI package, Docker, or building and running from source code. It supports various AI models, provides detailed installation and usage instructions, and integrates with Nuclei for security testing and WCAG for accessibility testing. The tool is production-ready, open core, and open source, with plans for enhanced LLM support, advanced tooling, improved DOM distillation, community contributions, extensive documentation, and a bounty program.

warc-gpt
WARC-GPT is an experimental retrieval augmented generation pipeline for web archive collections. It allows users to interact with WARC files, extract text, generate text embeddings, visualize embeddings, and interact with a web UI and API. The tool is highly customizable, supporting various LLMs, providers, and embedding models. Users can configure the application using environment variables, ingest WARC files, start the server, and interact with the web UI and API to search for content and generate text completions. WARC-GPT is designed for exploration and experimentation in exploring web archives using AI.
For similar tasks

basehub
JavaScript / TypeScript SDK for BaseHub, the first AI-native content hub. **Features:** * ✨ Infers types from your BaseHub repository... _meaning IDE autocompletion works great._ * 🏎️ No dependency on graphql... _meaning your bundle is more lightweight._ * 🌐 Works everywhere `fetch` is supported... _meaning you can use it anywhere._

crewAI
CrewAI is a cutting-edge framework designed to orchestrate role-playing autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. It enables AI agents to assume roles, share goals, and operate in a cohesive unit, much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions. With features like role-based agent design, autonomous inter-agent delegation, flexible task management, and support for various LLMs, CrewAI offers a dynamic and adaptable solution for both development and production workflows.

PromptChains
ChatGPT Queue Prompts is a collection of prompt chains designed to enhance interactions with large language models like ChatGPT. These prompt chains help build context for the AI before performing specific tasks, improving performance. Users can copy and paste prompt chains into the ChatGPT Queue extension to process prompts in sequence. The repository includes example prompt chains for tasks like conducting AI company research, building SEO optimized blog posts, creating courses, revising resumes, enriching leads for CRM, personal finance document creation, workout and nutrition plans, marketing plans, and more.

laion.ai
laion.ai is a repository hosting content for the website https://laion.ai. It includes blog posts, markdown files for content editing, and credits to contributors. The content covers various topics such as about page, impressum, FAQ, team list, and blog feed.

Thinking_in_Java_MindMapping
Thinking_in_Java_MindMapping is a repository that started as a project to create mind maps based on the book 'Java Programming Ideas'. Over time, it evolved into a collection of programming notes, blog posts, book summaries, personal reflections, and even gaming content. The repository covers a wide range of topics, allowing the author to freely express thoughts and ideas. The content is diverse and reflects the author's dedication to consistency and creativity.

csinva.github.io
csinva.github.io is a repository maintained by Chandan, a Senior Researcher at Microsoft Research, focusing on interpretable machine learning. The repository contains slides, research overviews, cheat sheets, notes, blog posts, and personal information related to machine learning, statistics, and neuroscience. It offers resources for presentations, summaries of recent papers, cheat sheets for various courses, and posts on different aspects of machine learning and neuroscience advancements.

AI-Writing-Assistant
DeepWrite AI is an AI writing assistant tool created with the help of ChatGPT3. It is designed to generate perfect blog posts with utmost clarity. The tool is currently at version 1.0 with plans for further improvements. It is an open-source project, welcoming contributions. An extension has been developed for using the tool directly in Notepad, currently supported only on Calmly Writer. The tool requires installation and setup, utilizing technologies like React, Next, TailwindCSS, Node, and Express. For support, users can message the creator on Instagram. The creator, Sabir Khan, is an undergraduate student of Computer Science from Mumbai, known for frequently creating innovative projects.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.

AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.