EasyNovelAssistant
A simple novel-writing assistant powered by LightChatAssistant-TypeB, a lightweight, unrestricted, and uncensored Japanese local LLM. With the local-only perpetual 'Generate forever' mode, it keeps generating so you can stack up lucky gacha hits. Text-to-speech is also supported.
Stars: 92
EasyNovelAssistant is a simple novel generation assistant powered by 'LightChatAssistant-TypeB', a lightweight and uncensored Japanese local LLM. Its local-only 'Generate forever' feature keeps generating continuously, stacking up lucky gacha draws, and text-to-speech is also supported. Users can directly use the internally invoked KoboldCpp and Style-Bert-VITS2, or generate images alongside the tool with EasySdxlWebUi. The tool is designed for local novel generation with a focus on ease of use and flexibility.
README:
A simple novel-writing assistant powered by LightChatAssistant-TypeB, a lightweight, unrestricted, and uncensored Japanese local LLM.
With the local-only perpetual 'Generate forever' mode, you keep generating and stack up lucky gacha hits. Text-to-speech is also supported.
You can use the internally invoked KoboldCpp and Style-Bert-VITS2 directly, or generate images with EasySdxlWebUi while using the tool.
Articles
- '[Uncensored] A GPU-powered local AI chat environment plus a goal-seek prompt for novel planning and writing looks like the strongest setup for generating spicy novels' by @kagami_kami_m
- Sample work '[AI Test Run] Sparring with Tsukumodou' and notes on its creation.
Videos
- A hands-on test of EasyNovelAssistant, 'The Losing Heroine's Confession'
Tweets
@AIiswonder, @umiyuki_ai, @dew_dew, @StelsRay, @kirimajiro, @Ak9TLSB3fwWnMzn, @Emanon_14, @liruk, @maru_ai29, @bla_tanuki, @muchkanensys, @shinshi78, 865, 186, @kurayamimousou, @boxheadroom, @luta_ai, 0026, @liruk, @kagami_kami_m, @AonekoSS, @maaibook, @corpsmanWelt, @kiyoshi_shin, @AINewsDev, @kgmkm_inma_ai, @AonekoSS, @StelsRay, @mikumiku_aloha, @kagami_kami_m, @2ewsHQJgnvkGNPr, @ainiji981, @Neve_AI, @WreckerAi, @ai_1610, @kagami_kami_m, @kohya_tech, @kohya_tech, @G13_Yuyang, 0611, 0549
- Assign images to the read-aloud audio to easily create subtitled videos
- Generate images, text, and audio simultaneously on a local PC with EasyNovelAssistant and EasySdxlWebUi
- Text-to-speech support in EasyNovelAssistant
If anything goes wrong during installation or updates, see the page linked here.
- Right-click Install-EasyNovelAssistant.bat, choose "Save link as", download it to your installation folder (an alphanumeric path with no spaces or Japanese characters), and run it. If "Windows protected your PC" appears, click "More info" and then "Run anyway".
- If you have no objection to downloading the related files from their distribution sites, enter y.
- Allow network access when Windows Security asks for permission.
- When the installation completes, EasyNovelAssistant starts automatically.
After installation:
- Launch with Run-EasyNovelAssistant.bat.
- Update with Update-EasyNovelAssistant.bat.
The next step is 'Your First Generation'.
- Added 'Ninja-v1-RP-expressive-v2'.
- Added 'Ninja-v1-RP-expressive', a new model its author Aratako is confident in.
  - It is a roleplay model, but it feels usable for other purposes as well.
  - If you want to roleplay (chat), check the prompt format and use KoboldCpp/koboldcpp.exe directly.
- Added support for the updated Japanese-TextGen-Kage.
- To prevent accidentally toggling 'Start/Stop Generation (Shift+F5)' in the 'Generate' menu, added separate 'Start Generation (F3)' and 'Stop Generation (F4)' commands.
- Added support for the Ch200 replacement versions of Japanese-TextGen-MoE-TEST-2x7B-NSFW and Japanese-Chat-Evolve-TEST-NSFW.
  - Note that the maximum context size of Japanese-Chat-Evolve-TEST-NSFW has been lowered from 8K to 4K.
- Added support for the renamed files of Japanese-TextGen-MoE-TEST-2x7B-NSFW.
- Added two new models by dddump, the author of Japanese-TextGen-MoE-TEST-2x7B-NSFW.
  - Japanese-Chat-Evolve-TEST-NSFW supports a maximum context size of up to 8K.
  - Japanese-TextGen-Kage supports a maximum context size of up to 32K.
    - On a GeForce RTX 3060 12GB, a maximum context size of 16K still allows a full load of the GPU layers at L33.
  - This is a large update, so please let us know if you run into any issues.
- The prompt input field now has tabs, making it easier to compare and tune multiple prompts.
  - You can open multiple files and folders; drag & drop is also supported.
  - If you designate a tab as the intro prompt, its contents are prepended to the prompts of other tabs at generation time (see the sketch after this list).
  - Samples for this chapter-by-chapter writing workflow are provided in sample/GoalSeek/ (based on the article by @kagami_kami_m).
    - Drag & drop the GoalSeek folder to load the whole folder at once.
    - For example, when generating from the 10-序章 (prologue) tab, the 01-執筆 (writing) tab designated as the intro prompt is automatically prepended.
    - You can also prepend the previous chapter as memory, or summarize chapters you have already written and prepend them as needed.
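A minimal sketch of the intro-prompt behavior described above, using made-up tab names and a plain dictionary; the actual tab handling is internal to EasyNovelAssistant:

```python
# Illustration only: how an intro prompt tab could be prepended to the tab
# being generated. The tab names and data structure are assumptions.
tabs = {
    "01-執筆": "You are a skilled novelist. Follow the outline and style notes faithfully.",
    "10-序章": "Write the prologue: the protagonist arrives in the harbor town at dusk.",
}

intro_tab = "01-執筆"    # tab designated as the intro prompt
target_tab = "10-序章"   # tab the user asked to generate

# The intro prompt is simply concatenated in front of the target tab's prompt.
prompt = tabs[intro_tab] + "\n\n" + tabs[target_tab]
print(prompt)
```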
- Added support for the recent rush of distinctive lightweight model releases.
  - The format of llm_sequence.json has changed.
    - See EasyNovelAssistant/setup/res/default_llm_sequence.json for details.
- Added 'Duplicate tab' to the context menu of the input-field tabs.
- Added KoboldCpp/Launch-Ocuteus-v1-Q8_0-C16K-L0.bat, which lets you try Ocuteus-v1 with KoboldCpp.
  - To speed things up by loading more GPU layers, copy the bat, rename it to something like Launch-Ocuteus-v1-Q8_0-C16K-L33.bat, and change set GPU_LAYERS=0 to set GPU_LAYERS=33 (a scripted version of this edit is sketched below).
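A minimal sketch of automating that copy-and-edit, assuming the launcher bat lives in a local KoboldCpp/ folder as named above and that 33 layers fit on your GPU:

```python
from pathlib import Path

# Copy the L0 launcher to an L33 variant and patch the GPU_LAYERS line.
# Paths follow the file names mentioned above; adjust them to your install.
src = Path("KoboldCpp/Launch-Ocuteus-v1-Q8_0-C16K-L0.bat")
dst = Path("KoboldCpp/Launch-Ocuteus-v1-Q8_0-C16K-L33.bat")

text = src.read_text(encoding="utf-8")
dst.write_text(text.replace("set GPU_LAYERS=0", "set GPU_LAYERS=33"), encoding="utf-8")
print(f"Wrote {dst}")
```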
- Added 'Font', 'Font size', and 'Invert theme colors' to the 'Settings' menu.
  - The font selection list is very tall, so use the up/down arrow keys to pick a font.
  - You can also fine-tune the colors by editing the following entries in config.json (a scripted edit is sketched after the snippet):
"foreground_color": "#CCCCCC",
"select_foreground_color": "#FFFFFF",
"background_color": "#222222",
"select_background_color": "#555555",
- When the specified 'generation length' exceeds the 'maximum context size', the generation length is now shortened automatically (the clamping idea is sketched below).
  - If you saw output unrelated to your input after updating, this change fixes it.
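A minimal sketch of that clamping idea with made-up names; the real logic is internal to EasyNovelAssistant and also has to account for the tokens already consumed by the prompt:

```python
def clamp_generation_length(requested: int, max_context: int, prompt_tokens: int) -> int:
    """Illustrative only: keep prompt plus output within the context window."""
    available = max(0, max_context - prompt_tokens)
    return min(requested, available)

# Example: a 4096-token request against a 4K context already holding a 1000-token prompt.
print(clamp_generation_length(4096, 4096, 1000))  # -> 3096
```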
- How to generate long texts with a 'generation length' of 4096 or more:
  - Switch the model from Vecteus (4K) to LightChatAssistant or Ninja.
  - Set the 'maximum context size' to 6144 or more.
  - Set the 'generation length' to 4096 or more.
  - Raising the maximum context size also raises VRAM usage, so if the model will not run, lower its GPU layer count (e.g. L33). An equivalent API request is sketched after this list.
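For reference, a minimal sketch of sending the same settings directly to KoboldCpp's KoboldAI-compatible endpoint, assuming it is already running locally on its default port 5001; the values mirror the bullets above:

```python
import json
import urllib.request

payload = {
    "prompt": "昔々、あるところに",   # your story prompt
    "max_context_length": 6144,       # maximum context size: 6144 or more
    "max_length": 4096,               # generation length: 4096 or more
}

req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["results"][0]["text"])
```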
- If a sample/user.json file exists, a 'User' menu is now added, just like the other sample/*.json files (a hypothetical loader is sketched below).
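A minimal sketch of how such a menu might be populated, assuming purely for illustration that the JSON maps menu labels to prompt text; the real schema is whatever the bundled sample/*.json files use:

```python
import json
from pathlib import Path
import tkinter as tk

# Hypothetical: build a "User" menu from sample/user.json if the file exists.
root = tk.Tk()
menubar = tk.Menu(root)

user_json = Path("sample/user.json")
if user_json.exists():
    entries = json.loads(user_json.read_text(encoding="utf-8"))  # assumed: {label: prompt}
    user_menu = tk.Menu(menubar, tearoff=False)
    for label, prompt in entries.items():
        user_menu.add_command(label=label, command=lambda p=prompt: print(p))
    menubar.add_cascade(label="User", menu=user_menu)

root.config(menu=menubar)
root.mainloop()
```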
- Installation and Updates
  - Detailed installation and update instructions, plus troubleshooting.
- Your First Generation
  - A tutorial for EasyNovelAssistant.
- Choosing a Model and GPU Layer Count
  - How to use the wide range of models efficiently.
- Tips
  - Small pieces of useful information.
- Creating Videos
  - Assign images to the read-aloud audio to easily create subtitled videos.
- Changelog
  - Past update history.
The contents of this repository are licensed under the MIT License, with the following exceptions:
- A list of everything that will be downloaded is shown at install time.
- EasyNovelAssistant/setup/res/tkinter-PythonSoftwareFoundationLicense.zip is under the Python Software Foundation License.
- The JVNV derivatives downloaded by Style-Bert-VITS2 are under CC BY-SA 4.0 DEED.
Similar Open Source Tools
chatgpt-web-sea
ChatGPT Web Sea is an open-source project based on ChatGPT-web for secondary development. It supports all models that comply with the OpenAI interface standard, allows for model selection, configuration, and extension, and is compatible with OneAPI. The tool includes a Chinese ChatGPT tuning guide, supports file uploads, and provides model configuration options. Users can interact with the tool through a web interface, configure models, and perform tasks such as model selection, API key management, and chat interface setup. The project also offers Docker deployment options and instructions for manual packaging.
wechat-bot
WeChat Bot is a simple and easy-to-use WeChat robot based on chatgpt and wechaty. It can help you automatically reply to WeChat messages or manage WeChat groups/friends. The tool requires configuration of AI services such as Xunfei, Kimi, or ChatGPT. Users can customize the tool to automatically reply to group or private chat messages based on predefined conditions. The tool supports running in Docker for easy deployment and provides a convenient way to interact with various AI services for WeChat automation.
langchain4j-aideepin-web
The langchain4j-aideepin-web repository is the frontend project of langchain4j-aideepin, an open-source, offline deployable retrieval enhancement generation (RAG) project based on large language models such as ChatGPT and application frameworks such as Langchain4j. It includes features like registration & login, multi-sessions (multi-roles), image generation (text-to-image, image editing, image-to-image), suggestions, quota control, knowledge base (RAG) based on large models, model switching, and search engine switching.
video2blog
video2blog is an open-source project aimed at converting videos into textual notes. The tool follows a process of extracting video information using yt-dlp, downloading the video, downloading subtitles if available, translating subtitles if not in Chinese, generating Chinese subtitles using whisper if no subtitles exist, converting subtitles to articles using gemini, and manually inserting images from the video into the article. The tool provides a solution for creating blog content from video resources, enhancing accessibility and content creation efficiency.
chatgpt-on-wechat
This project is a smart chatbot based on a large model, supporting WeChat, WeChat Official Account, Feishu, and DingTalk access. You can choose from GPT-3.5/GPT-4.0/Claude/Wenxin Yiyan/Xunfei Xinghuo/Tongyi Qianwen/Gemini/LinkAI/ZhipuAI, which can process text, voice, and images, and access external resources such as operating systems and the Internet through plugins, supporting the development of enterprise AI applications based on proprietary knowledge bases.
AMchat
AMchat is a large language model that integrates advanced math concepts, exercises, and solutions. The model is based on the InternLM2-Math-7B model and is specifically designed to answer advanced math problems. It provides a comprehensive dataset that combines Math and advanced math exercises and solutions. Users can download the model from ModelScope or OpenXLab, deploy it locally or using Docker, and even retrain it using XTuner for fine-tuning. The tool also supports LMDeploy for quantization, OpenCompass for evaluation, and various other features for model deployment and evaluation. The project contributors have provided detailed documentation and guides for users to utilize the tool effectively.
chatgpt-web
ChatGPT Web is a web application that provides access to the ChatGPT API. It offers two non-official methods to interact with ChatGPT: through the ChatGPTAPI (using the `gpt-3.5-turbo-0301` model) or through the ChatGPTUnofficialProxyAPI (using a web access token). The ChatGPTAPI method is more reliable but requires an OpenAI API key, while the ChatGPTUnofficialProxyAPI method is free but less reliable. The application includes features such as user registration and login, synchronization of conversation history, customization of API keys and sensitive words, and management of users and keys. It also provides a user interface for interacting with ChatGPT and supports multiple languages and themes.
llm-jp-eval
LLM-jp-eval is a tool designed to automatically evaluate Japanese large language models across multiple datasets. It provides functionalities such as converting existing Japanese evaluation data to text generation task evaluation datasets, executing evaluations of large language models across multiple datasets, and generating instruction data (jaster) in the format of evaluation data prompts. Users can manage the evaluation settings through a config file and use Hydra to load them. The tool supports saving evaluation results and logs using wandb. Users can add new evaluation datasets by following specific steps and guidelines provided in the tool's documentation. It is important to note that using jaster for instruction tuning can lead to artificially high evaluation scores, so caution is advised when interpreting the results.
awesome-rag
Awesome RAG is a curated list of retrieval-augmented generation (RAG) in large language models. It includes papers, surveys, general resources, lectures, talks, tutorials, workshops, tools, and other collections related to retrieval-augmented generation. The repository aims to provide a comprehensive overview of the latest advancements, techniques, and applications in the field of RAG.
emohaa-free-api
Emohaa AI Free API is a free API that allows you to access the Emohaa AI chatbot. Emohaa AI is a powerful chatbot that can understand and respond to a wide range of natural language queries. It can be used for a variety of purposes, such as customer service, information retrieval, and language translation. The Emohaa AI Free API is easy to use and can be integrated into any application. It is a great way to add AI capabilities to your projects without having to build your own chatbot from scratch.
GitHubSentinel
GitHub Sentinel is an intelligent information retrieval and high-value content mining AI Agent designed for the era of large models (LLMs). It is aimed at users who need frequent and large-scale information retrieval, especially open source enthusiasts, individual developers, and investors. The main features include subscription management, update retrieval, notification system, report generation, multi-model support, scheduled tasks, graphical interface, containerization, continuous integration, and the ability to track and analyze the latest dynamics of GitHub open source projects and expand to other information channels like Hacker News for comprehensive information mining and analysis capabilities.
gzm-design
Gzm Design is a free and open-source poster designer developed using the latest mainstream technologies such as Vue3, Vite4, TypeScript, etc. It provides features like PSD import, JSON import, multiple pages support, shortcut key support, template import, layer management, ruler tool, pen tool, element editing, preview, file download, canvas zooming and dragging, border stroke, filling, blending modes, text formatting, group handling, canvas size modification, rich text support, masking, shadow effects, undo/redo functionality, QR code tool, barcode tool, and ruler line npm package encapsulation.
RTXZY-MD
RTXZY-MD is a bot tool that supports file hosting, QR code, pairing code, and RestApi features. Users must fill in the Apikey for the bot to function properly. It is not recommended to install the bot on platforms lacking ffmpeg, imagemagick, webp, or express.js support. The tool allows for 95% implementation of website api and supports free and premium ApiKeys. Users can join group bots and get support from Sociabuzz. The tool can be run on Heroku with specific buildpacks and is suitable for Windows/VPS/RDP users who need Git, NodeJS, FFmpeg, and ImageMagick installations.
For similar tasks
tock
Tock is an open conversational AI platform for building bots. It offers a natural language processing open source stack compatible with various tools, a user interface for building stories and analytics, a conversational DSL for different programming languages, built-in connectors for text/voice channels, toolkits for custom web/mobile integration, and the ability to deploy anywhere in the cloud or on-premise with Docker.
StoryToolKit
StoryToolkitAI is a film editing tool that utilizes AI to transcribe, index scenes, search through footage, and create stories. It offers features such as automatic transcription, translation, story creation, speaker detection, project file management, and more. The tool works locally on your machine and integrates with DaVinci Resolve Studio 18. It aims to streamline the editing process by leveraging AI capabilities and enhancing user efficiency.
StoryToolkitAI
StoryToolkitAI is a film editing tool that utilizes AI to transcribe, index scenes, search through footage, and create stories. It offers features like full video indexing, automatic transcriptions and translations, compatibility with OpenAI GPT and ollama, story editor for screenplay writing, speaker detection, project file management, and more. It integrates with DaVinci Resolve Studio 18 and offers planned features like automatic topic classification and integration with other AI tools. The tool is developed by Octavian Mot and is actively being updated with new features based on user needs and feedback.
springboot-openai-chatgpt
The springboot-openai-chatgpt repository is an open-source project for a super AI brain that utilizes GPT technology to quickly generate language content such as copies, love letters, and questions. Users can input keywords to enhance work efficiency and creativity. The AI brain combines powerful question-answering systems and knowledge graphs to provide comprehensive and accurate answers. It supports programming tasks, generates code using GPT, and continuously strengthens its capabilities with growing data to provide superior intelligent applications.
AI-Director
AI-Director is a repository focused on AI video production tools and methods. It includes modules for generating script and storyboards, providing cinematography suggestions, and assisting with video editing. The repository aims to streamline the video production process by leveraging AI technologies to enhance creativity and efficiency.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
LocalAI
LocalAI is a free and open-source OpenAI alternative that acts as a drop-in replacement REST API compatible with OpenAI (Elevenlabs, Anthropic, etc.) API specifications for local AI inferencing. It allows users to run LLMs, generate images, audio, and more locally or on-premises with consumer-grade hardware, supporting multiple model families and not requiring a GPU. LocalAI offers features such as text generation with GPTs, text-to-audio, audio-to-text transcription, image generation with stable diffusion, OpenAI functions, embeddings generation for vector databases, constrained grammars, downloading models directly from Huggingface, and a Vision API. It provides a detailed step-by-step introduction in its Getting Started guide and supports community integrations such as custom containers, WebUIs, model galleries, and various bots for Discord, Slack, and Telegram. LocalAI also offers resources like an LLM fine-tuning guide, instructions for local building and Kubernetes installation, projects integrating LocalAI, and a how-tos section curated by the community. It encourages users to cite the repository when utilizing it in downstream projects and acknowledges the contributions of various software from the community.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: it is self-contained, with no need for a DBMS or cloud service; it exposes an OpenAPI interface that is easy to integrate with existing infrastructure (e.g. a cloud IDE); and it supports consumer-grade GPUs.
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.