
VMind
Not only automatic, but also intelligent. An intelligent data visualization system based on LLM.
Stars: 142

VMind is an open-source solution for intelligent visualization, providing an intelligent chart component based on LLM by VisActor. It allows users to create chart narrative works with natural language interaction, edit charts through dialogue, and export narratives as videos or GIFs. The tool is easy to use, scalable, supports various chart types, and offers one-click export functionality. Users can customize chart styles, specify themes, and aggregate data using LLM models. VMind aims to enhance efficiency in creating data visualization works through dialogue-based editing and natural language interaction.
README:
Not just automatic, but also fantastic. An open-source solution for intelligent visualization.
Introduction • Demo • Tutorial • API • OpenApi
@visactor/vmind is an intelligent chart component based on LLM, provided by VisActor, including dialog-based chart generation and editing capabilities. It provides a natural language interaction interface, allowing you to easily create chart narrative works with just one sentence and edit them through continuous dialogue, greatly improving your efficiency in creating data visualization works.
The main features of @visactor/vmind include:
- Easy to use: Just provide the data you want to display and a sentence describing the information you want to display, and @visactor/vmind will automatically generate the chart for you. Based on an existing chart, describe the modifications you want to make in one sentence, and @visactor/vmind will help you achieve the desired effect.
- Strong scalability: The components of @visactor/vmind can be easily extended and customized, and new functions and features can be added as needed. By default, the OpenAI GPT model is used, and you can easily replace it with any LLM service.
- Easy narrative: Based on the powerful chart narrative ability of @visactor/vchart, @visactor/vmind supports the generation of various chart types, including line charts, bar charts, pie charts, etc., and can also generate dynamic charts such as dynamic bar charts, making it easy for you to narrate your data. More chart types are being added. You can also use the dialog-based editing function to easily modify chart styles and animation effects, making it easy to create narratives.
- One-click export: @visactor/vmind comes with a chart export module, so you can export the created chart narrative as a video or GIF for display.
To run the demo, enter the VMind repository and execute:

```bash
# Install dependencies
$ rush update
# Start the demo page
$ rush docs
```
To develop VMind, enter the VMind repository and execute:

```bash
# Install dependencies
$ rush update
# Start the VMind development page
$ rush vmind
```
This starts the VMind development page. You need to set your LLM service URL and API key for it to work properly. You can modify the headers used when calling the LLM in packages/vmind/__tests__/browser/src/pages/DataInput.tsx, or create a new .env.local file in the packages/vmind folder with the following content:
```bash
VITE_SKYLARK_URL="Your service url of skylark model"
VITE_GPT_URL="Your service url of gpt model"
VITE_SKYLARK_KEY="Your api-key of skylark model"
VITE_GPT_KEY="Your api-key of gpt model"
# Optional: Vite proxy config for forwarding requests, in JSON string format
VITE_PROXY_CONFIG='{"proxy":{"/v1":{"target":"https://api.openai.com/","changeOrigin":true},"/openapi":{"target":"https://api.openai.com/","changeOrigin":true}}}'
```
These configurations will be automatically loaded when starting the development environment.
The directory structure of the VMind package is as follows:
- __tests__: Playground for development
- src/common: Common data processing, chart recommendation methods, and chart generation pipelines
- src/gpt: Code related to GPT-based intelligent chart generation
- src/skylark: Code related to skylark-based intelligent chart generation
- src/chart-to-video: Code related to exporting videos and GIFs
First, we need to install VMind in the project:

```bash
# Install with npm
npm install @visactor/vmind

# Install with yarn
yarn add @visactor/vmind
```
Next, import VMind at the top of the JavaScript file:

```javascript
import VMind from '@visactor/vmind';
```
VMind currently supports the OpenAI GPT-3.5 and GPT-4 models as well as the skylark-pro series models. You can specify the model type to call when initializing the VMind object and pass in the URL of the LLM service. Next, we initialize a VMind instance and pass in the model type and model URL:
```javascript
import VMind, { Model } from '@visactor/vmind';

const vmind = new VMind({
  url: LLM_SERVICE_URL, // URL of the LLM service
  model: Model.SKYLARK, // currently supports gpt-3.5, gpt-4, and skylark-pro models; the specified model will be called in subsequent chart generation
  headers: {
    'api-key': LLM_API_KEY
  } // headers are used directly as the request headers of the LLM request; you can put the model api key here
});
```
Here is the list of supported models:

```typescript
// Models that VMind supports; more models are under development
export enum Model {
  GPT3_5 = 'gpt-3.5-turbo',
  GPT4 = 'gpt-4',
  SKYLARK = 'skylark-pro',
  SKYLARK2 = 'skylark2-pro-4k'
}
```
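To switch models, change the model field at initialization. A minimal sketch selecting GPT-4 (LLM_SERVICE_URL and LLM_API_KEY are placeholders for your own service; note that which header carries the key depends on your provider, e.g. the OpenAI API itself expects an Authorization header rather than api-key):

```javascript
import VMind, { Model } from '@visactor/vmind';

// Same initialization as above, but selecting GPT-4.
// LLM_SERVICE_URL and LLM_API_KEY are placeholders for your own service.
const vmind = new VMind({
  url: LLM_SERVICE_URL,
  model: Model.GPT4,
  headers: { 'api-key': LLM_API_KEY }
});
```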
VMind supports datasets in both CSV and JSON formats.
To use CSV data in subsequent steps, you need to call the data processing method to extract field information and convert the data into a structured dataset. VMind provides a rule-based method, parseCSVData, to obtain field information:
```javascript
// Pass in the CSV string to obtain the fieldInfo and the JSON-structured dataset
const { fieldInfo, dataset } = vmind.parseCSVData(csv);
```
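For example, with the same sample data used in the JSON example below (a minimal sketch; any CSV string with a header row works the same way):

```javascript
// Illustrative sample data as a CSV string
const csv = `Product name,region,Sales
Coke,south,2350
Coke,east,1027
Coke,west,1027
Coke,north,1027`;

// Extract field info and convert the CSV into a JSON-structured dataset
const { fieldInfo, dataset } = vmind.parseCSVData(csv);
```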
We can also call the getFieldInfo method to obtain the fieldInfo by passing in a JSON-formatted dataset:
```javascript
// Pass in a JSON-formatted dataset to obtain the fieldInfo
const dataset = [
  { "Product name": "Coke", "region": "south", "Sales": 2350 },
  { "Product name": "Coke", "region": "east", "Sales": 1027 },
  { "Product name": "Coke", "region": "west", "Sales": 1027 },
  { "Product name": "Coke", "region": "north", "Sales": 1027 }
];

const fieldInfo = vmind.getFieldInfo(dataset);
```
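Each entry of the returned fieldInfo describes one column of the dataset. As an illustration only (the exact property names and type labels may differ between VMind versions; consult the API docs for the exact format), the result for the dataset above would look roughly like:

```javascript
// Illustrative shape only; not guaranteed output
[
  { fieldName: 'Product name', type: 'string' },
  { fieldName: 'region', type: 'string' },
  { fieldName: 'Sales', type: 'int' }
]
```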
We want to show "the changes in sales rankings of various car brands". Call the generateChart method and pass the data and the display content description directly to VMind:

```javascript
const userPrompt = 'show me the changes in sales rankings of various car brands';
// Call the chart generation interface to get the spec and the chart animation duration
const { spec, time } = await vmind.generateChart(userPrompt, fieldInfo, dataset);
```
In this way, we get the VChart spec of the corresponding dynamic chart. We can render the chart based on this spec:

```html
<body>
  <!-- Prepare a DOM element with a size (width and height) for VChart; you can also specify the size in the spec -->
  <div id="chart" style="width: 600px; height: 400px;"></div>
</body>
```

```javascript
import VChart from '@visactor/vchart';

// Create a VChart instance and render the generated spec
const vchart = new VChart(spec, { dom: 'chart' });
vchart.renderAsync();
```
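Putting the pieces together, here is a minimal end-to-end sketch combining the steps above (LLM_SERVICE_URL and LLM_API_KEY are placeholders for your own service, as in the initialization example):

```javascript
import VMind, { Model } from '@visactor/vmind';
import VChart from '@visactor/vchart';

// Placeholders: your LLM service URL and API key
const vmind = new VMind({
  url: LLM_SERVICE_URL,
  model: Model.GPT3_5,
  headers: { 'api-key': LLM_API_KEY }
});

async function generateAndRender(csv) {
  // Extract field info and a structured dataset from the CSV string
  const { fieldInfo, dataset } = vmind.parseCSVData(csv);

  // Ask the LLM to generate a chart spec from a one-sentence description
  const { spec, time } = await vmind.generateChart(
    'show me the changes in sales rankings of various car brands',
    fieldInfo,
    dataset
  );

  // Render the generated spec into the #chart container
  const vchart = new VChart(spec, { dom: 'chart' });
  vchart.renderAsync();
}
```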
Thanks to the capabilities of the large language model, users can describe further requirements in natural language and customize the chart. Users can specify different theme styles (currently only GPT-based chart generation supports this feature). For example, you can ask for a tech-style chart:
```javascript
// userPrompt can be in both Chinese and English
// Specify a tech-style chart
const userPrompt = 'show me the changes in sales rankings of various car brands, tech style';
const { spec, time } = await vmind.generateChart(userPrompt, fieldInfo, dataset);
```
You can also specify the chart type, field mapping, and other settings supported by VMind. For example:
```javascript
// Specify a line chart with car manufacturers as the x-axis
const userPrompt =
  'show me the changes in sales rankings of various car brands, tech style. Using a line chart, Manufacturer makes the x-axis';
const { spec, time } = await vmind.generateChart(userPrompt, fieldInfo, dataset);
```
Pass parameters when initializing the VMind object:

```typescript
import VMind from '@visactor/vmind';

const vmind = new VMind(params: {
  url?: string; // URL of the LLM service
  /** request headers, which have higher priority */
  headers?: Record<string, string>; // you can put the model api key here
  method?: string; // request method: POST or GET
  model?: string; // model name
  max_tokens?: number;
  temperature?: number; // recommended to set to 0
});
```
Specify your LLM service URL in url (the default is https://api.openai.com/v1/chat/completions). In subsequent calls, VMind will use the parameters in params to request the LLM service URL.
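For example, a minimal sketch of pointing VMind at a self-hosted, OpenAI-compatible service (the URL, header, and key below are placeholders for your own deployment):

```javascript
import VMind, { Model } from '@visactor/vmind';

// Placeholders: replace with your own service URL and credentials
const vmind = new VMind({
  url: 'https://your-llm-service.example.com/v1/chat/completions',
  model: Model.GPT3_5,
  method: 'POST',
  max_tokens: 2048,
  temperature: 0, // recommended to set to 0, as noted above
  headers: { 'api-key': 'YOUR_API_KEY' }
});
```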
📢 Note: The data aggregation function currently only supports GPT series models; more models will be supported soon.
When using the chart library to draw bar charts, line charts, etc., unaggregated data will hurt the visualization effect. At the same time, because no field filtering or sorting has been done, some visualization intents cannot be satisfied, for example: show me the top 10 departments with the most cost, or show me the sales of various products in the north.
VMind has supported intelligent data aggregation since version 1.2.2. This function treats the data input by the user as a data table, uses an LLM to generate SQL queries according to the user's command, queries data from the data table, and uses GROUP BY and SQL aggregation functions to group, aggregate, sort, and filter the data. Supported SQL clauses: SELECT, GROUP BY, WHERE, HAVING, ORDER BY, LIMIT. Supported aggregation functions: MAX(), MIN(), SUM(), COUNT(), AVG(). Complex SQL operations such as subqueries, JOIN, and conditional statements are not supported.
Use the dataQuery function of the VMind object to aggregate data. This method takes three parameters:
- userInput: user input; you can use the same input as for generateChart
- fieldInfo: dataset field information; the same as for generateChart, it can be obtained via parseCSVData or built by the user
- dataset: the dataset; the same as for generateChart, it can be obtained via parseCSVData or built by the user
```javascript
const { fieldInfo, dataset } = await vmind.dataQuery(userInput, fieldInfo, dataset);
```
The fieldInfo and dataset returned by this method are the field information and dataset after data aggregation, which can be used for chart generation.
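For example, for one of the intents mentioned above (a minimal sketch; the department and cost field names are hypothetical, and the SQL in the comment only illustrates the kind of query the LLM might generate, not guaranteed output):

```javascript
const userInput = 'show me the top 10 departments with the most cost';

// Internally, the LLM might generate a query along these lines (illustrative only):
//   SELECT department, SUM(cost) AS total_cost FROM dataset
//   GROUP BY department ORDER BY total_cost DESC LIMIT 10
const { fieldInfo: aggFieldInfo, dataset: aggDataset } = await vmind.dataQuery(
  userInput,
  fieldInfo,
  dataset
);

// The aggregated result can be passed directly to chart generation
const { spec, time } = await vmind.generateChart(userInput, aggFieldInfo, aggDataset);
```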
By default, the generateChart function performs data aggregation with the same user input before generating the chart. You can disable data aggregation by passing false as the fourth parameter:

```javascript
const userInput = 'show me the changes in sales rankings of various car brands';
// Pass false as the fourth parameter to disable data aggregation before generating a chart
const { spec, time } = await vmind.generateChart(userInput, fieldInfo, dataset, false);
```
Under development, stay tuned
If you would like to contribute, please read the Code of Conduct and Guide first.
Small streams converge to make great rivers and seas!
Similar Open Source Tools


web-llm
WebLLM is a modular and customizable javascript package that directly brings language model chats directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU. WebLLM is fully compatible with OpenAI API. That is, you can use the same OpenAI API on any open source models locally, with functionalities including json-mode, function-calling, streaming, etc. We can bring a lot of fun opportunities to build AI assistants for everyone and enable privacy while enjoying GPU acceleration.

oasis
OASIS is a scalable, open-source social media simulator that integrates large language models with rule-based agents to realistically mimic the behavior of up to one million users on platforms like Twitter and Reddit. It facilitates the study of complex social phenomena such as information spread, group polarization, and herd behavior, offering a versatile tool for exploring diverse social dynamics and user interactions in digital environments. With features like scalability, dynamic environments, diverse action spaces, and integrated recommendation systems, OASIS provides a comprehensive platform for simulating social media interactions at a large scale.

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

palimpzest
Palimpzest (PZ) is a tool for managing and optimizing workloads, particularly for data processing tasks. It provides a CLI tool and Python demos for users to register datasets, run workloads, and access results. Users can easily initialize their system, register datasets, and manage configurations using the CLI commands provided. Palimpzest also supports caching intermediate results and configuring for parallel execution with remote services like OpenAI and together.ai. The tool aims to streamline the workflow of working with datasets and optimizing performance for data extraction tasks.

curator
Bespoke Curator is an open-source tool for data curation and structured data extraction. It provides a Python library for generating synthetic data at scale, with features like programmability, performance optimization, caching, and integration with HuggingFace Datasets. The tool includes a Curator Viewer for dataset visualization and offers a rich set of functionalities for creating and refining data generation strategies.

neo4j-graphrag-python
The Neo4j GraphRAG package for Python is an official repository that provides features for creating and managing vector indexes in Neo4j databases. It aims to offer developers a reliable package with long-term commitment, maintenance, and fast feature updates. The package supports various Python versions and includes functionalities for creating vector indexes, populating them, and performing similarity searches. It also provides guidelines for installation, examples, and development processes such as installing dependencies, making changes, and running tests.

model2vec
Model2Vec is a technique to turn any sentence transformer into a really small static model, reducing model size by 15x and making the models up to 500x faster, with a small drop in performance. It outperforms other static embedding models like GLoVe and BPEmb, is lightweight with only `numpy` as a major dependency, offers fast inference, dataset-free distillation, and is integrated into Sentence Transformers, txtai, and Chonkie. Model2Vec creates powerful models by passing a vocabulary through a sentence transformer model, reducing dimensionality using PCA, and weighting embeddings using zipf weighting. Users can distill their own models or use pre-trained models from the HuggingFace hub. Evaluation can be done using the provided evaluation package. Model2Vec is licensed under MIT.

llama_index
LlamaIndex is a data framework for building LLM applications. It provides tools for ingesting, structuring, and querying data, as well as integrating with LLMs and other tools. LlamaIndex is designed to be easy to use for both beginner and advanced users, and it provides a comprehensive set of features for building LLM applications.

AnyGPT
AnyGPT is a unified multimodal language model that utilizes discrete representations for processing various modalities like speech, text, images, and music. It aligns the modalities for intermodal conversions and text processing. AnyInstruct dataset is constructed for generative models. The model proposes a generative training scheme using Next Token Prediction task for training on a Large Language Model (LLM). It aims to compress vast multimodal data on the internet into a single model for emerging capabilities. The tool supports tasks like text-to-image, image captioning, ASR, TTS, text-to-music, and music captioning.

storm
STORM is a LLM system that writes Wikipedia-like articles from scratch based on Internet search. While the system cannot produce publication-ready articles that often require a significant number of edits, experienced Wikipedia editors have found it helpful in their pre-writing stage. **Try out our [live research preview](https://storm.genie.stanford.edu/) to see how STORM can help your knowledge exploration journey and please provide feedback to help us improve the system 🙏!**

AutoNode
AutoNode is a self-operating computer system designed to automate web interactions and data extraction processes. It leverages advanced technologies like OCR (Optical Character Recognition), YOLO (You Only Look Once) models for object detection, and a custom site-graph to navigate and interact with web pages programmatically. Users can define objectives, create site-graphs, and utilize AutoNode via API to automate tasks on websites. The tool also supports training custom YOLO models for object detection and OCR for text recognition on web pages. AutoNode can be used for tasks such as extracting product details, automating web interactions, and more.

LlamaIndexTS
LlamaIndex.TS is a data framework for your LLM application. Use your own data with large language models (LLMs, OpenAI ChatGPT and others) in Typescript and Javascript.

Bard-API
The Bard API is a Python package that returns responses from Google Bard through the value of a cookie. It is an unofficial API that operates through reverse-engineering, utilizing cookie values to interact with Google Bard for users struggling with frequent authentication problems or unable to authenticate via Google Authentication. The Bard API is not a free service, but rather a tool provided to assist developers with testing certain functionalities due to the delayed development and release of Google Bard's API. It has been designed with a lightweight structure that can easily adapt to the emergence of an official API. Therefore, using it for any other purposes is strongly discouraged. If you have access to a reliable official PaLM-2 API or Google Generative AI API, replace the provided response with the corresponding official code. Check out https://github.com/dsdanielpark/Bard-API/issues/262.

ollama-r
The Ollama R library provides an easy way to integrate R with Ollama for running language models locally on your machine. It supports working with standard data structures for different LLMs, offers various output formats, and enables integration with other libraries/tools. The library uses the Ollama REST API and requires the Ollama app to be installed, with GPU support for accelerating LLM inference. It is inspired by Ollama Python and JavaScript libraries, making it familiar for users of those languages. The installation process involves downloading the Ollama app, installing the 'ollamar' package, and starting the local server. Example usage includes testing connection, downloading models, generating responses, and listing available models.

tensorrtllm_backend
The TensorRT-LLM Backend is a Triton backend designed to serve TensorRT-LLM models with Triton Inference Server. It supports features like inflight batching, paged attention, and more. Users can access the backend through pre-built Docker containers or build it using scripts provided in the repository. The backend can be used to create models for tasks like tokenizing, inferencing, de-tokenizing, ensemble modeling, and more. Users can interact with the backend using provided client scripts and query the server for metrics related to request handling, memory usage, KV cache blocks, and more. Testing for the backend can be done following the instructions in the 'ci/README.md' file.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: set LLM usage limits for users on different pricing tiers; track LLM usage on a per-user and per-organization basis; block or redact requests containing PIIs; improve LLM reliability with failovers, retries and caching; distribute API keys with rate limits and cost limits for internal development/production use cases; and distribute API keys with rate limits and cost limits for students.

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.