underthesea
Underthesea - Vietnamese NLP Toolkit
Stars: 1668
Underthesea is an open-source Vietnamese Natural Language Processing toolkit that provides an easy API for applying pretrained NLP models to tasks such as word segmentation, part-of-speech tagging, named entity recognition, text classification, and dependency parsing. The toolkit also includes a Conversational AI Agent for chatting with an AI assistant specialized in Vietnamese NLP. It supports multiple Python versions and offers tutorials for tasks including sentence segmentation, text normalization, tagging, classification, sentiment analysis, named entity recognition, language detection, translation, and text-to-speech conversion. It also provides Vietnamese NLP datasets, and upcoming features include Automatic Speech Recognition.
README:
Underthesea is:
🌊 A Vietnamese NLP toolkit. Underthesea is a suite of open source Python modules, data sets, and tutorials supporting research and development in Vietnamese Natural Language Processing. We provide an extremely easy API for quickly applying pretrained NLP models to your Vietnamese text, for tasks such as word segmentation, part-of-speech tagging (PoS), named entity recognition (NER), text classification, and dependency parsing.
🎁 Support Us! Every bit of support helps us achieve our goals. Thank you so much. 💝💝💝
🎉 New in v9.1.5! Conversational AI Agent is here! Use agent("Xin chào") to chat with an AI assistant specialized in Vietnamese NLP. Supports OpenAI and Azure OpenAI. 🚀✨
To install underthesea, simply:
```bash
$ pip install underthesea
```
✨🍰✨ Satisfaction, guaranteed.
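A quick way to verify the install is to import the package and run one of the pretrained pipelines; a minimal sketch using `sent_tokenize` from the examples below:
```python
# Minimal smoke test: one pretrained pipeline on a short Vietnamese text.
from underthesea import sent_tokenize

print(sent_tokenize("Xin chào. Rất vui được gặp bạn."))
# Expect a list of two sentences.
```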
Install with extras (note: use quotes in zsh):
```bash
$ pip install "underthesea[deep]"   # Deep learning support
$ pip install "underthesea[voice]"  # Text-to-Speech support
$ pip install "underthesea[agent]"  # Conversational AI agent
```
Sentence Segmentation - Breaking text into individual sentences
Usage
```python
>>> from underthesea import sent_tokenize
>>> text = 'Taylor cho biết lúc đầu cô cảm thấy ngại với cô bạn thân Amanda nhưng rồi mọi thứ trôi qua nhanh chóng. Amanda cũng thoải mái với mối quan hệ này.'
>>> sent_tokenize(text)
[
  "Taylor cho biết lúc đầu cô cảm thấy ngại với cô bạn thân Amanda nhưng rồi mọi thứ trôi qua nhanh chóng.",
  "Amanda cũng thoải mái với mối quan hệ này."
]
```
Text Normalization - Standardizing textual data representation and address conversion
Usage
```python
>>> from underthesea import text_normalize
>>> text_normalize("Ðảm baỏ chất lựơng phòng thí nghịêm hoá học")
"Đảm bảo chất lượng phòng thí nghiệm hóa học"
```
Address Conversion
```python
>>> from underthesea import convert_address
>>> result = convert_address("Phường Phúc Xá, Quận Ba Đình, Thành phố Hà Nội")
>>> result.converted
"Phường Hồng Hà, Thành phố Hà Nội"
>>> result.mapping_type
<MappingType.MERGED: 'merged'>
```
Supports abbreviations
```python
>>> result = convert_address("P. Phúc Xá, Q. Ba Đình, TP. Hà Nội")
>>> result.converted
"Phường Hồng Hà, Thành phố Hà Nội"
```
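The same call works over a batch of legacy addresses; a minimal sketch using only the `converted` and `mapping_type` fields shown above:
```python
from underthesea import convert_address

legacy_addresses = [
    "Phường Phúc Xá, Quận Ba Đình, Thành phố Hà Nội",
    "P. Phúc Xá, Q. Ba Đình, TP. Hà Nội",
]
for address in legacy_addresses:
    result = convert_address(address)
    # Both fields are demonstrated in the examples above.
    print(f"{address} -> {result.converted} ({result.mapping_type})")
```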
Tagging - Word segmentation, POS tagging, chunking, dependency parsing
Word Segmentation
```python
>>> from underthesea import word_tokenize
>>> word_tokenize("Chàng trai 9X Quảng Trị khởi nghiệp từ nấm sò")
["Chàng trai", "9X", "Quảng Trị", "khởi nghiệp", "từ", "nấm", "sò"]
>>> word_tokenize("Chàng trai 9X Quảng Trị khởi nghiệp từ nấm sò", format="text")
"Chàng_trai 9X Quảng_Trị khởi_nghiệp từ nấm sò"
```
POS Tagging
```python
>>> from underthesea import pos_tag
>>> pos_tag('Chợ thịt chó nổi tiếng ở Sài Gòn bị truy quét')
[('Chợ', 'N'),
 ('thịt', 'N'),
 ('chó', 'N'),
 ('nổi tiếng', 'A'),
 ('ở', 'E'),
 ('Sài Gòn', 'Np'),
 ('bị', 'V'),
 ('truy quét', 'V')]
```
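Tokens can be filtered by tag; for example, keeping only the proper nouns (tagged `Np` in the output above):
```python
from underthesea import pos_tag

tagged = pos_tag('Chợ thịt chó nổi tiếng ở Sài Gòn bị truy quét')
# 'Np' marks proper nouns in the example output above.
proper_nouns = [word for word, tag in tagged if tag == 'Np']
print(proper_nouns)  # ['Sài Gòn']
```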
Chunking
```python
>>> from underthesea import chunk
>>> chunk('Bác sĩ bây giờ có thể thản nhiên báo tin bệnh nhân bị ung thư?')
[('Bác sĩ', 'N', 'B-NP'),
 ('bây giờ', 'P', 'B-NP'),
 ('có thể', 'R', 'O'),
 ('thản nhiên', 'A', 'B-AP'),
 ('báo', 'V', 'B-VP'),
 ('tin', 'N', 'B-NP'),
 ('bệnh nhân', 'N', 'B-NP'),
 ('bị', 'V', 'B-VP'),
 ('ung thư', 'N', 'B-NP'),
 ('?', 'CH', 'O')]
```
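The third column uses BIO chunk tags, so noun phrases can be recovered by merging each `B-NP` token with any following `I-NP` continuations. A minimal sketch (the `I-NP` handling follows the standard BIO convention and is an assumption here, since the example output contains only `B-NP` tags):
```python
from underthesea import chunk

def noun_phrases(text):
    """Collect noun phrases from BIO chunk tags."""
    phrases, current = [], []
    for word, _, tag in chunk(text):
        if tag == 'B-NP':
            if current:
                phrases.append(' '.join(current))
            current = [word]
        elif tag == 'I-NP' and current:
            current.append(word)
        else:
            if current:
                phrases.append(' '.join(current))
            current = []
    if current:
        phrases.append(' '.join(current))
    return phrases

print(noun_phrases('Bác sĩ bây giờ có thể thản nhiên báo tin bệnh nhân bị ung thư?'))
# ['Bác sĩ', 'bây giờ', 'tin', 'bệnh nhân', 'ung thư']
```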
Dependency Parsing
```bash
$ pip install underthesea[deep]
```
```python
>>> from underthesea import dependency_parse
>>> dependency_parse('Tối 29/11, Việt Nam thêm 2 ca mắc Covid-19')
[('Tối', 5, 'obl:tmod'),
 ('29/11', 1, 'flat:date'),
 (',', 1, 'punct'),
 ('Việt Nam', 5, 'nsubj'),
 ('thêm', 0, 'root'),
 ('2', 7, 'nummod'),
 ('ca', 5, 'obj'),
 ('mắc', 7, 'nmod'),
 ('Covid-19', 8, 'nummod')]
```
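Each triple is (token, head index, relation), with 1-based head indices and 0 marking the root, so heads can be resolved back to their words:
```python
from underthesea import dependency_parse

parse = dependency_parse('Tối 29/11, Việt Nam thêm 2 ca mắc Covid-19')
tokens = [token for token, _, _ in parse]
for token, head, relation in parse:
    # Head index 0 is the artificial root; otherwise indices are 1-based.
    head_word = 'ROOT' if head == 0 else tokens[head - 1]
    print(f'{token} --{relation}--> {head_word}')
```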
Named Entity Recognition - Identifying named entities (e.g., names, locations)
Usage
```python
>>> from underthesea import ner
>>> text = 'Chưa tiết lộ lịch trình tới Việt Nam của Tổng thống Mỹ Donald Trump'
>>> ner(text)
[('Chưa', 'R', 'O', 'O'),
 ('tiết lộ', 'V', 'B-VP', 'O'),
 ('lịch trình', 'V', 'B-VP', 'O'),
 ('tới', 'E', 'B-PP', 'O'),
 ('Việt Nam', 'Np', 'B-NP', 'B-LOC'),
 ('của', 'E', 'B-PP', 'O'),
 ('Tổng thống', 'N', 'B-NP', 'O'),
 ('Mỹ', 'Np', 'B-NP', 'B-LOC'),
 ('Donald', 'Np', 'B-NP', 'B-PER'),
 ('Trump', 'Np', 'B-NP', 'I-PER')]
```
Deep Learning Model
```bash
$ pip install underthesea[deep]
```
```python
>>> from underthesea import ner
>>> text = "Bộ Công Thương xóa một tổng cục, giảm nhiều đầu mối"
>>> ner(text, deep=True)
[{'entity': 'B-ORG', 'word': 'Bộ'},
 {'entity': 'I-ORG', 'word': 'Công'},
 {'entity': 'I-ORG', 'word': 'Thương'}]
```
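The deep model emits one `{'entity', 'word'}` dict per token, so full spans can be assembled by joining each `B-` tag with its `I-` continuations; a small sketch over the output format shown above:
```python
from underthesea import ner

def merge_entities(text):
    """Join B-/I- tagged tokens from the deep NER model into (span, label) pairs."""
    entities, words, label = [], [], None
    for token in ner(text, deep=True):
        tag, word = token['entity'], token['word']
        if tag.startswith('B-'):
            if words:
                entities.append((' '.join(words), label))
            words, label = [word], tag[2:]
        elif tag.startswith('I-') and words:
            words.append(word)
        else:  # 'O' or a stray continuation: close any open span
            if words:
                entities.append((' '.join(words), label))
            words, label = [], None
    if words:
        entities.append((' '.join(words), label))
    return entities

print(merge_entities('Bộ Công Thương xóa một tổng cục, giảm nhiều đầu mối'))
# [('Bộ Công Thương', 'ORG')]
```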
Classification - Text classification and sentiment analysis
Text Classification
```python
>>> from underthesea import classify
>>> classify('HLV đầu tiên ở Premier League bị sa thải sau 4 vòng đấu')
['The thao']
>>> classify('Hội đồng tư vấn kinh doanh Asean vinh danh giải thưởng quốc tế')
['Kinh doanh']
>>> classify('Lãi suất từ BIDV rất ưu đãi', domain='bank')
['INTEREST_RATE']
```
Sentiment Analysis
```python
>>> from underthesea import sentiment
>>> sentiment('hàng kém chất lg,chăn đắp lên dính lông lá khắp người. thất vọng')
'negative'
>>> sentiment('Sản phẩm hơi nhỏ so với tưởng tượng nhưng chất lượng tốt, đóng gói cẩn thận.')
'positive'
>>> sentiment('Đky qua đường link ở bài viết này từ thứ 6 mà giờ chưa thấy ai lhe hết', domain='bank')
['CUSTOMER_SUPPORT#negative']
>>> sentiment('Xem lại vẫn thấy xúc động và tự hào về BIDV của mình', domain='bank')
['TRADEMARK#positive']
```
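A common use is triaging feedback in bulk; a minimal sketch with the general-domain model, which returns plain 'positive'/'negative' labels as shown above:
```python
from underthesea import sentiment

reviews = [
    'hàng kém chất lg,chăn đắp lên dính lông lá khắp người. thất vọng',
    'Sản phẩm hơi nhỏ so với tưởng tượng nhưng chất lượng tốt, đóng gói cẩn thận.',
]
negative_reviews = [review for review in reviews if sentiment(review) == 'negative']
print(negative_reviews)
```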
Prompt-based Classification
```bash
$ pip install underthesea[prompt]
$ export OPENAI_API_KEY=YOUR_KEY
```
```python
>>> from underthesea import classify
>>> classify("HLV ngoại đòi gần tỷ mỗi tháng dẫn dắt tuyển Việt Nam", model='prompt')
Thể thao
```
Lang Detect - Identifying the Language of Text
Lang Detect API. Powered by the FastText language identification model, with pure-Rust inference via underthesea_core.
Usage examples in script
```python
>>> from underthesea import lang_detect
>>> lang_detect("Cựu binh Mỹ trả nhật ký nhẹ lòng khi thấy cuộc sống hòa bình tại Việt Nam")
vi
```
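Because `lang_detect` returns a language code, it can filter Vietnamese lines out of a mixed corpus; a small sketch (assuming `'vi'` is the code for Vietnamese, as in the example above):
```python
from underthesea import lang_detect

lines = [
    'Cựu binh Mỹ trả nhật ký nhẹ lòng khi thấy cuộc sống hòa bình tại Việt Nam',
    'This line is English and should be filtered out.',
]
vietnamese_only = [line for line in lines if lang_detect(line) == 'vi']
print(vietnamese_only)
```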
Translation - Translating Vietnamese text to English
Deep Learning Model
```bash
$ pip install underthesea[deep]
```
```python
>>> from underthesea import translate
>>> translate("Hà Nội là thủ đô của Việt Nam")
'Hanoi is the capital of Vietnam'
>>> translate("Ẩm thực Việt Nam nổi tiếng trên thế giới")
'Vietnamese cuisine is famous around the world'
>>> translate("I love Vietnamese food", source_lang='en', target_lang='vi')
'Tôi yêu ẩm thực Việt Nam'
```
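Since `translate` accepts explicit `source_lang`/`target_lang` arguments, both directions can share one loop; a short sketch (passing `'vi'`→`'en'` explicitly is assumed to match the default direction shown above):
```python
from underthesea import translate

jobs = [
    ('Hà Nội là thủ đô của Việt Nam', 'vi', 'en'),
    ('I love Vietnamese food', 'en', 'vi'),
]
for text, src, tgt in jobs:
    print(translate(text, source_lang=src, target_lang=tgt))
```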
Text-to-Speech - Converting written text into spoken audio
Text to Speech API. Thanks to the awesome work from NTT123/vietTTS.
Install extended dependencies and models
```bash
$ pip install "underthesea[voice]"
$ underthesea download-model VIET_TTS_V0_4_1
```
Usage examples in script
```python
>>> from underthesea.pipeline.tts import tts
>>> tts("Cựu binh Mỹ trả nhật ký nhẹ lòng khi thấy cuộc sống hòa bình tại Việt Nam")
# A new audio file named sound.wav will be generated.
```
Usage examples in command line
```sh
$ underthesea tts "Cựu binh Mỹ trả nhật ký nhẹ lòng khi thấy cuộc sống hòa bình tại Việt Nam"
```
Conversational AI Agent - Chat with AI for Vietnamese NLP tasks
Conversational AI Agent with OpenAI and Azure OpenAI support.
Install extended dependencies
```bash
$ pip install "underthesea[agent]"
$ export OPENAI_API_KEY=your_api_key
# Or for Azure OpenAI:
# export AZURE_OPENAI_API_KEY=your_key
# export AZURE_OPENAI_ENDPOINT=https://xxx.openai.azure.com
```
Usage examples in script
```python
>>> from underthesea import agent
>>> agent("Xin chào!")
'Xin chào! Tôi có thể giúp gì cho bạn?'
>>> agent("NLP là gì?")
'NLP (Natural Language Processing) là xử lý ngôn ngữ tự nhiên...'
>>> agent("Cho ví dụ về word tokenization tiếng Việt")
'Word tokenization trong tiếng Việt là quá trình...'
# Reset conversation
>>> agent.reset()
```
Supports Azure OpenAI
```python
>>> agent("Hello", provider="azure", model="my-gpt4-deployment")
```
Agent with Custom Tools (Function Calling)
```python
>>> from underthesea.agent import Agent, Tool
# Define tools as functions
>>> def get_weather(location: str) -> dict:
... """Get current weather for a location."""
... return {"location": location, "temp": 25, "condition": "sunny"}
>>> def search_news(query: str) -> str:
... """Search Vietnamese news."""
... return f"Results for: {query}"
# Create agent with tools
>>> my_agent = Agent(
... name="assistant",
... tools=[
... Tool(get_weather, description="Get weather for a city"),
... Tool(search_news, description="Search Vietnamese news"),
... ],
... instruction="You are a helpful Vietnamese assistant."
... )
# Agent automatically calls tools when needed
>>> my_agent("Thời tiết ở Hà Nội thế nào?")
'Thời tiết ở Hà Nội hiện tại là 25°C và nắng.'
>>> my_agent.reset() # Clear conversation history
```
Using Default Tools (like LangChain/OpenAI tools)
```python
>>> from underthesea.agent import Agent, default_tools
# Create agent with built-in tools:
# calculator, datetime, web_search, wikipedia, shell, python, file ops...
>>> my_agent = Agent(
... name="assistant",
... tools=default_tools,
... )
>>> my_agent("What time is it?") # Uses datetime tool
>>> my_agent("Calculate sqrt(144) + 10") # Uses calculator tool
>>> my_agent("Search for Python tutorials") # Uses web_search tool
```
Vietnamese NLP Resources
List resources
```sh
$ underthesea list-data

| Name                      | Type        | License | Year | Directory                          |
|---------------------------+-------------+---------+------+------------------------------------|
| CP_Vietnamese_VLC_v2_2022 | Plaintext   | Open    | 2023 | datasets/CP_Vietnamese_VLC_v2_2022 |
| UIT_ABSA_RESTAURANT       | Sentiment   | Open    | 2021 | datasets/UIT_ABSA_RESTAURANT       |
| UIT_ABSA_HOTEL            | Sentiment   | Open    | 2021 | datasets/UIT_ABSA_HOTEL            |
| SE_Vietnamese-UBS         | Sentiment   | Open    | 2020 | datasets/SE_Vietnamese-UBS         |
| CP_Vietnamese-UNC         | Plaintext   | Open    | 2020 | datasets/CP_Vietnamese-UNC         |
| DI_Vietnamese-UVD         | Dictionary  | Open    | 2020 | datasets/DI_Vietnamese-UVD         |
| UTS2017-BANK              | Categorized | Open    | 2017 | datasets/UTS2017-BANK              |
| VNTQ_SMALL                | Plaintext   | Open    | 2012 | datasets/LTA                       |
| VNTQ_BIG                  | Plaintext   | Open    | 2012 | datasets/LTA                       |
| VNESES                    | Plaintext   | Open    | 2012 | datasets/LTA                       |
| VNTC                      | Categorized | Open    | 2007 | datasets/VNTC                      |
```
```sh
$ underthesea list-data --all
```
Download resources
```sh
$ underthesea download-data CP_Vietnamese_VLC_v2_2022
Resource CP_Vietnamese_VLC_v2_2022 is downloaded in ~/.underthesea/datasets/CP_Vietnamese_VLC_v2_2022 folder
```
Upcoming Features
- Automatic Speech Recognition
Do you want to contribute to underthesea's development? Great! Please read more details in the Contributing Guide.
If you found this project helpful and would like to support our work, you can just buy us a coffee ☕.
Your support is our biggest encouragement 🎁!
Alternative AI tools for underthesea
Similar Open Source Tools
For similar tasks
nlp-llms-resources
The 'nlp-llms-resources' repository is a comprehensive resource list for Natural Language Processing (NLP) and Large Language Models (LLMs). It covers a wide range of topics including traditional NLP datasets, data acquisition, libraries for NLP, neural networks, sentiment analysis, optical character recognition, information extraction, semantics, topic modeling, multilingual NLP, domain-specific LLMs, vector databases, ethics, costing, books, courses, surveys, aggregators, newsletters, papers, conferences, and societies. The repository provides valuable information and resources for individuals interested in NLP and LLMs.
adata
AData is a free and open-source A-share database that focuses on transaction-related data. It provides comprehensive data on stocks, including basic information, market data, and sentiment analysis. AData is designed to be easy to use and integrate with other applications, making it a valuable tool for quantitative trading and AI training.
PIXIU
PIXIU is a project designed to support the development, fine-tuning, and evaluation of Large Language Models (LLMs) in the financial domain. It includes components like FinBen, a Financial Language Understanding and Prediction Evaluation Benchmark, FIT, a Financial Instruction Dataset, and FinMA, a Financial Large Language Model. The project provides open resources, multi-task and multi-modal financial data, and diverse financial tasks for training and evaluation. It aims to encourage open research and transparency in the financial NLP field.
hezar
Hezar is an all-in-one AI library designed specifically for the Persian community. It brings together various AI models and tools, making it easy to use AI with just a few lines of code. The library seamlessly integrates with Hugging Face Hub, offering a developer-friendly interface and task-based model interface. In addition to models, Hezar provides tools like word embeddings, tokenizers, feature extractors, and more. It also includes supplementary ML tools for deployment, benchmarking, and optimization.
text-embeddings-inference
Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for popular models like FlagEmbedding, Ember, GTE, and E5. It implements features such as no model graph compilation step, Metal support for local execution on Macs, small docker images with fast boot times, token-based dynamic batching, optimized transformers code for inference using Flash Attention, Candle, and cuBLASLt, Safetensors weight loading, and production-ready features like distributed tracing with Open Telemetry and Prometheus metrics.
CodeProject.AI-Server
CodeProject.AI Server is a standalone, self-hosted, fast, free, and open-source Artificial Intelligence microserver designed for any platform and language. It can be installed locally without the need for off-device or out-of-network data transfer, providing an easy-to-use solution for developers interested in AI programming. The server includes a HTTP REST API server, backend analysis services, and the source code, enabling users to perform various AI tasks locally without relying on external services or cloud computing. Current capabilities include object detection, face detection, scene recognition, sentiment analysis, and more, with ongoing feature expansions planned. The project aims to promote AI development, simplify AI implementation, focus on core use-cases, and leverage the expertise of the developer community.
spark-nlp
Spark NLP is a state-of-the-art Natural Language Processing library built on top of Apache Spark. It provides simple, performant, and accurate NLP annotations for machine learning pipelines that scale easily in a distributed environment. Spark NLP comes with 36000+ pretrained pipelines and models in more than 200 languages. It offers tasks such as Tokenization, Word Segmentation, Part-of-Speech Tagging, Named Entity Recognition, Dependency Parsing, Spell Checking, Text Classification, Sentiment Analysis, Token Classification, Machine Translation, Summarization, Question Answering, Table Question Answering, Text Generation, Image Classification, Image to Text (captioning), Automatic Speech Recognition, Zero-Shot Learning, and many more NLP tasks. Spark NLP is the only open-source NLP library in production that offers state-of-the-art transformers such as BERT, CamemBERT, ALBERT, ELECTRA, XLNet, DistilBERT, RoBERTa, DeBERTa, XLM-RoBERTa, Longformer, ELMO, Universal Sentence Encoder, Llama-2, M2M100, BART, Instructor, E5, Google T5, MarianMT, OpenAI GPT2, Vision Transformers (ViT), OpenAI Whisper, and many more not only to Python and R, but also to the JVM ecosystem (Java, Scala, and Kotlin) at scale by extending Apache Spark natively.
scikit-llm
Scikit-LLM is a tool that seamlessly integrates powerful language models like ChatGPT into scikit-learn for enhanced text analysis tasks. It allows users to leverage large language models for various text analysis applications within the familiar scikit-learn framework. The tool simplifies the process of incorporating advanced language processing capabilities into machine learning pipelines, enabling users to benefit from the latest advancements in natural language processing.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.
oss-fuzz-gen
This framework generates fuzz targets for real-world `C`/`C++` projects with various Large Language Models (LLM) and benchmarks them via the `OSS-Fuzz` platform. It manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models, from images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (Containing a demo web application, Power BI reports, Synapse resources, AML Notebooks etc.) that can be deployed in a customer’s subscription using the CAPE tool within a matter of few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.

