InternLM-XComposer

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output


InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) based on InternLM2-7B, excelling in free-form text-image composition and comprehension. It boasts several amazing capabilities and applications:

  • Free-form Interleaved Text-Image Composition: InternLM-XComposer2 can effortlessly generate coherent and contextual articles with interleaved images from diverse inputs such as outlines, detailed text requirements, and reference images, enabling highly customizable content creation.

  • Accurate Vision-language Problem-solving: InternLM-XComposer2 accurately handles diverse and challenging vision-language Q&A tasks based on free-form instructions, excelling in recognition, perception, detailed captioning, visual reasoning, and more.

  • Awesome performance: InternLM-XComposer2, based on InternLM2-7B, not only significantly outperforms existing open-source multimodal models on 13 benchmarks but also matches or even surpasses GPT-4V and Gemini Pro on 6 benchmarks.

We release the InternLM-XComposer2 series in four versions:

  • InternLM-XComposer2-4KHD-7B 🤗: The high-resolution multi-task trained VLLM, with InternLM2-7B as the initialization of the LLM, for high-resolution understanding, VL benchmarks, and AI assistant use.

  • InternLM-XComposer2-VL-7B 🤗: The multi-task trained VLLM, with InternLM2-7B as the initialization of the LLM, for VL benchmarks and AI assistant use. It ranks as the most powerful vision-language model based on 7B-parameter-level LLMs, leading across 13 benchmarks.

  • InternLM-XComposer2-VL-1.8B 🤗: A lightweight version of InternLM-XComposer2-VL based on InternLM2-1.8B.

  • InternLM-XComposer2-7B 🤗: The further instruction-tuned VLLM for interleaved text-image composition with free-form inputs.

Please refer to the Technical Report and the 4KHD Technical Report for more details.

README:

InternLM-XComposer-2.5

InternLM-XComposer2.5 🤗 | XComposer2.5 Technical Report 📄

English | 简体中文

Thanks to the community for the HuggingFace Demo | OpenXLab Demo of InternLM-XComposer-2.5.

👋 Join us on Discord and WeChat


Multimodal Projects of Our Team

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD

InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Models

InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition

ShareGPT4Video: Improving Video Understanding and Generation with Better Captions

ShareGPT4V: Improving Large Multi-modal Models with Better Captions

MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs

DualFocus: Integrating Macro and Micro Perspectives in Multi-modal Large Language Models

InternLM-XComposer-2.5 excels in various text-image comprehension and composition applications, achieving GPT-4V-level capabilities with merely a 7B LLM backend. Trained with 24K interleaved image-text contexts, IXC-2.5 can seamlessly extend to 96K long contexts via RoPE extrapolation. This long-context capability allows it to perform exceptionally well in tasks requiring extensive input and output contexts.

  • Ultra-High Resolution Understanding: IXC-2.5 enhances the dynamic-resolution solution proposed in IXC2-4KHD with a native 560 × 560 ViT vision encoder, supporting high-resolution images with any aspect ratio.

  • Fine-Grained Video Understanding: IXC-2.5 treats a video as an ultra-high-resolution composite picture consisting of tens to hundreds of frames, allowing it to capture fine details through dense sampling and higher resolution for each frame (a sketch below illustrates the idea).

  • Multi-Turn Multi-Image Dialogue: IXC-2.5 supports free-form multi-turn multi-image dialogue, allowing it to naturally interact with humans in multi-round conversations.

  • Webpage Crafting: IXC-2.5 can be readily applied to create webpages by composing source code (HTML, CSS, and JavaScript) following text-image instructions.

  • Composing High-Quality Text-Image Articles: IXC-2.5 leverages specially designed Chain-of-Thought (CoT) and Direct Preference Optimization (DPO) techniques to significantly enhance the quality of its written content.

  • Awesome performance: IXC-2.5 has been evaluated on 28 benchmarks, outperforming existing open-source state-of-the-art models on 16 benchmarks. It also surpasses or competes closely with GPT-4V and Gemini Pro on 16 key tasks.

Please refer to Technical Report for more details.
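
To make the "video as a composite picture" idea concrete, here is a rough sketch of how a clip can be flattened into one tall image before being handed to a vision-language model, assuming OpenCV is available. This illustrates the concept only: the frame count, sampling stride, and vertical layout are arbitrary choices here, not the preprocessing IXC-2.5 actually uses internally.

import cv2
import numpy as np

def video_to_composite(video_path, num_frames=16):
    # Sample frames evenly across the whole clip.
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    # Stack the sampled frames vertically into one ultra-high-resolution image.
    return np.concatenate(frames, axis=0)

composite = video_to_composite('./examples/liuxiang.mp4')
print(composite.shape)  # (num_frames * height, width, 3)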

Demo Video

🔥 For the best experience, please keep the audio on while enjoying the video.

https://github.com/InternLM/InternLM-XComposer/assets/147793160/8206f07f-3166-461e-a631-9cbcdec6ae75

Youtube Video

Please refer to Chinese Demo for the demo of the Chinese version.

News and Updates

  • 2024.07.15 🎉🎉🎉 ModelScope Swift supports InternLM-XComposer2.5-7B for finetuning and inference.
  • 2024.07.15 🎉🎉🎉 LMDeploy supports InternLM-XComposer2.5-7B for 4-bit quantization and inference.
  • 2024.07.15 🎉🎉🎉 InternLM-XComposer2.5-7B-4bit is publicly available.
  • 2024.07.03 🎉🎉🎉 InternLM-XComposer2.5-7B is publicly available.
  • 2024.07.01 🎉🎉🎉 ShareGPT4V is accepted by ECCV2024.
  • 2024.04.22 🎉🎉🎉 The finetune code of InternLM-XComposer2-4KHD-7B is publicly available.
  • 2024.04.09 🎉🎉🎉 InternLM-XComposer2-4KHD-7B and its evaluation code are publicly available.
  • 2024.04.09 🎉🎉🎉 InternLM-XComposer2-VL-1.8B is publicly available.
  • 2024.02.22 🎉🎉🎉 We release DualFocus, a framework for integrating macro and micro perspectives within MLLMs to enhance vision-language task performance.
  • 2024.02.02 🎉🎉🎉 The finetune code of InternLM-XComposer2-VL-7B is publicly available.
  • 2024.01.26 🎉🎉🎉 The evaluation code of InternLM-XComposer2-VL-7B is publicly available.
  • 2024.01.26 🎉🎉🎉 InternLM-XComposer2-7B and InternLM-XComposer2-VL-7B are publicly available on Hugging Face and ModelScope.
  • 2024.01.26 🎉🎉🎉 We release a technical report with more details of the InternLM-XComposer2 series.
  • 2023.11.22 🎉🎉🎉 We release ShareGPT4V, a large-scale, highly descriptive image-text dataset generated by GPT4-Vision, and a superior large multimodal model, ShareGPT4V-7B.
  • 2023.10.30 🎉🎉🎉 InternLM-XComposer-VL achieved the top-1 ranking in both Q-Bench and Tiny LVLM.
  • 2023.10.19 🎉🎉🎉 Support for inference on multiple GPUs. Two 4090 GPUs are sufficient for deploying our demo.
  • 2023.10.12 🎉🎉🎉 The 4-bit demo is supported; model files are available on Hugging Face and ModelScope.
  • 2023.10.8 🎉🎉🎉 InternLM-XComposer-7B and InternLM-XComposer-VL-7B are publicly available on ModelScope.
  • 2023.9.27 🎉🎉🎉 The evaluation code of InternLM-XComposer-VL-7B is publicly available.
  • 2023.9.27 🎉🎉🎉 InternLM-XComposer-7B and InternLM-XComposer-VL-7B are publicly available on Hugging Face.
  • 2023.9.27 🎉🎉🎉 We release a technical report with more details of our model series.

Model Zoo

| Model | Usage | Transformers (HF) | ModelScope | Release Date |
|---|---|---|---|---|
| InternLM-XComposer-2.5 | Video Understanding, Multi-image Multi-turn Dialogue, 4K Resolution Understanding, Webpage Crafting, Article Creation, Benchmark | 🤗internlm-xcomposer2.5 | internlm-xcomposer2.5 | 2024-07-03 |
| InternLM-XComposer2-4KHD | 4K Resolution Understanding, Benchmark, VL-Chat | 🤗internlm-xcomposer2-4khd-7b | internlm-xcomposer2-4khd-7b | 2024-04-09 |
| InternLM-XComposer2-VL-1.8B | Benchmark, VL-Chat | 🤗internlm-xcomposer2-vl-1_8b | internlm-xcomposer2-vl-1_8b | 2024-04-09 |
| InternLM-XComposer2 | Text-Image Composition | 🤗internlm-xcomposer2-7b | internlm-xcomposer2-7b | 2024-01-26 |
| InternLM-XComposer2-VL | Benchmark, VL-Chat | 🤗internlm-xcomposer2-vl-7b | internlm-xcomposer2-vl-7b | 2024-01-26 |
| InternLM-XComposer2-4bit | Text-Image Composition | 🤗internlm-xcomposer2-7b-4bit | internlm-xcomposer2-7b-4bit | 2024-02-06 |
| InternLM-XComposer2-VL-4bit | Benchmark, VL-Chat | 🤗internlm-xcomposer2-vl-7b-4bit | internlm-xcomposer2-vl-7b-4bit | 2024-02-06 |
| InternLM-XComposer | Text-Image Composition, VL-Chat | 🤗internlm-xcomposer-7b | internlm-xcomposer-7b | 2023-09-26 |
| InternLM-XComposer-4bit | Text-Image Composition, VL-Chat | 🤗internlm-xcomposer-7b-4bit | internlm-xcomposer-7b-4bit | 2023-09-26 |
| InternLM-XComposer-VL | Benchmark | 🤗internlm-xcomposer-vl-7b | internlm-xcomposer-vl-7b | 2023-09-26 |

Evaluation

We evaluate InternLM-XComposer-2.5 on 28 multimodal benchmarks, including the image benchmarks MMDU, MMStar, RealWorldQA, Design2Code, DocVQA, Infographics VQA, TextVQA, ChartQA, OCRBench, DeepForm, WTQ, VisualMRC, TabFact, MathVista, MMMU, AI2D, MME, MMBench, MMBench-CN, SEED-Bench, HallusionBench, and MM-Vet, and the video benchmarks MVBench, MLVU, Video-MME, MMBench-Video, and TempCompass.

See Evaluation Details here.

Compared with closed-source APIs and previous SOTAs on video and structural high-resolution image benchmarks:

| Benchmark | Previous Open-Source SOTA | GPT-4V | Gemini-Pro | IXC-2.5 (Ours) |
|---|---|---|---|---|
| MVBench | VideoChat2-7B: 60.4 | 43.5 | --- | 69.1 |
| MLVU | InternVL1.5-26B: 50.4 | 49.2 | --- | 58.8 |
| MME-Video | LIVA-34B: 59.0 | 59.9 | 75.0 | 55.8 |
| MMBench-Video | InternVL1.5-26B: 42.0 | 56.0 | 49.3 | 46.9 |
| TempCompass | Qwen-VL-7B: 58.4 | --- | 70.6 | 67.1 |
| DocVQA | InternVL1.5-26B: 90.9 | 88.4 | 88.1 | 90.9 |
| ChartQA | InternVL1.5-26B: 83.8 | 78.5 | 74.1 | 82.2 |
| InfoVQA | InternVL1.5-26B: 72.5 | 75.1 | 75.2 | 69.9 |
| TextVQA | InternVL1.5-26B: 80.6 | 78.0 | 74.6 | 78.2 |
| OCRBench | GLM-4v-9B: 77.6 | 51.6 | 68.0 | 69.0 |
| DeepForm | DocOwl 1.5-8B: 68.8 | --- | --- | 71.2 |
| WTQ | DocOwl 1.5-8B: 40.6 | --- | --- | 53.6 |
| VisualMRC | DocOwl 1.5-8B: 246.4 | --- | --- | 307.5 |
| TabFact | DocOwl 1.5-8B: 80.2 | --- | --- | 85.2 |

Compared with closed-source APIs and previous SOTAs on multi-image dialogue and general visual QA benchmarks: see the Evaluation Details linked above for the full comparison.

Requirements

  • python 3.8 and above
  • pytorch 1.12 and above; 2.0 and above is recommended
  • CUDA 11.4 and above is recommended (for GPU users)
  • flash-attention2 is required for high-resolution usage of InternLM-XComposer-2.5

Installation

Before running the code, make sure you have set up the environment and installed the required packages. Ensure you meet the above requirements, then install the dependent libraries by following the installation instructions.
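
As a quick sanity check that the environment matches the requirements above, you can run a few lines of Python. The package names here (torch, flash_attn) are the usual PyPI ones and are an assumption on our part; defer to the official installation instructions for the exact pinned versions.

import torch

print('torch version:', torch.__version__)        # 1.12+, ideally 2.x
print('CUDA available:', torch.cuda.is_available())
try:
    import flash_attn                             # flash-attention2, needed for high-resolution usage
    print('flash_attn version:', flash_attn.__version__)
except ImportError:
    print('flash_attn not installed: high-resolution usage of IXC-2.5 will not work')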

Quickstart

We provide a simple example to show how to use InternLM-XComposer-2.5 with 🤗 Transformers.

Video Understanding
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer

query = 'Here are some frames of a video. Describe this video in detail'
image = ['./examples/liuxiang.mp4',]
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
print(response)
#The video opens with a shot of an athlete, dressed in a red and yellow uniform with the word "CHINA" emblazoned across the front, preparing for a race. 
#The athlete, Liu Xiang, is seen in a crouched position, focused and ready, with the Olympic rings visible in the background, indicating the prestigious setting of the Olympic Games. As the race commences, the athletes are seen sprinting towards the hurdles, their determination evident in their powerful strides. 
#The camera captures the intensity of the competition, with the athletes' numbers and times displayed on the screen, providing a real-time update on their performance. The race reaches a climax as Liu Xiang, still in his red and yellow uniform, triumphantly crosses the finish line, his arms raised in victory. 
#The crowd in the stands erupts into cheers, their excitement palpable as they witness the athlete's success. The video concludes with a close-up shot of Liu Xiang, still basking in the glory of his victory, as the Olympic rings continue to symbolize the significance of the event.

query = 'tell me the athlete code of Liu Xiang'
image = ['./examples/liuxiang.mp4',]
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response, _ = model.chat(tokenizer, query, image, history=his, do_sample=False, num_beams=3, use_meta=True)
print(response)
#The athlete code of Liu Xiang, as displayed on his uniform in the video, is "1363".
Multi-Image Multi-Turn Dialogue
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer

query = 'Image1 <ImageHere>; Image2 <ImageHere>; Image3 <ImageHere>; I want to buy a car from the three given cars, analyze their advantages and weaknesses one by one'
image = ['./examples/cars1.jpg',
        './examples/cars2.jpg',
        './examples/cars3.jpg',]
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
print(response)
#To analyze the advantages and disadvantages of each car, we need to consider factors such as brand reputation, performance, design, cost, and maintenance. 
#1. Mercedes-Benz: - Advantages: Known for its luxury and reliability, the Mercedes-Benz brand offers a high level of comfort, advanced technology, and superior craftsmanship. The vehicle in the image appears to be an SUV, which is versatile for both city driving and off-road conditions. - Disadvantages: Typically, Mercedes-Benz vehicles are more expensive compared to other brands, and they may require more frequent maintenance due to their luxury status. 
#2. Bugatti: - Advantages: Bugatti is renowned for producing some of the fastest and most powerful cars in the world. The vehicle in the image is a sports car, likely offering an exhilarating driving experience with its high-performance engine and advanced aerodynamics. - Disadvantages: Bugatti cars are extremely expensive, making them less accessible to the average consumer. They also require specialized knowledge for maintenance and may not be suitable for everyday driving due to their high-performance nature. 
#3. BMW: - Advantages: BMW is a well-respected manufacturer known for its innovative engineering and design. The car in the image appears to be an electric vehicle (EV), which offers advantages such as lower emissions, reduced noise pollution, and potentially lower operating costs. - Disadvantages: EVs like the one in the image may have a shorter range compared to traditional gasoline-powered vehicles, limiting their suitability for long-distance travel. 
#Additionally, the initial cost of purchasing an EV can be higher than that of a conventional car. In conclusion, each car has its own set of advantages and disadvantages. The Mercedes-Benz offers luxury and versatility, the Bugatti provides high performance and exclusivity, and the BMW represents innovation in electric vehicle technology. The choice ultimately depends on the buyer's priorities, such as comfort, performance, environmental impact, and budget.

query = 'Image4 <ImageHere>; How about the car in Image4'
image.append('./examples/cars4.jpg')        
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, history=his, use_meta=True)
print(response)
#The car in Image4 is a red sports car, which appears to be a Ferrari. Ferrari is a renowned Italian manufacturer known for producing some of the most iconic and high-performance sports cars in the world. - Advantages: Ferrari vehicles are synonymous with speed, luxury, and engineering excellence. 
#The car in the image likely offers an exhilarating driving experience with its powerful engine, advanced aerodynamics, and high-quality craftsmanship. The red color adds to the car's aesthetic appeal, making it stand out on the road. - Disadvantages: Ferrari cars are extremely expensive, making them less accessible to the average consumer. 
#They also require specialized knowledge for maintenance and may not be suitable for everyday driving due to their high-performance nature. In conclusion, the Ferrari in Image4 represents a pinnacle of automotive engineering and design, offering unmatched performance and luxury. 
#However, its high cost and specialized maintenance requirements make it less practical for everyday use compared to the other vehicles in the images.
High Resolution Image Understanding
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer

query = 'Analyze the given image in a detailed manner'
image = ['./examples/dubai.png']
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
print(response)
#The infographic is a visual representation of various facts about Dubai. It begins with a statement about Palm Jumeirah, highlighting it as the largest artificial island visible from space. It then provides a historical context, noting that in 1968, there were only a few cars in Dubai, contrasting this with the current figure of more than 1.5 million vehicles. 
#The infographic also points out that Dubai has the world's largest Gold Chain, with 7 of the top 10 tallest hotels located there. Additionally, it mentions that the crime rate is near 0%, and the income tax rate is also 0%, with 20% of the world's total cranes operating in Dubai. Furthermore, it states that 17% of the population is Emirati, and 83% are immigrants.
#The Dubai Mall is highlighted as the largest shopping mall in the world, with 1200 stores. The infographic also notes that Dubai has no standard address system, with no zip codes, area codes, or postal services. It mentions that the Burj Khalifa is so tall that its residents on top floors need to wait longer to break fast during Ramadan. 
#The infographic also includes information about Dubai's climate-controlled City, with the Royal Suite at Burj Al Arab costing $24,000 per night. Lastly, it notes that the net worth of the four listed billionaires is roughly equal to the GDP of Honduras.
Instruction to Webpage
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer

query = 'A website for Research institutions. The name is Shanghai AI lab. Top Navigation Bar is blue. Below left, an image shows the logo of the lab. On the right, there is a passage of text that describes the mission of the laboratory. There are several images to show the research projects of Shanghai AI lab.'
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response = model.write_webpage(query, seed=202, task='Instruction-aware Webpage Generation', repetition_penalty=3.0)
print(response)
# see the Instruction-aware Webpage Generation.html 

See the Instruction to Webpage results here.

Resume to Webpage
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer

## the input should be a resume in markdown format
query = './examples/resume.md'
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response = model.resume_2_webpage(query, seed=202, repetition_penalty=3.0)
print(response)

See the Resume to Webpage results here.

Screenshot to Webpage
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer

query = 'Generate the HTML code of this web image with Tailwind CSS.'
image = ['./examples/screenshot.jpg']
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response = model.screen_2_webpage(query, image, seed=202, repetition_penalty=3.0)
print(response)

See the Screenshot to Webpage results here.

Write Article
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer
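# The query below is a Chinese exam-style essay prompt about the animated film "Chang An" (《长安三万里》),
# asking for an essay of at least 800 characters on the keywords 失意 (disappointment) and 诗意 (poetic living).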

query = '阅读下面的材料，根据要求写作。 电影《长安三万里》的出现让人感慨，影片并未将重点全落在大唐风华上，也展现了恢弘气象的阴暗面，即旧门阀的资源垄断、朝政的日益衰败与青年才俊的壮志难酬。高适仕进无门，只能回乡沉潜修行。李白虽得玉真公主举荐，擢入翰林，但他只是成为唐玄宗的御用文人，不能真正实现有益于朝政的志意。然而，片中高潮部分《将进酒》一节，人至中年、挂着肚腩的李白引众人乘仙鹤上天，一路从水面、瀑布飞升至银河进入仙宫，李白狂奔着与仙人们碰杯，最后大家纵身飞向漩涡般的九重天。肉身的微贱、世路的坎坷，拘不住精神的高蹈。“天生我材必有用，千金散尽还复来。” 古往今来，身处闲顿、遭受挫折、被病痛折磨，很多人都曾经历了人生的“失意”，却反而成就了他们“诗意”的人生。对正在追求人生价值的当代青年来说，如何对待人生中的缺憾和困顿?诗意人生中又有怎样的自我坚守和自我认同?请结合“失意”与“诗意”这两个关键词写一篇文章。 要求:选准角度，确定立意，明确文体，自拟标题;不要套作，不得抄袭;不得泄露个人信息;不少于 800 字。'
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response = model.write_artical(query, seed=8192)
print(response)
#《诗意人生，贵在坚守》 (the model's Chinese essay, "A Poetic Life Lies in Staying True to Oneself," rendered in English below)
#The "Caigentan" says: "In idle times keep a sense of urgency; in busy times leave room for leisure." Life inevitably brings times of disappointment, and only by living poetically in the face of regret and hardship can one add a touch of brightness to life. What does it mean to live poetically? To live poetically is to stay true to one's heart, face regret squarely, and transcend oneself, seeking life's value amid disappointment.
#To live poetically, one must stay true to one's heart and take things calmly.
#Tao Yuanming resolutely resigned as magistrate of Pengze and returned to his fields, "picking chrysanthemums by the eastern fence, serenely seeing the southern mountains," delighting in landscapes; Wang Wei, frustrated in officialdom, immersed himself all day in poetry and wine, "going out alone whenever the mood comes, with fine moments known to himself alone," at ease among verse and cups; Li Bai's career stalled and he was dismissed from court with a gift of gold, yet his spirit stayed sky-high: "Heaven gave me talent; it must have its use. A thousand gold pieces scattered will all come back again," remaining open and unburdened in defeat. Staying true to one's heart lets one guard one's spiritual home in times of disappointment and fill life with poetry. Conversely, whoever abandons that heart and merely panders to the world in pursuit of advancement will lose the joy of living, however high the position attained.
#To live poetically, one must face regret squarely and transcend oneself.
#"Before Xisai Mountain the white egrets fly; amid peach blossoms and flowing water the mandarin fish grow fat. A blue bamboo hat, green willow branches, half a jin of wine, a single fishing line. All the lakes and seas are like this; why not settle here?" Bai Juyi's "Song of the Fisherman" voices the wish of so many: no tangles of power, no desolation of poverty, only green hills, clear waters, and egrets and gulls for company; such a free and easy life is enchanting. Yet Bai Juyi did not in fact withdraw to the hills; he faced life, transcended himself, and wrote poem after poem that was poetic yet full of real-world concern. Had Bai Juyi only fled from life, how could he have produced the exquisite simile "the thick strings clattered like sudden rain, the thin strings murmured like whispered secrets"? Had he only hidden in the mountains, how could he have written lines like "such music should exist only in heaven; how could the human world be worthy of Bai Juyi"?
#To live poetically, one must face regret squarely and stay true to one's heart.
#Li Wenbo suffered from ALS and was told by doctors that he would not live five years, yet he did not give up his love of music; he fought the illness and played beautiful pieces. Sun Jialin had cerebral palsy from childhood, yet refused to be toyed with by fate and became one of the nation's most admired teachers. Shi Tiesheng was racked by illness, yet could still ask, "I often take stock in my heart: what do I have?", and from that question took the road of literature, leaving a rich cultural legacy for later generations. None of them fled; they chose to face life's regrets, transcending themselves while staying true to their hearts, and in the end realized their own worth.
#To live poetically is to stay true to one's heart amid disappointment and to transcend oneself amid regret. Facing life's regrets and setbacks while holding fast and going beyond oneself, one is bound to write one's own glorious chapter.
#May you and I both live poetically!

query = 'Please write a blog based on the title: French Pastries: A Sweet Indulgence'
with torch.autocast(device_type='cuda', dtype=torch.float16):
    response = model.write_artical(query, seed=8192)
print(response)
#French Pastries: A Sweet Indulgence
#The French are well known for their love of pastries, and it's a love that is passed down through generations. When one visits France, they are treated to an assortment of baked goods that can range from the delicate macaron to the rich and decadent chocolate mousse. While there are many delicious types of pastries found in France, five stand out as being the most iconic. Each of these pastries has its own unique qualities that make it special.
#1. Croissant
#One of the most famous pastries from France is the croissant. It is a buttery, flaky pastry that is best enjoyed fresh from the bakery. The dough is laminated with butter, giving it its signature layers. Croissants are typically eaten for breakfast or brunch, often accompanied by coffee or hot chocolate.
#2. Macaron
#The macaron is a small, delicate French confection made from almond flour, powdered sugar, and egg whites. The macaron itself is sandwiched with a ganache or jam filling. They come in a variety of colors and flavors, making them a popular choice for both casual snacking and upscale desserts.
#3. Madeleine
#The madeleine is a small shell-shaped cake that is light and sponge-like. It is often flavored with lemon or orange zest and sometimes dipped in chocolate. Madeleines are perfect for an afternoon snack with tea or coffee.
#4. Éclair
#The éclair is a long, thin pastry filled with cream and topped with chocolate glaze. It is a classic French treat that is both sweet and satisfying. Éclairs can be found in bakeries all over France and are often enjoyed with a cup of hot chocolate.
#5. Tarte Tatin
#The tarte Tatin is an apple tart that is known for its caramelized apples and puff pastry crust. It is named after the Tatin sisters who created the recipe in the late 19th century. Tarte Tatin is best served warm with a scoop of vanilla ice cream.
#These pastries are just a few of the many delicious treats that France has to offer. Whether you are a seasoned traveler or a first-time visitor, indulging in French pastries is a must-do activity. So go ahead, treat yourself, you deserve it!

Inference on Multiple GPUs

If you have multiple GPUs, but the memory size of each GPU is not enough to accommodate the entire model, you can split the model across multiple GPUs. First, install accelerate using the command: pip install accelerate. Then, execute the following script for chat:

# chat with 2 GPUs
python example_code/example_chat.py --num_gpus 2
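
For reference, a minimal sketch of the same idea using Transformers' device_map='auto' is shown below; this is an assumption about what the example script does under the hood, not a copy of it. With device_map='auto', accelerate places the model's layers across all visible GPUs automatically.

import torch
from transformers import AutoModel, AutoTokenizer

# device_map='auto' lets accelerate shard the layers across all visible GPUs.
model = AutoModel.from_pretrained(
    'internlm/internlm-xcomposer2d5-7b',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map='auto',
).eval()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer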

Inference Acceleration by LMDeploy

If optimized inference for the InternLM-XComposer2d5 model is required, we recommend using LMDeploy.

In the following subsections, we will introduce the usage of LMDeploy with the internlm-xcomposer2d5-7b model as an example.

First of all, please install the pypi package with pip install lmdeploy. By default, it depends on CUDA 12.x. For a CUDA 11.x environment, please refer to the installation guide.

Offline Inference Pipeline

from lmdeploy import pipeline
from lmdeploy.vl import load_image
pipe = pipeline('internlm/internlm-xcomposer2d5-7b')
image = load_image('examples/dubai.png')
response = pipe(('describe this image', image))
print(response.text)

For more on using the VLM pipeline, including multi-image inference and multi-turn chat, please see this guide.
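
As one example of what the guide covers, multi-image inference with the same pipeline can be sketched as follows; the list-of-images call follows LMDeploy's VLM pipeline usage, and the file names are placeholders from this repo's examples folder.

from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('internlm/internlm-xcomposer2d5-7b')
# Pass several images together with a single prompt.
images = [load_image('examples/cars1.jpg'), load_image('examples/cars2.jpg')]
response = pipe(('compare these two images', images))
print(response.text)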

4-Bit Model

We offer 4-bit quantized models via LMDeploy to reduce memory requirements. For a memory usage comparison, please see here.

from lmdeploy import TurbomindEngineConfig, pipeline
from lmdeploy.vl import load_image
engine_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline('internlm/internlm-xcomposer2d5-7b-4bit', backend_config=engine_config)
image = load_image('examples/dubai.png')
response = pipe(('describe this image', image))
print(response.text)

Finetune

  1. Please refer to our finetune scripts.
  2. Inference and finetuning are also supported via ModelScope Swift.

Gradio Deploy

We provide code for users to build a web UI demo. Please use gradio==4.13.0.

Please run the command below for Chat / Composition:

# For Multimodal Chat
python gradio_demo/gradio_demo_chat.py

# For Free-form Text-Image Composition
python gradio_demo/gradio_demo_composition.py

The user guide for the UI demo is given HERE. If you wish to change the default folder of the model, please use the --code_path=new_folder option.
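
If you prefer a minimal custom UI over the provided demos, a bare-bones Gradio wrapper around model.chat could look like the sketch below. This is a hypothetical example rather than the official demo code; the chat arguments mirror the Quickstart above.

import torch
import gradio as gr
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval().half()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b', trust_remote_code=True)
model.tokenizer = tokenizer

def answer(image_path, question):
    # Single-image, single-turn chat, mirroring the Quickstart examples.
    with torch.autocast(device_type='cuda', dtype=torch.float16):
        response, _ = model.chat(tokenizer, question, [image_path],
                                 do_sample=False, num_beams=3, use_meta=True)
    return response

demo = gr.Interface(fn=answer,
                    inputs=[gr.Image(type='filepath'), gr.Textbox(label='Question')],
                    outputs=gr.Textbox(label='Answer'))
demo.launch()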

Citation

If you find our models / code / papers useful in your research, please consider giving ⭐ and citations 📝, thx :)

@article{internlmxcomposer2_5,
      title={InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output}, 
      author={Pan Zhang and Xiaoyi Dong and Yuhang Zang and Yuhang Cao and Rui Qian and Lin Chen and Qipeng Guo and Haodong Duan and Bin Wang and Linke Ouyang and Songyang Zhang and Wenwei Zhang and Yining Li and Yang Gao and Peng Sun and Xinyue Zhang and Wei Li and Jingwen Li and Wenhai Wang and Hang Yan and Conghui He and Xingcheng Zhang and Kai Chen and Jifeng Dai and Yu Qiao and Dahua Lin and Jiaqi Wang},
      journal={arXiv preprint arXiv:2407.03320},
      year={2024}
}
@article{internlmxcomposer2_4khd,
      title={InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD},
      author={Xiaoyi Dong and Pan Zhang and Yuhang Zang and Yuhang Cao and Bin Wang and Linke Ouyang and Songyang Zhang and Haodong Duan and Wenwei Zhang and Yining Li and Hang Yan and Yang Gao and Zhe Chen and Xinyue Zhang and Wei Li and Jingwen Li and Wenhai Wang and Kai Chen and Conghui He and Xingcheng Zhang and Jifeng Dai and Yu Qiao and Dahua Lin and Jiaqi Wang},
      journal={arXiv preprint arXiv:2404.06512},
      year={2024}
}
@article{internlmxcomposer2,
      title={InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model},
      author={Xiaoyi Dong and Pan Zhang and Yuhang Zang and Yuhang Cao and Bin Wang and Linke Ouyang and Xilin Wei and Songyang Zhang and Haodong Duan and Maosong Cao and Wenwei Zhang and Yining Li and Hang Yan and Yang Gao and Xinyue Zhang and Wei Li and Jingwen Li and Kai Chen and Conghui He and Xingcheng Zhang and Yu Qiao and Dahua Lin and Jiaqi Wang},
      journal={arXiv preprint arXiv:2401.16420},
      year={2024}
}
@article{internlmxcomposer,
      title={InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition},
      author={Pan Zhang and Xiaoyi Dong and Bin Wang and Yuhang Cao and Chao Xu and Linke Ouyang and Zhiyuan Zhao and Shuangrui Ding and Songyang Zhang and Haodong Duan and Wenwei Zhang and Hang Yan and Xinyue Zhang and Wei Li and Jingwen Li and Kai Chen and Conghui He and Xingcheng Zhang and Yu Qiao and Dahua Lin and Jiaqi Wang},
      journal={arXiv preprint arXiv:2309.15112},
      year={2023}
}

License & Contact Us

The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English) / 申请表（中文）. For other questions or collaborations, please contact [email protected].
