LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA

LongCite is a tool that enables large language models (LLMs) to generate fine-grained citations in long-context question answering (QA). It provides models trained from GLM-4-9B and Meta-Llama-3.1-8B that support up to 128K tokens of context. Users can deploy LongCite chatbots that produce accurate responses together with precise sentence-level citations. The repository includes code for model deployment, the Coarse to Fine (CoF) pipeline for data construction, model training on the LongCite-45k dataset, and evaluation with the LongBench-Cite benchmark.

README:

LongCite

LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA

🤗 HF Repo • 📃 Paper • 🚀 HF Space

English | 中文

https://github.com/user-attachments/assets/68f6677a-3ffd-41a8-889c-d56a65f9e3bb

πŸ” Table of Contents

βš™οΈ LongCite Deployment

Environment Setup: We recommend using transformers>=4.43.0 to deploy our models.
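For example, a minimal setup (torch is needed for the snippet below; choose the build that matches your CUDA version):

pip install "transformers>=4.43.0" torch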

We open-source two models: LongCite-glm4-9b and LongCite-llama3.1-8b, trained from GLM-4-9B and Meta-Llama-3.1-8B, respectively, each supporting up to 128K tokens of context. They correspond to the "LongCite-9B" and "LongCite-8B" models in our paper. Given a query over a long context, these models generate accurate responses with precise sentence-level citations, making it easy for users to verify the output. Try the model:

import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is required: query_longcite below is a custom method
# shipped with the model repository.
tokenizer = AutoTokenizer.from_pretrained('THUDM/LongCite-glm4-9b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('THUDM/LongCite-glm4-9b', torch_dtype=torch.bfloat16, trust_remote_code=True, device_map='auto')

context = '''
W. Russell Todd, 94, United States Army general (b. 1928). February 13. Tim Aymar, 59, heavy metal singer (Pharaoh) (b. 1963). Marshall \"Eddie\" Conway, 76, Black Panther Party leader (b. 1946). Roger Bonk, 78, football player (North Dakota Fighting Sioux, Winnipeg Blue Bombers) (b. 1944). Conrad Dobler, 72, football player (St. Louis Cardinals, New Orleans Saints, Buffalo Bills) (b. 1950). Brian DuBois, 55, baseball player (Detroit Tigers) (b. 1967). Robert Geddes, 99, architect, dean of the Princeton University School of Architecture (1965–1982) (b. 1923). Tom Luddy, 79, film producer (Barfly, The Secret Garden), co-founder of the Telluride Film Festival (b. 1943). David Singmaster, 84, mathematician (b. 1938).
'''
query = "What was Robert Geddes' profession?"
result = model.query_longcite(context, query, tokenizer=tokenizer, max_input_length=128000, max_new_tokens=1024)

print("Answer:\n{}\n".format(result['answer']))
print("Statement with citations:\n{}\n".format(
  json.dumps(result['statements_with_citations'], indent=2, ensure_ascii=False)))
print("Context (divided into sentences):\n{}\n".format(result['splited_context']))

You may deploy your own LongCite chatbot (like the one shown in the video above) by running:

CUDA_VISIBLE_DEVICES=0 streamlit run demo.py --server.fileWatcherType none

Alternatively, you can deploy the model with vLLM, which enables faster generation and concurrent serving. See the code example in vllm_inference.py.
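Below is a minimal sketch of offline batch inference with vLLM. It uses vLLM's standard LLM/SamplingParams API with a hand-built prompt rather than the query_longcite helper, so the prompt string and max_model_len value here are assumptions; vllm_inference.py is the authoritative version.

from vllm import LLM, SamplingParams

# Assumed settings: trust_remote_code loads the custom GLM-4 model code,
# and max_model_len bounds the context so the KV cache fits in GPU memory.
llm = LLM(model='THUDM/LongCite-glm4-9b', trust_remote_code=True, max_model_len=128000)
params = SamplingParams(temperature=0.0, max_tokens=1024)

# Hypothetical prompt: vllm_inference.py shows the exact format that
# query_longcite builds from the context and query.
prompt = "[context and query formatted as in vllm_inference.py]"
output = llm.generate([prompt], params)[0]
print(output.outputs[0].text)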

πŸ€–οΈ CoF Pipeline

(Figure: overview of the CoF pipeline)

We are also open-sourcing CoF (Coarse to Fine) under CoF/, our automated SFT data construction pipeline for generating high-quality long-context QA instances with fine-grained citations. Configure your API key in utils/llm_api.py, then run the following four scripts in order to obtain the final data (a typical invocation is sketched below): 1_qa_generation.py, 2_chunk_level_citation.py, 3_sentence_level_citaion.py, and 4_postprocess_and_filter.py.
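A sketch of the run, assuming the scripts are executed from the CoF/ directory and need no extra arguments (check each script for its options):

cd CoF
python 1_qa_generation.py           # generate long-context QA pairs
python 2_chunk_level_citation.py    # add coarse, chunk-level citations
python 3_sentence_level_citaion.py  # refine citations to sentence level
python 4_postprocess_and_filter.py  # clean up and filter the final SFT data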

🖥️ Model Training

You can download and save the LongCite-45k dataset through Hugging Face Datasets (🤗 HF Repo):

from datasets import load_dataset

dataset = load_dataset('THUDM/LongCite-45k')
for split, split_dataset in dataset.items():
    split_dataset.to_json("train/long.jsonl")

You can mix it with general SFT data such as ShareGPT. We adopt Megatron-LM for model training. For a more lightweight implementation, you may adopt the code and environment from LongAlign, which supports a maximum training sequence length of 32k tokens for GLM-4-9B and Llama-3.1-8B.
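As an illustration, mixing the exported long-context data with general SFT data can be as simple as concatenating and shuffling JSONL files (a sketch; 'train/sharegpt.jsonl' is a hypothetical path standing in for your general SFT data):

import json
import random

# 'train/long.jsonl' comes from the export above; the ShareGPT path is
# hypothetical and should point at your own general SFT data.
samples = []
for path in ['train/long.jsonl', 'train/sharegpt.jsonl']:
    with open(path) as f:
        samples += [json.loads(line) for line in f]

random.shuffle(samples)
with open('train/mixed.jsonl', 'w') as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + '\n')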

📊 Evaluation

We introduce an automatic benchmark, LongBench-Cite, which adopts long-context QA pairs from LongBench and LongBench-Chat to measure both citation quality and response correctness in long-context QA scenarios.

We provide our evaluation data and code under LongBench-Cite/. Run pred_sft.py and pred_one_shot.py to get responses from fine-tuned models (e.g., LongCite-glm4-9b) and normal models (e.g., GPT-4o), then run eval_cite.py and eval_correct.py to evaluate citation quality and response correctness, as sketched below. Remember to configure your OpenAI API key in utils/llm_api.py, since we adopt GPT-4o as the judge.
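A typical evaluation run (a sketch assuming the scripts need no extra arguments; see LongBench-Cite/ for the available options):

cd LongBench-Cite
python pred_sft.py         # responses from fine-tuned models, e.g., LongCite-glm4-9b
python pred_one_shot.py    # responses from normal models, e.g., GPT-4o
python eval_cite.py        # citation quality, judged by GPT-4o
python eval_correct.py     # response correctness, judged by GPT-4o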

Here are the evaluation results on LongBench-Cite:

(Figure: evaluation results on LongBench-Cite)

πŸ“ Citation

If you find our work useful, please consider citing LongCite:

@article{zhang2024longcite,
  title={LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA},
  author={Jiajie Zhang and Yushi Bai and Xin Lv and Wanjun Gu and Danqing Liu and Minhao Zou and Shulin Cao and Lei Hou and Yuxiao Dong and Ling Feng and Juanzi Li},
  journal={arXiv preprint arXiv:2409.02897},
  year={2024}
}
