
Graph-Reasoning-LLM
[KDD 2024] This is the project for training explicit graph reasoning large language models.
Stars: 93

This repository, GraphWiz, focuses on developing an instruction-following Language Model (LLM) for solving graph problems. It includes the GraphWiz LLMs, which have strong graph problem-solving abilities; the GraphInstruct dataset, with over 72.5k training samples across nine graph problem tasks; and evaluations against baselines such as GPT-4 alongside fine-tuned backbones such as Mistral-7B and LLaMA 2. The project maps textual descriptions of graphs and their structures to explicit natural-language reasoning that solves a variety of graph problems.
README:
This repo contains the code, data, and models for "GraphWiz: An Instruction-Following Language Model for Graph Problems."
- GraphWiz, a series of instruction-following LLMs that have strong graph problem-solving abilities and output explicit reasoning paths.
- GraphInstruct, which offers over 72.5k training samples across nine graph problem tasks, ranging in complexity from linear and polynomial to NP-complete, extending the scope, scale, and diversity of previous benchmarks.
- This paper has been accepted by KDD 2024!
Models | Cycle | Connect | Bipartite | Topology | Shortest | Triangle | Flow | Hamilton | Subgraph | Average |
---|---|---|---|---|---|---|---|---|---|---|
In-Context Learning | ||||||||||
GPT-4 (zero-shot) | 38.75 | 17.00 | 65.25 | 5.00 | 9.25 | 5.75 | 3.25 | 59.25 | 45.50 | 27.67 |
ChatGPT (2-shot) | 51.25 | 43.75 | 70.75 | 4.50 | 3.50 | 17.25 | 8.50 | 54.25 | 43.00 | 32.97 |
GPT-4 (2-shot) | 52.50 | 62.75 | 74.25 | 25.25 | 18.25 | 31.00 | 7.75 | 75.75 | 46.75 | 43.81 |
Mistral-7B | ||||||||||
Naive SFT | 73.75 | 83.50 | 78.50 | 1.00 | 23.00 | 47.00 | 28.75 | 31.75 | 41.25 | 46.56 |
GraphWiz | 92.00 | 89.50 | 72.00 | 19.00 | 31.25 | 38.75 | 29.25 | 26.50 | 85.50 | 53.75 |
GraphWiz-DPO | 85.50 | 79.50 | 85.50 | 85.25 | 12.50 | 29.00 | 35.50 | 62.75 | 48.50 | 58.22 |
LLaMA 2-7B | ||||||||||
Naive SFT | 73.75 | 83.50 | 41.25 | 4.00 | 9.50 | 30.00 | 16.50 | 69.00 | 75.45 | 44.81 |
GraphWiz | 91.50 | 87.00 | 74.00 | 18.00 | 28.00 | 38.25 | 24.50 | 52.25 | 82.25 | 55.08 |
GraphWiz-DPO | 89.00 | 82.50 | 84.75 | 46.75 | 24.00 | 52.75 | 43.50 | 81.50 | 77.25 | 65.00 |
LLaMA 2-13B | ||||||||||
Naive SFT | 73.75 | 83.75 | 59.00 | 0.50 | 11.75 | 34.75 | 24.25 | 59.75 | 54.75 | 44.69 |
GraphWiz | 94.75 | 87.00 | 78.00 | 28.00 | 27.75 | 36.00 | 24.50 | 59.00 | 81.50 | 57.39 |
GraphWiz-DPO | 87.50 | 88.50 | 88.25 | 72.75 | 22.00 | 48.75 | 43.75 | 46.50 | 77.00 | 63.89 |
Our checkpoints and dataset are available on Hugging Face. You can download them directly via the following links:
GraphWiz | Mixed-Task Training | DPO |
---|---|---|
7B-LLaMA 2 | GraphWiz-7B, GraphWiz-7B-RFT | GraphWiz-7B-DPO |
13B-LLaMA 2 | GraphWiz-13B, GraphWiz-13B-RFT | GraphWiz-13B-DPO |
7B-Mistral | GraphWiz-7B, GraphWiz-7B-RFT | GraphWiz-7B-DPO |
Dataset: GraphInstruct
The *-vanilla version refers to our model trained only with Q:R = 1:1 data;
*-RFT refers to our model trained with all Q-R paths.
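For a quick look at the training data, the dataset can be loaded with the `datasets` library. The repository id below is an assumption based on the GraphInstruct link above and may need to be adjusted.

```python
# Minimal sketch for inspecting GraphInstruct from the Hugging Face Hub.
# The dataset id "GraphWiz/GraphInstruct" is assumed from the link above.
from datasets import load_dataset

ds = load_dataset("GraphWiz/GraphInstruct", split="train")
print(len(ds))   # number of training samples
print(ds[0])     # one query/response record
```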
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="GraphWiz/Mistral-7B")
alpaca_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request. \n### Instruction:\n{query}\n\n### Response:"
query = "Find the shortest path between two nodes in an undirected graph. In an undirected graph, (i,j,k) means that node i and node j are connected with an undirected edge with weight k. Given a graph and a pair of nodes, you need to output the shortest path between the two nodes. Q: The nodes are numbered from 0 to 8, and the edges are: (0,1,4) (1,2,7) (1,7,1) (1,3,4) (2,6,2) (2,4,8) (2,7,5) (3,6,1) (4,8,3) (5,6,6) (6,8,8) (7,8,7). Give the weight of the shortest path from node 0 to node 8."
prompt = alpaca_template.format(query=query)
output = pipe(prompt)[0]['generated_text']  # call the pipe object created above
print(output)
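The sample query above has a verifiable ground truth, so the model's answer can be checked. A small Dijkstra routine (a sanity check, not part of this repo) confirms that the shortest 0-to-8 path in that graph has weight 12:

```python
# Sanity check: compute the true shortest-path weight for the sample query.
import heapq

edges = [(0, 1, 4), (1, 2, 7), (1, 7, 1), (1, 3, 4), (2, 6, 2), (2, 4, 8),
         (2, 7, 5), (3, 6, 1), (4, 8, 3), (5, 6, 6), (6, 8, 8), (7, 8, 7)]

adj = {}
for u, v, w in edges:              # undirected graph: add both directions
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

def dijkstra(source, target):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

print(dijkstra(0, 8))  # -> 12, the expected weight of the shortest 0-to-8 path
```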
Our training strategy consists of two stages: Mixed-Task Training and DPO Alignment.
Before starting, we need to convert our data into the DeepSpeed training format.
You can see examples in our dataset/GraphInstruct-DPO-ds.json file.
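The scripts below are based on the DeepSpeed-Chat training pipeline. As a rough illustration only (the prompt/chosen/rejected field names are an assumption drawn from the standard DeepSpeed-Chat local-jsonfile convention, not from this repo; the actual schema is in dataset/GraphInstruct-DPO-ds.json), a converted record might look like this:

```python
# Illustrative sketch of a preference record in DeepSpeed-Chat style;
# field names are assumed -- consult dataset/GraphInstruct-DPO-ds.json for the real schema.
import json

record = {
    "prompt": "... graph problem description and question ...",
    "chosen": "... correct reasoning path ending with '### <answer>' ...",
    "rejected": "... incorrect reasoning path ending with '### <answer>' ...",
}

with open("local/jsonfile_graph/train.json", "w") as f:  # hypothetical path
    json.dump([record], f, indent=2)
```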
pip install -r requirements.txt
cd training/step1_supervised_finetuning
bash training_scripts/single_node/run_graph.sh
which consists of the following commands:
#!/bin/bash
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0
# DeepSpeed Team
OUTPUT=$1
ZERO_STAGE=$2
DATA_PATH=$3
MODEL_PATH=$4
if [ "$OUTPUT" == "" ]; then
OUTPUT=/output/deepspeed/nlgreasoning/
fi
if [ "$ZERO_STAGE" == "" ]; then
ZERO_STAGE=3
fi
mkdir -p $OUTPUT
deepspeed --include localhost:0,1,2,3 --master_port=25001 main.py \
--data_path local/jsonfile_graph/$DATA_PATH \
--data_split 10,0,0 \
--model_name_or_path $MODEL_PATH \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 2 \
--max_seq_len 2048 \
--learning_rate 5e-6 \
--weight_decay 0. \
--num_train_epochs 2 \
--gradient_accumulation_steps 2 \
--lr_scheduler_type cosine \
--num_warmup_steps 500 \
--seed 1234 \
--save_interval 5000 \
--zero_stage $ZERO_STAGE \
--deepspeed \
--data_output_path $OUTPUT \
--gradient_checkpointing \
--output_dir $OUTPUT \
&> $OUTPUT/training.log &
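After the job completes, one way to smoke-test the resulting checkpoint is to load it with transformers and generate a short response. This assumes the checkpoint under $OUTPUT was saved in Hugging Face format (config, tokenizer, weights), which depends on your DeepSpeed save settings.

```python
# Quick sanity check of the SFT checkpoint; assumes a Hugging Face-format
# model directory under the OUTPUT path used by the script above.
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/output/deepspeed/nlgreasoning/"  # same default as OUTPUT in the script
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt)

# Placeholder cycle-task query, formatted with the Alpaca-style template shown earlier.
prompt = ("Below is an instruction that describes a task. Write a response that "
          "appropriately completes the request. \n### Instruction:\n"
          "Determine whether or not there is a cycle in an undirected graph. ..."
          "\n\n### Response:")
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```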
cd training/step2_dpo_training
bash training_scripts/single_node/run_graph.sh
which consists of the following commands:
#!/bin/bash
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0
# local/xjsonfile/rftV2
# DeepSpeed Team
OUTPUT=$1
ZERO_STAGE=$2
DPO_PATH=$3
SFT_PATH=$4
if [ "$OUTPUT" == "" ]; then
OUTPUT=output/deepspeed/nlgreasoning/dpo_beta0.5/
fi
if [ "$ZERO_STAGE" == "" ]; then
ZERO_STAGE=3
fi
mkdir -p $OUTPUT
deepspeed --include localhost:0,1,2,3,4,5,6,7 --master_port=25001 main.py \
--data_path local/jsonfile_graph/$DPO_PATH \
--data_split 0,10,0 \
--model_name_or_path $SFT_PATH \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--max_seq_len 2048 \
--learning_rate 5e-6 \
--weight_decay 0. \
--num_train_epochs 3 \
--gradient_accumulation_steps 2 \
--lr_scheduler_type cosine \
--num_warmup_steps 100 \
--seed 1234 \
--beta 0.5 \
--print_loss \
--zero_stage $ZERO_STAGE \
--deepspeed \
--data_output_path $OUTPUT \
--gradient_checkpointing \
--output_dir $OUTPUT \
&> $OUTPUT/training.log &
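For reference, the objective optimized in this step is the standard DPO loss, with the SFT model from step 1 serving as the frozen reference policy and the trade-off set by `--beta 0.5` above:

$$\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

where $y_w$ is a preferred (correct) reasoning path and $y_l$ a rejected (incorrect) one; a larger $\beta$ penalizes drifting away from the reference policy more strongly.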
cd evaluation
bash test_graph.sh
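Model responses end with a "### <final answer>" marker (see the examples further below), so scoring reduces to extracting the text after the last "###" and comparing it to the ground truth. A minimal sketch of that check (illustrative, not the exact evaluation code in this repo):

```python
# Sketch of the answer-extraction step used when scoring model outputs.
import re

def extract_answer(response: str) -> str:
    """Return the text after the last '###' marker, stripped of punctuation."""
    tail = response.rsplit("###", 1)[-1]
    return tail.strip().rstrip(".").strip()

def is_correct(response: str, gold: str) -> bool:
    pred = extract_answer(response)
    # Compare numerically when possible, otherwise as normalized strings.
    pred_nums = re.findall(r"-?\d+\.?\d*", pred)
    gold_nums = re.findall(r"-?\d+\.?\d*", gold)
    if pred_nums and gold_nums:
        return float(pred_nums[-1]) == float(gold_nums[-1])
    return pred.lower() == gold.lower()

print(is_correct("... the maximum flow ... is 10 units. ### 10.", "10"))  # True
```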
If you want to construct additional graph problem data for training your own graph reasoning models, please refer to the following:
To generate the training datasets:
cd scripts
bash generate_all_train_datasets.sh
To generate the test datasets:
cd scripts
bash generate_all_test_datasets.sh
Here, we describe how to select diverse reasoning paths for the DPO training data:
Suppose we already have the SFT model. You can directly use our models from Hugging Face: GraphWiz
cd evaluation
bash rft.sh
By default, the number of inference runs per question ('seed') is set to 20.
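Conceptually, this step samples many reasoning paths per question from the SFT model. A minimal sketch with the transformers pipeline is shown below; the sampling hyperparameters are placeholders rather than the values used in rft.sh.

```python
# Illustrative sketch of sampling multiple reasoning paths per query;
# temperature/top_p are placeholders, not the rft.sh settings.
from transformers import pipeline

alpaca_template = ("Below is an instruction that describes a task. Write a response "
                   "that appropriately completes the request. \n### Instruction:\n"
                   "{query}\n\n### Response:")
query = "..."  # any GraphInstruct question, as in the inference example above

pipe = pipeline("text-generation", model="GraphWiz/Mistral-7B")
candidates = pipe(
    alpaca_template.format(query=query),
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
    num_return_sequences=20,   # mirrors the default of 20 inference runs per question
    max_new_tokens=512,
)
paths = [c["generated_text"] for c in candidates]
```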
Then we filter the sampled outputs to keep diverse reasoning paths:
cd find_paths
python3 select_path_dpo.py
python3 find_path.py
Please note that you should change the data paths according to your local environment.
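The selection scripts rank the sampled paths by string similarity so that the paths kept for each question are as diverse as possible. The sketch below illustrates the idea with a greedy Jaccard-based filter; select_path_dpo.py also ranks by edit distance, TF-IDF, and cosine similarity, as reflected in the pos_sort / neg_sort fields of the output file.

```python
# Illustrative greedy selection of up to K mutually dissimilar reasoning paths,
# using Jaccard similarity over word sets; the real scripts also use edit distance,
# TF-IDF, and cosine rankings (see pos_sort / neg_sort below).
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def select_diverse(paths, k=5, threshold=0.8):
    kept = []
    for p in paths:
        if all(jaccard(p, q) < threshold for q in kept):
            kept.append(p)
        if len(kept) == k:
            break
    return kept

# correct_paths / wrong_paths would come from the sampling step above:
# pos_rft_paths_5 = select_diverse(correct_paths)
# neg_rft_paths_5 = select_diverse(wrong_paths)
```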
Finally, you obtain a JSON file like:
"0": {
"neg_response": [
"To find the maximum flow from node 4 to node 8, we need to consider all the possible paths from node 4 to node 7, and then from node 7 to node 8. \nOne such path is 4->5->7->8, which can carry a flow of 6 units, as limited by the edge (4->5).\nAnother path is 4->6->7->8, which can carry a flow of 7 units, as limited by the edge (4->6).\nHowever, not all of this flow can reach node 8, because the edge (7->8) has a capacity of 3 units, and the edge (6->8) has a capacity of 10 units. Therefore, the bottleneck of this path is the edge (7->8), which reduces the flow to 3 units.\nTherefore, by adding the flows of the two paths, we get the maximum flow from node 4 to node 8 as 6 + 7 = 13 units.\n### The maximum flow from node 4 to node 8 is 13 units.",
"The maximum flow from node 4 to node 8 is equal to the sum of the flows on the edges leaving node 4, which is 7 + 6 = 13 units. ### 13.",
"To find the maximum flow from node 4 to node 8, we need to consider all possible paths from node 4 to node 8 and their capacities. One such path is 4->5->7->8, which has a capacity of 6 units, as limited by the edge (4->5). Another path is 4->6->8, which has a capacity of 7 units, as limited by the edge (4->6). However, note that the edge (6->8) has a smaller capacity of 10 units, so it cannot carry all the flow from node 4 to node 8 via this path. Therefore, we need to hybridize the two paths and find a way to allocate the flow between them without compromising the edge capacities. One possible allocation is to send 6 units from node 4 to node 5, 3 units from node 5 to node 7, and 7 units from node 4 to node 6, which gives a total flow of 6 + 3 + 7 = 16 units. However, this allocation is not optimal, as it wastefully converts 6 units of flow from node 4 to node 5, which is then unnecessarily routed to node 7. A more optimal allocation is to send 3 units from node 4 to node 5, 6 units from node 5 to node 7, and 7 units from node 4 to node 6, which gives a total flow of 3 + 6 + 7 = 16 units. This allocation avoids the conversion of any flow from node 4 to node 5, which is then routed to node 7. Hence, the maximum flow from node 4 to node 8 is 16 units, which is the sum of the flows along the two paths.### 16.",
"### The maximum flow from node 4 to node 8 is 13 units."
],
"pos_response": [
"To find the maximum flow from node 4 to node 8, we need to consider all possible paths from node 4 to node 8 and their capacities. \nOne such path is through node 5, where node 4 can send up to 6 units to node 5, but node 5 can only forward up to 10 units to node 7, and node 7 can only send up to 3 units to node 8. This path can carry a maximum of 3 units of flow. \nAnother path is through node 6, where node 4 can send up to 7 units to node 6, and node 6 can forward up to 10 units to node 8. This path can carry a maximum of 7 units of flow. \nHowever, note that the capacity of the edge from node 6 to node 8 is 10 units, so it cannot carry more flow than that regardless of the incoming flow from node 4. \nTherefore, by adding the flows along the two paths, we find that the maximum flow from node 4 to node 8 is 10 units. ### 10.",
"The relevant edges for the flow from node 4 to node 8 are (4->5,6), (4->6,7), (5->7,10), and (6->8,10).\nOne possible path is from node 4 to node 5 and then from node 5 to node 7, and then from node 7 to node 8. This path can carry a flow of 10 units, as limited by the edge (5->7).\nAnother possible path is from node 4 to node 6 and then from node 6 to node 8. This path can carry a flow of 7 units, as limited by the edge (6->8).\nHowever, these two paths share the edge (6->8), which has a capacity of 10 units. Therefore, the total flow that can be sent from node 4 to node 8 cannot exceed 10 units, regardless of the mixture of paths.\nHence, the maximum flow from node 4 to node 8 is 10 units. ### 10."
],
"task": "flow",
"CoT_response": "To find the maximum flow from node 4 to node 8, we need to consider all possible paths from node 4 to node 8 and their capacities. \nOne such path is through node 5, where node 4 can send up to 6 units to node 5, but node 5 can only forward up to 10 units to node 7, and node 7 can only send up to 3 units to node 8. This path can carry a maximum of 3 units of flow. \nAnother path is through node 6, where node 4 can send up to 7 units to node 6, and node 6 can forward up to 10 units to node 8. This path can carry a maximum of 7 units of flow. \nHowever, note that the capacity of the edge from node 6 to node 8 is 10 units, so it cannot carry more flow than that regardless of the incoming flow from node 4. \nTherefore, by adding the flows along the two paths, we find that the maximum flow from node 4 to node 8 is 10 units. ### 10.",
"pos_sort": {
"edit": [
0,
1
],
"jaccard": [
0,
1
],
"tfidf": [
0,
1
],
"cosine": [
0,
1
]
},
"neg_sort": {
"edit": [
0,
1,
3,
2
],
"jaccard": [
0,
2,
1,
3
],
"tfidf": [
2,
0,
1,
3
],
"cosine": [
0,
2,
1,
3
]
},
"pos_rft_paths_5": [
"The relevant edges for the flow from node 4 to node 8 are (4->5,6), (4->6,7), (5->7,10), and (6->8,10).\nOne possible path is from node 4 to node 5 and then from node 5 to node 7, and then from node 7 to node 8. This path can carry a flow of 10 units, as limited by the edge (5->7).\nAnother possible path is from node 4 to node 6 and then from node 6 to node 8. This path can carry a flow of 7 units, as limited by the edge (6->8).\nHowever, these two paths share the edge (6->8), which has a capacity of 10 units. Therefore, the total flow that can be sent from node 4 to node 8 cannot exceed 10 units, regardless of the mixture of paths.\nHence, the maximum flow from node 4 to node 8 is 10 units. ### 10."
],
"neg_rft_paths_5": [
"To find the maximum flow from node 4 to node 8, we need to consider all the possible paths from node 4 to node 7, and then from node 7 to node 8. \nOne such path is 4->5->7->8, which can carry a flow of 6 units, as limited by the edge (4->5).\nAnother path is 4->6->7->8, which can carry a flow of 7 units, as limited by the edge (4->6).\nHowever, not all of this flow can reach node 8, because the edge (7->8) has a capacity of 3 units, and the edge (6->8) has a capacity of 10 units. Therefore, the bottleneck of this path is the edge (7->8), which reduces the flow to 3 units.\nTherefore, by adding the flows of the two paths, we get the maximum flow from node 4 to node 8 as 6 + 7 = 13 units.\n### The maximum flow from node 4 to node 8 is 13 units.",
"To find the maximum flow from node 4 to node 8, we need to consider all possible paths from node 4 to node 8 and their capacities. One such path is 4->5->7->8, which has a capacity of 6 units, as limited by the edge (4->5). Another path is 4->6->8, which has a capacity of 7 units, as limited by the edge (4->6). However, note that the edge (6->8) has a smaller capacity of 10 units, so it cannot carry all the flow from node 4 to node 8 via this path. Therefore, we need to hybridize the two paths and find a way to allocate the flow between them without compromising the edge capacities. One possible allocation is to send 6 units from node 4 to node 5, 3 units from node 5 to node 7, and 7 units from node 4 to node 6, which gives a total flow of 6 + 3 + 7 = 16 units. However, this allocation is not optimal, as it wastefully converts 6 units of flow from node 4 to node 5, which is then unnecessarily routed to node 7. A more optimal allocation is to send 3 units from node 4 to node 5, 6 units from node 5 to node 7, and 7 units from node 4 to node 6, which gives a total flow of 3 + 6 + 7 = 16 units. This allocation avoids the conversion of any flow from node 4 to node 5, which is then routed to node 7. Hence, the maximum flow from node 4 to node 8 is 16 units, which is the sum of the flows along the two paths.### 16."
],
"query": "Find the maximum flow between two nodes in a directed graph. In a directed graph, (i->j,k) means that node i and node j are connected with an directed edge from node i to node j with weight k. Given a graph and a pair of nodes, you need to output the maximum flow between the two nodes. Q: The nodes are numbered from 0 to 8, and the edges are: (0->7,2) (0->3,9) (1->3,2) (2->3,2) (2->5,4) (4->5,6) (4->6,7) (5->7,10) (6->8,10) (6->7,9) (7->8,3). What is the maximum flow from node 4 to node 8?"
}
- "pos_rft_paths_5" refers to the diverse Correct reasoning paths (<=5);
- "neg_rft_paths_5" refers to the diverse InCorrect reasoning paths (<=5).
Please cite our paper if you use our data, model or code. Please also kindly cite the original dataset papers.
@article{chen2024graphwiz,
title={GraphWiz: An Instruction-Following Language Model for Graph Problems},
author={Chen, Nuo and Li, Yuhan and Tang, Jianheng and Li, Jia},
journal={arXiv preprint arXiv:2402.16029},
year={2024}
}
Similar Open Source Tools


LLM4Decompile
LLM4Decompile is an open-source large language model dedicated to decompilation of Linux x86_64 binaries, supporting GCC's O0 to O3 optimization levels. It focuses on assessing re-executability of decompiled code through HumanEval-Decompile benchmark. The tool includes models with sizes ranging from 1.3 billion to 33 billion parameters, available on Hugging Face. Users can preprocess C code into binary and assembly instructions, then decompile assembly instructions into C using LLM4Decompile. Ongoing efforts aim to expand capabilities to support more architectures and configurations, integrate with decompilation tools like Ghidra and Rizin, and enhance performance with larger training datasets.

honey
Bee is an ORM framework that provides easy and high-efficiency database operations, allowing developers to focus on business logic development. It supports various databases and features like automatic filtering, partial field queries, pagination, and JSON format results. Bee also offers advanced functionalities like sharding, transactions, complex queries, and MongoDB ORM. The tool is designed for rapid application development in Java, offering faster development for Java Web and Spring Cloud microservices. The Enterprise Edition provides additional features like financial computing support, automatic value insertion, desensitization, dictionary value conversion, multi-tenancy, and more.

PURE
PURE (Process-sUpervised Reinforcement lEarning) is a framework that trains a Process Reward Model (PRM) on a dataset and fine-tunes a language model to achieve state-of-the-art mathematical reasoning capabilities. It uses a novel credit assignment method to calculate return and supports multiple reward types. The final model outperforms existing methods with minimal RL data or compute resources, achieving high accuracy on various benchmarks. The tool addresses reward hacking issues and aims to enhance long-range decision-making and reasoning tasks using large language models.

bee
Bee is an easy and high efficiency ORM framework that simplifies database operations by providing a simple interface and eliminating the need to write separate DAO code. It supports various features such as automatic filtering of properties, partial field queries, native statement pagination, JSON format results, sharding, multiple database support, and more. Bee also offers powerful functionalities like dynamic query conditions, transactions, complex queries, MongoDB ORM, cache management, and additional tools for generating distributed primary keys, reading Excel files, and more. The newest versions introduce enhancements like placeholder precompilation, default date sharding, ElasticSearch ORM support, and improved query capabilities.

HuatuoGPT-o1
HuatuoGPT-o1 is a medical language model designed for advanced medical reasoning. It can identify mistakes, explore alternative strategies, and refine answers. The model leverages verifiable medical problems and a specialized medical verifier to guide complex reasoning trajectories and enhance reasoning through reinforcement learning. The repository provides access to models, data, and code for HuatuoGPT-o1, allowing users to deploy the model for medical reasoning tasks.

Cherry_LLM
Cherry Data Selection project introduces a self-guided methodology for LLMs to autonomously discern and select cherry samples from open-source datasets, minimizing manual curation and cost for instruction tuning. The project focuses on selecting impactful training samples ('cherry data') to enhance LLM instruction tuning by estimating instruction-following difficulty. The method involves phases like 'Learning from Brief Experience', 'Evaluating Based on Experience', and 'Retraining from Self-Guided Experience' to improve LLM performance.

lance
Lance is a modern columnar data format optimized for ML workflows and datasets. It offers high-performance random access, vector search, zero-copy automatic versioning, and ecosystem integrations with Apache Arrow, Pandas, Polars, and DuckDB. Lance is designed to address the challenges of the ML development cycle, providing a unified data format for collection, exploration, analytics, feature engineering, training, evaluation, deployment, and monitoring. It aims to reduce data silos and streamline the ML development process.

DataFlow
DataFlow is a data preparation and training system designed to parse, generate, process, and evaluate high-quality data from noisy sources, improving the performance of large language models in specific domains. It constructs diverse operators and pipelines, validated to enhance domain-oriented LLM's performance in fields like healthcare, finance, and law. DataFlow also features an intelligent DataFlow-agent capable of dynamically assembling new pipelines by recombining existing operators on demand.

SciCode
SciCode is a challenging benchmark designed to evaluate the capabilities of language models (LMs) in generating code for solving realistic scientific research problems. It contains 338 subproblems decomposed from 80 challenging main problems across 16 subdomains from 6 domains. The benchmark offers optional descriptions specifying useful scientific background information and scientist-annotated gold-standard solutions and test cases for evaluation. SciCode demonstrates a realistic workflow of identifying critical science concepts and facts and transforming them into computation and simulation code, aiming to help showcase LLMs' progress towards assisting scientists and contribute to the future building and evaluation of scientific AI.

AIOS
AIOS, a Large Language Model (LLM) Agent operating system, embeds large language model into Operating Systems (OS) as the brain of the OS, enabling an operating system "with soul" -- an important step towards AGI. AIOS is designed to optimize resource allocation, facilitate context switch across agents, enable concurrent execution of agents, provide tool service for agents, maintain access control for agents, and provide a rich set of toolkits for LLM Agent developers.

CodeGeeX4
CodeGeeX4-ALL-9B is an open-source multilingual code generation model based on GLM-4-9B, offering enhanced code generation capabilities. It supports functions like code completion, code interpreter, web search, function call, and repository-level code Q&A. The model has competitive performance on benchmarks like BigCodeBench and NaturalCodeBench, outperforming larger models in terms of speed and performance.

Consistency_LLM
Consistency Large Language Models (CLLMs) is a family of efficient parallel decoders that reduce inference latency by efficiently decoding multiple tokens in parallel. The models are trained to perform efficient Jacobi decoding, mapping any randomly initialized token sequence to the same result as auto-regressive decoding in as few steps as possible. CLLMs have shown significant improvements in generation speed on various tasks, achieving up to 3.4 times faster generation. The tool provides a seamless integration with other techniques for efficient Large Language Model (LLM) inference, without the need for draft models or architectural modifications.

auto-md
Auto-MD is a Python tool that converts various file types and GitHub repositories into Markdown documents optimized for quick indexing via large language models. It supports multiple file types, processes zip files/folders/individual files and GitHub repositories, generates single or multiple Markdown files, and creates a table of contents and metadata for each processed file.

Streamline-Analyst
Streamline Analyst is a cutting-edge, open-source application powered by Large Language Models (LLMs) designed to revolutionize data analysis. This Data Analysis Agent effortlessly automates tasks such as data cleaning, preprocessing, and complex operations like identifying target objects, partitioning test sets, and selecting the best-fit models based on your data. With Streamline Analyst, results visualization and evaluation become seamless. It aims to expedite the data analysis process, making it accessible to all, regardless of their expertise in data analysis. The tool is built to empower users to process data and achieve high-quality visualizations with unparalleled efficiency, and to execute high-performance modeling with the best strategies. Future enhancements include Natural Language Processing (NLP), neural networks, and object detection utilizing YOLO, broadening its capabilities to meet diverse data analysis needs.

BitBLAS
BitBLAS is a library for mixed-precision BLAS operations on GPUs, for example, the $W_{wdtype}A_{adtype}$ mixed-precision matrix multiplication where $C_{cdtype}[M, N] = A_{adtype}[M, K] \times W_{wdtype}[N, K]$. BitBLAS aims to support efficient mixed-precision DNN model deployment, especially the $W_{wdtype}A_{adtype}$ quantization in large language models (LLMs), for example, the $W_{UINT4}A_{FP16}$ in GPTQ, the $W_{INT2}A_{FP16}$ in BitDistiller, the $W_{INT2}A_{INT8}$ in BitNet-b1.58. BitBLAS is based on techniques from our accepted submission at OSDI'24.
For similar tasks


lighteval
LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron. We're releasing it with the community in the spirit of building in the open. Note that it is still very much early so don't expect 100% stability ^^' In case of problems or question, feel free to open an issue!

Firefly
Firefly is an open-source large model training project that supports pre-training, fine-tuning, and DPO of mainstream large models. It includes models like Llama3, Gemma, Qwen1.5, MiniCPM, Llama, InternLM, Baichuan, ChatGLM, Yi, Deepseek, Qwen, Orion, Ziya, Xverse, Mistral, Mixtral-8x7B, Zephyr, Vicuna, Bloom, etc. The project supports full-parameter training, LoRA, QLoRA efficient training, and various tasks such as pre-training, SFT, and DPO. Suitable for users with limited training resources, QLoRA is recommended for fine-tuning instructions. The project has achieved good results on the Open LLM Leaderboard with QLoRA training process validation. The latest version has significant updates and adaptations for different chat model templates.

Awesome-Text2SQL
Awesome Text2SQL is a curated repository containing tutorials and resources for Large Language Models, Text2SQL, Text2DSL, Text2API, Text2Vis, and more. It provides guidelines on converting natural language questions into structured SQL queries, with a focus on NL2SQL. The repository includes information on various models, datasets, evaluation metrics, fine-tuning methods, libraries, and practice projects related to Text2SQL. It serves as a comprehensive resource for individuals interested in working with Text2SQL and related technologies.

create-million-parameter-llm-from-scratch
The 'create-million-parameter-llm-from-scratch' repository provides a detailed guide on creating a Large Language Model (LLM) with 2.3 million parameters from scratch. The blog replicates the LLaMA approach, incorporating concepts like RMSNorm for pre-normalization, SwiGLU activation function, and Rotary Embeddings. The model is trained on a basic dataset to demonstrate the ease of creating a million-parameter LLM without the need for a high-end GPU.

StableToolBench
StableToolBench is a new benchmark developed to address the instability of Tool Learning benchmarks. It aims to balance stability and reality by introducing features such as a Virtual API System with caching and API simulators, a new set of solvable queries determined by LLMs, and a Stable Evaluation System using GPT-4. The Virtual API Server can be set up either by building from source or using a prebuilt Docker image. Users can test the server using provided scripts and evaluate models with Solvable Pass Rate and Solvable Win Rate metrics. The tool also includes model experiments results comparing different models' performance.

BetaML.jl
The Beta Machine Learning Toolkit is a package containing various algorithms and utilities for implementing machine learning workflows in multiple languages, including Julia, Python, and R. It offers a range of supervised and unsupervised models, data transformers, and assessment tools. The models are implemented entirely in Julia and are not wrappers for third-party models. Users can easily contribute new models or request implementations. The focus is on user-friendliness rather than computational efficiency, making it suitable for educational and research purposes.

AI-TOD
AI-TOD is a dataset for tiny object detection in aerial images, containing 700,621 object instances across 28,036 images. Objects in AI-TOD are smaller with a mean size of 12.8 pixels compared to other aerial image datasets. To use AI-TOD, download xView training set and AI-TOD_wo_xview, then generate the complete dataset using the provided synthesis tool. The dataset is publicly available for academic and research purposes under CC BY-NC-SA 4.0 license.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.