opening-up-chatgpt.github.io

Tracking instruction-tuned LLM openness. Paper: Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.

This repository provides a curated list of open-source projects that implement instruction-tuned large language models (LLMs) with reinforcement learning from human feedback (RLHF). The projects are evaluated in terms of their openness across a predefined set of criteria in the areas of Availability, Documentation, and Access. The goal of this repository is to promote transparency and accountability in the development and deployment of LLMs.

Opening up ChatGPT: tracking openness of instruction-tuned LLMs (openness leaderboard)

Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In Proceedings of the 5th International Conference on Conversational User Interfaces. Eindhoven. doi:10.1145/3571884.3604316. (PDF)

Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, reinforcement learning data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as 'open source', many inherit undocumented data of dubious legality, few share the all-important RLHF components (a key site where human labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.

Contents

  • Overview
  • How to contribute
  • Related resources
  • Contribute

Overview

We classify projects for their degrees of openness across a predefined set of criteria in the areas of Availability, Documentation and Access. The criteria are described in detail here.

Availability
  • Open code
  • LLM data
  • LLM weights
  • RL data
  • RL weights
  • License

Documentation
  • Code
  • Architecture
  • Preprint
  • Paper
  • Model card
  • Data sheet

Access
  • Package
  • API

If you find any of this useful, please cite our work:

@inproceedings{liesenfeld_dingemanse_2024,
	author = {Liesenfeld, Andreas and Dingemanse, Mark},
	title = {Rethinking open source generative AI: open washing and the EU AI Act},
	year = {2024},
	isbn = {9798400704505},
	publisher = {Association for Computing Machinery},
	address = {New York, NY, USA},
	url = {https://doi.org/10.1145/3630106.3659005},
	doi = {10.1145/3630106.3659005},
	pages = {1774--1787},
	numpages = {14},
	keywords = {Technology assessment, large language models, text generators, text-to-image generators},
	location = {Rio de Janeiro, Brazil},
	series = {FAccT '24}
}

@inproceedings{liesenfeld_opening_2023,
	address = {Eindhoven},
	title = {Opening up {ChatGPT}: tracking openness, transparency, and accountability in instruction-tuned text generators},
	url = {https://opening-up-chatgpt.github.io},
	doi = {10.1145/3571884.3604316},
	booktitle = {Proceedings of the 5th {International} {Conference} on {Conversational} {User} {Interfaces}},
	publisher = {Association for Computing Machinery},
	author = {Liesenfeld, Andreas and Lopez, Alianda and Dingemanse, Mark},
	year = {2023},
	pages = {1--6},
}

How to contribute

If you know of a new instruction-tuned LLM+RLHF model we should be including, you can open an issue or add it directly as described below.

How to contribute to the live table:

  1. Fork the repo and edit an existing YAML file, or create a new one based on the sample YAML file in /projects (a minimal sketch is shown below).
  2. File a pull request to have your changes reviewed and, hopefully, merged into main.

The live table is updated whenever there is a change to the files in the /projects/ folder.
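As a rough sketch of what step 1 involves, a project entry could look something like the following. The field names and the open/partial/closed values are illustrative assumptions only; the authoritative schema is the sample YAML file in /projects.

# Hypothetical project entry, for illustration only; follow the sample file in /projects for the actual schema.
system:
  name: ExampleChat                # made-up project name
  link: https://example.org/examplechat
availability:
  opencode: open                   # training and inference code shared
  llmdata: partial                 # pre-training data only partly documented
  llmweights: open
  rldata: closed                   # RLHF/instruction-tuning data not released
  rlweights: closed
  license: open
documentation:
  code: open
  architecture: open
  preprint: open
  paper: closed
  modelcard: partial
  datasheet: closed
access:
  package: closed
  api: open

Each field mirrors one of the Availability, Documentation, and Access criteria listed in the overview above; once a pull request touching /projects/ is merged, the change is picked up when the live table rebuilds.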

Related resources

We try to be fairly systematic in our coverage of LLM+RLHF models, documenting degrees of openness for more than 10 features. Many other resources offer more free-form listings of related projects, or ways to interact with (open) LLMs.

Here are some background readings on why openness matters, why closed models make bad baselines, and why some of us call for more counterfoil research in times of hype:

  • The gradient of generative AI release — FAccT '23 paper by Irene Solaiman on degrees of openness in generative AI
  • Closed AI models make bad baselines, by Anna Rogers. Proposes a simple principle: "That which is not open and reasonably reproducible cannot be considered a requisite baseline."
  • Why ChatGPT is bad for open psycholinguistics — by Cassandra Jacobs. Quote: "The downsides of ChatGPT are specific to it—not intrinsic to language modeling as a whole. Using ChatGPT [in] one’s work undermines open science, reproducibility & lacks the flexibility of previous systems that could be manipulated & changed to suit one’s scientific needs."
  • Stop feeding the hype and start resisting, by Iris van Rooij. Quote: "It’s almost as if academics are eager to do the PR work for OpenAI (the company that created ChatGPT; as well as its predecessor GPT-3 and its anticipated successor GPT-4). Why?"
  • AI is a lot of work — by Josh Dzieza for The Verge. Quote: "ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing."

Contribute

Contributions welcome! Read the contribution guidelines first.

The list of contributors is generated with contrib.rocks.
