Commit de1f5c3

Authored Nov 12, 2023
Update docs for current project status. (LAION-AI#3730)
This PR updates the FAQ, which is currently very out of date, to reflect the current status of the project. It also adds a notice to the introduction page in the docs that the project has concluded. Feel free to make any changes if it isn't worded how you would like.
1 parent 7558fa8 commit de1f5c3

File tree · 2 files changed · +35 −157 lines changed
 

docs/docs/faq.md

+29 −157
@@ -15,12 +15,12 @@ In this page, there are some of the most frequently asked questions.
 
 </summary>
 
-We have released candidate supervised finetuning (SFT) models using both Pythia
-and LLaMa, as well as candidate reward models for reinforcement learning from
-human feedback training using Pythia, which you can try, and are beginning the
-process of applying (RLHF). We have also released the first version of the
-OpenAssistant Conversations dataset
-[here](https://huggingface.co/datasets/OpenAssistant/oasst1).
+This project has concluded. We have released supervised finetuning (SFT) models
+using Llama 2, LLaMa, Falcon, Pythia, and StableLM, as well as reinforcement
+learning from human feedback (RLHF) trained models and reward models, all of
+which are available [here](https://huggingface.co/OpenAssistant). In addition
+to our models, we have released three datasets of OpenAssistant conversations
+and a [research paper](https://arxiv.org/abs/2304.07327).
 
 </details>
 
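The oasst1 dataset referenced in the removed lines above can be pulled straight from the Hub. A minimal sketch using the `datasets` library; the split and column names are taken from the dataset card and should be verified there:

```python
# Minimal sketch: load the OpenAssistant Conversations dataset (oasst1) from
# the Hugging Face Hub. Split and column names follow the dataset card and
# should be double-checked there.
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst1")
print(ds)  # expected splits: "train" and "validation"

# Each row is one message in a conversation tree; "role" distinguishes
# "prompter" from "assistant" turns, and "parent_id" links a reply to the
# message it answers.
msg = ds["train"][0]
print(msg["role"], "->", msg["text"][:80])
```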

@@ -31,9 +31,8 @@ OpenAssistant Conversations dataset
 
 </summary>
 
-You can play with our best candidate model
-[here](https://open-assistant.io/chat) and provide thumbs up/down responses to
-help us improve the model in future!
+Our online demonstration is no longer available, but the models remain
+available to download [here](https://huggingface.co/OpenAssistant).
 
 </details>
 

@@ -44,37 +43,18 @@ help us improve the model in future!
 
 </summary>
 
-The candidate Pythia SFT models are
+All of our models are
 [available on HuggingFace](https://huggingface.co/OpenAssistant) and can be
-loaded via the HuggingFace Transformers library. As such you may be able to use
-them with sufficient hardware. There are also spaces on HF which can be used to
-chat with the OA candidate without your own hardware. However, these models are
-not final and can produce poor or undesirable outputs.
+loaded via the HuggingFace Transformers library, or with other runners after
+conversion. As such, you may be able to use them with sufficient hardware.
+There are also Spaces on HF which can be used to chat with the OA models
+without your own hardware. However, some of these models are not final and can
+produce poor or undesirable outputs.
 
-LLaMa SFT models cannot be released directly due to Meta's license but XOR
+LLaMa (v1) SFT models cannot be released directly due to Meta's license, but XOR
 weights are released on the HuggingFace org. Follow the process in the README
-there to obtain a full model from these XOR weights.
-
-</details>
-
-<details>
-<summary>
-
-### Is there an API available?
-
-</summary>
-
-There is no API currently available for Open Assistant. Any mention of an API in
-documentation is referencing the website's internal API. We understand that an
-API is a highly requested feature, but unfortunately, we can't provide one at
-this time due to a couple of reasons. Firstly, the inference system is already
-under high load and running off of compute from our sponsors. Secondly, the
-project's primary goal is currently data collection and model training, not
-providing a product.
-
-However, if you're looking to run inference, you can host the model yourself
-either on your own hardware or with a cloud provider. We appreciate your
-understanding and patience as we continue to develop this project.
+there to obtain a full model from these XOR weights. Llama 2 models do not
+require XOR weights.
 
 </details>
 
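As a concrete illustration of the Transformers loading path described in the hunk above, here is a minimal sketch. The checkpoint ID and the `<|prompter|>`/`<|assistant|>` prompt template are taken from the published model cards and should be verified against the HuggingFace org page:

```python
# Minimal sketch: load a released Open Assistant SFT model with the HuggingFace
# Transformers library. The checkpoint ID and the <|prompter|>/<|assistant|>
# prompt format follow the published model cards; verify both on the org page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory relative to fp32
    device_map="auto",          # spread layers across available devices
)

prompt = "<|prompter|>What is a language model?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```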

@@ -102,15 +82,13 @@ inference setup and UI locally unless you wish to assist in development.
 All Open Assistant code is licensed under Apache 2.0. This means it is available
 for a wide range of uses including commercial use.
 
-The Open Assistant Pythia based models are released as full weights and will be
-licensed under the Apache 2.0 license.
-
-The Open Assistant LLaMa based models will be released only as delta weights
-meaning you will need the original LLaMa weights to use them, and the license
-restrictions will therefore be those placed on the LLaMa weights.
+Open Assistant models are released under the license of their respective base
+models, be that Llama 2, Falcon, Pythia, or StableLM. LLaMa (v1) models are
+only released as XOR weights, meaning you will need the original LLaMa weights
+to use them.
 
-The Open Assistant data is released under a Creative Commons license allowing a
-wide range of uses including commercial use.
+The Open Assistant data is released under Apache 2.0, allowing a wide range of
+uses including commercial use.
 
 </details>
 
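For context on the XOR weights mentioned above: the published files are byte-wise XORs of the finetuned weights with the original LLaMa weights, so neither set is disclosed on its own. Below is a conceptual sketch of the recombination step only; it is not the project's actual conversion tooling, and the authoritative procedure is the README on the HuggingFace org:

```python
# Conceptual sketch of XOR-delta weight distribution: the published file is the
# byte-wise XOR of the finetuned weights with the original LLaMa weights. This
# is NOT the project's actual tooling; follow the README on the HuggingFace org.
import numpy as np

def apply_xor_delta(original_path: str, delta_path: str, out_path: str) -> None:
    """Recombine: finetuned = original XOR delta (byte-wise)."""
    original = np.fromfile(original_path, dtype=np.uint8)
    delta = np.fromfile(delta_path, dtype=np.uint8)
    assert original.size == delta.size, "files must be the same length"
    np.bitwise_xor(original, delta).tofile(out_path)
```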

@@ -138,9 +116,8 @@ you to everyone who has taken part!
 
 </summary>
 
-The model code, weights, and data are free. We are additionally hosting a free
-public instance of our best current model for as long as we can thanks to
-compute donation from Stability AI via LAION!
+The model code, weights, and data are free. The public instance of our best
+models is no longer available due to the project's conclusion.
 
 </details>
 

@@ -151,10 +128,9 @@ compute donation from Stability AI via LAION!
 
 </summary>
 
-The current smallest (Pythia) model is 12B parameters and is challenging to run
-on consumer hardware, but can run on a single professional GPU. In future there
-may be smaller models and we hope to make progress on methods like integer
-quantisation which can help run the model on smaller hardware.
+The current smallest models are 7B parameters and are challenging to run on
+consumer hardware, but can run on a single professional GPU or be quantized to
+run on more widely available hardware.
 
 </details>
 
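The quantized path mentioned above can be exercised with bitsandbytes through Transformers. A minimal sketch; the checkpoint ID is an assumed example to substitute from the org page, and actual memory use depends on the model and settings:

```python
# Minimal sketch: load an Open Assistant checkpoint with 4-bit quantization via
# bitsandbytes, one way to fit a 7B-parameter model on a consumer GPU. The
# model ID is an assumed example; pick any checkpoint from the HuggingFace org.
# Requires the bitsandbytes and accelerate packages alongside transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "OpenAssistant/falcon-7b-sft-mix-2000"  # assumed checkpoint name

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit NF4 format
    bnb_4bit_compute_dtype=torch.float16,  # dequantize to fp16 for matmuls
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # roughly quarters weight memory vs fp16
    device_map="auto",
)
```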

@@ -165,13 +141,7 @@ quantisation which can help run the model on smaller hardware.
 
 </summary>
 
-If you want to help in the data collection for training the model, go to the
-website [https://open-assistant.io/](https://open-assistant.io/).
-
-If you want to contribute code, take a look at the
-[tasks in GitHub](https://github.com/orgs/LAION-AI/projects/3) and comment on an
-issue stating your wish to be assigned. You can also take a look at this
-[contributing guide](https://github.com/LAION-AI/Open-Assistant/blob/main/CONTRIBUTING.md).
+This project has now concluded.
 
 </details>
 

@@ -190,104 +160,6 @@ well as accelerate, DeepSpeed, bitsandbytes, NLTK, and other libraries.
 
 </details>
 
-## Questions about the data collection website
-
-<details>
-<summary>
-
-### Can I use ChatGPT to help in training Open Assistant, for instance, by generating answers?
-
-</summary>
-
-No, it is against their terms of service to use it to help train other models.
-See
-[this issue](https://github.com/LAION-AI/Open-Assistant/issues/471#issuecomment-1374392299).
-ChatGPT-like answers will be removed.
-
-</details>
-
-<details>
-<summary>
-
-### What should I do if I don't know how to complete the task as an assistant?
-
-</summary>
-Skip it.
-</details>
-
-<details>
-<summary>
-
-### Should I fact check the answers by the assistant?
-
-</summary>
-
-Yes, you should try. If you are not sure, skip the task.
-
-</details>
-
-<details>
-<summary>
-
-### How can I see my score?
-
-</summary>
-
-In your [account settings](https://open-assistant.io/account).
-
-</details>
-
-<details>
-<summary>
-
-### Can we see how many data points have been collected?
-
-</summary>
-
-You can see a regularly updated interface at
-[https://open-assistant.io/stats](https://open-assistant.io/stats).
-
-</details>
-
-<details>
-<summary>
-
-### How do I write and label prompts?
-
-</summary>
-
-Check the
-[guidelines](https://projects.laion.ai/Open-Assistant/docs/guides/guidelines).
-
-</details>
-
-<details>
-<summary>
-
-### Where can I report a bug or create a new feature request?
-
-</summary>
-
-In the [GitHub issues](https://github.com/LAION-AI/Open-Assistant/issues).
-
-</details>
-
-<details>
-<summary>
-
-### Why am I not allowed to write about this topic, even though it isn't illegal?
-
-</summary>
-
-We want to ensure that the Open Assistant dataset is as accessible as possible.
-As such, it's necessary to avoid any harmful or offensive content that could be
-grounds for removal on sites such as Hugging Face. Likewise, we want the model
-to be trained to reject as few questions as possible, so it's important to not
-include prompts that leave the assistant with no other choice but to refuse in
-order to avoid the generation of harmful content.
-
-</details>
-
 ## Questions about the development process
 
 <details>

docs/docs/intro.md

+6
@@ -1,3 +1,9 @@
+# Notice
+
+**Open Assistant has now concluded.** Please see
+[this video](https://www.youtube.com/watch?v=gqtmUHhaplo) for more information.
+Thank you to all who made this project possible.
+
 # Introduction
 
 > The FAQ page is available at
