Biases: GPT-3, like all large language models trained on internet corpora, will generate stereotyped or prejudiced content. The model can retain and amplify biases inherited from any part of its development, from the datasets we selected to the training techniques we chose. This is concerning: model bias could harm people in the affected groups by entrenching existing stereotypes and producing demeaning portrayals, among other potential harms.<sup>[[5]](#fn5)</sup> This issue is of special concern from a societal perspective, and is discussed along with other issues in the [paper](https://arxiv.org/abs/2005.14165) section on Broader Impacts.