typo: Correct "NFSW" to "NSFW" in Model Card

Corrected a typo in the Stable Diffusion v2 Model Card. Changed "NFSW" to "NSFW" in the Limitations section to ensure accurate and clear documentation.
This commit is contained in:
CharlesCNorton 2024-06-12 10:00:20 -04:00 committed by GitHub
parent cf1d67a6fd
commit ac406f0056
No known key found for this signature in database
GPG key ID: B5690EEEBB952194


@@ -63,7 +63,7 @@ Using the model to generate content that is cruel to individuals is a misuse of
 - The model was trained mainly with English captions and will not work as well in other languages.
 - The autoencoding part of the model is lossy
 - The model was trained on a subset of the large-scale dataset
-[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NFSW detector (see Training section).
+[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
 ### Bias
 While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.