From ac406f0056d40e01cf7a27dcd784e9f1461a7b41 Mon Sep 17 00:00:00 2001
From: CharlesCNorton <135471798+CharlesCNorton@users.noreply.github.com>
Date: Wed, 12 Jun 2024 10:00:20 -0400
Subject: [PATCH] typo: Correct "NFSW" to "NSFW" in Model Card

Corrected a typo in the Stable Diffusion v2 Model Card. Changed "NFSW" to
"NSFW" in the Limitations section to ensure accurate and clear documentation.
---
 modelcard.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modelcard.md b/modelcard.md
index 4b61909..9de0558 100644
--- a/modelcard.md
+++ b/modelcard.md
@@ -63,7 +63,7 @@ Using the model to generate content that is cruel to individuals is a misuse of
 - The model was trained mainly with English captions and will not work as well in other languages.
 - The autoencoding part of the model is lossy
 - The model was trained on a subset of the large-scale dataset
-  [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NFSW detector (see Training section).
+  [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).

 ### Bias
 While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.