Put in "western" or "Texas" and that's what you get. The West is a huge area even just within America, but the word is tied to a lot of movie tropes and imagery, so that's what comes back.
This also only happens when the language is English. Ask in Urdu or Bengali and you get totally different results; in fact, just use "Urdu" instead of "Indian" and you'll get fewer turbans, or put in "Punjabi" and you'll get more turbans.
Does it help the model to produce images that are unmistakably "American" for its raters or for its automated rating system? If yes, they are statistically significant. Both low frequency and systematic rarity can be significant in a statistical analysis.
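To make the significance point concrete (with entirely made-up numbers): suppose a trait appears in roughly 5% of the relevant population but in 60 of 100 generated images. A one-sided binomial test says that over-representation is overwhelmingly significant, even though the trait is "low frequency" in the real world. A minimal sketch in Python, assuming scipy and hypothetical counts:

```python
# Hypothetical numbers only: tests whether a trait is over-represented
# in generated images relative to an assumed real-world base rate.
from scipy.stats import binomtest

base_rate = 0.05       # assumed real-world frequency of the trait
n_images = 100         # hypothetical sample of generated images
n_with_trait = 60      # hypothetical count of images showing the trait

result = binomtest(n_with_trait, n_images, base_rate, alternative="greater")
print(f"p-value: {result.pvalue:.3g}")  # tiny p-value -> over-representation is significant
```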
By that logic, Americans should always be depicted in cowboy hats.
I see you’ve watched anime featuring Americans.
Or just put "turban" in the negative prompt if you want.
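For example, a minimal sketch assuming Stable Diffusion via Hugging Face's diffusers library (the model name and prompts are illustrative, not from the thread):

```python
# Sketch: steering a Stable Diffusion pipeline away from a concept
# by listing it in the negative prompt. Model and prompts are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of an Indian man",
    negative_prompt="turban",  # concepts listed here are suppressed in the output
).images[0]
image.save("portrait.png")
```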
Ask an AI for pictures of Texans and see how many cowboy hats it gives back to you.