Of course, not in a “we should generate and spread racist content” kind of way. But sometimes results are a caricature of all the stuff AI has ingested, so if its output is obviously biased, it might be a good indicator of particular ways people tend to be biased.

For example, if all of the AI-generated images for “doctor” are men, it’s pretty clear the source content is biased to indicate that doctors are/should be men. It would be a lot harder to look up all of the internet’s images of “doctor” to check for bias. There are probably a lot more nuanced cases where AI-generated content can make bias more apparent.

  • Daemon Silverstein@thelemmy.club
    2 months ago

    While I can’t see any usefulness in AI bias, I do see a practical use for another common AI trait, the AI hallucination: poetry (especially surrealist poetry). The more random, the better as a stochastic basis for making art and poetry. I’m used to writing surrealist and stream-of-consciousness poetry, and sometimes I use LLMs to suggest tokens related to other tokens: the stochastic output feeds my own subconscious mind, I write a piece based on the thoughts those tokens sparked, and then I use LLMs again to “comment on and analyze” it, which sometimes gives me valuable insights about what I wrote.