If so, are these programs that claim to ‘poison’ training datasets actually effective?

  • Brummbaer@pawb.social · 5 hours ago

    If I understand it right, you need to enrich and filter the data with human input so the model doesn’t collapse.

    Wouldn’t that imply that if the human enrichment emulates AI data too closely, the model will still collapse, since the human filtering is now just mimicking AI output?