• AnarchistArtificer@slrpnk.net
    9 months ago

    That reminds me of a fairly recent article about research on visualisation systems for interpretable or explainable AI (XAI). The idea was that if we can make AI systems that explain their reasoning, then they can be a useful tool, especially in the hands of domain experts.

    It turns out that the fancy visualisations that made it easier to understand how the model had reached a conclusion actually made subject matter experts less accurate at catching errors. This surprised the researchers, and when they later tried to make sense of it, they realised they had inadvertently dialled up people’s willingness to trust the model, because it looked legit.

    One of my favourite aphorisms is “all models are wrong, but some are useful.” Seems that the tricky part is figuring out how wrong and how useful.