• lemmyng@beehaw.org
    1 year ago

    Both are concerning, but as a former academic to me neither of them are as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct data sets for research, and the assumption is that it’s largely human-generated. This balance is about to shift, and it’s going to cause significant damage to future research. Even if everyone agreed to make a change right now, the well is already poisoned. We’re talking the equivalent of the burning of Alexandria for linguistics research.

    • wet_lettuce@beehaw.org
      1 year ago

      > Both are concerning, but as a former academic to me neither of them are as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct data sets for research, and the assumption is that it’s largely human-generated. This balance is about to shift, and it’s going to cause significant damage to future research. Even if everyone agreed to make a change right now, the well is already poisoned. We’re talking the equivalent of the burning of Alexandria for linguistics research.

      It reminds me of the situation with steel, where everything produced after the first atomic weapons is tainted by radioactive fallout. It can’t be used for sensitive scientific tools or equipment, so you have to find and reuse steel made before the atomic bombs. https://en.wikipedia.org/wiki/Low-background_steel

      There is going to be a market for “low-background data” in the future: data sets verifiably collected before LLM output flooded the web.