Talk of artificial intelligence destroying humanity plays into the tech companies’ agenda, and hinders effective regulation of the societal harms AI is causing right now.
Both are concerning, but as a former academic, to me neither is as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to build research data sets, on the assumption that the data is largely human-generated. That balance is about to shift, and it’s going to cause significant damage to future research. Even if everyone agreed to change course right now, the well is already poisoned. For linguistics research, we’re talking the equivalent of the burning of the Library of Alexandria.
> Both are concerning, but as a former academic, to me neither is as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to build research data sets, on the assumption that the data is largely human-generated. That balance is about to shift, and it’s going to cause significant damage to future research. Even if everyone agreed to change course right now, the well is already poisoned. For linguistics research, we’re talking the equivalent of the burning of the Library of Alexandria.
It reminds me of the situation with steel: anything smelted after the first atomic weapons tests is contaminated by radionuclides, so it can’t be used for sensitive scientific instruments. You have to find and salvage steel made before the bombs. https://en.wikipedia.org/wiki/Low-background_steel
There is going to be “low-background data” in the future.
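To make the analogy concrete, here is a minimal sketch of what curating “low-background data” might look like: filtering a scraped corpus by publication date so only documents predating ChatGPT’s public release (late November 2022) are kept. The cutoff date, the `published` metadata field, and the `is_low_background` helper are all assumptions for illustration, not an established standard.

    from datetime import date

    # Hypothetical cutoff: treat text published before ChatGPT's public
    # release (30 November 2022) as "low-background", i.e. presumed
    # human-written. The exact date is an assumption for illustration.
    LLM_ERA_START = date(2022, 11, 30)

    def is_low_background(doc: dict) -> bool:
        """True if a document predates the LLM era.

        Assumes each doc carries a `published` date (e.g. from crawl
        metadata); undated documents are conservatively excluded.
        """
        published = doc.get("published")
        return published is not None and published < LLM_ERA_START

    corpus = [
        {"text": "Colorless green ideas sleep furiously.",
         "published": date(2021, 5, 1)},
        {"text": "As an AI language model, I cannot...",
         "published": date(2023, 2, 14)},
        {"text": "Undated forum post.", "published": None},
    ]

    low_background = [doc for doc in corpus if is_low_background(doc)]
    print(len(low_background))  # 1: only the pre-2022 document survives

In practice the hard part is trusting the metadata, not the filter itself: crawl dates can be missing or spoofed, which is exactly why a fixed pre-2022 snapshot would be so valuable.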
Ah yes, the old AI alignment vs. AI ethics slapfight.
How about we agree that both are concerning?