- cross-posted to:
- [email protected]
I absolutely agree. For me, a very similar thing can be said for art. It should be okay to scrape free and open source data and copyleft art, and make whatever you want with it. That is why I put a copyleft license on things! Just release the code and acknowledge the source.
If you waste your time shitting on LLMs and AI models as a concept, you are totally missing the point. These are useful tools. Not as useful as the oligarchs want you to believe, but they are useful, and people find value in them. That is the reality.
You should focus on who controls and monopolises these concepts, not the concept itself.
This presupposes that there is some real and tangible benefit to LLMs, but an explanation of that argument is never put forward. Sure, the dream is to have the magic LLM genie write you code that runs perfectly and does what you want, but now you have code you didn’t write and can’t maintain. So you have to rely on the LLM genie for that too, and sooner rather than later - no matter how well you’ve trained it - it’s going to run across something it can’t do. And then what? You have a pile of code you don’t understand and no idea how to fix it yourself, because you kept having the genie do it.
What about all of that is worth the theft of the work of others, the resource demands to train and run the LLMs, and the debasement of actual coders?
> it’s going to run across something it can’t do. And then what?
Experience shows that more often than not it will just lie to you.
The GPLv4 idea is interesting, and I would like to see it happen. However, I’m not sure how it would apply in practice. What’s to keep that kind of company from rejecting the license and still training proprietary models on your code under fair-use claims, as they’re already doing?