From the project’s README:

Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters – and that’s not saying much – than I do about python. It started out as my typical “google and do the monkey-see-monkey-do” kind of programming, but then I cut out the middle-man – me – and just used Google Antigravity to do the audio sample visualizer.

This is the commit: https://github.com/torvalds/AudioNoise/commit/93a72563cba609a414297b558cb46ddd3ce9d6b5
Tbf it’s his project so he can do whatever he wants
The issue is when people do things like that one dude who had Claude implement support for DWARF in… whatever language it was (something ML-y, I think?) and literally didn’t even remove the copyright attribution to some random third person that Claude added. It was a PR of several thousand lines, all AI-generated, and he had no idea how it worked, but said it’s OK, Claude understands it. He didn’t even tell anyone he was going to be working on it, so there was no discussion of the architecture beforehand.
Edit: OCaml. So I was right that it was something ML-y lol
Only the words of it tho.
Only the probability of the next token after tokenisation of it.
Not even that.
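To make the quip concrete: “the probability of the next token” is literally the model’s entire output for one step. Here is a minimal sketch of that, assuming torch and Hugging Face transformers are installed; GPT-2 is used only because it is small and public, not because it is what Claude runs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any small causal LM works for illustration; gpt2 is just a stand-in here.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Tokenise the prompt and run one forward pass.
inputs = tok("Claude understands the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Softmax over the final position's logits gives a probability
# distribution over the whole vocabulary -- "the next token".
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx)!r}: {p.item():.3f}")
```

That distribution is all the model emits per step; everything that looks like “understanding” is downstream of sampling from it.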