Lena@gregtech.eu to Programmer Humor@programming.dev · English · 4 months ago
"Source code file" (image, gregtech.eu) · 237 comments
hperrin@lemmy.ca · English · 4 months ago
Hey Grok, take this one file out of the context of my 250,000 line project and give me that delicious AI slop!
hperrin@lemmy.ca · English · 4 months ago
Just really fuck up this shit. I want it unrecognizable!
Zetta@mander.xyz · 4 months ago (edited)
Perfect, Grok's context limit is 256,000 tokens, and as we all know, LLM recall only gets better as the context fills, so you'll get perfect slop that works amazingly. /s More info on the quality drop as context grows here: https://github.com/NVIDIA/RULER
hperrin@lemmy.ca · English · 4 months ago
250,000 lines is way more than 250,000 tokens, so even that context is too small.
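The lines-versus-tokens point checks out with some back-of-envelope arithmetic. A minimal sketch, assuming (this ratio is not from the thread) that a line of source code averages roughly 10 tokens under typical LLM tokenizers:

```python
# Rough token estimate for a codebase, assuming ~10 tokens per line of code.
# The per-line ratio is a ballpark assumption, not a measured figure.
def estimate_tokens(lines_of_code: int, tokens_per_line: float = 10.0) -> int:
    """Back-of-envelope token count for a project of the given size."""
    return int(lines_of_code * tokens_per_line)

project_tokens = estimate_tokens(250_000)  # ~2,500,000 tokens
context_limit = 256_000                    # Grok's window, per the comment above
print(project_tokens > context_limit)      # True: the project is ~10x the window
```

So a 250,000-line project would overflow a 256,000-token window by roughly an order of magnitude, before counting the prompt itself.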