If this ain’t bad enough, their philosophy is the cherry on the cake

Not good for: Tasks requiring human judgement or design decisions
So, like 99% of programming after you have been a professional programmer for longer than a year…
oh an entropy generator, how quaint.
the philosophy reads like coping mechanisms of an abused spouse
5 Repeat until completion
What the fuck IS completion here? Just running indefinitely until the user manually kills the loop?
Completion here is when shareholders orgasm simultaneously.
Just analyze the code and see if it halts. Simple.
Sometimes the model just decides it’s perfect; sometimes it keeps adding unit tests and features. I’ve also heard of cases where it determines it’s in a loop and kills its own PID lol
This shit is so funny LOL
This could make great comedy

image originally found at https://knowyourmeme.com/memes/ralph-in-danger-im-in-danger
We have automated environmental destruction.
Man AI explanations are so… soulless and dull. And useless, even though I read this huge wall of text, I’m not closer to understanding what this plugin actually does.
It’s like it tries to rephrase a 5-sentence explanation as another 5-sentence explanation without grasping that it could be explained in 5 words concisely…
And useless, even though I read this huge wall of text, I’m not closer to understanding what this plugin actually does.
That’s how I feel coming across half of github projects or frameworks.
Amen brother - so much assumed context (that I’m invariably lacking)
It’s like it tries to rephrase a 5-sentence explanation as another 5-sentence explanation
Sounds like they tested their plugin on the readme generation
even though I read this huge wall of text, I’m not closer to understanding what this plugin actually does
Sounds like every piece of corporate literature produced in at least the last decade
Because I’d bet my left kidney that at least the intro half of the readme is written by AI, if not the whole thing
So you know how when you’re interacting with an AI bot (ChatGPT/Copilot or Gemini or Claude etc.) you have a chat history with it, and it “remembers” its previous output and can reference it if you ask follow-up stuff.
You can use that behavior to try to get the model to give you a better answer by having it “think” about the prompt you fed it, either by asking further probing questions, refining your initial prompt, or saying “hey, this thing you wrote doesn’t make sense,” and it’ll try to fix it.
Well, what this does is just take the initial prompt and keep feeding it to the bot after it spits out the response. Then it keeps doing that again and again with the same prompt. The idea is that it makes the AI look at it more and more, and that the AI can “learn” from itself and make the output better.
In other words, it’s like if you just copied your prompt and, after every response, just pasted it back in and sent it on through again.
I’m very skeptical this works well, but I’m not an expert in LLMs or how they work; I just know enough about how they work to know this AI craze is most assuredly a market bubble.
I think you missed the bit that makes this sort of maybe make a tiny lick of sense: you use this to write code that you evaluate against tests, and put a maximum iterations counter in to make sure it doesn’t go infinite.
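The loop described in that comment can be sketched in a few lines. To be clear, everything here (`call_llm`, the toy test suite) is a hypothetical stand-in, not the plugin’s actual code; it just illustrates “re-feed the prompt, check against tests, cap the iterations”:

```python
# Sketch of the "Ralph" loop: re-feed the same prompt, evaluate the
# result against tests, stop on success or after max_iterations.
# call_llm is a hypothetical stub standing in for a real model API call.

def call_llm(prompt: str, attempt: int) -> str:
    # Stub: this fake "model" only produces working code on the third try.
    if attempt < 3:
        return "def add(a, b): return a - b"   # buggy output
    return "def add(a, b): return a + b"       # correct output

def passes_tests(code: str) -> bool:
    # Evaluate the generated code against a fixed test suite.
    namespace = {}
    try:
        exec(code, namespace)
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

def ralph_loop(prompt: str, max_iterations: int = 10):
    for attempt in range(1, max_iterations + 1):
        code = call_llm(prompt, attempt)
        if passes_tests(code):
            return code
    return None  # gave up: hit the iteration cap without passing

result = ralph_loop("Write an add function.")
print(result is not None)  # True: the stub succeeds on attempt 3
```

The `max_iterations` cap is the only thing standing between this and the “runs until the user kills it” scenario people are joking about above.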
Yes, this is still going to melt the planet just a little bit faster every time it’s used. Yes, it’s most likely to completely fail and, even in cases where it eventually succeeds, it would likely have been orders of magnitude more compute efficient to vibe code it the usual way of actually, like, vibing the code. (Is that what re-prompting is called for vibe coding?)
But it could maybe, kinda, sometimes work. If you squint your eyes. And you’re a Boomer who doesn’t give a shit about the looming climate apocalypse.
I’d rather focus more on the people pushing AI or consuming it uncritically (which does include Baby Boomers, yes, but also people all over the age spectrum), and less on age groups we are unable to get out of.
me being upset about generation warring
A bit disheartening that people are doing the generation wars; most of the Baby Boomers I talk to give a shit about the looming climate apocalypse and are concerned about AI. God, I hope that if people in my generation do bad things, people aren’t inclined to dismiss me on sight as “stupid [andioop’s generational cohort],” especially since there is nothing I can do to change when I was born and thus what generational group I’m placed in.

Wasn’t there a whole thing about not making assumptions about people based on demographics we cannot choose? Blame people for choosing the Nazi party, but not for their skin color or gender or sexuality. Why doesn’t that extend to age? Why are all elderly people lumped in with the bad ones at the top? Or is it just “a demographic is okay to bash if positions of power are primarily composed of its members, no matter how many decent people who do not have that level of power share that demographic”?

I think a lot of people are doing that, and it rubs me super wrong, as a person whose demographics are not traditionally empowered, so my perspective cannot just be dismissed as “fragile white male tears,” because I am not a white male.
I don’t think all Boomers are bad. Lots care about the climate, and (sadly) many younger people don’t care about the climate, too. Hence why I clarified:
a Boomer who doesn’t give a shit about the looming climate apocalypse.
The reason it matters they’re a Boomer is because anyone younger is going to be facing the reality of climate change regardless of what they care about.
Hey, I missed that caveat, thank you for pointing it out!
I’m going to just point out that hating all white males is just as backwards as hating all older generations. Hating anyone for stuff like that is stupid, period. Not saying you do; it’s just that the last sentence feels like you’re trying to appeal to people using the same BS you’re arguing against.
Oh yeah, you are absolutely right, that is another thing that bothers me. (Obviously I still believe listening to minority voices is important.) Me bringing that up is specifically directed at the type of person who might dismiss opinions from people because they are white males, especially if the opinion is about not liking the way majority demographics sometimes get spoken about. It is me saying to that type of person that they cannot dismiss my position of disliking punching up at demographics by going “of course you’d say that, you are the demographic being punched up at!” I am from the non-male, non-white demographics that they are trying to empower with this “lol white man bad” stuff.
And yes, I’m fully aware speaking ≠ the many harms that went way past speaking done to minorities in the past. But this kind of stuff is, I think, what contributes to more people being funneled towards alt-right perspectives. “Yes, white men bad is a hypocritical stance” video -> more “silly stupid liberals” videos (whether they do actually point out legit problems with social justice or not) down into “The world is actually completely stacked against white men, who are just better than the other demographics, which is why women are only good for breeding and men of color are too stupid for anything besides manual labor. Structural/systemic oppression, biases, and other genders don’t exist, that’s woke DEI nonsense. Also, the stupid liberals telling you otherwise are also telling you lies that vaccines work and climate change is happening.”
I think the main idea is that it’s an “agent” that runs command-line commands and then considers the output. It definitely helps sometimes to show an LLM the errors its code generated.
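The “run the code, then consider the output” step could look something like this minimal sketch. The thread doesn’t show the agent’s internals, so the temp-file approach and the error-capture wiring here are assumptions, with the model call left out entirely:

```python
# Run a candidate script and capture its stderr, so an agent could feed
# the error output back to the model on the next iteration.
import os
import subprocess
import sys
import tempfile

def run_candidate(code: str) -> str:
    # Write the generated code to a temp file and execute it in a
    # subprocess, capturing any traceback printed to stderr.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=10,
        )
        return proc.stderr  # empty string means the script ran cleanly
    finally:
        os.unlink(path)

errors = run_candidate("print(undefined_name)")
print("NameError" in errors)  # True: the traceback names the bad identifier
```

An agent would append that stderr text to the next prompt, which is the “show the LLM its own errors” part the comment is describing.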
soulless and dull
it seems you have read my documentation
This feels like a shitpost that idiots misappropriated as a real method.
Why would you want a method based on a famously developmentally-challenged child? It isn’t “dogged perseverance despite setbacks”; it’s “he’s literally too stupid to evaluate his process and teach himself a better way to solve even the simplest tasks.”
Which, hilariously, neatly matches the machine-learning approach of “just keep shovelling data, it’ll work eventually”.
This feels like a shitpost that idiots misappropriated as a real method.
Problem is half of the damn AI “projects” I come across are like this
why would you want a method based on a famously developmentally-challenged child?
First of all, ABLEIST!!!
Second of all, appreciate the wisdom of Ralph
Fourth and fifth of all: your cat’s breath smells like cat food.
I don’t smell the cat’s breath because we’re not close like that. I’d appreciate it if you stayed away from my cat as well.
How can you say you love your cat if you don’t open mouth kiss it?
I don’t love the cat. We’re housemates.
I have a hard time believing this isn’t a shitpost project.
It kinda is, and kinda not. https://github.com/ghuntley/cursed <-- this was the result of letting it burn tokens for 3 months
burn tokens for 3 months
Alright, Mr. Moneybags, we get it already
That’s the twist: this repo was shared on a teams channel
And this somehow arrives at a highly refined piece of code rather than the software equivalent of Habsburg Jaw?
I spent the morning hunting an API call that someone’s AI hallucinated into my code when they were updating it. I’m so done with LLMs.
Jrs are so dangerous with AI. We have to basically throw away a large part of a project because a Jr who has since quit made it with AI, and nobody can figure out how to extend or maintain it. I showed my boss how it had 5 layers of abstraction in multiple places, and he agreed we needed to trash that code.
Worst part is that it was a principal.
I am super curious about how someone pushed a commit without testing or validation, and that person is a Principal.
I don’t even know. Our unit tests are garbage, but they also rewrote the unit tests for the new code. Just didn’t actually check.
I approved this PR. I am culpable as well for not doing my due diligence
Don’t they unit test the code immediately after writing it? Vibe coded or not
This seems like a great way to burn through all your tokens (and several acres of rainforest) while you sleep. Fantastic for shareholders!
This surely is not going to get promoted by big companies
[email protected] and [email protected] if you were not being facetious
Thanks, I was not aware these subs existed. I guess it’s not a bad post, though
AI bros and the cognitive dissonance between “productivity” plugins/projects that make LLMs repeat their inputs and confidently asserting that LLMs are actually good for code
I swear this is how my new team writes all their enterprise code