Midwest $3.59/gal today


Basically, generative AI is killing open source — among slop contributions overwhelming maintainers, the increasing feasibility of “clean-rooming” open source software to remove any obligations that private companies might otherwise have towards projects they depend on, and models cleaving traffic away from the documentation sites that get people involved.
Edit: If you have advice to make it clearer, I’m all ears. The “fractions of a penny” scene seems to perfectly capture the tech grift mentality, so I wanted to use it as the base. But I wanted to steer away from keeping it as “stealing”, because I don’t think copyright is a good basis for AI criticism. I think the fragmentation, noise, and distrust with which it infests our global collaboration infrastructure is the key part. But I really don’t know how to put that succinctly. We don’t really have many historical analogues for this, so there’s not a good shorthand.
Use posting


If a weak crypto market can tank your company, I’m not sure you should be trusted for business advice.
Still, maybe he’s exactly the right kind of businessman for these times. Genuine value and productivity don’t matter anymore. It’s a Potemkin economy, so why not staff it with Potemkin labor?


Palantir has been doing this for ages, but yeah the LLM aspect is an interesting evolution of it. Probably overkill in the long term, but the availability and (for now) affordability is the main selling point.


Just in: program that can’t distinguish between data and instructions may be insecure!
How are they gonna get him to stand under a Rome statue?
Lazily-evaluated, too!
There just needs to be one universal standard that handles everyone’s use cases
Along those same lines: add “anal” to any car model you see. Hours of entertainment.
Description
The hero is rescued from a final plight by an unexpected source.
The rescuer may be someone who had previously abandoned the hero or even someone the hero does not know. In mythic stories, this intervention may come from a god.
Example
In Star Wars, Han Solo returns to help Luke fight the TIE fighters. In Return of the Jedi, Luke needs the redeemed Vader to destroy the Emperor.
In Lord of the Rings, Frodo is unable to destroy the ring by himself, needing Gollum’s unwitting help to complete the deed. Soon after, Frodo and Sam are rescued by the eagles.


This is a terrible idea for Amazon, the cloud services company.
But for Amazon, the AI company? This is them illustrating the new grift that almost any company can do: use AI to keep a plausible mirage of your company going while reducing opex, and sacrifice humans when necessary to dodge accountability.
Maybe through CXL you’ll be able to “download more RAM” (renting it, really)
Multiple things can be true at once.
Don’t think that voting alone is sufficient. Don’t think that voting and organizing are sufficient. But also don’t think that voting and organizing are not necessary.


But “shoot” is why you went in there!


This sounds eerily familiar…
I don’t know if Hearst told him to use a chatbot to generate their “Best of Summer Lists,” but it doesn’t matter. When you give a freelancer an assignment to turn around ten summer lists on a short timescale, everyone understands that his job isn’t to write those lists, it’s to supervise a chatbot.
But his job wasn’t even to supervise the chatbot adequately (single-handedly fact-checking 10 lists of 15 items is a long, labor-intensive process). Rather, it was to take the blame for the factual inaccuracies in those lists. He was, in the phrasing of Dan Davies, “an accountability sink” (or as Madeleine Clare Elish puts it, a “moral crumple zone”).
https://locusmag.com/feature/commentary-cory-doctorow-reverse-centaurs/


I feel like I’ve been saying this every day since like 2015 (Or maybe 2001. Or maybe 1999. Idk. Anyway…) but: This is a really bad idea.
Well-argued.
Unlearning Economics has a similar analysis of AI through the lens of cybernetics: https://youtube.com/watch?v=Km2bn0HvUwg
Like, as far as legal basis? Yes, as I understand it — but I am not a lawyer.
But if you’re hoping to leverage an LLM… Part of the reason they’re so good at producing replacements for e.g. react is that the source code for react is in the training data, along with test suites and a ton of commentary related to the source code.
So you’d be at a big disadvantage. That’s on top of the basic legibility challenges of decompiled binaries.