

Just wear disposable faces.
You humans wear the same face your entire life and then get upset when people recognize it?! Get over yourself! Aside from the obvious privacy issue, let’s be real: it’s also gross.
Good to hear. For context, I made the switch late last year, so my experience may be outdated.
I use KOReader on Android (available on F-Droid or Google Play).
It works. Configuring fonts is a bit confusing — every time I start a new book that uses custom fonts, I need to remind myself how to override it so it uses my prefs. But aside from that, it does what I need. Displaying text is not rocket science, after all.
I used to like Librera, but I had to ditch it because its memory usage was out of control with very large files. Some of my epubs are hundreds of megabytes (insane, yes, but that’s reality) and Librera would lag for several seconds with every page turn. Android would kill it if I ever switched apps because it used so much memory. I had a great experience with it with “normal” ebooks though. It was just the big 'uns that caused issues.
That can’t be good. But I guess it was inevitable. It never seemed like Arc had a sustainable business model.
It was obvious from the get-go that their ChatGPT integration was a money pit that would eventually need to be monetized, and…I just don’t see end users paying money for it. They’ve been giving it away for free hoping to get people hooked, I guess, but I know what the ChatGPT API costs and it’s never going to be viable. If they built a local-only backend then maybe. I mean, at least then they wouldn’t have costs that scale with usage.
For Atlassian, though? Maybe. Their enterprise customers are already paying through the nose. Usage-based pricing is a much easier sell. And they’re entrenched deeply enough to enshittify successfully.
Better yet, use borg to back up. Managing your own tars is a burden. Borg does deduplication, encryption, compression, and incrementals. It’s as easy to use as rsync, but it’s a proper backup tool rather than a syncing tool.
Not the only option, but it’s open source, and a lot of hosts support it directly. Also works great for local backups to external media. Check out Vorta if you want a GUI.
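For the curious, a rough sketch of a typical workflow (the repository path, compression choice, and retention numbers here are just placeholders, not recommendations):

```sh
# One-time setup: create an encrypted repository (you'll be prompted for a passphrase)
borg init --encryption=repokey /mnt/backup/borg-repo

# Create a new archive; deduplication means unchanged files cost almost nothing
borg create --stats --compression lz4 \
    /mnt/backup/borg-repo::'{hostname}-{now}' \
    ~/Documents ~/Photos

# Thin out old archives according to a retention policy
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo
```

After the first run, subsequent `borg create` invocations only store new or changed chunks, which is where the dedup really pays off.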
Why? It’s Japanese and your browser should display it as マリウス. But I don’t know what that means.
Yeah, that’s true for a subset of code. But for others, the hardest parts happen in the brain, not in the files. Writing readable code is very very important, especially when you are working with larger teams. Lots of people cut corners here and elsewhere in coding, though. Including, like, every startup I’ve ever seen.
There’s a lot of gruntwork in coding, and LLMs are very good at the gruntwork. But coding is also an art and a science and they’re not good at that at high levels (same with visual art and “real” science; think of the code equivalent of seven deformed fingers).
I don’t mean to hand-wave the problems away. I know that people are going to push the limits far beyond reason, and I know it’s going to lead to monumental fuckups. I know that because it’s been true for my entire career.
If I’m verifying anyway, why am I using the LLM?
Validating output should be much easier than generating it yourself. P≠NP.
This is especially true in contexts where the LLM provides citations. If the AI is good, then all you need to do is check the citations. (Most AI tools are shit, though; avoid any that can’t provide good, accurate citations when applicable.)
Consider that all scientific papers go through peer review, and any decent-sized org will have regular code reviews as well.
From the perspective of a senior software engineer, validating code that could very well be ruinously bad is nothing new. Validation and testing is required whether it was written by an LLM or some dude who spent two weeks at a coding “boot camp”.
Just wait till they become AI-generated-JavaScript-only shops. They’re gonna be vibing like the Tacoma Narrows Bridge.
I remember when some company started advertising “BURN-proof” CD-R drives and thinking that was a really dumb phrase, because literally nobody shortened “buffer underrun” to “BURN”, and because, you know, “burning” was the entire point of a CD-R drive.
It worked though. Buffer underruns weren’t a problem on the later generations of drives. I still never burned at max speed on those though. Felt like asking for trouble to burn a disc at 52x or whatever they maxed out at. At that point it was the difference between 1.5 minutes and 4 minutes or something like that. I was never in that big a rush.
It’s a real post on Reddit. I don’t know what combination of screenshotting/uploading tools leads to this kind of mangling, but I’ve seen it in screenshots from Android, too. The artifacts seem to run down in straight vertical lines, so maybe slight scaling with a nearest-neighbor algorithm (in 2025?!?) plus a couple levels of JPEG compression? It looks really weird.
I’m curious. If anyone knows, please enlighten me!
XMPP is still around! Despite Google’s best efforts to kill it.
Honestly, I’m amazed how long it’s taken Microsoft to run GitHub into the ground. But let’s be real: enshittification is inevitable. This is Microsoft we’re talking about.
The best time to migrate away from GitHub was 2018. The second-best time is today.
The “free market” solution is for malpractice suits to be so ruinously expensive that insurance companies will apply sufficient pressure to medical practices to actually do their fucking jobs.
Same in the legal field, and we should already be seeing a wave of disbarments.
I’m not holding my breath. AI is shaping up to be history’s greatest accountability sink and I’ve yet to see any meaningful pushback.
All major browser engines are FOSS.
Chrome and Edge are proprietary wrappers around Chromium (BSD license). Firefox and derivatives are FOSS (Mozilla Public License). Safari is built around WebKit (LGPL/BSD).
The problem, however, is governance. These projects are all too big for anyone to realistically fork and maintain independently. So in practice, they are under the control of Google, Mozilla, and Apple — all of which have questionable priorities (especially Google).
It would be like taking your compiled machine code and editing it by hand because your compiler sucks.
Just use the right tool from the start.
Wireless card readers are relatively new tech. I see them more and more as time goes on. New places usually give their waitstaff mobile readers, but there’s little motivation for older restaurants to upgrade their whole POS systems. POS systems have a pretty long life expectancy. At least the older ones do.
Nobody should feel a strong need to upgrade after only two generations. Same deal with most tech like GPUs and CPUs.
I use my phone a lot and my Pixel 7 is fine. The primary factors driving my last couple upgrades were battery degradation and software support. Neither should be a big problem with a Fairphone.
I’m also trying to decide whether to stick with the Pixel/GrapheneOS ecosystem or go for Fairphone.
How hard/expensive was it to replace your battery? I looked on iFixIt and it seemed a lot harder than my previous phones.
SQLite would definitely be smaller, faster, and require less memory.
Thing is, it’s 2025, roughly 20 years since anybody’s given half a shit about storage efficiency, memory efficiency, or even CPU efficiency for anything so small. Presumably this is not something they need to query dynamically.
Unfortunately, you still need a level of trust with Proton. Even aside from trusting that they will not bend to pressure to terminate your service, you’re also trusting them with your network of contacts, because metadata (including the sender, recipient, and subject line) are not end-to-end encrypted in Proton.