

I’m sure she will take this opportunity to reflect and grow as a person.
Alien meaning “external”.
Electrical interference can come from all kinds of places, near and far. I guess technically you might get interference from other planets but I don’t think that’s what they meant. :) Solar flares are a possibility, though.
Oh. Well that sucks.
I think I remember you from before, if you’re the same Stamets. You posted a lot. I’ll join your new Risa then.
Is there something wrong with https://startrek.website/?
I actually did this a lot on classic Mac OS. Intentionally.
The reason was that you could put a carriage return as the first character of a file, and it would sort above everything else by name while otherwise being invisible. You just had to copy the carriage return from a text editor and then paste it into the rename field in the Finder.
Since OS X / macOS can still read classic Mac HFS+ volumes, you can indeed still have carriage returns in file names on modern Macs. I don’t think you can create them on modern macOS, though. At least not in the Finder or with common Terminal commands.
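The sorting trick is easy to demonstrate. A minimal Python sketch (the classic Finder's actual collation was more involved, but the principle is the same): a plain lexicographic sort compares character codes, and carriage return (13) is below every printable letter.

```python
# A leading carriage return (code point 13) sorts before "A" (65),
# so the file floats to the top of a by-name listing while the
# character itself renders as invisible.
names = ["Applications", "Documents", "\rKeep Me On Top", "zz_archive"]
print(sorted(names))  # "\rKeep Me On Top" comes first
```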
I was about to say this.
If they can’t give me a callback number that is publicly listed on their web site, then they’re most likely a scammer.
With Google, however, this is a scarier proposition than with most companies. If someone from my phone company, or my bank, or my insurance company called me, I could very easily call the actual company and talk to a human to confirm. I have no idea how I could ever talk to a human at Google. I’m not sure they even have a public phone line.
Unfortunately, you still need a level of trust with Proton. Even aside from trusting that they will not bend to pressure to terminate your service, you’re also trusting them with your network of contacts, because metadata (including the sender, recipient, and subject line) is not end-to-end encrypted in Proton.
Just wear disposable faces.
You humans wear the same face your entire life and then get upset when people recognize it?! Get over yourself! Aside from the obvious privacy issue, let’s be real: it’s also gross.
Good to hear. For context, I made the switch late last year, so my experience may be outdated.
I use Koreader on Android (available on F-Droid or Google Play).
It works. Configuring fonts is a bit confusing — every time I start a new book that uses custom fonts, I need to remind myself how to override it so it uses my prefs. But aside from that, it does what I need. Displaying text is not rocket science, after all.
I used to like Librera, but I had to ditch it because its memory usage was out of control with very large files. Some of my epubs are hundreds of megabytes (insane, yes, but that’s reality) and Librera would lag for several seconds with every page turn. Android would kill it if I ever switched apps because it used so much memory. I had a great experience with it with “normal” ebooks though. It was just the big 'uns that caused issues.
That can’t be good. But I guess it was inevitable. It never seemed like Arc had a sustainable business model.
It was obvious from the get-go that their ChatGPT integration was a money pit that would eventually need to be monetized, and…I just don’t see end users paying money for it. They’ve been giving it away for free hoping to get people hooked, I guess, but I know what the ChatGPT API costs and it’s never going to be viable. If they built a local-only backend then maybe. I mean, at least then they wouldn’t have costs that scale with usage.
For Atlassian, though? Maybe. Their enterprise customers are already paying out the nose. Usage-based pricing is a much easier sell. And they’re entrenched deeply enough to enshittify successfully.
Better yet, use borg to back up. Managing your own tars is a burden. Borg does deduplication, encryption, compression, and incrementals. It’s as easy to use as rsync but it’s a proper backup tool, rather than a syncing tool.
Not the only option, but it’s open source, and a lot of hosts support it directly. Also works great for local backups to external media. Check out Vorta if you want a GUI.
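For anyone curious, a minimal borg workflow looks something like this (the repo path and source directories are just examples; adjust for your setup):

```shell
# One-time setup: create an encrypted repository on the backup drive.
borg init --encryption=repokey /mnt/backup/repo

# Each backup run: create a new archive; borg dedupes against
# everything already in the repo, so incrementals are automatic.
borg create --compression zstd \
    /mnt/backup/repo::home-{now} ~/Documents ~/Photos

# Thin out old archives on a retention schedule.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/repo

# List what you have.
borg list /mnt/backup/repo
```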
Why? It’s Japanese and your browser should display it as マリウス. But I don’t know what that means.
Yeah, that’s true for a subset of code. But for others, the hardest parts happen in the brain, not in the files. Writing readable code is very very important, especially when you are working with larger teams. Lots of people cut corners here and elsewhere in coding, though. Including, like, every startup I’ve ever seen.
There’s a lot of gruntwork in coding, and LLMs are very good at the gruntwork. But coding is also an art and a science and they’re not good at that at high levels (same with visual art and “real” science; think of the code equivalent of seven deformed fingers).
I don’t mean to hand-wave the problems away. I know that people are going to push the limits far beyond reason, and I know it’s going to lead to monumental fuckups. I know that because it’s been true for my entire career.
If I’m verifying anyway, why am I using the LLM?
Validating output should be much easier than generating it yourself. P≠NP.
This is especially true in contexts where the LLM provides citations. If the AI is good, then all you need to do is check the citations. (Most AI tools are shit, though; avoid any that can’t provide good, accurate citations when applicable.)
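The asymmetry is easy to see with a toy example (graph coloring here is my illustration, not anything from the thread): checking a proposed 3-coloring of a graph takes one pass over the edges, while finding one by brute force means searching an exponential space of assignments.

```python
from itertools import product

# Edges of a small graph: a triangle (0-1-2) plus a pendant vertex 3.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4  # number of vertices

def is_valid(coloring):
    # Verification is O(|E|): check each edge's endpoints differ.
    return all(coloring[a] != coloring[b] for a, b in edges)

# Generation by brute force: up to 3**n candidate assignments.
solution = next(c for c in product(range(3), repeat=n) if is_valid(c))
print(solution, is_valid(solution))
```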
Consider that all scientific papers go through peer review, and any decent-sized org will have regular code reviews as well.
From the perspective of a senior software engineer, validating code that could very well be ruinously bad is nothing new. Validation and testing is required whether it was written by an LLM or some dude who spent two weeks at a coding “boot camp”.
Just wait till they become AI-generated-JavaScript-only shops. They’re gonna be vibing like the Tacoma Narrows Bridge.
I remember when some company started advertising “BURN-proof” CD-R drives and thinking that was a really dumb phrase, because literally nobody shortened “buffer underrun” to “BURN”, and because, you know, “burning” was the entire point of a CD-R drive.
It worked though. Buffer underruns weren’t a problem on the later generations of drives. I still never burned at max speed on those though. Felt like asking for trouble to burn a disc at 52x or whatever they maxed out at. At that point it was the difference between 1.5 minutes and 4 minutes or something like that. I was never in that big a rush.
It’s a real post on Reddit. I don’t know what combination of screenshotting/uploading tools leads to this kind of mangling, but I’ve seen it in screenshots from Android, too. The artifacts seem to run down in straight vertical lines, so maybe slight scaling with a nearest-neighbor algorithm (in 2025?!?) plus a couple levels of JPEG compression? It looks really weird.
I’m curious. If anyone knows, please enlighten me!
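If the nearest-neighbor guess is right, it’s easy to see why the artifacts would run in vertical lines. A sketch with a single 4-pixel row upscaled to 7 pixels: nearest-neighbor repeats whole source columns verbatim, so every column of the image is either duplicated or not, which in 2D shows up as straight vertical banding.

```python
# Nearest-neighbor upscaling of one image row from 4 to 7 pixels.
src = [10, 20, 30, 40]          # source pixel values
scale = 7 / len(src)            # 1.75x horizontal scale
dst = [src[int(x / scale)] for x in range(7)]
print(dst)  # [10, 10, 20, 20, 30, 30, 40] -- columns duplicated wholesale
```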
XMPP is still around! Despite Google’s best efforts to kill it.
If you can’t afford backups, you can’t afford storage. Anyone competent would factor that in from the early planning stages of a PB-scale storage system.
Going into production without backups? For YEARS? It’s so mind-bogglingly incompetent that I wonder if the whole thing was a long-term conspiracy to destroy evidence or something.