Never let perfection be the enemy of getting it to work.
I take my shitposts very seriously.


Is this what normies feel like when Linux users tell them to just use Linux? I have some apologies to make.


POW is a far higher cost on your actual users than the bots.
That sentence tells me that you either don’t understand or consciously ignore the purpose of Anubis. It’s not to punish the scrapers, or to block access to the website’s content. It is to reduce the load on the web server when it is flooded by scraper requests. Bots running headless Chrome can easily solve the challenge, but every second a client spends working on the challenge is a second the web server doesn’t have to waste CPU cycles on serving clankers.
PoW is an inconvenience to users. The flood of scrapers is an existential threat to independent websites. And there is a simple fact that you conveniently ignored: it fucking works.
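To make the asymmetry concrete, here’s a rough Python sketch of a SHA-256 proof-of-work challenge in the same spirit (made-up difficulty and challenge format, not Anubis’s actual scheme):

import hashlib
import secrets

DIFFICULTY = 4  # hypothetical: required number of leading zero hex digits

def make_challenge() -> str:
    # Server side: generating a random challenge is practically free.
    return secrets.token_hex(16)

def solve(challenge: str) -> int:
    # Client side: brute-force a nonce until the hash meets the target.
    # This is where the client (or the scraper) burns its own CPU time.
    nonce = 0
    while not hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith("0" * DIFFICULTY):
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int) -> bool:
    # Server side: verification is a single hash, so the cost lands
    # almost entirely on the client by design.
    return hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith("0" * DIFFICULTY)

challenge = make_challenge()
print(verify(challenge, solve(challenge)))  # True

Issuing and verifying cost one hash each; solving costs tens of thousands of hashes on average. That asymmetry is the whole point.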
Interface configuration and DNS resolution are managed by different systems. Their file structures are different. It’s been like this for many decades, and changing it is just not worth breaking existing systems.


No numbers, no testimonials, not even anecdotes… “It works, trust me bro” is not exactly convincing.
That’s a poython constructah, __init__?
If this is as significant an issue as you imply, please link some credible sources.
As far as I can tell, the “Chinese server” (or EU server) is just a public ID and Relay server, and necessary for the application to function unless a self-hosted server is used.
You can host the open-source ID and Relay servers for simple remote access at no cost. The pro subscription is mainly about account and device management.
services:
  hbbs:
    container_name: hbbs
    image: rustdesk/rustdesk-server:latest
    command: hbbs
    volumes:
      - ./data:/root
    network_mode: "host"
    depends_on:
      - hbbr
    restart: always
  hbbr:
    container_name: hbbr
    image: rustdesk/rustdesk-server:latest
    command: hbbr
    volumes:
      - ./data:/root
    network_mode: "host"
    restart: always
That is probably the second worst outcome. People suck.
23:22? Nah mate, my work phone turns off the moment I step through the gate. If someone chose to wait until after 16:00, they can wait until next morning to be told to fuck off.
Why split physical and data link when they are so closely related?
You can run Ethernet on any medium that has the capacity to transmit digital signals: copper, optical fibre, over-air laser, radio, or an analog carrier wave (ASK, FSK, PSK). The Ethernet traffic can be made completely independent of the physical medium by using encapsulation (L2TP or any other protocol that encapsulates Layer-2). It can be pigeons carrying printouts of the Ethernet frames, scanned and reassembled at the destination. The same can be said about most Layer-2 protocols.
As long as the proper interfaces are present, the physical layer is completely transparent to the data link layer.
(edit) I should point out that Ethernet, specifically, transmits extra data before and after the frame (the preamble and inter-packet gap) that are used to configure the Rx circuit for reception, but the Layer-2 frame will be identical regardless of the medium.
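To underline the medium-independence point, here’s a quick Python sketch that parses a Layer-2 header from raw bytes (the frame below is made up; real capture would obviously need an actual interface):

import struct

def parse_ethernet_frame(frame: bytes):
    # 14-byte Ethernet header: destination MAC, source MAC, EtherType.
    # Whether these bytes arrived over copper, fibre, radio, an L2TP
    # tunnel, or a pigeon-borne printout makes no difference here.
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst.hex(":"), src.hex(":"), hex(ethertype), frame[14:]

# Hypothetical frame: broadcast destination, made-up source, IPv4 EtherType.
frame = (bytes.fromhex("ffffffffffff")
         + bytes.fromhex("020000000001")
         + struct.pack("!H", 0x0800)
         + b"...payload...")
print(parse_ethernet_frame(frame))

Note that the preamble and inter-packet gap never appear here: the PHY consumes them before anything above Layer 1 sees the frame.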
Mount the network share (fstab or mount.cifs) and pass the credentials using the username= and password= mount options. Then point the container’s volume at the mount point’s path.
https://www.mattnieto.com/how-to-mount-an-smb-share-to-a-docker-container-step-by-step/
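For example (share name, paths, and credentials all made up), the fstab entry could look like:

//nas.local/media  /mnt/media  cifs  username=myuser,password=mypass,uid=1000,gid=1000  0  0

…and the compose file just gets a bind mount pointing at it:

volumes:
  - /mnt/media:/media

If you’d rather not keep the password in fstab, mount.cifs also accepts a credentials= option pointing to a separate file.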


It’s possible that, when the ISP revokes the public address and assigns a new one, the DNS record isn’t updated immediately and still points to the old address. Then every new request would be sent to the old, invalid address.
And this is where I start shilling for Tailscale. It’s a WireGuard-based mesh VPN that is designed to work from behind firewalls, NAT, and CGNAT. It has its own internal split-DNS provider, and probably some mechanism for handling public address changes that is transparent to the tunnelled traffic. You can use it to share the server with only the devices that have the client installed, or to expose the server to the internet.
I’ve got it set up on my OPNsense firewall as a subnet router that advertises the subnet where my servers are, and I often stream from Jellyfin over it. There’s some overhead, but it’s never been disruptive.
Verifying that the code doesn’t contain regressions, bugs, or vulnerabilities, that it doesn’t conflict with whatever the owner is actively developing privately, in addition to making sure it wasn’t vomited out by a goddamn clanker, is a huge burden on a solo developer. They are free to decide whether to take on this responsibility.


What sounds like gatekeeping is often a strongly worded emphasis on having the prerequisite knowledge to not just host your services, but to do it in a way that is secure, resilient, and responsible. If you don’t know how to set up a network, set up resilient storage, manage your backups, set up HTTPS and other encryption solutions, manage user authentication and privileges, and expose your services securely, you should not be self-hosting. You should be learning how to self-host responsibly. That applies to everything from Debian to Synology.
Friends don’t let friends expose their networks like Nintendo advises.


Put all of the postcodes in a paginated list that displays only 30 entries at a time (60 and 100 per page for premium users), only has next/previous navigation buttons, orders the entries by popularity, and goes back to the first page if you reload the website. Or an infinitely scrolling page that loads each page dynamically, but returns 429 Too Many Requests if the user scrolls too fast.
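If anyone actually wants to build this monstrosity, here’s a tongue-in-cheek Python sketch of the infinite-scroll variant (assuming Flask, with a made-up per-IP cooldown):

import time
from flask import Flask, jsonify, request

app = Flask(__name__)
POSTCODES = [f"{i:05d}" for i in range(100_000)]  # made-up data
PAGE_SIZE = 30        # 60 and 100 reserved for premium users, naturally
COOLDOWN = 2.0        # hypothetical: seconds a client must wait per page
last_seen: dict[str, float] = {}

@app.route("/postcodes/<int:page>")
def postcodes(page: int):
    ip = request.remote_addr or "unknown"
    now = time.time()
    # Scrolling "too fast" earns a 429 instead of the next page.
    if now - last_seen.get(ip, 0.0) < COOLDOWN:
        return "Too Many Requests", 429
    last_seen[ip] = now
    start = page * PAGE_SIZE
    return jsonify(POSTCODES[start:start + PAGE_SIZE])

if __name__ == "__main__":
    app.run()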


I think you can get some kind of exemption for archival purposes. I know that the Internet Archive has one. But I also know that Microsoft is ultimately responsible for the data hosted on GitHub, and Microsoft’s interest is to not even risk getting sued.


At work, we use PiSignage for a large overhead screen. It’s based on Debian and uses a fullscreen Firefox running in the labwc compositor. The developer advertises a management server (cloud or self-hosted) to manage multiple connected devices, but it’s completely optional (superfluous in my opinion) and the standalone web UI is perfectly usable.


You can absolutely use it without a reverse proxy. A proxy is just another fancy HTTP client that contacts the server on the original client’s behalf and forwards the response back to it, usually wrapped in HTTPS. A man in the middle that you trust.
All you have to do is expose the desired port(s) to all addresses:
# ...
ports:
  - "8080:8080"
…and, obviously, set the URL environment variables to localhost or whatever address the server uses.
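And to demystify the “fancy HTTP client” bit, here’s a bare-bones Python sketch of what a reverse proxy does, minus TLS, error handling, and every other production nicety (the upstream address is just an assumption):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "http://localhost:8080"  # hypothetical backend address

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the same path from the backend on the client's behalf...
        with urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        # ...and forward the response back. That's the whole trick.
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8000), Proxy).serve_forever()

A real reverse proxy adds TLS termination, header rewriting, load balancing, and so on, but conceptually that’s all it is.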
Uh… kinda? PowerShell has many POSIX aliases for cmdlets (its equivalent of shell built-ins) of allegedly the same functionality. rmdir and rm are both aliases of Remove-Item, ls is Get-ChildItem, cd is Set-Location, cat is Get-Content, and so on.
Of particular note is curl. Windows supplies the real cURL executable (System32/curl.exe), but in a PowerShell 5 session, which is still the default on Windows 11 25H2, the curl alias shadows it. curl is an alias of the Invoke-WebRequest cmdlet, which is functionally a headless front-end for Internet Explorer unless the -UseBasicParsing switch is specified. But since IE is dead, the cmdlet will always throw an error if -UseBasicParsing is not specified. Fucking genius, Microsoft.