• 26 Posts
  • 596 Comments
Joined 3 years ago
Cake day: June 13th, 2023



  • It definitely makes it more difficult to switch endpoints manually. I have multiple VPN connections with different exit nodes configured for failover in case one (or more) of them is unreachable. I don’t run into geoblocking issues very often but I also don’t route all my WAN traffic over VPN. Just some of it.

    What you can automate depends on your router’s capabilities. Mine is a Mikrotik, which does have fairly extensive support for custom scripts. However, detecting geoblocking is probably going to involve parsing HTTP responses, which is beyond the capabilities of almost all consumer-grade routers. You would have to effectively do a MITM attack (aka deep packet inspection) in order to accomplish that on something other than the client device; a rough client-side sketch is below.

    TLDR: I manually change routes to a different VPN if needed, but I very rarely run into geoblocking issues.
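
    For illustration, a bare-bones client-side check might look something like this. The URL you test, the status codes, and the “blocked” phrases are all placeholders; every service signals geoblocks differently, so treat it as a sketch rather than something you can drop in.

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class GeoblockCheck
    {
        // No auto-redirect so we can inspect where the service tries to send us.
        private static readonly HttpClient Http = new HttpClient(
            new HttpClientHandler { AllowAutoRedirect = false });

        // Heuristic: HTTP 403/451, a redirect to an "unavailable" page, or a
        // "not available in your region" style message in the body.
        public static async Task<bool> LooksGeoblockedAsync(string url)
        {
            using var response = await Http.GetAsync(url);

            if (response.StatusCode is HttpStatusCode.Forbidden
                or HttpStatusCode.UnavailableForLegalReasons)
            {
                return true;
            }

            var redirect = response.Headers.Location?.ToString() ?? "";
            if ((int)response.StatusCode is >= 300 and < 400 &&
                redirect.Contains("unavailable", StringComparison.OrdinalIgnoreCase))
            {
                return true;
            }

            var body = await response.Content.ReadAsStringAsync();
            return body.Contains("not available in your", StringComparison.OrdinalIgnoreCase);
        }
    }

    If the check comes back true, it’s the client (not the router) that flips to a different VPN endpoint, which is about as far as automation realistically goes without doing DPI on the router.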


  • /*
    By all accounts, the logic in this method shouldn't work. And yet it does. We do not know why. It makes no sense whatsoever. It took three weeks and numerous offerings to the programming gods, including using one of the junior devs as a human sacrifice, to unlock this knowledge. DO NOT LET HIS VIOLENT AND UNTIMELY DEATH BE IN VAIN! Touch this at your own peril.
    --jubilationtcornpone 12/17/25
    */
    public async Task<IResult> CalculateResultAsync()
    {
         // Some ass backwards yet miraculously functional logic.
    }
    

  • I exclusively use my router as the VPN client for a few reasons. There are multiple services on my network that use the VPN. I’ve got static routes configured, which effectively act as a kill switch, and I can use QoS to prioritize traffic. It’s pretty much set it and forget it. You can use any VPN service as long as they offer a protocol your router supports. I use Proton via WireGuard and have for years.





  • Why not just use what you have until you can afford to and/or need to upgrade? SAS drives are more expensive because they typically offer higher performance and reliability. Hardware RAID may be “old” but it’s still very common. The main risk with it is that if your RAID card fails, you’ll have to replace it with the same model if you don’t want to rebuild your server from scratch.

    I’ve been running an old Dell PowerEdge for several years with no issues.



  • Oh man. “Inner restlessness” is probably my least favorite ADHD symptom. I’m not outwardly hyperactive but my defective little brain sure is.

    I used to treat it daily with Jim Beam but that’s not a good way to live either.

    Now I take my bedtime meds (including melatonin) about 3 hours before bedtime and put on my blue light glasses. It’s not perfect but it’s better than it used to be.






  • First One:

    Big ASP.NET Core Web API that passed through several different contract developer teams before finally being brought in-house.

    The first team created this janky repository pattern on top of Entity Framework Core. Why? I have no idea. My guess is that they just didn’t know how to use it, even though it’s a reasonably well-documented ORM.

    The next team abandoned EF Core entirely, switched to Dapper, left the old stuff in place, and managed to cram 80% of the new business logic into stored procedures. There were things being done in sprocs that had absolutely no business being done there, much less being offloaded to the database.

    By the time it got to me, the data layer was a nightmarish disaster of unnecessary repo classes, duplicate entities, and untestable SQL procedures, some of which were hundreds of lines long.

    “Why are all our queries running so slow?”

    Well, see guys, it’s like this: when you’re shoving a bunch of telemetry into a stored procedure to run calculations on it, and none of that data is even stored in this database, it’s going to consume resources on the database server, thereby slowing down all the other queries running on it.
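
    To make the point concrete, here’s a rough sketch of the saner shape: let the database serve only the lookup data it actually owns and do the telemetry math in application code. The names, schema, and calculation below are all invented for illustration.

    using System.Collections.Generic;
    using System.Data;
    using System.Linq;
    using System.Threading.Tasks;
    using Dapper;
    using Microsoft.Data.SqlClient;

    public record TelemetrySample(string DeviceId, double Value);
    public record DeviceThreshold(string DeviceId, double MaxAverage);

    public class TelemetryCalculator
    {
        private readonly string _connectionString;

        public TelemetryCalculator(string connectionString) => _connectionString = connectionString;

        // The database supplies one small lookup; the per-sample number
        // crunching stays in the app layer instead of a stored procedure.
        public async Task<IReadOnlyList<string>> FindDevicesOverLimitAsync(
            IEnumerable<TelemetrySample> samples)
        {
            using IDbConnection db = new SqlConnection(_connectionString);
            var thresholds = (await db.QueryAsync<DeviceThreshold>(
                    "SELECT DeviceId, MaxAverage FROM DeviceThresholds"))
                .ToDictionary(t => t.DeviceId, t => t.MaxAverage);

            return samples
                .GroupBy(s => s.DeviceId)
                .Where(g => thresholds.TryGetValue(g.Key, out var max)
                            && g.Average(s => s.Value) > max)
                .Select(g => g.Key)
                .ToList();
        }
    }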

    Second One:

    Web app that generates PDF reports. Problem was, it generated them on the fly every time the PDF was requested, instead of generating each one once and storing it in blob storage, and it was sllloowwwww. 30 seconds to generate a 5-page document. There was a list of poor decisions that led to that, but I digress.
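
    The generate-once version isn’t complicated. A rough sketch, with IReportRenderer and IBlobStore as invented stand-ins for the real PDF generator and whatever blob storage SDK is in play (not a claim about how this particular app was structured):

    using System.IO;
    using System.Threading.Tasks;

    public interface IReportRenderer
    {
        Task<byte[]> RenderPdfAsync(int reportId); // the slow ~30 second path
    }

    public interface IBlobStore
    {
        Task<Stream?> TryOpenReadAsync(string key); // null if the blob doesn't exist yet
        Task UploadAsync(string key, byte[] content);
    }

    public class CachedReportService
    {
        private readonly IReportRenderer _renderer;
        private readonly IBlobStore _blobs;

        public CachedReportService(IReportRenderer renderer, IBlobStore blobs)
        {
            _renderer = renderer;
            _blobs = blobs;
        }

        // Render the PDF at most once; every later request streams the stored copy.
        public async Task<Stream> GetReportAsync(int reportId)
        {
            var key = $"reports/{reportId}.pdf";

            var cached = await _blobs.TryOpenReadAsync(key);
            if (cached is not null) return cached;

            var pdf = await _renderer.RenderPdfAsync(reportId);
            await _blobs.UploadAsync(key, pdf);
            return new MemoryStream(pdf, writable: false);
        }
    }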

    Product owner wants the PDFs to be publicly available so users can share links to them. One of the other teams implements the feature and it’s slated for release. One day, my curiosity gets the best of me and I wonder, “what happens if I send a bunch of document requests at once?” I made it to 20 before the application ground to a halt.

    I send a quick write-up to the Scrum Master, who schedules a meeting to go over my findings. All the managers keep trying to blow it off like it’s not a big deal because “who would do something like that?” Meanwhile, I’m trying to explain to them that it’s not even malicious actors we have to be concerned about. Literally 20 users can’t request reports at the same time without crashing the app. That’s a big problem.

    They never did fix it properly. Ended up killing the product off, which was fine because it was a pile of garbage.