I tried thinking of them and started laughing. Tried a second time to be sure and it happened again. Am I doing it right?
He/Him They/Them
Working in IT for about 15 years. Been online in one way or another since the late ’90s.
I like games/anime but I’m very picky about them.
Cats are the best people.
Mail is the one thing I refuse to self-host, for the simple reason that despite not being particularly hard to get up and running initially, when it doesn’t work for whatever reason it can be (and often is) a gigantic pain in the ass to deal with, especially when the cause is out of your control. For personal use there are very good free options, and for enterprise those same free options have paid tiers.
Whether it’s Gmail having a bad day and blocking you, or your cloud provider or on-prem infrastructure crapping out for long periods and cutting you off from email, potentially losing incoming mail permanently once the sender’s retries time out. Or anything in between. It’s one of those things where I’m glad it isn’t my problem to deal with.
My only involvement with email is ensuring I have a local copy of my inbox synced up every week, so if my provider were ever to die I’d still have all my content.
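If you want to do the same, a minimal sketch using mbsync from the isync package (assuming your account and channels are already defined in ~/.mbsyncrc) is just a weekly cron entry:

```
# append a weekly job: every Sunday at 03:00, pull the inbox down to a local Maildir
( crontab -l; echo '0 3 * * 0 mbsync -a' ) | crontab -
```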
Buy the domain itself wherever you want. I like Cloudflare, and a lot of people also suggest porkbun.com. You then point the nameservers for your domain to whatever DNS service you want. If you stick with Cloudflare, it’s already done for you.
For dynamic DNS I use Cloudflare’s, with my router keeping the record updated. It’s easy to set up. Depending on your router you may need to run a service on a machine to do this instead; things like pfSense/OPNsense have it built in.
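If you end up scripting it yourself, a minimal sketch against Cloudflare’s DNS records API looks like this (the zone ID, record ID, token, and hostname are placeholders you’d fill in from your own account):

```
#!/bin/sh
# look up the current public IP, then push it into an existing A record
IP=$(curl -s https://api.ipify.org)
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.yourdomain.com\",\"content\":\"$IP\",\"ttl\":120,\"proxied\":false}"
```

Drop that in cron every few minutes and it does the same job the router integration does.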
You likely wouldn’t be using Cloudflare at that level anyway. Since you want it to work when you’re offline, you’d bypass them entirely with a local DNS server and a local reverse proxy with its own certs. You’d use something like certbot with Let’s Encrypt, which works fine: https://certbot.eff.org/
You’re right, but you can get a wildcard for that level as well.
If you mean accessing them from within your LAN while your internet is down, then no, it won’t work.
What you should be doing is either split-horizon DNS (LAN resolves local IPs, public resolves public IPs) or different DNS hostnames internally, for example media.local.yourdomain.com. There’s a sketch of the first option below.
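For the split-horizon part, a minimal sketch with dnsmasq, pointing everything under the domain at a LAN reverse proxy (the IP is made up for illustration):

```
# answer for this name (and any subdomains) locally instead of going upstream
cat > /etc/dnsmasq.d/local-overrides.conf <<'EOF'
address=/yourdomain.com/192.168.1.50
EOF
systemctl restart dnsmasq
```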
You then set up a reverse proxy in your LAN and point everything at that, with a Let’s Encrypt wildcard cert issued via the DNS challenge method so you can protect *.yourdomain.com with a single cert. Since you use Cloudflare you can use the Cloudflare API plugin with certbot; it automates the whole DNS challenge, with no need to keep opening ports or reconfiguring HTTP challenges every couple of months.
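Roughly, assuming the certbot-dns-cloudflare plugin is installed and you’ve created an API token with DNS edit rights, it looks like this:

```
# store the token where certbot can read it (keep this file locked down)
mkdir -p ~/.secrets && chmod 700 ~/.secrets
cat > ~/.secrets/cloudflare.ini <<'EOF'
dns_cloudflare_api_token = YOUR_TOKEN_HERE
EOF
chmod 600 ~/.secrets/cloudflare.ini

# issue a wildcard cert via the DNS-01 challenge; renewals reuse the same settings
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d yourdomain.com -d '*.yourdomain.com'
```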
I went with docker, but back then their documentation for it was trash and hardly worked. Had to trial-and-error it until it was functional. Hopefully they’ve fixed that by now.
If you host the instance just so your own account is under your control, there’s hardly any overhead. I’m running it in docker in a Debian 12 VM with 1 GB of RAM, 1 virtual CPU, and a 50 GB virtual disk. Haven’t had any issues.
I also see your account is on “infosec.pub” in the same way mine is on “social.vmdk.ca”, so you can try searching on lemmy.world or some other instance for the post in question using keywords. For example, I found this on lemmy.world directly while searching for UAP; no idea if it’s what you’re talking about: https://lemmy.world/post/1812373
Are you sure they’re being deleted? Federation is a bit weird in that the instance you’re browsing from might not have a piece of content for whatever reason (database rollback/restore or other issues), but if it was posted then it’s out there somewhere, on an instance that grabbed it while it was still up.
There are places where people literally leave the window open or the door unlocked so people looking to steal shit can take a look without breaking the window, see there’s nothing to steal, and move on.
That’s one way to kill the WWW.
Those features make sense for people who mostly use mobile, but the price increases make it a lot less appealing even then. At some point people will realize they’re paying more to play a video in the background or without ads than they would for Netflix/Disney or whatever people like these days.
I’ve yet to be made aware of any benefits at all. None of what you get from premium is either interesting or relevant.
Possible, yes. Cost-effective, with a valid business case? Probably not. Every extra 9 is diminishing returns: it’ll cost you exponentially more than the previous 9 while the money saved from avoided downtime shrinks. Like you said, 32 seconds of downtime: how much money is that for the business?
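The arithmetic behind that, as a quick sanity check (a year is about 31.6 million seconds):

```
# allowed downtime per year at each availability level
for a in 0.999 0.9999 0.99999 0.999999; do
  awk -v a="$a" 'BEGIN { printf "%s -> %.0f seconds/year\n", a, 365.25*24*3600*(1-a) }'
done
# 0.999999 (six nines) -> 32 seconds/year
```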
You’re pretty much looking at multiple geographically diverse Tier IV datacenters with N+2 or even N+3 redundancy all the way up and down the stack, while also implementing diversity wherever possible so no single vendor of anything can take you offline.
Even with all that though, you’ll eventually get wrecked by DNS somewhere somehow, because it’s always DNS.
Welcome to the federation. The cookies that were promised don’t actually exist but at least we’re not reddit so we’ve got that going for us.
I run Linux for everything. The nice thing is everything is a file, so I use rsync to back up all my configs for physical servers. I can do a clean install, run my setup script, then rsync over the config files, reboot, and everyone’s happy.
For the actual data I also rsync from my main server to the others. Each server is on a schedule for when it gets rsynced to, so I have about three weeks of history.
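As a sketch of how that kind of history can be kept cheaply, rsync’s --link-dest hardlinks unchanged files against the previous snapshot, so each run only costs the delta (paths, hostname, and the three-week retention are just examples):

```
#!/bin/sh
SRC="mainserver:/data/"
DEST="/backups/data"
TODAY=$(date +%F)

# full-looking snapshot, but unchanged files are hardlinks into the last one
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"
ln -sfn "$DEST/$TODAY" "$DEST/latest"

# drop snapshots older than three weeks
find "$DEST" -maxdepth 1 -type d -name '20??-??-??' -mtime +21 -exec rm -rf {} +
```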
For virtual servers I just use Proxmox’s built-in backup system, which works great.
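Under the hood that’s vzdump; if you ever want to trigger the same thing from a shell, something like this (the VM ID and storage name are placeholders):

```
# snapshot-mode backup of VM 100 to a storage called "backups", zstd-compressed
vzdump 100 --mode snapshot --storage backups --compress zstd
```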
Very important files get encrypted and sent to the cloud as well, but out of dozens of TB this only accounts for a few gigs.
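For that step, a minimal sketch with gpg and rclone (the “cloud:” remote name and path are assumptions; any configured rclone remote works):

```
# archive, symmetrically encrypt, and stream straight to the remote
# without ever writing a plaintext copy to disk
tar czf - /srv/important \
  | gpg --symmetric --cipher-algo AES256 -o - \
  | rclone rcat "cloud:backups/important-$(date +%F).tar.gz.gpg"
```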
I’ve also never thrown out a disk or USB stick in my life; I use them for archiving. Even if the drive is half dead, as long as it’ll accept data I shove a copy of something on it, then label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half these drives end up not working. I keep most of them off-site. At some point I’ll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just make no sense to bother with.
If you’re using memory for storage operations, especially something like a ZFS cache, then best practice says you want ECC so errors are caught and corrected before they corrupt your data.
In the real world, unless you’re buying old servers off eBay that already have it installed, the economics don’t make sense for self-hosting. The issues are rare and you should have good backups anyway. I’ve never run into a problem from not using ECC; I’ve been self-hosting since 2010 and have some ZFS pools nearly that old. I run exclusively on consumer hardware, with the exception of HBAs and networking, and have never had ECC.
Monitoring for my systems, like a Zabbix + Grafana combo. I keep meaning to do it but never do, mostly because of the resources it would use, the time it would take, and the impact it would have on my storage (the constant database writes would probably kill my SSDs faster). Right now I already get emails from my UPS for power issues, and from my Proxmox hosts for backup status and ZFS status.
I’ll probably cave and do it once I add a new server to my cluster.
You might not even be able to install a modern OS on it, as many are starting to drop support for old hardware. I know the Linux kernel did some pruning recently.