Oh neat! That looks like a perfect fit for me! I saved your post and will come back to it once the biyearly “just f*ing do it again” motivation hits me once more :D
Yes, I do lose the origin IP and I’m a little bugged by it. It also means that ALL traffic incoming on a specific port of that VPS can only go to exactly ONE private wireguard peer. You could avoid both of these issues by having the reverse proxy on the VPS (which is why cloudflare works the way it does), but I prefer my https endpoint to be on my own trusted hardware. That’s totally my personal preference though.
I trust my VPS provider to not be interested enough in my data to set up special surveillance tooling for each and every possible software combination their customers might have. Cloudflare on the other hand only has their own software stack to monitor, and all customers must adhere to it. It’s by design much easier for them to do statistics or snooping.
I am using the smallest tier VPS from IONOS for 1€/month. Good, reliable and trustworthy as it is a subsidiary of 1&1 telecommunications.
Rent a VPS, point DNS to it, have it act as the central wireguard peer and connect your server(s). Then bridge incoming traffic to the servers via socat or firewall rules. Done.
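Roughly, the two ends look like this (addresses, keys and the hostname are placeholders, so adapt to taste):

```
# /etc/wireguard/wg0.conf on the VPS (the central peer)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the home server
PublicKey = <homeserver-public-key>
AllowedIPs = 10.8.0.2/32

# /etc/wireguard/wg0.conf on the home server
[Interface]
Address = 10.8.0.2/24
PrivateKey = <homeserver-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25  # keeps the tunnel alive behind NAT
```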
Sure it’s easy to set up, but the same behaviour is what I get with my handrolled solution. I rent a cheap VPS with a fixed IP solely for forwarding all traffic through wireguard. My DNS entries all point to the VPS and my servers connect to the VPS to be reachable. It is absolutely network agnostic and does not require any port shenanigans on the local network nor does it require a fixed IP for the internet connection of my home server.
Data security wise the HTTPS terminates on my own hardware (homeserver with reverse proxy) and the wireguard connection is additionally encrypted. There are no secrets or certificates on the rented VPS beyond the bare minimum for the wireguard tunnel and my public key for SSH access.
Shuttling the packets on the VPS (inet to wireguard) is done by socat because I haven’t had the will or need to get in the weeds with nftables/iptables. I am just happy that it works reliably and am happy to lose some potential bandwidth to the kernelspace/userspace hoops.
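For reference, the socat part is just something along these lines (10.8.0.2 stands in for the home server’s wireguard address, and in practice each command lives in its own systemd unit):

```
# on the VPS: shuttle ports 80/443 from inet into the wireguard tunnel
socat TCP-LISTEN:80,fork,reuseaddr TCP:10.8.0.2:80 &
socat TCP-LISTEN:443,fork,reuseaddr TCP:10.8.0.2:443 &
```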
There’s the prometheus node exporter, which exposes such data on each host so a central prometheus instance can collect it. You can hook it up with Grafana for neat dashboards, and I’m almost sure it also integrates with Homeassistant.
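A minimal scrape config for that looks roughly like this (hostnames are made up; 9100 is the node exporter’s default port):

```
# prometheus.yml excerpt
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["server1.lan:9100", "server2.lan:9100"]
```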
What? I’ve never had the feeling that nextcloud assumes that. Are you using a special all-in-one docker image? Because I am using the regular one and pair it with db, redis etc. containers and am absolutely happy with it.
Maybe get a reputable one; the other ones are sadly malware-infected in way too many cases. It’s a way for the manufacturer to make an extra buck from the sale.
If you have an AVM Fritz!Box home router you can simply create a new profile that disallows internet access and set the devices you want to “isolate” to that profile. They will be able to access the local network and be accessed by the local network just fine, but they won’t have any outgoing (or incoming) connectivity.
I’ve always been on android, so take this with a grain of salt. In my opinion Samsung phones have come a very long way. They used to be slower and bloated in comparison to other brands, especially while the market was still moving fast. I used to have a Sony, a ZTE, a Motorola, an Umi and a Jiayu - I tried quite a few over the years.
The recent generations are all fast enough and, performance-wise, last 4+ years before they get noticeably slow and an upgrade becomes necessary. Software support on Samsung is now phenomenal. I had so many bugs and hitches on other vendors’ phones and they were rarely fixed; the absolute opposite has been my experience on my Samsungs. Updates are frequent, smooth and stable.
I know this reads like an ad, but I was honestly positively surprised after I bought a Samsung tablet a few years back, and have slowly switched over to Samsung devices. The same happened with all other members of my family. Samsung simply won.
I suppose the iPhone is very similar in that regard, both simply work and are great for everyday use. It’s almost boring!
I do advise you to look at the upper end though, those phones simply have more performance reserves. If you are a display menace and battery destroyer, you won’t notice any significant slowdown from the cheaper range in the 2 to 3 years you have before it becomes uneconomical to repair the device anyways.
If only modern kernels weren’t a problem. I wish you could just install a new OS like on a PC.
It’s probably also highly automated and the staff’s job is just to watch for irregularities and alert the necessary teams.
I’ve used restic before and it worked great with OVH’s object storage. Moved away from cloud backups because of the cost though.
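In case it helps anyone: restic talks to any S3-compatible object storage directly, roughly like this (endpoint and bucket are placeholders):

```
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
restic -r s3:https://<s3-endpoint>/<bucket> init       # create the repo once
restic -r s3:https://<s3-endpoint>/<bucket> backup /home
```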
Yeah, has anyone ever actually tried restoring from them? I only remember one disgruntled redditor posting about it, but that’s about it.
Depends a lot on what backup software you use. Backblaze B2 is just an S3-like object storage service. It’s the underlying storage layer for many different things, one of which can be backup software. They do have their own backup solution though, but in that case B2 is the wrong product for you to look at.
But Borg does not work with object storage; it needs a borg process on the receiving side.
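That’s why borg repos are usually addressed over SSH, which starts borg serve on the remote end, roughly like this (host and paths are placeholders):

```
borg init --encryption=repokey ssh://backup@remote.example.com/./borg-repo
borg create ssh://backup@remote.example.com/./borg-repo::{hostname}-{now} /home
```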
Oh and you also need a decently sized stone crusher for all your failed attempts and speedbenchies.
There’s Syncthing and its proprietary counterpart Resilio, which allow you to sync folders between machines and send individual files over p2p. Very neat software.
I am very happy with mine and have only ever had one hiccup during updating, which was due to my Dockerfile removing one dependency too many. I’ve run it bare metal (apache, mariadb) as well as containerized (derived custom image, traefik, mariadb). Both were okay in speed after applying all steps from the documentation.
Having the database on your fastest drive is definitely very important. Whenever I look at htop while making big copies or moves, it’s always mariadb that’s shuffling stuff around.
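If you run it containerized, pinning the db to the fast drive is a one-liner in compose, e.g. (paths and image tag are made up; READ-COMMITTED is what the nextcloud docs want for mariadb):

```
# docker-compose.yml excerpt
services:
  db:
    image: mariadb:11
    command: --transaction-isolation=READ-COMMITTED
    volumes:
      - /mnt/nvme/nextcloud-db:/var/lib/mysql  # bind mount on the SSD/NVMe
```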
In my opinion there are 2 things that make nextcloud (appear) slow:
1. Managing the ton of metadata in the db that is used by nextcloud to provide the enhanced functionality
2. It is/was a webpage rendered mostly on the server.
The first issue is hard to tackle, because it is intrinsic and the optimum also differs between deployment scales. Optimizing databases is beyond my skillset and therefore I stick to the recommendations.
The second issue is slowly being worked around: many applications on nextcloud now resemble SPAs that are highly interactive and rendered by your browser. That reduces page reloads and makes it feel smoother.
All that said, I barely use the web interface, because I rarely use the collaboration features. If I have to create a share I usually do that in the app, because that’s where I send the link to people. Most of my use case is just syncing files, calendars and contacts.
Nice, thank you!