DevOps as a profession and software development for fun. Admin of lemmy.nrd.li and akkoma.nrd.li.

Filibuster vigilantly.

  • 0 Posts
  • 126 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • Laptops/desktops: no real naming scheme; they use non-static DHCP leases anyway.

    Physical servers: NATO phonetic alphabet. If I run out of letters, something has gone terribly wrong, right?

    VMs: I don’t have many of these left, but they are named according to their function plus a digit in case I need more, e.g. docker1, k3s1. This does mean I have some potential oddities, like a k3s cluster with foxtrot, alpha, and k3s1 as members, but IMO that’s fine and lets me easily tell whether something is physical or virtual. I am considering including the physical machine’s name in the VM name for new things, as I no longer have things set up such that machines can migrate… though I haven’t made a new VM in some time.

    Network equipment: named according to location and function, e.g. rack-router, rack-10g, rack-back-1g, rack-ap, upstairs-10g, upstairs-ap. If something moves or is repurposed it is likely getting reconfigured anyway, so renaming at that point makes sense.







  • I believe Pictrs is a hard dependency and Lemmy just won’t work without it, and there is no way to disable the caching. You can move all of the actual images to object storage as of v0.4.0 of Pictrs if that helps (rough config sketch below).

    Other fediverse servers like Mastodon actually (can be configured to) proxy all remote media (for both privacy and caching reasons), so I imagine Lemmy will move that way and probably depend even more on Pictrs.
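
    For illustration, pointing pict-rs v0.4 at S3-compatible object storage looks something like the sketch below. This is from memory, so treat everything here (endpoint, bucket, credentials, even the variable names) as assumptions to verify against the pict-rs docs for your version; the docs also cover migrating existing files between stores.

    # Hypothetical sketch: pict-rs reads PICTRS__-prefixed env vars that
    # mirror its toml config; exact key names may differ by version.
    export PICTRS__STORE__TYPE=object_storage
    export PICTRS__STORE__ENDPOINT=https://s3.example.com  # made-up endpoint
    export PICTRS__STORE__BUCKET_NAME=pictrs               # made-up bucket
    export PICTRS__STORE__REGION=us-east-1
    export PICTRS__STORE__ACCESS_KEY=REDACTED
    export PICTRS__STORE__SECRET_KEY=REDACTED
    pict-rs run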





  • I switched from Plex to Jellyfin several years ago and haven’t really looked back. Overall I just didn’t like the direction Plex kept going (pushing shit streaming services, central auth, paywalling features), and dropped it even though I grabbed a lifetime Plex Pass back in the day. The only thing I miss about Plex is how easy it was to develop a custom plugin, since you could pretty much just drop Python scripts in there and have them work, though their documentation for plugin development was terrible (and I think has been removed from their site entirely).




  • Basically, no:

    It can cause some wackiness… basically, you would need to maintain that old domain forever, and everything already federated will still refer to it.

    For example, your post looks like this from an ActivityPub/federation perspective:

    {
        [...]
        "id": "https://atosoul.zapto.org/post/24325",
        "attributedTo": "https://atosoul.zapto.org/u/Soullioness",
        [...]
        "content": "<p>I'm curious if I can migrate my instance (a single user) to a different domain? Right now I'm on a free DNS from no-ip but I might get a prettier paid domain name sometime.</p>\n",
    }
    

    The post itself has an ID that references your domain, and the attributedTo points to your user, which also references your domain. AFAIK there is no reasonable way to update/change this. IDs are forever.

    It would also break all of the subscriptions for an existing instance, as the subscriptions are all set to deliver to that old domain.

    IMO your best bet would be to start a new instance on the new domain, update your profile on the old one to say where your user now lives, and maintain that old server in a read-only manner for as long as you can bear.




  • terribleplan@lemmy.nrd.li to 3DPrinting@lemmy.world: is PLA food safe?
    Nope. PETG is maybe the easiest “safer” option, but AFAIK there isn’t a truly food-safe filament. 3d-printed things are also basically impossible to clean without extensive post-processing (probably including coating them in something), so even “safer” prints are pretty much single-use.




  • Business in the front:

    • Mikrotik CCR2004-1G-12S+2XS, acting as a router. The 10g core switch plugs into it, as well as the connection to upstairs.
    • 2u cable management thing
    • Mikrotik CRS326-24S+2Q+, most 10g-capable things hook into this; it uses its QSFP+ ports to uplink to the router and downlink to the (rear) 1g switch.
    • 4u with a shelf holding 4x mini-PCs; most of them have a super janky 10g connection via an M.2-to-PCIe riser.
    • “echo”, Dell R710. I am working on migrating off of/decommissioning this host.
    • “alpha”, Dell R720. Recently brought back from the dead; I put a new (to me) external SAS card into it, and it acts as the “head” unit for the disk shelf I recently bought.
    • “foxtrot”, Dell R720xd. I love modern-ish servers with >= 12 disks per 2u; I would consider running a rack full of these if I could… forgive the lack of a label, my label maker broke at some point before I acquired this machine.
    • “delta”, “Quantum” something or other, which is really just a whitelabeled Supermicro 3u server.
    • Unnamed disk shelf, “NFS04-JBOD1” to its previous owner. Some Supermicro JBOD that does 45 drives in 4u, hooked up to alpha.

    Party in the back:

    • You can see the cheap monitor I use for console access.
    • TP-Link EAP650, sitting on top of the rack. Downstairs WAP.
    • Mikrotik CRS328-24P-4S+, rear-facing 1g PoE/access switch. The downstairs WAP hooks into it, as does the one mini-PC I didn’t put a 10g card in. It also provides power (but not connectivity) to the upstairs switch. It used to get a lot more use before I went 10g basically everywhere. It bonds 4x SFP+ to uplink via the 10g switch in front.
    • You can see my cable management, which I would describe as “adequate”.
    • You can see my (lack of) power distribution and power backup strategy, which I would describe as “I seriously need to buy some PDUs and UPSs”.

    I opted for a smaller rack as my basement is pretty short.

    As far as workloads go:

    • alpha and foxtrot (and eventually delta) are the storage hosts, running Ubuntu and using gluster (rough sketch after this list). All spinning disks, ~160TiB raw.
    • delta currently runs TrueNAS; I’m working on moving all of its storage into gluster and adding it in to that. ~78TiB raw, with some bays used for SSDs (L2ARC/ZIL) and 3 used in a mirror for “important” data.
    • echo, currently running 1 (Ubuntu) VM in Proxmox. This is where the “important” workloads (frp, Traefik, DNS, etc.) run right now.
    • mini-PCs, running Ubuntu and all sorts of random stuff (dockerized), including this Lemmy instance, mounting the gluster storage where necessary. They also have a gluster volume amongst themselves for highly redundant SSD-backed storage.
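
    Since a few of the bullets above lean on gluster: conceptually it’s just a replicated volume across the storage hosts that other machines mount over the network. A minimal sketch, reusing my host names but with made-up volume and brick paths:

    # On one storage host: form the pool and create a replica-3 volume
    # (brick paths are hypothetical; gluster may require 'force' if a
    # brick sits on the root filesystem).
    gluster peer probe foxtrot
    gluster peer probe delta
    gluster volume create tank replica 3 \
        alpha:/bricks/tank foxtrot:/bricks/tank delta:/bricks/tank
    gluster volume start tank

    # On a client (e.g. one of the mini-PCs): mount via the FUSE client.
    mount -t glusterfs alpha:/tank /mnt/tank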

    The gaps in the naming scheme:

    • I don’t remember what happened to bravo (another R710); pretty sure it died, though I may have given it away, or it may be sitting in a disused corner of my basement.
    • We don’t talk about charlie; charlie died long ago. It was a C2100. Terrible hardware. delta was bought because charlie died.

    Networking:

    • The servers are all connected to the 10g switch over bonded 2x 10g SFP+ DACs (bonding sketch after this list).
    • The 1g switch is connected to the 10g switch via a QSFP+ breakout into a bonded 4x SFP+ DAC.
    • The 10g switch is connected to the router via a QSFP+ breakout into a bonded 4x SFP+ DAC.
    • The router connects to my ISP router (which I sadly can’t bypass…) using a 10GBASE-T SFP+ module.
    • The router connects to an upstairs 10g switch (Mikrotik CRS305-1G-4S+) via an SFP28 AOC (for future upgrade possibilities).
    • I used to do a lot of fancy stuff with VLANs and L3 routing and stuff… now it’s just a flat L2 network. Sue me.
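
    Since bonded links come up a lot above: on the Linux side these are plain LACP (802.3ad) bonds, with a matching LAG configured on the switch. A rough iproute2 sketch with made-up interface names (in practice this would be persisted via the distro’s network config rather than run ad hoc):

    # Hypothetical sketch: bond two 10g ports into an LACP bond.
    ip link add bond0 type bond mode 802.3ad
    ip link set enp1s0f0 down              # made-up NIC names; slaves must be
    ip link set enp1s0f1 down              # down before being enslaved
    ip link set enp1s0f0 master bond0
    ip link set enp1s0f1 master bond0
    ip link set bond0 up
    ip addr add 192.168.1.10/24 dev bond0  # example address on the flat L2 network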