• 8 Posts
  • 30 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • I think you are looking at this wrong. Proxmox is not prod-ready yet, but it is improving, and the market is pushing the incumbents toward crappier service at higher prices. Broadcom is making VMware dip below the ROI threshold, and Hyper-V will not survive when it is dragging customers away from the Azure cash cow. The advantage of Proxmox is that it will persist after the traditional incumbents are afterthoughts (think XenServer). That’s why it is a great option for a homelab or lab environment with previous-gen hardware. Proxmox is still missing huge features: VMs hang unpredictably if you migrate them across hosts with different CPU vendors (Intel -> AMD), there is no cluster-wide startup order, and things like DRS equivalents are still separate plugins. That being said, knowing it now and submitting feedback or patches positions you to have a solution when MS and Broadcom price you out of on-prem.
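    A common workaround for the cross-vendor migration hangs is pinning VMs to a generic baseline CPU model like kvm64 instead of “host”, so Intel and AMD nodes expose the same virtual CPU feature set. Rough sketch below, shelling out to Proxmox’s qm tool; the VM ID is hypothetical, and you give up vendor-specific CPU flags by doing this.

    ```python
    import subprocess

    VMID = "101"  # hypothetical VM ID; adjust for your cluster

    # Pin the guest to a generic baseline CPU model so migration between
    # Intel and AMD hosts presents an identical virtual CPU feature set.
    # Trade-off: the guest loses vendor-specific instructions and flags.
    subprocess.run(["qm", "set", VMID, "--cpu", "kvm64"], check=True)
    ```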


  • Universities have huge endowments and investment portfolios. These are generally broad and meant to keep the school’s financial backing stable; this is especially true of large, older universities like Harvard or Columbia (but almost all universities have one in some form or another). They support both students and ongoing academic research.

    While many of these portfolios consist of wider funds, many have specific investments in specific companies and industries. That means the university is invested in, and benefiting from, particular areas of industry. The main request is to divest the investment portfolios from companies owned by or supporting entities connected with Israel’s war on Gaza. In some cases this may be possible (move a ton of stock from a defense contractor making weapons sold to Israel to an energy company) and in some cases it may not (they’re invested in a broad market fund that itself invests in specific funds, and you can’t easily cherry-pick which stocks are actually in it). It’s also possible that there are research grants funded through companies the students want to pressure; cancelling a grant sends a message to the company, but it also leaves entire teams and time-dependent science without funding, potentially ending the work outright unless alternate funding can be found. There also may be contracts involved for specific research and engagements, and breaking a contract is more complicated than just ripping it up (especially if there are early termination policies outlined).

    Realistically, the best students can hope for is a commitment to investigate and divest where possible, which is frustrating but also makes sense. I’ve worked in higher education for 20 years and have seen this on a smaller scale around defense contractors during the wars in Afghanistan and Iraq. The endowment is a slow-moving leviathan, but I think it’s a good place for the students to apply pressure.



  • Lego parts are incredibly precise, and the manufacturing tolerances have been consistent for decades. It’s nearly impossible to replicate that precision on any modern 3D printer.

    That being said, different parts are more tolerant of wiggle room. Grabbing a stud is hard, grabbing a 2x4 is not. If you were going to print a minifig head, trying to replicate the neck barrel is gonna be tough, but making a larger hole with 2-3 ridges which taper to grip might be easier. If you plan what you’re doing and are realistic about what you can print, it’s definitely not out of the question.

    Lego is ABS, if I recall correctly.


  • One thing to add: the original sample was theorized to be superconductive because of the magnetic levitation, not a measurement of its resistance. In truth, “diamagnetic semiconductors” can exhibit similar levitation without the lack of resistance; it’s now theorized that’s what the original authors experienced. The initial paper was also released with slim details, a lack of peer review, and a lot of unknowns. It’s possible that “doped with copper” is nuanced, and if you made 100 samples of this with different doping at an atomic level, you would get different results. That would mean “LK99” is easy to make, but “LK99 doped with copper to exactly achieve superconductivity at room temperature and pressure” is NOT easy to make, and we may not even have the tech to dope it precisely enough to be useful.

    The more likely outcome is research into a new doping technique which leads to materials meta-science that could, one day, get us a superconductor with practical properties. But this was sort of hyped as “a room temp/pressure superconductor that any mid-tier lab could make,” which is just false…there are YouTube science channels out there synthesizing the stuff, and it’s just not what was “advertised.”


  • I’m not an expert, but I’ve been using TrueNAS SCALE since I cut over from TrueNAS CORE, and before that FreeNAS, going back to about 2010. I have a bunch of lessons and assumptions; someone can correct me if these are misguided, but here’s my tl;dr.

    1. Your data drives should be in sets of 3 for a raidz1, or 5+ (I use 6) for a raidz2. While technically the minimum is 2 or 4 respectively, the best performance and protection comes in sets of 3. This is a good synopsis: https://superuser.com/a/1058545 The answer points out that a 3-way mirror also works, but then you lose a lot of the data integrity checking that comes with ZFS. I keep an offline spare; in your situation, putting 3 drives in a RAIDZ1 and keeping one in the drawer would give you ~8TB of capacity protected against bit rot and a single drive failure (see the quick capacity sketch after this list). This is a better description of the RAID levels: https://calomel.org/zfs_raid_speed_capacity.html
    2. In terms of just storage, that system will be fine, though ideally you get ECC RAM; that’s often a bigger swap, so if you can’t change that, so be it. It does matter for integrity checking. The more containers you run, the tougher it gets to spec out. I have a separate Proxmox hypervisor and routinely have 4+ Jellyfin streams going at a time, so it wouldn’t be enough in my case, but you’ll have to experiment and scale. I will say, even though a separate Proxmox box comes with a lot of headaches, it was more important than any schooling I ever did in terms of my IT career. Networking, monitoring, access control: suddenly I have a solution to every IT problem I encounter, and I have experience with it.
    3. Personally, I do a 2-disk mirror for the OS, and then multiple 3- or 6-disk vdevs for data. If you have a single OS drive and lose it, that’s fine as long as you have backups to restore, but I find just swapping a cheap SSD into a mirror is better. I use cheap-as-dirt 64GB SSDs as the boot drives, and if one dies, you can swap it and replace it in the UI, no problem. You can technically mirror two mismatched-size disks, but it’ll fuss at you.
    4. Start with TrueNAS SCALE as just a storage device; ideally it should be close to the hardware and not virtualized. In the beginning, especially since you’re likely dealing with 1 pool, just make 1 vdev for everything. You can make folders in there, or datasets, and play with partitioning data, sharing data to other computers, etc. I use NFS shares AND iSCSI LUNs to my Proxmox box, and ultimately I’m on one big pool with multiple vdevs in it. Add your things like Home Assistant one at a time; going through it will show you how to sort storage, how to provision it, etc. Over time, things grow; this will not be your final configuration, and most people expand over time. You may decide “I want bulk storage in one vdev, I want containers and VMs in another.” When you expand, that’s when you split things off and make more nuanced decisions. That will come from better assessing your needs.
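    To put rough numbers on point 1: the sketch below assumes 4TB drives (which is what the ~8TB figure implies) and ignores ZFS metadata and slop overhead, so real usable space will come in a bit lower.

    ```python
    # Rough usable-capacity estimate for simple ZFS layouts.
    # Assumes equal-size drives; ignores ZFS metadata/slop overhead.

    def usable_tb(drives: int, drive_tb: float, parity: int) -> float:
        """raidz1 -> parity=1, raidz2 -> parity=2, N-way mirror -> parity=N-1."""
        return (drives - parity) * drive_tb

    print(usable_tb(3, 4, 1))  # 3x 4TB raidz1 -> ~8 TB usable
    print(usable_tb(6, 4, 2))  # 6x 4TB raidz2 -> ~16 TB usable
    print(usable_tb(2, 4, 1))  # 2x 4TB mirror -> ~4 TB usable
    ```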

    You mention Jellyfin…my struggles with that were never storage. My struggles there were networking; it was a big part of why I decided to upgrade my server networking to 10G, which let me run Jellyfin on another hypervisor and push all of that traffic over the network.


  • A long time ago I got 2 R710s for cheap. One was a test server running Ubuntu Server with a desktop environment (I was used to Windows Server administration) and the other was a headless Minecraft server. Both have about 200 gigs of RAM and similar CPU configs. Every self-hosting thing I tried went on the main server…DHCP, DNS, Jellyfin, Nextcloud, Apache, Vaultwarden…it got out of hand. Even then, I used the desktop over X2Go as a stable environment when moving between clients. We replaced the Minecraft server, then I converted that box to Proxmox, then peeled off services into smaller VMs while learning to use Ansible. Now every “server” has been moved off, and I basically just want the underlying machine as a remote desktop VM. It isn’t precious per se, and it is backed up, but starting over would be a headache, so I wanna take a real shot at P2V.
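    For what it’s worth, the rough flow I have in mind is: image the physical disk to a raw file, then import it into a fresh Proxmox VM. A sketch of that, shelling out to Proxmox’s qm tool; the VM ID, image path, and storage name are placeholders, and the volume name printed by the import may differ on your storage.

    ```python
    import subprocess

    VMID = "120"                         # hypothetical new VM ID on the Proxmox host
    IMAGE = "/mnt/backup/r710-root.raw"  # raw image of the physical disk (e.g. taken with dd)
    STORAGE = "local-lvm"                # target Proxmox storage

    # Create an empty VM shell, import the physical disk image, then attach it as the boot disk.
    subprocess.run(["qm", "create", VMID, "--name", "r710-p2v",
                    "--memory", "8192", "--net0", "virtio,bridge=vmbr0"], check=True)
    subprocess.run(["qm", "importdisk", VMID, IMAGE, STORAGE], check=True)
    subprocess.run(["qm", "set", VMID,
                    "--scsi0", f"{STORAGE}:vm-{VMID}-disk-0",
                    "--boot", "order=scsi0"], check=True)
    ```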