I recently noticed that htop displays a much lower ‘memory in use’ number than free -h, top, or fastfetch on my Ubuntu 25.04 server.

I am using ZFS on this server and I’ve read that ZFS will use a lot of RAM. I also read a forum comment saying that htop doesn’t count caching done by the kernel, but I’m not sure how to confirm that ZFS is what’s causing the discrepancy.

I’m also running a bunch of docker containers and am concerned about stability, since I don’t know which number I should be looking at. Depending on the tool, I have either ~22 GB, ~4 GB, or ~1 GB of usable memory left. Is htop the better metric to use when my concern is available memory for new docker containers, or are the other tools better?

Server Memory Usage:

  • htop = 8.35G / 30.6G
  • free -h =

                   total        used        free      shared  buff/cache   available
    Mem:            30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi

  • top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
  • fastfetch = 26.54GiB / 30.6GiB

EDIT: My Results

tldr: all the tools are showing correct numbers; htop just seems to be leaving the ZFS ARC cache out of “used”. For the purposes of ensuring there is enough RAM for more docker containers in the future, htop shows the most useful number with my setup.
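One way to verify this, assuming ZFS-on-Linux (which exposes ARC statistics in /proc/spl/kstat/zfs/arcstats; the arc_summary tool prints the same data), is to compare the ARC size against the gap between free’s and htop’s “used”:

    # ZFS ARC size in bytes is the "size" row of arcstats
    awk '$1 == "size" {printf "%.2f GiB\n", $3 / 1024^3}' /proc/spl/kstat/zfs/arcstats

If htop is simply leaving the ARC out of “used”, the numbers should roughly satisfy: free’s used - htop’s used ≈ ARC size (here, 26 GiB - 8.35 GiB ≈ 18 GiB).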

  • a_fancy_kiwi@lemmy.world (OP):

    Is there a good way to tell what percentage of the RAM in use is less-important file caching that could be dropped without any adverse effects, versus memory that, if reclaimed, would stop an app from functioning?

    Basically, I’m hoping htop isn’t broken: that it’s reporting I have 8 GB of important, show-stopping files open and that everything else is cache that can be dropped without any need to touch swap.

    • tal@lemmy.today:

      https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable

      Rik van Riel’s comments when adding MemAvailable to /proc/meminfo:

      /proc/meminfo: MemAvailable: provide estimated available memory

      Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up “free” and “cached”, which was fine ten years ago, but is pretty much guaranteed to be wrong today.

      It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files.

      Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the “low” watermarks from /proc/zoneinfo.

      However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory.

      It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.
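
      You can check the raw inputs yourself; MemAvailable and the fields it is estimated from are all plain rows in /proc/meminfo:

         grep -E 'MemTotal|MemFree|MemAvailable|^Cached|Shmem|SReclaimable|Active\(file\)|Inactive\(file\)' /proc/meminfo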

      Looking at the htop source:

      https://github.com/htop-dev/htop/blob/main/MemoryMeter.c

         /* we actually want to show "used + shared + compressed" */
         double used = this->values[MEMORY_METER_USED];
         if (isPositive(this->values[MEMORY_METER_SHARED]))
            used += this->values[MEMORY_METER_SHARED];
         if (isPositive(this->values[MEMORY_METER_COMPRESSED]))
            used += this->values[MEMORY_METER_COMPRESSED];
      
         written = Meter_humanUnit(buffer, used, size);
      

      It’s adding used, shared, and compressed memory to get the amount actually tied up, but disregarding cached memory, which, based on the comment above, is problematic, since some of that cache may not actually be freeable.

      free, on the other hand (like top, also from procps-ng), reads the kernel’s MemAvailable directly:

      https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c

      	printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));
      

      In short: you probably want to trust /proc/meminfo’s MemAvailable (which is what free and top show), and htop is probably giving a misleadingly low number.
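
      If you want to script a “do I have room for another container” check, reading MemAvailable directly is the simplest option; a minimal sketch:

         # MemAvailable is reported in kB; print it in GiB
         awk '$1 == "MemAvailable:" {printf "%.1f GiB available\n", $2 / 1024^2}' /proc/meminfo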

    • Onno (VK6FLAB)@lemmy.radio:

      This is a job for the OS.

      You can run most Linux systems with stupid amounts of swap and the only thing you’ll notice is that stuff starts slowing down.

      In my experience, only in extremely rare cases are you smarter than the OS. In 25+ years of using Linux daily I’ve seen it exactly once: oomkiller killed running mysqld processes, which would have been fine if the developer had used transactions. Suffice it to say, they did not.

      I used a 1-minute cron job to reprioritize the process; problem “solved” … for a system that hadn’t been updated in 12 years but was still live while we documented what it was doing and what was required to upgrade it.
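
      Something along these lines, using the oom_score_adj knob, is all it takes; the mysqld target and the -1000 value here are illustrative (-1000 exempts a process from the OOM killer entirely):

         # system crontab entry: every minute, shield mysqld from the OOM killer
         * * * * * root pgrep -x mysqld | while read pid; do echo -1000 > /proc/"$pid"/oom_score_adj; done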