UPDATE EDIT:
Man, it is crazy to watch the dashboard and console at the same time. Even with no HDDs spinning, and as much RAM as I can give the SCALE VM, services just slowly take over the RAM until the console shows a kernel panic.
Core was solid for so long with everything I threw at it.
It runs out of memory after services soak up all the RAM; the ZFS cache gets choked down to 3GB out of 16.
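For anyone digging into the same symptom, the ARC size and cap can be read (and capped until reboot) like this on SCALE; this is a sketch, and the 8 GiB value is just an example, not a recommendation:

```shell
# Current ARC size and maximum target, in bytes
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Cap ARC at 8 GiB until reboot (run as root, adjust for your RAM)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max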
- Xeon E3-1265L v2
- ASUS P8Z77-V Deluxe
- 32GB DDR3
- HBA passed through to TrueNAS, running a mirror pool
- TrueNAS VM runs on the local Proxmox SSD
- Proxmox 9.1.1
- TrueNAS SCALE 25.10.0.1, but I tried a 24.x version also
Once the install starts crashing, the VM will still crash after booting up without the HBA card.
I’ve seen a few posts from other people having the out-of-memory (OOM) issue, but almost every reply says it will be fixed in the next update, and that update is older than what we’ve got now.
It did run okay JUST long enough for me to make the mistake of upgrading the ZFS feature flags, so now I can’t roll back to Core.
Does SCALE have this issue because it’s virtualized? Would it run better on bare metal?
Anyone tried XigmaNAS? FreeBSD-based again, at least.
Unraid looks okay, but the paywall?
OpenMediaVault?
Any advice or discussion is appreciated!
Have you made sure your RAM modules are all good? I was getting funky behavior with TrueNAS in a Proxmox VM, and it turned out one of my 32GB sticks was bad. Removed the stick, no funky behavior. Replaced it with a different stick, and it’s been solid since, maybe a year and a half on now.
This is what I am hoping to avoid! But no, not yet. Does Debian/Proxmox have a built-in memtest, or is this through the BIOS? Or do I just start removing/swapping RAM?
I have netboot.xyz on my network for iPXE booting. They have memtest as a boot image; I ran that for a few days. I don’t remember if it found the stick that was bad or if I just started pulling sticks and booting, process of elimination.
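If you’d rather test without rebooting into a memtest image, the userspace memtester tool can exercise a chunk of RAM from a running Debian/Proxmox host. A sketch, assuming you install the memtester package; note it can only test memory the kernel isn’t already using, so it’s less thorough than a boot-time memtest:

```shell
apt install memtester   # on the Proxmox host
memtester 4G 3          # lock and test 4 GiB of RAM, 3 passes (run as root)
```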
Man, that sounds awesome. Too bad this is the machine I would have had a netboot setup configured on!
Do you have an old Raspberry Pi or laptop? netboot.xyz will run on a potato with Ethernet.
Runs fine for me; my TrueNAS VM gets 24 GB RAM.
How much RAM does “services” soak up on your install?
2.5 GiB. I only run the SSH, SMB, and SMART services.
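For anyone wanting to compare on their own box, a quick way to see which processes are actually eating RAM:

```shell
# Top 10 processes by resident memory (RSS)
ps aux --sort=-rss | head -n 10
```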
I have a TN SCALE VM hosted in Proxmox. The only “issue” I have is that the web GUI gets pushed to swap if it isn’t used for more than a week, so when I connect, it literally takes a couple of minutes while it gets shuffled back into RAM. Once it’s “warmed up” it’s fine. But my SCALE VM only does these things: manage ZFS pools, control NFS/Samba shares, and replicate pool snapshots to an off-site backup server. I intentionally have it do nothing else. All other services are in different VMs or LXC containers in Proxmox.
Does your SCALE install have any swap space set up? That should help prevent out-of-memory crashes. Potential performance issues would be better than crashing.
Man, I didn’t even know VMs on Proxmox could have swap space. Here I go learning again.
It will be controlled by TrueNAS, not Proxmox. TrueNAS can add swap space to each drive automatically: https://www.ixsystems.com/documentation/truenas/11.3-U2.2/storage.html
But you probably already have existing drives so that doesn’t help. This might though: https://wiki.debian.org/Swap
But be aware that TrueNAS is designed to be an appliance and doesn’t really want you tinkering under the hood, so you may have to manually re-add the swap after each boot of TN.
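For what it’s worth, a manual swap file on a Debian-based system (which SCALE is under the hood) looks roughly like this. One caveat I’d flag: a swap file shouldn’t live on a ZFS dataset (fallocate isn’t supported there, and swap-on-ZFS can deadlock under memory pressure), so this belongs on a non-ZFS filesystem:

```shell
fallocate -l 4G /swapfile   # reserve a 4 GiB file
chmod 600 /swapfile         # swap files must not be world-readable
mkswap /swapfile            # format it as swap
swapon /swapfile            # enable it
swapon --show               # verify it is active
```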
I would guess the best long-term fix would be moving services out of the TN VM and into a different VM.
I’m not sure what it is, but SCALE has never thrilled me. I’ve tested it a couple of times and I just didn’t get along well with it. I know Jim Salter (practicalzfs.com) has frequently recommended XigmaNAS as a strong (albeit less pretty) alternative to TrueNAS. I did some tests with that as well and it seemed perfectly fine. In the end I decided that when I migrate off of Core this winter, it’ll be to a bare-metal FreeBSD system. I’m using it as an excuse to better learn that ecosystem and to bone up on Ansible, which I’m using to define all of my settings.
BSD isn’t really being maintained. It gets contributions once in a while, but the vast majority of development happens on Linux.
That’s certainly true of TrueNAS Core, but FreeBSD itself is quite active (15.0-RELEASE dropped this month), as are the other BSDs.
All I know is that iXsystems said they might drop Core since BSD isn’t nearly as well maintained as Linux.
If I remember correctly, that was largely in consideration of the large corpus of Docker-packaged projects that could be used as a pre-built app ecosystem. That makes a lot of sense for anyone who really wants an appliance-like, all-in-one system with minimal setup.
Next time, when you make major changes like a ZFS upgrade, create a checkpoint and keep it for a while. You can roll back everything, even the pool version.
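A sketch of that checkpoint workflow (“tank” is a placeholder pool name):

```shell
zpool checkpoint tank                      # take a checkpoint before the upgrade
zpool status tank                          # shows the checkpoint and the space it holds

# If the upgrade goes wrong, rewind the whole pool:
zpool export tank
zpool import --rewind-to-checkpoint tank   # discards everything since the checkpoint

# If the upgrade is good, free the held space:
zpool checkpoint --discard tank
```

Only one checkpoint can exist per pool at a time, and it holds space until discarded, so it is a short-term safety net rather than a backup.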
I personally like to run ZFS on a bare metal server, just the plain OS, no further “NAS” or virtualization software.
I don’t really know what your use cases are, so I cannot tell if it’s adequate for you.
One thing I can be certain of is that I barely know enough to keep this stuff going!
I will learn more about ZFS checkpoints! Thanks for the tip.
TrueNAS should do that automatically for the OS
It has been pretty solid for me. Make sure you are doing a proper PCIe passthrough, as disk passthrough doesn’t pass disk metadata.
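For comparison, whole-device PCIe passthrough of the HBA in Proxmox looks like this (the PCI address and VM ID here are placeholders; lspci will show yours):

```shell
lspci -nn | grep -i -e sas -e lsi   # find the HBA's PCI address, e.g. 01:00.0
qm set 100 -hostpci0 0000:01:00.0   # attach the whole device to VM 100
```

With hostpci passthrough, the guest talks to the controller directly, so TrueNAS sees the real disks with their SMART data and serials, unlike per-disk passthrough.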