After years of running a makeshift NAS on an old desktop, I finally built a proper TrueNAS Scale server. Here’s what I learned about ZFS pool design, hardware selection, and avoiding common pitfalls.
Hardware selection
Budget was a constraint, so I optimized for reliability over raw speed:
- Motherboard: ASRock B550M with ECC support
- CPU: Ryzen 5 5600G (low TDP, enough for transcoding)
- RAM: 32GB ECC DDR4 (ZFS loves RAM for ARC cache)
- Boot: 2x 120GB SSD in mirror (TrueNAS boot pool)
- Data: 4x 8TB Seagate Exos in RAIDZ2
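For reference, here's roughly how that maps onto raw `zpool` commands. TrueNAS builds both pools through its UI (the installer creates the boot mirror), so this is just a sketch; the `/dev/disk/by-id/` paths are placeholders for whatever your drives enumerate as.

```sh
# Data pool: 4x 8TB in a single RAIDZ2 vdev.
# ashift=12 forces 4K sector alignment; device paths are illustrative.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-EXOS_1 /dev/disk/by-id/ata-EXOS_2 \
  /dev/disk/by-id/ata-EXOS_3 /dev/disk/by-id/ata-EXOS_4

# Boot pool: the TrueNAS installer handles this, but the equivalent is
# a simple two-way mirror of the 120GB SSDs.
zpool create boot-pool mirror \
  /dev/disk/by-id/ata-SSD_1 /dev/disk/by-id/ata-SSD_2
```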
Why RAIDZ2
With 4 drives, RAIDZ2 gives you two-drive redundancy at the cost of half your raw capacity. The math:
Raw capacity: 4 × 8TB = 32TB
RAIDZ2 usable: ~16TB (two drives' worth of parity)
For a homelab where data loss is catastrophic but performance isn't critical, RAIDZ2 is the right call. RAIDZ1 with 4 large drives is risky: once one drive fails you have no redundancy left, and an unrecoverable read error (URE) during the resilver means lost data.
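The same arithmetic as a quick shell check, if you want to compare other layouts. Usable RAIDZ2 capacity is roughly (N - 2) × drive size, before metadata and slop-space overhead:

```sh
# RAIDZ2 usable capacity, ignoring metadata/slop overhead
drives=4; size_tb=8
echo "$(( (drives - 2) * size_tb ))TB usable"   # -> 16TB usable
```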
ZFS tuning
A few settings I changed from defaults:
```sh
# Enable compression (lz4 is fast and effective)
zfs set compression=lz4 tank

# Set record size based on workload
zfs set recordsize=1M tank/media        # Large sequential files
zfs set recordsize=16K tank/databases   # Small random IO

# Enable dedup only if you have the RAM (the usual rule of thumb is
# ~5GB per TB of deduplicated data, well beyond the general
# 1GB-of-RAM-per-TB-of-storage guideline). I don't. Left it off
# (the default anyway, but I like being explicit).
zfs set dedup=off tank
```
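Once real data lands on the pool, it's worth verifying that these settings are doing what you expect; `compressratio` is cumulative, so check it after the pool has seen actual workloads:

```sh
# Effective compression ratio per dataset (1.00x means no savings)
zfs get compression,compressratio tank tank/media tank/databases

# Confirm the recordsize settings took effect
zfs get recordsize tank/media tank/databases
```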
Snapshot strategy
ZFS snapshots are essentially free to take; they only start consuming space as the data they reference is overwritten or deleted. My retention policy:
| Interval | Keep |
|---|---|
| Hourly | 24 |
| Daily | 30 |
| Weekly | 12 |
| Monthly | 12 |
Automated via TrueNAS periodic snapshot tasks. Total snapshot overhead is typically under 5% of pool capacity.
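For the curious, here's a hand-rolled sketch of what one hourly task boils down to, assuming a GNU userland (`head -n -24` is GNU-specific); the TrueNAS periodic snapshot tasks do all of this for you:

```sh
# Take an hourly snapshot of tank/media, then keep only the newest 24.
zfs snapshot tank/media@hourly-"$(date +%Y%m%d-%H%M)"
zfs list -H -t snapshot -o name -s creation tank/media \
  | grep '@hourly-' \
  | head -n -24 \
  | xargs -r -n1 zfs destroy
```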
Network considerations
A single 1GbE link tops out around 110 MB/s in practice: fine for most homelab use, but a bottleneck for large transfers. I added a USB 3.0 2.5GbE adapter as a stopgap until I can run proper 10GbE.
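A quick way to measure what the link actually delivers is an iperf3 run against the NAS. The hostname `nas` below is a placeholder, and the server side needs iperf3 listening first:

```sh
# On the NAS:
iperf3 -s

# From a client, with four parallel streams to saturate the link:
iperf3 -c nas -P 4
```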
What I’d do differently
- Buy 6 drives from the start: expanding a RAIDZ vdev is painful
- Get a proper case: my initial open-frame setup was a dust magnet
- Plan for a UPS from day one: ZFS is resilient, but dirty shutdowns still hurt (see the note below)
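On the UPS note: TrueNAS SCALE ships Network UPS Tools (NUT), so once a UPS is wired up and the UPS service is configured, you can confirm it's actually being monitored. The UPS name `ups` below is a placeholder for whatever you named it in the service settings:

```sh
# Query the configured UPS (name 'ups' is a placeholder).
# OL = on line power, OB = on battery.
upsc ups@localhost ups.status
```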
