
Hardware

for CORE and SCALE

Selecting hardware is the first step many users take on their TrueNAS journey. Whether they are repurposing an old gaming computer, buying used server gear from eBay and Craigslist, or purchasing a brand new rack mount setup, there are some important guidelines which we'll discuss below. The official hardware guide can be found here.

If you would prefer to purchase a prebuilt system to host TrueNAS CORE, the TrueNAS Mini or TrueNAS R-Series might be good options!

RAM/Memory


Memory sizing will depend entirely on the system's intended workload. If you are building a home NAS for a small handful of users, you will need less memory than someone building a high performance NAS to store very active databases.

ZFS will make use of as much memory as you can feed it. The majority of this memory gets used for ZFS' cache: the adaptive replacement cache (ARC). This clever cache algorithm keeps track of recently-used and frequently-used data to accelerate access to that data should it be needed again. The algorithm dynamically resizes itself based on how the data is being accessed in order to maximize cache hit rate. You can read more about the ARC algorithm in the ZFS section of this guide or here on Wikipedia.

iXsystems recommends a minimum of 8GB of RAM but most users will see a significant performance improvement by running at least 16GB or even 32GB. Smaller systems for home use will see diminishing returns by going to 64GB or 128GB. Higher performance systems will benefit from 128GB or more, particularly if the working dataset size (i.e., how much data is actively in use at any given time) is large. It's not uncommon to see TrueNAS Enterprise systems deployed with 768GB of memory to support very high performance workloads. The old "1GB RAM per 1TB storage" rule is really just a rough guide; RAM should be sized based on how active the workload is and how frequently it will be re-accessing the same data.

ECC RAM

Error Correcting Code (ECC) memory includes extra circuitry to detect and automatically correct single bit errors in RAM. Bit errors in RAM are often caused by cosmic rays (high-energy radiation originating from outer space) but can also be caused by heat and fluctuations in voltage. ECC memory DIMMs cost a bit more than non-ECC DIMMs and usually require a server CPU and motherboard.

ECC memory is recommended for all OpenZFS systems but is not strictly necessary. Long-time TrueNAS community members may have heard stories about the "scrub of death" where non-ECC memory causes on-disk corruption and total pool failure but this was debunked a long time ago.

Matt Ahrens (a co-founder of the ZFS project at Sun Microsystems) has stated that "there's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem."

CPU


TrueNAS and OpenZFS under a light to medium workload are not excessively hard on the CPU. The TrueNAS docs recommend a minimum of a 2-core CPU but home users would benefit from a 4- or 6-core CPU. Most of the services running on TrueNAS will benefit more from higher CPU clock speeds than they will from a higher thread count, so most users should generally prefer (for example) a 3GHz 6-core CPU over a 2GHz 20-core CPU.

As with memory, higher performance workloads will need a higher-end CPU. If you're designing an all-flash system, it's advisable to invest in a better CPU or the system will become "CPU bound" before the drives are really taxed. If you need to support tens of thousands or hundreds of thousands of IOPS, or if you need to move multiple gigabytes of data per second over high-speed links, you will likely need a higher-end CPU or even a dual socket setup.

If you want to use ZFS encryption, make sure your CPU supports AES-NI. Without AES-NI support, performance will be very poor. Most modern CPUs support this instruction set but it's good to check if you're unsure.
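
If you're unsure whether your CPU supports AES-NI, you can check from a shell. Here's a rough sketch (assuming a Linux environment such as a TrueNAS SCALE shell, where /proc/cpuinfo exists) that looks for the "aes" CPU flag:

```python
# Minimal sketch: check for the AES-NI instruction set on a Linux system.
# Assumes /proc/cpuinfo is available (it won't be on FreeBSD-based CORE).
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.lower().startswith("flags"):
                return "aes" in line.split()
    return False

if __name__ == "__main__":
    if has_aes_ni():
        print("AES-NI supported; ZFS encryption should perform well")
    else:
        print("No AES-NI detected; expect poor encryption performance")
```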

Used RAM/CPU Tips


If you are comfortable buying used hardware, you may consider getting an older CPU generation that supports DDR3 memory. DDR3 memory is much less expensive on the used market than DDR4.

Intel Xeon CPUs from the Ivy Bridge and Haswell generations (v2 and v3 Xeons) are far less expensive than the newer Broadwell generation (v4) while still supporting PCIe 3.0 and offering very similar performance. For example, the Xeon E5-2667 v3 had an MSRP of over $2,000 when it launched in 2014 but can be had on eBay for under $25 at the time of writing. Starting with Skylake, Intel switched to the "Xeon Scalable" naming scheme; this generation is still very expensive.

If you have an application that needs a lot of RAM but you don't want to pay for expensive high-density DIMMs, consider a dual- or even quad-socket system to get more DIMM slots. 8x 32GB DIMMs will almost always be more expensive than 16x 16GB DIMMs. Multi-socket systems will consume more power and need more (louder) cooling, so factor that into your planning as well.

Hard Drives


The most important rule when shopping for hard drives is to avoid shingled magnetic recording (SMR) hard drives. These drives use some special tricks to get greater density at a given price point but they have major issues running under ZFS (particularly during scrubs and resilvers). For this reason, drives used in a ZFS system should use conventional magnetic recording (CMR).

When designing a system to host a single pool of storage, you should plan to select a single disk capacity, disk speed, and RAID layout up front and stick with that for all expansions during the life of the system. For example, if you design your system to use 14TB drives running at 5,400 RPM in an 8-wide RAIDZ2 vdev layout, you will want to stick with 14TB 5400 RPM drives in 8-wide RAIDZ2 when you expand the pool. This means buying at least 8x 14TB drives at a time when you need to grow your capacity. The ability to grow your RAIDZ vdevs will come eventually, but until then, the recommended method to expand an OpenZFS pool is to add a new vdev of the same width and capacity as the pool is currently using. Building out your pool with future expansion needs in mind is highly recommended!
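
To make the expansion math concrete, here's a rough usable-capacity sketch for the example layout above (hypothetical figures; it ignores ZFS metadata overhead, padding, and the usual advice to leave free space):

```python
# Rough usable capacity for a pool built from identical RAIDZ vdevs.
# Ignores ZFS overhead and free-space headroom, so treat the output as a ballpark.
def raidz_usable_tb(drive_tb, vdev_width, parity, num_vdevs):
    data_drives = vdev_width - parity          # RAIDZ2 -> parity = 2
    return drive_tb * data_drives * num_vdevs

print(raidz_usable_tb(14, 8, 2, 1))   # 84  -> one 8-wide RAIDZ2 vdev of 14TB drives
print(raidz_usable_tb(14, 8, 2, 2))   # 168 -> after adding a second identical vdev
```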

7200 RPM drives vs. 5400 RPM drives

When shopping for hard drives, you'll notice some run at 7200 revolutions per minute (RPM) and some run at 5400 RPM. Usually, 7200 RPM drives will be a bit more expensive than 5400 RPM drives. They'll also advertise better performance than the slower-spinning variants: lower latency I/O and higher throughput. What they don't typically advertise is the increased power draw and heat output of these faster drives. More heat output means you'll need more fans running at higher speeds and producing more noise.

If you're running a small pool of drives, it might be worth the increased cost, power draw, and heat output to get slightly faster drives. The latency differences between 7200 and 5400 RPM drives will likely be negligible, but the increase in throughput will be a bit more noticeable and will scale (roughly) with the number of drives in the pool.

If we look at a large pool of drives, the percentage increase of the cost, power draw, and heat output all stay the same but the actual numbers will be much larger; a 20% increase of 50W is manageable but a 20% increase of 500W might be unacceptable. At the same time, a larger pool of drives will naturally have more throughput because of how ZFS aggregates drives and it's more likely you'll have throughput bottlenecks elsewhere in the system (like on the NIC, CPU, or in the natural protocol limits).

You'll have to think about your individual use case and decide whether 7200 RPM drives are worth the tradeoffs but be aware that if you start your system with 7200 RPM drives, it's highly recommended to stick with 7200 RPM drives when you expand. Mixing 5400 and 7200 RPM drives in a single pool can cause weird and unpredictable performance.

10K and 15K RPM SAS drives

10K and 15K RPM SAS drives are obsolete. They suck a ton of power and put out a huge amount of heat for only modest performance gains over slower NL-SAS (7200 RPM) drives. If you can find a really good deal on 10K/15K SAS drives or have a bunch on hand, they'll certainly work, but these drives are usually priced such that SSDs make a lot more sense.

Shucking External Hard Drives

Many home TrueNAS users will buy external USB hard drives and remove the SATA disk from the external enclosure in a process called "shucking". External USB hard drives are often far less expensive than standard internal drives at a given capacity and will usually go on sale around the Thanksgiving and Christmas holidays. You can check the latest prices here.

Shucked external hard drives will have a much more limited warranty than internal drives but the price difference will usually offset this. For example, you can get a 14TB external WD Easystore drive for under $200 while an internal 14TB WD Red Plus is usually $250. Let's assume the Easystore has no warranty while the Red Plus comes with a 3 year warranty. The worst drives on Backblaze's drive report have an annualized failure rate of less than 5%, so we'll use that for our calculations. If we buy 100x WD Easystore drives and 100x WD Red Plus drives, we would expect ~5 from each batch to fail per year. Our total cost for the Easystore drives is $20k and for the Red Plus it's $25k. After 3 years, we expect to have to purchase 15x WD Easystore drives (5 per year) at a cost of $3,000, making our total investment in the Easystore drives $23k. WD replaces the 15x failed Red Plus drives for free but our total investment is still $25k, or $2k more than if we had bought cheaper drives to begin with and just paid to replace them.

Obviously, it's not common for most users to buy 100 drives at a time, but the logic above still holds if you're only buying 1 drive at a time. You can estimate the value of the warranty by multiplying the expected AFR by the warranty length and the cost of the non-warrantied option. For the example above, that's 0.05 * 3 * $200 = $30. The 5% AFR we used is particularly bad; typical AFR is 0.5% to 1%, in which case the expected warranty value is $3 to $6.
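
If you want to plug in your own numbers, the arithmetic above boils down to a one-line function (the AFR, warranty length, and price below are just the hypothetical figures from the example):

```python
# Expected value of a warranty: chance of failure over the warranty period
# times the cost of replacing the non-warrantied drive out of pocket.
def warranty_value(afr, warranty_years, replacement_cost):
    return afr * warranty_years * replacement_cost

print(round(warranty_value(0.05, 3, 200), 2))   # 30.0 -> the worst-case $30 figure above
print(round(warranty_value(0.01, 3, 200), 2))   # 6.0  -> ~$6 at a more typical 1% AFR
```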

Some users have suggested that the "white label" drives (so-called because the sticker labels on shucked external drives are white rather than red) are of an inferior quality or somehow binned down, but the only evidence to back these claims has been anecdotal.

All Flash Pools


If you want to deploy an all-flash pool on TrueNAS, there are some additional considerations to bear in mind. In sufficient quantities, SATA and SAS SSDs will perform just about as well as NVMe SSDs under ZFS. The current advantage to NVMe under ZFS is that you can reach the performance ceiling with far fewer drives. For example, you may be able to get the same performance out of four NVMe SSDs as you would out of 20 SATA SSDs. As OpenZFS development continues, you may start to see more linear scaling with larger NVMe pools, but for now, you hit the ceiling pretty quickly.

Before selecting an SSD model to deploy, check the native page size of the disks you're considering. Some SSDs have a much larger page size that can cause issues when those drives are deployed in a RAIDZ layout. RAIDZ splits incoming write data evenly between all drives in the vdev and it's possible for that data to get divided into chunks smaller than what the SSD can gracefully handle. This causes write amplification and will hurt random write performance. You can tune around this property by increasing the recordsize or volblocksize of your datasets (see the ZFS tuning section of this guide) but that may not be ideal for some applications.
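
As a simplified illustration of why this happens (this ignores parity sectors, padding, and compression, so it's only a rough model), you can estimate the per-drive chunk size a RAIDZ vdev produces and compare it to the SSD's native page size:

```python
# Simplified model: approximate per-drive data chunk when a record is striped
# across a RAIDZ vdev. Real ZFS allocation is more complex, but this shows how
# small records on a wide vdev can fall below an SSD's native page size.
def per_drive_chunk(recordsize_bytes, vdev_width, parity):
    return recordsize_bytes / (vdev_width - parity)

page_size = 16 * 1024   # hypothetical 16KiB native page size
for rs in (128 * 1024, 32 * 1024):
    chunk = per_drive_chunk(rs, 8, 2)   # 8-wide RAIDZ2
    status = "below page size -> write amplification" if chunk < page_size else "OK"
    print(f"recordsize {rs // 1024}KiB -> ~{chunk:.0f} bytes per drive ({status})")
```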

Boot Drives


TrueNAS needs one or more drive(s) dedicated to the boot pool. The boot pool device(s) can't be used for data storage and you can't (or really shouldn't) boot from any of the devices in your data pool. Previously, a simple USB flash drive was the recommended boot device for TrueNAS (which was at the time called "FreeNAS") systems. Since then, that guidance has shifted. Even seemingly high-quality USB flash drives have reliability issues and cause more headaches than they're worth.

A small SSD is now the recommended choice for a boot device. This could be a simple 2.5" 240GB SATA SSD or a small M.2 drive. You can find small SSDs in either form factor from WD or Intel in the $30-35 range. The downside to using a drive like this as a boot device is that you lose either a SATA or an M.2 slot that could otherwise be used for the pool. The tradeoff is a far more reliable boot volume than you would get with a USB drive.

You have the option of mirroring the boot pool between two (or more) devices. This will minimize service disruption in the event of a boot device failure for highly-critical workloads. If you can't sacrifice a second SATA or M.2 slot, you can run a single boot device and make regular TrueNAS config backups. In the event of a boot device failure, you can quickly reinstall and import your config to get up and running in under an hour. Boot device failure (regardless of whether it's set up in a mirror) should never impact data pool integrity.

Enterprise users may be familiar with SATA-DOMs: these are effectively thumb drives with SATA interfaces instead of USB interfaces. They can be installed inside of a chassis directly into supported SATA slots to act as boot devices and might seem at first glance like the ideal choice for such a task. SATA-DOMs are however notoriously unreliable, almost as unreliable as USB flash drives. For that reason, iXsystems has moved away from using mirrored SATA-DOMs as boot devices in all its Enterprise systems in favor of a single M.2 drive. Internal testing has demonstrated that even a single high quality M.2 SSD is far more reliable than mirrored SATA-DOMs. The same would be true of a single high-quality SATA or SAS SSD.

L2ARC


The level 2 adaptive replacement cache or "L2ARC" is ZFS' second caching tier (the primary being the ARC which lives in RAM). Blocks that are about to fall out of the primary ARC get copied into the L2ARC so they can still be served from a fast device rather than from the main pool.

As with the primary ARC, L2ARC will be most useful if your workload is frequently re-using the same data. For example, on a collaborative office file share where the same set of documents or media files is passed between multiple people, those files will likely end up with a copy in the primary or secondary ARC. If you're building a file server to support a small team of video editors that all work on a single project at a time, consider adding an L2ARC large enough that all the assets of that project will remain cached.

Other workloads will see limited benefit from a large ARC and may not see any benefit at all from an added L2ARC. A storage system for a large security camera setup will only re-access data if footage needs to be reviewed; unless the L2ARC is very very large, that data is likely to have fallen out of the cache anyway. Similarly, a home NAS supporting one or two users will likely not see the same data re-accessed over and over (or at least not enough that it won't fit in the main ARC), so an L2ARC may not be worthwhile.

If you use an L2ARC on an SSD pool, it needs to be much much faster to provide a tangible benefit to performance. iX has done extensive testing with a TLC SAS SSD-based pool and an NVMe-based L2ARC and has found that the performance benefits of such a setup are almost negligible, certainly not enough to justify the considerable expense of extra NVMe drives.

Will Too Much L2ARC Hurt Performance?

You may come across users claiming that adding L2ARC can actually reduce performance in some cases because OpenZFS uses some space in RAM to manage the secondary cache. While this is true in theory, an L2ARC will only cause performance issues in the most extreme cases: a huge set of L2ARC drives with only a meager amount of memory.

Every block stored in the system's L2ARC needs a small entry in a table in main memory. On the current version of OpenZFS, each of these entries takes up 88 bytes of RAM. As we'll discuss later on in the ZFS tuning section, blocks are dynamically sized up to the recordsize value set on each dataset. The default recordsize value on ZFS is 128KiB.

Taking a fairly extreme example, let's assume the dataset's average block size is 32KB. If we have 10TB of L2ARC attached to the pool, we can fit 312,500,000 of those 32KB blocks in the L2ARC (10TB ÷ 32KB = 312.5 million). Each of those blocks gets an 88 byte entry in RAM, so we have ~25.6GiB of RAM dedicated to tracking L2ARC (312,500,000 * 88 bytes ≈ 25.6GiB). While that is a lot of RAM for L2ARC, unless the system only has 32GiB of memory, you should not see a dramatic performance decrease. A system with 64GiB of RAM should have the performance impact of the limited ARC size more than offset by the 10TB of L2ARC, especially if proper ARC size tuning is applied to the system.

If we set a higher recordsize value of 1MiB (as one might do if they're storing mostly large media files) and assume the average block size is 512KB, we can rerun the numbers with 10TB of L2ARC and find that we're only consuming a piddly 1.6GiB of RAM.
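
Both figures come out of the same simple formula, shown below as a quick calculator (the 88-byte header size is the value cited above and may change between OpenZFS versions):

```python
# RAM consumed by L2ARC headers: roughly one 88-byte entry per cached block.
def l2arc_header_ram_gib(l2arc_bytes, avg_block_bytes, header_bytes=88):
    blocks = l2arc_bytes / avg_block_bytes
    return blocks * header_bytes / 1024**3

TEN_TB = 10 * 1000**4
print(round(l2arc_header_ram_gib(TEN_TB, 32 * 1000), 1))    # ~25.6 GiB with 32KB blocks
print(round(l2arc_header_ram_gib(TEN_TB, 512 * 1000), 1))   # ~1.6 GiB with 512KB blocks
```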

The L2ARC is assigned to a specific pool on ZFS. Unless you partition out a single SSD, there is no way to share a single L2ARC disk between two pools. The official OpenZFS documentation recommends against partitioning devices and instead presenting a whole disk to ZFS, so if you run multiple pools and they all need L2ARC, plan to run multiple SSDs.

SLOG Device


The ZIL and SLOG are arguably the most misunderstood concepts within ZFS. The section titled "The ZIL, the SLOG, and Sync Writes" in the ZFS portion of this guide will give you a better understanding of the purpose of a SLOG device and help you decide if you should add one to your pool. To summarize that lengthy section, you really only need a SLOG if you're running NFS, iSCSI, or S3. If you're only using SMB, you don't need a SLOG unless you change the sync settings on the shared dataset.

A good SLOG device will have very low write latency and high NAND endurance. If your application generates a lot of sync writes, the SLOG device will be getting hammered with tiny writes all day long. A drive with low NAND endurance could wear out within a few years or even months. You'll also want to check how the drive handles sudden power failures. Most modern consumer drives from reputable manufacturers will accurately inform the host system when a write was fully committed to non-volatile NAND but it's worth doing your own research. Try searching for your drive model plus "ZFS SLOG" to see what other people have said about it.

Because the ZIL only holds a few seconds of write data, the SLOG drive does not need to be huge. A 128GB drive is more than enough (in fact, a 32GB drive is also more than enough). You might even consider "over-provisioning" an SSD, a process that resizes the usable portion of the SSD's NAND so it has more spare cells to swap in as active cells wear out. You can safely take a larger drive down to 16GB and know that it will have tons of spare NAND cells to swap in.
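
As a rough sanity check on sizing (this assumes the common rule of thumb of holding a few transaction groups of in-flight sync writes at the default ~5 second transaction group interval; your workload and tuning may differ):

```python
# Rough SLOG sizing sketch: the ZIL only ever holds a few seconds of sync writes.
# Assumes the default ~5 second transaction group interval and a small safety
# factor of a few outstanding txgs; actual needs depend on workload and tuning.
def slog_size_gb(link_gbit_per_s, seconds_per_txg=5, txgs_held=3):
    bytes_per_second = link_gbit_per_s / 8 * 1e9
    return bytes_per_second * seconds_per_txg * txgs_held / 1e9

print(slog_size_gb(10))   # ~19 GB for a fully saturated 10GbE link
print(slog_size_gb(1))    # ~2 GB for gigabit -- even a 16GB over-provisioned drive is plenty
```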

The now-retired Intel Optane drives made an ideal choice for a SLOG. If you can find a 280GB 900p (either AIC or U.2) on the used market at a reasonable price, it will rival the performance of other drives at many times the cost.

Host Bus Adapter (HBA)


A NAS is pretty useless without drives to store data. Smaller NAS systems can just use onboard SATA ports for their drives but larger systems need more ports (sometimes a lot more ports). This is where HBAs come in.

What is an HBA?

Users unfamiliar with the enterprise computing space are often overwhelmed by the huge array of new terms and acronyms when they first enter the TrueNAS world. The idea of a host bus adapter (or "HBA") is one of those things that can seem very intimidating at first but is really very simple.

In the context of TrueNAS and OpenZFS, an HBA is what provides all the ports for your drives to plug into. HBAs can also provide connectivity for NVMe drives and Fibre Channel storage and a range of other technologies, but when discussing TrueNAS, an HBA will almost always provide SAS or sometimes SATA connectivity to a bunch of drives.

SAS HBAs (the variety of HBA we'll discuss here) most commonly come in the form of a PCIe add-on card but some server motherboards will have an integrated HBA to provide more onboard ports for drives. HBAs will either have internal ports for connecting drives inside the main chassis, external ports for connecting drives from other chassis back to the main system, or sometimes a combination of internal and external ports.

You may wonder if you even need an HBA. If your motherboard has enough SATA ports on it for all the drives in your boot and data pools, you don't need an HBA. If you have more drives than you do ports, you'll want to look into an HBA. You can run multiple HBAs if you need to connect a lot of drives.

Most home TrueNAS users will run SATA disks because they're less expensive than SAS disks. Thankfully, SATA disks work just fine in SAS drive bays and with SAS HBAs.

An Overview of SAS

SAS HBAs will usually have somewhere between two and six physical SAS ports on them. Somewhat confusingly, each physical SAS port carries four separate SAS "channels", so you might see an HBA with two physical ports called an eight-port HBA.

You'll commonly find three versions of SAS when shopping for NAS components. SAS-1 is very old and not recommended as it has some serious limitations. SAS-1 runs at 3Gbit/s per channel, or 12Gbit/s per physical SAS port.

SAS-2 is a great starting point for most home users. SAS-2 runs at 6Gbit/s per channel or 24Gbit/s per physical port. You can find a wide range of new and used SAS-2 HBAs, backplanes, and cables at very reasonable prices.

SAS-3 is the latest generation of SAS that you'll commonly come across online. It runs at 12Gbit/s per channel, 48Gbit/s per port and is significantly more expensive than SAS-2 equipment. It might make sense to use SAS-3 if you're building an all-flash system or you need to connect a large number of disks with a single cable using SAS expanders (discussed below).

If you want to attach SATA drives directly to your HBA, you can get breakout cables that have a single SAS connection on one side and four SATA connections on the other. If you have a rackmount chassis, you may instead slot the drives into bays on the front of the chassis. These drives mate into a large circuit board called a backplane. One side of the backplane (the side facing the front of the chassis) will have all the connectors that the drives slot into and the other side will have ports to connect your HBA and other ports for power.

The connections on the inside of the backplane for your HBA could be a bunch of individual SATA ports (expect 24 SATA ports if you have a 24 bay chassis), one SAS port per four drive slots (six SAS ports on a 24 bay chassis), or just one or two SAS ports for all the drives. The first two examples where you have one port or channel per drive refer to "direct attach" backplanes: there is a one-to-one mapping of drives to SAS channels on the HBA. The example where you have just one or two SAS ports for all the drives will use a SAS expander: a chip on the backplane itself that acts as a SAS splitter. With a SAS expander on the backplane, you still have the same bandwidth between the backplane and the HBA (24Gbit/s for SAS-2 and 48Gbit/s for SAS-3), but that bandwidth is split out to all the attached drives.

If you are using a backplane with an onboard SAS expander, you should consider how much bandwidth your drives will need to avoid a bottleneck. With 24 drives running at ~150MB/s each, you have 1.2Gbit/s per drive, or 28.8 Gbit/s total. In this example, a single SAS-2 cable at 24Gbit/s of bandwidth would be a slight bottleneck. However, if that system is connected via a 10Gbit/s Ethernet link, the slight SAS-2 bottleneck really wouldn't be an issue.
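
The same arithmetic works for any drive count, drive speed, and SAS generation; here's the example above as a quick sketch:

```python
# Quick check: will a single SAS uplink from an expander backplane bottleneck the drives?
def expander_uplink_check(num_drives, drive_mb_per_s, sas_gbit_per_lane, lanes=4):
    drive_total_gbit = num_drives * drive_mb_per_s * 8 / 1000   # aggregate drive throughput
    uplink_gbit = sas_gbit_per_lane * lanes                     # one physical SAS port = 4 lanes
    return drive_total_gbit, uplink_gbit

drives, uplink = expander_uplink_check(24, 150, 6)   # 24 HDDs at ~150MB/s on a SAS-2 uplink
verdict = "slight bottleneck" if drives > uplink else "no bottleneck"
print(f"drives ~{drives:.1f} Gbit/s vs uplink {uplink} Gbit/s -> {verdict}")
```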

Hardware RAID Card Instead of an HBA

While you technically can use a RAID card in TrueNAS, it's strongly discouraged. They introduce unnecessary complexity and additional points of potential failure between ZFS and the drives. ZFS is a software RAID solution and is designed to connect directly to the drives. If you use a hardware RAID card to manage your disks, you'll also be missing out on a lot of the data-protection features ZFS has to offer. If you can set the RAID card in passthrough or IT mode, it may work, but do some research before putting critical data on your pool.

HBA Recommendations

The go-to starter HBA in TrueNAS is the IBM M1015. It's a SAS-2 HBA with two physical ports (eight logical ports) that uses the very well-supported LSI SAS2008 chipset. You can find these cards on eBay for as little as $20. The M1015 will usually ship with RAID firmware pre-installed; per the section above, this is not what we want. Thankfully, you can download the IT mode firmware online and reflash the card so it's fully compatible with TrueNAS and ZFS. Details on the reflashing process can be found on STH.

If the M1015 won't fit your needs, you will want to look at other cards from LSI/Broadcom/Avago (this is one company that seems to go by many names). You'll need to decide how many ports you need and which SAS generation you would like. From there, you can narrow in on appropriate cards. The last part of LSI's HBA model name will indicate how many ports are on the card and whether they are internal or external ports. For example, the LSI 9305-16i has 16 internal ports (four physical SAS ports) while the LSI 9300-8i8e has eight internal ports and eight external ports.

If you purchase a non-LSI/Broadcom/Avago HBA, search around to ensure it's compatible. Cards from RocketRAID and other smaller vendors have limited driver support in TrueNAS. STH has an excellent article with more details on selecting an HBA that you can find here.

Network Interface Controller (NIC) Card


As you might infer from the description "network attached storage", the network will be the primary method of interacting with your NAS. The network interface controller is obviously what facilitates this so picking a high quality NIC is important.

Pretty much every motherboard made in the last 20 years includes an onboard network port and NIC. In most cases, it's perfectly fine to use these onboard ports in TrueNAS but it is worth checking the exact NIC chip used on the motherboard to ensure it's compatible with your intended TrueNAS version. Cheaper motherboards will commonly use cheaper NICs from vendors like Realtek. Driver support for these NICs may be limited or non-existent in TrueNAS. You can usually find out what type of NIC your motherboard has by looking at manufacturer documentation.

Modern gaming motherboards might have onboard 2.5GbE or 5GbE NICs (collectively called NBASE-T). Support for NBASE-T NICs is extremely limited in TrueNAS so these ports will likely run at 1GbE if they run at all.

If you purchase an add-in card (AIC) NIC, you'll want to source one from a major vendor like Chelsio, Intel, or Mellanox. In general, Chelsio and Intel NICs have excellent driver support on all TrueNAS versions and Mellanox cards are usually also fairly well-supported. Older cards will have better driver support than newly released cards. STH has an excellent (if a bit outdated) guide on NICs in TrueNAS.

Note that fancy NIC features like TCP offload and RDMA are typically not supported on TrueNAS. The NICs themselves will work fine but you won't be able to use the extra features so it's not worth paying more for these cards.

10 Gigabit Network Technologies

When venturing into the world of 10 gigabit network connectivity for the first time, you'll undoubtedly come across two different interface options: one that looks like a normal Ethernet port and runs over a normal-looking copper cable and another kind of weird looking one that runs over fiber optics and requires special transceiver modules (of which there are seemingly hundreds of options). In an effort to demystify this set of technology, we'll cover the basics of these two options here.

You can find many 10GbE ("10 gigabit Ethernet") options in an RJ45 form factor ("RJ45" is the official name for the normal looking Ethernet port) so many users will prefer to stick with what they know and run their 10 gig using this technology. 10GbE RJ45 does use a lot more power than other 10 gig networking options so equipment using it will require more cooling. For a 2 port NIC, it's not a huge difference, but for a network switch, it really starts to add up. This is one of the reasons that larger 10GbE RJ45 switches can be more expensive than alternative options.

10GbE RJ45 runs over twisted-pair copper cables like Category 6 and Category 6a (called "Cat 6" and "Cat 6a" for short). There are lots of different shielding options on Cat 6 and 6a cables but if you're just doing short runs up to a couple dozen feet, you can stick with whatever is cheapest. You almost certainly have some older Cat 5 or Cat 5e cables sitting unused in a drawer in your house and while these may work at 10 gigabit for very short lengths, it's not guaranteed. If you have Ethernet cables run through the walls in your house, don't count on them being able to reliably run at 10 gigabits per second. Many houses built today (even really fancy ones) use the much older Cat 5e for in-wall runs because of how much cheaper the cabling is. You'll sometimes find Cat 7 and Cat 8 cables offered by online vendors but you can generally skip these; Cat 6a already handles 10GbE runs up to ~100 meters.

The other 10 gigabit technology you'll come across is SFP+ (make sure you include the "+"; just plain "SFP" is the 1 gigabit version). SFP+ is designed to run over fiber optic cables. It relies on fiber transceivers to translate the electrical signals from the server to pulses of light that can be passed over the fibers and then to translate them back to electrical signals again on the other side. The transceivers are little modules about the size of a small pack of gum that slot into the SFP+ port (sometimes called an "SFP+ cage"). The fiber cables themselves then plug into the exposed part of the transceiver. Fun fact: "transceiver" is just the words "transmitter" and "receiver" smashed together, just like "modem" is a combination of "modulate" and "demodulate".

SFP+ supports a few different types of transceiver technologies that you might use depending on how long your cable runs are. Transceivers will most commonly be labeled either "SR" for "short reach" or "LR" for "long reach". The "short" in "short reach" is very much relative as SR transceivers support runs up to 400 meters over good multimode fiber. If you need longer runs than that, "LR" has you covered up to 10 kilometers assuming you're using high quality fiber cables. You may also see "ER" and "ZR" options which can stretch to 40 and 120 kilometers respectively. In brief, unless you're connecting systems in neighboring counties, you'll want to use "SR" transceivers.

Fiber transceivers will usually have a certain compatibility coding embedded in their tiny processor chips. If you're buying non-OEM transceivers, you'll usually be able to select for compatibility with a broad range of hardware vendors like Cisco, Juniper, Mellanox, etc. Although less common today, network switch vendors can use this coding to enforce vendor lock-in so that you can only buy transceivers from them at a huge cost markup. This lock-in practice is less common on NICs but it's still a good idea to check if your switch and/or NIC manufacturer requires the use of name-brand optics. If they do enforce lock-in, you can usually still buy third party transceivers and pick the appropriate coding to save some money. If your vendor does not enforce lock-in, you can pick any coding option.

You may come across 1G or 10GBASE-T SFP+ transceivers that effectively convert an SFP+ port to an RJ45 port. These could be a good choice if you have absolutely no other connectivity options, but they're far more expensive than standard short reach transceivers and they will consume a lot more power. Many switch vendors advise against populating adjacent SFP+ ports with these BASE-T transceivers because the heat can burn out portions of the switch.

The fiber cables themselves will most commonly be terminated with "Lucent Connectors" or "LC". If you're buying new SFP+ transceivers and fiber cables, the transceiver should have LC female ports and the cable should have LC male plugs on either side. If you're using older equipment, you may see fiber cables terminated with Subscriber connectors ("SC"); these need SFP+ modules with SC plugs on them. You can find fiber cables with SC on one side and LC on the other if you need to interconnect older and newer gear.

When shopping for fiber cables, there are two major categories: multimode and single mode. Multimode fiber (or "MMF") uses a thicker fiber core and can carry multiple light modes (signal paths) down the same strand. As a tradeoff for carrying multiple modes over the same strand of fiber, the total effective length of multimode cables is limited. When working with short-reach "SR" transceivers, you'll almost always use multimode fiber patch cables. Single mode fiber (or "SMF") cables only pass a single light mode at a time and can thus reach much greater distances, potentially over 100 kilometers. If you're using long-reach "LR" transceivers, you'll likely use single mode cables.

Multimode cables will almost always be set up as a "duplex" assembly, meaning there are two physical fiber cables running right next to each other and they're both terminated into a single housing. One cable in the assembly will be used to transmit data and the other will be used to receive data. Single mode cables will usually be "simplex", meaning there is only one physical fiber cable. Single mode cables are typically only used with LR transceivers to support very long cable runs. Some LR transceivers have duplex fiber connectors, meaning you'll need two simplex single mode cables to send and receive data. Other LR transceivers are "bidirectional" or "BiDi" and support sending and receiving data over a single simplex fiber cable.

Because we still don't have enough complexity in all of this, both multimode and single mode fiber cables come in a variety of different "grades". Multimode cables can be found in OM1 through OM5 grades and single mode cables can be found in OS1 and OS2 grades. A higher number means a longer maximum length and a higher supported bitrate (as well as a higher price).

OM1 and OM2 are older standards and support 10Gb data transmission via standard SR optics of up to 33 and 82 meters respectively. OM3 supports up to 300 meters and both OM4 and OM5 support up to 400 meters. Unless you need the extra length, OM3 cables are the sweet spot for normal use as the price difference between them and OM1/OM2 is usually negligible. If you're running single mode cable to support LR, ER, or ZR transceivers and kilometers-long runs, OS2 has largely replaced OS1.

One final option that can potentially simplify your set of available connectivity choices is the direct-attach copper cable (or "DAC" cable). A DAC cable consists of a thick copper cable with built-in transceivers on either side. They're available in lengths up to a few meters and are designed for runs inside of a single rack or maybe crossing over to an adjacent rack. DAC cables draw very little power but could make dealing with vendor compatibility and lock-in difficult: you won't find a DAC that has Cisco coding on one side and Mellanox coding on the other. The active optical cable (or "AOC") is a close cousin to the DAC in that it also comes as a complete assembly with built-in transceivers on either end but uses a real fiber optic cable between those transceivers. AOCs are available in much longer lengths and are thinner and easier to route, but are far more expensive than either DACs or transceivers plus patch cables.

If you're a bit overwhelmed by the huge array of possible combinations, it's understandable. Start by checking if your switch and/or NIC require special vendor coding, then get some short-reach SFP+ transceivers that are either specified for your switch and NIC vendors or use the generic coding option. Make sure to buy the transceivers in pairs: one for the switch side and another for the server side. Next, find an OM3 multimode fiber patch cable with LC terminations of a suitable length. Alternatively, you can just buy a DAC cable but be aware of potential compatibility and cable routing issues.

Beyond 10 Gigabit

Ethernet supports several higher-speed options beyond 10 gigabit. The next step up is 25GbE which runs through an SFP28 port. SFP+ and SFP28 ports are physically identical and many SFP28 ports will run at either 10 or 25 gigabits per second depending on whether you insert an SFP+ or an SFP28 transceiver. All of the information relevant to SFP+ transceivers applies to SFP28 transceivers: they're available in SR, LR, and ER, they most commonly have duplex LC fiber connectors (but you can get BiDi transceivers that run everything via a single simplex cable), and you can find fiber cables in several different grades. OM3 multimode fiber will carry 25GbE up to 70 meters while OM4 and OM5 will handle up to 100 meters. OS2 single mode cables will still support runs of multiple kilometers if that's needed. You can also find SFP28 DAC cables and AOCs.

40 and then 100 gigabits are next up after 25. The most common 40 gigabit Ethernet standard is QSFP+ and the most common 100 gigabit standard is QSFP28. The "Q" in both of these stands for "quad" and refers to the fact that the standard is basically four SFP+ or SFP28 connections running in parallel. Like SFP+ and SFP28, QSFP+ and QSFP28 share a common port that can sometimes support both speeds depending on the installed optic. You can find short reach (SR4), long reach (LR4), and extended reach (ER4) transceivers for both QSFP+ and QSFP28. Interestingly enough, you can also find BiDi QSFP+ and QSFP28 transceivers that let you run 40 and 100 gigabits over normal duplex LC-terminated fiber like you would use for 10 and 25 gigabit.

Instead of tiny LC terminators, these 40 and 100 gigabit standards use larger Multi-Fiber Push On ("MPO") connectors (sometimes referred to as "MTP" cables because that's the brand name of the most common MPO connector; think "Kleenex" versus "tissue"). All MPO cables package many fiber strands into one, so thankfully there are no duplex options here. You'll find OM4 and OM5 multimode MPO cables (both good for up to 150 meters) as well as OS2 single mode MPO cables. 40 and 100GbE DAC cables are available but the cables are very thick and can be difficult to route effectively. You can also find AOCs that are more expensive but a bit easier to manage.

You may notice that some MPO cables advertise different fiber counts. The most common fiber count is 12-strand (called MPO-12) but you'll also commonly see a 16-strand variant (MPO-16). The more expensive MPO-16 cables are only really necessary if you're working with even faster Ethernet technologies than 100GbE like the 400 Gbit/s QSFP-DD or the 800 Gbit/s QSFP-DD112.

If you're venturing beyond 10 gigabit networking speeds, you should be aware that the maximum sequential transfer speed of a single TCP/IP connection tends to cap out right around 10 gigabits per second. This means that even if you run 100GbE between your beefy all-NVMe TrueNAS and your high-end desktop system, a single SMB transfer is unlikely to run much faster than 1 to 1.2 GB/s. There are a couple of ways to exceed this speed limit, including SMB multichannel and RDMA, but support for these technologies on TrueNAS is still a long way out. iSCSI with round-robin MPIO can also get around this limit by splitting traffic between multiple links, but that isn't suitable for all applications.
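
For a back-of-the-napkin conversion from link speed to real-world transfer speed (protocol overhead varies, so the ~90% efficiency figure here is just an assumption):

```python
# Convert a link speed in Gbit/s to an approximate file transfer ceiling in GB/s.
# Protocol and filesystem overhead typically eat ~10%; this is only a ballpark.
def approx_gb_per_s(link_gbit, efficiency=0.9):
    return link_gbit / 8 * efficiency

print(round(approx_gb_per_s(10), 2))   # ~1.1 GB/s -- the single-stream ceiling described above
print(round(approx_gb_per_s(25), 2))   # ~2.8 GB/s of aggregate bandwidth across multiple clients
```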

The main appeal of faster-than-10-gig networking is the ability to support many users at up to this single-connection limit. If you have a system to support 20 video editing workstations each connected to the switch over 10 gig, you can connect the TrueNAS via 100GbE and (assuming the disks and CPU can keep up) achieve well in excess of 1.2 GB/s.

Motherboard


If you plan to use the onboard NIC and/or onboard SATA ports, you'll want to check that the network and SATA controllers are supported under TrueNAS. Your motherboard manufacturer should have the model name of these controllers somewhere in their documentation.

If you plan to use an HBA and an AIC NIC, you don't have to worry about compatibility of these onboard controllers. Instead, start by looking for a motherboard that is compatible with the CPU you've selected and then narrow in on the feature set you want. If you plan to run a large number of drives, you'll want a lot of PCIe slots and lanes to support all the HBAs.

It's also highly recommended to look for a motherboard that offers IPMI (intelligent platform management interface), sometimes called "lights-out management". IPMI is common in server-focused motherboards and lets you access a special hardware management interface to remotely power on and off the system, check and change fan speeds, and remotely log into the console. You can even remotely make changes to the BIOS settings from the IPMI. Motherboards with IPMI will usually have a simple video out (likely VGA) for initial system configuration. This is nice because you don't need a video card to install your OS.

If you're looking at enterprise motherboards from vendors like Supermicro and Asus, you'll notice that some boards have features like audio out and headers for front panel USB connectors. These motherboards are usually intended for workstations rather than servers. It is recommended to instead look for a server focused board that does not include these extra components, otherwise you'll be paying for features you don't use!

Single Socket vs. Multi Socket

Dual and even quad CPU socket motherboards are fairly common in the enterprise space. Their main advantage is fairly obvious: lots more CPU muscle for your system. A secondary advantage is that these motherboards will usually have a lot more DIMM slots than single socket motherboards: each CPU socket will have its own set of DIMM slots. This means you can run a lot more RAM on multi-socket motherboards, or run the same amount by using a greater quantity of lower capacity DIMMs to save some money.

The drawbacks of multi-socket motherboards are increased cost, increased power and heat, as well as the latency introduced by non-uniform memory access or "NUMA" (when CPU 1 needs to access data in memory attached to CPU 2). NUMA is a very complex topic that you can read more about here, but it's sufficient for our purposes to point out that NUMA is one factor in multi-CPU computing that prevents perfect linear performance scaling as you add more CPUs. In other words, by running a dual-socket system, you will not see double the performance of running the same CPU in a single socket motherboard, and NUMA is one of the reasons why.

If you want to consider deploying a multi-socket system for TrueNAS, review the CPU section in this wiki for more information.

Chassis


For smaller builds, a normal desktop case works great as a home for your TrueNAS. Once you reach 10 or 12 hard drives, it becomes increasingly difficult to find a desktop case to accommodate all your hardware. It's at this point you might consider using a rackmount chassis instead. ("Chassis" is pronounced "cha-see". If you have multiple chassis, it's spelled the same, but it's pronounced "cha-sees".)

Rackmount chassis have standardized on a 19 inch width and a height measured in "rack units". One rack unit ("RU" or just "U") is 1.75". A typical 2U chassis will hold 12 hard drives, a 3U 16 hard drives, and a 4U 24 hard drives. Denser chassis are available (including monstrous 4U chassis that hold a whopping 102 drives) but those are usually reserved for more specialized applications.

Even if you don't have 24 drives to start, a 4U chassis is worth considering for a few reasons. A 4U chassis is easier to cool since you can fit larger 120mm or even 140mm fans inside it. 2U chassis can only fit 80mm fans which need to turn at very high RPMs to cool your components. You can also use a lot of standard desktop CPU coolers in a 4U chassis and full-height PCIe cards will fit just fine. Finally, a 4U 24-bay chassis leaves room for pool expansion when you inevitably fill up your initial TrueNAS pool.

The Supermicro 846 is a great chassis and can sometimes be found on the used market for reasonable prices. The 846 supports a wide range of backplanes which can be swapped out if needed. The backplane is the large circuit board that sits behind all the drive bays and has all the connections necessary to power all the drives and provide data connection points to the motherboard or HBA. If you purchase an 846 chassis, make note of what backplane it includes in case you have to purchase a different backplane separately. Here's a quick overview of the 846 backplane options:

The -846A backplane is a good choice if you want to keep things simple and don't mind using 6x ports on an HBA or set of HBAs to connect all your drives. The SAS2-846EL1 or SAS3-846EL1 are a good choice if you want to limit the number of HBA ports you're consuming; just make sure to buy the backplane that matches the SAS version you're using. Avoid the SAS1-846EL1 and SAS1-846EL2 models as they're woefully obsolete, and try to avoid the -846TQ unless you're a masochist. The SAS2/3-846EL2 models are not necessary on TrueNAS as the OS no longer supports SAS multipath setups.

It's possible to swap some of the fans in the 846 and do a few other simple modifications to keep noise output to a minimum. If you have access to a 3D printer, you can even print out a front fan bezel for more direct cooling of the front bays. You can find more details on the 3D printed fan bezel for the 846 here. More details on possible modifications to the 846 for noise management can be found in the YouTube video here.

Supermicro also makes a 36-bay chassis called the 847 with 24 bays up front and 12 bays in the rear. This may seem like an excellent option but be aware that the rear 12 bays are very difficult to cool. You'll end up needing a bunch of 80mm fans on the lower shelf running at high speed to keep reasonable disk temperatures back there. You can sometimes find an 847 for a lower price than an 846 in which case you can potentially go with the 847 and only use the front 24 bays. Be aware that you only have about 2U of space in the motherboard tray, so you'll have to get a low profile CPU cooler and use half-height PCIe cards. 846 backplanes will work in the 847 and 848.

If you don't want to go Supermicro for your rackmount chassis, you can look at the Norco RPC-4220 or the RPC-4224. They're a bit less expensive than the 846 but the build quality is a significant step down. If you do end up getting a rackmount chassis but don't have a server rack, you can remove one of the rack ears and set the server on its side, or look into building a "Lack Rack" using an Ikea LACK table. Search around on Google for some ideas. If you have enough space for it, you can also get an open-frame rack for a few hundred dollars.

Power Supply


First and foremost, make sure you're using a high quality power supply with enough wattage to keep all your components fed. Just as with PC building, a poor quality power supply can cause all sorts of issues and even do real damage to other system components if it fails. If you're building in a standard desktop case, make sure you have enough SATA power connectors to attach all your drives.

Some rackmount chassis like the Supermicro 846 discussed above will come with support for dual power supplies. While you can technically rip out all the componentry and housing built into the chassis for the dual PSU setup and use a normal ATX power supply instead, it's not advisable. Dual PSUs are a nice feature for server systems because power supplies fail surprisingly often, and redundancy lets the system keep running while you swap out the failed unit.

Supermicro makes quiet power supplies with an -SQ designation in the model name. That is worth considering if noise is an issue. The full model name for the 920W quiet PSU is PWS-920P-SQ; this should be more than enough for a basic TrueNAS setup and 24 drives.

Uninterruptible Power Supply


While ZFS is designed to prevent data corruption in the event of a sudden power failure, it's best to avoid testing that feature whenever possible. An uninterruptible power supply ("UPS") or battery backup system will maintain power to your system even when the mains power goes out.

Every UPS has a power rating indicating how large a load it can support. You'll want to determine how much power your TrueNAS system will draw and then select a UPS that can handle that. It's common to see UPSs with power ratings listed in VA or volt-amps. Without diving too deep into electrical engineering concepts, this refers to an apparent power figure that accounts for the oscillating voltage and current waves going out of sync or out of phase with each other. The more those waves go out of sync, the more the load will appear to draw, hence "apparent power". UPSs should also list a wattage they're capable of handling, and that number will always be lower than (or at best equal to) the volt-amp number. You'll want to focus on the wattage number as that more directly relates to the system load you'll be running on the UPS.
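
The relationship between the two ratings is just the UPS's power factor (W = VA × power factor). The power factors below are hypothetical; check the spec sheet for your model:

```python
# Watts vs. volt-amps: W = VA * power factor. The wattage rating is the number
# your actual system load needs to fit under.
def usable_watts(va_rating, power_factor):
    return va_rating * power_factor

print(round(usable_watts(1500, 0.9)))   # 1350 -> a 1500VA unit with a 0.9 power factor
print(round(usable_watts(1500, 0.6)))   # 900  -> the same VA rating at a 0.6 power factor
```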

UPSs can be expensive but it's worth factoring them into your budget. Being able to skip over short ~5 minute power outages can save a lot of headaches. UPSs are not designed to power your system for more than a few minutes, just enough time to let it run for a bit and gracefully shut down.

Repurposing an Old Computer


It's very possible to convert an old computer into a great TrueNAS system but there are a few things to check before heading down this path.

Old desktop systems are much better suited for TrueNAS than old laptops. With laptops, you usually can't swap components or expand the system or directly attach more hard drives. While you can technically use external USB hard drives on a laptop-based TrueNAS, this is highly discouraged and will almost certainly lead to issues up to and including data loss.

If you have an old desktop system, you will want to check what type of NIC it uses. TrueNAS does not automatically support every NIC ever made and this is especially true of some of the cheaper NICs used on lower-end desktop motherboards. Your motherboard manufacturer should specify the NIC model used and if you search Reddit and/or the TrueNAS forums, you can read other users' experiences. Note that a given NIC may run better under TrueNAS SCALE than it does under CORE or vice versa.

If you're using onboard SATA ports, you should also check if the integrated SATA controller on the motherboard is supported under TrueNAS. As above, check the specs from your motherboard manufacturer and search Reddit and the forums.

Finally, be aware that your power and cooling requirements might change after stuffing your case full of hard drives. Make sure your PSU can supply enough power (and has enough connections) for your drives. If drive temperatures get too high, run your fans faster or add more fans.