Building ZFS Based Network Attached Storage Using FreeNAS 8
Back in 2004 Sun Microsystems announced a new filesystem which would combine a traditional filesystem with the benefits of a logical volume manager, RAID and snapshots. The result was ZFS (Zettabyte File System). Sun decided to release ZFS as open source and as a result it has found its way into FreeBSD, the operating system at the heart of the open source Network Attached Storage (NAS) solution FreeNAS 8.
Features like snapshots and built-in data integrity (ZFS protects all data with 256-bit checksums that detect silent data corruption and, in redundant configurations, correct it) mean that ZFS is an excellent choice for high-end NAS solutions. High-end means systems with lots of disks and lots of RAM. A FreeNAS-based ZFS system needs a minimum of 6GB of RAM to achieve decent read/write performance, and 8GB is preferred. For systems with more than 6TB of storage it is best to add 1GB of RAM for every extra 1TB of storage.
In this tutorial we will go through the steps to create a ZFS volume, create datasets within it and then look at the advantages of using snapshots. My test system has four hard drives, a 2GB drive for the FreeNAS installation and three 2TB drives for storage.
Starting with a fresh FreeNAS install, open a web browser and enter the address of the FreeNAS server (you can find the address from the console).
Create a ZFS Volume
To make disks available for storage they need to be added as a volume. The three 2TB disks on the test system can be added together as a RAID-Z set. RAID-Z is similar to RAID5 but it doesn’t suffer from a design flaw in RAID5 known as the “write hole.” This is where the RAID set can get into an inconsistent state if the server crashes, fails or loses power at just the wrong moment.
To create the ZFS RAID-Z volume, click the Storage icon in the toolbar below the FreeNAS logo. Click Create Volume, enter a Volume Name (eg. “store”) and click the disks to add to the RAID-Z set. RAID-Z needs a minimum of three disks. On my test system the three disks are called ada1, ada2 and ada3 (note that ada0 is the FreeNAS system disk). Choose “ZFS” and “RAID-Z” and click Add Volume.
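Under the hood, the FreeNAS GUI drives FreeBSD's standard ZFS tools. For reference, on a plain FreeBSD system the equivalent pool could be created from the shell roughly like this (the pool name "store" and device names ada1-ada3 match the example above; run as root, and note that on FreeNAS itself you should use the GUI so the configuration database stays in sync):

```shell
# Create a RAID-Z pool named "store" from three whole disks.
# RAID-Z needs at least three devices; one disk's worth of space
# goes to parity, so three 2TB disks yield roughly 4TB usable.
zpool create store raidz ada1 ada2 ada3

# Confirm the pool is online and healthy, and check its size.
zpool status store
zpool list store
```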
The ZFS volume has now been created. The storage tab lists the configured volumes along with their size, free space and health. To see the drives used in any ZFS volume click the zpool status icon (the last icon in the actions list).
At this point the whole 4TB volume can be shared on the network, or it can optionally be divided into ZFS datasets. A dataset is like a folder on the volume, but it acts like a filesystem in that it supports snapshots, quotas and compression. Once a dataset is created its permissions can be set independently of those of the ZFS volume. This means that several datasets can be created, one for each group of users (eg. sales, developers, and marketing).
Click Create ZFS Dataset and enter a dataset name (eg. “sales”). If you don’t want to implement quotas or enable compression, leave the other fields as they are and click Add Dataset.
The storage tab lists the configured volume and its datasets. Repeat the process for each group; for the "marketing" dataset, I set a quota of 500GB by entering "500g" into the "Quota for this dataset" field on the "Create ZFS Dataset" dialog.
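Again for reference, the same datasets could be created with the zfs command line on a plain FreeBSD/ZFS system (the dataset names follow the example above; the compression property shown is optional):

```shell
# Create one dataset per group under the "store" pool.
zfs create store/sales
zfs create store/marketing

# Cap the marketing dataset at 500GB, matching the quota set in the GUI.
zfs set quota=500G store/marketing

# Optionally enable transparent compression on a dataset.
zfs set compression=lzjb store/sales

# List the datasets with their space usage and quotas.
zfs list -o name,used,avail,quota
```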
Create a ZFS Snapshot
A snapshot is an exact read-only copy of the filesystem at the moment it is taken. Snapshots are quick to create, and a snapshot with no differences from the current filesystem occupies no space (eg. 0MB). As the filesystem changes, the snapshot grows to hold the differences between the current files and those present when the snapshot was taken.
Snapshots provide an easy and efficient way to keep a history of files and allow earlier versions of a file (or even deleted files) to be recovered. FreeNAS supports periodic snapshots which can be configured to be automatically taken at regular intervals (even every 15 minutes) and then automatically purged after a set period (eg. one week).
To manually create a snapshot click the Create Snapshot icon from the list of Available actions (it looks like a black square with a plus sign in the top right corner). Edit the Snapshot Name to something more meaningful (eg. the reason for creating the snapshot) and click Manually Create Snapshot. If you create a snapshot of a volume with datasets, it is best to tick “Recursive snapshot” as this ensures that the snapshot for the volume and its datasets all occur at the same time.
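The GUI actions correspond to the zfs snapshot command. A recursive, manually named snapshot of the volume and all its datasets would look something like this (the snapshot name here is purely illustrative):

```shell
# Take a recursive snapshot of "store" and every dataset below it,
# so all of them are captured at exactly the same instant.
zfs snapshot -r store@before-cleanup

# List all snapshots along with the space each one currently occupies.
zfs list -t snapshot
```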
Rolling Back with a ZFS Snapshot
If the need arises to recover modified or deleted files from a snapshot there are two methods that can be used.
One method is to roll back the entire volume or dataset to the snapshot. This is drastic in the sense that every file is rolled back and nothing is spared. It is useful in classroom settings, where the volume can be rolled back to a known state at the end of every class, or for businesses after a security breach or malware infection, where the volume can be restored to its state before the intrusion. Note, however, that every change (and every new file) made after the snapshot will be lost.
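On the command line this whole-filesystem recovery is the zfs rollback command (the dataset and snapshot names below are illustrative; -r is needed when newer snapshots exist, because rolling back destroys them):

```shell
# Revert the sales dataset to the named snapshot, discarding every
# change made since it was taken. -r also destroys any snapshots
# newer than the one being rolled back to.
zfs rollback -r store/sales@before-cleanup
```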
The second method is to clone the snapshot and make this writable copy temporarily available on the network. Cloned snapshots act exactly like ZFS volumes or ZFS datasets. They can be shared on the network, the permissions can be altered and a further snapshot can even be created. The idea is that the clone can be shared on the network and then once the needed files have been copied off, the clone can be deleted.
To create a clone click ZFS Snapshots (which is just above the “Create ZFS Dataset” icon). Find the desired snapshot and click the Clone Snapshot icon. Click Clone Snapshot on the dialog. Click Active Volumes to see the list of volumes and datasets. At the bottom the cloned snapshot can be seen. This cloned snapshot can now be shared on the network over CIFS, NFS or AFP.
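The clone-and-copy workflow maps onto the zfs clone and zfs destroy commands (the snapshot and clone names below are illustrative):

```shell
# Create a writable clone of the snapshot as a new dataset.
zfs clone store/sales@before-cleanup store/sales-recovered

# ...share store/sales-recovered, copy the needed files off, then:
zfs destroy store/sales-recovered
```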
We have now created a ZFS volume, ZFS datasets and ZFS snapshots, and covered two methods for recovering files from your snapshots.