New Features in ONTAP 9.4

NetApp first created ONTAP over 25 years ago, but innovation hasn't stopped: ONTAP 9.4 brings NVMe, 100GbE, 30TB SSDs, and enhancements to several recently released features.

FabricPool

FabricPool was first released in ONTAP 9.2 as a feature that lets you tier data off to cheaper object storage. Originally, Amazon S3 and NetApp StorageGRID were the available tiers; Azure Blob Storage was added as a tier in ONTAP 9.3.

FabricPool works by running two processes in ONTAP to move 'cold' blocks of data to the object store. The first is a temperature scanner that constantly evaluates the 'temperature' of each block: active blocks are 'hot', and blocks that haven't been accessed in a while are 'cold'. The second process moves cold blocks to the object storage tier, but only if the aggregate containing them is more than 50% full.

Previously, ONTAP had two FabricPool policies: one that moved backup data and another that moved blocks referenced only by Snapshot copies. ONTAP 9.4 adds a new policy that will move any cold block in the volume to object storage, and it also allows the user to specify how long a block must be cold before it becomes eligible to move. This information is also reported back to the storage administrator through the CLI and ONTAP System Manager.
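Switching a volume to the new policy is a quick per-volume change. A rough sketch, assuming an SVM named svm1 and a volume named vol1 (hypothetical names) on an aggregate that already has a FabricPool object store attached:

```shell
# Tier any cold block in the volume, not just Snapshot or backup data
volume modify -vserver svm1 -volume vol1 -tiering-policy auto

# Optionally adjust how many days a block must be cold before it is eligible
volume modify -vserver svm1 -volume vol1 -tiering-minimum-cooling-days 31

# Verify the setting
volume show -vserver svm1 -volume vol1 -fields tiering-policy
```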

NVE Secure Purge

NetApp Volume Encryption (NVE) Secure Purge is important for any enterprise looking to comply with the new GDPR requirements. The goal of secure purge is that deleted data cannot be recovered from the physical media at a later point in time. To accomplish this, ONTAP removes any data from the filesystem that contains remnants of the deleted files, then re-encrypts the leftover data with new keys. This ensures that the deleted data cannot be recovered.
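Operationally, secure purge is driven from the CLI at advanced privilege. A minimal sketch, assuming an NVE-enabled volume vol1 on SVM svm1 (hypothetical names) where the sensitive files have already been deleted:

```shell
set -privilege advanced

# Kick off the purge of deleted data on the encrypted volume
volume encryption secure-purge start -vserver svm1 -volume vol1

# Check progress
volume encryption secure-purge show -vserver svm1 -volume vol1
```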

NVMe


NVMe deserves its own post in the future, but I’ll give a quick overview of the NVMe capabilities in ONTAP 9.4 here.

With ONTAP 9.4 and the AFF A800, NetApp is first to market with end-to-end NVMe. This includes NVMe drives in the AFF A800, NVMe over Fabrics with Brocade Gen 6 Fibre Channel switches, and front-end host connectivity. FC-NVMe can be used to deliver lower latency, more bandwidth, and more IOPS. It can also be enabled with a non-disruptive upgrade on existing AFF A-Series controllers, including the A300, A700, and A700s.
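To give a feel for the front-end side, provisioning an FC-NVMe namespace looks a lot like provisioning a LUN, with a subsystem taking the place of an igroup. A sketch, assuming an SVM named svm1 and a volume named nvme_vol (both hypothetical names):

```shell
# Add the NVMe protocol service to the SVM
vserver nvme create -vserver svm1

# A subsystem plays the role an igroup plays for FC LUNs
vserver nvme subsystem create -vserver svm1 -subsystem sub1 -ostype linux

# Create a namespace (the NVMe equivalent of a LUN) and map it
vserver nvme namespace create -vserver svm1 -path /vol/nvme_vol/ns1 -size 100g -ostype linux
vserver nvme subsystem map add -vserver svm1 -subsystem sub1 -path /vol/nvme_vol/ns1
```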

For more information about ONTAP 9.4 and other things NetApp has going on, head over to the NetApp Blog.

-Aaron

Reclaim FC Datastore Space

Reclaim unused space on Thin Provisioned NetApp LUN

Something that’s annoying when you’re using thin provisioning for your Fibre Channel LUNs is that when you delete or move VMs off the LUN, the freed-up space is not seen on the NetApp storage controller.

You can see the problem here: I’ve deleted files from the datastore, so VMware sees plenty of free space, but NetApp still sees a 70% full LUN. Space has become available on the VMFS filesystem, but the NetApp storage controller doesn’t recognize it because it doesn’t know what’s going on inside that filesystem.

There’s an easy way around this that can be handy if you need that extra space back in NetApp ONTAP. Make sure that space-allocation is enabled on your LUN before you try this. If it isn’t enabled (it is disabled by default in ONTAP), ESXi will report that SCSI UNMAP is not supported.
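Before taking anything offline, you can confirm whether space-allocation is already on. Using the same SVM and LUN path as in this example:

```shell
lun show -vserver Infra-SVM -path /vol/workload2/lun1 -fields space-allocation
```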

esxcli storage core device vaai status get
naa.600a09803830344a583f497178583352
VAAI Plugin Name: VMW_VAAIP_NETAPP
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: unsupported

You can follow these steps to enable it. Unfortunately, the LUN has to be taken offline for the change to take effect, so you’ll obviously want to move any VMs off the LUN first.

  1. Offline the LUN
    lun offline -vserver Infra-SVM -path /vol/workload2/lun1
  2. Modify the LUN to ensure space-allocation is enabled
    lun modify -vserver Infra-SVM -path /vol/workload2/lun1 -space-allocation enabled
  3. Online the LUN
    lun online -vserver Infra-SVM -path /vol/workload2/lun1
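After bringing the LUN back online, have ESXi rescan its storage adapters and re-check the VAAI status for the device; the Delete Status should flip to supported. A sketch using the same device ID as above:

```shell
# Rescan all HBAs so ESXi picks up the change
esxcli storage core adapter rescan --all

# Re-check the VAAI primitives for just this device
esxcli storage core device vaai status get -d naa.600a09803830344a583f497178583352
```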

Now you can see that Delete Status is supported; we can continue on to free up some space.

esxcli storage core device vaai status get
naa.600a09803830344a583f497178583352
VAAI Plugin Name: VMW_VAAIP_NETAPP
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported

esxcli storage vmfs unmap -l Test

After this, you can see that my LUN in ONTAP System Manager reflects the correct used space!

Just a warning… you will probably take a performance hit in your vSphere cluster when you run the unmap command, so keep that in mind and run it during off-peak hours.
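One way to soften that hit (an option worth testing in your own environment first) is the unmap command’s reclaim-unit parameter, which controls how many VMFS blocks are reclaimed per iteration; smaller values spread the work out over more, lighter operations:

```shell
# Reclaim in chunks of 100 VMFS blocks instead of the default 200
esxcli storage vmfs unmap -l Test -n 100
```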

-Kirk