Cisco Live NOC and PowerCLI

I wrote in a previous post about how Cisco Live runs on NetApp Storage. I’ll write a few posts describing some of the automation we use to get ready for the show and some ways that we monitor the hardware during the show. This is the first of those posts.

One of the things we need to do to prepare for Cisco Live US is move the data from a FAS8040 to the AFF8060 that we use during the show. NetApp SnapMirror makes the data easy to move, but we also need to re-register all of the VMs from the replicated data. This turns out to be a long process if you have to manually browse the datastore and register each .vmx file. To speed things along, I used a VMware PowerCLI script. We’ve got four volumes, but I’ll demonstrate the process for one of them, CLUS-A-01.

First, all of the VMs need to be powered down. We’re going to have to remove all of them, so they need to be powered down anyway. Once they’re powered down, check whether any VMs have a CD drive connected to an ISO on the datastore, and disconnect any that do.
get-vm -datastore CLUS-A-01 | stop-vm
get-vm | Get-CDDrive | select @{N="VM";E="Parent"},HostDevice,IsoPath | where {$_.IsoPath -ne $null}
get-vm -Name clnoc-wifi-checker3 | get-cddrive | Set-CDDrive -NoMedia -Confirm:$false
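If several VMs have ISOs mounted, the same Set-CDDrive call can be applied in bulk rather than per VM. This is a sketch of that approach, not a command we necessarily ran verbatim for the show:

get-vm -datastore CLUS-A-01 | get-cddrive | where {$_.IsoPath -ne $null} | Set-CDDrive -NoMedia -Confirm:$false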
Next, I’ll take an inventory of all the VMs we’re going to move so that we can make sure they all show up later. The VM name has often been changed and may have nothing to do with the VMX name, so I’ll pull both of those and dump them to a file as a reference. I’ll also dump the VMX paths into an object that I can import and use later.
get-vm -datastore CLUS-A-01 | select Name,@{E={$_.ExtensionData.Config.Files.VmPathName};L="VM Path"} | out-file /CLUS-A-01.txt
get-vm -datastore CLUS-A-01 | select @{E={$_.ExtensionData.Config.Files.VmPathName};L="VM Path"} | export-clixml /CLUS-A-01.xml
Now all of the VMs on the target datastore can be removed.
get-vm -Datastore CLUS-A-01 | where {$_.PowerState -eq "PoweredOff"} | remove-vm -Confirm:$false
I don’t have good steps for this part. I’ll come back in the future and add the commands but the SnapMirror needs one final update, then I need to break it. This makes the destination volume read/write. After the volume is accessible, mount it from the NetApp controller. Now, back to the PowerCLI…
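Until I capture the exact commands we use, the ONTAP side would look roughly like this. The SVM and volume names here are placeholders, so treat this as a sketch rather than our exact procedure:

snapmirror update -destination-path dest-svm:CLUS_A_1
snapmirror break -destination-path dest-svm:CLUS_A_1
volume mount -vserver dest-svm -volume CLUS_A_1 -junction-path /CLUS_A_1

The update performs one final incremental transfer, the break makes the destination volume read/write, and the mount makes it accessible at a junction path for NFS.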

Add the datastore in vCenter from its new location. The -NfsHost value below is a placeholder; substitute the NFS LIF address for your environment.
Get-VMHost -Location CLUS | new-datastore -Name CLUS-A-01 -Path /CLUS_A_1 -NFS -NfsHost $NfsLifAddress
Now that the volume is accessible, I just need to grab the VMX paths I dumped to a file earlier, loop through them, and register the VMs back in vCenter.
$vmhosts = Get-VMHost -Location CLUS
import-clixml /CLUS-A-01.xml | foreach-object {
$vmhost = $vmhosts | get-random -count 1
New-Vm -RunAsync:$true -VMFilePath $_.'VM Path' -VMHost $vmhost
}

There you have it! We actually combined all of these commands into a little script that handles everything for us. This makes an easy way to get the FlexPod ready for Cisco Live!


Inside the NOC at Cisco Live

Several times a year, Cisco Live offers thousands of attendees the opportunity to learn about new and exciting technologies, network with a lot of smart folks, and have a blast while doing it. For me, Cisco Live offers an exciting opportunity as well. As a Technical Marketing Engineer on the Converged Infrastructure team at NetApp, I get the opportunity to create a lot of data center designs in a year but I don’t typically get to do the day-to-day support of the FlexPod for thousands of people. Cisco Live gives me the opportunity to do that and talk to attendees about the experience throughout the week as a member of the Network Operations Center (NOC) team. NetApp has been the official storage provider of the NOC for several years now, ever since the decision to collaborate on the infrastructure and run Cisco Live on a FlexPod.


The Cisco Live NOC team plays a vital role at the conference. We are a service provider for all of the Cisco employees, vendors, and attendees – ensuring everyone receives a reliable internet connection with great performance. We deploy a staggering amount of hardware during the week before the show, what we refer to as the setup week. Before we can get to the thousands of access points and switches that need to be deployed, we need to get our FlexPod up and running. For almost 5 years now, the data center at the core of Cisco Live has been a FlexPod Datacenter – a Cisco and NetApp converged infrastructure that combines best practices and industry-leading support. The FlexPod Datacenter is where we run all of the applications required to configure, monitor, and maintain the network. These applications include video surveillance, WAN optimization, wireless network management, and a number of custom applications, just to name a few.

This summer at Cisco Live US in Orlando, we’re excited to once again be running Cisco Live on a FlexPod containing 2 Cisco UCS blade chassis, 2 Cisco Nexus 7Ks, and 4 NetApp AFF 8060s in a MetroCluster configuration. We designed this infrastructure with a few considerations in mind.

The primary design consideration for our infrastructure is business continuity. During the setup week, there is a lot going on. With hundreds of people on site tearing down past conferences while also setting up for Cisco Live, there is plenty of opportunity for accidents. At Cisco Live Europe in Barcelona, an electrician pulled the power on one of our data centers, thinking it was used for the hairstyling convention that had just ended. It’s very important that through any issues we may encounter with any part of the infrastructure – even the loss of an entire data center – we continue to serve data and run the applications. For that, we turned to NetApp MetroCluster – a solution which synchronously mirrors your data between two data centers and has enough redundancy built in that you could lose a full data center, fail over, and continue serving data. With MetroCluster as our storage solution, we were able to fail over to the surviving data center when someone cut the breaker to our data center, continue serving data, and switch control back once we had regained power.


In addition to business continuity, the flexibility and performance of the infrastructure are very important. In the fast-moving environment at Cisco Live, we often don’t have solid requirements until right before the show, so we need a data center infrastructure that is flexible enough to handle all kinds of different workloads and protocols. Regardless of the chosen design, it needs to perform well. All Flash FAS is perfect as a platform that provides all the features we need combined with great performance. For example, at Cisco Live Barcelona this year, we planned on implementing Fibre Channel through the Cisco Nexus 5548. This required 2 Fibre Channel ISLs between the sites in addition to the Ethernet ISLs we had for NFS traffic. At the last minute, the venue communicated that there were not enough links between the data centers and the plans would need to change. All Flash FAS made this an easy decision: it cost us just a small amount of time to convert our SAN boot infrastructure from Fibre Channel to iSCSI. With some storage controllers, this flexibility isn’t available. Regardless of the design we’ve chosen for Cisco Live, the All Flash FAS has consistently responded to I/O with sub-millisecond latency. The NOC team has found that the controllers are capable of great performance with any workload we choose.

One great thing about the NOC at Cisco Live US is that you can see all the infrastructure being used by stopping by the NOC booth in The Hub. We’d love for any attendees at CLUS in Orlando to swing by and talk about the infrastructure and any other NOC related things you’re interested in!

Mother’s Day Scallops

This isn’t the most typical thing to cook on the Big Green Egg, especially for my first post. However, tonight I whipped up an early Mother’s Day dinner for my wife. This one was mostly an excuse to use a new Himalayan salt block that was given to me as a gift.

After a little research, I determined that scallops were one of the best things to cook on a Himalayan salt block, which is perfect because my wife loves scallops! The trick with the salt block is to heat it up slowly to prepare for the cook. I let the grill come up to 400 degrees over about an hour, as you can see on my graph from the Flame Boss.

During this time, I had some help from the little man salting the asparagus…

Now, on to the scallops. Grilling scallops is a great way to bring out their natural sweetness, which pairs nicely with the salt flavor they pick up from the salt block. Before placing them on the grill, I brushed on a sauce of butter, lemon juice, and honey to help brown them up a little. They cook quickly; I left them on for around 3 minutes per side. I thought the flavor and texture of the finished scallops were excellent, although I wish they had browned a little more. I probably tried to put too many on the salt block at once; the liquid released by all of the scallops at the same time likely prevented some of the browning.

New Features in ONTAP 9.4

While NetApp first created ONTAP over 25 years ago, innovations are still being added today with ONTAP 9.4. ONTAP 9.4 brings NVMe, 100GbE, 30TB SSDs, and enhancements to several recently released features.

Fabric Pool

Fabric Pool was first released in ONTAP 9.2 as a feature that allows you to tier data off to cheaper object storage. Originally, Amazon S3 and StorageGRID were the available tiers; Azure Blob Storage was added as a tier in ONTAP 9.3.

Fabric Pool works by running two processes in ONTAP to move ‘cold’ blocks of data to the cloud. The first is a temperature scanner which is constantly evaluating the ‘temperature’ of a block. Active blocks are ‘hot’ and blocks that haven’t been used in a while are ‘cold’. The second process finds cold blocks and moves them to the object storage tier if the aggregate containing the cold blocks is over 50% full.

Previously, ONTAP had two policies for Fabric Pool: one that moved backup data and another that moved blocks only used by Snapshot copies. A new policy has been added in ONTAP 9.4 that will move any cold block in the volume to the object storage tier. This new policy also allows the user to specify how long it takes for a block to become eligible to move to object storage. This information is also reported back to the storage administrator through the CLI and ONTAP System Manager.
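As a sketch, setting the new tiering policy from the CLI might look something like the following. The vserver and volume names are hypothetical, and the cooling period is just an example value:

volume modify -vserver svm1 -volume vol1 -tiering-policy auto -tiering-minimum-cooling-days 31

Blocks in vol1 that haven’t been read for the cooling period would then become candidates for movement to the object storage tier.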

NVE Secure Purge

NetApp Volume Encryption (NVE) Secure Purge is important for any enterprise looking to comply with the new GDPR standards. The goal of secure purge is that deleted data cannot be recovered from the physical media at a later point in time. To do this, ONTAP removes any data from the filesystem that contains remnants of the deleted files, then re-encrypts the remaining data with new keys. This ensures that the deleted data cannot be recovered.
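Kicking off a secure purge from the CLI should look roughly like this on an NVE-enabled volume; the vserver and volume names are hypothetical:

volume encryption secure-purge start -vserver svm1 -volume vol1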



NVMe

NVMe deserves its own post in the future, but I’ll give a quick overview of the capabilities of NVMe in ONTAP 9.4 here.

With ONTAP 9.4 and the AFF A800, NetApp is first to market with end-to-end NVMe. It includes NVMe drives in the AFF A800, NVMe over fabrics with Brocade Gen 6 Fibre Channel switches, and frontend host connectivity. FC-NVMe can be used to deliver lower latency, more bandwidth, and more IOPS. It can also be implemented with a non-disruptive upgrade to existing AFF A-Series controllers including the A300, A700, and A700s.
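As a rough sketch of provisioning FC-NVMe storage from the ONTAP CLI, the flow mirrors traditional LUN mapping but uses namespaces and subsystems instead of LUNs and igroups. The vserver, volume, and subsystem names here are hypothetical:

vserver nvme create -vserver nvme-svm
vserver nvme namespace create -vserver nvme-svm -path /vol/nvme_vol/ns1 -size 100g -ostype vmware
vserver nvme subsystem create -vserver nvme-svm -subsystem subsys1 -ostype vmware
vserver nvme subsystem map add -vserver nvme-svm -subsystem subsys1 -path /vol/nvme_vol/ns1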

For more information about ONTAP 9.4 and other things NetApp has going on, head over to the NetApp Blog.


500 Word Summary: So Good They Can’t Ignore You

The Passion Hypothesis:
The key to occupational happiness is to find out what you’re passionate about and find a job that matches this passion.

I’ve talked to so many people who struggle to act on the passion hypothesis because, to start, you have to be passionate about something. Most people are passionate about something, but it’s usually more of a hobby than something from which they can earn a living. It’s an incredibly common problem for young people in today’s world. I believe this is an excellent book for people struggling to find their passion.

In “So Good They Can’t Ignore You”, Cal Newport argues against ‘The Passion Hypothesis’ as the primary method by which people should plan their career. He instead proposes an alternative way to plan your career, with the primary goal being that you should love what you do. That proposal is made up of a few rules.

Rule #1: Don’t Follow Your Passion

Most people do not have a passion that defines the work they want to do or they have a passion that they can’t monetize. Passion often comes from working towards something for a long time and becoming excellent at it.

Rule #2: “Be so good they can’t ignore you”

This quote from Steve Martin captures what you need to do in order to build a career that you love. The key to finding work that you love is not to follow your passion but instead to get good at something rare and valuable. Cash in the career capital you gain from these skills for the traits that make work great.

Key to this rule is that you should begin with a focus on the value you can offer to the world, not the value that your job can offer you.

Rule #3: Consider saying no to maintain control

Control is one of the primary things that makes work enjoyable. When you have enough career capital to acquire more control, your employer is likely to do something to prevent you from gaining that control. Don’t fall into this trap and accept something (money) instead of gaining control. Also be careful not to acquire control without the appropriate career capital. Control gained this way is not sustainable.

Rule #4: Mission is important to creating work you love

Finally, after acquiring a lot of career capital, you can answer the question, “What should I do with my life?”. Missions are often found at the cutting-edge of a field. If you want a mission, try finding work on the cutting edge.

Mission driven projects are often tough. To make them easier, make little bets to answer questions about the project. Ensure that a project is remarkable enough that others want to talk about it. This is important if you want it to succeed.


I loved this book and the message that you don’t have to identify a passion first and act on it. It’s much better to focus on doing great work and let your passion develop with you.


Reclaim FC Datastore Space

Reclaim unused space on Thin Provisioned NetApp LUN

Something that’s annoying when you’re implementing thin provisioning for your Fibre Channel LUNs is that when you delete or move VMs off the LUN, the freed-up space is not reflected on the NetApp storage controller.

You can see the problem here: I’ve deleted files from the datastore, so VMware sees plenty of free space, but NetApp still sees a 70% full LUN. Space has become available on the VMFS filesystem, but the NetApp storage controller doesn’t recognize it because it has no visibility into what’s going on inside that filesystem.

There’s an easy way around this in NetApp ONTAP that could be handy if you need that extra space. Make sure that space-allocation is enabled on your LUN before you try this. If it isn’t enabled – it is disabled by default in ONTAP – ESXi will report that SCSI UNMAP is not supported.

esxcli storage core device vaai status get
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: unsupported

You can follow these steps to enable it. Unfortunately, the LUN must be taken offline for the change to take effect, so you’ll obviously want to move any VMs off the LUN first.

  1. Offline the LUN
    lun offline -vserver Infra-SVM -path /vol/workload2/lun1
  2. Modify the LUN to ensure space-allocation is enabled
    lun modify -vserver Infra-SVM -path /vol/workload2/lun1 -space-allocation enabled
  3. Online the LUN
    lun online -vserver Infra-SVM -path /vol/workload2/lun1
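You can verify that the setting took effect with a command along these lines:

lun show -vserver Infra-SVM -path /vol/workload2/lun1 -fields space-allocation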

Now you can see that Delete Status is supported; we can continue on to free up some space.

esxcli storage core device vaai status get
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported
esxcli storage vmfs unmap -l Test

After this you can see that my LUN in ONTAP’s System Manager reflects the correct size used!

Just a warning… You will probably take a performance hit in your vSphere cluster when you run the unmap command. Keep that in mind and run it during off-peak hours.