Brisket!

While I’m a typical North Carolina guy in that my ideal BBQ is pulled pork with a vinegar-based sauce, my dad prefers something closer to Texas BBQ. Because of this, he’s been asking me to make brisket for him since I got the Big Green Egg. This year for Father’s Day, I decided I’d go ahead and give it a try. Based on what I had read while poking around on the Big Green Egg forums, brisket is one of the more challenging things you can cook. If you don’t cook it long enough, it’s too tough, and if you cook it too long, it can dry out. It’s also tough to time because a stall in the cook can keep it from reaching the right temperature on schedule. These are the main reasons I hadn’t tried it until now. I’m happy to say that after some research and advice from folks at work, it turned out to be far easier than I thought.

After buying the 11-pound brisket from a local butcher (the smallest one they had?!), the first thing I needed to do was season it. Based on advice from a friend at work, I started with an all-purpose seasoning that was mostly salt, layered on a standard BBQ seasoning, and finished it off with a coarse-ground steak seasoning for texture. I put all of these seasonings on 2 days before I wanted to smoke the meat. Next time, I’d like to get more creative with the seasonings. While all of these were good, I’d be interested to see what some spicier seasonings would do to the meat.

The night before the smoke, I injected the brisket with 2 cups of beef broth. Once again, I’d like to get more creative with the ingredients for the injection next time. This was good but I’d like to see what other flavors I can bring to the meat.

Finally, I started smoking the brisket at 6:00 AM, with some hickory chunks providing the smoke at a temperature of 230 degrees. I was hoping this would have the brisket ready between 3:00 and 4:00 PM, with some time to rest before slicing. As you can see in the graph from the FlameBoss, the brisket quickly came up to 165 degrees. I pulled the meat, wrapped it in aluminum foil, added some beef broth, and placed it back on the Big Green Egg. The temperature stayed steady for a while before quickly climbing to 200 degrees. I pulled the brisket around 1:45 and placed it in a cooler to rest until it was time to eat. This ended up being a bit early to come off the smoker. In the future, I’ll probably start a couple of hours later in the day.

The final product was delicious and amazingly tender. While I sliced it, the whole family stood around grabbing pieces like they were candy. Everyone agreed it was tasty and that I would need to make it again for future events!

Find VM Name by IP Address

What do you do when you’ve got a couple hundred VMs in vCenter and you need to find one? If you know its IP address, you can use this nice little PowerCLI snippet. It helped me solve a problem for one of my coworkers.

PS /Users/amkirk> Get-View -ViewType VirtualMachine | ?{ ($_.Guest.Net | %{ $_.IpAddress }) -contains "10.63.172.31" } | select Name

Name
----
csa-rules-server-01032018

-Aaron

Cisco Live NOC and PowerCLI

I wrote in a previous post about how Cisco Live runs on NetApp Storage. I’ll write a few posts describing some of the automation we use to get ready for the show and some ways that we monitor the hardware during the show. This is the first of those posts.

One of the things we need to do to prepare for Cisco Live US is move the data from a FAS8040 to the AFF8060 that we use during the show. NetApp SnapMirror makes the data easy to move, but we also need to rediscover all of the VMs from the replicated data. This turns out to be a long process if you have to manually browse to each .vmx file and register it. To speed this process along, I used a VMware PowerCLI script. We’ve got four volumes, but I’ll demonstrate the process for one of them, CLUS-A-01.

First, all of the VMs need to be powered down. We’re going to have to remove all of them, so they need to be powered down anyway. Once they’re powered down, check whether any of them have a CD drive connected to the datastore and disconnect any that do.
get-vm -datastore CLUS-A-01 | stop-vm
get-vm | Get-CDDrive | select @{N="VM";E="Parent"},HostDevice,IsoPath | where {$_.IsoPath -ne $null}
get-vm -Name clnoc-wifi-checker3 | get-cddrive | Set-CDDrive -NoMedia -Confirm:$false
Next, I’ll need to take inventory of all the VMs we’re going to move so that we can make sure they show up again later. The VM name has often been changed and has nothing to do with the VMX name, so I’ll pull both of those and dump them to a file as a reference. I’ll also dump the VMX paths into an object that I can import and use later.
get-vm -datastore CLUS-A-01 | select Name,@{E={$_.ExtensionData.Config.Files.VmPathName};L="VM Path"} | set-content /CLUS-A-01.txt
get-vm -datastore CLUS-A-01 | select @{E={$_.ExtensionData.Config.Files.VmPathName};L="VM Path"} | export-clixml /CLUS-A-01.xml
Now all of the VMs on the target datastore can be removed.
get-vm -Datastore CLUS-A-01 | where {$_.PowerState -eq "PoweredOff"} | remove-vm -Confirm:$false
I don’t have good steps for this part; I’ll come back in the future and add the exact commands we use. In short, the SnapMirror relationship needs one final update, and then I need to break it, which makes the destination volume read/write. Once the volume is writable, mount it from the NetApp controller.
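For reference, the ONTAP side of that typically looks something like the following (the SVM name here is hypothetical, and the paths should be adjusted to match your environment):
snapmirror update -destination-path svm_clus:CLUS_A_1
snapmirror break -destination-path svm_clus:CLUS_A_1
volume mount -vserver svm_clus -volume CLUS_A_1 -junction-path /CLUS_A_1
Now, back to the PowerCLI…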

Add the datastore in vCenter from its new location.
Get-VMHost -Location CLUS | new-datastore -Name CLUS-A-01 -Path /CLUS_A_1 -NFS -NfsHost 192.168.1.202
Now that the volume is accessible, I just need to grab the VMX paths that I dumped to a file earlier, loop through them, and add each VM back to vCenter.
$vmhosts = Get-VMHost -Location CLUS
import-clixml /CLUS-A-01.xml | foreach-object {
    $vmhost = $vmhosts | get-random -count 1
    New-Vm -RunAsync:$true -VMFilePath $_.'VM Path' -VMHost $vmhost
}

There you have it! We actually combined all of these commands into a little script that handles everything for us. This makes for an easy way to get the FlexPod ready for Cisco Live!
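For anyone who wants to do something similar, here’s a rough sketch of what a combined script could look like for a single datastore. This isn’t our exact script; the names, paths, and NFS host are the same placeholders used in the examples above.
# Sketch of a migration helper for one datastore
$datastore = "CLUS-A-01"
$cluster   = "CLUS"

# 1. Power off and inventory the VMs on the datastore, then remove them from vCenter
#    (CD drives connected to the datastore should be disconnected first, as shown above)
Get-VM -Datastore $datastore | Stop-VM -Confirm:$false
Get-VM -Datastore $datastore |
    Select-Object Name,@{E={$_.ExtensionData.Config.Files.VmPathName};L="VM Path"} |
    Export-Clixml "/$datastore.xml"
Get-VM -Datastore $datastore | Where-Object {$_.PowerState -eq "PoweredOff"} | Remove-VM -Confirm:$false

# 2. (NetApp side) update and break the SnapMirror, then mount the destination volume

# 3. Add the datastore from its new location and re-register the VMs
Get-VMHost -Location $cluster | New-Datastore -Name $datastore -Path /CLUS_A_1 -NFS -NfsHost 192.168.1.202
$vmhosts = Get-VMHost -Location $cluster
Import-Clixml "/$datastore.xml" | ForEach-Object {
    New-VM -RunAsync:$true -VMFilePath $_.'VM Path' -VMHost ($vmhosts | Get-Random -Count 1)
}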

-Aaron

Inside the NOC at Cisco Live

Several times a year, Cisco Live offers thousands of attendees the opportunity to learn about new and exciting technologies, network with a lot of smart folks, and have a blast while doing it. For me, Cisco Live offers an exciting opportunity as well. As a Technical Marketing Engineer on the Converged Infrastructure team at NetApp, I get the opportunity to create a lot of data center designs in a year but I don’t typically get to do the day-to-day support of the FlexPod for thousands of people. Cisco Live gives me the opportunity to do that and talk to attendees about the experience throughout the week as a member of the Network Operations Center (NOC) team. NetApp has been the official storage provider of the NOC for several years now, ever since the decision to collaborate on the infrastructure and run Cisco Live on a FlexPod.

The Cisco Live NOC team plays a vital role at the conference. We are a service provider for all of the Cisco employees, vendors, and attendees – ensuring everyone receives a reliable internet connection with great performance. We deploy a staggering amount of hardware during the week before the show, which we refer to as the setup week. Before we can get to the thousands of access points and switches that need to be deployed, we need to get our FlexPod up and running. For almost 5 years now, the data center at the core of Cisco Live has been a FlexPod Datacenter – a Cisco and NetApp converged infrastructure that combines best practices and industry-leading support. The FlexPod Datacenter is where we run all of the applications required to configure, monitor, and maintain the network. These applications include video surveillance, WAN optimization, wireless network management, and a lot of custom applications, just to name a few.

This summer at Cisco Live US in Orlando, we’re excited to once again be running Cisco Live on a FlexPod containing 2 Cisco UCS blade chassis, 2 Cisco Nexus 7Ks, and 4 NetApp AFF 8060s in a MetroCluster configuration. We designed this infrastructure with a few considerations in mind.

The primary design consideration for our infrastructure is business continuity. During the setup week, there are a lot of things going on. With hundreds of people on site tearing down from past conferences while also setting up for Cisco Live, there is plenty of opportunity for accidents. At Cisco Live Europe in Barcelona, an electrician pulled the power on one of our data centers, thinking it was used for the hairstyling convention that had just ended. It’s very important that through any issue we may encounter with any part of the infrastructure – even the loss of an entire data center – we continue to serve data and run the applications. For that, we turned to NetApp MetroCluster – a solution which synchronously mirrors your data between two data centers and has enough redundancy built in that you could lose a full data center, fail over, and continue serving data. With MetroCluster as our storage solution, we were able to fail over to the surviving data center when the breaker to our data center was cut, continue serving data, and switch control back once we had regained power.

In addition to business continuity, the flexibility and performance of the infrastructure are very important. Because of the fast-moving environment at Cisco Live, we often don’t have good requirements until right before the show. Because of this, we need a data center infrastructure that is flexible enough to handle all kinds of different workloads and protocols. Regardless of the chosen data center design, it needs to perform well. All Flash FAS is perfect as a platform that provides all the features we need combined with great performance. For example, at Cisco Live Barcelona this year, we planned on implementing Fibre Channel through the Cisco Nexus 5548. This required 2 Fibre Channel ISLs between the sites in addition to the Ethernet ISLs which we had for NFS traffic. At the last minute, the venue communicated that there were not enough links between the data centers and the plans would need to change. All Flash FAS made this an easy change; it cost us just a small amount of time to convert our SAN boot infrastructure from Fibre Channel to iSCSI. With some storage controllers, this flexibility isn’t available. Regardless of the design we’ve chosen for Cisco Live, the All Flash FAS has consistently responded to IOs with sub-millisecond latency. The NOC team has found that the controllers are capable of great performance with any workload that we choose.

One great thing about the NOC at Cisco Live US is that you can see all the infrastructure being used by stopping by the NOC booth in The Hub. We’d love for any attendees at CLUS in Orlando to swing by and talk about the infrastructure and any other NOC related things you’re interested in!

Mother’s Day Scallops

This isn’t the most typical thing to cook on the Big Green Egg, especially for my first post. However, tonight I whipped up an early Mother’s Day dinner for my wife. This one was mostly an excuse to use a new Himalayan salt block that was given to me as a gift.

After a little research, I determined that scallops are one of the best things to cook on a Himalayan salt block, which is perfect because my wife loves scallops! The trick with the salt block is to heat it up slowly to prepare for the cook. I let the grill come up to 400 degrees over around an hour, as you can see on my graph from the Flame Boss.

During this time, I had some help from the little man salting the asparagus…

Now, on to the scallops. Grilling scallops is a great way to bring out their natural sweetness, which pairs nicely with the salty flavor they pick up from the salt block. Before placing them on the grill, I brushed on a sauce of butter, lemon juice, and honey to help brown them up a little. They cook quickly; I left them on for around 3 minutes per side. I thought the flavor and texture of the finished scallops were excellent, although I wish they had browned a little more. I probably tried to put too many on the salt block at once, and the liquid from all the scallops at the same time likely prevented some of the browning.

New Features in ONTAP 9.4

While NetApp first created ONTAP over 25 years ago, innovations are still being added today with ONTAP 9.4. ONTAP 9.4 brings NVMe, 100GbE, 30TB SSDs, and enhancements to several recently released features.

Fabric Pool

Fabric Pool was first released in ONTAP 9.2 as a feature that allows you to tier data off to cheaper object storage. Originally, Amazon S3 and StorageGRID were the available tiers – Azure Blob Storage was added as a tier in ONTAP 9.3.
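As a rough illustration, attaching an S3 bucket as a cloud tier looks something like the commands below. The object store name, bucket, aggregate, and credentials here are all hypothetical placeholders, and the full parameter list is worth checking against the ONTAP documentation.
storage aggregate object-store config create -object-store-name clus_s3 -provider-type AWS_S3 -server s3.amazonaws.com -container-name clus-fabricpool-bucket -access-key <key> -secret-password <secret>
storage aggregate object-store attach -aggregate aggr1 -object-store-name clus_s3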

Fabric Pool works by running two processes in ONTAP to move ‘cold’ blocks of data to the cloud. The first is a temperature scanner which is constantly evaluating the ‘temperature’ of a block. Active blocks are ‘hot’ and blocks that haven’t been used in a while are ‘cold’. The second process finds cold blocks and moves them to the object storage tier if the aggregate containing the cold blocks is over 50% full.

Previously, ONTAP had two policies for Fabric Pool: one that moved backup data and another that moved blocks only used by Snapshots. A new policy has been added in ONTAP 9.4 that will move any cold block in the volume to the object storage. This new policy also allows the user to specify how long it takes for a block to become eligible to move to object storage. This information is also reported back to the storage administrator through the CLI and ONTAP System Manager.
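For example, switching a volume to the new policy with a custom cooling period is a one-liner along these lines (the SVM and volume names are hypothetical, and the parameter names should be verified against the ONTAP 9.4 documentation):
volume modify -vserver svm1 -volume vol1 -tiering-policy auto -tiering-minimum-cooling-days 31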

NVE Secure Purge

NetApp Volume Encryption Secure Purge is important for any enterprise looking to abide by the new GDPR standards. The goal of secure purge is that deleted data cannot be recovered from the physical media at a later point in time. To do this, ONTAP removes any data from the filesystem which contains remnants of the deleted files and then re-encrypts the remaining data with new keys. This ensures that the deleted data cannot be recovered.
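For reference, after the files in question have been deleted, kicking off a secure purge on an NVE volume looks something like this (SVM and volume names are hypothetical, and the syntax should be confirmed against the ONTAP 9.4 documentation):
volume encryption secure-purge start -vserver svm1 -volume vol1
volume encryption secure-purge show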

NVMe

NVMe deserves its own post in the future, but I’ll give a quick overview of the capabilities of NVMe in ONTAP 9.4 here.

With ONTAP 9.4 and the AFF A800, NetApp is first to market with end-to-end NVMe. It includes NVMe drives in the AFF A800, NVMe over Fabrics with Brocade Gen 6 Fibre Channel switches, and front-end host connectivity. FC-NVMe can be used to deliver lower latency, more bandwidth, and more IOPS. It can also be implemented with a non-disruptive upgrade to existing AFF A-Series controllers, including the A300, A700, and A700s.
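As a very rough sketch of what FC-NVMe provisioning looks like from the ONTAP CLI – all names here are hypothetical, and the exact commands and parameters should be verified against the ONTAP 9.4 documentation:
vserver nvme create -vserver svm1
vserver nvme subsystem create -vserver svm1 -subsystem linux_host1 -ostype linux
vserver nvme subsystem host add -vserver svm1 -subsystem linux_host1 -host-nqn <host NQN>
vserver nvme namespace create -vserver svm1 -path /vol/nvme_vol/ns1 -size 100GB -ostype linux
vserver nvme subsystem map add -vserver svm1 -subsystem linux_host1 -path /vol/nvme_vol/ns1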

For more information about ONTAP 9.4 and other things NetApp has going on, head over to the NetApp Blog.

-Aaron