Understanding ITOps, DevOps, and NoOps

I’m trying to understand the difference between these three so that I can understand my customers better. It’s not obvious because everyone has a different opinion about where the lines fall between them – or even whether there is a line at all. This is important because customers in the different segments have different needs for an observability solution. Here’s my best take on how they’re different and where those differences matter the most.

In traditional IT organizations, developers and system administrators have opposing goals. Developers want to build features, innovating and creating new value for customers. System administrators, on the other hand, want to ensure that the deployed software is reliable, performant, and secure, creating value for the business and customers. Because these two groups have different goals and don’t talk often, there is friction, and software isn’t released as smoothly as the business would like. When I think about organizations that practice traditional IT, I think about my time at NetApp. When we identified a new application we wanted to build, we couldn’t just start writing code and deploying. First, we had to go through a lengthy process to get a VM from IT. They had to get information about how much storage, CPU, and memory was needed before provisioning the VM. Once we had it, we could deploy code to that one VM. This is solidly in the ITOps segment of the population. We could do some DevOpsy things to deploy code, but we were always sitting on an ITOps framework for getting the necessary infrastructure.

Had we been practicing true DevOps at NetApp, we would have been able to deploy the infrastructure we needed – without asking a separate team. Other than the actual infrastructure being deployed through tickets, we did a lot of the other things necessary to claim DevOps. On DevOps teams, the team is responsible for the development, QA, deployment, infrastructure management, monitoring, and support of the application. The team may contain people with a role specific to one of those tasks or a team of generalists where anyone could manage any of the tasks. Either way, since the team is responsible for the operation of the application, they take on the new set of tasks. I’ve found that this causes the team to consider the impact of changes more and potentially be more careful. Nonetheless, DevOps teams can deploy faster and more dependably, helping to accomplish business objectives faster.

Finally, at some point, DevOps moves into NoOps – where automation and PaaS allow developers to deploy without needing to understand the infrastructure. The line between DevOps and NoOps is a bit murky to me, but I think the main point is that the ‘Ops’ in NoOps has been minimized away so that it is trivial for the developer to execute. It’s worth pointing out that even in a NoOps practice, you would still expect some teams in the company to practice DevOps, as seen in Adrian Cockcroft’s experience at Netflix. Those teams could be responsible for the platform or the automation that is making NoOps possible for the product developers.

I read a lot of content to try to understand this; here are a few links that I found especially helpful:

Catching the Wave

While I always enjoy the conversations on Invest Like the Best, one recent episode with Kevin Systrom and Mike Krieger, the founders of Instagram, was one of my favorites. They talked a lot about the product decisions they made, and much of it felt very applicable to product management. One of the anecdotes I really liked was when they compared building a feature at the right time to catching a wave:

“There’s an optimal time to start paddling to catch a wave. If you start too early, you look like an idiot. If you start too late, you look like an idiot… If you start at the right time, everything aligns.”

Link to Castro clip of the quote.

This perfectly aligns with my experience as a product manager and is applicable whether you’re building a new product or features on an existing product. There’s always an optimal time to start working on something and doing it too early or too late often doesn’t have enough of an impact. This is why the right priority is so important and part of what makes the product management job quite difficult!

You can check out more Invest Like the Best episodes here.

Pulled pork in < 1 hour

My experience making and freezing pulled pork for future use.

My wife’s family has a great tradition at the annual beach week. Each couple in the family cooks one night of the week. This has several advantages over everyone making their own stuff or going out for dinner.

First – and most obviously – we get home cooked meals every night of the week. Second, we all eat together. It’s a great time to have everyone around the table and chatting at the same time. Third, I get to stay on the beach longer and come back to the house just before dinner. Finally, while it’s definitely not a competition to see who makes the best meal 😉, everyone goes all out for their night of the week. We have some great dinners during the week, my favorite being Trey’s homemade lasagna.

This year, I wanted to make my favorite, some Carolina pulled pork. The problem is that I wanted to smoke it on the Big Green Egg and eat it an unknown number of days later. That turned out not to be a problem thanks to my Christmas experiment, where I froze the pulled pork and reheated it before everyone arrived! My wife still insists that’s the best pork I’ve made…

I just finished smoking this one on the Big Green Egg and I think it turned out great. I’ve been watching a lot of BBQ with Franklin on YouTube lately, so I followed his advice on the rub. I added a bit of cayenne pepper and brown sugar to this:

  • Kosher Salt
  • Black Pepper
  • Paprika
  • Garlic Powder
  • Chili Powder

I didn’t measure the ingredients, but it was close to equal parts salt, pepper, sugar, and paprika, with a pinch of the other ingredients.

Since I was making this for a lot of people, I had 3 pork butts on at once. That definitely maxed out the capacity for the large Big Green Egg!



I smoked this one at 250 for 5 hours over hickory chunks, wrapped it up, and waited until the internal temperature hit 195. It’s important to let the pork rest for a while after cooking to allow the muscle fibers to soak up the juices, so I let the butts sit on the counter in foil for another 30 minutes. After that, I pulled them apart, let them cool to room temperature, and put them in the freezer. There is no sauce in the freezer bags; I’ll put it on the pork when I warm it up. It should only take around an hour to warm them up and have them ready to eat.

It will be exciting to see how they turn out! Based on my previous experience with pulled pork, I think everyone will love them. 👍🏼



The Past Year with BGE

It’s been a while since I posted a blog about my grilling and smoking experiences with the Big Green Egg. For some reason, I find it far easier to eat whatever I cooked than to take pictures and write about it. Nonetheless, I’ve been using the Big Green Egg a lot over the past year and have cooked a number of different things. Scrolling back through my FlameBoss log, I’m going to pull out some of the interesting cooks and point out some things that I’ve learned. The graphs from the FlameBoss get screwed up a lot of the time, so unfortunately I won’t be able to show some of them.


Pulled Pork for Christmas

Every year, my extended family comes over to my house on December 23rd. We always have a great time catching up, of course with excellent food. Last year, I made pulled pork; it was a hit and I wanted to do it again. This year I had a predicament though. I was going to be out of town until about 3PM on the day that everyone was going to arrive. I decided I wanted to freeze the pulled pork and reheat it on the 23rd. It turns out this was something a lot of people have done and pulled pork reheats very well. The only big concern is ensuring that you safely freeze the meat so it doesn’t harbor bacteria. I cooked the pork about 10 days early with a typical rub on it:

  • 2 tablespoons sweet paprika
  • 2 tablespoons packed light brown sugar
  • 4 teaspoons kosher salt
  • 1 1/2 teaspoons chili powder
  • 1/4 teaspoon cayenne
  • 1/4 teaspoon dried oregano
  • 1/4 teaspoon dried thyme
  • 1/4 teaspoon ground cumin
  • 1/4 teaspoon freshly ground black pepper
  • 1/4 teaspoon garlic powder
  • 1/4 teaspoon onion powder


After letting it cool, I vacuum sealed 2 bags and cooled them in a cooler prior to freezing them. To reheat, I let the pork thaw for a while on the counter, put it in a pan with some vinegary sauce, covered it, and cooked it at 300 degrees until it was heated through. The whole process was pretty simple; I highly recommend trying it out! Right now, I’m cooking another 24 pounds of pork shoulder to bring to our family’s beach week and eat there!



Ribs

I’ve made ribs about 20 times this year. They are typically my go-to meat to smoke if someone is coming over to the house for dinner. While they always come out cooked extremely well, I still don’t feel like I’ve got the method perfect. I’ve tried different ways to cook them – 3-2-1, 2-2-1, no aluminum foil, membrane off or on – and it often feels like there are too many variables to say exactly what makes them great rather than just good.

There was one time I put some sauce on them that had too much sugar and it burned all over them. That was super disappointing, and I definitely learned to check the sauce out before using it.


Other Tasty Things…

Here are a few other things that I made in the past year. I didn’t branch out a ton but I did get several good cooks in.

  • Wings
  • Pork tenderloin
  • Carnitas
  • Bacon-wrapped turkey tenderloin
  • Pizza
  • Cookies




New Features in ONTAP 9.6

ONTAP 9.6RC1 is out now, which is no surprise to those who follow the new ONTAP release cadence at NetApp. For several years, we’ve been releasing two versions a year, in the fall and spring. Typically a long-term support (LTS) release comes out in the fall, but this year the model has changed a bit. Going forward, every release will be a long-term supported release with 3 years of full support, 2 years of limited support, and 3 years of self-service support. In case you were holding back on the spring release of ONTAP out of concern for it not being an LTS release, go ahead and upgrade! It will be a great experience with a simple automated upgrade, like any ONTAP upgrade.


The primary theme of ONTAP 9.6 is simplicity. I’ve talked to many customers and partners who will happily reduce the tunability of a product in exchange for a simpler user experience. With ONTAP 9.6, there are a number of improvements that will deliver a simpler experience for administrators.

The first of these features is an excellent out-of-box experience that reduces the setup to 5 simple steps. After the initial setup, quick provisioning workflows and guided LUN placement allow you to get your applications configured faster. Configuration of replication continues to be simplified, so you can ensure all your data is protected and available. Finally, upgrades have been simplified to allow faster and more convenient upgrades from a laptop.

The management ecosystem has changed as well. OnCommand System Manager is now ONTAP System Manager. This is where you’ll go to manage a single ONTAP cluster. The look and feel has been updated to be more intuitive and provide simpler workflows. You’ll see some of these improvements the first time you log in. In the background, System Manager uses REST APIs to deliver a simpler management experience.


AFF A320 and NVMe

Along with ONTAP 9.6, NetApp is releasing a new controller, the AFF A320, with onboard 100GbE ports for high performance connectivity. The AFF A320 will support the new NS224 NVMe expansion shelf. This combination of high performance and low latency will be a great fit for artificial intelligence and deep learning workloads.


Aggregate encryption

ONTAP supports a couple different types of encryption. Encryption through NSE drives means that everything on the cluster is encrypted. This feature is a great fit for secure environments but sometimes you only want to encrypt some of the data on the cluster. Volume level encryption allows you to encrypt data for an individual volume but could be tedious to maintain for many volumes. Aggregate level encryption fills the gap between these two, providing a simpler experience without needing to purchase special hardware or encrypt all the data on a cluster.

New MetroCluster Support

As a past MetroCluster engineer, I always like to keep track of the new things the team is doing. MetroCluster has been supported with an IP backend for over a year now. The team has slowly increased the distance between the sites, now allowing up to 700km of distance. New in ONTAP 9.6 is support for smaller systems, the AFF A220 and the FAS2750.



Father’s Day Brisket

While I’m a typical North Carolina guy in that pulled pork with a vinegar-based BBQ sauce is the ideal BBQ, my dad prefers something closer to Texas BBQ. Because of this, he’s been asking me to make brisket for him since I got the Big Green Egg. This year for Father’s Day, I decided I’d go ahead and give it a try. Based on what I had read while poking around on the Big Green Egg forums, brisket is one of the more challenging things you can cook. If you don’t cook it long enough, it’s too tough, and if you cook it too long, it can dry out. It’s tough to time because there could be a stall in the cook that prevents it from getting to the appropriate temperature in time. These are the main reasons I hadn’t tried it until now. I’m happy to say that after some research and advice from folks at work, this turned out to be far easier than I thought.

After buying the 11 pound brisket from a local butcher (the smallest one they had?!), the first thing I needed to do was season it. Based on advice from a friend at work, I started with an all-purpose seasoning that was mostly salt, layered on a standard BBQ seasoning, and finished it off with a coarse ground steak seasoning for texture. I put all these seasonings on 2 days before I wanted to smoke the meat. Next time, I’d like to get more creative with the seasonings. While all of these were good, I’d be interested to see what some spicier seasonings would do to the meat.


The night before the smoke, I injected the brisket with 2 cups of beef broth. Once again, I’d like to get more creative with the ingredients for the injection next time. This was good but I’d like to see what other flavors I can bring to the meat.



Finally, I started smoking the brisket at 6:00 AM, with some hickory chunks providing the smoke at a temperature of 230. I was hoping this would allow the brisket to be ready between 3:00 and 4:00 PM, with some time to rest before I slice it. As you can see in the graph from the FlameBoss, the brisket quickly came up to 165 degrees. I pulled the meat, wrapped it in aluminum foil, added some beef broth, and placed it back on the Big Green Egg. The temperature stayed steady for a while before quickly coming up to 200 degrees. I pulled the brisket around 1:45 and placed it in a cooler to rest until time to eat. This ended up being a bit early to come off the smoker. In the future, I’ll probably start a couple hours later in the day.


The final product was delicious and amazingly tender. While I sliced it, the whole family stood around grabbing pieces like they were candy. Everyone agreed it was tasty and that I would need to make it again for future events!


Find VM Name by IP Address

What do you do when you’ve got a couple hundred VMs in vCenter and you need to find one? If you know its IP address, you can use this nice little PowerCLI snippet. It helped me solve a problem for one of my coworkers.

PS /Users/amkirk> Get-View -ViewType VirtualMachine | ?{ ($_.Guest.Net | %{ $_.IpAddress }) -contains "" } | select Name



Cisco Live NOC and PowerCLI

I wrote in a previous post about how Cisco Live runs on NetApp Storage. I’ll write a few posts describing some of the automation we use to get ready for the show and some ways that we monitor the hardware during the show. This is the first of those posts.

One of the things we need to do to prepare for Cisco Live US is move the data from a FAS8040 to the AFF8060 that we use during the show. NetApp SnapMirror makes the data easy to move but we also need to rediscover all of the VMs from the replicated data. This turns out to be a long process if you need to manually click on all of the .vmx files from the replicated data. To speed this process along, I used a VMware PowerCLI script. We’ve got four volumes but I’ll demonstrate the process for one of them, CLUS-A-01.

First, all of the VMs need to be powered down. We’re going to have to remove all of them, so they need to be powered down anyway. Once they’re powered down, check to see if any of them have a CD drive connected to the datastore, and disconnect them if there are.
get-vm -datastore CLUS-A-01 | stop-vm
get-vm | Get-CDDrive | select @{N="VM";E="Parent"},HostDevice,IsoPath | where {$_.IsoPath -ne $null}
get-vm -Name clnoc-wifi-checker3 | get-cddrive | Set-CDDrive -NoMedia -Confirm:$false
Next, I’ll need to take an inventory of all the VMs we’re going to move so that we can make sure they show up later. The VM name has often been changed and has nothing to do with the VMX name, so I’ll pull both of those and dump them to a file as a reference. I’ll also dump the VMX paths into an object that I can import and use later.
get-vm -datastore CLUS-A-01 | select Name,@{E={$_.ExtensionData.Config.Files.VmPathName};L="VM Path"} | set-content /CLUS-A-01.txt
get-vm -datastore CLUS-A-01 | select @{E={$_.ExtensionData.Config.Files.VmPathName};L="VM Path"} | export-clixml /CLUS-A-01.xml
Now all VMs can be removed that are on the target datastore.
get-vm -Datastore CLUS-A-01 | where {$_.PowerState -eq "PoweredOff"} | remove-vm -Confirm:$false
I don’t have good steps for this part. I’ll come back in the future and add the commands but the SnapMirror needs one final update, then I need to break it. This makes the destination volume read/write. After the volume is accessible, mount it from the NetApp controller. Now, back to the PowerCLI…

Add the datastore in vCenter from its new location.
Get-VMHost -Location CLUS | new-datastore -Name CLUS-A-01 -Path /CLUS_A_1 -NFS -NfsHost
Now that the volume is accessible, I just need to grab the VMX paths that I dumped to a file earlier, loop through the VMX paths and add them back to vCenter.
$vmhosts = Get-VMHost -Location CLUS
import-clixml /CLUS-A-01.xml | foreach-object {
$vmhost = $vmhosts | get-random -count 1
New-VM -RunAsync:$true -VMFilePath $_."VM Path" -VMHost $vmhost
}

There you have it! We actually combined all of these commands into a little script that handles everything for us. This makes for an easy way to get the FlexPod ready for Cisco Live!
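For reference, here’s a rough sketch of what a combined script like that might look like. To be clear, this is a hypothetical reconstruction stitched together from the commands above, not our actual script – it assumes PowerCLI is loaded, a Connect-VIServer session already exists, and the datastore name, cluster name, and NFS host are placeholders you’d fill in for your environment.

```powershell
# Hypothetical sketch of the combined migration script.
# Assumes an existing Connect-VIServer session; $Datastore, $Cluster,
# and $NfsHost are placeholders for your environment.
param(
    [string]$Datastore = "CLUS-A-01",
    [string]$Cluster   = "CLUS",
    [Parameter(Mandatory)][string]$NfsHost
)

# 1. Power off every VM on the datastore and eject any mounted ISOs.
Get-VM -Datastore $Datastore | Stop-VM -Confirm:$false
Get-VM -Datastore $Datastore | Get-CDDrive |
    Where-Object { $_.IsoPath -ne $null } |
    Set-CDDrive -NoMedia -Confirm:$false

# 2. Save the VM names and VMX paths so the VMs can be re-registered later.
Get-VM -Datastore $Datastore |
    Select-Object Name, @{N = "VM Path"; E = { $_.ExtensionData.Config.Files.VmPathName }} |
    Export-Clixml "/$Datastore.xml"

# 3. Remove the powered-off VMs from inventory.
Get-VM -Datastore $Datastore |
    Where-Object { $_.PowerState -eq "PoweredOff" } |
    Remove-VM -Confirm:$false

# (The final SnapMirror update, break, and volume mount happen on the
# NetApp side before continuing.)

# 4. Mount the replicated volume and re-register each VMX on a random host.
$vmhosts = Get-VMHost -Location $Cluster
$vmhosts | New-Datastore -Name $Datastore -Path /CLUS_A_1 -NFS -NfsHost $NfsHost
Import-Clixml "/$Datastore.xml" | ForEach-Object {
    New-VM -RunAsync:$true -VMFilePath $_."VM Path" -VMHost ($vmhosts | Get-Random)
}
```

The real script presumably adds error handling and waits on the SnapMirror steps; this sketch just strings the PowerCLI pieces together in order.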


Inside the NOC at Cisco Live

Several times a year, Cisco Live offers thousands of attendees the opportunity to learn about new and exciting technologies, network with a lot of smart folks, and have a blast while doing it. For me, Cisco Live offers an exciting opportunity as well. As a Technical Marketing Engineer on the Converged Infrastructure team at NetApp, I get the opportunity to create a lot of data center designs in a year but I don’t typically get to do the day-to-day support of the FlexPod for thousands of people. Cisco Live gives me the opportunity to do that and talk to attendees about the experience throughout the week as a member of the Network Operations Center (NOC) team. NetApp has been the official storage provider of the NOC for several years now, ever since the decision to collaborate on the infrastructure and run Cisco Live on a FlexPod.


The Cisco Live NOC team provides a vital role at the conference. We are a service provider for all of the Cisco employees, vendors, and attendees – ensuring everyone receives a reliable internet connection with great performance. We deploy a staggering amount of hardware during the week before the show, what we refer to as the setup week. Before we can get to the thousands of access points and switches that need to be deployed, we need to get our FlexPod up and running. For almost 5 years now, the data center at the core of Cisco Live has been a FlexPod Datacenter – a Cisco and NetApp converged infrastructure that combines best practices and industry leading support. The FlexPod Datacenter is where we run all of the applications required to configure, monitor, and maintain the network. These applications include video surveillance, WAN optimization, wireless network management, and a number of custom applications, just to name a few.

This summer at Cisco Live US in Orlando, we’re excited to once again be running Cisco Live on a FlexPod containing 2 Cisco UCS blade chassis, 2 Cisco Nexus 7Ks, and 4 NetApp AFF 8060s in a MetroCluster configuration. We designed this infrastructure with a few considerations in mind.

The primary design consideration for our infrastructure is business continuity. During the setup week, there are a lot of things going on. With hundreds of people on site tearing down from past conferences while also setting up for Cisco Live, there is plenty of opportunity for accidents. At Cisco Live Europe in Barcelona, an electrician pulled the power on one of our data centers, thinking it was used for the hairstyling convention that had just ended. It’s very important that throughout any issues we may encounter with any part of the infrastructure – even an entire data center – we continue to serve data and run the applications. For that, we turned to NetApp MetroCluster – a solution which synchronously mirrors your data between two data centers and has enough redundancy built in that you could lose a full data center, fail over, and continue serving data. With MetroCluster as our storage solution, we were able to fail over to the surviving data center when the breaker to our data center was cut, continue serving data, and switch control back once we had regained power.


In addition to business continuity, the flexibility and performance of the infrastructure is very important. Because of the fast-moving environment at Cisco Live, we often don’t have good requirements until right before the show. Because of this, we need a data center infrastructure that is flexible enough to handle all kinds of different workloads and protocols. Regardless of the chosen data center design, it needs to perform well. All Flash FAS is perfect as a platform that provides all the features we need combined with great performance. For example, at Cisco Live Barcelona this year, we planned on implementing Fibre Channel through the Cisco Nexus 5548. This required 2 Fibre Channel ISLs between the sites in addition to the Ethernet ISLs which we had for NFS traffic. At the last minute, the venue communicated that there were not enough links between the data centers and the plans would need to be changed. All Flash FAS made this an easy decision. It cost us just a small amount of time to convert our SAN boot infrastructure from Fibre Channel to iSCSI. With some storage controllers, this flexibility isn’t available. Regardless of any design we’ve chosen for Cisco Live, the All Flash FAS has been able to consistently respond to IOs with sub-millisecond latency. The NOC team has found that the controllers are capable of great performance with any workload that we choose.

One great thing about the NOC at Cisco Live US is that you can see all the infrastructure being used by stopping by the NOC booth in The Hub. We’d love for any attendees at CLUS in Orlando to swing by and talk about the infrastructure and any other NOC related things you’re interested in!

Mother’s Day Scallops

This isn’t the most typical thing to cook on the Big Green Egg, especially for my first post. However, tonight I whipped up an early Mother’s Day dinner for my wife. This one was mostly an excuse to use a new Himalayan salt block that was given to me as a gift.

After a little research, I determined that scallops are one of the best things to cook on a Himalayan salt block, which is perfect because my wife loves scallops! The trick with the salt block is to heat it up slowly to prepare for the cook. I let the grill come up to 400 degrees over around an hour, as you can see on my graph from the Flame Boss.

During this time, I had some help from the little man salting the asparagus…

Now, on to the scallops. Grilling scallops is a great way to bring out their natural sweetness, which is a great combination with the salt flavor they pick up from the salt block. Before placing them on the grill, I brushed on a sauce of butter, lemon juice, and honey to help brown them up a little. They cook quickly; I left them on for around 3 minutes per side. I thought the flavor and texture of the finished scallops were excellent, although I wish they had browned a little more. I probably tried to put too many on the salt block at once; the liquid from all the scallops at the same time probably prevented some of the browning.