83: Nutanix Weekly: Just-in-Time Cloud Storage for Nutanix Cloud Clusters

Mar 18, 2024

In the first iteration of Nutanix Cloud Clusters, your built-in NVMe storage was the only capacity you could tap into.  With Nutanix’s recent announcement of Just-In-Time storage with AWS Elastic Block Storage (EBS) you can now customize the amount of storage you need in a cluster with expansion capabilities.  

In this episode, we talk about the news from a blog post on nutanix.com posted by Dewayne Lessner about EBS with Nutanix Cloud Clusters.


Host: Phil Sellers
Co-Host: Harvey Green
Co-Host: Jirah Cox
Co-Host: Ben Rogers

Philip Sellers: Hello, everyone, and welcome to another edition of Nutanix Weekly. This is your host, Phil Sellers, joined by a great panel of folks here. Another XenTegra podcast with, what do we call it? With context.

00:00:21.730 –> 00:00:27.710
Philip Sellers: Joined today with Harvey Green, CEO of XenTegra-GOV. Harvey, how you doing?

00:00:28.180 –> 00:00:29.990
Harvey Green III: I am pretty good. How are you?

00:00:29.990 –> 00:00:35.330
Philip Sellers: Doing pretty well, also. Joined by Jirah Cox, the man, the myth, the legend.

00:00:35.470 –> 00:00:36.470
Philip Sellers: Jirah, how are you?

00:00:36.738 –> 00:00:40.500
Jirah Cox: Hey, Phil, I'm doing good. I don't feel too legendary, but I'm here.

00:00:40.907 –> 00:00:44.569
Philip Sellers: Well, I guess it's better than being infamous, right?

00:00:45.050 –> 00:00:46.464
Jirah Cox: And it beats being mythical. So.

00:00:47.181 –> 00:00:54.890
Philip Sellers: And we are joined by the mythical Ben Rogers. Ben, how are you, man?

00:00:55.410 –> 00:01:03.729
Ben Rogers: Good. For those that are seeing this, I am safe. I am in a good location, so don't be worried that I look like I'm in a dungeon.

00:01:03.990 –> 00:01:04.690
Harvey Green III: We’re all good.

00:01:06.530 –> 00:01:09.729
Harvey Green III: We were checking on him just a little bit ago, so.

00:01:10.020 –> 00:01:19.140
Philip Sellers: Yeah, trying to look for clues. Make sure that Ben's not been kidnapped or otherwise detained against his will.

00:01:19.340 –> 00:01:27.670
Ben Rogers: Wouldn't it be so impressive to be so important that somebody wanted to kidnap me?

00:01:27.956 –> 00:01:33.390
Philip Sellers: Or like my kid told me the other day: you're not a kid, you can't be kidnapped. So.

00:01:36.370 –> 00:01:40.649
Ben Rogers: Mine would go, and in 24 hours they would want to give you back. They'd be paying.

00:01:40.650 –> 00:01:43.145
Philip Sellers: To take you back.

00:01:44.283 –> 00:02:00.009
Philip Sellers: Well, guys, thanks so much for joining us. So we're here to talk Nutanix. It's another week. Things haven't stopped; things are continuing to move forward. Nutanix is continuing to

00:02:00.030 –> 00:02:10.770
Philip Sellers: to make progress and release new things. And that's one of the things we've got today. So we've got a new blog post out, fresh off the presses,

00:02:10.780 –> 00:02:21.310
Philip Sellers: Just-in-Time Cloud Storage for Nutanix Cloud Clusters, written by Dwayne Lessner. Shout out to Dwayne, thank you for giving us something to discuss.

00:02:21.550 –> 00:02:33.496
Philip Sellers: We appreciate all of the contributors and folks inside Nutanix that are constantly investing in the community and giving us good information, keeping us apprised of what's going on.

00:02:33.900 –> 00:02:37.359
Philip Sellers: so shout out to Dwayne today.

00:02:37.360 –> 00:02:45.520
Ben Rogers: Philip, can I interrupt you for something? Because I think I'm going to make the guests of this podcast, the people that listen to this, happy.

00:02:45.610 –> 00:03:14.749
Ben Rogers: I'm going to admit that this is a subject I do not know a lot about today. So I am coming here as a Nutanix employee, but as a Nutanix employee I'm looking at a service that I really don't know a lot about. So at this point I kind of feel like I'm listening to the podcast to learn. So for your listeners out there, we don't always come prepared, and we don't always know all the subjects. And this is definitely one that I am not up to speed on. So I'm very much looking forward to the podcast and learning about EBS storage in

00:03:14.750 –> 00:03:16.149
Ben Rogers: the NC2 cloud.

00:03:16.400 –> 00:03:41.329
Philip Sellers: Yeah, this is the exciting one, right? I mean, that's the thing, right? We're talking about things that are brand new. There's not a lot of experts, especially outside of Nutanix. This is brand new, hot off the presses. It's been in the wild, at least this blog post has, for a whole 5 days. But none of this stuff

00:03:42.500 –> 00:03:57.929
Philip Sellers: magically happens, right? I mean, this is part of a roadmap, part of a long-term plan that folks at Nutanix have been executing on. And that's the cool part: now we're getting to benefit from

00:03:57.930 –> 00:04:12.820
Philip Sellers: that long-term roadmap. Yeah, I mean, as we talk about cloud clusters and things like that, let's take a quick minute before we jump into the blog post and level set. What is a Nutanix Cloud Cluster, Harvey?

00:04:14.550 –> 00:04:32.229
Harvey Green III: That is the, I won't call it the newest because it's not that new, but that is the extension of Nutanix, the architecture and the platform, to run within the public cloud. Specifically, Amazon AWS and Microsoft Azure.

00:04:32.950 –> 00:04:33.290
Philip Sellers: Yeah.

00:04:33.720 –> 00:04:40.539
Philip Sellers: yeah, what's different about Nutanix Cloud Clusters versus any other Nutanix installation, Jirah?

00:04:42.240 –> 00:04:56.269
Jirah Cox: Actually, shockingly little, right? They run the exact same Nutanix code, right, for the hypervisor, the storage fabric, the management layers, the automation, as any cluster you'd run on-prem. Right? So really, you're just

00:04:56.623 –> 00:05:00.870
Jirah Cox: rather than owning hardware in your data center, you're renting hardware in somebody else's.

00:05:00.870 –> 00:05:14.744
Philip Sellers: Yeah. So I mean, that's a great level set and place to start: this is the same Nutanix we know and love from on-prem. It's just now sitting in someone else's data center, in this case AWS and Azure.

00:05:15.628 –> 00:05:17.659
Philip Sellers: So you know that

00:05:17.670 –> 00:05:33.593
Philip Sellers: back to Ben's point, start there, really. What are we talking about? Where are we talking about it? For the purposes of this blog post, we're going to be spending a lot of time talking about AWS, because that's where the new innovation is happening

00:05:34.030 –> 00:05:43.250
Philip Sellers: on this topic. So, Jirah, why don't you kick us off and tell us a little bit? What's the news here? What's the news in NC2?

00:05:44.440 –> 00:05:48.880
Jirah Cox: Yeah, or maybe even Why.

00:05:49.040 –> 00:05:55.269
Jirah Cox: why is this so great to have, right? And what are the circumstances where customers would want to use this, right? So

00:05:55.470 –> 00:05:59.150
Jirah Cox: one more compare and contrast, right, with

00:05:59.380 –> 00:06:14.319
Jirah Cox: with running clusters on-prem. When the hardware was in your data center, right, you could assign a human to go touch that hardware, change it any way you want to, right? Specifically, you know, maybe increase the size of the storage there, right? Pop in some new SSDs or whatever to grow a cluster

00:06:14.330 –> 00:06:17.020
Jirah Cox: and expand that storage pool

00:06:17.040 –> 00:06:18.060
Jirah Cox: resource

00:06:18.280 –> 00:06:37.070
Jirah Cox: With, of course, a rented node in AWS at the bare metal layer, there you have a few options of flavors of bare metal node type, but it is still off of a menu. Right? You're gonna get this node or that node. It's gonna have certain dimensions, certain parameters, storage being one of them. And there's really kind of no

00:06:37.310 –> 00:06:58.869
Jirah Cox: size of check you can write, no human you can call, to say, hey, can you go add 2 more SSDs to these nodes in that data center for me, please and thank you? Right, like, that's gonna go nowhere. So the hardware configuration is kind of locked. So then the scenario for our customers is, how do we help our customers enjoy the same kind of flexibility, right,

00:06:59.332 –> 00:07:04.517
Jirah Cox: to create clusters that are tailored to their needs: CPU, memory, storage,

00:07:05.280 –> 00:07:16.709
Jirah Cox: when they move, say, from an on-prem data center or a colo into public cloud. And that's the use case for this, right? We haven't had that before. If a customer wanted to, say, replicate

00:07:16.890 –> 00:07:37.680
Jirah Cox: workloads to NC2, because that's pretty fashionable these days, right? Using, you know, owned resources on-prem in a primary data center. But a lot of customers look at it and say, hey, I don't wanna do a DR data center anymore. Move my DR to the cloud, right? I wanna rent that instead, or use that as my first kind of cloud workload, as my DR for my on-prem VMs.

00:07:37.820 –> 00:07:44.781
Jirah Cox: So if we come back to the customer and say, here's a design for you: maybe you need

00:07:45.320 –> 00:07:51.870
Jirah Cox: 3 or 4 nodes' worth of hardware to run your VMs, but you might need 6 nodes or 8 nodes

00:07:51.870 –> 00:07:52.490
Philip Sellers: With the.

00:07:52.490 –> 00:08:00.339
Jirah Cox: Storage, based on this cookie-cutter node size, to store your VM data. Well, now the customer is paying quite a bit of money for those

00:08:00.440 –> 00:08:06.189
Jirah Cox: 4, 5, 6, 7, 8 nodes that are just there for data storage and aren't there for the compute-side need,

00:08:06.280 –> 00:08:15.519
Jirah Cox: or they're not needed for the application; they're only needed, maybe, during an activation of the DR environment, right? So then paying for them all the time is a bit prohibitive.

00:08:15.870 –> 00:08:23.999
Jirah Cox: So that's kind of the scenario of helping customers out that this feature now is justified by and born into, right? That's the context.
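The sizing mismatch Jirah describes can be put in back-of-the-napkin terms: without external storage, the cluster has to be sized for whichever resource runs out first. The node specs below are made-up placeholder numbers for illustration, not real NC2 bare-metal dimensions:

```python
import math

def nodes_required(vm_cores: int, usable_tib_needed: float,
                   cores_per_node: int, usable_tib_per_node: float) -> dict:
    """Cluster size is driven by whichever resource runs out first."""
    for_compute = math.ceil(vm_cores / cores_per_node)
    for_storage = math.ceil(usable_tib_needed / usable_tib_per_node)
    return {"for_compute": for_compute,
            "for_storage": for_storage,
            "cluster_size": max(for_compute, for_storage)}

# Hypothetical DR target: light compute, heavy storage.
plan = nodes_required(vm_cores=96, usable_tib_needed=120,
                      cores_per_node=36, usable_tib_per_node=15)
print(plan)  # compute alone needs 3 nodes, but storage forces 8
```

With EBS in the picture, the storage term can be satisfied off-node, so the cluster can shrink back toward the node count the compute actually needs.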

00:08:24.430 –> 00:08:47.530
Philip Sellers: Yeah, and you pointed out something I think is really important. This often comes up when we're talking to customers about DR scenarios. When we're talking about rehosting an entire workload into NC2, those generally make sense: you need the number of nodes you need, based on compute and based on storage. Those use cases make a lot of sense because

00:08:47.530 –> 00:08:58.870
Philip Sellers: you're gonna need and consume all of that upfront. But with DR, you just need a place to land that data. You're not necessarily always going to need the compute to go with it.

00:08:58.870 –> 00:09:04.759
Philip Sellers: And so there's this out-of-balance thing that happens with

00:09:04.780 –> 00:09:10.269
Philip Sellers: your nodes, because generally they don't have enough storage for that particular use case.

00:09:11.270 –> 00:09:12.110
Philip Sellers: You know.

00:09:12.460 –> 00:09:29.049
Philip Sellers: we're talking about AWS today, but I know Harvey and I have worked on a couple of deals where we had an Azure customer, and there are even more restrictions on the Azure NC2, because there are 2 node types, as opposed to, I think, we're up to 6 or 8

00:09:29.170 –> 00:09:32.570
Philip Sellers: in AWS now. Is it 6?

00:09:32.980 –> 00:09:33.429
Philip Sellers: Something like that.

00:09:33.650 –> 00:09:34.310
Jirah Cox: For sure. Yeah.

00:09:34.521 –> 00:09:35.998
Philip Sellers: We would have to actually go.

00:09:36.210 –> 00:09:37.000
Jirah Cox: Correctly accurate.

00:09:37.510 –> 00:09:38.020
Philip Sellers: Yeah.

00:09:38.020 –> 00:09:39.110
Harvey Green III: Yeah, we have to.

00:09:39.110 –> 00:09:40.329
Jirah Cox: I can look at it for you.

00:09:41.960 –> 00:09:48.200
Philip Sellers: But there's at least more variety to choose from. But again, in DR use cases, it doesn't

00:09:48.310 –> 00:09:53.803
Philip Sellers: quite fit. So I think that's a great way to set up the discussion.

00:09:54.840 –> 00:09:56.789
Philip Sellers: so I’m not gonna

00:09:57.120 –> 00:10:01.060
Philip Sellers: make us wait any longer. What are we announcing, Jirah?

00:10:01.740 –> 00:10:11.530
Jirah Cox: So with this new feature here, right, with the ability to connect EBS, which is Elastic Block Store, right, additional block storage that exists

00:10:11.570 –> 00:10:15.670
Jirah Cox: outside the customer's nodes, but still within the parameters of AWS,

00:10:15.710 –> 00:10:25.749
Jirah Cox: you can now attach block storage to the Nutanix nodes to create additional storage, right, to present more storage, basically,

00:10:25.800 –> 00:10:37.619
Jirah Cox: very much akin to the scenario I talked about before, where you would just pop additional SSDs into nodes that you own, nodes you control. The same kind of outcome is now available for cloud clusters as well.

00:10:37.620 –> 00:10:56.799
Jirah Cox: This gives you a lot more flexibility as you design this, to say, I don't need a lot of compute, perhaps, for DR, right? All I'm gonna run is my CVMs to host cloud storage and be my replication target, and I just want all the storage I can talk to, then, to replicate to, as my actual data tier there.

00:10:57.330 –> 00:10:57.740
Philip Sellers: Yeah.

00:10:58.700 –> 00:10:59.260
Philip Sellers: for those.

00:10:59.260 –> 00:11:02.720
Harvey Green III: I mean confetti and special effects.

00:11:03.100 –> 00:11:04.540
Harvey Green III: And there’s what you’re.

00:11:04.540 –> 00:11:05.987
Jirah Cox: Dressing, good day.

00:11:07.870 –> 00:11:12.461
Philip Sellers: Yeah, I mean, we need a sound machine for this podcast.

00:11:16.570 –> 00:11:17.680
Jirah Cox: We’ll we’ll put that in.

00:11:17.680 –> 00:11:19.170
Harvey Green III: Not scared of where that would

00:11:21.025 –> 00:11:21.760
Harvey Green III: good.

00:11:21.760 –> 00:11:25.629
Jirah Cox: I'm sure the podcast host can put that in during the edit, right? There is an edit phase, right?

00:11:26.381 –> 00:11:31.128
Philip Sellers: Yeah, there's strictly no editing on this one, right?

00:11:33.880 –> 00:11:34.660
Harvey Green III: Hey! Man!

00:11:34.660 –> 00:11:35.169
Philip Sellers: But have you.

00:11:35.170 –> 00:11:37.981
Harvey Green III: We got one take, just one.

00:11:38.450 –> 00:11:39.250
Philip Sellers: What, what’s why.

00:11:39.250 –> 00:11:39.870
Harvey Green III: In.

00:11:41.217 –> 00:11:41.989
Philip Sellers: No, I mean.

00:11:41.990 –> 00:11:42.390
Harvey Green III: Come on!

00:11:43.030 –> 00:11:57.840
Philip Sellers: It's fantastic, I mean. Now you've got that extensibility, right? So particularly for the DR use case, you add the extra block storage you need. It takes some planning, some sizing on the front end, but now you can accommodate that in

00:11:58.587 –> 00:12:12.100
Philip Sellers: what we would call a pilot light cluster. Is that a term that's well known, I guess? I mean, pilot light, when we start talking about DR, do you guys know what that refers to?

00:12:12.610 –> 00:12:13.769
Philip Sellers: Yeah, it’s just a.

00:12:13.770 –> 00:12:14.120
Jirah Cox: Yeah.

00:12:14.120 –> 00:12:15.340
Harvey Green III: The frameworks.

00:12:15.380 –> 00:12:24.709
Harvey Green III: I've actually heard more people referring to it that way outside of conversations that I've had before. And I was like, oh, there, I'm not the only one that says this. Alright, cool.

00:12:24.710 –> 00:12:40.249
Philip Sellers: Yeah, and that's kind of the point of reference, right? It's sort of like a pilot light on your stove or on your gas logs or something. It's there, running and waiting for you to turn the switch on when you need to turn it on.

00:12:40.270 –> 00:12:56.089
Philip Sellers: And that way you're ready to go when you turn on the switch. And so it's a minimal cluster in terms of Nutanix, but that makes a minimal cluster now much more powerful, much more valuable to the customer. So

00:12:56.996 –> 00:13:00.193
Philip Sellers: this is cool stuff, I mean, where?

00:13:01.280 –> 00:13:04.119
Philip Sellers: where do you see additional

00:13:05.023 –> 00:13:09.259
Philip Sellers: capabilities kind of coming in, from your perspective, Harvey?

00:13:10.420 –> 00:13:13.730
Harvey Green III: I mean, ultimately for me, this is

00:13:14.560 –> 00:13:18.900
Harvey Green III: this is protection against all the stuff you don’t know is coming.

00:13:19.771 –> 00:13:35.329
Harvey Green III: Because you go and design this, and you can design it as accurately as you want to all day, but there will be things that you did not account for that somehow have to get taken care of. And this is your get-out-of-jail-free card.

00:13:36.880 –> 00:13:58.610
Philip Sellers: And Ben, I'll throw it to you. As you're working with customers, and I know as I'm working with customers, future-proofing is a big part of what we do when we do sizing and create configs for them. Being able to slide in, I mean, like Jirah said, being able to effectively slide in more capacity. How huge is that for cloud?

00:13:59.100 –> 00:14:13.199
Ben Rogers: Well, it's gonna be huge for us, particularly. I'm speaking now on the sales end. You know, we've done a lot of deals where the clusters got heavy, compute-wise, because we were trying to

00:14:13.210 –> 00:14:15.520
Ben Rogers: solve a storage problem.

00:14:15.590 –> 00:14:34.129
Ben Rogers: And so now, with our ability to add storage to these clusters, that's going to be great, because now we don't have to inflate the compute costs anymore. We can really look at what are the services that the workloads need and properly design them. So from an IT configuration perspective, this is going to be awesome.

00:14:34.130 –> 00:14:48.850
Ben Rogers: I do have a few questions about this. I see in the drawing here that we're adding this storage directly to the nodes. So essentially, to Jirah's comment, this is like popping some extra SSD or NVMe into these nodes.

00:14:49.090 –> 00:15:12.800
Ben Rogers: Jirah, do we have to add equivalent storage to each node? Or can you explode one node? Kind of give us an idea: how does this peanut-butter spread across the cluster? Is it done so each node's gonna get an additional EBS slice, or could we have EBS on one node and not the other nodes? I'm kind of confused there about how we would architect this.

00:15:13.890 –> 00:15:28.420
Jirah Cox: Yeah. So details, of course, do matter, right? The article calls out that at this point we're gonna look at adding the storage to the nodes in the cluster with uniformity, right? So that's every node in the cluster. Also, just one more point, which is not

00:15:28.570 –> 00:15:41.960
Jirah Cox: a huge consideration, but it's a minor consideration: this is a cluster-creation-time decision. Now, that's also pretty easy to deal with, right, because with this ability to have this hardware on demand,

00:15:42.010 –> 00:16:08.430
Jirah Cox: moving from cluster size to cluster size and moving those workloads within a cloud AZ actually is way easier, right? Because there's not that whole on-prem data center experience of, like, I order a truckload of new nodes to live next to my old nodes, and have to do something with the old nodes. No, we're just gonna, for a few hours, rent 2 different clusters, move workloads over, and move from, you know, profile A to profile B. So it's not, like, quote, a hot-add exactly. In the end, the outcome is the same as adding those SSDs to a node,

00:16:08.430 –> 00:16:13.460
Jirah Cox: but the workflow to get there is a little bit different than just popping them in, because, of course, it is different.

00:16:15.580 –> 00:16:27.200
Philip Sellers: And that's a shout out to native data services, right? You can just use built-in replication and all of the great things we know from a DR perspective to move from one cluster to another.

00:16:28.050 –> 00:16:46.019
Jirah Cox: Totally. It's very much a cloud-principles, fail-forward, declarative-infrastructure version of the same outcome, right, of get more storage, versus a sort of day-2 pets operation of, like, find a way to do it, you know, while the airplane's in the sky.

00:16:46.020 –> 00:16:46.680
Philip Sellers: Yeah.

00:16:47.186 –> 00:16:52.213
Philip Sellers: sort of like, yeah, versus refueling in the sky. I like that.

00:16:54.324 –> 00:17:05.432
Philip Sellers: Yeah. So the EBS is attached, it looks like, at different mount points. I know the listeners can't see it; the blog post has got a great

00:17:05.790 –> 00:17:17.409
Philip Sellers: visual here that Ben mentioned, that kind of shows the EC2 bare metal hosts, and you've got your controllers, your AHV, your user VMs, and then the storage controller with different,

00:17:17.619 –> 00:17:23.899
Philip Sellers: are those mount points? One for AHV, one for the CVM, local storage and remote storage.

00:17:24.599 –> 00:17:27.689
Philip Sellers: Is that effectively how it happens under the covers.

00:17:28.770 –> 00:17:40.830
Jirah Cox: Yeah. The neat thing, what's really cool here, right, is that EBS storage, when it's presented through the Nitro card in AWS, basically just looks like a local block device, right? So we really haven't had to teach,

00:17:40.860 –> 00:17:47.180
Jirah Cox: no disrespect to engineering and all the work put into this feature, testing it and qualifying it for customers,

00:17:47.180 –> 00:18:12.450
Jirah Cox: but it's not like we had to go teach the CVM to speak a whole new language here, right? It's basically, go find other local block storage within the node and make it part of a usable cluster, which we've been doing since literally day one of the company, right? So the article calls out here that the way Stargate, which is our main I/O handler, handles the storage, right, is very similar to a node that would have physically present local NVMe and physically present local SSD.

00:18:12.450 –> 00:18:34.460
Jirah Cox: Of course, those are both fast, but one is a little bit faster, so that's where writes are gonna land, and then destage down to the SSD tier. Similar here: if there's local NVMe in the box and then EBS attached to it, all that local NVMe is, of course, a tad bit faster, a lot more local. We'll use that first and then spill down over to that EBS storage.
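The tiering behavior Jirah describes, writes landing on the fastest local tier and spilling down to the slower one as it fills, can be sketched as a toy model. This is purely illustrative and not Stargate's actual placement logic:

```python
class TieredStore:
    """Toy two-tier store: fill hot local capacity first, then spill to a remote tier."""

    def __init__(self, local_gib: int, ebs_gib: int):
        self.local_free = local_gib   # e.g. node-local NVMe
        self.ebs_free = ebs_gib       # e.g. attached EBS capacity

    def write(self, gib: int) -> str:
        # Prefer the local tier while it has room.
        if gib <= self.local_free:
            self.local_free -= gib
            return "local"
        # Otherwise spill the remainder down to the EBS tier.
        spill = gib - self.local_free
        if spill > self.ebs_free:
            raise RuntimeError("cluster out of space")
        self.local_free = 0
        self.ebs_free -= spill
        return "spilled"

store = TieredStore(local_gib=10, ebs_gib=40)
print(store.write(6))   # fits locally -> "local"
print(store.write(8))   # 4 GiB spills to EBS -> "spilled"
```

The real system also destages cold data from the hot tier over time; the sketch only captures the "fastest tier first" preference.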

00:18:35.563 –> 00:18:47.740
Ben Rogers: Jirah, what that's telling me is our data locality and all the goodness that we have built inside the cluster still remains. The cluster's smart enough to figure out if the load needs to be on local storage or on this EBS storage?

00:18:49.412 –> 00:19:08.699
Jirah Cox: Right? Of course. I mean, that was the first real data handling, data governance trick, right? It was understanding what is the hottest part of the data and keeping that as localized and as close to the workload as possible. Right? Remember how we started the company, right, with nodes that might have, say, a 100-gig SSD and, like, 2 2-TB spinning disks?

00:19:08.700 –> 00:19:27.459
Jirah Cox: Not a whole lot of very hot storage there, right? So we had to be pretty darn sure what was the hot-tier storage, what needed to go there. And, by the way, that SSD would cost you like 10 grand for a hundred gig back in the day. That was the long pole in the tent of the hardware BOM. So even though, of course, as

00:19:27.690 –> 00:19:40.869
Jirah Cox: data center hardware has evolved over time, right, and cost per gig has come down dramatically, our data placement engine, right, profiling storage and knowing what needs to be placed closest to the workload, of course, still has value and still is in place.

00:19:42.810 –> 00:19:54.070
Philip Sellers: So for someone coming into the AWS conversation, can you tell us a little bit more about that Nitro card? What is it, actually? Because,

00:19:54.110 –> 00:20:02.140
Philip Sellers: like many things in the AWS arsenal, it has a branded name, and that may not actually explain what it's doing here.

00:20:03.000 –> 00:20:21.590
Jirah Cox: Yeah. So, great question. I'm almost as much of a layman as probably everybody else on the call. But my view of the world, right, is that the Nitro card is the DPU, right, the super-fancy NIC within the node, right, that allows AWS to do all of the awesome AWS stuff

00:20:21.590 –> 00:20:38.199
Jirah Cox: at the data center level for the hardware, right? So that's where your networking gets handled, and in this case also storage as well. So things that they can do, like present external storage as a local device, of course, that's because the Nitro card itself is custom silicon. You only get Nitro NICs

00:20:38.260 –> 00:20:45.609
Jirah Cox: within AWS, right? It's not hardware you can run in your own data center, but it's part of what helps them run their cloud at scale as a hyperscaler.

00:20:46.260 –> 00:20:46.880
Philip Sellers: Yeah.

00:20:47.100 –> 00:20:52.919
Philip Sellers: so we can think of it as sort of like a cross between an out-of-band management

00:20:53.220 –> 00:20:55.193
Philip Sellers: interface and

00:20:56.920 –> 00:21:00.309
Philip Sellers: a physical PCIe device, because it sits on the PCIe bus.

00:21:01.965 –> 00:21:02.360
Jirah Cox: Sure.

00:21:02.360 –> 00:21:03.130
Philip Sellers: Yeah, on the PCIe.

00:21:03.130 –> 00:21:04.310
Jirah Cox: Haven’t been

00:21:04.400 –> 00:21:07.799
Jirah Cox: in band overlay underlay

00:21:08.030 –> 00:21:23.019
Jirah Cox: Plus, if you squint, I guess it could play the role of an external storage HBA, plus, like, a SAS expander. It's sort of all of that rolled into one, because ultimately all those things are just software-definable functions, right?

00:21:23.410 –> 00:21:33.599
Jirah Cox: And of course, everyone at AWS now is rolling their eyes at our terrible analogies for their, I'm sure, awesome silicon that they're very proud of.

00:21:34.337 –> 00:21:39.469
Harvey Green III: You hit the nitro button when you want something to be fast.

00:21:39.825 –> 00:21:40.180
Philip Sellers: Yeah.

00:21:40.480 –> 00:21:42.120
Philip Sellers: Yeah. It’s a very quick.

00:21:42.120 –> 00:21:44.249
Jirah Cox: Yet another technology, badly described.

00:21:47.410 –> 00:21:56.179
Philip Sellers: But it's the smarts, I mean, I think this sums up what you just said. It's the smarts that makes this really a composable,

00:21:56.921 –> 00:22:04.150
Philip Sellers: just to throw out another buzzword in the infrastructure industry, but a composable type of hardware.

00:22:04.460 –> 00:22:08.190
Philip Sellers: So again, back to what you were saying about software-defined:

00:22:08.310 –> 00:22:21.239
Philip Sellers: you can take this and turn it into something more through the software. And that's what the power of the Nutanix platform on this bare metal is actually giving to us: the composability to make this into a

00:22:21.570 –> 00:22:27.020
Philip Sellers: much more useful form than its native components.

00:22:30.080 –> 00:22:31.080
Philip Sellers: you know.

00:22:31.350 –> 00:22:39.930
Philip Sellers: we talk a little bit more in this blog about, you know, adding and tiering, but we've already covered that.

00:22:40.030 –> 00:22:44.879
Philip Sellers: Let's talk a little bit about the on-demand growing.

00:22:45.710 –> 00:22:53.570
Philip Sellers: You know, from a DR use case, we've got that 3-node pilot light cluster. What happens when

00:22:53.680 –> 00:22:59.289
Philip Sellers: we want to do more, or when we need to expand? What does that look like?

00:23:00.170 –> 00:23:25.599
Jirah Cox: Yeah, a really cool outcome that this feature now gives us is that before, we had really one way to grow a cluster, and that was to literally grow the cluster: pop in more nodes. We did add support for mixing node types in the cluster in the past, and that was really great to have, so you could have larger nodes to start with, and then maybe add more compute-heavy nodes over time. Now we have 2 ways to grow a cluster, right? That first one stays on the truck: you can always add more.

00:23:25.600 –> 00:23:26.290
Philip Sellers: One ghost.

00:23:26.290 –> 00:23:48.120
Jirah Cox: Cluster. But now, if you have a cluster that has this EBS feature attached to it, you can also scale up the EBS storage itself. So if you just need more storage, there's a whole separate lever, candidly a whole separate price point, for scaling there, to say, just add more storage to a cluster without needing to reach for the add-more-compute, add-more-nodes solution as well.

00:23:48.400 –> 00:23:50.569
Jirah Cox: So great ways to solve more problems.

00:23:50.890 –> 00:24:01.200
Jirah Cox: And what's cool now is the article calls out that you can actually attach really a dramatic amount of storage here, right? So we still use our kind of rule of thumb here that we want to see about 20%

00:24:01.200 –> 00:24:23.779
Jirah Cox: of the cluster storage stay within the nodes, right? We don't wanna run a hundred percent fully EBS storage. But you can have up to 4x as much attached, right? So you can have 20% locally in the box, 80% off on EBS. So that's, A, a very large amount of storage, and, B, very cost-effective in terms of the tier that you're gonna use there for that storage as well.
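Jirah's 20/80 rule of thumb makes the ceiling easy to compute: if at least 20% of cluster storage should stay node-local, the EBS side can be at most 4x the local capacity. A quick sketch with made-up capacities:

```python
def max_ebs_capacity(local_tib: float, min_local_fraction: float = 0.20) -> float:
    """Largest EBS capacity that keeps local storage at >= min_local_fraction
    of the cluster total, i.e. local / (local + ebs) >= min_local_fraction."""
    return local_tib * (1 - min_local_fraction) / min_local_fraction

local = 25.0                       # hypothetical node-local TiB across the cluster
ebs_cap = max_ebs_capacity(local)  # 4x local -> 100.0 TiB
total = local + ebs_cap            # 125.0 TiB total cluster storage
print(ebs_cap, total)
```

Solving `local / (local + ebs) >= 0.20` for `ebs` gives `ebs <= local * 0.80 / 0.20`, which is where the 4x factor comes from.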

00:24:24.650 –> 00:24:27.340
Ben Rogers: So, Jirah, I want to make sure I understand here:

00:24:27.380 –> 00:24:54.939
Ben Rogers: we have to make a decision to use EBS storage at the time of cluster creation. So I know I'm building a DR cluster, I know I'm gonna want to use some EBS, I want to go ahead and establish that at the time of build. But you're also telling me that when I have to declare a disaster, I can grow these EBS storage volumes dynamically? They can be increased on the fly as I'm bringing up my disaster recovery scenario?

00:24:56.190 –> 00:25:17.919
Jirah Cox: Yeah. So you definitely can scale up the EBS storage at a later time; they call that out here in the article. I would think what's probably more realistic there, Ben, is you could add more nodes for a DR disaster, right, when you need more compute. But of course you can scale up the EBS at any time, right? That would be more for, like, data-growth handling, and then light up more nodes if I need more compute than my pilot light cluster might be able to handle

00:25:18.430 –> 00:25:20.749
Jirah Cox: from the CPU and memory side of things.
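The two growth levers Jirah lays out reduce to a simple decision sketch. This is only an illustration of the reasoning, not a real NC2 workflow or API:

```python
def growth_actions(need_more_compute: bool, need_more_storage: bool) -> list[str]:
    """Pick the matching lever: EBS for storage growth, nodes for compute growth."""
    actions = []
    if need_more_storage:
        actions.append("scale up EBS storage")   # no new bare-metal spend
    if need_more_compute:
        actions.append("add bare-metal nodes")   # brings CPU/RAM (and some storage)
    return actions or ["no change"]

print(growth_actions(need_more_compute=False, need_more_storage=True))  # data growth
print(growth_actions(need_more_compute=True, need_more_storage=False))  # DR activation
```

The point of the sketch is that the two needs are now decoupled: data growth no longer forces you to buy compute, and a DR activation no longer forces a storage decision.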

00:25:21.770 –> 00:25:31.390
Ben Rogers: Awesome. I was worried you were gonna say that we would have to do another, you know, new cluster with new sizes and migrate over there. It's good to see that we can expand it on the fly, though.

00:25:32.230 –> 00:25:50.060
Jirah Cox: Yeah, yeah, that's just for the no-EBS-attached to EBS-attached shift. And if you're thinking about this, I would say this is probably gonna become a pretty instant best practice to say, attach a minimal amount of EBS to your cluster whether you think you're gonna need it or not, because over time you're probably gonna change your mind.

00:25:50.060 –> 00:25:50.500
Philip Sellers: Yeah.

00:25:50.500 –> 00:25:54.379
Jirah Cox: You do need it. So start with that attached to begin with.

00:25:55.270 –> 00:25:55.850
Philip Sellers: I’m away so.

00:25:55.850 –> 00:25:57.850
Harvey Green III: Price is definitely Harvey approved

00:26:00.830 –> 00:26:01.930
Harvey Green III: of the day. It’s.

00:26:01.930 –> 00:26:04.180
Philip Sellers: So, kid tested, Harvey approved.

00:26:05.950 –> 00:26:23.841
Philip Sellers: Yeah, no, I mean, I think that's great advice. I mean, anyone looking at NC2, you know, putting in a minimal amount of EBS just buys you flexibility. So you know, you do have the ability to grow it. That's huge. And honestly, the first time I read through this I missed that point. So

00:26:24.715 –> 00:26:26.350
Philip Sellers: that’s another great

00:26:27.620 –> 00:26:28.630
Philip Sellers: great

00:26:28.700 –> 00:26:30.099
Philip Sellers: feather in the cap.

00:26:31.250 –> 00:26:36.139
Philip Sellers: you know one of the points here is that it can be orchestrated with prism. Central?

00:26:37.490 –> 00:26:40.199
Philip Sellers: and that’s also a huge

00:26:40.220 –> 00:26:46.599
Philip Sellers: advantage, because we're going to use Calm, we're going to use automation during those DR events, hopefully,

00:26:47.035 –> 00:26:51.760
Philip Sellers: to orchestrate things. So you can create a playbook that also gives you that

00:26:51.840 –> 00:26:53.410
Philip Sellers: ability to

00:26:53.740 –> 00:26:57.699
Philip Sellers: to increase capacity. So it’s just another avenue

00:26:57.810 –> 00:26:59.777
Philip Sellers: to meet your need.

00:27:01.020 –> 00:27:08.399
Philip Sellers: The net outcome, though, from a financial standpoint, is important, too. You mentioned it, but I want to make sure that

00:27:08.850 –> 00:27:14.030
Philip Sellers: you know, it gets explicitly called out again. I see Harvey shaking his head. Do you want to take this one? I mean.

00:27:14.490 –> 00:27:15.870
Harvey Green III: I think it’s.

00:27:15.870 –> 00:27:18.270
Philip Sellers: It’s a huge point to point out.

00:27:18.440 –> 00:27:22.770
Harvey Green III: Yeah, I’ll I’ll break it down to be very simple.

00:27:23.930 –> 00:27:29.593
Harvey Green III: More storage costs less than more nodes to get storage.
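Harvey's point is easy to see with purely hypothetical numbers (real AWS and NC2 pricing varies by region, node type, and commitment; every figure below is made up for illustration only):

```python
# All figures below are hypothetical, for illustration only.
node_cost_per_month = 5000.0     # assumed cost of one more bare-metal node + licensing
ebs_cost_per_tib_month = 80.0    # assumed cost of 1 TiB of EBS per month

extra_tib_needed = 20            # extra capacity the cluster needs

# Option A: attach more EBS to the nodes you already have.
cost_via_ebs = extra_tib_needed * ebs_cost_per_tib_month   # 1600.0

# Option B: add a whole node just to get its disks.
cost_via_node = node_cost_per_month                        # 5000.0
```

At any plausible prices the shape of the comparison is the same: paying only for the storage beats paying for compute you don't need.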

00:27:31.020 –> 00:27:32.260
Philip Sellers: Yeah, and especially.

00:27:32.260 –> 00:27:33.950
Ben Rogers: Pay attention to performance.

00:27:34.320 –> 00:27:35.140
Ben Rogers: Well, yes.

00:27:35.140 –> 00:27:45.449
Philip Sellers: Especially when we see these DR scenarios on AWS, a lot of times the licensing that goes along with it, plus the node itself,

00:27:45.660 –> 00:27:50.920
Philip Sellers: it becomes a cost prohibitive kind of thing. Now, with a 3 node cluster.

00:27:50.980 –> 00:28:00.080
Philip Sellers: you’ve got everything you need for a Dr. Scenario, so it’s really a great recipe for that elastic Dr. And that’s

00:28:00.140 –> 00:28:13.469
Philip Sellers: there’s another graphic here in in the the blog post that that shows the elastic Dr, you know, grow grow basically multiple different ways. You’ve got your base. You add more storage from ebs

00:28:13.560 –> 00:28:16.510
Philip Sellers: until you, Max, that out. And then you can add more nodes.

00:28:17.270 –> 00:28:25.270
Ben Rogers: Man, this is screaming. I know we're talking a lot about DR, and the article's geared towards DR, but this is also screaming test/dev, staging.

00:28:25.310 –> 00:28:31.579
Ben Rogers: I mean, this opens up the door for development, getting development off-site into the cloud. I mean, this is

00:28:31.590 –> 00:28:34.180
Ben Rogers: this is going to be exciting for sure.

00:28:34.960 –> 00:29:01.450
Philip Sellers: That also opens up, you know, for me, as I think about this, other types of workloads, maybe imaging and PACS, where they're very storage heavy, that weren't necessarily compatible with the storage, the node types that we had inside of AWS. This may be a great way of doing some of that, and tiering it down to a less expensive tier of storage, maybe not having to be all in on NVMe.

00:29:01.610 –> 00:29:05.429
Ben Rogers: We were riding the NUS bus last week, so it sounds like we're getting back on it this week.

00:29:08.390 –> 00:29:20.980
Philip Sellers: Yeah, I mean, again, there's so many good use cases. And we're talking DR here because that really taps into the flexibility of this announcement. But there's other use cases for sure.

00:29:22.030 –> 00:29:23.010
Harvey Green III: Absolutely.

00:29:25.740 –> 00:29:35.550
Philip Sellers: I'm drawing a blank at this point. So, Harvey, you wanna save me here? What other goodness should we talk about?

00:29:37.386 –> 00:29:39.009
Harvey Green III: I mean, this is.

00:29:39.240 –> 00:29:44.030
Harvey Green III: you know. Again we we talked about a little bit. This is overall, just another.

00:29:44.100 –> 00:29:48.119
Harvey Green III: a another play of flexibility. Another play at being able to

00:29:48.240 –> 00:29:59.460
Harvey Green III: go out and attach more storage to what you have node wise? Again. Just you know how I am. I I want flexibility always. I never wanted to

00:29:59.850 –> 00:30:08.449
Harvey Green III: into a corner this. This allows you definitely a way to get out. If you end up having to drag a bunch of data and

00:30:08.530 –> 00:30:19.599
Harvey Green III: don’t have enough storage to do that with from how things were before. Now, you’ve got the ability to just add that as you need it, which makes a huge difference.

00:30:20.190 –> 00:30:29.330
Philip Sellers: Yeah, flexibility is the underscore. Just another way. I think you summed that up nicely, Jirah. You know, you've got

00:30:29.340 –> 00:30:34.549
Philip Sellers: so many ways to grow. Now, this is just another great enhancement.

00:30:36.450 –> 00:30:37.950
Philip Sellers: final thoughts, guys.

00:30:39.880 –> 00:30:47.379
Ben Rogers: So, barrier remover to me, to be real honest with you. This has been a barrier that we've had with customers trying to get them into NC2.

00:30:47.380 –> 00:31:10.280
Ben Rogers: The second point I'll make, and we talked a little bit about this on the last podcast, is that this is where Nutanix, really, this is our home, man, is data management. And so this is just another example of how, you know, we're looking at the zeros and ones, we're containing the zeros and ones, we're managing the zeros and ones. And this is just adding to that story. So again, it falls right into our wheelhouse, what we're good at doing:

00:31:10.280 –> 00:31:34.250
Ben Rogers: managing data, the zeros and ones and man gives us a, you know, more opportunity and flexibility in our cloud services. I can’t wait to see something like this come to the azure side and and, man, I’m excited to see that happen in the next, you know, whenever that’s supposed to come. But this is good stuff, man. I’m very excited to see this come to market, and will make my job much easier when I’m communicating solutions to our customers in Nc. 2.

00:31:35.100 –> 00:31:35.770
Philip Sellers: Yeah.

00:31:37.650 –> 00:31:40.910
Philip Sellers: yeah, nitro nitro powered in this case.

00:31:47.550 –> 00:31:50.949
Philip Sellers: gyra, anything else you’d want to add for

00:31:51.080 –> 00:31:52.770
Philip Sellers: for our topic today.

00:31:53.660 –> 00:32:10.929
Jirah Cox: I'd say, if this is compelling, give it a try, give it a test track, kick the tires. You can go from literally hearing my voice to having a cluster running in AWS in about 25, 26 minutes, so faster than you really would expect. So you could prove this out to yourself pretty darn fast.

00:32:11.830 –> 00:32:23.764
Philip Sellers: Yeah, that's a great point. And again, I mean, it's still powered by the same underpinnings. It still has all the same great advanced features on top of it. So

00:32:24.600 –> 00:32:32.429
Philip Sellers: it’s it’s that true portability, you know. Same services, same operational thing at, you know, we were having a customer discussion

00:32:32.610 –> 00:32:34.470
Philip Sellers: last week, and

00:32:34.500 –> 00:32:51.450
Philip Sellers: really came down to how do you operate? And operating the same way across multiple different destinations, multiple different clouds not having to adapt for the individual sort of things in each cloud. So a lot of power in that

00:32:52.940 –> 00:33:14.619
Philip Sellers: lot of power. Well, guys, I wanna say, thank you so much for joining us today to talk about ebs storage within C 2 clusters on. Aws, this has been interesting. I was looking forward to today’s conversation. So I wanna thank you all for listening. And thank you guys for joining into the conversation.

00:33:15.460 –> 00:33:16.619
Philip Sellers: Thank you, Philip.

00:33:16.970 –> 00:33:18.609
Philip Sellers: Until next time, folks,

00:33:19.040 –> 00:33:22.629
Philip Sellers: we will see you on the next podcast. Thanks, everyone.