{"id":65701,"date":"2022-12-16T23:00:00","date_gmt":"2022-12-17T04:00:00","guid":{"rendered":"http:\/\/74d2948405.nxcli.io\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/"},"modified":"2025-07-01T16:49:34","modified_gmt":"2025-07-01T20:49:34","slug":"nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2","status":"publish","type":"post","link":"https:\/\/xentegra.com\/hi\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/","title":{"rendered":"63: Nutanix Weekly: Honey I Shrunk My Cluster (Multiple Nodes Down in RF2)"},"content":{"rendered":"<p><iframe loading=\"lazy\" src=\"https:\/\/www.buzzsprout.com\/1577275\/episodes\/11886956-nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2?iframe=true\" scrolling=\"no\" width=\"100%\" height=\"200\" frameborder=\"0\" style=\"width: 100%;height: 200px\"><\/iframe><\/p>\n<p>Within Nutanix we have Replication Factor (how many data copies are written in the cluster) and Redundancy Factor (how many nodes\/disks can go offline). Both can have a value of 2 or 3. 
The difference between the two is explained here: <a href=\"https:\/\/next.nutanix.com\/how-it-works-22\/redundancy-factor-vs-replication-factor-37486\">Blog Post.<\/a><\/p>\n<p>So, when we have a larger cluster, we always recommend using RF3 (Redundancy Factor 3), as the risk is higher that you have multiple nodes\/disks go offline at the same time.<\/p>\n<p>During training sessions and onsite customer work I often get the question, &#8220;What will happen if multiple nodes go offline in Redundancy Factor 2?&#8221; In this blog post I will explain different scenarios and their behaviors.<\/p>\n<p>Blog by: Jeroen Tielen<br \/>Host: Andy Whiteside<br \/>Co-host: Harvey Green<br \/>Co-host: Philip Sellers<br \/>Co-host: Jirah Cox<br \/>Co-host: Ben Rogers<\/p>\n<div class=\"transcript\">\n<p><!--block-->WEBVTT<\/p>\n<p>1<br \/>00:00:02.270 &#8211;&gt; 00:00:19.830<br \/>Andy Whiteside: Hello, everyone! Welcome to episode 63 of Nutanix Weekly. I&#8217;m your host, Andy Whiteside. It&#8217;s December 12th, 2022, and I&#8217;ve got a big crew of smart guys. Let me get them introduced here real quick. Harvey Green, my co-host from probably day one. Were you on day one already? I can&#8217;t remember. I feel like I was. 
But<\/p>\n<p>2<br \/>00:00:20.050 &#8211;&gt; 00:00:24.560<br \/>Harvey Green: Who knows?<\/p>\n<p>3<br \/>00:00:25.180 &#8211;&gt; 00:00:30.590<br \/>Andy Whiteside: So, Harvey, this is your first December actually running a business, running this entire Gov business.<\/p>\n<p>4<br \/>00:00:30.660 &#8211;&gt; 00:00:33.099<br \/>Andy Whiteside: You know it&#8217;s getting ready to get crazy, right?<\/p>\n<p>5<br \/>00:00:33.140 &#8211;&gt; 00:00:35.100<br \/>Harvey Green: Yes, 100%.<\/p>\n<p>6<br \/>00:00:35.600 &#8211;&gt; 00:00:48.029<br \/>Harvey Green: It has already done that. I don&#8217;t mean to scare you, but that means you&#8217;re making money, as long as the people pay their bills. Yes, that is correct.<\/p>\n<p>7<br \/>00:00:48.120 &#8211;&gt; 00:00:54.029<br \/>Harvey Green: That&#8217;s always the caveat. It&#8217;s kind of the way most businesses run, I think. Right?<\/p>\n<p>8<br \/>00:00:54.230 &#8211;&gt; 00:01:06.550<br \/>Andy Whiteside: Well, yeah, it is. I&#8217;ve been reading this book about an e-retailer from back in the 2000s. You guys may remember Value America. I didn&#8217;t remember it, and I do now that I&#8217;ve read the book.<\/p>\n<p>9<br \/>00:01:06.880 &#8211;&gt; 00:01:13.319<br \/>Andy Whiteside: But it reminds me how many companies out there run with the idea of just creating revenue because they&#8217;re trying to sell themselves off as quick as they can.<\/p>\n<p>10<br \/>00:01:13.490 &#8211;&gt; 00:01:14.789<br \/>Andy Whiteside: and<\/p>\n<p>11<br \/>00:01:15.300 &#8211;&gt; 00:01:18.409<br \/>Andy Whiteside: you know. 
And when you have to pay your own bills, that&#8217;s a different world.<\/p>\n<p>12<br \/>00:01:20.380 &#8211;&gt; 00:01:26.210<br \/>Jirah Cox: Yeah, I think you&#8217;re either fundamentally in that customers-pay-the-bills model, or I guess you&#8217;re running a debt collecting agency.<\/p>\n<p>13<br \/>00:01:26.250 &#8211;&gt; 00:01:27.730<br \/>Jirah Cox: Those are kind of your 2 choices.<\/p>\n<p>14<br \/>00:01:27.770 &#8211;&gt; 00:01:32.209<br \/>Andy Whiteside: Well, 100%. XenTegra does a lot of that. We track down a lot of payments.<\/p>\n<p>15<br \/>00:01:32.480 &#8211;&gt; 00:01:38.270<br \/>Philip Sellers: I think ultimately all businesses do, unless you pay up front or whatever. Philip Sellers, how&#8217;s it going?<\/p>\n<p>16<br \/>00:01:38.690 &#8211;&gt; 00:01:40.690<br \/>Philip Sellers: Good! How are you? Good!<\/p>\n<p>17<br \/>00:01:40.740 &#8211;&gt; 00:01:42.350<br \/>Andy Whiteside: So you are<\/p>\n<p>18<br \/>00:01:42.930 &#8211;&gt; 00:01:55.779<br \/>Andy Whiteside: coming from the customer side, the VMware world, a VMware customer. Specifically, the Nutanix piece is something. Have you always looked at it longingly from afar, or is this really your first foray into the Nutanix world?<\/p>\n<p>19<br \/>00:01:56.420 &#8211;&gt; 00:02:09.770<br \/>Philip Sellers: So I spent a day with Jirah at the Durham headquarters last month, and he made me a believer. Yes, I did mispronounce that. So?<\/p>\n<p>20<br \/>00:02:09.850 &#8211;&gt; 00:02:25.049<br \/>Philip Sellers: No, you know, it&#8217;s really interesting coming from a VMware background and looking at where Nutanix is at today. You know, I knew it when it was SimpliVity versus Nutanix, you know, the hyperconverged wars back before there was vSAN,<\/p>\n<p>21<br \/>00:02:25.160 &#8211;&gt; 00:02:34.150<br \/>Philip Sellers: all of that kind of thing. 
So I followed the company for a long time, and I&#8217;m really impressed with the ecosystem and services that they enable. So<\/p>\n<p>22<br \/>00:02:34.160 &#8211;&gt; 00:02:49.669<br \/>Philip Sellers: I would say, what I see about it today is it&#8217;s really a platform play, and that&#8217;s the investment for a customer: you&#8217;re investing in a platform that&#8217;s gonna allow you to deliver the IT services that you need. And that&#8217;s a really cool,<\/p>\n<p>23<br \/>00:02:49.920 &#8211;&gt; 00:02:55.920<br \/>Philip Sellers: it&#8217;s a cool value prop when you start going out and talking to customers about where Nutanix is at today.<\/p>\n<p>24<br \/>00:02:56.030 &#8211;&gt; 00:02:56.750<br \/>Yes.<\/p>\n<p>25<br \/>00:02:56.950 &#8211;&gt; 00:02:59.750<br \/>Andy Whiteside: Yeah, all on-prem, colo,<\/p>\n<p>26<br \/>00:02:59.970 &#8211;&gt; 00:03:04.010<br \/>Andy Whiteside: in the cloud for that matter, on your terms, where you want it,<\/p>\n<p>27<br \/>00:03:04.040 &#8211;&gt; 00:03:05.420<br \/>Andy Whiteside: right, at any point.<\/p>\n<p>28<br \/>00:03:06.170 &#8211;&gt; 00:03:08.890<br \/>Jirah Cox: Happy to have you on the<\/p>\n<p>29<br \/>00:03:09.180 &#8211;&gt; 00:03:14.939<br \/>Andy Whiteside: You&#8217;ve been around for quite a while as well. You&#8217;re still coming back, so you must be having a little bit of fun.<\/p>\n<p>30<br \/>00:03:15.190 &#8211;&gt; 00:03:16.690<br \/>Jirah Cox: Yeah, man. 
No, it&#8217;s a blast.<\/p>\n<p>31<br \/>00:03:16.840 &#8211;&gt; 00:03:23.090<br \/>Jirah Cox: You guys put on the only Nutanix-flavored partner podcast that I&#8217;m recording today.<\/p>\n<p>32<br \/>00:03:23.520 &#8211;&gt; 00:03:27.110<br \/>Philip Sellers: This week? Only one this month, I hope.<\/p>\n<p>33<br \/>00:03:27.180 &#8211;&gt; 00:03:28.690<br \/>Jirah Cox: Yes.<\/p>\n<p>34<br \/>00:03:28.860 &#8211;&gt; 00:03:36.380<br \/>Andy Whiteside: We appreciate having you on here. You really do validate these conversations, and we get a lot of really good feedback from it.<\/p>\n<p>35<br \/>00:03:37.430 &#8211;&gt; 00:03:38.830<br \/>Jirah Cox: Yeah, it&#8217;s fun to do.<\/p>\n<p>36<br \/>00:03:39.310 &#8211;&gt; 00:03:44.320<br \/>Andy Whiteside: Ben Rogers, you&#8217;ve been a customer, been a friend of ours on multiple fronts.<\/p>\n<p>37<br \/>00:03:44.350 &#8211;&gt; 00:03:47.310<br \/>Andy Whiteside: Is doing this podcast what you thought it would be?<\/p>\n<p>38<br \/>00:03:47.930 &#8211;&gt; 00:04:00.920<br \/>Ben Rogers: It&#8217;s been very interesting. You know, when I did the Citrix podcast I had 25 years of Citrix under my belt, so I&#8217;m very confident wherever we went. This is a little bit different of a ball game, and so there&#8217;s been a couple of times I&#8217;ve showed up and<\/p>\n<p>39<br \/>00:04:00.930 &#8211;&gt; 00:04:18.250<br \/>Ben Rogers: learned on the fly. You know, what I try to do is put it in a customer&#8217;s perspective, and either ask questions I think customers would want to know answers to, but might be afraid to ask, or lean on our friend the ninja Jirah here, and try to get the scoop from him while I&#8217;ve got him on the line. Just<\/p>\n<p>40<br \/>00:04:18.260 &#8211;&gt; 00:04:36.479<br \/>Ben Rogers: being around him is definitely a great resource. 
So has it been what I thought? I don&#8217;t know if I knew what to think about it. But, man, I&#8217;m enjoying myself. And every time I publish this podcast I always get positive feedback from Nutanix as an organization,<\/p>\n<p>41<br \/>00:04:36.490 &#8211;&gt; 00:04:48.010<br \/>Ben Rogers: and also customers of Nutanix that are out there that say they learned X, Y, and Z from listening to this. You know, they&#8217;re too polite to go, like, man, I thought there was a lot more preparation, as a listener.<\/p>\n<p>42<br \/>00:04:48.110 &#8211;&gt; 00:04:57.340<br \/>Andy Whiteside: Well, like you said, he did the Citrix ones with us, so he saw the wing-it model that still produces fruit.<\/p>\n<p>43<br \/>00:04:58.760 &#8211;&gt; 00:05:10.419<br \/>Ben Rogers: All right, so our blog. Let&#8217;s digress on that for one second. For those of you that are listening to this, you know Andy&#8217;s philosophy on these podcasts is, you learn the subject as you log in<\/p>\n<p>44<br \/>00:05:10.430 &#8211;&gt; 00:05:24.090<br \/>Ben Rogers: to the podcast. So all of us have about 2 minutes to digest what we&#8217;re gonna talk about in expert format. So for all of you that are listening, just kind of understand where we sit at the table and what we&#8217;re dealing with on that.<\/p>\n<p>45<br \/>00:05:24.650 &#8211;&gt; 00:05:33.840<br \/>Andy Whiteside: Yes, but like in this case the blog, the title of it is Honey I Shrunk My Cluster (Multiple Nodes Down in RF2),<\/p>\n<p>46<br \/>00:05:33.850 &#8211;&gt; 00:05:47.499<br \/>Andy Whiteside: by what looks like Jeroen Tielen. If I had to try to pronounce that last name, that would be my guess. I don&#8217;t know where I got the th, there was no th. I made that up.<\/p>\n<p>47<br \/>00:05:47.850 &#8211;&gt; 00:05:50.650<br \/>Andy Whiteside: Tielen, that makes total sense. 
But<\/p>\n<p>48<br \/>00:05:50.700 &#8211;&gt; 00:05:56.690<br \/>Andy Whiteside: Jirah, you brought this blog, and thankfully you bring a lot of these prepared to talk about them.<\/p>\n<p>49<br \/>00:05:56.850 &#8211;&gt; 00:06:01.110<br \/>Andy Whiteside: What&#8217;s the gist of what happened here, and why he wrote this?<\/p>\n<p>50<br \/>00:06:01.770 &#8211;&gt; 00:06:08.479<br \/>Jirah Cox: Yes, I love this post from Jeroen. It&#8217;s on our community blog, by the way, where anybody can join as a member of the community. And<\/p>\n<p>51<br \/>00:06:08.610 &#8211;&gt; 00:06:14.100<br \/>Jirah Cox: you know, if you&#8217;ve got something you want to say, for sure it&#8217;s a great platform for it.<\/p>\n<p>52<br \/>00:06:14.330 &#8211;&gt; 00:06:30.819<br \/>Jirah Cox: Jeroen posted this write-up from conversations he&#8217;s had before of, you know, hey, if I&#8217;m running, in this case an example of 7 nodes with RF2, right, which is our structure for maintaining a data protection SLA of writing all data twice within a cluster,<\/p>\n<p>53<br \/>00:06:30.970 &#8211;&gt; 00:06:37.350<br \/>Jirah Cox: so that cluster, we say that it could lose one of anything, right, a disk or a node. What happens if it loses 2,<\/p>\n<p>54<br \/>00:06:37.380 &#8211;&gt; 00:06:48.559<br \/>Jirah Cox: right? What happens in that case? So great question, great exploration here. It&#8217;s a real-world customer-facing question that we get a lot, actually, right: so we plan for this much failure, but what if we get more failure than that?<\/p>\n<p>55<br \/>00:06:49.020 &#8211;&gt; 00:06:59.150<br \/>Andy Whiteside: So when he says in the second paragraph, when we have larger clusters we always recommend RF3. 
Is that just because<\/p>\n<p>56<br \/>00:06:59.230 &#8211;&gt; 00:07:01.689<br \/>Andy Whiteside: more is better if you have the space?<\/p>\n<p>57<br \/>00:07:02.170 &#8211;&gt; 00:07:11.700<br \/>Jirah Cox: Yeah, and I&#8217;ve seen some of the stats. It comes down to, you know, like anything, right? Even if you ran something hypervisor-only,<\/p>\n<p>58<br \/>00:07:11.770 &#8211;&gt; 00:07:17.320<br \/>Jirah Cox: if you&#8217;re thinking about compute availability: well, gee, if I run a 10-node cluster or a 100-node cluster,<\/p>\n<p>59<br \/>00:07:17.380 &#8211;&gt; 00:07:20.600<br \/>Jirah Cox: at some point I want to size beyond just N+1,<\/p>\n<p>60<br \/>00:07:20.690 &#8211;&gt; 00:07:22.879<br \/>Jirah Cox: right? Like, I wouldn&#8217;t run a 99-node<\/p>\n<p>61<br \/>00:07:22.910 &#8211;&gt; 00:07:28.219<br \/>Jirah Cox: compute cluster of any hypervisor with only N+1 availability, because my odds, as a<\/p>\n<p>62<br \/>00:07:28.250 &#8211;&gt; 00:07:33.889<br \/>Jirah Cox: practitioner, as the admin, of having more than one availability zone, right, a blast radius, call it,<\/p>\n<p>63<br \/>00:07:34.000 &#8211;&gt; 00:07:44.680<br \/>Jirah Cox: a single node, a compute factor, down at any one time, those odds increase astronomically beyond a certain count threshold. It always depends. But, you know, usually,<\/p>\n<p>64<br \/>00:07:45.310 &#8211;&gt; 00:08:03.189<br \/>Jirah Cox: let me fairly represent a lot of viewpoints here: most SEs would tell you, by at least the time you&#8217;re hitting like node 24 in a cluster, 
we&#8217;re probably at a threshold where we want to be designing for what we call RF3, right, of all data written 3 times within the cluster, versus all data written twice in a cluster,<\/p>\n<p>65<br \/>00:08:03.200 &#8211;&gt; 00:08:06.969<br \/>Jirah Cox: because my odds of having one node fail, and then another node<\/p>\n<p>66<br \/>00:08:07.030 &#8211;&gt; 00:08:15.830<br \/>Jirah Cox: fail after that, or maybe I&#8217;m doing maintenance, I have one down and then another node chooses that time to purple screen, or whatever it is, increase at that size.<\/p>\n<p>67<br \/>00:08:16.160 &#8211;&gt; 00:08:21.669<br \/>Andy Whiteside: That&#8217;s kind of my wife&#8217;s logic. I&#8217;ve got 4 cars, most of them older, and<\/p>\n<p>68<br \/>00:08:22.100 &#8211;&gt; 00:08:31.109<br \/>Jirah Cox: chances are, the more older cars I get, the more chances one of them is broken down. Yeah, fantastic analogy, right? I mean, I always say hardware&#8217;s gonna hardware. So<\/p>\n<p>69<br \/>00:08:32.330 &#8211;&gt; 00:08:33.860<br \/>Andy Whiteside: it&#8217;s gonna do what it&#8217;s gonna do.<\/p>\n<p>70<br \/>00:08:33.970 &#8211;&gt; 00:08:35.090<br \/>Harvey Green: True story.<\/p>\n<p>71<br \/>00:08:35.159 &#8211;&gt; 00:08:45.099<br \/>Andy Whiteside: All right. So he gives a couple of screenshots of his environment. The first one is redundancy factor readiness, with a current value of 2. The options, if I had a drop-down here, would be<\/p>\n<p>72<br \/>00:08:45.160 &#8211;&gt; 00:08:47.860<br \/>Andy Whiteside: 2 and 3. Are there any other options?<\/p>\n<p>73<br \/>00:08:48.650 &#8211;&gt; 00:09:04.569<br \/>Jirah Cox: Basically, that&#8217;s true. For, like, you know, 99.99, a lot of 9s here, most workloads, it&#8217;s going to be RF2 and RF3. For a very small amount of workloads where you&#8217;re doing in-app redundancy, right, and app-level availability, 
I&#8217;m thinking of<\/p>\n<p>74<br \/>00:09:04.840 &#8211;&gt; 00:09:18.059<br \/>Jirah Cox: a couple of the workloads where the application is already gonna split that, right, and store it in 2 places. Then, of course, you can do RF1, understanding that when you&#8217;re running RF1, if something goes bump, and like a disk fails, or whatever, it gets yanked,<\/p>\n<p>75<br \/>00:09:18.110 &#8211;&gt; 00:09:18.910<br \/>Jirah Cox: then<\/p>\n<p>76<br \/>00:09:18.940 &#8211;&gt; 00:09:30.790<br \/>Jirah Cox: you told us you already have availability of the application elsewhere, right? So we don&#8217;t rebuild that data. But yeah, for the most part, for common virtualization use cases, it&#8217;s going to be RF2 and RF3.<\/p>\n<p>77<br \/>00:09:31.250 &#8211;&gt; 00:09:45.979<br \/>Andy Whiteside: I have to tell a little joke on myself. For the longest time I said RF stood for &#8220;right frequency,&#8221; and then somebody pointed out that &#8220;write&#8221; starts with a W. And I was like, yeah. So, Andy, I think there is one thing that&#8217;s worth mentioning here,<\/p>\n<p>78<br \/>00:09:45.990 &#8211;&gt; 00:10:05.710<br \/>Ben Rogers: and Philip kind of hinted at it when he gave his introduction. This is really what creates the platform of Nutanix, our ability to do this RF factor. Not only is it protecting us from a data protection standpoint, but this also comes into play with how quickly the cluster will recover. You know, if we&#8217;ve got that data spread peanut-butter<\/p>\n<p>79<br \/>00:10:05.720 &#8211;&gt; 00:10:20.050<br \/>Ben Rogers: style across these nodes, if a node fails, another node can take over really quick, because it&#8217;s got the data on it. Also, we talked a lot about VDI in your podcast. 
And this is also what makes our VDI work, the data locality.<\/p>\n<p>80<br \/>00:10:20.060 &#8211;&gt; 00:10:36.669<br \/>Ben Rogers: Even though we&#8217;re replicating the data off, we&#8217;re still keeping that workload local on the node that the data is on. And again, if that node were to fail, we have the metadata to know where to pick it up, you know, where we need to go next to get that. So again, going back to Philip&#8217;s,<\/p>\n<p>81<br \/>00:10:36.680 &#8211;&gt; 00:10:46.930<br \/>Ben Rogers: you know, his mention that this is a platform: this is at the heart of this platform, and this is really what makes Nutanix sing when it comes to things like<\/p>\n<p>82<br \/>00:10:47.010 &#8211;&gt; 00:10:55.769<br \/>Ben Rogers: performance, replication, disaster recovery. These are all the things we hang on this idea of redundancy factor.<\/p>\n<p>83<br \/>00:10:57.050 &#8211;&gt; 00:11:02.110<br \/>Jirah Cox: Totally. I was gonna say, you know, all 5 of us here, right,<\/p>\n<p>84<br \/>00:11:02.260 &#8211;&gt; 00:11:11.590<br \/>Jirah Cox: as technologists in the Carolinas, we&#8217;re already fighting an uphill battle, right? Thank you for not, like, you know, highlighting our bad spelling on top of that, right?<\/p>\n<p>85<br \/>00:11:12.500 &#8211;&gt; 00:11:15.060<br \/>Andy Whiteside: Oh, you must know I went to elementary school.<\/p>\n<p>86<br \/>00:11:17.040 &#8211;&gt; 00:11:19.749<br \/>Jirah Cox: So I said all 5 of us here, right? I cast a wide net.<\/p>\n<p>87<br \/>00:11:21.360 &#8211;&gt; 00:11:27.100<br \/>Andy Whiteside: So, if you want to tell us what this Manage VM High Availability piece means.<\/p>\n<p>88<br \/>00:11:28.250 &#8211;&gt; 00:11:34.529<br \/>Philip Sellers: Probably looking at me like, you&#8217;re not sharing the screen, you idiot! Yeah, this part.<\/p>\n<p>89<br \/>00:11:35.520 &#8211;&gt; 00:11:37.580<br \/>Philip Sellers: Zoom the<\/p>\n<p>90<br \/>00:11:39.490 &#8211;&gt; 00:11:40.810<br \/>Philip Sellers: hey? 
Go ahead, Jirah.<\/p>\n<p>91<br \/>00:11:41.080 &#8211;&gt; 00:11:52.559<br \/>Jirah Cox: Oh, sure. So the checkbox shown on the screen there is that HA reservation, right. So as the compute layer, right, as virtualization managing the virtual machines, you can<\/p>\n<p>92<br \/>00:11:52.670 &#8211;&gt; 00:12:05.920<br \/>Jirah Cox: run the high availability engine in basically 2 models. Out of the box you&#8217;re getting HA, and you&#8217;re getting best effort, right, where impacted VMs from, like, a hardware failure event will get restarted automatically on surviving nodes in the cluster.<\/p>\n<p>93<br \/>00:12:06.100 &#8211;&gt; 00:12:23.940<br \/>Jirah Cox: That is, of course, out of the box: you get best effort. And then with this checkbox you can opt into, hey, go ahead and pre-reserve memory for me, so I get guaranteed availability for my VMs to already have a pre-reserved space to boot back up into. Basically you&#8217;re moving from an N+0 VM memory model to N+1.<\/p>\n<p>94<br \/>00:12:24.200 &#8211;&gt; 00:12:24.890<br \/>Andy Whiteside: Yes.<\/p>\n<p>95<br \/>00:12:25.520 &#8211;&gt; 00:12:38.079<br \/>Andy Whiteside: Yeah, I jokingly brought that up to Philip, because some of these things have been around for a while now in the Nutanix platform, but when I meet customers they assume some of this stuff is only available on the VMware side of<\/p>\n<p>96<br \/>00:12:38.130 &#8211;&gt; 00:12:39.270<br \/>the solution.<\/p>\n<p>97<br \/>00:12:39.870 &#8211;&gt; 00:12:55.649<br \/>Philip Sellers: Yeah. And I mean, this is essentially the same as VMware, so you&#8217;ve got parity here. You know, with HA, 
you put in the number of host failures to tolerate, you know, a toleration level, in VMware, and it reserves that space<\/p>\n<p>98<br \/>00:12:55.710 &#8211;&gt; 00:13:01.810<br \/>Philip Sellers: very similarly, although it&#8217;s a slightly different take, I guess, here on the Nutanix platform side.<\/p>\n<p>99<br \/>00:13:02.180 &#8211;&gt; 00:13:16.569<br \/>Andy Whiteside: Well, and Philip, that&#8217;s something I would ask you, coming over from a pure VMware world, where now you do both: have you been surprised at how many of these features are available on the Acropolis side of things that maybe weren&#8217;t available when you looked at it a while back, if ever?<\/p>\n<p>100<br \/>00:13:16.980 &#8211;&gt; 00:13:20.249<br \/>Philip Sellers: Oh, yeah, yeah, I mean, it&#8217;s<\/p>\n<p>101<br \/>00:13:20.760 &#8211;&gt; 00:13:26.009<br \/>Philip Sellers: it&#8217;s pretty incredible what&#8217;s been done out on the platform, and<\/p>\n<p>102<br \/>00:13:26.300 &#8211;&gt; 00:13:30.919<br \/>Philip Sellers: enabled on AHV. It&#8217;s, you know, it&#8217;s<\/p>\n<p>103<br \/>00:13:32.180 &#8211;&gt; 00:13:36.300<br \/>Philip Sellers: it&#8217;s very comparable, coming from a VMware background.<\/p>\n<p>104<br \/>00:13:37.690 &#8211;&gt; 00:13:46.580<br \/>Jirah Cox: I mean, I usually say it&#8217;s different. There&#8217;s not a checkbox for every checkbox you used to see on the other platform, but it&#8217;s got everything you need.<\/p>\n<p>105<br \/>00:13:48.240 &#8211;&gt; 00:13:51.849<br \/>Andy Whiteside: So, Jirah, I&#8217;m gonna walk through this and kind of let you<\/p>\n<p>106<br \/>00:13:52.240 &#8211;&gt; 00:13:57.940<br \/>Andy Whiteside: go through what the customer had set up and give me the insight, and then I&#8217;ll let the guys just interrupt us and<\/p>\n<p>107<br \/>00:13:58.200 &#8211;&gt; 00:14:02.939<br \/>Andy Whiteside: comment as needed. 
You want to hit here where he&#8217;s talking about what the workload is?<\/p>\n<p>108<br \/>00:14:04.190 &#8211;&gt; 00:14:11.870<br \/>Jirah Cox: Yeah. So Jeroen highlights he&#8217;s got 30 Windows VMs. They&#8217;re running Windows 11, so with vTPM enabled.<\/p>\n<p>109<br \/>00:14:12.160 &#8211;&gt; 00:14:14.720<br \/>Jirah Cox: And of course, you know, those VMs are spread across<\/p>\n<p>110<br \/>00:14:14.770 &#8211;&gt; 00:14:23.979<br \/>Jirah Cox: nodes in the cluster, right? So there&#8217;s some VMs running on every node in the cluster, roughly. What would that be, 4 and a half or so VMs per host, on average?<\/p>\n<p>111<br \/>00:14:24.470 &#8211;&gt; 00:14:30.839<br \/>Jirah Cox: So nothing&#8217;s maxed out on resources. He highlights these at like 25% CPU consumption, about 40% memory consumption.<\/p>\n<p>112<br \/>00:14:32.470 &#8211;&gt; 00:14:34.109<br \/>Jirah Cox: So then,<\/p>\n<p>113<br \/>00:14:34.710 &#8211;&gt; 00:14:37.569<br \/>Andy Whiteside: How many VDIs was it? Oh, 30.<\/p>\n<p>114<br \/>00:14:37.970 &#8211;&gt; 00:14:53.140<br \/>Jirah Cox: Yup, 30 VMs. So then he puts on his chaos monkey hat and decides to go crash a node. So he logs into the hardware management out of band, you know, tells it: hey, power off the server immediately. No warning,<\/p>\n<p>115<br \/>00:14:53.150 &#8211;&gt; 00:15:01.790<br \/>Jirah Cox: no notification to, like, the virtualization layer, the storage layer, the management layer, right? So immediately, it&#8217;s like pulling the power cord. One node goes off.<\/p>\n<p>116<br \/>00:15:02.510 &#8211;&gt; 00:15:19.359<br \/>Jirah Cox: So then, of course, no surprise, what you&#8217;d expect, right? HA kicks in, VMs get restarted on surviving nodes in the cluster, right? So only the impacted VMs, of course, have to do anything, right? Other VMs just keep on running, 
so the remaining nodes power them back on.<\/p>\n<p>117<br \/>00:15:19.560 &#8211;&gt; 00:15:34.199<br \/>Jirah Cox: The dashboard, right, Prism, we give a lot of pixels right there on the dashboard to show the administrator the cluster state and what&#8217;s going on, right? What operations are we performing? What are we recovering from? Or is everything totally just situation normal? So immediately it shows it goes into a healing state.<\/p>\n<p>118<br \/>00:15:34.210 &#8211;&gt; 00:15:39.750<br \/>Jirah Cox: It, like, alerts that, hey, VMs are getting migrated or restarted in order to get back to a highly available state.<\/p>\n<p>119<br \/>00:15:40.210 &#8211;&gt; 00:15:56.500<br \/>Jirah Cox: And then, of course, storage rebuilds also occur, right? So when you lose the hardware instance, right, the hypervisor instance, you&#8217;re gonna lose some slice of all your user VMs, your customer-provisioned VMs, and then, of course, our virtual machine as well, right, our CVM, our Controller VM.<\/p>\n<p>120<br \/>00:15:56.510 &#8211;&gt; 00:16:00.250<br \/>Jirah Cox: We run one on every node in the cluster, so that CVM goes down as well.<\/p>\n<p>121<br \/>00:16:00.810 &#8211;&gt; 00:16:07.810<br \/>Jirah Cox: It was hosting some portion of the data, right, roughly, in this case, one-seventh of all the data stored in the cluster.<\/p>\n<p>122<br \/>00:16:08.020 &#8211;&gt; 00:16:16.989<br \/>Jirah Cox: And so the other 6 surviving CVMs are going to start that rebuild to pick up the slack from that failed node, and therefore the failed CVM as well.<\/p>\n<p>123<br \/>00:16:17.820 &#8211;&gt; 00:16:23.390<br \/>Andy Whiteside: So, Jirah, it didn&#8217;t say, or did it, whether these were persistent or non-persistent 
VDI?<\/p>\n<p>124<br \/>00:16:24.670 &#8211;&gt; 00:16:28.620<br \/>Jirah Cox: Good question. It didn&#8217;t say, and let me think if that matters.<\/p>\n<p>125<br \/>00:16:28.680 &#8211;&gt; 00:16:30.880<br \/>Jirah Cox: It really<\/p>\n<p>126<br \/>00:16:31.060 &#8211;&gt; 00:16:39.629<br \/>Jirah Cox: doesn&#8217;t, right? Because if it&#8217;s not persistent, you&#8217;re gonna get back to that pristine image, whatever your, you know, deep-freeze state, that gold master is.<\/p>\n<p>127<br \/>00:16:39.860 &#8211;&gt; 00:16:46.409<br \/>Jirah Cox: If it&#8217;s persistent, then that&#8217;s simply just an HA event: power off, boot clean on another node.<\/p>\n<p>128<br \/>00:16:46.750 &#8211;&gt; 00:17:02.180<br \/>Philip Sellers: It just may take a little longer to grab all the bits for a non-persistent, because there&#8217;s likely more of those, so the healing process will maybe take a little longer. But possibly, to chase the tangent, if you&#8217;re using non-persistent, you&#8217;re probably doing<\/p>\n<p>129<br \/>00:17:02.360 &#8211;&gt; 00:17:05.090<br \/>Jirah Cox: Well, if you&#8217;re doing PVS, right, then network&#8217;s<\/p>\n<p>130<br \/>00:17:05.270 &#8211;&gt; 00:17:16.910<br \/>Jirah Cox: gonna network. If you&#8217;re doing MCS, right, then you&#8217;re doing difference disks against a gold master snapshot, and under the covers we actually do what we call shadow clones,<\/p>\n<p>131<br \/>00:17:16.980 &#8211;&gt; 00:17:24.189<br \/>Jirah Cox: where, whenever we detect that there&#8217;s one vDisk in the cluster that&#8217;s doing extra duty, right, one vDisk powering multiple VMs,<\/p>\n<p>132<br \/>00:17:24.250 &#8211;&gt; 00:17:26.530<br \/>Jirah Cox: we actually will cache that<\/p>\n<p>133<br \/>00:17:26.609 &#8211;&gt; 00:17:44.220<br \/>Jirah Cox: locally on every node in the cluster, so that all those reads go back to local flash. 
We&#8217;ll actually intercept that read operation from the hypervisor down to the vDisk, and we serve it locally, even if the authoritative copy was elsewhere in the cluster. So we&#8217;re shortening that read path. You really won&#8217;t feel that, I don&#8217;t think.<\/p>\n<p>134<br \/>00:17:44.840 &#8211;&gt; 00:17:45.610<br \/>Philip Sellers: So.<\/p>\n<p>135<br \/>00:17:45.900 &#8211;&gt; 00:17:55.769<br \/>Andy Whiteside: So, Harvey, am I right to say that if you were doing non-persistent, and you were using MCS or, let&#8217;s say, PVS, this checkbox where you did the hardware reservation,<\/p>\n<p>136<br \/>00:17:55.830 &#8211;&gt; 00:17:59.940<br \/>Andy Whiteside: this probably doesn&#8217;t happen, because you&#8217;ve got a pool of machines to cover this?<\/p>\n<p>137<br \/>00:17:59.990 &#8211;&gt; 00:18:11.940<br \/>Harvey Green: Yeah, you don&#8217;t have to have that extra reservation, because you do have a pool, and it doesn&#8217;t matter if you lose them if you can just restart more of them. The user might<\/p>\n<p>138<br \/>00:18:12.230 &#8211;&gt; 00:18:15.100<br \/>Harvey Green: lose their session<\/p>\n<p>139<br \/>00:18:15.480 &#8211;&gt; 00:18:19.760<br \/>Harvey Green: for, you know, for a minute. But then they&#8217;re able to just restart on another one.<\/p>\n<p>140<br \/>00:18:20.040 &#8211;&gt; 00:18:21.430<br \/>Jirah Cox: Yeah, there&#8217;s,<\/p>\n<p>141<br \/>00:18:21.510 &#8211;&gt; 00:18:29.729<br \/>Jirah Cox: you&#8217;re describing a model where you&#8217;re doing, like, one-to-many, right? You&#8217;re doing lots of users per VM instance, not a one-to-one.<\/p>\n<p>142<br \/>00:18:30.110 &#8211;&gt; 00:18:35.370<br \/>Andy Whiteside: It could be one-to-one with machines that are always running. So let&#8217;s say he had 30,<\/p>\n<p>143<br \/>00:18:35.390 &#8211;&gt; 00:18:42.680<br \/>Andy Whiteside: and truth is, he had 22 users, or maybe he had 30 users. 
He would probably have 35 to 40 up and running, so he would have probably lost some.<\/p>\n<p>144<br \/>00:18:42.820 &#8211;&gt; 00:18:46.390<br \/>Andy Whiteside: You know, some of the users would have come back in, and they would have had more machines.<\/p>\n<p>145<br \/>00:18:46.430 &#8211;&gt; 00:18:57.820<br \/>Andy Whiteside: The hypervisor, well, not the hypervisor, but the control plane would have recognized, hey, I&#8217;m supposed to have 10 machines running and waiting. Something happened. I&#8217;m down to 9, or 5, or whatever. I need to turn a bunch on, beyond whatever&#8217;s still up.<\/p>\n<p>146<br \/>00:18:58.250 &#8211;&gt; 00:18:58.940<br \/>Jirah Cox: Hmm.<\/p>\n<p>147<br \/>00:18:59.930 &#8211;&gt; 00:19:06.110<br \/>Andy Whiteside: So chances are good the consultant would never check this box unless it was persistent. But who knows? I don&#8217;t know what the situation is.<\/p>\n<p>148<br \/>00:19:08.080 &#8211;&gt; 00:19:13.079<br \/>Andy Whiteside: Anyway, all right. So the rebuild. And then, Jirah, maybe I&#8217;m here,<\/p>\n<p>149<br \/>00:19:13.780 &#8211;&gt; 00:19:19.240<br \/>Andy Whiteside: am I here where it says data resiliency status? Or no? Yeah. So, the rebuild?<\/p>\n<p>150<br \/>00:19:19.360 &#8211;&gt; 00:19:27.380<br \/>Jirah Cox: It continues. And it shows immediately. It shows, you know, my cluster, which is built to lose one of anything at once,<\/p>\n<p>151<br \/>00:19:27.430 &#8211;&gt; 00:19:46.580<br \/>Jirah Cox: whether it&#8217;s a disk or a node or whatever. So on the dashboard, right, they&#8217;re going to show fault tolerance is 0, right? We&#8217;ve got a little red flag next to it. You can click on it, and it&#8217;ll even give you the detailed view of what&#8217;s currently rebuilding. And, you know, the Nutanix cluster, right?
And really what the CVM is doing for you all day, every day.<\/p>\n<p>152<br \/>00:19:46.590 &#8211;&gt; 00:20:00.499<br \/>Jirah Cox: It&#8217;s not one giant monolithic application, right, like running the CVM or running AOS. It&#8217;s got a whole bunch of microservices within the cluster that all work together, right, that&#8217;ll create the cluster, or the ring topology.<\/p>\n<p>153<br \/>00:20:00.620 &#8211;&gt; 00:20:13.369<br \/>Jirah Cox: Each one can, like, self-elect different leader and follower states. So it&#8217;ll show you which part; in this screenshot here it&#8217;s showing the Cassandra ring itself, the metadata partition, right, which is where we store our data about customer data<\/p>\n<p>154<br \/>00:20:13.380 &#8211;&gt; 00:20:26.859<br \/>Andy Whiteside: in the cluster. That&#8217;s the layer that&#8217;s rebuilding in this case, back to resiliency, back to health, to where it can lose another one of anything, right, a member in that ring. So let&#8217;s get back to that RF2 state where it&#8217;s fully covered.<\/p>\n<p>155<br \/>00:20:27.160 &#8211;&gt; 00:20:37.099<br \/>Jirah Cox: You can think of it as getting back to RF2 across a number of measures, right? RF2 for user data, RF2 for metadata about the cluster itself.<\/p>\n<p>156<br \/>00:20:38.100 &#8211;&gt; 00:20:39.010<br \/>Andy Whiteside: And<\/p>\n<p>157<br \/>00:20:39.260 &#8211;&gt; 00:20:42.299<br \/>Andy Whiteside: Jirah, what happens if it does,
if there&#8217;s not enough available<\/p>\n<p>158<br \/>00:20:42.650 &#8211;&gt; 00:20:44.190<br \/>Andy Whiteside: space?<\/p>\n<p>159<br \/>00:20:44.220 &#8211;&gt; 00:20:46.469<br \/>Andy Whiteside: Or I guess it would have told you that before this happened.<\/p>\n<p>160<br \/>00:20:47.320 &#8211;&gt; 00:20:56.609<br \/>Jirah Cox: So, I mean, that&#8217;s part of the planning, right? We never want any customer to be in a place where they couldn&#8217;t lose their defined availability<\/p>\n<p>161<br \/>00:20:56.820 &#8211;&gt; 00:21:01.940<br \/>Jirah Cox: threshold. Right, like in an RF2 cluster, you can lose one of anything,<\/p>\n<p>162<br \/>00:21:02.050 &#8211;&gt; 00:21:12.199<br \/>Jirah Cox: which includes a node. We don&#8217;t want you to be within a node&#8217;s worth of filling up the cluster. If you do, then, yeah, totally, the cluster is going to fill up and you&#8217;re gonna have a bad day.<\/p>\n<p>163<br \/>00:21:12.210 &#8211;&gt; 00:21:24.749<br \/>Jirah Cox: What it really does, if you really just fill it up and you run out of space, is it&#8217;s going to go into a read-only state to protect itself and protect your data, to say, I&#8217;m going to disallow any new writes because we&#8217;re just plumb full.<\/p>\n<p>164<br \/>00:21:24.940 &#8211;&gt; 00:21:29.099<br \/>Jirah Cox: You know, call support, we need to get this fixed, or call, you know,<\/p>\n<p>165<br \/>00:21:29.350 &#8211;&gt; 00:21:35.680<br \/>Jirah Cox: call Ben, call Harvey, call Phil. We need to get you more space in here, right? So, you know, bigger disks, another node, whatever it takes<\/p>\n<p>166<br \/>00:21:35.960 &#8211;&gt; 00:21:44.599<br \/>Jirah Cox: to re-establish health there in the cluster. Often it can be, like, delete some old snapshots if you want to.
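The planning rule Jirah gives here, never be within a node&#8217;s worth of data of filling up an RF2 cluster, is simple arithmetic. A rough sketch (the function names and the uniform per-node capacity are assumptions made for illustration, not a Nutanix tool):

```python
# Back-of-the-envelope capacity check for an RF2 cluster: after losing
# one node, the surviving n-1 nodes must still hold all the data, so
# treat (n-1) nodes worth of raw capacity as "full". Illustrative only.

def resilient_capacity_tib(node_capacity_tib, n_nodes, failures_tolerated=1):
    """Raw capacity usable while a full re-heal still fits."""
    return node_capacity_tib * (n_nodes - failures_tolerated)

def can_reheal(used_tib, node_capacity_tib, n_nodes, failures_tolerated=1):
    """True if the cluster could rebuild after losing `failures_tolerated` nodes."""
    return used_tib <= resilient_capacity_tib(
        node_capacity_tib, n_nodes, failures_tolerated)

# A 7-node cluster with 10 TiB raw per node: redrawing the graphs so that
# 100% is the 6-node line means treating 60 TiB as full.
print(resilient_capacity_tib(10, 7))   # 60
print(can_reheal(55, 10, 7))           # True: a rebuild still fits on 6 nodes
print(can_reheal(65, 10, 7))           # False: past this, the cluster fills up
```

Decrementing `n_nodes` one failure at a time reproduces the shrink-one-node-at-a-time behavior the hosts walk through in this episode.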
But yeah, if you run out of space entirely, it&#8217;s just gonna go into a read-only state.<\/p>\n<p>167<br \/>00:21:45.690 &#8211;&gt; 00:21:51.619<br \/>Andy Whiteside: Let me pause here. Ben, Harvey, Philip, any questions, comments, takeaways?<\/p>\n<p>168<br \/>00:21:53.790 &#8211;&gt; 00:21:57.559<br \/>Harvey Green: No, I mean, I think that this is<\/p>\n<p>169<br \/>00:21:58.040 &#8211;&gt; 00:22:06.739<br \/>Harvey Green: definitely describing where you&#8217;ll be, and you just call Phil, and he&#8217;ll show up with all kinds of drives in his back pocket. He can just switch them out for you.<\/p>\n<p>170<br \/>00:22:06.790 &#8211;&gt; 00:22:12.700<br \/>Jirah Cox: You just run down to the Best Buy and, you know, grab some drives.<\/p>\n<p>171<br \/>00:22:16.000 &#8211;&gt; 00:22:33.739<br \/>Ben Rogers: Well, I mean, one of the things we have to point out is, before you even got into that state, the cluster would be screaming bloody murder. It would be going, hey, I can&#8217;t run RF2, I&#8217;m not able to get to this compliance state, I need to have more capacity. We have capacity planning in the cluster. So,<\/p>\n<p>172<br \/>00:22:33.750 &#8211;&gt; 00:22:47.499<br \/>Ben Rogers: even though I know we&#8217;re kind of in the lab, and we&#8217;re doing this where we won&#8217;t see the results, I don&#8217;t want any of our customers to think that, oh, there&#8217;s no warning before these boxes get to where they&#8217;re in an unhealthy state.<\/p>\n<p>173<br \/>00:22:47.870 &#8211;&gt; 00:22:56.920<br \/>Jirah Cox: Totally right. But yeah, the cluster itself, we&#8217;ve taught it lots of tricks over the years. It&#8217;ll email you. It can send out SNMP alerts. It can open up ServiceNow tickets, if you allow that.<\/p>\n<p>174<br \/>00:22:57.000 &#8211;&gt; 00:23:04.819<br \/>Jirah Cox: Lots of fun tricks.
It&#8217;s going to alert you, hey, we are entering a, you know, non-resilient threshold of the cluster state here.<\/p>\n<p>175<br \/>00:23:04.950 &#8211;&gt; 00:23:15.170<br \/>Jirah Cox: If you allow the phone-home telemetry, right, we call it Pulse, so we can phone home with how healthy the cluster you&#8217;re running is, it can ping your account team, right? They&#8217;ll reach out.<\/p>\n<p>176<br \/>00:23:15.300 &#8211;&gt; 00:23:26.230<br \/>Jirah Cox: There&#8217;s a new trick that clusters learned last year, actually, where you can say, like in this case, let&#8217;s say you&#8217;ve got a 7-node cluster, so therefore you shouldn&#8217;t write more than 6 nodes worth of data.<\/p>\n<p>177<br \/>00:23:26.630 &#8211;&gt; 00:23:30.530<br \/>Jirah Cox: You can basically tell it, only show me 6 nodes worth of data,<\/p>\n<p>178<br \/>00:23:30.590 &#8211;&gt; 00:23:43.749<br \/>Jirah Cox: and don&#8217;t even pretend that the seventh node exists, right? That is, 100% is the 6-node line. So you can have it redraw all the graphs to say, this is what full looks like, and healthy is somewhere to the left of that.<\/p>\n<p>179<br \/>00:23:43.780 &#8211;&gt; 00:23:48.260<br \/>Jirah Cox: Don&#8217;t make me track, am I over or under my 6-node threshold?<\/p>\n<p>180<br \/>00:23:48.860 &#8211;&gt; 00:23:49.600<br \/>Right.<\/p>\n<p>181<br \/>00:23:50.500 &#8211;&gt; 00:24:01.430<br \/>Andy Whiteside: So, Jirah, this next section is: after 30&nbsp;min, the CVM not reachable for 30&nbsp;min, the node is detached. In other words, it says, hey, I&#8217;m going to completely remove this guy from my storage.<\/p>\n<p>182<br \/>00:24:01.870 &#8211;&gt; 00:24:04.790<br \/>Jirah Cox: Yep. So this is what I really like a lot. So we&#8217;ve already<\/p>\n<p>183<br \/>00:24:04.910 &#8211;&gt; 00:24:09.249<br \/>Jirah Cox: begun the process.
We&#8217;ve already, at this point, probably gotten close to completing the process of<\/p>\n<p>184<br \/>00:24:09.270 &#8211;&gt; 00:24:11.769<br \/>Jirah Cox: re-healing from the failure state.<\/p>\n<p>185<br \/>00:24:11.890 &#8211;&gt; 00:24:16.290<br \/>Jirah Cox: What I love about Nutanix and the cluster design in general is it&#8217;s self-healing.<\/p>\n<p>186<br \/>00:24:16.320 &#8211;&gt; 00:24:24.890<br \/>Jirah Cox: So we don&#8217;t stay in a broken 7-node state for very long; we transition to a healthy 6-node state. 6 nodes becomes the new normal.<\/p>\n<p>187<br \/>00:24:24.960 &#8211;&gt; 00:24:36.420<br \/>Jirah Cox: That&#8217;s all the nodes we have in the cluster, and all the nodes we know about, right? So we don&#8217;t stay in sort of a degraded state. We eject the seventh node that&#8217;s failed from the metadata ring, right, from our knowledge of what nodes exist in the cluster,<\/p>\n<p>188<br \/>00:24:36.510 &#8211;&gt; 00:24:43.130<br \/>Jirah Cox: for a lot of reasons. Right, one, we forget about it, there are some things we can clean up, and we&#8217;re not waiting for it to come back online.<\/p>\n<p>189<br \/>00:24:43.760 &#8211;&gt; 00:24:46.550<br \/>Jirah Cox: Another one, right: let&#8217;s say that node is down for a week.<\/p>\n<p>190<br \/>00:24:46.720 &#8211;&gt; 00:24:48.210<br \/>Jirah Cox: In a week, when it comes up,<\/p>\n<p>191<br \/>00:24:48.330 &#8211;&gt; 00:24:50.090<br \/>Jirah Cox: it has almost no useful data.<\/p>\n<p>192<br \/>00:24:50.190 &#8211;&gt; 00:25:04.420<br \/>Jirah Cox: So we don&#8217;t want to treat it like it&#8217;s, you know, a prodigal node returning home, right? We&#8217;ll just treat it like it&#8217;s a new node and re-sync data over to it, rather than diffing, like, oh, what&#8217;s new? What&#8217;s changed?
What&#8217;s not changed?<\/p>\n<p>193<br \/>00:25:04.430 &#8211;&gt; 00:25:16.889<br \/>Jirah Cox: We&#8217;ll just cut bait, and then, if it comes back, great, we will accept it back into the cluster as a fresh node, taking that node-7 spot, versus having to worry about differences, what data has changed or what hasn&#8217;t.<\/p>\n<p>194<br \/>00:25:17.620 &#8211;&gt; 00:25:21.220<br \/>Andy Whiteside: We always heal, we always heal down to a healthy state. We don&#8217;t<\/p>\n<p>195<br \/>00:25:21.250 &#8211;&gt; 00:25:23.860<br \/>Jirah Cox: stay in a degraded state whenever we can help it.<\/p>\n<p>196<br \/>00:25:24.050 &#8211;&gt; 00:25:27.409<br \/>Andy Whiteside: Yeah, that allows you to sleep at night, not guessing what<\/p>\n<p>197<br \/>00:25:27.630 &#8211;&gt; 00:25:46.130<br \/>Ben Rogers: could be happening when you&#8217;re not there to watch it. Guys, any comments on that? Well, for me personally, this would definitely give me peace of mind, because I&#8217;ve left the office several times where, you know, you had a RAID 5 and one drive dropped out, and while you&#8217;re getting the drive shipped back to you,<\/p>\n<p>198<br \/>00:25:46.140 &#8211;&gt; 00:25:54.690<br \/>Ben Rogers: those couple of hours, you&#8217;re just praying to the IT gods, don&#8217;t let anything go wrong. I mean, so<\/p>\n<p>199<br \/>00:25:54.700 &#8211;&gt; 00:26:19.929<br \/>Ben Rogers: for me to know that my technology would self-heal and assume that, oh, the unit&#8217;s bad, we&#8217;re gonna go ahead and get it out of the mix, we&#8217;re gonna continue running in a healthy state, and if you want to bring that unit back in, great, but we&#8217;re going to treat that as a new unit; that&#8217;s awesome, man. It definitely gives a good level of comfort that you don&#8217;t have to, you know, sit on pins and needles
while things are being shipped to you, or procured, or any of those things that we&#8217;re all used to dealing with.<\/p>\n<p>200<br \/>00:26:20.820 &#8211;&gt; 00:26:21.570<br \/>No.<\/p>\n<p>201<br \/>00:26:21.940 &#8211;&gt; 00:26:30.079<br \/>Andy Whiteside: So, Jeroen takes it a step further, I believe, and now he&#8217;s ready to at least know that he can take down another node and still be up and going.<\/p>\n<p>202<br \/>00:26:30.410 &#8211;&gt; 00:26:47.650<br \/>Jirah Cox: Yep. So the process here actually remains the same. So you could actually keep on crashing nodes, one at a time, as long as you are allowing that re-heal to complete between each node failure. So you could go from 7 to 6, 6 to 5, 5 to 4,<\/p>\n<p>203<br \/>00:26:48.660 &#8211;&gt; 00:27:02.540<br \/>Jirah Cox: 4 down to 3, and he actually talks about, you know, what if you keep on going until you only have 2 nodes left? At that point, under the covers, right, we have a metadata ring that is 3 nodes at a minimum size for RF2.<\/p>\n<p>204<br \/>00:27:02.550 &#8211;&gt; 00:27:21.860<br \/>Jirah Cox: That&#8217;s about as failed as you can get: you could have a 3-node cluster that then sheds one more node and is down to running on 2 nodes out of 3, 2 legs out of 3 on the stool. That one won&#8217;t re-heal, because you can&#8217;t shrink down to a 2-node healthy cluster from a larger, like 7-node, cluster.<\/p>\n<p>205<br \/>00:27:21.870 &#8211;&gt; 00:27:32.430<br \/>Jirah Cox: 3 nodes is the minimum there. So once you hit that 3-node threshold and you lose one more node, you can still run. You can definitely hit RF2 for customer data, right? All your data written twice. In that case it&#8217;s<\/p>\n<p>206<br \/>00:27:32.450 &#8211;&gt; 00:27:37.790<br \/>Jirah Cox:
Everything&#8217;s on node one and node 2, and Node 3 is down out of. Think<\/p>\n<p>207<br \/>00:27:38.380 &#8211;&gt; 00:27:58.159<br \/>Jirah Cox: at that point. If you lost one more node well, then, totally. You have no cluster left, right you down to one node out of 7. That&#8217;s a a non survival situation for the data. Your data is safe, but it&#8217;s not going to run, not going to be operable, not going to be an online state for the cluster. and at that point you do have to go, get you some spare parts and bring at least one more note the tech online from your<\/p>\n<p>208<br \/>00:27:58.170 &#8211;&gt; 00:27:59.770<br \/>Jirah Cox: one node out of 7 State<\/p>\n<p>209<br \/>00:28:00.660 &#8211;&gt; 00:28:03.659<br \/>Andy Whiteside: And tyra. Was this all doable<\/p>\n<p>210<br \/>00:28:03.880 &#8211;&gt; 00:28:06.599<br \/>Andy Whiteside: because of Yes, the magic of Newtonics, plus<\/p>\n<p>211<br \/>00:28:06.810 &#8211;&gt; 00:28:10.190<br \/>Andy Whiteside: the fact that he was running at such a low capacity to begin with.<\/p>\n<p>212<br \/>00:28:10.820 &#8211;&gt; 00:28:16.049<br \/>Jirah Cox: Totally so that&#8217;s that&#8217;s the real limit that most customers will hit first right unless you are only using what<\/p>\n<p>213<br \/>00:28:16.260 &#8211;&gt; 00:28:19.869<br \/>Jirah Cox: simple math would tell us. One seventh of your storage capacity.<\/p>\n<p>214<br \/>00:28:20.010 &#8211;&gt; 00:28:29.290<br \/>Jirah Cox: You&#8217;ll hit that first of let&#8217;s say you&#8217;re You&#8217;re using 3 nodes worth of storage. Then, as soon as you have to rebuild onto fewer than 3 nodes, the data doesn&#8217;t fit.<\/p>\n<p>215<br \/>00:28:29.400 &#8211;&gt; 00:28:31.890<br \/>Jirah Cox: We&#8217;re gonna call that cluster full.<\/p>\n<p>216<br \/>00:28:32.190 &#8211;&gt; 00:28:38.879<br \/>Jirah Cox: it&#8217;s going to go into a real read-only state. And at that point you&#8217;ve got to you know, Certainly. 
lay hands on the hardware that&#8217;s failed and bring that back online.<\/p>\n<p>217<br \/>00:28:39.020 &#8211;&gt; 00:28:39.720<br \/>Yes.<\/p>\n<p>218<br \/>00:28:39.760 &#8211;&gt; 00:28:40.470<br \/>Okay.<\/p>\n<p>219<br \/>00:28:41.200 &#8211;&gt; 00:28:59.130<br \/>Jirah Cox: But yeah, you can fail nodes down to, I call it, the water level of the cluster. Right? If the cluster is a bucket filled to a certain level, you can lose the top of the bucket, and another slice of the bucket, and keep on losing slices down to however full it is. Once you hit that threshold, that&#8217;s gonna fill it up, and we&#8217;ll go read-only at that point.<\/p>\n<p>220<br \/>00:29:00.370 &#8211;&gt; 00:29:18.619<br \/>Andy Whiteside: And then bringing this back online, it&#8217;s just a matter of adding a node, adding a node, adding a node as they become healthy and you want to reintroduce them. They kind of come in as a foreign object, it looks like, do they? Totally. Yup. Yeah. So there&#8217;s one click in Prism there to say, yeah, admit this node back into the cluster.<\/p>\n<p>221<br \/>00:29:18.630 &#8211;&gt; 00:29:29.290<br \/>Jirah Cox: If you&#8217;re unsure about it, right, as a software layer we have a little bit of, I think, a wise and somewhat healthy mistrust of hardware health.<\/p>\n<p>222<br \/>00:29:29.390 &#8211;&gt; 00:29:36.329<br \/>Jirah Cox: So if that node&#8217;s been flapping, it&#8217;s been up and down, it&#8217;s caused enough heartburn that we ejected it from the ring,<\/p>\n<p>223<br \/>00:29:36.730 &#8211;&gt; 00:29:55.410<br \/>Jirah Cox: we&#8217;re gonna make you tell us to trust that node again, rather than do that fully proactively, right? Maybe you&#8217;re testing out some, you know, flaky DIMM, or got some weird power going on in the cabinet in the data center.
We&#8217;ll let you tell us when that storm has passed, and then we&#8217;ll admit that node back into the ring.<\/p>\n<p>224<br \/>00:29:57.090 &#8211;&gt; 00:30:01.619<br \/>Jirah Cox: So there are some things that, as software only, we can&#8217;t programmatically determine.<\/p>\n<p>225<br \/>00:30:02.160 &#8211;&gt; 00:30:11.160<br \/>Andy Whiteside: And so, to a large degree, it&#8217;s aware of that node and aware that it might come online, but it&#8217;s completely mitigated it for now, until you tell it, hey, I&#8217;m ready for you to reconsider<\/p>\n<p>226<br \/>00:30:11.940 &#8211;&gt; 00:30:13.299<br \/>Andy Whiteside: bringing this guy back in.<\/p>\n<p>227<br \/>00:30:13.940 &#8211;&gt; 00:30:15.419<br \/>Jirah Cox: Yeah, that&#8217;s fair.<\/p>\n<p>228<br \/>00:30:15.830 &#8211;&gt; 00:30:16.500<br \/>Okay.<\/p>\n<p>229<br \/>00:30:17.320 &#8211;&gt; 00:30:19.360<br \/>Andy Whiteside: Philip, I&#8217;ll go to you first.<\/p>\n<p>230<br \/>00:30:19.590 &#8211;&gt; 00:30:23.740<br \/>Andy Whiteside: Any additional questions, comments, thoughts, things you&#8217;d like to add?<\/p>\n<p>231<br \/>00:30:23.810 &#8211;&gt; 00:30:29.270<br \/>Philip Sellers: Yeah, I wanted to ask a little bit about exposure time. So, you know, we&#8217;ve got this 30&nbsp;min<\/p>\n<p>232<br \/>00:30:29.400 &#8211;&gt; 00:30:36.170<br \/>Philip Sellers: timeout with the CVM, where it gets ejected out of the metadata ring. So<\/p>\n<p>233<br \/>00:30:36.690 &#8211;&gt; 00:30:44.140<br \/>Philip Sellers: you&#8217;re really kind of sitting in an exposed state for that 30&nbsp;min, or<\/p>\n<p>234<br \/>00:30:44.380 &#8211;&gt; 00:30:49.550<br \/>Philip Sellers: 30&nbsp;min plus however long it takes to complete the rebuild, right?<\/p>\n<p>235<br \/>00:30:49.630 &#8211;&gt; 00:30:52.400<br \/>Jirah Cox: It&#8217;s a fantastic question.
Actually, believe it or not, you don&#8217;t.<\/p>\n<p>236<br \/>00:30:52.480 &#8211;&gt; 00:30:57.360<br \/>Jirah Cox: So, as soon as you fail the node, at the very next second<\/p>\n<p>237<br \/>00:30:57.470 &#8211;&gt; 00:31:00.679<br \/>Jirah Cox: we&#8217;re actually starting the customer data rebuild immediately,<\/p>\n<p>238<br \/>00:31:00.920 &#8211;&gt; 00:31:08.479<br \/>Jirah Cox: and every new write as well. So if a VM generates new data onto, let&#8217;s say, one of the 6 surviving nodes in the cluster,<\/p>\n<p>239<br \/>00:31:08.550 &#8211;&gt; 00:31:16.470<br \/>Jirah Cox: we&#8217;re immediately honoring that write onto 2 different nodes. Right? Maybe it was initially going to be targeted to node 1 and node 7 for the 2 replica copies,<\/p>\n<p>240<br \/>00:31:16.510 &#8211;&gt; 00:31:28.649<br \/>Jirah Cox: but now it&#8217;ll be node 1 and node 6, or node 1 and node 5, or whatever it is. So we&#8217;d never accept a write, as the platform, a write from the VM, that we can&#8217;t honor according to the replication factor, right, RF2 or RF3.<\/p>\n<p>241<br \/>00:31:29.080 &#8211;&gt; 00:31:39.450<br \/>Jirah Cox: So all new data is immediately protected that way, and we immediately start the rebuild of customer data as well, again immediately. So, let&#8217;s just say, hypothetically, for this crash scenario,<\/p>\n<p>242<br \/>00:31:39.490 &#8211;&gt; 00:31:50.460<br \/>Jirah Cox: new writes are immediately protected according to the replication factor, and let&#8217;s say the rebuild finishes in 15&nbsp;min. The other 15&nbsp;min is simply our confidence in the node itself<\/p>\n<p>243<br \/>00:31:50.480 &#8211;&gt; 00:31:58.840<br \/>Jirah Cox: before we eject it from the metadata ring. But that&#8217;s not really user-facing; it doesn&#8217;t expose risk or cause exposure, to your point, Philip. Great question.
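The write-path behavior described here, where every new write still lands on RF distinct surviving nodes from the second a node fails, can be sketched as a toy placement function (the names are invented for illustration, and real replica placement considers far more than this):

```python
import random

# Toy sketch of RF-aware write placement after a node failure: replicas
# for each new write are chosen only from surviving nodes, and the write
# is refused outright if the replication factor can no longer be honored.

def place_write(all_nodes, failed, rf=2):
    """Pick `rf` distinct surviving nodes to hold copies of a new write."""
    survivors = [n for n in all_nodes if n not in failed]
    if len(survivors) < rf:
        # The platform never accepts a write it cannot fully replicate.
        raise RuntimeError("cannot honor replication factor %d" % rf)
    return random.sample(survivors, rf)

nodes = [f"node{i}" for i in range(1, 8)]          # a 7-node cluster
replicas = place_write(nodes, failed={"node7"})    # node 7 just died
assert len(set(replicas)) == 2                     # RF2: two distinct copies
assert "node7" not in replicas                     # never on the failed node
```

The key property is the refusal branch: with fewer survivors than the replication factor, no new write is accepted at all, which matches the "never accept a write we can't honor" point above.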
Fantastic question.<\/p>\n<p>244<br \/>00:31:58.870 &#8211;&gt; 00:32:17.419<br \/>Philip Sellers: And that makes a ton of sense, too, because as long as the rebuild is in place, then all your bits are protected. Yeah. So there are parts of this that are totally inside baseball, under the covers, under the hood I should say, that we&#8217;re being super transparent about, and this is all in the Nutanix Bible as well, right?<\/p>\n<p>245<br \/>00:32:17.430 &#8211;&gt; 00:32:30.989<br \/>Jirah Cox: In terms of, like, what layers of data power the system and contribute to availability. But yeah, it&#8217;s always been an operating thesis that, you know, we have to be absolutely paranoid about<\/p>\n<p>246<br \/>00:32:31.000 &#8211;&gt; 00:32:43.470<br \/>Jirah Cox: customer data integrity, right? Otherwise we&#8217;re just useless as a platform, right? No one should trust us. So that&#8217;s always been job one: be a good steward of customer data, and that includes immediate rebuild, with no delay timers before we start the rebuild.<\/p>\n<p>247<br \/>00:32:43.520 &#8211;&gt; 00:33:00.409<br \/>Jirah Cox: Our view of the world is we never assume the hardware is gonna come back, right? So we&#8217;ll never delay a rebuild hoping it will, trying to make our life easier. We&#8217;ll take the harder path of starting the rebuild now. And worst case, if that node does come back because it was just a transient power failure or a reboot,<\/p>\n<p>248<br \/>00:33:00.480 &#8211;&gt; 00:33:11.259<br \/>Jirah Cox: well, then, worst case, we&#8217;ve over-protected some data, and we can do that garbage cleanup, right? But we&#8217;ll never let the customer run exposed, hoping and praying that hardware comes back when it might not.<\/p>\n<p>249<br \/>00:33:12.140 &#8211;&gt; 00:33:17.899<br \/>Philip Sellers: Yeah, the other thing, just to reiterate what you said: you know, you&#8217;ve got
you know you you&#8217;ve got<\/p>\n<p>250<br \/>00:33:18.370 &#8211;&gt; 00:33:33.199<br \/>Philip Sellers: this concept of the local cache in every node, and with Vdi it&#8217;s one of the things that really makes it a great platform for running. Vdi is having that local copy on every single one of your nodes. So especially the non persistent desktops.<\/p>\n<p>251<br \/>00:33:33.370 &#8211;&gt; 00:33:35.280<br \/>Philip Sellers: They&#8217;re right there.<\/p>\n<p>252<br \/>00:33:35.510 &#8211;&gt; 00:33:44.679<br \/>Philip Sellers: you know it&#8217;s it&#8217;s it&#8217;s never going to be faster than that. Right. It&#8217;s hard. He says you&#8217;re you&#8217;re never going to get food faster than in your own kitchen. So<\/p>\n<p>253<br \/>00:33:45.110 &#8211;&gt; 00:33:49.220<br \/>Philip Sellers: it it&#8217;s it&#8217;s one of those great platform trade-offs, and and<\/p>\n<p>254<br \/>00:33:49.310 &#8211;&gt; 00:34:03.790<br \/>Philip Sellers: you said it earlier. But you know I just wanted to reiterate, re re-emphasize it because it is one of the great use, cases, and why we we like to do. Vdi on new tanks is architecturally. There&#8217;s things there that help us.<\/p>\n<p>255<br \/>00:34:05.950 &#8211;&gt; 00:34:18.300<br \/>Andy Whiteside: Yeah, there&#8217;s there&#8217;s there that we&#8217;ve wanted forever. Just took new tanks showing up to make it to where that was handled at a sub under the surface layer, so we could get on with brokering connections and<\/p>\n<p>256<br \/>00:34:18.750 &#8211;&gt; 00:34:20.830<br \/>Andy Whiteside: enabling user? Experience.<\/p>\n<p>257<br \/>00:34:22.960 &#8211;&gt; 00:34:25.689<br \/>Ben Rogers: You know what the best<\/p>\n<p>258<br \/>00:34:26.790 &#8211;&gt; 00:34:39.430<br \/>Ben Rogers: best thing about. Hv: Well, one of it&#8217;s the cost under the inclusion of it. 
But, you know, what I&#8217;m highlighting is this:<\/p>\n<p>259<br \/>00:34:39.550 &#8211;&gt; 00:34:44.779<br \/>Ben Rogers: So, you know, for customers, let&#8217;s forget all about all of that.<\/p>\n<p>260<br \/>00:34:44.810 &#8211;&gt; 00:35:00.130<br \/>Ben Rogers: We&#8217;re going into, you know, a little bit of a recession. Budgets are going to get a little tight. A lot of customers are looking at this as a way: could I get my budget a little skinnier and utilize something that I already own, versus trying to reinvent the wheel with a secondary product?<\/p>\n<p>261<br \/>00:35:02.120 &#8211;&gt; 00:35:12.999<br \/>Andy Whiteside: Yeah, I totally get it. I totally agree. I mean, in the beginning there was an argument that was true, but not a no-brainer. But as Philip pointed out a while ago, this thing that<\/p>\n<p>262<br \/>00:35:13.190 &#8211;&gt; 00:35:24.020<br \/>Andy Whiteside: you guys at Nutanix have created, this cost-effective platform that now has all these additional services bolted onto it, it really expanded that story. It&#8217;s not free&#8230;<\/p>\n<p>263<br \/>00:35:24.100 &#8211;&gt; 00:35:29.539<br \/>Ben Rogers: Now, nobody gets it for free, but it is included in your AOS license,<\/p>\n<p>264<br \/>00:35:30.130 &#8211;&gt; 00:35:30.770<br \/>right?<\/p>\n<p>265<br \/>00:35:31.290 &#8211;&gt; 00:35:34.720<br \/>Jirah Cox: Totally. Yeah. Plenty of opportunity there to simplify. I mean,<\/p>\n<p>266<br \/>00:35:34.760 &#8211;&gt; 00:35:35.809<br \/>Jirah Cox: I mean,<\/p>\n<p>267<br \/>00:35:36.140 &#8211;&gt; 00:35:47.330<br \/>Jirah Cox: zooming out for a second, I mean, KVM is one of the most widely deployed hypervisors in the world, right? The real trick we taught it here, in addition to speaking, you know, CVM for high-speed
local storage,<\/p>\n<p>268<br \/>00:35:47.370 &#8211;&gt; 00:35:59.409<br \/>Jirah Cox: is manageability, right? Like, you know, we brought it into the family, right? You manage it with Prism. If you already know how to run Nutanix, then managing AHV is kind of a non-issue, right? You run it like any other cluster.<\/p>\n<p>269<br \/>00:35:59.490 &#8211;&gt; 00:36:02.620<br \/>Jirah Cox: So, simplicity included.<\/p>\n<p>270<br \/>00:36:03.730 &#8211;&gt; 00:36:10.370<br \/>Philip Sellers: Yeah, it&#8217;s funny, Andy, you asked about the enable-HA-reservation checkbox. You know,<\/p>\n<p>271<br \/>00:36:10.560 &#8211;&gt; 00:36:30.449<br \/>Philip Sellers: there&#8217;s a lot here, as I&#8217;ve explored the Nutanix platform, that reminds me of the early days of VMware, where they made clustering easy. You just went in and checked the box, and then you had a cluster. And I remember doing Windows cluster builds that took 2 weeks to set up and configure and get all the bugs worked out, prior to that.<\/p>\n<p>272<br \/>00:36:30.480 &#8211;&gt; 00:36:46.710<br \/>Philip Sellers: There&#8217;s a lot of that that exists in the platform, too. Simplicity. Nutanix is doing a good job of delivering something complex under the covers in a simple way, and operating it in a simple way.<\/p>\n<p>273<br \/>00:36:46.730 &#8211;&gt; 00:36:53.059<br \/>Philip Sellers: Those are things, I think, that resonate with technologists when they get to see<\/p>\n<p>274<br \/>00:36:53.210 &#8211;&gt; 00:36:56.059<br \/>Philip Sellers: what&#8217;s being delivered to them.<\/p>\n<p>275<br \/>00:36:56.190 &#8211;&gt; 00:36:59.699<br \/>Philip Sellers: Yeah, I&#8217;ve looked at other hypervisors, one from a<\/p>\n<p>276<br \/>00:37:00.640 &#8211;&gt; 00:37:10.499<br \/>Philip Sellers: large software company that might write operating systems, and, you know, it&#8217;s clunky. It
takes some<\/p>\n<p>277<br \/>00:37:10.680 &#8211;&gt; 00:37:19.239<br \/>Philip Sellers: extra steps. You know, it&#8217;s that old-school clustering that you know from Windows. It&#8217;s the same on Hyper-V, and<\/p>\n<p>278<br \/>00:37:19.390 &#8211;&gt; 00:37:26.160<br \/>Philip Sellers: there&#8217;s something to be said about the simple message and delivery that Nutanix is doing here.<\/p>\n<p>279<br \/>00:37:26.860 &#8211;&gt; 00:37:37.849<br \/>Andy Whiteside: I think one of the advantages Nutanix has is it was born with the same hyperconverged processes and concepts and constructs from day one.<\/p>\n<p>280<br \/>00:37:38.240 &#8211;&gt; 00:37:43.420<br \/>Andy Whiteside: That wasn&#8217;t something that had to, you know, find its way into the solution. It was born that way.<\/p>\n<p>281<br \/>00:37:43.530 &#8211;&gt; 00:37:50.110<br \/>Andy Whiteside: And, you know, sometimes it&#8217;s just helpful to be born at a time when the future is there, versus having to adapt to it.<\/p>\n<p>282<br \/>00:37:50.860 &#8211;&gt; 00:37:53.120<br \/>Jirah Cox: Yeah, that&#8217;s pretty fair. I mean, the<\/p>\n<p>283<br \/>00:37:53.360 &#8211;&gt; 00:37:57.720<br \/>Jirah Cox: the point I like to make, it was taught to me years ago, is like,<\/p>\n<p>284<br \/>00:37:57.920 &#8211;&gt; 00:38:16.470<br \/>Jirah Cox: any system with sufficient ability to solve business problems has complexity in it, right? It&#8217;s a design question: do we ask users to bear that complexity, or do we absorb that complexity so they get a simpler experience? Right, there&#8217;s complexity being abstracted under the hood.<\/p>\n<p>285<br \/>00:38:16.540 &#8211;&gt; 00:38:32.449<br \/>Jirah Cox: I heard one time that, you know, we evaluate any given VM
Right, on like 7 or 12 different metrics, to determine where it&#8217;s going to land. So there&#8217;s plenty of decisions being made under the hood, but we just don&#8217;t ask users to wade into the thick of that and make those for us. Right? If we can abstract, we will.<\/p>\n<p>286<br \/>00:38:32.870 &#8211;&gt; 00:38:33.430<br \/>Right.<\/p>\n<p>287<br \/>00:38:35.370 &#8211;&gt; 00:38:40.269<br \/>Andy Whiteside: Well, guys, we&#8217;re more or less out of time. This has been a good conversation.<\/p>\n<p>288<br \/>00:38:40.520 &#8211;&gt; 00:38:41.899<br \/>Andy Whiteside: Any additional<\/p>\n<p>289<br \/>00:38:41.940 &#8211;&gt; 00:38:43.319<br \/>Andy Whiteside: thoughts, comments?<\/p>\n<p>290<br \/>00:38:43.580 &#8211;&gt; 00:38:53.000<br \/>Ben Rogers: No, you know, I kind of go back to what was said at the beginning of the podcast, man. This is a platform. It&#8217;s not just a hyperconverged system anymore,<\/p>\n<p>291<br \/>00:38:53.010 &#8211;&gt; 00:39:10.749<br \/>Ben Rogers: and all the other products that we have kind of hinge off of this idea of, you know, spreading data across the cluster, data resiliency, all these things. So if you guys want to learn more, reach out to us. It&#8217;s an exciting time to be at Nutanix. We&#8217;re taking this technology to the cloud now,<\/p>\n<p>292<br \/>00:39:10.810 &#8211;&gt; 00:39:20.549<br \/>Ben Rogers: and that&#8217;s opening up some doors for us. So it&#8217;s a really good time to be employed at Nutanix, a really good time to be running Nutanix, and I look forward to what the future brings for us.<\/p>\n<p>293<br \/>00:39:22.180 &#8211;&gt; 00:39:24.799<br \/>Andy Whiteside: Harvey, what did we miss?
Anything you want to cover?<\/p>\n<p>294<br \/>00:39:25.070 &#8211;&gt; 00:39:26.959<br \/>Harvey Green: Well, since you<\/p>\n<p>295<br \/>00:39:27.030 &#8211;&gt; 00:39:40.090<br \/>Harvey Green: time-stamped and date-stamped the episode at the start, I&#8217;ll just remind everybody that this week is a third-Friday week. So we&#8217;re doing our workshop on Friday from<\/p>\n<p>296<br \/>00:39:40.270 &#8211;&gt; 00:39:46.130<br \/>Harvey Green: 11 to 2. This week it&#8217;s Nutanix Database Service. So<\/p>\n<p>297<br \/>00:39:46.200 &#8211;&gt; 00:39:48.009<br \/>Harvey Green: definitely jump on.<\/p>\n<p>298<br \/>00:39:48.960 &#8211;&gt; 00:39:50.640<br \/>Andy Whiteside: Formerly known as Era.<\/p>\n<p>299<br \/>00:39:50.820 &#8211;&gt; 00:39:52.730<br \/>Harvey Green: Formerly known as Era.<\/p>\n<p>300<br \/>00:39:52.950 &#8211;&gt; 00:39:56.609<br \/>Jirah Cox: And that&#8217;s a fun workshop, right? That&#8217;s like,<\/p>\n<p>301<br \/>00:39:56.750 &#8211;&gt; 00:40:01.490<br \/>Jirah Cox: that goes so far beyond this kind of VM management, data rebuilding. It&#8217;s like<\/p>\n<p>302<br \/>00:40:01.650 &#8211;&gt; 00:40:08.479<br \/>Jirah Cox: batteries included, right? Like, what can the platform really do for you and help streamline a lot of day-2 operations?<\/p>\n<p>303<br \/>00:40:08.540 &#8211;&gt; 00:40:27.529<br \/>Harvey Green: Yeah, I will laugh at one of my friends, because he always tells me I come back with the phrase &#8220;what else can you do?&#8221; every time he brings something up. This is one of those occasions where we&#8217;ve just gone through, you know, this entire podcast on some of the things that are underpinning the entire platform,<\/p>\n<p>304<br \/>00:40:27.630 &#8211;&gt; 00:40:33.300<br \/>Harvey Green: and then it&#8217;s like, well, what else can you do?
Well, on Friday you&#8217;ll see more<\/p>\n<p>305<br \/>00:40:34.190 &#8211;&gt; 00:40:36.209<br \/>where that came from.<\/p>\n<p>306<br \/>00:40:38.080 &#8211;&gt; 00:40:42.290<br \/>Philip Sellers: No, I&#8217;ll be on the same workshop<\/p>\n<p>307<br \/>00:40:42.590 &#8211;&gt; 00:40:50.670<br \/>Harvey Green: with Harvey on Friday. We&#8217;d love to see you. You know, Phil&#8217;s gonna do all the work this time. He&#8217;s gonna sit there and relax.<\/p>\n<p>308<br \/>00:40:51.620 &#8211;&gt; 00:40:55.640<br \/>Andy Whiteside: I have no doubt it will go just fine. Jirah, one last chance, anything?<\/p>\n<p>309<br \/>00:41:01.640 &#8211;&gt; 00:41:21.150<br \/>Jirah Cox: I shouldn&#8217;t be blanking on that. Oh, well, we announced that .NEXT is coming to Chicago, right? Definitely check out the site for that, talk to your account teams, tell them you want to get some passes, and we&#8217;d love to see you in Chicago as .NEXT leaves the virtual world and goes back to the physical, or something like that. The analogy breaks down.<\/p>\n<p>310<br \/>00:41:21.470 &#8211;&gt; 00:41:33.129<br \/>Andy Whiteside: Well, and if you&#8217;re looking for passes, if you renew your existing licenses with XenTegra, there are passes in it. We&#8217;re going to be giving away, I think, 20 passes.<\/p>\n<p>311<br \/>00:41:33.690 &#8211;&gt; 00:41:42.400<br \/>Jirah Cox: We plan to give away both of the Broncos that we&#8217;re giving away, as in &#8220;work has no boundaries,&#8221; at the event. That&#8217;s right. There you go, man. I want to be a XenTegra customer.<\/p>\n<p>312<br \/>00:41:42.510 &#8211;&gt; 00:41:55.430<br \/>Harvey Green: Hey, you should be. You should be. My fear is you wouldn&#8217;t need us. But that&#8217;s okay.
All right, guys, we&#8217;ll appreciate you guys joining and doing this with us, and we&#8217;ll do it again in a week or 2.<\/p>\n<p>313<br \/>00:41:56.560 &#8211;&gt; 00:41:59.020<br \/>Jirah Cox: Sounds good, All right, thanks, Everybody.<\/p>\n<p><\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Within Nutanix we have Replication Factor (How many data copies are written in the cluster) and Redundancy Factor (how many nodes\/disks can go offline). Both can have a value of &hellip;<\/p>","protected":false},"author":7,"featured_media":65766,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_price":"","_stock":"","_tribe_ticket_header":"","_tribe_default_ticket_provider":"","_tribe_ticket_capacity":"0","_ticket_start_date":"","_ticket_end_date":"","_tribe_ticket_show_description":"","_tribe_ticket_show_not_going":false,"_tribe_ticket_use_global_stock":"","_tribe_ticket_global_stock_level":"","_global_stock_mode":"","_global_stock_cap":"","_tribe_rsvp_for_event":"","_tribe_ticket_going_count":"","_tribe_ticket_not_going_count":"","_tribe_tickets_list":"[]","_tribe_ticket_has_attendee_info_fields":false,"footnotes":""},"categories":[5],"tags":[122],"class_list":["post-65701","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-podcast","tag-nutanix-weekly"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>63: Nutanix Weekly: Honey I Shrunk My Cluster<\/title>\n<meta name=\"description\" content=\"In this blog post I will explain different scenarios and their behaviors within Nutanix we have Replication Factor and Redundancy Factor.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" 
href=\"https:\/\/xentegra.com\/hi\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/\" \/>\n<meta property=\"og:locale\" content=\"hi_IN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"63: Nutanix Weekly: Honey I Shrunk My Cluster\" \/>\n<meta property=\"og:description\" content=\"In this blog post I will explain different scenarios and their behaviors within Nutanix we have Replication Factor and Redundancy Factor.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/xentegra.com\/hi\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/\" \/>\n<meta property=\"og:site_name\" content=\"XenTegra\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/XenTegra\/\" \/>\n<meta property=\"article:published_time\" content=\"2022-12-17T04:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-07-01T20:49:34+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2024\/03\/Nutanix-Weekly.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1100\" \/>\n\t<meta property=\"og:image:height\" content=\"600\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Chase Newmyer\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@xentegra\" \/>\n<meta name=\"twitter:site\" content=\"@xentegra\" \/>\n<meta name=\"twitter:label1\" content=\"\u0926\u094d\u0935\u093e\u0930\u093e \u0932\u093f\u0916\u093f\u0924\" \/>\n\t<meta name=\"twitter:data1\" content=\"Chase Newmyer\" \/>\n\t<meta name=\"twitter:label2\" content=\"\u0905\u0928\u0941\u092e\u093e\u0928\u093f\u0924 \u092a\u0922\u093c\u0928\u0947 \u0915\u093e \u0938\u092e\u092f\" \/>\n\t<meta name=\"twitter:data2\" content=\"43 \u092e\u093f\u0928\u091f\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/\"},\"author\":{\"name\":\"Chase Newmyer\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/#\\\/schema\\\/person\\\/84736408f096bfd92b80305aea8846a7\"},\"headline\":\"63: Nutanix Weekly: Honey I Shrunk My Cluster (Multiple Nodes Down in RF2)\",\"datePublished\":\"2022-12-17T04:00:00+00:00\",\"dateModified\":\"2025-07-01T20:49:34+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/\"},\"wordCount\":9502,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/xentegra.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Nutanix-Weekly.png\",\"keywords\":[\"Nutanix Weekly\"],\"articleSection\":[\"Podcast\"],\"inLanguage\":\"hi-IN\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/\",\"url\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/\",\"name\":\"63: Nutanix Weekly: Honey I Shrunk My 
Cluster\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/xentegra.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Nutanix-Weekly.png\",\"datePublished\":\"2022-12-17T04:00:00+00:00\",\"dateModified\":\"2025-07-01T20:49:34+00:00\",\"description\":\"In this blog post I will explain different scenarios and their behaviors within Nutanix we have Replication Factor and Redundancy Factor.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#breadcrumb\"},\"inLanguage\":\"hi-IN\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"hi-IN\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#primaryimage\",\"url\":\"https:\\\/\\\/xentegra.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Nutanix-Weekly.png\",\"contentUrl\":\"https:\\\/\\\/xentegra.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Nutanix-Weekly.png\",\"width\":1100,\"height\":600},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/resources\\\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/xentegra.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"63: Nutanix Weekly: Honey I Shrunk My Cluster (Multiple Nodes Down in 
RF2)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/#website\",\"url\":\"https:\\\/\\\/xentegra.com\\\/\",\"name\":\"XenTegra\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/xentegra.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"hi-IN\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/#organization\",\"name\":\"XenTegra\",\"url\":\"https:\\\/\\\/xentegra.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"hi-IN\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/xentegra.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/1519903807641-min.jpg\",\"contentUrl\":\"https:\\\/\\\/xentegra.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/1519903807641-min.jpg\",\"width\":200,\"height\":200,\"caption\":\"XenTegra\"},\"image\":{\"@id\":\"https:\\\/\\\/xentegra.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/XenTegra\\\/\",\"https:\\\/\\\/x.com\\\/xentegra\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/xentegra-llc\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/xentegra.com\\\/#\\\/schema\\\/person\\\/84736408f096bfd92b80305aea8846a7\",\"name\":\"Chase 
Newmyer\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"hi-IN\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/d46cd44f0bd433dc5a386cbac549c62fd92266e3951669c705b347be2130cca3?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/d46cd44f0bd433dc5a386cbac549c62fd92266e3951669c705b347be2130cca3?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/d46cd44f0bd433dc5a386cbac549c62fd92266e3951669c705b347be2130cca3?s=96&d=mm&r=g\",\"caption\":\"Chase Newmyer\"},\"url\":\"https:\\\/\\\/xentegra.com\\\/hi\\\/resources\\\/author\\\/chasenewmyer\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"63: Nutanix Weekly: Honey I Shrunk My Cluster","description":"In this blog post I will explain different scenarios and their behaviors within Nutanix we have Replication Factor and Redundancy Factor.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/xentegra.com\/hi\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/","og_locale":"hi_IN","og_type":"article","og_title":"63: Nutanix Weekly: Honey I Shrunk My Cluster","og_description":"In this blog post I will explain different scenarios and their behaviors within Nutanix we have Replication Factor and Redundancy Factor.","og_url":"https:\/\/xentegra.com\/hi\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/","og_site_name":"XenTegra","article_publisher":"https:\/\/www.facebook.com\/XenTegra\/","article_published_time":"2022-12-17T04:00:00+00:00","article_modified_time":"2025-07-01T20:49:34+00:00","og_image":[{"width":1100,"height":600,"url":"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2024\/03\/Nutanix-Weekly.png","type":"image\/png"}],"author":"Chase 
Newmyer","twitter_card":"summary_large_image","twitter_creator":"@xentegra","twitter_site":"@xentegra","twitter_misc":{"\u0926\u094d\u0935\u093e\u0930\u093e \u0932\u093f\u0916\u093f\u0924":"Chase Newmyer","\u0905\u0928\u0941\u092e\u093e\u0928\u093f\u0924 \u092a\u0922\u093c\u0928\u0947 \u0915\u093e \u0938\u092e\u092f":"43 \u092e\u093f\u0928\u091f"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#article","isPartOf":{"@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/"},"author":{"name":"Chase Newmyer","@id":"https:\/\/xentegra.com\/#\/schema\/person\/84736408f096bfd92b80305aea8846a7"},"headline":"63: Nutanix Weekly: Honey I Shrunk My Cluster (Multiple Nodes Down in RF2)","datePublished":"2022-12-17T04:00:00+00:00","dateModified":"2025-07-01T20:49:34+00:00","mainEntityOfPage":{"@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/"},"wordCount":9502,"commentCount":0,"publisher":{"@id":"https:\/\/xentegra.com\/#organization"},"image":{"@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#primaryimage"},"thumbnailUrl":"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2024\/03\/Nutanix-Weekly.png","keywords":["Nutanix Weekly"],"articleSection":["Podcast"],"inLanguage":"hi-IN","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/","url":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/","name":"63: Nutanix Weekly: Honey I Shrunk My 
Cluster","isPartOf":{"@id":"https:\/\/xentegra.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#primaryimage"},"image":{"@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#primaryimage"},"thumbnailUrl":"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2024\/03\/Nutanix-Weekly.png","datePublished":"2022-12-17T04:00:00+00:00","dateModified":"2025-07-01T20:49:34+00:00","description":"In this blog post I will explain different scenarios and their behaviors within Nutanix we have Replication Factor and Redundancy Factor.","breadcrumb":{"@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#breadcrumb"},"inLanguage":"hi-IN","potentialAction":[{"@type":"ReadAction","target":["https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/"]}]},{"@type":"ImageObject","inLanguage":"hi-IN","@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#primaryimage","url":"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2024\/03\/Nutanix-Weekly.png","contentUrl":"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2024\/03\/Nutanix-Weekly.png","width":1100,"height":600},{"@type":"BreadcrumbList","@id":"https:\/\/xentegra.com\/resources\/nutanix-weekly-honey-i-shrunk-my-cluster-multiple-nodes-down-in-rf2\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/xentegra.com\/"},{"@type":"ListItem","position":2,"name":"63: Nutanix Weekly: Honey I Shrunk My Cluster (Multiple Nodes Down in 
RF2)"}]},{"@type":"WebSite","@id":"https:\/\/xentegra.com\/#website","url":"https:\/\/xentegra.com\/","name":"\u091c\u093c\u0947\u0928\u091f\u0947\u0917\u094d\u0930\u093e","description":"","publisher":{"@id":"https:\/\/xentegra.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/xentegra.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"hi-IN"},{"@type":"Organization","@id":"https:\/\/xentegra.com\/#organization","name":"\u091c\u093c\u0947\u0928\u091f\u0947\u0917\u094d\u0930\u093e","url":"https:\/\/xentegra.com\/","logo":{"@type":"ImageObject","inLanguage":"hi-IN","@id":"https:\/\/xentegra.com\/#\/schema\/logo\/image\/","url":"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2023\/06\/1519903807641-min.jpg","contentUrl":"https:\/\/eadn-wc05-13529174.nxedge.io\/wp-content\/uploads\/2023\/06\/1519903807641-min.jpg","width":200,"height":200,"caption":"XenTegra"},"image":{"@id":"https:\/\/xentegra.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/XenTegra\/","https:\/\/x.com\/xentegra","https:\/\/www.linkedin.com\/company\/xentegra-llc"]},{"@type":"Person","@id":"https:\/\/xentegra.com\/#\/schema\/person\/84736408f096bfd92b80305aea8846a7","name":"Chase Newmyer","image":{"@type":"ImageObject","inLanguage":"hi-IN","@id":"https:\/\/secure.gravatar.com\/avatar\/d46cd44f0bd433dc5a386cbac549c62fd92266e3951669c705b347be2130cca3?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/d46cd44f0bd433dc5a386cbac549c62fd92266e3951669c705b347be2130cca3?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d46cd44f0bd433dc5a386cbac549c62fd92266e3951669c705b347be2130cca3?s=96&d=mm&r=g","caption":"Chase 
Newmyer"},"url":"https:\/\/xentegra.com\/hi\/resources\/author\/chasenewmyer\/"}]}},"_links":{"self":[{"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/posts\/65701","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/comments?post=65701"}],"version-history":[{"count":1764,"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/posts\/65701\/revisions"}],"predecessor-version":[{"id":716392,"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/posts\/65701\/revisions\/716392"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/media\/65766"}],"wp:attachment":[{"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/media?parent=65701"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/categories?post=65701"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/xentegra.com\/hi\/wp-json\/wp\/v2\/tags?post=65701"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}