4: How to leverage Cloud Storage on AWS (PART TWO)

Oct 7, 2022

Jon Spallone of XenTegra and Brian Schoepfle of AWS discuss Cloud Storage on AWS. This episode is part two, where we break down what is offered in AWS and how this benefits you in your cloud architecture.

Topics we’ll discuss are: 

  • AWS storage services
    • Data migration
      • AWS DataSync
      • AWS Snow Family
    • Hybrid cloud storage and edge computing
      • AWS Storage Gateway
      • AWS Snow Family
    • AWS Transfer Family
    • Disaster recovery and backup 
      • AWS Elastic Disaster Recovery (DRS)
      • AWS Backup

This episode is PART TWO

Host: Jon Spallone
Co-Host: Brian Schoepfle


00:00:03.370 –> 00:00:18.989
Jon Spallone: Welcome back to this week’s episode of In the Clouds with AWS. Jon here, and I’ve got Brian back as well. If you remember, last episode we touched on storage, and that was part one of our episode.

00:00:19.000 –> 00:00:32.329
Jon Spallone: This is going to be a continuation here, in part two, where we’re going to dig into some other layers of storage itself. So again, Brian, thanks for being here. I appreciate your time, and we can get set up to dig into our content today.

00:00:33.150 –> 00:00:35.520
Brian Schoepfle: Awesome, looking forward to it. Thanks, Jon.

00:00:36.180 –> 00:01:04.969
Jon Spallone: So just to recap, what we covered last week was object, file, and block storage. If we were on-prem, the way to think about it from that standpoint would be to look at DAS: it’s direct-attached storage that we’re going to either put to an instance, or we’re going to utilize it for high availability. There are different scales and tiers based upon demands and needs that an AWS customer would have

00:01:05.200 –> 00:01:18.620
Jon Spallone: for their AWS storage. So I know we covered a lot of that in depth. Were there any key topics or refresher points that you would want to hit on before we move on to data migration?

00:01:19.450 –> 00:01:34.349
Brian Schoepfle: Yeah. So, you know, as you said, last time we talked about what would probably be considered the four main ways to store data inside of AWS, outside of, you know, a managed database service, or something like that, which is something we could certainly get to.

00:01:34.360 –> 00:02:03.490
Brian Schoepfle: So we’ve talked about the many different flavors of S3, its durability, its availability, and really being able to pick, you know, the right class of storage, the right performance of storage based on what you need, across both S3 and EBS, which is our block storage service, and these are the virtual disks that are attached to EC2 instances. We talked about EFS as a Linux-based managed elastic file system where you pay for what you use,

00:02:03.530 –> 00:02:09.840
Brian Schoepfle: and we went into a bunch of different flavors of Fs. And we talked about

00:02:09.850 –> 00:02:27.759
Brian Schoepfle: it’s integrations with muster file systems. We talked about Zfs managed windows, file shares, using the Smb protocol, and I think we spent quite a bit of time on on something that’s really excited for us right now, which is the integration with ah net app and cloud volumes on tab

00:02:27.770 –> 00:02:45.879
Brian Schoepfle: um and and fsx for for net app. So all that’s very exciting, certainly. But I think today we’re going to talk about. Okay, Now that we know what our various destinations could be. How do we get it there? And and where does it live? And what are we doing with it in the process?

00:02:45.890 –> 00:03:11.390
Brian Schoepfle: Um, you know, if you approach a migration Ah! In the same way that you might approach like moving your own house right? Um. You know those of us who have been through a big move. You know it’s we’re oftentimes like boxing stuff up, and we’re packing it up. And you know these are our precious belongings, and we can’t get to them or use them while they’re in transit. And then, once they arrive at our destination, we’ve got to spend all that time setting it back up again.

00:03:11.400 –> 00:03:40.899
Brian Schoepfle: Well, at the public sector, organization, level or the you know, commercial sector enterprise level, we don’t have the luxury of just shutting everything down while we tear it down, package it, move it, and and reassemble it at our new location. So we’re going to talk about some ways for offline and online data transfer today that I think will make things much clearer for folks about how they can get things into the cloud. And then also, as I think we’ll see a theme of merge here as we talked about a lot of these things,

00:03:40.910 –> 00:03:50.880
Brian Schoepfle: because there’s a real dual use case for many of these services that are not only for migration, but also for business, continuity and disaster recovery, which I know some we plan on getting to in the future.

00:03:50.890 –> 00:04:03.720
Jon Spallone: Yeah. And also, I know I’m jumping around here a little bit, but we did hit on it last week: the Storage Gateway, which pretty much is our core when we’re talking about

00:04:03.730 –> 00:04:19.200
Jon Spallone: this this integration of our on-prem worlds into our aws storage side the house, so I I think we’ll hit a little bit on that. I know we we covered it, but you know just some additional talking points, or whatever. We see how it breaks out.

00:04:19.209 –> 00:04:27.900
Jon Spallone: So with that we’re moving into the data migration phase. And what we’re looking at here in our first category is AWS DataSync.

00:04:28.210 –> 00:04:32.770
Brian Schoepfle: So what we’re really talking about here is

00:04:33.030 –> 00:04:34.290
Jon Spallone: data

00:04:34.410 –> 00:04:38.990
Jon Spallone: synchronization between our on-prem environments and our

00:04:39.030 –> 00:04:52.989
Jon Spallone: aws cloud environment. So What we would have here is is this: from what I’m looking at it’s really our object. Level storage that we’re replicating by using this. Is that correct?

00:04:53.470 –> 00:05:02.390
Brian Schoepfle: Well, the answer would be yes. And so AWS DataSync is one of the things that we’d want to leverage for online data transfers,

00:05:02.400 –> 00:05:09.469
Brian Schoepfle: and it’s very, very useful for customers who are seeking to simplify, automate, and accelerate

00:05:09.600 –> 00:05:21.769
Brian Schoepfle: how they can copy large amounts of data between on-premises, storage edge locations, other clouds and also maybe even other aws services as well. So

00:05:21.780 –> 00:05:43.929
Brian Schoepfle: um data sync can copy data. Ah, for object storage, certainly, but it can also copy data between network files, so network, file, system shares or Nfs uh server, message, block or smb. The Hadoop distributed file system, any self managed object storage that someone might have, you know, having carried a bag for Emc in the past, i’m thinking

00:05:43.940 –> 00:06:00.459
Brian Schoepfle: the Emc Celera systems, if anyone remembers that. But and also Google Cloud Storage azure files and a and a bunch of different aws services, including the ones we talked about last time, which would be s three and fsx and and fsx for net at Mont.

00:06:00.710 –> 00:06:04.750
Jon Spallone: Okay, and then that’s run with an agent that’s on-prem

00:06:04.760 –> 00:06:08.569
Jon Spallone: um that’s allowing you to be able to do those connections correct

00:06:09.930 –> 00:06:11.060
Brian Schoepfle: So

00:06:11.220 –> 00:06:12.370
Brian Schoepfle: it’s.

00:06:13.210 –> 00:06:23.100
Brian Schoepfle: We’ll look at data sync from kind of two different ways. And one way to start approaching it is with a data sync service that’s currently in preview right? Now, which is data, sync discovery

00:06:23.110 –> 00:06:52.089
Brian Schoepfle: and data sync discovery will help you better understand how you’re using storage in an on-premise environment and then provide back to you some recommendations to inform your cost, estimates your plans for migrated to aws. If you remember, from our conversations that we had around migrations, we really emphasize the importance of the assessment phase of things that you know, looking at, not just the current size of the virtual machines that you have today. But what? What are the actual, underlying resource requirements that you’re going to have

00:06:52.100 –> 00:06:59.620
Brian Schoepfle: in the cloud on more modern architecture and things like that? So once we get out of an on-premise,

00:06:59.630 –> 00:07:21.390
Brian Schoepfle: you know, infrastructure just in general. Now, we’re presented with a lot more options in terms of where we can store our data and data sync discovery. Ah helps simplify those decisions with some recommendations and and giving you a better understanding of of how you’re using the current storage appliances and and infrastructure that you have today.

00:07:21.400 –> 00:07:34.810
Brian Schoepfle: Um, once we want to start. Ah, actually getting that data into the cloud. Um. Now we’re we’re moving stuff through through all those files. The systems that I mentioned before.

00:07:34.820 –> 00:08:01.499
Brian Schoepfle: Um, And what we’re actually gonna do is configure data sync to make an initial copy of the entire data set and and then schedule subsequent incremental transfers of any of the data that’s changed until we make a final cut over from on-premises to aws, and then from a business continuity perspective, maybe we never make that cut over. If we’re leveraging the service and

00:08:01.510 –> 00:08:11.140
Brian Schoepfle: um, you can schedule these migrations to run off hours. You can limit the network bandwidth that data sync uses by configuring a built-in throttle that it has

00:08:11.150 –> 00:08:39.919
Brian Schoepfle: and it’s also preserving metadata between your storage systems that have similar metadata structures which is really going to help smooth that transition of any end user files or applications that you’re bringing to the target aws storage service. Um! It will allow us to directly. Ah move cold data into a secure, long-term storage like at Amazon Sess three glazier, flexible retrieval um, or or place your deep archive. That’s giving us probably the best price

00:08:39.929 –> 00:08:44.140
Brian Schoepfle: a gigabyte of storage available in the cloud,

00:08:44.150 –> 00:09:04.800
Brian Schoepfle: and we can use filtering functionalities within that to exclude copying temporary files and folders, um and other things that we wouldn’t necessarily need or want to archive. And the good news is this is not really based on an agent. This is this is built on on connectivity between these devices that it’s established within the

00:09:05.050 –> 00:09:07.999
Brian Schoepfle: the the A. Douglas console.

00:09:08.010 –> 00:09:26.580
Jon Spallone: Yeah. And the other thing, too: like you brought up, doing that assessment before we move and getting an understanding of it really comes into play, because when we’re talking about the connectivity from on-prem to the user’s tenant, the customer’s tenant, we need to make sure what we’re

00:09:26.590 –> 00:09:55.769
Jon Spallone: you know, putting through over that pipe. You know. How are we configuring that direct connect? You know what kind of bandwidth we’re putting through on that Because data is one thing, but we’re also gonna have additional things traversing that line. Additional services Customer may have beyond just the data side of the house. So I mean, it’s great to know that we can schedule that data sink at all off hours, because then, you know, when our user productivity is lower on the environment that gives us the ability to get it back.

00:09:55.780 –> 00:09:58.670
replication across the wire itself.

00:09:58.740 –> 00:09:59.990
Jon Spallone: Now, DataSync,

00:10:00.000 –> 00:10:01.349
Brian Schoepfle: yeah, one note about that.

00:10:01.980 –> 00:10:31.489
Brian Schoepfle: So I want to jump back and correct myself real quick. There is a DataSync agent, which we would deploy as close as possible to our storage system, and that can be deployed on VMware, on KVM, on Microsoft Hyper-V. It can also be deployed on an Amazon EC2 instance, and, something that we’re going to talk about a little bit later, the agent can be deployed on the Snowcone device or an AWS Outposts rack. So there is an agent involved,

00:10:31.530 –> 00:10:34.490
Brian Schoepfle: and I want to make sure that I clear that up for our listeners.

00:10:34.500 –> 00:10:40.369
Jon Spallone: Okay, got you. And then what DataSync is doing is it’s really taking that on-prem

00:10:40.770 –> 00:10:51.120
Jon Spallone: data that’s in storage and putting it into what we discussed last time last week is that that file object-level storage. So it’s really kind of

00:10:51.530 –> 00:10:56.489
Jon Spallone: I mean. Obviously it’s migrating it but it’s moving it from what they have on-prem

00:10:56.500 –> 00:11:01.600
Jon Spallone: one of those many options that we had discussed last week’s episode, The:

00:11:02.620 –> 00:11:10.939
Brian Schoepfle: Yeah. So DataSync Discovery is helping us pick the right tool for the job, and then DataSync itself is

00:11:10.950 –> 00:11:22.700
Brian Schoepfle: facilitating the replication of that data from our existing data sources into the Aws storage service that is best fit for purpose, based off the application and the usage pattern.

00:11:23.160 –> 00:11:53.070
Brian Schoepfle: You know, we’re also seeing customers leverage DataSync to facilitate the building of a data lake, and probably the core component of a data lake on AWS is certainly Amazon S3. We do have AWS Lake Formation, our data lake formation tool, specifically built for this, but we do see customers using DataSync to do this as well, based off the needs that their application has, or what they’re trying to do with it.

00:11:53.080 –> 00:11:53.590
It’s a

00:11:53.600 –> 00:12:06.389
Jon Spallone: And I want to point out, I mean, I know we kind of hit on the one side of the house, but what we can copy is those NFS shares that are out there, SMB, like you said,

00:12:06.400 –> 00:12:19.129
Brian Schoepfle: uh hoopeda, and uh the h was a Hdfs. So that’s that’s what the the side of mouse is um as well as like you, said Snow Cone, and then are different layers of uh

00:12:19.300 –> 00:12:25.579
Jon Spallone: object stores that we talked about last time, but also the ONTAP file system is involved with that as well.

00:12:26.700 –> 00:12:42.549
Brian Schoepfle: Yeah. So every flavor of FSx that we talked about last time, including FSx for Windows File Server, FSx for Lustre, FSx for OpenZFS, FSx for NetApp ONTAP, S3, the native EFS service, really all of the storage services that we talked about last time.

00:12:43.420 –> 00:12:55.770
Jon Spallone: All right. So within data migration, we now go from that DataSync side of the house to our Snow Family. So the Snow Family is doing that offline, correct?

00:12:57.140 –> 00:13:10.660
Brian Schoepfle: Well, yeah. So, you know, the initial vision of the Snowball devices, which were the first generation of the Snow Family,

00:13:10.670 –> 00:13:24.020
Brian Schoepfle: Ah! Was really not much more than that you can imagine, in one of those ruggedized pelican cases filled up, build up with with hard drives or or solid-state drives

00:13:24.030 –> 00:13:39.569
Brian Schoepfle: A. An E ink shipping label that is powered by a kindle attached to the outside, and it’s all wrapped up in this tamper fruit case with with with, you know, single gig and ten gig connections available to it, and the idea was,

00:13:39.580 –> 00:13:56.359
Brian Schoepfle: you know, for for those customers who are trying to move data at let’s say the terabyte scale into into aws. They may not have provisioned bandwidth. They may not have reliable bandwidth.

00:13:56.370 –> 00:14:22.549
Brian Schoepfle: Ah! And they may just have project needs that extend beyond just what you know, transferring over the wire would allow us to do, and as a as a bit of a a fun personal anecdote to kind of give you an idea of how how far the market has come, and how much more educated our customers have become about these sorts of things. It was several years ago at this point that we were responding to a solicitation

00:14:22.560 –> 00:14:37.759
Brian Schoepfle: um at at a previous company that I worked at for a Federal agency who was looking to bring Ah five point two petabytes of data into the cloud. Ah! For some genomics and and for some analytics research,

00:14:37.770 –> 00:14:44.529
Brian Schoepfle: and as part of their solicitation, they had not required. They had not requested any sort of data transfer service,

00:14:44.650 –> 00:14:57.090
Brian Schoepfle: and, as you know, the Federal Government and many public sector agencies work on a year-long budget and that sort of thing. And and we try to explain to the customer like, Hey, you, haven’t asked for any

00:14:57.100 –> 00:15:22.999
Brian Schoepfle: data transfer services like. When do you need to start working on this data? And their answer was, of course, like right away, like as soon as we can get it going, and I try to explain to them that on a fully saturated ten by connection it will take about two hundred and sixty five days for all that data to get into its its new location. And and they said, Well, that’s unacceptable. And I had to answer, Well, that’s physics. We can’t make the electrons and focus on go any faster.

00:15:23.010 –> 00:15:38.329
Brian Schoepfle: So. But customers become much more savvy about the work that’s required to move this data. And when we’re talking about data, certainly at that scale. It’s probably not going to be efficient or sufficient

00:15:38.340 –> 00:15:54.499
Brian Schoepfle: uh to to just try to transfer this data over a Vpn, or even a direct connect connection. And so uh, what snowball allows us to do is um, even from its very inception, is connect these devices to our network.

00:15:54.510 –> 00:16:18.940
Brian Schoepfle: Um get, you know, compress the data, encrypt the data, get it onto these secure devices Ah, safely ship and track these devices as they make their way to an aws uh availability zone at where they’re then connected by our technicians to our back plane, and then the data is rehydrated and and uploaded into into S. Three.

00:16:18.980 –> 00:16:36.520
Brian Schoepfle: Now the Snow family has matured a lot since it was initially released, so probably one of the first steps that we took is you could take A. I think you should, on a screen a little bit earlier. There’s been. This bifurcation of what snowball

00:16:36.530 –> 00:16:49.769
Brian Schoepfle: is is is really intended for, and as a result. We’ve built two different classes of appliances that are, you know, more fit for purpose, based off of what a customer might be looking to do

00:16:49.780 –> 00:17:18.059
Brian Schoepfle: so. The snowball edge. Storage optimize exactly what you might think like. This is our This is our highest capacity device that you can still carry with your hands. It’s got eighty terabytes of usable hard drive storage, a magnetic disk, one terabyte of useful Ssd. Storage, and there are forty B cpus inside of this device and that’s helping us. Do things like, manage the encryption, manage the compression, but also do things like

00:17:18.069 –> 00:17:47.719
Brian Schoepfle: execute. Lambda functions on the devices. Maybe i’m performing some sort of transformation in flight as part of these migrations. And now I have compute power and some amount of compute power inside of this device to be able to make that happen. I can cluster these together between the five and ten nodes, just like every other device it’s. It’s entirely encrypted, and and is is eligible for for him on, but just like it’s it’s snowball edge. Compute up

00:17:47.730 –> 00:18:16.910
Brian Schoepfle: demise, cousin, which you might imagine a little bit less in terms of the magnetic storage forty, two terabytes, just over seven and a half; usable terabytes of Ssd. Storage, but fifty, two be cpus and two hundred and eight gigabytes of ram that’s allow us to. If I really do need to run transformations. If I do really want to execute some sort of compute function on this data as it’s going in or out of the device I have that I have that ability.

00:18:16.920 –> 00:18:35.400
Brian Schoepfle: So my data gets on my my snowball edge appliances, whether they’re computer optimized or storage optimized and just like generation. One I’m: i’m compressing encrypting and doing what I whatever else I need to do. It’s shipped off to aws, and and ultimately hook back up into S three where my data begins to live.

00:18:35.420 –> 00:18:48.799
Brian Schoepfle: Now we did have some customers believe it or not who were telling us that even the ability to cluster ten nodes of eighty terabyte devices was not enough,

00:18:48.880 –> 00:19:07.080
Brian Schoepfle: and I I think it was the two thousand and seventeen reinvent. I’d have to go check the dates but famously. Ah, great big Eighteen Wheeler was rolled out onto stage, and that is the service that we know today as aws snowmobile.

00:19:07.090 –> 00:19:21.270
Brian Schoepfle: This is a This is literally a tractor trailer. You know it’s. It’s a it’s racks of storage. Apply. It like it’s it’s a whole data center in a shipping container forty, five foot shipping container.

00:19:21.330 –> 00:19:50.879
Brian Schoepfle: Ah, you know where we where we publish the weights of our other devices. They’re about fifty pounds. We don’t publish the weight of the snowmobile, because it’s certainly measured in the tons, but it does come with the same two hundred and fifty six bit encryption. It is eligible for hipaa compliance, and and this is for folks who truly need to move and I won’t even say, like single-digit petabytes. This is double digit petabytes, or more of data for a massive data center evacuation

00:19:50.940 –> 00:19:56.959
Brian Schoepfle: or or data center consolidation in moving and moving things to the cloud. So

00:19:57.140 –> 00:20:03.680
Brian Schoepfle: now the the the latest generation of the Snow family is the snow cone.

00:20:03.730 –> 00:20:10.989
Brian Schoepfle: The snow cone is a really interesting appliance for a number of different reasons.

00:20:11.130 –> 00:20:25.730
Brian Schoepfle: Maybe some of our listeners are familiar with Aws outposts. This is an aws outpost. They are, I would say it certainly within our family of hybrid cloud solutions.

00:20:25.830 –> 00:20:37.179
Brian Schoepfle: And if you consider Vmware cloud on aws as an extension of your beats sphere, environment, into the Aws Cloud

00:20:37.190 –> 00:21:06.299
Brian Schoepfle: Outposts kind of brings that motion backwards and says, How do I bring the Aws cloud into my data center? So these are configurable is forty, two U. Racks, or one or two u appliances, and they run about a dozen of the most popular common aws services that you might imagine. Efs. Ks S. Three obviously ebs and Pc. Two. Ah! And it’s all managed through the Aws Council. So it’s.

00:21:06.310 –> 00:21:35.729
Brian Schoepfle: It’s as though I you know i’m working on the aws cloud. But the aws cloud is in in my data center or or in my facility now, because they’re managed through via the Aws cli Sdk or the Aws console. They require consistent high quality network connectivity to remain operational, and we typically would not deploy a forty, two U outpost raft in any location where we were also able to establish a direct

00:21:35.740 –> 00:21:47.080
Brian Schoepfle: connect connection, which is about a gig or higher of consistent, dedicated, very low latency connectivity directly into an aws region.

00:21:47.340 –> 00:21:49.180
Brian Schoepfle: Now, Snowcone

00:21:49.190 –> 00:22:15.289
Brian Schoepfle: brings these compute. Ah, features! These these little devices you can hold in your hand. They’re only they’re less than five pounds. They have four B cpus, four gigs of memory eight terabytes of usable magnetic storage, and fourteen terabytes of Ssd. Storage. But these are ruggedized appliances that do not require consistent network connectivity

00:22:15.300 –> 00:22:44.129
Brian Schoepfle: so i’ve seen some great use cases in the in the in areas where network connectivity is bad. I’ve seen, you know, certainly like in utilities and minerals or oil and gas exploration, and we want to. We’re able to to put a device out there. Um, collect some telemetry, perform some computational analysis on that data before we’re you know, shipping the old one Brad back and bringing a new one in. And we’re cycling the the data through that way.

00:22:44.190 –> 00:23:11.319
Brian Schoepfle: And and there are great iot actually use cases for both the snow cone and the snowball edge compute optimized because I can even run aws iot green grass, which is, you know, kind of my my, I aws iot runtime and and device controller. I I can bring that onto this, this ruggedized appliance right into my facility that that might not have great Internet connection.

00:23:11.330 –> 00:23:22.889
Brian Schoepfle: And if there’s any place I want to say in the world, but we’re actually going to go off the world for this example. Ah, that does not have great Internet connectivity. That would probably be. That would probably be outer space.

00:23:22.900 –> 00:23:38.749
Brian Schoepfle: And and this summer a snow cone was put inside a capsule on a falcon b rocket, and the snow cone made its debut on the International Space Station this summer,

00:23:38.760 –> 00:23:52.670
Brian Schoepfle: so there are lots of Certainly not. All of our customers are going to be going to outer space, or or, you know, drilling for minerals in the Yukon. But this is a great way to

00:23:52.680 –> 00:24:02.499
Brian Schoepfle: facilitate what i’ll call. Maybe we’ve called that like asynchronous kind of data connection without waiting to,

00:24:02.530 –> 00:24:24.110
Brian Schoepfle: without waiting to execute any compute functions on that data, whether that’s analysis or or restructuring or scrubbing the data data quality of data, integrity activities and then bringing them back into the Aws cloud at some point for for further analysis, for storage, and to drive more business intelligence functions off of that.

00:24:24.220 –> 00:24:27.350
Jon Spallone: So with the Snowcone itself,

00:24:27.360 –> 00:24:40.120
Jon Spallone: you know, taking off many of the hats that I have, I mean, I can see from my background in the military. When I was out in the field. This would be a great device, ruggedized, because obviously, you know,

00:24:40.350 –> 00:25:09.249
Jon Spallone: not to offend anybody, but if you’re a Marine, you understand, you know, as those crayon eaters, we can drop these things and throw them all over the place, and it’s still going to live. Then also, like you said, from the energy standpoint: experiences I have working with a lot of energy companies where they’re doing a lot of mining or testing out sites, either in, you know, South America or even out West. And they’re actually doing this testing where

00:25:09.260 –> 00:25:10.850
Jon Spallone: you know basically, they’re

00:25:10.890 –> 00:25:13.189
Jon Spallone: they are doing a

00:25:13.330 –> 00:25:21.800
Jon Spallone: an actual controlled explosion. It gives you that that sonic information back

00:25:21.810 –> 00:25:37.599
Jon Spallone: so that they could actually take that data later and then go and analyze, based upon all the different features that it’s hitting from a sonar standpoint of what’s being sent back to it. So those would be more of that snow cone

00:25:37.610 –> 00:26:03.210
Jon Spallone: side of the house. I’m just trying to put some use cases to it where you would see those devices, you know, that one-and-done, that one-user-type capability. And then when we go into the Snowball family, we’re looking at something that would be more of a set-up site for this data. And would that be asynchronous or synchronous, as far as the Snowball side of the house? Is that correct?

00:26:03.630 –> 00:26:11.840
Brian Schoepfle: That’s right. Yeah. And so, you know, thinking about use cases for Snowcone or Snowball Edge, what

00:26:11.850 –> 00:26:40.240
Brian Schoepfle: probably a great umbrella term is just kind of smart. Fill in the blank right whether we’re talking about smart cities, smart school, smart basis, our hospitals smart campus. Ah, you know the list goes on. Ah, typically we’re talking about some kind of sensor devices, right? Whether it’s smart cities. And we’re doing air quality, and we’re doing. Ah, noise, temperature, traffic patterns, all these things, or or something more in a facility’s perspective. Smart factories where we’re getting,

00:26:40.250 –> 00:27:10.150
Brian Schoepfle: you know, vibration and noise and and temperature control systems in there. Ah, great for that, because we can do iot sensor stream capture. We can aggregate metrics, and we can do a lot of control signaling and alarming right from both devices. Now I know that a lot of the folks listening may be in, for example, the K. Twelve space, or or serve those customers who do. And you know One thing that’s, you know, exploded over the past couple of years, for obvious reasons, is

00:27:10.160 –> 00:27:39.940
Brian Schoepfle: is remote learning, remote education, and and then also in the nonprofit space. What we’re seeing is a lot of, you know, just kind of media generation in general, and with remote learning or remote content production, no matter what we’re doing it, for we do have some significant compute needs in that space. So we are seeing customers use the snow cone in particular,

00:27:39.950 –> 00:27:49.190
Brian Schoepfle: for on the fly media transcoding and image compression, so that we can kind of start the media production process.

00:27:49.200 –> 00:27:55.710
Brian Schoepfle: Uh before we’re able to bring that back to an avid bay, or wherever we are, as we as we were, find that content,

00:27:56.760 –> 00:28:04.800
Jon Spallone: So that’s both sides of the house where you’ve seen that, from a Snowcone and Snowball utilization?

00:28:04.810 –> 00:28:11.359
Brian Schoepfle: Yeah, certainly. The Snowcone, which is, you know, light enough to be carried by a drone, certainly fits inside of a backpack.

00:28:11.370 –> 00:28:30.860
Brian Schoepfle: Um! You know that there are some other cool, native, just media services that I can talk about from our elemental group that that can use these devices, but you know it’s It’s just like It’s this ability to bring a fragmentized small slice of the cloud out with you wherever you are, and it does not require an Internet connection to to get the work started.

00:28:31.290 –> 00:28:48.989
Jon Spallone: Okay, and then the data transfer that’s at it, well, the device itself has 256-bit encryption. And then for the data transfer, that’s where this would tie in, just for my sake and also the listeners’: we would have DataSync in play with this?

00:28:49.630 –> 00:29:16.739
Brian Schoepfle: Yeah. So DataSync comes pre-installed on all the Snow devices. You know, one of the great things about DataSync is it is this unified service that exists within the AWS console, and I can schedule jobs and track the movement of my data. And I’m really getting a great, you know, single-pane-of-glass, top-down view of

00:29:16.750 –> 00:29:28.949
Brian Schoepfle: the various data transfer jobs that I have in place, whether that's taking place on a Snow device or elsewhere, you know, in terms of the offline data transfer.

00:29:28.960 –> 00:29:37.620
Brian Schoepfle: You know, just based off of the small size of the Snowcone, I'd probably say, if you needed to transfer

00:29:37.630 –> 00:29:54.519
Brian Schoepfle: ten to twenty terabytes of data, it may be more practical for you to do something like that with either a single Snowball Edge compute optimized or storage optimized device, or to just do that over the wire, because obviously the Internet's getting better all the time. But it's never.
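As a rough check on that trade-off, the wire-versus-device break-even Brian is describing is just bandwidth arithmetic. A minimal sketch (the link speeds here are illustrative assumptions, and real-world throughput is lower once you account for protocol overhead and competing traffic):

```python
def transfer_days(terabytes: float, megabits_per_sec: float) -> float:
    """Estimate days to move `terabytes` over a link of `megabits_per_sec`,
    assuming the link is fully dedicated to the transfer (no protocol
    overhead, retries, or competing traffic)."""
    bits = terabytes * 1e12 * 8              # decimal terabytes -> bits
    seconds = bits / (megabits_per_sec * 1e6)
    return seconds / 86_400                  # seconds -> days

# 20 TB over a dedicated 100 Mbps link: roughly 18.5 days
print(f"{transfer_days(20, 100):.1f} days at 100 Mbps")
# The same 20 TB at 1 Gbps drops to under 2 days
print(f"{transfer_days(20, 1000):.1f} days at 1 Gbps")
```

At two-plus weeks for 20 TB on a 100 Mbps link, shipping a device starts to look attractive; at 1 Gbps and up, "just do it over the wire" often wins, which is the tipping point being discussed here.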

00:29:55.390 –> 00:30:03.790
Jon Spallone: And then I'm seeing here that the endpoint is NFS. That's how the storage is set up and configured. Is that correct?

00:30:03.800 –> 00:30:23.420
Brian Schoepfle: That's how it presents itself in most cases. And so we use that NFS mount point on our network, and then at the end of the day we're bringing that back into an AWS owned and managed data center, where it's going right into S3.

00:30:23.960 –> 00:30:32.540
Jon Spallone: I just pulled up the link here for the Snowmobile. So for those of you out there that can't see it, if you

00:30:32.550 –> 00:30:54.090
Jon Spallone: go into the AWS storage page, just Google it and you'll get the direct link to it. I'm not going to give you FQDNs on here, but if you go there and then go down into the data migration section, you'll see the Snow Family, and once you get into the Snow Family, that's where you can see the Snowmobile.

00:30:54.100 –> 00:30:56.350
Jon Spallone: Yeah, I could see that here in this uh

00:30:56.400 –> 00:31:04.219
Jon Spallone: semi-truck that you have sitting in there. And correct me if I'm wrong, but the last time I worked with a couple of

00:31:04.560 –> 00:31:07.969
Jon Spallone: cloud providers, I know

00:31:08.360 –> 00:31:23.899
Jon Spallone: pretty much this is the setup at most cloud data centers now. It's kind of an all-in-one box that's driven up, set there, and, you know, runs until it breaks, and then you pull it out and put in another one. And it's not like

00:31:23.910 –> 00:31:31.690
Brian Schoepfle: what we knew way, way back in the day, where it's just a room full of racks with servers lined up in each one of the racks. Now,

00:31:31.700 –> 00:31:40.619
Jon Spallone: yeah, I mean, this is pretty cool. I'm gonna have to talk to Andy and see about getting a XenTegra Snowmobile so we can drive that around.

00:31:40.970 –> 00:31:46.890
Brian Schoepfle: Well, certainly, if you have exabytes of data that you need to move, i’m sure that we can set something up for you.

00:31:46.900 –> 00:31:48.390
Jon Spallone: Yeah,

00:31:48.400 –> 00:32:05.359
Jon Spallone: I think we would find the data just so we could put the logo on the side, so that'd be a good thing. So from our data migration standpoint, we pretty much hit it. It's the DataSync side of the house, and then the Snow Family, which is, you know,

00:32:05.420 –> 00:32:10.860
Jon Spallone: I don't want to call it an endpoint, because it's not really, but it's that

00:32:10.870 –> 00:32:29.840
Jon Spallone: it would be that on-prem data collection that would bring it up through the Snow Family and then do that data migration across to the data center. So is there anything else specifically on the data migration side that maybe we didn't hit, before we move on?

00:32:30.340 –> 00:32:37.020
Brian Schoepfle: So when we talk about DataSync and the Snow Family, we're talking about all types of data.

00:32:37.060 –> 00:32:38.720
Brian Schoepfle: It’s

00:32:38.940 –> 00:32:45.919
Brian Schoepfle: in most cases. You know, I wouldn't say that there's always these situations where one

00:32:46.730 –> 00:33:14.489
Brian Schoepfle: size fits all, but in a lot of cases one size fits most. It doesn't really matter what kind of data we're putting on the Snow Family device, and it doesn't really matter what kind of data we're moving with AWS DataSync, because it presents itself as so many different protocols that can be used, and it connects to so many different storage services on the AWS side, whether that's object storage or file storage, to get those things in there. But sometimes we've got cases where

00:33:14.500 –> 00:33:29.110
Brian Schoepfle: we've got a large pool of structured data that's used for a very specific purpose, and that data is being stored in a database. And in that case

00:33:29.120 –> 00:33:39.499
Brian Schoepfle: this is where we start to recommend, you know, using a service that's a little bit more specialized. And in this case we're talking about the AWS Database Migration Service, or DMS.

00:33:39.900 –> 00:33:54.440
Brian Schoepfle: So DMS is a data transfer service that is specific to the movement of the data stored inside of the database, and we can go like for like.

00:33:54.480 –> 00:34:09.910
Brian Schoepfle: So that's Oracle to Oracle on RDS, that's SQL Server to SQL Server on RDS, or any of those database types (those that we mentioned and those that we haven't) onto an EC2 instance,

00:34:09.920 –> 00:34:29.840
Brian Schoepfle: and this is something that is, in fact, agentless. All we really need is the endpoints and the appropriate credentials to facilitate this transfer, and now I've got continuous data replication from one database to another. I've also got the option to move from a

00:34:29.850 –> 00:34:35.800
Brian Schoepfle: database of one type to a database of perhaps a more efficient or a more modern type,

00:34:35.810 –> 00:34:52.669
Brian Schoepfle: for example, getting out of Microsoft SQL Server or Oracle Database, these proprietary database engines, and into something more open source and modernized, like Aurora, or RDS for MySQL or PostgreSQL. Easy for me to say.

00:34:52.679 –> 00:35:08.809
Brian Schoepfle: Um, we can actually combine that with the use of the AWS Schema Conversion Tool to change the structure of that data as a midpoint before we bring it into our target database.

00:35:08.820 –> 00:35:21.589
Brian Schoepfle: Ah, but this is facilitating continuous data replication. So if we go back to the very beginning of our conversation, which is: I want to be able to move from one house to another, but I'd still like to be able to use my stuff in between.

00:35:21.600 –> 00:35:33.090
Brian Schoepfle: I'm keeping my source database online, I'm continuing to run all of my applications, and I'm replicating the existing data sets and all changes to that data

00:35:33.100 –> 00:35:52.710
Brian Schoepfle: to the target database. And if and when I'm ready to execute a cutover, right up until the last minute that I'm ready to execute that cutover, I've got a current copy of my database in the AWS cloud. And the AWS Database Migration Service, again, is great for like for like,

00:35:52.840 –> 00:36:12.290
Brian Schoepfle: but coupled with the Schema Conversion Tool, we can also effect a database language change, or database engine change, at the same time. And then it would not be a podcast with you, Jon, if I didn't bring up your favorite creature and mine: Babelfish.

00:36:12.300 –> 00:36:31.190
Brian Schoepfle: Yes, which we've got to touch on here just for a second. You know, we can restructure the data, but, as you know, rewriting the application to use the new database engine is no easy task. So, sorry, Babelfish

00:36:31.200 –> 00:36:41.409
Brian Schoepfle: is an open source project which we've put on GitHub under the Apache 2.0 license and the PostgreSQL license,

00:36:41.420 –> 00:36:57.549
Brian Schoepfle: and you can use Babelfish under either license. And this allows you to reduce migration time and risk and migrate at your own pace, because Babelfish takes T-SQL commands, which is Microsoft SQL Server's proprietary SQL dialect,

00:36:57.560 –> 00:37:03.979
Brian Schoepfle: and supports the same communication protocol, but executes those transactions on a PostgreSQL database.

00:37:04.170 –> 00:37:19.589
Brian Schoepfle: Um, so now I'm buying myself some time to upgrade my application language while I'm already taking advantage of a more modernized, open source, and more cost-effective database in the cloud.
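To make the DMS flow above concrete: a migration is driven by a replication task that points at a source endpoint, a target endpoint, and a replication instance, with a migration type of full load, change data capture (CDC), or both. The dictionary below mirrors the parameter shape of the DMS `CreateReplicationTask` API call; the ARNs, the task identifier, and the table-mapping rule are placeholder assumptions, not a working configuration:

```python
import json

# Placeholder ARNs: in a real account these come from previously created
# DMS endpoints and a replication instance.
replication_task = {
    "ReplicationTaskIdentifier": "sqlserver-to-aurora-postgres",
    "SourceEndpointArn": "arn:aws:dms:REGION:ACCOUNT:endpoint:SOURCE",
    "TargetEndpointArn": "arn:aws:dms:REGION:ACCOUNT:endpoint:TARGET",
    "ReplicationInstanceArn": "arn:aws:dms:REGION:ACCOUNT:rep:INSTANCE",
    # "full-load-and-cdc" = copy the existing data, then keep replicating
    # ongoing changes until we're ready to cut over.
    "MigrationType": "full-load-and-cdc",
    # TableMappings is a JSON document; this rule includes every table.
    "TableMappings": json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
}

print(replication_task["MigrationType"])
```

The `full-load-and-cdc` migration type is what gives the "keep using my stuff in between" behavior described above: existing data is copied first, then ongoing changes stream to the target right up until cutover.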

00:37:19.740 –> 00:37:34.569
Jon Spallone: And really, this is what we would utilize this service for: migrating those legacy database solutions and getting them into that native

00:37:34.580 –> 00:37:40.680
Jon Spallone: AWS file set. So, just so the users understand, this isn't

00:37:40.690 –> 00:37:58.800
Jon Spallone: any type of, like, replication that's happening. This is: I've made that decision. I'm not going to use ONTAP for my replication between on-prem, I'm not going to do my VMC setup or NC2 setup to do this, you know, spread across my VPC and my on-prem environment.

00:37:58.810 –> 00:38:10.849
Jon Spallone: I'm actually going to take these databases, rip them out of those native platforms that they're aware of, and then put that data into new databases that are native to AWS.

00:38:11.200 –> 00:38:30.090
Brian Schoepfle: with no change, with no change to the client application. So, you know, in our very first conversation, when we talked about migrations, we talked about the hazards that are inherent in just effecting the lift-and-shift migration and never doing anything else. Um, all the data that we have shows that customers become increasingly cost efficient and

00:38:30.100 –> 00:38:30.970
Brian Schoepfle: get

00:38:30.980 –> 00:38:59.160
Brian Schoepfle: more ROI than they originally anticipated, the more that they modernize. So we want to look into the future and kind of work backwards from what our ideal state is: getting out of the blinking-lights business. Our existing data center has a lot of inherent benefits to it, but if we never take advantage of all these new avenues for modernization that have been presented to us by bringing our data and our applications into the cloud, we're sort of missing out

00:38:59.170 –> 00:39:24.679
Brian Schoepfle: on so much more. And so this is just an example of another way that AWS is making it easier for customers to modernize, taking into account the fact that not everybody has tons of application developers that can rewrite existing applications that were written for SQL Server, just because we made a business decision to start taking advantage of the cost and performance advantages of

00:39:24.690 –> 00:39:29.469
Brian Schoepfle: of using. For example, Aurora, postgresql,

00:39:29.480 –> 00:39:32.789
Jon Spallone: And I had a conversation earlier today, actually, about that.

00:39:32.800 –> 00:39:42.939
Jon Spallone: From what I see from AWS, it's more of that agnostic platform. I mean, you've got that standpoint of: here's your block.

00:39:42.950 –> 00:39:55.549
Jon Spallone: How am I going to use it? Am I going to use a proprietary partner solution to bring into it, or am I going to use some other solution? But really, from this standpoint, we're looking at it agnostically, in that

00:39:55.560 –> 00:40:08.699
Jon Spallone: we can utilize this database migration to get you away from those vendor partner lock-in type solutions with additional licensing overhead, and just do what we need to do and get that data.

00:40:08.710 –> 00:40:26.369
Jon Spallone: I've always said: yeah, user experience is great, the system you set up is great, but it's all about the data. That's what it's about. That's why we're all doing what we do, because we're manipulating, using, reading, seeing data. And that's the key here. Now, what I'm noticing is that

00:40:26.440 –> 00:40:41.859
Jon Spallone: on the Aurora side we're calling out MySQL, but we're also wrapping the Microsoft full-blown SQL side into this as well. So this would be more of our Windows-based databases that we would see with Aurora.

00:40:42.530 –> 00:40:55.860
Brian Schoepfle: So with Aurora: Aurora is a highly available, modernized implementation of either MySQL or PostgreSQL.

00:40:55.870 –> 00:41:25.369
Brian Schoepfle: Um, it uses a little bit different storage technology. It uses a self-healing storage layer underneath that is automatically configured for high availability, and it makes it very, very easy to add additional reader nodes and automatically promote a reader node to a writer node in the event of a failure. It's very, very highly available and very, very easy to work with. And there are two different flavors there, like

00:41:25.380 –> 00:41:46.559
Brian Schoepfle: I said. Ah, Rds is our managed database as a service that supports the open source languages that we talked about Mysql post Grants Ql. But it also supports proprietary database systems, including Microsoft, Sql and an oracle database that the Rds is also available in in Maria Db.

00:41:46.570 –> 00:41:51.519
Brian Schoepfle: And I think that what we're really touching on here, Jon, is that

00:41:51.530 –> 00:42:13.990
Brian Schoepfle: you won't really hear AWS too frequently give very, very prescriptive guidance in the sense of: you should use this development framework and this language and this runtime, and you should run on this version of this or that, right? But one of the advantages of having the broadest and deepest set of capabilities of any cloud provider is that we can really guide a customer towards the right tool for your application

00:42:14.000 –> 00:42:20.929
Brian Schoepfle: and your context. And there are so many different variables; there are almost as many different kinds of customers as there are customers,

00:42:20.940 –> 00:42:38.359
Brian Schoepfle: and they all have unique needs, and we have this ability to help you find the tool that is best fit for your purpose, where we aren't trying to push you into something that is proprietary to AWS that, you know,

00:42:38.370 –> 00:42:45.180
Brian Schoepfle: could create technical debt for you down the road, which is what we’re trying to go to the cloud to get away from, to begin with.

00:42:45.190 –> 00:43:03.690
Brian Schoepfle: So, lots of different options here, and I know it can be difficult at times to keep a lot of this straight. So that's why we've got services like DataSync Discovery, like the Database Migration Service, and like some of the other ones that we're going to talk about, that help you identify

00:43:03.700 –> 00:43:22.890
Brian Schoepfle: what the current state is, and map that to what's the next best step for you in terms of a cloud deployment. And then, working with a partner like XenTegra and the AWS solutions architect team, we begin to put together a plan for modernization and optimization of those workflows.

00:43:22.900 –> 00:43:38.460
Jon Spallone: Yeah, and I think that's one of the keys there. You know, I personally don't know of any customers that are going to implement a cloud migration and strategically move these services right off the bat, in one shot.

00:43:38.470 –> 00:43:49.549
Jon Spallone: Some people might, but I know that most organizations I work with, they're looking at that migration first: let me get over into the cloud, like we discussed in the first episode.

00:43:49.560 –> 00:44:04.509
Jon Spallone: You know, once I get in there, how do I increase that ROI afterwards? And these are things and avenues and approaches that we can look at there. So yeah, this is definitely a key here. But I know, just from a standpoint where

00:44:04.520 –> 00:44:26.509
Jon Spallone: this is a data storage, a cloud storage solution, but in particular this is for migrating away from that service. So we could be migrating from our own tenant that we have at AWS today, or from on-prem, really taking these services across into these open source platforms, RDS and Aurora.

00:44:27.400 –> 00:44:45.989
Brian Schoepfle: And, you know, whether we're talking about offline data storage devices that have compute workloads inside of them, or we're talking about a more online service like the Database Migration Service, we're blending that line between migration and modernization to enable customers to innovate and optimize faster,

00:44:46.000 –> 00:44:50.369
Brian Schoepfle: and that, I think, is the key takeaway from this part of the discussion.

00:44:50.380 –> 00:44:52.149
Jon Spallone: No, agreed agreed.

00:44:52.160 –> 00:45:00.009
Jon Spallone: So from there, from the database side of the house, we've got a couple of topics in here: the hybrid

00:45:00.020 –> 00:45:17.399
Jon Spallone: cloud storage and edge computing. I think we hit on the gateway in part one of this series. So is there anything specifically that we would want to dig into a little bit more within the gateway itself?

00:45:17.480 –> 00:45:26.739
Jon Spallone: I mean, this is either living on-prem as an appliance, correct? Or it's living within your tenant, right?

00:45:27.250 –> 00:45:36.819
Brian Schoepfle: So, you know, probably the three main ways that you see Storage Gateway deployed are: as a virtual machine on the customer's existing hardware;

00:45:36.890 –> 00:45:45.749
Brian Schoepfle: it's also available as a hardware appliance that you can order within the console; and it can also be installed on an EC2 instance.

00:45:45.760 –> 00:46:04.589
Brian Schoepfle: And the Storage Gateway comes in a few different flavors that, again, the customer is going to select based off of what's best fit for their purpose. We talked, I think, the most about file gateway last time, because a file gateway is available

00:46:04.600 –> 00:46:27.820
Brian Schoepfle: with FSx for ONTAP on top of it, or with Windows File Server, I should say, on top of it, so a native SMB client running right there. File gateway is able to cache the most commonly used files while replicating and offloading storage into S3, where it's secured, it's backed up,

00:46:27.830 –> 00:46:48.819
Brian Schoepfle: and it's protected, whether that's, you know, turning on versioning inside of an S3 bucket, or protecting it from deletes by requiring MFA. There are a lot of different things that we can do to centralize and secure the storage of those files. The Storage Gateway can also present itself as an iSCSI device, in a couple of modes.

00:46:48.830 –> 00:46:54.200
Brian Schoepfle: We'll call one version of this the volume gateway mode,

00:46:54.210 –> 00:47:21.189
Brian Schoepfle: but in either case, what we're doing is basically replicating snapshots of the iSCSI virtual disks inside of this appliance into the AWS cloud for disaster recovery or business continuity purposes. The last way that I see Storage Gateway deployed, and I've done a lot of these in both my current role and previous roles, is that the Storage Gateway can also be configured in what we call tape gateway mode,

00:47:21.200 –> 00:47:22.419
Brian Schoepfle: and

00:47:22.430 –> 00:47:25.000
Brian Schoepfle: excuse me, and tape gateway

00:47:25.270 –> 00:47:30.049
Brian Schoepfle: allows us to present the storage gateway as a virtual tape library.

00:47:30.330 –> 00:47:48.240
Brian Schoepfle: Um, so it's a great target for Veeam, Acronis, and other existing on-premises tape backup workloads you can probably think of, and then getting those tapes,

00:47:48.250 –> 00:47:50.950
uh, or those virtual tapes,

00:47:50.970 –> 00:48:03.169
Brian Schoepfle: stored into Glacier for long-term archival, depending on how often we plan on retrieving them, and there are many different flavors of S3 Glacier that we could put them in. But this is really kind of

00:48:03.180 –> 00:48:25.079
Brian Schoepfle: probably the last step that we need to take if we're still protecting our data with tape-based workloads. We're finally getting out of the LTO, I don't even know what number we're on, but I know those tapes are very expensive, and the supply chain, as we know, is not great. So we're virtualizing all of that. We're not really changing any of our tape backup workloads, but we've got a more modern infrastructure and architecture behind it,

00:48:25.090 –> 00:48:42.059
Brian Schoepfle: so that we're saving those tapes in another location. We're really not worried about the physical degradation of the media anymore, and I've replaced what was maybe a very large tape library with a one- or two-U appliance that's writing these virtual tape files and then getting them into Glacier.

00:48:42.070 –> 00:48:44.249
Brian Schoepfle: Ah! So, you know, probably,

00:48:44.890 –> 00:48:49.599
Brian Schoepfle: reading between the lines there, the Storage Gateway

00:48:49.610 –> 00:49:10.849
Brian Schoepfle: is an effective hybrid cloud appliance, but it can also be part of a migration strategy, because so much of what it does is replicating that data into the AWS cloud while keeping recently written or commonly accessed files cached locally to improve the performance for the end user and any applications that might be using it.
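The caching behavior described above, recently used files served from local disk while the durable copy lives in S3, is essentially a write-through cache with LRU eviction. A toy sketch of that idea (the class and method names are made up for illustration, plain dictionaries stand in for the local disk and the S3 bucket, and this is not how Storage Gateway is implemented internally):

```python
from collections import OrderedDict

class FileGatewayCache:
    """Toy write-through cache: every write lands in the backing store
    (standing in for S3) first, while a bounded LRU cache keeps the most
    recently used files local."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.local = OrderedDict()   # path -> bytes, in LRU order
        self.backing = {}            # stand-in for the S3 bucket

    def _cache(self, path: str, data: bytes) -> None:
        # Insert/refresh in the local cache, evicting the LRU entry if full.
        self.local[path] = data
        self.local.move_to_end(path)
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)

    def write(self, path: str, data: bytes) -> None:
        self.backing[path] = data    # durable copy first (write-through)
        self._cache(path, data)

    def read(self, path: str) -> bytes:
        if path in self.local:       # cache hit: served from local disk
            self.local.move_to_end(path)
            return self.local[path]
        data = self.backing[path]    # cache miss: fetch from the S3 stand-in
        self._cache(path, data)
        return data

gw = FileGatewayCache(capacity=2)
gw.write("/share/a.txt", b"alpha")
gw.write("/share/b.txt", b"beta")
gw.write("/share/c.txt", b"gamma")   # evicts a.txt from the local cache
print("/share/a.txt" in gw.local)    # False: only in the backing store now
print(gw.read("/share/a.txt"))       # still retrievable, just a slower read
```

Because writes always reach the backing store first, losing the local cache never loses data; reads of evicted files are merely slower, which matches the gateway's role as a cache rather than the system of record.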

00:49:11.100 –> 00:49:13.589
Jon Spallone: And really, when we're talking about,

00:49:13.600 –> 00:49:25.490
Jon Spallone: just to clarify for some of the old-school guys, we're not talking about those old DLT tapes when we're talking about the backups, because with a majority of what we're doing in a data center, like you said, we've got solutions that are doing backups

00:49:25.500 –> 00:49:39.080
Jon Spallone: into virtual storage, and that's what we're moving across a pipe. We just kind of use the same terms nowadays. But I notice here, too, that the gateway itself comes in three modes of connectivity, you know:

00:49:39.090 –> 00:49:57.869
Jon Spallone: the Internet, which would be, you know. I um, i’m assuming would be like our Ssl. Vpns that we would have set up a point to point to Vpn between our environment. The Vpc: Obviously we’re going to have that direct connect that’s set up there, and then also fits complaints, so that that meets a fixed standards one.

00:49:59.200 –> 00:50:11.219
Brian Schoepfle: Yeah. So depending on your the needs of your industry and your workload, the way in which the the storage gateway connects to

00:50:11.230 –> 00:50:27.740
Brian Schoepfle: whether it's to S3 or the S3 virtual tape library, one of the FSx services, AWS Backup, and so on, it's the right connectivity for the right workload. Oftentimes, for, like, remote office branch office stuff,

00:50:27.750 –> 00:50:44.739
Brian Schoepfle: uh, SSL connectivity or VPN connectivity is going to be sufficient. It does also allow for endpoint connectivity directly onto the VPC, which,

00:50:44.760 –> 00:50:59.930
Brian Schoepfle: if I could simplify it the most, would be saying: we're going to take an Internet service and make it only reachable from within our virtual private cloud, so it's as though it has private networking address space. That's not really how it works under the hood, but effectively we're bringing that endpoint

00:50:59.940 –> 00:51:18.569
Brian Schoepfle: into our network, and we're not bringing that data even into the AWS services that exist inside of our public zone, because they're web-addressable, like S3. And then, you know, for those customers that do require FIPS compliance,

00:51:18.580 –> 00:51:27.189
Brian Schoepfle: um, you know, in an AWS GovCloud region, for example, we do have endpoints with those encryption standards on them.

00:51:27.440 –> 00:51:40.010
Jon Spallone: Yeah, I was surprised, because, again, working on a lot of the government stuff back in my consulting days, living in the mid-Atlantic, FIPS is always a huge

00:51:40.020 –> 00:52:00.410
Jon Spallone: ah demand in a lot of those Gov deployments that you’ve got. So this appliance itself is meeting Fits compliency. Now. Is this the obviously from the appliance, the brick appliance that would be hardware Fips compliancy. And then we’re doing software compliency based upon the virtual clients that we’re going correct.

00:52:01.440 –> 00:52:15.920
Brian Schoepfle: Yeah. So whenever we get to talking about compliance and what's going to work within a certain compliance program or a regulatory authority, I'm obligated to say: although I was raised by lawyers, I am not a lawyer, and I don't play one on TV.

00:52:15.930 –> 00:52:44.310
Brian Schoepfle: Um. So you know, we work with the Compliance team work with a partner work with the Aws solutions, architect and security team to make sure that each component of your solution is going to meet the requirements for the regulatory body, you know, I think we talked about in an earlier episode that customers do inherit the compliance controls of the underlying architecture inside of an aws region, you know. So if our hardware is being the the controls required for Pc. I. Dss. Or

00:52:44.320 –> 00:52:51.029
Brian Schoepfle: even something in the Department of Defense, like Dod Srg. Impact level two through five.

00:52:51.040 –> 00:53:11.370
Brian Schoepfle: Um, you know we can accommodate those things, but if you but within the shared responsibility, model between aws and our customers, if you’re not managing identity correctly. Or if you’re overwriting some controls and and sending out or storing data, you know, in your text format, and it’s not being encrypted,

00:53:11.380 –> 00:53:25.319
Brian Schoepfle: that's not really within AWS's sphere of control. We can't stop you from doing that, and so it comes down to the architecture of the whole application, not trying to find the least common denominator of all of its component parts.

00:53:25.330 –> 00:53:29.589
Jon Spallone: Yeah, and that's just kind of a rabbit hole of my background. Oh, yeah,

00:53:29.600 –> 00:53:54.160
Jon Spallone: I won't go down the FIPS side; I just was impressed to see that it's there. Just like you said, there are a lot of IA standards out there that you've got to follow, and it's the same thing whenever the client's security officer says that's what you do. But yeah, it's nice to see that it's already, out of the gate, FIPS compliant. So that's very helpful to a lot of the public sector, or those FIPS demands that are out there, because there's a lot of

00:53:54.170 –> 00:54:01.789
Jon Spallone: civilian organizations that have to adhere to certain government standards. So this appliance does me.

00:54:01.800 –> 00:54:15.530
Jon Spallone: FIPS. Like you said, there's different levels, different details that we get into, but, quote unquote, having FIPS on there lets a lot of people know: okay, I'm meeting a good standard here as far as compliance goes for the gov side.

00:54:15.980 –> 00:54:28.189
Brian Schoepfle: Yeah, if you or your teammates at work have recently had a meeting about the President's executive order on cybersecurity, you probably already know what this is, and you'll definitely be interested in digging deeper into it.

00:54:28.200 –> 00:54:30.649
Jon Spallone: Yeah, definitely.

00:54:30.900 –> 00:54:55.400
Jon Spallone: Um, okay. So that's from the secure gateway, or, I'm sorry, the Storage Gateway. Um, where we again bring up the Snow Family within our hybrid cloud storage, that's really just kind of a recap of what we were talking about: the ruggedized small, medium, large deployments, and obviously the jumbo side of Snow fits into this, because

00:54:55.410 –> 00:54:58.340
Jon Spallone: you know, it's giving us that edge computing, correct?

00:54:59.020 –> 00:55:11.569
Brian Schoepfle: It is, yeah. And these are, unlike Outposts, more appropriate for geographies or workloads that have poor, intermittent, or non-existent Internet access.

00:55:11.580 –> 00:55:40.890
Brian Schoepfle: And I am able to bring some data transfer and some compute services in there, local to the appliance. But typically there's some sort of back and forth. You know, if we're seeing long-term usage of these devices, it's because we're using the device for a set period of time, AWS is cross-shipping me a new one, and I'm always taking this data back into AWS. In more of a one-time usage perspective, I'm getting one or more Snowball Edge compute optimized or storage optimized devices,

00:55:40.900 –> 00:55:42.120

00:55:42.130 –> 00:56:04.520
Brian Schoepfle: getting my data out of, or replicating my data from, my infrastructure into these ruggedized, secure devices, and shipping them off to AWS to have that be, you know, rehydrated inside of the AWS data center. And I'm not trying to push that over an unreliable VPN connection that, you know, could be less than a gig in some cases.

00:56:04.610 –> 00:56:21.129
Jon Spallone: Yeah, it's funny. I'm actually looking at the feature comparison right here, and, come on, the Snowmobile doesn't have any SSDs? I know we get one hundred petabytes, but

00:56:21.190 –> 00:56:40.690
Brian Schoepfle: I don't think we're trying to do anything fast in there, other than take advantage of the enormous economy of scale that having a forty-five-foot shipping container loaded with one hundred petabytes of magnetic disk capacity is able to bring to us. This is about

00:56:40.700 –> 00:56:56.379
Brian Schoepfle: just opening the floodgates and bringing all the data in. And even though, you know, we're not talking about an overnight thing, this is days and weeks in some cases, it's still much faster than trying to move petabytes or exabytes from one location to another over the wire.

00:56:56.500 –> 00:57:11.289
Jon Spallone: Yeah. And also, like you had mentioned earlier, in the feature matrix the device weight's not there. You're not giving us that, and you're not giving us vCPUs. So yeah, it's just a big truck of storage. That's what it is.

00:57:11.300 –> 00:57:13.819
Brian Schoepfle: Yeah. And we help

00:57:14.350 –> 00:57:26.799
Brian Schoepfle: we help set up. When a Snowmobile arrives at a customer location, a bunch of folks from AWS enterprise support are there helping the customer set up a carrier-grade switch

00:57:26.810 –> 00:57:54.480
Brian Schoepfle: to facilitate, you know, the network connectivity into their data center, where we're using local connectivity, very low latency, very fast, to bring that onto the disks and then get that off to a data center. And I wonder if we're not publishing the device weight because we don't include the driver in that number. But yeah, all you really need to know about it is: it's petabyte or exabyte scale data transfer in a big old tractor trailer.

00:57:54.490 –> 00:58:09.460
Brian Schoepfle: And um, I think for most folks trying to move terabytes, or small-petabyte scale, into AWS, one of the Snowball Edge storage or compute optimized devices is going to be the best fit for them.

00:58:09.530 –> 00:58:28.860
Jon Spallone: Yeah. Just out of my own curiosity, do you know of anybody that has actually done a Snowmobile? I'm sure there is, but I don't know if there's anybody you could reference on it. I know there's a use case out there for it, but it's just so interesting.

00:58:30.010 –> 00:58:40.090
Brian Schoepfle: Yeah. In the public sector there are lots of customers who have used this, particularly at the Federal civilian and Department of Defense level,

00:58:40.100 –> 00:58:53.019
Brian Schoepfle: who may not necessarily be repriminable customers, but we do have organizations like digital globe, who were able to execute a a massive,

00:58:53.030 –> 00:59:07.550
Brian Schoepfle: a transfer of data from one place to another. This is somebody who is doing Gis systems. They had so many petabytes of archive data, and they needed to get that into the cloud and just large file transfer protocols. The delivery workflows work they had done for them.

00:59:07.830 –> 00:59:25.120
Jon Spallone: Yeah, it just blows my mind. I mean, you know, to think years ago, back when I was a young one, I was excited to get a computer with sixty-four megs on it, and just within two decades, look where we're at now. We've got a

00:59:25.130 –> 00:59:28.269
Jon Spallone: tractor trailer of storage. I mean It’s just amazing.

00:59:29.350 –> 00:59:48.449
Jon Spallone: So with that we kind of wrap up on that hybrid storage and edge computing side, because we did cover a lot of the Snow Family in our previous category. So now we go into managed file transfer, and this for me, from what I'm seeing, is more along the lines of

00:59:48.680 –> 01:00:03.880
Jon Spallone: data collaboration, and not necessarily, from, you know, what we would look at at a high level, down-in-the-weeds type data collaboration. We're looking at more of

01:00:04.860 –> 01:00:17.599
Jon Spallone: and providing that access. For you know, data sharing necessarily not not really getting into like workflows and stuff like that, but just really getting that data share from one point to the other by correct

01:00:18.530 –> 01:00:31.769
Brian Schoepfle: Yeah. So I can't speak to whether SFTP, FTPS, or just regular old FTP is the best way to share data with

01:00:31.780 –> 01:00:42.190
Brian Schoepfle: customers and business partners. We have customers in a bunch of different industries. Whether that’s financial services, health care, media entertainment, retail advertising

01:00:42.200 –> 01:01:00.739
Brian Schoepfle: that are still using these transfer Protocols and sftp is the Ssh file transfer Protocol, which facilitates secure transfer beta over the Internet It’s got the security and functionality of Ssh. And It’s. Still very widely used to exchange data from the

01:01:00.750 –> 01:01:19.069
Brian Schoepfle: you know, from typically from one entity to another, and the transfer family that offers this fully managed support for the transfer of these files over the four protocols that are typically used for these kinds of workloads, which again are Sftp, Ftbs Ftp. And as two.

01:01:19.080 –> 01:01:36.069
Brian Schoepfle: And this is about moving files over these protocols into and out of either Amazon S. Three or Amazon Efs it’s a managed service that helps you migrate, automate and monitor file transfer, workload.

01:01:36.080 –> 01:01:59.390
Brian Schoepfle: It does not require you to change any existing client-side configurations for authentication access getting through the firewall. So folks who are using these protocols to bring data in or send data out. Um can use this service and and know that nothing will need to change for their customers, partners, internal teams, or the applications that are being used.

01:01:59.780 –> 01:02:08.360
Jon Spallone: Yeah. And most of these protocols, I don't see them as prevalent as they used to be, just because of other collaboration

01:02:08.370 –> 01:02:37.099
Brian Schoepfle: tools that are out there. But I still see it like you said in in the financial industry. You know banking industry. There’s a lot of that um when they’re transferring loans from open source or or open users coming in. So I see that, and also in call centers. I still see these protocols being supported from a support standpoint, to be able to get walls shipped back from a customer so and so forth. But this is a service that’s set up there that’s natively can go

01:02:38.690 –> 01:02:50.400
Jon Spallone: these calling services and then be able to put us into our file block level that we’ve got in the back end of storage in our cloud tenant that we’ve already spun up

01:02:51.140 –> 01:02:51.790
Brian Schoepfle: Yes.

01:02:51.800 –> 01:02:53.549
Brian Schoepfle: Yeah. So if

01:02:53.770 –> 01:03:03.510
Brian Schoepfle: if you’re using one of these protocols, You’re probably familiar with the clients that are using these like win, scp, file, Zilla, Cyberdock,

01:03:03.520 –> 01:03:23.990
Brian Schoepfle: Um, and the open Ssh. Clients. As you mentioned. There, we see a lot of this. Um, it judge to facilitate the movement of data out of a crm or erp. These are also great for just moving data from one point to another for archiving purposes,

01:03:24.000 –> 01:03:39.059
Brian Schoepfle: and um you can. You know the the the value here is can take, you know. Keep the same endpoints, Keep the same clients, no changes to your business partners or your customers. But you’re able to,

01:03:39.070 –> 01:03:48.040
Brian Schoepfle: and much more easily kind of manage and monitor and maintain this business to business file transfer. And this is bi-directional as well.

01:03:48.050 –> 01:03:59.469
Jon Spallone: Yeah. And then again, that's where we get into the additional back-end support and services from AWS, within your subscription, where you can do additional

01:03:59.700 –> 01:04:24.919
Jon Spallone: stuff with that data. You know, machine learning is doing your additional crunching on this data itself to be able to do processing So that that’s nice thing is that it shifts right in to your Aws Cloud. You don’t have to do any type of double hops or anything like that. You know the data is getting right there where you are allowing that native functionality that you’ve already set up and determined in your client.

01:04:25.760 –> 01:04:33.099
Brian Schoepfle: Yep. And a great way to understand the value of the AWS Transfer Family is to think about what you need to do when you don't have it,

01:04:33.110 –> 01:05:02.040
Brian Schoepfle: and that’s, you know. Post and manage your own file transfer service, which is going to require you to invest in operating and maintain the infrastructure you have to catch all the servers monitoring that for up time and availability. I’m sure anyone that’s worked in financial services or anybody that has a lot of the application. Customization, particularly on Crm or erp systems, know It’s about building these one off mechanisms to be able to provision users and audit their activity. Um. And this is all what we would call undifferentiated, heavy lifting,

01:05:02.180 –> 01:05:25.499
Brian Schoepfle: and by trans by by by shifting that to aws you can leverage an economy of skill to make that happen for you as a managed service. Now I’m focused more on growing my own business, supporting my own customers and not ah, you know, setting up my own sftp system as a cottage industry within my own organization. So that’s typically not why we get in business, um, you know, to run our own it.

01:05:26.550 –> 01:05:37.459
Jon Spallone: So the last category we have is disaster recovery and backup, and I know we've only got a couple of minutes. But really, this can be a whole rabbit hole unto itself.

01:05:37.470 –> 01:06:00.810
Jon Spallone: Um! So there are topics we can cover on this one later on, um, and also some additional partners that have AWS services to work within it. But really, from that DR standpoint, I mean, we hit a little bit on that when you were talking about the virtual tapes, being able to do that from a data migration standpoint, or with DataSync. Sorry. And the, uh,

01:06:00.820 –> 01:06:12.839
Jon Spallone: what we’re looking at first in this category is the elastic disaster recovery. So Drs from Aws. And this really to me kind of seems like what

01:06:12.920 –> 01:06:17.490
Jon Spallone: it says. I mean. It’s just kind of it’s back up.

01:06:17.500 –> 01:06:25.449
Jon Spallone: I know it's not the prettiest thing in the world, but business continuity is something we all have to worry about.

01:06:26.530 –> 01:06:47.259
Brian Schoepfle: Yep. So the AWS Elastic Disaster Recovery service is the next generation of a technology that was originally built for migration. We've talked about the movement of the actual data in a database. We've talked about the movement of files and objects,

01:06:47.270 –> 01:07:03.609
Brian Schoepfle: you know. So we talked about the structure data. We’ve talked about unstructured data. We haven’t really talked about the movement of the servers themselves and elastic disaster. Recovery is block level replication of virtual machines from one location into Aws.

01:07:03.620 –> 01:07:18.100
Brian Schoepfle: The reason we call it a service, is It’s more than just, you know, deploying an agent and and facilitating that continuous data replication within a single console, I’m. Able to actually set up how I might

01:07:18.110 –> 01:07:41.279
Brian Schoepfle: either execute a cut over, how I might execute a cut over, whether that’s for the purposes of disaster, recovery where it’s not really my choice. When i’m going to cut over um, or when i’m actually executing a migration. But i’m setting things up. I’m able to launch instances to do non-disruptive tests. You know many compliance programs coming back to that do require you to go through Dr. Ah tests

01:07:41.290 –> 01:08:05.659
Brian Schoepfle: um! You know we can say that we have an Rto. Of this and an rpo of this. But have we actually executed a a restore? But Drs allows us to do that without disrupting our normal operations. So i’m able to maintain readiness. I’m able to monitor what’s going on. I could fail over either automatically or Ah, with a couple clicks. And then, in the case of disaster recovery, I might want to execute a a failed back

01:08:05.670 –> 01:08:14.149
Brian Schoepfle: uh in the case of a migration. It’s it’s not really a failover. It’s a cut over, but it’s the same technology. It’s the same service just being leveraged in two different ways

01:08:14.160 –> 01:08:16.469
Jon Spallone: So, and then, essentially,

01:08:16.479 –> 01:08:36.910
Jon Spallone: equating this to an on-prem environment, this would be similar to what, like, VMware Site Recovery does between two data centers. Is that basically what I'm hearing? Being able to fail over from one site to the next, and, you know, if I'm going over to site B because site A is down, then I have that migration back

01:08:36.920 –> 01:08:40.639
Jon Spallone: once. Once I say A is up and running, and then I can go live.

01:08:41.100 –> 01:08:51.499
Brian Schoepfle: Yep. So this is point-in-time recovery, block-level replication, that does not require me to run, like,

01:08:51.569 –> 01:09:05.600
Brian Schoepfle: costly kind of like, for, like ah compute instances, so you know, as it’s replicating it’s it’s writing to very small target servers. It’s updating their desk disks,

01:09:05.609 –> 01:09:17.800
Brian Schoepfle: and in the event of an outage or a need to fail over for any reason. Those virtual machines are re-provisioned at production, size, and so i’m not paying for

01:09:17.810 –> 01:09:32.620
Brian Schoepfle: i’m not paying for the sixteen cores that I need in production. Ah, when i’m i’m really just ah using aws as a disaster recovery site so very cost-effective, but but a very great tool for both migrations and disaster recovery.

01:09:33.580 –> 01:09:39.170
Jon Spallone: And then the last thing we'll hit on, because I'm sure this one's pretty quick, is AWS Backup.

01:09:39.689 –> 01:09:40.490
Brian Schoepfle: Yep.

01:09:40.500 –> 01:09:41.389
Brian Schoepfle: Everybody is.

01:09:41.399 –> 01:09:42.090
Brian Schoepfle: It’s back there.

01:09:42.100 –> 01:10:09.669
Brian Schoepfle: Yeah. That's backup, not disaster recovery. Backup is about a data protection strategy, and all of the AWS services have had some sort of mechanism for data protection, whether we're automating snapshots of EBS volumes attached to EC2 instances, or we're doing high-availability configurations of our RDS implementations or our EFS implementations.

01:10:09.680 –> 01:10:23.100
Brian Schoepfle: Maybe we’re using cross-site replication for s three buckets to replicate all the objects that I write in one bucket into another bucket for safe keeping. What Aws backup allows me to do is

01:10:23.110 –> 01:10:40.309
Brian Schoepfle: from a single console, unify a backup strategy based off of a number of different indicators which could be the resource tags that i’m putting on things, who deployed them where they’re deployed, what service they are set, different retention, level requirements.

01:10:40.320 –> 01:10:47.979
Brian Schoepfle: Um set certain Rt: Well, our po objective requirements like, make sure that i’m not losing a certain amount of data,

01:10:47.990 –> 01:11:07.499
Brian Schoepfle: and with a unified backup plan, i’m taking care of all of that from a single console, and I don’t need to have a different backup strategy for each Aws service that can be protected by Aws backup. And we’re adding more services to that list all the time.

01:11:07.510 –> 01:11:21.520
Jon Spallone: Yeah, and like I said, it's DR, business continuity. It's not sexy, but we all need to know it. I mean, it's there, so it's good to know that these services are there. So really, I think this wraps up the

01:11:21.530 –> 01:11:40.939
Jon Spallone: storage side discussion that we’ve had. So completing our two episode series here. Um, Any Any other key points that you would want to hit on. I mean the the key thing that I would also want to bring up is obviously, if you want to dig in deeper on this, and this will be our listeners. If you wanted to get deeper. I mean

01:11:40.950 –> 01:12:07.149
Jon Spallone: so, and Tiger is available here to have those conversations. Aws. Um, you know it’s definitely deeper conversations than what we’re getting into here. Um, you know we’re just making sure that you’re aware of the options that are out there, and and how it may fit as you’re analyzing. And you’re looking at your cloud migration or your cloud solution. You have today. Um! But outside of that any key things that you would want to touch on before we wrap up,

01:12:07.820 –> 01:12:35.820
Brian Schoepfle: Sure, to wrap it up. You know, I know we've talked about a bunch of different things, and every time we have a conversation we're going to talk about a bunch of different things, and that's just what comes with AWS being the cloud provider with a really broad and very, very deep set of capabilities, right? And the good news is, as rich as our catalogue of products and services is, we have just as rich a catalogue of partners, whether that's someone like XenTegra

01:12:35.830 –> 01:12:42.040
Brian Schoepfle: or one of our Isv partners that runs on top of aws and helps facilitate all these things,

01:12:42.050 –> 01:13:11.470
Brian Schoepfle: we do not make, nor do we expect our customers to go it alone on any of this. And and whether you’re adopting cloud for the first time, or trying to modernize on cloud, or bringing more complex workflows into the cloud for the first time. There is this three-legged stool. Of our great isp partners, a systems integrator, like Integrra and the Aws team that is helping customers make the right choice so that they can focus on what’s important for them. You do not need to have a Phd. In Aws to get started with this,

01:13:11.480 –> 01:13:20.440
Brian Schoepfle: because there’s such a great community of folks out there that can help you understand what you’re trying to do, where you want to go and help pick the things that are likely to be the best choice for you.

01:13:21.130 –> 01:13:26.700
Jon Spallone: Great, great. Yeah, I know. And just from my experience, it's not just,

01:13:26.710 –> 01:13:49.139
Jon Spallone: so. You’re aware from centagorous experience we’re not just signed in a paper, and we’re partners. And There’s a lot that I’ve been going through a process to build our relationship, to build our understanding of technologies, and I know It’s not different for other partners that are out there. So when you get a trusted advisor from Aws, you are getting somebody who’s been talking about.

01:13:49.150 –> 01:13:57.919
Jon Spallone: So I mean, that’s one key thing about this organization. I’m getting as I go deeper within my aws around a whole

01:13:57.930 –> 01:14:26.010
Jon Spallone: um. So with that I mean, we’ll. We’ll wrap up on this week’s episode. Brian again. Thank you so much for being here. Um! I know we’ve got some stuff planned coming up the next couple of episodes, so you know we’ll get some. Ah, some other things. We’ll kind of get out of some of this. Ah block level. Ah! Discussions of what services are out there, but get into some more meatoes of different things that are going on within the aws world. So again, thanks, Brian. I appreciate it as always

01:14:26.890 –> 01:14:28.650
Brian Schoepfle: Likewise, Jon. Talk soon.