The .NET on AWS Show, Featuring James Eastham!


In this episode, we are joined by Senior AWS Cloud Architect, James Eastham

Brandon Minnick
Amazon Employee
Published Feb 28, 2024
Last Modified Mar 15, 2024

Listen to the Audio Podcast


Watch the Live Stream


Transcript

Brandon Minnick 1:08
Hello, and welcome, everybody. Welcome back to another episode of The .NET on AWS Show. My name is Brandon Minnick, and with me as always is my amazing co-host, François. François, how was your week?
Francois Bouteruche 1:23
Fine, fine. It was an awesome week at NDC Copenhagen. We were both there together and it was really amazing. An amazing conference, amazing speakers. So yeah, definitely a great week there with the community. I really enjoyed this week with Brandon.
Brandon Minnick 1:46
Oh, I mean, I'm super biased, because yes, we were both at NDC Copenhagen. If you haven't heard of the NDC conferences, as a .NET developer, look them up. Go. They're incredible. NDC actually stands for Norwegian Developers Conference, but over the years they've expanded far outside of Norway, like we were just at the conference in Copenhagen. They have conferences in Sydney, and obviously in Oslo, London, Minneapolis over in the US. So NDC is kind of like KFC nowadays: I think KFC used to stand for Kentucky Fried Chicken, but then they expanded to China and, like, nobody knows what that means there, so they're just KFC. But yes, NDC are the best conferences you'll go to. And they have one coming up — I'll be speaking at the one in Porto, gosh, just next month in October, so maybe I'll see you in Porto. So yeah, great swag. We've got a couple of announcements this week. I think you've got the fun one, I've got the sad one, so I'll start with the sad news, and that way we can get the show onto a happy note before we bring in our amazing guest. But yeah, if you haven't heard about it, and maybe you're one of the rare .NET developers like me who does a lot of work on their Mac, Microsoft announced they are deprecating Visual Studio for Mac. I think it's got about a year left until they end support on it. So it's kind of an end of an era for me. It actually used to be called Xamarin Studio: when I worked at Xamarin, we had created this IDE so you could create your Xamarin apps on both macOS and Windows. And then shortly after Microsoft acquired Xamarin, they rebranded it into Visual Studio for Mac, which I always thought was a terrible idea, because if you're going to call it Visual Studio, it should probably be the same as Visual Studio on the PC, when in reality it was always just made for Xamarin developers like me. So I've always loved it. I still use it, I am still using it. But for any other fellow Mac developers, or .NET developers on the Mac, check out JetBrains Rider — I am going to be moving over there. I've actually been pushed, or poked, for a couple of years now and told: why are you still using VS for Mac? You should use Rider, it's amazing. So I know it's really good. It supports .NET MAUI — actually, it supported .NET MAUI before even Visual Studio for Mac did. So I'm excited about that. And then you see all of our amazing community members, like if you ever watch a video on Nick Chapsas' YouTube, he's always using JetBrains Rider, and I was just watching a video of his the other day with all the shortcuts and how quickly he was able to code, and it actually got me excited. So I'm sad, but excited to learn a new tool and hopefully increase productivity. But with the sad news out of the way, François, what's coming up?
Francois Bouteruche 4:53
Yes, I have a fun thing coming up. So we are on tour. AWS is going on tour in Europe, starting on September 18. The tour is starting in London. Short story: we are taking a bus, although, to be honest, the bus will only start in Paris. So the first day is in London on September 18. It is a one-day conference for software developers. This conference is really crafted — I'm working on the agenda with my peers — it is really crafted for software developers, that's our target audience. And you will learn a lot about all the cool tools and services we have for software developers and how you can be more productive using those tools. Because, to be honest, we've discovered through surveys that many of you don't know about these tools and how they can help. Like, I know you are recording some videos about the AWS toolkits, Brandon, and those tools can really help you get up to speed when you are using, for example, Visual Studio — or, speaking of Rider, we have the toolkit for Rider too. So during this full day, with only one track, we will show you how. That's really our target: to show. We don't want to tell, we want to show, so you can expect a lot of demos during the day. And, cherry on the cake, there will be some booths with folks from the service teams, so you can expect to meet some folks from the service teams during those days. So the first day is in London on September 18, then we are taking the Eurostar to go to Paris on Wednesday the 20th, and then we jump onto the bus and go to Brussels, Amsterdam, Frankfurt, Zurich, Milan, Lyon and Barcelona — so nine cities in three weeks. And you can also expect a lot of videos on the social networks. So this will be three intense weeks where we want to meet you where you are, in your cities, and we want to chat with you. So don't hesitate to register for your date — I think we have the link for the registration. This will be really a great event. So expect a lot of swag also.
Brandon Minnick 7:55
Free stuff! Is it a free event to go to?
Francois Bouteruche 8:00
Yes, the event is completely free.
Brandon Minnick 8:04
Amazing. Book it — go, go see it. I mean, why not, right? Other than François being exhausted at the end of three weeks travelling all around on a bus. I'm picturing a big, big tour bus, you know, like famous rock bands tour around in, and all of our awesome speakers and product managers and engineers getting off the bus. That's so cool. Yeah, you're basically rock stars.
Francois Bouteruche 8:33
Yes, it's exactly the size of a big tour bus, and we will be able to record you inside the bus, because there will be a place to record inside of it. So I think it will be kind of crazy.
Brandon Minnick 8:49
AWS on tour — don't miss it. Yeah. Well, first of all, we have an amazing, amazing guest this week, so I don't want to take up any more of his time. He's also basically part of the show: if you've watched previous episodes, you've seen him, you've seen him host before. So without further ado, James Eastham, welcome to the show. Gotcha — caught you mid-cough. Luckily, you were muted, though.
Good to see you, James. I appreciate you coming back on the show. If you haven't seen James before, he's hosted a couple of episodes of the show with me and François in the past. But James, for anybody who doesn't know yet: who are you? What do you do?
James Eastham 9:44
Yeah, so I'm a professional services consultant here at AWS. What that means is I actually function much like any consultant you'd get in the wider world: I go out to AWS customers and work with them to help them build things. So we've got two types of customer-facing people at AWS: we've got solutions architects, and we've got professional services. Solutions architects are typically more high-level design, roadmaps, helping customers look at what they want to do and adopt new services; professional services actually get hands-on and build things. So that's my day job. I kind of like to say, by day I do that, and then by night I do all things serverless. So my big area of interest is serverless development. I've been working with Lambda and .NET, I think, since Lambda and .NET was first a thing — I think .NET Core 2 was the first version of .NET in Lambda, I can't remember. I kind of missed the whole containers ecosystem, so I went from deploying things manually onto servers straight to Lambda. Yeah, whenever people talk to me about containers and Kubernetes, I kind of just gloss over a little bit.
Francois Bouteruche 10:52
What? Is this wrong? Why would you ever need containers?
James Eastham 10:57
Exactly — what do you mean? What would you ever need them for? Anyway, so yeah, I kind of missed all of that, and I'm kind of backtracking now because I need to do some stuff with containers with the customers I'm working with. Yeah, that's my big area of interest: serverless and .NET. I do do a little bit of stuff with Java as well, but don't let that be said on the show.
Brandon Minnick 11:15
Of course. So yeah, yeah —
James Eastham 11:20
That's, that's kind of what I do.
Brandon Minnick 11:24
Yeah, it's absolutely incredible. James, I am so, so thankful that we have somebody like you at AWS. Because if you're not following James — if you're not following him on Twitter at @plantpowerjames, if you're not following him on YouTube at Serverless James — you're missing out on a lot of really incredible content. And what's even more impressive to me is, if you didn't work at AWS, you might think that James actually works on the Lambda team, because he puts in so much work. He literally develops libraries and submits pull requests. And as far as I can tell, you're just doing that in your free time? That's incredible.
James Eastham 12:12
Yeah. I always think one of the benefits I've got is that the only real dependency I have is my dog. Like, I don't have a family; I've got a girlfriend, but she's away a lot, so she's quite understanding about stuff like that. So yeah, it's mostly just free-time stuff. It's something I'm incredibly passionate about, because when I think about some of the systems I've built in my career, when I was building things on VMs and stuff like that, there's so much involved in just getting started before you can even build anything. And I think what services like Lambda and serverless technologies give you is that really quick way to validate an idea. It might not necessarily be that you run on Lambda long term, but that ability to quickly prove an idea for very little cost — because if your idea doesn't work, you don't pay for it, and you get a million free invokes a month on Lambda, forever. Like, that's just how it is. I just love talking about it, because a lot of people are just getting started, especially in the .NET community, and it's a bit of a paradigm shift. If you're building purely serverless things, it's such a paradigm shift in terms of how you're used to building applications as a developer. So the content needs to be created.
Brandon Minnick 13:48
Right? Do it. Give the people what they want. Yeah, you mentioned the paradigm shift, and I'll say I'm heavily biased — I'm with you, I love serverless. I use it for my mobile apps that I have in the App Store, mostly because I don't make any money off the mobile apps, so if I can have a free back end, well, I can afford that. But the biggest thing you've got to know or be aware of is what we call cold start times. And I have some tricks where I've worked around that, so you don't notice it in my app. But what are those cold start times? And what direction is the Lambda team heading in to bring those down and make things simpler, so that maybe one day in the future we don't have to have this conversation, because it's like, oh yeah, there is a cold start time, but you wouldn't even notice it — blink of an eye?
James Eastham 14:52
You're trying to tease some insider information out of me, is that what you're doing?
Brandon Minnick 14:56
A little bit. I'm trying to. What's coming for .NET 8?
James Eastham 14:59
Let's have some fun with cold starts, then. Let me just — I'm just very quickly trying to open up a deck from a talk I gave last week, because it's got some really good things on cold starts and what a cold start actually is, because there are maybe people listening who are hearing about Lambda for the first time. So let me just share my screen, if I can work out how to do this again on StreamYard. Is it going to be screen one or screen two? You don't want to look at yourself. So, for anyone who's unfamiliar, a cold start in Lambda is something that happens the first time a request hits a version of your Lambda function, and there's a set of things that need to happen before the request can actually be processed. When a request comes into Lambda, there is a service that we call the worker service, sometimes the front-end service, that is going to look around to see if there is an execution environment available. What an execution environment is, is just a self-contained, really lightweight container running on top of a micro-VM. And this worker service is going to look to see if there is a current execution environment for that specific version of your Lambda function. If there isn't, it needs to create one of these execution environments. So it'll go off, it'll create the execution environment, it'll start up the container, it'll download your application code — that might be from a zip file on S3, that might be a container image in ECR — it will boot up the runtime, which will be the .NET runtime, and then it will run the initialization part of your application code, which in the .NET world is typically your constructor, the constructor of the class that has got your handler in it. After all that has happened, the request will actually be passed to your function. And that first bit is what's known as a cold start. That varies from runtime to runtime. So runtimes like Rust, to give an extreme example, can be single-digit, double-digit milliseconds — it's not even fair how fast it is. At the other end of the spectrum is Java, which historically could be up to seven, eight, nine seconds of cold start. .NET is somewhere in the middle: typically I see, for a single-purpose Lambda function that's got one job, you might be looking at 700, 800, 900 milliseconds, up into maybe low single-digit seconds. And there's a really good repository that we have on GitHub, on the AWS org, that has benchmarks for all the various different ways of running .NET on Lambda. Another important thing about cold starts, and execution environments in general, is that each of these execution environments will only ever process one request at any one time. So what that means is that if 10 requests hit your Lambda function at exactly the same millisecond, you're likely to get 10 cold starts — that's just likely what's going to happen. And then these execution environments will stick around for a period of time, which means if another request comes in, it doesn't need to do the cold start — you've got a warm start, that's what we know as a warm start. And that will typically be — as we all know, .NET is fast; once warm, .NET is incredibly fast. It's that cold start that can take a little bit more time. Any questions?
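To make that lifecycle concrete, here is a minimal sketch of a single-purpose .NET handler — the names are hypothetical, and it assumes the Amazon.Lambda.Core and AWSSDK.DynamoDBv2 packages (deployment wiring such as the JSON serializer attribute is omitted) — showing which code runs during the cold start versus on every warm invoke:

```csharp
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.Lambda.Core;

// The handler string given to the Lambda service would look something like:
//   "MyLambda::MyLambda.Function::FunctionHandler"
namespace MyLambda;

public class Function
{
    private readonly IAmazonDynamoDB _dynamoDb;

    // The constructor runs once per execution environment, i.e. during the cold start.
    // Put expensive, one-off initialisation (SDK clients, configuration) here so that
    // warm invocations can reuse it.
    public Function()
    {
        _dynamoDb = new AmazonDynamoDBClient();
    }

    // The handler runs for every request served by this (now warm) execution environment.
    public async Task<string> FunctionHandler(string input, ILambdaContext context)
    {
        var tables = await _dynamoDb.ListTablesAsync();
        return $"Processed '{input}'; the account has {tables.TableNames.Count} DynamoDB tables";
    }
}
```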
Francois Bouteruche 18:31
I have a question for you, just for people that are more familiar with, I would say, the old .NET Framework runtime and IIS, for example. To me, it looks like when you deploy your new .NET Framework web application on IIS, if you try to send 10 requests while it is bootstrapping, you will experience the exact same issue: your server will be slow, and the first few requests will take a very long time to get an answer. And once your application server is warm, it is quite fast. So it looks like it's the same, in fact.
James Eastham 19:23
I think it's a really interesting point you raise there, François, and something I've thought about a few times: if you think about a workload that needs to scale, predictably or unpredictably, Lambda will scale more quickly than an application on an EC2 instance, right? Because if an EC2 instance needs to scale, the auto scaling needs to trigger, the instance needs to launch, Windows needs to load, IIS needs to load — all this stuff needs to happen. So yeah, I'd agree: although Lambda cold starts can be impactful, compare that to the cold start of a server — yeah, absolutely. And there are some things you can do with Lambda to get around that. You've got provisioned concurrency, which is a feature of Lambda where you can set a certain number of execution environments to always be available. Initially you might think, that means it's not going to scale to zero, that's going to cost me lots of money. But if you actually look at the pricing, provisioned concurrency is cheaper than on-demand Lambda, providing you utilize the concurrency. This varies region to region, because the pricing changes region to region, but in us-east-1 — don't quote me on this exact number — providing you're using over, I think it's about 60% of your provisioned concurrency, provisioned concurrency is cheaper than on-demand. And you can auto scale your provisioned concurrency, you can spin that up and down, and obviously if you burst past your provisioned concurrency, you'll just get cold starts — you're not limiting yourself to that number of requests. So, to come back to your question, Brandon, that's one strategy for handling cold starts: if you have some kind of relatively predictable workload, you could use provisioned concurrency to handle that. And if you get your provisioning right, you can actually save yourself some money, which is really interesting. I didn't know about this until maybe a month or two ago, and I was like, whoa, that's cool.
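As an illustration of that strategy, here is a hedged sketch of putting provisioned concurrency on a function alias using the AWS CDK in C# — the construct names follow the CDK v2 Lambda module, and the function details are hypothetical:

```csharp
using Amazon.CDK;
using Amazon.CDK.AWS.Lambda;
using Constructs;

public class ApiStack : Stack
{
    public ApiStack(Construct scope, string id) : base(scope, id)
    {
        // Hypothetical single-purpose .NET function packaged into ./publish.
        var fn = new Function(this, "SetStockPriceFunction", new FunctionProps
        {
            Runtime = Runtime.DOTNET_6,
            Handler = "StockTrader.SetStockPriceFunction::StockTrader.SetStockPriceFunction.Function::FunctionHandler",
            Code = Code.FromAsset("./publish")
        });

        // Keep five execution environments initialised at all times. Bursts beyond
        // this still get served; they just fall back to on-demand invokes (and their
        // cold starts), so you are not capping throughput at five.
        new Alias(this, "LiveAlias", new AliasProps
        {
            AliasName = "live",
            Version = fn.CurrentVersion,
            ProvisionedConcurrentExecutions = 5
        });
    }
}
```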
Brandon Minnick 21:33
Yeah. Actually, first of all, I mentioned we were both in Copenhagen for NDC Copenhagen last week, and I was chatting with Martin Thwaites, another friend of the show. He was saying how his company heavily uses Lambda and serverless — their whole back end actually runs on serverless — and it wasn't until I was chatting with him last week that I learned you could do that. He was explaining almost exactly the same thing, where, yeah, they have a couple of instances essentially already warmed up, and then the great thing is, if they need more, it just automatically scales for them. So they try to dial in that number so it's the most optimized for both cost and their user experience. It was really cool hearing him explain how their entire back end is serverless, and they have — I don't know how many users, but I would assume millions of users, at least millions of requests, every day on Lambda. So that's really cool. And yeah, I was taking a peek at the link we just shared, because this is where you can find the benchmarks that we have for cold start times versus warm start times. To be honest, warm start times are basically negligible — I'm looking at these charts and it's like nine milliseconds, and do I really care about nine milliseconds? Not really; internet connections can fluctuate by nine milliseconds. Even looking at the cold start times, there are some that are getting down as low as 200, 300 milliseconds, and as a mobile developer, that's kind of where I target. And if you can zoom in a little bit, James...
James Eastham 23:35
I can't, and I just can't seem to scroll, weirdly.
Brandon Minnick 23:42
But yeah, like, literally a blink of an eye. When a human eye blinks, it takes about 300, 350 milliseconds, and not coincidentally, that's about the perceived amount of time that we can detect. So in the world of mobile, if you tap a button in my app, as long as I give you feedback within 350 milliseconds, you won't even notice. Obviously, if it takes three seconds after you tap a button for my app to do something, you'd be like, what happened, this app froze for three seconds — I definitely notice that. At maybe 600 milliseconds, you might be like, oh, that felt a little weird; I don't know why it felt weird, because it was less than a second, but I noticed something. But when we get down towards 300 — I'm seeing 260 milliseconds — at that point I don't even know if I'm going to tell people about cold start times, if they're that good. But how is this happening? How are we getting this low?
James Eastham 24:43
So, yeah, this is all using native AOT, and that has been one of the big recent .NET developments that have really helped with Lambda. Native ahead-of-time compilation, for anyone who is listening who doesn't know, is a feature of .NET that went GA in .NET 7 that basically generates a natively compiled binary, which completely removes the need for the JIT and really massively improves your startup performance. It's just a really cool new feature of .NET. So these numbers — if I scroll back up to give a bit of comparison, this is .NET 6. If we look at a basic .NET 6 x86 Lambda function, for your cold starts you've got 778 milliseconds at p50, 966 at p90, 1.4 seconds at p99.
Brandon Minnick 25:41
Wait — what does that P stand for? What's a p50, James?
James Eastham 25:44
So that's the 50th percentile. So 50% of requests were 778 milliseconds or below, 90% of the requests were under 966, and 99% of the requests were 1.4 seconds or below. I think I've explained that right — you may be questioning my understanding. So yeah, just to be absolutely clear with these numbers: these numbers are Lambda only. What I mean by that is, you've got a request that hits API Gateway, if you're using API Gateway, and there's some latency between API Gateway and Lambda, and then back again. This is purely the Lambda part, not the latency between API Gateway and Lambda. That's typically very small, but the actual numbers you'll see end to end are probably slightly higher than this — this is purely Lambda, like I said. The other thing I will point out about these numbers is how we run the benchmarks. If I scroll a bit further, we have a really simple — there's a diagram on here somewhere, my computer is not happy today — there it is: we've got a relatively simple CRUD API that has API Gateway with four separate Lambda functions behind it, to get, list, create and delete products, and it does that against a DynamoDB table. That's the architecture of what we run this against. Then we run 100 requests a second for 10 minutes against the API endpoints, and we use that to generate these numbers that you see here. So that's how we actually run the benchmarks. The last time I ran these, there were about 150,000 invokes of Lambda, and about 500 of them were cold starts. So if you want to do some quick math about what percentage of requests were actually cold starts — I don't want to do that math. Because, like you said, Brandon, once warm, .NET is fast. So if you think about it, fewer execution environments get created, because each individual request is taking single-digit milliseconds to respond. That means the execution environment is available again, and again, and again, and again, so the same subset of execution environments can be reused. And this is a really interesting thing about Lambda: when you are developing, every time you publish a new version of your code, you are guaranteed to see a cold start, because that's a new version of your application code, which means you need a new execution environment. So as you're developing, you might think, ah, cold starts are happening so often, there are so many cold starts. But actually, I always say this to people: run something like a production load against that same function — you've got a million free invokes. Just run some kind of production-like load, and use that as your baseline to see if cold starts are a problem, not the developer experience, if you will, because that will be slow, guaranteed. So anyway—
Brandon Minnick 28:58
I love this stuff, because I'm that weird guy that likes going deep into the .NET runtime to figure out how it works, and I literally just gave a talk on async/await where we show the compiler-generated code and all that fun stuff. So here, with AOT — if you've never heard of that before, like James mentioned, it stands for ahead-of-time compilation. You might assume that .NET is already ahead-of-time compiled, because, well, we have the compiler in Visual Studio and I tap Build, which compiles my app — so what the heck's going on? Well, what the compiler is actually doing is lowering your code to intermediate language, IL. If you're familiar with assembly language, it's kind of like that low-level language. But one of the reasons why .NET is so fast is that it also compiles that code just in time — that's JIT, just-in-time compilation — when you run it. So typically, if you have a web server running ASP.NET Core, as soon as that code is about to run, the .NET runtime just-in-time compiles it. And one of the benefits there is that, because the compiler is running side by side with the code, it can optimize our code. As our code is running, that just-in-time compiler can see which branch is being used — like, you've got an if statement here and 99% of the time it goes inside the if statement — so the compiler can basically start loading that code assuming it's going to go there, and if 99% of the time it does, it's just optimized itself and saved a bunch of time. So it's really interesting how .NET works under the hood. But with AOT, it basically compiles all those bits ahead of time. So it's got to think of all the system libraries, like System.DateTime and all that stuff we get for free in .NET — that's got to be compiled too. So AOT does result in a slightly bigger app; your DLL, your binary, is going to be a little larger, because it also has to include that compiled .NET code. But the performance benefits we see are amazing. And it's actually something we've recommended for a long time for our .NET MAUI apps and Xamarin apps, because with mobile apps there's a rule where, if they don't load within three seconds, typically the user will assume the app's frozen; they'll force quit, probably give you a one-star review, and delete the app off their phone. So we've been trying to optimize our startup times in the mobile app world forever, and it's really cool to see this correlation, this analogy, with serverless. It's kind of the same idea, right? We need this code starting as fast as possible — how do we do it? And now I love seeing .NET entering this world of AOT, because we've been using it for years in mobile, so it works. I'm excited about it. And yeah, like James mentioned, AOT did debut with .NET 7, but the .NET team was still kind of like, you know, use it, but test it first, because we're confident, but we're not that confident in it. With .NET 8, now I'm seeing that confidence: go ahead and use it, AOT is a thing now, you can trust us. And so I can't wait, because if my back end takes 200 milliseconds to spin up — great, users won't even notice that.
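For reference, opting a project into native AOT is mostly a project-file switch — a sketch of the relevant csproj properties, assuming .NET 8 and the standard MSBuild settings for Native AOT:

```xml
<PropertyGroup>
  <TargetFramework>net8.0</TargetFramework>
  <!-- Produce a natively compiled binary at publish time; no JIT, much faster startup. -->
  <PublishAot>true</PublishAot>
  <!-- Optional: strip debug symbols from the native binary to keep the package small. -->
  <StripSymbols>true</StripSymbols>
</PropertyGroup>
```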
James Eastham 32:54
Yeah, absolutely. And one of the super interesting things that's coming in .NET 8 is the ASP.NET native AOT support — albeit minimal at first, it's not full ASP.NET. Because one of the really interesting things you can do with Lambda and .NET is you can actually run ASP.NET on Lambda. There's a NuGet package you can add, which is something like Amazon.Lambda.AspNetCoreServer.Hosting — I really feel they could have given that a shorter name — anyway.
Francois Bouteruche 33:24
You know, we are good at naming. Yes, true.
Brandon Minnick 33:29
Yeah, maybe so easy.
James Eastham 33:33
So if you're using minimal APIs, it's like a single line of code: builder.Services, add AWS Lambda hosting, passing in what you're putting in front of Lambda — API Gateway — and your ASP.NET application will run on Lambda. Now, before AOT, if you were to look at that same web page, you were typically looking at like 1.1, 1.2 seconds — this is .NET 6 and minimal APIs — so you've typically got 1.1, 1.2, 1.3 seconds of cold start. As you said, Brandon, once you're getting over a second, that's becoming perceptible. But we ran some benchmarks with .NET 7 and native AOT minimal APIs — again, I don't recommend anyone does this, because Microsoft don't officially support it, but we were just interested. This is a minimal API running on Lambda, and what that means is 99% of all your requests take under a second. And then I think I've got .NET 8 — yeah, .NET 8 native AOT minimal APIs gets down as low as 100 milliseconds. On the .NET 8 benchmarks I've started to put on the number of invokes, so you'll see here, for this benchmark, 155,679 of the requests were warm starts and 84 of the invokes were cold starts — practically none. And even within those 84, the max cold start time was 830 milliseconds, and that's full ASP.NET on Lambda, compiled natively. So that comes back to what I said at the start about the paradigm shift and the programming model being different — with .NET AOT this is going a bit further each year. In terms of what's supported, it's not full minimal APIs, I can promise you. I've probably got the code example, actually — let's go into the source; you can tell I've done loads of prepping for this. Okay, let's load this up, and we can come back to that in a sec. So yeah, with minimal APIs, with ASP.NET AOT going something like GA in .NET 8, that opens up a whole bunch of super interesting use cases for Lambda. Yeah.
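The "single line of code" James describes looks roughly like this — a minimal sketch against the Amazon.Lambda.AspNetCoreServer.Hosting package, with a hypothetical endpoint; when you run it locally the Lambda hosting call is a no-op and Kestrel serves the app as usual:

```csharp
using Amazon.Lambda.AspNetCoreServer.Hosting;

var builder = WebApplication.CreateBuilder(args);

// Bridge API Gateway (HTTP API) events into the ASP.NET Core pipeline when the app
// runs inside Lambda; does nothing when running locally.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);

var app = builder.Build();

// Hypothetical endpoint for illustration.
app.MapGet("/products/{id}", (string id) => Results.Ok(new { Id = id, Name = "Sample product" }));

app.Run();
```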
Brandon Minnick 35:48
I feel like we've been hammering cold starts, so if you're tired of talking about it, apologies — but it is important. I do want to end with just one more recommendation, because I've been using serverless for years for my mobile apps; again, my mobile apps in the app stores are free, I don't make any money off of them, so I love serverless because it's super, super cheap — I pay a couple of pennies every month. But what I've done, and what you can do in your apps, is — I won't say trick your users, but you can distract them. There's an old story about an office building, you know, a big tall office building, and lots of workers would come in every day, maybe on time but running a little late for a meeting, and the elevator takes forever to come, and there were always people complaining about how slow the elevators were. The building looked into it, and there was nothing they could do, because, well, it's an elevator, it's got to be safe — you can't just make them faster and put people's lives in danger. What they ended up doing was installing mirrors in the lobby. So the elevators didn't get any faster, but now when people show up, they're getting ready for their meeting, they're looking in the mirror, you know, adjusting their tie and fixing their hair, making sure they look good — and in the middle of doing that, the elevator shows up. So even though the elevator took the same amount of time, they stopped getting complaints that the elevators were slow. And what I do in my apps — if and when you download any of my apps from the App Store, you'll see I have a little splash screen that loads up with some fun animations; I spin the logo, I have some text that swings in along the bottom, to let you know the app's launching. And it's that little distraction — sure, maybe there's a bit of a cold start penalty because you're the first person to launch my app in the morning, but what you're greeted with is, oh, that's a fun little animation, and you don't even realize what's happening behind the scenes. So there are also ways like that — let's say old-school ways — to get around it, albeit our cold start times are dropping dramatically with every release of .NET, which I love to see. So, James, you mentioned there's another part of this paradigm shift. Yes, you have to be aware of cold start times, but you mentioned it's all event based. What does that mean?
James Eastham 38:21
Lambda is triggered by events; Lambda is reactive. If you think about the reactive manifesto, or reactive programming — event-driven compute, that is what Lambda is. Lambda simply reacts to things that are happening. And those things might be an API request, it might be a message on a queue, it might be an event published to an event bus, it might be a manual invoke, because you want to manually invoke a function for some reason. So what that means is that Lambda only runs when it needs to do something. When there's nothing to do, it's not running, it's just doing nothing — unless you're using provisioned concurrency. But if you are using provisioned concurrency, you're fully utilizing it, because now you know the trick. So that is what I mean by event-driven compute, which might seem weird when you're building APIs. And there are a few different ways Lambda runs. One of them is what's known as the synchronous invocation model, which is where API Gateway sends an event to Lambda and then waits; Lambda returns a response to API Gateway, and the response goes back to the caller. Versus an event bus, where an event — say, using Amazon EventBridge — will trigger Lambda, and the publisher can go on doing other things: it sends that event off, and Lambda goes and does its own thing. That's what I mean by event-driven. When I talk about a paradigm shift, though, I guess what I mean more is the actual programming model that you would use to build your functions. So for example — I'm just trying to open some code up; my laptop is running so slowly today.
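As a concrete example of that event-driven model, here is a hedged sketch of a queue-triggered function using the Amazon.Lambda.SQSEvents and Amazon.Lambda.Serialization.SystemTextJson packages — the queue and the business logic are hypothetical:

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

// Tell Lambda how to deserialize the incoming JSON event into .NET types.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace OrderProcessor;

public class Function
{
    // Lambda only invokes this when there are messages on the queue; when the queue
    // is empty, nothing is running (unless provisioned concurrency is configured).
    public async Task FunctionHandler(SQSEvent sqsEvent, ILambdaContext context)
    {
        foreach (var record in sqsEvent.Records)
        {
            context.Logger.LogLine($"Processing message {record.MessageId}: {record.Body}");
            await Task.CompletedTask; // hypothetical business logic would go here
        }
    }
}
```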
Brandon Minnick 39:57
I know, it does that. Especially demos like this, where, because we're live streaming on the internet, showing our video, showing our screen, sometimes the computer gets a little mad at you.
James Eastham 40:08
Yeah. And I think, for a period — I got a new camera, and for a period I had it on 4K, and it absolutely just destroyed my laptop. A lot of that was trying to process the 4K-ness of it.
Francois Bouteruche 40:24
So now you have to buy a new laptop, you mean?
James Eastham 40:28
Yeah, maybe that's it? Yeah.
Brandon Minnick 40:33
Twitter, tell James' manager — let him know that James needs the most powerful MacBook money can buy.
James Eastham 40:38
I very nearly actually joined from my own machine — I've got a Mac as my personal computer, and I very nearly streamed from that for this exact reason, because I was like, at some point my laptop is not going to cope with this; and seemingly so. So, I've got an example up here, which is an example of how you can do things better — and I don't want to start with how you can do things better, I want to show you the old way of doing things first. But we can start with that, because my laptop's not cooperating. So, when you build what we call single-purpose Lambda functions — we've talked about running ASP.NET on Lambda, and that's a completely possible, completely okay way of doing things if that fits your case. You know, if you're building an internal HR system that's only accessed three times a week by John in accounts, then maybe run it with ASP.NET and that's perfectly fine, because a one-second, one-and-a-half-second cold start on something that's accessed three times a week, with the trade-off being you have literally zero operational overhead — maybe that's a trade-off worth making; it just does the job. But if you're not doing that, the other way you could build, and typically what we'd recommend as the best practice for building on Lambda, is what we call a single-purpose Lambda function. What I mean by that is just a Lambda function that does one thing, one job, only that one job, and does it really, really well. And the way you define a Lambda function in that model is — if you think back to that cold start graphic from earlier, when Lambda gets to the point where it needs to initialize your function code, you give the Lambda service a string that's known as the handler string, and that's how Lambda knows how to invoke your function. In the .NET world, that's made up of the name of the assembly, the namespace and the class, and then the method that you want to invoke. So what Lambda does when it starts up your code is it will look for an assembly in your package, in your zip file, that's called — in this case it will look for StockTrader.SetStockPrice — no, SetStockPriceFunction, that's the name of my assembly. Once it finds the assembly, it will load the assembly, then it will look for the class to initialize — in this case the class will be StockTrader.SetStockPriceFunction.Function — it'll initialize that class, and then the handler method, which will be the name of an actual public method, is what will actually get invoked, that is what your request will actually be passed into. And obviously that kind of dynamically-loaded-method way of programming is slightly different to what we're used to as .NET developers, because typically we just have a lot of endpoints, maybe with some annotations on there — this is a GET, this is a POST, this is a PUT — the way we build APIs. So that is the shift. And then, because you don't have a framework like ASP.NET, you don't natively have things like dependency injection. Obviously we all love dependency injection as .NET developers, but natively in Lambda you don't get that out of the box, unless you're using ASP.NET again. So we looked at this at AWS, and we built a library that is called the Lambda Annotations framework. And the Lambda Annotations framework is what you are seeing here.
So what the Annotations framework does is use C# source generators to actually generate a lot of that bootstrap-type code at compile time. You see I've got a method here, a public method, annotated with LambdaFunction — so this method is going to be its own Lambda function — and it's also going to be sourced by a REST API. So I add those two annotations, and at compile time — hopefully Rider plays ball — excellent. A really cool feature of Rider that I only learned recently is that you can actually look at the source-generated code within Rider. So what's going to happen at compile time is that all of this code gets generated. If I scroll down here, it's generating all of the stuff required to take that event that comes into Lambda, which is just going to be a JSON string — in this case a JSON representation of the API request — and it's going to do all the work to serialize that, deserialize that, handle errors and return a 500 or a 400. That annotation is going to add all of this bootstrap-type code that you can see here, which means your function can then get super, super simple; your function can just be, you know, what you actually need to do in your code, and you don't need to worry about all of that boilerplate code around it. The other thing that unlocks is dependency injection. In the same Lambda function project, I've also got a Startup.cs file, which will look very familiar to anyone building with .NET — it just looks like any old Startup.cs file, apart from this sneaky little annotation at the top here, which is LambdaStartup. What that annotation does is tell the source generators to generate all of the code required for dependency injection — it will generate the code required to set up a dependency injection container, should I say. What that means is that in my actual function code I can just use dependency injection like I'm used to. So you see I'm injecting a handler into my constructor here. And it brings that really familiar developer experience to Lambda: you're annotating methods, you can use dependency injection, you can build things in a way that, at least I think, is more familiar to .NET developers. This is the default way I build all my Lambda functions now. One thing to call out with dependency injection, and anything you might be doing in Startup, is that the code you write in your Startup file will directly impact your cold start, because it's all this code here that's going to run as part of your cold start. So if you're doing lots of complicated things — you're loading secrets, you're doing all this crazy stuff as part of your startup — that's then going to impact your cold starts.
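Pulling those pieces together, here is a hedged sketch of what a Lambda Annotations function plus startup class can look like — the attributes come from the Amazon.Lambda.Annotations package, while the repository types are hypothetical stand-ins for whatever James injects in his demo:

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Annotations;
using Amazon.Lambda.Annotations.APIGateway;
using Microsoft.Extensions.DependencyInjection;

namespace StockTrader;

// Registrations here run during the cold start, so keep this lean.
[LambdaStartup]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<IPriceRepository, InMemoryPriceRepository>();
    }
}

public class Functions
{
    private readonly IPriceRepository _prices;

    // Constructor injection works because the source generator wires up the DI container.
    public Functions(IPriceRepository prices) => _prices = prices;

    // Each annotated method becomes its own single-purpose Lambda function,
    // fronted here by an API Gateway REST API route.
    [LambdaFunction]
    [RestApi(LambdaHttpMethod.Get, "/price/{symbol}")]
    public Task<decimal> GetPrice(string symbol) => _prices.GetLatestPriceAsync(symbol);
}

// Hypothetical abstractions used above.
public interface IPriceRepository
{
    Task<decimal> GetLatestPriceAsync(string symbol);
}

public class InMemoryPriceRepository : IPriceRepository
{
    public Task<decimal> GetLatestPriceAsync(string symbol) => Task.FromResult(42.0m);
}
```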
Francois Bouteruche 47:25
Yeah. Sorry — that was my fear when you said, hey, we can bring dependency injection into it. Because you have to be careful about the balance between, okay, I want more flexibility and to be able to use dependency injection, but you have to be aware that dependency injection will impact your startup, your cold start time. The more you're doing there... So if you can just new up concrete objects rather than use dependency injection, it will be faster.
James Eastham 48:08
Yeah, absolutely.
Brandon Minnick 48:10
And it's something we could probably do — I think, James, you're the one that wrote this code, so I'll throw the suggestion over the fence to you. If we don't already, in the generated code we could add something like a Trace.WriteLine — I'm not sure how the logging is done under the hood — that just says "starting ConfigureServices". That way, if I go and try to figure out why my Lambda is taking so long to load — I thought it would be faster — then in the logs I would see "starting ConfigureServices", and I could see, oh, all this code is happening: my Lambda code is finishing in two seconds, but for some reason ConfigureServices is taking five seconds. That's probably exaggerated, but even something like that can be helpful.
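Nothing like this is generated for you today, but Brandon's suggestion is easy to approximate by hand — a sketch that times ConfigureServices and writes to stdout, which Lambda forwards to CloudWatch Logs:

```csharp
using System;
using System.Diagnostics;
using Amazon.Lambda.Annotations;
using Microsoft.Extensions.DependencyInjection;

[LambdaStartup]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        var stopwatch = Stopwatch.StartNew();
        Console.WriteLine("Starting ConfigureServices");

        // ...service registrations, loading secrets, etc. go here...

        // Lambda ships stdout to CloudWatch Logs, so this line shows up alongside
        // the platform's own START/REPORT entries for the cold start.
        Console.WriteLine($"ConfigureServices completed in {stopwatch.ElapsedMilliseconds} ms");
    }
}
```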
James Eastham 49:03
Yeah, that's a really interesting idea — I like it. And that's a great segue, Brandon, because it's the second thing you've thrown over the fence to the team recently. So, sticking with annotations for a second, and coming back to that whole thing about native AOT — but stepping even further back: when you're running .NET on Lambda, Lambda has a set of what it calls managed runtimes. These are runtimes that are managed and deployed by the Lambda service. So you've got .NET 6 currently — .NET Core 3.1 has just been deprecated — you've got Java, you've got Python, you've got Go, you've got Node. But then it's also got this capability called a custom runtime, and that's where you can bring your own runtime to Lambda. Now, the Lambda service commits to only ever supporting LTS releases of all languages — this isn't just .NET specific, it's across Python, Node, etc. Lambda will only support LTS releases as managed runtimes. Which means if you want to run .NET 7 on Lambda, you need to do that using a custom runtime. And if you're using a custom runtime, you need to do all of this crazy stuff here: you need to actually have a Program.cs file — you don't need to call it Program.cs, but you probably should — you need to have a static Main method, and then you actually need to manually bootstrap the runtime. The same applies if you're using native AOT: you need this code here to manually bootstrap the runtime. And you need to know about all this additional stuff and how to create it — and frankly, it's not particularly pretty, let's all be honest: like, Func of APIGatewayProxyRequest... I love that context. So that's something you've got to remember to do and know how to do. So — thank you, Brandon, for this — what Brandon suggested in a call we had a few weeks ago is: could we not source generate some of that? Not quite right now, but there's an open PR that you can keep track of on the AWS Lambda .NET runtime to do all of this for you. So this is another example I've got here of another Lambda function, and you'll see this Lambda function, much the same way as before, is just annotated with this LambdaFunction attribute, and it's annotated with this HttpApi attribute — I can't speak. Also, this Lambda function is .NET 7. So this is a .NET 7 Lambda function; this runs on Lambda, this works perfectly well on Lambda. What you'll notice — I did try to zoom Rider in, I couldn't work out how to do it in the new version, but you'll see if you squint — is that there's actually no Program.cs file in here. What there is, is an assembly attribute called LambdaGenerateMain, so you need to add this somewhere in your assembly. What that tells the source generators, the Annotations framework, to do is to actually generate this piece of code here. Now, because I've got two different Lambda functions within the same project, when you actually deploy this you need to add an environment variable to say which of the two handlers to use — they'd be two independent Lambda functions, so you need to tell it which one to use via the environment variable. But we now actually source generate this Program.cs file for you. And this isn't merged right now.
So if you go and try this right now, this will not work, to be absolutely clear. But there is an open PR on the Lambda .NET runtime, on the AWS GitHub — hey, look at that. So this will apply if you've got .NET 7, if you want to start trying .NET 8 on Lambda, if you want to use native AOT now or in the future. Even if you're using native AOT in .NET 8, even though it's a managed runtime, you'll still — I don't know if we can change that, but for .NET 7 for sure — you need to do all this bootstrapping stuff. So now, using Lambda Annotations, all of this will be done for you. So thanks, Brandon.
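For comparison, the hand-written bootstrap James is describing looks roughly like this — a sketch using the Amazon.Lambda.RuntimeSupport and Amazon.Lambda.Serialization.SystemTextJson packages with a hypothetical handler; for native AOT you would typically swap in a source-generated serializer so nothing relies on reflection:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.SystemTextJson;

public class Program
{
    // The hand-written Main a custom runtime (or native AOT) deployment needs today,
    // and that the LambdaGenerateMain attribute aims to generate for you.
    private static async Task Main()
    {
        Func<APIGatewayProxyRequest, ILambdaContext, Task<APIGatewayProxyResponse>> handler = FunctionHandler;

        await LambdaBootstrapBuilder
            .Create(handler, new DefaultLambdaJsonSerializer())
            .Build()
            .RunAsync();
    }

    private static Task<APIGatewayProxyResponse> FunctionHandler(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        return Task.FromResult(new APIGatewayProxyResponse
        {
            StatusCode = 200,
            Body = "Hello from a custom runtime"
        });
    }
}
```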
Brandon Minnick 53:20
I mean, yeah, like James mentioned, we chatted about this, and I'm super impressed and thankful. Because, you know, something for me is I like to stay as close to the bleeding edge as possible — maybe not on the bleeding edge, but I like my latest bits. And with every new version of .NET come performance improvements. There are always blog posts from the .NET team about, hey, if you just change from .NET 6 to 7, your app's going to run 10% faster. And if that's all I've got to do — easy performance improvement — I go to my manager and say, hey, give me a raise, because I just made my app 10% faster. So yeah, I always like to stay up on the latest and greatest. And looking at this code you were just sharing, James — I have an app with a bunch of Lambda functions in it, and they're all running on .NET 7, and I can't wait to see this hit the main branch and get released, because I'm going to remove so much code from my apps. My favorite PRs have always been the ones that delete more code than they add. So with this, I feel like I can kind of just do what I want with Lambda. There are fewer restrictions I have to think about — like, yes, you have to keep managed runtimes in mind, but not really, if we can write super easy code like this and always have whatever version of .NET running. It's going to make all of our lives so much easier, while deleting code.
James Eastham 54:57
A couple of gotchas — one gotcha and one thing to think about. So, when you're using a custom runtime versus a managed runtime, you pay for the cold start. With the managed runtime, your cold starts are free; with custom runtimes, you pay for them, as of right now. Now, if you're using native AOT, that could be pretty minimal, but it's just something to keep in mind: you will then start paying for it — providing you get past the million free invokes a month, of course.
Brandon Minnick 55:24
And as you say, that just counts towards the duration — you would just see that on your bill as Lambda time, some extra charge.
James Eastham 55:38
Yeah. The other interesting direction I'm thinking of taking this same bit of functionality is around APIs. At the minute, it obviously generates all that code and says you need to set an environment variable to tell it which handler to use. Now, if you imagine — and I realize we're very short on time, so I'll try to be quick — you've got an API with four endpoints, and you want all of those endpoints to be natively compiled, and they've all got the same memory requirements, security requirements, dependencies — all of that's the same — you probably want to run that in a single Lambda function. But you will then need to write the code to map the right endpoint and method to the right handler under the hood, if that makes sense. The event that comes into Lambda will have, like, GET and then slash-whatever the path is that's been called. So: whether we could use the same functionality to actually auto-generate all of that mapping for you. Then you could just annotate all these functions and deploy one Lambda function that is your entire API. You can get to the point where it's rebuilding ASP.NET, I know, but you see the idea — you've got this one natively compiled binary that is your entire API, but you don't have all this ASP.NET gubbins slowing things down a little bit, and under the hood it's doing all that mapping on your behalf. So that's another interesting direction I'm thinking of taking that same functionality. I need to talk to Norm about that first. Yeah.
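Hand-rolled, the mapping James describes (and would like to source generate) might look something like this — a sketch over API Gateway HTTP API events, with hypothetical product handlers standing in for real business logic:

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

namespace ProductsApi;

// One function fronting a whole API: route on HTTP method + path ourselves.
public class ApiFunction
{
    public async Task<APIGatewayHttpApiV2ProxyResponse> FunctionHandler(
        APIGatewayHttpApiV2ProxyRequest request, ILambdaContext context)
    {
        return (request.RequestContext.Http.Method.ToUpperInvariant(), request.RawPath) switch
        {
            ("GET", "/products") => await ListProducts(),
            ("POST", "/products") => await CreateProduct(request.Body),
            ("GET", var path) when path.StartsWith("/products/") => await GetProduct(path["/products/".Length..]),
            _ => new APIGatewayHttpApiV2ProxyResponse { StatusCode = 404 }
        };
    }

    // Hypothetical handlers; in a real API these would call into your business logic.
    private Task<APIGatewayHttpApiV2ProxyResponse> ListProducts() =>
        Task.FromResult(new APIGatewayHttpApiV2ProxyResponse { StatusCode = 200, Body = "[]" });

    private Task<APIGatewayHttpApiV2ProxyResponse> GetProduct(string id) =>
        Task.FromResult(new APIGatewayHttpApiV2ProxyResponse { StatusCode = 200, Body = $"{{\"id\":\"{id}\"}}" });

    private Task<APIGatewayHttpApiV2ProxyResponse> CreateProduct(string body) =>
        Task.FromResult(new APIGatewayHttpApiV2ProxyResponse { StatusCode = 201, Body = body });
}
```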
Brandon Minnick 57:02
Yeah, I love it. All of it — especially using the annotations paradigm, because that's kind of become how we use .NET. Anything from ASP.NET Core, you're going to see similar annotations. And even in my .NET MAUI world, we have this amazing library called the MVVM Community Toolkit that introduced all these attributes that will generate source code for you. So I really see that — annotations and source generation — as the future, certainly for building libraries, but also for us as .NET developers consuming them. And so we're heading in this direction with .NET Lambdas, where it's going to be just super easy to learn and use, because it's using all the same paradigms we're used to, and all the hard parts, all that code, is auto-generated for us. So not only do we get less code, but it's a similar workflow to what we've familiarized ourselves with — like the service collection builder pattern, everybody's using that — with annotations kind of leading the way. But my goodness, James, we have two minutes left. We have to have you back on the show — certainly once these PRs are merged, and certainly once .NET 8 is stable and we've got the benchmarks out, we've got to have you back.
James Eastham 58:32
Serverless development without writing code any longer — how you can build applications without a single line of code. That's where I was going to go with this. But anyway, alright.
Brandon Minnick 58:41
Well, stay tuned for next time. Yeah. James, for folks who want to keep up the conversation and see all the latest bits on serverless, where can they find you online?
James Eastham 58:51
Twitter is probably the best place — @plantpowerjames. It should come up this kind of way... there it is. Yeah, that's probably the best place to reach out, on Twitter. My DMs are always open, so to speak — I always love to chat about any of this stuff, so please feel free to reach out. And also on YouTube, which Brandon shared at the start — or François shared — it's going to be there. There we go. So yeah, thank you for having me on, it's been a pleasure. I could talk about this for hours.
Brandon Minnick 59:15
Yeah, and thanks so much for joining us again, James. Like I said, we'll have you back on in a couple of weeks or a couple of months, whenever that timeline lands. And thank you so much for joining us. Don't forget to subscribe to the AWS Twitch channel so you never miss an episode. François and I will be back twice a month, the first and third Mondays of every month. You can follow us here on The .NET on AWS Show. We'll see you next time.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
