The .NET on AWS Show, Featuring Norm Johanson!


In this episode we are joined by Principal .NET Engineer, Norm Johanson. Join us as we discuss .NET + Serverless using AWS Lambda!

Brandon Minnick
Amazon Employee
Published Mar 18, 2024

Listen to the Audio Podcast

The .NET on AWS Podcast

Watch the Live Stream


Transcript

Brandon Minnick 1:08
Hello, everybody, and good morning. Welcome to another .NET on AWS Show. This edition brought to you by the time change, where we just rolled the clocks forward and we're all scrambling and feeling tired. I know I am. Francois, how are you doing? How was your week?
Francois Bouteruche 1:29
I'm fine. My week went well. And just to share with our audience, ten minutes ago I was like, "Brandon, we join the show in... what? It's in eight minutes? What?" I missed the time shift.
Brandon Minnick 1:51
No worries. Before the show, I was already complaining about how tired I'm feeling and how much I hate when we have to change the clocks. But at least this one's my favorite time of the year, because we move the clocks forward. So here in the US, we now get an extra hour of daylight at the end of the day, which is certainly my much preferred method. But enough about the clocks. Francois, you mentioned before we got started that you had some announcements to share with folks this week. So what's the good news?
Francois Bouteruche 2:25
Yes. So as people may know, I've been working a lot on generative AI from a C# developer's perspective in the past months and weeks. And I'm super happy, because in the last two weeks we've made some enhancements to Bedrock. We are adding a few new models to Bedrock, namely the Mistral models, Mistral 7B and Mixtral 8x7B. For those who don't know, Amazon Bedrock is a fully managed, serverless model provider service. Basically, you have a single API and you can use several large language models to bring generative AI into your application. We provide several models: our own with Amazon Titan, and also models from Anthropic, from Cohere, and others. And two weeks ago, we announced the addition of Mistral's large language models. I'm super happy because Mistral is a French startup, so as a French person, I'm super happy to see Mistral coming into Amazon Bedrock. And I've shared the URL in the chat. I've written code examples, so you can find examples of how to use Mistral in your C# application at that link, as well as our other large language models, not only Mistral. We've also added Anthropic's Claude 3. Anthropic has just announced new models, and I'm still working on the code sample for Claude 3, because Claude 3 brings multimodal capabilities to Bedrock. I've started to play with it: you send it a picture, and it describes the picture very precisely. So a really cool new addition to Amazon Bedrock. You can navigate to that URL and look at the code examples for using those capabilities in your C# application. So I'm super proud of these announcements from the past two weeks.
Brandon Minnick 4:54
That's great news. Yeah, I was literally just going to ask you about Claude 3, because I haven't had a chance to try it. But I saw there's a couple of blog posts floating around about how it compares with GPT-4, and it looks like it's incredible.
Francois Bouteruche 5:13
Yeah, yeah. I've started to play with it, and I'm pretty impressed. Anthropic shared some numbers on popular benchmarks, and they are outperforming GPT-4, the model, not the ChatGPT application, GPT-4 the model, on many popular benchmarks. So that's pretty amazing. And when you start to play with it, it feels amazing. I still need to dive deeper, but it's pretty amazing.
Brandon Minnick 5:48
Absolutely incredible. Well, thanks so much, Francois. We have such an amazing guest this week, so without any further ado: our guest this week is the Principal Engineer focusing on the .NET developer experience at AWS. Or, as I like to say, he is the AWS .NET guy, Norm Johanson. Welcome to the show.
Norm Johanson 6:16
You way oversell me. Sure.
Brandon Minnick 6:23
I was thinking about saying he's the David Fowler of .NET on AWS. But maybe...
Norm Johanson 6:32
We have many great people here at AWS that helped build this great .NET stuff. I just happen to be the guy that sticks around the longest.
Brandon Minnick 6:43
Yeah, I was gonna say. A little behind the scenes for folks who might not know: we ask pre-show questions, and one of the things we ask is, hey, how did you get started in .NET? And Norm, in your response you mentioned you've been doing this since 2010. Is that right?
Norm Johanson 7:02
Yeah, I joined AWS in 2010. That's right. I think we had about a dozen services in AWS at that time. It was a very different AWS back then. Nowadays, one of the main responsibilities I have is making sure the SDK, the .NET SDK, is healthy and shipping and up to date. It used to be we would just send out a bunch of emails: we're releasing today, what are we shipping? And we'd release maybe once a month or something like that. But now we've matured; we ship every single weekday, basically. There's a whole system and lots of orchestration to make sure everything gets out, which is great compared to those early startup days. It's hard to think of AWS as a startup, but 2010 was still early days for us.
Brandon Minnick 8:00
Absolutely incredible. I've only been with AWS for about a year and a half now, so I'm very, very impressed. So, Norm, I know we have a couple, or a lot, of amazing demos to show off today. But before we dive into it, if there's anybody out there who hasn't heard of you: who are you? What do you do?
Norm Johanson 8:23
So yeah, I started on the .NET SDK team that we have here, and then I've kind of branched out into helping anyone I can to make sure they have a good .NET on AWS story. So I partner with teams like Beanstalk and Lambda and other teams, build lots of our high-level libraries, and try to help push .NET to be a great platform for AWS. I've been in this industry for more years than I care to remember, but I was always that guy on the team that would build the tools, right? You always have a guy like that: we're building whatever website, we're doing something, but I was always the one that would build the tools for the team, just because I wanted to make things more efficient. And so it just sort of worked out perfectly when my last job was shutting down and this came up. It's like, oh, I get to be paid to just make tools for people, and this is what I love to do. I love to make tools, work with developers, and make cool stuff.
Brandon Minnick 9:27
And I'm certainly glad you love it, as we get to reap all the benefits of your hard work. Late nights, sweat, and tears.
Norm Johanson 9:39
Those are the best coding nights, when you have those late nights. No one's around, you turn the music on, and you just start coding; who knows what's gonna come out of it.
Brandon Minnick 9:48
Right, you get in the zone. I had one of those last week, where the next thing I knew it was almost 10 o'clock at night. My wife was like, "I thought you were done with work for the day." I was like, yeah, sorry, I just really got to do this.
Norm Johanson 10:00
Exact same experience. My wife popped in, "It's one in the morning, you should go to bed." But I'm having fun.
Brandon Minnick 10:10
Absolutely incredible. So, we had a huge announcement recently about adding .NET 8 support for AWS Lambda. And it sounds like we have a couple of awesome demos to share with everybody and show folks how they can take advantage and get up and running. So let's jump into it. Where would you like to get started?
Norm Johanson 10:36
It's kind of picking up from the episode you guys did with James a couple of weeks ago. I saw that one and thought, what's the part two of that? Let's go past that, let's do the sequel. So I came up with some demos based on that. But yeah, we launched .NET 8 support on the 22nd of February, so we've had it for a couple of weeks now. Very exciting. With .NET 8 you're getting the latest and greatest, and your experience moving from .NET 6 to 8 is pretty smooth; you basically get the latest .NET updates. The thing you should be aware of is that this is the first time we have a Lambda managed runtime that's based off Amazon Linux 2023. That is the biggest change outside of .NET 8 itself; the previous version was on Amazon Linux 2. We are highly encouraging everyone, even beyond Lambda, if you're doing stuff on Linux, to move to Amazon Linux 2023. As I understand from talking to the folks at Microsoft, the changes they plan with .NET 9 are going to make it so that .NET 9 is not going to work on Amazon Linux 2, because they're going to raise the minimum glibc version beyond what Amazon Linux 2 provides. So we're really pushing it: Amazon Linux 2023 is your future path for using Linux and .NET together. So containers, EC2, all of that, you should be using it. AL2023 has all the latest native dependencies on there: it's got OpenSSL 3, the latest glibc versions, lots of goodness in there.

One thing to take note of: we're trying to make sure people realize that when Lambda launches a new runtime, of course the first thing everyone asks is, how fast is it going to be? Let's go measure everything, right? And I guess that's what we do too. But measuring Lambda is particularly tricky, especially with brand-new runtimes. For Lambda to get the performance it does, it's built on huge layers of caches; there are lots of layers of caching going on. And now that we have .NET 8 out, it is a brand-new runtime, so you can imagine it's pretty cold in all those caches. As people start deploying to it, those caches fill in more and more, so your performance will continually improve. I'm talking about performance at startup, the cold start, which is always the one everyone talks about. So be aware that all those caches are still being filled in. Lambda does caching in really interesting ways. When I first started with Lambda, caching was like, we're just going to cache the whole .NET runtime, right, and fill it in. But now there's a layer that works at a much lower level: they're caching at the individual block level, essentially as the OS is trying to read more blocks out of the file system, that's when it's caching more of that stuff. So as more of those pieces get loaded up into all these caches, things will start getting hotter and hotter, and you'll get faster and faster performance.

Of course, if you're really concerned about cold starts, if that's your biggest thing, the biggest thing you should be looking into is doing native AOT. That is the thing that will have the most dramatic effect on your cold starts. One thing I saw when watching James's show: he was talking about how the .NET team added AOT support for ASP.NET Core, and that was a big part of .NET 8. It's not all of ASP.NET Core, but a large portion of it works with AOT. So I thought what we could do, if you guys are interested, is take your hello-world ASP.NET Core application and see what it looks like to get it working in Lambda. Sounds good?
Brandon Minnick 15:08
That sounds incredible. Yeah.
Norm Johanson 15:09
You'll have to watch while I drink something; I spoke a lot of words there, so I have to refill. It's just water this morning here. So, yes.
Brandon Minnick 15:23
Yeah, appreciate you waking up early for this.
Norm Johanson 15:33
Alright. Alright, here we go. We've got Visual Studio here, everyone's favorite tool for .NET developing, right? So here is, basically, the new project template you get in Visual Studio today if you create an ASP.NET Core Web API. This is what we have in here; I haven't done anything special. Obviously I should go add some interesting code in here to make it really exciting, but I could push F5 all day long on this. The things that make it AOT, though: one, it's only using things that are AOT-compliant, and in the project file it actually says that when you publish, it's going to publish AOT. So it's pretty simple in that world, right? So, to take this regular ASP.NET Core application and make it run in Lambda, essentially you just have to add a NuGet package. It's this one, and I did see James made fun of our naming, which, naming is hard. It is a very long name. So we add this package, Amazon.Lambda.AspNetCoreServer.Hosting. The Hosting suffix came about because, hey, we needed a way to distinguish when you're using minimal APIs, top-level statements; that's essentially where that name came from. Because there is just Amazon.Lambda.AspNetCoreServer; that's what we originally launched. But then .NET 6 arrived with minimal APIs and a different sort of coding pattern, so that's what we did. Naming is hard.
Brandon Minnick 17:22
So you could still use the one with the .Hosting suffix, even if you're not using minimal APIs. Is that right?
Norm Johanson 17:31
Yeah. I mean, really, the Hosting package just depends on the AspNetCoreServer one; I think that just shows up here, right, in the packages. So it depends on that one, and that's where all the real work is at. If you use that directly, essentially you have to create a subclass of a function class in that package, which is basically the entry point to your Lambda function. That didn't work with minimal APIs, so we created this one. So once you add that, the only line of code you really have to add is where you say builder.Services.AddAWSLambdaHosting, and then we just say I want to do this as a REST API. And that's really the only coding change you have to do. I can still push F5 and run this locally, and this is probably going to go to the wrong screen, but you see, this is why I think a lot of people like this library: I can still do the regular ASP.NET Core development, everything I like, but just one line changes the behavior so it runs as a Lambda function.
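A minimal sketch of the one-line change Norm describes, in a .NET 8 minimal API's Program.cs (the AddAWSLambdaHosting call and package are as named in the episode; the endpoint is illustrative):

```csharp
// Program.cs of a standard ASP.NET Core minimal API project,
// with the Amazon.Lambda.AspNetCoreServer.Hosting NuGet package added.
var builder = WebApplication.CreateBuilder(args);

// The one added line: when running inside Lambda, this swaps Kestrel
// for a translation layer over Lambda's event pipeline. Running
// locally with F5, it does nothing and the app behaves as usual.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi);

var app = builder.Build();
app.MapGet("/", () => "Hello from Lambda");
app.Run();
```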
Brandon Minnick 18:48
I'm guessing a little bit here, but what is this line doing? Is it just replacing Kestrel with something, or changing how we run .NET on AWS Lambda?
Norm Johanson 19:02
Yeah, basically. So, in .NET, this was my fun diving deep, and this is the glory of open source, right? ASP.NET Core is open source, so you can go look at exactly what they're doing. At a high level, we have Kestrel, which does all of the crazy networking fun, but in the end it sits behind an interface, I think it's called IServer. IServer is basically what translates what comes from the raw system into what a standard ASP.NET Core request and response should look like. So what this library does, at a super high level, is take the event it gets from API Gateway: we implemented that IServer, and we convert the API Gateway request and response back and forth between the two. We basically just swap in our translation layer, and there's no Kestrel. So, pretty simple. Now, you might notice we got a fun little green squiggly line there, right? And this is the thing to always be worried about when you're doing ASP.NET Core with AOT: you want to make sure you're doing things that are trim safe. Because when you're doing AOT, you want a small package, as small as possible, so the compiler is going to trim out anything it doesn't think you're using. By default, when you add this line in here, it's going to use reflection-based serialization, because we're getting incoming JSON events and converting those into your .NET types using the reflection-based JSON serializer. But using reflection can be problematic if those types are trimmed away. So this is where, when we came to .NET 8 in general, we went through a lot of our libraries and marked what's safe and what's not safe to use in AOT. So this line that makes everything super simple is marked as not safe for AOT. So what do we do? If you've worked with Lambda functions, right, you register a Lambda serializer; that's the serializer that takes care of converting the Lambda events to the .NET types for your functions. So we have an overload here where we can pass in the source-generator-based serializer. So I can say new, and I'm just going to type the whole thing out, because I don't feel like adding a using statement.
Brandon Minnick 21:37
Amazon dot Lambda dot Serialization dot SystemTextJson dot... SourceGeneratorLambdaJsonSerializer.
Norm Johanson 21:51
There you go. There's a quiz at the end. And that is a generic type where you've got to pass in this JsonSerializerContext, right? We already have one here. That JsonSerializerContext is what tells the source generator what to generate all of the code for, instead of doing reflection. When you do a new AOT project, the Microsoft templates are in the same boat: your JSON serialization in the application uses source-generation-based serialization, so they already put this serializer context in there. This is where you register all of the types that need to have code generated for them. So because they've got this to-do object, they've got that on there. We need to add in our types there as well.
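A sketch of the registration Norm is typing, combined with the template's serializer context (the serializer type comes from the Amazon.Lambda.Serialization.SystemTextJson package; the Todo record mirrors the template's to-do type):

```csharp
using System.Text.Json.Serialization;
using Amazon.Lambda.Serialization.SystemTextJson;

var builder = WebApplication.CreateSlimBuilder(args);

// Pass the source-generated serializer so incoming Lambda events are
// deserialized without reflection, which is safe under AOT trimming.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi,
    new SourceGeneratorLambdaJsonSerializer<AppJsonSerializerContext>());

var app = builder.Build();
app.MapGet("/", () => "Hello AOT Lambda");
app.Run();

// The context lists every type the source generator must emit
// (de)serialization code for; anything missing fails at runtime.
[JsonSerializable(typeof(Todo[]))]
public partial class AppJsonSerializerContext : JsonSerializerContext { }

public record Todo(int Id, string? Title, bool IsComplete);
```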
Brandon Minnick 22:52
Even if, let's say, we're focusing on Lambda right now, but we're going to publish this to Elastic Beanstalk with native AOT, we would still need to add support for this JSON context and all these JsonSerializable types. So this isn't necessarily something that's specific to serverless; it's more of just a .NET native AOT thing, correct?
Norm Johanson 23:15
Yeah, if you're doing AOT, you need to use the source-generation-based JSON serialization, which means creating this context and then listing all of the types that you want in there. And then, as part of compilation, .NET will have the source generator kick in and generate all of the code that's needed for that. It will also make sure that none of that gets trimmed away, because it knows, hey, you're clearly being referenced, I won't trim that out. So that's really nice.
Brandon Minnick 23:51
We get a little bump too, right? Even if we weren't using native AOT, if we went down this route with source generators, I would imagine it would give us a little performance bump.
Norm Johanson 24:03
Yeah, that's true. We added this source-generator Lambda serializer back in .NET 6, and a lot of people would use it because it would also reduce their cold starts. I mean, even if you're not using AOT: obviously, just using the reflection-based one means I don't have to think about it, it just works. But if you can't do AOT and you still want to find whatever tricks you can to improve performance, you should use the source-generated one, because doing that initial reflection over all those types takes time. So there you go. Okay, should we deploy it? That's all the code we needed to do. So I could go in and right-click Deploy; I'm actually going to do it from the command line, just so I don't have to lock up my Visual Studio. And there, that's the other project I was playing around with, didn't need to do that. Alright, so we've got the .NET CLI, and we run the dotnet lambda deploy-serverless command. The dotnet lambda tool has deploy-function, where you deploy straight to the service, or deploy-serverless, where you deploy essentially through CloudFormation. And now, because we're deploying AOT and I'm running on Windows, you can see we're actually using Docker to do the compilation. That's one of the things our tooling does for you, because the way .NET works is, if you're building something with native AOT, you need to build it on the same platform you're going to run it on. Probably not too many of us are actually using Amazon Linux 2023 as our dev environment. But we have these SAM build images; you can see the SAM .NET 8 build image, which is based off of AL2023, and we use that to do the compilation: essentially mounting the folder where your solution file is at and then running your build in there, so we get something that runs on there.
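Roughly, the commands in play (the Amazon.Lambda.Tools CLI and these command names are as Norm describes; the function and stack names are placeholders):

```
dotnet tool install -g Amazon.Lambda.Tools    # one-time install of the dotnet lambda CLI
dotnet lambda deploy-function MyFunction      # deploy straight to the Lambda service
dotnet lambda deploy-serverless my-app-stack  # deploy through a CloudFormation stack
```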
Brandon Minnick 26:21
I'll say, I was trying to play around with some of these bits before we officially launched .NET 8 support. And that was one of the problems I kept running into: my Lambda function is running on, say, x86, but my Mac is an ARM processor, and I was trying to figure out, okay, how can I build for x86 if I'm on an ARM computer? But we can just use Docker images. Genius.
Norm Johanson 26:58
Yep, yeah, that was definitely during those pre-launch days before everything was out; we couldn't get the build image out, we had all these other dependencies. So a lot of people were trying to figure it out without the build image. I get it, and I wanted to get this out sooner, but we had so many chicken-and-egg things going on that it just kind of came out at the end. But now everything is out. Yeah, you can see it's all deployed, and it looks like I left my stack up there from when I was doing this before. I'll go and click that link, which went to the wrong window. But you can see, very exciting there. So just to give you an idea of...
Brandon Minnick 27:39
The same response we saw when it was running locally, exactly.
Norm Johanson 27:44
Yep. But if we look at the logs here, because again, let's do everything live, and what could go wrong, right? Let's look: this is AOT, and we can see the duration here is five milliseconds, and our init duration was 233 milliseconds. So that was really fast, just to give you an idea of cold starts. Let's do a comparison and switch it to not be AOT. So everyone, remember those numbers; I'm going to quiz you. I'm going to change that flag to turn AOT off. I have found that when you toggle that around, you should really just blow away the build folders; otherwise odd stuff gets left around, and you end up actually deploying both the native executable and all of your regular stuff, so it's double the size, double the fun. So I'm going to delete that out of there. And again, the only difference is just that flag; we can go run that same deployment. Now you can see we're not doing AOT, we're just running the regular dotnet publish, and you have all of the DLLs. Before, when we did it, the only thing in there was just the executable and config files, I think; we deployed, here's our executable and all of our extra config files. And now we're getting the whole thing. So, almost done... there we go. So again, everything works the same. Let's refresh. Francois, who remembers the numbers? What are the numbers?
Francois Bouteruche 29:37
So five milliseconds, and 233 for the init duration.
Norm Johanson 29:43
All right, so let's see, am I doing a live demo fail? So duration went up to 600 milliseconds, and the init was 747. So that's our difference, right? If you really want the fastest... and again, this is one data point here; obviously a real performance benchmark should do multiple attempts and things like that. But you can see that you get an order-of-magnitude difference here, in this particular hello-world case, when you're switching to AOT. So there are AOT complexities to work through, like how you resolve all those trim warnings, but if you can get your application code there, you can get some big cold start benefits.
Brandon Minnick 30:34
Yeah, this is something that's really incredible. And for folks who might have missed the announcement on native AOT: basically, what that is doing is compiling your app all the way down to the processor level. Whereas before, when you compile a .NET app, it compiles down to what's called MSIL, the Microsoft Intermediate Language. Then, when you launch your app, .NET quickly looks at what CPU it's running on and does that next compilation step in real time; it's just-in-time compiling it after we push F5. With native AOT, it compiles all the way down ahead of time. So you lose some cool things, like using reflection to write code at runtime, but at the same time we get these amazing performance benefits: our app can start up so much faster, because there's less work to do when we launch the app. So it's really incredible to take advantage of it for serverless like this.
Norm Johanson 31:50
The work that they did... you know, we did a lot of work to update the AWS .NET SDK, which is a huge system at this point, right? We added a new .NET 8 target for the SDK, and we went through and marked everything as trimmable. There are a few areas of the SDK that are not trim safe, and we've marked those as not safe, and everything else is. But I was really surprised when I did that work by how much reflection you can still do. It's not like reflection is gone. The trick is, you have to know what the types are at compile time. And that's where it's tricky when you're doing a JSON serializer, right? If it's just, hey, take an object and serialize it, you don't know the type at compile time, so it doesn't work. But, you know, the .NET SDK is 14 years old, and we've got every trick you can think of under the sun to keep it going; it's maintained its compatibility for those 14 years. So we do a lot of things with reflection and stuff like that in there. But largely it still works; we just had to do a lot of annotations and stuff to make it work, and we didn't have to take that much out. There are a few things. The big one that I think most people will notice: we have that DynamoDB high-level library that sits on top of the service client, and that is kind of like the JSON serialization problem, because what it does is, here's your object, go do all of the reflection to turn it into a DynamoDB item. We have not yet made that AOT safe. Not that we never will, I'd like to; it's just there's only so much you can get done before launch, right? So we marked that as "requires unreferenced code," because it does a lot of that open-ended reflection.
Brandon Minnick 33:40
Very cool. Yeah. And again, we have to do this because when .NET 8 is doing this native AOT stuff for us, it's looking at all our code and trying to figure out what code is being used and what code it should keep in the executable. And especially when it comes to serialization and deserialization, a lot of the time that can just be a generic type of T. So if we want to keep our type, like in this example in the file-new template it's a Todo, then we have to annotate it and say, no, we will be using this. It might not be explicitly called out in the code, because there's some behind-the-scenes magic, but keep this type, don't trim it away.
Norm Johanson 34:26
One of the things that we... so what's on the screen here is the project file that the Microsoft .NET templates generate. If you look at our AOT Lambda project templates, which is what this example is based on, we always set this TrimMode to partial; that's what we set in the project. What partial does is basically say: we're going to trim every assembly that's marked as trimmable; if it's not marked as trimmable, we're not going to trim anything out of it, we're keeping everything. I keep saying "we" like I'm doing this, but that's what the .NET compiler does. So you can still use things that are potentially not trim safe in AOT; you'll get a much larger executable, because it's going to include the entire assembly. And if you have trim warnings, it's on you to really test this to make sure it all works. We have seen people that have been able to use libraries that aren't marked trim safe, essentially making sure everything is included in there. But the pressure is on you to do all that testing to make sure it's always going to work. So ideally, you only use things that are trim safe and have zero trim warnings. That partial default is sort of us recognizing, hey, the AOT community is still in its early stages, and a lot of libraries are still working on getting updated. But again, it's on you, the user: if you're seeing those warnings, you've got to really make sure this is still working, not just tell yourself, hey, it works locally, everything just works.
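A sketch of the relevant project-file settings (these are standard MSBuild properties; the values mirror what Norm describes in the AOT Lambda templates):

```xml
<PropertyGroup>
  <TargetFramework>net8.0</TargetFramework>
  <!-- Compile ahead of time to a native executable. -->
  <PublishAot>true</PublishAot>
  <!-- Trim only assemblies explicitly marked trimmable; ship the rest
       whole, trading binary size for safety with not-yet-updated libraries. -->
  <TrimMode>partial</TrimMode>
</PropertyGroup>
```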
Brandon Minnick 36:29
Yeah, I'll say, I didn't know about TrimMode partial, and I feel a lot better as a library maintainer. That's one of my to-do list items, to go through my libraries and add trimmable support. My concern was leaving folks behind: if you wanted to take advantage of native AOT, you wouldn't be able to use my libraries. So it's good to know that we can steer folks towards TrimMode partial.
Norm Johanson 36:58
Libraries still need to get updated; we still have to do the work. But yeah, there are some problems. We've been doing .NET for 20-plus years, but we have to change patterns here to make this work, and there are a lot of libraries out there built on those fun tricks of doing things at runtime. Sorry, go ahead. No, no, please. Yeah, so this sample here, if we want to talk about this one: this is a sample where we're using our Lambda Annotations library. Our Lambda Annotations library is where you can essentially put attributes on, like, hey, this is a Lambda function, and this is its memory, and all these fun things, right? And so you're able to use kind of a more idiomatic coding pattern where we can have dependency injection: we have our startup class here where I'm adding all the extra things I want, and then that gets injected into our class. I'm trying... I don't know how you guys are doing with the whole primary constructors thing? It's hard for me to remember to use them.
Brandon Minnick 38:29
Yeah, same, but I usually get the little green squiggle, and then it'll say, would you like to refactor this as a primary constructor? That's so much nicer.
Norm Johanson 38:40
Yeah, I always like it when I do it, but I forget about it when writing up a sample. It's like, oh, I've never used Annotations with a primary constructor; I assumed it would work, and it does, because it's just a constructor. But yeah, so we can inject those in there. This automatically syncs up your CloudFormation template as you're writing. So you can see here, I've got a function in here; let's create a function, right, and I can just go in... so this is the LambdaFunction attribute, that's all you do, and then that gets inserted into the template right now. So it helps make a lot of those things easier. What I also want to talk about here, though: here we're using the DynamoDB service client to go look up zip codes, and you can see we're just using the regular service client, just showing an example. So I think we're on version 3.7.300. Version 3.7.300 of the SDK was when we added the .NET 8 target, as well as when we went through and marked everything as trimmable. Remembering that I said TrimMode is partial: if you're using a version before 300, it would just include the entire SDK, which is a pretty large DLL. With 300, the assemblies are marked trimmable, so the .NET compiler trims them. To give you an idea of what that means, I use this tool; I think it's called sizoscope.
Brandon Minnick 40:42
Oh, that's a cool name.
Norm Johanson 40:43
Yeah, it's a really useful tool if you're getting into figuring out your AOT sizes and stuff like that. It's a tool that was made by one of the developers on the .NET team over at Microsoft. So, you can actually tell the compiler to generate some extra diagnostics files. I did this in my project file: I set the flag to generate the mstat files, which essentially writes out a lot of diagnostics information about the native executable. And then this tool can visualize that information. So let me show you what I mean by that.
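The flag in question looks roughly like this (IlcGenerateMstatFile is the standard Native AOT compiler property that sizoscope reads; placement in a PropertyGroup is the usual convention):

```xml
<PropertyGroup>
  <!-- Ask the Native AOT compiler (ILC) to emit a .mstat diagnostics file
       describing what ended up in the native executable and how big it is. -->
  <IlcGenerateMstatFile>true</IlcGenerateMstatFile>
</PropertyGroup>
```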
Right. Okay, so, sorted by name, you can see here's our SDK, there's our DynamoDB package, right? And we can see it's been trimmed. If we look inside at its models, where you'd normally see all the operations, all of the request and response types, you just have that GetItem request and GetItem response in there, because that's the only thing we need. Inside that executable, only about 22K is used for DynamoDB, and Core is 475. If we were to look at the actual size of the DLLs: normally DynamoDB would be 700K, and Core is almost two megs. So you can see the trimming. Most of us are not calling every single API inside DynamoDB, and there are a lot of APIs in there, right? Same thing with S3; S3 has a huge number of APIs. But now, with the SDK, you can really trim those things down to just the things you're using. And again, the smaller you make it: one, when you make it so you have no trim warnings, it gives you a much warmer, fuzzier feeling that everything's going to work out great, because the compiler can find everything and there's nothing it's warning you about. And as well, a smaller deployment bundle gives you a faster cold start time.
Brandon Minnick 43:04
I was just about to ask, because all this is great, and certainly my file size would be smaller. But when it comes to serverless, really what I care about the most is the cold start time. So if we can trim it and get faster cold starts, then, oh my goodness.
Norm Johanson 43:21
Yep, definitely. AOT's got great stuff going for it, but you definitely want to make as many, if not all, of those trim warnings go away as possible. Which might mean you have to do a bit more work and refactor your code to get there. So that's your cost, right? If you're working on something that requires very low latency and you need AOT, then you can definitely do that. And I think, given the choice... we've heard people say, hey, .NET Lambda cold starts are too long, I'm going to go learn Node and write it in that. So now it's your choice: do I want to go learn Node, or do I want to do some work to make things AOT safe? To me, I'd rather do the extra work to make things AOT safe than have to go learn a whole other toolchain. Obviously I'm a longtime .NET dev; nothing against the Node people out there, but...
Francois Bouteruche 44:20
Yeah, I'm with you on this. I would add, it also saves me from having to learn Rust, for example, because another argument people make is, oh, I will move to Rust because I will get much more performance. But given the performance improvement you can get with AOT: okay, I already have my codebase in C#; instead of rewriting everything in Rust or Node, maybe I just spend some time optimizing my C# code, making it trimmable, and ensuring everything works.
Norm Johanson 44:59
Yeah, Rust is great. If you want the fastest of the fastest, go for it, but it's not the high-productivity language that we're all used to in .NET, and we have a lot of stuff there. So, definitely cool stuff. I don't think there's much more exciting stuff to show in this demo, but I wanted to show this one since we launched Annotations, I think, last summer. Longtime Lambda developers know there are sort of two different programming models for .NET: you can deploy as a class library, or you can deploy as an executable. When you deploy as an executable, you have to have a Main and do all the bootstrapping and stuff like that. So in a version update we did, I'm thinking November, and this is where James helped out a lot, we added this new way with Annotations to say: I want to generate a Main, so I can deploy as an executable, which is then also what makes it work with AOT. The class library approach you can't do with AOT, because you need to build the whole entire application; with the class library, we are essentially dynamically loading that DLL inside of our .NET host inside Lambda. So this is cool. I love this; I would love to spend more time in source generation technologies, they're just super cool to me. So yeah, we just say generate a Main, but if you look at what's going on under the covers in this library, under the analyzers for Annotations, you can see there's that Main automatically being set up for us. Basically, how we do that: when you deploy an executable to Lambda, your function handler string is just the name of the DLL, or in this case the executable, whereas with a class library you get to specify the actual method you want with assembly, type name, and function name. So to get around how we can have multiple functions inside there, we have this concept where we always set this environment variable, ANNOTATIONS_HANDLER, to whichever function you want to run, and the source generation takes care of automatically setting that environment variable up. So if I go in now and create a void Foo function, mark it as a Lambda function, and compile, we can go back and look: you can see that the variable got put in there, and the generated code also got put in there. So we're taking care of all that for you. You still have that experience where I can have as many functions as I want, and all of that is just taken care of for you. And then we also generate all the other fun code handling the dependency injection and getting all that set up.
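The switch Norm describes looks roughly like this (the LambdaGlobalProperties attribute is from the Amazon.Lambda.Annotations package; the executable output type in the project file is also required):

```csharp
// Anywhere in the Lambda project (the .csproj also needs
// <OutputType>exe</OutputType> so an executable is produced).
// The source generator then emits a Main that reads the
// ANNOTATIONS_HANDLER environment variable to pick which function to run.
[assembly: Amazon.Lambda.Annotations.LambdaGlobalProperties(GenerateMain = true)]
```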
Francois Bouteruche 48:12
Just a quick question, to make it clear for our audience: you're not tied to Visual Studio, because, for example, I use Rider on my Mac to use Lambda Annotations. You're not tied to Visual Studio; you can use it with Rider, with the .NET CLI. It works everywhere.
Norm Johanson 48:33
Yeah, yeah. When I was convincing people around here, hey, let's do source generation, it was, oh man, do we have to work with Visual Studio to get it right? No, that's the great thing: because this is just part of the compiler, you do it once and it works everywhere, right? We add a feature at the lowest level and it's available to everybody, so we have developers on all of the IDEs and it works there. There's one thing I wanted to make sure to cover, something that I think can confuse people with Annotations, because there's the question of, what is the lifecycle of things? When are things actually created? In a normal Lambda function, right, you've got your type that contains your function; Lambda will call the constructor of that type once, during that init duration, and it will call the function for every invocation. Annotations is still the same way. We generate this class that's a wrapper around yours, and Lambda is going to call the constructor of that wrapper once as well. So when it comes to dependency injection, the question is: at what time are these things created? Anything that you inject at the constructor level is essentially a singleton: you're going to get a single instance, and every invocation is going to use that same one. If you have a case where you want something new for every invocation, we basically create a new scope, and you can see that in the generated code... no, wrong, it's the first one. You can see this is that wrapper that gets generated, and for each invocation we create a scope inside there. So there's the one-time creation of the class, and then there's one scope per invocation. And if I want a service that is scope level, I would just go and add it again to my list of services and inject it with FromServices; anything you resolve via FromServices is resolved at the scope level. So you can say, I want a new database connection or something every time; though I don't think you'd actually want your database connection at that level. But you know.
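A sketch of the two lifetimes Norm is describing with Lambda Annotations (the attribute names come from the Amazon.Lambda.Annotations package; the services themselves are hypothetical illustrations):

```csharp
using Amazon.Lambda.Annotations;
using Amazon.Lambda.Core;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical services used only to illustrate lifetimes.
public class ZipCodeCache { /* shared across invocations */ }
public class RequestAudit { /* fresh instance per invocation */ }

[LambdaStartup]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<ZipCodeCache>();
        services.AddScoped<RequestAudit>();
    }
}

// Constructor injection resolves once, when Lambda initializes the
// generated wrapper: effectively a singleton for the sandbox's lifetime.
public class Functions(ZipCodeCache cache)
{
    [LambdaFunction]
    public string Handler(
        string input,
        // [FromServices] parameters resolve from a new scope per invocation.
        [FromServices] RequestAudit audit,
        ILambdaContext context)
    {
        return $"processed {input}";
    }
}
```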
Brandon Minnick 51:01
This is so good to know, because that's something that I'm always worried about when it comes to doing serverless stuff. I've been told before, be careful using statics, because they can get reused, and I would assume singletons would have a similar warning. So yeah, this is cool. I didn't realize we could keep things scoped, so we don't necessarily have to worry about multiple invocations sharing state they shouldn't.
Norm Johanson 51:39
Yep. And of course, if you're looking for performance, if you know you're going to reuse the same thing every single time, yeah, put it in the constructor so it's created once, right? But if it's something on a per-user basis, a per-invocation sort of context, then you can have it at the invocation level. Very cool. So, we're running low on time, I see, but I did want to do one plug, if that's okay. We're not going to have a whole lot of time for this. This is sort of Lambda related, but not only: we've been working on a new library, it's out there in beta; it's our .NET messaging library. This is where we're essentially trying to make SQS, SNS, and EventBridge easier to use. It's not a general-purpose messaging library; we're not hiding that it's AWS specific, we're just trying to make those services easier to use. You can see, essentially, during your setup time you can say, I want to add these publishers, where I can say I want my chat message to go to this queue; and there are versions for SNS and EventBridge. And then, on the other side, you have subscribers with handlers for processing what you publish. I'm trying to rush through this too fast here. But yeah, anyway, we'll get to the really interesting thing in one second: there are sort of two sides to the story, right? There's "I want to publish messages," and there's "I want to actually start processing those messages," where you have a subscriber consuming messages.
Francois Bouteruche 53:39
I think you're just trying to tease us so you have to come back for another episode.
Norm Johanson 53:46
Yeah, yeah, I know, I'm rushing too much. But this is the part I want to show with Lambda here, and then I'll be done. Normally, you would have, say, a container that's just polling SQS and then processing messages. But we can also do this in Lambda, so let me pivot to the Lambda angle here. Again, using Annotations: in my startup I'm registering my delivery message and my confirmation message with these SQS publishers, setting all that up. I can then have my front end using that, where I just inject my publishers, again using our fun DI that we all love, and in goes the interesting publishing logic. And then we don't even have to use a container for the processing; we can use the SQS integration with Lambda to process the messages, and we have a Lambda messaging piece in there that does that processing. You are totally right, I went through this way too fast; it's just a big tease. But it's definitely something I'm really curious to get people's feedback on. It's in beta, and we're hoping to get it into more of a preview stage pretty soon, and I'm really curious to hear people's feelings on using this library. You can use it in containers or basically anywhere, and we've made sure you can use it in Lambda functions as well as containers. Just to throw too much at you at the end of the session.
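A sketch of the publish-and-handle pattern in the beta library (the API names here are per the AWS.Messaging beta package; the queue URL and message type are placeholders):

```csharp
using AWS.Messaging;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

builder.Services.AddAWSMessageBus(bus =>
{
    // Route this message type to an SQS queue (URL is a placeholder).
    bus.AddSQSPublisher<ChatMessage>(
        "https://sqs.us-west-2.amazonaws.com/123456789012/chat");

    // Poll the same queue and dispatch messages to the handler below
    // (in Lambda, the SQS event source integration plays this role instead).
    bus.AddSQSPoller("https://sqs.us-west-2.amazonaws.com/123456789012/chat");
    bus.AddMessageHandler<ChatMessageHandler, ChatMessage>();
});

await builder.Build().RunAsync();

public record ChatMessage(string From, string Text);

public class ChatMessageHandler : IMessageHandler<ChatMessage>
{
    public Task<MessageProcessStatus> HandleAsync(
        MessageEnvelope<ChatMessage> envelope, CancellationToken token = default)
    {
        // Business logic goes here; returning Success deletes the message.
        Console.WriteLine($"{envelope.Message.From}: {envelope.Message.Text}");
        return Task.FromResult(MessageProcessStatus.Success());
    }
}
```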
Brandon Minnick 55:20
Yeah, this is really cool. We got a sneak peek, I think last week, and I will say, when it comes to this new AWS messaging library, it just makes sense. When I was watching our internal demo last week, it was like, yeah, that's exactly how I would guess I would use it. And it's so simple. My only question is, why haven't we done this before?
Norm Johanson 55:51
Yeah, oh my God, this should be the only boilerplate code you have to write. And after that, it should just be: write your business logic, here's my handler, do my business logic. That's what we really want. But we don't want to hide anything of what's available in those services. One place where it is a bit opinionated is in how the messages are serialized. The community told us they really want us to use the CloudEvents spec for serializing all the messages. So I could, say, have a CloudEvents publisher in Java sending to it; we're trying to keep it interoperable. But also, we use DI as much as possible, so you can override any of our behavior: if you don't like CloudEvents and you want your own serialization format, you can inject your own implementations of those services.
Brandon Minnick 56:47
Super, super cool. I'm very excited to see where this is going. But like you said, we've only got a couple minutes left; we get cut off at the top of the hour whether we keep talking or not. So Norm, for folks who want to stay in touch and follow all the latest happenings with .NET on AWS, where can they find you?
Norm Johanson 57:07
Oh, so I'm still on Twitter; it's probably the most common place people reach out to me. I'm @socketnorm there; it's a weird long handle. I think it came from a homework assignment years ago where I had to write a socket, and I've used it everywhere I go ever since. And then obviously I'm on GitHub a lot; I get tagged a lot there. On all of our AWS .NET repos, you can always tag me, and I do my best to jump in where I can.
Brandon Minnick 57:34
And are you also socketnorm on GitHub? No?
Norm Johanson 57:37
It's normj there, which makes more logical sense.
Brandon Minnick 57:42
I have to say, I'm @TheCodeTraveler on Twitter but brminnick on GitHub, just to keep it fresh, keep users guessing. But Norm, thanks so much again for coming on the show. As we keep progressing with the new .NET messaging libraries, we'd love to have you back on to show that off and keep people in the loop about everything .NET and AWS. And also, just from the bottom of my heart, thank you for everything you do for us, the .NET community, with AWS. We couldn't do it without you. We highly, highly appreciate you and wouldn't know where we'd be without Norm Johanson. So thank you, Norm. And thank you for joining us. Don't forget to subscribe to the AWS Twitch channel so you never miss a show; we'll be here every other week, Monday mornings at 9am Pacific. And don't forget, we also have an audio podcast: find us on Spotify, Apple Podcasts, anywhere you get your podcasts, you can find The .NET on AWS Show. And we'll see you at the next one.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
