The .NET on AWS Show, featuring Henrique Graca!
In this episode, we are joined by Senior AWS Engineer, Henrique Graca! Join us as we dive into the world of Powertools for AWS Lambda.
Brandon Minnick
Amazon Employee
Published Oct 28, 2024
Brandon Minnick 1:12
Hello everybody, and welcome back to another .NET on AWS Show. As always, I'm your host, Brandon Minnick, and with me today is a very special guest co-host, Salih. Salih, welcome to the show.
Salih Guler 1:26
Yeah. Thank you. Thank you for having me. I'm really excited to be part of this show this week. I'm a bit nervous, but I think we got this
Brandon Minnick 1:35
You're gonna do great. Yeah. So I do have some happy-sad news. As you noticed, Francois, our typical co-host, is not here this week, so Salih is doing an amazing job filling in for him. Thank you. Francois got a job at Microsoft, so he'll be leaving us on The .NET on AWS Show. But we're so excited for Francois to be moving back to Microsoft, where he can do more for .NET and the .NET community. So I'm sure we'll see him again as a guest in the future, and I know he's going to be doing amazing things. But Salih, tell the world: what do you work on at AWS?
Salih Guler 2:15
So hello everyone. I have a similar background to Brandon, actually. I come from a mobile app developer background. I have focused on many cross-platform tools as well as native application development. I joined AWS because of my expertise in this field, and right now I teach people how they can use AWS services with their mobile applications.
Brandon Minnick 2:43
Love it, yeah. I mean, as a mobile developer as well, I'm very biased: it's obviously the best platform. Everybody should be making mobile apps. That's so great. And so before the show, we were talking about sharing some fun announcements with the world. And Salih, you mentioned you've got a show tomorrow.
Salih Guler 3:04
Yeah, absolutely. So every week on Tuesday, and let me check the time so I can tell everyone: I'm based in Berlin, so my time will be tomorrow at 6 PM to 7 PM, and for the folks who are on the West Coast, it will be from 9 AM till 10 AM. We have a show on Twitch called Front-End Web and Mobile Dev Hour, where we talk about how you can build apps, like I mentioned, and use the tools provided to you by AWS. So every week on Tuesday, including tomorrow, you can check this out. And before the show, we also talked about having Brandon on as well, so stick around for that show tomorrow and you might see him in the following weeks.
Brandon Minnick 3:51
Yeah, I wish I could do it tomorrow. I was telling Salih that I've got a doctor's appointment. I actually tore my shoulder a couple months ago playing water polo, so I'm doing some rehab, some physical therapy, but I'll move things around. I would love to join you. So yeah, subscribe to the channel so you never miss it, and you can come join us when I come on the show. But Salih, we have such an amazing guest today who's going to show us so many cool things. So without further ado,
Brandon Minnick 4:23
let's welcome Henrique Graca. Welcome to the show!
Henrique Graca 4:27
Thank you, Brandon. Thank you, Salih. Thanks for having me. It's good to be here.
Brandon Minnick 4:32
Yeah, thanks so much for joining us. For anybody who hasn't met you before, who are you and what do you do?
Henrique Graca 4:37
Yeah, sure. So I'm Henrique. I'm based in Lisbon, Portugal, and I've been at AWS for the past two years. I've had three different roles at AWS already, but now I'm what's called a senior solutions architect engineer, and I am working on the tool that we're going to talk about today: Powertools for AWS Lambda.
Brandon Minnick 5:05
Super cool, super cool. Now, there's always one question we love to ask every guest that comes on the show, because it's fun to learn how we all come from different backgrounds. Maybe some of us went to college, maybe some of us learned on our own. How did you yourself get started with C# and .NET?
Henrique Graca 5:23
Yeah, sure. So I started with, I wouldn't call it .NET, but the Microsoft developer ecosystem. I started with VB6, right? Not because I'm very old, but the technology I was working with was for an insurance company, essentially insurance claims for cars. So I started with VB6, then moved to VB.NET, did some Windows Forms and stuff like that, and then started C#, around the same time as ASP.NET, and from there on it stayed with C#. So I've been using C# since, I remember, Visual Studio 2005, so I started probably 2007, 2008. And from there on, I've worked in many places building C#, back end, front end, a little bit of Angular and JavaScript as well. So yeah, my background has been development most of the time. Then I joined Microsoft as a solution architect, and I delved more into the architecture side of things. And as I mentioned, I joined AWS two years ago to be a solution architect. But then I heard about Powertools for AWS Lambda, and I wanted to be part of it, and I joined the team, and it's amazing. Yeah,
Brandon Minnick 6:44
Good for you. And I don't blame you, because it is cool, and I can't wait to show it off. But before the show, we were chatting, and you mentioned you live in Lisbon, Portugal. Very jealous. But you're going to be in Porto next week for a conference.
Henrique Graca 6:58
Exactly, yeah. NDC Porto will be on the 18th. My session will be on the 18th, I believe at 2 PM, so if you guys want to say hi, I'll be there on the 18th to talk about serverless and .NET. It's not similar to what I'm going to talk about today, because this is very focused; the other talk is going to be more serverless in general, with a touch of issues and troubles for .NET developers.
Brandon Minnick 7:28
All the good stuff. That's the real stuff that we face every day, right? You can stand on the stage and show off the cool, shiny bits, or we can talk about what we're actually struggling with every day, and that's fixing bugs and performance issues, exactly, all that fun stuff. Cool. I'm very jealous, Henrique. I was at NDC Porto last year as a speaker, and I'm super jealous I won't be able to see it this year, but I know you're gonna do a great job. And if anybody's in the area, go check it out. NDC Porto is an amazing conference. It's definitely one of the, no, I'm gonna say it: it is the best third-party .NET conference in the world. So if you can make it, if you're nearby, go check it out. But Henrique, we have so much cool stuff that you've brought for us today, so I don't want to take up any more of your time. Where should we jump in?
Henrique Graca 8:21
Yeah, let's jump to the presentation. I have some slides. Hopefully I won't bore you too much with the slides, and then we can show some code of Powertools in action. Yeah, sounds good.
Brandon Minnick 8:33
And we'll do our best to narrate everything for the folks listening to the podcast, which you can subscribe to on your favorite podcast feed. So check us out on The .NET on AWS Show in your ears, if you can't join us live here on Twitch.
Henrique Graca 8:52
Very cool. Thank you, Brandon. Cool. So, can I get started? Yeah, of course. Yeah, yeah. So what I brought for the audience today is really a story, and I bet many serverless developers can relate to it. It's a story about pushing serverless applications to production, and something doesn't work as expected. And the worst part, with not just serverless but most of the applications we deploy to production, unfortunately, is that we can't figure out why it's happening, and we are basically flying blind when it comes to observability. In my early serverless days, it kept happening: we'd build all these cool, scalable apps with Lambda, only to realize we had no clue how to properly monitor, log, or trace them. The most challenging parts, and there were plenty, were that our logs were a mess, tracing across Lambda functions was hard, and getting meaningful metrics was also, I would say, a nightmare. So we knew we needed better observability, but the task seemed very daunting to get started, because there was tons of documentation, tons of ways of doing it, boilerplate code, complex configs, and many, many weeks of refactoring if you wanted to add that observability. So it wouldn't be fun. But today I'm bringing you good news. This is where I'm here to tell you how Powertools for AWS Lambda is changing the game on observability, not just making it doable, but simple and efficient as well. Now, as more folks started moving to production-ready serverless apps, they were hitting some unexpected bumps. And that's why, if you don't know it, please follow that link: AWS came up with the Well-Architected Serverless Lens. It's like a cheat sheet for developers and architects to make sure their serverless workloads are following best practices and ready for the big leagues, right?
So, but here's the thing: during these Well-Architected reviews, a couple of pain points kept popping up. Developers, like I mentioned in the story, were struggling with structured logging, distributed tracing, and meaningful metrics to show and to query. And what's really interesting is that a lot of teams only start thinking about observability when they are already in production or just about to launch. And this is where we see the shift left in design. And this leads to, also, sorry,
Brandon Minnick 11:58
Sorry, I interrupted you. You're doing great. I just had a question, for maybe some folks who haven't heard or aren't familiar with the terms you've mentioned, tracing and observability. What is that?
Henrique Graca 12:10
Cool, yeah. I'll explain a little bit at the end, but I can summarize it a little bit. So essentially, it's getting insights into your applications that are running in production. Like I mentioned, you don't want to be flying blind, so you want to see how your application, your code, is performing in production. So logging is one of the aspects: developers look at the logs, and you can see exceptions, the stack traces, all of the things happening. And there are metrics, which can be application metrics but also business metrics. So you have those two types of metrics. Say, number of items added to a cart, that could be a metric. Number of times I call this function, that can also be another metric, for instance to test which path is more common for users than the other path. So you can do those things, and then fine-tune the feature flags, etc. And then comes tracing, which gives you traceability for your request from end to end. That's the goal, right? You know that someone hit refresh in a web browser, the API was called, then you call a database or DynamoDB, there's some queuing involved, etc. So you get the whole end-to-end of a request, and then you can troubleshoot and see, for instance, where the latency is greatest, and then you can tackle it. If any exceptions occur along that path, you can also see, okay, this is failing, and then pinpoint which service is failing, etc. Obviously, in an ideal world, we'd have all those questions answered, but in most cases it's hard to achieve that full observability path. Right?
Brandon Minnick 13:54
In the real world, we're scrambling to figure out what went wrong in production, and then we're gonna add in tracing.
Henrique Graca 14:00
Exactly, yeah, exactly, exactly. So as I was saying, we see more and more people thinking about observability, but they are adding it later in the pipeline, and this can lead to all sorts of problems, like rushed solutions, higher costs for having to retrofit existing code, and obviously the potential for new bugs or performance issues with those changes. And so the million-dollar question is: why can't we bake in these critical observability features right from the start, during initial development? And that's where Powertools for AWS Lambda steps in. I would say it's like a Swiss Army knife for serverless developers, and it was designed to make implementing these best practices a breeze, especially when it comes to observability. By bringing Powertools into the mix early on, you can set up structured logging without breaking a sweat, get distributed tracing with minimal work, and collect and analyze key business and operational metrics easily. Now, Powertools started out focusing on observability, but it's grown into so much more. It's got all of these utilities that streamline common tasks, like idempotency, which we're going to talk about, and it's essentially there to make people more productive across different roles, not just developers. We see a lot of data folks, especially in the serverless area, adopting Powertools, because they do a lot of batch processing, stuff like that. So it's really multiple roles, not just developers. And as you can see from the numbers, developers are loving it. These figures you see on the screen really show how Powertools is hitting the mark in solving those real-world challenges for the serverless community. There is great adoption. Especially, I won't lie, it's not .NET: Python is our largest and the first language to appear. As you can see, we have support for multiple languages. As I mentioned, Python is leading because it's the oldest one.
And in terms of serverless, Python and TypeScript are probably the default options you would fall into if you're not a .NET shop, right? And this feature parity in this matrix means that you get consistent tools across your favorite languages, and you can also adopt Powertools gradually. You can start small and then add more features as your app grows. It's not all-or-nothing. So that was the theory, and now I'll jump into a bit of the code. Looking back at the slide I showed at the beginning, by bringing Powertools into the mix, we're looking at some serious benefits on these three topics. We'll spend way less time setting up these observability features, logging and metrics will be consistent across different functions, troubleshooting and performance tuning become a whole lot easier, and the greatest benefit: you'll be aligned with the AWS Well-Architected best practices right from the start. To summarize, Powertools for AWS Lambda is changing the game in serverless development. It's all about baking in those best practices and observability from day one, instead of treating them like an afterthought. So I'll pause here, and then I'll start looking at some of the features. Any questions, Brandon?
Unknown Speaker 18:02
not yet? Yeah,
Brandon Minnick 18:03
No, I feel like we all go through this journey at least once, sometimes a couple times, as developers, where you know you've got a deadline, you've got to get something up and running. You've got to get it working, the boss says by the end of the week, so you figure it out, and then it works. And sure enough, guaranteed, something happens: either your servers go down, or the instance throws an error, or something breaks. And then we've got to figure out why, only to realize we never implemented the tools, like tracing, to help us figure that out. So yes, I absolutely agree we should all do this from the start. But don't worry, everybody, if you haven't, because I've been there too. Yeah, exactly,
Henrique Graca 18:49
especially since there is a big movement, like, for instance, if you deploy to containers, etc. Obviously everyone knows about OpenTelemetry, which is one of the things we also want to tackle in the near future.
Salih Guler 19:02
Can you explain this OpenTelemetry, for the folks who don't know it? Ah, yes,
Henrique Graca 19:07
exactly. So OpenTelemetry, I would say, like the name suggests, is an open form of, I wouldn't say protocol, but a sort of consensus on how you should implement or work with logging, tracing, and metrics. It started with tracing and metrics; logging was added afterwards, or better yet, I think it was this year that the logging part was GA'd. But essentially, it's an agreement between multiple parties that your schema, or how you do logging, tracing, and metrics, should conform to this OpenTelemetry standard. And one of the things we see talked about a lot with containers is really OpenTelemetry, because with containers you obviously control the compute, and you can have what's called the OpenTelemetry Collector, which is an agent, similar to what CloudWatch does for serverless and Lambdas, but you have that agent that collects the metrics, traces, and logs from your applications. The hardest part with serverless is that you don't control the compute, right? You just have your function code that runs there. You could obviously achieve OpenTelemetry if you have an EC2 instance, or ECS Fargate, etc., something long-lived to keep your agents running, but then you have issues of, okay, how does it scrape my endpoint? Is it public? Is it not? So there are a lot of considerations in the serverless area which don't apply to traditional compute servers. That's one of the challenges in serverless environments, to bring all of this in. Yeah, but yeah, good question. So yeah,
Brandon Minnick 20:51
and I'm just dropping a link in the chat now. We've actually done a couple episodes on observability, so if you missed our previous episodes on observability, check out the link in the chat. You can go back and catch our previous show, where we brought on Martin Thwaites from Honeycomb to talk all about observability, how we should use it, why we should use it. He's also a .NET engineer, so it's all for us, all the .NET observability.
Henrique Graca 21:18
Honeycomb actually does a lot of work on, I haven't used their product, but I've seen from Twitter and the examples, they do a lot of serverless observability with OpenTelemetry, which is very interesting. But for serverless it has other downsides, like cold starts and all those things, right? So it adds more to the cold start myth, right? So, but, yeah, cool. So, focusing on Powertools as a .NET developer, all the snippets here are in .NET. As a .NET developer, if you want to add logging, you could either use the ILambdaContext you see there on the screen, I don't know if you can see the mouse, yeah, ILambdaContext, or you can use the traditional ILogger. I'm creating a factory here in the constructor, where you could inject it with dependency injection, etc. One of the things about structured logging is that the goal is to send logs in the form of JSON to CloudWatch, so you can then query and obviously filter logs based on those fields, and not just a string, which would be hard to parse. That's the traditional way you do it, without Powertools. If you adopt Powertools for .NET, you can see there, there's a decorator, an attribute, on the function handler. As a parallel, if you used Lambda Annotations, where you say this is a Lambda function, etc., it's a very similar concept. You add an annotation there that says Logging. It's not needed, but I'll explain why it's there. And then you can use the static Logger, or inject an ILogger, to start logging traditionally, like LogInformation, etc. What you get from adding that decorator is that it's going to read from the ILambdaContext and enrich your structured logs to CloudWatch, adding all of these fields you see here, like the function name, the X-Ray trace ID, whether it's a cold start or not, all those things. And obviously you can then add more, and I'll show you in the demos, you can add more keys to enrich the logs.
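For listeners who can't see the screen, here is a rough sketch of what that decorator-based logging likely looks like, assuming the AWS.Lambda.Powertools.Logging NuGet package; the service name, message, and key are just illustrative:

```csharp
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Logging;

public class Function
{
    // The [Logging] attribute reads the ILambdaContext and enriches every
    // log entry with fields like the function name, the X-Ray trace ID,
    // and whether this invocation was a cold start.
    [Logging(Service = "payment-service", LogEvent = true)]
    public string FunctionHandler(string input, ILambdaContext context)
    {
        // Emitted as structured JSON to CloudWatch, not a plain string
        Logger.LogInformation("Processing payment request");

        // Extra keys can be appended to enrich subsequent log entries
        Logger.AppendKey("OrderId", "12345");

        return input;
    }
}
```

The static `Logger` shown here and an injected `ILogger` are the two styles Henrique mentions; both end up as queryable JSON fields in CloudWatch.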
You can also change, it's called a custom formatter, you can change the format of the logs to adapt to whatever you currently have and want to filter by. So essentially, it's that easy: you just import the NuGet package, AWS.Lambda.Powertools.Logging, add the decorator, and you should be fine. The decorator has other properties that I'll be showing in the demo. So this is structured logging 101.
Then you have distributed tracing, the thing we were talking about at a very high level. We have the traces that we were discussing; they map all the requests as they journey through the serverless app. And then you have the segments, which represent the invocations of Lambda or API Gateway, as you see there, one per invocation. And then there are the sub-segments, which give the details of the downstream calls that segment made. So if a Lambda calls DynamoDB or an HTTP endpoint, that will be a sub-segment. We can even add more to the traces with annotations, which are key-value pairs for easily filtering in the AWS console, and then you can create your dashboards there. And then there's metadata, which is not indexed, so not for filtering, but you can add objects there, not just key-value pairs. And obviously the big benefit of tracing is we can visualize these request flows across services, spot the performance bottlenecks, and debug these complex distributed systems without pulling our hair out. And the best part of all of this: let me show you the traditional implementation, which I had to split in two, essentially what you do traditionally with the AWS X-Ray SDK.
So the first segment is already built in by default, so everything else is sub-segments you add, and you have to have a lot of try-catches, because if it fails, you have to add the exception or capture the error, and then you have to finally end the sub-segments. And you have to do this everywhere: imagine you have two functions, you have to do this for the handler and for the function that gets called. With Powertools, you can achieve the same thing with much less code. Same idea: decorators. What I'm doing here is the exact same code as we had before, and I'm just decorating the handler. I added more properties, but there's a default property where you can choose, okay, if it errors, I don't want to trace it and show it in the console; the default is response and error, meaning that the metadata for this request is stored. So in this example, the string-to-upper input will be there in the metadata by default. And the second method that gets called, it's called ToUpper, and then on the dashboard we can see that this segment called this sub-segment, and we add an annotation that we can then filter on in the console.
Brandon Minnick 26:52
So, so much nicer, my goodness. For the folks listening along, the code we were looking at a second ago was using X-Ray, specifically the X-Ray NuGet package. Because that X-Ray, we'll call it logging or tracing, method call could fail, we then have to wrap all of our code in a try-catch block, just in case. And it's really messy, you know? It takes this one-, two-line AWS Lambda function and turns it into 15 lines. And you know what happens if the logging fails initially? Well, does the Lambda run? Maybe, maybe not. Probably not, because it threw an exception. Whereas now with Powertools, it's literally just an annotation you put above the method, just square-bracket Tracing. You can pass in a property, like the segment name, but that's it. So with one line of code we've got tracing enabled, way, way easier. And so under the hood, what is this doing, Henrique?
Henrique Graca 28:05
So we're essentially not doing as much as with logging, which has a lot of work behind the scenes, serialization, etc. This is more, I don't want to oversimplify it, but obviously we have to use the X-Ray SDK, because in terms of serverless, that's the agent that's running on Lambda. So it still uses the X-Ray recorder behind the scenes, but it does all of that plumbing for you. And if it fails, it's not a global try-catch, but it catches around the method. Essentially, if people are familiar with aspect-oriented programming, this is essentially what it is. We wrap that Tracing attribute around the function, so the function is invoked by the aspect, and then there's a try-catch. If it fails, we then send it to X-Ray, and we always terminate the segments after each method is called. So yeah.
Brandon Minnick 29:01
Yeah, anytime I can write less code, there's less code for me to maintain.
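For listeners, a minimal sketch of the decorator-based tracing being described, assuming the AWS.Lambda.Powertools.Tracing package; the segment name and annotation key are illustrative:

```csharp
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    // One attribute replaces the manual sub-segment plumbing and try/catch;
    // by default both the response and any error are captured as metadata.
    [Tracing(CaptureMode = TracingCaptureMode.ResponseAndError)]
    public string FunctionHandler(string input, ILambdaContext context)
    {
        return ToUpper(input);
    }

    // Each decorated method becomes its own sub-segment in X-Ray
    [Tracing(SegmentName = "ToUpper")]
    private string ToUpper(string value)
    {
        // Annotations are indexed key-value pairs you can filter on
        // in the X-Ray console
        Tracing.AddAnnotation("InputLength", value.Length.ToString());
        return value.ToUpperInvariant();
    }
}
```

Compare this with the hand-rolled X-Ray SDK version Henrique showed: the aspect wraps each method in a try-catch, records errors, and always closes the sub-segment for you.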
Henrique Graca 29:08
Exactly. Cool. So the next, but not less important, is metrics. We again use CloudWatch for our metrics, and CloudWatch organizes the data in a hierarchy. At the top we have the namespaces, which are buckets of related metrics. Then there are the dimensions, similar to tags, that give you extra context about the metrics, and then your individual metrics, the specific data points you're measuring: CPU, request counts, or some other business metric, like you see, shopping cart clicked or closed. And we can also set alarms on CloudWatch based on thresholds. This is vital for being alerted at night if something goes wrong. And these business metrics also allow the business to make data-driven decisions, not just for the end customer, but also about how to optimize the architecture. So if something is going through a flow too often and we're not optimizing at a lower level, like an index on a database for that query, we could then optimize and do those things.
So essentially, the same example. How do you do it without Powertools? It's a bit messy, because, yes, there is a lot of code just to send metrics. Essentially, how the metrics are formed: there's a collection, and then you add metrics, etc., inside of it. And then the hardest part, as you see there on line 57, the PutMetricData, which then calls the CloudWatch API, PutMetricDataAsync. So this is the default way you use the metrics API in AWS to put metrics on CloudWatch. There is, however, an alternative to all of this code, based on a format that's called EMF, embedded metric format, that allows us as developers to write logs to standard out in a JSON payload that follows a specific CloudWatch schema. The benefit of this is that Powertools uses EMF by default, so we're not using this API, because this API is obviously blocking, right?
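To make the EMF idea concrete for listeners: instead of calling the CloudWatch API, the function prints a single JSON log line to stdout that follows the embedded metric format schema, and CloudWatch turns it into a metric. A hand-written sketch of such a payload, with illustrative namespace, dimension, and metric names:

```json
{
  "_aws": {
    "Timestamp": 1698500000000,
    "CloudWatchMetrics": [
      {
        "Namespace": "ShoppingApp",
        "Dimensions": [["Service"]],
        "Metrics": [{ "Name": "ItemsAddedToCart", "Unit": "Count" }]
      }
    ]
  },
  "Service": "cart",
  "ItemsAddedToCart": 1
}
```

The `_aws` block tells CloudWatch which top-level properties to extract as a metric; everything else is kept as a queryable structured log.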
So you have to make API calls and wait for them to come back; although it's async, one thread has to go out, right? What EMF does is, you don't have to do this; you just output stuff to the console, and then the CloudWatch agent looks at it, sees the schema, and sends it on to CloudWatch metrics. So it's a much better approach: you don't have to have all the try-catches, and your code is non-blocking. So much better. This is just to give you context, because people say, okay, metrics, you can do it the other way, and obviously you can; that's the default way of doing it, without EMF. So as I mentioned, Powertools uses EMF, and this is, again, the same concept. This is how you add metrics to your application. So let's say this is the function handler, you want to add a metric, and you want to capture cold starts. Very common: you have serverless, and you want to see how many cold starts you had. Then you start thinking, okay, we have too many cold starts, maybe pre-warm a little bit at this time, because, you know, we're going to get more customers, as you see in the metrics. So, a lot of things. We give you a nice option, capture cold start, so you don't have to add more metrics yourself. But again, you can add metrics with the static Metrics.AddMetric. So we have a metric there that says, each time this handler is called, increment one count. And then you can go to CloudWatch and see those metrics.
Brandon Minnick 33:25
Right. And again, just so much nicer. For folks who couldn't see the code previously, it was probably 50 lines of code that got added to this two-line AWS Lambda function just to handle metrics. But again, with Powertools, just like with tracing, where we just added an attribute above the method called Tracing, to capture metrics we just add an attribute above the method for our function called Metrics, and then I assume it writes all that code for us behind the scenes.
Henrique Graca 34:00
Yeah, it's not that exact code; it's the EMF way. Essentially, we're just creating that dictionary of things you want to send and writing it out to the console. We're going to see it when I open the AWS console; we're going to see the metrics output there. They look like logs, but then the CloudWatch agent will pick them up and send them to CloudWatch.
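For listeners, the metrics usage just described might look roughly like this, assuming the AWS.Lambda.Powertools.Metrics package; the namespace, service, and metric names are illustrative:

```csharp
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    // CaptureColdStart = true emits a ColdStart metric automatically;
    // everything is flushed as EMF JSON to stdout rather than through
    // blocking PutMetricData API calls.
    [Metrics(Namespace = "ShoppingApp", Service = "cart", CaptureColdStart = true)]
    public void FunctionHandler(string input, ILambdaContext context)
    {
        // Increment a business metric each time this handler is invoked
        Metrics.AddMetric("HandlerInvocations", 1, MetricUnit.Count);
    }
}
```

The attribute handles buffering and flushing the EMF payload at the end of the invocation, which is why no try-catch or API client appears in the handler.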
Salih Guler 34:26
Before we move forward, I want to ask one thing. We keep mentioning the code is great, the code is great, but where can they see similar examples? Is it in the documentation, or do we have a sample app that they can just try?
Henrique Graca 34:45
Yes, definitely. Great question, yes. So we have our documentation. I believe Brandon is sharing the website. If you go to the Powertools website, you can find the links for all the supported languages; specifically for .NET it's the link you see there on the screen. If you go there, we have a lot of documentation. It's something that we are very proud of, the documentation. So we write a lot of documentation. And if you go to the GitHub repo, we have a folder, I think it's outside of source, right in the root, called examples, and we have examples of all of this code. You can just clone the repo, look into the examples, and build it yourself.
Salih Guler 35:33
Cool, awesome. Oh yeah, there you go, listeners.
Brandon Minnick 35:40
Yeah. Yeah. So that's github.com/aws-powertools, that's the org, and then in there is where, like Henrique said, you can find all the .NET stuff. So there's a repo called powertools-lambda-dotnet, and it's all open source. And like he said, we've got the docs here you can dig into to learn, yeah, everything you need to know.
Henrique Graca 36:02
Yeah, the docs there, they just map to the website. We also use a great open source tool, MkDocs, to build our documentation. So it's, again, all open source. And yeah, a great help.
Brandon Minnick 36:16
And here's Henrique with the latest, yeah, latest merged pull requests, committed to main. Yeah,
Henrique Graca 36:22
exactly, yeah. So, I'll talk about it later, but the main thing we've been focusing on the last couple of months is the AOT readiness of this, which is a big thing in .NET, right? AOT. So we're almost there. So, yeah, happy to
Brandon Minnick 36:41
get that out. Good luck, man. Yeah, I just added AOT support to a couple of libraries I maintain. And yeah,
Henrique Graca 36:49
the serialization. Yeah, cool. Right. Nice. So we talked about the three big observability topics: metrics, logging, tracing. So let's switch gears a little bit to one of the features that's not observability, which is idempotency, one of my favorite features. I believe it's one of the most used after the other three. Ah, sorry, yeah, so this was a full example of the three. With these three attributes there, and no more code, you had full observability: we had logging, metrics, tracing, and you can do all of those things. So, but we'll see again.
Brandon Minnick 37:34
It's just taking your existing Lambda function and putting an attribute above it that says Tracing, and an attribute above it that says Metrics, and an attribute above it that says Logging, exactly, and then Powertools implements all that code for you, exactly.
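To make the attribute pattern concrete, here is a minimal sketch of what a decorated handler looks like. Note the attribute classes are declared locally as stand-ins so the snippet compiles on its own; in a real project they come from the AWS.Lambda.Powertools.Logging, .Metrics, and .Tracing NuGet packages, and the handler signature would use the real Lambda event types.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Stand-in attributes for this sketch only; the real ones ship in the
// AWS.Lambda.Powertools.* NuGet packages.
[AttributeUsage(AttributeTargets.Method)] class LoggingAttribute : Attribute { }
[AttributeUsage(AttributeTargets.Method)] class MetricsAttribute : Attribute { }
[AttributeUsage(AttributeTargets.Method)] class TracingAttribute : Attribute { }

class Function
{
    // With the real packages, these three attributes are all that is
    // needed to wire structured logging, EMF metrics, and X-Ray tracing
    // around the handler; the library generates the plumbing.
    [Logging]
    [Metrics]
    [Tracing]
    public string FunctionHandler(string input) => $"Hello, {input}";
}

class Program
{
    static void Main()
    {
        // Show that the handler carries all three observability attributes.
        var handler = typeof(Function).GetMethod("FunctionHandler")!;
        var attrs = handler.GetCustomAttributes().Select(a => a.GetType().Name);
        Console.WriteLine(string.Join(",", attrs.OrderBy(n => n)));
    }
}
```

The point of the attribute-based design is exactly what Brandon describes: the observability concerns stay out of the business logic entirely.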
Henrique Graca 37:49
So another cool part, I'll show you in the demo, but since it's on the screen: we have another property on the logging, which is the correlation ID path. Essentially it looks at the request, and you can say, okay, in this case, go to the headers, grab header1's value, and use that value as a field on the logging called correlation ID. And this helps you: if you then go to the AWS console and ask where this correlation ID happened, you can see all your Lambda functions that logged with that correlation ID. So this is the whole distributed tracing part, right? You can append more stuff to the logging, and you don't have to do a custom field. We just say, okay, this is the event we're getting, give us the JMESPath to get there, and we grab it and put it in the logging context for you as well. So, those small things. Yeah, there's a lot, so I won't talk about all of them. So, idempotency. I have some slides, but I'm looking at the time, so maybe I'll rush a bit. Essentially, idempotency is a fancy word that means if we do something once or 100 times, we always get the same result, right? And for specific cases in distributed systems and APIs, this is super important. It keeps things consistent and predictable, especially when issues happen: networks fail, requests get duplicated, mostly in scenarios like batch processing, where delivery is at-least-once rather than at-most-once, so the same event keeps getting sent. And if you keep processing the same event, you're going to spend more money; idempotency is good for those cases as well. So yeah, these are all the features that idempotency has: on the handler, outside of the handler, serialization, validation, timeout. I think I have the example here, so I'll talk on the slide. Same thing, this has some pre-work to do in the constructor, where we have to
say which DynamoDB table you want to use to store the idempotency key and results. So we can ask: is the key there? Yes? Return the same thing we have in DynamoDB, right? That's how idempotency works. We also have a local cache, so you don't always have to hit DynamoDB; if you want, it hits locally for a specific amount of time. If it finds it, it returns it; if it doesn't, it goes to DynamoDB, etc. So, a typical caching system. We are still working on implementing what Python has: Python also supports Redis, so you can use Redis as well instead of DynamoDB. At the moment, we only support DynamoDB, so DynamoDB is our backend for idempotency. It's fast, it's cheap, so it's good for this key-value kind of scenario. And again, we decorate the handler with the Idempotent attribute, and we're done. So for that method, if the same request is sent, in this case an API Gateway request, we check everything, and if it matches, you get back what you have from the cache or from DynamoDB. The other example
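The lookup flow described here (local cache first, DynamoDB on a miss) can be sketched as follows. This is an illustrative model, not the actual Powertools internals; the class and method names are hypothetical, and a plain dictionary stands in for the DynamoDB table.

```csharp
using System;
using System.Collections.Generic;

// Hedged sketch of the idempotency lookup flow: check a local in-memory
// cache with a TTL first, fall back to the persistent store (DynamoDB in
// the real library) on a miss. All names here are illustrative.
class IdempotencyStore
{
    private readonly Dictionary<string, (string Result, DateTime Expiry)> _localCache = new();
    private readonly Dictionary<string, string> _dynamoDb; // stand-in for the DynamoDB table
    private readonly TimeSpan _ttl;

    public IdempotencyStore(Dictionary<string, string> backingTable, TimeSpan ttl)
    {
        _dynamoDb = backingTable;
        _ttl = ttl;
    }

    public string? TryGetResult(string idempotencyKey)
    {
        // 1. Unexpired local cache hit: no network call at all.
        if (_localCache.TryGetValue(idempotencyKey, out var cached) &&
            cached.Expiry > DateTime.UtcNow)
            return cached.Result;

        // 2. Fall back to the persistent store; refresh the local cache on a hit.
        if (_dynamoDb.TryGetValue(idempotencyKey, out var stored))
        {
            _localCache[idempotencyKey] = (stored, DateTime.UtcNow + _ttl);
            return stored;
        }

        // 3. Miss everywhere: the handler must actually run.
        return null;
    }
}

class Program
{
    static void Main()
    {
        var table = new Dictionary<string, string> { ["abc123"] = "{\"status\":\"ok\"}" };
        var store = new IdempotencyStore(table, TimeSpan.FromSeconds(30));
        Console.WriteLine(store.TryGetResult("abc123"));            // served from the stand-in table
        Console.WriteLine(store.TryGetResult("missing") ?? "run handler");
    }
}
```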
Brandon Minnick 41:12
Question for Henrique. So, again, there's not a lot of code here, which is great. I'm looking at the code, and we just have to quickly configure our idempotency to point to our DynamoDB table. And then again, we just add an attribute above our Lambda function that says Idempotent.
But then what? How does this
work under the hood? So it sees the request coming in, and then, before running my code in my function, it quickly checks my DynamoDB table to see if there's a matching request, and if there is, we just use that existing one. Is that right?
Henrique Graca 41:55
100%, that's exactly it, wow. So it's the same aspect-oriented thing, right? This code runs before yours. Imagine this code wraps your function inside it: the first step will be, okay, is idempotency on? Is the method decorated? It only runs if it is. Is idempotency on? Yes, go there, query it. What we do is compute an MD5 hash, so it doesn't have to be cryptographically secure, it's just a hash of the whole body or request you're receiving, and then we compare it to what DynamoDB has, and we send back the stored response if it matches. So it doesn't even run your code.
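The key derivation Henrique describes can be sketched in a few lines: hash the request payload with MD5 (fine here, it's a cache key, not a security boundary) and use the hex digest as the lookup key. The class name is illustrative, not the library's.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative sketch: derive an idempotency key by hashing the raw
// request payload. MD5 is acceptable because this is a lookup key,
// not a cryptographic guarantee.
class IdempotencyKey
{
    public static string FromPayload(string payload)
    {
        byte[] hash = MD5.HashData(Encoding.UTF8.GetBytes(payload));
        return Convert.ToHexString(hash).ToLowerInvariant();
    }
}

class Program
{
    static void Main()
    {
        string a = IdempotencyKey.FromPayload("{\"id\":42}");
        string b = IdempotencyKey.FromPayload("{\"id\":42}");
        string c = IdempotencyKey.FromPayload("{\"id\":43}");
        Console.WriteLine(a == b);   // identical payloads produce identical keys
        Console.WriteLine(a != c);   // different payloads produce different keys
        Console.WriteLine(a.Length); // 32 hex characters
    }
}
```

Because identical payloads always hash to the same key, a duplicate request finds its earlier result in DynamoDB and the handler body is skipped entirely.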
Brandon Minnick 42:35
In the chat earlier, redcor Bro said, "I need this right now in my POC," and I feel like I need this right now in my apps. Yeah, so good to see this.
Henrique Graca 42:50
That's cool. That's cool. Thanks. So yeah, in the previous example, you saw it was decorating the handler, which means every request is going to be cached. But you can also decorate just methods. Say, in this example, imagine we have a long-running call or an HTTP call, whatever, and we're just caching, or making, that method idempotent. And the way this works, you can see it in the documentation, is that your method has to have at least one parameter, which is the parameter we'll use for the key. If it has more than one, there is another attribute where you specify which parameter is the key for that idempotency record. So again, a lot of flexibility here. And, for instance, if you decorate your whole handler, you can also specify, through those JMESPath filters, JSON filters, something like this: API Gateway sends a big payload, and then there's the body. There are some utilities, say powertools_json(body), that will get the body, and then you can say, if it's a JSON request, body.id: I want to ignore the full payload and just focus on that key. So that key has to match whatever is in DynamoDB; it doesn't have to be the whole payload you're getting. So this is cool: let's say everything else in your request is changing, but the ID stays the same, and you only want to focus on the ID. That's what decides whether it counts as another request, and the dynamic parts are ignored. And again, you can specify the duration of that. I believe by default it's like 30 seconds or one minute, I have to check the documentation, but you can adjust how long you want that caching to be. Cool. So yeah, idempotency, one of my favorites, besides the traditional logging, metrics, and tracing. So yeah, very briefly again, AOT support. Native AOT is a big topic in the .NET world. Amazing.
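The body.id filter just described relies on one subtlety worth showing: the API Gateway proxy event carries the request body as a JSON string, so it has to be parsed a second time before "id" can be pulled out. Here is a hedged sketch using System.Text.Json; the helper name and the trimmed-down event shape are illustrative, not the Powertools implementation.

```csharp
using System;
using System.Text.Json;

// Sketch of a "powertools_json(body).id"-style filter: parse the outer
// API Gateway event, then parse its "body" string, then read "id" as
// the idempotency key. Field names follow the API Gateway proxy shape.
class Program
{
    public static string ExtractIdempotencyKey(string apiGatewayEvent)
    {
        using var evt = JsonDocument.Parse(apiGatewayEvent);
        string body = evt.RootElement.GetProperty("body").GetString()!;
        using var parsedBody = JsonDocument.Parse(body);
        return parsedBody.RootElement.GetProperty("id").GetString()!;
    }

    static void Main()
    {
        // Everything else in the payload can change between retries;
        // only "id" decides whether this is the same logical request.
        string evt = "{\"path\":\"/orders\",\"body\":\"{\\\"id\\\":\\\"order-42\\\",\\\"ts\\\":\\\"2024-10-28\\\"}\"}";
        Console.WriteLine(ExtractIdempotencyKey(evt)); // order-42
    }
}
```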
So the things we already have with AOT support are metrics, tracing, and logging, the three big ones. We wanted to get them out as fast as we could; they are the most used ones. For idempotency, there's a PR in review, to be merged hopefully in two weeks' time. The demo I'm going to show today already has that: with my latest build, idempotency is working there. So yeah, we're getting there. Hopefully we'll have all of these before re:Invent, even. And as I mentioned, the biggest challenge in porting any library or .NET application to AOT is really serialization. It changed a lot; it needs to know everything beforehand. We used to rely on all of these dynamic kinds of things, right, reflection, all of that. Yeah, it's good in the long run, but it's hard now.
Brandon Minnick 46:07
Hijacking your screen for a second, I just want to show this off in case folks hadn't heard of native AOT. This is a chart we put together that shows our cold start times versus warm start times. If we go back to .NET 6, before AOT existed, you can see we were getting cold starts around 900 milliseconds to one second. But nowadays, with .NET 8 and native AOT, cold starts are dropping down to around 300-400 milliseconds. And a lot of this is literally just because of how .NET works and what AOT is doing, because .NET has historically been a just-in-time compiled language. So yes, we click Compile in Visual Studio, but our code just gets lowered into Microsoft's intermediate language, and it's not until the code actually runs, on the server or on your laptop or on your phone, wherever your code's running, that .NET actually looks at what the CPU is on that box and goes, oh, it's an x86-64 CPU, I'll compile the code down for that so it can run on that processor. With native AOT, the compiler does all of that ahead of time. So we can tell it things: if you know what processor you're running on, you can specify the IlcInstructionSet property in your csproj file to get even more performance. You can also tell it to optimize for speed or size, depending on what you're looking for. I just wanted to show this for folks, because, yeah, .NET is weird in the fact that we compile our apps, but they're also just-in-time compiled at runtime. With native AOT, it does all that processing work ahead of time, so it doesn't have to wait for the code to spin up on the processor, see what that processor is, and turn our code into the low-level bits, the ones and zeros, the assembly language the processor can actually understand. So, huge performance boost with AOT. If you're not already using it, I highly recommend it.
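The csproj properties Brandon mentions look roughly like this. PublishAot, IlcInstructionSet, and OptimizationPreference are real MSBuild properties for native AOT; the specific instruction-set value below is just an example and should match your actual target hardware.

```xml
<PropertyGroup>
  <PublishAot>true</PublishAot>
  <!-- If you know the target CPU, pin the instruction set for extra
       performance (example value; pick one matching your fleet). -->
  <IlcInstructionSet>x86-64-v2</IlcInstructionSet>
  <!-- Trade binary size for speed, or vice versa: Speed | Size -->
  <OptimizationPreference>Speed</OptimizationPreference>
</PropertyGroup>
```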
Henrique Graca 48:15
100%, yeah. And for serverless in particular, you're going to see a huge improvement. Like I mentioned, JIT is amazing: the performance you get while it's doing all of those things on the fly, it optimizes a lot. But in terms of cold starts, AOT is really impressive, because there's not that whole bootstrap phase at the beginning. So what you see on the screen is that when you start a new Lambda function from an AOT template, you're going to get a Main method there now, besides the handler. And in this Main method there's, I forgot the exact name, I believe it's the default Lambda source-generator serializer, essentially used for serialization. For logging, we have to change that serializer to the Powertools source-generator serializer, and this is important if you want to log in the handler, because it's something that happens before the logging starts, so it has to be done right at the beginning. Behind the scenes, if you want to go deeper, what we do is combine the serialization context of the client, and I'll show you in the code, with the Powertools serialization context, so Powertools can serialize your types. This is the whole crazy business you have to do with AOT, but this is the way we've done it. You just say PowertoolsSourceGeneratorSerializer of your serializer context type, which is a class you have in your Lambda function to serialize your own types. Ignore the custom formatter, but that's how you pass a new formatter for the logging. The main part is that one. For idempotency, again, you need to pass in your serialization context. You saw in the constructor we set the table, but now we have a new line, starting with .NET 8, where we pass in that JSON serializer context. I say .NET 8 because source generation is not just for AOT, right?
You can still start using source generation with normal .NET 8 and be ready for AOT. But obviously this is a new way of building things, and most people don't do it yet. So you need to pass, again, the JSON serializer context to your Lambda, to your idempotency setup. Okay, so before I do this, let me quickly show some code, and
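The source-generation pattern being described looks like this in plain System.Text.Json. The context class name mirrors the one on screen, but the Order record is purely illustrative; the Powertools serializer wiring builds on this same mechanism.

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Example type to serialize (illustrative, not from the demo).
public record Order(string Id, decimal Total);

// Declare a partial JsonSerializerContext listing every type you
// serialize; the source generator emits the (de)serialization code at
// compile time, so no reflection is needed at runtime. This is what
// makes the code AOT-compatible.
[JsonSerializable(typeof(Order))]
public partial class LambdaFunctionJsonSerializerContext : JsonSerializerContext { }

class Program
{
    static void Main()
    {
        var order = new Order("order-42", 19.99m);

        // Serialize and deserialize through the generated context,
        // never through reflection-based overloads.
        string json = JsonSerializer.Serialize(order, LambdaFunctionJsonSerializerContext.Default.Order);
        Order roundTripped = JsonSerializer.Deserialize(json, LambdaFunctionJsonSerializerContext.Default.Order)!;
        Console.WriteLine(roundTripped.Id);
    }
}
```

Adopting this pattern on regular .NET 8 today means the same code works unchanged if you later flip on PublishAot.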
here we go. Share screen, entire screen. So now I'll be looking at that side, sorry about that. So this is a normal .NET 8 Lambda function, and you see here on line 22 we have the expiration, so we can configure how long we want it to be cached in DynamoDB. I mentioned the correlation ID; we're going to see it in action. Everything else: we can add custom keys to the logger, and I added some annotations that we're going to search on in the Lambda console. And this method, essentially, is idempotent, so we can cache this HTTP call. The rest really doesn't matter; it's just plain code. So if I go to our Lambda console, this is the .NET 8 Lambda, and one of the issues you have, one of the pushbacks on .NET on Lambda, is this, right? The cold start wasn't great: 1.8 seconds, not ideal. Obviously, as I mentioned, just-in-time compilation does a lot of work. If we click again, you can see it. But what I want to draw your attention to is to remember what I said about EMF. This is one of the metrics being generated, with our dimensions, etc., and this metric is the cold start one, so it's going to add cold starts to our metrics. Then we have our first log entry, which, you see, has the correlation ID. And one of the things I asked the logging to do was to log the incoming event. Don't do this in production, because it can contain sensitive information, but essentially I'm logging the payload that API Gateway, or the test button I clicked here, sent. So this event I'm sending is all over here, and then I have more logging, logging with custom data, all the fun stuff that you can use to log. And this is the event I sent: you saw value1 in the correlation ID, which is the value1 in the header, right? Pretty cool. So like I said, the next calls are going to be fast, right?
If I keep clicking, it's single-digit milliseconds, maybe, yeah, 40 milliseconds. So now I have the same application built with AOT, exactly the same code. The main differences are the things I mentioned: we have to have the context, and we also have to have our Powertools JSON serializer, you see here. And this guy, the Lambda function serializer context, is essentially this class here at the bottom. If you're doing source generation, you're familiar with this: essentially you have to say which types you're going to serialize, and this context is going to be passed in to do the serialization. So there's no more "serialize whatever type" via reflection; you have to say, go get this, this is how you serialize. So this is how it's done with AOT.
Brandon Minnick 54:11
yeah. And for the folks, again, due to the concept of ahead-of-time compiling, AOT: the JSON serializer under the hood uses reflection to initialize objects for us, and you can't do that with AOT; AOT doesn't allow reflection. So one of the ways we get around that is, yeah, we create these types, and we can add attributes to them to say, hey, this JSON serializer is going to be serializing these specific types, so figure that out ahead of time. And literally what it does, like Henrique was saying, is use source generators to write the code it's going to need before we compile the app. So again, it just makes our app faster. Speed improvements. Yeah, it
Henrique Graca 55:01
doesn't make the package smaller, but it makes the application faster. So we can prove it, right? If I click here, you're going to see this was AOT, same request. This one actually took a little bit longer than expected; traditionally it takes like 200 milliseconds, but cold starts went down about three times, right? If I do it again, it's going to go to single digits, so it's nine milliseconds, etc. And again, this is working with idempotency, so I can show that as well. I think I put 20 seconds, so we should be hitting DynamoDB again, and soon we're going to start seeing the requests here; let me just change this to two or three, and you're going to see it here. So the request took a little bit longer, because, remember, this request is going to DynamoDB and making an HTTP call, right? And we can see here everything it called. So in terms of the backend and traces, let's see if we can find it fast here. Okay, so I have some here, and what I mentioned is, this is a fresh call, right? So it goes here, and just with those annotations, I can see the function handler and then the call to the API. If I go to a newer request, we'll stop seeing that and just see one, not this one.
Brandon Minnick 56:35
And for the curious, we were also talking about tracing and metrics earlier, and how they're like logging, but better. This is why. This is a trace I'm seeing here, with timelines and all this data attached to it, instead of just a line in a log that says started at this specific time, ended at this specific time. No, we get visualizations, we get all sorts of data that we can use. Exactly,
Henrique Graca 57:01
yeah, and as I mentioned, we have the metadata, we have annotations, and these annotations, we can then search on them. So we can go to traces and say annotation.customerId equals one, I think it was, and these are all the traces with customerId equals one, and obviously we can then search. I don't think I had a two, so I did three, I believe. And if we query,
Brandon Minnick 57:38
And Henrique, I'm gonna wrap us up, because we only have two minutes left. Yeah, we'll have to bring you on for another show. I mean, obviously, anytime you're doing more cool stuff with Powertools, please come back. But for the folks who want to learn more, who want to keep up with you online, where can folks find you on the internet?
Henrique Graca 58:00
So I'm on X, formerly Twitter, as well as Discord. If we can show this, I'll show the slide very fast on
the community. We can go, sorry, we can go to powertools.aws.dev, and we can find everything there. There's the Discord channel, Twitter, all of that; we are there. And obviously GitHub, GitHub issues; we are there all the time, answering questions. And
Brandon Minnick 58:37
Be sure to follow Henrique. You can find him at hjgraca, that's G-R-A-C-A, online. And thank you, thanks so much for joining us for today's episode. Thanks so much. Yeah, likewise, thanks for coming on, Henrique. And don't forget to subscribe. We're here every other Monday. The .NET on AWS Show streams live to the AWS Twitch channel, and then we also release it as an audio podcast the following Monday. So if you want to hang out with Henrique and me and Sally again on a Monday, subscribe to the .NET on AWS Show Podcast and we'll see you again in two weeks.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.