The .NET on AWS Show, Featuring Dror Helper!

In this episode, we are joined by Senior Microsoft Specialist, Dror Helper! Join us as we delve into automated testing, integration testing, and more!

Brandon Minnick
Amazon Employee
Published May 20, 2024

Listen to the Audio Podcast

Watch the Live Stream



Francois Bouteruche 1:03
Okay, so Brandon is doing this every two weeks, and of course I messed up. Hi, everyone. Thanks for joining, thanks for being here today with us for a new episode of the .NET on AWS Show. I'm Francois Bouteruche, and today I'm with James Eastham. Hey, James, how are you doing today?
James Eastham 1:33
There we go, we got there eventually. There are too many mute buttons, too many different setups — I need a simpler kit, I think. Yeah, I'm doing good. It's raining, it's wet. It should be nearly spring, but it's not, still. So yeah, we can complain about the weather — I can, because I live in Britain.
Francois Bouteruche 1:51
So James, what's the news? What have you seen in the past weeks in the tech space, or in the AWS space, that caught your eye?
James Eastham 2:02
There's a new library, available in preview right now, that the .NET SDK team at AWS have built, called the AWS Message Processing Framework for .NET. What it is, is an extremely lightweight, AWS-specific messaging implementation. So if you need to publish messages to, I don't know, an SQS queue, and then you want to consume them with another service on the other side, the messaging framework makes it really easy to do that. On the publisher side you just publish your message; on the consumer side you just define your handler — you implement an interface or an abstract class, I can't remember exactly which it is — and then the messaging framework handles all the wiring in between, which is really cool. It supports things like OpenTelemetry, so you can get good observability. It's a really interesting idea because it's so AWS-focused; the support for Lambda even has a little bit of crossover with Powertools. So yeah, it's just a really interesting library. If you're doing messaging with AWS services very specifically, it's a cool one to look at.
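For readers following along in the show notes, here is a minimal sketch of what James describes. The API names below are taken from the developer-preview release of the framework and may change before general availability; the queue URL and the `OrderSubmitted` message type are hypothetical examples.

```csharp
// Sketch of the AWS Message Processing Framework for .NET (developer preview).
// Names are from the preview API and may change; treat this as illustrative.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using AWS.Messaging;

public record OrderSubmitted(string OrderId);

// Consumer side: implement IMessageHandler<T>; the framework wires the rest.
public class OrderSubmittedHandler : IMessageHandler<OrderSubmitted>
{
    public Task<MessageProcessStatus> HandleAsync(
        MessageEnvelope<OrderSubmitted> envelope,
        CancellationToken token = default)
    {
        Console.WriteLine($"Received order {envelope.Message.OrderId}");
        return Task.FromResult(MessageProcessStatus.Success());
    }
}

var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddAWSMessageBus(bus =>
{
    // Hypothetical queue URL for this sketch.
    const string queueUrl =
        "https://sqs.eu-west-1.amazonaws.com/123456789012/orders";
    bus.AddSQSPublisher<OrderSubmitted>(queueUrl); // publisher side
    bus.AddSQSPoller(queueUrl);                    // consumer side
    bus.AddMessageHandler<OrderSubmittedHandler, OrderSubmitted>();
});
builder.Build().Run();
```

The GitHub repository the hosts point to is the authoritative source for the current API shape, since the library is still collecting feedback in preview.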
Francois Bouteruche 3:11
I have heard that it's still in developer preview, and the service team is looking for feedback. Yeah,
James Eastham 3:17
absolutely. So if you are interested, the URL is somewhere down here — if I point the right way around — or over there; wherever it is, it will be somewhere on screen. That will take you to the GitHub page if you want to have a look at it. Give us any feedback in GitHub issues, anything like that — we'd love to hear it — or reach out to me or Francois directly. Yeah.
Francois Bouteruche 3:40
And for all the folks who are listening to us on the podcast, you will find the URL in the description.
James Eastham 3:49
Yeah, I always forget about the podcast. I recorded a podcast the other day and I was talking with my hands, and I was like, wait, nobody can see anything I'm doing, this is not helpful. So yeah, I forget we're on a podcast as well when doing this show.
Francois Bouteruche 3:59
Yeah. So please contribute, give feedback to the service team — this is your opportunity to influence this library. They really are looking for your feedback; let them know how they can improve. And now I just want to welcome today's guest, Dror Helper. Dror is a Senior Microsoft Specialist Solutions Architect at AWS. Dror, welcome. Welcome to the show.
Dror Helper 4:35
Thank you for having me.
Francois Bouteruche 4:39
Thank you. Thank you for being here today. So Dror, same question as for all our guests: let us know who you are and how you got into the high-tech industry and into the .NET space.
Dror Helper 4:55
So that was, well, two decades — a little more than two decades — ago. I started working as a student in a big IT company, writing C++. And it was during, or shortly after, one of those bubble bursts. So once I finished university I had no job, and I found a startup on the other side of the country which had started working with this new .NET thing — back in early 2003. We started working with C#, VB.NET, .NET 1.1. It got me very excited about that specific language, and I've been around it ever since.
Francois Bouteruche 5:48
Excuse me — you were excited about VB.NET? Visual Basic?
Dror Helper 5:54
I prefer not to answer that specific question. We had very heated debates in the beginning about which is better and which is worse. Even inside Microsoft, I guess, they had this competition going between those two teams, or languages. It's slightly too verbose for my taste, but it's still around — you can still write code in VB.NET, and we might even support it. And yeah, I've been with C# ever since, moving between C# and native languages — C++ and COM and all that sort of fun — throughout my career. Somewhere along the line I became a consultant and started freelancing. And then one day we got a new customer who was very angry with his old consultant — he had just fired him — because that consultant took the code they had, a .NET Framework monolith, and lifted it onto AWS, and they had no idea what to do. That consultant had suggested they translate all the code to — I don't remember — Ruby or Perl or Node.js or something, but it was not possible; they didn't even fully know the code they had. Luckily for us, this was just as .NET Core 1.1 — or 1.0 — came out. So we started using that on AWS almost immediately, with Lambda functions and .NET Core. And yes, we tripped on every single memory leak and bug on the way — some in the new runtime, some because we were running on Linux instead of Windows — all sorts of fun and games along the way. But the project worked, and the customer was very happy. We were able to modernize to microservices and slowly shift things from this huge monolith into Docker containers and Lambda functions. Then roughly three or four years later I started doing the same thing at AWS. To this day I just keep on doing the same thing, only in different places.
Francois Bouteruche 8:23
That's reusability.
Dror Helper 8:27
So a big part of it is the reusability of your career. Yeah.
James Eastham 8:34
Maybe, maybe it violates the DRY principle, because it sounds like you're repeating yourself a lot there. That's a programmer joke if there ever was one, isn't it? Yeah,
Dror Helper 8:47
I guess with that you only get one paycheck. Yeah, yeah.
Francois Bouteruche 8:55
Yeah. So, I know — we discussed this just before the show — that there is a topic you are really fond of: testing, and putting a testing strategy in place, especially for serverless. I'd love you to use the space today to tell us more about testing, serverless testing, and testing with Lambda, especially in .NET. So yeah, let us know.
Dror Helper 9:31
So let's start from the beginning. As I said, I used to work with customers modernizing code, and it's impossible to do that without proper automated testing. When we're talking about testing, we're not talking about the tester's job — that's an important job, and I really appreciate those people; I think they do an amazing job, and I have no idea how to do it better than them. I'm talking about what developers need to do: write automated tests to make sure that the code that functions today continues to function tomorrow and doesn't surprise you. If you start to break things, you will know about it; if you take parts of your code and move them to other places but the functionality should continue to work, you have to know about that. And for that you need good tests — especially when you go to the cloud. You don't want to start testing everything manually, outside the comfort of your own machine. Before you go and start a lengthy and expensive integration with other services in the cloud, in AWS, you want to make sure that at least your logic works. That's the bare minimum. Yeah,
Francois Bouteruche 10:50
it's the guarantee to avoid "it works on my machine only".
Dror Helper 10:57
Well, yes, that is true — though these days, with Docker and other things, we can make sure that what works on my machine works exactly the same anywhere else. But other than that, you know, the most frustrating thing as a developer — and I did that for quite some time — is when you're in the middle of implementing something new, and someone comes to you and says there's a bug, you need to fix it right now, stop what you're doing, throw everything away. People don't appreciate the fact that it takes developers time to ramp up; whenever someone stops you, it takes a while to get back after that context switch. And when you have a bug, you're already frustrated at that point. And if you have a manager — and I had those throughout my career — who will come by every five minutes to ask whether or not you've fixed it, it adds even more frustration and more stress to the whole experience. So I prefer at least to know that the things I wrote — my code — work. Then if there's a problem, maybe it's someone else's code, maybe it's the integration; at least I've eliminated the parts I'm responsible for, the things I have more control over. And when someone else changes my code, if a test fails, it's their problem, not mine, and they know they did something wrong. Because if they change it and don't notice the failure, then I'll get a call in the middle of the night, or at the end of the day, to come fix something where I have no clue, no idea, what was done. So I do really think those tests are important.
Francois Bouteruche 12:52
And the thing you mentioned about interruption — it has some scientific proof of its effect; it has been studied. There's a professor at the University of California, Irvine — I don't remember her name — but they proved that it takes roughly 25 minutes or so to recover and get back into your flow when you're interrupted. So you're working, you're in your flow, you're interrupted — it takes you 25 minutes to get back up to speed. So it really has an impact on productivity. Yeah,
Dror Helper 13:36
it's very frustrating. You know, your mental health is being hurt if you get enough of those throughout the day, or your week. Also, when you look at it — why are we talking about serverless? When you think about it, in the case of a Lambda function, the thing you're responsible for, the code, is in almost its purest form. Everything is your code on one end; on the other end there are a lot of other services that either trigger your code, or are triggered and used by your code, which you have no control over. But you do need to make sure, when you get to the integration, that at least the parts you wrote work exactly as you intended them to. So with a Lambda function, on one hand it's very easy to unit test — it's easy to write integration tests, even end-to-end tests. On the other hand, you can spend, or waste, a lot of time deploying to the cloud, seeing something not working, changing your code, deploying again, and again — doing those manual steps, this round trip back and forth. I don't know if it's a waste of money — maybe not a lot, unless you really go overboard — but it does waste your time doing it this way. Now, I know James would agree, because James and I are part of a team within AWS that feels very strongly about serverless testing. We've been working to help developers learn more and to have good examples and good documentation on serverless testing: how to do it, what we suggest — although, even between us, everyone has their own approach, their own idea of how to do it. We did try, on one end, to be very specific, very opinionated; on the other end, to give you enough information to figure out if that's the way you want to go. And not just for .NET, by the way — the whole team, whom I got to know before I joined, has been very busy writing examples for TypeScript, Python, Java, and also .NET. Yeah,
Francois Bouteruche 16:07
you're mentioning Serverless Land, right?
Dror Helper 16:15
Yeah, so Serverless Land — the testing patterns, all of those, are also on GitHub. Basically we write them, deploy them to GitHub, and then they get synchronized and built into Serverless Land. It's a very good place: if you're writing Lambda functions, regardless of the testing part — which is important — Serverless Land is a good resource for learning a lot of things, from infrastructure as code to how to connect to specific services. And we also have the section on test samples in there, and we have specific samples for .NET. It's a huge group — a lot of people helped us, also on the .NET part. And there are a lot of initiatives like that within AWS which are semi-formal, meaning it's not something someone told us to do; it's something we felt very passionate about doing. For one, it provides guidance and help to the customers we don't see day to day. As a specialist Solutions Architect I do get to work with customers, but obviously not all of them, and not all the time. So it's nice to have somewhere to point to for more information. It's also very good for the customers I do work with — "there's a sample you should check, it's over there" — and that can save everybody a lot of time.
Francois Bouteruche 17:47
Yeah. And personally, I certainly appreciate the work you've done with this group, because I hear it a lot from our customers — I've discussed it with many developers out there. I have too many discussions like, "Oh yes, Serverless Land — I've seen it's for Python and JavaScript." And I'm always: no, no, it's not only for Python and JavaScript. We have deep support for Java, we have deep support for .NET and C#. And I'm super proud to see all the content for C# and .NET you've provided to Serverless Land, just to let people know: hey, you can use AWS if you're writing C# and .NET code. That's an amazing series for this stack.
Dror Helper 18:42
Although — yeah, being in the interviewee chair here, I have to give credit where credit is due: most of the samples are your co-host's work, at least at the beginning. I joined for the testing part, and did quite a bit of those as well.
James Eastham 19:04
You say that, but you did all the heavy lifting — you did all the hard ones. I did the easy ones.
Dror Helper 19:11
Yeah, and we are continuing this work as well; we're just figuring out how to approach it. But let's dive into the code, because I can continue preaching about unit testing forever.
Francois Bouteruche 19:30
Yeah, please, show the code.
Dror Helper 19:33
Here we are — can you see it? Yeah. Someone called me a unit testing zealot in the past, and that's fine — I take that as a compliment. What we want is essentially to help developers understand that it's basically easier to write unit tests than not to write them. And it's counterintuitive, because when you think about it, writing the code and the unit tests will most likely take more time than just writing the code — that's simple addition: a plus b is bigger than a. But the fact is that you would have spent that time anyway, with interest, when you do this fun round trip of deploy, test manually, deploy again, ship it — and then the tester comes to your chair and tells you there's a bug, and then you say, "it works on my machine," and then they bring their manager and you bring your manager, and no one's happy with that particular engagement. So you do want those tests. But the problem when you teach developers to write unit tests is that it takes roughly a week or two to really understand the benefit — and you have to keep writing the tests for those two weeks. There's no way to get, in a day or in an hour, the understanding of how it helps you. You can provide a lot of samples, but until the tests save you in one form or another, you won't really feel it — and once they do, you'll probably figure out that you want to write them from now on.
Francois Bouteruche 21:18
I have a quick question for you, because that's something that struck me — or at least, this is a position of mine. I feel my unit tests are more stable than my actual code. Once they are written, they are there, they stay there, and I use them regularly, while I change and rewrite the code regularly. So at some point I was like: okay, it's more useful to invest in my tests, because that code is very stable, while the other code is quite cheap — I change it very regularly.
Dror Helper 22:07
Right, and that's the trick: write the tests in a way that they do not depend on the way you implemented your code. It does take some practice getting there — again, you have to commit to trying it out for, you know, ten days to two weeks, roughly. And then you'll figure out that you can change your code easily. A lot of customers I see — I handle a lot of migrations with customers now, either migrations or modernizing existing code, .NET Framework to .NET Core, or to .NET 6 and 8 now — are really afraid. They want to plan everything in advance, and the reason is that they don't feel comfortable changing the existing code. And you don't want to be there; you want to be able to change your code as much as you want. Because code, at the end of the day, will change — always. You have new features, you have new bugs, you fix things, you change things, and you want to make sure it keeps on working. That's the gist of it. And, you know, there's a book — a fairly old one — called Working Effectively with Legacy Code. I don't know if people still read it; it's 20 years old but still relevant. Its definition of legacy code is code without tests, because that code can only deteriorate — it can never improve, because you'll break something. Now, I did meet a lot of developers throughout my career who have the ability to sort of compile everything in their head, who can grasp huge code bases, and maybe they don't need any tests — they can do without. Those people are out there. The problem is that they are also working within a team of other people, or they might at some point want to move to another project. So we always need this safety net, which also helps you evolve your code as you go along.
Francois Bouteruche 24:20
It's interesting — with those folks, you basically want to bring what they have in their mind into tests, because otherwise you're stuck with them. They know how the code works — but then, one day, they are not there anymore. Exactly.
Dror Helper 24:41
And even if they don't leave, they become managers, or move to another project, or at some point even forget what they implemented two years ago. So it's very hard to do without. Now, as I said, with a Lambda function it's really simple. Let me show you what I mean — once I figure out the screen share. Here we go. Because your bits, the parts you're responsible for, are at the end of the day a function, from which you can easily take away parts and put in other parts — which we call dependency injection — and usually your code does work with other bits. Now, we have a lot of examples in the GitHub repository — or you can go to Serverless Land — but every one of them shows a different thing, so I'll try to focus on one or two. In this case we're talking about something which is non-deterministic, although when you run the tests you might think it is deterministic. We have a Lambda function that gets an event from S3 and then writes the result to a queue. Now, with the event from S3, there's no guarantee when it will arrive. It will probably happen pretty fast, so in most cases the test will pass — if we treat this whole system as one end-to-end test. And then on your build server, from time to time — usually right before a release, that's when those things happen — it will fail. And you'll get used to the fact that this is the test where everything passes and fails and passes and fails. What a developer will do in this case is always remember to run the test again when it fails — run it again, ignore the result. At that point the test has no meaning, because it does not tell us anything. So we want a more deterministic way to test this kind of behavior.
So, the different kinds of tests. We talked about unit tests. A unit test essentially just takes the code within the Lambda and, in a way, grabs all the dependencies: if I need to read something from S3, I want a way to fake that; if I want to write stuff to a queue in the Simple Queue Service, I want to throw that away too. And I do that using dependency injection. All the AWS SDK clients have interfaces, which I can easily replace with fake objects, or mocks — those objects inherit, well, depending on the tool you're using in .NET. In my distant past I used to work for a company called Typemock, which used a different method of faking objects. But when you go to open-source solutions like FakeItEasy, NSubstitute, and Moq, they use inheritance to take the class you're trying to fake and override all the methods, replacing them with absolutely nothing, or default behavior. So if we go here to the unit test, we can see that — and this is using FakeItEasy; that's one of my favorite mocking frameworks, but that's just my preference. In .NET you can find others.
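Since the on-screen code isn't visible to podcast listeners, here is a minimal sketch of the kind of test being described — an xUnit test that fakes the SDK's `IAmazonSQS` interface with FakeItEasy. The `Function` class and `S3EventBuilder` helper are hypothetical stand-ins for the sample shown on screen; the SDK request/response types are real.

```csharp
using Amazon.SQS;
using Amazon.SQS.Model;
using FakeItEasy;
using Xunit;

public class FunctionTests
{
    [Fact]
    public async Task Handler_ReturnsMessageId_FromSqsResponse()
    {
        // Arrange: IAmazonSQS is an interface, so FakeItEasy can generate a
        // stand-in that never talks to AWS.
        var sqs = A.Fake<IAmazonSQS>();
        A.CallTo(() => sqs.SendMessageAsync(
                A<SendMessageRequest>._, A<CancellationToken>._))
            .Returns(new SendMessageResponse { MessageId = "1234" });

        // Hypothetical Lambda function under test, taking the client via
        // constructor dependency injection.
        var function = new Function(sqs);

        // Act: invoke the handler with a canned S3 event (builder not shown).
        var result = await function.FunctionHandler(S3EventBuilder.Default(), null);

        // Assert: the handler surfaced the message id from the fake response.
        Assert.Equal("1234", result);
    }
}
```

Because everything runs in memory, a test like this takes milliseconds — which is what makes the "bunch of those" Dror mentions later practical.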
Francois Bouteruche 28:21
For the folks on the podcast: you're showing some C# code in Rider — a unit test. And as you are using the Fact attribute, I guess you're using xUnit as your testing library. For those who are on the podcast, you can watch the video replay on the AWS Twitch channel if you want to see the details.
Dror Helper 28:50
Yes, and I'll try to cover as much as I can verbally. Usually with unit tests, no matter which language, there is a way to mark a method as what we call, quote-unquote, a unit test — although at the end of the day the unit testing framework is just a way to run methods; it has nothing to do with unit tests per se and doesn't care what you write inside them. In xUnit you use the Fact attribute so the test runner — in this case Rider, but it could be Visual Studio or any other application that knows how to look for those specific attributes — can grab that method and just run it. If it throws an exception, the test fails; if it doesn't, the test passes. Simple as that. And usually I have three parts. This is the first part, in which I arrange: set everything up, get the fake objects I need, and create the necessary event using a helper method — I use a builder to create the event I want. I even set an environment variable here, which is usually a big no-no in unit tests, since you don't want that kind of shared state; there are probably better ways. And then I tell that fake object I've created, which mimics my SQS client — let me just close this — that when that method is called, this result should be returned, essentially saying the message was sent successfully.
Francois Bouteruche 30:24
So if I'm correct, you're setting up your fake object to say: hey, if someone calls the SendMessageAsync method, you will return a SendMessageResponse saying it's okay, and the message ID is 1234.
Dror Helper 30:44
Yes — and I'm trying to keep it simple. Essentially, the reason I'm doing that is that the method I'm testing, the function handler, will have code that at some point tries to do exactly that: I have a send-message call that serializes the message, and somewhere deep within my Lambda function it will call the client. Now, the reason I can use that in my test is because of what I did: I created a fake client and pushed it inside my function using constructor dependency injection — a constructor I created, because usually with Lambda functions you don't have parameters in the constructors. So essentially, instead of initializing the real client, I just pass a fake one, and the code runs exactly the same; I focus on the code within the Lambda function and nothing else. Everything that is an external dependency goes away. And all this just to call the function — the Lambda function entry point, depending on how you write your Lambda code. That's what we did here. And then I just assert that I get back the result, which is the same ID I got back in the message. It's a very simple, trivial test. Unit tests should be simple: we don't want any logic, we don't want something that can create a bug in my tests, because then I might be testing the wrong thing. And I don't want just one unit test — I want a bunch of those; that's the important thing. I do have, well, not a lot, but quite a few tests for fairly trivial code. The important thing is focusing on the common cases and the problematic ones: I didn't get the right argument, the file was not found, the queue URL was not set, I didn't have the environment variable I need in order to perform the operation.
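The constructor-injection pattern Dror describes can be sketched like this. The class shape and property names below are illustrative rather than the exact repository sample; the SDK types (`IAmazonSQS`, `S3Event`) are real.

```csharp
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.SQS;
using Amazon.SQS.Model;

public class Function
{
    private readonly IAmazonSQS _sqs;

    // The Lambda runtime uses the parameterless constructor...
    public Function() : this(new AmazonSQSClient()) { }

    // ...while tests use this one to pass in a fake client.
    public Function(IAmazonSQS sqs) => _sqs = sqs;

    public async Task<string> FunctionHandler(S3Event evnt, ILambdaContext context)
    {
        // Configuration comes from the environment, as discussed in the show.
        var queueUrl = Environment.GetEnvironmentVariable("QUEUE_URL")
            ?? throw new InvalidOperationException("QUEUE_URL is not set");

        // Forward the S3 object key to the queue.
        var key = evnt.Records[0].S3.Object.Key;
        var response = await _sqs.SendMessageAsync(new SendMessageRequest
        {
            QueueUrl = queueUrl,
            MessageBody = key,
        });

        // Returning the message id makes the behaviour easy to assert on.
        return response.MessageId;
    }
}
```

Swapping `AmazonSQSClient` for a fake behind the same interface is what lets the unit test exercise only the code inside the function.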
Now, strictly speaking, as I said, setting environment variables isn't really something to do in a unit test — I probably should have done it a different way if I wanted a pure unit test. But at the end of the day we have a job to do, and this was simpler; as a developer it's all about where you invest your time and where you get value. Now, the cool thing about unit tests is that once you use a unit testing framework, running them is just a matter of pressing play, and everything runs pretty fast — those unit tests are all in memory, so it should be a few milliseconds once everything is set up. Then I put it in my build server and everybody's happy. But I'm not testing everything — I'm just testing my code. So if I have a problem after deploying, I know it's in my infrastructure: I didn't connect everything properly, I expected to get an event but actually the event arrived in a completely different shape which I didn't figure out, or I don't write correctly to my queue. So I need something else. Because at the end of the day, a unit test is a little bit of an exercise in self-delusion: I take everything away and I tell my test how everything else — the things I'm not responsible for — is supposed to work. But I have to hope I'm doing that correctly, and I don't always get it right. So I probably want something else: I want to check those integration points, and that's where I really need to use actual external dependencies — queues, S3, and so on. Usually those are integration tests. In xUnit I have the notion of fixtures, which enable me to set things up. So essentially I'm going to create a queue, because I want to check that the queue is being written to correctly and that my message is exactly as I expected it to be.
So I just create the fixture, and in it I create the queue. When I create things on AWS, or any external dependency, I have to be wary of two things. Here in my code you can see I paste a GUID, or use a timestamp or something, into my queue name. The reason I do that is to prevent problems with two concurrent test runs — two different people, or the build server — because not everybody has their own AWS account in every company. So I want to make sure that every test run uses a different set of resources, and depending on the resource, you find your own way to do this kind of uniqueness. For queues it's really simple: just give it a name with a timestamp, or your computer's unique name, or a GUID. And also make sure that at the end of the day, when you finish running the tests, you dispose of everything — you don't want to find out at the end of the month that, ever since the developers started, a lot of resources have just been hanging around cluttering your AWS account. So I created this fixture that will run before all my tests and will clean up after them all. Yeah, that's...
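The fixture Dror describes — create a uniquely named queue before the tests, delete it afterwards — can be sketched with xUnit's `IAsyncLifetime` and `IClassFixture<T>`. The names here are illustrative, not the exact sample code.

```csharp
using Amazon.SQS;
using Xunit;

// Runs once per test class: creates the queue before the tests,
// deletes it after the last one finishes.
public class SqsQueueFixture : IAsyncLifetime
{
    public IAmazonSQS Sqs { get; } = new AmazonSQSClient();
    public string QueueUrl { get; private set; } = default!;

    public async Task InitializeAsync()
    {
        // A GUID in the name keeps concurrent test runs (two developers,
        // or the build server) from colliding on the same resource.
        var name = $"integration-tests-{Guid.NewGuid():N}";
        var created = await Sqs.CreateQueueAsync(name);
        QueueUrl = created.QueueUrl;
    }

    public async Task DisposeAsync()
    {
        // Clean up so resources don't accumulate in the AWS account.
        await Sqs.DeleteQueueAsync(QueueUrl);
        Sqs.Dispose();
    }
}

// Tests in this class share the one queue created by the fixture.
public class QueueIntegrationTests : IClassFixture<SqsQueueFixture>
{
    private readonly SqsQueueFixture _fixture;
    public QueueIntegrationTests(SqsQueueFixture fixture) => _fixture = fixture;
}
```

Sharing one queue per class is the balance discussed later in the episode: cheaper than one queue per test, but it means the tests must not interfere with one another.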
Francois Bouteruche 36:33
that's quite a shift. Just to highlight this — I was thinking about what you've just said — it's an opportunity the public cloud brings to the table: it's so easy here to create a queue and just delete the queue at the end of the test. I've been there in the past, in companies with on-premises setups, and when you need a new queue, a new message broker, it's like: okay, just fill in this form and you'll get what you need in two weeks. So that's also a big shift in how you can do things. It's so flexible to create new resources, use them to test the integration, and then just destroy those resources. Right. And
James Eastham 37:29
so on that note — if you were running this on your build server, your build server would also need access to be able to create and delete queues, right? That's kind of a prerequisite of all this; otherwise your tests could fail for something completely different. However you're running your builds — whether that's GitHub Actions or whatever it is — you need those permissions, right?
Dror Helper 37:51
Right. And some companies don't allow developers to spin up, you know, queues and S3 buckets and DynamoDB tables, or whatever — you might even need a virtual machine there. And yeah, it's a good opportunity to show the alternative. It doesn't need to be AWS, by the way: you might be using a Postgres database, which you can install anywhere, or just run in Docker, started at the beginning of the test run. Docker gives you a lot of power in that regard — a lot of dependencies can be started in Docker; you run the tests, then tear it down, and you don't have these resources hanging around. Those kinds of tests, integration tests, exercise the interaction between different parts and need the actual external dependency, or something that represents it — it doesn't necessarily need to be production-grade, but it's close enough — so you can make sure that your logic, your queries, your messages, or your API calls are exactly what you need. The problem with those is that they share state, which is what you try to avoid in testing: you don't want two tests to reach the same place and affect one another. Here, essentially, if two tests try to write to the same queue — the same place, the same account, the same resource, or the same database — I need to make sure they don't somehow affect one another. So yeah, that
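The "Postgres in Docker" idea Dror mentions is commonly done in .NET with the Testcontainers library. This is a sketch, not the show's own code; the package name (`Testcontainers.PostgreSql`) and API are quoted from memory and may differ between versions.

```csharp
using Testcontainers.PostgreSql;
using Xunit;

// Starts a throwaway Postgres container before the tests run and
// removes it afterwards, so no local install or leftover state remains.
public class PostgresFixture : IAsyncLifetime
{
    private readonly PostgreSqlContainer _container =
        new PostgreSqlBuilder()
            .WithImage("postgres:16-alpine")
            .Build();

    public string ConnectionString => _container.GetConnectionString();

    public Task InitializeAsync() => _container.StartAsync();

    public Task DisposeAsync() => _container.DisposeAsync().AsTask();
}
```

The same fixture pattern from the SQS example applies: tests take the fixture via `IClassFixture<PostgresFixture>` and use `ConnectionString` to reach the containerized database.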
James Eastham 39:28
That was going to be one of my follow-up questions, actually, when you're using queues. And this is probably a problem very specific to queues. Because you're creating this queue as part of the fixture, that means, correct me if I'm wrong, this code will run before any of the tests run. Correct. So if you then have four integration tests, and you might be getting on to this, sorry if you are, but if you then have four integration tests that run, they're all going to share the same queue. How do you then distinguish between them? Maybe I've segued into this perfectly, but in the past I've had a challenge where you end up with three messages in the queue. How do you know which message relates to which test? You receive your messages, and you get all three. So let's see.
Dror Helper 40:14
That's correct, and that's a balance. In the past, one of our colleagues, I think it was Dan Fox, did a session on that, and he talked about the fact that, especially with queues, you can have messages lying around between your tests, or you might even run tests concurrently, and then you end up with a lot of different problems. How do you account for that? At the end of the day it's a balancing act between creating a lot of resources, because you can create one queue per test, but then they accumulate and it takes more time. I don't know if it matters a lot at small scale, but if you have 100 or 1,000 tests, it's probably not a good approach to have 1,000 queues out there, and you do get to those numbers at some point if your project is successful. So you need to find the balance between having a lot of resources and having one and somehow managing it. In this case, what I did for those specific tests, and this is where, in a way, you account for your specific problem rather than trying to solve all the problems: when I do "get next message", I tend to ask for the specific message I'm looking for, if I have different kinds of messages. I don't think I did it here, but I might do it in other projects in our samples, because we want to avoid the problem of something left over from one test. In this case, I know there is a problem, so I specifically run those tests sequentially, and I always take one message. But if one test fails, it will probably affect other test runs, and that can be a problem: I might get false failures, because one test failed and caused problems in the other tests. So I do tend to create methods that ask for a specific message. For those cases, I want to make sure that I got this specific ID or that specific payload.
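A sketch of that "ask for the specific message" idea, assuming each message body carries a correlation ID the test already knows; the helper name and timeout handling are mine, not the sample's:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class QueuePolling
{
    // Repeatedly receives messages and returns only the one whose body
    // contains the given correlation id, ignoring leftovers from other tests.
    public static async Task<Message?> WaitForMessageAsync(
        IAmazonSQS sqs, string queueUrl, string correlationId, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = queueUrl,
                MaxNumberOfMessages = 10,
                WaitTimeSeconds = 5 // long polling, cheaper than tight loops
            });

            foreach (var message in response.Messages)
            {
                if (message.Body.Contains(correlationId))
                {
                    return message; // the one this test is waiting for
                }
                // Not ours: leave it on the queue for whichever test owns it.
            }
        }
        return null; // let the caller fail with a clear timeout assertion
    }
}
```

As Dror notes, this reduces cross-test interference but does not eliminate it; stale messages can still pile up if tests crash mid-run.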
James Eastham 42:41
Okay, so you receive all the messages and apply a filter to see if you've got the message that you want within that set of messages.
Dror Helper 42:48
Yes, but again, let's say you have three messages left over from a previous run, or maybe someone else has somehow sent something to that queue. It reduces the problem of getting false positives; it does not eliminate them completely. And that's why another approach is not to use AWS at all in this case. Because what I care about is not whether the SQS service works; I just want to make sure that I know how to call it correctly. And there are a bunch of local AWS service emulators out there which know how to mimic a specific service. They don't give you the same functionality as far as performance or credentials, you know, IAM doesn't affect them in any way, but they enable me to have a dedicated instance for my tests, something that I can easily start and stop without worrying that someone else hijacked my resource by going into my AWS account. That's especially beneficial in companies that don't allow developers to install things on AWS all the time, or only within some scope. This is something that you can see in this sample. And I think James did something similar with Testcontainers, also in our test samples here.
James Eastham 44:31
I did, yeah. Very quickly, Dror, for anyone who's not familiar: Testcontainers is a project that originated in Java, I think, but they've got SDKs for lots of different runtimes now, and it allows you to start up Docker images as part of your tests. That's kind of the simplest way to explain it. So in your fixture, Dror, you've actually written an example here where you're running the docker run command yourself, but the Testcontainers SDK allows you to start the containers up as part of the test run, yeah, exactly, and then stop them again after the tests run.
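For listeners who want the Testcontainers route James mentions, here is a minimal sketch with the Testcontainers package for .NET, starting DynamoDB Local for the lifetime of an xUnit fixture. The image and port are the standard ones for `amazon/dynamodb-local`; treat the exact builder calls as an assumption to verify against your installed package version:

```csharp
using System.Threading.Tasks;
using DotNet.Testcontainers.Builders;
using DotNet.Testcontainers.Containers;
using Xunit;

public sealed class DynamoDbLocalFixture : IAsyncLifetime
{
    // DynamoDB Local listens on 8000 inside the container; a random host
    // port avoids clashes with anything else running on the machine.
    private readonly IContainer _container = new ContainerBuilder()
        .WithImage("amazon/dynamodb-local")
        .WithPortBinding(8000, assignRandomHostPort: true)
        .WithWaitStrategy(Wait.ForUnixContainer().UntilPortIsAvailable(8000))
        .Build();

    // Service URL for the AWS SDK client, built from the mapped host port.
    public string ServiceUrl =>
        $"http://{_container.Hostname}:{_container.GetMappedPublicPort(8000)}";

    public Task InitializeAsync() => _container.StartAsync();

    public Task DisposeAsync() => _container.DisposeAsync().AsTask();
}
```

The wait strategy replaces the hand-rolled "ping it until queries come back" loop Dror describes later: the fixture only completes startup once the port is reachable.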
Dror Helper 45:02
And Testcontainers, I think, is a better way. I did it the quick and dirty way: I'm using Process.Start in this code, so I'm running Docker and executing an image. In this case, the image is amazon/dynamodb-local. We have an image for running a DynamoDB emulator, which is great for tests: it's very easy to set up and start, and it's relatively quick, and you can set up a local DynamoDB. Obviously it will not limit your read and write capacity, and it will not check your permissions; those have to be tested in different places. But it will enable you to test the things that, especially with .NET, you might get wrong. You know, you have a lot of APIs with DynamoDB; you might get something wrong, you might not initialize it correctly. Those things you can test, logic-wise, before deploying to the cloud. You prefer to do that on your machine.
James Eastham 46:03
That's a really important caveat here, something that I've hit my head against multiple times with any of the AWS SDKs in .NET. Do you see the way, for anyone listening on the podcast, Dror is creating a new configuration object for DynamoDB, and he's manually setting the service URL to localhost, which is where DynamoDB Local is going to be running? If you set the region as part of this configuration, setting the region manually overrides the service URL. So if you've set the service URL to localhost and the region to eu-west-1, it will default to the eu-west-1 service endpoint and override whatever you set as localhost. So just be wary, anybody listening: don't set the region if you're setting the endpoint.
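The caveat James calls out looks roughly like this in code; port 8000 is DynamoDB Local's default, and the dummy credentials are an assumption (the emulator accepts any):

```csharp
using Amazon.DynamoDBv2;
using Amazon.Runtime;

// Points the client at DynamoDB Local instead of the real service.
var config = new AmazonDynamoDBConfig
{
    ServiceURL = "http://localhost:8000"
    // Do NOT also set RegionEndpoint here: setting a region makes the SDK
    // resolve the regional endpoint and ignore the ServiceURL.
    // RegionEndpoint = Amazon.RegionEndpoint.EUWest1  // <- this would win
};

// DynamoDB Local does not validate credentials, so placeholders are fine.
var client = new AmazonDynamoDBClient(
    new BasicAWSCredentials("local", "local"), config);
```

Against the real service, the same code collapses to a one-liner, as Dror shows next: `new AmazonDynamoDBClient()` with the region picked up from the environment.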
Dror Helper 46:53
Especially with emulators, I found myself writing code I don't really like, which is: if you're running locally, use the service URL; otherwise, use the region, and the client is constructed that way. We need to find a better way. But it is another option. In fact, in this sample I did both, and it's not trivial getting that thing to work, you know; creating things with Docker takes time. I did it both ways in order to see the difference. And the difference is that if you use the AWS version, all you need to do is create a new DynamoDB client. That's basically the only diff.
Francois Bouteruche 47:40
Can you close the annotation?
Dror Helper 47:43
One sec... without the annotation, here we go. So, the local version versus the cloud version. The cloud version is one line long: just initializing the client that points to whatever you created, basically with your region, and that's it. Specifically because DynamoDB is relatively simple to initialize: once you have the table and everything, you just need to know which region it is in, and then you get the table and get your information. So yeah, starting things locally, you have to work for it, although Testcontainers is nicer than the way I am showing it. But on the other hand, if you want to run locally, or you cannot use your own account, or you don't have the correct permissions to start things up, you can at least make sure that you're calling the APIs correctly, that you're using the AWS SDK correctly. You don't need to test AWS, and you don't need to test the AWS SDK, but especially with DynamoDB, if you have very complex queries, for example, those are things you want to make sure are logically correct before going and running things manually in the cloud.
James Eastham 49:05
Yeah, I'm a big fan of DynamoDB Local. I used to be pretty against emulators, for the reasons you've described: it doesn't test IAM, it doesn't test that stuff. But with DynamoDB Local, especially when you get into queries and filter expressions and things like that, the iteration cycles are so much quicker. And the same if you're using Postgres or SQL Server in Docker. So I am a big fan of local.
Dror Helper 49:30
The only caveat, again, is that, for example, SQL Server in Docker is harder to set up, especially in cases where it's Windows-based, and I don't think there's still a container image being maintained by Microsoft; I think at this point they don't do that. You can probably create one yourself, but it's quite big. And Postgres takes a while to start up, whereas DynamoDB Local, which was built especially for these needs, starts up pretty fast. So you don't want to create your Docker container for every single test run; you want one per bunch of tests, per fixture. And then you need to ping it until you see that you get queries back, which means that it has started running, but other than that, you're good to go, and you can clean it up afterwards. Obviously, you can always install those dependencies directly, but I found it easier to use Docker nowadays, because it's easily cleaned up, and it's very easy to run in other environments, you know, on your build server, on other machines, without those long documents. We used to have, and still do, long Word documents explaining what you need to install on your machine and your build server to make sure that everything runs, everything compiles, everything works. With Docker, it seems like some of that went away.
Francois Bouteruche 50:57
Just a heads up: we have nine minutes left. So I just want to make sure that if you have anything else to show, you have time.
Dror Helper 51:07
Yeah. So, briefly, on the last things. We talked about integration tests, we talked about unit tests. Now, you don't have to do everything, but I do believe that if you have unit tests, at least you know that the problem is not in the logic. If you have integration tests, especially for the problematic, complex integration points, you know that the problem is not there. And the last kind of automated test we'll talk about is end-to-end, which is slightly different. End-to-end means I'm running the whole system, in this case my part of the system, from setting something in an S3 bucket all the way to seeing something in the queue. And obviously, you should check those examples; all of them come with documentation explaining exactly what we tried to show and what you need to be wary of. But with end-to-end tests, the big difference is that you usually run them after deployment. I like unit tests and integration tests that I can run on my machine, without the cloud or with the cloud depending on my choices, but without deploying the whole system first. With end-to-end, I do want to test the whole system; that's the purpose. It tests things I haven't tested in those previous tests: I've already tested the logic, I've tested the integration points, but I haven't tested my whole system end to end, with permissions, with setting it up correctly, all of that. Now, this is usually where a lot of companies start from when they talk about test automation, because it seems logical to deploy everything and test everything. But those tests tend to be more brittle and take longer to run, even with Lambda functions. So you do want to make sure that you're only testing the main workflows, the main scenarios that you really care about.
Francois Bouteruche 53:00
And this is where you come to the conversation about: okay, do I test in production? Personally, I would say, of course you test in production, because you don't want your users to discover an issue. You want to discover the issue in your production environment before your users do.
Dror Helper 53:22
Yes, but first you want to run it on staging, definitely, before you get to production, if you have a staging or dev account. Especially with Lambda: the nice thing about automated testing with Lambda is that it doesn't need a lot of changes as you move between environments, if you do it correctly. What I mean is that, in this case, I'm using SAM, so I have a script for how to deploy the Lambda, so it doesn't matter which account; it will be the same. And that's something I learned from James: in my tests I have a fixture, but that fixture will take the same stack I just deployed in AWS, because I'm running post-deployment, and pull all the information out of it: the queue name, the S3 bucket name, and the Lambda. And then, basically, it's end to end. So I need to create a new file, put it in S3, and then check that it came out on the other end. I probably need to do polling until I get the message on the other end, because we are talking about something which is asynchronous, but other than that, it's very similar to what I did in the unit tests, just with all the dependencies.
James Eastham 54:31
And just, again, for people on the podcast, sorry to interrupt this quickly. What Dror was doing there is deploying the actual application with SAM, which compiles down to CloudFormation. With CloudFormation, you can define outputs, so you can output a queue URL, you can output the function name and the function ARN. And then, in the actual fixture for the tests, he's using the CloudFormation .NET SDK to read the CloudFormation stack, so you can programmatically get access to those same outputs, which means you can read the exact queue URLs, ARNs, and S3 bucket names from the stack.
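The fixture James describes can be sketched like this with the AWSSDK.CloudFormation package; the stack name and output keys below are placeholders for whatever your template actually defines:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Amazon.CloudFormation;
using Amazon.CloudFormation.Model;

public static class StackOutputs
{
    // Reads one output value from a deployed CloudFormation stack,
    // e.g. the queue URL or bucket name the end-to-end test should use.
    public static async Task<string> GetAsync(string stackName, string outputKey)
    {
        using var cloudFormation = new AmazonCloudFormationClient();

        var response = await cloudFormation.DescribeStacksAsync(
            new DescribeStacksRequest { StackName = stackName });

        return response.Stacks
            .Single()
            .Outputs
            .Single(o => o.OutputKey == outputKey)
            .OutputValue;
    }
}

// Usage in a test fixture (names are illustrative):
// var queueUrl = await StackOutputs.GetAsync("my-app-stack", "QueueUrl");
```

Because the values come from the deployed stack itself, the same test binary runs unchanged against any account or environment the stack is deployed to.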
Dror Helper 55:10
Yeah, another option is to pass everything as environment variables from your build, but this is nicer, because it doesn't require me to do any additional manual steps; I've already deployed it. And it doesn't matter which infrastructure as code I'm using. With Lambda, you probably want to use one: CDK, CloudFormation, SAM, or, I don't know, whatever else, Terraform, Pulumi, whatever it is. All of them have the ability to give you that information, so you don't need to configure it for your machine, for your fellow developer's machine, for your build servers, and so on. And the tests are simpler. This test, and we'll finish with that, basically creates a new file and puts the data in S3, because that's the workflow we're testing, so I'm using the SDK for that. And then I can automatically make sure that my deployment is correct, that all the bits are connected together, and that all my permissions are configured. Those are the parts I couldn't test in my previous tests, which ran on my machine without that specific deployment. And yes, once you have that, you can run the same test in any environment; it's up to you.
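Putting the pieces together, the end-to-end flow Dror describes, upload a file and then poll until the message appears, might look like this; the bucket and queue values would come from the stack outputs, and the key naming and timeout handling are mine:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class EndToEnd
{
    // Drives the deployed system from the outside: drop a file in the
    // bucket, then poll the queue until something comes out the other end.
    public static async Task<Message?> RunAsync(
        string bucketName, string queueUrl, TimeSpan timeout)
    {
        using var s3 = new AmazonS3Client();
        using var sqs = new AmazonSQSClient();

        // A unique key ties the queue message back to this test run.
        var key = $"e2e-{Guid.NewGuid():N}.txt";
        await s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = bucketName,
            Key = key,
            ContentBody = "hello from the end-to-end test"
        });

        // The pipeline is asynchronous, so keep receiving until the
        // message for this specific object shows up or we time out.
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            var received = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = queueUrl,
                WaitTimeSeconds = 5
            });

            foreach (var message in received.Messages)
            {
                if (message.Body.Contains(key)) return message;
            }
        }
        return null; // the test asserts non-null with a clear failure message
    }
}
```

A passing run here exercises exactly what the local tests could not: the deployment wiring and the IAM permissions between the bucket, the function, and the queue.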
Francois Bouteruche 56:30
We need to wrap up, Dror. Thanks for all the good insights. If people want to connect with you, whether online or in person, where can they connect with you?
Dror Helper 56:46
Okay, so first of all, online, on Twitter. And my amazon.com email is always the first letter of my first name plus my last name, Helper. So you can look for me on Twitter, or you can send an email. Other than that, on May 13 to 17 there's a conference in London called the SDD Conference, a conference with a lot of .NET. I'm going to talk about serverless testing, exactly this, and also about deployment: deploying ASP.NET microservices to AWS. If you're there, just, you know, stop and say hi.
Francois Bouteruche 57:29
Okay. I also just want to share one YouTube video: it's one of your talks at the last re:Invent, so people can watch that video on these topics. Thanks, thanks a lot, Dror. I know you were a bit afraid we wouldn't be able to fill the hour; I know we could fill another hour. So thank you.
James Eastham 57:58
It's never enough time, an hour. It's never enough.
Francois Bouteruche 58:02
Definitely. So we'll see you soon: we have another .NET on AWS show in two weeks from now. Thanks for your time, folks. Enjoy your day, and see you soon.
James Eastham 58:16
Thanks, everyone.
Dror Helper 58:18
Thank you

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.