The .NET on AWS Show, featuring Jim Bennett!

In this episode we are joined by the Head of Developer Advocacy @ Pieces, Jim Bennett! Join us as we learn how GenAI can make our lives easier.

Brandon Minnick
Amazon Employee
Published Oct 7, 2024

Listen to the Audio Podcast

Watch the Live Stream


Transcript

Brandon Minnick 1:00
Hello everybody, and welcome back to another episode of The .NET on AWS Show. My name is Brandon Minnick, and with me, I am so happy to have him back, so happy to see him again, it's my amazing co-host, Francois. Francois, how was your three weeks? How was your month?
Francois Bouteruche 1:29
Yeah, a three-week break. An amazing chance to live in France, of course. And yes, it was amazing. During my vacation, I had the opportunity to attend the Olympic Games for one day with my son. It was an amazing experience. Of course, it was in Paris, so it was amazing. Yeah, it was really great.
Brandon Minnick 1:55
It looked incredible on TV, you know? I mean, Paris is such a beautiful city. And, yeah, just seeing, you know, the triathlons, the Olympics, even, like, I think volleyball and tennis were right there in front of the Eiffel Tower. At least, it felt like everything was right in front of the Eiffel Tower. It's just, yeah, so beautiful to watch.
Francois Bouteruche 2:14
Yeah. And, you know, we French people, we are very pessimistic. So the few months before the Olympic Games, we were like, it will be a mess, it will be a major failure, it will be awful. And finally, we did a great job, and it was amazing, and it seems everyone enjoyed it. So it was an amazing experience. So if one day the Olympic Games come to your country, or anywhere else in the world, if you have this opportunity, I really encourage you to seize this opportunity. That's, yeah, that's an amazing experience.
Brandon Minnick 3:05
Yeah, yeah, well, we're super happy to have you back. We're definitely jealous. But we were talking before the show about a little .NET Aspire AWS CDK project that's been going on. You want to tell us more about that?
Francois Bouteruche 3:29
Yeah, yeah, I know I've stolen your news for the week. Yes. So when .NET Aspire was announced, we added AWS support with CloudFormation, and now we are working on bringing CDK support. For those who don't know what CDK is: CDK stands for Cloud Development Kit, the AWS Cloud Development Kit, and it helps you write your infrastructure as code, but with your favorite programming language. You can write your infrastructure in C#, in Java, in JavaScript, or in Python. And what we are doing with this pull request is supporting CDK so that .NET Aspire will bootstrap your services using CDK; it will deploy the required resources that you've defined into your AWS account, thanks to CDK. And in general, I know that developers are more comfortable with CDK, because they are using their favorite programming language, than with CloudFormation, with a huge JSON or YAML file, depending on your preference between JSON and YAML, but it's a huge JSON or YAML file that you have to maintain. So in general, they'll prefer things like this, where you can simply write, okay, I want to add my stack to my application. That's really simple. I'm really excited about this work.
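For readers who haven't seen the CDK from C#, here is a minimal sketch of the kind of stack Francois describes: an S3 bucket declared in plain C# that CDK synthesizes into CloudFormation. The stack and bucket names are placeholders for illustration.

    using Amazon.CDK;
    using Amazon.CDK.AWS.S3;
    using Constructs;

    // A minimal CDK stack; "AppStack" and "AppBucket" are placeholder names.
    public class AppStack : Stack
    {
        public AppStack(Construct scope, string id, IStackProps props = null)
            : base(scope, id, props)
        {
            // Declare an S3 bucket in C#; CDK generates the CloudFormation for you.
            new Bucket(this, "AppBucket", new BucketProps
            {
                Versioned = true
            });
        }
    }

    public static class Program
    {
        public static void Main()
        {
            var app = new App();
            new AppStack(app, "AppStack");
            app.Synth(); // Emits the CloudFormation template that "cdk deploy" pushes to your AWS account.
        }
    }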
Brandon Minnick 5:37
Yeah, me too. This is something that we've been, well, we can't take any credit for it, but this PR has been open for a while now. It's all been thanks to Vincent. You know, Vincent works at AWS, but his job isn't necessarily to do this full time, although I think he'd be great at it. So it's definitely been a community group effort here where everybody's chipping in. And yeah, we're showing the link to the pull request here on the screen. You can find the link in the comments here on Twitch. But yeah, kind of like Francois was saying, it's really cool. If you've never used the CDK, or never heard of the CDK, the Cloud Development Kit here at AWS, like Francois said, it allows you to essentially do the same thing you might be used to doing in, like, a YAML file, or like Azure has Azure Bicep, where you define your infrastructure as code. But with the CDK, you literally define it as code. No offense to YAML or JSON, but here, like in C#, you just say .AddS3Bucket, and it goes, okay, I'll add an S3 bucket to your stack. Got it. So, really cool stuff. And I'm super excited to see this being added to .NET Aspire, because we've been working with David Fowler from Microsoft. He's actually been here with us on The .NET on AWS Show before, and as we submit our PRs from AWS, you know, we kind of do it with our flavor of AWS cloud. I forget the word I'm looking for, but it's slightly different than the same code you might publish to Azure. And it's been cool seeing David look at these PRs and go, oh, I might actually have to steal some of that, that's a really cool way of doing it. So it's also just fun inspiring fellow C#/.NET developers and improving the overall stack. But we're actually not here to talk about .NET Aspire today, I promise. Because we have such a cool guest. He's been a longtime friend of mine. We used to work at Microsoft together back in the day, but nowadays he's leading up developer advocacy at Pieces for Developers, which is focused on enabling developers to be more productive by leveraging contextual awareness. You know him from speaking at conferences all around the world. He's lived on four continents, done mobile, done desktop, even scientific space stuff. Jim Bennett, welcome to the show.
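At the time of recording, the Aspire CDK integration was still an open pull request, so the final API may differ; as a hedged sketch, wiring a CDK-defined bucket into an Aspire AppHost could look roughly like this (the AddAWSCDKStack and AddS3Bucket extension names and the Projects.Frontend reference are illustrative assumptions, not a guaranteed final surface):

    // Aspire AppHost sketch; method names are illustrative of the in-progress
    // CDK integration discussed above, not a finalized API.
    var builder = DistributedApplication.CreateBuilder(args);

    // Point the AWS integration at a region (and optionally a profile).
    var awsConfig = builder.AddAWSSDKConfig()
        .WithRegion(Amazon.RegionEndpoint.USWest2);

    // Define infrastructure as C# instead of a CloudFormation JSON/YAML template.
    var stack = builder.AddAWSCDKStack("AppStack")
        .WithReference(awsConfig);
    var bucket = stack.AddS3Bucket("AppBucket");

    // Flow the provisioned bucket's details into an application project.
    builder.AddProject<Projects.Frontend>("frontend")
           .WithReference(bucket);

    builder.Build().Run();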
Jim Bennett 8:23
Hello, hello. Thank you for having me. Very excited to be here. Great to see you both again.
Brandon Minnick 8:29
Yeah. Thanks so much for making the time. Thanks so much for joining us on the show. For those who may have never met you before, who are you and what do you do?
Jim Bennett 8:38
Who am I? What do I do? So I'm Jim, currently heading up developer advocacy at Pieces for Developers. I've been working as an engineer for way too long. It's kind of depressing; I think I actually started my career before some of the people I work with were actually born, which is nice. Yeah, I've been an engineer for over 25 years. I've been in developer relations as a community member since probably eight, nine years ago, and did it professionally for six and a half, seven years. Obviously, Brandon, as you mentioned, we worked together at Microsoft. So I've done advocacy for Xamarin back in the day, gotta love me some Xamarin, which is .NET for mobile, for those who haven't come across that. I've done education advocacy, Internet of Things, live streaming, community, and yeah, currently working for a small startup based out of Cincinnati, Ohio, where we build a developer tool to make developers' lives more productive by bringing together a horizontal copilot. So we're kind of putting AI across everything a developer does: the research that you do in your browser, the conversations you have with your colleagues, and the code in your IDE. Yeah, and I'm a big fan of Star Wars LEGO, as you've probably gathered. The question I get asked a lot is, is this real? Yes, this is real LEGO. I can go and take things off the shelf. It is real.
Brandon Minnick 10:03
Oh, be careful. Yes. For those of you listening to the audio podcast, check out the video. Jim's background is incredible. Not only is it just a super cool background with the lighting, but all these LEGO Star Wars pieces that, yeah, if you hadn't reached around and just grabbed one, I would have assumed it was probably fake too, because it's very impressive.
Jim Bennett 10:27
For Mandalorian fans, we've got a Razor Crest behind me. I've got the classic Ultimate Collector Series Millennium Falcon, six and a half thousand pieces; that's like 20 years old. I've got an X-wing. I've got Yoda, AT-ATs, the Tantive IV. I've got Luke's speeder, I've got Luke's X-wing helmet, and this is a small part of my collection, and it's all lit up. So I've got lighting kits for some of the LEGO sets, and I've got some glow going on in the background. So yes, it's a very, very cool background. Big Star Wars LEGO fan, big Star Wars fan in general, LEGO fan.
Brandon Minnick 11:01
Putting our backgrounds to shame, certainly. Oh, yeah, Jim, yes. Can't wait to get into everything about Pieces and how it makes our developers' lives easier. But there is one question we love asking everybody the first time they come on the show, and that is: how did you get started in .NET? You've been doing .NET for a long time now, but if we go way back, where and when did you first find it?
Jim Bennett 11:30
So I first started with .NET 1.1, so we are talking all the way back before generics, before anything useful.
Unknown Speaker 11:42
Yeah, yeah.
Francois Bouteruche 11:46
Just, just for clarification, when you say .NET 1.1, is it .NET Framework 1.1? Because today, maybe some folks don't even know that there was a .NET Framework before .NET.
Jim Bennett 11:59
That is true. We're not talking .NET Core version one or anything. We are talking .NET Framework 1.1, the original version of C#. Back in the day when you had C#, VB.NET, they were dabbling with IronPython, IronRuby, and .NET for various different programming languages. Yeah, no generics, nothing really powerful. We're talking right back in the day, early 2000s; probably 2005 was when I first started with .NET, so, yeah, almost 20 years ago. I was working in the banking sector. I started out doing cool stuff in C++, working for a scientific software company, building some very, very cool algorithms to store chemical structures. It was great. It was fun. It didn't pay as well as the banks, so I sold my soul to the devil and jumped to the banking sector, doing a lot of C++ there. But there was a movement towards .NET. What a lot of banks were doing was putting .NET on the front end, you know, WinForms as it was back in the day. The original WinForms, battleship gray. You sized your buttons exactly, none of this scaling or anything, everything perfectly sized by pixels. WinForms on the front end and Java on the back end. So I did C++, but I was looking for something else, and my manager jumped ship from the bank I was at, went to another one, and said, hey, I want to hire you. We do C#, so can you learn some C# quickly? I'll get you an interview in front of our engineers; we'll get you on the team. So I literally dived into C# really quickly. And when I say dived in, I actually only had a Mac at home, one of the original 12-inch MacBook Pros, and I had to install a VM on there to get Windows so I could use .NET. You know, back then .NET wasn't available on Mac or Linux, it was just Windows. So I literally had to install a VM just so I could get .NET on there, to learn enough to pass an interview. And then I started working at this bank in C# 1.1; they were looking at C# 2 for some things, and it just kind of grew from there. C# started off with WinForms, moved to WPF, and then I got out of banking, and from WPF moved to Xamarin and got into the whole mobile space. So it was kind of a long journey over time. It's kind of nice to watch the ecosystem grow, because obviously .NET back in the day was very different. The .NET Framework was very closed, you know, it was all closed source. This idea of open source was, "open source is a cancer," according to Steve Ballmer; things have changed since. So, yeah, I watched it go from C#/.NET 1.1 to 2 with generics, moving on from there, then the kind of rise of .NET Core, what's this open source version, the whole Mono thing going on. Yeah, I remember, when I first saw Mono, someone had some .NET running on an old Android phone. I was like, wow, you can do this stuff with Mono, this is really, really cool. And kind of seeing that, and getting excited about the idea of putting .NET everywhere, and then the work that, you know, Miguel, Nat, and Joseph did at Xamarin to bring .NET to other platforms, that, to me, was one of those "this is fundamentally amazing" moments. You know, I was enjoying doing .NET, but the ability to just build a mobile app was just literally mind-blowing.
And that's kind of one of those foundational moments for me: that first time I fired it up and, you know, I'm using C#, and I'm writing some code, and I'm deploying it to my iPhone. Comparing that back to the days of battleship-gray Windows Forms on Windows, just watching that journey was fundamentally incredible. It really kind of changed who I was. And after that I was like, I've got to lean into the Xamarin thing heavily. That's why I became a Xamarin MVP and joined the Xamarin community. And the rest is history, as they say.
Brandon Minnick 16:05
Yeah, and I'm heavily biased. I mean, that's how I know Jim, back in my Xamarin days, and that's now called .NET MAUI, for everybody who's never heard of Xamarin. But certainly, Jim, one of the things I love the most about C# is that it keeps evolving, keeps adding new features, new functionality. Like you mentioned, you can use it for mobile and IoT, and there are even AI/ML libraries, even around Semantic Kernel nowadays. So it's something that I feel like has grown with me as I've grown in my career, and there's certainly no reason to leave C# nowadays, because if you want to do front end, website, mobile, no problem; back end, no problem. Like, there's nothing you can't do with it. So I certainly appreciate that.
Jim Bennett 16:52
It's very, very powerful. I mean, I've been in and out of it over the last few years. So whilst focusing on Xamarin, it was C# all the time. I kind of dipped out of it, moved on to other languages, and then I go back to it, and every time I do, I'm just amazed by how productive I am, how fully featured everything is. I need to do something? It's there. And just watching the syntax evolve. I spent a couple of days building out a C# SDK last week, and it was just like, okay, how do I make this better? Primary constructors? Oh yes, I remember those. Dig into those. Wow, suddenly my code is cleaner. And just watching all those features that have evolved to bring the language forward, because it was very, very verbose at the start, and watching how much we've shrunk that down, removed the boilerplate, just to speed you up as a developer. And yeah, you end up just banging code out. And as you say, everything works, everything's supported, you can do it anywhere. I mean, I saw a question in the chat earlier from someone asking, how do you guys deal with the Python, Rust, PHP crew, to stay relevant? It's like the old song: anything you can do, I can do better. I mean, you know, okay, we may not compile down to system code as fast as Rust, but then again, I can build everything, and I can build it now, and I can build it quickly, and I'm not fighting with a borrow checker every five minutes to work out how my code works. So it's just the fact that it is such a phenomenally powerful ecosystem. And it's not just about the language, either, it's the ecosystem. Because you've got C# if you want to go down the OO, semi-functional route, or you've got F# if you want to go down the functional, semi-OO route. So you've got a choice of different languages depending on your style. And then this whole framework, this whole ecosystem, works across both of those languages. Like you said earlier about the AWS CDK in .NET Aspire: you want to build it out with a functional programming language, use F#, it will work. Want to build it out with an OO language, use C#, it'll work. And so everything's there. It works. It's great, it's fast, it's powerful. And yeah, I think there is nothing you can't do with it, except for maybe build an ultra-fast operating system or low-level kernel libraries. Other than that, you can do everything.
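As a quick illustration of the primary constructors Jim mentions, C# 12 collapses the usual field-plus-constructor boilerplate; the OrderService types here are made up for the example:

    // Hypothetical types for the example.
    public record Order(int Id);
    public interface IOrderRepository { Order Find(int id); }

    // Before C# 12: a field and a constructor exist only to capture the dependency.
    public class OrderService
    {
        private readonly IOrderRepository _repository;

        public OrderService(IOrderRepository repository) => _repository = repository;

        public Order Load(int id) => _repository.Find(id);
    }

    // With a C# 12 primary constructor: the parameter is captured automatically.
    public class OrderServiceSlim(IOrderRepository repository)
    {
        public Order Load(int id) => repository.Find(id);
    }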
Brandon Minnick 19:10
Great. That's like senpai teach in the comments saying assembly is clearly faster than C#. So if we all just write our code in assembly language, then our apps will obviously run faster.
Jim Bennett 19:24
You say that. You say that, but every time studies are done comparing handcrafted assembly to what a compiler generates, the compiler-generated code is usually faster. So yeah, you know, if you want, I'm happy to jump on a stream with you, and you can write some code in assembly, and I'll write something to do exactly the same job in C#, and we'll compile it all up, and we can see which is faster.
Brandon Minnick 19:46
Just give me, like, a two-hour head start. It's been a long time since I've had to, yeah, like, write assembly code, moving stuff between registers, and... yeah, no, I'm good. I think I'll take the slight overhead I get with a great language like C#. But now, I know we could talk about C# all day. I certainly can; it's my favorite. I'm heavily biased. I mean, heck, this is The .NET on AWS Show. But Jim, I know you wanted to dive into Pieces today. You gave us a little tease earlier, but what is Pieces? Why should I use it as a developer?
Jim Bennett 20:30
Yeah, so you obviously use AI, I'm guessing, of course, because everybody uses AI today. And, say, for example, we used AI last week when I was doing the C# SDK: what's this primary constructor thing? I'm going to ask AI. So we've all got to the point where AI is this assistant that kind of sits with us in everything we do. The problem with a lot of AI at the moment is that it's very siloed. So if I want to use, you know, a copilot tool inside my IDE, that lives in my IDE. So, you know, if I'm using GitHub Copilot, CodeWhisperer, whatever, that is living inside my IDE only. And with that as well, it's siloed to whatever model has been chosen for that particular copilot tool. Now, we as developers, we don't just live in our IDE; we actually live in multiple different places. We are collaborating with colleagues. Coding is a team sport, so we're constantly having conversations with colleagues about the things we need to do, or we have tickets that we're chatting about. We have GitHub issues, or tickets in Jira, or whatever tool we're using. We're having these conversations, this collaboration, inside various different tools. We are then coding in our IDE, and as we're coding, we're jumping into our browser to do research; we're reading documentation. And so as I'm jumping between these three different areas, really what I want is an AI that's with me everywhere that I go. You know, I don't want to have to go to ChatGPT over here in my browser, ask some questions, copy some code, paste it into my IDE, copy it from my IDE into a copilot chat, ask questions, you know, go into my GitHub issue, copy text from that. I don't want to have to keep jumping around. I don't want to have to go to all these different places and use all these different tools. I want something that sits across everywhere I am, and that's literally what Pieces is. Pieces is a horizontal copilot that is designed to take everything you do across everywhere that you are. So I can go into my browser, I can fire up a copilot chat, I can grab a snippet of code, I can ask questions about it. I can then jump into my IDE, I can look at my copilot chats, and I've got that same question there with me. I can then take that code and drop it in my IDE. I can jump into my browser again and look at the Jira ticket that I'm reviewing, that I'm working on. I can scroll through that Jira ticket, I can jump to my copilot, and I can ask questions about that Jira ticket. And I've got this ability to interact with the AI with the context of everything that I do, whether I'm in my browser, whether I'm reading documents, whether I'm in Slack, Google Chat, Teams, or in my IDE. I've got my AI with me across everything. It becomes this kind of memory store of all the conversations I have. It becomes a memory store of the code that I've got, and it becomes the one place where everything comes together. With Pieces, we're not tied to one LLM, because you'll see all these people say, oh, I've been using ChatGPT for a while, it's great; now I'm using Claude, and Claude is better, but all my chats are in ChatGPT, and I want them in Claude. In Pieces, I'm choosing the AI that I want as I want it, jumping around between different AIs all the time and getting the best of those AIs, whether they are cloud-based AIs or even on-device
AIs running locally on my laptop, so I can be on a plane with no Wi-Fi, and I can carry on the conversations that I was having at home, in my IDE, jump into my browser, continue those conversations using a local LLM. It's just this powerful copilot assistant that's with me across everything I do. So that's kind of the long spiel about Pieces. It's your friend that's with you. One day we want to get to the Jarvis point. I don't know if you've ever seen Iron Man, but Jarvis is there, and when Tony Stark asks Jarvis a question, Jarvis understands what it is he's talking about, because Jarvis is there, kind of watching. We eventually want to get to that point where, on your developer machine, in your developer environments, we know everything you're doing, so we can answer those questions, we can proactively give you the answers. So think of us as heading towards that Jarvis point, because we want to bring together the context of everything you do.
Brandon Minnick 24:52
Wow, yeah, yeah. Something you mentioned, Jim, really struck me. And it's almost like, I don't want to say building a second brain, I think that gives too much credit to AI, making it seem sentient; it's not. But essentially offloading things that you don't have to remember anymore or write down. Because I feel like that's a struggle I've had just my entire life: something will pop into my head and I go, oh, I need to remember this. And if I ever go, no problem, there's no way I'll forget it, I'm definitely going to forget it, so I have to write it down. And usually that involves, like, setting some sort of calendar reminder, because, oh, there's something I want to tell Francois at work, but today's Saturday, so let me remind myself to do it on Monday, and then that reminder message pops up, and I have no idea what it even means anymore two days later. And this seems like the biggest win here, where you don't have to necessarily remember everything. Or, you know, if we have that conversation, like, hey, Jim, didn't you ask me about this back in April? Like, I don't remember April, man. But yeah, having all that context right there, I feel like I haven't heard of a tool like that yet. You know, it's always like, okay, I'm working on something, but now I need to solve a problem, or maybe just write unit tests. So that's when I jump over to whatever tool we're using to help us write the code. We've got Amazon Q in the AWS Toolkit here, but you almost have to teach the AI first: here's what I'm doing, here's what I'm working on. Like, I want this to be in NUnit, I want it to be in C#, I want to use async/await. You know, you kind of have to build that up. Whereas it sounds like, and correct me if I'm wrong, but it sounds like a tool like Pieces, since it's been with you along the way, would already know: yeah, of course, I know you're writing code in C#, I know exactly how your project's architected, that you're using dependency injection. And it feels like I could just go to it and say, hey, my boss told me to write unit tests, can you do that for me for all this code? Instead of, I think what I do nowadays is I have this whole two-paragraph diatribe about what I'm working on and what language I want it to be in, how I want it to be architected, and then I also have to copy-paste my C# class in there to say, this is the class under test, write the test for this. Whereas it seems very fluid with Pieces. Is that right?
Jim Bennett 27:39
Yes, yeah. I mean, we're not at the point where it knows everything and everything becomes the context of every single conversation, because these AIs have limited context windows. You know, even with Google Gemini, there's still a limit as to how much information we can provide to each conversation. The idea with Pieces is, as you kick off each conversation, you give it the context that is relevant to that particular conversation. So in your example, you'd kick off a new conversation, and you could literally take your C# project and say, add this entire folder of code as context for my conversation, I need unit tests for this class. And because it's already got the entire folder of code, it understands that, for that conversation, this is the project I'm working from. So the LLM will then be able to go, yep, okay, you're using dependency injection, you're using NUnit, here's the kind of unit test that you want. And so you can do that in our desktop app, for example. Then you could jump into the IDE and paste those unit tests in where you want them, and it just works. And then you kick off another conversation, drop in a different project, maybe that one's using xUnit, and then you ask for the same unit tests, and because you've got that context, that whole folder of code, it'll know to use xUnit, because that's what that particular project is using. So you pick the context of each conversation, which is great, because you don't want it to be confused by something else that's going on. You know, imagine we were collaborating on a ticket in GitHub, and we had this long conversation about this ticket, and I then go to the LLM and say, I need help with this, and it goes, oh, you're working on that ticket with Brandon, let me give you help. It's like, no, it's a different thing I'm working on. And so being able to say, at each conversation level, this is the context I want, gives you the flexibility to have what you want without it being polluted by too much information. Because there's that kind of bizarre thing with LLMs: if you give it too much information, it does a terrible job. You need to give it just the right information at just the right time. And that's kind of what Pieces helps you do, is give you that ability to just drop in the context that you want. So, you know what, should we do a demo? Let's do a demo. Yeah, and I will talk through it for those who are listening on the podcast app. So give me a second to share my screen, entire screen. There we go.
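To picture the output Jim describes, a generated NUnit test for a small class wired up with constructor injection might come back looking something like this; the Greeter class and its dependency are hypothetical stand-ins, not anything Pieces specifically produces:

    using NUnit.Framework;

    // Hypothetical class under test and its injected dependency.
    public interface IGreetingProvider { string Greet(string name); }

    public class Greeter(IGreetingProvider provider)
    {
        public string Welcome(string name) => provider.Greet(name).ToUpperInvariant();
    }

    [TestFixture]
    public class GreeterTests
    {
        // A hand-rolled fake keeps the example dependency-free.
        private sealed class FakeProvider : IGreetingProvider
        {
            public string Greet(string name) => $"hello {name}";
        }

        [Test]
        public void Welcome_UppercasesTheGreeting()
        {
            var greeter = new Greeter(new FakeProvider());

            Assert.That(greeter.Welcome("Brandon"), Is.EqualTo("HELLO BRANDON"));
        }
    }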
Francois Bouteruche 30:01
Just a quick question while you're setting up the demo: so you can have in the same context, for example, your repository and your GitHub issue, and say, hey, this ticket and this repository are part of the context, and now I will ask you some questions?
Jim Bennett 30:16
Yep, yes, that's good. You can do that. So, how Pieces runs: we have this thing in the background called Pieces OS, which, we have a small bug, it doesn't like it when you change the screen size. So I just changed my screen size to smaller, and it doesn't quite like that. There we go. So Pieces OS kind of runs in the background. This is the engine that makes everything work. And then from there, we have a desktop app, we have browser plugins, IDE plugins, whether it's VS Code, Visual Studio, JetBrains. All these different plugins talk to one place, and this is why everything can come together: you can have a conversation in one place and then see that conversation in other places. And if I just, let me just zoom in a little bit, I click this button down here. This is kind of the cool thing to show off: you can choose your LLM. So that's always been the fun side: I'm going to try Claude. Do I like it? Do I not like it? I'm going back to ChatGPT. You know, this ability to flip around between LLMs is baked into everything we do. So I can say, you know, I want GPT-4o, I want Claude, I want Gemini. You can choose which one works best for you, and everyone's got their opinions on which one they want. But we've got everything here in one place, and then on-device as well: you know, Mistral, Microsoft, Llama, Gemma, some IBM ones as well, Granite. So...
Francois Bouteruche 31:45
Of course, you want Mistral if you want a French advisor.
Jim Bennett 31:48
Obviously, obviously. You know, the downside is the LLM takes a two-hour lunch break every day. But other than that, yeah, definitely. And I mean that with love, by the way, because I honestly love the French attitude to work, of, you know, you work to live, you don't live to work. I love the French attitude. So, yes, but that's all here. So I can choose the LLMs I want, I can flip each chat to different LLMs, and then choose what kind of context I want to bring in. So I could say, for example, I'm going to add a folder. I'm going to go to GitHub, going to add the C# project here. Let's add this and say, describe this C# SDK project. And I'm going to pray to the demo gods, sacrifice a chicken... and it failed. This is always the way. It's a problem with LLMs: you have to be a little bit strict with how you prompt them. So, knowing my luck, it had to fail, didn't it, when I do a demo. This is embarrassing. Let's try that again. Let's add a folder of code, I want to just add this entire project. So...
Brandon Minnick 33:08
Jim's going through the file explorer inside the Pieces client app, picking and choosing files, and has dropped in a whole folder, which I assume is code, or it's one file, PiecesClient.cs, and the intent is to have Pieces learn all my code, and then I can ask it questions about it.
Jim Bennett 33:37
Yeah, and what Jim's also seeing is lots of errors where it's failing to do so. So, curse the demo gods, it's not actually working right now. Ah, okay. That's always the way when you do a demo. For some reason, it's not picking up the files I'm describing. But you know what I will do: let's think of another use case then. Let's imagine I wanted to look through the AWS .NET SDK. So I've got the browser up with the AWS SDK for .NET, and I'm just kind of reading through it. Okay, it's got things. Let's go to the API Reference Guide, for example. Brandon, what's your favorite .NET SDK in AWS?
Brandon Minnick 34:26
Oh, do I have to know the actual name of the package? Let's see, search for something with S3 in it. S3.
Jim Bennett 34:36
Okay, so let's scroll down to S3.
Brandon Minnick 34:41
Yeah, here we go, Amazon S3. There you go. So there's a fun naming thing here where some things are called Amazon S3 and some things are called AWS Lambda. And I remember asking on my first show, I was like, I don't get it, you know? And, like, most of the icons are all the same color, but some are red, some are blue. Like, what's the difference? No, it's literally just a marketing thing. So yeah, don't let that deter you. If you see, like, Amazon EC2 versus AWS Lambda, it apparently means nothing.
Jim Bennett 35:23
Fair enough, it's just marketing naming then. Okay, this is, this is terrible documentation, I'm not seeing any code samples here. Wow. Oh, here we go. Okay, yeah.
Brandon Minnick 35:42
And it looks like, are you using a browser that's stripping out a lot of the other stuff from the website? I've never seen the plain-text version of this, but that's okay.
Jim Bennett 36:02
Okay, so let's see if this works. So I'm reading here about this CopyObject method, okay? And then what I should be able to do is, I'm in my browser looking at the CopyObject method from the S3 SDK. So in theory, now, if I go into Pieces, turn on live context and say, describe the CopyObject method I was just reading about in my browser... Okay: based on the workstream history, it appears you were recently viewing the AWS SDK for .NET version 3 API documentation, specifically the CopyObject method in Amazon S3. It's given me a code snippet with the method, it's given me a description, the parameters, and all the important notes. It's summarized what I was reading in the documentation. Now, obviously this feels like, well, you were just reading this in the docs. But imagine I was reading this, and then something else, and then something else, and then something else, and then I went for lunch, then I came back, then I had a meeting, and then it's: ah, that CopyObject method, what was that all about? Go in here, ask the question, and I get this information all streamed back to me. So suddenly I've got this here: you know what, this is the method I want. I'm going to save this code snippet over here, and that's going to save the code snippet. And then I'm going to jump to my IDE. I'm currently in Visual Studio Code; bring up a new file, and then I'm going to go to Pieces. Oh, look, I open up the Pieces Copilot tab, and here's the same conversation I was having. So this idea of, I have to go to ChatGPT in my browser, ask a question, and copy and paste everything: no, I'm in my IDE, and I've got the same conversation I was having in the desktop app. Or if I jump to my browser again, I bring up Pieces in my browser, I go to my copilot chats. Oh, look, which one is it? It should be here somewhere... that hasn't refreshed. The demo gods are not liking me today. This is the curse of DevRel: we end up using the pre-release versions, and we'll often get things a little bit out of date. Okay, let me just refresh the page, see if that fixes it. Yeah, we often get things with all the bugs. Okay, it does show my earlier conversation in here. So, just having this idea that conversations go with me to the different places that I am. This is the same conversation I was having, and back in Visual Studio Code, the conversation I'm having about the CopyObject method. If I go and look at my saved materials, I saved this C# method for copying an object. I saved that inside Pieces desktop; here is the code as a code snippet. It's been described by AI, it's got some suggested searches, tags, and the actual URL that it came from. And so, as well as asking questions and getting the information I want, I can also save these snippets of code, so it becomes kind of that brain, that memory. I have those useful code snippets with me absolutely everywhere, and they're all augmented with AI. They've all got tags as well, and I can manually tag things myself. So if I was building up a set of code snippets because I'm doing a computer science degree, for example, I could ask questions about the topics I'm working on, save the code snippet, tag it with the course that I'm doing. Or I could be solving a particular ticket; I could tag it with the ticket.
Francois Bouteruche 39:34
Just a quick question on how to use Pieces efficiently. Imagine I'm working on my project. I've activated Pieces, so it's grabbing what I'm doing in my IDE, in my browser. Suddenly I'm interrupted... because as a developer, that's a fake story, because we are never interrupted during our coding sessions... but I'm interrupted by an urgent task: my manager asks me to fix another bug. How do you deal with this? Can you switch to another conversation or context and resume back to the previous one? How does it work?
Jim Bennett 40:14
Yep, yes. So for example, I'm back in Visual Studio Code. This is my copilot chat; I was talking about CopyObject. I can start a new copilot chat. In this particular case, it could be, oh, can you just summarize this particular ticket? Or, I'm having this conversation and I need help with this conversation. So, switching context. And here I could say, now, summarize the conversation I'm having in StreamYard, in my browser, with Choir238. And here's a summary of the actual chat that's going on in StreamYard at the moment: I'm on The .NET on AWS Show, featuring me, and there was discussion with Choir238, a regular in the .NET world. And so I've got this completely separate context. I'm here looking at one conversation, and then, yeah, we can finish up this particular task, this bug, or whatever it is I'm looking at, and then go back to my conversation, my copilot chats, and I'm back into CopyObject.
Francois Bouteruche 41:29
Yeah, I love that.
Brandon Minnick 41:32
I don't think I've ever seen real-time AI like that before. Because, yeah, there's so much training involved that a lot of the large language models that are out there get trained on the corpus of human information, the internet, once, or at least before they get released. And they always have to draw that line in the sand. It's like, hey, sorry, I've only been trained on data up through July 2023, I don't know anything about that yet. And it's like, ah, well, I need to work on this new thing, and you don't know it. Whereas Pieces literally summarized the Twitch chat that we're having right now.
Jim Bennett 42:15
Yeah. So, the way it works, and I would like to add: this is secure, this is on-device, this is private. I'm going to emphasize this right now. The way it works is, Pieces will be capturing screenshots of the active windows that you're working on. We then run OCR over that, then we delete the screenshot. So I'll state that right now: we do not keep the screenshots, we just keep the OCR text that we capture. Then we've got some proprietary AI that does relevancy checks on it: is this something you are likely to be using for your job as a developer? You know, if it's the private conversations you're having, we try and filter that out. PII, we filter that out. API keys...
Francois Bouteruche 42:58
If I'm shopping on Amazon, on amazon.com,
Jim Bennett 43:03
We try and filter that out, yeah. It's all about what's relevant. It's a hard choice to make. You know, we've had this debate about bank information: if you're looking at bank information, is it because you're doing internet banking, or is it because you work in a bank? And so it's hard to work out the true relevancy of what you're doing, but we try and do that. We then save it all as local embeddings. Everything is processed on device, and the only time anything leaves that encrypted on-device store of information is when you ask a copilot a question about it and you're using a cloud-based copilot. So we try and work out what we think is relevant from the information that we've captured and send that to the copilot. So if you really want to be completely secure as you're doing your banking, you can actually turn off what we call the Workstream Pattern Engine. You can say, you know what, I want to go and do shopping on Amazon; pause this for 15 minutes, because I don't want to capture this information. So you've got this ability to turn it off. Or you go, you know what, capture everything you like, but I'm going to use an on-device LLM, and that way everything that you've captured is not going to the cloud. So, you know, if you don't want OpenAI or Anthropic or Google to see what you're doing, you can just use an on-device model. You can fire up Mistral and you can ask that question of the on-device model. It'll be a little bit slower than the cloud, but that way everything stays on device. And so just having this ability to leverage everything you're doing, this idea of this brain that sits with you and remembers everything, is incredibly, incredibly powerful. And as you say, you get interrupted, you come back after a few hours, or the next day, and you can ask these questions, you can get this information. Classic thing in the morning: what was I doing yesterday? Yesterday evening, you were doing this, this, this, and this. Here you go, carry on with that.
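As a mental model of the on-device flow Jim just walked through (capture a screenshot, OCR it, delete the pixels, relevancy-filter and redact, then store only local embeddings), here is a deliberately simplified sketch. Every interface and class below is hypothetical, for illustration only, and is not the actual Pieces implementation:

    using System;

    // Hypothetical pipeline pieces; none of these types exist in the real product.
    public interface IOcrEngine { string ExtractText(byte[] screenshot); }

    public interface IRelevancyFilter
    {
        bool IsWorkRelevant(string text);   // e.g., developer work vs. banking/shopping
        string RedactSecrets(string text);  // strip PII, API keys, etc.
    }

    public interface ILocalEmbeddingStore { void Add(string text); } // encrypted, on-device only

    public class WorkstreamCapture(IOcrEngine ocr, IRelevancyFilter filter, ILocalEmbeddingStore store)
    {
        public void Process(byte[] screenshot)
        {
            // 1. OCR the active window, then discard the pixels immediately:
            //    in the flow described above, the screenshot itself is never persisted.
            var text = ocr.ExtractText(screenshot);
            Array.Clear(screenshot, 0, screenshot.Length);

            // 2. Drop anything that doesn't look like developer work.
            if (!filter.IsWorkRelevant(text))
                return;

            // 3. Redact secrets, then save as a local embedding. Nothing leaves the
            //    device here; text is only sent out later if the user asks a
            //    cloud-based copilot a question that makes it relevant.
            store.Add(filter.RedactSecrets(text));
        }
    }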
Francois Bouteruche 44:50
I'm a bit curious, sorry, Brandon. You mentioned the context window, and they all have a limit. I'm assuming that quite quickly you reach these limits for the different LLMs you are using. How do you deal with this? Do you summarize the context before sending it to the LLM? How does it work? I'm a bit curious.
Jim Bennett 45:21
Yeah. So obviously, depending on which LLM you use, you get different context limits. We do try and summarize where we can, and we do try and work out a relevancy score where we can. So, for example, in that question where I asked about the StreamYard chat, we're going to have noticed when we captured the browser window that it's StreamYard, therefore all this information can be tagged to StreamYard. And we can pick up on the words StreamYard and browser in the question, so we know, okay, only send the information from my browser that's got something to do with StreamYard. So we have this relevancy check on what we're doing. You know, if I'm asking about the StreamYard chat in my browser with Choir238, it's not going to send anything around what I was researching about CopyObject in the AWS docs, because it will have worked out that that's not relevant to the question being asked. So there is a kind of relevancy check in everything that we do. Obviously, it's never going to be perfect, so the more generic the question, the better the answer we can give, but the more information we have to send. So it comes down to that kind of prompt engineering that we all have to do as engineers: how do we ask the most effective questions?
Brandon Minnick 46:39
Yeah, that makes a lot of sense. And you know, Jim, you mentioned earlier, you were talking about safety and security, because that was certainly my first question when I was watching this. Friends, you know, I work for AWS, and our IT locks down everything, which sometimes makes it really difficult to just get our job done, because things like our Slack messages are deleted after, what, like a year, year and a half. So even conversations I've had with people, and I go, oh, I know exactly where to find that link, Francois sent it to me last year, and I go to grab it, and it's gone. So when it comes to Pieces, I would see the first immediate pushback being: yeah, it's taking screenshots of your work computer, you're doing proprietary stuff, we can't have that being sent to the cloud. But it seems like Pieces actually has that as one of its top priorities. So what would somebody like me, what should I be saying to my boss or my IT director or my CTO if I say, hey, this is a tool that would greatly improve my developer productivity? Because I imagine the first question I'm going to get is: wait a minute, isn't it taking screenshots of your laptop and using those to train a model? We can't have that.
Jim Bennett 48:14
Yeah, no. Great, great question. So we have built this with security and privacy right from day one; literally, that was there in everything we do. We want to make sure that we are secure and private. And, I don't know if you remember, there was an operating system that announced a similar piece of functionality that got a lot of backlash, because they were keeping screenshots on device at all times. So, you know, we are not like that; we are a lot more secure in everything we do. So the idea is, all the processing happens locally. The only time anything leaves the encrypted store on your machine is when you ask a copilot conversation about it and you're using a cloud-based LLM. So if you only ever use local LLMs, then nothing is ever going to leave your machine. And I know this works, because I've been on planes with no Wi-Fi. I'd turn my Wi-Fi off to show you now, but because I'm streaming, if I turn my Wi-Fi off, I'll drop off. But everything stays on device if you use a local LLM. And obviously we're starting to have conversations with enterprises who are saying the same thing: this is fine, but it can only leave my machine as long as it only goes to my deployed LLM. And that's something we are looking at in the very near future, that we will connect this to your deployment, to your tenant. So if you've got an LLM deployed to your AWS infrastructure, then one of the things we are working on is the ability to connect it to that. So if you're saying, yep, it's fine as long as it stays inside my walled garden, then yes, we will be able to keep it inside your walled garden. So we're talking to a lot of customers about this at the moment. It's all, yeah, priorities and deliverables; I can't say when we're going to be rolling this out, but that's a high priority for us because of this security situation. If it's "as long as everything stays on your machine or inside our infrastructure, we're happy with it," cool, then we will just connect this to the LLM inside your infrastructure.
Brandon Minnick 50:16
Very cool. Yeah, I can absolutely see a future where, you know, this is just the default for work, and then, sure, there are going to be some companies who are still, you know, old school, that don't do it. And you start working at those companies and you're like, well, wait a minute, where are all my features?
Jim Bennett 50:36
I mean, people are doing it anyway. You know, how many people just go onto ChatGPT and just paste in some code and say, describe this? We're all doing it anyway. If we're going to do it, at least put it in one place. You know, don't have people going to ChatGPT and asking the questions, because there was an issue, wasn't there, a while ago when it first came out. I think it was Samsung: engineers at Samsung were asking questions of ChatGPT, and then there was someone who managed to get Samsung IP out of ChatGPT, because the engineers were using it, because it's a powerful tool. So if you're going to have a tool, why not have one where you've got this ability to tie it into your LLMs, or just run one offline? I mean, you know what, let's do this. Let's switch back to the Pieces desktop app, for example, and start a new chat, but with my live context on. I'm going to go to my LLMs, and I'm going to go to, let's do Mistral, you know, because I've got that one downloaded. And then I'm going to say, you know what, let me just copy and paste the same question that I asked before, to summarize the conversation, because I'm too lazy to type. It's going to run a bit slower because it's running the on-device LLM, and this is the first time I've kicked it off, so it's going to take a while for it to load the LLM into memory. So, yeah, cold start. That's the way it goes. It's got to spin up the LLM. In fact, if I bring up my Activity Monitor...
Brandon Minnick 52:09
The question you asked it is to summarize the conversation, the Twitch conversation we're having with user Choir238. Yeah, the same question from earlier about something you were doing online, but all this now is happening on the local device. So nothing's getting sent to the cloud, no data is leaking out there. And there it goes, it's answering it right now.
Jim Bennett 52:33
It's given me less information than the first time. When I first asked the question, I was using Claude 3.5, and it gave me a very long, detailed answer. Now I'm using Mistral, which is a 7-billion-parameter model running on device. It's given me a shorter answer, because small language models running on device usually do run a bit smaller, but it's given me three points about that conversation. So none of that left my device. I'm just going to bring up my Activity Monitor. If I go to memory, you'll see, and I'm just going to zoom in, that Pieces OS, which is the engine that runs it all, is currently on 3.17 gigabytes of memory. So it's not a huge amount; I'd say less than Slack. It's not huge. Obviously, as we use different-size local models, that memory goes up and down depending on the model that you choose. But yeah, I asked that question, and nothing left my device at all.
Francois Bouteruche 53:34
Yeah. And you know, this conversation about "is my data going outside?" reminds me of the conversation we had back in the day when we started to use the cloud, where companies were like, no, no, no, we don't use cloud, no, no. And you had a whole team doing some shadow IT in the cloud, because they were stuck with their IT team, they couldn't get new servers, and so they were doing some shadow IT. And that's exactly the same situation: no, no, we don't use AI, because it's too risky. And you have parts of your organization that are already using ChatGPT, whether you want it or not. So to me, it's better to regulate than to just say no or yes. And what you provide is really an answer to this: you can decide whether it goes outside or stays local.
Jim Bennett 54:35
Yes, yeah, that's the thing. If you have a good AI policy at your organizational level around what we can and can't do, then people are not going to step around that policy. We are engineers. We are smart. We are paid to be smart and solve problems. And sometimes we solve problems by breaking the rules, because we want to be more efficient. You know, we have this demand on our time to deliver, and every day it's deliver more, deliver more, deliver more. Well, if you're not going to give me the tools to do it, I'm going to find ways to do it myself, because I'm smart. And so, you know, a lot of engineers feel like they're a bit, yeah, they're hackers: these processes are here to stop me, I'm going to get around the process because I can. So if you don't want engineers to do that, you need to think about how you can provide them the tools that they want. And so by having a tool where you can dictate at a policy level which LLMs they're allowed to use, you're suddenly saying to developers: yes, please have these tools, please use these tools; all we ask is that you do this, so let me give you a tool that's compliant with our policies and that gives you the power for what you need to do. And so, just by having this ability to choose different LLMs, you can then pick one based off the corporate policy.
Brandon Minnick 55:55
Makes a lot of sense. Well, Jim, we're coming up on the top of the hour, and for our longtime subscribers, you'll know that we get cut off right at the top of the hour, whether we're still talking or not. I can't believe this is almost over; the time has flown by. But Jim, for anybody who wants to follow up on Pieces, or chat more about it, or learn more about it, where can they find all the information?
Jim Bennett 56:21
Yeah. So pieces.app is our URL. We can get that, there we go, on the bottom there, thank you very much. Yeah, pieces.app; head there and you can download it. We run on Mac, Windows, and Linux, so across any operating system, we are there. We also have a Discord server, so I think we'll get that link dropped as well. Jump on our Discord server; we can chat with you there, support you, and help you out, which is cool. So, yeah, I think I dropped that in our, yeah, pieces.app/discord, our Discord server, in the studio chat. And then, if you jump on our Discord, we have an open source section there. Pieces OS, the engine that runs this, we have open source SDKs for that as well. So if you want to build your own tool that can access your code snippets or access the copilot, you can do that. We have a Python SDK, and we have a C# SDK; we just dropped that on Friday. Devin, can we get a link to that in the studio chat? Brandon, get a link to that on screen. But if you want to build a .NET app that leverages the LLM of your choice, you can build that against Pieces. And right now, Pieces is free, completely free. Yeah, Pieces is currently free, so download it and get going with it. We have a GitHub discussion on pricing, if you're curious what we're thinking about for pricing. But right now, it's free to access all the LLMs, so no reason not to give it a go.
Brandon Minnick 57:51
Love that. Well, thanks again, Jim, for joining us on the show today. Thanks for showing us all about Pieces. Make sure, everybody, to give Jim a follow. You can find him around the internet at @jimbobbennett, that's J-I-M-B-O-B-B-E-N-N-E-T-T, to keep up to date on all the latest happenings for Pieces. And thanks again for joining us this week on The .NET on AWS Show. Don't forget to subscribe to our Twitch channel so you never miss an episode. We're here every two weeks, and we also publish as an audio podcast, so if you can't watch us on video, you can take us with you on the go. You can find us on your favorite streaming podcast service as The .NET on AWS Show, and we'll be back again in two weeks, so we'll see you then. Thanks again.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
