I’ve seen four eras of learning computer science, but the best time to learn is today

Despite what doomers might say, there has never been a better time to learn how to code

Nathan Peck
Amazon Employee
Published Jul 9, 2024
I've been writing code and learning about computers since I was a kid in elementary school. My first exposure to programming was in a school computer lab, steering a turtle around the screen using the LOGO programming language. Of course, at the time I didn't really understand what was going on, but it was fun!
Fast forward a few years, and I am learning BASIC, running code on an ancient Tandy 1000 computer.
Back then, learning how to code was really hard. There was no internet at home, so I did the majority of that early learning very slowly, by reading physical books. Books answered some of my questions, but not all of them, especially when it came to debugging error messages.
I remember that at some point in my early teenage years I wanted to learn Linux, so I checked out a giant tome from the library. It was called the "Linux Bible" and, as I remember it, it was about six inches thick (including the CD-ROMs in a plastic pocket inside the cover).
The problem was that, as comprehensive as this brilliant book attempted to be, there were still bizarre failures that would happen with my Linux setup. At the time, the only solution I had was to wipe the hard drive and reinstall Linux from the CD-ROMs.
It was a few more years before I finally got internet access. With internet access came a new desire to code webpages and make them interactive. I started learning JavaScript. But early browsers were a mess of differing APIs and nonstandard implementations, so this naturally led me down the route of learning jQuery.
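To make that mess concrete, here is a minimal sketch (my own illustration, written in TypeScript, with a made-up onClick helper name) of the kind of feature-detection boilerplate that cross-browser event handling required back then, and the single jQuery call that replaced it. Legacy Internet Explorer exposed attachEvent, while standards-based browsers exposed addEventListener.

    // A minimal sketch of the cross-browser boilerplate jQuery made unnecessary.
    // Legacy Internet Explorer exposed attachEvent, while standards-based
    // browsers exposed addEventListener; feature detection like this was common.
    function onClick(el: HTMLElement, handler: (ev: Event) => void): void {
      if (typeof el.addEventListener === "function") {
        // Standards path (and every modern browser).
        el.addEventListener("click", handler);
      } else if (typeof (el as any).attachEvent === "function") {
        // Legacy IE path; attachEvent no longer exists in modern DOM typings.
        (el as any).attachEvent("onclick", handler);
      }
    }

    // With jQuery, the same intent collapsed into one browser-agnostic call:
    //   $(element).on("click", handler);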
jQuery was the first programming tool that I didn't learn from a book. Instead, I learned jQuery using the internet. At the time, the best way to learn was from other people in online forums. I spent many hours in forums asking questions and providing answers to questions that were within my own realm of knowledge.
But these early forums had their own problems. Some internet forums were very elitist: you would be mocked for asking a question that was deemed too basic. Another downside was that you could dig around a forum for ages and find a thread that perfectly matched what you were looking for, only to read through the replies and discover that the original poster had come back with "don't worry guys, I solved this problem, it was easy," and no other details!
A few more years later and I’m working at my first startup. I start to learn more about server-side infrastructure and AWS.
100% of my learning about AWS was via the internet. I never took a class or read a book about AWS. Instead, I’ve spent over a decade doing Google searches, reading Stack Overflow threads, official AWS documentation pages, GitHub issues, Medium articles, and technical blogs about AWS. I’ve also spent countless hours looking at sample applications, reference code, and CloudFormation templates in order to understand how AWS concepts fit together.
Along the way I went from AWS customer to AWS employee.
 
My hairline is a bit worse now, but on the plus side I’ve gained a beard and tattoos. More importantly, the tech I work on has gotten way more interesting. First, I spent seven years working on container orchestration at scale, as part of the team building and scaling what I’m confident is the largest container orchestration system in the world. Now I work with generative AI, specifically focused on Amazon Q Developer, a suite of developer-focused tools powered by generative AI.

Continual learning is key, and the tools for learning keep getting better

Looking back on my journey into software development, I’ve found that continual learning is the key to effective building. As software engineers we are constantly seeking the next level of abstraction that offers the most concise way to turn our ideas into reality. This continual journey of abstraction has defined my career so far. On the programming language side of things, I’ve seen developers move from compiled languages, to interpreted languages, to web applications. On the hardware side of things, I’ve moved from locally hosted applications, to client/server applications, to virtual machines in the cloud, to serverless applications powered by cloud functions and containers.
Learning and keeping up with all the latest tools and abstractions has been challenging along the way, but in each era there has been a key tool for learning new concepts:
  1. Books, with the downside that a physical book has limited ability to address more complex, specific problems
  2. Internet forums and message boards, with the downside of being dependent on unreliable interpersonal interactions
  3. Internet search, with the downside of relying on search engine ranking algorithms to filter out garbage results and direct you to the ideal match for what you are looking for
Today there is a new tool, for the current era, with its own downside:
  • Generative AI powered by large language models, with the downside of hallucinations and guardrails
As a learning tool generative AI far exceeds the boundaries of what came before:
  • It exceeds the limits of what any single book or website can contain. A good LLM is trained on a vast collection of knowledge that is far greater than what a human could possibly consume within many lifetimes.
  • It is reliable at answering your very specific questions, even weird things that seem like they are unique to your own context.
  • You can interact with an LLM using natural language, similar to communicating with a human. However, the LLM responds faster than a human could, and without judgement. It won’t ridicule you for asking a basic question, and it always tries to be helpful.
  • It does a fantastic job at filtering out “noise”. The internet has become a sea of “search engine optimized” garbage results. Hunting for the “signal” among the noise is getting more and more challenging. An LLM skips all the ads and filler content, and answers back with the relevant details very quickly.
That said, generative AI is not perfect. The first problem that I see today is hallucinations, which happen when a model attempts to be “excessively helpful”. When asked a question for which it has no good answer, an AI model will often make up a statistically likely, reasonable-sounding answer that is not actually grounded in reality. The second problem is badly tuned guardrails. In an attempt to prevent models from making up bad answers, or communicating in harmful ways, model distributors often package the model with safety measures that can end up being so overly aggressive that the model avoids answering perfectly valid questions.
No method of learning new skills is perfect, but I believe that what we currently have with generative AI is the single best learning tool for software engineering that has ever existed. Never before have aspiring software engineers had such an instantly responsive, interactive tool for picking up new software engineering skills. I’m already learning new skills differently today than I once did, and I’m talking to other folks who are also using generative AI as a learning tool. In many cases these are folks who had never coded before, but they are now being guided into learning to code, and are running generated code snippets that solve their real-world use cases and problems.

So what does the future look like?

There is a lot of “doomerism” when it comes to the future of software engineering. Is AI going to replace us human software engineers? I don’t think so.
For a very long time, and even today, the bottleneck in software engineering has been implementation speed. A healthy, innovative business with hungry customers is always capable of generating ideas for what to build faster than it can generate implementations. I don’t think generative AI will change that. It can help us implement ideas faster, but that will just lead to even more demand for more things to build.
If the software industry reaches a point where we no longer need additional human software engineers, it won’t be because of generative AI; it will be because we had an industry-wide, collective failure of imagination and of the ability to come up with new things to build.
Imagine three different organizations:
  • Organization A hires many human software engineers, but decides not to provide these software engineers with generative AI assistance. Perhaps the org even mandates a ban on the usage of generative AI for work.
  • Organization B attempts to minimize the number of human software engineers that it hires, thinking that generative AI should allow the org to reduce its headcount. Instead, a smaller set of human engineers is squeezed to operate under significant crunch time, relying on a generative AI agent to help them hit deadlines.
  • Organization C chooses to hire an appropriate number of human software engineers, as well as provide them with appropriate generative AI agent assistance as a force multiplier for their abilities.
Which org would you expect to build and innovate faster, and which one would you bet on as an investor? My money would be on Organization C every time. This organization will produce better implementations, and it will produce them faster than its competition, because it is leveraging the best of both worlds: human decision making and intuition alongside AI agent assistance.
There has never been a better time to learn to code. We have the best learning tool we have ever had, and the future is full of ideas to build, which will need people to help build them. I’m optimistic, and I can’t wait to see what kind of powerful tools I’ll be working with as a software engineer as we progress into this next era of software engineering.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
