"Magic is just science that we don't understand yet" — Arthur C. Clarke


As we age, epiphanies become few and far between. It’s a consequence of having more time on Earth, of witnessing and experiencing more things. “Been there, done that,” as the saying goes. What is an epiphany, though? It’s defined as “a moment of sudden revelation or insight”. In other words, your perspective shifts in an instant. But what if this sudden shift in perspective is only a realization? A realization of a shift that has been happening over some period of time. Is it really an epiphany then? If you pay close attention to your thoughts and actions, you’ll start to notice that over time they migrate. It’s human nature to adapt, and the world is ever changing. Thus our thoughts, actions, and perspectives must change to meet the needs of the present. Maybe epiphanies are rare because they only appear when our thoughts cross the line of least noticeable difference. The line between who we thought we were and who we actually are. Maybe it’s just a discovery of your changing self. And maybe, just maybe, the more curious you are, the more epiphanies you’ll have.

Maybe.

The other day I was talking with a colleague I have known for quite some time. Early in his career, I was his manager, and I did my best to impart my “wisdom” on him in that time. We later changed companies around the same time, from and to the same companies, and remained in contact. One of our conversations diverged into, for lack of a better term, “obsessing over code”. In my younger years, I was a huge proponent of this. I obsessed over every line of code. I required that I had a reason for every line, and that I knew what every line did. Why? Because it mattered; it forced me to take inventory of what I was doing. It forced me to simplify now, so that I wouldn’t have to toil later. Deferred gratification. But this comes at a cost. It’s a slow and painful process, unless of course you enjoy that pain. The outcome is generally clean, readable, maintainable, efficient code. Personally, I liked what that process produced, and I enjoyed the process. To me, it was art. This process works in small and medium scale companies. Your target market and product scale are smaller, and fewer developers and engineers touch the code. You can define a consistent style (or flavour) and patterns, and maintain them. At that point in my life, I also assumed this worked for large scale companies. I believed that for products to scale to mass market proportions, they must be architected and developed in this way: every engineer obsessing over the code, every detail, every byte over the wire. I believed that if I were ever to work for a FAANGM, I’d be surrounded by it. Early in my career I devoured books by people I idolized who worked at these companies. It gave me a [false] sense that this was how that world worked. It was a world I wasn’t part of but was striving to be in.

The reality: this was naivety, and a mostly false assumption, based entirely on my limited world view and perspective. My new reality began to crystallize. I realized that large scale software systems weren’t about the perfect layering of classes, the perfect separation of concerns, or the obsession over lines of code. They were more about the systems that create, deploy, and manage the product. It is an obsession with the outcome, not the source (code). An incredible amount of effort goes into tweaking engineering systems, build systems, review systems, and feedback systems. Equally remarkable is the attention given to security systems and on-call systems. It’s a system-level obsession, a macro-level obsession for macro-level software. I quickly stopped obsessing over each line of code. I began to obsess over how many 9s our system could maintain. I still hold onto the belief that obsessing over the micro can lead to improvements in the macro. The problem is, it doesn’t scale. You can’t do it at hyper-scale. If we let perfect be the enemy of good enough, we don’t ship, and we definitely don’t delight. We need systems that maintain good enough. We need even better systems that allow us to react and improve upon the areas where we lack. That way you can focus effort where it matters; that is how you scale. Given enough time and enough people touching it, any codebase is going to degrade. It will turn into something you’d look at side-eyed and ask, “how does that even work?” But if it’s still maintaining reliability, and it still has active daily users that it delights, it’s still alive. Does it really matter how bad the code is?
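As an aside, “how many 9s” translates directly into a downtime budget. A quick sketch of that arithmetic (the helper name here is illustrative, not from any particular SRE toolkit):

```python
# Back-of-the-envelope availability math: how much downtime per year
# does each "number of 9s" actually allow? (Helper name is my own.)
def downtime_budget(nines: int) -> float:
    """Allowed downtime in minutes per year for an availability of `nines` nines."""
    unavailability = 10 ** (-nines)      # e.g. 3 nines (99.9%) -> 0.001
    minutes_per_year = 365 * 24 * 60     # 525,600 minutes
    return minutes_per_year * unavailability

for n in (2, 3, 4, 5):
    print(f"{n} nines -> {downtime_budget(n):8.2f} minutes of downtime per year")
```

Three 9s allows roughly 8.8 hours of downtime a year; five 9s allows about five minutes. That asymmetry is part of why the macro-level systems get so much attention.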

The major adoption blockers of AI coding that I hear go something like this: the code it writes is really bad; it needs heavy review; you need to spend just as much time understanding it as you would’ve spent writing it in the first place. Whatever the statement is, it often refers to the quality of the code produced. If you’re more senior in your career and you work on a legacy or huge mono-repo (1M+ LOC), you’re likely reading and understanding more code than you’re writing. If you measured pure LOC output, it’s not that many. Your impact comes from influence, designs, and building systems, not just through code. That’s not to say code isn’t important at all; it has its place. It’s the material between an idea and its realization. But I am beginning to think it’s not as important as I’ve given it credit for. I pause to question the effort we invest in clean architecture, design patterns, and the single responsibility principle. I ask myself why. To me, the reasoning is simple: when humans are involved, these practices save us considerable time and pain. When things are clean and well instrumented, with good documentation, they’re easy to work with; we can understand them, troubleshoot them, and spot bugs easily. When code is well architected, it has good divisions of responsibility. New code is easy to add, and the whole is a lot easier to maintain. So we invest this time upfront to avoid paying it, plus interest, later. If you’ve ever dealt with a large legacy codebase with significant technical debt, you know what I’m talking about. But in order to have these things, we have to have started with a good design in the first place. A good separation of concerns in the system, a good idea of input possibilities and output values. We need a good design from the start. We also need a good understanding of patterns and practices. Only then can we produce good code. So is it really the code that is important?

One of the major challenges with AI coding right now is its application to huge codebases. People often expect it to work magic and miraculously refactor the code into something more understandable. This is a fool’s errand. LLMs are good at recognizing patterns, but guess what… so are humans. If you struggle to understand a codebase quickly, an LLM will struggle too. The difference, and herein lies the problem, is that you as a human can understand intent. LLMs cannot. So where you may see a pattern but recognize that it doesn’t fit the intent of the code, the LLM will be completely oblivious. These are the instances where the LLM “goes left,” as I like to say. It takes a wrong turn and starts producing garbage. This is what folks refer to as “hallucinations,” though depending on the context, it’s also considered creativity. In a lot of the work I’ve done with LLM coding, I stop and say to myself, “this is awesome; I never would’ve thought to make it look or work like that.” Other times, I stop and say, “this is terrible; I never would’ve made it look or work like that.” The difference is the perspective and the component it is working on. My experience utilizing LLMs to make code modifications in large, unruly codebases has been mostly negative. In a few instances, like updating packages, or in mid-scale, well-architected systems, it has been instrumental. Package updates and cross-repo refactors have saved a lot of time. Stuff like small precision type changes, or changes that span many multi-concern layers: bad. Which keeps bringing me back to, “if I can’t understand it quickly, how can I expect an LLM to?” If your codebase’s pattern is that there’s no pattern, then no LLM will fix that, and we should stop trying to make it. Instead we should start investing in how we can utilize the tooling to make things better. Invest in exposing pattern intent to the LLM, and in what that should look like as repeatable patterns.
Enforce good design and high-level system patterns, then have the LLM work within those boundaries. This can be done by implementing examples and using instruction files to direct the LLM at what to do. We should be using this tooling to build validation and testing harnesses too. Build validation in layers: unit tests, integration tests, and system-level tests. This way, we don’t need knowledge of every line of code; we can ensure the system behaves the way we intend by way of validation. If we’re just starting development, start with solid designs and fundamental patterns, and let the LLM do its thing. Then use it to build the harnesses that ensure the system functions the way you intend.
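To make “validation in layers” concrete, here is a minimal sketch in Python. The pricing functions are purely hypothetical stand-ins for whatever code an LLM might generate; the point is the three layers of assertions around them:

```python
# Hypothetical system under test: a tiny pricing component.
def apply_discount(price: float, pct: float) -> float:
    """Unit-level behaviour: one function, one responsibility."""
    return round(price * (1 - pct / 100), 2)

def order_total(prices: list[float], discount_pct: float) -> float:
    """Integration-level behaviour: components composed together."""
    return round(sum(apply_discount(p, discount_pct) for p in prices), 2)

# Layer 1: unit test -- pins down a single function's contract.
assert apply_discount(100.0, 10) == 90.0

# Layer 2: integration test -- checks the components compose correctly.
assert order_total([100.0, 50.0], 10) == 135.0

# Layer 3: system-level property -- an invariant that must hold
# regardless of how the internals are written or rewritten.
for pct in (0, 25, 50, 100):
    assert 0 <= order_total([19.99, 5.00], pct) <= 24.99
```

The bottom layer is the interesting one: it constrains behaviour without caring how the internals look, which is exactly the posture you want when an LLM owns the line-by-line details.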

I’m betting the skills we need in the future will be the ability to design and create systems, the ability to quickly understand a system’s behaviour and troubleshoot it, and the ability to perform large-scale system validation. It might seem closer to the skillset of the old SDET role, which is somewhat ironic. The overarching point is that the developer or software engineer role is quickly evolving into that of a system designer, architect, and tester. Which means we can all level up our thinking and our influence. We all become architects, with a team of LLMs executing on our visions. This is a future I’m excited for, at least for commercial and consumer products. Coding as we know it is going to become a niche exercise, limited to industries where software is a matter of life and limb, where formal validation is still necessary. Who knows, though; maybe one day LLMs will be able to produce at that level (if they’re not already). This saddens me a little, because as much as software development is a science, it’s also an art. There’s something to be said about the craft. I hope there will still be corners left to be amazed by. Libraries like ASIO and the STL showcase cleanliness and sheer brilliance. Perhaps enjoying these will be relegated to a Sunday hobby.

If you’re not already, you should be learning these tools. They’re not going away. You need to find a way to utilize them in your workstream. Instead of naysaying and avoiding them, find ways to leverage them to work for you. Question what is truly important in your role as a developer. The reality might be different than the story you’re telling yourself…

As always, Happy Coding!

“You can’t connect the dots looking forward; you can only connect them looking backwards.” — Steve Jobs
