Blog

They say Hindsight is 20/20

As we enter a new year, I can’t help but reflect on the last: what I did, what I didn’t do, with a focus on the things I’d like to accomplish in the coming 12 months. I’ll spare you the details of my self-reflection, though I would like to share the method I use for my approach. I am aware that this is a tech blog, and some might feel that personal and self growth have no place here. Well, to them I say “thanks for the view”! 🙂 This is Unparalleled Adventure after all, and what would an adventure be without some reflection?

For those of us in the industry, unless you want to stagnate, it’s important to set goals, reflect, and take steps to get better. Self-reflection and employee growth aren’t always on a company’s radar, which is interesting, because capital growth and ROI often are. To me, it seems there should be a positive correlation between employee growth and growth of a company. However, I digress. If it’s not part of your company’s yearly mandate to reflect and set goals, you should mandate it for yourself! Oftentimes developers want to become super-star 10x developers overnight. This isn’t realistic, and it can lead to disappointment in oneself. The reality is that it takes time, and lots of work. Becoming a great developer is a lot like optimization: you can’t optimize what you don’t measure. So in order to progress, you need to reflect and take stock of where you are, set some goals to progress, and most of all have a vision for where you want to go. For some of us, this can be quite an eye-opening experience. If you’ve never reflected and looked objectively at where you’re at, you might surprise yourself, one way or the other. If you’re struggling with how to reflect and set goals, it can be helpful to understand someone else’s approach.

Over the years, I’ve adapted a model that I feel has (for the most part) worked for me. Like anything in life, it doesn’t need to be followed to a tee, nor should it be written in stone. Instead it should grow and adapt, as you grow and adapt. You should follow what works, and disregard what doesn’t. My method may serve as a jumping-off point, or give you enough perspective to turn and run in a different direction. I’ve built the model I use off of what has worked for me in the past, removed what hasn’t, and built on foundations learned in school, as well as tactics and practices I’ve picked up from books, conferences, podcasts, and some creative ideas I crafted myself.

If you’ve ever read the book Tools of Titans by Tim Ferriss, you’d have seen the breakdown and categorization of Healthy, Wealthy, and Wise. Healthy being what directly affects the health of one’s body. Wealthy being what directly affects the health of one’s bank account. And Wise being what directly affects the health of one’s mind. For a few years, I crafted goals like I’d been trained to in school: Short-term, Mid-term, and Long-term goals. I want to exercise more in the next 30 days. I want to learn to program in the next 6 months. I want to have my vehicle paid off in the next year. Regardless of the quality of these goals, I always struggled with a way to categorize and relate them. What do exercise, programming, and having my vehicle paid off have to do with each other? This problem was only magnified by my ability to be overly self-critical. I felt that they didn’t make sense, and in turn that didn’t make me want to work towards them. After reading Tools of Titans, I decided to categorize my goals into the three categories. Healthy – goals that pertain to my physical health. Wealthy – goals that pertain to my financial health. Wise – goals that pertain to my mental health. I still kept the Short, Mid, and Long term lengths, and decided on one goal of each length per category. This gave me a total of nine goals, which wasn’t bad. I made them SMART goals based on their time frame, and I went from there.

The problem was that within categories, the goals didn’t relate. The Short Term goal didn’t run into the Mid Term goal, which in turn didn’t do anything to help me get towards the Long Term goal. The second pitfall was that my short term goal would get completed, and I would have nothing “Short Term” to work on while I worked on the longer term goals. This year, I’ve decided a good structure is to have a “summary” for each category, and 3 SMART goals per quarter that work towards the summary. This leaves me with 3 months of goals to focus and work on, and in the next quarter I can make adjustments. I’m hopeful this breakdown will give me more to work with, and allow me to work a little closer to my goals.

So, that brings us to the goals for my blog. In the two years I’ve had this blog, I’ve posted a whopping 14 posts. I’m proud of that, but it’s not the amount of content I would’ve liked to have posted. Though I never set a measurable goal, I would’ve liked to have closer to 50 posts. This year I dug in and set a goal: my goal for 2020 is 12 posts. That’s only 1 per month, so it’s seemingly doable. I’m hoping that 12 posts will raise readership and subscribers, and in 2021 I’ll be able to put more fingers to keys and share more of my learning.

In 2019, I had a total of 2,041 views, with 1,442 unique visitors. The post with the most views was The one where we reverse engineered Microsoft’s C++ Unit Test Framework (Part 1), which had a total of 714 views! The series as a whole did quite well, totalling around 1,500 views. My posts that talked about architecture, design, and the advice I wish I would’ve listened to each had a whopping 1 view. (Probably me reading them. :P) It’s obvious what people are interested in, and to be honest, it’s what I’m most interested in too. The code, digging into the details of the code, and doing obscure things.

I look forward to the next series of posts!

Until then, Happy Coding!

PL

Life can only be understood backwards; but it must be lived forwards. — Søren Kierkegaard

 

A Single Responsibility

Over the years I quite often find myself pondering software development patterns and practices. It’s the kind of thing I think about in my downtime, when I’m driving to and from work, or sitting at home… pondering. I always seem to come back to one question: is the common interpretation of a pattern, that is, how the majority of the world sees and implements it, what the author had intended? Is the inventor okay with it? It’s not like I’m going to solve this mystery today. One day I might have the opportunity to meet one of these greats and ask them: is the outcome of their idea what they intended? I oftentimes find myself reading code, or working on past code, where the implementation doesn’t line up with my current interpretation of the pattern / practice. Though, upon talking about the code, or reading comments, or blind guessing, it can be clear that the intent was to implement towards a given pattern. In my last post, I talked about Separation of Concerns, which in my opinion can be closely related to, and often confused with, the “Single Responsibility Principle”.

The intent of my last post, on Separation of Concerns, was to show that it in itself is not a pattern. It’s merely a practice that can be applied to thought, at any level. This can apply to your application at a macro level (application architecture), all the way down to the micro level (lines of code). Where I see things start to get fuzzy for people is from the application level down into the class level. The Single Responsibility Principle is an embodiment of Separation of Concerns, but this embodiment is at a specific level. Unlike Separation of Concerns, the Single Responsibility Principle applies only at the class level. I mean, that’s in the definition.

Classes should have a single responsibility and thus only a single reason to change.

But like everything in software, it’s open to interpretation. Patterns, Practices, and Principles fall victim to the subjective nature of application development. The real world of development rarely sees foo and bar outside of whiteboard sessions. This means that you have to deal with real world objects, and real world problems. Then as developers we have to translate the canonical and sometimes naive FooBar examples into our real world problems. Sometimes, more often than not, especially with less experienced developers, this leads to incorrect or harmful application of these principles.

Sometimes strict adherence to an interpretation of SRP and “Separation of Concerns” can be deleterious to an application. The unfortunate nature of this is that the problem doesn’t manifest until much later, when it’s too late. Now, I’m not trying to sit on my high horse and say I don’t misapply these things. In fact, it would be foolish to think that anyone is perfect when it comes to this. I would be willing to bet that Martin Fowler himself doesn’t get it right 100% of the time. The difference is that with experience, you’re able to spot the blunder before it’s too late. Before you’ve gone too far, and you’re on the cliff being faced with a re-write. In the real world, this oftentimes ends with a manager wishing he would’ve reviewed the code a little earlier, or a little more often. Hopefully, this post will help to clarify and add some litmus tests to the application of this principle.

First off, Separation of Concerns isn’t SRP. If you think that, just forget it. Right now.

Separation of Concerns is the organization of thoughts, the ability to separate components so that they don’t overlap, so that they’re mutually exclusive. The Single Responsibility Principle is an application of this practice: grouping things that have the same concern, or reason for change, into a single class. So it has a real world level of application. It’s at the class level, that’s where you apply it… And this is where the problem stems from.

Say you have an application: you’ve got a Web Service that deals with incoming client web requests, and you’ve got an Auxiliary Service that is moving data to and from a database, and servicing long running system requests. This is an example of Separation of Concerns. This is not an example of the Single Responsibility Principle. It’s too macro, we’re talking at our application level. Not at the class level. The problem that will stem from this form of thinking is macro level class development. God classes. Sure, you have a class that represents your service that “fulfills web service requests”. That’s his single responsibility… But is it? Is it really? If we imagine this mock class, he would have to

  1. Receive a message from the Web Server Service
  2. Parse said message
  3. Understand what to do
  4. Fulfill the request
  5. Build the response

Now, that’s definitely more than one responsibility! But from a macro level, it’s easy to see your class as having a single responsibility. You’re Separating your Concerns, remember?

In this case, it would probably make sense to have 1 class for building and parsing messages, 1 class that can switch on those messages to dispatch what to do, and 1 class for each of the actions to fulfill the requests. Woah. Woah. Woah. Wait just a minute. You just said 1 class for building and parsing… Isn’t that violating the SRP? Well, the answer, as so many are in Software Development, is ‘it depends’. That statement was intentional. It was meant to bring light to the fact that the definition says “a single reason to change”. When you’re dealing with protocols, building and parsing can often be symmetrical. Therefore, if the protocol changes, that could be our single reason for change of this class. So it could be said to have a single responsibility of dealing with the protocol.
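To make that a little more concrete, here is a rough sketch of what that split might look like. It’s only an illustration; the class and method names are invented, and the details would of course depend on your protocol.

// Sketch only: one class owns the protocol (symmetrical build/parse),
// one class decides what to do, and each action gets its own handler.
using System.Collections.Generic;

class Request { public string Action; public string Payload; }
class Response { public string Body; }

class MessageProtocol
{
    // Single reason to change: the wire protocol itself.
    public Request Parse(string raw) { /* protocol specific parsing */ return new Request(); }
    public string Build(Response response) { /* symmetrical building */ return ""; }
};

interface IRequestHandler
{
    Response Handle(Request request);
};

class CreateOrderHandler : IRequestHandler
{
    // Single reason to change: how this one action is fulfilled.
    public Response Handle(Request request) { return new Response(); }
};

class RequestDispatcher
{
    private readonly Dictionary<string, IRequestHandler> handlers_;

    public RequestDispatcher(Dictionary<string, IRequestHandler> handlers) { handlers_ = handlers; }

    // Single reason to change: how requests map to handlers.
    public Response Dispatch(Request request)
    {
        return handlers_[request.Action].Handle(request);
    }
};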

As you can see, where you focus the light of Single Responsibility will really play a factor in the organization and structure of your code. Just wait until you start shining that light too close.

When you start focusing the light at a micro level into your code, you’ll start to actually factor out responsibility.

Imagine you have a system that is used to dispatch SMS-style messages, but it’s old school, so it takes time. You’ve got a client facing API called a MessageBroker, and a background service called a MessageDispatcher. Clients of your API deal directly with the MessageBroker; they give the MessageBroker a message in the format of

class Message 
{
public:
   enum RecipientTypes { Specified, Random }; 

public:
   const Address sender_;
   const Address recipient_;
   const RecipientTypes type_;
   const String &message_;
};

The intent is that we give the MessageBroker the message, and he’ll do something with it for later pick-up by the MessageDispatcher. The MessageDispatcher will ensure delivery of the message to the recipient. Now, the API of the Message class is such that if you set the type to Specified and set the address, the message will arrive at your intended target. However! If you set the type to Random, it should go to a random target. This randomness isn’t really random; it could be based on a location heuristic.

You might think it’s best to define an interface for the message broker, and make it look something like this.

interface IMessageBroker 
{
    void Send(const Message &msg);
};

Then you might implement the IMessageBroker, something like this.

class MessageBroker : public IMessageBroker
{
public:
    void Send(const Message &msg)
    {
        // Validate the fields are appropriate for database
        ValidateFields(msg);
        // put it in the db
        Database.Store(msg);
    }
};

There you have it! Our SRP MessageBroker! He has a single responsibility. He accepts a message and stores it in the database. That’s it, right? Well, you might be asking “What about sending to a Random recipient?” Yah, that’s not the MessageBroker’s responsibility… His responsibility is to “Accept” a message, and store it in the Database. It’s someone else’s responsibility to determine the recipient. /s

I hope even without the /s, you saw the sarcasm in that. But this is the reality of imposing SRP at a micro level. You’ve just shirked the responsibility of this class. He’s not anything more than a glorified secretary for the database. Worse yet, the real responsibility is now imposed somewhere it doesn’t belong.

Let’s say you require the client of your API to specify it. Well, you’ve just opened up a can of worms… Who’s to say they’ll use the same heuristic? They might have your carrier running all over Hell’s half acre for the random recipient. Okay, force them to use a Utility class to generate the random recipient. What you’re saying now is that they have to remember to use your utility class to set the recipient. Or else… That’s not a very good design.

It makes sense to put the responsibility where it belongs. If calculation of a random recipient is part of how this system behaves, it only makes sense to put that logic into the MessageBroker.

class MessageBroker : public IMessageBroker
{
public:
    void Send(const Message &msg)
    {
        if(msg.type_ == RecipientTypes.Random)
           msg.recipient_ = GenerateRandomRecipient(msg.sender_);

        // Validate the fields are appropriate for database
        ValidateFields(msg);
        // put it in the db
        Database.Store(msg);
     }

};

What’s curious about this is that you can start to see you never needed the IMessageBroker interface in the first place. You see, the point of an interface is to be able to define a boundary of responsibility, then have any implementation of that interface provide that contract. Now, in our case, what’s our contract? Well, we get the Message somewhere it can be dispatched from. If it’s specified as a Random type, then we have to know to generate a random recipient, and get that to be dispatched. Would you get this out of the contract the IMessageBroker defines? I wouldn’t. So, for this design, it doesn’t make sense to have an interface. It’s a specific implementation, for a specific Message class. They’re very tightly coupled to each other. The behavioural expectation of your Message client is very much, if implicitly, dependent on the implementation behind that interface. As you can see, its responsibility really came to light once we took that step backwards and looked at it for what it really was. (If you’re curious how I would do it differently, shoot me an e-mail!)

In summary, when you’re trying to apply the Single Responsibility Principle, it’s really important to focus your view at the right level. Take an objective look, and ask, what is this thing really doing? What would be reasons for me to make edits to this class? If you are honest, and you start to see that your class has many jobs, you can start to refactor some of those into their own classes, and compose that larger class of the smaller classes. Nothing says that Composition cannot occur with SRP. It just means your larger class’ job becomes orchestration. There’s nothing wrong with that. The trap you don’t want to fall into is shirking responsibility. You don’t want to start refactoring your code where you’re pulling out responsibilities and putting them on other classes. This will lead to a host of skeleton classes that are merely pass-through objects. The refactor in that case is to look for the responsibility you’ve pushed on your clients. Ask yourself, should the client need to know that? If I was using this, and didn’t write it, would I know to do this? Those are the types of questions that you’ll find start to drive responsibilities back to where they belong. That said, it’s a balance, and finding it is tough; it’s just a matter of practice.
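As a small sketch of what that orchestration can look like (the report classes here are invented purely for illustration), the larger class keeps a single job, coordinating, while the classes it’s composed of do the actual work:

// Sketch only: the outer class' single responsibility is orchestration.
class ReportData { }

class ReportDataReader
{
    public ReportData Read(string reportName) { /* fetch the data */ return new ReportData(); }
};

class ReportFormatter
{
    public string Format(ReportData data) { /* turn data into a document */ return ""; }
};

class ReportWriter
{
    public void Write(string reportName, string document) { /* persist the document */ }
};

class ReportGenerator
{
    private readonly ReportDataReader reader_ = new ReportDataReader();
    private readonly ReportFormatter formatter_ = new ReportFormatter();
    private readonly ReportWriter writer_ = new ReportWriter();

    // One reason to change: the order in which the steps are coordinated.
    public void Generate(string reportName)
    {
        var data = reader_.Read(reportName);
        var document = formatter_.Format(data);
        writer_.Write(reportName, document);
    }
};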

I hope you enjoyed the post. As always — Happy Coding!

PL

“The price of greatness is responsibility” — Winston Churchill

 

Separating your Concerns

I spent a lot of time debating what I should title this post. Should it be “Buzz Words”? Or maybe “Separation of Concerns and SRP”… SOLID Concerns? In the end, I settled on this: Separating your Concerns. I also spent a bunch of cycles asking myself what I really wanted to cover. What ideas do I want to convey? Then I spent some time thinking about how I would convey that message. What literary road would I take? From whose style would I borrow? However, none of this is of concern at this moment. The fact that it’s written is all that matters now.

I want you to think about the word, Organization, and what it means to you. Just take a moment to reflect on it.

When I think about Organization, I think about form, and about order. Organization makes me think about neatness, about cleanliness, and about space. Room to breathe, and room to work. Organization to me is logical, it’s order, and the opposite of that is chaos and anarchy. In our world, spaghetti.

Hold that thought.

Now, I want you to consider what “programming” is. What exactly is it for? What is a programming “language” for? Be specific.

You might think to yourself — “programming is telling the computer what to do.” Kind of. “A programming language is the syntax we follow to do that.” Kind of.

One of the definitions of “programming” is the act of providing a computer (or machine) with coded instructions for the automatic performance of a task. Now, that’s the definition of programming. But anyone who has worked on any project bigger than a hobby project, with multiple developers, knows it’s so much more than that.

Programming is not only about telling the machine what to do. It’s doing it in a way that other people can understand. It’s almost as if our programs are stories, and the compiler is the translator that turns them into instructions. The programming language, therefore, is our medium of communication between developers. Since, realistically, the computer can’t understand C++ or C# directly. Consider for a moment if we all had to “program” in CPU instructions written in binary, something a computer can understand directly. As soon as you have more than one person on the project, you’ve got a nightmare. It would be bad enough that a person has to communicate with the computer in that way, let alone with other developers. Trust me when I say, programming in higher level languages is for us, not them.

You might say “No. Comments are for communicating with other developers, and code is for the computers.”, to which I would reply simply “You’re wrong.” If you think that, I am sorry for you. (Actually, I’m more sorry for people who work with you.)

So programming, then, is really a form of communication. It’s not completely unlike writing a story. This may seem like a bit of a woo-woo metaphor, but it helps to illustrate my point. Your tangled rat’s nest of notes you took in school might’ve served its purpose for your studying. Though, it’s highly unlikely that it is of any value to anyone else. Not unlike programming, just because something is working when you write it doesn’t make it of value to anyone else. Unfortunately, that mess of notes you took 3 years ago is probably of little value now, even to you. The realistic truth is that the you of then is not the you of now. This is just like bad code. Trust me, I know, because I’ve written so much bad code it’s mind boggling.

It’s not like this wasn’t a problem with writing. That’s why people invented literary tools, sentence structure, and paragraphs. That’s why we have archetypes, and plot patterns. It’s to better communicate our stories. These things exist in programming too; we have numerous languages, different patterns and practices. Hell, we even have smells. All these things try to make our code easier to understand, for others, and for ourselves.

Let’s go back to Separation of Concerns, and how any of what I just wrote actually applies to programming. If we stick with the metaphor that coding is a lot like writing, then what in writing is “Separation of Concerns”? Well, in your travels I’m going to assume you’ve read a book. In that book might’ve been a Table of Contents, and that Table of Contents laid out the chapters of the book. The “logical separations” in the book. The book is structured in such a way that the chapters flow. The author has taken the time to logically bundle pieces of relevant information. Can you imagine reading a book that is teaching you about Botany, and in Chapter 1 they dive directly into the root structure of a Hibiscus, and on the next page the configuration of soil nutrients? That’s not a book I want to read. So the act of bundling relevant information, for consumption of the reader, is in fact “Separation of Concerns”. Now, you can take this further. Do you think that when the author is crafting her book, she’s worried about what type of paper it’s going to be printed on, or the font size? Probably not. Though, if she’s writing a book that needs special paper and a tiny font, she will likely ensure that the printing process will indeed support her special paper and tiny font. She certainly isn’t going to worry about this while she’s writing the book. In this regard, the author is Separating Concerns, regarding her functional requirements, i.e. writing the book, and the non-functional requirements, that the book needs to be on special paper in a tiny font.

If we consult Wikipedia in regards to “Separation of Concerns”, we can look at the statement from Edsger W. Dijkstra that probably coined the term.

Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained —on the contrary!— by tackling these various aspects simultaneously. It is what I sometimes have called “the separation of concerns”, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of. This is what I mean by “focusing one’s attention upon some aspect”: it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect’s point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.

The first time I read this quote, I don’t think I fully grokked it. So I read it again. And again. And again. Then I remembered a phrase that a lot of developers will preach: Make it work. Make it right. Make it fast. Coined by Kent Beck (or the Unix way, depending on who you ask) some time ago. But Dijkstra beat him to it. Kind of. The common theme here is that these pieces are logically separated. You concern yourself with the correctness, the efficiency, and the functionality of your application each in isolation, all the while keeping the other concerns in the back of your mind. It was eye opening.

Alright — enough philosophy. How does that actually affect my programming?

Well, we know now that Separation of Concerns is essentially grouping the logical functions of our program. That allows us to worry about a certain function of our application, without flooding our minds with the other pieces. It allows you to focus. I don’t care how smart you think you are, you can’t focus on more than one thing at a time. If you try while you’re programming, you’ll end up with a tangled mess. You’ll start to bring pieces that belong in other areas into the area you are in. Then you’ll take pieces from where you are, and put them in areas they don’t belong. The smartest people have the innate ability to focus on one thing at a time, while keeping track of all the other things outside.

Authors have chapters; as programmers we have abstractions. In my opinion, the two definitions that most closely apply to programming are:

the process of considering something independently of its associations, attributes, or concrete accompaniments.

and

the quality of dealing with ideas rather than events.

The first part of an abstraction is that it lets you consider the component devoid of its baggage. You don’t care that it references EntityFramework, Boost 1.61, or the entirety of GitHub. You don’t care that it has 1.3 million member variables, and 42 private functions. The only thing you care about is that it fulfills its contract to you. That is, you care about its public API. You ask for a list of Foo, you better get a list of Foo. That’s why you need to keep your public API clean and concise. Because that’s your first layer of abstraction.
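As a tiny sketch of that idea (IFooRepository is a made-up example, not from any real library), the contract is the whole story from the caller’s point of view:

// Sketch only: the caller sees a small, clean contract, not the baggage behind it.
using System.Collections.Generic;

class Foo { }

interface IFooRepository
{
    // Ask for a list of Foo, get a list of Foo. That's the contract.
    IReadOnlyList<Foo> GetAll();
};

class FooRepository : IFooRepository
{
    // Whatever lives in here (ORMs, caches, 42 private functions) is not the caller's concern.
    public IReadOnlyList<Foo> GetAll()
    {
        return new List<Foo>();
    }
};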

The second definition is an extremely important part of an abstraction. It allows you to deal with a concept, rather than its details. This lets you think at a higher level, and deal with larger concepts without drowning in all the details. The (not so) modern remote, or clicker, on a television had an abstraction that allowed you to press “up”, and that would tune the frequency of the television to the channel. This allowed you to work with the idea of “up” and “down” on your remote to tune your television, instead of being concerned with tuning the radio frequency that represents the channel.
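In code, that remote might look something like this toy sketch (the channel-to-frequency math is made up; the point is only what gets hidden):

// Sketch only: the remote exposes the idea ("up"), the television keeps the detail (a frequency).
class Television
{
    public void TuneToFrequency(double megahertz) { /* drive the tuner */ }
};

class Remote
{
    private readonly Television tv_;
    private int channel_ = 1;

    public Remote(Television tv) { tv_ = tv; }

    public void ChannelUp()
    {
        channel_++;
        tv_.TuneToFrequency(FrequencyFor(channel_));
    }

    private double FrequencyFor(int channel)
    {
        // Invented mapping; the caller never needs to know it exists.
        return 55.25 + (channel * 6.0);
    }
};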

Obviously, given the fact that this entire post is riddled with metaphors, you might already know I’m a sucker for metaphors. The good news is, I’m about ready to unveil the pièce de résistance of my metaphors. It’s the one I think most clearly buttons this concept up. It’s the car. The car isn’t just a fantastic example when we’re teaching inheritance. It’s just a great all around metaphor when it comes to programming.

Specifically the steering wheel — this is a fantastic abstraction metaphor. You get into your car, and you look at the wheel. Do you care about the mechanics behind it? No. Do you care about the linkages between the wheels and whatever makes them turn? Nope. How about the power steering pump? The electronics? Nope, and Nope. You care that this thing makes your vehicle go left and right.

Do you know why this is so important when it comes to driving? It’s because driving is complicated; you need to have quick reaction times, and be able to focus on other things, like the rules of the road. At that level, you only have the capacity to concern yourself with “left” and “right” when it comes to steering. Imagine what driving would be like if you had to push and pull levers and rotate gears to turn. It would be a nightmare. That wheel in the cockpit of your vehicle allows you to focus on actually driving, instead of the details of turning the wheels. It moves that concern to a different area. Now, do the mechanics behind that wheel still matter? Hell yeah! But as long as they’re functioning when you’re driving, they aren’t your concern. It’s beautiful. The second benefit of this: you can drive a bunch of different cars. You could have a beat up Pinto today, and tomorrow be “rolling ‘benz”, all because you know how to use a steering wheel. Steering stays the same, and it has for many years.

Another awesome benefit of the steering wheel: debug-ability. Say you crash your Pinto taking a turn. It’s pretty easy to figure out where the bug is. Is the bug between the seat and the wheel, or between the wheel and the headlights? Let’s check: when I turn left, do the wheels turn left? Yes. Is it the correct amount? Check. Okay, the problem lies between the wheel and the seat. Easy. It’s also really easy to check this when the car comes off the lot, or after you’ve had it serviced. Weird, that sounds a lot like unit testing to me.
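That “turn left, check the wheels” check translates almost directly into a unit test against the abstraction. A toy sketch, with the car parts invented for the example (using MSTest-style attributes):

// Sketch only: test the contract of the abstraction, not the linkages behind it.
using Microsoft.VisualStudio.TestTools.UnitTesting;

class Wheels { public double Angle { get; set; } }

class SteeringWheel
{
    private readonly Wheels wheels_;
    public SteeringWheel(Wheels wheels) { wheels_ = wheels; }
    public void Turn(double degrees) { wheels_.Angle = degrees; } // the "linkage"
};

[TestClass]
public class SteeringWheelTests
{
    [TestMethod]
    public void TurningLeft_TurnsTheWheelsLeftByTheSameAmount()
    {
        var wheels = new Wheels();
        var wheel = new SteeringWheel(wheels);

        wheel.Turn(-15.0); // turn left

        Assert.AreEqual(-15.0, wheels.Angle, 0.001); // and by the correct amount
    }
};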

So, to sum all of this up: Separation of Concerns affects the entire spectrum of software development, from architectural design right down to the details of the code. It’s about organization, and only concerning yourself with one topic at a time, all the while keeping in mind those other pieces. You first concern yourself with the design of your program. Capture the like ideas, divide them into chapters. While you’re implementing your chapters, look for steering wheels. Let your chapters flow into one another, but don’t mix Chapter 1 and Chapter 13. Find spots for steering wheels; don’t force people to use gears and knobs. Mostly, know that this is a process, and it’s hard. You’re likely not going to get it right the first time. It’s just about trying, keeping these concepts in the back of your mind, and looking for opportunities to apply them.

I hope that this post has brought some light to the topic of Separation of Concerns. Maybe next time I’ll try my hand at one of the other concepts of Computing that I got wrong so many times.

“Organizing is what you do, before you do something, so that when you do it, it is not all mixed up” — A.A. Milne

I hope you have a splendid day, and as always — Happy Coding!

PL

 

 



What I wish I would’ve listened to – Part 1

You’ve heard it a million times: “here’s some advice I wish they would’ve told me when I was your age”. In my case, I’m very fortunate to have grown up with great parents, who shared with me a lot of their life experience and knowledge (thanks Mom & Dad). Sometimes though, I was just too stubborn to hear or see it, or maybe I just wasn’t ready. Regardless, I think that much of “What I wish they would’ve told me” is sometimes actually “What I wish I would’ve listened to, when they told me.” We often just don’t get it at the time. Maybe we just want to do things differently, or we know better. Now, don’t get me wrong, it’s not to say I haven’t blazed my own trails, or that I haven’t walked on paths not yet traveled by my mentors. But it’s to recognize that we oftentimes miss some of the things our elders want us to know. This isn’t to say that the world would be a better place if we just followed everything our parents, grandparents, teachers, or bosses told us to do. We would then just be a society of automatons that can’t think for themselves. Regardless of whether we didn’t want to hear it, or they never told us, this miscommunication becomes the things “I wish they would’ve told me when I was younger”.

So, now it’s my turn. My reflection on my career sparked an urge to share some of the life lessons I’ve learned in my short time on this rock, and my even shorter time as a Software Developer. The things I wish I would’ve paid a bit more attention to. Those things that I’ve had to learn the hard way. That advice I wish I would’ve heeded. I’m going to try and make it as accessible as possible, but there’s a high likelihood it will become information lost in the folly of our future generations. Just like the lost wisdom of our elders past.

Television will rot your brains… You should read more.

These days, with all the streaming services, and access to everything our friends and idols do every moment of their lives at the tips of our fingers, who even has time for reading? Who even wants to read? From as early back as I can remember, I despised reading. My grandmother and my mother did their best to try and get me to read as a youngster. Reading to me, giving me access to any books that I desired. I desired a lot of books; I liked the way they looked on my shelf. All with perfect spines. It was obvious that reading was for people who couldn’t watch movies. Why would anyone want to stare at a bunch of text on paper, when they could stare into the wonder that was film?

By the time I was in high-school, I could probably count the number of books I’d actually completed on one hand. For most of my elementary and junior high days, the internet wasn’t much of a thing just yet, so you couldn’t just find the synopsis online. I had to work to do less. I did things like read the first and last paragraph in each chapter. Read the back of the book. One memory that sticks out was from grade 11. We had to write a book report on Life of Pi. Obviously, I loathed book reports, because not only did I have to read, I had to write. So instead of actually reading the book, I read part of the book, and relied on chatting with my classmates to “absorb” the rest of it. I did a pretty good job of gathering the information. I got the scoop on what happened in the book from a friend, we’ll call him Chris, because that was his name. Chris filled me in on the plot. A boy and his family, and their zoo, get on a boat to move to Canada and sell the zoo. On the journey, the freighter sinks, leaving Pi and some other zoo animals castaway’d, Tom Hanks style, on a life raft, one of them being an adult Bengal tiger. They have some trials and tribulations, some of the animals die. Pi and the tiger eventually get rescued. End of story. If you’ve read the book, or seen the movie… spoiler alert. There’s this whole part about Pi telling an alternate story which involves humans as the zoo animals. To which you’re left a choice of which story to believe. Apparently, my English teacher thought this was of some “literary importance”. Thanks a lot, Chris… Needless to say, the grade on that report wasn’t on the fridge.

You’d think this would be enough to push me to the books. Nope. I was able to skate by on minimal effort through high-school, into and through university. In university, the internet was in full force, so I didn’t have any more “Chris” episodes. However, after university, in my early 20’s, I hit my breaking point. I had just finished a climb with a friend from school, and we were all going to go out for beers. It was with friends of a friend; she was an engineer, so I figured I was safe. Arts degrees… amirite? So we go out for beers, and somehow we get on the topic of reading and “literature”. I still get chills down my spine when people pronounce it lit-er-at-ture. The round table ended up on me, and my reply was that I didn’t read. To which one fellow commented, “you can’t read?” I stumbled and tried to recover, “No. No. I can read. I have a university degree… I just don’t read for fun.” “Sure.” He smirked. That rocked me. After that, I would be a “reader”.

It’s been about 10 years since that guy made a fool of me. I wouldn’t consider myself a book worm by any means, but I read every day. I didn’t start that right away. It just came organically; I just started reading programming books. That evolved into other non-fiction, self help, biography, history, philosophy, etc… I’m not a huge fiction fan though. The thing that I failed to realize as a kid, youth, teen, and young adult is just why reading is so important. I’m sure someone, at some point in my life, told me. I wasn’t ready for it, because it didn’t affect me at the time. I didn’t need reading to get by at the time, and getting by was what I was focused on. I suspect, if you’ve read this far, “getting by” isn’t what you’re interested in.

People have probably spouted the traditional benefits of reading. Things like mental stimulation, relaxation, and increase of knowledge. In order to set myself apart, I won’t use those traditional examples. Hopefully my examples will be more application and less theory. Maybe they’ll apply more to a modern age. In a modern corporate world, why is reading so important? You’ve heard the saying “Cash is King”. How about “Communication is King”? The way you communicate with your colleagues, mentors, and supervisors largely determines how you progress through your career. You could be the most brilliant entrepreneur, but if you can’t communicate your brilliance, to an outsider you look like a buffoon. In my opinion, strong communication skills are the key to success. You want a successful, fulfilling career? You’ve got to be able to foster healthy corporate relationships, and you do this by communicating your thoughts and ideas. You hope you get a mentor or supervisor with good communication skills as well, so she can communicate to you your weaknesses and how you can improve.

Communication is successfully conveying information to an audience. The key point of that is “successfully”. The thing is, humans are social beings. We’re made to communicate. A group of like-minded individuals have an easier time communicating, and thus successfully exchanging ideas. If you put the “cache-me-ouside-how-bo-dah” girl (Bhad Bhabie) in a room of her friends, they’d all understand her. Put her in a room of executives, and you’d have a lot of men in suits scratching their heads. Did someone say cash? In order to be good at communication, you’ve got to be able to communicate outside your immediate circle of like minded individuals. As always, you might be wondering why I’m explaining this, and what the hell it has to do with reading. Well, writing is a form of communication, and like it or not, with IM, SMS, and all the various forms of messaging applications, writing is becoming a very important form of communication in the workplace. So when you read a variety of books, you learn a variety of communication styles. When you have an arsenal of communication styles, you’ll be better suited to customize your communication style to the audience at hand. This will allow you to have a higher percentage of your thoughts and ideas understood by others.

The second important part is that reading provides you with better “mental models”. If you read non-fiction, you’ll amass knowledge of history, current facts, etc. If you read fiction, you’ll learn about archetypal storytelling. You can use this knowledge to help you frame the massive amounts of stimulus (data) you receive every day. Being able to frame, sort, and organize the data you receive allows you to make better informed decisions, which in turn leads to better and more successful outcomes.

In summary, if you’re having trouble getting your messages received by others in your world, you should read more. That television will rot your brains.

“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.” – Dr. Seuss

On Leading Yourself

Forgive me, for it has been two months since my last post.

Sometimes life just ends up getting in the way of the things you have desires to do. Your passions and goals get moved to the back burner, as you deal with the daily chaos that is life.  It takes a concerted effort to be able to maintain balance where one is able to dedicate consistent time to their hobbies, interests, and self growth.

In the past couple months, I’ve had a bit of what one could call a career “do-si-do”. It was good, and gave me some great insight and different perspectives on careers, life, and balance. It also allowed me to solidify some of the things I had questions about, or the unknowns I wasn’t sure about. I was able to meet a tonne of new people, and gain a whole bunch of different insights into other people’s points of view. When reflecting on everything that transpired in the last 6 months, I couldn’t help but look back on my career. Now, I haven’t had the longest career; I’ve only been a software professional for about 8 years. However, I think my journey has been quite fruitful, and has allowed me to learn a lot about the industry, and a lot about myself. Interestingly enough, in my 8 years, I’ve spent more than five as a “lead”. For me, this felt normal. Isn’t this how every software developer’s career progresses?

Spoiler alert: it’s not. To me though, it never seemed like I did anything out of the ordinary to move myself towards being a technical lead, or even into a team lead position. In fact, when I graduated and started my career, my goal was just to be the best developer. Management? Eff that. I wanted to be the best developer, ever. Not the best developer I could be, not the best developer at the company. I wanted to be the best developer — EVER. Lofty goal, right? SMART goal? Wrong. I’ll be the first to say it was a stupid goal. Though it was rooted in good intention, for the most part; I won’t lie and say I wasn’t enticed by the money, women, and fame I would get. The reason I wanted it so badly was that I had never really been the best at anything. Growing up I was exactly average; if you placed me on a bell curve I would be right smack dab in the center of the bell. Being average my whole life led me to want one thing. I wanted to be the best at something. I didn’t know what, but I wanted it, bad.

When I got to university, I selected my primary study as business. What 17 year old high-school male doesn’t want to graduate with a business degree, become an investment banker, and then be Batman? I have parents, so that tells you something about me becoming Batman. I found out that a degree in business required a lot of reading, something that I wanted no part of. Faced with a daunting path ahead, which consisted of a plethora of reading and likely writing, I did what any 18 year old pre-business student who hates reading and writing would do. I looked for the easiest out possible. The obvious choice here was a computing science degree. I had taken Computing Science 101, and received an A (woo, not average). So I selected the obvious easiest path forward. Computers were always a thing I was good at, I just hated the thought of being a nerd. Here I was, faced with the choice to be a nerd, or suffer having to read books. *shudder* Flash forward a few years, through what was not as easy a path as initially thought, and I graduated. You guessed it, I had an average GPA. To my surprise though, I graduated with a job.

It wasn’t long into my career when I realized that in order to be good at something, I mean above-average good, you have to put a substantial amount of effort in. Those people who are the best at something, anything. Pick something: running, swimming, math, music, writing, you name it. I can tell you one thing about that person: they worked their ass off to get to where they are. It was a cold realization that day, that if I wanted to be the best at something, I had to work harder than everyone around me. All those years I spent trying new hobbies, searching for that one thing I would be the best at, I finally realized I would never find it. Because it was right there in front of me. It was a matter of picking something and dedicating myself to it. That meant, to achieve my goal of being the best developer ever, I would need to practice. A lot.

You’re probably thinking at this point, that I’m just telling you my life story. That I’m not actually giving any relevant information on how to actually become a technical lead. Maybe you’re right. Let’s just consider though, if you’re young and you’re looking to be a lead developer, or a team lead, you’re probably not looking to ride the conveyor of time. Sure, that’s one approach to becoming a lead, you wait it out. Eventually, you’ll add enough years of seniority, and enough people in front of you will age out that you’ll get promoted. That is if they don’t hire someone external ahead of you. If you’re fine with that, you can stop reading. If you’re looking to get off the conveyor, you have to be the best, or at least better than the guy in front of you. And that takes work, significant work. You might say “people who are technically strong don’t make the best people leaders”, and you would be right, sometimes. To be a technical lead though, you have to be technically strong. To get strong, you have to train.

So, let’s address that elephant of strong technical leaders not being good people leaders. If we look at why people who are strong technically often struggle with the interpersonal side, we should ask the question: what does it take to be strong technically? Well, with computers especially, you have to be really good at telling them what to do. You have to be very explicit in your instructions. If you’re not, the computer, doing exactly what you told it, will behave differently from your expectation. The response is easy. Inspect the source, tell it what you really meant, and have it try again. Machines aren’t like people. After you re-tell it what to do, it will execute your instructions with the same care and accuracy as before. To be technically strong, you only have to know in depth how and why the computer behaves the way it does. In short, you have to think like a computer. Unfortunately, humans don’t act like computers. So in order to lead people, you have to think, you guessed it, like a human. Which is a hard dichotomy to master. Hence the difficulty between being a technical leader and a people leader.

When I became a Team Lead, I will fully admit that I wasn’t ready. The guys who had to work with me would say the same thing. So what did I do that made me stand out? I can’t say for sure, but I think it had something to do with the way I lead myself. Recently I was listening to Jocko Podcast 170; about 25 minutes in he gives advice to a person who interviewed for a team lead position and didn’t get it, because they didn’t have experience. His quote is something along the lines of “You’re in charge of something.” He uses examples like machinery or a process. Something. If you can’t think of anything you’re in charge of, think of this: you’re in charge of yourself. The reality is that if a company can’t see that you’ve got it figured out enough to be in charge of yourself, how can you expect them to feel comfortable letting you be in charge of others? In my case, I didn’t know how to listen to people. I thought being a Team Lead meant barking orders, and expecting perfect results. I didn’t know what I was doing. The silver lining was that I knew how to realistically evaluate the situation, and learn from it. This let me move forward. Moving forward is key. No one is perfect, and no one is the perfect team lead, especially in the technical industry. There’s too much variability, and when people are involved nothing can be perfect. What I have is a burning desire to do better. I never want to end a day being worse at something than I was the day before. There may be setbacks, there may be plateaus, but overall I want an upward direction. So maybe I wasn’t the best at listening; my thought was, how can I get better at listening? Maybe I’m not good at communicating. How can I get better at communication? It’s always been about the next step. Taking the next step, and the next step, is how you break away from the pack and lead. This also keeps you humble, because you can realistically see your faults. You have to see them to get better. If you can see your own faults, it will help you lead. A realization that everyone has faults lets you pick not only yourself up, but others around you as well.

Realistically, if you want to be any sort of lead, in any sort of industry, you have to start by leading yourself. Making the decisions that will ensure you don’t stagnate. Just because you work 8 hours per day doesn’t mean you’re getting better. It takes a focused effort, it takes practice, and it takes willpower. You also need the realization that nothing in life is a guarantee. Leading yourself means taking these steps, not with a short sighted goal, but with a long term goal of being better. It means putting in the effort day in and day out to ensure you’re better for it. Naturally people will see this, and want to follow it. Eventually, you will be able to switch your focus from supporting and teaching yourself, to supporting and teaching others. That right there, is a whole different story.

“Example is not the main thing in influencing others, it is the only thing.” Albert Schweitzer

[ASPeKT] Oriented Programming

I recently had the pleasure of doing a podcast with Matthew D. Groves, of the Cross Cutting Concerns blog. He essentially “wrote the book”, so to speak, on Aspect Oriented Programming. It’s called AOP in .NET, and without pumping his tires too much, I will say that his book is pretty great. I just recently finished reading it, and came to the conclusion that Matthew and I are on the same page regarding a lot of Software Development fundamentals. Specifically, his take on AOP (Aspect Oriented Programming) and the benefits of it. The ability with AOP to factor out common code that crosses all areas of your code base, and encapsulate it so that you have a single point of change, is a very powerful concept. He illustrates concepts like these, and many others, in his book. It also gives a nice overview of the different tools available in the AOP world. Even after writing my own AOP library, I was still able to learn a lot from this book. If you’re interested in AOP or Software Development in general, you should definitely check it out.

To follow up on Matthew’s podcast, featuring me and the library I created called [ASPeKT] (no relation to ASP, I just liked the way it looked), I wanted to write a post that overviews AOP and some of the benefits of this powerful programming paradigm. I want to talk about [ASPeKT], the benefits and costs of using a simple AOP library. Then I’ll detail some of the challenges I faced as I wrote it, as I work to make it more known, and as I continue to make it better.

Why Aspect Oriented Programming?

AOP is a little known, yet very widely used programming paradigm. How can it be little known, yet very widely used, you ask? Mostly because it’s built into a lot of .NET libraries and frameworks that people use every day, but they just don’t know they’re actually using AOP concepts. Interestingly enough, ASP.NET MVC authentication uses AOP patterns. Most people just go about programming and using the Authorize attribute, without knowing they’re actually hoisting a cross-cutting concern up with a more declarative (cleaner) approach, and letting the framework deal with it. For me, as a skeptic of AOP at the beginning, it was a huge eye opener to realize that these concepts are actually applied all over in the .NET world. We just don’t realize it. This also brings to light the power of being able to spot cross-cutting concerns, and to encapsulate them using a library like [ASPeKT]. Thus removing the need for clunky boilerplate code, and copy-pasta patterns that live in documentation, or worse yet, only in the minds of your more senior developers.
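For example, in a classic ASP.NET MVC controller it looks something like this (a minimal sketch; the controller and action are invented, but the [Authorize] attribute is the real framework feature being described):

using System.Web.Mvc;

public class AccountController : Controller
{
    // The cross-cutting concern (is this caller allowed in?) is declared once,
    // and the framework weaves the check in around the action.
    [Authorize]
    public ActionResult Dashboard()
    {
        return View();
    }
}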

What is Aspect Oriented Programming, really?

In order to really understand what AOP is, we have to first understand what a “cross-cutting concern” (CCC) is. A CCC is the fundamental problem AOP looks to solve. These CCCs typically tend to be the non-functional requirements of your application. They’re code that spreads across your code base, but they’re not easily factored into their own units. The canonical CCC is logging or auditing. Your application can function, business wise, without it. Yet, if you were to implement the need for “logging” across your application, you would end up with logging code that pollutes your actual business logic. Things like logging the entry and exit of functions. You end up with code like this.

class Foo 
{
    public void Bar() 
    {
        Console.WriteLine("Entered Bar");
        Console.WriteLine("Pour a drink.");
        Console.WriteLine("Exiting Bar");
    }
};

You can see how this can get tedious. You end up having ‘rules’ or ‘patterns’ that define how and what you log. “This is how we log entry into a function, with its parameters: ‘Entering Function – Function’. The format lives in a document, stored in SharePoint.” Now imagine when one day that needs to change, and you’re forced to update the 43 million different log lines across your application. Welcome to Hell. Talk about why they call these things ‘concerns’.

Logging isn’t the only concern that spreads and tangles across your code. Things like Authorization, what I call function ‘contracts’ or defensive programming, threading, and many other patterns do the same. These patterns are often overlooked as cross-cutting and don’t always look like boilerplate, but with a keen eye and some creativity they can be teased out into their own re-usable aspects.

Using an AOP tool allows you to hoist this boilerplate code into its own encapsulated class, where it belongs. Then it allows you to place it declaratively, which makes the code read more explicitly, but also removes the clutter from the actual business logic of the function. These tools make it so you don’t have to weed through the logging, defensive programming, authorization, etc. just to find out what the actual intention of the function is. You’ll end up writing code more akin to this.

class LoggedAttribute : Aspect 
{
    public override void OnEntry(MethodArgs args)
    {
        Console.WriteLine($"Entering {args.MethodName}");
    }
    public override void OnExit(MethodArgs args)
    {
        Console.WriteLine($"Exiting {args.MethodName}");
    }
};

class Foo
{
     [Logged]
     public void Bar()
     {
          Console.WriteLine("Pour a drink.");
     }
};

This code effectively functions the same as above. However, it is substantially cleaner, and the biggest benefit is the ease of change. Now, if I’ve used the LoggedAttribute across my code base and I want to make a change, I only need to make the change in one spot, as opposed to everywhere we copied and pasted the logging lines. AOP allows you to offload the tedious boilerplate code onto the machine, which is much, much, much faster at typing than humans. The machine also never makes a typo.

Now that you know what a cross-cutting concern is, I can explain AOP. Effectively, AOP is a tool, set of tools, or a paradigm to deal with cross-cutting concerns. In order to subscribe to an AOP model, you need the following 3 things.

  1. A Join Point – this is the point at which our AOP model allows us to interject code. In the case of [ASPeKT], the join points are method boundaries, i.e. entering / exiting a function. Other libraries like Castle DynamicProxy and PostSharp allow for actual method interception, so you can say the join point is this call instead of that one. This can be useful for things like error handling, retry logic, or threading models.
  2. A Point Cut – this is the way in which you specify where you want to apply code to the Join Points. Think of this as a sentence describing where you want the code to run. I know my Join Point is entry/exit of a function. My Point Cut could be “every function that starts with My”, or simply, “every function in the application”. [ASPeKT] uses attribute placement as the Point Cut definition. So where you place the attribute determines how it will apply the code to the Join Points.
  3. Advice – essentially, the code you want to run at the Join Points. So for [ASPeKT], this is the code you write in OnEntry / OnExit / OnException (there’s a small sketch tying these three together just after this list).
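To tie those three together, here’s one more sketch in the same shape as the LoggedAttribute example above. The audit aspect itself is invented, but the mapping is the point: the OnEntry body is the advice, method entry is the join point, and wherever we place the attribute is the point cut.

// Sketch only, reusing the Aspect / MethodArgs shape shown earlier in this post.
class AuditedAttribute : Aspect
{
    // Advice: what runs at the join point (method entry).
    public override void OnEntry(MethodArgs args)
    {
        Console.WriteLine($"[AUDIT] {DateTime.UtcNow:o} {args.MethodName} invoked");
    }

    public override void OnExit(MethodArgs args) { }
};

class AccountService
{
    // Point cut: this attribute placement decides where the advice applies.
    [Audited]
    public void CloseAccount() { /* business logic only */ }
};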

Given these three things, you can start to encapsulate the cross-cutting concerns, which then allows you to start applying that code to other code. You don't need to copy-pasta, you don't need to find the document that says how to do logging or auditing. You just apply logging, you apply auditing. It becomes very powerful, because these concerns are now tools in your development toolkit. They're easily tweaked, and they're modifiable without the daunting overhead of find-and-replace in every file in the solution. Now, they make sense.
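As a concrete example, here's what one of those 'contract' style concerns might look like once it's an aspect, stacked alongside the logging aspect from above. This is only a sketch in the spirit of the earlier examples; the Arguments collection on MethodArgs is my own assumption for illustration, not a documented API.

class NotNullArgsAttribute : Aspect
{
    public override void OnEntry(MethodArgs args)
    {
        // Hypothetical: inspect the intercepted arguments and enforce the contract.
        foreach (var argument in args.Arguments)
        {
            if (argument == null)
                throw new ArgumentNullException(args.MethodName);
        }
    }
};

class OrderService
{
    // Both concerns are applied declaratively; the business logic stays readable.
    [Logged]
    [NotNullArgs]
    public void PlaceOrder(string customerId, string sku)
    {
        Console.WriteLine($"Ordering {sku} for {customerId}");
    }
};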

So hopefully now you see the benefit of AOP, and can maybe start to see places where AOP could benefit your projects or workplace. I’ll be honest, it’s not always the easiest sell. So if you’re interested and you want more information, please feel free to reach out. I love talking about these kinds of things.

[ASPeKT] In a Nutshell

My first real intriguing look at AOP was as I was reading the book Adaptive Code via C# by Gary Maclean Hall. He gave a brief introduction to AOP, mostly to describe its use with logging. At the time, I laughed and thought, 'there's really no other use than logging'. Later on in the book, he describes a pattern to reduce dependencies on lower-layer APIs by translating API-level exceptions into higher-level exceptions. This is so higher-level components can make decisions based on the exceptions, without being explicitly tied to the lower-level library. Consider the use of a library that encapsulates sending data to a webserver for storage. Something like Office 365 Drive.

You might have code like this.

// MyStorage.dll - our internal wrappers, depends on OfficeWebDrive.dll
interface IStorage 
{
      public void Store(string name, Stream stream);
}


class OfficeDrive : IStorage
{
    public void Store(string name, Stream stream)
    {
         // use OfficeWebDrive.dll third party library
         // make requests to store data,
         // can throw "OfficeWebDrive.StorageFullException"
    }
};
// Application.exe - depends on MyStorage.dll, 
// but should have no dependency on OfficeWebDrive.dll

class Book 
{
    IStorage storage_;   // storage implementation, injected elsewhere (omitted for brevity)

    public Book(string title)
    {
        Title = title;
    }

    public void AddText(string text)
    {
        // Code to append text.
    }

    public void Save() 
    {
        storage_.Store(Title, Data);
    }
   
    Stream Data { get; }
    string Title { get; }
};

class Application 
{
    public void SomethingHere()
    {
         try
         {
             Book book = new Book("AOP in .NET");
             book.AddText("Lorem Ipsum");
             book.Save();
         }
         catch(OfficeWebDrive.StorageFullException e)
         {
               // Deal with the StorageFullException
         }
    }
};

As you can see, we've done our best to program to an interface, using the IStorage interface. Our application should have no dependencies on the actual underlying storage libraries. Except that, because we need to deal with the StorageFullException, we now have to make an explicit dependency between our application and the lower-level third-party API.

The sentiment then, from Adaptive Code, is to wrap the third-party library calls and translate the exception. Like so.

class OfficeDrive : IStorage
{
    public void Store(string name, Stream stream)
    {
        try 
        {
            // use OfficeWebDrive.dll third party library
            // make requests to store data,
            // can throw "OfficeWebDrive.StorageFullException"
        }
        catch(OfficeWebDrive.StorageFullException e)
        {
             // translate the error
             throw new MyStorage.StorageFullException(e.Message);
        }
    }
};

Now, the higher-level code can make the same decisions about what to do when the storage is full, but with no dependencies on the low-level libraries. If we wanted, we could completely change the underlying storage mechanism without needing to rebuild the Application.

‘Hey Gary, this is a great spot for AOP’ I thought being clever.

class OfficeDrive : IStorage
{
    [TranslateException(typeof(OfficeWebDrive.StorageFullException), 
     typeof(MyStorage.StorageFullException))]
    public void Store(string name, Stream stream)
    {
        // use OfficeWebDrive.dll third party library
        // make requests to store data,
        // can throw "OfficeWebDrive.StorageFullException"
    }
};

Now, we let the AOP framework handle the boilerplate try/catch code. This also really calls out what is happening, and why.
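For what it's worth, the aspect behind that attribute could be sketched roughly like this, using the OnException hook mentioned earlier for [ASPeKT]-style aspects. The constructor signature, the MethodArgs/Exception parameters, and the use of Activator.CreateInstance are my assumptions for illustration, not the exact [ASPeKT] API.

class TranslateExceptionAttribute : Aspect
{
    readonly Type from_;
    readonly Type to_;

    public TranslateExceptionAttribute(Type from, Type to)
    {
        from_ = from;
        to_ = to;
    }

    // Assumed OnException hook: called when the decorated method throws.
    public override void OnException(MethodArgs args, Exception e)
    {
        // Translate the low-level exception into the higher-level one,
        // so callers never take a dependency on the third-party type.
        if (from_.IsInstanceOfType(e))
            throw (Exception)Activator.CreateInstance(to_, e.Message);

        throw e; // anything else propagates untouched
    }
};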

Where does [ASPeKT] come in?

Well, after I thought about this, I wanted to build it. I guess I could've easily used the PostSharp Express version and whipped it out really quickly. But that's not me. If I'm going to understand something, I'm going to understand it. So I set off to write a small AOP library where I could solve this problem. It was a rather simple concept, or so I thought.

I didn’t even really know what an AOP library did. That’s where the research started, “open source AOP libraries”, “how does PostSharp work”, etc, etc. Just some of the multitude of search terms I used while I did research into how to build an AOP library.

I had the concept of what I wanted: something where I could declare an attribute that would translate an exception. Easy.

Let’s go down the rabbit hole.

At its core, AOP is actually allowing a machine to write code for you. You've moved the copy and paste from your fingers (Ctrl+C / Ctrl+V) to the machine's apt fingers, allowing the computer to 'weave' the code for you. You define the boilerplate as an aspect, and you let the machine do the work, because that's what it's good at (no offense, computer).

You’ve got three options for when this can happen.

  1. In the source code, before you compile. Using some type of marked-up code in your source, you could run a pre-compiler that inserts code snippets into the source before the compiler does its work.
  2. After the compiler. Though not impossible on native assemblies, it is far easier in languages like C# and Java, which use an intermediate language between source code and assembly. We can post-process the compiled intermediate language (IL) to apply the code we want.
  3. At runtime. This is somewhat of an overseer pattern, where something watches the calls as they run and decides whether or not to run our aspects.

Now — knowing this, which one do you choose? My thought process was as follows.

  1. Option 1: Modify the source. I don't want to write a compiler. Well, I do, but not for this. So that was off the table, at least for now. You also take a dependency on language syntax; not that I would expect much of the C# syntax to change, but still.
  2. Option 3: At runtime. I don't want this. I come from a native background, specifically C++, and I don't want to pay overhead where I don't have to. I didn't want something monitoring functions as they run, or building code at runtime. It just wasn't what I wanted.

So that left Option 2. What exactly is that? How does it even work? I needed something that would run after the binary compiles, and modify it to add in the CCC code.

Let’s go deeper…

To understand post-compile weaving, we must first understand how the .NET runtime operates, at a high level. The .NET Runtime is what's called a "virtual machine". The draw towards these virtual machines started with Java, created by Sun Microsystems. The large benefit of Java was that it was a compile once, run anywhere language. This was hugely powerful when you consider the alternative of compiling for every architecture, as is the case with C and C++. The virtual machine allows you to compile to a well-known set of instructions (IL), which then outputs machine-specific instructions for the hardware when run through the virtual machine. This way, you only need to write a virtual machine for the hardware, and you open yourself up to all the applications available for that runtime. This is one of the reasons Java became so hugely popular: you could write a JVM for your VCR and all of a sudden, pow, smart VCR with lots of apps available.

Obviously, Microsoft saw the benefit and flexibility in this and took advantage, so they started shipping Java in Visual Studio. They had their own JVM, and a Java compiler as well. They also saw an advantage in extending this language for the Windows operating system. Enter J++, Microsoft's implementation of Java with extensions for the Windows OS. With J++ came a lawsuit from Sun Microsystems: Microsoft had a non-compliant JVM and they were violating the terms of the license agreement (who reads those things anyways?). Wah. Wah. So what does Microsoft do? They do what any billion-dollar software development company would do. They eat the lawsuit, take J++, and turn it into what we now know as C#. They also see the immense power in this .NET Runtime, and see that they can compile a whole multitude of different languages into IL. With the release of .NET Runtime 1.0, there was support for 26 languages (I think). To be completely honest, I'm glad that Sun sued Microsoft, because I hate Java as a language, and I love C#. So it was a win, in my opinion.

Anyways, aside from that little history lesson, we can now understand how 'weaving' works in a .NET language. Like I said, C# is a language that compiles to IL, aka CIL or MSIL. An intermediate language sits on the fence between a language that is "compiled" to actual assembly (hardware instructions) and an interpreted language like JS, which is completely interpreted at runtime. The C# compiler takes the C# language and outputs a binary form of instructions that conforms to the .NET runtime standard. These instructions are then interpreted and compiled Just-In-Time to output hardware instructions. This means that after we compile to IL, and before running, we can insert some extra IL instructions, then voila.
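To make that concrete in C# terms, the end result of weaving is roughly equivalent to the compiler having written something like the code below for you. This is only an illustration of the woven output, not the literal IL that gets emitted, and the MethodArgs constructor shown here is an assumption.

class Foo
{
    // Conceptually, what the weaver turns the [Logged] Bar() from earlier into:
    public void Bar()
    {
        var aspect = new LoggedAttribute();
        aspect.OnEntry(new MethodArgs("Bar"));    // entry advice
        try
        {
            Console.WriteLine("Pour a drink.");   // the original method body
        }
        catch (Exception)
        {
            // an OnException hook could run here before rethrowing
            throw;
        }
        aspect.OnExit(new MethodArgs("Bar"));     // exit advice
    }
};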

How do you weave IL?

Weaving IL is actually pretty straightforward, if you know what you're doing. Which I didn't. Ha. At first, I kind of flew by the seat of my pants. I knew I needed to re-write the IL, and I had heard about tools like Roslyn and Mono.Cecil. Roslyn, being a compiler, wasn't exactly what I wanted. I needed a tool to modify the IL, which is exactly what Mono.Cecil is. It goes beyond what the built-in Reflection.Emit offers, and adds a lot of ease to manipulating the IL.

The task at hand for me was to open the binary, find the spot where I had declared my "TranslateException", and then insert the instructions for that into the method where I declared it. I just decided to make it generic, to work with any Aspects I created. I will spare you the gory details, but the high level was as follows (a rough sketch using Mono.Cecil follows the list).

  1. Open the compiled .NET assembly
  2. Find functions decorated with Aspekt.Aspects
  3. Write IL to instantiate the Aspect, and make the entry call
  4. Write IL to wrap the existing code in a try/catch
  5. Execute the existing code
  6. Write IL to make the exit call
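
Here's a bare-bones sketch of what that post-compile step can look like with Mono.Cecil. It only covers steps 1 and 2, plus a simplified stand-in for the entry call in step 3; the attribute-name check, the Console.WriteLine standing in for the real aspect call, the ImportReference usage (recent Mono.Cecil versions), and the output path are all simplifications of mine, not the actual [ASPeKT] weaver.

using Mono.Cecil;
using Mono.Cecil.Cil;

class Weaver
{
    public static void Weave(string assemblyPath)
    {
        // 1. Open the compiled .NET assembly
        var assembly = AssemblyDefinition.ReadAssembly(assemblyPath);

        foreach (var type in assembly.MainModule.Types)
        {
            foreach (var method in type.Methods)
            {
                // 2. Find functions decorated with an aspect attribute (simplified check)
                bool hasAspect = false;
                foreach (var attribute in method.CustomAttributes)
                    if (attribute.AttributeType.Name == "LoggedAttribute")
                        hasAspect = true;
                if (!hasAspect || !method.HasBody)
                    continue;

                // 3. Inject an 'entry' call at the top of the method
                //    (a Console.WriteLine here, standing in for aspect.OnEntry)
                var il = method.Body.GetILProcessor();
                var first = method.Body.Instructions[0];
                var writeLine = assembly.MainModule.ImportReference(
                    typeof(System.Console).GetMethod("WriteLine", new[] { typeof(string) }));

                il.InsertBefore(first, il.Create(OpCodes.Ldstr, $"Entering {method.Name}"));
                il.InsertBefore(first, il.Create(OpCodes.Call, writeLine));
            }
        }

        // Write the modified assembly back out
        assembly.Write(assemblyPath + ".woven.dll");
    }
}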

I could write an entire post on weaving IL, but not today. If you’re interested in that, please drop me a line and let me know. I can describe, in detail, all the pain I felt while I was learning how to do this.

Once I had figured out how to do this, I had an executable that I could run against my compiled .NET binaries, which would weave aspect code where I placed attributes. Then all I needed to do was write the aspect to translate the exception. You can actually find this aspect in the [ASPeKT] code; kind of like an ode to the beginning, for me.

What came next?

Now that I had the start of this framework, I thought to myself that I had actually built something that could be useful. "People might actually want to use this". I had also always wanted to author an open source library. So I started reaching out on Reddit, and making the foundation more easily accessible for people to use. I made the core more stable. Then I started writing new features, like [ASPeKT] Contracts. It was a surprising journey. In the past I had written many throwaway libraries, and many small tools that have never seen the light of day, always just to prove to myself, well, that I could. There was just something to this one: it was a niche library and I thought there was something to it.

I guess the reality is that I'm still in the middle of this journey, with [ASPeKT] core having 588 downloads, and Contracts having slightly fewer at 416. I'm still working towards my goal of 1000 downloads for each. Realistically, a project that uses [ASPeKT] could be the spark that it needs to ignite. Until then though, I will just keep plugging away and making it cooler and easier to use.

Why should you use [ASPeKT]?

Well, if you're at all curious about the inner workings of an AOP library, the code is public on my GitHub, or you can easily get the package on NuGet. The second reason is that maybe you have a project that could benefit from AOP? [ASPeKT] is pretty lightweight, and easy to get started with. Though, if you're looking for a robust, feature-complete, production-ready library, [ASPeKT] isn't there yet. If you're looking for a library to contribute to, or a library that can be adapted into something production ready, then shoot me an email!

"Man cannot discover new oceans unless he has the courage to lose sight of the shore."
― André Gide

As always, thanks for reading. Happy Coding!

PL

The one where we reverse engineered Microsoft's C++ Unit Test Framework (Part 3) – Exceptions

Again, if you haven't already done so, I suggest reading through Part 1 and Part 2 of the series. I'll do a quick TL;DR recap, but it may not do them justice.

What feels like ages ago, I started on a journey to implement a clone of Microsoft's C++ Unit Test Framework Test Adapter, but fix some issues I had with it. These included a better reporting mechanism for standard C++ exceptions, and better error messages when libraries were failing to load, typically due to missing dependencies. Part 1 of the series explained Microsoft's technique for exposing a dynamic set of classes and functions for the test framework to call. This is typically solved with reflection, but given that there is no standard mechanism for reflection in C++, some neat tricks with the binary's sections were played. Part 2 explored the second step after discovery: execution. How can we take the metadata information and actually have it run some code? It's essentially a plugin system, in a plugin system. Part 2 left off where we were able to actually execute our test methods, but when an Assertion would fail the entire framework would collapse on itself.

I lift my head from my hands, wipe the tears from my eyes, and get real with the situation. I’ve started down this path of re-implementation, why stop now?

Within the package that ships with Microsoft's C++ Unit Test Framework, there is a file called Assert.h. This is the header that you include in order to make assertions in your tests. Asserting is a critical part of unit testing. The three portions of a unit test are essentially:

  1. Setup
  2. Act
  3. Assert

There are fancy acronyms describing this (AAA); I prefer mine (SAA). In a general sense, in your test you will do these three things. You set up the test to have your code under test in the state it needs to be. You run the code you want to test. Then you Assert that what was supposed to happen happened, and potentially Assert that other things that weren't supposed to happen, didn't (though this becomes a slippery slope into a rabbit hole). That being said, asserting is a very important part of unit testing, arguably the most important part. So we've hit a bit of a conundrum, because negative assertions are the part that tells us when our code isn't working correctly. Anyone who is experienced with TDD will tell you there is more value in the red lights than the green ones. That means it's a problem when our framework crashes and burns on a failed Assert.

So, to follow suit with the re-implementation, I decided that I would just do away with Microsoft's Assert library, in favour of my own. Obviously this is the best idea possible. Don't get me wrong, it's not like I didn't try to figure out what was happening with their Assert library. The problem is there is less "public" information in it. Meaning that unlike the CppUnitTest.h file, where a lot of the work was in the header because it was template code, most of the assertion code lived in a compiled binary. The only code in the Assert.h file was template code for comparing generic types. That meant I had no real way to figure out what they were doing. All I knew was that whatever they were doing was crashing my application, and it worked for theirs. So I'd make one that works with my framework. Now, you might be thinking.

Of what value is re-implementing the Microsoft C++ Unit Test Framework? Is there any actual value in now re-implementing part of their library?

The answer is probably not, but if you’re curious like me, you like to figure out how things work. The way I do this, is I look at something and implement it myself. If I can do that, I typically run into the problems that the original author ran into, and then I can understand why they solved the problem the way they did. I’ve approached software development this way for my entire professional career, and I’d like to think it has paid dividends.

In all honesty, how hard could writing an assertion library be? Like, basically you only check a few different things. Are these two things equal? Are these two things not equal? Is this thing null? Is this thing not null? If the test passes, don’t do anything. If the test fails, stop executing and barf out some kind of message. If we ponder this for a moment, can we think of something we could use to halt execution and spit out some form of a message? I know! I’ve written this code a lot.

if ( !some_condition )
    throw condition_violated_exception("Violated Condition");

Well, exceptions look like a good place to start for our assertion library. So that's where we're going to start. The plan is essentially to have a bunch of Assert calls that, when they fail, throw an exception. Easy, right? The code could look something like this.

// MyAssert.h

#include <AssertViolatedException.h>
#include <fmt/format.h>  // fmt library, needed for fmt::format below


namespace MyAssert
{
    template <typename T>
    static void AreEqual(const T& expected, const T& actual)
    {
        if( expected != actual )
           throw AssertViolatedException(fmt::format("{0} != {1}", expected, actual));
    }
};

By no means is this complete; it's really just to illustrate the point that when we have two objects that aren't equal, we generate an exception with an appropriate message.

God it feels good to be a gangster.

Now we can go about our mission: we'll just use MyAssert.h and completely disregard Microsoft's version, Assert.h. Given that we've implemented the framework so that any escaped standard C++ exception ends up in its handler, I can guarantee that the assertions will end up there. Right? Since I didn't show a snip of AssertViolatedException.h, you can assume that it's derived from std::exception. If you're interested in how I actually implemented it, you can find the file here. It has a little bit more complexity, for capturing line information, but largely it's the same idea.

I’m sure this is EXACTLY how Microsoft would’ve done it.

After we’ve implemented this, we can use it in the same way that you would if you were to use the Assert library included in Microsoft’s framework.

#include <MyAssert.h>
#include "calculator.h"

TEST_CLASS(TestCalculator)
{
public:
    TEST_METHOD(TestAdd)
    {
          calculator c;
          int val = c.add(2, 2);
          MyAssert::AreEqual(4, val);
    }
};

This is great, and it works, for the most part. It works for this case. Can you see where it falls down? If we recall, we're using a standard C++ exception for our assertion. Unfortunately, the compiler doesn't care whether the exception originates from our MyAssert class or any other place in the application. This means that any handler prepared to handle a std::exception will catch our assertion. Consider this code.

#include <functional>
class calculator
{
   std::function<int(int, int)> on_add_;
public:
     template <typename AddFnT>
     void on_add(const AddFnT &on_add)
     {
           on_add_ = on_add;
     }

     int add(int a, int b)
     {
          try 
          {
                return on_add_(a, b);
          }
          catch(const std::exception &e)
          {
                 return 4;
          }
     }
};
#include <MyAssert.h>
#include "calculator.h"

TEST_CLASS(TestCalculator)
{
public:
    TEST_METHOD(TestAdd)
    {
        calculator c;
        c.on_add([](int a, int b)
        {
               MyAssert::AreEqual(2, a);
               MyAssert::AreEqual(2, b);
               return a + b;
        });
        int val = c.add(2, 22); // see the error here? Tyop.
        MyAssert::AreEqual(4, val);
    }
};

This isn't by any means good code, nor does it really make sense. Everyone knows developers are notorious for cutting corners to save time; someone decided 4 was a good error code, and someone made a typo in the test. The unfortunate thing about this is that it passes. The light is green, but the code is wrong. Now, you're saying "no one in their right mind would ever do this." Consider the case where you have a layer between your business logic and your database. You call the business logic function, it does some work, and it passes the values to the store function. A way to test that you're populating the DB with the right values is to abstract the database layer and intercept at that level. You also likely want some error handling there, in case an exception were to throw from the database. So there you go; at this point our Assert library falls down, hard.

It's been a long read, and you may feel like you're getting ripped off at this point, because I haven't really explained much. Realistically, we actually just learned a whole lot, so I encourage you to keep reading. Microsoft has an Assert library that you can use to make assertions about your program. The generally accepted form of "assertions" within an application is an exception. Microsoft can't use standard exceptions in their framework, because they could interact with the application under test. I just proved that by trying it. So what the hell did they do? Well, the application crashes, so that means something is happening.

Most modern-day programmers are familiar with exceptions; most people just see them as the de facto standard for error handling. (I really want to get into talking about return values vs. exceptions for error handling, but that will take me down a rabbit hole.) To keep it short, exceptions in various languages allow your normal program flow to be separated (for the most part) from your exceptional program flow. Wikipedia defines exceptions as "anomalous or exceptional conditions requiring special processing". If you're familiar with exceptions and handling them, you've probably seen try/catch before. If you're familiar with modern C++, you probably at least know of this concept. What you might not know is that the C++ standard only defines what an exception is, not how it is implemented. This means that the behaviour of exceptions and handling them in C++ is standardized, but the way that compiler vendors implement them is free game. Another thing people might not know is that languages like C and older languages don't have a concept of exceptions. An exception can come from a custom piece of code, like the one above where we throw the exception, OR from somewhere deeper down; maybe it's a hardware exception like divide by zero, or an out-of-memory exception. The end result in C++ is the same. Hence why it's called a "standard" exception. The algorithm is pretty simple: find an appropriate handler, unwind the stack until that handler, call the handler. You ever wonder how, though?

Well, low-level errors like divide by zero will come from the hardware, generally in the form of an interrupt. So how do we go from a hardware-level interrupt to our C++ runtime? On Windows, this is called Structured Exception Handling (SEH). It is Windows' concept of exceptions, within the OS itself. There's a good explanation of this in the book Windows Internals, Part 1 by Mark Russinovich. At a high level, the kernel will trap the exception; if it can't deal with it itself, it passes it on to user code. The user code can then either A) deal with the exception and report back stating that, or B) tell the kernel to continue the search. Because this is at the OS level, it is not a language-specific thing. Runtimes built on Windows will use this to implement exceptions within the language. This means that MSVC uses SEH to implement C++ exceptions within the C++ runtime. Essentially, the runtime generates a Structured Exception Handler for each frame, and within this the runtime can search for the appropriate handler in the C++, unwind the stack and call the destructors of the objects, then resume execution in the handler. Obviously, these generated Structured Exceptions are well known to the C++ runtime, so it knows how to appropriately deal with the exception.

What if Microsoft was using a Structured Exception for their assertion? The behaviour lines up with that hypothesis, in that something is generated on a failed assertion that crashes the application. In SEH, if there isn't an appropriate handler found, the application will be terminated. How can we prove that? Well, it turns out it was easy, though it's not recommended. Microsoft recommends that if you're using exceptions in C++ you use standard exceptions, but there is a Windows API that you can use in your code to do SEH.

#include "Assert.h"
TEST_METHOD(AssertFail)
{
   __try
   {
       Assert::AreEqual(0,1);
   }
   __except(EXCEPTION_EXECUTE_HANDLER)
   {
       Logger::WriteLine(L"Gotcha!");
   }
}

After we compile and run this code, it's pretty obvious what's going on. When the Assert::AreEqual fails, we land smack dab in the handler. So I guess that mystery is solved; we just need to figure out how and where to do the exception handling. Now, the __try/__except/__finally keywords are built into C++ on Windows, and allow us to basically install a frame-based exception handler. They work very similarly to the way you would expect a try/catch to work. After some research I decided this wasn't exactly what I wanted; I wanted to be able to catch an exception regardless of stack frame. I stumbled upon Vectored Exception Handling. This is an extension to SEH that allows you to install a handler that gets called regardless of where you are, so you can globally register a handler for exceptions across the whole process.

The solution then is rather straightforward. We just need to register an Exception Handler; when the exception throws we can catch it, record the error, and continue on our way. If you read the code in the repository, I had to go through a bunch of layers of indirection to actually get the message to pass to the Test Window Framework. That's because the architecture of the application has a .NET component, a .NET/CLI component, and a pure native component. So for the sake of simplicity, the way that I went about handling the exception was like this, but not exactly this.

LONG TestModule::OnException(_EXCEPTION_POINTERS *exceptionInfo)
{
    // if the code is the MS Unit Test exception
    if (exceptionInfo->ExceptionRecord->ExceptionCode == 0xe3530001)
    {
        NotifyFailure(reinterpret_cast<const wchar_t*>(exceptionInfo->ExceptionRecord->ExceptionInformation[0]));
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    else
        return EXCEPTION_CONTINUE_SEARCH;
}

auto h = ::AddVectoredExceptionHandler(1, &OnException);

This is where I had to do a bit of guesswork. If you recall, this handler will get called for all exceptions, but we only want to do something when it's an Assert exception. So I had to make the assumption that Assert threw an exception with the code 0xe3530001. Then I did a bit of sleuthing in the memory to see that a pointer to the message was stored in the first index of the ExceptionRecord's ExceptionInformation. With that I could grab the message and fail appropriately. That being said, I'm not sure if this solution lines up 100% with Microsoft's functionality.

To summarize this long journey: I set out to set some things right with the behaviour of Microsoft's C++ Unit Test Framework. It started out as something fun to investigate, and it turned out to be a great learning experience. Out of all the projects that I've worked on, I've probably learned the most about ingenuity from this one. There are a lot of neat tricks, cool uses of obscure APIs, and really just overall an interesting view of how Microsoft engineers their tools. You might be wondering to yourself if it was actually worth it. For me, it was about learning; it was about facing the challenges and working to understand them. It wasn't ever really about replacing what ships with Visual Studio. So yes, it was worth it. Though I would love it if Microsoft could fix the issues... If I can do it, they most certainly can.

Recapping the issues that I ended up solving:

  1. Report a better error on standard exceptions [check]
  2. Report a better error for binaries that fail to load
  3. Support Test Names with Spaces [check]

As you can see, I only solved 2 of the 3 things I set out to solve! The last one is kind of a cop-out, because I sort of just lucked into it. When I was messing around with the class attributes, I enhanced my framework to understand a special attribute that can change the test name, rather than just taking it from the method name. So you can specify a test name with a space.

Reporting a better error when binaries fail to load is really hard. What makes it hard is that there isn't (that I can find) a good API to report the missing dependency. This is something you need to hand-roll. Now, there are tools that do it, specifically Dependency Walker. But my understanding is that I would need to roll my own dependency-walking algorithm. That, unfortunately, will be a story for another day.

I really hope you enjoyed reading this series. I had quite a bit of fun working on this project, and a lot of fun writing about it.

"What we call the beginning is often the end. And to make an end is to make a beginning. The end is where we start from." ― T.S. Eliot

Happy Coding!

PL

 

References:

MSDN on Structured Exception Handling

https://www.codeproject.com/Articles/2126/How-a-C-compiler-implements-exception-handling