[ASPeKT] Oriented Programming

I recently had the pleasure of doing a podcast with Matthew D. Groves, of the Cross Cutting Concerns blog. He essentially "wrote the book", so to speak, on Aspect Oriented Programming; it's called AOP in .NET. Without pumping his tires too much, I will say that his book is pretty great. I just recently finished reading it, and came to the conclusion that Matthew and I are on the same page regarding a lot of software development fundamentals. Specifically, his take on AOP (Aspect Oriented Programming) and its benefits. The ability with AOP to factor out common code that crosses all areas of your code base, and encapsulate it so that you have a single point of change, is a very powerful concept. He illustrates concepts like these, and many others, in his book. It also gives a nice overview of the different tools available in the AOP world. Even after writing my own AOP library, I was still able to learn a lot from this book. If you're interested in AOP, or software development in general, you should definitely check it out.

To follow up on Matthew's podcast, which featured me and the library I created called [ASPeKT] (no relation to ASP, I just liked the way it looked), I wanted to write a post that gives an overview of AOP and some of the benefits of this powerful programming paradigm. I want to talk about [ASPeKT], and the benefits and costs of using a simple AOP library. Then I'll detail some of the challenges I faced as I wrote it, as I work to make it better known, and as I continue to improve it.

Why Aspect Oriented Programming?

AOP is a little-known, yet very widely used, programming paradigm. How can it be little known, yet very widely used, you ask? Mostly because it's built into a lot of .NET libraries and frameworks that people use every day; they just don't know they're actually using AOP concepts. Interestingly enough, ASP.NET MVC authentication uses AOP patterns. Most people just go about programming and using the Authorize attribute, without knowing that they're actually hoisting a cross-cutting concern up with a more declarative (cleaner) approach, and letting the framework deal with it. For me, as a skeptic of AOP at the beginning, it was a huge eye-opener to realize that these concepts are applied all over the .NET world. We just don't realize it. This also brings to light the power of being able to spot cross-cutting concerns and encapsulate them, using a library like [ASPeKT], thus removing the need for clunky boilerplate code and copy-pasta patterns that live in documentation, or worse yet, only in the minds of your more senior developers.

What is Aspect Oriented Programming, really?

In order to really understand what AOP is, we have to first understand what a "cross-cutting concern" (CCC) is. A CCC is the fundamental problem that AOP looks to solve. These CCCs typically tend to be the non-functional requirements of your application. They're code that spreads across your code base, but isn't easily factored into its own unit. The canonical CCC is logging or auditing. Your application can function, business-wise, without it. Yet if you were to implement the need for "logging" across your application, you would end up with logging code that pollutes your actual business logic: things like logging the entry and exit of functions. You end up with code like this.

class Foo 
{
    public void Bar() 
    {
        Console.WriteLine("Entered Bar");
        Console.WriteLine("Pour a drink.");
        Console.WriteLine("Exiting Bar");
    }
};

You can see how this can get tedious. You end up having 'rules' or 'patterns' that define how and what you log. "This is how we log entry into a function, with its parameters. The format lives in a document, stored in SharePoint." Now imagine when one day that needs to change, and you're forced to update the 43 million different log lines across your application. Welcome to Hell. Talk about why they call these things 'concerns'.

Logging isn't the only concern that spreads and tangles across your code: things like authorization, what I call function 'contracts' (defensive programming), threading, and many other patterns. These patterns are often overlooked as cross-cutting, and don't always look like boilerplate, but with a keen eye and some creativity they can be teased out into their own reusable aspects.

Using an AOP tool allows you to hoist this boilerplate code into its own encapsulated class, where it belongs. It then allows you to place it declaratively, which makes the code read more explicitly, and also removes the clutter from the actual business logic of the function. These tools make it so you don't have to weed through the logging, defensive programming, authorization, etc., just to find out what the actual intention of the function is. You'll end up writing code more akin to this.

class LoggedAttribute : Aspect 
{
    public override void OnEntry(MethodArgs args)
    {
        Console.WriteLine($"Entering {args.MethodName}");
    }
    public override void OnExit(MethodArgs args)
    {
        Console.WriteLine($"Exiting {args.MethodName}");
    }
};

class Foo
{
     [Logged]
     public void Bar()
     {
          Console.WriteLine("Pour a drink.");
     }
};

This code effectively functions the same as above. However, it is substantially cleaner, and the biggest benefit is the ease of change. Now, if I've used the LoggedAttribute across my code base and I want to make a change, I only need to make it in one spot, as opposed to everywhere we copied and pasted the logging lines. AOP allows you to offload the tedious boilerplate code onto the machine, which is much, much, much faster at typing than humans. The machine also never makes a typo.

Now that you know what a cross-cutting concern is, I can explain AOP. Effectively, AOP is a tool, set of tools, or paradigm to deal with cross-cutting concerns. In order to subscribe to an AOP model, you need the following three things.

  1. A Join Point – this is the point at which our AOP model allows us to interject code. In the case of [ASPeKT], the join points are method boundaries, i.e. the entering/exiting of a function. Other libraries, like Castle DynamicProxy and PostSharp, allow for actual method interception, so you can substitute behaviour at the call itself. This can be useful for things like error handling, retry logic, or threading models.
  2. A Point Cut – this is the way you specify where to apply code to the join points. Think of it as a sentence describing where you want the code to run. I know my join point is the entry/exit of a function; my point cut could be "every function that starts with My", or simply, "every function in the application". [ASPeKT] uses attribute placement as the point cut definition, so where you place the attribute determines how it will apply the code to the join points.
  3. Advice – essentially, the code you want to run at the join points. For [ASPeKT], this is the code you write in OnEntry / OnExit / OnException.

Given these three things, you can start to encapsulate the CCCs, which then allows you to start applying that code to other code. You don't need to copy-pasta; you don't need to find the document that says how to do logging or auditing. You just apply logging; you apply auditing. It becomes very powerful, because these concerns are now tools in your development toolkit. They're easily tweaked, and they're modifiable without the daunting overhead of find-and-replace in every file in the solution. Now, they make sense.

So hopefully now you see the benefit of AOP, and can maybe start to see places where AOP could benefit your projects or workplace. I’ll be honest, it’s not always the easiest sell. So if you’re interested and you want more information, please feel free to reach out. I love talking about these kinds of things.

[ASPeKT] In a Nutshell

My first real intriguing look at AOP came as I was reading the book Adaptive Code via C#, by Gary McLean Hall. He gives a brief introduction to AOP, mostly to describe its use for logging. At the time I laughed and thought, 'there's really no other use than logging'. Later on in the book, he describes a pattern to reduce dependencies on lower-layer APIs, by translating API-level exceptions into higher-level exceptions. This is so higher-level components can make decisions based on the exceptions, without being explicitly tied to the lower-level library. Consider the use of a library that encapsulates sending data to a webserver for storage. Something like Office 365 Drive.

You might have code like this.

// MyStorage.dll - our internal wrappers, depends on OfficeWebDrive.dll
interface IStorage 
{
      void Store(string name, Stream stream);
}


class OfficeDrive : IStorage
{
    public void Store(string name, Stream stream)
    {
         // use OfficeWebDrive.dll third party library
         // make requests to store data,
         // can throw "OfficeWebDrive.StorageFullException"
    }
};
// Application.exe - depends on MyStorage.dll, 
// but should have no dependency on OfficeWebDrive.dll

class Book 
{
    // assume the IStorage implementation is supplied elsewhere
    // (e.g. injected); omitted here for brevity
    IStorage storage_;

    public Book(string title)
    {
        Title = title;
    }

    public void AddText(string text)
    {
        // Code to append text.
    }

    public void Save() 
    {
        storage_.Store(Title, Data);
    }
   
    Stream Data { get; }
    string Title { get; }
};

class Application 
{
    public void SomethingHere()
    {
         try
         {
             Book book = new Book("AOP in .NET");
             book.AddText("Lorem Ipsum");
             book.Save();
         }
         catch(OfficeWebDrive.StorageFullException e)
         {
               // Deal with the StorageFullException
         }
    }
};

As you can see, we've done our best to program to an interface, using the IStorage interface. Our application should have no dependencies on the actual underlying storage libraries. Except that, because we need to deal with the StorageFullException, we now have an explicit dependency between our application and the lower-level third-party API.

The sentiment then, from Adaptive Code, is to wrap the third-party library calls and translate the exception. Like so.

class OfficeDrive : IStorage
{
    public void Store(string name, Stream stream)
    {
        try 
        {
            // use OfficeWebDrive.dll third party library
            // make requests to store data,
            // can throw "OfficeWebDrive.StorageFullException"
        }
        catch(OfficeWebDrive.StorageFullException e)
        {
             // translate the error
             throw new MyStorage.StorageFullException(e.Message);
        }
    }
};

Now the higher-level code can make the same decisions about what to do when the storage is full, but with no dependencies on the low-level libraries. If we wanted, we could completely change the underlying storage mechanism without needing to rebuild the Application.

‘Hey Gary, this is a great spot for AOP’ I thought being clever.

class OfficeDrive : IStorage
{
    [TranslateException(typeof(OfficeWebDrive.StorageFullException), 
     typeof(MyStorage.StorageFullException))]
    public void Store(string name, Stream stream)
    {
        // use OfficeWebDrive.dll third party library
        // make requests to store data,
        // can throw "OfficeWebDrive.StorageFullException"
    }
};

Now, we let the AOP framework handle the boilerplate try/catch code. This also really calls out what is happening, and why.

Where does [ASPeKT] come in?

Well — after I thought about this, I wanted to build it. I guess I could've easily used the PostSharp Express version and whipped something up really quickly. But that's not me. If I'm going to understand something, I'm going to understand it. So I set off to write a small AOP library where I could solve this problem. It was a rather simple concept, or so I thought.

I didn't even really know what an AOP library did. That's where the research started: "open source AOP libraries", "how does PostSharp work", etc. Just some of the multitude of search terms I used while researching how to build an AOP library.

I had the concept of what I wanted: an attribute I could declare, that would translate an exception. Easy.

Let’s go down the rabbit hole.

At its core, AOP is actually allowing a machine to write code for you. You've moved the copy and paste from your fingers (Ctrl+C / Ctrl+V) to the machine's apt fingers, allowing the computer to 'weave' the code for you. You define the boilerplate as an aspect, and you let the machine do the work, because that's what it's good at (no offense, computer).

You’ve got three options for when this can happen.

  1. In the source code, before you compile. Using some type of marked-up code in your source, you could run a pre-compiler that inserts code snippets into the source, before the compiler does its work.
  2. After the compiler. Though not impossible on native assemblies, this is far easier in languages like C# and Java, which use an intermediate language between source code and assembly. We can post-process the compiled intermediate language (IL) to apply the code we want.
  3. At runtime. This has somewhat of an over-watch pattern, where something watches the calls as they run through and interprets whether or not to run our aspects.

Now — knowing this, which one do you choose? My thought process was as follows.

  1. Option 1: Modify the source – I don't want to write a compiler. Well, I do, but not for this. So that was off the table, at least for now. You also take a dependency on language syntax; not that I would expect much of C#'s syntax to change, but still.
  2. Option 3: At runtime – I don't want this. I come from a native background, specifically C++. I don't want to pay overhead where I don't have to, and I didn't want something monitoring functions as they run, or building code at runtime. It just wasn't what I wanted.

So that left option 2. What exactly is that? How does it even work? I needed something that would run after the binary compiles, and modify it to add in the CCC code.

Let’s go deeper…

To understand post-compile weaving, we must first understand how the .NET runtime operates, at a high level. The .NET runtime is what's called a "virtual machine". The draw towards these "virtual machines" started with Java, created by Sun Microsystems. The large benefit of Java was that it was a compile-once, run-anywhere language. This was hugely powerful when you consider the alternative of compiling for every architecture, as is the case with C and C++. The virtual machine allows you to compile to a well-known set of instructions (IL), which the virtual machine then turns into machine-specific instructions for the hardware at runtime. This way, you only need to write a virtual machine for the hardware, and you open yourself up to all the applications available for that runtime. This is one of the reasons Java became so hugely popular: you could write a JVM for your VCR and all of a sudden, pow, smart VCR, with lots of apps available.

Obviously, Microsoft saw the benefit and flexibility in this and took advantage, so they started shipping Java in Visual Studio. They had their own JVM, and a Java compiler as well. They also saw an advantage in extending this language for the Windows operating system. Enter J++, Microsoft's implementation of Java with extensions for the Windows OS. With J++ came a lawsuit from Sun Microsystems: Microsoft had a non-compliant JVM and they were violating the terms of the license agreement (who reads those things anyways?). Wah. Wah. So what does Microsoft do? They do what any billion-dollar software development company would do. They eat the lawsuit, take J++, and turn it into what we now know as C#. They also see the immense power in this .NET runtime, and see that they can compile a whole multitude of different languages into IL. With the release of .NET Runtime 1.0, there was support for 26 languages (I think). To be completely honest, I'm glad that Sun sued Microsoft, because I hate Java as a language, and I love C#. So it was a win, in my opinion.

Anyways — aside from that little lesson in history, we can now understand how 'weaving' works in a .NET language. Like I said, C# is a language that compiles to IL, aka CIL or MSIL. An intermediate language sits on the fence between a language that is compiled to actual assembly (hardware instructions), and an interpreted language like JavaScript, which is interpreted at runtime. The C# compiler takes the C# language and outputs a binary form of instructions that complies with the .NET runtime standard. These instructions are then interpreted and compiled just-in-time to output hardware instructions. This means that after we compile to IL, and before running, we can insert some extra IL instructions. Then voilà.
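To make "inserting extra IL instructions" concrete: the body of the Bar method from earlier compiles down to a short instruction stream, roughly like this (an abbreviated, ildasm-style listing; the exact output varies by compiler and runtime version):

```il
.method public hidebysig instance void Bar() cil managed
{
  ldstr "Pour a drink."
  call  void [mscorlib]System.Console::WriteLine(string)
  ret
}
```

A weaver's job is simply to splice additional instructions, such as the aspect calls and the surrounding try/catch, into streams like this after the compiler has produced them and before the JIT ever sees them.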

How do you weave IL?

Weaving IL is actually pretty straightforward, if you know what you're doing. Which I didn't. Ha. At first, I kind of flew by the seat of my pants. I knew I needed to re-write the IL, and I had heard about tools like Roslyn and Mono.Cecil. Roslyn, being a compiler, wasn't exactly what I wanted. I needed a tool to modify the IL, which is exactly what Mono.Cecil is. Unlike the built-in Reflection.Emit, which can only generate new code, Mono.Cecil can read and rewrite existing assemblies, and it adds a lot of ease to manipulating the IL.

The task at hand for me was to open the binary, find the spot where I had declared my "TranslateException" attribute, and then insert the instructions for it into the method where I had declared it. I decided to make it generic, to work with any aspects I created. I will spare the gory details, but the high level was as follows.

  1. Open the compiled .NET assembly
  2. Find functions decorated with Aspekt.Aspects
  3. Write IL to instantiate the Aspect, and make the entry call
  4. Write IL to wrap the existing code in a try/catch
  5. Execute the existing code
  6. Write IL to make the exit call

I could write an entire post on weaving IL, but not today. If you’re interested in that, please drop me a line and let me know. I can describe, in detail, all the pain I felt while I was learning how to do this.

Once I had figured out how to do this, I had an executable that I could run against my compiled .NET binaries, which would weave aspect code where I placed attributes. Then all I needed to do was write the aspect to translate the exception. You can actually find this aspect in the [ASPeKT] code. Kind of like an ode to the beginning, for me.

What came next?

Now that I had the start of this framework, I thought to myself that I had actually built something that could be useful. "People might actually want to use this." I had also always wanted to author an open source library. So I started reaching out on Reddit, and making the foundation more easily accessible for people to use. I made the core more stable. Then I started writing new features, like [ASPeKT] Contracts. It was a surprising journey. In the past I had written many throwaway libraries, and many small tools that have never seen the light of day, always just to prove to myself, well, that I could. But there was just something to this one; it was a niche library, and I thought there was something to it.

I guess the reality is that I'm still in the middle of this journey, with [ASPeKT] core having 588 downloads, and Contracts having slightly fewer at 416. I'm still working towards my goal of 1000 downloads for each. Realistically, a project that uses [ASPeKT] could be the spark that it needs to ignite. Until then, though, I will just keep plugging away and making it cooler and easier to use.

Why should you use [ASPeKT]?

Well — if you're at all curious about the inner workings of an AOP library, the code is public on my GitHub, or you can easily get the package on NuGet. The second reason is that maybe you have a project that could benefit from AOP. [ASPeKT] is pretty lightweight, and easy to get started with. Though if you're looking for a robust, feature-complete, production-ready library, [ASPeKT] isn't there yet. If you're looking for a library to contribute to, or a library that can be adapted into something production-ready, then shoot me an email!

“Man cannot discover new oceans unless he has the courage to lose sight of the shore.”
― Andre Gide

As always, thanks for reading. Happy Coding!

PL

The one where we reverse engineered Microsoft’s C++ Unit Test Framework (Part 2)

If you haven't already read Part 1 of this series, then I suggest giving it a skim. If not, I'll give a quick TL;DR.

A few years ago, I was frustrated with some of the idiosyncrasies of Microsoft's C++ Unit Test Framework. I set out on a mission to develop a custom test adapter to solve my problems. It would essentially replace the stock adapter that lives in Visual Studio's Test Explorer window. There would be no difference in writing tests; the only differences would be in the execution of the tests. Things like capturing standard C++ exceptions and displaying the text, and also understanding why binaries were failing to load. It was a bit of a mission. Part 1 of this series dives into the mechanisms that Microsoft has developed to expose metadata about the tests: a C++ semi-reflection of sorts, allowing inspection of the binary, without loading it, to discover executable tests. They did this using a set of macros, storing information about the test functions in special sections of the binary. That exercise in discovery was very fun and exciting for me. But it didn't stop there; we still needed to figure out execution of these tests.

“I have no idea what I’m doing.”  I sat with my head in my hands. I was trying to reason out why the test executor was crashing the engine when test asserts failed. 

Writing a test adapter that plugs into Visual Studio is a relatively simple task. You need a managed (.NET/CLR) library, that publicly exposes two interfaces.

public interface ITestDiscoverer
{
     void DiscoverTests(IEnumerable<string> sources, IDiscoveryContext discoveryContext, IMessageLogger logger, ITestCaseDiscoverySink discoverySink);
}

public interface ITestExecutor
{
    void Cancel();
    void RunTests(IEnumerable<string> sources, IRunContext runContext, IFrameworkHandle frameworkHandle);
    void RunTests(IEnumerable<TestCase> tests, IRunContext runContext, IFrameworkHandle frameworkHandle);
}

The first interface is used to populate the Test Explorer with the test information. If you're unfamiliar with the Test Window in Visual Studio, you should get familiar with it; it's your friend, and can be a great teacher if used correctly. You can display it by selecting Test -> Windows -> Test Explorer. It's come a long way since Visual Studio 2012, and I'm sure Microsoft will be enhancing it in future versions. There's a reason Microsoft is investing in this unit test technology: unit testing is an important part of software development. With a few C# attributes, sprinkled with some reflection, you could easily craft your own managed test executor. You describe what types of files your discoverer discovers, and you implement this function. It will get called after a successful build (I'm not sure how live testing affects this), telling your discoverer which files to discover tests in. Your discoverer should then load the files, look for tests, and send the test cases to the discovery sink, which will have those tests displayed in the Test Explorer window. From the last post, you can see how we could implement the ITestDiscoverer interface in C++/CLI, and then use the library we created to walk the binary searching for test cases. So I won't go into detail on that.

The next actual hurdle is the execution of the tests; this is done with the ITestExecutor interface. Again, I will leave it up to your imagination, or you can look at my project here to see how this gets tied into the managed world of Visual Studio. I will be describing how we dive into the actual native execution of these tests.

If we step behind the curtains and think about what execution of a test really is, it's just a fancy way of executing a function (or method, if you prefer) on a class. There is some 'execution engine', which is the process in which this will live and occur; that process will load your library (or executable, for that matter), instantiate one of your test classes, then execute the method, or 'Test Case', on your class. This is a bit of an oversimplification, but for all intents and purposes, that is the 'magic' behind the Test Explorer window. Now, if you're building shared libraries, or DLLs, on Windows in C++, there are two ways to call exported functions. The first method is to use project referencing (or include the header, and add the .lib file path) and allow the compiler and linker to do the work. Another approach is to use ::LoadLibrary and dynamically load the binary yourself. The downside to using the compiler and linker is that you have to know the classes and libraries at compile time; the benefit is that it takes care of all the details of loading the library and linking for you. The benefit of ::LoadLibrary is that you're not tied to binaries at compile time. You could use this, along with some interfaces, to create a plugin system. The downside is that there are only implicit contracts between the libraries you're loading and the application. The compiler cannot enforce that the plugin is implemented correctly.

Our test discoverer and executor is, in essence, a plugin which loads plugins. Each of the test libraries is in fact a plugin, exposing an interface that we want to call on. So we need to use a method where we dynamically load the libraries at run-time. When you're doing dynamic loading of DLLs, it isn't enough to simply load the DLL into your process; you have to know what you want to call, and where. With C++ classes, this concept gets harder to see. So, as I love to do, we will simplify, and go back to an age where these things were more straightforward. The age of C.

Let's imagine we were going to write some C application that loads different plugins to print text. This is a relatively simple problem, and it illustrates the point. The design is simple: we have two components, our executable "Print Application" and a set of "Printer Driver" plugins, or DLLs.

// Printer Application 
#include <stdio.h>
#include <string.h>
#include <Windows.h>

// a function pointer type, describing the signature of our plugin print function
typedef int (*PrinterFunction)(char*, int);

// our print function
int print(char *driverPath, char *textToPrint, int length)
{
    // We want to load our driver
    HMODULE library = LoadLibrary(driverPath);
    if(library == NULL)
        return 0; // we failed to load, can't do anything.
    
    // We want to call our driver's print function
    PrinterFunction printFunction = (PrinterFunction)GetProcAddress(library, "printText");
    if(printFunction == NULL)
        return 0; // no function in the DLL, can't print.

    // finally print.
    return printFunction(textToPrint, length);
}

int main(int argc, char **argv)
{
    int printed = print(argv[1], argv[2], (int)strlen(argv[2]));
    printf("You printed %d bytes to %s", printed, argv[1]);
}


// Old Brother Printer Driver
#include "OldBrotherPrinter.h"

// The __declspec(dllexport) here, tells the compiler to expose the function
int __declspec(dllexport) printText(char *textToPrint, int length)
{
   PRINTER p = ConnectToOldBrother("COM1");

   return SendBytesToPrinter(p, textToPrint, length);
}

If you were to compile and run this, you would get an .exe and a .dll: one is the application itself, and the other is our plugin printer library. When we run our application, we can give it the path to our OldBrotherPrinter.dll and some text, and it should print our text.

There are two very important things to note here. The first is the function pointer type that we've declared. This means that we know the signature of the function we want to call: a function that takes a character pointer and an int as arguments, and returns an int. The second is that we know the name, "printText". Now, if the library doesn't expose a function called "printText", we can't get the address of it. If it's not the same signature, we're going to have a bad time calling it. There are implicit contracts between the caller and the library implementer. The call to ::LoadLibrary will load the binary into our memory space. The ::GetProcAddress call will find the address of that function in our memory space, so that we can make a call to it. We need to cast the return to our known signature, so that we can call it with our arguments. The takeaway from this exercise is that we need to know the name of the function, and its signature, to be able to load and call it in a loaded library.

The reason that I needed to explain this using C is because it is less complex than C++. As we know, in C++ there are things like classes and, more importantly, function overloading. In plain C, you could see the function was exported under the name "printText"; this is because in C, you can only have ONE function named "printText". In C++, we have the concept of function overloading, allowing us to do something like this.

// Printer Functions.
#include <string>

int printToText(int someInteger);
int printToText(const std::string &someText);
int printToText(char someCharacter);

If you're asking "well, how can that be? They're all named the same; how can you differentiate them?", that's the right question. This is done by something called 'name decoration'. The function names really look more like this, from MSVC 19 in Visual Studio 2019.

// Decorated function names
?printToText@@YAHH@Z
?printToText@@YAHABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z
?printToText@@YAHD@Z

Now, when you mix in classes, the decoration gets a little bit more complicated. Code like this.

// Printer Class
#include <string>
class PrinterClass
{
    const std::string myData_;
public:
    int __declspec(dllexport) printToText()
    {
        return 0;
    }
};

will result in a decorated function name something like this.

?printToText@PrinterClass@@QAEHXZ

If you somehow could know the decorated name, then you could load that function by its decorated name. Alright, at this point you're probably thinking: are we ever going to get back to talking about the C++ Unit Test Framework? Realistically, that's what you came here to read. However, this is really important background. I hope you can see the foreshadowing. If not, I'll lay it out on the table.

In order to dynamically load a library and make a call into it, we need to know three things.

  1. The name of the library
  2. The name of the function we want to call
  3. The signature of the function we want to call

So, knowing we need those things, I hope you're asking yourself: which ones do we have? Which ones don't we have, and how do we get them? Well, I can tell you, we have 1 and we have 3. What we are missing is the all-important name of the function we want to call. The framework will supply us the name of the binary, and we know the function signature is 'void (void)', so we just need the name of the function we want to call.

Huh. How the heck are we going to get that? A user can name the function whatever they want to name it. We also have the added pain that the functions live in classes, which the user can also name. Stumped yet? Yeah — me too. When I'm stumped, I go back to the drawing board. In this case, let's go back to reviewing that "CppUnitTest.h" file. Do you recall back to when we looked at the TEST_METHOD macro? If not, it looks like this.

#define TEST_METHOD(methodName)\
static const EXPORT_METHOD ::Microsoft::VisualStudio::CppUnitTestFramework::MemberMethodInfo* CALLING_CONVENTION CATNAME(__GetTestMethodInfo_, methodName)()\
{\
    __GetTestClassInfo();\
    __GetTestVersion();\
    ALLOCATE_TESTDATA_SECTION_METHOD\
    static const ::Microsoft::VisualStudio::CppUnitTestFramework::MethodMetadata s_Metadata = {L"TestMethodInfo", L#methodName, reinterpret_cast<const unsigned char*>(__FUNCTION__), reinterpret_cast<const unsigned char*>(__FUNCDNAME__), __WFILE__, __LINE__};\
\
    static ::Microsoft::VisualStudio::CppUnitTestFramework::MemberMethodInfo s_Info = {::Microsoft::VisualStudio::CppUnitTestFramework::MemberMethodInfo::TestMethod, NULL, &s_Metadata};\
    s_Info.method.pVoidMethod = static_cast<::Microsoft::VisualStudio::CppUnitTestFramework::TestClassImpl::__voidFunc>(&methodName);\
    return &s_Info;\
}\
void methodName()

You can see that one of the macros being used is __FUNCTION__, and another is __FUNCDNAME__. We know __FUNCTION__ gives us the undecorated name of the function; maybe __FUNCDNAME__ is a decorated one? Thank you, Microsoft documentation!

‘__FUNCDNAME__ Defined as a string literal that contains the decorated name of the enclosing function. The macro is defined only within a function. The __FUNCDNAME__ macro is not expanded if you use the /EP or /P compiler option.

This example uses the __FUNCDNAME__, __FUNCSIG__, and __FUNCTION__ macros to display function information.’

Well, color me stoked, we just made the next tiny step: a decorated function name! But this macro is weird. Do you remember what the memory looked like?

0x07462D94  54 00 65 00 73 00 74 00 4d 00 65 00 74 00 68 00 6f 00 64 00 49 00 6e 00 66 00 6f 00 00 00 00 00 00 00 00 00 44 00  T.e.s.t.M.e.t.h.o.d.I.n.f.o.........D.
0x07462DBA  75 00 6d 00 6d 00 79 00 41 00 73 00 73 00 65 00 72 00 74 00 00 00 00 00 00 00 00 00 00 00 43 50 50 55 6e 69 74 54  u.m.m.y.A.s.s.e.r.t...........CPPUnitT
0x07462DE0  65 73 74 49 6e 76 65 73 74 69 67 61 74 6f 72 54 65 73 74 3a 3a 6e 65 73 74 65 64 3a 3a 44 75 6d 6d 79 43 6c 61 73  estInvestigatorTest::nested::DummyClas
0x07462E06  73 3a 3a 5f 5f 47 65 74 54 65 73 74 4d 65 74 68 6f 64 49 6e 66 6f 5f 44 75 6d 6d 79 41 73 73 65 72 74 00 00 00 00  s::__GetTestMethodInfo_DummyAssert....
0x07462E2C  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3f 5f 5f 47 65 74 54 65 73 74 4d 65 74 68 6f 64 49 6e  ....................?__GetTestMethodIn
0x07462E52  66 6f 5f 44 75 6d 6d 79 41 73 73 65 72 74 40 44 75 6d 6d 79 43 6c 61 73 73 40 6e 65 73 74 65 64 40 43 50 50 55 6e  fo_DummyAssert@DummyClass@nested@CPPUn
0x07462E78  69 74 54 65 73 74 49 6e 76 65 73 74 69 67 61 74 6f 72 54 65 73 74 40 40 53 47 50 42 55 4d 65 6d 62 65 72 4d 65 74  itTestInvestigatorTest@@SGPBUMemberMet
0x07462E9E  68 6f 64 49 6e 66 6f 40 43 70 70 55 6e 69 74 54 65 73 74 46 72 61 6d 65 77 6f 72 6b 40 56 69 73 75 61 6c 53 74 75  hodInfo@CppUnitTestFramework@VisualStu
0x07462EC4  64 69 6f 40 4d 69 63 72 6f 73 6f 66 74 40 40 58 5a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  dio@Microsoft@@XZ

Hmm, the decorated name is:

?__GetTestMethodInfo_DummyAssert@DummyClass@nested@CPPUnitTestInvestigatorTest@@SGPBUMemberMethodInfo@CppUnitTestFramework@VisualStudio@Microsoft@@XZ

It looks like the function name is __GetTestMethodInfo_DummyAssert(). That's not the name of our function; our function was called DummyAssert. Color me confused. Looking back at the macro, we can now see that it actually just generates a metadata function, and then starts our function. So it was never really capturing our method name at all; it was capturing metadata about our function. Shoot! How do we call it now?

Ahhhhh! Time to breathe. Time to rack our brains. Time to dig deep.

Well, what exactly is this MethodMetadata for? They wouldn't put it in for no reason, so it's got to be useful. If we look closely and expand the macros, removing some non-essentials, that function boils down to this.

static const ::Microsoft::VisualStudio::CppUnitTestFramework::MemberMethodInfo* __GetTestMethodInfo_DummyAssert()
{
    // removed lines above for simplicity.
    static ::Microsoft::VisualStudio::CppUnitTestFramework::MemberMethodInfo s_Info = {::Microsoft::VisualStudio::CppUnitTestFramework::MemberMethodInfo::TestMethod, NULL, &s_Metadata};
    s_Info.method.pVoidMethod = static_cast<::Microsoft::VisualStudio::CppUnitTestFramework::TestClassImpl::__voidFunc>(&DummyAssert);
    return &s_Info;
}

We can see that they statically allocate a MemberMethodInfo object, set something called a pVoidMethod member on it, and then return its address. The signature of __GetTestMethodInfo_DummyAssert() returns a const MemberMethodInfo*. Now we're getting somewhere: this function captures a pointer to the actual test method. So the algorithm we want is something along the lines of:

1. Use our tool set to scan for the MethodMetadata
2. Load the library into memory
3. Load the __GetTestMethodInfo_X function, by its decorated name in the metadata
4. Call this function, to return us the MemberMethodInfo
5. Make our call on the method.pVoidMethod function pointer

Could it be so simple? Unfortunately not. If you recall from our simple example, we used free functions. What I mean by free functions is functions that aren't associated with any data. I'm sorry, what? This has nothing to do with the problem at hand.

Yay! Another history lesson. If we compare procedural programming with object oriented programming, we can look at procedural programming as a set of defined functions, where we enter into one, and it calls another, and another, so on and so forth. There is procedure to its logic. Whereas with object oriented programming, we have the concept of a class or object that gets instantiated and has a set of methods that operate on it; those methods may act on other objects, and so on and so forth. Thus, it can be harder to follow, with object A calling object B, etc. However, the two principles aren't all that different if you look at them from the right perspective. You can actually model object oriented programming in a procedural language. You do this by creating some data model, and a set of functions that work against it. Consider this C-like code.

struct Person
{
   char *name;
   int age;
};

void construct(Person *p, char *name, int age)
{
     p->name = malloc(strlen(name) + 1); /* +1 for the null terminator */
     strcpy(p->name, name);
     p->age = age;
}

void printName(Person *p)
{
    printf("My name is %s", p->name);
}

void destruct(Person *p)
{
    free(p->name);
}

As you can see, this looks a lot like a simple C++ class. You have a constructor, a destructor, and a simple printName function. Notice that each function operates on a Person*. I didn't invent this pattern, or discover it; in fact, this was the beginnings of C++. Of course, C++ has come a long way, but still, at its core, a class or object is a set of functions that operate on a chunk of data. Member functions in C++ take a pointer to that data, the instance, as a hidden first argument. When I said we had only worked with free functions, I meant our library example only worked against functions that were not being called on object instances. The void functions in our test methods act against the test class instance. Therefore, we can't just "call" the function outright, or bad things will happen; demons could fly out of our noses. We don't want that. It has to work with an instance of our data, our class. So that means we need to actually create an instance of our class first.

So, in order to do that, we need to know what our class is. This is really getting complicated. Let's look at that file again, to see if we can glean some details.

// This is a part of the VSCppUnit C++ Unit Testing Framework.
// Copyright (C) Microsoft Corporation
// All rights reserved.

///////////////////////////////////////////////////////////////////////////////////////////
// Macro to define your test class. 
// Note that you can only define your test class at namespace scope,
// otherwise the compiler will raise an error.
#define TEST_CLASS(className) \
ONLY_USED_AT_NAMESPACE_SCOPE class className : public ::Microsoft::VisualStudio::CppUnitTestFramework::TestClass<className>

...

#pragma pack(push, 8)
    struct TestClassInfo
    {
        TestClassImpl::__newFunc pNewMethod;
        TestClassImpl::__deleteFunc pDeleteMethod;

        const ClassMetadata *metadata;
    };
#pragma pack(pop)

...

template <typename T>
class TestClass : public TestClassImpl
{
    typedef T ThisClass;

public:
    static TestClassImpl* CALLING_CONVENTION __New()
    {
        CrtHandlersSetter setter;
        return new T();
    }

    static void CALLING_CONVENTION __Delete(TestClassImpl *p)
    {
        CrtHandlersSetter setter;
        delete p;
    }


    // assume method matches this pointer
    virtual void __Invoke(__voidFunc method)
    {
        typedef void (ThisClass::*voidFunc2)();
        voidFunc2 method2 = static_cast<voidFunc2>(method);

        CrtHandlersSetter setter;
        (static_cast<ThisClass *>(this)->*method2)();
    }

    static EXPORT_METHOD const ::Microsoft::VisualStudio::CppUnitTestFramework::TestClassInfo* CALLING_CONVENTION __GetTestClassInfo()
    {
        ALLOCATE_TESTDATA_SECTION_CLASS
        static const ::Microsoft::VisualStudio::CppUnitTestFramework::ClassMetadata s_Metadata = {L"TestClassInfo", reinterpret_cast<const unsigned char*>(__FUNCTION__), reinterpret_cast<const unsigned char*>(__FUNCDNAME__)};

        static const ::Microsoft::VisualStudio::CppUnitTestFramework::TestClassInfo s_Info = {&__New, &__Delete, &s_Metadata};
        return &s_Info;
    }

    static EXPORT_METHOD const ::Microsoft::VisualStudio::CppUnitTestFramework::TestDataVersion* CALLING_CONVENTION __GetTestVersion() 
    {
        ALLOCATE_TESTDATA_SECTION_VERSION
        static ::Microsoft::VisualStudio::CppUnitTestFramework::TestDataVersion s_version = { __CPPUNITTEST_VERSION__ };

        return &s_version;
    }
};

Here, we see the same pattern. This method, __GetTestClassInfo(), has ClassMetadata, which holds the decorated name of the __GetTestClassInfo() method itself. We can load that method and call it. From there, the TestClassInfo object has pointers to a __newFunc and a __deleteFunc. This was the key to unlocking our success! We can see the finish line now. The macro TEST_CLASS ensures that your class derives from the template class TestClass<T>. It uses CRTP to be type aware of our class, and defines two static functions: __New(), which creates a new instance of T (our type) and returns a TestClassImpl*, and __Delete(TestClassImpl*), which deletes it. It also defines a function called __Invoke(__voidFunc), which invokes a void method against ‘this’. TestClassImpl is defined as follows.

// This is a part of the VSCppUnit C++ Unit Testing Framework.
// Copyright (C) Microsoft Corporation
// All rights reserved.

class TestClassImpl
{
public:
    TestClassImpl() {}
#ifdef FEATURE_CORESYSTEM
    virtual ~TestClassImpl() {}
#else
    virtual ~TestClassImpl() noexcept(false) {}
#endif

    typedef TestClassImpl* (CALLING_CONVENTION *__newFunc)();
    typedef void (CALLING_CONVENTION *__deleteFunc)(TestClassImpl *);

    typedef void (TestClassImpl::*__voidFunc)();

    virtual void __Invoke(__voidFunc method) = 0;

protected:
    struct CrtHandlersSetter
    {
    typedef void (__cdecl *INVALID_PARAMETER_HANDLER)(const wchar_t* pExpression, const wchar_t* pFunction, const wchar_t* pFile, 
    unsigned int line, uintptr_t pReserved);

        CrtHandlersSetter()
        {
            if(IsDebuggerAttached())
            {
                debuggerAttached = true;
                return;
            }
            
            debuggerAttached = false;
            // Suppress the assert failure dialog.
            oldReportMode = _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_FILE);
            oldReportFile = _CrtSetReportFile(_CRT_ASSERT, _CRTDBG_FILE_STDERR);
            // Set the handler
            oldInvalidParameterHandler = _set_invalid_parameter_handler(reinterpret_cast<INVALID_PARAMETER_HANDLER>(InvalidParameterHandler));
        }
        
        ~CrtHandlersSetter()
        {
            if(debuggerAttached)
            {
                return;
            }
            
            _CrtSetReportMode(_CRT_ASSERT, oldReportMode);
            _CrtSetReportFile(_CRT_ASSERT, oldReportFile);
            _set_invalid_parameter_handler(oldInvalidParameterHandler);
        }

private:
        // Check if a debugger is attached.
        __declspec(dllexport) static bool __stdcall IsDebuggerAttached();
       // The handler for invalid parameters
        __declspec(dllexport) static void __cdecl InvalidParameterHandler(const unsigned short* pExpression, const unsigned short* pFunction, const unsigned short* pFile, 
    unsigned int line, uintptr_t pReserved);
    
private:
        _invalid_parameter_handler oldInvalidParameterHandler;
        int oldReportMode;
        _HFILE oldReportFile;
        bool debuggerAttached;
     };
};

So __Invoke(__voidFunc) is a pure virtual function that allows us, using the miracle of polymorphism, to make the call into our class from just a pointer to TestClassImpl.

You can see that we have everything we need.

1. Load the binary metadata, and find the test method we want
2. Determine which class it exists in
3. Load the ClassMetadata, to get the decorated name of the __GetTestClassInfo() function
4. Call __GetTestClassInfo() to retrieve the class info
5. Use the pointer to __New(), to create an instance of the class
6. Use the decorated __GetTestMethodInfo_X() name, to call it and retrieve the MemberMethodInfo
7. Use the method info, to __Invoke(__voidFunc), on the instance of TestClassImpl we created earlier.
8. Success.

So, I went and did just that: I made the test executor do the steps above. It invoked the method, and the test ran. I was over the moon. It was time to fix my first issue, so I hurriedly wrapped the __Invoke(__voidFunc) call in a try/catch, like this.

// MsCppUnitTestAdapter.cpp:210 
void VsTestAdapterExecutionContext::ExecuteMethod(System::String ^methodName, TestResult ^r)
{
    ResultRecorder cb(r); 
    static_cast<ResultReporterExceptionHandler*>(handler_)->Reset(&cb);
    try
    {           
        auto className = context_->Info().GetClassNameByMethodName(MarshalString(methodName));
        // look up the test class by name; if it isn't already in the execution context, create it below.
        TestClass_ *tc = nullptr;
        if (!stdx::find_if(*classes_, tc, FindByClassName(className)))
            tc = context_->CreateClass(className); // if you didn't call load, you won't get the class initialize.

        tc->Reset(); // this will reset the class i.e. create a new instance
        tc->InvokeMethodSetup();
        cb.OnStart();
        tc->InvokeMethod(MarshalString(methodName));
        cb.OnComplete();
        tc->InvokeMethodCleanup();
    }
    catch (const std::exception &e)
    {
        cb.OnError(System::String::Format("Uncaught C++ exception. {0}", gcnew System::String(e.what())));
    }
    catch (...)
    {
        cb.OnError("Unknown C++ Exception");
    }
    static_cast<ResultReporterExceptionHandler*>(handler_)->Reset();
    
}

I didn’t go into any detail about how I ended up getting the class name from the function name; the simple answer is that I parse the class name out of the decorated function name. I also didn’t go into detail about the setup / teardown of the classes and modules. The snippet above does some of that, as well as some reporting about the tests. I’ll admit it’s a bit messy, but it works. You can see that I catch std::exception and print the error. Now, if a C++ std::exception escapes the test method, my framework will catch it and report the failure.

By this point, I was over the moon. I had done something harder than I thought I could, and I had really pushed my understanding of how these things worked. I had run some tests and was getting green lights in the Test Explorer window. I had let some std::exceptions escape and saw the tests fail correctly, with the exception information displayed in the test window. Time to try some negative assertion tests. I set up a test with a bad assertion, something like this.

TEST_METHOD(willAssert)
{
    Assert::AreEqual(3, 4, L"They're not equal"); 
}

Each time I ran ‘willAssert’, the test would stay semi-opaque in Test Explorer, as if I hadn’t run it at all. When I watched Task Manager, the test execution engine process would disappear the moment I ran the test. Oh no.

I put my head into my hands. I have no idea what I’m doing.

I hope that Part 2 of this series was as entertaining as the first part. I really loved putting the execution part of this code together; it was such a puzzle. Stay tuned for the next piece of the puzzle, where we explore Structured Exception Handling.

“Magic lies in challenging what seems impossible” — Carol Moseley Braun

Happy Coding!

References

Predefined Macro Definitions