"Magic is just science that we don't understand yet" — Arthur C. Clarke


The click-bait title is completely intentional. 🙂 Spoiler though: I don’t actually hate dependency injection (DI). In fact, I love DI. I think it’s one of the best concepts in software design. It enables flexible architectures, better testing, and reusable components. It’s an amazing concept.

So why the click-bait title? Because it’s a common mistake to call Inversion of Control “Dependency Injection”. If you take anything away from this post, take away this statement: DI != IOC. Dependency Injection is the practice of decoupling the creation of dependencies from the responsibility of your class; in short, having your dependencies provided, or injected, at construction time (or via setters). That is not the same thing as an Inversion of Control container. An IOC container effectively takes a declarative list of registrations and, when asked for an object, builds a dependency graph and constructs the object recursively, constructing each dependency and injecting it as it goes. This inverts the control: instead of starting at the leaves and constructing objects upwards, as you would in a traditional procedural program, the IOC container lets you ask for the top-level object and ensures its dependencies exist for you.

My background is in C++, and without fancy reflection it’s practically impossible to build an IOC container. Does this mean you can’t have DI? I don’t think so; in fact, for decoupled, testable, maintainable code you have to take dependencies in the constructor. But you have to build the dependency graph by hand, which lets you actually see how much of a mess you’re making.
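
To make “build the graph by hand” concrete, here’s a minimal sketch of hand-wired DI in C#. The ILogger/IOrderRepository/OrderService names are hypothetical, purely for illustration; the point is that the composition root constructs the whole graph explicitly, so every dependency is visible in one place.

using System;

// Hypothetical interfaces and implementations, just to illustrate hand-wired DI.
public interface ILogger { void Log(string message); }
public interface IOrderRepository { void Save(string order); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}

public class OrderRepository : IOrderRepository
{
    private readonly ILogger logger_;
    public OrderRepository(ILogger logger) { logger_ = logger; }
    public void Save(string order) => logger_.Log($"Saved {order}");
}

public class OrderService
{
    private readonly IOrderRepository repository_;
    public OrderService(IOrderRepository repository) { repository_ = repository; }
    public void PlaceOrder(string order) => repository_.Save(order);
}

public static class Program
{
    public static void Main()
    {
        // The composition root: the graph is built by hand, from the leaves up,
        // so every dependency (and every knot) is visible in one place.
        ILogger logger = new ConsoleLogger();
        IOrderRepository repository = new OrderRepository(logger);
        var orders = new OrderService(repository);

        orders.PlaceOrder("order-42");
    }
}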

My real beef is with IOC containers, and the ease with which they let you take a dependency on something. Modern IOC containers make it trivially easy for a developer to take a dependency on another class. It’s simple: declare an interface, implement said interface, register the implementation against the interface, take the dependency in your constructor, voila. This is “DI” at its finest. Except it takes away the thought process behind taking a dependency on something, and what that means. And don’t be confused: just because you program “to an interface” doesn’t make it any less coupled. When you take a dependency on something you’re stating “my class needs this class to do its work”. If you truly evaluate that statement, I suspect you’ll often find the class doesn’t actually require the dependency. In a lot of cases, the dependency is orthogonal to the work being done by the class.
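
For reference, the registration dance looks roughly like this. I’m using Microsoft.Extensions.DependencyInjection here purely for illustration, since the post isn’t tied to any particular container, and IFooService/FooService are made-up names.

using Microsoft.Extensions.DependencyInjection;

public interface IFooService { void DoWork(); }

public class FooService : IFooService
{
    public void DoWork() { /* ... */ }
}

public static class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();

        // One line per dependency: declare, implement, register, done.
        // Nothing here asks whether the dependency belongs where it gets injected.
        services.AddSingleton<IFooService, FooService>();

        using var provider = services.BuildServiceProvider();
        provider.GetRequiredService<IFooService>().DoWork();
    }
}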

Here are three issues that I think IOC containers cause in codebases.

  1. Dependency overload – Taking too many dependencies. Spaghetti, not onions.
  2. Responsibility confusion – Confusing who does what, forcing clients to take too many dependencies and do the work for themselves.
  3. Implicit responsibility – Relying on implementation details in a chain of dependents.

Dependency Overload

The term “spaghetti code” has been around since the ’70s. In my terms it means your code has no structure and is a tangled mess, resembling a plate of overcooked spaghetti that Scott Conant would scoff at. There are other pasta-themed analogies for shitty code, but we’ll stick with spaghetti. This is in stark contrast to “onion code” (my term), which has a well-defined, layered structure resembling the cross-section of a freshly cut red onion. The ease with which an IOC container lets you take a dependency rarely gives the developer pause to ask “should I really be taking this dependency here?”. In good layered code, the number of dependencies your class takes should be related to where it sits in the structure. The closer you are to the center of the onion, the fewer (if any) dependencies you should take. As you move outwards you’ll naturally have more dependencies, until the final layer, your executable, where you depend on everything. But if it’s trivially easy to depend on something, it’s very easy to tie knots in that structure. You can easily go into a lower-layer object and take a dependency on a higher-layer one (provided it isn’t already a dependency of that higher-layer object), and your container won’t tell you this is a mistake.

So – what’s the solution? My advice is to take pause when you’re writing the constructor and ask yourself whether this is truly a “dependency”, or whether you could pass the information in as a parameter. The question to pose is “can it be resolved outside the scope of this class?”. Consider this example.

class FooClient : IFooClient
{
   private IHttpClient client_;
   private IFooEncoder encoder_;

   public FooClient(IHttpClient client, IFooEncoder encoder)
   {
       client_ = client;
       encoder_ = encoder;      
      /// Setup base URL for Foo service, etc...
   }
   
   public async Task<FooStringResponse> SendStringMessageAsync(string message)
   {
        var fooStringMessage = encoder_.EncodeStringMessage(message);
        var httpReq = fooStringMessage.ToHttpRequest();
        var resp = await client_.SendAsync(httpReq);
        /// Error handling, etc..
        return ParseFooStringResponse(resp.Content);
   }

   public async Task<FooObjectResponse?> SendObjectMessageAsync<T>(T obj)
   {
       /// Same as above just for object, JSON encode/decode, etc..
   }
}


Can you see how easy it was to take a dependency on the IFooEncoder? It’s just as easy to see that this dependency isn’t needed: we don’t need to use the encoder in this class at all. It’s much better suited to another layer, outside this class.

class FooClient : IFooClient
{
   private IHttpClient client_;

   public FooClient(IHttpClient client)
   {
       client_ = client;

      /// Setup base URL for Foo service, etc...
   }
   
   public async Task<FooResponse> SendMessageAsync(FooMessage fooMessage)
   {
        var httpReq = fooMessage.ToHttpRequest();
        var resp = await client_.SendAsync(httpReq);
        /// Error handling, etc..
        return ParseFooResponse(resp.Content);
   }
}

class FooRequester
{
   private IFooClient client_;
   private IFooEncoder encoder_;

   public FooRequester(IFooClient client, IFooEncoder encoder)
   {
       client_ = client;
       encoder_ = encoder;      
   }

   public async Task<FooStringResponse> SendStringMessageAsync(string message)
   {
        var fooStringMessage = encoder_.EncodeStringMessage(message);
        var resp = await client_.SendMessageAsync(fooStringMessage);
        return TranslateToStringResponse(resp);
   }

   public async Task<FooObjectResponse?> SendObjectMessageAsync<T>(T obj)
   {
       var fooObjMessage = encoder_.EncodeObjectMessage(obj);
       var resp = await client_.SendMessageAsync(fooObjMessage);
       return TranslateToObjectResponse(resp);
   }
}

The code could be cleaned up a bit, but you can see the clear responsibility of each class. This approach separates encoding the messages from sending them, making the code simpler and easier to understand.


Responsibility Confusion

Similar to dependency overload, responsibility confusion is the act of taking too many unnecessary dependencies where the responsibility of those dependencies is ambiguous. In most cases a layer is missing, forcing a higher layer to do more work than it should. It presents itself as confusion over who is responsible for what.

Here’s an example:

interface IFooClient
{
    Task RequestAsync(string permission);
}

interface IFooPermissionsProvider
{
    bool IsPermitted(HttpContext context, string permission);
}

class BarController 
{
   private IFooClient client_;
   private IFooPermissionsProvider permissions_;

   public BarController(IFooClient fooClient, IFooPermissionsProvider fooPermissions)
   {
       client_ = fooClient;
       permissions_ = fooPermissions;
   }

   public async Task OnFooRequest()
   {
        if(!permissions_.IsPermitted(HttpContext, "Edit"))
        {
             return await client_.RequestAsync("View");
        }
        else 
        {
             return await client_.RequestAsync("Edit");
        }
        
   }

}

In this case, it makes sense to introduce an additional layer where the permissions are checked. It’s clearly not the responsibility of the BarController to do this work.

A much friendlier design:

interface IFooClient
{
    Task RequestAsync(string permission);
}

interface IFooPermissionsProvider
{
    bool IsPermitted(string userId, string permission);
}

class FooUserClient: IFooUserClient
{
   private IFooClient client_;
   private IFooPermissionsProvider permissions_;
  
   public FooUserClient(IFooClient client, IFooPermissionsProvider permissions)
   {
       client_ = client;
       permissions_ = permissions;
   }

   public Task RequestAsync(string userId)
   {
        if(!permissions_.IsPermitted(userId, "Edit"))
        {
             return await client_.RequestAsync("View");
        }
        else 
        {
             return await client_.RequestAsync("Edit");
        }
   }
}

class BarController 
{

   private IFooUserClient client_;
   public BarController(IFooUserClient client)
   {
      client_ = client;
   }


   public async Task OnFooRequest()
   {
       await client_.RequestAsync(HttpContext.User.Id);
        
   }

}

Implicit Responsibility

It doesn’t take ChatGPT to recognize there’s a pattern running through these smells: not keeping responsibility where it belongs. Implicit responsibility happens when you take a dependency on two or more classes and make an assumption about the behaviour of one of them. It’s subtle, but I’ll try to illustrate it below.

interface IRequestCounter
{
   void IncrementCount(string url);
   int GetCount(string url);
}

interface IClient
{
  Task MakeRequestAsync(string url);
}


class BarController
{
   private IClient client_;
   private IRequestCounter counter_;

   public BarController(IClient client, IRequestCounter counter)
   {
       client_ = client;
       counter_ = counter;
   }

   public async Task OnFoo()
   {
      await client_.MakeRequestAsync(FooConstants.Url);
      if(counter_.GetCount(FooConstants.Url) == 100)
      {
         // You win!! You're the 100th requester!
      }
 
   }
  
}

You can see there’s an implicit assumption here that the IClient will increment the request count. You could easily argue its validity: “well yes, why wouldn’t the client increment the request count?” Well, because this is not part of IClient’s explicit interface, the user of that interface knows nothing about request counts. Since we likely wrote the code and know the implementation details of the client, we know that it does this work. The problem is that if IClient ever gets a new implementation, we can break code at a much higher level, i.e. BarController. Worse yet, unit tests are unlikely to catch this because of the dependency separation. You would have a mock IClient making requests and a mock IRequestCounter returning the expected values, but you’d likely miss the implementation detail you’ve assumed.
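
To show that blind spot, here’s a rough sketch of the kind of unit test you’d write against the snippet above, using hand-rolled fakes. FakeClient, FixedCounter and the test method are my own invention (it also assumes the elided FooConstants), and the test-framework attributes and assertions are omitted.

using System.Collections.Generic;
using System.Threading.Tasks;

// Hand-rolled test doubles; these are hypothetical, not from the post.
class FakeClient : IClient
{
    public List<string> Requests { get; } = new List<string>();

    public Task MakeRequestAsync(string url)
    {
        Requests.Add(url);
        return Task.CompletedTask;   // knows nothing about counting
    }
}

class FixedCounter : IRequestCounter
{
    private readonly int count_;
    public FixedCounter(int count) { count_ = count; }

    public void IncrementCount(string url) { }
    public int GetCount(string url) => count_;
}

class BarControllerTests
{
    public async Task OnFoo_CelebratesTheHundredthRequest()
    {
        // The counter is simply told to report 100, so the winning branch runs.
        // Nothing verifies that making a request increments the count; that
        // implicit link exists only inside the real IClient implementation,
        // and this test would keep passing if it were ever removed.
        var controller = new BarController(new FakeClient(), new FixedCounter(100));

        await controller.OnFoo();

        // Assert on whatever observable effect the "You win!" path has...
    }
}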

A much better approach is, again, to create another layer that handles the responsibility of keeping track of the requests made, and to use that as the dependency in BarController.

interface ICountingClient
{
    Task<int> MakeRequestAsync(string url);
}

class BarController
{
   private ICountingClient client_;

   public BarController(ICountingClient client)
   {
       client_ = client;
   }

   public async Task OnFoo()
   {
      var count = await client_.MakeRequestAsync(FooConstants.Url);
      if(count == 100)
      {
         // You win!! You're the 100th requester!
      }
 
   }
  
}
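
The counting responsibility still has to live somewhere, of course. One possible shape for that layer – my sketch, not something from the post – is a wrapper that composes the original IClient and IRequestCounter, so the “a request bumps the count” rule is stated explicitly in exactly one place:

using System.Threading.Tasks;

// A hypothetical implementation of ICountingClient; the post only shows the interface.
// It wraps the existing IClient and IRequestCounter, making the counting rule explicit
// instead of leaving it as an assumption in the controller.
class CountingClient : ICountingClient
{
    private readonly IClient client_;
    private readonly IRequestCounter counter_;

    public CountingClient(IClient client, IRequestCounter counter)
    {
        client_ = client;
        counter_ = counter;
    }

    public async Task<int> MakeRequestAsync(string url)
    {
        await client_.MakeRequestAsync(url);
        counter_.IncrementCount(url);
        return counter_.GetCount(url);
    }
}

With something like that in place, only the counting layer needs to change if the counting mechanism does; BarController stays oblivious.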

A short wrap-up: we went through three smells that can creep in when we get too DI-happy in our enterprise code. Remember the one takeaway, DI != IOC, and next time you reach for that easy-to-use dependency, just take an extra second to make sure it fits.

Until next time, happy coding.

“Strength is found in weakness. Control is found in dependency. Power is found in surrender.” – Dan B. Allender
