Dependency Injection with Clojure using DIME

Shantanu Kumar
May 1, 2018

Do we need Dependency Injection with Clojure? That is a slightly misplaced question, because it does not even begin with the problem definition. Clojure is a functional programming language. In this post we explore the "dependency problem" in Clojure programming and what dependency injection can bring to the table. Later, we discuss DIME, a Push-model based dependency injection library.

Functions are “Contract + Implementation”

Consider the following function:

(fn save-order [db-connection order-details]) ; caller passes two args

This function requires the caller to pass two arguments, which forms its API contract. That sounds fine until the caller comes into conflict with the contract. What if the caller (say, the web layer) does not have the database connection, or finds it inappropriate and burdensome to lug around as an argument? The contract the caller is looking for may differ from the implementation: the caller may want to call (fn save-order' [order-details]), a subset of the above implementation, without being bothered about db-connection. In view of this caller contract, db-connection is a dependency of save-order.

Push versus Pull

The save-order function can either accept db-connection as an argument (Push model) or resolve it by looking up some var (Pull model) instead. The signature (fn save-order [db-connection order-details]) we saw in the previous section implies Push model because we push db-connection as an argument to the function.

On the other hand, an example of Pull model would be as follows:

(def ^:redef db-connection nil) ; mutated during initialization

(fn save-order [order-details]) ; uses db-connection

Unfortunately, the Pull model implies Place-oriented programming, involves mutation (potentially leading to ordering issues with other mutation) and is riddled with problems around mocking during tests. These issues are fundamental in nature and have no sound remedy. However, the Pull model does manage to reduce the API contract down to what the caller expects.
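
To make the testing issue concrete, here is a minimal sketch (the names are illustrative) of what the Pull model typically forces you to do, both at startup and in tests:

(def ^:redef db-connection nil) ; mutated during initialization

(defn save-order [order-details]
  ;; reaches out to the var instead of receiving the connection
  {:saved order-details :via db-connection})

;; startup must mutate the var before any call to save-order
(alter-var-root #'db-connection (constantly "real-connection"))

;; tests must rebind the global var, affecting anything else running concurrently
(with-redefs [db-connection "mock-connection"]
  (save-order {:order-id 42}))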

The Push model is not free of challenges either:

  • Cascading push: the caller must acquire the dependency before passing it on, the caller's caller must do the same, and so on. That is turtles all the way down, right?
  • Too many dependencies: the outermost caller needs a large number of dependencies just to begin the invocation chain. With a large number of dependencies, how do you manage the function arities? One can adopt a convention of dedicating the first argument of every function to a dependency map, but it is unwieldy and, in my experience, quickly turns into a "grab bag" anti-pattern once somebody is tempted to abuse the dependency map to pass runtime parameters in order to avoid or delay refactoring (see the sketch after this list).
  • Editor inconvenience: you cannot navigate to the source code of a dependency that arrives as an argument. Power users of Clojure code editors are used to quickly jumping to the source of an invoked function, but plain arguments carry no metadata to make that feasible.
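
For illustration, the first-argument convention might look like this (a sketch; the names are hypothetical):

(defn save-order [{:keys [db-connection]} order-details]
  ;; every function receives the dependency map as its first argument
  {:saved order-details :via db-connection})

(defn place-order [deps order-details]
  ;; the caller lugs the whole map around just to pass it on
  (save-order deps order-details))

(place-order {:db-connection "db-connection-here"} {:order-id 42})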

Though the challenges of the Push model make it look almost impractical, what if there were a way to automate the process and remove the element of human error? Perhaps a dependency injection library/framework! For the remainder of this post we discuss the other aspects in light of the Push model.

Push-model benefits

So, why should anybody consider the Push model over the convenient Pull model? Fundamentally, the Push model forces you to decouple the implementation from the contract, and this decoupling leads to several benefits:

  • A simpler application initialization model that is easier to reason about: you do not need to worry about mutation, or the order and source of mutation, to prepare and store dependencies. (Small mutations for development purposes, i.e. REPL/test mode, are OK.)
  • A derivative of the first: decoupling allows very flexible and pervasive mocking during tests, which means you can simulate success or failure of dependencies at any level and implement quite sophisticated test cases.
  • Since there is no mutation, any kind of instrumentation, enhancement or wrapping of the dependencies becomes much easier, as sketched below.

With automation of the Push-model overhead, the benefits start outweighing the associated challenges.
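
As a quick illustration of the last point, a dependency that is just a function can be wrapped before it is pushed in (a sketch; the names are hypothetical):

(defn fetch-order [db-connection order-id]
  {:order-id order-id :via db-connection})

(defn wrap-logging [f fname]
  ;; returns a function that logs every call before delegating to f
  (fn [& args]
    (println "calling" fname "with" args)
    (apply f args)))

;; wrap once during wiring, then pass the wrapped function as the dependency
(def fetch-order-logged (wrap-logging fetch-order 'fetch-order))

(fetch-order-logged "db-connection-here" 42)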

Comparison with Object-Oriented approach

If this were implemented in an OO language, the implementation would probably encapsulate db-connection as a private (dependency) member and expose only a save(order-details) method as the contract. So, OO languages have this baked-in notion of dependency versus contract.

To achieve a similar separation of dependency versus contract, we need two variants of the save-order function we saw above: one is the (current) implementation, and the other is the caller's view of the implementation. The latter may be created from the former by encapsulating the dependency, as sketched below.
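
In Clojure the caller's view can be derived from the implementation, for example with partial application (a sketch; the connection value is a stand-in):

(defn save-order [db-connection order-details]
  {:saved order-details :via db-connection})

;; the caller's view: db-connection is encapsulated, only order-details remains
(def save-order' (partial save-order "db-connection-here"))

(save-order' {:order-id 42})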

A case for SOLID

SOLID (which makes sense for Clojure too) represents five fundamental programming principles, listed below:

  • Single responsibility principle
  • Open/closed principle
  • Liskov substitution principle
  • Interface segregation principle
  • Dependency inversion principle

When you allow a dependency to be exposed in the API contract, you (a) make the caller responsible for procuring db-connection (violating the Single-responsibility principle) and (b) pollute the API contract with implementation details (violating the Interface-segregation principle).

Tests and Mocks

We test only implementations, not contracts. This makes unit testing with mocks quite easy with the Push model, i.e. passing dependencies as arguments. The test code can supply mock versions of the dependencies as arguments.

This scheme sounds kosher for unit tests, but what about integration tests? Integration tests can be done in a similar fashion, except that fully constructed dependencies are passed instead of mocks. We can also build upon unit and integration tests to design "scenario tests" with varying levels of simulation for the various functionality and infrastructure components of an application.
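
For example, a unit test may hand a mock connection straight to the function under test (a sketch using clojure.test; the names are hypothetical):

(require '[clojure.test :refer [deftest is]])

(defn save-order [db-connection order-details]
  ;; a real implementation would persist order-details using db-connection
  {:saved order-details :via db-connection})

(deftest save-order-with-mock-connection
  (is (= {:saved {:order-id 42} :via :mock-db-connection}
         (save-order :mock-db-connection {:order-id 42}))))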

Dependency lifecycle

Dependencies are not always ready-to-use values; sometimes you need to initialize them first, and often the initialized things are stateful objects that also require de-initialization. Let us revisit the original function along with a helper:

(fn save-order [db-connection order-details])

(fn make-db-connection [host port db-name password options])

The db-connection argument accepted by save-order is created using the make-db-connection function, which takes five arguments that are probably read from a configuration file. The password is probably stored encrypted and must be decrypted before use. As you can see, there is an order of steps to initialize/resolve dependencies and pass them around. Similarly, there may be a certain sequence of steps for de-initialization as well.
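
Manually, that wiring could look roughly like this (a sketch; the config values and the decrypt stand-in are illustrative):

(defn decrypt [encrypted] (str "decrypted-" encrypted)) ; stand-in for real decryption

(defn make-db-connection [host port db-name password options]
  {:host host :port port :db-name db-name :password password :options options})

(defn save-order [db-connection order-details]
  {:saved order-details :via db-connection})

(let [config  {:host "localhost" :port 5432 :db-name "orders"
               :password "encrypted-secret" :options {}}     ; 1. read configuration
      db-conn (make-db-connection (:host config) (:port config) (:db-name config)
                                  (decrypt (:password config)) (:options config)) ; 2. initialize
      save!   (partial save-order db-conn)]                   ; 3. push into the implementation
  (save! {:order-id 42}))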

Dependency Injection Made Easy (DIME)

DIME is a namesake library: it makes dependency injection easy while following the Push model. We enumerated the challenges of the Push model in earlier sections; DIME takes care of those problems and provides a smoother experience of using the Push model. The sections below go into the details of how DIME works.

DIME: Dependency graph

DIME requires upfront declaration of all dependencies, tagged with identifiers. Given such a dependency graph, it infers the resolution order and realizes the graph by injecting the cascading dependencies. Below is an example of how to declare dependencies using var metadata (DIME also supports other ways to declare them):

(defn ^{:expose :db-conn :post-inject dime.util/post-inject-invoke}
make-db-connection
[^:inject {:keys [host port name password options]}])

(defn db-fetch-order [^{:inject :db-conn} db-connection order-id])

(defn cached-fetch-order [^:inject db-fetch-order order-id])

Here, we declare three functions (in various namespaces) in this dependency order: make-db-connection → db-fetch-order → cached-fetch-order

The function make-db-connection is injected with configuration parameters that must be passed as seed data. Other dependencies are correlated by the names they are exposed/injected as. While realizing the graph, we must pass a seed map containing the keys :host, :port, :name, :password and :options with corresponding values.
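
Realizing the graph might look roughly like the sketch below; the namespaces are hypothetical, and the entry points (assumed here to be dime.var/ns-vars->graph and dime.core/inject-all) should be checked against the DIME README:

(require '[dime.core :as di]
         '[dime.var  :as dv])

;; discover the tagged vars in the relevant namespaces and build the graph
(def graph (dv/ns-vars->graph ['myapp.db 'myapp.service]))

;; seed data for the injectable configuration parameters
(def seed {:host "localhost" :port 5432 :name "orders"
           :password "s3cr3t" :options {}})

;; realize the graph: returns a map of expose-keys to injected functions
(def injected (di/inject-all graph seed))

((:cached-fetch-order injected) 42) ; invoke an injected function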

DIME: Lifecycle

DIME figures out the dependency resolution order as a Directed Acyclic Graph, starting with the least dependent node. In the example above, it first injects the host, port, name, password and options seed attributes into make-db-connection, which then becomes a no-argument function because it has no non-injectable arguments. That injected function is post-processed by dime.util/post-inject-invoke, which invokes the no-argument function to obtain a database connection. The database connection is exposed as :db-conn in the dependency graph.

The db-fetch-order function is injected with :db-conn returning a function (fn [order-id]) exposed as :db-fetch-order in the dependency graph. Similarly, the function cached-fetch-order is injected with :db-fetch-order returning a function (fn [order-id]) exposed as :cached-fetch-order in the dependency graph. Dependency resolution happens sequentially here. After injections succeed, a map of expose-keys to injected functions is returned.

While the post-processing step in DIME allows stateful action to happen during dependency resolution, there is no provision for a corresponding cleanup action. The caller must clean up any stateful resources by looking them up in the dependency graph. This limitation may seem odd, but in practice the vast majority of dependencies are stateless and need no cleanup; the few stateful ones can be cleaned up with explicit code.
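
For example, a stateful connection could be closed explicitly by looking it up in the returned map (a sketch; assumes :db-conn holds something that implements java.io.Closeable):

(defn cleanup!
  "Close the stateful dependencies in the injected-graph map; a sketch that
  assumes :db-conn holds something implementing java.io.Closeable."
  [injected]
  (when-let [conn (:db-conn injected)]
    (.close ^java.io.Closeable conn)))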

DIME: Testing & mocking

Mocking is quite easy with DIME because all dependencies are passed as arguments. For example, the function cached-fetch-order may be passed a mock version of db-fetch-order to simulate a suitable success or failure condition. This works well in ordinary situations as well as in tricky ones such as diamond dependencies, concurrency, lazy evaluation and asynchronous code.
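
For instance, a database failure can be simulated by passing a throwing mock in place of db-fetch-order (a sketch using clojure.test; the stand-in implementation is illustrative):

(require '[clojure.test :refer [deftest is]])

(defn cached-fetch-order [db-fetch-order order-id]
  ;; stand-in for the real implementation: consult cache, else delegate
  (db-fetch-order order-id))

(deftest cached-fetch-order-db-failure
  (let [failing-fetch (fn [_order-id] (throw (ex-info "db down" {})))]
    (is (thrown? clojure.lang.ExceptionInfo
                 (cached-fetch-order failing-fetch 42)))))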

DIME: REPL and Editor convenience

DIME version 0.5.0 added development support features, one of which can be used by tools to locate the source code of injected dependencies. This has been used to extend M-. (meta-dot, the command for navigating to the source of a function) in the popular Emacs CIDER plugin. The DIME repo contains a `dime-cider.el` file with the Emacs configuration to extend CIDER's M-. so that it recognizes the DIME metadata and looks up dependencies on the fly to navigate to their source code. This takes away the pain of locating the source of injected dependencies in Push-model dependency injection.

There are also utility functions to dynamically create injected functions as vars at runtime. These are useful at the REPL for various use cases, and also for looking up the implementation source code. The DIME docs are a good place to explore these aspects further.

If you like this post, you may want to follow me on Twitter and Github.

You may like to discuss this on Hacker News or on Reddit.

(Thanks to Vijay Mathew, Ravindra Jaju and Vipin Nair for reviewing drafts of this post.)
