
Gang of Four Patterns With Type-Classes and Implicits in Scala (Part 2)

Type-classes are a powerful tool for library creators and maintainers. They reduce boilerplate, open libraries to extension, and act as a compile-time switch. The GoF patterns, similarly, are a collection of software organizational patterns aimed at improving the quality of code. The last blog post explored using one such pattern, the bridge pattern, with type-classes and implicits. In this post I’ll move on to the adapter pattern.

The adapter pattern is the most widely used and recognized GoF pattern expressed as a type-class within the Scala community. The standard library includes several examples, such as Ordering and Numeric. As with any well-designed and well-implemented library, their use is transparent and invisible to library consumers. Many people coming to Scala have probably used these without even realizing it. If you’re already familiar with the adapter pattern, skip the next section.

Adapter Pattern

The adapter pattern (sometimes called the wrapper pattern) is an OO design concept invented to solve the problem of code duplication and promote code reuse in the presence of disparate interfaces. It does so by unifying code around a common interface, and it encapsulates the GoF philosophy of “programming to an interface.” From this brief description the bridge and adapter patterns may seem indistinguishable, but their purposes are strikingly different. Whereas the bridge pattern is used to allow N concepts to vary independently behind N interfaces, the adapter pattern is used to reduce N interfaces to one when the underlying concept is the same. To wit, it is used as the “glue” to tie or adapt inconsistent APIs together (hence the name).

Adapters are more of a handyman’s tool for pragmatic code construction. The common use case is a preexisting component or library that has to be made to work within a code base that was not designed to accommodate it. In general, they are built in advance only when libraries need backwards compatibility between versions or need to serve as extension points for future library users. So what does it look like? Let’s take as an example two interfaces for addition:

trait Addable {
  def add(x: Int, y: Int): Int = x + y
}
trait Summable {
  def plus(x: Int)(y: Int): Int = x + y
}

where Summable’s “plus” method exposes the curried form of Addable’s “add” method. If we wanted to make these two interfaces interoperable in a purely OO world we’d produce:

class Add2SumAdapter(addable: Addable) extends Summable {
  override def plus(x: Int)(y: Int): Int = addable.add(x, y)
}
class Sum2AddAdapter(sum: Summable) extends Addable {
  override def add(x: Int, y: Int): Int = sum.plus(x)(y)
}
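To see the adapters in action, here is a hypothetical usage sketch (the anonymous instances and value names below are mine, not from the original; the interfaces are restated so the sketch is self-contained):

```scala
// The interfaces from above, with default implementations.
trait Addable {
  def add(x: Int, y: Int): Int = x + y
}
trait Summable {
  def plus(x: Int)(y: Int): Int = x + y
}
class Add2SumAdapter(addable: Addable) extends Summable {
  override def plus(x: Int)(y: Int): Int = addable.add(x, y)
}
class Sum2AddAdapter(sum: Summable) extends Addable {
  override def add(x: Int, y: Int): Int = sum.plus(x)(y)
}

// Code written against Summable can now consume any Addable, and vice versa.
val addable: Addable = new Addable {}
val asSummable: Summable = new Add2SumAdapter(addable)
val roundTrip: Addable = new Sum2AddAdapter(asSummable)

val viaSummable = asSummable.plus(2)(3) // 5
val viaAddable  = roundTrip.add(2, 3)   // 5
```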

These adapters essentially package up one interface as another. Library decoupling achieved. As an aside, it has been argued that as we move towards a functional style of programming we can mitigate the need for, and the boilerplate required by, the adapter pattern. In FP languages the adapter morphs into currying, function composition and lifting. This change in dynamic is important to understand: mitigation is not the same as elimination. One need only think about the type signatures of the functions above to see how the need for adaptation continues to hold in functional terms:

type Plus = (Int => Int => Int)
type Add = ((Int, Int) => Int)

There is just no way to avoid confronting the type mismatch. It will have to be handled somewhere.
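In fact, this particular adaptation is exactly what the standard library’s `curried` and `Function.uncurried` perform. A minimal sketch (value names are mine):

```scala
type Plus = Int => Int => Int
type Add  = (Int, Int) => Int

val plus: Plus = x => y => x + y

// The "adapter" in functional clothing: uncurrying and currying.
val add: Add        = Function.uncurried(plus)
val plusAgain: Plus = add.curried

val viaAdd  = add(2, 3)       // 5
val viaPlus = plusAgain(2)(3) // 5
```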

Adapter Pattern in Type-Classes

Before we dig into the adapter pattern and type-classes in depth, let’s take a step back and talk about a bigger issue: library scope. By that we mean a concept related to the adapter pattern, the Dependency Inversion Principle, or DIP for short. The hallmark of code written using this technique is a decoupling of higher-level modules from lower-level modules, achieved by forcing the lower-level modules to conform to an interface defined at the higher level. Thus the inversion: the higher-level module defines the building blocks upon which it is built, not the other way around.

DIP results in cleaner and more extensible code, but doing so relies heavily on the use of structural design patterns, the adapter among them. Scala allows for implicit conversions between types and, thus, DIP could be implemented in terms of implicit conversions in a very OO style of coding. This would solve the problem of too much interface-adapting boilerplate within our code but, as a side effect, it would lead to code that was unnaturally hard to debug and errors that were even harder to trace. There’s a reason implicit conversions were placed behind a feature flag in 2.10 (and if you’ve experienced the joys of mutable implicit state you’ll have a painful understanding of why).
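As a sketch of that OO-style approach, here is an implicit conversion between the earlier Addable and Summable interfaces (restated with abstract methods for brevity; the conversion name is my invention). The adaptation becomes invisible at the call site, which is precisely what makes it hard to trace:

```scala
import scala.language.implicitConversions // the 2.10+ feature flag mentioned above

trait Addable  { def add(x: Int, y: Int): Int }
trait Summable { def plus(x: Int)(y: Int): Int }

// The compiler silently inserts this wherever an Addable is used as a Summable.
implicit def add2Sum(addable: Addable): Summable = new Summable {
  def plus(x: Int)(y: Int): Int = addable.add(x, y)
}

val intAdd: Addable = new Addable { def add(x: Int, y: Int) = x + y }
val silent = (intAdd: Summable).plus(2)(3) // conversion injected by the compiler
```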

Generalizing DIP at the type level leads to an even more powerful construct known as ad hoc polymorphism. In this case we define a type-class which acts as an adapter, allowing objects of a given type to express a finite set of operations. This set of operations defines an interface against which code can be written, independently of the type of the object. Instead of wrapping the class within an interface, the object and its type become an argument to the interface. Implicit scope resolution is relied upon to inject the correct type-class instance based on the argument type at the function call site.

Before we go further, let me point out that DIP, when used in the context of implicits as we have just described, may sound similar to another code organizational technique known as dependency injection. DIP is not dependency injection, and neither is what we have just described. DIP is resolved at compile time while DI revolves around run-time resolution. One can make use of the other, but they are as different as different can be.

Let us look at two example implementations of date and time in Java: java.util.Date and Joda-Time’s org.joda.time.DateTime. Both represent date and time with methods for modification. However, one is a mutable construct whose methods work by side effect, while the other is immutable, its methods returning a new instance. If we wish to work with a date/time type but still remain decoupled from the concrete realization of that type, we’d encode the interface of the behaviors in a reusable type-class:

trait DateFoo[DateType] {
  def addHours(date: DateType)(hours: Int): DateType
  def addDays(date: DateType)(days: Int): DateType
  def addMonths(date: DateType)(months: Int): DateType
  def addYears(date: DateType)(years: Int): DateType
  def now(): DateType
}
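The post doesn’t show an instance, but a sketch for the mutable java.util.Date might look like the following (the Calendar-based helper and the value names are my assumptions, not part of the original):

```scala
import java.util.{Calendar, Date}

trait DateFoo[DateType] {
  def addHours(date: DateType)(hours: Int): DateType
  def addDays(date: DateType)(days: Int): DateType
  def addMonths(date: DateType)(months: Int): DateType
  def addYears(date: DateType)(years: Int): DateType
  def now(): DateType
}

// A sketch instance adapting Date's side-effecting API behind the
// immutable-looking type-class interface: callers never see mutation.
implicit val javaDateFoo: DateFoo[Date] = new DateFoo[Date] {
  private def shift(date: Date, field: Int, amount: Int): Date = {
    val cal = Calendar.getInstance()
    cal.setTime(date)
    cal.add(field, amount)
    cal.getTime // a fresh Date; the argument is left untouched
  }
  def addHours(date: Date)(hours: Int): Date   = shift(date, Calendar.HOUR_OF_DAY, hours)
  def addDays(date: Date)(days: Int): Date     = shift(date, Calendar.DAY_OF_MONTH, days)
  def addMonths(date: Date)(months: Int): Date = shift(date, Calendar.MONTH, months)
  def addYears(date: Date)(years: Int): Date   = shift(date, Calendar.YEAR, years)
  def now(): Date = new Date()
}

val epoch    = new Date(0L)
val tomorrow = implicitly[DateFoo[Date]].addDays(epoch)(1)
// epoch itself is untouched: the type-class hides Date's mutability
```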

Anywhere in our application we could then write code which looked like the following:

trait TimeTrackDAO {
  def checkin[A](unit: String, key: String, time: A)(implicit date: DateFoo[A]): A
  def checkout[A](unit: String, key: String, time: A)(implicit date: DateFoo[A]): A
  def itemize(unit: String, key: String): Int
}

and as long as there was an implicitly scoped DateFoo type-class instance for A, both the “checkin” and “checkout” methods would just work. And if there wasn’t such an instance? We could still use the “itemize” method, but the other two would not compile! Meditate on that for a second.

Let me say it another way: there is nothing stopping us from using the “itemize” method of our TimeTrackDAO even if we have not defined any DateFoo type-class in the system. Only when we attempt to use either the “checkin” or “checkout” methods with a type lacking a DateFoo would our code fail to compile. This is the compile-time switch mentioned in the introductory paragraph. Type-classes with implicit resolution allow class/trait functionality to be enabled or disabled at compile time based on a type parameter and scoping rules.
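A minimal sketch of that switch follows. FakeDate, its instance, and the method bodies are made up for illustration; this is not the article’s DAO, just the resolution mechanics:

```scala
trait DateFoo[A] {
  def addHours(date: A)(hours: Int): A
  def now(): A
}

// A toy date type and its type-class instance.
case class FakeDate(millis: Long)
implicit val fakeDateFoo: DateFoo[FakeDate] = new DateFoo[FakeDate] {
  def addHours(date: FakeDate)(hours: Int) = FakeDate(date.millis + hours * 3600000L)
  def now() = FakeDate(0L)
}

object TimeTracker {
  // Compiles at a call site only when a DateFoo[A] is implicitly in scope.
  def checkin[A](unit: String, key: String, time: A)(implicit date: DateFoo[A]): A =
    date.addHours(time)(1) // toy body: bump the clock by an hour
  // Needs no DateFoo, so it stays usable even with no instances defined.
  def itemize(unit: String, key: String): Int = 0
}

val checked = TimeTracker.checkin("dev", "ticket-1", FakeDate(0L)) // finds fakeDateFoo
```

Calling `TimeTracker.checkin` with a type that has no DateFoo instance in scope would be rejected at compile time, while `itemize` would remain available.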

Conclusion

OO design patterns and OO concepts have spent years being refined and explored by developers due, in large part, to the predominance of OO in mainstream languages. Good, SOLID methodologies have arisen from the need to combat the complexity that naturally arises out of OO-based designs. Functional concepts, such as type-classes, have only begun to trickle into languages which hybridize the two paradigms.

While there is a divide between standard OOP idioms and functional constructs like type-classes, the two can actually be used towards a common good. FP is not a silver bullet. It will not invalidate the common wisdom of many OO-inspired ideas, but FP concepts do force us to rethink the approach we take in wrestling with code complexity. The not-so-surprising usefulness of languages with functions as first-class citizens is just the tip of the iceberg.

Using the adapter pattern with FP-inspired type-classes, we’ve shown how to reduce boilerplate, open code to extension, and impose compile-time constraints. There has been no “magic” and none of the hard-to-trace issues that arise out of implicit conversions or DI-styled libraries. Type-classes using the adapter pattern are deliberate and explicit in use, with well-defined scope resolution enforced at compile time. They are the perfect combination of OO and FP principles.
 

Reference: Gang of Four Patterns With Type-Classes and Implicits in Scala (Part 2) from our JCG partner Owein Reese at the Statically Typed blog.
