A good friend asked me a question today, and I thought I’d share it with you. Consider this code snippet:
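The snippet itself didn’t survive into this version of the post, so here is a minimal sketch of the kind of code under discussion: customer data loose in a `Map`, with a self-describing helper method wrapping the lookup. All names (`Greeting`, `isAPremiumCustomer`, the `"premium"` key) are illustrative assumptions, not the original code.

```java
import java.util.Map;

class Greeting {
    // Self-describing method: the name explains what the lookup means.
    static boolean isAPremiumCustomer(Map<String, ?> customer) {
        return Boolean.TRUE.equals(customer.get("premium"));
    }

    static String greetingFor(Map<String, ?> customer) {
        if (isAPremiumCustomer(customer)) {
            return "Welcome back!";
        }
        return "Hello!";
    }
}
```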
The question was whether the above code is better for having the self-describing method, or whether the Inline Method refactoring should be applied, as described on Refactoring Guru and as originated by Martin Fowler. Interestingly, the former website uses the same example as the latter… where the latter is the original source.
Anyhow, this is an interesting dilemma. Is the following code better?
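This second version is also not shown in the post; a minimal sketch of what Inline Method would produce, assuming the check is a lookup on a hypothetical `"premium"` key, is:

```java
import java.util.Map;

class GreetingInlined {
    // Same behaviour, with the explanatory helper folded into its call site.
    static String greetingFor(Map<String, ?> customer) {
        return Boolean.TRUE.equals(customer.get("premium"))
                ? "Welcome back!"
                : "Hello!";
    }
}
```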
The second version is definitely more concise; the first seemed to be word-and-operator soup…
However, if there were multiple places where this logic were needed, I’d want a method. I’d also want a method that didn’t talk so much. How about this?
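A sketch of that middle ground, under the same assumptions as above (a `Map` with a hypothetical `"premium"` key): keep the extracted method for reuse, but give it a terser name.

```java
import java.util.Map;

class GreetingConcise {
    // The extracted check, with a name that doesn't talk so much.
    static boolean isPremium(Map<String, ?> customer) {
        return Boolean.TRUE.equals(customer.get("premium"));
    }

    static String greetingFor(Map<String, ?> customer) {
        return isPremium(customer) ? "Welcome back!" : "Hello!";
    }
}
```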
And you could leave it there…
Questions like this can end with something like: what’s the form we find easiest to read right now? Generally, you want something that doesn’t take too much thinking about and that balances conciseness with explanation.
But What’s Going On?
The old tech lead trick is to answer the question at hand, then step back and ask, “What’s going on here, though?”
In the above code, there’s this Map just lying around, and to make sense of it we need a function to decide things. Is this a case of a missing type? Is there an object model here that’s too anemic?
Should we make a service that renders useful facts out of the data?
Anemic Model vs Data on the Loose
The days of smart objects are generally gone. We realise that if we put too much behaviour into model objects, they become coupled to the wrong things. A model object should not, for instance, know how to save itself, because that just ends badly. Similarly, if one feature of the software has a preferred way of rendering a model object and another feature has a different one (e.g. the on-screen format versus the on-disk format), the model is probably not responsible for implementing the conversions to those formats.
The simplest solution to this is to have a very lightweight object model with getters and setters and other real basics, and then have services to transform that object model into other things, or pluck data from it.
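A minimal sketch of that split, with illustrative names (`Customer`, `CustomerRenderer`) that are assumptions rather than anything from the post: the model holds data and basic accessors, and a service owns the on-screen rendering.

```java
// Lightweight model: data plus the real basics, nothing more.
class Customer {
    private final String name;
    private final boolean premium;

    Customer(String name, boolean premium) {
        this.name = name;
        this.premium = premium;
    }

    String getName() { return name; }
    boolean isPremium() { return premium; }
}

// Rendering lives in a service, not in the model, so other features
// can format the same data differently without touching Customer.
class CustomerRenderer {
    String renderForScreen(Customer c) {
        return c.getName() + (c.isPremium() ? " (premium)" : "");
    }
}
```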
I tend to consider things like a JSON format to be native to an object model that was brought into being to represent that JSON data.
It’s generally obvious that an object is doing too much when it knows about more than one external technology that cares about representing it. That said, model objects annotated for two equivalent ways of rendering that exact model to JSON may be okayish…
And the whole point of the anemic object model discussion is to remind you that Map is not an object model. What we’ve got is data on the loose.
Data on the Loose
If you have some stuff in memory in basic collection classes and you need to create general-purpose algorithms to mine that data, it looks like you may have a missing model object.
One consideration, as hinted at above, is whether this designation of premium is an external opinion or an internal facet of the model object. In this case it looks like an internal facet: part of how the model would explain its own representation.
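One way to round up the data on the loose is to promote the Map into a small type in which the premium designation is an internal facet. A minimal sketch using a Java record (names and the `"premium"` key are illustrative assumptions):

```java
import java.util.Map;

// A record gives the loose data a name, a type, and accessors;
// "premium" becomes part of the model rather than a Map lookup.
record CustomerRecord(String name, boolean premium) {
    // Hypothetical bridge from the old Map representation.
    static CustomerRecord fromMap(Map<String, ?> data) {
        return new CustomerRecord(
                String.valueOf(data.get("name")),
                Boolean.TRUE.equals(data.get("premium")));
    }
}
```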
However, always look at the natural separation of concerns and watch out for data on the loose!
Opinions expressed by Java Code Geeks contributors are their own.