The 4 Levels of Freedom For Software Developers

For quite some time now I've been putting together, in my mind, what I think are the four distinct levels that software developers can go through in trying to gain their "freedom." For most of my software development career, when I worked for a company, as an employee, I had the dream of someday being free. I wanted to be able to work for myself. To me, that was the ultimate freedom. But, being naive as I was, I didn't realize that there were actually different levels of "working for yourself." I just assumed that if you were self-employed, you were self-employed. It turns out most software developers I have talked to about this topic have the same views I did – before I knew better. I've written in the past about how to quit your job, but this post is a bit different. This post is not really about how to quit your job, but about the different levels of self-employment you can attain after you do so.

The four levels

The four levels I am about to describe are based on the level of freedom you experience in your work; they have nothing to do with skill level. But, generally, we progress up these levels as we seek to, and hopefully succeed in, gaining more freedom. So, most software developers start at level one, and the first time they become self-employed, it is usually at level two – although it is possible to skip straight to three. Here is a quick definition of the levels (I'll cover each in detail next):

- Employed – you work for someone else
- Freelancer – you are your own boss, but you work for many someone elses
- Product creator – you are your own boss, but your customers determine what you work on
- Financially free – you work on what you want when you want; you don't need to make money

I started my career at level one and bounced back and forth between level two and level one for quite a while before I finally broke through to level three. I'm currently working on reaching level four – although I've found that it is easy to stay at level three even though you could move to four. Along the way, I've found that at each level I was at, I assumed I would feel completely free when I reached the level above. But each time I turned out to be wrong. While each level afforded me more general freedom, each level also seemed to not be what I imagined it would be.

Level one: Employed

Like I said, most software developers start out at this level. To be honest, most software developers stay at this level – and don't get me wrong, there is nothing wrong with staying there – as long as you are happy. At this level, you don't have much freedom at all, because you basically have to work on what you are told to work on, you have to work when you are told to and you are typically tied to a physical location. (Throughout this post, you'll see references to these three degrees of freedom.) Working for someone else isn't all that bad. You can have a really good job that pays really well, but in most cases you are trading a certain amount of freedom for some amount of security. You are getting a stable paycheck on a regular interval, but at the cost of a large portion of your freedom. Now, that doesn't mean that you can't have various levels of freedom within traditional employment. I think there are mini-levels of freedom that exist even when you are employed by someone else. For example, you are likely to be afforded more freedom about when you start and leave work as you move up and become more senior at a job.
You are also likely to be given a bit more autonomy over what you do – although Agile methodologies may have moved us back in that regard. You might even find freedom from location if you are able to find a job that allows you to telecommute. In my quest for more freedom, I actually traded a considerable amount of pay in order to accept a job where I would have the freedom of working from home. I erroneously imagined that working from home would be the ultimate freedom and that I would be content working for someone else the rest of my career, so long as I could do it from home. (Don't get me wrong, working from home has its perks, but it also has its disadvantages. When I worked from home, I felt more obligated to get more work done to prove that I wasn't just goofing off. I also felt that my work was never done.)

Now, like I said before, most people will stay at level one and perhaps move around, gaining more freedom through things like autonomy and a flexible working schedule, but there are definite caps on freedom at this level. No one is going to pay you to do what you want and tell you that you can disappear whenever you want to. You are also going to have your income capped. You can only make so much money working for someone else and that amount is mostly fixed ahead of time.

Level two: Freelancer

So, this is the only other level that I had really imagined existed for a software developer, for most of my career. I remember thinking about how wonderful it would be to work on my own projects with my own clients. I imagined that as a freelancer I could bid on government contracts and spend a couple of years doing a contract before moving on to the next. I also imagined an alternative where I worked for many different clients, working on different jobs at different times – all from the comfort of my PJs. When most software developers talk about quitting their jobs and becoming self-employed, I think this is what they imagine. They think, like I did, that this is the ultimate level of freedom.

It didn't take me very long as a freelancer to realize that there really wasn't much more freedom working as a freelancer than there was working for someone else. First of all, if you have just one big client, like most starting freelancers do, you are basically in a similar situation to when you are employed – the big difference is that now you can't bill for those hours you were goofing off. You will likely have more freedom about your working hours and days, but you'll be confined to the project your client has hired you to work on and you might even have to come into their office to do the work. This doesn't mean that you don't have more freedom, though; it is just a different kind. If you have multiple clients, you have more control over your life and what you work on. You can set your own rate, you can set your own hours and you can potentially turn down work that you don't want to do – although, in reality, you probably won't be turning down anything – especially if you are just starting out. Don't get me wrong, it is nice to have your own company and to be able to bill your clients, instead of being compelled to work for one boss who has ultimate control over your life, but freelancing is a lot of work and on a daily basis it may be difficult to actually feel more free than you would working for someone else. Given the choice of just doing freelancing work or working for someone else, I'd rather just take the steady paycheck.
I wouldn't have said this five years ago, but I know now that freelancing is difficult and stressful. I really wouldn't go down this road unless you know this is what you want to do or you are using it as a stepping stone to get to somewhere else. From a pay perspective, a freelancer can make a lot more money than most employees. I currently do freelance work and I don't accept any work for less than $300 an hour. Now, I didn't start at that rate – when I first started out $100 an hour was an incredible rate – but I eventually worked my way up to it. (If you want to find out how, check out my How to Market Yourself as a Software Developer package.) The big thing, though, is that your pay is not capped. The more you charge and the more hours you work, the more you make. You are only limited by the limits of those two things combined.

Level three: Product creator

This level is where things get interesting. When I was mostly doing freelancing, I realized that my key mistake was not in working for someone else, but in trading dollars for hours. I realized that as a freelancer my life was not as beautiful as I had imagined it. I was not really free, because if I wasn't working I wasn't getting paid. I actually ended up going back to full-time employment in order to rethink my strategy. The more I thought about it, the more I realized that in order to really gain the kind of freedom I wanted, I would need to create some kind of product that I could sell or some kind of service that would generate income all the time without me having to work all the time.

There are many ways to reach this level, but perhaps the most common is to build some kind of software or software as a service (SaaS) that generates income for you. You can then make money from selling that product and you get to work on that product when and how you see fit. You can also reach this level by selling digital products of some sort. I was able to reach this level through a combination of this blog, mobile apps I built, creating royalty-generating courses for Pluralsight and my own How to Market Yourself as a Software Developer package. (Yes, I have plugged it twice now, but hey, this is my blog – and this is how I make money.)

You have quite a bit of freedom at this level. You no longer have any real boss. There is no pointy-haired boss telling you what to work on and you don't have clients telling you what projects to work on either. You most likely can work from anywhere you want and whenever you want. You can even disappear for months at a time – so long as you figure out a way to handle support. Now, that doesn't mean that everything is peaches and roses at this level either. For one thing, I imagined that if I was creating products, I would get to work on exactly what I wanted to work on. This is far from the truth. I have a large degree of control over what I choose to work on and create, but because I am bound by the need to make money, I have to give a large portion of that control over to the market. I have to build what my customers will pay for. This might not seem like a big deal, but it is. I've always had the dream of writing code and working on my own projects. I dreamed that being a product creator and making money from my own products would give me that freedom. To some degree it does, but I also have to pay careful attention to what my audience and customers want and I have to put my primary focus on building those things. This level is also quite stressful, because everything depends on you.
You have to be successful to get paid. When you are an employee, all you have to do is show up. When you are a freelancer, you just have to get clients and do the work – you get paid for the work you put in, not the results. When you are a product creator, you might spend three months working on something and not make a dime. No one cares how much work you did; only results count. As far as income potential, there is no cap here. You might struggle to just make enough to live, but if you are successful, there is no limit to how much you could earn, since you are not bound by time. At this level you are no longer trading hours for dollars. To me, it isn't worth striving for level two; it is better to just work for someone else until you can reach level three, because this level of freedom is one that actually makes a big difference in your life. You still may not be able to work on just what you want to work on, but at least at this point – once you are successful – all the other areas of your life start to become much more free.

Level four: Financially free

I couldn't come up with a good name for this level, but this is the level where you no longer have to worry about making money. One thing I noticed when I finally reached level three was that a large portion of what was holding me back from doing exactly what I wanted to do was the need to generate income. Now, it's true that you can work on what you want to work on and make money doing it, but often the need to generate income tends to influence what you work on and how you work on it. For example, I'd really like to create a video game. I've always dreamed of doing a large game development project. But I know it isn't likely to be profitable. As long as I am worrying about income, my freedom is going to be limited to some degree. If I don't have passive income coming in that is more than enough to sustain me, I can't just quit doing the projects that do make me money and start writing code for a video game – well, I could, but it wouldn't be smart, and I'd feel pretty guilty about it.

So, in my opinion, the highest form of freedom a software developer can achieve is when they are financially free. What do I mean by financially free though? It basically means that you don't have to worry about cash. Perhaps you sold your startup for several million dollars or you have passive income coming in from real estate or other investments that more than provides for your daily living needs. (For some good information on how to do this or how this might work, I recommend starting with the book "Rich Dad Poor Dad".) At this level of freedom, you can basically do what you want. You can create software that interests you, because it interests you – you aren't worried about profitability. Want to create an Android app, just because? Go ahead. Want to learn a new programming language because you think it would be fun? Go for it. This has always been the level of freedom I have secretly wanted. I never wanted to sit back and not do anything, but I've always wanted to work on what interested me and only what interested me. Every other level that I thought would have this freedom, I realized didn't. I realized that there was always something else that was controlling what I worked on, be it my boss, my clients or my customers. Now, this doesn't mean that you can't still make money from your projects. In fact, paradoxically, I believe that if you can get to this stage, you have the potential to make the most money.
Once you start working on what you want to work on, you are more likely to put much more passionate work into it, and it is very likely that it will be of high value. This is where programming becomes more like art. I don't have any proof of this, of course, but I suspect that when you don't care about making money, because you are just doing what you love, that is when you make the most of it. Don't get me wrong, you might be able to focus on doing what you love even if you aren't making any money. I know plenty of starving artists do – or at least they tell themselves they do – but I can't do it. I've tried it, but I always feel guilty and stressed about the fact that what I am working on isn't profitable. In my opinion, you really have to be financially free to experience true creative freedom. I'm actually working on getting to this level. Technically, I could say I am there now, but I am still influenced greatly by profitability. Although, now, I am not choosing my projects solely on the criteria of what will make the most money. I am turning down more and more projects and opportunities that don't align with what I want to do, as I try to transition to working only on what interests me while my passive income increases.

What can you gather from all this? Well, the biggest thing is that freedom has different levels and that, perhaps, you don't want to be a freelancer after all. I think many software developers assume working for themselves by freelancing will give them the ultimate freedom. They don't realize that they'll only be able to work on exactly what they want to work on when they are actually financially free. So, my advice to you is that if you want to have full creative control over your life and what you work on, work on becoming financially free. If you want a high degree of autonomy in most of the areas of your life, you should try to develop and sell products. If you are happy just being your own boss, even if you have to essentially take orders from clients, freelancing might be the road for you. And, if all of this just seems like too high of a price to pay, you might want to just stay where you are and keep collecting your weekly paychecks – nothing wrong with that.

Reference: The 4 Levels of Freedom For Software Developers from our JCG partner John Sonmez at the Making the Complex Simple blog....

Creating Your Own Java Annotations

If you've been programming in Java and using any one of the popular frameworks like Spring and Hibernate, you should be very familiar with using annotations. When working with an existing framework, its annotations typically suffice. But have you ever found a need to create your own annotations? Not too long ago, I found a reason to create my own annotations for a project that involved verifying common data stored in multiple databases.

The Scenario

The business had multiple databases that were storing the same information and had various means of keeping the data up to date. The business had planned a project to consolidate the data into a master database to alleviate some of the issues involved with having multiple sources of data. Before the project could begin, however, the business needed to know how far out of sync the data was and make any necessary corrections to get it back in sync. The first step required creating a report that showed common data that belonged in multiple databases and validated the values, highlighting any records that didn't match according to the reconciliation rules defined. Here's a short summary of the requirements at the time:

- Compare the data between multiple databases for a common piece of data, such as a customer, company, or catalog information.
- By default the value found should match exactly across all of the databases based upon the type of value.
- For certain fields we only want to display the value found and not perform any data comparison.
- For certain fields we only want to compare the value found and perform data verification on the specific data sources specified.
- For certain fields we may want to do some complicated data comparisons that may be based on the value of other fields within the record.
- For certain fields we may want to format the data in a specific format, such as $000,000.00 for monetary amounts.
- The report should be in MS Excel format, each row containing the field value from each source. Any row that doesn't match according to the data verification rules should be highlighted in yellow.

Annotations

After going over the requirements and knocking around a few ideas, I decided to use annotations to drive the configuration for the data comparison and reporting process. We needed something that was somewhat simple, yet highly flexible and extensible. These annotations will be at the field level and I like the fact that the configuration won't be hidden away in a file somewhere on the classpath. Instead you'll be able to look at the annotation associated with a field to know exactly how it will be processed. In the simplest terms, an annotation is nothing more than a marker, metadata that provides information but has no direct effect on the operation of the code itself. If you've been doing Java programming for a while now you should be pretty familiar with their use, but maybe you've never had a need to create your own. To do that you'll need to create a new type that uses the Java type @interface and that will contain the elements that specify the details of the metadata. Here's an example from the project:

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface ReconField {

    /**
     * Value indicates whether or not the values from the specified sources should be compared or will be used to display values or reference within a rule.
     *
     * @return The value if sources should be compared, defaults to true.
     */
    boolean compareSources() default true;

    /**
     * Value indicates the format that should be used to display the value in the report.
     *
     * @return The format specified, defaulting to native.
     */
    ReconDisplayFormat displayFormat() default ReconDisplayFormat.NATIVE;

    /**
     * Value indicates the ID value of the field used for matching source values up to the field.
     *
     * @return The ID of the field.
     */
    String id();

    /**
     * Value indicates the label that should be displayed in the report for the field.
     *
     * @return The label value specified, defaults to an empty string.
     */
    String label() default "";

    /**
     * Value that indicates the sources that should be compared for differences.
     *
     * @return The list of sources for comparison.
     */
    ReconSource[] sourcesToCompare() default {};
}

This is the main annotation that will drive how the data comparison process will work. It contains the basic elements required to fulfill most of the requirements for comparing the data amongst the different data sources. The @ReconField should handle most of what we need except for the requirement of more complex data comparison, which we'll go over a little bit later. Most of these elements are explained by the comments associated with each one in the code listing; however, there are a couple of key annotations on our @ReconField that need to be pointed out.

- @Target – This annotation allows you to specify which Java elements your annotation should apply to. The possible target types are ANNOTATION_TYPE, CONSTRUCTOR, FIELD, LOCAL_VARIABLE, METHOD, PACKAGE, PARAMETER and TYPE. In our @ReconField annotation it is specific to the FIELD level.
- @Retention – This allows you to specify when the annotation will be available. The possible values are CLASS, RUNTIME and SOURCE. Since we'll be processing this annotation at RUNTIME, that's what this needs to be set to.

This data verification process will run one query for each database and then map the results to a common data bean that represents all of the fields for that particular type of business record. The annotations on each field of this mapped data bean tell the processor how to perform the data comparison for that particular field and its value found on each database. So let's look at a few examples of how these annotations would be used for various data comparison configurations.

To verify that the value exists and matches exactly in each data source, you would only need to provide the field ID and the label that should be displayed for the field on the report.

@ReconField(id = CUSTOMER_ID, label = "Customer ID")
private String customerId;

To display the values found in each data source, but not do any data comparisons, you would need to specify the element compareSources and set its value to false.

@ReconField(id = NAME, label = "NAME", compareSources = false)
private String name;

To verify the values found in specific data sources but not all of them, you would use the element sourcesToCompare. Using this would display all of the values found, but only perform data comparisons on the data sources listed in the element. This handles the case in which some data is not stored in every data source. ReconSource is an enum that contains the data sources available for comparison.
@ReconField(id = PRIVATE_PLACEMENT_FLAG, label = "PRIVATE PLACEMENT FLAG", sourcesToCompare = { ReconSource.LEGACY, ReconSource.PACE })
private String privatePlacementFlag;

Now that we've covered our basic requirements, we need to address the ability to run complex data comparisons that are specific to the field in question. To do that, we'll create a second annotation that will drive the processing of custom rules.

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface ReconCustomRule {

    /**
     * Value indicates the parameters used to instantiate a custom rule processor, the default value is no parameters.
     *
     * @return The String[] of parameters to instantiate a custom rule processor.
     */
    String[] params() default {};

    /**
     * Value indicates the class of the custom rule processor to be used in comparing the values from each source.
     *
     * @return The class of the custom rule processor.
     */
    Class<?> processor() default DefaultReconRule.class;
}

Very similar to the previous annotation, the biggest difference in the @ReconCustomRule annotation is that we are specifying a class that will execute the data comparison when the recon process executes. You can only define the class that will be used, so your processor will have to instantiate and initialize any class that you specify. The class that is specified in this annotation will need to implement a custom rule interface, which will be used by the rule processor to execute the rule.

Now let's take a look at a couple of examples of this annotation. In this example, we're using a custom rule that will check to see if the stock exchange is not the United States and skip the data comparison if that's the case. To do this, the rule will need to check the exchange country field on the same record.

@ReconField(id = STREET_CUSIP, label = "STREET CUSIP", compareSources = false)
@ReconCustomRule(processor = SkipNonUSExchangeComparisonRule.class)
private String streetCusip;

Here's an example where we are specifying a parameter for the custom rule, in this case a tolerance amount. For this specific data comparison, the values being compared cannot be off by more than 1,000. By using a parameter to specify the tolerance amount, we can use the same custom rule on multiple fields with different tolerance amounts. The only drawback is that these parameters are static and can't be dynamic due to the nature of annotations.

@ReconField(id = USD_MKT_CAP, label = "MARKET CAP USD", displayFormat = ReconDisplayFormat.NUMERIC_WHOLE, sourcesToCompare = { ReconSource.LEGACY, ReconSource.PACE, ReconSource.BOB_PRCM })
@ReconCustomRule(processor = ToleranceAmountRule.class, params = { "10000" })
private BigDecimal usdMktCap;

As you can see, we've designed quite a bit of flexibility into a data verification report for multiple databases by just using a couple of fairly simple annotations. For this particular case, the annotations are driving the data comparison processing, so we're actually evaluating the annotations that we find on the mapped data bean and using those to direct the processing.

Conclusion

There are numerous articles out there already about Java annotations, what they do, and the rules for using them. I wanted this article to focus more on an example of why you might want to consider using them and see the benefit directly. Keep in mind that this is only the starting point; once you have decided on creating annotations you'll still need to figure out how to process them to really take full advantage of them.
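As a small preview of that processing step, here is a minimal, hypothetical sketch of how a processor could discover these annotations on a mapped data bean at runtime; the scanner class and the way values are handled below are illustrative only and not taken from the original project:

import java.lang.reflect.Field;

// Illustrative sketch only: iterate a mapped data bean's fields and read the
// @ReconField and @ReconCustomRule metadata that drives the comparison.
public class ReconAnnotationScanner {

    public static void scan(Object mappedBean) throws IllegalAccessException {
        for (Field field : mappedBean.getClass().getDeclaredFields()) {
            ReconField recon = field.getAnnotation(ReconField.class);
            if (recon == null) {
                continue; // field is not part of the reconciliation report
            }
            field.setAccessible(true);
            Object value = field.get(mappedBean);
            System.out.println(recon.id() + " (" + recon.label() + ") = " + value
                    + ", compareSources=" + recon.compareSources());

            // A custom rule, if present, would be instantiated and applied here.
            ReconCustomRule rule = field.getAnnotation(ReconCustomRule.class);
            if (rule != null) {
                System.out.println("  custom rule: " + rule.processor().getSimpleName());
            }
        }
    }
}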
In part two, I'll show you how to process these annotations using Java reflection. Until then, here are a couple of good resources to learn more about Java annotations:

- The Java Annotation Tutorial - http://docs.oracle.com/javase/tutorial/java/annotations/
- Java Annotations - http://tutorials.jenkov.com/java/annotations.html
- How Annotations Work - http://java.dzone.com/articles/how-annotations-work-java

Reference: Creating Your Own Java Annotations from our JCG partner Jonny Hackett at the Keyhole Software blog....

Converting JSON to XML to Java Objects using XStream

The XStream library can be an effective tool for converting between JSON, Java objects and XML. Let's explore each of these conversions one by one and see which driver is used.

Handling JSON

To convert JSON to Java objects, all you have to do is initialize an XStream object with an appropriate driver and you are ready to serialise your objects to (and from) JSON. XStream currently delivers two drivers for JSON-to-object conversion:

- JsonHierarchicalStreamDriver: This does not have an additional dependency, but can only be used to write JSON, not deserialize it.
- JettisonMappedXmlDriver: This is based on Jettison and can also deserialize JSON to Java objects again.

Jettison driver

The Jettison driver uses the Jettison StAX parser to read and write data in JSON format. It has been available in XStream since version 1.2.2 and is implemented in the com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver class. In order to get this working, we need to add the dependencies to the pom:

<dependencies>
    <dependency>
        <groupId>com.thoughtworks.xstream</groupId>
        <artifactId>xstream</artifactId>
        <version>1.4.7</version>
    </dependency>
    <dependency>
        <groupId>org.codehaus.jettison</groupId>
        <artifactId>jettison</artifactId>
        <version>1.1</version>
    </dependency>
</dependencies>

And the code to convert JSON to an object and an object to JSON:

XStream xstream = new XStream(new JettisonMappedXmlDriver());
xstream.toXML(obj);    // converts Object to JSON
xstream.fromXML(json); // converts JSON to Object

Serializing an object to XML

To serialize an object to XML, XStream uses two drivers:

StaxDriver

XStream xstream = new XStream(new StaxDriver());
xstream.toXML(obj);   // converts Object to XML
xstream.fromXML(xml); // converts XML to Object

DomDriver

XStream xstream = new XStream(new DomDriver());
xstream.toXML(obj);   // converts Object to XML
xstream.fromXML(xml); // converts XML to Object

Finally, let's see all these in one class:

package com.anirudh;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver;
import com.thoughtworks.xstream.io.xml.DomDriver;
import com.thoughtworks.xstream.io.xml.StaxDriver;

/**
 * Created by anirudh on 15/07/14.
 */
public class Transformer<T> {

    private static final XStream XSTREAM_INSTANCE = null;

    public T getObjectFromJSON(String json) {
        return (T) getInstance().fromXML(json);
    }

    public String getJSONFromObject(T t) {
        return getInstance().toXML(t);
    }

    private XStream getInstance() {
        if (XSTREAM_INSTANCE == null) {
            return new XStream(new JettisonMappedXmlDriver());
        } else {
            return XSTREAM_INSTANCE;
        }
    }

    public T getObjectFromXML(String xml) {
        return (T) getStaxDriverInstance().fromXML(xml);
    }

    public String getXMLFromObject(T t) {
        return getStaxDriverInstance().toXML(t);
    }

    public T getObjectFromXMLUsingDomDriver(String xml) {
        return (T) getDomDriverInstance().fromXML(xml);
    }

    public String getXMLFromObjectUsingDomDriver(T t) {
        return getDomDriverInstance().toXML(t);
    }

    private XStream getStaxDriverInstance() {
        if (XSTREAM_INSTANCE == null) {
            return new XStream(new StaxDriver());
        } else {
            return XSTREAM_INSTANCE;
        }
    }

    private XStream getDomDriverInstance() {
        if (XSTREAM_INSTANCE == null) {
            return new XStream(new DomDriver());
        } else {
            return XSTREAM_INSTANCE;
        }
    }
}
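The Product domain object used by the test below is not shown in the post; a minimal assumed version, inferred from the new Product(123, "Banana", 23.00) call in the test, could look like this:

package com.anirudh.domain;

// Assumed shape of the Product bean, inferred from its use in the test below.
public class Product {

    private int id;
    private String name;
    private double price;

    public Product(int id, String name, double price) {
        this.id = id;
        this.name = name;
        this.price = price;
    }

    public int getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public double getPrice() {
        return price;
    }
}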
Write a JUnit class to test it:

package com.anirudh;

import com.anirudh.domain.Product;
import org.junit.Before;
import org.junit.Test;

/**
 * Created by anirudh on 15/07/14.
 */
public class TransformerTest {

    private Transformer<Product> productTransformer;
    private Product product;

    @Before
    public void init() {
        productTransformer = new Transformer<Product>();
        product = new Product(123, "Banana", 23.00);
    }

    @Test
    public void testJSONToObject() {
        String json = productTransformer.getJSONFromObject(product);
        System.out.println(json);
        Product convertedProduct = productTransformer.getObjectFromJSON(json);
        System.out.println(convertedProduct.getName());
    }

    @Test
    public void testXMLtoObjectForStax() {
        String xml = productTransformer.getXMLFromObject(product);
        System.out.println(xml);
        Product convertedProduct = productTransformer.getObjectFromXML(xml);
        System.out.println(convertedProduct.getName());
    }

    @Test
    public void testXMLtoObjectForDom() {
        String xml = productTransformer.getXMLFromObjectUsingDomDriver(product);
        System.out.println(xml);
        Product convertedProduct = productTransformer.getObjectFromXMLUsingDomDriver(xml);
        System.out.println(convertedProduct.getName());
    }
}

The full code can be seen here. In the next blog, we will compare the use cases, exploring where each approach fits in.

Reference: Converting JSON to XML to Java Objects using XStream from our JCG partner Anirudh Bhatnagar at the anirudh bhatnagar blog....

Agile Myth #1: “Agile is a Methodology”

First of all, if you look up the word "methodology" in the dictionary, it says "study of methods". When people in the technical or research fields say "methodology", they really mean "process". So what is a process? A process is basically a set of instructions: you follow Step 1, Step 2, Step 3; you use this activity in situation A, and another activity in situation B; you use this document template, that design notation, etc.

Now, process is essential to writing good software. All the "Agile methodologies" – Scrum, XP, Kanban, etc. – prescribe processes. However, what differentiates Agile processes from other processes like RUP (which is a very good process if evaluated solely on process) is the underlying values and principles. So Scrum, XP, Kanban – these are processes that are founded on Agile. But Agile itself is not a process. Rather, Agile is the values and principles that allow software development processes to be successful. Read the Agile Manifesto, the Principles Behind the Agile Manifesto, the Values of Extreme Programming, the 7 Key Principles of Lean Software Development. Agile processes try to reduce the amount of process in software development, but something has to replace what has been removed, or the remaining lightweight processes fall apart. It is values and principles that replace the heavyweight processes that have been removed, so that the lightweight processes have a foundation to stand on. Let me go through some of the values and principles common to many Agile processes.

Customer Value

Kent Beck, the inventor of XP, said the reason he invented XP was "to heal the divide between business people and programmers." Traditional processes tend to put development teams and business stakeholders at odds with each other, which is wrong, since the very purpose of a development team is to create something that is valuable and useful for the business stakeholders. Agile reminds development teams of this primary purpose – that we exist to create things of value to our customers. This is the main motivation for everything we do – why we solicit feedback, why we deliver in short iterations, why we allow changes even late in a project, why we place such strong emphasis on testing. The first characteristic of a team that is really Agile, as opposed to one that is just going through the motions of Agile, is that the members of the team are primarily motivated by the desire to create things of value for their customers.

Evidence-Based Decision-Making

In Agile, emphasis is placed on making decisions based on evidence. We make estimates after we've done a few iterations and have an idea of our team's velocity, not estimates just pulled out of someone's butt without any basis. We derive requirements based on actual customer feedback, not what we think our customers will want, even if the customers have not had the chance to try out a working system. We base our architectures on Spikes – small prototypes which we subject to analysis and tests – not just vendor documentation, articles that we've read, or whatever is the hype technology of the year.
Technical Excellence

Of the 17 authors of the Agile Manifesto, many are thought leaders on the engineering side of software development – Kent Beck is the creator of JUnit and Test-Driven Development, Martin Fowler has written several books on design patterns and refactoring, Robert Martin has written several books on object-oriented design and code quality. A lot of people think Agile is mainly about project management, but the creators of Agile put equal if not greater emphasis on engineering as well.

Feedback, Visibility, Courage

Agile is about bringing up problems early and often, when they're still easier to fix. We have stand-up meetings not so that we can update the project manager on our work, but so that we can bring up issues that others can help us with. We have big visible charts so that everyone, not just the team but the whole organization, knows about the progress of the work and whether there are any issues. We emphasize testing so that we can discover issues earlier rather than later. For all this to work, there needs to be a culture of courage. This is especially difficult for Filipino culture. When there's a problem, we Filipinos tend to keep it to ourselves and fix it ourselves – we'll work unpaid overtime and weekends to try to fix a problem, but when we fail the problem just ends up bigger than when we started. A culture of courage is fundamental in creating a culture and process of feedback and visibility.

Eliminating Waste

One way we eliminate waste is with requirements. When we specify a lot of requirements upfront, a lot of the work in specifying and building those requirements gets wasted, since it's only when customers are actually able to use a product that we realize a lot of the requirements were wrong. Instead, in Agile, we specify just enough requirements to build the smallest releasable system, so that we can get reliable feedback from customers before building more requirements. Another example is limiting a team's "Work in Process" or WIP. For example, if the team notices that the number of stories "in process" exceeds their self-designated WIP limit, they stop and check what's causing the problem. If, for example, the problem is that the developers are building more features than the testers can test, the developers should stop working and instead help with the testing. Otherwise, untested work just piles up behind the testers as waste.

Human Interaction

One of the most wasteful things I've seen is how people hide behind emails, documents and issue-tracking tools to communicate, and this just leads to issues dragging on and on. A single issue discussed over email might end up being a thread of over a hundred emails, across several departments separated by different floors, with multiple managers CC'd on the discussion. Often, many of these issues that drag on for days or weeks could be resolved in minutes by people just having a conversation with one another. This is why Agile advocates a co-located, interdisciplinary team. A developer can just talk to the DBA across the table to resolve an issue within minutes, in less time than it would have taken the issue to be typed up in an email or as a ticket. This is why requirements are initially written as very brief User Stories, to serve simply as placeholders for face-to-face discussions between the Product Owner and the development team.
This is why stand-up meetings are preferred to status reports: spontaneous collaboration can be initiated there, whereas status reports are largely ignored except by the most meticulous of project managers.

Others

There are a number of other principles and values that I haven't covered here, which are probably much better explained at their respective sources anyway. Again, I recommend reading up on the Agile Manifesto, the Principles Behind the Agile Manifesto, the Values of Extreme Programming, and the 7 Key Principles of Lean Software Development. This is the first in a series of twelve myths of Agile that I will be discussing, based on my presentation at the SofTech 2014 software engineering conference. Hang on for my next blog post, where I will discuss Agile Myth #2: "Agile is About Project Management".

Reference: Agile Myth #1: "Agile is a Methodology" from our JCG partner Calen Legaspi at the Calen Legaspi blog....

Through The Looking Glass Architecture Antipattern

An anti-pattern is a commonly recurring software design pattern that is ineffective or counterproductive. One architectural anti-pattern I've seen a number of times is something I'll call the "Through the Looking Glass" pattern. It's named so because APIs and internal representations are often reflected in and defined by their consumers. The core of this pattern is that the software components are split in a way that couples multiple components inappropriately. An example is software that is split between a "front end" and a "back end", but where both use the same interfaces and/or value objects to do their work. This causes a situation where almost every back-end change requires a front-end change and vice versa. As particularly painful examples, think of using GWT (in general) or trying to use SOAP APIs through JavaScript.

There are a number of other ways this pattern can show up; most often it can be a team structure problem. Folks will be split by their technical expertise, but then impede progress because the DBAs who are required to make database changes are not aware of what the front-end application needs, so the front-end developers end up spending a large amount of time "remote controlling" the database team. It can also be a situation where there is a desire for a set of web service APIs to be exposed for front-end applications, but because the back-end service calls exist only for the front-end application, there end up being a number of chicken-and-egg situations.

In almost every case, the solution to this problem is to either #1 add a translation layer, #2 simplify the design, or #3 restructure the work such that it can be isolated on a component-by-component basis. In the web service example above, it's probably more appropriate to use direct integration for the front end and expose web services AFTER the front-end work has been done. For the database problem, the DBA probably should be embedded with the front-end developer and "help" with screen design so that they have a more complete understanding of what is going on. An alternate solution to the database problem might be to allow the front-end developer to build the initial database (if they have the expertise) and allow the DBA to tune the design afterwards. I find that it's most often easiest and most economical to add a translation layer between layers rather than trying to unify the design, unless the solution space can be clearly limited in scope and scale. I say this because modern rapid development frameworks ([g]rails, play…) now support this in a very simple manner and there is no great reason NOT to do it… except maybe ignorance.

Reference: Through The Looking Glass Architecture Antipattern from our JCG partner Mike Mainguy at the mike.mainguy blog....

Java’s Volatile Modifier

A while ago I wrote a Java servlet Filter that loads configuration in its init function (based on a parameter from web.xml). The filter's configuration is cached in a private field. I set the volatile modifier on the field. When I later checked the company Sonar to see if it found any warnings or issues in the code, I was a bit surprised to learn that there was a violation on the use of volatile. The explanation read:

"Use of the keyword 'volatile' is generally used to fine tune a Java application, and therefore, requires a good expertise of the Java Memory Model. Moreover, its range of action is somewhat misknown. Therefore, the volatile keyword should not be used for maintenance purpose and portability."

I would agree that volatile is misknown by many Java programmers. For some, even unknown. Not only because it's never used much in the first place, but also because its definition changed since Java 1.5. Let me get back to this Sonar violation in a bit and first explain what volatile means in Java 1.5 and up (until Java 1.8 at the time of writing).

What is Volatile?

While the volatile modifier itself comes from C, it has a completely different meaning in Java. This may not help in growing an understanding of it; googling for volatile could lead to different results. Let's take a quick side step and see what volatile means in C first. In the C language the compiler ordinarily assumes that variables cannot change value by themselves. While this makes sense as default behavior, sometimes a variable may represent a location that can be changed (like a hardware register). Using a volatile variable instructs the compiler not to apply these optimizations.

Back to Java. The meaning of volatile in C would be useless in Java. The JVM uses native libraries to interact with the OS and hardware. Furthermore, it is simply impossible to point Java variables to specific addresses, so variables actually won't change value by themselves. However, the value of variables on the JVM can be changed by different threads. By default the compiler assumes that variables won't change in other threads. Hence it can apply optimizations such as reordering memory operations and caching the variable in a CPU register. Using a volatile variable instructs the compiler not to apply these optimizations. This guarantees that a reading thread always reads the variable from memory (or from a shared cache), never from a local cache.

Atomicity

Furthermore, on a 32-bit JVM volatile makes writes to 64-bit variables (like longs and doubles) atomic. To write a variable, the JVM instructs the CPU to write an operand to a position in memory. When using the 32-bit instruction set, what if the size of a variable is 64 bits? Obviously the variable must be written with two instructions, 32 bits at a time. In multi-threaded scenarios another thread may read the variable halfway through the write. At that point only the first half of the variable is written. This race condition is prevented by volatile, effectively making writes to 64-bit variables atomic on 32-bit architectures. Note that above I talked about writes, not updates. Using volatile won't make updates atomic. E.g. ++i when i is volatile would read the value of i from the heap or L3 cache into a local register, increment that register, and write the register back into the shared location of i. In between reading and writing i it might be changed by another thread. Placing a lock around the read and write instructions makes the update atomic. Or better, use non-blocking instructions from the atomic variable classes in the java.util.concurrent.atomic package.
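As a small illustration of this point (not from the original post), the following sketch shows that incrementing a volatile int from two threads typically loses updates, while an AtomicInteger does not:

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: volatile gives visibility, but ++ is still a read-modify-write,
// so concurrent increments on a volatile int can be lost; AtomicInteger increments cannot.
public class VolatileVsAtomic {

    private static volatile int volatileCounter = 0;
    private static final AtomicInteger atomicCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCounter++;               // not atomic: another thread may interleave
                atomicCounter.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // volatileCounter is usually less than 200000; atomicCounter is exactly 200000.
        System.out.println("volatile: " + volatileCounter + ", atomic: " + atomicCounter.get());
    }
}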
Side Effect

A volatile variable also has a side effect in memory visibility. Not just changes to the volatile variable itself are visible to other threads; any side effects of the code that led up to the change are also visible when a thread reads a volatile variable. Or more formally, a volatile variable establishes a happens-before relationship with subsequent reads of that variable. I.e., from the perspective of memory visibility, writing a volatile variable is effectively like exiting a synchronized block, and reading a volatile variable is like entering one.

Choosing Volatile

Back to my use of volatile to initialize a configuration once and cache it in a private field. Up to now I believe the best way to ensure visibility of this field to all threads is to use volatile. I could have used AtomicReference instead. Since the field is only written once (after construction, hence it cannot be final), atomic variables communicate the wrong intent. I don't want to make updates atomic; I want to make the cache visible to all threads. And for what it's worth, the atomic classes use volatile too.

Thoughts on this Sonar Rule

Now that we've seen what volatile means in Java, let's talk a bit more about this Sonar rule. In my opinion this rule is one of the flaws in configurations of tools like Sonar. Using volatile can be a really good thing to do if you need shared (mutable) state across threads. Of course, you must keep this to a minimum. But the consequence of this rule is that people who don't understand what volatile is follow the recommendation to not use volatile. If they remove the modifier, they effectively introduce a race condition. I do think it's a good idea to automatically raise red flags when misknown or dangerous language features are used. But maybe this is only a good idea when there are better alternatives to solve the same line of problems. In this case, volatile has no such alternative. Note that in no way is this intended as a rant against Sonar. However, I do think that people should select a set of rules that they find important to apply, rather than embracing default configurations. I find the idea of using rules that are enabled by default a bit naive. There's an extremely high probability that your project is not the one that the tool maintainers had in mind when picking their standard configuration. Furthermore, I believe that as you encounter a language feature that you don't know, you should learn about it. As you learn about it you can decide if there are better alternatives.

Java Concurrency in Practice

The de facto standard book about concurrency in the JVM is Java Concurrency in Practice by Brian Goetz. It explains the various aspects of concurrency in several levels of detail. If you use any form of concurrency in Java (or impure Scala), make sure you at least read the first three chapters of this brilliant book to get a decent high-level understanding of the matter.

Reference: Java's Volatile Modifier from our JCG partner Bart Bakker at the Software Craft blog....

Test Attribute #2: Readability

This is the 2nd post on test attributes that were described in the now famous "How to test your tests" post. We often forget that most of the value we get from tests comes after we've written them. Sure, TDD helps design the code, but let's face it, when everything works for the first time, our tests become the future guardians of our code. Once in place, we can change the code, hopefully for the better, knowing that everything still works. But if (and when) something breaks, there's work to be done. We need to understand what worked before that doesn't now. We then need to analyze the situation: according to what we're doing right now, should we fix this problem? Or is this new functionality that we now need to cover with new tests, throwing away the old one? Finally there's coding and testing again, depending on the result of our analysis. The further we move in time, the more the tests and code get stale in our minds. Then we forget about them completely. The cost of changes rises. The analysis phase becomes longer, because we need to reacquaint ourselves with the surroundings. We need to re-learn what still works, and what stopped working. Even if we knew, we don't remember which changes can cause side effects, and how those will work out. Effective tests minimize this process. They need to be readable. Readability is subjective. What I find readable now (immediately after I wrote it) will not seem so in 6 months, let alone to someone else. So instead of trying to define test readability, let's break it down into elements we care about, and can evaluate.

What A Name

The most important part of a test (apart from testing the right thing) is its name. The reason is simple: when a test breaks, the name is the first thing we see when the test fails. This is the first clue we get that something is wrong, and therefore it needs to tell us as much as possible. The name of a test should include (at least) the specific scenario and the expected result of our test. If we're testing APIs, it should say that too. For example:

@Test public void divideCounterBy5Is25() { ...

I can understand what the test does (a scenario about dividing Counter), the details (division by 5) and the expected result for this scenario (25). If it sounds like a sentence – even better. Good names come from verbally describing them. It doesn't matter if you use capitalization, underscores, or whatever you choose. It is important that you use the same convention that your team agrees on. Names should also be specific enough to mentally discern from other sibling tests. So, in our example:

@Test public void divideCounterBy0Throws() { ...

This test is similar enough to the first name to identify it as a "sibling" scenario, because of the resemblance in the prefix. The specific scenario and result are different. This is important because when those two appear together in the test runner, and one fails while the other doesn't, it helps us locate the problem before we even start debugging. These are clues to resolve the problem.

What A Body

If our names don't help locate the problem, the test body should fill the gaps. It should contain all the information needed to understand the scenarios. Here are a few tips to make test code readable:

- Tests should be short. About 10-15 lines short.
- If the setup is long, extract it to functions with descriptive names.
- Avoid using pre-test functions like JUnit's @Before or MSTest's [TestInitialize]. Instead use methods called directly from the test. When you look at the code, setup and tear down need to be visible; otherwise, you'll need to search further, and assume even more. Debugging tests with setup methods is no fun either, because you enter the test in a certain context that may surprise you.
- Avoid using base test classes. They too hide information relevant for understanding the scenarios. Preferring composition over inheritance works here too.
- Make the assert part stand out.
- Make sure that body and name align.
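Putting these tips together, a minimal sketch might look like the following; the Counter API here is invented purely for illustration, only the test name comes from the examples above:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Illustrative sketch: a short test with setup extracted into a descriptive method
// and the assert standing out. The Counter class itself is hypothetical.
public class CounterTest {

    @Test
    public void divideCounterBy5Is25() {
        Counter counter = counterStartingAt(125);

        counter.divideBy(5);

        assertEquals(25, counter.value());
    }

    private Counter counterStartingAt(int initialValue) {
        return new Counter(initialValue);
    }
}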
Analysis takes time, and the worst kind (and slowest) requires debugging. Tests (and code) should be readable enough to help us bypass this wasteful process.

Better Readability In Minutes

We are biased, and so we think that the code and tests we write are so good everyone can understand them. We're often wrong. In agile, feedback is the answer. Use the "Law of the Third Ear": grab an unsuspecting colleague's ear, pull it close to your screen, and she can tell you if she understands what the test does. Even better, pair while writing the tests. Feedback comes as a by-product and you get better tests. Whatever you do, don't leave it for later. And don't use violence if you don't need to. Make tests readable now, so you can read them later.

Reference: Test Attribute #2: Readability from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

The Life(Cycles) of UX/UI Development

It recently occurred to me that not one of the dozens and dozens of user interfaces I've worked on over the years had the same methodology/lifecycle. Many of those differences were a result of the environments under which they were constructed: startup, BIG company, government contract, side-project, open-source, freelance, etc. But the technology also played a part in choosing the methodology we used. Consider the evolution of UI technology for a moment.

Back in the early days of Unix/X Windows, UI development was a grind. It wasn't easy to relocate controls and re-organize interactions. Because of that, we were forced to first spend some time with a napkin and a sharpie, and run that napkin around to get feedback, etc. The "UX" cycle was born.

Then came along things like Visual Basic/C++ and WYSIWYG development. The UI design was literally the application. Drag a button here. Double click. Add some logic. Presto, instant prototype… and application. You could immediately get feedback on the look and feel, etc. It was easy to relocate and reorganize things and react to user feedback. What happened to the "UX" cycle? It collapsed into the development cycle. The discipline wasn't lost, it was just done differently, using the same tools/framework used for development.

Then, thanks to Mr. Gore, the world-wide web was invented, bringing HTML along with it. I'm talking early here, the days of HTML sans JavaScript frameworks. (Horror to think!) UI development was thrown back into the stone ages. You needed to make sure you got your site right, because adjusting look and feel often meant rewriting the entire web site/application. Many of the "roundtrip" interactions and the MVC framework, even though physically separated from the browser, were entwined in the flow/logic of the UI. (Spring Web Flow anyone? http://projects.spring.io/spring-webflow/) In such a world, again, you wanted to make sure you got it right, because adjustments were costly. Fortunately, in the meantime the UX discipline and its tools had advanced. It wasn't just about information display, it was about optimizing the interactions. The tools were able to not only play with look, but could focus on and mock out feel. We could do a full UX design cycle before any code was written. Way cool. Once blessed, the design was handed off to the development team, and implementation began.

Fast forward to present day. JavaScript frameworks have come a long way. I now know people that can mock up user experience *in code*, faster and more flexibly than the traditional wire-framing tools. This presents an opportunity to once again collapse the toolset and smooth the design/development process into one cohesive, ongoing process instead of the somewhat disconnected, one-time traditional handoff. I liken this to the shift that took place years ago for application design. We used to sit down and draw out the application architecture and the design before coding: class hierarchies, sequence diagrams, etc. (Rational Rose and UML anyone?) But once the IDEs advanced enough, it became faster to code the design than to draw it out. The disciplines of architecture and design didn't go away; they are just done differently now. Likewise with UX. User experience and design are still of paramount importance. And that needs to include user research, coordination with marketing, etc. But can we change the toolset at this point, so that design and development can be unified as one?
If so, imagine the smoothed, accelerated design -> development -> delivery (repeat) process we could construct! For an innovative company like ours, one that depends heavily on time-to-market, that accelerated process is worth a ton. We'll pay extra to get people who can bridge that gap between UX and development and play in both worlds. (Provided we don't need to sacrifice on either!) On that note, if you think you match that persona, let me know. We have a spot for you! And if you are a UXer, it might be worth playing around with Angular and Bootstrap to see how easy things have become. We don't mind on-the-job training!

Reference: The Life(Cycles) of UX/UI Development from our JCG partner Brian ONeill at the Brian ONeill's Blog blog....

What we forget about the Scientific Method

I get fed up hearing other Agile evangelists champion The Scientific Method. I don't disagree with them, and I use it myself, but I think they are sometimes guilty of overlooking how the scientific method is actually practiced by scientists and researchers. Too often the scientific approach is made to sound simple. It isn't.

First, let's define the scientific method. Perhaps rather than calling it the "scientific method" it is better called "experimentation." What the Agile evangelists of my experience are often advocating is running an experiment – perhaps several experiments in parallel, but more likely in sequence. The steps are something like this:

• Propose a hypothesis, e.g. undertaking monthly software releases instead of bi-monthly will result in a lower percentage of release problems
• Examine the current position: e.g. find the current figures for release frequency and problems, and record these
• Decide how long you want to run the experiment for, e.g. 6 months
• Introduce the change and reserve any judgement until the end of the experiment period
• Examine the results: recalculate the figures and compare them with the original figures
• Draw a conclusion based on observation and data

I agree with all of this, I think it's great, but…

Let's leave aside problems of measurement, problems of formulating the hypothesis, problems of making changes and propagation problems (i.e. the time it takes for changes to work through the system). These are not minor problems, and they do make me wonder about applying the scientific method in the messy world of software and business, but let's leave them to one side for the moment.

Let's also leave aside the so-called Hawthorne Effect – the tendency for people to change and improve their behaviour because they know they are in an experiment. Although the original Hawthorne experiments were shown to be flawed some time ago, the effect might still be real. And the flaws found in the Hawthorne experiments should remind us that there may be other factors at work which we have not considered.

Even with all these caveats I'm still prepared to accept that an experimental approach to work has value. Sometimes the only way to know whether A or B is the best answer is to actually do A and do B and compare the results. But this is where my objections start….

There are two important elements missing from the way Agile evangelists talk about the scientific method. When real scientists – and I include social scientists here – do an experiment, there is more to the scientific method than the experiment, and so there should be in work too.

#1: Literature review – standing on the shoulders of others

Before any experiment is planned, scientists start by reviewing what has gone before. They go to the library – sometimes for books, but journals are probably more up to date and often benefit from stricter peer review. They read what others have found, they read what others have done before, the experiments run and the theories devised to explain the results.

True, your business, your environment and your team are all unique, and what other teams find might not apply to you. And true, you might be able to find flaws in their research and their experiments. But that does not mean you should automatically discount what has gone before. If other teams have found consistent results with an approach, then it is possible yours will too. The more examples of something working, the more likely it will work for you. Why run an experiment if others have already found the result?
Now I'm happy to agree that the state of research on software development is pitiful. Many of those who should be helping the industry here – "Computer Science" and "Software Engineering" departments in universities – don't produce what the industry needs. (Ian Sommerville's recent critique on this subject is well worth reading: "The (ir)relevance of academic software engineering research".) But there is research out there. Some comes from university departments and some from industry. Plus there is a lot of relevant research that sits outside the computing and software departments. For example, I have dug up a lot of relevant research in the business literature, and specifically on time estimation in psychology journals (see my Notes on Estimation and Retrospective Estimation and More notes on Estimation Research).

Being used to dealing in binary, software people might demand a simple "yes this works" or "no it doesn't", and those suffering from physics envy may demand rigorous experimental research, but little research of this type exists in software engineering. Much of software engineering is closer to psychology: you can't conduct the experiments that would give these answers. You have to use statistics and other techniques and look at probabilities. (Notice I've separated computer science from software engineering here. Much of computer science theory – e.g. sort algorithm efficiency, P and NP problems, etc. – can stand up with physics theory but does not address many of the problems practicing software engineers face.)

#2: Clean up the lab

I'm sure most of my readers did some science at school. Think back to those experiments, particularly the chemistry experiments. Part of the preparation was to check the equipment, clean any that might be contaminated with the remains of the last experiment, ensure the workspace was clear, and so on.

I'm one of those people who doesn't (usually) start cooking until they have tidied the kitchen. I need space to chop vegetables, I need to be able to see what I'm doing, and I don't want messy plates getting in the way. There is a term for this: mise en place, a French expression which, according to Wikipedia, "means 'putting in place', as in set up. It is used in professional kitchens to refer to organizing and arranging the ingredients (e.g., cuts of meat, relishes, sauces, par-cooked items, spices, freshly chopped vegetables, and other components) that a cook will require for the menu items that are expected to be prepared during a shift." (Many thanks to Ed Sykes for telling me a great term.)

And when you are done with the chemistry experiment, or the cooking, you need to tidy up. Experiments need to include set-up and clean-up time. If you leave the lab a mess after every experiment, you will make it more difficult for yourself and others next time.

I see the same thing when I visit software companies. There is no point in doing experiments if the work environment is a mess – both physically and metaphorically. And if people leave a mess around when they have finished their work, then things will only get harder over time. There are many experiments you simply can't run until you have done the correct preparation.

An awful lot of the initial advice I give to companies is simply about cleaning up the work environment and getting them into a state where they can do experiments. Much of that is informed by reference to past literature and experiments.
For example:

• Putting effective source code control and build systems in place
• Operating in two-week iterations: planning out two weeks of work, reviewing what was done and repeating
• Putting up a team board and using it as a shared to-do list
• Creating basic measurement tools, whether they be burn-down charts, cumulative flow diagrams or even more basic measurements

You get the idea? Simply tidying up the work environment and putting a basic process in place – one based on past experience, one which better matches the way work actually happens – can alone bring a lot of benefit to organizations. Some organizations don't need to get into experiments; they just need to tidy up.

And, perhaps unfortunately, that is where it stops for some teams. Simply doing the basics better, simply tidying up, removes a lot of the problems they had. It might be a shame that these teams don't go further and try more, but that might be good enough for them.

Imagine a restaurant that is just breaking even: the food is poor, customers sometimes refuse to pay, the service is shoddy so tips are small, staff don't stay long, which makes the whole problem worse, and a vicious circle develops. In an effort to cut costs, managers keep staffing low, so food arrives late and cold. Finally one of the customers is poisoned and the local health inspector comes in. The restaurant has to do something. They were staggering on with the old ways until now, but a crisis means something must be done. They clean the kitchen, they buy some new equipment, they let the chef buy the ingredients he wants rather than the cheapest, and they rewrite the menu to simplify their offering. They don't have to do much, and suddenly the customers are happier: the food is warmer and better, the staff are happier, and a virtuous circle replaces a vicious one.

How far the restaurant owners want to push this is up to them. If they want a Michelin star they will go a long way, but if this is the local greasy spoon cafe, what is the point? It is their decision. They don't need experiments; they only need the opening part of the scientific method, the bit that is too often overlooked. Some might call it "Brilliant Basics", but you don't need to be brilliant, just "Good Basics." (Remember my In Search of Mediocracy post?)

I think the scientific method is sometimes, rightly or wrongly, used as a backdoor in an attempt to introduce change – to lower resistance and get individuals and teams to try something new: "Let's try X for a month and then decide if it works." That can be a legitimate approach. But dressing it up in the language of science feels dishonest. Let's have less talk about "The Scientific Method" and more talk about "Tidying up the Kitchen" – or is it better in French? Mise en place…. Come to think of it, doesn't the Lean community have a Japanese word for this? Pika pika.

Reference: What we forget about the Scientific Method from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....
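As a minimal illustration of the six-step experiment described at the top of this article (measure the baseline, introduce the change, re-measure and compare), here is a sketch in Java. The class and method names are mine, and every figure is a placeholder invented purely for illustration, not real data.

// Sketch of the measurement and comparison steps of the experiment above:
// record the baseline release-problem percentage, run the change for the agreed
// period, then recalculate and compare. All figures are placeholders.
public class ReleaseExperiment {

    static double problemPercentage(int releases, int releasesWithProblems) {
        return 100.0 * releasesWithProblems / releases;
    }

    public static void main(String[] args) {
        double baseline = problemPercentage(6, 3);      // bi-monthly releases before the change (placeholder)
        double afterChange = problemPercentage(12, 4);  // monthly releases during the experiment (placeholder)

        System.out.printf("Baseline:     %.1f%% problem releases%n", baseline);
        System.out.printf("After change: %.1f%% problem releases%n", afterChange);
        System.out.println(afterChange < baseline
                ? "Observation supports the hypothesis"
                : "Observation does not support the hypothesis");
    }
}

The point is only that the conclusion is drawn from recorded figures before and after the change, not from impressions gathered along the way.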

An alternative approach of writing JUnit tests (the Jasmine way)

Recently I wrote a lot of Jasmine tests for a small personal project. It took me some time until I finally got a feel for writing the tests right. Since then, I always have a hard time switching back to JUnit tests. For some reason JUnit tests no longer felt that good, and I wondered if it would be possible to write JUnit tests in a way similar to Jasmine.

Jasmine is a popular Behavior Driven Development testing framework for JavaScript that is inspired by RSpec (a Ruby BDD testing framework). A simple Jasmine test looks like this:

describe('AudioPlayer tests', function() {
  var player;

  beforeEach(function() {
    player = new AudioPlayer();
  });

  it('should not play any track after initialization', function() {
    expect(player.isPlaying()).toBeFalsy();
  });

  ...
});

The describe() function call in the first line creates a new test suite using the description AudioPlayer tests. Inside a test suite we can use it() to create tests (called specs in Jasmine). Here, we check if the isPlaying() method of AudioPlayer returns false after creating a new AudioPlayer instance. The same test written in JUnit would look like this:

public class AudioPlayerTest {
  private AudioPlayer audioPlayer;

  @Before
  public void before() {
    audioPlayer = new AudioPlayer();
  }

  @Test
  public void notPlayingAfterInitialization() {
    assertFalse(audioPlayer.isPlaying());
  }

  ...
}

Personally I find the Jasmine test much more readable than the JUnit version. In Jasmine the only noise that does not contribute anything to the test is the braces and the function keyword. Everything else contains some useful information. When reading the JUnit test we can ignore keywords like void, access modifiers (private, public, ...), annotations and irrelevant method names (like the name of the method annotated with @Before). In addition, test descriptions encoded in camel-case method names are not that great to read.

Besides increased readability, I really like Jasmine's ability to nest test suites. Let's look at an example that is a bit longer:

describe('AudioPlayer tests', function() {
  var player;

  beforeEach(function() {
    player = new AudioPlayer();
  });

  describe('when a track is played', function() {
    var track;

    beforeEach(function() {
      track = new Track('foo/bar.mp3');
      player.play(track);
    });

    it('is playing a track', function() {
      expect(player.isPlaying()).toBeTruthy();
    });

    it('returns the track that is currently played', function() {
      expect(player.getCurrentTrack()).toEqual(track);
    });
  });

  ...
});

Here we create a sub test suite that is responsible for testing the behavior when a Track is played by the AudioPlayer. The inner beforeEach() call is used to set up a common precondition for all tests inside the sub test suite.

In contrast, sharing common preconditions for multiple (but not all) tests in JUnit can sometimes become cumbersome. Of course duplicating the setup code in tests is bad, so we create extra methods for this. To share data between setup and test methods (like the track variable in the example above) we then have to use member variables (with a much larger scope). Additionally, we should make sure to group tests with similar preconditions together, to avoid having to read the whole test class to find all relevant tests for a certain situation. Or we can split things up into multiple smaller classes.
But then we might have to share setup code between these classes…

If we look at Jasmine tests, we see that the structure is defined by calling global functions (like describe(), it(), …) and passing descriptive strings and anonymous functions. With Java 8 we got lambdas, so we can do the same, right? Yes, we can write something like this in Java 8:

public class AudioPlayerTest {
  private AudioPlayer player;

  public AudioPlayerTest() {
    describe("AudioPlayer tests", () -> {
      beforeEach(() -> {
        player = new AudioPlayer();
      });

      it("should not play any track after initialization", () -> {
        expect(player.isPlaying()).toBeFalsy();
      });
    });
  }
}

If we assume for a moment that describe(), beforeEach(), it() and expect() are statically imported methods that take appropriate parameters, this would at least compile. But how should we run this kind of test? Out of interest I tried to integrate this with JUnit, and it turned out that this is actually very easy (I will write about this in the future). The result so far is a small library called Oleaster. A test written with Oleaster looks like this:

import static com.mscharhag.oleaster.runner.StaticRunnerSupport.*;
...

@RunWith(OleasterRunner.class)
public class AudioPlayerTest {
  private AudioPlayer player;

  {
    describe("AudioPlayer tests", () -> {
      beforeEach(() -> {
        player = new AudioPlayer();
      });

      it("should not play any track after initialization", () -> {
        assertFalse(player.isPlaying());
      });
    });
  }
}

Only a few things changed compared to the previous example. Here, the test class is annotated with the JUnit @RunWith annotation. This tells JUnit to use Oleaster when running this test class. The static import of StaticRunnerSupport.* gives direct access to static Oleaster methods like describe() or it(). Also note that the constructor was replaced by an instance initializer and the Jasmine-like matcher is replaced by a standard JUnit assertion.

There is actually one thing that is not so great compared to the original Jasmine tests: in Java a variable needs to be effectively final to be used inside a lambda expression. This means that the following piece of code does not compile:

describe("AudioPlayer tests", () -> {
  AudioPlayer player;

  beforeEach(() -> {
    player = new AudioPlayer();
  });
  ...
});

The assignment to player inside the beforeEach() lambda expression will not compile (because player is not effectively final). In Java we have to use instance fields in situations like this (as shown in the example above).

In case you worry about reporting: Oleaster is only responsible for collecting test cases and running them. The whole reporting is still done by JUnit, so Oleaster should cause no problems with tools and libraries that make use of JUnit reports. For example, the original post shows a screenshot of a failed Oleaster test in IntelliJ IDEA.

If you wonder how Oleaster tests look in practice, you can have a look at the tests for Oleaster (which are written in Oleaster itself). You can find the GitHub test directory here. Feel free to add any kind of feedback by commenting on this post or by creating a GitHub issue.

Reference: An alternative approach of writing JUnit tests (the Jasmine way) from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....
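One thing the article shows in Jasmine but not in Java is a nested suite. The sketch below is an assumption-laden illustration only: it assumes Oleaster supports nested describe() blocks the same way Jasmine does, assumes OleasterRunner lives in the same package as StaticRunnerSupport, and reuses the article's hypothetical AudioPlayer and Track classes, so it will not compile without them.

import static com.mscharhag.oleaster.runner.StaticRunnerSupport.*;
import static org.junit.Assert.*;

import org.junit.runner.RunWith;
// Assumed location of the runner; the article elides this import.
import com.mscharhag.oleaster.runner.OleasterRunner;

@RunWith(OleasterRunner.class)
public class AudioPlayerNestedTest {
  // Instance fields instead of locals, because lambdas require effectively final locals.
  private AudioPlayer player;
  private Track track;

  {
    describe("AudioPlayer tests", () -> {
      beforeEach(() -> {
        player = new AudioPlayer();
      });

      describe("when a track is played", () -> {
        beforeEach(() -> {
          // Shared precondition for every spec in this sub suite.
          track = new Track("foo/bar.mp3");
          player.play(track);
        });

        it("is playing a track", () -> {
          assertTrue(player.isPlaying());
        });

        it("returns the track that is currently played", () -> {
          assertEquals(track, player.getCurrentTrack());
        });
      });
    });
  }
}

If nesting behaves as it does in Jasmine, the inner beforeEach() runs after the outer one for each spec in the sub suite, which is exactly the shared-precondition behaviour the article finds cumbersome to express in plain JUnit.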
