


How to Fail With Drools or Any Other Tool/Framework/Library

What I like most at conferences are reports of someone's failure to do or implement something, for they are the best sources of learning. How to Fail with Drools (in Norwegian) by C. Dannevig of Know IT at JavaZone 2011 is one of them. I'd like to summarize what they learned and extend it to the introduction of a tool, framework, or library in general, based on my own painful experiences. They decided to switch from their homegrown rules implementation to the Drools rule management system (a.k.a. JBoss Rules) v4 to centralize all the rules code in one place, to get something simpler and easier to understand, and to improve time to market by not requiring a redeploy when a rule is added. However, Drools turned out to be more of a burden than a help, for the following reasons:
- Too little time and resources were provided for learning Drools, which has a rather steep learning curve due to being based on declarative programming and rule matching (some background), which is quite alien to typical imperative/OO programmers.
- Drools' poor support for development and operations: an IDE only for Eclipse, difficult debugging, no stack trace upon failure.
- Their domain model was not well aligned with Drools and required a lot of effort to make it usable by the rules.
- The users were used to and satisfied with the current system and wanted to keep the parts facing them, such as the rules management UI, instead of Drools' own UI, thus decreasing the value of the software (while increasing the overall complexity, we could add).
In the end they removed Drools and refactored their code to get all rules into one place, using only plain old Java, which works pretty well for them.
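The plain-Java ending they arrived at might look something like the following sketch. This is hypothetical: the Rule/Order names and the checks are my own illustration, not from the talk; the point is simply that a list of small rule objects already gives you "all rules in one place" without a rule engine.

```java
// Hypothetical sketch of "all rules in one place" using plain Java
// instead of a rule engine. All names here are illustrative.
import java.util.ArrayList;
import java.util.List;

public class PlainJavaRules {

    /** A rule inspects an order and may veto it with a reason. */
    interface Rule {
        String check(Order order); // null means the rule passes
    }

    static class Order {
        final int amount;
        final boolean customerBlocked;
        Order(int amount, boolean customerBlocked) {
            this.amount = amount;
            this.customerBlocked = customerBlocked;
        }
    }

    // All rules live in this one list - the "single place" the team wanted.
    private static final List<Rule> RULES = new ArrayList<Rule>();
    static {
        RULES.add(new Rule() {
            public String check(Order o) {
                return o.amount <= 0 ? "amount must be positive" : null;
            }
        });
        RULES.add(new Rule() {
            public String check(Order o) {
                return o.customerBlocked ? "customer is blocked" : null;
            }
        });
    }

    /** Returns the reasons the order is rejected (empty list = accepted). */
    public static List<String> validate(Order order) {
        List<String> failures = new ArrayList<String>();
        for (Rule rule : RULES) {
            String failure = rule.check(order);
            if (failure != null) {
                failures.add(failure);
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println(validate(new Order(100, false))); // []
        System.out.println(validate(new Order(-5, true)));
    }
}
```

Adding a rule is adding one entry to the list; there is no new language, no matching engine, and a failure gives you an ordinary stack trace.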
Lessons Learned from Introducing Tools, Frameworks, and Libraries

While the Know IT team encountered some issues specific to Drools, their experience has a lot in common with many other cases where a tool, framework, or library is introduced to solve some tasks and problems but turns out to be more of a problem itself. What can we learn from these failures to deliver the expected benefits for the expected cost? (Actually, such initiatives will often be labeled a success even though the benefits are smaller and the cost, often considerably, larger than planned.) Always think twice, or three or four times, before introducing a [heavyweight] tool or framework, especially if it requires a new and radically different way of thinking or working. Couldn't you solve it in a simpler way with plain old Java/Groovy/WhateverYouGot? Using an out-of-the-box solution sounds very *easy*, especially at sales meetings, but it is in fact usually pretty *complex*. And as Rich Hickey recently explained so well in his talk, we should strive to minimize complexity instead of prioritizing the relative and misleading easiness (in the sense of "easy to approach, to understand, to use"). I'm certain that many of us have experienced how an "I'll do it all for you, be happy and relax" tool turns into a major obstacle and source of pain; at least I have experienced that with WebSphere ESB 6.0. (It required heavy tooling that only a few mastered, was in reality a version 1.0, a lot of the promised functionality had to be implemented manually anyway, etc.) We should never forget that introducing a new library, framework, or tool has its cost, which we usually tend to underestimate. The cost has multiple dimensions:
- Complexity – complexity is the single worst thing in IT projects; are you sure that increasing it will pay off? Complexity of infrastructure, of internal structure, ...
- Competence – the learning curve (which proved to be pretty high for Drools), how many people know it, and the availability of experts who can help in case of trouble
- Development – does the tool somehow hinder development, testing, or debugging, e.g. by making it slower or more difficult, or by requiring special tooling (especially if it isn't available)? (Think of J2EE vs. Spring)
- Operations – what's the impact on observability of the application in production (high for Drools if it doesn't provide stack traces for failures), on troubleshooting, performance, the deployment process, ...?
- Defects and limitations – every tool has them, even seemingly mature ones (they already had version 4 of Drools); you usually run into limitations quite late, it's difficult if not impossible to discover them up front, and it's hard to estimate how flexible the authors have made it (it's especially bad if the solution is closed-source)
- Longevity – will the tool be around in 1, 5, 10 years? What about backwards compatibility and support for migration to higher versions? (The company I worked for decided to stop supporting WebSphere ESB in its infrastructure after one year and we had to migrate away from it; what wasted resources!)
- Dependencies – what dependencies does it have, and don't they conflict with something else in the application or its environment? How will it be in 10 years?
And I'm sure I missed some dimensions. So be aware that the actual cost of using something is likely a few times higher than your initial estimate. Another lesson is that support for development is a key characteristic of any tool, framework, or library. Any slowdown it introduces must be multiplied at least by 10^6, because all those slowdowns, spread over the team and the lifetime of the project, add up a lot.
I experienced that too many times: a framework that required a redeploy after every other change, an application which required us to manually click through a wizard to reach the page we wanted to test, slow execution of tests by an IDE. The last thing to bear in mind is that you should be aware whether a tool and the design behind it are well aligned with your business domain and processes (including the development process itself). If there are mismatches, you will need to pay for them; just think about OOP versus RDBMS (don't you know somebody who starts to shudder upon hearing "ORM"?).

Conclusion

Be aware that everything has its cost and make sure to account for it, and beware of our tendency to be overly optimistic when estimating both benefits and cost (perhaps hire a seasoned pessimist or appoint a devil's advocate). Always consider first using the tools you already have, however boring that might sound. I don't mean that we should never introduce new stuff; I just want to make you more cautious about it. I've recently followed a few discussions on how "enterprise" applications get unnecessarily, and to their own harm, bloated with libraries and frameworks, and I agree with them that we should be more careful and try to keep things simple. The tool cost dimensions above may hopefully help you expose the less obvious constituents of the cost of new tools.

Reference: How to Fail With Drools or Any Other Tool/Framework/Library from our JCG partner Jakub Holy at the "Holy Java" blog.

Related Articles: Are frameworks making developers dumb?; Open Source Java Libraries and Frameworks – Benefits and Dangers; Those evil frameworks and their complexity; Java Tutorials and Android Tutorials list...

The Misuse of End To End Tests – Testing Techniques 2

My last blog was the first in a series of blogs on approaches to testing code, outlining a simple scenario of retrieving an address from a database using a very common pattern: ...and describing a very common testing technique: not writing tests and doing everything manually. Today's blog covers another practice which I also feel is sub-optimal. In this scenario, the developers use JUnit to write tests, but write them after they have completed writing the code and without any class isolation. This is really an 'End to End' (aka Integration) test posing as a unit test. Although yesterday I said that I'm only testing the AddressService class, a test using this technique starts by loading the database with some test data and then grabbing hold of the AddressController to call the method under test. The AddressController calls the AddressService, which then calls the AddressDao to obtain and return the requested data.

@RunWith(UnitilsJUnit4TestClassRunner.class)
@SpringApplicationContext("servlet-context.xml")
@Transactional(TransactionMode.DISABLED)
public class EndToEndAddressServiceTest {

  @SpringBeanByType
  private AddressController instance;

  /**
   * Test method for
   * {@link com.captaindebug.address.AddressService#findAddress(int)}.
   */
  @Test
  public void testFindAddressWithNoAddress() {
    final int id = 10;
    BindingAwareModelMap model = new BindingAwareModelMap();

    String result = instance.findAddress(id, model);
    assertEquals("address-display", result);

    Address resultAddress = (Address) model.get("address");
    assertEquals(Address.INVALID_ADDRESS, resultAddress);
  }

  /**
   * Test method for
   * {@link com.captaindebug.address.AddressService#findAddress(int)}.
   */
  @Test
  @DataSet("FindAddress.xml")
  public void testFindAddress() {
    final int id = 1;
    Address expected = new Address(id, "15 My Street", "My Town", "POSTCODE", "My Country");
    BindingAwareModelMap model = new BindingAwareModelMap();

    String result = instance.findAddress(id, model);
    assertEquals("address-display", result);

    Address resultAddress = (Address) model.get("address");
    assertEquals(expected.getId(), resultAddress.getId());
    assertEquals(expected.getStreet(), resultAddress.getStreet());
    assertEquals(expected.getTown(), resultAddress.getTown());
    assertEquals(expected.getPostCode(), resultAddress.getPostCode());
    assertEquals(expected.getCountry(), resultAddress.getCountry());
  }
}

The code above uses Unitils both to load the test data into a database and to load the classes in our Spring context. I find Unitils a useful tool that takes the hard work out of writing tests like this, and setting up such a large-scale test is hard work. This kind of test has to be written after the code has been completed; it's NOT Test Driven Development (of which, from previous blogs, you'll gather I'm a big fan), and it's not a unit test. One of the problems with writing a test after the code is that developers who have to do it see it as a chore rather than as part of development, which means that it's often rushed and not done in the neatest of coding styles. You'll also need a certain amount of infrastructure to code using this technique, as a database needs setting up, which may or may not be on your local machine, and consequently you may have to be connected to a network to run the test. Test data is either held in test files (as in this case), which are loaded into the database when the test runs, or held permanently in the database. If a requirement change forces a change in the test, then the database files will usually need updating together with the test code, which forces you to update the test in at least two places.
Another big problem with this kind of test, apart from the lack of test subject isolation, is the fact that they can be very slow, sometimes taking seconds to execute. Shane Warden, in his book 'The Art of Agile Development', states that unit tests should run at a rate of "hundreds per second". Warden also goes on to cite Michael Feathers' book 'Working Effectively with Legacy Code' for a good definition of what a unit test is, or is not. A test is not a unit test if:
- It talks to a database.
- It communicates across a network.
- It touches the file system.
- You have to do special things to your environment (such as editing configuration files) to run it.
...now I like that, although I don't necessarily agree with point three. One of the main tenets of good unit test code is readability. Method arguments that are passed to objects under test are sometimes large in size, especially when they're XML. In this case I think that it's more pragmatic to favour test readability and store data of this size in a data file rather than having it as a private static final String, so I only adhere to point three wherever practical. Unit tests can be summed up using the FIRST acronym: Fast, Independent, Repeatable, Self-Validating and Timely, whilst Roy Osherove in his book 'The Art Of Unit Testing' sums up a good unit test as: "an automated piece of code that invokes the method or class being tested and then checks some assumptions about the logical behaviour of that method or class. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It's fully automated, trustworthy, readable and maintainable". The benefit of an End to End test is that it does test your test subject in collaboration with other objects and its surroundings, something that you really must do before shipping your code. This means that when complete, your code should comprise hundreds of unit tests, but only tens of 'End to End' tests.
Given this, my introductory premise, when I said that this technique is 'sub-optimal', is not strictly true; there's nothing wrong with 'End to End' tests, and every project should have some, together with some ordinary integration tests, but these kinds of tests shouldn't replace, or be called, unit tests, which is often the case. Having defined what a unit test is, my next blog investigates what you should test and why...

Reference: The Misuse of End To End Tests – Testing Techniques 2 from our JCG partner Roger Hughes at the Captain Debug blog.

Related Articles: Testing Techniques – Not Writing Tests; What Should you Unit Test? – Testing Techniques 3; Regular Unit Tests and Stubs – Testing Techniques 4; Unit Testing Using Mocks – Testing Techniques 5; Creating Stubs for Legacy Code – Testing Techniques 6; More on Creating Stubs for Legacy Code – Testing Techniques 7; Why You Should Write Unit Tests – Testing Techniques 8; Some Definitions – Testing Techniques 9; Using FindBugs to produce substantially less buggy code; Developing and Testing in the Cloud...
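For contrast, the same behaviour written as an isolated unit test might look like the following sketch. This is an assumption-laden illustration: the article only names AddressService, AddressDao, and Address, so the shapes below (constructor injection, a findAddress(int) that falls back to INVALID_ADDRESS) are mine. The key point is that the DAO collaborator is replaced by a hand-rolled stub, so the test touches no database, network, or file system and passes Feathers' checklist.

```java
// Hypothetical isolated unit test of an AddressService-like class.
// The collaborator is stubbed, so no infrastructure is needed.
public class IsolatedAddressServiceSketch {

    static class Address {
        static final Address INVALID_ADDRESS = new Address(-1, "unknown");
        final int id;
        final String street;
        Address(int id, String street) { this.id = id; this.street = street; }
    }

    /** The collaborator the service would normally use to hit the database. */
    interface AddressDao {
        Address findAddress(int id);
    }

    static class AddressService {
        private final AddressDao dao;
        AddressService(AddressDao dao) { this.dao = dao; }
        Address findAddress(int id) {
            Address address = dao.findAddress(id);
            return address != null ? address : Address.INVALID_ADDRESS;
        }
    }

    /** A stub DAO: canned data, no database, runs in microseconds. */
    static final AddressDao STUB_DAO = new AddressDao() {
        public Address findAddress(int id) {
            return id == 1 ? new Address(1, "15 My Street") : null;
        }
    };

    public static void main(String[] args) {
        AddressService service = new AddressService(STUB_DAO);
        if (!"15 My Street".equals(service.findAddress(1).street))
            throw new AssertionError("known id should return the stubbed address");
        if (service.findAddress(10) != Address.INVALID_ADDRESS)
            throw new AssertionError("unknown id should return INVALID_ADDRESS");
        System.out.println("both checks passed");
    }
}
```

Hundreds of tests in this style can run per second, which is what lets them sit in the inner development loop while the handful of real End to End tests run less often.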

Change Without Redeploying With Eclipse And Tomcat

They say developing in Java is slow because of the bloated application servers: you have to redeploy the application to see your changes, while PHP, Python, and other scripting languages allow you to "save & refresh". This Quora question summarizes that "myth". Yup, it's a myth. You can use "save & refresh" in Java web applications as well. The JVM has so-called HotSwap: replacing classes at runtime. So you just have to start the server in debug mode (the HotSwap feature is available in debug mode) and copy the class files. With Eclipse that can be done in (at least) two ways:
- WTP – configure the "Deployment Assembly" to send compiled classes to WEB-INF/classes
- FileSync plugin for Eclipse – configure it to send your compiled classes to an absolute path (where your Tomcat lives)
I've made a more extensive description of how to use them in this Stack Overflow answer. Now, of course, there's a catch. You can't swap structural changes. If you add a new class, add a new method, change method arguments, add fields, or add annotations, these can't be swapped at runtime. But "save & refresh" usually involves simply changing a line within a method. Structural changes are rarer, and in some cases mean the whole application has to be re-initialized anyway. You can't hotswap configuration either: your application is usually configured in some (.xml) file, so if you change it, you'd have to redeploy. But that, again, seems quite a normal limitation: your app can't just reload its bootstrapping configuration while running. Even more common is the case of HTML & CSS changes. You just can't live without "save & refresh" there. But that works perfectly fine: JSPs are refreshed by the servlet container (unless you are in production mode), and each view technology has an option for picking up template files dynamically. And that has nothing to do with the JVM. So you can develop web applications with Java almost as quickly as with any scripting language.
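For the debug-mode prerequisite, Tomcat ships with JPDA support out of the box. A typical way to enable it looks like this (the port and install path are examples; adjust for your setup):

```shell
# Start Tomcat with the JPDA debug agent attached, so the JVM can
# HotSwap replaced class files at runtime. The default JPDA port is 8000.
cd "$CATALINA_HOME/bin"
JPDA_ADDRESS=8000 JPDA_TRANSPORT=dt_socket ./catalina.sh jpda start
```

With the server running like this, Eclipse can attach a remote debug session to port 8000, and classes copied into WEB-INF/classes are swapped in without a redeploy (within the structural limits described above).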
Finally, I must mention one product with the slogan "Stop redeploying in Java": JRebel. They have created a very good product that is an improved HotSwap: it can swap structural changes as well, and it has support for many frameworks. The feature list looks really good. While it's a great product, I wouldn't say it's a must; you can be pretty productive without it. But be it HotSwap or JRebel, you must make sure you don't redeploy to see your changes. That is a real productivity killer.

Reference: Change Without Redeploying With Eclipse And Tomcat from our JCG partner Bozho at Bozho's tech blog.

Related Articles: Eclipse Shortcuts for Increased Productivity; Eclipse: How attach Java source; Eclipse Memory Analyzer (MAT); Multiple Tomcat Instances on Single Machine; Zero-downtime Deployment (and Rollback) in Tomcat, a walkthrough and a checklist; Java Tutorials and Android Tutorials list...

Java 7 Feature Overview

We discussed previously everything that didn't make it into Java 7 and then reviewed the useful Fork/Join framework that did make it in. Today's post will take us through each of the Project Coin features: a collection of small language enhancements that aren't groundbreaking, but are nonetheless useful for any developer able to use JDK 7. I've come up with a bank account class that showcases the basics of the Project Coin features. Take a look...

public class ProjectCoinBanker {

  private static final Integer ONE_MILLION = 1_000_000;
  private static final String RICH_MSG = "You need more than $%,d to be considered rich.";

  public static void main(String[] args) throws Exception {
    System.out.println(String.format(RICH_MSG, ONE_MILLION));
    String requestType = args[0];
    String accountId = args[1];
    switch (requestType) {
      case "displayBalance":
        printBalance(accountId);
        break;
      case "lastActivityDate":
        printLastActivityDate(accountId);
        break;
      case "amIRich":
        amIRich(accountId);
        break;
      case "lastTransactions":
        printLastTransactions(accountId, Integer.parseInt(args[2]));
        break;
      case "averageDailyBalance":
        printAverageDailyBalance(accountId);
        break;
      default:
        break;
    }
  }

  private static void printAverageDailyBalance(String accountId) {
    String sql = String.format(AVERAGE_DAILY_BALANCE_QUERY, accountId);
    try (
        PreparedStatement s = _conn.prepareStatement(sql);
        ResultSet rs = s.executeQuery();
    ) {
      while (rs.next()) {
        //print the average daily balance results
      }
    } catch (SQLException e) {
      // handle exception, but no need for finally to close resources
      for (Throwable t : e.getSuppressed()) {
        System.out.println("Suppressed exception message is " + t.getMessage());
      }
    }
  }

  private static void printLastTransactions(String accountId, int numberOfTransactions) {
    List<Transaction> transactions = new ArrayList<>();
    //... handle fetching/printing transactions
  }

  private static void printBalance(String accountId) {
    try {
      BigDecimal balance = getBalance(accountId);
      //print balance
    } catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e) {
      System.err.println("Please see your local branch for help with your account.");
    }
  }

  private static void amIRich(String accountId) {
    try {
      BigDecimal balance = getBalance(accountId);
      //find out if the account holder is rich
    } catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e) {
      System.out.println("Please see your local branch for help with your account.");
    }
  }

  private static BigDecimal getBalance(String accountId)
      throws AccountFrozenException, AccountClosedException, ComplianceViolationException {
    //... getBalance functionality
  }
}

Briefly, our ProjectCoinBanker class demonstrates basic usage of the following Project Coin features:
- Underscores in numeric literals
- Strings in switch
- Multi-catch
- Type inference for typed object creation (the diamond operator)
- try-with-resources and suppressed exceptions
First of all, underscores in numeric literals are pretty self-explanatory. Our example, private static final Integer ONE_MILLION = 1_000_000; shows that the benefit is visual: developers can quickly look at the code to verify that values are as expected. Underscores can be used in places other than natural grouping locations, being ignored wherever they are placed. Underscores cannot begin or terminate a numeric literal; otherwise, you're free to place them where you'd like. While not demonstrated here, binary literal support has also been added. In the same way that hex literals are prefixed by 0x or 0X, binary literals are prefixed by 0b or 0B. Strings in switch are also self-explanatory: the switch statement now also accepts a String. In our example, we switch on the String argument passed to the main method to determine what request was made.
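The literal features above are easy to try in a standalone class. A minimal sketch (names are mine, not from ProjectCoinBanker):

```java
// Small demo of Java 7 numeric literal features: underscores and
// binary literals. Requires JDK 7 or later to compile.
public class LiteralsDemo {
    public static void main(String[] args) {
        int oneMillion = 1_000_000;     // underscores are ignored by the compiler
        int flags = 0b1010_0001;        // binary literal; underscores work here too
        System.out.println(oneMillion); // 1000000
        System.out.println(flags);      // 161
    }
}
```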
On a side note, this is purely a compiler implementation, with an indication that JVM support for switching on String may be added at a later date. Type inference is another easy-to-understand improvement. Now, instead of our old code

List<Transaction> transactions = new ArrayList<Transaction>();

we can simply write

List<Transaction> transactions = new ArrayList<>();

since the type can be inferred. You probably won't find anyone who would argue that it shouldn't have been this way since the introduction of generics, but it's here now. Multi-catch will turn out to be very nice for the conciseness of exception handling code. Too many times, when wanting to actually do something based on the exception type thrown, we were until now forced to have multiple catch blocks all doing essentially the same thing. The new syntax is very clean and logical. Our example,

catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e)

shows how easily it can be done. Finally, the last Project Coin feature demonstrated is the try-with-resources syntax and support for retrieving suppressed exceptions. A new interface, AutoCloseable, has been introduced and applied to all the expected suspects, including Input/OutputStreams, Readers/Writers, Channels, Sockets, Selectors, and the java.sql resources Statement, ResultSet, and Connection. In my opinion, the syntax is not as natural as the multi-catch change, not that I have an alternative in mind.

try (
    PreparedStatement s = _conn.prepareStatement(sql);
    ResultSet rs = s.executeQuery();
) {
  while (rs.next()) {
    //print the average daily balance results
  }
} catch (SQLException e) {
  //handle exception, but no need for finally to close resources
  for (Throwable t : e.getSuppressed()) {
    System.out.println("Suppressed exception message is " + t.getMessage());
  }
}

First we see that we can include multiple resources in try-with-resources – very nice. We can even reference previously declared resources in the same block, as we did with our PreparedStatement. We still handle our exception, but we don't need a finally block just to close the resources. Notice too that there is a new method getSuppressed() on Throwable. This allows us to access any exceptions that were thrown while trying to "autoclose" the declared resources. There can be at most one suppressed exception per resource declared. Note: if the resource initialization throws an exception, it is handled in your declared catch block. That's it. Nothing earth-shattering, but some simple enhancements that we can all begin using without too much trouble. Project Coin also includes a feature regarding varargs and compiler warnings. Essentially, it boils down to a new annotation (@SafeVarargs) that can be applied at the method declaration to allow developers to remove @SuppressWarnings("varargs") from their consuming code. This has been applied to all the key suspects within the JDK, and the same annotation is available to you for any of your genericized varargs methods. The Project Coin feature set as described online is inconsistent at best. Hopefully this gives you a solid summary of what you can use from the proposal in JDK 7.

Reference: Java 7 – Project Coin Feature Overview from our JCG partners at the Carfey Software Blog.

Related Articles: Java 7: try-with-resources explained; GC with Automatic Resource Management in Java 7; A glimpse at Java 7 MethodHandle and its usage; Manipulating Files in Java 7; Java SE 7, 8, 9 – Moving Java Forward; Java Tutorials and Android Tutorials list...
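The suppressed-exception mechanics are easiest to see without JDBC in the way. The following standalone sketch (my own example, not from the article's code) throws from both the try body and close(); the close failure is attached to the primary exception and retrieved via getSuppressed():

```java
// Demonstrates Java 7 suppressed exceptions: when the try body throws and
// close() also throws, the close() failure is recorded on the primary
// exception instead of replacing it.
public class SuppressedDemo {

    static class Flaky implements AutoCloseable {
        public void close() { throw new IllegalStateException("close failed"); }
    }

    public static void main(String[] args) {
        try (Flaky f = new Flaky()) {
            throw new RuntimeException("body failed");
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());                    // body failed
            System.out.println(e.getSuppressed()[0].getMessage()); // close failed
        }
    }
}
```

Contrast this with the old try/finally idiom, where an exception thrown while closing would silently discard the original failure.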

Devoxx Day 1

Participating at Devoxx brought me enough motivation to post my first blog entry. I am here for the first time and I am really impressed by how it is organized. There is a record number of top speakers present; for me the problem is choosing which presentation to attend. But thanks to the organizers, all events will be available on parleys.com in late December, and participants will receive a free subscription, so there is nothing to regret at the end of the presentations. The number of participants is also very impressive: 3350, from 60 different JUGs, from 40 countries. The only drawback of the event's popularity is the shortage of wireless connections; only 800 IPs are available and most participants have a laptop plus a smartphone. For me the Kindle saved the day with its free Internet connection, but unfortunately for some speakers a few presentation demos did not work because of this. This year's main topics besides Java are HTML5, new languages, and Android. I will share a few ideas I picked up today at the presentations I attended.

Oracle opening keynote and JDK 7, 8, and 9 presentation

Oracle is committed to Java and wants to provide support for it on any device. JSE 7 for Mac will be released next week. Oracle would like Java developers to get involved in the JCP, to adopt a JSR, and to attend local JUG meetings. JEE 7 will be released next year. JEE 7 is focused on cloud integration; some of the features are already implemented in the GlassFish 4 development branch. JSE 8 will be released in the summer of 2013 due to "enterprise community request", as they cannot keep the pace with an 18-month release cycle. The main features included in JSE 8 are lambda support, project Jigsaw, the new Date/Time API, project Coin++, and added support for sensors.
JSE 9 will probably focus on some of these features:
- self-tuning JVM
- improved native language integration
- processing enhancements for big data
- reification (adding runtime class type info for generic types)
- unification of primitives and the corresponding object classes
- meta-object protocol, in order to use types and methods defined in other JVM languages
- multi-tenancy
- JVM resource management

Bleeding edge HTML5, by Paul Kinlan from Google

Web applications will become richer and more intelligent thanks to the addition of new APIs like the Page Visibility API, Web Intents, WebRTC, and the Web Audio API. The Visibility API determines whether a page is visible and can verify online/offline connection status and pre-rendered pages. Browsers will support Web Intents: custom apps that handle specific types of actions, for example sharing an image to a social network; the user can register, say, flickr to handle these types of actions. WebRTC will make face-to-face communication possible without plugins. The Web Audio API adds support for complex audio processing.

NoSQL for Java developers, by Chris Richardson from SpringSource

Future applications will have polyglot persistence for different types of data. An interesting article about this was written today by Martin Fowler. NoSQL solves many problems but also has limitations, and careful thought has to be given in advance because changes will be costly. Queries, not the data model, drive NoSQL database design.

Productivity enhancements in Spring 3.1, by Costin Leau from SpringSource

Spring 3.1 will bring the following improvements:
- environmental abstraction: different types of bean/property files are activated with a profile setting like dev, test, prod
- improved Java-based configuration with new annotations
- cache abstraction improvements; implementations are pluggable by implementing a few interfaces
- Spring MVC improvements, support for Servlet 3.0
- support for JSE 7, Hibernate 4.0, and Quartz 2.x
Next week the RC2 will be released, shortly followed by the GA. Spring 3.2 will be released in the third quarter of 2012.

Tomorrow will be another great day for Java in Antwerp. I will try to keep you posted with the notes I make during the presentations. If you are also at Devoxx, you can find me by looking for the guy with the world-renowned Transylvania JUG shirt.

Reference: Transylvania JUG at Devoxx day 1 from our JCG partners at the Transylvania JUG.

Related Articles: DOAG 2011 vs. Devoxx – Value and Attraction; Java SE 7, 8, 9 – Moving Java Forward; Java EE Past, Present, & Cloud 7; Official Java 7 for Mac OS X – Status; The OpenJDK as the default Java on Linux; Java Tutorials and Android Tutorials list...

DOAG 2011 vs. Devoxx – Value and Attraction

Yesterday (November 15, 2011) DOAG 2011 started in Nuremberg. I have been with the German Oracle Users Group for some years now, and it is always a pleasure to contribute to this great conference. A little drawback is that Devoxx is going on in parallel, and I am really sad to see so many people over there in Belgium whom I would have loved to meet. I guess that is what happens if you have different conferences going on in a very short time frame: you have to make hard decisions about which one to attend. And even though I have spoken to Stephan and Fried about whether there is any chance to align the two conferences, it seems as if this is not going to happen. So, here are my thoughts about which of the two brings more value to you. Being a regular reader of my blog, you are interested in Java EE, Oracle middleware (WebLogic or GlassFish), and probably some other products related to the former.

DOAG is about the Oracle Community

DOAG likes to call itself the "Oracle community". The conference has been the annual get-together of Oracle users for 23 years now. Three days of knowledge, current information on using Oracle solutions successfully, and the exchange of hands-on experience are the values which differentiate it from others. A very special focus lies on having mostly German content. Even if this has changed over the last few years and you see more and more English speakers around, it's still a mostly German-speaking event. They are supported by some of the German- and European-based Oracle ACEs, and a lot of the current information comes directly from Oracle speakers coming over from the HQ for this event. The Java-related content is weak. Sorry to say, but I believe that the "Java Track" suffers from big competition with Devoxx. Another point is that nobody expects pure Java content at a conference focused on Oracle products. There is a little (friendly) fight going on with the UKOUG about who holds the biggest European Oracle conference. I personally believe that they are more or less equal in size (according to the numbers I have heard). I cannot comment on quality because I have never been to the UKOUG conference.

Devoxx is about the Java Community

Devoxx is "The Java™ Community Conference". Rebranded from the former JavaPolis, it's basically the conference of the Belgian JUG. With its no. 1 speakers and topics it has become one of the main Java conferences around. The Devoxx conference is a special blend of many IT disciplines, ranging from Java to scripting, FX to RIA, Agile to Enterprise, Security to Cloud, and much more. Compared to others, it might have only one real competitor, which is JavaOne. Stephan is doing a splendid job attracting the right kind of speakers to put together a real first-class line-up. One big plus is the fact that Google isn't preventing their guys from speaking there. Even though it is in Europe, this is an English-speaking conference.

DOAG vs. Devoxx

What is this all about? Oracle vs. Java? Probably. Two of the bigger European conferences with different focus areas make it hard to decide which one to attend for people with one foot on each side. Writing this blog post was helpful for me in deciding what my personal options are for next year. I would love to be in Belgium and meet again with the Java community. But there is also this little red devil on my right shoulder telling me that I should keep in touch with the latest in Oracle-related developments. If you are in a similar situation, you are now waiting for a recommendation, right? Here it is, brief and short: looking for Oracle products? Come visit Germany. Looking for the best Java-related content out there besides JavaOne? Visit Devoxx. If you have an interest in both areas, you will have to make your own decision and try to focus on the part that is most valuable to you.

Reference: DOAG 2011 vs. Devoxx – Value and Attraction from our JCG partner Markus Eisele at the "Enterprise Software Development with Java" blog.

Related Articles: Java SE 7, 8, 9 – Moving Java Forward; Java EE Past, Present, & Cloud 7; Official Java 7 for Mac OS X – Status; The OpenJDK as the default Java on Linux; Developing and Testing in the Cloud; Java Tutorials and Android Tutorials list...

SOLID – Single Responsibility Principle

The Single Responsibility Principle (SRP) states that: There should never be more than one reason for a class to change. We can relate the “reason to change” to “the responsibility of the class”, so each responsibility would be an axis of change. This principle is similar to designing classes which are highly cohesive. The idea is to design a class which has one responsibility, or in other words implements a single piece of functionality. I would like to clarify here that one responsibility doesn’t mean that the class has only ONE method; a responsibility can be implemented by means of different methods in the class. Why is this principle required? Imagine designing classes with more than one responsibility, i.e. implementing more than one piece of functionality. There is no one stopping you from doing this. But imagine the amount of dependency your class can create within itself over the course of development. When you are asked to change a certain piece of functionality, you are not really sure how it will impact the other functionality implemented in the class. The change might or might not impact other features, but you really can’t take the risk, especially in production applications, so you end up testing all the dependent features. You might say that we have automated tests and the number of tests to be checked is low, but imagine the impact over time. These kinds of changes accumulate, owing to the viscosity of the code, making it really fragile and rigid. One way to correct a violation of SRP is to decompose the class’s functionality into different classes, each of which conforms to SRP. An example to clarify this principle: suppose you are asked to implement a UserSettingService where the user can change settings, but before that the user has to be authenticated.
One way to implement this would be:

public class UserSettingService {
    public void changeEmail(User user) {
        if (checkAccess(user)) {
            // Grant option to change
        }
    }

    public boolean checkAccess(User user) {
        // Verify if the user is valid.
        return true;
    }
}

All looks good, until you want to reuse the checkAccess code somewhere else, OR you want to change the way checkAccess is done, OR you want to change the way email changes are approved. In the latter two cases you would end up changing the same class, and in the first case you would have to use UserSettingService just to check for access, which is unnecessary. One way to correct this is to decompose UserSettingService into UserSettingService and SecurityService, and to move the checkAccess code into SecurityService:

public class UserSettingService {
    public void changeEmail(User user) {
        if (SecurityService.checkAccess(user)) {
            // Grant option to change
        }
    }
}

public class SecurityService {
    public static boolean checkAccess(User user) {
        // Check the access.
        return true;
    }
}

Another example: suppose there is a requirement to download a file (maybe in CSV/JSON/XML format), parse the file, and then update its contents into a database or file system. One approach would be:

public class Task {
    public void downloadFile(String location) {
        // Download the file
    }

    public void parseTheFile(File file) {
        // Parse the contents of the file - XML/JSON/CSV
    }

    public void persistTheData(Object data) {
        // Persist the data to a database or file system.
    }
}

Looks good: all in one place, easy to understand. But what about the number of times this class has to be updated? What about the reusability of the parser code, or of the download code? It is not a good design in terms of reusability of the different parts of the code, or in terms of cohesiveness. One way to decompose the Task class is to create different classes for downloading the file (Downloader), for parsing the file (Parser), and for persisting to the database or file system.
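That decomposition might be sketched as follows. Note that the Persister class name, the stubbed method bodies and the CSV assumption below are my own illustration, not code from the original post:

```java
import java.util.Arrays;
import java.util.List;

// Each class now has exactly one reason to change.
class Downloader {
    public String download(String location) {
        // Fetch the raw file contents from the given location (stubbed here).
        return "a,b,c";
    }
}

class Parser {
    public List<String> parse(String contents) {
        // Parse the CSV/JSON/XML contents into records (CSV assumed here).
        return Arrays.asList(contents.split(","));
    }
}

class Persister {
    public int persist(List<String> records) {
        // Persist the records to a database or file system (stubbed here).
        return records.size();
    }
}

// Task now only coordinates the three collaborators.
class Task {
    private final Downloader downloader = new Downloader();
    private final Parser parser = new Parser();
    private final Persister persister = new Persister();

    public int run(String location) {
        return persister.persist(parser.parse(downloader.download(location)));
    }
}
```

With this shape, a change to the parsing rules touches only Parser, and Downloader can be reused by any other task that needs to fetch a file.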
Even in the JDK you must have seen that Rectangle2D and the other Shape classes in the java.awt.geom package don’t really carry any information regarding how they have to be drawn on the UI. The drawing logic lives in the Graphics/Graphics2D classes instead. A detailed description can be found here. Reference: SOLID – Single Responsibility Principle from our JCG partner Mohamed Sanaulla at the “Experiences Unlimited” blog. Related Articles :Are frameworks making developers dumb? Not doing Code Reviews? What’s your excuse? Why Automated Tests Boost Your Development Speed Using FindBugs to produce substantially less buggy code Things Every Programmer Should Know Java Tutorials and Android Tutorials list...

Testing Techniques – Not Writing Tests

There’s not much doubt about it, the way you test your code is a contentious issue. Different test techniques find favour with different developers for varying reasons including corporate culture, experience and general psychological outlook. For example, you may prefer writing classic unit tests that test an object’s behaviour in isolation by examining return values; you may favour classic stubs, or fake objects; or you may like using mock objects to mock roles, or even using mock objects as stubs. This and my next few blogs take a part of a very, very common design pattern and examine different approaches you could take in testing it. The design pattern I’m using is shown in the UML diagram below; it’s something I’ve used before, mainly because it is so common. You may not like it – it is more ‘ask don’t tell’ than ‘tell don’t ask’ in its design, but it suits this simple demo. In this example, the ubiquitous pattern above will be used to retrieve and validate an address from a database. The sample code, available from my GitHub repository, takes a simple Spring MVC webapp as its starting point and uses a small MySQL database to store the addresses, for no other reason than I already have a server running locally on my laptop. So far as testing goes, the blogs will concentrate upon testing the service layer component AddressService:

@Component
public class AddressService {

    private static final Logger logger = LoggerFactory.getLogger(AddressService.class);

    private AddressDao addressDao;

    /**
     * Given an id, retrieve an address. Apply phony business rules.
     *
     * @param id
     *            The id of the address object.
     */
    public Address findAddress(int id) {
        logger.info("In Address Service with id: " + id);
        Address address = addressDao.findAddress(id);
        businessMethod(address);
        logger.info("Leaving Address Service with id: " + id);
        return address;
    }

    private void businessMethod(Address address) {
        logger.info("in business method");
        // Do some jiggery-pokery here....
    }

    @Autowired
    void setAddressDao(AddressDao addressDao) {
        this.addressDao = addressDao;
    }
}

…as demonstrated by the code above, which you can see is very simple: it has a findAddress(…) method that takes as its input the id (or table primary key) of a single address. It calls a Data Access Object (DAO), and pretends to do some business processing before returning the Address object to the caller.

public class Address {

    private final int id;
    private final String street;
    private final String town;
    private final String country;
    private final String postCode;

    public Address(int id, String street, String town, String postCode, String country) {
        this.id = id;
        this.street = street;
        this.town = town;
        this.postCode = postCode;
        this.country = country;
    }

    public int getId() {
        return id;
    }

    public String getStreet() {
        return street;
    }

    public String getTown() {
        return town;
    }

    public String getCountry() {
        return country;
    }

    public String getPostCode() {
        return postCode;
    }
}

As I said above, I’m going to cover different strategies for testing this code, some of which I’ll guarantee you’ll hate. The first one, still widely used by many developers and organisations, is… Don’t Write Any Tests Unbelievably, some people and organisations still do this. They write their code, deploy it to the web server and open a page. If the page opens then they ship the code; if it doesn’t then they fix the code, compile it, redeploy it, reload the web browser and retest. The most extreme example I’ve ever seen of this technique (changing the code, deploying to a server, running the code, spotting a bug and going around the loop again) was a couple of years ago on a prestigious Government project. The sub-contractor had, I guess to save money, imported a load of cheap and very inexperienced programmers from ‘off-shore’ and didn’t have enough experienced programmers to mentor them.
The module in question was a simple Spring-based Message Driven Bean that took messages from one queue, applied a little business logic and then pushed them into another queue: simples. The original author started out by writing a few tests, but then passed the code on to other inexperienced team members. When the code changed and a test broke, they simply switched off all the tests. Testing consisted of deploying the MDB to the EJB container (WebLogic), pushing a message into the front of the system, watching what came out of the other end and debugging the logs along the way. You may say that an end-to-end test like this isn’t too bad, BUT to deploy the MDB and to run the test took just over an HOUR: in a working day, that’s 8 code changes. Not exactly rapid development! My job? To fix the process and the code. The solution? Write tests, run tests and refactor the code. The module went from having zero tests to about 40 unit tests and a few integration tests, and it was improved and finally delivered. Done, done. Most people will have their own opinions on this technique, and mine are: it produces unreliable code; it takes longer to write and ship code using this technique because you spend loads of time waiting for servers to start, WARs/EJBs to be deployed, etc.; and it’s generally used by more inexperienced programmers, or those who haven’t suffered by using this technique – and you do suffer. I can say that I’ve worked on projects where I’m writing tests whilst other developers aren’t. The test team find very few bugs in my code, whilst those other developers are fixing loads of bugs and are going frantic trying to meet their deadlines. Am I a brilliant programmer or does writing tests pay dividends? From experience, if you use this technique, you will have lots of additional bugs to fix because you can’t easily and repeatably test the multitude of scenarios that accompany the story you’re developing.
This is because it simply takes too long and you have to remember each scenario and then manually run each of them. I do wonder whether or not the not-writing-tests technique is a hangover from the 1960s, when computing time was expensive and you had to write programs by hand on punched cards or paper tape and then check them over visually using a ‘truth table’. Once you were happy that your code worked, you then sent it to the machine room and ran it – I’m not old enough to remember computing in the 60s. The fact that machine time was expensive meant that automated testing was out of the question. Although computers got faster, this obsolete paradigm continued on, degenerating into one where you missed out the diligent mental check and just ran the code, and if it broke you fixed it. This degenerate paradigm was (is?) still taught in schools, colleges and books and was unchallenged until the last few years. Is this why it can be quite hard to convince people to change their habits? Another major problem with this technique is that a project can descend into a state of paralysis. As I said above, with this technique your bug count will be high, and this gives project managers the impression that the code stinks and reinforces the idea that you don’t change the code unless absolutely necessary, as you might break something. Managers become hesitant about authorising code changes, often having no faith in the developers, and micro-manage them. Indeed the developers themselves become very hesitant about making changes to the code, as breaking something will make them look bad. The changes they do make are as tiny as possible and come without any refactoring. Over time this adds to the mess and the code degenerates even more, becoming a bigger ball of mud.
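By way of contrast, the kind of fast, repeatable feedback that avoids this trap can be as simple as a hand-rolled stub test. The simplified classes, the stub and the street values below are my own sketch for a service like the AddressService above, not code from the original post:

```java
// Simplified stand-ins for the classes discussed in this post.
interface AddressDao {
    Address findAddress(int id);
}

class Address {
    private final int id;
    private final String street;

    Address(int id, String street) {
        this.id = id;
        this.street = street;
    }

    int getId() { return id; }
    String getStreet() { return street; }
}

class AddressService {
    private final AddressDao addressDao;

    AddressService(AddressDao addressDao) {
        this.addressDao = addressDao;
    }

    Address findAddress(int id) {
        // The phony business rules would go here.
        return addressDao.findAddress(id);
    }
}

class AddressServiceTest {
    static void testFindAddress() {
        // The stub DAO returns a canned Address - no database, no server.
        AddressDao stubDao = id -> new Address(id, "1 High Street");
        Address result = new AddressService(stubDao).findAddress(42);
        assert result.getId() == 42;
        assert "1 High Street".equals(result.getStreet());
    }
}
```

A test like this runs in milliseconds, with no server start-up or WAR deployment anywhere in the loop.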
Whilst I think that you should load and review a page to ensure that everything’s working, it should only be done at the end of the story, once you have a bundle of tests that tell you that your code is working okay. I hope that I’m not being contentious when I sum this method up by saying that it sucks, though time will tell. You may also wonder why I included it; the reason is to point out that it sucks and to offer some alternatives in my following blogs. Reference: Testing Techniques – Part 1 – Not Writing Tests from our JCG partner Roger Hughes at the Captain Debug blog. Related Articles :The Misuse of End To End Tests – Testing Techniques 2 What Should you Unit Test? – Testing Techniques 3 Regular Unit Tests and Stubs – Testing Techniques 4 Unit Testing Using Mocks – Testing Techniques 5 Creating Stubs for Legacy Code – Testing Techniques 6 More on Creating Stubs for Legacy Code – Testing Techniques 7 Why You Should Write Unit Tests – Testing Techniques 8 Some Definitions – Testing Techniques 9 Using FindBugs to produce substantially less buggy code Developing and Testing in the Cloud...

Scala Tutorial – code blocks, coding style, closures, scala documentation project

Preface This is part 12 of tutorials for first-time programmers getting into Scala. Other posts are on this blog, and you can get links to those and other resources on the links page of the Computational Linguistics course I’m creating these for. Additionally you can find this and other tutorial series on the JCG Java Tutorials page. This post isn’t so much a tutorial as a comment on coding style with a few pointers on how code blocks in Scala work. It was instigated by patterns I was noting in my students’ code; namely, that they were packing everything into one-liners, with map after map after map, etc. These map-over-mapValues-over-map sequences of statements can be almost incomprehensible, both for some other person reading the code, and even for the person writing the code. I do admit to a fair amount of guilt in using such sequences of operations in class lectures and even in some of these tutorials. It works well in the REPL and when you have lots of text to explain what is going on around the piece of code in question, but it seems to have given a bad model for writing actual code. Oops! So taking a step back, it is important to break operation sequences up a bit, but it isn’t always obvious to beginners how one can do so. Also, some students indicated that they had gotten the impression that one should try to pack everything onto one line if possible, and that breaking things up was somehow less advanced or less Scala-like. This is hardly the case. In fact much to the contrary: it is crucial to use strategies that allow readers of your code to see the logic behind your statements. This isn’t just for others — you are likely to be a reader of your own code, often months after you originally wrote it, and you want to be kind to your future self. A simple example I’m giving an example here of what you can do to give your code more breathing space. It’s not a very meaningful example, but it serves the purpose without being very complex.
We begin by creating a list of all the letters in the alphabet.

scala> val letters = "abcdefghijklmnopqrstuvwxyz".split("").toList.tail
letters: List[java.lang.String] = List(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z)

Okay, now here’s our (pointless) task: we want to create a map from every letter (from ‘a’ to ‘x’) to a list containing that letter and the two letters that follow it, in reverse alphabetical order. (Did I mention this was a pointless task in and of itself?) Here’s a one-liner that can do it.

scala> letters.zip((1 to 26).toList.sliding(3).toList).toMap.mapValues(_.map(x => letters(x-1)).sorted.reverse)
res0: scala.collection.immutable.Map[java.lang.String,List[java.lang.String]] = Map(e -> List(g, f, e), s -> List(u, t, s), x -> List(z, y, x), n -> List(p, o, n), j -> List(l, k, j), t -> List(v, u, t), u -> List(w, v, u), f -> List(h, g, f), a -> List(c, b, a), m -> List(o, n, m), i -> List(k, j, i), v -> List(x, w, v), q -> List(s, r, q), b -> List(d, c, b), g -> List(i, h, g), l -> List(n, m, l), p -> List(r, q, p), c -> List(e, d, c), h -> List(j, i, h), r -> List(t, s, r), w -> List(y, x, w), k -> List(m, l, k), o -> List(q, p, o), d -> List(f, e, d))

That did it, but that one-liner isn’t clear at all, so we should break things up a bit. Also, what is “_” and what is “x”? (By which I mean, what are they in terms of the logic of the program? We know they are ways of referring to the elements being mapped over, but they don’t help the human reading the code understand what is going on.) Let’s start by creating the sliding list of number ranges.
scala> val ranges = (1 to 26).toList.sliding(3).toList
ranges: List[List[Int]] = List(List(1, 2, 3), List(2, 3, 4), List(3, 4, 5), List(4, 5, 6), List(5, 6, 7), List(6, 7, 8), List(7, 8, 9), List(8, 9, 10), List(9, 10, 11), List(10, 11, 12), List(11, 12, 13), List(12, 13, 14), List(13, 14, 15), List(14, 15, 16), List(15, 16, 17), List(16, 17, 18), List(17, 18, 19), List(18, 19, 20), List(19, 20, 21), List(20, 21, 22), List(21, 22, 23), List(22, 23, 24), List(23, 24, 25), List(24, 25, 26))

It’s quite clear what that is now. (The sliding function is a beautiful thing, especially for natural language processing problems.) Next, we zip the letters with the ranges and create a Map from the pairs using toMap. This produces a Map from letters to lists of three numbers. Note that the lengths of the two lists are different: letters has 26 elements and ranges has 24, which means that the last two elements of letters (‘y’ and ‘z’) get dropped in the zipped list.

scala> val letter2range = letters.zip(ranges).toMap
letter2range: scala.collection.immutable.Map[java.lang.String,List[Int]] = Map(e -> List(5, 6, 7), s -> List(19, 20, 21), x -> List(24, 25, 26), n -> List(14, 15, 16), j -> List(10, 11, 12), t -> List(20, 21, 22), u -> List(21, 22, 23), f -> List(6, 7, 8), a -> List(1, 2, 3), m -> List(13, 14, 15), i -> List(9, 10, 11), v -> List(22, 23, 24), q -> List(17, 18, 19), b -> List(2, 3, 4), g -> List(7, 8, 9), l -> List(12, 13, 14), p -> List(16, 17, 18), c -> List(3, 4, 5), h -> List(8, 9, 10), r -> List(18, 19, 20), w -> List(23, 24, 25), k -> List(11, 12, 13), o -> List(15, 16, 17), d -> List(4, 5, 6))

Note that we could have broken this into two steps, first creating the zipped list and then calling toMap on it. However, it is perfectly clear what the intent is when one zips two lists (creating a list of pairs) and then uses toMap on it immediately, so this is certainly a case where it makes sense to put multiple operations on a single line.
At this point we could of course process the letter2range Map using a one-liner.

scala> letter2range.mapValues(_.map(x => letters(x-1)).sorted.reverse)
res1: scala.collection.immutable.Map[java.lang.String,List[java.lang.String]] = Map(e -> List(g, f, e), s -> List(u, t, s), x -> List(z, y, x), n -> List(p, o, n), j -> List(l, k, j), t -> List(v, u, t), u -> List(w, v, u), f -> List(h, g, f), a -> List(c, b, a), m -> List(o, n, m), i -> List(k, j, i), v -> List(x, w, v), q -> List(s, r, q), b -> List(d, c, b), g -> List(i, h, g), l -> List(n, m, l), p -> List(r, q, p), c -> List(e, d, c), h -> List(j, i, h), r -> List(t, s, r), w -> List(y, x, w), k -> List(m, l, k), o -> List(q, p, o), d -> List(f, e, d))

This is better than what we started with because we at least know what letter2range is, but it still isn’t clear what is going on after that. To make this more comprehensible, we can break it up over multiple lines and give more descriptive names to the variables. The following produces the same result as above.

letter2range.mapValues ( range => {
  val alphavalues = range.map(number => letters(number-1))
  alphavalues.sorted.reverse
})

Notice that: I called it range rather than _, which is a better indicator of what mapValues is working with. After the => I use an opening left bracket {. The next lines are a block of code that I can use like any other block of code, which means I can create variables and break things down into smaller, more understandable steps. For example, the line creating alphavalues makes it clear that we are taking a range and mapping it to the corresponding indices in the letters list (e.g., the range 2, 3, 4 becomes ‘b’,’c’,’d’). For such a list, we then sort and reverse it (okay, so it started out sorted, but you can imagine plenty of times you need to do such sorting).
The last line of that block is the result of the overall mapValues operation for that element (here, indicated by the variable range). Basically, we get a lot more breathing room, and this becomes even more essential as you dig deeper or do more complex operations during a map-within-a-map operation. Having said that, you should ask yourself whether you should just create and use a function that has clear semantics and does the job for you. For example, here’s an alternative to the above strategy that is perhaps clearer.

def lookupSortAndReverse (range: List[Int], alpha: List[String]) =
  range.map(number => alpha(number-1)).sorted.reverse

We’ve defined a function that takes a range and a list of letters (called alpha in the function) and produces the sorted and reversed list of letters corresponding to the numbers in the range. In other words, it is what the anonymous function defined after range in the previous code block did. We can thus easily use it at the top-level mapValues operation with completely clear intent and comprehensibility.

letter2range.mapValues(range => lookupSortAndReverse(range, letters))

Of course, you should especially consider creating such functions if you use the same operation in multiple places. Closures One further final note. Note that I passed the letters list into the lookupSortAndReverse function such that its value was bound to the function-internal variable alpha. You may wonder whether I needed to include that, or whether it is possible to directly access the letters list in the function. In fact you can: provided that letters has already been defined, we can do the following.

def lookupSortAndReverseCapture (range: List[Int]) =
  range.map(number => letters(number-1)).sorted.reverse

letter2range.mapValues(range => lookupSortAndReverseCapture(range))

This is called a closure, meaning that the function has incorporated free variables (here, letters) that come from outside its own scope.
I generally don’t use this strategy with named functions like this, but there are many natural situations for using closures. In fact you do it all the time when you are creating anonymous functions as arguments to functions like map and mapValues and their cousins. As a reminder, here is the map-within-a-mapValues anonymous function we defined before.

letter2range.mapValues ( range => {
  val alphavalues = range.map(number => letters(number-1))
  alphavalues.sorted.reverse
})

The letters variable has been “closed over” in the anonymous function range => { … }, which is not very different from what we did with the closure-style lookupSortAndReverseCapture function. All the code in one spot Since there are some dependencies between the different steps in this tutorial that could get things mixed up, here’s all the code in one spot so that you can run it easily.

// Get a list of the letters
val letters = "abcdefghijklmnopqrstuvwxyz".split("").toList.tail

// Now create a list that maps each letter to a list containing itself
// and the two letters after it, in reverse alphabetical
// order. (Bizarre, but hey, it's a simple example. BTW, we lose y and
// z in the process.)
letters.zip((1 to 26).toList.sliding(3).toList).toMap.mapValues(_.map(x => letters(x-1)).sorted.reverse)

// Pretty unintelligible. Let's break things up a bit
val ranges = (1 to 26).toList.sliding(3).toList
val letter2range = letters.zip(ranges).toMap
letter2range.mapValues(_.map(x => letters(x-1)).sorted.reverse)

// Okay, that's better.
// But it is easier to interpret the latter if we break things up a bit
letter2range.mapValues ( range => {
  val alphavalues = range.map(number => letters(number-1))
  alphavalues.sorted.reverse
})

// We can also do the one-liner coherently if we have a helper function.
def lookupSortAndReverse (range: List[Int], alpha: List[String]) =
  range.map(number => alpha(number-1)).sorted.reverse

letter2range.mapValues(range => lookupSortAndReverse(range, letters))

// Note that we can "capture" the letters value, though this requires
// letters to be defined before lookupSortAndReverseCapture in the
// program.
def lookupSortAndReverseCapture (range: List[Int]) =
  range.map(number => letters(number-1)).sorted.reverse

letter2range.mapValues(range => lookupSortAndReverseCapture(range))

Wrapup Hopefully this will encourage you to use a clearer coding style and demonstrates some aspects of code blocks that you may not have realized. However, this just scratches the surface of writing clearer code, and a lot of it will come with time and practice, and with realizing how necessary it is when you look back at code you wrote months ago. Note that one easy thing you can do to create better code is to stick to established coding conventions. For example, see the coding guidelines for Scala on the Scala documentation project. There is also a lot of other very useful stuff there, including tutorials, and it is actively evolving and growing! Reference: First steps in Scala for beginning programmers, Part 12 from our JCG partner Jason Baldridge at the Bcomposes blog.
Related Articles :Scala Tutorial – Scala REPL, expressions, variables, basic types, simple functions, saving and running programs, comments Scala Tutorial – Tuples, Lists, methods on Lists and Strings Scala Tutorial – conditional execution with if-else blocks and matching Scala Tutorial – iteration, for expressions, yield, map, filter, count Scala Tutorial – regular expressions, matching Scala Tutorial – regular expressions, matching and substitutions with the scala.util.matching API Scala Tutorial – Maps, Sets, groupBy, Options, flatten, flatMap Scala Tutorial – scala.io.Source, accessing files, flatMap, mutable Maps Scala Tutorial – objects, classes, inheritance, traits, Lists with multiple related types, apply Scala Tutorial – scripting, compiling, main methods, return values of functions Scala Tutorial – SBT, scalabha, packages, build systems Fun with function composition in Scala How Scala changed the way I think about my Java Code Testing with Scala Things Every Programmer Should Know...

Diminishing Returns in software development and maintenance

Everyone knows from reading The Mythical Man Month that as you add more people to a software development project you will see diminishing marginal returns. When you add a person to a team, there’s a short-term hit as the rest of the team slows down to bring the new team member up to speed and adjusts to working with another person, making sure that they fit in and can contribute. There’s also a long-term cost. More people means more people who need to talk to each other (n × (n − 1) / 2 communication paths), which means more opportunities for misunderstandings and mistakes and misdirections and missed handoffs, more chances for disagreements and conflicts, more bottleneck points. As you continue to add people, the team needs to spend more time getting each new person up to speed and more time keeping everyone on the team in synch. Adding more people means that the team speeds up less and less, while people costs and communications costs and overhead costs keep going up. At some point negative returns set in – if you add more people, the team’s performance will decline and you will get less work done, not more. Diminishing Returns from any One Practice But adding too many people to a project isn’t the only case of diminishing returns in software development. If you work on a big enough project, or if you work in maintenance for long enough, you will run into problems of diminishing returns everywhere that you look. Pushing too hard in one direction, depending too much on any tool or practice, will eventually yield diminishing returns. This applies to: manual functional and acceptance testing; test automation; any single testing technique; code reviews; static analysis bug finding tools; and penetration tests and other security reviews. Aiming for 100% code coverage on unit tests is a good example. Building a good automated regression safety net is important – as you wire in tests for key areas of the system, programmers get more confidence and can make more changes faster. How many tests are enough?
In Continuous Delivery, Jez Humble and David Farley set 80% coverage as a target for each of automated unit testing, functional testing and acceptance testing. You could get by with lower coverage in many areas, higher coverage in core areas. You need enough tests to catch common and important mistakes. But beyond this point, more tests get more difficult to write, and find fewer problems. Unit testing can only find so many problems in the first place. In Code Complete, Steve McConnell explains that unit testing can only find between 15% and 50% (on average 30%) of the defects in your code. Rather than writing more unit tests, people’s time would be better spent on other approaches like exploratory system testing and code reviews or stress testing or fuzzing to find different kinds of errors. Too much of anything is bad, but too much whiskey is enough. (Mark Twain, as quoted in Code Complete) Refactoring is important for maintaining and improving the structure and readability of code over time. It is intended to be a supporting practice – to help make changes and fixes simpler and clearer and safer. When refactoring becomes an end in itself or turns into Obsessive Refactoring Disorder, it not only adds unnecessary costs as programmers waste time over trivial details and style issues, it can also add unnecessary risks and create conflict in a team. Make sure that refactoring is done in a disciplined way, and focus refactoring on those areas that need it the most: on code that is frequently changed, routines that are too big, too hard to read, too complex and error-prone. Putting most of your attention on refactoring (or, if necessary, rewriting) this code will get you the highest returns. Less and Less over Time Diminishing returns also set in over time. The longer that you spend working the same way and with the same tools, the less benefits you will see.
Even core practices that you’ve grown to depend on don’t pay back over time, and at some point may cost more than they are worth. It’s time again for New Year’s resolutions – time to sign up at a gym and start lifting weights. If you stick with the same routine for a couple of months, you will start to see good results. But after a while your body will get used to the work – if you keep doing the same things the same way your performance will plateau and you will stop seeing gains. You will get bored and stop going to the gym, which will leave more room for people like me. If you do keep going, trying to push harder for returns, you will overtrain and injure yourself. The same thing happens to software teams following the same practices, using the same tools. Some of this is due to inertia. Teams, organizations reach an equilibrium point and they want to stay there. Because it is comfortable, and it works – or at least they understand it. And because the better the team is working, the harder it is to get better – all the low-hanging fruit has been picked. People keep doing what worked for them in the past. They stop looking beyond their established routines, stop looking for new ideas. Competence and control lead to complacency and acceptance. Instead of trying to be as good as possible, they settle for being good enough. This is the point of inspect-and-adapt in Scrum and other time boxed methods – asking the team to regularly re-evaluate what they are doing and how they are doing it, what’s going well and what isn’t, what they should do more of or less of, challenging the status quo and finding new ways to move forward. But even the act of assessing and improving is subject to diminishing returns. If you are building software in 2-week time boxes, and you’ve been doing this for 3, 4 or 5 years, then how much meaningful feedback should you really expect from so many superficial reviews? 
After a while the team finds themselves going over the same issues and problems and coming up with the same results. Reviews become an unnecessary and empty ritual, another waste of time.

The same thing happens with tools. When you first start using a static analysis bug checking tool, for example, there's a good chance that you will find some interesting problems that you didn't know were in the code – maybe even more problems than you can deal with. But once you triage this, fix up the code, and use the tool for a while, the tool will find fewer and fewer problems, until it gets to the point where you are paying for insurance – it isn't finding problems any more, but it might someday.

In "Has secure software development reached its limits?" William Jackson argues that SDLCs – all of them – eventually reach a point of diminishing returns from a quality and security standpoint, and that Microsoft, Oracle, and other big shops are already seeing diminishing returns from their SDLCs. Their software won't get any better – all they can do is keep spending time and money to stay where they are. The same thing happens with Agile methods like Scrum or XP – at some point you've squeezed everything that you can from this way of working, and the team's performance will plateau.

What can you do about diminishing returns?

First, understand and expect returns to diminish over time. Watch for the signs, and factor this into your expectations – even if you maintain discipline and keep spending on tools, you will get less and less return for your time and money. Watch for the team's velocity to plateau or decline. Expect this to happen and be prepared to make changes, even force fundamental changes on the team. If the tools that you are using aren't giving returns any more, then find new ones, or stop using them and see what happens.
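The static analysis pattern mentioned above is easy to illustrate. On a first run, a tool like FindBugs/SpotBugs typically flags defects such as comparing strings with == (the ES_COMPARING_STRINGS_WITH_EQ pattern in SpotBugs); a hypothetical sketch of the flagged code and its fix:

```java
// Illustrative example (not from the article): the kind of defect a
// static analysis tool such as FindBugs/SpotBugs flags on a first run.
public class StatusCheck {

    // Flagged pattern (ES_COMPARING_STRINGS_WITH_EQ in SpotBugs):
    //   return status == "ACTIVE";   // compares references, not content
    // The fix compares string values, and is null-safe because the
    // literal is the receiver:
    public static boolean isActive(String status) {
        return "ACTIVE".equals(status);
    }

    public static void main(String[] args) {
        System.out.println(isActive(new String("ACTIVE"))); // true
        System.out.println(isActive(null));                 // false
    }
}
```

Once this backlog of mechanical findings is triaged and fixed, the tool's per-run yield drops sharply – which is exactly the diminishing-returns curve the article describes.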
Keep reviewing how the team is working, but do these reviews differently: review less often, make the reviews more focused on specific problems, and involve different people from inside and outside of the team. Use problems or mistakes as an opportunity to shake things up and challenge the status quo. Dig deep using Root Cause Analysis, challenge the team's way of thinking and working, and look for something better. Don't settle for simple answers or incremental improvements.

Remember the 80/20 rule. Most of your problems will happen in the same small number of areas, from a small number of common causes. And most of your gains will come from a few initiatives.

Change the team's driving focus and key metrics, and set new bars. Use Lean methods and Lean Thinking to identify and eliminate bottlenecks, delays, and inefficiencies. Look at the controls, tests, and checks that you have added over time; question whether you still need them, or find steps and checks that can be combined, automated, or simplified. Focus on reducing cycle time and eliminating waste until you have squeezed out what you can. Then change your focus to quality and eliminating bugs, or to simplifying the release and deployment pipeline, or some other new focus that will push the team to improve in a meaningful way. And keep doing this and pushing until you see the team slowing down and results declining. Then start again, and push the team to improve along another dimension. Keep watching, keep changing, keep moving ahead.

Reference: Diminishing Returns in software development and maintenance from our JCG partner Jim Bird at the "Building Real Software" blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy