Implementing Producer/Consumer using SynchronousQueue

Among the many useful classes which Java provides for concurrency support, there is one I would like to talk about: SynchronousQueue. In particular, I would like to walk through a Producer / Consumer implementation that uses the handy SynchronousQueue as an exchange mechanism.

It might not be clear why to use this type of queue for producer / consumer communication until we look under the hood of the SynchronousQueue implementation. It turns out that it is not really a queue as we usually think of queues; the closest analogy would be a collection containing at most one element.

Why is this useful? There are several reasons. From the producer's point of view, only one element (or message) can be stored in the queue at a time. In order to proceed with the next element (or message), the producer has to wait until the consumer consumes the one currently in the queue. From the consumer's point of view, it simply polls the queue for the next available element (or message), waiting if necessary. Quite simple, but with a great benefit: the producer cannot send messages faster than the consumer can process them.

Here is one of the use cases I encountered recently: compare two (possibly huge) database tables and detect whether they contain different data or the same data (a copy). The SynchronousQueue is quite a handy tool for this problem: it allows each table to be handled in its own thread and compensates for possible timeouts / latency while reading from two different databases.

Let's start by defining our compare function, which accepts the source and destination data sources as well as the name of the table to compare. I am using the very useful JdbcTemplate class from the Spring Framework, as it abstracts away all the boring details of dealing with connections and prepared statements extremely well.
public boolean compare( final DataSource source, final DataSource destination, final String table ) {
    final JdbcTemplate from = new JdbcTemplate( source );
    final JdbcTemplate to = new JdbcTemplate( destination );
}

Before doing any actual data comparison, it's a good idea to compare the table's row count in the source and destination databases:

if( from.queryForLong( "SELECT count(1) FROM " + table ) !=
        to.queryForLong( "SELECT count(1) FROM " + table ) ) {
    return false;
}

Now, knowing at least that the table contains the same number of rows in both databases, we can start with the data comparison. The algorithm is very simple:

- create a separate thread for the source (producer) and destination (consumer) databases
- the producer thread reads a single row from the table and puts it into the SynchronousQueue
- the consumer thread also reads a single row from the table, then asks the queue for the available row to compare (waiting if necessary) and finally compares the two result sets

Using another great part of the Java concurrency utilities, thread pooling, let's define a thread pool with a fixed number of threads (2):

final ExecutorService executor = Executors.newFixedThreadPool( 2 );
final SynchronousQueue< List< ? > > resultSets = new SynchronousQueue< List< ? > >();

Following the described algorithm, the producer functionality can be represented as a single callable:

Callable< Void > producer = new Callable< Void >() {
    @Override
    public Void call() throws Exception {
        from.query( "SELECT * FROM " + table, new RowCallbackHandler() {
            @Override
            public void processRow( ResultSet rs ) throws SQLException {
                try {
                    List< ? > row = ...; // convert ResultSet to List
                    if( !resultSets.offer( row, 2, TimeUnit.MINUTES ) ) {
                        throw new SQLException( "Having more data but consumer has already completed" );
                    }
                } catch( InterruptedException ex ) {
                    throw new SQLException( "Having more data but producer has been interrupted" );
                }
            }
        } );
        return null;
    }
};

The code is a bit verbose due to Java syntax, but it doesn't actually do much.
Each row read from the table, the producer converts to a list (the implementation has been omitted as it is boilerplate) and puts into the queue (offer). If the queue is not empty, the producer blocks, waiting for the consumer to finish its work. The consumer, respectively, can be represented as the following callable:

Callable< Void > consumer = new Callable< Void >() {
    @Override
    public Void call() throws Exception {
        to.query( "SELECT * FROM " + table, new RowCallbackHandler() {
            @Override
            public void processRow( ResultSet rs ) throws SQLException {
                try {
                    List< ? > source = resultSets.poll( 2, TimeUnit.MINUTES );
                    if( source == null ) {
                        throw new SQLException( "Having more data but producer has already completed" );
                    }
                    List< ? > destination = ...; // convert ResultSet to List
                    if( !source.equals( destination ) ) {
                        throw new SQLException( "Row data is not the same" );
                    }
                } catch( InterruptedException ex ) {
                    throw new SQLException( "Having more data but consumer has been interrupted" );
                }
            }
        } );
        return null;
    }
};

The consumer performs the reverse operation on the queue: instead of putting data, it pulls it (poll) from the queue. If the queue is empty, the consumer blocks, waiting for the producer to publish the next row. The only part left is submitting those callables for execution. Any exception returned by a Future's get method indicates that the table doesn't contain the same data (or that there was an issue getting data from a database):

List< Future< Void > > futures = executor.invokeAll( Arrays.asList( producer, consumer ) );
for( final Future< Void > future: futures ) {
    future.get( 5, TimeUnit.MINUTES );
}

Reference: Implementing Producer/Consumer using SynchronousQueue from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog. ...
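The blocking handoff at the heart of this pattern can be seen in a minimal, self-contained sketch (the class name and the "row-1" value here are mine, purely for illustration, not from the article's code):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class HandoffDemo {

    // One producer thread hands a single element to the calling thread.
    public static String demo() {
        final SynchronousQueue<String> queue = new SynchronousQueue<>();
        Thread producer = new Thread(() -> {
            try {
                queue.put("row-1"); // blocks until a consumer takes the element
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        try {
            // poll with a timeout, mirroring the offer/poll pattern above
            String row = queue.poll(5, TimeUnit.SECONDS);
            producer.join();
            return row;
        } catch (InterruptedException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println("Consumed: " + demo());
    }
}
```

Because the queue has no internal capacity, put returns only once poll has taken the element – exactly the back-pressure property the article relies on.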

Devoxx UK free ticket giveaway

Java Code Geeks are proud to conduct another important giveaway for the Java community! For this one we have teamed up with the Devoxx community and managed to get a ticket for the Devoxx UK 2013 London community conference, taking place on the 26th and 27th of March 2013. That ticket is the prize for our next giveaway. A prize worth £350! Before going into the specifics of our giveaway, we would like to say a few words about the Devoxx UK community conferences. The Devoxx non-profit community conferences have been run very successfully in Belgium and France. We enjoyed them so much that the London Java Community (LJC) decided to bring a version over to London (http://devoxx.co.uk)! So we're bringing over the who's who of the Java/JVM and software development world, including folks such as Milton Smith – head of Java security – and Charlie Nutter – the inventor of JRuby – as well as showcasing talent from the local development community. Tracks include:

- Languages on the JVM
- Java SE
- Java EE
- Mobile
- Architecture, Cloud and Security
- Web and Big Data
- Methodology
- The Future

We'd love to see you and members of your dev teams there as attendees (it's a non-profit conference by developers, for developers). It's only £350 for the two days, so get in quick at http://reguk.devoxx.com More details at: http://www.devoxx.co.uk Register now at: http://reguk.devoxx.com Twitter: @DevoxxUK How to enter? Just send an email here using the subject "Devoxx UK 2013 London". An empty email will do; it's that simple! (Note: By entering the contest you will automatically be included in the forthcoming Java Code Geeks Newsletter.) Deadline: The contest will close on Friday 01 March 2013 PT. The winner will be contacted by email, so be sure to use your real email address! Important notes: Please spread the news! The larger the number of participants, the more people will get the chance to participate in one of the best conferences about Java around! Good Luck! The Java Code Geeks Team ...

How Friction Slows Us Down in Software Development

I once joined a project where running the "unit" tests took three and a half hours. As you may have guessed, the developers didn't run the tests before they checked in code, resulting in a frequently red build. Running the tests simply created too much friction for the developers. I define friction as anything that resists the developer while she is producing software. Since then, I've spotted friction in numerous places while developing software.

Friction in Software Development

Since friction impacts productivity negatively, it's important that we understand it. Here are some of my observations:

- Friction can come from different sources. It can result from your tool set, like when you have to wait for Perforce to check out a file over the network before you can edit it. Friction can also result from your development process, for example when you have to wait for the QA department to test your code before it can be released.
- Friction can operate on different time scales. Some friction slows you down a lot, while other friction is much more benign. For instance, waiting for the next set of requirements might keep you from writing valuable software for weeks. On the other hand, waiting for someone to review your code changes may take only a couple of minutes.
- Friction can be more than simple delays. It also rears its ugly head when things are more difficult than they ought to be. In the vi editor, for example, you must switch between command and insert modes. Seasoned vi users are just as fast as with editors that don't have that separation. Yet they do have to keep track of which mode they are in, which gives them a higher cognitive load.

Lubricating Software Development

There has been a trend to decrease friction in software development. Tools like Integrated Development Environments have eliminated many sources of friction. For instance, Eclipse will automatically compile your code when you save it.
Automated refactorings decrease both the time and the cognitive load required to make certain code changes. On the process side, things like Agile development methodologies and the DevOps movement have eliminated or reduced friction. For instance, continuous deployment automates the release of software into production. These lubricants have given us a fighting chance in a world of increasing complexity.

Frictionless Software Development

It's fun to think about how far we could take these improvements, and what the ultimate, frictionless, software development environment might look like. My guess is that it would call for the combination of some of the same trends we already see in consumer and enterprise software products. Cloud computing will play a big role, as will simplification of the user interaction, and access from anywhere. What do you think? What frictions have you encountered? Do you think frictions are the same as waste in Lean? What have you done to lubricate the frictions away? What would your perfect, frictionless, software development environment look like?

Reference: How Friction Slows Us Down in Software Development from our JCG partner Remon Sinnema at the Secure Software Development blog. ...

JavaFX 2 XYCharts and Java 7 Features

One of my favorite features of JavaFX 2 is the standard charts it provides in its javafx.scene.chart package. This package provides several different types of charts out-of-the-box. All but one of these (the PieChart) are "2-axis charts" (specific implementations of XYChart). In this post, I look at the commonality between these specializations of XYChart. Along the way, I look at several Java 7 features that come in handy. A UML class diagram for the key chart types in the javafx.scene.chart package is shown next. Note that AreaChart, StackedAreaChart, BarChart, StackedBarChart, BubbleChart, LineChart, and ScatterChart all extend XYChart.

As the UML diagram above (generated using JDeveloper) indicates, the PieChart extends Chart directly, while all the other chart types extend XYChart. Because all the chart types other than PieChart extend XYChart, they share some common features. For example, they are all 2-axis charts with a horizontal ("x") axis and a vertical ("y") axis. They generally allow data to be specified in the same format (data structure) for all the XY charts. The remainder of this post demonstrates using the same data for most of the XYCharts. The primary use of a chart is to show data, so the next code listing demonstrates retrieving data from the "hr" sample schema in an Oracle database. Note that JDBC_URL, USERNAME, PASSWORD, and AVG_SALARIES_PER_DEPARTMENT_QUERY are constant Strings used in the JDBC connection and for the query.

getAverageDepartmentsSalaries()

/**
 * Provide average salary per department name.
 *
 * @return Map of department names to average salary per department.
 */
public Map<String, Double> getAverageDepartmentsSalaries()
{
   final Map<String, Double> averageSalaryPerDepartment = new HashMap<>();
   try (final Connection connection = DriverManager.getConnection(JDBC_URL, USERNAME, PASSWORD);
        final Statement statement = connection.createStatement();
        final ResultSet rs = statement.executeQuery(AVG_SALARIES_PER_DEPARTMENT_QUERY))
   {
      while (rs.next())
      {
         final String departmentName = rs.getString(COLUMN_DEPARTMENT_NAME);
         final Double salaryAverage = rs.getDouble(ALIAS_AVERAGE_SALARY);
         averageSalaryPerDepartment.put(departmentName, salaryAverage);
      }
   }
   catch (SQLException sqlEx)
   {
      LOGGER.log(
         Level.SEVERE,
         "Unable to get average salaries per department - {0}",
         sqlEx.toString());
   }
   return averageSalaryPerDepartment;
}

The Java code snippet above uses JDBC to retrieve data for populating a Map of department name Strings to the average salary of the employees in each department. There are a couple of handy Java 7 features used in this code. A small feature is the inferred generic parameterized typing of the diamond operator used with the declaration of the local variable averageSalaryPerDepartment (line 8). This is a small granule of syntactic sugar, but it does make the code more concise. A more significant Java 7 feature is the use of the try-with-resources statement for handling the Connection, Statement, and ResultSet resources (lines 9-11). This is a much nicer way to handle the opening and closing of these resources, even in the face of exceptions, than was previously necessary when using JDBC.
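The reverse-order closing guarantee of try-with-resources can be verified with a small sketch (the fake resource names are mine, standing in for the Connection, Statement, and ResultSet above):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {

    static final List<String> closed = new ArrayList<>();

    // Each fake resource records its name when close() is called.
    static AutoCloseable resource(final String name) {
        return () -> closed.add(name);
    }

    public static List<String> demo() {
        closed.clear();
        try (AutoCloseable connection = resource("connection");
             AutoCloseable statement = resource("statement");
             AutoCloseable resultSet = resource("resultSet")) {
            // work with the resources here
        } catch (Exception e) {
            // AutoCloseable.close() is declared to throw Exception
        }
        return closed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // resources close in reverse order of creation
    }
}
```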
The Java Tutorials page on The try-with-resources Statement advertises that this statement "ensures that each resource is closed at the end of the statement" and that each resource will "be closed regardless of whether the try statement completes normally or abruptly." The page also notes that when there are multiple resources specified in the same statement, as is done in the above code, "the close methods of resources are called in the opposite order of their creation." The data retrieved from the database can be placed into the appropriate data structure to support use by most of the XYCharts. This is shown in the next method.

ChartMaker.createXyChartDataForAverageDepartmentSalary(Map)

/**
 * Create XYChart Data representing average salary per department name.
 *
 * @param newAverageSalariesPerDepartment Map of department name (keys) to
 *    average salary for each department (values).
 * @return XYChart Data representing average salary per department.
 */
public static ObservableList<XYChart.Series<String, Double>> createXyChartDataForAverageDepartmentSalary(
   final Map<String, Double> newAverageSalariesPerDepartment)
{
   final Series<String, Double> series = new Series<>();
   series.setName("Departments");
   for (final Map.Entry<String, Double> entry : newAverageSalariesPerDepartment.entrySet())
   {
      series.getData().add(new XYChart.Data<>(entry.getKey(), entry.getValue()));
   }
   final ObservableList<XYChart.Series<String, Double>> chartData =
      FXCollections.observableArrayList();
   chartData.add(series);
   return chartData;
}

The method just shown places the retrieved data in a data structure that can be used by nearly all of the XYChart-based charts. With the retrieved data now packaged in a JavaFX observable collection, the charts can be easily generated. The next code snippet shows methods for generating several XYChart-based charts (Area, Bar, Line, and Scatter). Note how similar they all are and how they use the same data provided by the same method.
The StackedBar and StackedArea charts can also use similar data, but are not shown here because they are not interesting for the single series of data being used in this example.

Methods for Generating XYCharts Except BubbleChart and Stacked Charts

private XYChart<String, Double> generateAreaChart(
   final Axis<String> xAxis, final Axis<Double> yAxis)
{
   final AreaChart<String, Double> areaChart =
      new AreaChart<>(
         xAxis, yAxis,
         ChartMaker.createXyChartDataForAverageDepartmentSalary(
            this.databaseAccess.getAverageDepartmentsSalaries()));
   return areaChart;
}

private XYChart<String, Double> generateBarChart(
   final Axis<String> xAxis, final Axis<Double> yAxis)
{
   final BarChart<String, Double> barChart =
      new BarChart<>(
         xAxis, yAxis,
         ChartMaker.createXyChartDataForAverageDepartmentSalary(
            this.databaseAccess.getAverageDepartmentsSalaries()));
   return barChart;
}

private XYChart<String, Double> generateLineChart(
   final Axis<String> xAxis, final Axis<Double> yAxis)
{
   final LineChart<String, Double> lineChart =
      new LineChart<>(
         xAxis, yAxis,
         ChartMaker.createXyChartDataForAverageDepartmentSalary(
            this.databaseAccess.getAverageDepartmentsSalaries()));
   return lineChart;
}

private XYChart<String, Double> generateScatterChart(
   final Axis<String> xAxis, final Axis<Double> yAxis)
{
   final ScatterChart<String, Double> scatterChart =
      new ScatterChart<>(
         xAxis, yAxis,
         ChartMaker.createXyChartDataForAverageDepartmentSalary(
            this.databaseAccess.getAverageDepartmentsSalaries()));
   return scatterChart;
}

These methods are so similar that I could have actually used method handles
(or more traditional reflection APIs) to reflectively call the appropriate chart constructor rather than use separate methods. However, I am using these for my RMOUG Training Days 2013 presentation in February and so wanted to leave the chart-specific constructors in place to make them clearer to audience members. One exception to the general handling of XYChart types is the handling of BubbleChart. This chart expects a numeric type for its x-axis, and so the String-based (department name) x-axis data provided above will not work. A different method (not shown here) provides a query that returns average salaries by department ID (Long) rather than by department name. The slightly different generateBubbleChart method is shown next.

generateBubbleChart(Axis, Axis)

private XYChart<Number, Number> generateBubbleChart(
   final Axis<String> xAxis, final Axis<Double> yAxis)
{
   final Axis<Number> deptIdXAxis = new NumberAxis();
   deptIdXAxis.setLabel("Department ID");
   final BubbleChart<Number, Number> bubbleChart =
      new BubbleChart(
         deptIdXAxis, yAxis,
         ChartMaker.createXyChartDataForAverageDepartmentSalaryById(
            this.databaseAccess.getAverageDepartmentsSalariesById()));
   return bubbleChart;
}

Code could be written to call each of these different chart generation methods directly, but this provides a good chance to use Java 7's method handles. The next code snippet shows this being done. Not only does this code demonstrate method handles, but it also uses Java 7's multi-catch exception handling mechanism (the catch of NoSuchMethodException | IllegalAccessException near the end of the listing).

/**
 * Generate JavaFX XYChart-based chart.
 *
 * @param chartChoice Choice of chart to be generated.
 * @return JavaFX XYChart-based chart; may be null.
 * @throws IllegalArgumentException Thrown if the provided parameter is null.
 */
private XYChart<String, Double> generateChart(final ChartTypes chartChoice)
{
   XYChart<String, Double> chart = null;
   final Axis<String> xAxis = new CategoryAxis();
   xAxis.setLabel("Department Name");
   final Axis<? extends Number> yAxis = new NumberAxis();
   yAxis.setLabel("Average Salary");
   if (chartChoice == null)
   {
      throw new IllegalArgumentException(
         "Provided chart type was null; chart type must be specified.");
   }
   else if (!chartChoice.isXyChart())
   {
      LOGGER.log(
         Level.INFO,
         "Chart Choice {0} {1} an XYChart.",
         new Object[]{chartChoice.name(), chartChoice.isXyChart() ? "IS" : "is NOT"});
   }

   final MethodHandle methodHandle = buildAppropriateMethodHandle(chartChoice);
   try
   {
      chart = methodHandle != null
         ? (XYChart<String, Double>) methodHandle.invokeExact(this, xAxis, yAxis)
         : null;
      chart.setTitle("Average Department Salaries");
   }
   catch (WrongMethodTypeException wmtEx)
   {
      LOGGER.log(
         Level.SEVERE,
         "Unable to invoke method because it is wrong type - {0}",
         wmtEx.toString());
   }
   catch (Throwable throwable)
   {
      LOGGER.log(
         Level.SEVERE,
         "Underlying method threw a Throwable - {0}",
         throwable.toString());
   }

   return chart;
}

/**
 * Build a MethodHandle for calling the appropriate chart generation method
 * based on the provided ChartTypes choice of chart.
 *
 * @param chartChoice ChartTypes instance indicating which type of chart
 *    is to be generated so that an appropriately named method can be invoked
 *    for generation of that chart.
 * @return MethodHandle for invoking chart generation.
 */
private MethodHandle buildAppropriateMethodHandle(final ChartTypes chartChoice)
{
   MethodHandle methodHandle = null;
   final MethodType methodDescription =
      MethodType.methodType(XYChart.class, Axis.class, Axis.class);
   final String methodName = "generate" + chartChoice.getChartTypeName() + "Chart";

   try
   {
      methodHandle = MethodHandles.lookup().findVirtual(
         this.getClass(), methodName, methodDescription);
   }
   catch (NoSuchMethodException | IllegalAccessException exception)
   {
      LOGGER.log(
         Level.SEVERE,
         "Unable to acquire MethodHandle to method {0} - {1}",
         new Object[]{methodName, exception.toString()});
   }
   return methodHandle;
}

A series of images follows that shows how these XY Charts appear when rendered by JavaFX.
[Chart screenshots: Area Chart, Bar Chart, Bubble Chart, Line Chart, Scatter Chart]

As stated above, method handles could have been used to reduce the code even further, because individual methods for generating each XYChart are not absolutely necessary and could have been called reflectively based on the desired chart type. It's also worth emphasizing that if the x-axis data had been numeric, the code would be the same (and could be invoked reflectively) for all XYChart types, including the Bubble Chart. JavaFX makes it easy to generate attractive charts representing provided data. Java 7 features make this even easier by making code more concise and more expressive, and by allowing easy application of reflection when appropriate.

Reference: JavaFX 2 XYCharts and Java 7 Features from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...
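The findVirtual lookup used in buildAppropriateMethodHandle can be reduced to a small self-contained sketch (the class and the "Bar"/"Line" method names here are mine, not taken from the article's chart code):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class HandleDemo {

    public String generateBarChart()  { return "bar"; }
    public String generateLineChart() { return "line"; }

    // Look up generate<Name>Chart by name, as the article does for chart types.
    public static String invokeFor(String chartType) {
        try {
            MethodType type = MethodType.methodType(String.class);
            MethodHandle handle = MethodHandles.lookup()
                    .findVirtual(HandleDemo.class, "generate" + chartType + "Chart", type);
            return (String) handle.invoke(new HandleDemo());
        } catch (NoSuchMethodException | IllegalAccessException ex) {
            return null; // same multi-catch shape as the article's lookup code
        } catch (Throwable t) {
            return null; // invoke() is declared to throw Throwable
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeFor("Bar"));
    }
}
```

Note that invoke (rather than invokeExact) is used here, since the sketch's return type does not match the handle's type exactly.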

One jar to rule them all

Trip down memory lane

Back in 1998, when I was a C/C++ developer trying my hand at Java, a few things about the language were, to put it mildly, irritating for me. I remember fretting quite a lot about these:

- Why isn't there a decent editor for this? C/C++ had quite a few. All that I had for Java was the good old Notepad.
- Why do I have to make a class when all I want is a function? Why wasn't a function an object as well?
- Why can't I just package everything into one zip/jar and let the end user launch it with a double click?

and a few others. Back then, I frequently found myself chiding myself for not being able to let go of my 'C/C++ way of thinking' and embrace the 'Java way' of doing things. Now, writing this article in 2013, about a decade and a half later, surprisingly all of those early irritations are gone. Not because I have embraced the 'Java' way, but because Java has changed. Idle chit-chat aside, the point of this article is to talk about one of these questions: 'Why can't I just package everything into one zip/jar and let the end user launch it with a double click?'

Why do we need this – one zip/jar – that is executable?

If you are a developer, coding away happily in your IDE (I despise all of you who have coded Java on Eclipse or NetBeans from day one and have never had to code in Notepad), assisted by Google (I positively, totally hate all of you who did not have to find stuff on the internet before Google), there is probably no convincing case. However, have you ever faced a situation where:

- You have been pulled into the data centre because the guys there have followed your deployment steps but your application / website will simply not work? All of a sudden the environment variables are all messed up, when 'nobody at all so much as touched' the production boxes, and you are the one who has to 'just make it work'.
- You are sitting with your business stakeholder, staring incredulously at a 'ClassNotFoundException', convinced that Java does not like you at all.

In short, what I am trying to say is: when you are in the 'relative' sanity of your dev box / environment, a single executable jar does not really do anything for you. But the moment you step into the twilight zone of unknown servers and situations (sans the IDE and other assorted tools), you start appreciating just how much a single executable jar could have helped.

Ok, I get it. But what's the big deal? We can make such a package / zip / jar in a jiffy if we have to. Isn't that so?

In all my naivety, I thought so, and found out the answer the hard way. Let me walk you through it. Fire up your editors, folks. Let's create an executableJar project. I use JDK 1.7.0, STS, and Maven 3.0.4. If you are new to Maven or just not hands-on, I recommend you read this and this.

File: C:\projects\MavenCommands.bat

ECHO OFF
REM =============================
REM Set the env. variables.
REM =============================
SET PATH=%PATH%;C:\ProgramFiles\apache-maven-3.0.4\bin;
SET JAVA_HOME=C:\ProgramFiles\Java\jdk1.7.0

REM =============================
REM Standalone java application.
REM =============================
call mvn archetype:generate ^
 -DarchetypeArtifactId=maven-archetype-quickstart ^
 -DinteractiveMode=false ^
 -DgroupId=foo.bar ^
 -DartifactId=executableJar001

pause

After you run this batch file, you will have a fully compilable standard Java application. Go ahead, compile it and build a jar (mvn -e clean install). You will end up with executableJar001-1.0-SNAPSHOT.jar at C:\projects\executableJar001\target. Now let's run 'java -jar jarFileName'. And here you stumble for the first time. In geeky vocabulary, it tells you that there was no class with a main method and hence it did not know what to execute. Fortunately this is an easy one. There is a standard Java process to solve it. And there is a Maven plugin to solve it.
I will use the latter.

Updated file: /executableJar001/pom.xml

...
<dependencies>
   ...
</dependencies>

<build>
   <plugins>
      <!-- Set main class in the jar. -->
      <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-jar-plugin</artifactId>
         <version>2.4</version>
         <configuration>
            <archive>
               <manifest>
                  <mainClass>foo.bar.App</mainClass>
               </manifest>
            </archive>
         </configuration>
      </plugin>
   </plugins>
</build>
...

You can compile and assemble the application again (mvn -e clean install). It will create a jar file in the target folder. Try running the jar from the command line again. This time you will get the intended result. So, we are all sorted, right? Wrong. Very wrong. Why? Everything seems fine. Let's dig in a bit deeper and we will find why everything is not as sorted as it looks at the moment. Let's go ahead and add a dependency: e.g. say we want to add logging, and for that we want to use a third-party jar, i.e. Logback. I will let Maven handle dependencies in the development environment.

Updated file: /executableJar001/pom.xml

...
<dependencies>
   <!-- Logging -->
   <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.0.9</version>
   </dependency>
</dependencies>

<build>
   ...
</build>

Updated file: /executableJar001/src/main/java/foo/bar/App.java

package foo.bar;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App
{
    private final static Logger logger = LoggerFactory.getLogger( App.class );

    public static void main( String[] args )
    {
        System.out.println( "Hello World!" );
        logger.debug( "Hello world from logger." );
    }
}

Now let's compile and run the jar from the command prompt again. Did you see what happened?

Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory

Basically it is saying that the class (i.e. the actual code) of the LoggerFactory (i.e. the 3rd-party jar that we had added in the development environment) was not found.
Oh, but surely we should be able to tell Java to pick up the 3rd-party libraries from some folder? Definitely. It is almost a certainty – if you are asking that question – that for most of your applications you tell the JVM where the 3rd-party / dependency libraries are. You tell it this by setting the classpath. You could possibly be using some application server, e.g. Tomcat / Jetty, and that could be picking up some dependencies itself. And that is exactly where the problem originates. As a developer, I provide an x.jar that works. However, for it to work, it depends on a.jar (which in turn might depend upon b.jar and c.jar ... you get the point). When I, as a developer, bundle up my deliverable, an x.jar, there is a dependency – on whoever I am handing this out to – to make sure that the classpath is correctly set in the other environment where x.jar is supposed to work. It is not that big a deal, mostly. However, it is not trivial either. There is a multitude of ways in which the dependencies on the target environment could get messed up. There might be routine updates. There might be some other application deployed on the same production box that needed an update to a jar that nobody thought would impact yours. We can discuss and debate the multitude of ways that these kinds of mishaps can be prevented, but the bottom line is that x.jar (the developer's responsibility) has dependencies (that the developer does not directly control). And that leads to mishaps. Of course, if you add into this mix the whole lot of variables that come in because of different versions, different application servers, etc. etc., the existing solution of providing x.jar only quickly starts looking very fragile. So, what do we do? Say thanks to Dr. P. Simon Tuffs. This gentleman explains how he catered to this problem at this link. It is a good read; I recommend it. What I have explained in very layman's terms (and have barely scratched the surface of), Simon takes a deep dive into: the problem and how he solved it.
Long story short, he coded a solution and made it open source. I am not going to replay the same information again – read his article, it is quite informative – but I will call out the salient points of his solution:

- It allows folks to create a single jar that contains everything – your code, resources, dependencies, application server (potentially) – everything.
- It allows the end user to run this entire humongous jar with the simple java -jar jarFileName command.
- It allows developers to develop the same way they have been developing, e.g. if it is a web application, the war file structure remains the same. So there are no changes in the development process.

Fine. So how do we go about doing it? It is detailed out in many places: the One-JAR website, Ant with One-JAR, Maven with One-JAR. Let's see it in action on our dummy code. Thankfully there is also a Maven plugin for this. Sadly it is not in the Maven Central repository (Why? Folks, why? You have put in 98% of the work. Why be sluggish about the last 2%?). It comes with nice usage instructions.

Updated file: /executableJar001/pom.xml

...
<dependencies>
   ...
</dependencies>

<build>
   <plugins>
      ...
      <!-- If you wanted to bundle all this in one jar. -->
      <plugin>
         <groupId>org.dstovall</groupId>
         <artifactId>onejar-maven-plugin</artifactId>
         <version>1.4.4</version>
         <executions>
            <execution>
               <goals>
                  <goal>one-jar</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
   </plugins>
</build>

<!-- Required only if you are using the onejar plugin. -->
<pluginRepositories>
   <pluginRepository>
      <id>onejar-maven-plugin.googlecode.com</id>
      <url>http://onejar-maven-plugin.googlecode.com/svn/mavenrepo</url>
   </pluginRepository>
</pluginRepositories>

Now all you need to do is run mvn -e clean package. You will get, apart from the normal jar, a fat, self-sufficient jar as well. Go ahead, do the java -jar jarFileName from the command prompt again. It should work. Hmm... that sounds good. Why isn't everybody going for this?
And this One-JAR seems to have been around since 2004. Why are we not seeing more players in this market? You know what they say about free lunches? There are none. While the concept is quite neat and very practical, it does not mean that every other player has decided to join in. So if your website ‘needs’ to be hosted on one of the biggie paid application servers (I don’t know why you would want to keep paying for that proprietary software and the folks that understand it – should you not pay only for quality folks and rely on the open source apps that do not lock you in?), One-JAR might not be a feasible solution for you. Also, I hear unconfirmed murmurs about how things might get sluggish during load-up if your app is big. So, before you decide to commit to using this, I recommend you do a POC and make sure that the other bits of your tech stack are not unhappy with One-JAR. My personal opinion is that 2004 was perhaps a little too early for this kind of thing. People were still struggling with stuff like standardization of the build and release process, getting a clear player in the ORM area, closing on a clear player for the MVC framework, etc. Not that those questions have been answered yet, or will be any time soon. But I think the flavour of current problems in the IT world is around: how to make DevOps work; how to make the entire build and release process automated; how to leverage open source libraries to provide solid, dependable software while ensuring there is no heavy proprietary software causing lock-in and hence making the solution less agile for future business requirements. And in my mind, One-JAR plays very nicely in that area. So, I definitely expect to see more of this tool and / or more tools around this concept. And, to be fair, there are more players in this area. Thanks to Christian Schlichtherle for pointing this out. There are the Maven Assembly Plugin and the Maven Shade Plugin, which cater to this exact same problem. 
I have not tried them yet but from the documentation they look quite alright, feature-wise. Dropwizard, although not the same thing, is in essence very similar. They have extended the whole one-jar concept with an embedded app server and out-of-the-box support for REST, JSON and Logback, in a nice neat package that you could just use straight off the shelf. So, as I keep saying, these are nice exciting times to be in the technology business, particularly if you like tinkering around with software.   Reference: One jar to rule them all from our JCG partner Partho at the Tech for Enterprise blog. ...

Scala pattern matching: A Case for new thinking?

The 16th President of the United States, Abraham Lincoln, once said: ‘As our case is new we must think and act anew’. In software engineering things probably aren’t as dramatic as civil wars and abolishing slavery, but we have interesting logical concepts concerning ‘case’. In Java the case statement provides for some limited conditional branching. In Scala, it is possible to construct some very sophisticated pattern matching logic using the case / match construct, which doesn’t just bring new possibilities but a new type of thinking to realise them. Let’s start with a classical 1st year Computer Science homework assignment: a fibonacci series that doesn’t start with 0, 1 but that starts with 1, 1. So the series will look like: 1, 1, 2, 3, 5, 8, 13, … every number is the sum of the previous two. In Java, we could do: public int fibonacci(int i) { if (i < 0) return 0; switch(i) { case 0: return 1; case 1: return 1; default: return fibonacci(i - 1) + fibonacci(i - 2); } } All straightforward. If 0 is passed in it counts as the first element in the series, so 1 should be returned. Note: to add some more spice to the party and make things a little bit more interesting, I added a little bit of logic to return 0 if a negative number is passed to our fibonacci method. In Scala, to achieve the same behaviour we would do: def fibonacci(in: Int): Int = { in match { case n if n < 0 => 0 case 0 | 1 => 1 case n => fibonacci(n - 1) + fibonacci(n - 2) } } Key points:The return type of the recursive method fibonacci is an Int. Recursive methods must explicitly specify the return type (see: Odersky – Programming in Scala – Chapter 2). It is possible to test for multiple values on the one line using the | notation. I do this to return a 1 for both 0 and 1 on line 4 of the example. There is no need for multiple return statements. In Java you must use multiple return statements or multiple break statements. Pattern matching is an expression which always returns something. 
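Since match is an expression, the whole body of the Scala fibonacci can be that single expression. A self-contained sketch (note the guard is written as n < 0, so that fibonacci(0) returns 1 just as in the Java version):

```scala
object FibDemo {
  // Pattern match as an expression: its result is the method's return value
  def fibonacci(in: Int): Int = in match {
    case n if n < 0 => 0                             // guard: negative input gives 0
    case 0 | 1      => 1                             // first two elements of the series
    case n          => fibonacci(n - 1) + fibonacci(n - 2)
  }

  def main(args: Array[String]): Unit =
    println((0 to 6).map(fibonacci).mkString(", "))  // 1, 1, 2, 3, 5, 8, 13
}
```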
In this example, I employ a guard to check for a negative number, and if the number is negative, zero is returned. In Scala it is also possible to check across different types. It is also possible to use the wildcard _ notation. We didn’t use either in the fibonacci, but just to illustrate these features… def multitypes(in: Any): String = in match { case i: Int => "You are an int!" case "Alex" => "You must be Alex" case s: String => "I don't know who you are but I know you are a String" case _ => "I haven't a clue who you are" } Pattern matching can be used with Scala Maps to useful effect. Suppose we have a Map to capture who we think should be playing in each position of the Lions backline for the Lions series in Australia. The keys of the map will be the position in the back line and the corresponding value will be the player who we think should be playing there. To represent a Rugby player we use a case class. Now now, you Java Heads, think of the case class as an immutable POJO written in an extremely concise way – they can be mutable too but for now think immutable. case class RugbyPlayer(name: String, country: String) val robKearney = RugbyPlayer("Rob Kearney", "Ireland") val georgeNorth = RugbyPlayer("George North", "Wales") val brianODriscol = RugbyPlayer("Brian O'Driscol", "Ireland") val jonnySexton = RugbyPlayer("Jonny Sexton", "Ireland") val benYoungs = RugbyPlayer("Ben Youngs", "England") // build a map val lionsPlayers = Map("FullBack" -> robKearney, "RightWing" -> georgeNorth, "OutsideCentre" -> brianODriscol, "Outhalf" -> jonnySexton, "Scrumhalf" -> benYoungs) // Note: Unlike Java HashMaps, Scala Maps do not return nulls. 
This is achieved by returning an Option, which can be either Some or None. So, if we ask for something that exists in the Map, like below: println(lionsPlayers.get("Outhalf")) // Outputs: Some(RugbyPlayer(Jonny Sexton,Ireland)) If we ask for something that is not in the Map yet, like below: println(lionsPlayers.get("InsideCentre")) // Outputs: None In this example we have players for every position except inside centre – which we can’t make up our mind about. Scala Maps are allowed to store nulls as values. Now in our case we don’t actually store a null for inside centre. So, instead of null being returned for inside centre (as would happen if we were using a Java HashMap), the type None is returned. For the other positions in the back line, we have matching values and the type Some is returned, which wraps around the corresponding RugbyPlayer. (Note: both Some and None extend Option.) We can write a function which pattern matches on the returned value from the Map and returns us something a little more user friendly. def show(x: Option[RugbyPlayer]) = x match { case Some(rugbyPlayerExt) => rugbyPlayerExt.name // If a rugby player is matched return its name case None => "Not decided yet ?" // } println(show(lionsPlayers.get("Outhalf"))) // outputs: Jonny Sexton println(show(lionsPlayers.get("InsideCentre"))) // Outputs: Not decided yet This example doesn’t just illustrate pattern matching but another concept known as extraction. The rugby player, when matched, is extracted and assigned to rugbyPlayerExt. We can then return the value of the rugby player’s name by getting it from rugbyPlayerExt. In fact, we can also add a guard and change around some logic. Suppose we had a biased journalist (Stephen Jones) who didn’t want any Irish players in the team. 
He could implement his own biased function to check for Irish players: def biasedShow(x: Option[RugbyPlayer]) = x match { case Some(rugbyPlayerExt) if rugbyPlayerExt.country == "Ireland" => rugbyPlayerExt.name + ", don't pick him." case Some(rugbyPlayerExt) => rugbyPlayerExt.name case None => "Not decided yet ?" } println(biasedShow(lionsPlayers.get("Outhalf"))) // Outputs Jonny... don't pick him println(biasedShow(lionsPlayers.get("Scrumhalf"))) // Outputs Ben Youngs Pattern matching Collections Scala also provides some powerful pattern matching features for Collections. Here’s a trivial example for getting the length of a list. def length[A](list: List[A]): Int = list match { case _ :: tail => 1 + length(tail) case Nil => 0 } And suppose we want to parse arguments from a tuple… def parseArgument(arg: String, value: Any) = (arg, value) match { case ("-l", lang) => setLanguage(lang) case ("-o" | "--optim", n: Int) if ((0 < n) && (n <= 3)) => setOptimizationLevel(n) case ("-h" | "--help", null) => displayHelp() case bad => badArgument(bad) } Single Parameter functions Consider a list of numbers from 1 to 10. The filter method takes a single parameter function that returns true or false. The single parameter function is applied to every element in the list and returns true or false for each. The elements that return true are filtered in; the elements that return false are filtered out of the resultant list. scala> val myList = List(1,2,3,4,5,6,7,8,9,10) myList: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) scala> myList.filter(x => x % 2 == 1) res13: List[Int] = List(1, 3, 5, 7, 9) Now now now, listen up and remember this. A pattern can be passed to any method that takes a single parameter function. Instead of passing a single parameter function which always returns true or false, we could have used a pattern which always returns true or false. 
scala> myList.filter { | case i: Int => i % 2 == 1 // odd numbers will return true | case _ => false // anything else will return false | } res14: List[Int] = List(1, 3, 5, 7, 9) Use it later? Scala compiles patterns to a PartialFunction. This means that not only can Scala pattern expressions be passed to other functions, they can also be stored for later use. scala> val patternToUseLater: PartialFunction[String, String] = { | case "Dublin" => "Ireland" | case _ => "Unknown" | } What this example is saying is: patternToUseLater is a partial function that takes a String and returns a String. The last statement in a function is returned by default, and because the case expression is a partial function it will be returned as a partial function and assigned to patternToUseLater, which of course can be used later. Finally, Jonny Sexton is a phenomenal Rugby player and it is a shame to hear he is leaving Leinster. Obviously, with Sexton’s busy schedule we can’t be sure if Jonny is reading this blog, but if he is: Jonny, sorry to see you go, we wish you all the best and hopefully will see you back one day in the Blue Jersey.   Reference: Scala pattern matching: A Case for new thinking? from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog. ...
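As a footnote to the stored-pattern idea above, here is a small self-contained sketch of a genuinely partial function (the wildcard case is deliberately dropped, and the city names are my own), showing isDefinedAt and collect, both of which rely on the PartialFunction nature of a pattern:

```scala
object PatternLaterDemo {
  // A pattern expression stored for later use; no wildcard case,
  // so the function is only defined for the two listed cities
  val cityToCountry: PartialFunction[String, String] = {
    case "Dublin"  => "Ireland"
    case "Cardiff" => "Wales"
  }

  def main(args: Array[String]): Unit = {
    println(cityToCountry("Dublin"))            // Ireland
    println(cityToCountry.isDefinedAt("Paris")) // false
    // collect applies the partial function only where it is defined
    println(List("Dublin", "Paris", "Cardiff").collect(cityToCountry))
    // List(Ireland, Wales)
  }
}
```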

Managing the Stream of Features in an Agile Program

One of the challenges in a program is how you manage the check-ins, especially if you have continuous integration. I am quite fond of continuous integration, no matter how large your program is. I also like short iterations. (Remember Short is Beautiful?) But imagine a product where you have a platform and layers. I’m separating the GUI and the API for the GUI, so you can see the application, the middleware and the platform. Now, this architecture is different from separate-but-related products that also might be a program. This is an archetype of an architecture, not your architecture. I am sure you have more than 3 middleware components or 4 app layer components. The product I’m thinking about had 12 middleware components, about another 12 platform components and about 50 app layer components. It was a big program with about 200 people working on it for over 2 years. I wanted to simplify the picture so we could have a conversation. The features cut through the app layers and the middleware layers. The colored lines are the features. Now, multiply these lines by each project team and each feature, and you can see what happens in a program. Imagine if I added colored lines for 25 features for 25 different feature teams. It could be a problem. However, if project teams limit their WIP (work in progress) and swarm around features, integrating as they proceed, they have fewer features in progress. And, if they are networks of people, so they have communities of practice, they have ways of talking to each other, so they don’t have to wait to sync with each other. People talk with each other when they need to. That’s it. When features are small, and they integrate every day, or every other day, people expect to discuss integration issues all the time. And, while I am not a fan of integration teams, even I admit that on a large program, you might need an integration team to help keep things moving. 
This is in addition to what everyone does anyway: syncing with the main line every day – taking down the new additions from the main line, and syncing just before putting new changes up. If you keep a stream of features moving in a program, even with many feature teams, as long as the project teams keep talking to one another, you are okay. You are not okay if someone decides, “I own this code and no one else can touch it.” Now, you might decide that all middleware code or all particular component code has to be reviewed. Or it all has to be smoke tested. Or it all has some other gate that it has to go through before it can be changed. Or you pair on everything. Or you have situational code ownership. That’s perfectly okay. You decide on the technical mores for your program. It’s a good idea to have them. But the larger the program, the less you can have one gatekeeper, because that person cannot be in one place, holding their fingers in the figurative dike. This is why I don’t like czar-like architects, but I do like embedded architects in the teams for agile programs. When the product reaches a certain level of maturity, you can work on a particular component as a feature, and swap it in or out once you have changed it. This takes significant skill. If you want to be agile for a program, you need to be more agile and lean, not less. You need to have smaller stories. You need to work by feature, not by architecture. You need to swarm. Well, that is if you don’t want program bloat. Okay, what’s confusing to you? What have I forgotten? Where do you disagree? Let’s discuss.   Reference: Managing the Stream of Features in an Agile Program from our JCG partner Johanna Rothman at the Managing Product Development blog. ...

Jenkins Description Setter Plugin for Improving Continuous Delivery Visibility

In Continuous Delivery each build is potentially shippable. This implies, among a lot of other things, assigning a non-snapshot version to your components as fast as possible so you can refer to them throughout the process. I suggest creating a release branch, assigning the version to the project and then starting the typical pipeline steps (compile, tests, code quality …) on the release branch. If you are using Jenkins, your build job screen will look something like:Note that we have released the project many times, but there is no quick way to know exactly which version was built in build number 40. To avoid this problem, and to have a quick overview of which version was built by each build job instance, we can use the Jenkins Description Setter Plugin. This plugin sets the description for each build, based upon a regular expression applied to the build log file. So your build job screen will look something like:Much better – now we know exactly the result of a build job and which product version was generated. So the first step is installing the plugin, by simply going to: Jenkins -> Manage Jenkins -> Manage Plugins -> Available. After installation you can open the build job configuration screen and add a post-build action called ‘Set build description’. Then add a regular expression for extracting the version number. 
In this case the regular expression is: \[INFO\] from version 0\.0\.1-SNAPSHOT to (.*) Take a look at the next fragment of the build log file:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building hello 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- versions-maven-plugin:2.0:set (default-cli) @ hello ---
[INFO] Searching for local aggregator root...
[INFO] Local aggregation root: /jobs/helloworld-inital-build/workspace
[INFO] Processing com.lordofthejars.helloworld:hello
[INFO] Updating project com.lordofthejars.helloworld:hello
[INFO] from version 0.0.1-SNAPSHOT to 1.0.43

Props: {project.version=1.0.43, project.artifactId=hello, project.groupId=com.lordofthejars.helloworld}

At line 12 we are logging the final version of our product for the current pipeline execution, so we create a regular expression which parses that line; the part between parentheses is used as the description. Depending on your log traces, the regular expression will differ from this one. In this case, we always use the same SNAPSHOT version in development, and only when the product is going to be released (this could be 3 times per day or every night) is the final version generated and set. Hope this plugin helps you make your builds clearer.   Reference: Jenkins Description Setter Plugin for Improving Continuous Delivery Visibility from our JCG partner Alex Soto at the One Jar To Rule Them All blog. ...
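The regular expression can be tried out away from Jenkins; a small sketch (in Scala, reusing the version line from the log fragment above) that extracts the same version string the plugin would put in the build description:

```scala
object VersionRegexDemo {
  // The same expression configured in the plugin; the parenthesised
  // group is what ends up as the build description
  val pattern = """\[INFO\] from version 0\.0\.1-SNAPSHOT to (.*)""".r

  def describe(logLine: String): Option[String] = logLine match {
    case pattern(version) => Some(version)  // group 1 captured by the extractor
    case _                => None           // line does not match the expression
  }

  def main(args: Array[String]): Unit =
    println(describe("[INFO] from version 0.0.1-SNAPSHOT to 1.0.43")) // Some(1.0.43)
}
```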

Dry parameter names

How often do you see code like this, especially when using dependency injection, the single-responsibility principle, and other “good practices”? class FriendsTimelineReader(userFinder: UserFinder, userStatusReader: UserStatusReader, statusSecurityFilter: StatusSecurityFilter) { ... } Note that the parameter names are exact copies of the class names. That’s certainly not DRY! In Java writing such code is a bit easier than in Scala, since you write the type first, and then the variable name can be auto-completed (the fact that there’s auto-complete in IDEs indicates that it’s a common pattern). In Scala there’s more typing, but then, less boilerplate around declaring fields / defining a constructor / assigning to fields. How to avoid that? We can always use cryptic variable names, like “ssf” instead of “statusSecurityFilter”. But then the whole effort of having nicely named classes to make the code readable, which is quite a hard task, goes to waste. Of course, variable names are very useful, for example when we have to distinguish between the 10 String parameters that our method takes, create an Int counter, etc. But the more we use specialised wrapper types (and now possibly even more, with Scala 2.10 introducing value classes) to make our code more type-safe, and the more fine-grained our services, the more often the variable / parameter names will mirror the class name. Even when there are a couple of instances of a class, often the parameter name contains some qualifier + the class name (e.g. given a Person class, personAssociator.createFriends(firstPerson, secondPerson)). It would be interesting to see some statistics on how often a variable name is a mirror of the class name or a suffix of it – I suspect it can be a large percentage. Maybe the next step in type safety is limiting the complete freedom when naming parameters? Or even getting rid of the parameter names altogether in some cases. For example. 
In the snippet from the beginning, we just want to get “an instance” of a UserFinder. Most probably there will only ever be one instance of this class in the system, and certainly only one will be used at a time by other classes. So why not let the class express that it wants an instance of a class, without having to name it? Let’s use the indefinite article “a” as a keyword (we don’t really care which instance is passed): class FriendsTimelineReader(a UserFinder, a UserStatusReader, a StatusSecurityFilter) Now, how would you use such a variable inside a class? Let’s use the definite article “the” – the instance that was given to the class, for example: val user10 = (the UserFinder).findById(10) Maybe this looks a bit like science fiction, but wouldn’t it be convenient? Or maybe this problem is already solved by some mechanism in other languages? Adam EDIT 27/01/2013: Extending the above to handle qualifiers: class Friends(a 'first Person, a 'second Person) { ... (the 'first Person).getName() ... }   Reference: Dry parameter names from our JCG partner Adam Warski at the Blog of Adam Warski blog. ...
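For what it's worth, Scala's implicit parameters already get part of the way to the "a / the" idea: the caller never names the argument, and the compiler supplies "the" instance in scope. A sketch (class and method names borrowed from the post; the String-returning stub body is invented purely to make it runnable):

```scala
// Invented stub so the sketch is self-contained
class UserFinder {
  def findById(id: Int): String = "user-" + id
}

// "a UserFinder": the class asks for an instance without naming it at the call site
class FriendsTimelineReader(implicit userFinder: UserFinder) {
  // "the UserFinder" is simply the implicit instance in scope
  def firstUser: String = userFinder.findById(10)
}

object ImplicitDemo {
  def main(args: Array[String]): Unit = {
    implicit val aUserFinder: UserFinder = new UserFinder
    val reader = new FriendsTimelineReader // implicit filled in by the compiler
    println(reader.firstUser)              // user-10
  }
}
```

The argument can still be passed explicitly when needed, e.g. in a test: new FriendsTimelineReader(new UserFinder).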

Good & Bad Patterns in Development and Operations

As part of my role at a new company I’ve been asked to provide feedback about structuring Dev & Ops as well as what sorts of things work and don’t. I certainly don’t claim to have all the answers, but I’ve seen some very functional and some very dysfunctional organizations. I’ve spent a fair amount of time thinking about what works & why. Below is a cleaned up version of a message I sent to our CEO who asked for my thoughts on what does and doesn’t work. This was intended as scaffolding for further discussion so I didn’t go into deep details. If you want more details on any particular area just throw some comments out there. I realize not all these issues are black & white to many folks – there are gray areas. My goal with this message was to drive conversation. I figure this is probably review to many folks, but maybe it’ll help someone. First, there are some very simple goals that all these bullets drive toward & they’re somewhat exclusive to SaaS companies:

- Customers should continuously receive value from Developers as code is incrementally pushed out
- Developers should get early feedback from customers on changes by enabling features for customers to test
- We can address problems for customers very quickly – often in a matter of hours
- We can inspect and understand customer behavior very deeply, gathering exceptional detail about how they use the service
- We can swap out components & substantially change the underlying software without the customer knowing (if we do it right)
- We can measure how happy customers are with changes as we make them based on behavior & feedback

The lists below are what I feel make that possible (Good) and what inhibit it (Bad):

Culture & Communication

Good:
- Stand ups. 
- Retrospectives
- Small, self-formed teams (let folks work on their area of passion)
- Use Information Radiators whenever possible (Kanban boards, stats on big monitors, etc.)
- Decisions by teams; Leaders facilitate consensus
- Discovering what doesn’t work is part of finding the right solution, not something to fear
- Hackathons allow Developers to do things they are passionate about
- Hire for personality & team fit first, technical ability second
- Data driven decisions; strive to have facts to back up decisions
- Make the right behavior the easiest thing to do – build a low resistance path to doing the right thing

Bad:
- Top down decision making
- Strict role assignments & silos
- Fear of not getting it right the first time
- Hiring for technical ability thinking team fit will come later
- Creating process out of fear that makes it difficult to do the right thing

Eliminate Manual Processes

Good:
- Continuous Deployment / Delivery
- Fully automated testing
- Test Driven Development
- Fully automated system monitoring, configuration & provisioning
- Separate Deploy & Release (Feature toggles)
- Deploy from master, do not branch (forces particular behaviors)

Bad:
- Manual testing by a QA Team – sometimes it’s necessary, but should be avoided
- Deploying off a branch – slows things down & allows for other bad behaviors
- Writing tests after writing code – code isn’t written with testing in mind
- Developers relying on other teams to perform tasks that could be automated
- Processes that are the result of fear rather than necessary business process

If it moves, measure it

Good:
- Collect high resolution metrics about everything you possibly can
- Developers can add new metrics by pushing new code; do not rely on additional configuration by other teams
- Graphs & metrics can be seen by anyone – Developers should rely on these
- There should be individuals or teams who are passionate about data visualization & analysis 
- Dev teams rely on these metrics to make decisions and help identify what metrics are important
- Developers watch metrics after pushing new code, watching for trend changes (Devs take responsibility for availability)

Bad:
- Operations has to configure new metrics after developers have added support for them (manual)
- Operations monitors metrics & asks Dev teams when they think there’s a problem
- Developers don’t look at metrics unless something is brought to their attention
- Code doesn’t expose metrics until someone else asks for it

And here is the long version of all of that…

#1 Culture & Communication

Above all else I consider these most important. I think most problems in other areas of the business can be overcome if you do well in these areas. Rally has been, by far, the best example of a very successful model that I’ve seen in this area. They aren’t unique – there are other companies with similar models & similar successes. Main points:

- Stand ups. By far the most effective tool for keeping everyone in touch. As teams grow you have to break them apart, so you have a 2nd standup where teams can bring cross-team items to share.
- Projects are tackled by relatively small, typically self-formed teams. Get individuals who are interested in working in an area together & they feed on each other's passion.
- Perform retrospectives. This gives individuals & small groups the ability to voice concerns in a way that fosters resolution. There’s an art to facilitating this but it works well when done right. It also allows recognition of things that are done well.
- Use open information radiators – it should be easy to see what’s going on by looking at status somewhere vs. having to ask for status, go to meetings, etc. Kanban boards are great for this.
- Leaders exist to facilitate and help drive consensus but decisions are largely made by teams, not leaders. This makes being a leader harder, but it makes the teams more empowered. 
- Accept that things may not work & the team and company will adjust when things do not work. This makes it easy to try new things & easy for people to vocalize when they think something isn’t working. If it’s hard to change process then people are more resistant to trying new things. This goes back to retrospectives for keeping things in check. Also important here are “spikes”, or time-boxed efforts explicitly designed to explore possibilities.
- Give developers time to pursue their own projects for the company. Many awesome features have come out of Hackathons where developers spent their own time to build something they were passionate about.
- Hire for personality fit first. I have seen many awesome people find a special niche in a company because they grew into a role that you couldn’t hire for – but what made that possible was that they worked well with the team as individuals. Hiring for technical skill also means you lose that skill when that person leaves; I would prefer to have cross-functional teams.
- Data driven decisions. This helps keep emotion and “I think xyz” out of the discussion & focuses on the data we do and do not have. If we don’t have data we either get more or acknowledge we may not be making the right decision but we’re going to move forward.
- Make the right thing the easiest thing. I’ve seen too many companies put process out there that makes the “right thing” really difficult, so it gets bypassed. The right thing should be an express train to done – very little resistance and very easy to do. It’s when you start wanting to do things differently that it should become harder, more painful.

Also, everyone owns the quality of the service. This includes availability, performance, user experience, cost to deliver, etc. At my last company, there was exceptional collaboration between Operations, Engineering and Product (and across engineering teams) on all aspects of the service, and there was a strong culture of shared ownership & very little finger pointing. 
If you want more details on this specific to Rally, I wrote a blog post with some more info: Blog Post

#2 Obsessively eliminate manual process – let computers do what they are good at

This is so much easier to do up front. There should be as little manual process as possible standing between a developer adding value for customers (writing code) and that code getting into production. There may be business process that controls when that feature is enabled for customers – but the act of deploying & testing that code should not be blocked by manual process. I refer to this as separating “Deploy” from “Release” – those are two very different things. Testing should only be manual to invalidate assumptions; validating assumptions should be automatic. When we assume that if x is true then y will occur, there should be a test to validate that this is true. Testers should not manually validate these sorts of things unless there is just no way to automate them (rare). Testers are valuable to invalidate assumptions. Testers should be looking at the assumptions made by Developers and helping identify those assumptions that may not always be correct. Too many organizations rely on manual testing because it’s “easier”, but it has some serious drawbacks:

- You can only change your system as fast as your team can manually test it – which is very slow.
- Your testing is done by humans who make mistakes and don’t behave predictably, so you get inaccurate results.
- The # of tests will only grow over time, requiring either more humans or more time, or both. It doesn’t scale.

Over time the software quality gets lower, takes longer to test, and the test results become less reliable. This is a death spiral for many companies who eventually find it very hard to make changes due to fear & low confidence in testing. Avoiding this requires that developers spend more time up front writing automated tests. This means developers might spend 60-70% of their time developing tests vs. 
writing code – this is the cost of doing business if you want to produce high quality software. That may seem excessive, but the tradeoffs are significant:

- Much higher code quality, which stays high (those tests are always run, so re-introduced bugs (regressions) get caught)
- Faster developer onboarding – the tests describe how the code should behave and act as documentation
- Refactoring code becomes easier because you know the tests describe what it should do
- Each commit to the codebase is fully tested, allowing nearly immediate deployment to production if done right
- Problems that make it into production feed back into more tests & continually improve code quality

Much of the time developing tests is spent thinking about how to solve the problem, but you are also writing code with the intent of making it testable. Code is often written differently when the developer knows tests need to pass vs. someone manually testing it. It’s much harder to come along later and write tests for existing code. You will hear me talk about Continuous Deployment & Continuous Integration – I feel these practices are extremely important to driving the above “good” behaviors. If you strive for Continuous Deployment then everything else falls into place without much disagreement, because it has to be that way. This has a lot of benefits beyond what’s listed above:

- Value can be delivered to customers in days or hours instead of weeks or months
- Developers can get immediate feedback on their change in production
- New features can be tuned & tweaked while they are fresh in a developer's mind
- You can focus on making it fast to resolve defects, no matter how predictable they are, rather than trying to predict all the ways things might go wrong
- Most of the tools and behaviors that enable Continuous Deployment scale to very large teams & very frequent deployments. Amazon is a prime example of this, deploying something, somewhere, about every 11 seconds. 
Many companies in the 30-100 engineer range talk about deploying tens of times per day.

This also impacts how you hire QA/testers. This is a longer discussion, but you want to hire folks who can help during the test planning phase and can help developers write better tests. Ideally your testers are also developers and work in a way that's similar to Operations, helping your developers to be better at their jobs.

#3 If it moves, measure it

I mentioned above two big advantages a SaaS organization has: the amount it can learn about how customers use the product, and the ability to change things rapidly. Both of these require obsessive measurement of everything that is going on, so that you know whether things are getting better or worse. Some of these metrics are about user behavior and experience, to understand how the service is being used. Other metrics are about system performance and behavior.

The ability to expose some percentage of your customer base to a new feature and measure their feedback is huge. Plenty of companies have perfected the art of A/B testing, but at the heart of it is the ability to measure behavior. As with testing, the software has to be built in a way that allows this behavior to be measured.

System performance similarly requires a lot of instrumentation: to identify changes in trends, to identify problem areas in the application, and to verify when changes actually improve the situation. I've been at too many companies that simply had no idea how the system was performing today compared to last week, and so couldn't tell whether things were better or worse. At my last company I saw a much more mature approach to this measurement, which worked pretty well, but it required investment: they had two people fully dedicated to performance analysis and customer behavior analysis.

Reference: Good & Bad Patterns in Development and Operations from our JCG partner Aaron Nichols at the Operation Bootstrap blog.
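The percentage exposure and "if it moves, measure it" points above can be sketched together: deterministically bucket each user into a cohort, then count what each cohort does. The flag name, the 10% figure, and the metric names are all illustrative assumptions, not anything from the original post:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch: expose a feature to roughly 10% of users and measure each cohort's
// behavior. Flag name, rollout percentage, and metric names are hypothetical.
public class Experiment {

    private final ConcurrentHashMap<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Deterministic bucketing: the same user always lands in the same cohort,
    // so measurements are comparable across requests and sessions.
    boolean inNewCheckout(String userId) {
        return Math.floorMod(("new-checkout:" + userId).hashCode(), 100) < 10;
    }

    // "If it moves, measure it": cheap thread-safe counters per named event.
    void count(String metric) {
        counters.computeIfAbsent(metric, k -> new LongAdder()).increment();
    }

    void handleCheckout(String userId, boolean purchased) {
        String cohort = inNewCheckout(userId) ? "new" : "old";
        count("checkout." + cohort + ".visits");
        if (purchased) {
            count("checkout." + cohort + ".purchases");
        }
    }

    long value(String metric) {
        LongAdder a = counters.get(metric);
        return a == null ? 0 : a.sum();
    }

    public static void main(String[] args) {
        Experiment exp = new Experiment();
        exp.handleCheckout("user-1", true);
        exp.handleCheckout("user-2", false);
        // Every visit lands in exactly one cohort, so visits total 2 here.
        System.out.println(exp.value("checkout.new.visits") + exp.value("checkout.old.visits"));
    }
}
```

With counters like these, comparing the new cohort's conversion rate against the old one is a data question rather than a guess; a production system would ship the same numbers to a metrics backend instead of keeping them in memory.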
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact