Spring Data JPA and pagination

Let us start with the classic JPA way to support pagination. Consider a simple domain class: a ‘Member’ with first name and last name attributes. To support pagination on a list of members, the JPA way is to provide a finder which takes in the offset of the first result (firstResult) and the size of the result (maxResults) to retrieve, this way:

```java
import java.util.List;

import javax.persistence.TypedQuery;

import org.springframework.stereotype.Repository;

import mvcsample.domain.Member;

@Repository
public class JpaMemberDao extends JpaDao<Long, Member> implements MemberDao {

    public JpaMemberDao() {
        super(Member.class);
    }

    @Override
    public List<Member> findAll(int firstResult, int maxResults) {
        TypedQuery<Member> query =
                this.entityManager.createQuery("select m from Member m", Member.class);
        return query.setFirstResult(firstResult).setMaxResults(maxResults).getResultList();
    }

    @Override
    public Long countMembers() {
        TypedQuery<Long> query =
                this.entityManager.createQuery("select count(m) from Member m", Long.class);
        return query.getSingleResult();
    }
}
```

An additional API which returns the count of the records is needed to determine the number of pages for the list of entities, as shown above.
Given this API, two parameters are typically required from the UI: the current page being displayed (say ‘page.page’) and the size of the list per page (say ‘page.size’). The controller is responsible for transforming these inputs into the ones required by JPA, firstResult and maxResults, this way:

```java
@RequestMapping(produces = "text/html")
public String list(
        @RequestParam(defaultValue = "1", value = "page.page", required = false) Integer page,
        @RequestParam(defaultValue = "10", value = "page.size", required = false) Integer size,
        Model model) {
    int firstResult = (page == null) ? 0 : (page - 1) * size;
    model.addAttribute("members", this.memberDao.findAll(firstResult, size));
    float nrOfPages = (float) this.memberDao.countMembers() / size;
    int maxPages = (int) (((nrOfPages > (int) nrOfPages) || nrOfPages == 0.0) ? nrOfPages + 1 : nrOfPages);
    model.addAttribute("maxPages", maxPages);
    return "members/list";
}
```

Given a list as a model attribute and the count of all pages (maxPages above), the list can be transformed into a simple table in a JSP. There is a nice tag library packaged with Spring Roo which can be used to present the pagination element in a JSP page; I have included it with the reference. So this is the approach to pagination using JPA and Spring MVC.
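The page-count arithmetic in the controller above is easy to get wrong; integer ceiling division expresses the same idea more directly. A minimal, self-contained sketch (the class and method names are hypothetical; note that, unlike the expression above, it yields 0 pages for an empty table rather than 1):

```java
public class PageCount {

    // Number of pages needed to show 'total' rows at 'size' rows per page
    // (ceiling division done entirely in integer arithmetic).
    static int maxPages(long total, int size) {
        return (int) ((total + size - 1) / size);
    }

    public static void main(String[] args) {
        System.out.println(maxPages(0, 10));  // 0
        System.out.println(maxPages(25, 10)); // 3
        System.out.println(maxPages(30, 10)); // 3
    }
}
```

This avoids the float cast and the conditional, and has no rounding surprises for exact multiples of the page size.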
Spring-Data-JPA makes this even simpler. First is the repository interface to support retrieving a paginated list; in its simplest form the repository simply requires extending Spring-Data-JPA interfaces, and at runtime it generates the proxies which implement the real JPA calls:

```java
import mvcsample.domain.Member;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

public interface MemberRepository extends JpaRepository<Member, Long> {
    //
}
```

Given this, the controller method which accesses the repository interface is also very simple:

```java
@RequestMapping(produces = "text/html")
public String list(Pageable pageable, Model model) {
    Page<Member> members = this.memberRepository.findAll(pageable);
    model.addAttribute("members", members.getContent());
    float nrOfPages = members.getTotalPages();
    model.addAttribute("maxPages", nrOfPages);
    return "members/list";
}
```

The controller method accepts a parameter of type Pageable. This parameter is populated by a Spring MVC HandlerMethodArgumentResolver that looks for request parameters named ‘page.page’ and ‘page.size’ and converts them into the Pageable argument. This custom HandlerMethodArgumentResolver is registered with Spring MVC this way:

```xml
<mvc:annotation-driven>
    <mvc:argument-resolvers>
        <bean class="org.springframework.data.web.PageableArgumentResolver"></bean>
    </mvc:argument-resolvers>
</mvc:annotation-driven>
```

The JpaRepository API takes in the pageable argument and returns a Page, internally populating the count of pages automatically, which can be retrieved from the Page methods.
If the queries need to be explicitly specified, this can be done in a number of ways, one of which is the following:

```java
@Query(value = "select m from Member m", countQuery = "select count(m) from Member m")
Page<Member> findMembers(Pageable pageable);
```

One catch I could see is that the Pageable’s page number is 0-indexed, whereas the one passed from the UI is 1-indexed; however, the PageableArgumentResolver internally handles this and converts the 1-indexed UI page parameter to the required 0-indexed value. Spring Data JPA thus makes it really simple to implement a paginated list page. I am including a sample project which ties all this together, along with the pagination tag library which makes it simple to show the paginated list.

Resources:
A sample project which implements a paginated list is available here: https://github.com/bijukunjummen/spring-mvc-test-sample.git
Spring-Data-JPA reference: http://static.springsource.org/spring-data/data-jpa/docs/current/reference/html/

Reference: Spring Data JPA and pagination from our JCG partner Biju Kunjummen at the all and sundry blog.

Camel 2.11 – HTTP proxy routes with url rewriting functionality

In the upcoming Apache Camel 2.11 release I have recently added support for plugging in custom url rewrite implementations to HTTP based routes (http, http4, jetty). This allows people to control the url mappings when you use Camel to proxy/bridge HTTP routes. For example, suppose you need to proxy a legacy HTTP service and plug in a strategy for mapping the urls. This is now much easier with Camel 2.11. There is a new option, urlRewrite, added to the various HTTP components to plug in a custom url rewriter. For example, here is an HTTP proxy route where we use the new urlRewrite option on the http producer endpoint:

```java
from("jetty:http://localhost:{{port}}/myapp?matchOnUriPrefix=true")
    .to("jetty:http://somewhere:{{port2}}/myapp2?bridgeEndpoint=true&throwExceptionOnFailure=false&urlRewrite=#myRewrite");
```

In a nutshell, you can implement a custom strategy by implementing the UrlRewrite interface, as shown below. As this is from a unit test, we just replace yahoo with google in the url (yes, it's not a real-life applicable example).

```java
public class GoogleUrlRewrite implements UrlRewrite {

    @Override
    public String rewrite(String url, String relativeUrl, Producer producer) {
        return url.replaceAll("yahoo", "google");
    }
}
```

In the rewrite method Camel provides you with the absolute url (eg including scheme:host:port/path?query) as well as a relative url which is the offset from the uri configured in the route (see further below). This gives you the full power to control the url mappings, and you can even return a new absolute url. If you return null, then the default strategy is used, which is a 1:1 url mapping. That is not all; there is also a new component.

Introducing the new camel-urlrewrite component

The new camel-urlrewrite component is an implementation of the new url rewrite plugin based on the UrlRewriteFilter project. This project has strong support for specifying your rewrite strategies as rules, and having its engine evaluate the rules.
For example, we can have N+ rules in the url rewrite XML configuration file. In the example below we have a rule to rewrite urls to adapt to a legacy system which is using JSP:

```xml
<urlrewrite>
    <rule>
        <from>/products/([0-9]+)</from>
        <to>/products/index.jsp?product_id=$1</to>
    </rule>
</urlrewrite>
```

This project even has support for Apache mod_rewrite styles, which allow you to define rules as you would do with the Apache HTTP server. Though if you are not familiar with the mod_rewrite style, it is dense and takes some time to understand, but it is very powerful. All this is documented at the camel-urlrewrite component page with examples. And if you want to look for more, then checking the unit tests' source code is also a good way to learn more. I encourage you to take a look at the new camel-urlrewrite page as it has full examples and more details than what I have outlined in this short blog.

Reference: Camel 2.11 – HTTP proxy routes with url rewriting functionality from our JCG partner Claus Ibsen at the Claus Ibsen riding the Apache Camel blog.
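The rule above is essentially a regex capture and substitution. The same mapping can be sketched in plain Java with String.replaceAll (a standalone illustration of what the rule does, not the component's API):

```java
public class RewriteRuleDemo {
    public static void main(String[] args) {
        // Mirrors the <from>/<to> rule: capture the numeric id and
        // move it into a query parameter on the JSP page.
        String rewritten = "/products/1234".replaceAll(
                "/products/([0-9]+)", "/products/index.jsp?product_id=$1");
        System.out.println(rewritten); // /products/index.jsp?product_id=1234
    }
}
```

The `$1` in the replacement refers back to the first capture group, exactly as in the rule file.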

Implementing Producer/Consumer using SynchronousQueue

Among the plenty of useful classes which Java provides for concurrency support, there is one I would like to talk about: SynchronousQueue. In particular, I would like to walk through a Producer/Consumer implementation using the handy SynchronousQueue as an exchange mechanism. It might not sound clear why to use this type of queue for producer/consumer communication unless we look under the hood of the SynchronousQueue implementation. It turns out that it's not really a queue as we usually think about queues. The analogy would be a collection containing at most one element. Why is it useful? Well, there are several reasons. From the producer's point of view, only one element (or message) can be stored in the queue. In order to proceed with the next element (or message), the producer should wait till the consumer consumes the one currently in the queue. From the consumer's point of view, it just polls the queue for the next element (or message) available. Quite simple, but the great benefit is: the producer cannot send messages faster than the consumer can process them. Here is one of the use cases I encountered recently: compare two database tables (possibly huge) and detect whether they contain different data or the same data (a copy). The SynchronousQueue is quite a handy tool for this problem: it allows handling each table in its own thread as well as compensating for the possible timeouts/latency while reading from two different databases. Let's start by defining our compare function, which accepts source and destination data sources as well as a table name (to compare). I am using the quite useful JdbcTemplate class from the Spring framework as it abstracts away all the boring details of dealing with connections and prepared statements extremely well.
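The hand-off behaviour described above can be seen in a minimal, self-contained sketch (names are illustrative): put() blocks until another thread is ready to take() the element, so the producer can never run ahead of the consumer.

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        Thread producer = new Thread(() -> {
            try {
                // put() blocks until another thread take()s the element,
                // so "row-2" cannot be offered before "row-1" is consumed.
                queue.put("row-1");
                queue.put("row-2");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // Each take() releases exactly one blocked put().
        System.out.println(queue.take()); // row-1
        System.out.println(queue.take()); // row-2
        producer.join();
    }
}
```

Note that size() on a SynchronousQueue always returns 0: there is no internal capacity at all, only a rendezvous between threads.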
```java
public boolean compare(final DataSource source, final DataSource destination, final String table) {
    final JdbcTemplate from = new JdbcTemplate(source);
    final JdbcTemplate to = new JdbcTemplate(destination);
}
```

Before doing any actual data comparison, it's a good idea to compare the table's row count in the source and destination databases:

```java
if (from.queryForLong("SELECT count(1) FROM " + table)
        != to.queryForLong("SELECT count(1) FROM " + table)) {
    return false;
}
```

Now, knowing at least that the table contains the same number of rows in both databases, we can start with the data comparison. The algorithm is very simple:
- create a separate thread for the source (producer) and destination (consumer) databases
- the producer thread reads a single row from the table and puts it into the SynchronousQueue
- the consumer thread also reads a single row from the table, then asks the queue for the available row to compare (waiting if necessary) and lastly compares the two result sets

Using another great part of the Java concurrency utilities, thread pooling, let's define a thread pool with a fixed number of threads (2):

```java
final ExecutorService executor = Executors.newFixedThreadPool(2);
final SynchronousQueue<List<?>> resultSets = new SynchronousQueue<List<?>>();
```

Following the described algorithm, the producer functionality can be represented as a single callable:

```java
Callable<Void> producer = new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        from.query("SELECT * FROM " + table, new RowCallbackHandler() {
            @Override
            public void processRow(ResultSet rs) throws SQLException {
                try {
                    List<?> row = ...; // convert ResultSet to List
                    if (!resultSets.offer(row, 2, TimeUnit.MINUTES)) {
                        throw new SQLException("Having more data but consumer has already completed");
                    }
                } catch (InterruptedException ex) {
                    throw new SQLException("Having more data but producer has been interrupted");
                }
            }
        });

        return null;
    }
};
```

The code is a bit verbose due to Java syntax, but it doesn't actually do much.
Every result set read from the table, the producer converts to a list (the implementation has been omitted as it's boilerplate) and puts in the queue (offer). If the queue is not empty, the producer blocks, waiting for the consumer to finish its work. The consumer, respectively, can be represented as the following callable:

```java
Callable<Void> consumer = new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        to.query("SELECT * FROM " + table, new RowCallbackHandler() {
            @Override
            public void processRow(ResultSet rs) throws SQLException {
                try {
                    List<?> source = resultSets.poll(2, TimeUnit.MINUTES);
                    if (source == null) {
                        throw new SQLException("Having more data but producer has already completed");
                    }

                    List<?> destination = ...; // convert ResultSet to List
                    if (!source.equals(destination)) {
                        throw new SQLException("Row data is not the same");
                    }
                } catch (InterruptedException ex) {
                    throw new SQLException("Having more data but consumer has been interrupted");
                }
            }
        });

        return null;
    }
};
```

The consumer does the reverse operation on the queue: instead of putting data, it pulls it (poll) from the queue. If the queue is empty, the consumer blocks, waiting for the producer to publish the next row. The only part left is submitting those callables for execution. Any exception returned by a Future's get method indicates that the tables do not contain the same data (or that there was an issue getting data from the database):

```java
List<Future<Void>> futures = executor.invokeAll(Arrays.asList(producer, consumer));
for (final Future<Void> future : futures) {
    future.get(5, TimeUnit.MINUTES);
}
```

Reference: Implementing Producer/Consumer using SynchronousQueue from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog.
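The way invokeAll surfaces failures, relied on above, can be sketched standalone (the task bodies and messages are hypothetical): an exception thrown inside a callable is wrapped in an ExecutionException and only rethrown when Future.get is called.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        Callable<Void> ok = () -> null;
        Callable<Void> failing = () -> {
            throw new IllegalStateException("Row data is not the same");
        };

        // invokeAll waits for both tasks to finish; failures are only
        // reported later, when get() is called on the matching Future.
        List<Future<Void>> futures = executor.invokeAll(Arrays.asList(ok, failing));
        for (Future<Void> future : futures) {
            try {
                future.get();
            } catch (ExecutionException e) {
                System.out.println("Tables differ: " + e.getCause().getMessage());
            }
        }
        executor.shutdown();
    }
}
```

This is why the comparison loop above calls get on every future: skipping one would silently swallow a mismatch detected by that task.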

Devoxx UK free ticket giveaway

Java Code Geeks are proud to conduct another important giveaway for the Java community! For this one we have teamed up with the Devoxx community and managed to get a ticket for the Devoxx UK 2013 London community conference, which will take place on the 26th and 27th of March 2013. That ticket is the prize for our next giveaway. A prize worth £350! Before going into the specifics of our giveaway, we would like to say a few words about the Devoxx UK community conferences. The Devoxx non-profit community conferences have been run very successfully in Belgium and France. We enjoyed them so much that the London Java Community (LJC) decided to bring a version over to London (http://devoxx.co.uk)! So we're bringing over the who's who of Java/JVM and Software Development, including folks such as Milton Smith – head of Java security – and Charlie Nutter – the inventor of JRuby – as well as showcasing talent from the local development community.

Tracks include:
- Languages on the JVM
- Java SE
- Java EE
- Mobile
- Architecture, Cloud and Security
- Web and Big Data
- Methodology
- The Future

We'd love to see you and members from your dev teams there as attendees (it's a non-profit conference by developers, for developers). It's only £350 for the two days, so get in quick at http://reguk.devoxx.com

More details at: http://www.devoxx.co.uk
Register now at: http://reguk.devoxx.com
Twitter: @DevoxxUK

How to Enter?
Just send an email here using the subject “Devoxx UK 2013 London”. An empty email will do. It's that simple! (Note: By entering the contest you will be automatically included in the forthcoming Java Code Geeks Newsletter.)

Deadline
The contest will close on Friday 01 March 2013 PT. The winner will be contacted by email, so be sure to use your real email address!

Important Notes
Please spread the news! The larger the number of participants, the more people will get a chance of attending one of the best conferences about Java around!

Good Luck!
The Java Code Geeks Team

How Friction Slows Us Down in Software Development

I once joined a project where running the “unit” tests took three and a half hours. As you may have guessed, the developers didn't run the tests before they checked in code, resulting in a frequently red build. Running the tests just gave the developers too much friction. I define friction as anything that resists the developer while she is producing software. Since then, I've spotted friction in numerous places while developing software.

Friction in Software Development

Since friction impacts productivity negatively, it's important that we understand it. Here are some of my observations:
- Friction can come from different sources. It can result from your tool set, like when you have to wait for Perforce to check out a file over the network before you can edit it. Friction can also result from your development process, for example when you have to wait for the QA department to test your code before it can be released.
- Friction can operate on different time scales. Some friction slows you down a lot, while other friction is much more benign. For instance, waiting for the next set of requirements might keep you from writing valuable software for weeks. On the other hand, waiting for someone to review your code changes may take only a couple of minutes.
- Friction can be more than simple delays. It also rears its ugly head when things are more difficult than they ought to be. In the vi editor, for example, you must switch between command and insert modes. Seasoned vi users are just as fast as with editors that don't have that separation. Yet they do have to keep track of which mode they are in, which gives them a higher cognitive load.

Lubricating Software Development

There has been a trend to decrease friction in software development. Tools like Integrated Development Environments have eliminated many sources of friction. For instance, Eclipse will automatically compile your code when you save it.
Automated refactorings decrease both the time and the cognitive load required to make certain code changes. On the process side, things like Agile development methodologies and the DevOps movement have eliminated or reduced friction. For instance, continuous deployment automates the release of software into production. These lubricants have given us a fighting chance in a world of increasing complexity.

Frictionless Software Development

It's fun to think about how far we could take these improvements, and what the ultimate, frictionless software development environment might look like. My guess is that it would call for the combination of some of the same trends we already see in consumer and enterprise software products. Cloud computing will play a big role, as will simplification of the user interaction and access from anywhere. What do you think? What frictions have you encountered? Do you think frictions are the same as waste in Lean? What have you done to lubricate the frictions away? What would your perfect, frictionless software development environment look like?

Reference: How Friction Slows Us Down in Software Development from our JCG partner Remon Sinnema at the Secure Software Development blog.

JavaFX 2 XYCharts and Java 7 Features

One of my favorite features of JavaFX 2 is the standard charts it provides in its javafx.scene.chart package. This package provides several different types of charts out of the box. All but one of these (the PieChart) are ‘2 axis charts’ (specific implementations of the XYChart). In this post, I look at the commonality between these specializations of XYChart. Along the way, I look at several Java 7 features that come in handy. A UML class diagram for key chart types in the javafx.scene.chart package is shown next. Note that AreaChart, StackedAreaChart, BarChart, StackedBarChart, BubbleChart, LineChart, and ScatterChart all extend XYChart. As the UML diagram (generated using JDeveloper) indicates, the PieChart extends Chart directly while all the other chart types extend XYChart. Because all the chart types other than PieChart extend XYChart, they share some common features. For example, they are all 2-axis charts with a horizontal (‘x’) axis and a vertical (‘y’) axis. They generally allow data to be specified in the same format (data structure) for all the XY charts. The remainder of this post demonstrates being able to use the same data for most of the XYCharts. The primary use of a chart is to show data, so the next code listing shows retrieving data from the ‘hr‘ sample schema in an Oracle database. Note that JDBC_URL, USERNAME, PASSWORD, and AVG_SALARIES_PER_DEPARTMENT_QUERY are constant Strings used in the JDBC connection and for the query.

getAverageDepartmentsSalaries()

```java
/**
 * Provide average salary per department name.
 *
 * @return Map of department names to average salary per department.
 */
public Map<String, Double> getAverageDepartmentsSalaries() {
    final Map<String, Double> averageSalaryPerDepartment = new HashMap<>();
    try (final Connection connection = DriverManager.getConnection(JDBC_URL, USERNAME, PASSWORD);
         final Statement statement = connection.createStatement();
         final ResultSet rs = statement.executeQuery(AVG_SALARIES_PER_DEPARTMENT_QUERY)) {
        while (rs.next()) {
            final String departmentName = rs.getString(COLUMN_DEPARTMENT_NAME);
            final Double salaryAverage = rs.getDouble(ALIAS_AVERAGE_SALARY);
            averageSalaryPerDepartment.put(departmentName, salaryAverage);
        }
    } catch (SQLException sqlEx) {
        LOGGER.log(
            Level.SEVERE,
            "Unable to get average salaries per department - {0}",
            sqlEx.toString());
    }
    return averageSalaryPerDepartment;
}
```

The Java code snippet above uses JDBC to retrieve data for populating a Map of department name Strings to the average salary of the employees in each department. There are a couple of handy Java 7 features used in this code. A small feature is the inferred generic parameterized typing of the diamond operator used with the declaration of the local variable averageSalaryPerDepartment. This is a small granule of syntactic sugar, but it does make the code more concise. A more significant Java 7 feature is the use of the try-with-resources statement for handling the Connection, Statement, and ResultSet resources. This is a much nicer way to handle the opening and closing of these resources, even in the face of exceptions, than was previously necessary when using JDBC.
The Java Tutorials page on The try-with-resources Statement advertises that this statement ‘ensures that each resource is closed at the end of the statement’ and that each resource will ‘be closed regardless of whether the try statement completes normally or abruptly.’ The page also notes that when multiple resources are specified in the same statement, as is done in the above code, ‘the close methods of resources are called in the opposite order of their creation.’ The data retrieved from the database can be placed into the appropriate data structure to support use by most of the XYCharts. This is shown in the next method.

ChartMaker.createXyChartDataForAverageDepartmentSalary(Map)

```java
/**
 * Create XYChart Data representing average salary per department name.
 *
 * @param newAverageSalariesPerDepartment Map of department name (keys) to
 *    average salary for each department (values).
 * @return XYChart Data representing average salary per department.
 */
public static ObservableList<XYChart.Series<String, Double>> createXyChartDataForAverageDepartmentSalary(
        final Map<String, Double> newAverageSalariesPerDepartment) {
    final Series<String, Double> series = new Series<>();
    series.setName("Departments");
    for (final Map.Entry<String, Double> entry : newAverageSalariesPerDepartment.entrySet()) {
        series.getData().add(new XYChart.Data<>(entry.getKey(), entry.getValue()));
    }
    final ObservableList<XYChart.Series<String, Double>> chartData =
        FXCollections.observableArrayList();

    chartData.add(series);
    return chartData;
}
```

The method just shown places the retrieved data in a data structure that can be used by nearly all of the XYChart-based charts. With the retrieved data now packaged in a JavaFX observable collection, the charts can be easily generated. The next code snippet shows methods for generating several XYChart-based charts (Area, Bar, Bubble, Line, and Scatter). Note how similar they all are and how they use the same data provided by the same method.
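The close-ordering guarantee quoted above can be verified with a small AutoCloseable sketch (class names are illustrative): resources declared first are closed last.

```java
public class CloseOrderDemo {

    // A resource that announces when it is closed.
    static class Resource implements AutoCloseable {
        private final String name;
        Resource(String name) { this.name = name; }
        @Override
        public void close() { System.out.println("closing " + name); }
    }

    public static void main(String[] args) {
        // The resource declared first ("a") is closed last.
        try (Resource a = new Resource("a"); Resource b = new Resource("b")) {
            System.out.println("body");
        }
        // prints: body, closing b, closing a
    }
}
```

In the JDBC method above, this means the ResultSet is closed first, then the Statement, then the Connection, which is exactly the order the pre-Java-7 finally blocks had to enforce by hand.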
The StackedBar and StackedArea charts can also use similar data, but are not shown here because they are not interesting for the single series of data used in this example.

Methods for Generating XYCharts Except BubbleChart and Stacked Charts

```java
private XYChart<String, Double> generateAreaChart(
        final Axis<String> xAxis, final Axis<Double> yAxis) {
    final AreaChart<String, Double> areaChart =
        new AreaChart<>(
            xAxis, yAxis,
            ChartMaker.createXyChartDataForAverageDepartmentSalary(
                this.databaseAccess.getAverageDepartmentsSalaries()));
    return areaChart;
}

private XYChart<String, Double> generateBarChart(
        final Axis<String> xAxis, final Axis<Double> yAxis) {
    final BarChart<String, Double> barChart =
        new BarChart<>(
            xAxis, yAxis,
            ChartMaker.createXyChartDataForAverageDepartmentSalary(
                this.databaseAccess.getAverageDepartmentsSalaries()));
    return barChart;
}

private XYChart<String, Double> generateLineChart(
        final Axis<String> xAxis, final Axis<Double> yAxis) {
    final LineChart<String, Double> lineChart =
        new LineChart<>(
            xAxis, yAxis,
            ChartMaker.createXyChartDataForAverageDepartmentSalary(
                this.databaseAccess.getAverageDepartmentsSalaries()));
    return lineChart;
}

private XYChart<String, Double> generateScatterChart(
        final Axis<String> xAxis, final Axis<Double> yAxis) {
    final ScatterChart<String, Double> scatterChart =
        new ScatterChart<>(
            xAxis, yAxis,
            ChartMaker.createXyChartDataForAverageDepartmentSalary(
                this.databaseAccess.getAverageDepartmentsSalaries()));
    return scatterChart;
}
```

These methods are so similar that I could have actually used method handles (or more traditional reflection APIs) to reflectively call the appropriate chart constructor rather than use separate methods. However, I am using these for my RMOUG Training Days 2013 presentation in February and so wanted to leave the chart-specific constructors in place to make them clearer to audience members. One exception to the general handling of XYChart types is the handling of BubbleChart. This chart expects a numeric type for its x-axis, so the String-based (department name) x-axis data provided above will not work. A different method (not shown here) provides a query that returns average salaries by department ID (Long) rather than by department name. The slightly different generateBubbleChart method is shown next.

generateBubbleChart(Axis, Axis)

```java
private XYChart<Number, Number> generateBubbleChart(
        final Axis<String> xAxis, final Axis<Double> yAxis) {
    final Axis<Number> deptIdXAxis = new NumberAxis();
    deptIdXAxis.setLabel("Department ID");
    final BubbleChart<Number, Number> bubbleChart =
        new BubbleChart(
            deptIdXAxis, yAxis,
            ChartMaker.createXyChartDataForAverageDepartmentSalaryById(
                this.databaseAccess.getAverageDepartmentsSalariesById()));
    return bubbleChart;
}
```

Code could be written to call each of these different chart generation methods directly, but this provides a good chance to use Java 7's method handles. The next code snippet shows this being done. Not only does this code demonstrate method handles, but it also uses Java 7's multi-catch exception handling mechanism.

```java
/**
 * Generate JavaFX XYChart-based chart.
 *
 * @param chartChoice Choice of chart to be generated.
 * @return JavaFX XYChart-based chart; may be null.
 * @throws IllegalArgumentException Thrown if the provided parameter is null.
 */
private XYChart<String, Double> generateChart(final ChartTypes chartChoice) {
    XYChart<String, Double> chart = null;
    final Axis<String> xAxis = new CategoryAxis();
    xAxis.setLabel("Department Name");
    final Axis<? extends Number> yAxis = new NumberAxis();
    yAxis.setLabel("Average Salary");
    if (chartChoice == null) {
        throw new IllegalArgumentException(
            "Provided chart type was null; chart type must be specified.");
    } else if (!chartChoice.isXyChart()) {
        LOGGER.log(
            Level.INFO,
            "Chart Choice {0} {1} an XYChart.",
            new Object[]{chartChoice.name(), chartChoice.isXyChart() ? "IS" : "is NOT"});
    }

    final MethodHandle methodHandle = buildAppropriateMethodHandle(chartChoice);
    try {
        chart = methodHandle != null
            ? (XYChart<String, Double>) methodHandle.invokeExact(this, xAxis, yAxis)
            : null;
        chart.setTitle("Average Department Salaries");
    } catch (WrongMethodTypeException wmtEx) {
        LOGGER.log(
            Level.SEVERE,
            "Unable to invoke method because it is wrong type - {0}",
            wmtEx.toString());
    } catch (Throwable throwable) {
        LOGGER.log(
            Level.SEVERE,
            "Underlying method threw a Throwable - {0}",
            throwable.toString());
    }

    return chart;
}

/**
 * Build a MethodHandle for calling the appropriate chart generation method
 * based on the provided ChartTypes choice of chart.
 *
 * @param chartChoice ChartTypes instance indicating which type of chart
 *    is to be generated so that an appropriately named method can be invoked
 *    for generation of that chart.
 * @return MethodHandle for invoking chart generation.
 */
private MethodHandle buildAppropriateMethodHandle(final ChartTypes chartChoice) {
    MethodHandle methodHandle = null;
    final MethodType methodDescription =
        MethodType.methodType(XYChart.class, Axis.class, Axis.class);
    final String methodName = "generate" + chartChoice.getChartTypeName() + "Chart";

    try {
        methodHandle = MethodHandles.lookup().findVirtual(
            this.getClass(), methodName, methodDescription);
    } catch (NoSuchMethodException | IllegalAccessException exception) {
        LOGGER.log(
            Level.SEVERE,
            "Unable to acquire MethodHandle to method {0} - {1}",
            new Object[]{methodName, exception.toString()});
    }
    return methodHandle;
}
```

A series of images follows that shows how these XY Charts appear when rendered by JavaFX.
[Images: Area Chart, Bar Chart, Bubble Chart, Line Chart, Scatter Chart]

As stated above, method handles could have been used to reduce the code even further, because individual methods for generating each XYChart are not absolutely necessary and could have been called reflectively based on the desired chart type. It's also worth emphasizing that if the x-axis data had been numeric, the code would be the same (and could be called reflectively) for all XYChart types, including the Bubble Chart. JavaFX makes it easy to generate attractive charts representing provided data. Java 7 features make this even easier by making code more concise and more expressive and by allowing easy application of reflection when appropriate.

Reference: JavaFX 2 XYCharts and Java 7 Features from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

One jar to rule them all

Trip down the memory lane Back in 1998, when I was a C/C++ developer, trying my hands on Java, a few things about the language were, to put it mildly – irritating – for me. I remember fretting about these quite a lot            Why isn’t there a decent editor for this? C/C++ had quite a few. All that I had for Java was the good old notepad. Why do I have to make a class, when all I want is a function? Why wasn’t a function an object as well? Why can’t I just package everything into one zip/jar and let the end user launch with a double click?and a few others. Back then, I found me frequently chiding myself for not being able to let go of my ‘C/C++ way of thinking’ and embracing ‘Java way’ of doing things. Now, writing this article in 2013, about a decade and a half later, surprisingly all of those early irritations are gone. Not because I have embraced ‘Java’ way, but because java has changed. Idle chit chatting aside, the point of this article is to talk about one of these questions – ‘Why can’t I just package everything into one zip/jar and let the end user launch with a double click?’. Why do we need this – one zip/jar – that is executable? If you are a developer, coding away happily on your IDE (I despise you all who have coded java on Eclipse, NetBeans from day one and have not had to code on Notepad), assisted by Google (I positively totally hate all of you all who did not have to find stuff on internet before Google), there is probably no convincing case. However, have you faced a situation whenYou have been pulled into the data centre because the guy there have followed your deployment steps but your application / website will simply not work? All of a sudden the environment variables are all messed up, when ‘nobody at all so much as touched’ the production boxes, and you are the one who has to ‘just make it work’. 
- You are sitting with your business stakeholder, staring incredulously at a 'ClassNotFoundException', convinced that Java did not like you at all.

In short, what I am trying to say is this: while you are in the 'relative' sanity of your dev box / environment, a single executable jar does not really do anything for you. But the moment you step into the twilight zone of unknown servers and situations (sans the IDE and other assorted tools), you start appreciating just how much a single executable jar could have helped.

Ok, I get it. But what's the big deal? We can make such a package / zip / jar in a jiffy if we have to. Isn't that so? In all my naivety, I thought so, and found out the answer the hard way. Let me walk you through it. Fire up your editors, folks. Let's create an executableJar project. I use JDK 1.7.0, STS and Maven 3.0.4. If you are new to Maven or just not hands-on, I recommend you read this and this.

File: C:\projects\MavenCommands.bat

ECHO OFF
REM =============================
REM Set the env. variables.
REM =============================
SET PATH=%PATH%;C:\ProgramFiles\apache-maven-3.0.4\bin;
SET JAVA_HOME=C:\ProgramFiles\Java\jdk1.7.0

REM =============================
REM Standalone java application.
REM =============================
call mvn archetype:generate ^
 -DarchetypeArtifactId=maven-archetype-quickstart ^
 -DinteractiveMode=false ^
 -DgroupId=foo.bar ^
 -DartifactId=executableJar001

pause

After you run this batch file, you will have a fully compilable, standard Java application. Go ahead: compile it and build a jar (mvn -e clean install). You will end up with an executableJar001-1.0-SNAPSHOT.jar at C:\projects\executableJar001\target. Now let's go 'java -jar jarFileName'. And here you stumble the first time. In geeky vocabulary, it tells you that there was no class with a main method, and hence it did not know what to execute. Fortunately this is an easy one. There is a standard Java mechanism to solve it, and there is a Maven plugin to solve it.
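The standard Java mechanism here is a Main-Class entry in the jar's META-INF/MANIFEST.MF. As a quick sketch of what that entry looks like (foo.bar.App is the sample class generated by the archetype above; the program itself is illustrative, not part of the original article), this throwaway program writes a jar carrying the attribute and reads it back with the java.util.jar API:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class ManifestDemo {

    // Builds a manifest carrying the Main-Class attribute -- the exact
    // entry that `java -jar` consults when deciding what to execute.
    static Manifest withMainClass(String fqcn) {
        Manifest mf = new Manifest();
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        mf.getMainAttributes().put(Attributes.Name.MAIN_CLASS, fqcn);
        return mf;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip: write a throwaway jar with the manifest, read it back.
        File jar = File.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(
                new FileOutputStream(jar), withMainClass("foo.bar.App"))) {
            // manifest only; no class files are needed for the demonstration
        }
        try (JarFile jf = new JarFile(jar)) {
            System.out.println(jf.getManifest()
                    .getMainAttributes().getValue("Main-Class"));
        }
        jar.delete();
    }
}
```

Running it prints foo.bar.App, confirming the attribute round-trips; the Maven plugin shown next simply writes this same entry for you at build time.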
I will use the latter. Updated file: /executableJar001/pom.xml

...
<dependencies>
...
</dependencies>
<build>
  <plugins>
    <!-- Set the main class in the jar. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <version>2.4</version>
      <configuration>
        <archive>
          <manifest>
            <mainClass>foo.bar.App</mainClass>
          </manifest>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>
...

You can compile and assemble the application again (mvn -e clean install). It will create a jar file in the target folder. Try running the jar from the command line again. This time you will get the intended result. So, we are all sorted, right? Wrong. Very wrong. Why? Everything seems fine. Let's dig in a bit deeper and we will find why everything is not as sorted as it looks at the moment. Let's go ahead and add a dependency; say we want to add logging, and for that we want to use a third-party jar, i.e. Logback. I will let Maven handle dependencies in the development environment.

Updated file: /executableJar001/pom.xml

...
<dependencies>
  <!-- Logging -->
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.0.9</version>
  </dependency>
</dependencies>
<build>
...
</build>

Updated file: /executableJar001/src/main/java/foo/bar/App.java

package foo.bar;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    private final static Logger logger = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        System.out.println("Hello World!");
        logger.debug("Hello world from logger.");
    }
}

Now let's compile and run the jar from the command prompt again. Did you see what happened?

Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory

Basically it is saying that the class (i.e. the actual code) of the LoggerFactory (from the third-party jar that we added in the development environment) was not found.
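The error above boils down to a failed classpath lookup at runtime. A tiny probe (not from the original article; the class names are chosen only for illustration) shows the same distinction through Class.forName, which raises the closely related ClassNotFoundException when a class is absent from the classpath:

```java
public class ClasspathCheck {

    // Class.forName succeeds only if the named class can be found on the
    // classpath -- the same lookup that blows up with NoClassDefFoundError
    // when a compiled-against dependency is missing at runtime.
    static boolean onClasspath(String fqcn) {
        try {
            Class.forName(fqcn);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(onClasspath("java.util.List"));          // JDK class: always present
        System.out.println(onClasspath("org.slf4j.LoggerFactory")); // present only if slf4j is supplied
    }
}
```

With no slf4j jar on the classpath the second call prints false; add logback-classic (which brings slf4j) to the classpath and it flips to true, which is exactly the difference between the dev box and the bare production box.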
Oh, but surely we should be able to tell Java to pick up the third-party libraries from some folder. Definitely. It is almost a certainty – if you are asking that question – that for most of your applications you tell the JVM where the third-party / dependency libraries are. You tell it this by setting the classpath. You could also be using some application server, e.g. Tomcat / Jetty, which picks up some dependencies itself. And that is exactly where the problem originates. As a developer, I provide an x.jar that works. However, for it to work, it depends on a.jar (which in turn might depend upon b.jar and c.jar … you get the point). When I, as a developer, bundle up my deliverable, x.jar, there is a dependency – on whoever I am handing it out to – to make sure that the classpath is correctly set in the other environment where x.jar is supposed to work. It is not that big a deal, mostly. However, it is not trivial either. There is a multitude of ways the dependencies on the target environment could get messed up. There might be routine updates. There might be some other application deployed on the same production box that needed an update to a jar nobody thought would impact yours. We can discuss and debate the many ways these kinds of mishaps can be prevented, but the bottom line is that x.jar (the developer's responsibility) has dependencies (which the developer does not directly control). And that leads to mishaps. Of course, once you add into this mix the whole lot of variables that come in because of different versions, different application servers, etc., the existing solution of providing x.jar only quickly starts looking very fragile. So, what do we do? Say thanks to Dr. P. Simon Tuffs. This gentleman explains how he tackled this problem at this link. It is a good read, I recommend it. What I have explained in very layman's terms (and have barely scratched the surface of), Simon takes a deep dive into – the problem and how he solved it.
Long story short, he coded a solution and made it open source. I am not going to replay the same information again – read his article, it is quite informative – but I will call out the salient points of his solution:

- It allows folks to create a single jar that contains everything – your code, resources, dependencies, (potentially) an application server – everything.
- It allows the end user to run this entire humongous jar with the simple 'java -jar jarFileName' command.
- It allows developers to develop the same way they have been developing; e.g. if it is a web application, the war file structure remains the same. So there are no changes to the development process.

Fine. So how do we go about doing it? There are many places where it is detailed: the One-JAR website, Ant with One-JAR, Maven with One-JAR. Let's see it in action on our dummy code. Thankfully there is also a Maven plugin for this. Sadly it is not in the Maven Central repository (Why? Folks, why? You have put in 98% of the work. Why be sluggish about the last 2%?). It comes with nice usage instructions.

Updated file: /executableJar001/pom.xml

...
<dependencies>
...
</dependencies>
<build>
  <plugins>
    ...
    <!-- If you want to bundle all this in one jar. -->
    <plugin>
      <groupId>org.dstovall</groupId>
      <artifactId>onejar-maven-plugin</artifactId>
      <version>1.4.4</version>
      <executions>
        <execution>
          <goals>
            <goal>one-jar</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
<!-- Required only if you are using the onejar plugin. -->
<pluginRepositories>
  <pluginRepository>
    <id>onejar-maven-plugin.googlecode.com</id>
    <url>http://onejar-maven-plugin.googlecode.com/svn/mavenrepo</url>
  </pluginRepository>
</pluginRepositories>

Now all you need to do is run mvn -e clean package. You will get, apart from the normal jar, a fat, self-sufficient jar as well. Go ahead, do the 'java -jar jarFileName' from the command prompt again. It should work. Hmm… that sounds good. Why isn't everybody going for this?
And One-JAR seems to have been around since 2004. Why are we not seeing more players in this market? You know what they say about free lunches? There are none. While the concept is quite neat and very practical, it does not mean that every other player has decided to join in. So if your website 'needs' to be hosted on one of the biggie paid application servers (I don't know why you would want to keep paying for those proprietary products and the folks that understand them – should you not pay only for quality people and rely on the open source apps that do not lock you in?), One-JAR might not be a feasible solution for you. Also, I hear unconfirmed murmurs about how things might get sluggish during load-up if your app is big. So, before you decide to commit to using this, I recommend you do a POC and make sure that the other bits of your tech stack are not unhappy with One-JAR. My personal opinion is that 2004 was perhaps a little too early for this kind of thing. People were still struggling with things like standardization of the build and release process, a clear leader in the ORM area, a clear leader among MVC frameworks, etc. Not that those questions have been answered yet, or will be anytime soon. But I think the flavour of current problems in the IT world is around:

- How to make DevOps work.
- How to make the entire build and release process automated.
- How to leverage open source libraries to provide solid, dependable software, while ensuring there is no heavy proprietary software causing lock-in and hence making the solution less agile for future business requirements.

And in my mind, One-JAR plays very nicely in that area. So, I definitely expect to see more of this tool and / or more tools around this concept. And, to be fair, there are more players in this area. Thanks to Christian Schlichtherle for pointing this out. There are the Maven Assembly Plugin and the Maven Shade Plugin, which cater to this exact same problem.
I have not tried them yet, but from the documentation they look quite alright, feature-wise. Dropwizard, although not the same thing, is in essence very similar. They have extended the whole one-jar concept with an embedded app server and out-of-the-box support for REST, JSON and Logback, in a nice, neat package that you can use straight off the shelf. So, as I keep saying, these are nice, exciting times to be in the technology business, particularly if you like tinkering around with software.   Reference: One jar to rule them all from our JCG partner Partho at the Tech for Enterprise blog. ...

Scala pattern matching: A Case for new thinking?

The 16th President of the United States, Abraham Lincoln, once said: 'As our case is new we must think and act anew'. In software engineering things probably aren't as dramatic as civil wars and abolishing slavery, but we have interesting logical concepts concerning 'case'. In Java the case statement provides for some limited conditional branching. In Scala, it is possible to construct some very sophisticated pattern matching logic using the case / match construct, which doesn't just bring new possibilities but a new type of thinking to realise new possibilities.

Let's start with a classic first-year Computer Science homework assignment: a Fibonacci series that doesn't start with 0, 1 but starts with 1, 1. So the series will look like: 1, 1, 2, 3, 5, 8, 13, … every number is the sum of the previous two. In Java, we could do:

public int fibonacci(int i) {
    if (i < 0) return 0;
    switch (i) {
        case 0: return 1;
        case 1: return 1;
        default: return fibonacci(i - 1) + fibonacci(i - 2);
    }
}

All straightforward. If 0 is passed in, it counts as the first element in the series, so 1 should be returned. Note: to add some more spice to the party and make things a little more interesting, I added a little bit of logic to return 0 if a negative number is passed to our fibonacci method. In Scala, to achieve the same behaviour we would do:

def fibonacci(in: Int): Int = {
  in match {
    case n if n < 0 => 0
    case 0 | 1 => 1
    case n => fibonacci(n - 1) + fibonacci(n - 2)
  }
}

Key points:

- The return type of the recursive method fibonacci is an Int. Recursive methods must explicitly specify the return type (see: Odersky – Programming in Scala – Chapter 2).
- It is possible to test for multiple values on one line using the | notation. I do this to return a 1 for both 0 and 1 on line 4 of the example.
- There is no need for multiple return statements. In Java you must use multiple return statements or multiple break statements.
- Pattern matching is an expression which always returns something.
In this example, I employ a guard to check for a negative number; if the number is negative, zero is returned. In Scala it is also possible to match across different types. It is also possible to use the wildcard _ notation. We didn't use either in the fibonacci example, but just to illustrate these features…

def multitypes(in: Any): String = in match {
  case i: Int => "You are an Int!"
  case "Alex" => "You must be Alex"
  case s: String => "I don't know who you are, but I know you are a String"
  case _ => "I haven't a clue who you are"
}

Pattern matching can be used with Scala Maps to useful effect. Suppose we have a Map to capture who we think should be playing in each position of the Lions backline for the Lions series in Australia. The keys of the map will be the positions in the backline, and the corresponding values will be the players we think should be playing there. To represent a rugby player we use a case class. Now now, you Java heads, think of the case class as an immutable POJO written in an extremely concise way – they can be mutable too, but for now think immutable.

case class RugbyPlayer(name: String, country: String)

val robKearney = RugbyPlayer("Rob Kearney", "Ireland")
val georgeNorth = RugbyPlayer("George North", "Wales")
val brianODriscoll = RugbyPlayer("Brian O'Driscoll", "Ireland")
val jonnySexton = RugbyPlayer("Jonny Sexton", "Ireland")
val benYoungs = RugbyPlayer("Ben Youngs", "England")

// build a map
val lionsPlayers = Map("FullBack" -> robKearney, "RightWing" -> georgeNorth,
  "OutsideCentre" -> brianODriscoll, "Outhalf" -> jonnySexton, "Scrumhalf" -> benYoungs)

// Note: Unlike Java HashMaps, a Scala Map's get never returns null.
// This is achieved by returning an Option, which can be either Some or None.

// So, if we ask for something that exists in the Map, like below
println(lionsPlayers.get("Outhalf"))
// Outputs: Some(RugbyPlayer(Jonny Sexton,Ireland))

// If we ask for something that is not in the Map yet, like below
println(lionsPlayers.get("InsideCentre"))
// Outputs: None

In this example we have players for every position except inside centre – which we can't make up our minds about. Scala Maps are allowed to store nulls as values, but in our case we don't actually store a null for inside centre. So, instead of null being returned for inside centre (as would happen if we were using a Java HashMap), None is returned. For the other positions in the backline we have matching values, and Some is returned, wrapping the corresponding RugbyPlayer. (Note: both Some and None extend Option.) We can write a function which pattern matches on the value returned from the Map and gives us something a little more user-friendly.

def show(x: Option[RugbyPlayer]) = x match {
  case Some(rugbyPlayerExt) => rugbyPlayerExt.name // If a rugby player is matched, return its name
  case None => "Not decided yet ?"
}

println(show(lionsPlayers.get("Outhalf"))) // Outputs: Jonny Sexton
println(show(lionsPlayers.get("InsideCentre"))) // Outputs: Not decided yet ?

This example doesn't just illustrate pattern matching but another concept known as extraction. The rugby player, when matched, is extracted and assigned to rugbyPlayerExt. We can then return the rugby player's name by getting it from rugbyPlayerExt. In fact, we can also add a guard and change some logic around. Suppose we had a biased journalist (Stephen Jones) who didn't want any Irish players in the team.
He could implement his own biased function to check for Irish players:

def biasedShow(x: Option[RugbyPlayer]) = x match {
  case Some(rugbyPlayerExt) if rugbyPlayerExt.country == "Ireland" =>
    rugbyPlayerExt.name + ", don't pick him."
  case Some(rugbyPlayerExt) => rugbyPlayerExt.name
  case None => "Not decided yet ?"
}

println(biasedShow(lionsPlayers.get("Outhalf"))) // Outputs Jonny... don't pick him
println(biasedShow(lionsPlayers.get("Scrumhalf"))) // Outputs Ben Youngs

Pattern matching Collections

Scala also provides some powerful pattern matching features for collections. Here's a trivial example of getting the length of a list:

def length[A](list: List[A]): Int = list match {
  case _ :: tail => 1 + length(tail)
  case Nil => 0
}

And suppose we want to parse arguments from a tuple:

def parseArgument(arg: String, value: Any) = (arg, value) match {
  case ("-l", lang) => setLanguage(lang)
  case ("-o" | "--optim", n: Int) if (0 < n) && (n <= 3) => setOptimizationLevel(n)
  case ("-h" | "--help", null) => displayHelp()
  case bad => badArgument(bad)
}

Single Parameter functions

Consider a list of the numbers from 1 to 10. The filter method takes a single-parameter function that returns true or false. The function is applied to every element in the list and returns true or false for each; the elements that return true are kept in the resulting list, and the elements that return false are filtered out.

scala> val myList = List(1,2,3,4,5,6,7,8,9,10)
myList: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> myList.filter(x => x % 2 == 1)
res13: List[Int] = List(1, 3, 5, 7, 9)

Now now now, listen up and remember this: a pattern can be passed to any method that takes a single-parameter function. Instead of passing a single-parameter function which always returns true or false, we could have used a pattern which always returns true or false.
scala> myList.filter {
     |   case i: Int => i % 2 == 1 // odd numbers return true
     |   case _ => false           // anything else returns false
     | }
res14: List[Int] = List(1, 3, 5, 7, 9)

Use it later?

Scala compiles patterns to a PartialFunction. This means that not only can Scala pattern expressions be passed to other functions, they can also be stored for later use.

scala> val patternToUseLater: PartialFunction[String, String] = {
     |   case "Dublin" => "Ireland"
     |   case _ => "Unknown"
     | }

What this example is saying is: patternToUseLater is a partial function that takes a String and returns a String. The last statement in a function is returned by default, and because the case expression is a partial function, it is returned as a partial function and assigned to patternToUseLater, which of course can use it later. Finally, Jonny Sexton is a phenomenal rugby player and it is a shame to hear he is leaving Leinster. Obviously, with Sexton's busy schedule we can't be sure if Jonny is reading this blog, but if he is: Jonny, sorry to see you go, we wish you all the best and hopefully will see you back one day in the blue jersey.   Reference: Scala pattern matching: A Case for new thinking? from our JCG partner Alex Staveley at the Dublin's Tech Blog blog. ...

Managing the Stream of Features in an Agile Program

One of the challenges in a program is how you manage the check-ins, especially if you have continuous integration. I am quite fond of continuous integration, no matter how large your program is. I also like short iterations. (Remember Short is Beautiful?) But imagine a product where you have a platform and layers. I'm separating the GUI and the API for the GUI, so you can see the application, the middleware and the platform. Now, this architecture is different from separate-but-related products that might also form a program. This is an archetype of an architecture, not your architecture. I am sure you have more than 3 middleware components or 4 app-layer components. The product I'm thinking of had 12 middleware components, about another 12 platform components and about 50 app-layer components. It was a big program, with about 200 people working on it for over 2 years. I wanted to simplify the picture so we could have a conversation. The features cut through the app layers and the middleware layers. The colored lines are the features. Now, multiply these lines by each project team and each feature, and you can see what happens in a program. Imagine if I added colored lines for 25 features for 25 different feature teams. It could be a problem. However, if project teams limit their WIP (work in progress) and swarm around features, integrating as they proceed, they have fewer features in progress. And if they are networks of people, with communities of practice, they have ways of talking to each other, so they don't have to wait to sync with each other. People talk with each other when they need to. That's it. When features are small, and teams integrate every day or every other day, people expect to discuss integration issues all the time. And, while I am not a fan of integration teams, even I admit that on a large program you might need an integration team to help keep things moving.
This is in addition to what everyone does anyway: syncing with the main line every day, taking down the new additions to the main line and syncing just before putting new changes up. If you keep a stream of features moving in a program, even with many feature teams, as long as the project teams keep talking to one another, you are okay. You are not okay if someone decides, "I own this code and no one else can touch it." Now, you might decide that all middleware code, or all code for a particular component, has to be reviewed. Or it all has to be smoke tested. Or it all has some other gate that it has to go through before it can be changed. Or you pair on everything. Or you have situational code ownership. That's perfectly okay. You decide on the technical mores for your program. It's a good idea to have them. But the larger the program, the less you can have one gatekeeper, because that person cannot be in one place, holding their fingers in the figurative dike. This is why I don't like czar-like architects, but I do like embedded architects in the teams of agile programs. When the product reaches a certain level of maturity, you can work on a particular component as a feature, and swap it in or out once you change it as a feature. This takes significant skill. If you want to be agile in a program, you need to be more agile and lean, not less. You need to have smaller stories. You need to work by feature, not by architecture. You need to swarm. Well, that is, if you don't want program bloat. Okay, what's confusing to you? What have I forgotten? Where do you disagree? Let's discuss.   Reference: Managing the Stream of Features in an Agile Program from our JCG partner Johanna Rothman at the Managing Product Development blog. ...

Jenkins Description Setter Plugin for Improving Continuous Delivery Visibility

In Continuous Delivery, each build is potentially shippable. Among a lot of other things, this implies assigning a non-SNAPSHOT version to your components as early as possible, so you can refer to them throughout the process. I suggest creating a release branch, assigning the version to the project, and then running the typical pipeline steps (compile, tests, code quality …) against the release branch. If you are using Jenkins, your build job screen will look something like this:

Note that we have released the project many times, but there is no quick way to know exactly which version was built in, say, build number 40. To avoid this problem, and to have a quick overview of which version was produced by each build job instance, we can use the Jenkins Description Setter Plugin. This plugin sets the description of each build based on a regular expression matched against the build log file. With it, your build job screen will look something like this:

Much better – now we know exactly the result of a build job and which product version was generated. So the first step is installing the plugin, by simply going to: Jenkins -> Manage Jenkins -> Manage Plugins -> Available. After installation you can open the build job configuration screen and add a post-build action called 'Set build description'. Then add a regular expression for extracting the version number.
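A regular expression of this kind can be sanity-checked offline before wiring it into Jenkins. This sketch (the class name is just for illustration) applies the pattern used in this example to the versions-maven-plugin log line with java.util.regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionExtractor {

    // The same expression configured in the plugin below; group(1)
    // captures everything after "to ", i.e. the released version.
    static final Pattern VERSION =
            Pattern.compile("\\[INFO\\] from version 0\\.0\\.1-SNAPSHOT to (.*)");

    static String extract(String logLine) {
        Matcher m = VERSION.matcher(logLine);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // The log line emitted by versions-maven-plugin during the release build
        String line = "[INFO] from version 0.0.1-SNAPSHOT to 1.0.43";
        System.out.println(extract(line)); // 1.0.43
    }
}
```

A line that does not mention the version bump (e.g. "[INFO] Building hello 0.0.1-SNAPSHOT") yields no match, which is exactly why the plugin's description stays tied to the release line alone.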
In this case the regular expression is:

\[INFO\] from version 0\.0\.1-SNAPSHOT to (.*)

Take a look at this fragment of the build log file:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building hello 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- versions-maven-plugin:2.0:set (default-cli) @ hello ---
[INFO] Searching for local aggregator root...
[INFO] Local aggregation root: /jobs/helloworld-inital-build/workspace
[INFO] Processing com.lordofthejars.helloworld:hello
[INFO] Updating project com.lordofthejars.helloworld:hello
[INFO] from version 0.0.1-SNAPSHOT to 1.0.43

Props: {project.version=1.0.43, project.artifactId=hello, project.groupId=com.lordofthejars.helloworld}

At line 12 we are logging the final version of our product for the current pipeline execution, so we create a regular expression which parses that line; the part captured by the parentheses is used as the build description. Depending on your log traces, your regular expression will differ from this one. In this case, we always use the same SNAPSHOT version in development, and only when the product is going to be released (this could be 3 times per day, or every night) is the final version generated and set. Hope this plugin helps you to make your builds clearer.   Reference: Jenkins Description Setter Plugin for Improving Continuous Delivery Visibility from our JCG partner Alex Soto at the One Jar To Rule Them All blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.