
What's New Here?


JavaFX: Space Invaders in less than 175 LOC

With the current version of the API I'm at less than 175 LOC for Space Invaders. I've got lots of "Functional Interfaces" in my APIs that can be converted to Lambda Expressions with JavaFX 8 (e.g. SpriteProvider and CollisionHandler). That will make the code nicer and shorter. I could probably also reduce the line count by bundling the resources (e.g. TileSets) and creating more factories and builders (SpriteBuilder). But I'm getting closer to what I want...

```java
package de.eppleton.fx2d.samples.spaceinvaders;

import de.eppleton.fx2d.collision.*;
import de.eppleton.fx2d.*;
import de.eppleton.fx2d.action.*;
import de.eppleton.fx2d.tileengine.*;
import java.util.Collection;
import java.util.logging.*;
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleDoubleProperty;
import javafx.scene.canvas.*;
import javafx.scene.input.*;
import*;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javax.xml.bind.JAXBException;
import org.openide.util.Lookup;
import org.openide.util.lookup.Lookups;

public class SpaceInvaders extends Game {

    Points TEN = new Points(10);
    Points TWENTY = new Points(20);
    Points THIRTY = new Points(30);
    DoubleProperty invaderXVelocity = new SimpleDoubleProperty(0.3);
    AudioClip shootSound = new AudioClip(SpaceInvaders.class.getResource("/assets/sound/shoot.wav").toString());
    AudioClip invaderKilledSound = new AudioClip(SpaceInvaders.class.getResource("/assets/sound/invaderkilled.wav").toString());
    MediaPlayer mediaPlayer = new MediaPlayer(new Media(SpaceInvaders.class.getResource("/assets/sound/invader_loop1.mp3").toString()));
    int score = 0;
    String message = "";
    int[][] enemies = new int[][]{
        {30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30},
        {20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20},
        {20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20},
        {10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10},
        {10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10}
    };

    @Override
    protected void initGame() {
        final GameCanvas canvas = getCanvas();
        try {
            TileSet invaders = TileMapReader.readSet("/assets/graphics/invaders1.tsx");
            TileSet playerTiles = TileMapReader.readSet("/assets/graphics/player.tsx");
            final TileSetAnimation animation30 = new TileSetAnimation(invaders, new int[]{0, 1}, 2);
            final TileSetAnimation animation10 = new TileSetAnimation(invaders, new int[]{4, 5}, 2);
            final TileSetAnimation animation20 = new TileSetAnimation(invaders, new int[]{2, 3}, 2);
            final TileSetAnimation playerAnimation = new TileSetAnimation(playerTiles, new int[]{0}, 100_000);
            for (int i = 0; i < enemies.length; i++) {
                int[] is = enemies[i];
                for (int j = 0; j < is.length; j++) {
                    Points points = is[j] == 30 ? THIRTY : is[j] == 20 ? TWENTY : TEN;
                    Sprite sprite = new Sprite(canvas, "" + ((j * 11) + i),
                            50 + (40 * j), 140 + (40 * i), 30, 20, Lookups.fixed(points));
                    sprite.setAnimation(is[j] == 30 ? animation30 : is[j] == 20 ? animation20 : animation10);
                    sprite.setVelocityXProperty(invaderXVelocity);
                }
            }
            Sprite player = new Sprite(canvas, playerAnimation, "Player", 350, 620, 40, 30, Lookup.EMPTY);
            player.setAnimation(playerAnimation);
            player.addAction(KeyCode.LEFT, ActionFactory.createMoveAction(playerAnimation, "left", -4, 0, 0, 0));
            player.addAction(KeyCode.RIGHT, ActionFactory.createMoveAction(playerAnimation, "right", 4, 0, 0, 0));
            player.addAction(KeyCode.UP, new ShootAction(playerAnimation, "fire",
                    new BulletProvider(), new HitHandler(), shootSound));
        } catch (JAXBException ex) {
            Logger.getLogger(SpaceInvaders.class.getName()).log(Level.SEVERE, null, ex);
        }
        canvas.addLayer(new Background());
        canvas.addBehaviour(new MoveInvadersBehavior());
        mediaPlayer.setCycleCount(MediaPlayer.INDEFINITE);;
        canvas.addLayer(new SpriteLayer());
        canvas.start();
    }

    @Override
    protected double getViewPortWidth() {
        return 700;
    }

    @Override
    protected double getViewPortHeight() {
        return 700;
    }

    public static void main(String[] args) {
        launch(args);
    }

    private class Points {

        int points;

        public Points(int points) {
            this.points = points;
        }

        public int getPoints() {
            return points;
        }
    }

    static class BulletProvider implements SpriteProvider {

        @Override
        public Sprite getSprite(GameCanvas parent, double x, double y) {
            return new Sprite(parent, "bullet", x, y + 10, 10, 20, Lookup.EMPTY);
        }
    }

    class HitHandler implements CollisionHandler {

        @Override
        public void handleCollision(Collision collision) {
            Points points = collision.getSpriteTwo().getLookup().lookup(Points.class);
            if (points != null) {
                score += points.getPoints();
      ;
                collision.getSpriteOne().remove();
                collision.getSpriteTwo().remove();
            }
        }
    }

    class MoveInvadersBehavior extends Behavior {

        @Override
        public boolean perform(GameCanvas canvas, long nanos) {
            Collection<Sprite> sprites = canvas.getSprites();
            boolean stop = false;
            boolean win = true;
            for (Sprite sprite1 : sprites) {
                if (sprite1.getLookup().lookup(Points.class) != null) {
                    win = false;
                    if (sprite1.getX() > 650 || sprite1.getX() < 50) {
                        invaderXVelocity.set(-invaderXVelocity.doubleValue() * (stop ? 0 : 1.3));
                        if (sprite1.getY() >= 600) {
                            message = "Game Over!";
                            stop = true;
                            mediaPlayer.stop();
                        }
                        for (Sprite sprite2 : sprites) {
                            if (sprite2.getLookup().lookup(Points.class) != null) {
                                sprite2.setY(sprite2.getY() + (stop ? 0 : 20));
                            }
                        }
                        break;
                    }
                }
            }
            if (win) {
                message = "You win!";
                canvas.stop();
                mediaPlayer.stop();
            }
            return true;
        }
    }

    class Background extends Layer {

        @Override
        public void draw(GraphicsContext graphicsContext, double x, double y, double width, double height) {
            graphicsContext.setFill(Color.BLACK);
            graphicsContext.fillRect(0, 0, width, height);
            graphicsContext.setFill(Color.WHITE);
            graphicsContext.setFont(Font.font("OCR A Std", 20));
            graphicsContext.fillText("SCORE<1> HI-SCORE SCORE<2>", 30, 30);
            graphicsContext.fillText("" + score + " 9990 ", 30, 60);
            graphicsContext.fillText(message, 300, 400);
            graphicsContext.fillText("" + 3 + " CREDIT " + 1, 30, 680);
            graphicsContext.setFill(Color.GREEN);
            graphicsContext.fillRect(30, 650, 640, 4);
        }
    }
}
```

Reference: JavaFX: Space Invaders in less than 175 LOC from our JCG partner Toni Epple at the Eppleton blog. ...
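The conversion the author anticipates for JavaFX 8 is easy to see in isolation. The sketch below uses a hypothetical single-method interface shaped like SpriteProvider (not the real fx2d API) to show how an anonymous inner class collapses into a lambda expression:

```java
public class LambdaSketch {

    // Hypothetical single-method ("functional") interface, shaped like the
    // article's SpriteProvider; any such interface can be a lambda target.
    interface SpriteFactory {
        String create(double x, double y);
    }

    static String describe(SpriteFactory factory) {
        return factory.create(10, 20);
    }

    public static void main(String[] args) {
        // Pre-Java 8: anonymous inner class.
        SpriteFactory anon = new SpriteFactory() {
            @Override
            public String create(double x, double y) {
                return "sprite@" + x + "," + y;
            }
        };

        // Java 8: the same instance as a lambda expression.
        SpriteFactory lambda = (x, y) -> "sprite@" + x + "," + y;

        // The two forms produce the same result.
        System.out.println(describe(anon).equals(describe(lambda)));
    }
}
```

The lambda form simply removes the anonymous-class boilerplate, which is where the expected line-count savings come from.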

Styling JavaFX Pie Chart with CSS

JavaFX provides certain colors by default when rendering charts. There are situations, however, when one wants to customize these colors. In this blog post I look at changing the colors of a JavaFX pie chart using an example I intend to include in my presentation this afternoon at RMOUG Training Days 2013. Some Java-based charting APIs provide Java methods to set colors. JavaFX, born in the days of HTML5 prevalence, instead uses Cascading Style Sheets (CSS) to allow developers to adjust colors, symbols, placement, alignment and other stylistic aspects of their charts. I demonstrate using CSS to change colors here. In this post, I will look at two code samples demonstrating simple JavaFX applications that render pie charts based on data from Oracle's sample 'hr' schema. The first example does not specify colors and so uses JavaFX's default colors for pie slices and for the legend background. That example is shown next.

EmployeesPerDepartmentPieChart (Default JavaFX Styling)

```java
package rmoug.td2013.dustin.examples;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.chart.PieChart;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

/**
 * Simple JavaFX application that generates a JavaFX-based Pie Chart representing
 * the number of employees per department.
 *
 * @author Dustin
 */
public class EmployeesPerDepartmentPieChart extends Application {

   final DbAccess databaseAccess = DbAccess.newInstance();

   @Override
   public void start(final Stage stage) throws Exception {
      final PieChart pieChart = new PieChart(
            ChartMaker.createPieChartDataForNumberEmployeesPerDepartment(
                  this.databaseAccess.getNumberOfEmployeesPerDepartmentName()));
      pieChart.setTitle("Number of Employees per Department");
      stage.setTitle("Employees Per Department");
      final StackPane root = new StackPane();
      root.getChildren().add(pieChart);
      final Scene scene = new Scene(root, 800, 500);
      stage.setScene(scene);
   }

   public static void main(final String[] arguments) {
      launch(arguments);
   }
}
```

When the above simple application is executed, the output shown in the next screen snapshot appears. I am now going to adapt the above example to use a custom 'theme' of blue-inspired pie slices with a brown background on the legend. Only one line is needed in the Java code to include the CSS file that has the stylistic specifics for the chart. In this case, I added several more lines to catch and print out any exception that might occur while trying to load the CSS file. With this approach, any problems loading the CSS file will simply lead to output on standard error stating the problem, and the application will run with its normal default colors.

EmployeesPerDepartmentPieChartWithCssStyling (Customized CSS Styling)

```java
package rmoug.td2013.dustin.examples;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.chart.PieChart;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

/**
 * Simple JavaFX application that generates a JavaFX-based Pie Chart representing
 * the number of employees per department and using style based on that provided
 * in CSS stylesheet chart.css.
 *
 * @author Dustin
 */
public class EmployeesPerDepartmentPieChartWithCssStyling extends Application {

   final DbAccess databaseAccess = DbAccess.newInstance();

   @Override
   public void start(final Stage stage) throws Exception {
      final PieChart pieChart = new PieChart(
            ChartMaker.createPieChartDataForNumberEmployeesPerDepartment(
                  this.databaseAccess.getNumberOfEmployeesPerDepartmentName()));
      pieChart.setTitle("Number of Employees per Department");
      stage.setTitle("Employees Per Department");
      final StackPane root = new StackPane();
      root.getChildren().add(pieChart);
      final Scene scene = new Scene(root, 800, 500);
      try {
         scene.getStylesheets().add("chart.css");
      } catch (Exception ex) {
         System.err.println("Cannot acquire stylesheet: " + ex.toString());
      }
      stage.setScene(scene);
   }

   public static void main(final String[] arguments) {
      launch(arguments);
   }
}
```

The chart.css file is shown next:

chart.css

```css
/* Find more details on JavaFX supported named colors in the JavaFX CSS Reference Guide. */

/* Colors of JavaFX pie chart slices. */
.data0.chart-pie  { -fx-pie-color: turquoise; }
.data1.chart-pie  { -fx-pie-color: aquamarine; }
.data2.chart-pie  { -fx-pie-color: cornflowerblue; }
.data3.chart-pie  { -fx-pie-color: blue; }
.data4.chart-pie  { -fx-pie-color: cadetblue; }
.data5.chart-pie  { -fx-pie-color: navy; }
.data6.chart-pie  { -fx-pie-color: deepskyblue; }
.data7.chart-pie  { -fx-pie-color: cyan; }
.data8.chart-pie  { -fx-pie-color: steelblue; }
.data9.chart-pie  { -fx-pie-color: teal; }
.data10.chart-pie { -fx-pie-color: royalblue; }
.data11.chart-pie { -fx-pie-color: dodgerblue; }

/* Pie Chart legend background color and stroke. */
.chart-legend { -fx-background-color: sienna; }
```

Running this CSS-styled example leads to output as shown in the next screen snapshot. The slices are different shades of blue and the legend's background is 'sienna'. Note that while I used JavaFX 'named colors', I could have also used '#0000ff' for blue, for example. I did not show the code here for my convenience classes ChartMaker and DbAccess. The latter simply retrieves the data for the charts from the Oracle database schema via JDBC, and the former converts that data into the Observable collections appropriate for the PieChart(ObservableList) constructor. It is important to note here that, as Andres Almiray has pointed out, it is not normally appropriate to execute long-running processes on the main JavaFX UI thread (AKA the JavaFX Application Thread) as I've done in this and other blog post examples. I can get away with it in these posts because the examples are simple, the database retrieval is quick, and there is not much more to the chart rendering application than that rendering, so it is difficult to observe any 'hanging'. In a future blog post, I intend to look at a better way of handling the database access (or any long-running action) using the JavaFX javafx.concurrent package (which is already well described in Concurrency in JavaFX). JavaFX allows developers to control much more than simply chart colors with CSS. Two very useful resources detailing what can be done to style JavaFX charts with CSS are the 'Styling Charts with CSS' section of Using JavaFX Charts and the JavaFX CSS Reference Guide. CSS is becoming increasingly popular as an approach to styling web and mobile applications. By supporting CSS styling, JavaFX lets the same styles easily be applied to JavaFX apps as to the HTML-based applications they might coexist with.

Reference: Styling JavaFX Pie Chart with CSS from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

Releasing more often drives better Dev and better Ops

One of the most important decisions that we made as a company was to release less software, more often. After we went live, we tried to deliver updates quarterly, because until then we had followed a staged delivery lifecycle to build the system, with analysis and architecture upfront, and design and development and testing done in 3-month phases. But this approach didn't work once the system was running. Priorities kept changing as we got more feedback from more customers, too many things needed to be fixed or tuned right away, and we had to deal with urgent operational issues. We kept interrupting development to deploy interim releases and patches and then re-plan and re-plan again, wasting everyone's time and making it harder to keep track of what we needed to do. Developers and ops were busy getting customers on board and firefighting, which meant we couldn't get changes out when we needed to. So we decided to shorten the release cycle down from 3 months to 1 month, and then shorten it again to 3 weeks and then 2 weeks, making the releases smaller and more focused and easier to manage.

Smaller, more frequent releases change how Development is done

Delivering less but more often, whether you are doing it to reduce time-to-market and get fast feedback in a startup, or to contain risk and manage change in an enterprise, forces you to reconsider how you develop software. It changes how you plan and estimate, and how you think about and manage risks. It changes how you do design, and how much design you need to do. It changes how you test. It changes what tools people need, and how much they need to rely on tools. It changes your priorities. It changes the way that people work together and how they work with the customer, creating more opportunities and more reasons to talk to each other and learn from each other. It changes the way that people think and act – because they have to think and act differently in order to keep up and still do a good job.
Smaller, more frequent releases change how Development and Ops work together

Changing how often you release and deploy will also change how operations works and how developers and operations work together. There's not enough time for heavyweight release management and change control with lots of meetings and paperwork. You need an approach that is easier and cheaper. But changing things more often also means more chances to make mistakes. So you need an approach that will reduce risk and catch problems early. Development teams that release software once a year or so won't spend a lot of time thinking about release and deployment and operations stuff in general, because they don't have to. But if they're deploying every couple of weeks, if they're constantly having to push software out, then it makes sense for them to take the time to understand what production actually looks like and make deployment – and roll-back – easier on themselves and easier on ops. You don't have to automate everything to start – and you probably shouldn't until you understand the problems well enough. We started with check lists and scripting and manual configuration and manual system tests. We put everything under source control (not just code), and then started standardizing and automating deployment and configuration and roll-back steps, replacing manual work and check lists with automated, audited commands and health checks. We've moved away from manual server setup and patching to managing infrastructure with Puppet. We're still aligning test and production so that we can test more deployment steps more often with fewer production-specific changes. We still don't have a one-button deploy and maybe never will, but release and deployment today is simpler, more standardized, safer and much less expensive.

Deployment is just the start

Improving deployment is just the start of a dialogue that can extend to the rest of operations.
Because they're working together more often, developers and ops will learn more about each other and start to understand each other's languages and priorities and problems. To get this started, we encouraged people to read Visible Ops and sent ops and testers and some of the developers and even managers on ITIL Foundation training so that we all understood the differences between incident management and problem resolution, how to do RCA, and the importance of proper change management – it was probably overkill, but it made us all think about operations and take it seriously. We get developers and testers and operations staff together to plan and review releases, to support production, and to do RCA whenever we have a serious problem, and we work together to figure out why things went wrong and what we can do to prevent them from happening again. Developers and ops pair up to investigate and solve operational problems and to improve how we design and roll out new architecture, how we secure our systems, and how we set up and manage development and test environments. It sounds easy. It wasn't. It took a while, and there were disagreements and problems and backsliding, like any time you fundamentally change the way that people work. But if you do this right, people will start to create connections and build relationships and eventually trust and transparency across groups – which is what Devops is really about. You don't have to change your organization structure or overhaul the culture – in most cases, you won't have this kind of mandate anyway. You don't have to buy into Continuous Deployment or even Continuous Delivery, or infrastructure as code, or use Chef or Puppet or any other Devops tools – although tools do help. Once you start moving faster, from deploying once a year to every few months to once a month, and as your organization's pace accelerates, people will change the way that they work because they have to.
Today the way that we work, and the way that we think about development and operations, is much different and definitely healthier. We can respond to business changes and to problems faster, and at the same time our reliability record has improved dramatically. We didn’t set out to “be Agile” – it wasn’t until we were on our way to shorter release cycles that we looked more closely at Scrum and XP and later Kanban to see how these methods could help us develop software. And we weren’t trying to “do Devops” either – we were already down the path to better dev and ops collaboration before people started talking about these ideas at Velocity and wherever else. All we did was agree as a company to change how often we pushed software into production. And that has made all the difference.   Reference: Releasing more often drives better Dev and better Ops from our JCG partner Jim Bird at the Building Real Software blog. ...

Java is dead (again)

Here are a couple of responses to this annual question that I thought worth sharing:

The Day Java lost the Battle

There is a common myth amongst technologists that better technology will always be the most successful, or that you must keep improving or die. A counter-example I use is the QWERTY keyboard. No one who uses it does so because it is a) natural or easy to learn, b) faster to use, or c) newer or cooler than the alternatives. Yet many developers who couldn't imagine using anything other than a QWERTY keyboard insist that Java must be dead for these reasons. I have looked at predictions that Java is dead going back to 1996 and found that these predictions follow Java's popularity: when there was a drop in interest due to the long age of Java 1.4 and Java 6, there was also a drop in predictions that Java is dead. (When, IMHO, that would have been a good time to question such things.) I have come to the conclusion that passionate calls that Java is dead are a good sign that Java is alive and well, and annoying developers who would prefer people used a 'better' language. In a discussion on the same topic I added:

Tiobe Index

This table suggests Java has the highest interest of any language (possibly in part due to a security issue). Secondly, the other languages which are its main competition are both older and lower level. While there are many who would like to believe that higher-level languages are winning, there isn't any evidence this is the case. For example, the security hole was in using Java with a medium security level (not the default) as an applet. While Java applets are not that popular, running Ruby, PHP or Python in a browser is far less popular.

On a final note: just because Java is popular doesn't make it the best, but conversely its failings are not a good indication of the beginning of the end. If you look at talent show winners, the most popular celebrities or election winners, you have to wonder what makes these people so special really. It is not surprising you might think the same thing about Java, but just like popular people, what makes a language popular is not purely a technical or rational argument.

Reference: Java is dead (again) from our JCG partner Peter Lawrey at the Vanilla Java blog. ...

Testing Expected Exceptions with JUnit Rules

This post shows how to test for expected exceptions using JUnit. Let's start with the following class that we wish to test:

```java
public class Person {

    private final String name;
    private final int age;

    /**
     * Creates a person with the specified name and age.
     *
     * @param name the name
     * @param age the age
     * @throws IllegalArgumentException if the age is not greater than zero
     */
    public Person(String name, int age) { = name;
        this.age = age;
        if (age <= 0) {
            throw new IllegalArgumentException("Invalid age:" + age);
        }
    }
}
```

In the example above, the Person constructor throws an IllegalArgumentException if the age of the person is not greater than zero. There are different ways to test this behaviour:

Approach 1: Use the ExpectedException Rule

This is my favourite approach. The ExpectedException rule allows you to specify, within your test, what exception you are expecting and even what the exception message is. This is shown below:

```java
import static org.hamcrest.Matchers.*;
import static org.junit.Assert.*;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class PersonTest {

    @Rule
    public ExpectedException exception = ExpectedException.none();

    @Test
    public void testExpectedException() {
        exception.expect(IllegalArgumentException.class);
        exception.expectMessage(containsString("Invalid age"));
        new Person("Joe", -1);
    }
}
```

Approach 2: Specify the exception in the @Test annotation

As shown in the code snippet below, you can specify the expected exception in the @Test annotation. The test will pass only if an exception of the specified class is thrown by the test method. Unfortunately, you can't test the exception message with this approach.

```java
@Test(expected = IllegalArgumentException.class)
public void testExpectedException2() {
    new Person("Joe", -1);
}
```

Approach 3: Use a try-catch block

This is the 'traditional' approach which was used with old versions of JUnit, before the introduction of annotations and rules. Surround your code in a try-catch clause and test if the exception is thrown. Don't forget to make the test fail if the exception is not thrown!

```java
@Test
public void testExpectedException3() {
    try {
        new Person("Joe", -1);
        fail("Should have thrown an IllegalArgumentException because age is invalid!");
    } catch (IllegalArgumentException e) {
        assertThat(e.getMessage(), containsString("Invalid age"));
    }
}
```

Reference: Testing Expected Exceptions with JUnit Rules from our JCG partner Fahd Shariff at the blog. ...
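For completeness, the same contract can also be exercised outside of JUnit with a plain try-catch and a boolean flag — a self-contained sketch that re-declares a minimal copy of the Person class above:

```java
public class PersonContractCheck {

    // Minimal copy of the article's Person class, repeated here so the sketch
    // is self-contained.
    static class Person {
        private final String name;
        private final int age;

        Person(String name, int age) {
   = name;
            this.age = age;
            if (age <= 0) {
                throw new IllegalArgumentException("Invalid age:" + age);
            }
        }
    }

    // Returns true only if constructing an invalid Person throws an
    // IllegalArgumentException carrying the expected message.
    static boolean throwsForInvalidAge() {
        try {
            new Person("Joe", -1);
            return false; // no exception: the contract is broken
        } catch (IllegalArgumentException e) {
            return e.getMessage().contains("Invalid age");
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsForInvalidAge() ? "exception verified" : "contract broken");
    }
}
```

This mirrors exactly what the ExpectedException rule asserts: both the exception type and the message content.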

Java 7: Fork/Join Framework Example

The Fork/Join Framework in Java 7 is designed for work that can be broken down into smaller tasks, with the results of those tasks combined to produce the final result. In general, classes that use the Fork/Join Framework follow this simple algorithm:

```java
// pseudocode
Result solve(Problem problem) {
    if (problem.size < SEQUENTIAL_THRESHOLD)
        return solveSequentially(problem);
    else {
        Result left, right;
        INVOKE-IN-PARALLEL {
            left = solve(extractLeftHalf(problem));
            right = solve(extractRightHalf(problem));
        }
        return combine(left, right);
    }
}
```

In order to demonstrate this, I have created an example to find the maximum number from a large array using fork/join:

```java
import java.util.Random;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class MaximumFinder extends RecursiveTask<Integer> {

    private static final int SEQUENTIAL_THRESHOLD = 5;

    private final int[] data;
    private final int start;
    private final int end;

    public MaximumFinder(int[] data, int start, int end) { = data;
        this.start = start;
        this.end = end;
    }

    public MaximumFinder(int[] data) {
        this(data, 0, data.length);
    }

    @Override
    protected Integer compute() {
        final int length = end - start;
        if (length < SEQUENTIAL_THRESHOLD) {
            return computeDirectly();
        }
        final int split = length / 2;
        final MaximumFinder left = new MaximumFinder(data, start, start + split);
        left.fork();
        final MaximumFinder right = new MaximumFinder(data, start + split, end);
        return Math.max(right.compute(), left.join());
    }

    private Integer computeDirectly() {
        System.out.println(Thread.currentThread() + " computing: " + start + " to " + end);
        int max = Integer.MIN_VALUE;
        for (int i = start; i < end; i++) {
            if (data[i] > max) {
                max = data[i];
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // create a random data set
        final int[] data = new int[1000];
        final Random random = new Random();
        for (int i = 0; i < data.length; i++) {
            data[i] = random.nextInt(100);
        }

        // submit the task to the pool
        final ForkJoinPool pool = new ForkJoinPool(4);
        final MaximumFinder finder = new MaximumFinder(data);
        System.out.println(pool.invoke(finder));
    }
}
```

The MaximumFinder class is a RecursiveTask which is responsible for finding the maximum number from an array. If the size of the array is less than a threshold (5), then find the maximum directly by iterating over the array. Otherwise, split the array into two halves, recurse on each half and wait for them to complete (join). Once we have the result of each half, we can find the maximum of the two and return it.

Reference: Java 7: Fork/Join Framework Example from our JCG partner Fahd Shariff at the blog. ...
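A quick way to gain confidence in the decomposition is to compare the fork/join answer against a plain sequential scan over the same data. The sketch below uses a simplified task along the same lines as MaximumFinder (not the article's exact class):

```java
import java.util.Random;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class MaxCheck {

    // Simplified version of the article's MaximumFinder, for comparison purposes.
    static class Max extends RecursiveTask<Integer> {
        private static final int THRESHOLD = 5;
        private final int[] data;
        private final int start, end;

        Max(int[] data, int start, int end) {
   = data;
            this.start = start;
            this.end = end;
        }

        @Override
        protected Integer compute() {
            if (end - start < THRESHOLD) {
                // small enough: scan directly
                int max = Integer.MIN_VALUE;
                for (int i = start; i < end; i++) {
                    max = Math.max(max, data[i]);
                }
                return max;
            }
            int split = (start + end) / 2;
            Max left = new Max(data, start, split);
            left.fork();                         // run left half asynchronously
            Max right = new Max(data, split, end);
            return Math.max(right.compute(), left.join());
        }
    }

    // Returns true if the parallel result agrees with a sequential scan.
    static boolean parallelMatchesSequential() {
        int[] data = new int[10_000];
        Random random = new Random(42);
        for (int i = 0; i < data.length; i++) {
            data[i] = random.nextInt(1_000_000);
        }
        int sequential = Integer.MIN_VALUE;
        for (int value : data) {
            sequential = Math.max(sequential, value);
        }
        int parallel = new ForkJoinPool().invoke(new Max(data, 0, data.length));
        return parallel == sequential;
    }

    public static void main(String[] args) {
        System.out.println(parallelMatchesSequential() ? "match" : "mismatch");
    }
}
```

Whatever the threshold chosen, the parallel and sequential answers must agree; only the amount of task-splitting overhead changes.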

Leveraging MOXy in your Web Service via JAX-WS Provider

In previous articles I demonstrated how EclipseLink JAXB (MOXy) is directly integrated into the JAX-WS implementations in WebLogic (as of 12.1.1) and in GlassFish (as of 3.1.2). In this post I'll demonstrate how to leverage MOXy in any application server by using the JAX-WS Provider class.

Web Service

The Provider mechanism in JAX-WS gives you a way to create a Web Service with direct access to the XML. Through the @ServiceMode annotation you can specify whether you want all of the XML from the message or just the payload.

FindCustomerService

All the magic happens in the invoke method. Since we specified PAYLOAD as the service mode, the input will be an instance of Source that represents the body of the message. All JAXB (JSR-222) implementations can unmarshal from a Source, so we will do that to realize the request. After we perform our business logic, we need to return the body of the response as an instance of Source. To achieve this we will wrap our response objects in an instance of JAXBSource.
```java
package blog.jaxws.provider;

import javax.xml.bind.*;
import javax.xml.bind.util.JAXBSource;
import javax.xml.transform.Source;
import*;

@ServiceMode(Service.Mode.PAYLOAD)
@WebServiceProvider(
    portName = "FindCustomerPort",
    serviceName = "FindCustomerService",
    targetNamespace = "",
    wsdlLocation = "WEB-INF/wsdl/FindCustomerService.wsdl")
public class FindCustomerService implements Provider<Source> {

    private JAXBContext jaxbContext;

    public FindCustomerService() {
        try {
            jaxbContext = JAXBContext.newInstance(
                FindCustomerResponse.class, FindCustomerRequest.class);
        } catch (JAXBException e) {
            throw new WebServiceException(e);
        }
    }

    @Override
    public Source invoke(Source request) throws WebServiceException {
        try {
            Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
            FindCustomerRequest fcRequest =
                (FindCustomerRequest) unmarshaller.unmarshal(request);

            Customer customer = new Customer();
            customer.setId(fcRequest.getArg0());
            customer.setFirstName("Jane");
            customer.setLastName("Doe");

            FindCustomerResponse response = new FindCustomerResponse();
            response.setValue(customer);

            return new JAXBSource(jaxbContext, response);
        } catch (JAXBException e) {
            throw new WebServiceException(e);
        }
    }
}
```

MOXy as the JAXB Provider

To specify that MOXy should be used as the JAXB provider, we need to include a file called that is located in the same package as our domain model, with the following entry (see: Specifying EclipseLink MOXy as your JAXB Provider):

```
javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory
```

WSDL

Below is the WSDL that corresponds to our Web Service. One drawback to using the Provider approach is that the JAX-WS implementation can't automatically generate one for us (see: GlassFish 3.1.2 is full of MOXy (EclipseLink JAXB)). A WSDL is necessary as it defines a contract for the client. It can even be used to generate a client.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns:wsu=""
             xmlns:wsp=""
             xmlns:wsp1_2=""
             xmlns:wsam=""
             xmlns:soap=""
             xmlns:tns=""
             xmlns:xsd=""
             xmlns=""
             targetNamespace=""
             name="FindCustomerService">
   <types>
      <xsd:schema>
         <xsd:import namespace="" schemaLocation="FindCustomerService.xsd"/>
      </xsd:schema>
   </types>
   <message name="findCustomer">
      <part name="parameters" element="tns:findCustomer"/>
   </message>
   <message name="findCustomerResponse">
      <part name="parameters" element="tns:findCustomerResponse"/>
   </message>
   <portType name="FindCustomer">
      <operation name="findCustomer">
         <input wsam:Action="" message="tns:findCustomer"/>
         <output wsam:Action="" message="tns:findCustomerResponse"/>
      </operation>
   </portType>
   <binding name="FindCustomerPortBinding" type="tns:FindCustomer">
      <soap:binding transport="" style="document"/>
      <operation name="findCustomer">
         <soap:operation soapAction=""/>
         <input>
            <soap:body use="literal"/>
         </input>
         <output>
            <soap:body use="literal"/>
         </output>
      </operation>
   </binding>
   <service name="FindCustomerService">
      <port name="FindCustomerPort" binding="tns:FindCustomerPortBinding">
         <soap:address location="http://localhost:8080/Blog-JAXWS/FindCustomerService"/>
      </port>
   </service>
</definitions>
```

XML Schema

Below is the XML schema that corresponds to the payload of our message. Another drawback to using the Provider approach is that the JAX-WS implementation can't leverage JAXB to automatically generate the XML schema, so we need to supply one.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:ns0=""
            xmlns:xsd=""
            targetNamespace="">
   <xsd:element name="findCustomerResponse" type="ns0:findCustomerResponse"/>
   <xsd:complexType name="findCustomerResponse">
      <xsd:sequence>
         <xsd:element name="return" type="ns0:customer" minOccurs="0"/>
      </xsd:sequence>
   </xsd:complexType>
   <xsd:element name="findCustomer" type="ns0:findCustomer"/>
   <xsd:complexType name="findCustomer">
      <xsd:sequence>
         <xsd:element name="arg0" type="xsd:int"/>
      </xsd:sequence>
   </xsd:complexType>
   <xsd:complexType name="customer">
      <xsd:sequence>
         <xsd:element name="personal-info" minOccurs="0">
            <xsd:complexType>
               <xsd:sequence>
                  <xsd:element name="first-name" type="xsd:string" minOccurs="0"/>
                  <xsd:element name="last-name" type="xsd:string" minOccurs="0"/>
               </xsd:sequence>
            </xsd:complexType>
         </xsd:element>
      </xsd:sequence>
      <xsd:attribute name="id" type="xsd:int" use="required"/>
   </xsd:complexType>
</xsd:schema>
```

Request Objects

The highlighted portion of the XML message below is what we are going to receive in our Provider as an instance of Source. We will create a JAXB model to map to this section.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<S:Envelope xmlns:S="">
   <S:Header/>
   <S:Body>
      <ns2:findCustomer xmlns:ns2="">
         <arg0>123</arg0>
      </ns2:findCustomer>
   </S:Body>
</S:Envelope>
```

FindCustomerRequest

The root element is in a different XML namespace than the rest of the body. We will leverage the @XmlRootElement annotation to specify the namespace (see: JAXB & Namespaces).

```java
package blog.jaxws.provider;

import javax.xml.bind.annotation.*;

@XmlRootElement(
    namespace="",
    name="findCustomer")
public class FindCustomerRequest {

    private int arg0;

    public int getArg0() {
        return arg0;
    }

    public void setArg0(int arg0) {
        this.arg0 = arg0;
    }
}
```

Response Objects

The highlighted portion of the XML message below is what we are going to return from our Provider as an instance of Source. We will create a JAXB model to map to this section.
<S:Envelope xmlns:S=''> <S:Header /> <S:Body> <ns0:findCustomerResponse xmlns:ns0=''> <return id='123'> <personal-info> <first-name>Jane</first-name> <last-name>Doe</last-name> </personal-info> </return> </ns0:findCustomerResponse> </S:Body> </S:Envelope>

FindCustomerResponse

package blog.jaxws.provider;

import javax.xml.bind.annotation.*;

@XmlRootElement(namespace = "")
public class FindCustomerResponse {

    private Customer value;

    @XmlElement(name = "return")
    public Customer getValue() {
        return value;
    }

    public void setValue(Customer value) {
        this.value = value;
    }
}

Customer

One of the many reasons to use MOXy is its path-based mapping (see: XPath Based Mapping). Below is an example of how it is specified using the @XmlPath annotation.

package blog.jaxws.provider;

import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.XmlPath;

@XmlType(propOrder = { "firstName", "lastName" })
public class Customer {

    private int id;
    private String firstName;
    private String lastName;

    @XmlAttribute
    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    @XmlPath("personal-info/first-name/text()")
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    @XmlPath("personal-info/last-name/text()")
    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

  Reference: Leveraging MOXy in your Web Service via JAX-WS Provider from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog. ...
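A Provider<Source> in payload mode receives just the Body contents as a Source, as shown in the request above. To make that concrete without a JAX-WS runtime, here is a plain-JAXP sketch that parses the request envelope and pulls out the arg0 value. Note the service namespace URI below is a made-up placeholder, since the original post's namespace URIs are not preserved in this page:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class PayloadPeek {

    // The findCustomer request envelope from the article; the service
    // namespace URI here is hypothetical, not the one from the original post.
    static final String SOAP =
        "<S:Envelope xmlns:S='http://schemas.xmlsoap.org/soap/envelope/'>" +
        "<S:Header/><S:Body>" +
        "<ns2:findCustomer xmlns:ns2='http://example.com/service'>" +
        "<arg0>123</arg0>" +
        "</ns2:findCustomer></S:Body></S:Envelope>";

    public static String extractArg0() {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder()
                    .parse(new ByteArrayInputStream(SOAP.getBytes(StandardCharsets.UTF_8)));
            // local-name() sidesteps namespace-context plumbing for a quick look
            return XPathFactory.newInstance().newXPath()
                    .evaluate("//*[local-name()='arg0']/text()", doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(extractArg0()); // 123
    }
}
```

This uses only the JDK's built-in XML APIs, so it is handy for inspecting what your Provider will actually be handed before wiring up JAXB.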

Understanding the Play Filter API

With Play 2.1 hot off the press, there have been a lot of people asking about the new Play filter API. In actual fact, the API is incredibly simple:

trait EssentialFilter {
  def apply(next: EssentialAction): EssentialAction
}

Essentially, a filter is just a function that takes an action and returns another action. The usual thing for a filter to do is wrap the action, invoking it as a delegate. To then add a filter to your application, you just add it to your Global doFilter method. We provide a helper class to do that for you:

object Global extends WithFilters(MyFilter) { ... }

Easy, right? Wrap the action, register it in Global. Well, it is easy, but only if you understand Play's architecture. This is very important, because once you understand Play's architecture, you will be able to do far more with Play. We have some documentation here that explains Play's architecture at a high level. In this blog post, I'm going to explain Play's architecture in the context of filters, with code snippets and use cases along the way.

A short introduction to Play's architecture

I don't need to go in depth here because I've already provided a link to our architecture documentation, but in short, Play's architecture matches the flow of an HTTP request very well. The first thing that arrives when an HTTP request is made is the request header. So an action in Play must therefore be a function that accepts a request header. What happens next in an HTTP request? The body is received. So, the function that receives the request must return something that consumes the body. This is an iteratee, which is a reactive stream handler that eventually produces a single result after consuming the stream. You don't necessarily need to understand the details of how iteratees work in order to understand filters; the important thing to understand is that iteratees eventually produce a result that you can map, just like a future, using their map function.
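If "a result you can map, just like a future" sounds abstract, here is a plain-Java sketch of the same shape using CompletableFuture. This is only an analogy, not the Play API — the point is that an eventual result can be transformed without blocking for it:

```java
import java.util.concurrent.CompletableFuture;

public class MapAnalogy {

    public static String demo() {
        // Stand-in for the iteratee's eventual Result
        CompletableFuture<String> eventualResult =
                CompletableFuture.completedFuture("200 OK");

        // Analogous to next(request).map(result -> modifiedResult):
        // the transformation is registered now, applied when the result exists.
        return eventualResult
                .thenApply(result -> result + " [X-Filtered]")
                .join();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 200 OK [X-Filtered]
    }
}
```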
For details on writing iteratees, read my blog post. The next thing that happens in an HTTP request is that the HTTP response must be sent. So what is the result of the iteratee? An HTTP response. And an HTTP response is a set of response headers, followed by a response body. The response body is an enumerator, which is a reactive stream producer. All of this is captured in Play's EssentialAction trait:

trait EssentialAction extends (RequestHeader => Iteratee[Array[Byte], Result])

This reads that an essential action is a function that takes a request header and returns an iteratee that consumes the byte array body chunks and eventually produces a result.

The simpler way

Before I go on, I'd like to point out that Play provides a helper trait called Filter that makes writing filters easier than using EssentialFilter directly. This is similar to the Action trait, in that Action simplifies writing EssentialActions by not needing to worry about iteratees and how the body is parsed; rather, you just provide a function that takes a request with a parsed body, and return a result. The Filter trait simplifies things in a similar way, however I'm going to leave talking about that until the end, because I think it is better to understand how filters work from the bottom up before you start using the helper class.

The noop filter

To demonstrate what a filter looks like, the first thing I will show is a noop filter:

class NoopFilter extends EssentialFilter {
  def apply(next: EssentialAction) = new EssentialAction {
    def apply(request: RequestHeader) = {
      next(request)
    }
  }
}

Each time the filter is executed, we create a new EssentialAction that wraps it. Since EssentialAction is just a function, we can just invoke it, passing in the request. So the above is our basic pattern for implementing an EssentialFilter.

Handling the request header

Let's say we want to look at the request header, and conditionally invoke the wrapped action based on what we inspect.
An example of a filter that would do that might be a blanket security policy for the /admin area of your website. This might look like this:

class AdminFilter extends EssentialFilter {
  def apply(next: EssentialAction) = new EssentialAction {
    def apply(request: RequestHeader) = {
      if (request.path.startsWith("/admin") && request.session.get("user").isEmpty) {
        Iteratee.ignore[Array[Byte]].map(_ => Results.Forbidden())
      } else {
        next(request)
      }
    }
  }
}

You can see here that since we are intercepting the action before the body has been parsed, we still need to provide a body parser when we block the action. In this case we are returning a body parser that will simply ignore the whole body, mapping it to a result of forbidden.

Handling the body

In some cases, you might want to do something with the body in your filter. In some cases, you might want to parse the body. If this is the case, consider using action composition instead, because that makes it possible to hook into the action processing after the action has parsed the body. If you want to parse the body at the filter level, then you'll have to buffer it, parse it, and then stream it again for the action to parse again. However, there are some things that can easily be done at the filter level. One example is gzip decompression. Play Framework already provides gzip decompression out of the box, but if it didn't, this is what it might look like (using the gunzip enumeratee from my play extra iteratees project):

class GunzipFilter extends EssentialFilter {
  def apply(next: EssentialAction) = new EssentialAction {
    def apply(request: RequestHeader) = {
      if (request.headers.get("Content-Encoding").exists(_ == "gzip")) {
        Gzip.gunzip() &>> next(request)
      } else {
        next(request)
      }
    }
  }
}

Here, using iteratee composition, we are wrapping the body parser iteratee in a gunzip enumeratee.

Handling the response headers

When you're filtering you will often want to do something to the response that is being sent.
If you just want to add a header, or add something to the session, or do any write operation on the response without actually reading it, then this is quite simple. For example, let's say you wanted to add a custom header to every response:

class SosFilter extends EssentialFilter {
  def apply(next: EssentialAction) = new EssentialAction {
    def apply(request: RequestHeader) = {
      next(request).map(result =>
        result.withHeaders("X-Sos-Message" -> "I'm trapped inside Play Framework please send help"))
    }
  }
}

Using the map function on the iteratee that handles the body, we are given access to the result produced by the action, which we can then modify as demonstrated. If however you want to read the result, then you'll need to unwrap it. Play results are either AsyncResult or PlainResult. An AsyncResult is a Result that contains a Future[Result]. It has a transform method that allows the eventual PlainResult to be transformed. A PlainResult has a header and a body. So let's say you want to add a timestamp to every newly created session to record when it was created. This could be done like this:

class SessionTimestampFilter extends EssentialFilter {
  def apply(next: EssentialAction) = new EssentialAction {
    def apply(request: RequestHeader) = {

      def addTimestamp(result: PlainResult): Result = {
        val session = Session.decodeFromCookie(Cookies(result.header.headers.get(HeaderNames.COOKIE)).get(Session.COOKIE_NAME))
        if (!session.isEmpty) {
          result.withSession(session + ("timestamp" -> System.currentTimeMillis.toString))
        } else {
          result
        }
      }

      next(request).map {
        case plain: PlainResult => addTimestamp(plain)
        case async: AsyncResult => async.transform(addTimestamp)
      }
    }
  }
}

Handling the response body

The final thing you might want to do is transform the response body. PlainResult has two implementations: SimpleResult, for bodies with no transfer encoding, and ChunkedResult, for bodies with chunked transfer encoding.
SimpleResult contains an enumerator, and ChunkedResult contains a function that accepts an iteratee to write the result out to. An example of something you might want to do is implement a gzip filter. A very naive implementation (as in, do not use this; instead use my complete implementation from my play extra iteratees project) might look like this:

class GzipFilter extends EssentialFilter {
  def apply(next: EssentialAction) = new EssentialAction {
    def apply(request: RequestHeader) = {

      def gzipResult(result: PlainResult): Result = result match {
        case simple @ SimpleResult(header, content) => SimpleResult(header.copy(
          headers = (header.headers - "Content-Length") + ("Content-Encoding" -> "gzip")
        ), content &> Enumeratee.map(a => simple.writeable.transform(a)) &> Gzip.gzip())
      }

      next(request).map {
        case plain: PlainResult => gzipResult(plain)
        case async: AsyncResult => async.transform(gzipResult)
      }
    }
  }
}

Using the simpler API

Now you've seen how you can achieve everything using the base EssentialFilter API, and hopefully you therefore understand how filters fit into Play's architecture and how you can utilise them to achieve your requirements. Let's now have a look at the simpler API:

trait Filter extends EssentialFilter {
  def apply(f: RequestHeader => Result)(rh: RequestHeader): Result
  def apply(next: EssentialAction): EssentialAction = { ... }
}

object Filter {
  def apply(filter: (RequestHeader => Result, RequestHeader) => Result): Filter = new Filter {
    def apply(f: RequestHeader => Result)(rh: RequestHeader): Result = filter(f, rh)
  }
}

Simply put, this API allows you to write filters without having to worry about body parsers. It makes it look like actions are just functions of request headers to results. This limits the full power of what you can do with filters, but for many use cases you simply don't need that power, so this API provides a simple alternative.
To demonstrate, a noop filter class looks like this:

class NoopFilter extends Filter {
  def apply(f: (RequestHeader) => Result)(rh: RequestHeader) = {
    f(rh)
  }
}

Or, using the Filter companion object:

val noopFilter = Filter { (next, req) =>
  next(req)
}

And a request timing filter might look like this:

val timingFilter = Filter { (next, req) =>
  val start = System.currentTimeMillis

  def logTime(result: PlainResult): Result = {
    Logger.info("Request took " + (System.currentTimeMillis - start))
    result
  }

  next(req) match {
    case plain: PlainResult => logTime(plain)
    case async: AsyncResult => async.transform(logTime)
  }
}

  Reference: Understanding the Play Filter API from our JCG partner James Roper at the James and Beth Roper's blogs blog. ...
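The core idea running through this article — an action is a function, and a filter wraps one action to produce another — is just the decorator pattern over functions. A rough Java analogy (not the Play API; the types and names here are invented purely for illustration):

```java
import java.util.function.Function;

public class FilterAnalogy {

    // Stand-in for Play's EssentialAction: a request mapped to a result.
    interface Action extends Function<String, String> {}

    // A "filter": takes an action, returns a wrapped action.
    static Action headerFilter(Action next) {
        // Delegate to the wrapped action, then decorate its result.
        return request -> next.apply(request) + " [X-Sos-Message: help]";
    }

    static Action timingFilter(Action next) {
        return request -> {
            long start = System.nanoTime();
            String result = next.apply(request);
            System.out.println("Request took " + (System.nanoTime() - start) + "ns");
            return result;
        };
    }

    public static String handle(String request) {
        Action action = req -> "200 OK for " + req;
        // Filters compose, each wrapping the next -- like Global's filter chain.
        return timingFilter(headerFilter(action)).apply(request);
    }

    public static void main(String[] args) {
        System.out.println(handle("GET /"));
    }
}
```

The Scala versions in the article are more powerful because the "result" is an iteratee over the streamed body rather than a plain value, but the wrapping-and-delegating shape is identical.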

CPU Cache Flushing Fallacy

Even from highly experienced technologists I often hear talk about how certain operations cause a CPU cache to 'flush'. This seems to illustrate a very common fallacy about how CPU caches work, and how the cache sub-system interacts with the execution cores. In this article I will attempt to explain the function CPU caches fulfil, and how the cores, which execute our programs of instructions, interact with them. For a concrete example I will dive into one of the latest Intel x86 server CPUs. Other CPUs use similar techniques to achieve the same ends. Most modern systems that execute our programs are shared-memory multi-processor systems in design. A shared-memory system has a single memory resource that is accessed by 2 or more independent CPU cores. Latency to main memory is highly variable, from 10s to 100s of nanoseconds. Within 100ns it is possible for a 3.0GHz CPU to process up to 1200 instructions. Each Sandy Bridge core is capable of retiring up to 4 instructions-per-cycle (IPC) in parallel. CPUs employ cache sub-systems to hide this latency and allow them to exercise their huge capacity to process instructions. Some of these caches are small, very fast, and local to each core; others are slower, larger, and shared across cores. Together with registers and main-memory, these caches make up our non-persistent memory hierarchy. Next time you are developing an important algorithm, try pondering that a cache-miss is a lost opportunity to have executed ~500 CPU instructions! This is for a single-socket system; on a multi-socket system you can effectively double the lost opportunity as memory requests cross socket interconnects.

Memory Hierarchy

For the circa 2012 Sandy Bridge E class servers our memory hierarchy can be decomposed as follows:

Registers: Within each core are separate register files containing 160 entries for integers and 144 floating point numbers.
These registers are accessible within a single cycle and constitute the fastest memory available to our execution cores. Compilers will allocate our local variables and function arguments to these registers. When hyperthreading is enabled these registers are shared between the co-located hyperthreads.

Memory Ordering Buffers (MOB): The MOB is comprised of a 64-entry load and 36-entry store buffer. These buffers are used to track in-flight operations while waiting on the cache sub-system. The store buffer is a fully associative queue that can be searched for existing store operations, which have been queued when waiting on the L1 cache. These buffers enable our fast processors to run asynchronously while data is transferred to and from the cache sub-system. When the processor issues asynchronous reads and writes then the results can come back out-of-order. The MOB is used to disambiguate the load and store ordering for compliance to the published memory model.

Level 1 Cache: The L1 is a core-local cache split into separate 32K data and 32K instruction caches. Access time is 3 cycles and can be hidden as instructions are pipelined by the core for data already in the L1 cache.

Level 2 Cache: The L2 cache is a core-local cache designed to buffer access between the L1 and the shared L3 cache. The L2 cache is 256K in size and acts as an effective queue of memory accesses between the L1 and L3. L2 contains both data and instructions. L2 access latency is 12 cycles.

Level 3 Cache: The L3 cache is shared across all cores within a socket. The L3 is split into 2MB segments each connected to a ring-bus network on the socket. Each core is also connected to this ring-bus. Addresses are hashed to segments for greater throughput. Latency can be up to 38 cycles depending on cache size. Cache size can be up to 20MB depending on the number of segments, with each additional hop around the ring taking an additional cycle.
The L3 cache is inclusive of all data in the L1 and L2 for each core on the same socket. This inclusiveness, at the cost of space, allows the L3 cache to intercept requests, thus removing the burden from the private core-local L1 & L2 caches.

Main Memory: DRAM channels are connected to each socket with an average latency of ~65ns for socket-local access on a full cache-miss. This is however extremely variable, being much less for subsequent accesses to columns in the same row buffer, through to significantly more when queuing effects and memory refresh cycles conflict. 4 memory channels are aggregated together on each socket for throughput, and to hide latency via pipelining on the independent memory channels.

NUMA: In a multi-socket server we have non-uniform memory access. It is non-uniform because the required memory may be on a remote socket, incurring an additional ~40ns hop across the QPI bus. Sandy Bridge is a major step forward for 2-socket systems over Westmere and Nehalem. With Sandy Bridge the QPI limit has been raised from 6.4GT/s to 8.0GT/s, and two lanes can be aggregated, thus eliminating the bottleneck of the previous systems. For Nehalem and Westmere the QPI link is only capable of ~40% of the bandwidth that could be delivered by the memory controller for an individual socket. This limitation made accessing remote memory a choke point. In addition, the QPI link can now forward pre-fetch requests, which previous generations could not.

Associativity Levels

Caches are effectively hardware-based hash tables. The hash function is usually a simple masking of some low-order bits for cache indexing. Hash tables need some means to handle a collision for the same slot. The associativity level is the number of slots, also known as ways or sets, which can be used to hold a hashed version of an address. Having more levels of associativity is a trade-off between storing more data vs. power requirements and time to search each of the ways.
For Sandy Bridge the L1D and L2 are 8-way associative, and the L3 is 12-way associative.

Cache Coherence

With some caches being local to cores, we need a means of keeping them coherent so all cores can have a consistent view of memory. The cache sub-system is considered the 'source of truth' for mainstream systems. If memory is fetched from the cache it is never stale; the cache is the master copy when data exists in both the cache and main-memory. This style of memory management is known as write-back, whereby data in the cache is only written back to main-memory when the cache-line is evicted because a new line is taking its place. An x86 cache works on blocks of data that are 64-bytes in size, known as a cache-line. Other processors can use a different size for the cache-line. A larger cache-line size reduces effective latency at the expense of increased bandwidth requirements. To keep the caches coherent the cache controller tracks the state of each cache-line as being in one of a finite number of states. The protocol Intel employs for this is MESIF; AMD employs a variant known as MOESI. Under the MESIF protocol each cache-line can be in 1 of the 5 following states:

Modified: Indicates the cache-line is dirty and must be written back to memory at a later stage. When written back to main-memory the state transitions to Exclusive.

Exclusive: Indicates the cache-line is held exclusively and that it matches main-memory. When written to, the state then transitions to Modified. To achieve this state a Request-For-Ownership (RFO) message is sent which involves a read plus an invalidate broadcast to all other copies.

Shared: Indicates a clean copy of a cache-line that matches main-memory.

Invalid: Indicates an unused cache-line.

Forward: Indicates a specialised version of the shared state, i.e.
this is the designated cache that should respond to other caches in a NUMA system.

To transition from one state to another, a series of messages are sent between the caches to effect state changes. Prior to Nehalem for Intel, and Opteron for AMD, this cache-coherence traffic between sockets had to share the memory bus, which greatly limited scalability. These days the memory controller traffic is on a separate bus. The Intel QPI, and AMD HyperTransport, buses are used for cache coherence between sockets. The cache controller exists as a module within each L3 cache segment that is connected to the on-socket ring-bus network. Each core, L3 cache segment, QPI controller, memory controller, and integrated graphics sub-system are connected to this ring-bus. The ring is made up of 4 independent lanes for: request, snoop, acknowledge, and 32-bytes data per cycle. The L3 cache is inclusive in that any cache-line held in the L1 or L2 caches is also held in the L3. This provides for rapid identification of the core containing a modified line when snooping for changes. The cache controller for the L3 segment keeps track of which core could have a modified version of a cache-line it owns. If a core wants to read some memory, and it does not have it in a Shared, Exclusive, or Modified state, then it must make a read on the ring bus. It will then either be read from main-memory if not in the cache sub-systems, or read from L3 if clean, or snooped from another core if Modified. In any case the read will never return a stale copy from the cache sub-system; it is guaranteed to be coherent.

Concurrent Programming

If our caches are always coherent then why do we worry about visibility when writing concurrent programs? This is because within our cores, in their quest for ever greater performance, data modifications can appear out-of-order to other threads. There are 2 major reasons for this.
Firstly, our compilers can generate programs that store variables in registers for relatively long periods of time for performance reasons, e.g. variables used repeatedly within a loop. If we need these variables to be visible across cores then the updates must not be register allocated. This is achieved in C by qualifying a variable as 'volatile'. Beware that C/C++ volatile is inadequate for telling the compiler not to reorder other instructions. For this you need memory fences/barriers. The second major issue with ordering we have to be aware of is that a thread could write a variable and then, if it reads it shortly after, could see the value in its store buffer, which may be older than the latest value in the cache sub-system. This is never an issue for algorithms following the Single Writer Principle, but is an issue for the likes of the Dekker and Peterson lock algorithms. To overcome this issue, and ensure the latest value is observed, the thread must not load the value from its local store buffer. This can be achieved by issuing a fence instruction which prevents the subsequent load getting ahead of a store from another thread. The write of a volatile variable in Java, in addition to never being register allocated, is accompanied by a full fence instruction. This fence instruction on x86 has a significant performance impact by preventing progress on the issuing thread until the store buffer is drained. Fences on other processors can have more efficient implementations that simply put a marker in the store buffer for the search boundary, e.g. the Azul Vega does this. If you want to ensure memory ordering across Java threads when following the Single Writer Principle, and avoid the store fence, it is possible by using the j.u.c.Atomic(Int|Long|Reference).lazySet() method, as opposed to setting a volatile variable.

The Fallacy

Returning to the fallacy of 'flushing the cache' as part of a concurrent algorithm.
I think we can safely say that we never ‘flush’ the CPU cache within our user space programs. I believe the source of this fallacy is the need to flush, mark or drain to a point, the store buffer for some classes of concurrent algorithms so the latest value can be observed on a subsequent load operation. For this we require a memory ordering fence and not a cache flush. Another possible source of this fallacy is that L1 caches, or the TLB, may need to be flushed based on address indexing policy on a context switch. ARM, previous to ARMv6, did not use address space tags on TLB entries thus requiring the whole L1 cache to be flushed on a context switch. Many processors require the L1 instruction cache to be flushed for similar reasons, in many cases this is simply because instruction caches are not required to be kept coherent. The bottom line is, context switching is expensive and a bit off topic, so in addition to the cache pollution of the L2, a context switch can also cause the TLB and/or L1 caches to require a flush. Intel x86 processors require only a TLB flush on context switch.   Reference: CPU Cache Flushing Fallacy from our JCG partner Martin Thompson at the Mechanical Sympathy blog. ...
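To tie the Single Writer Principle and lazySet() back to code, the sketch below (a minimal illustration, not a benchmark) shows a single writer publishing a sequence with AtomicLong.lazySet() — an ordered store that avoids the store-buffer-draining StoreLoad fence a volatile write would issue:

```java
import java.util.concurrent.atomic.AtomicLong;

public class SingleWriterSequence {

    private final AtomicLong sequence = new AtomicLong(0);

    // Single writer only: lazySet is an ordered store with no StoreLoad fence,
    // so the writer is not stalled waiting for its store buffer to drain.
    public void publish(long next) {
        sequence.lazySet(next);
    }

    // Readers perform a volatile read and observe published values in order.
    public long current() {
        return sequence.get();
    }

    public static long demo() {
        SingleWriterSequence seq = new SingleWriterSequence();
        Thread writer = new Thread(() -> {
            for (long i = 1; i <= 100_000; i++) {
                seq.publish(i);
            }
        });
        try {
            writer.start();
            writer.join(); // join() gives happens-before for the final read below
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return seq.current();
    }
}
```

The technique only holds when exactly one thread ever calls publish(); with multiple writers you are back to needing the stronger ordering of a volatile set or compareAndSet.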

Android SDK New Build System

A post ago I mentioned that Google is in the process of developing a new build system for application developers using Gradle, which is in fact Groovy based. Since they are only at version 0.2 right now, that may be a slight wait, as they still have IDE ADT integration, RenderScript support, NDK support, ProGuard support, Lint support, Emma support, and JUnit report.xml and HTML generation to finish. But one should still start getting an idea of how it works, so some resources: New Build System Concepts, Using the New Build System, Gradle User Guide for 1.2 (there is also a free book if you register; the link is on the front page of the site). For those who want to play with it now, the current alpha release is in the Maven repo under android tools, and its current version is 0.2. It is not the gradle-android-plugin at GitHub: Maven repo new Build System, Gradle Android Plugin. So when will we see it in the ADT plugin or full SDK? Not sure; it would be a nice holiday present if we get it before the last major holiday of this year.   Reference: Android SDK New Build System from our JCG partner Fred Grott at the GrottWorkShop Blog blog. ...
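For the curious, wiring up the alpha plugin amounts to a few lines of build.gradle. This is a sketch based on the early user guide, so the coordinates and DSL shown here are assumptions that may well change before a stable release:

```groovy
// Pull the alpha of the new Android plugin into the build script's classpath.
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:0.2'
    }
}

// The 'android' plugin replaces the old Ant-based build.
apply plugin: 'android'

android {
    // Platform to compile against (hypothetical value).
    target = 'android-16'
}
```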
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.